
April 14, 2022

From PyTorch to Libtorch: tips and tricks


Deep learning practitioners hone their skills using PyTorch and Python as their tools of choice. For that reason, on-line courses, blog posts, tutorials, etc. introducing PyTorch to new users abound on the internet. That is not the case for Libtorch, PyTorch’s C++ API, which lags behind in terms of user base size despite the fact that it has been (almost?) at parity with Python since April 2020. As a consequence, when searching for examples of Libtorch usage, one regularly ends up getting search results with links to raw source code. The purpose of this post is to give readers some hints on what to pay attention to when using Libtorch and where to look for support in addition to the official documentation.

Resources

Quite naturally, the main source of documentation about Libtorch is its official documentation, which includes not only a description of the API itself but also installation procedures, code snippets showing how to interact with the main classes and objects, etc. The API description of member functions is sometimes minimal; in such cases, the reader is referred to their Python counterpart. Apart from that, a few blog posts are of interest:

  • Chintapalli has written a gentle, step-by-step introduction to Libtorch.
  • In July 2020, Garry’s blog published a three-part series giving a broad overview of Libtorch: the rationale behind its use, how to use PyTorch-trained models, how to convert to and from OpenCV’s cv::Mat, as well as more advanced topics. Certainly a must-read.
  • A year earlier, krshrimali had also published a significant number of posts on Libtorch: some are beginner-friendly, while others address advanced subjects such as GANs.

In terms of source code, the examples provided with PyTorch should definitely be the starting point for anyone willing to gain experience with Libtorch. Another stop is Prabhu’s GitHub repository, which contains a lot of useful material in the form of tutorial code. Finally, some of krshrimali’s posts are accompanied by GitHub repositories that are also worth a visit.

PyTorch-to-Libtorch conversion trick

Converting PyTorch source code to a Libtorch equivalent is fairly easy thanks to the similarity between the two APIs. This type of operation may be necessary when one finds a particularly appealing architecture available in Python that has no C++ implementation. (That was the motivation behind an open-source contribution described in a companion post.) Here are some miscellaneous tricks that may facilitate this conversion effort:

Modules

PyTorch modules (of type torch.nn.Module) have their C++ type counterpart: torch::nn::Module. In practice, C++ modules are manipulated through module holders (e.g. torch::nn::Linear), and internal access to e.g. the module’s weights is possible via the holder’s get() method:

this->get()->weight

The get() method is defined on torch::nn::ModuleHolder (in torch/nn/pimpl.h) and is implemented essentially as follows:
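/// Returns a pointer to the underlying module.
ModuleType* get() {
  AT_ASSERT(!is_empty());
  return impl_.get();
}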

Array type conversion

Libtorch has its own containers to manipulate tensors and simpler types. For example, at::TensorList is an alias for c10::ArrayRef<at::Tensor>, and c10::IntArrayRef is an alias for c10::ArrayRef<int64_t>. c10::ArrayRef is a lightweight, non-owning container with its own set of member functions, but in case a conversion to std::vector is needed, the member function vec() as well as one of ArrayRef’s constructors come in handy, as in the following sketch:
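// Minimal sketch: round-tripping between std::vector<int64_t> and
// c10::IntArrayRef (an alias for c10::ArrayRef<int64_t>).
#include <torch/torch.h>
#include <iostream>
#include <vector>

int main() {
  std::vector<int64_t> sizes_vec{2, 3};
  c10::IntArrayRef sizes(sizes_vec);  // ArrayRef constructor: wraps the vector without copying

  auto t = torch::zeros(sizes);
  std::cout << t.sizes() << std::endl;

  // vec() copies the referenced elements back into a std::vector.
  std::vector<int64_t> back = t.sizes().vec();
  std::cout << back[0] << " " << back[1] << std::endl;
  return 0;
}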

with the output:
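[2, 3]
2 3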

TORCH_ARG

Option structures are often created using the TORCH_ARG macro. Here is a hypothetical options struct following that pattern (the field names are illustrative; real examples such as torch::nn::ConvOptions can be found in the LibTorch sources):
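#include <torch/torch.h>

struct MyModuleOptions {
  MyModuleOptions(int64_t size) : size_(size) {}
  // Each TORCH_ARG(T, name) generates a private member name_, a getter
  // name(), and a chainable setter name(value) that returns *this.
  TORCH_ARG(int64_t, size);
  TORCH_ARG(bool, bias) = true;
  TORCH_ARG(double, dropout) = 0.0;
};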

This approach simplifies instantiation/initialization of an option object via “chaining”. In the following example, the constructor for torch::nn::Conv2dOptions receives three parameters (the most common ones, i.e. the number of input/output channels and the kernel size), and chaining allows the developer to specify additional parameters in an elegant way, close to Python’s keyword arguments:
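auto options = torch::nn::Conv2dOptions(/*in_channels=*/3, /*out_channels=*/64, /*kernel_size=*/3)
                   .stride(2)
                   .padding(1)
                   .bias(false);
torch::nn::Conv2d conv(options);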

Dimname

Many tensor-manipulating functions work with a dimension parameter. For example, a torch::Tensor object has this min() member function to find its minimum values (and their indices) along a specified dimension:
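std::tuple<torch::Tensor, torch::Tensor> min(int64_t dim, bool keepdim = false) const;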

But these types of functions also support named dimensions, which is why an overload of min() has this signature:
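std::tuple<torch::Tensor, torch::Tensor> min(at::Dimname dim, bool keepdim = false) const;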

That is an experimental feature, so documentation about it is very scarce. The following sketch, where dimension 0 is named “columnwise” and dimension 1 “rowwise”, illustrates the idea (originally tested against Libtorch 1.8):
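#include <torch/torch.h>
#include <iostream>
#include <tuple>

int main() {
  // Dimnames are created from interned symbols.
  auto columnwise = at::Dimname::fromSymbol(at::Symbol::dimname("columnwise"));
  auto rowwise = at::Dimname::fromSymbol(at::Symbol::dimname("rowwise"));

  // Attach the names to an otherwise unnamed tensor.
  auto t = torch::rand({3, 4}).refine_names({columnwise, rowwise});

  torch::Tensor values, indices;
  std::tie(values, indices) = t.min(/*dim=*/0);
  std::cout << values << std::endl;

  // Same reduction, referring to the dimension by name this time.
  std::tie(values, indices) = t.min(columnwise);
  std::cout << values << std::endl;
  return 0;
}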

Both calls produce the same values and indices, whether the dimension is specified as 0 or by its “columnwise” name.

InputArchive/OutputArchive

Examples of how to load and save models can be found on the internet. A typical round trip uses the torch::save() and torch::load() convenience functions, which rely on OutputArchive and InputArchive under the hood:
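#include <torch/torch.h>

int main() {
  torch::nn::Linear model(3, 4);
  torch::save(model, "model.pt");  // serializes the module's parameters and buffers
  torch::load(model, "model.pt");  // restores them into an existing instance
  return 0;
}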

Note the existence of overloaded signatures for OutputArchive::save_to() and InputArchive::load_from() (as declared in recent LibTorch headers):
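// torch::serialize::OutputArchive
void save_to(const std::string& filename);
void save_to(std::ostream& stream);
void save_to(const std::function<size_t(const void*, size_t)>& func);

// torch::serialize::InputArchive
void load_from(const std::string& filename,
               c10::optional<torch::Device> device = c10::nullopt);
void load_from(std::istream& stream,
               c10::optional<torch::Device> device = c10::nullopt);
void load_from(const char* data, size_t size,
               c10::optional<torch::Device> device = c10::nullopt);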

This gives more options on how to perform model serialization/deserialization, e.g. allowing the storage of a model in a zip file along with additional metadata, or encrypting/decrypting models.

The AnyModule type

The torch::nn::AnyModule class offers a unified, type-erased interface for holding any module that derives from torch::nn::Module. This is convenient when an application has to deal with different model implementations instantiated and manipulated at runtime, especially when the selected module is not known in advance: easy to handle in Python, but not so easy in C++.

The only requirement on the wrapped module is that it implement a forward() method, which may have any input and output types. Contrary to traditional class inheritance, where specific input and output types would be enforced on every derived class, AnyModule permits any forward() signature thanks to type erasure. This provides flexibility when writing code, since the lack of compile-time type checking on inputs and outputs allows model inference to be executed with generic arguments.

Below is an example where AnyModule is useful: the same variable module can be assigned different implementations that accept entirely different inputs and outputs.

#include <torch/torch.h>

// A minimal custom module: AnyModule only requires that forward() exists.
struct AdditionModule : torch::nn::Module {
  torch::Tensor forward(torch::Tensor x) { return x + x; }
};

enum ModuleType { addition, linear, conv2d };

int main() {
  ModuleType module_type = addition;
  torch::nn::AnyModule module;
  switch (module_type) {
    case addition: module = torch::nn::AnyModule(AdditionModule{}); break;
    case linear: module = torch::nn::AnyModule(torch::nn::Linear(3, 4)); break;
    case conv2d: module = torch::nn::AnyModule(torch::nn::Conv2d(torch::nn::Conv2dOptions(3, 4, 2))); break;
  }
  auto input = torch::ones({2, 3});
  auto output = module.forward(input);  // returns torch::Tensor by default
  return 0;
}

Functions that perform operations on arbitrary models can then take arguments typed as AnyModule, making it possible to implement generic training-pipeline utilities. One obvious example is the train() function typically found in Python implementations, which takes a net or model as a parameter and trains it using the provided data loaders and optimizer:

def train(model: torch.nn.Module,
          optimizer: torch.optim.Optimizer,
          train_dl: torch.utils.data.DataLoader,
          val_dl: torch.utils.data.DataLoader,
          epochs: int = 100,
          device: torch.device = torch.device('cpu'),
         ):

This function would be translated into the following in C++ using the versatile AnyModule type:

template <typename DataLoader>
void train(torch::nn::AnyModule model,
           std::shared_ptr<torch::optim::Optimizer> optimizer,
           DataLoader& train_dl,
           DataLoader& val_dl,
           unsigned int epochs = 100,
           torch::Device device = torch::Device(torch::kCPU));

Note: DataLoader is a template parameter here; the concrete data loader type is produced by a factory function such as torch::data::make_data_loader.
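For illustration, here is a minimal sketch of how such a train() function could be invoked (the MNIST dataset path, the model dimensions, and the learning rate are illustrative placeholders):

auto model = torch::nn::AnyModule(torch::nn::Linear(784, 10));
auto optimizer = std::make_shared<torch::optim::SGD>(
    model.ptr()->parameters(), torch::optim::SGDOptions(0.01));

auto train_dl = torch::data::make_data_loader(
    torch::data::datasets::MNIST("./data").map(torch::data::transforms::Stack<>()),
    /*batch_size=*/64);
auto val_dl = torch::data::make_data_loader(
    torch::data::datasets::MNIST("./data", torch::data::datasets::MNIST::Mode::kTest)
        .map(torch::data::transforms::Stack<>()),
    /*batch_size=*/64);

// make_data_loader returns a std::unique_ptr, hence the dereferences.
train(model, optimizer, *train_dl, *val_dl);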

Conclusion

Using Libtorch for the first time can be intimidating in several respects: detailed documentation is limited, example code is hard to find, etc. The purpose of this post is to provide new users with pointers to some key resources to browse and explore: the more online resources there are about Libtorch, the easier it will be to actually use it.
