{"id":17998,"date":"2022-02-25T13:47:17","date_gmt":"2022-02-25T18:47:17","guid":{"rendered":"https:\/\/www.crim.ca\/blogue\/contributing-to-libtorch-recent-architectures-and-vanilla-training-pipeline\/"},"modified":"2023-06-14T10:03:39","modified_gmt":"2023-06-14T14:03:39","slug":"contributing-to-libtorch-recent-architectures-and-vanilla-training-pipeline","status":"publish","type":"blogue","link":"https:\/\/www.crim.ca\/en\/blogue\/contributing-to-libtorch-recent-architectures-and-vanilla-training-pipeline\/","title":{"rendered":"Contributing to LibTorch: recent architectures and \u201cvanilla\u201d training pipeline"},"content":{"rendered":"<p id=\"e70b\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">In August 2021, a\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/pytorch\/vision\/pull\/4293\" target=\"_blank\" rel=\"noopener ugc nofollow\">PR<\/a>\u00a0aimed at adding a SOTA architecture (namely EfficientNet) to TorchVision, a Python-based PyTorch package for computer vision experiments, was submitted on GitHub. Even though deep learning practitioners are used to testing new architectures that are regularly posted on this platform, this is certainly a welcome contribution. On the other hand, C++ contributions to PyTorch (or more precisely to LibTorch, which is PyTorch\u2019s C++ API) and TorchVision from the official maintainers are limited, particularly for independent contributors, so the need for new contributions is even greater.<\/p>\n<p id=\"304e\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">This post describes the C++ implementation of a pair of recent architectures, EfficientNet and NFNet, as well as a testing tool that uses these architectures. 
The whole package is\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\" target=\"_blank\" rel=\"noopener ugc nofollow\">available on GitHub<\/a>.<\/p>\n<p id=\"e6de\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">The proposed package represents a wider pedagogical contribution aimed at providing a concrete example of development of a training pipeline using LibTorch.<\/p>\n<h2 id=\"08cd\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Implementing EfficientNet<\/h2>\n<p id=\"e21b\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">In 2019, researchers from Google Brain used neural architecture search to design a new baseline network called B0, whose scalable properties (width, depth, image resolution) allow the creation of a family of models (from B1 through B7) called EfficientNets. They define a compound scaling coefficient (denoted \u03c6 in the paper) that helps determine the depth and input resolution of the most appropriate architecture with respect to available resources, while also setting the width (number of filters) of the convolutional layers making up each stage.<\/p>\n<p id=\"db75\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">The architecture consists of 9 stages, each of which is made of 1 to 4 layers (thick blue rectangles in the Figure below), where a layer is either a convolutional layer or a series of \u201cmobile inverted bottleneck blocks\u201d (MBConv blocks). The term MBConv6 means that six of those blocks make up that particular layer. 
Note the stride error flagged by the red rectangle (28x28x80 should be 14x14x80, see\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/lukemelas\/EfficientNet-PyTorch\/issues\/13\" target=\"_blank\" rel=\"noopener ugc nofollow\">https:\/\/github.com\/lukemelas\/EfficientNet-PyTorch\/issues\/13<\/a>).<\/p>\n<p data-selectable-paragraph=\"\"><img fetchpriority=\"high\" decoding=\"async\" class=\"alignnone wp-image-16678\" src=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/0_yeXQh_ClnNMoaFm6-300x159.png\" alt=\"\" width=\"728\" height=\"386\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/0_yeXQh_ClnNMoaFm6-300x159.png 300w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/0_yeXQh_ClnNMoaFm6-1024x543.png 1024w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/0_yeXQh_ClnNMoaFm6-768x407.png 768w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/0_yeXQh_ClnNMoaFm6.png 1400w\" sizes=\"(max-width: 728px) 100vw, 728px\" \/><\/p>\n<p data-selectable-paragraph=\"\">This C++ Implementation of the architecture with LibTorch closely follows Lukemelas\u2019\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/lukemelas\/EfficientNet-PyTorch\" target=\"_blank\" rel=\"noopener ugc nofollow\">PyTorch implementation<\/a>. 
At the time of writing, the swish() activation function wasn\u2019t available in LibTorch, so a basic implementation is provided in the source code.<\/p>\n<h2 id=\"35fc\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Implementing NFNet and Gradient Clipping<\/h2>\n<p id=\"7399\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">The NFNet family of architectures has attracted\u00a0<a class=\"au lr\" href=\"https:\/\/towardsdatascience.com\/nfnets-explained-deepminds-new-state-of-the-art-image-classifier-10430c8599ee\" target=\"_blank\" rel=\"noopener\">a lot of attention<\/a>\u00a0recently due to its SOTA performance on the ImageNet dataset while avoiding the use of batch normalization. To do so, the authors draw not only on their previous work on Normalizer-Free ResNets, but they also make a number of contributions in order to stabilize and optimize the new architecture:<\/p>\n<ul class=\"\">\n<li id=\"c91b\" class=\"na nb jo ku b kv kw kz la ld nc lh nd ll ne lp nf ng nh ni gi\" data-selectable-paragraph=\"\">Normalization of the weights of the convolutional layers<\/li>\n<li id=\"dbf6\" class=\"na nb jo ku b kv nj kz nk ld nl lh nm ll nn lp nf ng nh ni gi\" data-selectable-paragraph=\"\">Use of SE-ResNeXt-D as a baseline and improvements to its width\/depth patterns<\/li>\n<li id=\"3880\" class=\"na nb jo ku b kv nj kz nk ld nl lh nm ll nn lp nf ng nh ni gi\" data-selectable-paragraph=\"\">Adoption of a scaling strategy to adapt the baseline to different compute budgets<\/li>\n<li id=\"6b90\" class=\"na nb jo ku b kv nj kz nk ld nl lh nm ll nn lp nf ng nh ni gi\" data-selectable-paragraph=\"\">Use of a gradient clipping strategy to stabilize training when large batch sizes are used<\/li>\n<\/ul>\n<p id=\"6fda\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" 
data-selectable-paragraph=\"\">Again, the proposed C++ implementation of NFNet and its variants is a \u2018translation\u2019 of a PyTorch implementation\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/vballoli\/nfnets-pytorch\" target=\"_blank\" rel=\"noopener ugc nofollow\">available on GitHub<\/a>. To provide the new optimizer with the above-mentioned gradient clipping strategy, however, the C++ source code of the SGD optimizer was cloned and modified directly rather than translated from Python.<\/p>\n<h2 id=\"c657\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Training Pipeline<\/h2>\n<p id=\"06d8\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">The code base features a fairly complete training loop that includes a\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/training.h#L120-L175\" target=\"_blank\" rel=\"noopener ugc nofollow\">data loader<\/a>\u00a0and\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/tree\/master\/source\/data\" target=\"_blank\" rel=\"noopener ugc nofollow\">data augmentation functions<\/a>. The training loop has been put to the test with the ImageNet dataset. 
Whether the proposed source code is generic enough for all types of image classification projects remains to be seen; yet it is another example of how LibTorch objects and components can be used in a practical application (beyond toy examples found on the Internet).<\/p>\n<h2 id=\"6d4b\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Training Loop<\/h2>\n<p id=\"b2d7\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">The main training loop is defined\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/training.h#L222-L306\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>. When using LibTorch, it is important to take into account that many functions are defined using templates, as presented below.<\/p>\n<p id=\"01a0\" data-selectable-paragraph=\"\"><img decoding=\"async\" class=\"alignnone wp-image-16680\" src=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.44-300x117.png\" alt=\"\" width=\"544\" height=\"212\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.44-300x117.png 300w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.44.png 703w\" sizes=\"(max-width: 544px) 100vw, 544px\" \/><\/p>\n<p id=\"b363\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">For instance, the reference sample training loop takes a generic DataLoader, which will be compiled with the specific data loader and sampler implementations that were specified to execute training, as detailed\u00a0<a class=\"au lr\" 
href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/TestBench.cpp#L566-L577\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>.<\/p>\n<p id=\"8f84\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">Using a custom DataAugmentationDataset implementation (located\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/training.h#L131-L175\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>), the complete instantiation of the data loader employed by the templated training loop can be summarized with the code snippet below. In that particular example, a random sampler is employed to pick images and labels from the dataset of samples that get augmented by the selected DataLoader. Due to the use of the templating approach, it is easy to modify only the \u201cdata loading\u201d portion of the code if one wants to explore another approach, without affecting the training loop itself.<\/p>\n<p data-selectable-paragraph=\"\"><img decoding=\"async\" class=\"alignnone wp-image-16682\" src=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.59-269x300.png\" alt=\"\" width=\"530\" height=\"591\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.59-269x300.png 269w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.39.59.png 703w\" sizes=\"(max-width: 530px) 100vw, 530px\" \/><\/p>\n<p id=\"e0b8\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">This makes the training loop versatile for any implementation of the data loading procedure, either adapted for different dataset annotation formats or loading strategies, as 
long as templating dependencies are respected. Class interfaces are also employed as function inputs to the training loop (see torch::nn::AnyModule and torch::optim::Optimizer of the first code snippet) to allow switching between different model and optimizer implementations.<\/p>\n<p id=\"ce2c\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">The training loop itself contains a top-level iteration over the requested number of epochs, within which the whole training and validation sets are traversed by nested loops over batches of samples. Within the training set loop, each batch goes through a forward pass to produce predictions, and gradients are backpropagated from the loss computed against the label annotations. In the proposed implementation, NLL Loss is employed in combination with Softmax, although many more loss functions are available since LibTorch 1.4.0. Model weights are updated using the computed gradients based on this loss and the configured optimizer step, which applies the learning rate and other hyperparameters according to its algorithm. 
This process is presented in the following code snippet.<\/p>\n<p class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16684\" src=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.11-300x296.png\" alt=\"\" width=\"525\" height=\"518\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.11-300x296.png 300w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.11.png 704w\" sizes=\"(max-width: 525px) 100vw, 525px\" \/><\/p>\n<p id=\"d99d\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">The validation batch loop does essentially the same forward pass to obtain model predictions, but counts the number of correct and incorrect predictions against labels instead of updating weights. Following each epoch iteration, the model weights are saved in a checkpoint file, and the best model is updated when accuracy improves over previous epochs.<\/p>\n<h2 id=\"4fe5\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Data Augmentation<\/h2>\n<p id=\"6620\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">The data augmentation code is a fork of a\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/takmin\/DataAugmentation\" target=\"_blank\" rel=\"noopener ugc nofollow\">GitHub project<\/a>\u00a0that has been directly integrated in the source code. It includes basic data augmentation operations such as image resizing, rotations, reflections, translations, noise injection, etc. with randomness. 
All these operations help generate training data with small variations. This helps the model to generalize and be more robust against small changes within images, in order for inference to produce consistent predictions when presented with new images that can contain similar variations. It also reduces the chances of overfitting.<\/p>\n<p id=\"8152\" class=\"pw-post-body-paragraph ks kt jo ku b kv kw kx ky kz la lb lc ld le lf lg lh li lj lk ll lm ln lo lp iw gi\" data-selectable-paragraph=\"\">Data augmentation is applied inline when loading samples within the training loop, using the\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/training.h#L160-L170\" target=\"_blank\" rel=\"noopener ugc nofollow\">data loader iterator<\/a>.<\/p>\n<figure class=\"mw mx my mz hf jf\"><\/figure>\n<p id=\"88ae\" data-selectable-paragraph=\"\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-16686\" src=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.20-300x118.png\" alt=\"\" width=\"534\" height=\"210\" srcset=\"https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.20-300x118.png 300w, https:\/\/www.crim.ca\/wp-content\/uploads\/2022\/05\/Capture-de\u0301cran-le-2022-05-09-a\u0300-13.40.20.png 701w\" sizes=\"(max-width: 534px) 100vw, 534px\" \/><\/p>\n<p data-selectable-paragraph=\"\">This iterator retrieves each sample image and its associated label by loading it from file, augmenting it randomly and returning it following normalization using the function <a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/training.cpp#L37-L73\" target=\"_blank\" rel=\"noopener ugc nofollow\">here<\/a>. 
Note that the proposed implementation uses normalization mean and standard deviation from ImageNet specifically, and should be adjusted accordingly when using other datasets.<\/p>\n<div class=\"o dz\">\n<div class=\"eo cf fi fj fk fl fm fn fo fp fq\">\n<article>\n<div class=\"l\">\n<div class=\"l\">\n<section>\n<div class=\"iw ix iy iz ja\">\n<h2 id=\"0acc\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\">Building and Testing the Pipeline<\/h2>\n<p id=\"940e\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">Because the main training loop offers flexibility over the selected model, optimizer and data loader, this C++ implementation uses the\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/CLIUtils\/CLI11\" target=\"_blank\" rel=\"noopener ugc nofollow\">CLI11<\/a>\u00a0library to create a command line interface that eases this parametrization as a utility that can be called from the shell. The\u00a0<a class=\"au lr\" href=\"https:\/\/github.com\/crim-ca\/crim-libtorch-extensions\/blob\/0.6.0\/TestBench\/TestBench.cpp#L106\" target=\"_blank\" rel=\"noopener ugc nofollow\">TestBench main function<\/a>\u00a0allows the user to specify many options, which are used to instantiate the relevant models, optimizers, data loader paths and other configuration parameters. This is where the most important distinction from most toy examples found online can be observed. The code does not make use of hardcoded model, optimizer or hyperparameter definitions. Instead, the implementation presents how abstract classes and interfaces can be employed to develop a flexible and extendable test utility. 
As a result, users can extend the testing pipeline as needed by adding new model or optimizer implementations and rapidly training them without having to directly adjust the training loop.<\/p>\n<h1 id=\"88d2\" class=\"ls lt jo bn lu lv lw lx ly lz ma mb mc md me mf mg mh mi mj mk ml mm mn mo mp gi\" data-selectable-paragraph=\"\">Conclusion<\/h1>\n<p id=\"5c10\" class=\"pw-post-body-paragraph ks kt jo ku b kv mq kx ky kz mr lb lc ld ms lf lg lh mt lj lk ll mu ln lo lp iw gi\" data-selectable-paragraph=\"\">This post described an effort to contribute to LibTorch, including implementation of new architectures and a code base that makes use of this library. The main goal was to show how to instantiate these architectures and use them in a fairly feature-complete training loop. Hopefully, developers faced with the challenge of implementing deep learning training loops in C++ will find this code useful.<\/p>\n<\/div>\n<\/section>\n<\/div>\n<\/div>\n<\/article>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>In August 2021, a\u00a0PR\u00a0aimed at adding a SOTA architecture (namely EfficientNet) to TorchVision, a Python-based PyTorch package for computer vision experiments, was submitted on GitHub. Even though deep learning practitioners are used to testing new architectures that are regularly posted on this platform, this is certainly a welcome contribution. 
On the other hand, C++ contributions [&hellip;]<\/p>\n","protected":false},"author":409,"featured_media":16689,"menu_order":0,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":"","_links_to":"","_links_to_target":""},"mots_cles":[450,446,447],"categorie_blogue":[449],"class_list":["post-17998","blogue","type-blogue","status-publish","format-standard","has-post-thumbnail","hentry","mots_cles-computer-vision-en","mots_cles-deep-learning-en","mots_cles-libtorch-en","categorie_blogue-artificial-intelligence"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/blogue\/17998","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/blogue"}],"about":[{"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/types\/blogue"}],"author":[{"embeddable":true,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/users\/409"}],"version-history":[{"count":2,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/blogue\/17998\/revisions"}],"predecessor-version":[{"id":19985,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/blogue\/17998\/revisions\/19985"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/media\/16689"}],"wp:attachment":[{"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/media?parent=17998"}],"wp:term":[{"taxonomy":"mots_cles","embeddable":true,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/mots_cles?post=17998"},{"taxonomy":"categorie_blogue","embeddable":true,"href":"https:\/\/www.crim.ca\/en\/wp-json\/wp\/v2\/categorie_blogue?post=17998"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}