PyTorch Recipes

Recipes are bite-sized, actionable examples of how to use specific PyTorch features, different from our full-length tutorials.


Loading data in PyTorch

Learn how to use PyTorch packages to prepare and load common datasets for your model.

Basics
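As a minimal sketch of the data-loading workflow this recipe covers: `Dataset` and `DataLoader` from `torch.utils.data` handle batching and shuffling. The synthetic tensors below stand in for a real dataset such as MNIST.

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Synthetic stand-in for a real dataset: 100 samples of 8 features each.
features = torch.randn(100, 8)
labels = torch.randint(0, 2, (100,))
dataset = TensorDataset(features, labels)

# DataLoader handles batching, shuffling, and (optionally) parallel loading.
loader = DataLoader(dataset, batch_size=16, shuffle=True)

first_x, first_y = next(iter(loader))
```

The same pattern applies unchanged to the built-in datasets in `torchvision`, `torchtext`, and `torchaudio`.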

Defining a Neural Network

Learn how to use PyTorch's torch.nn package to create and define a neural network for the MNIST dataset.

Basics
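A minimal sketch of the kind of network this recipe builds: subclass `nn.Module`, declare layers in `__init__`, and implement `forward`. The layer sizes here are illustrative, chosen for 28x28 MNIST inputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small fully connected network for 28x28 MNIST digits (illustrative sizes)."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(28 * 28, 128)
        self.fc2 = nn.Linear(128, 10)   # 10 digit classes

    def forward(self, x):
        x = x.view(x.size(0), -1)       # flatten each image to a 784-vector
        x = F.relu(self.fc1(x))
        return self.fc2(x)

net = Net()
out = net(torch.randn(4, 1, 28, 28))    # batch of 4 fake images
```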

What is a state_dict in PyTorch

Learn how state_dict objects and Python dictionaries are used in saving or loading models from PyTorch.

Basics
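To illustrate the idea: a `state_dict` is an ordered Python dictionary mapping each parameter (and buffer) name to its tensor. A tiny model makes this easy to inspect.

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 2)          # tiny illustrative model
sd = model.state_dict()          # an OrderedDict of parameter name -> tensor

# Keys mirror the module hierarchy; values are the learnable tensors.
names = list(sd.keys())
```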

Saving and loading models for inference in PyTorch

Learn about the two approaches for saving and loading models for inference in PyTorch - via the state_dict and via the entire model.

Basics
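A sketch of the recommended approach, saving the `state_dict` rather than the entire model object (an in-memory buffer stands in for a file path here):

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(4, 2)

# Recommended approach: save only the state_dict, not the whole model object.
buf = io.BytesIO()                     # stands in for a file path
torch.save(model.state_dict(), buf)

buf.seek(0)
restored = nn.Linear(4, 2)             # re-create the architecture first
restored.load_state_dict(torch.load(buf))
restored.eval()                        # set dropout/batchnorm layers to eval mode
```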

Saving and loading a general checkpoint in PyTorch

Saving and loading a general checkpoint model for inference or resuming training can be helpful for picking up where you last left off. In this recipe, explore how to save and load multiple checkpoints.

Basics
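The usual pattern is to bundle everything needed to resume training — model weights, optimizer state, and the epoch counter — into one dictionary. A minimal sketch:

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# A checkpoint bundles everything needed to resume training.
checkpoint = {
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
buf = io.BytesIO()                     # stands in for a checkpoint file
torch.save(checkpoint, buf)

buf.seek(0)
loaded = torch.load(buf)
model.load_state_dict(loaded["model_state_dict"])
optimizer.load_state_dict(loaded["optimizer_state_dict"])
start_epoch = loaded["epoch"] + 1      # pick up where training left off
```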

Saving and loading multiple models in one file using PyTorch

In this recipe, learn how saving and loading multiple models can be helpful for reusing models that you have previously trained.

Basics
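The technique boils down to saving several `state_dict`s under distinct keys in one dictionary. A sketch with two illustrative models:

```python
import io
import torch
import torch.nn as nn

encoder = nn.Linear(8, 4)
decoder = nn.Linear(4, 8)

# Save several state_dicts under distinct keys in a single file.
buf = io.BytesIO()                     # stands in for a file path
torch.save({"encoder": encoder.state_dict(),
            "decoder": decoder.state_dict()}, buf)

buf.seek(0)
ckpt = torch.load(buf)
new_encoder = nn.Linear(8, 4)
new_encoder.load_state_dict(ckpt["encoder"])
```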

Warmstarting model using parameters from a different model in PyTorch

Learn how warmstarting the training process by partially loading a model or loading a partial model can help your model converge much faster than training from scratch.

Basics
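The key mechanism is `load_state_dict(..., strict=False)`, which silently skips missing and unexpected keys so only the overlapping layers are warmstarted. A sketch with two hypothetical architectures that share one layer:

```python
import torch
import torch.nn as nn

class Small(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 4)

class Large(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 4)   # layer shared with Small
        self.fc2 = nn.Linear(4, 2)   # new layer, trained from scratch

pretrained = Small()
model = Large()

# strict=False ignores keys that are missing or unexpected,
# so only the overlapping layer (fc1) is warmstarted.
model.load_state_dict(pretrained.state_dict(), strict=False)
```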

Saving and loading models across devices in PyTorch

Learn how saving and loading models across devices (CPUs and GPUs) is relatively straightforward using PyTorch.

Basics
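The central tool is the `map_location` argument to `torch.load`, which remaps saved tensor storages to a target device — for example, loading a GPU-trained checkpoint on a CPU-only machine:

```python
import io
import torch
import torch.nn as nn

model = nn.Linear(3, 3)
buf = io.BytesIO()                     # stands in for a file path
torch.save(model.state_dict(), buf)

buf.seek(0)
# map_location remaps storages at load time, e.g. a GPU-trained
# checkpoint onto a CPU-only machine (or onto a specific GPU).
state = torch.load(buf, map_location=torch.device("cpu"))
restored = nn.Linear(3, 3)
restored.load_state_dict(state)
```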

Zeroing out gradients in PyTorch

Learn when you should zero out gradients and how doing so can help increase the accuracy of your model.

Basics
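Because `backward()` accumulates gradients into `.grad` rather than overwriting them, each training step normally begins by clearing them. A minimal sketch of the loop:

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 2), torch.randn(8, 1)

for _ in range(2):
    optimizer.zero_grad()            # clear gradients from the previous step
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()                  # without zero_grad, gradients would accumulate
    optimizer.step()
```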

PyTorch Benchmark

Learn how to use PyTorch's benchmark module to measure and compare the performance of your code.

Basics
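A minimal sketch of `torch.utils.benchmark.Timer`, which, unlike a naive `time.time()` loop, handles warmup, synchronization, and statistics:

```python
import torch
import torch.utils.benchmark as benchmark

x = torch.randn(256, 256)

# Timer handles warmup, synchronization, and statistics,
# unlike a naive time.time() loop.
t = benchmark.Timer(
    stmt="torch.mm(x, x)",
    globals={"x": x},
)
measurement = t.timeit(50)           # run the statement 50 times
```

`measurement.mean` gives the mean per-run time in seconds; the object also prints a readable summary.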

PyTorch Benchmark (quick start)

Learn how to measure snippet run times and collect instruction counts.

Basics

PyTorch Profiler

Learn how to use PyTorch's profiler to measure operator execution time and memory consumption.

Basics
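A minimal sketch of the profiler's context-manager API: wrap the code of interest, then aggregate per-operator statistics with `key_averages()`.

```python
import torch
from torch.profiler import profile, ProfilerActivity

x = torch.randn(128, 128)

# Profile CPU activity and memory for the wrapped region.
with profile(activities=[ProfilerActivity.CPU], profile_memory=True) as prof:
    y = torch.mm(x, x)

# key_averages() aggregates stats per operator; table() renders them.
summary = prof.key_averages().table(sort_by="cpu_time_total", row_limit=5)
```

Adding `ProfilerActivity.CUDA` to `activities` extends the same pattern to GPU kernels.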

Model Interpretability using Captum

Learn how to use Captum to attribute the predictions of an image classifier to their corresponding image features, and to visualize the attribution results.

Interpretability,Captum

How to use TensorBoard with PyTorch

Learn basic usage of TensorBoard with PyTorch, and how to visualize data in the TensorBoard UI.

Visualization,TensorBoard

Dynamic Quantization

Apply dynamic quantization to a simple LSTM model.

Quantization,Text,Model-Optimization
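In sketch form, dynamic quantization is a one-line conversion: `quantize_dynamic` replaces the weights of the listed module types with int8 versions, while activations are quantized on the fly at inference time. The sizes below are illustrative.

```python
import torch
import torch.nn as nn

model = nn.LSTM(input_size=16, hidden_size=32, num_layers=1)

# quantize_dynamic converts the weights of the listed module types to int8;
# activations are quantized dynamically at inference time.
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.LSTM}, dtype=torch.qint8
)
out, _ = qmodel(torch.randn(5, 3, 16))   # (seq_len, batch, features)
```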

TorchScript for Deployment

Learn how to export your trained model in TorchScript format, then load it in C++ and run inference.

TorchScript

Deploying with Flask

Learn how to use Flask, a lightweight web server, to quickly set up a web API from your trained PyTorch model.

Production,TorchScript

PyTorch Mobile Performance Recipes

A list of recipes for optimizing the performance of PyTorch on mobile (Android and iOS).

Mobile,Model-Optimization

Making a Native Android Application That Uses PyTorch Android Prebuilt Libraries

Learn how to build an Android application from scratch that uses the LibTorch C++ API and a TorchScript model with a custom C++ operator.

Mobile

Fuse Modules recipe

Learn how to fuse a list of PyTorch modules into a single module to reduce the model size before quantization.

Mobile
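A sketch of module fusion on an illustrative Conv-BatchNorm-ReLU block: in eval mode, `fuse_modules` folds the three modules into one fused op and replaces the absorbed modules with `nn.Identity`.

```python
import torch
import torch.nn as nn

class ConvBNReLU(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.bn = nn.BatchNorm2d(8)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

m = ConvBNReLU().eval()               # fusion for inference requires eval mode
fused = torch.quantization.fuse_modules(m, [["conv", "bn", "relu"]])

# The three modules become one fused op; bn and relu become Identity.
```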

Quantization for Mobile Recipe

Learn how to reduce the model size and make it run faster without losing much on accuracy.

Mobile,Quantization

Script and Optimize for Mobile

Learn how to convert the model to TorchScript and (optionally) optimize it for mobile apps.

Mobile

Model Preparation for iOS Recipe

Learn how to add the model to an iOS project and use the PyTorch pod for iOS.

Mobile

Model Preparation for Android Recipe

Learn how to add the model to an Android project and use the PyTorch library for Android.

Mobile

Profiling PyTorch RPC-Based Workloads

How to use the PyTorch profiler to profile RPC-based workloads.

Production

Automatic Mixed Precision

Use torch.cuda.amp to reduce runtime and save memory on NVIDIA GPUs.

Model-Optimization
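A sketch of the standard AMP training loop: `autocast` runs eligible ops in float16 to save time and memory, and `GradScaler` scales the loss to avoid fp16 gradient underflow. AMP targets NVIDIA GPUs, so this sketch disables it when CUDA is unavailable (both objects degrade to no-ops with `enabled=False`).

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"            # amp targets NVIDIA GPUs; no-op on CPU here

model = nn.Linear(8, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)
x, y = torch.randn(16, 8, device=device), torch.randn(16, 1, device=device)

for _ in range(2):
    optimizer.zero_grad()
    # autocast runs eligible ops in float16 to save time and memory.
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = nn.functional.mse_loss(model(x), y)
    scaler.scale(loss).backward()     # scale the loss to avoid fp16 underflow
    scaler.step(optimizer)            # unscales gradients, then steps
    scaler.update()
```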

Performance Tuning Guide

Tips for achieving optimal performance.

Model-Optimization

Shard Optimizer States with ZeroRedundancyOptimizer

How to use ZeroRedundancyOptimizer to reduce memory consumption.

Distributed-Training

Direct Device-to-Device Communication with TensorPipe RPC

How to use RPC with direct GPU-to-GPU communication.

Distributed-Training

Distributed Optimizer with TorchScript support

How to enable TorchScript support for Distributed Optimizer.

Distributed-Training,TorchScript
