Updated April 6, 2023
Introduction to PyTorch Versions
PyTorch is one of the most preferred deep learning frameworks due to its ease of use and simplicity. PyTorch has its own community of developers who work to improve it with new features and to fix the critical bugs introduced along with them. PyTorch has evolved with each release; it is mostly used to provide NumPy-like operations on multi-dimensional arrays with GPU acceleration, so computation is faster, and to build deep neural networks for computer vision or natural language processing. The goal of each new release is to provide the user a better and cleaner interface for building artificial intelligence models. The latest stable version of PyTorch released by Facebook is 1.3, which is the currently tested and supported version.
Different Versions of PyTorch
Here we discuss the different versions of PyTorch along with the required system configuration, and mainly focus on the current stable release v1.3, as this is the version used in industry and in the research community today:
1. Old Version – PyTorch Versions < 1.0.0
In the very first release of PyTorch, Facebook combined Python and the Torch library to create an open-source framework that can also run on CUDA and Nvidia GPUs. PyTorch mainly uses Tensors (torch.Tensor) to store and operate on multi-dimensional arrays. The first public release of PyTorch was version 0.1.12. Version 0.4 was one of the most significant releases, with core changes.
PyTorch v0.4 added support for Windows, added features to support the use of RNNs in ONNX (Open Neural Network Exchange), and provided C++/CUDA extensions for users. Version 0.4 also added support for writing device-agnostic code. Tensors and Variables were merged in the 0.4 release, and operations can now return 0-dimensional tensors. To install the old versions through Conda or Miniconda, use the commands below:
In the below command, the user can replace '0.2.0' with the desired version, such as '0.4.0' or '0.4.1', and replace cuda90 with cuda80, cuda75, etc.
conda install pytorch=0.2.0 cuda90 -c pytorch
PyTorch libraries are also available on GitHub, and users can check out an older version of PyTorch and build it. The user can replace '0.2.0' with the desired version: git checkout v0.2.0. Users can also download the required libraries for macOS or Windows. The respective OS binaries can be downloaded from the official website:
- Windows Binaries: https://pytorch.org/get-started/previous-versions/#windows-binaries
- Mac Binaries: https://pytorch.org/get-started/previous-versions/#mac-and-misc-binaries
2. PyTorch Version 1.0 to 1.2
Before version 1.0, code written in PyTorch needed the Python VM environment to run. In version 1.0, Python functions and classes are provided with torch.jit, through which they can be compiled into a high-level representation separate from the Python code. The main goal of the 1.0 to 1.2 releases was to combine the features of PyTorch, ONNX, and the Caffe2 framework into a single framework for seamless movement from research to production deployment. Some of the features added in version 1.0 are as below:
- Easy integration of C++ functions with Python.
- It separates the AI model from code by providing two modes:
- Eager Mode: Mostly used for research, as it is simple, debuggable, and can use any Python library. It needs a Python environment to run.
- Script Mode: The model can run without a Python interpreter. This is a production deployment mode; it has no Python dependency, and the code is an optimizable subset of Python. A minimal sketch of script mode follows this list.
- A model can run on servers, GPUs, or TPUs.
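As a minimal sketch of script mode (the small module below is hypothetical and only for illustration), a model can be compiled with torch.jit.script and saved, so it can later be loaded without a Python interpreter, for example from C++:

import torch
import torch.nn as nn

# Hypothetical small model used only to illustrate script mode
class SmallNet(nn.Module):
    def __init__(self):
        super(SmallNet, self).__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

# Compile to TorchScript; the saved file has no Python dependency
scripted = torch.jit.script(SmallNet())
scripted.save("small_net.pt")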
In the below command, the user can replace '1.2.0' with the desired version, such as '1.0.0' or '1.0.1', and in torchvision replace 0.4.0 with 0.3.0, 0.2.2, etc., accordingly.
conda install pytorch==1.2.0 torchvision==0.4.0 -c pytorch
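After installation, the installed versions can be verified with a quick check (a minimal sketch; the printed values depend on the versions chosen above):

import torch
import torchvision

# Confirm the installed PyTorch / torchvision versions and CUDA availability
print(torch.__version__)
print(torchvision.__version__)
print(torch.cuda.is_available())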
3. Latest PyTorch Version
Facebook released the latest version of PyTorch, 1.3, in 2019. This new version is packed with new changes and bug fixes. Some of the exciting new features are support for mobile, transparency, named tensors, and quantization, added to meet the needs of researchers. I will briefly explain these new features below, along with some other information.
PyTorch Named Tensors
Before the 1.3 release, PyTorch did not support naming tensor dimensions: dimensions were addressed purely by position, broadcasting was based on position, and no type-level information about dimensions appeared in documentation. PyTorch 1.3 overcomes this by adding named tensors as a feature, so that users can access tensor dimensions directly by name. Previously, even for simple tasks, users had to remember the positional layout of a tensor; now, by naming the dimensions, a user can rearrange them as required.
Named tensors also support error checking: operations verify that dimension names match the parameters before running.
Example:
import torch
data_sample = torch.randn(100, 3, 250, 600, names=('N', 'C', 'H', 'W'))
Here, N is the batch size, C is the number of channels, H is the height of the image, and W is the width of the image.
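Since named tensors are still an experimental feature in 1.3, the exact API may change; as a minimal sketch continuing the tensor above, dimensions can be inspected and rearranged by name rather than by position:

# Continuing the example above: work with dimensions by name, not position
print(data_sample.names)                                   # ('N', 'C', 'H', 'W')
channels_last = data_sample.align_to('N', 'H', 'W', 'C')   # reorder dimensions by name
batch_mean = data_sample.mean(dim='N')                     # reduce over the named batch dimension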
PyTorch Quantization
Quantization is a technique to perform computation and storage operations at reduced precision instead of full 32-bit floating point. Quantization is already supported in TensorFlow, but in PyTorch it has been added only recently. To develop ML applications and deploy them efficiently to servers or on-premise resources, 8-bit model quantization has been added. PyTorch currently supports three approaches to quantization: post-training static quantization, dynamic quantization, and quantization-aware training. For quantization, PyTorch has also introduced three new datatypes: torch.quint8 (8-bit unsigned integer), torch.qint8 (8-bit signed integer), and torch.qint32 (32-bit signed integer).
To run quantized operations, PyTorch uses x86 CPUs with AVX2 support and ARM CPUs.
import torch
import torch.nn as nn

# Quantized ReLU applied to a per-tensor quantized input
m = nn.quantized.ReLU()
input = torch.randn(2)
input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)
output = m(input)
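As another minimal sketch (the layer sizes below are arbitrary and only for illustration), dynamic quantization converts the weights of supported modules such as nn.Linear to 8-bit integers:

import torch
import torch.nn as nn

# Hypothetical float model; its nn.Linear weights are quantized dynamically to int8
float_model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
quantized_model = torch.quantization.quantize_dynamic(
    float_model, {nn.Linear}, dtype=torch.qint8
)
output = quantized_model(torch.randn(1, 16))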
PyTorch Mobile Support
Quantization is used while developing ML applications so that PyTorch models can be deployed to mobile or other edge devices. In PyTorch 1.3, the developers have added end-to-end workflow APIs for Android and iOS. This was done to reduce latency and provide security on the edge node. It is an early, experimental release, and the developers are still working on optimizing computation, performance, and coverage on mobile CPUs and GPUs. A minimal sketch of the export step of this workflow is shown below.
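The sketch below (assuming torchvision is installed; the model choice and file name are only for illustration) traces a model and saves it as TorchScript; the saved file can then be loaded on device through the PyTorch Android or iOS libraries:

import torch
import torchvision

# Trace a pretrained model and save it as TorchScript for use on mobile
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("mobilenet_v2_mobile.pt")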
Apart from the above three features, some other features have been added, such as support for PyTorch on Google Colab, support for TensorBoard, and performance improvements in the autograd engine, along with new tools for model privacy, interpretability, and support for multi-modal AI systems.
Conclusion
In conclusion, PyTorch is one of the most widely used deep learning frameworks, with support for state-of-the-art techniques. As developers are continuously working on improving PyTorch, you can expect many more releases with exciting new features. So learning PyTorch to create machine learning or deep learning applications will be beneficial for aspiring AI enthusiasts, as it is one of the best documented and supported frameworks.