Introduction to PyTorch nn
The PyTorch nn module is a set of building blocks for neural networks: a network takes a given input, applies weights to it, and produces an output, possibly passing through one or more hidden layers along the way. In a simple setup, the squared Euclidean distance between the prediction and the target is minimized to predict the output from the given input. This yields a single-layer feed-forward network that maps n inputs to m outputs.
What is PyTorch nn?
PyTorch nn provides several classes from which various neural network models can be created. Tensors and the automatic differentiation machinery help train and build models from input, output, and hidden layers, if any are present. We also have nn.Linear, a module that takes a tensor as input and returns a tensor as output. It is a single-layer building block that uses a linear equation to compute the output from the input and the weights.
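To make this concrete, here is a minimal sketch of a single-layer network trained to minimize the squared Euclidean distance; the variable names and sizes are our own illustration, not fixed by PyTorch.
import torch
from torch import nn

# A single-layer feed-forward network: n = 4 inputs, m = 2 outputs
single_layer = nn.Linear(4, 2)

x = torch.rand(8, 4)        # batch of 8 input vectors
target = torch.rand(8, 2)   # desired outputs

loss_fn = nn.MSELoss()      # mean squared (Euclidean) error
optimizer = torch.optim.SGD(single_layer.parameters(), lr=0.1)

for _ in range(100):
    prediction = single_layer(x)       # y = x @ W.T + b
    loss = loss_fn(prediction, target)
    optimizer.zero_grad()
    loss.backward()                    # automatic differentiation
    optimizer.step()                   # update the weights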
PyTorch nn Model
Let us build a neural network using PyTorch to classify images from the MNIST dataset.
import os
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
We must check whether CUDA is available in our system; otherwise, we fall back to the CPU.
Operational_device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(Operational_device))
The next step is to define the neural network as a class and initialize the network layers; the subclass implements the operations on the input data in the forward method.
class Networklayers(nn.Module):
    def __init__(self):
        super(Networklayers, self).__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 256),   # MNIST images are 28x28 pixels
            nn.ReLU(),
            nn.Linear(256, 256),
            nn.ReLU(),
            nn.Linear(256, 10),      # 10 digit classes
        )

    def forward(self, a):            # nn.Module invokes forward when the model is called
        a = self.flatten(a)
        logits = self.linear_relu_stack(a)
        return logits
The next step is to create an instance of the neural network and move it to the device, be it CUDA or CPU. Then we can print the structure of that model instance.
Model_poster = Networklayers().to(Operational_device)
print(Model_poster)
Now the model of the neural network is created using PyTorch and related modules.
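To see the model in use, a single training step might look like the sketch below. The loss function, optimizer, and dummy data here are illustrative assumptions, not part of the original example.
loss_fn = nn.CrossEntropyLoss()   # a common choice for classification
optimizer = torch.optim.SGD(Model_poster.parameters(), lr=1e-3)

images = torch.rand(64, 28, 28, device=Operational_device)        # dummy batch of images
labels = torch.randint(0, 10, (64,), device=Operational_device)   # dummy class targets

logits = Model_poster(images)     # forward pass
loss = loss_fn(logits, labels)    # compute the loss
optimizer.zero_grad()
loss.backward()                   # backpropagation
optimizer.step()                  # update the weights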
PyTorch nn Function
We have several functions used alongside the nn module, available in torch.nn.functional; a few of them are demonstrated in the sketch after this list.
- Threshold – thresholds each element of the input tensor. We can use threshold_ instead of threshold() for the in-place version.
- Relu – applies the rectified linear unit function element-wise. We can use relu_ instead of relu() for the in-place version.
- Hardshrink – applies the hard shrinkage function element-wise.
- Softmin – applies the softmin function along a given dimension.
- Softmax – applies the softmax function along a given dimension.
- Log_Softmax – applies softmax followed by a logarithm, which is more numerically stable than computing the two separately.
- Softshrink – applies the soft shrinkage function element-wise.
- Hardsigmoid – applies the hard sigmoid function element-wise, without creating a module.
- Mish – applies the mish function element-wise.
- Silu – applies the sigmoid linear unit function element-wise.
- Layer_norm – applies layer normalization over the last required number of dimensions.
- Batch_norm – applies batch normalization over each channel across a batch of data.
- Group_norm – applies group normalization over a number of dimensions.
- Instance_norm – applies instance normalization independently to each sample in a batch.
- Linear – applies a linear transformation to the input data.
- Bilinear – applies a bilinear transformation to two input tensors.
- Dropout – randomly zeroes some elements of the input tensor with a given probability p.
- Feature_alpha_dropout – randomly masks out entire channels.
- Embedding – looks up embeddings in a fixed-size lookup table using a tensor of indices.
- Cosine_similarity – computes the cosine similarity between x1 and x2 along a dimension.
- One_hot – takes a long tensor of index values and returns a one-hot tensor: the result is zero everywhere, except that where the last-dimension index matches the input value, it is 1.
- Pdist – computes the p-norm distance between every pair of row vectors.
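A few of these functions can be tried directly from torch.nn.functional, as in the short sketch below; the input tensors are made up purely for illustration.
import torch
import torch.nn.functional as F

t = torch.randn(2, 5)

print(F.relu(t))                             # rectified linear unit, element-wise
print(F.softmax(t, dim=1))                   # each row sums to 1
print(F.log_softmax(t, dim=1))               # numerically stable log of softmax
print(F.dropout(t, p=0.5))                   # randomly zeroes elements with probability p
print(F.one_hot(torch.tensor([0, 3]), num_classes=5))
print(F.pdist(t))                            # p-norm distance between every pair of rows
print(F.cosine_similarity(t[0], t[1], dim=0))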
PyTorch nn example
The first step is to create the model and move it to the available device. Then, as explained in the PyTorch nn model section above, we import all the necessary modules and create the model.
Now we are using the Softmax module to get the probabilities.
a = torch.rand(1, 28, 28, device=Operational_device)
logits = Model_poster(a)
prediction_probability = nn.Softmax(dim=1)(logits)
b_prediction = prediction_probability.argmax(1)
print(f"Predicted class: {b_prediction}")
We will look at what happens in each layer using a sample minibatch of three images of the MNIST size.
input = torch.rand(3, 28, 28)
print(input.size())
We can use the Flatten layer to convert each 2D image into a contiguous array of pixel values, and print the resulting size.
flatten_layer = nn.Flatten()
flatten_image = flatten_layer(input)
print(flatten_image.size())
The Linear module stores the weights and bias and applies a linear transformation to its input.
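For example, the layer01 and hidden01 objects used in the snippets below can be created as follows; the layer sizes here are an assumption, chosen to match the Sequential example further down.
layer01 = nn.Linear(in_features=28*28, out_features=20)
hidden01 = layer01(flatten_image)   # linear transformation of the flattened images
print(hidden01.size())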
Non-linear activations create the complex mappings between inputs and outputs; linear transformations alone can only learn direct, linear mappings. Applying non-linearities after the linear layers lets the network learn a much wider variety of functions.
print(f"Before ReLU activation: {hidden01}\n\n")
hidden1 = nn.ReLU()(hidden01)
print(f"After ReLU activation: {hidden01}")
We can use the Sequential module to chain modules in order; data is passed through each module in the order in which they are defined.
ordered_modules = nn.Sequential(
    flatten_layer,
    layer01,
    nn.ReLU(),
    nn.Linear(20, 10)
)
input = torch.rand(3, 28, 28)
logits = ordered_modules(input)
The Softmax module scales the raw values (logits) returned by the last linear layer into probabilities along the given dimension.
softmax_linear = nn.Softmax(dim=1)
prediction_probability = softmax_linear(logits)
We can iterate over the parameters of the model and print each parameter's name, size, and a sample of its values.
print("Model features: ", model, "\n\n")
for name, param in model.parameters():
print(f"Layer: {model_name} | Size: {model.size()} | Values : {parameter[:5]} \n")
Conclusion
PyTorch nn helps us set up a production model based on our requirements and stabilize the model across all its parameters. We can build prototypes easily using PyTorch, and the nn module helps complete the work quickly. The nn module is well suited for production models.