Updated April 15, 2023
Introduction to PyTorch TensorBoard
TensorBoard is a visualization toolkit, served as a web application, that lets us inspect and analyze a running model through graphs and other visualizations. We can use it together with PyTorch when building neural networks. TensorBoard makes machine learning experiments understandable to users and viewers, and it can display histograms, images, and other summaries as needed.
What is PyTorch TensorBoard?
TensorBoard provides the measurements and visualizations needed during machine learning experiments, supporting scalars, images, graphs, histograms, and audio. Experiment metrics such as loss and accuracy can be tracked easily, so the behavior of every run is visible to the user. We can visualize the model graph, project embeddings into a lower-dimensional space, and follow neural network training runs as they progress.
How to Use PyTorch TensorBoard?
The first step is to install PyTorch, followed by the TensorBoard installation. After that, we create a SummaryWriter instance.
import torch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
We then log the values and scalars that we want to save. We can call the flush() method to make sure all pending events have been written to disk. The next step is to install and run TensorBoard; the latest version should be installed if we want to upload the log files. When logging is finished, we call the close() method on the SummaryWriter.
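As a minimal sketch of this workflow (the tag name and loss values here are only illustrative):
import torch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()  # writes event files to ./runs/ by default
for step in range(5):
    loss = 1.0 / (step + 1)  # placeholder value standing in for a real training loss
    writer.add_scalar('Loss/train', loss, step)
writer.flush()  # make sure all pending events are written to disk
writer.close()  # close the writer once logging is finished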
Several types of visualization logs can be written to see the different visualization effects; examples can be taken from the torch.utils.tensorboard tutorials. We can run the help command at any step to get a description of the available options.
$ tensorboard dev --help
There are five steps in using TensorBoard. First, we read in the data and apply the required transforms. Next, we set up TensorBoard, followed by writing to TensorBoard. Then we can inspect the model using TensorBoard, and the last step is to create interactive image visualizations with TensorBoard.
Share PyTorch TensorBoard Dashboards
TensorBoard.dev is the domain used to upload and share dashboards, so we can share results with anyone, and anyone with the link can track the progress of the experiment and share it further. This is helpful when working in a team where managers need to monitor progress and share results with stakeholders. In a notebook environment, the ! prefix runs shell commands and the % prefix runs magic commands on the command line.
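For example, in a Jupyter or Colab notebook (a generic sketch of the notebook workflow, not taken from the original article), the two prefixes are used like this:
# "!" runs a shell command from a notebook cell
!pip install tensorboard --upgrade
# "%" runs an IPython magic; this loads the TensorBoard notebook extension
%load_ext tensorboard
# display TensorBoard inline, reading logs from the ./runs directory
%tensorboard --logdir runs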
It is important to install the latest version of TensorBoard to share the results with others.
$ pip install tensorboard --upgrade
Runs can be monitored, and the results can be uploaded using the following command.
$ tensorboard dev upload --logdir runs \
--name "My ML Working" \
--description "Simple technical analysis of stocks"
TensorBoard Dashboards Example
The first step is to install TensorBoard on the system so that all its utilities can be used easily. This example shows how to log image and graph data.
import torch
import torchvision
from torch.utils.tensorboard import SummaryWriter
from torchvision import datasets, transforms
writer = SummaryWriter()
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.7,), (0.7,))])
train_dataset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(train_dataset, batch_size=16, shuffle=True)
model = torchvision.models.resnet50(False)
# MNIST images have a single channel, so replace the first convolution layer
model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
images, labels = next(iter(trainloader))
grid = torchvision.utils.make_grid(images)
writer.add_image('images', grid, 0)
writer.add_graph(model, images)
writer.close()
We can visualize all the graphs we have built using TensorBoard.
pip install tensorboard
tensorboard --logdir=runs
A large amount of logged data can clutter the dashboard, so it is better to arrange the scalars into groups by giving them hierarchical tag names such as Loss/train and Loss/test.
from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for n_iter in range(100):
    writer.add_scalar('Loss/train', np.random.random(), n_iter)
    writer.add_scalar('Loss/test', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/train', np.random.random(), n_iter)
    writer.add_scalar('Accuracy/test', np.random.random(), n_iter)
TensorBoard then displays these results as graphs, grouped by their tag prefixes.
The SummaryWriter class provides a high-level API for creating an event file in a given directory and adding summaries and events to it.
class torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')
__init__(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')
from torch.utils.tensorboard import SummaryWriter
# default: logs go to an automatically generated folder such as runs/<date>_<hostname>/
writer = SummaryWriter()
# logs go to the specified folder "my_experiment"
writer = SummaryWriter("my_experiment")
# the comment is appended to the automatically generated folder name
writer = SummaryWriter(comment="LR_0.1_BATCH_16")
Scalar values can be added using add_scalar().
add_scalar(tag, scalar_value, global_step=None, walltime=None, new_style=False)
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
x = range(100)
for i in x:
    writer.add_scalar('y=2x', i * 2, i)
writer.close()
Also, histogram details can be added.
from torch.utils.tensorboard import SummaryWriter
import numpy as np
writer = SummaryWriter()
for i in range(10):
    x = np.random.random(1000)
    writer.add_histogram('distribution centers', x + i, i)
writer.close()
We can add image details to the TensorBoard.
from torch.utils.tensorboard import SummaryWriter
import numpy as np
img = np.zeros((3, 100, 100))
img[0] = np.arange(0, 10000).reshape(100, 100) / 10000
img_HWC = np.zeros((100, 100, 3))
img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000
writer = SummaryWriter()
writer.add_image('my_image', img, 0)
writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')
writer.close()
We can also add embedding data for the projector, as well as text data, to the results; the embedding example below is followed by a short text-logging sketch.
import keyword
import torch
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
meta = []
while len(meta) < 100:
    meta = meta + keyword.kwlist  # get some strings
meta = meta[:100]
for i, v in enumerate(meta):
    meta[i] = v + str(i)
label_img = torch.rand(100, 3, 10, 32)
for i in range(100):
    label_img[i] *= i / 100.0
writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)
writer.add_embedding(torch.randn(100, 5), label_img=label_img)
writer.add_embedding(torch.randn(100, 5), metadata=meta)
writer.close()
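Text data can be logged in a similar way with add_text(); the tag and message below are only illustrative placeholders:
from torch.utils.tensorboard import SummaryWriter
writer = SummaryWriter()
# log a short note under the tag 'notes' at step 0 (illustrative values)
writer.add_text('notes', 'Training started with batch size 16.', 0)
writer.close()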
We can plot a precision-recall curve by providing ground-truth labels and prediction confidences.
from torch.utils.tensorboard import SummaryWriter
import numpy as np
labels = np.random.randint(2, size=100)
predictions = np.random.rand(100)
writer = SummaryWriter()
writer.add_pr_curve('pr_curve', labels, predictions, 0)
writer.close()
Also, we can add 3D point clouds or meshes to the TensorBoard.
import torch
from torch.utils.tensorboard import SummaryWriter
vertices_tensor = torch.as_tensor([
    [1, 1, 1],
    [-1, -1, 1],
    [1, -1, -1],
    [-1, 1, -1],
], dtype=torch.float).unsqueeze(0)
colors_tensor = torch.as_tensor([
    [255, 0, 0],
    [0, 255, 0],
    [0, 0, 255],
    [255, 0, 255],
], dtype=torch.int).unsqueeze(0)
faces_tensor = torch.as_tensor([
    [0, 2, 3],
    [0, 3, 1],
    [0, 1, 2],
    [1, 3, 2],
], dtype=torch.int).unsqueeze(0)
writer = SummaryWriter()
writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)
writer.close()
Conclusion
TensorBoard.dev is a free platform where we can upload TensorBoard logs and receive a shareable link, so anyone with the link can view the logs on TensorBoard.dev. This helps teams collaborate on ideas and iterate on their experiments.
Recommended Articles
We hope that this EDUCBA information on “PyTorch TensorBoard” was beneficial to you. You can view EDUCBA’s recommended articles for more information.