Implementation of a DNN (Deep Neural Network) with PyTorch
Syntax
import torch
import torch.nn as nn
import torch.optim as optim

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(10, 100)   # input layer -> first hidden layer
        self.fc2 = nn.Linear(100, 50)   # first hidden layer -> second hidden layer
        self.out = nn.Linear(50, 1)     # second hidden layer -> output layer

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.sigmoid(self.out(x))
        return x

model = MyModel()
criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
Example
import torch
import torch.nn as nn
import torch.optim as optim

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(2, 4)   # 2 inputs -> 4 hidden neurons
        self.fc2 = nn.Linear(4, 2)   # 4 hidden neurons -> 2 hidden neurons
        self.out = nn.Linear(2, 1)   # 2 hidden neurons -> 1 output

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.relu(self.fc2(x))
        x = torch.sigmoid(self.out(x))
        return x

model = MyModel()

# XOR truth table as training data
X = torch.tensor([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=torch.float)
y = torch.tensor([[0], [1], [1], [0]], dtype=torch.float)

criterion = nn.BCELoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# training loop
for epoch in range(10000):
    optimizer.zero_grad()
    output = model(X)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()

print(model(X))
Output
tensor([[0.0279],
        [0.9644],
        [0.9626],
        [0.0447]], grad_fn=<SigmoidBackward>)
Explanation
In this example, we use PyTorch to implement a small deep neural network. The network has two hidden layers with 4 and 2 neurons respectively, ReLU activations on the hidden layers, and a sigmoid activation on the output layer so the single output can be read as a probability.
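If you want to confirm the architecture and count the trainable parameters, a quick inspection like the following can help. This snippet is not part of the original example; it only assumes the MyModel class defined above.

model = MyModel()
print(model)  # prints the layer structure: fc1, fc2, out

# total number of trainable parameters (weights + biases)
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_params)  # (2*4 + 4) + (4*2 + 2) + (2*1 + 1) = 25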
We use the binary cross-entropy loss function (nn.BCELoss) and the stochastic gradient descent optimizer (optim.SGD) with a learning rate of 0.1 to train the model.
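A common variant, not used in the example above, is to let the loss function apply the sigmoid itself and to use the Adam optimizer instead of SGD. The sketch below assumes the model's forward method is changed to return raw logits, i.e. self.out(x) without the final torch.sigmoid call.

# variant: numerically more stable loss that applies sigmoid internally
criterion = nn.BCEWithLogitsLoss()                  # expects raw logits, not probabilities
optimizer = optim.Adam(model.parameters(), lr=0.01)

# the training loop stays the same; only forward() changes:
# it should return self.out(x) directly instead of torch.sigmoid(self.out(x))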
The neural network is trained on a small dataset of 4 input samples and their corresponding binary labels; this is the classic XOR truth table, which a network needs at least one hidden layer to learn. The model is trained for 10,000 epochs and finally evaluated on the same input samples.
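To turn the raw probabilities into class predictions, you can evaluate the trained model under torch.no_grad() and threshold the outputs at 0.5. This sketch is not part of the original example; it reuses the model, X, and y defined above.

# evaluate without tracking gradients
with torch.no_grad():
    probs = model(X)                  # probabilities in [0, 1]
    preds = (probs > 0.5).float()     # threshold at 0.5 to get 0/1 predictions
    accuracy = (preds == y).float().mean()
    print(preds)
    print(accuracy)  # should be 1.0 given the output shown above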
Use
You can use this code as a starting point for implementing your own neural networks in PyTorch. The example demonstrates how to define the neural network architecture, create an instance of the model, define the loss function and optimizer, and train the model.
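For small feed-forward networks like this one, you can also define the same architecture with nn.Sequential instead of subclassing nn.Module. This is an equivalent alternative, not what the example above uses.

import torch.nn as nn

# same 2 -> 4 -> 2 -> 1 architecture, defined without a custom class
model = nn.Sequential(
    nn.Linear(2, 4),
    nn.ReLU(),
    nn.Linear(4, 2),
    nn.ReLU(),
    nn.Linear(2, 1),
    nn.Sigmoid(),
)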
Important Points
- PyTorch is a powerful deep learning framework that lets you define and train neural networks
- The example demonstrates how to define a deep neural network with PyTorch and train it using stochastic gradient descent; a sketch for monitoring the loss during training follows this list
- The example also shows how to define the loss function and evaluate the output of the network on the samples it was trained on
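Since the training loop above runs silently for 10,000 epochs, a common addition is to print the loss every few hundred epochs to check that training is converging. This is not part of the original example; it assumes the model, X, y, criterion, and optimizer defined earlier.

for epoch in range(10000):
    optimizer.zero_grad()
    output = model(X)
    loss = criterion(output, y)
    loss.backward()
    optimizer.step()

    # report progress every 1000 epochs
    if epoch % 1000 == 0:
        print(f"epoch {epoch}: loss = {loss.item():.4f}")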
Summary
In conclusion, this example provides a simple and easy-to-understand implementation of a deep neural network with PyTorch. While the example is small, it provides a solid foundation for building more complex networks. PyTorch is a flexible and versatile deep learning framework, and you can use it to build a wide range of deep learning models for various applications.