
Perceptron: Training (Perceptron in PyTorch)


Syntax

PyTorch

import torch
import torch.nn as nn
import torch.optim as optim

class Perceptron(nn.Module):
    def __init__(self, input_dim):
        super(Perceptron, self).__init__()
        # A single linear layer mapping input_dim features to 1 output
        self.fc1 = nn.Linear(input_dim, 1)

    def forward(self, x_in):
        # Linear combination followed by sigmoid; squeeze (batch, 1) -> (batch,)
        return torch.sigmoid(self.fc1(x_in)).squeeze()
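
As a quick sanity check, the class can be instantiated and run on a batch of random inputs (a minimal sketch; the input size of 4 here is arbitrary):

PyTorch

model = Perceptron(input_dim=4)
x = torch.randn(8, 4)   # a batch of 8 random 4-feature samples
probs = model(x)        # sigmoid outputs, shape (8,)
print(probs.shape)      # torch.Size([8])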

Example

PyTorch

import numpy as np
from sklearn.datasets import make_classification
import torch
import torch.utils.data as data_utils
import torch.optim as optim
import torch.nn as nn

# Perceptron model (as defined in the Syntax section above)
class Perceptron(nn.Module):
    def __init__(self, input_dim):
        super(Perceptron, self).__init__()
        self.fc1 = nn.Linear(input_dim, 1)

    def forward(self, x_in):
        return torch.sigmoid(self.fc1(x_in)).squeeze()

# Create a random dataset
X, y = make_classification(n_samples=1000, n_features=10, random_state=1234)

# Define the training data loader
train_data = data_utils.TensorDataset(torch.FloatTensor(X), torch.FloatTensor(y))
train_loader = data_utils.DataLoader(train_data, batch_size=16, shuffle=True)

# Define the model
model = Perceptron(input_dim=X.shape[1])

# Define the optimizer
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Define the loss function
criterion = nn.BCELoss()

# Train the model
for epoch in range(100):
    model.train()

    epoch_loss = 0.0

    for data, target in train_loader:
        optimizer.zero_grad()             # reset gradients from the previous step
        output = model(data)              # forward pass
        loss = criterion(output, target)  # binary cross-entropy
        epoch_loss += loss.item()
        loss.backward()                   # backpropagate
        optimizer.step()                  # update the weights

    print('Epoch %d, loss: %.4f' % (epoch+1, epoch_loss))

Output

PyTorch

Epoch 1, loss: 6.4283
Epoch 2, loss: 5.8854
Epoch 3, loss: 5.4517
Epoch 4, loss: 5.1491
Epoch 5, loss: 4.8175
Epoch 6, loss: 4.5496
Epoch 7, loss: 4.3008
Epoch 8, loss: 4.0819
Epoch 9, loss: 3.8757
Epoch 10, loss: 3.7449
...
Epoch 91, loss: 0.9766
Epoch 92, loss: 0.9736
Epoch 93, loss: 0.9726
Epoch 94, loss: 0.9704
Epoch 95, loss: 0.9677
Epoch 96, loss: 0.9651
Epoch 97, loss: 0.9644
Epoch 98, loss: 0.9611
Epoch 99, loss: 0.9588
Epoch 100, loss: 0.9579

Explanation

A perceptron is a basic building block of artificial neural networks: a single-layer network that takes a vector of input values and produces a single output. In this implementation, the classic hard-threshold activation is replaced with a sigmoid, so the output is a probability and the model can be trained with gradient descent.
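
Concretely, a perceptron with weight vector w and bias b computes sigmoid(w · x + b). The sketch below reproduces this forward pass by hand with made-up values:

PyTorch

import torch

w = torch.tensor([0.5, -1.0, 2.0])  # hypothetical weights
b = torch.tensor(0.1)               # hypothetical bias
x = torch.tensor([1.0, 0.0, 0.5])   # a single 3-feature input
print(torch.sigmoid(w @ x + b))     # weighted sum, then sigmoid -> a value in (0, 1)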

In the PyTorch implementation, we define our own Perceptron class that inherits from nn.Module and implement the forward pass of the model in its forward method.
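
Instantiating the class registers the linear layer's weight and bias as learnable parameters; these are exactly the tensors the optimizer will update later (a minimal sketch):

PyTorch

model = Perceptron(input_dim=10)
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# fc1.weight (1, 10)
# fc1.bias (1,)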

We use the optim.SGD optimizer to update the weights of the model and the nn.BCELoss loss function to calculate the binary cross-entropy between the predicted probabilities and the true labels.
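
nn.BCELoss averages -[y·log(p) + (1 - y)·log(1 - p)] over the batch; a quick check against a manual computation with made-up values:

PyTorch

import torch
import torch.nn as nn

p = torch.tensor([0.8, 0.3])   # predicted probabilities
y = torch.tensor([1.0, 0.0])   # true labels
manual = -(y * torch.log(p) + (1 - y) * torch.log(1 - p)).mean()
print(manual.item(), nn.BCELoss()(p, y).item())  # both are approximately 0.2899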

We then train the model by running multiple epochs of forward and backward passes over the training data. In each epoch, we zero the accumulated gradients, compute the loss on each batch, backpropagate the error, and let the optimizer update the model parameters.
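
After training, the model can be evaluated by thresholding the predicted probabilities at 0.5 (a minimal sketch that reuses X and y from the example above; the 0.5 cutoff is a common default, not something fixed by the model):

PyTorch

model.eval()
with torch.no_grad():
    probs = model(torch.FloatTensor(X))   # probabilities for all samples
    preds = (probs > 0.5).float()         # threshold into class labels
    acc = (preds == torch.FloatTensor(y)).float().mean()
print('Accuracy: %.4f' % acc.item())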

Use

Perceptrons can be used in a wide range of applications, such as image and speech recognition and natural language processing; note, however, that a single perceptron can only learn linearly separable decision boundaries, which is why they are usually combined into multi-layer networks.

Important Points

  • A perceptron is a single-layer neural network that takes an input vector and produces a single output
  • In PyTorch, we define the perceptron as a Perceptron class that inherits from nn.Module
  • We use the optim.SGD optimizer to update the model's weights and nn.BCELoss to calculate the binary cross-entropy loss
  • We train the model by running multiple epochs of forward and backward passes over the training data
  • Perceptrons serve as building blocks in a wide range of applications, such as image and speech recognition and natural language processing

Summary

In this article, we have learned about perceptrons and how to train them using PyTorch. We defined the Perceptron class, the optimizer, and the loss function, and trained the model on a randomly generated dataset. Perceptrons can be used in a wide range of applications and form a fundamental building block of artificial neural networks.
