
Deep Learning - Perceptron in PyTorch


Syntax

import torch

class Perceptron(torch.nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        # single fully connected layer: input_dim features -> 1 output
        self.fc1 = torch.nn.Linear(input_dim, 1)

    def forward(self, x):
        out = self.fc1(x)          # weighted sum plus bias
        out = torch.sigmoid(out)   # squash to a probability in (0, 1)
        return out

Example

import torch

# the Perceptron class defined in the Syntax section is assumed to be in scope

# define input data
X = torch.tensor([
    [1, 0, 1],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1]
]).float()

# define target data
y = torch.tensor([
    [1],
    [1],
    [0],
    [0]
]).float()

# define perceptron model
input_dim = X.shape[1]
model = Perceptron(input_dim)

# define criteria and optimizer
criterion = torch.nn.BCELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# training loop
epochs = 1000
for epoch in range(epochs):
    # zero the gradients
    optimizer.zero_grad()

    # forward propagation
    y_pred = model(X)

    # calculate loss
    loss = criterion(y_pred, y)

    # backward propagation
    loss.backward()

    # update weights
    optimizer.step()

    # print loss every 100 epochs
    if (epoch+1) % 100 == 0:
        print(f"Epoch [{epoch+1}/{epochs}], Loss: {loss.item():.4f}")

# evaluate model
with torch.no_grad():
    y_pred = model(X)
    print(y_pred.round())

Output

Epoch [100/1000], Loss: 0.6650
Epoch [200/1000], Loss: 0.6315
Epoch [300/1000], Loss: 0.6028
Epoch [400/1000], Loss: 0.5776
Epoch [500/1000], Loss: 0.5549
Epoch [600/1000], Loss: 0.5340
Epoch [700/1000], Loss: 0.5147
Epoch [800/1000], Loss: 0.4967
Epoch [900/1000], Loss: 0.4800
Epoch [1000/1000], Loss: 0.4642

tensor([[ 1.],
        [ 1.],
        [ 0.],
        [-0.]])

Explanation

A Perceptron is a neural network with a single layer of output nodes and no hidden layers. It takes an input, computes a weighted sum, applies an activation function, and produces an output. It is used for binary classification problems, where the goal is to assign each input to one of two categories.
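
For intuition, the computation inside the model can be written out by hand. The sketch below uses made-up weight and bias values (w and b are illustrative, not learned) and mirrors what torch.nn.Linear followed by torch.sigmoid compute:

import torch

# illustrative (made-up) weight and bias values; in the model these are learned
w = torch.tensor([0.5, -0.3, 0.1])
b = torch.tensor(0.2)

x = torch.tensor([1.0, 0.0, 1.0])   # one input sample with 3 features

z = torch.dot(w, x) + b             # weighted sum plus bias (what Linear does)
prob = torch.sigmoid(z)             # squash to a probability in (0, 1)
pred = (prob > 0.5).float()         # threshold at 0.5 to pick the class
print(prob.item(), pred.item())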

In the given example, we create a binary classifier using a Perceptron in PyTorch. We define the input and target data and use PyTorch's built-in modules to create the Perceptron model. We also define the loss criterion and the optimizer for training: binary cross-entropy loss as the criterion and stochastic gradient descent as the optimizer.
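
As a side note, a common and numerically more stable variant, not used in the example above, is to return raw logits from forward and let torch.nn.BCEWithLogitsLoss fuse the sigmoid and the cross-entropy into one operation; a minimal sketch of that setup:

import torch

class PerceptronLogits(torch.nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_dim, 1)

    def forward(self, x):
        return self.fc1(x)  # raw logits; the loss applies the sigmoid internally

model = PerceptronLogits(3)
criterion = torch.nn.BCEWithLogitsLoss()  # sigmoid + BCE fused for stability
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)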

We then train the model for 1000 epochs, printing the loss every 100 epochs. The sigmoid activation turns the model's raw outputs into probabilities, which we round to obtain the predicted class.
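
After training, the same model can score an unseen sample. The snippet below is a sketch that assumes the trained model from the example is still in scope; x_new is a hypothetical 3-feature input:

# assumes the trained `model` from the example above is still in scope
x_new = torch.tensor([[1., 0., 0.]])  # hypothetical new sample with 3 features

with torch.no_grad():
    prob = model(x_new)             # sigmoid probability in (0, 1)
    label = (prob >= 0.5).float()   # threshold into class 0 or 1
print(f"probability={prob.item():.4f}, class={label.item():.0f}")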

Use

Perceptrons can be used for simple classification problems and can be extended to more complex ones by adding hidden layers, yielding Multi-Layer Perceptrons (MLPs). MLPs can be used for both regression and classification problems.
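
As a rough sketch of that extension, a two-layer MLP in PyTorch might look like this (the hidden size of 8 is an arbitrary choice):

import torch

class MLP(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim=8):  # hidden size chosen arbitrarily
        super().__init__()
        self.fc1 = torch.nn.Linear(input_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x):
        x = torch.relu(self.fc1(x))         # hidden layer with ReLU activation
        return torch.sigmoid(self.fc2(x))   # same sigmoid output as the Perceptron

model = MLP(input_dim=3)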

Important Points

  • A Perceptron is a neural network with a single layer of output nodes and no hidden layers
  • It can be used for binary classification problems
  • Binary cross-entropy loss and stochastic gradient descent are commonly used for training a Perceptron model
  • Perceptrons can be extended to more complex problems by adding more layers, called Multi-Layer Perceptrons (MLPs)

Summary

In summary, the Perceptron is a simple neural network architecture for binary classification problems. PyTorch provides the building blocks to create Perceptron models and train them with a choice of loss functions and optimizers. Adding hidden layers extends the Perceptron into a Multi-Layer Perceptron (MLP), which can handle more complex problems.
