Gradient Descent (Linear Regression with PyTorch)
Syntax
import torch
import torch.nn as nn
import torch.optim as optim
x_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
y_train = torch.tensor([[2.0], [4.0], [6.0], [8.0]])
model = nn.Linear(1, 1)
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)
for epoch in range(1000):
    y_pred = model(x_train)
    loss = criterion(y_pred, y_train)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Example
import torch
import torch.nn as nn
import torch.optim as optim
x_train = torch.tensor([[1.0], [2.0], [3.0], [4.0]])  # inputs
y_train = torch.tensor([[2.0], [4.0], [6.0], [8.0]])  # targets (y = 2x)

model = nn.Linear(1, 1)                             # one input, one output
criterion = nn.MSELoss()                            # mean squared error loss
optimizer = optim.SGD(model.parameters(), lr=0.01)  # plain SGD

for epoch in range(1000):
    y_pred = model(x_train)            # forward pass
    loss = criterion(y_pred, y_train)  # compute the loss
    optimizer.zero_grad()              # clear old gradients
    loss.backward()                    # backpropagate
    optimizer.step()                   # update the parameters

print(model(x_train))
Output
tensor([[1.9800],
        [3.9607],
        [5.9413],
        [7.9220]], grad_fn=<AddmmBackward>)
Explanation
Gradient Descent is an optimization algorithm used in Machine Learning for finding a local minimum of a differentiable function. In the context of Linear Regression, it is used to find the slope and intercept of the best-fit line.
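To make the update rule concrete, here is a minimal sketch of the same fit written with autograd alone, without nn.Linear or optim.SGD. The names w, b, and lr are illustrative, not part of the example above.

import torch

x = torch.tensor([1.0, 2.0, 3.0, 4.0])
y = torch.tensor([2.0, 4.0, 6.0, 8.0])

w = torch.zeros(1, requires_grad=True)    # slope
b = torch.zeros(1, requires_grad=True)    # intercept
lr = 0.01                                 # learning rate

for _ in range(1000):
    loss = ((w * x + b - y) ** 2).mean()  # mean squared error
    loss.backward()                       # compute d(loss)/dw and d(loss)/db
    with torch.no_grad():                 # update without tracking gradients
        w -= lr * w.grad
        b -= lr * b.grad
    w.grad.zero_()                        # reset gradients for the next step
    b.grad.zero_()

print(w.item(), b.item())                 # w approaches 2, b approaches 0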
In PyTorch, we can use the Gradient Descent algorithm to optimize the parameters of a linear regression model. We first define our model using the nn.Linear module, then we define our loss function using the nn.MSELoss module, and finally we define our optimizer using the optim.SGD class.
We then iterate over our data for a set number of epochs: computing the predicted output, calculating the loss, zeroing out the gradients with optimizer.zero_grad(), backpropagating with loss.backward(), and updating the parameters of our model with the optimizer.step() method.
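Assuming the training loop above has run, the learned slope and intercept can be read directly off the layer; for this data they should be close to 2 and 0.

print(model.weight.item(), model.bias.item())  # roughly 2.0 and 0.0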
Use
Gradient Descent can be used in a variety of Machine Learning tasks. In the context of Linear Regression with PyTorch, Gradient Descent is used to optimize the parameters of the model to minimize the difference between the predicted output and the target output.
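As a small usage sketch (assuming the trained model from the example above), the fitted model can be applied to inputs outside the training data; predictions should lie near y = 2x.

with torch.no_grad():                      # no gradients needed for inference
    x_new = torch.tensor([[5.0], [10.0]])  # unseen inputs
    print(model(x_new))                    # roughly [[10.], [20.]]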
Important Points
- Gradient Descent is an optimization algorithm used in Machine Learning
- In PyTorch, we can use Gradient Descent to optimize the parameters of a linear regression model
- We define our model using nn.Linear, our loss function using nn.MSELoss, and our optimizer using optim.SGD
- We iterate over our data for a set number of epochs: computing the predicted output, calculating the loss, zeroing out the gradients with optimizer.zero_grad(), backpropagating with loss.backward(), and updating the parameters with optimizer.step() (a manual equivalent is sketched after this list)
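To show what the two optimizer calls actually do, here is a sketch of one training step for plain SGD with the update written out by hand; it assumes the model, criterion, and data from the example above and matches its learning rate.

lr = 0.01                                  # same learning rate as above

for p in model.parameters():               # what optimizer.zero_grad() does
    if p.grad is not None:
        p.grad.zero_()

loss = criterion(model(x_train), y_train)  # forward pass and loss
loss.backward()                            # fills p.grad for each parameter

with torch.no_grad():                      # optimizer.step() for vanilla SGD
    for p in model.parameters():
        p -= lr * p.grad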
Summary
In conclusion, Gradient Descent is an optimization algorithm used in Machine Learning to find a local minimum of a function. In PyTorch, it lets us optimize the parameters of a linear regression model so that the difference between the predicted output and the target output is minimized.