
Hyperparameter Tuning (Image Classification with PyTorch)


Syntax

import torch
from torch import nn, optim
from torchvision import models

# Load a ResNet-50 pretrained on ImageNet
model = models.resnet50(pretrained=True)

# Freezing parameters so only the new classifier head is trained
for param in model.parameters():
    param.requires_grad = False

# Changing the classifier (ResNet-50's fc layer takes 2048 input features; 10 output classes)
model.fc = nn.Sequential(
    nn.Linear(2048, 512),
    nn.ReLU(),
    nn.Dropout(0.2),
    nn.Linear(512, 10),
    nn.LogSoftmax(dim=1))

# Defining hyperparameters: loss function and optimizer (only the new fc parameters are updated)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)

# Training the network (trainloader and testloader are assumed to be existing DataLoaders)
epochs = 10
steps = 0
train_losses, test_losses = [], []
for epoch in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        steps += 1
        optimizer.zero_grad()
        logps = model(images)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    # Evaluate on the test set after each epoch
    test_loss = 0
    accuracy = 0
    model.eval()
    with torch.no_grad():
        for images, labels in testloader:
            logps = model(images)
            test_loss += criterion(logps, labels).item()
            ps = torch.exp(logps)  # convert log-probabilities back to probabilities
            top_p, top_class = ps.topk(1, dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
    model.train()
    train_losses.append(running_loss / len(trainloader))
    test_losses.append(test_loss / len(testloader))
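
The snippet above assumes that trainloader and testloader already exist. A minimal sketch of how they might be built, assuming the CIFAR-10 dataset (its 10 classes match the final nn.Linear(512, 10) layer) and standard torchvision transforms:

from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Resize and normalize with ImageNet statistics, as expected by the pretrained ResNet-50
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.CIFAR10(root='data', train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root='data', train=False, download=True, transform=transform)

# Batch size is itself a tunable hyperparameter
trainloader = DataLoader(train_set, batch_size=64, shuffle=True)
testloader = DataLoader(test_set, batch_size=64, shuffle=False)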

Example

# Defining hyperparameters (model, trainloader and testloader are defined as in the Syntax section)
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
# Decay the learning rate by a factor of 0.1 every 3 epochs
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

# Training the network
epochs = 10
steps = 0
train_losses, test_losses = [], []
for epoch in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        steps += 1
        optimizer.zero_grad()
        logps = model(images)
        loss = criterion(logps, labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()

    # Step the scheduler once per epoch, after the optimizer updates
    scheduler.step()

    # Evaluate on the test set after each epoch
    test_loss = 0
    accuracy = 0
    model.eval()
    with torch.no_grad():
        for images, labels in testloader:
            logps = model(images)
            test_loss += criterion(logps, labels).item()
            ps = torch.exp(logps)
            top_p, top_class = ps.topk(1, dim=1)
            equals = top_class == labels.view(*top_class.shape)
            accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
    model.train()
    train_losses.append(running_loss / len(trainloader))
    test_losses.append(test_loss / len(testloader))

Output

The code above does not print anything on its own; it records the average training and test loss for each epoch in train_losses and test_losses. With the tuned hyperparameters and the learning rate schedule in place, the training loss should decrease and the test accuracy should improve from epoch to epoch.
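
To see these numbers during training, a print statement can be added at the end of each epoch (a minimal sketch, reusing the variables from the Example block):

# Inside the epoch loop, after updating train_losses and test_losses
print(f"Epoch {epoch + 1}/{epochs}.. "
      f"Train loss: {train_losses[-1]:.3f}.. "
      f"Test loss: {test_losses[-1]:.3f}.. "
      f"Test accuracy: {accuracy / len(testloader):.3f}")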

Explanation

Hyperparameter tuning is an important process in machine learning that involves adjusting the settings that are not learned during training, such as the learning rate, batch size, dropout probability, and number of epochs, to improve a model's performance. In PyTorch, these hyperparameters are typically defined at the beginning of the training script and adjusted between runs, or over the course of training in the case of the learning rate.
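
For example, the hyperparameters can be grouped at the top of the training script so they are easy to adjust between runs (a minimal sketch; the names and values are illustrative and not part of the example above):

# Hyperparameters collected in one place for easy tuning
config = {
    "learning_rate": 0.003,   # step size for the Adam optimizer
    "batch_size": 64,         # number of images per training batch
    "epochs": 10,             # number of passes over the training set
    "dropout": 0.2,           # dropout probability in the classifier head
}

optimizer = optim.Adam(model.fc.parameters(), lr=config["learning_rate"])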

In the example above, we define the hyperparameters for a pretrained ResNet-50 image classifier, including the loss function, the optimizer, and the learning rate. We then attach a StepLR learning rate scheduler that lowers the learning rate by a factor of 10 every 3 epochs. Gradually decreasing the learning rate helps the model converge to a better solution and reduces the risk of unstable, divergent updates late in training.
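
The resulting schedule can be inspected with the scheduler's get_last_lr() method, as in this small sketch:

optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

for epoch in range(10):
    # ... one epoch of training with optimizer.step() would go here ...
    scheduler.step()
    print(epoch, scheduler.get_last_lr())  # the learning rate drops by 10x every 3 scheduler steps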

Use

Hyperparameter tuning is useful for improving the performance of machine learning models, especially for complex tasks like image classification. By adjusting hyperparameters such as the learning rate, batch size, and number of epochs, we can improve both the final accuracy and the convergence speed of the model.
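
A simple way to tune a single hyperparameter is to train the model several times with different values and keep the best one. Below is a minimal sketch of a learning-rate sweep, assuming a train_and_evaluate helper (hypothetical, not defined in this article) that trains the model with the given learning rate and returns its test accuracy:

# Try several learning rates and keep the one with the best test accuracy
best_lr, best_accuracy = None, 0.0
for lr in [0.01, 0.003, 0.001, 0.0003]:
    accuracy = train_and_evaluate(lr)  # hypothetical helper: trains and returns test accuracy
    if accuracy > best_accuracy:
        best_lr, best_accuracy = lr, accuracy

print(f"Best learning rate: {best_lr} (accuracy {best_accuracy:.3f})")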

Important Points

  • Hyperparameter tuning means adjusting the settings that are not learned during training, such as the learning rate, batch size, and number of epochs, to improve model performance
  • In PyTorch, hyperparameters are defined when the loss function, optimizer, and scheduler are created at the start of training, and can be adjusted between runs or over the course of training
  • Learning rate scheduling is a useful technique for improving model convergence; an alternative scheduler is sketched below
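
Besides StepLR, PyTorch ships several other schedulers. One common alternative is ReduceLROnPlateau, which lowers the learning rate only when a monitored metric stops improving. A minimal sketch, reusing the model, epochs, and test_losses from the Example block:

optimizer = optim.Adam(model.fc.parameters(), lr=0.003)
# Halve the learning rate if the test loss has not improved for 2 epochs
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min',
                                                 factor=0.5, patience=2)

for epoch in range(epochs):
    # ... training and evaluation as in the Example block ...
    scheduler.step(test_losses[-1])  # pass the metric being monitored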

Summary

In conclusion, hyperparameter tuning is a useful technique for improving the performance of machine learning models in PyTorch. Adjusting hyperparameters such as the learning rate can improve both the accuracy and the convergence speed of a model, and learning rate scheduling is a particularly effective way to improve convergence over time.
