Linear regression is an approach that tries to find a linear relationship between a dependent variable and an independent variable by minimizing the distance between the predicted and the actual values, as shown below.
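
The original figure isn't reproduced here, but in standard notation the idea is the following: for each data point the model predicts a value from a line, and training chooses the line that minimizes the mean squared error between predictions and targets.

\hat{y}_i = w x_i + b, \qquad \min_{w,\, b} \; \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2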


In this post, I’ll show how to implement a simple linear regression model using PyTorch.

Let’s consider a very basic linear equation, i.e., y = 2x + 1. Here, ‘x’ is the independent variable and ‘y’ is the dependent variable. We’ll use this equation to create a dummy dataset, which will be used to train this linear regression model. Following is the code for creating the dataset.
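
Since the original snippet isn't reproduced here, the following is a minimal sketch of what it could look like: it builds NumPy arrays for x and the corresponding y = 2x + 1 values and reshapes them into column vectors (the names x_train and y_train are my own).

```python
import numpy as np

# Dummy data for y = 2x + 1: x values from 0 to 10 and the matching y values.
# Reshaped to (n_samples, 1) because PyTorch's nn.Linear expects 2-D input.
x_values = np.arange(11, dtype=np.float32)
x_train = x_values.reshape(-1, 1)

y_values = 2 * x_values + 1
y_train = y_values.reshape(-1, 1)
```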


Once we have created the dataset, we can start writing the code for our model. First thing will be to define the model architecture. We do that using the following piece of code.
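
A sketch of such a model definition might look like the following (the class name LinearRegression and the constructor arguments input_dim/output_dim are my choices for illustration).

```python
import torch

class LinearRegression(torch.nn.Module):
    def __init__(self, input_dim, output_dim):
        super().__init__()
        # A single linear layer: y = Wx + b
        self.linear = torch.nn.Linear(input_dim, output_dim)

    def forward(self, x):
        # Forward pass is just the linear transformation.
        return self.linear(x)
```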


We defined a class for linear regression that inherits from torch.nn.Module, the base neural network module containing all the required functionality. Our linear regression model contains only a single linear layer.

Next, we instantiate the model using the following code.
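
Assuming the class sketched above, instantiation could look like this, with one input and one output dimension since we have a single independent and a single dependent variable.

```python
input_dim = 1   # one independent variable (x)
output_dim = 1  # one dependent variable (y)

model = LinearRegression(input_dim, output_dim)
```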


After that, we initialize the loss (Mean Squared Error) and optimization (Stochastic Gradient Descent) functions that we’ll use in the training of this model.
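
A typical setup, using the standard PyTorch APIs, would be torch.nn.MSELoss for the loss and torch.optim.SGD for the optimizer; the learning rate of 0.01 below is an illustrative choice.

```python
criterion = torch.nn.MSELoss()   # Mean Squared Error loss

learning_rate = 0.01
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)  # Stochastic Gradient Descent
```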


After completing all the initializations, we can now begin to train our model. Following is the code for training the model.
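
A minimal training loop, assuming the x_train/y_train arrays and the criterion/optimizer defined above, could look like this (the number of epochs is an arbitrary illustrative value).

```python
epochs = 100
for epoch in range(epochs):
    # Convert the NumPy arrays into tensors the model can consume.
    inputs = torch.from_numpy(x_train)
    labels = torch.from_numpy(y_train)

    # Clear gradients accumulated from the previous step.
    optimizer.zero_grad()

    # Forward pass, loss computation, backward pass, parameter update.
    outputs = model(inputs)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()

    print(f"epoch {epoch}, loss {loss.item()}")
```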

Now that our linear regression model is trained, let’s test it. Since it’s a very trivial model, we’ll test it on our existing dataset and also plot the original vs. the predicted outputs.
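
One way to do this, assuming matplotlib is available, is to run the trained model on the training inputs under torch.no_grad() and plot both series.

```python
import matplotlib.pyplot as plt

# Run the trained model on the training inputs without tracking gradients.
with torch.no_grad():
    predicted = model(torch.from_numpy(x_train)).numpy()

plt.plot(x_train, y_train, 'go', label='Original data', alpha=0.5)
plt.plot(x_train, predicted, '--', label='Predictions', alpha=0.5)
plt.legend()
plt.show()
```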

Running this produces a plot of the original data points alongside the model’s predictions.

Looks like our model has correctly figured out the linear relation between our dependent and independent variables.

If you have understood this, you should try training a linear regression model on a slightly more complex linear equation with multiple independent variables.
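
As a hint, the only structural change needed for multiple independent variables is the input dimension of the linear layer; for example, with two features the model sketched earlier could be instantiated as below, and the dummy dataset would need a second column in x_train.

```python
# With two independent variables, only the input dimension changes.
model = LinearRegression(input_dim=2, output_dim=1)
```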