Linear regression using gradient descent

In this post we will implement the gradient descent algorithm for linear regression from scratch using NumPy.
Categories: gradient descent, machine learning

Author: Vidyasagar Bhargava

Published: December 10, 2021

Simple Linear Regression

A simple linear regression equation with one feature is defined as:

\[y = b + w * x + \epsilon\]

Here w is the coefficient (slope), b is the intercept term, and \(\epsilon\) is the noise.

Gradient Descent

Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model. Parameters refer to coefficients in Linear Regression and weights in neural networks.
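As a minimal illustration of the idea in plain Python (the function \(f(x) = (x-3)^2\), the starting point, and the step size below are made up for this sketch, not part of the model we build later):

x = 0.0                       # starting point (arbitrary)
step_size = 0.1               # learning rate / step size (assumed value)
for _ in range(100):
    grad = 2 * (x - 3)        # derivative of f(x) = (x - 3)^2
    x = x - step_size * grad  # step in the negative gradient direction
print(x)                      # converges towards the minimum at x = 3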

Steps to implement gradient descent
Step 1 - Compute the loss
Step 2 - Compute the gradient
Step 3 - Update the parameters
Step 4 - Repeat Step 1 to 3

Step 1 :- Compute the loss

For a regression problem, the loss is given by the mean squared error (MSE).

\[ MSE = \frac{1}{N} \sum\limits_{i=1}^{N} (y_i-\hat{y_i})^2 \]

\[ MSE = \frac{1}{N} \sum\limits_{i=1}^{N} (y_i-b-wx_i)^2 \]
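A minimal NumPy sketch of this step (the toy values of x, y, w and b below are purely illustrative, not the data used later in the post):

import numpy as np

x = np.array([0.0, 1.0, 2.0])     # toy feature values
y = np.array([1.0, 3.2, 4.9])     # toy targets
w, b = 0.5, 0.0                   # current parameter guesses

yhat = b + w * x                  # predictions for the current w and b
mse = ((y - yhat) ** 2).mean()    # Step 1: mean squared error
print(mse)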

Step 2 :- Compute the gradient

The gradient is a vector whose components are the function’s partial derivatives.

\[ \nabla MSE = [\frac{\partial MSE}{\partial w}, \frac{\partial MSE}{\partial b}] \]

It also points in the direction of steepest ascent, i.e. the direction in which one should step to increase the function most quickly.

Note

A derivative tells us how much a given quantity changes when we slightly change some other quantity.

In our case we are interested in how much the MSE loss changes when we vary one of our two parameters.

\[ \frac{\partial MSE}{\partial w} = \frac{\partial MSE}{\partial \hat{y}}.\frac{\partial \hat{y}}{\partial w} = \frac{1}{N} \sum\limits_{i=1}^{N} 2(y_i-b-wx_i).(-x_i) = -2\frac{1}{N} \sum\limits_{i=1}^{N}(x_i)(y_i-\hat{y_i}) \]

\[ \frac{\partial MSE}{\partial b} = \frac{\partial MSE}{\partial \hat{y}}.\frac{\partial \hat{y}}{\partial b} = \frac{1}{N} \sum\limits_{i=1}^{N} 2(y_i-b-wx_i).(-1) = -2\frac{1}{N} \sum\limits_{i=1}^{N}(y_i-\hat{y_i}) \]
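In NumPy these two partial derivatives translate almost line for line (continuing the same illustrative toy setup as the Step 1 sketch):

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.2, 4.9])
w, b = 0.5, 0.0
yhat = b + w * x

error = y - yhat                    # (y_i - yhat_i)
w_grad = -2 * (x * error).mean()    # dMSE/dw
b_grad = -2 * error.mean()          # dMSE/db
print(w_grad, b_grad)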

Step 3 :- Update the Parameters

Now we update the parameters using the gradients to minimize the loss. Since the gradient points in the direction of steepest ascent, we multiply it by -1 (i.e. step in the opposite direction), scaling each step by the learning rate \(\eta\).

\[ w = w - \eta\frac{\partial MSE}{\partial w} \]
\[ b = b - \eta\frac{\partial MSE}{\partial b} \]
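Putting Steps 1 to 3 together, a single update on the toy data could look like the sketch below (eta is an assumed learning rate; the implementation section further down wraps the same computation into a function and runs it on real synthetic data):

import numpy as np

x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 3.2, 4.9])
w, b = 0.5, 0.0
eta = 0.1                          # learning rate (assumed value)

yhat = b + w * x                   # Step 1: predictions and error
error = y - yhat
w_grad = -2 * (x * error).mean()   # Step 2: gradients
b_grad = -2 * error.mean()
w = w - eta * w_grad               # Step 3: parameter update
b = b - eta * b_grad
print(w, b)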

Step 4:- Repeat

Now we use the updated parameters and start with Step 1 again. We repeat this process for multiple epochs. This is also known as training the model.

An epoch is completed when all the data points have been used for calculating the loss.



Gradient Descent Variants

  • Batch Gradient Descent
  • Stochastic Gradient Descent
  • Mini Batch Gradient Descent

If we use all N points in the training set to compute the loss, it is batch gradient descent. If we use a single data point to update the parameters, it is stochastic gradient descent. Anything between 1 and N is mini-batch gradient descent.
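A rough sketch of how the three variants differ only in how many points feed each update (the function name, the batch_size argument and the random sampling below are assumptions for illustration, not code from this post):

import numpy as np

rng = np.random.default_rng(0)

def gradient_step(x, y, w, b, lr, batch_size):
    # batch_size = N       -> batch gradient descent
    # batch_size = 1       -> stochastic gradient descent
    # 1 < batch_size < N   -> mini-batch gradient descent
    idx = rng.choice(len(x), size=batch_size, replace=False)
    xb, yb = x[idx], y[idx]
    error = yb - (b + w * xb)
    w = w - lr * (-2 * (xb * error).mean())
    b = b - lr * (-2 * error.mean())
    return w, b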

Implementing Linear Regression using batch gradient descent

Now it’s time to implement our linear regression model using batch gradient descent.

Generate Synthetic data

Using the above equation, we will generate some synthetic data with w = 2, b = 1, and some random noise.

import numpy as np
np.random.seed(42)

x = np.random.randn(100,1)               # 100 samples of a single feature
y = 1 + 2*x + np.random.randn(100,1)     # y = b + w*x + noise, with b = 1 and w = 2

Our goal will be to accurately predict the value of w (i.e. 2) and b (i.e. 1).

Initialize parameters (w and b) and hyperparameter (learning_rate)

We will initialize the learnable parameters w and b with random values and try to find the right values by minimizing the loss function. The value of the hyperparameter, i.e. the learning rate, is kept fixed.

w = np.random.randn(1)   # random initial value for the coefficient
b = np.random.randn(1)   # random initial value for the intercept
lr = 0.001               # learning rate (hyperparameter)

Gradient Descent algorithm

def gradient_descent(x, y, w, b, lr):
    
    #compute the loss
    yhat =  b + w*x
    error = (y-yhat)
    loss  = (error**2).mean()   # MSE; not used in the update, but useful for monitoring
    
    #compute the gradients
    b_grad = -2*error.mean()
    w_grad = -2*(x*error).mean()
    
    #update the parameters
    b = b - lr * b_grad
    w = w - lr * w_grad
    return w,b
# implementing multiple epochs 

for epoch in range(5000):
    w,b = gradient_descent(x, y, w, b, lr)
print(b,w)
[1.00716235] [1.85616176]

Let’s compare our output with scikit-learn’s Linear Regression

from sklearn.linear_model import LinearRegression
reg = LinearRegression()   # use a new name so we don't overwrite the learning rate lr
reg.fit(x, y)
print(reg.intercept_, reg.coef_[0])
[1.00742783] [1.85674284]

Our result matches up to a few decimal places, which means we have correctly implemented our batch gradient descent algorithm.
