
Building RNN from scratch using Python

In this project, I coded the entire architecture of a Recurrent Neural Network, including forward propagation, backpropagation through time (BPTT), SGD optimization, and the tanh() activation.

Technologies Used

Python, NumPy, Matplotlib

What is this project about?

We often hear that ML models are "black boxes" that are quite difficult to comprehend. Building an RNN from scratch is my effort to open this very black box and understand what actually powers ML models. In other words, if an ML application is a car, the ML model is its engine. With this project I showcase my ability to build that engine, the most important part.

This project is a raw, educational implementation of a Recurrent Neural Network (RNN) built entirely from scratch using Python and NumPy. It relies on no deep learning libraries (like PyTorch or TensorFlow), forcing the code to handle every mathematical operation manually.

The core objective is to demonstrate how a machine can learn from sequential data (time-series, in this case) by predicting future values based on past patterns.
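To make the idea concrete, here is a minimal sketch of the forward pass of a simple RNN cell over a short sequence, using only NumPy. The variable names and sizes are illustrative, not the project's actual code:

```python
import numpy as np

# One RNN step: h_t = tanh(Wx @ x_t + Wh @ h_{t-1} + b)
rng = np.random.default_rng(0)
input_size, hidden_size, seq_len = 3, 5, 4

Wx = rng.standard_normal((hidden_size, input_size)) * 0.1   # input -> hidden
Wh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden -> hidden
b = np.zeros((hidden_size, 1))

xs = [rng.standard_normal((input_size, 1)) for _ in range(seq_len)]
h = np.zeros((hidden_size, 1))  # initial hidden state

hs = []
for x in xs:
    # tanh squashes each hidden unit into (-1, 1)
    h = np.tanh(Wx @ x + Wh @ h + b)
    hs.append(h)

print(h.shape)  # final hidden state, one column per sample
```

The same hidden state `h` is fed back in at every step, which is exactly what lets the network carry information about past inputs forward in time.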

Why does it matter?

A. Most developers only ever type `import torch.nn as nn` and use `nn.RNN`. They know what it does, but not how. By building the model myself, I have gone a step further in understanding the underlying architecture, which strengthens my ability to build novel models that improve on their predecessors.

B. Beyond the mathematics and theory, implementing the model has helped me build an intuition for crucial concepts, including backpropagation through time (BPTT).
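The core of BPTT can be sketched in a few lines: cache the hidden states during the forward pass, then walk the sequence backwards, accumulating gradients for the shared weights at every time step. This is a hedged illustration with hypothetical names and a fake upstream gradient standing in for a real loss:

```python
import numpy as np

rng = np.random.default_rng(1)
I, H, T = 3, 5, 4
Wx = rng.standard_normal((H, I)) * 0.1
Wh = rng.standard_normal((H, H)) * 0.1
b = np.zeros((H, 1))

xs = [rng.standard_normal((I, 1)) for _ in range(T)]
hs = [np.zeros((H, 1))]              # hs[0] is the initial state
for x in xs:                         # forward pass, caching every h_t
    hs.append(np.tanh(Wx @ x + Wh @ hs[-1] + b))

dWx, dWh, db = np.zeros_like(Wx), np.zeros_like(Wh), np.zeros_like(b)
dh_next = np.zeros((H, 1))           # gradient carried back from step t+1
for t in reversed(range(T)):
    dh = rng.standard_normal((H, 1)) + dh_next  # fake loss grad + carried grad
    dz = (1 - hs[t + 1] ** 2) * dh   # tanh'(z) = 1 - tanh(z)^2
    dWx += dz @ xs[t].T              # same weights used at every step,
    dWh += dz @ hs[t].T              # so gradients accumulate over time
    db += dz
    dh_next = Wh.T @ dz              # flows back to h_{t-1}
```

The repeated multiplication by `Wh.T` in `dh_next` is also why plain RNNs suffer from vanishing and exploding gradients over long sequences.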

C. Keeping track of the dimensions of every parameter, and still hitting dimension mismatches during operations like matrix products, sharpened my intuition about data shapes and tensor operations.
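A tiny example of the kind of shape bookkeeping this involves (sizes are illustrative): the weight matrices must be oriented so that the inner dimensions of each product line up, and getting it wrong raises immediately.

```python
import numpy as np

# Shapes for one hidden update h = tanh(Wx @ x + Wh @ h)
Wx = np.zeros((5, 3))  # (hidden, input)
Wh = np.zeros((5, 5))  # (hidden, hidden)
x = np.zeros((3, 1))   # input as a column vector
h = np.zeros((5, 1))   # hidden state as a column vector

assert (Wx @ x).shape == (5, 1)  # inner dims 3 and 3 align
assert (Wh @ h).shape == (5, 1)  # inner dims 5 and 5 align

# A common mistake: multiplying in the wrong order.
try:
    x @ Wx  # (3, 1) @ (5, 3): inner dims 1 and 5 do not align
except ValueError as e:
    print("dimension mismatch:", e)
```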