Welcome to the world of Neograd, a deep learning framework built from the ground up using Python and NumPy! This framework serves as an educational tool designed to help you understand the intricacies of automatic differentiation and neural network training in a clear and accessible way.
Installation
To embark on your journey with Neograd, the first step is installation. You can easily install Neograd using pip:
pip install neograd
Motivation Behind Neograd
The motivation for developing Neograd stems from a desire to gain a deeper understanding of deep learning frameworks such as TensorFlow and PyTorch. The complex codebases of these popular frameworks can be overwhelming, but with Neograd, you can easily grasp the underlying principles of automatic differentiation and neural network implementation.
Key Features of Neograd
- Automatic Differentiation: Implements automatic differentiation for essential operations, making gradient calculations simple.
- Custom Autograd Operations: Easily define your operations with a simple interface.
- Gradient Checking: Debug models to ensure proper gradient propagation.
- Highly Customizable: Create your own layers, optimizers, and loss functions.
- PyTorch-like API: Offers an elegant and intuitive interface similar to PyTorch.
- Neural Network Module: Includes common optimizers, activations, and loss functions needed for training.
- Model Checkpointing: Save model weights during training so progress isn't lost if training is interrupted.
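To build intuition for the first feature, here is a toy sketch of reverse-mode automatic differentiation: a minimal `Value` class that records how each result was computed and walks the chain rule backward. This is purely illustrative and is not Neograd's actual implementation.

```python
# Toy sketch of reverse-mode autodiff (not Neograd's code).
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents  # (parent, local_gradient) pairs

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, parents=((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data,
                     parents=((self, other.data), (other, self.data)))

    def backward(self, grad=1.0):
        # Chain rule: accumulate upstream grad times each local gradient
        self.grad += grad
        for parent, local_grad in self._parents:
            parent.backward(grad * local_grad)

x = Value(2.0)
y = Value(3.0)
z = x * y + x        # z = x*y + x, so dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

Frameworks like Neograd apply the same idea to whole NumPy arrays, with each operation recording how to propagate gradients to its inputs.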
Example Code: Training a Simple Neural Network
Let’s walk through an analogy to explain the provided code. Imagine Neograd as a set of very organized building blocks where each block has a specific function—just like how each layer of a neural network works to convert inputs into outputs. You will stack these blocks together to create a robust structure capable of learning from data.
In the example, our code builds a neural network using these blocks:
import neograd as ng
from neograd import nn
import numpy as np
from neograd.nn.loss import BCE
from neograd.nn.optim import Adam
from neograd.autograd.utils import grad_check
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
# Load dataset (binary classification problem)
X, y = make_circles(n_samples=1000, noise=0.05, random_state=100)
X_train, X_test, y_train, y_test = train_test_split(X, y)
num_train = 750 # Number of train examples
num_test = 250 # Number of test examples
num_iter = 50 # Number of training iterations
# Convert data into tensors
X_train, X_test = ng.tensor(X_train[:num_train, :]), ng.tensor(X_test[:num_test, :])
y_train, y_test = ng.tensor(y_train[:num_train].reshape(num_train, 1)), ng.tensor(y_test[:num_test].reshape(num_test, 1))
# Define the structure of your neural net
class NN(nn.Model):
    def __init__(self):
        self.stack = nn.Sequential(
            nn.Linear(2, 100),
            nn.ReLU(),
            nn.Linear(100, 1),
            nn.Sigmoid()
        )
    def forward(self, inputs):
        return self.stack(inputs)
model = NN() # Initialize a model
loss_fn = BCE() # Initialize a loss function (Binary Cross Entropy)
optim = Adam(model.parameters(), 0.05) # Initialize an optimizer with learning rate 0.05
# Training loop
for i in range(num_iter):
    optim.zero_grad() # Zero out gradients
    outputs = model(X_train) # Get outputs
    loss = loss_fn(outputs, y_train) # Calculate loss
    loss.backward() # Backward pass
    optim.step() # Update parameters
    print(f'Iteration {i + 1}/{num_iter}, Loss: {loss}')
with model.eval(): # Evaluation mode
    test_outputs = model(X_test) # Outputs on test data
    preds = np.where(test_outputs.data >= 0.5, 1, 0) # Make predictions
print(classification_report(y_test.data.astype(int).flatten(), preds.flatten()))
print(accuracy_score(y_test.data.astype(int).flatten(), preds.flatten()))
grad_check(model, X_train, y_train, loss_fn) # Perform gradient checking
In this code, the network follows a structured flow: data preparation, the training loop, and finally evaluation of the model's performance, with each step building on the last like stacking blocks.
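If you are curious what the `BCE` loss in the example actually computes, here is the binary cross-entropy formula written out in plain NumPy. This is a sketch of the standard formula, not Neograd's implementation, and the function name is illustrative.

```python
import numpy as np

# Binary cross-entropy: the quantity a loss like BCE() minimizes.
def binary_cross_entropy(probs, targets, eps=1e-9):
    probs = np.clip(probs, eps, 1 - eps)  # avoid log(0)
    return -np.mean(targets * np.log(probs)
                    + (1 - targets) * np.log(1 - probs))

probs = np.array([0.9, 0.2, 0.8])    # sigmoid outputs from the network
targets = np.array([1.0, 0.0, 1.0])  # true labels
loss = binary_cross_entropy(probs, targets)
```

The loss is small when the predicted probabilities agree with the labels and grows rapidly as confident predictions turn out wrong, which is exactly the pressure that drives the sigmoid outputs toward the correct class during training.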
Troubleshooting
If you run into issues while using Neograd, consider the following troubleshooting tips:
- Ensure all dependencies are installed correctly. Running pip install neograd again can help check for any missing packages.
- Check for any syntax errors in your code. Copying and pasting can sometimes introduce hidden characters.
- When debugging gradients, use the provided grad_check function to verify that gradients are computed accurately.
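The idea behind gradient checking is simple: compare the analytic gradient against a central-difference estimate of the same derivative. Here is a minimal NumPy sketch of that idea; the function names are illustrative and this is not Neograd's `grad_check` internals.

```python
import numpy as np

def f(w):
    return np.sum(w ** 2)  # a simple scalar loss

def analytic_grad(w):
    return 2 * w           # known gradient of sum(w^2)

def numeric_grad(f, w, eps=1e-5):
    # Central differences: perturb each weight up and down by eps
    grad = np.zeros_like(w)
    for i in range(w.size):
        w_plus, w_minus = w.copy(), w.copy()
        w_plus[i] += eps
        w_minus[i] -= eps
        grad[i] = (f(w_plus) - f(w_minus)) / (2 * eps)
    return grad

w = np.array([1.0, -2.0, 3.0])
diff = np.max(np.abs(analytic_grad(w) - numeric_grad(f, w)))
# diff should be tiny if the analytic gradient is correct
```

If the two gradients disagree by more than a small tolerance, there is likely a bug in the backward pass of one of your operations.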
For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.
Conclusion
At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.

