NanoNeuron: Understanding Machine Learning with JavaScript

Jun 24, 2022 | Data Science

If you’ve ever wondered how machines learn, the NanoNeuron project offers an exciting glimpse into this fascinating world using just a handful of simple JavaScript functions. By implementing an over-simplified model of a neuron, we can train our NanoNeuron to convert temperatures from Celsius to Fahrenheit, revealing the mathematical principles behind machine learning.

What is NanoNeuron?

At its core, the NanoNeuron serves as a stepping stone to understanding how neural networks operate. Our simplified version uses pure JavaScript—no third-party libraries or dependencies—to demonstrate fundamental concepts such as model prediction, cost calculation, and training through straightforward functions. The goal is to provide a basic understanding of machine learning and to dispel the notion that it’s all just ‘magic’.

How Does NanoNeuron Work?

The NanoNeuron learns the way a child learns to draw a straight line on a graph: by adjusting its two parameters, the weight w and the bias b, until its output matches known examples. Let’s take a look at how this works in practice:

function NanoNeuron(w, b) {
    this.w = w; 
    this.b = b; 
    this.predict = (x) => {
        return x * this.w + this.b; 
    };
}

In this analogy, ‘w’ (weight) represents the slope of the line, while ‘b’ (bias) is the y-intercept, shifting the line up or down. Just as a child learns to draw the right line by adjusting these two knobs, our NanoNeuron learns to mimic the linear relationship between Celsius and Fahrenheit temperatures.
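To make the analogy concrete, here is a NanoNeuron created with arbitrary starting values (w = 0.5 and b = 10, picked purely for illustration). It already draws a straight line; training is just the process of nudging that line toward the right one:

```javascript
// NanoNeuron as defined above.
function NanoNeuron(w, b) {
    this.w = w;
    this.b = b;
    this.predict = (x) => x * this.w + this.b;
}

// Arbitrary starting parameters: the line exists,
// it just has the wrong slope and intercept.
const untrained = new NanoNeuron(0.5, 10);
console.log(untrained.predict(0));   // 10 (0 * 0.5 + 10)
console.log(untrained.predict(100)); // 60 (100 * 0.5 + 10)
```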

Celsius to Fahrenheit Conversion

The conversion from Celsius to Fahrenheit follows the formula f = 1.8 * c + 32. Here’s how we implement this in JavaScript:

function celsiusToFahrenheit(c) {
    const w = 1.8; 
    const b = 32; 
    const f = c * w + b; 
    return f; 
}

We aim to teach our NanoNeuron to imitate this function: to recover the values of ‘w’ and ‘b’ on its own, without any prior knowledge of them.
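Two well-known reference points make for a quick sanity check of the formula: water freezes at 0°C (32°F) and boils at 100°C (212°F):

```javascript
function celsiusToFahrenheit(c) {
    const w = 1.8;
    const b = 32;
    return c * w + b;
}

console.log(celsiusToFahrenheit(0));   // 32  (freezing point of water)
console.log(celsiusToFahrenheit(100)); // 212 (boiling point of water)
```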

Generating Data Sets for Training

Before we can train our NanoNeuron, we need to create datasets for both training and testing. This data consists of input-output pairs based on our conversion function:

function generateDataSets() {
    const xTrain = [], yTrain = [];
    for (let x = 0; x <= 100; x += 1) {
        const y = celsiusToFahrenheit(x); 
        xTrain.push(x); 
        yTrain.push(y);
    }

    const xTest = [], yTest = [];
    for (let x = 0.5; x < 100; x += 1) {
        const y = celsiusToFahrenheit(x); 
        xTest.push(x); 
        yTest.push(y);
    }
    return [xTrain, yTrain, xTest, yTest];
}
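Note that the test inputs start at 0.5 and step by 1, so the test set contains only values the NanoNeuron never saw during training—a simple way to check that it generalizes rather than memorizes. Running the generator (condensed slightly from the definition above, with celsiusToFahrenheit from earlier) yields 101 training examples and 100 test examples:

```javascript
function celsiusToFahrenheit(c) {
    return c * 1.8 + 32;
}

// Condensed version of generateDataSets from above.
function generateDataSets() {
    const xTrain = [], yTrain = [];
    for (let x = 0; x <= 100; x += 1) {
        xTrain.push(x);
        yTrain.push(celsiusToFahrenheit(x));
    }
    const xTest = [], yTest = [];
    for (let x = 0.5; x < 100; x += 1) {
        xTest.push(x);
        yTest.push(celsiusToFahrenheit(x));
    }
    return [xTrain, yTrain, xTest, yTest];
}

const [xTrain, yTrain, xTest, yTest] = generateDataSets();
console.log(xTrain.length); // 101 (0, 1, ..., 100)
console.log(xTest.length);  // 100 (0.5, 1.5, ..., 99.5)
```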

Calculating Prediction Costs

To assess how well our NanoNeuron is learning, we need to evaluate the cost of its predictions using a cost function based on the difference between predicted and actual values:

function predictionCost(y, prediction) {
    return (y - prediction) ** 2 / 2;
}

This function helps us quantify how far off our predictions are, guiding the adjustments we need to make during training.
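A couple of quick examples show how this cost behaves: a perfect prediction costs nothing, and because the error is squared, over- and under-shooting by the same amount cost the same:

```javascript
function predictionCost(y, prediction) {
    return (y - prediction) ** 2 / 2;
}

console.log(predictionCost(212, 212)); // 0  (perfect prediction)
console.log(predictionCost(212, 202)); // 50 (10 degrees low: 10 ** 2 / 2)
console.log(predictionCost(212, 222)); // 50 (10 degrees high: same cost)
```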

Forward and Backward Propagation: The Learning Process

The processes of forward propagation and backward propagation are crucial for our NanoNeuron to learn. Forward propagation allows it to make predictions and track costs, while backward propagation helps it adjust its parameters for better predictions.

function forwardPropagation(model, xTrain, yTrain) {
    const m = xTrain.length; 
    const predictions = [];
    let cost = 0; 
    for (let i = 0; i < m; i += 1) {
        const prediction = model.predict(xTrain[i]); 
        cost += predictionCost(yTrain[i], prediction); 
        predictions.push(prediction); 
    }
    cost /= m; 
    return [predictions, cost];
}

After calculating the cost, we implement backward propagation to work out how the parameters should change. One subtlety: dW and dB below are the derivatives of the cost with their sign flipped (they point ‘downhill’), which is why the training step adds them to w and b, scaled by the learning rate, rather than subtracting.

function backwardPropagation(predictions, xTrain, yTrain) {
    const m = xTrain.length; 
    let dW = 0; 
    let dB = 0; 
    for (let i = 0; i < m; i += 1) {
        dW += (yTrain[i] - predictions[i]) * xTrain[i]; 
        dB += yTrain[i] - predictions[i]; 
    }
    dW /= m; 
    dB /= m; 
    return [dW, dB];
}
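The expressions for dW and dB come from differentiating the averaged squared-error cost with respect to w and b. A quick finite-difference check on a hypothetical two-point dataset—a sketch for verification, not part of the original code—confirms the formula for dW:

```javascript
function celsiusToFahrenheit(c) { return c * 1.8 + 32; }
function predictionCost(y, prediction) { return (y - prediction) ** 2 / 2; }

// A tiny, hypothetical two-point dataset for the check.
const xs = [10, 20];
const ys = xs.map(celsiusToFahrenheit);

// Average cost of the model (w, b) over the dataset.
function averageCost(w, b) {
    let total = 0;
    for (let i = 0; i < xs.length; i += 1) {
        total += predictionCost(ys[i], xs[i] * w + b);
    }
    return total / xs.length;
}

// dW at (w, b) = (1, 1), exactly as backwardPropagation computes it.
const w = 1, b = 1;
let dW = 0;
for (let i = 0; i < xs.length; i += 1) {
    dW += (ys[i] - (xs[i] * w + b)) * xs[i];
}
dW /= xs.length;

// Numerical estimate of -dCost/dw via a central difference:
// the analytical dW should match the sign-flipped slope of the cost.
const eps = 1e-4;
const numerical = -(averageCost(w + eps, b) - averageCost(w - eps, b)) / (2 * eps);

console.log(Math.abs(dW - numerical) < 1e-3); // true: they agree
```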

Training the Model

Finally, we bring everything together in the training process, iterating through multiple epochs to allow our NanoNeuron to learn effectively:

function trainModel(model, epochs, alpha, xTrain, yTrain) {
    const costHistory = []; 
    for (let epoch = 0; epoch < epochs; epoch += 1) {
        const [predictions, cost] = forwardPropagation(model, xTrain, yTrain); 
        costHistory.push(cost); 
        const [dW, dB] = backwardPropagation(predictions, xTrain, yTrain);
        model.w += alpha * dW; 
        model.b += alpha * dB;
    }
    return costHistory; 
}
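Putting all of the pieces together, we can train a NanoNeuron end to end. The snippet below uses 70,000 epochs and a learning rate of 0.0005—one combination that converges well for this dataset, though you should treat these hyperparameters as a starting point rather than gospel—and begins from random parameters:

```javascript
function NanoNeuron(w, b) {
    this.w = w;
    this.b = b;
    this.predict = (x) => x * this.w + this.b;
}

function celsiusToFahrenheit(c) { return c * 1.8 + 32; }
function predictionCost(y, prediction) { return (y - prediction) ** 2 / 2; }

function forwardPropagation(model, xTrain, yTrain) {
    const m = xTrain.length;
    const predictions = [];
    let cost = 0;
    for (let i = 0; i < m; i += 1) {
        const prediction = model.predict(xTrain[i]);
        cost += predictionCost(yTrain[i], prediction);
        predictions.push(prediction);
    }
    return [predictions, cost / m];
}

function backwardPropagation(predictions, xTrain, yTrain) {
    const m = xTrain.length;
    let dW = 0, dB = 0;
    for (let i = 0; i < m; i += 1) {
        dW += (yTrain[i] - predictions[i]) * xTrain[i];
        dB += yTrain[i] - predictions[i];
    }
    return [dW / m, dB / m];
}

function trainModel(model, epochs, alpha, xTrain, yTrain) {
    const costHistory = [];
    for (let epoch = 0; epoch < epochs; epoch += 1) {
        const [predictions, cost] = forwardPropagation(model, xTrain, yTrain);
        costHistory.push(cost);
        const [dW, dB] = backwardPropagation(predictions, xTrain, yTrain);
        model.w += alpha * dW;
        model.b += alpha * dB;
    }
    return costHistory;
}

// Build the training set (0°C through 100°C).
const xTrain = [], yTrain = [];
for (let x = 0; x <= 100; x += 1) {
    xTrain.push(x);
    yTrain.push(celsiusToFahrenheit(x));
}

// Start from random parameters and train.
const model = new NanoNeuron(Math.random(), Math.random());
const costHistory = trainModel(model, 70000, 0.0005, xTrain, yTrain);

// The learned parameters should land close to the true w = 1.8, b = 32,
// and the cost should have dropped over the course of training.
console.log(model.w.toFixed(4), model.b.toFixed(4));
console.log(costHistory[costHistory.length - 1] < costHistory[0]); // true
```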

How to Run NanoNeuron

To run the NanoNeuron model locally, clone the repository and execute the following commands:

git clone https://github.com/trekhleb/nano-neuron.git
cd nano-neuron
node NanoNeuron.js

Troubleshooting Tips

If you encounter issues while running the NanoNeuron project, consider the following troubleshooting tips:

  • Ensure your Node.js environment is correctly set up; the project itself has no third-party dependencies to install.
  • If you run into errors regarding the functions, double-check your code syntax for typos and mismatched parentheses.
  • To quickly identify problems, use console logs to track variable values at different stages of the program.

For more insights, updates, or to collaborate on AI development projects, stay connected with fxis.ai.

Conclusion

With the NanoNeuron, we’ve taken a fun yet insightful journey into the world of machine learning. By breaking down the process to its core functions, we can appreciate the elegance of the mathematics behind intelligent predictions. At fxis.ai, we believe that such advancements are crucial for the future of AI, as they enable more comprehensive and effective solutions. Our team is continually exploring new methodologies to push the envelope in artificial intelligence, ensuring that our clients benefit from the latest technological innovations.
