Welcome to our guide on combining machine learning (ML) with software engineering practices to design, develop, deploy, and iterate on production-grade ML applications. This tutorial takes you through setting up your environment and workflow efficiently.
Getting Started
In this course, you’ll learn how to take machine learning models from experimentation all the way to production, covering essential concepts along the way. Here’s a brief overview of what you’ll be doing:
- Developing a strong understanding of machine learning principles.
- Implementing software engineering best practices.
- Scaling ML workloads easily using Python.
- Integrating MLOps components into a cohesive system.
- Creating effective CI/CD workflows for ongoing model improvement.
Setting Up Your Environment
You’ll have choices when it comes to the environment in which you’ll develop your ML applications. Here are your options:
Local Setup
Your laptop acts as a single-node cluster, with your CPU cores standing in for the head and worker nodes. This is slower than a real multi-node cluster but adequate for initial development and testing.
Anyscale Setup
An Anyscale Workspace provides a managed environment with provisioned compute resources for your ML workloads.
Other Options
- Using cloud services like AWS and GCP.
- Deploying on on-premise clusters.
- Running on Kubernetes via the KubeRay project.
Creating a Git Repository
Setting up version control is essential. Here’s how to do it:
- Create a new repository named “Made-With-ML” on GitHub.
- Clone your repository locally using:
git clone https://github.com/GokuMohandas/Made-With-ML.git .
Setting Up Your Virtual Environment
It’s crucial to isolate your Python dependencies:
- In your terminal, export your Python path and create a virtual environment:
export PYTHONPATH=$PYTHONPATH:$PWD
python3 -m venv venv
- Activate the virtual environment:
source venv/bin/activate
- Install the necessary packages:
pip install -r requirements.txt
Executing Machine Learning Workloads
With your environment ready, you can start executing your ML workloads. Here’s a simple analogy to understand the process:
Think of your ML project as a restaurant. The data is the raw ingredients, your model training is the cooking process, and serving predictions to users is like serving dishes to customers. You need to prepare everything in a structured manner to ensure that your “dishes” are not only delicious (accurate) but also served on time (in a reliable manner).
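The analogy above maps onto a pipeline of distinct stages: data preparation, training, and serving. Here is a minimal sketch of that structure. All function names and the toy "majority-class model" are illustrative, not part of the Made-With-ML codebase:

```python
# Illustrative three-stage pipeline: ingredients -> cooking -> serving.
# These functions are a sketch, not the actual madewithml API.

def prepare_data(raw_rows):
    """Clean the 'raw ingredients': drop empty rows, normalize labels."""
    return [(text.strip(), label.lower()) for text, label in raw_rows if text.strip()]

def train_model(dataset):
    """'Cook': a trivial majority-class model standing in for real training."""
    labels = [label for _, label in dataset]
    majority = max(set(labels), key=labels.count)
    return {"predict": lambda text: majority}

def serve(model, text):
    """'Serve the dish': return a prediction for one input."""
    return model["predict"](text)

raw = [
    ("Great tutorial", "Positive"),
    ("", "Negative"),            # empty row, dropped during preparation
    ("Confusing", "Negative"),
    ("Hard to follow", "Negative"),
]
model = train_model(prepare_data(raw))
print(serve(model, "Another example"))  # -> negative
```

Keeping each stage a separate, testable function is what lets you later swap in a real dataset loader, a real trainer, and a real serving layer without rewriting the others.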
Training Your Model
Run your training command in the terminal after configuring environment variables. Here’s what you would execute:
export EXPERIMENT_NAME=llm
export DATASET_LOC=https://raw.githubusercontent.com/GokuMohandas/Made-With-ML/main/datasets/dataset.csv
export TRAIN_LOOP_CONFIG='{"dropout_p": 0.5, "lr": 1e-4, "lr_factor": 0.8, "lr_patience": 3}'
python madewithml/train.py --experiment-name $EXPERIMENT_NAME --dataset-loc $DATASET_LOC --train-loop-config $TRAIN_LOOP_CONFIG --num-workers 1 --cpu-per-worker 3 --gpu-per-worker 1 --num-epochs 10 --batch-size 256 --results-fp results/training_results.json
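The training loop config is one string holding several hyperparameters. A sketch of how such a flag might be parsed, assuming a JSON-encoded value; the argument handling here is illustrative, not the actual madewithml/train.py implementation:

```python
import argparse
import json

# Sketch: accept a JSON string on the command line and decode it into a dict.
parser = argparse.ArgumentParser()
parser.add_argument("--train-loop-config", type=json.loads, default={})

# Simulate the flag from the tutorial's export above.
args = parser.parse_args(
    ["--train-loop-config",
     '{"dropout_p": 0.5, "lr": 1e-4, "lr_factor": 0.8, "lr_patience": 3}']
)
config = args.train_loop_config
print(config["lr"])  # -> 0.0001
```

Passing the whole config as one JSON value keeps the CLI stable: adding a new hyperparameter changes only the string, not the script's flag list.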
Tuning the Model
Next, you’ll want to refine the model. Similar to a chef tweaking their recipe for perfection, you will adjust hyperparameters:
export INITIAL_PARAMS="[{\"train_loop_config\": $TRAIN_LOOP_CONFIG}]"
python madewithml/tune.py --experiment-name $EXPERIMENT_NAME --dataset-loc $DATASET_LOC --initial-params $INITIAL_PARAMS --num-runs 2 --num-workers 1 --cpu-per-worker 3 --gpu-per-worker 1 --num-epochs 10 --batch-size 256 --results-fp results/tuning_results.json
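Under the hood, tuning means running several trials with different hyperparameter values and keeping the best. A minimal random-search sketch, where the search space and the scoring function are stand-ins for a real training run (not the actual madewithml/tune.py logic):

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical search space mirroring the config keys used above.
space = {"dropout_p": [0.3, 0.5, 0.7], "lr": [1e-4, 1e-3, 1e-2]}

def evaluate(params):
    """Stand-in for a real training run; lower 'loss' is better."""
    return abs(params["dropout_p"] - 0.5) + abs(params["lr"] - 1e-3)

best, best_loss = None, float("inf")
for _ in range(5):  # analogue of --num-runs
    params = {key: random.choice(values) for key, values in space.items()}
    loss = evaluate(params)
    if loss < best_loss:
        best, best_loss = params, loss

print(best, best_loss)
```

Real tuners (such as Ray Tune, which this project builds on) add smarter search strategies, early stopping, and parallel trials, but the trial-evaluate-keep-best loop is the same.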
Troubleshooting Common Issues
If you encounter issues, here are some troubleshooting steps:
- Check your Python environment; ensure all dependencies are installed correctly.
- Verify your paths and dataset locations for typos.
- Restart your Jupyter Notebook kernel if unexpected errors occur.
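The first two checks above can be scripted. A small sanity-check sketch; the module names in the loop are examples only, so substitute the packages from your own requirements.txt:

```python
import importlib
import sys

# Confirm which interpreter is running; the path should point inside your venv.
print(sys.executable)

# Try importing each dependency; a missing package raises ImportError here,
# which is easier to diagnose than a failure mid-training.
for name in ("json", "argparse"):  # example modules; use your real dependencies
    importlib.import_module(name)
print("imports OK")
```

Running this immediately after activating the virtual environment catches "wrong interpreter" and "missing dependency" problems before any training job starts.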
Conclusion
This is merely a glimpse into how to seamlessly integrate machine learning into software engineering practice. Remember, continual iteration and learning are keys to mastering these concepts.
FAQ
Jupyter Notebook Kernels
If you have issues configuring the Jupyter notebook kernels, you can manually add them using:
python3 -m ipykernel install --user --name=venv
Now you can navigate to a notebook and select the venv kernel. To remove it, you can do:
jupyter kernelspec list
jupyter kernelspec uninstall venv
With patience and practice, you’ll run effective ML applications in no time!

