You want to break into AI but the sheer number of courses, tutorials, and bootcamps makes it hard to know where to start. This guide lays out a 36-week learning path that takes you from zero programming knowledge to job-ready AI practitioner. Every course recommended here is either free or under $80, and each one was chosen because it teaches skills employers actually ask for. No filler stages, no detours into theory you will never use: just a concrete week-by-week plan built around the best courses available in 2026.
Stage 1: Foundations (Weeks 1-4)
Before you touch any machine learning library, you need two things: basic Python fluency and enough math to understand what algorithms are doing under the hood. Skipping foundations is the number one reason people stall out at the intermediate level. Four weeks is enough to get both pieces in place if you put in 10-15 hours per week. Many people try to jump straight to TensorFlow or PyTorch tutorials, hit a wall when they cannot debug a shape mismatch or understand a loss function, and then give up. Do not be that person. These four weeks are an investment that pays off for the entire rest of your learning path.
Weeks 1-2: Python for Data Science
Start with Python for Data Science and Machine Learning Bootcamp. Jose Portilla's course covers Python fundamentals, NumPy, Pandas, and Matplotlib in a hands-on style. You do not need prior coding experience. Spend about 12-15 hours per week and focus on the exercises rather than just watching videos. By the end of week two, you should be comfortable loading a CSV, cleaning data, and making basic plots. If Portilla's style does not click for you, an alternative is Google's Python course which takes a more structured, computer science approach to Python fundamentals. The key is picking one and finishing it — not shopping between five options.
During these two weeks, set up your development environment properly. Install Anaconda or Miniconda, get comfortable with Jupyter notebooks, and learn basic terminal commands. Practice writing functions, using list comprehensions, and working with dictionaries. These are not glamorous skills, but you will use them every single day in ML work. Write at least one small script per day — even something trivial like reading a CSV and computing summary statistics. The muscle memory matters.
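A day-one practice script in this spirit might look like the following sketch. It exercises exactly the skills mentioned above: a function, a dict comprehension, and pandas summary statistics. The filename-free CSV text and column names are invented for illustration.

```python
# Daily practice: load a small CSV with pandas and summarize it.
# The data and column names here are made up for the exercise.
import io
import pandas as pd

CSV_TEXT = """city,temp_c,humidity
Lisbon,21.5,0.60
Oslo,4.0,0.81
Cairo,29.3,0.22
"""

def load_readings(text: str) -> pd.DataFrame:
    # io.StringIO lets pandas treat the string as a file
    return pd.read_csv(io.StringIO(text))

def summarize(df: pd.DataFrame) -> dict:
    # Dict comprehension: column name -> mean, numeric columns only
    return {col: df[col].mean() for col in df.select_dtypes("number").columns}

df = load_readings(CSV_TEXT)
stats = summarize(df)
print(stats)
```

Trivial, yes, but writing twenty scripts like this is what makes pandas feel automatic by week three.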
Weeks 3-4: Math Essentials
Take Mathematics for Machine Learning from Imperial College London on Coursera. Focus on the linear algebra and multivariate calculus modules. You do not need to finish the PCA module yet. Simultaneously, watch the first 10 lectures of MIT 18.06 Linear Algebra by Gilbert Strang. His geometric intuitions for matrix operations will save you hours of confusion later. Budget 10-12 hours per week across both resources.
At the end of Stage 1, you should be able to: write Python scripts that manipulate dataframes, explain what a dot product and matrix multiplication represent, and take partial derivatives of simple functions. If any of these feel shaky, spend an extra week here. Rushing past foundations always costs more time later. A good self-test: try implementing linear regression from scratch using only NumPy. If you can write the gradient descent loop and explain why it works, you are ready for Stage 2.
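For reference, the self-test described above fits in a couple dozen lines. This is one possible sketch on synthetic data, not the only correct answer; the learning rate and iteration count are arbitrary choices that happen to converge here.

```python
# Stage 1 self-test: linear regression via a hand-written gradient
# descent loop, NumPy only. Data is synthetic with known true weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.05, size=200)  # true w=3, b=2

w, b = 0.0, 0.0
lr = 0.5
for _ in range(500):
    y_hat = w * X[:, 0] + b
    error = y_hat - y
    # Gradients of mean squared error:
    #   dL/dw = 2 * mean(error * x),  dL/db = 2 * mean(error)
    grad_w = 2 * np.mean(error * X[:, 0])
    grad_b = 2 * np.mean(error)
    w -= lr * grad_w   # step downhill against the gradient
    b -= lr * grad_b

print(w, b)  # should land close to 3 and 2
```

If you can write this loop unaided and explain why subtracting the gradient reduces the loss, Stage 2 will feel natural.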
Stage 2: Core Machine Learning (Weeks 5-12)
This is where you learn the bread and butter of ML: regression, classification, clustering, decision trees, ensemble methods, and model evaluation. You will spend eight weeks here because these fundamentals underpin everything that follows, including deep learning and generative AI. The temptation to rush through this stage and jump to neural networks is strong — resist it. Most working ML engineers spend more time on classical ML than on deep learning, because most business problems do not need a neural network. A well-tuned gradient boosted tree outperforms a poorly trained neural network on tabular data almost every time.
Weeks 5-8: Your Main ML Course
Enroll in the Machine Learning Specialization by Andrew Ng on Coursera. This three-course specialization, updated in 2022, uses Python instead of the original course's Octave, which makes it far more practical. Course 1 covers supervised learning (linear regression, logistic regression, neural networks). Course 2 covers advanced algorithms (decision trees, random forests, XGBoost, recommender systems). Course 3 covers unsupervised learning and reinforcement learning basics. Spend 12-15 hours per week and complete every programming assignment — do not just watch the videos. The assignments are where the learning actually happens. If you get stuck, use the Coursera discussion forums rather than jumping to the solution. Struggling with a problem for 30 minutes teaches you more than reading someone else's answer.
As you work through the specialization, keep a learning journal. After each week, write a one-paragraph summary of what you learned and one question you still have. This forces you to consolidate knowledge rather than letting it evaporate. By the end of Course 2, start reading ML blog posts and Kaggle notebooks to see how practitioners apply these concepts to real problems. The gap between textbook ML and applied ML is smaller than people think, but it exists, and exposure to real-world applications helps you bridge it.
Weeks 9-10: Practical Kaggle Skills
Now apply what you learned. Complete Kaggle Intro to Machine Learning and Kaggle Intermediate Machine Learning back to back. These are short, focused courses that teach you to use scikit-learn on real datasets. You will learn to handle missing values, categorical variables, pipelines, and cross-validation. Each takes about 6-8 hours. After finishing, enter one active Kaggle competition and submit at least three solutions. Your score does not matter; the process of iterating on a real problem is what teaches you.
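The pattern those two Kaggle courses drill into you looks roughly like this sketch: imputation and encoding wrapped in a pipeline, scored with cross-validation. The synthetic data and column names are invented for illustration.

```python
# The scikit-learn workflow taught in the Kaggle courses: impute
# missing numeric values, one-hot encode a categorical column, and
# evaluate with cross-validation. Data is synthetic.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "age": rng.normal(40, 10, n),
    "plan": rng.choice(["basic", "pro"], n),
})
df.loc[rng.choice(n, 20, replace=False), "age"] = np.nan  # inject missing values
# Target depends on age, so the model has real signal to find
y = (df["age"].fillna(40) > 40).astype(int)

preprocess = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["age"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plan"]),
])
model = Pipeline([
    ("prep", preprocess),
    ("clf", RandomForestClassifier(n_estimators=100, random_state=0)),
])

scores = cross_val_score(model, df, y, cv=5)
print(scores.mean())
```

The point of the pipeline is that preprocessing is fit inside each cross-validation fold, which prevents the data leakage that silently inflates scores when you impute or encode before splitting.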
Weeks 11-12: Feature Engineering and Evaluation
Take Kaggle Feature Engineering to learn techniques like target encoding, mutual information, and feature creation. This is the skill that separates good ML practitioners from beginners. Spend the remaining time revisiting your Kaggle competition entry and improving it using these techniques. Also complete Google's Machine Learning Crash Course as a fast review. Google's course fills gaps with interactive visualizations that help cement concepts like regularization, ROC curves, and embeddings.
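To make those two techniques concrete, here is a toy sketch of smoothed target encoding and a mutual-information ranking. The data, column names, and smoothing constant are all invented for illustration; the Kaggle course covers the variants used in practice.

```python
# Two techniques from the feature engineering course, on toy data:
# smoothed mean target encoding, and mutual information for ranking.
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "channel": rng.choice(["ads", "email", "organic"], n),
    "visits": rng.poisson(5, n),  # pure noise, unrelated to the target
})
# Target probability depends on channel, so the encoding should reflect it
base = df["channel"].map({"ads": 0.2, "email": 0.5, "organic": 0.8})
y = (rng.uniform(size=n) < base).astype(int)

# Target encoding: replace each category with a smoothed mean of y.
# Smoothing pulls rare categories toward the global mean.
global_mean = y.mean()
grouped = y.groupby(df["channel"])
smoothing = 10
encoding = (grouped.sum() + smoothing * global_mean) / (grouped.count() + smoothing)
df["channel_te"] = df["channel"].map(encoding)

# Mutual information: how much each feature tells us about y
mi = mutual_info_classif(df[["channel_te", "visits"]], y, random_state=0)
print(dict(zip(["channel_te", "visits"], mi)))
```

Note the leakage hazard the course emphasizes: computing the encoding on the same rows you train on lets target information bleed into the feature, so in real work you compute it out-of-fold.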
Stage 3: Deep Learning (Weeks 13-20)
With solid ML fundamentals, you are ready for neural networks. Deep learning is not harder than classical ML; it is just different. You trade interpretability for power, and you need to learn new debugging techniques. Eight weeks gives you time to build genuine understanding rather than just copying tutorial code. This is also where the math from Stage 1 pays off. Understanding gradients, matrix operations, and probability distributions makes debugging neural networks dramatically easier. If a model is not converging, you need to reason about learning rates, gradient magnitudes, and loss landscapes — none of which makes sense without the mathematical foundation.
Weeks 13-16: Deep Learning Theory and Practice
You have two strong options here, and you should pick based on your learning style. If you prefer top-down, code-first learning, take fast.ai Practical Deep Learning for Coders. Jeremy Howard teaches you to train state-of-the-art models in the first lesson and then gradually peels back layers of abstraction. If you prefer bottom-up understanding, take the Deep Learning Specialization by Andrew Ng. It starts with the math of a single neuron and builds up to convolutional and recurrent networks. Both are excellent. The fast.ai approach gets you to results faster; the Ng approach gives you more theoretical grounding.
Weeks 17-18: Computer Vision or NLP Focus
Pick one specialization track. For computer vision, watch the first 10 lectures of Stanford CS231n. For natural language processing, take the HuggingFace NLP Course. Both are free. The HuggingFace course is particularly practical because it teaches you the tools you will actually use in production: tokenizers, transformer models, and the HuggingFace ecosystem. Spend 12-15 hours per week.
Weeks 19-20: Generative AI Fundamentals
Take Generative AI with Large Language Models by DeepLearning.AI and AWS. This course covers transformer architecture, training processes, fine-tuning, RLHF, and prompt engineering with real code exercises. Follow it with ChatGPT Prompt Engineering for Developers for a focused look at building applications on top of LLMs. These two courses together take about 20-25 hours and bring you up to speed on the technology driving most AI hiring in 2026.
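The transformer architecture these courses cover rests on one small computation, scaled dot-product attention, which is short enough to sketch in NumPy. This is the single-head, unmasked version with toy shapes, omitting the learned projections and multi-head plumbing a real model has.

```python
# Scaled dot-product attention, the core of the transformer, in NumPy.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # shift for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Q, K, V: (seq_len, d_k). Each row of the score matrix says how
    # strongly one position attends to every other position.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)     # each row sums to 1
    return weights @ V, weights            # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
out, weights = attention(Q, K, V)
print(out.shape, weights.shape)
```

If you can explain why the scores are divided by the square root of d_k (it keeps the dot products from saturating the softmax as dimensionality grows), you are well prepared for the course material.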
Stage 4: Specialization (Weeks 21-28)
Now you specialize based on the type of job you want. Pick one track below and stick with it for eight weeks. Trying to learn everything at once is a trap. Employers hire specialists, not generalists who know a little about everything. Look at job postings for roles you want and note which skills appear most frequently. That tells you which track to choose. If you are unsure, Track B (AI Application Developer) has the most job openings in 2026 and the lowest barrier to entry.
Track A: ML Engineer
Take Full Stack Deep Learning to learn experiment tracking, deployment, testing, and monitoring. This course bridges the gap between training a model in a notebook and running it in production — a gap that most courses completely ignore. Follow with MLOps Specialization to understand ML pipelines, model serving, and CI/CD for models. You will learn tools like MLflow, Docker, Kubernetes for ML, and monitoring for data drift. This track prepares you for roles where you build and deploy models in production. Spend 15 hours per week. By the end, you should be able to take a trained model and serve it as an API with proper logging, versioning, and automated retraining.
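The versioning-and-logging habit this track teaches can be sketched with nothing but the standard library. The registry layout, field names, and dict-based "model" below are invented for illustration; production stacks use tools like MLflow for the same job.

```python
# Toy version of Track A habits: save a model artifact with version
# metadata, then log every prediction served. All names are invented.
import hashlib
import json
import logging
import pickle
import tempfile
from datetime import datetime, timezone
from pathlib import Path

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-server")

def save_versioned(model, registry: Path) -> str:
    blob = pickle.dumps(model)
    version = hashlib.sha256(blob).hexdigest()[:12]  # content-addressed version
    (registry / f"{version}.pkl").write_bytes(blob)
    meta = {"version": version,
            "saved_at": datetime.now(timezone.utc).isoformat()}
    (registry / f"{version}.json").write_text(json.dumps(meta))
    return version

def predict(registry: Path, version: str, x: float) -> float:
    model = pickle.loads((registry / f"{version}.pkl").read_bytes())
    y = model["w"] * x + model["b"]
    log.info("version=%s x=%s y=%s", version, x, y)  # audit trail per call
    return y

registry = Path(tempfile.mkdtemp())
# The "model" is a dict of coefficients to keep the sketch dependency-free
version = save_versioned({"w": 2.0, "b": 1.0}, registry)
print(predict(registry, version, x=3.0))  # 2*3 + 1 = 7.0
```

Content-addressed versions mean the same artifact always gets the same ID, so you can trace any logged prediction back to the exact model that produced it, which is the core idea behind the heavier tooling.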
Track B: AI Application Developer
Take LangChain for LLM Application Development and then build three projects: a RAG chatbot, an AI agent with tool use, and a fine-tuned model for a specific domain. This track is where the most job openings are in 2026. Companies need people who can integrate LLMs into products, not just train models from scratch. After LangChain, take Building Systems with the ChatGPT API to learn how to chain LLM calls, implement guardrails, and handle edge cases in production. The combination of these two short courses plus three solid projects makes you competitive for AI application developer roles at most companies.
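Before reaching for a framework, it helps to see the retrieval half of a RAG chatbot reduced to its essence. In this sketch the bag-of-words "embedding" is a stand-in for a real embedding model, and the documents, vocabulary, and prompt template are all invented.

```python
# Minimal RAG retrieval: embed documents and a query, rank by cosine
# similarity, and put the best match into the prompt. The word-count
# "embedding" is a toy stand-in for a real embedding model.
import numpy as np

VOCAB = sorted(set("refund policy shipping takes five days returns "
                   "accepted within thirty".split()))

def embed(text: str) -> np.ndarray:
    # Toy embedding: word counts over a fixed vocabulary
    words = text.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

docs = [
    "shipping takes five days",
    "returns accepted within thirty days",
    "refund policy covers damaged items",
]
query = "what is your refund policy"

scores = [cosine(embed(d), embed(query)) for d in docs]
best = docs[int(np.argmax(scores))]
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(best)
```

A real system swaps in an embedding model and a vector database, but the shape is identical: embed, rank, retrieve, inject into the prompt. Understanding this loop makes the LangChain abstractions far less mysterious.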
Track C: Research-Oriented
Take Stanford CS229 for rigorous ML theory, then NYU Deep Learning with Yann LeCun for cutting-edge architecture understanding. Complement with fast.ai Part 2 which covers building a deep learning framework from scratch. This track suits people aiming for research labs or PhD programs.
Stage 5: Portfolio and Job Prep (Weeks 29-36)
Courses alone will not land you a job. You need a portfolio that proves you can solve real problems. Spend eight weeks building three polished projects and preparing for interviews. This is the stage most self-learners skip or rush through, and it is the reason many people complete dozens of courses but still cannot land an AI role. Your portfolio is your proof of competence. Treat these eight weeks with the same seriousness as the preceding twenty-eight.
Weeks 29-32: Build Three Portfolio Projects
Each project should have: a clear problem statement, a dataset you collected or curated yourself, a trained model with documented experiments, a deployed demo (Streamlit, Gradio, or a simple API), and a well-written README. One project should use classical ML, one should use deep learning, and one should involve generative AI or LLMs. Put all three on GitHub with clean code and documentation.
Weeks 33-34: Certifications (Optional)
If you are targeting corporate roles, consider Google AI Essentials or Microsoft Azure AI Fundamentals (AI-900). These certifications are most useful for getting past HR filters at large companies. They are not substitutes for skills, but they signal familiarity with specific platforms. Skip them if you are targeting startups; startups care about your GitHub, not your certificates.
Weeks 35-36: Interview Preparation
Practice ML system design questions, review statistics fundamentals, and prepare to explain every project in your portfolio in detail. Be ready to whiteboard a simple model training pipeline and discuss tradeoffs between approaches. Mock interviews with peers are more valuable than any course at this stage.
What If You Already Know How to Code?
If you are a working software engineer, you can compress this timeline significantly. Skip weeks 1-2 (Python) entirely and spend just one week brushing up on ML-relevant math. Start Stage 2 in week 2 instead of week 5. You can also move faster through the ML Specialization because you already know how to debug code, read documentation, and structure projects. Experienced developers typically complete this path in 20-24 weeks instead of 36.
One warning for developers: do not assume that being a good programmer makes you a good ML practitioner. ML debugging is fundamentally different from software debugging. When your model gives wrong predictions, there is no stack trace. The bug could be in your data, your features, your hyperparameters, your evaluation method, or your problem formulation. You need to develop new intuitions, and that takes practice even if the code itself comes easily.
Timeline Summary
- Weeks 1-4: Python and math foundations with Python for Data Science and Machine Learning Bootcamp, Mathematics for Machine Learning, and MIT 18.06 Linear Algebra
- Weeks 5-12: Core ML with the Machine Learning Specialization, Kaggle Intro to Machine Learning, Kaggle Intermediate Machine Learning, Kaggle Feature Engineering, and Google's Machine Learning Crash Course
- Weeks 13-20: Deep learning with fast.ai Practical Deep Learning for Coders or the Deep Learning Specialization, plus the HuggingFace NLP Course or Stanford CS231n, plus Generative AI with Large Language Models
- Weeks 21-28: Specialization track (ML Engineer, AI App Developer, or Research)
- Weeks 29-36: Portfolio projects, optional certification, and interview prep
Total time commitment: 10-15 hours per week for 36 weeks, or roughly 360-540 hours. This is equivalent to one semester of full-time study. If you can dedicate 30+ hours per week, you can compress this to about 18 weeks. If you can only do 5 hours per week, stretch it to 72 weeks. The order matters more than the speed.
Frequently Asked Questions
Can I skip the math foundations if I already know Python?
You can skip the Python course but do not skip the math. Linear algebra and calculus show up constantly when debugging model behavior, reading papers, and understanding why certain architectures work. Take Mathematics for Machine Learning even if you studied math in college. The ML-specific framing is what matters.
Do I need a GPU to follow this path?
Not for Stages 1-2. For Stages 3-5, you need GPU access for training deep learning models. Google Colab's free tier is sufficient for most course exercises. If you want to train larger models, Colab Pro at $10/month or a Kaggle notebook with free GPU hours will cover you. You do not need to buy a gaming PC.
Should I take Andrew Ng's courses or fast.ai first?
For the ML Specialization in Stage 2, Andrew Ng is the clear choice because it covers classical ML thoroughly. For deep learning in Stage 3, pick based on your style: fast.ai if you want to build things immediately and learn theory as needed, or the Deep Learning Specialization if you want to understand every equation before writing code. Both paths lead to the same place.
What if I want to focus on generative AI and skip classical ML?
You will hit a wall. Generative AI roles require understanding of attention mechanisms, loss functions, optimization, and evaluation metrics. All of those concepts come from classical ML and deep learning foundations. Companies hiring for GenAI roles test these fundamentals in interviews. Follow the stages in order.
How do I know when I am job-ready?
You are job-ready when you can: (1) take a business problem and frame it as an ML task, (2) select and train an appropriate model, (3) evaluate it rigorously, (4) deploy it so others can use it, and (5) explain every decision you made. If your portfolio projects demonstrate all five, start applying. Do not wait until you feel 100% ready because that day never comes.