Build Your Own ML Framework
Don't just import it. Build it.
Build a complete machine learning (ML) framework from tensors to systems—understand how PyTorch, TensorFlow, and JAX really work under the hood.
Setup & First Run
From zero to ready in 2 minutes
Complete a Module
Build → Test → Ship workflow
Run a Milestone
Your code recreates ML history
The "Aha!" Moment
You built this, not imported it
Getting Started
TinyTorch is organized into four progressive tiers that take you from mathematical foundations to production-ready systems. Each tier builds on the previous one, teaching you not just how to code ML components, but how they work together as a complete system.
🏗 Foundation (Modules 01-07)
Build the mathematical core that makes neural networks learn.
Unlocks: Perceptron (1957) • XOR Crisis (1969) • MLP (1986)
🏛️ Architecture (Modules 08-13)
Build modern neural architectures—from computer vision to language models.
Unlocks: CNN Revolution (1998) • Transformer Era (2017)
⏱️ Optimization (Modules 14-19)
Transform research prototypes into production-ready systems.
Unlocks: MLPerf Torch Olympics (2018) • 8-16× compression • 12-40× speedup
🏅 Torch Olympics (Module 20)
The ultimate test: Build a complete, competition-ready ML system.
Capstone: Vision • Language • Speed • Compression tracks
Complete course structure • Daily workflow guide • Join the community
Recreate ML History
Walk through ML history by rebuilding its greatest breakthroughs with YOUR TinyTorch implementations. Click each milestone to see what you’ll build and how it shaped modern AI.
View complete milestone details to see full technical requirements and learning objectives.
Why Build Instead of Use?
Understanding the difference between using a framework and building one is the difference between being limited by tools and being empowered to create them.
Traditional ML Education
import torch
model = torch.nn.Linear(784, 10)
x = torch.randn(64, 784)   # a batch of flattened 28x28 images
output = model(x)
# When this breaks, you're stuck
Problem: OOM errors, NaN losses, slow training—you can't debug what you don't understand.
TinyTorch Approach
from tinytorch import Linear   # YOUR code
model = Linear(784, 10)        # YOUR implementation
output = model(x)              # same flattened batch as above
# You know exactly how this works
Advantage: You understand memory layouts, gradient flows, and performance bottlenecks because you implemented them.
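To make that concrete, here is a minimal sketch of what a from-scratch linear layer can look like. It is NumPy-based and purely illustrative; the class name, method names, and initialization are assumptions for this page, not TinyTorch's actual API.

```python
import numpy as np

class Linear:
    """A minimal fully connected layer: y = x @ W + b."""

    def __init__(self, in_features, out_features):
        # Weights live in one contiguous array you allocated yourself.
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x):
        self.x = x                      # cache the input for the backward pass
        return x @ self.W + self.b

    def backward(self, grad_out):
        # Gradient flow is explicit: nothing hides behind autograd magic.
        self.dW = self.x.T @ grad_out
        self.db = grad_out.sum(axis=0)
        return grad_out @ self.W.T      # gradient w.r.t. the input

layer = Linear(784, 10)
out = layer.forward(np.random.randn(64, 784))
grad_in = layer.backward(np.ones_like(out))
```

Because the cached input and the gradients are plain arrays you wrote, a NaN or an unexpected memory spike is something you can print and inspect rather than guess at.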
Systems Thinking: TinyTorch emphasizes understanding how components interact—memory hierarchies, computational complexity, and optimization trade-offs—not just isolated algorithms. Every module connects mathematical theory to systems understanding.
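As a taste of that systems view, here is a back-of-the-envelope calculation for the Linear(784, 10) example above. Float32 storage and a batch size of 64 are assumptions chosen for illustration, not course requirements.

```python
# Memory and compute for a Linear(784, 10) layer in float32 (4 bytes per value).
params = 784 * 10 + 10                 # weights + biases = 7,850 values
param_bytes = params * 4               # ~31 KB of parameters

batch = 64
activations = batch * (784 + 10)       # cached inputs for backward + outputs
activation_bytes = activations * 4     # ~203 KB per batch

flops = 2 * batch * 784 * 10           # one multiply + one add per weight use
print(param_bytes, activation_bytes, flops)
```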
See Course Philosophy for the full origin story and pedagogical approach.
The Build → Use → Reflect Approach
Every module follows the same three-step learning cycle, which builds deep understanding:
graph LR
B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
U --> R[Reflect<br/>Systems thinking questions]
R --> B
style B fill:#FFC107,color:#000
style U fill:#4CAF50,color:#fff
style R fill:#2196F3,color:#fff
Build: Implement each component yourself—tensors, autograd, optimizers, attention
Use: Apply your implementations to real problems—MNIST, CIFAR-10, text generation
Reflect: Answer systems thinking questions—memory usage, scaling behavior, trade-offs
This approach develops not just coding ability, but systems engineering intuition essential for production ML.
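As an illustration of how one pass through the cycle can look, here is a short sketch using ReLU as the component. The component choice, data shapes, and reflection questions are illustrative assumptions, not the actual module assignments.

```python
import numpy as np

# Build: implement the component yourself.
def relu(x):
    return np.maximum(x, 0.0)

def relu_grad(x, grad_out):
    return grad_out * (x > 0)           # gradient is 1 where x > 0, else 0

# Use: run it on realistically shaped data, e.g. a batch of MNIST-sized inputs.
x = np.random.randn(64, 784)
y = relu(x)
dx = relu_grad(x, np.ones_like(y))

# Reflect: systems thinking questions.
# - relu allocates a new array; how much extra memory is that per batch?
# - (x > 0) builds a boolean mask; could the backward pass reuse it?
```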
Is This For You?
Perfect if you want to debug ML systems, implement custom operations, or understand how PyTorch actually works.
Prerequisites: Python + basic linear algebra. No prior ML experience required.
Next Steps: Quick Start Guide (15 min) • Course Structure • FAQ