Build Your Own ML Framework

Don't just import it. Build it.

Build a complete machine learning (ML) framework from tensors to systems—understand how PyTorch, TensorFlow, and JAX really work under the hood.

$ pip install tinytorch (coming soon)

Getting Started

TinyTorch is organized into four progressive tiers that take you from mathematical foundations to production-ready systems. Each tier builds on the previous one, teaching you not just how to code ML components, but how they work together as a complete system.

Complete course structure • Daily workflow guide • Join the community

Recreate ML History

Walk through ML history by rebuilding its greatest breakthroughs with YOUR TinyTorch implementations. Each milestone below shows what you'll build and how it shaped modern AI.

1957 - The Perceptron: the first trainable neural network.
       Input → Linear → Sigmoid → Output

1969 - XOR Crisis Solved: hidden layers unlock non-linear learning.
       Input → Linear → ReLU → Linear → Output

1986 - MLP Revival: backpropagation enables deep learning (95%+ MNIST).
       Images → Flatten → Linear → ... → Classes

1998 - CNN Revolution 🎯: spatial intelligence unlocks computer vision (75%+ CIFAR-10).
       Images → Conv → Pool → ... → Classes

2017 - Transformer Era: attention launches the LLM revolution.
       Tokens → Attention → FFN → Output

2018 - MLPerf Benchmarks: production optimization (8-16× smaller, 12-40× faster).
       Profile → Compress → Accelerate

View complete milestone details to see full technical requirements and learning objectives.
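To make the first milestone concrete, here is an illustrative NumPy sketch of the 1957 pipeline (Input → Linear → Sigmoid → Output): a single-layer perceptron trained with gradient descent on the linearly separable OR function. This is a toy standing in for what you will build; it is not the TinyTorch API.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# OR truth table: linearly separable, so a single layer suffices
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [1]], dtype=float)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(2, 1))  # Linear weights
b = np.zeros((1,))                      # Linear bias

for _ in range(2000):
    out = sigmoid(X @ W + b)        # forward: Linear -> Sigmoid
    grad = (out - y) / len(X)       # BCE gradient w.r.t. pre-activation
    W -= X.T @ grad                 # gradient descent step (lr = 1.0)
    b -= grad.sum(axis=0)

print((sigmoid(X @ W + b) > 0.5).astype(int).ravel())  # -> [0 1 1 1]
```

The 1969 XOR crisis is exactly this model failing: XOR is not linearly separable, so no setting of `W` and `b` can solve it until a hidden layer is added.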

Why Build Instead of Use?

Understanding the difference between using a framework and building one is the difference between being limited by tools and being empowered to create them.

Traditional ML Education

```python
import torch
model = torch.nn.Linear(784, 10)
output = model(input)
# When this breaks, you're stuck
```

Problem: OOM errors, NaN losses, slow training—you can't debug what you don't understand.

TinyTorch Approach

```python
from tinytorch import Linear  # YOUR code
model = Linear(784, 10)       # YOUR implementation
output = model(input)
# You know exactly how this works
```

Advantage: You understand memory layouts, gradient flows, and performance bottlenecks because you implemented them.
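For a sense of scale, a hand-built `Linear` layer can fit in a dozen lines. This is a minimal NumPy sketch of the idea, mirroring the `Linear(784, 10)` call above; the actual TinyTorch implementation and its internals may differ.

```python
import numpy as np

class Linear:
    def __init__(self, in_features, out_features):
        # Small uniform init; the (in, out) shape choice fixes the
        # memory layout you later reason about when profiling.
        scale = 1.0 / np.sqrt(in_features)
        self.weight = np.random.uniform(-scale, scale,
                                        (in_features, out_features))
        self.bias = np.zeros(out_features)

    def __call__(self, x):
        # Forward pass: one matmul plus a broadcasted bias add.
        return x @ self.weight + self.bias

model = Linear(784, 10)
output = model(np.random.rand(32, 784))  # batch of 32 flattened images
print(output.shape)  # -> (32, 10)
```

Once you have written this yourself, an OOM error or a shape mismatch points at code you own, not at a black box.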

Systems Thinking: TinyTorch emphasizes understanding how components interact—memory hierarchies, computational complexity, and optimization trade-offs—not just isolated algorithms. Every module connects mathematical theory to systems understanding.

See Course Philosophy for the full origin story and pedagogical approach.

The Build → Use → Reflect Approach

Every module follows a proven learning cycle that builds deep understanding:

```mermaid
graph LR
    B[Build<br/>Implement from scratch] --> U[Use<br/>Real data, real problems]
    U --> R[Reflect<br/>Systems thinking questions]
    R --> B

    style B fill:#FFC107,color:#000
    style U fill:#4CAF50,color:#fff
    style R fill:#2196F3,color:#fff
```
  1. Build: Implement each component yourself—tensors, autograd, optimizers, attention

  2. Use: Apply your implementations to real problems—MNIST, CIFAR-10, text generation

  3. Reflect: Answer systems thinking questions—memory usage, scaling behavior, trade-offs

This approach develops not just coding ability, but systems engineering intuition essential for production ML.
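Applied to autograd, the "Build" step can start as small as a scalar value that records its own gradient rule. This toy sketch (with a single multiply op and a fixed-depth backward pass) only hints at the idea; TinyTorch's real tensor-based autograd is more general.

```python
class Value:
    """A scalar that knows how to propagate gradients backward."""
    def __init__(self, data):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None  # local gradient rule, set by ops

    def __mul__(self, other):
        out = Value(self.data * other.data)
        def backward_fn():
            self.grad += other.data * out.grad   # d(xy)/dx = y
            other.grad += self.data * out.grad   # d(xy)/dy = x
        out._backward = backward_fn
        return out

    def backward(self):
        # Toy version for a one-op graph: seed the output gradient,
        # then run the local rule. (Deeper graphs need a topological
        # sort over the whole computation graph.)
        self.grad = 1.0
        self._backward()

x, y = Value(3.0), Value(4.0)
z = x * y
z.backward()
print(x.grad, y.grad)  # -> 4.0 3.0
```

"Use" then means running real data through it, and "Reflect" means asking questions like: how much memory does retaining every intermediate value cost at scale?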

Is This For You?

Perfect if you want to debug ML systems, implement custom operations, or understand how PyTorch actually works.

Prerequisites: Python + basic linear algebra. No prior ML experience required.


Next Steps: Quick Start Guide (15 min) • Course Structure • FAQ