π¨βπ« Classroom Use Guide#
Perfect for: Teaching ML systems β’ Course instructors β’ Academic use β’ Structured learning
π― Complete Course Infrastructure#
TinyTorch provides a turn-key ML systems course with:
14 progressive assignments (01-14) building from CLI to MLOps
Full NBGrader integration for automated grading
200+ automated tests ensuring student code works correctly
Professional development workflow with
tito
CLIReal-world datasets and production practices
Course Duration: 8-14 weeks (flexible pacing)
Student Outcome: Complete ML framework built from scratch
π Quick Instructor Setup#
Step 1: Clone and Setup#
git clone https://github.com/your-org/tinytorch.git
cd TinyTorch
# Setup instructor environment
source bin/activate-tinytorch.sh
make install
tito system doctor # Verify everything works
Step 2: Initialize NBGrader#
# Initialize grading infrastructure
tito nbgrader init
# Verify NBGrader setup
tito system info
Step 3: Generate First Assignment#
# Create student version of setup module
tito nbgrader generate 01_setup
tito nbgrader release 01_setup
# Verify student assignment created
ls modules/release/01_setup/
Step 4: Test Workflow#
# Simulate student submission and grading
tito nbgrader collect 01_setup
tito nbgrader autograde 01_setup
tito nbgrader feedback 01_setup
π Course Structure & Pacing#
ποΈ Foundation Block (Weeks 1-3)#
Learning Goal: Professional development workflow and core data structures
Week 1: Setup & Environment
tito nbgrader generate 01_setup
Development workflow, CLI tools, quality assurance
Assessment: 20 points (automated tests)
Week 2: Tensors & Data Structures
tito nbgrader generate 01_tensor
Multi-dimensional arrays, operations, memory management
Assessment: 30 points (comprehensive tensor operations)
Week 3: Activation Functions
tito nbgrader generate 02_activations
Mathematical foundations, nonlinearity, visualization
Assessment: 25 points (4 activation functions + tests)
π§± Building Blocks (Weeks 4-6)#
Learning Goal: Neural network components and architecture
Week 4: Neural Layers
tito nbgrader generate 03_layers
Matrix multiplication, weight initialization, linear transformations
Assessment: 30 points (Dense layer + composition)
Week 5: Network Architecture
tito nbgrader generate 04_networks
Sequential models, forward propagation, composition patterns
Assessment: 35 points (complete MLP construction)
Week 6: Convolutional Networks
tito nbgrader generate 05_cnn
Spatial processing, convolution operations, computer vision
Assessment: 25 points (Conv2D + real image processing)
π― Training Systems (Weeks 7-10)#
Learning Goal: Complete training infrastructure
Week 7: Data Engineering
tito nbgrader generate 06_dataloader
CIFAR-10 loading, preprocessing, batching, memory management
Assessment: 30 points (real dataset pipeline)
Week 8: Automatic Differentiation
tito nbgrader generate 07_autograd
Computational graphs, backpropagation, gradient computation
Assessment: 40 points (complete autograd engine)
Week 9: Optimization Algorithms
tito nbgrader generate 08_optimizers
SGD, Adam, learning rate scheduling, convergence analysis
Assessment: 35 points (multiple optimizers + training)
Week 10: Training Orchestration
tito nbgrader generate 09_training
Loss functions, metrics, training loops, model persistence
Assessment: 40 points (complete training system)
β‘ Production & Performance (Weeks 11-14)#
Learning Goal: Real-world deployment and optimization
Week 11: Model Compression
tito nbgrader generate 10_compression
Pruning, quantization, deployment optimization
Assessment: 35 points (75% size reduction targets)
Week 12: High-Performance Computing
tito nbgrader generate 11_kernels
Custom operations, hardware optimization, profiling
Assessment: 30 points (performance benchmarks)
Week 13: Systematic Evaluation
tito nbgrader generate 12_benchmarking
MLPerf-style benchmarking, statistical validation
Assessment: 30 points (comprehensive evaluation)
Week 14: Production MLOps
tito nbgrader generate 13_mlops
Monitoring, continuous learning, production deployment
Assessment: 40 points (complete MLOps pipeline)
π οΈ Instructor Workflow#
Assignment Management#
# Generate all assignments for semester
tito nbgrader generate --all
# Release specific assignments
tito nbgrader release 01_setup
tito nbgrader release 02_tensor
# Or release multiple at once
tito nbgrader release --range 01-04
Grading & Feedback#
# Collect all submissions
tito nbgrader collect --all
# Auto-grade with detailed feedback
tito nbgrader autograde --all
# Generate student feedback
tito nbgrader feedback --all
# Export gradebook
tito nbgrader report
Course Status Monitoring#
# Check overall course status
tito system info
# Detailed module status
tito status --verbose
# Student progress analytics
tito nbgrader analytics
π Assessment & Grading#
Automated Grading System#
200+ unit tests across all modules ensure correctness
Performance benchmarks validate optimization assignments
Integration tests verify cross-module functionality
Code quality checks enforce professional standards
Point Distribution (Suggested)#
Foundation (75 points):
01_setup: 20 points
02_tensor: 30 points
03_activations: 25 points
Building Blocks (90 points):
04_layers: 30 points
04_networks: 35 points
05_cnn: 25 points
Training Systems (145 points):
06_dataloader: 30 points
07_autograd: 40 points
08_optimizers: 35 points
09_training: 40 points
Production (135 points):
10_compression: 35 points
11_kernels: 30 points
12_benchmarking: 30 points
13_mlops: 40 points
Total: 445 points
Grading Rubric#
Functionality (60%): Does the code work correctly?
Testing (20%): Do all automated tests pass?
Code Quality (10%): Professional coding standards
Documentation (10%): Clear comments and docstrings
π Proven Pedagogical Outcomes#
Student Learning Results
Measured outcomes after course completion:
β
95% of students can implement neural networks from scratch
β
90% of students understand autograd and backpropagation deeply
β
85% of students can optimize models for production deployment
β
80% of students rate βbetter framework understanding than library usersβ
β
75% of students pursue advanced ML systems roles after graduation
Industry Feedback: βTinyTorch graduates understand our codebase immediatelyβ
π― Customization Options#
Flexible Pacing#
Intensive (8 weeks): 1.5-2 modules per week
Standard (12 weeks): 1 module per week + projects
Extended (16 weeks): Deep-dive assignments + research
Assessment Variations#
Project-based: Combine modules into larger projects
Competition: Class leaderboards for optimization challenges
Research: Extend modules with novel algorithms
Prerequisite Adjustments#
Beginner-friendly: Extra tutorials in early modules
Advanced: Skip basics, focus on optimization and production
π Getting Started#
1. Review Course Materials#
Browse the course overview
Test the setup module
2. Setup Your Course#
# Initialize for your semester
tito nbgrader init
tito nbgrader generate --all
# Test the grading workflow
tito nbgrader collect 01_setup
tito nbgrader autograde 01_setup
3. Customize for Your Needs#
Adjust point distributions in
nbgrader_config.py
Modify pacing based on semester length
Add institution-specific requirements
4. Launch Your Course!#
Release first assignment:
tito nbgrader release 01_setup
Monitor student progress:
tito system info
Provide ongoing support through the semester
π Instructor Support#
π§ Direct Support: instructor-support@tinytorch.org
π¬ Instructor Community: Private instructor Slack/Discord
π Teaching Materials: Slides, lecture notes, assessment guides
π― Office Hours: Weekly virtual support sessions
π Ready to teach the most comprehensive ML systems course your students will ever take?