Gradient Descent Visualizer

Overview

The Gradient Descent Visualizer brings the optimization process to life with 3D surface plots and animated descent paths. Watch how gradient descent navigates loss landscapes, experiment with learning rates to see the difference between smooth convergence and chaotic overshooting, and compare different loss surfaces including convex bowls, saddle points, and local minima. This tool transforms abstract optimization theory into visual, interactive understanding.
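
Under the hood, gradient descent repeatedly applies the update p <- p - learning_rate * gradient(p). The snippet below is a minimal, self-contained sketch of that loop on a convex bowl, loss(x, y) = x^2 + y^2; the function names and parameters are illustrative and are not the visualizer's actual code.

```python
import numpy as np

# Convex bowl: loss(x, y) = x^2 + y^2, with gradient (2x, 2y).
def loss(p):
    return p[0] ** 2 + p[1] ** 2

def grad(p):
    return np.array([2.0 * p[0], 2.0 * p[1]])

def gradient_descent(start, learning_rate=0.1, steps=50):
    """Run plain gradient descent, recording the path and loss at every step."""
    p = np.array(start, dtype=float)
    path, losses = [p.copy()], [loss(p)]
    for _ in range(steps):
        p = p - learning_rate * grad(p)   # step opposite the gradient
        path.append(p.copy())
        losses.append(loss(p))
    return np.array(path), np.array(losses)

path, losses = gradient_descent(start=(2.0, 1.5), learning_rate=0.1)
print(f"final point: {np.round(path[-1], 4)}, final loss: {losses[-1]:.6f}")
```

Plotting losses against the iteration index gives the same kind of learning curve described in the Tips below, and path holds the points traced over the surface.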

Tips

  • Start with a convex bowl surface and moderate learning rate (0.1) to see ideal convergence
  • Try increasing the learning rate to see oscillation and potential divergence
  • Decrease the learning rate to observe slower but more stable convergence
  • Experiment with the saddle point surface to see why optimization can stall (a numerical sketch of this appears after the list)
  • The local minima surface demonstrates why global optimization is challenging
  • Watch the learning curve plot to see loss decreasing over iterations
  • Compare multiple learning rates side-by-side to understand the tradeoff between speed and stability (see the learning-rate sketch after this list)
  • Notice how the gradient magnitude, and with it the step size, shrinks as you approach the minimum
  • The 3D visualization shows the optimization path from above; steepest descent follows the negative gradient
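
As a concrete illustration of the speed/stability tradeoff, consider gradient descent on the one-dimensional bowl f(x) = x^2, where each update multiplies x by (1 - 2 * learning_rate): the iterates converge smoothly for small rates, oscillate in sign for larger ones, and blow up once the rate exceeds 1. This is a minimal sketch separate from the visualizer's own code; the helper name descend_1d is made up for the example.

```python
import numpy as np

def descend_1d(x0=2.0, learning_rate=0.1, steps=20):
    """Gradient descent on f(x) = x^2: x <- x - lr * 2x = (1 - 2*lr) * x."""
    xs = [x0]
    for _ in range(steps):
        xs.append((1.0 - 2.0 * learning_rate) * xs[-1])
    return np.array(xs)

for lr in (0.1, 0.45, 0.75, 1.1):
    xs = descend_1d(learning_rate=lr)
    print(f"lr={lr:<4}  iterates: {np.round(xs[:4], 3)} ...  final |x| = {abs(xs[-1]):.3g}")

# lr = 0.1  -> slow, smooth, monotone convergence
# lr = 0.45 -> fast convergence (multiplier close to 0)
# lr = 0.75 -> sign flips every step: oscillation that still shrinks toward 0
# lr = 1.1  -> |x| grows every step: divergence
```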
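
The stalling behavior near a saddle point can also be reproduced numerically. On the classic saddle f(x, y) = x^2 - y^2, the gradient vanishes at the origin even though it is not a minimum: a path that starts on the ridge (y = 0) walks straight into the saddle and stops, while a tiny offset in y escapes only after many near-stationary steps. The sketch below assumes this surface and uses made-up names; it is not the visualizer's implementation.

```python
import numpy as np

def saddle_descent(start, learning_rate=0.1, steps=60):
    """Gradient descent on the saddle f(x, y) = x^2 - y^2 (gradient = (2x, -2y))."""
    p = np.array(start, dtype=float)
    for k in range(steps + 1):
        g = np.array([2.0 * p[0], -2.0 * p[1]])
        if k % 20 == 0:
            print(f"  step {k:3d}: point = {np.round(p, 5)}, |gradient| = {np.linalg.norm(g):.2e}")
        p = p - learning_rate * g
    return p

print("start exactly on the ridge (y = 0): walks into the saddle and stalls")
saddle_descent(start=(2.0, 0.0))

print("start with a tiny y offset: eventually escapes, but only very slowly")
saddle_descent(start=(2.0, 1e-6))
```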