Gradient Descent Visualizer
3D visualization of gradient descent optimization on various loss surfaces
Configuration
Learning Rate: 0.1 (default)
Max Iterations:
Loss Surface: Convex Bowl (Ideal), Saddle Point, Local Minima, or Rosenbrock (Hard)
Start X / Start Y: the initial point for the descent
Controls: Run Gradient Descent, Step, Reset
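For anyone reproducing a run outside the UI, these controls map onto a handful of parameters. The sketch below is a hypothetical representation only; the field names, the Max Iterations default, and the start point are illustrative assumptions, not the app's internal state.

```python
from dataclasses import dataclass

@dataclass
class DescentConfig:
    """Hypothetical mirror of the UI controls above (all names are assumptions)."""
    learning_rate: float = 0.1    # UI default shown above
    max_iterations: int = 100     # illustrative; the UI does not show a default
    surface: str = "convex_bowl"  # or "saddle", "local_minima", "rosenbrock"
    start_x: float = 2.0          # illustrative starting point
    start_y: float = 1.5
```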
Panels: Loss Surface & Optimization Path (with Download Data); Optimization Statistics, shown as Learning Curve and Gradient Magnitude plots
Help
Gradient Descent
Algorithm: Iteratively move in the direction of steepest descent, i.e. along the negative gradient.
Learning Rate: The step size; too large causes divergence, too small makes convergence slow.
Convergence: The run stops when the gradient magnitude falls below a small threshold.
Update Rule: x_new = x_old - learning_rate * gradient
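A minimal sketch of this update rule and stopping criterion, assuming a NumPy-style gradient function; the function names and the 1e-6 tolerance are illustrative choices, not the visualizer's actual code.

```python
import numpy as np

def gradient_descent(grad_fn, start, learning_rate=0.1, max_iterations=100, tol=1e-6):
    """Repeatedly step against the gradient until it (nearly) vanishes.

    grad_fn, tol, and the return shape are illustrative assumptions,
    not the visualizer's interface.
    """
    x = np.asarray(start, dtype=float)
    path = [x.copy()]
    for _ in range(max_iterations):
        g = grad_fn(x)
        if np.linalg.norm(g) < tol:  # convergence: gradient magnitude is very small
            break
        x = x - learning_rate * g    # update rule: x_new = x_old - learning_rate * gradient
        path.append(x.copy())
    return x, path

# Example on the convex bowl f(x, y) = x^2 + y^2, whose gradient is (2x, 2y).
bowl_grad = lambda p: 2.0 * p
minimum, path = gradient_descent(bowl_grad, start=[2.0, 1.5], learning_rate=0.1)
print(minimum)  # approaches (0, 0)
```

With the default rate of 0.1 each step shrinks the bowl example's coordinates by a factor of 0.8, which is why the path spirals steadily into the origin rather than diverging.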
Loss Surfaces
Convex Bowl: A single global minimum; convergence is guaranteed for a sufficiently small learning rate.
Saddle Point: Contains flat regions where the gradient nearly vanishes and optimization can stall.
Local Minima: Multiple valleys; descent may settle in a suboptimal one depending on the start point.
Rosenbrock: A narrow, curved valley makes progress slow and the optimization challenging. (Textbook definitions of all four surfaces are sketched below.)
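The app does not publish its surface formulas, so the following are standard textbook choices assumed for illustration: a quadratic bowl, the saddle x^2 - y^2, a sinusoid-perturbed bowl for the local-minima surface, and the classic Rosenbrock function.

```python
import numpy as np

# Standard textbook formulas assumed for each surface; the app's exact
# definitions may differ.
def convex_bowl(x, y):
    return x**2 + y**2                     # one global minimum at (0, 0)

def saddle(x, y):
    return x**2 - y**2                     # gradient vanishes at (0, 0) without a minimum

def local_minima(x, y):
    # A bowl perturbed by sinusoids, carving multiple valleys.
    return x**2 + y**2 + 2.0 * np.sin(3.0 * x) * np.sin(3.0 * y)

def rosenbrock(x, y, a=1.0, b=100.0):
    return (a - x)**2 + b * (y - x**2)**2  # narrow curved valley; minimum at (a, a^2)
```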