Gradient Descent Visualizer

3D visualization of gradient descent optimization on various loss surfaces

Configuration
  • Learning rate: 0.1

Help

Gradient Descent
  • Algorithm: Iteratively move in the direction of steepest descent (the negative gradient)
  • Learning Rate: Step size; too large a value causes divergence, too small makes progress slow
  • Convergence: Stops when the gradient magnitude falls below a small tolerance
  • Update Rule: x_new = x_old - learning_rate * gradient (see the sketch after this list)
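
A minimal sketch of the loop these bullets describe, in Python. The helper name gradient_descent, the tolerance tol, and the starting point are illustrative assumptions, not the visualizer's actual code.

    import numpy as np

    def gradient_descent(grad, x0, learning_rate=0.1, tol=1e-6, max_iters=1000):
        # Follow the update rule x_new = x_old - learning_rate * gradient
        # until the gradient magnitude is very small (convergence) or the
        # iteration budget runs out.
        x = np.asarray(x0, dtype=float)
        path = [x.copy()]                      # iterates, useful for plotting the descent path
        for _ in range(max_iters):
            g = grad(x)
            if np.linalg.norm(g) < tol:        # convergence test on gradient magnitude
                break
            x = x - learning_rate * g          # gradient descent step
            path.append(x.copy())
        return x, path

    # Example on the convex bowl f(x, y) = x^2 + y^2, whose gradient is (2x, 2y):
    x_min, _ = gradient_descent(lambda x: 2 * x, x0=[3.0, -2.0])
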
Loss Surfaces
  • Convex Bowl: Single global minimum; gradient descent converges for any sufficiently small learning rate
  • Saddle Point: The gradient vanishes at a point that is neither a minimum nor a maximum, so optimization can stall there
  • Local Minima: Multiple valleys; the optimizer may get stuck in a suboptimal one
  • Rosenbrock: A long, narrow, curved valley makes progress toward the minimum slow (see the function sketches after this list)
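
Standard textbook forms of these surfaces, as a sketch; the multi-valley function and the exact scaling the visualizer uses are assumptions.

    import numpy as np

    def convex_bowl(x, y):
        return x**2 + y**2                     # single global minimum at (0, 0)

    def saddle(x, y):
        return x**2 - y**2                     # gradient vanishes at (0, 0), a saddle point

    def local_minima(x, y):
        # Hypothetical multi-valley surface: sinusoidal ripples on a bowl
        # create several suboptimal local minima around the global one.
        return x**2 + y**2 + 5 * np.sin(x) * np.sin(y)

    def rosenbrock(x, y, a=1.0, b=100.0):
        # Classic Rosenbrock function: a narrow curved valley along y = x^2,
        # with the global minimum at (a, a^2).
        return (a - x)**2 + b * (y - x**2)**2

On the Rosenbrock surface the default learning rate of 0.1 typically diverges; a much smaller step is needed, which is what makes it a challenging test case for plain gradient descent.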