Last updated: 2025-06-29 (UTC)

In this exercise, you'll revisit the graph of fuel-efficiency data from
the [Parameters exercise](/machine-learning/crash-course/linear-regression/parameters-exercise). This time, you'll
use gradient descent to learn the optimal weight and bias values for a linear
model that minimizes loss.

Complete the three tasks below the graph.
| This interactive visualization can produce flashing visuals when set to a high Learning Rate, which may affect photosensitive individuals.

**Task #1:** Adjust the **Learning Rate** slider below the graph to set a
learning rate of 0.03. Click the **Start** button to run gradient descent.

How long does model training take to converge (reach a stable minimum
loss value)? What is the MSE value at convergence? What weight and bias
values produce this value?

Click the plus icon to see our solution
When we set a learning rate of 0.03, the model converged after
approximately 30 seconds, achieving an MSE of just under 3 with weight and
bias values of -2.08 and 23.098, respectively. This indicates we've
picked a good learning rate value.
| **Note:** The data points in the graph vary slightly each time you load the page, so your solutions here may be a little different from ours.

**Task #2:** Click the **Reset** button below the graph to reset the Weight and
Bias values in the graph. Adjust the **Learning Rate** slider to a value around
1.10e-5. Click the **Start** button to run gradient descent.

What do you notice about how long model training takes to converge
this time?

Click the plus icon to see the solution
After several minutes, model training still hasn't converged. Small
updates to the Weight and Bias values continue to result in slightly lower
loss values.
This suggests that picking a higher learning rate would
enable gradient descent to find the optimal weight and bias values more
quickly.

**Task #3:** Click the **Reset** button below the graph to reset the Weight
and Bias values in the graph. Adjust the **Learning Rate** slider up to 1.
Click the **Start** button to run gradient descent.

What happens to the loss values as gradient descent runs? How long will model
training take to converge this time?

Click the plus icon to see the solution
Loss values fluctuate wildly at high values (MSE over 300).
This indicates that the learning rate is too high, and model training
will never reach convergence.
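The behavior you observed in the three tasks can be reproduced with a few lines of plain Python. The sketch below is not the optimizer behind the interactive graph; it runs batch gradient descent on synthetic data (an assumption for illustration) that lies exactly on the line y = -2x + 23, so the optimal weight and bias are known in advance and are close to the Task #1 solution.

```python
# Minimal batch gradient descent for a linear model y = w*x + b.
# The data is synthetic (assumed for illustration, not the actual points
# from the interactive graph): it lies exactly on y = -2x + 23, so the
# optimal parameters are w = -2.0, b = 23.0 with an MSE of 0.

def gradient_descent(xs, ys, learning_rate, steps):
    """Fit y = w*x + b by minimizing MSE; return (w, b, final MSE)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Partial derivatives of MSE = mean((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= learning_rate * grad_w
        b -= learning_rate * grad_b
    mse = sum((w * x + b - y) * (w * x + b - y) for x, y in zip(xs, ys)) / n
    return w, b, mse

xs = [2.0, 2.5, 3.0, 3.5, 4.0, 4.5]     # e.g. car weight in 1000s of lbs
ys = [23.0 - 2.0 * x for x in xs]       # fuel efficiency on y = -2x + 23

# Task #1: a well-chosen learning rate converges to the optimum.
w, b, mse = gradient_descent(xs, ys, learning_rate=0.03, steps=20000)
print(w, b, mse)   # w approaches -2.0, b approaches 23.0, MSE near 0

# Task #2: a tiny learning rate keeps lowering loss, but far too slowly.
w, b, mse = gradient_descent(xs, ys, learning_rate=1.1e-5, steps=20000)
print(mse)         # still large: loss creeps down without converging

# Task #3: too large a learning rate overshoots the minimum every step.
w, b, mse = gradient_descent(xs, ys, learning_rate=1.0, steps=50)
print(mse)         # enormous and growing: training never converges
```

The divergence in the last run is the same phenomenon as the wildly fluctuating MSE in Task #3: each update overshoots the minimum by more than the previous error, so the parameters swing farther away on every step.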