Last updated: 2024-10-15 (UTC).

**Key points:**

- Logistic regression models output probabilities, which can be used directly or converted to binary categories.
- The sigmoid function ensures the output of logistic regression is always between 0 and 1, representing a probability.
- A logistic regression model uses a linear equation and the sigmoid function to calculate the probability of an event.
- The log-odds (*z*) represent the log of the ratio of probabilities for the two possible outcomes.

Many problems require a probability estimate as output.
[**Logistic regression**](/machine-learning/glossary#logistic_regression) is
an extremely efficient mechanism for calculating probabilities. Practically
speaking, you can use the returned probability in either of the following
two ways:

- Applied "as is." For example, if a spam-prediction model takes an email as
  input and outputs a value of `0.932`, this implies a `93.2%` probability that
  the email is spam.

- Converted to a [**binary category**](/machine-learning/glossary#binary-classification)
  such as `True` or `False`, `Spam` or `Not Spam`.

This module focuses on using logistic regression model output as-is. In the
[Classification module](../classification/index.md), you'll learn how to
convert this output into a binary category.

## Sigmoid function

You might be wondering how a logistic regression model can ensure its output
represents a probability, always outputting a value between 0 and 1.
As it
happens, there's a family of functions called **logistic functions**
whose output has those same characteristics. The standard logistic function,
also known as the
[**sigmoid function**](/machine-learning/glossary#sigmoid-function)
(*sigmoid* means "s-shaped"), has the formula:

$$f(x) = \frac{1}{1 + e^{-x}}$$

Figure 1 shows the corresponding graph of the sigmoid function.

**Figure 1.** Graph of the sigmoid function. The curve approaches 0 as *x* values decrease toward negative infinity, and 1 as *x* values increase toward infinity.

As the input, `x`, increases, the output of the sigmoid function approaches
but never reaches `1`. Similarly, as the input decreases, the sigmoid
function's output approaches but never reaches `0`.

**A deeper dive into the math behind the sigmoid function**

The table below shows the output values of the sigmoid function for
input values in the range -7 to 7. Note how quickly the sigmoid approaches
0 for decreasing negative input values, and how quickly the sigmoid approaches
1 for increasing positive input values.

However, no matter how large or how small the input value, the output will
always be greater than 0 and less than 1.

| Input | Sigmoid output |
|-------|----------------|
| -7    | 0.001          |
| -6    | 0.002          |
| -5    | 0.007          |
| -4    | 0.018          |
| -3    | 0.047          |
| -2    | 0.119          |
| -1    | 0.269          |
| 0     | 0.500          |
| 1     | 0.731          |
| 2     | 0.881          |
| 3     | 0.952          |
| 4     | 0.982          |
| 5     | 0.993          |
| 6     | 0.997          |
| 7     | 0.999          |

## Transforming linear output using the sigmoid function

The following equation represents the linear component of a logistic
regression model:

$$z = b + w_1x_1 + w_2x_2 + \ldots + w_Nx_N$$

where:

- *z* is the output of the linear equation, also called the
  [**log odds**](/machine-learning/glossary#log-odds).
- *b* is the bias.
- The *w* values are the model's learned weights.
- The *x* values are the feature values for a particular
example.

To obtain the logistic regression prediction, the *z* value is then passed to
the sigmoid function, yielding a value (a probability) between 0 and 1:

$$y' = \frac{1}{1 + e^{-z}}$$

where:

- *y'* is the output of the logistic regression model.
- *z* is the linear output (as calculated in the preceding equation).

**More about log-odds**

In the equation $z = b + w_1x_1 + w_2x_2 + \ldots + w_Nx_N$, *z*
is referred to as the *log-odds* because if you start with the
following sigmoid function (where $y$ is the output of a logistic
regression model, representing a probability):

$$y = \frac{1}{1 + e^{-z}}$$

and then solve for *z*:

$$z = \log\left(\frac{y}{1-y}\right)$$

then *z* is the log of the ratio of the probabilities of the two possible
outcomes: $y$ and $1 - y$.

Figure 2 illustrates how linear output is transformed to logistic regression
output using these calculations.

**Figure 2.** Left: graph of the linear function z = 2x + 5, with three points highlighted. Right: sigmoid curve with the same three points highlighted after being transformed by the sigmoid function.

In Figure 2, a linear equation becomes input to the sigmoid function,
which bends the straight line into an s-shape. Notice that the linear equation
can output very big or very small values of *z*, but the output of the sigmoid
function, *y'*, is always between 0 and 1, exclusive.
For example, the yellow
square on the left graph has a *z* value of -10, but the sigmoid function in the
right graph maps that -10 into a *y'* value of 0.00004.

## Exercise: Check your understanding

A logistic regression model with three features has the following bias and
weights:

$$\begin{align}
b &= 1 \\
w_1 &= 2 \\
w_2 &= -1 \\
w_3 &= 5
\end{align}$$

Given the following input values:

$$\begin{align}
x_1 &= 0 \\
x_2 &= 10 \\
x_3 &= 2
\end{align}$$

Answer the following two questions.

**1. What is the value of *z* for these input values?**

- -1
- 0
- 0.731
- 1

**Correct answer: 1.** The linear equation defined by the weights and bias is
$z = 1 + 2x_1 - x_2 + 5x_3$. Plugging the input values into the equation
produces $z = 1 + (2)(0) - (10) + (5)(2) = 1$.

**2. What is the logistic regression prediction for these input values?**

- 0.268
- 0.5
- 0.731
- 1

**Correct answer: 0.731.** As calculated in question 1, the log-odds for these
input values is 1. Plugging that value for *z* into the sigmoid function:

$$y = \frac{1}{1 + e^{-z}} = \frac{1}{1 + e^{-1}} = \frac{1}{1 + 0.368} = \frac{1}{1.368} = 0.731$$

(The answer can't be 1: remember, the output of the sigmoid function is always
greater than 0 and less than 1.)

**Key terms:**

- [Binary classification](/machine-learning/glossary#binary-classification)
- [Log odds](/machine-learning/glossary#log-odds)
- [Logistic regression](/machine-learning/glossary#logistic_regression)
- [Sigmoid function](/machine-learning/glossary#sigmoid-function)

[Help Center](https://support.google.com/machinelearningeducation)