[[["เข้าใจง่าย","easyToUnderstand","thumb-up"],["แก้ปัญหาของฉันได้","solvedMyProblem","thumb-up"],["อื่นๆ","otherUp","thumb-up"]],[["ไม่มีข้อมูลที่ฉันต้องการ","missingTheInformationINeed","thumb-down"],["ซับซ้อนเกินไป/มีหลายขั้นตอนมากเกินไป","tooComplicatedTooManySteps","thumb-down"],["ล้าสมัย","outOfDate","thumb-down"],["ปัญหาเกี่ยวกับการแปล","translationIssue","thumb-down"],["ตัวอย่าง/ปัญหาเกี่ยวกับโค้ด","samplesCodeIssue","thumb-down"],["อื่นๆ","otherDown","thumb-down"]],["อัปเดตล่าสุด 2024-08-13 UTC"],[[["\u003cp\u003eAggregate model performance metrics like precision, recall, and accuracy can hide biases against minority groups.\u003c/p\u003e\n"],["\u003cp\u003eFairness in model evaluation involves ensuring equitable outcomes across different demographic groups.\u003c/p\u003e\n"],["\u003cp\u003eThis page explores various fairness metrics, including demographic parity, equality of opportunity, and counterfactual fairness, to assess model predictions for bias.\u003c/p\u003e\n"],["\u003cp\u003eEvaluating model predictions with these metrics helps in identifying and mitigating potential biases that can negatively affect minority groups.\u003c/p\u003e\n"],["\u003cp\u003eThe goal is to develop models that not only achieve good overall performance but also ensure fair treatment for all individuals, regardless of their demographic background.\u003c/p\u003e\n"]]],[],null,["When evaluating a model, metrics calculated against an entire test or validation\nset don't always give an accurate picture of how fair the model is.\nGreat model performance overall for a majority of examples may mask poor\nperformance on a minority subset of examples, which can result in biased\nmodel predictions. Using aggregate performance metrics such as\n[**precision**](/machine-learning/glossary#precision),\n[**recall**](/machine-learning/glossary#recall),\nand [**accuracy**](/machine-learning/glossary#accuracy) is not necessarily going\nto expose these issues.\n\nWe can revisit our [admissions model](/machine-learning/crash-course/fairness) and explore some new techniques\nfor how to evaluate its predictions for bias, with fairness in mind.\n\nSuppose the admissions classification model selects 20 students to admit to the\nuniversity from a pool of 100 candidates, belonging to two demographic groups:\nthe majority group (blue, 80 students) and the minority group\n(orange, 20 students).\n**Figure 1.** Candidate pool of 100 students: 80 students belong to the majority group (blue), and 20 students belong to the minority group (orange).\n\nThe model must admit qualified students in a manner that is fair to the\ncandidates in both demographic groups.\n\nHow should we evaluate the model's predictions for fairness? There are a variety\nof metrics we can consider, each of which provides a different mathematical\ndefinition of \"fairness.\" In the following sections, we'll explore three of\nthese fairness metrics in depth: demographic parity, equality of opportunity,\nand counterfactual fairness.\n| **Key terms:**\n|\n| - [Accuracy](/machine-learning/glossary#accuracy)\n| - [Bias (ethics/fairness)](/machine-learning/glossary#bias-ethicsfairness)\n| - [Precision](/machine-learning/glossary#precision)\n- [Recall](/machine-learning/glossary#recall) \n[Help Center](https://support.google.com/machinelearningeducation)"]]