[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["上次更新時間:2025-07-27 (世界標準時間)。"],[[["\u003cp\u003eML models should benefit society and avoid causing harm, bias, or misuse of personal data.\u003c/p\u003e\n"],["\u003cp\u003eGoogle's AI principles emphasize fairness, privacy, transparency, and safety in ML development.\u003c/p\u003e\n"],["\u003cp\u003eFairness in models requires addressing potential biases in training data and ensuring equitable outcomes for all user groups.\u003c/p\u003e\n"],["\u003cp\u003ePrivacy considerations involve adhering to relevant regulations, protecting personal data, and ensuring secure data handling practices.\u003c/p\u003e\n"],["\u003cp\u003eTransparency and safety involve making model functionality understandable, documenting model details, and designing models to operate securely and reliably.\u003c/p\u003e\n"]]],[],null,["# AI and ML ethics and safety\n\nML has the potential to transform society in many meaningful ways,\neither positively or negatively. It's critical to consider the ethical\nimplications of your models and the systems they're a part of.\nYour ML projects should benefit society. They shouldn't cause harm or be susceptible to misuse. They shouldn't perpetuate, reinforce, or exacerbate biases or prejudices. They shouldn't collect or use personal data irresponsibly.\n\n\u003cbr /\u003e\n\nGoogle's AI principles\n----------------------\n\nGoogle advocates developing ML and AI applications that adhere to its\n[Responsible AI principles](https://ai.google/responsibility/principles/).\n\nBeyond adhering to responsible AI principles, aim to develop systems\nthat incorporate the following:\n\n- Fairness\n- Privacy\n- Transparency\n- Safety\n\n### Fairness\n\nAvoid creating or reinforcing unfair\n[bias](/machine-learning/glossary#bias-ethicsfairness).\nModels exhibit bias when their\ntraining data has some of the following characteristics:\n\n- Doesn't reflect the real-world population of\n their users.\n\n- Preserves biased decisions or outcomes, for example, criminal justice\n decisions like incarceration times.\n\n- Uses features with more predictive power for certain groups of users.\n\nThe previous examples are just some ways models become biased. Understanding\nyour data thoroughly is critical for uncovering and resolving any potential\nbiases it contains. The first step for developing fair models is verifying the\ntraining data accurately reflects the distribution of your users. The following\nare further practices to help create fair models:\n\n- Identify underrepresented groups in evaluation datasets or groups that might\n experience worse model quality compared to other groups. 
### Privacy

Incorporate privacy design principles from the beginning.

The following are privacy-related laws and policies to be aware of and adhere
to:

- The [European Union's Digital Markets Act (DMA)](https://wikipedia.org/wiki/Digital_Markets_Act),
  for consent to share or use personal data.

- The European Union's
  [General Data Protection Regulation (GDPR)](https://wikipedia.org/wiki/General_Data_Protection_Regulation).

Moreover, be sure to remove all personally identifiable information (PII) from
datasets, and confirm that your model and data repositories are set up with the
right permissions, for example, not world-readable.

### Transparency

Be accountable to people. For example, make it easy for others to understand
what your model does, how it does it, and why it does it.
[Model cards](https://modelcards.withgoogle.com/face-detection) provide a
template to document your model and create transparency artifacts.

### Safety

Design models to operate safely in adversarial conditions. For example, test
your model with potentially hostile inputs to confirm that it's secure.
Furthermore, check for potential failure conditions. Teams typically use
specially designed datasets to test their models with inputs or conditions that
caused the model to fail in the past (a minimal sketch of such a check appears
at the end of this page).

### Check Your Understanding

You're developing a model to quickly approve auto loans. What ethical
implications should you consider?

- Does the model perpetuate existing biases or stereotypes?
- Does the model serve predictions with low-enough latency?
- Can the model be deployed to devices, like phones?

The ethical implication to consider is whether the model perpetuates existing
biases or stereotypes. Models should be trained on high-quality datasets that
have been inspected for potential implicit biases or prejudices.

Always consider the broader social contexts your models operate within. Work to
be sure your handling of sensitive data doesn't violate anyone's privacy,
perpetuate bias, or infringe on someone else's intellectual property.
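To illustrate the safety practice described above, the following is a minimal
sketch of a regression-style check that replays known-problematic inputs
through a model. The `predict` function and the hostile inputs are hypothetical
placeholders, not a real API; substitute your own inference call and the inputs
your team has collected from past failures.

```python
# Minimal hostile-input regression check. The `predict` function below is a
# hypothetical placeholder; replace it with your real inference call. The
# inputs are illustrative; teams typically curate them from past failures.

HOSTILE_INPUTS = [
    "",                         # empty input
    "a" * 100_000,              # extremely long input
    "'; DROP TABLE users;--",   # injection-style text
    "😀" * 500,                 # unusual Unicode
]

def predict(text: str) -> float:
    """Placeholder model: returns a score in [0.0, 1.0]."""
    return min(len(text) / 1000.0, 1.0)

def check_hostile_inputs() -> None:
    for text in HOSTILE_INPUTS:
        score = predict(text)  # must not raise an exception
        assert 0.0 <= score <= 1.0, f"score out of range for {text[:20]!r}"

if __name__ == "__main__":
    check_hostile_inputs()
    print("All hostile-input checks passed.")
```

Teams often run checks like this in continuous integration so that a regression
on a previously fixed failure is caught before a model ships.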