This lesson focuses on the questions you should ask about your data and model in production systems.

Key points:

- Continuously monitor models in production to evaluate feature importance and potentially remove unnecessary features, ensuring prediction quality and resource efficiency.
- Data reliability is crucial; consider data source stability and potential changes in upstream data processes, and create local copies of the data to control versioning and mitigate risk.
- Be aware of feedback loops, in which a model's predictions influence its future input data, potentially leading to unexpected behavior or biased outcomes, especially in interconnected systems.
- Regularly assess your model by asking whether its features are truly helpful and whether their value outweighs the cost of including them, aiming for a balance between prediction accuracy and maintainability.
- Evaluate whether your model is susceptible to a feedback loop, and take steps to isolate it if it is.

Is each feature helpful?

You should continuously monitor your model to remove features that contribute little or nothing to the model's predictive ability. If the input data for that feature abruptly changes, your model's behavior might also abruptly change in undesirable ways.

Also consider the following related question:

- Does the usefulness of the feature justify the cost of including it?

It is always tempting to add more features to the model. For example, suppose you find a new feature whose addition makes your model's predictions slightly better. Slightly better predictions certainly seem better than slightly worse predictions; however, the extra feature adds to your maintenance burden.
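One concrete way to do this monitoring is to periodically recompute each feature's contribution on held-out data and flag features whose contribution stays near zero. The sketch below is a minimal illustration using scikit-learn's permutation importance; the synthetic dataset, the random-forest model, and the feature names are assumptions made for the example, not part of this lesson's system.

```python
# A minimal sketch: estimate how much each feature contributes by measuring
# how much the validation score degrades when that feature's values are shuffled.
# The dataset, model, and feature names below are made up for illustration.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2_000, n_features=5, n_informative=3, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Permutation importance on held-out data: features whose importance is
# near zero are candidates for removal (after checking cost and stability).
result = permutation_importance(model, X_val, y_val, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name}: {mean:.4f} +/- {std:.4f}")
```

A feature whose permutation importance is consistently near zero across retraining runs is a candidate for removal, provided validation metrics do not degrade when it is dropped.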
[[["易于理解","easyToUnderstand","thumb-up"],["解决了我的问题","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["没有我需要的信息","missingTheInformationINeed","thumb-down"],["太复杂/步骤太多","tooComplicatedTooManySteps","thumb-down"],["内容需要更新","outOfDate","thumb-down"],["翻译问题","translationIssue","thumb-down"],["示例/代码问题","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],["最后更新时间 (UTC):2025-07-27。"],[[["\u003cp\u003eContinuously monitor models in production to evaluate feature importance and potentially remove unnecessary ones, ensuring prediction quality and resource efficiency.\u003c/p\u003e\n"],["\u003cp\u003eData reliability is crucial; consider data source stability, potential changes in upstream data processes, and create local data copies to control versioning and mitigate risks.\u003c/p\u003e\n"],["\u003cp\u003eBe aware of feedback loops where a model's predictions influence future input data, potentially leading to unexpected behavior or biased outcomes, especially in interconnected systems.\u003c/p\u003e\n"],["\u003cp\u003eRegularly assess your model by asking if features are truly helpful and if their value outweighs the costs of inclusion, aiming for a balance between prediction accuracy and maintainability.\u003c/p\u003e\n"],["\u003cp\u003eEvaluate if your model is susceptible to a feedback loop and take steps to isolate it if you find it is.\u003c/p\u003e\n"]]],[],null,["This lesson focuses on the questions you should ask about your data\nand model in production systems.\n\nIs each feature helpful?\n\nYou should continuously monitor your model to remove features that contribute\nlittle or nothing to the model's predictive ability. If the input data for\nthat feature abruptly changes, your model's behavior might also abruptly\nchange in undesirable ways.\n\nAlso consider the following related question:\n\n- Does the usefulness of the feature justify the cost of including it?\n\nIt is always tempting to add more features to the model. For example,\nsuppose you find a new feature whose addition makes your model's predictions\nslightly better. Slightly better predictions certainly seem better than\nslightly worse predictions; however, the extra feature adds to your\nmaintenance burden.\n\nIs your data source reliable?\n\nSome questions to ask about the reliability of your input data:\n\n- Is the signal always going to be available or is it coming from an unreliable source? For example:\n - Is the signal coming from a server that crashes under heavy load?\n - Is the signal coming from humans that go on vacation every August?\n- Does the system that computes your model's input data ever change? If so:\n - How often?\n - How will you know when that system changes?\n\nConsider creating your own copy of the data you receive from the\nupstream process. Then, only advance to the next version of the upstream\ndata when you are certain that it is safe to do so.\n\nIs your model part of a feedback loop?\n\nSometimes a model can affect its own training data. For example, the\nresults from some models, in turn, become (directly or indirectly) input\nfeatures to that same model.\n\nSometimes a model can affect another model. For example, consider two\nmodels for predicting stock prices:\n\n- Model A, which is a bad predictive model.\n- Model B.\n\nSince Model A is buggy, it mistakenly decides to buy stock in Stock X.\nThose purchases drive up the price of Stock X. 
Is your model part of a feedback loop?

Sometimes a model can affect its own training data. For example, the results from some models, in turn, become (directly or indirectly) input features to that same model.

Sometimes a model can affect another model. For example, consider two models for predicting stock prices:

- Model A, which is a bad predictive model.
- Model B.

Since Model A is buggy, it mistakenly decides to buy shares of Stock X. Those purchases drive up the price of Stock X. Model B uses the price of Stock X as an input feature, so it can come to false conclusions about the value of Stock X. Model B could, therefore, buy or sell shares of Stock X based on the buggy behavior of Model A. Model B's behavior, in turn, can affect Model A, possibly triggering a [tulip mania](https://wikipedia.org/wiki/Tulip_mania) or a slide in Company X's stock price.

Exercise: Check your understanding

Which **three** of the following models are susceptible to a feedback loop?

- A traffic-forecasting model that predicts congestion at highway exits near the beach, using beach crowd size as one of its features.
  **Susceptible.** Some beachgoers are likely to base their plans on the traffic forecast. If there is a large beach crowd and traffic is forecast to be heavy, many people may make alternative plans. This may depress beach turnout, resulting in a lighter traffic forecast, which then may increase attendance, and the cycle repeats.
- A book-recommendation model that suggests novels its users may like based on their popularity (that is, the number of times the books have been purchased).
  **Susceptible.** Book recommendations are likely to drive purchases, and these additional sales will be fed back into the model as input, making it more likely to recommend these same books in the future.
- A university-ranking model that rates schools in part by their selectivity: the percentage of applicants who were admitted.
  **Susceptible.** The model's rankings may drive additional interest to top-rated schools, increasing the number of applications they receive. If these schools continue to admit the same number of students, selectivity will increase (the percentage of applicants admitted will go down). This will boost these schools' rankings, which will further increase prospective-student interest, and so on.
- An election-results model that forecasts the winner of a mayoral race by surveying 2% of voters after the polls have closed.
  **Not susceptible.** Because the model does not publish its forecast until after the polls have closed, its predictions cannot affect voter behavior.
- A housing-value model that predicts house prices, using size (area in square meters), number of bedrooms, and geographic location as features.
  **Not susceptible.** It is not possible to quickly change a house's location, size, or number of bedrooms in response to price forecasts, so a feedback loop is unlikely. However, there is potentially a correlation between size and number of bedrooms (larger homes are likely to have more rooms) that may need to be teased apart.
- A face-attributes model that detects whether a person is smiling in a photo, and that is regularly trained on a database of stock photography that is automatically updated monthly.
  **Not susceptible.** There is no feedback loop here, because the model's predictions have no impact on the photo database. However, versioning of the input data is a concern, because the monthly updates could have unforeseen effects on the model.
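To make the feedback-loop dynamic from the stock example above concrete, here is a toy simulation in which two simplistic "models" are coupled through the price signal they both consume and move. The update rules and constants are invented purely for illustration and are not a model of real markets.

```python
# A toy simulation of two models coupled through a shared signal (a price).
# Model A buys whenever the price rose, Model B trades on the price that
# Model A's purchases have already distorted, and every trade nudges the
# price further. All constants and update rules are invented for illustration.

def model_a_order(price_change: float) -> float:
    """Buggy trend-follower: buys after any price increase."""
    return 1.0 if price_change > 0 else 0.0


def model_b_order(price: float, fair_value: float) -> float:
    """Treats the (distorted) market price as evidence about value."""
    return 0.5 if price < fair_value else -0.5


price, prev_price, fair_value = 100.0, 99.0, 100.0
for step in range(10):
    demand = model_a_order(price - prev_price) + model_b_order(price, fair_value)
    prev_price = price
    price += 0.8 * demand                        # the models' trades move the price they consume
    fair_value = 0.9 * fair_value + 0.1 * price  # B's estimate drifts toward the distorted price
    print(f"step {step}: price={price:.2f}, demand={demand:+.1f}")
```

Even in this crude setup, the price ratchets steadily away from its starting value because each model keeps reacting to a signal that the other model's actions have already distorted; isolating a production model from signals its own predictions influence is what breaks this kind of loop.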