Numerical data is often recorded by scientific instruments or automated
measurements. Categorical data, on the other hand, is often categorized
by human beings or by machine learning (ML) models. *Who* decides on
categories and labels, and *how* they make those decisions, affects the
reliability and usefulness of that data.

## Human raters

Data manually labeled by human beings is often referred to as *gold labels*,
and is considered more desirable than machine-labeled data for training
models, due to its relatively higher quality.

This doesn't necessarily mean that any set of human-labeled data is of high
quality. Human errors, bias, and malice can be introduced at the point of
data collection or during data cleaning and processing. Check for them
before training.

Any two human beings may label the same example differently. The difference
between human raters' decisions is called
[**inter-rater agreement**](/machine-learning/glossary#inter-rater-agreement).
You can get a sense of the variance in raters' opinions by using multiple
raters per example and measuring inter-rater agreement.

Common ways to measure inter-rater agreement include:

- Cohen's kappa and variants
- Intra-class correlation (ICC)
- Krippendorff's alpha

For details on Cohen's kappa and intra-class correlation, see
[Hallgren 2012](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3402032/).
For details on Krippendorff's alpha, see
[Krippendorff 2011](https://www.asc.upenn.edu/sites/default/files/2021-03/Computing%20Krippendorff%27s%20Alpha-Reliability.pdf).
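To make this concrete, here's a minimal sketch of computing pairwise
inter-rater agreement with Cohen's kappa, using scikit-learn's
`cohen_kappa_score`. The two raters' labels below are made up for
illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two raters for the same ten examples.
rater_1 = ["spam", "spam", "ham", "ham", "spam", "ham", "ham", "spam", "ham", "ham"]
rater_2 = ["spam", "ham",  "ham", "ham", "spam", "ham", "spam", "spam", "ham", "ham"]

# Raw percent agreement ignores agreement expected by chance.
percent_agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)

# Cohen's kappa corrects for chance agreement:
#   kappa = (p_o - p_e) / (1 - p_e)
# where p_o is observed agreement and p_e is the agreement expected
# by chance given each rater's label frequencies.
kappa = cohen_kappa_score(rater_1, rater_2)

print(f"Percent agreement: {percent_agreement:.2f}")  # 0.80
print(f"Cohen's kappa:     {kappa:.2f}")              # ~0.58
```

A kappa of 1.0 means perfect agreement and 0.0 means agreement no better
than chance, so the two raters here agree only moderately. Low agreement on
a slice of your data can flag ambiguous examples or unclear labeling
instructions.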
## Machine raters

Machine-labeled data, where categories are automatically determined by one
or more classification models, is often referred to as *silver labels*.
Machine-labeled data can vary widely in quality. Check it not only for
accuracy and biases but also for violations of common sense, reality, and
intention. For example, if a computer-vision model mislabels a photo of a
[chihuahua as a muffin](https://www.freecodecamp.org/news/chihuahua-or-muffin-my-search-for-the-best-computer-vision-api-cbda4d6b425d/),
or a photo of a muffin as a chihuahua, models trained on that labeled data
will be of lower quality.

Similarly, a sentiment analyzer that scores neutral words as -0.25, when 0.0
is the neutral value, might be scoring all words with an additional negative
bias that is not actually present in the data. An oversensitive toxicity
detector may falsely flag many neutral statements as toxic. Try to get a
sense of the quality and biases of machine labels and annotations in your
data before training on that data.

## High dimensionality

Categorical data tends to produce high-dimensional feature vectors; that is,
feature vectors with a large number of elements. High dimensionality
increases training costs and makes training more difficult. For these
reasons, ML experts often seek ways to reduce the number of dimensions prior
to training.

For natural-language data, the main method of reducing dimensionality is to
convert feature vectors to embedding vectors. This is discussed in the
[Embeddings module](/machine-learning/crash-course/embeddings) later in this
course. A short sketch at the end of this page contrasts the size of the two
representations.

| **Key terms:**
|
| - [Inter-rater agreement](/machine-learning/glossary#inter-rater-agreement)
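To illustrate the dimensionality problem discussed above, here's a minimal
sketch in NumPy contrasting a one-hot feature vector with a lower-dimensional
embedding. The vocabulary size, embedding size, and word index are made-up
values, and random numbers stand in for the learned embedding weights:

```python
import numpy as np

VOCAB_SIZE = 50_000    # Hypothetical vocabulary of 50,000 distinct words.
EMBEDDING_DIM = 64     # Hypothetical embedding size.

word_index = 1024      # Index of some word in the vocabulary.

# One-hot encoding: a 50,000-element vector with a single 1.
one_hot = np.zeros(VOCAB_SIZE)
one_hot[word_index] = 1.0

# Embedding: a lookup table mapping each word to a dense 64-element
# vector. Random values stand in for learned weights here.
embedding_table = np.random.rand(VOCAB_SIZE, EMBEDDING_DIM)
embedding = embedding_table[word_index]

print(one_hot.shape)    # (50000,) -- one dimension per category
print(embedding.shape)  # (64,)    -- far fewer dimensions
```

In a real model, the embedding table's weights are learned during training
rather than drawn at random; the point here is only the difference in vector
length that the model must process per example.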