## Key points

- Imbalanced datasets occur when one label (the majority class) is significantly more frequent than another (the minority class), which can hinder model training on the minority class.
- Downsampling the majority class and then upweighting it can improve model performance by balancing class representation and reducing prediction bias.
- Experimenting with rebalancing ratios is crucial for good performance, ensuring batches contain enough minority class examples for effective training.
- Upweighting the minority class is simpler but tends to increase prediction bias compared to downsampling and upweighting the majority class.
- Downsampling offers benefits like faster convergence and less disk space usage but usually requires manual effort, especially for large datasets.

Consider a dataset containing a categorical label whose value is either *Positive* or *Negative*. In a **balanced** dataset, the number of *Positive* and *Negative* labels is about equal. However, if one label is more common than the other label, then the dataset is [**imbalanced**](/machine-learning/glossary#class_imbalanced_data_set). The predominant label in an imbalanced dataset is called the [**majority class**](/machine-learning/glossary#majority_class); the less common label is called the [**minority class**](/machine-learning/glossary#minority_class).

The following table provides generally accepted names and ranges for different degrees of imbalance:

| Percentage of data belonging to minority class | Degree of imbalance |
|------------------------------------------------|---------------------|
| 20-40% of the dataset | Mild |
| 1-20% of the dataset | Moderate |
| <1% of the dataset | Extreme |

For example, consider a virus detection dataset in which the minority class represents 0.5% of the dataset and the majority class represents 99.5%. Extremely imbalanced datasets like this one are common in medicine since most subjects won't have the virus.

**Figure 5.** Extremely imbalanced dataset.

Imbalanced datasets sometimes don't contain enough minority class examples to train a model properly. That is, with so few positive labels, the model trains almost exclusively on negative labels and can't learn enough about positive labels. For example, if the batch size is 50, many batches would contain no positive labels.
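To make that concrete, here is a quick back-of-the-envelope sketch, assuming the 0.5% positive rate from the virus example and a batch size of 50 (the numbers and variable names are illustrative, not from the course): roughly 78% of randomly drawn batches contain no positive labels at all.

```python
import numpy as np

# Back-of-the-envelope check (hypothetical numbers from the virus example):
# how often does a randomly drawn batch contain zero positive labels?
positive_rate = 0.005   # minority class is 0.5% of the dataset
batch_size = 50

# Closed form: probability that all 50 i.i.d. examples are negative.
p_all_negative = (1 - positive_rate) ** batch_size
print(f"P(batch has no positives) ≈ {p_all_negative:.2f}")   # ≈ 0.78

# Simulated check over 10,000 random batches.
rng = np.random.default_rng(seed=0)
batches = rng.random((10_000, batch_size)) < positive_rate
all_negative_fraction = (batches.sum(axis=1) == 0).mean()
print(f"Simulated fraction of all-negative batches ≈ {all_negative_fraction:.2f}")
```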
Often, especially for mildly imbalanced and some moderately imbalanced datasets, imbalance isn't a problem. So, you should **first try training on the original dataset.** If the model works well, you're done. If not, at least the suboptimal model provides a good [**baseline**](/machine-learning/glossary#baseline) for future experiments. Afterwards, you can try the following techniques to overcome problems caused by imbalanced datasets.

## Downsampling and Upweighting

One way to handle an imbalanced dataset is to downsample and upweight the majority class. Here are the definitions of those two new terms:

- [**Downsampling**](/machine-learning/glossary#downsampling) (in this context) means training on a disproportionately low subset of the majority class examples.
- [**Upweighting**](/machine-learning/glossary#upweighting) means adding an example weight to the downsampled class equal to the factor by which you downsampled.

**Step 1: Downsample the majority class.** Consider the virus dataset shown in Figure 5, which has a ratio of 1 positive label for every 200 negative labels. Downsampling by a factor of 10 improves the balance to 1 positive for every 20 negatives (5%). Although the resulting training set is still *moderately imbalanced*, the proportion of positives to negatives is much better than the original *extremely imbalanced* proportion (0.5%).

**Figure 6.** Downsampling.

**Step 2: Upweight the downsampled class.** Add example weights to the downsampled class. After downsampling by a factor of 10, the example weight should be 10. (Yes, this might seem counterintuitive, but we'll explain why later on.)

**Figure 7.** Upweighting.

The term *weight* doesn't refer to model parameters (such as \(w_1\) or \(w_2\)). Here, *weight* refers to *example weights*, which increase the importance of an individual example during training. An example weight of 10 means the model treats the example as 10 times as important (when computing loss) as it would an example of weight 1.

The *weight* should be equal to the factor you used to downsample:

\[\text{example weight} = \text{original example weight} \times \text{downsampling factor}\]

It may seem odd to add example weights after downsampling. After all, you are trying to make the model improve on the minority class, so why upweight the majority class? In fact, upweighting the majority class tends to reduce [**prediction bias**](/machine-learning/glossary#prediction-bias). That is, upweighting after downsampling tends to reduce the delta between the average of your model's predictions and the average of your dataset's labels.

You might also be wondering whether upweighting cancels out downsampling. Yes, to some degree. However, the combination of upweighting and downsampling enables [**mini-batches**](/machine-learning/glossary#mini-batch) to contain enough minority class examples to train an effective model.

Upweighting the *minority class* by itself is usually easier to implement than downsampling and upweighting the *majority class*. However, upweighting the minority class tends to increase prediction bias.
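As a concrete illustration of the two steps, here is a minimal NumPy sketch, assuming binary labels where 1 marks the minority (positive) class; the helper name, the toy dataset, and the factor of 10 are illustrative assumptions, not part of the course. Frameworks that support per-example weights, such as Keras through the `sample_weight` argument of `Model.fit`, can consume the resulting weights directly.

```python
import numpy as np

# Minimal sketch of downsampling and upweighting the majority class.
# Assumes binary labels where 1 is the minority (positive) class; the helper
# name, the toy dataset, and the factor of 10 are illustrative only.
def downsample_and_upweight(features, labels, factor=10, seed=0):
    rng = np.random.default_rng(seed)
    majority_idx = np.flatnonzero(labels == 0)
    minority_idx = np.flatnonzero(labels == 1)

    # Step 1: keep only 1/factor of the majority class examples.
    kept_majority = rng.choice(
        majority_idx, size=len(majority_idx) // factor, replace=False)
    kept = np.concatenate([kept_majority, minority_idx])
    rng.shuffle(kept)

    # Step 2: give the downsampled (majority) class an example weight equal
    # to the downsampling factor; minority examples keep a weight of 1.
    weights = np.where(labels[kept] == 0, float(factor), 1.0)
    return features[kept], labels[kept], weights

# Toy dataset with 1 positive per 200 negatives (0.5% positives), as in Figure 5.
labels = (np.arange(20_000) % 200 == 0).astype(int)
features = np.random.default_rng(1).normal(size=(20_000, 3))

x, y, w = downsample_and_upweight(features, labels, factor=10)
print(round(y.mean(), 3))    # ~0.048: about 1 positive per 20 negatives
print(w.min(), w.max())      # 1.0 10.0
# The weights can then be passed as per-example weights during training,
# for example via Keras: model.fit(x, y, sample_weight=w).
```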
Downsampling the majority class brings the following benefits:

- **Faster convergence**: During training, the model sees the minority class more often, which helps the model converge faster.
- **Less disk space**: Consolidating the majority class into fewer examples with larger weights means the training set takes up less disk space. That savings leaves more disk space for the minority class, so you can collect a greater number and a wider range of examples from that class.

Unfortunately, you must usually downsample the majority class manually, which can be time-consuming during training experiments, particularly for very large datasets.

## Rebalance ratios

How much should you downsample and upweight to rebalance your dataset? To determine the answer, you should experiment with the rebalancing ratio, just as you would experiment with other [**hyperparameters**](/machine-learning/glossary#hyperparameter). That said, the answer ultimately depends on the following factors:

- The batch size
- The imbalance ratio
- The number of examples in the training set

Ideally, each batch should contain multiple minority class examples. Batches that don't contain sufficient minority class examples will train very poorly. The batch size should be several times greater than the imbalance ratio. For example, if the imbalance ratio is 100:1, then the batch size should be at least 500, so that each batch averages about five minority class examples.

## Exercise: Check your understanding

Consider the following situation:

- The training set contains a little over one billion examples.
- The batch size is 128.
- The imbalance ratio is 100:1, so the training set is divided as follows:
  - ~1 billion majority class examples.
  - ~10 million minority class examples.

Which of the following statements are true?

- **Increasing the batch size to 1,024 will improve the resulting model.** True. With a batch size of 1,024, each batch will average about 10 minority class examples, which should help to train a much better model.
- **Keeping the batch size at 128 but downsampling (and upweighting) the majority class by a factor of 20 will improve the resulting model.** True. Thanks to downsampling, each batch of 128 will average about 21 minority class examples, which should be sufficient for training a useful model. Note that downsampling reduces the number of examples in the training set from a little over one billion to about 60 million.
- **The current hyperparameters are fine.** False. With a batch size of 128, each batch will average about 1 minority class example, which might be insufficient to train a useful model.

**Key terms:**

- [Baseline](/machine-learning/glossary#baseline)
- [Class-imbalanced dataset](/machine-learning/glossary#class_imbalanced_data_set)
- [Dataset](/machine-learning/glossary#dataset)
- [Downsampling](/machine-learning/glossary#downsampling)
- [Hyperparameter](/machine-learning/glossary#hyperparameter)
- [Majority class](/machine-learning/glossary#majority_class)
- [Mini-batch](/machine-learning/glossary#mini-batch)
- [Minority class](/machine-learning/glossary#minority_class)
- [Prediction bias](/machine-learning/glossary#prediction-bias)
- [Upweighting](/machine-learning/glossary#upweighting)