whether to include the fully-connected layer at the top of the network.
weights
one of None (random initialization), "imagenet" (pre-training on ImageNet), or the path to the weights file to be loaded.
input_tensor
optional Keras tensor (i.e. output of layers.Input()) to use as image input for the model.
input_shape
optional shape tuple, only to be specified if include_top is False (otherwise the input shape has to be (224, 224, 3) with "channels_last" data format or (3, 224, 224) with "channels_first" data format). It should have exactly 3 input channels, and width and height should be no smaller than 32. E.g. (200, 200, 3) would be one valid value; see the transfer-learning sketch below.
pooling
Optional pooling mode for feature extraction when include_top is False.
None means that the output of the model will be the 4D tensor output of the last convolutional block.
avg means that global average pooling will be applied to the output of the last convolutional block, and thus the output of the model will be a 2D tensor.
max means that global max pooling will be applied.
classes
optional number of classes to classify images into, only to be specified if include_top is True and no weights argument is specified (i.e. weights=None).
classifier_activation
A str or callable. The activation function to use on the "top" layer. Ignored unless include_top=True. Set classifier_activation=None to return the logits of the "top" layer. When loading pretrained weights, classifier_activation can only be None or "softmax".
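A minimal classification sketch using the default arguments (include_top=True, weights="imagenet", classes=1000, classifier_activation="softmax"); the image path "elephant.jpg" is only a placeholder.

    import numpy as np
    from tensorflow import keras

    # Defaults: ImageNet weights, 1000-class softmax head.
    model = keras.applications.ResNet101(weights="imagenet")

    # With include_top=True the input must be 224x224 RGB.
    img = keras.utils.load_img("elephant.jpg", target_size=(224, 224))
    x = keras.utils.img_to_array(img)[np.newaxis, ...]  # add batch dimension
    x = keras.applications.resnet.preprocess_input(x)   # RGB -> BGR, zero-center per channel

    preds = model.predict(x)
    # Map the 1000-way softmax output to human-readable ImageNet labels.
    print(keras.applications.resnet.decode_predictions(preds, top=3)[0])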
Returns
A Model instance.
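For transfer learning, a minimal sketch assuming a hypothetical 5-class task: with include_top=False a custom input_shape is allowed, and pooling="avg" collapses the 4D output of the last convolutional block into a 2D (batch, channels) tensor.

    from tensorflow import keras

    # Frozen ResNet101 backbone: no ImageNet head, custom input size,
    # global average pooling so the backbone outputs a 2D feature tensor.
    base = keras.applications.ResNet101(
        include_top=False,
        weights="imagenet",
        input_shape=(200, 200, 3),  # 3 channels, width/height >= 32
        pooling="avg",
    )
    base.trainable = False  # keep the pretrained weights fixed

    # Hypothetical 5-class head stacked on the pooled features.
    inputs = keras.Input(shape=(200, 200, 3))
    x = keras.applications.resnet.preprocess_input(inputs)
    x = base(x, training=False)
    outputs = keras.layers.Dense(5, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

Freezing the backbone first is the usual starting point; some of its layers can later be unfrozen for fine-tuning.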