tf.keras.layers.GRUCell
Cell class for the GRU layer.
Inherits From: Layer, Operation
tf.keras.layers.GRUCell(
    units,
    activation='tanh',
    recurrent_activation='sigmoid',
    use_bias=True,
    kernel_initializer='glorot_uniform',
    recurrent_initializer='orthogonal',
    bias_initializer='zeros',
    kernel_regularizer=None,
    recurrent_regularizer=None,
    bias_regularizer=None,
    kernel_constraint=None,
    recurrent_constraint=None,
    bias_constraint=None,
    dropout=0.0,
    recurrent_dropout=0.0,
    reset_after=True,
    seed=None,
    **kwargs
)
This class processes one step within the whole time sequence input, whereas keras.layers.GRU processes the whole sequence.
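For illustration, here is a minimal sketch, not part of the official example, of advancing the state by a single timestep with a GRUCell (the variable names x_t and h_prev are illustrative):

import numpy as np
import keras

cell = keras.layers.GRUCell(4)
x_t = np.random.random((32, 8)).astype("float32")   # input at one timestep: (batch, features)
h_prev = np.zeros((32, 4), dtype="float32")         # previous state: (batch, units)
output, new_states = cell(x_t, [h_prev])            # output has shape (32, 4)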
Args |
units | Positive integer, dimensionality of the output space. |
activation | Activation function to use. Default: hyperbolic tangent (tanh). If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x). |
recurrent_activation | Activation function to use for the recurrent step. Default: sigmoid. If you pass None, no activation is applied (i.e. "linear" activation: a(x) = x). |
use_bias | Boolean (default True), whether the layer should use a bias vector. |
kernel_initializer | Initializer for the kernel weights matrix, used for the linear transformation of the inputs. Default: "glorot_uniform". |
recurrent_initializer | Initializer for the recurrent_kernel weights matrix, used for the linear transformation of the recurrent state. Default: "orthogonal". |
bias_initializer | Initializer for the bias vector. Default: "zeros". |
kernel_regularizer | Regularizer function applied to the kernel weights matrix. Default: None. |
recurrent_regularizer | Regularizer function applied to the recurrent_kernel weights matrix. Default: None. |
bias_regularizer | Regularizer function applied to the bias vector. Default: None. |
kernel_constraint | Constraint function applied to the kernel weights matrix. Default: None. |
recurrent_constraint | Constraint function applied to the recurrent_kernel weights matrix. Default: None. |
bias_constraint | Constraint function applied to the bias vector. Default: None. |
dropout | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the inputs. Default: 0. |
recurrent_dropout | Float between 0 and 1. Fraction of the units to drop for the linear transformation of the recurrent state. Default: 0. |
reset_after | GRU convention (whether to apply the reset gate after or before the matrix multiplication). False = "before", True = "after" (default and cuDNN compatible); see the sketch after this table. |
seed | Random seed for dropout. |
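As a rough, hedged sketch of what reset_after changes in the candidate-state computation (the function and variable names below are illustrative, not the layer's internals):

import numpy as np

def candidate_state(x, h, r, W, U, b_in, b_rec, reset_after=True):
    # x: step input, h: previous state, r: reset gate activations,
    # W/U: input and recurrent kernels, b_in/b_rec: biases (illustrative names).
    if reset_after:
        # "after" (default, cuDNN-compatible): the reset gate scales the
        # recurrent term after the matrix multiplication.
        return np.tanh(x @ W + b_in + r * (h @ U + b_rec))
    # "before": the reset gate scales the previous state before the
    # recurrent matrix multiplication.
    return np.tanh(x @ W + b_in + (r * h) @ U)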
Call arguments |
inputs | A 2D tensor, with shape (batch, features) . |
states | A 2D tensor with shape (batch, units) , which is the state from the previous time step. |
training | Python boolean indicating whether the layer should behave in training mode or in inference mode. Only relevant when dropout or recurrent_dropout is used. |
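When dropout or recurrent_dropout is non-zero, the training flag decides whether the dropout masks are applied. A hedged sketch (the setup variables are illustrative):

import numpy as np
import keras

cell = keras.layers.GRUCell(4, dropout=0.2, recurrent_dropout=0.1, seed=0)
x_t = np.random.random((32, 8)).astype("float32")
h_prev = np.zeros((32, 4), dtype="float32")
out_train, _ = cell(x_t, [h_prev], training=True)    # dropout masks applied
out_infer, _ = cell(x_t, [h_prev], training=False)   # no dropout at inference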
Example:
import numpy as np
import keras

inputs = np.random.random((32, 10, 8))
rnn = keras.layers.RNN(keras.layers.GRUCell(4))
output = rnn(inputs)
output.shape
# (32, 4)
rnn = keras.layers.RNN(
    keras.layers.GRUCell(4),
    return_sequences=True,
    return_state=True)
whole_sequence_output, final_state = rnn(inputs)
whole_sequence_output.shape
# (32, 10, 4)
final_state.shape
# (32, 4)
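To make the relationship between the cell and the RNN wrapper concrete, here is a hedged sketch, not from the official example, of stepping through the time axis manually with get_initial_state:

import numpy as np
import keras

inputs = np.random.random((32, 10, 8)).astype("float32")
cell = keras.layers.GRUCell(4)
states = cell.get_initial_state(batch_size=32)   # a list holding a zero state of shape (32, 4)
for t in range(inputs.shape[1]):
    output, states = cell(inputs[:, t, :], states)
# After the loop, output is the last step's output with shape (32, 4),
# matching the rnn(inputs) result above.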
Attributes |
input | Retrieves the input tensor(s) of a symbolic operation. Only returns the tensor(s) corresponding to the first time the operation was called. |
output | Retrieves the output tensor(s) of a layer. Only returns the tensor(s) corresponding to the first time the operation was called. |
Methods
from_config
@classmethod
from_config(config)
Creates a layer from its config.
This method is the reverse of get_config, capable of instantiating the same layer from the config dictionary. It does not handle layer connectivity (handled by Network), nor weights (handled by set_weights).
Args |
config | A Python dictionary, typically the output of get_config. |
Returns |
A layer instance. |
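For example, a cell can be round-tripped through its config (a minimal sketch; hyperparameters are preserved, weights are not):

import keras

cell = keras.layers.GRUCell(4, recurrent_dropout=0.1)
config = cell.get_config()
restored = keras.layers.GRUCell.from_config(config)   # same hyperparameters, fresh weights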
get_dropout_mask
get_dropout_mask(step_input)
get_initial_state
get_initial_state(batch_size=None)
get_recurrent_dropout_mask
get_recurrent_dropout_mask(step_input)
reset_dropout_mask
reset_dropout_mask()
Reset the cached dropout mask if any.
The RNN layer invokes this in the call() method so that the cached mask is cleared after calling cell.call(). The mask should be cached across all timesteps within the same batch, but shouldn't be cached between batches.
reset_recurrent_dropout_mask
reset_recurrent_dropout_mask()
symbolic_call
symbolic_call(*args, **kwargs)