tf.experimental.dtensor.relayout
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/dtensor/python/api.py#L411-L449
Changes the layout of `tensor`.
    tf.experimental.dtensor.relayout(
        tensor: tf.Tensor,
        layout: tf.experimental.dtensor.Layout,
        name: Optional[str] = None
    ) -> tf.Tensor
Used in the notebooks
Used in the guide: DTensor concepts (https://www.tensorflow.org/guide/dtensor_overview)
Changes the layout of `tensor` to `layout`. This is used to fine-tune the behavior of ops following or connected to `tensor`, such as choosing one SPMD expansion pattern over another. It works by forward-propagating `layout` to the connected TensorFlow computation graphs during layout propagation.
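For illustration, here is a minimal sketch of a typical call. The two-device CPU mesh, the tensor values, and the mesh dimension name "x" are assumptions made for this example, not part of the API contract:

    import tensorflow as tf
    from tensorflow.experimental import dtensor

    # Expose two logical CPU devices so a 1-D mesh can be built on one host.
    cpu = tf.config.list_physical_devices("CPU")[0]
    tf.config.set_logical_device_configuration(
        cpu, [tf.config.LogicalDeviceConfiguration()] * 2)

    mesh = dtensor.create_mesh([("x", 2)], device_type="CPU")

    # Start from a fully replicated rank-2 DTensor ...
    replicated = dtensor.Layout.replicated(mesh, rank=2)
    t = dtensor.copy_to_mesh(tf.ones([4, 4]), replicated)

    # ... and request that the first tensor axis be sharded over mesh dim "x".
    sharded = dtensor.Layout(["x", dtensor.UNSHARDED], mesh)
    t_sharded = dtensor.relayout(t, sharded)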
Currently, only converting layouts from replicated to sharded or sharded to replicated per mesh dimension is supported. That is, "x, y" -> "unsharded, y" is supported, while "x, y" -> "z, y" is not supported.
We also support a special "match" sharding spec, which instructs the relayout to act as an identity operation with respect to any sharding on these mesh dimensions.
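As a sketch of the "match" spec, continuing the example above and assuming the `tf.experimental.dtensor.MATCH` constant is used as the per-dimension sharding spec:

    # Leave whatever sharding the first axis already has untouched, while
    # explicitly requesting that the second axis stay unsharded.
    match_layout = dtensor.Layout([dtensor.MATCH, dtensor.UNSHARDED], mesh)
    t_matched = dtensor.relayout(t_sharded, match_layout)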
Relayout is internally lowered to a set of Split and/or AllToAll ops. When tensor layouts are converted from replicated to sharded, the cost is comparatively low because we only insert Split ops and no cross-device communication is needed. However, when tensor layouts are converted from sharded to replicated, cross-device communication may occur, causing potential performance impact.
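Continuing the same sketch, the reverse conversion is the direction that may involve cross-device communication:

    # Sharded -> replicated: may lower to AllToAll-style collectives, so this
    # is the comparatively expensive direction.
    t_replicated_again = dtensor.relayout(t_sharded, replicated)
    print(dtensor.fetch_layout(t_replicated_again))  # fully replicated layout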
| Args | |
|---|---|
| `tensor` | A DTensor to specify a new layout for. |
| `layout` | A `Layout` object specifying a new sharding spec. |
| `name` | Name of the Op. |
| Returns | |
|---|---|
| A DTensor output from the Relayout op. | |