Virtual Try-On lets you generate images of people modeling clothing products. You provide an image of a person and a sample clothing product, and then you use Virtual Try-On to generate images of the person wearing the product.

For a complete example of an API request, see the Sample request section on this page.
Image input options
When you provide an image of a person or a clothing product, you can specify it as either a Base64-encoded byte string or a Cloud Storage URI. The following table provides a comparison of these two options to help you decide which one to use.
| Option | Description | Pros | Cons | Use case |
| --- | --- | --- | --- | --- |
| bytesBase64Encoded | The image data is sent directly within the JSON request body. | Simple for smaller images; no need for a separate storage step. | Increases request size; not suitable for very large images due to JSON payload limits. | Quick tests or applications where images are generated or processed on the fly and not stored long-term. |
| gcsUri | A URI pointing to an image file stored in a Cloud Storage bucket. | Efficient for large images; keeps the request payload small. | Requires uploading the image to Cloud Storage first, which adds an extra step. | Batch processing, workflows where images are already stored in Cloud Storage, or when dealing with large image files. |
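To make the comparison concrete, the following minimal Python sketch builds the image union field both ways. The local filename and the Cloud Storage path are placeholders, not values from this page.

```python
import base64

# Option 1: bytesBase64Encoded -- embed the image bytes directly in the request body.
# "person.png" is a hypothetical local file.
with open("person.png", "rb") as f:
    person_image = {"bytesBase64Encoded": base64.b64encode(f.read()).decode("utf-8")}

# Option 2: gcsUri -- reference an image that is already stored in Cloud Storage.
# The bucket and object names are placeholders.
product_image = {"gcsUri": "gs://my-bucket/products/shirt.png"}
```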
Supported model versions
Virtual Try-On supports the following model:
virtual-try-on-preview-08-04
For more information about the features that the model supports, see Imagen models.
HTTP request
To generate an image, send a POST request to the model's predict endpoint.
```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:predict \
  -d '{
    "instances": [
      {
        "personImage": {
          "image": {
            // Union field can be only one of the following:
            "bytesBase64Encoded": string,
            "gcsUri": string,
          }
        },
        "productImages": [
          {
            "image": {
              // Union field can be only one of the following:
              "bytesBase64Encoded": string,
              "gcsUri": string,
            }
          }
        ]
      }
    ],
    "parameters": {
      "addWatermark": boolean,
      "baseSteps": integer,
      "personGeneration": string,
      "safetySetting": string,
      "sampleCount": integer,
      "seed": integer,
      "storageUri": string,
      "outputOptions": {
        "mimeType": string,
        "compressionQuality": integer
      }
    }
  }'
```
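For reference, a rough Python sketch of the same predict call is shown below. It assumes the third-party requests library is installed and that the gcloud CLI is authenticated; the project ID, region, and Cloud Storage URIs are placeholders.

```python
import json
import subprocess

import requests  # third-party HTTP client, assumed to be installed

PROJECT_ID = "my-project"   # placeholder
LOCATION = "us-central1"    # placeholder; use a supported region
MODEL_ID = "virtual-try-on-preview-08-04"

# Reuse the gcloud CLI for an access token, as in the curl example above.
token = subprocess.check_output(
    ["gcloud", "auth", "print-access-token"], text=True
).strip()

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/google/models/{MODEL_ID}:predict"
)

body = {
    "instances": [
        {
            "personImage": {"image": {"gcsUri": "gs://my-bucket/people/person.png"}},
            "productImages": [
                {"image": {"gcsUri": "gs://my-bucket/products/shirt.png"}}
            ],
        }
    ],
    "parameters": {"sampleCount": 1},
}

response = requests.post(
    url,
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    data=json.dumps(body),
)
print(response.json())
```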
Instances
personImage (string)
Required. An image of a person to try on the clothing product, which can be either of the following:
- A bytesBase64Encoded string that encodes an image.
- A gcsUri string URI to a Cloud Storage bucket location.

productImages (string)
Required. An image of a clothing product to try on the person, which can be either of the following:
- A bytesBase64Encoded string that encodes an image.
- A gcsUri string URI to a Cloud Storage bucket location.
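Because personImage and productImages each carry their own union field, the two input options can be combined in one instance. The following sketch pairs a Cloud Storage person image with an inline product image; the filename and bucket path are placeholders.

```python
import base64

# Hypothetical local product photo, encoded for the bytesBase64Encoded field.
with open("shirt.png", "rb") as f:
    product_b64 = base64.b64encode(f.read()).decode("utf-8")

# One instance: the person image comes from Cloud Storage, the product image is sent inline.
instances = [
    {
        "personImage": {"image": {"gcsUri": "gs://my-bucket/people/person.png"}},
        "productImages": [{"image": {"bytesBase64Encoded": product_b64}}],
    }
]
```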
Parameters
addWatermark (bool)
Optional. Add an invisible watermark to the generated images. The default value is true.

baseSteps (int)
Required. An integer that controls image generation. Higher step counts produce higher-quality images at the cost of increased latency. Integer values greater than 0. The default value is 32.

personGeneration (string)
Optional. Allow generation of people by the model. The following values are supported:
- "dont_allow": Disallow the inclusion of people or faces in images.
- "allow_adult": Allow generation of adults only.
- "allow_all": Allow generation of people of all ages.
The default value is "allow_adult".

safetySetting (string)
Optional. Adds a filter level to safety filtering. The following values are supported:
- "block_low_and_above": Strongest filtering level, most strict blocking. Deprecated value: "block_most".
- "block_medium_and_above": Block some problematic prompts and responses. Deprecated value: "block_some".
- "block_only_high": Reduces the number of requests blocked due to safety filters. May increase objectionable content generated by Imagen. Deprecated value: "block_few".
- "block_none": Block very few problematic prompts and responses. Access to this feature is restricted. Previous field value: "block_fewest".
The default value is "block_medium_and_above".

sampleCount (int)
Required. The number of images to generate. An integer value between 1 and 4, inclusive. The default value is 1.

seed (Uint32)
Optional. The random seed for image generation. This isn't available when addWatermark is set to true.

storageUri (string)
Optional. A string URI to a Cloud Storage bucket location to store the generated images.
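As a sketch of how these parameters fit together, the following values are illustrative only; the Cloud Storage path is a placeholder, seed is honored only because addWatermark is set to false, and the outputOptions subfield is described in the next section.

```python
# Illustrative parameter values; adjust them for your use case.
parameters = {
    "addWatermark": False,             # disabled so that "seed" can be used
    "baseSteps": 32,                   # the default step count
    "personGeneration": "allow_adult",
    "safetySetting": "block_medium_and_above",
    "sampleCount": 2,                  # between 1 and 4
    "seed": 42,                        # only available when addWatermark is false
    "storageUri": "gs://my-bucket/try-on-output/",  # placeholder output location
    "outputOptions": {"mimeType": "image/png"},
}
```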
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-27 UTC."],[],[],null,["# Virtual Try-On API\n\n| **Preview**\n|\n|\n| This product or feature is a Generative AI Preview offering, subject to\n| the \"Pre-GA Offerings Terms\" of the\n| [Google Cloud Service Specific Terms](/terms/service-terms),\n| as well as the\n| [Additional Terms for Generative AI Preview Products](/trustedtester/aitos). For this\n| Generative AI Preview offering, Customers may elect to use it for\n| production or commercial purposes, or disclose Generated Output to\n| third-parties, and may process personal data as outlined in the\n| [Cloud Data Processing\n| Addendum](/terms/data-processing-addendum),\n| subject to the obligations and restrictions described in the agreement\n| under which you access Google Cloud. Pre-GA products are available \"as is\"\n| and might have limited support. For more information, see the\n| [launch stage descriptions](/products#product-launch-stages).\n\nVirtual Try-On lets you generate images of people modeling clothing products. You\nprovide an image of a person and a sample clothing product, and then you use\nVirtual Try-On to generate images of the person wearing the product.\n\nSupported model versions\n------------------------\n\nVirtual Try-On supports the following models:\n\n- `virtual-try-on-preview-08-04`\n\nFor more information about the features that the model supports, see\n[Imagen\nmodels](/vertex-ai/generative-ai/docs/models#imagen-models).\n\nHTTP request\n------------\n\n curl -X POST \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json\" \\\n https://\u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e-aiplatform.googleapis.com/v1/projects/\u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e/locations/\u003cvar translate=\"no\"\u003eLOCATION\u003c/var\u003e/publishers/google/models/\u003cvar translate=\"no\"\u003eMODEL_ID\u003c/var\u003e:predict \\\n\n -d '{\n \"instances\": [\n {\n \"personImage\": {\n \"image\": {\n // Union field can be only one of the following:\n \"bytesBase64Encoded\": string,\n \"gcsUri\": string,\n }\n },\n \"productImages\": [\n {\n \"image\": {\n // Union field can be only one of the following:\n \"bytesBase64Encoded\": string,\n \"gcsUri\": string,\n }\n }\n ]\n }\n ],\n \"parameters\": {\n \"addWatermark\": boolean,\n \"baseSteps\": integer,\n \"personGeneration\": string,\n \"safetySetting\": string,\n \"sampleCount\": integer,\n \"seed\": integer,\n \"storageUri\": string,\n \"outputOptions\": {\n \"mimeType\": string,\n \"compressionQuality\": integer\n }\n }\n }'\n\n### Output options object\n\nThe `outputOptions` object describes the image output.\n\nSample request\n--------------\n\n### REST\n\n\nBefore using any of the request data,\nmake the following replacements:\n\n- \u003cvar translate=\"no\"\u003eREGION\u003c/var\u003e: The region that your project is located in. 
For more information about supported regions, see [Generative AI on Vertex AI\n locations](/vertex-ai/generative-ai/docs/learn/locations).\n- \u003cvar translate=\"no\"\u003ePROJECT_ID\u003c/var\u003e: Your Google Cloud [project ID](/resource-manager/docs/creating-managing-projects#identifiers).\n- \u003cvar translate=\"no\"\u003eBASE64_PERSON_IMAGE\u003c/var\u003e: The Base64-encoded image of the person image.\n- \u003cvar translate=\"no\"\u003eBASE64_PRODUCT_IMAGE\u003c/var\u003e: The Base64-encoded image of the product image.\n- \u003cvar translate=\"no\"\u003eIMAGE_COUNT\u003c/var\u003e: The number of images to generate. The accepted range of values is `1` to `4`.\n- \u003cvar translate=\"no\"\u003eGCS_OUTPUT_PATH\u003c/var\u003e: The Cloud Storage path to store the virtual try-on output to.\n\n\nHTTP method and URL:\n\n```\nPOST https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/publishers/google/models/virtual-try-on-preview-08-04:predict\n```\n\n\nRequest JSON body:\n\n```\n{\n \"instances\": [\n {\n \"personImage\": {\n \"image\": {\n \"bytesBase64Encoded\": \"BASE64_PERSON_IMAGE\"\n }\n },\n \"productImages\": [\n {\n \"image\": {\n \"bytesBase64Encoded\": \"BASE64_PRODUCT_IMAGE\"\n }\n }\n ]\n }\n ],\n \"parameters\": {\n \"sampleCount\": IMAGE_COUNT,\n \"storageUri\": \"GCS_OUTPUT_PATH\"\n }\n}\n```\n\nTo send your request, choose one of these options: \n\n#### curl\n\n| **Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login) , or by using [Cloud Shell](/shell/docs), which automatically logs you into the `gcloud` CLI . You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).\n\n\nSave the request body in a file named `request.json`,\nand execute the following command:\n\n```\ncurl -X POST \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json; charset=utf-8\" \\\n -d @request.json \\\n \"https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/publishers/google/models/virtual-try-on-preview-08-04:predict\"\n```\n\n#### PowerShell\n\n| **Note:** The following command assumes that you have logged in to the `gcloud` CLI with your user account by running [`gcloud init`](/sdk/gcloud/reference/init) or [`gcloud auth login`](/sdk/gcloud/reference/auth/login) . You can check the currently active account by running [`gcloud auth list`](/sdk/gcloud/reference/auth/list).\n\n\nSave the request body in a file named `request.json`,\nand execute the following command:\n\n```\n$cred = gcloud auth print-access-token\n$headers = @{ \"Authorization\" = \"Bearer $cred\" }\n\nInvoke-WebRequest `\n -Method POST `\n -Headers $headers `\n -ContentType: \"application/json; charset=utf-8\" `\n -InFile request.json `\n -Uri \"https://REGION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/REGION/publishers/google/models/virtual-try-on-preview-08-04:predict\" | Select-Object -Expand Content\n```\nThe request returns image objects. In this example, two image objects are returned, with two prediction objects as base64-encoded images.\n\n```\n{\n \"predictions\": [\n {\n \"mimeType\": \"image/png\",\n \"bytesBase64Encoded\": \"BASE64_IMG_BYTES\"\n },\n {\n \"bytesBase64Encoded\": \"BASE64_IMG_BYTES\",\n \"mimeType\": \"image/png\"\n }\n ]\n}\n```\n\n\u003cbr /\u003e"]]
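When the response contains Base64-encoded predictions, as in the example above, a short sketch like the following decodes them into image files; the response.json filename is a placeholder for wherever you saved the response.

```python
import base64
import json

# Load a saved copy of the API response shown above.
with open("response.json") as f:
    response = json.load(f)

# Decode each base64-encoded prediction and write it out as a PNG file.
for i, prediction in enumerate(response["predictions"]):
    with open(f"virtual_try_on_{i}.png", "wb") as out:
        out.write(base64.b64decode(prediction["bytesBase64Encoded"]))
```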