This guide shows you how to use Imagen to perform outpainting, which expands the content of an image to a larger area or an area with different dimensions.

Outpainting is a mask-based editing method that you can use to expand the content of a base image to fit a larger or differently sized mask canvas.

View Imagen for Editing and Customization model card

To run the samples on this page, first complete the following steps:

1. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
2. Verify that billing is enabled for your Google Cloud project.
3. Enable the Vertex AI API.
4. Set up authentication for your environment. Select the tab for how you plan to use the samples on this page:

Console

When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

Java

To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials:

1. Install the Google Cloud CLI. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
2. If you're using a local shell, create local authentication credentials for your user account by running gcloud auth application-default login. You don't need to do this if you're using Cloud Shell.

If an authentication error is returned and you're using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

Node.js

To use the Node.js samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials:
1. Install the Google Cloud CLI. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
2. If you're using a local shell, create local authentication credentials for your user account by running gcloud auth application-default login. You don't need to do this if you're using Cloud Shell.

If an authentication error is returned and you're using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

Python

To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials:

1. Install the Google Cloud CLI. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
2. If you're using a local shell, create local authentication credentials for your user account by running gcloud auth application-default login. You don't need to do this if you're using Cloud Shell.

If an authentication error is returned and you're using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

REST

To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI. Install the Google Cloud CLI. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity. For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Use the following code samples to expand the content of an existing image. The first set of samples sends an outpainting request using the Imagen 3 model.
Imagen 3 — Console

1. In the Google Cloud console, go to the Vertex AI > Media Studio page.
2. Click Upload. In the file dialog, select a file to upload.
3. Click Outpaint.
4. In the Outpaint menu, select one of the predefined aspect ratios for your final image, or click Custom to define custom dimensions.
5. In the editing toolbar, select the placement of your image.
6. Optional: In the Parameters panel, adjust the available options.
7. In the prompt field, enter a prompt to modify the image.
8. Click the generate button.

Imagen 3 — Python

To learn more, see the SDK reference documentation. Set environment variables to use the Gen AI SDK with Vertex AI.

Imagen 3 — REST

For more information, see the Edit images API reference.

Use the following samples to send an outpainting request using the Imagen 2 model.

Imagen 2 — Console

1. In the Google Cloud console, go to the Vertex AI > Media Studio page.
2. In the lower task panel, select the edit option, and then click Upload to select a locally stored image to edit.
3. In the editing toolbar, click the outpaint tool.
4. Select one of the predefined aspect ratios for your final image, or click Custom to define custom dimensions.
5. Optional: In the editing toolbar, select the placement of your original image within the output canvas.
6. Optional: In the Parameters panel, adjust the Number of results or other parameters.
7. Click the generate button.

Imagen 2 — Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Python API reference documentation.

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries.
For more information, see the Vertex AI Java API reference documentation. To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation. To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
Outpainting example
Image source: Kari Shea on Unsplash.
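Before you can send an outpainting request, you choose an output canvas and decide where the base image sits on it; everything outside the base image becomes the masked region for the model to fill. The geometry can be sketched as follows. This is a minimal illustration, not part of any SDK; the function name `outpaint_canvas` and its alignment parameters are hypothetical.

```python
def outpaint_canvas(base_w: int, base_h: int, target_ratio: float,
                    h_align: float = 0.5, v_align: float = 0.5):
    """Compute the output canvas size for a target aspect ratio (w/h),
    and the offset at which the base image is placed.

    h_align/v_align in [0, 1]: 0 = left/top, 0.5 = centered, 1 = right/bottom.
    Returns (canvas_w, canvas_h, x_offset, y_offset).
    """
    if target_ratio >= base_w / base_h:
        # Target is wider than the base image: extend horizontally.
        canvas_w, canvas_h = round(base_h * target_ratio), base_h
    else:
        # Target is taller than the base image: extend vertically.
        canvas_w, canvas_h = base_w, round(base_w / target_ratio)
    x = round((canvas_w - base_w) * h_align)
    y = round((canvas_h - base_h) * v_align)
    return canvas_w, canvas_h, x, y
```

The outpainting mask is then white (editable) everywhere except the `base_w` x `base_h` rectangle at `(x, y)`, which stays black (preserved).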
Before you begin
Console: no additional authentication setup is needed.

Java, Node.js, or Python: create local Application Default Credentials:

gcloud auth application-default login

REST: use the credentials that you provide to the gcloud CLI.
Expand the content of an image
| Model | Description | Pros | Use case |
| --- | --- | --- | --- |
| Imagen 3 | The latest image generation model. | Higher image quality, better prompt understanding, more features. | Recommended for new projects and for higher quality results. |
| Imagen 2 | A previous generation model. Deprecated. | Supports existing workflows built on this version. | Legacy applications or during migration to Imagen 3. |

Imagen 3
Console
Python
Install
pip install --upgrade google-genai
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=us-central1
export GOOGLE_GENAI_USE_VERTEXAI=True
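With these environment variables set, an Imagen 3 outpainting call with the Gen AI SDK can be sketched as follows. This is a hedged sketch, not the page's original sample: the file names `base.png` and `mask.png` are placeholders, and the call follows the `google-genai` SDK's image-editing types (`RawReferenceImage`, `MaskReferenceImage`, `EditImageConfig`); check the SDK reference documentation for the exact API.

```python
# Sketch of an Imagen 3 outpainting request with the google-genai SDK.
# Assumes the environment variables above are set and that base.png and
# mask.png exist locally; all file names here are placeholders.
from google import genai
from google.genai.types import (
    EditImageConfig,
    Image,
    MaskReferenceConfig,
    MaskReferenceImage,
    RawReferenceImage,
)

client = genai.Client()  # reads GOOGLE_GENAI_USE_VERTEXAI and related vars

raw_ref = RawReferenceImage(
    reference_id=1,
    reference_image=Image.from_file(location="base.png"),
)
mask_ref = MaskReferenceImage(
    reference_id=2,
    reference_image=Image.from_file(location="mask.png"),
    config=MaskReferenceConfig(
        mask_mode="MASK_MODE_USER_PROVIDED",
        mask_dilation=0.03,  # value recommended for outpainting on this page
    ),
)

response = client.models.edit_image(
    model="imagen-3.0-capability-001",
    prompt="",  # an empty prompt is allowed for outpainting
    reference_images=[raw_ref, mask_ref],
    config=EditImageConfig(
        edit_mode="EDIT_MODE_OUTPAINT",
        base_steps=35,
        number_of_images=1,
    ),
)
response.generated_images[0].image.save("outpainted.png")
```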
REST
Before using any of the request data, make the following replacements:

- LOCATION: Your project's region, for example us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- prompt: For image outpainting, you can provide an empty string to create the edited images. If you choose to provide a prompt, use a description of the masked area for best results. For example, "a blue sky" instead of "insert a blue sky".
- referenceType: A ReferenceImage is an image that provides additional context for image editing. A normal RGB raw reference image (REFERENCE_TYPE_RAW) is required for editing use cases. At most one raw reference image may exist in one request. The output image has the same height and width as the raw reference image. A mask reference image (REFERENCE_TYPE_MASK) is required for masked editing use cases. If a raw reference image is present, the mask image must have the same height and width as the raw reference image. If the mask reference image is empty and maskMode is not set to MASK_MODE_USER_PROVIDED, the mask is computed based on the raw reference image.
- MASK_DILATION: The dilation applied to the mask; 0.03 is recommended for outpainting. Setting "dilation": 0.0 might result in obvious borders at the extension point, or might cause a white border effect.
- EDIT_STEPS: The number of base sampling steps; start with 35 steps. Increase steps if the quality doesn't meet your requirements.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict
Request JSON body:

{
  "instances": [
    {
      "prompt": "",
      "referenceImages": [
        {
          "referenceType": "REFERENCE_TYPE_RAW",
          "referenceId": 1,
          "referenceImage": {
            "bytesBase64Encoded": "B64_BASE_IMAGE"
          }
        },
        {
          "referenceType": "REFERENCE_TYPE_MASK",
          "referenceId": 2,
          "referenceImage": {
            "bytesBase64Encoded": "B64_OUTPAINTING_MASK"
          },
          "maskImageConfig": {
            "maskMode": "MASK_MODE_USER_PROVIDED",
            "dilation": MASK_DILATION
          }
        }
      ]
    }
  ],
  "parameters": {
    "editConfig": {
      "baseSteps": EDIT_STEPS
    },
    "editMode": "EDIT_MODE_OUTPAINT",
    "sampleCount": EDIT_IMAGE_COUNT
  }
}
To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict"

PowerShell
Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict" | Select-Object -Expand Content

This example request specifies "sampleCount": 2, so the response returns two prediction objects, with the generated image bytes base64-encoded.

{
  "predictions": [
    {
      "bytesBase64Encoded": "BASE64_IMG_BYTES",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "BASE64_IMG_BYTES"
    }
  ]
}
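The request body above can also be assembled programmatically before writing it to request.json. A minimal sketch using only the Python standard library; the helper name `build_outpaint_request` is illustrative, not part of any SDK.

```python
import base64
import json


def build_outpaint_request(base_png: bytes, mask_png: bytes,
                           dilation: float = 0.03, steps: int = 35,
                           sample_count: int = 1) -> str:
    """Assemble an Imagen 3 outpainting request body as a JSON string."""
    body = {
        "instances": [{
            "prompt": "",  # an empty prompt is allowed for outpainting
            "referenceImages": [
                {
                    "referenceType": "REFERENCE_TYPE_RAW",
                    "referenceId": 1,
                    "referenceImage": {
                        "bytesBase64Encoded":
                            base64.b64encode(base_png).decode("ascii"),
                    },
                },
                {
                    "referenceType": "REFERENCE_TYPE_MASK",
                    "referenceId": 2,
                    "referenceImage": {
                        "bytesBase64Encoded":
                            base64.b64encode(mask_png).decode("ascii"),
                    },
                    "maskImageConfig": {
                        "maskMode": "MASK_MODE_USER_PROVIDED",
                        "dilation": dilation,
                    },
                },
            ],
        }],
        "parameters": {
            "editConfig": {"baseSteps": steps},
            "editMode": "EDIT_MODE_OUTPAINT",
            "sampleCount": sample_count,
        },
    }
    return json.dumps(body, indent=2)
```

For example, `open("request.json", "w").write(build_outpaint_request(open("base.png", "rb").read(), open("mask.png", "rb").read()))` produces a file you can pass to the curl command above.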
Imagen 2
Console
Python
REST
Before using any of the request data, make the following replacements:

- LOCATION: Your project's region, for example us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
- prompt: For image outpainting, you can provide an empty string to create the edited images.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict
Request JSON body:

{
  "instances": [
    {
      "prompt": "",
      "image": {
        "bytesBase64Encoded": "B64_BASE_IMAGE"
      },
      "mask": {
        "image": {
          "bytesBase64Encoded": "B64_OUTPAINTING_MASK"
        }
      }
    }
  ],
  "parameters": {
    "sampleCount": EDIT_IMAGE_COUNT,
    "editConfig": {
      "editMode": "outpainting"
    }
  }
}
To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict"

PowerShell
Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict" | Select-Object -Expand Content

This example request specifies "sampleCount": 2, so the response returns two prediction objects, with the generated image bytes base64-encoded.

{
  "predictions": [
    {
      "bytesBase64Encoded": "BASE64_IMG_BYTES",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "BASE64_IMG_BYTES"
    }
  ]
}
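A response in this shape can be decoded and written to disk with the Python standard library. A minimal sketch; the helper name `save_predictions` and the output file naming are illustrative.

```python
import base64
import json
import pathlib


def save_predictions(response_json: str, out_dir: str = ".") -> list:
    """Decode the base64 image bytes in a predict response and save
    each prediction to a file; returns the list of file paths."""
    predictions = json.loads(response_json)["predictions"]
    paths = []
    for i, pred in enumerate(predictions):
        # Derive the file extension from the MIME type, e.g. image/png -> png.
        ext = pred.get("mimeType", "image/png").split("/")[-1]
        path = pathlib.Path(out_dir) / "outpaint_{}.{}".format(i, ext)
        path.write_bytes(base64.b64decode(pred["bytesBase64Encoded"]))
        paths.append(str(path))
    return paths
```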
Java
In this sample, you specify the model as part of an EndpointName. The EndpointName is passed to the predict method, which is called on a PredictionServiceClient. The service returns an edited version of the image, which is then saved locally.

Node.js

In this sample, you call the predict method on a PredictionServiceClient. The service generates images, and the sample code saves them to local files.

Limitations and best practices

When you use outpainting, the generated content can show visible borders or seams at the extension point; post-processing can smooth the transition. The following code is an example of post-processing with alpha blending:
parameters = {
    "editConfig": {
        "outpaintingConfig": {
            "blendingMode": "alpha-blending",
            "blendingFactor": 0.01,
        },
    },
}
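Conceptually, alpha blending combines each original pixel with the corresponding generated pixel according to a per-pixel alpha that ramps down across the seam. The following is a minimal grayscale sketch of that idea, independent of any imaging library; the function names `alpha_blend` and `seam_alpha` are illustrative, not part of the API above.

```python
def alpha_blend(original, generated, alpha):
    """Per-pixel blend: alpha=1.0 keeps the original pixel,
    alpha=0.0 keeps the generated pixel."""
    return [a * o + (1.0 - a) * g
            for o, g, a in zip(original, generated, alpha)]


def seam_alpha(width, seam_x, blend):
    """Alpha values for one row: 1.0 over the original region,
    ramping to 0.0 across `blend` pixels after the seam at `seam_x`."""
    alphas = []
    for x in range(width):
        if x < seam_x:
            alphas.append(1.0)          # original region: keep as-is
        elif x >= seam_x + blend:
            alphas.append(0.0)          # generated region: keep as-is
        else:
            # Linear ramp inside the blend band.
            alphas.append(1.0 - (x - seam_x + 1) / (blend + 1))
    return alphas
```

A blending factor like 0.01 in the parameters above corresponds to a narrow blend band relative to the image size; widening the band softens the seam at the cost of more mixing.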
What's next
Read articles about Imagen and other Generative AI on Vertex AI products.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-15 UTC.