Expand the content of an image using outpainting

This page describes outpainting, which lets you use Imagen to expand the content of an image to a larger area or an area with different dimensions.

Outpainting example

Outpainting is a mask-based editing method that lets you expand the content of a base image to fit a larger or differently sized mask canvas.

sample base image
Original image with image padding to match mask image (target) size.
Image source: Kari Shea on Unsplash.
sample mask image
Mask image the dimensions of the target output, with the original image pixel dimensions and location marked.
sample output image
Outpainting output image (no prompt).
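
The padded base image and target-size mask shown above can be prepared with any image tool. The following is a minimal sketch using Pillow, not part of the official samples; the file names, target canvas size, centered placement, and the convention that white marks the area to generate are assumptions you should adapt to your own inputs.

# Minimal sketch (assumptions: Pillow installed, placeholder file names, centered
# placement, white = area to generate). Not an official Imagen sample.
from PIL import Image

TARGET_SIZE = (2048, 1536)  # width, height of the final canvas
original = Image.open("original.png")

# Center the original on the target canvas; change the offsets to reposition it.
x = (TARGET_SIZE[0] - original.width) // 2
y = (TARGET_SIZE[1] - original.height) // 2

# Padded base image: original pixels surrounded by neutral padding.
padded = Image.new("RGB", TARGET_SIZE, (127, 127, 127))
padded.paste(original, (x, y))
padded.save("padded-base.png")

# Mask image: white where new content should be generated, black over the original.
mask = Image.new("L", TARGET_SIZE, 255)
mask.paste(0, (x, y, x + original.width, y + original.height))
mask.save("outpainting-mask.png")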

View Imagen for Editing and Customization model card

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI API.

    Enable the API

  5. Set up authentication for your environment.

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    Java

    To use the Java samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.

    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    5. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

    Node.js

    To use the Node.js samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.

    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    5. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

    Python

    To use the Python samples on this page in a local development environment, install and initialize the gcloud CLI, and then set up Application Default Credentials with your user credentials.

    1. Install the Google Cloud CLI.

    2. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    3. To initialize the gcloud CLI, run the following command:

      gcloud init
    4. If you're using a local shell, then create local authentication credentials for your user account:

      gcloud auth application-default login

      You don't need to do this if you're using Cloud Shell.

      If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.

    5. For more information, see Set up ADC for a local development environment in the Google Cloud authentication documentation.

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      After installing the Google Cloud CLI, initialize it by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Expand the content of an image

Use the following code samples to expand the content of an existing image.

Imagen 3

Use the following samples to send an outpainting request using the Imagen 3 model.

Console

  1. In the Google Cloud console, go to the Vertex AI > Media Studio page.

    Go to Media Studio

  2. Click Upload. In the displayed file dialog, select a file to upload.
  3. Click Outpaint.
  4. In the Outpaint menu, select one of the predefined aspect ratios for your final image, or click Custom to define custom dimensions for your final image.
  5. In the editing toolbar, select the placement of your image:
    • Left align
    • Horizontal center align
    • Right align
    • Top align
    • Vertical center align
    • Bottom align
  6. Optional: In the Parameters panel, adjust the following options:
    • Model: the Imagen model to use
    • Number of results: the number of results to generate
    • Negative prompt: items to avoid generating
  7. In the prompt field, enter a prompt to modify the image.
  8. Click Generate.

Gen AI SDK for Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=us-central1
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import RawReferenceImage, MaskReferenceImage, MaskReferenceConfig, EditImageConfig, Image

client = genai.Client()

# TODO(developer): Update and un-comment below line
# output_file = "output-image.png"

raw_ref = RawReferenceImage(
    reference_image=Image.from_file(location='test_resources/living_room.png'), reference_id=0)
mask_ref = MaskReferenceImage(
    reference_id=1,
    reference_image=Image.from_file(location='test_resources/living_room_mask.png'),
    config=MaskReferenceConfig(
        mask_mode="MASK_MODE_USER_PROVIDED",
        mask_dilation=0.03,
    ),
)

image = client.models.edit_image(
    model="imagen-3.0-capability-001",
    prompt="A chandelier hanging from the ceiling",
    reference_images=[raw_ref, mask_ref],
    config=EditImageConfig(
        edit_mode="EDIT_MODE_OUTPAINT",
    ),
)

image.generated_images[0].image.save(output_file)

print(f"Created output image using {len(image.generated_images[0].image.image_bytes)} bytes")
# Example response:
# Created output image using 1234567 bytes

REST

For more information, see the Edit images API reference.

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your Google Cloud project ID.
  • LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
  • prompt: For image outpainting, you can provide an empty string to create the edited images. If you choose to provide a prompt, use a description of the masked area for best results. For example, "a blue sky" instead of "insert a blue sky".
  • referenceType: A ReferenceImage is an image that provides additional context for image editing. A normal RGB raw reference image (REFERENCE_TYPE_RAW) is required for editing use cases. At most one raw reference image can exist in a request. The output image has the same height and width as the raw reference image. A mask reference image (REFERENCE_TYPE_MASK) is required for masked editing use cases. If a raw reference image is present, the mask image must have the same height and width as the raw reference image. If the mask reference image is empty and maskMode is not set to MASK_MODE_USER_PROVIDED, the mask is computed based on the raw reference image.
  • B64_BASE_IMAGE: The base image to edit or upscale. The image must be specified as a base64-encoded byte string (see the encoding sketch after this list). Size limit: 10 MB.
  • B64_OUTPAINTING_MASK: The black and white image you want to use as a mask layer to edit the original image. The mask must be the same resolution as the input image, and the output image will also be that resolution. This mask image must be specified as a base64-encoded byte string. Size limit: 10 MB.
  • MASK_DILATION: A float. The percentage of the image width to dilate this mask by. A value of 0.03 is recommended for outpainting. Setting "dilation": 0.0 might result in obvious borders at the extension point, or might cause a white border effect.
  • EDIT_STEPS: An integer. The number of sampling steps for the base model. For outpainting, start at 35 steps and increase if the quality doesn't meet your requirements.
  • EDIT_IMAGE_COUNT: The number of edited images to generate. Accepted integer values: 1 to 4. Default value: 4.

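The following is a minimal sketch, not an official sample, for producing the base64-encoded byte strings used as B64_BASE_IMAGE and B64_OUTPAINTING_MASK; the file names are placeholders.

# Minimal sketch (placeholder file names). Paste the resulting strings into the
# bytesBase64Encoded fields of request.json.
import base64

with open("padded-base.png", "rb") as f:
    b64_base_image = base64.b64encode(f.read()).decode("utf-8")

with open("outpainting-mask.png", "rb") as f:
    b64_outpainting_mask = base64.b64encode(f.read()).decode("utf-8")

print(len(b64_base_image), len(b64_outpainting_mask))
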
HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict

Request JSON body:

{
  "instances": [
    {
      "prompt": "",
      "referenceImages": [
        {
          "referenceType": "REFERENCE_TYPE_RAW",
          "referenceId": 1,
          "referenceImage": {
            "bytesBase64Encoded": "B64_BASE_IMAGE"
          }
        },
        {
          "referenceType": "REFERENCE_TYPE_MASK",
          "referenceId": 2,
          "referenceImage": {
            "bytesBase64Encoded": "B64_OUTPAINTING_MASK"
          },
          "maskImageConfig": {
            "maskMode": "MASK_MODE_USER_PROVIDED",
            "dilation": MASK_DILATION
          }
        }
      ]
    }
  ],
  "parameters": {
    "editConfig": {
      "baseSteps": EDIT_STEPS
    },
    "editMode": "EDIT_MODE_OUTPAINT",
    "sampleCount": EDIT_IMAGE_COUNT
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagen-3.0-capability-001:predict" | Select-Object -Expand Content

The following sample response is for a request with "sampleCount": 2. The response returns two prediction objects, with the generated image bytes base64-encoded.

{
  "predictions": [
    {
      "bytesBase64Encoded": "BASE64_IMG_BYTES",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "BASE64_IMG_BYTES"
    }
  ]
}
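
Each prediction carries the image as a base64-encoded byte string, so you need to decode it before you can view the file. The following is a minimal sketch, not an official sample, assuming you saved the curl output to a file named response.json.

# Minimal sketch (assumption: curl output saved as response.json).
import base64
import json

with open("response.json") as f:
    response = json.load(f)

for i, prediction in enumerate(response["predictions"], start=1):
    image_bytes = base64.b64decode(prediction["bytesBase64Encoded"])
    with open(f"outpainted-{i}.png", "wb") as out:
        out.write(image_bytes)
    print(f"Wrote outpainted-{i}.png ({len(image_bytes)} bytes)")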

Imagen 2

Use the following samples to send an outpainting request using the Imagen 2 model.

Console

  1. In the Google Cloud console, go to the Vertex AI > Media Studio page.

    Go to Media Studio

  2. In the lower task panel, click Edit image.

  3. Click Upload to select your locally stored product image to edit.

  4. In the editing toolbar, click Outpaint.

  5. Select one of the predefined aspect ratios for your final image, or click Custom to define custom dimensions for your final image.

  6. Optional. In the editing toolbar, select the horizontal placement (left align, horizontal center, or right align) and vertical placement (top align, vertical center, or bottom align) of your original image in the canvas of the image to be generated.

  7. Optional. In the Parameters panel, adjust the Number of results or other parameters.

  8. Click Generate.

Vertex AI SDK for Python

To learn how to install or update the Vertex AI SDK for Python, see Install the Vertex AI SDK for Python. For more information, see the Vertex AI SDK for Python API reference documentation.


import vertexai
from vertexai.preview.vision_models import Image, ImageGenerationModel

# TODO(developer): Update and un-comment below lines
# PROJECT_ID = "your-project-id"
# input_file = "input-image.png"
# mask_file = "mask-image.png"
# output_file = "output-image.png"
# prompt = "" # The optional text prompt describing what you want to see inserted.

vertexai.init(project=PROJECT_ID, location="us-central1")

model = ImageGenerationModel.from_pretrained("imagegeneration@006")
base_img = Image.load_from_file(location=input_file)
mask_img = Image.load_from_file(location=mask_file)

images = model.edit_image(
    base_image=base_img,
    mask=mask_img,
    prompt=prompt,
    edit_mode="outpainting",
)

images[0].save(location=output_file, include_generation_parameters=False)

# Optional. View the edited image in a notebook.
# images[0].show()

print(f"Created output image using {len(images[0]._image_bytes)} bytes")
# Example response:
# Created output image using 1234567 bytes

REST

Before using any of the request data, make the following replacements:

  • PROJECT_ID: Your Google Cloud project ID.
  • LOCATION: Your project's region. For example, us-central1, europe-west2, or asia-northeast3. For a list of available regions, see Generative AI on Vertex AI locations.
  • prompt: For image outpainting, you can provide an empty string to create the edited images.
  • B64_BASE_IMAGE: The base image to edit or upscale. The image must be specified as a base64-encoded byte string. Size limit: 10 MB.
  • B64_OUTPAINTING_MASK: The black and white image you want to use as a mask layer to edit the original image. The mask must be the same resolution as the input image, and the output image will also be that resolution. This mask image must be specified as a base64-encoded byte string. Size limit: 10 MB.
  • EDIT_IMAGE_COUNT: The number of edited images to generate. Default value: 4.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict

Request JSON body:

{
  "instances": [
    {
      "prompt": "",
      "image": {
          "bytesBase64Encoded": "B64_BASE_IMAGE"
      },
      "mask": {
        "image": {
          "bytesBase64Encoded": "B64_OUTPAINTING_MASK"
        }
      }
    }
  ],
  "parameters": {
    "sampleCount": EDIT_IMAGE_COUNT,
    "editConfig": {
      "editMode": "outpainting"
    }
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict"

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/imagegeneration@006:predict" | Select-Object -Expand Content

The following sample response is for a request with "sampleCount": 2. The response returns two prediction objects, with the generated image bytes base64-encoded.

{
  "predictions": [
    {
      "bytesBase64Encoded": "BASE64_IMG_BYTES",
      "mimeType": "image/png"
    },
    {
      "mimeType": "image/png",
      "bytesBase64Encoded": "BASE64_IMG_BYTES"
    }
  ]
}

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

In this sample, you specify the model as part of an EndpointName. The EndpointName is passed to the predict method which is called on a PredictionServiceClient. The service returns an edited version of the image, which is then saved locally.


import com.google.api.gax.rpc.ApiException;
import com.google.cloud.aiplatform.v1.EndpointName;
import com.google.cloud.aiplatform.v1.PredictResponse;
import com.google.cloud.aiplatform.v1.PredictionServiceClient;
import com.google.cloud.aiplatform.v1.PredictionServiceSettings;
import com.google.gson.Gson;
import com.google.protobuf.InvalidProtocolBufferException;
import com.google.protobuf.Value;
import com.google.protobuf.util.JsonFormat;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class EditImageOutpaintingMaskSample {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "my-project-id";
    String location = "us-central1";
    String inputPath = "/path/to/my-input.png";
    String maskPath = "/path/to/my-mask.png";
    String prompt = ""; // The optional text prompt describing what you want to see inserted.

    editImageOutpaintingMask(projectId, location, inputPath, maskPath, prompt);
  }

  // Edit an image using a mask file. Outpainting lets you expand the content of a base image to fit
  // a larger or differently sized mask canvas.
  public static PredictResponse editImageOutpaintingMask(
      String projectId, String location, String inputPath, String maskPath, String prompt)
      throws ApiException, IOException {
    final String endpoint = String.format("%s-aiplatform.googleapis.com:443", location);
    PredictionServiceSettings predictionServiceSettings =
        PredictionServiceSettings.newBuilder().setEndpoint(endpoint).build();

    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (PredictionServiceClient predictionServiceClient =
        PredictionServiceClient.create(predictionServiceSettings)) {

      final EndpointName endpointName =
          EndpointName.ofProjectLocationPublisherModelName(
              projectId, location, "google", "imagegeneration@006");

      // Encode image and mask to Base64
      String imageBase64 =
          Base64.getEncoder().encodeToString(Files.readAllBytes(Paths.get(inputPath)));
      String maskBase64 =
          Base64.getEncoder().encodeToString(Files.readAllBytes(Paths.get(maskPath)));

      // Create the image and image mask maps
      Map<String, String> imageMap = new HashMap<>();
      imageMap.put("bytesBase64Encoded", imageBase64);

      Map<String, String> maskMap = new HashMap<>();
      maskMap.put("bytesBase64Encoded", maskBase64);
      Map<String, Map> imageMaskMap = new HashMap<>();
      imageMaskMap.put("image", maskMap);

      Map<String, Object> instancesMap = new HashMap<>();
      instancesMap.put("prompt", prompt); // [ "prompt", "<my-prompt>" ]
      instancesMap.put(
          "image", imageMap); // [ "image", [ "bytesBase64Encoded", "iVBORw0KGgo...==" ] ]
      instancesMap.put(
          "mask",
          imageMaskMap); // [ "mask", [ "image", [ "bytesBase64Encoded", "iJKDF0KGpl...==" ] ] ]
      instancesMap.put("editMode", "outpainting"); // [ "editMode", "outpainting" ]
      Value instances = mapToValue(instancesMap);

      // Optional parameters
      Map<String, Object> paramsMap = new HashMap<>();
      paramsMap.put("sampleCount", 1);
      Value parameters = mapToValue(paramsMap);

      PredictResponse predictResponse =
          predictionServiceClient.predict(
              endpointName, Collections.singletonList(instances), parameters);

      for (Value prediction : predictResponse.getPredictionsList()) {
        Map<String, Value> fieldsMap = prediction.getStructValue().getFieldsMap();
        if (fieldsMap.containsKey("bytesBase64Encoded")) {
          String bytesBase64Encoded = fieldsMap.get("bytesBase64Encoded").getStringValue();
          Path tmpPath = Files.createTempFile("imagen-", ".png");
          Files.write(tmpPath, Base64.getDecoder().decode(bytesBase64Encoded));
          System.out.format("Image file written to: %s\n", tmpPath.toUri());
        }
      }
      return predictResponse;
    }
  }

  private static Value mapToValue(Map<String, Object> map) throws InvalidProtocolBufferException {
    Gson gson = new Gson();
    String json = gson.toJson(map);
    Value.Builder builder = Value.newBuilder();
    JsonFormat.parser().merge(json, builder);
    return builder.build();
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

In this sample, you call the predict method on a PredictionServiceClient. The service generates images which are then saved locally.

/**
 * TODO(developer): Update these variables before running the sample.
 */
const projectId = process.env.CAIP_PROJECT_ID;
const location = 'us-central1';
const inputFile = 'resources/roller_skaters.png';
const maskFile = 'resources/roller_skaters_mask.png';
const prompt = 'city with skyscrapers';

const aiplatform = require('@google-cloud/aiplatform');

// Imports the Google Cloud Prediction Service Client library
const {PredictionServiceClient} = aiplatform.v1;

// Import the helper module for converting arbitrary protobuf.Value objects
const {helpers} = aiplatform;

// Specifies the location of the api endpoint
const clientOptions = {
  apiEndpoint: `${location}-aiplatform.googleapis.com`,
};

// Instantiates a client
const predictionServiceClient = new PredictionServiceClient(clientOptions);

async function editImageOutpaintingMask() {
  const fs = require('fs');
  const util = require('util');
  // Configure the parent resource
  const endpoint = `projects/${projectId}/locations/${location}/publishers/google/models/imagegeneration@006`;

  const imageFile = fs.readFileSync(inputFile);
  // Convert the image data to a Buffer and base64 encode it.
  const encodedImage = Buffer.from(imageFile).toString('base64');

  const maskImageFile = fs.readFileSync(maskFile);
  // Convert the image mask data to a Buffer and base64 encode it.
  const encodedMask = Buffer.from(maskImageFile).toString('base64');

  const promptObj = {
    prompt: prompt, // The optional text prompt describing what you want to see inserted
    editMode: 'outpainting',
    image: {
      bytesBase64Encoded: encodedImage,
    },
    mask: {
      image: {
        bytesBase64Encoded: encodedMask,
      },
    },
  };
  const instanceValue = helpers.toValue(promptObj);
  const instances = [instanceValue];

  const parameter = {
    // Optional parameters
    seed: 100,
    // Controls the strength of the prompt
    // 0-9 (low strength), 10-20 (medium strength), 21+ (high strength)
    guidanceScale: 21,
    sampleCount: 1,
  };
  const parameters = helpers.toValue(parameter);

  const request = {
    endpoint,
    instances,
    parameters,
  };

  // Predict request
  const [response] = await predictionServiceClient.predict(request);
  const predictions = response.predictions;
  if (predictions.length === 0) {
    console.log(
      'No image was generated. Check the request parameters and prompt.'
    );
  } else {
    let i = 1;
    for (const prediction of predictions) {
      const buff = Buffer.from(
        prediction.structValue.fields.bytesBase64Encoded.stringValue,
        'base64'
      );
      // Write image content to the output file
      const writeFile = util.promisify(fs.writeFile);
      const filename = `output${i}.png`;
      await writeFile(filename, buff);
      console.log(`Saved image ${filename}`);
      i++;
    }
  }
}
await editImageOutpaintingMask();

Limitations

The model may produce distorted details if the outpainted image is expanded 200% or more from the original image. As a best practice, we recommend that you add a post-processing step to run alpha blending on outpainted images.

The following example shows the request parameters that configure alpha-blending post-processing:

parameters = {
   "editConfig": {
       "outpaintingConfig": {
         "blendingMode": "alpha-blending",
         "blendingFactor": 0.01,
       },
   },
}
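
To show where these parameters fit, the following is a minimal sketch, not an official sample, of a full Imagen 2 outpainting request with alpha blending enabled. It assumes the gcloud CLI is authenticated and the requests package is installed; the project ID and file names are placeholders.

# Minimal sketch (assumptions: authenticated gcloud CLI, requests installed,
# placeholder project ID and file names). Not an official Imagen sample.
import base64
import subprocess

import requests

PROJECT_ID = "your-project-id"
LOCATION = "us-central1"


def encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


body = {
    "instances": [
        {
            "prompt": "",
            "image": {"bytesBase64Encoded": encode("input-image.png")},
            "mask": {"image": {"bytesBase64Encoded": encode("mask-image.png")}},
        }
    ],
    "parameters": {
        "sampleCount": 1,
        "editConfig": {
            "editMode": "outpainting",
            "outpaintingConfig": {
                "blendingMode": "alpha-blending",
                "blendingFactor": 0.01,
            },
        },
    },
}

# Fetch an access token from the authenticated gcloud CLI.
token = subprocess.run(
    ["gcloud", "auth", "print-access-token"],
    capture_output=True, text=True, check=True,
).stdout.strip()

url = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/{LOCATION}/publishers/google/models/imagegeneration@006:predict"
)
response = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()

# Save the first returned image.
prediction = response.json()["predictions"][0]
with open("blended-output.png", "wb") as f:
    f.write(base64.b64decode(prediction["bytesBase64Encoded"]))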

What's next

Read articles about Imagen and other Generative AI on Vertex AI products: