Batch text prediction with the Gemini model using Google Cloud Storage

Performs batch text prediction with the Gemini model and returns the output location.
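
Each sample reads prompts from a JSON Lines file in Cloud Storage and writes predictions to the Cloud Storage prefix you supply. As a rough sketch of the input format (an assumption based on the Gemini batch request schema; check the linked sample file for the exact structure), each input line wraps one request:

{"request": {"contents": [{"role": "user", "parts": [{"text": "Why is the sky blue?"}]}]}}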

Explore further

For detailed documentation that includes this code sample, see the following:

Code sample

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"time"

	"google.golang.org/genai"
)

// generateBatchPredict runs a batch prediction job using GCS input/output.
func generateBatchPredict(w io.Writer, outputURI string) error {
	// outputURI = "gs://your-bucket/your-prefix"
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	// Source file with prompts for prediction
	src := &genai.BatchJobSource{
		Format: "jsonl",
		// Source link: https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
		GCSURI: []string{"gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl"},
	}

	// Batch job config with output GCS location
	config := &genai.CreateBatchJobConfig{
		Dest: &genai.BatchJobDestination{
			Format: "jsonl",
			GCSURI: outputURI,
		},
	}
	// To use a tuned model, set the model param to your tuned model using the following format:
	//  modelName := "projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}"
	modelName := "gemini-2.5-flash"
	// See the documentation: https://pkg.go.dev/google.golang.org/genai#Batches.Create
	job, err := client.Batches.Create(ctx, modelName, src, config)
	if err != nil {
		return fmt.Errorf("failed to create batch job: %w", err)
	}

	fmt.Fprintf(w, "Job name: %s\n", job.Name)
	fmt.Fprintf(w, "Job state: %s\n", job.State)
	// Example response:
	//   Job name: projects/{PROJECT_ID}/locations/us-central1/batchPredictionJobs/9876453210000000000
	//   Job state: JOB_STATE_PENDING

	// See the documentation: https://pkg.go.dev/google.golang.org/genai#BatchJob
	completedStates := map[genai.JobState]bool{
		genai.JobStateSucceeded: true,
		genai.JobStateFailed:    true,
		genai.JobStateCancelled: true,
		genai.JobStatePaused:    true,
	}

	for !completedStates[job.State] {
		time.Sleep(30 * time.Second)

		job, err = client.Batches.Get(ctx, job.Name, nil)
		if err != nil {
			return fmt.Errorf("failed to get batch job: %w", err)
		}

		fmt.Fprintf(w, "Job state: %s\n", job.State)
	}

	// Example response:
	//  Job state: JOB_STATE_PENDING
	//  Job state: JOB_STATE_RUNNING
	//  Job state: JOB_STATE_RUNNING
	//  ...
	//  Job state: JOB_STATE_SUCCEEDED

	return nil
}

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.


import static com.google.genai.types.JobState.Known.JOB_STATE_CANCELLED;
import static com.google.genai.types.JobState.Known.JOB_STATE_FAILED;
import static com.google.genai.types.JobState.Known.JOB_STATE_PAUSED;
import static com.google.genai.types.JobState.Known.JOB_STATE_SUCCEEDED;

import com.google.genai.Client;
import com.google.genai.types.BatchJob;
import com.google.genai.types.BatchJobDestination;
import com.google.genai.types.BatchJobSource;
import com.google.genai.types.CreateBatchJobConfig;
import com.google.genai.types.GetBatchJobConfig;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.JobState;
import java.util.EnumSet;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.TimeUnit;

public class BatchPredictionWithGcs {

  public static void main(String[] args) throws InterruptedException {
    // TODO(developer): Replace these variables before running the sample.
    // To use a tuned model, set the model param to your tuned model using the following format:
    // modelId = "projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}"
    String modelId = "gemini-2.5-flash";
    String outputGcsUri = "gs://your-bucket/your-prefix";
    createBatchJob(modelId, outputGcsUri);
  }

  // Creates a batch prediction job with Google Cloud Storage.
  public static JobState createBatchJob(String modelId, String outputGcsUri)
      throws InterruptedException {
    // Client Initialization. Once created, it can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      // See the documentation:
      // https://googleapis.github.io/java-genai/javadoc/com/google/genai/Batches.html
      BatchJobSource batchJobSource =
          BatchJobSource.builder()
              // Source link:
              // https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
              .gcsUri("gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl")
              .format("jsonl")
              .build();

      CreateBatchJobConfig batchJobConfig =
          CreateBatchJobConfig.builder()
              .displayName("your-display-name")
              .dest(BatchJobDestination.builder().gcsUri(outputGcsUri).format("jsonl").build())
              .build();

      BatchJob batchJob = client.batches.create(modelId, batchJobSource, batchJobConfig);

      String jobName =
          batchJob.name().orElseThrow(() -> new IllegalStateException("Missing job name"));
      JobState jobState =
          batchJob.state().orElseThrow(() -> new IllegalStateException("Missing job state"));
      System.out.println("Job name: " + jobName);
      System.out.println("Job state: " + jobState);
      // Job name: projects/.../locations/.../batchPredictionJobs/6205497615459549184
      // Job state: JOB_STATE_PENDING

      // See the documentation:
      // https://googleapis.github.io/java-genai/javadoc/com/google/genai/types/BatchJob.html
      Set<JobState.Known> completedStates =
          EnumSet.of(JOB_STATE_SUCCEEDED, JOB_STATE_FAILED, JOB_STATE_CANCELLED, JOB_STATE_PAUSED);

      while (!completedStates.contains(jobState.knownEnum())) {
        TimeUnit.SECONDS.sleep(30);
        batchJob = client.batches.get(jobName, GetBatchJobConfig.builder().build());
        jobState =
            batchJob
                .state()
                .orElseThrow(() -> new IllegalStateException("Missing job state during polling"));
        System.out.println("Job state: " + jobState);
      }
      // Example response:
      // Job state: JOB_STATE_QUEUED
      // Job state: JOB_STATE_RUNNING
      // Job state: JOB_STATE_RUNNING
      // ...
      // Job state: JOB_STATE_SUCCEEDED
      return jobState;
    }
  }
}

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION =
  process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';
const OUTPUT_URI = 'gs://your-bucket/your-prefix';

async function runBatchPredictionJob(
  outputUri = OUTPUT_URI,
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const client = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
    httpOptions: {
      apiVersion: 'v1',
    },
  });

  // See the documentation: https://googleapis.github.io/js-genai/release_docs/classes/batches.Batches.html
  let job = await client.batches.create({
    // To use a tuned model, set the model param to your tuned model using the following format:
    // model="projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}"
    model: 'gemini-2.5-flash',
    // Source link: https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
    src: 'gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl',
    config: {
      dest: outputUri,
    },
  });

  console.log(`Job name: ${job.name}`);
  console.log(`Job state: ${job.state}`);

  // Example response:
  //  Job name: projects/%PROJECT_ID%/locations/us-central1/batchPredictionJobs/9876453210000000000
  //  Job state: JOB_STATE_PENDING

  const completedStates = new Set([
    'JOB_STATE_SUCCEEDED',
    'JOB_STATE_FAILED',
    'JOB_STATE_CANCELLED',
    'JOB_STATE_PAUSED',
  ]);

  while (!completedStates.has(job.state)) {
    await new Promise(resolve => setTimeout(resolve, 30000));
    job = await client.batches.get({name: job.name});
    console.log(`Job state: ${job.state}`);
  }

  // Example response:
  //  Job state: JOB_STATE_PENDING
  //  Job state: JOB_STATE_RUNNING
  //  Job state: JOB_STATE_RUNNING
  //  ...
  //  Job state: JOB_STATE_SUCCEEDED

  return job.state;
}

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
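
The sample below assumes that the project, location, and Vertex AI backend are picked up from your environment. As a minimal sketch (not part of the original sample; the project ID and location shown are placeholders), you can instead pass them explicitly when constructing the client:

from google import genai
from google.genai.types import HttpOptions

# Illustrative alternative: configure the Vertex AI backend explicitly instead of
# relying on environment variables. Replace the placeholders with your own values.
client = genai.Client(
    vertexai=True,
    project="your-project-id",
    location="us-central1",
    http_options=HttpOptions(api_version="v1"),
)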

import time

from google import genai
from google.genai.types import CreateBatchJobConfig, JobState, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
# TODO(developer): Update and un-comment below line
# output_uri = "gs://your-bucket/your-prefix"

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.batches.Batches.create
job = client.batches.create(
    # To use a tuned model, set the model param to your tuned model using the following format:
    # model="projects/{PROJECT_ID}/locations/{LOCATION}/models/{MODEL_ID}
    model="gemini-2.5-flash",
    # Source link: https://storage.cloud.google.com/cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl
    src="gs://cloud-samples-data/batch/prompt_for_batch_gemini_predict.jsonl",
    config=CreateBatchJobConfig(dest=output_uri),
)
print(f"Job name: {job.name}")
print(f"Job state: {job.state}")
# Example response:
# Job name: projects/.../locations/.../batchPredictionJobs/9876453210000000000
# Job state: JOB_STATE_PENDING

# See the documentation: https://googleapis.github.io/python-genai/genai.html#genai.types.BatchJob
completed_states = {
    JobState.JOB_STATE_SUCCEEDED,
    JobState.JOB_STATE_FAILED,
    JobState.JOB_STATE_CANCELLED,
    JobState.JOB_STATE_PAUSED,
}

while job.state not in completed_states:
    time.sleep(30)
    job = client.batches.get(name=job.name)
    print(f"Job state: {job.state}")
# Example response:
# Job state: JOB_STATE_PENDING
# Job state: JOB_STATE_RUNNING
# Job state: JOB_STATE_RUNNING
# ...
# Job state: JOB_STATE_SUCCEEDED
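
Because the destination format is "jsonl", the completed job writes its predictions as JSON Lines files under the destination prefix. The snippet below is a minimal sketch, assuming the google-cloud-storage library is installed and that output_uri points at a bucket and prefix you can read; it is not part of the original sample:

from google.cloud import storage

# Illustrative only: list the result files written under output_uri and preview them.
# output_uri is assumed to look like "gs://your-bucket/your-prefix".
bucket_name, _, prefix = output_uri.removeprefix("gs://").partition("/")
storage_client = storage.Client()
for blob in storage_client.list_blobs(bucket_name, prefix=prefix):
    if blob.name.endswith(".jsonl"):
        print(f"Result file: gs://{bucket_name}/{blob.name}")
        print(blob.download_as_text()[:1000])  # preview the first part of the output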

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.