Quickstart: Gemini API in Vertex AI

In this quickstart, you learn how to install the Google Gen AI SDK for your language of choice and then make your first API request. The examples vary slightly depending on whether you authenticate to Vertex AI with an API key or with Application Default Credentials (ADC).

Before you begin

If you haven't configured ADC yet, follow these instructions:

Configure your project

Select a project, enable billing and the Vertex AI API, and install the gcloud CLI:

  1. Sign in to your Google Account.

    If you don't already have one, sign up for a new account.

  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Vertex AI API.

    Roles required to enable APIs

    To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

    Enable the API

  5. Install the Google Cloud CLI.

  6. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  7. To initialize the gcloud CLI, run the following command:

    gcloud init
  8. Create local authentication credentials

    Create local authentication credentials for your user account:

    gcloud auth application-default login

    If an authentication error is returned, and you are using an external identity provider (IdP), confirm that you have signed in to the gcloud CLI with your federated identity.
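
    Optionally, you can confirm that ADC is working by loading the credentials in Python with the google-auth package (a dependency of the Gen AI SDK); a minimal sketch:

    import google.auth

    # Loads Application Default Credentials; raises DefaultCredentialsError
    # if ADC hasn't been configured yet.
    credentials, project_id = google.auth.default()
    print(f"ADC is configured for project: {project_id}")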

    Required roles

    To get the permissions that you need to use the Gemini API in Vertex AI, ask your administrator to grant you the Vertex AI User (roles/aiplatform.user) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

    You might also be able to get the required permissions through custom roles or other predefined roles.
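
    For reference, an administrator can grant this role with the gcloud CLI. In this sketch, PROJECT_ID and USER_EMAIL are hypothetical placeholders for your project ID and the user's email address:

    gcloud projects add-iam-policy-binding PROJECT_ID \
        --member="user:USER_EMAIL" \
        --role="roles/aiplatform.user"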

    Install the SDK and set up your environment

    On your local machine, click one of the following tabs to install the SDK for your programming language.

    Python Gen AI SDK

    To install or update the Gen AI SDK for Python, run this command:

    pip install --upgrade google-genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True
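
    With these variables exported, the Python client needs no explicit arguments. As a quick sanity check (a minimal sketch, assuming the variables above are set in the same shell), you can create a client and confirm that it targets Vertex AI:

    from google import genai

    # Reads GOOGLE_CLOUD_PROJECT, GOOGLE_CLOUD_LOCATION, and
    # GOOGLE_GENAI_USE_VERTEXAI from the environment.
    client = genai.Client()
    print(client.vertexai)  # True when requests are routed to Vertex AI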

    Go Gen AI SDK

    To install or update the Gen AI SDK for Go, run this command:

    go get google.golang.org/genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    Node.js Gen AI SDK

    To install or update the Gen AI SDK for Node.js, run this command:

    npm install @google/genai

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    Java Gen AI SDK

    To install or update the Gen AI SDK for Java, add the following dependency:

    Maven

    Add the following to your pom.xml:

    <dependencies>
      <dependency>
        <groupId>com.google.genai</groupId>
        <artifactId>google-genai</artifactId>
        <version>0.7.0</version>
      </dependency>
    </dependencies>
    

    Set environment variables:

    # Replace the `GOOGLE_CLOUD_PROJECT_ID` and `GOOGLE_CLOUD_LOCATION` values
    # with appropriate values for your project.
    export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    export GOOGLE_CLOUD_LOCATION=global
    export GOOGLE_GENAI_USE_VERTEXAI=True

    REST

    Set environment variables. For the global location, the API endpoint is typically aiplatform.googleapis.com; regional locations generally use the LOCATION-aiplatform.googleapis.com form:

    GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT_ID
    GOOGLE_CLOUD_LOCATION=global
    API_ENDPOINT=YOUR_API_ENDPOINT
    MODEL_ID="gemini-2.5-flash"
    GENERATE_CONTENT_API="generateContent"

    Make your first request

    Use the generateContent method to send a request to the Gemini API in Vertex AI:

    Python

    from google import genai
    from google.genai.types import HttpOptions
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="How does AI work?",
    )
    print(response.text)
    # Example response:
    # Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
    #
    # Here's a simplified overview:
    # ...

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	"google.golang.org/genai"
    )
    
    // generateWithText shows how to generate text using a text prompt.
    func generateWithText(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	resp, err := client.Models.GenerateContent(ctx,
    		"gemini-2.5-flash",
    		genai.Text("How does AI work?"),
    		nil,
    	)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	respText := resp.Text()
    
    	fmt.Fprintln(w, respText)
    	// Example response:
    	// That's a great question! Understanding how AI works can feel like ...
    	// ...
    	// **1. The Foundation: Data and Algorithms**
    	// ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: 'How does AI work?',
      });
    
      console.log(response.text);
    
      return response.text;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    
    public class TextGenerationWithText {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text with text input
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          GenerateContentResponse response =
              client.models.generateContent(modelId, "How does AI work?", null);
    
          System.out.print(response.text());
          // Example response:
          // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
          //
          // Here's a simplified overview:
          // ...
          return response.text();
        }
      }
    }

    REST

    To send this prompt request, run the curl command from the command line, or include the REST call in your application.

    curl \
      -X POST \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $(gcloud auth print-access-token)" \
      "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
      -d $'{
        "contents": {
          "role": "user",
          "parts": {
            "text": "Explain how AI works in a few words"
          }
        }
      }'

    The model returns a response. The response is generated in sections, and each section is evaluated separately for safety.
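
    Because the response is generated incrementally, the SDK also offers a streaming variant that yields each section as it arrives. A minimal Python sketch, assuming the same environment variables as in the setup above:

    from google import genai
    from google.genai.types import HttpOptions

    client = genai.Client(http_options=HttpOptions(api_version="v1"))

    # Each chunk carries the next section of the generated response.
    for chunk in client.models.generate_content_stream(
        model="gemini-2.5-flash",
        contents="How does AI work?",
    ):
        if chunk.text:
            print(chunk.text, end="")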

    Generate images

    Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to perform various image-related tasks, such as generating and editing images. The following code shows how to generate an image based on a descriptive prompt:

    You must include responseModalities: ["TEXT", "IMAGE"] in your configuration; image-only output isn't supported with these models.

    Python

    from google import genai
    from google.genai.types import GenerateContentConfig, Modality
    from PIL import Image
    from io import BytesIO
    
    client = genai.Client()
    
    response = client.models.generate_content(
        model="gemini-2.5-flash-image",
        contents=("Generate an image of the Eiffel tower with fireworks in the background."),
        config=GenerateContentConfig(
            response_modalities=[Modality.TEXT, Modality.IMAGE],
            candidate_count=1,
            safety_settings=[
                {
                    "method": "PROBABILITY",
                    "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                    "threshold": "BLOCK_MEDIUM_AND_ABOVE",
                }
            ],
        ),
    )
    for part in response.candidates[0].content.parts:
        if part.text:
            print(part.text)
        elif part.inline_data:
            image = Image.open(BytesIO(part.inline_data.data))
            image.save("output_folder/example-image-eiffel-tower.png")
    # Example response:
    #   I will generate an image of the Eiffel Tower at night, with a vibrant display of
    #   colorful fireworks exploding in the dark sky behind it. The tower will be
    #   illuminated, standing tall as the focal point of the scene, with the bursts of
    #   light from the fireworks creating a festive atmosphere.

    Node.js

    const fs = require('fs');
    const {GoogleGenAI, Modality} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION =
      process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContentStream({
        model: 'gemini-2.5-flash-image',
        contents:
          'Generate an image of the Eiffel tower with fireworks in the background.',
        config: {
          responseModalities: [Modality.TEXT, Modality.IMAGE],
        },
      });
    
      const generatedFileNames = [];
      let imageIndex = 0;
      for await (const chunk of response) {
        const text = chunk.text;
        const data = chunk.data;
        if (text) {
          console.debug(text);
        } else if (data) {
          const fileName = `generate_content_streaming_image_${imageIndex++}.png`;
          console.debug(`Writing response image to file: ${fileName}.`);
          try {
            fs.writeFileSync(fileName, data);
            generatedFileNames.push(fileName);
          } catch (error) {
            console.error(`Failed to write image file ${fileName}:`, error);
          }
        }
      }
    
      return generatedFileNames;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.Blob;
    import com.google.genai.types.Candidate;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.Part;
    import com.google.genai.types.SafetySetting;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayInputStream;
    import java.io.File;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.imageio.ImageIO;
    
    public class ImageGenMmFlashWithText {
    
      public static void main(String[] args) throws IOException {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash-image";
        String outputFile = "resources/output/example-image-eiffel-tower.png";
        generateContent(modelId, outputFile);
      }
    
      // Generates an image with text input
      public static void generateContent(String modelId, String outputFile) throws IOException {
        // Client Initialization. Once created, it can be reused for multiple requests.
        try (Client client = Client.builder().location("global").vertexAI(true).build()) {
    
          GenerateContentConfig contentConfig =
              GenerateContentConfig.builder()
                  .responseModalities("TEXT", "IMAGE")
                  .candidateCount(1)
                  .safetySettings(
                      SafetySetting.builder()
                          .method("PROBABILITY")
                          .category("HARM_CATEGORY_DANGEROUS_CONTENT")
                          .threshold("BLOCK_MEDIUM_AND_ABOVE")
                          .build())
                  .build();
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  "Generate an image of the Eiffel tower with fireworks in the background.",
                  contentConfig);
    
          // Get parts of the response
          List<Part> parts =
              response
                  .candidates()
                  .flatMap(candidates -> candidates.stream().findFirst())
                  .flatMap(Candidate::content)
                  .flatMap(Content::parts)
                  .orElse(new ArrayList<>());
    
          // For each part print text if present, otherwise read image data if present and
          // write it to the output file
          for (Part part : parts) {
            if (part.text().isPresent()) {
              System.out.println(part.text().get());
            } else if (part.inlineData().flatMap(Blob::data).isPresent()) {
              BufferedImage image =
                  ImageIO.read(new ByteArrayInputStream(part.inlineData().flatMap(Blob::data).get()));
              ImageIO.write(image, "png", new File(outputFile));
            }
          }
    
          System.out.println("Content written to: " + outputFile);
          // Example response:
          // Here is the Eiffel Tower with fireworks in the background...
          //
          // Content written to: resources/output/example-image-eiffel-tower.png
        }
      }
    }

    Understand images

    Gemini can also understand images. The following code uses the image generated in the previous section and a different model to infer information about the image:

    Python

    from google import genai
    from google.genai.types import HttpOptions, Part
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            "What is shown in this image?",
            Part.from_uri(
                file_uri="gs://cloud-samples-data/generative-ai/image/scones.jpg",
                mime_type="image/jpeg",
            ),
        ],
    )
    print(response.text)
    # Example response:
    # The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
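
    The example above reads the image from Cloud Storage. If the file you generated in the previous section is still on your local disk, you can send the bytes inline instead; a sketch assuming the local output path from that example:

    from google import genai
    from google.genai.types import HttpOptions, Part

    client = genai.Client(http_options=HttpOptions(api_version="v1"))

    # Read the locally generated image and pass it inline with the prompt.
    with open("output_folder/example-image-eiffel-tower.png", "rb") as f:
        image_bytes = f.read()

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=[
            "What is shown in this image?",
            Part.from_bytes(data=image_bytes, mime_type="image/png"),
        ],
    )
    print(response.text)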

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	genai "google.golang.org/genai"
    )
    
    // generateWithTextImage shows how to generate text using both text and image input
    func generateWithTextImage(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	modelName := "gemini-2.5-flash"
    	contents := []*genai.Content{
    		{Parts: []*genai.Part{
    			{Text: "What is shown in this image?"},
    			{FileData: &genai.FileData{
    				// Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg
    				FileURI:  "gs://cloud-samples-data/generative-ai/image/scones.jpg",
    				MIMEType: "image/jpeg",
    			}},
    		},
    			Role: "user"},
    	}
    
    	resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	respText := resp.Text()
    
    	fmt.Fprintln(w, respText)
    
    	// Example response:
    	// The image shows an overhead shot of a rustic, artistic arrangement on a surface that ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const image = {
        fileData: {
          fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',
          mimeType: 'image/jpeg',
        },
      };
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents: [image, 'What is shown in this image?'],
      });
    
      console.log(response.text);
    
      return response.text;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.Content;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Part;
    
    public class TextGenerationWithTextAndImage {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text with text and image input
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  Content.fromParts(
                      Part.fromText("What is shown in this image?"),
                      Part.fromUri(
                          "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg")),
                  null);
    
          System.out.print(response.text());
          // Example response:
          // The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
          return response.text();
        }
      }
    }

    Code execution

    The Gemini API in Vertex AI code execution feature lets the model generate and run Python code and learn iteratively from the results until it arrives at a final output. Vertex AI provides code execution as a tool, similar to function calling. You can use the code execution feature to build applications that benefit from code-based reasoning and produce text output. For example:

    Python

    from google import genai
    from google.genai.types import (
        HttpOptions,
        Tool,
        ToolCodeExecution,
        GenerateContentConfig,
    )
    
    client = genai.Client(http_options=HttpOptions(api_version="v1"))
    model_id = "gemini-2.5-flash"
    
    code_execution_tool = Tool(code_execution=ToolCodeExecution())
    response = client.models.generate_content(
        model=model_id,
        contents="Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
        config=GenerateContentConfig(
            tools=[code_execution_tool],
            temperature=0,
        ),
    )
    print("# Code:")
    print(response.executable_code)
    print("# Outcome:")
    print(response.code_execution_result)
    
    # Example response:
    # # Code:
    # def fibonacci(n):
    #     if n <= 0:
    #         return 0
    #     elif n == 1:
    #         return 1
    #     else:
    #         a, b = 0, 1
    #         for _ in range(2, n + 1):
    #             a, b = b, a + b
    #         return b
    #
    # fib_20 = fibonacci(20)
    # print(f'{fib_20=}')
    #
    # # Outcome:
    # fib_20=6765
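
    The response.executable_code and response.code_execution_result accessors return the generated code and its result directly. When the model interleaves several rounds of text, code, and results, you can also walk the candidate parts, as the Go sample below does; a minimal sketch reusing the response object from the example above:

    # Inspect every part of the first candidate: free-form text, generated
    # code, and execution results can appear interleaved.
    for part in response.candidates[0].content.parts:
        if part.text:
            print("Text:", part.text)
        if part.executable_code:
            print("Code:", part.executable_code.code)
        if part.code_execution_result:
            print("Outcome:", part.code_execution_result.output)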

    Go

    import (
    	"context"
    	"fmt"
    	"io"
    
    	genai "google.golang.org/genai"
    )
    
    // generateWithCodeExec shows how to generate text using the code execution tool.
    func generateWithCodeExec(w io.Writer) error {
    	ctx := context.Background()
    
    	client, err := genai.NewClient(ctx, &genai.ClientConfig{
    		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    	})
    	if err != nil {
    		return fmt.Errorf("failed to create genai client: %w", err)
    	}
    
    	prompt := "Calculate 20th fibonacci number. Then find the nearest palindrome to it."
    	contents := []*genai.Content{
    		{Parts: []*genai.Part{
    			{Text: prompt},
    		},
    			Role: "user"},
    	}
    	config := &genai.GenerateContentConfig{
    		Tools: []*genai.Tool{
    			{CodeExecution: &genai.ToolCodeExecution{}},
    		},
    		Temperature: genai.Ptr(float32(0.0)),
    	}
    	modelName := "gemini-2.5-flash"
    
    	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
    	if err != nil {
    		return fmt.Errorf("failed to generate content: %w", err)
    	}
    
    	for _, p := range resp.Candidates[0].Content.Parts {
    		if p.Text != "" {
    			fmt.Fprintf(w, "Gemini: %s", p.Text)
    		}
    		if p.ExecutableCode != nil {
    			fmt.Fprintf(w, "Language: %s\n%s\n", p.ExecutableCode.Language, p.ExecutableCode.Code)
    		}
    		if p.CodeExecutionResult != nil {
    			fmt.Fprintf(w, "Outcome: %s\n%s\n", p.CodeExecutionResult.Outcome, p.CodeExecutionResult.Output)
    		}
    	}
    
    	// Example response:
    	// Gemini: Okay, I can do that. First, I'll calculate the 20th Fibonacci number. Then, I need ...
    	//
    	// Language: PYTHON
    	//
    	// def fibonacci(n):
    	//    ...
    	//
    	// fib_20 = fibonacci(20)
    	// print(f'{fib_20=}')
    	//
    	// Outcome: OUTCOME_OK
    	// fib_20=6765
    	//
    	// Now that I have the 20th Fibonacci number (6765), I need to find the nearest palindrome. ...
    	// ...
    
    	return nil
    }
    

    Node.js

    const {GoogleGenAI} = require('@google/genai');
    
    const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
    const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';
    
    async function generateContent(
      projectId = GOOGLE_CLOUD_PROJECT,
      location = GOOGLE_CLOUD_LOCATION
    ) {
      const client = new GoogleGenAI({
        vertexai: true,
        project: projectId,
        location: location,
      });
    
      const response = await client.models.generateContent({
        model: 'gemini-2.5-flash',
        contents:
          'What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50.',
        config: {
          tools: [{codeExecution: {}}],
          temperature: 0,
        },
      });
    
      console.debug(response.executableCode);
      console.debug(response.codeExecutionResult);
    
      return response.codeExecutionResult;
    }

    Java

    
    import com.google.genai.Client;
    import com.google.genai.types.GenerateContentConfig;
    import com.google.genai.types.GenerateContentResponse;
    import com.google.genai.types.HttpOptions;
    import com.google.genai.types.Tool;
    import com.google.genai.types.ToolCodeExecution;
    
    public class ToolsCodeExecWithText {
    
      public static void main(String[] args) {
        // TODO(developer): Replace these variables before running the sample.
        String modelId = "gemini-2.5-flash";
        generateContent(modelId);
      }
    
      // Generates text using the Code Execution tool
      public static String generateContent(String modelId) {
        // Initialize client that will be used to send requests. This client only needs to be created
        // once, and can be reused for multiple requests.
        try (Client client =
            Client.builder()
                .location("global")
                .vertexAI(true)
                .httpOptions(HttpOptions.builder().apiVersion("v1").build())
                .build()) {
    
          // Create a GenerateContentConfig and set codeExecution tool
          GenerateContentConfig contentConfig =
              GenerateContentConfig.builder()
                  .tools(Tool.builder().codeExecution(ToolCodeExecution.builder().build()).build())
                  .temperature(0.0F)
                  .build();
    
          GenerateContentResponse response =
              client.models.generateContent(
                  modelId,
                  "Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
                  contentConfig);
    
          System.out.println("Code: \n" + response.executableCode());
          System.out.println("Outcome: \n" + response.codeExecutionResult());
          // Example response
          // Code:
          // def fibonacci(n):
          //    if n <= 0:
          //        return 0
          //    elif n == 1:
          //        return 1
          //    else:
          //        a, b = 1, 1
          //        for _ in range(2, n):
          //            a, b = b, a + b
          //        return b
          //
          // fib_20 = fibonacci(20)
          // print(f'{fib_20=}')
          //
          // Outcome:
          // fib_20=6765
          return response.executableCode();
        }
      }
    }

    For more code execution examples, see the code execution documentation.

    What's next

    Now that you've sent your first API request, you can explore the following guides, which describe how to set up more advanced Vertex AI features for production code: