This tutorial shows you how to use Gemini Enterprise Agent Platform Pipelines to run an end-to-end ML workflow, including the following tasks:
- Import and transform data.
- Fine-tune an image classification model from TFHub using the transformed data.
- Import the trained model to Vertex AI Model Registry.
- Optional: Deploy the model for online serving with Vertex AI Inference.
Before you begin
Ensure that you've completed steps 1-3 in Set up a project.
Create an isolated Python environment and install the Agent Platform SDK for Python.
Install the Kubeflow Pipelines SDK:

```shell
python3 -m pip install "kfp<2.0.0" "google-cloud-aiplatform>=1.16.0" --upgrade --quiet
```
Run the ML model training pipeline
The sample code does the following:
- Loads components from a component repository to be used as pipeline building blocks.
- Composes a pipeline by creating component tasks and passing data between them using arguments.
- Submits the pipeline for execution on Gemini Enterprise Agent Platform Pipelines. See Gemini Enterprise Agent Platform Pipelines pricing.
Copy the following sample code into your development environment and run it.
Image classification
Note the following about the sample code provided:
- A Kubeflow pipeline is defined as a Python function.
- The pipeline's workflow steps are created using Kubeflow pipeline components. By using the outputs of a component as an input of another component, you define the pipeline's workflow as a graph. For example, the `preprocess_image_data_op` component task depends on the `tfrecord_image_data_path` output from the `transcode_imagedataset_tfrecord_from_csv_op` component task.
- You create a pipeline run on Gemini Enterprise Agent Platform Pipelines using the Agent Platform SDK for Python.
Monitor the pipeline
In the Google Cloud console, in the Agent Platform section, go to the Pipelines page and open the Runs tab.
What's next
- To learn more about Gemini Enterprise Agent Platform Pipelines, see Introduction to Gemini Enterprise Agent Platform Pipelines.