This page describes integration patterns for embedding data agent experiences into your applications. These patterns range in complexity from an embedded chat component to an orchestrated multi-agent system.
This guide is intended for cloud architects and data engineers who design generative AI applications. You should have a foundational understanding of Google Cloud concepts, Identity and Access Management, and REST APIs. You should also be familiar with the architecture of the data source that your application uses.
## Overview of integration patterns
This guide is organized into the following primary tracks based on your starting point:
- Looker track: Select this track if you want to provide chat functionality through Looker embedding, the Looker API, or the Conversational Analytics API.
- BigQuery and database track: Select this track if you are building a custom application that uses BigQuery, Data Studio, or a supported operational database.
The following table summarizes the integration patterns that are available:
| Integration pattern | Description | Data source |
|---|---|---|
| Looker iframe embed | Adds the standard chat interface to an application with minimal code. | Looker |
| Looker API and SDKs | Builds a custom chat interface that uses the Looker API for authentication. | Looker |
| Conversational Analytics API (Looker source) | Manages Looker data agents as Google Cloud resources that work across multiple surfaces and multi-agent systems. | Looker |
| Direct API (single-agent) | Uses a direct API integration for text-to-answer flows. | BigQuery, databases, Looker |
| Direct API (orchestrator) | Routes queries between the API and other tools by using function calling. | BigQuery, databases, Looker |
| ADK (schema-driven) with `BigQueryToolset` | Generates rapid insights from table references by using the `ask_data_insights` tool. | BigQuery |
| ADK (governed) with `DataAgentToolset` | Queries pre-configured data agents by using the `ask_data_agent` tool to ensure consistent behavior. | BigQuery, databases, Looker |
| ADK (custom streaming) | Supports real-time streaming of charts and SQL by using a custom agent class. | BigQuery, databases, Looker |
| MCP with `McpToolset` or `ToolboxToolset` | Connects applications to data tools that use the Model Context Protocol (MCP). | BigQuery, Looker |
| A2A protocol | Enables secure collaboration between specialized agents that operate on different systems. | Framework dependent |
## Integration options for Looker
If you use Looker, you can provide Looker Conversational Analytics to your users through the following patterns:
- Embed with iframes: A low-code pattern that adds the standard chat interface to an existing application.
- Build with the Looker API and SDKs: A flexible approach that lets you build a custom frontend through Looker authentication and agent management.
- Use the Conversational Analytics API: A direct integration with the API that allows data agents to be used across multiple Google Cloud surfaces.
### Summary of Looker integration patterns
The following table summarizes the primary integration patterns for Looker:
| Pattern | Best for |
|---|---|
| Embed with iframes: A low-code method to quickly add the standard Looker chat experience to an application. | Teams that require a production-ready conversational analytics experience with minimal custom development. |
| Build with the Looker API and SDKs: A flexible approach to building a custom chat interface while keeping authentication and agent management within Looker. | Teams that require a custom chat user experience but want to keep user authentication and agent management within the Looker ecosystem. This pattern is ideal for applications that already use Looker embedding or the API. |
| Use the Conversational Analytics API: Direct integration with the API to manage agents as cloud-level resources. | Looker customers who require cross-platform portability for their data agents. |
### Embed with iframes

You can embed Conversational Analytics as an iframe to deliver the chat experience outside of the Looker UI. This pattern is a direct way to provide Conversational Analytics that requires no custom UI development, backend orchestration, or API state management. To use this pattern, you add a preformatted URL to your application.
The Looker Embed SDK provides tools that manage tasks such as secure URL generation, iframe lifecycle management, and JavaScript event passing between the host application and the iframe. You can embed the Agents page, the Conversations page, or a specific conversation.
You can use the following authentication methods for embedded content:
- Private embedding: Authenticates users with their existing Looker credentials. When you set the embed URL as the iframe source, users sign in with their Looker accounts. This method automatically enforces existing roles, content access, and data-level permissions, such as access filters or access grants, without requiring additional Identity and Access Management configuration or token mapping.
- Signed embedding: Authenticates users through your application by using single sign-on (SSO). You construct a signed URL that includes the Conversational Analytics content path, which lets you dynamically specify exactly which permissions to grant.
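For signed embedding, the general shape of constructing a signed URL can be sketched as follows. This is an illustrative, standard-library-only sketch: the exact set of signed parameters, their required order, and the signing algorithm must be taken from Looker's signed embedding documentation, and the embed path and field values used here are placeholders.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time
import urllib.parse


def sign_embed_url(host: str, embed_path: str, secret: str,
                   external_user_id: str, permissions: list[str],
                   models: list[str], session_length: int = 3600) -> str:
    """Build a signed embed URL (illustrative sketch only).

    The field list and signing order below are assumptions; they must
    match Looker's signed embedding specification in a real system.
    """
    params = {
        "nonce": json.dumps(secrets.token_hex(16)),
        "time": json.dumps(int(time.time())),
        "session_length": json.dumps(session_length),
        "external_user_id": json.dumps(external_user_id),
        "permissions": json.dumps(permissions),
        "models": json.dumps(models),
        "access_filters": json.dumps({}),
        "force_logout_login": json.dumps(True),
    }
    # The string to sign is a newline-joined sequence of values; the
    # order shown here is a placeholder for the documented field order.
    to_sign = "\n".join([
        host, embed_path, params["nonce"], params["time"],
        params["session_length"], params["external_user_id"],
        params["permissions"], params["models"], params["access_filters"],
    ])
    params["signature"] = base64.b64encode(
        hmac.new(secret.encode(), to_sign.encode("utf-8"), hashlib.sha1).digest()
    ).decode()
    query = urllib.parse.urlencode(params)
    return f"https://{host}/login/embed/{urllib.parse.quote_plus(embed_path)}?{query}"
```

Your backend generates this URL per user session and sets it as the iframe `src`; the signature binds the granted permissions to the URL so the client can't alter them.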
### Build with the Looker API and SDKs
For more flexibility in the chat experience, you can use the ConversationalAnalytics methods in the Looker API or use a Looker SDK to build a custom application. This approach lets you build a custom frontend that communicates directly with Looker endpoints.
Integrating with the Looker API provides the following benefits:
- You only manage authentication with Looker. There is no need to authenticate separately with the Conversational Analytics API.
- For applications that already use Looker embedding or the API, this pattern simplifies the project architecture by avoiding secondary authentication mechanisms and eliminating the need to manage external data agents.
- You have full control over the chat interface, conversation flow, and how the application renders results (such as charts and tables).
For a reference implementation, see the Conversational Analytics Looker API guide on GitHub.
### Use the Conversational Analytics API with Looker data
You can integrate directly with the Conversational Analytics API at geminidataanalytics.googleapis.com if you need to perform any of the following tasks:
- Share the same data agent across multiple Google Cloud surfaces, such as custom web applications, Google Chat, and Gemini Enterprise
- Combine Looker data sources with BigQuery or operational database sources in a single multi-agent system
- Manage your data agent as a cloud-level resource that is governed by Identity and Access Management rather than the Looker permission model
For more information on common architecture patterns for the Conversational Analytics API, see Integration options for BigQuery and databases.
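As a sketch of what such a direct call looks like, the following helper builds a stateless chat request body that references a Looker explore through inline context. The field names are assumptions modeled on the REST reference and should be verified against the current `geminidataanalytics.googleapis.com` documentation before use.

```python
def build_looker_chat_request(question: str, instance_uri: str,
                              lookml_model: str, explore: str) -> dict:
    """Build a stateless chat request body for the Conversational
    Analytics API with a Looker data source.

    Field names are a sketch; verify them against the current API
    reference for the chat method.
    """
    return {
        "messages": [{"userMessage": {"text": question}}],
        "inlineContext": {
            "datasourceReferences": {
                "looker": {
                    "exploreReferences": [{
                        "lookerInstanceUri": instance_uri,
                        "lookmlModel": lookml_model,
                        "explore": explore,
                    }]
                }
            }
        },
    }
```

Your application would POST this body to the chat endpoint with an OAuth token for the caller; the same overall shape applies to BigQuery sources with a different datasource reference.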
## Integration options for BigQuery and databases
This section describes architectural patterns for applications that use BigQuery, Data Studio, or supported Google Cloud operational databases to build custom experiences with the Conversational Analytics API.
If you use the Conversational Analytics API with a Looker data source, the patterns that are described in this section also apply to your integration.
The Conversational Analytics API provides the following primary methods to interact with data:
- `chat` method: Supports BigQuery, Looker, Data Studio, and operational databases.
- `queryData` method: Supports operational databases such as AlloyDB, GoogleSQL for Spanner, Cloud SQL for MySQL, and Cloud SQL for PostgreSQL.
When you build a custom application, you can use one or more of the following integration patterns:
- Direct API integration: A custom approach that provides the most flexibility but requires you to build the infrastructure for authentication, conversation management, and response parsing.
- Framework-driven orchestration (ADK): An approach that uses the Agent Development Kit (ADK) for multi-agent routing, tool execution, and state management.
- Vertical integration (MCP): An approach that uses the Model Context Protocol (MCP) to provide a uniform way to connect AI applications to tools and data sources across different environments.
- Horizontal orchestration (A2A): An approach that uses the Agent-to-Agent (A2A) protocol to allow specialized agents on different systems to collaborate securely without requiring custom integration code.
### Summary of BigQuery and database integration patterns
The following table summarizes the specific implementation patterns for BigQuery and operational databases:
| Pattern | Best for |
|---|---|
| Single-agent integration (direct API): A pattern where your application calls the API directly to return insights from your data sources. | Single-agent applications, prototypes, or microservices that require direct control over every API call. |
| Custom orchestrator (direct API): A pattern that uses a root agent and function calling to route queries between the Conversational Analytics API and other tools or services. | Applications that combine data queries with other tasks, such as email or documents, in a single conversational flow. |
| Schema-driven integration with `BigQueryToolset` (ADK): A pattern that uses table references to return data insights quickly. | Rapid prototyping, internal tools where data governance is less critical, or scenarios where data insights are one of several capabilities in an ADK agent. |
| Governed integration with `DataAgentToolset` (ADK): A pattern that queries a pre-configured data agent by referencing the agent's ID. | Production applications that require consistent data access or multi-agent systems where the data agent is a trusted, reusable component. |
| Custom sub-agents (ADK): A pattern that uses a custom agent class to connect directly to the API and stream data chunks back to the user. | User-facing applications where low response latency is a priority or multi-agent pipelines where data retrieval feeds downstream agents. |
| Model Context Protocol (MCP): A pattern that uses an open standard to connect AI applications to data sources and tools across different environments. | Organizations that require tool interoperability across multiple AI clients and IDEs, or teams that need the same data tools to be accessible from the ADK framework, IDEs, and custom applications. |
| Agent-to-Agent (A2A): A pattern that uses the A2A protocol, an open standard that lets specialized agents on different systems collaborate securely without requiring custom integration code. | Highly distributed enterprise environments where a central routing agent must delegate tasks to data agents that operate on different systems or networks. |
### Direct API integration
Direct API integration provides granular control over your application's logic and architecture but requires you to build the supporting infrastructure. With this approach, you're responsible for tasks such as authentication, conversation management, response parsing, and deployment.
This section covers the following topics:
- Single-agent integration: A pattern where your application calls the Conversational Analytics API directly to return insights from your data sources.
- Custom orchestrator integration: An advanced pattern that uses a root agent and function calling to route queries between the Conversational Analytics API and other tools or services.
#### Single-agent integration
With a single-agent integration, your backend directly calls the Conversational Analytics API by using REST or a client library and passes the user's query and context. This pattern is suitable for low-complexity applications, such as web apps, internal chat tools, or microservices, that use a straightforward text-to-answer flow. You can also use this approach for prototyping and proof-of-concept work.
This pattern supports both stateful chat, where Google manages the conversation history, and stateless chat, where your application manages the history.
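The streamed response arrives as a sequence of messages that your application must parse. The following standard-library sketch shows one way to collect the text parts of streamed system messages into a single reply; the message shape here is an assumption modeled on the API's streamed `Message` objects, not a verbatim schema.

```python
def extract_reply_text(messages: list[dict]) -> str:
    """Collect the text parts of streamed system messages into a reply.

    Assumes each streamed message may carry a `systemMessage` with one
    variant (such as `text`, `schema`, `data`, or `chart`); only the
    text variant is gathered here. Verify the exact schema against the
    API reference.
    """
    parts = []
    for msg in messages:
        # Non-text variants (schema, data, chart) yield no text parts.
        text = msg.get("systemMessage", {}).get("text", {})
        parts.extend(text.get("parts", []))
    return "".join(parts)
```

A real backend would apply this over the streamed response body and render the other variants (data, charts) separately.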
For reference implementations, see the Conversational Analytics API quickstart or the Conversational Analytics API golden demo on GitHub.
#### Custom orchestrator integration
With this approach, you build a root agent that acts as the primary entry point and coordinator for your application. The root agent uses a standard Gemini model that is equipped with tools through function calling. When a user asks a data-related question, the root agent emits a tool call to the Conversational Analytics API, receives the result, and can then continue reasoning or call other tools downstream.
Function calling involves the following stages:
- Declare: Define tool schemas as `FunctionDeclaration` objects that include parameter definitions.
- Invoke: The model returns a structured `functionCall` message that contains the function name and arguments.
- Execute: Your application performs the API call and returns the result in a `FunctionResponse` message.
- Synthesize: The Gemini model uses the result to generate a final answer or to determine the next action.
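The four stages can be sketched as a minimal orchestration loop. The model is stubbed out with a plain function so the control flow is visible; in a real application, the stub would be replaced by a Gemini call and the tool execution would call the Conversational Analytics API.

```python
# Declare: the tool schema the root agent advertises to the model.
# The declaration shape mirrors a Gemini FunctionDeclaration.
ASK_DATA_TOOL = {
    "name": "ask_conversational_analytics",
    "description": "Answer a natural-language question about governed data.",
    "parameters": {
        "type": "object",
        "properties": {"question": {"type": "string"}},
        "required": ["question"],
    },
}


def call_conversational_analytics(question: str) -> dict:
    # Execute: in a real orchestrator this would call the
    # Conversational Analytics API; stubbed here for illustration.
    return {"answer": f"Stub answer for: {question}"}


def orchestrate(user_query: str, model) -> str:
    # Invoke: the model decides whether a tool call is needed.
    step = model(user_query, tools=[ASK_DATA_TOOL])
    if step.get("functionCall"):
        name = step["functionCall"]["name"]
        args = step["functionCall"]["args"]
        result = call_conversational_analytics(**args)
        # Synthesize: the model turns the FunctionResponse into an answer.
        final = model(user_query,
                      function_response={"name": name, "response": result})
        return final["text"]
    return step["text"]


def fake_model(query, tools=None, function_response=None):
    # Minimal stand-in for a Gemini call, used to exercise the loop.
    if function_response is None:
        return {"functionCall": {"name": "ask_conversational_analytics",
                                 "args": {"question": query}}}
    return {"text": function_response["response"]["answer"]}
```

The same loop generalizes to multiple tools: the root agent routes each `functionCall` by name to the matching executor, which is how non-data tasks (email, documents) join the same conversation.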
This approach is suitable for applications where users might ask for data insights alongside other tasks. For example, a user might ask the agent to "Show me sales, and then draft an email to the sales team." The root agent can route the data question to the Conversational Analytics API and use other tools for non-data tasks.
For a reference implementation, see the orchestrate or multimodal pages in the Conversational Analytics API golden demo on GitHub.
### Framework-driven orchestration (ADK)
The Agent Development Kit (ADK) is a code-first framework for building AI agents that manages the complexity of multi-agent routing, tool execution, and state management. The ADK framework is the same framework that powers Gemini Enterprise.
By using the ADK, you can chain the Conversational Analytics API with other tools and agents to perform complex actions.
This section covers the following topics:
- Schema-driven integration: A pattern that uses the `ask_data_insights` tool from the `BigQueryToolset` toolset to return data insights from BigQuery table references.
- Governed integration: A pattern that uses the `ask_data_agent` tool from the `DataAgentToolset` toolset to query a pre-configured data agent by referencing the agent's ID.
- Advanced UX integration with custom sub-agents: A pattern that uses a custom sub-agent component to connect directly to the Conversational Analytics API and stream data chunks back to the user asynchronously.
#### Schema-driven integration with `BigQueryToolset`

The `BigQueryToolset` toolset in the ADK framework includes a prebuilt `ask_data_insights` tool. You pass table names and the user's question to this tool, which then calls the Conversational Analytics API by using inline context.
When you call the tool, it sends a stateless request that includes the specified BigQuery table references to the Conversational Analytics API. The API infers the database schema, generates and executes the SQL query, and returns a text answer. The result is then passed back to the ADK agent as a tool response.
This pattern is an effective way to add conversational analytics to an agent quickly. However, because the call to the API is stateless and ungoverned, the API generates SQL based entirely on the table schema, without semantic guardrails. This makes the pattern faster to deploy but riskier for production use cases where naming conventions, business logic, or access controls apply.
#### Governed integration with `DataAgentToolset`

The `DataAgentToolset` toolset in the ADK framework provides a prebuilt integration that references a pre-configured data agent by its ID. The ADK agent passes the user's question to the `ask_data_agent` tool, which calls the Conversational Analytics API with the specified data agent context.
You can create a data agent programmatically by using the Conversational Analytics API or through the Agent Catalog in the Google Cloud console. A data agent comes equipped with the following components:
- Knowledge sources: Tables, views, or user-defined functions (UDFs) that the agent can query
- Structured context: Descriptions for tables and columns that help the agent understand the underlying data
- Instructions: Additional guidance for the agent to interpret and query the data sources
- Verified queries: Pre-validated SQL queries that serve as examples for common questions
- Glossary: Business term definitions that help the agent understand domain-specific language
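To make the relationship between these components concrete, the following sketch assembles them into a single agent definition. The field names (`dataAnalyticsAgent`, `publishedContext`, and so on) are assumptions modeled on the API's data agent resource; verify them against the Conversational Analytics API reference before use.

```python
def build_data_agent(description: str, system_instruction: str,
                     bq_tables: list[dict], glossary: dict[str, str]) -> dict:
    """Sketch of a data agent definition that bundles knowledge
    sources, instructions, and a glossary into one governed unit.

    All field names here are illustrative assumptions; check the
    Conversational Analytics API reference for the exact schema.
    """
    glossary_text = "\n".join(f"- {term}: {definition}"
                              for term, definition in glossary.items())
    return {
        "description": description,
        "dataAnalyticsAgent": {
            "publishedContext": {
                # Instructions plus glossary terms guide interpretation.
                "systemInstruction": (
                    f"{system_instruction}\nGlossary:\n{glossary_text}"
                ),
                # Knowledge sources: the tables the agent can query.
                "datasourceReferences": {
                    "bq": {"tableReferences": bq_tables},
                },
            }
        },
    }
```

Once created (via the API or the Agent Catalog), any application or sub-agent references the agent by ID rather than re-supplying this context per request.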
For an in-depth guide on creating agents through the Agent Catalog, see the conversational analytics in BigQuery codelab.
Because the agent is defined as a governed unit, it uses the same trusted logic, context, and guardrails regardless of which application or sub-agent calls it.
#### Advanced UX integration with custom sub-agents
The `BigQueryToolset` and `DataAgentToolset` toolsets don't return results to the user until the API request finishes processing. Because the ADK framework treats the API as a tool that blocks responses until completion, longer-running queries might leave users without feedback.
As an alternative for applications where low response latency is a priority or where data retrieval feeds downstream agents, you can build a custom ADK agent class that connects directly to the Conversational Analytics API and streams data in chunks back to the user asynchronously. This pattern supports the following response types as they are produced:
- Thought messages: The data agent's reasoning process as it interprets the question.
- Progress messages: Status updates during data retrieval for data sources.
- Query generation: The generated SQL or Looker query, which is streamed as it is produced.
- Data: The data results in JSON format.
- Visualization: Vega-Lite chart specifications.
- Summary: The final text-based answer.
For a full list of the types of data that are returned, see the `SystemMessage` type in the API reference documentation.
This asynchronous approach means that users don't have to wait for a complex data retrieval process to finish completely. As the data agent works through the query process, it continuously publishes incremental updates, such as text summaries, raw data, or chart configurations, into a temporary, shared workspace. This data can then be rendered for the user in real time and shared with specialized sub-agents to perform additional tasks.
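The following standard-library sketch simulates this flow: a custom sub-agent consumes an asynchronous stream of typed chunks and publishes each one to a shared workspace as it arrives. The chunk types and the `fake_api_stream` generator are illustrative stand-ins for the API's streamed `SystemMessage` variants.

```python
import asyncio


async def stream_data_agent(question: str, chunks) -> dict:
    """Simulated custom sub-agent: forwards typed chunks to a shared
    workspace as they arrive instead of blocking until completion."""
    workspace = {"question": question, "events": []}
    async for chunk in chunks:
        # Each chunk is published immediately so the UI and any
        # downstream sub-agents can react in real time.
        workspace["events"].append(chunk)
        if chunk["type"] == "text":
            workspace["answer"] = chunk["value"]
    return workspace


async def fake_api_stream():
    # Stand-in for the Conversational Analytics API response stream;
    # the chunk types mirror the streamed variants described above.
    for chunk in [
        {"type": "thought", "value": "Interpreting the question"},
        {"type": "sql", "value": "SELECT SUM(sales) FROM t"},
        {"type": "data", "value": [{"sales": 42}]},
        {"type": "text", "value": "Total sales are 42."},
    ]:
        yield chunk


workspace = asyncio.run(
    stream_data_agent("What are total sales?", fake_api_stream()))
```

In a real ADK agent class, the generator would wrap the API's streamed response, and a visualization sub-agent could consume the `data` and chart chunks from the workspace while the summary is still being generated.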
For a reference implementation that includes a root agent, a data sub-agent, and a visualization agent, see the ADK streaming demo on GitHub.
### Vertical integration (MCP)
The Model Context Protocol (MCP) is an open standard that provides a uniform way for AI applications to connect to external tools and data sources. MCP standardizes the interface between AI models and the tools that they use.
This section covers the following topics:
- MCP Toolbox for Databases: Describes prebuilt tools for connecting to BigQuery and Looker.
- MCP implementation patterns for standalone and ADK architectures: Describes patterns for using MCP as a standalone server or within an Agent Development Kit (ADK) workflow.
#### MCP Toolbox for Databases

While there is no dedicated Conversational Analytics API MCP server, you can access the API through the MCP Toolbox for Databases server. This open-source server provides prebuilt, MCP-compatible tools that expose the `chat` method in the Conversational Analytics API:
- `bigquery-conversational-analytics`: Wraps the `chat` method for BigQuery data sources.
- `looker-conversational-analytics`: Wraps the `chat` method for Looker data sources.
MCP is an interoperability layer that exposes the Conversational Analytics API's capabilities as tools to MCP-compatible clients; it is not a separate execution model for the API.
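Because MCP clients speak JSON-RPC 2.0, calling one of these tools reduces to sending a `tools/call` request. The envelope below follows the MCP specification; the argument names passed to the tool are assumptions, since the Toolbox defines its own tool schemas.

```python
import json


def mcp_tool_call(request_id: int, question: str,
                  table_refs: list[str]) -> str:
    """Build an MCP `tools/call` JSON-RPC request for the
    bigquery-conversational-analytics tool.

    The JSON-RPC envelope follows the MCP specification; the
    `arguments` keys are illustrative assumptions about the tool's
    input schema.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "bigquery-conversational-analytics",
            # Hypothetical argument names; check the tool's declared
            # input schema via the MCP tools/list response.
            "arguments": {
                "user_question": question,
                "table_references": table_refs,
            },
        },
    })
```

Any MCP-compliant client (an IDE, an ADK `McpToolset`, or a custom application) sends this same message shape, which is what makes the tool reusable across ecosystems.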
#### MCP implementation patterns for standalone and ADK architectures
You can implement MCP through the following patterns:
| Pattern | Details |
|---|---|
| Standalone MCP (without ADK) | Use the MCP Toolbox for Databases server as a standalone server to connect any MCP-compliant client, such as an IDE or a custom application, to your data tools. |
| MCP within ADK | The ADK framework provides mechanisms, such as the `McpToolset` and `ToolboxToolset` toolsets, for integrating MCP servers into agent workflows. |
You can also build MCP servers by using the FastMCP framework to expose tools that were built with the ADK framework to any MCP-compliant client. This approach makes your ADK agents available as tools in other ecosystems.
Choose an integration pattern based on the specific architectural requirements of your application:
- Using built-in Agent Development Kit (ADK) toolsets, such as `BigQueryToolset` or `DataAgentToolset`, provides tighter integration with no external server dependencies. This approach is ideal for systems that exist entirely within the ADK framework.
- Using the tools in the MCP Toolbox provides interoperability across MCP-compliant clients. This approach is ideal for data tools that must serve multiple consumer applications or third-party IDEs.
### Horizontal orchestration (A2A)
The Agent-to-Agent (A2A) protocol is an open standard that enables specialized agents on different systems to communicate and collaborate securely without requiring custom integration code.
As systems scale, organizations often deploy multiple specialized agents that are built on different frameworks or cloud infrastructures. A2A establishes a universal messaging tier for these autonomous agents. Instead of using custom APIs, each agent publishes an agent card, which is a discoverable profile that details the agent's capabilities, supported data formats, and security requirements.
When a central orchestrator or peer agent requires analytical data, it securely delegates the task to a data agent through structured A2A messaging. The data agent processes the request autonomously and returns the findings, which decouples the execution logic from the requester.
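The following sketch shows what an agent card and capability discovery might look like for a data agent. The field names follow the A2A specification's agent card shape as an assumption; verify them against the current protocol version, and note that the URL and skill IDs are hypothetical.

```python
# Sketch of an A2A agent card: a discoverable profile that details the
# agent's capabilities. Field names are assumptions based on the A2A
# specification; the URL and skill IDs below are hypothetical.
AGENT_CARD = {
    "name": "sales-data-agent",
    "description": "Answers analytical questions over governed sales data.",
    "url": "https://agents.example.com/sales-data-agent",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [{
        "id": "ask-data",
        "name": "Ask a data question",
        "description": ("Runs a natural-language query through the "
                        "Conversational Analytics API."),
    }],
}


def find_skill(card: dict, skill_id: str):
    """A peer or orchestrator agent discovers capabilities by reading
    the card rather than through custom integration code."""
    return next((s for s in card.get("skills", []) if s["id"] == skill_id),
                None)
```

An orchestrator that finds a matching skill then delegates the task through structured A2A messaging, keeping the data agent's execution logic opaque to the requester.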
## Choose an integration pattern
Use the following table to compare the complexity, governance, and capabilities of each integration pattern.
Complexity levels are defined as follows:
- Low: Patterns that require minimal custom code and rely on prebuilt user interfaces or tools.
- Medium: Patterns that require custom frontend development and API or SDK integration but avoid complex backend orchestration infrastructure.
- High: Patterns that require full-stack application development, conversation state management, multiple authentication layers, or intermediate orchestrator infrastructure.
- Varies: Patterns where the complexity depends on the underlying integration method that you choose.
| Integration pattern | Complexity | Customization | Agent governance | Access control | Multi-agent | Streaming | Portability |
|---|---|---|---|---|---|---|---|
| Looker iframe embed | Low | Low | Managed through Looker | Looker | No | Built-in | Looker only |
| Looker API and SDKs | Medium | High | Managed through Looker | Looker | No | Built-in | Looker only |
| Conversational Analytics API with Looker source | Varies | High | Managed through the API | Looker and IAM | Yes | Yes | Any Google Cloud surface |
| Single-agent (direct API) | Medium | High | Managed through the API | IAM | No | Yes (supported) | Any Google Cloud surface |
| Custom orchestrator | High | Very high | Managed through the API | IAM | Manual | Manual | Any Google Cloud surface |
| Schema-driven with `BigQueryToolset` (ADK) | Low | Medium | None (schema inference) | IAM | Yes (ADK) | No (blocking) | ADK ecosystem |
| Governed with `DataAgentToolset` (ADK) | Low | Medium | Managed through the API | IAM | Yes (ADK) | No (blocking) | ADK ecosystem |
| Custom streaming sub-agent (ADK) | High | Very high | Managed through the API | IAM | Yes (ADK) | Yes (custom) | ADK ecosystem |
| Standalone MCP | Medium | Medium | None (schema inference) | IAM | No | No | Any MCP client |
| MCP within ADK | Medium | High | None (schema inference) | IAM | Yes (ADK) | No | ADK and MCP clients |
| A2A Protocol | High | High | Framework dependent | IAM | Yes | Yes | Cross-platform |
## What's next
- Learn about Conversational Analytics API architecture and key concepts.
- Understand state management for data agents and how the API manages conversation context.
- Explore how to authenticate and connect to a data source.
- Learn how to create and configure an agent with HTTP.
- Learn how to create and configure an agent with Python.
- Learn more about guiding an agent's behavior with authored context.
- Understand access control with IAM for the Conversational Analytics API.
- Learn how to protect your data agents and conversations by using CMEK.
- Learn how to render agent responses for Looker data sources.