Conversational Analytics is a chat-with-your-data feature that is powered by Gemini for Google Cloud. Grounded in the Looker semantic modeling layer, Conversational Analytics empowers users in your organization to ask data-related questions in natural, conversational language for governed, trusted self-service BI. This approach accelerates adoption of analytics across your organization by providing enterprise-grade governance and security.
Conversational Analytics is available in both Looker (Google Cloud core) instances and in Looker (original) instances.
Learn how and when Gemini for Google Cloud uses your data.
Key features
Conversational Analytics includes the following key features:
- Converse with a Looker Explore: Converse in natural language with Looker Explore data or with a custom data agent within a Looker (original) instance or a Looker (Google Cloud core) instance. You can converse with up to five Explores at a time.
- Create and manage data agents: With data agents, you can customize the AI-powered data querying agent by providing context and instructions that are specific to your data, which helps the data agent generate more accurate and contextually relevant responses. You can also share your data agents with other users so that they can ask questions with the same context. You can connect your agent to as many as five Explores.
- Advanced analytics with the Code Interpreter [Preview]: The Code Interpreter within Conversational Analytics translates your natural language questions into Python code and executes that code. Compared to standard SQL-based queries, the Code Interpreter's use of Python enables more complex analysis and visualizations.
- Embed Conversational Analytics in a website or application: You can embed Conversational Analytics in a website or an application using an HTML iframe tag just as you can with other Looker content types. Conversational Analytics supports both private embedding, where users are authenticated using their Looker login, and signed embedding, where users are authenticated through your own application.
How does Conversational Analytics work?
Conversational Analytics uses Gemini for Google Cloud to interpret natural language questions and provide answers based on your data in Looker. It uses the Looker semantic model — the LookML definitions of your data — as its source of truth to ensure that responses are accurate and consistent. Because business metrics like "revenue" or "churn" are defined in LookML, Conversational Analytics can interpret your business definitions and use them to deliver accurate and consistent answers.
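For context, a business definition like "revenue" is typically expressed as a LookML measure. The following sketch is illustrative only; the field names, view, and SQL are hypothetical, not taken from any specific model:

```
# Hypothetical LookML sketch: field names and SQL are illustrative.
measure: revenue {
  label: "Revenue"
  description: "Total completed sale price; the canonical revenue metric."
  type: sum
  sql: ${sale_price} ;;
  value_format_name: usd
}
```

Because the definition lives in the model, every question about "revenue" resolves to this same calculation.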
To ground its responses in your specific data and business context, Conversational Analytics uses several techniques:
- LookML schema: At the beginning of a request, Conversational Analytics fetches the schema from the Explores that are connected to it. Conversational Analytics uses parameters from the LookML model in several ways:
  - Field identification: Metadata from the schema helps Conversational Analytics home in on relevant fields. This metadata includes LookML parameters such as `name`, `label`, `description`, `type`, and `dimension_group`, which help Conversational Analytics map terms in user questions to the correct fields. For example, `description` can provide business-specific terminology or context for a field.
  - Response formatting: Conversational Analytics uses `label` parameters for user-friendly field naming and `value_format` parameters for formatting data in responses.
- Query generation: Instead of querying your database directly, Conversational Analytics determines which fields, filters, sorts, and limits should be used in the query. Looker then composes and executes the query using the underlying LookML model. This process is similar to how a user interacts with an Explore interface; Conversational Analytics doesn't need to understand complex join logic or field definitions because Looker handles query composition based on the LookML model. Query generation ensures that all queries adhere to the join logic, filtering, aggregation, and data permissions that are defined in your LookML model.

  To generate queries, Conversational Analytics must determine the correct values to use in filters. These values must be exact matches of values in the underlying data, or more advanced filter expressions such as wildcards. To resolve discrepancies between the values that users include in their natural language questions and the exact values that a filter may require, Conversational Analytics uses values that are defined with `allowed_value` in `parameter` fields, and can use the following tools to check for specific values in fields:
  - Sample data: Returns up to 100 values from a field to help Conversational Analytics learn patterns or find an exact match for filter values.
  - Fuzzy search: Generates a set of search terms based on user input and checks for their presence in a dimension to find appropriate filter values.

  Both sample data and fuzzy search use Looker's suggestion API and are therefore influenced by LookML parameters such as `suggestions`, `suggest_explore`, and `suggest_dimension`.
- Analysis: After Looker runs the queries, Conversational Analytics analyzes query results to answer user questions. Conversational Analytics can analyze results in one or more of the following ways:
- It uses built-in Gemini capabilities to interpret and summarize results.
- It uses Python code execution through the Code Interpreter to perform further analysis on the results.
- It creates visualizations based on query results.
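The schema metadata and suggestion parameters described above might look like the following LookML sketch. The view, field names, and values are hypothetical, included only to show where each parameter fits:

```
# Hypothetical sketch: names and values are illustrative.
dimension: order_status {
  label: "Order Status"
  description: "Fulfillment state of the order; users may say 'shipped' or 'delivered'."
  type: string
  sql: ${TABLE}.status ;;
  # Constrains the values that the suggestion API returns for filter matching.
  suggestions: ["Pending", "Shipped", "Delivered", "Returned"]
}

dimension: product_name {
  type: string
  sql: ${TABLE}.name ;;
  # Source suggestions from a field in another Explore, which also
  # influences sample data and fuzzy search.
  suggest_explore: products
  suggest_dimension: products.name
}
```

Curating `description` and suggestion parameters in this way gives Conversational Analytics both the vocabulary to identify fields and the exact values it needs for filters.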
By leveraging the Looker semantic model, Conversational Analytics provides access to data from various platforms — such as BigQuery, AlloyDB, Redshift, Snowflake, and Databricks — without needing to understand the underlying data complexity, and it ensures that all responses are consistent and governed.
How do Conversational Analytics data agents work?
A Conversational Analytics data agent bases its responses on two primary inputs: your instance's LookML schema, which is defined by a Looker developer, and agent instructions, which you write when you create the agent.
From your query, a data agent must determine which LookML fields to select and which filters, sorts, or limits to apply. To do this accurately, it maps the natural language in your query to its own agent instructions and the data's LookML schema in the following ways:
- Mapping semantic terms: Users often use business jargon in their questions. The agent uses your agent instructions and the LookML field metadata to interpret your query. For example, for the query "How much new business did we do?" a data agent would be able to map "new business" to a measure that calculates monthly recurring revenue. For the query "Who are our top customers?" a data agent would be able to map "top" to a usage count and "customers" to a dimension called Customer Name.
- Mapping field values: The agent looks for specific data points — for example, "California" or "Slim-fit Jeans" — using specialized tools to sample the data or perform fuzzy searches. For example, if a user asks for "jeans," the agent might trigger a fuzzy search across the Product Name field to find exact string matches in your database. If a user asks for "NY," the agent may need to sample the data to determine whether the City field or the State field contains that value.
- Refinement with query examples: You can provide specific question-and-answer examples, called "golden queries", in data agent instructions to improve accuracy for common or critical queries.
Looker then uses the LookML definitions of these fields and other logic that's defined in the Explore — including field definitions, access grants or user attributes, or complex Liquid or join logic — to compose the query that's sent to the database. Since the agent isn't writing the whole SQL query, it doesn't need to "understand" the data and can operate more accurately and deterministically.
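Agent instructions of the kind described above might look like the following sketch. The wording, field names, and examples are hypothetical, intended only to show the shape of a glossary entry, a golden query, and a default:

```
Persona: You are a revenue analyst for the sales team.

Glossary:
- "New business" means the measure orders.new_mrr (monthly recurring
  revenue from first-time customers).
- "Top customers" means customers ranked by orders.count, descending.

Golden query:
Q: How much new business did we do last quarter?
A: Select orders.new_mrr, filtered to orders.created_date in the
   last quarter.

Defaults:
- If no timeframe is mentioned, default to the last 6 months.
```

Instructions like these, combined with the LookML schema, are what let the agent map business jargon to exact fields and filters.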
Data agents versus Conversations
Creating a data agent offers several key advantages over having a standard conversation with a single Explore. While starting a conversation from an Explore allows for quick natural language querying of that specific data source, a data agent functions as a specialized standalone analyst that can be customized and shared across your organization.
Data agents provide the following advantages over conversations with Explores:
- Chat with multiple Explores: In a conversation with an Explore, you're limited to querying one Explore at a time. However, a data agent can connect to as many as five distinct Explores, enabling users to perform cross-domain analysis and receive more comprehensive answers.
- Authored context: You can provide your data agent with custom instructions that are not available in a standard Explore conversation, including the following resources:
- Golden queries: You can provide the agent with pairs of natural language questions and verified Looker queries to anchor common business patterns and significantly reduce ambiguity for the model.
- Business glossaries: You can define organization-specific jargon or acronyms directly within the agent's instructions.
- Persona framework: You can assign a specific role or expertise to the agent, setting a consistent tone and professional judgment for the conversation.
- Agent specialization: Rather than using the single generic interface of a conversation, you can create specialized agents for different business units, such as a Revenue Agent or an Ops Agent. This allows for a more guided analysis experience, leading users toward the specific fields and filters most relevant to their needs.
- Collaboration and reuse: Explore conversations are typically limited to one user, while data agents can be shared with other members of your organization. Sharing ensures that multiple users can benefit from the same authored context and governance that was developed by an administrator or data expert.
- Custom behavior: You can configure your agent to operate within strict requirements, such as using default filters — for example, "always default to the last 6 months if no timeframe is mentioned." These guardrails ensure that the agent operates within your organization's specific governance and security standards. You can also hide fields in Explores to prevent the data agent from using them in queries.
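The last point, hiding fields, is expressed in LookML with the `hidden` parameter. The field below is hypothetical, shown only to illustrate the mechanism:

```
# Hypothetical sketch: hide a sensitive field so that it isn't
# available to the data agent for queries.
dimension: internal_cost_basis {
  hidden: yes
  type: number
  sql: ${TABLE}.cost_basis ;;
}
```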
List of documentation
- Set up Conversational Analytics in Looker: Enable Conversational Analytics for a Looker instance and grant access to users.
- Recommended setup and rollout strategy for Conversational Analytics in Looker: Plan the rollout of Conversational Analytics to users of a Looker instance.
- Converse with Looker data: Connect to and ask questions about Looker Explores.
- Create and manage data agents: Create, edit, delete, and share data agents that converse with Looker Explores.
- Best practices for configuring Conversational Analytics in Looker: Explore strategies and best practices to help Looker administrators and LookML developers successfully configure and optimize Conversational Analytics.
- Enable and use the Code Interpreter: Enable and use the Code Interpreter, which translates your natural language questions into Python code and executes that code. Compared to standard SQL-based queries, the Code Interpreter's use of Python enables more complex analysis and visualizations.
- Embedding Conversational Analytics: Embed Conversational Analytics in an HTML iframe.
Understand the compliance capabilities of Gemini in Looker features
Conversational Analytics is not yet included in FedRAMP High or FedRAMP Moderate authorization boundaries. Before you enable the Gemini in Looker setting for your Looker instance, discuss with your authorizing body whether Gemini for Google Cloud's compliance offerings meet the needs of your organization.
For Looker (Google Cloud core) instances, each Assured Workloads control package that becomes available will add Gemini in Looker features as default offerings as that package's change requirements and processes are met. Conversational Analytics in Looker respects the compliance capabilities of the associated Looker (Google Cloud core) instance, with the following exception:
Data Residency (DRZ) support, specifically for data-at-rest, is available to all Looker customers. All data-at-rest that is associated with Conversational Analytics resides strictly within the Looker instance and is confined to a single region. Data in transit may be processed using a global service.
Provide feedback
You can provide feedback to Google about individual responses in Conversational Analytics by selecting one of the following options:
- Good response (thumbs up): Indicate that the response was helpful.
- Bad response (thumbs down): Indicate that the response was not helpful.
Related resources
- Learn more about the Conversational Analytics API.
- Review Gemini in Looker feature pricing information.
- Learn more about Gemini for Google Cloud.
- Learn more about Gemini in Looker.
- LookML basics: Understand the Looker semantic layer that Conversational Analytics is built on.
- Exploring data in Looker: Learn about the Explore interface for querying data, which Conversational Analytics complements.
- Access control and permission management: Understand how Looker manages user permissions and data access, which Conversational Analytics respects.