This guide is intended to help plan the rollout of Conversational Analytics to users of a Looker instance. It recommends the following phased approach for implementing Conversational Analytics in Looker:
- Phase 1: Curate data and define the initial scope
- Phase 2: Configure agents and validate internally
- Phase 3: Expand Conversational Analytics adoption to more users
This approach lets you start with a small, controlled scope, validate your setup, and then expand to more users and data.
Phase 1: Curate data and define the initial scope
In this phase, prepare your data for users to query with Conversational Analytics and define the scope of the initial deployment. Follow these recommendations for starting with a small and controlled scope:
- Initial setup: Follow the setup steps in Set up Conversational Analytics in Looker for your instance.
- Limit initial user access: To enable internal testing and validation, use Looker's permission system to grant the permissions that are required for Conversational Analytics to a small set of users who are familiar with the data. To start, consider limiting access to one or two models that you have curated for Conversational Analytics.
- Select curated Explores: Start with one or two well-structured Explores that are based on relatively clean data and that provide clear business value. Optimize these Explores for Conversational Analytics in Looker by following the detailed instructions in Best practices for configuring Conversational Analytics in Looker.
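For example, a curated Explore typically carries a clear label and description and exposes only well-documented fields. The following LookML is a minimal sketch; the Explore, view, and field names (such as order_summary, customers, and etl_batch_id) are hypothetical and would be replaced by your own:

```lookml
# Hypothetical model file: sales_analytics.model.lkml
# A small, curated Explore intended for Conversational Analytics.
explore: order_summary {
  label: "Order Summary"
  description: "Completed orders with revenue and customer attributes. Use this Explore for questions about sales performance."

  # Expose only the fields that users need; exclude internal or ambiguous fields.
  fields: [ALL_FIELDS*, -order_summary.etl_batch_id, -order_summary.legacy_status_code]

  join: customers {
    type: left_outer
    sql_on: ${order_summary.customer_id} = ${customers.id} ;;
    relationship: many_to_one
  }
}
```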
Phase 2: Configure agents and validate internally
In this phase, build and refine your Conversational Analytics agents, and then thoroughly test them with internal users to confirm accuracy and effectiveness. This phase involves the following steps:
- Create curated agents: Create Conversational Analytics agents that are based only on the curated Explores that you prepared during the curation and initial setup phase.
- Refine with agent instructions: Use agent instructions to provide additional context and further guidance, as shown in the sample instructions after this list. For example:
- Define synonyms for field names or values.
- Provide specific context or rules for how certain fields should be used.
- Validate internally and iterate: Thoroughly test the agents with users who are familiar with the data. Ask various questions, test edge cases, and identify weaknesses. Make the following changes based on feedback from testing:
- Refine the LookML. For example, adjust the values for the `label`, `description`, or `hidden` LookML parameters, as shown in the LookML sketch after this list.
- Adjust agent instructions.
- Continue flagging issues with data quality.
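For example, agent instructions for an agent that is based on the hypothetical Order Summary Explore might include guidance such as the following (the terms and field names are illustrative, not prescribed):

```
- Treat "revenue" and "sales" as synonyms for the Total Sale Price measure.
- "Customer" refers to a distinct value of Customer ID.
- Unless the user asks about cancellations, answer questions using completed orders only.
- The fiscal year starts in February.
```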
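Similarly, LookML refinements that are based on tester feedback are often small metadata changes. The following sketch assumes the same hypothetical order_summary view and shows the kinds of adjustments that testing tends to surface:

```lookml
# Hypothetical view file: order_summary.view.lkml
view: order_summary {
  dimension: status {
    label: "Order Status"        # clearer label for natural-language questions
    description: "Current fulfillment status: pending, shipped, delivered, or cancelled."
  }

  dimension: etl_batch_id {
    hidden: yes                  # internal field; keep it out of agent responses
  }

  measure: total_sale_price {
    type: sum
    sql: ${TABLE}.sale_price ;;
    description: "Total revenue from completed orders, in USD."
  }
}
```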
Phase 3: Expand Conversational Analytics adoption to more users
In this phase, expand Conversational Analytics adoption to more users by granting access, collecting feedback, and iterating on your agents. This phase involves the following steps:
- Grant targeted access: Grant Conversational Analytics access to additional users, and encourage those users to use the specific, vetted agents that you have created.
- Launch and collect feedback: Actively solicit feedback on the following topics:
- Accuracy of responses
- Ease of use
- Missing information or confusing results
- Iterate continuously: Use feedback to make further refinements to LookML and agent instructions, and prioritize data cleanup efforts.
- Expand access: Once the agents prove stable and valuable, grant the appropriate permissions to other relevant user groups. You can also introduce new curated agents and expand the set of models that are available to the Gemini role, following the same processes that were used in the previous phases.