MCP servers let you access and run tools to create, manage, and query Google Cloud resources using AI agent platforms. This document describes how to connect to Google Cloud remote Model Context Protocol (MCP) servers from the Google Cloud Data Agent Kit extension for Visual Studio Code.
Supported MCP servers
The following remote MCP servers are supported in the Data Agent Kit extension for VS Code. For product-specific details, see these guides:
- Use the AlloyDB for PostgreSQL remote MCP server
- Use the BigQuery MCP server
- Use the Cloud SQL remote MCP server
- Use the Knowledge Catalog remote MCP server
- Use the Managed Service for Apache Spark MCP server
- Use the Spanner remote MCP server
A remote MCP server is enabled when you enable the corresponding API. For example, when you enable the Spanner API in your project, the Spanner remote MCP server is automatically enabled.
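As a sketch of what this looks like with the gcloud CLI (assuming the CLI is installed and `my-project` is replaced with your project ID):

```shell
# Enable the Spanner API; the Spanner remote MCP server is enabled with it.
gcloud services enable spanner.googleapis.com --project=my-project

# Confirm the API is enabled by listing enabled services that match "spanner".
gcloud services list --enabled --project=my-project --filter="name:spanner"
```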
Google Cloud remote MCP servers
Google and Google Cloud remote MCP servers have the following features and benefits:
- Simplified, centralized discovery
- Managed global or regional HTTP endpoints
- Fine-grained authorization
- Optional prompt and response security with Model Armor protection
- Centralized audit logging
For information about other MCP servers, and about the security and governance controls available for Google Cloud MCP servers, see Google Cloud MCP servers overview.
Model Context Protocol (MCP) standardizes how large language models (LLMs) and AI applications or agents connect to external data sources. MCP servers let you use their tools, resources, and prompts to take actions and get updated data from their backend service.
What's the difference between local and remote MCP servers?
- Local MCP servers
- Typically run on your local machine and use the standard input and output streams (stdio) for communication between services on the same device.
- Remote MCP servers
- Run on the service's infrastructure and offer an HTTP endpoint to AI applications for communication between the AI MCP client and the MCP server. For more information about MCP architecture, see MCP architecture.
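As an illustrative sketch (not an official schema), a local stdio server and a remote HTTP server might appear side by side in an MCP client configuration like this; the server names and the toolbox package are taken from the configuration examples later in this document:

```javascript
// Local MCP server: the client spawns a process and talks over stdio.
'datacloud_bigquery_toolbox': {
  command: 'npx',
  args: ['-y', '@toolbox-sdk/server@>=1.1.0', '--prebuilt', 'bigquery', '--stdio'],
},
// Remote MCP server: the client connects to a managed HTTP endpoint.
'datacloud_bigquery_remote': {
  type: 'http',
  url: 'https://bigquery.googleapis.com/mcp',
},
```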
Before you begin
- Install the Data Agent Kit extension for VS Code.
- Set up and configure the extension. Be sure to enable all required APIs.
Required roles
To get the permissions that you need to access Google Cloud MCP servers from your IDE, ask your administrator to grant you the MCP Tool User (roles/mcp.toolUser) IAM role on the project that you are working in.
Depending on the resources that you plan to access, you might need additional roles beyond the roles that let you use the extension. To see the minimum required roles to access remote MCP servers for each supported Google Cloud service, see the product-specific guides.
For more information about granting roles, see Manage access to projects, folders, and organizations.
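For example, an administrator could grant the role with the gcloud CLI; this is a sketch, with PROJECT_ID and USER_EMAIL as placeholders:

```shell
# Grant the MCP Tool User role on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/mcp.toolUser"
```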
Service account impersonation
When you connect to MCP servers from the Data Agent Kit extension for VS Code, you can use your user credentials or you can use service account impersonation. We recommend using service account impersonation for the gcloud CLI and application default credentials (ADC) when you connect to MCP servers.
This involves starting with the credentials that you used to sign in to Google Cloud and the gcloud CLI in VS Code, usually your user account. You then request short-lived credentials for your service account and use those credentials instead of your user credentials. For more information, see Use service account impersonation.
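As a sketch, the following gcloud CLI commands configure impersonation for both the CLI and application default credentials; SA_EMAIL is a placeholder for your service account's email address:

```shell
# Make subsequent gcloud CLI commands run as the service account.
gcloud config set auth/impersonate_service_account SA_EMAIL

# Create application default credentials that impersonate the service account.
gcloud auth application-default login --impersonate-service-account=SA_EMAIL
```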
For guidance on using service account impersonation to authenticate to MCP servers, see Service account impersonation.
Authentication and authorization
Complete the following authentication and authorization steps. For more information on Google Account authorization, see Using OAuth 2.0 to Access Google APIs.
Initiate the auth process in the IDE
To begin the process, submit a prompt to the agentic chat in the IDE.
- If the Chat panel isn't displayed, open the command palette by pressing Cmd/Ctrl+Shift+P, then search for and run Open Chat.
In the Chat panel, submit the following prompt:
list BigQuery datasets using mcp server
The agent proposes the following gcloud CLI command:
gcloud config get-value project
Click Allow.
A dialog appears asking if you want to provide a client ID.
Click Copy URIs & Proceed.
The command palette opens and asks for an OAuth client ID.
Obtain the OAuth client ID
Next, obtain your OAuth client ID from the Google Cloud console.
- In your browser, open the Google Cloud console.
- Click Select a project.
- Select the same project that you are using in the Data Agent Kit extension for VS Code.
- In the APIs & Services navigation menu, click Credentials.
- Under OAuth 2.0 Client IDs, click the ID of the client that you want to use.
- In the details pane, under Additional information, find and copy the value for Client ID.
Provide credentials to VS Code
Enter your client ID and client secret in the IDE.
- Return to VS Code.
- Paste the Client ID into the Add Client Registration Details dialog.
- Press Enter.
- Return to the console and copy the value for Client secret.
- Return to VS Code.
- Paste the client secret into the Add Client Registration Details dialog, and press Enter.
The agent continues running, and then a dialog appears stating that the MCP server wants to authenticate.
- Click Allow.
If you are prompted to sign in to your Google Account again, follow the instructions and return to the IDE.
The agent obtains your requested BigQuery datasets from the MCP server and displays the list.
Configure Google Cloud remote MCP servers in the extension
Using agentic tools in the IDE, you can instantiate an MCP client that connects to a single MCP server. You can have multiple clients that connect to different MCP servers. To connect to a remote MCP server, the MCP client must know at a minimum the URL of the remote MCP server.
For each client, provide the following details as required.
- Server name: SERVER_NAME
- Endpoint: ENDPOINT
- Transport: HTTP
- Authentication details: Depending on how you want to authenticate, you
can enter your Google Cloud credentials, your OAuth client ID and
secret, or an agent identity and credentials.
- For more information about authentication, see Authenticate to MCP servers.
- For authentication details for each supported service, see the product-specific guides.
- OAuth scope: the OAuth 2.0 scope that you want to use when connecting to the MCP server.
Replace SERVER_NAME and ENDPOINT with the values
for the service you are connecting to. For AlloyDB,
you must specify a region, such as us-east4.
| Service | Server name | Endpoint |
| --- | --- | --- |
| AlloyDB | AlloyDB MCP server | https://alloydb.REGION.rep.googleapis.com/mcp |
| BigQuery | BigQuery MCP server | https://bigquery.googleapis.com/mcp |
| Cloud SQL | Cloud SQL MCP server | https://sqladmin.googleapis.com/mcp |
| Knowledge Catalog | Knowledge Catalog MCP server | https://dataplex.googleapis.com/mcp |
| Managed Service for Apache Spark | Managed Service for Apache Spark MCP server | https://dataproc-us-central1.googleapis.com/mcp |
| Spanner | Spanner MCP server | https://spanner.googleapis.com/mcp |
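The endpoint patterns in the table can be assembled with simple string substitution. This shell sketch builds the regional AlloyDB endpoint; us-east4 is just an example region:

```shell
# Build the regional AlloyDB remote MCP endpoint from a region name.
REGION="us-east4"
ALLOYDB_ENDPOINT="https://alloydb.${REGION}.rep.googleapis.com/mcp"

# Global endpoints, such as BigQuery's, have no region component.
BIGQUERY_ENDPOINT="https://bigquery.googleapis.com/mcp"

echo "${ALLOYDB_ENDPOINT}"
```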
MCP configurations for toolbox and remote MCP servers
Use the following configurations for each Google Cloud Data Agent Kit service that you want to connect to.
AlloyDB
Remote
'datacloud_alloydb_remote': {
type: 'http',
url: 'https://alloydb.REGION.rep.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: ['https://www.googleapis.com/auth/cloud-platform'],
},
},
Local
'datacloud_alloydb-postgres-admin_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'alloydb-postgres-admin',
'--stdio'
],
'env': {},
},
'datacloud_alloydb-postgres_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'alloydb-postgres',
'--stdio'
],
'env': {
'ALLOYDB_POSTGRES_CLUSTER': '',
'ALLOYDB_POSTGRES_DATABASE': '',
'ALLOYDB_POSTGRES_INSTANCE': '',
'ALLOYDB_POSTGRES_IP_TYPE': '',
'ALLOYDB_POSTGRES_PASSWORD': '',
'ALLOYDB_POSTGRES_PROJECT': '',
'ALLOYDB_POSTGRES_REGION': '',
'ALLOYDB_POSTGRES_USER': '',
},
},
BigQuery
Remote
'datacloud_bigquery_remote': {
type: 'http',
url: 'https://bigquery.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: [
'https://www.googleapis.com/auth/bigquery',
'https://www.googleapis.com/auth/cloud-platform',
],
},
},
Local
'datacloud_bigquery_toolbox': {
command: 'npx',
args: [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'bigquery',
'--stdio'
],
env: {
'BIGQUERY_LOCATION': '',
'BIGQUERY_PROJECT': '',
},
},
Cloud SQL
Remote
'datacloud_cloud-sql_remote': {
type: 'http',
url: 'https://sqladmin.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: ['https://www.googleapis.com/auth/cloud-platform'],
},
},
Local
'datacloud_cloud-sql-postgresql-admin_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'cloud-sql-postgres-admin',
'--stdio'
],
'env': {},
},
'datacloud_cloud-sql-postgresql_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'cloud-sql-postgres',
'--stdio'
],
'env': {
'CLOUD_SQL_POSTGRES_DATABASE': '',
'CLOUD_SQL_POSTGRES_INSTANCE': '',
'CLOUD_SQL_POSTGRES_IP_TYPE': '',
'CLOUD_SQL_POSTGRES_PASSWORD': '',
'CLOUD_SQL_POSTGRES_PROJECT': '',
'CLOUD_SQL_POSTGRES_REGION': '',
'CLOUD_SQL_POSTGRES_USER': '',
},
},
Knowledge Catalog
Remote
'datacloud_knowledge_catalog_remote': {
type: 'http',
url: 'https://dataplex.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: ['https://www.googleapis.com/auth/cloud-platform'],
},
},
Local
'datacloud_knowledge_catalog_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'dataplex',
'--stdio'
],
'env': {
'DATAPLEX_PROJECT': '${PROJECT_ID}',
},
},
Managed Service for Apache Spark
Remote
'datacloud_managed_apache_spark_remote': {
type: 'http',
url: 'https://dataproc-${REGION}.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: ['https://www.googleapis.com/auth/dataproc'],
},
},
Local
'datacloud_managed_apache_spark_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
TOOLBOX_VERSION,
'--prebuilt',
'dataproc',
'--stdio',
'--user-agent-metadata',
'google-cloud-data-agent-kit-${IDE_TYPE}',
],
'env': {
'DATAPROC_PROJECT': '${PROJECT_ID}',
'DATAPROC_REGION': '${REGION}',
},
},
'datacloud_serverless-spark_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
TOOLBOX_VERSION,
'--prebuilt',
'serverless-spark',
'--stdio',
'--user-agent-metadata',
'google-cloud-data-agent-kit-${IDE_TYPE}',
],
'env': {
'SERVERLESS_SPARK_PROJECT': '${PROJECT_ID}',
'SERVERLESS_SPARK_LOCATION': '${REGION}',
},
},
Spanner
Remote
'datacloud_spanner_remote': {
type: 'http',
url: 'https://spanner.googleapis.com/mcp',
authProviderType: 'google_credentials',
oauth: {
scopes: [
'https://www.googleapis.com/auth/spanner.admin',
'https://www.googleapis.com/auth/spanner.data',
],
},
},
Local
'datacloud_spanner_toolbox': {
'$typeName': 'exa.cascade_plugins_pb.CascadePluginCommandTemplate',
'command': 'npx',
'args': [
'-y',
'@toolbox-sdk/server@>=1.1.0',
'--prebuilt',
'spanner',
'--stdio'
],
'env': {
'SPANNER_DATABASE': '',
'SPANNER_DIALECT': '',
'SPANNER_INSTANCE': '',
'SPANNER_PROJECT': '',
},
},
Optional security and safety configurations
MCP introduces new security risks and considerations due to the wide variety of actions that you can do with the MCP tools. To minimize and manage these risks, Google Cloud offers default settings and customizable policies to control the use of MCP tools in your Google Cloud organization or project.
For more information about MCP security and governance, see AI security and safety.
Use Model Armor
Model Armor is a Google Cloud service designed to enhance the security and safety of your AI applications. It works by proactively screening LLM prompts and responses, protecting against various risks and supporting responsible AI practices. Whether you are deploying AI in your cloud environment, or on external cloud providers, Model Armor can help you prevent malicious input, verify content safety, protect sensitive data, maintain compliance, and enforce your AI safety and security policies consistently across your diverse AI landscape.
When Model Armor logging is enabled, Model Armor logs the entire payload, which might expose sensitive information in your logs.
Enable Model Armor
You must enable the Model Armor API before you can use Model Armor.
Console
Enable the Model Armor API.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Select the project where you want to activate Model Armor.
gcloud
Before you begin, follow these steps using the Google Cloud CLI with the Model Armor API:
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Run the following command to set the API endpoint for the Model Armor service.
gcloud config set api_endpoint_overrides/modelarmor "https://modelarmor.LOCATION.rep.googleapis.com/"
Replace LOCATION with the region where you want to use Model Armor.
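If you later need to revert to the default endpoint, you can clear the override with the standard gcloud configuration command:

```shell
# Remove the Model Armor regional endpoint override.
gcloud config unset api_endpoint_overrides/modelarmor
```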
Configure protection for Google and Google Cloud remote MCP servers
To help protect your MCP tool calls and responses, you can use Model Armor floor settings. A floor setting defines the minimum security filters that apply across the project. This configuration applies a consistent set of filters to all MCP tool calls and responses within the project.
Set up a Model Armor floor setting with MCP sanitization enabled. For more information, see Configure Model Armor floor settings.
See the following example command:
gcloud model-armor floorsettings update \
  --full-uri='projects/PROJECT_ID/locations/global/floorSetting' \
  --enable-floor-setting-enforcement=TRUE \
  --add-integrated-services=GOOGLE_MCP_SERVER \
  --google-mcp-server-enforcement-type=INSPECT_AND_BLOCK \
  --enable-google-mcp-server-cloud-logging \
  --malicious-uri-filter-settings-enforcement=ENABLED \
  --add-rai-settings-filters='[{"confidenceLevel": "MEDIUM_AND_ABOVE", "filterType": "DANGEROUS"}]'
Replace PROJECT_ID with your Google Cloud project ID.
Note the following settings:
- INSPECT_AND_BLOCK: The enforcement type that inspects content for the Google MCP server and blocks prompts and responses that match the filters.
- ENABLED: The setting that enables a filter or enforcement.
- MEDIUM_AND_ABOVE: The confidence level for the Responsible AI - Dangerous filter settings. You can modify this setting, though lower values might result in more false positives. For more information, see Model Armor confidence levels.
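To confirm the applied configuration, you can read the floor setting back; this sketch assumes the same PROJECT_ID placeholder:

```shell
# Display the current floor setting for the project.
gcloud model-armor floorsettings describe \
  --full-uri='projects/PROJECT_ID/locations/global/floorSetting'
```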
Disable scanning MCP traffic with Model Armor
To stop Model Armor from automatically scanning traffic to and from Google MCP servers based on the project's floor settings, run the following command:
gcloud model-armor floorsettings update \
--full-uri='projects/PROJECT_ID/locations/global/floorSetting' \
--remove-integrated-services=GOOGLE_MCP_SERVER
Replace PROJECT_ID with the Google Cloud project ID. After you run this command, Model Armor no longer automatically applies the rules defined in this project's floor settings to Google MCP server traffic.
Model Armor floor settings and general configuration can impact more than just MCP. Because Model Armor integrates with services like Vertex AI, any changes you make to floor settings can affect traffic scanning and safety behaviors across all integrated services, not just MCP.
Control MCP use with IAM deny policies
Identity and Access Management (IAM) deny policies help you secure Google Cloud remote MCP servers. Configure these policies to block unwanted MCP tool access.
For example, you can deny or allow access based on:
- The principal
- Tool properties like read-only
- The application's OAuth client ID
For more information, see Control MCP use with Identity and Access Management.
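As a rough sketch of how such a policy is applied (the policy ID and the contents of policy.json are illustrative; see the linked guide for the actual permission names and rule schema), a deny policy is created from a JSON file and attached to the project:

```shell
# Create an IAM deny policy from a local JSON rules file (policy.json).
# The attachment point is the URL-encoded project resource name.
gcloud iam policies create mcp-deny-policy \
  --attachment-point="cloudresourcemanager.googleapis.com%2Fprojects%2FPROJECT_ID" \
  --kind=denypolicies \
  --policy-file=policy.json
```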
Troubleshoot
To find methods for diagnosing and resolving Data Agent Kit extension for VS Code errors, see Troubleshoot the Data Agent Kit extension for VS Code.