MCP Tools Reference: cloud-sql

Tool: execute_sql

Execute any valid SQL statement on a Cloud SQL instance, including data definition language (DDL), data control language (DCL), data query language (DQL), and data manipulation language (DML) statements.

To support the execute_sql tool, a Cloud SQL instance must meet the following requirements:

  • The value of data_api_access must be set to ALLOW_DATA_API.
  • For a MySQL instance, the database flag cloudsql_iam_authentication must be set to on. For a PostgreSQL instance, the database flag cloudsql.iam_authentication must be set to on.
  • An IAM user account or IAM service account (CLOUD_IAM_USER or CLOUD_IAM_SERVICE_ACCOUNT) is required to call the execute_sql tool. The tool executes SQL statements with the privileges of the database user that is logged in with IAM database authentication.

After you use the create_instance tool to create an instance, you can use the create_user tool to create an IAM user account for the user currently logged in to the project.

The execute_sql tool has the following limitations:

  • If a SQL statement returns a response larger than 10 MB, then the response will be truncated.
  • The execute_sql tool has a default timeout of 30 seconds. If a query runs longer than 30 seconds, then the tool returns a DEADLINE_EXCEEDED error.
  • The execute_sql tool isn't supported for SQL Server.
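One way to work within the 30-second deadline is to send long scripts one statement at a time rather than as a single call. The following is a minimal sketch; the naive semicolon split assumes no semicolons appear inside string literals or comments (a real splitter would need a SQL tokenizer), and `split_statements` is a hypothetical helper, not part of the tool:

```python
def split_statements(script: str) -> list[str]:
    """Naively split a SQL script on semicolons.

    Assumption: no semicolons inside string literals or comments.
    """
    return [s.strip() for s in script.split(";") if s.strip()]

statements = split_statements(
    "CREATE TABLE t (id INT); INSERT INTO t VALUES (1); SELECT * FROM t;"
)
# Each statement can then be sent as its own execute_sql call, which
# keeps any single call's runtime well under the 30-second deadline.
print(statements)
```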

If you receive errors similar to "IAM authentication is not enabled for the instance", then you can use the get_instance tool to check the value of the IAM database authentication flag for the instance.

If you receive errors like "The instance doesn't allow using executeSql to access this instance", then you can use the get_instance tool to check the data_api_access setting.

When you receive authentication errors:

  1. Check if the currently logged-in user account exists as an IAM user on the instance using the list_users tool.
  2. If the IAM user account doesn't exist, then use the create_user tool to create the IAM user account for the logged-in user.
  3. If the currently logged-in user doesn't have the proper database user roles, then you can use the update_user tool to grant database roles to the user. For example, the cloudsqlsuperuser role provides an IAM user with many of the required permissions.
  4. Check if the currently logged-in user has the correct IAM permissions assigned for the project. You can use the gcloud projects get-iam-policy [PROJECT_ID] command to check whether the user has the proper IAM roles or permissions assigned for the project.

    • The user must have the cloudsql.instances.login permission to use automatic IAM database authentication.
    • The user must have the cloudsql.instances.executeSql permission to execute SQL statements using the execute_sql tool or the executeSql API.
    • Common IAM roles that contain the required permissions: Cloud SQL Instance User (roles/cloudsql.instanceUser) or Cloud SQL Admin (roles/cloudsql.admin).
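The IAM check in step 4 can be scripted against the JSON output of gcloud projects get-iam-policy. The following is a sketch, assuming the policy has already been fetched (for example with --format=json) and parsed into a dict; `has_cloudsql_access` is a hypothetical helper:

```python
# Roles the doc lists as containing the required permissions.
REQUIRED_ROLES = {"roles/cloudsql.instanceUser", "roles/cloudsql.admin"}

def has_cloudsql_access(policy: dict, member: str) -> bool:
    """Return True if `member` holds any role known to grant the
    cloudsql.instances.login and cloudsql.instances.executeSql permissions."""
    granted = {
        b["role"]
        for b in policy.get("bindings", [])
        if member in b.get("members", [])
    }
    return bool(granted & REQUIRED_ROLES)

# Example policy shaped like `gcloud projects get-iam-policy --format=json` output.
policy = {
    "bindings": [
        {"role": "roles/cloudsql.instanceUser",
         "members": ["user:alice@example.com"]},
        {"role": "roles/viewer",
         "members": ["user:bob@example.com"]},
    ]
}
print(has_cloudsql_access(policy, "user:alice@example.com"))  # True
print(has_cloudsql_access(policy, "user:bob@example.com"))    # False
```

Note that this only detects the two predefined roles named above; a custom role could also carry the required permissions and would not be detected by this check.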

When you receive an ExecuteSqlResponse, always check the message and status fields within the response body. A successful HTTP status code doesn't guarantee that every SQL statement succeeded; the message and status fields indicate whether there were any partial errors or warnings during SQL statement execution.
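That check can be expressed in a few lines. The following sketch walks a dict shaped like the SqlInstancesExecuteSqlResponse schema documented below; `check_response` is a hypothetical helper name:

```python
def check_response(resp: dict) -> list[str]:
    """Collect problems from an ExecuteSqlResponse-shaped dict.

    A non-zero top-level `status.code`, a notice/warning message, or a
    truncated result all signal trouble even when HTTP returned 200.
    """
    problems = []
    status = resp.get("status") or {}
    if status.get("code", 0) != 0:
        problems.append(f"execution error: {status.get('message', '')}")
    for msg in resp.get("messages", []):
        problems.append(f"{msg.get('severity', 'NOTICE')}: {msg.get('message', '')}")
    for i, result in enumerate(resp.get("results", [])):
        if result.get("partialResult"):
            problems.append(f"result {i} truncated")
    return problems

# A failed statement: HTTP 200, but the body carries a database error.
resp = {
    "status": {"code": 3, "message": 'syntax error at or near "SELEC"'},
    "messages": [],
    "results": [],
}
print(check_response(resp))
# ['execution error: syntax error at or near "SELEC"']
```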

The following sample demonstrates how to use curl to invoke the execute_sql MCP tool.

Curl Request
curl --location 'https://sqladmin.googleapis.com/mcp' \
--header 'content-type: application/json' \
--header 'accept: application/json, text/event-stream' \
--data '{
  "method": "tools/call",
  "params": {
    "name": "execute_sql",
    "arguments": {
      // provide these details according to the tool's MCP specification
    }
  },
  "jsonrpc": "2.0",
  "id": 1
}'
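The same JSON-RPC call can be built from Python with only the standard library. In this sketch the project and instance names are placeholders, and actually sending the request would additionally require an Authorization header with an OAuth access token (not shown):

```python
import json
import urllib.request

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "execute_sql",
        "arguments": {
            "project": "my-project",    # placeholder project ID
            "instance": "my-instance",  # placeholder instance ID
            "database": "postgres",
            "sqlStatement": "SELECT 1;",
        },
    },
}

req = urllib.request.Request(
    "https://sqladmin.googleapis.com/mcp",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "content-type": "application/json",
        "accept": "application/json, text/event-stream",
        # An Authorization: Bearer <token> header is also required; omitted here.
    },
)
# urllib.request.urlopen(req) would send the call; not executed in this sketch.
print(req.get_method())  # POST
```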

Input Schema

Request for executing SQL statements on an instance through MCP.

SqlInstancesExecuteSqlMcpRequest

JSON representation
{
  "instance": string,
  "project": string,
  "sqlStatement": string,
  "database": string
}
Fields
instance

string

Required. Database instance ID. This does not include the project ID.

project

string

Required. Project ID of the project that contains the instance.

sqlStatement

string

Required. SQL statements to run on the database. It can be a single statement or a sequence of statements separated by semicolons.

database

string

Optional. Name of the database on which the statement will be executed. For PostgreSQL it's required; for MySQL it's optional. For PostgreSQL, if your statement isn't scoped to an existing database (for example, listing databases, creating a new database, or granting roles), you can pass the default value postgres.

Output Schema

Execute SQL statements response.

SqlInstancesExecuteSqlResponse

JSON representation
{
  "messages": [
    {
      object (Message)
    }
  ],
  "metadata": {
    object (Metadata)
  },
  "results": [
    {
      object (QueryResult)
    }
  ],
  "status": {
    object (Status)
  }
}
Fields
messages[]

object (Message)

A list of notices and warnings generated during query execution. For PostgreSQL, this includes all notices and warnings. For MySQL, this includes warnings generated by the last executed statement. To retrieve all warnings for a multi-statement query, SHOW WARNINGS must be executed after each statement.

metadata

object (Metadata)

The additional metadata information regarding the execution of the SQL statements.

results[]

object (QueryResult)

The list of results after executing all the SQL statements.

status

object (Status)

Contains the error from the database if the SQL execution failed.

Message

JSON representation
{

  // Union field _message can be only one of the following:
  "message": string
  // End of list of possible types for union field _message.

  // Union field _severity can be only one of the following:
  "severity": string
  // End of list of possible types for union field _severity.
}
Fields

Union field _message.

_message can be only one of the following:

message

string

The full message string. For PostgreSQL, this is a formatted string that may include severity, code, and the notice/warning message. For MySQL, this contains the warning message.

Union field _severity.

_severity can be only one of the following:

severity

string

The severity of the message (e.g., "NOTICE" for PostgreSQL, "WARNING" for MySQL).

Metadata

JSON representation
{
  "sqlStatementExecutionTime": string
}
Fields
sqlStatementExecutionTime

string (Duration format)

The time taken to execute the SQL statements.

A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".

Duration

JSON representation
{
  "seconds": string,
  "nanos": integer
}
Fields
seconds

string (int64 format)

Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years

nanos

integer

Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 seconds field and a positive or negative nanos field. For durations of one second or more, a non-zero value for the nanos field must be of the same sign as the seconds field. Must be from -999,999,999 to +999,999,999 inclusive.
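Converting between the JSON string form ("3.5s") and the seconds/nanos pair is straightforward. A sketch, assuming non-negative durations such as those reported in sqlStatementExecutionTime:

```python
def parse_duration(s: str) -> tuple[int, int]:
    """Parse a JSON duration string such as "3.5s" into (seconds, nanos)."""
    total = float(s.rstrip("s"))
    seconds = int(total)
    nanos = round((total - seconds) * 1_000_000_000)
    return seconds, nanos

print(parse_duration("3.5s"))   # (3, 500000000)
print(parse_duration("0.25s"))  # (0, 250000000)
```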

QueryResult

JSON representation
{
  "columns": [
    {
      object (Column)
    }
  ],
  "rows": [
    {
      object (Row)
    }
  ],
  "message": string,
  "partialResult": boolean,
  "status": {
    object (Status)
  }
}
Fields
columns[]

object (Column)

List of columns included in the result. This also includes the data type of the column.

rows[]

object (Row)

Rows returned by the SQL statement.

message

string

Message related to the SQL execution result.

partialResult

boolean

Set to true if the SQL execution's result is truncated due to size limits or an error retrieving results.

status

object (Status)

If results were truncated due to an error, details of that error.

Column

JSON representation
{
  "name": string,
  "type": string
}
Fields
name

string

Name of the column.

type

string

Datatype of the column.

Row

JSON representation
{
  "values": [
    {
      object (Value)
    }
  ]
}
Fields
values[]

object (Value)

The values for the row.

Value

JSON representation
{
  "value": string,
  "nullValue": boolean
}
Fields
value

string

The cell value in string format.

nullValue

boolean

If cell value is null, then this flag will be set to true.
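Because every cell arrives as a string plus a nullValue flag, client code typically reshapes a QueryResult into native rows. The following is a sketch against dicts shaped like the schemas above; `result_to_dicts` is a hypothetical helper:

```python
def result_to_dicts(result: dict) -> list[dict]:
    """Convert a QueryResult-shaped dict into a list of row dicts,
    mapping cells with nullValue set to Python None."""
    names = [c["name"] for c in result.get("columns", [])]
    out = []
    for row in result.get("rows", []):
        cells = [None if v.get("nullValue") else v.get("value")
                 for v in row.get("values", [])]
        out.append(dict(zip(names, cells)))
    return out

result = {
    "columns": [{"name": "id", "type": "bigint"},
                {"name": "email", "type": "text"}],
    "rows": [
        {"values": [{"value": "1"}, {"value": "a@example.com"}]},
        {"values": [{"value": "2"}, {"nullValue": True}]},
    ],
}
print(result_to_dicts(result))
# [{'id': '1', 'email': 'a@example.com'}, {'id': '2', 'email': None}]
```

Note that values stay strings; casting "1" to an integer based on the column's type field is left to the caller.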

Status

JSON representation
{
  "code": integer,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
Fields
code

integer

The status code, which should be an enum value of google.rpc.Code.

message

string

A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.

details[]

object

A list of messages that carry the error details. There is a common set of message types for APIs to use.

An object containing fields of an arbitrary type. An additional field "@type" contains a URI identifying the type. Example: { "id": 1234, "@type": "types.example.com/standard/id" }.
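The code values come from the canonical google.rpc.Code enum, so a small lookup makes error handling more readable. A sketch covering the codes most likely to surface from execute_sql; `describe_status` is a hypothetical helper:

```python
# Canonical google.rpc.Code values (subset relevant to execute_sql).
RPC_CODES = {
    0: "OK",
    3: "INVALID_ARGUMENT",
    4: "DEADLINE_EXCEEDED",
    5: "NOT_FOUND",
    7: "PERMISSION_DENIED",
    16: "UNAUTHENTICATED",
}

def describe_status(status: dict) -> str:
    """Render a Status-shaped dict as a human-readable string."""
    code = status.get("code", 0)
    name = RPC_CODES.get(code, f"CODE_{code}")
    return f"{name}: {status.get('message', '')}" if code else "OK"

print(describe_status({"code": 4, "message": "query exceeded 30s"}))
# DEADLINE_EXCEEDED: query exceeded 30s
```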

Any

JSON representation
{
  "typeUrl": string,
  "value": string
}
Fields
typeUrl

string

A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one "/" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration). The name should be in a canonical form (e.g., leading "." is not accepted).

In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http, https, or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows:

  • If no scheme is provided, https is assumed.
  • An HTTP GET on the URL must yield a google.protobuf.Type value in binary format, or produce an error.
  • Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.)

Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one.

Schemes other than http, https (or the empty scheme) might be used with implementation specific semantics.

value

string (bytes format)

Must be a valid serialized protocol buffer of the above specified type.

A base64-encoded string.

Tool Annotations

Destructive Hint: ✅ | Idempotent Hint: ❌ | Read Only Hint: ❌ | Open World Hint: ❌