Tool: clone_instance
Create a Cloud SQL instance as a clone of a source instance.
- This tool returns a long-running operation. Use the `get_operation` tool to poll its status until the operation completes.
- The clone operation can take several minutes. Use a command-line tool to pause for 30 seconds before rechecking the status.
The following sample demonstrates how to use curl to invoke the `clone_instance` MCP tool.
```
curl --location 'https://sqladmin.googleapis.com/mcp' \
  --header 'content-type: application/json' \
  --header 'accept: application/json, text/event-stream' \
  --data '{
    "method": "tools/call",
    "params": {
      "name": "clone_instance",
      "arguments": {
        // provide these details according to the tool's MCP specification
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```
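The poll-and-wait pattern described above can be sketched as follows. This is a minimal sketch, not part of the tool specification: `fetch_operation` is a hypothetical stand-in for whatever client code you use to call the `get_operation` tool, and the 30-second pause follows the guidance above.

```python
import time

def wait_for_operation(fetch_operation, poll_interval_s=30, max_polls=40, sleep=time.sleep):
    """Poll a Cloud SQL long-running operation until its status is DONE.

    fetch_operation is a caller-supplied callable (hypothetical here) that
    invokes the get_operation MCP tool and returns the Operation resource
    as a dict.
    """
    for _ in range(max_polls):
        op = fetch_operation()
        if op.get("status") == "DONE":
            # Surface any errors recorded on the finished operation.
            if "error" in op:
                raise RuntimeError(f"operation failed: {op['error']}")
            return op
        sleep(poll_interval_s)  # pause ~30 s between status checks
    raise TimeoutError("operation did not complete within the polling budget")
```

The injectable `sleep` parameter is only there to make the loop testable; in real use the defaults apply.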
Input Schema
Instance clone request.
SqlInstancesCloneRequest
| JSON representation |
|---|
```
{
  "instance": string,
  "project": string,
  "body": {
    object (InstancesCloneRequest)
  }
}
```
| Fields | |
|---|---|
| `instance` | Required. The ID of the Cloud SQL instance to be cloned (source). This does not include the project ID. |
| `project` | Required. Project ID of the source as well as the clone Cloud SQL instance. |
| `body` | Required. The request body: an object (InstancesCloneRequest). |
InstancesCloneRequest
| JSON representation |
|---|
```
{
  "cloneContext": {
    object (CloneContext)
  }
}
```
| Fields | |
|---|---|
| `cloneContext` | Required. Contains details about the clone operation. |
CloneContext
| JSON representation |
|---|
```
{
  "kind": string,
  "pitrTimestampMs": string,
  "destinationInstanceName": string,
  "binLogCoordinates": {
    object (BinLogCoordinates)
  },
  "pointInTime": string,
  "allocatedIpRange": string,
  "databaseNames": [
    string
  ],
  "preferredZone": string,
  "preferredSecondaryZone": string,
  "sourceInstanceDeletionTime": string,
  "destinationProject": string,
  "destinationNetwork": string
}
```
| Fields | |
|---|---|
| `kind` | This is always `sql#cloneContext`. |
| `pitrTimestampMs` | Reserved for future use. |
| `destinationInstanceName` | Required. Name of the Cloud SQL instance to be created as a clone. |
| `binLogCoordinates` | Binary log coordinates, if specified, identify the position up to which the source instance is cloned. If not specified, the source instance is cloned up to the most recent binary log coordinates. |
| `pointInTime` | Timestamp, if specified, identifies the time to which the source instance is cloned. Uses RFC 3339; generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits, and offsets other than "Z" are also accepted. |
| `allocatedIpRange` | The name of the allocated IP range for the private IP Cloud SQL instance. For example: "google-managed-services-default". If set, the cloned instance IP is created in the allocated range. The range name must comply with RFC 1035: 1-63 characters long and matching the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. Reserved for future use. |
| `databaseNames[]` | (SQL Server only) Clone only the specified databases from the source instance. Clone all databases if empty. |
| `preferredZone` | Optional. Copy clone and point-in-time recovery clone of an instance to the specified zone. If no zone is specified, clone to the same primary zone as the source instance. This field applies to all DB types. |
| `preferredSecondaryZone` | Optional. Copy clone and point-in-time recovery clone of a regional instance to the specified zones. If not specified, clone to the same secondary zone as the source instance. This value cannot be the same as the `preferredZone` field. This field applies to all DB types. |
| `sourceInstanceDeletionTime` | The timestamp that identifies the time when the source instance was deleted. If the source instance is deleted, you must set this timestamp. Uses RFC 3339; generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits, and offsets other than "Z" are also accepted. |
| `destinationProject` | Optional. The project ID of the destination project where the cloned instance will be created. Required for a cross-project clone. If not specified, the clone is created in the same project as the source instance. |
| `destinationNetwork` | Optional. The fully qualified URI of the VPC network to which the cloned instance will be connected via Private Services Access for private IP. |
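Putting the input schema together, a minimal `clone_instance` arguments payload might look like the following. All instance, project, and timestamp values here are illustrative placeholders, not values from the specification:

```json
{
  "instance": "source-instance",
  "project": "my-project",
  "body": {
    "cloneContext": {
      "destinationInstanceName": "cloned-instance",
      "pointInTime": "2024-05-01T12:00:00Z"
    }
  }
}
```

Omitting both `binLogCoordinates` and `pointInTime` clones the source up to its most recent state.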
BinLogCoordinates
| JSON representation |
|---|
```
{
  "binLogFileName": string,
  "binLogPosition": string,
  "kind": string
}
```
| Fields | |
|---|---|
| `binLogFileName` | Name of the binary log file for a Cloud SQL instance. |
| `binLogPosition` | Position (offset) within the binary log file. |
| `kind` | This is always `sql#binLogCoordinates`. |
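For example, a coordinate set that clones a MySQL source up to a specific position in a binary log file might look like this (file name and position are illustrative only):

```json
{
  "binLogFileName": "mysql-bin.000003",
  "binLogPosition": "1254",
  "kind": "sql#binLogCoordinates"
}
```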
Timestamp
| JSON representation |
|---|
```
{
  "seconds": string,
  "nanos": integer
}
```
| Fields | |
|---|---|
| `seconds` | Represents seconds of UTC time since the Unix epoch 1970-01-01T00:00:00Z. Must be from -62135596800 to 253402300799 inclusive (corresponding to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z). |
| `nanos` | Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the timestamp, not an alternative to `seconds`. Negative second values with fractions must still have non-negative `nanos` values that count forward in time. Must be from 0 to 999,999,999 inclusive. |
Output Schema
An Operation resource. For successful operations that return an Operation resource, only the fields relevant to the operation are populated in the resource.
Operation
| JSON representation |
|---|
```
{
  "kind": string,
  "targetLink": string,
  "status": enum (SqlOperationStatus),
  "user": string,
  "insertTime": string,
  "startTime": string,
  "endTime": string,
  "error": {
    object (OperationErrors)
  },
  "apiWarning": {
    object (ApiWarning)
  },
  "operationType": enum (SqlOperationType),
  "importContext": {
    object (ImportContext)
  },
  "exportContext": {
    object (ExportContext)
  },
  "backupContext": {
    object (BackupContext)
  },
  "preCheckMajorVersionUpgradeContext": {
    object (PreCheckMajorVersionUpgradeContext)
  },
  "name": string,
  "targetId": string,
  "selfLink": string,
  "targetProject": string,
  "acquireSsrsLeaseContext": {
    object (AcquireSsrsLeaseContext)
  },
  "subOperationType": {
    object (SqlSubOperationType)
  }
}
```
| Fields | |
|---|---|
| `kind` | This is always `sql#operation`. |
| `targetLink` | |
| `status` | The status of an operation. |
| `user` | The email address of the user who initiated this operation. |
| `insertTime` | The time this operation was enqueued, in UTC timezone and RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
| `startTime` | The time this operation actually started, in UTC timezone and RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
| `endTime` | The time this operation finished, in UTC timezone and RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
| `error` | If errors occurred during processing of this operation, this field is populated. |
| `apiWarning` | An Admin API warning message. |
| `operationType` | The type of the operation. |
| `importContext` | The context for an import operation, if applicable. |
| `exportContext` | The context for an export operation, if applicable. |
| `backupContext` | The context for a backup operation, if applicable. |
| `preCheckMajorVersionUpgradeContext` | Populated only when the `operationType` is PRE_CHECK_MAJOR_VERSION_UPGRADE. Contains the details for that pre-check, such as the target database version for the upgrade and the results of the check (including any warnings or errors found). |
| `name` | An identifier that uniquely identifies the operation. You can use this identifier to retrieve the Operations resource that has information about the operation. |
| `targetId` | Name of the resource on which this operation runs. |
| `selfLink` | The URI of this resource. |
| `targetProject` | The project ID of the target instance related to this operation. |
| `acquireSsrsLeaseContext` | The context for an acquire SSRS lease operation, if applicable. |
| `subOperationType` | Optional. The sub operation based on the operation type. |
Timestamp
| JSON representation |
|---|
```
{
  "seconds": string,
  "nanos": integer
}
```
| Fields | |
|---|---|
| `seconds` | Represents seconds of UTC time since the Unix epoch 1970-01-01T00:00:00Z. Must be from -62135596800 to 253402300799 inclusive (corresponding to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z). |
| `nanos` | Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the timestamp, not an alternative to `seconds`. Negative second values with fractions must still have non-negative `nanos` values that count forward in time. Must be from 0 to 999,999,999 inclusive. |
OperationErrors
| JSON representation |
|---|
```
{
  "kind": string,
  "errors": [
    {
      object (OperationError)
    }
  ]
}
```
| Fields | |
|---|---|
| `kind` | This is always `sql#operationErrors`. |
| `errors[]` | The list of errors encountered while processing this operation. |
OperationError
| JSON representation |
|---|
```
{
  "kind": string,
  "code": string,
  "message": string
}
```
| Fields | |
|---|---|
| `kind` | This is always `sql#operationError`. |
| `code` | Identifies the specific error that occurred. |
| `message` | Additional information about the error encountered. |
ApiWarning
| JSON representation |
|---|
```
{
  "code": enum (SqlApiWarningCode),
  "message": string,
  "region": string
}
```
| Fields | |
|---|---|
| `code` | Code to uniquely identify the warning type. |
| `message` | The warning message. |
| `region` | The region name for a REGION_UNREACHABLE warning. |
ImportContext
| JSON representation |
|---|
```
{
  "uri": string,
  "database": string,
  "kind": string,
  "fileType": enum (SqlFileType),
  "csvImportOptions": {
    object (SqlCsvImportOptions)
  },
  "importUser": string,
  "bakImportOptions": {
    object (SqlBakImportOptions)
  },
  "sqlImportOptions": {
    object (SqlImportOptions)
  },
  "tdeImportOptions": {
    object (SqlTdeImportOptions)
  }
}
```
| Fields | |
|---|---|
| `uri` | Path to the import file in Cloud Storage, in the form `gs://bucketName/fileName`. |
| `database` | The target database for the import. |
| `kind` | This is always `sql#importContext`. |
| `fileType` | The file type for the specified uri. |
| `csvImportOptions` | Options for importing data as CSV. |
| `importUser` | The PostgreSQL user for this import operation. PostgreSQL instances only. |
| `bakImportOptions` | Import parameters specific to SQL Server .BAK files. |
| `sqlImportOptions` | Optional. Options for importing data from SQL statements. |
| `tdeImportOptions` | Optional. Import parameters specific to SQL Server TDE certificates. |
SqlCsvImportOptions
| JSON representation |
|---|
```
{
  "table": string,
  "columns": [
    string
  ],
  "escapeCharacter": string,
  "quoteCharacter": string,
  "fieldsTerminatedBy": string,
  "linesTerminatedBy": string
}
```
| Fields | |
|---|---|
| `table` | The table to which CSV data is imported. |
| `columns[]` | The columns to which CSV data is imported. If not specified, all columns of the database table are loaded with CSV data. |
| `escapeCharacter` | Specifies the character that should appear before a data character that needs to be escaped. |
| `quoteCharacter` | Specifies the quoting character to be used when a data value is quoted. |
| `fieldsTerminatedBy` | Specifies the character that separates columns within each row (line) of the file. |
| `linesTerminatedBy` | This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values. |
SqlBakImportOptions
| JSON representation |
|---|
```
{
  "encryptionOptions": {
    object (EncryptionOptions)
  },
  "striped": boolean,
  "noRecovery": boolean,
  "recoveryOnly": boolean,
  "bakType": enum (BakType),
  "stopAt": string,
  "stopAtMark": string
}
```
| Fields | |
|---|---|
| `encryptionOptions` | |
| `striped` | Whether or not the backup set being restored is striped. Applies only to Cloud SQL for SQL Server. |
| `noRecovery` | Whether or not the import restores the database with the NORECOVERY option. Applies only to Cloud SQL for SQL Server. |
| `recoveryOnly` | Whether or not the import request only brings the database online without downloading BAK content. Only one of `noRecovery` and `recoveryOnly` can be true; otherwise an error is returned. Applies only to Cloud SQL for SQL Server. |
| `bakType` | Type of the BAK content: FULL or DIFF. |
| `stopAt` | Optional. The timestamp when the import should stop, in RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
| `stopAtMark` | Optional. The marked transaction where the import should stop. This field is equivalent to the STOPATMARK keyword and applies to Cloud SQL for SQL Server only. |
EncryptionOptions
| JSON representation |
|---|
```
{
  "certPath": string,
  "pvkPath": string,
  "pvkPassword": string,
  "keepEncrypted": boolean
}
```
| Fields | |
|---|---|
| `certPath` | Path to the certificate (.cer) in Cloud Storage, in the form `gs://bucketName/fileName`. |
| `pvkPath` | Path to the certificate private key (.pvk) in Cloud Storage, in the form `gs://bucketName/fileName`. |
| `pvkPassword` | Password that encrypts the private key. |
| `keepEncrypted` | Optional. Whether the imported file remains encrypted. |
BoolValue
| JSON representation |
|---|
```
{
  "value": boolean
}
```
| Fields | |
|---|---|
| `value` | The bool value. |
SqlImportOptions
| JSON representation |
|---|
```
{
  "threads": integer,
  "parallel": boolean,
  "postgresImportOptions": {
    object (PostgresImportOptions)
  }
}
```
| Fields | |
|---|---|
| `threads` | Optional. The number of threads to use for parallel import. |
| `parallel` | Optional. Whether or not the import should be parallel. |
| `postgresImportOptions` | Optional. Options for importing from a Cloud SQL for PostgreSQL instance. |
Int32Value
| JSON representation |
|---|
```
{
  "value": integer
}
```
| Fields | |
|---|---|
| `value` | The int32 value. |
PostgresImportOptions
| JSON representation |
|---|
```
{
  "clean": boolean,
  "ifExists": boolean
}
```
| Fields | |
|---|---|
| `clean` | Optional. The `--clean` flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel. |
| `ifExists` | Optional. The `--if-exists` flag for the pg_restore utility. This flag applies only if you enabled Cloud SQL to import files in parallel. |
SqlTdeImportOptions
| JSON representation |
|---|
```
{
  "certificatePath": string,
  "privateKeyPath": string,
  "privateKeyPassword": string,
  "name": string
}
```
| Fields | |
|---|---|
| `certificatePath` | Required. Path to the TDE certificate public key, in the form `gs://bucketName/fileName`. The instance must have read access to the file. Applicable only for SQL Server instances. |
| `privateKeyPath` | Required. Path to the TDE certificate private key, in the form `gs://bucketName/fileName`. The instance must have read access to the file. Applicable only for SQL Server instances. |
| `privateKeyPassword` | Required. Password that encrypts the private key. |
| `name` | Required. Certificate name. Applicable only for SQL Server instances. |
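As a sketch, a `SqlTdeImportOptions` value could be populated like this; the bucket, file, certificate name, and password are all hypothetical placeholders:

```json
{
  "certificatePath": "gs://example-bucket/tde-cert.cer",
  "privateKeyPath": "gs://example-bucket/tde-key.pvk",
  "privateKeyPassword": "REPLACE_WITH_PASSWORD",
  "name": "example-tde-cert"
}
```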
ExportContext
| JSON representation |
|---|
```
{
  "uri": string,
  "databases": [
    string
  ],
  "kind": string,
  "sqlExportOptions": {
    object (SqlExportOptions)
  },
  "csvExportOptions": {
    object (SqlCsvExportOptions)
  },
  "fileType": enum (SqlFileType),
  "offload": boolean,
  "bakExportOptions": {
    object (SqlBakExportOptions)
  },
  "tdeExportOptions": {
    object (SqlTdeExportOptions)
  }
}
```
| Fields | |
|---|---|
| `uri` | The path to the file in Cloud Storage where the export is stored, in the form `gs://bucketName/fileName`. |
| `databases[]` | Databases to be exported. |
| `kind` | This is always `sql#exportContext`. |
| `sqlExportOptions` | Options for exporting data as SQL statements. |
| `csvExportOptions` | Options for exporting data as CSV. |
| `fileType` | The file type for the specified uri. |
| `offload` | Whether to perform a serverless export. |
| `bakExportOptions` | Options for exporting data as BAK files. |
| `tdeExportOptions` | Optional. Export parameters specific to SQL Server TDE certificates. |
SqlExportOptions
| JSON representation |
|---|
```
{
  "tables": [
    string
  ],
  "schemaOnly": boolean,
  "mysqlExportOptions": {
    object (MysqlExportOptions)
  },
  "threads": integer,
  "parallel": boolean,
  "postgresExportOptions": {
    object (PostgresExportOptions)
  }
}
```
| Fields | |
|---|---|
| `tables[]` | Tables to export, or that were exported, from the specified database. If you specify tables, specify one and only one database. For PostgreSQL instances, you can specify only one table. |
| `schemaOnly` | Export only schemas. |
| `mysqlExportOptions` | |
| `threads` | Optional. The number of threads to use for parallel export. |
| `parallel` | Optional. Whether or not the export should be parallel. |
| `postgresExportOptions` | Optional. Options for exporting from a Cloud SQL for PostgreSQL instance. |
MysqlExportOptions
| JSON representation |
|---|
```
{
  "masterData": integer
}
```
| Fields | |
|---|---|
| `masterData` | Option to include the SQL statement required to set up replication. |
PostgresExportOptions
| JSON representation |
|---|
```
{
  "clean": boolean,
  "ifExists": boolean
}
```
| Fields | |
|---|---|
| `clean` | Optional. Use this option to include DROP SQL statements. These statements delete database objects before running the import operation. |
| `ifExists` | Optional. Option to include an IF EXISTS SQL statement with each DROP statement produced by `clean`. |
SqlCsvExportOptions
| JSON representation |
|---|
```
{
  "selectQuery": string,
  "escapeCharacter": string,
  "quoteCharacter": string,
  "fieldsTerminatedBy": string,
  "linesTerminatedBy": string
}
```
| Fields | |
|---|---|
| `selectQuery` | The select query used to extract the data. |
| `escapeCharacter` | Specifies the character that should appear before a data character that needs to be escaped. |
| `quoteCharacter` | Specifies the quoting character to be used when a data value is quoted. |
| `fieldsTerminatedBy` | Specifies the character that separates columns within each row (line) of the file. |
| `linesTerminatedBy` | This is used to separate lines. If a line does not contain all fields, the rest of the columns are set to their default values. |
SqlBakExportOptions
| JSON representation |
|---|
```
{
  "striped": boolean,
  "stripeCount": integer,
  "bakType": enum (BakType),
  "copyOnly": boolean,
  "differentialBase": boolean,
  "exportLogStartTime": string,
  "exportLogEndTime": string
}
```
| Fields | |
|---|---|
| `striped` | Whether or not the export should be striped. |
| `stripeCount` | Option for specifying how many stripes to use for the export. If blank, and the value of the `striped` field is true, the number of stripes is automatically chosen. |
| `bakType` | Type of the BAK file to export: FULL or DIFF. SQL Server only. |
| `copyOnly` | Deprecated: `copy_only` is deprecated. Use `differential_base` instead. |
| `differentialBase` | Whether or not the backup can be used as a differential base. A copy-only backup cannot serve as a differential base. |
| `exportLogStartTime` | Optional. The start timestamp from which the transaction log is included in the export operation, in RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
| `exportLogEndTime` | Optional. The end timestamp up to which the transaction log is included in the export operation, in RFC 3339 format. Generated output is always Z-normalized with 0, 3, 6 or 9 fractional digits; offsets other than "Z" are also accepted. |
SqlTdeExportOptions
| JSON representation |
|---|
```
{
  "certificatePath": string,
  "privateKeyPath": string,
  "privateKeyPassword": string,
  "name": string
}
```
| Fields | |
|---|---|
| `certificatePath` | Required. Path to the TDE certificate public key, in the form `gs://bucketName/fileName`. The instance must have write access to the bucket. Applicable only for SQL Server instances. |
| `privateKeyPath` | Required. Path to the TDE certificate private key, in the form `gs://bucketName/fileName`. The instance must have write access to the location. Applicable only for SQL Server instances. |
| `privateKeyPassword` | Required. Password that encrypts the private key. |
| `name` | Required. Certificate name. Applicable only for SQL Server instances. |
BackupContext
| JSON representation |
|---|
```
{
  "backupId": string,
  "kind": string,
  "name": string
}
```
| Fields | |
|---|---|
| `backupId` | The identifier of the backup. |
| `kind` | This is always `sql#backupContext`. |
| `name` | The name of the backup. Format: projects/{project}/backups/{backup} |
PreCheckMajorVersionUpgradeContext
| JSON representation |
|---|
```
{
  "targetDatabaseVersion": enum (SqlDatabaseVersion),
  "preCheckResponse": [
    {
      object (PreCheckResponse)
    }
  ],
  "kind": string
}
```
| Fields | |
|---|---|
| `targetDatabaseVersion` | Required. The target database version to upgrade to. |
| `preCheckResponse[]` | Output only. The responses from the pre-check operation. |
| `kind` | Optional. This is always `sql#preCheckMajorVersionUpgradeContext`. |
PreCheckResponse
| JSON representation |
|---|
```
{
  "actionsRequired": [
    string
  ],
  "message": string,
  "messageType": enum (...)
}
```
| Fields | |
|---|---|
| `actionsRequired[]` | The actions that the user needs to take; repeated for multiple actions. |
| `message` | The message to be displayed to the user. |
| `messageType` | The type of message: info, warning, or error. |
AcquireSsrsLeaseContext
| JSON representation |
|---|
```
{
  "setupLogin": string,
  "serviceLogin": string,
  "reportDatabase": string,
  "duration": string
}
```
| Fields | |
|---|---|
| `setupLogin` | The username to be used as the setup login to connect to the database server for SSRS setup. |
| `serviceLogin` | The username to be used as the service login to connect to the report database for SSRS setup. |
| `reportDatabase` | The report database to be used for SSRS setup. |
| `duration` | Lease duration needed for SSRS setup. A duration in seconds with up to nine fractional digits, ending with "s". Example: "3.5s". |
Duration
| JSON representation |
|---|
```
{
  "seconds": string,
  "nanos": integer
}
```
| Fields | |
|---|---|
| `seconds` | Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years. |
| `nanos` | Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 `seconds` field and a positive or negative `nanos` field. Must be from -999,999,999 to +999,999,999 inclusive. |
SqlSubOperationType
| JSON representation |
|---|
```
{
  "maintenanceType": enum (SqlMaintenanceType)
}
```
| Fields | |
|---|---|
| *Union field `sub_operation_details`.* Sub operation details corresponding to the operation type. `sub_operation_details` can be only one of the following: | |
| `maintenanceType` | The type of maintenance to be performed on the instance. |
Tool Annotations
Destructive Hint: ❌ | Idempotent Hint: ❌ | Read Only Hint: ❌ | Open World Hint: ❌