Tool: update_connect_cluster
Update an existing Google Cloud Managed Service for Apache Kafka Connect cluster. To update a Connect cluster, provide the Project ID, Location, and Connect Cluster ID.
The following parameters can be updated:

- vCPU Count: The number of vCPUs to provision for the Connect cluster workers. This is part of `capacity_config.vcpu_count`. Note that the vCPU count must be a string.
- Memory Bytes: The memory to provision for the Connect cluster workers, in bytes. This is part of `capacity_config.memory_bytes`.
- DNS Domain Names: Additional DNS domain names from the subnet's network to be made visible to the Connect cluster. This is part of `gcp_config.access_config.network_configs.dns_domain_names`.
- Secret Paths: A list of Secret Manager SecretVersion resource names to load into workers. This is part of `gcp_config.secret_paths`.
- Config: Key-value pairs for Kafka Connect worker configuration overrides. This is the `config` field.
- Labels: Key-value pairs to help you organize your Connect clusters. This is the `labels` field.
This tool returns a long-running operation (LRO) that you can poll using the get_operation tool to track the Connect cluster update status. Connect cluster updates can take 20 minutes or longer.
Important Notes:
- The `UpdateConnectClusterRequest` requires the following parameters:
  - `update_mask`: A field mask used to specify the fields to be overwritten. For example, to update the vCPU count and labels, the mask would be `"capacity_config.vcpu_count,labels"`. A value of `*` will overwrite all fields.
  - `connect_cluster`: The Connect cluster configuration. This includes the required `capacity_config` and `gcp_config` parameters. If this information is not provided as part of the update, use the `get_connect_cluster` tool to retrieve it.
  - `connect_cluster.name`: The name of the Connect cluster to be updated, in the format `projects/{project}/locations/{location}/connectClusters/{connect_cluster_id}`.
- The `kafka_cluster` and `gcp_config.access_config.network_configs.primary_subnet` fields are immutable and cannot be updated after creation.
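Putting the notes above together, the following Python sketch assembles the tool arguments for an update of the vCPU count and labels. The project, location, cluster ID, and capacity values are hypothetical, and the exact argument names should be checked against the tool's MCP specification:

```python
# Hypothetical IDs and values -- substitute your own.
project, location, cluster_id = "my-project", "us-central1", "my-connect-cluster"

arguments = {
    # Only the fields named in update_mask are overwritten.
    "update_mask": "capacity_config.vcpu_count,labels",
    "connect_cluster": {
        "name": f"projects/{project}/locations/{location}/connectClusters/{cluster_id}",
        # capacity_config is required even when only part of it changes.
        "capacity_config": {
            "vcpu_count": "12",             # must be a string
            "memory_bytes": "51539607552",  # 48 GiB -> 1:4 vCPU:GiB ratio
        },
        "labels": {"env": "staging"},
    },
}
```

Fields absent from `update_mask` (here, `memory_bytes`) keep their current values even though they appear in the payload.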
The following sample demonstrates how to use curl to invoke the update_connect_cluster MCP tool.
Curl request:

```shell
curl --location 'https://managedkafka.googleapis.com/mcp' \
  --header 'content-type: application/json' \
  --header 'accept: application/json, text/event-stream' \
  --data '{
    "method": "tools/call",
    "params": {
      "name": "update_connect_cluster",
      "arguments": {
        // provide these details per the MCP tool specification
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```
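The same request body can be built programmatically. This is a sketch (nothing is sent here, and the `make_mcp_call` helper is not part of any SDK):

```python
import json

def make_mcp_call(tool_name: str, arguments: dict, call_id: int = 1) -> str:
    """Serialize a JSON-RPC 2.0 tools/call request body for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": call_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

body = make_mcp_call("update_connect_cluster", {"update_mask": "labels"})
# POST `body` to https://managedkafka.googleapis.com/mcp with the headers shown above.
```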
Input Schema
Request for UpdateConnectCluster.
UpdateConnectClusterRequest
JSON representation:

```
{
  "updateMask": string,
  "connectCluster": {
    object (ConnectCluster)
  },
  "requestId": string
}
```
| Fields | |
|---|---|
| `updateMask` | Required. Field mask used to specify the fields to be overwritten in the cluster resource by the update. The fields specified in the `update_mask` are relative to the resource, not the full request. A field is overwritten if it is in the mask. The mask is required, and a value of `*` updates all fields. This is a comma-separated list of fully qualified field names. Example: `"capacity_config.vcpu_count,labels"`. |
| `connectCluster` | Required. The Kafka Connect cluster to update. Its `name` field must be populated. |
| `requestId` | Optional. A request ID to identify requests. Specify a unique request ID to avoid duplication of requests. If a request times out or fails, retrying with the same ID allows the server to recognize the previous attempt. For at least 60 minutes, the server ignores duplicate requests bearing the same ID. For example, if an initial request times out and you retry with the same request ID within 60 minutes, the server checks whether an operation with that ID was already received and, if so, ignores the second request. The request ID must be a valid UUID. The zero UUID (`00000000-0000-0000-0000-000000000000`) is not supported. |
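For illustration, a request ID satisfying these constraints can be generated with Python's standard `uuid` module (a sketch; any valid non-zero UUID works):

```python
import uuid

ZERO_UUID = "00000000-0000-0000-0000-000000000000"

def new_request_id() -> str:
    """Return a random version-4 UUID string to use as request_id."""
    rid = str(uuid.uuid4())
    assert rid != ZERO_UUID  # the zero UUID is not supported
    return rid

request_id = new_request_id()
```

Reusing the same `request_id` on a retry lets the server deduplicate the request; generating a fresh one makes the call a distinct request.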
FieldMask
JSON representation:

```
{ "paths": [ string ] }
```

| Fields | |
|---|---|
| `paths[]` | The set of field mask paths. |
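Since the update mask is supplied to the tool as a comma-separated string but FieldMask is defined as a list of paths, a small helper pair can convert between the two forms (a sketch, not part of any SDK):

```python
def mask_to_paths(update_mask: str) -> list[str]:
    """Split a comma-separated update mask into FieldMask paths."""
    return [p.strip() for p in update_mask.split(",") if p.strip()]

def paths_to_mask(paths: list[str]) -> str:
    """Join FieldMask paths into the comma-separated string form."""
    return ",".join(paths)
```

For example, `mask_to_paths("capacity_config.vcpu_count,labels")` yields `["capacity_config.vcpu_count", "labels"]`.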
ConnectCluster
JSON representation:

```
{
  "name": string,
  "kafkaCluster": string,
  "createTime": string,
  "updateTime": string,
  "labels": { string: string, ... },
  "capacityConfig": { object (CapacityConfig) },
  "state": enum (State),
  "config": { string: string, ... },
  "satisfiesPzi": boolean,
  "satisfiesPzs": boolean,

  // Union field platform_config can be only one of the following:
  "gcpConfig": { object (ConnectGcpConfig) }
  // End of list of possible types for union field platform_config.
}
```
| Fields | |
|---|---|
| `name` | Identifier. The name of the Kafka Connect cluster. Structured like: `projects/{project_number}/locations/{location}/connectClusters/{connect_cluster_id}` |
| `kafkaCluster` | Required. Immutable. The name of the Kafka cluster this Kafka Connect cluster is attached to. Structured like: `projects/{project}/locations/{location}/clusters/{cluster}` |
| `createTime` | Output only. The time when the cluster was created. Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: `"2014-10-02T15:01:23Z"`, `"2014-10-02T15:01:23.045123456Z"` or `"2014-10-02T15:01:23+05:30"`. |
| `updateTime` | Output only. The time when the cluster was last updated. Uses the same RFC 3339 format as `createTime`. |
| `labels` | Optional. Labels as key-value pairs. An object containing a list of `"key": value` pairs. Example: `{ "name": "wrench", "mass": "1.3kg", "count": "3" }`. |
| `capacityConfig` | Required. Capacity configuration for the Kafka Connect cluster. |
| `state` | Output only. The current state of the Kafka Connect cluster. |
| `config` | Optional. Configurations for the worker that are overridden from the defaults. The key of the map is a Kafka Connect worker property name. An object containing a list of `"key": value` pairs. |
| Union field `platform_config`. Platform specific configuration properties for a Kafka Connect cluster. `platform_config` can be only one of the following: | |
| `gcpConfig` | Required. Configuration properties for a Kafka Connect cluster deployed to Google Cloud Platform. |
| `satisfiesPzi` | Output only. Reserved for future use. |
| `satisfiesPzs` | Output only. Reserved for future use. |
ConnectGcpConfig
JSON representation:

```
{
  "accessConfig": { object (ConnectAccessConfig) },
  "secretPaths": [ string ]
}
```
| Fields | |
|---|---|
| `accessConfig` | Required. Access configuration for the Kafka Connect cluster. |
| `secretPaths[]` | Optional. Secrets to load into workers. Exact SecretVersions from Secret Manager must be provided; aliases are not supported. Up to 32 secrets may be loaded into one cluster. Format: `projects/{project}/secrets/{secret}/versions/{version}` |
ConnectAccessConfig
JSON representation:

```
{
  "networkConfigs": [
    { object (ConnectNetworkConfig) }
  ]
}
```
| Fields | |
|---|---|
| `networkConfigs[]` | Required. Virtual Private Cloud (VPC) networks that must be granted direct access to the Kafka Connect cluster. A minimum of 1 network is required. A maximum of 10 networks can be specified. |
ConnectNetworkConfig
JSON representation:

```
{ "primarySubnet": string, "additionalSubnets": [ string ], "dnsDomainNames": [ string ] }
```
| Fields | |
|---|---|
| `primarySubnet` | Required. VPC subnet to make available to the Kafka Connect cluster. Structured like: `projects/{project}/regions/{region}/subnetworks/{subnet_id}`. It is used to create a Private Service Connect (PSC) interface for the Kafka Connect workers. It must be located in the same region as the Kafka Connect cluster. The CIDR range of the subnet must be within the IPv4 address ranges for private networks, as specified in RFC 1918. The primary subnet CIDR range must have a minimum size of /22 (1024 addresses). |
| `additionalSubnets[]` | Optional. Deprecated: Managed Kafka Connect clusters can now reach any endpoint accessible from the primary subnet without the need to define additional subnets. See https://cloud.google.com/managed-service-for-apache-kafka/docs/connect-cluster/create-connect-cluster#worker-subnet for more information. |
| `dnsDomainNames[]` | Optional. Additional DNS domain names from the subnet's network to be made visible to the Connect cluster. When using MirrorMaker 2, it is necessary to add the DNS domain name of the target cluster's bootstrap address to make it visible to the connector. For example: `my-kafka-cluster.us-central1.managedkafka.my-project.cloud.goog` |
Timestamp
JSON representation:

```
{ "seconds": string, "nanos": integer }
```
| Fields | |
|---|---|
| `seconds` | Represents seconds of UTC time since the Unix epoch 1970-01-01T00:00:00Z. Must be between -62135596800 and 253402300799 inclusive (corresponding to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z). |
| `nanos` | Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the timestamp, not an alternative to `seconds`. Negative second values with fractions must still have non-negative nanos values that count forward in time. Must be between 0 and 999,999,999 inclusive. |
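As a sketch of how the string fields above relate, this Python helper converts a Z-normalized RFC 3339 string (the format used by `createTime`/`updateTime`) into the `{seconds, nanos}` pair; it handles only the `"Z"` suffix with optional fractional digits, not a full RFC 3339 parser:

```python
from datetime import datetime, timezone

def to_timestamp(rfc3339: str) -> dict:
    """Convert a Z-normalized RFC 3339 string into a Timestamp dict."""
    body = rfc3339.rstrip("Z")
    if "." in body:
        base, frac = body.split(".")
        nanos = int(frac.ljust(9, "0"))  # pad fractional digits to nanoseconds
    else:
        base, nanos = body, 0
    dt = datetime.strptime(base, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
    # seconds is a string in the JSON representation (64-bit int).
    return {"seconds": str(int(dt.timestamp())), "nanos": nanos}
```

For example, `to_timestamp("2014-10-02T15:01:23.045123456Z")` has `nanos` 45123456.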
LabelsEntry
JSON representation:

```
{ "key": string, "value": string }
```

| Fields | |
|---|---|
| `key` | |
| `value` | |
CapacityConfig
JSON representation:

```
{ "vcpuCount": string, "memoryBytes": string }
```

| Fields | |
|---|---|
| `vcpuCount` | Required. The number of vCPUs to provision for the cluster. Minimum: 3. |
| `memoryBytes` | Required. The memory to provision for the cluster, in bytes. The CPU-to-memory ratio (vCPU:GiB) must be between 1:1 and 1:8. Minimum: 3221225472 (3 GiB). |
ConfigEntry
JSON representation:

```
{ "key": string, "value": string }
```

| Fields | |
|---|---|
| `key` | |
| `value` | |
Output Schema
This resource represents a long-running operation that is the result of a network API call.
Operation
JSON representation:

```
{
  "name": string,
  "metadata": {
    "@type": string,
    field1: ...,
    ...
  },
  "done": boolean,

  // Union field result can be only one of the following:
  "error": { object (Status) },
  "response": {
    "@type": string,
    field1: ...,
    ...
  }
  // End of list of possible types for union field result.
}
```
| Fields | |
|---|---|
| `name` | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`. |
| `metadata` | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type. Example: `{ "id": 1234, "@type": "types.example.com/standard/id" }`. |
| `done` | If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available. |
| Union field `result`. The operation result, which can be either an error or a valid response. If `done` == `false`, neither `error` nor `response` is set. If `done` == `true`, exactly one of `error` or `response` can be set. Some services might not provide the result. `result` can be only one of the following: | |
| `error` | The error result of the operation in case of failure or cancellation. |
| `response` | The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type. |
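The union semantics above can be handled with a small sketch that interprets an Operation returned by `update_connect_cluster` or `get_operation` (operating on the JSON dict; not part of any SDK):

```python
def operation_result(op: dict):
    """Return None while the operation runs, the response on success;
    raise RuntimeError if the operation finished with an error."""
    if not op.get("done"):
        return None  # still in progress; poll again with get_operation
    if "error" in op:
        err = op["error"]
        raise RuntimeError(f"operation failed: {err.get('code')} {err.get('message')}")
    return op.get("response")
```

A caller would poll until this returns a non-None value, keeping in mind that Connect cluster updates can take 20 minutes or longer.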
Any
JSON representation:

```
{ "typeUrl": string, "value": string }
```
| Fields | |
|---|---|
| `typeUrl` | Identifies the type of the serialized Protobuf message with a URI reference consisting of a prefix ending in a slash and the fully-qualified type name. Example: `type.googleapis.com/google.protobuf.StringValue`. This string must contain at least one `/` character. The prefix is arbitrary, and Protobuf implementations are expected to simply strip off everything up to and including the last `/` to recover the type name. All type URL strings must be legal URI references, with the additional restriction (for the text format) that the content of the reference must consist only of alphanumeric characters, percent-encoded escapes, and the characters `-._~/`. |
| `value` | Holds a Protobuf serialization of the type described by `type_url`. A base64-encoded string. |
Status
JSON representation:

```
{ "code": integer, "message": string, "details": [ { "@type": string, field1: ..., ... } ] }
```
| Fields | |
|---|---|
| `code` | The status code, which should be an enum value of `google.rpc.Code`. |
| `message` | A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the `google.rpc.Status.details` field, or localized by the client. |
| `details[]` | A list of messages that carry the error details. There is a common set of message types for APIs to use. An object containing fields of an arbitrary type. An additional field `"@type"` contains a URI identifying the type. |
Tool Annotations
Destructive Hint: ✅ | Idempotent Hint: ✅ | Read Only Hint: ❌ | Open World Hint: ❌