Tool: create_cluster
Create a new cluster for Google Cloud Managed Service for Apache Kafka. To create a cluster, the following parameters must be provided:

- Project ID: The ID of the Google Cloud project (e.g., my-project).
- Location: The Google Cloud region for the cluster (e.g., us-central1).
- Cluster ID: A unique identifier for your cluster (e.g., my-kafka-cluster).
- vCPU Count: The number of vCPUs to provision for the cluster (minimum 3). Note that the vCPU count must be a string.
- Memory Bytes: The memory to provision for the cluster in bytes (minimum 3 GiB, and the CPU:memory ratio must be between 1:1 and 1:8).
- Subnet: The VPC subnet for Private Service Connect (PSC) endpoints. This must be a full resource path in the format projects/{project}/regions/{region}/subnetworks/{subnet_id}. The subnet's region must match the cluster's location, but the project can be different. Offer the user the option to select the default subnet, which has the format projects/{project}/regions/{region}/subnetworks/default, where the project and region are the same as the cluster's.
- Other parameters, such as the TLS config, can also be set. The agent should also support these parameters.
This tool returns a long-running operation (LRO) that you can poll using the get_operation tool to track the cluster creation status. Cluster creation can take 30 minutes or longer.
Important Notes:
- The CreateCluster request must include both `capacity_config` and `gcp_config` parameters.
- Do not create the cluster without getting all of the required parameters from the user.
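To make the notes above concrete, here is a minimal Python sketch of assembling the `tools/call` payload for `create_cluster`, including both required `capacityConfig` and `gcpConfig` sections. The project, region, and cluster ID values are placeholders, and the exact argument shape should be confirmed against the tool's MCP specification.

```python
import json

# Placeholder values -- substitute your own project, region, and cluster ID.
project = "my-project"
region = "us-central1"
cluster_id = "my-kafka-cluster"

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_cluster",
        "arguments": {
            "parent": f"projects/{project}/locations/{region}",
            "clusterId": cluster_id,
            "cluster": {
                # Both capacityConfig and gcpConfig are required.
                "capacityConfig": {
                    "vcpuCount": "3",             # must be a string; minimum 3
                    "memoryBytes": "3221225472",  # 3 GiB; ratio 1:1, within 1:1-1:8
                },
                "gcpConfig": {
                    "accessConfig": {
                        "networkConfigs": [
                            # Default subnet in the cluster's own project and region.
                            {"subnet": f"projects/{project}/regions/{region}/subnetworks/default"}
                        ]
                    }
                },
            },
        },
    },
}

print(json.dumps(payload, indent=2))
```

This payload is what the curl sample that follows sends as its `--data` body.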
The following sample demonstrates how to use curl to invoke the create_cluster MCP tool.

```shell
curl --location 'https://managedkafka.googleapis.com/mcp' \
  --header 'content-type: application/json' \
  --header 'accept: application/json, text/event-stream' \
  --data '{
    "method": "tools/call",
    "params": {
      "name": "create_cluster",
      "arguments": {
        // provide these details according to the tool's MCP specification
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```
Input Schema
Request for CreateCluster.
CreateClusterRequest
JSON representation:

```
{
  "parent": string,
  "clusterId": string,
  "cluster": {
    object (Cluster)
  },
  "requestId": string
}
```

| Fields | |
|---|---|
| `parent` | Required. The parent region in which to create the cluster. Structured like `projects/{project}/locations/{location}`. |
| `clusterId` | Required. The ID to use for the cluster, which will become the final component of the cluster's name. The ID must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?`. |
| `cluster` | Required. Configuration of the cluster to create. Its `name` field is ignored. |
| `requestId` | Optional. A request ID to identify requests. Specify a unique request ID to avoid duplication of requests. If a request times out or fails, retrying with the same ID allows the server to recognize the previous attempt: for at least 60 minutes, the server ignores duplicate requests bearing the same ID. For example, if an initial request times out and you retry with the same request ID within 60 minutes, the server checks whether an operation with that ID was already received and, if so, ignores the second request. The request ID must be a valid UUID. A zero UUID (00000000-0000-0000-0000-000000000000) is not supported. |
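Given these constraints, a random version-4 UUID is a safe `requestId`; a minimal Python sketch:

```python
import uuid

# requestId must be a valid UUID and must not be the zero UUID;
# uuid4 satisfies both with overwhelming probability.
request_id = str(uuid.uuid4())
assert request_id != "00000000-0000-0000-0000-000000000000"
print(request_id)
```

Reuse the same `request_id` when retrying a timed-out request so the server can deduplicate it.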
Cluster
JSON representation:

```
{
  "name": string,
  "createTime": string,
  "updateTime": string,
  "labels": {
    string: string,
    ...
  },
  "capacityConfig": {
    object (CapacityConfig)
  },
  "rebalanceConfig": {
    object (RebalanceConfig)
  },
  "state": enum (State),
  "tlsConfig": {
    object (TlsConfig)
  },
  "updateOptions": {
    object (UpdateOptions)
  },
  "kafkaVersion": string,
  "brokerDetails": [
    {
      object (BrokerDetails)
    }
  ],

  // Union field platform_config can be only one of the following:
  "gcpConfig": {
    object (GcpConfig)
  },
  // End of list of possible types for union field platform_config.

  "satisfiesPzi": boolean,
  "satisfiesPzs": boolean
}
```

| Fields | |
|---|---|
| `name` | Identifier. The name of the cluster. Structured like: `projects/{project_number}/locations/{location}/clusters/{cluster_id}` |
| `createTime` | Output only. The time when the cluster was created. Uses RFC 3339, where generated output is always Z-normalized and uses 0, 3, 6, or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: `2014-10-02T15:01:23Z`, `2014-10-02T15:01:23.045123456Z`, `2014-10-02T15:01:23+05:30`. |
| `updateTime` | Output only. The time when the cluster was last updated. Uses RFC 3339 (same format and examples as `createTime`). |
| `labels` | Optional. Labels as key-value pairs. An object containing a list of `"key": value` pairs. Example: `{ "name": "wrench", "mass": "1.3kg", "count": "3" }`. |
| `capacityConfig` | Required. Capacity configuration for the Kafka cluster. |
| `rebalanceConfig` | Optional. Rebalance configuration for the Kafka cluster. |
| `state` | Output only. The current state of the cluster. |
| `tlsConfig` | Optional. TLS configuration for the Kafka cluster. |
| `updateOptions` | Optional. Options that control how updates to the cluster are applied. |
| `kafkaVersion` | Output only. Only populated when the FULL view is requested. The Kafka version of the cluster. |
| `brokerDetails[]` | Output only. Only populated when the FULL view is requested. Details of each broker in the cluster. |
| Union field `platform_config`. Platform specific configuration properties for a Kafka cluster. `platform_config` can be only one of the following: | |
| `gcpConfig` | Required. Configuration properties for a Kafka cluster deployed to Google Cloud Platform. |
| `satisfiesPzi` | Output only. Reserved for future use. |
| `satisfiesPzs` | Output only. Reserved for future use. |
GcpConfig
JSON representation:

```
{
  "accessConfig": {
    object (AccessConfig)
  },
  "kmsKey": string
}
```

| Fields | |
|---|---|
| `accessConfig` | Required. Access configuration for the Kafka cluster. |
| `kmsKey` | Optional. Immutable. The Cloud KMS Key name to use for encryption. The key must be located in the same region as the cluster and cannot be changed. Structured like: `projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{crypto_key}`. |
AccessConfig
JSON representation:

```
{
  "networkConfigs": [
    {
      object (NetworkConfig)
    }
  ]
}
```

| Fields | |
|---|---|
| `networkConfigs[]` | Required. Virtual Private Cloud (VPC) networks that must be granted direct access to the Kafka cluster. A minimum of 1 network is required; a maximum of 10 networks can be specified. |
NetworkConfig
JSON representation:

```
{
  "subnet": string
}
```

| Fields | |
|---|---|
| `subnet` | Required. Name of the VPC subnet in which to create Private Service Connect (PSC) endpoints for the Kafka brokers and bootstrap address. Structured like: `projects/{project}/regions/{region}/subnetworks/{subnet_id}`. The subnet must be located in the same region as the Kafka cluster; the project may differ. Multiple subnets from the same parent network must not be specified. |
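A client-side sanity check for the subnet constraint above can be sketched in Python. The helper name is hypothetical; it only validates the path shape and the region match, not the subnet's existence.

```python
import re

# Full resource path: projects/{project}/regions/{region}/subnetworks/{subnet_id}
SUBNET_RE = re.compile(
    r"^projects/(?P<project>[^/]+)/regions/(?P<region>[^/]+)/subnetworks/(?P<subnet>[^/]+)$"
)

def check_subnet(subnet: str, cluster_region: str) -> bool:
    """Return True if the path is well formed and its region matches the cluster's."""
    m = SUBNET_RE.match(subnet)
    return bool(m) and m.group("region") == cluster_region

# The project may differ from the cluster's project; only the region must match.
print(check_subnet("projects/net-project/regions/us-central1/subnetworks/default", "us-central1"))  # True
print(check_subnet("projects/net-project/regions/us-east1/subnetworks/default", "us-central1"))     # False
```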
Timestamp
JSON representation:

```
{
  "seconds": string,
  "nanos": integer
}
```

| Fields | |
|---|---|
| `seconds` | Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z. Must be between -62135596800 and 253402300799 inclusive (which corresponds to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z). |
| `nanos` | Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the timestamp, not an alternative to `seconds`. Negative second values with fractions must still have non-negative `nanos` values that count forward in time. Must be between 0 and 999,999,999 inclusive. |
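The seconds/nanos split can be derived from an RFC 3339 string; a hedged Python sketch (limited to microsecond precision, since `datetime.fromisoformat` does not parse 9 fractional digits):

```python
import calendar
from datetime import datetime

def to_timestamp(rfc3339: str) -> dict:
    """Split an RFC 3339 instant into the seconds/nanos pair of a Timestamp."""
    dt = datetime.fromisoformat(rfc3339.replace("Z", "+00:00"))
    seconds = calendar.timegm(dt.utctimetuple())  # whole seconds since the Unix epoch
    nanos = dt.microsecond * 1000                 # non-negative, counts forward in time
    return {"seconds": str(seconds), "nanos": nanos}

print(to_timestamp("1970-01-02T00:00:00Z"))  # {'seconds': '86400', 'nanos': 0}
```

Note that `seconds` is emitted as a string, matching the JSON representation above.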
LabelsEntry
JSON representation:

```
{
  "key": string,
  "value": string
}
```

| Fields | |
|---|---|
| `key` | The label key. |
| `value` | The label value. |
CapacityConfig
JSON representation:

```
{
  "vcpuCount": string,
  "memoryBytes": string
}
```

| Fields | |
|---|---|
| `vcpuCount` | Required. The number of vCPUs to provision for the cluster. Minimum: 3. |
| `memoryBytes` | Required. The memory to provision for the cluster in bytes. The CPU:memory ratio (vCPU:GiB) must be between 1:1 and 1:8. Minimum: 3221225472 (3 GiB). |
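The CapacityConfig constraints can be enforced before calling the tool; a minimal Python sketch (the function name is illustrative, not part of the API):

```python
GIB = 1024 ** 3  # 1 GiB in bytes

def validate_capacity(vcpu_count: int, memory_bytes: int) -> None:
    """Raise ValueError if the documented CapacityConfig constraints are violated."""
    if vcpu_count < 3:
        raise ValueError("vcpuCount must be at least 3")
    if memory_bytes < 3 * GIB:
        raise ValueError("memoryBytes must be at least 3 GiB (3221225472 bytes)")
    gib_per_vcpu = memory_bytes / GIB / vcpu_count
    if not 1 <= gib_per_vcpu <= 8:
        raise ValueError("vCPU:GiB ratio must be between 1:1 and 1:8")

validate_capacity(3, 3 * GIB)   # ok: ratio 1:1
validate_capacity(4, 32 * GIB)  # ok: ratio 1:8
```

Remember that both values are sent as strings in the request body even though they are numeric.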
RebalanceConfig
JSON representation:

```
{
  "mode": enum (Mode)
}
```

| Fields | |
|---|---|
| `mode` | Optional. The rebalance behavior for the cluster. When not specified, defaults to `NO_REBALANCE`. |
TlsConfig
JSON representation:

```
{
  "trustConfig": {
    object (TrustConfig)
  },
  "sslPrincipalMappingRules": string
}
```

| Fields | |
|---|---|
| `trustConfig` | Optional. The configuration of the broker truststore. If specified, clients can use mTLS for authentication. |
| `sslPrincipalMappingRules` | Optional. A list of rules for mapping from SSL principal names to short names. These are applied in order by Kafka. Refer to the Apache Kafka documentation for `ssl.principal.mapping.rules` for more details and examples. This is a static Kafka broker configuration: setting or modifying this field will trigger a rolling restart of the Kafka brokers to apply the change. An empty string means no rules are applied (Kafka default). |
TrustConfig
JSON representation:

```
{
  "casConfigs": [
    {
      object (CertificateAuthorityServiceConfig)
    }
  ]
}
```

| Fields | |
|---|---|
| `casConfigs[]` | Optional. Configuration for the Google Certificate Authority Service. Maximum 10. |
CertificateAuthorityServiceConfig
JSON representation:

```
{
  "caPool": string
}
```

| Fields | |
|---|---|
| `caPool` | Required. The name of the CA pool to pull CA certificates from. Structured like: `projects/{project}/locations/{location}/caPools/{ca_pool}`. The CA pool does not need to be in the same project or location as the Kafka cluster. |
UpdateOptions
JSON representation:

```
{
  "allowBrokerDownscaleOnClusterUpscale": boolean
}
```

| Fields | |
|---|---|
| `allowBrokerDownscaleOnClusterUpscale` | Optional. If true, allows an update operation that increases the total vCPU and/or memory allocation of the cluster to significantly decrease the per-broker vCPU and/or memory allocation. This can result in reduced performance and availability. By default, the update operation will fail if an upscale request results in a vCPU or memory allocation for the brokers that is smaller than 90% of the current broker size. |
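The 90% threshold above can be checked client-side before sending an update; a hedged Python sketch (the function name and per-broker inputs are illustrative; the service computes the actual per-broker allocation):

```python
def upscale_requires_opt_in(current_per_broker_vcpu: float,
                            new_per_broker_vcpu: float) -> bool:
    """True when an upscale would shrink per-broker capacity below 90% of the
    current size, which fails unless allowBrokerDownscaleOnClusterUpscale is set."""
    return new_per_broker_vcpu < 0.9 * current_per_broker_vcpu

print(upscale_requires_opt_in(4.0, 3.5))  # True: 3.5 < 3.6 (90% of 4.0)
print(upscale_requires_opt_in(4.0, 3.8))  # False: 3.8 >= 3.6
```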
BrokerDetails
JSON representation:

```
{
  "rack": string,
  "brokerIndex": integer,
  "nodeId": integer
}
```

| Fields | |
|---|---|
| `rack` | Output only. The rack of the broker. |
| `brokerIndex` | Output only. The index of the broker. |
| `nodeId` | Output only. The node ID of the broker. |
Output Schema
This resource represents a long-running operation that is the result of a network API call.
Operation
JSON representation:

```
{
  "name": string,
  "metadata": {
    "@type": string,
    field1: ...,
    ...
  },
  "done": boolean,

  // Union field result can be only one of the following:
  "error": {
    object (Status)
  },
  "response": {
    "@type": string,
    field1: ...,
    ...
  }
  // End of list of possible types for union field result.
}
```

| Fields | |
|---|---|
| `name` | The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`. |
| `metadata` | Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any. An object containing fields of an arbitrary type; an additional field `"@type"` contains a URI identifying the type. |
| `done` | If the value is `false`, the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available. |
| Union field `result`. The operation result, which can be either an error or a valid response. If `done` == `false`, neither `error` nor `response` is set. If `done` == `true`, exactly one of `error` or `response` can be set. Some services might not provide the result. `result` can be only one of the following: | |
| `error` | The error result of the operation in case of failure or cancellation. |
| `response` | The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is a standard `Get`/`Create`/`Update`, the response should be the resource. An object containing fields of an arbitrary type; an additional field `"@type"` contains a URI identifying the type. |
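Interpreting the `result` union when polling with get_operation can be sketched as follows. The helper name and the sample payloads are illustrative, not real API output:

```python
def resolve_operation(op: dict):
    """Per the result union: None while running, raise on error,
    otherwise return the response payload."""
    if not op.get("done"):
        return None  # still in progress; poll again with get_operation
    if "error" in op:
        err = op["error"]
        raise RuntimeError(f"operation failed: code={err.get('code')} message={err.get('message')}")
    return op.get("response")

# Illustrative Operation payloads:
running = {"name": "operations/abc", "done": False}
failed = {"name": "operations/abc", "done": True,
          "error": {"code": 9, "message": "ratio out of range"}}
succeeded = {"name": "operations/abc", "done": True,
             "response": {"@type": "type.googleapis.com/...Cluster"}}

print(resolve_operation(running))    # None
print(resolve_operation(succeeded))  # the response object
```

Because cluster creation can take 30 minutes or longer, poll with a generous interval rather than a tight loop.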
Any
JSON representation:

```
{
  "typeUrl": string,
  "value": string
}
```

| Fields | |
|---|---|
| `typeUrl` | Identifies the type of the serialized Protobuf message with a URI reference consisting of a prefix ending in a slash and the fully-qualified type name. Example: `type.googleapis.com/google.protobuf.StringValue`. The string must contain at least one `/` character. The prefix is arbitrary, and Protobuf implementations are expected to simply strip off everything up to and including the last `/` to recover the type name. All type URL strings must be legal URI references, with the additional restriction (for the text format) that the content of the reference must consist only of alphanumeric characters, percent-encoded escapes, and a small set of additional punctuation characters. |
| `value` | Holds a Protobuf serialization of the type described by `typeUrl`. A base64-encoded string. |
Status
JSON representation:

```
{
  "code": integer,
  "message": string,
  "details": [
    {
      "@type": string,
      field1: ...,
      ...
    }
  ]
}
```

| Fields | |
|---|---|
| `code` | The status code, which should be an enum value of `google.rpc.Code`. |
| `message` | A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the `details` field, or localized by the client. |
| `details[]` | A list of messages that carry the error details. There is a common set of message types for APIs to use. Each is an object containing fields of an arbitrary type; an additional field `"@type"` contains a URI identifying the type. |
Tool Annotations
Destructive Hint: ❌ | Idempotent Hint: ❌ | Read Only Hint: ❌ | Open World Hint: ❌