After a topic is created, you can edit the topic configuration to update the following properties: the number of partitions, and topic-level configurations that override the defaults already set at the cluster level. You can only increase the number of partitions; you can't decrease it.
To update a single topic, you can use the Google Cloud console, the Google Cloud CLI, the client libraries, the Managed Kafka API, or the open source Apache Kafka APIs.
Required roles and permissions to edit a topic
To get the permissions that you need to edit a topic, ask your administrator to grant you the Managed Kafka Topic Editor (roles/managedkafka.topicEditor) IAM role on your project.
For more information about granting roles, see Manage access to projects, folders, and organizations.
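For example, a project administrator can grant this role with the gcloud CLI. The following is a minimal sketch that assumes a user principal; the project ID and email address are placeholders:
# Grant the Managed Kafka Topic Editor role to a user (placeholders shown).
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="user:USER_EMAIL" \
--role="roles/managedkafka.topicEditor"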
This predefined role contains the permissions required to edit a topic. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to edit a topic:
- Update a topic: managedkafka.topics.update
You might also be able to get these permissions with custom roles or other predefined roles.
For more information about this role, see Managed Service for Apache Kafka predefined roles.
Edit a topic
To edit a topic, follow these steps:
Console
In the Google Cloud console, go to the Clusters page.
The clusters you created in a project are listed.
Click the cluster to which the topic that you want to edit belongs.
The Cluster details page opens. On the Resources tab of the Cluster details page, the topics are listed.
Click the topic that you want to edit.
The Topic details page opens.
To make your edits, click Edit.
After you make your changes, click Save.
gcloud
- In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.
Run the gcloud managed-kafka topics update command:
gcloud managed-kafka topics update TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID \
--partitions=PARTITIONS \
--configs=CONFIGS
This command modifies the configuration of an existing topic in the specified Managed Service for Apache Kafka cluster. You can use this command to increase the number of partitions and update topic-level configuration settings.
Replace the following:
- TOPIC_ID: The ID of the topic.
- CLUSTER_ID: The ID of the cluster containing the topic.
- LOCATION_ID: The location of the cluster.
- PARTITIONS: Optional: The updated number of partitions for the topic. You can only increase the number of partitions; you can't decrease it.
- CONFIGS: Optional: A list of configuration settings to update, specified as a comma-separated list of key-value pairs. For example, retention.ms=3600000,retention.bytes=10000000.
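For example, the following invocation increases the topic's partition count to 10 and sets a one-hour retention period. The values are illustrative:
# Illustrative values: 10 partitions, one-hour retention (3600000 ms).
gcloud managed-kafka topics update TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID \
--partitions=10 \
--configs=retention.ms=3600000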
REST
Before using any of the request data, make the following replacements:
- PROJECT_ID: your Google Cloud project ID
- LOCATION: the location of the cluster
- CLUSTER_ID: the ID of the cluster
- TOPIC_ID: the ID of the topic
- UPDATE_MASK: the fields to update, as a comma-separated list of fully qualified names. For example: partitionCount
- PARTITION_COUNT: the updated number of partitions for the topic
HTTP method and URL:
PATCH https://managedkafka.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_ID/topics/TOPIC_ID?updateMask=UPDATE_MASK
Request JSON body:
{
"name": "TOPIC_ID",
"partitionCount": PARTITION_COUNT
}
To send your request, use a command-line tool such as curl.
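For example, you can send the request with curl. This sketch assumes that you saved the request JSON body in a file named request.json and that the gcloud CLI is available to mint an access token:
# Send the PATCH request with an OAuth access token from gcloud.
curl -X PATCH \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
-d @request.json \
"https://managedkafka.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_ID/topics/TOPIC_ID?updateMask=UPDATE_MASK"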
You should receive a JSON response similar to the following:
{
"name": "projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID",
"metadata": {
"@type": "type.googleapis.com/google.cloud.managedkafka.v1.OperationMetadata",
"createTime": "CREATE_TIME",
"target": "projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_ID",
"verb": "update",
"requestedCancellation": false,
"apiVersion": "v1"
},
"done": false
}
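The response is a long-running operation. To check whether the update has finished, you can poll the operation by sending a GET request to the name returned in the response; this follows the standard long-running operation pattern:
# Poll the operation; the response reports "done": true when the update completes.
curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://managedkafka.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/operations/OPERATION_ID"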
Go
Before trying this sample, follow the Go setup instructions in Install the client libraries. For more information, see the Managed Service for Apache Kafka Go API reference documentation.
To authenticate to Managed Service for Apache Kafka, set up Application Default Credentials (ADC). For more information, see Set up ADC for a local development environment.
Java
Before trying this sample, follow the Java setup instructions in Install the client libraries. For more information, see the Managed Service for Apache Kafka Java API reference documentation.
To authenticate to Managed Service for Apache Kafka, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.
Python
Before trying this sample, follow the Python setup instructions in Install the client libraries. For more information, see the Managed Service for Apache Kafka Python API reference documentation.
To authenticate to Managed Service for Apache Kafka, set up Application Default Credentials. For more information, see Set up ADC for a local development environment.
Configure message retention
Kafka stores messages in log segment files. By default, Kafka deletes segment files after a retention period or when a partition exceeds a data size threshold. You can change this behavior by enabling log compaction. If log compaction is enabled, then Kafka only keeps the latest value for each key.
Google Cloud Managed Service for Apache Kafka uses tiered storage, which means that completed log segments are stored remotely rather than in local storage. To learn more about tiered storage, see Tiered Storage in the Apache Kafka documentation.
Set the retention values
If log compaction is not enabled, then the following settings control how Kafka stores log segment files:
- retention.ms: The maximum length of time to save segment files, in milliseconds.
- retention.bytes: The maximum number of bytes to store per partition. If the data in a partition exceeds this value, then Kafka discards older segment files.
To update these settings, use either the gcloud CLI or the Kafka CLI:
gcloud
To set the message retention, run the gcloud managed-kafka topics update command.
gcloud managed-kafka topics update TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID \
--configs=retention.ms=RETENTION_PERIOD,retention.bytes=MAX_BYTES
Replace the following:
- TOPIC_ID: The ID of the topic.
- CLUSTER_ID: The ID of the cluster containing the topic.
- LOCATION_ID: The location of the cluster.
- RETENTION_PERIOD: The maximum amount of time to store segment files, in milliseconds.
- MAX_BYTES: The maximum number of bytes to store, per partition.
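For example, to keep messages for up to seven days or until a partition holds 1 GiB of data, whichever limit is reached first, you might run the following. The values are illustrative:
# Illustrative values: 7 days (604800000 ms) and 1 GiB (1073741824 bytes) per partition.
gcloud managed-kafka topics update TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID \
--configs=retention.ms=604800000,retention.bytes=1073741824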
Kafka CLI
Before running this command, install the Kafka command-line tools on a Compute Engine VM. The VM must be able to reach a subnet that is connected to your Managed Service for Apache Kafka cluster. Follow the instructions in Produce and consume messages with the Kafka command-line tools.
Run the kafka-configs.sh command:
kafka-configs.sh --alter \
--bootstrap-server=BOOTSTRAP_ADDRESS \
--command-config client.properties \
--entity-type topics \
--entity-name TOPIC_ID \
--add-config retention.ms=RETENTION_PERIOD,retention.bytes=MAX_BYTES
Replace the following:
- BOOTSTRAP_ADDRESS: The bootstrap address of the Managed Service for Apache Kafka cluster.
- TOPIC_ID: The ID of the topic.
- RETENTION_PERIOD: The maximum amount of time to store segment files, in milliseconds.
- MAX_BYTES: The maximum number of bytes to store, per partition.
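To verify the new settings, you can describe the topic's configuration with the same tool:
# List the configuration overrides that are set on the topic.
kafka-configs.sh --describe \
--bootstrap-server=BOOTSTRAP_ADDRESS \
--command-config client.properties \
--entity-type topics \
--entity-name TOPIC_ID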
Enable log compaction
If log compaction is enabled, then Kafka only stores the latest message for
each key. Log compaction is disabled by default. To enable log compaction for
a topic, set the cleanup.policy configuration to "compact", as follows:
gcloud
Run the gcloud managed-kafka topics update command.
gcloud managed-kafka topics update TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID \
--configs=cleanup.policy=compact
Replace the following:
- TOPIC_ID: The ID of the topic.
- CLUSTER_ID: The ID of the cluster containing the topic.
- LOCATION_ID: The location of the cluster.
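To confirm that the policy took effect, you can describe the topic with the gcloud CLI; this sketch assumes the describe command in the same command group:
# Show the topic's current partition count and configuration overrides.
gcloud managed-kafka topics describe TOPIC_ID \
--cluster=CLUSTER_ID \
--location=LOCATION_ID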
Kafka CLI
Before running this command, install the Kafka command-line tools on a Compute Engine VM. The VM must be able to reach a subnet that is connected to your Managed Service for Apache Kafka cluster. Follow the instructions in Produce and consume messages with the Kafka command-line tools.
Run the kafka-configs.sh command:
kafka-configs.sh --alter \
--bootstrap-server=BOOTSTRAP_ADDRESS \
--command-config client.properties \
--entity-type topics \
--entity-name TOPIC_ID \
--add-config cleanup.policy=compact
Replace the following:
- BOOTSTRAP_ADDRESS: The bootstrap address of the Managed Service for Apache Kafka cluster.
- TOPIC_ID: The ID of the topic.
Limitations
- You can't override topic configurations for remote storage, such as remote.storage.enable.
- You can't override topic configurations for log segment files, such as segment.bytes.
- Enabling log compaction for a topic implicitly disables tiered storage for that topic. All log files for the topic are stored locally.