For production environments, Apigee hybrid stateless components like MART, Runtime, and Synchronizer
require specific resource configurations to ensure stability and performance. This topic provides
recommended minimum configurations for these stateless Apigee hybrid components in a production installation.
Stateless components run in the apigee-runtime node pool,
as described in Configuring dedicated node pools.
These recommendations are based on Apigee's default provisioning for paid organizations and should be considered minimum requirements for a production deployment. You may need to adjust these values based on your specific traffic patterns and load testing results.
## Production resource recommendations
The following table lists the recommended resource requests and limits for Apigee hybrid components in a production environment.
| Component | CPU Request | Memory Request | CPU Limit | Memory Limit |
|---|---|---|---|---|
| MART | 500m | 2Gi | 2000m | 5Gi |
| Watcher | 1 | 4Gi | 4 | 16Gi |
| Runtime | 2 | 6Gi | 4 | 12Gi |
| Synchronizer | 1 | 3Gi | 2000m | 5Gi |
| Metrics Aggregator | 1 | 10Gi | 4 | 32Gi |
| Metrics App | 500m | 10Gi | 2 | 32Gi |
| Metrics Proxy | 500m | 1Gi | 2 | 32Gi |
| Metrics Exporters (App/Proxy) | 500m | 3Gi | 1 | 10Gi |
| Logger | 100m | 200Mi | 200m | 200Mi |
| Connect Agent | 500m | 1Gi | 500m | 512Mi |
| Mint Task Scheduler (if using Monetization) | 100m | 100Mi | 2000m | 4Gi |
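As a rough capacity check, you can sum the CPU requests in the table to estimate the minimum vCPU the apigee-runtime node pool must supply. The sketch below assumes a single replica of each component (and counts the app and proxy metrics exporters separately); scale the per-component figures by your actual replica counts.

```shell
#!/bin/bash
# Back-of-the-envelope sum of the per-component CPU requests from the table
# above, in millicores. Assumes one replica of each component; real
# deployments typically run more, so multiply by your replica counts.
mart=500; watcher=1000; runtime=2000; synchronizer=1000
aggregator=1000; metrics_app=500; metrics_proxy=500
exporter_app=500; exporter_proxy=500
logger=100; connect_agent=500; mint=100

total_mcpu=$((mart + watcher + runtime + synchronizer + aggregator
  + metrics_app + metrics_proxy + exporter_app + exporter_proxy
  + logger + connect_agent + mint))
echo "Minimum total CPU requests: ${total_mcpu}m"  # prints: Minimum total CPU requests: 8200m
```

At one replica each, that is just over 8 vCPUs of requests before any headroom for limits or bursting.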
## Example overrides

Add the following configuration to your overrides.yaml file to configure your Apigee hybrid installation for production.
```yaml
# Production resource requests for MART
mart:
  resources:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: 2000m
      memory: 5Gi

# Production resource requests for Watcher
watcher:
  resources:
    requests:
      cpu: 1
      memory: 4Gi
    limits:
      cpu: 4
      memory: 16Gi

# Production resource requests for environment-scoped components
envs:
- name: your-env-name
  runtime:
    resources:
      requests:
        cpu: 2
        memory: 6Gi
      limits:
        memory: 12Gi
  synchronizer:
    resources:
      requests:
        cpu: 1
        memory: 3Gi
      limits:
        cpu: 2000m
        memory: 5Gi
- name: your-other-env-name
  runtime:
    resources:
      requests:
        cpu: 2
        memory: 6Gi
      limits:
        memory: 12Gi
  synchronizer:
    resources:
      requests:
        cpu: 1
        memory: 3Gi
      limits:
        cpu: 2000m
        memory: 5Gi

# Production resource requests for Metrics
metrics:
  aggregator:
    resources:
      requests:
        cpu: 1
        memory: 10Gi
      limits:
        cpu: 4
        memory: 32Gi
  app:
    resources:
      requests:
        cpu: 500m
        memory: 10Gi
      limits:
        cpu: 2
        memory: 32Gi
  proxy:
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2
        memory: 32Gi
  appStackdriverExporter:
    resources:
      requests:
        cpu: 500m
        memory: 3Gi
      limits:
        cpu: 1
        memory: 10Gi
  proxyStackdriverExporter:
    resources:
      requests:
        cpu: 500m
        memory: 3Gi
      limits:
        cpu: 1
        memory: 10Gi

# Production resource requests for Logger
logger:
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
    limits:
      cpu: 200m
      memory: 200Mi

# Production resource requests for Connect Agent
connectAgent:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 500m
      memory: 512Mi

# Production resource requests for Mint Task Scheduler (if using Monetization)
mintTaskScheduler:
  resources:
    requests:
      cpu: 100m
      memory: 100Mi
    limits:
      cpu: 2000m
      memory: 4Gi
```
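Before running any upgrades, you can optionally render a chart locally to confirm the overrides file parses and the resource values land where you expect. This is a sketch: it assumes you run it from the directory containing your Helm charts and substitute your own org name and namespace; `helm template` renders manifests locally without changing the cluster.

```shell
# Render the apigee-org chart with the production overrides and print the
# generated resources stanzas for a quick visual check against the table above.
helm template ORG_NAME apigee-org/ \
  --namespace APIGEE_NAMESPACE \
  -f overrides.yaml \
  | grep -A 4 'resources:'
```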
## Applying the configuration

After updating your overrides.yaml file, apply the changes to your cluster by running `helm upgrade` for each affected chart, in the following order:
Update apigee-operator:

```shell
helm upgrade operator apigee-operator/ \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  -f OVERRIDES_FILE.yaml
```
Apply changes for Logger and Metrics components:

```shell
helm upgrade telemetry apigee-telemetry/ \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  -f OVERRIDES_FILE.yaml
```
Apply changes for MART, Watcher, Connect Agent, and Mint Task Scheduler components:

```shell
helm upgrade ORG_NAME apigee-org/ \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  -f OVERRIDES_FILE.yaml
```
Apply changes for Runtime and Synchronizer components for each environment in your installation:

```shell
helm upgrade ENV_RELEASE_NAME apigee-env/ \
  --namespace APIGEE_NAMESPACE \
  --atomic \
  --set env=ENV_NAME \
  -f OVERRIDES_FILE.yaml
```
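After the upgrades complete, you can spot-check that the new values were applied by inspecting the running pods. This is a sketch: substitute your own namespace (typically `apigee`), and note that it reads only each pod's first container, so multi-container pods may need a closer look with `kubectl describe`.

```shell
# List each pod with the CPU and memory requests of its first container,
# for comparison against the table above.
kubectl get pods -n APIGEE_NAMESPACE \
  -o custom-columns='POD:.metadata.name,CPU_REQ:.spec.containers[0].resources.requests.cpu,MEM_REQ:.spec.containers[0].resources.requests.memory'
```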
## See also
- Configuring dedicated node pools
- Configure Cassandra for production
- Configuration properties