App Lifecycle Manager feature flag allocations help you test your application by splitting traffic between variants by percentage. Some of your users see one variant of your application while others see a different variant, helping you determine which variants or features are more successful (a process often called A/B testing). Feature flag allocations let you define how your traffic is split and integrate the allocated variants with your application.
This guide shows you how to define variants, configure traffic splitting, and integrate variants with your application.
To simplify tracking and analysis, assign descriptive string IDs to your variants (for example, "experimental" and "baseline") and use these names directly in your code and analysis pipelines.
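For example, the descriptive variant IDs can be centralized as named constants in your application code. This is a minimal sketch; the constant names `VariantExperimental` and `VariantBaseline` are illustrative and not part of the product:

```go
package main

import "fmt"

// Illustrative constants mirroring the variant IDs defined on the flag.
// Centralizing them avoids typo-prone string literals in dispatch logic
// and in audit log statements.
const (
	VariantExperimental = "experimental"
	VariantBaseline     = "baseline"
)

func main() {
	fmt.Println("variants:", VariantExperimental, VariantBaseline)
}
```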
Prerequisites
Before you begin, ensure you have:
- Completed the Deploy feature flags quickstart or Use feature flags (standalone quickstart).
- A Google Cloud CLI environment configured to manage App Lifecycle Manager resources.
Configure a feature flag allocation
To configure an experimental feature flag:
1. Define the randomized attribute (for example, `userID`) that is used for sticky bucketing, ensuring a consistent experience for each user:

   ```shell
   gcloud beta app-lifecycle-manager flags attributes create "user-id-attr" \
       --key="userID" \
       --attribute-value-type="STRING" \
       --location=global
   ```

2. Use an allocation to define the split (50/50, for example), referencing your descriptive variant IDs:

   ```shell
   # Create a flag with explicitly named variants for the experiment
   # and a 50/50 allocation referencing the custom IDs
   gcloud beta app-lifecycle-manager flags create "search-algo-test" \
       --key="search-algo-test" \
       --flag-value-type=BOOL \
       --location="global" \
       --unit-kind="demo-test-unitkind" \
       --variants='[
         { "id": "experimental", "booleanValue": true },
         { "id": "baseline", "booleanValue": false }
       ]' \
       --evaluation-spec='{
         "allocations": [{
           "id": "search-split-50-50",
           "randomizedOn": "userID",
           "slots": [
             {"variant": "baseline", "weight": 50},
             {"variant": "experimental", "weight": 50}
           ]
         }],
         "defaultTarget": "search-split-50-50",
         "attributes": ["projects/PROJECT_ID/locations/global/flagAttributes/user-id-attr"]
       }'
   ```

3. In your backend service, initialize the OpenFeature SDK and inject the `userID` into the evaluation context. Use the `BooleanValueDetails` method (or the equivalent for your language) to retrieve the variant ID (a string) in your application. This lets you switch your backend logic on the descriptive name rather than only a boolean value:

   ```go
   // 1. Prepare the evaluation context
   evalCtx := map[string]any{"userID": currentUser.ID}

   // 2. Fetch evaluation details to get the variant name
   details, err := client.BooleanValueDetails(ctx, "search-algo-test", false, evalCtx)
   if err != nil {
       // On evaluation errors, details falls back to the default value.
       log.Printf("flag evaluation failed: %v", err)
   }

   // 3. Execute logic based on the variant ID (string name)
   if details.Variant == "experimental" {
       results = search.ExperimentalV2(query)
   } else {
       results = search.BaselineV1(query)
   }
   ```

4. Use descriptive variant names so that your audit logs and analysis are self-documenting. To audit evaluations, log the `details.Variant` string alongside your performance metrics:

   ```go
   startTime := time.Now()
   // ... perform search ...
   duration := time.Since(startTime)

   // Audit: log the descriptive variant name ("experimental" or "baseline")
   logger.Info("Search performed",
       "variant", details.Variant,
       "latency_ms", duration.Milliseconds(),
   )
   ```

By comparing metrics for the "experimental" group against the "baseline" group, you can manually analyze whether the new algorithm improves backend efficiency or search relevance before proceeding with a full rollout.
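As a sketch of that manual comparison, the logged records could be aggregated offline to compute average latency per variant. This is a hypothetical snippet; the `evalRecord` struct and `meanLatencyByVariant` helper are assumptions for illustration, not part of App Lifecycle Manager:

```go
package main

import "fmt"

// evalRecord mirrors the fields logged above (hypothetical struct
// for offline analysis).
type evalRecord struct {
	Variant   string
	LatencyMS int64
}

// meanLatencyByVariant groups logged records by variant ID and
// returns the average latency for each group.
func meanLatencyByVariant(records []evalRecord) map[string]float64 {
	sums := map[string]int64{}
	counts := map[string]int64{}
	for _, r := range records {
		sums[r.Variant] += r.LatencyMS
		counts[r.Variant]++
	}
	avgs := map[string]float64{}
	for v, s := range sums {
		avgs[v] = float64(s) / float64(counts[v])
	}
	return avgs
}

func main() {
	records := []evalRecord{
		{"experimental", 90}, {"experimental", 110},
		{"baseline", 120}, {"baseline", 140},
	}
	for v, avg := range meanLatencyByVariant(records) {
		fmt.Printf("%s: %.1f ms\n", v, avg)
	}
}
```

In practice you would run this kind of aggregation in your analytics pipeline rather than in the serving path.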
What's next
- Learn about Troubleshooting.