Client¶
The FeatureFlagClient is the main interface for evaluating feature flags.
It provides type-safe methods for all flag types with automatic caching and
graceful degradation.
Overview¶
The client is designed to never throw exceptions during flag evaluation. Instead, it returns the default value and includes error information in the evaluation details. This ensures your application remains stable even when flag evaluation encounters issues.
Key Features:
Type-safe evaluation methods for boolean, string, number, and object flags
Detailed evaluation results with metadata
Bulk evaluation for multiple flags
Async context manager support
Health check functionality
Quick Example¶
from litestar_flags import FeatureFlagClient, MemoryStorageBackend, EvaluationContext
# Create a client with in-memory storage
storage = MemoryStorageBackend()
client = FeatureFlagClient(storage=storage)
# Evaluate a boolean flag
enabled = await client.get_boolean_value("my-feature", default=False)
# Evaluate with context for targeting
context = EvaluationContext(
    targeting_key="user-123",
    attributes={"plan": "premium"},
)
enabled = await client.get_boolean_value("premium-feature", context=context)
# Get detailed evaluation information
details = await client.get_boolean_details("my-feature", default=False)
print(f"Value: {details.value}, Reason: {details.reason}")
API Reference¶
Feature flag client for evaluation.
- class litestar_flags.client.FeatureFlagClient[source]¶
Bases: object
Main client for feature flag evaluation.
Provides type-safe methods for all flag types with automatic caching and graceful degradation (never throws exceptions).
Example
>>> client = FeatureFlagClient(storage=MemoryStorageBackend())
>>> enabled = await client.get_boolean_value("my-feature", default=False)
>>> variant = await client.get_string_value("ab-test", default="control")
- Parameters:
storage (StorageBackend)
default_context (EvaluationContext | None)
rate_limiter (RateLimiter | None)
cache (CacheProtocol | None)
- __init__(storage, default_context=None, rate_limiter=None, cache=None)[source]¶
Initialize the feature flag client.
- Parameters:
storage (StorageBackend) – The storage backend for flag data.
default_context (EvaluationContext | None) – Default evaluation context to use when none is provided.
rate_limiter (RateLimiter | None) – Optional rate limiter to control evaluation throughput.
cache (CacheProtocol | None) – Optional cache for flag data. When provided, flag lookups will check the cache before hitting storage, and cache entries will be populated after storage reads.
- Return type:
None
- property storage: StorageBackend¶
Get the storage backend.
- cache_stats()[source]¶
Get cache statistics.
- Return type:
CacheStats | None
- Returns:
CacheStats if a cache is configured, None otherwise.
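Because cache_stats() returns None when no cache is configured, guard before reading fields. The `CacheStatsSketch` class below is a hypothetical shape for the stats object (the real CacheStats field names may differ); the guard pattern is the point:

```python
from dataclasses import dataclass


@dataclass
class CacheStatsSketch:
    """Hypothetical shape for CacheStats; real field names may differ."""
    hits: int
    misses: int

    @property
    def hit_rate(self) -> float:
        total = self.hits + self.misses
        return self.hits / total if total else 0.0


# Pattern: always guard, since cache_stats() is None without a cache.
stats = CacheStatsSketch(hits=90, misses=10)
if stats is not None:
    print(f"cache hit rate: {stats.hit_rate:.0%}")
```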
- async classmethod bootstrap(config, storage, default_context=None, rate_limiter=None, cache=None)[source]¶
Create a client with flags bootstrapped from a static source.
Loads flags from the bootstrap configuration and stores them in the provided storage backend, then returns a configured client.
- Parameters:
config (BootstrapConfig) – Bootstrap configuration specifying flag source.
storage (StorageBackend) – Storage backend to populate with bootstrap flags.
default_context (EvaluationContext | None) – Default evaluation context.
rate_limiter (RateLimiter | None) – Optional rate limiter.
cache (CacheProtocol | None) – Optional cache for flag data.
- Return type:
FeatureFlagClient
- Returns:
Configured FeatureFlagClient with bootstrapped flags.
Example
>>> config = BootstrapConfig(source=Path("flags.json"))
>>> client = await FeatureFlagClient.bootstrap(
...     config=config,
...     storage=MemoryStorageBackend(),
... )
- async preload_flags(flag_keys=None)[source]¶
Preload flags into the client’s cache for faster evaluation.
This method fetches flags from storage and caches them locally. Useful for warming up the client at startup to avoid cold-start latency on first evaluations.
- Parameters:
flag_keys (list[str] | None) – Optional list of specific flag keys to preload. If None, preloads all active flags.
- Return type:
- Returns:
Dictionary of preloaded flags keyed by flag key.
Example
>>> await client.preload_flags()  # Preload all flags
>>> await client.preload_flags(["feature-a", "feature-b"])
- clear_preloaded_flags()[source]¶
Clear the preloaded flags cache.
Call this method when you want to force fresh flag fetches from the storage backend.
- Return type:
None
- async clear_cache()[source]¶
Clear the external cache.
This method clears all entries in the external cache if one is configured. Use this when you need to invalidate all cached flag data.
Note
This does not clear the preloaded flags. Use clear_preloaded_flags() for that, or clear_all_caches() to clear both.
- Return type:
None
- async clear_all_caches()[source]¶
Clear both preloaded flags and external cache.
Convenience method to ensure all cached data is cleared.
- Return type:
None
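The relationship between the three clearing methods can be modeled with a toy two-layer cache. This sketch is not the litestar_flags implementation; it only illustrates which layer each method targets:

```python
import asyncio


class TwoLayerCacheSketch:
    """Toy model of the two cache layers the clear_* methods target."""

    def __init__(self) -> None:
        self._preloaded: dict[str, object] = {}  # in-process, cleared synchronously
        self._external: dict[str, object] = {}   # e.g. Redis-backed, cleared with await

    def clear_preloaded_flags(self) -> None:
        self._preloaded.clear()

    async def clear_cache(self) -> None:
        self._external.clear()

    async def clear_all_caches(self) -> None:
        # Convenience: clear both layers in one call.
        self.clear_preloaded_flags()
        await self.clear_cache()


c = TwoLayerCacheSketch()
c._preloaded["feature-x"] = c._external["feature-x"] = True
asyncio.run(c.clear_all_caches())
```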
- async get_boolean_value(flag_key, default=False, context=None)[source]¶
Evaluate a boolean flag.
- Parameters:
flag_key (str) – The unique flag key.
default (bool) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
bool
- Returns:
The evaluated boolean value.
- async get_boolean_details(flag_key, default=False, context=None)[source]¶
Evaluate a boolean flag with details.
- Parameters:
flag_key (str) – The unique flag key.
default (bool) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
EvaluationDetails[bool]
- Returns:
EvaluationDetails containing the value and metadata.
- async get_string_value(flag_key, default='', context=None)[source]¶
Evaluate a string flag.
- Parameters:
flag_key (str) – The unique flag key.
default (str) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
str
- Returns:
The evaluated string value.
- async get_string_details(flag_key, default='', context=None)[source]¶
Evaluate a string flag with details.
- Parameters:
flag_key (str) – The unique flag key.
default (str) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
EvaluationDetails[str]
- Returns:
EvaluationDetails containing the value and metadata.
- async get_number_value(flag_key, default=0.0, context=None)[source]¶
Evaluate a number flag.
- Parameters:
flag_key (str) – The unique flag key.
default (float) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
float
- Returns:
The evaluated number value.
- async get_number_details(flag_key, default=0.0, context=None)[source]¶
Evaluate a number flag with details.
- Parameters:
flag_key (str) – The unique flag key.
default (float) – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
EvaluationDetails[float]
- Returns:
EvaluationDetails containing the value and metadata.
- async get_object_details(flag_key, default, context=None)[source]¶
Evaluate an object/JSON flag with details.
- Parameters:
flag_key (str) – The unique flag key.
default – Default value if flag is not found or evaluation fails.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
- Returns:
EvaluationDetails containing the value and metadata.
- async is_enabled(flag_key, context=None)[source]¶
Check if a boolean flag is enabled.
Shorthand for get_boolean_value(flag_key, default=False, context=context).
- Parameters:
flag_key (str) – The unique flag key.
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
bool
- Returns:
True if the flag is enabled, False otherwise.
- async get_all_flags(context=None)[source]¶
Evaluate all active flags.
- Parameters:
context (EvaluationContext | None) – Optional evaluation context.
- Return type:
- Returns:
Dictionary mapping flag keys to their evaluation details.
Evaluation Methods¶
The client provides pairs of methods for each flag type:
get_<type>_value(): Returns just the evaluated value
get_<type>_details(): Returns the value wrapped in EvaluationDetails
Boolean Flags¶
# Simple boolean check
enabled = await client.get_boolean_value("feature-x", default=False)
# With full details
details = await client.get_boolean_details("feature-x", default=False)
if details.is_error:
    logger.warning(f"Flag evaluation error: {details.error_message}")
# Convenience method
if await client.is_enabled("feature-x"):
    # Feature is enabled
    pass
String Flags¶
# Get experiment variant
variant = await client.get_string_value("experiment-color", default="blue")
# A/B testing with context
context = EvaluationContext(targeting_key="user-123")
variant = await client.get_string_value("button-text", default="Click", context=context)
Number Flags¶
# Get configuration value
max_items = await client.get_number_value("max-items", default=10.0)
rate_limit = await client.get_number_value("rate-limit-rps", default=100.0)
Object/JSON Flags¶
# Get complex configuration
config = await client.get_object_value(
    "feature-config",
    default={"enabled": False, "limit": 10},
)
Bulk Evaluation¶
For evaluating multiple flags at once:
# Get all active flags
all_flags = await client.get_all_flags(context=context)
# Get specific flags
flags = await client.get_flags(
    ["feature-a", "feature-b", "experiment-1"],
    context=context,
)
for key, details in flags.items():
    print(f"{key}: {details.value}")
Lifecycle Management¶
The client supports async context manager protocol for proper resource cleanup:
async with FeatureFlagClient(storage=storage) as client:
    enabled = await client.get_boolean_value("my-feature")
# Resources are automatically cleaned up
Health Checks¶
Check if the client and storage backend are healthy:
if await client.health_check():
    print("Feature flag system is healthy")
else:
    print("Feature flag system is not available")
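At startup it is common to poll the health check a few times before serving traffic. The retry helper below is an example pattern, not a litestar_flags API; any coroutine returning a bool (such as client.health_check) can be passed in:

```python
import asyncio


async def wait_until_healthy(health_check, attempts: int = 5, delay: float = 0.01) -> bool:
    """Poll a health-check coroutine, sleeping between attempts."""
    for _ in range(attempts):
        if await health_check():
            return True
        await asyncio.sleep(delay)
    return False


# Stub that fails twice before reporting healthy:
calls = {"n": 0}


async def flaky_check() -> bool:
    calls["n"] += 1
    return calls["n"] >= 3


ok = asyncio.run(wait_until_healthy(flaky_check))
```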