# Quick Start
This guide gets you up and running with Reminiscence in under 5 minutes.
## Basic Usage

Create a cache instance and start caching:
```python
from reminiscence import Reminiscence

# Initialize cache
cache = Reminiscence()

# Check cache
result = cache.lookup(
    query="What were Q3 2024 sales?",
    context={"agent": "analyst"},
)

if result.is_hit:
    print(f"Cache hit! Similarity: {result.similarity:.2f}")
    data = result.result
else:
    # Cache miss - execute expensive operation
    data = expensive_llm_call("What were Q3 2024 sales?")

    # Store for future queries
    cache.store(
        query="What were Q3 2024 sales?",
        context={"agent": "analyst"},
        result=data,
    )
```
## Using the Decorator

The `@cached` decorator automatically handles lookup and storage:
```python
from reminiscence import Reminiscence

cache = Reminiscence()

@cache.cached(query="prompt", context_params=["model"])
def ask_llm(prompt: str, model: str) -> str:
    """Expensive LLM call with automatic caching."""
    return openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

# First call executes the function
answer = ask_llm("Explain quantum entanglement", model="gpt-4")

# Similar queries return cached results
answer = ask_llm("What is quantum entanglement?", model="gpt-4")  # Cache hit!
```
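To build intuition for what a decorator like this does, here is a standalone toy sketch, not Reminiscence's implementation: it wraps the function, builds a key from the designated query and context arguments, and only calls the wrapped function on a miss. It uses a plain dict with exact matching instead of semantic lookup, and for simplicity accepts keyword arguments only; the names `cached` and `ask` here are illustrative.

```python
import functools

def cached(cache: dict, query: str, context_params: list):
    """Toy decorator: key = (query arg, context args); store on miss."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            key = (
                kwargs[query],
                tuple(sorted((p, kwargs[p]) for p in context_params)),
            )
            if key in cache:          # hit: skip the expensive call
                return cache[key]
            result = fn(**kwargs)     # miss: execute, then remember
            cache[key] = result
            return result
        return wrapper
    return decorator

store = {}
calls = []

@cached(store, query="prompt", context_params=["model"])
def ask(prompt: str, model: str) -> str:
    calls.append(prompt)              # track real executions
    return f"answer to {prompt!r} from {model}"

ask(prompt="hi", model="gpt-4")
ask(prompt="hi", model="gpt-4")       # second call is served from the cache
```

The real decorator additionally performs similarity matching, so semantically close prompts can share an entry rather than requiring byte-identical keys.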
## Query Modes

Reminiscence supports three matching strategies:
```python
# Semantic mode (default) - fuzzy matching
result = cache.lookup(
    "Analyze Q3 sales",
    context={"agent": "analyst"},
    query_mode="semantic",
)

# Exact mode - near-exact string matching (threshold 0.9999)
result = cache.lookup(
    "SELECT * FROM users WHERE id = 123",
    context={"db": "prod"},
    query_mode="exact",
)

# Auto mode - tries exact first, falls back to semantic
result = cache.lookup(
    "What is Python?",
    context={},
    query_mode="auto",
)
```
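The auto strategy can be pictured with a small self-contained sketch. This is an illustration, not Reminiscence's code: real semantic mode compares embeddings, whereas the `similarity` helper below is a crude token-overlap stand-in, and `auto_lookup` and the threshold value are hypothetical names for this example.

```python
def similarity(a: str, b: str) -> float:
    """Crude stand-in for embedding similarity: token-set overlap (Jaccard)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def auto_lookup(query: str, store: dict, threshold: float = 0.7):
    """Auto mode: try an exact match first, then fall back to fuzzy matching."""
    if query in store:                       # exact hit, no similarity needed
        return store[query]
    best = max(store, key=lambda q: similarity(query, q), default=None)
    if best is not None and similarity(query, best) >= threshold:
        return store[best]                   # semantic hit above threshold
    return None                              # miss

store = {"what is python": "Python is a programming language."}
hit = auto_lookup("what is python", store)            # exact hit
fuzzy = auto_lookup("what is python exactly", store)  # fuzzy hit above threshold
```

Exact mode suits queries where small differences change the meaning (SQL, code); semantic mode suits natural-language prompts where paraphrases should share a cache entry.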
## Context Isolation

Context provides cache isolation between different scenarios:
```python
# Same query, different contexts = separate cache entries
cache.store(
    query="Analyze sales",
    context={"region": "US", "year": 2024},
    result={"total": 5_000_000},
)

cache.store(
    query="Analyze sales",
    context={"region": "EU", "year": 2024},
    result={"total": 3_500_000},
)

# Exact context matching
us = cache.lookup("Analyze sales", context={"region": "US", "year": 2024})
eu = cache.lookup("Analyze sales", context={"region": "EU", "year": 2024})
```
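One way to picture exact context matching is a cache keyed on the query plus a canonicalized form of the context dict. The `ContextCache` class below is a toy illustration (not the library's internals); it uses a `frozenset` of context items so the key is insensitive to dict ordering:

```python
class ContextCache:
    """Toy cache: contexts must match exactly; the query is the other key part."""

    def __init__(self):
        self._entries = {}

    @staticmethod
    def _key(query, context):
        # frozenset makes {"a": 1, "b": 2} and {"b": 2, "a": 1} the same key
        return (query, frozenset(context.items()))

    def store(self, query, context, result):
        self._entries[self._key(query, context)] = result

    def lookup(self, query, context):
        return self._entries.get(self._key(query, context))

toy = ContextCache()
toy.store("Analyze sales", {"region": "US", "year": 2024}, {"total": 5_000_000})
toy.store("Analyze sales", {"region": "EU", "year": 2024}, {"total": 3_500_000})
```

Because context is matched exactly, any differing or missing key yields a miss rather than silently sharing results across regions, tenants, or agents.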
## Decorator with Context

Use `context_params` to automatically extract context from function arguments:
```python
@cache.cached(
    query="sql_query",
    context_params=["database", "user_id"],
    query_mode="exact",
)
def execute_query(sql_query: str, database: str, user_id: int):
    """Cache results per database and user."""
    return run_expensive_query(sql_query, database, user_id)

# Context automatically built from parameters
result = execute_query(
    sql_query="SELECT * FROM orders",
    database="prod",
    user_id=42,
)
```
## Auto-Strict Mode

Automatically treat non-string parameters as context:
```python
@cache.cached(
    query="prompt",
    auto_strict=True,  # Detects temperature, max_tokens as context
)
def generate_text(prompt: str, temperature: float, max_tokens: int):
    """Non-string params automatically added to context."""
    return llm_call(prompt, temperature, max_tokens)
```
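One plausible way such detection could work is by inspecting type annotations, as in this standalone sketch. This is purely illustrative of the idea, not Reminiscence's actual rule, and `infer_context_params` is a hypothetical helper name:

```python
import inspect

def infer_context_params(fn):
    """Toy auto-strict rule: any annotated non-str parameter becomes context."""
    params = inspect.signature(fn).parameters
    return [
        name
        for name, p in params.items()
        if p.annotation is not inspect.Parameter.empty and p.annotation is not str
    ]

def generate_text(prompt: str, temperature: float, max_tokens: int):
    ...

# temperature (float) and max_tokens (int) are picked up; prompt (str) is not
detected = infer_context_params(generate_text)
```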
## Configuration

Customize cache behavior:
```python
from reminiscence import Reminiscence, ReminiscenceConfig

config = ReminiscenceConfig(
    similarity_threshold=0.85,  # Stricter matching
    max_entries=10000,          # Cache size limit
    eviction_policy="lru",      # LRU, LFU, or FIFO
    ttl_seconds=3600,           # 1 hour expiration
)

cache = Reminiscence(config=config)
```
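To make the `max_entries`, `eviction_policy="lru"`, and `ttl_seconds` knobs concrete, here is a minimal standalone sketch of an LRU cache with TTL expiry using an `OrderedDict`. It is an illustration of the concepts, not the library's eviction code, and `LruTtlCache` is a hypothetical name:

```python
import time
from collections import OrderedDict

class LruTtlCache:
    """Toy LRU cache with TTL expiry, mirroring max_entries + ttl_seconds."""

    def __init__(self, max_entries: int, ttl_seconds: float):
        self.max_entries = max_entries
        self.ttl_seconds = ttl_seconds
        self._data = OrderedDict()  # key -> (value, stored_at); end = most recent

    def store(self, key, value):
        self._data[key] = (value, time.monotonic())
        self._data.move_to_end(key)
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)   # evict least recently used

    def lookup(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, stored_at = item
        if time.monotonic() - stored_at > self.ttl_seconds:
            del self._data[key]              # entry outlived its TTL
            return None
        self._data.move_to_end(key)          # mark as recently used
        return value

lru = LruTtlCache(max_entries=2, ttl_seconds=3600)
lru.store("a", 1)
lru.store("b", 2)
lru.lookup("a")      # touch "a", so "b" becomes least recently used
lru.store("c", 3)    # capacity exceeded: "b" is evicted
```

An LFU policy would instead evict the least frequently used entry, and FIFO the oldest stored one; the TTL check applies regardless of policy.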
## Background Tasks

Enable automatic cleanup and metrics export:
```python
cache = Reminiscence()

# Start background schedulers
cache.start_scheduler(
    interval_seconds=1800,               # Cleanup every 30 minutes
    metrics_export_interval_seconds=10,  # Export metrics every 10s
)

# Use cache...

# Stop schedulers when done
cache.stop_scheduler()
```
Or use the cache as a context manager:
```python
with Reminiscence() as cache:
    cache.start_scheduler()
    # Use cache...

# Automatically stops schedulers on exit
```
## Next Steps

- Learn How It Works to understand semantic matching
- Explore Decorators for advanced patterns
- See Configuration for all options
- Check API Reference for complete documentation