Tips and Tricks – Use functools.cache for Automatic Memoization

Repeatedly computing the same expensive results kills Python
performance. Call a function with the same arguments hundreds of times? It recalculates from scratch every time. The
functools cache decorators eliminate this waste by automatically storing and reusing results, transforming slow
recursive algorithms and expensive computations into lightning-fast lookups.

This guide covers production-ready caching patterns that can
speed up your Python code by 10-1000x. We'll cover @cache, @lru_cache, and custom caching strategies for the
most common scenarios.

Why Memoization Transforms Performance

The Repeated Computation Problem

Without memoization, code suffers from:

  • Wasted CPU: Same calculations repeated thousands of times
  • Slow recursion: Exponential time complexity for simple algorithms
  • API rate limits: Unnecessary external calls
  • Database pressure: Repeated queries for the same data
  • Poor scalability: Performance degrades with load

Memoization Benefits

  • Automatic caching: One-line decorator for massive speedups
  • O(1) lookups: Cached results returned instantly
  • Recursive optimization: Fibonacci in O(n) instead of O(2ⁿ)
  • Memory control: LRU eviction prevents unbounded growth
  • Thread-safe: Built-in synchronization

Pattern 1: Basic @cache (Python 3.9+)

Simplest Memoization

from functools import cache

# Without cache - exponential time complexity
def fibonacci_slow(n):
    if n < 2:
        return n
    return fibonacci_slow(n-1) + fibonacci_slow(n-2)

# fibonacci_slow(40) takes ~30 seconds!

# With cache - linear time complexity
@cache
def fibonacci_fast(n):
    if n < 2:
        return n
    return fibonacci_fast(n-1) + fibonacci_fast(n-2)

# fibonacci_fast(40) takes ~0.0001 seconds!

# Performance comparison
import time

start = time.time()
result = fibonacci_slow(35)
slow_time = time.time() - start
print(f"Slow: {slow_time:.2f}s")  # ~5 seconds

start = time.time()
result = fibonacci_fast(35)
fast_time = time.time() - start
print(f"Fast: {fast_time:.4f}s")  # ~0.0001 seconds

print(f"Speedup: {slow_time / fast_time:.0f}x faster!")  # ~50,000x faster!

Pattern 2: @lru_cache with Size Limit

Bounded Cache for Production

from functools import lru_cache
import time

# Cache with size limit (LRU = Least Recently Used)
@lru_cache(maxsize=128)
def expensive_computation(x, y):
    """Simulate expensive calculation"""
    print(f"Computing for {x}, {y}")
    time.sleep(0.1)  # Simulate work
    return x ** y + y ** x

# First call - computes
result = expensive_computation(2, 3)  # Prints "Computing..."
print(result)

# Second call with same args - cached
result = expensive_computation(2, 3)  # No print, instant return
print(result)

# Check cache statistics
print(expensive_computation.cache_info())
# CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)

# Clear cache manually
expensive_computation.cache_clear()

# Practical example: Database queries
@lru_cache(maxsize=256)
def get_user_by_id(user_id: int):
    """Cache database queries"""
    # Expensive database call (`db` is a placeholder for your data layer)
    result = db.query("SELECT * FROM users WHERE id = ?", user_id)
    return result

# First call - hits database
user = get_user_by_id(123)

# Subsequent calls - from cache
user = get_user_by_id(123)  # Instant, no DB hit
user = get_user_by_id(123)  # Instant, no DB hit
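
Since `db` above is only a placeholder, here is a self-contained version of the same pattern using the standard-library sqlite3 module (the users table is made up for the demo):

import sqlite3
from functools import lru_cache

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES (123, 'Alice')")

@lru_cache(maxsize=256)
def get_user(user_id: int):
    """Each user_id is queried once; later calls return the cached row"""
    return conn.execute(
        "SELECT id, name FROM users WHERE id = ?", (user_id,)
    ).fetchone()

print(get_user(123))          # Hits the database
print(get_user(123))          # Served from cache
print(get_user.cache_info())  # hits=1, misses=1, currsize=1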

Pattern 3: Caching with Kwargs

Handle Keyword Arguments

from functools import lru_cache

# Cache works with kwargs too
@lru_cache(maxsize=128)
def fetch_data(endpoint: str, *, timeout: int = 30, retries: int = 3):
    """Fetch data with caching"""
    print(f"Fetching {endpoint} with timeout={timeout}, retries={retries}")
    # Simulate API call
    return f"Data from {endpoint}"

# These create separate cache entries
result1 = fetch_data("api/users", timeout=30)
result2 = fetch_data("api/users", timeout=60)  # Different timeout = different cache key
result3 = fetch_data("api/users", timeout=30)  # Same as result1 - cached

print(fetch_data.cache_info())
# CacheInfo(hits=1, misses=2, maxsize=128, currsize=2)

# Important: lru_cache does NOT normalize keyword-argument order.
# The same arguments passed in a different order create separate entries:
result4 = fetch_data("api/users", timeout=30, retries=3)
result5 = fetch_data("api/users", retries=3, timeout=30)
# Two cache entries, even though the calls are equivalent!
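
A related knob worth knowing: lru_cache also accepts a typed parameter. With typed=True, arguments that compare equal but have different types are cached separately:

from functools import lru_cache

@lru_cache(maxsize=128, typed=True)
def double(x):
    print(f"Computing double({x!r})")
    return x * 2

double(3)    # Computes (int key)
double(3.0)  # Computes again - int 3 and float 3.0 get separate entries
double(3)    # Cache hit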

Pattern 4: Unhashable Arguments

Caching Functions with Lists/Dicts

from functools import lru_cache

# This won't work - lists aren't hashable
# @lru_cache
# def process_items(items: list):  # TypeError: unhashable type: 'list'
#     return sum(items)

# Solution 1: Convert to tuple
@lru_cache(maxsize=128)
def process_items(items: tuple):  # Use tuple instead of list
    return sum(items)

# Usage
result = process_items((1, 2, 3))  # Convert list to tuple
result = process_items((1, 2, 3))  # Cached

# Solution 2: Use json for complex objects
import json

@lru_cache(maxsize=128)
def process_config(config_json: str):
    config = json.loads(config_json)
    # Process config dict
    return config.get('setting', 'default')

# Usage
config = {'setting': 'value', 'option': True}
config_json = json.dumps(config, sort_keys=True)  # sort_keys for consistent cache key
result = process_config(config_json)

# Solution 3: Use frozenset for set arguments
@lru_cache(maxsize=128)
def count_unique(items: frozenset):
    return len(items)

# Usage
result = count_unique(frozenset([1, 2, 2, 3]))  # 3
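
If you don't control the call sites, a small adapter can do the conversion for you. The freeze_args helper below is a hypothetical sketch (not part of functools) that freezes lists, dicts, and sets into hashable equivalents before delegating to a cached function:

from functools import lru_cache, wraps

def freeze_args(func):
    """Hypothetical adapter: freeze unhashable arguments before caching"""
    def _freeze(value):
        if isinstance(value, list):
            return tuple(_freeze(v) for v in value)
        if isinstance(value, dict):
            return tuple(sorted((k, _freeze(v)) for k, v in value.items()))
        if isinstance(value, set):
            return frozenset(_freeze(v) for v in value)
        return value

    cached = lru_cache(maxsize=128)(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        frozen_args = tuple(_freeze(a) for a in args)
        frozen_kwargs = {k: _freeze(v) for k, v in kwargs.items()}
        return cached(*frozen_args, **frozen_kwargs)

    wrapper.cache_info = cached.cache_info
    wrapper.cache_clear = cached.cache_clear
    return wrapper

@freeze_args
def total(items):
    # Receives a tuple even when the caller passed a list
    return sum(items)

print(total([1, 2, 3]))  # 6 - list frozen to a tuple transparently
print(total([1, 2, 3]))  # 6 - cache hit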

Pattern 5: Method Caching

Cache Instance and Class Methods

from functools import lru_cache
import time

class DataProcessor:
    def __init__(self, data_source):
        self.data_source = data_source
    
    # DON'T do this - the class-level cache stores a reference to every
    # `self` it sees, keeping instances alive (a memory leak)
    # @lru_cache  # Wrong!
    # def expensive_method(self, x):
    #     return x * 2
    
    # Solution 1: Cache at class level
    @staticmethod
    @lru_cache(maxsize=128)
    def expensive_calculation(x, y):
        """Pure function - no instance state"""
        print(f"Computing {x} + {y}")
        time.sleep(0.1)
        return x + y
    
    # Solution 2: Manual caching with __dict__
    def cached_property_method(self, key):
        cache_key = f"_cache_{key}"
        if cache_key not in self.__dict__:
            print(f"Computing for {key}")
            self.__dict__[cache_key] = self._expensive_compute(key)
        return self.__dict__[cache_key]
    
    def _expensive_compute(self, key):
        time.sleep(0.1)
        return f"Result for {key}"

# Solution 3: Use cached_property for properties (Python 3.8+)
from functools import cached_property

class User:
    def __init__(self, user_id):
        self.user_id = user_id
    
    @cached_property
    def expensive_data(self):
        """Computed once per instance, then cached"""
        print(f"Loading data for user {self.user_id}")
        time.sleep(0.1)
        return f"Data for {self.user_id}"

# Usage
user1 = User(123)
print(user1.expensive_data)  # Computes
print(user1.expensive_data)  # Cached - no recomputation

user2 = User(456)
print(user2.expensive_data)  # Different instance, different cache
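
A fourth option (a minimal sketch, and the approach used by the scraper example later in this post) is to wrap the bound method with lru_cache inside __init__. Each object then gets its own cache that is garbage-collected along with it:

from functools import lru_cache

class Processor:
    def __init__(self):
        # Per-instance cache: wrap the bound method so the cache
        # lives and dies with this object
        self.compute = lru_cache(maxsize=128)(self._compute)

    def _compute(self, x):
        print(f"Computing {x}")
        return x * 2

p = Processor()
p.compute(10)                  # Computes
p.compute(10)                  # Cached
print(p.compute.cache_info())  # hits=1, misses=1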

Pattern 6: Time-Based Cache Expiration

TTL (Time To Live) Cache

from functools import wraps
import time

def timed_cache(seconds: int):
    """Cache with TTL expiration"""
    def decorator(func):
        cache = {}
        
        @wraps(func)
        def wrapper(*args, **kwargs):
            # Hashable key covering positional and keyword arguments
            key = (args, tuple(sorted(kwargs.items())))
            now = time.time()
            
            # Check if cached and not expired
            if key in cache:
                result, timestamp = cache[key]
                if now - timestamp < seconds:
                    print(f"Cache hit (age: {now - timestamp:.1f}s)")
                    return result
                else:
                    print("Cache expired")
            
            # Compute and cache
            print("Computing...")
            result = func(*args, **kwargs)
            cache[key] = (result, now)
            return result
        
        def cache_clear():
            cache.clear()
        
        wrapper.cache_clear = cache_clear
        return wrapper
    return decorator

# Usage
@timed_cache(seconds=5)
def get_stock_price(symbol):
    """Cache stock price for 5 seconds"""
    print(f"Fetching price for {symbol}")
    # Simulate API call
    return 100.0 + hash(symbol) % 50

# First call
price = get_stock_price("AAPL")  # Fetches

# Within 5 seconds - cached
price = get_stock_price("AAPL")  # From cache

# Wait 6 seconds
time.sleep(6)
price = get_stock_price("AAPL")  # Refetches (cache expired)

Pattern 7: Conditional Caching

Cache Only Successful Results

from functools import wraps

def cache_on_success(maxsize=128):
    """Only cache successful results (no exceptions).

    Note: @lru_cache already behaves this way - a call that raises is
    never cached - but a custom decorator lets you add your own
    conditions on top.
    """
    def decorator(func):
        cache = {}

        @wraps(func)
        def wrapper(*args):
            if args in cache:
                return cache[args]

            # If func raises, we never reach the caching step below,
            # so errors are not cached and the call is retried next time
            result = func(*args)
            if len(cache) < maxsize:  # Simple bound; no LRU eviction
                cache[args] = result
            return result

        wrapper.cache = cache
        wrapper.cache_clear = cache.clear
        return wrapper
    return decorator

# Usage
@cache_on_success(maxsize=128)
def divide(x, y):
    if y == 0:
        raise ValueError("Division by zero")
    return x / y

# Valid result - cached
result = divide(10, 2)  # Computes
result = divide(10, 2)  # Cached

# Error result - not cached
try:
    divide(10, 0)  # Raises ValueError
except ValueError:
    pass

try:
    divide(10, 0)  # Still raises - not cached
except ValueError:
    pass
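
The same approach extends to other conditions. For example, here is a sketch that skips caching None results, useful for "not found" lookups that might succeed later:

from functools import wraps

def cache_non_none(func):
    """Cache results, but treat None as 'don't cache' (sketch only)"""
    cache = {}

    @wraps(func)
    def wrapper(*args):
        if args in cache:
            return cache[args]
        result = func(*args)
        if result is not None:  # "Not found" now may be found later
            cache[args] = result
        return result

    return wrapper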

Pattern 8: Multi-Level Caching

Memory + Redis Cache

from functools import wraps
import pickle
import time
import redis

class MultiLevelCache:
    def __init__(self, redis_client, ttl=3600):
        self.redis = redis_client
        self.memory_cache = {}
        self.ttl = ttl
    
    def cache(self, func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = f"{func.__name__}:{args}:{kwargs}"
            
            # Level 1: Memory cache
            if key in self.memory_cache:
                print("Memory cache hit")
                return self.memory_cache[key]
            
            # Level 2: Redis cache
            redis_value = self.redis.get(key)
            if redis_value:
                print("Redis cache hit")
                result = pickle.loads(redis_value)
                self.memory_cache[key] = result
                return result
            
            # Cache miss - compute
            print("Cache miss - computing")
            result = func(*args, **kwargs)
            
            # Store in both caches
            self.memory_cache[key] = result
            self.redis.setex(key, self.ttl, pickle.dumps(result))
            
            return result
        
        return wrapper

# Usage
redis_client = redis.Redis(host='localhost', port=6379, db=0)
cache_manager = MultiLevelCache(redis_client, ttl=3600)

@cache_manager.cache
def expensive_api_call(endpoint):
    print(f"Calling API: {endpoint}")
    time.sleep(1)
    return f"Response from {endpoint}"

# First call - cache miss
result = expensive_api_call("/users")

# Second call - memory cache hit
result = expensive_api_call("/users")

# After a process restart (memory cache empty), the first call
# is served from Redis instead of recomputing
result = expensive_api_call("/users")

Real-World Example: Web Scraper Cache

from functools import lru_cache
import re
import requests
from typing import Optional

class WebScraper:
    def __init__(self, cache_size=1000):
        # Per-instance cache: wrap the implementation method here so each
        # scraper gets its own cache with its own size
        self._fetch_page = lru_cache(maxsize=cache_size)(self._fetch_page_impl)
    
    def _fetch_page_impl(self, url: str) -> str:
        """Actual fetch implementation"""
        print(f"Fetching: {url}")
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.text
    
    def fetch_page(self, url: str) -> str:
        """Public method with caching"""
        return self._fetch_page(url)
    
    @staticmethod
    @lru_cache(maxsize=500)
    def extract_links(html: str) -> list:
        """Extract links from HTML (cached; static - no self in the key)"""
        print("Extracting links...")
        # Simplified link extraction
        return re.findall(r'href="(https?://[^"]+)"', html)
    
    @staticmethod
    @lru_cache(maxsize=200)
    def get_page_title(html: str) -> Optional[str]:
        """Extract page title (cached; static - no self in the key)"""
        print("Extracting title...")
        match = re.search(r'<title>(.+?)</title>', html)
        return match.group(1) if match else None
    
    def scrape_with_cache(self, url: str) -> dict:
        """Scrape page using all cached methods"""
        html = self.fetch_page(url)
        
        return {
            'url': url,
            'title': self.get_page_title(html),
            'links': self.extract_links(html),
            'size': len(html)
        }
    
    def get_cache_stats(self):
        """Get cache statistics"""
        return {
            'fetch_page': self._fetch_page.cache_info(),
            'extract_links': self.extract_links.cache_info(),
            'get_page_title': self.get_page_title.cache_info()
        }

# Usage
scraper = WebScraper(cache_size=1000)

# First scrape - fetches from network
data1 = scraper.scrape_with_cache("https://example.com")

# Second scrape - all from cache
data2 = scraper.scrape_with_cache("https://example.com")

# Check performance
stats = scraper.get_cache_stats()
print(f"Fetch cache: {stats['fetch_page']}")
print(f"Links cache: {stats['extract_links']}")
print(f"Title cache: {stats['get_page_title']}")

Performance Comparison

Function               Without Cache        With Cache                 Speedup
fibonacci(35)          5.2 seconds          0.0001 seconds             52,000x
API call (repeated)    200ms × 100 = 20s    200ms + 0ms × 99 = 0.2s    100x
DB query (repeated)    50ms × 1000 = 50s    50ms + 0ms × 999 = 0.05s   1,000x

Best Practices

  • Use @cache for unlimited cache: When results never change
  • Use @lru_cache for bounded cache: Production systems need memory limits
  • Choose appropriate maxsize: 128-512 for most cases, 1000+ for heavy use
  • Cache pure functions: Same inputs always return same outputs
  • Monitor cache stats: Use cache_info() to tune maxsize (see the sketch after this list)
  • Clear cache when needed: Call cache_clear() for data updates
  • Be careful with mutable arguments: Convert lists to tuples
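
For the monitoring point above, a tiny helper (hypothetical, not part of functools) makes under-sized caches easy to spot: a low hit ratio with a full cache suggests raising maxsize.

def report_cache(func):
    """Print hit ratio and fill level for any lru_cache-wrapped function"""
    info = func.cache_info()
    total = info.hits + info.misses
    ratio = info.hits / total if total else 0.0
    print(f"{func.__name__}: hit ratio {ratio:.1%}, "
          f"size {info.currsize}/{info.maxsize}")

# Example with the function from Pattern 2:
report_cache(expensive_computation)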

Common Pitfalls

  • Caching impure functions: Don't cache functions that depend on time, random, or external state (demonstrated below)
  • Memory leaks with @cache: Unbounded cache can consume all memory
  • Stale data: Cached data doesn't update automatically
  • Unhashable arguments: Lists and dicts can't be cache keys
  • Instance method caching: The class-level cache holds a reference to every instance unless you use staticmethod or a per-instance cache
  • Thread safety confusion: Built-in caches are thread-safe, custom ones may not be
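
A quick demonstration of the first pitfall: once a result that depends on external state is cached, later changes to that state are silently ignored.

from functools import cache

exchange_rate = 1.5  # External mutable state

@cache
def to_eur(usd):
    return usd * exchange_rate  # Impure: depends on a global - don't cache!

print(to_eur(100))   # 150.0
exchange_rate = 2.0
print(to_eur(100))   # Still 150.0 - stale cached result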

When to Use Caching

✅ Perfect for:

  • Expensive recursive algorithms (Fibonacci, graph traversal)
  • Repeated API calls with same parameters
  • Database queries that don't change often
  • Complex calculations used multiple times
  • Web scraping and data fetching

❌ Avoid caching:

  • Functions with side effects (logging, file I/O)
  • Functions that depend on current time
  • Functions that use random numbers
  • One-time calculations
  • Functions with rapidly changing data

Key Takeaways

  • @cache and @lru_cache provide automatic memoization with one line
  • Can achieve 100-50,000x speedups for repeated computations
  • LRU cache prevents memory issues with maxsize parameter
  • Cache statistics help tune cache size for optimal performance
  • Works with functions, static methods, and properties
  • Convert unhashable arguments (lists) to hashable (tuples)
  • Thread-safe by default—great for concurrent applications
  • Monitor with cache_info(), clear with cache_clear()

Function caching is Python's easiest performance optimization. A single decorator can transform exponential
algorithms into linear ones, eliminate redundant API calls, and turn slow applications into fast ones. The ROI
is immediate and dramatic—often the first optimization you should try.

