How to Monitor Redis Memory


Nov 10, 2025 - 12:28


Redis is an in-memory data structure store, widely used for caching, real-time analytics, message brokering, and session storage. Its speed and simplicity make it a cornerstone of modern application architectures. However, because Redis operates entirely in RAM, memory usage becomes a critical performance and stability factor. Unlike disk-based databases, Redis has no fallback when memory is exhausted: out-of-memory (OOM) conditions can crash your instance, degrade response times, or trigger eviction policies that remove critical data.

Monitoring Redis memory isn't optional; it's essential. Without proper visibility into memory consumption patterns, you risk unexpected downtime, poor user experience, and costly infrastructure overprovisioning. This guide provides a comprehensive, step-by-step approach to monitoring Redis memory effectively. Whether you're managing a single instance or a large-scale cluster, understanding how Redis allocates, uses, and evicts memory empowers you to optimize performance, prevent failures, and scale efficiently.

Step-by-Step Guide

1. Understand How Redis Uses Memory

Before you can monitor Redis memory effectively, you must understand how it consumes RAM. Redis stores all data in memory, and each key-value pair incurs overhead beyond the raw data size. This includes:

  • Key overhead: Each key is stored as a string with metadata, typically consuming 30-50 bytes depending on length and encoding.
  • Value overhead: Values have internal structures (e.g., SDS for strings, ziplist or hashtable for hashes) that add memory cost.
  • Redis internals: The Redis database uses hash tables, linked lists, and other data structures for indexing and operations, which consume memory even when empty.
  • Memory fragmentation: Over time, memory allocation and deallocation can lead to fragmentation, where allocated memory is not contiguous, reducing usable space.

For example, a simple string key like user:12345:session with a value of active may appear small, but Redis may use over 150 bytes due to metadata, encoding, and alignment padding. This overhead compounds at scale: 1 million such keys could easily consume 150 MB even if your actual data is only 50 MB.
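To reason about this at scale, a rough back-of-envelope estimate helps. The helper below is a sketch, not a measurement: the per-key overhead figure is an assumption that you should calibrate against MEMORY USAGE on your own data.

```python
def estimated_footprint(n_keys: int, avg_payload: int, per_key_overhead: int = 100) -> int:
    """Rough total bytes: (payload + assumed per-key overhead) * key count."""
    return n_keys * (avg_payload + per_key_overhead)

# 1 million keys with ~50-byte values: ~150 MB total despite only ~50 MB of payload
print(estimated_footprint(1_000_000, avg_payload=50))
```

Adjust per_key_overhead upward for long key names or metadata-heavy encodings.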

2. Use the INFO MEMORY Command

The most direct way to inspect Redis memory usage is via the INFO MEMORY command. Connect to your Redis instance using the Redis CLI:

redis-cli INFO MEMORY

This returns a detailed set of memory-related metrics. Key fields include:

  • used_memory: Total number of bytes allocated by Redis using its allocator (typically jemalloc).
  • used_memory_human: Human-readable version of used_memory (e.g., 1.23G).
  • used_memory_rss: Resident Set Size, the total amount of physical memory the OS has allocated to the Redis process, including memory fragmentation and shared libraries.
  • used_memory_peak: Peak memory usage since Redis started. Useful for identifying memory spikes.
  • used_memory_peak_human: Human-readable peak memory usage.
  • mem_fragmentation_ratio: Ratio of used_memory_rss to used_memory. A ratio significantly above 1.5 suggests high fragmentation; below 1 suggests memory overcommit or swapping.
  • mem_allocator: The memory allocator in use (e.g., jemalloc, libc).

Example output:

used_memory:134217728
used_memory_human:128.00M
used_memory_rss:157286400
used_memory_peak:145234560
used_memory_peak_human:138.50M
mem_fragmentation_ratio:1.17
mem_allocator:jemalloc

Interpretation: Redis is using 128 MB of allocated memory, but the OS has allocated 150 MB. The fragmentation ratio of 1.17 is healthy (close to 1). The peak usage was slightly higher, indicating recent memory growth.
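When automating checks, you usually want these fields as numbers rather than text. A minimal parser for the key:value lines that INFO MEMORY returns might look like this (a sketch; it coerces numeric values and leaves the rest, such as mem_allocator, as strings):

```python
def parse_info_memory(raw: str) -> dict:
    """Parse 'key:value' lines from INFO MEMORY into a dict of typed values."""
    metrics = {}
    for line in raw.splitlines():
        line = line.strip()
        if not line or line.startswith('#') or ':' not in line:
            continue  # skip blanks and '# Memory' section headers
        key, _, value = line.partition(':')
        for cast in (int, float):
            try:
                metrics[key] = cast(value)
                break
            except ValueError:
                continue
        else:
            metrics[key] = value  # non-numeric, e.g. mem_allocator:jemalloc
    return metrics

sample = "used_memory:134217728\nmem_fragmentation_ratio:1.17\nmem_allocator:jemalloc"
m = parse_info_memory(sample)
print(m['used_memory'], m['mem_fragmentation_ratio'], m['mem_allocator'])
```

Feed it the output of redis-cli INFO MEMORY (or the INFO response from any client library) and the fields become directly comparable against thresholds.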

3. Monitor Memory Usage Over Time

One-time checks are insufficient. Memory usage trends reveal patterns that static snapshots miss. Set up periodic polling of INFO MEMORY using a script or monitoring agent.

Heres a simple Bash script that logs memory usage every 5 minutes:

#!/bin/bash
# Log Redis memory metrics to CSV every 5 minutes.
while true; do
    TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
    INFO=$(redis-cli INFO MEMORY)
    MEMORY=$(echo "$INFO" | grep "^used_memory_human" | cut -d: -f2 | tr -d ' \r')
    RSS=$(echo "$INFO" | grep "^used_memory_rss:" | cut -d: -f2 | tr -d ' \r')
    FRAG=$(echo "$INFO" | grep "^mem_fragmentation_ratio" | cut -d: -f2 | tr -d ' \r')
    echo "$TIMESTAMP, $MEMORY, $RSS, $FRAG" >> redis_memory_log.csv
    sleep 300
done

Save this as monitor_redis.sh, make it executable, and run it in the background:

chmod +x monitor_redis.sh
nohup ./monitor_redis.sh &

Log data can be imported into visualization tools like Grafana or Excel to plot memory trends over days or weeks. Look for:

  • Gradual increases (indicating memory leaks or growing datasets)
  • Sudden spikes (triggered by batch jobs or cache invalidation)
  • Consistent peaks near memory limits (signaling need for scaling)
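Once you have a series of logged samples, even simple heuristics catch the patterns above. A sketch (the spike factor is an assumption to tune for your workload):

```python
def find_spikes(samples, factor=1.5):
    """Indices where usage jumped more than `factor`x the previous sample."""
    return [i for i in range(1, len(samples))
            if samples[i] > samples[i - 1] * factor]

def is_steadily_growing(samples):
    """True when every sample >= the previous one (possible leak or dataset growth)."""
    return all(b >= a for a, b in zip(samples, samples[1:]))

usage_mb = [100, 104, 108, 290, 295]
print(find_spikes(usage_mb))          # [3]
print(is_steadily_growing(usage_mb))  # True
```

Run these over a sliding window of your CSV log to turn raw trends into actionable signals.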

4. Set Memory Limits with maxmemory

To prevent Redis from consuming all system RAM, configure a hard memory limit using the maxmemory directive in your redis.conf file:

maxmemory 2gb

Once Redis reaches this limit, it will begin evicting keys based on the policy defined by maxmemory-policy. Common policies include:

  • volatile-lru: Evict keys with an expire set, using LRU (Least Recently Used).
  • allkeys-lru: Evict any key, regardless of expiry, using LRU.
  • volatile-ttl: Evict keys with an expire set, prioritizing those with shortest TTL.
  • noeviction: Return errors on write operations (recommended for critical data).

For caching use cases, allkeys-lru is often ideal. For session storage with TTLs, volatile-lru or volatile-ttl works well. Avoid noeviction unless you're certain your dataset fits within the limit.

After editing redis.conf, restart Redis, or apply the limit at runtime without a restart (2147483648 bytes = 2gb):

redis-cli CONFIG SET maxmemory 2147483648

Always test memory limits in staging before applying to production.
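The CONFIG SET call above takes a raw byte count (2147483648 = 2gb). Redis's config file convention treats kb/mb/gb as powers of 1024 and k/m/g as powers of 1000; a small helper replicating that convention (a sketch of the documented convention, not Redis's actual parser) avoids arithmetic mistakes:

```python
import re

# redis.conf convention: 1k = 1000, 1kb = 1024, 1m = 1000^2, 1mb = 1024^2, ...
UNITS = {'': 1, 'b': 1, 'k': 1000, 'kb': 1024,
         'm': 1000**2, 'mb': 1024**2, 'g': 1000**3, 'gb': 1024**3}

def parse_memory_size(value: str) -> int:
    """Convert a redis.conf-style size string like '2gb' to bytes."""
    m = re.fullmatch(r'(\d+)\s*([a-z]*)', value.strip().lower())
    if not m or m.group(2) not in UNITS:
        raise ValueError(f'unparseable size: {value!r}')
    return int(m.group(1)) * UNITS[m.group(2)]

print(parse_memory_size('2gb'))  # 2147483648
```

Use it when generating CONFIG SET commands from human-readable settings.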

5. Analyze Key-Level Memory Usage

Knowing total memory is useful, but identifying which keys consume the most memory is critical for optimization. Use the MEMORY USAGE command to check individual key memory:

redis-cli MEMORY USAGE user:12345:session

This returns the number of bytes used by that specific key. Combine it with scripting to find the top memory consumers. Use redis-cli --scan rather than the blocking KEYS * command so the scan doesn't stall production traffic:

redis-cli --scan | while read key; do
    size=$(redis-cli MEMORY USAGE "$key")
    echo "$size $key"
done | sort -n -r | head -10

This script lists the 10 largest keys. Common culprits include:

  • Large serialized objects (e.g., JSON blobs stored as strings)
  • Hashes with hundreds of fields
  • Sorted sets with thousands of members
  • Lists with long elements

Optimization strategies:

  • Use Redis hashes for objects instead of serializing to JSON strings.
  • Break large lists into smaller chunks with prefixes (e.g., log:2024-05-01:1, log:2024-05-01:2).
  • Compress data before storing (e.g., gzip) if CPU overhead is acceptable.
  • Use Redis modules like RedisJSON for structured data with better memory efficiency.
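The compression strategy can be sketched in a few lines. This example uses zlib (the same DEFLATE family as gzip) and assumes JSON-serializable values; measure the CPU cost on your own payloads before adopting it:

```python
import json
import zlib

def pack(obj) -> bytes:
    """Serialize and compress a value before storing it with SET."""
    return zlib.compress(json.dumps(obj).encode('utf-8'))

def unpack(blob: bytes):
    """Decompress and deserialize a value retrieved with GET."""
    return json.loads(zlib.decompress(blob).decode('utf-8'))

doc = {'sku': 'A-100', 'description': 'widget ' * 200}
blob = pack(doc)
print(len(json.dumps(doc)), '->', len(blob), 'bytes')
assert unpack(blob) == doc
```

Compression pays off most for large, repetitive, infrequently accessed values; tiny keys may actually grow.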

6. Enable Memory Profiling with Built-In Diagnostics

Redis 4.0 and later include a built-in memory advisor, MEMORY DOCTOR. Run it to get automated diagnostics:

redis-cli MEMORY DOCTOR

Output example:

OK! Redis is not using too much memory. The current memory usage is 128MB, which is 10% of the available 1GB. Fragmentation ratio is 1.17, which is healthy.

For deeper analysis, use the MEMORY STATS command:

redis-cli MEMORY STATS

This returns granular statistics including:

  • Total allocated memory
  • Memory used by the dataset, buffers, replicas, etc.
  • Number of keys per database
  • Memory fragmentation ratio
  • The allocator's internal fragmentation

These stats help pinpoint whether memory is consumed by data, replication buffers, client connections, or internal overhead.

7. Monitor Eviction Events

If you've enabled a memory eviction policy, monitor when keys are evicted. Use CONFIG GET notify-keyspace-events to check whether keyspace event notifications are enabled:

redis-cli CONFIG GET notify-keyspace-events

If the result is empty, enable expired and evicted keyevent notifications (E = keyevent channel, x = expired events, e = evicted events):

redis-cli CONFIG SET notify-keyspace-events Exe

Then subscribe to the event channels (database 0 shown):

redis-cli SUBSCRIBE __keyevent@0__:expired __keyevent@0__:evicted

Alternatively, track the evicted_keys counter, which increments on every eviction:

redis-cli INFO stats | grep evicted_keys

High eviction rates indicate your maxmemory limit is too low or your data is growing faster than expected. Combine eviction logs with application metrics to correlate spikes with user behavior or batch jobs.
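Redis exposes a monotonically increasing evicted_keys counter in INFO stats, so the eviction rate is just the delta between two samples. A sketch:

```python
def evictions_per_minute(prev_count: int, curr_count: int, interval_seconds: float) -> float:
    """Eviction rate derived from two samples of the evicted_keys counter."""
    if interval_seconds <= 0:
        raise ValueError('interval must be positive')
    return (curr_count - prev_count) * 60.0 / interval_seconds

# 60 evictions over a 2-minute window -> 30 per minute
print(evictions_per_minute(1000, 1060, 120))  # 30.0
```

Sample the counter on the same schedule as your memory log so rates and usage line up.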

8. Integrate with System-Level Monitoring

Redis memory usage must be viewed in context of system memory. Use tools like top, htop, or free -h to monitor overall RAM usage:

top -p $(pgrep redis-server)

Watch for:

  • Redis process using >90% of system RAM
  • Swap usage increasing (indicates memory pressure)
  • High I/O wait (may indicate disk swapping)

Use Prometheus and Node Exporter to collect system metrics alongside Redis metrics. This allows correlation between Redis memory spikes and system load, helping identify root causes like memory leaks in application code or inefficient queries.

9. Automate Alerts

Manual monitoring is unsustainable. Set up automated alerts when memory thresholds are breached.

Example alert conditions:

  • used_memory > 80% of maxmemory → Warning
  • mem_fragmentation_ratio > 1.5 → Warning
  • evictions per minute > 10 → Critical
  • used_memory_rss > 95% of total system RAM → Critical
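These conditions are easy to encode as a function you can run against polled INFO values. The thresholds below mirror the list above and are starting points, not universal values:

```python
def check_thresholds(used, rss, frag_ratio, evictions_per_min, maxmemory, system_ram):
    """Return (severity, message) pairs for each breached threshold."""
    alerts = []
    if used > 0.80 * maxmemory:
        alerts.append(('warning', 'used_memory above 80% of maxmemory'))
    if frag_ratio > 1.5:
        alerts.append(('warning', 'fragmentation ratio above 1.5'))
    if evictions_per_min > 10:
        alerts.append(('critical', 'eviction rate above 10/min'))
    if rss > 0.95 * system_ram:
        alerts.append(('critical', 'RSS above 95% of system RAM'))
    return alerts

gb = 1024 ** 3
print(check_thresholds(used=1.8 * gb, rss=1.9 * gb, frag_ratio=1.1,
                       evictions_per_min=0, maxmemory=2 * gb, system_ram=16 * gb))
```

Wire the returned pairs into whatever notification channel you already use.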

Use monitoring platforms like Prometheus + Alertmanager, Datadog, or New Relic to define these alerts. For example, in Prometheus:

groups:
  - name: redis-memory
    rules:
      - alert: RedisMemoryHigh
        expr: redis_memory_used_bytes / redis_memory_max_bytes * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Redis memory usage is over 80% of limit"
          description: "Redis instance {{ $labels.instance }} is using {{ $value | printf \"%.2f\" }}% of its maxmemory limit. Consider scaling or optimizing keys."

(This is the YAML rule-file format used by Prometheus 2.x; the older ALERT ... IF syntax was removed in Prometheus 2.0.)

Alerts should trigger notifications via email, Slack, or PagerDuty, and ideally include links to dashboards with live metrics.

10. Plan for Scaling and Optimization

Monitoring reveals problems; planning prevents them. Use memory trends to forecast growth:

  • Calculate daily memory increase: (current_memory - memory_7_days_ago) / 7
  • Project when you'll hit maxmemory: (maxmemory - current_memory) / daily_increase
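The two formulas translate directly to code. A sketch, assuming memory samples in bytes taken a week apart:

```python
def days_until_limit(current: float, week_ago: float, maxmemory: float) -> float:
    """Project days until maxmemory is reached, from two samples 7 days apart."""
    daily_increase = (current - week_ago) / 7.0
    if daily_increase <= 0:
        return float('inf')  # flat or shrinking usage: no projected exhaustion
    return (maxmemory - current) / daily_increase

gb = 1024 ** 3
# Grew from 3 GB to 10 GB over a week; with a 24 GB limit, ~14 days of headroom
print(days_until_limit(10 * gb, 3 * gb, 24 * gb))  # 14.0
```

A linear projection is crude; treat it as an early warning, not a forecast.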

Based on projections:

  • Scale vertically: Increase RAM on the host (if using a single instance).
  • Scale horizontally: Implement Redis Cluster to distribute data across multiple nodes.
  • Optimize data: Remove stale keys, compress data, refactor data structures.
  • Use Redis Modules: RedisTimeSeries, RedisSearch, or RedisJSON for more efficient storage.

Always test scaling changes in a staging environment that mirrors production traffic patterns.

Best Practices

1. Set Realistic maxmemory Limits

Never allocate 100% of system RAM to Redis. Reserve 10-20% for the OS, background processes, and memory fragmentation. For example, on a 16GB server, set maxmemory to 12-14GB, not 16GB.

2. Use Expiration Policies Aggressively

Every cached key should have a TTL unless it's truly permanent. Even for session data, use TTLs of 1-24 hours. Avoid "forever" keys unless backed by persistent storage.

3. Avoid Large Keys

Keys larger than 1MB can block Redis during read/write operations, causing latency spikes. Break them into smaller chunks. Use Redis hashes for objects instead of serializing to JSON strings.

4. Regularly Audit and Clean Keys

Run KEYS * sparingly (it's blocking); use SCAN-based iteration for periodic audits. The following deletes every key matching a pattern, so verify the pattern carefully before running it, and prefer the non-blocking UNLINK over DEL:

redis-cli --scan --pattern "user:*" | xargs -L 1000 redis-cli UNLINK

Automate cleanup of orphaned or stale keys using application-level logic or scheduled jobs.

5. Monitor Client Connections

Each client connection consumes memory. Use CLIENT LIST to check active connections:

redis-cli CLIENT LIST

Look for long-lived idle connections. Implement connection pooling in your application to reduce overhead.
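CLIENT LIST returns one space-separated line of field=value pairs per connection, so idle connections can be picked out with a few lines of parsing. A sketch (the 300-second threshold is an assumption to tune):

```python
def idle_clients(client_list_output: str, idle_threshold: int = 300):
    """Return addresses of clients idle longer than idle_threshold seconds."""
    addrs = []
    for line in client_list_output.splitlines():
        fields = dict(pair.split('=', 1) for pair in line.split() if '=' in pair)
        if int(fields.get('idle', '0')) > idle_threshold:
            addrs.append(fields.get('addr'))
    return addrs

sample = (
    "id=3 addr=127.0.0.1:51002 name= age=900 idle=890 db=0 cmd=get\n"
    "id=4 addr=127.0.0.1:51003 name= age=20 idle=1 db=0 cmd=set"
)
print(idle_clients(sample))  # ['127.0.0.1:51002']
```

Long-idle clients found this way are candidates for pooling fixes or CLIENT KILL.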

6. Avoid Using Redis for Large Binary Data

Redis is not a file store. Storing images, videos, or large files increases memory pressure and slows down operations. Use object storage (S3, MinIO) and store only metadata in Redis.

7. Enable AOF and RDB for Recovery, Not Memory Efficiency

While persistence (AOF/RDB) ensures data durability, it does not reduce memory usage. In fact, AOF rewrite and RDB snapshotting can temporarily double memory consumption. Monitor disk I/O and memory usage during these operations.

8. Use Redis Cluster for Large Datasets

If your dataset exceeds 50-100GB, consider Redis Cluster. It distributes keys across multiple nodes, allowing horizontal scaling and better memory utilization.

9. Keep Redis Updated

Redis 6+ includes better memory management, including improved jemalloc integration and memory profiling tools. Older versions may have unpatched memory leaks or inefficiencies.

10. Document Your Memory Strategy

Ensure your team understands:

  • Which keys are critical and should not be evicted
  • What eviction policy is used and why
  • How memory limits were determined
  • How to respond to memory alerts

Documenting this prevents misconfigurations during on-call rotations or infrastructure changes.

Tools and Resources

Redis CLI

The built-in Redis command-line interface is your first line of defense. Master commands like INFO MEMORY, MEMORY USAGE, CLIENT LIST, and SCAN. Use redis-cli --bigkeys to find large keys without scripting.

Prometheus + Redis Exporter

The Redis Exporter exposes Redis metrics in Prometheus format. It automatically scrapes INFO output and converts it into time-series metrics like:

  • redis_memory_used_bytes
  • redis_memory_max_bytes
  • redis_mem_fragmentation_ratio
  • redis_evicted_keys_total

Integrate with Grafana to build dashboards with real-time memory usage graphs, eviction rate trends, and fragmentation alerts.

Grafana Dashboards

Use pre-built dashboards like Redis Overview (ID 763) or Redis Memory Usage (ID 1860) from Grafana's dashboard library. Customize them to highlight your key metrics.

Datadog and New Relic

Both offer native Redis integration with automatic metric collection, anomaly detection, and alerting. They correlate Redis memory usage with application performance (APM) data, helping identify if memory spikes are caused by specific endpoints or services.

RedisInsight

Redis's official GUI tool, RedisInsight, provides a visual memory analyzer. It shows key distribution, memory usage per database, and even heatmaps of key sizes. Ideal for developers and DevOps teams who prefer a UI over the CLI.

Redis Memory Analyzer (RMA)

A third-party tool that scans Redis databases and generates reports on key sizes, memory distribution, and optimization opportunities. Useful for large, complex datasets.

Scripting Libraries

Use Python, Node.js, or Go to automate monitoring:

  • Python: redis-py library
  • Node.js: redis npm package
  • Go: go-redis/redis

Example Python script to log memory usage:

import csv
import time

import redis  # redis-py

r = redis.Redis(host='localhost', port=6379, db=0)

with open('redis_memory.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    while True:
        info = r.info('memory')
        writer.writerow([
            time.strftime('%Y-%m-%d %H:%M:%S'),
            info['used_memory_human'],
            info['used_memory_rss'],
            info['mem_fragmentation_ratio'],
        ])
        f.flush()  # make each sample visible on disk immediately
        time.sleep(300)


Real Examples

Example 1: E-commerce Cache Overload

A retail platform used Redis to cache product catalog data. After a holiday sale, memory usage spiked from 8GB to 14GB within 24 hours, triggering OOM kills. Investigation revealed:

  • Product data was stored as JSON strings (5-10KB each)
  • 1.2 million keys were cached without TTL
  • Redis was configured with maxmemory 16GB and noeviction

Solution:

  • Changed storage format from JSON strings to Redis hashes (reduced size by 40%)
  • Added TTL of 2 hours to all product keys
  • Switched to allkeys-lru eviction policy
  • Set maxmemory 12GB to leave room for fragmentation

Result: Memory usage stabilized at 7-9GB, and the eviction rate dropped to a negligible level.

Example 2: Session Storage Memory Leak

A SaaS application stored user sessions in Redis. Over time, memory usage grew linearly, even with low active users. Monitoring showed:

  • 100,000+ session keys with TTL of 1 hour
  • But only 10,000 active sessions
  • Memory fragmentation ratio was 2.3

Root cause: The application created new sessions on each login but failed to delete old ones due to a bug in the logout logic, so stale session keys kept accumulating.

Solution:

  • Fixed application code to call DEL on logout
  • Added a background job to scan and delete keys older than 2 hours using SCAN
  • Restarted Redis to reset fragmentation

Result: Memory usage dropped from 18GB to 5GB, fragmentation ratio normalized to 1.1.

Example 3: Leaderboard with Sorted Sets

A gaming app used Redis sorted sets to maintain global leaderboards. Each leaderboard had 500,000+ entries. Memory usage exceeded 12GB.

Optimization:

  • Replaced single large sorted set with 10 smaller sets (e.g., by region)
  • Used ZADD with incremental updates instead of full rewrites
  • Implemented TTL of 24 hours on leaderboard keys
  • Switched from ziplist to skiplist encoding for better performance, trading some memory for faster access

Result: Memory usage reduced to 3.5GB, query latency improved from 80ms to 12ms.

Example 4: Fragmentation Nightmare

A financial service ran Redis on a VM with 32GB RAM. Memory usage was 18GB, but used_memory_rss was 30GB. Fragmentation ratio was 1.67.

Root cause: Frequent restarts and memory allocation/deallocation due to rapid scaling of worker processes.

Solution:

  • Bounded allocator churn by setting an explicit maxmemory and maxmemory-policy
  • Restarted Redis during low-traffic window to reset fragmentation
  • Upgraded to Redis 7.0 with improved jemalloc integration
  • Switched to dedicated Redis instances per service to reduce noise

Result: Fragmentation ratio dropped to 1.05, system became more predictable.

FAQs

What is the difference between used_memory and used_memory_rss?

used_memory is the total memory allocated by Redis's internal allocator (e.g., jemalloc) for storing data and structures. used_memory_rss is the actual physical memory the operating system has allocated to the Redis process, including fragmentation, shared libraries, and memory overhead. The difference between them indicates memory fragmentation or OS-level overhead.

How do I know if Redis is running out of memory?

Look for:

  • Memory usage exceeding 85-90% of maxmemory
  • High eviction rates (>10 evictions/minute)
  • Redis returning OOM command not allowed errors
  • System swap usage increasing
  • mem_fragmentation_ratio > 2.0

Can I reduce Redis memory usage without adding more RAM?

Yes. Optimize data structures (use hashes instead of strings), add TTLs to keys, remove unused keys, compress data, and use Redis modules like RedisJSON or RedisTimeSeries for better efficiency. Also, ensure your application doesn't create duplicate or stale keys.

Why is mem_fragmentation_ratio so high?

High fragmentation (above 1.5) typically occurs due to frequent memory allocation and deallocation, common with short-lived keys, restarts, or large key updates. Restarting Redis can reset fragmentation. Using jemalloc (the default) and avoiding large in-place key modifications helps prevent it.
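The fragmentation rules of thumb used throughout this guide collapse into one small helper (the band boundaries are the conventional guidelines, not hard limits):

```python
def classify_fragmentation(ratio: float) -> str:
    """Interpret mem_fragmentation_ratio using the usual rule-of-thumb bands."""
    if ratio < 1.0:
        return 'possible swapping or memory overcommit'
    if ratio <= 1.5:
        return 'healthy'
    return 'high fragmentation'

print(classify_fragmentation(1.17))  # healthy
print(classify_fragmentation(2.3))   # high fragmentation
```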

Should I use Redis Cluster for memory management?

Redis Cluster is ideal if your dataset exceeds 50GB or if you need high availability. It distributes memory across nodes, preventing single-instance memory exhaustion. However, for smaller datasets, vertical scaling (more RAM) is simpler and more cost-effective.

How often should I check Redis memory usage?

For production systems, monitor continuously with tools like Prometheus. For manual checks, review metrics daily during peak hours. If you're experiencing instability, check every 15-30 minutes until the issue is resolved.

Does Redis compression reduce memory usage?

Redis doesn't natively compress data. However, you can compress values (e.g., using gzip) before storing them as strings. This reduces memory usage but increases CPU load. Use it for large, infrequently accessed data like logs or reports.

What happens if I don't set maxmemory?

Redis will consume all available system RAM. This can cause the OS to kill the Redis process via OOM killer, leading to downtime. Always set a reasonable maxmemory limit.

Can I monitor Redis memory remotely?

Yes. Use the Redis CLI over TCP: redis-cli -h your-redis-host -p 6379 INFO MEMORY. Or use exporters like redis_exporter with Prometheus to scrape metrics from any network-accessible Redis instance.

Conclusion

Monitoring Redis memory is not a one-time task; it's an ongoing discipline essential to maintaining application performance, stability, and scalability. Redis's in-memory nature makes it fast, but also fragile under memory pressure. Without proper monitoring, you risk silent degradation, unexpected crashes, and costly infrastructure overprovisioning.

This guide has provided a complete roadmap: from understanding how Redis allocates memory, to using INFO MEMORY and MEMORY USAGE, to setting alerts, optimizing data structures, and integrating with enterprise monitoring tools. Real-world examples demonstrate how memory issues manifest and how they're resolved with practical, actionable steps.

Remember: the goal isn't just to watch memory; it's to understand why it changes, anticipate growth, and act before problems occur. Combine automated monitoring with proactive optimization, and you'll transform Redis from a potential liability into a resilient, high-performance engine that scales with your business.

Start today. Run redis-cli INFO MEMORY. Log it. Set a threshold. Alert on it. Optimize one key. And repeat. Your users, and your infrastructure, will thank you.