Understanding /proc/meminfo
A practical guide to reading Linux memory stats
This is a scenario-based diagnostic guide. For a field-by-field reference, see /proc/meminfo reference.
Why /proc/meminfo?
You SSH into a server. Something is wrong — the application is slow, the monitoring dashboard is red, or an OOM kill just happened. Your first instinct is free -h, but that only shows a summary. The real data lives in /proc/meminfo.
$ cat /proc/meminfo
MemTotal: 16384000 kB
MemFree: 204560 kB
MemAvailable: 8741200 kB
Buffers: 182400 kB
Cached: 7892000 kB
SwapCached: 12400 kB
Active: 6420000 kB
Inactive: 5840000 kB
Active(anon): 3210000 kB
Inactive(anon): 1120000 kB
Active(file): 3210000 kB
Inactive(file): 4720000 kB
...
This file is generated by fs/proc/meminfo.c. Every field comes from kernel data structures — it is the ground truth for system memory state.
Scenario 1: "My application is slow — is it memory?"
This is the most common question. Before blaming memory, check two things:
Step 1: Is the system swapping?
If SwapTotal - SwapFree is large and growing, the system is actively using swap. That means page reclaim is pushing anonymous pages to disk — a major source of latency.
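The subtraction can be scripted directly; a minimal sketch with awk, reading the standard /proc/meminfo field names:

```shell
# Swap in use = SwapTotal - SwapFree (both reported in kB).
swap_used_kb=$(awk '/^SwapTotal:/ {t=$2} /^SwapFree:/ {f=$2} END {print t - f}' /proc/meminfo)
echo "swap in use: ${swap_used_kb} kB"
```

Run it twice a few minutes apart; a growing number is what matters, not the absolute value.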
Confirm with vmstat 1:
$ vmstat 1
procs -----------memory---------- ---swap--
r b swpd free buff cache si so
3 1 238400 204560 182400 7892000 40 120
2 2 239200 198000 182400 7890000 80 200
Non-zero si (swap in) and so (swap out) columns mean pages are actively moving between RAM and disk. This is the smoking gun for memory-induced slowness.
Step 2: Is MemAvailable dangerously low?
MemAvailable estimates how much memory is available for starting new applications without swapping. When this approaches zero, the system is under memory pressure. (The kernel itself uses zone watermarks, not a percentage threshold — see page reclaim.)
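A quick way to put a number on this, assuming a kernel new enough to report MemAvailable (3.14+):

```shell
# MemAvailable as a share of total RAM, the number worth alarming on.
avail_pct=$(awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2}
     END {printf "%.1f", 100 * a / t}' /proc/meminfo)
echo "MemAvailable: ${avail_pct}% of MemTotal"
```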
Step 3: Check for direct reclaim.
$ grep -E 'allocstall|pgscan_direct' /proc/vmstat
allocstall_dma 0
allocstall_normal 142
allocstall_movable 38
pgscan_direct 284500
Non-zero allocstall values mean processes had to stop and reclaim memory themselves (direct reclaim), which adds latency to every allocation. See Page Reclaim for details.
Verdict table:
| Symptom | Meaning |
|---|---|
| MemAvailable high, no swap activity | Memory is fine — look elsewhere |
| MemAvailable low, swap activity | Memory pressure — likely cause of slowness |
| MemAvailable low, no swap | Reclaim is working but tight — add RAM or reduce workload |
| Direct reclaim (allocstall) increasing | Allocations are stalling — serious pressure |
Scenario 2: "We're at 95% memory usage — should I panic?"
This is the most common misconception in Linux memory management. Monitoring tools typically compute memory usage as (MemTotal - MemFree) / MemTotal, which makes a busy server look almost full.
On a healthy, busy server, MemFree will be tiny. The kernel uses "free" memory for page cache — caching file data in RAM to speed up reads (see Page Cache). This is a good thing.
The number that matters is MemAvailable, not MemFree.
MemFree vs MemAvailable
MemTotal: 16384000 kB
MemFree: 204560 kB <-- 1.2% free — looks alarming
MemAvailable: 8741200 kB <-- 53% available — actually fine
MemFree is RAM that is completely unused — not even used for caching. On a well-utilized system, this should be small.
MemAvailable is an estimate of how much memory can be allocated without swapping. It accounts for:
- MemFree — obviously available
- Reclaimable page cache — file-backed pages that can be dropped instantly
- Reclaimable slab — kernel caches (like dentry/inode caches) that can be shrunk
The MemAvailable calculation was added by Rik van Riel in commit 34e431b0ae39 ("proc: provide MemAvailable in /proc/meminfo", Linux 3.14, 2014). Before this, sysadmins had to estimate what was reclaimable. The commit message explains the problem:
Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up "free" and "cached", which was fine ten years ago, but is pretty much guaranteed to be wrong today.
The kernel computes MemAvailable in si_mem_available() (historically in mm/page_alloc.c, moved in later kernels). The simplified logic:
available = MemFree - low watermark reserve
available += reclaimable page cache (with a safety margin)
available += reclaimable slab (with a safety margin)
The "safety margin" accounts for the fact that not all cache is reclaimable under pressure — the kernel keeps a low watermark of pages reserved.
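Plugging this page's healthy-server numbers into that simplified logic gives a ballpark figure. The low-watermark value below is purely illustrative, not a kernel value (the kernel derives real watermarks per zone):

```shell
# Illustrative si_mem_available()-style estimate, sample values from this page.
est_kb=$(awk 'BEGIN {
  memfree      = 204560              # MemFree, kB
  pagecache    = 3210000 + 4720000   # Active(file) + Inactive(file)
  sreclaimable = 480000              # SReclaimable
  wmark_low    = 65536               # hypothetical reserve, NOT a real kernel value
  avail  = memfree - wmark_low
  avail += pagecache - (pagecache/2 < wmark_low ? pagecache/2 : wmark_low)
  avail += sreclaimable - (sreclaimable/2 < wmark_low ? sreclaimable/2 : wmark_low)
  printf "%d", avail
}')
echo "estimated MemAvailable: ${est_kb} kB"
```

The result lands in the same ballpark as the reported MemAvailable of 8741200 kB; the gap comes from the made-up watermark and from details the sketch omits.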
Healthy vs Unhealthy: A Side-by-Side
Healthy web server (16 GB RAM):
MemTotal: 16384000 kB
MemFree: 204560 kB # Low — that's fine
MemAvailable: 8741200 kB # 53% — plenty of room
Buffers: 182400 kB
Cached: 7892000 kB # 7.5 GB of page cache — good
SwapTotal: 8388608 kB
SwapFree: 8388608 kB # No swap used
Active(anon): 3210000 kB # ~3 GB application memory
Active(file): 3210000 kB # Actively used cache
Inactive(file): 4720000 kB # Cache that can be reclaimed
This system looks "95% used" by naive metrics but has 8.7 GB available. The large Cached value means file I/O is fast. SwapFree equals SwapTotal — no swapping.
Unhealthy application server (16 GB RAM):
MemTotal: 16384000 kB
MemFree: 52000 kB # Very low
MemAvailable: 310000 kB # 1.9% — critical
Buffers: 14000 kB
Cached: 420000 kB # Almost no cache — reclaimed away
SwapTotal: 8388608 kB
SwapFree: 2100000 kB # 6 GB of swap used
Active(anon): 12800000 kB # ~12.2 GB — apps consuming almost everything
Active(file): 180000 kB # Almost no file cache
Inactive(file): 160000 kB # Nothing left to reclaim
This system is genuinely in trouble. Application memory (Active(anon)) has consumed almost everything. Page cache has been squeezed to almost nothing (file I/O will be slow). 6 GB is in swap. MemAvailable is under 2%.
Scenario 3: "Where is all my memory going?"
Break /proc/meminfo into categories:
MemTotal (16 GB)
│
┌───────────┼───────────┐
│ │ │
Application Kernel Page Cache
Memory Memory + Buffers
│ │ │
┌─────┴─────┐ │ ┌─────┴─────┐
│ │ │ │ │
Anon Anon Slab File File
Active Inactive Active Inactive
Application memory (anonymous pages)
$ grep -E 'Active\(anon\)|Inactive\(anon\)|AnonPages' /proc/meminfo
Active(anon): 3210000 kB
Inactive(anon): 1120000 kB
AnonPages: 4280000 kB
This is memory allocated by applications — heap, stack, anonymous mmap() regions. It has no file backing, so it can only be freed by the application releasing it or by swapping it out.
AnonPages counts the actual anonymous pages in use. Active(anon) + Inactive(anon) includes anonymous pages on LRU lists, which also counts shared memory and tmpfs pages.
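A small sketch that reads both counters side by side (the exact relationship varies by workload):

```shell
# AnonPages vs the anon LRU sum; the gap is mostly shmem/tmpfs pages,
# which live on the anon LRU lists but are not counted in AnonPages.
anon_summary=$(awk '/^AnonPages:/ {ap=$2}
     /^Active\(anon\):/ {aa=$2}
     /^Inactive\(anon\):/ {ia=$2}
     END {printf "AnonPages: %d kB, anon LRU: %d kB", ap, aa+ia}' /proc/meminfo)
echo "$anon_summary"
```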
Page cache (file-backed pages)
$ grep -E 'Active\(file\)|Inactive\(file\)|Cached|Buffers' /proc/meminfo
Buffers: 182400 kB
Cached: 7892000 kB
Active(file): 3210000 kB
Inactive(file): 4720000 kB
Cached is the total page cache size (minus SwapCached). Buffers is the page cache for block device inodes — I/O done against the block device itself, such as reading /dev/sda directly or filesystem metadata. It is a small subset of the unified page cache, not a separate structure.
Active(file) pages have been accessed recently and are less likely to be reclaimed. Inactive(file) pages are older and will be reclaimed first when memory is needed. A large Inactive(file) is a good reserve — it means there is memory that can be freed quickly.
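To see that reserve at a glance, a one-off awk read of the two file LRU counters:

```shell
# Inactive(file) is the part of the file cache that gets reclaimed first.
file_lru=$(awk '/^Active\(file\):/ {af=$2} /^Inactive\(file\):/ {inf=$2}
     END {printf "file LRU: %d kB total, %d kB reclaimed first", af+inf, inf}' /proc/meminfo)
echo "$file_lru"
```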
Kernel memory
$ grep -E 'Slab|SReclaimable|SUnreclaim|KernelStack|PageTables|VmallocUsed' /proc/meminfo
Slab: 620000 kB
SReclaimable: 480000 kB
SUnreclaim: 140000 kB
KernelStack: 24000 kB
PageTables: 86000 kB
VmallocUsed: 52000 kB
Slab is memory used by kernel object caches (see kmalloc (SLUB)). SReclaimable (dentry cache, inode cache) can be freed under pressure. SUnreclaim (task structs, network buffers) cannot.
KernelStack is kernel stacks for all threads. PageTables is page table memory — can grow large with many processes or large address spaces (see Page Tables). VmallocUsed tracks memory from vmalloc() (see vmalloc).
Shared memory and tmpfs
Shmem includes shared memory segments (shmget, shm_open), tmpfs, and mmap(MAP_SHARED|MAP_ANONYMOUS). This counts against Cached because it is file-backed (tmpfs). A common surprise: tmpfs files consume RAM, and they appear in Cached.
Mapped is the total amount of memory mapped into process address spaces with mmap(). This includes shared libraries, memory-mapped files, and shared memory.
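A sketch that splits Cached into its Shmem and droppable portions (field names as in /proc/meminfo):

```shell
# How much of Cached is really Shmem (tmpfs/shared memory, not droppable)?
cached_split=$(awk '/^Cached:/ {c=$2} /^Shmem:/ {s=$2}
     END {if (c > 0) printf "Cached: %d kB, of which Shmem: %d kB (%.1f%%)", c, s, 100*s/c
          else print "Cached: 0 kB"}' /proc/meminfo)
echo "$cached_split"
```

A large Shmem share means the "cache" is not really reclaimable — deleting tmpfs files is the only way to get that memory back.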
Scenario 4: "Is my page cache effective?"
An effective page cache reduces disk I/O. Examine the file-backed LRU lists:
$ grep -E 'Active\(file\)|Inactive\(file\)' /proc/meminfo
Active(file): 3210000 kB
Inactive(file): 4720000 kB
What to look for:
| Pattern | Meaning |
|---|---|
| Large Active(file) + large Inactive(file) | Healthy cache, plenty of cached file data |
| Large Active(file) + small Inactive(file) | Working set fits in RAM, little to reclaim |
| Small Active(file) + small Inactive(file) | Cache has been squeezed — app memory dominates |
| Small Active(file) + large Inactive(file) | Lots of cached data but low reuse (streaming reads) |
Monitor cache effectiveness over time:
# Watch page cache hit ratio via /proc/vmstat
$ grep -E 'pgpgin|pgpgout|pswpin|pswpout' /proc/vmstat
pgpgin 12483920 # KiB read from block devices (not pages)
pgpgout 8492100 # KiB written to block devices (not pages)
pswpin 0 # Pages swapped in (counted in pages, not KiB)
pswpout 0 # Pages swapped out (counted in pages, not KiB)
If pgpgin is low relative to your application's read rate, the cache is doing its job. If pswpin/pswpout are non-zero and growing, the system is swapping — cache is not the issue; you need more RAM.
Compare with cache pressure:
# Watch in real-time (values are cumulative counters)
$ watch -d -n 1 'grep -E "pgpgin|pgpgout|pswpin|pswpout|pgfault" /proc/vmstat'
A high pgfault count with low pgpgin means most faults are satisfied from cache (minor faults). A high pgpgin means data is being read from disk (major faults or cache misses).
Scenario 5: "Am I leaking memory?"
Memory leaks show up as a steady increase in anonymous memory over time with no corresponding increase in workload.
What to watch:
# Sample every 60 seconds
$ while true; do
echo "$(date): $(grep -E 'MemAvailable|AnonPages|Shmem|Slab' /proc/meminfo | tr '\n' ' ')"
sleep 60
done
Leak indicators:
- AnonPages steadily increasing — an application is allocating and not freeing
- MemAvailable steadily decreasing — available memory is being consumed
- Slab steadily increasing — kernel memory leak (less common, more serious)
- Shmem steadily increasing — shared memory or tmpfs growing without cleanup
Healthy pattern (stable workload):
14:00 AnonPages: 4280000 kB MemAvailable: 8741200 kB Slab: 620000 kB
14:10 AnonPages: 4310000 kB MemAvailable: 8720000 kB Slab: 621000 kB
14:20 AnonPages: 4275000 kB MemAvailable: 8750000 kB Slab: 619000 kB
14:30 AnonPages: 4290000 kB MemAvailable: 8735000 kB Slab: 620000 kB
Values fluctuate within a range but don't trend upward.
Leaking pattern:
14:00 AnonPages: 4280000 kB MemAvailable: 8741200 kB Slab: 620000 kB
14:10 AnonPages: 4520000 kB MemAvailable: 8500000 kB Slab: 620000 kB
14:20 AnonPages: 4780000 kB MemAvailable: 8240000 kB Slab: 620000 kB
14:30 AnonPages: 5050000 kB MemAvailable: 7970000 kB Slab: 620000 kB
AnonPages growing by ~250 MB every 10 minutes with MemAvailable dropping at the same rate — a classic userspace leak. Slab is stable, so the kernel is not the culprit.
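The growth rate follows from the trace's endpoints; a quick sanity check of the arithmetic:

```shell
# AnonPages grew from 4280000 kB to 5050000 kB over 30 minutes.
rate=$(awk 'BEGIN {printf "%.0f", (5050000 - 4280000) / 30}')
echo "leak rate: ${rate} kB/min (~25 MB/min)"
```

At that rate the remaining ~8 GB of MemAvailable is gone in roughly five hours.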
Finding the leaker:
# Sort processes by RSS
$ ps aux --sort=-%mem | head -10
# Or check per-process anonymous memory
$ for pid in $(ls /proc | grep '^[0-9]'); do
if [ -f /proc/$pid/status ]; then
name=$(grep Name /proc/$pid/status | awk '{print $2}')
anon=$(grep RssAnon /proc/$pid/status 2>/dev/null | awk '{print $2}')
[ -n "$anon" ] && [ "$anon" -gt 100000 ] && echo "$anon kB $name (pid $pid)"
fi
done 2>/dev/null | sort -rn | head -10
Field-by-Field Reference
All values are in kB. Source: fs/proc/meminfo.c.
Core fields
| Field | Description | Watch when... |
|---|---|---|
| MemTotal | Total usable RAM (minus kernel reserved) | Baseline — does not change |
| MemFree | Completely unused RAM | Low value is normal — kernel caches aggressively |
| MemAvailable | Estimated memory available without swapping | Primary metric for capacity planning |
| Buffers | Page cache for block device inodes (raw block I/O) | Usually small; counts toward reclaimable |
| Cached | Page cache (file data in RAM) + tmpfs | Includes Shmem; reclaimable portion is available |
| SwapCached | Swap pages also in RAM (avoids re-reading) | Non-zero means system swapped recently |
Anonymous memory
| Field | Description | Watch when... |
|---|---|---|
| Active(anon) | Recently accessed anonymous pages | Growing = app working set is growing |
| Inactive(anon) | Anonymous pages not recently accessed | Candidates for swap-out |
| AnonPages | Total anonymous pages mapped into page tables | Primary metric for application memory usage |
| AnonHugePages | Anonymous transparent huge pages | See Transparent Huge Pages |
File-backed memory
| Field | Description | Watch when... |
|---|---|---|
| Active(file) | Recently accessed file cache pages | High = working set is cached |
| Inactive(file) | File cache not recently accessed | Reclaimable reserve |
| Mapped | Files mmap'd into address spaces | Shared libraries, mmap'd data files |
| Shmem | Shared memory + tmpfs | Counts toward Cached — a common surprise |
Kernel memory
| Field | Description | Watch when... |
|---|---|---|
| Slab | Total kernel slab allocator memory | Growing without bound = kernel leak |
| SReclaimable | Slab that can be reclaimed (dentry, inode caches) | Counts toward MemAvailable |
| SUnreclaim | Slab that cannot be reclaimed | Network-heavy workloads can push this up |
| KernelStack | Kernel stacks for all threads | Grows with thread count |
| PageTables | Memory used for page tables | Grows with process count and address space size |
| VmallocUsed | vmalloc area usage | Kernel modules, some drivers |
| Percpu | Per-CPU allocations | Grows with CPU count |
Swap
| Field | Description | Watch when... |
|---|---|---|
| SwapTotal | Total swap space | Zero = no swap configured |
| SwapFree | Unused swap | SwapTotal - SwapFree = swap in use |
| SwapCached | Swapped pages still in RAM | Reduces cost of swap-in |
Misc
| Field | Description | Watch when... |
|---|---|---|
| Dirty | Pages modified but not yet written to disk | Large values = writeback is behind |
| Writeback | Pages actively being written to disk | Should be transient; sustained = slow I/O |
| CommitLimit | Max memory committable (mode 2 only) | See Memory Overcommit |
| Committed_AS | Total memory "promised" to processes | Exceeding CommitLimit triggers ENOMEM in mode 2 |
| HugePages_Total | Preallocated huge pages (2 MB) | Static allocation; not available for normal use |
| HugePages_Free | Unused huge pages | Wasted if allocated but not used |
| DirectMap4k / 2M / 1G | Kernel direct map page sizes | Relevant for TLB performance tuning |
Common Misconceptions
"MemFree is low, we're running out of memory."
No. The kernel intentionally uses free memory for page cache. MemAvailable is the metric you want. A system with 200 MB MemFree and 8 GB MemAvailable has plenty of memory.
"Cached memory is wasted." The opposite — cached memory speeds up file I/O. It is automatically reclaimed when applications need it. A system with no page cache has to read everything from disk.
"Buffers + Cached is the total cache."
Mostly true historically, but Cached includes Shmem (shared memory, tmpfs), which is not reclaimable cache. The real reclaimable amount is closer to Cached - Shmem + Buffers + SReclaimable. This is exactly why MemAvailable was added — so you don't have to compute this yourself.
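That back-of-envelope formula, scripted (prefer MemAvailable, which additionally subtracts watermark reserves):

```shell
# Approximate reclaimable cache: Cached - Shmem + Buffers + SReclaimable.
reclaimable_kb=$(awk '/^Cached:/ {c=$2} /^Shmem:/ {s=$2} /^Buffers:/ {b=$2}
     /^SReclaimable:/ {sr=$2} END {print c - s + b + sr}' /proc/meminfo)
echo "reclaimable estimate: ${reclaimable_kb} kB"
```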
"SwapCached means we're swapping."
Not necessarily. SwapCached means pages that were swapped out are also kept in RAM. The system swapped at some point, but if si/so in vmstat are zero, it is not actively swapping now. SwapCached avoids re-reading from swap if those pages are needed again.
"Adding up all process RSS gives total memory usage."
No. RSS double-counts shared pages (shared libraries, shared memory). Use PSS (Proportional Set Size) from /proc/<pid>/smaps_rollup for a fair per-process accounting, or look at AnonPages + Cached + Slab in /proc/meminfo for system-wide totals. See Virtual vs Physical vs Resident.
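A sketch that sums PSS system-wide, assuming a kernel with smaps_rollup (4.14+). Without root it can only read your own processes, so treat the total as a lower bound:

```shell
# Sum PSS across all readable processes; a fair total, unlike summed RSS.
total_pss_kb=0
for f in /proc/[0-9]*/smaps_rollup; do
  pss=$(awk '/^Pss:/ {print $2}' "$f" 2>/dev/null)
  [ -n "$pss" ] && total_pss_kb=$((total_pss_kb + pss))
done
echo "total PSS: ${total_pss_kb} kB"
```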
Quick-Reference Commands
# The essentials — one command
grep -E 'MemTotal|MemFree|MemAvailable|Cached|SwapTotal|SwapFree|AnonPages|Slab' /proc/meminfo
# Is the system swapping right now?
vmstat 1 5 # Watch si/so columns
# Memory pressure summary
cat /proc/pressure/memory # (Linux 4.20+, with PSI enabled)
# Drop page cache (for testing only — not a fix)
echo 3 > /proc/sys/vm/drop_caches
# Watch memory trends
watch -n 5 'grep -E "MemAvailable|AnonPages|Slab|Dirty" /proc/meminfo'
# Per-process memory usage (sorted)
ps aux --sort=-%mem | head -10
# Detailed per-process breakdown
cat /proc/<pid>/smaps_rollup
# Human-friendly summary (uses MemAvailable)
free -h
How free(1) Maps to /proc/meminfo
The free command reads /proc/meminfo and presents a summary:
total used free shared buff/cache available
Mem: 15Gi 4.1Gi 195Mi 870Mi 11Gi 8.3Gi
Swap: 8.0Gi 0B 8.0Gi
| free column | /proc/meminfo source |
|---|---|
| total | MemTotal |
| free | MemFree |
| shared | Shmem |
| buff/cache | Buffers + Cached + SReclaimable |
| available | MemAvailable |
| used | MemTotal - MemFree - Buffers - Cached - SReclaimable |
The available column is the one that matters for capacity decisions.
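As a sanity check, the used column can be recomputed from /proc/meminfo; this sketch assumes the modern procps formula (older versions of free computed it differently):

```shell
# Recompute free(1)'s "used" column from raw /proc/meminfo fields.
used_kb=$(awk '/^MemTotal:/ {t=$2} /^MemFree:/ {f=$2} /^Buffers:/ {b=$2}
     /^Cached:/ {c=$2} /^SReclaimable:/ {sr=$2}
     END {print t - f - b - c - sr}' /proc/meminfo)
echo "used: ${used_kb} kB"
```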
Further Reading
- fs/proc/meminfo.c — Where these numbers come from
- Commit 34e431b0ae39 — MemAvailable addition
- Page Cache — How file caching works
- Page Reclaim — What happens when memory is low
- Memory Overcommit — CommitLimit and Committed_AS explained
- Virtual vs Physical vs Resident — RSS, VSZ, and PSS
- Running out of memory — The path from low memory to OOM kill
- /proc/meminfo documentation — Official kernel docs