
Understanding /proc/meminfo

What every field means and how to use them for diagnostics

What is /proc/meminfo?

/proc/meminfo is the kernel's report of system memory state. It is generated by meminfo_proc_show() in fs/proc/meminfo.c, which reads from multiple kernel subsystems to produce a single snapshot.

$ cat /proc/meminfo
MemTotal:       16384000 kB
MemFree:          524288 kB
MemAvailable:    8912000 kB
Buffers:           65536 kB
Cached:          7168000 kB
SwapCached:        12288 kB
Active:          9216000 kB
Inactive:        5120000 kB
...

The Fields That Matter Most

MemTotal

Total usable RAM — physical RAM minus reserved areas (kernel code, BIOS holes, firmware).

  • Source: si_meminfo() returns totalram_pages(), set during boot.
  • Common mistake: MemTotal is not the installed RAM. A 16 GB machine typically shows a few hundred MB less.

MemFree

Pages sitting on the buddy allocator's free lists. Completely unused by anything.

  • Source: global_zone_page_state(NR_FREE_PAGES).
  • Common mistake: Low MemFree does not mean the system is out of memory. Linux aggressively uses free RAM for page cache. A healthy system often has very low MemFree. Watch MemAvailable instead.

MemAvailable

Estimate of how much memory is available for starting new applications without swapping. This is the single most useful field for "am I running out of memory?"

The algorithm:

  1. Start with free pages minus total reserves (NR_FREE_PAGES - totalreserve_pages)
  2. Add reclaimable file cache (Active(file) + Inactive(file)), subtracting min(pagecache/2, wmark_low) as a reserve
  3. Add reclaimable slab + misc reclaimable kernel memory, subtracting min(reclaimable/2, wmark_low) as a reserve
  4. Floor the result at zero

MemAvailable ≈ (MemFree - totalreserve_pages)
             + pagecache - min(pagecache/2, wmark_low)
             + reclaimable_slab - min(reclaimable_slab/2, wmark_low)

Each min() deducts the smaller of half the reclaimable pool or wmark_low (the sum of all zones' low watermarks) — this keeps the estimate conservative without being overly pessimistic.

Why this was needed: Applications used to estimate available memory as MemFree + Cached. This was wrong because Cached includes non-freeable memory (shmem/tmpfs) and excludes reclaimable slab.
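The steps above can be sketched with shell arithmetic. All values below are assumed sample numbers in kB — totalreserve_pages and wmark_low are not published in /proc/meminfo itself (on a live system they come from /proc/zoneinfo):

```shell
# Approximate MemAvailable from its components (all values in kB).
memfree=524288
reserve=65536          # totalreserve_pages (assumed sample)
wmark_low=32768        # sum of zone low watermarks (assumed sample)
active_file=4096000
inactive_file=2048000
sreclaimable=262144

pagecache=$((active_file + inactive_file))
pc_deduct=$((pagecache / 2))
if [ "$pc_deduct" -gt "$wmark_low" ]; then pc_deduct=$wmark_low; fi
sl_deduct=$((sreclaimable / 2))
if [ "$sl_deduct" -gt "$wmark_low" ]; then sl_deduct=$wmark_low; fi

avail=$((memfree - reserve + pagecache - pc_deduct + sreclaimable - sl_deduct))
if [ "$avail" -lt 0 ]; then avail=0; fi
echo "estimated MemAvailable: $avail kB"
```

With these sample inputs both deductions hit the wmark_low cap, which is the common case on systems with sizable caches.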

Page Cache Fields

Buffers

Page cache pages belonging to block device inodes — raw block device reads and writes (e.g., dd, direct partition I/O).

  • Source: nr_blockdev_pages() walks every block device inode and sums their cached pages.
  • Common mistake: "Buffers" is not a separate cache from the page cache. Since Linux 2.4, the buffer cache and page cache are unified. Buffers is the subset of page cache belonging to block device inodes. Usually small (tens of MB).

Cached

File-backed page cache pages, minus swap cache, minus buffers.

  • Source: Computed as NR_FILE_PAGES - total_swapcache_pages() - Buffers.
  • Common mistake: Cached includes shared memory (shmem/tmpfs). These pages are not reclaimable without swap — they have no disk backing. This is why MemFree + Cached overstates available memory.

To get the truly reclaimable file cache:

Reclaimable cache ≈ Cached - Shmem

SwapCached

Pages that exist in both RAM and swap simultaneously. When a swapped-out page is read back in, the kernel keeps the swap slot so it can be re-swapped cheaply without writing again.

  • Common mistake: SwapCached is not extra memory usage. These pages can be reclaimed instantly (no write needed). Large SwapCached means the system experienced memory pressure in the past and has since recovered.

LRU Fields

Active / Inactive

Pages on the LRU (Least Recently Used) lists. Active pages have been accessed recently; inactive pages have not.

Active   = Active(anon)   + Active(file)
Inactive = Inactive(anon) + Inactive(file)

Active(anon) / Inactive(anon)

Anonymous pages (heap, stack, private mmap) on the LRU. These have no file backing — the only way to reclaim them is to swap.

  • Growing Active(anon) indicates increasing application memory consumption.

Active(file) / Inactive(file)

File-backed pages on the LRU. These can be reclaimed by simply dropping them (re-read from disk if needed). This is the cheapest reclaim path.

  • These are what MemAvailable uses to estimate reclaimable cache.
  • If Active(file) + Inactive(file) is tiny on a system with GBs of RAM, the page cache has been cannibalized and disk reads will be slow.

Swap Fields

SwapTotal / SwapFree

Total configured swap space / currently unused swap space.

  • SwapTotal - SwapFree = swap in use.
  • Zero SwapFree with nonzero SwapTotal is a danger sign.
  • SwapTotal = 0 means no swap is configured — anonymous pages cannot be reclaimed at all.
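Swap in use falls out of these two fields directly. A sketch with sample values piped in (point awk at /proc/meminfo on a live system):

```shell
# Swap in use = SwapTotal - SwapFree; sample values stand in for /proc/meminfo
swap_used=$(printf 'SwapTotal: 4194304 kB\nSwapFree: 3145728 kB\n' |
  awk '/^SwapTotal/{t=$2} /^SwapFree/{f=$2} END{print t-f}')
echo "swap in use: $swap_used kB"
```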

Dirty / Writeback

Dirty

Pages in the page cache modified but not yet written to disk.

  • Source: NR_FILE_DIRTY.
  • High Dirty means data loss risk on crash. Sustained high Dirty suggests slow storage or too-generous dirty limits (vm.dirty_ratio).

Writeback

Pages currently being written to disk by writeback threads.

  • If both Dirty and Writeback are high, the storage subsystem is a bottleneck.

Process Memory Fields

AnonPages

Anonymous pages mapped into user page tables (not file-backed).

  • Source: NR_ANON_MAPPED.
  • Note: AnonPages does NOT equal Active(anon) + Inactive(anon). The LRU counters count all anon pages on the LRU; AnonPages counts only those currently mapped in at least one process's page tables.

Mapped

File-backed pages mapped into user page tables (shared libraries, mmap'd files, executable code).

  • Source: NR_FILE_MAPPED.
  • High Mapped relative to Cached means processes are actively using their file mappings.

Shmem

Total memory used by shared memory (shmem, tmpfs, POSIX shared memory, devtmpfs).

  • Source: NR_SHMEM.
  • Shmem is counted in Cached but is not reclaimable without swap.
  • Large Shmem on desktops may be GPU shared memory. On servers, tmpfs-heavy workloads (Redis, etc.) show up here.

Kernel Memory Fields

Slab / SReclaimable / SUnreclaim

Memory used by the kernel slab allocator's caches (SLUB by default since v2.6.23).

Slab = SReclaimable + SUnreclaim
  • SReclaimable: Kernel can free these under pressure (dentry cache, inode cache).
  • SUnreclaim: Cannot be freed (in-use kernel objects).
  • Growing SUnreclaim over time is a potential kernel memory leak. Check with slabtop or /proc/slabinfo.

KernelStack

Memory used for kernel thread stacks. Each thread gets a fixed-size stack (typically 16 kB on x86-64).

  • Source: NR_KERNEL_STACK_KB (reported directly in kB).
  • Proportional to thread count. Growing KernelStack with stable process count may indicate a thread leak.
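Because each stack is a fixed 16 kB on x86-64, dividing KernelStack by 16 gives a rough thread count — a sketch with an assumed sample reading:

```shell
# Rough thread count from KernelStack (x86-64: 16 kB per thread stack)
kernel_stack_kb=18736   # assumed sample; on a live system read it from /proc/meminfo
threads=$((kernel_stack_kb / 16))
echo "approx $threads kernel stacks"
```

Comparing this against your actual thread count over time is a cheap way to confirm a suspected thread leak.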

PageTables

Memory used for page tables mapping virtual to physical addresses for all processes.

  • Source: NR_PAGETABLE.
  • Grows with the total virtual address space mapped. Very high PageTables suggests processes with enormous sparse mmap regions.

Commit Accounting

CommitLimit

Maximum memory that can be committed under strict overcommit (vm.overcommit_memory = 2).

  • Formula: (totalram - hugetlb) * overcommit_ratio/100 + swap
  • Source: vm_commit_limit() in mm/util.c.
  • Only meaningful with strict overcommit (mode 2). Irrelevant under the default heuristic mode (mode 0).

Committed_AS

Total virtual memory allocated (committed) by all processes.

  • Common mistake: This is virtual, not physical. A process can commit 1 GB but only touch 10 MB. Under the default heuristic overcommit (mode 0), Committed_AS can exceed MemTotal + SwapTotal.
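Under strict overcommit, comparing the two commit fields shows how close the system is to refusing allocations. A sketch against sample values (substitute /proc/meminfo on a live system):

```shell
# Percentage of CommitLimit currently committed; sample values in kB
pct=$(printf 'CommitLimit: 12288000 kB\nCommitted_AS: 9216000 kB\n' |
  awk '/^CommitLimit/{l=$2} /^Committed_AS/{c=$2} END{printf "%.0f", c/l*100}')
echo "$pct% of CommitLimit committed"
```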

Huge Page Fields

HugePages_Total / HugePages_Free / Hugepagesize

Preallocated huge pages (hugetlbfs), their free count, and the default size (2 MB on x86-64).

  • Source: hugetlb_report_meminfo() in mm/hugetlb.c.
  • Common mistake: Preallocated huge pages are not counted in MemFree or MemAvailable. They are removed from the buddy allocator entirely. Allocating 4 GB of huge pages effectively reduces available memory by 4 GB for everything else.
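The memory removed from the general pool is simply HugePages_Total times Hugepagesize — a sketch with assumed sample values matching the 4 GB scenario above:

```shell
# RAM set aside for hugetlbfs, invisible to MemFree and MemAvailable
hp_total=2048        # HugePages_Total (assumed sample)
hp_size_kb=2048      # Hugepagesize in kB (2 MB pages on x86-64)
echo "huge pages reserve $((hp_total * hp_size_kb)) kB"
```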

Direct Map Fields

DirectMap4k / DirectMap2M / DirectMap1G

Amount of kernel direct-mapped memory using each page table entry size.

  • Source: arch_report_meminfo() in arch/x86/mm/pat/set_memory.c.
  • Ideally most memory is DirectMap1G or DirectMap2M for TLB efficiency. Large DirectMap4k means the kernel direct map has been fragmented (often by set_memory_* calls from drivers).

Does It All Add Up?

No. MemTotal does NOT equal the sum of all reported fields. The approximate decomposition:

MemTotal ≈ MemFree
         + Active + Inactive + Unevictable    (LRU-tracked pages)
         + Slab                                (SReclaimable + SUnreclaim)
         + KernelStack
         + PageTables
         + HugePages_Total * Hugepagesize      (if preallocated)
         + <unlabeled kernel allocations>

The gap comes from:
  • Kernel allocations not individually tracked in meminfo (percpu memory, vmalloc backing pages, kernel page tables)
  • Race conditions — meminfo_proc_show() reads counters without a single lock
  • CMA pages in MemFree that are restricted-use
  • Fields that intentionally overlap (Shmem is part of Cached; Mapped is a subset of Cached + AnonPages)
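The decomposition can be checked with awk: sum the labeled consumers and whatever MemTotal leaves unexplained is the unlabeled remainder. The snapshot below uses assumed sample values; run the same awk against /proc/meminfo for real data:

```shell
# Sum the labeled consumers and report the unexplained remainder (kB)
gap=$(printf '%s\n' \
  'MemTotal: 16384000 kB' 'MemFree: 524288 kB' \
  'Active: 9216000 kB' 'Inactive: 5120000 kB' 'Unevictable: 0 kB' \
  'Slab: 900000 kB' 'KernelStack: 18736 kB' 'PageTables: 120000 kB' |
  awk '/^MemTotal/{t=$2} /^MemFree/{f+=$2} /^Active:/{f+=$2}
       /^Inactive:/{f+=$2} /^Unevictable/{f+=$2} /^Slab/{f+=$2}
       /^KernelStack/{f+=$2} /^PageTables/{f+=$2}
       END{print t-f}')
echo "unlabeled kernel allocations: ~$gap kB"
```

Note the anchored patterns (Active: with the colon) so the per-type LRU lines like Active(anon) are not double-counted on a real meminfo.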

Diagnostic Patterns

Am I running out of memory?

Watch MemAvailable. If it approaches zero, the system is under memory pressure. Do not watch MemFree — it is normally near zero on a healthy system.

Secondary signals:
  • SwapFree dropping (swap being consumed)
  • Active(anon) growing
  • /proc/pressure/memory showing elevated some or full (v4.20+)

Is the page cache working?

Compare Active(file) + Inactive(file) with disk I/O rates (from iostat):

  • Large file cache + low disk reads = cache is effective
  • Small file cache + high disk reads = working set does not fit, cache is thrashing

How to spot a memory leak

Userspace leak:
  • AnonPages and Committed_AS growing over time
  • Per-process: watch VmRSS in /proc/<pid>/status or /proc/<pid>/smaps_rollup

Kernel leak:
  • SUnreclaim growing = slab leak (check slabtop)
  • VmallocUsed growing = vmalloc leak (note: this field was zeroed in v4.4 (commit a5ad88ce8c7f) and restored in v5.3 (commit 97105f0ab7b8); on kernels 4.4-5.2, check /proc/vmallocinfo instead)
  • KernelStack growing with stable process count = thread leak
  • PageTables growing without new processes = mapping leak
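A minimal sampler for the slab-leak case: take two SUnreclaim readings a fixed interval apart and report the delta. Here the two readings are assumed sample values rather than live reads:

```shell
# On a live system, take each reading with:
#   awk '/^SUnreclaim/{print $2}' /proc/meminfo
# with a sleep in between. Sustained positive deltas suggest a slab leak.
before=131072   # assumed first reading, kB
after=145408    # assumed second reading, kB
echo "SUnreclaim grew by $((after - before)) kB in the interval"
```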

What is Buffers vs Cached?

Field      What it is                                Typical size
Buffers    Block device page cache (raw block I/O)   Tens of MB
Cached     File data page cache + shmem              GB-scale

The historical "buffer cache vs page cache" distinction was eliminated in Linux 2.4. Today they are both the page cache; Buffers is the block-device subset.

Try It Yourself

# Watch memory state change in real time
watch -n 1 'grep -E "MemTotal|MemFree|MemAvailable|Cached|SwapFree|Dirty|Active\(|Inactive\(" /proc/meminfo'

# Quick health check: available vs total
awk '/MemTotal/{t=$2} /MemAvailable/{a=$2} END{printf "%.0f%% available\n", a/t*100}' /proc/meminfo

# Find where memory is going
awk '/^(MemFree|Active|Inactive|Slab|KernelStack|PageTables|Shmem|AnonPages):/' /proc/meminfo

# Truly reclaimable cache (excluding shmem)
awk '/^Cached/{c=$2} /^Shmem/{s=$2} END{print (c-s)" kB reclaimable file cache"}' /proc/meminfo

Further Reading