mm: mTHP stats for pagecache folio allocations

Expose 3 new mTHP stats for file (pagecache) folio allocations:
/sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_alloc
/sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback
/sys/kernel/mm/transparent_hugepage/hugepages-*kB/stats/file_fallback_charge

This will provide some insight into the sizes of large folios being
allocated for file-backed memory, and how often allocation fails.

All non-order-0 (and most order-0) folio allocations are currently done
through filemap_alloc_folio(), and folios are charged in a subsequent
call to filemap_add_folio(). So count file_fallback when allocation
fails in filemap_alloc_folio() and count file_alloc or
file_fallback_charge in filemap_add_folio(), based on whether charging
succeeded or not. There are some users of filemap_add_folio() that
allocate their own order-0 folio by other means, so we would not count
an allocation failure in this case, but we also don't care about order-0
allocations. This approach feels like it should be good enough and
doesn't require any (impractically large) refactoring.
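
To make this concrete, here is a condensed sketch of where the counting
lands, assuming the existing count_mthp_stat() helper and the three new
MTHP_STAT_FILE_* counters; the function bodies below are illustrative,
not the verbatim patch:

  static inline struct folio *filemap_alloc_folio(gfp_t gfp, unsigned int order)
  {
          struct folio *folio;

          folio = folio_alloc(gfp, order);
          if (!folio)
                  count_mthp_stat(order, MTHP_STAT_FILE_FALLBACK);

          return folio;
  }

  int filemap_add_folio(struct address_space *mapping, struct folio *folio,
                        pgoff_t index, gfp_t gfp)
  {
          int ret;

          ret = mem_cgroup_charge(folio, NULL, gfp);
          if (ret) {
                  /* Allocation succeeded but the memcg charge failed. */
                  count_mthp_stat(folio_order(folio),
                                  MTHP_STAT_FILE_FALLBACK_CHARGE);
                  return ret;
          }

          /* Allocation and charge both succeeded. */
          count_mthp_stat(folio_order(folio), MTHP_STAT_FILE_ALLOC);

          /* ... insert into the pagecache xarray as before ... */
          return __filemap_add_folio(mapping, folio, index, gfp, NULL);
  }
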
The existing mTHP stats interface is reused to provide consistency to
users. And because we are reusing the same interface, we can reuse the
same infrastructure on the kernel side. The one small wrinkle is that
the set of folio sizes supported by the pagecache is not identical to
those supported by anon and shmem; pagecache supports order-1, unlike
anon and shmem, and the max pagecache order may be less than PMD-size
(see arm64 with 64K base pages), again unlike anon and shmem. So we now
create a hugepages-*kB directory for the union of the sizes supported by
all 3 memory types and populate it with the relevant stats and controls.
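
To illustrate the union, here is a sketch in the style of the existing
THP_ORDERS_ALL_ANON bitmask in huge_mm.h; THP_ORDERS_ALL_FILE is an
assumed name for the pagecache set, capped at MAX_PAGECACHE_ORDER:

  /* Anon and shmem: orders 2 through PMD_ORDER. */
  #define THP_ORDERS_ALL_ANON  ((BIT(PMD_ORDER + 1) - 1) & ~(BIT(0) | BIT(1)))

  /*
   * Pagecache: orders 1 through MAX_PAGECACHE_ORDER, which may be less
   * than PMD_ORDER (assumed name for this mask).
   */
  #define THP_ORDERS_ALL_FILE  ((BIT(MAX_PAGECACHE_ORDER + 1) - 1) & ~BIT(0))

  /* hugepages-*kB sysfs directories are created for the union. */
  #define THP_ORDERS_ALL       (THP_ORDERS_ALL_ANON | THP_ORDERS_ALL_FILE)
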
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>