Commit 1c688757 authored by Ryan Roberts

mm/mmu_gather: generalize mmu_gather rmap removal mechanism



Commit 5df397de ("mm: delay page_remove_rmap() until after the TLB
has been flushed") added a mechanism whereby pages added to the
mmu_gather buffer could indicate whether they should also be removed
from the rmap. Then a call to the new tlb_flush_rmaps() API would
iterate through the buffer and remove each flagged page from the rmap.
This mechanism was intended for use with !PageAnon(page) pages only.
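
For reference, the flag-based walk works roughly as below (a minimal
sketch abridged from the mm/mmu_gather.c code that commit introduced;
only the rmap walk is shown, everything else is omitted):

	static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
					 struct vm_area_struct *vma)
	{
		for (int i = 0; i < batch->nr; i++) {
			struct encoded_page *enc = batch->encoded_pages[i];

			/* Low pointer bits flag pages for deferred removal. */
			if (encoded_page_flags(enc)) {
				struct page *page = encoded_page_ptr(enc);

				/* Deferred until after the TLB flush. */
				page_remove_rmap(page, vma, false);
			}
		}
	}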

Let's generalize this rmap removal mechanism so that any type of page
can be removed from the rmap. This is done as preparation for batching
rmap removals with folio_remove_rmap_range(), whereby we will pass a
contiguous range of pages belonging to the same folio to be removed in
one shot for a performance improvement.
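
(The batched API being prepared for is expected to take roughly the
following shape; this is an illustration only, the exact signature
belongs to the follow-up patch:)

	/*
	 * Remove the rmap for a contiguous range of nr pages, all of
	 * which must belong to the same folio. Illustrative prototype;
	 * see the follow-up patch for the real definition.
	 */
	void folio_remove_rmap_range(struct folio *folio, struct page *page,
				     int nr, struct vm_area_struct *vma);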

The mmu_gather now maintains a "pointer", consisting of a batch and an
index within that batch, which identifies the next page in the queue
that has not yet been removed from the rmap. tlb_discard_rmaps() resets
this "pointer" to the first empty location in the queue. Whenever
tlb_flush_rmaps() is called, every page from the "pointer" to the end
of the queue is removed from the rmap. Once the mmu is flushed
(tlb_flush_mmu()/tlb_finish_mmu()), any pending rmap removals are
discarded. This pointer mechanism ensures that tlb_flush_rmaps() only
has to walk the part of the queue for which rmap removal is pending,
avoiding the (potentially large) early portion of the queue for which
rmap removal has already been performed but for which tlb
invalidation/page freeing is still pending.
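
A minimal sketch of that bookkeeping is below; the field and helper
names are illustrative rather than a copy of the diff:

	struct mmu_gather {
		...
		/* Next queued page still pending rmap removal. */
		struct mmu_gather_batch	*rmap_batch;	/* its batch */
		unsigned int		rmap_idx;	/* index within it */
		...
	};

	/* Discard pending removals: point past the last queued page. */
	static void tlb_discard_rmaps(struct mmu_gather *tlb)
	{
		tlb->rmap_batch = tlb->active;
		tlb->rmap_idx = tlb->active->nr;
	}

	/* Remove every page from the "pointer" to the queue end. */
	void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
	{
		struct mmu_gather_batch *batch = tlb->rmap_batch;
		unsigned int i = tlb->rmap_idx;

		for (; batch; batch = batch->next, i = 0) {
			for (; i < batch->nr; i++) {
				struct page *page =
					encoded_page_ptr(batch->encoded_pages[i]);

				page_remove_rmap(page, vma, false);
			}
		}

		tlb_discard_rmaps(tlb);
	}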

tlb_flush_rmaps() must always be called under the same PTL as was used
to clear the corresponding PTEs. So in practice rmap removal will be
done in a batch for each PTE table, while the tlbi/freeing can continue
to be done in much bigger batches outside the PTL. See this example
flow:

tlb_gather_mmu()
	for each pte table {
		with ptl held {
			for each pte {
				tlb_remove_tlb_entry()
				__tlb_remove_page()
			}

			if (any removed pages require rmap removal after tlbi)
				tlb_flush_mmu_tlbonly()

			tlb_flush_rmaps()
		}

		if (full)
			tlb_flush_mmu()
	}
tlb_finish_mmu()

So this more general mechanism is no longer just for delaying rmap
removal until after tlbi, but can be used that way when required.

Note that s390 does not gather pages, but does immediate tlbi and page
freeing. In this case we continue to do the rmap removal page-by-page
without gathering them in the mmu_gather.
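
On those configurations (a minimal sketch, assuming the usual
CONFIG_MMU_GATHER_NO_GATHER=y arrangement on s390; the stub shapes are
illustrative), the batched entry points can simply compile away and
callers keep removing each rmap on the spot:

	#ifdef CONFIG_MMU_GATHER_NO_GATHER
	/*
	 * Nothing is ever queued, so there is never a pending removal
	 * to flush or discard; the caller does page_remove_rmap()
	 * page-by-page instead.
	 */
	static inline void tlb_flush_rmaps(struct mmu_gather *tlb,
					   struct vm_area_struct *vma) { }
	static inline void tlb_discard_rmaps(struct mmu_gather *tlb) { }
	#endif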

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
parent edfee547