From: "Chen, Kenneth W" 2nd patch on top of wolfgang's patch. It's a compliment on top of initial attempt by wolfgang to solve the fragmentation problem. The code path in munmap is suboptimal and potentially worsen the fragmentation because with a series of munmap, the free_area_cache would point to last vma that was freed, ignoring its surrounding and not performing any coalescing at all, thus artificially create more holes in the virtual address space than necessary. Since all the information needed to perform coalescing are actually already there. This patch put that data in use so we will prevent artificial fragmentation. It covers both bottom-up and top-down topology. For bottom-up topology, free_area_cache points to prev->vm_end. And for top-down, free_area_cache points to next->vm_start. Signed-off-by: Ken Chen Acked-by: Ingo Molnar Cc: Wolfgang Wander Signed-off-by: Andrew Morton --- mm/mmap.c | 24 ++++++++++++++---------- 1 files changed, 14 insertions(+), 10 deletions(-) diff -puN mm/mmap.c~avoiding-mmap-fragmentation-fix-2 mm/mmap.c --- 25/mm/mmap.c~avoiding-mmap-fragmentation-fix-2 Mon May 23 16:10:32 2005 +++ 25-akpm/mm/mmap.c Mon May 23 16:10:32 2005 @@ -1217,14 +1217,11 @@ void arch_unmap_area(struct vm_area_stru /* * Is this a new hole at the lowest possible address? */ - if (area->vm_start >= TASK_UNMAPPED_BASE && - area->vm_start < area->vm_mm->free_area_cache) { - unsigned area_size = area->vm_end-area->vm_start; - - if (area->vm_mm->cached_hole_size < area_size) - area->vm_mm->cached_hole_size = area_size; - else - area->vm_mm->cached_hole_size = ~0UL; + unsigned long addr = (unsigned long) area->vm_private_data; + + if (addr >= TASK_UNMAPPED_BASE && addr < area->vm_mm->free_area_cache) { + area->vm_mm->free_area_cache = addr; + area->vm_mm->cached_hole_size = ~0UL; } } @@ -1317,8 +1314,10 @@ void arch_unmap_area_topdown(struct vm_a /* * Is this a new hole at the highest possible address? */ - if (area->vm_end > area->vm_mm->free_area_cache) - area->vm_mm->free_area_cache = area->vm_end; + unsigned long addr = (unsigned long) area->vm_private_data; + + if (addr > area->vm_mm->free_area_cache) + area->vm_mm->free_area_cache = addr; /* dont allow allocations above current base */ if (area->vm_mm->free_area_cache > area->vm_mm->mmap_base) @@ -1686,6 +1685,11 @@ detach_vmas_to_be_unmapped(struct mm_str } while (vma && vma->vm_start < end); *insertion_point = vma; tail_vma->vm_next = NULL; + if (mm->unmap_area == arch_unmap_area) + tail_vma->vm_private_data = (void*) prev->vm_end; + else + tail_vma->vm_private_data = vma ? + (void*) vma->vm_start : (void*) mm->mmap_base; mm->mmap_cache = NULL; /* Kill the cache. */ } _