From: "Chen, Kenneth W" x86-64 may have to allocate a bunch of upper levels of pagetables, and those allocations may fail. When they do, unmap_hugepage_range() needs to be able to clean up after them. Acked-by: William Lee Irwin III Signed-off-by: Andrew Morton --- 25-akpm/arch/i386/mm/hugetlbpage.c | 7 +++++-- 1 files changed, 5 insertions(+), 2 deletions(-) diff -puN arch/i386/mm/hugetlbpage.c~x86_64-hugetlb-fix arch/i386/mm/hugetlbpage.c --- 25/arch/i386/mm/hugetlbpage.c~x86_64-hugetlb-fix 2005-02-28 16:41:32.000000000 -0800 +++ 25-akpm/arch/i386/mm/hugetlbpage.c 2005-02-28 16:41:32.000000000 -0800 @@ -209,14 +209,17 @@ void unmap_hugepage_range(struct vm_area { struct mm_struct *mm = vma->vm_mm; unsigned long address; - pte_t pte; + pte_t pte, *ptep; struct page *page; BUG_ON(start & (HPAGE_SIZE - 1)); BUG_ON(end & (HPAGE_SIZE - 1)); for (address = start; address < end; address += HPAGE_SIZE) { - pte = ptep_get_and_clear(huge_pte_offset(mm, address)); + ptep = huge_pte_offset(mm, address); + if (!ptep) + continue; + pte = ptep_get_and_clear(ptep); if (pte_none(pte)) continue; page = pte_page(pte); _