author     Geert Uytterhoeven <geert+renesas@glider.be>  2021-04-29 17:05:00 +0200
committer  Palmer Dabbelt <palmerdabbelt@google.com>  2021-05-06 09:40:12 -0700
commit     8db6f937f4e76d9dd23795311fc14f0a5c0ac119 (patch)
tree       4fc671a0bf9717eb421b2e053b7c8855190005ce
parent     939b7cbc00906b02c6eae6a380ad6c24c7a1e043 (diff)
download   linux-fpga-8db6f937f4e76d9dd23795311fc14f0a5c0ac119.tar.gz
riscv: Only extend kernel reservation if mapped read-only
When the kernel mapping was moved outside of the linear mapping, the kernel
memory reservation was increased to take into account mapping granularity.
However, this is done unconditionally, regardless of whether the kernel
memory is mapped read-only or not.

If this extension is not needed, up to 2 MiB may be lost, which has a big
impact on e.g. Canaan K210 (64-bit nommu) platforms with only 8 MiB of RAM.

Reclaim the lost memory by only extending the reserved region when needed,
i.e. depending on a simplified version of the conditional logic around the
call to protect_kernel_linear_mapping_text_rodata().

Fixes: 2bfc6cd81bd17e43 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Alexandre Ghiti <alex@ghiti.fr>
Signed-off-by: Palmer Dabbelt <palmerdabbelt@google.com>
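[Editor's note: a minimal userspace sketch, not part of the commit, of the PMD
round-up arithmetic the patch makes conditional. PMD_SIZE, the load address,
the kernel size, and the read-only flag below are assumed example values;
the Kconfig test is simulated with a plain variable.]

    #include <stdint.h>
    #include <stdio.h>

    #define PMD_SIZE  (2UL * 1024 * 1024)   /* assume 2 MiB PMD granularity */
    #define PMD_MASK  (~(PMD_SIZE - 1))

    int main(void)
    {
        uintptr_t vmlinux_start = 0x80200000UL;               /* hypothetical load address */
        uintptr_t vmlinux_end   = vmlinux_start + 0x654321UL; /* hypothetical _end */
        int kernel_mapped_read_only = 0;  /* stand-in for CONFIG_64BIT && CONFIG_STRICT_KERNEL_RWX */

        /* Old behaviour: the reservation size was always rounded up to PMD_SIZE. */
        uintptr_t old_size = (vmlinux_end - vmlinux_start + PMD_SIZE - 1) & PMD_MASK;

        /* New behaviour: round up vmlinux_end only when the kernel is mapped read-only. */
        uintptr_t end = vmlinux_end;
        if (kernel_mapped_read_only)
            end = (end + PMD_SIZE - 1) & PMD_MASK;
        uintptr_t new_size = end - vmlinux_start;

        printf("old reservation: %#lx bytes\n", (unsigned long)old_size);
        printf("new reservation: %#lx bytes\n", (unsigned long)new_size);
        printf("reclaimed:       %#lx bytes\n", (unsigned long)(old_size - new_size));
        return 0;
    }

With the read-only flag clear, the sketch reports a reclaimed amount just
under 2 MiB for this example size, matching the "up to 2 MiB may be lost"
figure in the commit message.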
-rw-r--r--  arch/riscv/mm/init.c  9
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index dfb5e4f7a67007..1ec98d97b67f7b 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -135,11 +135,16 @@ void __init setup_bootmem(void)
 	/*
 	 * Reserve from the start of the kernel to the end of the kernel
-	 * and make sure we align the reservation on PMD_SIZE since we will
+	 */
+#if defined(CONFIG_64BIT) && defined(CONFIG_STRICT_KERNEL_RWX)
+	/*
+	 * Make sure we align the reservation on PMD_SIZE since we will
 	 * map the kernel in the linear mapping as read-only: we do not want
 	 * any allocation to happen between _end and the next pmd aligned page.
 	 */
-	memblock_reserve(vmlinux_start, (vmlinux_end - vmlinux_start + PMD_SIZE - 1) & PMD_MASK);
+	vmlinux_end = (vmlinux_end + PMD_SIZE - 1) & PMD_MASK;
+#endif
+	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
 
 	/*
 	 * memblock allocator is not aware of the fact that last 4K bytes of