author    Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2016-10-24 10:36:24 +0200
committer Sebastian Andrzej Siewior <bigeasy@linutronix.de>  2016-10-24 10:36:24 +0200
commit    3736188f60e006392918de3b181841ce49f5658d
tree      4c29bf371c1c1b6e08b537669cad89d5020f8509
parent    f17a87a2e45d102b048049c8d6b11a73c34d1a83
download  4.9-rt-patches-3736188f60e006392918de3b181841ce49f5658d.tar.gz
[ANNOUNCE] 4.8.2-rt3
Dear RT folks!
I'm pleased to announce the v4.8.2-rt3 patch set.
Changes since v4.8.2-rt2:
- The connector subsystem could sleep in invalid context. Found and
fixed by Mike Galbraith
- zram / zcomp has shown new warnings. The warnings have been
addressed and an old error fixed (Mike Galbraith)
- The ftrace header was slightly misaligned and the ASCII arrows were
  pointing in the wrong direction (Mike Galbraith)
- On CPU-down (CPU hotplug) we could attempt to sleep in the wrong
  context (Mike Galbraith)
- Removed an unused static variable in RxRPC (noticed by kbuild test
  robot)
- ifdefed a variable in APIC so we don't get this "unused variable"
warning on certain configurations (noticed by kbuild test robot)
- Added `-fno-PIE' to the Makefile. This breaks gcc 3.2. Is someone
  here still using it?
- Fixed docbook in two places (noticed by kbuild test robot)
- Fixed compile on sparc which was broken after I moved RCU headers
(noticed by kbuild test robot)
- The kbuild test robot sent me a warning about sleeping in invalid
  context in the NFS4 code. I didn't manage to reproduce it myself
  but the warning is valid. I attempted to fix it and will wait for
  the robot's feedback :)
- Lazy preempt was broken on x86-32. Fixed by Paul Gortmaker.
Known issues
- CPU hotplug got a little better but can deadlock.
The delta patch against 4.8.2-rt2 is appended below and can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.8/incr/patch-4.8.2-rt2-rt3.patch.xz
You can get this release via the git tree at:
git://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git v4.8.2-rt3
The RT patch against 4.8.2 can be found here:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.8/patch-4.8.2-rt3.patch.xz
The split quilt queue is available at:
https://cdn.kernel.org/pub/linux/kernel/projects/rt/4.8/patches-4.8.2-rt3.tar.xz
Sebastian
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
17 files changed, 583 insertions, 50 deletions
diff --git a/patches/NFSv4-replace-seqcount_t-with-a-seqlock_t.patch b/patches/NFSv4-replace-seqcount_t-with-a-seqlock_t.patch new file mode 100644 index 0000000000000..ba0e7dabf92c3 --- /dev/null +++ b/patches/NFSv4-replace-seqcount_t-with-a-seqlock_t.patch @@ -0,0 +1,135 @@ +From 270706631839c60dfb6ed7ac7c1c7dde4cc1092f Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +Date: Fri, 21 Oct 2016 18:07:24 +0200 +Subject: [RFC PATCH] NFSv4: replace seqcount_t with a seqlock_t + +The raw_write_seqcount_begin() in nfs4_reclaim_open_state() bugs me +because it maps to preempt_disable() in -RT which I can't have at this +point. So I took a look at the code. +It the lockdep part was removed in commit abbec2da13f0 ("NFS: Use +raw_write_seqcount_begin/end int nfs4_reclaim_open_state") because +lockdep complained. The whole seqcount thing was introduced in commit +c137afabe330 ("NFSv4: Allow the state manager to mark an open_owner as +being recovered"). +I don't understand how it is possible that we don't end up with two +writers for the same resource because the `sp->so_lock' lock is dropped +is soon in the list_for_each_entry() loop. It might be the +test_and_clear_bit() check in nfs4_do_reclaim() but it might clear one +bit on each iteration so I *think* we could have two invocations of the +same struct nfs4_state_owner in nfs4_reclaim_open_state(). +So there is that. + +But back to the list_for_each_entry() macro. +It seems that this `so_lock' lock protects the ->so_states list among +other atomic_t & flags members. So at the begin of the loop we inc ->count +ensuring that this field is not removed while we use it. So we drop the +->so_lock lock during the loop. And after nfs4_reclaim_locks() invocation we +nfs4_put_open_state() and then grab the ->so_lock again. So if we were the last +user of this struct and we remove it, then the following list_next_entry() +invocation is a use-after-free. 
Even if we use list_for_each_entry_safe() there +is no guarantee that the following member is still valid because it might have +been removed by something that invoked nfs4_put_open_state(), right? +So there is this. + +However to address my initial problem I have here a patch :) So it uses +a seqlock_t which ensures that there is only one writer at a time. So it +should be basically what is happening now plus a tiny tiny tiny lock +plus lockdep coverage. I tried to test this myself but I don't manage to get +into this code path at all so I might be doing something wrong. + +Could you please check if this patch is working for you and whether my +list_for_each_entry() observation is correct or not? + +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + fs/nfs/delegation.c | 4 ++-- + fs/nfs/nfs4_fs.h | 2 +- + fs/nfs/nfs4proc.c | 4 ++-- + fs/nfs/nfs4state.c | 10 ++++------ + 4 files changed, 9 insertions(+), 11 deletions(-) + +--- a/fs/nfs/delegation.c ++++ b/fs/nfs/delegation.c +@@ -140,11 +140,11 @@ static int nfs_delegation_claim_opens(st + sp = state->owner; + /* Block nfs4_proc_unlck */ + mutex_lock(&sp->so_delegreturn_mutex); +- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount); ++ seq = read_seqbegin(&sp->so_reclaim_seqlock); + err = nfs4_open_delegation_recall(ctx, state, stateid, type); + if (!err) + err = nfs_delegation_claim_locks(ctx, state, stateid); +- if (!err && read_seqcount_retry(&sp->so_reclaim_seqcount, seq)) ++ if (!err && read_seqretry(&sp->so_reclaim_seqlock, seq)) + err = -EAGAIN; + mutex_unlock(&sp->so_delegreturn_mutex); + put_nfs_open_context(ctx); +--- a/fs/nfs/nfs4_fs.h ++++ b/fs/nfs/nfs4_fs.h +@@ -107,7 +107,7 @@ struct nfs4_state_owner { + unsigned long so_flags; + struct list_head so_states; + struct nfs_seqid_counter so_seqid; +- seqcount_t so_reclaim_seqcount; ++ seqlock_t so_reclaim_seqlock; + struct mutex so_delegreturn_mutex; + }; + +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -2525,7 +2525,7 @@ static 
int _nfs4_open_and_get_state(stru + unsigned int seq; + int ret; + +- seq = raw_seqcount_begin(&sp->so_reclaim_seqcount); ++ seq = raw_seqcount_begin(&sp->so_reclaim_seqlock.seqcount); + + ret = _nfs4_proc_open(opendata); + if (ret != 0) +@@ -2561,7 +2561,7 @@ static int _nfs4_open_and_get_state(stru + ctx->state = state; + if (d_inode(dentry) == state->inode) { + nfs_inode_attach_open_context(ctx); +- if (read_seqcount_retry(&sp->so_reclaim_seqcount, seq)) ++ if (read_seqretry(&sp->so_reclaim_seqlock, seq)) + nfs4_schedule_stateid_recovery(server, state); + } + out: +--- a/fs/nfs/nfs4state.c ++++ b/fs/nfs/nfs4state.c +@@ -488,7 +488,7 @@ nfs4_alloc_state_owner(struct nfs_server + nfs4_init_seqid_counter(&sp->so_seqid); + atomic_set(&sp->so_count, 1); + INIT_LIST_HEAD(&sp->so_lru); +- seqcount_init(&sp->so_reclaim_seqcount); ++ seqlock_init(&sp->so_reclaim_seqlock); + mutex_init(&sp->so_delegreturn_mutex); + return sp; + } +@@ -1459,8 +1459,8 @@ static int nfs4_reclaim_open_state(struc + * recovering after a network partition or a reboot from a + * server that doesn't support a grace period. 
+ */ ++ write_seqlock(&sp->so_reclaim_seqlock); + spin_lock(&sp->so_lock); +- raw_write_seqcount_begin(&sp->so_reclaim_seqcount); + restart: + list_for_each_entry(state, &sp->so_states, open_states) { + if (!test_and_clear_bit(ops->state_flag_bit, &state->flags)) +@@ -1525,14 +1525,12 @@ static int nfs4_reclaim_open_state(struc + spin_lock(&sp->so_lock); + goto restart; + } +- raw_write_seqcount_end(&sp->so_reclaim_seqcount); + spin_unlock(&sp->so_lock); ++ write_sequnlock(&sp->so_reclaim_seqlock); + return 0; + out_err: + nfs4_put_open_state(state); +- spin_lock(&sp->so_lock); +- raw_write_seqcount_end(&sp->so_reclaim_seqcount); +- spin_unlock(&sp->so_lock); ++ write_sequnlock(&sp->so_reclaim_seqlock); + return status; + } + diff --git a/patches/connector-cn_proc-Protect-send_msg-with-a-local-lock.patch b/patches/connector-cn_proc-Protect-send_msg-with-a-local-lock.patch new file mode 100644 index 0000000000000..f91af26e904cd --- /dev/null +++ b/patches/connector-cn_proc-Protect-send_msg-with-a-local-lock.patch @@ -0,0 +1,67 @@ +From: Mike Galbraith <umgwanakikbuti@gmail.com> +Date: Sun, 16 Oct 2016 05:11:54 +0200 +Subject: [PATCH] connector/cn_proc: Protect send_msg() with a local lock + on RT + +|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931 +|in_atomic(): 1, irqs_disabled(): 0, pid: 31807, name: sleep +|Preemption disabled at:[<ffffffff8148019b>] proc_exit_connector+0xbb/0x140 +| +|CPU: 4 PID: 31807 Comm: sleep Tainted: G W E 4.8.0-rt11-rt #106 +|Call Trace: +| [<ffffffff813436cd>] dump_stack+0x65/0x88 +| [<ffffffff8109c425>] ___might_sleep+0xf5/0x180 +| [<ffffffff816406b0>] __rt_spin_lock+0x20/0x50 +| [<ffffffff81640978>] rt_read_lock+0x28/0x30 +| [<ffffffff8156e209>] netlink_broadcast_filtered+0x49/0x3f0 +| [<ffffffff81522621>] ? 
__kmalloc_reserve.isra.33+0x31/0x90 +| [<ffffffff8156e5cd>] netlink_broadcast+0x1d/0x20 +| [<ffffffff8147f57a>] cn_netlink_send_mult+0x19a/0x1f0 +| [<ffffffff8147f5eb>] cn_netlink_send+0x1b/0x20 +| [<ffffffff814801d8>] proc_exit_connector+0xf8/0x140 +| [<ffffffff81077f71>] do_exit+0x5d1/0xba0 +| [<ffffffff810785cc>] do_group_exit+0x4c/0xc0 +| [<ffffffff81078654>] SyS_exit_group+0x14/0x20 +| [<ffffffff81640a72>] entry_SYSCALL_64_fastpath+0x1a/0xa4 + +Since ab8ed951080e ("connector: fix out-of-order cn_proc netlink message +delivery") which is v4.7-rc6. + +Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + drivers/connector/cn_proc.c | 6 ++++-- + 1 file changed, 4 insertions(+), 2 deletions(-) + +--- a/drivers/connector/cn_proc.c ++++ b/drivers/connector/cn_proc.c +@@ -32,6 +32,7 @@ + #include <linux/pid_namespace.h> + + #include <linux/cn_proc.h> ++#include <linux/locallock.h> + + /* + * Size of a cn_msg followed by a proc_event structure. 
Since the +@@ -54,10 +55,11 @@ static struct cb_id cn_proc_event_id = { + + /* proc_event_counts is used as the sequence number of the netlink message */ + static DEFINE_PER_CPU(__u32, proc_event_counts) = { 0 }; ++static DEFINE_LOCAL_IRQ_LOCK(send_msg_lock); + + static inline void send_msg(struct cn_msg *msg) + { +- preempt_disable(); ++ local_lock(send_msg_lock); + + msg->seq = __this_cpu_inc_return(proc_event_counts) - 1; + ((struct proc_event *)msg->data)->cpu = smp_processor_id(); +@@ -70,7 +72,7 @@ static inline void send_msg(struct cn_ms + */ + cn_netlink_send(msg, 0, CN_IDX_PROC, GFP_NOWAIT); + +- preempt_enable(); ++ local_unlock(send_msg_lock); + } + + void proc_fork_connector(struct task_struct *task) diff --git a/patches/cpumask-disable-offstack-on-rt.patch b/patches/cpumask-disable-offstack-on-rt.patch index 3aec33b0e2249..1e9f5348a2b7f 100644 --- a/patches/cpumask-disable-offstack-on-rt.patch +++ b/patches/cpumask-disable-offstack-on-rt.patch @@ -2,8 +2,41 @@ Subject: cpumask: Disable CONFIG_CPUMASK_OFFSTACK for RT From: Thomas Gleixner <tglx@linutronix.de> Date: Wed, 14 Dec 2011 01:03:49 +0100 -We can't deal with the cpumask allocations which happen in atomic -context (see arch/x86/kernel/apic/io_apic.c) on RT right now. 
+There are "valid" GFP_ATOMIC allocations such as + +|BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:931 +|in_atomic(): 1, irqs_disabled(): 0, pid: 2130, name: tar +|1 lock held by tar/2130: +| #0: (&mm->mmap_sem){++++++}, at: [<ffffffff811d4e89>] SyS_brk+0x39/0x190 +|Preemption disabled at:[<ffffffff81063048>] flush_tlb_mm_range+0x28/0x350 +| +|CPU: 1 PID: 2130 Comm: tar Tainted: G W 4.8.2-rt2+ #747 +|Call Trace: +| [<ffffffff814d52dc>] dump_stack+0x86/0xca +| [<ffffffff810a26fb>] ___might_sleep+0x14b/0x240 +| [<ffffffff819bc1d4>] rt_spin_lock+0x24/0x60 +| [<ffffffff81194fba>] get_page_from_freelist+0x83a/0x11b0 +| [<ffffffff81195e8b>] __alloc_pages_nodemask+0x15b/0x1190 +| [<ffffffff811f0b81>] alloc_pages_current+0xa1/0x1f0 +| [<ffffffff811f7df5>] new_slab+0x3e5/0x690 +| [<ffffffff811fb0d5>] ___slab_alloc+0x495/0x660 +| [<ffffffff811fb311>] __slab_alloc.isra.79+0x71/0xc0 +| [<ffffffff811fb447>] __kmalloc_node+0xe7/0x240 +| [<ffffffff814d4ee0>] alloc_cpumask_var_node+0x20/0x50 +| [<ffffffff814d4f3e>] alloc_cpumask_var+0xe/0x10 +| [<ffffffff810430c1>] native_send_call_func_ipi+0x21/0x130 +| [<ffffffff8111c13f>] smp_call_function_many+0x22f/0x370 +| [<ffffffff81062b64>] native_flush_tlb_others+0x1a4/0x3a0 +| [<ffffffff8106309b>] flush_tlb_mm_range+0x7b/0x350 +| [<ffffffff811c88e2>] tlb_flush_mmu_tlbonly+0x62/0xd0 +| [<ffffffff811c9af4>] tlb_finish_mmu+0x14/0x50 +| [<ffffffff811d1c84>] unmap_region+0xe4/0x110 +| [<ffffffff811d3db3>] do_munmap+0x293/0x470 +| [<ffffffff811d4f8c>] SyS_brk+0x13c/0x190 +| [<ffffffff810032e2>] do_fast_syscall_32+0xb2/0x2f0 +| [<ffffffff819be181>] entry_SYSENTER_compat+0x51/0x60 + +which forbid allocations at run-time. 
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- diff --git a/patches/drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch b/patches/drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch new file mode 100644 index 0000000000000..ad79f2f6df237 --- /dev/null +++ b/patches/drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch @@ -0,0 +1,91 @@ +From: Mike Galbraith <umgwanakikbuti@gmail.com> +Date: Thu, 20 Oct 2016 11:15:22 +0200 +Subject: [PATCH] drivers/zram: Don't disable preemption in + zcomp_stream_get/put() + +In v4.7, the driver switched to percpu compression streams, disabling +preemption via get/put_cpu_ptr(). Use a per-zcomp_strm lock here. We +also have to fix an lock order issue in zram_decompress_page() such +that zs_map_object() nests inside of zcomp_stream_put() as it does in +zram_bvec_write(). + +Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> +[bigeasy: get_locked_var() -> per zcomp_strm lock] +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + drivers/block/zram/zcomp.c | 12 ++++++++++-- + drivers/block/zram/zcomp.h | 1 + + drivers/block/zram/zram_drv.c | 6 +++--- + 3 files changed, 14 insertions(+), 5 deletions(-) + +--- a/drivers/block/zram/zcomp.c ++++ b/drivers/block/zram/zcomp.c +@@ -118,12 +118,19 @@ ssize_t zcomp_available_show(const char + + struct zcomp_strm *zcomp_stream_get(struct zcomp *comp) + { +- return *get_cpu_ptr(comp->stream); ++ struct zcomp_strm *zstrm; ++ ++ zstrm = *this_cpu_ptr(comp->stream); ++ spin_lock(&zstrm->zcomp_lock); ++ return zstrm; + } + + void zcomp_stream_put(struct zcomp *comp) + { +- put_cpu_ptr(comp->stream); ++ struct zcomp_strm *zstrm; ++ ++ zstrm = *this_cpu_ptr(comp->stream); ++ spin_unlock(&zstrm->zcomp_lock); + } + + int zcomp_compress(struct zcomp_strm *zstrm, +@@ -174,6 +181,7 @@ static int __zcomp_cpu_notifier(struct z + pr_err("Can't allocate a compression stream\n"); + return NOTIFY_BAD; + } ++ spin_lock_init(&zstrm->zcomp_lock); + 
*per_cpu_ptr(comp->stream, cpu) = zstrm; + break; + case CPU_DEAD: +--- a/drivers/block/zram/zcomp.h ++++ b/drivers/block/zram/zcomp.h +@@ -14,6 +14,7 @@ struct zcomp_strm { + /* compression/decompression buffer */ + void *buffer; + struct crypto_comp *tfm; ++ spinlock_t zcomp_lock; + }; + + /* dynamic per-device compression frontend */ +--- a/drivers/block/zram/zram_drv.c ++++ b/drivers/block/zram/zram_drv.c +@@ -568,6 +568,7 @@ static int zram_decompress_page(struct z + struct zram_meta *meta = zram->meta; + unsigned long handle; + unsigned int size; ++ struct zcomp_strm *zstrm; + + zram_lock_table(&meta->table[index]); + handle = meta->table[index].handle; +@@ -579,16 +580,15 @@ static int zram_decompress_page(struct z + return 0; + } + ++ zstrm = zcomp_stream_get(zram->comp); + cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO); + if (size == PAGE_SIZE) { + copy_page(mem, cmem); + } else { +- struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp); +- + ret = zcomp_decompress(zstrm, cmem, size, mem); +- zcomp_stream_put(zram->comp); + } + zs_unmap_object(meta->mem_pool, handle); ++ zcomp_stream_put(zram->comp); + zram_unlock_table(&meta->table[index]); + + /* Should NEVER happen. Return bio error if it does. */ diff --git a/patches/ftrace-Fix-trace-header-alignment.patch b/patches/ftrace-Fix-trace-header-alignment.patch new file mode 100644 index 0000000000000..20a019ec397ca --- /dev/null +++ b/patches/ftrace-Fix-trace-header-alignment.patch @@ -0,0 +1,62 @@ +From: Mike Galbraith <umgwanakikbuti@gmail.com> +Date: Sun, 16 Oct 2016 05:08:30 +0200 +Subject: [PATCH] ftrace: Fix trace header alignment + +Line up helper arrows to the right column. 
+ +Cc: stable-rt@vger.kernel.org +Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> +[bigeasy: fixup function tracer header] +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + kernel/trace/trace.c | 32 ++++++++++++++++---------------- + 1 file changed, 16 insertions(+), 16 deletions(-) + +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -2896,17 +2896,17 @@ get_total_entries(struct trace_buffer *b + + static void print_lat_help_header(struct seq_file *m) + { +- seq_puts(m, "# _--------=> CPU# \n" +- "# / _-------=> irqs-off \n" +- "# | / _------=> need-resched \n" +- "# || / _-----=> need-resched_lazy \n" +- "# ||| / _----=> hardirq/softirq \n" +- "# |||| / _---=> preempt-depth \n" +- "# ||||| / _--=> preempt-lazy-depth\n" +- "# |||||| / _-=> migrate-disable \n" +- "# ||||||| / delay \n" +- "# cmd pid |||||||| time | caller \n" +- "# \\ / |||||||| \\ | / \n"); ++ seq_puts(m, "# _--------=> CPU# \n" ++ "# / _-------=> irqs-off \n" ++ "# | / _------=> need-resched \n" ++ "# || / _-----=> need-resched_lazy \n" ++ "# ||| / _----=> hardirq/softirq \n" ++ "# |||| / _---=> preempt-depth \n" ++ "# ||||| / _--=> preempt-lazy-depth\n" ++ "# |||||| / _-=> migrate-disable \n" ++ "# ||||||| / delay \n" ++ "# cmd pid |||||||| time | caller \n" ++ "# \\ / |||||||| \\ | / \n"); + } + + static void print_event_info(struct trace_buffer *buf, struct seq_file *m) +@@ -2935,11 +2935,11 @@ static void print_func_help_header_irq(s + "# |/ _-----=> need-resched_lazy\n" + "# || / _---=> hardirq/softirq\n" + "# ||| / _--=> preempt-depth\n" +- "# |||| /_--=> preempt-lazy-depth\n" +- "# ||||| _-=> migrate-disable \n" +- "# ||||| / delay\n" +- "# TASK-PID CPU# |||||| TIMESTAMP FUNCTION\n" +- "# | | | |||||| | |\n"); ++ "# |||| / _-=> preempt-lazy-depth\n" ++ "# ||||| / _-=> migrate-disable \n" ++ "# |||||| / delay\n" ++ "# TASK-PID CPU# ||||||| TIMESTAMP FUNCTION\n" ++ "# | | | ||||||| | |\n"); + } + + void diff --git 
a/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch b/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch index 09e5a4325c286..c2d12a021a698 100644 --- a/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch +++ b/patches/genirq-do-not-invoke-the-affinity-callback-via-a-wor.patch @@ -10,9 +10,9 @@ This patch uses swork_queue() instead. Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- drivers/scsi/qla2xxx/qla_isr.c | 4 +++ - include/linux/interrupt.h | 5 ++++ + include/linux/interrupt.h | 6 +++++ kernel/irq/manage.c | 43 ++++++++++++++++++++++++++++++++++++++--- - 3 files changed, 49 insertions(+), 3 deletions(-) + 3 files changed, 50 insertions(+), 3 deletions(-) --- a/drivers/scsi/qla2xxx/qla_isr.c +++ b/drivers/scsi/qla2xxx/qla_isr.c @@ -38,7 +38,15 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> #include <linux/atomic.h> #include <asm/ptrace.h> -@@ -229,7 +230,11 @@ extern void resume_device_irqs(void); +@@ -218,6 +219,7 @@ extern void resume_device_irqs(void); + * struct irq_affinity_notify - context for notification of IRQ affinity changes + * @irq: Interrupt to which notification applies + * @kref: Reference count, for internal use ++ * @swork: Swork item, for internal use + * @work: Work item, for internal use + * @notify: Function to be called on change. This will be + * called in process context. 
+@@ -229,7 +231,11 @@ extern void resume_device_irqs(void); struct irq_affinity_notify { unsigned int irq; struct kref kref; diff --git a/patches/kbuild-add-fno-PIE.patch b/patches/kbuild-add-fno-PIE.patch new file mode 100644 index 0000000000000..87a1885446a8f --- /dev/null +++ b/patches/kbuild-add-fno-PIE.patch @@ -0,0 +1,25 @@ +From 5f490bfc33f69f490c6c7a90889287e84f1556c0 Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +Date: Fri, 21 Oct 2016 12:21:07 +0200 +Subject: [PATCH] kbuild: add -fno-PIE + +Debian started to build the gcc with -fPIE by default so the kernel +build ends before it starts properly with: +|kernel/bounds.c:1:0: error: code model kernel does not support PIC mode + +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + Makefile | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/Makefile ++++ b/Makefile +@@ -398,7 +398,7 @@ KBUILD_CPPFLAGS := -D__KERNEL__ + KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \ + -fno-strict-aliasing -fno-common \ + -Werror-implicit-function-declaration \ +- -Wno-format-security \ ++ -Wno-format-security -fno-PIE \ + -std=gnu89 + + KBUILD_AFLAGS_KERNEL := diff --git a/patches/localversion.patch b/patches/localversion.patch index 279489a1d1455..e36eb4b6666a7 100644 --- a/patches/localversion.patch +++ b/patches/localversion.patch @@ -10,4 +10,4 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- /dev/null +++ b/localversion-rt @@ -0,0 +1 @@ -+-rt2 ++-rt3 diff --git a/patches/mm-zsmalloc-Use-get-put_cpu_light-in-zs_map_object-z.patch b/patches/mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch index 0c032dc8f1e81..9b329559fc872 100644 --- a/patches/mm-zsmalloc-Use-get-put_cpu_light-in-zs_map_object-z.patch +++ b/patches/mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch @@ -1,41 +1,64 @@ From: Mike Galbraith <umgwanakikbuti@gmail.com> Date: Tue, 22 Mar 2016 11:16:09 +0100 -Subject: [PATCH] mm/zsmalloc: Use get/put_cpu_light in 
- zs_map_object()/zs_unmap_object() - -Otherwise, we get a ___might_sleep() splat. +Subject: [PATCH] mm/zsmalloc: copy with get_cpu_var() and locking +get_cpu_var() disables preemption and triggers a might_sleep() splat later. +This is replaced with get_locked_var(). +This bitspinlocks are replaced with a proper mutex which requires a slightly +larger struct to allocate. Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com> -[bigeasy: replace the bitspin_lock() with a mutex] +[bigeasy: replace the bitspin_lock() with a mutex, get_locked_var(). Mike then +fixed the size magic] Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- - mm/zsmalloc.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++++---- - 1 file changed, 69 insertions(+), 4 deletions(-) + mm/zsmalloc.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++++----- + 1 file changed, 74 insertions(+), 6 deletions(-) --- a/mm/zsmalloc.c +++ b/mm/zsmalloc.c -@@ -71,7 +71,19 @@ +@@ -53,6 +53,7 @@ + #include <linux/mount.h> + #include <linux/migrate.h> + #include <linux/pagemap.h> ++#include <linux/locallock.h> + + #define ZSPAGE_MAGIC 0x58 + +@@ -70,9 +71,22 @@ + */ #define ZS_MAX_ZSPAGE_ORDER 2 #define ZS_MAX_PAGES_PER_ZSPAGE (_AC(1, UL) << ZS_MAX_ZSPAGE_ORDER) +- + #define ZS_HANDLE_SIZE (sizeof(unsigned long)) -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + +struct zsmalloc_handle { + unsigned long addr; + struct mutex lock; +}; + -+#define ZS_HANDLE_SIZE (sizeof(struct zsmalloc_handle)) ++#define ZS_HANDLE_ALLOC_SIZE (sizeof(struct zsmalloc_handle)) + +#else + - #define ZS_HANDLE_SIZE (sizeof(unsigned long)) ++#define ZS_HANDLE_ALLOC_SIZE (sizeof(unsigned long)) +#endif - ++ /* * Object location (<PFN>, <obj_idx>) is encoded as -@@ -351,9 +363,26 @@ static void destroy_cache(struct zs_pool + * as single (unsigned long) handle value. 
+@@ -327,7 +341,7 @@ static void SetZsPageMovable(struct zs_p + + static int create_cache(struct zs_pool *pool) + { +- pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_SIZE, ++ pool->handle_cachep = kmem_cache_create("zs_handle", ZS_HANDLE_ALLOC_SIZE, + 0, 0, NULL); + if (!pool->handle_cachep) + return 1; +@@ -351,10 +365,27 @@ static void destroy_cache(struct zs_pool static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp) { @@ -45,7 +68,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> + + p = kmem_cache_alloc(pool->handle_cachep, + gfp & ~(__GFP_HIGHMEM|__GFP_MOVABLE)); -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + if (p) { + struct zsmalloc_handle *zh = p; + @@ -53,22 +76,23 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> + } +#endif + return (unsigned long)p; -+} -+ -+#ifdef CONFIG_PREEMPT_RT_BASE + } + ++#ifdef CONFIG_PREEMPT_RT_FULL +static struct zsmalloc_handle *zs_get_pure_handle(unsigned long handle) +{ + return (void *)(handle &~((1 << OBJ_TAG_BITS) - 1)); - } ++} +#endif - ++ static void cache_free_handle(struct zs_pool *pool, unsigned long handle) { -@@ -373,12 +402,18 @@ static void cache_free_zspage(struct zs_ + kmem_cache_free(pool->handle_cachep, (void *)handle); +@@ -373,12 +404,18 @@ static void cache_free_zspage(struct zs_ static void record_obj(unsigned long handle, unsigned long obj) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + WRITE_ONCE(zh->addr, obj); @@ -83,11 +107,19 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> } /* zpool driver */ -@@ -902,7 +937,13 @@ static unsigned long location_to_obj(str +@@ -467,6 +504,7 @@ MODULE_ALIAS("zpool-zsmalloc"); + + /* per-cpu VM mapping areas for zspage accesses that cross page boundaries */ + static DEFINE_PER_CPU(struct mapping_area, zs_map_area); ++static DEFINE_LOCAL_IRQ_LOCK(zs_map_area_lock); + + static 
bool is_zspage_isolated(struct zspage *zspage) + { +@@ -902,7 +940,13 @@ static unsigned long location_to_obj(str static unsigned long handle_to_obj(unsigned long handle) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + return zh->addr; @@ -97,11 +129,11 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> } static unsigned long obj_to_head(struct page *page, void *obj) -@@ -916,22 +957,46 @@ static unsigned long obj_to_head(struct +@@ -916,22 +960,46 @@ static unsigned long obj_to_head(struct static inline int testpin_tag(unsigned long handle) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + return mutex_is_locked(&zh->lock); @@ -112,7 +144,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> static inline int trypin_tag(unsigned long handle) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + return mutex_trylock(&zh->lock); @@ -123,7 +155,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> static void pin_tag(unsigned long handle) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + return mutex_lock(&zh->lock); @@ -134,7 +166,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> static void unpin_tag(unsigned long handle) { -+#ifdef CONFIG_PREEMPT_RT_BASE ++#ifdef CONFIG_PREEMPT_RT_FULL + struct zsmalloc_handle *zh = zs_get_pure_handle(handle); + + return mutex_unlock(&zh->lock); @@ -144,21 +176,21 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> } static void reset_page(struct page *page) -@@ -1423,7 +1488,7 @@ void *zs_map_object(struct zs_pool *pool +@@ -1423,7 +1491,7 @@ void *zs_map_object(struct zs_pool *pool class = pool->size_class[class_idx]; off = (class->size 
* obj_idx) & ~PAGE_MASK; - area = &get_cpu_var(zs_map_area); -+ area = per_cpu_ptr(&zs_map_area, get_cpu_light()); ++ area = &get_locked_var(zs_map_area_lock, zs_map_area); area->vm_mm = mm; if (off + class->size <= PAGE_SIZE) { /* this object is contained entirely within a page */ -@@ -1477,7 +1542,7 @@ void zs_unmap_object(struct zs_pool *poo +@@ -1477,7 +1545,7 @@ void zs_unmap_object(struct zs_pool *poo __zs_unmap_object(area, pages, off, class->size); } - put_cpu_var(zs_map_area); -+ put_cpu_light(); ++ put_locked_var(zs_map_area_lock, zs_map_area); migrate_read_unlock(zspage); unpin_tag(handle); diff --git a/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch b/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch index 28053a98c27ed..52e61ec5d79f0 100644 --- a/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch +++ b/patches/net-provide-a-way-to-delegate-processing-a-softirq-t.patch @@ -20,7 +20,7 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- a/include/linux/interrupt.h +++ b/include/linux/interrupt.h -@@ -487,6 +487,14 @@ extern void thread_do_softirq(void); +@@ -488,6 +488,14 @@ extern void thread_do_softirq(void); extern void open_softirq(int nr, void (*action)(struct softirq_action *)); extern void softirq_init(void); extern void __raise_softirq_irqoff(unsigned int nr); diff --git a/patches/rtmutex_dont_include_rcu.patch b/patches/rtmutex_dont_include_rcu.patch index 2578a1998fac1..494f141c3f1be 100644 --- a/patches/rtmutex_dont_include_rcu.patch +++ b/patches/rtmutex_dont_include_rcu.patch @@ -19,9 +19,9 @@ a new header file which can be included by both users. 
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> --- include/linux/rbtree.h | 2 - - include/linux/rcu_assign_pointer.h | 53 +++++++++++++++++++++++++++++++++++++ - include/linux/rcupdate.h | 49 ---------------------------------- - 3 files changed, 55 insertions(+), 49 deletions(-) + include/linux/rcu_assign_pointer.h | 54 +++++++++++++++++++++++++++++++++++++ + include/linux/rcupdate.h | 49 --------------------------------- + 3 files changed, 56 insertions(+), 49 deletions(-) --- a/include/linux/rbtree.h +++ b/include/linux/rbtree.h @@ -36,10 +36,11 @@ Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> unsigned long __rb_parent_color; --- /dev/null +++ b/include/linux/rcu_assign_pointer.h -@@ -0,0 +1,53 @@ +@@ -0,0 +1,54 @@ +#ifndef __LINUX_RCU_ASSIGN_POINTER_H__ +#define __LINUX_RCU_ASSIGN_POINTER_H__ +#include <linux/compiler.h> ++#include <asm/barrier.h> + +/** + * RCU_INITIALIZER() - statically initialize an RCU-protected global variable diff --git a/patches/rxrpc-remove-unused-static-variables.patch b/patches/rxrpc-remove-unused-static-variables.patch new file mode 100644 index 0000000000000..4d82f3d99ad49 --- /dev/null +++ b/patches/rxrpc-remove-unused-static-variables.patch @@ -0,0 +1,27 @@ +From f9cf73e8bad7daa90318edfd933f8676cd1e5cd4 Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +Date: Fri, 21 Oct 2016 10:54:50 +0200 +Subject: [PATCH] rxrpc: remove unused static variables + +The rxrpc_security_methods and rxrpc_security_sem user has been removed +in 648af7fca159 ("rxrpc: Absorb the rxkad security module"). This was +noticed by kbuild test robot for the -RT tree but is also true for !RT. 
+ +Reported-by: kbuild test robot <fengguang.wu@intel.com> +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + net/rxrpc/security.c | 3 --- + 1 file changed, 3 deletions(-) + +--- a/net/rxrpc/security.c ++++ b/net/rxrpc/security.c +@@ -19,9 +19,6 @@ + #include <keys/rxrpc-type.h> + #include "ar-internal.h" + +-static LIST_HEAD(rxrpc_security_methods); +-static DECLARE_RWSEM(rxrpc_security_sem); +- + static const struct rxrpc_security *rxrpc_security_types[] = { + [RXRPC_SECURITY_NONE] = &rxrpc_no_security, + #ifdef CONFIG_RXKAD diff --git a/patches/sched-mmdrop-delayed.patch b/patches/sched-mmdrop-delayed.patch index 03fb860a319ab..769d328eadc6b 100644 --- a/patches/sched-mmdrop-delayed.patch +++ b/patches/sched-mmdrop-delayed.patch @@ -119,7 +119,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> nohz_balance_exit_idle(cpu); hrtick_clear(rq); + if (per_cpu(idle_last_mm, cpu)) { -+ mmdrop(per_cpu(idle_last_mm, cpu)); ++ mmdrop_delayed(per_cpu(idle_last_mm, cpu)); + per_cpu(idle_last_mm, cpu) = NULL; + } return 0; diff --git a/patches/series b/patches/series index b9a9e83cf5e97..b7dfbcafbea3c 100644 --- a/patches/series +++ b/patches/series @@ -38,8 +38,12 @@ fs-dcache-init-in_lookup_hashtable.patch iommu-iova-don-t-disable-preempt-around-this_cpu_ptr.patch iommu-vt-d-don-t-disable-preemption-while-accessing-.patch lockdep-Quiet-gcc-about-dangerous-__builtin_return_a.patch +x86-apic-get-rid-of-warning-acpi_ioapic_lock-defined.patch +rxrpc-remove-unused-static-variables.patch +kbuild-add-fno-PIE.patch # Wants a different fix for upstream +NFSv4-replace-seqcount_t-with-a-seqlock_t.patch ############################################################ # Submitted on LKML @@ -247,7 +251,7 @@ mm-memcontrol-Don-t-call-schedule_work_on-in-preempt.patch mm-memcontrol-do_not_disable_irq.patch mm-memcontrol-mem_cgroup_migrate-replace-another-loc.patch mm-backing-dev-don-t-disable-IRQs-in-wb_congested_pu.patch 
-mm-zsmalloc-Use-get-put_cpu_light-in-zs_map_object-z.patch
+mm_zsmalloc_copy_with_get_cpu_var_and_locking.patch
 
 # RADIX TREE
 radix-tree-rt-aware.patch
@@ -550,6 +554,7 @@ rcu-make-RCU_BOOST-default-on-RT.patch
 
 # PREEMPT LAZY
 preempt-lazy-support.patch
+ftrace-Fix-trace-header-alignment.patch
 x86-preempt-lazy.patch
 arm-preempt-lazy-support.patch
 powerpc-preempt-lazy-support.patch
@@ -561,7 +566,9 @@ leds-trigger-disable-CPU-trigger-on-RT.patch
 
 # DRIVERS
 mmci-remove-bogus-irq-save.patch
 cpufreq-drop-K8-s-driver-from-beeing-selected.patch
+connector-cn_proc-Protect-send_msg-with-a-local-lock.patch
 drivers-block-zram-Replace-bit-spinlocks-with-rtmute.patch
+drivers-zram-Don-t-disable-preemption-in-zcomp_strea.patch
 
 # I915
 drm-i915-drop-trace_i915_gem_ring_dispatch-onrt.patch
diff --git a/patches/workqueue-distangle-from-rq-lock.patch b/patches/workqueue-distangle-from-rq-lock.patch
index 5cc25b8c4a937..071360f3f93ca 100644
--- a/patches/workqueue-distangle-from-rq-lock.patch
+++ b/patches/workqueue-distangle-from-rq-lock.patch
@@ -161,9 +161,9 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  /**
 - * wq_worker_waking_up - a worker is waking up
-- * @task: task waking up
 + * wq_worker_running - a worker is running again
-- * @cpu: CPU @task is waking up to
 + * @task: task waking up
-- * @cpu: CPU @task is waking up to
  *
 - * This function is called during try_to_wake_up() when a worker is
 - * being awoken.
diff --git a/patches/x86-apic-get-rid-of-warning-acpi_ioapic_lock-defined.patch b/patches/x86-apic-get-rid-of-warning-acpi_ioapic_lock-defined.patch
new file mode 100644
index 0000000000000..ebd11b4a83b65
--- /dev/null
+++ b/patches/x86-apic-get-rid-of-warning-acpi_ioapic_lock-defined.patch
@@ -0,0 +1,44 @@
+From 309789b8125b7eee6fd1c3a4716fcb6ea1ad32ba Mon Sep 17 00:00:00 2001
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 21 Oct 2016 10:29:11 +0200
+Subject: [PATCH] x86/apic: get rid of "warning: 'acpi_ioapic_lock' defined but
+ not used"
+
+kbuild test robot reported this against the -RT tree:
+
+| In file included from include/linux/mutex.h:30:0,
+|                  from include/linux/notifier.h:13,
+|                  from include/linux/memory_hotplug.h:6,
+|                  from include/linux/mmzone.h:777,
+|                  from include/linux/gfp.h:5,
+|                  from include/linux/slab.h:14,
+|                  from include/linux/resource_ext.h:19,
+|                  from include/linux/acpi.h:26,
+|                  from arch/x86/kernel/acpi/boot.c:27:
+|>> arch/x86/kernel/acpi/boot.c:90:21: warning: 'acpi_ioapic_lock' defined but not used [-Wunused-variable]
+|  static DEFINE_MUTEX(acpi_ioapic_lock);
+|                      ^
+| include/linux/mutex_rt.h:27:15: note: in definition of macro 'DEFINE_MUTEX'
+|  struct mutex mutexname = __MUTEX_INITIALIZER(mutexname)
+                ^~~~~~~~~
+which is also true (i.e. the variable is unused) for !RT, but there the
+compiler does not emit a warning.
+
+Reported-by: kbuild test robot <fengguang.wu@intel.com>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ arch/x86/kernel/acpi/boot.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -87,7 +87,9 @@ static u64 acpi_lapic_addr __initdata =
+  *	->ioapic_mutex
+  *	->ioapic_lock
+  */
++#ifdef CONFIG_X86_IO_APIC
+ static DEFINE_MUTEX(acpi_ioapic_lock);
++#endif
+
+ /* --------------------------------------------------------------------------
+                               Boot-time Configuration
diff --git a/patches/x86-preempt-lazy.patch b/patches/x86-preempt-lazy.patch
index ad04b04ea7c9f..023abdef9ca9f 100644
--- a/patches/x86-preempt-lazy.patch
+++ b/patches/x86-preempt-lazy.patch
@@ -8,12 +8,12 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 ---
  arch/x86/Kconfig | 1
  arch/x86/entry/common.c | 4 ++--
- arch/x86/entry/entry_32.S | 16 ++++++++++++++++
+ arch/x86/entry/entry_32.S | 17 +++++++++++++++++
  arch/x86/entry/entry_64.S | 16 ++++++++++++++++
  arch/x86/include/asm/preempt.h | 31 ++++++++++++++++++++++++++++++-
  arch/x86/include/asm/thread_info.h | 10 ++++++++++
  arch/x86/kernel/asm-offsets.c | 2 ++
- 7 files changed, 77 insertions(+), 3 deletions(-)
+ 7 files changed, 78 insertions(+), 3 deletions(-)
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -47,7 +47,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  #ifdef ARCH_RT_DELAYS_SIGNAL_SEND
 --- a/arch/x86/entry/entry_32.S
 +++ b/arch/x86/entry/entry_32.S
-@@ -271,8 +271,24 @@ END(ret_from_exception)
+@@ -271,8 +271,25 @@ END(ret_from_exception)
  ENTRY(resume_kernel)
  	DISABLE_INTERRUPTS(CLBR_ANY)
 need_resched:
@@ -62,6 +62,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
 +	cmpl $_PREEMPT_ENABLED,PER_CPU_VAR(__preempt_count)
 +	jne restore_all
 +
++	GET_THREAD_INFO(%ebp)
 +	cmpl $0,TI_preempt_lazy_count(%ebp)	# non-zero preempt_lazy_count ?
 +	jnz restore_all
 +