author     Paul Gortmaker <paul.gortmaker@windriver.com>   2013-10-17 14:56:32 -0400
committer  Paul Gortmaker <paul.gortmaker@windriver.com>   2013-10-17 15:02:59 -0400
commit     35c34859666c9d577e1204a1d6598929bd95eb79 (patch)
tree       4f666b8f5322a7614284fdbfebec9c24dbd03623
parent     3a10eb8e15af336853f13342f942f9adcb430885 (diff)
download   3.10-rt-patches-35c34859666c9d577e1204a1d6598929bd95eb79.tar.gz

patches-3.10.15-rt11.tar.xz (tag: v3.10.15-rt11)
md5sum:
87a42409bdaeba3ad91e521daf7ec85b patches-3.10.15-rt11.tar.xz
Announce:
---------------------
Dear RT folks!
I'm pleased to announce the v3.10.15-rt11 patch set.
Changes since v3.10.15-rt10:
- two genirq patches: one was already in v3.8-rt ("genirq: Set irq
  thread to RT priority on creation"). The second ("genirq: Set the irq
  thread policy without checking CAP_SYS_NICE") ensures that the
  priority is also assigned if the irq is requested in user context.
  Patch by Thomas Pfaff.
- A patch from Paul Gortmaker to make PowerPC compile without RT.
- Four patches from Paul Gortmaker to make SLAB compile while RT is
  disabled.
- A fix for "sleeping from invalid context" in tty_ldisc. Reported by
Luis Claudio R. Goncalves.
- A fix for "sleeping from invalid context" in the drm layer triggered
by the i915 driver. Reported by Luis Claudio R. Goncalves.
Known issues:
- SLAB support not working
- The cpsw network driver shows some issues.
- bcache is disabled.
- an ancient race (present since we got sleeping spinlocks) where the
  TASK_TRACED state is temporarily replaced while waiting on a rw
  lock, so the task can't be traced.
- livelock in sem_lock(). A race fix queued for 3.10+ fixes
  the livelock issue as well, but it's not yet in 3.10.15.
  Should someone trigger it, please look at commit 5e9d5275 ("ipc/sem.c:
  fix race in sem_lock()") or wait until it hits the stable queue.
The delta patch against v3.10.15-rt10 is appended below and can be found
here:
https://www.kernel.org/pub/linux/kernel/projects/rt/3.10/incr/patch-3.10.15-rt10-rt11.patch.xz
The RT patch against 3.10.15 can be found here:
https://www.kernel.org/pub/linux/kernel/projects/rt/3.10/patch-3.10.15-rt11.patch.xz
The split quilt queue is available at:
https://www.kernel.org/pub/linux/kernel/projects/rt/3.10/patches-3.10.15-rt11.tar.xz
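The downloaded queue can be checked against the md5sum given at the top of this message. The sketch below is self-contained (it hashes an empty stand-in file whose digest is known); in practice, substitute patches-3.10.15-rt11.tar.xz and the announced digest 87a42409bdaeba3ad91e521daf7ec85b:

```shell
# Stand-in file so the sketch runs anywhere; replace with the real tarball.
: > /tmp/patches-demo.tar.xz
expected=d41d8cd98f00b204e9800998ecf8427e   # md5 of empty input
actual=$(md5sum /tmp/patches-demo.tar.xz | awk '{print $1}')
if [ "$actual" = "$expected" ]; then echo "md5 OK"; else echo "md5 MISMATCH"; fi
```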
Sebastian
[delta patch snipped]
---------------------
http://marc.info/?l=linux-rt-users&m=138152550726604&w=2
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Conflicts:
patches/localversion.patch
-rw-r--r--  patches/drm-remove-preempt_disable-from-drm_calc_vbltimestam.patch |  82
-rw-r--r--  patches/genirq-Set-irq-thread-to-RT-priority-on-creation.patch     |  66
-rw-r--r--  patches/genirq-Set-the-irq-thread-policy-without-checking-CA.patch |  43
-rw-r--r--  patches/localversion.patch                                         |   2
-rw-r--r--  patches/mm-disable-slab-on-rt.patch                                |  11
-rw-r--r--  patches/mm-slab-more-lock-breaks.patch                             |  52
-rw-r--r--  patches/mm-slab-wrap-functions.patch                               |  92
-rw-r--r--  patches/peter_zijlstra-frob-pagefault_disable.patch                |  20
-rw-r--r--  patches/series                                                     |   4
-rw-r--r--  patches/tty-ldisc-drop-the-raw-lock-earlier.patch                  |  63
10 files changed, 367 insertions(+), 68 deletions(-)
diff --git a/patches/drm-remove-preempt_disable-from-drm_calc_vbltimestam.patch b/patches/drm-remove-preempt_disable-from-drm_calc_vbltimestam.patch new file mode 100644 index 0000000..2bcfd01 --- /dev/null +++ b/patches/drm-remove-preempt_disable-from-drm_calc_vbltimestam.patch @@ -0,0 +1,82 @@ +From c17c11831778d5ed948858ef4bb32058f5013094 Mon Sep 17 00:00:00 2001 +From: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +Date: Fri, 11 Oct 2013 17:14:31 +0200 +Subject: [PATCH] drm: remove preempt_disable() from + drm_calc_vbltimestamp_from_scanoutpos() + +Luis captured the following: + +| BUG: sleeping function called from invalid context at kernel/rtmutex.c:659 +| in_atomic(): 1, irqs_disabled(): 0, pid: 517, name: Xorg +| 2 locks held by Xorg/517: +| #0: +| ( +| &dev->vbl_lock +| ){......} +| , at: +| [<ffffffffa0024c60>] drm_vblank_get+0x30/0x2b0 [drm] +| #1: +| ( +| &dev->vblank_time_lock +| ){......} +| , at: +| [<ffffffffa0024ce1>] drm_vblank_get+0xb1/0x2b0 [drm] +| Preemption disabled at: +| [<ffffffffa008bc95>] i915_get_vblank_timestamp+0x45/0xa0 [i915] +| CPU: 3 PID: 517 Comm: Xorg Not tainted 3.10.10-rt7+ #5 +| Call Trace: +| [<ffffffff8164b790>] dump_stack+0x19/0x1b +| [<ffffffff8107e62f>] __might_sleep+0xff/0x170 +| [<ffffffff81651ac4>] rt_spin_lock+0x24/0x60 +| [<ffffffffa0084e67>] i915_read32+0x27/0x170 [i915] +| [<ffffffffa008a591>] i915_pipe_enabled+0x31/0x40 [i915] +| [<ffffffffa008a6be>] i915_get_crtc_scanoutpos+0x3e/0x1b0 [i915] +| [<ffffffffa00245d4>] drm_calc_vbltimestamp_from_scanoutpos+0xf4/0x430 [drm] +| [<ffffffffa008bc95>] i915_get_vblank_timestamp+0x45/0xa0 [i915] +| [<ffffffffa0024998>] drm_get_last_vbltimestamp+0x48/0x70 [drm] +| [<ffffffffa0024db5>] drm_vblank_get+0x185/0x2b0 [drm] +| [<ffffffffa0025d03>] drm_wait_vblank+0x83/0x5d0 [drm] +| [<ffffffffa00212a2>] drm_ioctl+0x552/0x6a0 [drm] +| [<ffffffff811a0095>] do_vfs_ioctl+0x325/0x5b0 +| [<ffffffff811a03a1>] SyS_ioctl+0x81/0xa0 +| [<ffffffff8165a342>] tracesys+0xdd/0xe2 + +After a 
longer thread it was decided to drop the preempt_disable()/ +enable() invocations which were meant for -RT and Mario Kleiner looks +for a replacement. + +Cc: stable-rt@vger.kernel.org +Reported-By: Luis Claudio R. Goncalves <lclaudio@uudg.org> +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + drivers/gpu/drm/drm_irq.c | 7 ------- + 1 file changed, 7 deletions(-) + +diff --git a/drivers/gpu/drm/drm_irq.c b/drivers/gpu/drm/drm_irq.c +index f92da0a..434ea84 100644 +--- a/drivers/gpu/drm/drm_irq.c ++++ b/drivers/gpu/drm/drm_irq.c +@@ -628,11 +628,6 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc, + * code gets preempted or delayed for some reason. + */ + for (i = 0; i < DRM_TIMESTAMP_MAXRETRIES; i++) { +- /* Disable preemption to make it very likely to +- * succeed in the first iteration even on PREEMPT_RT kernel. +- */ +- preempt_disable(); +- + /* Get system timestamp before query. */ + stime = ktime_get(); + +@@ -644,8 +639,6 @@ int drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, int crtc, + if (!drm_timestamp_monotonic) + mono_time_offset = ktime_get_monotonic_offset(); + +- preempt_enable(); +- + /* Return as no-op if scanout query unsupported or failed. */ + if (!(vbl_status & DRM_SCANOUTPOS_VALID)) { + DRM_DEBUG("crtc %d : scanoutpos query failed [%d].\n", +-- +1.8.4.rc3 + diff --git a/patches/genirq-Set-irq-thread-to-RT-priority-on-creation.patch b/patches/genirq-Set-irq-thread-to-RT-priority-on-creation.patch new file mode 100644 index 0000000..47573d7 --- /dev/null +++ b/patches/genirq-Set-irq-thread-to-RT-priority-on-creation.patch @@ -0,0 +1,66 @@ +From 0b4a953a0a014bee0bc3eaa5ae791f4b985f2c7a Mon Sep 17 00:00:00 2001 +From: Ivo Sieben <meltedpianoman@gmail.com> +Date: Mon, 3 Jun 2013 10:12:02 +0000 +Subject: [PATCH] genirq: Set irq thread to RT priority on creation + +When a threaded irq handler is installed the irq thread is initially +created on normal scheduling priority. 
Only after the irq thread is +woken up it sets its priority to RT_FIFO MAX_USER_RT_PRIO/2 itself. + +This means that interrupts that occur directly after the irq handler +is installed will be handled on a normal scheduling priority instead +of the realtime priority that one would expect. + +Fix this by setting the RT priority on creation of the irq_thread. + +Signed-off-by: Ivo Sieben <meltedpianoman@gmail.com> +Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +Cc: Steven Rostedt <rostedt@goodmis.org> +Link: http://lkml.kernel.org/r/1370254322-17240-1-git-send-email-meltedpianoman@gmail.com +Signed-off-by: Thomas Gleixner <tglx@linutronix.de> +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + kernel/irq/manage.c | 11 ++++++----- + 1 file changed, 6 insertions(+), 5 deletions(-) + +--- a/kernel/irq/manage.c ++++ b/kernel/irq/manage.c +@@ -839,9 +839,6 @@ static void irq_thread_dtor(struct callb + static int irq_thread(void *data) + { + struct callback_head on_exit_work; +- static const struct sched_param param = { +- .sched_priority = MAX_USER_RT_PRIO/2, +- }; + struct irqaction *action = data; + struct irq_desc *desc = irq_to_desc(action->irq); + irqreturn_t (*handler_fn)(struct irq_desc *desc, +@@ -853,8 +850,6 @@ static int irq_thread(void *data) + else + handler_fn = irq_thread_fn; + +- sched_setscheduler(current, SCHED_FIFO, ¶m); +- + init_task_work(&on_exit_work, irq_thread_dtor); + task_work_add(current, &on_exit_work, false); + +@@ -949,6 +944,9 @@ __setup_irq(unsigned int irq, struct irq + */ + if (new->thread_fn && !nested) { + struct task_struct *t; ++ static const struct sched_param param = { ++ .sched_priority = MAX_USER_RT_PRIO/2, ++ }; + + t = kthread_create(irq_thread, new, "irq/%d-%s", irq, + new->name); +@@ -956,6 +954,9 @@ __setup_irq(unsigned int irq, struct irq + ret = PTR_ERR(t); + goto out_mput; + } ++ ++ sched_setscheduler(t, SCHED_FIFO, ¶m); ++ + /* + * We keep the reference to the task struct even if + * the 
thread dies to avoid that the interrupt code diff --git a/patches/genirq-Set-the-irq-thread-policy-without-checking-CA.patch b/patches/genirq-Set-the-irq-thread-policy-without-checking-CA.patch new file mode 100644 index 0000000..7838c2f --- /dev/null +++ b/patches/genirq-Set-the-irq-thread-policy-without-checking-CA.patch @@ -0,0 +1,43 @@ +From 7f095a71d6bc49d7c33ed33ebc26daf4867ee4c8 Mon Sep 17 00:00:00 2001 +From: Thomas Pfaff <tpfaff@pcs.com> +Date: Fri, 11 Oct 2013 12:42:49 +0200 +Subject: [PATCH] genirq: Set the irq thread policy without checking + CAP_SYS_NICE + +In commit ee23871389 ("genirq: Set irq thread to RT priority on +creation") we moved the assigment of the thread's priority from the +thread's function into __setup_irq(). That function may run in user +context for instance if the user opens an UART node and then driver +calls requests in the ->open() callback. That user may not have +CAP_SYS_NICE and so the irq thread won't run with the SCHED_OTHER +policy. + +This patch uses sched_setscheduler_nocheck() so we omit the CAP_SYS_NICE +check which is otherwise required for the SCHED_OTHER policy. 
+ +Cc: Ivo Sieben <meltedpianoman@gmail.com> +Cc: stable@vger.kernel.org +Cc: stable-rt@vger.kernel.org +Signed-off-by: Thomas Pfaff <tpfaff@pcs.com> +[bigeasy: rewrite the changelog] +Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> +--- + kernel/irq/manage.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index 514bcfd..3e59f95 100644 +--- a/kernel/irq/manage.c ++++ b/kernel/irq/manage.c +@@ -956,7 +956,7 @@ __setup_irq(unsigned int irq, struct irq_desc *desc, struct irqaction *new) + goto out_mput; + } + +- sched_setscheduler(t, SCHED_FIFO, ¶m); ++ sched_setscheduler_nocheck(t, SCHED_FIFO, ¶m); + + /* + * We keep the reference to the task struct even if +-- +1.8.4.rc3 + diff --git a/patches/localversion.patch b/patches/localversion.patch index 458b434..4e59cb4 100644 --- a/patches/localversion.patch +++ b/patches/localversion.patch @@ -12,4 +12,4 @@ Link: http://lkml.kernel.org/n/tip-8vdw4bfcsds27cvox6rpb334@git.kernel.org --- /dev/null +++ b/localversion-rt @@ -0,0 +1 @@ -+-rt10 ++-rt11 diff --git a/patches/mm-disable-slab-on-rt.patch b/patches/mm-disable-slab-on-rt.patch index 20530ca..f8e3307 100644 --- a/patches/mm-disable-slab-on-rt.patch +++ b/patches/mm-disable-slab-on-rt.patch @@ -4,8 +4,8 @@ Subject: mm: disable slab on rt --- init/Kconfig | 1 + - mm/slab.h | 2 +- - 2 files changed, 2 insertions(+), 1 deletion(-) + mm/slab.h | 4 ++++ + 2 files changed, 5 insertions(+) --- a/init/Kconfig +++ b/init/Kconfig @@ -19,12 +19,15 @@ Subject: mm: disable slab on rt well in all environments. It organizes cache hot objects in --- a/mm/slab.h +++ b/mm/slab.h -@@ -245,7 +245,7 @@ static inline struct kmem_cache *cache_f +@@ -247,7 +247,11 @@ static inline struct kmem_cache *cache_f * The slab lists for all objects. 
*/ struct kmem_cache_node { -- spinlock_t list_lock; ++#ifdef CONFIG_SLAB + spinlock_t list_lock; ++#else + raw_spinlock_t list_lock; ++#endif #ifdef CONFIG_SLAB struct list_head slabs_partial; /* partial list first, better asm code */ diff --git a/patches/mm-slab-more-lock-breaks.patch b/patches/mm-slab-more-lock-breaks.patch index 4aceafa..4842275 100644 --- a/patches/mm-slab-more-lock-breaks.patch +++ b/patches/mm-slab-more-lock-breaks.patch @@ -9,8 +9,8 @@ Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- - mm/slab.c | 80 +++++++++++++++++++++++++++++++++++++++++++++++++------------- - 1 file changed, 64 insertions(+), 16 deletions(-) + mm/slab.c | 84 ++++++++++++++++++++++++++++++++++++++++++++++++-------------- + 1 file changed, 66 insertions(+), 18 deletions(-) --- a/mm/slab.c +++ b/mm/slab.c @@ -57,7 +57,21 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep) { return cachep->array[smp_processor_id()]; -@@ -1242,6 +1271,7 @@ static void __cpuinit cpuup_canceled(lon +@@ -1210,11 +1239,11 @@ static int init_cache_node_node(int node + cachep->node[node] = n; + } + +- local_spin_lock_irq(slab_lock, &cachep->nodelists[node]->list_lock); ++ local_spin_lock_irq(slab_lock, &cachep->node[node]->list_lock); + cachep->node[node]->free_limit = + (1 + nr_cpus_node(node)) * + cachep->batchcount + cachep->num; +- local_spin_unlock_irq(slab_lock, &cachep->nodelists[node]->list_lock); ++ local_spin_unlock_irq(slab_lock, &cachep->node[node]->list_lock); + } + return 0; + } +@@ -1248,6 +1277,7 @@ static void __cpuinit cpuup_canceled(lon if (!cpumask_empty(mask)) { local_spin_unlock_irq(slab_lock, &n->list_lock); @@ -65,7 +79,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> goto free_array_cache; } -@@ -1256,6 +1286,7 @@ static void __cpuinit cpuup_canceled(lon +@@ -1262,6 +1292,7 @@ static void __cpuinit cpuup_canceled(lon n->alien 
= NULL; local_spin_unlock_irq(slab_lock, &n->list_lock); @@ -73,7 +87,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(shared); if (alien) { -@@ -1546,6 +1577,8 @@ void __init kmem_cache_init(void) +@@ -1552,6 +1583,8 @@ void __init kmem_cache_init(void) use_alien_caches = 0; local_irq_lock_init(slab_lock); @@ -82,7 +96,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> for (i = 0; i < NUM_INIT_LISTS; i++) kmem_cache_node_init(&init_kmem_cache_node[i]); -@@ -1824,12 +1857,14 @@ static void *kmem_getpages(struct kmem_c +@@ -1830,12 +1863,14 @@ static void *kmem_getpages(struct kmem_c /* * Interface to system's page release. */ @@ -99,7 +113,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kmemcheck_free_shadow(page, cachep->gfporder); if (cachep->flags & SLAB_RECLAIM_ACCOUNT) -@@ -1848,7 +1883,12 @@ static void kmem_freepages(struct kmem_c +@@ -1854,7 +1889,12 @@ static void kmem_freepages(struct kmem_c memcg_release_pages(cachep, cachep->gfporder); if (current->reclaim_state) current->reclaim_state->reclaimed_slab += nr_freed; @@ -113,7 +127,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } static void kmem_rcu_free(struct rcu_head *head) -@@ -1856,7 +1896,7 @@ static void kmem_rcu_free(struct rcu_hea +@@ -1862,7 +1902,7 @@ static void kmem_rcu_free(struct rcu_hea struct slab_rcu *slab_rcu = (struct slab_rcu *)head; struct kmem_cache *cachep = slab_rcu->cachep; @@ -122,7 +136,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (OFF_SLAB(cachep)) kmem_cache_free(cachep->slabp_cache, slab_rcu); } -@@ -2073,7 +2113,8 @@ static void slab_destroy_debugcheck(stru +@@ -2079,7 +2119,8 @@ static void slab_destroy_debugcheck(stru * Before calling the slab must have been unlinked from the cache. The * cache-lock is not held/needed. 
*/ @@ -132,7 +146,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> { void *addr = slabp->s_mem - slabp->colouroff; -@@ -2086,7 +2127,7 @@ static void slab_destroy(struct kmem_cac +@@ -2092,7 +2133,7 @@ static void slab_destroy(struct kmem_cac slab_rcu->addr = addr; call_rcu(&slab_rcu->head, kmem_rcu_free); } else { @@ -141,7 +155,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (OFF_SLAB(cachep)) kmem_cache_free(cachep->slabp_cache, slabp); } -@@ -2497,9 +2538,15 @@ static void do_drain(void *arg) +@@ -2503,9 +2544,15 @@ static void do_drain(void *arg) __do_drain(arg, smp_processor_id()); } #else @@ -159,7 +173,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } #endif -@@ -2557,7 +2604,7 @@ static int drain_freelist(struct kmem_ca +@@ -2563,7 +2610,7 @@ static int drain_freelist(struct kmem_ca */ n->free_objects -= cache->num; local_spin_unlock_irq(slab_lock, &n->list_lock); @@ -168,7 +182,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> nr_freed++; } out: -@@ -2872,7 +2919,7 @@ static int cache_grow(struct kmem_cache +@@ -2878,7 +2925,7 @@ static int cache_grow(struct kmem_cache spin_unlock(&n->list_lock); return 1; opps1: @@ -177,7 +191,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> failed: if (local_flags & __GFP_WAIT) local_lock_irq(slab_lock); -@@ -3554,7 +3601,7 @@ static void free_block(struct kmem_cache +@@ -3560,7 +3607,7 @@ static void free_block(struct kmem_cache * a different cache, refer to comments before * alloc_slabmgmt. 
*/ @@ -186,7 +200,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } else { list_add(&slabp->list, &n->slabs_free); } -@@ -3822,7 +3869,7 @@ void kmem_cache_free(struct kmem_cache * +@@ -3828,7 +3875,7 @@ void kmem_cache_free(struct kmem_cache * debug_check_no_obj_freed(objp, cachep->object_size); local_lock_irqsave(slab_lock, flags); __cache_free(cachep, objp, _RET_IP_); @@ -195,7 +209,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> trace_kmem_cache_free(_RET_IP_, objp); } -@@ -3853,7 +3900,7 @@ void kfree(const void *objp) +@@ -3859,7 +3906,7 @@ void kfree(const void *objp) debug_check_no_obj_freed(objp, c->object_size); local_lock_irqsave(slab_lock, flags); __cache_free(c, (void *)objp, _RET_IP_); @@ -204,7 +218,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } EXPORT_SYMBOL(kfree); -@@ -3903,7 +3950,8 @@ static int alloc_kmemlist(struct kmem_ca +@@ -3909,7 +3956,8 @@ static int alloc_kmemlist(struct kmem_ca } n->free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount + cachep->num; @@ -214,14 +228,14 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(shared); free_alien_cache(new_alien); continue; -@@ -4011,8 +4059,8 @@ static int __do_tune_cpucache(struct kme +@@ -4017,8 +4065,8 @@ static int __do_tune_cpucache(struct kme local_spin_lock_irq(slab_lock, &cachep->node[cpu_to_mem(i)]->list_lock); free_block(cachep, ccold->entry, ccold->avail, cpu_to_mem(i)); - local_spin_unlock_irq(slab_lock, -x &cachep->node[cpu_to_mem(i)]->list_lock); + -+ unlock_l3_and_free_delayed(&cachep->nodelists[cpu_to_mem(i)]->list_lock); ++ unlock_l3_and_free_delayed(&cachep->node[cpu_to_mem(i)]->list_lock); kfree(ccold); } kfree(new); diff --git a/patches/mm-slab-wrap-functions.patch b/patches/mm-slab-wrap-functions.patch index c8830f3..798de97 100644 --- a/patches/mm-slab-wrap-functions.patch +++ b/patches/mm-slab-wrap-functions.patch @@ -4,8 +4,8 @@ Date: Sat, 18 Jun 2011 19:44:43 +0200 Signed-off-by: Thomas Gleixner <tglx@linutronix.de> --- - 
mm/slab.c | 157 ++++++++++++++++++++++++++++++++++++++++++-------------------- - 1 file changed, 108 insertions(+), 49 deletions(-) + mm/slab.c | 163 +++++++++++++++++++++++++++++++++++++++++++------------------- + 1 file changed, 114 insertions(+), 49 deletions(-) --- a/mm/slab.c +++ b/mm/slab.c @@ -17,7 +17,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> #include <net/sock.h> -@@ -633,6 +634,37 @@ static void slab_set_debugobj_lock_class +@@ -633,12 +634,49 @@ static void slab_set_debugobj_lock_class #endif static DEFINE_PER_CPU(struct delayed_work, slab_reap_work); @@ -55,7 +55,19 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> static inline struct array_cache *cpu_cache_get(struct kmem_cache *cachep) { -@@ -1073,9 +1105,10 @@ static void reap_alien(struct kmem_cache + return cachep->array[smp_processor_id()]; + } + ++static inline struct array_cache *cpu_cache_get_on_cpu(struct kmem_cache *cachep, ++ int cpu) ++{ ++ return cachep->array[cpu]; ++} ++ + static size_t slab_mgmt_size(size_t nr_objs, size_t align) + { + return ALIGN(sizeof(struct slab)+nr_objs*sizeof(kmem_bufctl_t), align); +@@ -1073,9 +1111,10 @@ static void reap_alien(struct kmem_cache if (n->alien) { struct array_cache *ac = n->alien[node]; @@ -68,7 +80,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } } } -@@ -1090,9 +1123,9 @@ static void drain_alien_cache(struct kme +@@ -1090,9 +1129,9 @@ static void drain_alien_cache(struct kme for_each_online_node(i) { ac = alien[i]; if (ac) { @@ -80,7 +92,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } } } -@@ -1171,11 +1204,11 @@ static int init_cache_node_node(int node +@@ -1171,11 +1210,11 @@ static int init_cache_node_node(int node cachep->node[node] = n; } @@ -94,7 +106,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } return 0; } -@@ -1200,7 +1233,7 @@ static void __cpuinit cpuup_canceled(lon +@@ -1200,7 +1239,7 @@ static void __cpuinit cpuup_canceled(lon if (!n) goto free_array_cache; @@ -103,7 +115,7 @@ 
Signed-off-by: Thomas Gleixner <tglx@linutronix.de> /* Free limit for this kmem_cache_node */ n->free_limit -= cachep->batchcount; -@@ -1208,7 +1241,7 @@ static void __cpuinit cpuup_canceled(lon +@@ -1208,7 +1247,7 @@ static void __cpuinit cpuup_canceled(lon free_block(cachep, nc->entry, nc->avail, node); if (!cpumask_empty(mask)) { @@ -112,7 +124,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> goto free_array_cache; } -@@ -1222,7 +1255,7 @@ static void __cpuinit cpuup_canceled(lon +@@ -1222,7 +1261,7 @@ static void __cpuinit cpuup_canceled(lon alien = n->alien; n->alien = NULL; @@ -121,7 +133,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(shared); if (alien) { -@@ -1296,7 +1329,7 @@ static int __cpuinit cpuup_prepare(long +@@ -1296,7 +1335,7 @@ static int __cpuinit cpuup_prepare(long n = cachep->node[node]; BUG_ON(!n); @@ -130,7 +142,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (!n->shared) { /* * We are serialised from CPU_DEAD or -@@ -1311,7 +1344,7 @@ static int __cpuinit cpuup_prepare(long +@@ -1311,7 +1350,7 @@ static int __cpuinit cpuup_prepare(long alien = NULL; } #endif @@ -139,7 +151,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(shared); free_alien_cache(alien); if (cachep->flags & SLAB_DEBUG_OBJECTS) -@@ -1512,6 +1545,8 @@ void __init kmem_cache_init(void) +@@ -1512,6 +1551,8 @@ void __init kmem_cache_init(void) if (num_possible_nodes() == 1) use_alien_caches = 0; @@ -148,7 +160,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> for (i = 0; i < NUM_INIT_LISTS; i++) kmem_cache_node_init(&init_kmem_cache_node[i]); -@@ -2408,7 +2443,7 @@ __kmem_cache_create (struct kmem_cache * +@@ -2408,7 +2449,7 @@ __kmem_cache_create (struct kmem_cache * #if DEBUG static void check_irq_off(void) { @@ -157,12 +169,12 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } static void check_irq_on(void) -@@ -2443,26 +2478,37 @@ static void drain_array(struct kmem_cach +@@ -2443,26 +2484,37 @@ static void 
drain_array(struct kmem_cach struct array_cache *ac, int force, int node); -static void do_drain(void *arg) -+static void __do_drain(void *arg, unsigned int cpu)+static void __do_drain(void *arg, unsigned int cpu) ++static void __do_drain(void *arg, unsigned int cpu) { struct kmem_cache *cachep = arg; struct array_cache *ac; @@ -200,7 +212,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> check_irq_on(); for_each_online_node(node) { n = cachep->node[node]; -@@ -2493,10 +2539,10 @@ static int drain_freelist(struct kmem_ca +@@ -2493,10 +2545,10 @@ static int drain_freelist(struct kmem_ca nr_freed = 0; while (nr_freed < tofree && !list_empty(&n->slabs_free)) { @@ -213,7 +225,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> goto out; } -@@ -2510,7 +2556,7 @@ static int drain_freelist(struct kmem_ca +@@ -2510,7 +2562,7 @@ static int drain_freelist(struct kmem_ca * to the cache. */ n->free_objects -= cache->num; @@ -222,7 +234,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> slab_destroy(cache, slabp); nr_freed++; } -@@ -2785,7 +2831,7 @@ static int cache_grow(struct kmem_cache +@@ -2785,7 +2837,7 @@ static int cache_grow(struct kmem_cache offset *= cachep->colour_off; if (local_flags & __GFP_WAIT) @@ -231,7 +243,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> /* * The test for missing atomic flag is performed here, rather than -@@ -2815,7 +2861,7 @@ static int cache_grow(struct kmem_cache +@@ -2815,7 +2867,7 @@ static int cache_grow(struct kmem_cache cache_init_objs(cachep, slabp); if (local_flags & __GFP_WAIT) @@ -240,7 +252,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> check_irq_off(); spin_lock(&n->list_lock); -@@ -2829,7 +2875,7 @@ opps1: +@@ -2829,7 +2881,7 @@ opps1: kmem_freepages(cachep, objp); failed: if (local_flags & __GFP_WAIT) @@ -249,7 +261,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> return 0; } -@@ -3243,11 +3289,11 @@ retry: +@@ -3243,11 +3295,11 @@ retry: * set and go into memory reserves if 
necessary. */ if (local_flags & __GFP_WAIT) @@ -263,7 +275,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (obj) { /* * Insert into the appropriate per node queues -@@ -3368,7 +3414,7 @@ slab_alloc_node(struct kmem_cache *cache +@@ -3368,7 +3420,7 @@ slab_alloc_node(struct kmem_cache *cache cachep = memcg_kmem_get_cache(cachep, flags); cache_alloc_debugcheck_before(cachep, flags); @@ -272,7 +284,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (nodeid == NUMA_NO_NODE) nodeid = slab_node; -@@ -3393,7 +3439,7 @@ slab_alloc_node(struct kmem_cache *cache +@@ -3393,7 +3445,7 @@ slab_alloc_node(struct kmem_cache *cache /* ___cache_alloc_node can fall back to other nodes */ ptr = ____cache_alloc_node(cachep, flags, nodeid); out: @@ -281,7 +293,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller); kmemleak_alloc_recursive(ptr, cachep->object_size, 1, cachep->flags, flags); -@@ -3455,9 +3501,9 @@ slab_alloc(struct kmem_cache *cachep, gf +@@ -3455,9 +3507,9 @@ slab_alloc(struct kmem_cache *cachep, gf cachep = memcg_kmem_get_cache(cachep, flags); cache_alloc_debugcheck_before(cachep, flags); @@ -293,7 +305,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller); kmemleak_alloc_recursive(objp, cachep->object_size, 1, cachep->flags, flags); -@@ -3774,9 +3820,9 @@ void kmem_cache_free(struct kmem_cache * +@@ -3774,9 +3826,9 @@ void kmem_cache_free(struct kmem_cache * debug_check_no_locks_freed(objp, cachep->object_size); if (!(cachep->flags & SLAB_DEBUG_OBJECTS)) debug_check_no_obj_freed(objp, cachep->object_size); @@ -305,7 +317,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> trace_kmem_cache_free(_RET_IP_, objp); } -@@ -3805,9 +3851,9 @@ void kfree(const void *objp) +@@ -3805,9 +3857,9 @@ void kfree(const void *objp) debug_check_no_locks_freed(objp, c->object_size); debug_check_no_obj_freed(objp, c->object_size); 
@@ -317,7 +329,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } EXPORT_SYMBOL(kfree); -@@ -3844,7 +3890,7 @@ static int alloc_kmemlist(struct kmem_ca +@@ -3844,7 +3896,7 @@ static int alloc_kmemlist(struct kmem_ca if (n) { struct array_cache *shared = n->shared; @@ -326,7 +338,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (shared) free_block(cachep, shared->entry, -@@ -3857,7 +3903,7 @@ static int alloc_kmemlist(struct kmem_ca +@@ -3857,7 +3909,7 @@ static int alloc_kmemlist(struct kmem_ca } n->free_limit = (1 + nr_cpus_node(node)) * cachep->batchcount + cachep->num; @@ -335,7 +347,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(shared); free_alien_cache(new_alien); continue; -@@ -3904,17 +3950,28 @@ struct ccupdate_struct { +@@ -3904,17 +3956,28 @@ struct ccupdate_struct { struct array_cache *new[0]; }; @@ -348,13 +360,13 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> - check_irq_off(); - old = cpu_cache_get(new->cachep); + old = cpu_cache_get_on_cpu(new->cachep, cpu); -+ -+ new->cachep->array[cpu] = new->new[cpu]; -+ new->new[cpu] = old; -+} - new->cachep->array[smp_processor_id()] = new->new[smp_processor_id()]; - new->new[smp_processor_id()] = old; ++ new->cachep->array[cpu] = new->new[cpu]; ++ new->new[cpu] = old; ++} ++ +#ifndef CONFIG_PREEMPT_RT_BASE +static void do_ccupdate_local(void *info) +{ @@ -369,7 +381,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> /* Always called with the slab_mutex held */ static int __do_tune_cpucache(struct kmem_cache *cachep, int limit, -@@ -3940,7 +3997,7 @@ static int __do_tune_cpucache(struct kme +@@ -3940,7 +4003,7 @@ static int __do_tune_cpucache(struct kme } new->cachep = cachep; @@ -378,7 +390,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> check_irq_on(); cachep->batchcount = batchcount; -@@ -3951,9 +4008,11 @@ static int __do_tune_cpucache(struct kme +@@ -3951,9 +4014,11 @@ static int __do_tune_cpucache(struct kme struct array_cache *ccold = new->new[i]; 
if (!ccold) continue; @@ -392,7 +404,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> kfree(ccold); } kfree(new); -@@ -4068,7 +4127,7 @@ static void drain_array(struct kmem_cach +@@ -4068,7 +4133,7 @@ static void drain_array(struct kmem_cach if (ac->touched && !force) { ac->touched = 0; } else { @@ -401,7 +413,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> if (ac->avail) { tofree = force ? ac->avail : (ac->limit + 4) / 5; if (tofree > ac->avail) -@@ -4078,7 +4137,7 @@ static void drain_array(struct kmem_cach +@@ -4078,7 +4143,7 @@ static void drain_array(struct kmem_cach memmove(ac->entry, &(ac->entry[tofree]), sizeof(void *) * ac->avail); } @@ -410,7 +422,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } } -@@ -4171,7 +4230,7 @@ void get_slabinfo(struct kmem_cache *cac +@@ -4171,7 +4236,7 @@ void get_slabinfo(struct kmem_cache *cac continue; check_irq_on(); @@ -419,7 +431,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> list_for_each_entry(slabp, &n->slabs_full, list) { if (slabp->inuse != cachep->num && !error) -@@ -4196,7 +4255,7 @@ void get_slabinfo(struct kmem_cache *cac +@@ -4196,7 +4261,7 @@ void get_slabinfo(struct kmem_cache *cac if (n->shared) shared_avail += n->shared->avail; @@ -428,7 +440,7 @@ Signed-off-by: Thomas Gleixner <tglx@linutronix.de> } num_slabs += active_slabs; num_objs = num_slabs * cachep->num; -@@ -4396,13 +4455,13 @@ static int leaks_show(struct seq_file *m +@@ -4396,13 +4461,13 @@ static int leaks_show(struct seq_file *m continue; check_irq_on(); diff --git a/patches/peter_zijlstra-frob-pagefault_disable.patch b/patches/peter_zijlstra-frob-pagefault_disable.patch index 355a340..60ac921 100644 --- a/patches/peter_zijlstra-frob-pagefault_disable.patch +++ b/patches/peter_zijlstra-frob-pagefault_disable.patch @@ -20,6 +20,7 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org arch/mips/mm/fault.c | 2 +- arch/mn10300/mm/fault.c | 2 +- arch/parisc/mm/fault.c | 2 +- + 
arch/powerpc/mm/fault.c | 2 +- arch/s390/mm/fault.c | 8 ++++---- arch/score/mm/fault.c | 2 +- arch/sh/mm/fault.c | 2 +- @@ -31,7 +32,7 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org arch/xtensa/mm/fault.c | 2 +- include/linux/sched.h | 14 ++++++++++++++ kernel/fork.c | 2 ++ - 23 files changed, 40 insertions(+), 25 deletions(-) + 24 files changed, 41 insertions(+), 26 deletions(-) --- a/arch/alpha/mm/fault.c +++ b/arch/alpha/mm/fault.c @@ -166,6 +167,17 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org goto no_context; retry: +--- a/arch/powerpc/mm/fault.c ++++ b/arch/powerpc/mm/fault.c +@@ -264,7 +264,7 @@ int __kprobes do_page_fault(struct pt_re + if (!arch_irq_disabled_regs(regs)) + local_irq_enable(); + +- if (in_atomic() || mm == NULL || current->pagefault_disabled) { ++ if (in_atomic() || mm == NULL || pagefault_disabled()) { + if (!user_mode(regs)) { + rc = SIGSEGV; + goto bail; --- a/arch/s390/mm/fault.c +++ b/arch/s390/mm/fault.c @@ -296,8 +296,8 @@ static inline int do_exception(struct pt @@ -288,7 +300,7 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org #include <asm/processor.h> -@@ -1261,7 +1262,9 @@ struct task_struct { +@@ -1260,7 +1261,9 @@ struct task_struct { /* mutex deadlock detection */ struct mutex_waiter *blocked_on; #endif @@ -298,7 +310,7 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org #ifdef CONFIG_TRACE_IRQFLAGS unsigned int irq_events; unsigned long hardirq_enable_ip; -@@ -1444,6 +1447,17 @@ static inline void set_numabalancing_sta +@@ -1443,6 +1446,17 @@ static inline void set_numabalancing_sta } #endif @@ -318,7 +330,7 @@ Link: http://lkml.kernel.org/n/tip-3yy517m8zsi9fpsf14xfaqkw@git.kernel.org return task->pids[PIDTYPE_PID].pid; --- a/kernel/fork.c +++ b/kernel/fork.c -@@ -1293,7 +1293,9 @@ static struct task_struct *copy_process( +@@ -1294,7 +1294,9 @@ static struct task_struct *copy_process( p->hardirq_context = 0; 
  	p->softirq_context = 0;
  #endif
diff --git a/patches/series b/patches/series
index b480970..801eb95 100644
--- a/patches/series
+++ b/patches/series
@@ -7,10 +7,12 @@
 ############################################################
 hpsa-fix-warning-with-smp_processor_id-in-preemptibl.patch
 sparc64-Remove-RWSEM-export-leftovers.patch
+genirq-Set-irq-thread-to-RT-priority-on-creation.patch
 
 ############################################################
 # UPSTREAM FIXES, patches pending
 ############################################################
+genirq-Set-the-irq-thread-policy-without-checking-CA.patch
 
 ############################################################
 # Stuff broken upstream, patches submitted
@@ -441,6 +443,7 @@ patch-to-introduce-rcu-bh-qs-where-safe-from-softirq.patch
 lglocks-rt.patch
 
 # DRIVERS SERIAL
+tty-ldisc-drop-the-raw-lock-earlier.patch
 drivers-serial-cleanup-locking-for-rt.patch
 drivers-serial-call-flush_to_ldisc-when-the-irq-is-t.patch
 drivers-tty-fix-omap-lock-crap.patch
@@ -614,6 +617,7 @@ mmci-remove-bogus-irq-save.patch
 #net-iwlwifi-request-only-a-threaded-handler-for-inte.patch
 
 # I915
+drm-remove-preempt_disable-from-drm_calc_vbltimestam.patch
 i915_compile_fix.patch
 gpu-i915-allow-the-user-not-to-do-the-wbinvd.patch
 drm-i915-drop-trace_i915_gem_ring_dispatch-onrt.patch
diff --git a/patches/tty-ldisc-drop-the-raw-lock-earlier.patch b/patches/tty-ldisc-drop-the-raw-lock-earlier.patch
new file mode 100644
index 0000000..1f66dd9
--- /dev/null
+++ b/patches/tty-ldisc-drop-the-raw-lock-earlier.patch
@@ -0,0 +1,63 @@
+From 15490ab9d58994558ce5de0116c20fc8ac32f68f Mon Sep 17 00:00:00 2001
+From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+Date: Fri, 11 Oct 2013 16:15:08 +0200
+Subject: [PATCH] tty: ldisc: drop the raw lock earlier
+
+Luis Claudio R. Goncalves reported the following:
+
+|BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
+|in_atomic(): 1, irqs_disabled(): 1, pid: 188, name: plymouthd
+|3 locks held by plymouthd/188:
+| #0:  (&tty->legacy_mutex){......}, at: [<ffffffff816528d0>] tty_lock_nested+0x40/0x90
+| #1:  (&tty->ldisc_mutex){......}, at: [<ffffffff8139b482>] tty_ldisc_hangup+0x152/0x300
+| #2:  (tty_ldisc_lock){......}, at: [<ffffffff8139a9c2>] tty_ldisc_reinit+0x72/0x130
+|Preemption disabled at:[<ffffffff8139a9c2>] tty_ldisc_reinit+0x72/0x130
+|CPU: 2 PID: 188 Comm: plymouthd Not tainted 3.10.10-rt7+ #6
+|Call Trace:
+| [<ffffffff8164b790>] dump_stack+0x19/0x1b
+| [<ffffffff8107e62f>] __might_sleep+0xff/0x170
+| [<ffffffff81651ac4>] rt_spin_lock+0x24/0x60
+| [<ffffffff81130984>] free_hot_cold_page+0xb4/0x3c0
+| [<ffffffff81178209>] ? unfreeze_partials.isra.42+0x229/0x2b0
+| [<ffffffff81130dc7>] __free_pages+0x47/0x70
+| [<ffffffff81130fb2>] __free_memcg_kmem_pages+0x22/0x50
+| [<ffffffff81177528>] __free_slab+0xe8/0x1e0
+| [<ffffffff81177654>] free_delayed+0x34/0x50
+| [<ffffffff81649633>] __slab_free+0x273/0x36b
+| [<ffffffff81178794>] kfree+0x1c4/0x210
+| [<ffffffff8139a9f5>] tty_ldisc_reinit+0xa5/0x130
+| [<ffffffff8139b49f>] tty_ldisc_hangup+0x16f/0x300
+| [<ffffffff81392136>] __tty_hangup+0x346/0x460
+| [<ffffffff81392260>] tty_vhangup+0x10/0x20
+| [<ffffffff8139d6e1>] pty_close+0x131/0x180
+| [<ffffffff813936ad>] tty_release+0x11d/0x5f0
+
+It looks safe to drop the tty_ldisc_lock before module_put() and
+kfree(), so do it.
+
+Cc: stable-rt@vger.kernel.org
+Reported-By: Luis Claudio R. Goncalves <lclaudio@uudg.org>
+Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
+---
+ drivers/tty/tty_ldisc.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 1afe192..bc4ffe4 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -197,9 +197,10 @@ static inline void tty_ldisc_put(struct tty_ldisc *ld)
+ 	WARN_ON(!atomic_dec_and_test(&ld->users));
+ 
+ 	ld->ops->refcount--;
++	raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags);
++
+ 	module_put(ld->ops->owner);
+ 	kfree(ld);
+-	raw_spin_unlock_irqrestore(&tty_ldisc_lock, flags);
+ }
+ 
+ static void *tty_ldiscs_seq_start(struct seq_file *m, loff_t *pos)
+-- 
+1.8.4.rc3