x86/asm/entry: Switch all C consumers of kernel_stack to this_cpu_sp0()
author Andy Lutomirski <luto@amacapital.net>
Fri, 6 Mar 2015 03:19:03 +0000 (19:19 -0800)
committer Ingo Molnar <mingo@kernel.org>
Fri, 6 Mar 2015 07:32:57 +0000 (08:32 +0100)
This will make modifying the semantics of kernel_stack easier.

The change to ist_begin_non_atomic() is necessary because sp0 no
longer points to the same THREAD_SIZE-aligned region as RSP;
it's one byte too high for that.  At Denys' suggestion, rather
than offsetting it, just check explicitly that we're in the
correct range ending at sp0.  This has the added benefit that we
no longer assume that the thread stack is aligned to
THREAD_SIZE.
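
To illustrate the arithmetic behind the new check, here is a minimal
standalone sketch (not kernel code; on_thread_stack() and its parameters
are made up for illustration, with sp0, rsp and thread_size standing in
for this_cpu_sp0(), current_stack_pointer() and THREAD_SIZE):

#include <stdbool.h>

/*
 * Illustration of the range check used in ist_begin_non_atomic():
 * with unsigned arithmetic, (sp0 - rsp) < thread_size accepts exactly
 * the rsp values in the range (sp0 - thread_size, sp0].  If rsp is
 * above sp0, the subtraction wraps to a huge value and the test fails
 * too, so a single comparison covers both bounds and nothing relies on
 * the stack being THREAD_SIZE-aligned.
 */
static bool on_thread_stack(unsigned long sp0, unsigned long rsp,
			    unsigned long thread_size)
{
	return sp0 - rsp < thread_size;
}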

Suggested-by: Denys Vlasenko <dvlasenk@redhat.com>
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/ef8254ad414cbb8034c9a56396eeb24f5dd5b0de.1425611534.git.luto@amacapital.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
arch/x86/include/asm/thread_info.h
arch/x86/kernel/traps.c

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 1d4e4f2..a2fa189 100644
@@ -159,8 +159,7 @@ DECLARE_PER_CPU(unsigned long, kernel_stack);
 static inline struct thread_info *current_thread_info(void)
 {
        struct thread_info *ti;
-       ti = (void *)(this_cpu_read_stable(kernel_stack) +
-                     KERNEL_STACK_OFFSET - THREAD_SIZE);
+       ti = (void *)(this_cpu_sp0() - THREAD_SIZE);
        return ti;
 }
 
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 9965bd1..fa29058 100644
@@ -174,8 +174,8 @@ void ist_begin_non_atomic(struct pt_regs *regs)
         * will catch asm bugs and any attempt to use ist_preempt_enable
         * from double_fault.
         */
-       BUG_ON(((current_stack_pointer() ^ this_cpu_read_stable(kernel_stack))
-               & ~(THREAD_SIZE - 1)) != 0);
+       BUG_ON((unsigned long)(this_cpu_sp0() - current_stack_pointer()) >=
+              THREAD_SIZE);
 
        preempt_count_sub(HARDIRQ_OFFSET);
 }