Horia Geantă [Fri, 10 Feb 2017 12:07:23 +0000 (14:07 +0200)]
crypto: caam - fix error path for ctx_dma mapping failure
commit
87ec02e7409d787348c244039aa3536a812dfa8b upstream.
In case ctx_dma dma mapping fails, ahash_unmap_ctx() tries to
dma unmap an invalid address:
map_seq_out_ptr_ctx() / ctx_map_to_sec4_sg() -> goto unmap_ctx ->
-> ahash_unmap_ctx() -> dma unmap ctx_dma
It is also possible to reach ahash_unmap_ctx() with ctx_dma
uninitialized, or to try to unmap the same address twice.
Fix these by setting ctx_dma = 0 where needed:
- initialize ctx_dma in ahash_init()
- clear ctx_dma in case of mapping error (instead of holding
  the error code returned by the dma map function)
- clear ctx_dma after each unmapping
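A minimal sketch of the pattern described above (field names follow the caam ahash driver, the surrounding context is assumed):

    /* mapping: never keep the DMA error cookie in ctx_dma */
    state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len,
                                    DMA_TO_DEVICE);
    if (dma_mapping_error(jrdev, state->ctx_dma)) {
        dev_err(jrdev, "unable to map ctx\n");
        state->ctx_dma = 0;
        return -ENOMEM;
    }

    /* unmapping: clear ctx_dma so the same address cannot be unmapped twice */
    if (state->ctx_dma) {
        dma_unmap_single(jrdev, state->ctx_dma, ctx_len, DMA_TO_DEVICE);
        state->ctx_dma = 0;
    }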
Fixes:
32686d34f8fb6 ("crypto: caam - ensure that we clean up after an error")
Signed-off-by: Horia Geantă <horia.geanta@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Winkler, Tomas [Wed, 23 Nov 2016 10:04:13 +0000 (12:04 +0200)]
tpm: use pdev for parent device in tpm_chip_alloc
commit
2998b02b2fb58f36ccbc318b00513174e9947d8e upstream.
The TPM stack uses the pdev naming convention for the parent device.
Fix that also in tpm_chip_alloc().
Fixes:
3897cd9c8d1d ("tpm: Split out the devm stuff from tpmm_chip_alloc")
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jarkko Sakkinen [Wed, 25 Jan 2017 21:00:22 +0000 (23:00 +0200)]
tpm: fix RC value check in tpm2_seal_trusted
commit
7d761119a914ec0ac05ec2a5378d1f86e680967d upstream.
The error code handling is broken, as any error code that has the same
bits set as TPM_RC_HASH passes the check. Implement a tpm2_rc_value()
helper to parse the error value out of FMT0 and FMT1 error codes so that
these types of mistakes are prevented in the future.
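A sketch of what such a helper can look like (the FMT0/FMT1 mask layout here is an assumption, not quoted from the patch):

    static inline u32 tpm2_rc_value(u32 rc)
    {
        /* FMT1 codes (bit 7 set) carry extra parameter/handle bits;
         * keep only the error value before comparing */
        return (rc & BIT(7)) ? rc & 0xff : rc;
    }

With this, a check such as tpm2_rc_value(rc) == TPM_RC_HASH no longer matches unrelated error codes that merely share bits with TPM_RC_HASH.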
Fixes:
5ca4c20cfd37 ("keys, trusted: select hash algorithm for TPM2 chips")
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Wed, 8 Feb 2017 22:05:56 +0000 (14:05 -0800)]
hwmon: (it87) Fix pwm4 detection for IT8620 and IT8628
commit
d66777caa57ffade6061782f3a4d4056f0b0c1ac upstream.
pwm4 is enabled when bit 2 of GPIO control register 4 is disabled,
not when it is enabled. Since the code checks for the skip condition,
the test has to be reversed. This applies to both IT8620 and IT8628.
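Illustrative sketch of the inverted test (the register read and the skip-mask bit are assumptions, only the direction of the check is taken from the description above):

    /* pwm4 is available when bit 2 of GPIO control register 4 is cleared,
     * so skip pwm4 only when that bit is set */
    if (reg & BIT(2))
        sio_data->skip_pwm |= BIT(3);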
Fixes:
36c4d98a7883d ("hwmon: (it87) Add support for all pwm channels ...")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vincent Abriou [Thu, 23 Mar 2017 14:44:52 +0000 (15:44 +0100)]
drm/sti: fix GDP size to support up to UHD resolution
commit
2f410f88c0a1c7e19762918d2614f506a7b63a82 upstream.
On the stih407-410 chip family the GDP layers are able to support up to UHD
resolution (3840 x 2160).
Signed-off-by: Vincent Abriou <vincent.abriou@st.com>
Acked-by: Lee Jones <lee.jones@linaro.org>
Tested-by: Lee Jones <lee.jones@linaro.org>
Link: http://patchwork.freedesktop.org/patch/msgid/1490280292-30466-1-git-send-email-vincent.abriou@st.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cong Wang [Wed, 22 Feb 2017 23:40:53 +0000 (15:40 -0800)]
9p: fix a potential acl leak
commit
b5c66bab72a6a65edb15beb60b90d3cb84c5763b upstream.
posix_acl_update_mode() could possibly clear 'acl'; if so, we leak the
memory pointed to by 'acl'. Save this pointer before calling
posix_acl_update_mode() and release the memory if 'acl' really gets
cleared.
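Roughly, the idea (a sketch, details assumed):

    struct posix_acl *old_acl = acl;

    retval = posix_acl_update_mode(inode, &iattr.ia_mode, &acl);
    if (retval)
        goto err_out;
    if (!acl) {
        /* posix_acl_update_mode() cleared the ACL: drop the reference
         * we still hold on the original object instead of leaking it */
        posix_acl_release(old_acl);
        value = NULL;
        size = 0;
    }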
Link: http://lkml.kernel.org/r/1486678332-2430-1-git-send-email-xiyou.wangcong@gmail.com
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Reported-by: Mark Salyzyn <salyzyn@android.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Greg Kurz <groug@kaod.org>
Cc: Eric Van Hensbergen <ericvh@gmail.com>
Cc: Ron Minnich <rminnich@sandia.gov>
Cc: Latchesar Ionkov <lucho@ionkov.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Mon, 8 May 2017 05:48:32 +0000 (07:48 +0200)]
Linux 4.9.27
Adrian Salido [Thu, 27 Apr 2017 17:32:55 +0000 (10:32 -0700)]
dm ioctl: prevent stack leak in dm ioctl call
commit
4617f564c06117c7d1b611be49521a4430042287 upstream.
When calling a dm ioctl that doesn't process any data
(IOCTL_FLAGS_NO_PARAMS), the contents of the data field in struct
dm_ioctl are left uninitialized. Current code incorrectly extends
the size of the data copied back to user space, causing the contents of
the kernel stack to be leaked to the user. Fix by copying only the
contents before the data field and allow the functions processing the
ioctl to override that.
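A sketch of the intent (flag and field names are taken from the text above, the exact plumbing is assumed):

    /*
     * By default only the fixed part of struct dm_ioctl is valid output;
     * handlers that really fill the variable-size data area bump
     * data_size themselves.
     */
    param->data_size = offsetof(struct dm_ioctl, data);

    r = fn(param, input_param_size);    /* the ioctl handler */

    if (copy_to_user(user, param, param->data_size))
        return -EFAULT;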
Signed-off-by: Adrian Salido <salidoa@google.com>
Reviewed-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Andrzej Siewior [Tue, 14 Mar 2017 15:06:45 +0000 (16:06 +0100)]
cpu/hotplug: Serialize callback invocations proper
commit
dc434e056fe1dada20df7ba07f32739d3a701adf upstream.
The setup/remove_state/instance() functions in the hotplug core code are
serialized against concurrent CPU hotplug, but unfortunately not serialized
against themselves.
As a consequence a concurrent invocation of these functions results in
corruption of the callback machinery because two instances try to invoke
callbacks on remote cpus at the same time. This results in missing callback
invocations and initiator threads waiting forever on the completion.
The obvious solution of replacing get_online_cpus() with cpu_hotplug_begin()
is not possible because at least one callsite calls into these functions
from a get_online_cpus() locked region.
Extend the protection scope of the cpuhp_state_mutex from solely protecting
the state arrays to cover the callback invocation machinery as well.
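In essence (a sketch, call sites and signatures abbreviated):

    /* hold cpuhp_state_mutex across both the state-array update and the
     * remote callback invocations so concurrent setup/remove calls cannot
     * interleave */
    mutex_lock(&cpuhp_state_mutex);
    ret = cpuhp_store_callbacks(state, name, startup, teardown, multi_instance);
    if (!ret)
        ret = cpuhp_issue_call(cpu, state, true, NULL);
    mutex_unlock(&cpuhp_state_mutex);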
Fixes:
5b7aa87e0482 ("cpu/hotplug: Implement setup/removal interface")
Reported-and-tested-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: hpa@zytor.com
Cc: mingo@kernel.org
Cc: akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org
Link: http://lkml.kernel.org/r/20170314150645.g4tdyoszlcbajmna@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yan, Zheng [Tue, 25 Oct 2016 02:51:55 +0000 (10:51 +0800)]
ceph: try getting buffer capability for readahead/fadvise
commit
2b1ac852eb67a6e95595e576371d23519105559f upstream.
For readahead/fadvise cases, the caller of ceph_readpages does not
hold the buffer capability. Pages can be added to the page cache while
there is no buffer capability. This can cause data integrity
issues.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gabriel Krisman Bertazi [Wed, 28 Dec 2016 18:42:00 +0000 (16:42 -0200)]
8250_pci: Fix potential use-after-free in error path
commit
c130b666a9a711f985a0a44b58699ebe14bb7245 upstream.
Commit
f209fa03fc9d ("serial: 8250_pci: Detach low-level driver during
PCI error recovery") introduces a potential use-after-free in case the
pciserial_init_ports call in serial8250_io_resume fails, which may
happen if a memory allocation fails or if the .init quirk failed for
whatever reason. If this happens, a further pci_get_drvdata will return a
pointer to freed memory.
This patch reworks the PCI recovery resume hook to restore the old priv
structure in this case, which should be ok, since the ports were already
detached. Such an error during recovery causes us to give up on the
recovery.
Fixes:
f209fa03fc9d ("serial: 8250_pci: Detach low-level driver during PCI error recovery")
Reported-by: Michal Suchanek <msuchanek@suse.com>
Signed-off-by: Gabriel Krisman Bertazi <krisman@linux.vnet.ibm.com>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Sun, 12 Mar 2017 13:18:58 +0000 (06:18 -0700)]
hwmon: (it87) Avoid registering the same chip on both SIO addresses
commit
8358378b22518d92424597503d3c1cd302a490b6 upstream.
IT8705F is known to respond on both SIO addresses. Registering it twice
may result in system lockups.
Reported-by: Russell King <linux@armlinux.org.uk>
Fixes:
e84bd9535e2b ("hwmon: (it87) Add support for second Super-IO chip")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Cc: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Hemminger [Tue, 7 Mar 2017 17:15:53 +0000 (09:15 -0800)]
scsi: storvsc: Workaround for virtual DVD SCSI version
commit
f1c635b439a5c01776fe3a25b1e2dc546ea82e6f upstream.
Hyper-V host emulation of SCSI for virtual DVD device reports SCSI
version 0 (UNKNOWN) but is still capable of supporting REPORTLUN.
Without this patch, a GEN2 Linux guest on Hyper-V will not boot 4.11
successfully with a virtual DVD ROM device. What happens is that the SCSI
scan process falls back to doing sequential probing by INQUIRY. But the
storvsc driver has a previous workaround that masks/blocks all error
reports from INQUIRY (or MODE_SENSE) commands. This workaround causes
the scan to then populate a full set of bogus LUNs on the target and
then sends the kernel spinning off into a death spiral doing block reads on
the non-existent LUNs.
By setting the correct blacklist flags, the target with the DVD device
is scanned with REPORTLUN and that works correctly.
The patch needs to go into current 4.11; it is safe but not necessary in
older kernels.
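A sketch of what "setting the correct blacklist flags" can look like in the storvsc device-alloc path (the exact flag choice and hook are assumptions):

    static int storvsc_device_alloc(struct scsi_device *sdevice)
    {
        /*
         * The host reports SCSI_UNKNOWN for the virtual DVD device but
         * handles REPORT LUNS fine, so tell the midlayer to rely on it.
         */
        sdevice->sdev_bflags = BLIST_REPORTLUN2 | BLIST_TRY_VPD_PAGES;

        return 0;
    }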
Signed-off-by: Stephen Hemminger <sthemmin@microsoft.com>
Reviewed-by: K. Y. Srinivasan <kys@microsoft.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maciej S. Szmigiero [Fri, 13 Jan 2017 21:37:00 +0000 (22:37 +0100)]
tpm_tis: use default timeout value if chip reports it as zero
commit
1d70fe9d9c3a4c627f9757cbba5d628687b121c1 upstream.
Since commit
1107d065fdf1 ("tpm_tis: Introduce intermediate layer for
TPM access") Atmel 3203 TPM on ThinkPad X61S (TPM firmware version 13.9)
no longer works. The initialization proceeds fine until we get and
start using chip-reported timeouts - and the chip reports C and D
timeouts of zero.
It turns out that until commit
8e54caf407b98e ("tpm: Provide a generic
means to override the chip returned timeouts") we had actually let
default timeout values remain in this case, so let's bring back this
behavior to make chips like Atmel 3203 work again.
Use the common code that was introduced by that commit so a warning is
printed in this case and /sys/class/tpm/tpm*/timeouts correctly says the
timeouts aren't chip-original.
This is a backport for 4.9 kernel version of the original commit, with
renaming of "TPM_TIS_ITPM_POSSIBLE" flag removed since it was only a
cosmetic change and not a part of the real bug fix.
Fixes:
1107d065fdf1 ("tpm_tis: Introduce intermediate layer for TPM access")
Cc: stable@vger.kernel.org
Signed-off-by: Maciej S. Szmigiero <mail@maciej.szmigiero.name>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sachin Prabhu [Fri, 3 Mar 2017 23:41:38 +0000 (15:41 -0800)]
Handle mismatched open calls
commit
38bd49064a1ecb67baad33598e3d824448ab11ec upstream.
A signal can interrupt a SendReceive call, which results in incoming
responses to the call being ignored. This is a problem for calls such as
open, where ignoring the successful response leaves an open file
resource on the server.
The patch looks at responses which were cancelled after being sent and,
in case of a successful open, closes the open fids.
For this patch, the check is only done in SendReceive2().
RH-bz:
1403319
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Acked-by: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 31 Jan 2017 14:24:03 +0000 (15:24 +0100)]
timerfd: Protect the might cancel mechanism proper
commit
1e38da300e1e395a15048b0af1e5305bd91402f6 upstream.
The handling of the might_cancel queueing is not properly protected, so
parallel operations on the file descriptor can race with each other and
lead to list corruption or use-after-free.
Protect the context for these operations with a separate lock.
The wait queue lock cannot be reused for this because that would create a
lock inversion scenario vs. the cancel lock. Replacing might_cancel with an
atomic (atomic_t or atomic bit) does not help either because it still can
race vs. the actual list operation.
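Conceptually (a sketch; the list handling details are assumed, only the separate per-context lock is the point):

    spin_lock(&ctx->cancel_lock);
    if (ctx->might_cancel) {
        ctx->might_cancel = false;
        list_del_rcu(&ctx->clist);      /* remove from the cancel list */
    }
    spin_unlock(&ctx->cancel_lock);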
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "linux-fsdevel@vger.kernel.org"
Cc: syzkaller <syzkaller@googlegroups.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: linux-fsdevel@vger.kernel.org
Link: http://lkml.kernel.org/r/alpine.DEB.2.20.1701311521430.3457@nanos
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Wed, 3 May 2017 15:36:50 +0000 (08:36 -0700)]
Linux 4.9.26
Josh Poimboeuf [Thu, 13 Apr 2017 22:53:55 +0000 (17:53 -0500)]
ftrace/x86: Fix triple fault with graph tracing and suspend-to-ram
commit
34a477e5297cbaa6ecc6e17c042a866e1cbe80d6 upstream.
On x86-32, with CONFIG_FIRMWARE and multiple CPUs, if you enable function
graph tracing and then suspend to RAM, it will triple fault and reboot when
it resumes.
The first fault happens when booting a secondary CPU:
startup_32_smp()
load_ucode_ap()
prepare_ftrace_return()
ftrace_graph_is_dead()
(accesses 'kill_ftrace_graph')
The early head_32.S code calls into load_ucode_ap(), which has an
ftrace hook, so it calls prepare_ftrace_return(), which calls
ftrace_graph_is_dead(), which tries to access the global
'kill_ftrace_graph' variable with a virtual address, causing a fault
because the CPU is still in real mode.
The fix is to add a check in prepare_ftrace_return() to make sure it's
running in protected mode before continuing. The check makes sure the
stack pointer is a virtual kernel address. It's a bit of a hack, but
it's not very intrusive and it works well enough.
For reference, here are a few other (more difficult) ways this could
have potentially been fixed:
- Move startup_32_smp()'s call to load_ucode_ap() down to *after* paging
is enabled. (No idea what that would break.)
- Track down load_ucode_ap()'s entire callee tree and mark all the
functions 'notrace'. (Probably not realistic.)
- Pause graph tracing in ftrace_suspend_notifier_call() or bringup_cpu()
or __cpu_up(), and ensure that the pause facility can be queried from
real mode.
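The check that was chosen can be sketched as follows (an illustration, not the literal diff):

    /* in prepare_ftrace_return(), before touching any global state: */
    /*
     * If the stack pointer is not a virtual kernel address, the CPU is
     * still in real mode (early secondary CPU bring-up); graph tracing
     * cannot work yet, so bail out.  On x86-32 kernel addresses are
     * negative when interpreted as a signed long.
     */
    if (unlikely((long)__builtin_frame_address(0) >= 0))
        return;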
Reported-by: Paul Menzel <pmenzel@molgen.mpg.de>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Paul Menzel <pmenzel@molgen.mpg.de>
Reviewed-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Cc: "Rafael J . Wysocki" <rjw@rjwysocki.net>
Cc: linux-acpi@vger.kernel.org
Cc: Borislav Petkov <bp@alien8.de>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/5c1272269a580660703ed2eccf44308e790c7a98.1492123841.git.jpoimboe@redhat.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineet Gupta [Mon, 9 Jan 2017 03:45:48 +0000 (19:45 -0800)]
ARCv2: save r30 on kernel entry as gcc uses it for code-gen
commit
ecd43afdbe72017aefe48080631eb625e177ef4d upstream.
This is not exposed to userspace debuggers yet; that can be done
independently as a separate patch.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maksim Salau [Sun, 23 Apr 2017 17:31:40 +0000 (20:31 +0300)]
net: can: usb: gs_usb: Fix buffer on stack
commit
b05c73bd1e3ec60357580eb042ee932a5ed754d5 upstream.
Allocate buffers on the heap instead of the stack for local structures
that are to be sent using usb_control_msg().
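The usual pattern, sketched (struct and request names follow the gs_usb driver but are assumptions here); usb_control_msg() needs DMA-able memory, so the structure is kmalloc'ed, used and freed instead of living on the stack:

    struct gs_device_mode *dm;
    int rc;

    dm = kmalloc(sizeof(*dm), GFP_KERNEL);      /* was a stack variable */
    if (!dm)
        return -ENOMEM;
    dm->mode = GS_CAN_MODE_RESET;
    rc = usb_control_msg(interface_to_usbdev(intf),
                         usb_sndctrlpipe(interface_to_usbdev(intf), 0),
                         GS_USB_BREQ_MODE,
                         USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_INTERFACE,
                         0 /* channel */, 0, dm, sizeof(*dm), 1000);
    kfree(dm);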
Signed-off-by: Maksim Salau <maksim.salau@gmail.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason A. Donenfeld [Fri, 21 Apr 2017 21:14:48 +0000 (23:14 +0200)]
macsec: avoid heap overflow in skb_to_sgvec
commit
4d6fa57b4dab0d77f4d8e9d9c73d1e63f6fe8fee upstream.
While this may appear as a humdrum one line change, it's actually quite
important. An sk_buff stores data in three places:
1. A linear chunk of allocated memory in skb->data. This is the easiest
one to work with, but it precludes using scatterdata since the memory
must be linear.
2. The array skb_shinfo(skb)->frags, which is of maximum length
MAX_SKB_FRAGS. This is nice for scattergather, since these fragments
can point to different pages.
3. skb_shinfo(skb)->frag_list, which is a pointer to another sk_buff,
which in turn can have data in either (1) or (2).
The first two are rather easy to deal with, since they're of a fixed
maximum length, while the third one is not, since there can be
potentially limitless chains of fragments. Fortunately dealing with
frag_list is opt-in for drivers, so drivers don't actually have to deal
with this mess. For whatever reason, macsec decided it wanted pain, and
so it explicitly specified NETIF_F_FRAGLIST.
Because dealing with (1), (2), and (3) is insane, most users of sk_buff
doing any sort of crypto or paging operation call a convenient function
called skb_to_sgvec (which happens to be recursive if (3) is in use!).
This takes a sk_buff as input, and writes into its output pointer an
array of scattergather list items. Sometimes people like to declare a
fixed size scattergather list on the stack; other times people like to
allocate a fixed size scattergather list on the heap. However, if you're
doing it in a fixed-size fashion, you really shouldn't be using
NETIF_F_FRAGLIST too (unless you're also ensuring the sk_buff and its
frag_list children aren't shared and then you check the number of
fragments in total required).
Macsec specifically does this:
    size += sizeof(struct scatterlist) * (MAX_SKB_FRAGS + 1);
    tmp = kmalloc(size, GFP_ATOMIC);
    *sg = (struct scatterlist *)(tmp + sg_offset);
    ...
    sg_init_table(sg, MAX_SKB_FRAGS + 1);
    skb_to_sgvec(skb, sg, 0, skb->len);
Specifying MAX_SKB_FRAGS + 1 is the right answer usually, but not if you're
using NETIF_F_FRAGLIST, in which case the call to skb_to_sgvec will
overflow the heap, and disaster ensues.
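The one-line change referred to above is, in essence, dropping NETIF_F_FRAGLIST from the advertised features (sketch; the macro name follows the macsec driver, the exact definition is assumed):

    /* previously also included NETIF_F_FRAGLIST, which made the
     * MAX_SKB_FRAGS + 1 sized scatterlist above overflowable */
    #define MACSEC_FEATURES \
            (NETIF_F_SG | NETIF_F_HIGHDMA)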
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yan, Zheng [Wed, 19 Apr 2017 02:01:48 +0000 (10:01 +0800)]
ceph: fix recursion between ceph_set_acl() and __ceph_setattr()
commit
8179a101eb5f4ef0ac9a915fcea9a9d3109efa90 upstream.
ceph_set_acl() calls __ceph_setattr() if the setacl operation needs
to modify inode's i_mode. __ceph_setattr() updates inode's i_mode,
then calls posix_acl_chmod().
The problem is that __ceph_setattr() calls posix_acl_chmod() before
sending the setattr request. The get_acl() call in posix_acl_chmod()
can trigger a getxattr request. The reply of the getxattr request
can restore the inode's i_mode to its old value. The set_acl() call in
posix_acl_chmod() then sees the old value of the inode's i_mode, so it calls
__ceph_setattr() again.
Link: http://tracker.ceph.com/issues/19688
Reported-by: Jerry Lee <leisurelysw24@gmail.com>
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Tested-by: Luis Henriques <lhenriques@suse.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
J. Bruce Fields [Fri, 21 Apr 2017 19:26:30 +0000 (15:26 -0400)]
nfsd: stricter decoding of write-like NFSv2/v3 ops
commit
13bf9fbff0e5e099e2b6f003a0ab8ae145436309 upstream.
The NFSv2/v3 code does not systematically check whether we decode past
the end of the buffer. This generally appears to be harmless, but there
are a few places where we do arithmetic on the pointers involved and
don't account for the possibility that a length could be negative. Add
checks to catch these.
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
J. Bruce Fields [Tue, 25 Apr 2017 20:21:34 +0000 (16:21 -0400)]
nfsd4: minor NFSv2/v3 write decoding cleanup
commit
db44bac41bbfc0c0d9dd943092d8bded3c9db19b upstream.
Use a couple of shortcuts that will simplify a following bugfix.
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
J. Bruce Fields [Fri, 21 Apr 2017 20:10:18 +0000 (16:10 -0400)]
nfsd: check for oversized NFSv2/v3 arguments
commit
e6838a29ecb484c97e4efef9429643b9851fba6e upstream.
A client can append random data to the end of an NFSv2 or NFSv3 RPC call
without our complaining; we'll just stop parsing at the end of the
expected data and ignore the rest.
Encoded arguments and replies are stored together in an array of pages,
and if a call is too large it could leave inadequate space for the
reply. This is normally OK because NFS RPC's typically have either
short arguments and long replies (like READ) or long arguments and short
replies (like WRITE). But a client that sends an incorrectly long call
can violate those assumptions. This was observed to cause crashes.
Also, several operations increment rq_next_page in the decode routine
before checking the argument size, which can leave rq_next_page pointing
well past the end of the page array, causing trouble later in
svc_free_pages.
So, following a suggestion from Neil Brown, add a central check to
enforce our expectation that no NFSv2/v3 call has both a large call and
a large reply.
As followup we may also want to rewrite the encoding routines to check
more carefully that they aren't running off the end of the page array.
We may also consider rejecting calls that have any extra garbage
appended. That would be safer, and within our rights by spec, but given
the age of our server and the NFS protocol, and the fact that we've
never enforced this before, we may need to balance that against the
possibility of breaking some oddball client.
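The central check can be imagined along these lines (a sketch, not the literal patch; the procedure fields are the standard sunrpc ones):

    static bool nfs_request_too_big(struct svc_rqst *rqstp,
                                    struct svc_procedure *proc)
    {
        /* NFSv4 compounds may legitimately have large args and replies */
        if (rqstp->rq_vers >= 4)
            return false;
        /* if the reply is known to be small, a large call is harmless */
        if (proc->pc_xdrressize > 0 &&
            proc->pc_xdrressize < XDR_QUADLEN(PAGE_SIZE))
            return false;
        return rqstp->rq_arg.len > PAGE_SIZE;
    }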
Reported-by: Tuomas Haanpää <thaan@synopsys.com>
Reported-by: Ari Kauppi <ari@synopsys.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dmitry Torokhov [Thu, 13 Apr 2017 22:36:31 +0000 (15:36 -0700)]
Input: i8042 - add Clevo P650RS to the i8042 reset list
commit
7c5bb4ac2b76d2a09256aec8a7d584bf3e2b0466 upstream.
Clevo P650RS and other similar devices require i8042 to be reset in order
to detect Synaptics touchpad.
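For reference, such quirks are expressed as an entry in i8042's reset DMI table; the match strings below are placeholders, only the structure is real:

    {
        .matches = {
            DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
            DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
        },
    },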
Reported-by: Paweł Bylica <chfast@gmail.com>
Tested-by: Ed Bordin <edbordin@gmail.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=190301
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Mon, 24 Apr 2017 12:09:55 +0000 (14:09 +0200)]
ASoC: intel: Fix PM and non-atomic crash in bytcr drivers
commit
6e4cac23c5a648d50b107d1b53e9c4e1120c7943 upstream.
The FE setups of Intel SST bytcr_rt5640 and bytcr_rt5651 drivers carry
the ignore_suspend flag, and this prevents the suspend/resume working
properly while the stream is running, since SST core code has the
check of the running streams and returns -EBUSY. Drop these
superfluous flags for fixing the behavior.
Also, the bytcr_rt5640 driver lacks the nonatomic flag in some FE
definitions, which leads to a kernel Oops at suspend/resume like:
BUG: scheduling while atomic: systemd-sleep/3144/0x00000003
Call Trace:
dump_stack+0x5c/0x7a
__schedule_bug+0x55/0x70
__schedule+0x63c/0x8c0
schedule+0x3d/0x90
schedule_timeout+0x16b/0x320
? del_timer_sync+0x50/0x50
? sst_wait_timeout+0xa9/0x170 [snd_intel_sst_core]
? sst_wait_timeout+0xa9/0x170 [snd_intel_sst_core]
? remove_wait_queue+0x60/0x60
? sst_prepare_and_post_msg+0x275/0x960 [snd_intel_sst_core]
? sst_pause_stream+0x9b/0x110 [snd_intel_sst_core]
....
This patch addresses these appropriately, too.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Al Viro [Fri, 14 Apr 2017 21:22:18 +0000 (17:22 -0400)]
p9_client_readdir() fix
commit
71d6ad08379304128e4bdfaf0b4185d54375423e upstream.
Don't assume that the server is sane and won't return more data than
was asked for.
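A sketch of the defensive clamp (variable names assumed from p9_client_readdir()):

    /* never copy more than the rsize we asked the server for */
    if (count > rsize)
        count = rsize;
    memmove(data, dataptr, count);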
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Cowgill [Tue, 11 Apr 2017 12:51:07 +0000 (13:51 +0100)]
MIPS: Avoid BUG warning in arch_check_elf
commit
c46f59e90226fa5bfcc83650edebe84ae47d454b upstream.
arch_check_elf contains a usage of current_cpu_data that will call
smp_processor_id() with preemption enabled and therefore triggers a
"BUG: using smp_processor_id() in preemptible" warning when an fpxx
executable is loaded.
As a follow-up to commit
b244614a60ab ("MIPS: Avoid a BUG warning during
prctl(PR_SET_FP_MODE, ...)"), apply the same fix to arch_check_elf by
using raw_current_cpu_data instead. The rationale quoted from the previous
commit:
"It is assumed throughout the kernel that if any CPU has an FPU, then
all CPUs would have an FPU as well, so it is safe to perform the check
with preemption enabled - change the code to use raw_ variant of the
check to avoid the warning."
Fixes:
46490b572544 ("MIPS: kernel: elf: Improve the overall ABI and FPU mode checks")
Signed-off-by: James Cowgill <James.Cowgill@imgtec.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15951/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Hogan [Wed, 5 Apr 2017 15:32:45 +0000 (16:32 +0100)]
MIPS: cevt-r4k: Fix out-of-bounds array access
commit
9d7f29cdb4ca53506115cf1d7a02ce6013894df0 upstream.
calculate_min_delta() may incorrectly access a 4th element of buf2[]
which only has 3 elements. This may trigger undefined behaviour and has
been reported to cause strange crashes in start_kernel() sometime after
timer initialization when built with GCC 5.3, possibly due to
register/stack corruption:
sched_clock: 32 bits at 200MHz, resolution 5ns, wraps every 10737418237ns
CPU 0 Unable to handle kernel paging request at virtual address ffffb0aa, epc == 8067daa8, ra == 8067da84
Oops[#1]:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.18 #51
task: 8065e3e0 task.stack: 80644000
$ 0 : 00000000 00000001 00000000 00000000
$ 4 : 8065b4d0 00000000 805d0000 00000010
$ 8 : 00000010 80321400 fffff000 812de408
$12 : 00000000 00000000 00000000 ffffffff
$16 : 00000002 ffffffff 80660000 806a666c
$20 : 806c0000 00000000 00000000 00000000
$24 : 00000000 00000010
$28 : 80644000 80645ed0 00000000 8067da84
Hi : 00000000
Lo : 00000000
epc : 8067daa8 start_kernel+0x33c/0x500
ra : 8067da84 start_kernel+0x318/0x500
Status: 11000402 KERNEL EXL
Cause : 4080040c (ExcCode 03)
BadVA : ffffb0aa
PrId : 0501992c (MIPS 1004Kc)
Modules linked in:
Process swapper/0 (pid: 0, threadinfo=80644000, task=8065e3e0, tls=00000000)
Call Trace:
[<8067daa8>] start_kernel+0x33c/0x500
Code: 24050240 0c0131f9 24849c64 <a200b0a8> 41606020 000000c0 0c1a45e6 00000000 0c1a5f44
UBSAN also detects the same issue:
================================================================
UBSAN: Undefined behaviour in arch/mips/kernel/cevt-r4k.c:85:41
load of address 80647e4c with insufficient space
for an object of type 'unsigned int'
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.18 #47
Call Trace:
[<80028f70>] show_stack+0x88/0xa4
[<80312654>] dump_stack+0x84/0xc0
[<8034163c>] ubsan_epilogue+0x14/0x50
[<803417d8>] __ubsan_handle_type_mismatch+0x160/0x168
[<8002dab0>] r4k_clockevent_init+0x544/0x764
[<80684d34>] time_init+0x18/0x90
[<8067fa5c>] start_kernel+0x2f0/0x500
=================================================================
buf2[] is intentionally only 3 elements so that the last element is the
median once 5 samples have been inserted, so explicitly prevent the
possibility of comparing against the 4th element rather than extending
the array.
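The principle, as a standalone sketch (not the kernel diff): keep the three smallest samples so buf2[2] is the median of five, without ever touching a non-existent buf2[3].

    static void insert_bounded(unsigned int buf2[3], unsigned int sample)
    {
        unsigned int k = 3;

        /* shift larger entries up, staying inside the 3-element array */
        while (k > 0 && buf2[k - 1] > sample) {
            if (k < 3)
                buf2[k] = buf2[k - 1];
            k--;
        }
        if (k < 3)
            buf2[k] = sample;
    }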
Fixes:
1fa405552e33f2 ("MIPS: cevt-r4k: Dynamically calculate min_delta_ns")
Reported-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Tested-by: Rabin Vincent <rabinv@axis.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15892/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
James Hogan [Thu, 30 Mar 2017 15:06:02 +0000 (16:06 +0100)]
MIPS: KGDB: Use kernel context for sleeping threads
commit
162b270c664dca2e0944308e92f9fcc887151a72 upstream.
KGDB is a kernel debug stub and it can't be used to debug userland as it
can only safely access kernel memory.
On MIPS however KGDB has always got the register state of sleeping
processes from the userland register context at the beginning of the
kernel stack. This is meaningless for kernel threads (which never enter
userland), and for user threads it prevents the user seeing what it is
doing while in the kernel:
(gdb) info threads
Id Target Id Frame
...
3 Thread 2 (kthreadd) 0x0000000000000000 in ?? ()
2 Thread 1 (init) 0x000000007705c4b4 in ?? ()
1 Thread -2 (shadowCPU0) 0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201
Get the register state instead from the (partial) kernel register
context stored in the task's thread_struct for resume() to restore. All
threads now correctly appear to be in context_switch():
(gdb) info threads
Id Target Id Frame
...
3 Thread 2 (kthreadd) context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
2 Thread 1 (init) context_switch (rq=<optimized out>, cookie=..., next=<optimized out>, prev=0x0) at kernel/sched/core.c:2903
1 Thread -2 (shadowCPU0) 0xffffffff8012524c in arch_kgdb_breakpoint () at arch/mips/kernel/kgdb.c:201
Call clobbered registers which aren't saved and exception registers
(BadVAddr & Cause) which can't be easily determined without stack
unwinding are reported as 0. The PC is taken from the return address,
such that the state presented matches that found immediately after
returning from resume().
Fixes:
8854700115ec ("[MIPS] kgdb: add arch support for the kernel's kgdb core")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Jason Wessel <jason.wessel@windriver.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/15829/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Noam Camus [Tue, 4 Apr 2017 08:00:41 +0000 (11:00 +0300)]
ARC: [plat-eznps] Fix build error
commit
6492f09e864417d382e22b922ae30693a7ce2982 upstream.
Make ATOMIC_INIT available for all ARC platforms (including plat-eznps)
Signed-off-by: Noam Camus <noamca@mellanox.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Sun, 9 Apr 2017 08:41:27 +0000 (10:41 +0200)]
ALSA: seq: Don't break snd_use_lock_sync() loop by timeout
commit
4e7655fd4f47c23e5249ea260dc802f909a64611 upstream.
The snd_use_lock_sync() (thus its implementation
snd_use_lock_sync_helper()) has a 5 second timeout to break out of
the sync loop. It was introduced from the beginning, just to be
"safer", in terms of avoiding stupid bugs.
However, as Ben Hutchings suggested, this timeout rather introduces a
potential leak or use-after-free that was apparently fixed by
commit
2d7d54002e39 ("ALSA: seq: Fix race during FIFO resize"):
for example, snd_seq_fifo_event_in() -> snd_seq_event_dup() ->
copy_from_user() could block for a long time, and snd_use_lock_sync()
times out and still leaves the cell behind when the pool is released.
To fix this problem, remove the break on timeout while
still keeping the warning.
Suggested-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Sakamoto [Fri, 14 Apr 2017 03:43:01 +0000 (12:43 +0900)]
ALSA: firewire-lib: fix inappropriate assignment between signed/unsigned type
commit
dfb00a56935186171abb5280b3407c3f910011f1 upstream.
An abstraction of asynchronous transaction for transmission of MIDI
messages was introduced in Linux v4.4. Each driver can utilize this
abstraction to transfer MIDI messages via fixed-length payload of
transaction to a certain unit address. Filling payload of the transaction
is done by a callback. In this callback, each driver can return a negative
error code, however the current implementation assigns the return value to
an unsigned variable.
This commit changes type of the variable to fix the bug.
Reported-by: Julia Lawall <Julia.Lawall@lip6.fr>
Fixes:
585d7cba5e1f ("ALSA: firewire-lib: add helper functions for asynchronous transactions to transfer MIDI messages")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Sakamoto [Mon, 3 Apr 2017 12:13:40 +0000 (21:13 +0900)]
ALSA: oxfw: fix regression to handle Stanton SCS.1m/1d
commit
3d016d57fdc5e6caa4cd67896f4b081bccad6e2c upstream.
Since commit
6c29230e2a5f ("ALSA: oxfw: delayed registration of sound
card"), the ALSA oxfw driver fails to handle SCS.1m/1d, due to -EBUSY at a call
of snd_card_register(). The cause is that the driver manages to register
two rawmidi instances with the same device number 0. This is a regression
introduced in kernel 4.7.
This commit fixes the regression, by fixing up device property after
discovering stream formats.
Fixes:
6c29230e2a5f ("ALSA: oxfw: delayed registration of sound card")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jamie Bainbridge [Wed, 26 Apr 2017 00:43:27 +0000 (10:43 +1000)]
ipv6: check raw payload size correctly in ioctl
[ Upstream commit
105f5528b9bbaa08b526d3405a5bcd2ff0c953c8 ]
In situations where an skb is paged, the transport header pointer and
tail pointer can be the same because the skb contents are in frags.
This results in ioctl(SIOCINQ/FIONREAD) incorrectly returning a
length of 0 when the length to receive is actually greater than zero.
skb->len is already correctly set in ip6_input_finish() with
pskb_pull(), so use skb->len as it always returns the correct result
for both linear and paged data.
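The SIOCINQ branch of rawv6_ioctl() then becomes, roughly (a sketch):

    case SIOCINQ: {
        struct sk_buff *skb;
        int amount = 0;

        spin_lock_bh(&sk->sk_receive_queue.lock);
        skb = skb_peek(&sk->sk_receive_queue);
        if (skb)
            /* was: skb_tail_pointer(skb) - skb_transport_header(skb),
             * which is 0 when the payload lives entirely in frags */
            amount = skb->len;
        spin_unlock_bh(&sk->sk_receive_queue.lock);
        return put_user(amount, (int __user *)arg);
    }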
Signed-off-by: Jamie Bainbridge <jbainbri@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wei Wang [Wed, 26 Apr 2017 00:38:02 +0000 (17:38 -0700)]
tcp: memset ca_priv data to 0 properly
[ Upstream commit
c1201444075009507a6818de6518e2822b9a87c8 ]
Always zero out ca_priv data in tcp_assign_congestion_control() so that
ca_priv data is cleared out during socket creation.
Also always zero out ca_priv data in tcp_reinit_congestion_control() so
that when cc algorithm is changed, ca_priv data is cleared out as well.
We should still zero out ca_priv data even in the TCP_CLOSE state, because
the user could call connect() on AF_UNSPEC to disconnect the socket and
leave it in TCP_CLOSE state, and later call setsockopt() to switch the cc
algorithm on this socket.
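In both places the clearing itself is just a memset over the connection's private cc area (sketch):

    struct inet_connection_sock *icsk = inet_csk(sk);

    /* wipe any per-algorithm state left over from a previous cc module */
    memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));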
Fixes:
2b0a8c9ee ("tcp: add CDG congestion control")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Wei Wang <weiwan@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
WANG Cong [Tue, 25 Apr 2017 21:37:15 +0000 (14:37 -0700)]
ipv6: check skb->protocol before lookup for nexthop
[ Upstream commit
199ab00f3cdb6f154ea93fa76fd80192861a821d ]
Andrey reported an out-of-bounds access in ip6_tnl_xmit(); this
is because we use an IPv4 dst in ip6_tnl_xmit() and cast an IPv4
neigh key as an IPv6 address:
    neigh = dst_neigh_lookup(skb_dst(skb),
                             &ipv6_hdr(skb)->daddr);
    if (!neigh)
        goto tx_err_link_failure;
    addr6 = (struct in6_addr *)&neigh->primary_key; // <=== HERE
    addr_type = ipv6_addr_type(addr6);
    if (addr_type == IPV6_ADDR_ANY)
        addr6 = &ipv6_hdr(skb)->daddr;
    memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
Also the network header of the skb at this point should still be IPv4
for 4in6 tunnels; we should not just use it as an IPv6 header.
This patch fixes it by checking if skb->protocol is ETH_P_IPV6: if it
is, we are safe to do the nexthop lookup using skb_dst() and
ipv6_hdr(skb)->daddr; if not (aka IPv4), we have no clue about which
dest address we can pick here, we have to rely on callers to fill it
from tunnel config, so just fall back to ip6_route_output() to make the
decision.
Fixes:
ea3dc9601bda ("ip6_tunnel: Add support for wildcard tunnel endpoints.")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Kochetkov [Thu, 20 Apr 2017 11:00:04 +0000 (14:00 +0300)]
net: phy: fix auto-negotiation stall due to unavailable interrupt
[ Upstream commit
f555f34fdc586a56204cd16d9a7c104ec6cb6650 ]
The Ethernet link on an interrupt driven PHY was not coming up if the Ethernet
cable was plugged in before the Ethernet interface was brought up.
The patch triggers the PHY state machine to update the link state if the PHY
was requested to do auto-negotiation and the auto-negotiation complete flag
is already set.
During the power-up cycle the PHY does auto-negotiation, generates an interrupt
and sets the auto-negotiation complete flag. The interrupt is handled by the
PHY state machine but doesn't update the link state because the PHY is in the
PHY_READY state. Some time later the MAC is brought up, started, and requests
the PHY to do auto-negotiation. If there are no new settings to advertise,
genphy_config_aneg() doesn't start PHY auto-negotiation. The PHY stays in the
auto-negotiation complete state and doesn't fire an interrupt. At the same
time the PHY state machine expects that the PHY has started auto-negotiation
and waits for an interrupt from the PHY, which it will never get.
Fixes:
321beec5047a ("net: phy: Use interrupts when available in NOLINK state")
Signed-off-by: Alexander Kochetkov <al.kochet@gmail.com>
Cc: stable <stable@vger.kernel.org> # v4.9+
Tested-by: Roger Quadros <rogerq@ti.com>
Tested-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Ahern [Tue, 25 Apr 2017 16:17:29 +0000 (09:17 -0700)]
net: ipv6: regenerate host route if moved to gc list
[ Upstream commit
8048ced9beb21a52e3305f3332ae82020619f24e ]
Taking down the loopback device wreaks havoc on IPv6 routing. By
extension, taking down a VRF device wreaks havoc on its table.
Dmitry and Andrey both reported heap out-of-bounds reports in the IPv6
FIB code while running syzkaller fuzzer. The root cause is a dead dst
that is on the garbage list gets reinserted into the IPv6 FIB. While on
the gc (or perhaps when it gets added to the gc list) the dst->next is
set to an IPv4 dst. A subsequent walk of the ipv6 tables causes the
out-of-bounds access.
Andrey's reproducer was the key to getting to the bottom of this.
With IPv6, host routes for an address have the dst->dev set to the
loopback device. When the 'lo' device is taken down, rt6_ifdown initiates
a walk of the fib evicting routes with the 'lo' device which means all
host routes are removed. That process moves the dst which is attached to
an inet6_ifaddr to the gc list and marks it as dead.
The recent change to keep global IPv6 addresses added a new function,
fixup_permanent_addr, that is called on admin up. That function restarts
dad for an inet6_ifaddr and when it completes the host route attached
to it is inserted into the fib. Since the route was marked dead and
moved to the gc list, re-inserting the route causes the reported
out-of-bounds accesses. If the device with the address is taken down
or the address is removed, the WARN_ON in fib6_del is triggered.
All of those faults are fixed by regenerating the host route if the
existing one has been moved to the gc list, something that can be
determined by checking if the rt6i_ref counter is 0.
Fixes:
f1705ec197e7 ("net: ipv6: Make address flushing on ifdown optional")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Herbert Xu [Thu, 20 Apr 2017 12:55:12 +0000 (20:55 +0800)]
macvlan: Fix device ref leak when purging bc_queue
[ Upstream commit
f6478218e6edc2a587b8f132f66373baa7b2497c ]
When a parent macvlan device is destroyed we end up purging its
broadcast queue without dropping the device reference count on
the packet source device. This causes the source device to linger.
This patch drops that reference count.
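A sketch of purging the broadcast queue while dropping the reference taken on the packet source device (the helper name is made up; field names follow the macvlan driver):

    static void macvlan_flush_bc_queue(struct macvlan_port *port)
    {
        struct sk_buff *skb;

        cancel_work_sync(&port->bc_work);

        while ((skb = __skb_dequeue(&port->bc_queue))) {
            const struct macvlan_dev *src = MACVLAN_SKB_CB(skb)->src;

            /* the enqueue path held a reference on the source device;
             * release it before freeing the skb */
            if (src)
                dev_put(src->dev);
            kfree_skb(skb);
        }
    }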
Fixes:
260916dfb48c ("macvlan: Fix potential use-after free for...")
Reported-by: Joe Ghalam <Joe.Ghalam@dell.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilan Tayari [Thu, 2 Mar 2017 13:49:45 +0000 (15:49 +0200)]
net/mlx5e: Fix ETHTOOL_GRXCLSRLALL handling
[ Upstream commit
5e82c9e4ed60beba83f46a1a5a8307b99a23e982 ]
Handler for ETHTOOL_GRXCLSRLALL must set info->data to the size
of the table, regardless of the amount of entries in it.
Existing code does not do that, and this breaks all usage of ethtool -N
or -n without explicit location, with this error:
rmgr: Invalid RX class rules table size: Success
Set info->data to the table size.
Tested:
ethtool -n ens8
ethtool -N ens8 flow-type ip4 src-ip 1.1.1.1 dst-ip 2.2.2.2 action 1
ethtool -N ens8 flow-type ip4 src-ip 1.1.1.1 dst-ip 2.2.2.2 action 1 loc 55
ethtool -n ens8
ethtool -N ens8 delete 1023
ethtool -N ens8 delete 55
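The handler change amounts to reporting the table size unconditionally (sketch; the constant and helper names are assumptions):

    case ETHTOOL_GRXCLSRLALL:
        /* data must be the table size, not the number of active rules */
        info->data = MAX_NUM_OF_ETHTOOL_RULES;
        err = get_all_flow_rules(priv, info, rule_locs);
        break;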
Fixes:
f913a72aa008 ("net/mlx5e: Add support to get ethtool flow rules")
Signed-off-by: Ilan Tayari <ilant@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eugenia Emantayev [Wed, 22 Mar 2017 09:44:14 +0000 (11:44 +0200)]
net/mlx5e: Fix small packet threshold
[ Upstream commit
cbad8cddb6ed7ef3a5f0a9a70f1711d4d7fb9a8f ]
RX packet headers are meant to be contained in the SKB linear part,
and a threshold of 128 bytes was chosen for it.
It turns out this is not enough, e.g. for an IPv6 packet over VxLAN.
In this case, UDP/IPv4 needs 42 bytes, the GENEVE header is 8 bytes,
and 86 bytes for TCP/IPv6. In total 136 bytes, which is more than the
current 128 bytes. In this case the expand-header flow is reached.
The warning in skb_try_coalesce() caused by a wrong truesize
was already fixed here:
commit
158f323b9868 ("net: adjust skb->truesize in pskb_expand_head()").
Still, we prefer to totally avoid the expand-header flow for performance reasons.
Tested regular TCP_STREAM with iperf for 1 and 8 streams; no degradation was found.
Fixes:
461017cb006a ("net/mlx5e: Support RX multi-packet WQE (Striding RQ)")
Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mohamad Haj Yahia [Thu, 30 Mar 2017 14:00:25 +0000 (17:00 +0300)]
net/mlx5: Fix driver load bad flow when having fw initializing timeout
[ Upstream commit
55378a238e04b39cc82957d91d16499704ea719b ]
If FW is stuck in the initializing state we will skip the driver load, but
the current error handling flow doesn't clean up previously allocated command
interface resources.
Fixes:
e3297246c2c8 ('net/mlx5_core: Wait for FW readiness on startup')
Signed-off-by: Mohamad Haj Yahia <mohamad@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nikolay Aleksandrov [Fri, 21 Apr 2017 17:42:16 +0000 (20:42 +0300)]
ip6mr: fix notification device destruction
[ Upstream commit
723b929ca0f79c0796f160c2eeda4597ee98d2b8 ]
Andrey Konovalov reported a BUG triggered by the ip6mr code, caused
because we call unregister_netdevice_many for a device that is already
being destroyed. In IPv4's ipmr this was resolved by two commits a
long time ago by introducing the "notify" parameter to the delete
function and avoiding the unregister when called from a notifier, so
let's do the same for ip6mr.
The trace from Andrey:
------------[ cut here ]------------
kernel BUG at net/core/dev.c:6813!
invalid opcode: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 1 PID: 1165 Comm: kworker/u4:3 Not tainted 4.11.0-rc7+ #251
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: netns cleanup_net
task: ffff880069208000 task.stack: ffff8800692d8000
RIP: 0010:rollback_registered_many+0x348/0xeb0 net/core/dev.c:6813
RSP: 0018:ffff8800692de7f0 EFLAGS: 00010297
RAX: ffff880069208000 RBX: 0000000000000002 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88006af90569
RBP: ffff8800692de9f0 R08: ffff8800692dec60 R09: 0000000000000000
R10: 0000000000000006 R11: 0000000000000000 R12: ffff88006af90070
R13: ffff8800692debf0 R14: dffffc0000000000 R15: ffff88006af90000
FS:  0000000000000000(0000) GS:ffff88006cb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe7e897d870 CR3: 00000000657e7000 CR4: 00000000000006e0
Call Trace:
unregister_netdevice_many.part.105+0x87/0x440 net/core/dev.c:7881
unregister_netdevice_many+0xc8/0x120 net/core/dev.c:7880
ip6mr_device_event+0x362/0x3f0 net/ipv6/ip6mr.c:1346
notifier_call_chain+0x145/0x2f0 kernel/notifier.c:93
__raw_notifier_call_chain kernel/notifier.c:394
raw_notifier_call_chain+0x2d/0x40 kernel/notifier.c:401
call_netdevice_notifiers_info+0x51/0x90 net/core/dev.c:1647
call_netdevice_notifiers net/core/dev.c:1663
rollback_registered_many+0x919/0xeb0 net/core/dev.c:6841
unregister_netdevice_many.part.105+0x87/0x440 net/core/dev.c:7881
unregister_netdevice_many net/core/dev.c:7880
default_device_exit_batch+0x4fa/0x640 net/core/dev.c:8333
ops_exit_list.isra.4+0x100/0x150 net/core/net_namespace.c:144
cleanup_net+0x5a8/0xb40 net/core/net_namespace.c:463
process_one_work+0xc04/0x1c10 kernel/workqueue.c:2097
worker_thread+0x223/0x19c0 kernel/workqueue.c:2231
kthread+0x35e/0x430 kernel/kthread.c:231
ret_from_fork+0x31/0x40 arch/x86/entry/entry_64.S:430
Code: 3c 32 00 0f 85 70 0b 00 00 48 b8 00 02 00 00 00 00 ad de 49 89 47 78 e9 93 fe ff ff 49 8d 57 70 49 8d 5f 78 eb 9e e8 88 7a 14 fe <0f> 0b 48 8b 9d 28 fe ff ff e8 7a 7a 14 fe 48 b8 00 00 00 00 00
RIP: rollback_registered_many+0x348/0xeb0 RSP: ffff8800692de7f0
---[ end trace e0b29c57e9b3292c ]---
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tushar Dave [Thu, 20 Apr 2017 22:57:31 +0000 (15:57 -0700)]
netpoll: Check for skb->queue_mapping
[ Upstream commit
c70b17b775edb21280e9de7531acf6db3b365274 ]
Reducing real_num_tx_queues needs to be in sync with the skb queue_mapping,
otherwise skbs with a queue_mapping greater than real_num_tx_queues
can be sent to the underlying driver and can result in a kernel panic.
One such case is running netconsole and enabling a VF on the same
device, or running netconsole and changing the number of tx queues via
ethtool on the same device.
e.g.
Unable to handle kernel NULL pointer dereference
tsk->{mm,active_mm}->context = 0000000000001525
tsk->{mm,active_mm}->pgd = fff800130ff9a000
\|/ ____ \|/
"@'/ .. \`@"
/_| \__/ |_\
\__U_/
kworker/48:1(475): Oops [#1]
CPU: 48 PID: 475 Comm: kworker/48:1 Tainted: G OE 4.11.0-rc3-davem-net+ #7
Workqueue: events queue_process
task: fff80013113299c0 task.stack: fff800131132c000
TSTATE: 0000004480e01600 TPC: 00000000103f9e3c TNPC: 00000000103f9e40 Y: 00000000 Tainted: G OE
TPC: <ixgbe_xmit_frame_ring+0x7c/0x6c0 [ixgbe]>
g0: 0000000000000000 g1: 0000000000003fff g2: 0000000000000000 g3: 0000000000000001
g4: fff80013113299c0 g5: fff8001fa6808000 g6: fff800131132c000 g7: 00000000000000c0
o0: fff8001fa760c460 o1: fff8001311329a50 o2: fff8001fa7607504 o3: 0000000000000003
o4: fff8001f96e63a40 o5: fff8001311d77ec0 sp: fff800131132f0e1 ret_pc: 000000000049ed94
RPC: <set_next_entity+0x34/0xb80>
l0: 0000000000000000 l1: 0000000000000800 l2: 0000000000000000 l3: 0000000000000000
l4: 000b2aa30e34b10d l5: 0000000000000000 l6: 0000000000000000 l7: fff8001fa7605028
i0: fff80013111a8a00 i1: fff80013155a0780 i2: 0000000000000000 i3: 0000000000000000
i4: 0000000000000000 i5: 0000000000100000 i6: fff800131132f1a1 i7: 00000000103fa4b0
I7: <ixgbe_xmit_frame+0x30/0xa0 [ixgbe]>
Call Trace:
[00000000103fa4b0] ixgbe_xmit_frame+0x30/0xa0 [ixgbe]
[0000000000998c74] netpoll_start_xmit+0xf4/0x200
[0000000000998e10] queue_process+0x90/0x160
[0000000000485fa8] process_one_work+0x188/0x480
[0000000000486410] worker_thread+0x170/0x4c0
[000000000048c6b8] kthread+0xd8/0x120
[0000000000406064] ret_from_fork+0x1c/0x2c
[0000000000000000] (null)
Disabling lock debugging due to kernel taint
Caller[00000000103fa4b0]: ixgbe_xmit_frame+0x30/0xa0 [ixgbe]
Caller[0000000000998c74]: netpoll_start_xmit+0xf4/0x200
Caller[0000000000998e10]: queue_process+0x90/0x160
Caller[0000000000485fa8]: process_one_work+0x188/0x480
Caller[0000000000486410]: worker_thread+0x170/0x4c0
Caller[000000000048c6b8]: kthread+0xd8/0x120
Caller[0000000000406064]: ret_from_fork+0x1c/0x2c
Caller[0000000000000000]: (null)
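The guard amounts to clamping the skb's queue_mapping before a tx queue is picked (a sketch of the idea; exact placement in the netpoll xmit path is assumed):

    struct netdev_queue *txq;
    unsigned int q_index;

    if (skb_get_queue_mapping(skb) >= dev->real_num_tx_queues) {
        /* real_num_tx_queues shrank after the skb was queued */
        q_index = skb_get_queue_mapping(skb) % dev->real_num_tx_queues;
        skb_set_queue_mapping(skb, q_index);
    }
    txq = netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));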
Signed-off-by: Tushar Dave <tushar.n.dave@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Ahern [Wed, 19 Apr 2017 21:19:43 +0000 (14:19 -0700)]
net: ipv6: RTF_PCPU should not be settable from userspace
[ Upstream commit
557c44be917c322860665be3d28376afa84aa936 ]
Andrey reported a fault in the IPv6 route code:
kasan: GPF could be caused by NULL-ptr deref or user memory access
general protection fault: 0000 [#1] SMP KASAN
Modules linked in:
CPU: 1 PID: 4035 Comm: a.out Not tainted 4.11.0-rc7+ #250
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff880069809600 task.stack: ffff880062dc8000
RIP: 0010:ip6_rt_cache_alloc+0xa6/0x560 net/ipv6/route.c:975
RSP: 0018:ffff880062dced30 EFLAGS: 00010206
RAX: dffffc0000000000 RBX: ffff8800670561c0 RCX: 0000000000000006
RDX: 0000000000000003 RSI: ffff880062dcfb28 RDI: 0000000000000018
RBP: ffff880062dced68 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffff880062dcfb28 R14: dffffc0000000000 R15: 0000000000000000
FS:  00007feebe37e7c0(0000) GS:ffff88006cb00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000205a0fe4 CR3: 000000006b5c9000 CR4: 00000000000006e0
Call Trace:
ip6_pol_route+0x1512/0x1f20 net/ipv6/route.c:1128
ip6_pol_route_output+0x4c/0x60 net/ipv6/route.c:1212
...
Andrey's syzkaller program passes rtmsg.rtmsg_flags with the RTF_PCPU bit
set. Flags passed to the kernel are blindly copied to the allocated
rt6_info by ip6_route_info_create making a newly inserted route appear
as though it is a per-cpu route. ip6_rt_cache_alloc sees the flag set
and expects rt->dst.from to be set - which it is not since it is not
really a per-cpu copy. The subsequent call to __ip6_dst_alloc then
generates the fault.
Fix by checking for the flag and failing with EINVAL.
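The check itself is small (sketch of the ip6_route_info_create() guard):

    /* RTF_PCPU is an internal flag; it cannot be set from userspace */
    if (cfg->fc_flags & RTF_PCPU) {
        err = -EINVAL;
        goto out;
    }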
Fixes:
d52d3997f843f ("ipv6: Create percpu rt6_info")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilan Tayari [Wed, 19 Apr 2017 18:26:07 +0000 (21:26 +0300)]
gso: Validate assumption of frag_list segmentation
[ Upstream commit
43170c4e0ba709c79130c3fe5a41e66279950cd0 ]
Commit
07b26c9454a2 ("gso: Support partial splitting at the frag_list
pointer") assumes that all SKBs in a frag_list (except maybe the last
one) contain the same amount of GSO payload.
This assumption is not always correct, resulting in the following
warning message in the log:
skb_segment: too many frags
For example, mlx5 driver in Striding RQ mode creates some RX SKBs with
one frag, and some with 2 frags.
After GRO, the frag_list SKBs end up having different amounts of payload.
If this frag_list SKB is then forwarded, the aforementioned assumption
is violated.
Validate the assumption, and fall back to software GSO if it is not true.
Change-Id: Ia03983f4a47b6534dd987d7a2aad96d54d46d212
Fixes:
07b26c9454a2 ("gso: Support partial splitting at the frag_list pointer")
Signed-off-by: Ilan Tayari <ilant@mellanox.com>
Signed-off-by: Ilya Lesokhin <ilyal@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Tue, 18 Apr 2017 19:14:26 +0000 (22:14 +0300)]
dp83640: don't receive time stamps twice
[ Upstream commit
9d386cd9a755c8293e8916264d4d053878a7c9c7 ]
This patch is prompted by a static checker warning about a potential
use after free. The concern is that netif_rx_ni() can free "skb" and we
call it twice.
When I look at the commit that added this, it looks like some stray
lines were added accidentally. It doesn't make sense to me that we
would receive the same data two times. I asked the author but never
received a response.
I can't test this code, but I'm pretty sure my patch is correct.
Fixes:
4b063258ab93 ("dp83640: Delay scheduled work.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Stefan Sørensen <stefan.sorensen@spectralink.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergei Shtylyov [Mon, 17 Apr 2017 12:55:22 +0000 (15:55 +0300)]
sh_eth: unmap DMA buffers when freeing rings
[ Upstream commit
1debdc8f9ebd07daf140e417b3841596911e0066 ]
The DMA API debugging (when enabled) causes:
WARNING: CPU: 0 PID: 1445 at lib/dma-debug.c:519 add_dma_entry+0xe0/0x12c
DMA-API: exceeded 7 overlapping mappings of cacheline 0x01b2974d
to be printed after repeated initialization of the Ether device, e.g.
suspend/resume or 'ifconfig' up/down. This is because DMA buffers mapped
using dma_map_single() in sh_eth_ring_format() and sh_eth_start_xmit() are
never unmapped. Resolve this problem by unmapping the buffers when freeing
the descriptor rings; in order to do it right, we'd have to add an extra
parameter to sh_eth_txfree() (we rename this function to sh_eth_tx_free(),
while at it).
Based on the commit
a47b70ea86bd ("ravb: unmap descriptors when freeing
rings").
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Ahern [Thu, 13 Apr 2017 16:57:15 +0000 (10:57 -0600)]
net: vrf: Fix setting NLM_F_EXCL flag when adding l3mdev rule
[ Upstream commit
426c87caa2b4578b43cd3f689f02c65b743b2559 ]
Only need 1 l3mdev FIB rule. Fix setting NLM_F_EXCL in the nlmsghdr.
Fixes:
1aa6c4f6b8cd8 ("net: vrf: Add l3mdev rules on first device create")
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Willem de Bruijn [Wed, 12 Apr 2017 23:24:35 +0000 (19:24 -0400)]
net-timestamp: avoid use-after-free in ip_recv_error
[ Upstream commit
1862d6208db0aeca9c8ace44915b08d5ab2cd667 ]
Syzkaller reported a use-after-free in ip_recv_error at line
info->ipi_ifindex = skb->dev->ifindex;
This function is called on dequeue from the error queue, at which
point the device pointer may no longer be valid.
Save ifindex on enqueue in __skb_complete_tx_timestamp, when the
pointer is valid or NULL. Store it in temporary storage skb->cb.
It is safe to reference skb->dev here, as called from device drivers
or dev_queue_xmit. The exception is when called from tcp_ack_tstamp;
in that case it is NULL and ifindex is set to 0 (invalid).
Do not return a pktinfo cmsg if ifindex is 0. This maintains the
current behavior of not returning a cmsg if skb->dev was NULL.
On dequeue, the ipv4 path will cast from sock_exterr_skb to
in_pktinfo. Both have ifindex as their first element, so no explicit
conversion is needed. This is by design, introduced in commit
0b922b7a829c ("net: original ingress device index in PKTINFO"). For
ipv6 ip6_datagram_support_cmsg converts to in6_pktinfo.
Fixes:
829ae9d61165 ("net-timestamp: allow reading recv cmsg on errqueue with origin tstamp")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rabin Vincent [Mon, 10 Apr 2017 06:36:39 +0000 (08:36 +0200)]
ipv6: Fix idev->addr_list corruption
[ Upstream commit
a2d6cbb0670d54806f18192cb0db266b4a6d285a ]
addrconf_ifdown() removes elements from the idev->addr_list without
holding the idev->lock.
If this happens while the loop in __ipv6_dev_get_saddr() is handling the
same element, that function ends up in an infinite loop:
NMI watchdog: BUG: soft lockup - CPU#1 stuck for 23s! [test:1719]
Call Trace:
ipv6_get_saddr_eval+0x13c/0x3a0
__ipv6_dev_get_saddr+0xe4/0x1f0
ipv6_dev_get_saddr+0x1b4/0x204
ip6_dst_lookup_tail+0xcc/0x27c
ip6_dst_lookup_flow+0x38/0x80
udpv6_sendmsg+0x708/0xba8
sock_sendmsg+0x18/0x30
SyS_sendto+0xb8/0xf8
syscall_common+0x34/0x58
Fixes:
6a923934c33 (Revert "ipv6: Revert optional address flusing on ifdown.")
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Acked-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Sat, 8 Apr 2017 15:07:33 +0000 (08:07 -0700)]
tcp: clear saved_syn in tcp_disconnect()
[ Upstream commit
17c3060b1701fc69daedb4c90be6325d3d9fca8e ]
In the (very unlikely) case a passive socket becomes a listener,
we do not want to duplicate its saved SYN headers.
This would lead to double frees, use after free, and please hackers and
various fuzzers
Tested:
0 socket(..., SOCK_STREAM, IPPROTO_TCP) = 3
+0 setsockopt(3, IPPROTO_TCP, TCP_SAVE_SYN, [1], 4) = 0
+0 fcntl(3, F_SETFL, O_RDWR|O_NONBLOCK) = 0
+0 bind(3, ..., ...) = 0
+0 listen(3, 5) = 0
+0 < S 0:0(0) win 32972 <mss 1460,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <...>
+.1 < . 1:1(0) ack 1 win 257
+0 accept(3, ..., ...) = 4
+0 connect(4, AF_UNSPEC, ...) = 0
+0 close(3) = 0
+0 bind(4, ..., ...) = 0
+0 listen(4, 5) = 0
+0 < S 0:0(0) win 32972 <mss 1460,nop,wscale 7>
+0 > S. 0:0(0) ack 1 <...>
+.1 < . 1:1(0) ack 1 win 257
Fixes:
cd8ae85299d5 ("tcp: provide SYN headers for passive connections")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Thu, 6 Apr 2017 05:10:52 +0000 (13:10 +0800)]
sctp: listen on the sock only when its state is listening or closed
[ Upstream commit
34b2789f1d9bf8dcca9b5cb553d076ca2cd898ee ]
Currently sctp does not check the sock's state before listening on it. As
a result, doing sctp_listen can turn a sock in any state into a listening
sock.
This patch fixes it by checking the sock's state in sctp_listen, so that
it only listens on a sock in the right state.
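A hedged sketch of the state check described above, assuming it reuses the existing sctp_sstate() helper in the listen path:
	/* listen() is only allowed on a sock that is closed or already
	 * listening; any other state is rejected */
	err = -EINVAL;
	if (!sctp_sstate(sk, LISTENING) && !sctp_sstate(sk, CLOSED))
		goto out;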
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Florian Larysch [Mon, 3 Apr 2017 14:46:09 +0000 (16:46 +0200)]
net: ipv4: fix multipath RTM_GETROUTE behavior when iif is given
[ Upstream commit
a8801799c6975601fd58ae62f48964caec2eb83f ]
inet_rtm_getroute synthesizes a skeletal ICMP skb, which is passed to
ip_route_input when iif is given. If a multipath route is present for
the designated destination, ip_multipath_icmp_hash ends up being called,
which uses the source/destination addresses within the skb to calculate
a hash. However, those are not set in the synthetic skb, causing it to
return an arbitrary and incorrect result.
Instead, use UDP, which gets no such special treatment.
Signed-off-by: Florian Larysch <fl@n621.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guillaume Nault [Mon, 3 Apr 2017 11:23:15 +0000 (13:23 +0200)]
l2tp: fix PPP pseudo-wire auto-loading
[ Upstream commit
249ee819e24c180909f43c1173c8ef6724d21faf ]
PPP pseudo-wire type is 7 (11 is L2TP_PWTYPE_IP).
Fixes:
f1f39f911027 ("l2tp: auto load type modules")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guillaume Nault [Mon, 3 Apr 2017 10:03:13 +0000 (12:03 +0200)]
l2tp: take reference on sessions being dumped
[ Upstream commit
e08293a4ccbcc993ded0fdc46f1e57926b833d63 ]
Take a reference on the sessions returned by l2tp_session_find_nth()
(and rename it l2tp_session_get_nth() to reflect this change), so that
caller is assured that the session isn't going to disappear while
processing it.
For procfs and debugfs handlers, the session is held in the .start()
callback and dropped in .show(). Given that pppol2tp_seq_session_show()
dereferences the associated PPPoL2TP socket and that
l2tp_dfs_seq_session_show() might call pppol2tp_show(), we also need to
call the session's .ref() callback to prevent the socket from going
away from under us.
Fixes:
fd558d186df2 ("l2tp: Split pppol2tp patch into separate l2tp and ppp parts")
Fixes:
0ad6614048cf ("l2tp: Add debugfs files for dumping l2tp debug info")
Fixes:
309795f4bec2 ("l2tp: Add netlink control API for L2TP")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Konovalov [Wed, 29 Mar 2017 14:11:22 +0000 (16:11 +0200)]
net/packet: fix overflow in check for tp_reserve
[ Upstream commit
bcc5364bdcfe131e6379363f089e7b4108d35b70 ]
When calculating po->tp_hdrlen + po->tp_reserve the result can overflow.
Fix by checking that tp_reserve <= INT_MAX on assign.
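A hedged sketch of the guard on the PACKET_RESERVE setsockopt path (error codes and surrounding context are assumptions):
	unsigned int val;

	if (copy_from_user(&val, optval, sizeof(val)))
		return -EFAULT;
	/* cap the value so po->tp_hdrlen + po->tp_reserve cannot wrap */
	if (val > INT_MAX)
		return -EINVAL;
	po->tp_reserve = val;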
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Konovalov [Wed, 29 Mar 2017 14:11:21 +0000 (16:11 +0200)]
net/packet: fix overflow in check for tp_frame_nr
[ Upstream commit
8f8d28e4d6d815a391285e121c3a53a0b6cb9e7b ]
When calculating rb->frames_per_block * req->tp_block_nr the result
can overflow.
Add a check that tp_block_size * tp_block_nr <= UINT_MAX.
Since frames_per_block <= tp_block_size, the expression would
never overflow.
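A hedged sketch of the added check in the ring-setup path (field names follow struct tpacket_req; the exact placement is an assumption):
	/* tp_block_size * tp_block_nr must fit in an unsigned int; since
	 * frames_per_block <= tp_block_size, the product below then cannot
	 * overflow either. */
	if (unlikely(req->tp_block_size > UINT_MAX / req->tp_block_nr))
		goto out;
	if (unlikely((rb->frames_per_block * req->tp_block_nr) != req->tp_frame_nr))
		goto out;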
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guillaume Nault [Wed, 29 Mar 2017 06:45:29 +0000 (08:45 +0200)]
l2tp: purge socket queues in the .destruct() callback
[ Upstream commit
e91793bb615cf6cdd59c0b6749fe173687bb0947 ]
The Rx path may grab the socket right before pppol2tp_release(), but
nothing guarantees that it will enqueue packets before
skb_queue_purge(). Therefore, the socket can be destroyed without its
queues fully purged.
Fix this by purging queues in pppol2tp_session_destruct() where we're
guaranteed nothing is still referencing the socket.
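A hedged sketch of what moving the purge into the destructor looks like (only the relevant lines are shown):
	static void pppol2tp_session_destruct(struct sock *sk)
	{
		/* nothing can still be queueing to this socket here, so the
		 * purge cannot race with the Rx path */
		skb_queue_purge(&sk->sk_receive_queue);
		skb_queue_purge(&sk->sk_write_queue);
		/* remainder of the destructor unchanged */
	}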
Fixes:
9e9cb6221aa7 ("l2tp: fix userspace reception on plain L2TP sockets")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guillaume Nault [Wed, 29 Mar 2017 06:44:59 +0000 (08:44 +0200)]
l2tp: hold tunnel socket when handling control frames in l2tp_ip and l2tp_ip6
[ Upstream commit
94d7ee0baa8b764cf64ad91ed69464c1a6a0066b ]
The code following l2tp_tunnel_find() expects that a new reference is
held on sk. Either sk_receive_skb() or the discard_put error path will
drop a reference from the tunnel's socket.
This issue exists in both l2tp_ip and l2tp_ip6.
Fixes:
a3c18422a4b4 ("l2tp: hold socket before dropping lock in l2tp_ip{, 6}_recv()")
Signed-off-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Talat Batheesh [Tue, 28 Mar 2017 13:13:41 +0000 (16:13 +0300)]
net/mlx5: Avoid dereferencing uninitialized pointer
[ Upstream commit
e497ec680c4cd51e76bfcdd49363d9ab8d32a757 ]
In NETDEV_CHANGEUPPER event the upper_info field is valid
only when linking is true. Otherwise it should be ignored.
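A hedged sketch of the rule stated above, in the shape of a NETDEV_CHANGEUPPER handler (the consumer function is hypothetical):
	struct netdev_notifier_changeupper_info *info = ptr;

	/* upper_info is only meaningful while linking; ignore it otherwise */
	if (info->linking && info->upper_info)
		handle_upper_info(info->upper_info);	/* hypothetical consumer */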
Fixes:
7907f23adc18 (net/mlx5: Implement RoCE LAG feature)
Signed-off-by: Talat Batheesh <talatb@mellanox.com>
Reviewed-by: Aviv Heller <avivh@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexei Starovoitov [Fri, 24 Mar 2017 22:57:33 +0000 (15:57 -0700)]
bpf: improve verifier packet range checks
[ Upstream commit
b1977682a3858b5584ffea7cfb7bd863f68db18d ]
llvm can optimize the 'if (ptr > data_end)' checks into an order
slightly different from the original C code, which will confuse the verifier.
Like:
if (ptr + 16 > data_end)
return TC_ACT_SHOT;
// may be followed by
if (ptr + 14 > data_end)
return TC_ACT_SHOT;
While llvm can see that 'ptr' is valid for all 16 bytes,
the verifier cannot.
Fix the verifier logic to account for such a case and add a test.
Reported-by: Huapeng Zhou <hzhou@fb.com>
Fixes:
969bf05eb3ce ("bpf: direct packet access")
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
WANG Cong [Thu, 23 Mar 2017 18:03:31 +0000 (11:03 -0700)]
kcm: return immediately after copy_from_user() failure
[ Upstream commit
a80db69e47d764bbcaf2fec54b1f308925e7c490 ]
There is no reason to continue after a copy_from_user()
failure.
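A hedged sketch of the early return (the surrounding ioctl context is assumed):
	if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
		return -EFAULT;	/* do not keep going with a partial copy */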
Fixes:
ab7ac4eb9832 ("kcm: Kernel Connection Multiplexor module")
Cc: Tom Herbert <tom@herbertland.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nathan Sullivan [Wed, 22 Mar 2017 20:27:01 +0000 (15:27 -0500)]
net: phy: handle state correctly in phy_stop_machine
[ Upstream commit
49d52e8108a21749dc2114b924c907db43358984 ]
If the PHY is halted on stop, then do not set the state to PHY_UP. This
ensures the phy will be restarted later in phy_start when the machine is
started again.
Fixes:
00db8189d984 ("This patch adds a PHY Abstraction Layer to the Linux Kernel, enabling ethernet drivers to remain as ignorant as is reasonable of the connected PHY's design and operation details.")
Signed-off-by: Nathan Sullivan <nathan.sullivan@ni.com>
Signed-off-by: Brad Mouring <brad.mouring@ni.com>
Acked-by: Xander Huff <xander.huff@ni.com>
Acked-by: Kyle Roeschley <kyle.roeschley@ni.com>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Thu, 23 Mar 2017 19:39:21 +0000 (12:39 -0700)]
net: neigh: guard against NULL solicit() method
[ Upstream commit
48481c8fa16410ffa45939b13b6c53c2ca609e5f ]
Dmitry posted a nice reproducer of a bug triggering in neigh_probe()
when dereferencing a NULL neigh->ops->solicit method.
This can happen for arp_direct_ops/ndisc_direct_ops and similar,
which can be used for NUD_NOARP neighbours (created when dev->header_ops
is NULL). An admin can then force nud_state into some other state
that would fire the neigh timer.
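A hedged sketch of the guard in neigh_probe() (only the call site is shown):
	/* NUD_NOARP neighbours may have no solicit method at all */
	if (neigh->ops->solicit)
		neigh->ops->solicit(neigh, skb);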
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tom Hromatka [Fri, 31 Mar 2017 22:31:42 +0000 (16:31 -0600)]
sparc64: Fix kernel panic due to erroneous #ifdef surrounding pmd_write()
[ Upstream commit
9ae34dbd8afd790cb5f52467e4f816434379eafa ]
This commit moves sparc64's prototype of pmd_write() outside
of the CONFIG_TRANSPARENT_HUGEPAGE ifdef.
In 2013, commit
a7b9403f0e6d ("sparc64: Encode huge PMDs using PTE
encoding.") exposed a path where pmd_write() could be called without
CONFIG_TRANSPARENT_HUGEPAGE defined. This can result in the panic below.
The diff is awkward to read, but the changes are straightforward.
pmd_write() was moved outside of #ifdef CONFIG_TRANSPARENT_HUGEPAGE.
Also, __HAVE_ARCH_PMD_WRITE was defined.
kernel BUG at include/asm-generic/pgtable.h:576!
\|/ ____ \|/
"@'/ .. \`@"
/_| \__/ |_\
\__U_/
oracle_8114_cdb(8114): Kernel bad sw trap 5 [#1]
CPU: 120 PID: 8114 Comm: oracle_8114_cdb Not tainted
4.1.12-61.7.1.el6uek.rc1.sparc64 #1
task:
fff8400700a24d60 ti:
fff8400700bc4000 task.ti:
fff8400700bc4000
TSTATE:
0000004411e01607 TPC:
00000000004609f8 TNPC:
00000000004609fc Y:
00000005 Not tainted
TPC: <gup_huge_pmd+0x198/0x1e0>
g0:
000000000001c000 g1:
0000000000ef3954 g2:
0000000000000000 g3:
0000000000000001
g4:
fff8400700a24d60 g5:
fff8001fa5c10000 g6:
fff8400700bc4000 g7:
0000000000000720
o0:
0000000000bc5058 o1:
0000000000000240 o2:
0000000000006000 o3:
0000000000001c00
o4:
0000000000000000 o5:
0000048000080000 sp:
fff8400700bc6ab1 ret_pc:
00000000004609f0
RPC: <gup_huge_pmd+0x190/0x1e0>
l0:
fff8400700bc74fc l1:
0000000000020000 l2:
0000000000002000 l3:
0000000000000000
l4:
fff8001f93250950 l5:
000000000113f800 l6:
0000000000000004 l7:
0000000000000000
i0:
fff8400700ca46a0 i1:
bd0000085e800453 i2:
000000026a0c4000 i3:
000000026a0c6000
i4:
0000000000000001 i5:
fff800070c958de8 i6:
fff8400700bc6b61 i7:
0000000000460dd0
I7: <gup_pud_range+0x170/0x1a0>
Call Trace:
[
0000000000460dd0] gup_pud_range+0x170/0x1a0
[
0000000000460e84] get_user_pages_fast+0x84/0x120
[
00000000006f5a18] iov_iter_get_pages+0x98/0x240
[
00000000005fa744] do_direct_IO+0xf64/0x1e00
[
00000000005fbbc0] __blockdev_direct_IO+0x360/0x15a0
[
00000000101f74fc] ext4_ind_direct_IO+0xdc/0x400 [ext4]
[
00000000101af690] ext4_ext_direct_IO+0x1d0/0x2c0 [ext4]
[
00000000101af86c] ext4_direct_IO+0xec/0x220 [ext4]
[
0000000000553bd4] generic_file_read_iter+0x114/0x140
[
00000000005bdc2c] __vfs_read+0xac/0x100
[
00000000005bf254] vfs_read+0x54/0x100
[
00000000005bf368] SyS_pread64+0x68/0x80
Signed-off-by: Tom Hromatka <tom.hromatka@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
bob picco [Fri, 10 Mar 2017 19:31:19 +0000 (14:31 -0500)]
sparc64: kern_addr_valid regression
[ Upstream commit
adfae8a5d833fa2b46577a8081f350e408851f5b ]
I encountered this bug when using /proc/kcore to examine the kernel. Plus a
coworker inquired about debugging tools. We computed pa but did
not use it during the maximum physical address bits test. Instead we used
the identity mapped virtual address which will always fail this test.
I believe the defect came in here:
[bpicco@zareason linus.git]$ git describe --contains
bb4e6e85daa52
v3.18-rc1~87^2~4
.
Signed-off-by: Bob Picco <bob.picco@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Sat, 25 Mar 2017 02:36:13 +0000 (19:36 -0700)]
ping: implement proper locking
commit
43a6684519ab0a6c52024b5e25322476cabad893 upstream.
We got a report of yet another bug in ping
http://www.openwall.com/lists/oss-security/2017/03/24/6
->disconnect() is not called with socket lock held.
Fix this by acquiring ping rwlock earlier.
Thanks to Daniel, Alexander and Andrey for letting us know about this problem.
Fixes:
c319b4d76b9e ("net: ipv4: add IPPROTO_ICMP socket kind")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Reported-by: Daniel Jiang <danieljiang0415@gmail.com>
Reported-by: Solar Designer <solar@openwall.com>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 27 Apr 2017 10:37:20 +0000 (12:37 +0200)]
Revert "mmc: sdhci-msm: Enable few quirks"
This reverts commit
d84be51d1c1d3fa148a3abdeeb1455690df59e63 which is
commit
a0e3142869d29688de6f77be31aa7a401a4a88f1 upstream.
It causes problems and would need other patches backported to resolve
it, and it shouldn't have been applied to 4.9-stable.
Reported-by: Georgi Djakov <georgi.djakov@linaro.org>
Cc: Sahitya Tummala <stummala@codeaurora.org>
Cc: Ritesh Harjani <riteshh@codeaurora.org>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Ulf Hansson <ulf.hansson@linaro.org>
Cc: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 27 Apr 2017 07:11:26 +0000 (09:11 +0200)]
Linux 4.9.25
Dan Williams [Fri, 7 Apr 2017 23:42:08 +0000 (16:42 -0700)]
device-dax: switch to srcu, fix rcu_read_lock() vs pte allocation
commit
956a4cd2c957acf638ff29951aabaa9d8e92bbc2 upstream.
The following warning triggers with a new unit test that stresses the
device-dax interface.
===============================
[ ERR: suspicious RCU usage. ]
4.11.0-rc4+ #1049 Tainted: G O
-------------------------------
./include/linux/rcupdate.h:521 Illegal context switch in RCU read-side critical section!
other info that might help us debug this:
rcu_scheduler_active = 2, debug_locks = 0
2 locks held by fio/9070:
#0: (&mm->mmap_sem){++++++}, at: [<
ffffffff8d0739d7>] __do_page_fault+0x167/0x4f0
#1: (rcu_read_lock){......}, at: [<
ffffffffc03fbd02>] dax_dev_huge_fault+0x32/0x620 [dax]
Call Trace:
dump_stack+0x86/0xc3
lockdep_rcu_suspicious+0xd7/0x110
___might_sleep+0xac/0x250
__might_sleep+0x4a/0x80
__alloc_pages_nodemask+0x23a/0x360
alloc_pages_current+0xa1/0x1f0
pte_alloc_one+0x17/0x80
__pte_alloc+0x1e/0x120
__get_locked_pte+0x1bf/0x1d0
insert_pfn.isra.70+0x3a/0x100
? lookup_memtype+0xa6/0xd0
vm_insert_mixed+0x64/0x90
dax_dev_huge_fault+0x520/0x620 [dax]
? dax_dev_huge_fault+0x32/0x620 [dax]
dax_dev_fault+0x10/0x20 [dax]
__do_fault+0x1e/0x140
__handle_mm_fault+0x9af/0x10d0
handle_mm_fault+0x16d/0x370
? handle_mm_fault+0x47/0x370
__do_page_fault+0x28c/0x4f0
trace_do_page_fault+0x58/0x2a0
do_async_page_fault+0x1a/0xa0
async_page_fault+0x28/0x30
Inserting a page table entry may trigger an allocation while we are
holding a read lock to keep the device instance alive for the duration
of the fault. Use srcu for this keep-alive protection.
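A hedged sketch of the srcu keep-alive pattern (names are illustrative; the point is that the read-side section may sleep, unlike rcu_read_lock()):
	DEFINE_STATIC_SRCU(dax_srcu);

	static int dax_fault_path(void)
	{
		int id, rc;

		id = srcu_read_lock(&dax_srcu);
		rc = insert_mapping();		/* hypothetical: may allocate a pte and sleep */
		srcu_read_unlock(&dax_srcu, id);
		return rc;
	}

	static void dax_teardown(void)
	{
		/* wait for all in-flight faults before the instance goes away */
		synchronize_srcu(&dax_srcu);
	}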
Fixes:
dee410792419 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vishal Verma [Tue, 18 Apr 2017 18:42:35 +0000 (20:42 +0200)]
x86/mce: Make the MCE notifier a blocking one
commit
0dc9c639e6553e39c13b2c0d54c8a1b098cb95e2 upstream.
The NFIT MCE handler callback (for handling media errors on NVDIMMs)
takes a mutex to add the location of a memory error to a list. But since
the notifier call chain for machine checks (x86_mce_decoder_chain) is
atomic, we get a lockdep splat like:
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
in_atomic(): 1, irqs_disabled(): 0, pid: 4, name: kworker/0:0
[..]
Call Trace:
dump_stack
___might_sleep
__might_sleep
mutex_lock_nested
? __lock_acquire
nfit_handle_mce
notifier_call_chain
atomic_notifier_call_chain
? atomic_notifier_call_chain
mce_gen_pool_process
Convert the notifier to a blocking one which gets to run only in process
context.
Boris: remove the notifier call in atomic context in print_mce(). For
now, let's print the MCE on the atomic path so that we can make sure
they go out and get logged at least.
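A hedged sketch of the conversion (the chain name is taken from the message; the helper around the call is illustrative):
	static BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain);

	void mce_register_decode_chain(struct notifier_block *nb)
	{
		blocking_notifier_chain_register(&x86_mce_decoder_chain, nb);
	}

	static void mce_run_decoders(struct mce *m)	/* illustrative helper */
	{
		/* runs in process context, so callees may take a mutex */
		blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, m);
	}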
Fixes:
6839a6d96f4e ("nfit: do an ARS scrub on hitting a latent media error")
Reported-by: Ross Zwisler <ross.zwisler@linux.intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: x86-ml <x86@kernel.org>
Link: http://lkml.kernel.org/r/20170411224457.24777-1-vishal.l.verma@intel.com
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yazen Ghannam [Thu, 30 Mar 2017 11:17:14 +0000 (13:17 +0200)]
x86/mce/AMD: Give a name to MCA bank 3 when accessed with legacy MSRs
commit
29f72ce3e4d18066ec75c79c857bee0618a3504b upstream.
MCA bank 3 is reserved on systems pre-Fam17h, so it didn't have a name.
However, MCA bank 3 is defined on Fam17h systems and can be accessed
using legacy MSRs. Without a name we get a stack trace on Fam17h systems
when trying to register sysfs files for bank 3 on kernels that don't
recognize Scalable MCA.
Call MCA bank 3 "decode_unit" since this is what it represents on
Fam17h. This will allow kernels without SMCA support to see this bank on
Fam17h+ and prevent the stack trace. This will not affect older systems
since this bank is reserved on them, i.e. it'll be ignored.
Tested on AMD Fam15h and Fam17h systems.
WARNING: CPU: 26 PID: 1 at lib/kobject.c:210 kobject_add_internal
kobject: (
ffff88085bb256c0): attempted to be registered with empty name!
...
Call Trace:
kobject_add_internal
kobject_add
kobject_create_and_add
threshold_create_device
threshold_init_device
Signed-off-by: Yazen Ghannam <yazen.ghannam@amd.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1490102285-3659-1-git-send-email-Yazen.Ghannam@amd.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ravi Bangoria [Tue, 11 Apr 2017 05:08:13 +0000 (10:38 +0530)]
powerpc/kprobe: Fix oops when kprobed on 'stdu' instruction
commit
9e1ba4f27f018742a1aa95d11e35106feba08ec1 upstream.
If we set a kprobe on a 'stdu' instruction on powerpc64, we see a kernel
OOPS:
Bad kernel stack pointer
cd93c840 at
c000000000009868
Oops: Bad kernel stack pointer, sig: 6 [#1]
...
GPR00:
c000001fcd93cb30 00000000cd93c840 c0000000015c5e00 00000000cd93c840
...
NIP [
c000000000009868] resume_kernel+0x2c/0x58
LR [
c000000000006208] program_check_common+0x108/0x180
On a 64-bit system, when the user probes on a 'stdu' instruction, the kernel does
not emulate the actual store in emulate_step() because it may corrupt the exception
frame. So the kernel does the actual store operation in the exception return code,
i.e. resume_kernel().
resume_kernel() loads the saved stack pointer from memory using lwz, which only
loads the low 32-bits of the address, causing the kernel crash.
Fix this by loading the 64-bit value instead.
Fixes:
be96f63375a1 ("powerpc: Split out instruction analysis part of emulate_step()")
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Reviewed-by: Ananth N Mavinakayanahalli <ananth@linux.vnet.ibm.com>
[mpe: Change log massage, add stable tag]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sebastian Siewior [Wed, 22 Feb 2017 16:15:21 +0000 (17:15 +0100)]
ubi/upd: Always flush after prepared for an update
commit
9cd9a21ce070be8a918ffd3381468315a7a76ba6 upstream.
In commit
6afaf8a484cb ("UBI: flush wl before clearing update marker") I
managed to trigger and fix a similar bug. Here is another variant which I
assumed would not matter back then, but it turns out UBI has a check for it
and will error out like this:
|ubi0 warning: validate_vid_hdr: inconsistent used_ebs
|ubi0 error: validate_vid_hdr: inconsistent VID header at PEB 592
All you need to trigger this is "ubiupdatevol /dev/ubi0_0 file" plus a
powercut in the middle of the operation.
ubi_start_update() sets the update-marker and puts all EBs on the erase
list. After that userland can proceed to write new data while the old EB
aren't erased completely. A powercut at this point is usually not that
much of a tragedy. UBI won't give read access to the static volume
because it has the update marker. It will most likely set the corrupted
flag because it misses some EBs.
So we are all good, unless the size of the image that has been written
differs from the old image by at least one EB. In that
case UBI will find two different values for `used_ebs' and refuse to
attach the image with the error message mentioned above.
So in order not to get into this situation, the patch ensures that we
wait until everything is removed before any data is written.
The alternative would be to detect such a case and remove all EBs at
attach time, after we have processed the volume table and seen the
update marker set. That patch would be bigger and I doubt it is worth it,
since the write() will usually have to wait for a new EB from time to
time anyway, as there are usually not that many spare EBs that can be used.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Thu, 13 Apr 2017 12:23:49 +0000 (14:23 +0200)]
mac80211: fix MU-MIMO follow-MAC mode
commit
9e478066eae41211c92a8f63cc69aafc391bd6ab upstream.
There are two bugs in the follow-MAC code:
* it treats the radiotap header as the 802.11 header
(therefore it can't possibly work)
* it doesn't verify that the skb data it accesses is actually
present in the header, which is mitigated by the first point
Fix this by moving all of this out into a separate function.
This function copies the data it needs using skb_copy_bits()
to make sure it can be accessed if it's paged, and offsets
that by the possibly present vendor radiotap header.
This also makes all those conditions more readable.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Thu, 20 Apr 2017 19:32:16 +0000 (21:32 +0200)]
mac80211: reject ToDS broadcast data frames
commit
3018e947d7fd536d57e2b550c33e456d921fff8c upstream.
AP/AP_VLAN modes don't accept any real 802.11 multicast data
frames, but since they do need to accept broadcast management
frames the same is currently permitted for data frames. This
opens a security problem because such frames would be decrypted
with the GTK, and could even contain unicast L3 frames.
Since the spec says that ToDS frames must always have the BSSID
as the RA (addr1), reject any other data frames.
The problem was originally reported in "Predicting, Decrypting,
and Abusing WPA2/802.11 Group Keys" at usenix
https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/vanhoef
and brought to my attention by Jouni.
Reported-by: Jouni Malinen <j@w1.fi>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Weinberger [Thu, 30 Mar 2017 08:50:49 +0000 (10:50 +0200)]
ubifs: Fix O_TMPFILE corner case in ubifs_link()
commit
32fe905c17f001c0eee13c59afddd0bf2eed509c upstream.
It is perfectly fine to link a tmpfile back using linkat().
Since tmpfiles are created with a link count of 0, they appear
on the orphan list; upon re-linking, the inode has to be removed
from the orphan list again.
Ralph faced a filesystem corruption in combination with overlayfs
due to this bug.
Cc: Ralph Sennhauser <ralph.sennhauser@gmail.com>
Cc: Amir Goldstein <amir73il@gmail.com>
Reported-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
Tested-by: Ralph Sennhauser <ralph.sennhauser@gmail.com>
Reported-by: Amir Goldstein <amir73il@gmail.com>
Fixes:
474b93704f321 ("ubifs: Implement O_TMPFILE")
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felix Fietkau [Mon, 6 Mar 2017 09:04:25 +0000 (10:04 +0100)]
ubifs: Fix RENAME_WHITEOUT support
commit
c3d9fda688742c06e89aa1f0f8fd943fc11468cb upstream.
Remove faulty leftover check in do_rename(), apparently introduced in a
merge that combined whiteout support changes with commit
f03b8ad8d386
("fs: support RENAME_NOREPLACE for local filesystems")
Fixes:
f03b8ad8d386 ("fs: support RENAME_NOREPLACE for local filesystems")
Fixes:
9e0a1fff8db5 ("ubifs: Implement RENAME_WHITEOUT")
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Haibo Chen [Wed, 19 Apr 2017 02:53:51 +0000 (10:53 +0800)]
mmc: sdhci-esdhc-imx: increase the pad I/O drive strength for DDR50 card
commit
9f327845358d3dd0d8a5a7a5436b0aa5c432e757 upstream.
Currently, DDR50 cards need tuning by default. We see tuning failures
and data CRC errors when DDR50 SD cards are in use. This is because the
default pad I/O drive strength cannot keep a DDR50 card stable. So
increase the pad I/O drive strength for DDR50 cards, and use
pins_100mhz.
This fixes DDR50 card support for IMX since DDR50 tuning was enabled from
commit
9faac7b95ea4 ("mmc: sdhci: enable tuning for DDR50")
Tested-and-reported-by: Tim Harvey <tharvey@gateworks.com>
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Wed, 19 Apr 2017 17:47:04 +0000 (19:47 +0200)]
ACPI / power: Avoid maybe-uninitialized warning
commit
fe8c470ab87d90e4b5115902dd94eced7e3305c3 upstream.
gcc -O2 cannot always prove that the loop in acpi_power_get_inferred_state()
is entered at least once, so it assumes that cur_state might not get
initialized:
drivers/acpi/power.c: In function 'acpi_power_get_inferred_state':
drivers/acpi/power.c:222:9: error: 'cur_state' may be used uninitialized in this function [-Werror=maybe-uninitialized]
This sets the variable to zero at the start of the loop, to ensure that
there is well-defined behavior even for an empty list. This gets rid of
the warning.
The warning first showed up when the -Os flag got removed in a bug fix
patch in linux-4.11-rc5.
I would suggest merging this addon patch on top of that bug fix to avoid
introducing a new warning in the stable kernels.
Fixes:
61b79e16c68d (ACPI: Fix incompatibility with mcount-based function graph tracing)
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thorsten Leemhuis [Tue, 18 Apr 2017 18:14:28 +0000 (11:14 -0700)]
Input: elantech - add Fujitsu Lifebook E547 to force crc_enabled
commit
704de489e0e3640a2ee2d0daf173e9f7375582ba upstream.
Temporarily got a Lifebook E547 into my hands and noticed the touchpad
only works after running:
echo "1" > /sys/devices/platform/i8042/serio2/crc_enabled
Add it to the list of machines that need this workaround.
Signed-off-by: Thorsten Leemhuis <linux@leemhuis.info>
Reviewed-by: Ulrik De Bie <ulrik.debie-os@e2big.org>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christian Borntraeger [Sun, 9 Apr 2017 20:09:38 +0000 (22:09 +0200)]
s390/mm: fix CMMA vs KSM vs others
commit
a8f60d1fadf7b8b54449fcc9d6b15248917478ba upstream.
On heavy paging with KSM I see guest data corruption. Turns out that
KSM will add pages to its tree, where the mapping return true for
pte_unused (or might become as such later). KSM will unmap such pages
and reinstantiate with different attributes (e.g. write protected or
special, e.g. in replace_page or write_protect_page)). This uncovered
a bug in our pagetable handling: We must remove the unused flag as
soon as an entry becomes present again.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Germano Percossi [Fri, 7 Apr 2017 11:29:37 +0000 (12:29 +0100)]
CIFS: remove bad_network_name flag
commit
a0918f1ce6a43ac980b42b300ec443c154970979 upstream.
STATUS_BAD_NETWORK_NAME can be received during node failover,
causing the flag to be set and making the reconnect thread
always unsuccessful, thereafter.
Once the only place where it is set is removed, the remaining
bits are rendered moot.
Removing it does not prevent "mount" from failing when a non
existent share is passed.
What happens when the share really ceases to exist while the
share is mounted is undefined now as much as it was before.
Signed-off-by: Germano Percossi <germano.percossi@citrix.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sachin Prabhu [Sun, 16 Apr 2017 19:37:24 +0000 (20:37 +0100)]
cifs: Do not send echoes before Negotiate is complete
commit
62a6cfddcc0a5313e7da3e8311ba16226fe0ac10 upstream.
commit
4fcd1813e640 ("Fix reconnect to not defer smb3 session reconnect
long after socket reconnect") added support for Negotiate requests to
be initiated by echo calls.
To avoid delays in calling echo after a reconnect, I added the patch
introduced by the commit
b8c600120fc8 ("Call echo service immediately
after socket reconnect").
This has however caused a regression with cifs shares which do not have
support for echo calls to trigger Negotiate requests. On connections
which need to call Negotiation, the echo calls trigger an error which
triggers a reconnect which in turn triggers another echo call. This
results in a loop which is only broken when an operation is performed on
the cifs share. For an idle share, it can DOS a server.
The patch uses the smb_operation can_echo() for cifs so that it is
called only if the connection has already been set up.
kernel bz: 194531
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Tested-by: Jonathan Liu <net147@gmail.com>
Acked-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rabin Vincent [Thu, 20 Apr 2017 21:37:46 +0000 (14:37 -0700)]
mm: prevent NR_ISOLATE_* stats from going negative
commit
fc280fe871449ead4bdbd1665fa52c7c01c64765 upstream.
Commit
6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn
based migration") moved the dec_node_page_state() call (along with the
page_is_file_cache() call) to after putback_lru_page().
But page_is_file_cache() can change after putback_lru_page() is called,
so it should be called before putback_lru_page(), as it was before that
patch, to prevent NR_ISOLATE_* stats from going negative.
Without this fix, non-CONFIG_SMP kernels end up hanging in the
while(too_many_isolated()) { congestion_wait() } loop in
shrink_active_list() due to the negative stats.
Mem-Info:
active_anon:32567 inactive_anon:121 isolated_anon:1
active_file:6066 inactive_file:6639 isolated_file:
4294967295
^^^^^^^^^^
unevictable:0 dirty:115 writeback:0 unstable:0
slab_reclaimable:2086 slab_unreclaimable:3167
mapped:3398 shmem:18366 pagetables:1145 bounce:0
free:1798 free_pcp:13 free_cma:0
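A hedged sketch of the reordering, mirroring what the message describes rather than the literal diff:
	/* sample the page type and drop the NR_ISOLATED_* count while the
	 * page is still ours, only then hand it back to the LRU */
	dec_node_page_state(page, NR_ISOLATED_ANON + page_is_file_cache(page));
	putback_lru_page(page);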
Fixes:
6afcf8ef0ca0 ("mm, compaction: fix NR_ISOLATED_* stats for pfn based migration")
Link: http://lkml.kernel.org/r/1492683865-27549-1-git-send-email-rabin.vincent@axis.com
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Ming Ling <ming.ling@spreadtrum.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt (VMware) [Wed, 19 Apr 2017 18:29:46 +0000 (14:29 -0400)]
ring-buffer: Have ring_buffer_iter_empty() return true when empty
commit
78f7a45dac2a2d2002f98a3a95f7979867868d73 upstream.
I noticed that reading the snapshot file when it is empty no longer gives a
status. It is supposed to show the status of the snapshot buffer as well as how
to allocate and use it. For example:
># cat snapshot
# tracer: nop
#
#
# * Snapshot is allocated *
#
# Snapshot commands:
# echo 0 > snapshot : Clears and frees snapshot buffer
# echo 1 > snapshot : Allocates snapshot buffer, if not already allocated.
# Takes a snapshot of the main buffer.
# echo 2 > snapshot : Clears snapshot buffer (but does not allocate or free)
# (Doesn't have to be '2' works with any number that
# is not a '0' or '1')
But instead it just showed an empty buffer:
># cat snapshot
# tracer: nop
#
# entries-in-buffer/entries-written: 0/0 #P:4
#
# _-----=> irqs-off
# / _----=> need-resched
# | / _---=> hardirq/softirq
# || / _--=> preempt-depth
# ||| / delay
# TASK-PID CPU# |||| TIMESTAMP FUNCTION
# | | | |||| | |
What happened was that it was using the ring_buffer_iter_empty() function to
see if the buffer was empty, and if it was, it showed the status. But that
function was returning false when the buffer was empty. The reason was that the
iter header page was on the reader page, and the reader page was empty, but so
was the buffer itself. The check only tested whether the iter was on the commit
page, but the commit page no longer pointed to the reader page; since all pages
were empty, the buffer was empty as well.
Fixes:
651e22f2701b ("ring-buffer: Always reset iterator to reader page")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt (VMware) [Wed, 19 Apr 2017 16:07:08 +0000 (12:07 -0400)]
tracing: Allocate the snapshot buffer before enabling probe
commit
df62db5be2e5f070ecd1a5ece5945b590ee112e0 upstream.
Currently the snapshot trigger enables the probe and then allocates the
snapshot. If the probe triggers before the allocation, it could cause the
snapshot to fail and turn tracing off. It's best to allocate the snapshot
buffer first, and then enable the trigger. If something goes wrong in the
enabling of the trigger, the snapshot buffer is still allocated, but it can
also be freed by the user by writing zero into the snapshot buffer file.
Also add a check of the return status of alloc_snapshot().
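A hedged sketch of the new ordering (alloc_snapshot() is named in the message; the enable step is shown as a hypothetical placeholder):
	/* allocate first, so a probe firing right after registration always
	 * finds a snapshot buffer to swap into */
	ret = alloc_snapshot(tr);
	if (ret < 0)
		return ret;		/* the newly added return-status check */
	return enable_trigger();	/* hypothetical: whatever arms the probe */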
Fixes:
77fd5c15e3 ("tracing: Add snapshot trigger to function probes")
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Biggers [Tue, 18 Apr 2017 14:31:09 +0000 (15:31 +0100)]
KEYS: fix keyctl_set_reqkey_keyring() to not leak thread keyrings
commit
c9f838d104fed6f2f61d68164712e3204bf5271b upstream.
This fixes CVE-2017-7472.
Running the following program as an unprivileged user exhausts kernel
memory by leaking thread keyrings:
#include <keyutils.h>
int main()
{
for (;;)
keyctl_set_reqkey_keyring(KEY_REQKEY_DEFL_THREAD_KEYRING);
}
Fix it by only creating a new thread keyring if there wasn't one before.
To make things more consistent, make install_thread_keyring_to_cred()
and install_process_keyring_to_cred() both return 0 if the corresponding
keyring is already present.
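A hedged sketch of the fix's shape (assuming the cred parameter in install_thread_keyring_to_cred() is called new; only the early return is shown):
	/* in install_thread_keyring_to_cred(): bail out early if a thread
	 * keyring is already installed instead of allocating another one */
	if (new->thread_keyring)
		return 0;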
Fixes:
d84f4f992cbd ("CRED: Inaugurate COW credentials")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Howells [Tue, 18 Apr 2017 14:31:08 +0000 (15:31 +0100)]
KEYS: Change the name of the dead type to ".dead" to prevent user access
commit
c1644fe041ebaf6519f6809146a77c3ead9193af upstream.
This fixes CVE-2017-6951.
Userspace should not be able to do things with the "dead" key type as it
doesn't have some of the helper functions set upon it that the kernel
needs. Attempting to use it may cause the kernel to crash.
Fix this by changing the name of the type to ".dead" so that it's rejected
up front on userspace syscalls by key_get_type_from_user().
Though this doesn't seem to affect recent kernels, it does affect older
ones, certainly those prior to:
commit
c06cfb08b88dfbe13be44a69ae2fdc3a7c902d81
Author: David Howells <dhowells@redhat.com>
Date: Tue Sep 16 17:36:06 2014 +0100
KEYS: Remove key_type::match in favour of overriding default by match_preparse
which went in before 3.18-rc1.
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Howells [Tue, 18 Apr 2017 14:31:07 +0000 (15:31 +0100)]
KEYS: Disallow keyrings beginning with '.' to be joined as session keyrings
commit
ee8f844e3c5a73b999edf733df1c529d6503ec2f upstream.
This fixes CVE-2016-9604.
Keyrings whose name begin with a '.' are special internal keyrings and so
userspace isn't allowed to create keyrings by this name to prevent
shadowing. However, the patch that added the guard didn't fix
KEYCTL_JOIN_SESSION_KEYRING. Not only can that create dot-named keyrings,
it can also subscribe to them as a session keyring if they grant SEARCH
permission to the user.
This, for example, allows a root process to set .builtin_trusted_keys as
its session keyring, at which point it has full access because now the
possessor permissions are added. This permits root to add extra public
keys, thereby bypassing module verification.
This also affects kexec and IMA.
This can be tested by (as root):
keyctl session .builtin_trusted_keys
keyctl add user a a @s
keyctl list @s
which on my test box gives me:
2 keys in keyring:
180010936: ---lswrv 0 0 asymmetric: Build time autogenerated kernel key:
ae3d4a31b82daa8e1a75b49dc2bba949fd992a05
801382539: --alswrv 0 0 user: a
Fix this by rejecting names beginning with a '.' in the keyctl.
Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
cc: linux-ima-devel@lists.sourceforge.net
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Fri, 21 Apr 2017 07:31:39 +0000 (09:31 +0200)]
Linux 4.9.24
Marcelo Ricardo Leitner [Thu, 23 Feb 2017 12:31:18 +0000 (09:31 -0300)]
sctp: deny peeloff operation on asocs with threads sleeping on it
commit
dfcb9f4f99f1e9a49e43398a7bfbf56927544af1 upstream.
commit
2dcab5984841 ("sctp: avoid BUG_ON on sctp_wait_for_sndbuf")
attempted to avoid a BUG_ON call when the association being used for a
sendmsg() is blocked waiting for more sndbuf and another thread did a
peeloff operation on such asoc, moving it to another socket.
As Ben Hutchings noticed, then in such case it would return without
locking back the socket and would cause two unlocks in a row.
Further analysis also revealed that it could allow a double free if the
application managed to peeloff the asoc that is created during the
sendmsg call, because then sctp_sendmsg() would try to free the asoc
that was created only for that call.
This patch takes another approach. It will deny the peeloff operation
if there is a thread sleeping on the asoc, so this situation doesn't
exist anymore. This avoids the issues described above and also honors
the syscalls that are already being handled (it can be multiple sendmsg
calls).
Joint work with Xin Long.
Fixes:
2dcab5984841 ("sctp: avoid BUG_ON on sctp_wait_for_sndbuf")
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mantas M [Fri, 16 Dec 2016 08:30:59 +0000 (10:30 +0200)]
net: ipv6: check route protocol when deleting routes
commit
c2ed1880fd61a998e3ce40254a99a2ad000f1a7d upstream.
The protocol field is checked when deleting IPv4 routes, but ignored for
IPv6, which causes problems with routing daemons accidentally deleting
externally set routes (observed by multiple bird6 users).
This can be verified using `ip -6 route del <prefix> proto something`.
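A hedged sketch of the added filter in the IPv6 delete path (field names follow fib6_config and rt6_info; placement in ip6_route_del() is an assumption):
	/* skip routes whose protocol does not match the one userspace asked
	 * to delete, mirroring what the IPv4 path already does */
	if (cfg->fc_protocol && cfg->fc_protocol != rt->rt6i_protocol)
		continue;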
Signed-off-by: Mantas Mikulėnas <grawity@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Omar Sandoval [Wed, 1 Feb 2017 08:02:27 +0000 (00:02 -0800)]
virtio-console: avoid DMA from stack
commit
c4baad50297d84bde1a7ad45e50c73adae4a2192 upstream.
put_chars() stuffs the buffer it gets into an sg, but that buffer may be
on the stack. This breaks with CONFIG_VMAP_STACK=y (for me, it
manifested as printks getting turned into NUL bytes).
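A hedged sketch of the bounce-buffer approach in put_chars() (the actual submit call is elided; GFP_ATOMIC is an assumption since the caller may not sleep):
	/* copy the caller's (possibly stack/vmalloc) buffer into DMA-able
	 * heap memory before building the scatterlist */
	data = kmemdup(buf, count, GFP_ATOMIC);
	if (!data)
		return -ENOMEM;
	sg_init_one(sg, data, count);
	/* ... hand sg to the host as before, then ... */
	kfree(data);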
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Amit Shah <amit.shah@redhat.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Brad Spengler <spender@grsecurity.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stefan Brüns [Sun, 5 Feb 2017 14:57:59 +0000 (12:57 -0200)]
cxusb: Use a dma capable buffer also for reading
commit
3f190e3aec212fc8c61e202c51400afa7384d4bc upstream.
Commit
17ce039b4e54 ("[media] cxusb: don't do DMA on stack")
added a kmalloc'ed bounce buffer for writes, but missed to do the same
for reads. As the read only happens after the write is finished, we can
reuse the same buffer.
As dvb_usb_generic_rw handles a read length of 0 by itself, avoid calling
it using the dvb_usb_generic_read wrapper function.
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Brad Spengler <spender@grsecurity.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stefan Brüns [Sun, 12 Feb 2017 15:02:13 +0000 (13:02 -0200)]
dvb-usb-firmware: don't do DMA on stack
commit
67b0503db9c29b04eadfeede6bebbfe5ddad94ef upstream.
The buffer allocation for the firmware data was changed in
commit
43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load")
but the same applies for the reset value.
Fixes:
43fab9793c1f ("[media] dvb-usb: don't use stack for firmware load")
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Cc: Brad Spengler <spender@grsecurity.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mauro Carvalho Chehab [Tue, 24 Jan 2017 10:13:11 +0000 (08:13 -0200)]
dvb-usb: don't use stack for firmware load
commit
43fab9793c1f44e665b4f98035a14942edf03ddc upstream.
As reported by Marc Duponcheel <marc@offline.be>, firmware load on
dvb-usb is using the stack, which is not allowed anymore on default
Kernel configurations:
[ 1025.958836] dvb-usb: found a 'WideView WT-220U PenType Receiver (based on ZL353)' in cold state, will try to load a firmware
[ 1025.958853] dvb-usb: downloading firmware from file 'dvb-usb-wt220u-zl0353-01.fw'
[ 1025.958855] dvb-usb: could not stop the USB controller CPU.
[ 1025.958856] dvb-usb: error while transferring firmware (transferred size: -11, block size: 3)
[ 1025.958856] dvb-usb: firmware download failed at 8 with -22
[ 1025.958867] usbcore: registered new interface driver dvb_usb_dtt200u
[ 2.789902] dvb-usb: downloading firmware from file 'dvb-usb-wt220u-zl0353-01.fw'
[ 2.789905] ------------[ cut here ]------------
[ 2.789911] WARNING: CPU: 3 PID: 2196 at drivers/usb/core/hcd.c:1584 usb_hcd_map_urb_for_dma+0x430/0x560 [usbcore]
[ 2.789912] transfer buffer not dma capable
[ 2.789912] Modules linked in: btusb dvb_usb_dtt200u(+) dvb_usb_af9035(+) btrtl btbcm dvb_usb dvb_usb_v2 btintel dvb_core bluetooth rc_core rfkill x86_pkg_temp_thermal intel_powerclamp coretemp crc32_pclmul aesni_intel aes_x86_64 glue_helper lrw gf128mul ablk_helper cryptd drm_kms_helper syscopyarea sysfillrect pcspkr i2c_i801 sysimgblt fb_sys_fops drm i2c_smbus i2c_core r8169 lpc_ich mfd_core mii thermal fan rtc_cmos video button acpi_cpufreq processor snd_hda_codec_realtek snd_hda_codec_generic snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm snd_timer snd crc32c_intel ahci libahci libata xhci_pci ehci_pci xhci_hcd ehci_hcd usbcore usb_common dm_mirror dm_region_hash dm_log dm_mod
[ 2.789936] CPU: 3 PID: 2196 Comm: systemd-udevd Not tainted 4.9.0-gentoo #1
[ 2.789937] Hardware name: ASUS All Series/H81I-PLUS, BIOS 0401 07/23/2013
[ 2.789938]
ffffc9000339b690 ffffffff812bd397 ffffc9000339b6e0 0000000000000000
[ 2.789939]
ffffc9000339b6d0 ffffffff81055c86 000006300339b6a0 ffff880116c0c000
[ 2.789941]
0000000000000000 0000000000000000 0000000000000001 ffff880116c08000
[ 2.789942] Call Trace:
[ 2.789945] [<
ffffffff812bd397>] dump_stack+0x4d/0x66
[ 2.789947] [<
ffffffff81055c86>] __warn+0xc6/0xe0
[ 2.789948] [<
ffffffff81055cea>] warn_slowpath_fmt+0x4a/0x50
[ 2.789952] [<
ffffffffa006d460>] usb_hcd_map_urb_for_dma+0x430/0x560 [usbcore]
[ 2.789954] [<
ffffffff814ed5a8>] ? io_schedule_timeout+0xd8/0x110
[ 2.789956] [<
ffffffffa006e09c>] usb_hcd_submit_urb+0x9c/0x980 [usbcore]
[ 2.789958] [<
ffffffff812d0ebf>] ? copy_page_to_iter+0x14f/0x2b0
[ 2.789960] [<
ffffffff81126818>] ? pagecache_get_page+0x28/0x240
[ 2.789962] [<
ffffffff8118c2a0>] ? touch_atime+0x20/0xa0
[ 2.789964] [<
ffffffffa006f7c4>] usb_submit_urb+0x2c4/0x520 [usbcore]
[ 2.789967] [<
ffffffffa006feca>] usb_start_wait_urb+0x5a/0xe0 [usbcore]
[ 2.789969] [<
ffffffffa007000c>] usb_control_msg+0xbc/0xf0 [usbcore]
[ 2.789970] [<
ffffffffa067903d>] usb_cypress_writemem+0x3d/0x40 [dvb_usb]
[ 2.789972] [<
ffffffffa06791cf>] usb_cypress_load_firmware+0x4f/0x130 [dvb_usb]
[ 2.789973] [<
ffffffff8109dbbe>] ? console_unlock+0x2fe/0x5d0
[ 2.789974] [<
ffffffff8109e10c>] ? vprintk_emit+0x27c/0x410
[ 2.789975] [<
ffffffff8109e40a>] ? vprintk_default+0x1a/0x20
[ 2.789976] [<
ffffffff81124d76>] ? printk+0x43/0x4b
[ 2.789977] [<
ffffffffa0679310>] dvb_usb_download_firmware+0x60/0xd0 [dvb_usb]
[ 2.789979] [<
ffffffffa0679898>] dvb_usb_device_init+0x3d8/0x610 [dvb_usb]
[ 2.789981] [<
ffffffffa069e302>] dtt200u_usb_probe+0x92/0xd0 [dvb_usb_dtt200u]
[ 2.789984] [<
ffffffffa007420c>] usb_probe_interface+0xfc/0x270 [usbcore]
[ 2.789985] [<
ffffffff8138bf95>] driver_probe_device+0x215/0x2d0
[ 2.789986] [<
ffffffff8138c0e6>] __driver_attach+0x96/0xa0
[ 2.789987] [<
ffffffff8138c050>] ? driver_probe_device+0x2d0/0x2d0
[ 2.789988] [<
ffffffff81389ffb>] bus_for_each_dev+0x5b/0x90
[ 2.789989] [<
ffffffff8138b7b9>] driver_attach+0x19/0x20
[ 2.789990] [<
ffffffff8138b33c>] bus_add_driver+0x11c/0x220
[ 2.789991] [<
ffffffff8138c91b>] driver_register+0x5b/0xd0
[ 2.789994] [<
ffffffffa0072f6c>] usb_register_driver+0x7c/0x130 [usbcore]
[ 2.789994] [<
ffffffffa06a5000>] ? 0xffffffffa06a5000
[ 2.789996] [<
ffffffffa06a501e>] dtt200u_usb_driver_init+0x1e/0x20 [dvb_usb_dtt200u]
[ 2.789997] [<
ffffffff81000408>] do_one_initcall+0x38/0x140
[ 2.789998] [<
ffffffff8116001c>] ? __vunmap+0x7c/0xc0
[ 2.789999] [<
ffffffff81124fb0>] ? do_init_module+0x22/0x1d2
[ 2.790000] [<
ffffffff81124fe8>] do_init_module+0x5a/0x1d2
[ 2.790002] [<
ffffffff810c96b1>] load_module+0x1e11/0x2580
[ 2.790003] [<
ffffffff810c68b0>] ? show_taint+0x30/0x30
[ 2.790004] [<
ffffffff81177250>] ? kernel_read_file+0x100/0x190
[ 2.790005] [<
ffffffff810c9ffa>] SyS_finit_module+0xba/0xc0
[ 2.790007] [<
ffffffff814f13e0>] entry_SYSCALL_64_fastpath+0x13/0x94
[ 2.790008] ---[ end trace
c78a74e78baec6fc ]---
So, allocate the structure dynamically.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
[bwh: Backported to 4.9: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>