Fugang Duan [Fri, 30 Mar 2018 08:45:26 +0000 (16:45 +0800)]
MLK-17779 input: egalax_ts: free IRQ resource before requesting the line as GPIO
If the GPIO line is connected to an IRQ, it should only be requested as a
GPIO function after its IRQ resource has been freed.
Tested-by: Haibo Chen <haibo.chen@nxp.com>
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Signed-off-by: Anson Huang <Anson.Huang@nxp.com>
Signed-off-by: Robin Gong <yibin.gong@nxp.com>
Liu Ying [Mon, 26 Mar 2018 08:05:03 +0000 (16:05 +0800)]
MLK-17924 gpu: imx: imx8_dprc: Do not set FRAME_2P_PIX_X/Y_CTRL for updated IP
We've got some fixups for DPR IP in the new i.MX8QXP silicon.
To address the cropping issue (TKT344978), the new IP changes the
FRAME_2P_PIX_X/Y_CTRL (@F0h and @100h) register definitions to be
FRAME_PIX_X/Y_ULC_CTRL. Thus, we should not set the two registers
for the new IP. FRAME_PIX_X/Y_ULC_CTRL will be programmed after
we figure out how to use them to do fb x/y offset for tile formats.
Signed-off-by: Liu Ying <victor.liu@nxp.com>
Liu Ying [Mon, 26 Mar 2018 08:19:37 +0000 (16:19 +0800)]
MLK-17923 drm/imx: dpu: plane: Do not support fb x/y src offset for tile fmts
We don't have correct support for fb x/y source offset for tile formats.
The buffer address calculation is wrong when the offset is non-zero.
Also, the finer offset needs a fix in silicon (TKT344978). So, let's not
support the offset for now. We may add it back after we figure out
how the updated silicon supports the offset.
Signed-off-by: Liu Ying <victor.liu@nxp.com>
Marc Zyngier [Tue, 6 Feb 2018 17:56:21 +0000 (17:56 +0000)]
arm64: Kill PSCI_GET_VERSION as a variant-2 workaround
commit
3a0a397ff5ff upstream.
Now that we've standardised on SMCCC v1.1 to perform the branch
prediction invalidation, let's drop the previous band-aid.
If vendors haven't updated their firmware to do SMCCC 1.1, they
haven't updated PSCI either, so we don't lose anything.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no falkor/thunderx2/vulcan in arch/arm64/kernel/cpu_errata.c
Marc Zyngier [Tue, 6 Feb 2018 17:56:20 +0000 (17:56 +0000)]
arm64: Add ARM_SMCCC_ARCH_WORKAROUND_1 BP hardening support
commit
b092201e0020 upstream.
Add the detection and runtime code for ARM_SMCCC_ARCH_WORKAROUND_1.
It is lovely. Really.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no qcom hyp functions in
arch/arm64/kernel/bpi.S
arch/arm64/kernel/cpu_errata.c
Marc Zyngier [Tue, 6 Feb 2018 17:56:19 +0000 (17:56 +0000)]
arm/arm64: smccc: Implement SMCCC v1.1 inline primitive
commit
f2d3b2e8759a upstream.
One of the major improvements of SMCCC v1.1 is that it only clobbers
the first 4 registers, on both 32-bit and 64-bit. This means that it
becomes very easy to provide an inline version of the SMC call
primitive, and avoid performing a function call to stash the
registers that would otherwise be clobbered by SMCCC v1.0.
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
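For illustration, a minimal sketch of the inline-call idea described above; this is not the kernel's arm_smccc_1_1_smc() macro, just a hedged example of an SMC issued from inline assembly with only x0-x3 clobbered:

/*
 * Illustrative sketch only (assumes an arm64 compiler): because SMCCC v1.1
 * clobbers just x0-x3, the call can be inlined instead of going through an
 * out-of-line helper that stashes the remaining registers.
 */
static inline unsigned long smccc_1_1_smc_sketch(unsigned long fn_id,
                                                 unsigned long arg0)
{
        register unsigned long x0 asm("x0") = fn_id;
        register unsigned long x1 asm("x1") = arg0;
        register unsigned long x2 asm("x2") = 0;
        register unsigned long x3 asm("x3") = 0;

        asm volatile("smc #0"
                     : "+r" (x0), "+r" (x1), "+r" (x2), "+r" (x3)
                     :
                     : "memory");

        return x0;      /* result is returned in x0 per the convention */
}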
Marc Zyngier [Tue, 6 Feb 2018 17:56:18 +0000 (17:56 +0000)]
arm/arm64: smccc: Make function identifiers an unsigned quantity
commit
ded4c39e93f3 upstream.
Function identifiers are a 32-bit, unsigned quantity, but we never
tell the compiler so, resulting in the following:
4ac:   b26187e0        mov     x0, #0xffffffff80000001
We thus rely on the firmware narrowing it for us, which is not
always a reasonable expectation.
Cc: stable@vger.kernel.org
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
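A hedged sketch of the sign-extension problem shown in the disassembly above; the constant names are hypothetical, not the kernel's ARM_SMCCC_* macros:

#include <linux/types.h>

/* Hypothetical constants, for illustration only. */
#define EXAMPLE_FID_SIGNED      ((1 << 31) | 1)         /* built from plain ints: negative */
#define EXAMPLE_FID_UNSIGNED    ((u32)((1U << 31) | 1)) /* explicitly 32-bit unsigned */

void load_function_ids(u64 *bad, u64 *good)
{
        *bad  = EXAMPLE_FID_SIGNED;     /* sign-extends to 0xffffffff80000001 */
        *good = EXAMPLE_FID_UNSIGNED;   /* zero-extends to 0x0000000080000001 */
}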
Marc Zyngier [Tue, 6 Feb 2018 17:56:17 +0000 (17:56 +0000)]
firmware/psci: Expose SMCCC version through psci_ops
commit
e78eef554a91 upstream.
Since PSCI 1.0 allows the SMCCC version to be (indirectly) probed,
let's do that at boot time, and expose the version of the calling
convention as part of the psci_ops structure.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Marc Zyngier [Tue, 6 Feb 2018 17:56:16 +0000 (17:56 +0000)]
firmware/psci: Expose PSCI conduit
commit
09a8d6d48499 upstream.
In order to call into the firmware to apply workarounds, it is
useful to find out whether we're using HVC or SMC. Let's expose
this through the psci_ops.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
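Roughly, the two commits above expose this information through new enums and fields in psci_ops; a sketch of the shape (struct abridged):

enum psci_conduit {
        PSCI_CONDUIT_NONE,
        PSCI_CONDUIT_SMC,
        PSCI_CONDUIT_HVC,
};

enum smccc_version {
        SMCCC_VERSION_1_0,
        SMCCC_VERSION_1_1,
};

struct psci_operations {
        /* ... existing PSCI entry points ... */
        enum psci_conduit conduit;              /* how we trap into firmware */
        enum smccc_version smccc_version;       /* probed at boot via PSCI 1.0 */
};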
Marc Zyngier [Tue, 6 Feb 2018 17:56:15 +0000 (17:56 +0000)]
arm64: KVM: Add SMCCC_ARCH_WORKAROUND_1 fast handling
commit
f72af90c3783 upstream.
We want SMCCC_ARCH_WORKAROUND_1 to be fast. As fast as possible.
So let's intercept it as early as we can by testing for the
function call number as soon as we've identified a HVC call
coming from the guest.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Marc Zyngier [Tue, 6 Feb 2018 17:56:14 +0000 (17:56 +0000)]
arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support
commit
6167ec5c9145 upstream.
A new feature of SMCCC 1.1 is that it offers firmware-based CPU
workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
BP hardening for CVE-2017-5715.
If the host has some mitigation for this issue, report that
we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
host workaround on every guest exit.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no sve support in arch/arm64/include/asm/kvm_host.h
mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
using cpus_have_cap instead of cpus_have_const_cap
Marc Zyngier [Tue, 6 Feb 2018 17:56:13 +0000 (17:56 +0000)]
arm/arm64: KVM: Turn kvm_psci_version into a static inline
commit
a4097b351118 upstream.
We're about to need kvm_psci_version in HYP too. So let's turn it
into a static inline, and pass the kvm structure as a second
parameter (so that HYP can do a kern_hyp_va on it).
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
Marc Zyngier [Wed, 3 Jan 2018 16:38:37 +0000 (16:38 +0000)]
arm64: KVM: Make PSCI_VERSION a fast path
commit
90348689d500 upstream.
For those CPUs that require PSCI to perform a BP invalidation,
going all the way to the PSCI code for not much is a waste of
precious cycles. Let's terminate that call as early as possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Marc Zyngier [Tue, 6 Feb 2018 17:56:12 +0000 (17:56 +0000)]
arm/arm64: KVM: Advertise SMCCC v1.1
commit
09e6be12effd upstream.
The new SMC Calling Convention (v1.1) allows for a reduced overhead
when calling into the firmware, and provides a new feature discovery
mechanism.
Make it visible to KVM guests.
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
Marc Zyngier [Tue, 6 Feb 2018 17:56:11 +0000 (17:56 +0000)]
arm/arm64: KVM: Implement PSCI 1.0 support
commit
58e0b2239a4d upstream.
PSCI 1.0 can be trivially implemented by providing the FEATURES
call on top of PSCI 0.2 and returning 1.0 as the PSCI version.
We happily ignore everything else, as they are either optional or
are clarifications that do not require any additional change.
PSCI 1.0 is now the default until we decide to add a userspace
selection API.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv changes from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
Marc Zyngier [Sat, 24 Feb 2018 07:38:00 +0000 (15:38 +0800)]
arm/arm64: KVM: Add smccc accessors to PSCI code
commit
84684fecd7ea upstream.
Instead of open coding the accesses to the various registers,
let's add explicit SMCCC accessors.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
Marc Zyngier [Tue, 6 Feb 2018 17:56:09 +0000 (17:56 +0000)]
arm/arm64: KVM: Add PSCI_VERSION helper
commit
d0a144f12a7c upstream.
As we're about to trigger a PSCI version explosion, it doesn't
hurt to introduce a PSCI_VERSION helper that is going to be
used everywhere.
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv change from virt/kvm/arm/psci.c to arch/arm/kvm/psci.c
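The helper packs a major/minor pair into the 32-bit PSCI version format (major in the upper 16 bits, minor in the lower 16); roughly:

#define PSCI_VERSION_MAJOR_SHIFT        16
#define PSCI_VERSION_MINOR_MASK         ((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
#define PSCI_VERSION_MAJOR_MASK         ~PSCI_VERSION_MINOR_MASK

#define PSCI_VERSION(maj, min)                                            \
        ((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
         ((min) & PSCI_VERSION_MINOR_MASK))

/* e.g. PSCI_VERSION(1, 0) == 0x00010000 */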
Marc Zyngier [Tue, 6 Feb 2018 17:56:08 +0000 (17:56 +0000)]
arm/arm64: KVM: Consolidate the PSCI include files
commit
1a2fb94e6a77 upstream.
As we're about to update the PSCI support, and because I'm lazy,
let's move the PSCI include file to include/kvm so that both
ARM architectures can find it.
Acked-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
need kvm/arm_psci.h in files:
arch/arm64/kvm/handle_exit.c
arch/arm/kvm/psci.c and arch/arm/kvm/arm.c
no virt/kvm/arm/arm.c and virt/kvm/arm/psci.c
Marc Zyngier [Tue, 6 Feb 2018 17:56:07 +0000 (17:56 +0000)]
arm64: KVM: Increment PC after handling an SMC trap
commit
f5115e8869e1 upstream.
When handling an SMC trap, the "preferred return address" is set
to that of the SMC, and not the next PC (which is a departure from
the behaviour of an SMC that isn't trapped).
Increment PC in the handler, as the guest is otherwise forever
stuck...
Cc: stable@vger.kernel.org
Fixes:
acfb3b883f6d ("arm64: KVM: Fix SMCCC handling of unimplemented SMC/HVC calls")
Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Wed, 3 Jan 2018 12:46:21 +0000 (12:46 +0000)]
arm64: Implement branch predictor hardening for affected Cortex-A CPUs
commit
aa6acde65e03 upstream.
Cortex-A57, A72, A73 and A75 are susceptible to branch predictor aliasing
and can theoretically be attacked by malicious code.
This patch implements a PSCI-based mitigation for these CPUs when available.
The call into firmware will invalidate the branch predictor state, preventing
any malicious entries from affecting other victim contexts.
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no falkor in arch/arm64/kernel/cpu_errata.c
Will Deacon [Fri, 2 Feb 2018 17:31:40 +0000 (17:31 +0000)]
arm64: entry: Apply BP hardening for suspicious interrupts from EL0
commit
30d88c0e3ace upstream.
It is possible to take an IRQ from EL0 following a branch to a kernel
address in such a way that the IRQ is prioritised over the instruction
abort. Whilst an attacker would need to get the stars to align here,
it might be sufficient with enough calibration, so perform BP hardening
in the rare case that we see a kernel address in the ELR when handling
an IRQ from EL0.
Reported-by: Dan Hettena <dhettena@nvidia.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Fri, 2 Feb 2018 17:31:39 +0000 (17:31 +0000)]
arm64: entry: Apply BP hardening for high-priority synchronous exceptions
commit
5dfc6ed27710 upstream.
Software-step and PC alignment fault exceptions have higher priority than
instruction abort exceptions, so apply the BP hardening hooks there too
if the user PC appears to reside in kernel space.
Reported-by: Dan Hettena <dhettena@nvidia.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
expand enable_da_f to 'msr daifclr, #(8 | 4 | 1)'
in arch/arm64/kernel/entry.S
Marc Zyngier [Wed, 3 Jan 2018 16:38:35 +0000 (16:38 +0000)]
arm64: KVM: Use per-CPU vector when BP hardening is enabled
commit
6840bdd73d07 upstream
Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
mv changes from virt/kvm/arm/arm.c to arch/arm/kvm/arm.c
Marc Zyngier [Fri, 19 Jan 2018 15:42:09 +0000 (15:42 +0000)]
arm64: Move BP hardening to check_and_switch_context
commit
a8e4c0a919ae upstream.
We call arm64_apply_bp_hardening() from post_ttbr_update_workaround,
which has the unexpected consequence of being triggered on every
exception return to userspace when ARM64_SW_TTBR0_PAN is selected,
even if no context switch actually occurred.
This is a bit suboptimal, and it would be more logical to only
invalidate the branch predictor when we actually switch to
a different mm.
In order to solve this, move the call to arm64_apply_bp_hardening()
into check_and_switch_context(), where we're guaranteed to pick
a different mm context.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no sw pan in arch/arm64/mm/context.c
Will Deacon [Wed, 3 Jan 2018 11:17:58 +0000 (11:17 +0000)]
arm64: Add skeleton to harden the branch predictor against aliasing attacks
commit
0f15adbb2861 upstream.
Aliasing attacks against CPU branch predictors can allow an attacker to
redirect speculative control flow on some CPUs and potentially divulge
information from one context to another.
This patch adds initial skeleton code behind a new Kconfig option to
enable implementation-specific mitigations against these attacks for
CPUs that are affected.
Co-developed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
expand enable_da_f in entry.S
use the 5-parameter ARM64_FTR_BITS()
add percpu.h in mm_types.h for percpu functions
use cpus_have_cap instead of cpus_have_const_cap
arch/arm64/Kconfig
arch/arm64/include/asm/cpucaps.h
arch/arm64/include/asm/mmu.h
arch/arm64/include/asm/sysreg.h
arch/arm64/kernel/cpufeature.c
arch/arm64/kernel/entry.S
arch/arm64/mm/fault.c
Marc Zyngier [Tue, 30 Jan 2018 04:02:03 +0000 (12:02 +0800)]
arm64: Move post_ttbr_update_workaround to C code
commit
95e3de3590e3 upstream.
We will soon need to invoke a CPU-specific function pointer after changing
page tables, so move post_ttbr_update_workaround out into C code to make
this possible.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
don't include PAN related changes
arch/arm64/include/asm/assembler.h
arch/arm64/kernel/entry.S
arch/arm64/mm/proc.S
Will Deacon [Tue, 2 Jan 2018 21:37:25 +0000 (21:37 +0000)]
arm64: cpufeature: Pass capability structure to ->enable callback
commit
0a0d111d40fd upstream.
In order to invoke the CPU capability ->matches callback from the ->enable
callback for applying local-CPU workarounds, we need a handle on the
capability structure.
This patch passes a pointer to the capability structure to the ->enable
callback.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/kernel/cpufeature.c
Suzuki K Poulose [Wed, 17 Jan 2018 17:42:20 +0000 (17:42 +0000)]
arm64: Run enable method for errata work arounds on late CPUs
commit
55b35d070c25 upstream.
When a CPU is brought up after we have finalised the system
wide capabilities (i.e., features and errata), we make sure the
new CPU doesn't need a new errata work around which has not been
detected already. However, we don't run the enable() method on the new
CPU for the errata work arounds already detected. This could
cause the new CPU to run without potential work arounds.
It is up to the "enable()" method to decide if this CPU should
do something about the errata.
Fixes: commit
6a6efbb45b7d95c84 ("arm64: Verify CPU errata work arounds on hotplugged CPU")
Cc: Will Deacon <will.deacon@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andre Przywara <andre.przywara@arm.com>
Cc: Dave Martin <dave.martin@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Marc Zyngier [Wed, 1 Feb 2017 14:38:46 +0000 (14:38 +0000)]
arm64: cpu_errata: Allow an erratum to be matched for all revisions of a core
commit
06f1494f837 upstream.
Some minor erratum may not be fixed in further revisions of a core,
leading to a situation where the workaround needs to be updated each
time an updated core is released.
Introduce a MIDR_ALL_VERSIONS match helper that will work for all
versions of that MIDR, once and for all.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
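A hedged sketch of how such a helper can expand inside a cpu_errata capability entry, matching the given MIDR model across every variant/revision; the field names follow the arm64 capability table of that era and may differ in detail:

#define MIDR_ALL_VERSIONS(model)                                        \
        .def_scope = SCOPE_LOCAL_CPU,                                   \
        .matches = is_affected_midr_range,                              \
        .midr_model = model,                                            \
        .midr_range_min = 0,                                            \
        .midr_range_max = (MIDR_VARIANT_MASK | MIDR_REVISION_MASK)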
James Morse [Fri, 23 Feb 2018 14:31:42 +0000 (22:31 +0800)]
arm64: cpufeature: __this_cpu_has_cap() shouldn't stop early
commit
edf298cfce47 upstream.
Alex Shi rewrote this commit against the this_cpu_has_cap() function. The
following commit log is still meaningful.
this_cpu_has_cap() tests caps->desc not caps->matches, so it stops
walking the list when it finds a 'silent' feature, instead of
walking to the end of the list.
Prior to v4.6's
644c2ae198412 ("arm64: cpufeature: Test 'matches' pointer
to find the end of the list") we always tested desc to find the end of
a capability list. This was changed for dubious things like PAN_NOT_UAO.
v4.7's
e3661b128e53e ("arm64: Allow a capability to be checked on
single CPU") added this_cpu_has_cap() using the old desc style test.
CC: Suzuki K Poulose <suzuki.poulose@arm.com>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: James Morse <james.morse@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Tue, 2 Jan 2018 21:45:41 +0000 (21:45 +0000)]
drivers/firmware: Expose psci_get_version through psci_ops structure
commit
d68e3ba5303f upstream.
Entry into recent versions of ARM Trusted Firmware will invalidate the CPU
branch predictor state in order to protect against aliasing attacks.
This patch exposes the PSCI "VERSION" function via psci_ops, so that it
can be invoked outside of the PSCI driver where necessary.
Acked-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Mon, 5 Feb 2018 15:34:24 +0000 (15:34 +0000)]
arm64: futex: Mask __user pointers prior to dereference
commit
91b2d3442f6a upstream.
The arm64 futex code has some explicit dereferencing of user pointers
when performing atomic operations in response to a futex command. This
patch uses masking to limit any speculative futex operations to within
the user address space.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
change on old futex_atomic_op_inuser function instead of
arch_futex_atomic_op_inuser in arch/arm64/include/asm/futex.h
Will Deacon [Fri, 23 Feb 2018 12:29:00 +0000 (20:29 +0800)]
arm64: uaccess: Mask __user pointers for __arch_{clear, copy_*}_user
Rewritten from commit
f71c2ffcb20d upstream. LTS 4.9 has neither raw_copy_from/to_user
nor __copy_user_flushcache, and it isn't a good idea to pick them up.
The original commit log follows; it is also applicable to the new patch.
Like we've done for get_user and put_user, ensure that user pointers
are masked before invoking the underlying __arch_{clear,copy_*}_user
operations.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Mon, 5 Feb 2018 15:34:22 +0000 (15:34 +0000)]
arm64: uaccess: Don't bother eliding access_ok checks in __{get, put}_user
commit
84624087dd7e upstream.
access_ok isn't an expensive operation once the addr_limit for the current
thread has been loaded into the cache. Given that the initial access_ok
check preceding a sequence of __{get,put}_user operations will take
the brunt of the miss, we can make the __* variants identical to the
full-fat versions, which brings with it the benefits of address masking.
The likely cost in these sequences will be from toggling PAN/UAO, which
we can address later by implementing the *_unsafe versions.
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
keep __{get/put}_user_unaligned in arch/arm64/include/asm/uaccess.h
Will Deacon [Mon, 5 Feb 2018 15:34:21 +0000 (15:34 +0000)]
arm64: uaccess: Prevent speculative use of the current addr_limit
commit
c2f0ad4fc089 upstream.
A mispredicted conditional call to set_fs could result in the wrong
addr_limit being forwarded under speculation to a subsequent access_ok
check, potentially forming part of a spectre-v1 attack using uaccess
routines.
This patch prevents this forwarding from taking place by putting heavy
barriers in set_fs after writing the addr_limit.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no set_thread_flag(TIF_FSCHECK) in arch/arm64/include/asm/uaccess.h
Will Deacon [Mon, 5 Feb 2018 15:34:20 +0000 (15:34 +0000)]
arm64: entry: Ensure branch through syscall table is bounded under speculation
commit
6314d90e6493 upstream.
In a similar manner to array_index_mask_nospec, this patch introduces an
assembly macro (mask_nospec64) which can be used to bound a value under
speculation. This macro is then used to ensure that the indirect branch
through the syscall table is bounded under speculation, with out-of-range
addresses speculating as calls to sys_io_setup (0).
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
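mask_nospec64 itself is an assembly macro in entry.S; a conceptual C-level sketch of the same bounding idea (hypothetical helper name):

#include <linux/nospec.h>

/*
 * Conceptual sketch only: clamp the syscall number so that an out-of-range
 * value cannot steer the indirect branch through the syscall table under
 * speculation; out-of-range numbers collapse to 0 (sys_io_setup).
 */
static unsigned long bound_syscallno(unsigned long scno,
                                     unsigned long nr_syscalls)
{
        return scno & array_index_mask_nospec(scno, nr_syscalls);
}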
Dave Martin [Tue, 1 Aug 2017 14:35:53 +0000 (15:35 +0100)]
arm64: syscallno is secretly an int, make it official
commit
35d0e6fb4d upstream.
The upper 32 bits of the syscallno field in thread_struct are
handled inconsistently, being sometimes zero extended and sometimes
sign-extended. In fact, only the lower 32 bits seem to have any
real significance for the behaviour of the code: it's been OK to
handle the upper bits inconsistently because they don't matter.
Currently, the only place I can find where those bits are
significant is in calling trace_sys_enter(), which may be
unintentional: for example, if a compat tracer attempts to cancel a
syscall by passing -1 to (COMPAT_)PTRACE_SET_SYSCALL at the
syscall-enter-stop, it will be traced as syscall
4294967295
rather than -1 as might be expected (and as occurs for a native
tracer doing the same thing). Elsewhere, reads of syscallno cast
it to an int or truncate it.
There's also a conspicuous amount of code and casting to bodge
around the fact that although semantically an int, syscallno is
stored as a u64.
Let's not pretend any more.
In order to preserve the stp x instruction that stores the syscall
number in entry.S, this patch special-cases the layout of struct
pt_regs for big endian so that the newly 32-bit syscallno field
maps onto the low bits of the stored value. This is not beautiful,
but benchmarking of the getpid syscall on Juno indicates a
minor slowdown if the stp is split into an stp x and an stp w.
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Robin Murphy [Mon, 5 Feb 2018 15:34:19 +0000 (15:34 +0000)]
arm64: Use pointer masking to limit uaccess speculation
commit
4d8efc2d5ee4 upstream.
Similarly to x86, mitigate speculation past an access_ok() check by
masking the pointer against the address limit before use.
Even if we don't expect speculative writes per se, it is plausible that
a CPU may still speculate at least as far as fetching a cache line for
writing, hence we also harden put_user() and clear_user() for peace of
mind.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
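A branchless C sketch of the masking idea (hypothetical helper name); the kernel's actual helper is written in inline assembly and followed by a CSDB so the compiler cannot reintroduce a conditional branch:

static inline unsigned long mask_user_addr(unsigned long addr,
                                           unsigned long limit /* inclusive */)
{
        unsigned long ok   = (unsigned long)(addr <= limit);    /* 0 or 1 */
        unsigned long mask = -ok;                               /* 0 or ~0UL */

        return addr & mask;     /* out-of-range addresses collapse to 0 */
}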
Robin Murphy [Mon, 5 Feb 2018 15:34:18 +0000 (15:34 +0000)]
arm64: Make USER_DS an inclusive limit
commit
51369e398d0d upstream.
Currently, USER_DS represents an exclusive limit while KERNEL_DS is
inclusive. In order to do some clever trickery for speculation-safe
masking, we need them both to behave equivalently - there aren't enough
bits to make KERNEL_DS exclusive, so we have precisely one option. This
also happens to correct a longstanding false negative for a range
ending on the very top byte of kernel memory.
Mark Rutland points out that we've actually got the semantics of
addresses vs. segments muddled up in most of the places we need to
amend, so shuffle the {USER,KERNEL}_DS definitions around such that we
can correct those properly instead of just pasting "-1"s everywhere.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
(cherry picked from commit
83b20dff71ea949431cf57c6aebaaf7ebd5c1991)
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
force replace __range_ok and add asm/processor.h
in arch/arm64/include/asm/uaccess.h
using old macro TI_ADDR_LIMIT instead of TSK_TI_ADDR_LIMIT
in arch/arm64/kernel/entry.S
manual change USER_DS to TASK_SIZE in arch/arm64/mm/fault.c
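A sketch of what an access_ok()-style check against an inclusive limit looks like (hypothetical helper, written to avoid overflow even when the limit is ~0UL, as with KERNEL_DS):

#include <linux/types.h>

static inline bool example_range_ok(unsigned long addr, unsigned long size,
                                    unsigned long limit /* inclusive */)
{
        if (size == 0)
                return addr <= limit;
        /* addr + size - 1 <= limit, without risking wrap-around */
        return addr <= limit && (size - 1) <= (limit - addr);
}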
Mark Rutland [Tue, 7 Feb 2017 12:33:55 +0000 (12:33 +0000)]
arm64: uaccess: consistently check object sizes
commit
76624175dca upstream.
Currently in arm64's copy_{to,from}_user, we only check the
source/destination object size if access_ok() tells us the user access
is permissible.
However, in copy_from_user() we'll subsequently zero any remainder on
the destination object. If we failed the access_ok() check, that applies
to the whole object size, which we didn't check.
To ensure that we catch that case, this patch hoists check_object_size()
to the start of copy_from_user(), matching __copy_from_user() and
__copy_to_user(). To make all of our uaccess copy primitives consistent,
the same is done to copy_to_user().
Cc: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Fri, 1 Jul 2016 14:48:55 +0000 (15:48 +0100)]
arm64: Factor out TTBR0_EL1 post-update workaround into a specific asm macro
commit
f33bcf03e6 upstream
This patch takes the errata workaround code out of cpu_do_switch_mm into
a dedicated post_ttbr0_update_workaround macro which will be reused in a
subsequent patch.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Catalin Marinas [Fri, 1 Jul 2016 13:58:21 +0000 (14:58 +0100)]
arm64: Factor out PAN enabling/disabling into separate uaccess_* macros
commit
bd38967d406 upstream.
This patch moves the directly coded alternatives for turning PAN on/off
into separate uaccess_{enable,disable} macros or functions. The asm
macros take a few arguments which will be used in subsequent patches.
Note that any (unlikely) access that the compiler might generate between
uaccess_enable() and uaccess_disable(), other than those explicitly
specified by the user access code, will not be protected by PAN.
Cc: Will Deacon <will.deacon@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Yury Norov [Thu, 31 Aug 2017 08:30:50 +0000 (11:30 +0300)]
arm64: move TASK_* definitions to <asm/processor.h>
commit
eef94a3d09aab upstream.
ILP32 series [1] introduces the dependency on <asm/is_compat.h> for
TASK_SIZE macro. Which in turn requires <asm/thread_info.h>, and
<asm/thread_info.h> include <asm/memory.h>, giving a circular dependency,
because TASK_SIZE is currently located in <asm/memory.h>.
In other architectures, TASK_SIZE is defined in <asm/processor.h>, and
moving TASK_SIZE there fixes the problem.
Discussion: https://patchwork.kernel.org/patch/9929107/
[1] https://github.com/norov/linux/tree/ilp32-next
CC: Will Deacon <will.deacon@arm.com>
CC: Laura Abbott <labbott@redhat.com>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Yury Norov <ynorov@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no ptrace.h in arch/arm64/kernel/entry.S
Robin Murphy [Mon, 5 Feb 2018 15:34:17 +0000 (15:34 +0000)]
arm64: Implement array_index_mask_nospec()
commit
022620eed3d0 upstream.
Provide an optimised, assembly implementation of array_index_mask_nospec()
for arm64 so that the compiler is not in a position to transform the code
in ways which affect its ability to inhibit speculation (e.g. by introducing
conditional branches).
This is similar to the sequence used by x86, modulo architectural differences
in the carry/borrow flags.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
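For reference, the sequence is roughly a compare followed by a subtract-with-carry against the zero register, yielding an all-ones mask when the index is in range and zero otherwise, then a CSDB:

#include <asm/barrier.h>        /* csdb() */

static inline unsigned long array_index_mask_nospec(unsigned long idx,
                                                    unsigned long sz)
{
        unsigned long mask;

        asm volatile(
        "       cmp     %1, %2\n"               /* carry set iff idx >= sz */
        "       sbc     %0, xzr, xzr\n"         /* 0 - 0 - !carry: ~0UL or 0 */
        : "=r" (mask)
        : "r" (idx), "r" (sz)
        : "cc");

        csdb();
        return mask;
}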
Will Deacon [Mon, 5 Feb 2018 15:34:16 +0000 (15:34 +0000)]
arm64: barrier: Add CSDB macros to control data-value prediction
commit
669474e772b9 upstream.
For CPUs capable of data value prediction, CSDB waits for any outstanding
predictions to architecturally resolve before allowing speculative execution
to continue. Provide macros to expose it to the arch code.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/include/asm/assembler.h
no psb_csync in arch/arm64/include/asm/barrier.h
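CSDB lives in the hint instruction space, so on toolchains without the named mnemonic it can be emitted as a plain hint; roughly:

/* Roughly: CSDB is a hint-space instruction (HINT #20 on arm64). */
#define csdb()  asm volatile("hint #20" : : : "memory")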
Ard Biesheuvel [Thu, 9 Mar 2017 20:52:01 +0000 (21:52 +0100)]
arm64: alternatives: apply boot time fixups via the linear mapping
commit
5ea5306c323 upstream.
One important rule of thumb when designing a secure software system is
that memory should never be writable and executable at the same time.
We mostly adhere to this rule in the kernel, except at boot time, when
regions may be mapped RWX until after we are done applying alternatives
or making other one-off changes.
For the alternative patching, we can improve the situation by applying
the fixups via the linear mapping, which is never mapped with executable
permissions. So map the linear alias of .text with RW- permissions
initially, and remove the write permissions as soon as alternative
patching has completed.
Reviewed-by: Laura Abbott <labbott@redhat.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
replace update_mapping_prot with old create_mapping_late
arch/arm64/mm/mmu.c
Laura Abbott [Tue, 10 Jan 2017 21:35:42 +0000 (13:35 -0800)]
mm: Introduce lm_alias
commit
568c5fe5a54 upstream.
Certain architectures may have the kernel image mapped separately to
alias the linear map. Introduce a macro lm_alias to translate a kernel
image symbol into its linear alias. This is used in part with work to
add CONFIG_DEBUG_VIRTUAL support for arm64.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
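The macro itself is essentially a one-liner: take the physical address of a kernel-image symbol, then map it back through the linear mapping:

/* Translate a kernel-image symbol into its linear-map alias. */
#ifndef lm_alias
#define lm_alias(x)     __va(__pa_symbol(x))
#endif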
Will Deacon [Mon, 29 Jan 2018 12:00:00 +0000 (12:00 +0000)]
arm64: idmap: Use "awx" flags for .idmap.text .pushsection directives
commit
439e70e27a51 upstream.
The identity map is mapped as both writeable and executable by the
SWAPPER_MM_MMUFLAGS and this is relied upon by the kpti code to manage
a synchronisation flag. Update the .pushsection flags to reflect the
actual mapping attributes.
Reported-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Mon, 29 Jan 2018 11:59:58 +0000 (11:59 +0000)]
arm64: entry: Reword comment about post_ttbr_update_workaround
commit
f167211a93ac upstream.
We don't fully understand the Cavium ThunderX erratum, but it appears
that mapping the kernel as nG can lead to horrible consequences such as
attempting to execute userspace from kernel context. Since kpti isn't
enabled for these CPUs anyway, simplify the comment justifying the lack
of post_ttbr_update_workaround in the exception trampoline.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Marc Zyngier [Mon, 29 Jan 2018 11:59:56 +0000 (11:59 +0000)]
arm64: Force KPTI to be disabled on Cavium ThunderX
commit
6dc52b15c4a4 upstream.
Cavium ThunderX's erratum 27456 results in a corruption of icache
entries that are loaded from memory that is mapped as non-global
(i.e. ASID-tagged).
As KPTI is based on memory being mapped non-global, let's prevent
it from kicking in if this erratum is detected.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
[will: Update comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
using old function read_system_reg/cpus_have_cap to replace
read_sanitised_ftr_reg/cpus_have_const_cap in
arch/arm64/kernel/cpufeature.c
Will Deacon [Tue, 6 Feb 2018 22:22:50 +0000 (22:22 +0000)]
arm64: kpti: Add ->enable callback to remap swapper using nG mappings
commit
f992b4dfd58b upstream.
Defaulting to global mappings for kernel space is generally good for
performance and appears to be necessary for Cavium ThunderX. If we
subsequently decide that we need to enable kpti, then we need to rewrite
our existing page table entries to be non-global. This is fiddly, and
made worse by the possible use of contiguous mappings, which require
a strict break-before-make sequence.
Since the enable callback runs on each online CPU from stop_machine
context, we can have all CPUs enter the idmap, where secondaries can
wait for the primary CPU to rewrite swapper with its MMU off. It's all
fairly horrible, but at least it only runs once.
Nicolas Dechesne <nicolas.dechesne@linaro.org> found a bug in this commit
which caused a boot failure on db410c and similar boards. Ard Biesheuvel
found that it wrote the wrong content to ttbr1_el1 in the
__idmap_cpu_set_reserved_ttbr1 macro and fixed it by giving it the right
content.
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no get_thread_info/post_ttbr_update_workaround/pre_disable_mmu_workaround
in arch/arm64/include/asm/assembler.h and arch/arm64/mm/proc.S
Will Deacon [Mon, 29 Jan 2018 11:59:53 +0000 (11:59 +0000)]
arm64: kpti: Make use of nG dependent on arm64_kernel_unmapped_at_el0()
commit
41acec624087 upstream.
To allow systems which do not require kpti to continue running with
global kernel mappings (which appears to be a requirement for Cavium
ThunderX due to a CPU erratum), make the use of nG in the kernel page
tables dependent on arm64_kernel_unmapped_at_el0(), which is resolved
at runtime.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Jayachandran C [Fri, 19 Jan 2018 12:22:48 +0000 (04:22 -0800)]
arm64: Turn on KPTI only on CPUs that need it
commit
0ba2e29c7fc1 upstream.
Whitelist Broadcom Vulcan/Cavium ThunderX2 processors in
unmap_kernel_at_el0(). These CPUs are not vulnerable to
CVE-2017-5754 and do not need KPTI when KASLR is off.
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Jayachandran C [Mon, 8 Jan 2018 06:53:35 +0000 (22:53 -0800)]
arm64: cputype: Add MIDR values for Cavium ThunderX2 CPUs
commit
0d90718871fe upstream.
Add the older Broadcom ID as well as the new Cavium ID for ThunderX2
CPUs.
Signed-off-by: Jayachandran C <jnair@caviumnetworks.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no falkor support in arch/arm64/include/asm/cputype.h
Will Deacon [Wed, 3 Jan 2018 11:19:34 +0000 (11:19 +0000)]
arm64: cputype: Add missing MIDR values for Cortex-A72 and Cortex-A75
commit
f0be3364335d47267aa1f7c5ed5faaa59c70db13 upstream
Hook up MIDR values for the Cortex-A72 and Cortex-A75 CPUs, since they
will soon need MIDR matches for hardening the branch predictor.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
add A73 type in arch/arm64/include/asm/cputype.h
Suzuki K Poulose [Tue, 9 Jan 2018 16:12:18 +0000 (16:12 +0000)]
arm64: capabilities: Handle duplicate entries for a capability
commit
67948af41f2e upstream.
Sometimes a single capability could be listed multiple times with
differing matches(), e.g., CPU errata for different MIDR versions.
This breaks verify_local_cpu_feature() and this_cpu_has_cap() as
we stop checking for a capability on a CPU with the first
entry in the given table, which is not sufficient. Make sure we
run the checks for all entries of the same capability. We do
this by fixing __this_cpu_has_cap() to run through all the
entries in the given table for a match and reuse it for
verify_local_cpu_feature().
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
arch/arm64/kernel/cpufeature.c
Marc Zyngier [Mon, 30 Jan 2017 15:39:52 +0000 (15:39 +0000)]
arm64: Allow checking of a CPU-local erratum
commit
8f4137588261d7504f4aa022dc9d1a1fd1940e8e upstream.
this_cpu_has_cap() only checks the feature array, and not the errata
one. In order to be able to check for a CPU-local erratum, allow it
to inspect the latter as well.
This is consistent with cpus_have_cap()'s behaviour, which includes
errata already.
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Mon, 27 Nov 2017 18:29:30 +0000 (18:29 +0000)]
arm64: Take into account ID_AA64PFR0_EL1.CSV3
commit
179a56f6f9fb upstream.
For non-KASLR kernels where the KPTI behaviour has not been overridden
on the command line we can use ID_AA64PFR0_EL1.CSV3 to determine whether
or not we should unmap the kernel whilst running at EL0.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
skip cpu features like SVE etc.
and use the 5-parameter ARM64_FTR_BITS()
replace read_sanitised_ftr_reg with old name read_system_reg
arch/arm64/include/asm/sysreg.h
arch/arm64/kernel/cpufeature.c
Will Deacon [Tue, 14 Nov 2017 16:19:39 +0000 (16:19 +0000)]
arm64: Kconfig: Reword UNMAP_KERNEL_AT_EL0 kconfig entry
commit
0617052ddde3 upstream.
Although CONFIG_UNMAP_KERNEL_AT_EL0 does make KASLR more robust, it's
actually more useful as a mitigation against speculation attacks that
can leak arbitrary kernel data to userspace through speculation.
Reword the Kconfig help message to reflect this, and make the option
depend on EXPERT so that it is on by default for the majority of users.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Tue, 14 Nov 2017 14:41:01 +0000 (14:41 +0000)]
arm64: Kconfig: Add CONFIG_UNMAP_KERNEL_AT_EL0
commit
084eb77cd3a8 upstream.
Add a Kconfig entry to control use of the entry trampoline, which allows
us to unmap the kernel whilst running in userspace and improve the
robustness of KASLR.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Tue, 14 Nov 2017 16:15:59 +0000 (16:15 +0000)]
arm64: use RET instruction for exiting the trampoline
commit
be04a6d1126b upstream.
Speculation attacks against the entry trampoline can potentially resteer
the speculative instruction stream through the indirect branch and into
arbitrary gadgets within the kernel.
This patch defends against these attacks by forcing a misprediction
through the return stack: a dummy BL instruction loads an entry into
the stack, so that the predicted program flow of the subsequent RET
instruction is to a branch-to-self instruction which is finally resolved
as a branch to the kernel vectors with speculation suppressed.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Wed, 6 Dec 2017 11:24:02 +0000 (11:24 +0000)]
arm64: kaslr: Put kernel vectors address in separate data page
commit
6c27c4082f4f upstream.
The literal pool entry for identifying the vectors base is the only piece
of information in the trampoline page that identifies the true location
of the kernel.
This patch moves it into a page-aligned region of the .rodata section
and maps this adjacent to the trampoline text via an additional fixmap
entry, which protects against any accidental leakage of the trampoline
contents.
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
remove ARM64_WORKAROUND_QCOM_FALKOR_E1003 fix
in arch/arm64/kernel/entry.S
Will Deacon [Tue, 14 Nov 2017 14:38:19 +0000 (14:38 +0000)]
arm64: entry: Add fake CPU feature for unmapping the kernel at EL0
commit
ea1e3de85e94 upstream.
Allow explicit disabling of the entry trampoline on the kernel command
line (kpti=off) by adding a fake CPU feature (ARM64_UNMAP_KERNEL_AT_EL0)
that can be used to toggle the alternative sequences in our entry code and
avoid use of the trampoline altogether if desired. This also allows us to
make use of a static key in arm64_kernel_unmapped_at_el0().
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
skip non enabled cpu features in
arch/arm64/include/asm/cpucaps.h and
arch/arm64/kernel/cpufeature.c
using cpus_have_cap instead of cpus_have_const_cap in
arch/arm64/include/asm/mmu.h
Will Deacon [Tue, 14 Nov 2017 14:33:28 +0000 (14:33 +0000)]
arm64: tls: Avoid unconditional zeroing of tpidrro_el0 for native tasks
commit
18011eac28c7 upstream.
When unmapping the kernel at EL0, we use tpidrro_el0 as a scratch register
during exception entry from native tasks and subsequently zero it in
the kernel_ventry macro. We can therefore avoid zeroing tpidrro_el0
in the context-switch path for native tasks using the entry trampoline.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
fold tls_preserve_current_state() in arch/arm64/kernel/process.c
Will Deacon [Tue, 14 Nov 2017 14:24:29 +0000 (14:24 +0000)]
arm64: entry: Hook up entry trampoline to exception vectors
commit
4bf3286d29f3 upstream.
Hook up the entry trampoline to our exception vectors so that all
exceptions from and returns to EL0 go via the trampoline, which swizzles
the vector base register accordingly. Transitioning to and from the
kernel clobbers x30, so we use tpidrro_el0 and far_el1 as scratch
registers for native tasks.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Tue, 14 Nov 2017 14:20:21 +0000 (14:20 +0000)]
arm64: entry: Explicitly pass exception level to kernel_ventry macro
commit
5b1f7fe41909 upstream.
We will need to treat exceptions from EL0 differently in kernel_ventry,
so rework the macro to take the exception level as an argument and
construct the branch target using that.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no vmap_stack in arch/arm64/kernel/entry.S
Will Deacon [Tue, 14 Nov 2017 14:14:17 +0000 (14:14 +0000)]
arm64: mm: Map entry trampoline into trampoline and kernel page tables
commit
51a0048beb44 upstream.
The exception entry trampoline needs to be mapped at the same virtual
address in both the trampoline page table (which maps nothing else)
and also the kernel page table, so that we can swizzle TTBR1_EL1 on
exceptions from and return to EL0.
This patch maps the trampoline at a fixed virtual address in the fixmap
area of the kernel virtual address space, which allows the kernel proper
to be randomized with respect to the trampoline when KASLR is enabled.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no acpi apei in arch/arm64/include/asm/fixmap.h
no rodata in arch/arm64/mm/mmu.c
Will Deacon [Tue, 14 Nov 2017 14:07:40 +0000 (14:07 +0000)]
arm64: entry: Add exception trampoline page for exceptions from EL0
commit
c7b9adaf85f8 upstream.
To allow unmapping of the kernel whilst running at EL0, we need to
point the exception vectors at an entry trampoline that can map/unmap
the kernel on entry/exit respectively.
This patch adds the trampoline page, although it is not yet plugged
into the vector table and is therefore unused.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
add asm/mmu.h in entry.S for ASID marco
add kernel-pgtable.h in entry.S for SWAPPER_DIR_SIZE and
RESERVED_TTBR0_SIZE
no SW PAN in vmlinux.lds.S
AKASHI Takahiro [Mon, 14 Nov 2016 06:15:05 +0000 (15:15 +0900)]
module: extend 'rodata=off' boot cmdline parameter to module mappings
commit
39290b389ea upstream.
The current "rodata=off" parameter disables read-only kernel mappings
under CONFIG_DEBUG_RODATA:
commit
d2aa1acad22f ("mm/init: Add 'rodata=off' boot cmdline parameter
to disable read-only kernel mappings")
This patch is a logical extension to module mappings, i.e. read-only mappings
at module loading can be disabled even if CONFIG_DEBUG_SET_MODULE_RONX
(mainly for debug use). Please note, however, that it only affects RO/RW
permissions, keeping NX set.
This is the first step to make CONFIG_DEBUG_SET_MODULE_RONX mandatory
(always-on) in the future as CONFIG_DEBUG_RODATA on x86 and arm64.
Suggested-by: and Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Acked-by: Rusty Russell <rusty@rustcorp.com.au>
Link: http://lkml.kernel.org/r/20161114061505.15238-1-takahiro.akashi@linaro.org
Signed-off-by: Jessica Yu <jeyu@redhat.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
keeping kaiser.h in init/main.c
Xie XiuQi [Thu, 2 Nov 2017 12:12:42 +0000 (12:12 +0000)]
arm64: entry.S: move SError handling into a C function for future expansion
commit
a92d4d1454ab upstream.
Today SError is taken using the inv_entry macro that ends up in
bad_mode.
SError can be used by the RAS Extensions to notify either the OS or
firmware of CPU problems, some of which may have been corrected.
To allow this handling to be added, add a do_serror() C function
that just panic()s. Add the entry.S boiler plate to save/restore the
CPU registers and unmask debug exceptions. Future patches may change
do_serror() to return if the SError Interrupt was notification of a
corrected error.
Signed-off-by: Xie XiuQi <xiexiuqi@huawei.com>
Signed-off-by: Wang Xiongfeng <wangxiongfengi2@huawei.com>
[Split out of a bigger patch, added compat path, renamed, enabled debug
exceptions]
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no vmap_stack in arch/arm64/kernel/traps.c
using old enable_dbg_and_irq instead of enable_daif in
arch/arm64/kernel/entry.S
Mark Rutland [Wed, 19 Jul 2017 16:24:49 +0000 (17:24 +0100)]
arm64: factor out entry stack manipulation
commit
b11e5759bfac upstream.
In subsequent patches, we will detect stack overflow in our exception
entry code, by verifying the SP after it has been decremented to make
space for the exception regs.
This verification code is small, and we can minimize its impact by
placing it directly in the vectors. To avoid redundant modification of
the SP, we also need to move the initial decrement of the SP into the
vectors.
As a preparatory step, this patch introduces kernel_ventry, which
performs this decrement, and updates the entry code accordingly.
Subsequent patches will fold SP verification into kernel_ventry.
There should be no functional change as a result of this patch.
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[Mark: turn into prep patch, expand commit msg]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Will Deacon [Thu, 10 Aug 2017 13:13:33 +0000 (14:13 +0100)]
arm64: mm: Invalidate both kernel and user ASIDs when performing TLBI
commit
9b0de864b5bc upstream.
Since an mm has both a kernel and a user ASID, we need to ensure that
broadcast TLB maintenance targets both address spaces so that things
like CoW continue to work with the uaccess primitives in the kernel.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
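Roughly, this is done with a companion macro that repeats each TLBI with the user ASID (kernel ASID | 1) whenever the kernel is unmapped at EL0:

/* Rough shape of the companion macro; only active when kpti is in effect. */
#define __tlbi_user(op, arg) do {                               \
        if (arm64_kernel_unmapped_at_el0())                     \
                __tlbi(op, (arg) | USER_ASID_FLAG);             \
} while (0)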
Will Deacon [Tue, 14 Nov 2017 13:58:08 +0000 (13:58 +0000)]
arm64: mm: Add arm64_kernel_unmapped_at_el0 helper
commit
fc0e1299da54 upstream.
In order for code such as TLB invalidation to operate efficiently when
the decision to map the kernel at EL0 is determined at runtime, this
patch introduces a helper function, arm64_kernel_unmapped_at_el0, to
determine whether or not the kernel is mapped whilst running in userspace.
Currently, this just reports the value of CONFIG_UNMAP_KERNEL_AT_EL0,
but will later be hooked up to a fake CPU capability using a static key.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
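At this stage the helper is essentially a one-liner; a sketch (not a verbatim copy of the patch) looks like this:

#include <linux/kconfig.h>
#include <linux/types.h>

/* compile-time answer for now; later hooked up to a static-key backed
 * CPU capability, as the commit message says */
static inline bool arm64_kernel_unmapped_at_el0(void)
{
        return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0);
}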
Will Deacon [Thu, 10 Aug 2017 13:10:28 +0000 (14:10 +0100)]
arm64: mm: Allocate ASIDs in pairs
commit
0c8ea531b774 upstream.
In preparation for separate kernel/user ASIDs, allocate them in pairs
for each mm_struct. The bottom bit distinguishes the two: if it is set,
then the ASID will map only userspace.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no MMCF_AARCH32 in arch/arm64/include/asm/mmu.h
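Illustratively (names and sizes below are hypothetical), pair allocation means handing out even/odd ASID values together, with the odd one reserved for the userspace view of the mm:

/* sketch: the kernel ASID is the even value, the user ASID is the same
 * value with the bottom bit set */
#define ASID_PAIR_COUNT (1UL << 15)     /* hypothetical: half of a 16-bit ASID space */

static unsigned long next_pair;

static void alloc_asid_pair(unsigned long *kernel_asid, unsigned long *user_asid)
{
        unsigned long pair = next_pair++ % ASID_PAIR_COUNT;

        *kernel_asid = pair << 1;               /* bottom bit clear */
        *user_asid   = (pair << 1) | 1;         /* bottom bit set: userspace only */
}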
Will Deacon [Thu, 10 Aug 2017 12:19:09 +0000 (13:19 +0100)]
arm64: mm: Move ASID from TTBR0 to TTBR1
commit
7655abb95386 upstream.
In preparation for mapping kernelspace and userspace with different
ASIDs, move the ASID to TTBR1 and update switch_mm to context-switch
TTBR0 via an invalid mapping (the zero page).
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
no pre_ttbr0_update_workaround in arch/arm64/mm/proc.S
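Conceptually (field positions per the ARMv8 ARM, helper name hypothetical, and ignoring the zero-page trick used while no user tables are installed), the switch now programs the ASID into TTBR1_EL1 and the page tables into TTBR0_EL1; the real code lives in assembly in proc.S.

#include <linux/bitops.h>
#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/sysreg.h>

#define TTBR_ASID_MASK  GENMASK_ULL(63, 48)

static void switch_asid_and_ttbr0(phys_addr_t new_pgd_phys, u16 asid)
{
        u64 ttbr1 = read_sysreg(ttbr1_el1) & ~TTBR_ASID_MASK;

        write_sysreg(ttbr1 | ((u64)asid << 48), ttbr1_el1);     /* ASID in TTBR1 */
        isb();
        write_sysreg(new_pgd_phys, ttbr0_el1);                  /* user tables in TTBR0 */
        isb();
}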
Will Deacon [Thu, 10 Aug 2017 11:56:18 +0000 (12:56 +0100)]
arm64: mm: Use non-global mappings for kernel space
commit
e046eb0c9bf2 upstream.
In preparation for unmapping the kernel whilst running in userspace,
make the kernel mappings non-global so we can avoid expensive TLB
invalidation on kernel exit to userspace.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Tested-by: Shanker Donthineni <shankerd@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Alex Shi <alex.shi@linaro.org>
Conflicts:
skip PTE_RDONLY of PAGE_NONE in arch/arm64/include/asm/pgtable-prot.h
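The gist (macro values schematic, not the exact defines touched by the patch) is to add the nG bit to the kernel's default page protections whenever the kernel can be unmapped at EL0:

#include <asm/pgtable-hwdef.h>

/* schematic only: tag kernel mappings with nG so they are per-ASID and no
 * broadcast TLB invalidation is needed on the return to userspace */
#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
#define PROT_DEFAULT    (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED | PTE_NG)
#else
#define PROT_DEFAULT    (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
#endif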
Laurentiu Palcu [Sun, 25 Mar 2018 13:41:52 +0000 (08:41 -0500)]
MLK-17703-10: drm: imx: dcss: remove unused variable warning
The dcss_crtc variable was no longer used and generated a compilation
warning. Remove it.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 22:36:33 +0000 (17:36 -0500)]
MLK-17703-9: drm: imx: dcss: align input and output pipe gamut and nonlinearity
For better results, output and input pipe gamut and nonlinearity should
match.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 22:34:55 +0000 (17:34 -0500)]
MLK-17703-8: drm: imx: dcss: default output pipe gamut to REC709
In case we don't get any colorimetry information from the HDMI sink,
set the default gamut to REC709.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 13:25:05 +0000 (08:25 -0500)]
MLK-17703-7: drm: imx: dcss: configure output pipe according to what sink supports
The output pipe tables' configuration was hardcoded. This patch allows the
output pipe to be configured according to what the sink supports.
Also, since there's no way to pass gamut and nonlinearity settings from
userspace, configure the input pipe as REC2020/REC2084.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 13:23:11 +0000 (08:23 -0500)]
MLK-17703-6: drm: imx: dcss: fix output colorimetry in crtc
The detection of the supported output colorimetry was wrong. This patch
fixes that and also drops the REC2100HLG EOTF setting for now, since it
produces bad colors.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 12:48:13 +0000 (07:48 -0500)]
MLK-17703-5: drm: imx: dcss: ignore the 8 bit for input pipe
Since the input of HDR10 is always 10-bit, ignore 8-bit flags when
setting up the output pipe.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Laurentiu Palcu [Fri, 23 Mar 2018 12:46:48 +0000 (07:46 -0500)]
MLK-17703-4: drm: imx: dcss: return the hdr10 table at once
Don't go through the rest of the list if we found our table. Just return it
immediately.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
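The change is the classic early return from a table lookup; generically (struct and names are illustrative, not the dcss code):

#include <linux/types.h>

struct hdr10_tbl {
        u32 key;
        const void *data;
};

static const struct hdr10_tbl *find_hdr10_tbl(const struct hdr10_tbl *tbls,
                                              unsigned int n, u32 key)
{
        unsigned int i;

        for (i = 0; i < n; i++)
                if (tbls[i].key == key)
                        return &tbls[i];        /* found it: return at once */

        return NULL;                            /* no match in the whole list */
}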
Laurentiu Palcu [Thu, 22 Mar 2018 23:10:28 +0000 (18:10 -0500)]
MLK-17703-3: drm: imx: hdp: send the right colorimetry to the sink
Currently, the colorimetry is hardcoded to NONE. However, a sink may support
different colorimetry types. This patch allows the colorimetry to be set
according to what the sink supports.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
CC: Sandor Yu <sandor.yu@nxp.com>
Laurentiu Palcu [Thu, 22 Mar 2018 22:08:10 +0000 (17:08 -0500)]
MLK-17703-2: drm: change HDR metadata infoframe structure
According to ANSI-CTA-861-G specification:
* EOTF is 8 bit, not 16;
* metadata type is 8 bit, not 16;
* There's no "Minimum Content Light Level"
This patch changes the HDR metadata structures to reflect that. It also
fixes problems seen on some TVs that were rejecting the HDR metadata because
its size was too big (more than 26 bytes).
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
CC: Sandor Yu <sandor.yu@nxp.com>
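For reference, the field widths called out above correspond to a Static Metadata Descriptor Type 1 layout along these lines (struct name hypothetical); the payload then totals 26 bytes:

#include <linux/compiler.h>
#include <linux/types.h>

struct hdr_static_metadata_sketch {
        u8 eotf;                        /* 8 bits, not 16 */
        u8 metadata_type;               /* 8 bits, not 16 */
        struct {
                u16 x, y;
        } display_primaries[3];
        struct {
                u16 x, y;
        } white_point;
        u16 max_display_mastering_luminance;
        u16 min_display_mastering_luminance;
        u16 max_cll;
        u16 max_fall;                   /* no "Minimum Content Light Level" */
} __packed;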
Laurentiu Palcu [Thu, 22 Mar 2018 20:53:24 +0000 (15:53 -0500)]
MLK-17703-1: drm: imx: dcss: update HDR10 tables
The old tables had incorrect CSCBs when YUV formats were used. That's
because the application used to generate the tables always assumed channel 0
was graphics, even when it was configured as YUV.
Signed-off-by: Laurentiu Palcu <laurentiu.palcu@nxp.com>
Haibo Chen [Sun, 11 Feb 2018 11:07:22 +0000 (19:07 +0800)]
MLK-17586-4 ARM: dts: improve usdhc root clock rate
Confirmed with IC: the HS400 maximum clock frequency for instance 0 is 198MHz
and for instance 1 is 192MHz, so set the usdhc parent clock to 396MHz. But
the current APLL is configured to 529.2MHz; using the formula
APLL_PFD clock = APLL * 18 / i, the nearest clock is 381.024MHz when i is 25,
so the usdhc root clock is 190.512MHz.
But eMMC HS400 can't pass the stress test at 190.512MHz; it sometimes hits
CRC errors, and only passes the stress test when lowered to 176.4MHz.
This patch makes the usdhc0 and usdhc1 root clocks both source from
IMX7ULP_CLK_APLL_PFD1, sets the APLL_PFD1 clock rate to 352.8MHz, sets the
USDHC0 root clock to 352.8MHz, and sets the USDHC1 root clock to 176.4MHz.
Also remove the clk_prepare_enable() and clk_disable_unprepare() for
APLL_PFD2, because U-Boot already gates off APLL_PFD1, so there is no need
to do this again.
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
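To make the arithmetic explicit (helper name hypothetical), the PFD rate follows directly from the quoted formula:

#include <linux/math64.h>

/* APLL_PFD clock = APLL * 18 / frac */
static unsigned long apll_pfd_rate(u64 apll_hz, u32 frac)
{
        return div_u64(apll_hz * 18, frac);
}

/* 529200000 * 18 / 25 = 381024000 (381.024 MHz) -> usdhc root 190.512 MHz
 * 529200000 * 18 / 27 = 352800000 (352.800 MHz) -> USDHC1 root 176.4 MHz */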
Haibo Chen [Sun, 11 Feb 2018 08:50:26 +0000 (16:50 +0800)]
MLK-17586-2 mmc: add HS400 support for iMX7ULP
Add HS400 support for i.MX7ULP B0.
As suggested by the IC team, STROBE_DLL_CTRL_RESET needs to be cleared
before any other setting of the STROBE_DLL_CTRL register.
The USDHC has register bits (bits [27:20] of STROBE_DLL_CTRL) for the
slave sel value. If these bits are 0, it takes 256 ref_clk cycles to
update the slave sel value. The IC team suggests setting bits [27:20]
to 0x4, so only 4 ref_clk cycles are needed, which shortens the slave
lock time.
i.MX7ULP B0 needs more time to lock the REF and SLV, so add another
5us delay.
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
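The ordering described above can be sketched as follows; the register offset and bit names are placeholders, not the actual sdhci-esdhc-imx defines:

#include <linux/bitops.h>
#include <linux/delay.h>
#include <linux/io.h>

#define STROBE_DLL_CTRL                 0x70            /* placeholder offset */
#define STROBE_DLL_CTRL_ENABLE          BIT(0)
#define STROBE_DLL_CTRL_SLV_UPDATE(x)   (((x) & 0xff) << 20)    /* bits [27:20] */

static void strobe_dll_setup_sketch(void __iomem *base)
{
        /* 1. clear STROBE_DLL_CTRL (including the RESET bit) before any setting */
        writel(0, base + STROBE_DLL_CTRL);

        /* 2. enable the DLL with a slave sel update interval of 4 ref_clk cycles */
        writel(STROBE_DLL_CTRL_ENABLE | STROBE_DLL_CTRL_SLV_UPDATE(4),
               base + STROBE_DLL_CTRL);

        /* 3. B0 silicon needs extra time to lock REF and SLV */
        udelay(5);
}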
Haibo Chen [Sun, 11 Feb 2018 08:27:41 +0000 (16:27 +0800)]
MLK-17586-1 ARM64: dts: imx7ulp-evk: add eMMC HS200 support for B0 chip
The USDHC internal data-handling bug is already fixed on i.MX7ULP B0, so add
HS200 support first.
To make HS200 work on the i.MX7ULP REV A3 board, the following rework is
needed; otherwise switching to HS200 always fails, because the voltage change
makes the eMMC unstable. The rework fixes the eMMC I/O voltage at 1.8V, in
line with the MMC spec:
1. remove the TF/SD slot, replace the eMMC chip
2. fix the eMMC I/O voltage at 1.8V: remove R183, short TP3 and TP89
3. add R107 to make eMMC boot work
The i.MX7ULP REV B1 board does not need this rework, as it already fixes the
eMMC I/O voltage at 1.8V.
Acked-by: Dong Aisheng <aisheng.dong@nxp.com>
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
nxa13443 [Mon, 26 Mar 2018 08:13:34 +0000 (16:13 +0800)]
MLK-17912 [IMX8QXP B0] ENABLE SEEK for DECODER on IMX8QXP B0 board
Modify seek for vpu decoder on B0
Signed-off-by: nxa13443 <chaofan.huang@nxp.com>
Guoniu.Zhou [Fri, 23 Mar 2018 03:41:49 +0000 (11:41 +0800)]
MLK-17487: pxp: fix pxp yuv to yuv generate color dots issue
When PxP converts YUYV to the NV12 format, some color dots are introduced
into the output image. The IC team recommends that the YCBCR_MODE and
BYPASS bits of CSC1_COEF0 be set to 1.
Reviewed-by: robby.cai <robby.cai@nxp.com>
Signed-off-by: Guoniu.Zhou <guoniu.zhou@nxp.com>
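In register terms the fix amounts to something like this; the offset and bit positions below are placeholders taken from the description, not verified against the PxP reference manual:

#include <linux/bitops.h>
#include <linux/io.h>

#define CSC1_COEF0              0x01a0          /* placeholder offset */
#define CSC1_COEF0_YCBCR_MODE   BIT(31)         /* placeholder bit position */
#define CSC1_COEF0_BYPASS       BIT(30)         /* placeholder bit position */

/* YUV-to-YUV path: set both YCBCR_MODE and BYPASS to 1, as recommended */
static void pxp_csc1_yuv_to_yuv(void __iomem *pxp_base)
{
        u32 val = readl(pxp_base + CSC1_COEF0);

        val |= CSC1_COEF0_YCBCR_MODE | CSC1_COEF0_BYPASS;
        writel(val, pxp_base + CSC1_COEF0);
}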
Liu Ying [Thu, 22 Mar 2018 03:43:46 +0000 (11:43 +0800)]
MLK-17889 drm/imx: dpu: crtc: Enable irqs before HWs are triggered in ->enable
We should enable irqs before the HW units are triggered in ->enable and then
wait for the shadow loads to complete; otherwise we could miss the irqs if
they arrive right after the triggers, although that is not very likely to
happen.
Signed-off-by: Liu Ying <victor.liu@nxp.com>
Peng Fan [Wed, 21 Mar 2018 08:08:08 +0000 (16:08 +0800)]
MLK-17878 ARM64: defconfig: built-in xen block back driver
Build in the xen block backend driver to avoid having to insmod
xen-blkback.ko when using xvda from a DomU.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Acked-by: Leonard Crestez <leonard.crestez@nxp.com>
nxa13443 [Fri, 23 Mar 2018 12:20:37 +0000 (20:20 +0800)]
MLK-17902 [IMX8QXP B0]VPU ENCODER and DECODER on IMX8QXP B0 board
Add the VPU decoder and encoder for the i.MX8QXP B0 board.
The decoder supports H265, H264, MPEG2, MPEG4, H263, etc.
The encoder supports H264.
Signed-off-by: nxa13443 <chaofan.huang@nxp.com>
Fugang Duan [Mon, 19 Mar 2018 09:45:42 +0000 (17:45 +0800)]
MLK-17837-03 ARM: imx_v7_defconfig: enable rpmsg input for i.MX7ULP
Enable the rpmsg input config for i.MX7ULP.
Reviewed-by: Elven Wang <elven.wang@nxp.com>
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Fugang Duan [Mon, 19 Mar 2018 09:42:57 +0000 (17:42 +0800)]
MLK-17837-02 dts: imx7ulp-evk: add rpmsg sensor support
Enable rpmsg input device (sensor) support for the i.MX7ULP B0
EVK board.
Reviewed-by: Elven Wang <elven.wang@nxp.com>
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
Fugang Duan [Fri, 16 Mar 2018 06:07:05 +0000 (14:07 +0800)]
MLK-17837-01 input: misc: rpmsg_input: add rpmsg virtual sensor driver
On NXP i.MX7ULP EVK boards all sensors are connected to the M4 core, so the
A core has to communicate with the sensors over a virtual I/O bus such as
the rpmsg bus.
The driver implements a virtual sensor input driver to configure the sensors'
active/idle/delay actions and report the sensors' events to user space.
The following sysfs entries are supplied for the user to enable/disable the
detector and counter and to set the poll delay:
/sys/class/misc/step_counter/enable
/sys/class/misc/step_detector/enable
/sys/class/misc/step_counter/poll_delay
Reviewed-by: Elven Wang <elven.wang@nxp.com>
Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
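As a sketch of one of those controls (attribute registration and the actual rpmsg command are omitted, names are hypothetical), the enable node boils down to a standard device attribute:

#include <linux/device.h>
#include <linux/kernel.h>

static bool step_detector_enabled;

static ssize_t enable_show(struct device *dev, struct device_attribute *attr,
                           char *buf)
{
        return sprintf(buf, "%d\n", step_detector_enabled);
}

static ssize_t enable_store(struct device *dev, struct device_attribute *attr,
                            const char *buf, size_t count)
{
        bool on;

        if (kstrtobool(buf, &on))
                return -EINVAL;

        step_detector_enabled = on;     /* real driver: send active/idle over rpmsg */
        return count;
}
static DEVICE_ATTR_RW(enable);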
Mirela Rabulea [Tue, 20 Mar 2018 11:01:25 +0000 (13:01 +0200)]
MLK-17684: drm/bridge: nwl-dsi: Propagate DSI format to the attached panel/bridge
Signed-off-by: Mirela Rabulea <mirela.rabulea@nxp.com>
Mirela Rabulea [Mon, 19 Mar 2018 13:25:46 +0000 (15:25 +0200)]
MLK-17543-1: drm/mxsfb: Signal mode changed when bpp changed
Add mxsfb_atomic_helper_check to signal a mode change when the bpp changes.
This triggers the execution of disable/enable on a modeset with a bpp
different from the current one.
Signed-off-by: Mirela Rabulea <mirela.rabulea@nxp.com>
Peng Fan [Thu, 22 Mar 2018 03:06:36 +0000 (11:06 +0800)]
MLK-17788 soc: imx: ipc: not abort when set wake error
When irq_set_irq_wake() fails, it means the irq has no wakeup capability;
showing the error message is enough, and there is no need to abort and stop
the kernel.
For Xen we currently do not support suspend/resume and there is no wakeup
interrupt controller support yet, so the "return err" needs to be removed to
avoid stopping the kernel.
Signed-off-by: Peng Fan <peng.fan@nxp.com>
Reviewed-by: Anson Huang <Anson.Huang@nxp.com>
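The resulting pattern is roughly the following (function and variable names hypothetical): warn on failure, but never fail the probe.

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static void imx_ipc_enable_irq_wake(struct platform_device *pdev, int irq)
{
        int err = irq_set_irq_wake(irq, 1);

        if (err)
                dev_warn(&pdev->dev,
                         "irq %d has no wakeup capability (%d), continuing\n",
                         irq, err);
        /* deliberately no "return err" here: probe must not abort on this */
}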