
Commit 22f5538

Qais Yousef authored and Marc Zyngier committed
KVM: arm64: Handle Asymmetric AArch32 systems
On a system without uniform support for AArch32 at EL0, it is possible for the guest to force run AArch32 at EL0 and potentially cause an illegal exception if running on a core without AArch32. Add an extra check so that if we catch the guest doing that, we prevent it from running again by resetting vcpu->arch.target and returning ARM_EXCEPTION_IL.

We try to catch this misbehaviour as early as possible and not rely on an illegal exception occurring to signal the problem. Attempting to run a 32bit app in the guest will produce an error from QEMU if the guest exits while running in AArch32 EL0.

Tested on Juno by instrumenting the host to fake asym aarch32 and instrumenting KVM to make the asymmetry visible to the guest.

[will: Incorporated feedback from Marc]

Signed-off-by: Qais Yousef <qais.yousef@arm.com>
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20201021104611.2744565-2-qais.yousef@arm.com
Link: https://lore.kernel.org/r/20201027215118.27003-2-will@kernel.org
1 parent d86de40 commit 22f5538

1 file changed

Lines changed: 19 additions & 0 deletions

File tree

arch/arm64/kvm/arm.c

@@ -808,6 +808,25 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)

 		preempt_enable();

+		/*
+		 * The ARMv8 architecture doesn't give the hypervisor
+		 * a mechanism to prevent a guest from dropping to AArch32 EL0
+		 * if implemented by the CPU. If we spot the guest in such
+		 * state and that we decided it wasn't supposed to do so (like
+		 * with the asymmetric AArch32 case), return to userspace with
+		 * a fatal error.
+		 */
+		if (!system_supports_32bit_el0() && vcpu_mode_is_32bit(vcpu)) {
+			/*
+			 * As we have caught the guest red-handed, decide that
+			 * it isn't fit for purpose anymore by making the vcpu
+			 * invalid. The VMM can try and fix it by issuing a
+			 * KVM_ARM_VCPU_INIT if it really wants to.
+			 */
+			vcpu->arch.target = -1;
+			ret = ARM_EXCEPTION_IL;
+		}
+
 		ret = handle_exit(vcpu, ret);
 	}
