i386/cpu: Prevent delivering SIPI during SMM in TCG mode

[commit message by YiFei Zhu]

A malicious guest kernel can take control of the instruction pointer in
SMM on a multi-processor VM by sending a sequence of IPIs via the APIC:

CPU0			CPU1
IPI(CPU1, MODE_INIT)
			x86_cpu_exec_reset()
			apic_init_reset()
			s->wait_for_sipi = true
IPI(CPU1, MODE_SMI)
			do_smm_enter()
			env->hflags |= HF_SMM_MASK;
IPI(CPU1, MODE_STARTUP, vector)
			do_cpu_sipi()
			apic_sipi()
			/* s->wait_for_sipi check passes */
			cpu_x86_load_seg_cache_sipi(vector)
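
For concreteness, a guest-side sketch of this IPI sequence through the
xAPIC ICR. The register offsets and delivery-mode encodings are
architectural (Intel SDM Vol. 3); the helper names, the target APIC ID,
and the absence of delivery-status polling are illustrative only -- a
real reproducer would run at ring 0 in xAPIC mode and add the usual
waits between IPIs:

#include <stdint.h>

/* xAPIC interrupt command register, MMIO (architectural addresses). */
#define APIC_ICR_LO ((volatile uint32_t *)0xfee00300)
#define APIC_ICR_HI ((volatile uint32_t *)0xfee00310)

/* ICR delivery-mode encodings (bits 10:8) and level-assert (bit 14). */
#define DM_SMI       (2u << 8)
#define DM_INIT      (5u << 8)
#define DM_STARTUP   (6u << 8)
#define LEVEL_ASSERT (1u << 14)

static void send_ipi(uint32_t apic_id, uint32_t icr_lo)
{
    *APIC_ICR_HI = apic_id << 24;  /* destination field */
    *APIC_ICR_LO = icr_lo;         /* writing the low dword sends the IPI */
}

/* Run on CPU0 against CPU1 (APIC ID 1); 'vector' selects the page the
 * victim starts executing at, i.e. vector << 12. */
static void inject(uint8_t vector)
{
    send_ipi(1, DM_INIT | LEVEL_ASSERT); /* CPU1 resets, wait_for_sipi */
    send_ipi(1, DM_SMI);                 /* CPU1 enters SMM */
    send_ipi(1, DM_STARTUP | vector);    /* SIPI delivered inside SMM */
}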

A different sequence, SMI INIT SIPI, is also buggy in TCG, because INIT
is neither blocked nor latched during SMM. It does not, however, allow
the same control of the instruction pointer, because x86_cpu_exec_reset()
clears env->hflags and therefore exits SMM.
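
In other words, in the SMI-first ordering the INIT still takes the full
reset path; a condensed sketch of the effect (editorial annotation, not
literal QEMU code):

/* CPU1 receives INIT while in SMM (TCG, before this patch): */
x86_cpu_exec_reset(cs);
/*  -> whole-CPU reset: env->hflags is cleared, so HF_SMM_MASK is
 *     dropped and CPU1 has effectively left SMM */
/*  -> apic_init_reset(): s->wait_for_sipi = true */

/* A following SIPI then starts CPU1 at vector << 12 in ordinary real
 * mode -- something the guest kernel can already do legitimately, so
 * this ordering yields no code execution inside SMM. */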

Fixes: a9bad65d2c ("target-i386: wake up processors that receive an SMI")
Analyzed-by: YiFei Zhu <zhuyifei@google.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
commit df32e5c568 (parent 00001a22d1)
Paolo Bonzini, 2025-10-11 09:13:29 +02:00
3 changed files with 5 additions and 2 deletions

@@ -646,8 +646,6 @@ void apic_sipi(DeviceState *dev)
 {
     APICCommonState *s = APIC(dev);
 
-    cpu_reset_interrupt(CPU(s->cpu), CPU_INTERRUPT_SIPI);
-
     if (!s->wait_for_sipi)
         return;
     cpu_x86_load_seg_cache_sipi(s->cpu, s->sipi_vector);

@@ -621,6 +621,10 @@ void do_cpu_init(X86CPU *cpu)
 
 void do_cpu_sipi(X86CPU *cpu)
 {
+    CPUX86State *env = &cpu->env;
+    if (env->hflags & HF_SMM_MASK) {
+        return;
+    }
     apic_sipi(cpu->apic_state);
 }
 

@@ -182,6 +182,7 @@ bool x86_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
         apic_poll_irq(cpu->apic_state);
         break;
     case CPU_INTERRUPT_SIPI:
+        cpu_reset_interrupt(cs, CPU_INTERRUPT_SIPI);
        do_cpu_sipi(cpu);
        break;
    case CPU_INTERRUPT_SMI:
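
Why the clearing moved: with the new SMM check, apic_sipi() may never
run, so the pending CPU_INTERRUPT_SIPI bit has to be cleared by the
dispatcher itself, or the CPU would keep re-entering this case on a
SIPI that is being discarded. An annotated restatement of the patched
do_cpu_sipi() (comments are editorial, not from the commit):

void do_cpu_sipi(X86CPU *cpu)
{
    CPUX86State *env = &cpu->env;

    /* Drop SIPIs received in SMM: without this check, the
     * INIT -> SMI -> SIPI sequence above would load CS:IP from the
     * SIPI vector while HF_SMM_MASK is still set. */
    if (env->hflags & HF_SMM_MASK) {
        return;
    }
    apic_sipi(cpu->apic_state);
}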