include/hw/core/cpu: Invert the indexing into CPUTLBDescFast

This array is within CPUNegativeOffsetState, which means the
last element of the array has an offset from env with the
smallest magnitude.  This can be encoded into fewer bits
when generating TCG fast path memory references.

When we changed NB_MMU_MODES to be a global constant,
rather than a per-target value, we pessimized the code
generated for targets that use only a few mmu indexes.
By inverting the array index, we counteract that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
commit 33ea495cd3 (parent 3c58ddc9d7)
Richard Henderson, 2025-07-11 18:20:26 -06:00
2 changed files with 12 additions and 2 deletions

@@ -425,7 +425,8 @@ static uintptr_t G_GNUC_UNUSED get_jmp_target_addr(TCGContext *s, int which)
 static int __attribute__((unused))
 tlb_mask_table_ofs(TCGContext *s, int which)
 {
-    return (offsetof(CPUNegativeOffsetState, tlb.f[which]) -
+    int fi = mmuidx_to_fast_index(which);
+    return (offsetof(CPUNegativeOffsetState, tlb.f[fi]) -
             sizeof(CPUNegativeOffsetState));
 }