
Merge tag 'pull-riscv-to-apply-20251003-3' of https://github.com/alistair23/qemu into staging

First RISC-V PR for 10.2

* Fix MSI table size limit
* Add riscv64 to FirmwareArchitecture
* Sync RISC-V hwprobe with Linux
* Implement MonitorDef HMP API
* Update OpenSBI to v1.7
* Fix SiFive UART character drop issue and minor refactors
* Fix RISC-V timer migration issues
* Use riscv_cpu_is_32bit() when handling SBI_DBCN reg
* Use riscv_csrr in riscv_csr_read
* Align memory allocations to 2M on RISC-V
* Do not use translator_ldl in opcode_at
* Minor fixes of RISC-V CFI
* Modify minimum VLEN rule
* Fix vslide1[up|down].vx unexpected result when XLEN=32 and SEW=64
* Fixup IOMMU PDT Nested Walk
* Fix endianness swap on compressed instructions
* Update status of IOMMU kernel support

# -----BEGIN PGP SIGNATURE-----
#
# iQIzBAABCgAdFiEEaukCtqfKh31tZZKWr3yVEwxTgBMFAmjfQhoACgkQr3yVEwxT
# gBPnTg//eQ9GMFTLcW4kFMsVYeY8TbkmQN9Wnk+XubG92siGkzuNmfy36yo7oeib
# dB6/h5JLjycjttOfgyx73/TKUucyZs+ZYkVVWWQCSU+sqPTA370MmGNM8CSmPms/
# lFuNIixd+sSUDIOod9zQHzxv+f3ZN2bjEAyzJAEhSXgTO+1xnOeJHHjxB5O2Z/a1
# ccd3Po1wR6nm2T4x88LcHDHj8svLsfG0G1RRkU+yeLu7J6Qpp0d/lOZI7if+AQqb
# Nmz65n2uSuUEuNNQIxYaQp/nbkF3DSxi3mg3+hCQjF+hMjXL4hAhSEPril3MQjGi
# 802nEaqG8Qdzec+bZiKt0c3e0f4SrnpDXDnz7NrtfSO6vXAvqqZuC8kTdZy8dsPU
# 1D809ksZoNDIB87z89MQPsQ7k1Bs2Iq9pNpB9huD3mzY4DHqYhkzysAwc8Qhvimv
# pBaeSDV66OrI/al5c0FqSN0LiLHvlRcwqiATiQwIdCV+PUe+cVPwIKq6ABQiYpVu
# mvnzgEJ4r7iO92hOoAGM+eRC7krafF1/gbe3SDI3RLUTDPM6hcTRcluvBlpBdNDj
# lIYXs89f0jBh0I4IRGm8ftqD9xPDP56mZVEIIjSWDRTT6mfZLxWWMmXC/OK63U7/
# bpJKohFOKy8P6SSvTACcLSOQlP3r+FRrmBOXs7S24U+Hr9xUep0=
# =DGkt
# -----END PGP SIGNATURE-----
# gpg: Signature made Thu 02 Oct 2025 08:25:14 PM PDT
# gpg:                using RSA key 6AE902B6A7CA877D6D659296AF7C95130C538013
# gpg: Good signature from "Alistair Francis <alistair@alistair23.me>" [unknown]
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 6AE9 02B6 A7CA 877D 6D65  9296 AF7C 9513 0C53 8013

* tag 'pull-riscv-to-apply-20251003-3' of https://github.com/alistair23/qemu: (26 commits)
  docs: riscv-iommu: Update status of kernel support
  target/riscv: Fix endianness swap on compressed instructions
  hw/riscv/riscv-iommu: Fixup PDT Nested Walk
  target/riscv: rvv: Fix vslide1[up|down].vx unexpected result when XLEN=32 and SEW=64
  target/riscv: rvv: Modify minimum VLEN according to enabled vector extensions
  target/riscv: rvv: Replace checking V by checking Zve32x
  target/riscv: Fix ssamoswap error handling
  target/riscv: Fix SSP CSR error handling in VU/VS mode
  target/riscv: Fix the mepc when sspopchk triggers the exception
  target/riscv: do not use translator_ldl in opcode_at
  qemu/osdep: align memory allocations to 2M on RISC-V
  target/riscv: use riscv_csrr in riscv_csr_read
  target/riscv/kvm: Use riscv_cpu_is_32bit() when handling SBI_DBCN reg
  target/riscv: Save stimer and vstimer in CPU vmstate
  hw/intc: Save timers array in RISC-V mtimer VMState
  migration: Add support for a variable-length array of UINT32 pointers
  hw/intc: Save time_delta in RISC-V mtimer VMState
  hw/char: sifive_uart: Add newline to error message
  hw/char: sifive_uart: Remove outdated comment about Tx FIFO
  hw/char: sifive_uart: Avoid pushing Tx FIFO when size is zero
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Committed by Richard Henderson, 2025-10-03 04:57:12 -07:00
commit 91f80dda70
26 changed files with 624 additions and 84 deletions


@@ -85,12 +85,14 @@
#
# @loongarch64: 64-bit LoongArch. (since: 7.1)
#
# @riscv64: 64-bit RISC-V.
#
# @x86_64: 64-bit x86.
#
# Since: 3.0
##
{ 'enum' : 'FirmwareArchitecture',
'data' : [ 'aarch64', 'arm', 'i386', 'loongarch64', 'x86_64' ] }
'data' : [ 'aarch64', 'arm', 'i386', 'loongarch64', 'riscv64', 'x86_64' ] }
##
# @FirmwareTarget:


@@ -30,15 +30,15 @@ This will add a RISC-V IOMMU PCI device in the board following any additional
PCI parameters (like PCI bus address). The behavior of the RISC-V IOMMU is
defined by the spec but its operation is OS dependent.
As of this writing the existing Linux kernel support `linux-v8`_, not yet merged,
does not have support for features like VFIO passthrough. The IOMMU emulation
was tested using a public Ventana Micro Systems kernel repository in
`ventana-linux`_. This kernel is based on `linux-v8`_ with additional patches that
enable features like KVM VFIO passthrough with irqbypass. Until the kernel support
is feature complete feel free to use the kernel available in the Ventana Micro Systems
mirror.
Linux kernel iommu support was merged in v6.13. QEMU IOMMU emulation can be
used with mainline kernels for simple IOMMU PCIe support.
The current Linux kernel support will use the IOMMU device to create IOMMU groups
As of v6.17, it does not have support for features like VFIO passthrough.
There is a `VFIO`_ RFC series that is not yet merged. The public Ventana Micro
Systems kernel repository in `ventana-linux`_ can be used for testing the VFIO
functions.
The v6.13+ Linux kernel support uses the IOMMU device to create IOMMU groups
with any eligible cards available in the system, regardless of factors such as the
order in which the devices are added in the command line.
@@ -49,7 +49,7 @@ IOMMU kernel driver behaves:
$ qemu-system-riscv64 \
-M virt,aia=aplic-imsic,aia-guests=5 \
-device riscv-iommu-pci,addr=1.0,vendor-id=0x1efd,device-id=0xedf1 \
-device riscv-iommu-pci,addr=1.0 \
-device e1000e,netdev=net1 -netdev user,id=net1,net=192.168.0.0/24 \
-device e1000e,netdev=net2 -netdev user,id=net2,net=192.168.200.0/24 \
(...)
@@ -58,21 +58,11 @@ IOMMU kernel driver behaves:
-M virt,aia=aplic-imsic,aia-guests=5 \
-device e1000e,netdev=net1 -netdev user,id=net1,net=192.168.0.0/24 \
-device e1000e,netdev=net2 -netdev user,id=net2,net=192.168.200.0/24 \
-device riscv-iommu-pci,addr=1.0,vendor-id=0x1efd,device-id=0xedf1 \
-device riscv-iommu-pci,addr=3.0 \
(...)
Both will create iommu groups for the two e1000e cards.
Another thing to notice on `linux-v8`_ and `ventana-linux`_ is that the kernel driver
considers an IOMMU identified as a Rivos device, i.e. it uses Rivos vendor ID. To
use the riscv-iommu-pci device with the existing kernel support we need to emulate
a Rivos PCI IOMMU by setting 'vendor-id' and 'device-id':
.. code-block:: bash
$ qemu-system-riscv64 -M virt \
-device riscv-iommu-pci,vendor-id=0x1efd,device-id=0xedf1 (...)
Several options are available to control the capabilities of the device, namely:
- "bus": the bus that the IOMMU device uses
@@ -84,6 +74,7 @@ Several options are available to control the capabilities of the device, namely:
- "g-stage": enable g-stage support
- "hpm-counters": number of hardware performance counters available. Maximum value is 31.
Default value is 31. Use 0 (zero) to disable HPM support
- "vendor-id"/"device-id": pci device ID. Defaults to 1b36:0014 (Redhat)
riscv-iommu-sys device
----------------------
@@ -111,6 +102,6 @@ riscv-iommu options:
.. _iommu1.0.0: https://github.com/riscv-non-isa/riscv-iommu/releases/download/v1.0.0/riscv-iommu.pdf
.. _linux-v8: https://lore.kernel.org/linux-riscv/cover.1718388908.git.tjeznach@rivosinc.com/
.. _VFIO: https://lore.kernel.org/linux-riscv/20241114161845.502027-17-ajones@ventanamicro.com/
.. _ventana-linux: https://github.com/ventanamicro/linux/tree/dev-upstream


@@ -28,23 +28,18 @@
#define TX_INTERRUPT_TRIGGER_DELAY_NS 100
/*
* Not yet implemented:
*
* Transmit FIFO using "qemu/fifo8.h"
*/
/* Returns the state of the IP (interrupt pending) register */
static uint64_t sifive_uart_ip(SiFiveUARTState *s)
static uint32_t sifive_uart_ip(SiFiveUARTState *s)
{
uint64_t ret = 0;
uint32_t ret = 0;
uint64_t txcnt = SIFIVE_UART_GET_TXCNT(s->txctrl);
uint64_t rxcnt = SIFIVE_UART_GET_RXCNT(s->rxctrl);
uint32_t txcnt = SIFIVE_UART_GET_TXCNT(s->txctrl);
uint32_t rxcnt = SIFIVE_UART_GET_RXCNT(s->rxctrl);
if (txcnt != 0) {
if (fifo8_num_used(&s->tx_fifo) < txcnt) {
ret |= SIFIVE_UART_IP_TXWM;
}
if (s->rx_fifo_len > rxcnt) {
ret |= SIFIVE_UART_IP_RXWM;
}
@@ -55,15 +50,14 @@ static uint64_t sifive_uart_ip(SiFiveUARTState *s)
static void sifive_uart_update_irq(SiFiveUARTState *s)
{
int cond = 0;
if ((s->ie & SIFIVE_UART_IE_TXWM) ||
((s->ie & SIFIVE_UART_IE_RXWM) && s->rx_fifo_len)) {
uint32_t ip = sifive_uart_ip(s);
if (((ip & SIFIVE_UART_IP_TXWM) && (s->ie & SIFIVE_UART_IE_TXWM)) ||
((ip & SIFIVE_UART_IP_RXWM) && (s->ie & SIFIVE_UART_IE_RXWM))) {
cond = 1;
}
if (cond) {
qemu_irq_raise(s->irq);
} else {
qemu_irq_lower(s->irq);
}
qemu_set_irq(s->irq, cond);
}
static gboolean sifive_uart_xmit(void *do_not_use, GIOCondition cond,
@@ -119,10 +113,12 @@ static void sifive_uart_write_tx_fifo(SiFiveUARTState *s, const uint8_t *buf,
if (size > fifo8_num_free(&s->tx_fifo)) {
size = fifo8_num_free(&s->tx_fifo);
qemu_log_mask(LOG_GUEST_ERROR, "sifive_uart: TX FIFO overflow");
qemu_log_mask(LOG_GUEST_ERROR, "sifive_uart: TX FIFO overflow.\n");
}
fifo8_push_all(&s->tx_fifo, buf, size);
if (size > 0) {
fifo8_push_all(&s->tx_fifo, buf, size);
}
if (fifo8_is_full(&s->tx_fifo)) {
s->txfifo |= SIFIVE_UART_TXFIFO_FULL;


@@ -323,12 +323,15 @@ static void riscv_aclint_mtimer_reset_enter(Object *obj, ResetType type)
static const VMStateDescription vmstate_riscv_mtimer = {
.name = "riscv_mtimer",
.version_id = 1,
.minimum_version_id = 1,
.version_id = 3,
.minimum_version_id = 3,
.fields = (const VMStateField[]) {
VMSTATE_UINT64(time_delta, RISCVAclintMTimerState),
VMSTATE_VARRAY_UINT32(timecmp, RISCVAclintMTimerState,
num_harts, 0,
vmstate_info_uint64, uint64_t),
VMSTATE_TIMER_PTR_VARRAY(timers, RISCVAclintMTimerState,
num_harts),
VMSTATE_END_OF_LIST()
}
};


@@ -558,6 +558,7 @@ static MemTxResult riscv_iommu_msi_write(RISCVIOMMUState *s,
MemTxResult res;
dma_addr_t addr;
uint64_t intn;
size_t offset;
uint32_t n190;
uint64_t pte[2];
int fault_type = RISCV_IOMMU_FQ_TTYPE_UADDR_WR;
@@ -565,16 +566,18 @@
/* Interrupt File Number */
intn = riscv_iommu_pext_u64(PPN_DOWN(gpa), ctx->msi_addr_mask);
if (intn >= 256) {
offset = intn * sizeof(pte);
/* fetch MSI PTE */
addr = PPN_PHYS(get_field(ctx->msiptp, RISCV_IOMMU_DC_MSIPTP_PPN));
if (addr & offset) {
/* Interrupt file number out of range */
res = MEMTX_ACCESS_ERROR;
cause = RISCV_IOMMU_FQ_CAUSE_MSI_LOAD_FAULT;
goto err;
}
/* fetch MSI PTE */
addr = PPN_PHYS(get_field(ctx->msiptp, RISCV_IOMMU_DC_MSIPTP_PPN));
addr = addr | (intn * sizeof(pte));
addr |= offset;
res = dma_memory_read(s->target_as, addr, &pte, sizeof(pte),
MEMTXATTRS_UNSPECIFIED);
if (res != MEMTX_OK) {
@@ -866,6 +869,145 @@ static bool riscv_iommu_validate_process_ctx(RISCVIOMMUState *s,
return true;
}
/**
* pdt_memory_read: PDT wrapper of dma_memory_read.
*
* @s: IOMMU Device State
* @ctx: Device Translation Context with devid and pasid set
* @addr: address within that address space
* @buf: buffer with the data transferred
* @len: length of the data transferred
* @attrs: memory transaction attributes
*/
static MemTxResult pdt_memory_read(RISCVIOMMUState *s,
RISCVIOMMUContext *ctx,
dma_addr_t addr,
void *buf, dma_addr_t len,
MemTxAttrs attrs)
{
uint64_t gatp_mode, pte;
struct {
unsigned char step;
unsigned char levels;
unsigned char ptidxbits;
unsigned char ptesize;
} sc;
MemTxResult ret;
dma_addr_t base = addr;
/* G stages translation mode */
gatp_mode = get_field(ctx->gatp, RISCV_IOMMU_ATP_MODE_FIELD);
if (gatp_mode == RISCV_IOMMU_DC_IOHGATP_MODE_BARE) {
goto out;
}
/* G stages translation tables root pointer */
base = PPN_PHYS(get_field(ctx->gatp, RISCV_IOMMU_ATP_PPN_FIELD));
/* Start at step 0 */
sc.step = 0;
if (s->fctl & RISCV_IOMMU_FCTL_GXL) {
/* 32bit mode for GXL == 1 */
switch (gatp_mode) {
case RISCV_IOMMU_DC_IOHGATP_MODE_SV32X4:
if (!(s->cap & RISCV_IOMMU_CAP_SV32X4)) {
return MEMTX_ACCESS_ERROR;
}
sc.levels = 2;
sc.ptidxbits = 10;
sc.ptesize = 4;
break;
default:
return MEMTX_ACCESS_ERROR;
}
} else {
/* 64bit mode for GXL == 0 */
switch (gatp_mode) {
case RISCV_IOMMU_DC_IOHGATP_MODE_SV39X4:
if (!(s->cap & RISCV_IOMMU_CAP_SV39X4)) {
return MEMTX_ACCESS_ERROR;
}
sc.levels = 3;
sc.ptidxbits = 9;
sc.ptesize = 8;
break;
case RISCV_IOMMU_DC_IOHGATP_MODE_SV48X4:
if (!(s->cap & RISCV_IOMMU_CAP_SV48X4)) {
return MEMTX_ACCESS_ERROR;
}
sc.levels = 4;
sc.ptidxbits = 9;
sc.ptesize = 8;
break;
case RISCV_IOMMU_DC_IOHGATP_MODE_SV57X4:
if (!(s->cap & RISCV_IOMMU_CAP_SV57X4)) {
return MEMTX_ACCESS_ERROR;
}
sc.levels = 5;
sc.ptidxbits = 9;
sc.ptesize = 8;
break;
default:
return MEMTX_ACCESS_ERROR;
}
}
do {
const unsigned va_bits = (sc.step ? 0 : 2) + sc.ptidxbits;
const unsigned va_skip = TARGET_PAGE_BITS + sc.ptidxbits *
(sc.levels - 1 - sc.step);
const unsigned idx = (addr >> va_skip) & ((1 << va_bits) - 1);
const dma_addr_t pte_addr = base + idx * sc.ptesize;
/* Address range check before first level lookup */
if (!sc.step) {
const uint64_t va_mask = (1ULL << (va_skip + va_bits)) - 1;
if ((addr & va_mask) != addr) {
return MEMTX_ACCESS_ERROR;
}
}
/* Read page table entry */
if (sc.ptesize == 4) {
uint32_t pte32 = 0;
ret = ldl_le_dma(s->target_as, pte_addr, &pte32, attrs);
pte = pte32;
} else {
ret = ldq_le_dma(s->target_as, pte_addr, &pte, attrs);
}
if (ret != MEMTX_OK) {
return ret;
}
sc.step++;
hwaddr ppn = pte >> PTE_PPN_SHIFT;
if (!(pte & PTE_V)) {
return MEMTX_ACCESS_ERROR; /* Invalid PTE */
} else if (!(pte & (PTE_R | PTE_W | PTE_X))) {
base = PPN_PHYS(ppn); /* Inner PTE, continue walking */
} else if ((pte & (PTE_R | PTE_W | PTE_X)) == PTE_W) {
return MEMTX_ACCESS_ERROR; /* Reserved leaf PTE flags: PTE_W */
} else if ((pte & (PTE_R | PTE_W | PTE_X)) == (PTE_W | PTE_X)) {
return MEMTX_ACCESS_ERROR; /* Reserved leaf PTE flags: PTE_W + PTE_X */
} else if (ppn & ((1ULL << (va_skip - TARGET_PAGE_BITS)) - 1)) {
return MEMTX_ACCESS_ERROR; /* Misaligned PPN */
} else {
/* Leaf PTE, translation completed. */
base = PPN_PHYS(ppn) | (addr & ((1ULL << va_skip) - 1));
break;
}
if (sc.step == sc.levels) {
return MEMTX_ACCESS_ERROR; /* Can't find leaf PTE */
}
} while (1);
out:
return dma_memory_read(s->target_as, base, buf, len, attrs);
}
/*
* RISC-V IOMMU Device Context Lookup - Device Directory Tree Walk
*
@@ -1038,7 +1180,7 @@ static int riscv_iommu_ctx_fetch(RISCVIOMMUState *s, RISCVIOMMUContext *ctx)
*/
const int split = depth * 9 + 8;
addr |= ((ctx->process_id >> split) << 3) & ~TARGET_PAGE_MASK;
if (dma_memory_read(s->target_as, addr, &de, sizeof(de),
if (pdt_memory_read(s, ctx, addr, &de, sizeof(de),
MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
return RISCV_IOMMU_FQ_CAUSE_PDT_LOAD_FAULT;
}
@@ -1053,7 +1195,7 @@ static int riscv_iommu_ctx_fetch(RISCVIOMMUState *s, RISCVIOMMUContext *ctx)
/* Leaf entry in PDT */
addr |= (ctx->process_id << 4) & ~TARGET_PAGE_MASK;
if (dma_memory_read(s->target_as, addr, &dc.ta, sizeof(uint64_t) * 2,
if (pdt_memory_read(s, ctx, addr, &dc.ta, sizeof(uint64_t) * 2,
MEMTXATTRS_UNSPECIFIED) != MEMTX_OK) {
return RISCV_IOMMU_FQ_CAUSE_PDT_LOAD_FAULT;
}


@@ -80,4 +80,8 @@ enum {
RISCV_ACLINT_SWI_SIZE = 0x4000
};
#define VMSTATE_TIMER_PTR_VARRAY(_f, _s, _f_n) \
VMSTATE_VARRAY_OF_POINTER_UINT32(_f, _s, _f_n, 0, vmstate_info_timer, \
QEMUTimer *)
#endif


@@ -522,6 +522,16 @@ extern const VMStateInfo vmstate_info_qlist;
.offset = vmstate_offset_array(_s, _f, _type*, _n), \
}
#define VMSTATE_VARRAY_OF_POINTER_UINT32(_field, _state, _field_num, _version, _info, _type) { \
.name = (stringify(_field)), \
.version_id = (_version), \
.num_offset = vmstate_offset_value(_state, _field_num, uint32_t), \
.info = &(_info), \
.size = sizeof(_type), \
.flags = VMS_VARRAY_UINT32 | VMS_ARRAY_OF_POINTER | VMS_POINTER, \
.offset = vmstate_offset_pointer(_state, _field, _type), \
}
#define VMSTATE_STRUCT_SUB_ARRAY(_field, _state, _start, _num, _version, _vmsd, _type) { \
.name = (stringify(_field)), \
.version_id = (_version), \


@@ -561,7 +561,7 @@ int madvise(char *, size_t, int);
#if defined(__linux__) && \
(defined(__x86_64__) || defined(__arm__) || defined(__aarch64__) \
|| defined(__powerpc64__))
|| defined(__powerpc64__) || defined(__riscv))
/* Use 2 MiB alignment so transparent hugepages can be used by KVM.
Valgrind does not support alignments larger than 1 MiB,
therefore we need special code which handles running on Valgrind. */


@@ -9023,6 +9023,29 @@ static int do_getdents64(abi_long dirfd, abi_long arg2, abi_long count)
#define RISCV_HWPROBE_EXT_ZTSO (1ULL << 33)
#define RISCV_HWPROBE_EXT_ZACAS (1ULL << 34)
#define RISCV_HWPROBE_EXT_ZICOND (1ULL << 35)
#define RISCV_HWPROBE_EXT_ZIHINTPAUSE (1ULL << 36)
#define RISCV_HWPROBE_EXT_ZVE32X (1ULL << 37)
#define RISCV_HWPROBE_EXT_ZVE32F (1ULL << 38)
#define RISCV_HWPROBE_EXT_ZVE64X (1ULL << 39)
#define RISCV_HWPROBE_EXT_ZVE64F (1ULL << 40)
#define RISCV_HWPROBE_EXT_ZVE64D (1ULL << 41)
#define RISCV_HWPROBE_EXT_ZIMOP (1ULL << 42)
#define RISCV_HWPROBE_EXT_ZCA (1ULL << 43)
#define RISCV_HWPROBE_EXT_ZCB (1ULL << 44)
#define RISCV_HWPROBE_EXT_ZCD (1ULL << 45)
#define RISCV_HWPROBE_EXT_ZCF (1ULL << 46)
#define RISCV_HWPROBE_EXT_ZCMOP (1ULL << 47)
#define RISCV_HWPROBE_EXT_ZAWRS (1ULL << 48)
#define RISCV_HWPROBE_EXT_SUPM (1ULL << 49)
#define RISCV_HWPROBE_EXT_ZICNTR (1ULL << 50)
#define RISCV_HWPROBE_EXT_ZIHPM (1ULL << 51)
#define RISCV_HWPROBE_EXT_ZFBFMIN (1ULL << 52)
#define RISCV_HWPROBE_EXT_ZVFBFMIN (1ULL << 53)
#define RISCV_HWPROBE_EXT_ZVFBFWMA (1ULL << 54)
#define RISCV_HWPROBE_EXT_ZICBOM (1ULL << 55)
#define RISCV_HWPROBE_EXT_ZAAMO (1ULL << 56)
#define RISCV_HWPROBE_EXT_ZALRSC (1ULL << 57)
#define RISCV_HWPROBE_EXT_ZABHA (1ULL << 58)
#define RISCV_HWPROBE_KEY_CPUPERF_0 5
#define RISCV_HWPROBE_MISALIGNED_UNKNOWN (0 << 0)
@@ -9033,6 +9056,22 @@ static int do_getdents64(abi_long dirfd, abi_long arg2, abi_long count)
#define RISCV_HWPROBE_MISALIGNED_MASK (7 << 0)
#define RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE 6
#define RISCV_HWPROBE_KEY_HIGHEST_VIRT_ADDRESS 7
#define RISCV_HWPROBE_KEY_TIME_CSR_FREQ 8
#define RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF 9
#define RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN 0
#define RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED 1
#define RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW 2
#define RISCV_HWPROBE_MISALIGNED_SCALAR_FAST 3
#define RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED 4
#define RISCV_HWPROBE_KEY_MISALIGNED_VECTOR_PERF 10
#define RISCV_HWPROBE_MISALIGNED_VECTOR_UNKNOWN 0
#define RISCV_HWPROBE_MISALIGNED_VECTOR_SLOW 2
#define RISCV_HWPROBE_MISALIGNED_VECTOR_FAST 3
#define RISCV_HWPROBE_MISALIGNED_VECTOR_UNSUPPORTED 4
#define RISCV_HWPROBE_KEY_VENDOR_EXT_THEAD_0 11
#define RISCV_HWPROBE_KEY_ZICBOM_BLOCK_SIZE 12
#define RISCV_HWPROBE_KEY_VENDOR_EXT_SIFIVE_0 13
struct riscv_hwprobe {
abi_llong key;
@@ -9141,6 +9180,52 @@ static void risc_hwprobe_fill_pairs(CPURISCVState *env,
RISCV_HWPROBE_EXT_ZACAS : 0;
value |= cfg->ext_zicond ?
RISCV_HWPROBE_EXT_ZICOND : 0;
value |= cfg->ext_zihintpause ?
RISCV_HWPROBE_EXT_ZIHINTPAUSE : 0;
value |= cfg->ext_zve32x ?
RISCV_HWPROBE_EXT_ZVE32X : 0;
value |= cfg->ext_zve32f ?
RISCV_HWPROBE_EXT_ZVE32F : 0;
value |= cfg->ext_zve64x ?
RISCV_HWPROBE_EXT_ZVE64X : 0;
value |= cfg->ext_zve64f ?
RISCV_HWPROBE_EXT_ZVE64F : 0;
value |= cfg->ext_zve64d ?
RISCV_HWPROBE_EXT_ZVE64D : 0;
value |= cfg->ext_zimop ?
RISCV_HWPROBE_EXT_ZIMOP : 0;
value |= cfg->ext_zca ?
RISCV_HWPROBE_EXT_ZCA : 0;
value |= cfg->ext_zcb ?
RISCV_HWPROBE_EXT_ZCB : 0;
value |= cfg->ext_zcd ?
RISCV_HWPROBE_EXT_ZCD : 0;
value |= cfg->ext_zcf ?
RISCV_HWPROBE_EXT_ZCF : 0;
value |= cfg->ext_zcmop ?
RISCV_HWPROBE_EXT_ZCMOP : 0;
value |= cfg->ext_zawrs ?
RISCV_HWPROBE_EXT_ZAWRS : 0;
value |= cfg->ext_supm ?
RISCV_HWPROBE_EXT_SUPM : 0;
value |= cfg->ext_zicntr ?
RISCV_HWPROBE_EXT_ZICNTR : 0;
value |= cfg->ext_zihpm ?
RISCV_HWPROBE_EXT_ZIHPM : 0;
value |= cfg->ext_zfbfmin ?
RISCV_HWPROBE_EXT_ZFBFMIN : 0;
value |= cfg->ext_zvfbfmin ?
RISCV_HWPROBE_EXT_ZVFBFMIN : 0;
value |= cfg->ext_zvfbfwma ?
RISCV_HWPROBE_EXT_ZVFBFWMA : 0;
value |= cfg->ext_zicbom ?
RISCV_HWPROBE_EXT_ZICBOM : 0;
value |= cfg->ext_zaamo ?
RISCV_HWPROBE_EXT_ZAAMO : 0;
value |= cfg->ext_zalrsc ?
RISCV_HWPROBE_EXT_ZALRSC : 0;
value |= cfg->ext_zabha ?
RISCV_HWPROBE_EXT_ZABHA : 0;
__put_user(value, &pair->value);
break;
case RISCV_HWPROBE_KEY_CPUPERF_0:
@@ -9150,6 +9235,10 @@ static void risc_hwprobe_fill_pairs(CPURISCVState *env,
value = cfg->ext_zicboz ? cfg->cboz_blocksize : 0;
__put_user(value, &pair->value);
break;
case RISCV_HWPROBE_KEY_ZICBOM_BLOCK_SIZE:
value = cfg->ext_zicbom ? cfg->cbom_blocksize : 0;
__put_user(value, &pair->value);
break;
default:
__put_user(-1, &pair->key);
break;

@@ -1 +1 @@
Subproject commit 43cace6c3671e5172d0df0a8963e552bb04b7b20
Subproject commit a32a91069119e7a5aa31e6bc51d5e00860be3d80


@@ -604,7 +604,7 @@ static void riscv_cpu_dump_state(CPUState *cs, FILE *f, int flags)
}
}
}
if (riscv_has_ext(env, RVV) && (flags & CPU_DUMP_VPU)) {
if (riscv_cpu_cfg(env)->ext_zve32x && (flags & CPU_DUMP_VPU)) {
static const int dump_rvv_csrs[] = {
CSR_VSTART,
CSR_VXSAT,


@@ -592,6 +592,7 @@ static inline int riscv_has_ext(CPURISCVState *env, target_ulong ext)
extern const char * const riscv_int_regnames[];
extern const char * const riscv_int_regnamesh[];
extern const char * const riscv_fpr_regnames[];
extern const char * const riscv_rvv_regnames[];
const char *riscv_cpu_get_trap_name(target_ulong cause, bool async);
int riscv_cpu_write_elf64_note(WriteCoreDumpFunction f, CPUState *cs,
@@ -873,7 +874,7 @@ static inline void riscv_csr_write(CPURISCVState *env, int csrno,
static inline target_ulong riscv_csr_read(CPURISCVState *env, int csrno)
{
target_ulong val = 0;
riscv_csrrw(env, csrno, &val, 0, 0, 0);
riscv_csrr(env, csrno, &val);
return val;
}


@@ -203,6 +203,8 @@ static RISCVException cfi_ss(CPURISCVState *env, int csrno)
#if !defined(CONFIG_USER_ONLY)
if (env->debugger) {
return RISCV_EXCP_NONE;
} else if (env->virt_enabled) {
return RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
}
#endif
return RISCV_EXCP_ILLEGAL_INST;
@@ -2003,7 +2005,8 @@ static RISCVException write_mstatus(CPURISCVState *env, int csrno,
if (riscv_has_ext(env, RVF)) {
mask |= MSTATUS_FS;
}
if (riscv_has_ext(env, RVV)) {
if (riscv_cpu_cfg(env)->ext_zve32x) {
mask |= MSTATUS_VS;
}


@@ -1101,14 +1101,14 @@ DEF_HELPER_6(vslidedown_vx_b, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslidedown_vx_h, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslidedown_vx_w, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslidedown_vx_d, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_b, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_h, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_w, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_d, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_b, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_h, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_w, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_d, void, ptr, ptr, tl, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_b, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_h, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_w, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1up_vx_d, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_b, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_h, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_w, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vslide1down_vx_d, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vfslide1up_vf_h, void, ptr, ptr, i64, ptr, env, i32)
DEF_HELPER_6(vfslide1up_vf_w, void, ptr, ptr, i64, ptr, env, i32)
@@ -1284,3 +1284,8 @@ DEF_HELPER_4(vgmul_vv, void, ptr, ptr, env, i32)
DEF_HELPER_5(vsm4k_vi, void, ptr, ptr, i32, env, i32)
DEF_HELPER_4(vsm4r_vv, void, ptr, ptr, env, i32)
DEF_HELPER_4(vsm4r_vs, void, ptr, ptr, env, i32)
/* CFI (zicfiss) helpers */
#ifndef CONFIG_USER_ONLY
DEF_HELPER_1(ssamoswap_disabled, void, env)
#endif


@@ -3561,7 +3561,6 @@ static bool slideup_check(DisasContext *s, arg_rmrr *a)
}
GEN_OPIVX_TRANS(vslideup_vx, slideup_check)
GEN_OPIVX_TRANS(vslide1up_vx, slideup_check)
GEN_OPIVI_TRANS(vslideup_vi, IMM_ZX, vslideup_vx, slideup_check)
static bool slidedown_check(DisasContext *s, arg_rmrr *a)
@@ -3572,9 +3571,56 @@ static bool slidedown_check(DisasContext *s, arg_rmrr *a)
}
GEN_OPIVX_TRANS(vslidedown_vx, slidedown_check)
GEN_OPIVX_TRANS(vslide1down_vx, slidedown_check)
GEN_OPIVI_TRANS(vslidedown_vi, IMM_ZX, vslidedown_vx, slidedown_check)
typedef void gen_helper_vslide1_vx(TCGv_ptr, TCGv_ptr, TCGv_i64, TCGv_ptr,
TCGv_env, TCGv_i32);
#define GEN_OPIVX_VSLIDE1_TRANS(NAME, CHECK) \
static bool trans_##NAME(DisasContext *s, arg_rmrr *a) \
{ \
if (CHECK(s, a)) { \
static gen_helper_vslide1_vx * const fns[4] = { \
gen_helper_##NAME##_b, gen_helper_##NAME##_h, \
gen_helper_##NAME##_w, gen_helper_##NAME##_d, \
}; \
\
TCGv_ptr dest, src2, mask; \
TCGv_i64 src1; \
TCGv_i32 desc; \
uint32_t data = 0; \
\
dest = tcg_temp_new_ptr(); \
mask = tcg_temp_new_ptr(); \
src2 = tcg_temp_new_ptr(); \
src1 = tcg_temp_new_i64(); \
\
data = FIELD_DP32(data, VDATA, VM, a->vm); \
data = FIELD_DP32(data, VDATA, LMUL, s->lmul); \
data = FIELD_DP32(data, VDATA, VTA, s->vta); \
data = FIELD_DP32(data, VDATA, VTA_ALL_1S, s->cfg_vta_all_1s); \
data = FIELD_DP32(data, VDATA, VMA, s->vma); \
desc = tcg_constant_i32(simd_desc(s->cfg_ptr->vlenb, \
s->cfg_ptr->vlenb, data)); \
\
tcg_gen_addi_ptr(dest, tcg_env, vreg_ofs(s, a->rd)); \
tcg_gen_addi_ptr(src2, tcg_env, vreg_ofs(s, a->rs2)); \
tcg_gen_addi_ptr(mask, tcg_env, vreg_ofs(s, 0)); \
tcg_gen_ext_tl_i64(src1, get_gpr(s, a->rs1, EXT_SIGN)); \
\
fns[s->sew](dest, mask, src1, src2, tcg_env, desc); \
\
tcg_gen_movi_tl(cpu_vstart, 0); \
finalize_rvv_inst(s); \
\
return true; \
} \
return false; \
}
GEN_OPIVX_VSLIDE1_TRANS(vslide1up_vx, slideup_check)
GEN_OPIVX_VSLIDE1_TRANS(vslide1down_vx, slidedown_check)
/* Vector Floating-Point Slide Instructions */
static bool fslideup_check(DisasContext *s, arg_rmrr *a)
{


@@ -88,13 +88,13 @@ static bool trans_c_lbu(DisasContext *ctx, arg_c_lbu *a)
static bool trans_c_lhu(DisasContext *ctx, arg_c_lhu *a)
{
REQUIRE_ZCB(ctx);
return gen_load(ctx, a, MO_UW);
return gen_load(ctx, a, MO_TEUW);
}
static bool trans_c_lh(DisasContext *ctx, arg_c_lh *a)
{
REQUIRE_ZCB(ctx);
return gen_load(ctx, a, MO_SW);
return gen_load(ctx, a, MO_TESW);
}
static bool trans_c_sb(DisasContext *ctx, arg_c_sb *a)
@@ -106,7 +106,7 @@ static bool trans_c_sb(DisasContext *ctx, arg_c_sb *a)
static bool trans_c_sh(DisasContext *ctx, arg_c_sh *a)
{
REQUIRE_ZCB(ctx);
return gen_store(ctx, a, MO_UW);
return gen_store(ctx, a, MO_TEUW);
}
#define X_S0 8


@@ -40,6 +40,7 @@ static bool trans_sspopchk(DisasContext *ctx, arg_sspopchk *a)
tcg_gen_brcond_tl(TCG_COND_EQ, data, rs1, skip);
tcg_gen_st_tl(tcg_constant_tl(RISCV_EXCP_SW_CHECK_BCFI_TVAL),
tcg_env, offsetof(CPURISCVState, sw_check_code));
gen_update_pc(ctx, 0);
gen_helper_raise_exception(tcg_env,
tcg_constant_i32(RISCV_EXCP_SW_CHECK));
gen_set_label(skip);
@@ -90,7 +91,11 @@ static bool trans_ssamoswap_w(DisasContext *ctx, arg_amoswap_w *a)
}
if (!ctx->bcfi_enabled) {
#ifndef CONFIG_USER_ONLY
gen_helper_ssamoswap_disabled(tcg_env);
#else
return false;
#endif
}
TCGv dest = dest_gpr(ctx, a->rd);
@@ -115,7 +120,11 @@ static bool trans_ssamoswap_d(DisasContext *ctx, arg_amoswap_w *a)
}
if (!ctx->bcfi_enabled) {
#ifndef CONFIG_USER_ONLY
gen_helper_ssamoswap_disabled(tcg_env);
#else
return false;
#endif
}
TCGv dest = dest_gpr(ctx, a->rd);


@@ -1588,7 +1588,7 @@ static void kvm_riscv_handle_sbi_dbcn(CPUState *cs, struct kvm_run *run)
* Handle the case where a 32 bit CPU is running in a
* 64 bit addressing env.
*/
if (riscv_cpu_mxl(&cpu->env) == MXL_RV32) {
if (riscv_cpu_is_32bit(cpu)) {
addr |= (uint64_t)run->riscv_sbi.args[2] << 32;
}


@@ -131,7 +131,8 @@ static bool vector_needed(void *opaque)
RISCVCPU *cpu = opaque;
CPURISCVState *env = &cpu->env;
return riscv_has_ext(env, RVV);
return kvm_enabled() ? riscv_has_ext(env, RVV) :
riscv_cpu_cfg(env)->ext_zve32x;
}
static const VMStateDescription vmstate_vector = {
@@ -400,6 +401,30 @@ static const VMStateDescription vmstate_ssp = {
}
};
static bool sstc_timer_needed(void *opaque)
{
RISCVCPU *cpu = opaque;
CPURISCVState *env = &cpu->env;
if (!cpu->cfg.ext_sstc) {
return false;
}
return env->stimer != NULL || env->vstimer != NULL;
}
static const VMStateDescription vmstate_sstc = {
.name = "cpu/timer",
.version_id = 1,
.minimum_version_id = 1,
.needed = sstc_timer_needed,
.fields = (const VMStateField[]) {
VMSTATE_TIMER_PTR(env.stimer, RISCVCPU),
VMSTATE_TIMER_PTR(env.vstimer, RISCVCPU),
VMSTATE_END_OF_LIST()
}
};
const VMStateDescription vmstate_riscv_cpu = {
.name = "cpu",
.version_id = 10,
@@ -476,6 +501,7 @@ const VMStateDescription vmstate_riscv_cpu = {
&vmstate_elp,
&vmstate_ssp,
&vmstate_ctr,
&vmstate_sstc,
NULL
}
};


@@ -717,4 +717,53 @@ target_ulong helper_hyp_hlvx_wu(CPURISCVState *env, target_ulong addr)
return cpu_ldl_code_mmu(env, addr, oi, ra);
}
void helper_ssamoswap_disabled(CPURISCVState *env)
{
int exception = RISCV_EXCP_ILLEGAL_INST;
/*
* Here we follow the RISC-V CFI spec [1] to implement the exception type
* of ssamoswap* instruction.
*
* [1] RISC-V CFI spec v1.0, ch2.7 Atomic Swap from a Shadow Stack Location
*
* Note: We have already checked some conditions in trans_* functions:
* 1. The effective priv mode is not M-mode.
* 2. The xSSE specific to the effective priv mode is disabled.
*/
if (!get_field(env->menvcfg, MENVCFG_SSE)) {
/*
* Disabled M-mode SSE always triggers an illegal instruction when
* the current priv mode is not M-mode.
*/
exception = RISCV_EXCP_ILLEGAL_INST;
goto done;
}
if (!riscv_has_ext(env, RVS)) {
/* S-mode is not implemented */
exception = RISCV_EXCP_ILLEGAL_INST;
goto done;
} else if (env->virt_enabled) {
/*
* VU/VS-mode with disabled xSSE will trigger the virtual instruction
* exception.
*/
exception = RISCV_EXCP_VIRT_INSTRUCTION_FAULT;
goto done;
} else {
/*
* U-mode with disabled S-mode SSE will trigger the illegal instruction
* exception.
*
* Note: S-mode is already handled in the disabled M-mode SSE case.
*/
exception = RISCV_EXCP_ILLEGAL_INST;
goto done;
}
done:
riscv_raise_exception(env, exception, GETPC());
}
#endif /* !CONFIG_USER_ONLY */


@ -31,6 +31,10 @@
#include "qapi/qobject-input-visitor.h"
#include "qapi/visitor.h"
#include "qom/qom-qobject.h"
#include "qemu/ctype.h"
#include "qemu/qemu-print.h"
#include "monitor/hmp.h"
#include "monitor/hmp-target.h"
#include "system/kvm.h"
#include "system/tcg.h"
#include "cpu-qom.h"
@ -240,3 +244,147 @@ CpuModelExpansionInfo *qmp_query_cpu_model_expansion(CpuModelExpansionType type,
return expansion_info;
}
/*
* We have way too many potential CSRs and regs being added
* regularly to register them in a static array.
*
* Declare an empty array instead, making get_monitor_def() use
* the target_get_monitor_def() API directly.
*/
const MonitorDef monitor_defs[] = { { } };
const MonitorDef *target_monitor_defs(void)
{
return monitor_defs;
}
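The empty-array trick above relies on `get_monitor_def()` walking a zero-terminated table and falling through to the target hook when nothing matches. A minimal sketch of that lookup pattern, with illustrative names rather than QEMU's own:

```c
#include <string.h>
#include <stdint.h>

/*
 * Sketch of a MonitorDef-style lookup: a table terminated by a zeroed
 * entry, with a fallback callback when the static table has no match.
 * Names and values here are hypothetical, not QEMU's.
 */
typedef struct {
    const char *name;
    uint64_t value;
} DemoDef;

/* Empty table: only the zeroed terminator, so every lookup falls through. */
static const DemoDef demo_defs[] = { { 0 } };

static int demo_fallback(const char *name, uint64_t *val)
{
    if (strcmp(name, "pc") == 0) {
        *val = 0x80000000u;   /* made-up value for the demo */
        return 0;
    }
    return -1;
}

static int demo_get_def(const char *name, uint64_t *val)
{
    for (const DemoDef *d = demo_defs; d->name != NULL; d++) {
        if (strcmp(d->name, name) == 0) {
            *val = d->value;
            return 0;
        }
    }
    return demo_fallback(name, val);
}
```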
static bool reg_is_ulong_integer(CPURISCVState *env, const char *name,
target_ulong *val, bool is_gprh)
{
const char * const *reg_names;
target_ulong *vals;
if (is_gprh) {
reg_names = riscv_int_regnamesh;
vals = env->gprh;
} else {
reg_names = riscv_int_regnames;
vals = env->gpr;
}
for (int i = 0; i < 32; i++) {
g_autofree char *reg_name = g_strdup(reg_names[i]);
char *reg1 = strtok(reg_name, "/");
char *reg2 = strtok(NULL, "/");
if (strcasecmp(reg1, name) == 0 ||
(reg2 && strcasecmp(reg2, name) == 0)) {
*val = vals[i];
return true;
}
}
return false;
}
static bool reg_is_u64_fpu(CPURISCVState *env, const char *name, uint64_t *val)
{
if (qemu_tolower(name[0]) != 'f') {
return false;
}
for (int i = 0; i < 32; i++) {
g_autofree char *reg_name = g_strdup(riscv_fpr_regnames[i]);
char *reg1 = strtok(reg_name, "/");
char *reg2 = strtok(NULL, "/");
if (strcasecmp(reg1, name) == 0 ||
(reg2 && strcasecmp(reg2, name) == 0)) {
*val = env->fpr[i];
return true;
}
}
return false;
}
static bool reg_is_vreg(const char *name)
{
if (qemu_tolower(name[0]) != 'v' || strlen(name) > 3) {
return false;
}
for (int i = 0; i < 32; i++) {
if (strcasecmp(name, riscv_rvv_regnames[i]) == 0) {
return true;
}
}
return false;
}
int target_get_monitor_def(CPUState *cs, const char *name, uint64_t *pval)
{
CPURISCVState *env = &RISCV_CPU(cs)->env;
target_ulong val = 0;
uint64_t val64 = 0;
int i;
if (reg_is_ulong_integer(env, name, &val, false) ||
reg_is_ulong_integer(env, name, &val, true)) {
*pval = val;
return 0;
}
if (reg_is_u64_fpu(env, name, &val64)) {
*pval = val64;
return 0;
}
if (reg_is_vreg(name)) {
if (!riscv_cpu_cfg(env)->ext_zve32x) {
return -EINVAL;
}
qemu_printf("Unable to print the value of vector "
"vreg '%s' from this API\n", name);
/*
* We're returning 0 because returning -EINVAL triggers
* an 'unknown register' message in exp_unary() later,
* which feels awkward after our own error message.
*/
*pval = 0;
return 0;
}
for (i = 0; i < ARRAY_SIZE(csr_ops); i++) {
RISCVException res;
int csrno = i;
/*
* Early skip when possible since we're going
* through a lot of NULL entries.
*/
if (csr_ops[csrno].predicate == NULL) {
continue;
}
if (strcasecmp(csr_ops[csrno].name, name) != 0) {
continue;
}
res = riscv_csrrw_debug(env, csrno, &val, 0, 0);
/*
* Rely on the smode, hmode, etc, predicates within csr.c
* to do the filtering of the registers that are present.
*/
if (res == RISCV_EXCP_NONE) {
*pval = val;
return 0;
}
}
return -EINVAL;
}

@ -417,12 +417,21 @@ static void riscv_cpu_validate_misa_priv(CPURISCVState *env, Error **errp)
static void riscv_cpu_validate_v(CPURISCVState *env, RISCVCPUConfig *cfg,
Error **errp)
{
uint32_t min_vlen;
uint32_t vlen = cfg->vlenb << 3;
if (vlen > RV_VLEN_MAX || vlen < 128) {
if (riscv_has_ext(env, RVV)) {
min_vlen = 128;
} else if (cfg->ext_zve64x) {
min_vlen = 64;
} else if (cfg->ext_zve32x) {
min_vlen = 32;
}
if (vlen > RV_VLEN_MAX || vlen < min_vlen) {
error_setg(errp,
"Vector extension implementation only supports VLEN "
"in the range [128, %d]", RV_VLEN_MAX);
"in the range [%d, %d]", min_vlen, RV_VLEN_MAX);
return;
}
@ -432,6 +441,12 @@ static void riscv_cpu_validate_v(CPURISCVState *env, RISCVCPUConfig *cfg,
"in the range [8, 64]");
return;
}
if (vlen < cfg->elen) {
error_setg(errp, "Vector extension implementation requires VLEN "
"to be greater than or equal to ELEN");
return;
}
}
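The relaxed minimum-VLEN rule above can be summarized as a small predicate: full V requires VLEN >= 128, Zve64x >= 64, Zve32x >= 32, and VLEN may never exceed the build-time maximum. A standalone sketch, where `RV_VLEN_MAX_DEMO` is a stand-in for QEMU's `RV_VLEN_MAX`:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for QEMU's RV_VLEN_MAX build-time limit. */
#define RV_VLEN_MAX_DEMO 1024

/*
 * Sketch of the minimum-VLEN rule: the floor depends on which vector
 * extension is enabled (V > Zve64x > Zve32x).
 */
static bool vlen_is_valid(uint32_t vlenb, bool has_rvv,
                          bool has_zve64x, bool has_zve32x)
{
    uint32_t vlen = vlenb << 3;   /* vlenb is in bytes; VLEN is in bits */
    uint32_t min_vlen;

    if (has_rvv) {
        min_vlen = 128;
    } else if (has_zve64x) {
        min_vlen = 64;
    } else if (has_zve32x) {
        min_vlen = 32;
    } else {
        return false;             /* no vector extension enabled */
    }
    return vlen >= min_vlen && vlen <= RV_VLEN_MAX_DEMO;
}
```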
static void riscv_cpu_disable_priv_spec_isa_exts(RISCVCPU *cpu)
@ -661,7 +676,7 @@ void riscv_cpu_validate_set_extensions(RISCVCPU *cpu, Error **errp)
return;
}
if (riscv_has_ext(env, RVV)) {
if (cpu->cfg.ext_zve32x) {
riscv_cpu_validate_v(env, &cpu->cfg, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);

@ -24,6 +24,7 @@
#include "exec/helper-gen.h"
#include "exec/target_page.h"
#include "exec/translator.h"
#include "accel/tcg/cpu-ldst.h"
#include "exec/translation-block.h"
#include "exec/log.h"
#include "semihosting/semihost.h"
@ -1166,7 +1167,7 @@ static uint32_t opcode_at(DisasContextBase *dcbase, target_ulong pc)
CPUState *cpu = ctx->cs;
CPURISCVState *env = cpu_env(cpu);
return translator_ldl(env, &ctx->base, pc);
return cpu_ldl_code(env, pc);
}
#define SS_MMU_INDEX(ctx) (ctx->mem_idx | MMU_IDX_SS_WRITE)

@ -5198,11 +5198,11 @@ GEN_VEXT_VSLIE1UP(16, H2)
GEN_VEXT_VSLIE1UP(32, H4)
GEN_VEXT_VSLIE1UP(64, H8)
#define GEN_VEXT_VSLIDE1UP_VX(NAME, BITWIDTH) \
void HELPER(NAME)(void *vd, void *v0, target_ulong s1, void *vs2, \
CPURISCVState *env, uint32_t desc) \
{ \
vslide1up_##BITWIDTH(vd, v0, s1, vs2, env, desc); \
#define GEN_VEXT_VSLIDE1UP_VX(NAME, BITWIDTH) \
void HELPER(NAME)(void *vd, void *v0, uint64_t s1, void *vs2, \
CPURISCVState *env, uint32_t desc) \
{ \
vslide1up_##BITWIDTH(vd, v0, s1, vs2, env, desc); \
}
/* vslide1up.vx vd, vs2, rs1, vm # vd[0]=x[rs1], vd[i+1] = vs2[i] */
@ -5249,11 +5249,11 @@ GEN_VEXT_VSLIDE1DOWN(16, H2)
GEN_VEXT_VSLIDE1DOWN(32, H4)
GEN_VEXT_VSLIDE1DOWN(64, H8)
#define GEN_VEXT_VSLIDE1DOWN_VX(NAME, BITWIDTH) \
void HELPER(NAME)(void *vd, void *v0, target_ulong s1, void *vs2, \
CPURISCVState *env, uint32_t desc) \
{ \
vslide1down_##BITWIDTH(vd, v0, s1, vs2, env, desc); \
#define GEN_VEXT_VSLIDE1DOWN_VX(NAME, BITWIDTH) \
void HELPER(NAME)(void *vd, void *v0, uint64_t s1, void *vs2, \
CPURISCVState *env, uint32_t desc) \
{ \
vslide1down_##BITWIDTH(vd, v0, s1, vs2, env, desc); \
}
/* vslide1down.vx vd, vs2, rs1, vm # vd[i] = vs2[i+1], vd[vl-1]=x[rs1] */
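The type change above (from `target_ulong` to `uint64_t` for the scalar argument) matters because on RV32 `target_ulong` is 32 bits, so a SEW=64 scalar was silently truncated at the helper-call boundary. A standalone illustration of the failure mode, using a 32-bit typedef to model `target_ulong` when XLEN=32:

```c
#include <stdint.h>

/* Models target_ulong when XLEN=32. */
typedef uint32_t demo_target_ulong;

/*
 * Routing a 64-bit scalar through a 32-bit parameter drops the upper
 * half at the call boundary -- the pre-fix behavior for SEW=64 on RV32.
 */
static uint64_t pass_through_narrow(demo_target_ulong s1)
{
    return s1;   /* upper 32 bits already lost */
}

/* Widening the parameter to uint64_t, as in the fix, preserves the value. */
static uint64_t pass_through_wide(uint64_t s1)
{
    return s1;
}
```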