cpu: parse +/- feature to avoid failure
cpu: add Kunpeng-920 cpu support
cpu: add Cortex-A72 processor kvm target support
add Phytium's CPU models: FT-2000+ and Tengyun-S2500.
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
The ARM Cortex-A72 is an ARMv8-A micro-architecture;
add a kvm target to the ARM Cortex-A72 processor definition.
Signed-off-by: Xu Yandong <xuyandong2@huawei.com>
Signed-off-by: Mingwang Li <limingwang@huawei.com>
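For illustration, a minimal sketch of the shape such an AArch64 CPU
definition takes in QEMU's target/arm/cpu64.c; the init function body
and feature set here are illustrative, not the exact downstream patch:

    static void aarch64_a72_initfn(Object *obj)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        cpu->dtb_compatible = "arm,cortex-a72";
        set_feature(&cpu->env, ARM_FEATURE_V8);
        set_feature(&cpu->env, ARM_FEATURE_AARCH64);
        set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
        cpu->midr = 0x410fd083; /* ARM, part 0xd08 (Cortex-A72), r0p3 */
    }

    static const ARMCPUInfo aarch64_cpus[] = {
        /* ... existing entries ... */
        { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
    };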
To avoid CPU feature parse failures, support for the +/- feature syntax is added.
Signed-off-by: Xu Yandong <xuyandong2@huawei.com>
Signed-off-by: Mingwang Li <limingwang@huawei.com>
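A rough sketch of the parsing pattern; the cpu_feature_set() helper is
hypothetical (QEMU's real code routes these through property setters):

    static void cpu_parse_features(const char *features)
    {
        char **tokens = g_strsplit(features, ",", -1);

        for (int i = 0; tokens[i]; i++) {
            const char *feat = tokens[i];
            if (feat[0] == '+') {
                cpu_feature_set(feat + 1, true);   /* hypothetical helper */
            } else if (feat[0] == '-') {
                cpu_feature_set(feat + 1, false);  /* hypothetical helper */
            } else {
                /* plain key=value property, handled as before */
            }
        }
        g_strfreev(tokens);
    }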
esp: ensure cmdfifo is not empty and current_dev is non-NULL
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: imxcc <xingchaochao@huawei.com>
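The guard this describes looks roughly like the following, based on the
fifo8 API hw/scsi/esp.c already uses (surrounding logic elided):

    if (fifo8_is_empty(&s->cmdfifo) || !s->current_dev) {
        return; /* no queued command or no selected target: nothing to do */
    }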
esp: always check current_req is not NULL before use in DMA callbacks
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: imxcc <xingchaochao@huawei.com>
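A minimal sketch of the defensive check, assuming the usual ESPState
layout; the exact bodies differ per DMA callback:

    static void esp_dma_done(ESPState *s)  /* representative callback */
    {
        if (!s->current_req) {
            return; /* request already torn down, e.g. by a bus reset */
        }
        /* ... existing DMA completion handling ... */
    }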
When mergeable buffers are enabled, we try to set num_buffers after
the virtqueue elem has been unmapped. This leads to several issues,
e.g. a use-after-free when the descriptor has an address that belongs
to a non-direct-access region. In that case we use a bounce buffer
that is allocated during address_space_map() and freed during
address_space_unmap().
Fix this by storing the elems temporarily in an array and delaying
the unmap until after num_buffers has been set.
This addresses CVE-2021-3748.
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: fbe78f4f55c6 ("virtio-net support")
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
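A simplified sketch of the pattern, condensed from hw/net/virtio-net.c's
receive path (error handling and header setup elided):

    VirtQueueElement *elems[VIRTQUEUE_MAX_SIZE];
    size_t lens[VIRTQUEUE_MAX_SIZE];
    unsigned i = 0;

    while (offset < size) {
        VirtQueueElement *elem = virtqueue_pop(q->rx_vq, sizeof(*elem));
        /* ... copy the next chunk of the packet into elem->in_sg ... */
        elems[i] = elem;
        lens[i] = total;
        i++;
    }

    /* Write num_buffers while the first element (and any bounce
     * buffer backing it) is still mapped ... */
    virtio_stw_p(vdev, &mhdr->num_buffers, i);

    /* ... and only unmap/push the elements afterwards. */
    for (unsigned j = 0; j < i; j++) {
        virtqueue_fill(q->rx_vq, elems[j], lens[j], j);
        g_free(elems[j]);
    }
    virtqueue_flush(q->rx_vq, i);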
The device uses the guest-supplied stream number unchecked, which can
lead to guest-triggered out-of-bounds access to the UASDevice->data3 and
UASDevice->status3 fields. Add the missing checks.
Fixes: CVE-2021-3713
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reported-by: Chen Zhe <chenzhe@huawei.com>
Reported-by: Tan Jingguo <tanjingguo@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210818120505.1258262-2-kraxel@redhat.com>
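The added checks amount to bounds-checking the stream id before it
indexes the per-stream arrays, along these lines (error handling
simplified):

    if (p->stream > UAS_MAX_STREAMS) {
        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid stream %d\n",
                      __func__, p->stream);
        p->status = USB_RET_STALL;
        return;
    }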
Both vfio_listener_region_add() and vfio_listener_region_del() perform
reference-counting operations on the RAM section->mr. If the 'iova'
and 'llend' of the RAM section do not pass the alignment check, the
section should not be mapped or unmapped, which means the reference
count should not be changed.
However, the address alignment check is missing from
vfio_listener_region_del(), so memory_region_unref() is called
unconditionally, causing unintended problems in some scenarios.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
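The fix is to mirror the check vfio_listener_region_add() already
performs before touching the reference count, roughly:

    /* In vfio_listener_region_del(), before unmapping/unref: */
    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
    llend = int128_make64(section->offset_within_address_space);
    llend = int128_add(llend, section->size);
    llend = int128_and(llend, int128_exts64(TARGET_PAGE_MASK));
    if (int128_ge(int128_make64(iova), llend)) {
        return; /* never mapped by region_add, so don't unref */
    }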
The 'iova' is passed to the host kernel to be mapped to the HPA, so it
is tied to the host page size, and TARGET_PAGE_ALIGN should therefore
be replaced by REAL_HOST_PAGE_ALIGN. With a large granule size (64K),
the code may return early when mapping an MMIO RAM section, and the
resulting inconsistency with vfio_dma_unmap_ram_section() may trigger
'assert(qrange)' in vfio_dma_unmap().
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
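The substitution described above, sketched on the iova/llend
computation (variable names as in hw/vfio/common.c):

    /* before: iova = TARGET_PAGE_ALIGN(section->offset_within_address_space); */
    iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
    llend = int128_make64(section->offset_within_address_space);
    llend = int128_add(llend, section->size);
    llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));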
data might point into the middle of a larger buffer; a separate
free_on_destroy pointer is passed into bufp_alloc() to handle that. It
is only used on the normal path, though, not when dropping packets
because the queue is full. Fix that.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/491
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20210722072756.647673-1-kraxel@redhat.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
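A sketch of the drop path inside bufp_alloc() after the fix
(simplified):

    if (dev->endpoint[EP2I(ep)].bufpq_dropping_packets) {
        if (dev->endpoint[EP2I(ep)].bufpq_size >
            dev->endpoint[EP2I(ep)].bufpq_target_size) {
            free(free_on_destroy); /* was free(data), which is invalid
                                    * when data points mid-buffer */
            return -1;
        }
        dev->endpoint[EP2I(ep)].bufpq_dropping_packets = 0;
    }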
vfio: Support host translation granule size
vfio/migrate: Move switch of dirty tracking into vfio_memory_listener
vfio: Fix unregister SaveVMHandler in vfio_migration_finalize
migration/ram: Reduce unnecessary rate limiting
migration/ram: Optimize ram_save_host_page()
qdev/monitors: Fix redundant error_setg of qdev_add_device
linux-headers: update against 5.10 and manual clear vfio dirty log series
vfio: Maintain DMA mapping range for the container
vfio/migration: Add support for manual clear vfio dirty log
hw/arm/smmuv3: Support 16K translation granule
hw/arm/smmuv3: Set the restoration priority of the vSMMUv3 explicitly
hw/vfio/common: trace vfio_connect_container operations
update-linux-headers: Import iommu.h
vfio.h and iommu.h header update against 5.10
memory: Add new fields in IOTLBEntry
hw/arm/smmuv3: Improve stage1 ASID invalidation
hw/arm/smmu-common: Allow domain invalidation for NH_ALL/NSNH_ALL
memory: Add IOMMU_ATTR_VFIO_NESTED IOMMU memory region attribute
memory: Add IOMMU_ATTR_MSI_TRANSLATE IOMMU memory region attribute
memory: Introduce IOMMU Memory Region inject_faults API
iommu: Introduce generic header
pci: introduce PCIPASIDOps to PCIDevice
vfio: Force nested if iommu requires it
vfio: Introduce hostwin_from_range helper
vfio: Introduce helpers to DMA map/unmap a RAM section
vfio: Set up nested stage mappings
vfio: Pass stage 1 MSI bindings to the host
vfio: Helper to get IRQ info including capabilities
vfio/pci: Register handler for iommu fault
vfio/pci: Set up the DMA FAULT region
vfio/pci: Implement the DMA fault handler
hw/arm/smmuv3: Advertise MSI_TRANSLATE attribute
hw/arm/smmuv3: Store the PASID table GPA in the translation config
hw/arm/smmuv3: Fill the IOTLBEntry arch_id on NH_VA invalidation
hw/arm/smmuv3: Fill the IOTLBEntry leaf field on NH_VA invalidation
hw/arm/smmuv3: Pass stage 1 configurations to the host
hw/arm/smmuv3: Implement fault injection
hw/arm/smmuv3: Allow MAP notifiers
pci: Add return_page_response pci ops
vfio/pci: Implement return_page_response page response callback
vfio/common: Avoid unmap ram section at vfio_listener_region_del() in nested mode
vfio: Introduce helpers to mark dirty pages of a RAM section
vfio: Add vfio_prereg_listener_log_sync in nested stage
vfio: Add vfio_prereg_listener_log_clear to re-enable mark dirty pages
vfio: Add vfio_prereg_listener_global_log_start/stop in nested stage
hw/arm/smmuv3: Post-load stage 1 configurations to the host
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>