657 Commits

Author SHA1 Message Date
Chen Qun
6e9963d521 qmp: add command to query used memslots of vhost-net and vhost-user
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
3b29064bd8 vhost-user: quit infinite loop when used memslots exceed the backend limit
When the number of used memslots exceeds the backend limit, the
vhost-user netcard fails to attach and QEMU spins in an infinite
loop; quit the loop instead.
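
As a sketch of the idea (not the exact patch), assuming the limit is
queried through the backend's vhost_backend_memslots_limit() hook:

    /* Sketch only: abort the attach once the backend limit is hit,
     * rather than spinning in the retry loop. */
    if (used_memslots > dev->vhost_ops->vhost_backend_memslots_limit(dev)) {
        error_report("vhost-user: %u used memslots exceed the backend limit",
                     used_memslots);
        return -1;
    }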

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
ca34c7a2fe vhost-user: add separate memslot counter for vhost-user
used_memslots is currently equal to dev->mem->nregions, which is
correct for vhost-kernel but not for vhost-user, since vhost-user
only consumes the memory regions that are backed by a file
descriptor, and not all regions are.
This matters in practice: if used_memslots is 8 but only 5 memory
slots are actually used by vhost-user, hot-plugging new RAM fails
because vhost_has_free_slot() just returns false, even though the
hot plug would in fact be safe.
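
A hedged sketch of the intended counting; vhost_user_get_mr_fd() is a
hypothetical helper standing in for however the region's fd is looked
up:

    static unsigned int vhost_user_used_memslots(struct vhost_dev *dev)
    {
        unsigned int i, fd_num = 0;

        /* Only regions backed by a file descriptor consume a
         * vhost-user memslot. */
        for (i = 0; i < dev->mem->nregions; i++) {
            if (vhost_user_get_mr_fd(dev, &dev->mem->regions[i]) >= 0) {
                fd_num++;
            }
        }
        return fd_num;
    }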

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
97aa125a10 vhost-user: add vhost_set_mem_table when vm load_setup at destination
When migrating a huge VM, more than 90 packets are lost.

During load_setup on the destination VM, pass the VM memory
structure to OVS so that the netcard can be enabled as soon as
migration finishes its state transition.
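
A sketch of the shape this takes with the usual SaveVMHandlers
plumbing (handler and struct names are illustrative):

    /* Runs on the destination during load_setup: push the memory
     * table to the vhost-user backend (OVS) before the migration
     * finishes its state transition. */
    static int vhost_user_load_setup(QEMUFile *f, void *opaque)
    {
        struct vhost_dev *hdev = opaque;

        return hdev->vhost_ops->vhost_set_mem_table(hdev, hdev->mem);
    }

    static SaveVMHandlers savevm_vhost_user_handlers = {
        .load_setup = vhost_user_load_setup,
    };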

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
8bcfea21b4 vhost-user: Set the acked_features to the VM's features
Fix the problem where, after a VM restart, an OVS restart leaves the
network unreachable. The solution is to set acked_features to the
VM's features, just as is done when the guest virtio-net module
loads.
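
Conceptually, the fix replays the guest's negotiated features to the
backend on reconnect; a hedged one-liner using the existing
vhost_net_ack_features() helper (variable names are contextual):

    /* After the backend (OVS) comes back, re-ack the features the
     * guest had already negotiated, as virtio-net module load does. */
    vhost_net_ack_features(get_vhost_net(nc->peer), vdev->guest_features);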

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
3158fa53b6 vhost-user: Add support for reconnecting the vhost-user socket
Add support for reconnecting the vhost-user socket; the reconnect
interval is set to 3 seconds.
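
For reference, the socket chardev backing a vhost-user netdev already
exposes a reconnect knob upstream; an illustrative invocation with a
3-second interval (the socket path is made up):

    -chardev socket,id=char0,path=/tmp/vhost-user.sock,reconnect=3 \
    -netdev vhost-user,id=net0,chardev=char0 \
    -device virtio-net-pci,netdev=net0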

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
866997d963 qemu-img: block: don't blk_make_zero if discard_zeroes is false
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
25f5db2839 vhost-user: add unregister_savevm when vhost-user cleanup
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
2b5d913b53 virtio-net: update the default and max of rx/tx_queue_size
Set the maximum tx_queue_size to 4096 even when the backend is not
vhost-user.

Set the default rx/tx_queue_size to 2048 when the backend is
vhost-user, and to 4096 otherwise.
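
Both knobs are ordinary virtio-net-pci properties, so the sizes can
also be pinned explicitly on the command line, e.g.:

    -device virtio-net-pci,netdev=net0,rx_queue_size=2048,tx_queue_size=2048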

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
533a0f0fd7 virtio-net: set the max of queue size to 4096
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
c0e69b781c virtio: bugfix: check the value of caches before accessing it
Vring caches may be NULL in check_vring_avail_num() if
virtio_reset() is called at the same time, such as when
the virtual machine starts.
So check it before accessing it in vring_avail_idx().
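
A hedged sketch of the guard, modeled on the upstream
vring_avail_idx():

    static inline uint16_t vring_avail_idx(VirtQueue *vq)
    {
        VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);

        /* caches may be NULL if virtio_reset() runs concurrently;
         * fall back to the last value seen instead of crashing. */
        if (!caches) {
            return vq->shadow_avail_idx;
        }
        vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev,
                                                       &caches->avail,
                                                       offsetof(VRingAvail, idx));
        return vq->shadow_avail_idx;
    }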

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
ca01e036b8 virtio: print the guest virtio_net features that host does not support
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
16b7f98db0 virtio: bugfix: add rcu_read_lock when vring_avail_idx is called
vring_avail_idx() should be called within rcu_read_lock(); otherwise
vring_get_region_caches() may return NULL caches and trigger an
assert().
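
A minimal sketch of the calling pattern with QEMU's RCU guard:

    /* Hold the RCU read lock across the vring access so the region
     * caches cannot be reclaimed underneath vring_avail_idx(). */
    RCU_READ_LOCK_GUARD();
    if (vring_avail_idx(vq) == vq->last_avail_idx) {
        return true;    /* ring is empty */
    }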

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
c98857b3fa virtio: check descriptor numbers
Check that the vring num is sane in virtio_save(), and log when the
VM pushes a wrong vring num down by writing to the I/O port.
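
A hedged sketch of what the check in virtio_save() could look like:

    for (i = 0; i < VIRTIO_QUEUE_MAX; i++) {
        /* A broken or malicious guest can push a bogus vring num down
         * by writing the I/O port; log it instead of saving garbage. */
        if (vdev->vq[i].vring.num > VIRTQUEUE_MAX_SIZE) {
            qemu_log_mask(LOG_GUEST_ERROR,
                          "virtio: bogus vring num %u for queue %d\n",
                          vdev->vq[i].vring.num, i);
        }
    }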

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
cc847c6023 virtio-net: fix max vring buf size when setting ring num
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
4fb0b966ec virtio-net: bugfix: do not delete netdev before virtio net
For a vhost-user netcard, it is currently allowed to delete its
network backend while the virtio-net device still exists. However,
when the device status changes in the guest, QEMU checks whether the
network backend exists and crashes if it does not.
So do not allow the network backend to be deleted directly without
first deleting the virtio-net device.

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
c86cebdfb5 virtio: bugfix: clean up callback when del virtqueue
We will access a NULL pointer as follows:
1. Start a VM with multiqueue vhost-net.
2. Write VIRTIO_PCI_GUEST_FEATURES in the PCI configuration to
   trigger multiqueue disable in the VM, which deletes the virtqueue.
   In this step the tx_bh is deleted, but the callback
   virtio_net_handle_tx_bh still exists.
3. Finally, write VIRTIO_PCI_QUEUE_NOTIFY in the PCI configuration to
   notify the deleted virtqueue. virtio_net_handle_tx_bh is then
   called and QEMU crashes.
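
A hedged sketch of the cleanup, following the shape of QEMU's
virtio_delete_queue():

    void virtio_delete_queue(VirtQueue *vq)
    {
        vq->vring.num = 0;
        vq->vring.num_default = 0;
        /* Drop the callback too, so a stale QUEUE_NOTIFY on the dead
         * queue can no longer reach virtio_net_handle_tx_bh(). */
        vq->handle_output = NULL;
        g_free(vq->used_elems);
        vq->used_elems = NULL;
    }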

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
c7155d3353 virtio: net-tap: bugfix: del net client if net_init_tap_one failed
In net_init_tap_one(), if the net-tap initializes successfully but
other actions fail during vhost-net hot-plugging, the net-tap remains
in the net clients, causing the next hot plug to fail as well.
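
A hedged sketch of the error path in net_init_tap_one() (the failure
label is illustrative):

    s->vhost_net = vhost_net_init(&options);
    if (!s->vhost_net) {
        /* Roll back the tap net client we just created, otherwise it
         * lingers in the client list and breaks the next hot plug. */
        qemu_del_net_client(&s->nc);
        goto failed;
    }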

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
a55dd0838d virtio-scsi: bugfix: fix qemu crash for hotplug scsi disk with dataplane
The VM triggers a disk sweep operation after plugging a controller
whose I/O type is iothread. If a SCSI disk is attached immediately,
the sg_inquiry request in the VM triggers the assert in
virtio_scsi_ctx_check(), which is called by
virtio_scsi_handle_cmd_req_prepare().

Add a check in virtio_scsi_handle_cmd_req_prepare() and return an
I/O error directly if the device has not been initialized.
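
A hedged sketch of the guard; virtio_scsi_device_ready() is a
hypothetical non-asserting variant of virtio_scsi_ctx_check():

    if (!virtio_scsi_device_ready(s, d)) {
        /* Dataplane not initialized yet: fail the request instead of
         * tripping the assert in virtio_scsi_ctx_check(). */
        req->resp.cmd.response = VIRTIO_SCSI_S_FAILURE;
        virtio_scsi_complete_cmd_req(req);
        return false;
    }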

Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
49f96694c5 spec: Update patch and changelog with !248 [6.2.0] memory leak and some I/O-related bugfix backports !248
bugfix: fix some illegal memory access and memory leak
bugfix: fix possible memory leak
bugfix: fix eventfds may double free when vm_id reused in ivshmem
block/mirror: fix file system going read-only after block-mirror
bugfix: fix mmio information leak and ehci vm escape 0-day vulnerability
target-i386: Fix the RES memory increase caused by coroutine creation

Signed-off-by: Chen Qun<kuhn.chenqun@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
dcd9c3d752 target-i386: Fix the RES memory increase caused by coroutine creation
For better performance, change POOL_BATCH_SIZE from 64 to 128.
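
As a one-line sketch (upstream declares this as an enum constant in
util/qemu-coroutine.c):

    -    POOL_BATCH_SIZE = 64,
    +    POOL_BATCH_SIZE = 128,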

Signed-off-by: caojinhua <caojinhua1@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
e7a1c5d229 bugfix: fix mmio information leak and ehci vm escape 0-day vulnerability
Signed-off-by: Yutao Ai <aiyutao@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
6bfd0edc7b block/mirror: fix file system going read-only after block-mirror
Configure a VM disk with PRDM and keep the disk writing data
continuously during block-mirror: the file system goes read-only
after block-mirror. Fix it.

Signed-off-by: caojinhua <caojinhua1@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
4dc229df1a bugfix: fix eventfds may double free when vm_id reused in ivshmem
As the ivshmem Server-Client Protocol describes, when a client
disconnects from the server, the server sends disconnect
notifications to the other clients, and those clients free the
eventfds of the disconnected client according to its client ID. If
the client ID is reused, the eventfds may be double freed.

Solve this by setting the eventfds pointer to NULL after freeing it
and allocating memory for it again when it is used.
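
A minimal sketch of the pattern (the helper name is illustrative):

    static void free_peer_eventfds(IVShmemState *s, int posn)
    {
        g_free(s->peers[posn].eventfds);
        /* Clear the pointer: if the server reuses this vm_id, a later
         * disconnect notification must not free it a second time. */
        s->peers[posn].eventfds = NULL;
    }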

Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
9c3e999acd bugfix: fix possible memory leak
Signed-off-by: caojinhua <caojinhua1@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
39d3f0f0a3 bugfix: fix some illegal memory access and memory leak
Signed-off-by: yuxiating <yuxiating@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
2022-03-19 14:42:32 +08:00
imxcc
6e7dfc22ee Update with openeuler !235
Signed-off-by: imxcc <xingchaochao@huawei.com>
2022-03-19 14:42:32 +08:00
Chen Qun
e6f6fe87dd spec: Update release version with !245 !247 !243
Increase the release version by one.

Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit bfee3ac59d622c963cbbcc9d937baa09de2c3691)
2022-03-19 14:42:32 +08:00
Chen Qun
5475d034e3 spec: Update patch and changelog with !243 Support VFIO migration manual clear interface & vSMMUv3/pSMMUv3 2 stage VFIO integration & Support migration in SMMUv3 nested mode !243
linux-headers: update against 5.10 and manual clear vfio dirty log series
vfio: Maintain DMA mapping range for the container
vfio/migration: Add support for manual clear vfio dirty log
update-linux-headers: Import iommu.h
vfio.h and iommu.h header update against 5.10
memory: Add new fields in IOTLBEntry
hw/arm/smmuv3: Improve stage1 ASID invalidation
hw/arm/smmu-common: Allow domain invalidation for NH_ALL/NSNH_ALL
memory: Add IOMMU_ATTR_VFIO_NESTED IOMMU memory region attribute
memory: Add IOMMU_ATTR_MSI_TRANSLATE IOMMU memory region attribute
memory: Introduce IOMMU Memory Region inject_faults API
iommu: Introduce generic header
pci: introduce PCIPASIDOps to PCIDevice
vfio: Force nested if iommu requires it
vfio: Introduce hostwin_from_range helper
vfio: Introduce helpers to DMA map/unmap a RAM section
vfio: Set up nested stage mappings
vfio: Pass stage 1 MSI bindings to the host
vfio: Helper to get IRQ info including capabilities
vfio/pci: Register handler for iommu fault
vfio/pci: Set up the DMA FAULT region
vfio/pci: Implement the DMA fault handler
hw/arm/smmuv3: Advertise MSI_TRANSLATE attribute
hw/arm/smmuv3: Store the PASID table GPA in the translation config
hw/arm/smmuv3: Fill the IOTLBEntry arch_id on NH_VA invalidation
hw/arm/smmuv3: Fill the IOTLBEntry leaf field on NH_VA invalidation
hw/arm/smmuv3: Pass stage 1 configurations to the host
hw/arm/smmuv3: Implement fault injection
hw/arm/smmuv3: Allow MAP notifiers
pci: Add return_page_response pci ops
vfio/pci: Implement return_page_response page response callback
vfio/common: Avoid unmap ram section at vfio_listener_region_del() in nested mode
vfio: Introduce helpers to mark dirty pages of a RAM section
vfio: Add vfio_prereg_listener_log_sync in nested stage
vfio: Add vfio_prereg_listener_log_clear to re-enable mark dirty pages
vfio: Add vfio_prereg_listener_global_log_start/stop in nested stage
hw/arm/smmuv3: Post-load stage 1 configurations to the host
vfio/common: Fix incorrect address alignment in vfio_dma_map_ram_section
vfio/common: Add address alignment check in vfio_listener_region_del

Signed-off-by: Chen Qun<kuhn.chenqun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 45d983f4507f9f6089de83fcd4d3a2136876b566)
2022-03-19 14:42:32 +08:00
Chen Qun
936335aa5e vfio/common: Add address alignment check in vfio_listener_region_del
Both vfio_listener_region_add and vfio_listener_region_del have
reference counting operations on ram section->mr. If the 'iova'
and 'llend' of the ram section do not pass the alignment
check, the ram section should not be mapped or unmapped. It means
that the reference counting should not be changed.

However, the address alignment check is missing in
vfio_listener_region_del. This makes memory_region_unref be called
unconditionally, causing unintended problems in some scenarios.
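
A hedged sketch of the missing check, mirroring the alignment logic
in vfio_listener_region_add() (the 6.2-era qemu_real_host_page_mask
variable is assumed):

    hwaddr iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
    Int128 llend = int128_make64(section->offset_within_address_space);

    llend = int128_add(llend, section->size);
    llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));
    if (int128_ge(int128_make64(iova), llend)) {
        return;    /* never mapped, so do not unref section->mr */
    }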

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit c4568a05c1d9f9017c89abc9df4270ce128a9cc3)
2022-03-19 14:42:32 +08:00
Chen Qun
e7519bc2f9 vfio/common: Fix incorrect address alignment in vfio_dma_map_ram_section
The 'iova' will be passed to the host kernel for mapping to the HPA,
so it is tied to the host page size and TARGET_PAGE_ALIGN should be
replaced by REAL_HOST_PAGE_ALIGN. With a large granularity (64K), the
function may return early when mapping an MMIO RAM section, and
because of the inconsistency with vfio_dma_unmap_ram_section this can
trigger 'assert(qrange)' in vfio_dma_unmap.
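
The essence of the change, as a one-line diff:

    -    iova = TARGET_PAGE_ALIGN(section->offset_within_address_space);
    +    iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);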

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit af442e7ad177338fae5a5399de604cf8bef777ee)
2022-03-19 14:42:32 +08:00
Chen Qun
e923b443e1 hw/arm/smmuv3: Post-load stage 1 configurations to the host
In nested mode, we call the set_pasid_table() callback on each
STE update to pass the guest stage 1 configuration to the host
and apply it at physical level.

In the case of live migration, we need to call set_pasid_table()
manually to load the guest stage 1 configurations into the host. If
this operation fails, the migration fails.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit db24e228d7511ab6cb54795db3237bafb275d14e)
2022-03-19 14:42:32 +08:00
Chen Qun
24d054989e vfio: Add vfio_prereg_listener_global_log_start/stop in nested stage
In nested mode, we set up stage 2 and stage 1 separately. In my
opinion, vfio_memory_prereg_listener is used for stage 2 and
vfio_memory_listener is used for stage 1, so it feels odd to switch
dirty tracking through the global_log_start/stop interface of
vfio_memory_listener, although this does not cause any errors. Adding
a global_log_start/stop interface in vfio_memory_prereg_listener
separates stage 2 from stage 1.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 3608c966eaac58ea54a2f084ddb7d31f4309c8fe)
2022-03-19 14:42:32 +08:00
Chen Qun
2ed57b4921 vfio: Add vfio_prereg_listener_log_clear to re-enable mark dirty pages
When tracking dirty pages, we just need to pay attention to stage 2
mappings. Legacy vfio_listener_log_clear cannot be used in nested
stage. This patch adds vfio_prereg_listener_log_clear to re-enable
dirty pages in nested mode.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 9e95ed8404c1d32048d6288100c2a5fcb1ebba75)
2022-03-19 14:42:32 +08:00
Chen Qun
4b6ed12507 vfio: Add vfio_prereg_listener_log_sync in nested stage
In nested mode, we set up stage 2 (gpa->hpa) and stage 1
(giova->gpa) separately by vfio_prereg_listener_region_add()
and vfio_listener_region_add(). So when marking dirty pages
we just need to pay attention to stage 2 mappings.

Legacy vfio_listener_log_sync cannot be used in nested stage.
This patch adds vfio_prereg_listener_log_sync to mark dirty
pages in nested mode.
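
A hedged sketch of the wiring, following QEMU's MemoryListener layout:

    static const MemoryListener vfio_memory_prereg_listener = {
        .region_add = vfio_prereg_listener_region_add,
        .region_del = vfio_prereg_listener_region_del,
        /* Dirty pages are tracked on the stage 2 (gpa->hpa) mappings
         * only, so log_sync lives on the prereg listener. */
        .log_sync = vfio_prereg_listener_log_sync,
    };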

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 28f87e27c755c532c757ebb590b95817035504f8)
2022-03-19 14:42:32 +08:00
Chen Qun
4a31f0107a vfio: Introduce helpers to mark dirty pages of a RAM section
Extract part of the code from vfio_sync_dirty_bitmap into a new
helper that marks the dirty pages of a RAM section. This helper will
be called in the nested stage.

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 6c878a81777952114b8c559c51450b19ab9e13d8)
2022-03-19 14:42:32 +08:00
Chen Qun
74e7cc0ba0 vfio/common: Avoid unmap ram section at vfio_listener_region_del() in nested mode
The RAM section will be unmapped at vfio_prereg_listener_region_del()
in nested mode, so let's avoid unmapping the RAM section at
vfio_listener_region_del().

Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 8eba56191ca6910d5501b9d80301898369294aa7)
2022-03-19 14:42:32 +08:00
Chen Qun
847b67f808 vfio/pci: Implement return_page_response page response callback
This patch implements the page response path. The response is
written into the page response ring buffer and the header's head
index is then updated. This path is not used by this series; it is
introduced here as a POC for vSVA/ARM integration.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 043fcb1e27352b4a10af55cd967fa55190ef4b46)
2022-03-19 14:42:32 +08:00
Chen Qun
a649ed5b6d pci: Add return_page_response pci ops
Add a new PCI operation that allows returning page responses to
registered VFIO devices.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit fe6cd1988a114c63dd1bf6f0a2fcd5770edd6fc6)
2022-03-19 14:42:32 +08:00
Chen Qun
1008c54c66 hw/arm/smmuv3: Allow MAP notifiers
We now have all the bricks to support nested paging. This uses MAP
notifiers to map the MSIs, so let's allow MAP notifiers to be
registered.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 24eaca5a81bd79b9afd8bd2b07a7d493554447c8)
2022-03-19 14:42:32 +08:00
Chen Qun
dec3ee75e6 hw/arm/smmuv3: Implement fault injection
We convert the iommu_fault structs received from the kernel into the
data struct used by the emulation code and record the events in the
virtual event queue.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit df9b0e5fdf7fec20c79e840a36d05254082cc2ec)
2022-03-19 14:42:32 +08:00
Chen Qun
7c6a3358e5 hw/arm/smmuv3: Pass stage 1 configurations to the host
In case PASID PciOps are set for the device, we call the
set_pasid_table() callback on each STE update.

This allows passing the guest stage 1 configuration to the host and
applying it at the physical level.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 6b84c577acac4c464c47edffb669b1b48641cbcc)
2022-03-19 14:42:32 +08:00
Chen Qun
baaac583e5 hw/arm/smmuv3: Fill the IOTLBEntry leaf field on NH_VA invalidation
Let's propagate the leaf attribute throughout the invalidation path.
This hint is used to reduce the scope of the invalidations to the
last level of translation. Not enforcing it induces large performance
penalties in nested mode.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit c87109806f14b82c0669d054ba6ada0dafcf95c4)
2022-03-19 14:42:32 +08:00
Chen Qun
3a03f28c87 hw/arm/smmuv3: Fill the IOTLBEntry arch_id on NH_VA invalidation
When the guest invalidates one S1 entry, it passes the ASID. When
propagating this invalidation down to the host, the ASID information
must also be passed. So let's fill the arch_id field introduced for
that purpose and set the flags accordingly to indicate its presence.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit fe59e0178d303c21480d2a03399fc562ee010721)
2022-03-19 14:42:32 +08:00
Chen Qun
fc290dbd8a hw/arm/smmuv3: Store the PASID table GPA in the translation config
For VFIO integration we will need to pass the Context Descriptor (CD)
table GPA to the host. The CD table is also referred to as the PASID
table. Its GPA corresponds to the s1ctrptr field of the Stream Table
Entry. So let's decode and store it in the configuration structure.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 4462bbb2034b784b62cb430c61a535981f4de3ec)
2022-03-19 14:42:32 +08:00
Chen Qun
0142a89675 hw/arm/smmuv3: Advertise MSI_TRANSLATE attribute
The SMMUv3 has the peculiarity of translating MSI transactions.
Let's advertise the corresponding attribute.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 8560e69e378f45f717f049b0f3ad01ce62472708)
2022-03-19 14:42:32 +08:00
Chen Qun
b5cd34148c vfio/pci: Implement the DMA fault handler
Whenever the eventfd is triggered, we retrieve the DMA fault(s)
from the mmapped fault region and inject them in the iommu
memory region.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit b6ea1f38e8bd59777643b744aa0fa901395483c6)
2022-03-19 14:42:32 +08:00
Chen Qun
1e9d080fe8 vfio/pci: Set up the DMA FAULT region
Set up the fault region, which is composed of the actual fault queue
(mmappable) and a header used to handle it. The fault queue is
mmapped.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 447d9fccd23773f36d35ad684f620b0f0c24cce3)
2022-03-19 14:42:32 +08:00
Chen Qun
cd90a518b0 vfio/pci: Register handler for iommu fault
We use the new extended IRQ VFIO_IRQ_TYPE_NESTED type and
VFIO_IRQ_SUBTYPE_DMA_FAULT subtype to set/unset
a notifier for physical DMA faults. The associated eventfd is
triggered, in nested mode, whenever a fault is detected at IOMMU
physical level.

The actual handler will be implemented in subsequent patches.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit bd87c37dab62815cddd5ec81badc570a6119fc5a)
2022-03-19 14:42:32 +08:00
Chen Qun
9686e15026 vfio: Helper to get IRQ info including capabilities
As done for vfio regions, add helpers to retrieve irq info
including their optional capabilities.

Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 5ecad96e205c189caf01cc855749c5d8eb7205fa)
2022-03-19 14:42:32 +08:00