There is a small window between the two blk_is_available() checks in
scsi_disk_emulate_command(); if the remote cdrom is detached inside
this window, the later assertion fails and QEMU crashes.
So this patch replaces the assertions with returns to avoid the QEMU crash.
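A minimal sketch of the change, assuming the usual scsi-disk helpers
(the exact call site and sense code are illustrative, not the verbatim patch):

    /* before: aborts QEMU if the cdrom went away after the first check */
    assert(blk_is_available(s->qdev.conf.blk));

    /* after: fail the emulated command gracefully instead of crashing */
    if (!blk_is_available(s->qdev.conf.blk)) {
        scsi_check_condition(r, SENSE_CODE(NO_MEDIUM));
        return 0;
    }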
Signed-off-by: wangjian161 <wangjian161@huawei.com>
Currently, migration leaves the BDRV_O_INACTIVE flag set in the bs's
open_flags until a subsequent resume is issued. In that state, any I/O
from the VM or from block jobs will crash QEMU with an
'assert(!(bs->open_flags & BDRV_O_INACTIVE))' failure in the
bdrv_co_pwritev() function. We therefore disallow block jobs by faking a blocker.
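A rough sketch of what "faking a blocker" means here, using the generic
op-blocker API (the error message and the cleanup point are illustrative):

    Error *blocker = NULL;

    /* refuse new block jobs while the node is inactive for migration */
    error_setg(&blocker, "Node '%s' is inactive during migration",
               bdrv_get_device_or_node_name(bs));
    bdrv_op_block_all(bs, blocker);

    /* on resume, once BDRV_O_INACTIVE is cleared again */
    bdrv_op_unblock_all(bs, blocker);
    error_free(blocker);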
Signed-off-by: wangjian161 <wangjian161@huawei.com>
We use ioctl() to detect multipath devices. However, we only set the
flags field of struct dm_ioctl (the ioctl argument) and leave the other
fields uninitialized, which can make the ioctl() call fail. Hence, we
zero the other fields to avoid the failure.
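A minimal sketch of the intended initialization, assuming the standard
<linux/dm-ioctl.h> conventions (the control fd, the ioctl request and the
flag shown are illustrative):

    struct dm_ioctl dm;

    memset(&dm, 0, sizeof(dm));        /* zero every field, not just flags */
    dm.version[0] = DM_VERSION_MAJOR;
    dm.version[1] = DM_VERSION_MINOR;
    dm.version[2] = DM_VERSION_PATCHLEVEL;
    dm.data_size = sizeof(dm);
    dm.flags = DM_NOFLUSH_FLAG;        /* whatever flags the caller needs */

    if (ioctl(control_fd, DM_TABLE_STATUS, &dm) < 0) {
        /* detection failed */
    }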
Signed-off-by: wangjian161 <wangjian161@huawei.com>
In case of insufficient memory combined with a kill -9,
the NBD socket cannot be processed and stays stuck forever.
Signed-off-by: wangjian161 <wangjian161@huawei.com>
When the file system handles multiple threads writing concurrently to the
same file, performance degrades because of locking.
At present, the default AIO mode of qemu-nbd is 'threads'. With large blocks,
the I/O is split into small pieces across multiple queues, so it turns into
multiple threads writing the same file concurrently, and the file-system
locking greatly reduces performance.
Switching to 'native' mode avoids this problem.
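For example (the image path is illustrative; native AIO needs O_DIRECT,
hence cache=none), the export could be started with:

    qemu-nbd --aio=native --cache=none -f raw /path/to/disk.img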
Signed-off-by: wangjian161 <wangjian161@huawei.com>
While an NBD device is under heavy I/O load, running qemu-nbd can make
I/O submission fail and take the "out" path of nbd_trip().
If two or more requests take this path, the NBD client can be released
in nbd_request_put(), and client->closing is then read again after the
free, which is a use-after-free.
Save the value of client->closing before entering the "out" path of
nbd_trip() to solve the use-after-free problem.
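A minimal sketch of the idea (the local variable name is illustrative):

    bool closing;

    /* "out" path of nbd_trip(): read the flag while client is still alive */
    closing = client->closing;
    nbd_request_put(req);      /* may drop the last reference to the client */
    if (!closing) {
        /* subsequent logic tests the saved copy instead of re-reading
         * the possibly freed client->closing */
    }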
Signed-off-by: wangjian161 <wangjian161@huawei.com>
Lots of patches use qemu_log, which causes "make check V=1" to fail.
So disable qemu_log when running "make check V=1".
Signed-off-by: Yan Wang <wangyan122@huawei.com>
cpu: parse +/- feature to avoid failure
cpu: add Kunpeng-920 cpu support
cpu: add Cortex-A72 processor kvm target support
cpu: add Phytium's CPU models FT-2000+ and Tengyun-S2500
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
The ARM Cortex-A72 is an ARMv8-A micro-architecture;
add a KVM target to the ARM Cortex-A72 processor definition.
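A rough sketch of the shape of such a processor definition in
target/arm/cpu64.c, modeled on the existing Cortex-A57/A53 entries (the
ID-register values are omitted and the details are illustrative, not the
verbatim patch):

    static void aarch64_a72_initfn(Object *obj)
    {
        ARMCPU *cpu = ARM_CPU(obj);

        cpu->dtb_compatible = "arm,cortex-a72";
        set_feature(&cpu->env, ARM_FEATURE_V8);
        set_feature(&cpu->env, ARM_FEATURE_AARCH64);
        set_feature(&cpu->env, ARM_FEATURE_GENERIC_TIMER);
        cpu->midr = 0x410fd083;
        /* ... remaining ID registers and cache geometry ... */
    }

    static const ARMCPUInfo aarch64_cpus[] = {
        { .name = "cortex-a72", .initfn = aarch64_a72_initfn },
        /* ... */
    };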
Signed-off-by: Xu Yandong <xuyandong2@huawei.com>
Signed-off-by: Mingwang Li <limingwang@huawei.com>
To avoid CPU feature parse failures, support for the '+feature'/'-feature'
syntax is added.
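A minimal sketch of the kind of parsing this refers to, modeled on the x86
feature parser (list and variable names are illustrative):

    /* featurestr is one comma-separated token, e.g. "+sve" or "-pmu" */
    if (featurestr[0] == '+') {
        plus_features = g_list_append(plus_features,
                                      g_strdup(featurestr + 1));
    } else if (featurestr[0] == '-') {
        minus_features = g_list_append(minus_features,
                                       g_strdup(featurestr + 1));
    } else {
        /* existing name=value handling */
    }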
Signed-off-by: Xu Yandong <xuyandong2@huawei.com>
Signed-off-by: Mingwang Li <limingwang@huawei.com>
esp: ensure cmdfifo is not empty and current_dev is non-NULL
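A hedged sketch of the kind of guard this adds (its exact placement in
hw/scsi/esp.c is illustrative):

    if (fifo8_is_empty(&s->cmdfifo) || !s->current_dev) {
        return;   /* nothing to process; avoid empty-FIFO/NULL dereference */
    }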
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: imxcc <xingchaochao@huawei.com>
esp: always check current_req is not NULL before use in DMA callbacks
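A hedged sketch of the pattern (the callback shown is illustrative):

    /* at the top of a DMA callback such as esp_do_dma() */
    if (!s->current_req) {
        return;
    }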
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: imxcc <xingchaochao@huawei.com>
When mergeable buffers are enabled, we try to set num_buffers after
the virtqueue elem has been unmapped. This leads to several issues,
e.g. a use-after-free when the descriptor has an address which belongs
to a non-direct-access region. In this case we use a bounce buffer
that is allocated during address_space_map() and freed during
address_space_unmap().
Fix this by storing the elems temporarily in an array and delaying the
unmap until after we set num_buffers.
This addresses CVE-2021-3748.
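A condensed sketch of the approach, following the general shape of
virtio_net_receive_rcu() (variable names are illustrative and most of the
receive loop is omitted):

    VirtQueueElement *elems[VIRTQUEUE_MAX_SIZE];
    size_t lens[VIRTQUEUE_MAX_SIZE];
    unsigned int i = 0, j;

    /* receive loop: keep every elem mapped instead of pushing it at once */
    for (;;) {
        elem = virtqueue_pop(q->rx_vq, sizeof(VirtQueueElement));
        /* ... copy packet data into the elem's in_sg buffers ... */
        elems[i] = elem;
        lens[i] = total;
        i++;
        /* ... break once the whole packet has been placed ... */
    }

    /* num_buffers is written while the header buffer is still mapped */
    virtio_stw_p(vdev, &mhdr.num_buffers, i);

    /* only now hand the elements back, which unmaps the bounce buffers */
    for (j = 0; j < i; j++) {
        virtqueue_fill(q->rx_vq, elems[j], lens[j], j);
        g_free(elems[j]);
    }
    virtqueue_flush(q->rx_vq, i);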
Reported-by: Alexander Bulekov <alxndr@bu.edu>
Fixes: fbe78f4f55c6 ("virtio-net support")
Cc: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
The device uses the guest-supplied stream number unchecked, which can
lead to guest-triggered out-of-bounds access to the UASDevice->data3 and
UASDevice->status3 fields. Add the missing checks.
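A condensed sketch of the added check, assuming the existing
UAS_MAX_STREAMS bound in hw/usb/dev-uas.c (the error handling shown is
illustrative):

    if (p->stream > UAS_MAX_STREAMS) {
        qemu_log_mask(LOG_GUEST_ERROR, "uas: invalid stream %d\n", p->stream);
        p->status = USB_RET_STALL;
        return;
    }
    /* only after this bound check is p->stream used to index
     * uas->data3[] / uas->status3[] */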
Fixes: CVE-2021-3713
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reported-by: Chen Zhe <chenzhe@huawei.com>
Reported-by: Tan Jingguo <tanjingguo@huawei.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20210818120505.1258262-2-kraxel@redhat.com>
Both vfio_listener_region_add and vfio_listener_region_del perform
reference-counting operations on the RAM section->mr. If the 'iova'
and 'llend' of the RAM section do not pass the alignment check, the
section should not be mapped or unmapped, which means the reference
count should not be changed either.
However, the address alignment check is missing in
vfio_listener_region_del. This makes memory_region_unref() be called
unconditionally and causes unintended problems in some scenarios.
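A condensed sketch of the missing check, mirroring what
vfio_listener_region_add() already does (the page-mask expression differs
between QEMU versions, so treat it as illustrative):

    if (unlikely((section->offset_within_address_space &
                  ~qemu_real_host_page_mask) !=
                 (section->offset_within_region &
                  ~qemu_real_host_page_mask))) {
        error_report("%s received unaligned region", __func__);
        return;   /* never mapped, so do not unmap or drop a reference */
    }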
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>