Currently, if the guest has workloads, the IO thread acquires the aio_context
lock before doing io_submit. This leads to a segmentation fault when a block
commit is done after a snapshot, as shown below:
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f7c7d91f700 (LWP 99907)]
0x00005576d0f65aab in bdrv_mirror_top_pwritev at ../block/mirror.c:1437
1437 ../block/mirror.c: No such file or directory.
(gdb) p s->job
$17 = (MirrorBlockJob *) 0x0
(gdb) p s->stop
$18 = false
Call trace of IO thread:
0 0x00005576d0f65aab in bdrv_mirror_top_pwritev at ../block/mirror.c:1437
1 0x00005576d0f7f3ab in bdrv_driver_pwritev at ../block/io.c:1174
2 0x00005576d0f8139d in bdrv_aligned_pwritev at ../block/io.c:1988
3 0x00005576d0f81b65 in bdrv_co_pwritev_part at ../block/io.c:2156
4 0x00005576d0f8e6b7 in blk_do_pwritev_part at ../block/block-backend.c:1260
5 0x00005576d0f8e84d in blk_aio_write_entry at ../block/block-backend.c:1476
...
Switch to qemu main thread:
0 0x00007f903be704ed in __lll_lock_wait at /lib/../lib64/libpthread.so.0
1 0x00007f903be6bde6 in _L_lock_941 at /lib/../lib64/libpthread.so.0
2 0x00007f903be6bcdf in pthread_mutex_lock at /lib/../lib64/libpthread.so.0
3 0x0000564b21456889 in qemu_mutex_lock_impl at ../util/qemu-thread-posix.c:79
4 0x0000564b213af8a5 in block_job_add_bdrv at ../blockjob.c:224
5 0x0000564b213b00ad in block_job_create at ../blockjob.c:440
6 0x0000564b21357c0a in mirror_start_job at ../block/mirror.c:1622
7 0x0000564b2135a9af in commit_active_start at ../block/mirror.c:1867
8 0x0000564b2133d132 in qmp_block_commit at ../blockdev.c:2768
9 0x0000564b2141fef3 in qmp_marshal_block_commit at qapi/qapi-commands-block-core.c:346
10 0x0000564b214503c9 in do_qmp_dispatch_bh at ../qapi/qmp-dispatch.c:110
11 0x0000564b21451996 in aio_bh_poll at ../util/async.c:164
12 0x0000564b2146018e in aio_dispatch at ../util/aio-posix.c:381
13 0x0000564b2145187e in aio_ctx_dispatch at ../util/async.c:306
14 0x00007f9040239049 in g_main_context_dispatch at /lib/../lib64/libglib-2.0.so.0
15 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:232
16 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:255
17 0x0000564b21447368 in main_loop_wait at ../util/main-loop.c:531
18 0x0000564b212304e1 in qemu_main_loop at ../softmmu/runstate.c:721
19 0x0000564b20f7975e in main at ../softmmu/main.c:50
In the IO thread, when bdrv_mirror_top_pwritev() runs, s->job is NULL and the
stop field is false. This means the MirrorBDSOpaque "s" object has not been
initialized yet; it is initialized by block_job_create(), but that
initialization is stuck waiting for the aio_context lock.
In this situation the IO thread still reaches bdrv_mirror_top_pwritev(), which
means the mirror-top node is already inserted into the block graph, but its
bs->opaque->job has not been initialized.
The root cause is that the qemu main thread releases and re-acquires the
aio_context lock while it is supposed to hold it; the IO thread grabs the lock
in that release window and the crash occurs.
Actually, in this situation job->job.aio_context will not be equal to
qemu_get_aio_context(); it will be the same as bs->aio_context. Thus there is
no need to release the lock, because bdrv_root_attach_child() will not change
the context.
This patch fixes the issue.
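A minimal sketch of the approach, assuming the roughly QEMU 5.2-era shape of
block_job_add_bdrv() (the exact bdrv_root_attach_child() arguments are elided
and the variable name is an assumption, not the literal patch):
    /* Only drop the lock when the contexts actually differ; otherwise an
     * IO thread could run bdrv_mirror_top_pwritev() before bs->opaque->job
     * is initialized. */
    bool need_context_ops = job->job.aio_context != qemu_get_aio_context();

    if (need_context_ops) {
        aio_context_release(job->job.aio_context);
    }
    c = bdrv_root_attach_child(bs, name, &child_job, /* ... */ errp);
    if (need_context_ops) {
        aio_context_acquire(job->job.aio_context);
    }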
Fixes: 132ada80 ("block: Adjust AioContexts when attaching nodes")
Signed-off-by: Michael Qiu <qiudayu@huayun.com>
Message-Id: <20210203024059.52683-1-08005325@163.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
fix CVE-2021-20203 #I3A34O
While activating the device in vmxnet3_activate_device(), it does not
validate guest-supplied configuration values against predefined
minimum/maximum limits. This may lead to integer overflow or OOB access
issues. Add checks to avoid them.
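As an illustration of the kind of check intended (a hedged sketch rather than
the actual patch; the helper and the limit macros used here are assumptions):
    /* Reject out-of-range, guest-controlled ring sizes before using them
     * for allocation or indexing. */
    static bool vmxnet3_ring_size_valid(uint32_t size, uint32_t min,
                                        uint32_t max)
    {
        return size >= min && size <= max;
    }

    if (!vmxnet3_ring_size_valid(tx_ring_size, VMXNET3_RING_SIZE_MIN,
                                 VMXNET3_TX_RING_MAX_SIZE)) {
        qemu_log_mask(LOG_GUEST_ERROR, "%s: invalid tx ring size %u\n",
                      __func__, tx_ring_size);
        return; /* fail activation instead of trusting the guest value */
    }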
Fixes: CVE-2021-20203
Buglink: https://bugs.launchpad.net/qemu/+bug/1913873
Reported-by: Gaoning Pan <pgn@zju.edu.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Jiajie Li <lijiajie11@huawei.com>
If any error happens while creating a multifd send thread (e.g. the channel
breaks because the new domain is destroyed by the destination),
multifd_tls_handshake_thread may exit silently, leaving the main migration
thread hanging (ram_save_setup -> multifd_send_sync_main ->
qemu_sem_wait(&p->sem_sync)).
Fix that by adding error handling in multifd_tls_handshake_thread.
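A hedged sketch of the idea, not the literal patch (function and field names
follow the upstream multifd code of that era and should be treated as
assumptions): when the TLS handshake fails, mark the channel as quitting and
post the semaphore the migration thread is blocked on instead of returning
silently:
    static void multifd_tls_outgoing_handshake(QIOTask *task, gpointer opaque)
    {
        MultiFDSendParams *p = opaque;
        Error *err = NULL;

        if (qio_task_propagate_error(task, &err)) {
            /*
             * The send thread will never be created, so wake up the
             * migration thread waiting in multifd_send_sync_main()
             * instead of leaving it blocked on p->sem_sync forever.
             */
            p->quit = true;
            qemu_sem_post(&p->sem_sync);
            error_free(err);
            return;
        }
        /* otherwise continue with multifd_channel_connect() as before */
    }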
Signed-off-by: Hao Wang <wanghao232@huawei.com>
Function multifd_channel_connect() returns "true" to indicate failure, which
is rather confusing. Fix that.
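A hedged sketch of the clarified convention (the real signature and body may
differ): have the function return true on success and false on failure so that
callers can use the usual idiom:
    /* sketch: true means success, false means failure */
    static bool multifd_channel_connect(MultiFDSendParams *p, QIOChannel *ioc,
                                        Error *error)
    {
        if (error) {
            return false;
        }
        /* ... set up the TLS or plain channel ... */
        return true;
    }

    /* caller */
    if (!multifd_channel_connect(p, ioc, local_err)) {
        /* handle the failure */
    }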
Signed-off-by: Hao Wang <wanghao232@huawei.com>
"tmp.tls_hostname" and "tmp.tls_creds" allocated by migrate_params_test_apply()
is forgot to free at the end of qmp_migrate_set_parameters(). Fix that.
The leak stack:
Direct leak of 2 byte(s) in 2 object(s) allocated from:
#0 0xffffb597c20b in __interceptor_malloc (/usr/lib64/libasan.so.4+0xd320b)
#1 0xffffb52dcb1b in g_malloc (/usr/lib64/libglib-2.0.so.0+0x58b1b)
#2 0xffffb52f8143 in g_strdup (/usr/lib64/libglib-2.0.so.0+0x74143)
#3 0xaaaac52447fb in migrate_params_test_apply (/usr/src/debug/qemu-4.1.0/migration/migration.c:1377)
#4 0xaaaac52fdca7 in qmp_migrate_set_parameters (/usr/src/debug/qemu-4.1.0/qapi/qapi-commands-migration.c:192)
#5 0xaaaac551d543 in qmp_dispatch (/usr/src/debug/qemu-4.1.0/qapi/qmp-dispatch.c:165)
#6 0xaaaac52a0a8f in qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:125)
#7 0xaaaac52a1c7f in monitor_qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:214)
#8 0xaaaac55cb0cf in aio_bh_call (/usr/src/debug/qemu-4.1.0/util/async.c:117)
#9 0xaaaac55d4543 in aio_bh_poll (/usr/src/debug/qemu-4.1.0/util/aio-posix.c:459)
#10 0xaaaac55cae0f in aio_dispatch (/usr/src/debug/qemu-4.1.0/util/async.c:268)
#11 0xffffb52d6a7b in g_main_context_dispatch (/usr/lib64/libglib-2.0.so.0+0x52a7b)
#12 0xaaaac55d1e3b (/usr/bin/qemu-kvm-4.1.0+0x1622e3b)
#13 0xaaaac4e314bb (/usr/bin/qemu-kvm-4.1.0+0xe824bb)
#14 0xaaaac47f45ef (/usr/bin/qemu-kvm-4.1.0+0x8455ef)
#15 0xffffb4bfef3f in __libc_start_main (/usr/lib64/libc.so.6+0x23f3f)
#16 0xaaaac47ffacb (/usr/bin/qemu-kvm-4.1.0+0x850acb)
Direct leak of 2 byte(s) in 2 object(s) allocated from:
#0 0xffffb597c20b in __interceptor_malloc (/usr/lib64/libasan.so.4+0xd320b)
#1 0xffffb52dcb1b in g_malloc (/usr/lib64/libglib-2.0.so.0+0x58b1b)
#2 0xffffb52f8143 in g_strdup (/usr/lib64/libglib-2.0.so.0+0x74143)
#3 0xaaaac5244893 in migrate_params_test_apply (/usr/src/debug/qemu-4.1.0/migration/migration.c:1382)
#4 0xaaaac52fdca7 in qmp_migrate_set_parameters (/usr/src/debug/qemu-4.1.0/qapi/qapi-commands-migration.c:192)
#5 0xaaaac551d543 in qmp_dispatch (/usr/src/debug/qemu-4.1.0/qapi/qmp-dispatch.c)
#6 0xaaaac52a0a8f in qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:125)
#7 0xaaaac52a1c7f in monitor_qmp_dispatch (/usr/src/debug/qemu-4.1.0/monitor/qmp.c:214)
#8 0xaaaac55cb0cf in aio_bh_call (/usr/src/debug/qemu-4.1.0/util/async.c:117)
#9 0xaaaac55d4543 in aio_bh_poll (/usr/src/debug/qemu-4.1.0/util/aio-posix.c:459)
#10 0xaaaac55cae0f in aio_dispatch (/usr/src/debug/qemu-4.1.0/util/async.c:268)
#11 0xffffb52d6a7b in g_main_context_dispatch (/usr/lib64/libglib-2.0.so.0+0x52a7b)
#12 0xaaaac55d1e3b (/usr/bin/qemu-kvm-4.1.0+0x1622e3b)
#13 0xaaaac4e314bb (/usr/bin/qemu-kvm-4.1.0+0xe824bb)
#14 0xaaaac47f45ef (/usr/bin/qemu-kvm-4.1.0+0x8455ef)
#15 0xffffb4bfef3f in __libc_start_main (/usr/lib64/libc.so.6+0x23f3f)
#16 0xaaaac47ffacb (/usr/bin/qemu-kvm-4.1.0+0x850acb)
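One possible way to address this kind of leak, sketched here under the
assumption that the temporary strings stay g_strdup()'d (this is not
necessarily the exact patch):
    void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
    {
        MigrationParameters tmp;

        /* ... validate and apply the parameters as before ... */

        /* g_strdup()'d by migrate_params_test_apply(); free them so they
         * do not leak on every qmp_migrate_set_parameters() call. */
        g_free(tmp.tls_hostname);
        g_free(tmp.tls_creds);
    }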
Signed-off-by: Chuan Zheng <zhengchuan@huawei.com>
Reviewed-by: KeQian Zhu <zhukeqian1@huawei.com>
Reviewed-by: HaiLiang <zhang.zhanghailiang@huawei.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Retrying failed requests while draining would make the drain hang, so it is
better not to trigger the retry timer during a drain. Once the virtual devices
go back to work, they will retry the queued requests.
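A schematic, hedged sketch of the idea; every helper and field name below is
hypothetical and only illustrates "do not arm the retry timer while draining":
    static void blk_schedule_retry(BlockBackend *blk)
    {
        if (blk_in_drain(blk)) {        /* hypothetical helper */
            /*
             * Don't arm the timer while draining: the drain would wait for
             * the retried requests and hang. The requests stay queued and
             * are resubmitted once the device resumes I/O.
             */
            return;
        }
        timer_mod(blk->retry_timer,     /* hypothetical field */
                  qemu_clock_get_ms(QEMU_CLOCK_REALTIME) + blk->retry_interval);
    }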
Signed-off-by: Jiahui Cen <cenjiahui@huawei.com>
Signed-off-by: Ying Fang <fangying1@huawei.com>
ati: use vga_read_byte in ati_cursor_define
sd: sdhci: assert data_count is within fifo_buffer
msix: add valid.accepts methods to check address
Signed-off-by: Alex Chen <alex.chen@huawei.com>