qemu/vhost-user-add-vhost_set_mem_table-when-vm-load_setu.patch
Jiabo Feng c300b8e80b QEMU update to version 8.2.0-5
- vfio/migration: Add support for manual clear vfio dirty log
- vfio: Maintain DMA mapping range for the container
- linux-headers: update against 5.10 and manual clear vfio dirty log series
- arm/acpi: Fix bios_tables_test failure when qemu-system-aarch64 is built on an x86_64 host: the __aarch64__ macro made build_pptt compile to different functions on x86_64 and aarch64 hosts, which caused bios_tables_test to fail.
- pl031: support rtc-timer property for pl031
- feature: Add logs for vm start and destroy
- feature: Add log for each modules
- log: Add log at boot & cpu init for aarch64
- bugfix: irq: Avoid covering object refcount of qemu_irq
- i386: cache passthrough: Update AMD 8000_001D.EAX[25:14] based on vCPU topo
- freeclock: set rtc_date_diff for X86
- freeclock: set rtc_date_diff for arm
- freeclock: add qmp command to get time offset of vm in seconds
- tests: Disable filemonitor testcase
- shadow_dev: introduce shadow dev for virtio-net device
- pl011: reset read FIFO when UARTTIMSC=0 & UARTICR=0xffff
- tests: virt: Update expected ACPI tables for virt test(update BinDir)
- arm64: Add the cpufreq device to show cpufreq info to guest
- hw/arm64: add vcpu cache info support
- tests: virt: Allow changes to PPTT test table
- cpu: add Cortex-A72 processor kvm target support
- cpu: add Kunpeng-920 cpu support
- net: eepro100: validate various address values (CVE-2021-20255)
- ide: ahci: add check to avoid null dereference (CVE-2019-12067)
- vdpa: set vring enable only if the vring address has already been set
- docs: Add generic vhost-vdpa device documentation
- vdpa: don't suspend/resume device when vdpa device not started
- vdpa: correct param passed in when unregister save
- vdpa: suspend function return 0 when the vdpa device is stopped
- vdpa: support vdpa device suspend/resume
- vdpa: move memory listener to the realize stage
- vdpa: implement vdpa device migration
- vhost: implement migration state notifier for vdpa device
- vhost: implement post resume bh
- vhost: implement savevm_handler for vdpa device
- vhost: implement vhost_vdpa_device_suspend/resume
- vhost: implement vhost-vdpa suspend/resume
- vhost: add vhost_dev_suspend/resume_op
- vhost: introduce bytemap for vhost backend logging
- vhost-vdpa: add migration log ops for VhostOps
- vhost-vdpa: add VHOST_BACKEND_F_BYTEMAPLOG
- hw/usb: reduce the vcpu cost of UHCI when VNC disconnects
- virtio-net: update the default and max of rx/tx_queue_size
- virtio-net: set the max of queue size to 4096
- virtio-net: fix max vring buf size when set ring num
- virtio-net: bugfix: do not delete netdev before virtio net
- monitor: Discard BLOCK_IO_ERROR event when VM rebooted
- vhost-user: add unregister_savevm when vhost-user cleanup
- vhost-user: add vhost_set_mem_table when vm load_setup at destination
- vhost-user: quit infinite loop while used memslots is more than the backend limit
- fix qemu core dump when vhost-user-net is configured in server mode
- vhost-user: Add support reconnect vhost-user socket
- vhost-user: Set the acked_features to the vm's feature
- i6300esb watchdog: bugfix: Add a runstate transition
- hw/net/rocker_of_dpa: fix double free bug of rocker device
- net/dump.c: Suppress spurious compiler warning
- pcie: Add pcie-root-port fast plug/unplug feature
- pcie: Compat with devices which do not support Link Width, such as ioh3420
- qdev/monitors: Fix redundant error_setg of qdev_add_device
- qemu-nbd: set timeout to qemu-nbd socket
- qemu-nbd: make native as the default aio mode
- nbd/server.c: fix invalid read after client was already free
- virtio-scsi: bugfix: fix qemu crash for hotplug scsi disk with dataplane
- virtio: bugfix: check the value of caches before accessing it
- virtio: print the guest virtio_net features that host does not support
- virtio: bugfix: add rcu_read_lock when vring_avail_idx is called
- virtio: check descriptor numbers
- migration: report multiFd related thread pid to libvirt
- migration: report migration related thread pid to libvirt
- cpu/features: fix bug for memory leakage
- doc: Update multi-thread compression doc
- migration: Add compress_level sanity check
- migration: Add zstd support in multi-thread compression
- migration: Add multi-thread compress ops
- migration: Refactoring multi-thread compress migration
- migration: Add multi-thread compress method
- migration: skip cache_drop for bios bootloader and nvram template
- oslib-posix: optimise vm startup time for 1G hugepage
- monitor/qmp: drop inflight rsp if qmp client broken
- ps2: fix oob in ps2 kbd
- Currently, when kvm and qemu cannot handle some kvm exit, qemu calls vm_stop and leaves the vm paused. This makes the vm unrecoverable, so send a guest panic event to libvirt instead.
- vhost: cancel migration when vhost-user restarted during migration

Signed-off-by: Jiabo Feng <fengjiabo1@huawei.com>
2024-04-10 20:19:06 +08:00


From 12cf5e9ece9cb0825f14ca80f6b1c5d1eb95c3e5 Mon Sep 17 00:00:00 2001
From: Jinhua Cao <caojinhua1@huawei.com>
Date: Fri, 11 Feb 2022 18:59:34 +0800
Subject: [PATCH] vhost-user: add vhost_set_mem_table when vm load_setup at
destination
When migrating a huge VM, more than 90 packets are lost.
During load_setup on the destination VM, pass the VM memory
layout to OVS so that the network card can be enabled as soon
as the migration finishes its state transition.
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
---
hw/virtio/vhost-user.c | 24 ++++++++++++++++++++++++
tests/qtest/vhost-user-test.c | 35 ++++++++++++++++++-----------------
2 files changed, 42 insertions(+), 17 deletions(-)
diff --git a/hw/virtio/vhost-user.c b/hw/virtio/vhost-user.c
index f214df804b..6739dfc98e 100644
--- a/hw/virtio/vhost-user.c
+++ b/hw/virtio/vhost-user.c
@@ -28,6 +28,7 @@
#include "sysemu/cryptodev.h"
#include "migration/migration.h"
#include "migration/postcopy-ram.h"
+#include "migration/register.h"
#include "trace.h"
#include "exec/ramblock.h"
@@ -2119,6 +2120,28 @@ static int vhost_user_postcopy_notifier(NotifierWithReturn *notifier,
return 0;
}
+static int vhost_user_load_setup(QEMUFile *f, void *opaque)
+{
+ struct vhost_dev *hdev = opaque;
+ int r;
+
+ if (hdev->vhost_ops && hdev->vhost_ops->vhost_set_mem_table) {
+ r = hdev->vhost_ops->vhost_set_mem_table(hdev, hdev->mem);
+ if (r < 0) {
+ qemu_log("error: vhost_set_mem_table failed: %s(%d)\n",
+ strerror(errno), errno);
+ return r;
+ } else {
+ qemu_log("info: vhost_set_mem_table OK\n");
+ }
+ }
+ return 0;
+}
+
+SaveVMHandlers savevm_vhost_user_handlers = {
+ .load_setup = vhost_user_load_setup,
+};
+
static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque,
Error **errp)
{
@@ -2255,6 +2278,7 @@ static int vhost_user_backend_init(struct vhost_dev *dev, void *opaque,
u->postcopy_notifier.notify = vhost_user_postcopy_notifier;
postcopy_add_notifier(&u->postcopy_notifier);
+ register_savevm_live("vhost-user", -1, 1, &savevm_vhost_user_handlers, dev);
return 0;
}
diff --git a/tests/qtest/vhost-user-test.c b/tests/qtest/vhost-user-test.c
index d4e437265f..fadf3f0f2e 100644
--- a/tests/qtest/vhost-user-test.c
+++ b/tests/qtest/vhost-user-test.c
@@ -799,6 +799,23 @@ static void test_read_guest_mem(void *obj, void *arg, QGuestAllocator *alloc)
read_guest_mem_server(global_qtest, server);
}
+static void wait_for_rings_started(TestServer *s, size_t count)
+{
+ gint64 end_time;
+
+ g_mutex_lock(&s->data_mutex);
+ end_time = g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND;
+ while (ctpop64(s->rings) != count) {
+ if (!g_cond_wait_until(&s->data_cond, &s->data_mutex, end_time)) {
+ /* timeout has passed */
+ g_assert_cmpint(ctpop64(s->rings), ==, count);
+ break;
+ }
+ }
+
+ g_mutex_unlock(&s->data_mutex);
+}
+
static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
{
TestServer *s = arg;
@@ -869,6 +886,7 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
qtest_qmp_eventwait(to, "RESUME");
g_assert(wait_for_fds(dest));
+ wait_for_rings_started(dest, 2);
read_guest_mem_server(to, dest);
g_source_destroy(source);
@@ -880,23 +898,6 @@ static void test_migrate(void *obj, void *arg, QGuestAllocator *alloc)
g_string_free(dest_cmdline, true);
}
-static void wait_for_rings_started(TestServer *s, size_t count)
-{
- gint64 end_time;
-
- g_mutex_lock(&s->data_mutex);
- end_time = g_get_monotonic_time() + 5 * G_TIME_SPAN_SECOND;
- while (ctpop64(s->rings) != count) {
- if (!g_cond_wait_until(&s->data_cond, &s->data_mutex, end_time)) {
- /* timeout has passed */
- g_assert_cmpint(ctpop64(s->rings), ==, count);
- break;
- }
- }
-
- g_mutex_unlock(&s->data_mutex);
-}
-
static inline void test_server_connect(TestServer *server)
{
test_server_create_chr(server, ",reconnect=1");
--
2.27.0
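
The mechanism the patch relies on is a SaveVMHandlers structure whose load_setup
callback is registered with register_savevm_live("vhost-user", -1, 1,
&savevm_vhost_user_handlers, dev) in vhost_user_backend_init(), so the hook runs
on the destination during migration setup and re-sends the memory table to the
vhost-user backend (OVS) before the guest resumes. The standalone C sketch below
only mirrors that control flow under simplified assumptions: struct vhost_dev,
struct vhost_memory, QEMUFile, fake_set_mem_table and vhost_user_load_setup_sketch
here are illustrative stand-ins, not QEMU's real definitions.

#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Stand-ins for the QEMU structures used by the patch; the real types
 * (struct vhost_dev, SaveVMHandlers, QEMUFile) live in the QEMU tree. */
typedef struct QEMUFile QEMUFile;            /* opaque migration stream */

struct vhost_memory { int nregions; };       /* placeholder memory table */

struct vhost_dev {
    struct vhost_memory *mem;
    /* backend op: push the memory table to the vhost-user backend */
    int (*vhost_set_mem_table)(struct vhost_dev *dev, struct vhost_memory *mem);
};

/* Mirrors vhost_user_load_setup() from the patch: runs on the destination
 * during load setup and re-sends the memory table so the backend can map
 * guest memory before migration completes. */
static int vhost_user_load_setup_sketch(QEMUFile *f, void *opaque)
{
    struct vhost_dev *hdev = opaque;
    (void)f;

    if (hdev->vhost_set_mem_table) {
        int r = hdev->vhost_set_mem_table(hdev, hdev->mem);
        if (r < 0) {
            fprintf(stderr, "vhost_set_mem_table failed: %s(%d)\n",
                    strerror(errno), errno);
            return r;
        }
    }
    return 0;
}

/* Toy backend op and driver so the sketch compiles and runs on its own. */
static int fake_set_mem_table(struct vhost_dev *dev, struct vhost_memory *mem)
{
    (void)dev;
    printf("backend received memory table with %d region(s)\n", mem->nregions);
    return 0;
}

int main(void)
{
    struct vhost_memory mem = { .nregions = 2 };
    struct vhost_dev dev = { .mem = &mem,
                             .vhost_set_mem_table = fake_set_mem_table };

    /* In QEMU this callback is installed via register_savevm_live() and
     * invoked by the migration code during load_setup on the destination. */
    return vhost_user_load_setup_sketch(NULL, &dev);
}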