When handling the dirty log, we deal with both qemu_real_host_page_size and
TARGET_PAGE_SIZE. The first is the granule of the KVM dirty
bitmap, and the second is the granule of the QEMU dirty bitmap.
Since qemu_real_host_page_size >= TARGET_PAGE_SIZE (kvm_init()
enforces it), misusing TARGET_PAGE_SIZE to initialize the kvmslot dirty_bmap
can waste memory. For example, when qemu_real_host_page_size is
64K and TARGET_PAGE_SIZE is 4K, it wastes 93.75% (15/16) of the memory.
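A minimal sketch of the sizing, using a hypothetical helper built on QEMU's
DIV_ROUND_UP/ROUND_UP macros (not the actual kvm-all.c code):

    #include "qemu/osdep.h"

    /* One dirty bit per page, rounded up to whole 64-bit words. */
    static size_t dirty_bmap_size(uint64_t slot_bytes, uint64_t page_size)
    {
        uint64_t pages = DIV_ROUND_UP(slot_bytes, page_size);
        return ROUND_UP(pages, 64) / 8;   /* bytes */
    }

    /* For a 1G slot: 2 KiB with 64K pages vs. 32 KiB with 4K pages,
     * i.e. 16x more memory than KVM will ever fill in. */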
Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20201217014941.22872-2-zhukeqian1@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
kvm_set_phys_mem can be called to reallocate a slot by something the
guest does (e.g. writing to PAM and other chipset registers).
This can happen in the middle of a migration, and if we're unlucky
it can now happen between the split 'sync' and 'clear'; the clear
asserts if there's no bmap to clear. Recreate the bmap whenever
we change the slot, keeping the clear path happy.
Typically this is triggered by the guest rebooting during a migration.
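A rough sketch of the approach, reusing the sizing discussed above (the
helper's name and call site are illustrative, not the exact patch):

    /* Whenever a slot is (re)registered with dirty logging enabled,
     * allocate its dirty bitmap right away so a later
     * KVM_CLEAR_DIRTY_LOG never finds dirty_bmap == NULL and asserts. */
    static void kvm_slot_init_dirty_bitmap(KVMSlot *mem)
    {
        if (!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES) || mem->dirty_bmap) {
            return;
        }
        uint64_t pages = DIV_ROUND_UP(mem->memory_size,
                                      qemu_real_host_page_size);
        mem->dirty_bmap = g_malloc0(ROUND_UP(pages, 64) / 8);
    }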
Corresponds to:
https://bugzilla.redhat.com/show_bug.cgi?id=1772774
https://bugzilla.redhat.com/show_bug.cgi?id=1771032
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
The maximum memslot size supported by KVM on s390 is 8TB, so
move the logic for splitting RAM into chunks of up to 8TB into the KVM code.
This hides the KVM-specific restriction in KVM code
and keeps it from affecting board-level design decisions, which allows
us to avoid misusing the memory_region_allocate_system_memory() API
and eventually use a single hostmem backend for guest RAM.
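A simplified sketch of the chunking loop (the per-slot registration helper is
hypothetical; the real code lives in the KVM memory listener):

    static void kvm_register_ram_in_chunks(hwaddr start, hwaddr size,
                                           hwaddr max_slot_size)
    {
        while (size) {
            hwaddr chunk = MIN(size, max_slot_size);
            kvm_register_one_slot(start, chunk);   /* hypothetical */
            start += chunk;
            size -= chunk;
        }
    }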
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <20190924144751.24149-4-imammedo@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
vfio: Move the saving of the config space to the right place in VFIO migration
vfio: Set the priority of the VFIO VM state change handler explicitly
vfio: Avoid disabling and enabling vectors repeatedly in VFIO migration
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
In the VFIO migration resume phase and during some guest startups, there are
already unmasked vectors in the vector table when
vfio_msix_enable() is called. So, to avoid inefficiently disabling
and enabling vectors repeatedly, allocate all needed vectors
first and then enable the unmasked vectors one by one without
disabling anything.
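A sketch of the intended flow (all helper names below are hypothetical; the
real code manipulates the MSI-X table and irqfds inside vfio_msix_enable()):

    static void msix_enable_unmasked_vectors(VFIOPCIDevice *vdev, int nr_vectors)
    {
        allocate_all_vectors(vdev, nr_vectors);      /* hypothetical: one-off setup */
        for (int i = 0; i < nr_vectors; i++) {
            if (!vector_is_masked(vdev, i)) {        /* hypothetical */
                enable_one_vector(vdev, i);          /* hypothetical: no disable step */
            }
        }
    }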
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Message-Id: <20210310030233.1133-4-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
In the VFIO VM state change handler, when the VM is stopped, the _RUNNING
bit in device_state is cleared, which stops the VFIO device, including
the generation of interrupts. Only after that can we save the pending states of
all interrupts in the GIC VM state change handler (on ARM).
So we have to set the priority of the VFIO VM state change handler
explicitly (as virtio devices do) to ensure it is called before the
GIC's handler when saving.
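A sketch of the registration, assuming the priority-taking variant of the VM
change state API; the callback name and the priority constant are
illustrative, not the exact patch:

    static void vfio_register_vm_state_handler(VFIODevice *vbasedev)
    {
        /* A priority chosen so this handler runs before the GIC's when
         * the VM is stopped for migration (VFIO_VM_STATE_PRIO is made up). */
        qemu_add_vm_change_state_handler_prio(vfio_vmstate_change, vbasedev,
                                              VFIO_VM_STATE_PRIO);
    }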
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20210310030233.1133-3-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
On ARM64 the VFIO SET_IRQS ioctl depends on the VM interrupt
setup; if the VFIO PCI device config space is restored
before the VGIC, an error might occur in the kernel.
So move the saving of the config space to the non-iterable
process, so that it is called after the VGIC according to
their priorities.
As for device-specific migration data that may depend on the
config space, the vendor driver can
include any config info it needs in its own data stream.
Signed-off-by: Shenming Lu <lushenming@huawei.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Message-Id: <20210310030233.1133-2-lushenming@huawei.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
vl: Don't mismatch g_strsplit()/g_free()
seqlock: fix seqlock_write_unlock_impl function
target/i386: kvm: initialize microcode revision from KVM
target/i386: check for availability of MSR_IA32_UCODE_REV as an emulated MSR
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Even though MSR_IA32_UCODE_REV had been available long before Linux 5.6,
which added it to the emulated MSR list, a bug caused the microcode
version to revert to 0x100000000 on INIT. As a result, processors other
than the bootstrap processor would not see the host microcode revision;
some Windows versions complain loudly about this and crash with a
fairly explicit MICROCODE REVISION MISMATCH error.
[If running 5.6 prereleases, the kernel fix "KVM: x86: do not reset
microcode version on INIT or RESET" should also be applied.]
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Message-id: <20200211175516.10716-1-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
KVM can return the host microcode revision as a feature MSR.
Use it as the default value for -cpu host.
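A minimal sketch of the default, assuming the ucode_rev field added by the
ucode-rev property patch; the placement in the realize path is illustrative:

    /* If the user did not set ucode-rev and we're running on KVM,
     * take the host's microcode revision from the feature MSR. */
    if (cpu->ucode_rev == 0) {
        cpu->ucode_rev = kvm_arch_get_supported_msr_feature(kvm_state,
                                                            MSR_IA32_UCODE_REV);
    }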
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1579544504-3616-4-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The seqlock write unlock function was incorrectly calling
seqlock_write_begin() instead of seqlock_write_end(), and was releasing
the lock before incrementing the sequence. This could lead to a race
condition, with the sequence number left corrupted (odd) even though the
lock is no longer held.
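The corrected ordering, per the description above (a sketch of the unlock
helper in include/qemu/seqlock.h):

    static inline void seqlock_write_unlock_impl(QemuSeqLock *sl, QemuLockable *lock)
    {
        seqlock_write_end(sl);        /* was incorrectly seqlock_write_begin() */
        qemu_lockable_unlock(lock);   /* release only after the sequence is bumped */
    }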
Signed-off-by: Luc Michel <luc.michel@greensocs.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20200129144948.2161551-1-luc.michel@greensocs.com>
Fixes: 988fcafc73 ("seqlock: add QemuLockable support", 2018-08-23)
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is a mismatch between g_strsplit() and g_free(); it causes a memory leak, as shown below:
[root@localhost]# ./aarch64-softmmu/qemu-system-aarch64 -accel help
Accelerators supported in QEMU binary:
tcg
kvm
=================================================================
==1207900==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 8 byte(s) in 2 object(s) allocated from:
#0 0xfffd700231cb in __interceptor_malloc (/lib64/libasan.so.4+0xd31cb)
#1 0xfffd6ec57163 in g_malloc (/lib64/libglib-2.0.so.0+0x57163)
#2 0xfffd6ec724d7 in g_strndup (/lib64/libglib-2.0.so.0+0x724d7)
#3 0xfffd6ec73d3f in g_strsplit (/lib64/libglib-2.0.so.0+0x73d3f)
#4 0xaaab66be5077 in main /mnt/sdc/qemu-master/qemu-4.2.0-rc0/vl.c:3517
#5 0xfffd6e140b9f in __libc_start_main (/lib64/libc.so.6+0x20b9f)
#6 0xaaab66bf0f53 (./build/aarch64-softmmu/qemu-system-aarch64+0x8a0f53)
Direct leak of 2 byte(s) in 2 object(s) allocated from:
#0 0xfffd700231cb in __interceptor_malloc (/lib64/libasan.so.4+0xd31cb)
#1 0xfffd6ec57163 in g_malloc (/lib64/libglib-2.0.so.0+0x57163)
#2 0xfffd6ec7243b in g_strdup (/lib64/libglib-2.0.so.0+0x7243b)
#3 0xfffd6ec73e6f in g_strsplit (/lib64/libglib-2.0.so.0+0x73e6f)
#4 0xaaab66be5077 in main /mnt/sdc/qemu-master/qemu-4.2.0-rc0/vl.c:3517
#5 0xfffd6e140b9f in __libc_start_main (/lib64/libc.so.6+0x20b9f)
#6 0xaaab66bf0f53 (./build/aarch64-softmmu/qemu-system-aarch64+0x8a0f53)
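A sketch of the fix (variable names illustrative): a vector returned by
g_strsplit() must be released with g_strfreev(), which frees every element
as well as the array; plain g_free() frees only the array and leaks the strings.

    gchar **accel_list = g_strsplit(accel_opts, ":", 0);

    for (gchar **tmp = accel_list; tmp && *tmp; tmp++) {
        /* ... use *tmp ... */
    }
    g_strfreev(accel_list);   /* was: g_free(accel_list) */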
Reported-by: Euler Robot <euler.robot@huawei.com>
Signed-off-by: Pan Nengyuan <pannengyuan@huawei.com>
Message-Id: <20200110091710.53424-2-pannengyuan@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
external_snapshot_prepare() tries to move the overlay to the AioContext
of the backing file (the snapshotted node). However, it's possible that
this doesn't work, but the backing file can instead be moved to the
overlay's AioContext (e.g. opening the backing chain for a mirror
target).
bdrv_append() already indirectly uses bdrv_attach_node(), which takes
care to move nodes to make sure they use the same AioContext and which
tries both directions.
So the problem has a simple fix: Just delete the unnecessary extra
bdrv_try_set_aio_context() call in external_snapshot_prepare() and
instead assert in bdrv_append() that both nodes were indeed moved to the
same AioContext.
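A sketch of the added assertion (parameter names as in bdrv_append()'s
prototype):

    /* After bdrv_append() has rewired the graph, both nodes must have
     * ended up in the same AioContext, whichever direction was moved. */
    assert(bdrv_get_aio_context(bs_new) == bdrv_get_aio_context(bs_top));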
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20200310113831.27293-6-kwolf@redhat.com>
Tested-by: Peter Krempa <pkrempa@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
external_snapshot_abort() calls bdrv_set_backing_hd(), which
returns state->old_bs to the main AioContext, as it's intended to be
used when the BDS is going to be released. As that's not the case when
aborting an external snapshot, return it to the AioContext it was in
before the call.
This issue can be triggered by issuing a transaction with two actions,
a proper blockdev-snapshot-sync and a bogus one, so that the second
triggers a transaction abort. This results in a crash with a stack
trace like this one:
#0 0x00007fa1048b28df in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007fa10489ccf5 in __GI_abort () at abort.c:79
#2 0x00007fa10489cbc9 in __assert_fail_base
(fmt=0x7fa104a03300 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=0x557224014d30 "block.c", line=2240, function=<optimized out>) at assert.c:92
#3 0x00007fa1048aae96 in __GI___assert_fail
(assertion=assertion@entry=0x5572240b44d8 "bdrv_get_aio_context(old_bs) == bdrv_get_aio_context(new_bs)", file=file@entry=0x557224014d30 "block.c", line=line@entry=2240, function=function@entry=0x5572240b5d60 <__PRETTY_FUNCTION__.31620> "bdrv_replace_child_noperm") at assert.c:101
#4 0x0000557223e631f8 in bdrv_replace_child_noperm (child=0x557225b9c980, new_bs=new_bs@entry=0x557225c42e40) at block.c:2240
#5 0x0000557223e68be7 in bdrv_replace_node (from=0x557226951a60, to=0x557225c42e40, errp=0x5572247d6138 <error_abort>) at block.c:4196
#6 0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1731
#7 0x0000557223d069c4 in external_snapshot_abort (common=0x557225d7e170) at blockdev.c:1717
#8 0x0000557223d09013 in qmp_transaction (dev_list=<optimized out>, has_props=<optimized out>, props=0x557225cc7d70, errp=errp@entry=0x7ffe704c0c98) at blockdev.c:2360
#9 0x0000557223e32085 in qmp_marshal_transaction (args=<optimized out>, ret=<optimized out>, errp=0x7ffe704c0d08) at qapi/qapi-commands-transaction.c:44
#10 0x0000557223ee798c in do_qmp_dispatch (errp=0x7ffe704c0d00, allow_oob=<optimized out>, request=<optimized out>, cmds=0x5572247d3cc0 <qmp_commands>) at qapi/qmp-dispatch.c:132
#11 0x0000557223ee798c in qmp_dispatch (cmds=0x5572247d3cc0 <qmp_commands>, request=<optimized out>, allow_oob=<optimized out>) at qapi/qmp-dispatch.c:175
#12 0x0000557223e06141 in monitor_qmp_dispatch (mon=0x557225c69ff0, req=<optimized out>) at monitor/qmp.c:120
#13 0x0000557223e0678a in monitor_qmp_bh_dispatcher (data=<optimized out>) at monitor/qmp.c:209
#14 0x0000557223f2f366 in aio_bh_call (bh=0x557225b9dc60) at util/async.c:117
#15 0x0000557223f2f366 in aio_bh_poll (ctx=ctx@entry=0x557225b9c840) at util/async.c:117
#16 0x0000557223f32754 in aio_dispatch (ctx=0x557225b9c840) at util/aio-posix.c:459
#17 0x0000557223f2f242 in aio_ctx_dispatch (source=<optimized out>, callback=<optimized out>, user_data=<optimized out>) at util/async.c:260
#18 0x00007fa10913467d in g_main_dispatch (context=0x557225c28e80) at gmain.c:3176
#19 0x00007fa10913467d in g_main_context_dispatch (context=context@entry=0x557225c28e80) at gmain.c:3829
#20 0x0000557223f31808 in glib_pollfds_poll () at util/main-loop.c:219
#21 0x0000557223f31808 in os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:242
#22 0x0000557223f31808 in main_loop_wait (nonblocking=<optimized out>) at util/main-loop.c:518
#23 0x0000557223d13201 in main_loop () at vl.c:1828
#24 0x0000557223bbfb82 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4504
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1779036
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_try_set_aio_context() requires that the old context is held, and
the new context is not held. Fix all the occurrences where it's not
done this way.
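A sketch of the caller-side pattern (bs, new_ctx and errp stand in for the
caller's variables):

    AioContext *old_ctx = bdrv_get_aio_context(bs);

    /* Hold the node's current context, not the new one, across the call. */
    aio_context_acquire(old_ctx);
    int ret = bdrv_try_set_aio_context(bs, new_ctx, errp);
    aio_context_release(old_ctx);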
Suggested-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Issuing a blockdev-backup from qmp_blockdev_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_blockdev_backup() and
blockdev_backup_prepare().
This change unifies both paths, merging do_blockdev_backup() and
blockdev_backup_prepare(), and changing qmp_blockdev_backup() to
create a transaction instead of calling do_backup_common() directly.
As a side-effect, now qmp_blockdev_backup() is executed inside a
drained section, as it happens when creating a blockdev-backup
transaction. This change is visible from the user's perspective, as
the job gets paused and immediately resumed before starting the actual
work.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Issuing a drive-backup from qmp_drive_backup takes a slightly
different path than when it's issued from a transaction. In the code,
this is manifested as some redundancy between do_drive_backup() and
drive_backup_prepare().
This change unifies both paths, merging do_drive_backup() and
drive_backup_prepare(), and changing qmp_drive_backup() to create a
transaction instead of calling do_backup_common() directly.
As a side-effect, now qmp_drive_backup() is executed inside a drained
section, as it happens when creating a drive-backup transaction. This
change is visible from the user's perspective, as the job gets paused
and immediately resumed before starting the actual work.
Also fix tests 141, 185 and 219 to cope with the extra
JOB_STATUS_CHANGE lines.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Fix a couple of minor coding style issues in drive_backup_prepare.
Signed-off-by: Sergio Lopez <slp@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_has_zero_init() only has meaning for newly created images or image
areas. If the mirror job itself did not create the image, it cannot
rely on bdrv_has_zero_init()'s result to carry any meaning.
This is the case for drive-mirror with mode=existing and always for
blockdev-mirror.
Note that we only have to zero-initialize the target with sync=full,
because other modes actually do not promise that the target will contain
the same data as the source after the job -- sync=top only promises to
copy anything allocated in the top layer, and sync=none will only copy
new I/O. (Which is how mirror has always handled it.)
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190724171239.8764-3-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
We have the detect_zeroes option, so at least for blockdev-backup the user
should set it if zero detection is needed. For drive-backup, leave
detection enabled by default but do it through the existing option instead
of open-coding it.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190730163251.755248-2-vsementsov@virtuozzo.com
Signed-off-by: John Snow <jsnow@redhat.com>
This is nicer to do in the unified QMP interface that we have now,
because it lets us use the right terminology back at the user.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190716000117.25219-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
With the "never" sync policy, we actually can utilize readonly bitmaps
now. Loosen the check at the QMP level, and tighten it based on
provided arguments down at the job creation level instead.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-19-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
This adds a "never" policy for bitmap synchronization. Regardless of whether
the job succeeds or fails, we never update the bitmap. This can be used
to perform differential backups, or simply to avoid the job modifying a
bitmap.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-7-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
We don't need or want a new sync mode for simple differences in
semantics. Create a new mode simply named "BITMAP" that is designed to
make use of the new Bitmap Sync Mode field.
Because the only bitmap sync mode is 'on-success', this adds no new
functionality to the backup job (yet). The old incremental backup mode
is maintained as syntactic sugar for sync=bitmap, mode=on-success.
Add all of the plumbing necessary to support this new instruction.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-6-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Depending on what a user is trying to accomplish, there are a few
bitmap cleanup actions that could be useful when an operation
finishes.
I am proposing three:
- NEVER: The bitmap is never synchronized against what was copied.
- ALWAYS: The bitmap is always synchronized, even on failures.
- ON-SUCCESS: The bitmap is synchronized only on success.
The existing incremental backup modes use 'on-success' semantics,
so add just that one for right now.
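Expressed as a C enum for illustration (the real definition lives in the QAPI
schema and its generated code; only on-success is wired up by this patch):

    typedef enum BitmapSyncMode {
        BITMAP_SYNC_MODE_ON_SUCCESS,  /* sync the bitmap only if the job succeeded */
        BITMAP_SYNC_MODE_NEVER,       /* never touch the bitmap */
        BITMAP_SYNC_MODE_ALWAYS,      /* sync even if the job failed */
    } BitmapSyncMode;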
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20190709232550.10724-5-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-4-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Create a common core that comprises the actual meat of what the backup API
boundary needs to do, and then switch drive-backup to use it.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190709232550.10724-3-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
drive-backup and blockdev-backup have an awful lot of things in common.
Let's fix that.
I don't deduplicate 'target', because the semantics actually did change
between each structure. Leave that one alone so it can be documented
separately.
Where documentation was not identical, use the most up-to-date version.
For "speed", use Blockdev-Backup's version. For "sync", use
Drive-Backup's version.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
[Maintainer edit: modified commit message. --js]
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 20190709232550.10724-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
RH-Author: Julia Suvorova <jusual@redhat.com>
Message-id: <20200616122536.1027685-1-jusual@redhat.com>
Patchwork-id: 97548
O-Subject: [RHEL-AV-8.2.1 qemu-kvm PATCH 1/1] hw/pci/pcie: Move hot plug capability check to pre_plug callback
Bugzilla: 1820531
RH-Acked-by: Danilo de Paula <ddepaula@redhat.com>
RH-Acked-by: Auger Eric <eric.auger@redhat.com>
RH-Acked-by: Sergio Lopez Pascual <slp@redhat.com>
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1820531
BRANCH: rhel-av-8.2.1
UPSTREAM: merged
BREW: 29422092
Check for hot plug capability earlier to avoid removing devices attached
during the initialization process.
Run qemu with an unattached drive:
-drive file=$FILE,if=none,id=drive0 \
-device pcie-root-port,id=rp0,slot=3,bus=pcie.0,hotplug=off
Hotplug a block device:
device_add virtio-blk-pci,id=blk0,drive=drive0,bus=rp0
If hotplug fails on plug_cb, drive0 will be deleted.
Fixes: 0501e1aa1d32a6 ("hw/pci/pcie: Forbid hot-plug if it's disabled on the slot")
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Julia Suvorova <jusual@redhat.com>
Message-Id: <20200604125947.881210-1-jusual@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 0dabc0f6544f2c0310546f6d6cf3b68979580a9c)
Signed-off-by: Eduardo Lima (Etrunko) <etrunko@redhat.com>
RH-Author: Laurent Vivier <lvivier@redhat.com>
Message-id: <20200317170518.9303-1-lvivier@redhat.com>
Patchwork-id: 94374
O-Subject: [RHEL-AV-8.2.0 qemu-kvm PATCH] migration: Rate limit inside host pages
Bugzilla: 1814336
RH-Acked-by: Peter Xu <peterx@redhat.com>
RH-Acked-by: Juan Quintela <quintela@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
When using hugepages, rate limiting is necessary within each huge
page, since a 1G huge page can take a significant time to send;
without it you end up with bursty behaviour.
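A simplified sketch of the change (the per-page send helper is hypothetical;
the rate-limit call follows QEMU's migration code):

    static void save_host_page(RAMState *rs, size_t host_page_size)
    {
        size_t pages = host_page_size / TARGET_PAGE_SIZE;

        for (size_t i = 0; i < pages; i++) {
            save_one_target_page(rs, i);   /* hypothetical */
            migration_rate_limit();        /* may throttle mid host page */
        }
    }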
Fixes: 4c011c37ecb3 ("postcopy: Send whole huge pages")
Reported-by: Lin Ma <LMa@suse.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 97e1e06780e70f6e98a0d2df881e0c0927d3aeb6)
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1814336
BRANCH: rhel-av-8.2.0
UPSTREAM: Merged
BREW: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=27283241
TESTED: Tested that the migration abort doesn't trigger an error message in
the kernel logs on P9
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
Wrap the check into a function to make it easy to read.
Signed-off-by: Wei Yang <richardw.yang@linux.intel.com>
Message-Id: <20190717005341.14140-1-richardw.yang@linux.intel.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
virtio-net: delete also control queue when TX/RX deleted
target/i386: enable monitor and ucode revision with -cpu max
target/i386: set the CPUID level to 0x14 on old machine-type
target/i386: kvm: initialize feature MSRs very early
target/i386: add a ucode-rev property
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
RH-Author: Paolo Bonzini <pbonzini@redhat.com>
Message-id: <20200217162316.2464-3-pbonzini@redhat.com>
Patchwork-id: 93909
O-Subject: [RHEL-AV-8.2.0 qemu-kvm PATCH 2/6] target/i386: add a ucode-rev property
Bugzilla: 1791648
RH-Acked-by: Eduardo Habkost <ehabkost@redhat.com>
RH-Acked-by: Maxim Levitsky <mlevitsk@redhat.com>
RH-Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Add the property and plumb it in TCG and HVF (the latter of which
tried to support returning a constant value but used the wrong MSR).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <1579544504-3616-3-git-send-email-pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 4e45aff398cd1542c2a384a2a3b8600f23337d86)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>