!23 backport-from-qemu-4.1.1

Merge pull request !23 from FangYing/backport-from-qemu-4.1.1
openeuler-ci-bot 2020-03-16 23:05:14 +08:00 committed by Gitee
commit a7318b6b26
11 changed files with 646 additions and 2 deletions

@@ -0,0 +1,60 @@
From 124032e79e354f5e7cc28958f2ca6b9f898da719 Mon Sep 17 00:00:00 2001
From: Fan Yang <Fan_Yang@sjtu.edu.cn>
Date: Tue, 24 Sep 2019 22:08:29 +0800
Subject: [PATCH] COLO-compare: Fix incorrect `if` logic
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
'colo_mark_tcp_pkt' should return 'true' when packets are the same, and
'false' otherwise. However, it returns 'true' when
'colo_compare_packet_payload' returns non-zero while
'colo_compare_packet_payload' is just a 'memcmp'. The result is that
COLO-compare reports inconsistent TCP packets when they are actually
the same.
Fixes: f449c9e549c ("colo: compare the packet based on the tcp sequence number")
Cc: qemu-stable@nongnu.org
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Fan Yang <Fan_Yang@sjtu.edu.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
(cherry picked from commit 1e907a32b77e5d418538453df5945242e43224fa)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
net/colo-compare.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/net/colo-compare.c b/net/colo-compare.c
index bf10526..9827c0e 100644
--- a/net/colo-compare.c
+++ b/net/colo-compare.c
@@ -287,7 +287,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
*mark = 0;
if (ppkt->tcp_seq == spkt->tcp_seq && ppkt->seq_end == spkt->seq_end) {
- if (colo_compare_packet_payload(ppkt, spkt,
+ if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size, spkt->header_size,
ppkt->payload_size)) {
*mark = COLO_COMPARE_FREE_SECONDARY | COLO_COMPARE_FREE_PRIMARY;
@@ -297,7 +297,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
/* one part of secondary packet payload still need to be compared */
if (!after(ppkt->seq_end, spkt->seq_end)) {
- if (colo_compare_packet_payload(ppkt, spkt,
+ if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size + ppkt->offset,
spkt->header_size + spkt->offset,
ppkt->payload_size - ppkt->offset)) {
@@ -316,7 +316,7 @@ static bool colo_mark_tcp_pkt(Packet *ppkt, Packet *spkt,
/* primary packet is longer than secondary packet, compare
* the same part and mark the primary packet offset
*/
- if (colo_compare_packet_payload(ppkt, spkt,
+ if (!colo_compare_packet_payload(ppkt, spkt,
ppkt->header_size + ppkt->offset,
spkt->header_size + spkt->offset,
spkt->payload_size - spkt->offset)) {
--
1.8.3.1
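
The root of the bug is the return-value convention: colo_compare_packet_payload() behaves like memcmp(), so 0 means "the payloads match". Below is a minimal standalone sketch of the predicate the fix restores; the helper names are hypothetical stand-ins, not the QEMU functions.

#include <stdbool.h>
#include <string.h>

/* stand-in for colo_compare_packet_payload(): memcmp() convention, 0 == equal */
static int payload_cmp(const void *ppkt, const void *spkt, size_t len)
{
    return memcmp(ppkt, spkt, len);
}

/* colo_mark_tcp_pkt()-style predicate: true only when the payloads match */
static bool payloads_match(const void *ppkt, const void *spkt, size_t len)
{
    return !payload_cmp(ppkt, spkt, len);   /* the missing '!' was the bug */
}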

@@ -0,0 +1,35 @@
From adb934c8d2cfd8b920e69712f07a8fb9399fdc2d Mon Sep 17 00:00:00 2001
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: Fri, 20 Sep 2019 17:20:43 +0300
Subject: [PATCH] block/backup: fix backup_cow_with_offload for last cluster
We shouldn't try to copy bytes beyond EOF. Fix it.
Fixes: 9ded4a0114968e
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190920142056.12778-3-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 1048ddf0a32dcdaa952e581bd503d49adad527cc)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/backup.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/block/backup.c b/block/backup.c
index 8119d3c..55736ea 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -169,7 +169,7 @@ static int coroutine_fn backup_cow_with_offload(BackupBlockJob *job,
assert(QEMU_IS_ALIGNED(job->copy_range_size, job->cluster_size));
assert(QEMU_IS_ALIGNED(start, job->cluster_size));
- nbytes = MIN(job->copy_range_size, end - start);
+ nbytes = MIN(job->copy_range_size, MIN(end, job->len) - start);
nr_clusters = DIV_ROUND_UP(nbytes, job->cluster_size);
hbitmap_reset(job->copy_bitmap, start, job->cluster_size * nr_clusters);
ret = blk_co_copy_range(blk, start, job->target, start, nbytes,
--
1.8.3.1
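
A small sketch of the arithmetic behind the one-line fix (a standalone function, not the QEMU one): the copy end is clamped to the job length so that the final, possibly partial, cluster never reads past EOF.

#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* start and end are cluster-aligned offsets; job_len is the image size */
static uint64_t clamp_copy_bytes(uint64_t start, uint64_t end,
                                 uint64_t copy_range_size, uint64_t job_len)
{
    /* the old code used MIN(copy_range_size, end - start) and could pass EOF */
    return MIN(copy_range_size, MIN(end, job_len) - start);
}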

@@ -0,0 +1,51 @@
From bad8a640a29f16b4d333673577b06880894766e1 Mon Sep 17 00:00:00 2001
From: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date: Fri, 20 Sep 2019 17:20:42 +0300
Subject: [PATCH] block/backup: fix max_transfer handling for copy_range
Of course, QEMU_ALIGN_UP is a typo; it should be QEMU_ALIGN_DOWN, as we
are trying to find an aligned size that satisfies both source and target.
Also, don't ignore a max_transfer that is too small; in that case it seems
safer to disable copy_range.
Fixes: 9ded4a0114968e
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20190920142056.12778-2-vsementsov@virtuozzo.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 981fb5810aa3f68797ee6e261db338bd78857614)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/backup.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/block/backup.c b/block/backup.c
index 381659d..8119d3c 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -666,12 +666,19 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
job->cluster_size = cluster_size;
job->copy_bitmap = copy_bitmap;
copy_bitmap = NULL;
- job->use_copy_range = !compress; /* compression isn't supported for it */
job->copy_range_size = MIN_NON_ZERO(blk_get_max_transfer(job->common.blk),
blk_get_max_transfer(job->target));
- job->copy_range_size = MAX(job->cluster_size,
- QEMU_ALIGN_UP(job->copy_range_size,
- job->cluster_size));
+ job->copy_range_size = QEMU_ALIGN_DOWN(job->copy_range_size,
+ job->cluster_size);
+ /*
+ * Set use_copy_range, consider the following:
+ * 1. Compression is not supported for copy_range.
+ * 2. copy_range does not respect max_transfer (it's a TODO), so we factor
+ * that in here. If max_transfer is smaller than the job->cluster_size,
+ * we do not use copy_range (in that case it's zero after aligning down
+ * above).
+ */
+ job->use_copy_range = !compress && job->copy_range_size > 0;
/* Required permissions are already taken with target's blk_new() */
block_job_add_bdrv(&job->common, "target", target, 0, BLK_PERM_ALL,
--
1.8.3.1
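
A rough sketch of the new sizing logic; the struct and macro below are stand-ins for the QEMU ones, not the real definitions. Aligning down keeps copy_range_size within max_transfer, and a result of zero (max_transfer smaller than one cluster) is exactly what disables copy_range.

#include <stdbool.h>
#include <stdint.h>

#define ALIGN_DOWN(n, m) ((n) / (m) * (m))   /* stand-in for QEMU_ALIGN_DOWN */

struct sketch_job {
    uint64_t cluster_size;
    uint64_t copy_range_size;
    bool use_copy_range;
};

static void size_copy_range(struct sketch_job *job, uint64_t max_transfer,
                            bool compress)
{
    job->copy_range_size = ALIGN_DOWN(max_transfer, job->cluster_size);
    /* compression is unsupported for copy_range; zero means "too small" */
    job->use_copy_range = !compress && job->copy_range_size > 0;
}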

@@ -0,0 +1,158 @@
From b4bbef6714a45ffd7b4a57e5a0522c7006f504a6 Mon Sep 17 00:00:00 2001
From: Nir Soffer <nirsof@gmail.com>
Date: Tue, 13 Aug 2019 21:21:03 +0300
Subject: [PATCH] file-posix: Handle undetectable alignment
In some cases buf_align or request_alignment cannot be detected:
1. With Gluster, buf_align cannot be detected since the actual I/O is
done on the Gluster server and qemu's buffer alignment does not matter.
Since there is no alignment requirement, buf_align=1 is the best value.
2. With a local XFS filesystem, buf_align cannot be detected when reading
from an unallocated area. In this case we must align the buffer, but we
don't know the correct size. Using the wrong alignment results in an
I/O error.
3. With Gluster backed by XFS, request_alignment cannot be detected when
reading from an unallocated area. In this case we need to use the
correct alignment, and failing to do so results in I/O errors.
4. With NFS, the server does not use direct I/O, so neither buf_align nor
request_alignment can be detected. In this case we don't need any
alignment, so we can use buf_align=1 and request_alignment=1.
These cases seem to work when the storage sector size is 512 bytes,
because the current code starts checking at align=512. If that check
succeeds only because the alignment cannot be detected, we use 512. But
this does not work for storage with a 4k sector size.
To determine whether we can detect the alignment, we probe first with
align=1. If probing succeeds, either there is no alignment requirement
(cases 1, 4) or we are probing an unallocated area (cases 2, 3). Since we
have no way to tell which, we treat this as undetectable alignment. If
probing with align=1 fails with EINVAL but probing with one of the
expected alignments succeeds, we know we have found a working alignment.
In practice the alignment requirements are the same for buffer alignment,
buffer length, and offset in the file. So if we cannot detect buf_align,
we can use the request alignment. If we cannot detect the request
alignment, we can fall back to a safe value. To use this logic, we probe
the request alignment first, instead of buf_align.
Here is a table showing the behaviour with the current code (the value in
parentheses is the optimal value).
Case   Sector   buf_align (opt)   request_alignment (opt)   result
======================================================================
1      512      512 (1)           512 (512)                 OK
1      4096     512 (1)           4096 (4096)               FAIL
----------------------------------------------------------------------
2      512      512 (512)         512 (512)                 OK
2      4096     512 (4096)        4096 (4096)               FAIL
----------------------------------------------------------------------
3      512      512 (1)           512 (512)                 OK
3      4096     512 (1)           512 (4096)                FAIL
----------------------------------------------------------------------
4      512      512 (1)           512 (1)                   OK
4      4096     512 (1)           512 (1)                   OK
Same cases with this change:
Case   Sector   buf_align (opt)   request_alignment (opt)   result
======================================================================
1      512      512 (1)           512 (512)                 OK
1      4096     4096 (1)          4096 (4096)               OK
----------------------------------------------------------------------
2      512      512 (512)         512 (512)                 OK
2      4096     4096 (4096)       4096 (4096)               OK
----------------------------------------------------------------------
3      512      4096 (1)          4096 (512)                OK
3      4096     4096 (1)          4096 (4096)               OK
----------------------------------------------------------------------
4      512      4096 (1)          4096 (1)                  OK
4      4096     4096 (1)          4096 (1)                  OK
I tested that provisioning VMs and copying disks on local XFS and
Gluster with a 4k sector size now works, resolving bugs [1] and [2].
I also tested XFS, NFS, and Gluster with a 512-byte sector size.
[1] https://bugzilla.redhat.com/1737256
[2] https://bugzilla.redhat.com/1738657
Signed-off-by: Nir Soffer <nsoffer@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit a6b257a08e3d72219f03e461a52152672fec0612)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/file-posix.c | 36 +++++++++++++++++++++++++-----------
1 file changed, 25 insertions(+), 11 deletions(-)
diff --git a/block/file-posix.c b/block/file-posix.c
index c185f34..d5065c6 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -321,6 +321,7 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
BDRVRawState *s = bs->opaque;
char *buf;
size_t max_align = MAX(MAX_BLOCKSIZE, getpagesize());
+ size_t alignments[] = {1, 512, 1024, 2048, 4096};
/* For SCSI generic devices the alignment is not really used.
With buffered I/O, we don't have any restrictions. */
@@ -347,25 +348,38 @@ static void raw_probe_alignment(BlockDriverState *bs, int fd, Error **errp)
}
#endif
- /* If we could not get the sizes so far, we can only guess them */
- if (!s->buf_align) {
+ /*
+ * If we could not get the sizes so far, we can only guess them. First try
+ * to detect request alignment, since it is more likely to succeed. Then
+ * try to detect buf_align, which cannot be detected in some cases (e.g.
+ * Gluster). If buf_align cannot be detected, we fallback to the value of
+ * request_alignment.
+ */
+
+ if (!bs->bl.request_alignment) {
+ int i;
size_t align;
- buf = qemu_memalign(max_align, 2 * max_align);
- for (align = 512; align <= max_align; align <<= 1) {
- if (raw_is_io_aligned(fd, buf + align, max_align)) {
- s->buf_align = align;
+ buf = qemu_memalign(max_align, max_align);
+ for (i = 0; i < ARRAY_SIZE(alignments); i++) {
+ align = alignments[i];
+ if (raw_is_io_aligned(fd, buf, align)) {
+ /* Fallback to safe value. */
+ bs->bl.request_alignment = (align != 1) ? align : max_align;
break;
}
}
qemu_vfree(buf);
}
- if (!bs->bl.request_alignment) {
+ if (!s->buf_align) {
+ int i;
size_t align;
- buf = qemu_memalign(s->buf_align, max_align);
- for (align = 512; align <= max_align; align <<= 1) {
- if (raw_is_io_aligned(fd, buf, align)) {
- bs->bl.request_alignment = align;
+ buf = qemu_memalign(max_align, 2 * max_align);
+ for (i = 0; i < ARRAY_SIZE(alignments); i++) {
+ align = alignments[i];
+ if (raw_is_io_aligned(fd, buf + align, max_align)) {
+ /* Fallback to request_aligment. */
+ s->buf_align = (align != 1) ? align : bs->bl.request_alignment;
break;
}
}
--
1.8.3.1
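
Below is a standalone sketch of the probing strategy using plain POSIX calls rather than QEMU's raw_is_io_aligned() helper; the function is illustrative only and assumes fd was opened with O_DIRECT. Request alignment is probed first, starting at 1, and a success at align=1 is treated as "undetectable", falling back to the safe maximum.

#include <stdlib.h>
#include <unistd.h>

static size_t probe_request_alignment(int fd, size_t max_align)
{
    static const size_t alignments[] = { 1, 512, 1024, 2048, 4096 };
    void *buf = NULL;
    size_t detected = 0;
    size_t i;

    if (posix_memalign(&buf, max_align, max_align) != 0) {
        return 0;
    }
    for (i = 0; i < sizeof(alignments) / sizeof(alignments[0]); i++) {
        if (pread(fd, buf, alignments[i], 0) >= 0) {
            /* align=1 succeeding means "undetectable": use the safe value */
            detected = (alignments[i] != 1) ? alignments[i] : max_align;
            break;
        }
    }
    free(buf);
    return detected;   /* 0 if no candidate alignment worked */
}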

@@ -0,0 +1,45 @@
From aebd98d0799d6dd9bb4dd4bf73f0b75c5f4e665d Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Wed, 14 Aug 2019 18:55:33 +0100
Subject: [PATCH] memory: Align MemoryRegionSections fields
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MemoryRegionSection includes an Int128 'size' field;
on some platforms the compiler aligns this field to
a 128-bit boundary, leaving 8 bytes of dead space.
This dead space can be filled with junk.
Move the size field to the top, avoiding the unnecessary padding.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-2-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 44f85d3276397cfa2cfa379c61430405dad4e644)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
include/exec/memory.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 1625913..f0f0767 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -484,10 +484,10 @@ static inline FlatView *address_space_to_flatview(AddressSpace *as)
* @nonvolatile: this section is non-volatile
*/
struct MemoryRegionSection {
+ Int128 size;
MemoryRegion *mr;
FlatView *fv;
hwaddr offset_within_region;
- Int128 size;
hwaddr offset_within_address_space;
bool readonly;
bool nonvolatile;
--
1.8.3.1
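
The dead space comes from Int128's 16-byte alignment: with three pointer-sized fields before it, the compiler inserts an 8-byte hole. A minimal sketch follows (the struct is an illustrative stand-in, not QEMU's definition) that prints the size of that hole on a typical LP64 target.

#include <stddef.h>
#include <stdio.h>

/* stand-in for QEMU's Int128: a 16-byte, 16-byte-aligned value */
typedef struct {
    _Alignas(16) unsigned long long lo;
    unsigned long long hi;
} i128;

struct old_layout {                 /* 16-byte member in the middle: hole */
    void *mr;
    void *fv;
    unsigned long offset_within_region;
    i128 size;
    unsigned long offset_within_address_space;
};

int main(void)
{
    size_t hole = offsetof(struct old_layout, size)
                  - offsetof(struct old_layout, offset_within_region)
                  - sizeof(unsigned long);

    printf("dead space before 'size': %zu bytes\n", hole);   /* 8 on LP64 */
    return 0;
}

Placing the 16-byte member first, as the patch does, leaves no interior padding for a byte-wise comparison to trip over.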

@@ -0,0 +1,47 @@
From 026ef4aabd2d533d1d2f206bd3312fb1b1674058 Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Wed, 14 Aug 2019 18:55:34 +0100
Subject: [PATCH] memory: Provide an equality function for MemoryRegionSections
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Provide a comparison function that checks all the fields are the same.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-3-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 9366cf02e4e31c2a8128904d4d8290a0fad5f888)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
include/exec/memory.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index f0f0767..ba0ce25 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -493,6 +493,18 @@ struct MemoryRegionSection {
bool nonvolatile;
};
+static inline bool MemoryRegionSection_eq(MemoryRegionSection *a,
+ MemoryRegionSection *b)
+{
+ return a->mr == b->mr &&
+ a->fv == b->fv &&
+ a->offset_within_region == b->offset_within_region &&
+ a->offset_within_address_space == b->offset_within_address_space &&
+ int128_eq(a->size, b->size) &&
+ a->readonly == b->readonly &&
+ a->nonvolatile == b->nonvolatile;
+}
+
/**
* memory_region_init: Initialize a memory region
*
--
1.8.3.1

@@ -0,0 +1,61 @@
From 609aad11051c6f2053cc32b4881f5581c92435f3 Mon Sep 17 00:00:00 2001
From: Max Reitz <mreitz@redhat.com>
Date: Mon, 14 Oct 2019 17:39:28 +0200
Subject: [PATCH] mirror: Do not dereference invalid pointers
mirror_exit_common() may be called twice (if it is called from
mirror_prepare() and fails, it will be called from mirror_abort()
again).
In such a case, many of the pointers in the MirrorBlockJob object will
already be freed. This can be seen most reliably for s->target, which
is set to NULL (and then dereferenced by blk_bs()).
Cc: qemu-stable@nongnu.org
Fixes: 737efc1eda23b904fbe0e66b37715fb0e5c3e58b
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-id: 20191014153931.20699-2-mreitz@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit f93c3add3a773e0e3f6277e5517583c4ad3a43c2)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/mirror.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index 062dc42..408486c 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -617,11 +617,11 @@ static int mirror_exit_common(Job *job)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
BlockJob *bjob = &s->common;
- MirrorBDSOpaque *bs_opaque = s->mirror_top_bs->opaque;
+ MirrorBDSOpaque *bs_opaque;
AioContext *replace_aio_context = NULL;
- BlockDriverState *src = s->mirror_top_bs->backing->bs;
- BlockDriverState *target_bs = blk_bs(s->target);
- BlockDriverState *mirror_top_bs = s->mirror_top_bs;
+ BlockDriverState *src;
+ BlockDriverState *target_bs;
+ BlockDriverState *mirror_top_bs;
Error *local_err = NULL;
bool abort = job->ret < 0;
int ret = 0;
@@ -631,6 +631,11 @@ static int mirror_exit_common(Job *job)
}
s->prepared = true;
+ mirror_top_bs = s->mirror_top_bs;
+ bs_opaque = mirror_top_bs->opaque;
+ src = mirror_top_bs->backing->bs;
+ target_bs = blk_bs(s->target);
+
if (bdrv_chain_contains(src, target_bs)) {
bdrv_unfreeze_backing_chain(mirror_top_bs, target_bs);
}
--
1.8.3.1
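
The fix is an ordering change, sketched below with invented minimal types: when the exit path can run twice, every dereference of state that the first pass may have freed or cleared has to sit below the "already prepared" early return.

#include <stdbool.h>
#include <stddef.h>

struct blk_sketch { int dummy; };

struct mirror_sketch {
    bool prepared;
    struct blk_sketch *target;      /* NULL once the first pass has run */
};

static int exit_common_sketch(struct mirror_sketch *s)
{
    struct blk_sketch *target_bs;

    if (s->prepared) {
        return 0;                   /* second call: bail out immediately */
    }
    s->prepared = true;

    /* only reached on the first call, so the members are still valid */
    target_bs = s->target;
    (void)target_bs;
    return 0;
}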

@@ -0,0 +1,59 @@
From 3d83643fb8d69f1c38df3e90634f9b82d4a62a1c Mon Sep 17 00:00:00 2001
From: Max Reitz <mreitz@redhat.com>
Date: Thu, 10 Oct 2019 12:08:57 +0200
Subject: [PATCH] qcow2: Limit total allocation range to INT_MAX
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
When the COW areas are included, the size of an allocation can exceed
INT_MAX. This is kind of limited by handle_alloc() in that it already
caps avail_bytes at INT_MAX, but the number of clusters still reflects
the original length.
This can have all sorts of effects, ranging from the storage layer write
call failing to image corruption. (If there were no image corruption,
then I suppose there would be data loss because the .cow_end area is
forced to be empty, even though there might be something we need to
COW.)
Fix all of it by limiting nb_clusters so the equivalent number of bytes
will not exceed INT_MAX.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit d1b9d19f99586b33795e20a79f645186ccbc070f)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/qcow2-cluster.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index 974a4e8..c4a99c1 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -1342,6 +1342,9 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
nb_clusters = MIN(nb_clusters, s->l2_slice_size - l2_index);
assert(nb_clusters <= INT_MAX);
+ /* Limit total allocation byte count to INT_MAX */
+ nb_clusters = MIN(nb_clusters, INT_MAX >> s->cluster_bits);
+
/* Find L2 entry for the first involved cluster */
ret = get_cluster_table(bs, guest_offset, &l2_slice, &l2_index);
if (ret < 0) {
@@ -1430,7 +1433,7 @@ static int handle_alloc(BlockDriverState *bs, uint64_t guest_offset,
* request actually writes to (excluding COW at the end)
*/
uint64_t requested_bytes = *bytes + offset_into_cluster(s, guest_offset);
- int avail_bytes = MIN(INT_MAX, nb_clusters << s->cluster_bits);
+ int avail_bytes = nb_clusters << s->cluster_bits;
int nb_bytes = MIN(requested_bytes, avail_bytes);
QCowL2Meta *old_m = *m;
--
1.8.3.1
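
A small sketch of the new cap (the MIN macro and function are stand-ins, not the QEMU code): the cluster count is limited so that the corresponding byte count can no longer exceed INT_MAX, which keeps the later conversion to a signed int safe.

#include <limits.h>
#include <stdint.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

static uint64_t cap_alloc_clusters(uint64_t nb_clusters, unsigned cluster_bits)
{
    /* e.g. 64 KiB clusters (cluster_bits = 16): at most 32767 clusters */
    return MIN(nb_clusters, (uint64_t)(INT_MAX >> cluster_bits));
}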

@@ -0,0 +1,69 @@
From 66ad3c6ecce098d8f01545859c5ebf7a9e505e2c Mon Sep 17 00:00:00 2001
From: Tuguoyi <tu.guoyi@h3c.com>
Date: Fri, 1 Nov 2019 07:37:35 +0000
Subject: [PATCH] qcow2-bitmap: Fix uint64_t left-shift overflow
There are two issues in check_constraints_on_bitmap():
1) The sanity check on the granularity causes a uint64_t
left-shift overflow when cluster_size is 2M and the
granularity is BIGGER than 32K.
2) The calculation of the image size that the largest
supported bitmap can map is slightly incorrect.
This patch fixes both by adding a helper function that calculates the
number of bytes needed by a normal bitmap for the image and comparing
that to the maximum number of bitmap bytes supported by qemu.
Fixes: 5f72826e7fc62167cf3a
Signed-off-by: Guoyi Tu <tu.guoyi@h3c.com>
Message-id: 4ba40cd1e7ee4a708b40899952e49f22@h3c.com
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 570542ecb11e04b61ef4b3f4d0965a6915232a88)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
block/qcow2-bitmap.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/block/qcow2-bitmap.c b/block/qcow2-bitmap.c
index e53a160..997923f 100644
--- a/block/qcow2-bitmap.c
+++ b/block/qcow2-bitmap.c
@@ -143,6 +143,13 @@ static int check_table_entry(uint64_t entry, int cluster_size)
return 0;
}
+static int64_t get_bitmap_bytes_needed(int64_t len, uint32_t granularity)
+{
+ int64_t num_bits = DIV_ROUND_UP(len, granularity);
+
+ return DIV_ROUND_UP(num_bits, 8);
+}
+
static int check_constraints_on_bitmap(BlockDriverState *bs,
const char *name,
uint32_t granularity,
@@ -151,6 +158,7 @@ static int check_constraints_on_bitmap(BlockDriverState *bs,
BDRVQcow2State *s = bs->opaque;
int granularity_bits = ctz32(granularity);
int64_t len = bdrv_getlength(bs);
+ int64_t bitmap_bytes;
assert(granularity > 0);
assert((granularity & (granularity - 1)) == 0);
@@ -172,9 +180,9 @@ static int check_constraints_on_bitmap(BlockDriverState *bs,
return -EINVAL;
}
- if ((len > (uint64_t)BME_MAX_PHYS_SIZE << granularity_bits) ||
- (len > (uint64_t)BME_MAX_TABLE_SIZE * s->cluster_size <<
- granularity_bits))
+ bitmap_bytes = get_bitmap_bytes_needed(len, granularity);
+ if ((bitmap_bytes > (uint64_t)BME_MAX_PHYS_SIZE) ||
+ (bitmap_bytes > (uint64_t)BME_MAX_TABLE_SIZE * s->cluster_size))
{
error_setg(errp, "Too much space will be occupied by the bitmap. "
"Use larger granularity");
--
1.8.3.1
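
The helper's arithmetic, shown as a standalone sketch with the macro spelled out: one bit of bitmap per granularity-sized chunk of the image, packed eight bits to a byte, with no left shift that could overflow.

#include <stdint.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int64_t bitmap_bytes_needed(int64_t len, uint32_t granularity)
{
    int64_t num_bits = DIV_ROUND_UP(len, granularity);   /* one bit per chunk */

    return DIV_ROUND_UP(num_bits, 8);                    /* bits -> bytes */
}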

@@ -1,6 +1,6 @@
Name: qemu
Version: 4.0.1
-Release: 9
+Release: 10
Epoch: 2
Summary: QEMU is a generic and open source machine emulator and virtualizer
License: GPLv2 and BSD and MIT and CC-BY
@@ -49,6 +49,16 @@ Patch0036: slirp-use-correct-size-while-emulating-commands.patch
Patch0037: tcp_emu-fix-unsafe-snprintf-usages.patch
Patch0038: block-iscsi-use-MIN-between-mx_sb_len-and-sb_len_wr.patch
Patch0039: monitor-fix-memory-leak-in-monitor_fdset_dup_fd_find.patch
+Patch0040: vhost-Fix-memory-region-section-comparison.patch
+Patch0041: memory-Align-MemoryRegionSections-fields.patch
+Patch0042: memory-Provide-an-equality-function-for-MemoryRegion.patch
+Patch0043: file-posix-Handle-undetectable-alignment.patch
+Patch0044: block-backup-fix-max_transfer-handling-for-copy_rang.patch
+Patch0045: block-backup-fix-backup_cow_with_offload-for-last-cl.patch
+Patch0046: qcow2-Limit-total-allocation-range-to-INT_MAX.patch
+Patch0047: mirror-Do-not-dereference-invalid-pointers.patch
+Patch0048: COLO-compare-Fix-incorrect-if-logic.patch
+Patch0049: qcow2-bitmap-Fix-uint64_t-left-shift-overflow.patch
BuildRequires: flex
BuildRequires: bison
@@ -382,7 +392,10 @@ getent passwd qemu >/dev/null || \
%endif

%changelog
-* Thu Mar 16 2020 Huawei Technologies Co., Ltd. <kuhn.chenqun@huawei.com>
+* Mon Mar 16 2020 backport some bug fix patches from upstream
+- Patch from number 0040 to 0049 are picked from stable-4.1.1
+
+* Mon Mar 16 2020 Huawei Technologies Co., Ltd. <kuhn.chenqun@huawei.com>
- moniter: fix memleak in monitor_fdset_dup_fd_find_remove
- block/iscsi: use MIN() between mx_sb_len and sb_len_wr
@@ -457,3 +470,4 @@ getent passwd qemu >/dev/null || \
* Tue Jul 23 2019 openEuler Buildteam <buildteam@openeuler.org> - version-release
- Package init

@@ -0,0 +1,45 @@
From 8df0aa6ebb27fcf535a7d28cfdc006cd9a34a041 Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Wed, 14 Aug 2019 18:55:35 +0100
Subject: [PATCH] vhost: Fix memory region section comparison
Using memcmp to compare structures wasn't safe,
as I found out on ARM when I was getting false miscompares.
Use the helper function for comparing the MemoryRegionSections.
Fixes: ade6d081fc33948e56e6 ("vhost: Regenerate region list from changed sections list")
Cc: qemu-stable@nongnu.org
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190814175535.2023-4-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit 3fc4a64cbaed2ddee4c60ddc06740b320e18ab82)
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
---
hw/virtio/vhost.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index 6d3a013..221e635 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -451,8 +451,13 @@ static void vhost_commit(MemoryListener *listener)
changed = true;
} else {
/* Same size, lets check the contents */
- changed = n_old_sections && memcmp(dev->mem_sections, old_sections,
- n_old_sections * sizeof(old_sections[0])) != 0;
+ for (int i = 0; i < n_old_sections; i++) {
+ if (!MemoryRegionSection_eq(&old_sections[i],
+ &dev->mem_sections[i])) {
+ changed = true;
+ break;
+ }
+ }
}
trace_vhost_commit(dev->started, changed);
--
1.8.3.1
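
A self-contained illustration of the failure mode described above (the struct is invented, not the QEMU one): two values whose fields are identical can still memcmp() as different, because the comparison also reads the padding bytes, which hold whatever junk was there before.

#include <stdio.h>
#include <string.h>

struct section {
    char tag;            /* padding bytes follow before 'size' */
    long long size;
};

int main(void)
{
    struct section a, b;

    memset(&a, 0xAA, sizeof(a));    /* different junk in the padding ... */
    memset(&b, 0xBB, sizeof(b));
    a.tag = b.tag = 1;              /* ... but identical field values   */
    a.size = b.size = 4096;

    printf("memcmp: %s, field-wise: %s\n",
           memcmp(&a, &b, sizeof(a)) ? "different" : "equal",
           (a.tag == b.tag && a.size == b.size) ? "equal" : "different");
    return 0;
}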