QEMU update to version 8.2.0-19:

- mac_dbdma: Remove leftover `dma_memory_unmap` calls (CVE-2024-8612)
- softmmu: Support concurrent bounce buffers (CVE-2024-8612)
- system/physmem: Per-AddressSpace bounce buffering
- system/physmem: Propagate AddressSpace to MapClient helpers

Signed-off-by: Jiabo Feng <fengjiabo1@huawei.com>
Jiabo Feng <fengjiabo1@huawei.com>  2024-10-14 20:07:41 +08:00
commit 6cd8af4c81 (parent 7c4551d1e1)
5 changed files with 843 additions and 2 deletions

mac_dbdma-Remove-leftover-dma_memory_unmap-calls-CVE.patch (new file)

@@ -0,0 +1,71 @@
From 234034ba7e8ab516f12cb199fc45cfe7229eb281 Mon Sep 17 00:00:00 2001
From: Mattias Nissler <mnissler@rivosinc.com>
Date: Mon, 16 Sep 2024 10:57:08 -0700
Subject: [PATCH 4/4] mac_dbdma: Remove leftover `dma_memory_unmap`
calls(CVE-2024-8612)
cherry-pick from 2d0a071e625d7234e8c5623b7e7bf445e1bef72c
These were passing a NULL buffer pointer unconditionally, which happens
to behave in a mostly benign way (except for the chance of an excess
memory region unref and a bounce buffer leak). Per the function comment,
this was never meant to be accepted though, and triggers an assertion
with the "softmmu: Support concurrent bounce buffers" change.
Given that the code in question never sets up any mappings, just remove
the unnecessary dma_memory_unmap calls along with the DBDMA_io struct
fields that are now entirely unused.
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Message-Id: <20240916175708.1829059-1-mnissler@rivosinc.com>
Fixes: be1e343995 ("macio: switch over to new byte-aligned DMA helpers")
Reviewed-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
---
hw/ide/macio.c | 6 ------
include/hw/ppc/mac_dbdma.h | 4 ----
2 files changed, 10 deletions(-)
diff --git a/hw/ide/macio.c b/hw/ide/macio.c
index dca1cc9efc..3d895c07f4 100644
--- a/hw/ide/macio.c
+++ b/hw/ide/macio.c
@@ -119,9 +119,6 @@ static void pmac_ide_atapi_transfer_cb(void *opaque, int ret)
return;
done:
- dma_memory_unmap(&address_space_memory, io->dma_mem, io->dma_len,
- io->dir, io->dma_len);
-
if (ret < 0) {
block_acct_failed(blk_get_stats(s->blk), &s->acct);
} else {
@@ -202,9 +199,6 @@ static void pmac_ide_transfer_cb(void *opaque, int ret)
return;
done:
- dma_memory_unmap(&address_space_memory, io->dma_mem, io->dma_len,
- io->dir, io->dma_len);
-
if (s->dma_cmd == IDE_DMA_READ || s->dma_cmd == IDE_DMA_WRITE) {
if (ret < 0) {
block_acct_failed(blk_get_stats(s->blk), &s->acct);
diff --git a/include/hw/ppc/mac_dbdma.h b/include/hw/ppc/mac_dbdma.h
index 4a3f644516..c774f6bf84 100644
--- a/include/hw/ppc/mac_dbdma.h
+++ b/include/hw/ppc/mac_dbdma.h
@@ -44,10 +44,6 @@ struct DBDMA_io {
DBDMA_end dma_end;
/* DMA is in progress, don't start another one */
bool processing;
- /* DMA request */
- void *dma_mem;
- dma_addr_t dma_len;
- DMADirection dir;
};
/*
--
2.45.1.windows.1
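
For context: the removed dma_memory_unmap calls had no matching dma_memory_map, so they always passed a NULL buffer. A minimal sketch of the intended pairing, assuming QEMU's sysemu/dma.h helpers (error handling trimmed; not part of the patch):

#include "qemu/osdep.h"
#include "sysemu/dma.h"

static void dma_copy_example(AddressSpace *as, dma_addr_t addr, dma_addr_t len)
{
    dma_addr_t mapped_len = len;
    void *buf = dma_memory_map(as, addr, &mapped_len,
                               DMA_DIRECTION_FROM_DEVICE,
                               MEMTXATTRS_UNSPECIFIED);

    if (!buf) {
        return;                         /* nothing was mapped */
    }

    /* ... produce device data into buf[0 .. mapped_len) ... */

    /* Every successful map is balanced by exactly one unmap. */
    dma_memory_unmap(as, buf, mapped_len, DMA_DIRECTION_FROM_DEVICE,
                     mapped_len);
}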

qemu.spec

@@ -3,7 +3,7 @@
 Name: qemu
 Version: 8.2.0
-Release: 18
+Release: 19
 Epoch: 11
 Summary: QEMU is a generic and open source machine emulator and virtualizer
 License: GPLv2 and BSD and MIT and CC-BY-SA-4.0
@@ -355,7 +355,10 @@ Patch0338: migration-colo-Fix-bdrv_graph_rdlock_main_loop-Asser.patch
 Patch0339: load_elf-fix-iterator-s-type-for-elf-file-processing.patch
 Patch0340: hw-loongarch-Fix-fdt-memory-node-wrong-reg.patch
 Patch0341: hw-loongarch-virt-Fix-FDT-memory-node-address-width.patch
+Patch0342: system-physmem-Propagate-AddressSpace-to-MapClient-h.patch
+Patch0343: system-physmem-Per-AddressSpace-bounce-buffering.patch
+Patch0344: softmmu-Support-concurrent-bounce-buffers-CVE-2024-8.patch
+Patch0345: mac_dbdma-Remove-leftover-dma_memory_unmap-calls-CVE.patch
 BuildRequires: flex
 BuildRequires: gcc
@@ -953,6 +956,12 @@ getent passwd qemu >/dev/null || \
 %endif
 %changelog
+* Mon Oct 14 2024 Jiabo Feng <fengjiabo1@huawei.com> - 11:8.2.0-19
+- mac_dbdma: Remove leftover `dma_memory_unmap` calls(CVE-2024-8612)
+- softmmu: Support concurrent bounce buffers(CVE-2024-8612)
+- system/physmem: Per-AddressSpace bounce buffering
+- system/physmem: Propagate AddressSpace to MapClient helpers
 * Wed Sep 18 2024 Jiabo Feng <fengjiabo1@huawei.com> - 11:8.2.0-18
 - hw/loongarch/virt: Fix FDT memory node address width
 - hw/loongarch: Fix fdt memory node wrong 'reg'

softmmu-Support-concurrent-bounce-buffers-CVE-2024-8.patch (new file)

@@ -0,0 +1,289 @@
From 17ba0dab19bd20d6388ce26e71b02c211e1d4690 Mon Sep 17 00:00:00 2001
From: Mattias Nissler <mnissler@rivosinc.com>
Date: Mon, 19 Aug 2024 06:54:54 -0700
Subject: [PATCH 3/4] softmmu: Support concurrent bounce buffers(CVE-2024-8612)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
cherry-pick from 637b0aa139565cb82a7b9269e62214f87082635c
When DMA memory can't be directly accessed, as is the case when
running the device model in a separate process without shareable DMA
file descriptors, bounce buffering is used.
It is not uncommon for device models to request mapping of several DMA
regions at the same time. Examples include:
* net devices, e.g. when transmitting a packet that is split across
several TX descriptors (observed with igb)
* USB host controllers, when handling a packet with multiple data TRBs
(observed with xhci)
Previously, qemu only provided a single bounce buffer per AddressSpace
and would fail DMA map requests while the buffer was already in use. In
turn, this would cause DMA failures that ultimately manifest as hardware
errors from the guest perspective.
This change allocates DMA bounce buffers dynamically instead of
supporting only a single buffer. Thus, multiple DMA mappings work
correctly also when RAM can't be mmap()-ed.
The total bounce buffer allocation size is limited individually for each
AddressSpace. The default limit is 4096 bytes, matching the previous
maximum buffer size. A new x-max-bounce-buffer-size parameter is
provided to configure the limit for PCI devices.
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Acked-by: Peter Xu <peterx@redhat.com>
Link: https://lore.kernel.org/r/20240819135455.2957406-1-mnissler@rivosinc.com
Signed-off-by: Peter Xu <peterx@redhat.com>
---
hw/pci/pci.c | 8 ++++
include/exec/memory.h | 14 +++----
include/hw/pci/pci_device.h | 3 ++
system/memory.c | 5 ++-
system/physmem.c | 82 ++++++++++++++++++++++++++-----------
5 files changed, 76 insertions(+), 36 deletions(-)
diff --git a/hw/pci/pci.c b/hw/pci/pci.c
index 9da41088df..7467a2a9de 100644
--- a/hw/pci/pci.c
+++ b/hw/pci/pci.c
@@ -85,6 +85,8 @@ static Property pci_props[] = {
QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
DEFINE_PROP_BIT("x-pcie-ari-nextfn-1", PCIDevice, cap_present,
QEMU_PCIE_ARI_NEXTFN_1_BITNR, false),
+ DEFINE_PROP_SIZE32("x-max-bounce-buffer-size", PCIDevice,
+ max_bounce_buffer_size, DEFAULT_MAX_BOUNCE_BUFFER_SIZE),
DEFINE_PROP_END_OF_LIST()
};
@@ -1201,6 +1203,8 @@ static PCIDevice *do_pci_register_device(PCIDevice *pci_dev,
"bus master container", UINT64_MAX);
address_space_init(&pci_dev->bus_master_as,
&pci_dev->bus_master_container_region, pci_dev->name);
+ pci_dev->bus_master_as.max_bounce_buffer_size =
+ pci_dev->max_bounce_buffer_size;
if (phase_check(PHASE_MACHINE_READY)) {
pci_init_bus_master(pci_dev);
@@ -2658,6 +2662,10 @@ static void pci_device_class_init(ObjectClass *klass, void *data)
k->unrealize = pci_qdev_unrealize;
k->bus_type = TYPE_PCI_BUS;
device_class_set_props(k, pci_props);
+ object_class_property_set_description(
+ klass, "x-max-bounce-buffer-size",
+ "Maximum buffer size allocated for bounce buffers used for mapped "
+ "access to indirect DMA memory");
}
static void pci_device_class_base_init(ObjectClass *klass, void *data)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 40dcf70530..73d274d8f3 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1111,13 +1111,7 @@ typedef struct AddressSpaceMapClient {
QLIST_ENTRY(AddressSpaceMapClient) link;
} AddressSpaceMapClient;
-typedef struct {
- MemoryRegion *mr;
- void *buffer;
- hwaddr addr;
- hwaddr len;
- bool in_use;
-} BounceBuffer;
+#define DEFAULT_MAX_BOUNCE_BUFFER_SIZE (4096)
/**
* struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
@@ -1138,8 +1132,10 @@ struct AddressSpace {
QTAILQ_HEAD(, MemoryListener) listeners;
QTAILQ_ENTRY(AddressSpace) address_spaces_link;
- /* Bounce buffer to use for this address space. */
- BounceBuffer bounce;
+ /* Maximum DMA bounce buffer size used for indirect memory map requests */
+ size_t max_bounce_buffer_size;
+ /* Total size of bounce buffers currently allocated, atomically accessed */
+ size_t bounce_buffer_size;
/* List of callbacks to invoke when buffers free up */
QemuMutex map_client_list_lock;
QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
diff --git a/include/hw/pci/pci_device.h b/include/hw/pci/pci_device.h
index d3dd0f64b2..253b48a688 100644
--- a/include/hw/pci/pci_device.h
+++ b/include/hw/pci/pci_device.h
@@ -160,6 +160,9 @@ struct PCIDevice {
/* ID of standby device in net_failover pair */
char *failover_pair_id;
uint32_t acpi_index;
+
+ /* Maximum DMA bounce buffer size used for indirect memory map requests */
+ uint32_t max_bounce_buffer_size;
};
static inline int pci_intx(PCIDevice *pci_dev)
diff --git a/system/memory.c b/system/memory.c
index 026e47dcb8..1ae03074f3 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -3117,7 +3117,8 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
as->ioeventfds = NULL;
QTAILQ_INIT(&as->listeners);
QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
- as->bounce.in_use = false;
+ as->max_bounce_buffer_size = DEFAULT_MAX_BOUNCE_BUFFER_SIZE;
+ as->bounce_buffer_size = 0;
qemu_mutex_init(&as->map_client_list_lock);
QLIST_INIT(&as->map_client_list);
as->name = g_strdup(name ? name : "anonymous");
@@ -3127,7 +3128,7 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
static void do_address_space_destroy(AddressSpace *as)
{
- assert(!qatomic_read(&as->bounce.in_use));
+ assert(qatomic_read(&as->bounce_buffer_size) == 0);
assert(QLIST_EMPTY(&as->map_client_list));
qemu_mutex_destroy(&as->map_client_list_lock);
diff --git a/system/physmem.c b/system/physmem.c
index 4491a7dbd1..2c8b83f811 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3021,6 +3021,20 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
NULL, len, FLUSH_CACHE);
}
+/*
+ * A magic value stored in the first 8 bytes of the bounce buffer struct. Used
+ * to detect illegal pointers passed to address_space_unmap.
+ */
+#define BOUNCE_BUFFER_MAGIC 0xb4017ceb4ffe12ed
+
+typedef struct {
+ uint64_t magic;
+ MemoryRegion *mr;
+ hwaddr addr;
+ size_t len;
+ uint8_t buffer[];
+} BounceBuffer;
+
static void
address_space_unregister_map_client_do(AddressSpaceMapClient *client)
{
@@ -3046,9 +3060,9 @@ void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
qemu_mutex_lock(&as->map_client_list_lock);
client->bh = bh;
QLIST_INSERT_HEAD(&as->map_client_list, client, link);
- /* Write map_client_list before reading in_use. */
+ /* Write map_client_list before reading bounce_buffer_size. */
smp_mb();
- if (!qatomic_read(&as->bounce.in_use)) {
+ if (qatomic_read(&as->bounce_buffer_size) < as->max_bounce_buffer_size) {
address_space_notify_map_clients_locked(as);
}
qemu_mutex_unlock(&as->map_client_list_lock);
@@ -3178,28 +3192,40 @@ void *address_space_map(AddressSpace *as,
mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
if (!memory_access_is_direct(mr, is_write)) {
- if (qatomic_xchg(&as->bounce.in_use, true)) {
+ size_t used = qatomic_read(&as->bounce_buffer_size);
+ for (;;) {
+ hwaddr alloc = MIN(as->max_bounce_buffer_size - used, l);
+ size_t new_size = used + alloc;
+ size_t actual =
+ qatomic_cmpxchg(&as->bounce_buffer_size, used, new_size);
+ if (actual == used) {
+ l = alloc;
+ break;
+ }
+ used = actual;
+ }
+
+ if (l == 0) {
*plen = 0;
return NULL;
}
- /* Avoid unbounded allocations */
- l = MIN(l, TARGET_PAGE_SIZE);
- as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
- as->bounce.addr = addr;
- as->bounce.len = l;
+ BounceBuffer *bounce = g_malloc0(l + sizeof(BounceBuffer));
+ bounce->magic = BOUNCE_BUFFER_MAGIC;
memory_region_ref(mr);
- as->bounce.mr = mr;
+ bounce->mr = mr;
+ bounce->addr = addr;
+ bounce->len = l;
+
if (!is_write) {
flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
- as->bounce.buffer, l);
+ bounce->buffer, l);
}
*plen = l;
- return as->bounce.buffer;
+ return bounce->buffer;
}
-
memory_region_ref(mr);
*plen = flatview_extend_translation(fv, addr, len, mr, xlat,
l, is_write, attrs);
@@ -3214,12 +3240,11 @@ void *address_space_map(AddressSpace *as,
void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
bool is_write, hwaddr access_len)
{
- if (buffer != as->bounce.buffer) {
- MemoryRegion *mr;
- ram_addr_t addr1;
+ MemoryRegion *mr;
+ ram_addr_t addr1;
- mr = memory_region_from_host(buffer, &addr1);
- assert(mr != NULL);
+ mr = memory_region_from_host(buffer, &addr1);
+ if (mr != NULL) {
if (is_write) {
invalidate_and_set_dirty(mr, addr1, access_len);
}
@@ -3229,15 +3254,22 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
memory_region_unref(mr);
return;
}
+
+
+ BounceBuffer *bounce = container_of(buffer, BounceBuffer, buffer);
+ assert(bounce->magic == BOUNCE_BUFFER_MAGIC);
+
if (is_write) {
- address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
- as->bounce.buffer, access_len);
- }
- qemu_vfree(as->bounce.buffer);
- as->bounce.buffer = NULL;
- memory_region_unref(as->bounce.mr);
- /* Clear in_use before reading map_client_list. */
- qatomic_set_mb(&as->bounce.in_use, false);
+ address_space_write(as, bounce->addr, MEMTXATTRS_UNSPECIFIED,
+ bounce->buffer, access_len);
+ }
+
+ qatomic_sub(&as->bounce_buffer_size, bounce->len);
+ bounce->magic = ~BOUNCE_BUFFER_MAGIC;
+ memory_region_unref(bounce->mr);
+ g_free(bounce);
+ /* Write bounce_buffer_size before reading map_client_list. */
+ smp_mb();
address_space_notify_map_clients(as);
}
--
2.45.1.windows.1
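
The heart of the change is the lock-free accounting that caps total bounce-buffer memory per AddressSpace (default 4096 bytes, adjustable per PCI device via the new x-max-bounce-buffer-size property). A standalone C11 sketch of that reservation pattern — illustrative only; QEMU itself uses its qatomic_* wrappers on the AddressSpace fields shown in the diff above:

#include <stdatomic.h>
#include <stddef.h>

#define MAX_BOUNCE_BUFFER_SIZE 4096   /* default per-AddressSpace limit */

static _Atomic size_t bounce_buffer_size;

/* Returns how many bytes were actually reserved (0 once the limit is hit). */
static size_t reserve_bounce_bytes(size_t want)
{
    size_t used = atomic_load(&bounce_buffer_size);
    for (;;) {
        size_t room = MAX_BOUNCE_BUFFER_SIZE - used;
        size_t alloc = want < room ? want : room;
        if (atomic_compare_exchange_weak(&bounce_buffer_size, &used,
                                         used + alloc)) {
            return alloc;
        }
        /* CAS failed: `used` now holds the freshly observed value; retry. */
    }
}

/* Called when a bounce buffer is freed, mirroring the qatomic_sub above. */
static void release_bounce_bytes(size_t len)
{
    atomic_fetch_sub(&bounce_buffer_size, len);
}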

system-physmem-Per-AddressSpace-bounce-buffering.patch (new file)

@@ -0,0 +1,265 @@
From 215731d484366474a90a2e14f3a75bb84fd314a3 Mon Sep 17 00:00:00 2001
From: Mattias Nissler <mnissler@rivosinc.com>
Date: Thu, 7 Sep 2023 06:04:23 -0700
Subject: [PATCH 2/4] system/physmem: Per-AddressSpace bounce buffering
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
cherry-pick from 69e78f1b3484e429274352a464a94fa1d78be339
Instead of using a single global bounce buffer, give each AddressSpace
its own bounce buffer. The MapClient callback mechanism moves to
AddressSpace accordingly.
This is in preparation for generalizing bounce buffer handling further
to allow multiple bounce buffers, with a total allocation limit
configured per AddressSpace.
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Message-ID: <20240507094210.300566-2-mnissler@rivosinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
[PMD: Split patch, part 2/2]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: liuxiangdong <liuxiangdong5@huawei.com>
---
include/exec/memory.h | 19 +++++++++++
system/memory.c | 7 ++++
system/physmem.c | 79 ++++++++++++++++---------------------------
3 files changed, 56 insertions(+), 49 deletions(-)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 4b7dc7f055..40dcf70530 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -1106,6 +1106,19 @@ struct MemoryListener {
QTAILQ_ENTRY(MemoryListener) link_as;
};
+typedef struct AddressSpaceMapClient {
+ QEMUBH *bh;
+ QLIST_ENTRY(AddressSpaceMapClient) link;
+} AddressSpaceMapClient;
+
+typedef struct {
+ MemoryRegion *mr;
+ void *buffer;
+ hwaddr addr;
+ hwaddr len;
+ bool in_use;
+} BounceBuffer;
+
/**
* struct AddressSpace: describes a mapping of addresses to #MemoryRegion objects
*/
@@ -1124,6 +1137,12 @@ struct AddressSpace {
struct MemoryRegionIoeventfd *ioeventfds;
QTAILQ_HEAD(, MemoryListener) listeners;
QTAILQ_ENTRY(AddressSpace) address_spaces_link;
+
+ /* Bounce buffer to use for this address space. */
+ BounceBuffer bounce;
+ /* List of callbacks to invoke when buffers free up */
+ QemuMutex map_client_list_lock;
+ QLIST_HEAD(, AddressSpaceMapClient) map_client_list;
};
typedef struct AddressSpaceDispatch AddressSpaceDispatch;
diff --git a/system/memory.c b/system/memory.c
index fb817e54bc..026e47dcb8 100644
--- a/system/memory.c
+++ b/system/memory.c
@@ -3117,6 +3117,9 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
as->ioeventfds = NULL;
QTAILQ_INIT(&as->listeners);
QTAILQ_INSERT_TAIL(&address_spaces, as, address_spaces_link);
+ as->bounce.in_use = false;
+ qemu_mutex_init(&as->map_client_list_lock);
+ QLIST_INIT(&as->map_client_list);
as->name = g_strdup(name ? name : "anonymous");
address_space_update_topology(as);
address_space_update_ioeventfds(as);
@@ -3124,6 +3127,10 @@ void address_space_init(AddressSpace *as, MemoryRegion *root, const char *name)
static void do_address_space_destroy(AddressSpace *as)
{
+ assert(!qatomic_read(&as->bounce.in_use));
+ assert(QLIST_EMPTY(&as->map_client_list));
+ qemu_mutex_destroy(&as->map_client_list_lock);
+
assert(QTAILQ_EMPTY(&as->listeners));
flatview_unref(as->current_map);
diff --git a/system/physmem.c b/system/physmem.c
index 1d01e7a32b..4491a7dbd1 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3021,26 +3021,8 @@ void cpu_flush_icache_range(hwaddr start, hwaddr len)
NULL, len, FLUSH_CACHE);
}
-typedef struct {
- MemoryRegion *mr;
- void *buffer;
- hwaddr addr;
- hwaddr len;
- bool in_use;
-} BounceBuffer;
-
-static BounceBuffer bounce;
-
-typedef struct MapClient {
- QEMUBH *bh;
- QLIST_ENTRY(MapClient) link;
-} MapClient;
-
-QemuMutex map_client_list_lock;
-static QLIST_HEAD(, MapClient) map_client_list
- = QLIST_HEAD_INITIALIZER(map_client_list);
-
-static void address_space_unregister_map_client_do(MapClient *client)
+static void
+address_space_unregister_map_client_do(AddressSpaceMapClient *client)
{
QLIST_REMOVE(client, link);
g_free(client);
@@ -3048,10 +3030,10 @@ static void address_space_unregister_map_client_do(MapClient *client)
static void address_space_notify_map_clients_locked(AddressSpace *as)
{
- MapClient *client;
+ AddressSpaceMapClient *client;
- while (!QLIST_EMPTY(&map_client_list)) {
- client = QLIST_FIRST(&map_client_list);
+ while (!QLIST_EMPTY(&as->map_client_list)) {
+ client = QLIST_FIRST(&as->map_client_list);
qemu_bh_schedule(client->bh);
address_space_unregister_map_client_do(client);
}
@@ -3059,17 +3041,17 @@ static void address_space_notify_map_clients_locked(AddressSpace *as)
void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
{
- MapClient *client = g_malloc(sizeof(*client));
+ AddressSpaceMapClient *client = g_malloc(sizeof(*client));
- qemu_mutex_lock(&map_client_list_lock);
+ qemu_mutex_lock(&as->map_client_list_lock);
client->bh = bh;
- QLIST_INSERT_HEAD(&map_client_list, client, link);
+ QLIST_INSERT_HEAD(&as->map_client_list, client, link);
/* Write map_client_list before reading in_use. */
smp_mb();
- if (!qatomic_read(&bounce.in_use)) {
+ if (!qatomic_read(&as->bounce.in_use)) {
address_space_notify_map_clients_locked(as);
}
- qemu_mutex_unlock(&map_client_list_lock);
+ qemu_mutex_unlock(&as->map_client_list_lock);
}
void cpu_exec_init_all(void)
@@ -3085,28 +3067,27 @@ void cpu_exec_init_all(void)
finalize_target_page_bits();
io_mem_init();
memory_map_init();
- qemu_mutex_init(&map_client_list_lock);
}
void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh)
{
- MapClient *client;
+ AddressSpaceMapClient *client;
- qemu_mutex_lock(&map_client_list_lock);
- QLIST_FOREACH(client, &map_client_list, link) {
+ qemu_mutex_lock(&as->map_client_list_lock);
+ QLIST_FOREACH(client, &as->map_client_list, link) {
if (client->bh == bh) {
address_space_unregister_map_client_do(client);
break;
}
}
- qemu_mutex_unlock(&map_client_list_lock);
+ qemu_mutex_unlock(&as->map_client_list_lock);
}
static void address_space_notify_map_clients(AddressSpace *as)
{
- qemu_mutex_lock(&map_client_list_lock);
+ qemu_mutex_lock(&as->map_client_list_lock);
address_space_notify_map_clients_locked(as);
- qemu_mutex_unlock(&map_client_list_lock);
+ qemu_mutex_unlock(&as->map_client_list_lock);
}
static bool flatview_access_valid(FlatView *fv, hwaddr addr, hwaddr len,
@@ -3197,25 +3178,25 @@ void *address_space_map(AddressSpace *as,
mr = flatview_translate(fv, addr, &xlat, &l, is_write, attrs);
if (!memory_access_is_direct(mr, is_write)) {
- if (qatomic_xchg(&bounce.in_use, true)) {
+ if (qatomic_xchg(&as->bounce.in_use, true)) {
*plen = 0;
return NULL;
}
/* Avoid unbounded allocations */
l = MIN(l, TARGET_PAGE_SIZE);
- bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
- bounce.addr = addr;
- bounce.len = l;
+ as->bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, l);
+ as->bounce.addr = addr;
+ as->bounce.len = l;
memory_region_ref(mr);
- bounce.mr = mr;
+ as->bounce.mr = mr;
if (!is_write) {
flatview_read(fv, addr, MEMTXATTRS_UNSPECIFIED,
- bounce.buffer, l);
+ as->bounce.buffer, l);
}
*plen = l;
- return bounce.buffer;
+ return as->bounce.buffer;
}
@@ -3233,7 +3214,7 @@ void *address_space_map(AddressSpace *as,
void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
bool is_write, hwaddr access_len)
{
- if (buffer != bounce.buffer) {
+ if (buffer != as->bounce.buffer) {
MemoryRegion *mr;
ram_addr_t addr1;
@@ -3249,14 +3230,14 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
return;
}
if (is_write) {
- address_space_write(as, bounce.addr, MEMTXATTRS_UNSPECIFIED,
- bounce.buffer, access_len);
+ address_space_write(as, as->bounce.addr, MEMTXATTRS_UNSPECIFIED,
+ as->bounce.buffer, access_len);
}
- qemu_vfree(bounce.buffer);
- bounce.buffer = NULL;
- memory_region_unref(bounce.mr);
+ qemu_vfree(as->bounce.buffer);
+ as->bounce.buffer = NULL;
+ memory_region_unref(as->bounce.mr);
/* Clear in_use before reading map_client_list. */
- qatomic_set_mb(&bounce.in_use, false);
+ qatomic_set_mb(&as->bounce.in_use, false);
address_space_notify_map_clients(as);
}
--
2.45.1.windows.1
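
With bounce state now per AddressSpace, indirect DMA through one device's bus-master address space no longer blocks another device's. A hedged sketch of that effect (device pointers, addresses and lengths are hypothetical; both targets are assumed to be MMIO-backed rather than direct RAM):

#include "qemu/osdep.h"
#include "exec/memory.h"
#include "hw/pci/pci_device.h"

static void map_two_devices(PCIDevice *dev_a, PCIDevice *dev_b)
{
    hwaddr len_a = 512, len_b = 512;

    /* Each device maps through its own bus_master_as. */
    void *buf_a = address_space_map(&dev_a->bus_master_as, 0x1000, &len_a,
                                    false, MEMTXATTRS_UNSPECIFIED);
    void *buf_b = address_space_map(&dev_b->bus_master_as, 0x2000, &len_b,
                                    false, MEMTXATTRS_UNSPECIFIED);

    /* Before this patch the second map could return NULL whenever the
     * first one occupied the single global bounce buffer; now each
     * AddressSpace tracks its own. */

    if (buf_b) {
        address_space_unmap(&dev_b->bus_master_as, buf_b, len_b, false, len_b);
    }
    if (buf_a) {
        address_space_unmap(&dev_a->bus_master_as, buf_a, len_a, false, len_a);
    }
}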

system-physmem-Propagate-AddressSpace-to-MapClient-h.patch (new file)

@@ -0,0 +1,207 @@
From 9fd6abb40a7223f83244cdad4edf1f8ba21071aa Mon Sep 17 00:00:00 2001
From: Mattias Nissler <mnissler@rivosinc.com>
Date: Thu, 7 Sep 2023 06:04:23 -0700
Subject: [PATCH 1/4] system/physmem: Propagate AddressSpace to MapClient
helpers
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
cherry-pick from 5c62719710bab66a98f68ebdba333e2240ed6668
Propagate AddressSpace handler to following helpers:
- register_map_client()
- unregister_map_client()
- notify_map_clients[_locked]()
Rename them using 'address_space_' prefix instead of 'cpu_'.
The AddressSpace argument will be used in the next commit.
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Signed-off-by: Mattias Nissler <mnissler@rivosinc.com>
Message-ID: <20240507094210.300566-2-mnissler@rivosinc.com>
[PMD: Split patch, part 1/2]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: liuxiangdong <liuxiangdong5@huawei.com>
---
include/exec/cpu-common.h | 2 --
include/exec/memory.h | 26 ++++++++++++++++++++++++--
system/dma-helpers.c | 4 ++--
system/physmem.c | 24 ++++++++++++------------
4 files changed, 38 insertions(+), 18 deletions(-)
diff --git a/include/exec/cpu-common.h b/include/exec/cpu-common.h
index 2a3d4aa1c8..c7fd30d5b9 100644
--- a/include/exec/cpu-common.h
+++ b/include/exec/cpu-common.h
@@ -165,8 +165,6 @@ void *cpu_physical_memory_map(hwaddr addr,
bool is_write);
void cpu_physical_memory_unmap(void *buffer, hwaddr len,
bool is_write, hwaddr access_len);
-void cpu_register_map_client(QEMUBH *bh);
-void cpu_unregister_map_client(QEMUBH *bh);
bool cpu_physical_memory_is_io(hwaddr phys_addr);
diff --git a/include/exec/memory.h b/include/exec/memory.h
index 91c42c9a6a..4b7dc7f055 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -2916,8 +2916,8 @@ bool address_space_access_valid(AddressSpace *as, hwaddr addr, hwaddr len,
* May return %NULL and set *@plen to zero(0), if resources needed to perform
* the mapping are exhausted.
* Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
*
* @as: #AddressSpace to be accessed
* @addr: address within that address space
@@ -2942,6 +2942,28 @@ void *address_space_map(AddressSpace *as, hwaddr addr,
void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
bool is_write, hwaddr access_len);
+/*
+ * address_space_register_map_client: Register a callback to invoke when
+ * resources for address_space_map() are available again.
+ *
+ * address_space_map may fail when there are not enough resources available,
+ * such as when bounce buffer memory would exceed the limit. The callback can
+ * be used to retry the address_space_map operation. Note that the callback
+ * gets automatically removed after firing.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to invoke when address_space_map() retry is appropriate
+ */
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh);
+
+/*
+ * address_space_unregister_map_client: Unregister a callback that has
+ * previously been registered and not fired yet.
+ *
+ * @as: #AddressSpace to be accessed
+ * @bh: callback to unregister
+ */
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh);
/* Internal functions, part of the implementation of address_space_read. */
MemTxResult address_space_read_full(AddressSpace *as, hwaddr addr,
diff --git a/system/dma-helpers.c b/system/dma-helpers.c
index 36211acc7e..611ea04ffb 100644
--- a/system/dma-helpers.c
+++ b/system/dma-helpers.c
@@ -167,7 +167,7 @@ static void dma_blk_cb(void *opaque, int ret)
if (dbs->iov.size == 0) {
trace_dma_map_wait(dbs);
dbs->bh = aio_bh_new(ctx, reschedule_dma, dbs);
- cpu_register_map_client(dbs->bh);
+ address_space_register_map_client(dbs->sg->as, dbs->bh);
goto out;
}
@@ -197,7 +197,7 @@ static void dma_aio_cancel(BlockAIOCB *acb)
}
if (dbs->bh) {
- cpu_unregister_map_client(dbs->bh);
+ address_space_unregister_map_client(dbs->sg->as, dbs->bh);
qemu_bh_delete(dbs->bh);
dbs->bh = NULL;
}
diff --git a/system/physmem.c b/system/physmem.c
index 0c629233bd..1d01e7a32b 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -3040,24 +3040,24 @@ QemuMutex map_client_list_lock;
static QLIST_HEAD(, MapClient) map_client_list
= QLIST_HEAD_INITIALIZER(map_client_list);
-static void cpu_unregister_map_client_do(MapClient *client)
+static void address_space_unregister_map_client_do(MapClient *client)
{
QLIST_REMOVE(client, link);
g_free(client);
}
-static void cpu_notify_map_clients_locked(void)
+static void address_space_notify_map_clients_locked(AddressSpace *as)
{
MapClient *client;
while (!QLIST_EMPTY(&map_client_list)) {
client = QLIST_FIRST(&map_client_list);
qemu_bh_schedule(client->bh);
- cpu_unregister_map_client_do(client);
+ address_space_unregister_map_client_do(client);
}
}
-void cpu_register_map_client(QEMUBH *bh)
+void address_space_register_map_client(AddressSpace *as, QEMUBH *bh)
{
MapClient *client = g_malloc(sizeof(*client));
@@ -3067,7 +3067,7 @@ void cpu_register_map_client(QEMUBH *bh)
/* Write map_client_list before reading in_use. */
smp_mb();
if (!qatomic_read(&bounce.in_use)) {
- cpu_notify_map_clients_locked();
+ address_space_notify_map_clients_locked(as);
}
qemu_mutex_unlock(&map_client_list_lock);
}
@@ -3088,24 +3088,24 @@ void cpu_exec_init_all(void)
qemu_mutex_init(&map_client_list_lock);
}
-void cpu_unregister_map_client(QEMUBH *bh)
+void address_space_unregister_map_client(AddressSpace *as, QEMUBH *bh)
{
MapClient *client;
qemu_mutex_lock(&map_client_list_lock);
QLIST_FOREACH(client, &map_client_list, link) {
if (client->bh == bh) {
- cpu_unregister_map_client_do(client);
+ address_space_unregister_map_client_do(client);
break;
}
}
qemu_mutex_unlock(&map_client_list_lock);
}
-static void cpu_notify_map_clients(void)
+static void address_space_notify_map_clients(AddressSpace *as)
{
qemu_mutex_lock(&map_client_list_lock);
- cpu_notify_map_clients_locked();
+ address_space_notify_map_clients_locked(as);
qemu_mutex_unlock(&map_client_list_lock);
}
@@ -3173,8 +3173,8 @@ flatview_extend_translation(FlatView *fv, hwaddr addr,
* May map a subset of the requested range, given by and returned in *plen.
* May return NULL if resources needed to perform the mapping are exhausted.
* Use only for reads OR writes - not for read-modify-write operations.
- * Use cpu_register_map_client() to know when retrying the map operation is
- * likely to succeed.
+ * Use address_space_register_map_client() to know when retrying the map
+ * operation is likely to succeed.
*/
void *address_space_map(AddressSpace *as,
hwaddr addr,
@@ -3257,7 +3257,7 @@ void address_space_unmap(AddressSpace *as, void *buffer, hwaddr len,
memory_region_unref(bounce.mr);
/* Clear in_use before reading map_client_list. */
qatomic_set_mb(&bounce.in_use, false);
- cpu_notify_map_clients();
+ address_space_notify_map_clients(as);
}
void *cpu_physical_memory_map(hwaddr addr,
--
2.45.1.windows.1
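
The renamed helpers keep the retry contract used by system/dma-helpers.c above: when address_space_map() cannot make progress it returns NULL, and the caller registers a bottom half to retry once buffers free up (the callback is removed automatically after it fires). A minimal sketch of that pattern under the new names (MyDev and the my_dev_* functions are illustrative, not QEMU code):

#include "qemu/osdep.h"
#include "qemu/main-loop.h"
#include "exec/memory.h"

typedef struct MyDev {
    AddressSpace *as;          /* e.g. the device's bus-master address space */
    hwaddr dma_addr;
    hwaddr dma_len;
    QEMUBH *retry_bh;
} MyDev;

static void my_dev_start_dma(MyDev *d);

static void my_dev_retry(void *opaque)
{
    MyDev *d = opaque;

    qemu_bh_delete(d->retry_bh);
    d->retry_bh = NULL;
    my_dev_start_dma(d);       /* try the mapping again */
}

static void my_dev_start_dma(MyDev *d)
{
    hwaddr len = d->dma_len;
    void *buf = address_space_map(d->as, d->dma_addr, &len,
                                  false, MEMTXATTRS_UNSPECIFIED);

    if (!buf) {
        /* Resources exhausted; retry from a bottom half once they free up. */
        d->retry_bh = qemu_bh_new(my_dev_retry, d);
        address_space_register_map_client(d->as, d->retry_bh);
        return;
    }

    /* ... operate on buf[0 .. len) ... */

    address_space_unmap(d->as, buf, len, false, len);
}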