For a vhost-user NIC, it is currently allowed to delete the
network backend while the virtio-net device still exists.
However, when the device status changes in the guest, QEMU
accesses the network backend and crashes if it is already gone.
So do not allow the network backend to be deleted directly
while the virtio-net device has not been deleted.
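Roughly, the guard looks like the following sketch in
net/net.c:qmp_netdev_del(); the exact condition used by this patch may
differ, the vhost-user/peer check shown here is an assumption:

    /* in qmp_netdev_del(), after looking up the backend by id: */
    if (nc->info->type == NET_CLIENT_DRIVER_VHOST_USER && nc->peer) {
        error_setg(errp, "netdev '%s' is still used by a virtio-net "
                   "device, delete the device first", id);
        return;
    }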
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
We dereference a NULL pointer as follows:
1. Start a VM with a multiqueue vhost-net device.
2. Write VIRTIO_PCI_GUEST_FEATURES in the PCI configuration space to
   disable multiqueue in the VM, which deletes the virtqueues.
   At this point the tx_bh is deleted, but the callback
   virtio_net_handle_tx_bh still exists.
3. Write VIRTIO_PCI_QUEUE_NOTIFY in the PCI configuration space to
   notify the deleted virtqueue. virtio_net_handle_tx_bh is then
   called and QEMU crashes.
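A minimal sketch of the kind of guard this needs in
hw/net/virtio-net.c:virtio_net_handle_tx_bh(); the exact downstream
check is assumed:

    static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
    {
        VirtIONet *n = VIRTIO_NET(vdev);
        VirtIONetQueue *q = &n->vqs[vq2q(virtio_get_queue_index(vq))];

        /* Multiqueue was disabled and this queue's bottom half is gone:
         * the guest is kicking a deleted virtqueue, so drop the notify
         * instead of scheduling a freed bh. */
        if (unlikely(!q->tx_bh)) {
            return;
        }

        /* existing tx_waiting / qemu_bh_schedule(q->tx_bh) logic follows */
    }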
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
In net_init_tap_one(), if the net-tap initializes successfully
but a later step fails during vhost-net hot-plug, the net-tap
remains in the net clients list, causing the next hot-plug
to fail as well.
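A sketch of the cleanup idea in net/tap.c:net_init_tap_one(); the error
label is illustrative, not the exact downstream patch:

        s->vhost_net = vhost_net_init(&options);
        if (!s->vhost_net) {
            error_setg(errp, "vhost-net requested but could not be initialized");
            goto fail;
        }
        return;

    fail:
        /* Remove the half-initialized tap client so a retried hot-plug
         * does not collide with a stale entry in the net clients list. */
        qemu_del_net_client(&s->nc);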
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
The VM triggers a disk sweep after a controller whose I/O type
is iothread is plugged. If a SCSI disk is attached immediately
afterwards, the sg_inquiry request from the VM trips the
assertion in virtio_scsi_ctx_check(), which is called by
virtio_scsi_handle_cmd_req_prepare().
Add a check in virtio_scsi_handle_cmd_req_prepare() and return
an I/O error directly if the device has not been fully
initialized yet.
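A sketch of the added judgment in
hw/scsi/virtio-scsi.c:virtio_scsi_handle_cmd_req_prepare(); the exact
condition and error code are assumptions, the return value follows the
error convention of the surrounding code:

        /* after virtio_scsi_device_find(), before virtio_scsi_ctx_check() */
        if (s->dataplane_started && d->conf.blk &&
            blk_get_aio_context(d->conf.blk) != s->ctx) {
            /* The disk was just hot-plugged and its BlockBackend has not
             * been moved to the controller's iothread yet: report an I/O
             * error instead of tripping the assert. */
            req->resp.cmd.response = VIRTIO_SCSI_S_FAILURE;
            virtio_scsi_complete_cmd_req(req);
            return -ENOENT;
        }
        virtio_scsi_ctx_check(s, d);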
Signed-off-by: Jinhua Cao <caojinhua1@huawei.com>
bugfix: fix some illegal memory accesses and memory leaks
bugfix: fix a possible memory leak
bugfix: fix possible double free of eventfds when vm_id is reused in ivshmem
block/mirror: fix file system going read-only after block-mirror
bugfix: fix MMIO information leak and EHCI VM escape 0-day vulnerability
target-i386: fix the RES memory increase caused by coroutine creation
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
For better performance, change POOL_BATCH_SIZE from 64 to 128.
Signed-off-by: caojinhua <caojinhua1@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
With a VM disk configured via prdm and the disk writing data
continuously during block-mirror, the file system goes read-only
after block-mirror completes. Fix it.
Signed-off-by: caojinhua <caojinhua1@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
As the ivshmem server-client protocol describes, when a
client disconnects from the server, the server sends disconnect
notifications to the other clients, and those clients free the
eventfds of the disconnected client according to its client ID.
If the client ID is later reused, the eventfds may be freed
twice.
Solve this by setting eventfds to NULL after freeing it and
allocating memory for it again when it is needed.
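A minimal sketch of the fix in hw/misc/ivshmem.c:close_peer_eventfds(),
following the description above (the reallocation on reconnect is done
at the peer-connect path and is not shown):

    static void close_peer_eventfds(IVShmemState *s, int posn)
    {
        int i, n;

        assert(posn >= 0 && posn < s->nb_peers);
        n = s->peers[posn].nb_eventfds;

        for (i = 0; i < n; i++) {
            event_notifier_cleanup(&s->peers[posn].eventfds[i]);
        }

        g_free(s->peers[posn].eventfds);
        /* Drop the stale pointer so a reused client ID cannot lead to a
         * second g_free() of the same array. */
        s->peers[posn].eventfds = NULL;
        s->peers[posn].nb_eventfds = 0;
    }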
Signed-off-by: Peng Liang <liangpeng10@huawei.com>
Signed-off-by: jiangdongxu <jiangdongxu1@huawei.com>
linux-headers: update against 5.10 and manual clear vfio dirty log series
vfio: Maintain DMA mapping range for the container
vfio/migration: Add support for manual clear vfio dirty log
update-linux-headers: Import iommu.h
vfio.h and iommu.h header update against 5.10
memory: Add new fields in IOTLBEntry
hw/arm/smmuv3: Improve stage1 ASID invalidation
hw/arm/smmu-common: Allow domain invalidation for NH_ALL/NSNH_ALL
memory: Add IOMMU_ATTR_VFIO_NESTED IOMMU memory region attribute
memory: Add IOMMU_ATTR_MSI_TRANSLATE IOMMU memory region attribute
memory: Introduce IOMMU Memory Region inject_faults API
iommu: Introduce generic header
pci: introduce PCIPASIDOps to PCIDevice
vfio: Force nested if iommu requires it
vfio: Introduce hostwin_from_range helper
vfio: Introduce helpers to DMA map/unmap a RAM section
vfio: Set up nested stage mappings
vfio: Pass stage 1 MSI bindings to the host
vfio: Helper to get IRQ info including capabilities
vfio/pci: Register handler for iommu fault
vfio/pci: Set up the DMA FAULT region
vfio/pci: Implement the DMA fault handler
hw/arm/smmuv3: Advertise MSI_TRANSLATE attribute
hw/arm/smmuv3: Store the PASID table GPA in the translation config
hw/arm/smmuv3: Fill the IOTLBEntry arch_id on NH_VA invalidation
hw/arm/smmuv3: Fill the IOTLBEntry leaf field on NH_VA invalidation
hw/arm/smmuv3: Pass stage 1 configurations to the host
hw/arm/smmuv3: Implement fault injection
hw/arm/smmuv3: Allow MAP notifiers
pci: Add return_page_response pci ops
vfio/pci: Implement return_page_response page response callback
vfio/common: Avoid unmap ram section at vfio_listener_region_del() in nested mode
vfio: Introduce helpers to mark dirty pages of a RAM section
vfio: Add vfio_prereg_listener_log_sync in nested stage
vfio: Add vfio_prereg_listener_log_clear to re-enable mark dirty pages
vfio: Add vfio_prereg_listener_global_log_start/stop in nested stage
hw/arm/smmuv3: Post-load stage 1 configurations to the host
vfio/common: Fix incorrect address alignment in vfio_dma_map_ram_section
vfio/common: Add address alignment check in vfio_listener_region_del
Signed-off-by: Chen Qun <kuhn.chenqun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 45d983f4507f9f6089de83fcd4d3a2136876b566)
Both vfio_listener_region_add and vfio_listener_region_del
perform reference counting operations on the RAM section's ->mr.
If the 'iova' and 'llend' of the RAM section do not pass the
alignment check, the section should not be mapped or unmapped,
which means the reference count should not be changed either.
However, the address alignment check is missing in
vfio_listener_region_del, so memory_region_unref is called
unconditionally, causing unintended problems in some
scenarios.
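A sketch of the added check at the top of vfio_listener_region_del(),
mirroring the one already present in vfio_listener_region_add() (the
page-mask macro follows that function):

        if (unlikely((section->offset_within_address_space &
                      ~qemu_real_host_page_mask) !=
                     (section->offset_within_region &
                      ~qemu_real_host_page_mask))) {
            error_report("%s received unaligned region", __func__);
            return;
        }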
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit c4568a05c1d9f9017c89abc9df4270ce128a9cc3)
The 'iova' is passed to the host kernel to be mapped against the
HPA, so it depends on the host page size and TARGET_PAGE_ALIGN
should be replaced by REAL_HOST_PAGE_ALIGN. With a large
granularity (64K), the map path may otherwise return early when
mapping an MMIO RAM section, and the resulting inconsistency with
vfio_dma_unmap_ram_section can trigger 'assert(qrange)'
in vfio_dma_unmap.
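A sketch of the change in vfio_dma_map_ram_section(), assuming it
computes the range the same way vfio_listener_region_add() does:

        /* align against the host page size, not the target page size */
        iova = REAL_HOST_PAGE_ALIGN(section->offset_within_address_space);
        llend = int128_make64(section->offset_within_address_space);
        llend = int128_add(llend, section->size);
        llend = int128_and(llend, int128_exts64(qemu_real_host_page_mask));

        if (int128_ge(int128_make64(iova), llend)) {
            return;
        }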
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit af442e7ad177338fae5a5399de604cf8bef777ee)
In nested mode, we call the set_pasid_table() callback on each
STE update to pass the guest stage 1 configuration to the host
and apply it at physical level.
In the case of live migration, we need to manually call
set_pasid_table() to load the guest stage 1 configuration into
the host. If this operation fails, the migration fails.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit db24e228d7511ab6cb54795db3237bafb275d14e)
In nested mode, we set up stage 2 and stage 1 separately. In my
opinion, vfio_memory_prereg_listener is used for stage 2 and
vfio_memory_listener is used for stage 1. So it feels weird to call
the global_log_start/stop interface in vfio_memory_listener to switch
dirty tracking, although this won't cause any errors. Adding the
global_log_start/stop interfaces to vfio_memory_prereg_listener
keeps stage 2 separate from stage 1.
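A sketch of one of the new callbacks in hw/vfio/common.c; the body is
assumed from the existing vfio_listener_log_global_start():

    static void vfio_prereg_listener_global_log_start(MemoryListener *listener)
    {
        VFIOContainer *container =
            container_of(listener, VFIOContainer, prereg_listener);

        /* Dirty tracking is driven from the stage 2 (prereg) listener
         * only when the container is in nested mode. */
        if (container->iommu_type != VFIO_TYPE1_NESTING_IOMMU) {
            return;
        }
        vfio_set_dirty_page_tracking(container, true);
    }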
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 3608c966eaac58ea54a2f084ddb7d31f4309c8fe)
When tracking dirty pages, we only need to pay attention to stage 2
mappings. Legacy vfio_listener_log_clear cannot be used in nested
stage. This patch adds vfio_prereg_listener_log_clear to re-enable
marking of dirty pages in nested mode.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 9e95ed8404c1d32048d6288100c2a5fcb1ebba75)
In nested mode, we set up stage 2 (gpa->hpa) and stage 1
(giova->gpa) separately by vfio_prereg_listener_region_add()
and vfio_listener_region_add(). So when marking dirty pages
we only need to pay attention to stage 2 mappings.
Legacy vfio_listener_log_sync cannot be used in nested stage.
This patch adds vfio_prereg_listener_log_sync to mark dirty
pages in nested mode.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 28f87e27c755c532c757ebb590b95817035504f8)
Extract part of the code from vfio_sync_dirty_bitmap into a
new helper that marks the dirty pages of a RAM section.
This helper will be called for the nested stage.
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 6c878a81777952114b8c559c51450b19ab9e13d8)
In nested mode, the RAM section will be unmapped at
vfio_prereg_listener_region_del(). So let's avoid unmapping the
RAM section at vfio_listener_region_del().
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 8eba56191ca6910d5501b9d80301898369294aa7)
This patch implements the page response path. The
response is written into the page response ring buffer and the
header's head index is then updated. This path is not used
by this series. It is introduced here as a POC for vSVA/ARM
integration.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 043fcb1e27352b4a10af55cd967fa55190ef4b46)
Add a new PCI operation that allows page responses to be returned
to registered VFIO devices.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit fe6cd1988a114c63dd1bf6f0a2fcd5770edd6fc6)
We now have all the bricks to support nested paging. This
uses MAP notifiers to map the MSIs. So let's allow MAP
notifiers to be registered.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 24eaca5a81bd79b9afd8bd2b07a7d493554447c8)
We convert the iommu_fault structs received from the kernel
into the data structs used by the emulation code and record
the events into the virtual event queue.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit df9b0e5fdf7fec20c79e840a36d05254082cc2ec)
In case PASID PciOps are set for the device, we call
the set_pasid_table() callback on each STE update.
This allows the guest stage 1 configuration to be passed
to the host and applied at the physical level.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 6b84c577acac4c464c47edffb669b1b48641cbcc)
Let's propagate the leaf attribute throughout the invalidation path.
This hint is used to reduce the scope of the invalidations to the
last level of translation. Not enforcing it induces large performance
penalties in nested mode.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit c87109806f14b82c0669d054ba6ada0dafcf95c4)
When the guest invalidates one S1 entry, it passes the ASID.
When propagating this invalidation down to the host, the ASID
information must also be passed. So let's fill the arch_id field
introduced for that purpose and set the flags accordingly to
indicate its presence.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit fe59e0178d303c21480d2a03399fc562ee010721)
For VFIO integration we will need to pass the Context Descriptor (CD)
table GPA to the host. The CD table is also referred to as the PASID
table. Its GPA corresponds to the s1ctxptr field of the Stream Table
Entry. So let's decode and store it in the configuration structure.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 4462bbb2034b784b62cb430c61a535981f4de3ec)
The SMMUv3 has the peculiarity of translating MSI
transactions. Let's advertise the corresponding
attribute.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 8560e69e378f45f717f049b0f3ad01ce62472708)
Whenever the eventfd is triggered, we retrieve the DMA fault(s)
from the mmapped fault region and inject them into the IOMMU
memory region.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit b6ea1f38e8bd59777643b744aa0fa901395483c6)
Set up the fault region which is composed of the actual fault
queue (mmappable) and a header used to handle it. The fault
queue is mmapped.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 447d9fccd23773f36d35ad684f620b0f0c24cce3)
We use the new extended IRQ VFIO_IRQ_TYPE_NESTED type and
VFIO_IRQ_SUBTYPE_DMA_FAULT subtype to set/unset
a notifier for physical DMA faults. The associated eventfd is
triggered, in nested mode, whenever a fault is detected at IOMMU
physical level.
The actual handler will be implemented in subsequent patches.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit bd87c37dab62815cddd5ec81badc570a6119fc5a)
As done for vfio regions, add helpers to retrieve irq info
including their optional capabilities.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 5ecad96e205c189caf01cc855749c5d8eb7205fa)
We register the stage 1 MSI bindings when enabling the vectors
and unregister them on MSI disable.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 2cd99d4ded8aa1edaf7039d5f1249a946766fccc)
In nested mode, legacy vfio_iommu_map_notify cannot be used as
there is no "caching" mode and we do not trap on map.
On Intel, vfio_iommu_map_notify was used to DMA map the RAM
through the host single stage.
With nested mode, we need to set up stage 2 and stage 1
separately. This patch introduces a prereg_listener to set up
the stage 2 mapping.
The stage 1 mapping, owned by the guest, is passed to the host
when the guest invalidates the stage 1 configuration, through
a dedicated PCIPASIDOps callback. Guest IOTLB invalidations
are cascaded down to the host through another IOMMU MR UNMAP
notifier.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 0b8a30af7d1c4e85b6e00dbf2569dc9e0a2e254f)
Let's introduce two helpers that DMA map/unmap a RAM
section. Those helpers will be called for nested stage setup in
another call site. This also makes the structure of
vfio_listener_region_add/del() clearer.
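The helper prototypes, as implied by later patches in this series (the
Error parameter is an assumption):

    static int vfio_dma_map_ram_section(VFIOContainer *container,
                                        MemoryRegionSection *section,
                                        Error **errp);
    static void vfio_dma_unmap_ram_section(VFIOContainer *container,
                                           MemoryRegionSection *section);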
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 68e372b0530d234ce4a3facb486d401595cf59ba)
Let's introduce a hostwin_from_range() helper that returns the
hostwin encapsulating an IOVA range, or NULL if none is found.
This improves the readability of callers and removes the use
of hostwin_found.
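The helper boils down to a lookup over the container's host window
list; a sketch:

    static VFIOHostDMAWindow *hostwin_from_range(VFIOContainer *container,
                                                 hwaddr iova, hwaddr end)
    {
        VFIOHostDMAWindow *hostwin;

        QLIST_FOREACH(hostwin, &container->hostwin_list, hostwin_next) {
            if (hostwin->min_iova <= iova && end <= hostwin->max_iova) {
                return hostwin;
            }
        }
        return NULL;
    }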
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit bd940b45c300ea93abe1656b6733530564c1a6c9)
In case we detect the address space is translated by
a virtual IOMMU which requires HW nested paging to
integrate with VFIO, let's set up the container with
the VFIO_TYPE1_NESTING_IOMMU iommu_type.
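A sketch of the idea; the query uses the existing IOMMU MR get_attr
interface, while the exact placement in the container setup path is
assumed:

        bool nested = false;

        memory_region_iommu_get_attr(iommu_mr, IOMMU_ATTR_VFIO_NESTED,
                                     &nested);
        if (nested) {
            /* The vIOMMU (e.g. SMMUv3) has no caching mode, so the host
             * must provide HW nested paging for this container. */
            container->iommu_type = VFIO_TYPE1_NESTING_IOMMU;
        }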
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 37327fcd687f9abbd756f2d1eb9858d55f570b85)
This patch introduces PCIPASIDOps for IOMMU-related operations.
https://lists.gnu.org/archive/html/qemu-devel/2018-03/msg00078.html
https://lists.gnu.org/archive/html/qemu-devel/2018-03/msg00940.html
So far, setting up virt-SVA for an assigned SVA-capable device requires
configuring the host translation structures for a specific PASID
(e.g. binding the guest page table to the host and enabling nested
translation in the host).
Besides, the vIOMMU emulator needs to forward the guest's cache
invalidations to the host since host nested translation is enabled;
e.g. on VT-d, the guest owns the first-level translation table, so
cache invalidations for the first level should be propagated to the
host.
This patch adds two functions, alloc_pasid and free_pasid, to support
guest PASID allocation and freeing. The callbacks would be implemented
by device passthrough modules such as VFIO.
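A sketch of the PCIPASIDOps shape this series ends up using (the later
STE patches call set_pasid_table(); the helper names below are
assumptions):

    typedef struct PCIPASIDOps PCIPASIDOps;
    struct PCIPASIDOps {
        int (*set_pasid_table)(PCIBus *bus, int32_t devfn,
                               struct iommu_pasid_table_config *config);
    };

    void pci_setup_pasid_ops(PCIDevice *dev, PCIPASIDOps *ops);
    bool pci_device_is_pasid_ops_set(PCIBus *bus, int32_t devfn);
    int pci_device_set_pasid_table(PCIBus *bus, int32_t devfn,
                                   struct iommu_pasid_table_config *config);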
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Eric Auger <eric.auger@redhat.com>
Cc: Yi Sun <yi.y.sun@linux.intel.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
Signed-off-by: Yi Sun <yi.y.sun@linux.intel.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit a72ba82a9dbcf6b5baa3b9f212fd14ec23fc5832)
This header is meant to expose data types used by
several IOMMU devices, such as structs for SVA and
nested stage configuration.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit c9e0fbc4c125a3630d6aa71225ee4a6bac119a99)
This new API allows injecting @count iommu_faults into
the IOMMU memory region.
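The API surface, as a sketch (signatures assumed from the description):

    /* new member of IOMMUMemoryRegionClass */
    int (*inject_faults)(IOMMUMemoryRegion *iommu, int count,
                         struct iommu_fault *buf);

    /* wrapper called by VFIO when faults are read from the kernel */
    int memory_region_inject_faults(IOMMUMemoryRegion *iommu_mr, int count,
                                    struct iommu_fault *buf);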
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit ecfcf5f4e9723e9477ec5ef2528ccce4b688d9e1)
We introduce a new IOMMU Memory Region attribute, IOMMU_ATTR_MSI_TRANSLATE,
which tells whether the virtual IOMMU translates MSIs. ARM SMMU
will expose this attribute since, as opposed to Intel DMAR, MSIs
are translated like any other DMA request.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 850ae26a9a2e75f30785c69a1ffc8a41d91aa461)
We introduce a new IOMMU Memory Region attribute,
IOMMU_ATTR_VFIO_NESTED, that tells whether the virtual IOMMU
requires HW nested paging for VFIO integration.
The current Intel virtual IOMMU device supports "Caching
Mode" and does not require 2 stages at the physical level to be
integrated with VFIO. However, SMMUv3 does not implement such a
"caching mode" and requires the use of HW nested paging.
As such, SMMUv3 is the first IOMMU device to advertise this
attribute.
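On the SMMUv3 side this amounts to answering the new attribute from a
get_attr hook; a minimal sketch:

    static int smmuv3_get_attr(IOMMUMemoryRegion *iommu,
                               enum IOMMUMemoryRegionAttr attr, void *data)
    {
        if (attr == IOMMU_ATTR_VFIO_NESTED) {
            *(bool *) data = true;   /* SMMUv3 has no caching mode */
            return 0;
        }
        return -EINVAL;
    }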
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit f526fefda07d10fd540f9ac04efddb92112e1078)
NH_ALL/NSNH_ALL corresponds to a domain granularity invalidation,
i.e. the whole notifier range gets invalidated, whatever the ASID.
So let's set the granularity to IOMMU_INV_GRAN_DOMAIN to allow
the consumer to benefit from the info if it can.
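A sketch of the corresponding change in
hw/arm/smmu-common.c:smmu_unmap_notifier_range(), using the granularity
field introduced earlier in this series:

    static void smmu_unmap_notifier_range(IOMMUNotifier *n)
    {
        IOMMUTLBEvent event = {};

        event.type = IOMMU_NOTIFIER_UNMAP;
        event.entry.target_as = &address_space_memory;
        event.entry.iova = n->start;
        event.entry.perm = IOMMU_NONE;
        event.entry.addr_mask = n->end - n->start;
        /* NH_ALL/NSNH_ALL: the whole domain is invalidated, whatever
         * the ASID */
        event.entry.granularity = IOMMU_INV_GRAN_DOMAIN;

        memory_region_notify_iommu_one(n, &event);
    }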
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Suggested-by: chenxiang (M) <chenxiang66@hisilicon.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 68904c523de49807997db96134a0e016641647bd)
At the moment the ASID invalidation command (CMD_TLBI_NH_ASID) is
propagated as a domain invalidation (the whole notifier range
is invalidated, independently of any ASID information).
The new granularity field now allows us to be more precise and
restrict the invalidation to a particular ASID. Set the corresponding
fields and flag.
We still keep the iova and addr_mask settings for consumers that
do not support the new fields, like VHOST.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 7e255ba2a5a40cb16f311e7be000419f39e30c5e)
The current IOTLBEntry is too simple to interact with
some physical IOMMUs. IOTLBs can be invalidated with different
granularities: domain, pasid, addr. The current IOTLB entry only
offers page-selective invalidation. Let's add a granularity field
that conveys this information.
TLB entries are usually tagged with some IDs such as the asid
or pasid. When propagating an invalidation command from the
guest to the host, we need to pass those IDs.
Also we add a leaf field which indicates, in case of an invalidation
notification, whether only cache entries for the last level of
translation need to be invalidated.
A flags field is introduced to indicate which of those fields are set.
To ensure that existing users do not use those new fields,
initialize the IOMMUTLBEvents where needed.
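A sketch of the extended entry (include/exec/memory.h); field names
follow the description above, exact field widths are assumptions:

    typedef enum IOMMUInvGranularity {
        IOMMU_INV_GRAN_ADDR = 0,   /* page-selective (legacy behaviour) */
        IOMMU_INV_GRAN_PASID,      /* all entries tagged with a PASID/ASID */
        IOMMU_INV_GRAN_DOMAIN,     /* whole domain */
    } IOMMUInvGranularity;

    struct IOMMUTLBEntry {
        AddressSpace        *target_as;
        hwaddr               iova;
        hwaddr               translated_addr;
        hwaddr               addr_mask;
        IOMMUAccessFlags     perm;
        IOMMUInvGranularity  granularity;  /* new */
    #define IOMMU_INV_FLAGS_PASID   (1 << 0)
    #define IOMMU_INV_FLAGS_ARCHID  (1 << 1)
    #define IOMMU_INV_FLAGS_LEAF    (1 << 2)
        uint32_t             flags;        /* new: which optional fields are valid */
        uint32_t             arch_id;      /* new: e.g. the ARM ASID */
        uint32_t             pasid;        /* new */
        bool                 leaf;         /* new: only last-level entries */
    };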
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 2fa8c817167816f22c951308e3edbd0bc7370ecd)
Update the script to import the new iommu.h uapi header.
Signed-off-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
Signed-off-by: imxcc <xingchaochao@huawei.com>
(cherry picked from commit 683ab306ab1d149c11d4326e5459a5312405c4de)