sync some patches from upstream

Sync some hns3 patches from upstream: refactor the mailbox code, add a
new RSS API, support power monitor, and several bug fixes. The changes
are as follows:
 - app/testpmd: fix crash in multi-process forwarding
 - net/hns3: support power monitor
 - net/hns3: remove QinQ insert support for VF
 - net/hns3: fix reset level comparison
 - net/hns3: fix disable command with firmware
 - net/hns3: fix VF multiple count on one reset
 - net/hns3: refactor handle mailbox function
 - net/hns3: refactor send mailbox function
 - net/hns3: refactor PF mailbox message struct
 - net/hns3: refactor VF mailbox message struct
 - app/testpmd: set RSS hash algorithm
 - ethdev: get RSS hash algorithm by name
 - ring: add telemetry command for ring info
 - ring: add telemetry command to list rings
 - eal: introduce more macros for bit definition
 - dmadev: add tracepoints in data path API
 - dmadev: add telemetry capability for m2d auto free
 - maintainers: update for DMA device performance tool

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Dengdui Huang 2024-03-05 14:35:20 +08:00
parent 700228ad5f
commit 1f34bd76e4
19 changed files with 2789 additions and 3 deletions


@@ -0,0 +1,30 @@
From e222cf205386ecfdb4b429e5de3a7545bf0a43b6 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 15 Dec 2023 00:35:11 +0000
Subject: [PATCH 13/30] maintainers: update for DMA device performance tool
[ upstream commit 4ca7aa1af6c6ff32c2c5ad958e527e17d11f6d18 ]
Add myself as a new maintainer to DMA device performance tool.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
MAINTAINERS | 1 +
1 file changed, 1 insertion(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index 0d1c812..78d4695 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1789,6 +1789,7 @@ F: doc/guides/testpmd_app_ug/
DMA device performance tool
M: Cheng Jiang <honest.jiang@foxmail.com>
+M: Chengwen Feng <fengchengwen@huawei.com>
F: app/test-dma-perf/
F: doc/guides/tools/dmaperf.rst
--
2.33.0


@@ -0,0 +1,40 @@
From b894c53c587b901f0a03bcd2506dfc3c468f7fd4 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Tue, 28 Nov 2023 02:16:55 +0000
Subject: [PATCH 14/30] dmadev: add telemetry capability for m2d auto free
[ upstream commit ec932cb907b72291af28406d1621d6837771e10c ]
The m2d auto free capability was introduced in [1],
but it lacks the support in telemetry.
[1] 877cb3e3742 dmadev: add buffer auto free offload
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/dmadev/rte_dmadev.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 4e5e420..7729519 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -749,6 +749,7 @@ dma_capability_name(uint64_t capability)
{ RTE_DMA_CAPA_SVA, "sva" },
{ RTE_DMA_CAPA_SILENT, "silent" },
{ RTE_DMA_CAPA_HANDLES_ERRORS, "handles_errors" },
+ { RTE_DMA_CAPA_M2D_AUTO_FREE, "m2d_auto_free" },
{ RTE_DMA_CAPA_OPS_COPY, "copy" },
{ RTE_DMA_CAPA_OPS_COPY_SG, "copy_sg" },
{ RTE_DMA_CAPA_OPS_FILL, "fill" },
@@ -953,6 +954,7 @@ dmadev_handle_dev_info(const char *cmd __rte_unused,
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_SVA);
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_SILENT);
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_HANDLES_ERRORS);
+ ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_M2D_AUTO_FREE);
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_COPY);
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_COPY_SG);
ADD_CAPA(dma_caps, dev_capa, RTE_DMA_CAPA_OPS_FILL);
--
2.33.0


@@ -0,0 +1,426 @@
From f8e2db6965abc49f6f9c80df2b8277969cec2988 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 12 Jan 2024 10:26:59 +0000
Subject: [PATCH 15/30] dmadev: add tracepoints in data path API
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
[ upstream commit 112327c220befef14129e4852e8df46e60410128 ]
Add tracepoints at data path APIs for tracing support.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/dmadev/meson.build | 2 +-
lib/dmadev/rte_dmadev.h | 56 +++++++---
lib/dmadev/rte_dmadev_trace_fp.h | 150 +++++++++++++++++++++++++++
lib/dmadev/rte_dmadev_trace_points.c | 27 +++++
lib/dmadev/version.map | 15 +++
5 files changed, 236 insertions(+), 14 deletions(-)
create mode 100644 lib/dmadev/rte_dmadev_trace_fp.h
diff --git a/lib/dmadev/meson.build b/lib/dmadev/meson.build
index e0d90ae..62b0650 100644
--- a/lib/dmadev/meson.build
+++ b/lib/dmadev/meson.build
@@ -3,7 +3,7 @@
sources = files('rte_dmadev.c', 'rte_dmadev_trace_points.c')
headers = files('rte_dmadev.h')
-indirect_headers += files('rte_dmadev_core.h')
+indirect_headers += files('rte_dmadev_core.h', 'rte_dmadev_trace_fp.h')
driver_sdk_headers += files('rte_dmadev_pmd.h')
deps += ['telemetry']
diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
index 450b81c..5474a52 100644
--- a/lib/dmadev/rte_dmadev.h
+++ b/lib/dmadev/rte_dmadev.h
@@ -773,6 +773,7 @@ struct rte_dma_sge {
};
#include "rte_dmadev_core.h"
+#include "rte_dmadev_trace_fp.h"
/**@{@name DMA operation flag
* @see rte_dma_copy()
@@ -836,6 +837,7 @@ rte_dma_copy(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
uint32_t length, uint64_t flags)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+ int ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id) || length == 0)
@@ -844,7 +846,10 @@ rte_dma_copy(int16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
return -ENOTSUP;
#endif
- return (*obj->copy)(obj->dev_private, vchan, src, dst, length, flags);
+ ret = (*obj->copy)(obj->dev_private, vchan, src, dst, length, flags);
+ rte_dma_trace_copy(dev_id, vchan, src, dst, length, flags, ret);
+
+ return ret;
}
/**
@@ -883,6 +888,7 @@ rte_dma_copy_sg(int16_t dev_id, uint16_t vchan, struct rte_dma_sge *src,
uint64_t flags)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+ int ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id) || src == NULL || dst == NULL ||
@@ -892,8 +898,12 @@ rte_dma_copy_sg(int16_t dev_id, uint16_t vchan, struct rte_dma_sge *src,
return -ENOTSUP;
#endif
- return (*obj->copy_sg)(obj->dev_private, vchan, src, dst, nb_src,
- nb_dst, flags);
+ ret = (*obj->copy_sg)(obj->dev_private, vchan, src, dst, nb_src,
+ nb_dst, flags);
+ rte_dma_trace_copy_sg(dev_id, vchan, src, dst, nb_src, nb_dst, flags,
+ ret);
+
+ return ret;
}
/**
@@ -927,6 +937,7 @@ rte_dma_fill(int16_t dev_id, uint16_t vchan, uint64_t pattern,
rte_iova_t dst, uint32_t length, uint64_t flags)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+ int ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id) || length == 0)
@@ -935,8 +946,11 @@ rte_dma_fill(int16_t dev_id, uint16_t vchan, uint64_t pattern,
return -ENOTSUP;
#endif
- return (*obj->fill)(obj->dev_private, vchan, pattern, dst, length,
- flags);
+ ret = (*obj->fill)(obj->dev_private, vchan, pattern, dst, length,
+ flags);
+ rte_dma_trace_fill(dev_id, vchan, pattern, dst, length, flags, ret);
+
+ return ret;
}
/**
@@ -957,6 +971,7 @@ static inline int
rte_dma_submit(int16_t dev_id, uint16_t vchan)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+ int ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id))
@@ -965,7 +980,10 @@ rte_dma_submit(int16_t dev_id, uint16_t vchan)
return -ENOTSUP;
#endif
- return (*obj->submit)(obj->dev_private, vchan);
+ ret = (*obj->submit)(obj->dev_private, vchan);
+ rte_dma_trace_submit(dev_id, vchan, ret);
+
+ return ret;
}
/**
@@ -995,7 +1013,7 @@ rte_dma_completed(int16_t dev_id, uint16_t vchan, const uint16_t nb_cpls,
uint16_t *last_idx, bool *has_error)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
- uint16_t idx;
+ uint16_t idx, ret;
bool err;
#ifdef RTE_DMADEV_DEBUG
@@ -1019,8 +1037,12 @@ rte_dma_completed(int16_t dev_id, uint16_t vchan, const uint16_t nb_cpls,
has_error = &err;
*has_error = false;
- return (*obj->completed)(obj->dev_private, vchan, nb_cpls, last_idx,
- has_error);
+ ret = (*obj->completed)(obj->dev_private, vchan, nb_cpls, last_idx,
+ has_error);
+ rte_dma_trace_completed(dev_id, vchan, nb_cpls, last_idx, has_error,
+ ret);
+
+ return ret;
}
/**
@@ -1055,7 +1077,7 @@ rte_dma_completed_status(int16_t dev_id, uint16_t vchan,
enum rte_dma_status_code *status)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
- uint16_t idx;
+ uint16_t idx, ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id) || nb_cpls == 0 || status == NULL)
@@ -1067,8 +1089,12 @@ rte_dma_completed_status(int16_t dev_id, uint16_t vchan,
if (last_idx == NULL)
last_idx = &idx;
- return (*obj->completed_status)(obj->dev_private, vchan, nb_cpls,
- last_idx, status);
+ ret = (*obj->completed_status)(obj->dev_private, vchan, nb_cpls,
+ last_idx, status);
+ rte_dma_trace_completed_status(dev_id, vchan, nb_cpls, last_idx, status,
+ ret);
+
+ return ret;
}
/**
@@ -1087,6 +1113,7 @@ static inline uint16_t
rte_dma_burst_capacity(int16_t dev_id, uint16_t vchan)
{
struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
+ uint16_t ret;
#ifdef RTE_DMADEV_DEBUG
if (!rte_dma_is_valid(dev_id))
@@ -1094,7 +1121,10 @@ rte_dma_burst_capacity(int16_t dev_id, uint16_t vchan)
if (*obj->burst_capacity == NULL)
return 0;
#endif
- return (*obj->burst_capacity)(obj->dev_private, vchan);
+ ret = (*obj->burst_capacity)(obj->dev_private, vchan);
+ rte_dma_trace_burst_capacity(dev_id, vchan, ret);
+
+ return ret;
}
#ifdef __cplusplus
diff --git a/lib/dmadev/rte_dmadev_trace_fp.h b/lib/dmadev/rte_dmadev_trace_fp.h
new file mode 100644
index 0000000..f5b9683
--- /dev/null
+++ b/lib/dmadev/rte_dmadev_trace_fp.h
@@ -0,0 +1,150 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2024 HiSilicon Limited
+ */
+
+#ifndef RTE_DMADEV_TRACE_FP_H
+#define RTE_DMADEV_TRACE_FP_H
+
+/**
+ * @file
+ *
+ * API for dmadev fastpath trace support
+ */
+
+#include <rte_trace_point.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_stats_get,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
+ struct rte_dma_stats *stats, int ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u64(stats->submitted);
+ rte_trace_point_emit_u64(stats->completed);
+ rte_trace_point_emit_u64(stats->errors);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_vchan_status,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
+ enum rte_dma_vchan_status *status, int ret),
+#ifdef _RTE_TRACE_POINT_REGISTER_H_
+ enum rte_dma_vchan_status __status = 0;
+ status = &__status;
+#endif /* _RTE_TRACE_POINT_REGISTER_H_ */
+ int vchan_status = *status;
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_int(vchan_status);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_copy,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan, rte_iova_t src,
+ rte_iova_t dst, uint32_t length, uint64_t flags,
+ int ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u64(src);
+ rte_trace_point_emit_u64(dst);
+ rte_trace_point_emit_u32(length);
+ rte_trace_point_emit_u64(flags);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_copy_sg,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
+ struct rte_dma_sge *src, struct rte_dma_sge *dst,
+ uint16_t nb_src, uint16_t nb_dst, uint64_t flags,
+ int ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_ptr(src);
+ rte_trace_point_emit_ptr(dst);
+ rte_trace_point_emit_u16(nb_src);
+ rte_trace_point_emit_u16(nb_dst);
+ rte_trace_point_emit_u64(flags);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_fill,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan, uint64_t pattern,
+ rte_iova_t dst, uint32_t length, uint64_t flags,
+ int ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u64(pattern);
+ rte_trace_point_emit_u64(dst);
+ rte_trace_point_emit_u32(length);
+ rte_trace_point_emit_u64(flags);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_submit,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan, int ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_int(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_completed,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
+ const uint16_t nb_cpls, uint16_t *last_idx,
+ bool *has_error, uint16_t ret),
+#ifdef _RTE_TRACE_POINT_REGISTER_H_
+ uint16_t __last_idx = 0;
+ bool __has_error = false;
+ last_idx = &__last_idx;
+ has_error = &__has_error;
+#endif /* _RTE_TRACE_POINT_REGISTER_H_ */
+ int has_error_val = *has_error;
+ int last_idx_val = *last_idx;
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u16(nb_cpls);
+ rte_trace_point_emit_int(last_idx_val);
+ rte_trace_point_emit_int(has_error_val);
+ rte_trace_point_emit_u16(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_completed_status,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan,
+ const uint16_t nb_cpls, uint16_t *last_idx,
+ enum rte_dma_status_code *status, uint16_t ret),
+#ifdef _RTE_TRACE_POINT_REGISTER_H_
+ uint16_t __last_idx = 0;
+ last_idx = &__last_idx;
+#endif /* _RTE_TRACE_POINT_REGISTER_H_ */
+ int last_idx_val = *last_idx;
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u16(nb_cpls);
+ rte_trace_point_emit_int(last_idx_val);
+ rte_trace_point_emit_ptr(status);
+ rte_trace_point_emit_u16(ret);
+)
+
+RTE_TRACE_POINT_FP(
+ rte_dma_trace_burst_capacity,
+ RTE_TRACE_POINT_ARGS(int16_t dev_id, uint16_t vchan, uint16_t ret),
+ rte_trace_point_emit_i16(dev_id);
+ rte_trace_point_emit_u16(vchan);
+ rte_trace_point_emit_u16(ret);
+)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* RTE_DMADEV_TRACE_FP_H */
diff --git a/lib/dmadev/rte_dmadev_trace_points.c b/lib/dmadev/rte_dmadev_trace_points.c
index 2a83b90..4c74356 100644
--- a/lib/dmadev/rte_dmadev_trace_points.c
+++ b/lib/dmadev/rte_dmadev_trace_points.c
@@ -24,8 +24,35 @@ RTE_TRACE_POINT_REGISTER(rte_dma_trace_close,
RTE_TRACE_POINT_REGISTER(rte_dma_trace_vchan_setup,
lib.dmadev.vchan_setup)
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_stats_get,
+ lib.dmadev.stats_get)
+
RTE_TRACE_POINT_REGISTER(rte_dma_trace_stats_reset,
lib.dmadev.stats_reset)
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_vchan_status,
+ lib.dmadev.vchan_status)
+
RTE_TRACE_POINT_REGISTER(rte_dma_trace_dump,
lib.dmadev.dump)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy,
+ lib.dmadev.copy)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_copy_sg,
+ lib.dmadev.copy_sg)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_fill,
+ lib.dmadev.fill)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_submit,
+ lib.dmadev.submit)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed,
+ lib.dmadev.completed)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_completed_status,
+ lib.dmadev.completed_status)
+
+RTE_TRACE_POINT_REGISTER(rte_dma_trace_burst_capacity,
+ lib.dmadev.burst_capacity)
diff --git a/lib/dmadev/version.map b/lib/dmadev/version.map
index 2a37365..14e0927 100644
--- a/lib/dmadev/version.map
+++ b/lib/dmadev/version.map
@@ -20,6 +20,21 @@ DPDK_24 {
local: *;
};
+EXPERIMENTAL {
+ global:
+
+ # added in 24.03
+ __rte_dma_trace_burst_capacity;
+ __rte_dma_trace_completed;
+ __rte_dma_trace_completed_status;
+ __rte_dma_trace_copy;
+ __rte_dma_trace_copy_sg;
+ __rte_dma_trace_fill;
+ __rte_dma_trace_submit;
+
+ local: *;
+};
+
INTERNAL {
global:
--
2.33.0


@@ -0,0 +1,102 @@
From 8ff07d4203a07fadbae3b9cac3e0f6301d85b022 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 26 Jan 2024 06:10:06 +0000
Subject: [PATCH 16/30] eal: introduce more macros for bit definition
[ upstream commit 1d8f2285ed3ffc3dfbf0857a960915c0e8ef6a8d ]
Introduce macros:
1. RTE_SHIFT_VAL64: get the uint64_t value which shifted by nr.
2. RTE_SHIFT_VAL32: get the uint32_t value which shifted by nr.
3. RTE_GENMASK64: generate a contiguous 64bit bitmask starting at bit
position low and ending at position high.
4. RTE_GENMASK32: generate a contiguous 32bit bitmask starting at bit
position low and ending at position high.
5. RTE_FIELD_GET64: extract a 64bit field element.
6. RTE_FIELD_GET32: extract a 32bit field element.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/eal/include/rte_bitops.h | 66 ++++++++++++++++++++++++++++++++++++
1 file changed, 66 insertions(+)
diff --git a/lib/eal/include/rte_bitops.h b/lib/eal/include/rte_bitops.h
index 6bd8bae..449565e 100644
--- a/lib/eal/include/rte_bitops.h
+++ b/lib/eal/include/rte_bitops.h
@@ -39,6 +39,72 @@ extern "C" {
*/
#define RTE_BIT32(nr) (UINT32_C(1) << (nr))
+/**
+ * Get the uint32_t shifted value.
+ *
+ * @param val
+ * The value to be shifted.
+ * @param nr
+ * The shift number in range of 0 to (32 - width of val).
+ */
+#define RTE_SHIFT_VAL32(val, nr) (UINT32_C(val) << (nr))
+
+/**
+ * Get the uint64_t shifted value.
+ *
+ * @param val
+ * The value to be shifted.
+ * @param nr
+ * The shift number in range of 0 to (64 - width of val).
+ */
+#define RTE_SHIFT_VAL64(val, nr) (UINT64_C(val) << (nr))
+
+/**
+ * Generate a contiguous 32-bit mask
+ * starting at bit position low and ending at position high.
+ *
+ * @param high
+ * High bit position.
+ * @param low
+ * Low bit position.
+ */
+#define RTE_GENMASK32(high, low) \
+ (((~UINT32_C(0)) << (low)) & (~UINT32_C(0) >> (31u - (high))))
+
+/**
+ * Generate a contiguous 64-bit mask
+ * starting at bit position low and ending at position high.
+ *
+ * @param high
+ * High bit position.
+ * @param low
+ * Low bit position.
+ */
+#define RTE_GENMASK64(high, low) \
+ (((~UINT64_C(0)) << (low)) & (~UINT64_C(0) >> (63u - (high))))
+
+/**
+ * Extract a 32-bit field element.
+ *
+ * @param mask
+ * Shifted mask.
+ * @param reg
+ * Value of entire bitfield.
+ */
+#define RTE_FIELD_GET32(mask, reg) \
+ ((typeof(mask))(((reg) & (mask)) >> rte_ctz32(mask)))
+
+/**
+ * Extract a 64-bit field element.
+ *
+ * @param mask
+ * Shifted mask.
+ * @param reg
+ * Value of entire bitfield.
+ */
+#define RTE_FIELD_GET64(mask, reg) \
+ ((typeof(mask))(((reg) & (mask)) >> rte_ctz64(mask)))
+
/*------------------------ 32-bit relaxed operations ------------------------*/
/**
--
2.33.0


@@ -0,0 +1,99 @@
From 5d3158e366fe1293ee5a7293a9050bf93a460f53 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Mon, 19 Feb 2024 16:32:52 +0800
Subject: [PATCH 17/30] ring: add telemetry command to list rings
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
[ upstream commit 36e5c1b91607cc41e73e1108bdc5843c68e3ebc6 ]
Add a telemetry command to list the rings used in the system.
An example using this command is shown below:
--> /ring/list
{
"/ring/list": [
"HT_0000:7d:00.2",
"MP_mb_pool_0"
]
}
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/ring/meson.build | 1 +
lib/ring/rte_ring.c | 40 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 41 insertions(+)
diff --git a/lib/ring/meson.build b/lib/ring/meson.build
index c20685c..7fca958 100644
--- a/lib/ring/meson.build
+++ b/lib/ring/meson.build
@@ -18,3 +18,4 @@ indirect_headers += files (
'rte_ring_rts.h',
'rte_ring_rts_elem_pvt.h',
)
+deps += ['telemetry']
diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index 057d25f..6a10280 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -22,6 +22,7 @@
#include <rte_errno.h>
#include <rte_string_fns.h>
#include <rte_tailq.h>
+#include <rte_telemetry.h>
#include "rte_ring.h"
#include "rte_ring_elem.h"
@@ -418,3 +419,42 @@ rte_ring_lookup(const char *name)
return r;
}
+
+static void
+ring_walk(void (*func)(struct rte_ring *, void *), void *arg)
+{
+ struct rte_ring_list *ring_list;
+ struct rte_tailq_entry *tailq_entry;
+
+ ring_list = RTE_TAILQ_CAST(rte_ring_tailq.head, rte_ring_list);
+ rte_mcfg_tailq_read_lock();
+
+ TAILQ_FOREACH(tailq_entry, ring_list, next) {
+ (*func)((struct rte_ring *) tailq_entry->data, arg);
+ }
+
+ rte_mcfg_tailq_read_unlock();
+}
+
+static void
+ring_list_cb(struct rte_ring *r, void *arg)
+{
+ struct rte_tel_data *d = (struct rte_tel_data *)arg;
+
+ rte_tel_data_add_array_string(d, r->name);
+}
+
+static int
+ring_handle_list(const char *cmd __rte_unused,
+ const char *params __rte_unused, struct rte_tel_data *d)
+{
+ rte_tel_data_start_array(d, RTE_TEL_STRING_VAL);
+ ring_walk(ring_list_cb, d);
+ return 0;
+}
+
+RTE_INIT(ring_init_telemetry)
+{
+ rte_telemetry_register_cmd("/ring/list", ring_handle_list,
+ "Returns list of available rings. Takes no parameters");
+}
--
2.33.0


@@ -0,0 +1,154 @@
From b57a22942d27636291889298a3e186fde0f5225e Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Mon, 19 Feb 2024 16:32:53 +0800
Subject: [PATCH 18/30] ring: add telemetry command for ring info
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
[ upstream commit 2e99bd65ca788f7f540b2c155208dae2b0128b36 ]
This patch supports dump of ring information by its name.
An example using this command is shown below:
--> /ring/info,MP_mb_pool_0
{
"/ring/info": {
"name": "MP_mb_pool_0",
"socket": 0,
"flags": 0,
"producer_type": "MP",
"consumer_type": "MC",
"size": 262144,
"mask": "0x3ffff",
"capacity": 262143,
"used_count": 153197,
"mz_name": "RG_MP_mb_pool_0",
"mz_len": 2097536,
"mz_hugepage_sz": 1073741824,
"mz_socket_id": 0,
"mz_flags": "0x0"
}
}
Signed-off-by: Jie Hai <haijie1@huawei.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/ring/rte_ring.c | 95 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 95 insertions(+)
diff --git a/lib/ring/rte_ring.c b/lib/ring/rte_ring.c
index 6a10280..ec56b38 100644
--- a/lib/ring/rte_ring.c
+++ b/lib/ring/rte_ring.c
@@ -453,8 +453,103 @@ ring_handle_list(const char *cmd __rte_unused,
return 0;
}
+static const char *
+ring_prod_sync_type_to_name(struct rte_ring *r)
+{
+ switch (r->prod.sync_type) {
+ case RTE_RING_SYNC_MT:
+ return "MP";
+ case RTE_RING_SYNC_ST:
+ return "SP";
+ case RTE_RING_SYNC_MT_RTS:
+ return "MP_RTS";
+ case RTE_RING_SYNC_MT_HTS:
+ return "MP_HTS";
+ default:
+ return "Unknown";
+ }
+}
+
+static const char *
+ring_cons_sync_type_to_name(struct rte_ring *r)
+{
+ switch (r->cons.sync_type) {
+ case RTE_RING_SYNC_MT:
+ return "MC";
+ case RTE_RING_SYNC_ST:
+ return "SC";
+ case RTE_RING_SYNC_MT_RTS:
+ return "MC_RTS";
+ case RTE_RING_SYNC_MT_HTS:
+ return "MC_HTS";
+ default:
+ return "Unknown";
+ }
+}
+
+struct ring_info_cb_arg {
+ char *ring_name;
+ struct rte_tel_data *d;
+};
+
+static void
+ring_info_cb(struct rte_ring *r, void *arg)
+{
+ struct ring_info_cb_arg *ring_arg = (struct ring_info_cb_arg *)arg;
+ struct rte_tel_data *d = ring_arg->d;
+ const struct rte_memzone *mz;
+
+ if (strncmp(r->name, ring_arg->ring_name, RTE_RING_NAMESIZE))
+ return;
+
+ rte_tel_data_add_dict_string(d, "name", r->name);
+ rte_tel_data_add_dict_int(d, "socket", r->memzone->socket_id);
+ rte_tel_data_add_dict_int(d, "flags", r->flags);
+ rte_tel_data_add_dict_string(d, "producer_type",
+ ring_prod_sync_type_to_name(r));
+ rte_tel_data_add_dict_string(d, "consumer_type",
+ ring_cons_sync_type_to_name(r));
+ rte_tel_data_add_dict_uint(d, "size", r->size);
+ rte_tel_data_add_dict_uint_hex(d, "mask", r->mask, 0);
+ rte_tel_data_add_dict_uint(d, "capacity", r->capacity);
+ rte_tel_data_add_dict_uint(d, "used_count", rte_ring_count(r));
+
+ mz = r->memzone;
+ if (mz == NULL)
+ return;
+ rte_tel_data_add_dict_string(d, "mz_name", mz->name);
+ rte_tel_data_add_dict_uint(d, "mz_len", mz->len);
+ rte_tel_data_add_dict_uint(d, "mz_hugepage_sz", mz->hugepage_sz);
+ rte_tel_data_add_dict_int(d, "mz_socket_id", mz->socket_id);
+ rte_tel_data_add_dict_uint_hex(d, "mz_flags", mz->flags, 0);
+}
+
+static int
+ring_handle_info(const char *cmd __rte_unused, const char *params,
+ struct rte_tel_data *d)
+{
+ char name[RTE_RING_NAMESIZE] = {0};
+ struct ring_info_cb_arg ring_arg;
+
+ if (params == NULL || strlen(params) == 0 ||
+ strlen(params) >= RTE_RING_NAMESIZE)
+ return -EINVAL;
+
+ rte_strlcpy(name, params, RTE_RING_NAMESIZE);
+
+ ring_arg.ring_name = name;
+ ring_arg.d = d;
+
+ rte_tel_data_start_dict(d);
+ ring_walk(ring_info_cb, &ring_arg);
+
+ return 0;
+}
+
RTE_INIT(ring_init_telemetry)
{
rte_telemetry_register_cmd("/ring/list", ring_handle_list,
"Returns list of available rings. Takes no parameters");
+ rte_telemetry_register_cmd("/ring/info", ring_handle_info,
+ "Returns ring info. Parameters: ring_name.");
}
--
2.33.0


@@ -0,0 +1,93 @@
From 6b7a19a619a47d85cb21edce610d9b7a2313b19d Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Fri, 1 Dec 2023 16:52:53 +0800
Subject: [PATCH 19/30] ethdev: get RSS hash algorithm by name
[ upstream commit c9884dfb4a35ec00c1aaa2ece2033da679233003 ]
This patch supports conversion from names to hash algorithm
(see RTE_ETH_HASH_FUNCTION_XXX).
Signed-off-by: Jie Hai <haijie1@huawei.com>
Reviewed-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
lib/ethdev/rte_ethdev.c | 15 +++++++++++++++
lib/ethdev/rte_ethdev.h | 20 ++++++++++++++++++++
lib/ethdev/version.map | 3 +++
3 files changed, 38 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 3858983..c0398d9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -4826,6 +4826,21 @@ rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo)
return name;
}
+int
+rte_eth_find_rss_algo(const char *name, uint32_t *algo)
+{
+ unsigned int i;
+
+ for (i = 0; i < RTE_DIM(rte_eth_dev_rss_algo_names); i++) {
+ if (strcmp(name, rte_eth_dev_rss_algo_names[i].name) == 0) {
+ *algo = rte_eth_dev_rss_algo_names[i].algo;
+ return 0;
+ }
+ }
+
+ return -EINVAL;
+}
+
int
rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
struct rte_eth_udp_tunnel *udp_tunnel)
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index 77331ce..57a9a55 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -4667,6 +4667,26 @@ __rte_experimental
const char *
rte_eth_dev_rss_algo_name(enum rte_eth_hash_function rss_algo);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Get RSS hash algorithm by its name.
+ *
+ * @param name
+ * RSS hash algorithm.
+ *
+ * @param algo
+ * Return the RSS hash algorithm found, @see rte_eth_hash_function.
+ *
+ * @return
+ * - (0) if successful.
+ * - (-EINVAL) if not found.
+ */
+__rte_experimental
+int
+rte_eth_find_rss_algo(const char *name, uint32_t *algo);
+
/**
* Add UDP tunneling port for a type of tunnel.
*
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 5c4917c..a050baa 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -316,6 +316,9 @@ EXPERIMENTAL {
rte_eth_recycle_rx_queue_info_get;
rte_flow_group_set_miss_actions;
rte_flow_calc_table_hash;
+
+ # added in 24.03
+ rte_eth_find_rss_algo;
};
INTERNAL {
--
2.33.0


@@ -0,0 +1,150 @@
From 787e715f8e77722658d56209c1a73ba5626d19fa Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Fri, 1 Dec 2023 16:52:54 +0800
Subject: [PATCH 20/30] app/testpmd: set RSS hash algorithm
[ upstream commit 3da59f30a23f2e795d2315f3d949e1b3e0ce0c3d ]
Since API rte_eth_dev_rss_hash_update() supports setting RSS hash
algorithm, add new command to support it:
testpmd> port config 0 rss-hash-algo symmetric_toeplitz
Signed-off-by: Jie Hai <haijie1@huawei.com>
Reviewed-by: Huisong Li <lihuisong@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 81 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 10 +++
2 files changed, 91 insertions(+)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 9369d3b..f704319 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -726,6 +726,10 @@ static void cmd_help_long_parsed(void *parsed_result,
"port config port-id rss reta (hash,queue)[,(hash,queue)]\n"
" Set the RSS redirection table.\n\n"
+ "port config (port_id) rss-hash-algo (default|simple_xor|toeplitz|"
+ "symmetric_toeplitz|symmetric_toeplitz_sort)\n"
+ " Set the RSS hash algorithm.\n\n"
+
"port config (port_id) dcb vt (on|off) (traffic_class)"
" pfc (on|off)\n"
" Set the DCB mode.\n\n"
@@ -2275,6 +2279,82 @@ static cmdline_parse_inst_t cmd_config_rss_hash_key = {
},
};
+/* *** configure rss hash algorithm *** */
+struct cmd_config_rss_hash_algo {
+ cmdline_fixed_string_t port;
+ cmdline_fixed_string_t config;
+ portid_t port_id;
+ cmdline_fixed_string_t rss_hash_algo;
+ cmdline_fixed_string_t algo;
+};
+
+static void
+cmd_config_rss_hash_algo_parsed(void *parsed_result,
+ __rte_unused struct cmdline *cl,
+ __rte_unused void *data)
+{
+ struct cmd_config_rss_hash_algo *res = parsed_result;
+ uint8_t rss_key[RSS_HASH_KEY_LENGTH];
+ struct rte_eth_rss_conf rss_conf;
+ uint32_t algorithm;
+ int ret;
+
+ rss_conf.rss_key_len = RSS_HASH_KEY_LENGTH;
+ rss_conf.rss_key = rss_key;
+ ret = rte_eth_dev_rss_hash_conf_get(res->port_id, &rss_conf);
+ if (ret != 0) {
+ fprintf(stderr, "failed to get port %u RSS configuration\n",
+ res->port_id);
+ return;
+ }
+
+ algorithm = (uint32_t)rss_conf.algorithm;
+ ret = rte_eth_find_rss_algo(res->algo, &algorithm);
+ if (ret != 0) {
+ fprintf(stderr, "port %u configured invalid RSS hash algorithm: %s\n",
+ res->port_id, res->algo);
+ return;
+ }
+
+ ret = rte_eth_dev_rss_hash_update(res->port_id, &rss_conf);
+ if (ret != 0) {
+ fprintf(stderr, "failed to set port %u RSS hash algorithm\n",
+ res->port_id);
+ return;
+ }
+}
+
+static cmdline_parse_token_string_t cmd_config_rss_hash_algo_port =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_algo, port, "port");
+static cmdline_parse_token_string_t cmd_config_rss_hash_algo_config =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_algo, config,
+ "config");
+static cmdline_parse_token_num_t cmd_config_rss_hash_algo_port_id =
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rss_hash_algo, port_id,
+ RTE_UINT16);
+static cmdline_parse_token_string_t cmd_config_rss_hash_algo_rss_hash_algo =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_algo,
+ rss_hash_algo, "rss-hash-algo");
+static cmdline_parse_token_string_t cmd_config_rss_hash_algo_algo =
+ TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_algo, algo,
+ "default#simple_xor#toeplitz#"
+ "symmetric_toeplitz#symmetric_toeplitz_sort");
+
+static cmdline_parse_inst_t cmd_config_rss_hash_algo = {
+ .f = cmd_config_rss_hash_algo_parsed,
+ .data = NULL,
+ .help_str = "port config <port_id> rss-hash-algo "
+ "default|simple_xor|toeplitz|symmetric_toeplitz|symmetric_toeplitz_sort",
+ .tokens = {
+ (void *)&cmd_config_rss_hash_algo_port,
+ (void *)&cmd_config_rss_hash_algo_config,
+ (void *)&cmd_config_rss_hash_algo_port_id,
+ (void *)&cmd_config_rss_hash_algo_rss_hash_algo,
+ (void *)&cmd_config_rss_hash_algo_algo,
+ NULL,
+ },
+};
+
/* *** cleanup txq mbufs *** */
struct cmd_cleanup_txq_mbufs_result {
cmdline_fixed_string_t port;
@@ -13165,6 +13245,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
(cmdline_parse_inst_t *)&cmd_showport_rss_hash_key,
(cmdline_parse_inst_t *)&cmd_showport_rss_hash_algo,
(cmdline_parse_inst_t *)&cmd_config_rss_hash_key,
+ (cmdline_parse_inst_t *)&cmd_config_rss_hash_algo,
(cmdline_parse_inst_t *)&cmd_cleanup_txq_mbufs,
(cmdline_parse_inst_t *)&cmd_dump,
(cmdline_parse_inst_t *)&cmd_dump_one,
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 447e28e..227188f 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -2263,6 +2263,16 @@ hash of input [IP] packets received on port::
ipv6-udp-ex <string of hex digits \
(variable length, NIC dependent)>)
+port config rss hash algorithm
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To configure the RSS hash algorithm used to compute the RSS
+hash of input packets received on port::
+
+ testpmd> port config <port_id> rss-hash-algo (default|\
+ simple_xor|toeplitz|symmetric_toeplitz|\
+ symmetric_toeplitz_sort)
+
port cleanup txq mbufs
~~~~~~~~~~~~~~~~~~~~~~
--
2.33.0

From 7adff88ea9f41ea3270f52407345571cd35946e6 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 14:55:05 +0800
Subject: [PATCH 21/30] net/hns3: refactor VF mailbox message struct
[ upstream commit 692b35be121b724119da001d7ec4c0fabd51177b ]
The data region in the VF to PF mailbox message command is
used to communicate with the PF driver, and this data
region exists as a plain byte array. As a result, some
complicated feature commands, like setting promiscuous
mode, mapping/unmapping ring vectors and setting a VLAN id,
have to use magic-number offsets to fill it. This isn't
good for maintainability of the driver. So this patch
refactors these messages by extracting a hns3_vf_to_pf_msg
structure.
In addition, the PF link change event message, which is
reported by the firmware in hns3_mbx_vf_to_pf_cmd format,
also needs to be adapted.
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 54 ++++++++++++++---------------
drivers/net/hns3/hns3_mbx.c | 24 ++++++-------
drivers/net/hns3/hns3_mbx.h | 56 ++++++++++++++++++++++---------
3 files changed, 76 insertions(+), 58 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 916cc0f..19e734c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -254,11 +254,12 @@ hns3vf_set_promisc_mode(struct hns3_hw *hw, bool en_bc_pmc,
* the packets with vlan tag in promiscuous mode.
*/
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
- req->msg[0] = HNS3_MBX_SET_PROMISC_MODE;
- req->msg[1] = en_bc_pmc ? 1 : 0;
- req->msg[2] = en_uc_pmc ? 1 : 0;
- req->msg[3] = en_mc_pmc ? 1 : 0;
- req->msg[4] = hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0;
+ req->msg.code = HNS3_MBX_SET_PROMISC_MODE;
+ req->msg.en_bc = en_bc_pmc ? 1 : 0;
+ req->msg.en_uc = en_uc_pmc ? 1 : 0;
+ req->msg.en_mc = en_mc_pmc ? 1 : 0;
+ req->msg.en_limit_promisc =
+ hw->promisc_mode == HNS3_LIMIT_PROMISC_MODE ? 1 : 0;
ret = hns3_cmd_send(hw, &desc, 1);
if (ret)
@@ -347,30 +348,28 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id,
bool mmap, enum hns3_ring_type queue_type,
uint16_t queue_id)
{
- struct hns3_vf_bind_vector_msg bind_msg;
+#define HNS3_RING_VERCTOR_DATA_SIZE 14
+ struct hns3_vf_to_pf_msg req = {0};
const char *op_str;
- uint16_t code;
int ret;
- memset(&bind_msg, 0, sizeof(bind_msg));
- code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR :
+ req.code = mmap ? HNS3_MBX_MAP_RING_TO_VECTOR :
HNS3_MBX_UNMAP_RING_TO_VECTOR;
- bind_msg.vector_id = (uint8_t)vector_id;
+ req.vector_id = (uint8_t)vector_id;
+ req.ring_num = 1;
if (queue_type == HNS3_RING_TYPE_RX)
- bind_msg.param[0].int_gl_index = HNS3_RING_GL_RX;
+ req.ring_param[0].int_gl_index = HNS3_RING_GL_RX;
else
- bind_msg.param[0].int_gl_index = HNS3_RING_GL_TX;
-
- bind_msg.param[0].ring_type = queue_type;
- bind_msg.ring_num = 1;
- bind_msg.param[0].tqp_index = queue_id;
+ req.ring_param[0].int_gl_index = HNS3_RING_GL_TX;
+ req.ring_param[0].ring_type = queue_type;
+ req.ring_param[0].tqp_index = queue_id;
op_str = mmap ? "Map" : "Unmap";
- ret = hns3_send_mbx_msg(hw, code, 0, (uint8_t *)&bind_msg,
- sizeof(bind_msg), false, NULL, 0);
+ ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id,
+ HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0);
if (ret)
- hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret is %d.",
- op_str, queue_id, bind_msg.vector_id, ret);
+ hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.",
+ op_str, queue_id, req.vector_id, ret);
return ret;
}
@@ -965,19 +964,16 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status,
static int
hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
{
-#define HNS3VF_VLAN_MBX_MSG_LEN 5
+ struct hns3_mbx_vlan_filter vlan_filter = {0};
struct hns3_hw *hw = &hns->hw;
- uint8_t msg_data[HNS3VF_VLAN_MBX_MSG_LEN];
- uint16_t proto = htons(RTE_ETHER_TYPE_VLAN);
- uint8_t is_kill = on ? 0 : 1;
- msg_data[0] = is_kill;
- memcpy(&msg_data[1], &vlan_id, sizeof(vlan_id));
- memcpy(&msg_data[3], &proto, sizeof(proto));
+ vlan_filter.is_kill = on ? 0 : 1;
+ vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN);
+ vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id);
return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER,
- msg_data, HNS3VF_VLAN_MBX_MSG_LEN, true, NULL,
- 0);
+ (uint8_t *)&vlan_filter, sizeof(vlan_filter),
+ true, NULL, 0);
}
static int
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index f1743c1..ad5ec55 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -11,8 +11,6 @@
#include "hns3_intr.h"
#include "hns3_rxtx.h"
-#define HNS3_CMD_CODE_OFFSET 2
-
static const struct errno_respcode_map err_code_map[] = {
{0, 0},
{1, -EPERM},
@@ -127,29 +125,30 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
struct hns3_mbx_vf_to_pf_cmd *req;
struct hns3_cmd_desc desc;
bool is_ring_vector_msg;
- int offset;
int ret;
req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
/* first two bytes are reserved for code & subcode */
- if (msg_len > (HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET)) {
+ if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) {
hns3_err(hw,
"VF send mbx msg fail, msg len %u exceeds max payload len %d",
- msg_len, HNS3_MBX_MAX_MSG_SIZE - HNS3_CMD_CODE_OFFSET);
+ msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE);
return -EINVAL;
}
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
- req->msg[0] = code;
+ req->msg.code = code;
is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) ||
(code == HNS3_MBX_UNMAP_RING_TO_VECTOR) ||
(code == HNS3_MBX_GET_RING_VECTOR_MAP);
if (!is_ring_vector_msg)
- req->msg[1] = subcode;
+ req->msg.subcode = subcode;
if (msg_data) {
- offset = is_ring_vector_msg ? 1 : HNS3_CMD_CODE_OFFSET;
- memcpy(&req->msg[offset], msg_data, msg_len);
+ if (is_ring_vector_msg)
+ memcpy(&req->msg.vector_id, msg_data, msg_len);
+ else
+ memcpy(&req->msg.data, msg_data, msg_len);
}
/* synchronous send */
@@ -296,11 +295,8 @@ static void
hns3pf_handle_link_change_event(struct hns3_hw *hw,
struct hns3_mbx_vf_to_pf_cmd *req)
{
-#define LINK_STATUS_OFFSET 1
-#define LINK_FAIL_CODE_OFFSET 2
-
- if (!req->msg[LINK_STATUS_OFFSET])
- hns3_link_fail_parse(hw, req->msg[LINK_FAIL_CODE_OFFSET]);
+ if (!req->msg.link_status)
+ hns3_link_fail_parse(hw, req->msg.link_fail_code);
hns3_update_linkstatus_and_event(hw, true);
}
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 4a32880..59fb73a 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -89,7 +89,6 @@ enum hns3_mbx_link_fail_subcode {
HNS3_MBX_LF_XSFP_ABSENT,
};
-#define HNS3_MBX_MAX_MSG_SIZE 16
#define HNS3_MBX_MAX_RESP_DATA_SIZE 8
#define HNS3_MBX_DEF_TIME_LIMIT_MS 500
@@ -107,6 +106,46 @@ struct hns3_mbx_resp_status {
uint8_t additional_info[HNS3_MBX_MAX_RESP_DATA_SIZE];
};
+struct hns3_ring_chain_param {
+ uint8_t ring_type;
+ uint8_t tqp_index;
+ uint8_t int_gl_index;
+};
+
+struct hns3_mbx_vlan_filter {
+ uint8_t is_kill;
+ uint16_t vlan_id;
+ uint16_t proto;
+} __rte_packed;
+
+#define HNS3_MBX_MSG_MAX_DATA_SIZE 14
+#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4
+struct hns3_vf_to_pf_msg {
+ uint8_t code;
+ union {
+ struct {
+ uint8_t subcode;
+ uint8_t data[HNS3_MBX_MSG_MAX_DATA_SIZE];
+ };
+ struct {
+ uint8_t en_bc;
+ uint8_t en_uc;
+ uint8_t en_mc;
+ uint8_t en_limit_promisc;
+ };
+ struct {
+ uint8_t vector_id;
+ uint8_t ring_num;
+ struct hns3_ring_chain_param
+ ring_param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM];
+ };
+ struct {
+ uint8_t link_status;
+ uint8_t link_fail_code;
+ };
+ };
+};
+
struct errno_respcode_map {
uint16_t resp_code;
int err_no;
@@ -122,7 +161,7 @@ struct hns3_mbx_vf_to_pf_cmd {
uint8_t msg_len;
uint8_t rsv2;
uint16_t match_id;
- uint8_t msg[HNS3_MBX_MAX_MSG_SIZE];
+ struct hns3_vf_to_pf_msg msg;
};
struct hns3_mbx_pf_to_vf_cmd {
@@ -134,19 +173,6 @@ struct hns3_mbx_pf_to_vf_cmd {
uint16_t msg[8];
};
-struct hns3_ring_chain_param {
- uint8_t ring_type;
- uint8_t tqp_index;
- uint8_t int_gl_index;
-};
-
-#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4
-struct hns3_vf_bind_vector_msg {
- uint8_t vector_id;
- uint8_t ring_num;
- struct hns3_ring_chain_param param[HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM];
-};
-
struct hns3_pf_rst_done_cmd {
uint8_t pf_rst_done;
uint8_t rsv[23];
--
2.33.0

From 11e1f6c8b4ec6d246b65448a65533aad7129a5d2 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 14:55:06 +0800
Subject: [PATCH 22/30] net/hns3: refactor PF mailbox message struct
[ upstream commit 4d534598d922130d12c244d3237652fbfdad0f4b ]
The data region in the PF to VF mailbox message command is
used to communicate with the VF driver, and this data
region exists as a plain byte array. As a result, some
complicated feature commands, like the mailbox response,
link change event, closing promiscuous mode, reset request
and updating the PVID state, have to use magic-number
offsets to fill it. This isn't good for maintainability of
the driver. So this patch refactors these messages by
extracting a hns3_pf_to_vf_msg structure.
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_mbx.c | 38 ++++++++++++++++++-------------------
drivers/net/hns3/hns3_mbx.h | 25 +++++++++++++++++++++++-
2 files changed, 43 insertions(+), 20 deletions(-)
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index ad5ec55..c90f5d5 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -192,17 +192,17 @@ static void
hns3vf_handle_link_change_event(struct hns3_hw *hw,
struct hns3_mbx_pf_to_vf_cmd *req)
{
+ struct hns3_mbx_link_status *link_info =
+ (struct hns3_mbx_link_status *)req->msg.msg_data;
uint8_t link_status, link_duplex;
- uint16_t *msg_q = req->msg;
uint8_t support_push_lsc;
uint32_t link_speed;
- memcpy(&link_speed, &msg_q[2], sizeof(link_speed));
- link_status = rte_le_to_cpu_16(msg_q[1]);
- link_duplex = (uint8_t)rte_le_to_cpu_16(msg_q[4]);
- hns3vf_update_link_status(hw, link_status, link_speed,
- link_duplex);
- support_push_lsc = (*(uint8_t *)&msg_q[5]) & 1u;
+ link_status = (uint8_t)rte_le_to_cpu_16(link_info->link_status);
+ link_speed = rte_le_to_cpu_32(link_info->speed);
+ link_duplex = (uint8_t)rte_le_to_cpu_16(link_info->duplex);
+ hns3vf_update_link_status(hw, link_status, link_speed, link_duplex);
+ support_push_lsc = (link_info->flag) & 1u;
hns3vf_update_push_lsc_cap(hw, support_push_lsc);
}
@@ -211,7 +211,6 @@ hns3_handle_asserting_reset(struct hns3_hw *hw,
struct hns3_mbx_pf_to_vf_cmd *req)
{
enum hns3_reset_level reset_level;
- uint16_t *msg_q = req->msg;
/*
* PF has asserted reset hence VF should go in pending
@@ -219,7 +218,7 @@ hns3_handle_asserting_reset(struct hns3_hw *hw,
* has been completely reset. After this stack should
* eventually be re-initialized.
*/
- reset_level = rte_le_to_cpu_16(msg_q[1]);
+ reset_level = rte_le_to_cpu_16(req->msg.reset_level);
hns3_atomic_set_bit(reset_level, &hw->reset.pending);
hns3_warn(hw, "PF inform reset level %d", reset_level);
@@ -241,8 +240,9 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
* to match the request.
*/
if (req->match_id == resp->match_id) {
- resp->resp_status = hns3_resp_to_errno(req->msg[3]);
- memcpy(resp->additional_info, &req->msg[4],
+ resp->resp_status =
+ hns3_resp_to_errno(req->msg.resp_status);
+ memcpy(resp->additional_info, &req->msg.resp_data,
HNS3_MBX_MAX_RESP_DATA_SIZE);
rte_io_wmb();
resp->received_match_resp = true;
@@ -255,7 +255,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
* support copy request's match_id to its response. So VF follows the
* original scheme to process.
*/
- msg_data = (uint32_t)req->msg[1] << HNS3_MBX_RESP_CODE_OFFSET | req->msg[2];
+ msg_data = (uint32_t)req->msg.vf_mbx_msg_code <<
+ HNS3_MBX_RESP_CODE_OFFSET | req->msg.vf_mbx_msg_subcode;
if (resp->req_msg_data != msg_data) {
hns3_warn(hw,
"received response tag (%u) is mismatched with requested tag (%u)",
@@ -263,8 +264,8 @@ hns3_handle_mbx_response(struct hns3_hw *hw, struct hns3_mbx_pf_to_vf_cmd *req)
return;
}
- resp->resp_status = hns3_resp_to_errno(req->msg[3]);
- memcpy(resp->additional_info, &req->msg[4],
+ resp->resp_status = hns3_resp_to_errno(req->msg.resp_status);
+ memcpy(resp->additional_info, &req->msg.resp_data,
HNS3_MBX_MAX_RESP_DATA_SIZE);
rte_io_wmb();
resp->received_match_resp = true;
@@ -305,8 +306,7 @@ static void
hns3_update_port_base_vlan_info(struct hns3_hw *hw,
struct hns3_mbx_pf_to_vf_cmd *req)
{
-#define PVID_STATE_OFFSET 1
- uint16_t new_pvid_state = req->msg[PVID_STATE_OFFSET] ?
+ uint16_t new_pvid_state = req->msg.pvid_state ?
HNS3_PORT_BASE_VLAN_ENABLE : HNS3_PORT_BASE_VLAN_DISABLE;
/*
* Currently, hardware doesn't support more than two layers VLAN offload
@@ -355,7 +355,7 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw)
while (next_to_use != tail) {
desc = &crq->desc[next_to_use];
req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data;
- opcode = req->msg[0] & 0xff;
+ opcode = req->msg.code & 0xff;
flag = rte_le_to_cpu_16(crq->desc[next_to_use].flag);
if (!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))
@@ -428,7 +428,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
desc = &crq->desc[crq->next_to_use];
req = (struct hns3_mbx_pf_to_vf_cmd *)desc->data;
- opcode = req->msg[0] & 0xff;
+ opcode = req->msg.code & 0xff;
flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag);
if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) {
@@ -484,7 +484,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
* hns3 PF kernel driver, VF driver will receive this
* mailbox message from PF driver.
*/
- hns3_handle_promisc_info(hw, req->msg[1]);
+ hns3_handle_promisc_info(hw, req->msg.promisc_en);
break;
default:
hns3_err(hw, "received unsupported(%u) mbx msg",
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 59fb73a..09780fc 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -118,6 +118,13 @@ struct hns3_mbx_vlan_filter {
uint16_t proto;
} __rte_packed;
+struct hns3_mbx_link_status {
+ uint16_t link_status;
+ uint32_t speed;
+ uint16_t duplex;
+ uint8_t flag;
+} __rte_packed;
+
#define HNS3_MBX_MSG_MAX_DATA_SIZE 14
#define HNS3_MBX_MAX_RING_CHAIN_PARAM_NUM 4
struct hns3_vf_to_pf_msg {
@@ -146,6 +153,22 @@ struct hns3_vf_to_pf_msg {
};
};
+struct hns3_pf_to_vf_msg {
+ uint16_t code;
+ union {
+ struct {
+ uint16_t vf_mbx_msg_code;
+ uint16_t vf_mbx_msg_subcode;
+ uint16_t resp_status;
+ uint8_t resp_data[HNS3_MBX_MAX_RESP_DATA_SIZE];
+ };
+ uint16_t promisc_en;
+ uint16_t reset_level;
+ uint16_t pvid_state;
+ uint8_t msg_data[HNS3_MBX_MSG_MAX_DATA_SIZE];
+ };
+};
+
struct errno_respcode_map {
uint16_t resp_code;
int err_no;
@@ -170,7 +193,7 @@ struct hns3_mbx_pf_to_vf_cmd {
uint8_t msg_len;
uint8_t rsv1;
uint16_t match_id;
- uint16_t msg[8];
+ struct hns3_pf_to_vf_msg msg;
};
struct hns3_pf_rst_done_cmd {
--
2.33.0

From a7731d0a954c84aaf347aef67a773ae1561e7cd5 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 14:55:07 +0800
Subject: [PATCH 23/30] net/hns3: refactor send mailbox function
[ upstream commit c9bd98d84587dbc0dddb8964ad3d7d54818aca01 ]
The 'hns3_send_mbx_msg' function has the following problems:
1. the name is vague and does not indicate the caller (VF);
2. it takes too many input parameters, because filling the
message is folded into the send command.
Therefore, a common interface is encapsulated to fill in
the mailbox message before sending it.
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 141 ++++++++++++++++++------------
drivers/net/hns3/hns3_mbx.c | 50 ++++-------
drivers/net/hns3/hns3_mbx.h | 8 +-
drivers/net/hns3/hns3_rxtx.c | 18 ++--
4 files changed, 116 insertions(+), 101 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 19e734c..b0d0c29 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -91,11 +91,13 @@ hns3vf_add_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
{
/* mac address was checked by upper level interface */
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
- HNS3_MBX_MAC_VLAN_UC_ADD, mac_addr->addr_bytes,
- RTE_ETHER_ADDR_LEN, false, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST,
+ HNS3_MBX_MAC_VLAN_UC_ADD);
+ memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret) {
hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
mac_addr);
@@ -110,12 +112,13 @@ hns3vf_remove_uc_mac_addr(struct hns3_hw *hw, struct rte_ether_addr *mac_addr)
{
/* mac address was checked by upper level interface */
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
- HNS3_MBX_MAC_VLAN_UC_REMOVE,
- mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN,
- false, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST,
+ HNS3_MBX_MAC_VLAN_UC_REMOVE);
+ memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret) {
hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
mac_addr);
@@ -134,6 +137,7 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
struct rte_ether_addr *old_addr;
uint8_t addr_bytes[HNS3_TWO_ETHER_ADDR_LEN]; /* for 2 MAC addresses */
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+ struct hns3_vf_to_pf_msg req;
int ret;
/*
@@ -146,9 +150,10 @@ hns3vf_set_default_mac_addr(struct rte_eth_dev *dev,
memcpy(&addr_bytes[RTE_ETHER_ADDR_LEN], old_addr->addr_bytes,
RTE_ETHER_ADDR_LEN);
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_UNICAST,
- HNS3_MBX_MAC_VLAN_UC_MODIFY, addr_bytes,
- HNS3_TWO_ETHER_ADDR_LEN, true, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_UNICAST,
+ HNS3_MBX_MAC_VLAN_UC_MODIFY);
+ memcpy(req.data, addr_bytes, HNS3_TWO_ETHER_ADDR_LEN);
+ ret = hns3vf_mbx_send(hw, &req, true, NULL, 0);
if (ret) {
/*
* The hns3 VF PMD depends on the hns3 PF kernel ethdev
@@ -185,12 +190,13 @@ hns3vf_add_mc_mac_addr(struct hns3_hw *hw,
struct rte_ether_addr *mac_addr)
{
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST,
- HNS3_MBX_MAC_VLAN_MC_ADD,
- mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false,
- NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST,
+ HNS3_MBX_MAC_VLAN_MC_ADD);
+ memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret) {
hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
mac_addr);
@@ -206,12 +212,13 @@ hns3vf_remove_mc_mac_addr(struct hns3_hw *hw,
struct rte_ether_addr *mac_addr)
{
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MULTICAST,
- HNS3_MBX_MAC_VLAN_MC_REMOVE,
- mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN, false,
- NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_MULTICAST,
+ HNS3_MBX_MAC_VLAN_MC_REMOVE);
+ memcpy(req.data, mac_addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret) {
hns3_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE,
mac_addr);
@@ -348,7 +355,6 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id,
bool mmap, enum hns3_ring_type queue_type,
uint16_t queue_id)
{
-#define HNS3_RING_VERCTOR_DATA_SIZE 14
struct hns3_vf_to_pf_msg req = {0};
const char *op_str;
int ret;
@@ -365,8 +371,7 @@ hns3vf_bind_ring_with_vector(struct hns3_hw *hw, uint16_t vector_id,
req.ring_param[0].ring_type = queue_type;
req.ring_param[0].tqp_index = queue_id;
op_str = mmap ? "Map" : "Unmap";
- ret = hns3_send_mbx_msg(hw, req.code, 0, (uint8_t *)&req.vector_id,
- HNS3_RING_VERCTOR_DATA_SIZE, false, NULL, 0);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret)
hns3_err(hw, "%s TQP %u fail, vector_id is %u, ret = %d.",
op_str, queue_id, req.vector_id, ret);
@@ -452,10 +457,12 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
static int
hns3vf_config_mtu(struct hns3_hw *hw, uint16_t mtu)
{
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_MTU, 0, (const uint8_t *)&mtu,
- sizeof(mtu), true, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_MTU, 0);
+ memcpy(req.data, &mtu, sizeof(mtu));
+ ret = hns3vf_mbx_send(hw, &req, true, NULL, 0);
if (ret)
hns3_err(hw, "Failed to set mtu (%u) for vf: %d", mtu, ret);
@@ -646,12 +653,13 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw)
uint16_t val = HNS3_PF_PUSH_LSC_CAP_NOT_SUPPORTED;
uint16_t exp = HNS3_PF_PUSH_LSC_CAP_UNKNOWN;
struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw);
+ struct hns3_vf_to_pf_msg req;
__atomic_store_n(&vf->pf_push_lsc_cap, HNS3_PF_PUSH_LSC_CAP_UNKNOWN,
__ATOMIC_RELEASE);
- (void)hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false,
- NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0);
+ (void)hns3vf_mbx_send(hw, &req, false, NULL, 0);
while (remain_ms > 0) {
rte_delay_ms(HNS3_POLL_RESPONE_MS);
@@ -746,12 +754,13 @@ hns3vf_check_tqp_info(struct hns3_hw *hw)
static int
hns3vf_get_port_base_vlan_filter_state(struct hns3_hw *hw)
{
+ struct hns3_vf_to_pf_msg req;
uint8_t resp_msg;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN,
- HNS3_MBX_GET_PORT_BASE_VLAN_STATE, NULL, 0,
- true, &resp_msg, sizeof(resp_msg));
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN,
+ HNS3_MBX_GET_PORT_BASE_VLAN_STATE);
+ ret = hns3vf_mbx_send(hw, &req, true, &resp_msg, sizeof(resp_msg));
if (ret) {
if (ret == -ETIME) {
/*
@@ -792,10 +801,12 @@ hns3vf_get_queue_info(struct hns3_hw *hw)
{
#define HNS3VF_TQPS_RSS_INFO_LEN 6
uint8_t resp_msg[HNS3VF_TQPS_RSS_INFO_LEN];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_QINFO, 0, NULL, 0, true,
- resp_msg, HNS3VF_TQPS_RSS_INFO_LEN);
+ hns3vf_mbx_setup(&req, HNS3_MBX_GET_QINFO, 0);
+ ret = hns3vf_mbx_send(hw, &req, true,
+ resp_msg, HNS3VF_TQPS_RSS_INFO_LEN);
if (ret) {
PMD_INIT_LOG(ERR, "Failed to get tqp info from PF: %d", ret);
return ret;
@@ -833,10 +844,11 @@ hns3vf_get_basic_info(struct hns3_hw *hw)
{
uint8_t resp_msg[HNS3_MBX_MAX_RESP_DATA_SIZE];
struct hns3_basic_info *basic_info;
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_BASIC_INFO, 0, NULL, 0,
- true, resp_msg, sizeof(resp_msg));
+ hns3vf_mbx_setup(&req, HNS3_MBX_GET_BASIC_INFO, 0);
+ ret = hns3vf_mbx_send(hw, &req, true, resp_msg, sizeof(resp_msg));
if (ret) {
hns3_err(hw, "failed to get basic info from PF, ret = %d.",
ret);
@@ -856,10 +868,11 @@ static int
hns3vf_get_host_mac_addr(struct hns3_hw *hw)
{
uint8_t host_mac[RTE_ETHER_ADDR_LEN];
+ struct hns3_vf_to_pf_msg req;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_MAC_ADDR, 0, NULL, 0,
- true, host_mac, RTE_ETHER_ADDR_LEN);
+ hns3vf_mbx_setup(&req, HNS3_MBX_GET_MAC_ADDR, 0);
+ ret = hns3vf_mbx_send(hw, &req, true, host_mac, RTE_ETHER_ADDR_LEN);
if (ret) {
hns3_err(hw, "Failed to get mac addr from PF: %d", ret);
return ret;
@@ -908,6 +921,7 @@ static void
hns3vf_request_link_info(struct hns3_hw *hw)
{
struct hns3_vf *vf = HNS3_DEV_HW_TO_VF(hw);
+ struct hns3_vf_to_pf_msg req;
bool send_req;
int ret;
@@ -919,8 +933,8 @@ hns3vf_request_link_info(struct hns3_hw *hw)
if (!send_req)
return;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_GET_LINK_STATUS, 0, NULL, 0, false,
- NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_GET_LINK_STATUS, 0);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret) {
hns3_err(hw, "failed to fetch link status, ret = %d", ret);
return;
@@ -964,16 +978,18 @@ hns3vf_update_link_status(struct hns3_hw *hw, uint8_t link_status,
static int
hns3vf_vlan_filter_configure(struct hns3_adapter *hns, uint16_t vlan_id, int on)
{
- struct hns3_mbx_vlan_filter vlan_filter = {0};
+ struct hns3_mbx_vlan_filter *vlan_filter;
+ struct hns3_vf_to_pf_msg req = {0};
struct hns3_hw *hw = &hns->hw;
- vlan_filter.is_kill = on ? 0 : 1;
- vlan_filter.proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN);
- vlan_filter.vlan_id = rte_cpu_to_le_16(vlan_id);
+ req.code = HNS3_MBX_SET_VLAN;
+ req.subcode = HNS3_MBX_VLAN_FILTER;
+ vlan_filter = (struct hns3_mbx_vlan_filter *)req.data;
+ vlan_filter->is_kill = on ? 0 : 1;
+ vlan_filter->proto = rte_cpu_to_le_16(RTE_ETHER_TYPE_VLAN);
+ vlan_filter->vlan_id = rte_cpu_to_le_16(vlan_id);
- return hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_FILTER,
- (uint8_t *)&vlan_filter, sizeof(vlan_filter),
- true, NULL, 0);
+ return hns3vf_mbx_send(hw, &req, true, NULL, 0);
}
static int
@@ -1002,6 +1018,7 @@ hns3vf_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
static int
hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable)
{
+ struct hns3_vf_to_pf_msg req;
uint8_t msg_data;
int ret;
@@ -1009,9 +1026,10 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable)
return 0;
msg_data = enable ? 1 : 0;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN,
- HNS3_MBX_ENABLE_VLAN_FILTER, &msg_data,
- sizeof(msg_data), true, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN,
+ HNS3_MBX_ENABLE_VLAN_FILTER);
+ memcpy(req.data, &msg_data, sizeof(msg_data));
+ ret = hns3vf_mbx_send(hw, &req, true, NULL, 0);
if (ret)
hns3_err(hw, "%s vlan filter failed, ret = %d.",
enable ? "enable" : "disable", ret);
@@ -1022,12 +1040,15 @@ hns3vf_en_vlan_filter(struct hns3_hw *hw, bool enable)
static int
hns3vf_en_hw_strip_rxvtag(struct hns3_hw *hw, bool enable)
{
+ struct hns3_vf_to_pf_msg req;
uint8_t msg_data;
int ret;
msg_data = enable ? 1 : 0;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_SET_VLAN, HNS3_MBX_VLAN_RX_OFF_CFG,
- &msg_data, sizeof(msg_data), false, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_VLAN,
+ HNS3_MBX_VLAN_RX_OFF_CFG);
+ memcpy(req.data, &msg_data, sizeof(msg_data));
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret)
hns3_err(hw, "vf %s strip failed, ret = %d.",
enable ? "enable" : "disable", ret);
@@ -1171,11 +1192,13 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
static int
hns3vf_set_alive(struct hns3_hw *hw, bool alive)
{
+ struct hns3_vf_to_pf_msg req;
uint8_t msg_data;
msg_data = alive ? 1 : 0;
- return hns3_send_mbx_msg(hw, HNS3_MBX_SET_ALIVE, 0, &msg_data,
- sizeof(msg_data), false, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_SET_ALIVE, 0);
+ memcpy(req.data, &msg_data, sizeof(msg_data));
+ return hns3vf_mbx_send(hw, &req, false, NULL, 0);
}
static void
@@ -1183,11 +1206,12 @@ hns3vf_keep_alive_handler(void *param)
{
struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
struct hns3_adapter *hns = eth_dev->data->dev_private;
+ struct hns3_vf_to_pf_msg req;
struct hns3_hw *hw = &hns->hw;
int ret;
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_KEEP_ALIVE, 0, NULL, 0,
- false, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_KEEP_ALIVE, 0);
+ ret = hns3vf_mbx_send(hw, &req, false, NULL, 0);
if (ret)
hns3_err(hw, "VF sends keeping alive cmd failed(=%d)",
ret);
@@ -1326,9 +1350,11 @@ hns3vf_init_hardware(struct hns3_adapter *hns)
static int
hns3vf_clear_vport_list(struct hns3_hw *hw)
{
- return hns3_send_mbx_msg(hw, HNS3_MBX_HANDLE_VF_TBL,
- HNS3_MBX_VPORT_LIST_CLEAR, NULL, 0, false,
- NULL, 0);
+ struct hns3_vf_to_pf_msg req;
+
+ hns3vf_mbx_setup(&req, HNS3_MBX_HANDLE_VF_TBL,
+ HNS3_MBX_VPORT_LIST_CLEAR);
+ return hns3vf_mbx_send(hw, &req, false, NULL, 0);
}
static int
@@ -1797,12 +1823,13 @@ hns3vf_wait_hardware_ready(struct hns3_adapter *hns)
static int
hns3vf_prepare_reset(struct hns3_adapter *hns)
{
+ struct hns3_vf_to_pf_msg req;
struct hns3_hw *hw = &hns->hw;
int ret;
if (hw->reset.level == HNS3_VF_FUNC_RESET) {
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_RESET, 0, NULL,
- 0, true, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_RESET, 0);
+ ret = hns3vf_mbx_send(hw, &req, true, NULL, 0);
if (ret)
return ret;
}
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index c90f5d5..43195ff 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -24,6 +24,14 @@ static const struct errno_respcode_map err_code_map[] = {
{95, -EOPNOTSUPP},
};
+void
+hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req, uint8_t code, uint8_t subcode)
+{
+ memset(req, 0, sizeof(struct hns3_vf_to_pf_msg));
+ req->code = code;
+ req->subcode = subcode;
+}
+
static int
hns3_resp_to_errno(uint16_t resp_code)
{
@@ -118,45 +126,24 @@ hns3_mbx_prepare_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode)
}
int
-hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
- const uint8_t *msg_data, uint8_t msg_len, bool need_resp,
- uint8_t *resp_data, uint16_t resp_len)
+hns3vf_mbx_send(struct hns3_hw *hw,
+ struct hns3_vf_to_pf_msg *req, bool need_resp,
+ uint8_t *resp_data, uint16_t resp_len)
{
- struct hns3_mbx_vf_to_pf_cmd *req;
+ struct hns3_mbx_vf_to_pf_cmd *cmd;
struct hns3_cmd_desc desc;
- bool is_ring_vector_msg;
int ret;
- req = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
-
- /* first two bytes are reserved for code & subcode */
- if (msg_len > HNS3_MBX_MSG_MAX_DATA_SIZE) {
- hns3_err(hw,
- "VF send mbx msg fail, msg len %u exceeds max payload len %d",
- msg_len, HNS3_MBX_MSG_MAX_DATA_SIZE);
- return -EINVAL;
- }
-
hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MBX_VF_TO_PF, false);
- req->msg.code = code;
- is_ring_vector_msg = (code == HNS3_MBX_MAP_RING_TO_VECTOR) ||
- (code == HNS3_MBX_UNMAP_RING_TO_VECTOR) ||
- (code == HNS3_MBX_GET_RING_VECTOR_MAP);
- if (!is_ring_vector_msg)
- req->msg.subcode = subcode;
- if (msg_data) {
- if (is_ring_vector_msg)
- memcpy(&req->msg.vector_id, msg_data, msg_len);
- else
- memcpy(&req->msg.data, msg_data, msg_len);
- }
+ cmd = (struct hns3_mbx_vf_to_pf_cmd *)desc.data;
+ cmd->msg = *req;
/* synchronous send */
if (need_resp) {
- req->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT;
+ cmd->mbx_need_resp |= HNS3_MBX_NEED_RESP_BIT;
rte_spinlock_lock(&hw->mbx_resp.lock);
- hns3_mbx_prepare_resp(hw, code, subcode);
- req->match_id = hw->mbx_resp.match_id;
+ hns3_mbx_prepare_resp(hw, req->code, req->subcode);
+ cmd->match_id = hw->mbx_resp.match_id;
ret = hns3_cmd_send(hw, &desc, 1);
if (ret) {
rte_spinlock_unlock(&hw->mbx_resp.lock);
@@ -165,7 +152,8 @@ hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
return ret;
}
- ret = hns3_get_mbx_resp(hw, code, subcode, resp_data, resp_len);
+ ret = hns3_get_mbx_resp(hw, req->code, req->subcode,
+ resp_data, resp_len);
rte_spinlock_unlock(&hw->mbx_resp.lock);
} else {
/* asynchronous send */
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 09780fc..2952b96 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -208,7 +208,9 @@ struct hns3_pf_rst_done_cmd {
struct hns3_hw;
void hns3_dev_handle_mbx_msg(struct hns3_hw *hw);
-int hns3_send_mbx_msg(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
- const uint8_t *msg_data, uint8_t msg_len, bool need_resp,
- uint8_t *resp_data, uint16_t resp_len);
+void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req,
+ uint8_t code, uint8_t subcode);
+int hns3vf_mbx_send(struct hns3_hw *hw,
+ struct hns3_vf_to_pf_msg *req_msg, bool need_resp,
+ uint8_t *resp_data, uint16_t resp_len);
#endif /* HNS3_MBX_H */
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 09b7e90..9087bcf 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -686,13 +686,12 @@ hns3pf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
static int
hns3vf_reset_tqp(struct hns3_hw *hw, uint16_t queue_id)
{
- uint8_t msg_data[2];
+ struct hns3_vf_to_pf_msg req;
int ret;
- memcpy(msg_data, &queue_id, sizeof(uint16_t));
-
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data,
- sizeof(msg_data), true, NULL, 0);
+ hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0);
+ memcpy(req.data, &queue_id, sizeof(uint16_t));
+ ret = hns3vf_mbx_send(hw, &req, true, NULL, 0);
if (ret)
hns3_err(hw, "fail to reset tqp, queue_id = %u, ret = %d.",
queue_id, ret);
@@ -769,15 +768,14 @@ static int
hns3vf_reset_all_tqps(struct hns3_hw *hw)
{
#define HNS3VF_RESET_ALL_TQP_DONE 1U
+ struct hns3_vf_to_pf_msg req;
uint8_t reset_status;
- uint8_t msg_data[2];
int ret;
uint16_t i;
- memset(msg_data, 0, sizeof(msg_data));
- ret = hns3_send_mbx_msg(hw, HNS3_MBX_QUEUE_RESET, 0, msg_data,
- sizeof(msg_data), true, &reset_status,
- sizeof(reset_status));
+ hns3vf_mbx_setup(&req, HNS3_MBX_QUEUE_RESET, 0);
+ ret = hns3vf_mbx_send(hw, &req, true,
+ &reset_status, sizeof(reset_status));
if (ret) {
hns3_err(hw, "fail to send rcb reset mbx, ret = %d.", ret);
return ret;
--
2.33.0
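The refactor above replaces the many-argument hns3_send_mbx_msg() with a two-step setup-then-send pattern. A minimal standalone sketch of that pattern follows, using a simplified stand-in for struct hns3_vf_to_pf_msg (the field layout and the mbx_setup name here are illustrative, not the real driver definitions):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for struct hns3_vf_to_pf_msg; the real layout
 * lives in drivers/net/hns3/hns3_mbx.h. */
struct vf_to_pf_msg {
	uint8_t code;
	uint8_t subcode;
	uint8_t data[14];
};

/* Sketch of hns3vf_mbx_setup(): zero the request and stamp code/subcode,
 * so callers no longer thread code/subcode/msg_data through the send call. */
static void mbx_setup(struct vf_to_pf_msg *req, uint8_t code, uint8_t subcode)
{
	memset(req, 0, sizeof(*req));
	req->code = code;
	req->subcode = subcode;
}
```

A caller such as hns3vf_reset_tqp() then copies its payload (e.g. the queue id) into req.data and hands the whole struct to the send routine, as the hunk above shows.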


@ -0,0 +1,188 @@
From f6f87e815c8bc4e287e8f37abfd969464658ea29 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 14:55:08 +0800
Subject: [PATCH 24/30] net/hns3: refactor handle mailbox function
[ upstream commit 277d522ae39f6c9daa38c5ad5d3b94f632f9cf49 ]
The mailbox messages of the PF and VF are processed in the
same function: both call one shared handler. This coupling is
excessive and bad for maintenance. Therefore, this patch
separates the interfaces that handle PF mailbox messages and
VF mailbox messages.
Fixes: 463e748964f5 ("net/hns3: support mailbox")
Fixes: 109e4dd1bd7a ("net/hns3: get link state change through mailbox")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 2 +-
drivers/net/hns3/hns3_ethdev_vf.c | 4 +-
drivers/net/hns3/hns3_mbx.c | 69 ++++++++++++++++++++++++-------
drivers/net/hns3/hns3_mbx.h | 3 +-
4 files changed, 58 insertions(+), 20 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index ae81368..bccd9db 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -380,7 +380,7 @@ hns3_interrupt_handler(void *param)
hns3_warn(hw, "received reset interrupt");
hns3_schedule_reset(hns);
} else if (event_cause == HNS3_VECTOR0_EVENT_MBX) {
- hns3_dev_handle_mbx_msg(hw);
+ hns3pf_handle_mbx_msg(hw);
} else if (event_cause != HNS3_VECTOR0_EVENT_PTP) {
hns3_warn(hw, "received unknown event: vector0_int_stat:0x%x "
"ras_int_stat:0x%x cmdq_int_stat:0x%x",
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index b0d0c29..f5a7a2b 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -618,7 +618,7 @@ hns3vf_interrupt_handler(void *param)
hns3_schedule_reset(hns);
break;
case HNS3VF_VECTOR0_EVENT_MBX:
- hns3_dev_handle_mbx_msg(hw);
+ hns3vf_handle_mbx_msg(hw);
break;
default:
break;
@@ -670,7 +670,7 @@ hns3vf_get_push_lsc_cap(struct hns3_hw *hw)
* driver has to actively handle the HNS3_MBX_LINK_STAT_CHANGE
* mailbox from PF driver to get this capability.
*/
- hns3_dev_handle_mbx_msg(hw);
+ hns3vf_handle_mbx_msg(hw);
if (__atomic_load_n(&vf->pf_push_lsc_cap, __ATOMIC_ACQUIRE) !=
HNS3_PF_PUSH_LSC_CAP_UNKNOWN)
break;
diff --git a/drivers/net/hns3/hns3_mbx.c b/drivers/net/hns3/hns3_mbx.c
index 43195ff..9cdbc16 100644
--- a/drivers/net/hns3/hns3_mbx.c
+++ b/drivers/net/hns3/hns3_mbx.c
@@ -78,7 +78,7 @@ hns3_get_mbx_resp(struct hns3_hw *hw, uint16_t code, uint16_t subcode,
return -EIO;
}
- hns3_dev_handle_mbx_msg(hw);
+ hns3vf_handle_mbx_msg(hw);
rte_delay_us(HNS3_WAIT_RESP_US);
if (hw->mbx_resp.received_match_resp)
@@ -372,9 +372,57 @@ hns3_handle_mbx_msg_out_intr(struct hns3_hw *hw)
}
void
-hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
+hns3pf_handle_mbx_msg(struct hns3_hw *hw)
+{
+ struct hns3_cmq_ring *crq = &hw->cmq.crq;
+ struct hns3_mbx_vf_to_pf_cmd *req;
+ struct hns3_cmd_desc *desc;
+ uint16_t flag;
+
+ rte_spinlock_lock(&hw->cmq.crq.lock);
+
+ while (!hns3_cmd_crq_empty(hw)) {
+ if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED)) {
+ rte_spinlock_unlock(&hw->cmq.crq.lock);
+ return;
+ }
+ desc = &crq->desc[crq->next_to_use];
+ req = (struct hns3_mbx_vf_to_pf_cmd *)desc->data;
+
+ flag = rte_le_to_cpu_16(crq->desc[crq->next_to_use].flag);
+ if (unlikely(!hns3_get_bit(flag, HNS3_CMDQ_RX_OUTVLD_B))) {
+ hns3_warn(hw,
+ "dropped invalid mailbox message, code = %u",
+ req->msg.code);
+
+ /* dropping/not processing this invalid message */
+ crq->desc[crq->next_to_use].flag = 0;
+ hns3_mbx_ring_ptr_move_crq(crq);
+ continue;
+ }
+
+ switch (req->msg.code) {
+ case HNS3_MBX_PUSH_LINK_STATUS:
+ hns3pf_handle_link_change_event(hw, req);
+ break;
+ default:
+ hns3_err(hw, "received unsupported(%u) mbx msg",
+ req->msg.code);
+ break;
+ }
+ crq->desc[crq->next_to_use].flag = 0;
+ hns3_mbx_ring_ptr_move_crq(crq);
+ }
+
+ /* Write back CMDQ_RQ header pointer, IMP need this pointer */
+ hns3_write_dev(hw, HNS3_CMDQ_RX_HEAD_REG, crq->next_to_use);
+
+ rte_spinlock_unlock(&hw->cmq.crq.lock);
+}
+
+void
+hns3vf_handle_mbx_msg(struct hns3_hw *hw)
{
- struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
struct hns3_cmq_ring *crq = &hw->cmq.crq;
struct hns3_mbx_pf_to_vf_cmd *req;
struct hns3_cmd_desc *desc;
@@ -385,7 +433,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
rte_spinlock_lock(&hw->cmq.crq.lock);
handle_out = (rte_eal_process_type() != RTE_PROC_PRIMARY ||
- !rte_thread_is_intr()) && hns->is_vf;
+ !rte_thread_is_intr());
if (handle_out) {
/*
* Currently, any threads in the primary and secondary processes
@@ -430,8 +478,7 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
continue;
}
- handle_out = hns->is_vf && desc->opcode == 0;
- if (handle_out) {
+ if (desc->opcode == 0) {
/* Message already processed by other thread */
crq->desc[crq->next_to_use].flag = 0;
hns3_mbx_ring_ptr_move_crq(crq);
@@ -448,16 +495,6 @@ hns3_dev_handle_mbx_msg(struct hns3_hw *hw)
case HNS3_MBX_ASSERTING_RESET:
hns3_handle_asserting_reset(hw, req);
break;
- case HNS3_MBX_PUSH_LINK_STATUS:
- /*
- * This message is reported by the firmware and is
- * reported in 'struct hns3_mbx_vf_to_pf_cmd' format.
- * Therefore, we should cast the req variable to
- * 'struct hns3_mbx_vf_to_pf_cmd' and then process it.
- */
- hns3pf_handle_link_change_event(hw,
- (struct hns3_mbx_vf_to_pf_cmd *)req);
- break;
case HNS3_MBX_PUSH_VLAN_INFO:
/*
* When the PVID configuration status of VF device is
diff --git a/drivers/net/hns3/hns3_mbx.h b/drivers/net/hns3/hns3_mbx.h
index 2952b96..2b6cb8f 100644
--- a/drivers/net/hns3/hns3_mbx.h
+++ b/drivers/net/hns3/hns3_mbx.h
@@ -207,7 +207,8 @@ struct hns3_pf_rst_done_cmd {
((crq)->next_to_use = ((crq)->next_to_use + 1) % (crq)->desc_num)
struct hns3_hw;
-void hns3_dev_handle_mbx_msg(struct hns3_hw *hw);
+void hns3pf_handle_mbx_msg(struct hns3_hw *hw);
+void hns3vf_handle_mbx_msg(struct hns3_hw *hw);
void hns3vf_mbx_setup(struct hns3_vf_to_pf_msg *req,
uint8_t code, uint8_t subcode);
int hns3vf_mbx_send(struct hns3_hw *hw,
--
2.33.0


@ -0,0 +1,114 @@
From a9364470e9198353e6ed8e28b0b86e6b2b7ce5b3 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 15:44:14 +0800
Subject: [PATCH 25/30] net/hns3: fix VF multiple count on one reset
[ upstream commit 072a07a9dcbd604b1983bf2cb266d3dd4dc89824 ]
There are two ways for the hns3 VF driver to learn about a reset
event, namely, the interrupt task and the periodic detection task.
For the latter, the real reset process is delayed by several
microseconds before it executes. Each task increases the reset
count by 1.
However, the periodic detection task may also detect a reset event A
after the interrupt task has already received that same event A. As
a result, the reset count is doubled.
So this patch adds a reset level comparison for the VF to avoid
counting the same reset multiple times.
Fixes: a5475d61fa34 ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev_vf.c | 44 ++++++++++++++++++++-----------
1 file changed, 29 insertions(+), 15 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index f5a7a2b..83d3d66 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -569,13 +569,8 @@ hns3vf_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
val = hns3_read_dev(hw, HNS3_VF_RST_ING);
hns3_write_dev(hw, HNS3_VF_RST_ING, val | HNS3_VF_RST_ING_BIT);
val = cmdq_stat_reg & ~BIT(HNS3_VECTOR0_RST_INT_B);
- if (clearval) {
- hw->reset.stats.global_cnt++;
- hns3_warn(hw, "Global reset detected, clear reset status");
- } else {
- hns3_schedule_delayed_reset(hns);
- hns3_warn(hw, "Global reset detected, don't clear reset status");
- }
+ hw->reset.stats.global_cnt++;
+ hns3_warn(hw, "Global reset detected, clear reset status");
ret = HNS3VF_VECTOR0_EVENT_RST;
goto out;
@@ -590,9 +585,9 @@ hns3vf_check_event_cause(struct hns3_adapter *hns, uint32_t *clearval)
val = 0;
ret = HNS3VF_VECTOR0_EVENT_OTHER;
+
out:
- if (clearval)
- *clearval = val;
+ *clearval = val;
return ret;
}
@@ -1731,11 +1726,25 @@ is_vf_reset_done(struct hns3_hw *hw)
return true;
}
+static enum hns3_reset_level
+hns3vf_detect_reset_event(struct hns3_hw *hw)
+{
+ enum hns3_reset_level reset = HNS3_NONE_RESET;
+ uint32_t cmdq_stat_reg;
+
+ cmdq_stat_reg = hns3_read_dev(hw, HNS3_VECTOR0_CMDQ_STAT_REG);
+ if (BIT(HNS3_VECTOR0_RST_INT_B) & cmdq_stat_reg)
+ reset = HNS3_VF_RESET;
+
+ return reset;
+}
+
bool
hns3vf_is_reset_pending(struct hns3_adapter *hns)
{
+ enum hns3_reset_level last_req;
struct hns3_hw *hw = &hns->hw;
- enum hns3_reset_level reset;
+ enum hns3_reset_level new_req;
/*
* According to the protocol of PCIe, FLR to a PF device resets the PF
@@ -1758,13 +1767,18 @@ hns3vf_is_reset_pending(struct hns3_adapter *hns)
if (rte_eal_process_type() != RTE_PROC_PRIMARY)
return false;
- hns3vf_check_event_cause(hns, NULL);
- reset = hns3vf_get_reset_level(hw, &hw->reset.pending);
- if (hw->reset.level != HNS3_NONE_RESET && reset != HNS3_NONE_RESET &&
- hw->reset.level < reset) {
- hns3_warn(hw, "High level reset %d is pending", reset);
+ new_req = hns3vf_detect_reset_event(hw);
+ if (new_req == HNS3_NONE_RESET)
+ return false;
+
+ last_req = hns3vf_get_reset_level(hw, &hw->reset.pending);
+ if (last_req == HNS3_NONE_RESET || last_req < new_req) {
+ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ hns3_schedule_delayed_reset(hns);
+ hns3_warn(hw, "High level reset detected, delay do reset");
return true;
}
+
return false;
}
--
2.33.0
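The comparison introduced above (and unified for the PF in the next two patches) reduces to one rule: schedule a delayed reset only when the newly detected level outranks everything already pending. A standalone sketch of that rule, with an illustrative level enum (the names and values below are assumptions that only mirror the hns3 ordering, where a higher value means a more severe reset):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative reset levels: higher value = more severe reset,
 * matching the relative ordering used by the hns3 driver. */
enum reset_level {
	NONE_RESET = 0,
	FUNC_RESET,
	VF_RESET,
	GLOBAL_RESET,
	IMP_RESET,
};

/* Sketch of the fixed check in hns3(vf)_is_reset_pending(): a delayed
 * reset is scheduled only when something new was detected and it is
 * more severe than any already-pending request. */
static bool should_schedule_delayed_reset(enum reset_level last_req,
					  enum reset_level new_req)
{
	if (new_req == NONE_RESET)
		return false; /* nothing newly detected */
	return last_req == NONE_RESET || last_req < new_req;
}
```

This is exactly why the double count disappears: a second detection of the same event compares equal to the pending level and is ignored.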


@ -0,0 +1,48 @@
From b58308dd555c3e8491c8fd7839c1af030b6ad6e1 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 15:44:15 +0800
Subject: [PATCH 26/30] net/hns3: fix disable command with firmware
[ upstream commit 66048270a49088f79c2a020bd28ee0a93c4a73ed ]
Commands should be disabled only when the reset handling
needs to be delayed. This patch fixes it.
Fixes: 5be38fc6c0fc ("net/hns3: fix multiple reset detected log")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index bccd9db..d626bdd 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5552,18 +5552,16 @@ hns3_detect_reset_event(struct hns3_hw *hw)
last_req = hns3_get_reset_level(hns, &hw->reset.pending);
vector0_intr_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
- if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state) {
- __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state)
new_req = HNS3_IMP_RESET;
- } else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state) {
- __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state)
new_req = HNS3_GLOBAL_RESET;
- }
if (new_req == HNS3_NONE_RESET)
return HNS3_NONE_RESET;
if (last_req == HNS3_NONE_RESET || last_req < new_req) {
+ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
hns3_schedule_delayed_reset(hns);
hns3_warn(hw, "High level reset detected, delay do reset");
}
--
2.33.0


@ -0,0 +1,78 @@
From 5fe966c13b5c37e1dbaed7afa7d0125de26068e4 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Fri, 8 Dec 2023 15:44:16 +0800
Subject: [PATCH 27/30] net/hns3: fix reset level comparison
[ upstream commit 1ceb5ad2dfdcc3e6658119d25253bced5be3cc32 ]
Currently, there are two problems in hns3vf_is_reset_pending():
1. When the newly detected reset level is not HNS3_NONE_RESET but the
last reset level is HNS3_NONE_RESET, this function returns false.
2. The comparison between last_req and new_req is inverted.
In addition, the reset level comparison in hns3_detect_reset_event()
is similar to the one in hns3vf_is_reset_pending(). So this patch
fixes the above problems and merges the reset level comparison logic.
Fixes: 5be38fc6c0fc ("net/hns3: fix multiple reset detected log")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index d626bdd..eafcf2c 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -5545,27 +5545,15 @@ is_pf_reset_done(struct hns3_hw *hw)
static enum hns3_reset_level
hns3_detect_reset_event(struct hns3_hw *hw)
{
- struct hns3_adapter *hns = HNS3_DEV_HW_TO_ADAPTER(hw);
enum hns3_reset_level new_req = HNS3_NONE_RESET;
- enum hns3_reset_level last_req;
uint32_t vector0_intr_state;
- last_req = hns3_get_reset_level(hns, &hw->reset.pending);
vector0_intr_state = hns3_read_dev(hw, HNS3_VECTOR0_OTHER_INT_STS_REG);
if (BIT(HNS3_VECTOR0_IMPRESET_INT_B) & vector0_intr_state)
new_req = HNS3_IMP_RESET;
else if (BIT(HNS3_VECTOR0_GLOBALRESET_INT_B) & vector0_intr_state)
new_req = HNS3_GLOBAL_RESET;
- if (new_req == HNS3_NONE_RESET)
- return HNS3_NONE_RESET;
-
- if (last_req == HNS3_NONE_RESET || last_req < new_req) {
- __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
- hns3_schedule_delayed_reset(hns);
- hns3_warn(hw, "High level reset detected, delay do reset");
- }
-
return new_req;
}
@@ -5584,10 +5572,14 @@ hns3_is_reset_pending(struct hns3_adapter *hns)
return false;
new_req = hns3_detect_reset_event(hw);
+ if (new_req == HNS3_NONE_RESET)
+ return false;
+
last_req = hns3_get_reset_level(hns, &hw->reset.pending);
- if (last_req != HNS3_NONE_RESET && new_req != HNS3_NONE_RESET &&
- new_req < last_req) {
- hns3_warn(hw, "High level reset %d is pending", last_req);
+ if (last_req == HNS3_NONE_RESET || last_req < new_req) {
+ __atomic_store_n(&hw->reset.disable_cmd, 1, __ATOMIC_RELAXED);
+ hns3_schedule_delayed_reset(hns);
+ hns3_warn(hw, "High level reset detected, delay do reset");
return true;
}
last_req = hns3_get_reset_level(hns, &hw->reset.request);
--
2.33.0


@ -0,0 +1,48 @@
From 28db014a776d5f3ca2d2e162c1cbab3ab874379c Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Thu, 28 Dec 2023 20:14:28 +0800
Subject: [PATCH 28/30] net/hns3: remove QinQ insert support for VF
[ upstream commit f6e79b8d3968150736499bc225762b62fbf1b768 ]
On the HIP08 platform, the PF driver notifies the VF driver to update
the PVID state [1], and the VF declares QinQ insert support when PVID
is disabled.
On later platforms (e.g. HIP09), the hardware has been improved, so
the PF driver does NOT notify the VF driver to update the PVID state.
However, these platforms still have a constraint: PVID and QinQ
insert cannot be enabled at the same time; otherwise, the hardware
discards packets and reports an error interrupt.
Also, as far as we know, users of the VF driver do not use QinQ
insert. Therefore, we declare that the VF driver does not support
QinQ insert.
[1] commit b4e4d7ac9f09 ("net/hns3: support setting VF PVID by PF driver")
Cc: stable@dpdk.org
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 8f224aa..28c26b0 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -85,7 +85,7 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
- if (!hw->port_base_vlan_cfg.state)
+ if (!hns->is_vf && !hw->port_base_vlan_cfg.state)
info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
if (hns3_dev_get_support(hw, OUTER_UDP_CKSUM))
--
2.33.0


@ -0,0 +1,90 @@
From ec982e4420625a346de01d46343a63f044dfb0e6 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Mon, 5 Feb 2024 16:35:21 +0800
Subject: [PATCH 29/30] net/hns3: support power monitor
[ upstream commit 9e1e7dded323ce424aebf992f6ddfa9656655631 ]
This commit adds support for power monitoring on the Rx queue
descriptor of the next poll.
Note: although rte_power_monitor() on the ARM platform does not
support a callback, this commit still implements the callback so
that it does not need to be adjusted once the ARM platform supports
callbacks.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_ethdev.c | 1 +
drivers/net/hns3/hns3_ethdev_vf.c | 1 +
drivers/net/hns3/hns3_rxtx.c | 21 +++++++++++++++++++++
drivers/net/hns3/hns3_rxtx.h | 1 +
4 files changed, 24 insertions(+)
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index eafcf2c..b10d121 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -6501,6 +6501,7 @@ static const struct eth_dev_ops hns3_eth_dev_ops = {
.eth_dev_priv_dump = hns3_eth_dev_priv_dump,
.eth_rx_descriptor_dump = hns3_rx_descriptor_dump,
.eth_tx_descriptor_dump = hns3_tx_descriptor_dump,
+ .get_monitor_addr = hns3_get_monitor_addr,
};
static const struct hns3_reset_ops hns3_reset_ops = {
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 83d3d66..4eeb46a 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -2209,6 +2209,7 @@ static const struct eth_dev_ops hns3vf_eth_dev_ops = {
.eth_dev_priv_dump = hns3_eth_dev_priv_dump,
.eth_rx_descriptor_dump = hns3_rx_descriptor_dump,
.eth_tx_descriptor_dump = hns3_tx_descriptor_dump,
+ .get_monitor_addr = hns3_get_monitor_addr,
};
static const struct hns3_reset_ops hns3vf_reset_ops = {
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 9087bcf..04ae8dc 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4883,3 +4883,24 @@ hns3_start_rxtx_datapath(struct rte_eth_dev *dev)
hns3_mp_req_start_rxtx(dev);
}
+
+static int
+hns3_monitor_callback(const uint64_t value,
+ const uint64_t arg[RTE_POWER_MONITOR_OPAQUE_SZ] __rte_unused)
+{
+ const uint64_t vld = rte_le_to_cpu_32(BIT(HNS3_RXD_VLD_B));
+ return (value & vld) == vld ? -1 : 0;
+}
+
+int
+hns3_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
+{
+ struct hns3_rx_queue *rxq = rx_queue;
+ struct hns3_desc *rxdp = &rxq->rx_ring[rxq->next_to_use];
+
+ pmc->addr = &rxdp->rx.bd_base_info;
+ pmc->fn = hns3_monitor_callback;
+ pmc->size = sizeof(uint32_t);
+
+ return 0;
+}
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index b6a6513..18dcc75 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -814,5 +814,6 @@ void hns3_stop_tx_datapath(struct rte_eth_dev *dev);
void hns3_start_tx_datapath(struct rte_eth_dev *dev);
void hns3_stop_rxtx_datapath(struct rte_eth_dev *dev);
void hns3_start_rxtx_datapath(struct rte_eth_dev *dev);
+int hns3_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
#endif /* HNS3_RXTX_H */
--
2.33.0
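The monitor callback added above encodes the rte_power_monitor() contract: the core sleeps until the watched address changes, then the callback returns non-zero to abort the wait once the descriptor's valid bit is set, or 0 to keep waiting. A self-contained sketch of that predicate (the bit position below is an illustrative assumption; the driver uses HNS3_RXD_VLD_B with little-endian conversion):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit position; the driver uses BIT(HNS3_RXD_VLD_B). */
#define RXD_VLD_BIT ((uint64_t)1 << 8)

/* Mirrors hns3_monitor_callback(): returning -1 tells the power-monitor
 * core the awaited condition is met (descriptor now valid, stop sleeping),
 * while 0 means the queue is still idle and the core may keep waiting. */
static int monitor_callback(uint64_t value)
{
	const uint64_t vld = RXD_VLD_BIT;

	return (value & vld) == vld ? -1 : 0;
}
```

This is the same shape other DPDK PMDs use for their descriptor done/valid bits, which is why the callback is implemented even where the platform cannot yet invoke it.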


@ -0,0 +1,84 @@
From 432000eaee7f2e5188e9517c408ab4bd348a3553 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Tue, 30 Jan 2024 09:32:49 +0800
Subject: [PATCH 30/30] app/testpmd: fix crash in multi-process forwarding
[ upstream commit b3a33138f317d1c651cd86f423cc703176eb7b07 ]
In a multi-process scenario, each process creates flows based on the
number of queues. When nb-cores is greater than 1, multiple cores may
use the same queue to forward packets, like:
dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4
--nb-cores=2 --num-procs=2 --proc-id=0
testpmd> start
mac packet forwarding - ports=1 - cores=2 - streams=4 - NUMA support
enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 2 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
After this commit, the result will be:
dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4
--nb-cores=2 --num-procs=2 --proc-id=0
testpmd> start
io packet forwarding - ports=1 - cores=2 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
Fixes: a550baf24af9 ("app/testpmd: support multi-process")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/config.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index cad7537..2c4dedd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -4794,7 +4794,6 @@ rss_fwd_config_setup(void)
queueid_t nb_q;
streamid_t sm_id;
int start;
- int end;
nb_q = nb_rxq;
if (nb_q > nb_txq)
@@ -4802,7 +4801,7 @@ rss_fwd_config_setup(void)
cur_fwd_config.nb_fwd_lcores = (lcoreid_t) nb_fwd_lcores;
cur_fwd_config.nb_fwd_ports = nb_fwd_ports;
cur_fwd_config.nb_fwd_streams =
- (streamid_t) (nb_q * cur_fwd_config.nb_fwd_ports);
+ (streamid_t) (nb_q / num_procs * cur_fwd_config.nb_fwd_ports);
if (cur_fwd_config.nb_fwd_streams < cur_fwd_config.nb_fwd_lcores)
cur_fwd_config.nb_fwd_lcores =
@@ -4824,7 +4823,6 @@ rss_fwd_config_setup(void)
* the 2~3 queue for secondary process.
*/
start = proc_id * nb_q / num_procs;
- end = start + nb_q / num_procs;
rxp = 0;
rxq = start;
for (sm_id = 0; sm_id < cur_fwd_config.nb_fwd_streams; sm_id++) {
@@ -4843,8 +4841,6 @@ rss_fwd_config_setup(void)
continue;
rxp = 0;
rxq++;
- if (rxq >= end)
- rxq = start;
}
}
--
2.33.0
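The fix above makes each process own a disjoint slice of the queues instead of every process wrapping over all of them. The slice arithmetic can be sketched on its own (function names below are ours, not testpmd's; they mirror `start = proc_id * nb_q / num_procs` from rss_fwd_config_setup()):

```c
#include <assert.h>

/* First queue owned by a given process after the fix. */
static unsigned int first_queue(unsigned int proc_id, unsigned int nb_q,
				unsigned int num_procs)
{
	return proc_id * nb_q / num_procs;
}

/* Number of queues each process forwards; streams no longer wrap back
 * past this range, so no queue is shared between processes. */
static unsigned int queues_per_proc(unsigned int nb_q, unsigned int num_procs)
{
	return nb_q / num_procs;
}
```

With --rxq=4 --txq=4 --num-procs=2 as in the commit message, process 0 forwards queues 0-1 and process 1 forwards queues 2-3, matching the "after" output above.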


@ -10,12 +10,13 @@
Name: dpdk
Version: 23.11
Release: 5
Release: 6
URL: http://dpdk.org
Source: https://fast.dpdk.org/rel/dpdk-%{version}.tar.xz
# upstream patch number start from 6000
# self developed patch number start from 9000
# upstream patch number is 6xxx
# self developed patch number is 9xxx
# The two types of patches are numbered together in file names
Patch9001: 0001-add-igb_uio.patch
Patch9002: 0002-dpdk-add-secure-compile-option-and-fPIC-option.patch
Patch9003: 0003-dpdk-bugfix-the-deadlock-in-rte_eal_init.patch
@ -29,6 +30,25 @@ Patch9010: 0010-example-l3fwd-masking-wrong-warning-array-subscript-.patch
Patch9011: 0011-dpdk-add-support-for-gazellle.patch
Patch9012: 0012-lstack-need-skip-rte_bus_probe-when-use-ltran-mode.patch
Patch6013: 0013-maintainers-update-for-DMA-device-performance-tool.patch
Patch6014: 0014-dmadev-add-telemetry-capability-for-m2d-auto-free.patch
Patch6015: 0015-dmadev-add-tracepoints-in-data-path-API.patch
Patch6016: 0016-eal-introduce-more-macros-for-bit-definition.patch
Patch6017: 0017-ring-add-telemetry-command-to-list-rings.patch
Patch6018: 0018-ring-add-telemetry-command-for-ring-info.patch
Patch6019: 0019-ethdev-get-RSS-hash-algorithm-by-name.patch
Patch6020: 0020-app-testpmd-set-RSS-hash-algorithm.patch
Patch6021: 0021-net-hns3-refactor-VF-mailbox-message-struct.patch
Patch6022: 0022-net-hns3-refactor-PF-mailbox-message-struct.patch
Patch6023: 0023-net-hns3-refactor-send-mailbox-function.patch
Patch6024: 0024-net-hns3-refactor-handle-mailbox-function.patch
Patch6025: 0025-net-hns3-fix-VF-multiple-count-on-one-reset.patch
Patch6026: 0026-net-hns3-fix-disable-command-with-firmware.patch
Patch6027: 0027-net-hns3-fix-reset-level-comparison.patch
Patch6028: 0028-net-hns3-remove-QinQ-insert-support-for-VF.patch
Patch6029: 0029-net-hns3-support-power-monitor.patch
Patch6030: 0030-app-testpmd-fix-crash-in-multi-process-forwarding.patch
BuildRequires: meson
BuildRequires: python3-pyelftools
BuildRequires: diffutils
@ -195,6 +215,28 @@ strip -g $RPM_BUILD_ROOT/lib/modules/%{kern_devel_ver}/extra/dpdk/igb_uio.ko
%endif
%changelog
* Tue Mar 5 2024 huangdengdui <huangdengdui@huawei.com> - 23.11-6
Sync some patches for hns3: mailbox refactoring, a new RSS API,
power monitor support and some bugfixes. The changes are as follows:
- app/testpmd: fix crash in multi-process forwarding
- net/hns3: support power monitor
- net/hns3: remove QinQ insert support for VF
- net/hns3: fix reset level comparison
- net/hns3: fix disable command with firmware
- net/hns3: fix VF multiple count on one reset
- net/hns3: refactor handle mailbox function
- net/hns3: refactor send mailbox function
- net/hns3: refactor PF mailbox message struct
- net/hns3: refactor VF mailbox message struct
- app/testpmd: set RSS hash algorithm
- ethdev: get RSS hash algorithm by name
- ring: add telemetry command for ring info
- ring: add telemetry command to list rings
- eal: introduce more macros for bit definition
- dmadev: add tracepoints in data path API
- dmadev: add telemetry capability for m2d auto free
- maintainers: update for DMA device performance tool
* Thu Jan 25 2024 shafeipaozi <sunbo.oerv@isrc.iscas.ac.cn> - 23.11-5
Add support to riscv64