backport patches from upstream:

backport-libbpf-Add-NULL-checks-to-bpf_object__prev_map,next_.patch
backport-libbpf-Apply-map_set_def_max_entries-for-inner_maps-.patch

Signed-off-by: zhang-mingyi66 <zhangmingyi5@huawei.com>
(cherry picked from commit 036cf1a87e3060b88baacc592c7e96a57f88fa5b)
Author: zhang-mingyi66, 2024-10-09 05:37:55 +08:00; committed by openeuler-sync-bot
parent de0fba45e0, commit 246f68069b
3 changed files with 110 additions and 1 deletion

File: backport-libbpf-Add-NULL-checks-to-bpf_object__prev_map,next_.patch (new)

@@ -0,0 +1,53 @@
From 1867490d8fc635c552569d51c48debff588d2191 Mon Sep 17 00:00:00 2001
From: Andreas Ziegler <ziegler.andreas@siemens.com>
Date: Wed, 3 Jul 2024 10:34:36 +0200
Subject: [PATCH] libbpf: Add NULL checks to bpf_object__{prev_map,next_map}

In the current state, an erroneous call to
bpf_object__find_map_by_name(NULL, ...) leads to a segmentation
fault through the following call chain:

  bpf_object__find_map_by_name(obj = NULL, ...)
  -> bpf_object__for_each_map(pos, obj = NULL)
  -> bpf_object__next_map((obj = NULL), NULL)
  -> return (obj = NULL)->maps

While calling bpf_object__find_map_by_name with obj = NULL is
obviously incorrect, this should not lead to a segmentation
fault but rather be handled gracefully.

As __bpf_map__iter already handles this situation correctly, we
can delegate the check for the regular case there and only add
a check in case the prev or next parameter is NULL.

Signed-off-by: Andreas Ziegler <ziegler.andreas@siemens.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20240703083436.505124-1-ziegler.andreas@siemens.com
---
src/libbpf.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/libbpf.c b/src/libbpf.c
index 4a28fac49..30f121754 100644
--- a/src/libbpf.c
+++ b/src/libbpf.c
@@ -10375,7 +10375,7 @@ __bpf_map__iter(const struct bpf_map *m, const struct bpf_object *obj, int i)
 struct bpf_map *
 bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
 {
-	if (prev == NULL)
+	if (prev == NULL && obj != NULL)
 		return obj->maps;
 
 	return __bpf_map__iter(prev, obj, 1);
@@ -10384,7 +10384,7 @@ bpf_object__next_map(const struct bpf_object *obj, const struct bpf_map *prev)
 struct bpf_map *
 bpf_object__prev_map(const struct bpf_object *obj, const struct bpf_map *next)
 {
-	if (next == NULL) {
+	if (next == NULL && obj != NULL) {
 		if (!obj->nr_maps)
 			return NULL;
 		return obj->maps + obj->nr_maps - 1;
--
2.33.0

File: backport-libbpf-Apply-map_set_def_max_entries-for-inner_maps-.patch (new)

@@ -0,0 +1,49 @@
From 89ca11a79bb93824e82897bdb48727b5d75e469a Mon Sep 17 00:00:00 2001
From: Andrey Grafin <conquistador@yandex-team.ru>
Date: Wed, 17 Jan 2024 16:06:18 +0300
Subject: [PATCH] libbpf: Apply map_set_def_max_entries() for inner_maps on
 creation

This patch allows to auto create BPF_MAP_TYPE_ARRAY_OF_MAPS and
BPF_MAP_TYPE_HASH_OF_MAPS with values of BPF_MAP_TYPE_PERF_EVENT_ARRAY
by bpf_object__load().

Previous behaviour created a zero filled btf_map_def for inner maps and
tried to use it for a map creation but the linux kernel forbids to create
a BPF_MAP_TYPE_PERF_EVENT_ARRAY map with max_entries=0.

Fixes: 646f02ffdd49 ("libbpf: Add BTF-defined map-in-map support")
Signed-off-by: Andrey Grafin <conquistador@yandex-team.ru>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Yonghong Song <yonghong.song@linux.dev>
Acked-by: Hou Tao <houtao1@huawei.com>
Link: https://lore.kernel.org/bpf/20240117130619.9403-1-conquistador@yandex-team.ru
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
---
src/libbpf.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/src/libbpf.c b/src/libbpf.c
index afd09571c..b8b00da62 100644
--- a/src/libbpf.c
+++ b/src/libbpf.c
@@ -70,6 +70,7 @@
 
 static struct bpf_map *bpf_object__add_map(struct bpf_object *obj);
 static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog);
+static int map_set_def_max_entries(struct bpf_map *map);
 
 static const char * const attach_type_name[] = {
 	[BPF_CGROUP_INET_INGRESS] = "cgroup_inet_ingress",
@@ -5172,6 +5173,9 @@ static int bpf_object__create_map(struct bpf_object *obj, struct bpf_map *map, b
 
 	if (bpf_map_type__is_map_in_map(def->type)) {
 		if (map->inner_map) {
+			err = map_set_def_max_entries(map->inner_map);
+			if (err)
+				return err;
 			err = bpf_object__create_map(obj, map->inner_map, true);
 			if (err) {
 				pr_warn("map '%s': failed to create inner map: %d\n",
--
2.33.0

File: the package spec file

@@ -4,7 +4,7 @@
 Name: %{githubname}
 Version: %{githubver}
-Release: 4
+Release: 5
 Summary: Libbpf library
 License: LGPLv2 or BSD
@@ -19,6 +19,8 @@ Patch0002: backport-libbpf-Set-close-on-exec-flag-on-gzopen.patch
 Patch0003: backport-libbpf-Fix-NULL-pointer-dereference-in_bpf_object__c.patch
 Patch0004: backport-libbpf-Free-btf_vmlinux-when-closing-bpf_object.patch
 Patch0005: backport-libbpf-Avoid-uninitialized-value-in-BPF_CORE_READ_BI.patch
+Patch0006: backport-libbpf-Add-NULL-checks-to-bpf_object__prev_map,next_.patch
+Patch0007: backport-libbpf-Apply-map_set_def_max_entries-for-inner_maps-.patch
 
 # This package supersedes libbpf from kernel-tools,
 # which has default Epoch: 0. By having Epoch: 1
@@ -71,6 +73,11 @@ developing applications that use %{name}
 %{_libdir}/libbpf.a
 
 %changelog
+* Wed Oct 09 2024 zhangmingyi <zhangmingyi5@huawei.com> 2:1.2.2-5
+- backport patch from upstream:
+  backport-libbpf-Add-NULL-checks-to-bpf_object__prev_map,next_.patch
+  backport-libbpf-Apply-map_set_def_max_entries-for-inner_maps-.patch
+
 * Wed Sep 25 2024 zhangmingyi <zhangmingyi5@huawei.com> 2:1.2.2-4
 - backport patch from upstream:
   backport-libbpf-Avoid-uninitialized-value-in-BPF_CORE_READ_BI.patch