Compare commits

...

10 Commits

- openeuler-ci-bot  65afb5d622  !86 [sync] PR-84: Fix CVE-2024-53868
  From: @openeuler-sync-bot  Reviewed-by: @caodongxia  Signed-off-by: @caodongxia
  2025-04-07 09:23:59 +00:00
- starlet-dx  1d5d229608  Fix CVE-2024-53868
  (cherry picked from commit 42bffd4c8f379fb5b0275b33a4075632a4bdf194)
  2025-04-07 16:41:30 +08:00
- openeuler-ci-bot  aa00d36d67  !80 [sync] PR-78: Fix CVE-2024-38311,CVE-2024-56195 and CVE-2024-56202
  From: @openeuler-sync-bot  Reviewed-by: @wang--ge  Signed-off-by: @wang--ge
  2025-03-07 06:08:11 +00:00
- starlet-dx  0af0d27de3  Fix CVE-2024-38311,CVE-2024-56195 and CVE-2024-56202
  (cherry picked from commit 1be79f85e0ecdd6927504eaa16b594d42a1beba1)
  2025-03-07 10:53:04 +08:00
- openeuler-ci-bot  2e7df68d17  !74 [sync] PR-73: Fix trafficserver service error
  From: @openeuler-sync-bot  Reviewed-by: @wang--ge  Signed-off-by: @wang--ge
  2024-12-03 08:36:41 +00:00
- starlet-dx  796a863a61  Fix trafficserver service error
  (cherry picked from commit 56e4a624809318b3d692df4d9da03f3d96b7b27a)
  2024-12-03 15:33:05 +08:00
- openeuler-ci-bot  522b143f4e  !71 [sync] PR-68: Fix CVE-2024-38479, CVE-2024-50306, CVE-2024-50305
  From: @openeuler-sync-bot  Reviewed-by: @wang--ge  Signed-off-by: @wang--ge
  2024-11-15 08:36:33 +00:00
- wk333  36a2d68845  Fix CVE-2024-38479, CVE-2024-50306, CVE-2024-50305
  (cherry picked from commit 4fc2a49a6bfa63e6cf9966dbb019c143fd74e3bd)
  2024-11-15 15:28:19 +08:00
- openeuler-ci-bot  6ce75dc39d  !67 [sync] PR-65: Update to 9.2.5 for fix CVE-2023-38522, CVE-2024-35161, CVE-2024-35296
  From: @openeuler-sync-bot  Reviewed-by: @wang--ge  Signed-off-by: @wang--ge
  2024-07-30 02:53:07 +00:00
- wk333  e88b359bba  Update to 9.2.5 for fix CVE-2023-38522, CVE-2024-35161, CVE-2024-35296
  (cherry picked from commit b9904e360244d1c3fb0311dbcaa7a80f2eba55b5)
  2024-07-30 09:33:00 +08:00
13 changed files with 3097 additions and 411 deletions


@@ -1,407 +0,0 @@
From b8c6a23b74af1772e5cb0de25b38c234a418cb1d Mon Sep 17 00:00:00 2001
From: Masakazu Kitajo <maskit@apache.org>
Date: Wed, 3 Apr 2024 09:31:37 -0600
Subject: [PATCH] proxy.config.http2.max_continuation_frames_per_minute
(#11206)
Origin: https://github.com/apache/trafficserver/commit/b8c6a23b74af1772e5cb0de25b38c234a418cb1d
This adds the ability to rate limite HTTP/2 CONTINUATION frames per
stream per minute.
Co-authored-by: Brian Neradt <brian.neradt@gmail.com>
---
doc/admin-guide/files/records.config.en.rst | 11 +++-
.../statistics/core/http-connection.en.rst | 11 +++-
iocore/net/P_SNIActionPerformer.h | 17 +++++
iocore/net/SSLSNIConfig.cc | 4 ++
iocore/net/TLSSNISupport.h | 1 +
iocore/net/YamlSNIConfig.cc | 4 ++
iocore/net/YamlSNIConfig.h | 2 +
mgmt/RecordsConfig.cc | 2 +
proxy/http2/HTTP2.cc | 66 ++++++++++---------
proxy/http2/HTTP2.h | 2 +
proxy/http2/Http2ConnectionState.cc | 36 ++++++++--
proxy/http2/Http2ConnectionState.h | 12 ++--
12 files changed, 126 insertions(+), 42 deletions(-)
diff --git a/doc/admin-guide/files/records.config.en.rst b/doc/admin-guide/files/records.config.en.rst
index f3df888708e..979c8bda2f4 100644
--- a/doc/admin-guide/files/records.config.en.rst
+++ b/doc/admin-guide/files/records.config.en.rst
@@ -4287,8 +4287,15 @@ HTTP/2 Configuration
-.. ts:cv:: CONFIG proxy.config.http2.max_rst_stream_frames_per_minute INT 14
+.. ts:cv:: CONFIG proxy.config.http2.max_rst_stream_frames_per_minute INT 200
:reloadable:
- Specifies how many RST_STREAM frames |TS| receives for a minute at maximum.
- Clients exceeded this limit will be immediately disconnected with an error
+ Specifies how many RST_STREAM frames |TS| receives per minute at maximum.
+ Clients exceeding this limit will be immediately disconnected with an error
+ code of ENHANCE_YOUR_CALM.
+
+.. ts:cv:: CONFIG proxy.config.http2.max_continuation_frames_per_minute INT 120
+ :reloadable:
+
+ Specifies how many CONTINUATION frames |TS| receives per minute at maximum.
+ Clients exceeding this limit will be immediately disconnected with an error
code of ENHANCE_YOUR_CALM.
.. ts:cv:: CONFIG proxy.config.http2.min_avg_window_update FLOAT 2560.0
diff --git a/doc/admin-guide/monitoring/statistics/core/http-connection.en.rst b/doc/admin-guide/monitoring/statistics/core/http-connection.en.rst
index b22da8e1c66..ee47a147c01 100644
--- a/doc/admin-guide/monitoring/statistics/core/http-connection.en.rst
+++ b/doc/admin-guide/monitoring/statistics/core/http-connection.en.rst
@@ -263,10 +263,17 @@ HTTP/2
.. ts:stat:: global proxy.process.http2.max_rst_stream_frames_per_minute_exceeded integer
:type: counter
- Represents the total number of closed HTTP/2 connections for exceeding the
- maximum allowed number of rst_stream frames per minute limit which is configured by
+ Represents the total number of HTTP/2 connections closed for exceeding the
+ maximum allowed number of ``RST_STREAM`` frames per minute limit which is configured by
:ts:cv:`proxy.config.http2.max_rst_stream_frames_per_minute`.
+.. ts:stat:: global proxy.process.http2.max_continuation_frames_per_minute_exceeded integer
+ :type: counter
+
+ Represents the total number of HTTP/2 connections closed for exceeding the
+ maximum allowed number of ``CONTINUATION`` frames per minute limit which is
+ configured by :ts:cv:`proxy.config.http2.max_continuation_frames_per_minute`.
+
.. ts:stat:: global proxy.process.http2.insufficient_avg_window_update integer
:type: counter
diff --git a/iocore/net/P_SNIActionPerformer.h b/iocore/net/P_SNIActionPerformer.h
index e223ac7d0ba..eebe44b75a1 100644
--- a/iocore/net/P_SNIActionPerformer.h
+++ b/iocore/net/P_SNIActionPerformer.h
@@ -186,6 +186,23 @@ class HTTP2MaxRstStreamFramesPerMinute : public ActionItem
int value = -1;
};
+class HTTP2MaxContinuationFramesPerMinute : public ActionItem
+{
+public:
+ HTTP2MaxContinuationFramesPerMinute(int value) : value(value) {}
+ ~HTTP2MaxContinuationFramesPerMinute() override {}
+
+ int
+ SNIAction(TLSSNISupport *snis, const Context &ctx) const override
+ {
+ snis->hints_from_sni.http2_max_continuation_frames_per_minute = value;
+ return SSL_TLSEXT_ERR_OK;
+ }
+
+private:
+ int value = -1;
+};
+
class TunnelDestination : public ActionItem
{
public:
diff --git a/iocore/net/SSLSNIConfig.cc b/iocore/net/SSLSNIConfig.cc
index a7071013f6a..942e6c420f0 100644
--- a/iocore/net/SSLSNIConfig.cc
+++ b/iocore/net/SSLSNIConfig.cc
@@ -151,6 +151,10 @@ SNIConfigParams::load_sni_config()
ai->actions.push_back(
std::make_unique<HTTP2MaxRstStreamFramesPerMinute>(item.http2_max_rst_stream_frames_per_minute.value()));
}
+ if (item.http2_max_continuation_frames_per_minute.has_value()) {
+ ai->actions.push_back(
+ std::make_unique<HTTP2MaxContinuationFramesPerMinute>(item.http2_max_continuation_frames_per_minute.value()));
+ }
ai->actions.push_back(std::make_unique<SNI_IpAllow>(item.ip_allow, item.fqdn));
diff --git a/iocore/net/TLSSNISupport.h b/iocore/net/TLSSNISupport.h
index ba2d13e9300..e8614ffa9b8 100644
--- a/iocore/net/TLSSNISupport.h
+++ b/iocore/net/TLSSNISupport.h
@@ -56,6 +56,7 @@ class TLSSNISupport
std::optional<uint32_t> http2_max_ping_frames_per_minute;
std::optional<uint32_t> http2_max_priority_frames_per_minute;
std::optional<uint32_t> http2_max_rst_stream_frames_per_minute;
+ std::optional<uint32_t> http2_max_continuation_frames_per_minute;
} hints_from_sni;
protected:
diff --git a/iocore/net/YamlSNIConfig.cc b/iocore/net/YamlSNIConfig.cc
index 9a777b806f2..7286197c9c7 100644
--- a/iocore/net/YamlSNIConfig.cc
+++ b/iocore/net/YamlSNIConfig.cc
@@ -148,6 +148,7 @@ std::set<std::string> valid_sni_config_keys = {TS_fqdn,
TS_http2_max_ping_frames_per_minute,
TS_http2_max_priority_frames_per_minute,
TS_http2_max_rst_stream_frames_per_minute,
+ TS_http2_max_continuation_frames_per_minute,
TS_ip_allow,
#if TS_USE_HELLO_CB || defined(OPENSSL_IS_BORINGSSL)
TS_valid_tls_versions_in,
@@ -193,6 +194,9 @@ template <> struct convert<YamlSNIConfig::Item> {
if (node[TS_http2_max_rst_stream_frames_per_minute]) {
item.http2_max_rst_stream_frames_per_minute = node[TS_http2_max_rst_stream_frames_per_minute].as<int>();
}
+ if (node[TS_http2_max_continuation_frames_per_minute]) {
+ item.http2_max_continuation_frames_per_minute = node[TS_http2_max_continuation_frames_per_minute].as<int>();
+ }
// enum
if (node[TS_verify_client]) {
diff --git a/iocore/net/YamlSNIConfig.h b/iocore/net/YamlSNIConfig.h
index b297bd5c16e..8165dc336c5 100644
--- a/iocore/net/YamlSNIConfig.h
+++ b/iocore/net/YamlSNIConfig.h
@@ -60,6 +60,7 @@ TSDECL(http2_max_settings_frames_per_minute);
TSDECL(http2_max_ping_frames_per_minute);
TSDECL(http2_max_priority_frames_per_minute);
TSDECL(http2_max_rst_stream_frames_per_minute);
+TSDECL(http2_max_continuation_frames_per_minute);
TSDECL(host_sni_policy);
#undef TSDECL
@@ -94,6 +95,7 @@ struct YamlSNIConfig {
std::optional<int> http2_max_ping_frames_per_minute;
std::optional<int> http2_max_priority_frames_per_minute;
std::optional<int> http2_max_rst_stream_frames_per_minute;
+ std::optional<int> http2_max_continuation_frames_per_minute;
bool tunnel_prewarm_srv = false;
uint32_t tunnel_prewarm_min = 0;
diff --git a/mgmt/RecordsConfig.cc b/mgmt/RecordsConfig.cc
index b63e0523c2b..a3752ea8359 100644
--- a/mgmt/RecordsConfig.cc
+++ b/mgmt/RecordsConfig.cc
@@ -1395,6 +1395,8 @@ static const RecordElement RecordsConfig[] =
,
{RECT_CONFIG, "proxy.config.http2.max_rst_stream_frames_per_minute", RECD_INT, "200", RECU_DYNAMIC, RR_NULL, RECC_STR, "^[0-9]+$", RECA_NULL}
,
+ {RECT_CONFIG, "proxy.config.http2.max_continuation_frames_per_minute", RECD_INT, "120", RECU_DYNAMIC, RR_NULL, RECC_STR, "^[0-9]+$", RECA_NULL}
+ ,
{RECT_CONFIG, "proxy.config.http2.min_avg_window_update", RECD_FLOAT, "2560.0", RECU_DYNAMIC, RR_NULL, RECC_NULL, nullptr, RECA_NULL}
,
{RECT_CONFIG, "proxy.config.http2.header_table_size_limit", RECD_INT, "65536", RECU_DYNAMIC, RR_NULL, RECC_STR, "^[0-9]+$", RECA_NULL}
diff --git a/proxy/http2/HTTP2.cc b/proxy/http2/HTTP2.cc
index 04813d2212b..a3a5a0ac781 100644
--- a/proxy/http2/HTTP2.cc
+++ b/proxy/http2/HTTP2.cc
@@ -85,6 +85,8 @@ static const char *const HTTP2_STAT_MAX_PRIORITY_FRAMES_PER_MINUTE_EXCEEDED_NAME
"proxy.process.http2.max_priority_frames_per_minute_exceeded";
static const char *const HTTP2_STAT_MAX_RST_STREAM_FRAMES_PER_MINUTE_EXCEEDED_NAME =
"proxy.process.http2.max_rst_stream_frames_per_minute_exceeded";
+static const char *const HTTP2_STAT_MAX_CONTINUATION_FRAMES_PER_MINUTE_EXCEEDED_NAME =
+ "proxy.process.http2.max_continuation_frames_per_minute_exceeded";
static const char *const HTTP2_STAT_INSUFFICIENT_AVG_WINDOW_UPDATE_NAME = "proxy.process.http2.insufficient_avg_window_update";
static const char *const HTTP2_STAT_MAX_CONCURRENT_STREAMS_EXCEEDED_IN_NAME =
"proxy.process.http2.max_concurrent_streams_exceeded_in";
@@ -798,36 +800,37 @@ http2_decode_header_blocks(HTTPHdr *hdr, const uint8_t *buf_start, const uint32_
}
// Initialize this subsystem with librecords configs (for now)
-uint32_t Http2::max_concurrent_streams_in = 100;
-uint32_t Http2::min_concurrent_streams_in = 10;
-uint32_t Http2::max_active_streams_in = 0;
-bool Http2::throttling = false;
-uint32_t Http2::stream_priority_enabled = 0;
-uint32_t Http2::initial_window_size = 65535;
-uint32_t Http2::max_frame_size = 16384;
-uint32_t Http2::header_table_size = 4096;
-uint32_t Http2::max_header_list_size = 4294967295;
-uint32_t Http2::accept_no_activity_timeout = 120;
-uint32_t Http2::no_activity_timeout_in = 120;
-uint32_t Http2::active_timeout_in = 0;
-uint32_t Http2::push_diary_size = 256;
-uint32_t Http2::zombie_timeout_in = 0;
-float Http2::stream_error_rate_threshold = 0.1;
-uint32_t Http2::stream_error_sampling_threshold = 10;
-uint32_t Http2::max_settings_per_frame = 7;
-uint32_t Http2::max_settings_per_minute = 14;
-uint32_t Http2::max_settings_frames_per_minute = 14;
-uint32_t Http2::max_ping_frames_per_minute = 60;
-uint32_t Http2::max_priority_frames_per_minute = 120;
-uint32_t Http2::max_rst_stream_frames_per_minute = 200;
-float Http2::min_avg_window_update = 2560.0;
-uint32_t Http2::con_slow_log_threshold = 0;
-uint32_t Http2::stream_slow_log_threshold = 0;
-uint32_t Http2::header_table_size_limit = 65536;
-uint32_t Http2::write_buffer_block_size = 262144;
-float Http2::write_size_threshold = 0.5;
-uint32_t Http2::write_time_threshold = 100;
-uint32_t Http2::buffer_water_mark = 0;
+uint32_t Http2::max_concurrent_streams_in = 100;
+uint32_t Http2::min_concurrent_streams_in = 10;
+uint32_t Http2::max_active_streams_in = 0;
+bool Http2::throttling = false;
+uint32_t Http2::stream_priority_enabled = 0;
+uint32_t Http2::initial_window_size = 65535;
+uint32_t Http2::max_frame_size = 16384;
+uint32_t Http2::header_table_size = 4096;
+uint32_t Http2::max_header_list_size = 4294967295;
+uint32_t Http2::accept_no_activity_timeout = 120;
+uint32_t Http2::no_activity_timeout_in = 120;
+uint32_t Http2::active_timeout_in = 0;
+uint32_t Http2::push_diary_size = 256;
+uint32_t Http2::zombie_timeout_in = 0;
+float Http2::stream_error_rate_threshold = 0.1;
+uint32_t Http2::stream_error_sampling_threshold = 10;
+uint32_t Http2::max_settings_per_frame = 7;
+uint32_t Http2::max_settings_per_minute = 14;
+uint32_t Http2::max_settings_frames_per_minute = 14;
+uint32_t Http2::max_ping_frames_per_minute = 60;
+uint32_t Http2::max_priority_frames_per_minute = 120;
+uint32_t Http2::max_rst_stream_frames_per_minute = 200;
+uint32_t Http2::max_continuation_frames_per_minute = 120;
+float Http2::min_avg_window_update = 2560.0;
+uint32_t Http2::con_slow_log_threshold = 0;
+uint32_t Http2::stream_slow_log_threshold = 0;
+uint32_t Http2::header_table_size_limit = 65536;
+uint32_t Http2::write_buffer_block_size = 262144;
+float Http2::write_size_threshold = 0.5;
+uint32_t Http2::write_time_threshold = 100;
+uint32_t Http2::buffer_water_mark = 0;
void
Http2::init()
@@ -853,6 +856,7 @@ Http2::init()
REC_EstablishStaticConfigInt32U(max_ping_frames_per_minute, "proxy.config.http2.max_ping_frames_per_minute");
REC_EstablishStaticConfigInt32U(max_priority_frames_per_minute, "proxy.config.http2.max_priority_frames_per_minute");
REC_EstablishStaticConfigInt32U(max_rst_stream_frames_per_minute, "proxy.config.http2.max_rst_stream_frames_per_minute");
+ REC_EstablishStaticConfigInt32U(max_continuation_frames_per_minute, "proxy.config.http2.max_continuation_frames_per_minute");
REC_EstablishStaticConfigFloat(min_avg_window_update, "proxy.config.http2.min_avg_window_update");
REC_EstablishStaticConfigInt32U(con_slow_log_threshold, "proxy.config.http2.connection.slow.log.threshold");
REC_EstablishStaticConfigInt32U(stream_slow_log_threshold, "proxy.config.http2.stream.slow.log.threshold");
@@ -923,6 +927,8 @@ Http2::init()
static_cast<int>(HTTP2_STAT_MAX_PRIORITY_FRAMES_PER_MINUTE_EXCEEDED), RecRawStatSyncSum);
RecRegisterRawStat(http2_rsb, RECT_PROCESS, HTTP2_STAT_MAX_RST_STREAM_FRAMES_PER_MINUTE_EXCEEDED_NAME, RECD_INT, RECP_PERSISTENT,
static_cast<int>(HTTP2_STAT_MAX_RST_STREAM_FRAMES_PER_MINUTE_EXCEEDED), RecRawStatSyncSum);
+ RecRegisterRawStat(http2_rsb, RECT_PROCESS, HTTP2_STAT_MAX_CONTINUATION_FRAMES_PER_MINUTE_EXCEEDED_NAME, RECD_INT,
+ RECP_PERSISTENT, static_cast<int>(HTTP2_STAT_MAX_CONTINUATION_FRAMES_PER_MINUTE_EXCEEDED), RecRawStatSyncSum);
RecRegisterRawStat(http2_rsb, RECT_PROCESS, HTTP2_STAT_INSUFFICIENT_AVG_WINDOW_UPDATE_NAME, RECD_INT, RECP_PERSISTENT,
static_cast<int>(HTTP2_STAT_INSUFFICIENT_AVG_WINDOW_UPDATE), RecRawStatSyncSum);
RecRegisterRawStat(http2_rsb, RECT_PROCESS, HTTP2_STAT_MAX_CONCURRENT_STREAMS_EXCEEDED_IN_NAME, RECD_INT, RECP_PERSISTENT,
diff --git a/proxy/http2/HTTP2.h b/proxy/http2/HTTP2.h
index 5847865a9a4..857b199c05d 100644
--- a/proxy/http2/HTTP2.h
+++ b/proxy/http2/HTTP2.h
@@ -105,6 +105,7 @@ enum {
HTTP2_STAT_MAX_PING_FRAMES_PER_MINUTE_EXCEEDED,
HTTP2_STAT_MAX_PRIORITY_FRAMES_PER_MINUTE_EXCEEDED,
HTTP2_STAT_MAX_RST_STREAM_FRAMES_PER_MINUTE_EXCEEDED,
+ HTTP2_STAT_MAX_CONTINUATION_FRAMES_PER_MINUTE_EXCEEDED,
HTTP2_STAT_INSUFFICIENT_AVG_WINDOW_UPDATE,
HTTP2_STAT_MAX_CONCURRENT_STREAMS_EXCEEDED_IN,
HTTP2_STAT_MAX_CONCURRENT_STREAMS_EXCEEDED_OUT,
@@ -404,6 +405,7 @@ class Http2
static uint32_t max_ping_frames_per_minute;
static uint32_t max_priority_frames_per_minute;
static uint32_t max_rst_stream_frames_per_minute;
+ static uint32_t max_continuation_frames_per_minute;
static float min_avg_window_update;
static uint32_t con_slow_log_threshold;
static uint32_t stream_slow_log_threshold;
diff --git a/proxy/http2/Http2ConnectionState.cc b/proxy/http2/Http2ConnectionState.cc
index b36e5c11793..b089048eb1d 100644
--- a/proxy/http2/Http2ConnectionState.cc
+++ b/proxy/http2/Http2ConnectionState.cc
@@ -924,6 +924,18 @@ rcv_continuation_frame(Http2ConnectionState &cstate, const Http2Frame &frame)
}
}
+ // Update CONTINUATION frame count per minute.
+ cstate.increment_received_continuation_frame_count();
+ // Close this connection if its CONTINUATION frame count exceeds a limit.
+ if (cstate.configured_max_continuation_frames_per_minute != 0 &&
+ cstate.get_received_continuation_frame_count() > cstate.configured_max_continuation_frames_per_minute) {
+ HTTP2_INCREMENT_THREAD_DYN_STAT(HTTP2_STAT_MAX_CONTINUATION_FRAMES_PER_MINUTE_EXCEEDED, this_ethread());
+ Http2StreamDebug(cstate.session, stream_id, "Observed too frequent CONTINUATION frames: %u frames within a last minute",
+ cstate.get_received_continuation_frame_count());
+ return Http2Error(Http2ErrorClass::HTTP2_ERROR_CLASS_CONNECTION, Http2ErrorCode::HTTP2_ERROR_ENHANCE_YOUR_CALM,
+ "reset too frequent CONTINUATION frames");
+ }
+
uint32_t header_blocks_offset = stream->header_blocks_length;
stream->header_blocks_length += payload_length;
@@ -1088,10 +1100,11 @@ Http2ConnectionState::init(Http2CommonSession *ssn)
dependency_tree = new DependencyTree(Http2::max_concurrent_streams_in);
}
- configured_max_settings_frames_per_minute = Http2::max_settings_frames_per_minute;
- configured_max_ping_frames_per_minute = Http2::max_ping_frames_per_minute;
- configured_max_priority_frames_per_minute = Http2::max_priority_frames_per_minute;
- configured_max_rst_stream_frames_per_minute = Http2::max_rst_stream_frames_per_minute;
+ configured_max_settings_frames_per_minute = Http2::max_settings_frames_per_minute;
+ configured_max_ping_frames_per_minute = Http2::max_ping_frames_per_minute;
+ configured_max_priority_frames_per_minute = Http2::max_priority_frames_per_minute;
+ configured_max_rst_stream_frames_per_minute = Http2::max_rst_stream_frames_per_minute;
+ configured_max_continuation_frames_per_minute = Http2::max_continuation_frames_per_minute;
if (auto snis = dynamic_cast<TLSSNISupport *>(session->get_netvc()); snis) {
if (snis->hints_from_sni.http2_max_settings_frames_per_minute.has_value()) {
configured_max_settings_frames_per_minute = snis->hints_from_sni.http2_max_settings_frames_per_minute.value();
@@ -1105,6 +1118,9 @@ Http2ConnectionState::init(Http2CommonSession *ssn)
if (snis->hints_from_sni.http2_max_rst_stream_frames_per_minute.has_value()) {
configured_max_rst_stream_frames_per_minute = snis->hints_from_sni.http2_max_rst_stream_frames_per_minute.value();
}
+ if (snis->hints_from_sni.http2_max_continuation_frames_per_minute.has_value()) {
+ configured_max_continuation_frames_per_minute = snis->hints_from_sni.http2_max_continuation_frames_per_minute.value();
+ }
}
_cop = ActivityCop<Http2Stream>(this->mutex, &stream_list, 1);
@@ -2140,6 +2156,18 @@ Http2ConnectionState::get_received_rst_stream_frame_count()
return this->_received_rst_stream_frame_counter.get_count();
}
+void
+Http2ConnectionState::increment_received_continuation_frame_count()
+{
+ this->_received_continuation_frame_counter.increment();
+}
+
+uint32_t
+Http2ConnectionState::get_received_continuation_frame_count()
+{
+ return this->_received_continuation_frame_counter.get_count();
+}
+
// Return min_concurrent_streams_in when current client streams number is larger than max_active_streams_in.
// Main purpose of this is preventing DDoS Attacks.
unsigned
diff --git a/proxy/http2/Http2ConnectionState.h b/proxy/http2/Http2ConnectionState.h
index 76d2e2a8e17..fff7763f2a1 100644
--- a/proxy/http2/Http2ConnectionState.h
+++ b/proxy/http2/Http2ConnectionState.h
@@ -102,10 +102,11 @@ class Http2ConnectionState : public Continuation
Http2ConnectionSettings server_settings;
Http2ConnectionSettings client_settings;
- uint32_t configured_max_settings_frames_per_minute = 0;
- uint32_t configured_max_ping_frames_per_minute = 0;
- uint32_t configured_max_priority_frames_per_minute = 0;
- uint32_t configured_max_rst_stream_frames_per_minute = 0;
+ uint32_t configured_max_settings_frames_per_minute = 0;
+ uint32_t configured_max_ping_frames_per_minute = 0;
+ uint32_t configured_max_priority_frames_per_minute = 0;
+ uint32_t configured_max_rst_stream_frames_per_minute = 0;
+ uint32_t configured_max_continuation_frames_per_minute = 0;
void init(Http2CommonSession *ssn);
void send_connection_preface();
@@ -174,6 +175,8 @@ class Http2ConnectionState : public Continuation
uint32_t get_received_priority_frame_count();
void increment_received_rst_stream_frame_count();
uint32_t get_received_rst_stream_frame_count();
+ void increment_received_continuation_frame_count();
+ uint32_t get_received_continuation_frame_count();
ssize_t client_rwnd() const;
Http2ErrorCode increment_client_rwnd(size_t amount);
@@ -220,6 +223,7 @@ class Http2ConnectionState : public Continuation
Http2FrequencyCounter _received_ping_frame_counter;
Http2FrequencyCounter _received_priority_frame_counter;
Http2FrequencyCounter _received_rst_stream_frame_counter;
+ Http2FrequencyCounter _received_continuation_frame_counter;
// NOTE: Id of stream which MUST receive CONTINUATION frame.
// - [RFC 7540] 6.2 HEADERS
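
The mechanism this patch wires up — count CONTINUATION frames over a one-minute window and close the connection with ENHANCE_YOUR_CALM once a configured ceiling is exceeded — can be sketched outside ATS. This is a minimal illustration under stated assumptions; `FrameRateLimiter` and its methods are hypothetical names, not the `Http2FrequencyCounter` API used above.

```cpp
#include <cassert>
#include <cstdint>

// Minimal sketch of a per-minute frame counter in the spirit of the patch.
// Hypothetical names; the real implementation is Http2FrequencyCounter.
class FrameRateLimiter {
public:
  explicit FrameRateLimiter(uint32_t max_per_minute) : limit_(max_per_minute) {}

  // Record one received frame at `now_sec` (a monotonic seconds clock).
  // Returns false when the per-minute budget is exhausted and the connection
  // should be closed with ENHANCE_YOUR_CALM.
  bool on_frame(uint64_t now_sec) {
    if (now_sec - window_start_ >= 60) { // a new minute: reset the bucket
      window_start_ = now_sec;
      count_ = 0;
    }
    ++count_;
    return limit_ == 0 || count_ <= limit_; // 0 means "no limit"
  }

private:
  uint32_t limit_;
  uint32_t count_ = 0;
  uint64_t window_start_ = 0;
};
```

The `limit_ == 0` escape mirrors the `configured_max_continuation_frames_per_minute != 0` guard in `rcv_continuation_frame` above, where a zero setting disables the check.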


@@ -0,0 +1,168 @@
From d3fd9ac0380099de6bb1fb973234aa278000aecc Mon Sep 17 00:00:00 2001
From: Masakazu Kitajo <maskit@apache.org>
Date: Wed, 15 Jan 2025 11:10:36 -0700
Subject: [PATCH] Do not allow extra CRs in chunks (#11936) (#11942)
* Do not allow extra CRs in chunks (#11936)
* Do not allow extra CRs in chunks
* Renumber test uuid
* Add test cases and fix an oversight
* Use prefix increment
(cherry picked from commit f5f2256c00abbfd02c22fbae3937da1c7bd8a34f)
* Fix test case
---
proxy/http/HttpTunnel.cc | 12 +++++
.../bad_chunked_encoding.test.py | 6 +--
.../malformed_chunked_header.replay.yaml | 49 +++++++++++++++++--
3 files changed, 61 insertions(+), 6 deletions(-)
diff --git a/proxy/http/HttpTunnel.cc b/proxy/http/HttpTunnel.cc
index 4b20784f395..adb3cd9bc98 100644
--- a/proxy/http/HttpTunnel.cc
+++ b/proxy/http/HttpTunnel.cc
@@ -136,6 +136,7 @@ ChunkedHandler::read_size()
{
int64_t bytes_used;
bool done = false;
+ int cr = 0;
while (chunked_reader->read_avail() > 0 && !done) {
const char *tmp = chunked_reader->start();
@@ -174,6 +175,9 @@ ChunkedHandler::read_size()
done = true;
break;
} else {
+ if (ParseRules::is_cr(*tmp)) {
+ ++cr;
+ }
state = CHUNK_READ_SIZE_CRLF; // now look for CRLF
}
}
@@ -183,7 +187,15 @@ ChunkedHandler::read_size()
cur_chunk_bytes_left = (cur_chunk_size = running_sum);
state = (running_sum == 0) ? CHUNK_READ_TRAILER_BLANK : CHUNK_READ_CHUNK;
done = true;
+ cr = 0;
break;
+ } else if (ParseRules::is_cr(*tmp)) {
+ if (cr != 0) {
+ state = CHUNK_READ_ERROR;
+ done = true;
+ break;
+ }
+ ++cr;
}
} else if (state == CHUNK_READ_SIZE_START) {
if (ParseRules::is_cr(*tmp)) {
diff --git a/tests/gold_tests/chunked_encoding/bad_chunked_encoding.test.py b/tests/gold_tests/chunked_encoding/bad_chunked_encoding.test.py
index e92181ccdf7..f22cb9d2d39 100644
--- a/tests/gold_tests/chunked_encoding/bad_chunked_encoding.test.py
+++ b/tests/gold_tests/chunked_encoding/bad_chunked_encoding.test.py
@@ -172,13 +172,13 @@ def runChunkedTraffic(self):
# code from the verifier client.
tr.Processes.Default.ReturnCode = 1
tr.Processes.Default.Streams.stdout += Testers.ContainsExpression(
- r"(Unexpected chunked content for key 4: too small|Failed HTTP/1 transaction with key: 4)",
+ r"(Unexpected chunked content for key 101: too small|Failed HTTP/1 transaction with key: 101)",
"Verify that ATS closed the forth transaction.")
tr.Processes.Default.Streams.stdout += Testers.ContainsExpression(
- r"(Unexpected chunked content for key 5: too small|Failed HTTP/1 transaction with key: 5)",
+ r"(Unexpected chunked content for key 102: too small|Failed HTTP/1 transaction with key: 102)",
"Verify that ATS closed the fifth transaction.")
tr.Processes.Default.Streams.stdout += Testers.ContainsExpression(
- r"(Unexpected chunked content for key 6: too small|Failed HTTP/1 transaction with key: 6)",
+ r"(Unexpected chunked content for key 103: too small|Failed HTTP/1 transaction with key: 103)",
"Verify that ATS closed the sixth transaction.")
# ATS should close the connection before any body gets through. "def"
diff --git a/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml b/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
index ae135d77ab7..5f136a7eeba 100644
--- a/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
+++ b/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
@@ -78,6 +78,26 @@ sessions:
server-response:
status: 200
+- transactions:
+ - client-request:
+ method: "POST"
+ version: "1.1"
+ url: /malformed/chunk/header3
+ headers:
+ fields:
+ - [ Host, example.com ]
+ - [ Transfer-Encoding, chunked ]
+ - [ uuid, 4 ]
+ content:
+ transfer: plain
+ encoding: uri
+ # BWS cannot have CR
+ data: 3%0D%0D%0Aabc%0D%0A0%0D%0A%0D%0A
+
+ # The connection will be dropped and this response will not go out.
+ server-response:
+ status: 200
+
#
# Now repeat the above two malformed chunk header tests, but on the server
# side.
@@ -90,7 +110,7 @@ sessions:
headers:
fields:
- [ Host, example.com ]
- - [ uuid, 4 ]
+ - [ uuid, 101 ]
# The connection will be dropped and this response will not go out.
server-response:
@@ -113,7 +133,7 @@ sessions:
headers:
fields:
- [ Host, example.com ]
- - [ uuid, 5 ]
+ - [ uuid, 102 ]
# The connection will be dropped and this response will not go out.
server-response:
@@ -136,7 +156,7 @@ sessions:
headers:
fields:
- [ Host, example.com ]
- - [ uuid, 6 ]
+ - [ uuid, 103 ]
# The connection will be dropped and this response will not go out.
server-response:
@@ -150,3 +170,26 @@ sessions:
encoding: uri
# Super large chunk header, larger than will fit in an int.
data: 111111113%0D%0Adef%0D%0A0%0D%0A%0D%0A
+
+- transactions:
+ - client-request:
+ method: "GET"
+ version: "1.1"
+ url: /response/malformed/chunk/size2
+ headers:
+ fields:
+ - [ Host, example.com ]
+ - [ uuid, 104 ]
+
+ # The connection will be dropped and this response will not go out.
+ server-response:
+ status: 200
+ reason: OK
+ headers:
+ fields:
+ - [ Transfer-Encoding, chunked ]
+ content:
+ transfer: plain
+ encoding: uri
+ # BWS cannot have CR
+ data: 3%0D%0D%0Adef%0D%0A0%0D%0A%0D%0A
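
The HttpTunnel change above rejects a chunk-size line that contains a CR anywhere except as part of the terminating CRLF — exactly the `3%0D%0D%0A...` case these replay transactions exercise. A standalone sketch of that rule (a hypothetical helper, not the `ChunkedHandler` state machine, and ignoring chunk extensions):

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Returns true only when the chunk-size line is hex digits followed by
// exactly one CRLF. An extra CR (e.g. "3\r\r\n") is rejected, matching
// the stricter parsing the patch enforces.
bool valid_chunk_size_line(const std::string &line) {
  size_t i = 0;
  while (i < line.size() && std::isxdigit(static_cast<unsigned char>(line[i]))) {
    ++i; // consume the hex chunk-size digits
  }
  if (i == 0) {
    return false; // no size digits at all
  }
  // Everything after the digits must be exactly "\r\n".
  return line.compare(i, std::string::npos, "\r\n") == 0;
}
```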

CVE-2024-38311.patch — new file, 1580 lines; diff suppressed because it is too large.

CVE-2024-38479.patch — new file, 129 lines.

@@ -0,0 +1,129 @@
From b8861231702ac5df7d5de401e82440c1cf20b633 Mon Sep 17 00:00:00 2001
From: Bryan Call <bcall@apache.org>
Date: Tue, 12 Nov 2024 09:51:49 -0800
Subject: [PATCH] Add matrix params to the cachekey in the cachekey plugin
(#11856)
Origin: https://github.com/apache/trafficserver/commit/b8861231702ac5df7d5de401e82440c1cf20b633
---
plugins/cachekey/cachekey.cc | 21 +++++++++++++++++++++
plugins/cachekey/cachekey.h | 1 +
plugins/cachekey/configs.cc | 14 ++++++++++++++
plugins/cachekey/configs.h | 11 +++++++++++
plugins/cachekey/plugin.cc | 4 ++++
5 files changed, 51 insertions(+)
diff --git a/plugins/cachekey/cachekey.cc b/plugins/cachekey/cachekey.cc
index babc78cc999..38286e7eb28 100644
--- a/plugins/cachekey/cachekey.cc
+++ b/plugins/cachekey/cachekey.cc
@@ -673,6 +673,27 @@ CacheKey::appendQuery(const ConfigQuery &config)
}
}
+void
+CacheKey::appendMatrix(const ConfigMatrix &config)
+{
+ if (config.toBeRemoved()) {
+ return;
+ }
+
+ const char *matrix;
+ int length;
+
+ matrix = TSUrlHttpParamsGet(_buf, _url, &length);
+ if (matrix == nullptr || length == 0) {
+ return;
+ }
+
+ if (matrix && length) {
+ _key.append(";");
+ _key.append(matrix, length);
+ }
+}
+
/**
* @brief Append User-Agent header captures specified in the Pattern configuration object.
*
diff --git a/plugins/cachekey/cachekey.h b/plugins/cachekey/cachekey.h
index 0b47e85984d..dc208f93bb4 100644
--- a/plugins/cachekey/cachekey.h
+++ b/plugins/cachekey/cachekey.h
@@ -63,6 +63,7 @@ class CacheKey
void appendPath(Pattern &pathCapture, Pattern &pathCaptureUri);
void appendHeaders(const ConfigHeaders &config);
void appendQuery(const ConfigQuery &config);
+ void appendMatrix(const ConfigMatrix &config);
void appendCookies(const ConfigCookies &config);
void appendUaCaptures(Pattern &config);
bool appendUaClass(Classifier &classifier);
diff --git a/plugins/cachekey/configs.cc b/plugins/cachekey/configs.cc
index b2bc42d5e70..d6ef13aea68 100644
--- a/plugins/cachekey/configs.cc
+++ b/plugins/cachekey/configs.cc
@@ -208,6 +208,20 @@ ConfigQuery::name() const
return _NAME;
}
+bool
+ConfigMatrix::finalize()
+{
+ _remove = noIncludeExcludeRules();
+ return true;
+}
+
+const String ConfigMatrix::_NAME = "matrix parameter";
+inline const String &
+ConfigMatrix::name() const
+{
+ return _NAME;
+}
+
/**
* @briefs finalizes the headers related configuration.
*
diff --git a/plugins/cachekey/configs.h b/plugins/cachekey/configs.h
index e98b69afd48..f5d24bdbe3c 100644
--- a/plugins/cachekey/configs.h
+++ b/plugins/cachekey/configs.h
@@ -112,6 +112,16 @@ class ConfigQuery : public ConfigElements
static const String _NAME;
};
+class ConfigMatrix : public ConfigElements
+{
+public:
+ bool finalize() override;
+
+private:
+ const String &name() const override;
+ static const String _NAME;
+};
+
/**
* @brief Headers configuration class.
*/
@@ -210,6 +220,7 @@ class Configs
/* Make the following members public to avoid unnecessary accessors */
ConfigQuery _query; /**< @brief query parameter related configuration */
ConfigHeaders _headers; /**< @brief headers related configuration */
+ ConfigMatrix _matrix; /**< @brief matrix parameter related configuration */
ConfigCookies _cookies; /**< @brief cookies related configuration */
Pattern _uaCapture; /**< @brief the capture groups and the replacement string used for the User-Agent header capture */
String _prefix; /**< @brief cache key prefix string */
diff --git a/plugins/cachekey/plugin.cc b/plugins/cachekey/plugin.cc
index d92c079271a..b863b94a0d5 100644
--- a/plugins/cachekey/plugin.cc
+++ b/plugins/cachekey/plugin.cc
@@ -64,6 +64,10 @@ setCacheKey(TSHttpTxn txn, Configs *config, TSRemapRequestInfo *rri = nullptr)
if (!config->pathToBeRemoved()) {
cachekey.appendPath(config->_pathCapture, config->_pathCaptureUri);
}
+
+ /* Append the matrix parameters to the cache key. */
+ cachekey.appendMatrix(config->_matrix);
+
/* Append query parameters to the cache key. */
cachekey.appendQuery(config->_query);

CVE-2024-50305.patch — new file, 72 lines.

@@ -0,0 +1,72 @@
From 5e39658f7c0bc91613468c9513ba22ede1739d7e Mon Sep 17 00:00:00 2001
From: "Alan M. Carroll" <amc@apache.org>
Date: Tue, 2 Nov 2021 11:47:09 -0500
Subject: [PATCH] Tweak MimeHdr::get_host_port_values to not run over the end
of the TextView. (#8468)
Origin: https://github.com/apache/trafficserver/commit/5e39658f7c0bc91613468c9513ba22ede1739d7e
Fix for #8461
(cherry picked from commit 055ca11c2842a64bf7df8d547515670e1a04afc1)
---
proxy/hdrs/MIME.cc | 11 +++--------
src/tscpp/util/unit_tests/test_TextView.cc | 11 +++--------
2 files changed, 6 insertions(+), 16 deletions(-)
diff --git a/proxy/hdrs/MIME.cc b/proxy/hdrs/MIME.cc
index 45c16c386dd..0a55dd06b4d 100644
--- a/proxy/hdrs/MIME.cc
+++ b/proxy/hdrs/MIME.cc
@@ -2284,20 +2284,15 @@ MIMEHdr::get_host_port_values(const char **host_ptr, ///< Pointer to host.
if (b) {
if ('[' == *b) {
auto idx = b.find(']');
- if (idx <= b.size() && b[idx + 1] == ':') {
+ if (idx < b.size() - 1 && b[idx + 1] == ':') {
host = b.take_prefix_at(idx + 1);
port = b;
} else {
host = b;
}
} else {
- auto x = b.split_prefix_at(':');
- if (x) {
- host = x;
- port = b;
- } else {
- host = b;
- }
+ host = b.take_prefix_at(':');
+ port = b;
}
if (host) {
diff --git a/src/tscpp/util/unit_tests/test_TextView.cc b/src/tscpp/util/unit_tests/test_TextView.cc
index 8f71e0aa39d..7f365369082 100644
--- a/src/tscpp/util/unit_tests/test_TextView.cc
+++ b/src/tscpp/util/unit_tests/test_TextView.cc
@@ -275,20 +275,15 @@ TEST_CASE("TextView Affixes", "[libts][TextView]")
auto f_host = [](TextView b, TextView &host, TextView &port) -> void {
if ('[' == *b) {
auto idx = b.find(']');
- if (idx <= b.size() && b[idx + 1] == ':') {
+ if (idx < b.size() - 1 && b[idx + 1] == ':') {
host = b.take_prefix_at(idx + 1);
port = b;
} else {
host = b;
}
} else {
- auto x = b.split_prefix_at(':');
- if (x) {
- host = x;
- port = b;
- } else {
- host = b;
- }
+ host = b.take_prefix_at(':');
+ port = b;
}
};
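The corrected bracket check above only dereferences `b[idx + 1]` when that index actually exists; the old `idx <= b.size()` test read one byte past the end of the TextView for inputs like `[::1]`. A Python re-creation of the fixed logic (illustrative sketch, not the TextView API):

```python
def split_host_port(value: str) -> tuple:
    """Mimic the fixed MIMEHdr::get_host_port_values parsing (sketch only)."""
    if value.startswith('['):
        idx = value.find(']')
        # Fixed check: look at value[idx + 1] only when it is inside the string.
        if idx != -1 and idx < len(value) - 1 and value[idx + 1] == ':':
            return value[:idx + 1], value[idx + 2:]
        return value, ''
    # take_prefix_at(':') splits at the first colon, or takes everything.
    host, _, port = value.partition(':')
    return host, port
```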

CVE-2024-50306.patch Normal file

@@ -0,0 +1,37 @@
From 27f504883547502b1f5e4e389edd7f26e3ab246f Mon Sep 17 00:00:00 2001
From: Masakazu Kitajo <maskit@apache.org>
Date: Tue, 12 Nov 2024 11:13:59 -0700
Subject: [PATCH] Fix unchecked return value of initgroups() (#11855)
Origin: https://github.com/apache/trafficserver/commit/27f504883547502b1f5e4e389edd7f26e3ab246f
* Fix unchecked return value of initgroups()
Signed-off-by: Jeffrey Bencteux <jeffbencteux@gmail.com>
* clang-format
---------
Signed-off-by: Jeffrey Bencteux <jeffbencteux@gmail.com>
Co-authored-by: Jeffrey Bencteux <jeffbencteux@gmail.com>
(cherry picked from commit ae638096e259121d92d46a9f57026a5ff5bc328b)
---
src/tscore/ink_cap.cc | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/src/tscore/ink_cap.cc b/src/tscore/ink_cap.cc
index b4f0ecace5d..8a95d4b1329 100644
--- a/src/tscore/ink_cap.cc
+++ b/src/tscore/ink_cap.cc
@@ -160,7 +160,9 @@ impersonate(const struct passwd *pwd, ImpersonationLevel level)
#endif
// Always repopulate the supplementary group list for the new user.
- initgroups(pwd->pw_name, pwd->pw_gid);
+ if (initgroups(pwd->pw_name, pwd->pw_gid) != 0) {
+ Fatal("switching to user %s, failed to initialize supplementary groups ID %ld", pwd->pw_name, (long)pwd->pw_gid);
+ }
switch (level) {
case IMPERSONATE_PERMANENT:
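The fix above is an instance of the checked-return pattern: `initgroups()` returns -1 on failure, and silently ignoring that leaves the process running with stale supplementary groups. A hedged sketch of the same pattern (hypothetical wrapper, not the ATS API):

```python
def init_supplementary_groups(initgroups, user: str, gid: int) -> None:
    """Treat a nonzero initgroups() return as fatal instead of ignoring it."""
    if initgroups(user, gid) != 0:
        raise RuntimeError(
            f"switching to user {user}, failed to initialize supplementary groups ID {gid}")
```

In CPython, `os.initgroups` already raises `OSError` on failure, so the check comes for free there; in C the return value must be tested explicitly, as the patch does.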

CVE-2024-53868.patch Normal file

@@ -0,0 +1,550 @@
From 3d2f29c88f9b073cb0fd3b9c7f85430e2170acbb Mon Sep 17 00:00:00 2001
From: Masakazu Kitajo <maskit@apache.org>
Date: Tue, 1 Apr 2025 12:15:16 -0600
Subject: [PATCH] Require the use of CRLF in chunked message body (#12150)
* Require the use of CRLF in chunked message body
* Fix docs
---
doc/admin-guide/files/records.config.en.rst | 9 +++
.../functions/TSHttpOverridableConfig.en.rst | 1 +
.../api/types/TSOverridableConfigKey.en.rst | 1 +
include/ts/apidefs.h.in | 1 +
mgmt/RecordsConfig.cc | 2 +
plugins/lua/ts_lua_http_config.c | 2 +
proxy/http/HttpConfig.cc | 2 +
proxy/http/HttpConfig.h | 1 +
proxy/http/HttpSM.cc | 28 +++++---
proxy/http/HttpTunnel.cc | 72 ++++++++++++-------
proxy/http/HttpTunnel.h | 15 +++-
src/shared/overridable_txn_vars.cc | 1 +
src/traffic_server/FetchSM.cc | 3 +-
src/traffic_server/InkAPI.cc | 3 +
src/traffic_server/InkAPITest.cc | 3 +-
.../malformed_chunked_header.replay.yaml | 44 ++++++++++++
16 files changed, 150 insertions(+), 38 deletions(-)
diff --git a/doc/admin-guide/files/records.config.en.rst b/doc/admin-guide/files/records.config.en.rst
index b81510db69d..7db9e6a9f66 100644
--- a/doc/admin-guide/files/records.config.en.rst
+++ b/doc/admin-guide/files/records.config.en.rst
@@ -987,6 +987,15 @@ mptcp
for details about chunked trailers. By default, this option is disabled
and therefore |TS| will not drop chunked trailers.
+.. ts:cv:: CONFIG proxy.config.http.strict_chunk_parsing INT 1
+ :reloadable:
+ :overridable:
+
+ Specifies whether |TS| strictly checks errors in chunked message body.
+ If enabled (``1``), |TS| returns 400 Bad Request if chunked message body is
+ not compliant with RFC 9112. If disabled (``0``), |TS| allows using LF as
+ a line terminator.
+
.. ts:cv:: CONFIG proxy.config.http.send_http11_requests INT 1
:reloadable:
:overridable:
diff --git a/doc/developer-guide/api/functions/TSHttpOverridableConfig.en.rst b/doc/developer-guide/api/functions/TSHttpOverridableConfig.en.rst
index 2ec29831532..b2b0e231502 100644
--- a/doc/developer-guide/api/functions/TSHttpOverridableConfig.en.rst
+++ b/doc/developer-guide/api/functions/TSHttpOverridableConfig.en.rst
@@ -111,6 +111,7 @@ TSOverridableConfigKey Value Config
:c:enumerator:`TS_CONFIG_HTTP_CACHE_WHEN_TO_REVALIDATE` :ts:cv:`proxy.config.http.cache.when_to_revalidate`
:c:enumerator:`TS_CONFIG_HTTP_CHUNKING_ENABLED` :ts:cv:`proxy.config.http.chunking_enabled`
:c:enumerator:`TS_CONFIG_HTTP_CHUNKING_SIZE` :ts:cv:`proxy.config.http.chunking.size`
+:c:enumerator:`TS_CONFIG_HTTP_STRICT_CHUNK_PARSING` :ts:cv:`proxy.config.http.strict_chunk_parsing`
:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES_DEAD_SERVER` :ts:cv:`proxy.config.http.connect_attempts_max_retries_dead_server`
:c:enumerator:`TS_CONFIG_HTTP_DROP_CHUNKED_TRAILERS` :ts:cv:`proxy.config.http.drop_chunked_trailers`
:c:enumerator:`TS_CONFIG_HTTP_CONNECT_ATTEMPTS_MAX_RETRIES` :ts:cv:`proxy.config.http.connect_attempts_max_retries`
diff --git a/doc/developer-guide/api/types/TSOverridableConfigKey.en.rst b/doc/developer-guide/api/types/TSOverridableConfigKey.en.rst
index 2d0941efde0..b4291b46579 100644
--- a/doc/developer-guide/api/types/TSOverridableConfigKey.en.rst
+++ b/doc/developer-guide/api/types/TSOverridableConfigKey.en.rst
@@ -91,6 +91,7 @@ Enumeration Members
.. c:enumerator:: TS_CONFIG_NET_SOCK_PACKET_TOS_OUT
.. c:enumerator:: TS_CONFIG_HTTP_INSERT_AGE_IN_RESPONSE
.. c:enumerator:: TS_CONFIG_HTTP_CHUNKING_SIZE
+.. c:enumerator:: TS_CONFIG_HTTP_STRICT_CHUNK_PARSING
.. c:enumerator:: TS_CONFIG_HTTP_DROP_CHUNKED_TRAILERS
.. c:enumerator:: TS_CONFIG_HTTP_FLOW_CONTROL_ENABLED
.. c:enumerator:: TS_CONFIG_HTTP_FLOW_CONTROL_LOW_WATER_MARK
diff --git a/include/ts/apidefs.h.in b/include/ts/apidefs.h.in
index 1641565a1a9..893177c88b9 100644
--- a/include/ts/apidefs.h.in
+++ b/include/ts/apidefs.h.in
@@ -875,6 +875,7 @@ typedef enum {
TS_CONFIG_HTTP_ENABLE_PARENT_TIMEOUT_MARKDOWNS,
TS_CONFIG_HTTP_DISABLE_PARENT_MARKDOWNS,
TS_CONFIG_HTTP_DROP_CHUNKED_TRAILERS,
+ TS_CONFIG_HTTP_STRICT_CHUNK_PARSING,
TS_CONFIG_LAST_ENTRY
} TSOverridableConfigKey;
diff --git a/mgmt/RecordsConfig.cc b/mgmt/RecordsConfig.cc
index ff7fdc0e3c8..e645bb6c6f1 100644
--- a/mgmt/RecordsConfig.cc
+++ b/mgmt/RecordsConfig.cc
@@ -363,6 +363,8 @@ static const RecordElement RecordsConfig[] =
,
{RECT_CONFIG, "proxy.config.http.drop_chunked_trailers", RECD_INT, "0", RECU_DYNAMIC, RR_NULL, RECC_NULL, "[0-1]", RECA_NULL}
,
+ {RECT_CONFIG, "proxy.config.http.strict_chunk_parsing", RECD_INT, "1", RECU_DYNAMIC, RR_NULL, RECC_NULL, "[0-1]", RECA_NULL}
+ ,
{RECT_CONFIG, "proxy.config.http.flow_control.enabled", RECD_INT, "0", RECU_DYNAMIC, RR_NULL, RECC_NULL, nullptr, RECA_NULL}
,
{RECT_CONFIG, "proxy.config.http.flow_control.high_water", RECD_INT, "0", RECU_DYNAMIC, RR_NULL, RECC_NULL, nullptr, RECA_NULL}
diff --git a/plugins/lua/ts_lua_http_config.c b/plugins/lua/ts_lua_http_config.c
index a25d8ab8c8f..4b22ee94b50 100644
--- a/plugins/lua/ts_lua_http_config.c
+++ b/plugins/lua/ts_lua_http_config.c
@@ -149,6 +149,7 @@ typedef enum {
TS_LUA_CONFIG_BODY_FACTORY_RESPONSE_SUPPRESSION_MODE = TS_CONFIG_BODY_FACTORY_RESPONSE_SUPPRESSION_MODE,
TS_LUA_CONFIG_ENABLE_PARENT_TIMEOUT_MARKDOWNS = TS_CONFIG_HTTP_ENABLE_PARENT_TIMEOUT_MARKDOWNS,
TS_LUA_CONFIG_DISABLE_PARENT_MARKDOWNS = TS_CONFIG_HTTP_DISABLE_PARENT_MARKDOWNS,
+ TS_LUA_CONFIG_HTTP_STRICT_CHUNK_PARSING = TS_CONFIG_HTTP_STRICT_CHUNK_PARSING,
TS_LUA_CONFIG_LAST_ENTRY = TS_CONFIG_LAST_ENTRY,
} TSLuaOverridableConfigKey;
@@ -290,6 +291,7 @@ ts_lua_var_item ts_lua_http_config_vars[] = {
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_BODY_FACTORY_RESPONSE_SUPPRESSION_MODE),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_ENABLE_PARENT_TIMEOUT_MARKDOWNS),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_DISABLE_PARENT_MARKDOWNS),
+ TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_HTTP_STRICT_CHUNK_PARSING),
TS_LUA_MAKE_VAR_ITEM(TS_LUA_CONFIG_LAST_ENTRY),
};
diff --git a/proxy/http/HttpConfig.cc b/proxy/http/HttpConfig.cc
index d5c1c00a283..ca2edee1ee7 100644
--- a/proxy/http/HttpConfig.cc
+++ b/proxy/http/HttpConfig.cc
@@ -1190,6 +1190,7 @@ HttpConfig::startup()
HttpEstablishStaticConfigByte(c.oride.chunking_enabled, "proxy.config.http.chunking_enabled");
HttpEstablishStaticConfigLongLong(c.oride.http_chunking_size, "proxy.config.http.chunking.size");
HttpEstablishStaticConfigByte(c.oride.http_drop_chunked_trailers, "proxy.config.http.drop_chunked_trailers");
+ HttpEstablishStaticConfigByte(c.oride.http_strict_chunk_parsing, "proxy.config.http.strict_chunk_parsing");
HttpEstablishStaticConfigByte(c.oride.flow_control_enabled, "proxy.config.http.flow_control.enabled");
HttpEstablishStaticConfigLongLong(c.oride.flow_high_water_mark, "proxy.config.http.flow_control.high_water");
HttpEstablishStaticConfigLongLong(c.oride.flow_low_water_mark, "proxy.config.http.flow_control.low_water");
@@ -1496,6 +1497,7 @@ HttpConfig::reconfigure()
params->oride.keep_alive_enabled_out = INT_TO_BOOL(m_master.oride.keep_alive_enabled_out);
params->oride.chunking_enabled = INT_TO_BOOL(m_master.oride.chunking_enabled);
params->oride.http_drop_chunked_trailers = m_master.oride.http_drop_chunked_trailers;
+ params->oride.http_strict_chunk_parsing = m_master.oride.http_strict_chunk_parsing;
params->oride.auth_server_session_private = INT_TO_BOOL(m_master.oride.auth_server_session_private);
params->oride.http_chunking_size = m_master.oride.http_chunking_size;
diff --git a/proxy/http/HttpConfig.h b/proxy/http/HttpConfig.h
index 6c1763f84e8..53450bdbb25 100644
--- a/proxy/http/HttpConfig.h
+++ b/proxy/http/HttpConfig.h
@@ -703,6 +703,7 @@ struct OverridableHttpConfigParams {
MgmtInt http_chunking_size = 4096; // Maximum chunk size for chunked output.
MgmtByte http_drop_chunked_trailers = 0; ///< Whether to drop chunked trailers.
+ MgmtByte http_strict_chunk_parsing = 1; ///< Whether to parse chunked body strictly.
MgmtInt flow_high_water_mark = 0; ///< Flow control high water mark.
MgmtInt flow_low_water_mark = 0; ///< Flow control low water mark.
diff --git a/proxy/http/HttpSM.cc b/proxy/http/HttpSM.cc
index cdc05461320..c0ba82641e1 100644
--- a/proxy/http/HttpSM.cc
+++ b/proxy/http/HttpSM.cc
@@ -978,7 +978,8 @@ HttpSM::wait_for_full_body()
p = tunnel.add_producer(ua_entry->vc, post_bytes, buf_start, &HttpSM::tunnel_handler_post_ua, HT_BUFFER_READ, "ua post buffer");
if (chunked) {
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
- tunnel.set_producer_chunking_action(p, 0, TCA_PASSTHRU_CHUNKED_CONTENT, drop_chunked_trailers);
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, 0, TCA_PASSTHRU_CHUNKED_CONTENT, drop_chunked_trailers, parse_chunk_strictly);
}
ua_entry->in_tunnel = true;
ua_txn->set_inactivity_timeout(HRTIME_SECONDS(t_state.txn_conf->transaction_no_activity_timeout_in));
@@ -6197,10 +6198,11 @@ HttpSM::do_setup_post_tunnel(HttpVC_t to_vc_type)
// In either case, the server will support chunked (HTTP/1.1)
if (chunked) {
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
if (ua_txn->is_chunked_encoding_supported()) {
- tunnel.set_producer_chunking_action(p, 0, TCA_PASSTHRU_CHUNKED_CONTENT, drop_chunked_trailers);
+ tunnel.set_producer_chunking_action(p, 0, TCA_PASSTHRU_CHUNKED_CONTENT, drop_chunked_trailers, parse_chunk_strictly);
} else {
- tunnel.set_producer_chunking_action(p, 0, TCA_CHUNK_CONTENT, drop_chunked_trailers);
+ tunnel.set_producer_chunking_action(p, 0, TCA_CHUNK_CONTENT, drop_chunked_trailers, parse_chunk_strictly);
tunnel.set_producer_chunking_size(p, 0);
}
}
@@ -6609,7 +6611,9 @@ HttpSM::setup_cache_read_transfer()
// w/o providing a Content-Length header
if (t_state.client_info.receive_chunked_response) {
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
- tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_CHUNK_CONTENT, drop_chunked_trailers);
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_CHUNK_CONTENT, drop_chunked_trailers,
+ parse_chunk_strictly);
tunnel.set_producer_chunking_size(p, t_state.txn_conf->http_chunking_size);
}
ua_entry->in_tunnel = true;
@@ -6927,8 +6931,10 @@ HttpSM::setup_server_transfer_to_transform()
transform_info.entry->in_tunnel = true;
if (t_state.current.server->transfer_encoding == HttpTransact::CHUNKED_ENCODING) {
- client_response_hdr_bytes = 0; // fixed by YTS Team, yamsat
- tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_DECHUNK_CONTENT, HttpTunnel::DROP_CHUNKED_TRAILERS);
+ client_response_hdr_bytes = 0; // fixed by YTS Team, yamsat
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_DECHUNK_CONTENT, HttpTunnel::DROP_CHUNKED_TRAILERS,
+ parse_chunk_strictly);
}
return p;
@@ -6968,7 +6974,9 @@ HttpSM::setup_transfer_from_transform()
if (t_state.client_info.receive_chunked_response) {
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
- tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_CHUNK_CONTENT, drop_chunked_trailers);
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, TCA_CHUNK_CONTENT, drop_chunked_trailers,
+ parse_chunk_strictly);
tunnel.set_producer_chunking_size(p, t_state.txn_conf->http_chunking_size);
}
@@ -7025,7 +7033,8 @@ HttpSM::setup_server_transfer_to_cache_only()
tunnel.add_producer(server_entry->vc, nbytes, buf_start, &HttpSM::tunnel_handler_server, HT_HTTP_SERVER, "http server");
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
- tunnel.set_producer_chunking_action(p, 0, action, drop_chunked_trailers);
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, 0, action, drop_chunked_trailers, parse_chunk_strictly);
tunnel.set_producer_chunking_size(p, t_state.txn_conf->http_chunking_size);
setup_cache_write_transfer(&cache_sm, server_entry->vc, &t_state.cache_info.object_store, 0, "cache write");
@@ -7114,7 +7123,8 @@ HttpSM::setup_server_transfer()
}
*/
bool const drop_chunked_trailers = t_state.http_config_param->oride.http_drop_chunked_trailers == 1;
- tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, action, drop_chunked_trailers);
+ bool const parse_chunk_strictly = t_state.http_config_param->oride.http_strict_chunk_parsing == 1;
+ tunnel.set_producer_chunking_action(p, client_response_hdr_bytes, action, drop_chunked_trailers, parse_chunk_strictly);
tunnel.set_producer_chunking_size(p, t_state.txn_conf->http_chunking_size);
return p;
}
diff --git a/proxy/http/HttpTunnel.cc b/proxy/http/HttpTunnel.cc
index 1508179e6b5..e9c0c6eafea 100644
--- a/proxy/http/HttpTunnel.cc
+++ b/proxy/http/HttpTunnel.cc
@@ -51,27 +51,28 @@ static int const CHUNK_IOBUFFER_SIZE_INDEX = MIN_IOBUFFER_SIZE;
ChunkedHandler::ChunkedHandler() : max_chunk_size(DEFAULT_MAX_CHUNK_SIZE) {}
void
-ChunkedHandler::init(IOBufferReader *buffer_in, HttpTunnelProducer *p, bool drop_chunked_trailers)
+ChunkedHandler::init(IOBufferReader *buffer_in, HttpTunnelProducer *p, bool drop_chunked_trailers, bool parse_chunk_strictly)
{
if (p->do_chunking) {
- init_by_action(buffer_in, ACTION_DOCHUNK, drop_chunked_trailers);
+ init_by_action(buffer_in, ACTION_DOCHUNK, drop_chunked_trailers, parse_chunk_strictly);
} else if (p->do_dechunking) {
- init_by_action(buffer_in, ACTION_DECHUNK, drop_chunked_trailers);
+ init_by_action(buffer_in, ACTION_DECHUNK, drop_chunked_trailers, parse_chunk_strictly);
} else {
- init_by_action(buffer_in, ACTION_PASSTHRU, drop_chunked_trailers);
+ init_by_action(buffer_in, ACTION_PASSTHRU, drop_chunked_trailers, parse_chunk_strictly);
}
return;
}
void
-ChunkedHandler::init_by_action(IOBufferReader *buffer_in, Action action, bool drop_chunked_trailers)
+ChunkedHandler::init_by_action(IOBufferReader *buffer_in, Action action, bool drop_chunked_trailers, bool parse_chunk_strictly)
{
- running_sum = 0;
- num_digits = 0;
- cur_chunk_size = 0;
- cur_chunk_bytes_left = 0;
- truncation = false;
- this->action = action;
+ running_sum = 0;
+ num_digits = 0;
+ cur_chunk_size = 0;
+ cur_chunk_bytes_left = 0;
+ truncation = false;
+ this->action = action;
+ this->strict_chunk_parsing = parse_chunk_strictly;
switch (action) {
case ACTION_DOCHUNK:
@@ -139,7 +140,6 @@ ChunkedHandler::read_size()
{
int64_t bytes_consumed = 0;
bool done = false;
- int cr = 0;
while (chunked_reader->is_read_avail_more_than(0) && !done) {
const char *tmp = chunked_reader->start();
@@ -178,36 +178,59 @@ ChunkedHandler::read_size()
done = true;
break;
} else {
- if (ParseRules::is_cr(*tmp)) {
- ++cr;
+ if ((prev_is_cr = ParseRules::is_cr(*tmp)) == true) {
+ ++num_cr;
}
state = CHUNK_READ_SIZE_CRLF; // now look for CRLF
}
}
} else if (state == CHUNK_READ_SIZE_CRLF) { // Scan for a linefeed
if (ParseRules::is_lf(*tmp)) {
+ if (!prev_is_cr) {
+ Debug("http_chunk", "Found an LF without a preceding CR (protocol violation)");
+ if (strict_chunk_parsing) {
+ state = CHUNK_READ_ERROR;
+ done = true;
+ break;
+ }
+ }
Debug("http_chunk", "read chunk size of %d bytes", running_sum);
cur_chunk_bytes_left = (cur_chunk_size = running_sum);
state = (running_sum == 0) ? CHUNK_READ_TRAILER_BLANK : CHUNK_READ_CHUNK;
done = true;
- cr = 0;
+ num_cr = 0;
break;
- } else if (ParseRules::is_cr(*tmp)) {
- if (cr != 0) {
+ } else if ((prev_is_cr = ParseRules::is_cr(*tmp)) == true) {
+ if (num_cr != 0) {
state = CHUNK_READ_ERROR;
done = true;
break;
}
- ++cr;
+ ++num_cr;
}
} else if (state == CHUNK_READ_SIZE_START) {
- if (ParseRules::is_cr(*tmp)) {
- // Skip it
- } else if (ParseRules::is_lf(*tmp) &&
- bytes_used <= 2) { // bytes_used should be 2 if it's CRLF, but permit a single LF as well
+ Debug("http_chunk", "CHUNK_READ_SIZE_START 0x%02x", *tmp);
+ if (ParseRules::is_lf(*tmp)) {
+ if (!prev_is_cr) {
+ Debug("http_chunk", "Found an LF without a preceding CR (protocol violation) before chunk size");
+ if (strict_chunk_parsing) {
+ state = CHUNK_READ_ERROR;
+ done = true;
+ break;
+ }
+ }
running_sum = 0;
num_digits = 0;
+ num_cr = 0;
state = CHUNK_READ_SIZE;
+ } else if ((prev_is_cr = ParseRules::is_cr(*tmp)) == true) {
+ if (num_cr != 0) {
+ Debug("http_chunk", "Found multiple CRs before chunk size");
+ state = CHUNK_READ_ERROR;
+ done = true;
+ break;
+ }
+ ++num_cr;
} else { // Unexpected character
state = CHUNK_READ_ERROR;
done = true;
@@ -651,9 +674,10 @@ HttpTunnel::deallocate_buffers()
void
HttpTunnel::set_producer_chunking_action(HttpTunnelProducer *p, int64_t skip_bytes, TunnelChunkingAction_t action,
- bool drop_chunked_trailers)
+ bool drop_chunked_trailers, bool parse_chunk_strictly)
{
this->http_drop_chunked_trailers = drop_chunked_trailers;
+ this->http_strict_chunk_parsing = parse_chunk_strictly;
p->chunked_handler.skip_bytes = skip_bytes;
p->chunking_action = action;
@@ -878,7 +902,7 @@ HttpTunnel::producer_run(HttpTunnelProducer *p)
// For all the chunking cases, we must only copy bytes as we process them.
body_bytes_to_copy = 0;
- p->chunked_handler.init(p->buffer_start, p, this->http_drop_chunked_trailers);
+ p->chunked_handler.init(p->buffer_start, p, this->http_drop_chunked_trailers, this->http_strict_chunk_parsing);
// Copy the header into the chunked/dechunked buffers.
if (p->do_chunking) {
diff --git a/proxy/http/HttpTunnel.h b/proxy/http/HttpTunnel.h
index 3aac38aca68..9b7d1876425 100644
--- a/proxy/http/HttpTunnel.h
+++ b/proxy/http/HttpTunnel.h
@@ -112,6 +112,8 @@ struct ChunkedHandler {
*/
bool drop_chunked_trailers = false;
+ bool strict_chunk_parsing = true;
+
bool truncation = false;
/** The number of bytes to skip from the reader because they are not body bytes.
@@ -130,6 +132,8 @@ struct ChunkedHandler {
// Chunked header size parsing info.
int running_sum = 0;
int num_digits = 0;
+ int num_cr = 0;
+ bool prev_is_cr = false;
/// @name Output data.
//@{
@@ -144,8 +148,8 @@ struct ChunkedHandler {
//@}
ChunkedHandler();
- void init(IOBufferReader *buffer_in, HttpTunnelProducer *p, bool drop_chunked_trailers);
- void init_by_action(IOBufferReader *buffer_in, Action action, bool drop_chunked_trailers);
+ void init(IOBufferReader *buffer_in, HttpTunnelProducer *p, bool drop_chunked_trailers, bool strict_parsing);
+ void init_by_action(IOBufferReader *buffer_in, Action action, bool drop_chunked_trailers, bool strict_parsing);
void clear();
/// Set the max chunk @a size.
@@ -392,6 +396,7 @@ class HttpTunnel : public Continuation
/// A named variable for the @a drop_chunked_trailers parameter to @a set_producer_chunking_action.
static constexpr bool DROP_CHUNKED_TRAILERS = true;
+ static constexpr bool PARSE_CHUNK_STRICTLY = true;
/** Designate chunking behavior to the producer.
*
@@ -402,9 +407,10 @@ class HttpTunnel : public Continuation
* @param[in] drop_chunked_trailers If @c true, chunked trailers are filtered
* out. Logically speaking, this is only applicable when proxying chunked
* content, thus only when @a action is @c TCA_PASSTHRU_CHUNKED_CONTENT.
+ * @param[in] parse_chunk_strictly If @c true, no parse error will be allowed
*/
void set_producer_chunking_action(HttpTunnelProducer *p, int64_t skip_bytes, TunnelChunkingAction_t action,
- bool drop_chunked_trailers);
+ bool drop_chunked_trailers, bool parse_chunk_strictly);
/// Set the maximum (preferred) chunk @a size of chunked output for @a producer.
void set_producer_chunking_size(HttpTunnelProducer *producer, int64_t size);
@@ -483,6 +489,9 @@ class HttpTunnel : public Continuation
/// Corresponds to proxy.config.http.drop_chunked_trailers having a value of 1.
bool http_drop_chunked_trailers = false;
+ /// Corresponds to proxy.config.http.strict_chunk_parsing having a value of 1.
+ bool http_strict_chunk_parsing = false;
+
/** The number of body bytes processed in this last execution of the tunnel.
*
* This accounting is used to determine how many bytes to copy into the body
diff --git a/src/shared/overridable_txn_vars.cc b/src/shared/overridable_txn_vars.cc
index 1a5d740794a..f8c6e6e58ea 100644
--- a/src/shared/overridable_txn_vars.cc
+++ b/src/shared/overridable_txn_vars.cc
@@ -31,6 +31,7 @@ const std::unordered_map<std::string_view, std::tuple<const TSOverridableConfigK
{"proxy.config.http.normalize_ae", {TS_CONFIG_HTTP_NORMALIZE_AE, TS_RECORDDATATYPE_INT}},
{"proxy.config.http.chunking.size", {TS_CONFIG_HTTP_CHUNKING_SIZE, TS_RECORDDATATYPE_INT}},
{"proxy.config.http.drop_chunked_trailers", {TS_CONFIG_HTTP_DROP_CHUNKED_TRAILERS, TS_RECORDDATATYPE_INT}},
+ {"proxy.config.http.strict_chunk_parsing", {TS_CONFIG_HTTP_STRICT_CHUNK_PARSING, TS_RECORDDATATYPE_INT}},
{"proxy.config.ssl.client.cert.path", {TS_CONFIG_SSL_CERT_FILEPATH, TS_RECORDDATATYPE_STRING}},
{"proxy.config.http.allow_half_open", {TS_CONFIG_HTTP_ALLOW_HALF_OPEN, TS_RECORDDATATYPE_INT}},
{"proxy.config.http.chunking_enabled", {TS_CONFIG_HTTP_CHUNKING_ENABLED, TS_RECORDDATATYPE_INT}},
diff --git a/src/traffic_server/FetchSM.cc b/src/traffic_server/FetchSM.cc
index 19303b7e03d..ad5634845b4 100644
--- a/src/traffic_server/FetchSM.cc
+++ b/src/traffic_server/FetchSM.cc
@@ -198,7 +198,8 @@ FetchSM::check_chunked()
if (resp_is_chunked && (fetch_flags & TS_FETCH_FLAGS_DECHUNK)) {
ChunkedHandler *ch = &chunked_handler;
- ch->init_by_action(resp_reader, ChunkedHandler::ACTION_DECHUNK, HttpTunnel::DROP_CHUNKED_TRAILERS);
+ ch->init_by_action(resp_reader, ChunkedHandler::ACTION_DECHUNK, HttpTunnel::DROP_CHUNKED_TRAILERS,
+ HttpTunnel::PARSE_CHUNK_STRICTLY);
ch->dechunked_reader = ch->dechunked_buffer->alloc_reader();
ch->state = ChunkedHandler::CHUNK_READ_SIZE;
resp_reader->dealloc();
diff --git a/src/traffic_server/InkAPI.cc b/src/traffic_server/InkAPI.cc
index 71adb94d0cc..40bc7608ffc 100644
--- a/src/traffic_server/InkAPI.cc
+++ b/src/traffic_server/InkAPI.cc
@@ -8928,6 +8928,9 @@ _conf_to_memberp(TSOverridableConfigKey conf, OverridableHttpConfigParams *overr
case TS_CONFIG_HTTP_DROP_CHUNKED_TRAILERS:
ret = _memberp_to_generic(&overridableHttpConfig->http_drop_chunked_trailers, conv);
break;
+ case TS_CONFIG_HTTP_STRICT_CHUNK_PARSING:
+ ret = _memberp_to_generic(&overridableHttpConfig->http_strict_chunk_parsing, conv);
+ break;
case TS_CONFIG_HTTP_FLOW_CONTROL_ENABLED:
ret = _memberp_to_generic(&overridableHttpConfig->flow_control_enabled, conv);
break;
diff --git a/src/traffic_server/InkAPITest.cc b/src/traffic_server/InkAPITest.cc
index a6e7217291a..0e0cac586be 100644
--- a/src/traffic_server/InkAPITest.cc
+++ b/src/traffic_server/InkAPITest.cc
@@ -8774,7 +8774,8 @@ std::array<std::string_view, TS_CONFIG_LAST_ENTRY> SDK_Overridable_Configs = {
"proxy.config.body_factory.response_suppression_mode",
"proxy.config.http.parent_proxy.enable_parent_timeout_markdowns",
"proxy.config.http.parent_proxy.disable_parent_markdowns",
- "proxy.config.http.drop_chunked_trailers"}};
+ "proxy.config.http.drop_chunked_trailers",
+ "proxy.config.http.strict_chunk_parsing"}};
extern ClassAllocator<HttpSM> httpSMAllocator;
diff --git a/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml b/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
index 1118036b3c8..7c0ccb9a47f 100644
--- a/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
+++ b/tests/gold_tests/chunked_encoding/replays/malformed_chunked_header.replay.yaml
@@ -98,6 +98,27 @@ sessions:
server-response:
status: 200
+- transactions:
+ - client-request:
+ method: "POST"
+ version: "1.1"
+ url: /malformed/chunk/header3
+ headers:
+ fields:
+ - [ Host, example.com ]
+ - [ Transfer-Encoding, chunked ]
+ - [ uuid, 5 ]
+ content:
+ transfer: plain
+ encoding: uri
+ # Chunk header must end with a sequence of CRLF.
+ data: 7;x%0Aabcwxyz%0D%0A0%0D%0A%0D%0A
+
+ # The connection will be dropped and this response will not go out.
+ server-response:
+ status: 200
+
+
#
# Now repeat the above two malformed chunk header tests, but on the server
# side.
@@ -193,3 +214,26 @@ sessions:
encoding: uri
# BWS cannot have CR
data: 3%0D%0D%0Adef%0D%0A0%0D%0A%0D%0A
+
+- transactions:
+ - client-request:
+ method: "GET"
+ version: "1.1"
+ url: /response/malformed/chunk/size2
+ headers:
+ fields:
+ - [ Host, example.com ]
+ - [ uuid, 105 ]
+
+ # The connection will be dropped and this response will not go out.
+ server-response:
+ status: 200
+ reason: OK
+ headers:
+ fields:
+ - [ Transfer-Encoding, chunked ]
+ content:
+ transfer: plain
+ encoding: uri
+ # Chunk header must end with a sequence of CRLF.
+ data: 3;x%0Adef%0D%0A0%0D%0A%0D%0A
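The replay cases above exercise the new strict mode: a chunk-size line terminated by a bare LF is rejected instead of being tolerated. A sketch of that rule (hypothetical `parse_chunk_size_line` helper, not ATS code):

```python
def parse_chunk_size_line(data: bytes, strict: bool = True):
    """Parse one chunk-size line; enforce CRLF when strict (per RFC 9112).

    Returns (size, remaining_bytes). Raises ValueError on violations.
    """
    line, sep, rest = data.partition(b"\n")
    if not sep:
        raise ValueError("incomplete chunk-size line")
    if line.endswith(b"\r"):
        line = line[:-1]
    elif strict:
        # Mirrors the patch: an LF without a preceding CR is a protocol violation.
        raise ValueError("LF without a preceding CR")
    size_field = line.split(b";", 1)[0]  # drop chunk extensions such as ";x"
    return int(size_field, 16), rest
```

With `strict_chunk_parsing` disabled, the equivalent of `strict=False` applies and a lone LF is still accepted for compatibility.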

CVE-2024-56195.patch Normal file

@@ -0,0 +1,32 @@
From 483f84ea4ae2511834abd90014770b27a5082a4c Mon Sep 17 00:00:00 2001
From: Chris McFarlen <chris@mcfarlen.us>
Date: Tue, 4 Mar 2025 13:33:06 -0600
Subject: [PATCH] Fix intercept plugin ignoring ACL (#12077)
(cherry picked from commit 8d678fa21e4676f8491e18094d1cd5fcb455d522)
Co-authored-by: Chris McFarlen <cmcfarlen@apple.com>
---
proxy/http/HttpTransact.cc | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/proxy/http/HttpTransact.cc b/proxy/http/HttpTransact.cc
index 0109f62dd1b..115e15f93e5 100644
--- a/proxy/http/HttpTransact.cc
+++ b/proxy/http/HttpTransact.cc
@@ -1174,6 +1174,15 @@ HttpTransact::EndRemapRequest(State *s)
obj_describe(s->hdr_info.client_request.m_http, true);
}
+ // If the client failed ACLs, send error response
+ // This extra condition was added to separate it from the logic below that might allow
+ // requests that use some types of plugins as that code was allowing requests that didn't
+ // pass ACL checks. ACL mismatches are also not counted as invalid client requests
+ if (!s->client_connection_enabled) {
+ TxnDebug("http_trans", "END HttpTransact::EndRemapRequest: connection not allowed");
+ TRANSACT_RETURN(SM_ACTION_SEND_ERROR_CACHE_NOOP, nullptr);
+ }
+
/*
if s->reverse_proxy == false, we can assume remapping failed in some way
-however-

CVE-2024-56202.patch Normal file

@@ -0,0 +1,429 @@
From 1cca4a29520f9258be6c3fad5092939dbe9d3562 Mon Sep 17 00:00:00 2001
From: Bryan Call <bcall@apache.org>
Date: Tue, 4 Mar 2025 11:39:32 -0800
Subject: [PATCH] Fix send 100 Continue optimization for GET (#12075)
This fixes a bug with the proxy.config.http.send_100_continue_response
feature for GET requests in which the following would happen:
1. We do not send the optimized 100 Continue out of proxy for GET
requests with Exect: 100-Continue. This is reasonable since the vast
majority of 100-Continue requests will be POST.
2. Later, we inspect the proxy.config.http.send_100_continue_response
value and assume we did send a 100 Continue response and stripped the
Expect: 100-Continue header from the proxied request. This is
disastrous as it leaves the server waiting for a body which would
never come because the client is waiting for a 100 Continue response
which will never come.
(cherry picked from commit 33b7c7c161c453d6b43c9aecbb7351ad8326c28d)
Co-authored-by: Brian Neradt <brian.neradt@gmail.com>
---
proxy/hdrs/HTTP.h | 1 +
proxy/http/HttpSM.cc | 1 +
proxy/http/HttpTransact.cc | 2 +-
tests/gold_tests/post/expect_client.py | 110 ++++++++++++++++++
tests/gold_tests/post/expect_tests.test.py | 88 ++++++++++++++
tests/gold_tests/post/http_utils.py | 93 +++++++++++++++
.../post/replay/expect-continue.replay.yaml | 42 +++++++
7 files changed, 336 insertions(+), 1 deletion(-)
create mode 100644 tests/gold_tests/post/expect_client.py
create mode 100644 tests/gold_tests/post/expect_tests.test.py
create mode 100644 tests/gold_tests/post/http_utils.py
create mode 100644 tests/gold_tests/post/replay/expect-continue.replay.yaml
diff --git a/proxy/hdrs/HTTP.h b/proxy/hdrs/HTTP.h
index 710fbaf00f4..3daa172f1c7 100644
--- a/proxy/hdrs/HTTP.h
+++ b/proxy/hdrs/HTTP.h
@@ -480,6 +480,7 @@ class HTTPHdr : public MIMEHdr
mutable int m_port = 0; ///< Target port.
mutable bool m_target_cached = false; ///< Whether host name and port are cached.
mutable bool m_target_in_url = false; ///< Whether host name and port are in the URL.
+ mutable bool m_100_continue_sent = false; ///< Whether ATS sent a 100 Continue optimized response.
mutable bool m_100_continue_required = false; ///< Whether 100_continue is in the Expect header.
/// Set if the port was effectively specified in the header.
/// @c true if the target (in the URL or the HOST field) also specified
diff --git a/proxy/http/HttpSM.cc b/proxy/http/HttpSM.cc
index 4220e455af8..4e09795f036 100644
--- a/proxy/http/HttpSM.cc
+++ b/proxy/http/HttpSM.cc
@@ -900,6 +900,7 @@ HttpSM::state_read_client_request_header(int event, void *data)
SMDebug("http_seq", "send 100 Continue response to client");
int64_t nbytes = ua_entry->write_buffer->write(str_100_continue_response, len_100_continue_response);
ua_entry->write_vio = ua_txn->do_io_write(this, nbytes, buf_start);
+ t_state.hdr_info.client_request.m_100_continue_sent = true;
} else {
t_state.hdr_info.client_request.m_100_continue_required = true;
}
diff --git a/proxy/http/HttpTransact.cc b/proxy/http/HttpTransact.cc
index 115e15f93e5..31810e45d14 100644
--- a/proxy/http/HttpTransact.cc
+++ b/proxy/http/HttpTransact.cc
@@ -7877,7 +7877,7 @@ HttpTransact::build_request(State *s, HTTPHdr *base_request, HTTPHdr *outgoing_r
}
}
- if (s->http_config_param->send_100_continue_response) {
+ if (s->hdr_info.client_request.m_100_continue_sent) {
HttpTransactHeaders::remove_100_continue_headers(s, outgoing_request);
TxnDebug("http_trans", "request expect 100-continue headers removed");
}
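Taken together, the C++ hunks above move the decision to strip the client's Expect header from "is `send_100_continue_response` enabled?" to "did ATS actually send the optimized 100 Continue for this transaction?". A minimal Python sketch of the corrected decision — the function and parameter names are illustrative only, not ATS APIs:

```python
def build_outgoing_headers(client_headers: dict, continue_sent: bool) -> dict:
    """Strip the Expect header from the upstream request only when the
    proxy itself already answered 100 Continue (the fixed condition);
    otherwise forward it so the origin can answer the expectation."""
    if continue_sent:
        return {k: v for k, v in client_headers.items() if k.lower() != "expect"}
    return dict(client_headers)
```

Keying on whether the 100 Continue was actually sent, rather than on the config flag alone, keeps the Expect header intact in transactions where ATS did not answer it itself.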
diff --git a/tests/gold_tests/post/expect_client.py b/tests/gold_tests/post/expect_client.py
new file mode 100644
index 00000000000..d419f8c339b
--- /dev/null
+++ b/tests/gold_tests/post/expect_client.py
@@ -0,0 +1,110 @@
+#!/usr/bin/env python3
+"""Implements a client which tests Expect: 100-Continue."""
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from http_utils import (wait_for_headers_complete, determine_outstanding_bytes_to_read, drain_socket)
+
+import argparse
+import socket
+import sys
+
+
+def parse_args() -> argparse.Namespace:
+ """Parse the command line arguments.
+
+ :return: The parsed arguments.
+ """
+ parser = argparse.ArgumentParser()
+ parser.add_argument("proxy_address", help="Address of the proxy to connect to.")
+ parser.add_argument("proxy_port", type=int, help="The port of the proxy to connect to.")
+ parser.add_argument(
+ '-s',
+ '--server-hostname',
+ dest="server_hostname",
+ default="some.server.com",
+ help="The hostname of the server to connect to.")
+ return parser.parse_args()
+
+
+def open_connection(address: str, port: int) -> socket.socket:
+ """Open a connection to the desired host.
+
+ :param address: The address of the host to connect to.
+ :param port: The port of the host to connect to.
+ :return: The socket connected to the host.
+ """
+ sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ sock.connect((address, port))
+ print(f'Connected to {address}:{port}')
+ return sock
+
+
+def send_expect_request(sock: socket.socket, server_hostname: str) -> None:
+ """Send an Expect: 100-Continue request.
+
+ :param sock: The socket to send the request on.
+ :param server_hostname: The hostname of the server to connect to.
+ """
+ # Send the POST request.
+ host_header: bytes = f'Host: {server_hostname}\r\n'.encode()
+ request: bytes = (
+ b"GET /api/1 HTTP/1.1\r\n" + host_header + b"Connection: keep-alive\r\n"
+ b"Content-Length: 3\r\n"
+ b"uuid: expect\r\n"
+ b"Expect: 100-Continue\r\n"
+ b"\r\n")
+ sock.sendall(request)
+ print('Sent Expect: 100-Continue request:')
+ print(request.decode())
+ drain_response(sock)
+ print('Got 100-Continue response.')
+ sock.sendall(b'rst')
+ print('Sent "rst" body.')
+
+
+def drain_response(sock: socket.socket) -> None:
+ """Drain the response from the server.
+
+ :param sock: The socket to read the response from.
+ """
+ print('Waiting for the response to complete.')
+ read_bytes: bytes = wait_for_headers_complete(sock)
+ try:
+ num_bytes_to_drain: int = determine_outstanding_bytes_to_read(read_bytes)
+ except ValueError:
+ print('No CL found')
+ return
+ if num_bytes_to_drain > 0:
+ drain_socket(sock, read_bytes, num_bytes_to_drain)
+ print('Response complete.')
+
+
+def main() -> int:
+ """Run the client."""
+ args = parse_args()
+ print(args)
+
+ with open_connection(args.proxy_address, args.proxy_port) as sock:
+ send_expect_request(sock, args.server_hostname)
+ drain_response(sock)
+ print('Done.')
+ return 0
+
+
+if __name__ == '__main__':
+ sys.exit(main())
diff --git a/tests/gold_tests/post/expect_tests.test.py b/tests/gold_tests/post/expect_tests.test.py
new file mode 100644
index 00000000000..e6f85cd660c
--- /dev/null
+++ b/tests/gold_tests/post/expect_tests.test.py
@@ -0,0 +1,88 @@
+'''
+'''
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+
+
+class ExpectTest:
+
+ _expect_client: str = 'expect_client.py'
+ _http_utils: str = 'http_utils.py'
+ _replay_file: str = 'replay/expect-continue.replay.yaml'
+
+ def __init__(self):
+ tr = Test.AddTestRun('Verify Expect: 100-Continue handling.')
+ self._setup_dns(tr)
+ self._setup_origin(tr)
+ self._setup_trafficserver(tr)
+ self._setup_client(tr)
+
+ def _setup_dns(self, tr: 'TestRun') -> None:
+ '''Set up the DNS server.
+
+ :param tr: The TestRun to which to add the DNS server.
+ '''
+ dns = tr.MakeDNServer('dns', default='127.0.0.1')
+ self._dns = dns
+
+ def _setup_origin(self, tr: 'TestRun') -> None:
+ '''Set up the origin server.
+
+ :param tr: The TestRun to which to add the origin server.
+ '''
+ server = tr.AddVerifierServerProcess("server", replay_path=self._replay_file)
+ self._server = server
+
+ def _setup_trafficserver(self, tr: 'TestRun') -> None:
+ '''Set up the traffic server.
+
+ :param tr: The TestRun to which to add the traffic server.
+ '''
+ ts = tr.MakeATSProcess("ts", enable_cache=False)
+ self._ts = ts
+ ts.Disk.remap_config.AddLine(f'map / http://backend.example.com:{self._server.Variables.http_port}')
+ ts.Disk.records_config.update(
+ {
+ 'proxy.config.diags.debug.enabled': 1,
+ 'proxy.config.diags.debug.tags': 'http',
+ 'proxy.config.dns.nameservers': f"127.0.0.1:{self._dns.Variables.Port}",
+ 'proxy.config.dns.resolv_conf': 'NULL',
+ 'proxy.config.http.send_100_continue_response': 1,
+ })
+
+ def _setup_client(self, tr: 'TestRun') -> None:
+ '''Set up the client.
+
+ :param tr: The TestRun to which to add the client.
+ '''
+ tr.Setup.CopyAs(self._expect_client)
+ tr.Setup.CopyAs(self._http_utils)
+ tr.Processes.Default.Command = \
+ f'{sys.executable} {self._expect_client} 127.0.0.1 {self._ts.Variables.port} -s example.com'
+ tr.Processes.Default.ReturnCode = 0
+ tr.Processes.Default.StartBefore(self._dns)
+ tr.Processes.Default.StartBefore(self._server)
+ tr.Processes.Default.StartBefore(self._ts)
+ tr.Processes.Default.Streams.stdout += Testers.ContainsExpression(
+ 'HTTP/1.1 100', 'Verify the 100 Continue response was received.')
+ tr.Processes.Default.Streams.stdout += Testers.ContainsExpression(
+ 'HTTP/1.1 200', 'Verify the 200 OK response was received.')
+
+
+Test.Summary = 'Verify Expect: 100-Continue handling.'
+ExpectTest()
diff --git a/tests/gold_tests/post/http_utils.py b/tests/gold_tests/post/http_utils.py
new file mode 100644
index 00000000000..e1ad4e77fed
--- /dev/null
+++ b/tests/gold_tests/post/http_utils.py
@@ -0,0 +1,93 @@
+#!/usr/bin/env python3
+"""Common logic between the ad hoc client and server."""
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import socket
+
+
+def wait_for_headers_complete(sock: socket.socket) -> bytes:
+ """Wait for the headers to be complete.
+
+ :param sock: The socket to read from.
+ :returns: The bytes read off the socket.
+ """
+ headers = b""
+ while True:
+ data = sock.recv(1024)
+ if not data:
+ print("Socket closed.")
+ break
+ print(f'Received:\n{data}')
+ headers += data
+ if b"\r\n\r\n" in headers:
+ break
+ return headers
+
+
+def determine_outstanding_bytes_to_read(read_bytes: bytes) -> int:
+ """Determine how many more bytes to read from the headers.
+
+ This parses the Content-Length header to determine how many more bytes to
+ read.
+
+ :param read_bytes: The bytes read so far.
+ :returns: The number of bytes to read, or -1 if it is chunked encoded.
+ """
+ headers = read_bytes.decode("utf-8").split("\r\n")
+ content_length_value = None
+ for header in headers:
+ if header.lower().startswith("content-length:"):
+ content_length_value = int(header.split(":")[1].strip())
+ elif header.lower().startswith("transfer-encoding: chunked"):
+ return -1
+ if content_length_value is None:
+ raise ValueError("No Content-Length header found.")
+
+ end_of_headers = read_bytes.find(b"\r\n\r\n")
+ if end_of_headers == -1:
+ raise ValueError("No end of headers found.")
+
+ end_of_headers += 4
+ return content_length_value - (len(read_bytes) - end_of_headers)
+
+
+def drain_socket(sock: socket.socket, previously_read_data: bytes, num_bytes_to_drain: int) -> None:
+ """Read the rest of the transaction.
+
+    :param sock: The socket to drain.
+    :param previously_read_data: Bytes already read off the socket.
+    :param num_bytes_to_drain: The number of bytes to drain; if -1, drain until the final zero-length chunk is read.
+ """
+
+ read_data = previously_read_data
+ num_bytes_drained = 0
+ while True:
+ if num_bytes_to_drain > 0:
+ if num_bytes_drained >= num_bytes_to_drain:
+ break
+ elif b'0\r\n\r\n' == read_data[-5:]:
+ print("Found end of chunked data.")
+ break
+
+ data = sock.recv(1024)
+ print(f'Received:\n{data}')
+ if not data:
+ print("Socket closed.")
+ break
+ num_bytes_drained += len(data)
+ read_data += data
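The helpers in `http_utils.py` above reduce to two checks: Content-Length arithmetic over the bytes read so far, and detection of the zero-length chunk that terminates a chunked body. A self-contained sketch of the same logic outside the test harness (names are illustrative):

```python
def outstanding_bytes(read_bytes: bytes) -> int:
    """Mirror determine_outstanding_bytes_to_read: return how many body
    bytes remain to be read, or -1 for a chunked-encoded body."""
    headers = read_bytes.decode("utf-8").split("\r\n")
    content_length = None
    for header in headers:
        if header.lower().startswith("content-length:"):
            content_length = int(header.split(":")[1].strip())
        elif header.lower().startswith("transfer-encoding: chunked"):
            return -1
    if content_length is None:
        raise ValueError("No Content-Length header found.")
    # Body bytes already consumed = total read minus headers (incl. CRLFCRLF).
    end_of_headers = read_bytes.find(b"\r\n\r\n") + 4
    return content_length - (len(read_bytes) - end_of_headers)


def chunked_body_complete(read_data: bytes) -> bool:
    """Mirror drain_socket's termination check: a chunked body ends with
    the zero-length chunk b'0\\r\\n\\r\\n'."""
    return read_data[-5:] == b"0\r\n\r\n"
```

For example, `outstanding_bytes(b"HTTP/1.1 200 OK\r\nContent-Length: 4\r\n\r\nok")` reports that 2 body bytes are still outstanding.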
diff --git a/tests/gold_tests/post/replay/expect-continue.replay.yaml b/tests/gold_tests/post/replay/expect-continue.replay.yaml
new file mode 100644
index 00000000000..e136b5dfda5
--- /dev/null
+++ b/tests/gold_tests/post/replay/expect-continue.replay.yaml
@@ -0,0 +1,42 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements. See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership. The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# This replay file assumes that caching is enabled and
+# proxy.config.http.cache.ignore_server_no_cache is set to 1 (meaning
+# cache-control directives in responses that bypass the cache are ignored).
+meta:
+ version: "1.0"
+
+sessions:
+ - transactions:
+ # The client is actually the python script, not Proxy Verifier.
+ - client-request:
+ method: "GET"
+ version: "1.1"
+ headers:
+ fields:
+ - [uuid, expect]
+ - [Expect, 100-continue]
+
+ server-response:
+ status: 200
+ reason: OK
+ headers:
+ fields:
+ - [Content-Length, 4]
+ - [Connection, keep-alive]
+ - [X-Response, expect]


@@ -0,0 +1,28 @@
From d4dda9b5583d19e2eee268fec59aa487d61fc079 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Valent=C3=ADn=20Guti=C3=A9rrez?= <vgutierrez@wikimedia.org>
Date: Thu, 21 Nov 2024 03:54:03 +0100
Subject: [PATCH] Invoke initgroups() iff we got enough privileges (#11869)
(#11872)
Follow-up of #11855, which rendered ATS unusable as root when spawned via traffic_manager.
---
src/tscore/ink_cap.cc | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/src/tscore/ink_cap.cc b/src/tscore/ink_cap.cc
index 0f0d6f869e2..f464daad3b1 100644
--- a/src/tscore/ink_cap.cc
+++ b/src/tscore/ink_cap.cc
@@ -156,8 +156,10 @@ impersonate(const struct passwd *pwd, ImpersonationLevel level)
#endif
// Always repopulate the supplementary group list for the new user.
- if (initgroups(pwd->pw_name, pwd->pw_gid) != 0) {
- Fatal("switching to user %s, failed to initialize supplementary groups ID %ld", pwd->pw_name, (long)pwd->pw_gid);
+ if (geteuid() == 0) { // check that we have enough rights to call initgroups()
+ if (initgroups(pwd->pw_name, pwd->pw_gid) != 0) {
+ Fatal("switching to user %s, failed to initialize supplementary groups ID %ld", pwd->pw_name, (long)pwd->pw_gid);
+ }
}
switch (level) {

add-loong64-support.patch Normal file

@@ -0,0 +1,39 @@
From d52504bbf8673d1f33f9926933eece1eaf0b31c5 Mon Sep 17 00:00:00 2001
From: Wenlong Zhang <zhangwenlong@loongson.cn>
Date: Fri, 12 Jul 2024 07:23:25 +0000
Subject: [PATCH] add loong64 support for trafficserver
---
include/tscore/ink_queue.h | 2 +-
iocore/eventsystem/UnixEventProcessor.cc | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/tscore/ink_queue.h b/include/tscore/ink_queue.h
index ef79752..a9fb1b5 100644
--- a/include/tscore/ink_queue.h
+++ b/include/tscore/ink_queue.h
@@ -139,7 +139,7 @@ union head_p {
#define SET_FREELIST_POINTER_VERSION(_x, _p, _v) \
(_x).s.pointer = _p; \
(_x).s.version = _v
-#elif defined(__x86_64__) || defined(__ia64__) || defined(__powerpc64__) || defined(__mips64) || defined(__riscv)
+#elif defined(__x86_64__) || defined(__ia64__) || defined(__powerpc64__) || defined(__mips64) || defined(__riscv) || defined(__loongarch64)
/* Layout of FREELIST_POINTER
*
* 0 ~ 47 bits : 48 bits, Virtual Address
diff --git a/iocore/eventsystem/UnixEventProcessor.cc b/iocore/eventsystem/UnixEventProcessor.cc
index 0c123c1..3fb27cb 100644
--- a/iocore/eventsystem/UnixEventProcessor.cc
+++ b/iocore/eventsystem/UnixEventProcessor.cc
@@ -141,7 +141,7 @@ void
ThreadAffinityInitializer::setup_stack_guard(void *stack, int stackguard_pages)
{
#if !(defined(__i386__) || defined(__x86_64__) || defined(__arm__) || defined(__arm64__) || defined(__aarch64__) || \
- defined(__mips__) || defined(__powerpc64__) || defined(__riscv))
+ defined(__mips__) || defined(__powerpc64__) || defined(__riscv) || defined(__loongarch64))
#error Unknown stack growth direction. Determine the stack growth direction of your platform.
// If your stack grows upwards, you need to change this function and the calculation of stack_begin in do_alloc_stack.
#endif
--
2.43.0


@@ -1,7 +1,8 @@
%define _hardened_build 1
%global vendor %{?_vendor:%{_vendor}}%{!?_vendor:openEuler}
Name: trafficserver
Version: 9.2.3
Release: 3
Version: 9.2.5
Release: 5
Summary: Apache Traffic Server, a reverse, forward and transparent HTTP proxy cache
License: Apache-2.0
URL: https://trafficserver.apache.org/
@@ -12,7 +13,16 @@ Patch0002: Fix-log-in-debug-mode.patch
Patch0003: config-layout-openEuler.patch
Patch0004: Modify-storage.config-for-traffic_cache_tool.patch
Patch0005: add-riscv-support.patch
Patch0006: CVE-2024-31309.patch
Patch0006: add-loong64-support.patch
Patch0007: CVE-2024-38479.patch
Patch0008: CVE-2024-50305.patch
Patch0009: CVE-2024-50306.patch
Patch0010: Invoke-initgroups-iff-we-got-enough-privileges.patch
Patch0011: CVE-2024-38311-pre-Do-not-allow-extra-CRs-in-chunks-11936-11942.patch
Patch0012: CVE-2024-38311.patch
Patch0013: CVE-2024-56195.patch
Patch0014: CVE-2024-56202.patch
Patch0015: CVE-2024-53868.patch
BuildRequires: expat-devel hwloc-devel openssl-devel pcre-devel zlib-devel xz-devel
BuildRequires: libcurl-devel ncurses-devel gcc gcc-c++ perl-ExtUtils-MakeMaker
BuildRequires: libcap-devel cmake libunwind-devel automake chrpath
@@ -41,7 +51,7 @@ This package contains some Perl APIs for talking to the ATS management port.
%build
autoreconf
./configure \
--enable-layout=openEuler \
--enable-layout=%{vendor} \
--libdir=%{_libdir}/trafficserver \
--libexecdir=%{_libdir}/trafficserver/plugins \
--enable-experimental-plugins \
@@ -133,6 +143,25 @@ getent passwd ats >/dev/null || useradd -r -u 176 -g ats -d / -s /sbin/nologin -
%{_datadir}/pkgconfig/trafficserver.pc
%changelog
* Mon Apr 07 2025 yaoxin <1024769339@qq.com> - 9.2.5-5
- Fix CVE-2024-53868
* Fri Mar 07 2025 yaoxin <1024769339@qq.com> - 9.2.5-4
- Fix CVE-2024-38311,CVE-2024-56195 and CVE-2024-56202
* Tue Dec 03 2024 yaoxin <yao_xin001@hoperun.com> - 9.2.5-3
- Fix trafficserver service error
* Fri Nov 15 2024 wangkai <13474090681@163.com> - 9.2.5-2
- Fix CVE-2024-38479, CVE-2024-50306, CVE-2024-50305
- Replace openEuler with vendor
* Mon Jul 29 2024 wangkai <13474090681@163.com> - 9.2.5-1
- Update to 9.2.5 for fix CVE-2023-38522, CVE-2024-35161, CVE-2024-35296
* Fri Jul 12 2024 Wenlong Zhang <zhangwenlong@loongson.cn> - 9.2.3-4
- add loong64 support for trafficserver
* Thu May 30 2024 laokz <zhangkai@iscas.ac.cn> - 9.2.3-3
- Update riscv64 patch