From acabfdb582330345c05e0500d302e9e99f3eb5e9 Mon Sep 17 00:00:00 2001
From: Tobias Stoeckmann
Date: Fri, 17 Jan 2020 21:28:28 +0100
Subject: [PATCH] sed: handle very long input lines with R (tiny change)

It is possible to trigger an out-of-bounds memory access when using the
sed command R with an input file containing very long lines.

The problem is that the line length of the parsed file is returned as a
size_t by ck_getdelim, but temporarily stored in an int and then
converted back into a size_t.  On systems like amd64, on which this
problem can be triggered, size_t and int have different sizes.

If the input line is longer than 2 GB (which is parseable on amd64 and
other 64-bit systems), the temporarily stored int turns negative.
Converting the negative int back into a size_t yields an excessively
large value, as the conversion sign-extends it with leading 1 bits.

Eventually ck_fwrite is called with this huge size_t, which in turn
leads to an out-of-bounds access on amd64 systems -- after all, the
parsed text was just a bit above 2 GB, not near SIZE_MAX.

You can trigger this issue with GNU sed on OpenBSD like this:

$ dd if=/dev/zero bs=1M count=2049 | tr '\0' e > long.txt
$ sed Rlong.txt /etc/fstab
Segmentation fault (core dumped)

I was unable to trigger the bug on a Linux system with glibc due to a
bug in glibc's fwrite implementation -- it leads to a short write, and
sed correctly treats that as an error.

* sed/execute.c (execute_program) [case 'R']: Declare result to be
of type size_t, not int.
* NEWS (Bug fixes): Mention it.
This addresses https://bugs.gnu.org/39166
---
 sed/execute.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/sed/execute.c b/sed/execute.c
index 8f43f2e..f94b125 100644
--- a/sed/execute.c
+++ b/sed/execute.c
@@ -1518,7 +1518,7 @@ execute_program (struct vector *vec, struct input *input)
           struct append_queue *aq;
           size_t buflen;
           char *text = NULL;
-          int result;
+          size_t result;
 
           result = ck_getdelim (&text, &buflen, buffer_delimiter,
                                 cur_cmd->x.inf->fp);