From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <269ff31a-9aeb-4293-a4d9-df0f16f12e88@palves.net>
Date: Mon, 18 Mar 2024 17:43:18 +0000
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v3] Teach GDB to generate sparse core files (PR corefiles/31494)
Content-Language: en-US
From: Pedro Alves
To: gdb-patches@sourceware.org
Cc: Lancelot Six
References: <20240315182705.4064062-1-pedro@palves.net>
 <1cb2e4f4-f14d-4434-9eb2-b33fdf4bf0bb@palves.net>
In-Reply-To: <1cb2e4f4-f14d-4434-9eb2-b33fdf4bf0bb@palves.net>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 2024-03-18 13:29, Pedro Alves wrote:
> On 2024-03-15 18:27, Pedro Alves wrote:

> +	  /* If we already know we have an all-zero block at the next
> +	     offset, we can skip calling get_all_zero_block_size for
> +	     it again.  */
> +	  if (next_all_zero_block.offset != 0)
> +	    data_offset += next_all_zero_block.offset;

Err, all the effort to pass down the size, only to typo and not use
it...  Sigh.  That last line should be:

	    data_offset += next_all_zero_block.size;

Here's the corrected patch...

From adb681ce583fa640c4fb6883a827f3ab6b28b1c0 Mon Sep 17 00:00:00 2001
From: Pedro Alves
Date: Mon, 18 Mar 2024 13:16:10 +0000
Subject: [PATCH v3] Teach GDB to generate sparse core files (PR
 corefiles/31494)

This commit teaches GDB's gcore command to generate sparse core files
(if supported by the filesystem).

To create a sparse file, all you have to do is skip writing zeros to
the file, lseek'ing ahead over them instead.

The sparse logic is applied when writing the memory sections, as
that's where the bulk of the data and the zeros are.
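
To illustrate the idea outside of GDB (this sketch is not part of the
patch; the file name and sizes are made up), seeking past a range
instead of writing zeros leaves a hole that reads back as zeros but
occupies no disk blocks:

  #include <fcntl.h>
  #include <unistd.h>

  int
  main (void)
  {
    int fd = open ("sparse-demo", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
      return 1;

    write (fd, "abc", 3);

    /* Instead of writing 16MB of zeros, seek past them.  The skipped
       range still reads back as zeros, but no blocks are allocated
       for it on filesystems with hole support.  */
    lseek (fd, 16 * 1024 * 1024, SEEK_CUR);

    write (fd, "xyz", 3);
    close (fd);
    return 0;
  }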
The commit also tweaks gdb.base/bigcore.exp to make it exercise
gdb-generated cores in addition to kernel-generated cores.  We
couldn't do that before, because GDB's gcore on that test's program
would generate a multi-GB non-sparse core (16GB on my system).

After this commit, gdb.base/bigcore.exp generates, when testing with
GDB's gcore, a much smaller core file, roughly in line with what the
kernel produces:

 real sizes:

 $ du --hu testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
 2.2M    testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
 2.0M    testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel

 apparent sizes:

 $ du --hu --apparent-size testsuite/outputs/gdb.base/bigcore/bigcore.corefile.*
 16G     testsuite/outputs/gdb.base/bigcore/bigcore.corefile.gdb
 16G     testsuite/outputs/gdb.base/bigcore/bigcore.corefile.kernel

Time to generate the core also goes down significantly.  On my
machine, I get:

  when writing to an SSD, from 21.0s, down to 8.0s
  when writing to an HDD, from 31.0s, down to 8.5s

The changes to gdb.base/bigcore.exp are smaller than they look at
first sight.  It's basically mostly refactoring -- moving most of the
code to a new procedure which takes as argument who should dump the
core, and then calling the procedure twice.  I purposely did not
modernize any of the refactored code in this patch.
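
Aside, for readers who want to check for sparseness programmatically
rather than with du: a minimal sketch (again not part of the patch;
the output format is made up) is to compare a file's allocated size
against its apparent size:

  #include <sys/stat.h>
  #include <stdio.h>

  int
  main (int argc, char **argv)
  {
    struct stat st;

    if (argc < 2 || stat (argv[1], &st) != 0)
      return 1;

    /* st_blocks is counted in 512-byte units, regardless of the
       filesystem block size.  In a sparse file, the allocated size
       is smaller than the apparent size (st_size).  */
    long long allocated = (long long) st.st_blocks * 512;
    printf ("apparent=%lld allocated=%lld sparse=%s\n",
            (long long) st.st_size, allocated,
            allocated < (long long) st.st_size ? "yes" : "no");
    return 0;
  }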
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=31494
Reviewed-By: Lancelot Six
Reviewed-By: Eli Zaretskii
Change-Id: I2554a6a4a72d8c199ce31f176e0ead0c0c76cff1
---
 gdb/NEWS                           |   4 +
 gdb/doc/gdb.texinfo                |   3 +
 gdb/gcore.c                        | 177 ++++++++++++++-
 gdb/testsuite/gdb.base/bigcore.exp | 238 ++++++++++++++++-------------
 4 files changed, 314 insertions(+), 108 deletions(-)

diff --git a/gdb/NEWS b/gdb/NEWS
index d8ac0bb06a7..d1d25e4c24d 100644
--- a/gdb/NEWS
+++ b/gdb/NEWS
@@ -23,6 +23,10 @@
   disassemble command will now give an error.  Previously the 'b'
   flag would always override the 'r' flag.
 
+gcore
+generate-core-file
+  GDB now generates sparse core files, on systems that support it.
+
 maintenance info line-table
   Add an EPILOGUE-BEGIN column to the output of the command.  It indicates
   if the line is considered the start of the epilgoue, and thus a point at
diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index f093ee269e2..9224829bd93 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -13867,6 +13867,9 @@ Produce a core dump of the inferior process.  The optional argument
 specified, the file name defaults to @file{core.@var{pid}}, where
 @var{pid} is the inferior process ID.
 
+If supported by the filesystem where the core is written to,
+@value{GDBN} generates a sparse core dump file.
+
 Note that this command is implemented only for some systems (as of
 this writing, @sc{gnu}/Linux, FreeBSD, Solaris, and S390).
 
diff --git a/gdb/gcore.c b/gdb/gcore.c
index 7c12aa3a777..23e8066745a 100644
--- a/gdb/gcore.c
+++ b/gdb/gcore.c
@@ -39,10 +39,21 @@
 #include "gdbsupport/byte-vector.h"
 #include "gdbsupport/scope-exit.h"
 
+/* To generate sparse cores, we look at the data to write in chunks of
+   this size when considering whether to skip the write.  Only if we
+   have a full block of this size with all zeros do we skip writing
+   it.  A simpler algorithm that would try to skip all zeros would
+   result in potentially many more write/lseek syscalls, as normal
+   data is typically sprinkled with many small holes of zeros.  Also,
+   it's much more efficient to memcmp a block of data against an
+   all-zero buffer than to check each and every data byte against zero
+   one by one.  */
+#define SPARSE_BLOCK_SIZE 0x1000
+
 /* The largest amount of memory to read from the target at once.  We
    must throttle it to limit the amount of memory used by GDB during
    generate-core-file for programs with large resident data.  */
-#define MAX_COPY_BYTES (1024 * 1024)
+#define MAX_COPY_BYTES (256 * SPARSE_BLOCK_SIZE)
 
 static const char *default_gcore_target (void);
 static enum bfd_architecture default_gcore_arch (void);
@@ -98,7 +109,12 @@
   bfd_set_section_alignment (note_sec, 0);
   bfd_set_section_size (note_sec, note_size);
 
-  /* Now create the memory/load sections.  */
+  /* Now create the memory/load sections.  Note
+     gcore_memory_sections's sparse logic is assuming that we'll
+     always write something afterwards, which we do: just below, we
+     write the note section.  So there's no need for an ftruncate-like
+     call to grow the file to the right size if the last memory
+     sections were zeros and we skipped writing them.  */
   if (gcore_memory_sections (obfd) == 0)
     error (_("gcore: failed to get corefile memory sections from target."));
@@ -567,6 +583,158 @@ objfile_find_memory_regions (struct target_ops *self,
   return 0;
 }
 
+/* Check if we have a block full of zeros at DATA within the [DATA,
+   DATA+SIZE) buffer.  Returns the size of the all-zero block found.
+   Returns at most the minimum between SIZE and SPARSE_BLOCK_SIZE.  */
+
+static size_t
+get_all_zero_block_size (const gdb_byte *data, size_t size)
+{
+  size = std::min (size, (size_t) SPARSE_BLOCK_SIZE);
+
+  /* A memcmp of a whole block is much faster than a simple for loop.
+     This makes a big difference, as with a for loop, this code would
+     dominate the performance and result in doubling the time to
+     generate a core, at the time of writing.  With an optimized
+     memcmp, this doesn't even show up in the perf trace.  */
+  static const gdb_byte all_zero_block[SPARSE_BLOCK_SIZE] = {};
+  if (memcmp (data, all_zero_block, size) == 0)
+    return size;
+  return 0;
+}
+
+/* Basically a named-elements pair, used as return type of
+   find_next_all_zero_block.  */
+
+struct offset_and_size
+{
+  size_t offset;
+  size_t size;
+};
+
+/* Find the next all-zero block at DATA+OFFSET within the [DATA,
+   DATA+SIZE) buffer.  Returns the offset and the size of the
+   all-zero block if found, or zero if not found.  */
+
+static offset_and_size
+find_next_all_zero_block (const gdb_byte *data, size_t offset, size_t size)
+{
+  for (; offset < size; offset += SPARSE_BLOCK_SIZE)
+    {
+      size_t zero_block_size
+        = get_all_zero_block_size (data + offset, size - offset);
+      if (zero_block_size != 0)
+        return {offset, zero_block_size};
+    }
+  return {0, 0};
+}
+
+/* Wrapper around bfd_set_section_contents that avoids writing
+   all-zero blocks to disk, so we create a sparse core file.
+   SKIP_ALIGN is a recursion helper -- if true, we'll skip aligning
+   the file position to SPARSE_BLOCK_SIZE.  */
+
+static bool
+sparse_bfd_set_section_contents (bfd *obfd, asection *osec,
+                                 const gdb_byte *data,
+                                 size_t sec_offset,
+                                 size_t size,
+                                 bool skip_align = false)
+{
+  /* Note, we don't have to have special handling for the case of the
+     last memory region ending with zeros, because our caller always
+     writes out the note section after the memory/load sections.  If
+     it didn't, we'd have to seek+write the last byte to make the
+     file size correct.  (Or add an ftruncate abstraction to bfd and
+     call that.)  */
+
+  if (!skip_align)
+    {
+      /* Align the all-zero block search with SPARSE_BLOCK_SIZE, to
+         better align with filesystem blocks.  If we find we're
+         misaligned, then write/skip the bytes needed to make us
+         aligned.  We do that with (one level) recursion.  */
+
+      /* We need to know the section's file offset on disk.  We can
+         only look at it after the bfd's 'output_has_begun' flag has
+         been set, as bfd hasn't computed the file offsets
+         otherwise.  */
+      if (!obfd->output_has_begun)
+        {
+          gdb_byte dummy = 0;
+
+          /* A write forces BFD to compute the bfd's section file
+             positions.  Zero size works for that too.  */
+          if (!bfd_set_section_contents (obfd, osec, &dummy, 0, 0))
+            return false;
+
+          gdb_assert (obfd->output_has_begun);
+        }
+
+      /* How much we need to write/skip in order to find the next
+         SPARSE_BLOCK_SIZE filepos-aligned block.  */
+      size_t align_remainder
+        = (SPARSE_BLOCK_SIZE
+           - (osec->filepos + sec_offset) % SPARSE_BLOCK_SIZE);
+
+      /* How much we'll actually write in the recursion call.  */
+      size_t align_write_size = std::min (size, align_remainder);
+
+      if (align_write_size != 0)
+        {
+          /* Recurse, skipping the alignment code.  */
+          if (!sparse_bfd_set_section_contents (obfd, osec, data,
+                                                sec_offset,
+                                                align_write_size, true))
+            return false;
+
+          /* Skip over what we've written, and proceed with
+             assumes-aligned logic.  */
+          data += align_write_size;
+          sec_offset += align_write_size;
+          size -= align_write_size;
+        }
+    }
+
+  size_t data_offset = 0;
+  while (data_offset < size)
+    {
+      size_t all_zero_block_size
+        = get_all_zero_block_size (data + data_offset, size - data_offset);
+      if (all_zero_block_size != 0)
+        data_offset += all_zero_block_size;
+      else
+        {
+          /* We have some non-zero data to write to file.  Find the
+             next all-zero block within the data, and only write up
+             to it.  */
+
+          offset_and_size next_all_zero_block
+            = find_next_all_zero_block (data,
+                                        data_offset + SPARSE_BLOCK_SIZE,
+                                        size);
+          size_t next_data_offset = (next_all_zero_block.offset == 0
+                                     ? size
+                                     : next_all_zero_block.offset);
+
+          if (!bfd_set_section_contents (obfd, osec, data + data_offset,
+                                         sec_offset + data_offset,
+                                         next_data_offset - data_offset))
+            return false;
+
+          data_offset = next_data_offset;
+
+          /* If we already know we have an all-zero block at the next
+             offset, we can skip calling get_all_zero_block_size for
+             it again.  */
+          if (next_all_zero_block.offset != 0)
+            data_offset += next_all_zero_block.size;
+        }
+    }
+
+  return true;
+}
+
 static void
 gcore_copy_callback (bfd *obfd, asection *osec)
 {
@@ -599,8 +767,9 @@
           bfd_section_vma (osec)));
       break;
     }
-  if (!bfd_set_section_contents (obfd, osec, memhunk.data (),
-                                 offset, size))
+
+  if (!sparse_bfd_set_section_contents (obfd, osec, memhunk.data (),
+                                        offset, size))
     {
       warning (_("Failed to write corefile contents (%s)."),
                bfd_errmsg (bfd_get_error ()));
diff --git a/gdb/testsuite/gdb.base/bigcore.exp b/gdb/testsuite/gdb.base/bigcore.exp
index 3f9ae48abf2..6c64d402502 100644
--- a/gdb/testsuite/gdb.base/bigcore.exp
+++ b/gdb/testsuite/gdb.base/bigcore.exp
@@ -43,23 +43,6 @@ if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable {deb
     return -1
 }
 
-# Run GDB on the bigcore program up-to where it will dump core.
-
-clean_restart ${binfile}
-gdb_test_no_output "set print sevenbit-strings"
-gdb_test_no_output "set width 0"
-
-# Get the core into the output directory.
-set_inferior_cwd_to_output_dir
-
-if {![runto_main]} {
-    return 0
-}
-set print_core_line [gdb_get_line_number "Dump core"]
-gdb_test "tbreak $print_core_line"
-gdb_test continue ".*print_string.*"
-gdb_test next ".*0 = 0.*"
-
 # Traverse part of bigcore's linked list of memory chunks (forward or
 # backward), saving each chunk's address.
 
@@ -92,92 +75,11 @@
     }
     return $heap
 }
-set next_heap [extract_heap next]
-set prev_heap [extract_heap prev]
-
-# Save the total allocated size within GDB so that we can check
-# the core size later.
-gdb_test_no_output "set \$bytes_allocated = bytes_allocated" "save heap size"
-
-# Now create a core dump
-
-# Rename the core file to "TESTFILE.corefile" rather than just "core",
-# to avoid problems with sys admin types that like to regularly prune
-# all files named "core" from the system.
-
-# Some systems append "core" to the name of the program; others append
-# the name of the program to "core"; still others (like Linux, as of
-# May 2003) create cores named "core.PID".
-
-# Save the process ID.  Some systems dump the core into core.PID.
-set inferior_pid [get_inferior_pid]
-
-# Dump core using SIGABRT
-set oldtimeout $timeout
-set timeout 600
-gdb_test "signal SIGABRT" "Program terminated with signal SIGABRT, .*"
-set timeout $oldtimeout
-
-# Find the corefile.
-set file [find_core_file $inferior_pid]
-if { $file != "" } {
-    remote_exec build "mv $file $corefile"
-} else {
-    untested "can't generate a core file"
-    return 0
-}
-
-# Check that the corefile is plausibly large enough.  We're trying to
-# detect the case where the operating system has truncated the file
-# just before signed wraparound.  TCL, unfortunately, has a similar
-# problem - so use catch.  It can handle the "bad" size but not
-# necessarily the "good" one.  And we must use GDB for the comparison,
-# similarly.
-
-if {[catch {file size $corefile} core_size] == 0} {
-    set core_ok 0
-    gdb_test_multiple "print \$bytes_allocated < $core_size" "check core size" {
-	-re " = 1\r\n$gdb_prompt $" {
-	    pass "check core size"
-	    set core_ok 1
-	}
-	-re " = 0\r\n$gdb_prompt $" {
-	    pass "check core size"
-	    set core_ok 0
-	}
-    }
-} {
-    # Probably failed due to the TCL build having problems with very
-    # large values.  Since GDB uses a 64-bit off_t (when possible) it
-    # shouldn't have this problem.  Assume that things are going to
-    # work.  Without this assumption the test is skiped on systems
-    # (such as i386 GNU/Linux with patched kernel) which do pass.
-    pass "check core size"
-    set core_ok 1
-}
-if {! $core_ok} {
-    untested "check core size (system does not support large corefiles)"
-    return 0
-}
-
-# Now load up that core file
-
-set test "load corefile"
-gdb_test_multiple "core $corefile" "$test" {
-    -re "A program is being debugged already.  Kill it. .y or n. " {
-	send_gdb "y\n"
-	exp_continue
-    }
-    -re "Core was generated by.*$gdb_prompt $" {
-	pass "$test"
-    }
-}
-
-# Finally, re-traverse bigcore's linked list, checking each chunk's
-# address against the executable.  Don't use gdb_test_multiple as want
-# only one pass/fail.  Don't use exp_continue as the regular
-# expression involving $heap needs to be re-evaluated for each new
-# response.
+# Re-traverse bigcore's linked list, checking each chunk's address
+# against the executable.  Don't use gdb_test_multiple as we want only
+# one pass/fail.  Don't use exp_continue as the regular expression
+# involving $heap needs to be re-evaluated for each new response.
 
 proc check_heap { dir heap } {
     global gdb_prompt
@@ -208,5 +110,133 @@
     }
 }
 
-check_heap next $next_heap
-check_heap prev $prev_heap
+# The bulk of the testcase.  DUMPER indicates who is supposed to dump
+# the core.  It can be either "kernel", or "gdb".
+proc test {dumper} {
+    global binfile timeout corefile gdb_prompt
+
+    # Run GDB on the bigcore program up-to where it will dump core.
+
+    clean_restart ${binfile}
+    gdb_test_no_output "set print sevenbit-strings"
+    gdb_test_no_output "set width 0"
+
+    # Get the core into the output directory.
+    set_inferior_cwd_to_output_dir
+
+    if {![runto_main]} {
+	return 0
+    }
+    set print_core_line [gdb_get_line_number "Dump core"]
+    gdb_test "tbreak $print_core_line"
+    gdb_test continue ".*print_string.*"
+    gdb_test next ".*0 = 0.*"
+
+    set next_heap [extract_heap next]
+    set prev_heap [extract_heap prev]
+
+    # Save the total allocated size within GDB so that we can check
+    # the core size later.
+    gdb_test_no_output "set \$bytes_allocated = bytes_allocated" \
+	"save heap size"
+
+    # Now create a core dump.
+
+    if {$dumper == "kernel"} {
+	# Rename the core file to "TESTFILE.corefile.$dumper" rather
+	# than just "core", to avoid problems with sys admin types
+	# that like to regularly prune all files named "core" from
+	# the system.
+
+	# Some systems append "core" to the name of the program;
+	# others append the name of the program to "core"; still
+	# others (like Linux, as of May 2003) create cores named
+	# "core.PID".
+
+	# Save the process ID.  Some systems dump the core into
+	# core.PID.
+	set inferior_pid [get_inferior_pid]
+
+	# Dump core using SIGABRT.
+	set oldtimeout $timeout
+	set timeout 600
+	gdb_test "signal SIGABRT" "Program terminated with signal SIGABRT, .*"
+	set timeout $oldtimeout
+
+	# Find the corefile.
+	set file [find_core_file $inferior_pid]
+	if { $file != "" } {
+	    remote_exec build "mv $file $corefile.$dumper"
+	} else {
+	    untested "can't generate a core file"
+	    return 0
+	}
+    } elseif {$dumper == "gdb"} {
+	gdb_gcore_cmd "$corefile.$dumper" "gcore corefile"
+    } else {
+	error "unhandled dumper: $dumper"
+    }
+
+    # Check that the corefile is plausibly large enough.  We're
+    # trying to detect the case where the operating system has
+    # truncated the file just before signed wraparound.  TCL,
+    # unfortunately, has a similar problem - so use catch.  It can
+    # handle the "bad" size but not necessarily the "good" one.  And
+    # we must use GDB for the comparison, similarly.
+
+    if {[catch {file size $corefile.$dumper} core_size] == 0} {
+	set core_ok 0
+	gdb_test_multiple "print \$bytes_allocated < $core_size" \
+	    "check core size" {
+		-re " = 1\r\n$gdb_prompt $" {
+		    pass "check core size"
+		    set core_ok 1
+		}
+		-re " = 0\r\n$gdb_prompt $" {
+		    pass "check core size"
+		    set core_ok 0
+		}
+	    }
+    } {
+	# Probably failed due to the TCL build having problems with
+	# very large values.  Since GDB uses a 64-bit off_t (when
+	# possible) it shouldn't have this problem.  Assume that
+	# things are going to work.  Without this assumption the test
+	# is skipped on systems (such as i386 GNU/Linux with patched
+	# kernel) which do pass.
+	pass "check core size"
+	set core_ok 1
+    }
+    if {! $core_ok} {
+	untested "check core size (system does not support large corefiles)"
+	return 0
+    }
+
+    # Now load up that core file.
+
+    set test "load corefile"
+    gdb_test_multiple "core $corefile.$dumper" "$test" {
+	-re "A program is being debugged already.  Kill it. .y or n. " {
+	    send_gdb "y\n"
+	    exp_continue
+	}
+	-re "Core was generated by.*$gdb_prompt $" {
+	    pass "$test"
+	}
+    }
+
+    # Finally, re-traverse bigcore's linked list, checking each
+    # chunk's address against the executable.
+
+    check_heap next $next_heap
+    check_heap prev $prev_heap
+}
+
+foreach_with_prefix dumper {kernel gdb} {
+    # GDB's gcore is too slow when testing with the extended-gdbserver
+    # board, since it requires reading all the inferior memory.
+    if {$dumper == "gdb" && [target_info gdb_protocol] != ""} {
+	continue
+    }
+    test $dumper
+}

base-commit: d0eb2625bff1387744304bdc70ec0a85a20b8a3f
-- 
2.43.2
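
P.S.: For readers skimming the algorithm, here is a condensed,
standalone restatement of the scanning strategy (illustration only --
the names are made up, and it mirrors, rather than reuses, the
patch's get_all_zero_block_size/find_next_all_zero_block logic):

  #include <stdio.h>
  #include <string.h>

  #define BLOCK 0x1000   /* same granularity as SPARSE_BLOCK_SIZE */

  /* Size of the all-zero block at P, or 0 if P doesn't start with
     one.  Looks at no more than BLOCK bytes, using memcmp against a
     static zero buffer rather than a per-byte loop.  */
  static size_t
  zero_block_size (const unsigned char *p, size_t left)
  {
    static const unsigned char zeros[BLOCK];
    size_t n = left < BLOCK ? left : BLOCK;
    return memcmp (p, zeros, n) == 0 ? n : 0;
  }

  int
  main (void)
  {
    static unsigned char buf[8 * BLOCK];
    buf[0] = 1;             /* block 0 has data */
    buf[5 * BLOCK] = 1;     /* block 5 has data */

    /* Walk the buffer in BLOCK steps: all-zero blocks would become
       lseek'ed holes in the core, anything else a write.  */
    for (size_t off = 0; off < sizeof buf; off += BLOCK)
      printf ("block %zu: %s\n", off / BLOCK,
              zero_block_size (buf + off, sizeof buf - off) != 0
              ? "skip (hole)" : "write");
    return 0;
  }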