From: "Schimpe, Christina" <christina.schimpe@intel.com>
To: gdb-patches@sourceware.org
Subject: [PATCH 10/12] gdb: Implement amd64 linux shadow stack support for inferior calls.
Date: Fri, 20 Dec 2024 20:04:59 +0000
Message-Id: <20241220200501.324191-11-christina.schimpe@intel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20241220200501.324191-1-christina.schimpe@intel.com>
References: <20241220200501.324191-1-christina.schimpe@intel.com>

This patch enables inferior calls to support Intel's Control-Flow
Enforcement Technology (CET), which provides the shadow stack feature
for the x86 architecture.  In line with the restriction of the Linux
kernel, inferior calls are enabled for amd64 only.
---
 gdb/amd64-linux-tdep.c                        | 89 +++++++++++++++++--
 gdb/doc/gdb.texinfo                           | 29 ++++++
 .../gdb.arch/amd64-shadow-stack-cmds.exp      | 55 +++++++++++-
 3 files changed, 164 insertions(+), 9 deletions(-)

diff --git a/gdb/amd64-linux-tdep.c b/gdb/amd64-linux-tdep.c
index 895feac85e8..ef59cfcb7e4 100644
--- a/gdb/amd64-linux-tdep.c
+++ b/gdb/amd64-linux-tdep.c
@@ -1875,6 +1875,82 @@ amd64_linux_remove_non_address_bits_watchpoint (gdbarch *gdbarch,
   return (addr & amd64_linux_lam_untag_mask ());
 }
 
+/* Read the shadow stack pointer register and return its value, if
+   possible.  */
+
+static std::optional<CORE_ADDR>
+amd64_linux_get_shadow_stack_pointer (gdbarch *gdbarch)
+{
+  const i386_gdbarch_tdep *tdep = gdbarch_tdep<i386_gdbarch_tdep> (gdbarch);
+
+  if (tdep == nullptr || tdep->ssp_regnum < 0)
+    return {};
+
+  CORE_ADDR ssp;
+  regcache *regcache = get_thread_regcache (inferior_thread ());
+  if (regcache_raw_read_unsigned (regcache, tdep->ssp_regnum, &ssp)
+      != REG_VALID)
+    return {};
+
+  /* Starting with v6.6, the Linux kernel supports CET shadow stack.
+     Depending on the target, the SSP register can be invalid or null
+     when shadow stack is supported by HW and the Linux kernel but not
+     enabled for the current thread.  */
+  if (ssp == 0x0)
+    return {};
+
+  return ssp;
+}
+
+/* Return the number of bytes required to update the shadow stack pointer
+   by one element.  For x32 the shadow stack elements are still 64-bit
+   aligned.  Thus, gdbarch_addr_bit cannot be used to compute the new
+   stack pointer.  */
+
+static inline int
+amd64_linux_shadow_stack_element_size_aligned (gdbarch *gdbarch)
+{
+  const bfd_arch_info *binfo = gdbarch_bfd_arch_info (gdbarch);
+  return (binfo->bits_per_word / binfo->bits_per_byte);
+}
+
+/* If shadow stack is enabled, push the address NEW_ADDR on the shadow
+   stack and update the shadow stack pointer accordingly.  */
+
+static void
+amd64_linux_shadow_stack_push (gdbarch *gdbarch, CORE_ADDR new_addr)
+{
+  std::optional<CORE_ADDR> ssp
+    = amd64_linux_get_shadow_stack_pointer (gdbarch);
+  if (!ssp.has_value ())
+    return;
+
+  /* The shadow stack grows downwards.  To push addresses on the stack,
+     we need to decrement SSP.  */
+  const int element_size
+    = amd64_linux_shadow_stack_element_size_aligned (gdbarch);
+  const CORE_ADDR new_ssp = *ssp - element_size;
+
+  /* Starting with v6.6, the Linux kernel supports CET shadow stack.
+     Using /proc/PID/smaps we can only check if NEW_SSP points to shadow
+     stack memory.  If it doesn't, we assume the stack is full.  */
+  std::pair<CORE_ADDR, CORE_ADDR> memrange;
+  if (!linux_address_in_shadow_stack_mem_range (new_ssp, &memrange))
+    error (_("No space left on the shadow stack."));
+
+  /* On x86 there can be a shadow stack token at bit 63.  For x32, the
+     address size is only 32 bit.  Thus, we must use ELEMENT_SIZE (and
+     not gdbarch_addr_bit) to determine the width of the address to be
+     written.  */
+  const bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  write_memory_unsigned_integer (new_ssp, element_size, byte_order,
+                                 (ULONGEST) new_addr);
+
+  i386_gdbarch_tdep *tdep = gdbarch_tdep<i386_gdbarch_tdep> (gdbarch);
+  regcache *regcache = get_thread_regcache (inferior_thread ());
+  regcache_raw_write_unsigned (regcache, tdep->ssp_regnum, new_ssp);
+}
+
 static value *
 amd64_linux_dwarf2_prev_ssp (const frame_info_ptr &this_frame,
                              void **this_cache, int regnum)
@@ -1900,14 +1976,9 @@ amd64_linux_dwarf2_prev_ssp (const frame_info_ptr &this_frame,
   if (linux_address_in_shadow_stack_mem_range (ssp, &range))
     {
       /* The shadow stack grows downwards.  To compute the previous
-         shadow stack pointer, we need to increment SSP.
-         For x32 the shadow stack elements are still 64-bit aligned.
-         Thus, we cannot use gdbarch_addr_bit to compute the new stack
-         pointer.  */
-      const bfd_arch_info *binfo = gdbarch_bfd_arch_info (gdbarch);
-      const int bytes_per_word
-        = (binfo->bits_per_word / binfo->bits_per_byte);
-      CORE_ADDR new_ssp = ssp + bytes_per_word;
+         shadow stack pointer, we need to increment SSP.  */
+      CORE_ADDR new_ssp
+        = ssp + amd64_linux_shadow_stack_element_size_aligned (gdbarch);
 
       /* If NEW_SSP points to the end of or before (<=) the current
          shadow stack memory range we consider NEW_SSP as valid (but
@@ -1995,6 +2066,8 @@ amd64_linux_init_abi_common(struct gdbarch_info info, struct gdbarch *gdbarch,
   set_gdbarch_remove_non_address_bits_watchpoint (gdbarch,
     amd64_linux_remove_non_address_bits_watchpoint);
 
+  set_gdbarch_shadow_stack_push (gdbarch, amd64_linux_shadow_stack_push);
+
   dwarf2_frame_set_init_reg (gdbarch, amd64_init_reg);
 }
 
diff --git a/gdb/doc/gdb.texinfo b/gdb/doc/gdb.texinfo
index c6c6fcaa17f..4bed63cb0a1 100644
--- a/gdb/doc/gdb.texinfo
+++ b/gdb/doc/gdb.texinfo
@@ -26836,6 +26836,35 @@ registers
 @end itemize
 
+@subsubsection Intel @dfn{Control-flow Enforcement Technology} (CET)
+@cindex Intel Control-flow Enforcement Technology (CET)
+
+Control-flow Enforcement Technology (CET) provides two capabilities to
+defend against ``Return-oriented Programming'' and ``call/jmp-oriented
+programming'' style control-flow attacks:
+
+@itemize @bullet
+@item Shadow Stack:
+A shadow stack is a second stack for a program.  It holds the return
+addresses pushed by the call instruction.  The @code{RET} instruction
+pops the return address from both the call stack and the shadow stack.
+If the return addresses from the two stacks do not match, the processor
+signals a control protection exception.
+@item Indirect Branch Tracking (IBT):
+When IBT is enabled, the CPU implements a state machine that tracks
+indirect @code{JMP} and @code{CALL} instructions.  The state machine can
+be either IDLE or WAIT_FOR_ENDBRANCH.  In WAIT_FOR_ENDBRANCH state the
+next instruction in the program stream must be an @code{ENDBR}
+instruction, otherwise the processor signals a control protection
+exception.
+@end itemize
+
+Impact on the @code{call} and @code{print} commands:
+Inferior calls in @value{GDBN} reset the current PC to the beginning of
+the function that is called.
+No call instruction is executed, but the @code{RET} instruction is.  To
+avoid a control protection exception due to the missing return address
+on the shadow stack, @value{GDBN} pushes the new return address to the
+shadow stack and updates the shadow stack pointer.
+
 @node Alpha
 @subsection Alpha
 
diff --git a/gdb/testsuite/gdb.arch/amd64-shadow-stack-cmds.exp b/gdb/testsuite/gdb.arch/amd64-shadow-stack-cmds.exp
index 17f32ce3964..df654f9db5d 100644
--- a/gdb/testsuite/gdb.arch/amd64-shadow-stack-cmds.exp
+++ b/gdb/testsuite/gdb.arch/amd64-shadow-stack-cmds.exp
@@ -13,12 +13,29 @@
 # You should have received a copy of the GNU General Public License
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.
 
-# Test shadow stack enabling for frame level update and the return command.
+# Test shadow stack enabling for frame level update, the return and the
+# call command.
+# As potential CET violations often only occur after resuming normal
+# execution, test normal program continuation after each return or call
+# command.
 
 require allow_ssp_tests
 
 standard_testfile amd64-shadow-stack.c
 
+proc restart_and_run_infcall_call2 {} {
+    global binfile
+    clean_restart ${binfile}
+    if { ![runto_main] } {
+        return -1
+    }
+
+    set inside_infcall_str "The program being debugged stopped while in a function called from GDB"
+    gdb_breakpoint [ gdb_get_line_number "break call2" ]
+    gdb_continue_to_breakpoint "break call2" ".*break call2.*"
+    gdb_test "call (int) call2()" \
+        "Breakpoint \[0-9\]*, call2.*$inside_infcall_str.*"
+}
+
 save_vars { ::env(GLIBC_TUNABLES) } {
     append_environment GLIBC_TUNABLES "glibc.cpu.hwcaps" "SHSTK"
@@ -33,6 +50,42 @@ save_vars { ::env(GLIBC_TUNABLES) } {
         return -1
     }
 
+    with_test_prefix "test inferior call and continue" {
+        gdb_breakpoint [ gdb_get_line_number "break call1" ]
+        gdb_continue_to_breakpoint "break call1" ".*break call1.*"
+
+        gdb_test "call (int) call2()" "= 42"
+
+        gdb_continue_to_end
+    }
+
+    with_test_prefix "test return inside an inferior call" {
+        restart_and_run_infcall_call2
+
+        gdb_test "return" "\#0.*call2.*" \
+            "Test shadow stack return inside an inferior call" \
+            "Make.*return now\\? \\(y or n\\) " "y"
+
+        gdb_continue_to_end
+    }
+
+    with_test_prefix "test return 'above' an inferior call" {
+        restart_and_run_infcall_call2
+
+        gdb_test "frame 2" "call2 ().*" "move to frame 'above' inferior call"
+
+        gdb_test "return" "\#0.*call1.*" \
+            "Test shadow stack return 'above' an inferior call" \
+            "Make.*return now\\? \\(y or n\\) " "y"
+
+        gdb_continue_to_end
+    }
+
+    clean_restart ${binfile}
+    if { ![runto_main] } {
+        return -1
+    }
+
     set call1_line [ gdb_get_line_number "break call1" ]
     set call2_line [ gdb_get_line_number "break call2" ]
-- 
2.34.1