From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 22 Apr 2020 10:37:23 +0100
From: Andrew Burgess <andrew.burgess@embecosm.com>
To: Luis Machado
Cc: gdb-patches@sourceware.org, tromey@adacore.com
Subject: Re: [PATCH] Fix inline frame unwinding breakage
Message-ID: <20200422093723.GA3522@embecosm.com>
References: <20200414213137.24015-1-luis.machado@linaro.org>
 <20200414213836.24370-1-luis.machado@linaro.org>
In-Reply-To: <20200414213836.24370-1-luis.machado@linaro.org>
List-Id: Gdb-patches mailing list

* Luis Machado via Gdb-patches [2020-04-14 18:38:36 -0300]:

> *** re-sending due to the poor choice of characters for the backtrace
> annotations.  Git swallowed parts of it.
>
> There has been some breakage for aarch64-linux, arm-linux and
> s390-linux in terms of inline frame unwinding.
> There may be other targets, but these are the ones I'm aware of.
>
> The following testcases started to show numerous failures and trigger
> internal errors in GDB after commit
> 1009d92fc621bc4d017029b90a5bfab16e17fde5, "Find tailcall frames before
> inline frames":
>
> gdb.opt/inline-break.exp
> gdb.opt/inline-cmds.exp
> gdb.python/py-frame-inline.exp
> gdb.reverse/insn-reverse.exp
>
> The internal errors were of this kind:
>
> binutils-gdb/gdb/frame.c:579: internal-error: frame_id get_frame_id(frame_info*): Assertion `fi->level == 0' failed.

I have also started seeing this assert on RISC-V, and your patch
resolves the issue for me, so I'm keen to see this merged.

I took a look through and it all looks good to me - is there anything
holding this back from being merged?

Thanks,
Andrew

> After a lengthy investigation to try and find the cause of these
> assertions, it seems we're dealing with some fragile, poorly documented
> code for handling inline frames, and we are attempting to unwind from
> this fragile section of code.
>
> Before commit 1009d92fc621bc4d017029b90a5bfab16e17fde5, the tailcall
> sniffer was invoked from dwarf2_frame_prev_register.  By the time we
> invoke the dwarf2_frame_prev_register function, we've probably already
> calculated the frame id (via compute_frame_id).
>
> After said commit, the call to dwarf2_tailcall_sniffer_first was moved
> to dwarf2_frame_cache.  This is very early in the frame creation
> process, and we're still calculating the frame ID (so compute_frame_id
> is in the call stack).
>
> This would be fine for regular frames, but the above testcases all deal
> with inline frames.
>
> The particularity of inline frames is that their frame IDs depend on
> the previous frame's ID, and the previous frame's ID relies on the
> inline frame's registers.  So it is a bit of a messy situation.
>
> We have comments in various parts of the code warning about some of
> these particularities.
>
> In the case of dwarf2_tailcall_sniffer_first, we attempt to unwind the
> PC, which goes through various functions until we eventually invoke
> frame_unwind_got_register.  This function will eventually attempt to
> create a lazy value for a particular register, and this lazy value will
> require a valid frame ID.  Since the inline frame doesn't have a valid
> frame ID yet (remember, we're still calculating the previous frame's ID
> so we can tell what the inline frame's ID is), we will call
> compute_frame_id for the inline frame (level 0).
>
> We'll eventually hit the assertion above, inside get_frame_id:
>
> --
>   /* If we haven't computed the frame id yet, then it must be that
>      this is the current frame.  Compute it now, and stash the
>      result.  The IDs of other frames are computed as soon as
>      they're created, in order to detect cycles.  See
>      get_prev_frame_if_no_cycle.  */
>   gdb_assert (fi->level == 0);
> --
>
> It seems to me we shouldn't have reached this assertion without having
> the inline frame's ID already calculated.  In fact, we even start
> recursing a bit when we invoke get_prev_frame_always within
> inline_frame_this_id, but a check makes us quit the recursion and
> proceed to compute the ID.
>
> Here's the call stack for context:
>
> #0  get_prev_frame_always_1 (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2109
> RECURSION - #1  0x0000aaaaaae1d098 in get_prev_frame_always (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2124
> #2  0x0000aaaaaae95768 in inline_frame_this_id (this_frame=0xaaaaab85a670, this_cache=0xaaaaab85a688, this_id=0xaaaaab85a6d0)
>     at ../../../repos/binutils-gdb/gdb/inline-frame.c:165
> #3  0x0000aaaaaae1916c in compute_frame_id (fi=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:550
> #4  0x0000aaaaaae19318 in get_frame_id (fi=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:582
> #5  0x0000aaaaaae13480 in value_of_register_lazy (frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/findvar.c:296
> #6  0x0000aaaaaae16c00 in frame_unwind_got_register (frame=0xaaaaab85a730, regnum=30, new_regnum=30) at ../../../repos/binutils-gdb/gdb/frame-unwind.c:268
> #7  0x0000aaaaaad52604 in dwarf2_frame_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=30)
>     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1296
> #8  0x0000aaaaaae1ae68 in frame_unwind_register_value (next_frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/frame.c:1229
> #9  0x0000aaaaaae1b304 in frame_unwind_register_unsigned (next_frame=0xaaaaab85a730, regnum=30) at ../../../repos/binutils-gdb/gdb/frame.c:1320
> #10 0x0000aaaaaab76574 in aarch64_dwarf2_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=32)
>     at ../../../repos/binutils-gdb/gdb/aarch64-tdep.c:1114
> #11 0x0000aaaaaad52724 in dwarf2_frame_prev_register (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, regnum=32)
>     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1316
> #12 0x0000aaaaaae1ae68 in frame_unwind_register_value (next_frame=0xaaaaab85a730, regnum=32) at ../../../repos/binutils-gdb/gdb/frame.c:1229
> #13 0x0000aaaaaae1b304 in frame_unwind_register_unsigned (next_frame=0xaaaaab85a730, regnum=32) at ../../../repos/binutils-gdb/gdb/frame.c:1320
> #14 0x0000aaaaaae16a84 in default_unwind_pc (gdbarch=0xaaaaab81edc0, next_frame=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/frame-unwind.c:223
> #15 0x0000aaaaaae32124 in gdbarch_unwind_pc (gdbarch=0xaaaaab81edc0, next_frame=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/gdbarch.c:3074
> #16 0x0000aaaaaad4f15c in dwarf2_tailcall_sniffer_first (this_frame=0xaaaaab85a730, tailcall_cachep=0xaaaaab85a830, entry_cfa_sp_offsetp=0x0)
>     at ../../../repos/binutils-gdb/gdb/dwarf2/frame-tailcall.c:388
> #17 0x0000aaaaaad520c0 in dwarf2_frame_cache (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748) at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1190
> #18 0x0000aaaaaad52204 in dwarf2_frame_this_id (this_frame=0xaaaaab85a730, this_cache=0xaaaaab85a748, this_id=0xaaaaab85a790)
>     at ../../../repos/binutils-gdb/gdb/dwarf2/frame.c:1218
> #19 0x0000aaaaaae1916c in compute_frame_id (fi=0xaaaaab85a730) at ../../../repos/binutils-gdb/gdb/frame.c:550
> #20 0x0000aaaaaae1c958 in get_prev_frame_if_no_cycle (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:1927
> #21 0x0000aaaaaae1cc44 in get_prev_frame_always_1 (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2006
> FIRST CALL - #22 0x0000aaaaaae1d098 in get_prev_frame_always (this_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:2124
> #23 0x0000aaaaaae18f68 in skip_artificial_frames (frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:495
> #24 0x0000aaaaaae193e8 in get_stack_frame_id (next_frame=0xaaaaab85a670) at ../../../repos/binutils-gdb/gdb/frame.c:596
> #25 0x0000aaaaaae87a54 in process_event_stop_test (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:6857
> #26 0x0000aaaaaae86bdc in handle_signal_stop (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:6381
> #27 0x0000aaaaaae84fd0 in handle_inferior_event (ecs=0xffffffffefc8) at ../../../repos/binutils-gdb/gdb/infrun.c:5578
> #28 0x0000aaaaaae81588 in fetch_inferior_event (client_data=0x0) at ../../../repos/binutils-gdb/gdb/infrun.c:4020
> #29 0x0000aaaaaae5f7fc in inferior_event_handler (event_type=INF_REG_EVENT, client_data=0x0) at ../../../repos/binutils-gdb/gdb/inf-loop.c:43
> #30 0x0000aaaaaae8d768 in infrun_async_inferior_event_handler (data=0x0) at ../../../repos/binutils-gdb/gdb/infrun.c:9377
> #31 0x0000aaaaaabff970 in check_async_event_handlers () at ../../../repos/binutils-gdb/gdb/async-event.c:291
> #32 0x0000aaaaab27cbec in gdb_do_one_event () at ../../../repos/binutils-gdb/gdbsupport/event-loop.cc:194
> #33 0x0000aaaaaaef1894 in start_event_loop () at ../../../repos/binutils-gdb/gdb/main.c:356
> #34 0x0000aaaaaaef1a04 in captured_command_loop () at ../../../repos/binutils-gdb/gdb/main.c:416
> #35 0x0000aaaaaaef3338 in captured_main (data=0xfffffffff1f0) at ../../../repos/binutils-gdb/gdb/main.c:1254
> #36 0x0000aaaaaaef33a0 in gdb_main (args=0xfffffffff1f0) at ../../../repos/binutils-gdb/gdb/main.c:1269
> #37 0x0000aaaaaab6e0dc in main (argc=6, argv=0xfffffffff348) at ../../../repos/binutils-gdb/gdb/gdb.c:32
>
> The following patch addresses this by using a function that unwinds the
> PC from the next (inline) frame directly, as opposed to creating a lazy
> value that is bound to the next frame's ID (still not computed).
>
> I've validated this for aarch64-linux and x86_64-linux by running the
> testsuite.
>
> Tromey, would you mind checking if this suits your problematic core
> file tailcall scenario?
>
> gdb/ChangeLog:
>
> 2020-04-14  Luis Machado
>
> 	* dwarf2/frame-tailcall.c (dwarf2_tailcall_sniffer_first): Use
> 	get_frame_register instead of gdbarch_unwind_pc.
> ---
>  gdb/dwarf2/frame-tailcall.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/gdb/dwarf2/frame-tailcall.c b/gdb/dwarf2/frame-tailcall.c
> index 2d219f13f9..01bb134a5c 100644
> --- a/gdb/dwarf2/frame-tailcall.c
> +++ b/gdb/dwarf2/frame-tailcall.c
> @@ -385,7 +385,9 @@ dwarf2_tailcall_sniffer_first (struct frame_info *this_frame,
>    prev_gdbarch = frame_unwind_arch (this_frame);
>  
>    /* Simulate frame_unwind_pc without setting this_frame->prev_pc.p.  */
> -  prev_pc = gdbarch_unwind_pc (prev_gdbarch, this_frame);
> +  get_frame_register (this_frame, gdbarch_pc_regnum (prev_gdbarch),
> +		      (gdb_byte *) &prev_pc);
> +  prev_pc = gdbarch_addr_bits_remove (prev_gdbarch, prev_pc);
>  
>    /* call_site_find_chain can throw an exception.  */
>    chain = call_site_find_chain (prev_gdbarch, prev_pc, this_pc);
> -- 
> 2.17.1