Date: Sat, 17 Apr 2004 18:13:00 -0000
From: Randolph Chung
To: gdb-patches@sources.redhat.com
Subject: Re: [patch] Fix unwind handling for hppa
Message-ID: <20040417185123.GH17842@tausq.org>
In-Reply-To: <20040417165455.GA14387@nevyn.them.org>
References: <20040417080536.GB17842@tausq.org> <20040417163525.GA3521@nevyn.them.org> <20040417172027.GD17842@tausq.org> <20040417165455.GA14387@nevyn.them.org>

> Beats me.  I guess that might work.  At that point you're more or less
> running the prologue analyzer despite having unwind data; I'm not sure
> how I feel about that.  But it does seem pragmatically useful.

Yeah, actually it can be optimized a bit: hppa_frame_cache is already
doing some prologue parsing, so there's no point in doing it again.
Something like this seems to work just as well.

randolph

2004-04-17  Randolph Chung

	* hppa-tdep.c (hppa_frame_cache): Handle the case when frame
	unwind starts at a pc before the frame is created.
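For readers following the patch below, the core idea can be shown as a
standalone sketch: if the PC being unwound is past the end of the
prologue, the frame exists and the base is recovered from the saved SP
(or by backing out the frame size); if it is still inside the prologue,
the frame has not been created yet and the current SP is already the
entry SP. All names and types here are hypothetical simplifications,
not GDB's actual interfaces.

```c
#include <assert.h>

/* Hypothetical stand-in for GDB's CORE_ADDR.  */
typedef unsigned long core_addr;

/* Decide the frame base.  `pc` is the PC being unwound, `end_pc` the
   address just past the scanned prologue, `save_sp`/`sp_saved` whether
   the unwind entry expects SP to be saved and whether the prologue
   scan actually saw it saved, `entry_sp` the entry SP value that would
   be read back from the save slot.  */
static core_addr
choose_frame_base (core_addr pc, core_addr end_pc,
                   int save_sp, int sp_saved,
                   core_addr this_sp, long frame_size,
                   core_addr entry_sp)
{
  if (pc >= end_pc)
    {
      /* Past the prologue: the frame has been fully created.  */
      if (save_sp && sp_saved)
        return entry_sp;            /* Entry SP was saved; use it.  */
      /* The prologue allocated stack space incrementally; back it
         out.  (HPPA stacks grow upward, so subtract the frame
         size from the current SP.)  */
      return this_sp - frame_size;
    }
  /* Still inside the prologue: the frame has not been created yet,
     so the current SP is the frame base.  */
  return this_sp;
}
```

The point of the patch is exactly the third case: before this change,
unwinding from a PC inside the prologue wrongly assumed the frame
already existed.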
Index: hppa-tdep.c
===================================================================
RCS file: /cvs/src/src/gdb/hppa-tdep.c,v
retrieving revision 1.147
diff -u -p -r1.147 hppa-tdep.c
--- hppa-tdep.c	17 Apr 2004 17:41:10 -0000	1.147
+++ hppa-tdep.c	17 Apr 2004 18:08:34 -0000
@@ -2039,5 +2039,6 @@ hppa_frame_cache (struct frame_info *nex
   CORE_ADDR this_sp;
   long frame_size;
   struct unwind_table_entry *u;
+  CORE_ADDR end_pc;
   int i;
@@ -2085,7 +2086,6 @@ hppa_frame_cache (struct frame_info *nex
   {
     int final_iteration = 0;
     CORE_ADDR pc;
-    CORE_ADDR end_pc;
     int looking_for_sp = u->Save_SP;
     int looking_for_rp = u->Save_RP;
     int fp_loc = -1;
@@ -2207,6 +2207,7 @@ hppa_frame_cache (struct frame_info *nex
 	  if (is_branch (inst))
 	    final_iteration = 1;
 	}
+      end_pc = pc;
     }

   {
@@ -2214,17 +2215,28 @@ hppa_frame_cache (struct frame_info *nex
        the current function (and is thus equivalent to the "saved"
       stack pointer.  */
     CORE_ADDR this_sp = frame_unwind_register_unsigned (next_frame,
							 HPPA_SP_REGNUM);
-    /* FIXME: cagney/2004-02-22: This assumes that the frame has been
-       created.  If it hasn't everything will be out-of-wack.  */
-    if (u->Save_SP && trad_frame_addr_p (cache->saved_regs, HPPA_SP_REGNUM))
-      /* Both we're expecting the SP to be saved and the SP has been
-	 saved.  The entry SP value is saved at this frame's SP
-	 address.  */
-      cache->base = read_memory_integer (this_sp, TARGET_PTR_BIT / 8);
+    if (frame_pc_unwind (next_frame) >= end_pc)
+      {
+	if (u->Save_SP && trad_frame_addr_p (cache->saved_regs, HPPA_SP_REGNUM))
+	  {
+	    /* Both we're expecting the SP to be saved and the SP has
+	       been saved.  The entry SP value is saved at this
+	       frame's SP address.  */
+	    cache->base = read_memory_integer (this_sp, TARGET_PTR_BIT / 8);
+	  }
+	else
+	  {
+	    /* The prologue has been slowly allocating stack space.
+	       Adjust the SP back.  */
+	    cache->base = this_sp - frame_size;
+	  }
+      }
     else
-      /* The prologue has been slowly allocating stack space.  Adjust
-	 the SP back.  */
-      cache->base = this_sp - frame_size;
+      {
+	/* This frame has not yet been created.  */
+	cache->base = this_sp;
+      }
     trad_frame_set_value (cache->saved_regs, HPPA_SP_REGNUM, cache->base);

-- 
Randolph Chung
Debian GNU/Linux Developer, hppa/ia64 ports       http://www.tausq.org/