Date: Tue, 02 Aug 2005 15:45:00 -0000
From: Daniel Jacobowitz
To: Jie Zhang
Cc: gdb@sources.redhat.com
Subject: Re: [RFC] Reinsert the single stepping breakpoint after being hopped over
Message-ID: <20050802154456.GA5342@nevyn.them.org>
In-Reply-To: <6f48278f05080208114fb35462@mail.gmail.com>

On Tue, Aug 02, 2005 at 11:11:45PM +0800, Jie Zhang wrote:
> Hi,
>
> I'm working on a new port of GDB for Blackfin. It's usually used for
> remote debugging with gdbserver and software single stepping. It has
> many FAILs for schedlock.exp. I traced these FAILs and, I think, found
> a bug in GDB.
>
> Assume there are two threads in the program and we are stepping one of
> them. (I use "GDB" to refer to the whole of GDB and gdbserver.)
>
> 1. GDB inserts a breakpoint for stepping and resumes both threads.
> 2. Both threads hit this breakpoint.
> 3. The first stopped thread GDB looks at (for some reason) is not the
>    thread we are stepping. GDB makes it hop over the stepping breakpoint.
> 4.
> GDB resumes both threads by
>
>     resume (1, TARGET_SIGNAL_0);
>
> So the thread we are stepping will do a new step, and the previous
> stop will never be noticed by GDB. (It's caught by gdbserver and
> saved as a pending status. However, when the thread stops again for
> the new step, the previous stepping breakpoint has been removed, so
> check_removed_breakpoint () will clear status_pending_p and the
> previous stop is lost.) GDB will never get a chance to check whether
> the step has left the stepping range. The step command will not
> return.

Yes... I fixed this once already, but I failed to consider the case
where both threads have hit the singlestep breakpoint, rather than
another thread hitting the breakpoint before the original thread has
had a chance to step. I've fixed the version you encountered before,
but it appears I never submitted the patch. I'm not sure if it still
applies, but could you give this a try?

> To fix it, GDB should put the previous stepping breakpoint back, not
> start a new step, when resuming both threads. The following patch
> tries to fix this. It adds a new field "singlestep_breakpoint_addr"
> to "struct thread_info" to remember the last single stepping
> breakpoint address, and passes 2 as the second argument to
> SOFTWARE_SINGLE_STEP () to tell the target's software single stepping
> function to just reinsert the previous single stepping breakpoint.

Interesting approach. I don't like the implementation - I'd rather not
extend context_switch - but the concept may be better than mine. Let
me think about this for a little.
-- 
Daniel Jacobowitz
CodeSourcery, LLC

Index: infrun.c
===================================================================
RCS file: /big/fsf/rsync/src-cvs/src/gdb/infrun.c,v
retrieving revision 1.137
diff -u -p -r1.137 infrun.c
--- infrun.c	16 Feb 2004 20:49:51 -0000	1.137
+++ infrun.c	6 Mar 2004 04:23:06 -0000
@@ -479,6 +479,9 @@ static int singlestep_breakpoints_insert
 /* The thread we inserted single-step breakpoints for.  */
 static ptid_t singlestep_ptid;
 
+/* PC when we started this single-step.  */
+static CORE_ADDR singlestep_pc;
+
 /* If another thread hit the singlestep breakpoint, we save the original
    thread here so that we can resume single-stepping it later.  */
 static ptid_t saved_singlestep_ptid;
@@ -570,6 +573,7 @@ resume (int step, enum target_signal sig
 	 `wait_for_inferior' */
       singlestep_breakpoints_inserted_p = 1;
       singlestep_ptid = inferior_ptid;
+      singlestep_pc = read_pc ();
     }
 
   /* Handle any optimized stores to the inferior NOW...  */
@@ -1201,6 +1205,7 @@ context_switch (struct execution_control
 			     &ecs->current_line, &ecs->current_symtab,
 			     &step_sp);
     }
   inferior_ptid = ecs->ptid;
+  flush_cached_frames ();
 }
 
 /* Wrapper for PC_IN_SIGTRAMP that takes care of the need to find the
@@ -1803,7 +1808,10 @@ handle_inferior_event (struct execution_
 	}
       else if (SOFTWARE_SINGLE_STEP_P () && singlestep_breakpoints_inserted_p)
 	{
+	  gdb_assert (ptid_equal (inferior_ptid, singlestep_ptid));
+
 	  ecs->random_signal = 0;
+
 	  /* The call to in_thread_list is necessary because PTIDs sometimes
 	     change when we go from single-threaded to multi-threaded.
	     If the singlestep_ptid is still in the list, assume that it is
@@ -1811,9 +1819,32 @@ handle_inferior_event (struct execution_
 	  if (!ptid_equal (singlestep_ptid, ecs->ptid)
 	      && in_thread_list (singlestep_ptid))
 	    {
-	      thread_hop_needed = 1;
-	      stepping_past_singlestep_breakpoint = 1;
-	      saved_singlestep_ptid = singlestep_ptid;
+	      /* If the PC of the thread we were trying to single-step
+		 has changed, discard this event (which we were going
+		 to ignore anyway), and pretend we saw that thread
+		 trap.  This runs a risk of losing signal information
+		 for singlestep_ptid, but prevents us continuously
+		 moving the single-step breakpoint.  This situation
+		 means that the thread has trapped or been signalled,
+		 but the event has not been reported to GDB yet.
+		 Really we should arrange to report all events, or to
+		 re-poll the remote looking for this particular
+		 thread (i.e. temporarily enable schedlock).  */
+	      if (read_pc_pid (singlestep_ptid) != singlestep_pc)
+		{
+		  /* The current context still belongs to
+		     singlestep_ptid.  Don't swap here, since that's
+		     the context we want to use.  Just fudge our
+		     state and continue.  */
+		  ecs->ptid = singlestep_ptid;
+		  stop_pc = read_pc_pid (ecs->ptid);
+		}
+	      else
+		{
+		  thread_hop_needed = 1;
+		  stepping_past_singlestep_breakpoint = 1;
+		  saved_singlestep_ptid = singlestep_ptid;
+		}
 	    }
 	}
 
@@ -1942,8 +1973,6 @@ handle_inferior_event (struct execution_
 	  if (context_hook)
 	    context_hook (pid_to_thread_id (ecs->ptid));
-
-	  flush_cached_frames ();
 	}
 
       if (SOFTWARE_SINGLE_STEP_P () && singlestep_breakpoints_inserted_p)