Message-ID: <490118CB.5000500@vmware.com>
Date: Fri, 24 Oct 2008 00:43:00 -0000
From: Michael Snyder
To: Pedro Alves
CC: "gdb-patches@sourceware.org", teawater
Subject: Re: [reverse/record] adjust_pc_after_break in reverse execution mode?
References: <200810180210.16346.pedro@codesourcery.com> <200810200109.55661.pedro@codesourcery.com> <49010833.4070400@vmware.com> <200810240045.52818.pedro@codesourcery.com>
In-Reply-To: <200810240045.52818.pedro@codesourcery.com>

Pedro Alves wrote:
> On Friday 24 October 2008 00:26:43, Michael Snyder wrote:
>> Hi Pedro,
>>
>> I duplicated your test case, and found that I could
>> reproduce the behavior that you show below, but only
>> as long as the branch did not contain your
>> "adjust_pc_after_break" patch.
>>
>> Once I added that patch to the branch, this behavior
>> seemed to go away.
>>
>> If I look carefully at what you did below, it seems that
>> the forward-replay problem only shows up immediately after
>> the reverse-replay problem manifests. And my experiments
>> reflect the same thing.
>>
>> The branch is now patched. Could you spare a moment to
>> play with it, and see if you can make it break again?
>
> I've done so a bit this morning, and came to a similar
> conclusion, although I noticed that Hui's change to set stop_pc on
> TARGET_WAITKIND_NO_HISTORY was also required. I was hoping
> to find time to play a little bit more, but since you're on to it...
>
> I think the issue here is that when proceeding (continuing) from B1
> below,
>
> B1: PC --> 0x80000001 INSN1
> B2:        0x80000002 INSN2
>
> GDB will always do a single-step to get over B1. Then, the record
> target replays INSN1, and then notices that there's a breakpoint
> at 0x80000002. Remember that GDB told the target to single-step (over
> a breakpoint), and to do so, removed all breakpoints from
> the inferior. Hence, when adjust_pc_after_break checks whether there's
> a breakpoint inserted at `0x80000002 - 1', it finds there isn't one
> (no breakpoint is inserted while doing the single-step-over-breakpoints
> operation).

Yes, I was reaching the same conclusion.

> In sum, it appears that decr_pc_after_break doesn't matter when you have
> contiguous breakpoints, as long as you get from B1's address to B2's
> address by single-stepping. All is good then, it appears!

I agree; at least, that is the conclusion I am leaning toward.

> (*) BTW, it seems that TARGET_WAITKIND_NO_HISTORY overrides the
> last event the target would report? Shouldn't the last event in
> history be reported normally, and only *on the next* resume we'd
> get a TARGET_WAITKIND_NO_HISTORY? I was wondering whether you might
> lose a possibly interesting event, just because it happened to be on
> the edge of the history.
Yes, it seems that if there is a breakpoint at the very last (or first) instruction in the history, GDB will report "no history" rather than "breakpoint". I'm not *terribly* happy about that, but it's also not the worst thing that could happen. Maybe we can get around to looking at it once we feel that everything more urgent has been handled.
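For anyone following the thread, here is a toy sketch of the check Pedro describes. This is not GDB's actual code; the names, the breakpoint table, and the x86-style DECR_PC_AFTER_BREAK value of 1 are all made up for illustration. The point is just the conditional: the PC is only rewound when a breakpoint is actually inserted at the adjusted address, so when all breakpoints have been removed for a single-step-over-breakpoint, the stop PC is left alone.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical constant: on x86, the trap from a software breakpoint
   leaves the PC one byte past the breakpoint address.  */
#define DECR_PC_AFTER_BREAK 1

typedef unsigned long CORE_ADDR;

/* Toy table of currently-inserted breakpoint addresses (stand-in for
   GDB's real breakpoint bookkeeping).  */
static CORE_ADDR inserted[8];
static size_t n_inserted;

static bool
breakpoint_inserted_here (CORE_ADDR pc)
{
  for (size_t i = 0; i < n_inserted; i++)
    if (inserted[i] == pc)
      return true;
  return false;
}

/* Sketch of the adjustment: rewind the stop PC only if a breakpoint is
   inserted at PC - DECR_PC_AFTER_BREAK.  During a single-step over a
   breakpoint, all breakpoints are removed, so the check fails and the
   PC is left untouched -- which is why contiguous breakpoints behave
   correctly in the scenario above.  */
static CORE_ADDR
adjust_pc_after_break (CORE_ADDR stop_pc)
{
  CORE_ADDR adjusted = stop_pc - DECR_PC_AFTER_BREAK;
  if (breakpoint_inserted_here (adjusted))
    return adjusted;
  return stop_pc;
}
```

With B2 inserted at 0x80000002, a trap reporting 0x80000003 is rewound to 0x80000002; with all breakpoints removed (the single-step case), a stop at 0x80000002 is reported as-is.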