Date: Sat, 11 Dec 2004 21:54:00 -0000
Message-Id: <200412111935.iBBJZbrR008012@elgar.sibelius.xs4all.nl>
From: Mark Kettenis
To: drow@false.org
Cc: eliz@gnu.org, jjohnstn@redhat.com, gdb-patches@sources.redhat.com
In-reply-to: <20041211180236.GA16131@nevyn.them.org> (message from Daniel
	Jacobowitz on Sat, 11 Dec 2004 13:02:36 -0500)
Subject: Re: [RFA]: Modified Watchthreads Patch
References: <41B8E16D.6070505@redhat.com> <20041210191015.GA18430@nevyn.them.org>
	<41BA00E1.20900@redhat.com> <20041210203729.GA7830@nevyn.them.org>
	<41BA168E.7030507@redhat.com> <41BA36C5.2030304@redhat.com>
	<01c4df75$Blat.v2.2.2$1a340140@zahav.net.il>
	<20041211165237.GC13865@nevyn.them.org>
	<01c4dfaa$Blat.v2.2.2$47bcb3c0@zahav.net.il>
	<20041211180236.GA16131@nevyn.them.org>
X-SW-Source: 2004-12/txt/msg00319.txt.bz2

Date: Sat, 11 Dec 2004 13:02:36 -0500
From: Daniel Jacobowitz

On Sat, Dec 11, 2004 at 07:52:08PM +0200, Eli Zaretskii wrote:
> > Date: Sat, 11 Dec 2004 11:52:37 -0500
> > From: Daniel Jacobowitz
> > Cc: Jeff Johnston , gdb-patches@sources.redhat.com
> >
> > - The GDB core needs to continue to support watchpoints (hardware
> >   breakpoints; et cetera) triggering in an unexpected thread.
>
> Agreed.
>
> > Rationale: some targets won't support any other way.  For instance
> > page protection based watchpoints on GNU/Linux would probably apply
> > to all threads.
>
> Another, even better (IMHO) rationale: one important reason for using
> watchpoints is to find what code accesses some specific data; when we
> use watchpoints for this, we more often than not do not know what
> thread will access the data.

That's just a watchpoint without an explicit thread specified.  That's
the default when you say "watch foo".

> Yes, but I think (and have said it several times in the past) that
> this difference in handling bp_hardware_watchpoint and
> bp_read_watchpoint/bp_access_watchpoint is a Bad Thing and we should
> rewrite it so that all the types of hardware-assisted watchpoints are
> handled in the same way.  The current code that handles
> bp_hardware_watchpoint is simply the same code that handled software
> bp_watchpoint, which to me doesn't make sense, because at least on
> x86, the hardware tells us exactly what address was written to.
>
> (However, note that some platforms that cannot implement
> target_stopped_data_address actually take advantage of the different
> treatment of bp_hardware_watchpoint to implement `watch' even though
> `rwatch' and `awatch' are not implemented.  If we decide to make the
> treatment of all hardware-assisted watchpoints similar, we should be
> careful not to break those platforms.)

Yes, page-protection assisted watchpoints in their most primitive form
might cause spurious events when you access memory that is in the same
page as the variable being watched.  GDB will have to check whether
the variable actually changed.  OK.  We can fix this.
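That value check could look something like this minimal sketch (all
names here are made up for illustration; this is not GDB's actual code):

```c
#include <stddef.h>
#include <string.h>

/* Illustrative sketch only: with page-protection based watchpoints, a
   fault anywhere in the watched page may be spurious, so the debugger
   compares a saved copy of the watched value against current memory.  */

/* Return 1 if the watched region really changed (a genuine hit), or 0
   if the fault was spurious (some other object in the same page was
   written).  */
static int
watched_value_changed (const void *saved, const void *current, size_t len)
{
  return memcmp (saved, current, len) != 0;
}
```

A debugger using this scheme would silently resume the inferior on a
spurious hit instead of reporting a watchpoint stop.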
But for the reason given above `watch' always will be a bit different
than `rwatch' and `awatch'.

> > Assuming that the program didn't stop for any other reason, and
> > that hardware watchpoints trigger after the write is executed
>
> (I note in parens that on x86, watchpoints trigger _before_ the write
> is executed.  Not sure if it matters here.)

This is in general true for page-protection assisted watchpoints too.
At least it is for HP-UX.  You get a page fault which is reported as
SIGBUS or SIGSEGV or something like that.  But I guess that for real
hardware watchpoints it is possible for them to trigger after the
write is executed.  And for page-protection assisted watchpoints that
are implemented fully in the kernel this is possible behaviour.  I
think this is what happens on Solaris, but I'm not completely sure.

How do we get the "new value" for a watchpoint, then?  Do we step over
the instruction?

In general, yes.  This is what HAVE_STEPPABLE_WATCHPOINT,
HAVE_NONSTEPPABLE_WATCHPOINT and HAVE_CONTINUABLE_WATCHPOINT deal
with.

> > or come up with a more useful behavior for software and hardware
> > watchpoints than the one I've thought of.  Can you think of one?
>
> I think there's nothing wrong with the model you suggested, it's just
> that our handling of bp_hardware_watchpoint is wrong.  Assuming we
> change it along the lines I suggested above, do you see any further
> problems with extending watchpoints to threaded programs?

This is "extending watchpoints to specific threads" in my
understanding, not to "threaded programs" - obviously a useful
feature, but I think a different one.  Conceptually, our watchpoints
already work for threaded programs.  In practice, Jeff found that on
GNU/Linux they didn't work, and that's what he's fixing, not the
"specific threads" enhancement.  Does that make sense?

It does to me.
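The step-over case can be shown with a toy model (entirely made-up
names, no GDB internals): when the trap is delivered before the store
completes, as with x86 debug registers, the debugger must single-step
the trapped instruction and only then read the new value.

```c
/* Toy model, purely illustrative: a target where the watchpoint traps
   *before* the write executes.  To obtain the new value, single-step
   the trapped store first, then read memory.  */

struct toy_target
{
  int mem;            /* the watched location */
  int pending_store;  /* value the trapped instruction will write */
};

/* Complete the trapped instruction by single-stepping it.  */
static void
toy_single_step (struct toy_target *t)
{
  t->mem = t->pending_store;
}

/* What a HAVE_STEPPABLE_WATCHPOINT-style debugger would do before
   reporting the watchpoint's new value.  */
static int
toy_new_value (struct toy_target *t)
{
  toy_single_step (t);
  return t->mem;
}
```

On targets where the trap arrives after the write instead, no such
step is needed and the new value can be read immediately.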
If there are problems they're most likely in platform-dependent code
(and i386-nat.c *is* platform-dependent code even if it's used on a
number of different platforms).

> One aspect where a kernel can help us with multi-threaded programs is
> if it tells us not only that a watchpoint has been hit, but also what
> thread hit that watchpoint.  Are there any kernel-level features that
> can be used to find that out?

Right now GDB assumes that this is reflected in inferior_ptid.  At
least for GNU/Linux, this is a reasonable assumption.  The watchpoint
is reported as an event (SIGTRAP); an event has to come from a
specific thread, and it will come from the thread which generated the
trap/fault/etc.

I think this is a valid assumption.  Hitting a "hardware" watchpoint
will always trigger some sort of trap (be it a memory fault or some
special hardware interrupt).  A Unix-like kernel should know what
"thread" was active at that point, and be able to report that.

Mark
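The per-thread event reporting described above can be sketched roughly
as follows (Linux-flavoured and illustrative only, not GDB code): the
kernel delivers the trap as a wait event for one specific task, so the
pid returned by waitpid() already identifies the thread that trapped.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Illustrative only: whichever task raised the trap is the one whose
   event waitpid() reports.  A real debugger on Linux would also pass
   __WALL to collect events from cloned threads, not just children.  */
static pid_t
wait_for_trap (int *status)
{
  return waitpid (-1, status, 0);
}
```

Here the returned pid plays the role of inferior_ptid: the event and
the thread that generated it arrive together.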