Date: Sun, 22 Jun 2003 22:34:00 -0000
From: Daniel Jacobowitz <drow@mvista.com>
To: Andrew Cagney <ac131313@redhat.com>
Cc: gdb@sources.redhat.com
Subject: Re: Always cache memory and registers
Message-ID: <20030622223412.GA15860@nevyn.them.org>
In-Reply-To: <3EF62D05.8070205@redhat.com>

On Sun, Jun 22, 2003 at 06:26:13PM -0400, Andrew Cagney wrote:
> Hello,
> 
> Think back to the rationale for GDB simply flushing its entire state
> after the user modifies memory or a register.  No matter how inefficient
> that update is, it can't be any worse than the full refresh needed after
> a single step.  All effort should be put into making single step fast,
> and not into making read-modify-write fast.
> 
> I think I've just found a similar argument that can be used to justify
> always enabling a data cache.  GDB's dcache is currently disabled (or at
> least was the last time I looked :-).
> The rationale was that the user,
> when inspecting in-memory devices, would be confused if repeated reads
> did not reflect the device's current register values.
> 
> The problem with this is GUIs.
> 
> A GUI can simultaneously display multiple views of the same memory
> region.  Should each of those displays generate separate target reads
> (with different values and side effects), or should they all share a
> common cache?
> 
> I think the latter, because it is impossible, from a GUI, to predict or
> control the number of reads that a request will trigger.  Hence I'm
> thinking that a data cache should be enabled by default.

Good reasoning.  I like it.

> The only proviso being that the current cache and target vector
> would need to be modified so that the cache only ever requested the
> data needed, leaving it to the target to supply more if available
> (much like registers do today).  The current dcache doesn't do this;
> it instead pads out small reads :-(

It needs tweaking for other reasons too.  It should probably have a
much higher threshold before it starts throwing out data, for one
thing.

Padding out small reads isn't such a bad idea.  It generally seems to
be the latency that's the real problem, especially for remote targets.
I think both NetBSD and GNU/Linux do fast bulk reads natively now?
I'd almost want to increase the padding.

> One thing that could be added to this is the idea of a sync point.
> When supplying data, the target could mark it as volatile.  Such
> volatile data would then be drawn from the cache, but only up until
> the next sync point.  After that, a fetch would trigger a new read.
> Returning to the command line, for instance, could be a sync point.
> Individual x/i commands on a volatile region would be separated by
> sync points, and hence would trigger separate reads.
> 
> Thoughts?  I think this provides at least one technical reason for
> enabling the cache.

Interesting idea there.  I'm not quite sure how much work vs. return
it would be.
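For concreteness, the sync-point scheme described above could look
something like the toy sketch below: cache entries the target marked
volatile carry the "generation" at which they were filled, a sync point
just bumps the generation, and a later lookup treats a stale volatile
entry as a miss while non-volatile data survives.  This is NOT GDB's
actual dcache.c -- the names (dcache_read, dcache_sync_point,
target_read) and the direct-mapped layout are hypothetical, purely for
illustration.

```c
/* Toy sketch of the proposed sync-point behaviour.  Hypothetical code,
   not GDB's dcache.c.  */
#include <assert.h>

#define CACHE_SIZE 16

static int target_reads;        /* Counts simulated target accesses.  */

/* Stand-in for a real target access (e.g. a remote memory read).  */
static unsigned char
target_read (unsigned long addr)
{
  target_reads++;
  return (unsigned char) (addr & 0xff); /* Fake memory contents.  */
}

struct entry
{
  unsigned long addr;
  unsigned char value;
  int valid;
  int is_volatile;              /* Target flagged this data volatile.  */
  unsigned gen;                 /* Generation when the entry was filled.  */
};

static struct entry cache[CACHE_SIZE];
static unsigned current_gen;

/* Returning to the command line, or each separate x/i, would call
   this; it invalidates only the volatile entries, lazily.  */
static void
dcache_sync_point (void)
{
  current_gen++;
}

static unsigned char
dcache_read (unsigned long addr, int is_volatile)
{
  struct entry *e = &cache[addr % CACHE_SIZE];
  /* A volatile entry from an earlier generation counts as a miss.  */
  int stale = e->is_volatile && e->gen != current_gen;

  if (!e->valid || e->addr != addr || stale)
    {
      e->addr = addr;
      e->value = target_read (addr);
      e->valid = 1;
      e->is_volatile = is_volatile;
      e->gen = current_gen;
    }
  return e->value;
}
```

With this shape, all of a GUI's redundant reads inside one sync
interval hit the cache and cause exactly one target access, while two
x/i commands on a volatile region, separated by a sync point, still
trigger two real reads.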
-- 
Daniel Jacobowitz
MontaVista Software                         Debian GNU/Linux Developer