From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 25179 invoked by alias); 22 Jun 2003 22:55:03 -0000
Mailing-List: contact gdb-help@sources.redhat.com; run by ezmlm
Precedence: bulk
List-Subscribe:
List-Archive:
List-Post:
List-Help: ,
Sender: gdb-owner@sources.redhat.com
Received: (qmail 25122 invoked from network); 22 Jun 2003 22:55:02 -0000
Received: from unknown (HELO localhost.redhat.com) (24.157.166.107) by sources.redhat.com with SMTP; 22 Jun 2003 22:55:02 -0000
Received: from redhat.com (localhost [127.0.0.1]) by localhost.redhat.com (Postfix) with ESMTP id 7FBC52B5F; Sun, 22 Jun 2003 18:54:48 -0400 (EDT)
Message-ID: <3EF633B8.4030009@redhat.com>
Date: Sun, 22 Jun 2003 22:55:00 -0000
From: Andrew Cagney
User-Agent: Mozilla/5.0 (X11; U; NetBSD macppc; en-US; rv:1.0.2) Gecko/20030223
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: Daniel Jacobowitz
Cc: gdb@sources.redhat.com
Subject: Re: Always cache memory and registers
References: <3EF62D05.8070205@redhat.com> <20030622223412.GA15860@nevyn.them.org>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-SW-Source: 2003-06/txt/msg00440.txt.bz2

>> The only proviso being that the current cache and target vector
>> would need to be modified so that the cache only ever requested the
>> data needed, leaving it to the target to supply more if available
>> (much like registers do today).  The current dcache doesn't do this,
>> it instead pads out small reads :-(
>
> It needs tweaking for other reasons too.  It should probably have a
> much higher threshold before it starts throwing out data, for one
> thing.
>
> Padding out small reads isn't such a bad idea.  It generally seems to
> be the latency that's a real problem, esp. for remote targets.  I
> think both NetBSD and GNU/Linux do fast bulk reads native now?  I'd
> almost want to increase the padding.

No, other way.
Having GDB pad out small reads can be a disaster - read one too many
bytes and ``foomp''.  This is one of the reasons why the dcache was
never enabled.  However, it is totally reasonable for the target (not
GDB) to supply megabytes of memory-mapped data when GDB only asked for
a single byte!  The key point is that it is the target that makes any
padding / transfer decisions, and not core GDB.  If the remote target
fetches too much data and ``foomp'', then, hey, not our fault, we
didn't tell it to read that address :-^

>> One thing that could be added to this is the idea of a sync point.
>> When supplying data, the target could mark it as volatile.  Such
>> volatile data would then be drawn from the cache but only up until
>> the next sync point.  After that a fetch would trigger a new read.
>> Returning to the command line, for instance, could be a sync point.
>> Individual x/i commands on a volatile region would be separated by
>> sync points, and hence would trigger separate reads.
>>
>> Thoughts?  I think this provides at least one technical reason for
>> enabling the cache.
>
> Interesting idea there.  I'm not quite sure how much work vs. return
> it would be.

There needs to at least be a contingency plan (if someone finds a
technical problem :-).

I also think it's relatively easy to implement.  Reach a sync point,
flush volatile data from the cache.

Andrew