From mboxrd@z Thu Jan 1 00:00:00 1970
Mailing-List: contact gdb-help@sources.redhat.com; run by ezmlm
Precedence: bulk
Sender: gdb-owner@sources.redhat.com
Message-ID: <3EF62D05.8070205@redhat.com>
Date: Sun, 22 Jun 2003 22:26:00 -0000
From: Andrew Cagney
User-Agent: Mozilla/5.0 (X11; U; NetBSD macppc; en-US; rv:1.0.2) Gecko/20030223
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: gdb@sources.redhat.com
Subject: Always cache memory and registers
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-SW-Source: 2003-06/txt/msg00438.txt.bz2

Hello,

Think back to the rationale for GDB simply flushing its entire state after the user modifies memory or a register.  No matter how inefficient that update is, it can't be any worse than the full refresh needed after a single step.  All effort should be put into making single step fast, not into making read-modify-write fast.

I think I've just found a similar argument that can be used to justify always enabling a data cache.

GDB's dcache is currently disabled (or at least it was the last time I looked :-).  The rationale was that a user inspecting in-memory devices would be confused if repeated reads did not reflect the device's current register values.  The problem with this is GUIs.  A GUI can simultaneously display multiple views of the same memory region.  Should each of those displays generate a separate target read (with different values and side effects), or should they all share a common cache?
I think the latter, because it is impossible, from a GUI, to predict or control the number of reads a given request will trigger.  Hence I'm thinking that a data cache should be enabled by default.  The only proviso is that the current cache and target vector would need to be modified so that the cache only ever requests the data it needs, leaving it to the target to supply more if available (much like registers do today).  The current dcache doesn't do this; instead it pads out small reads :-(

One thing that could be added to this is the idea of a sync point.  When supplying data, the target could mark it as volatile.  Such volatile data would then be drawn from the cache, but only up until the next sync point; after that, a fetch would trigger a new read.  Returning to the command line, for instance, could be a sync point.  Individual x/i commands on a volatile region would then be separated by sync points, and hence would trigger separate reads.

Thoughts?  I think this provides at least one technical reason for enabling the cache.

enjoy,
Andrew