From: Michael Snyder
To: Michael Snyder, "gdb-patches@sourceware.org"
Subject: Re: Seems like a bug in target_read_stack / dcache_xfer_memory?
Date: Mon, 19 Oct 2009 19:41:00 -0000
Message-ID: <4ADCBF6B.9050309@vmware.com>
References: <4ADB9759.7060305@vmware.com> <20091018225134.GA30546@caradoc.them.org> <4ADCA53C.2080703@vmware.com> <20091019183724.GA17923@caradoc.them.org>
In-Reply-To: <20091019183724.GA17923@caradoc.them.org>

Daniel Jacobowitz wrote:
> On Mon, Oct 19, 2009 at 10:43:24AM -0700, Michael Snyder wrote:
>> drow@false.org wrote:
>>> On Sun, Oct 18, 2009 at 03:31:53PM -0700, Michael Snyder wrote:
>>>> The arguments and return
>>>> value are just as for target_xfer_partial
>>> The comment is on the logical home of this method, other places should
>>> refer to the header: the definition of to_xfer_partial in struct
>>> target_ops in target.h.
>> OK, shouldn't we say so? I want to drop this conversation,
>> though, and focus on the code problem. Sorry again for my
>> aggrieved tone yesterday -- I was tired. ;-(
>
> Sure - I'm just as annoyed by the tangle as you are - I just remember
> hunting for this comment myself :-)
>
>> OK, so suppose dcache_xfer_memory returns zero in this context.
>> That means no transfer is possible. Shouldn't we give the other
>> targets on the stack a shot?
>>
>> In the case I'm looking at, the next target down is a core file,
>> and I know it has the memory location available. If I force gdb
>> out of this error return, core_xfer_partial will succeed.
>
> You haven't really described the situation, so I'm guessing. But the
> problem can't be in the code you cited. It's got to be further down
> the call stack.

Can you explain why it can't be in the code that I cited?
I don't understand why, for instance, this scenario couldn't apply:

* memory_xfer_partial is called for a stack-like location (say, 4 bytes
  beyond the topmost frame).
* inf is non-null, and either region->attrib.cache is true, or
  stack_cache_enabled_p and object == TARGET_OBJECT_STACK_MEMORY.
* Therefore we call dcache_xfer_memory.
* The requested location isn't cached, so we return zero.

----

However, of course I'll describe the situation. You can't quite
replicate it yet, unless you've applied my recent patches.

1) load testsuite/gdb.reverse/solib-reverse
2) break main
3) record
4) until 41
5) record save foo
6) record restore foo
   (this kills the running process, loads the core file/log, and puts
   you back at main)
7) until 41

The "until" command tries to read beyond the top of the stack, which is
fine for the running process and fine for the core file, but for some
reason in this instance the read goes into the dcache, where nothing
should currently be cached.