From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ulrich Weigand"
To: pedro@codesourcery.com (Pedro Alves)
Cc: gdb-patches@sourceware.org, jan.kratochvil@redhat.com, sergiodj@redhat.com
Subject: Re: [rfc][3/3] Remote core file generation: memory map
Date: Wed, 09 Nov 2011 18:27:00 -0000
Message-Id: <201111091827.pA9IR7UH023183@d06av02.portsmouth.uk.ibm.com>
In-Reply-To: <201111091637.23350.pedro@codesourcery.com> from "Pedro Alves" at Nov 09, 2011 04:37:22 PM
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-SW-Source: 2011-11/txt/msg00247.txt.bz2

Pedro Alves wrote:
> On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> > > The problem is that the way GDB uses the memory map is completely
> > > incompatible with the presence of multiple address spaces.
> > >
> > > There is a single instance of the map (kept in a global variable
> > > mem_region_list in memattr.c), which is used for any access in
> > > any address space.  lookup_mem_region takes only a CORE_ADDR;
> > > the "info mem" commands only operate on addresses with no notion
> > > of address spaces.
>
> That's mostly because we never really needed to consider making it
> per multi-process/inferior/exec before, and managed to just look the
> other way.  Targets that do multi-process don't use the map presently.
> I'm sure there are other things that live in globals but that should
> be per-inferior or address space, waiting for someone to trip on
> them, and eventually get fixed.  :-)

Yes, that's what I thought :-)

> > This seems to me to be an argument *for* splitting the contents into
> > two maps: the system map, which is static and cached (and used for
> > each memory access), and the per-process map, which is dynamic
> > and uncached (and only used rarely, in response to infrequently
> > used user commands) ...
>
> On e.g. uclinux / no-mmu, you could have both the system memory map
> returning the properties of memory of the whole system, and gdb could
> use that for all memory accesses; but, when generating a core of a
> single process, we're only interested in the memory "mapped" to that
> process.  So I tend to agree.

OK, another good point.

> We could also make the existing memory map be per-process/aspace,
> and define it to describe only the process's map (a process is a
> means of virtualization of the system resources, after all).  The
> dynamic issue with a process's memory map then becomes a cache
> management policy decision.  E.g., at times we know the map can't
> change (all is stopped, or by user knob); this would automatically
> enable the dcache for all RO regions (mostly .text).  We can still
> do this while having a two-maps mechanism, though.
>
> It doesn't seem there's a true answer to this, but I'm leaning
> towards a new target object.

OK.

In the meantime, I've noticed the discussion going on in parallel on
the "info core mappings" command.  If we implement that, we have the
somewhat odd situation that we can show mappings for native processes
and for core files, but not for remotely attached processes, even if
the target is also Linux ...

It would appear to me that this command needs the very same data I
need here for the generate-core-file command, namely the current list
of memory mappings.  If we create a new target object for VMA memory
mappings, maybe we ought to then have a standard "info mappings" (or
the like) command, implemented in GDB *common code*, that works
likewise on native, core file, *and* gdbserver targets; in fact, on
all targets that provide that new target object (which may need to be
a bit richer, e.g. provide mapped file names as well)?

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com