From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [rfc][3/3] Remote core file generation: memory map
From: "Ulrich Weigand"
To: jan.kratochvil@redhat.com
Cc: gdb-patches@sourceware.org
Date: Tue, 08 Nov 2011 17:25:00 -0000
Message-Id: <201111081725.pA8HPaFc003696@d06av02.portsmouth.uk.ibm.com>
In-Reply-To: <201111012128.pA1LSCTW002598@d06av02.portsmouth.uk.ibm.com> from "Ulrich Weigand" at Nov 01, 2011 10:28:12 PM
X-SW-Source: 2011-11/txt/msg00199.txt.bz2

I wrote:
> Jan Kratochvil wrote:
> > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > Note that there already is a qXfer:memory-map:read packet, but this
> > > is not usable as-is to implement target_find_memory_regions, since
> > > it is really intended for a *system* memory map for some naked
> > > embedded targets instead of a per-process virtual address space map.
> > >
> > > For example:
> > >
> > > - the memory map is read into a single global mem_region list; it is not
> > >   switched for multiple inferiors
> >
> > Without extended-remote there is a single address map only.  Is the memory map
> > already useful with extended-remote using separate address spaces?
> >
> > I do not have the embedded memory map experience but it seems to me the memory
> > map should be specified for each address map, therefore for each inferior it
> > is OK (maybe only possibly more duplicates are sent if the address spaces are
> > the same).  If GDB uses the memory map it uses it already for some inferior
> > and therefore its address space.
>
> The problem is that the way GDB uses the memory map is completely
> incompatible with the presence of multiple address spaces.
>
> There is a single instance of the map (kept in a global variable
> mem_region_list in memattr.c), which is used for any access in
> any address space.  lookup_mem_region takes only a CORE_ADDR;
> the "info mem" commands only operate on addresses with no notion
> of address spaces.  The remote protocol also does not specify
> which address space a map is requested for.

Another problem just occurred to me: the memory region list is cached
for the whole lifetime of the inferior.  This caching is really
necessary, since the map is consulted on every single memory access.
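[ For context, the data qXfer:memory-map:read delivers is an XML
document in GDB's documented Memory Map Format.  A sketch of a
typical static system map, with made-up addresses and sizes, might
look like this: ]

```xml
<?xml version="1.0"?>
<!DOCTYPE memory-map PUBLIC "+//IDN gnu.org//DTD GDB Memory Map V1.0//EN"
          "http://sourceware.org/gdb/gdb-memory-map.dtd">
<memory-map>
  <!-- Fixed system layout: RAM, ROM, and flash regions
       (addresses and lengths here are purely illustrative) -->
  <memory type="ram"   start="0x20000000" length="0x10000"/>
  <memory type="rom"   start="0x0"        length="0x8000"/>
  <memory type="flash" start="0x8000000"  length="0x100000">
    <!-- flash regions carry an erase block size property -->
    <property name="blocksize">0x1000</property>
  </memory>
</memory-map>
```

[ A map of this kind describes hardware layout, which is what the
caching behavior discussed here was designed around. ]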
And it seems quite valid to cache the map as long as it describes fixed
features of the architecture (i.e. RAM/ROM/Flash layout).  However, once
the map describes VMA mappings in a process context, it becomes highly
dynamic as memory mappings come and go ... it is then no longer really
feasible to cache the map contents.

This seems to me to be an argument *for* splitting the contents into
two maps: the system map, which is static and cached (and consulted on
every memory access), and the per-process map, which is dynamic and
uncached (and only used rarely, in response to infrequently used user
commands) ...

Thoughts?

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com