From: "Ulrich Weigand"
To: jan.kratochvil@redhat.com (Jan Kratochvil)
Cc: gdb-patches@sourceware.org
Subject: Re: [rfc][3/3] Remote core file generation: memory map
Date: Tue, 01 Nov 2011 21:28:00 -0000
Message-Id: <201111012128.pA1LSCTW002598@d06av02.portsmouth.uk.ibm.com>
In-Reply-To: <20111101184048.GA17896@host1.jankratochvil.net> from "Jan Kratochvil" at Nov 01, 2011 07:40:48 PM
Jan Kratochvil wrote:
> On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > Note that there already is a qXfer:memory-map:read packet, but this
> > is not usable as-is to implement target_find_memory_regions, since
> > it is really intended for a *system* memory map for some naked
> > embedded targets instead of a per-process virtual address space map.
> >
> > For example:
> >
> > - the memory map is read into a single global mem_region list; it is
> >   not switched for multiple inferiors
>
> Without extended-remote there is a single address map only.  Is the
> memory map already useful with extended-remote using separate address
> spaces?
>
> I do not have experience with embedded memory maps, but it seems to me
> the memory map should be specified per address map, and therefore per
> inferior it is OK (maybe only more duplicates are sent if the address
> spaces are the same).  If GDB uses the memory map, it already uses it
> for some inferior and therefore for its address space.

The problem is that the way GDB uses the memory map is completely
incompatible with the presence of multiple address spaces.  There is a
single instance of the map (kept in a global variable mem_region_list
in memattr.c), which is used for any access in any address space.
lookup_mem_region takes only a CORE_ADDR; the "info mem" commands only
operate on addresses, with no notion of address spaces.  The remote
protocol likewise does not specify which address space a map is
requested for.

This doesn't appear to matter much in practice, since the native
targets and gdbserver do not implement memory maps at all.  Just some
special-purpose remote stubs apparently do, and those are probably for
targets that do not support multiple address spaces.
However, this means that it isn't easily possible to just switch to
providing memory maps for the native/gdbserver targets, because we then
run into those problems ...

> I need to implement core file reading support in gdbserver in the
> foreseeable future, for performance reasons.  For the core file case
> everything can be cached indefinitely (and caching is even more
> significant there than in the local core file case).  The caching
> can and should be improved even in the normal live process case (by
> setting default_mem_attrib->cache = 1), but there it needs to be
> temporary (flushed by prepare_execute_command).  For embedded targets
> the caching should be disabled for memory-I/O regions even if it
> would get enabled otherwise.
>
> The caching should probably stay in the memory map and not be moved
> into the process map.  This all suggests to me that the separation in
> the submitted patch may complicate it all a bit.

Yes, if you want to enable memory-map features on gdbserver targets,
then those problems will need to be fixed.  In *that* case, it would
make more sense to avoid introducing a new map.

> > +const struct gdb_xml_attribute vma_attributes[] = {
> > +const struct gdb_xml_element process_map_children[] = {
> > +const struct gdb_xml_element process_map_elements[] = {
>
> These should be static; it is already a bug in memory-map.c, but there
> are too many such bugs; someone could spend some time fixing them.
> One could use my:
> http://git.jankratochvil.net/?p=nethome.git;a=blob_plain;hb=HEAD;f=bin/checkstatic

Fixed, thanks.

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com