From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 01 Nov 2011 18:41:00 -0000
From: Jan Kratochvil
To: Ulrich Weigand
Cc: gdb-patches@sourceware.org
Subject: Re: [rfc][3/3] Remote core file generation: memory map
Message-ID: <20111101184048.GA17896@host1.jankratochvil.net>
References: <201110211857.p9LIv4j0013316@d06av02.portsmouth.uk.ibm.com>
In-Reply-To: <201110211857.p9LIv4j0013316@d06av02.portsmouth.uk.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Mailing-List: contact gdb-patches-help@sourceware.org; run by ezmlm
Sender: gdb-patches-owner@sourceware.org
X-SW-Source: 2011-11/txt/msg00024.txt.bz2

Hi Ulrich,

I am not sure whether there is anything to do from this mail, but here
are at least some comments:

On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> Note that there already is a qXfer:memory-map:read packet, but this
> is not usable as-is to implement target_find_memory_regions, since
> it is really intended for a *system* memory map for some naked
> embedded targets instead of a per-process virtual address space map.
>
> For example:
>
> - the memory map is read into a single global mem_region list; it is
>   not switched for multiple inferiors

Without extended-remote there is only a single address map.  Is the
memory map already useful with extended-remote using separate address
spaces?

I do not have experience with embedded memory maps, but it seems to me
the memory map should be specified for each address map, and therefore
doing it for each inferior is OK (at most some duplicates are sent if
the address spaces are the same).  If GDB uses the memory map, it
already uses it for some inferior and therefore for its address space.

> - native or gdbserver Linux targets do not have a memory map today,
>   and just enabling it changes memory access behaviour in unexpected
>   ways, e.g. accesses outside of memory regions in /proc/PID/maps are
>   now no longer possible; also caching behaviour is different
>
> - the memory attribute format is insufficient to express properties
>   of a virtual memory mapping (e.g. permissions; mapped filename ...)
>
> I guess longer term it might be nicer to always have a memory map,
> and also use it for native targets, and then use the same map also
> for core file generation ...
>
> I'd appreciate suggestions how to move forward on this; is having a
> new qXfer type just for core file generation OK, or should we rather
> attempt to move towards an always-active memory map -- if the latter,
> how can we get there?
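For context, a qXfer:memory-map:read reply carries an XML document
along these lines (a sketch only; the addresses, lengths and blocksize
are made up, but the element names follow the documented memory-map
format):

```xml
<memory-map>
  <!-- Plain RAM region: readable, writable, cacheable.  -->
  <memory type="ram" start="0x20000000" length="0x10000"/>
  <!-- ROM region: writes are ignored.  -->
  <memory type="rom" start="0x0" length="0x8000"/>
  <!-- Flash region: writes go through the flash packets,
       in blocksize-sized erase blocks.  -->
  <memory type="flash" start="0x8000000" length="0x40000">
    <property name="blocksize">0x1000</property>
  </memory>
</memory-map>
```

As the quoted text says, this vocabulary describes a system memory map
(RAM/ROM/flash attributes) and has no way to express per-mapping
permissions or a mapped filename.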
I need to implement core file reading support in gdbserver in the
foreseeable future, for performance reasons.  For the core file case
everything can be cached indefinitely (and caching matters even more
there than in the local core file case).  The caching can and should be
improved even in the normal live process case (by setting
default_mem_attrib->cache = 1), but there it needs to be temporary
(flushed by prepare_execute_command).  For embedded targets the caching
should be disabled for memory-I/O regions even if it would otherwise
get enabled.  The caching should probably stay in the memory map and
not be moved into the process map.  All this suggests to me that the
separation in the submitted patch may complicate it all a bit.

> +const struct gdb_xml_attribute vma_attributes[] = {
> +const struct gdb_xml_element process_map_children[] = {
> +const struct gdb_xml_element process_map_elements[] = {

These should be static; it is already a bug in memory-map.c, but there
are too many such bugs.  Someone could spend some time fixing them; one
could use my:
	http://git.jankratochvil.net/?p=nethome.git;a=blob_plain;hb=HEAD;f=bin/checkstatic

> +static int
> +read_mapping (FILE *mapfile,
> +	      long long *addr,
> +	      long long *endaddr,
> +	      char *permissions,
> +	      long long *offset,
> +	      char *device, long long *inode, char *filename)
> +{
> +  int ret = fscanf (mapfile, "%llx-%llx %s %llx %s %llx",
> +		    addr, endaddr, permissions, offset, device, inode);

There will also be a /proc/PID/maps reader for gdbserver from:
	[patch 3/3] Implement qXfer:libraries for Linux/gdbserver #3
	http://sourceware.org/ml/gdb-patches/2011-10/msg00511.html
But maybe the code is simple enough, and it has different output
anyway, so it does not matter much to unify the two.

Thanks,
Jan