From: Pedro Alves <pedro@codesourcery.com>
To: gdb-patches@sourceware.org
Cc: "Ulrich Weigand" <uweigand@de.ibm.com>, jan.kratochvil@redhat.com
Subject: Re: [rfc][3/3] Remote core file generation: memory map
Date: Wed, 09 Nov 2011 16:37:00 -0000
Message-ID: <201111091637.23350.pedro@codesourcery.com>
In-Reply-To: <201111081725.pA8HPaFc003696@d06av02.portsmouth.uk.ibm.com>
On Tuesday 08 November 2011 17:25:36, Ulrich Weigand wrote:
> I wrote:
> > Jan Kratochvil wrote:
> > > On Fri, 21 Oct 2011 20:57:04 +0200, Ulrich Weigand wrote:
> > > > Note that there already is a qXfer:memory-map:read packet, but this
> > > > is not usable as-is to implement target_find_memory_regions, since
> > > > it is really intended for a *system* memory map for some naked
> > > > embedded targets instead of a per-process virtual address space map.
> > > >
> > > > For example:
> > > >
> > > > - the memory map is read into a single global mem_region list; it is not
> > > > switched for multiple inferiors
> > >
> > > Without extended-remote there is a single address map only. Is the memory map
> > > already useful with extended-remote using separate address spaces?
> > >
> > > I do not have the embedded memory map experience but it seems to me the memory
> > > map should be specified for each address map, therefore for each inferior it
> > > is OK (maybe only possibly more duplicates are sent if the address spaces are
> > > the same). If GDB uses the memory map it uses it already for some inferior
> > > and therefore its address space.
> >
> > The problem is that the way GDB uses the memory map is completely
> > incompatible with the presence of multiple address spaces.
> >
> > There is a single instance of the map (kept in a global variable
> > mem_region_list in memattr.c), which is used for any access in
> > any address space. lookup_mem_region takes only a CORE_ADDR;
> > the "info mem" commands only operate on addresses with no notion
> > of address spaces.
That's mostly because we never really needed to consider making it
per-process/per-inferior/per-address-space before, and managed to just
look the other way. Targets that do multi-process don't use the map at
present. I'm sure there are other things that live in globals but
should be per-inferior or per-address-space, waiting for someone to
trip on them and eventually get them fixed. :-)
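For reference, the qXfer:memory-map:read payload Ulrich mentioned
describes regions like this (the values here are made up; the element
and attribute names are the ones in the GDB manual's "Memory Map
Format" section):

```xml
<memory-map>
  <!-- system-wide properties, as used by naked embedded targets -->
  <memory type="rom" start="0x0" length="0x4000"/>
  <memory type="ram" start="0x4000" length="0xc000"/>
  <memory type="flash" start="0x10000" length="0x10000">
    <property name="blocksize">0x400</property>
  </memory>
</memory-map>
```

Nothing in that format ties a region to a particular process or
address space, which is the crux of the problem.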
> Another problem just occurred to me: the memory region list is
> cached during the whole duration of existence of the inferior.
> This caching is really necessary, since the map is consulted
> during each single memory access. And it seems quite valid to
> cache the map as long as it describes fixed features of the
> architecture (i.e. RAM/ROM/Flash layout).
>
> However, once the map describes VMA mappings in a process context,
> it becomes highly dynamic as memory maps come and go ... It is
> no longer really feasible to cache the map contents then.
Agreed.
> This seems to me to be an argument *for* splitting the contents into
> two maps; the system map which is static and cached (and used for
> each memory access), and the per-process map which is dynamic
> and uncached (and only used rarely, in response to infrequently
> used user commands) ...
On e.g. uClinux / no-MMU, you could have the system memory map
return the properties of memory for the whole system, and gdb
could use that for all memory accesses; but, when generating a
core of a single process, we're only interested in the memory
"mapped" to that process. So I tend to agree.
We could also make the existing memory map per-process/per-aspace,
and define it to describe only the process's map (a process is,
after all, a virtualization of the system's resources). The dynamic
nature of a process's memory map then becomes a cache-management
policy decision. E.g., at times we know the map can't change
(everything is stopped, or the user says so via a knob), and that
would automatically enable the dcache for all read-only regions
(mostly .text). We could still do this with a two-map mechanism,
though.
It doesn't seem there's a single right answer here, but I'm leaning
toward a new target object.
--
Pedro Alves