From: Jacob Potter <jdpotter@google.com>
To: gdb-patches@sourceware.org
Subject: Re: [RFA] Rewrite data cache and use for stack access.
Date: Tue, 30 Jun 2009 21:16:00 -0000
Message-ID: <7e6c8d660906301416k6fac6853k1d295bd5feae3283@mail.gmail.com>
In-Reply-To: <20090629193230.GA8840@caradoc.them.org>
On Mon, Jun 29, 2009 at 12:32 PM, Daniel Jacobowitz <drow@false.org> wrote:
>
> I think that part of the trouble with the existing cache is that it's
> implemented too much like a cache. Rather than making it
> set-associative, and thus (marginally?) less effective, what about
> fixing the search to use a more efficient structure?
>
> What we have today is a linked list of blocks. If we put them into a
> splay tree instead, search performance would be much better. It would
> be very similar to addrmap.c's splay trees.
>
Interesting. I'll look into that...
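For reference while I experiment, here's a rough, untested sketch of
what the lookup side might look like with libiberty's splay-tree.h.
The dcache_block struct and LINE_SIZE follow dcache.c loosely, but
treat the details as placeholders:

#include "splay-tree.h"

/* Order blocks by their base address, stored as the splay_tree_key.  */
static int
dcache_compare (splay_tree_key a, splay_tree_key b)
{
  if (a < b)
    return -1;
  return a > b;
}

/* Find the cached block covering ADDR, or NULL on a miss.  The tree
   replaces the current linked list of blocks.  (CORE_ADDR can be
   wider than splay_tree_key on some hosts; a real version would need
   to account for that.)  */
static struct dcache_block *
dcache_hit (splay_tree blocks, CORE_ADDR addr)
{
  splay_tree_node n
    = splay_tree_lookup (blocks, (splay_tree_key) (addr & ~(LINE_SIZE - 1)));
  return n != NULL ? (struct dcache_block *) n->value : NULL;
}

Insertion and eviction would then pair splay_tree_insert with
splay_tree_remove instead of splicing list nodes.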
>
> * I'd find it helpful if any performance improvements were separated
> out from stack caching. Could you do that?
>
I've split it into two patch files. Should I be submitting them as
completely separate [RFA]s?
> * Have you thought at all about non-stop or multi-process debugging?
> If we have a data cache which is specifically for stack accesses,
> maybe we should associate it with the thread.
I don't think we need to associate the cache with a particular thread:
the threads' stacks occupy separate parts of the address space anyway,
so per-thread caches would just be more state to keep track of.
For non-stop debugging, it seems like the correct thing to do would be
to clear the cache between each _command_ the user gives. It's
conceivable that a running thread might modify a value on a stopped
thread's stack, and we don't want to hide that by keeping the cache
across two backtrace commands. This may already happen; I'll
double-check.
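Concretely, I'm picturing something like this at the top of the
command loop (a sketch only; target_dcache here is a placeholder for
whatever handle the rewritten cache ends up keeping):

/* Before running each user command, drop cached data so that writes
   made by a still-running thread can't be hidden by a stale entry.  */
if (target_dcache != NULL)
  dcache_invalidate (target_dcache);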
> * Do we really need an option to turn this off? It seems to me to be
> risk-free; whether the target memory ends up volatile or not, we
> don't want volatile semantics on saved registers anyway.
>
> * If we do add the option, it requires documentation. Whether we do
> or not, please add an entry to NEWS about the new feature.
I'd like to keep the option, "just in case"; perhaps it should default
to on, though.
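For the registration I'm picturing the usual add_setshow_boolean_cmd
call with the default flipped to on; the "stack-cache" name and the
variable below are placeholders, not final:

static int stack_cache_enabled_p = 1;  /* Default to on.  */

void
_initialize_stack_cache (void)
{
  add_setshow_boolean_cmd ("stack-cache", class_support,
                           &stack_cache_enabled_p, _("\
Set use of the data cache for stack accesses."), _("\
Show use of the data cache for stack accesses."), _("\
When on, stack reads go through the data cache."),
                           NULL, NULL, &setlist, &showlist);
}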
> * We'd prefer that new functions take a target_ops; are the
> current_target versions of read_stack and target_stack necessary?
They're called from value_at, which doesn't seem to get information
about the target; is there a way to avoid using current_target there?
>> +extern struct value *value_at_lazy_stack (struct type *type, CORE_ADDR addr);
>
> IMO this one isn't needed; just call value_set_stack on the result
> of value_at_lazy, right?
Hmm, you're right. I'll change that.
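So the caller just becomes (assuming the setter is spelled
value_set_stack, per your naming above):

struct value *val = value_at_lazy (type, addr);

/* Flag the value as stack-resident rather than going through a
   dedicated value_at_lazy_stack constructor.  */
value_set_stack (val, 1);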
Next step: looking into using splay trees.
- Jacob