From: Pedro Alves <pedro@codesourcery.com>
To: gdb-patches@sourceware.org
Cc: Doug Evans <dje@google.com>
Subject: Re: [RFA] Use data cache for stack accesses
Date: Wed, 26 Aug 2009 22:45:00 -0000
Message-ID: <200908262108.49085.pedro@codesourcery.com>
In-Reply-To: <e394668d0908260929s2264835el8d481a596a8cf104@mail.gmail.com>
On Wednesday 26 August 2009 17:29:40, Doug Evans wrote:
> On Tue, Aug 25, 2009 at 11:44 AM, Pedro Alves <pedro@codesourcery.com> wrote:
>
> > I worry about new stale cache issues in non-stop mode.
> > [...]
> > It appears that (at least in non-stop or if any thread is running)
> > the cache should only be live for the duration of an "high level
> > operation" --- that is, for a "backtrace", or a "print", etc.
> > Did you consider this?
>
> It wasn't clear how to handle non-stop/etc. mode so I left that for
> the next iteration.
> If only having the data live across a high level operation works for
> you, it works for me.
Well, I'm not sure either, but it's much better to discuss it
up front than to introduce subtle, hard-to-reproduce bugs
induced by GDB itself.
Reading things from memory while threads are running is
always racy --- if we want an accurate snapshot of the
inferior's memory, we *have* to stop all threads
temporarily. If we don't stop all threads, then even if
the dcache goes stale while we do a series of micro
memory reads (that logically are part of a bigger, higher-level
operation --- extracting a backtrace, reading a structure from
memory, etc.), it's OK: we can just pretend the memory only
changed after we did the reads instead of before. If we
restart the higher-level operation, we shouldn't hit the
stale cache, and that seems good enough.
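To make the "cache live only for one high-level operation" idea concrete, here's a minimal sketch in C. This is a hypothetical design, not GDB's actual dcache API: the names (`start_high_level_operation`, `dcache_invalidate`, etc.) are invented for illustration. The point is simply that bumping/flushing the cache at the start of each user-visible command bounds how long staleness can survive:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define DCACHE_LINES 64

/* One direct-mapped cache slot: which address it holds, and whether
   the slot currently holds valid data.  */
struct dcache_line { uint64_t tag; int valid; };

struct dcache {
  struct dcache_line lines[DCACHE_LINES];
  unsigned long generation;     /* bumped once per high-level operation */
};

/* Drop all cached data.  */
static void dcache_invalidate (struct dcache *c)
{
  for (int i = 0; i < DCACHE_LINES; i++)
    c->lines[i].valid = 0;
}

/* Hypothetical hook called when a user-visible command (backtrace,
   print, ...) begins: the cache never carries data from a previous
   command, so staleness is bounded by one operation.  */
static void start_high_level_operation (struct dcache *c)
{
  c->generation++;
  dcache_invalidate (c);
}

/* Return nonzero if ADDR is cached.  */
static int dcache_lookup (struct dcache *c, uint64_t addr)
{
  struct dcache_line *l = &c->lines[addr % DCACHE_LINES];
  return l->valid && l->tag == addr;
}

/* Record that ADDR was fetched from the target.  */
static void dcache_fill (struct dcache *c, uint64_t addr)
{
  struct dcache_line *l = &c->lines[addr % DCACHE_LINES];
  l->tag = addr;
  l->valid = 1;
}
```

Within a single command the cache still coalesces the many micro-reads; across commands it starts cold, which is exactly the "pretend memory changed after the reads" model above.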
Writes are more dangerous, though, since the dcache writes
back cache lines in chunks, meaning there's a risk that the
dcache undoes changes the inferior made in the meantime to
other parts of a cache line that higher layers of GDB code
were not trying to write to (it's effectively a
read-modify-write, hence racy). This is a more general
non-stop mode problem, not strictly related to the dcache:
e.g., ptrace writes to memory do a word-sized
read-modify-write, clearly a smaller risk than a 64-byte-wide
cache line --- a word is small, but the risk is still there.
The only cure for this is to stop all threads momentarily...
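The write-back hazard just described can be shown with a toy simulation (again, illustrative C, not GDB code): the debugger snapshots a whole line, the running inferior then stores to a different byte of that same line, and flushing the full line silently reverts the inferior's store:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define LINE_SIZE 64

/* Pretend inferior memory, one cache line's worth.  */
static uint8_t inferior_mem[LINE_SIZE];

/* The debugger modifies only its cached copy of the line.  */
static void debugger_write_byte (uint8_t *line_copy, size_t off, uint8_t v)
{
  line_copy[off] = v;
}

/* Write-back flushes the WHOLE line: a read-modify-write at line
   granularity, so bytes the debugger never touched are rewritten
   with the (possibly stale) snapshot.  */
static void write_back_line (const uint8_t *line_copy)
{
  memcpy (inferior_mem, line_copy, LINE_SIZE);
}
```

Run the race: the debugger reads the line, writes byte 0 in its copy, the inferior concurrently writes byte 7 of real memory, and the write-back then clobbers byte 7 back to its old value.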
We're shifting the caching decision to the intent of the
transfer, but we still cache whole lines (larger than the
read_stack request) --- do we still have rare border cases
where a cache line can cover more than stack memory, hence
still leading to incorrect results? Probably a very rare
problem in practice, and even less problematic if the cache
is only live across high-level operations (which gets rid of
most of the volatile-memory problem). Which brings me to how
much of the improvement you are seeing comes from chunking
vs. from caching:
> > Did you post numbers showing off the improvements from
> > having the cache on? E.g., when doing foo, with cache off,
> > I get NNN memory reads, while with cache on, we get only
> > nnn reads. I'd be curious to have some backing behind
> > "This improves remote performance significantly".
>
> For a typical gdb/gdbserver connection here a backtrace of 256 levels
> went from 48 seconds (average over 6 tries) to 4 seconds (average over
> 6 tries).
Nice! Were all those runs started from a cold cache, or did
you start from a cold cache and issue 6 backtraces in a row?
I mean, how spread out were those 6 tries? Should one read
that as 48,48,48,48,48,48 vs 20,1,1,1,1,1 (some improvement
due to chunking, and a large improvement due to caching on
subsequent repeats of the command); or as 48,48,48,48,48,48 vs
4,4,4,4,4,4 (large improvement due to chunking --- caching
not actually measured)?
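One way to separate the two effects without a stopwatch is to count simulated remote requests directly. The sketch below is assumed instrumentation, not Doug's actual measurement: word-at-a-time reads cost one request each, while line-granular caching pays one request per 64-byte line on a cold pass and nothing on a warm repeat --- the cold-pass reduction is the "chunking" win, the warm-repeat reduction is the "caching" win:

```c
#include <assert.h>
#include <stdint.h>

#define LINE_SIZE 64
#define MAX_LINES 1024

/* Number of packets we'd send to the remote stub.  */
static unsigned long remote_reads;

/* Which lines have already been fetched (toy "cache").  */
static uint8_t line_valid[MAX_LINES];

/* Uncached: every 8-byte word read costs one remote request.  */
static void read_words_uncached (uint64_t addr, int nwords)
{
  (void) addr;
  remote_reads += nwords;
}

/* Line-cached: a word read costs a request only on a line miss,
   and the miss fetches the whole line in one request.  */
static void read_words_cached (uint64_t addr, int nwords)
{
  for (int i = 0; i < nwords; i++)
    {
      uint64_t line = (addr + 8u * i) / LINE_SIZE;
      if (!line_valid[line % MAX_LINES])
        {
          remote_reads++;
          line_valid[line % MAX_LINES] = 1;
        }
    }
}
```

Reading 16 words (128 bytes) costs 16 requests uncached, 2 requests on a cold cached pass (two lines), and 0 extra on a warm repeat --- which is exactly the 48,...,48 vs 20,1,1,... distinction above.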
--
Pedro Alves