From: Pedro Alves <palves@redhat.com>
To: Yao Qi <yao@codesourcery.com>
Cc: Mark Kettenis <mark.kettenis@xs4all.nl>, gdb-patches@sourceware.org
Subject: Re: [PATCH 2/3] skip_prolgoue (amd64)
Date: Mon, 09 Dec 2013 15:34:00 -0000
Message-ID: <52A5E2EE.5040501@redhat.com>
In-Reply-To: <52A5CC0D.4080004@codesourcery.com>
On 12/09/2013 01:56 PM, Yao Qi wrote:
> On 12/09/2013 09:13 PM, Pedro Alves wrote:
>> We can have more stops than resumes.
>>
>> #1 - resume everything (1000 threads)
>> #2 - event in one thread triggers, we call target_wait
>> #3 - gdb decides to leave thread stopped.
>> #4 - one hour passes, while threads poke at memory.
>> #5 - another event triggers, and we call target_wait again
>>
>> No resume happened between #2 and #5.
>
> Thanks for the explanation.  IIUC, #2, #3, and #5 are the result of
> handle_inferior_event, where the cache is flushed (with my patch
> applied).
No, #2 happens before handle_inferior_event is called.
>
> "wait -> handle event -> wait" is like a loop or circle to me, and we
> can flush at any point(s) of this circle, depending on what heuristic
> we are using.
Again, the point is to ensure that the cache does not enlarge
the race window with the inferior itself.  IOW, make the cache
transparent with respect to the chances of seeing a torn value,
a torn prologue, or whatever.  Between starting to handle an
event and finishing it, a very short time passes.  Between
finishing handling an event and the next event, an unbounded
amount of time can pass.  If we don't flush the cache just
before handling the event, running with the cache active gives
a much, much wider race window than running without it.
--
Pedro Alves
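[To make the placement argument above concrete, here is a minimal,
self-contained sketch of the "wait -> handle event -> wait" loop.
The names (wait_for_event, code_cache_flush, handle_event,
cache_valid) are hypothetical stand-ins, not GDB's actual internals;
target_wait, handle_inferior_event, and target_read_code from the
thread are only referenced in comments.  The one thing the sketch is
meant to show is where the flush sits relative to waiting and
handling.]

  /* sketch.c -- a minimal model of the "wait -> handle event -> wait"
     loop discussed above.  All names are hypothetical stand-ins; only
     the *placement* of the cache flush is the point.  */

  #include <stdbool.h>
  #include <stdio.h>

  struct event { int thread_id; };

  /* Stand-in for a cache of target code bytes (cf. target_read_code).  */
  static bool cache_valid;

  static void
  code_cache_flush (void)
  {
    cache_valid = false;
  }

  /* Stand-in for target_wait: blocks until some resumed thread reports
     an event.  An unbounded amount of time may pass in here (step #4 in
     the scenario above), during which still-running threads can rewrite
     any memory we have cached.  */
  static struct event
  wait_for_event (int n)
  {
    struct event e = { n };
    return e;
  }

  /* Stand-in for handle_inferior_event: may read inferior code, e.g. to
     skip a prologue.  Only a short, bounded time passes in here.  */
  static void
  handle_event (struct event e)
  {
    if (!cache_valid)
      printf ("thread %d: every code read during handling fetches "
              "fresh bytes\n", e.thread_id);
    cache_valid = true;   /* reads during handling repopulate the cache */
  }

  int
  main (void)
  {
    for (int i = 1; i <= 3; i++)
      {
        /* Steps #2/#5: the wait happens *before* any event handling.  */
        struct event e = wait_for_event (i);

        /* Flush just before handling: anything read while handling this
           event is then at most as stale as the event itself, so the
           cache cannot widen the race window beyond the short handling
           phase.  */
        code_cache_flush ();

        handle_event (e);   /* step #3 may leave the thread stopped */

        /* Flushing here instead (or anywhere else "in the circle")
           would let the *next* handling phase read bytes up to one
           unbounded inter-event interval old.  */
      }
    return 0;
  }

[This is only a model of the argument, but it shows why the flush
points are not interchangeable: flushing just before handling bounds
staleness by the handling time, while flushing anywhere else bounds
it by the inter-event gap, which is unbounded.]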
Thread overview: 39+ messages
2013-11-29 14:27 [PATCH 0/3] Use target_read_code in skip_prologue Yao Qi
2013-11-29 14:27 ` [PATCH 2/3] skip_prolgoue (amd64) Yao Qi
2013-11-29 14:38 ` Mark Kettenis
2013-11-29 18:55 ` Mark Kettenis
2013-11-30 3:40 ` Yao Qi
2013-11-30 12:01 ` Pedro Alves
2013-12-02 7:34 ` Yao Qi
2013-12-03 18:28 ` Pedro Alves
2013-12-04 2:34 ` Yao Qi
2013-12-04 12:08 ` Pedro Alves
2013-12-04 15:38 ` Tom Tromey
2013-12-04 18:31 ` Doug Evans
2013-12-05 11:31 ` Pedro Alves
2013-12-05 1:21 ` Yao Qi
2013-12-05 12:08 ` Pedro Alves
2013-12-05 14:08 ` Yao Qi
2013-12-05 14:37 ` Pedro Alves
2013-12-08 8:01 ` Yao Qi
2013-12-08 8:26 ` Doug Evans
2013-12-09 1:45 ` Yao Qi
2013-12-09 11:32 ` Pedro Alves
2013-12-09 11:53 ` Pedro Alves
2013-12-09 13:03 ` Yao Qi
2013-12-09 13:13 ` Pedro Alves
2013-12-09 13:58 ` Yao Qi
2013-12-09 15:34 ` Pedro Alves [this message]
2013-12-10 0:57 ` Yao Qi
2013-12-10 10:23 ` Pedro Alves
2013-12-10 12:02 ` Yao Qi
2013-12-04 17:42 ` Doug Evans
2013-12-04 18:00 ` Doug Evans
2013-12-04 17:54 ` Doug Evans
2013-12-05 1:39 ` Yao Qi
2013-12-05 11:47 ` Pedro Alves
2013-11-29 14:36 ` [PATCH 1/3] Use target_read_code in skip_prologue (i386) Yao Qi
2013-11-30 11:43 ` Pedro Alves
2013-11-29 14:38 ` [PATCH 3/3] Perf test case: skip-prologue Yao Qi
2013-12-03 7:34 ` Yao Qi
2013-12-10 12:45 ` Yao Qi