* [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
@ 2013-05-13 10:46 Joel Brobecker
2013-05-13 11:22 ` Pedro Alves
` (2 more replies)
0 siblings, 3 replies; 10+ messages in thread
From: Joel Brobecker @ 2013-05-13 10:46 UTC (permalink / raw)
To: gdb-patches; +Cc: Joel Brobecker
Hello,
On ppc-lynx178, resuming the execution of a program after hitting
a breakpoint sometimes triggers a spurious SIG61 event:
(gdb) cont
Continuing.
Program received signal SIG61, Real-time event 61.
[Switching to Thread 39]
0x10002324 in a_test.task1 (<_task>=0x3ffff774) at a_test.adb:30
30 select -- Task 1
From this point on, continuing again lets the signal kill the program.
Using "signal 0" or configuring GDB to discard the signal does not
help either, as the program immediately reports the same signal again.
What happens is the following:
- GDB sends a single-step order to gdbserver: $vCont;s:31
This tells GDBserver to do a step using thread 0x31=49.
GDBserver does the step, and thread 49 receives the SIGTRAP
indicating that the step has finished.
- GDB then sends a "continue", but this time does not specify
which thread to continue: $vCont;c
GDBserver uses an arbitrary thread's ptid to resume the program's
execution (the current_inferior's ptid was chosen for that).
See lynx-low.c:lynx_resume:
if (ptid_equal (ptid, minus_one_ptid))
ptid = thread_to_gdb_id (current_inferior);
So far on all LynxOS platforms, this has been good enough. But
not so on LynxOS 178. If the ptid used to resume the execution
is not the same as the thread that did the step, we get the weird
signal.
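To make the failure mode concrete, here is a small standalone model of the resume-target selection (the ptid type and helpers are simplified stand-ins, not the actual lynx-low.c code; the real code also distinguishes null_ptid from minus_one_ptid, which this model collapses into one wildcard):

```c
#include <assert.h>

/* Simplified stand-in for gdbserver's ptid type.  */
struct ptid { int pid; long tid; };

/* Wildcard ptid, meaning "any/all threads".  */
static const struct ptid minus_one_ptid = { -1, 0 };

static int
ptid_equal (struct ptid a, struct ptid b)
{
  return a.pid == b.pid && a.tid == b.tid;
}

/* Model of the fix: when asked to resume with the wildcard ptid,
   prefer the thread that last reported a wait event, and fall back
   to the current inferior's ptid only when no event has been seen
   yet.  An explicit ptid is used as-is.  */
static struct ptid
choose_resume_ptid (struct ptid requested,
                    struct ptid last_wait_event_ptid,
                    struct ptid current_inferior_ptid)
{
  if (ptid_equal (requested, minus_one_ptid)
      && !ptid_equal (last_wait_event_ptid, minus_one_ptid))
    return last_wait_event_ptid;
  if (ptid_equal (requested, minus_one_ptid))
    return current_inferior_ptid;
  return requested;
}
```

With this selection, the "$vCont;c" that follows a "$vCont;s:31" resumes using thread 49 (the thread that reported the SIGTRAP) rather than whatever thread happens to be current_inferior.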
This patch fixes the problem by saving the ptid of the thread
that last caused an event, received during a call to waitpid.
The ptid is saved in per-process private data.
gdbserver/ChangeLog:
* lynx-low.c (struct process_info_private): New type.
(lynx_add_process): New function.
(lynx_create_inferior, lynx_attach): Replace calls to
add_process by calls to lynx_add_process.
(lynx_resume): If PTID is null, then try using
current_process()->private->last_wait_event_ptid.
Add comments.
(lynx_clear_inferiors): Delete. The contents of that function
have been inlined in lynx_mourn.
(lynx_wait_1): Save the ptid in the process's private data.
(lynx_mourn): Free the process' private data. Replace call
to lynx_clear_inferiors by call to clear_inferiors.
Tested on ppc-lynx178. OK to checkin?
Thanks,
--
Joel
---
gdb/gdbserver/lynx-low.c | 58 +++++++++++++++++++++++++++++++++++----------
1 files changed, 45 insertions(+), 13 deletions(-)
diff --git a/gdb/gdbserver/lynx-low.c b/gdb/gdbserver/lynx-low.c
index a5f3b6d..b4cb5d2 100644
--- a/gdb/gdbserver/lynx-low.c
+++ b/gdb/gdbserver/lynx-low.c
@@ -30,6 +30,15 @@
int using_threads = 1;
+/* Per-process private data. */
+
+struct process_info_private
+{
+ /* The PTID obtained from the last wait performed on this process.
+ Initialized to null_ptid until the first wait is performed. */
+ ptid_t last_wait_event_ptid;
+};
+
/* Print a debug trace on standard output if debug_threads is set. */
static void
@@ -196,6 +205,21 @@ lynx_ptrace (int request, ptid_t ptid, int addr, int data, int addr2)
return result;
}
+/* Call add_process with the given parameters, and initialize
+ the process' private data. */
+
+static struct process_info *
+lynx_add_process (int pid, int attached)
+{
+ struct process_info *proc;
+
+ proc = add_process (pid, attached);
+ proc->private = xcalloc (1, sizeof (*proc->private));
+ proc->private->last_wait_event_ptid = null_ptid;
+
+ return proc;
+}
+
/* Implement the create_inferior method of the target_ops vector. */
static int
@@ -225,7 +249,7 @@ lynx_create_inferior (char *program, char **allargs)
_exit (0177);
}
- add_process (pid, 0);
+ lynx_add_process (pid, 0);
/* Do not add the process thread just yet, as we do not know its tid.
We will add it later, during the wait for the STOP event corresponding
to the lynx_ptrace (PTRACE_TRACEME) call above. */
@@ -243,7 +267,7 @@ lynx_attach (unsigned long pid)
error ("Cannot attach to process %lu: %s (%d)\n", pid,
strerror (errno), errno);
- add_process (pid, 1);
+ lynx_add_process (pid, 1);
add_thread (ptid, NULL);
return 0;
@@ -260,6 +284,19 @@ lynx_resume (struct thread_resume *resume_info, size_t n)
? PTRACE_SINGLESTEP : PTRACE_CONT);
const int signal = resume_info[0].sig;
+ /* If given a null_ptid, then try using the current_process'
+ private->last_wait_event_ptid. On most LynxOS versions,
+ using any of the process' thread works well enough, but
+ LynxOS 178 is a little more sensitive, and triggers some
+ unexpected signals (Eg SIG61) when we resume the inferior
+ using a different thread. */
+ if (ptid_equal (ptid, minus_one_ptid))
+ ptid = current_process()->private->last_wait_event_ptid;
+
+ /* The ptid might still be NULL; this can happen between the moment
+ we create the inferior or attach to a process, and the moment
+ we resume its execution for the first time. It is fine to
+ use the current_inferior's ptid in those cases. */
if (ptid_equal (ptid, minus_one_ptid))
ptid = thread_to_gdb_id (current_inferior);
@@ -285,16 +322,6 @@ lynx_continue (ptid_t ptid)
lynx_resume (&resume_info, 1);
}
-/* Remove all inferiors and associated threads. */
-
-static void
-lynx_clear_inferiors (void)
-{
- /* We do not use private data, so nothing much to do except calling
- clear_inferiors. */
- clear_inferiors ();
-}
-
/* A wrapper around waitpid that handles the various idiosyncrasies
of LynxOS' waitpid. */
@@ -352,6 +379,7 @@ retry:
ret = lynx_waitpid (pid, &wstat);
new_ptid = lynx_ptid_build (ret, ((union wait *) &wstat)->w_tid);
+ find_process_pid (ret)->private->last_wait_event_ptid = new_ptid;
/* If this is a new thread, then add it now. The reason why we do
this here instead of when handling new-thread events is because
@@ -480,7 +508,11 @@ lynx_detach (int pid)
static void
lynx_mourn (struct process_info *proc)
{
- lynx_clear_inferiors ();
+ /* Free our private data. */
+ free (proc->private);
+ proc->private = NULL;
+
+ clear_inferiors ();
}
/* Implement the join target_ops method. */
--
1.7.0.4
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 10:46 [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior Joel Brobecker
@ 2013-05-13 11:22 ` Pedro Alves
2013-05-13 11:25 ` Pedro Alves
2013-05-13 13:28 ` Joel Brobecker
2013-05-13 14:36 ` Pedro Alves
2013-05-17 6:48 ` Checked in: " Joel Brobecker
2 siblings, 2 replies; 10+ messages in thread
From: Pedro Alves @ 2013-05-13 11:22 UTC (permalink / raw)
To: Joel Brobecker; +Cc: gdb-patches
Hi Joel,
On 05/13/2013 11:46 AM, Joel Brobecker wrote:
> On ppc-lynx178, resuming the execution of a program after hitting
> a breakpoint sometimes triggers a spurious SIG61 event:
I'd like to understand this a little better.
Could that mean the thread that gdbserver used for ptrace hadn't
been ptrace stopped, or doesn't exist at all? "sometimes" makes
me wonder about the latter.
> (gdb) cont
> Continuing.
>
> Program received signal SIG61, Real-time event 61.
> [Switching to Thread 39]
> 0x10002324 in a_test.task1 (<_task>=0x3ffff774) at a_test.adb:30
> 30 select -- Task 1
>
> From this point on, continuing again lets the signal kill the program.
> Using "signal 0" or configuring GDB to discard the signal does not
> help either, as the program immediately reports the same signal again.
>
> What happens is the following:
>
> - GDB sends a single-step order to gdbserver: $vCont;s:31
> This tells GDBserver to do a step using thread 0x31=49.
> GDBserver does the step, and thread 49 receives the SIGTRAP
> indicating that the step has finished.
>
> - GDB then sends a "continue", but this time does not specify
> which thread to continue: $vCont;c
> GDBserver uses an arbitrary thread's ptid to resume the program's
> execution (the current_inferior's ptid was chosen for that).
> See lynx-low.c:lynx_resume:
Urgh.
So does that mean scheduler locking doesn't work?
E.g.,
(gdb) thread 2
(gdb) si
(gdb) thread 1
(gdb) c
That'll single-step thread 2, and then continue just thread 1, supposedly
triggering this issue too? If not, why not?
BTW, vCont;c means "resume all threads", why is the current code just
resuming one?
This:
lynx_wait_1 ()
...
if (ptid_equal (ptid, minus_one_ptid))
pid = lynx_ptid_get_pid (thread_to_gdb_id (current_inferior));
else
pid = BUILDPID (lynx_ptid_get_pid (ptid), lynx_ptid_get_tid (ptid));
retry:
ret = lynx_waitpid (pid, &wstat);
is suspicious also. Doesn't that mean we're doing a waitpid on
a possibly not-resumed current_inferior (that may not be the main task,
if that matters)? Could _that_ be reason for that magic signal 61?
--
Pedro Alves
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 11:22 ` Pedro Alves
@ 2013-05-13 11:25 ` Pedro Alves
2013-05-13 13:28 ` Joel Brobecker
1 sibling, 0 replies; 10+ messages in thread
From: Pedro Alves @ 2013-05-13 11:25 UTC (permalink / raw)
To: Joel Brobecker; +Cc: gdb-patches
On 05/13/2013 12:22 PM, Pedro Alves wrote:
> So does that mean scheduler locking doesn't work?
>
> E.g.,
>
> (gdb) thread 2
> (gdb) si
> (gdb) thread 1
> (gdb) c
To be more explicit, I meant with "(gdb) set scheduler-locking on".
--
Pedro Alves
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 11:22 ` Pedro Alves
2013-05-13 11:25 ` Pedro Alves
@ 2013-05-13 13:28 ` Joel Brobecker
2013-05-13 14:28 ` Pedro Alves
1 sibling, 1 reply; 10+ messages in thread
From: Joel Brobecker @ 2013-05-13 13:28 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches
Thanks for the comments, Pedro.
> > On ppc-lynx178, resuming the execution of a program after hitting
> > a breakpoint sometimes triggers a spurious SIG61 event:
>
> I'd like to understand this a little better.
>
> Could that mean the thread that gdbserver used for ptrace hadn't
> been ptrace stopped, or doesn't exist at all? "sometimes" makes
> me wonder about the latter.
My interpretation of the clues I have been able to gather is that
the LynxOS thread library implementation does not like it when
we mess with the program's scheduling. Lynx178 is derived from
an old version of LynxOS, which may explain why newer versions
are a little more robust in that respect.
I tried to get more info directly from the people who I thought
would know about this, but never managed to make progress in that
direction, so I gave up when I found this solution.
> So does that mean scheduler locking doesn't work?
>
> E.g.,
>
> (gdb) thread 2
> (gdb) si
> (gdb) thread 1
> (gdb) c
Indeed, as expected, same sort of symptom:
(gdb) thread 1
[Switching to thread 1 (Thread 30)]
#0 0x1004ed94 in _trap_ ()
(gdb) si
0x1004ed98 in _trap_ ()
(gdb) thread 2
[Switching to thread 2 (Thread 36)]
#0 task_switch.break_me () at task_switch.adb:42
42 null;
(gdb) cont
Continuing.
Program received signal SIG62, Real-time event 62.
task_switch.break_me () at task_switch.adb:42
42 null;
> BTW, vCont;c means "resume all threads", why is the current code just
> resuming one?
It's actually using a ptrace request that applies to the process
(either PTRACE_CONT or PTRACE_SINGLE_STEP).
I never tried to implement single-thread control (scheduler-locking
on), as this is not something we're interested on for this platform,
at least for now...
> lynx_wait_1 ()
> ...
> if (ptid_equal (ptid, minus_one_ptid))
> pid = lynx_ptid_get_pid (thread_to_gdb_id (current_inferior));
> else
> pid = BUILDPID (lynx_ptid_get_pid (ptid), lynx_ptid_get_tid (ptid));
>
> retry:
>
> ret = lynx_waitpid (pid, &wstat);
>
>
> is suspicious also.
I understand... It's a bit of a hybrid between trying to deal with
thread-level execution control, and process-level execution control.
> Doesn't that mean we're doing a waitpid on
> a possibly not-resumed current_inferior (that may not be the main task,
> if that matters)? Could _that_ be reason for that magic signal 61?
Given the above (we resume processes, rather than threads individually),
I do not think that this is the source of the problem itself. I blame
the thread library for not liking it when we potentially alter the
program's scheduling by resuming a non-active thread. This patch does
not prevent this from happening, but at least makes an effort to
avoid it in the usual situations.
--
Joel
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 13:28 ` Joel Brobecker
@ 2013-05-13 14:28 ` Pedro Alves
2013-05-16 12:24 ` Joel Brobecker
0 siblings, 1 reply; 10+ messages in thread
From: Pedro Alves @ 2013-05-13 14:28 UTC (permalink / raw)
To: Joel Brobecker; +Cc: gdb-patches
On 05/13/2013 02:28 PM, Joel Brobecker wrote:
> Lynx178 is derived from
> an old version of LynxOS, which may explain why newer versions
> are a little more robust in that respect.
Ah. I really have no sense of whether 178 is old or recent. ;-)
>
> I tried to get more info directly from the people who I thought
> would know about this, but never managed to make progress in that
> direction, so I gave up when I found this solution.
>
>> So does that mean scheduler locking doesn't work?
>>
>> E.g.,
>>
>> (gdb) thread 2
>> (gdb) si
>> (gdb) thread 1
>> (gdb) c
> Indeed, as expected, same sort of symptom:
>
> (gdb) thread 1
> [Switching to thread 1 (Thread 30)]
> #0 0x1004ed94 in _trap_ ()
> (gdb) si
> 0x1004ed98 in _trap_ ()
> (gdb) thread 2
> [Switching to thread 2 (Thread 36)]
> #0 task_switch.break_me () at task_switch.adb:42
> 42 null;
> (gdb) cont
> Continuing.
>
> Program received signal SIG62, Real-time event 62.
> task_switch.break_me () at task_switch.adb:42
> 42 null;
>
>> BTW, vCont;c means "resume all threads", why is the current code just
>> resuming one?
>
> It's actually using a ptrace request that applies to the process
> (either PTRACE_CONT or PTRACE_SINGLE_STEP).
> I never tried to implement single-thread control (scheduler-locking
> on), as this is not something we're interested on for this platform,
> at least for now...
Okay... I see the file has a reference to PTRACE_CONT_ONE/PTRACE_SINGLE_STEP_ONE
though they're not really being used. As PTRACE_SINGLE_STEP resumes all
threads in the process, when stepping over a breakpoint, other
threads may miss breakpoints...
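A toy model (illustrative only, not gdbserver code) of how a process-wide single-step lets other threads sail past a breakpoint that was lifted for a step-over:

```c
#include <assert.h>

#define INSN_SIZE 4   /* pretend every instruction is 4 bytes */

struct thread { unsigned long pc; };

/* Advance every thread in the process by one instruction, as a
   process-wide PTRACE_SINGLESTEP would.  A thread traps only if it
   lands on an address where a breakpoint is currently inserted.
   Returns the number of threads that trapped.  */
static int
process_wide_step (struct thread *th, int nthreads,
                   unsigned long bp_addr, int bp_inserted)
{
  int traps = 0;
  for (int i = 0; i < nthreads; i++)
    {
      th[i].pc += INSN_SIZE;
      if (bp_inserted && th[i].pc == bp_addr)
        traps++;
    }
  return traps;
}
```

If the breakpoint is removed so that thread 0 can step over it, a thread sitting one instruction short of that address crosses it during the same process-wide step without reporting anything.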
Old lynx-nat.c did:
http://sourceware.org/cgi-bin/cvsweb.cgi/src/gdb/Attic/lynx-nat.c?rev=1.23&content-type=text/x-cvsweb-markup&cvsroot=src
/* If pid == -1, then we want to step/continue all threads, else
we only want to step/continue a single thread. */
if (pid == -1)
{
pid = PIDGET (inferior_ptid);
func = step ? PTRACE_SINGLESTEP : PTRACE_CONT;
}
else
func = step ? PTRACE_SINGLESTEP_ONE : PTRACE_CONT_ONE;
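In isolation, that selection logic amounts to the following (the request constants are placeholders for illustration; the real values come from the LynxOS <sys/ptrace.h>):

```c
#include <assert.h>

/* Placeholder request codes standing in for the LynxOS ptrace
   requests of the same names.  */
enum ptrace_req
{
  PTRACE_CONT,
  PTRACE_SINGLESTEP,
  PTRACE_CONT_ONE,
  PTRACE_SINGLESTEP_ONE
};

/* Sketch of the old lynx-nat.c choice: a wildcard pid (-1) resumes
   or steps every thread in the process; a specific pid resumes or
   steps that one thread only.  */
static enum ptrace_req
choose_request (int pid, int step)
{
  if (pid == -1)
    return step ? PTRACE_SINGLESTEP : PTRACE_CONT;
  return step ? PTRACE_SINGLESTEP_ONE : PTRACE_CONT_ONE;
}
```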
I'd like to believe that just doing that in gdbserver too
would fix the scheduler-locking example. :-)
For the SIG61 issue, I wonder whether for PTRACE_CONT,
it's "continue main pid process" that we should always use
instead of "last reported thread id" (and that's what the old
lynx-nat.c did too). Did you try that?
Sorry to be picky. IMO, it's good to have all these
experimentation results archived, for when somebody proposes
removing/changing the "make sure to resume last reported" code
at some point...
>
>> lynx_wait_1 ()
>> ...
>> if (ptid_equal (ptid, minus_one_ptid))
>> pid = lynx_ptid_get_pid (thread_to_gdb_id (current_inferior));
>> else
>> pid = BUILDPID (lynx_ptid_get_pid (ptid), lynx_ptid_get_tid (ptid));
>>
>> retry:
>>
>> ret = lynx_waitpid (pid, &wstat);
>>
>>
>> is suspicious also.
>
> I understand... It's a bit of a hybrid between trying to deal with
> thread-level execution control, and process-level execution control.
I actually misread this. lynx_ptid_get_pid returns the main pid of the
process, while I read that as getting at the current_inferior's tid.
>> Doesn't that mean we're doing a waitpid on
>> a possibly not-resumed current_inferior (that may not be the main task,
>> if that matters)? Could _that_ be reason for that magic signal 61?
>
> Given the above (we resume processes, rather than threads individually),
> I do not think that this is the source of the problem itself. I blame
> the thread library for not liking it when we potentially alter the
> program's scheduling by resuming a non-active thread. This patch does
> not prevent this from happening, but at least makes an effort to
> avoid it in the usual situations.
--
Pedro Alves
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 14:28 ` Pedro Alves
@ 2013-05-16 12:24 ` Joel Brobecker
2013-05-16 13:14 ` Pedro Alves
0 siblings, 1 reply; 10+ messages in thread
From: Joel Brobecker @ 2013-05-16 12:24 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches
> Old lynx-nat.c did:
>
> http://sourceware.org/cgi-bin/cvsweb.cgi/src/gdb/Attic/lynx-nat.c?rev=1.23&content-type=text/x-cvsweb-markup&cvsroot=src
>
> /* If pid == -1, then we want to step/continue all threads, else
> we only want to step/continue a single thread. */
> if (pid == -1)
> {
> pid = PIDGET (inferior_ptid);
> func = step ? PTRACE_SINGLESTEP : PTRACE_CONT;
> }
> else
> func = step ? PTRACE_SINGLESTEP_ONE : PTRACE_CONT_ONE;
>
>
> I'd like to believe that just doing that in gdbserver too
> would fix the scheduler-locking example. :-)
I just tried that, and I am not sure yet how well this is going to
work. It'll at least require a change in the "wait" routine which
resumes the execution after a "new-thread" event: we do not want
to resume the execution using that thread's ptid, since we would
then switch to a PTRACE_CONT_ONE request. I tried to see if I could make it work
quickly, but got inconclusive results (process hanging), so I am
leaving that for another day :-).
> For the SIG61 issue, I wonder whether for PTRACE_CONT,
> it's "continue main pid process" that we should always use
> instead of "last reported thread id" (and that's what the old
> lynx-nat.c did too). Did you try that?
Yes, I did that a while ago. Looking at the man page for
PTRACE_CONT:
This request is always directed to an individual
thread specified by pid, while all the threads in
the traced process are also to be resumed.
The man page also says a bit earlier:
Per-thread, but effective on the entire process.
Based on the above, I think that using the currently "active"
thread (the thread that caused the process to stop) helps
avoid having the debugger influence the program's behavior
by influencing its scheduling.
Nevertheless, I tried that again today, and that is not sufficient
to prevent the SIG61 signal from being raised.
So I think that the patch as I proposed it still makes sense.
I know you pre-approved it, but I want to make sure that I answered
all your questions properly before going ahead with the commit.
> Sorry to be picky. IMO, it's good to have all these
> experimentation results archived, for when somebody proposes
> removing/changing the "make sure to resume last reported" code
> at some point...
Not at all, I think this makes sense.
--
Joel
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-16 12:24 ` Joel Brobecker
@ 2013-05-16 13:14 ` Pedro Alves
0 siblings, 0 replies; 10+ messages in thread
From: Pedro Alves @ 2013-05-16 13:14 UTC (permalink / raw)
To: Joel Brobecker; +Cc: gdb-patches
On 05/16/2013 01:24 PM, Joel Brobecker wrote:
> So I think that the patch as I proposed it still makes sense.
Indeed.
> I know you pre-approved it, but I want to make sure that I answered
> all your questions properly before going ahead with the commit.
You have, thanks. Please go ahead.
--
Pedro Alves
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 10:46 [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior Joel Brobecker
2013-05-13 11:22 ` Pedro Alves
@ 2013-05-13 14:36 ` Pedro Alves
2013-05-17 6:57 ` Joel Brobecker
2013-05-17 6:48 ` Checked in: " Joel Brobecker
2 siblings, 1 reply; 10+ messages in thread
From: Pedro Alves @ 2013-05-13 14:36 UTC (permalink / raw)
To: Joel Brobecker; +Cc: gdb-patches
On 05/13/2013 11:46 AM, Joel Brobecker wrote:
> (lynx_resume): If PTID is null, then try using
> current_process()->private->last_wait_event_ptid.
> @@ -260,6 +284,19 @@ lynx_resume (struct thread_resume *resume_info, size_t n)
> ? PTRACE_SINGLESTEP : PTRACE_CONT);
> const int signal = resume_info[0].sig;
>
> + /* If given a null_ptid, then try using the current_process'
> + private->last_wait_event_ptid. On most LynxOS versions,
> + using any of the process' thread works well enough, but
> + LynxOS 178 is a little more sensitive, and triggers some
> + unexpected signals (Eg SIG61) when we resume the inferior
> + using a different thread. */
> + if (ptid_equal (ptid, minus_one_ptid))
> + ptid = current_process()->private->last_wait_event_ptid;
> +
> + /* The ptid might still be NULL; this can happen between the moment
> + we create the inferior or attach to a process, and the moment
> + we resume its execution for the first time. It is fine to
> + use the current_inferior's ptid in those cases. */
> if (ptid_equal (ptid, minus_one_ptid))
> ptid = thread_to_gdb_id (current_inferior);
>
> @@ -285,16 +322,6 @@ lynx_continue (ptid_t ptid)
> lynx_resume (&resume_info, 1);
> }
Nit, the comments above talk about null_ptid, while the code is
really checking for minus_one_ptid (wildcard).
Otherwise, if this is really what's necessary for Lynx178,
then this is OK.
--
Pedro Alves
* Re: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 14:36 ` Pedro Alves
@ 2013-05-17 6:57 ` Joel Brobecker
0 siblings, 0 replies; 10+ messages in thread
From: Joel Brobecker @ 2013-05-17 6:57 UTC (permalink / raw)
To: gdb-patches
[-- Attachment #1: Type: text/plain, Size: 430 bytes --]
> Nit, the comments above talk about null_ptid, while the code is
> really checking for minus_one_ptid (wildcard).
Argh! Going through the emails, I noticed I forgot about this comment
before committing. Sorry about that...
Fixed thusly.
gdb/gdbserver/ChangeLog:
* lynx-low.c (lynx_resume): Fix null_ptid/minus_one_ptid
confusion in comment.
Tested by rebuilding the lynx178 gdbserver. Checked in.
--
Joel
[-- Attachment #2: 0001-gdbserver-lynx178-Fix-null_ptid-vs-minus_one_ptid-co.patch --]
[-- Type: text/x-diff, Size: 2507 bytes --]
From 415cbe11443c18ee01256eb6d529005a6c6fa7e2 Mon Sep 17 00:00:00 2001
From: Joel Brobecker <brobecker@adacore.com>
Date: Fri, 17 May 2013 02:49:40 -0400
Subject: [PATCH] [gdbserver/lynx178]: Fix null_ptid -vs- minus_one_ptid confusion in comment
gdb/gdbserver/ChangeLog:
* lynx-low.c (lynx_resume): Fix null_ptid/minus_one_ptid
confusion in comment.
---
gdb/gdbserver/ChangeLog | 5 +++++
gdb/gdbserver/lynx-low.c | 10 +++++-----
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/gdb/gdbserver/ChangeLog b/gdb/gdbserver/ChangeLog
index 5cc2c25..bc2ba38 100644
--- a/gdb/gdbserver/ChangeLog
+++ b/gdb/gdbserver/ChangeLog
@@ -1,5 +1,10 @@
2013-05-17 Joel Brobecker <brobecker@adacore.com>
+ * lynx-low.c (lynx_resume): Fix null_ptid/minus_one_ptid
+ confusion in comment.
+
+2013-05-17 Joel Brobecker <brobecker@adacore.com>
+
* lynx-low.c (struct process_info_private): New type.
(lynx_add_process): New function.
(lynx_create_inferior, lynx_attach): Replace calls to
diff --git a/gdb/gdbserver/lynx-low.c b/gdb/gdbserver/lynx-low.c
index b4cb5d2..3dbffa5 100644
--- a/gdb/gdbserver/lynx-low.c
+++ b/gdb/gdbserver/lynx-low.c
@@ -284,7 +284,7 @@ lynx_resume (struct thread_resume *resume_info, size_t n)
? PTRACE_SINGLESTEP : PTRACE_CONT);
const int signal = resume_info[0].sig;
- /* If given a null_ptid, then try using the current_process'
+ /* If given a minus_one_ptid, then try using the current_process'
private->last_wait_event_ptid. On most LynxOS versions,
using any of the process' thread works well enough, but
LynxOS 178 is a little more sensitive, and triggers some
@@ -293,10 +293,10 @@ lynx_resume (struct thread_resume *resume_info, size_t n)
if (ptid_equal (ptid, minus_one_ptid))
ptid = current_process()->private->last_wait_event_ptid;
- /* The ptid might still be NULL; this can happen between the moment
- we create the inferior or attach to a process, and the moment
- we resume its execution for the first time. It is fine to
- use the current_inferior's ptid in those cases. */
+ /* The ptid might still be minus_one_ptid; this can happen between
+ the moment we create the inferior or attach to a process, and
+ the moment we resume its execution for the first time. It is
+ fine to use the current_inferior's ptid in those cases. */
if (ptid_equal (ptid, minus_one_ptid))
ptid = thread_to_gdb_id (current_inferior);
--
1.7.0.4
* Checked in: [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior.
2013-05-13 10:46 [RFA] gdbserver/lynx178: spurious SIG61 signal when resuming inferior Joel Brobecker
2013-05-13 11:22 ` Pedro Alves
2013-05-13 14:36 ` Pedro Alves
@ 2013-05-17 6:48 ` Joel Brobecker
2 siblings, 0 replies; 10+ messages in thread
From: Joel Brobecker @ 2013-05-17 6:48 UTC (permalink / raw)
To: gdb-patches
> gdbserver/ChangeLog:
>
> * lynx-low.c (struct process_info_private): New type.
> (lynx_add_process): New function.
> (lynx_create_inferior, lynx_attach): Replace calls to
> add_process by calls to lynx_add_process.
> (lynx_resume): If PTID is null, then try using
> current_process()->private->last_wait_event_ptid.
> Add comments.
> (lynx_clear_inferiors): Delete. The contents of that function
> have been inlined in lynx_mourn.
> (lynx_wait_1): Save the ptid in the process's private data.
> (lynx_mourn): Free the process' private data. Replace call
> to lynx_clear_inferiors by call to clear_inferiors.
Checked in.
--
Joel