Mirror of the gdb-patches mailing list
From: Pedro Alves <pedro@codesourcery.com>
To: gdb-patches@sourceware.org
Cc: Tom Tromey <tromey@redhat.com>, Keith Seitz <keiths@redhat.com>,
	Pierre Muller <pierre.muller@ics-cnrs.unistra.fr>
Subject: Re: [RFA] Fix for testsuite errors with gdbserver (remote)
Date: Mon, 21 Feb 2011 11:50:00 -0000	[thread overview]
Message-ID: <201102211150.04538.pedro@codesourcery.com> (raw)
In-Reply-To: <m38vxdxezs.fsf@fleche.redhat.com>

On Friday 18 February 2011 20:30:15, Tom Tromey wrote:
> Tom> All that python_inferior_exit is doing to provoke the crash is calling
> Tom> get_current_arch.  Under what circumstances is this not safe?  I would
> Tom> have thought -- perhaps naively -- that it was always safe.

You should only call get_current_arch when preparing to run a command:
it gives the default context gdbarch for user commands (so the "global"
is really the interpreter's default gdbarch).
That was part of the whole effort to eliminate the current_gdbarch global.
Everywhere else, code should use the gdbarch associated with whatever
operation or object it is performing or manipulating.  E.g., to do
something in the current context while the inferior is running, use the
current frame's gdbarch; to do something related to an objfile, use the
objfile's gdbarch; etc.
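
In other words (a rough sketch, using the GDB-internal accessors
as I remember them; this is an illustration, not code from any
patch here):

```c
/* Operating in the context of the current frame while the inferior
   runs: take the arch from the frame.  */
struct gdbarch *frame_arch = get_frame_arch (get_current_frame ());

/* Operating on an objfile: take the arch from the objfile.  */
struct gdbarch *objfile_arch = get_objfile_arch (objfile);

/* Only at the top level, when preparing to execute a user command,
   is the interpreter's default arch the appropriate one.  */
struct gdbarch *cmd_arch = get_current_arch ();
```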

At the top level, when preparing to execute a command, the whole global
state (inferior_ptid, current_inferior, etc.) is in the state the
core wants to see it in.  Several places in GDB need to switch
inferior_ptid to something else temporarily, at which point you lose
the mapping, relied on in most places on the core side, between
inferior_ptid and a thread.  The tear-down code paths
do _not_ and cannot assume there's such a connection:
inferior_ptid will point to a whole process, not a thread.  There
are other such places, but mostly at boundaries between the core and
a target_ops.  Python is probably not involved at such
times.  I hope, at least.

Now, the Python hooks added a new subpath in the teardown path
that queries global state at a time when that state is
transient, expecting to see it as the top level would.

Looking at the code,

 static void
 python_inferior_exit (struct inferior *inf)
 {
   struct cleanup *cleanup;
   LONGEST exit_code = -1;
   ptid_t ptidp;
   struct target_waitstatus status;

   cleanup = ensure_python_env (get_current_arch (), current_language);
 
   get_last_target_status (&ptidp, &status);

   exit_code = status.value.integer;

   if (exit_code >= 0
       && emit_exited_event (exit_code) < 0)
     gdbpy_print_stack ();

   do_cleanups (cleanup);
 }

I noted in <http://sourceware.org/ml/gdb/2011-02/msg00108.html>
that the get_current_arch() call doesn't have any connection with
the INF argument.  That can be seen in the case at hand: we're
iterating over all inferiors and "exiting" them, yet the current
inferior stays the same throughout the whole iteration.
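
If the event really is supposed to be about INF, then something along
these lines would at least be self-consistent (assuming we can get at
a gdbarch from the inferior; I write it as inf->gdbarch below purely
for illustration):

```c
static void
python_inferior_exit (struct inferior *inf)
{
  struct cleanup *cleanup;

  /* Use the arch of the inferior that is actually exiting, not
     whatever the global context happens to point at.  */
  cleanup = ensure_python_env (inf->gdbarch, current_language);

  /* ... rest as before ... */

  do_cleanups (cleanup);
}
```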

I don't really understand the reason for python_gdbarch's existence,
and why we need to keep ensuring/preserving it throughout,
instead of using the most appropriate gdbarch whenever one
is necessary.

As I asked on the gdb@ list, what is the gdbarch that the
exited_event callback wants to see as the current python_gdbarch?
One related to INF?  That's not where the whole current
context is pointing.  Or whichever current thread or inferior
was selected at the top level before gdb started handling
the inferior's exit (think non-stop)?  Or is it
just making sure python_gdbarch remains unaltered
by the callback?  Then why not use python_gdbarch instead of
get_current_arch()?  That is the "something" that I
was thinking this code may not be doing right.

That all said, it looks like the Python API doesn't define
exactly what can be done from an ExitedEvent handler, and even if
we address the gdbarch issue, if it's intended that the
Python user code can do whatever it likes, we're likely to see
python/gdb calls there that end up in has_stack_frames,
and those will internal-error too.  On that ground, I do think
we should put in Pierre's patch, as I'm not seeing what
possible harm it could do, and it gets things working again.
But I'll note that if the handler is allowed to assume the
whole API is available at this point, and that the current
global state can be used as if at the top level, we'll
probably see other similar cases triggering.
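
For instance, an innocent-looking user script like this (assuming the
events API from this patch series, gdb.events.exited, is in place)
would run straight into that:

```python
# Hypothetical user script; not from any patch under review.
import gdb

def on_exit (event):
    # The inferior that just exited has no stack; with the global
    # state mid-teardown, this path can hit has_stack_frames and
    # internal-error rather than raising a clean gdb.error.
    print (gdb.selected_frame ().name ())

gdb.events.exited.connect (on_exit)
```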

-- 
Pedro Alves


Thread overview: 8+ messages
     [not found] <4D5C71F6.80208@vmware.com>
     [not found] ` <4D5C7552.4010708@vmware.com>
     [not found]   ` <4D5C8312.4030701@vmware.com>
     [not found]     ` <4D5D6D83.8030302@vmware.com>
     [not found]       ` <4D5D8974.5030205@redhat.com>
2011-02-18 16:50         ` Pierre Muller
2011-02-18 18:56           ` Keith Seitz
2011-02-18 20:26             ` Pedro Alves
2011-02-18 20:30               ` Tom Tromey
2011-02-18 20:31                 ` Tom Tromey
2011-02-21 11:50                   ` Pedro Alves [this message]
2011-02-21 15:47                     ` Pierre Muller
2011-02-21 15:54                       ` Pedro Alves
