Mirror of the gdb-patches mailing list
From: Mark Kettenis <mark.kettenis@xs4all.nl>
To: gbenson@redhat.com
Cc: tromey@redhat.com, palves@redhat.com, fw@deneb.enyo.de,
	       mark.kettenis@xs4all.nl, gdb-patches@sourceware.org
Subject: Re: [PATCH 0/2] Demangler crash handler
Date: Thu, 22 May 2014 14:40:00 -0000	[thread overview]
Message-ID: <201405221440.s4MEeEbx021165@glazunov.sibelius.xs4all.nl> (raw)
In-Reply-To: <20140522140904.GD15598@blade.nx> (message from Gary Benson on	Thu, 22 May 2014 15:09:04 +0100)

> Date: Thu, 22 May 2014 15:09:04 +0100
> From: Gary Benson <gbenson@redhat.com>
> 
> Tom Tromey wrote:
> > Pedro> Then stealing a signal handler always has multi-threading
> > Pedro> considerations.  E.g., gdb Python code could well spawn a
> > Pedro> thread that happens to call something that wants its own
> > Pedro> SIGSEGV handler...  Signal handlers are per-process, not
> > Pedro> per-thread.
> > 
> > That is true in theory but I think it is unlikely in practice.  And,
> > should it happen -- well, the onus is on folks writing extensions
> > not to mess things up.  That's the nature of the beast.  And, sure,
> > it is messy, particularly if we ever upstream "import gdb", but even
> > so, signals are just fraught and this is not an ordinary enough
> > usage to justify preventing gdb from doing it.
> 
> GDB installs handlers for INT, TERM, QUIT, HUP, FPE, WINCH, CONT,
> TTOU, TRAP, ALRM and TSTP, and some other platform-specific ones
> I didn't recognise.  Is there anything that means SIGSEGV should
> be treated differently to all these other signals?

From that list SIGFPE is probably a bogosity.  I don't think the
SIGFPE handler will do the right thing on many OSes and architectures
supported by GDB, since it is unspecified whether the trapping
instruction will be re-executed upon return from the signal handler.
I'd argue that the SIGFPE handler is just as unhelpful as the SIGSEGV
handler you're proposing.  Luckily, we don't seem to have a lot of
division-by-zero bugs in the code base.

> > The choice is really between SEGV catching and "somebody else
> > down the road fixes more demangler bugs".
> 
> The demangler bugs will get fixed one way or another.  The choice is:
> do we allow users to continue to use GDB while the bug they've hit is
> fixed, or, do we make them wait?  In the expectation that they will
> put their own work aside while they fix GDB instead?

Unless there is a way to force a core dump (like internal_error()
offers) with the state at the point of the SIGSEGV in it, yes, we need
to make them wait or fix it themselves.

I'd really like to avoid adding a SIGSEGV handler altogether.  But I'm
willing to compromise if the signal handler offers the opportunity to
create a core dump.  Doing so in a signal-safe way will of course be a
bit tricky.



Thread overview: 50+ messages
2014-05-09 10:07 Gary Benson
2014-05-09 10:09 ` [PATCH 1/2] " Gary Benson
2014-05-09 10:10 ` [PATCH 2/2] " Gary Benson
2014-05-09 11:20 ` [PATCH 0/2] " Mark Kettenis
2014-05-09 15:33   ` Gary Benson
2014-05-11  5:17     ` Doug Evans
2014-05-13 10:20       ` Gary Benson
2014-05-13 19:29         ` Tom Tromey
2014-05-14 13:07           ` Gary Benson
2014-05-13 19:39         ` Tom Tromey
2014-05-14  9:15           ` Gary Benson
2014-05-11 20:23     ` Mark Kettenis
2014-05-13 10:21       ` Gary Benson
2014-05-13 16:05         ` Pedro Alves
2014-05-15 13:24           ` Gary Benson
2014-05-15 14:07             ` Pedro Alves
2014-05-15 14:28               ` Gary Benson
2014-05-15 15:25                 ` Pedro Alves
2014-05-16 11:06             ` Pedro Alves
2014-05-10 20:55   ` Florian Weimer
2014-05-11  5:10     ` Doug Evans
2014-05-13 10:22     ` Gary Benson
2014-05-13 18:22       ` Florian Weimer
2014-05-13 18:42         ` Pedro Alves
2014-05-13 19:16           ` Gary Benson
2014-05-13 19:19             ` Pedro Alves
2014-05-14  9:11               ` Gary Benson
2014-05-13 19:20           ` Florian Weimer
2014-05-13 19:22             ` Pedro Alves
2014-05-13 19:22         ` Gary Benson
2014-05-13 19:36           ` Tom Tromey
2014-05-14  9:13             ` Gary Benson
2014-05-14 14:18     ` Pedro Alves
2014-05-14 16:08       ` Andrew Burgess
2014-05-14 18:32         ` Pedro Alves
2014-05-15 13:25           ` Gary Benson
2014-05-15 16:01             ` Pedro Alves
2014-05-15 13:27       ` Gary Benson
2014-05-20 17:05       ` Tom Tromey
2014-05-20 18:40         ` Stan Shebs
2014-05-20 19:36           ` Tom Tromey
2014-05-20 20:23             ` Joel Brobecker
2014-05-22 12:56               ` Gary Benson
2014-05-22 13:09                 ` Joel Brobecker
2014-05-22 14:13                 ` Pedro Alves
2014-05-22 15:57                   ` Gary Benson
2014-05-22 13:18           ` Gary Benson
2014-05-22 14:09         ` Gary Benson
2014-05-22 14:40           ` Mark Kettenis [this message]
2014-05-22 20:42             ` Gary Benson

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save this message as an mbox file, import it into your mail client,
  and reply-to-all from there.

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=201405221440.s4MEeEbx021165@glazunov.sibelius.xs4all.nl \
    --to=mark.kettenis@xs4all.nl \
    --cc=fw@deneb.enyo.de \
    --cc=gbenson@redhat.com \
    --cc=gdb-patches@sourceware.org \
    --cc=palves@redhat.com \
    --cc=tromey@redhat.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html
