From: Daniel Jacobowitz <drow@mvista.com>
To: Andrew Cagney <ac131313@redhat.com>
Cc: Fernando Nasser <fnasser@redhat.com>, gdb-patches@sources.redhat.com
Subject: Re: [patch/rfc] Remove all setup_xfail's from testsuite/gdb.mi/
Date: Thu, 16 Jan 2003 19:55:00 -0000
Message-ID: <20030116195520.GA22164@nevyn.them.org>
In-Reply-To: <3E2701DA.9060306@redhat.com>

Once again, I feel the need to apologize for my tone. I'm being too
sensitive about this. Sorry... let's try this again.
On Thu, Jan 16, 2003 at 02:02:50PM -0500, Andrew Cagney wrote:
>
> >I don't think making it a requirement that go out and analyze all the
> >existing XFAILs is reasonable, although it is patently something we
> >need to do. That's not the same as ripping them out and introducing
> >failures in the test results without addressing those failures.
>
> >>As a specific example, the i386 has an apparently low failure rate.
> >>That rate is badly misleading and the real number of failures is much
> >>higher :-( It's just that those failures have been [intentionally]
> >>camouflaged using xfail. It would be unfortunate if people, for the
> >>i386, tried to use that false result (almost zero fails) when initially
> >>setting the bar.
> >
> >
> >Have you reviewed the list of XFAILs? None of them are related to the
> >i386. One, in signals.exp, is either related to GDB's handling of
> >signals or to a longstanding limitation in most operating system
> >kernels, depending how you look at it. The rest are pretty much
> >platform independent.
>
> I've been through the files and looked at the actual xfail markings.
> They are dominated by what look like cpu specific cases (rs6000 and HP
> are especially bad at this).
Most of the ones marked *-*-* are actually generic, even when they have
HP markings, in my experience.
> I've also noticed cases where simply yanking the xfail doesn't make
> sense - when the failure has already been analyzed (easy to spot since
> they are conditional on the debug info or compiler version).
Definitely. On the other hand, the particular choice of xfail
conditions is often really bogus.
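For readers less familiar with the markings under discussion: in a
DejaGnu .exp file, an expected failure is declared with setup_xfail
immediately before the single test it conditions. A rough sketch of the
three flavors discussed in this thread - the test commands, messages,
compiler check, and PR number below are all hypothetical, not taken
from any real gdb.mi file:

```tcl
# Marked against every target (*-*-*): in practice this is usually a
# generic GDB bug, i.e. a candidate for setup_kfail, not setup_xfail.
setup_xfail "*-*-*"
gdb_test "print foo" " = 0" "print foo before run"

# Conditioned on the compiler: the failure has already been analyzed,
# so simply yanking the marking would lose that analysis.
if { $gcc_compiled } {
    setup_xfail "rs6000-*-*" "hppa*-*-*"
}
gdb_test "ptype inner" "type = struct .*" "ptype of nested struct"

# A known, filed GDB bug: kfail'd with a PR number ("gdb/NNNN" is a
# placeholder here, not a real PR).
setup_kfail "gdb/NNNN" "*-*-*"
gdb_test "interrupt" "" "interrupt the inferior"
```

Since setup_xfail applies only to the next test, each marking has to
sit directly above the check it excuses, which is what makes auditing
them file by file tractable.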
> >>This is also why I think the xfail's should simply be yanked. It acts
> >>as a one time reset of gdb's test results, restoring them to their true
> >>values. While this may cause the bar to start out lower than some
> >>would like, I think that is far better and far more realistic than
> >>trying to start with a bar falsely set too high.
> >
> >
> >This is a _regression_ testsuite. I've been trying for months to get
> >it down to zero failures without compromising its integrity, and I've
> >just about done it for one target, by judicious use of KFAILs (and
> >fixing bugs!). The existing XFAILs all look to me like either
> >legitimate XFAILs or things that should be KFAILed. If you're going
> >to rip up my test results, please sort them accordingly first.
>
> No one is ripping up your individual and personal test results.
>
> Several years ago some maintainers were intentionally xfailing many of
> the bugs that they had no intention of fixing. That was wrong, and that
> needs to be fixed.
>
> An unfortunate consequence of that action is that the zero you've been
> shooting for is really only a local minimum. The real zero is further
> out, that zero was a mirage :-(
Close, close... what I'm trying to avoid is a local minimum. The zero
I've been shooting for should be a local _plateau_. Then we continue
going down as XFAIL/KFAILs are fixed/analyzed/recategorized/everything
else that happens to bugs when they go to bug heaven.
> >It doesn't need to be done all at once. We can put markers in .exp
> >files saying "xfails audited". But I think that we should audit
> >individual files, not yank madly.
>
> (which reminds me, the existing xfail reference to bug reports need to
> be ripped out - they refer to Red Hat and HP bug databases :-().
Ayup.
> > If
> >you introduce seventy failures, then that's another couple of weeks I
> >can't just look at the results, see "oh, two failures in threads and
> >that's it, I didn't break anything".
>
> People doing proper test analysis should be comparing the summary files
> and not the final numbers. A summary analysis would show 70 XFAIL->FAIL
> changes, but no real regressions.
I do, but it's exceedingly convenient for, e.g., automated testing
purposes to have the actual number of FAILs come out as zero and each
bug to be otherwise accounted for. What I would like to do is get to
that point, and then recategorize directly from:
XFAIL->KFAIL
random XFAIL->analyzed XFAIL
XFAIL->PASS
etc. on a case-by-case basis. I don't see any benefit from ripping out
the XFAILs wholesale and then analyzing them as we find the time; why
not (rip out and analyze) as we find the time?
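The summary comparison mentioned above can be sketched with ordinary
shell tools: key each .sum line by test name and report status
transitions, so an XFAIL->FAIL change is visible as exactly that rather
than only as a higher raw FAIL count. The .sum contents below are
invented for illustration; real gdb.sum files carry more statuses
(UNTESTED, UNSUPPORTED, and so on):

```shell
# Fake before/after summary files, for illustration only.
cat > old.sum <<'EOF'
XFAIL: gdb.mi/mi-var-child.exp: step over first child
PASS: gdb.mi/mi-var-child.exp: create varobj
EOF
cat > new.sum <<'EOF'
FAIL: gdb.mi/mi-var-child.exp: step over first child
PASS: gdb.mi/mi-var-child.exp: create varobj
EOF

# First pass (NR==FNR) records old statuses keyed by test name; the
# second pass prints any test whose status changed.
awk 'match($0, /^(XFAIL|KFAIL|FAIL|PASS): /) {
       status = substr($0, 1, RLENGTH - 2)
       name   = substr($0, RLENGTH + 1)
       if (NR == FNR) old[name] = status
       else if ((name in old) && old[name] != status)
         print name ": " old[name] " -> " status
     }' old.sum new.sum
# prints: gdb.mi/mi-var-child.exp: step over first child: XFAIL -> FAIL
```

Splitting on the first "STATUS: " prefix rather than on every colon
matters, because DejaGnu test names themselves routinely contain
colons.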
> Anyway,
>
> If the existing (bogus) xfail PR numbers are _all_ ripped out, and all
> new xfails are then required to include a corresponding bug report, I
> think there is a way forward.
This I definitely like. "Cantfix"?
--
Daniel Jacobowitz
MontaVista Software / Debian GNU/Linux Developer