Mirror of the gdb-patches mailing list
From: Yao Qi <yao@codesourcery.com>
To: "Agovic, Sanimir" <sanimir.agovic@intel.com>
Cc: "gdb-patches@sourceware.org" <gdb-patches@sourceware.org>
Subject: Re: [RFC] GDB performance testing infrastructure
Date: Wed, 28 Aug 2013 03:04:00 -0000	[thread overview]
Message-ID: <521D6862.6050100@codesourcery.com> (raw)
In-Reply-To: <0377C58828D86C4588AEEC42FC3B85A71764E92B@IRSMSX105.ger.corp.intel.com>

On 08/27/2013 09:49 PM, Agovic, Sanimir wrote:
>> * Remote debugging.  It is slower to read from the remote target,
>>   and, worse, GDB reads the same memory regions multiple times, or
>>   reads consecutive memory using multiple packets.
>>
> Once gdb and gdbserver share most of the target code, the overhead will be
> caused by the serial protocol roundtrips. But this will take a while...

Sanimir, thanks for your comments!

One of the motivations for performance testing is to measure the
overhead of the RSP in various scenarios and look for opportunities to
improve it, or, in the extreme case, to add a completely new protocol.

Once the infrastructure is ready, we can write some tests to see how 
efficient or inefficient RSP is.
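Such a test could start as a simple timing harness around a repeated workload.  The sketch below is my own illustration, not part of the proposed framework; the names (`benchmark`, the stand-in workload) are assumptions, and a real RSP benchmark would drive GDB against gdbserver instead of the local stand-in used here:

```python
import time

# Hypothetical sketch of a micro-benchmark driver: run a workload N
# times and report wall-clock statistics.  In a real RSP benchmark the
# workload would issue remote reads; here it is just a stand-in callable.

def benchmark(workload, iterations=5):
    """Run `workload` repeatedly and return wall-time statistics."""
    times = []
    for _ in range(iterations):
        start = time.perf_counter()
        workload()
        times.append(time.perf_counter() - start)
    return {"min": min(times), "max": max(times),
            "mean": sum(times) / len(times)}

# Stand-in workload: slice a buffer in small chunks, mimicking the
# pattern (many small reads of consecutive memory) whose RSP round-trip
# cost such a test would expose.
data = bytes(64 * 1024)
stats = benchmark(lambda: [data[i:i + 256] for i in range(0, len(data), 256)])
print(sorted(stats))  # ['max', 'mean', 'min']
```

Comparing the same workload run natively and over a gdbserver connection would then quantify the protocol overhead directly.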

>
>>   * Tracepoint.  Tracepoints are designed to collect data in the
>>     inferior efficiently, so we need performance tests to guarantee
>>     that tracepoints remain efficient.  Note that we have a test,
>>     `gdb.trace/tspeed.exp', but there is still some room to improve.
>>
> Afaik the tracepoint functionality is quite separate from gdb and may be
> tested in isolation. Having a generic benchmark framework covering most
> parts of gdb is probably _the_ way to go, but I see some room for
> specialized benchmarks, e.g. for tracepoints with a custom driver. But my
> knowledge of the topic is too vague.
>

Well, it is a sort of design trade-off.  We need a framework generic
enough to handle most of the testing requirements for the different GDB
modules (such as solib, symbols, backtrace, disassembly, etc.); on the
other hand, we want each test to be specialized for the corresponding
GDB module, so that we can find more details.

I am inclined to handle the testing of _all_ modules under this generic
framework.

>>   2. Detect performance regressions.  We collect the performance data
>>      of each micro-benchmark, and we need to detect or identify
>>      performance regressions by comparing with the previous run.  It
>>      is more powerful when associated with continuous testing.
>>
> Something really simple, so simple that one could run it silently with every
> make invocation. As a newcomer, it took me some time to get used to 'make
> check', e.g. to set up, run, and interpret the tests with various settings.
> Something simpler would help to run it more often.
>

Yes, I agree, everything should be simple.  I assume that people
running the performance tests are already familiar with GDB's regular
regression tests, i.e. 'make check'.  We'll provide 'make check-perf' to
run the performance tests, and from a user's point of view it doesn't
add any extra difficulty on top of 'make check', IMO.
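The comparison step itself can be very small.  Here is a minimal sketch of detecting regressions between two runs; the result layout (test name to seconds) and the 10% threshold are my own illustrative assumptions, not part of the proposal:

```python
# Hypothetical sketch: flag micro-benchmarks whose wall time regressed
# by more than a threshold compared with the previous run.

def find_regressions(previous, current, threshold=0.10):
    """Return tests whose time grew by more than `threshold` (10%)."""
    regressions = {}
    for test, new_time in current.items():
        old_time = previous.get(test)
        if old_time is not None and new_time > old_time * (1 + threshold):
            regressions[test] = (old_time, new_time)
    return regressions

previous = {"solib-load": 1.20, "backtrace": 0.50}
current  = {"solib-load": 1.25, "backtrace": 0.80}
print(find_regressions(previous, current))
# {'backtrace': (0.5, 0.8)}
```

Wiring something like this into continuous testing, keyed by commit, is what makes goal #2 powerful.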

> I'd like to add the Machine Interface (MI) to the list, though its timing
> support is quite rudimentary:
>
> $ gdb -interpreter mi -q debugee
> [...]
> -enable-timings
> ^done
> (gdb)
> -break-insert -f main
> ^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
> [...]
> (gdb)
> -exec-step
> ^running
> *running,thread-id="1"
> (gdb)
> *stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
> (gdb)
>
> With -enable-timings[1] enabled, every result record has a time triple
> appended, even for async[2] ones. If we come up with a full MI parser,
> one could run tests without extra timing instrumentation. An MI result
> is quite JSON-ish.
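Short of a full MI parser, the time triple shown in the transcript above can be pulled out with a narrow regex.  A minimal sketch (the regex and function name are my own illustration, matching only the `time={...}` suffix that -enable-timings appends):

```python
import re

# Hypothetical sketch: extract the time triple appended by
# -enable-timings from an MI result record.  A narrow regex hack,
# deliberately not a full MI parser.
TIME_RE = re.compile(
    r'time=\{wallclock="(?P<wallclock>[\d.]+)",'
    r'user="(?P<user>[\d.]+)",'
    r'system="(?P<system>[\d.]+)"\}')

def parse_mi_time(record):
    """Return the times as a dict of floats, or None if absent."""
    m = TIME_RE.search(record)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}

record = ('*stopped,reason="end-stepping-range",'
          'time={wallclock="0.19425",user="0.09700",system="0.04200"}')
print(parse_mi_time(record))
# {'wallclock': 0.19425, 'user': 0.097, 'system': 0.042}
```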

Thanks for the input.

>
> (To be honest I do not know what the timings are composed of =D)
>
> In addition there are some tools for plotting benchmark results[3].
>
> [1]http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
> [2]https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
> [3]http://speed.pypy.org/

I am using speed to track and show the performance data I get from the 
GDB performance tests.  It is able to associate the performance data 
with the commit, which makes it easy to find which commit causes a 
regression.  However, my impression is that speed and the packages it 
depends on are not well maintained nowadays.

After some searching online, I personally like the Chromium performance 
tests and their plots.  They are integrated with buildbot (a customized 
version).

   http://build.chromium.org/f/chromium/perf/dashboard/overview.html

However, as I said in this proposal, let us focus on goal #1 first: get
the framework ready and collect performance data.

-- 
Yao (齐尧)



Thread overview: 40+ messages
2013-08-14 13:01 Yao Qi
2013-08-21 20:39 ` Tom Tromey
2013-08-27  6:21   ` Yao Qi
2013-08-27 13:49 ` Agovic, Sanimir
2013-08-28  3:04   ` Yao Qi [this message]
2013-09-19  0:36     ` Doug Evans
2013-08-28  4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2013-08-28  4:17   ` [RFC 3/3] Test on solib load and unload Yao Qi
2013-08-28  4:27     ` Yao Qi
2013-08-28 11:31       ` Agovic, Sanimir
2013-09-03  1:59         ` Yao Qi
2013-09-03  6:33           ` Agovic, Sanimir
2013-09-02 15:24       ` Blanc, Nicolas
2013-09-03  2:04         ` Yao Qi
2013-09-03  7:50           ` Blanc, Nicolas
2013-09-19 22:45       ` Doug Evans
2013-09-20 19:19         ` Tom Tromey
2013-10-05  0:34           ` Doug Evans
2013-10-07 16:31             ` Tom Tromey
2013-09-22  6:25         ` Yao Qi
2013-09-23  0:14           ` Doug Evans
2013-09-24  2:31             ` Yao Qi
2013-10-05  0:37               ` Doug Evans
2013-09-20 19:14       ` Tom Tromey
2013-08-28  4:17   ` [RFC 2/3] Perf test framework Yao Qi
2013-08-28  9:57     ` Agovic, Sanimir
2013-09-03  1:45       ` Yao Qi
2013-09-03  6:38         ` Agovic, Sanimir
2013-09-19 19:09     ` Doug Evans
2013-09-20  8:04       ` Yao Qi
2013-09-20 16:51         ` Doug Evans
2013-09-22  2:54           ` Yao Qi
2013-09-22 23:14             ` Doug Evans
2013-09-20 17:12         ` Doug Evans
2013-08-28  4:17   ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-08-28  9:40     ` Agovic, Sanimir
2013-09-19 17:47     ` Doug Evans
2013-09-20 19:00       ` Tom Tromey
2013-09-20 18:59     ` Tom Tromey
2013-09-19 17:25   ` [RFC 0/3] GDB Performance testing Doug Evans
