Mirror of the gdb-patches mailing list
From: "Agovic, Sanimir" <sanimir.agovic@intel.com>
To: 'Yao Qi' <yao@codesourcery.com>
Cc: "gdb-patches@sourceware.org" <gdb-patches@sourceware.org>
Subject: RE: [RFC] GDB performance testing infrastructure
Date: Tue, 27 Aug 2013 13:49:00 -0000	[thread overview]
Message-ID: <0377C58828D86C4588AEEC42FC3B85A71764E92B@IRSMSX105.ger.corp.intel.com> (raw)
In-Reply-To: <520B7F70.6070207@codesourcery.com>

Hello Yao,

I like the overall proposal for a "micro" benchmark suite. Some comments below.

> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-owner@sourceware.org] On Behalf
> Of Yao Qi
> Sent: Wednesday, August 14, 2013 03:01 PM
> To: gdb-patches@sourceware.org
> Subject: [RFC] GDB performance testing infrastructure
> 
>   * Remote debugging.  It is slower to read from the remote target, and
>     worse, GDB reads the same memory regions multiple times, or reads
>     consecutive memory using multiple packets.
>
Once gdb and gdbserver share most of the target code, the overhead will be
dominated by serial-protocol round trips. But this will take a while...

>   * Tracepoint.  Tracepoint is designed to be efficient at collecting
>     data in the inferior, so we need performance tests to guarantee that
>     tracepoint stays efficient enough.  Note that we have a test
>     `gdb.trace/tspeed.exp', but there is still some room for improvement.
>
Afaik the tracepoint functionality is quite separate from gdb and may be tested
in isolation. Having a generic benchmark framework covering most parts of
gdb is probably _the_ way to go, but I see some room for specialized benchmarks,
e.g. for tracepoints with a custom driver. My knowledge of the topic is too
vague, though.

>   2. Detect performance regressions.  We collected the performance data
>      of each micro-benchmark, and we need to detect or identify the
>      performance regression by comparing with the previous run.  It is
>      more powerful to associate it with continuous testing.
> 
Something really simple, so simple that one could run it silently with every
make invocation. As a newcomer, it took me some time to get used to make
check, e.g. setting up, running, and interpreting the tests with various
settings. Something simpler would help people run it more often.
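The comparison step could be as simple as diffing per-test timings between two runs. A minimal sketch in Python; the test names, the data format (test name mapped to seconds), and the 10% threshold are my own assumptions, not existing gdb tooling:

```python
# Flag regressions by comparing per-test wall-clock timings from two
# runs.  Input dicts map test name -> seconds; the 10% growth threshold
# is an arbitrary choice for illustration.

def find_regressions(baseline, current, threshold=0.10):
    """Return {test: (old, new)} for tests slower by more than `threshold`."""
    regressions = {}
    for test, old in baseline.items():
        new = current.get(test)
        if new is not None and old > 0 and (new - old) / old > threshold:
            regressions[test] = (old, new)
    return regressions

# Hypothetical timings from two runs:
baseline = {"solib-load": 1.20, "backtrace": 0.40}
current = {"solib-load": 1.50, "backtrace": 0.41}
print(find_regressions(baseline, current))
# {'solib-load': (1.2, 1.5)}
```

Hooked into continuous testing, the baseline would simply be the previous run's recorded timings.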

> 
> 2 Known works
> =============
> 
>   * [LNT] It was written for LLVM, but is *designed* to be usable for
>     the performance testing of any software.  It is written in Python,
>     well-documented and easy to set up.  LNT spawns the compiler first
>     and then the target program, recording the time usage of both in
>     json format.  No interaction is involved.  The performance data
>     collection in LNT is relatively simple, because it is targeted at
>     compilers.  The performance testing part is done, and the next step
>     is to show the data and detect performance regressions.  LNT does a
>     lot of work here.  The performance data in json format can be
>     imported into a database, and shown through [web].  Performance
>     regressions are highlighted in red.
> 
>   * [lldb] LLDB has a [performance.py] to measure the speed and memory
>     usage of LLDB.  It captures internal events, feeds in some events,
>     and records the time usage.  It handles interactions by consuming
>     debugging events and taking actions accordingly.  It only collects
>     performance data; it doesn't detect performance regressions.
> 
>   * libstdc++-v3 There is a performance directory in
>     libstdc++-v3/testsuite/ and a header testsuite_performance.h in
>     testsuite/util/.  Test cases are compiled with the header, and run
>     with some large data set, to calculate the time usage.  It is
>     suitable for performance testing of a library.
> 
I'd like to add the Machine Interface (MI) to the list, though it is quite rudimentary:

$ gdb -interpreter mi -q debugee
[...]
-enable-timings
^done
(gdb)
-break-insert -f main
^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
[...]
(gdb)
-exec-step
^running
*running,thread-id="1"
(gdb)
*stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
(gdb)

With -enable-timings[1] enabled, every result record has a time triple
appended, even asynchronous[2] ones. If we come up with a full MI parser,
one could also run tests without timings. An MI result is quite json-ish.

(To be honest, I do not know how the timings are composed =D)
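For experimenting, the time triples can be scraped from raw MI output with a regex. A rough sketch, not a full MI parser; it assumes only the time={...} field layout shown in the session above:

```python
import re

# Extract time={wallclock=...,user=...,system=...} triples from raw MI
# output.  This matches only the timing field, ignoring the rest of the
# result record.
TIME_RE = re.compile(
    r'time=\{wallclock="([\d.]+)",user="([\d.]+)",system="([\d.]+)"\}')

def extract_timings(mi_output):
    """Return a list of (wallclock, user, system) floats, one per record."""
    return [tuple(map(float, m)) for m in TIME_RE.findall(mi_output)]

# Sample lines taken from the MI session above:
sample = ('^done,bkpt={...},time={wallclock="0.00656",user="0.00000",'
          'system="0.00000"}\n'
          '*stopped,time={wallclock="0.19425",user="0.09700",'
          'system="0.04200"}\n')
print(extract_timings(sample))
# [(0.00656, 0.0, 0.0), (0.19425, 0.097, 0.042)]
```

Feeding a series of such triples into a plot or a per-command baseline comparison would be straightforward from there.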

In addition there are some tools for plotting benchmark results[3].

[1] http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
[2] https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
[3] http://speed.pypy.org/

-Sanimir
Intel GmbH
Dornacher Strasse 1
85622 Feldkirchen/Muenchen, Deutschland
Sitz der Gesellschaft: Feldkirchen bei Muenchen
Geschaeftsfuehrer: Christian Lamprechter, Hannes Schwaderer, Douglas Lusk
Registergericht: Muenchen HRB 47456
Ust.-IdNr./VAT Registration No.: DE129385895
Citibank Frankfurt a.M. (BLZ 502 109 00) 600119052


Thread overview: 40+ messages
2013-08-14 13:01 Yao Qi
2013-08-21 20:39 ` Tom Tromey
2013-08-27  6:21   ` Yao Qi
2013-08-27 13:49 ` Agovic, Sanimir [this message]
2013-08-28  3:04   ` Yao Qi
2013-09-19  0:36     ` Doug Evans
2013-08-28  4:17 ` [RFC 0/3] GDB Performance testing Yao Qi
2013-08-28  4:17   ` [RFC 2/3] Perf test framework Yao Qi
2013-08-28  9:57     ` Agovic, Sanimir
2013-09-03  1:45       ` Yao Qi
2013-09-03  6:38         ` Agovic, Sanimir
2013-09-19 19:09     ` Doug Evans
2013-09-20  8:04       ` Yao Qi
2013-09-20 16:51         ` Doug Evans
2013-09-22  2:54           ` Yao Qi
2013-09-22 23:14             ` Doug Evans
2013-09-20 17:12         ` Doug Evans
2013-08-28  4:17   ` [RFC 1/3] New make target 'check-perf' and new dir gdb.perf Yao Qi
2013-08-28  9:40     ` Agovic, Sanimir
2013-09-19 17:47     ` Doug Evans
2013-09-20 19:00       ` Tom Tromey
2013-09-20 18:59     ` Tom Tromey
2013-08-28  4:17   ` [RFC 3/3] Test on solib load and unload Yao Qi
2013-08-28  4:27     ` Yao Qi
2013-08-28 11:31       ` Agovic, Sanimir
2013-09-03  1:59         ` Yao Qi
2013-09-03  6:33           ` Agovic, Sanimir
2013-09-02 15:24       ` Blanc, Nicolas
2013-09-03  2:04         ` Yao Qi
2013-09-03  7:50           ` Blanc, Nicolas
2013-09-19 22:45       ` Doug Evans
2013-09-20 19:19         ` Tom Tromey
2013-10-05  0:34           ` Doug Evans
2013-10-07 16:31             ` Tom Tromey
2013-09-22  6:25         ` Yao Qi
2013-09-23  0:14           ` Doug Evans
2013-09-24  2:31             ` Yao Qi
2013-10-05  0:37               ` Doug Evans
2013-09-20 19:14       ` Tom Tromey
2013-09-19 17:25   ` [RFC 0/3] GDB Performance testing Doug Evans
