Date: Wed, 28 Aug 2013 03:04:00 -0000
From: Yao Qi
To: "Agovic, Sanimir"
CC: "gdb-patches@sourceware.org"
Subject: Re: [RFC] GDB performance testing infrastructure
Message-ID: <521D6862.6050100@codesourcery.com>
References: <520B7F70.6070207@codesourcery.com> <0377C58828D86C4588AEEC42FC3B85A71764E92B@IRSMSX105.ger.corp.intel.com>
In-Reply-To: <0377C58828D86C4588AEEC42FC3B85A71764E92B@IRSMSX105.ger.corp.intel.com>

On 08/27/2013 09:49 PM, Agovic, Sanimir wrote:
>> * Remote
>> debugging.  It is slower to read from the remote target, and worse,
>> GDB reads the same memory regions multiple times, or reads
>> consecutive memory in multiple packets.
>
> Once gdb and gdbserver share most of the target code, the overhead
> will be caused by the serial protocol roundtrips.  But this will take
> a while...

Sanimir, thanks for your comments!  One of the motivations of the
performance testing is to measure the overhead of RSP in some
scenarios, and to look for opportunities to improve it, or to add a
completely new protocol, which is an extreme case.  Once the
infrastructure is ready, we can write some tests to see how efficient
or inefficient RSP is.

>> * Tracepoint.  Tracepoints are designed to collect data in the
>> inferior efficiently, so we need performance tests to guarantee that
>> tracepoints remain efficient enough.  Note that we have a test,
>> `gdb.trace/tspeed.exp', but there is still some room for improvement.
>
> Afaik the tracepoint functionality is quite separated from gdb and
> may be tested in isolation.  Having a generic benchmark framework
> covering most parts of gdb is probably _the_ way to go, but I see
> some room for specialized benchmarks, e.g. for tracepoints with a
> custom driver.  But my knowledge is too vague on the topic.

Well, it is a sort of design trade-off.  We need a framework generic
enough to handle most of the testing requirements of the different GDB
modules (such as solib, symbols, backtrace, disassemble, etc.).  On
the other hand, we want each test to be specialized for the
corresponding GDB module, so that we can find more details.  I am
inclined to handle testing of _all_ modules under this generic
framework.

>> 2. Detect performance regressions.  We collect the performance data
>> of each micro-benchmark, and we need to detect or identify a
>> performance regression by comparing with the previous run.  It is
>> more powerful to associate it with continuous testing.
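(A minimal sketch of the comparison described in point 2 above, flagging
a benchmark whose time grew relative to the previous run.  The helper
name, the data shapes, and the 10% threshold are illustrative
assumptions, not part of the proposed framework.)

```python
# Hypothetical sketch: compare the current run of each micro-benchmark
# against the previous run and report regressions.  Times are seconds
# of wall-clock; the 10% threshold is an arbitrary example value.

def detect_regressions(previous, current, threshold=0.10):
    """Return the benchmark names whose time grew more than
    `threshold` (as a fraction) relative to the previous run."""
    regressions = []
    for name, new_time in current.items():
        old_time = previous.get(name)
        if old_time is not None and new_time > old_time * (1 + threshold):
            regressions.append(name)
    return regressions

previous = {"backtrace": 0.194, "solib-load": 1.20}
current = {"backtrace": 0.210, "solib-load": 1.45}
print(detect_regressions(previous, current))  # -> ['solib-load']
```

Tying such a comparison to continuous testing would then amount to
keeping the previous run's numbers per commit and diffing each new run
against them.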
> Something really simple, so simple that one could run it silently
> with every make invocation.  For a newcomer, it took me some time to
> get used to make check, e.g. setup, run, and interpret the tests with
> various settings.  Something simpler would help to run it more often.

Yes, I agree, everything should be simple.  I assume that people
running performance testing are familiar with the regular GDB
regression test, 'make check'.  We'll provide 'make check-perf' to run
performance testing, and from the user's point of view it doesn't add
extra difficulties on top of 'make check', IMO.

> I would like to add the Machine Interface (MI) to the list, but it
> is quite rudimentary:
>
> $ gdb -interpreter mi -q debugee
> [...]
> -enable-timings
> ^done
> (gdb)
> -break-insert -f main
> ^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
> [...]
> (gdb)
> -exec-step
> ^running
> *running,thread-id="1"
> (gdb)
> *stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}
> (gdb)
>
> With -enable-timings[1] enabled, every result record has a time
> triple appended, even for async[2] ones.  If we come up with a full
> MI parser, one could run tests w/o timings.  An MI result is quite
> json-ish.

Thanks for the input.

> (To be honest, I do not know how the timings are composed =D)
>
> In addition, there are some tools for plotting benchmark results[3].
>
> [1] http://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Miscellaneous-Commands.html
> [2] https://sourceware.org/gdb/onlinedocs/gdb/GDB_002fMI-Async-Records.html
> [3] http://speed.pypy.org/

I am using speed to track and show the performance data I got from the
GDB performance tests.  It can associate the performance data with the
commit, which makes it easy to find which commit causes a regression.
However, my impression is that speed and the packages it depends on
are not well maintained nowadays.  After some searching online, I
personally like the chromium performance test and its plots.
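(As an aside, the time triple in the MI transcript above is easy to
pull out without a full MI parser.  A minimal regex-based sketch, with
the record string taken from the transcript; the function name and the
regex are illustrative assumptions, and this deliberately handles only
the time field, not general MI output.)

```python
import re

# Hypothetical sketch: extract the -enable-timings triple from an MI
# record such as
#   ^done,bkpt={...},time={wallclock="0.00656",user="0.00000",system="0.00000"}
# A regex suffices for this one field; it is not a full MI parser.
TIME_RE = re.compile(
    r'time=\{wallclock="([\d.]+)",user="([\d.]+)",system="([\d.]+)"\}')

def parse_time(record):
    """Return (wallclock, user, system) as floats, or None if the
    record carries no time triple."""
    m = TIME_RE.search(record)
    if m is None:
        return None
    return tuple(float(v) for v in m.groups())

line = '*stopped,[...],time={wallclock="0.19425",user="0.09700",system="0.04200"}'
print(parse_time(line))  # -> (0.19425, 0.097, 0.042)
```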
It is integrated with buildbot (a customized version).

http://build.chromium.org/f/chromium/perf/dashboard/overview.html

However, as I said in this proposal, let us focus on goal #1 first:
get the framework ready and collect performance data.

-- 
Yao (齐尧)