From: Doug Evans
To: Pedro Alves
Cc: gdb-patches@sourceware.org
Subject: Re: [RFA] Use data cache for stack accesses
Date: Thu, 27 Aug 2009 03:11:00 -0000
List-Id: gdb-patches@sourceware.org
X-SW-Source: 2009-08/txt/msg00475.txt.bz2

On Wed, Aug 26, 2009 at 5:32 PM, Doug Evans wrote:
> On Wed, Aug 26, 2009 at 1:08 PM, Pedro Alves wrote:
>>> > Did you post numbers showing off the improvements from
>>> > having the cache on?
>>> > E.g., when doing foo, with cache off,
>>> > I get NNN memory reads, while with cache on, we get only
>>> > nnn reads.  I'd be curious to have some backing behind
>>> > "This improves remote performance significantly".
>>>
>>> For a typical gdb/gdbserver connection here a backtrace of 256 levels
>>> went from 48 seconds (average over 6 tries) to 4 seconds (average over
>>> 6 tries).
>>
>> Nice!  Were all those single runs started from a cold cache, or
>> are you starting from a cold cache and issuing 6 backtraces in
>> a row?  I mean, how sparse were those 6 tries?  Should one
>> read that as 48,48,48,48,48,48 vs 20,1,1,1,1,1 (some improvement
>> due to chunking, and large improvement due to caching in following
>> repeats of the command); or 48,48,48,48,48,48 vs 4,4,4,4,4,4 (large
>> improvement due to chunking --- caching not actually measured)?
>
> The cache was always flushed between backtraces, so that's
> 48, 48, ..., 48 vs 4, 4, ..., 4.
>
> Backtraces win from both chunking and caching.
> Even in one backtrace gdb will often fetch the same value multiple times.
> I haven't computed the relative win.

Besides, the chunking doesn't really work without the caching. :-)
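[Editor's note: the following is an illustrative sketch, not GDB's actual dcache implementation. It models the effect discussed above: small stack reads are rounded up to block-sized ("chunked") target fetches, and repeated reads within a backtrace are served from the cache, so the number of remote round trips drops sharply. The block size, the fake target, and all names here are assumptions for the demo.]

```python
BLOCK = 64  # hypothetical cache-line size in bytes

class DCache:
    """Toy block-based read cache over a slow 'remote target' callback."""

    def __init__(self, target_read):
        self.target_read = target_read  # backend: (addr, size) -> bytes
        self.lines = {}                 # block-aligned addr -> cached bytes
        self.fetches = 0                # number of round trips to the target

    def read(self, addr, size):
        # Satisfy an arbitrary read by walking the block-aligned lines
        # that cover it, fetching each missing line exactly once.
        out = b""
        while size > 0:
            base = addr - (addr % BLOCK)
            if base not in self.lines:
                self.lines[base] = self.target_read(base, BLOCK)
                self.fetches += 1
            ofs = addr - base
            take = min(size, BLOCK - ofs)
            out += self.lines[base][ofs:ofs + take]
            addr += take
            size -= take
        return out

    def flush(self):
        # Analogous to flushing the cache between backtraces in the test above.
        self.lines.clear()

memory = bytes(range(256)) * 16  # 4 KiB of fake target memory
cache = DCache(lambda a, n: memory[a:a + n])

# A backtrace-like access pattern: many small, overlapping stack reads.
for addr in (0, 8, 16, 8, 24, 40, 0):
    cache.read(addr, 8)
print(cache.fetches)  # 1 round trip instead of 7 uncached reads
```

Without the cache, each of the seven reads would be its own round trip; with chunking alone but no caching, nothing remembers the fetched block, which is the point of the last sentence above: the chunking doesn't buy much unless the chunk is kept.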