Message-ID: <51F16780.70408@redhat.com>
Date: Thu, 25 Jul 2013 17:59:00 -0000
From: Pedro Alves
To: Anton Blanchard
CC: gdb-patches@sourceware.org
Subject: Re: [PATCH] Improve performance of large restore commands
References: <20130725220858.58184193@kryten>
In-Reply-To: <20130725220858.58184193@kryten>

On 07/25/2013 01:08 PM, Anton Blanchard wrote:
>
> I noticed a large (100MB) restore took hours to complete. The problem
> is target_xfer_partial repeatedly mallocs and memcpys the entire
> 100MB buffer only to find a small portion of it is actually written.
I think you meant memory_xfer_partial, in the breakpoint shadow
handling, right?  I'd prefer pushing the capping closer to the
offending malloc/memcpy.

We could conceivably change that shadowing algorithm to not malloc at
all.  E.g., say, with a memory block like:

          |------B------|
        start          end

with B being the address where a breakpoint is supposed to be planted,
write the block [start, B), then do a write for the breakpoint
instruction at B, then another block write for (B, end).

Or, we could throttle the requested window width up/down depending on
the buffer size returned at each partial transfer.

I'm not actually suggesting doing this, only explaining why I'd rather
put the cap close to the problem it solves: target_write_partial is
used for other target objects too, not just memory.

> We already cap reads to 4K

Where exactly?  In the target backend, perhaps?  I'm not finding a cap
at the target.c level.

> --
>
> 2013-07-25  Anton Blanchard
>
> 	* target.c (target_write_with_progress): Cap write to 4K

Period at end of sentence.

> Index: b/gdb/target.c
> ===================================================================
> --- a/gdb/target.c
> +++ b/gdb/target.c
> @@ -2287,9 +2287,11 @@ target_write_with_progress (struct targe
>
>    while (xfered < len)
>      {
> +      /* Cap the write to 4K */
> +      int to_transfer = min(4096, len - xfered);
>        LONGEST xfer = target_write_partial (ops, object, annex,

Empty line after last declaration.  Missing space before parens.

>                                            (gdb_byte *) buf + xfered,
> -                                          offset + xfered, len - xfered);
> +                                          offset + xfered, to_transfer);
>
>        if (xfer == 0)
>          return xfered;

-- 
Pedro Alves