Subject: Re: [PATCH 3/3] Optimize byte-aligned copies in copy_bitwise()
From: Luis Machado <Luis_Gustavo@mentor.com>
To: Andreas Arnez <arnez@linux.vnet.ibm.com>, gdb-patches@sourceware.org
Date: Mon, 14 Nov 2016 15:38:00 -0000
In-Reply-To: <1479135786-31150-4-git-send-email-arnez@linux.vnet.ibm.com>
References: <1479135786-31150-1-git-send-email-arnez@linux.vnet.ibm.com>
 <1479135786-31150-4-git-send-email-arnez@linux.vnet.ibm.com>

On 11/14/2016 09:02 AM, Andreas Arnez wrote:
> The function copy_bitwise, used for copying DWARF pieces, can
> potentially be invoked for large chunks of data.  For instance,
> consider a large struct, one of whose members is currently located in
> a register.  In this case copy_bitwise would still copy the data
> bitwise in a loop, which is much slower than necessary.
>
> This change uses memcpy for the large middle part instead, if possible.
>
> gdb/ChangeLog:
>
> 	* dwarf2loc.c (copy_bitwise): Use memcpy for the middle part, if
> 	it is byte-aligned.
> ---
>  gdb/dwarf2loc.c | 27 +++++++++++++++++++++++----
>  1 file changed, 23 insertions(+), 4 deletions(-)
>
> diff --git a/gdb/dwarf2loc.c b/gdb/dwarf2loc.c
> index 3a241a8..26f6bd8 100644
> --- a/gdb/dwarf2loc.c
> +++ b/gdb/dwarf2loc.c
> @@ -1547,11 +1547,30 @@ copy_bitwise (gdb_byte *dest, ULONGEST dest_offset,
>      {
>        size_t len = nbits / 8;
>
> -      while (len--)
> +      /* Use a faster method for byte-aligned copies.  */
> +      if (avail == 0)
>          {
> -          buf |= *(bits_big_endian ? source-- : source++) << avail;
> -          *(bits_big_endian ? dest-- : dest++) = buf;
> -          buf >>= 8;
> +          if (bits_big_endian)
> +            {
> +              dest -= len;
> +              source -= len;
> +              memcpy (dest + 1, source + 1, len);
> +            }
> +          else
> +            {
> +              memcpy (dest, source, len);
> +              dest += len;
> +              source += len;
> +            }
> +        }
> +      else
> +        {
> +          while (len--)
> +            {
> +              buf |= *(bits_big_endian ? source-- : source++) << avail;

Same comment as for patch 2/3 regarding this construct.

> +              *(bits_big_endian ? dest-- : dest++) = buf;
> +              buf >>= 8;
> +            }
>          }
>        nbits %= 8;
>      }
>

Otherwise looks sane to me.
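[Editor's note: for anyone reading along, here is a minimal standalone
sketch of the idea behind the fast path: when both bit offsets are
byte-aligned, the bulk of the copy can be a single memcpy, with only the
sub-byte tail copied bitwise.  This is NOT GDB's actual code -- it covers
the little-endian bit order only, and the helper names copy_bits_slow and
copy_bits_fast are made up for illustration; the real copy_bitwise in
dwarf2loc.c also handles big-endian bit ordering and a partially filled
buffer before reaching the byte-aligned middle.]

/* Standalone sketch (not GDB's implementation): copy NBITS bits from
   SOURCE starting at bit SOURCE_OFFSET to DEST starting at bit
   DEST_OFFSET, little-endian bit order (bit 0 is the LSB).  */

#include <assert.h>
#include <stdio.h>
#include <string.h>

typedef unsigned char byte;

/* Reference implementation: copy one bit at a time.  */
static void
copy_bits_slow (byte *dest, size_t dest_offset,
                const byte *source, size_t source_offset, size_t nbits)
{
  for (size_t i = 0; i < nbits; i++)
    {
      size_t s = source_offset + i, d = dest_offset + i;
      int bit = (source[s / 8] >> (s % 8)) & 1;
      dest[d / 8] = (byte) ((dest[d / 8] & ~(1 << (d % 8)))
                            | (bit << (d % 8)));
    }
}

/* Fast path analogous to the patch: if both offsets are byte-aligned,
   memcpy the whole middle and copy only the sub-byte tail bitwise.  */
static void
copy_bits_fast (byte *dest, size_t dest_offset,
                const byte *source, size_t source_offset, size_t nbits)
{
  if (dest_offset % 8 == 0 && source_offset % 8 == 0)
    {
      size_t len = nbits / 8;
      memcpy (dest + dest_offset / 8, source + source_offset / 8, len);
      copy_bits_slow (dest, dest_offset + len * 8,
                      source, source_offset + len * 8, nbits % 8);
    }
  else
    copy_bits_slow (dest, dest_offset, source, source_offset, nbits);
}

int
main (void)
{
  byte src[16], d1[16], d2[16];

  for (size_t i = 0; i < sizeof src; i++)
    src[i] = (byte) (i * 37 + 11);

  /* Check the fast path against the bitwise reference for a range of
     bit counts, using byte-aligned offsets so the memcpy path runs.  */
  for (size_t nbits = 0; nbits <= 100; nbits++)
    {
      memset (d1, 0xAA, sizeof d1);
      memset (d2, 0xAA, sizeof d2);
      copy_bits_slow (d1, 8, src, 16, nbits);
      copy_bits_fast (d2, 8, src, 16, nbits);
      assert (memcmp (d1, d2, sizeof d1) == 0);
    }
  puts ("fast path matches bitwise reference");
  return 0;
}

Compiling this with "gcc -Wall sketch.c" and running it verifies that
the memcpy path produces byte-for-byte identical results to the naive
bit loop for every tested bit count, including the sub-byte tails.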