From mboxrd@z Thu Jan 1 00:00:00 1970
Mailing-List: contact gdb-patches-help@sources.redhat.com; run by ezmlm
To: Daniel Jacobowitz
Cc: gdb-patches@sources.redhat.com
Subject: Re: [rfa/dwarf] Support for attributes pointing to a different CU
References: <20040923045723.GA11871@nevyn.them.org> <20040924003412.GB10500@nevyn.them.org> <20041003161221.GA3234@nevyn.them.org> <20041004212201.GA21064@nevyn.them.org> <20041005134808.GA11252@nevyn.them.org>
From: Jim Blandy
Date: Tue, 05 Oct 2004 16:13:00 -0000
In-Reply-To: <20041005134808.GA11252@nevyn.them.org>
User-Agent: Gnus/5.09 (Gnus v5.9.0) Emacs/21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
X-SW-Source: 2004-10/txt/msg00085.txt.bz2

Daniel Jacobowitz writes:

> On Tue, Oct 05, 2004 at 12:04:26AM -0500, Jim Blandy wrote:
> > The only advantage I had in mind was simplicity, and it didn't seem like it'd be a performance hit.
> >
> > The libiberty hash table expands by a given ratio each time, which means that, overall, the number of rehashings per element is constant no matter how large the table gets. It's similar to the analysis that shows that ye olde buffer doubling trick ends up being linear time.
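[The amortized-growth argument above can be sketched with a short, purely illustrative simulation -- this is not GDB or libiberty code, and the function name and counters are invented for the example. It counts how many times elements get copied during rehashes when a table grows geometrically:]

```python
def total_moves(n, growth=2):
    """Simulate inserting n elements into a table that grows by `growth`
    whenever it fills, counting how many times elements are copied
    during rehashes.  For growth >= 2 the total stays below 2 * n,
    so the rehashing cost per element is amortized constant."""
    capacity = 1
    size = 0
    moves = 0
    for _ in range(n):
        if size == capacity:
            moves += size          # rehash: every existing element is copied once
            capacity *= growth
        size += 1
    return moves

# e.g. total_moves(65536) -> 65535, which is less than 2 * 65536:
# the copying work is linear in the number of inserts overall.
```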
> > (I'm thinking on my own here, not quoting anyone, so be critical...)
> >
> > There could be a locality disadvantage to doing it all in one big hash table. When the time comes to restore a given CU's types, their table entries will be sharing cache blocks with those of other, irrelevant CU's. That doesn't happen if we use per-CU hash tables: table entries will never share cache blocks with table entries for other CU's (assuming the tail of one table doesn't share a block with the head of another, blah blah blah...).
> >
> > I'm concerned about the legacy of complexity we'll leave. Wrinkles should prove they can pay their own way. :)
>
> Then there's only one thing to do... I'll time it.
>
> Using a per-objfile type_hash saves a little memory in overhead, and probably a little more in hash table size - I didn't instrument memory use. But it's definitely slower: from 1% to 4.3% depending on the test case.
>
> I believe this happens because we can create the per-comp-unit hash tables at the correct size - I use a heuristic based on the size of the CU, although by this point I could use a more accurate one based on the number of DIEs if I thought it would be worthwhile. If we create a per-objfile table, then we don't get this benefit, so we do a lot of copying. There's also the locality benefit.
>
> Also, it saves no code - unless you see something I'm missing, it was basically s/cu->per_cu->type_hash/dwarf2_per_objfile->type_hash/. So, OK with the per-comp-unit hash?

Yep, that's fine. Thanks for pursuing it; I'm comfortable with this now.
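[Daniel's pre-sizing point -- a table created at roughly its final size never pays the rehash-copying cost, while one grown from a small default copies each element about twice -- can be illustrated the same way. This is an invented sketch, not libiberty's actual htab API:]

```python
def moves_with_presize(n, initial_capacity):
    """Count element copies when inserting n elements into a doubling
    table that starts at initial_capacity.  Pre-sizing the table at or
    above n eliminates all rehash copying."""
    capacity = max(1, initial_capacity)
    size = 0
    moves = 0
    for _ in range(n):
        if size == capacity:
            moves += size          # rehash: copy every existing element
            capacity *= 2
        size += 1
    return moves

# A table created at (or above) its final size never rehashes:
#   moves_with_presize(1000, 1024) -> 0
# while one grown from a single slot copies ~n elements in total:
#   moves_with_presize(1000, 1) -> 1023
```

[This matches the measured result: per-CU tables sized from a heuristic avoid the copying that a shared, repeatedly grown per-objfile table would do.]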