From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Sep 2002 10:42:00 -0000
Subject: Re: suggestion for dictionary representation
From: Daniel Berlin
To: David Carlton
Cc: Daniel Jacobowitz, Jim Blandy, gdb@sources.redhat.com
Mailing-List: contact gdb-help@sources.redhat.com; run by ezmlm
X-SW-Source: 2002-09/txt/msg00381.txt.bz2

On Tuesday, September 24, 2002, at 12:33 PM, David Carlton wrote:

> On Mon, 23 Sep 2002 20:34:50 -0400, Daniel Berlin said:
>
>>> I'm also curious about how it would affect the speed of reading in
>>> symbols.  Right now, that should be O(n), where n is the number of
>>> global symbols, right?
>>>
>>> If we used expandable hash tables, then I think it would be
>>> amortized O(n) and with the constant factor larger.
>
>> Our string hash function is O(N) right now (as are most).  Hash
>> tables are only O(N) when the hash function is O(1).
>
> [ Here, of course, my 'n' is the number of global symbols, and
> Daniel's 'N' is the maximum symbol length. ]

Yup.  In the future I'll just choose different letters.
(Who the heck thought it would be smart to have a big Oh and a little
Oh, fer instance?)

> This is true, but I'm not sure that it's relevant to this sort of
> theoretical analysis.  After all, skip lists depend on N as well:
> they put symbols in order, and the amount of time to do that depends
> on the length of the symbols.

I was waiting for someone to bring this up.

You *also* have this N in hash tables in another form: key comparisons.
It's not just the hash function.  Regardless of your method of hashing
(chaining or probing), you have to compare the key.  The better the
hash function, the fewer key comparisons you do (because the chains are
shorter or the number of probes is smaller).

If your average chain length goes above log n (the number of keys we'd
have to strcmp in the skip list), which in fact, for large files, it
did, then the skip list will likely be better, because it performs
fewer comparisons *and* doesn't have to do the hash function
calculation.

> And it's entirely reasonable to think of 'N' as a constant.

But it really isn't.  You can't predict the lengths of symbol names
over the next ten years.  I'd bet that if one looked, you'd see a
generally increasing curve for C++ apps, as they become more complex,
have more namespaces, etc.  For C apps, it's probably constant.

> Or perhaps two constants: one for C programs with short names, one
> for C++ programs with long names.  (And I'm not really sure that the
> C++ names will ultimately turn out to be that much longer: once the
> proper namespace support has been added, then looking up a C++ name
> will probably be a multistep process (looking up pieces of the
> demangled name in turn), and for each of those steps, we'll be
> looking at a name that will be of the appropriate length for a C
> program.)

Maybe, maybe not.
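[ Editor's note: the chain-length argument above can be made concrete
with a small instrumented example.  This is an illustrative sketch
only, not GDB's actual symbol-table code; the table size, hash
function, and key names are arbitrary choices for the demonstration. ]

```python
# Illustrative sketch (not GDB's code): count key comparisons in a
# chained hash table and compare against the ~log2(n) strcmps a skip
# list search would expect, as discussed above.
import math

def string_hash(s, nbuckets):
    # Simple multiplicative string hash; its cost is O(len(s)), which
    # is the "hidden N" in the hash-table analysis.
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) & 0xFFFFFFFF
    return h % nbuckets

def build_table(keys, nbuckets):
    buckets = [[] for _ in range(nbuckets)]
    for k in keys:
        buckets[string_hash(k, nbuckets)].append(k)
    return buckets

def lookup_comparisons(buckets, key):
    # Number of strcmp-equivalents done while scanning the chain.
    chain = buckets[string_hash(key, len(buckets))]
    for i, k in enumerate(chain, start=1):
        if k == key:
            return i
    return len(chain)

keys = ["sym%d" % i for i in range(10000)]
# Deliberately undersized table, like a large symtab in a small table:
buckets = build_table(keys, 256)
avg_chain = sum(lookup_comparisons(buckets, k) for k in keys) / len(keys)
skiplist_bound = math.log2(len(keys))  # expected strcmps in a skip list
# Here the average chain scan exceeds log2(n): the regime, described
# above, in which the skip list wins on comparisons.
```

With 10,000 keys crowded into 256 buckets, the average successful
probe scans roughly n/(2m) ~ 20 entries, well above log2(10000) ~ 13.3;
a properly sized table reverses the outcome, which is exactly the
sizing sensitivity at issue.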
Certainly, if it goes multistep, I'd wager that even in 10 years the
average length of the individual parts won't go up very much, even if
the length of the entire symbol name does.

> But even if we consider N to be a constant, your broader point
> stands: the constant factors that different algorithms differ by are
> important, and in practice large constants can have more of an effect
> than logarithmic terms.  Fortunately, one of the advantages of the
> refactoring that I'm doing right now is that it'll be easy to drop in
> different dictionary implementations for testing purposes: it should
> boil down to writing the code that we'd have to write to get skip
> list support anyway, changing one or two function calls, and
> recompiling.

Works for me.

I would actually suggest neither hash tables nor skip lists, but
ternary search trees.  They compare *very* well to hashes, and beat
out skip lists as well.  The algorithms are easy (and I already added
them to libiberty), and they were engineered as structures for symbol
tables.  They support the "fast completion" property of skip lists
(since they are ordered), but the constant you pay to insert nodes is
much lower.  They also support easy pattern-matching searches.

The only problem is that I was too lazy to look up general n-way tree
balancing algorithms and implement one, so the current implementation
is unbalanced.  Thus, if you insert in sorted order, things will get
bad.

See http://www.ddj.com/documents/s=921/ddj9804a/9804a.htm

Asymptotic analysis of them isn't as easy as one would think.  In
fact, it's downright difficult.  But it's been done.  If you can grasp
all the math: http://users.info.unicaen.fr/~clement/SODA/all1.html

But, as you said, this can all be hashed out later and a few function
calls changed if it turns out to be better.

> David Carlton
> carlton@math.stanford.edu
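[ Editor's note: for readers unfamiliar with the structure being
recommended, here is a minimal ternary search tree sketch.  It is
illustrative only and is not the libiberty implementation, which is
written in C and differs in detail.  Each node holds one character
with "lower", "equal", and "higher" children, so a search does one
character comparison per node rather than hashing the whole key; and,
as noted above, an unbalanced TST degrades if keys arrive in sorted
order. ]

```python
# Minimal ternary search tree sketch (illustrative; not libiberty's C
# implementation).  Searching walks one character comparison per node.
class TSTNode:
    __slots__ = ("ch", "lo", "eq", "hi", "value")
    def __init__(self, ch):
        self.ch = ch
        self.lo = self.eq = self.hi = None
        self.value = None          # payload set only where a key ends

def tst_insert(node, key, value, i=0):
    # Recursive insert of key[i:]; returns the (possibly new) subtree.
    ch = key[i]
    if node is None:
        node = TSTNode(ch)
    if ch < node.ch:
        node.lo = tst_insert(node.lo, key, value, i)
    elif ch > node.ch:
        node.hi = tst_insert(node.hi, key, value, i)
    elif i + 1 < len(key):
        node.eq = tst_insert(node.eq, key, value, i + 1)
    else:
        node.value = value
    return node

def tst_search(node, key, i=0):
    # Iterative search; returns the stored value or None.
    while node is not None:
        ch = key[i]
        if ch < node.ch:
            node = node.lo
        elif ch > node.ch:
            node = node.hi
        elif i + 1 < len(key):
            node = node.eq
            i += 1
        else:
            return node.value
    return None

root = None
# Hypothetical symbol names and addresses, inserted in mixed (unsorted)
# order to avoid the sorted-insertion degeneracy mentioned above:
for name, addr in [("main", 0x1000), ("malloc", 0x2000), ("memcpy", 0x3000)]:
    root = tst_insert(root, name, addr)
```

Because the nodes are ordered, an in-order walk of a subtree under a
shared prefix yields completions in sorted order, which is the "fast
completion" property referred to above.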