From: Andrew Cagney
Date: Mon, 10 Nov 2003 22:43:00 -0000
To: davidm@hpl.hp.com
Cc: Andrew Cagney, Kevin Buettner, "J. Johnston", gdb-patches@sources.redhat.com
Subject: Re: RFA: ia64 portion of libunwind patch

> Andrew> If we look at GDB with its 128k of unwind data.
> Andrew> At 14*28-byte requests per unwind, it would take ~300 unwinds
> Andrew> before GDB was required to xfer 128k (yes, I'm pushing the
> Andrew> numbers a little here, but then I'm also ignoring the very
> Andrew> significant locality of the searches).
>
> Oh, but you're ignoring the latency effects.  N 1-byte transfers can
> easily be much slower than a single N-byte transfer.

It's easy to play with the numbers here.  For example: the remote
protocol typically caps each transfer at ~1k, so a bulk fetch would
take ~128 packets (xfer all) versus ~14 packets per unwind (xfer
needed) - the on-demand reads would still be faster.

More seriously, if problems are identified in the remote protocol, GDB
should fix them.  It is important, though, that its clients don't
program around perceived performance problems and in doing so create
artificial loads.  As my previous e-mail mentioned, GDB has already
seen one example of that - a library demanding ever-increasing amounts
of data in an attempt to work around an I/O throughput bottleneck.

> Andrew> Scary as it is, GDB's already got a request to fetch a
> Andrew> shared library image from the target's memory :-/.
>
> That kind of throws your speed argument out of the water, though,
> doesn't it? ;-)

The extraction will need to be done very carefully so that only the
required data is read.

> Andrew> Provided the remote target knows the address of the unwind
> Andrew> table, GDB should be able to find a way of getting it to
> Andrew> libunwind.
>
> OK, I still don't quite understand why this is a common and important
> scenario.  It strikes me as a corner-case which _occasionally_ may be
> useful, but if that's true, a bit of extra latency doesn't seem like
> a huge deal.
>
> In any case, perhaps it is possible to add incremental reading
> support by stealing a bit from one of the members in the
> "unw_dyn_table_info".  All we really need is a single bit to indicate
> whether the table-data should be fetched from remote-memory.  I'll
> think about it some more.

It would be appreciated.
My suggestion was to use memory reads when the unwind table pointer
was NULL.  However, anything would help.

Andrew
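The NULL-pointer convention suggested above could be sketched roughly
as follows.  The struct, the accessor typedef, and every function name
here are hypothetical simplifications for illustration - they are not
the actual libunwind or GDB API - but the shape (a local fast path
when a table copy exists, one small targeted read otherwise) is the
point of the suggestion.

```c
#include <stddef.h>
#include <stdint.h>

typedef uint64_t unw_word_t;

/* Simplified, hypothetical stand-in for the table-info struct
   discussed in the thread.  */
struct table_info
{
  unw_word_t table_addr;   /* address of the table in target memory */
  size_t table_len;        /* number of unw_word_t entries */
  unw_word_t *table_data;  /* local copy, or NULL => fetch remotely */
};

/* Target memory accessor in the style of a remote read callback:
   copy one word at 'addr' in the target into '*val'; 0 on success.  */
typedef int (*access_mem_t) (unw_word_t addr, unw_word_t *val, void *arg);

/* Fetch table entry 'index': locally when a copy exists, otherwise
   via a single remote memory read that touches only the entry
   actually needed.  */
static int
fetch_entry (const struct table_info *ti, size_t index,
             access_mem_t access_mem, void *arg, unw_word_t *val)
{
  if (index >= ti->table_len)
    return -1;
  if (ti->table_data != NULL)
    {
      *val = ti->table_data[index];     /* fast local path */
      return 0;
    }
  /* table_data is NULL: read just this entry from the target.  */
  return access_mem (ti->table_addr + index * sizeof (unw_word_t),
                     val, arg);
}

/* A fake target whose "memory" is just a local array, for testing
   the remote path without a real debug stub.  */
static int
fake_access_mem (unw_word_t addr, unw_word_t *val, void *arg)
{
  const unw_word_t *mem = arg;
  *val = mem[addr / sizeof (unw_word_t)];
  return 0;
}
```

Compared with stealing a flag bit from an existing member, the NULL
sentinel needs no format change: a table pointer is either a usable
local copy or it is not, and the remote case falls out of the check
the reader has to make anyway.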