Subject: Re: why I dislike qXfer
To: "taylor, david", "gdb@sourceware.org"
From: Pedro Alves <palves@redhat.com>
Date: Thu, 16 Jun 2016 18:25:00 -0000
In-Reply-To: <63F1AEE13FAE864586D589C671A6E18B062D63@MX203CL03.corp.emc.com>

On 06/16/2016 06:42 PM, taylor, david wrote:
>
>> From: Pedro Alves
>> [mailto:palves@redhat.com]
>
> We allow an arbitrary number of GDBs to connect to the GDB stub running
> in the OS kernel -- each connection gets a dedicated thread.
>
> Currently, we support 320 threads.  This might well increase in the
> future.  With thread name and everything else I want to send back at
> the maximum (because that reflects how much space I might need under
> the offset & length scheme), I calculate 113 bytes per thread (this
> counts <thread> and </thread>) to send back -- before escaping.
>
> So, if I 'snapshot' everything every time I get a packet with an offset
> of 0, the buffer would need to be over 32K bytes in size.  I don't want
> to increase the GDB stub stack size by this much.  So, that means
> either limiting the number of connections (fixed, pre-allocated
> buffers) or using kernel equivalents of malloc and free (which is
> discouraged) or coming up with a different approach -- e.g., avoiding
> the need for the buffer...

So a workaround that will probably never break is to adjust your stub to
remember the xml fragment for only one (or a few) threads at a time, and
serve off of that.  That would only be a problem if gdb "goes
backwards", i.e., if gdb requests a lower offset (other than 0) than the
previously requested offset.

The issue is that qXfer was originally invented for (binary) target
objects for which gdb wants random access.  However, "threads", and a
few other target objects, are xml based.  And for those, it must always
be that gdb reads the whole object, or at least reads it sequentially
starting from the beginning.

I can well imagine optimizations where gdb processes the xml as it is
reading it and stops reading before reaching EOF.  But that wouldn't
break the workaround.  Starting a read somewhere in the middle of the
file could be possible too, but it'd require understanding how to skip
until some xml element starts, and ignoring the fact that the file
wouldn't validate.
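The single-fragment workaround could be sketched roughly like this
(Python for readability; the fragment format, names, and reply markers
are illustrative assumptions, not actual gdbserver code):

```python
# Sketch of a qXfer:threads:read handler that buffers only one XML
# fragment at a time, regenerating fragments on demand and restarting
# from scratch if gdb ever rewinds to a lower offset.

HEADER = '<?xml version="1.0"?>\n<threads>\n'
FOOTER = '</threads>\n'

def thread_fragment(tid, name):
    # One <thread> element of the qXfer:threads XML document.
    return '<thread id="%x" name="%s"/>\n' % (tid, name)

class ThreadsObject:
    """Serves the threads object while caching a single fragment."""

    def __init__(self, threads):
        self.threads = threads      # list of (tid, name) pairs
        self.reset()

    def reset(self):
        self.pos = 0                # byte offset of the cached fragment
        self.index = -1             # -1 = header, len(threads) = footer
        self.cache = HEADER

    def _advance(self):
        # Drop the cached fragment and generate the next one.
        self.pos += len(self.cache)
        self.index += 1
        if self.index < len(self.threads):
            self.cache = thread_fragment(*self.threads[self.index])
        elif self.index == len(self.threads):
            self.cache = FOOTER
        else:
            self.cache = ''         # past EOF

    def read(self, offset, length):
        """Return ('m', data) or ('l', data), as in a qXfer reply."""
        if offset < self.pos:
            self.reset()            # gdb went backwards: start over
        # Skip fragments that end at or before the requested offset.
        while self.cache and self.pos + len(self.cache) <= offset:
            self._advance()
        data = ''
        while self.cache and len(data) < length:
            frag_off = offset + len(data) - self.pos
            data += self.cache[frag_off:frag_off + length - len(data)]
            if offset + len(data) == self.pos + len(self.cache):
                self._advance()
        return ('m' if self.cache else 'l'), data
```

As long as gdb reads sequentially (or restarts at offset 0), the stub
only ever holds one fragment, not the full 32K+ snapshot.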
Plus, gdb doesn't know the size of the file until it reads it fully, so
we'd either need some other way to determine that, or make gdb take
guesses.  So I'm not seeing this happening anytime soon.

>
> So, in terms of saved state, with the snapshot it is 35-36K bytes,
> with the process table index it is 2-8 bytes.
>
> It's too late now, but I would much prefer interfaces something like:
>
> either
>     qfXfer:object:read:annex:length
>     qsXfer:object:read:annex:length
> or
>     qfXfer:object:read:annex
>     qsXfer:object:read:annex
>
> [If the :length wasn't part of the spec, then send as much as you want
> so long as you stay within the maximum packet size.  My preference
> would be to leave off the length, but I'd be happy either way.]

What would you do if the object to retrieve is larger than the maximum
packet size?

Thanks,
Pedro Alves
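For comparison, the qfXfer/qsXfer scheme David proposes could look
roughly like the following, modeled on the existing
qfThreadInfo/qsThreadInfo iteration (packet names and chunking behavior
are assumptions here, not part of any released protocol):

```python
# Sketch of a first/subsequent streaming read: the stub walks the
# object once, handing back stub-chosen chunks, so it needs neither a
# snapshot buffer nor offset bookkeeping.  An 'l' marker ends the walk.

MAX_PACKET = 64          # illustrative maximum payload size

class StreamingObject:
    def __init__(self, fragments):
        self.fragments = fragments   # XML fragments, generated per thread
        self.buffered = ''
        self.next_index = 0

    def qf_read(self):
        """Handle qfXfer: restart the walk and return the first chunk."""
        self.buffered = ''
        self.next_index = 0
        return self.qs_read()

    def qs_read(self):
        """Handle qsXfer: return the next chunk, 'l' marks the last."""
        while (len(self.buffered) < MAX_PACKET
               and self.next_index < len(self.fragments)):
            self.buffered += self.fragments[self.next_index]
            self.next_index += 1
        chunk = self.buffered[:MAX_PACKET]
        self.buffered = self.buffered[MAX_PACKET:]
        done = not self.buffered and self.next_index == len(self.fragments)
        return ('l' if done else 'm'), chunk
```

This also answers the larger-than-packet-size case: an oversized object
simply takes several qsXfer round trips, each bounded by the packet
size, with no random access ever required.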