From mboxrd@z Thu Jan 1 00:00:00 1970
From: "taylor, david"
To: Pedro Alves <palves@redhat.com>, "gdb@sourceware.org"
Subject: RE: why I dislike qXfer
Date: Thu, 16 Jun 2016 20:00:00 -0000
Message-ID: <63F1AEE13FAE864586D589C671A6E18B062D9F@MX203CL03.corp.emc.com>
References: <31527.1465841753@usendtaylorx2l> <7ee87c44-2fe7-741b-d134-49e9a56a966c@redhat.com> <63F1AEE13FAE864586D589C671A6E18B062D63@MX203CL03.corp.emc.com>
X-SW-Source: 2016-06/txt/msg00028.txt.bz2

> From: Pedro Alves [mailto:palves@redhat.com]
> Sent: Thursday, June 16, 2016 2:25 PM
> To: taylor, david; gdb@sourceware.org
> Subject: Re: why I dislike qXfer
>
> On 06/16/2016 06:42 PM, taylor, david wrote:
> >
> >> From: Pedro Alves [mailto:palves@redhat.com]
> >
> > We allow an arbitrary number of GDBs to connect to the GDB stub
> > running in the OS kernel -- each connection gets a dedicated thread.
> >
> > Currently, we support 320 threads.  This might well increase in the
> > future.  With the thread name and everything else I want to send back
> > at the maximum (because that reflects how much space I might need
> > under the offset & length scheme), I calculate 113 bytes per thread
> > (this counts <thread> and </thread>) to send back -- before escaping.
> >
> > So, if I 'snapshot' everything every time I get a packet with an
> > offset of 0, the buffer would need to be over 32K bytes in size.  I
> > don't want to increase the GDB stub stack size by this much.  So, that
> > means either limiting the number of connections (fixed, pre-allocated
> > buffers), or using kernel equivalents of malloc and free (which is
> > discouraged), or coming up with a different approach -- e.g., avoiding
> > the need for the buffer...
>
> So a workaround that probably will never break is to adjust your stub
> to remember the xml fragment for only one (or a few) threads at a
> time, and serve off of that.
> That would only be a problem if gdb "goes backwards",
> i.e., if gdb requests a lower offset (other than 0) than the
> previously requested offset.

What I was thinking of doing was having no saved entries or, depending
on GDB details yet to be discovered, one saved entry:

. Talk to the core OS people about prohibiting characters that require
  quoting from occurring in thread names.
. Compute the maximum potential size of an entry with no padding.
. Do arithmetic on the offset to figure out which process table entry
  to start with.
. Do arithmetic on the length to figure out how many entries to
  process.
. Pad each entry at the end with spaces to bring it up to the maximum.
. For dead threads, fill the entry with spaces.
. Report done ('l') when there are no more live threads between the
  current position and the end of the process table.

> The issue is that qXfer was originally invented for (binary) target
> objects for which gdb wants random access.  However, "threads", and a
> few other target objects, are xml based.  And for those, it must
> always be that gdb reads the whole object, or at least reads it
> sequentially starting from the beginning.  I can well imagine
> optimizations where gdb processes the xml as it is reading it and
> stops reading before reaching EOF.  But that wouldn't break the
> workaround.

The qXfer objects for which I am thinking of implementing stub support
fall into two categories:

. small enough that I would expect GDB to read it in toto in one chunk.
  For example, auxv.  Initially, I will likely have two entries
  (AT_ENTRY, AT_NULL); 6 or 7 others might get added later.  Worst
  case, it all easily fits in one packet.

. larger, with structure and possibly variable length elements -- where
  I would expect multiple sequential reads starting at the beginning
  and continuing until everything is read.  For example, threads with
  no padding and skipping dead threads.
> Starting a read somewhere in the middle of the file could be possible
> too, but it'd require understanding how to skip until some xml element
> starts, and ignoring the fact that the file wouldn't validate.  Plus,
> gdb doesn't know the size of the file until it reads it fully, so we'd
> either need some other way to determine that, or make gdb take
> guesses.  So I'm not seeing this happening anytime soon.

But, alas, the community won't commit to it.

> > So, in terms of saved state, with the snapshot it is 35-36K bytes,
> > with the process table index it is 2-8 bytes.
> >
> > It's too late now, but I would much prefer interfaces something like:
> >
> > either
> >     qfXfer:object:read:annex:length
> >     qsXfer:object:read:annex:length
> > or
> >     qfXfer:object:read:annex
> >     qsXfer:object:read:annex
> >
> > [If the :length wasn't part of the spec, then send as much as you
> > want so long as you stay within the maximum packet size.  My
> > preference would be to leave off the length, but I'd be happy either
> > way.]
>
> What would you do if the object to retrieve is larger than the maximum
> packet size?

Huh?  qfXfer would read the first part; each subsequent qsXfer would
read the next chunk.  If you wanted to think of it in offset/length
terms, the offset for qfXfer would be zero; for qsXfer it would be the
sum of the sizes (ignoring GDB escaping modifications) of the qfXfer
packet and any qsXfer packets that occurred after the qfXfer and before
this qsXfer.

As now, sub-elements (e.g. within <threads>) could be contained within
one packet or split between multiple packets.  Put the packets together
in the order received, with no white space or anything else between
them, and pass the result off to GDB's XML processing.

Or do I not understand your question?

> Thanks,
> Pedro Alves