From: "taylor, david"
To: Pedro Alves , "gdb@sourceware.org"
Subject: RE: why I dislike qXfer
Date: Thu, 16 Jun 2016 17:42:00 -0000
Message-ID:
<63F1AEE13FAE864586D589C671A6E18B062D63@MX203CL03.corp.emc.com>
References: <31527.1465841753@usendtaylorx2l> <7ee87c44-2fe7-741b-d134-49e9a56a966c@redhat.com>
In-Reply-To: <7ee87c44-2fe7-741b-d134-49e9a56a966c@redhat.com>

> From: Pedro Alves [mailto:palves@redhat.com]
> Sent: Monday, June 13, 2016 2:36 PM
> To: taylor, david; gdb@sourceware.org
> Subject: Re: why I dislike qXfer
>
> On 06/13/2016 07:15 PM, David Taylor wrote:
>
> > With the qT{f,s}{STM,P,V} and q{f,s}ThreadInfo (and possibly others)
> > interfaces, nothing needs to be precomputed, and I either start at the
> > beginning (f -- first) or where the previous request left off (s --
> > subsequent).
>
> > I have to store, per connection, my location.  But there is no random
> > reading.  The next request of that flavor will either start at the
> > beginning (f) or where the last one left off (s).  Reads are sequential.
>
> If you support non-stop mode, the target is running and the list of
> threads changes as gdb is iterating.  The "location" thread can exit and
> you're left not knowing where to continue from, for example.  To get
> around that, generate a stable snapshot when you get the f request, and
> serve gdb's requests from that snapshot.

We are non-stop.  The "location" thread exiting would not be a problem.
Each request, whether first or subsequent, would send one or more complete
thread entries.

When sending a reply, you know where in the process table to start; you
skip dead threads and fill in entries until, after the XML escaping and the
GDB escaping, an additional complete entry will not fit.  You record where
you stopped -- where to resume.
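To make the idea concrete, here is a minimal sketch (in C, since the stub
lives in a kernel) of serving q{f,s}ThreadInfo-style replies with only a
per-connection resume index -- no snapshot.  The table size, reply size,
and the "m.../l" reply shape are simplified stand-ins, not EMC's actual
stub code:

```c
#include <stdio.h>
#include <string.h>

#define NTHREADS  8     /* stand-in for the real 320-entry process table */

struct thread {
    int live;           /* dead entries are skipped, never reported     */
    int tid;
};

static struct thread table[NTHREADS];

/* Build one reply starting at *cursor.  Appends thread ids in the
 * "m<id>,<id>,..." style while a complete entry still fits in buf, then
 * records in *cursor where the next 's' request should resume.  Returns
 * the number of threads reported; 0 means end of list (reply "l"). */
static int build_reply(int *cursor, char *buf, size_t bufsz)
{
    int reported = 0;
    size_t used = 1;
    buf[0] = 'm';
    buf[1] = '\0';

    while (*cursor < NTHREADS) {
        struct thread *t = &table[*cursor];
        if (!t->live) {                 /* skip dead threads */
            (*cursor)++;
            continue;
        }
        char entry[16];
        int n = snprintf(entry, sizeof entry, "%s%x",
                         reported ? "," : "", t->tid);
        if (used + (size_t)n >= bufsz)  /* one more entry won't fit: stop */
            break;
        memcpy(buf + used, entry, (size_t)n + 1);
        used += (size_t)n;
        reported++;
        (*cursor)++;                    /* resume after this entry */
    }
    if (reported == 0)
        strcpy(buf, "l");               /* no more threads */
    return reported;
}
```

Because the cursor is just an index, a thread exiting mid-iteration only
means its (now dead) entry gets skipped on the next request; nothing else
needs to be saved per connection.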
We allow an arbitrary number of GDBs to connect to the GDB stub running in
the OS kernel -- each connection gets a dedicated thread.

Currently, we support 320 threads.  This might well increase in the future.
With the thread name and everything else I want to send back at its maximum
(because that reflects how much space I might need under the offset &
length scheme), I calculate 113 bytes per thread (this counts the <thread>
and </thread> markup) to send back -- before escaping.

So, if I 'snapshot' everything every time I get a packet with an offset of
0, the buffer would need to be over 32K bytes in size.  I don't want to
increase the GDB stub stack size by that much.  So, that means either
limiting the number of connections (fixed, pre-allocated buffers), or using
kernel equivalents of malloc and free (which is discouraged), or coming up
with a different approach -- e.g., avoiding the need for the buffer...

So, in terms of saved state: with the snapshot it is 35-36K bytes; with the
process table index it is 2-8 bytes.

It's too late now, but I would much prefer interfaces something like either

    qfXfer:object:read:annex:length
    qsXfer:object:read:annex:length

or

    qfXfer:object:read:annex
    qsXfer:object:read:annex

[If the :length wasn't part of the spec, then send as much as you want so
long as you stay within the maximum packet size.  My preference would be to
leave off the length, but I'd be happy either way.]

David
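[Editorial aside: the buffer arithmetic in the message checks out.  A quick
sketch, using the 113-bytes-per-thread and 320-thread figures David quotes:]

```python
# Size of a per-connection snapshot buffer for the full thread list,
# before any escaping, using the figures quoted in the message.
BYTES_PER_THREAD = 113
MAX_THREADS = 320

snapshot_bytes = BYTES_PER_THREAD * MAX_THREADS
print(snapshot_bytes)               # 36160 bytes total
print(snapshot_bytes / 1024)        # ~35.3 KiB -- the "35-36K" saved state
assert snapshot_bytes > 32 * 1024   # hence "over 32K bytes" per connection
```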