Subject: Re: why I dislike qXfer
From: Pedro Alves
To: "taylor, david", "gdb@sourceware.org"
Date: Fri, 17 Jun 2016 14:33:00 -0000
References: <31527.1465841753@usendtaylorx2l>
 <7ee87c44-2fe7-741b-d134-49e9a56a966c@redhat.com>
 <63F1AEE13FAE864586D589C671A6E18B062D63@MX203CL03.corp.emc.com>
 <63F1AEE13FAE864586D589C671A6E18B062D9F@MX203CL03.corp.emc.com>
In-Reply-To: <63F1AEE13FAE864586D589C671A6E18B062D9F@MX203CL03.corp.emc.com>
On 06/16/2016 08:59 PM, taylor, david wrote:
>
>> From: Pedro Alves [mailto:palves@redhat.com]
>> So a workaround that probably will never break is to adjust your stub
>> to remember the xml fragment for only one (or a few) threads at a
>> time, and serve off of that.  That would only be a problem if gdb
>> "goes backwards", i.e., if gdb requests a lower offset (other than 0)
>> than the previously requested offset.
>
> What I was thinking of doing was having no saved entries or, depending on
> GDB details yet to be discovered, one saved entry.
>
> Talk to the core OS people about prohibiting characters that require
> quoting from occurring in the thread name.
>
> Compute the maximum potential size of an entry with no padding.
>
> Do arithmetic on the offset to figure out which process table entry to
> start with.
>
> Do arithmetic on the length to figure out how many entries to process.
>
> Pad each entry at the end with spaces to bring it up to the maximum.
>
> For dead threads, fill the entry with spaces.
>
> Report done ('l') when there are no more live threads between the current
> position and the end of the process table.

That sounds overcomplicated, but it's up to you.

I think "no saved entries" would be problematic, unless you assume that
gdb never requests a chunk smaller than the size of one entry.  If it
does, and you return half of a thread element, then by the time gdb
fetches the rest of the element, the thread might have changed state
already.  So you could end up returning an impossible extended-info
string or thread name: a Frankenstein-like mix of before/after state
(extended info goes "AAAA" -> "BBBB", and you report back "AA" + "BB").

And if you're going to save one entry, you might as well keep it simple,
as in my original suggestion.

>
>> The issue is that qXfer was originally invented for (binary) target
>> objects for which gdb wants random access.  However, "threads", and a
>> few other target objects, are xml based.
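[For the archive: the fixed-width scheme sketched in the quoted bullets above can be illustrated roughly as follows.  This is a hypothetical Python sketch, not stub code; MAX_ENTRY, entry_xml, and read_threads are invented names, and the "done" flag only approximates the 'l' reply semantics.]

```python
# Hypothetical sketch of the fixed-width entry scheme described above:
# every thread element is space-padded to a fixed maximum size, so a
# qXfer-style offset maps directly onto a process-table index and the
# stub needs no saved state between requests.
MAX_ENTRY = 64  # assumed maximum size of one padded XML entry

def entry_xml(tid, name):
    """Render one thread element, space-padded to MAX_ENTRY bytes."""
    xml = '<thread id="%x" name="%s"/>' % (tid, name)
    assert len(xml) <= MAX_ENTRY
    return xml.ljust(MAX_ENTRY)

def read_threads(table, offset, length):
    """Serve an (offset, length) read over the padded table.

    Returns (payload, done); done stands in for the 'l' (last) reply.
    Dead slots (None) are served as all spaces, as suggested above.
    """
    first = offset // MAX_ENTRY          # entry the offset lands in
    skip = offset % MAX_ENTRY            # partial entry already sent
    out = []
    for slot in table[first:]:
        if slot is None:
            out.append(' ' * MAX_ENTRY)  # dead thread: spaces
        else:
            out.append(entry_xml(*slot))
    data = ''.join(out)[skip:]
    return data[:length], length >= len(data)
```

Note that even here, a request smaller than MAX_ENTRY can still split one element across replies, which is exactly the half-an-entry hazard discussed below the bullets.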
And for those, it must always be that gdb reads the whole object, or at
least reads it sequentially starting from the beginning.  I can well
imagine optimizations where gdb processes the xml as it is reading it
and stops reading before reaching EOF.  But that wouldn't break the
workaround.

> The qXfer objects for which I am thinking of implementing stub support
> fall into two categories:
>
> . small enough that I would expect GDB to read it in toto in one chunk.
>   For example, auxv.  Initially, I will likely have two entries
>   (AT_ENTRY, AT_NULL); 6 or 7 others might get added later.  Worst
>   case, it all easily fits in one packet.

GDB does cache some objects like that, but not others.  E.g., auxv is
cached nowadays, but that wasn't always the case, and most other objects
are not cached.

>>> It's too late now, but I would much prefer interfaces something like:
>>>
>>> either
>>>    qfXfer:object:read:annex:length
>>>    qsXfer:object:read:annex:length
>>> or
>>>    qfXfer:object:read:annex
>>>    qsXfer:object:read:annex
>>>
>>> [If the :length wasn't part of the spec, then send as much as you want
>>> so long as you stay within the maximum packet size.  My preference
>>> would be to leave off the length, but I'd be happy either way.]
>>
>> What would you do if the object to retrieve is larger than the maximum
>> packet size?
>
> Huh?  qfXfer would read the first part; each subsequent qsXfer would
> read the next chunk.  If you wanted to think of it in offset/length
> terms, the offset for qfXfer would be zero; for qsXfer it would be the
> sum of the sizes (ignoring GDB escaping modifications) of the qfXfer
> packet and any qsXfer packets that occurred after the qfXfer and before
> this qsXfer.
>
> As now, sub-elements (e.g. within <threads>) could be contained within
> one packet or split between multiple packets.  Put the packets together
> in the order received with no white space or anything else between them
> and pass the result off to GDB's XML processing.
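[For the archive: the reassembly rule quoted above — concatenate chunks in arrival order, only then parse — can be shown in a few lines.  A minimal sketch, assuming nothing about the real gdb implementation; reassemble is an invented name.]

```python
# Chunks from successive qXfer (or hypothetical qfXfer/qsXfer) reads are
# concatenated verbatim, in arrival order, and only the completed
# document is handed to the XML parser.  Element boundaries need not
# align with packet boundaries.
import xml.etree.ElementTree as ET

def reassemble(chunks):
    """Concatenate protocol chunks and parse the resulting document."""
    return ET.fromstring(''.join(chunks))

# A thread element deliberately split mid-attribute across two packets:
chunks = ['<threads><thread id="1" na', 'me="main"/></threads>']
root = reassemble(chunks)
```

This is why the split-element case has to be handled regardless of which packet scheme is used: the parser only ever sees the joined stream.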
>
> Or do I not understand your question?

If you're still going to need to handle sub-elements split between
packets, then, other than making explicit the assumption that gdb reads
the object sequentially, what's the real difference between this and gdb
fetching with the existing qXfer, but requesting larger chunks, e.g., the
size of the stub's reported max packet length?

On the "leave off the length": I don't think it'd be a good idea for the
target to be in complete control of the transfer chunk size, without
there being a way to interrupt the transfer.  I mean, there's no real
limit on the incoming packet size (gdb grows the buffer dynamically),
and if gdb requests qfXfer:object:read:annex and the stub decides to
send the whole multi-megabyte object back in one go, that's going to hog
the RSP channel until the packet is fully transferred.

Thanks,
Pedro Alves
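[For the archive: the point about the client keeping control of the chunk size can be sketched as a fetch loop.  This is an illustrative Python model, not gdb code; fetch_object, read_chunk, and stop are invented names, and the 'm'/'l' prefixes follow the qXfer reply convention (more data / last chunk).]

```python
# With the length in the request, the client drives the transfer in
# bounded chunks, so it can stop (or be interrupted) between round
# trips; the stub never gets to send a multi-megabyte reply in one go.
CHUNK = 4096  # a chunk size the client chose, e.g. from the stub's
              # reported max packet length

def fetch_object(read_chunk, stop=lambda: False):
    """Fetch a whole object in CHUNK-sized reads; abortable between reads.

    read_chunk(offset, length) models one qXfer read round trip and
    returns an 'm'- or 'l'-prefixed reply string.
    """
    data, offset = [], 0
    while not stop():
        reply = read_chunk(offset, CHUNK)  # one bounded round trip
        data.append(reply[1:])             # strip the 'm'/'l' marker
        offset += len(reply) - 1
        if reply[0] == 'l':                # last chunk: transfer done
            return ''.join(data)
    return None  # interrupted between packets
```

The stop hook is where an interrupt lands: between packets, never mid-packet, which is exactly what a length-less qfXfer reply would give up.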