Date: Thu, 09 Apr 2015 16:29:00 -0000
From: Eli Zaretskii
Subject: Re: [PATCH 0/7] Support reading/writing memory on architectures with non 8-bits bytes
In-reply-to: <55269D1A.3080902@ericsson.com>
To: Simon Marchi
Cc: gdb-patches@sourceware.org
Reply-to: Eli Zaretskii
Message-id: <83vbh5e04f.fsf@gnu.org>
References: <1428522979-28709-1-git-send-email-simon.marchi@ericsson.com>
 <83d23dg1bd.fsf@gnu.org>
 <55269D1A.3080902@ericsson.com>

> Date: Thu, 9 Apr 2015 11:39:06 -0400
> From: Simon Marchi
> CC:
>
> > I wonder: wouldn't it be possible to keep the current "byte == 8 bits"
> > notion, and instead to change the way addresses are interpreted by the
> > target back-end?
> >
> > IOW, do we really need to expose this issue all the way to the
> > higher levels of GDB application code?
>
> I don't think there is an elegant way of making this work without gdb
> knowing at least a bit about it.  If you don't make some changes at
> one level, you'll end up needing to make the equivalent changes at
> some other level (still in gdb core).

I didn't mean to imply that this could work without changes on _some_
level.  The question is what level, and whether or not we expose this
to the application level, where commands are interpreted.

> From what I understand, your suggestion would be to treat addresses as
> indexes of octets in memory.  So, to read target bytes at addresses 3
> and 4, I would have to ask gdb for 4 "gdb" bytes starting at address 6.
>
>                                               size == 2
>                                         v-------------------v
>           +---------+---------+---------+---------+---------+---------+
> real idx  |    0    |    1    |    2    |    3    |    4    |    5    |
>           +----+----+----+----+----+----+----+----+----+----+----+----+
> octet idx |  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 |
>           +----+----+----+----+----+----+----+----+----+----+----+----+
>                                         ^-------------------^
>                                               size == 4
>
> The backend would then divide everything by two and read 2 target
> bytes starting at address 3.

Something like that, yes.

> If we require the user or the front-end to do that conversion, we just
> push the responsibility over the fence to them.

I don't follow: how does the above place any requirements on the user?

> For the developer working with that system 8 hours per day, a size
> of 1 is one 16-bit byte.  His debugger should understand that
> language.

By "size" do you mean the result of "sizeof"?  That could still
measure in target-side units, I see no contradiction there.  I just
don't see why we need to call that unit a "byte".

> If I have a pointer p (char *p) and I want to examine memory starting
> at p, I would do "x/10h p".  That wouldn't give me what I want, as it
> would give me memory at p/2.
I don't see how it follows from my suggestion that 10 here must mean
80 bits.  It could continue meaning 10 16-bit units.

> Also, the gdb code in the context of these platforms becomes instantly
> more hackish if you say that the address variable is not really the
> address we want to read, but the double.

I didn't say that, either.

> Another problem: the DWARF information describes the types using sizes
> in target bytes (at least in our case, other implementations could do
> it differently I suppose).  The "char" type has a size of 1
> (1 x 16 bits).

That's fine, just don't call that a "byte".  Call it a "word".

> So, when you "print myvar", gdb would have to know that it needs to
> convert the size to octets to request the right amount of memory.

No, it won't.  It sounds like my suggestion was totally misunderstood.

> I think the solution we propose is the one that best models the
> debugged system and is therefore the least hacky.

My problem with your solution is that you require the user to change
her thinking about what a "byte" and a "word" are.  GDB is moving
toward being able to debug several different targets at the same time,
and I worry about the user's sanity when one of those targets is of
the kind you are describing.  E.g., suppose we will have a command to
copy memory from one target to another: how do we count the size of
the buffer then?