Date: Fri, 04 Mar 2011 12:10:00 -0000
From: Joel Brobecker
To: Pedro Alves
Cc: gdb-patches@sourceware.org
Subject: Re: Add support for VxWorks (v3)
Message-ID: <20110304121028.GH30306@adacore.com>
References: <1299219720-13398-1-git-send-email-brobecker@adacore.com> <201103040936.02472.pedro@codesourcery.com> <20110304100434.GG30306@adacore.com> <201103041045.10426.pedro@codesourcery.com>
In-Reply-To: <201103041045.10426.pedro@codesourcery.com>

First of all, I really appreciate the attention and time spent on this topic. Thanks, guys!
After writing another rather lengthy reply, I started thinking that maybe I should go back to basics and describe how VxWorks systems work. I tried touching on that in the GDB Manual update, but assumed that readers were minimally familiar with a few concepts. However, I'm running tight on time, and after re-reading myself, I think I presented most of the important details about this system in a way that flows reasonably well. So I'm giving this reply a try; if that doesn't work, or you feel it is necessary, I'll try to write something up that describes the system in more detail (or we can go with my offer to deal with that sometime during the GCC Summit - see below).

> 1) Could map into a single inferior and then "info sharedlibrary"
> lists all partitions?  What is it in the use case that makes
> this not viable?

I really think that the concepts of partitions and shared libraries do not match at all. It would be simpler if we were allowed to think of partitions as independent computers. This is pretty much the case (and I believe the intent!), except for the extra complication that inter-partition linkage is allowed under certain conditions.

> 2) Could map each partition into an inferior?  What is
> it in the use case that makes this not viable?

We can't map a partition into an inferior either, because we may be debugging a single (VxWorks) task, or a group of tasks, from that partition. In that case, the inferior consists purely of the VxWorks task(s) being debugged.

> > Another use case is when
> > debugging a task that runs some code provided by a shared partition.
> > It's a little bit like shared libraries on traditional OSes.  In that
> > case, you're effectively debugging over several partitions at the same
> > time.
>
> So that task would be mapped to a running inferior, and the
> shared partition would appear under shared libraries?

It's a little bit like that.
VxWorks systems consist of partitions, inside which there are modules, which are inter-linked blobs of code. There is no executable, just these pieces of code called modules. All these modules, including the ones in shared partitions, but also those in the current partition, are just objfiles. We have no main objfile in this case. Because the system is so bare, it's really hard to map any of the standard OS concepts onto it.

When you read memory, or insert a breakpoint, the user needs to tell us which partition this applies to. This is what the "partition" commands are for. It's a little bit like the "current language": it tells us the context of all the queries that the user is making.

> If we end up with partitions, note that the "program space" entity
> was designed specifically for this.  From your description, the
> partition's symbols are a natural fit for a "program space".
> I kept the "address space" object separate from the "program space"
> exactly for cases like these, where you have several
> distinct programs (program spaces / partitions) running under
> the same address space.

Absolutely. I remember reading the specs of your program & address spaces with great interest when you published them. They'll need a few extensions, I think (for instance, an inferior may be running over multiple program spaces). I think that we'll have one program space and one address space per partition (except maybe for shared partitions, which, I think, do not have their own address space, only code). That's also only part of the solution: we're going to need a target-side extension to allow the target to tell core GDB that the system has partitions, which partitions, etc. And then a core part to deal with the partitions themselves.

> You shouldn't need objfile list swapping hacks anymore.

Hopefully not - just enhanced versions of the objfile chain walking.

I am really sorry to be using up so much of everybody's time for this concept.
I wish it were easy to explain, but VxWorks systems can take a little bit of explanation before one gets the hang of them. If you guys are interested, and are going to be at this year's GCC Summit, I'm more than happy to give a 10-15 min presentation of VxWorks systems, and let you ask any questions you may have. We could then talk about how best to implement partition support. Otherwise, I'm more than happy to continue by email. -- Joel