From: Andreas Jaeger <aj@suse.de>
To: Eli Zaretskii <eliz@is.elta.co.il>
Cc: gdb@sourceware.cygnus.com, Andrew Cagney <ac131313@cygnus.com>,
DJ Delorie <dj@delorie.com>
Subject: Re: -Wmissing-prototypes ...
Date: Thu, 02 Dec 1999 22:19:00 -0000 [thread overview]
Message-ID: <u8k8mwk17z.fsf@gromit.rhein-neckar.de> (raw)
In-Reply-To: <Pine.SUN.3.91.991202192443.17848B-100000@is>
>>>>> Eli Zaretskii writes:
Eli> On 2 Dec 1999, Andreas Jaeger wrote:
Eli> +static void print_387_status (unsigned, struct env387 *);
>> Shouldn't this be unsigned int? AFAIK the new ISO C99 standard
>> mandates it - and it doesn't harm here.
Eli> I didn't know we are supposed to be compatible with C9x. It's probably a
Eli> good idea to tell this explicitly, so all platform maintainers know.
That was a personal comment - I'm not a gdb maintainer.
Eli> Yes, you can change all places with "unsigned" into "unsigned int".
Andreas
--
Andreas Jaeger
SuSE Labs aj@suse.de
private aj@arthur.rhein-neckar.de
From toddpw@windriver.com Thu Dec 02 23:19:00 1999
From: Todd Whitesel <toddpw@windriver.com>
To: davidwilliams@ozemail.com.au
Cc: gdb@sourceware.cygnus.com (GDB Developers)
Subject: Re: exceptionHandler for 68K
Date: Thu, 02 Dec 1999 23:19:00 -0000
Message-id: <199912030719.XAA19824@alabama.wrs.com>
References: <01BF3D76.F8F982C0.davidwilliams@ozemail.com.au>
X-SW-Source: 1999-q4/msg00431.html
Content-length: 997
> to the exception handler. My problem is that I don't understand where this
> table of jsr's is located...
Admittedly I have not looked at the code, but it may be generated by a macro,
so it might be easy to miss. You could try building a debug stub and objdump
it to see where all the vector table entries point to.
According to the discussion you posted, it sounds like there's a label called
'exception' and the 256 JSR's are stored underneath that. So maybe you should
search the source for references to that label?
> The m68k-stub has code to determine the exception number by taking the
> return address on the stack (the return address is the location in the jsr
> table) and adding 1530 and then dividing by 6.
Feh. Use BSR.W and divide by 4. Or chop up the table into four pieces, each
a row of BSR.S instructions jumping to slightly different computation code
at the bottom -- that lets you divide by 2 and save another 500 bytes or so.
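Either layout reduces to the same arithmetic: subtract the table base from the pushed return address and divide by the entry size. A minimal C sketch of that recovery step, with hypothetical names and the 68K entry sizes assumed (JSR abs.l is 6 bytes, BSR.W is 4):

```c
#include <stdint.h>

#define JSR_ENTRY_SIZE 6   /* jsr abs.l is 6 bytes on the 68K */
#define BSR_ENTRY_SIZE 4   /* bsr.w is 4 bytes */

/* The return address pushed by a table entry points just past that
   entry, so entry i leaves table_base + (i + 1) * entry_size on the
   stack.  Invert that to recover the exception number.  */
static unsigned int
exception_number (uint32_t return_addr, uint32_t table_base,
                  unsigned int entry_size)
{
  return (return_addr - table_base) / entry_size - 1;
}
```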
--
Todd Whitesel
toddpw @ windriver.com
From ac131313@cygnus.com Thu Dec 02 23:22:00 1999
From: Andrew Cagney <ac131313@cygnus.com>
To: "davidwilliams@ozemail.com.au" <davidwilliams@ozemail.com.au>
Cc: "'gdb mail list'" <gdb@sourceware.cygnus.com>
Subject: Re: gdb stack in stub
Date: Thu, 02 Dec 1999 23:22:00 -0000
Message-id: <38476F7E.B6A46989@cygnus.com>
References: <01BF3D52.CD93EEA0.davidwilliams@ozemail.com.au>
X-SW-Source: 1999-q4/msg00432.html
Content-length: 881
David Williams wrote:
>
> Hi all,
>
> I noticed that a 10K local stack is allocated in the m68k-stub.c for its own
> use when communicating with gdb. This seems excessive. I would like to leave
> the stub code in my final application so that I can debug in the field (via
> a special option). However it would be better if the stub consumed the
> least amount of system resources as possible.
>
> Is there any problem with using my applications stack (if enough room is
> allocated for GDB usage in addition to normal usage)? My application always
> runs in supervisor mode.
FYI,
GDB likes to perform inferior function calls on the target stack. Watch
the effect of:
(gdb) print printf ("Hello world\n")
If you try to use the target program's stack, GDB is very likely to
trash it :-(
enjoy,
Andrew
> What is likely max stack usage of GDB?
>
> TIA
> David Williams.
From ac131313@cygnus.com Fri Dec 03 00:52:00 1999
From: Andrew Cagney <ac131313@cygnus.com>
To: Steven Johnson <sbjohnson@ozemail.com.au>
Cc: jtc@redback.com, gdb@sourceware.cygnus.com
Subject: Re: Standard GDB Remote Protocol
Date: Fri, 03 Dec 1999 00:52:00 -0000
Message-id: <38478423.6ACE7BF8@cygnus.com>
References: <199911090706.CAA13120@zwingli.cygnus.com> <199911102246.RAA01846@mescaline.gnu.org> <npr9hi321d.fsf@zwingli.cygnus.com> <199911231303.IAA01523@mescaline.gnu.org> <npr9hg2a9t.fsf@zwingli.cygnus.com> <199911251715.MAA09225@mescaline.gnu.org>
X-SW-Source: 1999-q4/msg00433.html
Content-length: 9163
Steven Johnson wrote:
> It is based on my First Read of the current online version of protocol
> specification at:
> http://sourceware.cygnus.com/gdb/onlinedocs/gdb_14.html
> Packet Structure:
>
> Simple structure, obviously originally designed to be able to be driven
> manually from a TTY. (Hence its ASCII nature.) However, the protocol
> has evolved quite significantly and I doubt it could still be used very
> efficiently from a TTY. That said, it still demarks frames effectively.
One really useful thing to know is the command:
(gdb) set remotedebug 1
If there isn't already, there should be a reference to that command (or
its successor).
> Sequence Numbers:
>
> Definition of Sequence ID's needs work. Are they necessary? Are they
> deprecated? What purpose do they currently serve within GDB? One would
> imagine that they are used to allow GDB to handle retransmits from a
> remote system. Reading between the lines, this is done to allow error
> recovery when a transmission from target to host fails. Possible
> sequence being:
I guess the best description is ``in limbo''. Sequence IDs have been in
the protocol for as long as anyone can remember but, at the same time, no
one can actually remember them being used. True?
I didn't deprecate it as there does need to be something for handling
things like duplicate packets (so that the protocol can finally be used
reliably across UDP and the like). Someone needs to sit down and fully
specify this or (better) identify an existing protocol that can be used
to specify the sequence-id's behaviour.
There should at least be a note pointing out the current status of
sequence-IDs.
> The 2 primary timing constraints I see that are missing are:
>
> Inter character times during a message transmission, and Ack/Nak
> response times.
>
> If a message is only half received, the receiver has no ability without
> a timeout mechanism of generating a NAK signalling failed receipt. If
> this occurs, and there is no timeout on ACK/NAK reception, the entire
> comms stream could Hang. Transmitter is Hung waiting for an ACK/NAK and
> the Receiver is Hung waiting for the rest of the message.
>
> I would propose that something needs to be defined along the lines of:
>
> Once the $ character for the start of a packet is transmitted, each
> subsequent byte must be received within "n" byte transmission times.
> (This would allow for varying comms line speeds). Or alternately a
> global timeout on the whole message could be defined: once "$" (start
> sentinel) is sent, the complete message must be received within "X"
> time. I personally favour the inter character time as opposed to
> complete message time as it will work with any size message, however the
> complete message time restricts the maximum size of any one message (to
> how many bytes can be sent at the maximum rate for the period). These
> timeouts do not need to be very tight, as they are merely for complete
> failure recovery and a little delay there does not hurt much.
How GDB behaves should be clarified. However, any definition should
avoid referring to absolute times and instead refer the user back to a
number of knobs that can be tweaked from the GDB command line.
This in turn suggests that there should be a section describing the
protocol's behaviour with direct references to things like the
configurable timers.
I've sometimes wondered about doing a proper SDL spec.... :-)
> Identified Protocol Hole:
>
> Lets look at the following abstract scenario (Text in brackets are
> supporting comments):
>
> <- $packet-data#checksum (Run Target Command)
> -> + (Response is lost due to a line
> error)
> (Target runs for a very short period of time and then breaks).
> -> $sequence-id:packet-data#checksum (Break Response - GDB takes as a
> NAK, expecting a +, got a $).
> <- $packet-data#checksum (GDB retransmits it's Run Target Command,
> target restarts)
> -> + (Response received OK by GDB).
> (Target again starts running.)
That is correct.
The protocol, as it currently stands, is only robust against overrun
errors (detected by a checksum failure).
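For reference, the checksum mentioned here is simply the modulo-256 sum of the characters between '$' and '#', transmitted as two hex digits. A minimal sketch (the function name is mine, not GDB's):

```c
#include <stddef.h>

/* Sum the packet body modulo 256; unsigned char arithmetic wraps,
   which gives the modulo for free.  */
static unsigned char
remote_checksum (const char *data, size_t len)
{
  unsigned char sum = 0;
  size_t i;

  for (i = 0; i < len; i++)
    sum += (unsigned char) data[i];
  return sum;
}
```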
In practice this has proven to be sufficient in almost all situations.
Because of the request/response nature of the protocol, the probability
of a dropped/corrupt ACK packet is very low. (I'm not saying this is a
good thing, just a lucky thing :-)
This would suggest that at least the spec should make the known
limitations clear.
This is another reason why I've not simply abandoned the sequence-ID -
I'm hoping someone will do something about this :-)
> Run Length Encoding:
>
> Is run length encoding supported in all packets, or just some packets?
> (For example, not binary packets)
> Why not allow lengths greater than 126? Or does this mean lengths
> greater than 97 (as in 126-29)
> If binary packets with 8 bit data can be sent, why not allow RLE to use
> length also greater than 97. If the length maximum is really 126, then
> this yields the character 0x9B which is 8 bits, wouldn't the maximum
> length in this case be 226. Or is this a misprint?
FYI, any packet can be RLE'd. RLE handling is done as part of unpacking
a packet; the code doesn't know whether it is binary/ASCII at that point.
The old RLE size reflects the fact that a 7 bit printable character was
used (the actual equation is RLE - ' ' +3). While binary packets could
extend this, I don't think that there is any benefit.
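The count equation above (count character minus ' ' plus 3, i.e. minus 29) is enough to write a toy decoder. A sketch, with the function name and the missing error handling mine; it assumes the count character stands for that many extra copies of the preceding character:

```c
#include <stddef.h>

/* Decode an RLE packet body: '*' is followed by a printable count
   character c, and c - 29 extra copies of the preceding character
   are emitted.  dst is assumed large enough.  */
static size_t
rle_decode (const char *src, char *dst)
{
  size_t out = 0;

  for (; *src != '\0'; src++)
    {
      if (*src == '*' && out > 0 && src[1] != '\0')
        {
          int n = src[1] - 29;      /* extra repetitions */
          char c = dst[out - 1];    /* character being repeated */

          while (n-- > 0)
            dst[out++] = c;
          src++;                    /* skip the count character */
        }
      else
        dst[out++] = *src;
    }
  dst[out] = '\0';
  return out;
}
```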
I've rarely seen an RLE stub in practice. The benefits would be
significant.
GDB doesn't send RLE packets and, I think, it should.
> Why are there 2 methods of RLE? Is it important for a Remote Target to
> understand and process both, or is the "cisco encoding" a proprietary
> extension of the GDB Remote protocol, and not part of the standard
> implementation. The documentation of "cisco encoding" is confusing and
> seems to conflict with standard RLE encoding. They appear to be mutually
> exclusive. If they are both part of the protocol, how are they
> distinguished when used?
It's an unsupported extension. Ideas from Cisco are slowly being
rolled back into the normal protocol.
> Deprecated Messages:
>
> Should an implementation of the protocol implement the deprecated
> messages or not? What is the significance of the deprecated messages to
> the current implementation?
(You're not the first one to ask that one :-)
They are there as a very strong deterrent for people thinking of
re-using the relevant letters. This should probably be clarified.
> Character Escaping:
>
> The mechanism of Escaping the characters is not defined. Further it is
> only defined as used by write mem binary. Wouldn't it be useful for
> future expansion of the protocol to define Character Escaping as a
> global feature of the protocol, so that if any control characters were
> required to be sent, they could be escaped in a consistent manner across
> all messages. Also, wouldn't the full list of escape characters be
> $,#,+,-,*,0x7d. Otherwise, + & - might be processed inadvertently as ACK
> or NAK. If this can't happen, then why must they be avoided in RLE? If
> they are escaped across all messages, then that means they could be used
> in RLE and not treated specially.
Beyond the X packet there is no escape mechanism. GDB assumes the
connection is capable of transferring printable ASCII. Escaping
characters should probably be left to a lower layer.
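For the X packet itself, the escaping is: 0x7d ('}') is the escape byte, and the byte that follows is the original XORed with 0x20; '$', '#' and 0x7d themselves must be sent escaped. A minimal encoder sketch (the function name is mine; out is assumed to have room for the worst case of 2 * len bytes):

```c
#include <stddef.h>

static size_t
escape_binary (const unsigned char *in, size_t len,
               unsigned char *out)
{
  size_t n = 0;
  size_t i;

  for (i = 0; i < len; i++)
    {
      unsigned char b = in[i];

      if (b == '$' || b == '#' || b == 0x7d)
        {
          out[n++] = 0x7d;        /* escape marker */
          out[n++] = b ^ 0x20;    /* transformed byte */
        }
      else
        out[n++] = b;
    }
  return n;
}
```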
(A side note: The existing protocol spec mixes the packet specification
and transfer in with the specification of the actual packet body - not a
good way of defining a spec.)
> 8/7 Bit protocol.
>
> With the documentation of RAW Binary transfers, the protocol moves from
> being a strictly 7 bit affair into being a 8 bit capable protocol. If
> this is so, then shouldn't all the restrictions that are placed from the
> 7 bit protocol days be lifted to take advantage of the capabilities of
> an 8 bit message stream. (RLE limitations, for example). Would anyone
> seriously be using a computer that had a 7 bit limitation anymore
> anyway? (At least a computer that would run GDB with remote debugging).
I suspect that it will still be there for a while longer. Even if there
were no broken serial controllers there will still be broken stubs :-(
> Thoughts on consistency and future growth:
>
> Apply RLE as a feature of All messages. (Including binary messages, as
> these can probably benefit significantly from it).
>
> Apply the Binary Escaping mechanism as a feature of the packet that is
> performed on all messages prior to transmission and immediately after
> reception. Define an exhaustive set of "Characters to be escaped".
>
> Introduce message timing constraints.
>
> Properly define sequence-id and allow it to be used from GDB to make
> communications secure and reliable.
FYI, the one I really wish someone would pursue is a mechanism that
allowed GDB to send console input down to the target. Cisco added code
that does something, however it isn't robust.
To do this, I suspect that some of the other issues you've raised would
also need to be addressed.
thanks,
Andrew
From ac131313@cygnus.com Fri Dec 03 01:14:00 1999
From: Andrew Cagney <ac131313@cygnus.com>
To: Jim Blandy <jimb@cygnus.com>
Cc: Eli Zaretskii <eliz@gnu.org>, gdb@sourceware.cygnus.com
Subject: Re: ST(i) and MMj
Date: Fri, 03 Dec 1999 01:14:00 -0000
Message-id: <38478987.EECEEBF@cygnus.com>
References: <199911090706.CAA13120@zwingli.cygnus.com> <199911102246.RAA01846@mescaline.gnu.org> <npr9hi321d.fsf@zwingli.cygnus.com> <199911231303.IAA01523@mescaline.gnu.org> <npr9hg2a9t.fsf@zwingli.cygnus.com> <199911251715.MAA09225@mescaline.gnu.org> <npzovvc04o.fsf@zwingli.cygnus.com> <199912010821.DAA27130@mescaline.gnu.org> <npogca9tb8.fsf@zwingli.cygnus.com>
X-SW-Source: 1999-q4/msg00434.html
Content-length: 1151
Jim Blandy wrote:
>
> > During that discussion I did agree that these registers should not be
> > treated as separate, but it seems we meant different things.
> > What I meant was that it is a Bad Idea to maintain separate data for
> > each one of these sets.
>
> Ah. I see what you meant now. Yes, we misunderstood each other.
>
> > But I don't see why cannot GDB _think_ about %st(X) and %mmY as being
> > separate registers while in reality they share the same data, if this
> > sharing is concealed behind REGISTER_BYTE and REGISTER_RAW_SIZE (and
> > possibly other functions/macros used to manipulate registers). What
> > are the specific problems with this scheme?
>
> Grep the sources for NUM_REGS, and look for loops that traverse the
> register set. Prove to yourself that none of these loops will break
> if register X aliases register Y. Persuade yourself that nobody in
> the future, innocent of the x86's sins, will write such a loop.
>
> I tried, but I couldn't manage it. :)
I agree with Jim. The way GDB currently resolves register names/numbers
``freaks me out''.
(Now about that REGISTER_VIRTUAL_NAME macro :-)
Andrew
From alexs@cygnus.co.uk Fri Dec 03 03:41:00 1999
From: "Alex Schuilenburg" <alexs@cygnus.co.uk>
To: "Stan Shebs" <shebs@cygnus.com>
Cc: <gdb@sourceware.cygnus.com>, "Hugo Tyson" <hmt@cygnus.co.uk>
Subject: RE: Multi-threaded debugging within GDB & eCOS
Date: Fri, 03 Dec 1999 03:41:00 -0000
Message-id: <NCBBKJPHEKGAJNNFGNBLCEMJDHAA.alexs@cygnus.co.uk>
References: <199912030305.TAA24626@andros.cygnus.com>
X-SW-Source: 1999-q4/msg00435.html
Content-length: 4176
> GDB will "lock up". More precisely, GDB will sit quietly waiting for
> the target to do something, not realizing that the user has caused a
> deadlock by manually suspending thread A. This is as expected, since
> GDB has no way to know that the program is in this state. In
> practice, the user gets impatient, hits ^C, sees the current state,
> and goes "oh yeah, my fault" and fixes.
I figured it would.
> GDB's default behavior is to give all threads a chance to run when
> resuming, which generally prevents this kind of situation. We
> introduced the scheduler-locking flag for those cases where the user
> really really needs to lock out other threads, even if it might mean
> the thread being stepped will block.
Hugo and I had a discussion a while ago about the usefulness of this flag
and came to the conclusion that it was not useful, or rather that its
semantics should be changed. Locking the scheduler is dangerous for this
very reason. You may have interrupts occurring which need to be serviced
on the target hardware, and this method will lock out the service thread,
and in certain cases even crash the hardware. So you really need a less
intrusive method of debugging, particularly for deeply embedded real-time
debugging.
The solution we came up with for eCos was something similar to what I had
done before. See below.
> (I'd be interested to know if you have workarounds for this
> fundamental problem though.)
Sort of. I wrote a debugger for Helios which was targeted at
multi-threaded non-intrusive debugging. For this I coined the phrase
"loosely coupled debugging". Essentially the target hardware (e.g. threads
not being debugged) had to keep on running, even when a bp was hit by a
thread being debugged, else the h/w would die. I had a debug agent running
on the h/w which would talk back to the debugger and let it know when a bp
was hit, when an exception occurred etc, but more importantly it would run
in conjunction with the application which was being debugged. That is, the
debugger on the host could query the agent on the target at any time to
query the status of the h/w and the status of actively running threads. You
could do a backtrace which was a "sample" where a thread on the remote app
was at when the bt command was executed, and set breakpoints within the app
while the app was still running.
More importantly I introduced the freezing and thawing of threads. This
required some support from the RTOS. If you froze a thread, the RTOS would
no longer schedule it for execution. Similarly threads were thawed to allow
them to resume. So the user would freeze only the threads which they did
not want to interact with the thread they were debugging. Naturally you
could inspect the stack etc of a frozen thread as the full register context
of the frozen thread was preserved and available to the agent, and hence the
debugger. There was no concept of ^C as you always had a command line. If
you wanted to stop the target entirely you would simply freeze all threads.
Naturally the debug agent would be exempt.
The cool thing was that you could leave the debug agent in your final app.
So you could walk up to the running h/w in the field, hook a portable to the
serial port, and attach the debugger. Hence you could query the target to
find out what it was doing, what state threads were in, and if you had the
matching source and symbol tables available, even start debugging it. Like
attaching gdb to an active process on UNIX (which is where I snarfed the
idea from).
Things were a bit more complex (aren't they always) since you could debug
the agent itself and still had to provide the ability to debug the RTOS
which was controlling everything, including the agent. So threads could
only be frozen if they were not consuming a system resource. That is,
something which the debug agent or RTOS would need to fulfill its
obligations. Of course you could override this to debug the kernel and
system calls, but in these instances the agent could fall back into a
gdb-stub mode, and the user suffered the consequences of freezing the
hardware. At least they were warned...
Cheers
-- Alex