From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stan Shebs
To: sbjohnson@ozemail.com.au
Cc: gdb@sourceware.cygnus.com
Subject: Re: Standard GDB Remote Protocol
Date: Wed, 01 Dec 1999 15:53:00 -0000
Message-id: <199912012353.PAA29743@andros.cygnus.com>
References: <3845AB0E.3795D99E@ozemail.com.au>
X-SW-Source: 1999-q4/msg00410.html

    Date: Thu, 02 Dec 1999 09:11:10 +1000
    From: Steven Johnson

    Can anyone point me at the Protocol specification for this? Or doesn't it exist? I've looked everywhere I can think of and can find nothing documented about it, except references to the fact that GDB has this standard Protocol.

As other people have pointed out, the complete spec is now part of the main GDB manual. The 4.18 manual and earlier had a partial and out-of-date description in section 13.4.14 "Communication protocol". We've updated and expanded this section, and the description in the manual is now the official specification for the protocol. So feel free to rely on the manual, and if you find the description is lacking or erroneous, please help us to correct it (or GDB); your effort will be much appreciated by everybody.

Stan

>From davidwilliams@ozemail.com.au Wed Dec 01 16:23:00 1999
From: David Williams
To: "'gdb mail list'"
Subject: remote debug of 68EZ328
Date: Wed, 01 Dec 1999 16:23:00 -0000
Message-id: <01BF3CB7.62CF55A0.davidwilliams@ozemail.com.au>
X-SW-Source: 1999-q4/msg00411.html
Content-length: 1311

Hi All,

I am still trying to ramp up on required knowledge to tackle this...

1. As far as I can tell, the remote protocol specifies communications between a stub on the target system and GDB on the host system. I assume that there is some 68K specific code as part of GDB that communicates with the stub on the target system - I am a little cloudy on this and would appreciate some clarification.

2.
Looking through the protocol & source for 68K-stub leads me to believe that the 68K stub (and possibly all stubs) does not support hardware breakpoints - they work by assuming the code is running in RAM, so that op-codes can be replaced with trap instructions. Is this correct? If so then I cannot use the remote protocol and a stub to support hardware breakpoints. What other method is best? (I have received some responses to previous queries on this subject but I still don't get it!)

3. Some people have mentioned Insight to me. The suggestion is that Insight may have slightly different (and possibly later) sources for the GDB component. Is this true? I am interested in using a GUI with GDB and this sounds good. My specific problems are that I am currently using win95 as my development platform (yes I know) and don't know how this will affect development of changes required to GDB (Insight).

TIA
David Williams

>From shebs@cygnus.com Wed Dec 01 17:11:00 1999
From: Stan Shebs
To: davidwilliams@ozemail.com.au
Cc: gdb@sourceware.cygnus.com
Subject: Re: remote debug of 68EZ328
Date: Wed, 01 Dec 1999 17:11:00 -0000
Message-id: <199912020111.RAA01159@andros.cygnus.com>
References: <01BF3CB7.62CF55A0.davidwilliams@ozemail.com.au>
X-SW-Source: 1999-q4/msg00412.html
Content-length: 3890

    From: David Williams
    Date: Thu, 2 Dec 1999 11:21:31 +1100

    I am still trying to ramp up on required knowledge to tackle this...

    1. As far as I can tell, the remote protocol specifies communications between a stub on the target system and GDB on the host system. I assume that there is some 68K specific code as part of GDB that communicates with the stub on the target system - I am a little cloudy on this and would appreciate some clarification.

Yes, GDB has m68k-specific code that describes the architecture in a somewhat abstract way, and only in as much detail as the debugger needs. So for instance, there are bits saying that d0 is a 32-bit register, a6 is the frame pointer, fp0 contains floats, etc.
There are also procedures to do things like decoding frames. All of this code is common to all m68k systems, whether they're Apollos, Sun-3s, Palms, whatever.

The remote protocol and its implementation are very generic. The protocol is just a set of commands, like "$g#67", which says to deliver all the registers. The size of the response is very different from one system to another, but GDB just hopes for the best :-) and dumps the blob of data into the array of registers. Once GDB has acquired some register and memory data via the generic protocol, it puts its m68k-specific code to work analyzing it, eventually resulting in a reconstruction of your program's state.

So the purpose of your stub is simply to report program state and to obey the commands (packets) sent to it by GDB. Since GDB has no other means to contact the target system except the stub, it will pretty much believe what your stub tells it. This suggests interesting tricks that you can do - for instance, you could notice that a memory write seems to be the depositing of a breakpoint instruction, and do something else, like set a hardware breakpoint. As long as you're telling GDB believable things, it will go along.

    2. Looking through the protocol & source for 68K-stub leads me to believe that the 68K stub (and possibly all stubs) does not support hardware breakpoints - they work by assuming the code is running in RAM, so that op-codes can be replaced with trap instructions. Is this correct? If so then I cannot use the remote protocol and a stub to support hardware breakpoints. What other method is best? (I have received some responses to previous queries on this subject but I still don't get it!)

We've recently defined a 'Z' packet that is for the purpose of setting hardware breakpoints. The generic m68k stub in the sources doesn't use it, because any such code would be very specific to particular systems.
As you may have noticed, the stub file is public domain, not even GPLed, because we want everybody to modify stubs so they'll fit properly into the target system. As long as you conform to the established protocol as defined in the manual, you can do whatever you want.

    3. Some people have mentioned Insight to me. The suggestion is that Insight may have slightly different (and possibly later) sources for the GDB component. Is this true? I am interested in using a GUI with GDB and this sounds good. My specific problems are that I am currently using win95 as my development platform (yes I know) and don't know how this will affect development of changes required to GDB (Insight).

Since you're already used to crashes, you won't be any worse off than you are now... :-)

Insight is just a Tcl GUI extension to basic GDB. The snapshot sources on sourceware are synced with the basic GDB sources available at the same place, so you can use either as you prefer. Building everything from scratch using cygwin on W95 usually takes so long that something crashes before it finishes, but if you reboot and continue building, you can eventually get to a working GDB.

Stan

>From sbjohnson@ozemail.com.au Wed Dec 01 20:22:00 1999
From: Steven Johnson
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: Re: Standard GDB Remote Protocol
Date: Wed, 01 Dec 1999 20:22:00 -0000
Message-id: <3845F45A.38EA29CF@ozemail.com.au>
References: <199911090706.CAA13120@zwingli.cygnus.com> <199911102246.RAA01846@mescaline.gnu.org> <199911231303.IAA01523@mescaline.gnu.org> <199911251715.MAA09225@mescaline.gnu.org> <199912010821.DAA27130@mescaline.gnu.org> <3845AB0E.3795D99E@ozemail.com.au> <5md7sql00o.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q4/msg00413.html
Content-length: 11909

"J.T. Conklin" wrote:
> 
> Since you're putting up your hand, would you be willing to review the
> protocol spec and point out areas that are ambiguous, confusing, need
> revising, etc?
Following is a Hopefully Constructive Critique of the GDB Remote Protocol. It is based on my First Read of the current online version of the protocol specification at: http://sourceware.cygnus.com/gdb/onlinedocs/gdb_14.html

In my Critique, I am not posing real questions when I discuss subjects. What I am doing is highlighting areas where I have questions in my own mind, where I find the description of the protocol lacking. The answers will probably be present in the current implemented code and stubs, and I have not yet looked for those answers. Nor do I wish to, until my initial analysis of the Protocol is complete. I do not wish to taint my understanding of the written words of the protocol with Black Knowledge gleaned from the source.

Further, my critique is not a criticism of the hard work that people have already done to get the documentation/GDB/protocol to this state. It is obvious from my first read that the protocol has undergone extensive evolution, and I have taken this into consideration. Any comments I make on ways to fix things are simply my attempt at understanding the problem. They do not represent a request or proposal to change anything in the protocol; they are presented as part of the thought process I underwent when analysing the protocol. They also indicate areas where I have concerns with my understanding of the protocol as documented.

Packet Structure:

Simple structure, obviously originally designed to be able to be driven manually from a TTY. (Hence its ASCII nature.) However, the protocol has evolved quite significantly and I doubt it could still be used very efficiently from a TTY. That said, it still demarcates frames effectively.

Sequence Numbers:

The definition of sequence-ids needs work. Are they necessary? Are they deprecated? What purpose do they currently serve within GDB? One would imagine that they are used to allow GDB to handle retransmits from a remote system.
Reading between the lines, this is done to allow error recovery when a transmission from target to host fails. A possible sequence being:

<- $packet-data#checksum
-> +
-> $sequence-id:packet-data#checksum   (checksum fails, or receive timeout halfway through packet)
<- -sequence-id
-> $sequence-id:packet-data#checksum
<- +sequence-id

When do the sequence-ids increment? Presumably on the successful receipt of the +sequence-id acknowledgement. If they increment on the successful acknowledgement, what happens if the acknowledgement is in error? For example, a framing error on the '+'. The target would never see the successful acknowledgement and would not increment its sequence number. So what if it doesn't? The +/- Ack/Nak mechanism should be amply sufficient to allow retransmits of missed responses.

I can see little practical benefit in a sequence-id in the responses, as it is currently documented. This is supported by the comment within the document: "Beyond that its meaning is poorly defined. GDB is not known to output sequence-ids". This tends to indicate that the mechanism has fallen out of use, probably because it doesn't actually achieve anything. If this is the case, it could be deprecated. However, I would advocate not deprecating it from the protocol, because if sequence-ids were sent by GDB, a hole that I believe currently exists in the protocol could be plugged. (I will discuss this hole later in this critique.)

Ack/Nak Mechanism:

A simple Ack/Nak mechanism, using + and - respectively. It also reflects the simple ASCII basis of the protocol. My main concern with this system is that there is no documentation of timing. Usually an Ack/Nak must be received within a certain time frame, otherwise a Nak is assumed and a retransmit proceeds. This is necessary, because it is possible for the Ack/Nak character to be lost (however unlikely) on the line due to a data error.
I think there should be a general timing basis to the entire protocol to tie up some potential communications/implementation problems. The two primary timing constraints I see that are missing are: inter-character times during a message transmission, and Ack/Nak response times. If a message is only half received, the receiver has no ability without a timeout mechanism of generating a NAK signalling failed receipt. If this occurs, and there is no timeout on ACK/NAK reception, the entire comms stream could Hang. Transmitter is Hung waiting for an ACK/NAK and the Receiver is Hung waiting for the rest of the message.

I would propose that something needs to be defined along the lines of:

Once the $ character for the start of a packet is transmitted, each subsequent byte must be received within "n" byte transmission times. (This would allow for varying comms line speeds.) Or alternately, a global timeout on the whole message could be defined: once "$" (start sentinel) is sent, the complete message must be received within "X" time. I personally favour the inter-character time as opposed to the complete message time, as it will work with any size message; however, the complete message time restricts the maximum size of any one message (to how many bytes can be sent at the maximum rate for the period). These timeouts do not need to be very tight, as they are merely for complete failure recovery and a little delay there does not hurt much.

One possible timeout that would be easy to work with could be: Timeout occurs 1 second after the last received byte.

For ACK/NAK I propose that something needs to be defined along the lines of: ACK/NAK must be received within X seconds from transmission of the end of the message, otherwise a NAK must be assumed.

There is no documentation of the recovery procedure. Does GDB retransmit if its message is responded to with a NAK? If not, what does it do? How is the target supposed to identify and handle retransmits from GDB?
What happens if something other than + or - is received when ACK/NAK is expected? (For example, $.)

Identified Protocol Hole:

Let's look at the following abstract scenario (text in brackets are supporting comments):

<- $packet-data#checksum               (Run Target Command)
-> +                                   (Response is lost due to a line error)
(Target runs for a very short period of time and then breaks.)
-> $sequence-id:packet-data#checksum   (Break Response - GDB takes it as a NAK: expecting a +, got a $)
<- $packet-data#checksum               (GDB retransmits its Run Target Command; target restarts)
-> +                                   (Response received OK by GDB)
(Target again starts running.)

In this scenario, it is shown that with the currently documented mechanisms, it is possible for transmission errors to occur that interfere with debugging. There was no mechanism for the target to identify that GDB was re-transmitting, and it subsequently executed the same operation twice, when GDB really only wanted to execute the command once. It's this sort of scenario that I imagine the sequence-ids were created for. If GDB sent sequence-ids, then the scenario would be much different:

<- $sequence-id:packet-data#checksum   (Run Target Command)
-> +                                   (Response is lost due to a line error)
(Target runs for a very short period of time and then breaks.)
-> $sequence-id:packet-data#checksum   (Break Response - GDB takes it as a NAK: expecting a +, got a $)
<- $sequence-id:packet-data#checksum   (GDB retransmits its Run Target Command, with the same sequence-id as in the original command)
(Target identifies the sequence-id as a retransmit. Instead of performing the operation again, it simply re-responds with the results obtained from the last command.)
-> +                                   (Response received OK by GDB)
-> $sequence-id:packet-data#checksum   (Break Response - GDB processes it as expected)
(GDB then increments its sequence-id in preparation for the next command.)

As an extra integrity check, the response sequence-id should be identical to the request sequence-id.
This would allow GDB to verify that the response it is processing is properly paired with its request. Further, the target shouldn't require either ACK or NAK. It should process them properly if received, but otherwise process the received packet, even if an ACK/NAK was expected.

If this is the intent of sequence-id and it has fallen into disuse, then to allow its re-introduction at a later date, it could be documented that if GDB sends a sequence-id, then the retransmit processing I've documented here operates, otherwise the currently defined behaviour operates; and that a sequence-id is only sent by the target in responses where one is present in the original GDB message. This would allow GDB to probe whether the target supports secure and recoverable message delivery or not.

Run Length Encoding:

Is run length encoding supported in all packets, or just some packets? (For example, not binary packets.) Why not allow lengths greater than 126? Or does this mean lengths greater than 97 (as in 126-29)? If binary packets with 8 bit data can be sent, why not allow RLE to use lengths greater than 97 as well? If the length maximum is really 126, then this yields the character 0x9B, which is 8 bits; wouldn't the maximum length in this case be 226? Or is this a misprint?

Why are there two methods of RLE? Is it important for a remote target to understand and process both, or is the "cisco encoding" a proprietary extension of the GDB Remote protocol, and not part of the standard implementation? The documentation of "cisco encoding" is confusing and seems to conflict with standard RLE encoding. They appear to be mutually exclusive. If they are both part of the protocol, how are they distinguished when used?

Deprecated Messages:

Should an implementation of the protocol implement the deprecated messages or not? What is the significance of the deprecated messages to the current implementation?

Character Escaping:

The mechanism of Escaping the characters is not defined.
Further, it is only defined as used by write mem binary. Wouldn't it be useful for future expansion of the protocol to define character escaping as a global feature of the protocol, so that if any control characters were required to be sent, they could be escaped in a consistent manner across all messages? Also, wouldn't the full list of escape characters be $, #, +, -, *, 0x7d? Otherwise, + and - might be processed inadvertently as ACK or NAK. If this can't happen, then why must they be avoided in RLE? If they are escaped across all messages, then that means they could be used in RLE and not treated specially.

8/7 Bit Protocol:

With the documentation of raw binary transfers, the protocol moves from being a strictly 7 bit affair into being an 8 bit capable protocol. If this is so, then shouldn't all the restrictions left over from the 7 bit protocol days be lifted to take advantage of the capabilities of an 8 bit message stream? (The RLE limitations, for example.) Would anyone seriously be using a computer that had a 7 bit limitation anymore anyway? (At least a computer that would run GDB with remote debugging.)

Thoughts on consistency and future growth:

- Apply RLE as a feature of all messages. (Including binary messages, as these can probably benefit significantly from it.)
- Apply the binary escaping mechanism as a feature of the protocol, performed on all messages prior to transmission and immediately after reception.
- Define an exhaustive set of "characters to be escaped".
- Introduce message timing constraints.
- Properly define sequence-id and allow it to be used from GDB to make communications secure and reliable.

Steven Johnson
Managing Director
Neurizon Pty Ltd

>From jtc@redback.com Thu Dec 02 00:50:00 1999
From: jtc@redback.com (J.T.
Conklin)
To: gdb@sourceware.cygnus.com
Subject: using '-x -' to read gdb script from stdin
Date: Thu, 02 Dec 1999 00:50:00 -0000
Message-id: <5miu2hhgkx.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q4/msg00414.html
Content-length: 991

I updated one of our year-old GDB executables a week or so ago, and was notified that one of the scripts used by SQA failed to work. I tracked it down to the following bit of code that was ifdef'd out earlier this year. From main.c:

  /* NOTE: I am commenting this out, because it is not clear where
     this feature is used. It is very old and undocumented.
     ezannoni: 1999-05-04 */
#if 0
      if (cmdarg[i][0] == '-' && cmdarg[i][1] == '\0')
        read_command_file (stdin);
      else
#endif

The script invoked gdb like this:

  echo list "*$addr" | $gdb -batch -x - $file | head -1

[ I know, I should be using addr2line. But this script was written before addr2line existed. ]

Since all of our systems support /dev/stdin, I patched up our script accordingly. But I wonder whether support for - should be reenabled. Is there any reason why not?

    --jtc

-- 
J.T. Conklin
RedBack Networks

>From eliz@gnu.org Thu Dec 02 06:14:00 1999
From: Eli Zaretskii
To: gdb@sourceware.cygnus.com
Cc: Andrew Cagney , DJ Delorie
Subject: Re: -Wmissing-prototypes ...
Date: Thu, 02 Dec 1999 06:14:00 -0000
Message-id: <199912021414.JAA16068@mescaline.gnu.org>
References: <37E5E508.D56E054C@cygnus.com> <37CB6DBE.2083662F@cygnus.com>
X-SW-Source: 1999-q4/msg00415.html
Content-length: 8681

> My current list is:
> 
> --enable-build-warnings=-Werror\
> ,-Wimplicit\
> ,-Wreturn-type\
> ,-Wcomment\
> ,-Wtrigraphs\
> ,-Wformat\
> ,-Wparentheses\
> ,-Wpointer-arith\
> ,-Wmissing-prototypes\
> ,-Woverloaded-virtual\

Here are the patches for go32-nat.c to allow it to compile with all kinds of -Wfoo switches (I added switches beyond those mentioned above).
While working on this, I found out that defs.h redeclares several library functions, like getenv, fclose and atof, because symbols like GETENV_PROVIDED etc. aren't defined anywhere; this causes GCC to complain (under the full list of warning options). What header should define those for a particular host?

--- gdb/go32-nat.~17	Wed Dec  1 20:02:36 1999
+++ gdb/go32-nat.c	Wed Dec  1 20:57:06 1999
@@ -29,6 +29,7 @@
 #include "gdbcore.h"
 #include "command.h"
 #include "floatformat.h"
+#include "language.h"
 #include /* required for __DJGPP_MINOR__ */
 #include
@@ -164,42 +165,47 @@
 #define SOME_PID 42
 static int prog_has_started = 0;
-static void print_387_status (unsigned short status, struct env387 *ep);
-static void go32_open (char *name, int from_tty);
-static void go32_close (int quitting);
-static void go32_attach (char *args, int from_tty);
-static void go32_detach (char *args, int from_tty);
-static void go32_resume (int pid, int step, enum target_signal siggnal);
-static int go32_wait (int pid, struct target_waitstatus *status);
-static void go32_fetch_registers (int regno);
-static void store_register (int regno);
-static void go32_store_registers (int regno);
+static void print_387_status (unsigned, struct env387 *);
+static void go32_open (char *, int);
+static void go32_close (int);
+static void go32_attach (char *, int);
+static void go32_detach (char *, int);
+static void go32_resume (int, int, enum target_signal);
+static int go32_wait (int, struct target_waitstatus *);
+static void go32_fetch_registers (int);
+static void store_register (int);
+static void go32_store_registers (int);
 static void go32_prepare_to_store (void);
-static int go32_xfer_memory (CORE_ADDR memaddr, char *myaddr, int len,
-			     int write, struct target_ops *target);
-static void go32_files_info (struct target_ops *target);
+static int go32_xfer_memory (CORE_ADDR, char *, int,
+			     int, struct target_ops *);
+static void go32_files_info (struct target_ops *);
 static void go32_stop (void);
 static void go32_kill_inferior (void);
-static void go32_create_inferior (char *exec_file, char *args, char **env);
+static void go32_create_inferior (char *, char *, char **);
 static void cleanup_dregs (void);
 static void go32_mourn_inferior (void);
 static int go32_can_run (void);
 static void ignore (void);
-static void ignore2 (char *a, int b);
-static int go32_insert_aligned_watchpoint (CORE_ADDR waddr, CORE_ADDR addr,
-					   int len, int rw);
-static int go32_remove_aligned_watchpoint (CORE_ADDR waddr, CORE_ADDR addr,
-					   int len, int rw);
-static int go32_handle_nonaligned_watchpoint (wp_op what, CORE_ADDR waddr,
-					      CORE_ADDR addr, int len, int rw);
+static int go32_insert_aligned_watchpoint (CORE_ADDR, CORE_ADDR, int, int);
+static int go32_remove_aligned_watchpoint (CORE_ADDR, CORE_ADDR, int, int);
+static int go32_handle_nonaligned_watchpoint (wp_op, CORE_ADDR, CORE_ADDR,
+					      int, int);
 static struct target_ops go32_ops;
 static void go32_terminal_init (void);
 static void go32_terminal_inferior (void);
 static void go32_terminal_ours (void);
+int go32_insert_watchpoint (int, CORE_ADDR, int, int);
+int go32_remove_watchpoint (int, CORE_ADDR, int, int);
+int go32_region_ok_for_watchpoint (CORE_ADDR, int);
+CORE_ADDR go32_stopped_by_watchpoint (int, int);
+int go32_insert_hw_breakpoint (CORE_ADDR, CORE_ADDR);
+int go32_remove_hw_breakpoint (CORE_ADDR, CORE_ADDR);
+
+
 static void
-print_387_status (unsigned short status, struct env387 *ep)
+print_387_status (unsigned status, struct env387 *ep)
 {
   int i;
   int bothstatus;
@@ -221,7 +227,7 @@
       print_387_status_word (ep->status);
     }
-  print_387_control_word (ep->control & 0xffff);
+  print_387_control_word ((unsigned)ep->control & 0xffff);
   /* Other platforms say "last exception", but that's not true: the
      FPU stores the last non-control instruction there.  */
   printf_unfiltered ("last FP instruction: ");
@@ -229,7 +235,8 @@
      are not stored by the FPU (since these bits are the same for all
      floating-point instructions).  */
   printf_unfiltered ("opcode %s; ",
-		     local_hex_string (ep->opcode ? (ep->opcode|0xd800) : 0));
+		     local_hex_string (ep->opcode
+				       ? (unsigned)(ep->opcode|0xd800) : 0));
   printf_unfiltered ("pc %s:", local_hex_string (ep->code_seg));
   printf_unfiltered ("%s; ", local_hex_string (ep->eip));
   printf_unfiltered ("operand %s", local_hex_string (ep->operand_seg));
@@ -244,7 +251,7 @@
	 order, beginning with ST(0).  Since we need to print them in their
	 physical order, we have to remap them.  */
      int regno = fpreg - top;
-      long double val;
+      long double ldval;
      if (regno < 0)
	regno += 8;
@@ -272,9 +279,9 @@
	printf_unfiltered ("%02x", ep->regs[regno][i]);
      REGISTER_CONVERT_TO_VIRTUAL (FP0_REGNUM+regno, builtin_type_long_double,
-				   &ep->regs[regno], &val);
+				   &ep->regs[regno], &ldval);
-      printf_unfiltered (" %.19LG\n", val);
+      printf_unfiltered (" %.19LG\n", ldval);
    }
  }
@@ -381,7 +388,7 @@
  TARGET_SIGNAL_QUIT, 0x7a,
  TARGET_SIGNAL_ALRM, 0x78,	/* triggers SIGTIMR */
  TARGET_SIGNAL_PROF, 0x78,
-  -1, -1
+  (enum target_signal)-1, -1
};
 static void
@@ -420,7 +427,8 @@
  if (siggnal != TARGET_SIGNAL_0 && siggnal != TARGET_SIGNAL_TRAP)
    {
-      for (i = 0, resume_signal = -1; excepn_map[i].gdb_sig != -1; i++)
+      for (i = 0, resume_signal = -1;
+	   excepn_map[i].gdb_sig != (enum target_signal)-1; i++)
	if (excepn_map[i].gdb_sig == siggnal)
	  {
	    resume_signal = excepn_map[i].djgpp_excepno;
@@ -439,7 +447,7 @@
 {
  int i;
  unsigned char saved_opcode;
-  unsigned long INT3_addr;
+  unsigned long INT3_addr = 0L;
  int stepping_over_INT = 0;
  a_tss.tss_eflags &= 0xfeff;	/* reset the single-step flag (TF) */
@@ -594,14 +602,14 @@
 static void
 go32_store_registers (int regno)
 {
-  int r;
+  unsigned r;
  if (regno >= 0)
    store_register (regno);
  else
    {
      for (r = 0; r < sizeof (regno_mapping) / sizeof (regno_mapping[0]); r++)
-	store_register (r);
+	store_register ((int)r);
    }
 }
@@ -611,12 +619,12 @@
 }
 static int
-go32_xfer_memory (CORE_ADDR memaddr, char *myaddr, int len, int write,
+go32_xfer_memory (CORE_ADDR memaddr, char *myaddr, int len, int to_write,
		  struct target_ops *target)
 {
-  if (write)
+  if (to_write)
    {
-      if (write_child (memaddr, myaddr, len))
+      if (write_child (memaddr, myaddr, (unsigned)len))
	{
	  return 0;
	}
@@ -627,7 +635,7 @@
    }
  else
    {
-      if (read_child (memaddr, myaddr, len))
+      if (read_child (memaddr, myaddr, (unsigned)len))
	{
	  return 0;
	}
@@ -820,12 +828,13 @@
 #define SHOW_DR(text,len) \
 do { \
  if (!getenv ("GDB_SHOW_DR")) break; \
-  fprintf(stderr,"%08x %08x ",edi.dr[7],edi.dr[6]); \
-  fprintf(stderr,"%08x %d %08x %d ", \
+  fprintf(stderr,"%08lx %08lx ",edi.dr[7],edi.dr[6]); \
+  fprintf(stderr,"%08lx %d %08lx %d ", \
	  edi.dr[0],dr_ref_count[0],edi.dr[1],dr_ref_count[1]); \
-  fprintf(stderr,"%08x %d %08x %d ", \
+  fprintf(stderr,"%08lx %d %08lx %d ", \
	  edi.dr[2],dr_ref_count[2],edi.dr[3],dr_ref_count[3]); \
-  fprintf(stderr,(len)?"(%s:%d)\n":"(%s)\n",#text,len); \
+  if (len) fprintf(stderr,"(%s:%d)\n",#text,len); \
+  else fprintf(stderr,"(%s)\n",#text); \
 } while (0)
 #else
 #define SHOW_DR(text,len) do {} while (0)
@@ -861,7 +870,7 @@
			       int len, int rw)
 {
  int i;
-  int read_write_bits, len_bits;
+  unsigned read_write_bits, len_bits;
  /* Values of rw: 0 - write, 1 - read, 2 - access (read and write).
     However, x86 doesn't support read-only data breakpoints.  */
@@ -992,7 +1001,7 @@
			       int len, int rw)
 {
  int i;
-  int read_write_bits, len_bits;
+  unsigned read_write_bits, len_bits;
  /* Values of rw: 0 - write, 1 - read, 2 - access (read and write).
     However, x86 doesn't support read-only data breakpoints.  */
@@ -1105,9 +1114,6 @@
 go32_insert_hw_breakpoint (CORE_ADDR addr, CORE_ADDR shadow)
 {
  int i;
-  int read_write_bits, len_bits;
-  int free_debug_register;
-  int register_number;
  /* Look for an occupied debug register with the same address and the
     same RW and LEN definitions.
     If we find one, we can use it for

>From gatliff@haulpak.com Thu Dec 02 06:43:00 1999
From: William Gatliff
To: gdb@sourceware.cygnus.com
Subject: Re: Standard GDB Remote Protocol
Date: Thu, 02 Dec 1999 06:43:00 -0000
Message-id: <384685A7.15184EB1@haulpak.com>
References: <199911090706.CAA13120@zwingli.cygnus.com> <199911102246.RAA01846@mescaline.gnu.org> <199911231303.IAA01523@mescaline.gnu.org> <199911251715.MAA09225@mescaline.gnu.org> <199912010821.DAA27130@mescaline.gnu.org> <3845AB0E.3795D99E@ozemail.com.au> <5md7sql00o.fsf@jtc.redbacknetworks.com> <3845F45A.38EA29CF@ozemail.com.au>
X-SW-Source: 1999-q4/msg00416.html
Content-length: 5911

Steven Johnson wrote:
> Packet Structure:
> 
> Simple structure, obviously originally designed to be able to be driven
> manually from a TTY. (Hence its ASCII nature.) However, the protocol has
> evolved quite significantly and I doubt it could still be used very
> efficiently from a TTY.

True, but it can still be *monitored* quite effectively with a TTY, and simple things like a ? query are still possible. If I'm using a TTY then I'm desperate anyway, so I'm willing to put up with a little pain. Go to a non-ASCII protocol, however, and the TTY option is right out altogether, no matter how desperate I am!

If efficiency/throughput is a problem, then go to ethernet. At 10/100Mbps, even the overhead of ASCII isn't a problem for most targets I can think of.

> I think there should be a general timing basis to the entire protocol to
> tie up some potential communications/implementation problems.

The RSP's lack of timing requirements is an asset, as far as I'm concerned. See below.

> If a message is only half received, the receiver has no ability without a
> timeout mechanism of generating a NAK signalling failed receipt. If this
> occurs, and there is no timeout on ACK/NAK reception, the entire comms
> stream could Hang. Transmitter is Hung waiting for an ACK/NAK and the
> Receiver is Hung waiting for the rest of the message.
This is something that a stub can handle itself, as a self-protection measure, without changing the RSP. A debugging stub running on production hardware would probably need to do this anyway, while a lab/development system could tolerate a hang (concerns with rotating machinery, etc. notwithstanding). So, I don't see any reason to create requirements, because they're likely to be so target-specific that you'll never get good agreement on what they should be, and therefore there will not be any uniform implementations.

In my opinion, a debugging stub is the responsible party for the safety of a debugging target, because it alone can decide what to do if it thinks that gdb has "gone away" unexpectedly (line noise, PC/protocol hang, etc.). When this is done, nobody cares if gdb hangs, because it doesn't necessarily cause problems for the target.

From that perspective, it is clear to me that a debugging stub will have to do whatever it needs to do to protect itself and the target, regardless of what the RSP says. So the mission to beef up the RSP in the way you suggest seems counterproductive. In the best case, you'll drive the need for gdb enhancements that won't benefit most people (i.e. timing requirements that are so loose that targets cannot depend on them); in the worst case, you'll create gdb behaviors that are incompatible with certain types of targets (i.e. timing requirements that are so tight that targets and hosts can't implement them).

> I would propose that something needs to be defined along the lines of:
> 
> Once the $ character for the start of a packet is transmitted, each
> subsequent byte must be received within "n" byte transmission times.
> (This would allow for varying comms line speeds.) Or alternately, a global
> timeout on the whole message could be defined: once "$" (start sentinel)
> is sent, the complete message must be received within "X" time.
> I personally favour the inter-character time as opposed to the complete
> message time, as it will work with any size message; however, the
> complete message time restricts the maximum size of any one message (to
> how many bytes can be sent at the maximum rate for the period). These
> timeouts do not need to be very tight, as they are merely for complete
> failure recovery and a little delay there does not hurt much.
> 
> One possible timeout that would be easy to work with could be: Timeout
> occurs 1 second after the last received byte.
> 
> For ACK/NAK I propose that something needs to be defined along the lines
> of: ACK/NAK must be received within X seconds from transmission of the
> end of the message, otherwise a NAK must be assumed.

Good suggestions, but I would prefer that these be general stub design guidelines that aren't enforced by gdb. Let gdb be as flexible as possible, so that it will work with super-smart stubs that do all the timing stuff properly, as well as stubs that are minimally written. Gdb is supposed to be a debugging aid; I would prefer that all the protocol stuff not get in the way of its fundamental mission.

Also, how do you measure byte times on most debugging hosts, particularly at 115K (my bit rate of choice) and higher? Such a specification sounds easy, but an implementation isn't likely to be portable.

And finally, consider the case where an M command is really writing to flash, and the debugging target gets busy erasing flash sectors (which can take longer than a second in some cases). If gdb retries, things may get confusing.

> There is no documentation of the recovery procedure. Does GDB retransmit
> if its message is responded to with a NAK? If not, what does it do? How
> is the target supposed to identify and handle retransmits from GDB?
> What happens if something other than + or - is received when ACK/NAK is
> expected? (For example, $.)

From my own experience, remote.c is kinda fragile where stuff like this is concerned.
I had been intending to look into this myself next year, but by then someone else will have certainly beaten me to it. I think some improvements have already been made.

> Character Escaping: The mechanism of Escaping the characters is not
> defined. Further it is only defined as used by write mem binary.

That's because this is the only place where it is needed, AFAIK. And, since X is optional (and support for it is detected automatically by gdb), that means that I don't have to implement it if I don't want to. Bonus for super-minimal stubs.

b.g.

-- 
William A. Gatliff
Senior Design Engineer
Komatsu Mining Systems

To teach is to learn.