From mboxrd@z Thu Jan 1 00:00:00 1970
From: Daniel Jacobowitz
To: Quality Quorum
Cc: gdb@sources.redhat.com
Subject: Re: more on gdb server
Date: Wed, 18 Jul 2001 12:49:00 -0000
Message-id: <20010718124918.A4250@nevyn.them.org>
References: <5mitgq6ug4.fsf@orac.redback.com>
X-SW-Source: 2001-07/msg00252.html

On Wed, Jul 18, 2001 at 03:40:57PM -0400, Quality Quorum wrote:
> On 18 Jul 2001, J.T. Conklin wrote:
>
> > > > I know HP were once playing with ideas that would have eliminated any
> > > > copying, because they were finding memory read/write performance using
> > > > ptrace (or whatever) lacking.
> > >
> > > I would suppose they had something truly unusual - debugging goes at
> > > the pace of human reaction to debugging events, and I can hardly imagine
> > > that network performance over the local loopback interface would be a
> > > factor here.
> >
> > Remember that GDB may be issuing many low-level commands for each
> > high-level (CLI) command.  For example, a single step or next command
> > may issue several step-instruction, fetch-registers, and
> > store-registers commands.  On some large programs, some interactive
> > commands are already beyond the interactive threshold (something like
> > .3 seconds?  I can't remember the commonly quoted figure); this
> > additional overhead would only make it worse.
> >
> > Also note that oftentimes it's not a human driving the debugging
> > session, but user-defined functions that grovel through data
> > structures, call inferior functions, etc.
>
> I still have a hard time believing that there is an issue here.

Consider software watchpoints, which are already almost uselessly slow.
Consider single-stepping over a single line of code consisting of forty
or four hundred machine instructions.  There can be significant overhead.

--
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer
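[A back-of-the-envelope sketch of the overhead being discussed: if each
machine-instruction step costs a few remote-protocol round trips, stepping
over one source line multiplies that cost by the instruction count.  The
packet count and latency below are illustrative assumptions, not
measurements of any particular gdbserver.]

```python
# Rough estimate of remote-protocol cost for one source-level "next".
# Assumptions (hypothetical, for illustration only):
#   - each machine single-step costs PACKETS_PER_STEP round trips
#     (e.g. a step request, a stop reply, a register fetch)
#   - each round trip takes ROUND_TRIP_SECONDS
PACKETS_PER_STEP = 3
ROUND_TRIP_SECONDS = 0.001  # assumed 1 ms per packet round trip

def step_overhead(machine_insns):
    """Seconds of protocol traffic to step over machine_insns instructions."""
    return machine_insns * PACKETS_PER_STEP * ROUND_TRIP_SECONDS

# The forty- and four-hundred-instruction lines mentioned above:
for insns in (40, 400):
    print(f"{insns} instructions: {step_overhead(insns):.2f} s")
```

Even with these modest assumptions, the 400-instruction line lands well
past the ~0.3 s interactive threshold cited earlier in the thread.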