Mirror of the gdb mailing list
* Re: gdb and dlopen
       [not found] <y3radyrjqf8.wl@paladin.sgrail.com>
@ 2001-10-16 13:15 ` Daniel Jacobowitz
  2001-10-16 18:23   ` Kimball Thurston
  2001-10-16 15:05 ` H . J . Lu
  1 sibling, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-16 13:15 UTC (permalink / raw)
  To: Kimball Thurston; +Cc: gdb

On Tue, Oct 16, 2001 at 01:11:39PM -0700, Kimball Thurston wrote:
> Hey all,
> 
>    In our application, we've got a plugin architecture that, under
> unix, we open using dlopen et al. When trying to debug using gdb, the
> process of calling dlopen seems to take an extraordinary amount of
> time, as it looks like gdb is using ptrace to copy a bunch of the
> debug process's memory at each dlopen into itself. Is there a way to
> delay this behavior, or disable it altogether, or fix it? I couldn't
> determine exactly how gdb uses the memory it copies in. All I know is
> it makes using gdb nearly impossible when you have to wait 10 minutes
> for the program to start up...

You might want to look at some of the options under 'set debug' to see
what it's doing.  It's possible that it's just symbol reading
inefficiency biting you; how big are these DSOs?  How many are there?

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: gdb and dlopen
       [not found] <y3radyrjqf8.wl@paladin.sgrail.com>
  2001-10-16 13:15 ` gdb and dlopen Daniel Jacobowitz
@ 2001-10-16 15:05 ` H . J . Lu
  1 sibling, 0 replies; 30+ messages in thread
From: H . J . Lu @ 2001-10-16 15:05 UTC (permalink / raw)
  To: Kimball Thurston; +Cc: gdb

On Tue, Oct 16, 2001 at 01:11:39PM -0700, Kimball Thurston wrote:
> Hey all,
> 
>    In our application, we've got a plugin architecture that, under
> unix, we open using dlopen et al. When trying to debug using gdb, the
> process of calling dlopen seems to take an extraordinary amount of
> time, as it looks like gdb is using ptrace to copy a bunch of the
> debug process's memory at each dlopen into itself. Is there a way to
> delay this behavior, or disable it altogether, or fix it? I couldn't
> determine exactly how gdb uses the memory it copies in. All I know is
> it makes using gdb nearly impossible when you have to wait 10 minutes
> for the program to start up...

We noticed the same thing. Our solution is not to start the application
under gdb, but to attach gdb to it after all DSOs have been dlopened.
That speeds up the startup time considerably. BTW, gdb 4.18 doesn't
have this `problem'. On the
other hand, gdb 5.1 has much better support for dlopen.
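
Concretely, the workaround looks something like this (program name,
PID, and timing are made up for illustration):

```
$ ./myapp &                  # start outside gdb; the dlopens run at full speed
$ gdb ./myapp 1234           # attach by PID once the plugins are loaded
(gdb) continue
```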


H.J.



* Re: gdb and dlopen
  2001-10-16 13:15 ` gdb and dlopen Daniel Jacobowitz
@ 2001-10-16 18:23   ` Kimball Thurston
       [not found]     ` <20011016213252.A8694@nevyn.them.org>
  0 siblings, 1 reply; 30+ messages in thread
From: Kimball Thurston @ 2001-10-16 18:23 UTC (permalink / raw)
  To: Kimball Thurston, gdb

[-- Attachment #1: Type: text/plain, Size: 2315 bytes --]

At Tue, 16 Oct 2001 16:15:25 -0400,
Daniel Jacobowitz wrote:
> 
> On Tue, Oct 16, 2001 at 01:11:39PM -0700, Kimball Thurston wrote:
> > Hey all,
> > 
> >    In our application, we've got a plugin architecture that, under
> > unix, we open using dlopen et al. When trying to debug using gdb, the
> > process of calling dlopen seems to take an extraordinary amount of
> > time, as it looks like gdb is using ptrace to copy a bunch of the
> > debug process's memory at each dlopen into itself. Is there a way to
> > delay this behavior, or disable it altogether, or fix it? I couldn't
> > determine exactly how gdb uses the memory it copies in. All I know is
> > it makes using gdb nearly impossible when you have to wait 10 minutes
> > for the program to start up...
> 
> You might want to look at some of the options under 'set debug' to see
> what it's doing.  It's possible that it's just symbol reading
> inefficiency biting you; how big are these DSOs?  How many are there?
> 

It wasn't symbol reading inefficiency - or at least not directly. I
thought that at first, but I grabbed the snapshot from Oct 5th (I
haven't tried the latest yet) and compiled it with profiling info to
find where gdb is spending its time. The majority of the time is
spent in child_xfer_memory - like 56% of the time (and most of that is
spent calling ptrace to copy bytes around) - the child_xfer_memory
seems to end up being called as a result of resetting breakpoints via
a chain of other things. I don't know why (ignorance). I've attached a
bzip of the profile data from the Oct 5th snapshot. Unfortunately, I
don't know enough about the internals of gdb to know what memory it's
transferring between processes. I tweaked child_xfer_memory to not
call ptid_get_pid quite so much, but that obviously had only a
marginal improvement - it's all in ptrace and system calls - you can
see the system calls hit pretty hard from a cpuload application.

The plugins are very small (minus debug code info) - they should have
only 3 exported functions, a few static functions, and their local
data block has ~ 1K of data in it or so. Right now, there are about 50
of them.
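
For what it's worth, each plugin is roughly of this shape (the names
here are hypothetical; the real exported symbols aren't shown in this
thread):

```c
#include <string.h>

/* Hypothetical sketch of one plugin: three exported functions, a
   static helper, and a ~1K local data block, as described above. */
static char local_data[1024];               /* the ~1K data block */

static int helper(int x) { return x + 1; }  /* static, not exported */

int plugin_init(void)
{
    memset(local_data, 0, sizeof local_data);
    return 0;
}

int plugin_process(int v)
{
    return helper(v);
}

int plugin_shutdown(void)
{
    return 0;
}
```

The host would dlopen() each .so and dlsym() the three entry points;
nothing in the plugin itself should be expensive for gdb to digest.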

What purpose is child_xfer_memory being called for? Maybe I can go
through and change that to a delayed load-on-access type scenario or
something?

thanks,
Kimball


[-- Attachment #2: gdb_prof_data.bz2 --]
[-- Type: application/x-bzip2, Size: 38675 bytes --]


* Re: gdb and dlopen
       [not found]     ` <20011016213252.A8694@nevyn.them.org>
@ 2001-10-16 19:03       ` Daniel Jacobowitz
  2001-10-16 20:04         ` Kimball Thurston
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-16 19:03 UTC (permalink / raw)
  To: Kimball Thurston, gdb

On Tue, Oct 16, 2001 at 09:32:52PM -0400, Daniel Jacobowitz wrote:
> On Tue, Oct 16, 2001 at 06:23:39PM -0700, Kimball Thurston wrote:
> > It wasn't symbol reading inefficiency - or at least not directly. I
> > thought that at first, but I grabbed the snapshot from Oct 5th (I
> > haven't tried the latest yet) and compiled it with profiling info to
> > find where gdb is spending its time. The majority of the time is
> > spent in child_xfer_memory - like 56% of the time (and most of that is
> > spent calling ptrace to copy bytes around) - the child_xfer_memory
> > seems to end up being called as a result of resetting breakpoints via
> > a chain of other things. I don't know why (ignorance). I've attached a
> > bzip of the profile data from the Oct 5th snapshot. Unfortunately, I
> > don't know enough about the internals of gdb to know what memory it's
> > transferring between processes. I tweaked child_xfer_memory to not
> > call ptid_get_pid quite so much, but that obviously had only a
> > marginal improvement - it's all in ptrace and system calls - you can
> > see the system calls hit pretty hard from a cpuload application.
> > 
> > The plugins are very small (minus debug code info) - they should have
> > only 3 exported functions, a few static functions, and their local
> > data block has ~ 1K of data in it or so. Right now, there are about 50
> > of them.

Can you give me more information, or a testcase?  My suspicion that the
link map was responsible seems to be wrong, since I can write a dummy
application that loads 50 DSOs and debug it without any noticeable
stalls at all.

Is this application multithreaded, by chance, or at least linked to
libpthread?  The overhead in stopping and starting via the LinuxThreads
debugging package, even without multiple threads, is so ridiculous that
the times go through the roof.  I think there's something which can be
done about that.  I see a staggering amount of time spent in
svr4_current_sos, because target_read_string percolates down to
thread_db_xfer_memory and thus to thread_db_thread_alive.  We should be
able to do some intelligent caching here.
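
The sort of caching meant here could be sketched like this (purely
illustrative, with stand-ins rather than gdb's real interfaces): keep
lines of inferior memory and invalidate them whenever the inferior is
allowed to run, so the hundreds of reads in one svr4_current_sos()
pass cost one transfer per line instead of one per read.

```c
#include <string.h>

/* Hypothetical sketch of caching inferior memory reads between resume
   points.  raw_read() stands in for the real ptrace-based transfer. */
#define LINE_SIZE   64
#define CACHE_LINES 64

struct cache_line {
    unsigned long addr;
    int valid;
    unsigned char data[LINE_SIZE];
};
static struct cache_line cache[CACHE_LINES];

static unsigned char fake_inferior[4096];   /* pretend inferior memory */
static int raw_reads;                       /* counts "ptrace" calls */

static void raw_read(unsigned long addr, unsigned char *buf, int len)
{
    raw_reads++;
    memcpy(buf, fake_inferior + addr, len);
}

/* Read one byte of inferior memory through the cache. */
static unsigned char cached_read_byte(unsigned long addr)
{
    unsigned long line = addr & ~(unsigned long)(LINE_SIZE - 1);
    struct cache_line *l = &cache[(line / LINE_SIZE) % CACHE_LINES];

    if (!l->valid || l->addr != line) {
        raw_read(line, l->data, LINE_SIZE);   /* one transfer per line */
        l->addr = line;
        l->valid = 1;
    }
    return l->data[addr - line];
}

/* Must be called whenever the inferior runs: its memory may change. */
static void cache_flush(void)
{
    memset(cache, 0, sizeof cache);
}
```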

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-16 19:03       ` Daniel Jacobowitz
@ 2001-10-16 20:04         ` Kimball Thurston
  2001-10-16 20:17           ` Andrew Cagney
  0 siblings, 1 reply; 30+ messages in thread
From: Kimball Thurston @ 2001-10-16 20:04 UTC (permalink / raw)
  To: Kimball Thurston, gdb

At Tue, 16 Oct 2001 22:03:53 -0400,
Daniel Jacobowitz wrote:
> 
> On Tue, Oct 16, 2001 at 09:32:52PM -0400, Daniel Jacobowitz wrote:
> > On Tue, Oct 16, 2001 at 06:23:39PM -0700, Kimball Thurston wrote:
> > > It wasn't symbol reading inefficiency - or at least not directly. I
> > > thought that at first, but I grabbed the snapshot from Oct 5th (I
> > > haven't tried the latest yet) and compiled it with profiling info to
> > > find where gdb is spending its time. The majority of the time is
> > > spent in child_xfer_memory - like 56% of the time (and most of that is
> > > spent calling ptrace to copy bytes around) - the child_xfer_memory
> > > seems to end up being called as a result of resetting breakpoints via
> > > a chain of other things. I don't know why (ignorance). I've attached a
> > > bzip of the profile data from the Oct 5th snapshot. Unfortunately, I
> > > don't know enough about the internals of gdb to know what memory it's
> > > transferring between processes. I tweaked child_xfer_memory to not
> > > call ptid_get_pid quite so much, but that obviously had only a
> > > marginal improvement - it's all in ptrace and system calls - you can
> > > see the system calls hit pretty hard from a cpuload application.
> > > 
> > > The plugins are very small (minus debug code info) - they should have
> > > only 3 exported functions, a few static functions, and their local
> > > data block has ~ 1K of data in it or so. Right now, there are about 50
> > > of them.
> 
> Can you give me more information, or a testcase?  My suspicion that the
> link map was responsible seems to be wrong, since I can write a dummy
> application that loads 50 DSOs and debug it without any noticeable
> stalls at all.
> 
> Is this application multithreaded, by chance, or at least linked to
> libpthread?  The overhead in stopping and starting via the LinuxThreads
> debugging package, even without multiple threads, is so ridiculous that
> the times go through the roof.  I think there's something which can be
> done about that.  I see a staggering amount of time spent in
> svr4_current_sos, because target_read_string percolates down to
> thread_db_xfer_memory and thus to thread_db_thread_alive.  We should be
> able to do some intelligent caching here.
> 

Aha! Yes, I was just about to send more info, when I got your
reply. Our application is multi-threaded (pthreads). We have not
started any threads when we are loading the dsos - just the main
thread is active, but I guess that doesn't matter... If you need any
more info, let me know...

I put a timer in our code, so I start the timer before I call the
first dlopen, and stop it right after the last one:

no gdb:
   0.164334 seconds for 54 plugins

with gdb 5.0.90-cvs from 07-Oct-2001, it takes ~ 30 seconds or so to
enter main, and then:
  151.355 seconds for 54 plugins, 144.033 to unload
  73.3301 seconds for 31 plugins, 68.953 to unload
  50.4318 seconds for 23 plugins, 47.6721 to unload
  31.2207 seconds for 15 plugins, 29.0286 to unload
  19.7027 seconds for 10 plugins, 18.4323 to unload
  9.372 seconds for 5 plugins, 8.70661 to unload
  1.83179 seconds for 1 plugins, 1.67519 to unload

gdb 5.1-branch snapshot from sources.redhat.com from 16-Oct-2001:
  151.919 seconds for 54 plugins, 144.9 to unload

so 5.1 appears to be approximately the same as 5.0.90. This definitely
doesn't happen in 4.18, although 4.18 has other, worse problems (like
about half the time, it isn't useful for debugging - loses the call
stack or ability to look at variables...) The scary thing is that it
looks like the numbers are starting to grow non-linearly - it starts
off at just under 2 seconds per plugin, then grows to almost 3 when we
get up to 54, but I guess that is just the link map growing, so
whatever...
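
(For reference, the timer is just a wall-clock bracket around the
dlopen loop; a self-contained sketch, with the actual loading stubbed
out:)

```c
#include <stdio.h>
#include <sys/time.h>

/* Sketch of the timing bracket described above.  load_plugin() stands
   in for the real dlopen(path, RTLD_NOW) call on each plugin. */
static double now_seconds(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return tv.tv_sec + tv.tv_usec / 1e6;
}

static void load_plugin(int i)
{
    (void)i;   /* really: handles[i] = dlopen(paths[i], RTLD_NOW); */
}

static double time_plugin_loads(int nplugins)
{
    double start = now_seconds();
    for (int i = 0; i < nplugins; i++)
        load_plugin(i);
    return now_seconds() - start;
}
```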

I am more than willing to do the leg work / coding on this, just need
to know what direction to head down...

thanks,
Kimball



* Re: gdb and dlopen
  2001-10-16 20:04         ` Kimball Thurston
@ 2001-10-16 20:17           ` Andrew Cagney
  2001-10-16 22:08             ` Daniel Jacobowitz
  2001-10-16 22:25             ` Kimball Thurston
  0 siblings, 2 replies; 30+ messages in thread
From: Andrew Cagney @ 2001-10-16 20:17 UTC (permalink / raw)
  To: Kimball Thurston; +Cc: gdb

> Aha! Yes, I was just about to send more info, when I got your
> reply. Our application is multi-threaded (pthreads). We have not
> started any threads when we are loading the dsos - just the main
> thread is active, but I guess that doesn't matter... If you need any
> more info, let me know...

To play the part of Bart Simpson's dog:  blah blah blah blah blah PTHREADS 
blah blah 4.18 blah blah 5.x :-)

> I put a timer in our code, so I start the timer before I call the
> first dlopen, and stop it right after the last one:
> 
> no gdb:
>    0.164334 seconds for 54 plugins
> 
> with gdb 5.0.90-cvs from 07-Oct-2001, it takes ~ 30 seconds or so to
> enter main, and then:
>   151.355 seconds for 54 plugins, 144.033 to unload
>   73.3301 seconds for 31 plugins, 68.953 to unload
>   50.4318 seconds for 23 plugins, 47.6721 to unload
>   31.2207 seconds for 15 plugins, 29.0286 to unload
>   19.7027 seconds for 10 plugins, 18.4323 to unload
>   9.372 seconds for 5 plugins, 8.70661 to unload
>   1.83179 seconds for 1 plugins, 1.67519 to unload
> 
> gdb 5.1-branch snapshot from sources.redhat.com from 16-Oct-2001:
>   151.919 seconds for 54 plugins, 144.9 to unload
> 
> so 5.1 appears to be approximately the same as 5.0.90. This definitely
> doesn't happen in 4.18, although 4.18 has other, worse problems (like
> about half the time, it isn't useful for debugging - loses the call
> stack or ability to look at variables...) The scary thing is that it
> looks like the numbers are starting to grow non-linearly - it starts
> off at just under 2 seconds per plugin, then grows to almost 3 when we
> get up to 54, but I guess that is just the link map growing, so
> whatever...
> 
> I am more than willing to do the leg work / coding on this, just need
> to know what direction to head down...

Thread support was given a serious overhaul in 5.0 (it became 
maintainable and fixable).

Can you try this with/without the thread library linked in?  Every time 
GDB sees a shared library being loaded it goes frobbing around to see if 
it contains some thread support code.  That could be the problem.

Andrew




* Re: gdb and dlopen
  2001-10-16 20:17           ` Andrew Cagney
@ 2001-10-16 22:08             ` Daniel Jacobowitz
  2001-10-16 22:19               ` Daniel Jacobowitz
  2001-10-17  8:42               ` Andrew Cagney
  2001-10-16 22:25             ` Kimball Thurston
  1 sibling, 2 replies; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-16 22:08 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: Kimball Thurston, gdb

On Tue, Oct 16, 2001 at 11:17:19PM -0400, Andrew Cagney wrote:
> Thread support was given a serious overhaul in 5.0 (it became 
> maintainable and fixable).
> 
> Can you try this with/without the thread library linked in?  Every time 
> GDB sees a shared library being loaded it goes frobbing around to see if 
> it contains some thread support code.  That could be the problem.

I can verify that this is the problem.  It takes negligible time (still
more ptraces than it should, maybe, but not by too much) for a
non-threaded testcase.  Link in -lpthread, and the time skyrockets.

thread_db is, plain and simply, horribly slow.  We could speed it up
tremendously if we cached memory reads from the child across periods
where we knew it was safe to do so; I'll have to think about how to do
this.  Meanwhile, the real speed penalty seems to be:

      /* FIXME: This seems to be necessary to make sure breakpoints
         are removed.  */
      if (!target_thread_alive (inferior_ptid))
        inferior_ptid = pid_to_ptid (GET_PID (inferior_ptid));
      else
        inferior_ptid = lwp_from_thread (inferior_ptid);

thread_db_thread_alive is EXPENSIVE!  And we do it on every attempt to
read the child's memory, of which we appear to have several hundred in
a call to current_sos ().

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-16 22:08             ` Daniel Jacobowitz
@ 2001-10-16 22:19               ` Daniel Jacobowitz
       [not found]                 ` <y3rzo6qx1ej.wl@paladin.sgrail.com>
                                   ` (3 more replies)
  2001-10-17  8:42               ` Andrew Cagney
  1 sibling, 4 replies; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-16 22:19 UTC (permalink / raw)
  To: Andrew Cagney, Kimball Thurston, gdb

On Wed, Oct 17, 2001 at 01:08:49AM -0400, Daniel Jacobowitz wrote:
> On Tue, Oct 16, 2001 at 11:17:19PM -0400, Andrew Cagney wrote:
> > Thread support was given a serious overhaul in 5.0 (it became 
> > maintainable and fixable).
> > 
> > Can you try this with/without the thread library linked in?  Every time 
> > GDB sees a shared library being loaded it goes frobbing around to see if 
> > it contains some thread support code.  That could be the problem.
> 
> I can verify that this is the problem.  It takes negligible time (still
> more ptraces than it should, maybe, but not by too much) for a
> non-threaded testcase.  Link in -lpthread, and the time skyrockets.
> 
> thread_db is, plain and simply, horribly slow.  We could speed it up
> tremendously if we cached memory reads from the child across periods
> where we knew it was safe to do so; I'll have to think about how to do
> this.  Meanwhile, the real speed penalty seems to be:
> 
>       /* FIXME: This seems to be necessary to make sure breakpoints
>          are removed.  */
>       if (!target_thread_alive (inferior_ptid))
>         inferior_ptid = pid_to_ptid (GET_PID (inferior_ptid));
>       else
>         inferior_ptid = lwp_from_thread (inferior_ptid);
> 
> thread_db_thread_alive is EXPENSIVE!  And we do it on every attempt to
> read the child's memory, of which we appear to have several hundred in
> a call to current_sos ().

(and lwp_from_thread is a little expensive too...)

In the case I'm looking at, where I don't need to mess with either
breakpoints or multiple threads (:P), I can safely comment out that
whole check.  I get an interesting result:

Without thread library:
loading 50 DSOs takes about 0.09 - 0.11 sec

With thread library but without that chunk:
1.47 - 1.56 sec

With thread library as it currently stands:
7.24 - 7.36 sec

We've definitely got some room for improvement here.


Amusingly, there are something like eight million calls to
ptid_get_pid.  I'll send along a trivial patch to shrink the worst
offenders.  I understand the opacity that using functions rather than
macros is going for here, but a function that does 'return a.b;' and
gets called
eight MILLION times is a little bit absurd, don't you think?  Absurd
enough that it shows up as the second highest item on the profile.
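
To illustrate the point (stand-in definitions, not gdb's actual ones):

```c
/* Stand-ins illustrating the function-vs-macro accessor trade-off
   discussed above; gdb's real ptid type differs. */
struct my_ptid { int pid; long lwp; long tid; };

/* Opaque, but a real call at every use site unless inlined: */
int my_ptid_get_pid(struct my_ptid p)
{
    return p.pid;
}

/* The macro form costs nothing at run time, at the price of exposing
   the representation: */
#define MY_PTID_GET_PID(p) ((p).pid)
```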

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-16 20:17           ` Andrew Cagney
  2001-10-16 22:08             ` Daniel Jacobowitz
@ 2001-10-16 22:25             ` Kimball Thurston
  1 sibling, 0 replies; 30+ messages in thread
From: Kimball Thurston @ 2001-10-16 22:25 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: Kimball Thurston, gdb

> 
> To play the part of Bart Simpson's dog:  blah blah blah blah blah PTHREADS 
> blah blah 4.18 blah blah 5.x :-)

And to play the part of Homer: DOH! ;-P

> Thread support was given a serious overhaul in 5.0 (it became 
> maintainable and fixable).
> 
> Can you try this with/without the thread library linked in?  Every time 
> GDB sees a shared library being loaded it goes frobbing around to see if 
> it contains some thread support code.  That could be the problem.
> 

It took a little work, but I hacked out the multi-threaded aspect
of our app so that I didn't have to link in libpthread, and loading
went from 150+ seconds for the DSOs to well under 1 second. I think we
have our suspect... :)

I went to the document on the internals of GDB, and the thread section
is empty - where can I get a primer on how the thread support works,
and on how thread_db affects all this? I tried to trace through the
code a bit, to follow what was said before about svr4_current_sos, but
couldn't figure out how the thread support affects dlopen...

- Kimball



* Re: gdb and dlopen
       [not found]                 ` <y3rzo6qx1ej.wl@paladin.sgrail.com>
@ 2001-10-16 22:52                   ` Kimball Thurston
  0 siblings, 0 replies; 30+ messages in thread
From: Kimball Thurston @ 2001-10-16 22:52 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: gdb, kimball

Damn reply vs. reply-all.



* Re: gdb and dlopen
  2001-10-16 22:19               ` Daniel Jacobowitz
       [not found]                 ` <y3rzo6qx1ej.wl@paladin.sgrail.com>
@ 2001-10-17  8:07                 ` Mark Kettenis
  2001-10-17  8:29                   ` H . J . Lu
  2001-10-17 11:09                   ` Daniel Jacobowitz
  2001-10-17  8:54                 ` Andrew Cagney
  2001-10-17 15:08                 ` Kevin Buettner
  3 siblings, 2 replies; 30+ messages in thread
From: Mark Kettenis @ 2001-10-17  8:07 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Andrew Cagney, Kimball Thurston, gdb

Daniel Jacobowitz <drow@mvista.com> writes:

> > thread_db_thread_alive is EXPENSIVE!  And we do it on every attempt to
> > read the child's memory, of which we appear to have several hundred in
> > a call to current_sos ().
> 
> (and lwp_from_thread is a little expensive too...)
> 
> In the case I'm looking at, where I don't need to mess with either
> breakpoints or multiple threads (:P), I can safely comment out that
> whole check.

The FIXME on the check is a bit vague, and probably so since I didn't
exactly understand what was going on when I wrote that bit of code.  I
believe the need for the check arises from the fact that glibc 2.1.3
is buggy in the sense that TD_DEATH events are unusable.  This means
that we have no clean way to determine whether a thread exited or
not.  Therefore we have to check whether it is still alive.

If we declare glibc 2.1.3 broken, and force people to upgrade to glibc
2.2.x, we could assume that a thread stays alive between TD_CREATE and
TD_DEATH, and speed up thread_db_thread_alive considerably.

Something similar can be done for lwp_from_thread since assigning a
thread to a particular LWP can be reported too (never happens on Linux
since LinuxThreads uses a 1:1 mapping).

Note that if we assume that all threads see the same VM, and that the
initial LWP stays alive during the execution of the program, we could
simply use the process ID of the initial LWP for all memory transfers,
which would remove the need for those checks completely.

Mark



* Re: gdb and dlopen
  2001-10-17  8:07                 ` Mark Kettenis
@ 2001-10-17  8:29                   ` H . J . Lu
  2001-10-17 11:09                   ` Daniel Jacobowitz
  1 sibling, 0 replies; 30+ messages in thread
From: H . J . Lu @ 2001-10-17  8:29 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: Daniel Jacobowitz, Andrew Cagney, Kimball Thurston, gdb

On Wed, Oct 17, 2001 at 04:59:32PM +0200, Mark Kettenis wrote:
> Daniel Jacobowitz <drow@mvista.com> writes:
> 
> > > thread_db_thread_alive is EXPENSIVE!  And we do it on every attempt to
> > > read the child's memory, of which we appear to have several hundred in
> > > a call to current_sos ().
> > 
> > (and lwp_from_thread is a little expensive too...)
> > 
> > In the case I'm looking at, where I don't need to mess with either
> > breakpoints or multiple threads (:P), I can safely comment out that
> > whole check.
> 
> The FIXME on the check is a bit vague, and probably so since I didn't
> exactly understand what was going on when I wrote that bit of code.  I
> believe the need for the check arises from the fact that glibc 2.1.3
> is buggy in the sense that TD_DEATH events are unusable.  This means
> that we have no clean way to determine whether a thread exited or
> not.  Therefore we have to check whether it is still alive.
> 
> If we declare glibc 2.1.3 broken, and force people to upgrade to glibc

Why not? We can just declare that gdb 5.1 only supports threads in
glibc 2.2 and above.


H.J.



* Re: gdb and dlopen
  2001-10-16 22:08             ` Daniel Jacobowitz
  2001-10-16 22:19               ` Daniel Jacobowitz
@ 2001-10-17  8:42               ` Andrew Cagney
  2001-10-17 11:15                 ` Daniel Jacobowitz
  1 sibling, 1 reply; 30+ messages in thread
From: Andrew Cagney @ 2001-10-17  8:42 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Kimball Thurston, gdb

> thread_db is, plain and simply, horribly slow.  We could speed it up
> tremendously if we cached memory reads from the child across periods
> where we knew it was safe to do so; I'll have to think about how to do
> this.  Meanwhile, the real speed penalty seems to be:

Look at dcache.[hc].

Andrew




* Re: gdb and dlopen
  2001-10-16 22:19               ` Daniel Jacobowitz
       [not found]                 ` <y3rzo6qx1ej.wl@paladin.sgrail.com>
  2001-10-17  8:07                 ` Mark Kettenis
@ 2001-10-17  8:54                 ` Andrew Cagney
  2001-10-17 15:08                 ` Kevin Buettner
  3 siblings, 0 replies; 30+ messages in thread
From: Andrew Cagney @ 2001-10-17  8:54 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Kimball Thurston, gdb

> Amusingly, there are something like eight million calls to
> ptid_get_pid.  I'll send along a trivial patch to shrink the worst
> offenders.  I understand the opacity that using functions rather than
> macros is going for here, but a function that does 'return a.b;' and
> gets called
> eight MILLION times is a little bit absurd, don't you think?  Absurd
> enough that it shows up as the second highest item on the profile.

To be vague, fussy, and hypothetical: this will go away.

GDB has long had ``struct thread_info'' as a thread object.  The 
underlying problem is that nothing makes use of this.  Since a thread 
object could hold thread-specific register and memory caches, the 
benefit of not going anywhere near ptrace when the user switches or 
manipulates threads would greatly outweigh the minor overhead of a few 
function calls.

Andrew




* Re: gdb and dlopen
  2001-10-17  8:07                 ` Mark Kettenis
  2001-10-17  8:29                   ` H . J . Lu
@ 2001-10-17 11:09                   ` Daniel Jacobowitz
  2001-10-17 14:26                     ` Mark Kettenis
  1 sibling, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-17 11:09 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: Andrew Cagney, Kimball Thurston, gdb

On Wed, Oct 17, 2001 at 04:59:32PM +0200, Mark Kettenis wrote:
> Daniel Jacobowitz <drow@mvista.com> writes:
> 
> > > thread_db_thread_alive is EXPENSIVE!  And we do it on every attempt to
> > > read the child's memory, of which we appear to have several hundred in
> > > a call to current_sos ().
> > 
> > (and lwp_from_thread is a little expensive too...)
> > 
> > In the case I'm looking at, where I don't need to mess with either
> > breakpoints or multiple threads (:P), I can safely comment out that
> > whole check.
> 
> The FIXME on the check is a bit vague, and probably so since I didn't
> exactly understand what was going on when I wrote that bit of code.  I
> believe the need for the check arises from the fact that glibc 2.1.3
> is buggy in the sense that TD_DEATH events are unusable.  This means
> that we have no clean way to determine whether a thread exited or
> not.  Therefore we have to check whether it is still alive.

(Shouldn't there be a way for us to tell when a thread dies without
receiving the TD_DEATH event anyway?  We -are- attached to all threads,
and LinuxThreads threads are all separate processes...)

> If we declare glibc 2.1.3 broken, and force people to upgrade to glibc
> 2.2.x, we could assume that a thread stays alive between TD_CREATE and
> TD_DEATH, and speed up thread_db_thread_alive considerably.
> 
> Something similar can be done for lwp_from_thread since assigning a
> thread to a particular LWP can be reported too (never happens on Linux
> since LinuxThreads uses a 1:1 mapping).
> 
> Note that if we assume that all threads see the same VM, and that the
> initial LWP stays alive during the execution of the program, we could
> simply use the process ID of the initial LWP for all memory transfers,
> which would remove the need for those checks completely.

I'm not entirely comfortable with that assumption, especially since
this is in thread-db rather than the LinuxThreads specific code.  But
perhaps we could introduce a target method saying what PID to use for
reads?  Then we could make Linux (I'm perfectly comfortable with not
supporting thread debugging on 2.1.3...) simply return the PID without
any expensive checks.
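
Such a target method might look roughly like this (names are invented;
gdb's real target vector is different):

```c
/* Hypothetical sketch of the proposed hook: let each target report
   which PID memory transfers should use, so the generic code can skip
   the per-read thread_db_thread_alive() check. */
struct ptid_s { int pid; long lwp; long tid; };

struct target_ops_s {
    /* Return the PID that memory reads and writes should go through. */
    int (*to_memory_pid)(struct ptid_s inferior);
};

/* On Linux/LinuxThreads all threads share one VM, so just hand back
   the process ID with no liveness checks at all. */
static int linux_memory_pid(struct ptid_s inferior)
{
    return inferior.pid;
}

static struct target_ops_s linux_target = { linux_memory_pid };
```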

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-17  8:42               ` Andrew Cagney
@ 2001-10-17 11:15                 ` Daniel Jacobowitz
  2001-10-17 12:09                   ` Kimball Thurston
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-17 11:15 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: Kimball Thurston, gdb

On Wed, Oct 17, 2001 at 11:42:07AM -0400, Andrew Cagney wrote:
> >thread_db is, plain and simply, horribly slow.  We could speed it up
> >tremendously if we cached memory reads from the child across periods
> >where we knew it was safe to do so; I'll have to think about how to do
> >this.  Meanwhile, the real speed penalty seems to be:
> 
> Look at dcache.[hc].

Well, if I use dcache by creating an appropriate memory region, I go
from 7.17 seconds execution time to 5.46 seconds.  We still do a load
of unnecessary memory traffic, but at least it isn't quite so heinous.

Is there any reason not to define a Unix inferior process's memory
space as cached by default?  I suppose that for mmap'd regions and for
SYSV style shared memory we might lose, but I still consider this a
reasonable behavior, worth documenting but not worth accepting a
performance penalty for.

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-17 11:15                 ` Daniel Jacobowitz
@ 2001-10-17 12:09                   ` Kimball Thurston
  2001-10-17 12:58                     ` Kevin Buettner
  0 siblings, 1 reply; 30+ messages in thread
From: Kimball Thurston @ 2001-10-17 12:09 UTC (permalink / raw)
  To: Andrew Cagney, Kimball Thurston, gdb

At Wed, 17 Oct 2001 14:15:50 -0400,
Daniel Jacobowitz wrote:
> 
> On Wed, Oct 17, 2001 at 11:42:07AM -0400, Andrew Cagney wrote:
> > >thread_db is, plain and simply, horribly slow.  We could speed it up
> > >tremendously if we cached memory reads from the child across periods
> > >where we knew it was safe to do so; I'll have to think about how to do
> > >this.  Meanwhile, the real speed penalty seems to be:
> > 
> > Look at dcache.[hc].
> 
> Well, if I use dcache by creating an appropriate memory region, I go
> from 7.17 seconds execution time to 5.46 seconds.  We still do a load
> of unnecessary memory traffic, but at least it isn't quite so heinous.
> 
> Is there any reason not to define a Unix inferior process's memory
> space as cached by default?  I suppose that for mmap'd regions and for
> SYSV style shared memory we might lose, but I still consider this a
> reasonable behavior, worth documenting but not worth accepting a
> performance penalty for.

Along the same lines of just trying to clean up unnecessary work, I
was seeing 2 scans of all the open dsos for each dlopen call - it
looks like we are getting 2 BPSTAT_WHAT_CHECK_SHLIBS events (in
infrun.c) for each dlopen which causes us to rescan everything. Is
there a way to distinguish these two events, and only do the scan
once? 

- Kimball



* Re: gdb and dlopen
  2001-10-17 12:09                   ` Kimball Thurston
@ 2001-10-17 12:58                     ` Kevin Buettner
  2001-11-08  0:22                       ` Daniel Jacobowitz
  0 siblings, 1 reply; 30+ messages in thread
From: Kevin Buettner @ 2001-10-17 12:58 UTC (permalink / raw)
  To: Kimball Thurston, Andrew Cagney, gdb

On Oct 17, 12:09pm, Kimball Thurston wrote:

> Along the same lines of just trying to clean up unnecessary work, I
> was seeing 2 scans of all the open dsos for each dlopen call - it
> looks like we are getting 2 BPSTAT_WHAT_CHECK_SHLIBS events (in
> infrun.c) for each dlopen which causes us to rescan everything. Is
> there a way to distinguish these two events, and only do the scan
> once? 

I haven't looked at how hard it'd be, but it seems to me that it'd
be a good idea for gdb to note that a shlib event has happened without
immediately doing anything about it.  Then, when the target stops
for some other reason (than a shlib event), we handle all of them at
once.  This should cut down on the memory traffic greatly.
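The deferral scheme proposed here can be sketched as follows: note that shlib
events occurred instead of rescanning the solib list immediately, and do one
combined rescan when the target stops for a non-shlib reason. The class and
method names are hypothetical.

```python
# Sketch of deferred shared-library event handling, assuming events can
# simply be counted and batched until a "real" stop.
class Debugger:
    def __init__(self):
        self.pending_shlib_events = 0
        self.rescans = 0

    def on_stop(self, is_shlib_event):
        if is_shlib_event:
            # dlopen()/dlclose() hit the event breakpoint: just take note.
            self.pending_shlib_events += 1
            return
        if self.pending_shlib_events:
            self.rescan_solibs()      # one scan covers all queued events

    def rescan_solibs(self):
        self.rescans += 1
        self.pending_shlib_events = 0

dbg = Debugger()
for _ in range(10):                   # ten dlopen calls in a row
    dbg.on_stop(is_shlib_event=True)
dbg.on_stop(is_shlib_event=False)     # e.g. a user breakpoint fires
print(dbg.rescans)                    # → 1 rescan instead of 10
```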

Kevin



* Re: gdb and dlopen
  2001-10-17 11:09                   ` Daniel Jacobowitz
@ 2001-10-17 14:26                     ` Mark Kettenis
  2001-10-17 14:34                       ` Daniel Jacobowitz
  0 siblings, 1 reply; 30+ messages in thread
From: Mark Kettenis @ 2001-10-17 14:26 UTC (permalink / raw)
  To: drow; +Cc: ac131313, kimball, gdb

   Date: Wed, 17 Oct 2001 14:09:50 -0400
   From: Daniel Jacobowitz <drow@mvista.com>

   (Shouldn't there be a way for us to tell when a thread dies without
   receiving the TD_DEATH event anyway?  We -are- attached to all threads,
   and LinuxThreads threads are all separate processes...)

Ultimately waitpid() will report that the process has exited.
Unfortunately there seems to be a window where we cannot access that
process's memory with ptrace, while waitpid() hasn't reported that
exit yet.

   > If we declare glibc 2.1.3 broken, and force people to upgrade to glibc
   > 2.2.x, we could assume that a thread stays alive between TD_CREATE and
   > TD_DEATH, and speed up thread_db_thread_alive considerably.
   > 
   > Something similar can be done for lwp_from_threads since assigning a
   > thread to a particular LWP can be reported too (never happens on Linux
   > since LinuxThreads uses a 1:1 mapping).
   > 
   > Note that if we assume that all threads see the same VM, and that the
   > initial LWP stays alive during the execution of the program, we could
   > simply use the process ID of the initial LWP for all memory transfers,
   > which would remove the need for those checks completely.

   I'm not entirely comfortable with that assumption, especially since
   this is in thread-db rather than the LinuxThreads specific code.  But
   perhaps we could introduce a target method saying what PID to use for
   reads?  Then we could make Linux (I'm perfectly comfortable with not
   supporting thread debugging on 2.1.3...) simply return the PID without
   any expensive checks.

I think the assumption that all threads/LWPs share the same VM is a
fair assumption for thread-db.  GDB assumes this model in several
places, and you probably wouldn't call processes not sharing their
VM threads at all.

As an aside, it appears that Solaris doesn't even allow you to read
memory from a particular LWP at all.

Mark



* Re: gdb and dlopen
  2001-10-17 14:26                     ` Mark Kettenis
@ 2001-10-17 14:34                       ` Daniel Jacobowitz
  0 siblings, 0 replies; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-17 14:34 UTC (permalink / raw)
  To: Mark Kettenis; +Cc: ac131313, kimball, gdb

On Wed, Oct 17, 2001 at 11:26:08PM +0200, Mark Kettenis wrote:
>    Date: Wed, 17 Oct 2001 14:09:50 -0400
>    From: Daniel Jacobowitz <drow@mvista.com>
> 
>    (Shouldn't there be a way for us to tell when a thread dies without
>    receiving the TD_DEATH event anyway?  We -are- attached to all threads,
>    and LinuxThreads threads are all separate processes...)
> 
> Ultimately waitpid() will report that the process has exited.
> Unfortunately there seems to be a window where we cannot access that
> process's memory with ptrace, while waitpid() hasn't reported that
> exit yet.

Well, that's somewhat unfortunate.  I guess there may not be anything
we can do in that case.

> I think the assumption that all threads/LWPs share the same VM is a
> fair assumption for thread-db.  GDB assumes this model in several
> places, and you'd probably wouldn't call processes not sharing their
> VM threads at all.
> 
> As an aside, it appears that Solaris doesn't even allow you to read
> memory from a particular LWP at all.

Well, I'm not sure if there are any systems using thread_db that allow
this model, but it's reasonable in general to share heap but not stack.
I'll certainly bow to your judgement here, though.  Being able to read
from only the primary PID would be convenient - although I'm not sure
what to do in the case where that first thread exits.

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-16 22:19               ` Daniel Jacobowitz
                                   ` (2 preceding siblings ...)
  2001-10-17  8:54                 ` Andrew Cagney
@ 2001-10-17 15:08                 ` Kevin Buettner
  2001-10-17 15:57                   ` Andrew Cagney
  3 siblings, 1 reply; 30+ messages in thread
From: Kevin Buettner @ 2001-10-17 15:08 UTC (permalink / raw)
  To: Daniel Jacobowitz, Andrew Cagney, Kimball Thurston, gdb

On Oct 17,  1:19am, Daniel Jacobowitz wrote:

> Amusingly, there are something like eight million calls to
> ptid_get_pid.  I'll send along a trivial patch to shrink the worst
> offenders.  I understand the opacity that functions over macros is
> going for here, but a function that does 'return a.b;' and gets called
> eight MILLION times is a little bit absurd, don't you think?  Absurd
> enough that it shows up as the second highest item on the profile.

It's a shame that we can't use inline functions...
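The pattern being lamented, a trivial accessor dominating a profile, can be
illustrated with a toy example: calling a one-line getter inside a hot loop
versus hoisting the loop-invariant value out. The Python names here are made
up; GDB's real ptid_t is a C struct with pid/lwp/tid fields.

```python
# Toy illustration of accessor-call overhead in a hot loop.
class Ptid:
    def __init__(self, pid, lwp, tid):
        self.pid, self.lwp, self.tid = pid, lwp, tid

calls = 0
def ptid_get_pid(ptid):
    global calls
    calls += 1
    return ptid.pid          # the 'return a.b;' body from the discussion

inferior = Ptid(1234, 0, 0)

# Unhoisted: one accessor call per iteration.
hits = sum(1 for _ in range(1000) if ptid_get_pid(inferior) == 1234)
unhoisted = calls

# Hoisted: the loop-invariant value is fetched once.
calls = 0
pid = ptid_get_pid(inferior)
hits = sum(1 for _ in range(1000) if pid == 1234)
print(unhoisted, calls)      # → 1000 1
```

In C the equivalent fix, absent C89 inline functions, is either a macro
accessor or hoisting the call out of the loop by hand.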

Kevin



* Re: gdb and dlopen
  2001-10-17 15:08                 ` Kevin Buettner
@ 2001-10-17 15:57                   ` Andrew Cagney
  2001-10-17 17:05                     ` Daniel Jacobowitz
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Cagney @ 2001-10-17 15:57 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: Daniel Jacobowitz, Kimball Thurston, gdb

> On Oct 17,  1:19am, Daniel Jacobowitz wrote:
> 
> 
>> Amusingly, there are something like eight million calls to
>> ptid_get_pid.  I'll send along a trivial patch to shrink the worst
>> offenders.  I understand the opacity that functions over macros is
>> going for here, but a function that does 'return a.b;' and gets called
>> eight MILLION times is a little bit absurd, don't you think?  Absurd
>> enough that it shows up as the second highest item on the profile.
> 
> 
> It's a shame that we can't use inline functions...

Remember, ptid_get_pid() is the messenger.  The real problem is 
elsewhere.  A bit like STREQ() in the symtab code.

enjoy,
Andrew




* Re: gdb and dlopen
  2001-10-17 15:57                   ` Andrew Cagney
@ 2001-10-17 17:05                     ` Daniel Jacobowitz
  2001-10-17 23:14                       ` Andrew Cagney
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-10-17 17:05 UTC (permalink / raw)
  To: Andrew Cagney; +Cc: Kevin Buettner, gdb

On Wed, Oct 17, 2001 at 06:56:38PM -0400, Andrew Cagney wrote:
> >On Oct 17,  1:19am, Daniel Jacobowitz wrote:
> >
> >
> >>Amusingly, there are something like eight million calls to
> >>ptid_get_pid.  I'll send along a trivial patch to shrink the worst
> >>offenders.  I understand the opacity that functions over macros is
> >>going for here, but a function that does 'return a.b;' and gets called
> >>eight MILLION times is a little bit absurd, don't you think?  Absurd
> >>enough that it shows up as the second highest item on the profile.
> >
> >
> >It's a shame that we can't use inline functions...
> 
> Remember, ptid_get_pid() is the messenger.  The real problem is 
> elsewhere.  A bit like STREQ() in the symtab code.

I don't understand what you mean by this.  We certainly need to get at
the actual PID everywhere PIDGET () is being used, regardless of
whether it could be hoisted out of loops.

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-10-17 17:05                     ` Daniel Jacobowitz
@ 2001-10-17 23:14                       ` Andrew Cagney
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cagney @ 2001-10-17 23:14 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Kevin Buettner, gdb

> Remember, ptid_get_pid() is the messenger.  The real problem is 
>> elsewhere.  A bit like STREQ() in the symtab code.
> 
> 
> I don't understand what you mean by this.  We certainly need to get at
> the actual PID everywhere PIDGET () is being used, regardless of
> whether it could be hoisted out of loops.

To give an example, instead of accessing multiple thread objects 
simultaneously, GDB has a single global current thread state which it 
swaps in and out (using memcpy() and invalidate all).  As a consequence 
GDB spends its time doing this song and dance where it constantly and 
needlessly checks that the current single global thread is the correct 
current single global thread.
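The design problem described above can be sketched with two toy debuggers: one
with a single global state buffer that must be swapped on every thread switch,
and one that simply keeps state per thread. Class names and the state layout
are illustrative.

```python
# Sketch of global-state swapping versus per-thread state retention.
swaps = 0

class GlobalStateDebugger:
    def __init__(self):
        self.saved = {}               # tid -> saved copy of the buffer
        self.current_tid = None
        self.buffer = None            # the one global state blob

    def access(self, tid):
        global swaps
        if tid != self.current_tid:   # song and dance on every switch
            if self.current_tid is not None:
                self.saved[self.current_tid] = self.buffer
            self.buffer = self.saved.get(tid, {"regs": [0] * 8})
            self.current_tid = tid
            swaps += 1
        return self.buffer

class PerThreadDebugger:
    def __init__(self):
        self.threads = {}             # tid -> its own state, kept live

    def access(self, tid):
        return self.threads.setdefault(tid, {"regs": [0] * 8})

g = GlobalStateDebugger()
for tid in [1, 2, 1, 2, 1, 2]:        # alternating accesses
    g.access(tid)
print(swaps)                          # → 6 (one swap per alternation)

p = PerThreadDebugger()
for tid in [1, 2, 1, 2, 1, 2]:
    p.access(tid)                     # no copying at all
```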

enjoy,
Andrew




* Re: gdb and dlopen
  2001-10-17 12:58                     ` Kevin Buettner
@ 2001-11-08  0:22                       ` Daniel Jacobowitz
  2001-11-08  8:17                         ` Kevin Buettner
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-11-08  0:22 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: Kimball Thurston, Andrew Cagney, gdb

On Wed, Oct 17, 2001 at 12:58:38PM -0700, Kevin Buettner wrote:
> On Oct 17, 12:09pm, Kimball Thurston wrote:
> 
> > Along the same lines of just trying to clean up unnecessary work, I
> > was seeing 2 scans of all the open dsos for each dlopen call - it
> > looks like we are getting 2 BPSTAT_WHAT_CHECK_SHLIBS events (in
> > infrun.c) for each dlopen which causes us to rescan everything. Is
> > there a way to distinguish these two events, and only do the scan
> > once? 
> 
> I haven't looked at how hard it'd be, but it seems to me that it'd
> be a good idea for gdb to note that a shlib event has happened without
> immediately doing anything about it.  Then, when the target stops
> for some other reason (than a shlib event), we handle all of them at
> once.  This should cut down on the memory traffic greatly.

Actually implementing this, at first glance, is easy.  However, there's
a couple of interesting issues.  For instance, suppose that we want to
reset a breakpoint in a shared library; we need to read in the symbols
for that shared library before we can do that.  If we defer it, and
there are no other breakpoints, then we'll never set the breakpoint and
never stop.

Thoughts?

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-11-08  0:22                       ` Daniel Jacobowitz
@ 2001-11-08  8:17                         ` Kevin Buettner
  2001-11-08  9:44                           ` Daniel Jacobowitz
  0 siblings, 1 reply; 30+ messages in thread
From: Kevin Buettner @ 2001-11-08  8:17 UTC (permalink / raw)
  To: Daniel Jacobowitz, Kevin Buettner; +Cc: Kimball Thurston, Andrew Cagney, gdb

On Nov 18,  2:45pm, Daniel Jacobowitz wrote:

> On Wed, Oct 17, 2001 at 12:58:38PM -0700, Kevin Buettner wrote:
> > On Oct 17, 12:09pm, Kimball Thurston wrote:
> > 
> > > Along the same lines of just trying to clean up unnecessary work, I
> > > was seeing 2 scans of all the open dsos for each dlopen call - it
> > > looks like we are getting 2 BPSTAT_WHAT_CHECK_SHLIBS events (in
> > > infrun.c) for each dlopen which causes us to rescan everything. Is
> > > there a way to distinguish these two events, and only do the scan
> > > once? 
> > 
> > I haven't looked at how hard it'd be, but it seems to me that it'd
> > be a good idea for gdb to note that a shlib event has happened without
> > immediately doing anything about it.  Then, when the target stops
> > for some other reason (than a shlib event), we handle all of them at
> > once.  This should cut down on the memory traffic greatly.
> 
> Actually implementing this, at first glance, is easy.  However, there's
> a couple of interesting issues.  For instance, suppose that we want to
> reset a breakpoint in a shared library; we need to read in the symbols
> for that shared library before we can do that.  If we defer it, and
> there are no other breakpoints, then we'll never set the breakpoint and
> never stop.
> 
> Thoughts?

After I proposed the above idea, Peter Schauer emailed me privately
and noted that my idea would "break setting breakpoints in global
object constructor code in shared libraries."  He goes on to say
that the "reenable breakpoint logic after every shlib load currently
takes care of this."

So, it looks like you've also noticed one of the concerns that Peter
had regarding my idea.

The only thing that I can think of is to introduce a GDB setting which
indicates which behavior you want.  Maybe call it
"solib-reenable-breakpoints-after-load" and have it default to "true". 
(Which is what it currently does.)

Then, if you care more about speed, you can shut it off if desired.

Thinking about it some more, maybe it would be better to extend
auto-solib-add so that it has three settings:

    disabled			(off)
    when-stopped
    as-early-as-possible	(on)

The "disabled" setting would be the same as what you currently get when
you do ``set auto-solib-add off''.  For the sake of backwards compatibility,
we'd also continue to accept "off" as a synonym for "disabled".

The "when-stopped" setting is the new one which would cause new shared
libs to be checked for (and loaded) only when GDB stops for a non-shlib
event.

The "as-early-as-possible" setting is the same as what you currently
get when you do ``set auto-solib-add on''.  Again for the sake of
backwards compatibility, we'd also continue to accept "on" as a
synonym for "as-early-as-possible".
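The proposed three-way setting with backward-compatible synonyms could be
parsed along these lines. The setting names come from the message above; the
parsing code itself is invented.

```python
# Sketch of a tri-state auto-solib-add value with "off"/"on" kept as
# synonyms for the two old behaviors.
CANONICAL = ("disabled", "when-stopped", "as-early-as-possible")
SYNONYMS = {"off": "disabled", "on": "as-early-as-possible"}

def parse_auto_solib_add(value):
    value = SYNONYMS.get(value, value)
    if value not in CANONICAL:
        raise ValueError("bad value for auto-solib-add: %r" % value)
    return value

print(parse_auto_solib_add("off"))           # → disabled
print(parse_auto_solib_add("on"))            # → as-early-as-possible
print(parse_auto_solib_add("when-stopped"))  # → when-stopped
```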

(I'm not very good at thinking of names and won't be at all offended
if someone suggests something better...)

Kevin



* Re: gdb and dlopen
  2001-11-08  8:17                         ` Kevin Buettner
@ 2001-11-08  9:44                           ` Daniel Jacobowitz
  2001-11-08 10:49                             ` Kevin Buettner
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-11-08  9:44 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: Kimball Thurston, Andrew Cagney, gdb

On Mon, Nov 19, 2001 at 10:04:09AM -0700, Kevin Buettner wrote:
> After I proposed the above idea, Peter Schauer emailed me privately
> and noted that my idea would "break setting breakpoints in global
> object constructor code in shared libraries."  He goes on to say
> that the "reenable breakpoint logic after every shlib load currently
> takes care of this."
> 
> So, it looks like you've also noticed one of the concerns that Peter
> had regarding my idea.

Yes.  I don't know what we can really do about this - besides
decreasing the total memory traffic for an update, which I think would
be wise.  Among other possibilities, do you have any comment on my
suggestion for setting inferior memory to be cached by default if not
otherwise specified?  Currently we default to uncached, which is safer,
but I can't think of many examples where it would be a problem to
cache.

> The only thing that I can think of is to introduce a GDB setting which
> indicates which behavior you want.  Maybe call it
> "solib-reenable-breakpoints-after-load" and have it default to "true". 
> (Which is what it currently does.)
> 
> Then, if you care more about speed, you can shut it off if desired.
> 
> Thinking about it some more, maybe it would be better extend
> auto-solib-add so that it has three settings:
> 
>     disabled			(off)
>     when-stopped
>     as-early-as-possible	(on)

I suppose this is a good idea.  I'm not going to do it, as I'd much
rather make as-early-as-possible (which is what I've wanted nine times
out of ten when actually debugging something which used DSOs...)
faster.

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-11-08  9:44                           ` Daniel Jacobowitz
@ 2001-11-08 10:49                             ` Kevin Buettner
  2001-11-08 11:14                               ` Daniel Jacobowitz
  0 siblings, 1 reply; 30+ messages in thread
From: Kevin Buettner @ 2001-11-08 10:49 UTC (permalink / raw)
  To: Daniel Jacobowitz, Kevin Buettner; +Cc: Kimball Thurston, Andrew Cagney, gdb

On Nov 19,  2:16pm, Daniel Jacobowitz wrote:

> > After I proposed the above idea, Peter Schauer emailed me privately
> > and noted that my idea would "break setting breakpoints in global
> > object constructor code in shared libraries."  He goes on to say
> > that the "reenable breakpoint logic after every shlib load currently
> > takes care of this."
> > 
> > So, it looks like you've also noticed one of the concerns that Peter
> > had regarding my idea.
> 
> Yes.  I don't know what we can really do about this - besides
> decreasing the total memory traffic for an update, which I think would
> be wise.  Among other possibilities, do you have any comment on my
> suggestion for setting inferior memory to be cached by default if not
> otherwise specified?  Currently we default to uncached, which is safer,
> but I can't think of many examples where it would be a problem to
> cache.

Are you sure caching will help?  The cache has to be invalidated every
time GDB stops, right?

If current_sos() is refetching some bit of memory more than once per
invocation, then perhaps this problem should be solved by some other
means?

Kevin



* Re: gdb and dlopen
  2001-11-08 10:49                             ` Kevin Buettner
@ 2001-11-08 11:14                               ` Daniel Jacobowitz
  2001-11-08 16:17                                 ` Andrew Cagney
  0 siblings, 1 reply; 30+ messages in thread
From: Daniel Jacobowitz @ 2001-11-08 11:14 UTC (permalink / raw)
  To: Kevin Buettner; +Cc: Kimball Thurston, Andrew Cagney, gdb

On Mon, Nov 19, 2001 at 12:38:21PM -0700, Kevin Buettner wrote:
> On Nov 19,  2:16pm, Daniel Jacobowitz wrote:
> 
> > > After I proposed the above idea, Peter Schauer emailed me privately
> > > and noted that my idea would "break setting breakpoints in global
> > > object constructor code in shared libraries."  He goes on to say
> > > that the "reenable breakpoint logic after every shlib load currently
> > > takes care of this."
> > > 
> > > So, it looks like you've also noticed one of the concerns that Peter
> > > had regarding my idea.
> > 
> > Yes.  I don't know what we can really do about this - besides
> > decreasing the total memory traffic for an update, which I think would
> > be wise.  Among other possibilities, do you have any comment on my
> > suggestion for setting inferior memory to be cached by default if not
> > otherwise specified?  Currently we default to uncached, which is safer,
> > but I can't think of many examples where it would be a problem to
> > cache.
> 
> Are you sure caching will help?  The cache has to be invalidated every
> time GDB stops, right?
> 
> If current_sos() is refetching some bit of memory more than once per
> invocation, then perhaps this problem should be solved by some other
> means?

I'm absolutely sure.  Or at least, I was... when I tested this, it was
an obvious win.  Now it is an obvious LOSS to turn on the cache.  I'm
not sure why, so I'll have to investigate it later.  In 5.0.90-cvs it
was a win and in current trunk it is a significant performance loss.

This is in the context of a linuxthreads application.  We do a
ridiculous, staggering amount of memory transfer in order to debug a
linuxthreads application, and parts of it are duplicated.

-- 
Daniel Jacobowitz                           Carnegie Mellon University
MontaVista Software                         Debian GNU/Linux Developer



* Re: gdb and dlopen
  2001-11-08 11:14                               ` Daniel Jacobowitz
@ 2001-11-08 16:17                                 ` Andrew Cagney
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Cagney @ 2001-11-08 16:17 UTC (permalink / raw)
  To: Daniel Jacobowitz; +Cc: Kevin Buettner, Kimball Thurston, gdb

> 
> I'm absolutely sure.  Or at least, I was... when I tested this, it was
> an obvious win.  Now it is an obvious LOSS to turn on the cache.  I'm
> not sure why, so I'll have to investigate it later.  In 5.0.90-cvs it
> was a win and in current trunk it is a significant performance loss.
> 
> This is in the context of a linuxthreads application.  We do a
> ridiculous, staggering amount of memory transfer in order to debug a
> linuxthreads application, and parts of it are duplicated.

Hmm, in the context of threads, trying to use the insn/data cache is 
cheating :-)

The problem with threads is that GDB is constantly discarding its per 
thread information, only to immediately turn around and ask that that 
same information be re-created.  GDB, when switching between threads, 
should retain that information in the ``struct thread_info''.  Even 
register information could be cached on a per-thread basis.
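The per-thread caching being suggested can be sketched as lazily fetched
registers kept in each thread's own record rather than discarded on every
switch. The fetch counting and names are illustrative.

```python
# Sketch of per-thread register caching in a thread_info-like record.
fetches = 0

class ThreadInfo:
    def __init__(self, tid):
        self.tid = tid
        self.regs = None              # cached until this thread next runs

def read_registers(thread, target_fetch):
    global fetches
    if thread.regs is None:           # fetch from the target only once
        fetches += 1
        thread.regs = target_fetch(thread.tid)
    return thread.regs

threads = {tid: ThreadInfo(tid) for tid in (1, 2, 3)}
fetch = lambda tid: [tid] * 8         # stand-in for a ptrace register read

for tid in [1, 2, 3, 1, 2, 3, 1]:     # switching back and forth
    read_registers(threads[tid], fetch)
print(fetches)                        # → 3 fetches, not 7
```

Invalidation would then be per-thread: only a thread that actually ran needs
its cached registers dropped.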

See 
http://sources.redhat.com/gdb/papers/multi-arch/real-multi-arch/index.html#SEC37

enjoy,
Andrew




end of thread, other threads:[~2001-11-19 23:11 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <y3radyrjqf8.wl@paladin.sgrail.com>
2001-10-16 13:15 ` gdb and dlopen Daniel Jacobowitz
2001-10-16 18:23   ` Kimball Thurston
     [not found]     ` <20011016213252.A8694@nevyn.them.org>
2001-10-16 19:03       ` Daniel Jacobowitz
2001-10-16 20:04         ` Kimball Thurston
2001-10-16 20:17           ` Andrew Cagney
2001-10-16 22:08             ` Daniel Jacobowitz
2001-10-16 22:19               ` Daniel Jacobowitz
     [not found]                 ` <y3rzo6qx1ej.wl@paladin.sgrail.com>
2001-10-16 22:52                   ` Kimball Thurston
2001-10-17  8:07                 ` Mark Kettenis
2001-10-17  8:29                   ` H . J . Lu
2001-10-17 11:09                   ` Daniel Jacobowitz
2001-10-17 14:26                     ` Mark Kettenis
2001-10-17 14:34                       ` Daniel Jacobowitz
2001-10-17  8:54                 ` Andrew Cagney
2001-10-17 15:08                 ` Kevin Buettner
2001-10-17 15:57                   ` Andrew Cagney
2001-10-17 17:05                     ` Daniel Jacobowitz
2001-10-17 23:14                       ` Andrew Cagney
2001-10-17  8:42               ` Andrew Cagney
2001-10-17 11:15                 ` Daniel Jacobowitz
2001-10-17 12:09                   ` Kimball Thurston
2001-10-17 12:58                     ` Kevin Buettner
2001-11-08  0:22                       ` Daniel Jacobowitz
2001-11-08  8:17                         ` Kevin Buettner
2001-11-08  9:44                           ` Daniel Jacobowitz
2001-11-08 10:49                             ` Kevin Buettner
2001-11-08 11:14                               ` Daniel Jacobowitz
2001-11-08 16:17                                 ` Andrew Cagney
2001-10-16 22:25             ` Kimball Thurston
2001-10-16 15:05 ` H . J . Lu
