* Re: Proposal for handling large numbers of registers
[not found] <199908062333.QAA16667@andros.cygnus.com>
1999-08-06 17:33 ` Proposal for handling large numbers of registers Michael Hayes
@ 1999-08-09 16:31 ` J.T. Conklin
1 sibling, 0 replies; 3+ messages in thread
From: J.T. Conklin @ 1999-08-09 16:31 UTC (permalink / raw)
To: Stan Shebs; +Cc: gdb
>>>>> "Stan" == Stan Shebs <shebs@cygnus.com> writes:
Stan> So I'm considering importing the idea of "register classes" from
Stan> GCC.
This seems to fit well into my strawman proposal for a remote protocol
extension to store and fetch groups of registers.
Stan> GCC uses the class idea to guide allocation and instruction
Stan> selection, but we don't even need that much; just define the
Stan> names of the classes and which registers belong to each.
GCC's register classes are defined on a per-processor basis. Are you
thinking of using that model, or one where registers are fit into a
pre-defined set of classes? I think compelling arguments could be
made on each side.
Stan> In a minimal model, each register belongs to exactly one class.
Stan> A more sophisticated model would allow registers to belong to
Stan> multiple classes,
Which model are you leaning towards? I suspect that this decision
will have significant implementation implications.
Stan> although I think you'd want to have a primary class for each
Stan> register, so that GUIs can construct a hierarchical list of
Stan> registers by class, where each register appears exactly once.
IMO, whether a single class or multiple class representation is used
should not have an impact on presentation. I strongly believe that
users should be able to create arbitrary "classes" and decide which
registers are to be displayed in each. Using the internal register
classes is a poor substitute for this. In fact, a logical extension
is the ability for users to define their own "virtual" registers for
memory mapped I/O devices.
Stan> To go along with this, it would be useful to be able to identify
Stan> a class of registers that are always fetched from targets.
Always fetched for what purpose?
A lazy fetch scheme should be workable. When the user invokes 'info
registers' or has a GUI register window open with the appropriate
class 'un-clicked', the needed registers should be fetched.
Stan> Typically these would be the ones saved and restored by traps,
Stan> and so would not include special registers, but that would be up
Stan> to the person doing the GDB and target-side stub port. I could
Stan> also see adding an option to "info registers" to only display
Stan> these, and redefining it and "info all-registers" to base their
Stan> behavior upon agreed-upon classes, rather than always being
Stan> wired to float/non-float classes as they are now.
Oh, what you are asking is what registers should be displayed with
'info registers'. Perhaps a variable would contain a list of what
registers and register classes are displayed. Maybe something like
    set info-register-list int sys
would tell 'info register' to display registers from the 'int' and
'sys' classes?
--jtc
--
J.T. Conklin
RedBack Networks
From jingham@leda.cygnus.com Mon Aug 09 18:15:00 1999
From: James Ingham <jingham@leda.cygnus.com>
To: gdb@sourceware.cygnus.com
Subject: Re: enum xyz;
Date: Mon, 09 Aug 1999 18:15:00 -0000
Message-id: <mfd7wwigum.fsf@leda.cygnus.com>
References: <199908092157.OAA18400@andros.cygnus.com>
X-SW-Source: 1999-q3/msg00138.html
Content-length: 814
Stan,
>
> Indeed, this is not a hypothetical issue; Apple's C compiler used to
> make byte- and short-sized enums when it was possible to do so. This
> caused me much grief in porting GCC, since much of the code assumed
> enums were int-sized. I had to add many "rtx_dummy = 1000000" values
> just to force the compiler to use an int representation.
>
Actually, MetroWerks still does do this on the Macintosh. We have
some special code in the MacTcl header files to panic the compiler if
someone turns off the "enums always int" switch, 'cause otherwise it
causes all sorts of fun and mysterious crashes...
Jim
--
++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++==++
Jim Ingham jingham@cygnus.com
Cygnus Solutions Inc.
From jtc@redback.com Mon Aug 09 21:01:00 1999
From: jtc@redback.com (J.T. Conklin)
To: gdb@sourceware.cygnus.com
Subject: FYI: readline upgrade broke gdb.info generation
Date: Mon, 09 Aug 1999 21:01:00 -0000
Message-id: <5m907kffo8.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00139.html
Content-length: 177
It looks like the updated readline distribution has re-structured the
documentation that is included in the GDB User's Manual.
--jtc
--
J.T. Conklin
RedBack Networks
From ezannoni@cygnus.com Tue Aug 10 06:45:00 1999
From: Elena Zannoni <ezannoni@cygnus.com>
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: FYI: readline upgrade broke gdb.info generation
Date: Tue, 10 Aug 1999 06:45:00 -0000
Message-id: <14256.11468.44569.647968@kwikemart.cygnus.com>
References: <5m907kffo8.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00140.html
Content-length: 266
J.T. Conklin writes:
> It looks like the updated readline distribution has re-structured the
> documentation that is included in the GDB User's Manual.
>
> --jtc
>
> --
> J.T. Conklin
> RedBack Networks
Thanks JT,
I'll take a look at it.
Elena
From jtc@redback.com Tue Aug 10 08:42:00 1999
From: jtc@redback.com (J.T. Conklin)
To: gdb@sourceware.cygnus.com
Subject: worst case symbol lookup performance
Date: Tue, 10 Aug 1999 08:42:00 -0000
Message-id: <5m4si7fxsv.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00141.html
Content-length: 1451
Has anyone thought about the worst case performance of symbol lookup?
We have a library of user defined functions that grovel around data
structures and display information in a useful manner. Due to poor
performance (for example, the function that displays a task listing
takes about a second per task on a PII 450), these functions are
hardly ever used --- our engineers prefer to write throwaway code and
bind it into the executable.
But now that I've implemented crash dump support for our system, we
must use the user defined functions. I built a profiled gdb with a
new CLI moncontrol command so I had better control over what I was
measuring, and determined that most (87%+) of the time was being spent
in lookup_partial_symbol.
It appears that write_dollar_variable() calls lookup_symbol() in order
to expand HPUX/HPPA millicode functions ($$dyncall, etc.). In my run,
write_dollar_variable() called lookup_symbol() ~1000 times, which in
turn called lookup_partial_symbol ~2,000,000 times (we have ~20,000
symbols in our system). But since the $ variables used in my script
will never be found in the symbol tables, I'm encountering worst case
behavior for each lookup.
I'm not familiar with the symbol handling portions of GDB, so I'm
looking for ideas. Removing the symbol lookups from
write_dollar_variable() significantly improves performance, but doesn't solve the
underlying problem.
--jtc
--
J.T. Conklin
RedBack Networks
From shebs@cygnus.com Tue Aug 10 11:33:00 1999
From: Stan Shebs <shebs@cygnus.com>
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: Re: worst case symbol lookup performance
Date: Tue, 10 Aug 1999 11:33:00 -0000
Message-id: <199908101833.LAA21916@andros.cygnus.com>
References: <5m4si7fxsv.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00142.html
Content-length: 1163
From: jtc@redback.com (J.T. Conklin)
Date: 10 Aug 1999 08:41:20 -0700
It appears that write_dollar_variable() calls lookup_symbol() in order
to expand HPUX/HPPA millicode functions ($$dyncall, etc.). In my run,
write_dollar_variable() called lookup_symbol() ~1000 times, which in
turn called lookup_partial_symbol ~2,000,000 times (we have ~20,000
symbols in our system). But since the $ variables used in my script
will never be found in the symbol tables, I'm encountering worst case
behavior for each lookup.
Yow! I'm pretty sure we shouldn't be looking for millicode functions
on anything besides HPUX native. :-) At the very least, the bit of
code should be conditionalized to not affect anybody else.
I'm not familiar with the symbol handling portions of GDB, so I'm
looking for ideas. Removing the symbol lookups from
write_dollar_variable() significantly improves performance, but doesn't solve the
underlying problem.
Presumably you get a ~8 times speedup by removing the symbol lookup.
What does profiling say is the most expensive operation now?
Srikanth, did you ever look at this issue?
Stan
From jtc@redback.com Tue Aug 10 12:04:00 1999
From: jtc@redback.com (J.T. Conklin)
To: Stan Shebs <shebs@cygnus.com>
Cc: gdb@sourceware.cygnus.com
Subject: Re: worst case symbol lookup performance
Date: Tue, 10 Aug 1999 12:04:00 -0000
Message-id: <5mso5rwj7w.fsf@jtc.redbacknetworks.com>
References: <199908101833.LAA21916@andros.cygnus.com>
X-SW-Source: 1999-q3/msg00143.html
Content-length: 1600
>>>>> "Stan" == Stan Shebs <shebs@cygnus.com> writes:
jtc> It appears that write_dollar_variable() calls lookup_symbol()
jtc> in order to expand HPUX/HPPA millicode functions ($$dyncall,
jtc> etc.). In my run, write_dollar_variable() called
jtc> lookup_symbol() ~1000 times, which in turn called
jtc> lookup_partial_symbol ~2,000,000 times (we have ~20,000
jtc> symbols in our system). But since the $ variables used in my
jtc> script will never be found in the symbol tables, I'm
jtc> encountering worst case behavior for each lookup.
Stan> Yow! I'm pretty sure we shouldn't be looking for millicode
Stan> functions on anything besides HPUX native. :-) At the very
Stan> least, the bit of code should be conditionalized to not affect
Stan> anybody else.
If HPUX/HPPA is the only system with identifiers with leading $'s,
conditionalizing this code would be appropriate. At the same time,
I don't want to gloss over poor lookup performance in general.
jtc> I'm not familiar with the symbol handling portions of GDB, so I'm
jtc> looking for ideas. Removing the symbol lookups from
jtc> write_dollar_variable() significantly improves performance, but
jtc> doesn't solve the underlying problem.
Stan> Presumably you get a ~8 times speedup by removing the symbol
Stan> lookup. What does profiling say is the most expensive operation
Stan> now?
It still turns out to be lookup_partial_symbol at 85%+. Of course,
it's 85% of a much smaller total. In this case, the symbols are found
in the psymtab and are promoted to real symtab entries.
--jtc
--
J.T. Conklin
RedBack Networks
From srikanth@cup.hp.com Tue Aug 10 12:20:00 1999
From: Srikanth <srikanth@cup.hp.com>
To: Stan Shebs <shebs@cygnus.com>
Cc: jtc@redback.com, gdb@sourceware.cygnus.com
Subject: Re: worst case symbol lookup performance
Date: Tue, 10 Aug 1999 12:20:00 -0000
Message-id: <37B07B68.4F1F@cup.hp.com>
References: <199908101833.LAA21916@andros.cygnus.com>
X-SW-Source: 1999-q3/msg00144.html
Content-length: 1811
Stan Shebs wrote:
>
> From: jtc@redback.com (J.T. Conklin)
> Date: 10 Aug 1999 08:41:20 -0700
>
> I'm not familiar with the symbol handling portions of GDB, so I'm
> looking for ideas. Removing the symbol lookups from write_dollar_-
> variable() significantly improves performance, but doesn't solve the
> underlying problem.
>
> Presumably you get a ~8 times speedup by removing the symbol lookup.
> What does profiling say is the most expensive operation now?
> Srikanth, did you ever look at this issue?
>
> Stan
No. This has not shown up in our profiles so far. As for
symbol lookup performance, we have successfully prototyped a new
scheme, whereby lookups are a lot faster.
Minimal symbol table:
Currently the minimal symbol table is sorted by address only.
In our prototype, we also sort it by the mangled name. So name lookups
are also in logarithmic time. To allow searching the minimal symbol
table by unmangled C++ names, we have come up with a just-in-time
mangling/demangling scheme. Under this scheme, at startup gdb will
not demangle minimal symbol table entries. Instead, when a lookup
happens, we will mangle the signature just enough to get a
reasonably long prefix of the mangled name, use that prefix to
binary-search the table, then demangle the symbols sharing that
prefix and do a full compare.
Partial symbol table:
All our partial symbol tables will be empty. We will use
the minimal symbol table to look up the symbol, and use the address
found there to decide which psymtab to expand. This works in
part because in HP-gdb, the psymtabs contain only the procedure
psymbols. No types, variables etc ... There is one special "globals"
psymtab which contains all variables and types.
Srikanth
From jimb@cygnus.com Tue Aug 10 19:07:00 1999
From: Jim Blandy <jimb@cygnus.com>
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: Re: enum xyz;
Date: Tue, 10 Aug 1999 19:07:00 -0000
Message-id: <npd7wvvzmj.fsf@zwingli.cygnus.com>
References: <199908091549.LAA03778@texas.cygnus.com> <5mwvv4hlsr.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00145.html
Content-length: 1742
I agree with Stan that we should simply remove the non-portable
construct, but just for entertainment:
> In fact, as I wrote this message, I came to realize that incomplete
> enums lose even on machines with single pointer representation. For
> example, in the following code, a compiler that uses different storage
> sizes for enums would not know how much to copy.
>
>
> enum foo;
>
> struct bar {
> enum foo *bar;
> ...
> };
>
> struct baz {
> enum foo *baz;
> ...
> };
>
>
> {
> ...
> *bar->bar == *baz->baz;
> ...
> }
>
> And to think that some years ago I thought it was stupid to have
> incomplete structs and unions and not have incomplete enums...
The GCC manual says:
Incomplete `enum' Types
=======================
You can define an `enum' tag without specifying its possible values.
This results in an incomplete type, much like what you get if you write
`struct foo' without describing the elements. A later declaration
which does specify the possible values completes the type.
You can't allocate variables or storage using the type while it is
incomplete. However, you can work with pointers to that type.
So your example isn't allowed, because you can't dereference a pointer
to an incomplete type. And I'm a little confused about the problems
other folks have reported, because you're not allowed to declare
instances of an incomplete type, so their size can't matter. The case
of incomplete enums is exactly parallel to that of incomplete structs:
you don't know what size the object is.
No?
From ac131313@cygnus.com Tue Aug 10 19:30:00 1999
From: Andrew Cagney <ac131313@cygnus.com>
To: Stan Shebs <shebs@cygnus.com>
Cc: jtc@redback.com, gdb@sourceware.cygnus.com
Subject: Re: enum xyz;
Date: Tue, 10 Aug 1999 19:30:00 -0000
Message-id: <37B0E001.A9BE394F@cygnus.com>
References: <199908092157.OAA18400@andros.cygnus.com>
X-SW-Source: 1999-q3/msg00146.html
Content-length: 756
Stan Shebs wrote:
> We should probably lose the incomplete enum definitions in the
> sources, because they are a portability problem, the problem can be
> solved just by declaring affected functions after the enum's
> definition in value.h, and there aren't very many incomplete enums in
> the GDB sources.
Er, put that can opener down, that is a can of worms you're trying to open
:-)
Incomplete enums (like incomplete structs) are nice as they can help
you avoid some of that #include forest.
If there are really more than one or two such declarations, it might in
fact be better to conditionalize the code on CC_HAS_INCOMPLETE_ENUMS
(assuming autoconf gets an AC_CC_HAS_INCOMPLETE_ENUMS test :-). When not
defined, skip the relevant prototype.
Andrew
From jtc@redback.com Tue Aug 10 21:43:00 1999
From: jtc@redback.com (J.T. Conklin)
To: gdb@sourceware.cygnus.com
Subject: What's with all the Cisco stuff?
Date: Tue, 10 Aug 1999 21:43:00 -0000
Message-id: <5md7wv2aif.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00147.html
Content-length: 1305
What's the deal with all the Cisco-specific stuff ending up in GDB?
We recently saw the introduction of a mutant Cisco variant of the
remote debug protocol (which doesn't offer any advantages over the
existing RDP), and the latest snapshot contains some sort of kernel
object display code for IOS that is bound into every GDB, native or
embedded.
I'll argue that code for _any_ company's in-house systems is not
appropriate to be integrated into GDB. Unlike support for an embedded
OS, which benefits a broad developer community, integrating code for a
closed system benefits only that company, especially as responsibility
for continued maintenance of whatever oddball way of doing things now
falls on the shoulders of future GDB maintainers.
By integrating this code, we have also set up a slippery slope where
similar code from other companies cannot be rejected out of hand. The
nature of embedded systems is that companies are going to have unique
requirements and specialized code that are of use and interest only
within. Is all such code going to be welcomed in GDB? Or only code
from Cygnus' customer list?
At the very least, shouldn't the cisco specific code be explicitly
enabled with --enable-cisco-cruft or some such configure option?
--jtc
--
J.T. Conklin
RedBack Networks
From shebs@cygnus.com Wed Aug 11 12:01:00 1999
From: Stan Shebs <shebs@cygnus.com>
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: Re: What's with all the Cisco stuff?
Date: Wed, 11 Aug 1999 12:01:00 -0000
Message-id: <199908111901.MAA23001@andros.cygnus.com>
References: <5md7wv2aif.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00148.html
Content-length: 5345
From: jtc@redback.com (J.T. Conklin)
Date: 10 Aug 1999 21:42:48 -0700
What's the deal with all the Cisco-specific stuff ending up in GDB?
A fair question. I apologize for not discussing this on the list
previously; this seemed pretty non-controversial compared to some
of the other changes that are going on...
We recently saw the introduction of a mutant Cisco variant of the
remote debug protocol (which doesn't offer any advantages over the
existing RDP), and the latest snapshot contains some sort of kernel
object display code for IOS that is bound into every GDB, native or
embedded.
Actually, the kernel object display mechanism is supposed to be generic
and is thus available for any RTOS that wishes to make use of it. To
answer an implied question in a different message, you can't do this
with user-defined commands alone, because the OS may not let you
access kernel data structures directly, or the representation may
change from one release to the next.
I'll argue that code for _any_ company's in-house systems is not
appropriate to be integrated into GDB. Unlike support for an embedded
OS, which benefits a broad developer community, integrating code for a
closed system benefits only that company, especially as responsibility
for continued maintenance of whatever oddball way of doing things now
falls on the shoulders of future GDB maintainers.
By integrating this code, we have also set up a slippery slope where
similar code from other companies cannot be rejected out of hand. The
nature of embedded systems is that companies are going to have unique
requirements and specialized code that are of use and interest only
within. Is all such code going to be welcomed in GDB? Or only code
from Cygnus' customer list?
That's a little unfair... In actual fact, over the past five years
I have not rejected any contribution because the system in question
was "in-house". There are several reasons for this.
First, there is ample precedent. bfd/cisco-core.c was added by Jim
Kingdon in 1994. remote-array.c is only useful for Array
Technologies, while remote-st.c, which has been in GDB forever, is for
Tandem phone switches. sh3-rom.c is an SH target that I don't think
was ever available to the general public, and you yourself recently
noted that the PowerPC bits in gdb/nlm were for a product that never
saw the light of day.
Second, think about the process of accepting or rejecting these kinds
of additions; how is one supposed to decide whether something is an
"in-house-only" system? I accept patches for systems that I don't
personally have access to, and indeed with no way of knowing whether
they are available to a broad developer community. I'm pretty certain
that there are more Cisco IOS hackers in the world than there are
Mitsubishi d10v and d30v hackers put together, and yet nobody seems to
have a problem with architecture-specific code that is not useful to
anyone outside of Mitsubishi.
Third, I don't want to encourage variant versions of GDB. Divergence
begets more divergence, and results in permanent forks. Unlike GCC,
sooner or later GDB has to be able to get very grubby with the details
of real systems. If this can only be done by patching GDB, then that
means the base sources are incomplete. Ultimately I'd like to see
more modularity for system-specific bits, but we're not there yet, and
I don't want to see users switching to other debuggers because we're
waiting for somebody to write more elegant code.
Fourth, the basic design philosophy for GDB is that it is a bag of
tools for debugging. All we really require is that each tool be
useful to someone, and that each tool doesn't interfere with any of
the other tools. Every version of GDB has an Apollo Domain symbol
reader (dstread.c) compiled in, even though it's highly unlikely that
the average embedded programmer will need it. It just stays out of
the way and unnoticed, but is ready should an Apollo-like set of
symbols ever appear in an object file. When we talk about making a
universal GDB that includes support for all architectures, that's
another expression of the philosophy - why have different GDB builds
for different target architectures, if they can all play nice
together? So by that philosophy, it should be acceptable to have
OS-specific bits in GDB builds, whether or not the OS is common or
popular or on sale at Fry's.
Again, I apologize for not discussing this in public earlier. I've
been operating with the abovementioned philosophy for a while, but
haven't tried to articulate it clearly until now. I suspect that as a
result, you and others have probably been self-censoring on patches
that would actually be quite reasonable for GDB. So yes, I would like
to see any patches that people have been holding back on because they
thought they were too system-specific.
At the very least, shouldn't the cisco specific code be explicitly
enabled with --enable-cisco-cruft or some such configure option?
I did consider this when evaluating Cisco's support bits, and rejected
any changes that would have required a special enable flag. If the
presence of this code is making it difficult or impossible for you to
use GDB, then I can see adding it, but so far I haven't heard of any
usage problems.
Stan
From jimb@cygnus.com Wed Aug 11 19:22:00 1999
From: Jim Blandy <jimb@cygnus.com>
To: jtc@redback.com
Cc: gdb@sourceware.cygnus.com
Subject: Re: worst case symbol lookup performance
Date: Wed, 11 Aug 1999 19:22:00 -0000
Message-id: <npvhalviuq.fsf@zwingli.cygnus.com>
References: <5m4si7fxsv.fsf@jtc.redbacknetworks.com>
X-SW-Source: 1999-q3/msg00149.html
Content-length: 509
Hah. At the GDB meeting I just gave a presentation about GDB's symbol
table structures, in which everyone was astonished at the lack of hash
tables or trees or anything reasonable, to which I happily replied
that lookups were fast enough without them. Well, I guess that didn't
last long.
How many object files do you have? That is, how many entries are
there in your objfile's psymtab list? I don't see why lookup_symbol
should be calling lookup_partial_symbol once per symbol. Twice per
psymtab, yes.
From kevinb@cygnus.com Wed Aug 11 19:52:00 1999
From: Kevin Buettner <kevinb@cygnus.com>
To: gdb@sourceware.cygnus.com
Subject: Temporary breakpoints
Date: Wed, 11 Aug 1999 19:52:00 -0000
Message-id: <199908120252.TAA25463@elmo.cygnus.com>
X-SW-Source: 1999-q3/msg00150.html
Content-length: 2755
Hi all,
I have a question regarding temporary breakpoints...
Under what circumstances should a temporary breakpoint be deleted?
Sounds like a silly question, right? Obviously, it should be deleted
when the breakpoint is hit (provided that any conditions attached to
the breakpoint are met). But what constitutes hitting a breakpoint?
Clearly, running the program or continuing may cause execution to stop
due to the breakpoint. But what about single stepping (either step or
next)?
E.g, suppose the debugger is stopped several instructions (or
statements) prior to the address at which you place a temporary
breakpoint. What should happen when you single step over the
address/statement on which the temporary breakpoint is placed? Should
the breakpoint be deleted? Or should it remain in effect until it is
hit in some fashion that's not due to single stepping?
All recent versions of gdb that I've tried on Linux/x86 will not
remove the temporary breakpoint when you step over it. OTOH, on
Solaris GDB does the opposite: it removes the breakpoint when you
step over it. I
spoke with Stan about this briefly and we agreed that the reason for
this difference in behavior has to do with the fact that the SPARC
architecture doesn't have a hardware single-step, whereas the x86
architecture does.
Due to this inconsistency in behavior, I conclude that GDB will most
likely require some fixing, but I'd like to determine what the desired
behavior should be prior to fixing it.
I have looked at the GDB manual, but, to me at least, there is some
ambiguity about what the expected behavior should be. In particular,
under "Setting breakpoints", it says the following:
tbreak args
Set a breakpoint enabled only for one stop. args are the same
as for the break command, and the breakpoint is set in the
same way, but the breakpoint is automatically deleted after
the first time your program stops there. See Disabling
breakpoints.
Under "Disabling breakpoints", the GDB manual says:
A breakpoint or watchpoint can have any of four different states
of enablement:
[ Descriptions of 'Enabled', 'Disabled', and 'Enabled once' elided ]
* Enabled for deletion
The breakpoint stops your program, but immediately after it
does so it is deleted permanently.
One could argue that on Linux, the program is stopped due to the
hardware single step and not the breakpoint getting hit, so its
behavior is correct. But you can make a similar argument for
Solaris, which doesn't have hardware single stepping. I think it'd
be more useful if gdb behaved in a consistent manner regardless of
whether the architecture supports hardware single stepping.
Opinions?
Kevin
From Guenther.Grau@bosch.com Thu Aug 12 03:40:00 1999
From: Guenther Grau <Guenther.Grau@bosch.com>
To: gdb@sourceware.cygnus.com
Subject: Re: worst case symbol lookup performance
Date: Thu, 12 Aug 1999 03:40:00 -0000
Message-id: <37B2A4CB.1304B2B5@bosch.com>
References: <5m4si7fxsv.fsf@jtc.redbacknetworks.com> <npvhalviuq.fsf@zwingli.cygnus.com>
X-SW-Source: 1999-q3/msg00151.html
Content-length: 1237
Hi,
Jim Blandy wrote:
>
> Hah. At the GDB meeting I just gave a presentation about GDB's symbol
> table structures, in which everyone was astonished at the lack of hash
> tables or trees or anything reasonable, to which I happily replied
> that lookups were fast enough without them. Well, I guess that didn't
> last long.
Oh well, I guess then you are just missing some user feedback :-)
This is, of course, our (the users') fault, not yours.
> How many object files do you have? That is, how many entries are
Hmm, almost too many to count... Our binaries are about 150 MB
in size on the hard disk. This is on Solaris. On HP-UX they are a little
smaller, about 70 to 80 MB. It takes quite a while to look up some
symbols; I have no real measurement data, though. But it's still usable.
What really becomes a problem is trying to get a stacktrace from
a core dump under HP-UX. Reading a core file of 30 MB (from a smaller
process :-) grows gdb to about 250 MB in RAM. (Besides that, it still
often gets the stack wrong :-( But I guess this is an HP problem.
Their old debugger xdb shows even less of the stack and busy-loops
forever while doing a bt. Oh, I don't know if this matters, but
this is mostly C++ code.
Guenther
From gatliff@haulpak.com Thu Aug 12 06:15:00 1999
From: William Gatliff <gatliff@haulpak.com>
To: gdb@sourceware.cygnus.com
Subject: Re: Temporary breakpoints
Date: Thu, 12 Aug 1999 06:15:00 -0000
Message-id: <37B2C8EC.E0DC6057@haulpak.com>
References: <199908120252.TAA25463@elmo.cygnus.com>
X-SW-Source: 1999-q3/msg00152.html
Content-length: 940
Kevin:
> Or should it remain in effect until it is
> hit in some fashion that's not due to single stepping?
I'd go with this one.
Here's why: if I'm stepping in the vicinity of the breakpoint, it's usually
due to something unrelated to the reason the breakpoint was set, and so the
need for the breakpoint is still valid.
I can always delete the breakpoint if I decide I don't want it any more, but if
gdb silently deletes it and then I miss some kind of critical event, then I'm
annoyed.
Of course, gdb could just issue a message notifying the user that it's deleting
the breakpoint. But, from the user's perspective, which is easier: re-entering
the breakpoint that gdb deleted, or deleting the breakpoint that gdb didn't?
For me, it's the latter...
BTW, I'm an embedded developer, and most of my work involves gdb in remote
debugging setups. YMMV.
b.g.
--
William A. Gatliff
Senior Design Engineer
Komatsu Mining Systems