* Tracepoint enhancements
@ 2008-10-31 20:46 Stan Shebs
[not found] ` <490B6CEF.2000003@vmware.com>
2008-11-03 9:12 ` Jeremy Bennett
0 siblings, 2 replies; 16+ messages in thread
From: Stan Shebs @ 2008-10-31 20:46 UTC (permalink / raw)
To: gdb
There is some interest in pumping up GDB's tracepoint capabilities, in
particular to make it more suitable for cross-debugging a target with
serious performance constraints. While a lot of the detail is centered
around making a faster stub and other low-level tweaks, we are finally
going to do MI for tracing, plus it's an opportunity to review the
existing trace commands and consider what interface changes are
desirable. In particular, we will want to think about how tracing should
interoperate with non-stop debugging and multi-process.
So the first question that comes to my mind is: how many people are
actually using the trace commands right now? If they're not being much
used, then we have more flexibility about making user-visible changes.
One possible change to consider is to merge tracepoint setting into
breakpoint setting. Among other benefits would be a single numbering
scheme for breakpoints and tracepoints, plus we will be able to share
some machinery and make things more consistent.
A bigger change would be to introduce a general notion of execution
history, which could subsume fork checkpoints and trace snapshots, maybe
tie into some versions of reverse debugging as well.
What else should we be thinking about doing?
(There are of course all kinds of implementation-level changes to make,
but at the moment I'm focussed on the user experience.)
Stan
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: Tracepoint enhancements
[not found] ` <490B6CEF.2000003@vmware.com>
@ 2008-11-01 8:40 ` Vladimir Prus
2008-11-03 18:20 ` Michael Snyder
[not found] ` <Pine.LNX.4.58.0811060523150.8468@vlab.hofr.at>
2008-11-03 6:38 ` Jakob Engblom
1 sibling, 2 replies; 16+ messages in thread
From: Vladimir Prus @ 2008-11-01 8:40 UTC (permalink / raw)
To: gdb
Michael Snyder wrote:
>> One possible change to consider is to merge tracepoint setting into
>> breakpoint setting. Among other benefits would be a single numbering
>> scheme for breakpoints and tracepoints, plus we will be able to share
>> some machinery and make things more consistent.
>
> Just my personal opinion, I would find that confusing.
>
> It seems useful to maintain a fairly sharp distinction
> between breakpoints and tracepoints, since their behavior
> is entirely different from both the implementation and the
> user's point of view.
>
> But I would not plan to make a fuss about it...
I think breakpoints and tracepoints have a lot in common.
First of all, the logic of resolving a location specification to addresses
is, conceptually, the same. Right now, breakpoints in constructors and
template functions work. Tracepoints don't seem to, because they fail
to use the multi-location breakpoint mechanisms. Tracepoints don't have
conditions -- which is something we want to fix -- and handling of conditions
is a bit tricky too. Breakpoints in shared libraries work just fine --
and tracepoints should work too -- but they don't use the pending
breakpoint mechanisms.
On the interface (MI) level, breakpoints and tracepoints are essentially
the same. Breakpoints allow the user, or the frontend, to do something at
specific points of the program. That something can very well be printing
variables. In fact, KDevelop does have "tracing" functionality for
breakpoints -- where on a breakpoint hit, selected variables are printed
and execution resumes. Tracepoints are exactly the same, except that:
- they are more efficient
- they don't require the frontend to be involved, because to be efficient,
they are entirely stub-side
So it makes perfect sense to treat tracepoints as specially-optimized versions
of breakpoints. In order for a breakpoint to be optimized like this, its
command list should use only a limited set of commands, and end with
'continue'.
> One more thing, only vaguely related...
>
> I've thought that if we had the ability to attach an expression
> (in pcode such as we use for tracepoints) to a conditional breakpoint,
> we could have the conditional evaluation be done on the target
> rather than by gdb, which would be a big performance win for
> conditional breakpoints or watchpoints.
Yes. We want conditional tracepoints, and the condition would have to be evaluated
on the target. And if breakpoints and tracepoints are unified, both breakpoints and
tracepoints will benefit.
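A much-simplified model of what evaluating a condition on the target involves (the opcode names below are illustrative only, not the real agent-expression encoding):

```python
# Toy stack machine in the spirit of GDB's agent expressions: the
# condition is compiled into a tiny program the stub can run without a
# round trip to GDB. Opcodes here are made up for illustration.
def eval_condition(program, regs):
    stack = []
    for op, *args in program:
        if op == "const":
            stack.append(args[0])          # push a literal
        elif op == "reg":
            stack.append(regs[args[0]])    # push a register value
        elif op == "gtr":
            b, a = stack.pop(), stack.pop()
            stack.append(1 if a > b else 0)
    return bool(stack.pop())

# Condition "r0 > 10" compiled as: push r0, push 10, compare.
program = [("reg", 0), ("const", 10), ("gtr",)]
print(eval_condition(program, {0: 42}))   # True
print(eval_condition(program, {0: 3}))    # False
```

The point is that the same compiled condition could gate a stop (conditional breakpoint) or a collect (conditional tracepoint), which is why unification helps both.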
- Volodya
* RE: Tracepoint enhancements
[not found] ` <490B6CEF.2000003@vmware.com>
2008-11-01 8:40 ` Vladimir Prus
@ 2008-11-03 6:38 ` Jakob Engblom
2008-11-03 18:27 ` Michael Snyder
1 sibling, 1 reply; 16+ messages in thread
From: Jakob Engblom @ 2008-11-03 6:38 UTC (permalink / raw)
To: 'Michael Snyder', 'Stan Shebs'; +Cc: gdb
> > One possible change to consider is to merge tracepoint setting into
> > breakpoint setting. Among other benefits would be a single numbering
> > scheme for breakpoints and tracepoints, plus we will be able to share
> > some machinery and make things more consistent.
>
> Just my personal opinion, I would find that confusing.
>
> It seems useful to maintain a fairly sharp distinction
> between breakpoints and tracepoints, since their behavior
> is entirely different from both the implementation and the
> user's point of view.
>
> But I would not plan to make a fuss about it...
In a simulator, they might be the same. In both cases, the main mechanism is
noting that you reach a certain place in the code, or read or write some
memory position. Whether you then note it down and continue, stop execution,
or call some callback does not matter. So they can be very much the same.
> > A bigger change would be to introduce a general notion of execution
> > history, which could subsume fork checkpoints and trace snapshots, maybe
> > tie into some versions of reverse debugging as well.
>
> That could be interesting to talk about.
>
> Right now, I think checkpoints are only implemented for native
> linux, and maybe a few other (native) targets. Whereas tracepoints
> are traditionally associated with remote targets.
>
> I am very interested in defining a remote protocol that could
> tell the remote target "take a checkpoint" or "restore to a
> checkpoint". Ideally it should be entirely agnostic about how
> a checkpoint is actually implemented.
If by checkpoint you mean "some point inside the execution of a single program",
this is also a nice fit with simulators (and I presume VMware as well, if we use
its snapshotting ability for this). I think this is a very good idea that works
very well with a smart remote target.
> I talked about this with somebody once (can't remember who),
> but I remember the discussion got hung up over whether gdb or
> the target should actually manage the list of checkpoint IDs.
>
> My thinking is that gdb will probably want to number them with
> simple ordinal numbers (1, 2, 3...) like breakpoints, but that
> the target may have a different type of ID in mind (such as
> process/fork IDs), and somebody will have to maintain a mapping.
The target might have its own interface for looking at such checkpoints... so I
think passing name strings makes the most sense. In Simics, for example,
bookmarks, as we call them, have names, and that is how we work with them.
> Not very different from threads, actually...
I think it is. It is a snapshot of the system state that you can go back to,
not really a thread. Only if you consider the odd Linux implementation with
fork et al. are they the same.
Best regards,
/jakob
_______________________________________________________
Jakob Engblom, PhD, Technical Marketing Manager
Virtutech Direct: +46 8 690 07 47
Drottningholmsvägen 14 Mobile: +46 709 242 646
11243 Stockholm Web: www.virtutech.com
Sweden
________________________________________________________
* Re: Tracepoint enhancements
2008-10-31 20:46 Tracepoint enhancements Stan Shebs
[not found] ` <490B6CEF.2000003@vmware.com>
@ 2008-11-03 9:12 ` Jeremy Bennett
2008-11-04 21:26 ` Stan Shebs
1 sibling, 1 reply; 16+ messages in thread
From: Jeremy Bennett @ 2008-11-03 9:12 UTC (permalink / raw)
To: gdb
On Fri, 2008-10-31 at 12:57 -0700, Stan Shebs wrote:
> There is some interest in pumping up GDB's tracepoint capabilities, in
> particular to make it more suitable for cross-debugging a target with
> serious performance constraints. While a lot of the detail is centered
> around making a faster stub and other low-level tweaks, we are going to
> do MI for tracing finally, plus it's an opportunity to review the
> existing trace commands and consider what interface changes are
> desirable. In particular, we will want to think about how tracing should
> interoperate with non-stop debugging and multi-process.
>
> So the first question that comes to my mind is: how many people are
> actually using the trace commands right now? If they're not being much
> used, then we have more flexibility about making user-visible changes.
I've been working on the OpenRISC 1000, which has hardware trace
support. No one has yet complained that I dropped trace functionality
from GDB 6.8 for OpenRISC, so I guess it's not currently in use by that
user community.
> One possible change to consider is to merge tracepoint setting into
> breakpoint setting. Among other benefits would be a single numbering
> scheme for breakpoints and tracepoints, plus we will be able to share
> some machinery and make things more consistent.
I'd strongly encourage a uniform reference scheme. Not necessarily just
numbers - something richer may be needed in complex environments. This
should work for ANY target covering breakpoints, watchpoints,
catchpoints, tracepoints etc.
This ties in with your work on multiprocess/multiprogram support. A
debugging target might be a complex SoC with multiple heterogeneous
processor cores together with peripherals having substantial state and
processing power. Eventually GDB should be able to handle all of this
consistently.
This will require a standard way of addressing ANY part of such a target
- not just within one processor - and turning it into a unique reference
for GDB. For example I could specify a watchpoint on internal state of a
peripheral, asking for execution to stop (on some or all
threads/processes/processors/peripherals) if that internal state
changed.
At some stage a general way of linking the reference to a complex
specification will be needed. I am not sure that "condition" and
"break ... if" are sufficient. They certainly will need to reference
multiple threads and target functional units.
> A bigger change would be to introduce a general notion of execution
> history, which could subsume fork checkpoints and trace snapshots, maybe
> tie into some versions of reverse debugging as well.
Which also requires a way of specifying what execution you are talking
about. A uniform way of addressing potentially hundreds of thousands of
threads of control individually and in arbitrary groupings.
Some of this is a long way in the future, but I hope it provides a
context for thinking about changes to GDB today.
> What else should we be thinking about doing?
>
> (There are of course all kinds of implementation-level changes to make,
> but at the moment I'm focussed on the user experience.)
Keep up the good work :-)
HTH,
Jeremy
--
Tel: +44 (1202) 416955
Cell: +44 (7970) 676050
SkypeID: jeremybennett
Email: jeremy.bennett@embecosm.com
Web: www.embecosm.com
* Re: Tracepoint enhancements
2008-11-01 8:40 ` Vladimir Prus
@ 2008-11-03 18:20 ` Michael Snyder
2008-11-04 21:17 ` Stan Shebs
[not found] ` <Pine.LNX.4.58.0811060523150.8468@vlab.hofr.at>
1 sibling, 1 reply; 16+ messages in thread
From: Michael Snyder @ 2008-11-03 18:20 UTC (permalink / raw)
To: Vladimir Prus; +Cc: gdb
Vladimir Prus wrote:
> Michael Snyder wrote:
>> One more thing, only vaguely related...
>>
>> I've thought that if we had the ability to attach an expression
>> (in pcode such as we use for tracepoints) to a conditional breakpoint,
>> we could have the conditional evaluation be done on the target
>> rather than by gdb, which would be a big performance win for
>> conditional breakpoints or watchpoints.
>
> Yes. We want conditional tracepoints, and the condition would have to be evaluated
> on the target. And if breakpoints and tracepoints are unified, both breakpoints and
> tracepoints will benefit.
Very good point. OK, you've convinced me.
* Re: Tracepoint enhancements
2008-11-03 6:38 ` Jakob Engblom
@ 2008-11-03 18:27 ` Michael Snyder
2008-11-03 18:53 ` Jakob Engblom
0 siblings, 1 reply; 16+ messages in thread
From: Michael Snyder @ 2008-11-03 18:27 UTC (permalink / raw)
To: Jakob Engblom; +Cc: 'Stan Shebs', gdb
Jakob Engblom wrote:
>>> One possible change to consider is to merge tracepoint setting into
>>> breakpoint setting. Among other benefits would be a single numbering
>>> scheme for breakpoints and tracepoints, plus we will be able to share
>>> some machinery and make things more consistent.
>> Just my personal opinion, I would find that confusing.
>>
>> It seems useful to maintain a fairly sharp distinction
>> between breakpoints and tracepoints, since their behavior
>> is entirely different from both the implementation and the
>> user's point of view.
>>
>> But I would not plan to make a fuss about it...
>
> In a simulator, they might be the same. In both cases, the main mechanism is
> noting that you reach a certain place in the code or read or write som memory
> position. Whether you then note it down and continue or stop execution or call
> some callback does not matter. So they can be very much the same.
>
>>> A bigger change would be to introduce a general notion of execution
>>> history, which could subsume fork checkpoints and trace snapshots, maybe
>>> tie into some versions of reverse debugging as well.
>> That could be interesting to talk about.
>>
>> Right now, I think checkpoints are only implemented for native
>> linux, and maybe a few other (native) targets. Whereas tracepoints
>> are traditionally associated with remote targets.
>>
>> I am very interested in defining a remote protocol that could
>> tell the remote target "take a checkpoint" or "restore to a
>> checkpoint". Ideally it should be entirely agnostic about how
>> a checkpoint is actually implemented.
>
> If by checkpoint you mean "some point inside the execution of a single program"
> this is also a nice fit with simulators (and I presume VmWare as well, if we use
> its snapshotting ability for this). I think this is a very good idea that works
> very well with a smart remote target.
Yes, that's what I meant. A "point in time" in the execution
history, something that could be represented, e.g., by a cycle count
or instruction count, rather than just by a PC.
Something corresponding to a snapshot or bookmark.
>> I talked about this with somebody once (can't remember who),
>> but I remember the discussion got hung up over whether gdb or
>> the target should actually manage the list of checkpoint IDs.
>>
>> My thinking is that gdb will probably want to number them with
>> simple ordinal numbers (1, 2, 3...) like breakpoints, but that
>> the target may have a different type of ID in mind (such as
>> process/fork IDs), and somebody will have to maintain a mapping.
>
> The target might have its own interface for looking at such checkpoints... so I
> think passing name strings make the most sense. In Simics, for example,
> bookmarks as we call them have names and that is how we work with them.
Right -- so for you, an internal representation might look like a string.
For VMware, it would look like a pair of integers. If we did an
implementation in Linux gdbserver, in which gdbserver did the "fork
trick" (like gdb does now), then the internal representation would
be a process ID.
But for all of these, gdb might keep an external representation
that just looks like a counting integer -- as it does for breakpoints
and threads. That way the user would have a common interface
(e.g. "restore 3"), no matter which target.
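That mapping could be sketched like this (purely illustrative names, not GDB source):

```python
# Sketch: a GDB-side table mapping user-visible ordinal checkpoint
# numbers to whatever opaque ID the target uses -- a Simics bookmark
# name, a VMware snapshot pair, a forked PID, etc.
class CheckpointTable:
    def __init__(self):
        self._next = 1
        self._by_ordinal = {}

    def record(self, target_id):
        """The target took a checkpoint; assign it the next ordinal."""
        num = self._next
        self._next += 1
        self._by_ordinal[num] = target_id
        return num

    def target_id(self, ordinal):
        """Translate a user command like 'restore 2' to the target's ID."""
        return self._by_ordinal[ordinal]

table = CheckpointTable()
table.record("bookmark-start")   # ordinal 1
table.record(("vm", 7, 0))       # ordinal 2
print(table.target_id(2))        # ('vm', 7, 0)
```

The remote protocol would then only ever carry the target's own IDs, keeping GDB agnostic about how checkpoints are implemented.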
>> Not very different from threads, actually...
>
> I think it is. It is a snapshot of the system state that you can back to, not
> really a thread. Only if you consider the odd Linux implementating with fork et
> al are they the same.
Sorry, I just meant "like threads in that we have a counting
integer representation on the GDB side, even though there are
various internal representations on the target side".
* RE: Tracepoint enhancements
2008-11-03 18:27 ` Michael Snyder
@ 2008-11-03 18:53 ` Jakob Engblom
2008-11-03 19:23 ` Michael Snyder
0 siblings, 1 reply; 16+ messages in thread
From: Jakob Engblom @ 2008-11-03 18:53 UTC (permalink / raw)
To: 'Michael Snyder'; +Cc: 'Stan Shebs', gdb
> > If by checkpoint you mean "some point inside the execution of a single program"
> > this is also a nice fit with simulators (and I presume VMware as well, if we use
> > its snapshotting ability for this). I think this is a very good idea that
> > works very well with a smart remote target.
>
> Yes, that's what I meant. A "point in time" in the execution
> history, something that could be represented eg. by a cycle count
> or instruction count, rather than just by a PC.
I think it is a bad idea to assume there is only one time or one instruction
count in the target. It could be a multicore target with lots of CPUs running
around... so let the backend handle that in a symbolic way rather than
assuming anything about what it means.
> > The target might have its own interface for looking at such checkpoints... so I
> > think passing name strings make the most sense. In Simics, for example,
> > bookmarks as we call them have names and that is how we work with them.
>
> Right -- so for you an internal representation might look like a string.
> For VMware, it would look like a pair of integers. If we did an
> implementation linux gdbserver, in which gdbserver did the "fork
> trick" (like gdb does now), then the internal representation would
> be a process ID.
>
> But for all of these, gdb might keep an external representation
> that just looked like a counting integer -- as it does for breakpoints
> and threads. That way the user would have a common interface
> (eg. "restore 3"), no matter which target.
That is a decent idea.
> >> Not very different from threads, actually...
> >
> > I think it is. It is a snapshot of the system state that you can go back to,
> > not really a thread. Only if you consider the odd Linux implementation with
> > fork et al. are they the same.
>
> Sorry, I just meant "like threads in that we have a counting
> integer representation on the GDB side, even though there are
> various internal representations on the target side".
Sorry, misunderstood. Thanks.
/jakob
* Re: Tracepoint enhancements
2008-11-03 18:53 ` Jakob Engblom
@ 2008-11-03 19:23 ` Michael Snyder
2008-11-04 14:00 ` Jakob Engblom
2008-11-04 21:37 ` Stan Shebs
0 siblings, 2 replies; 16+ messages in thread
From: Michael Snyder @ 2008-11-03 19:23 UTC (permalink / raw)
To: Jakob Engblom; +Cc: 'Stan Shebs', gdb
Jakob Engblom wrote:
>>> If by checkpoint you mean "some point inside the execution of a single program"
>>> this is also a nice fit with simulators (and I presume VMware as well, if we use
>>> its snapshotting ability for this). I think this is a very good idea that
>>> works very well with a smart remote target.
>> Yes, that's what I meant. A "point in time" in the execution
>> history, something that could be represented eg. by a cycle count
>> or instruction count, rather than just by a PC.
>
> I think that is a bad idea to assume there is only one time or one instruction
> count in the target. It could be a multicore target with lots of CPUs running
> around... so let the backend handle that in a symbolic way rather than assume
> anything about what it means.
Right, OK. But it was a mental assumption rather than an
implementation assumption.
I think the idea we're both getting at is that a checkpoint
represents a machine state. If there are multiple machines,
that complicates the picture -- but basically gdb is saying
to the target "I want to be able to return to the state that
you are in *right now*".
* RE: Tracepoint enhancements
2008-11-03 19:23 ` Michael Snyder
@ 2008-11-04 14:00 ` Jakob Engblom
2008-11-04 21:37 ` Stan Shebs
1 sibling, 0 replies; 16+ messages in thread
From: Jakob Engblom @ 2008-11-04 14:00 UTC (permalink / raw)
To: 'Michael Snyder'; +Cc: 'Stan Shebs', gdb
> > I think that is a bad idea to assume there is only one time or one
> > instruction count in the target. It could be a multicore target with lots
> > of CPUs running around... so let the backend handle that in a symbolic way
> > rather than assume anything about what it means.
>
> Right, OK. But it was a mental assumption rather than an
> implementation assumption.
>
> I think the idea we're both getting at is that a checkpoint
> represents a machine state. If there are multiple machines,
> that complicates the picture -- but basically gdb is saying
> to the target "I want to be able to return to the state that
> you are in *right now*".
I think the "thing at the other end of the remote connection" is what gdb should
debug, long-term. And in a virtualized and simulated world, that can be quite a
few machines. What might also become interesting is if people attach multiple
gdbs to a single simulation -- with Simics, we do that quite often to debug
software running on mixed networks of machines, and multiple programs on a
single machine.
But supporting heterogeneous network debugging feels way beyond the scope of gdb.
Best regards,
/jakob
_______________________________________________________
Jakob Engblom, PhD, Technical Marketing Manager
Virtutech Direct: +46 8 690 07 47
Drottningholmsvägen 14 Mobile: +46 709 242 646
11243 Stockholm Web: www.virtutech.com
Sweden
________________________________________________________
* Re: Tracepoint enhancements
2008-11-03 18:20 ` Michael Snyder
@ 2008-11-04 21:17 ` Stan Shebs
2008-11-05 7:14 ` Vladimir Prus
0 siblings, 1 reply; 16+ messages in thread
From: Stan Shebs @ 2008-11-04 21:17 UTC (permalink / raw)
To: Michael Snyder; +Cc: Vladimir Prus, gdb
Michael Snyder wrote:
> Vladimir Prus wrote:
>> Michael Snyder wrote:
>
>>> One more thing, only vaguely related...
>>>
>>> I've thought that if we had the ability to attach an expression
>>> (in pcode such as we use for tracepoints) to a conditional breakpoint,
>>> we could have the conditional evaluation be done on the target
>>> rather than by gdb, which would be a big performance win for
>>> conditional breakpoints or watchpoints.
>>
>> Yes. We want conditional tracepoints, and the condition would have to
>> be evaluated
>> on the target. And if breakpoints and tracepoints are unified, both
>> breakpoints and
>> tracepoints will benefit.
>
> Very good point. OK, you've convinced me.
I shall proceed on the assumption that we will make a tracepoint a kind
of breakpoint. This means we no longer need the special
enable/disable/delete commands. I think the original "trace" command
should remain as-is, and I'm also inclined to leave "actions" alone for
the moment, rather than try to merge with "commands"; while there could
be some useful unification, it seems like more of a sweeping change to
try to decide, for every command, whether it could be part of a
tracepoint action or not. We then get tracepoint conditions via "condition"
and "trace ... if". It's not clear whether "info tracepoints" should stick
around as a subset of "info breakpoints".
Ignore counts vs. pass counts still mystify me a bit. They seem
conceptually similar (modulo the sense inversion), but the documentation
for pass counts makes it seem as though one might expect all tracing and
all tracepoints to be disabled once a pass count is exceeded for any one
of them -- and I can see where that might be the desired behavior, vs. the
per-breakpoint control of ignore counts.
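The contrast as I read it, stated operationally (treat the exact semantics as my assumption, not a spec):

```python
# Rough operational contrast, as a sketch. An ignore count suppresses the
# first N hits of ONE breakpoint; a pass count ends the WHOLE trace
# experiment once ONE tracepoint has been passed its N times.
def breakpoint_stops(hit_number, ignore_count):
    """hit_number is 1-based; the first ignore_count hits don't stop."""
    return hit_number > ignore_count

def trace_still_running(hits_so_far, pass_count):
    """Tracing (for every tracepoint) ends once this one reaches its pass count."""
    return hits_so_far < pass_count

print(breakpoint_stops(3, ignore_count=5))    # False: hit still ignored
print(breakpoint_stops(6, ignore_count=5))    # True: past the ignore count
print(trace_still_running(2, pass_count=2))   # False: trace experiment over
```

So the sense inversion is not the only difference: the scope of the effect (one breakpoint vs. the whole trace run) differs too.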
Stan
* Re: Tracepoint enhancements
2008-11-03 9:12 ` Jeremy Bennett
@ 2008-11-04 21:26 ` Stan Shebs
0 siblings, 0 replies; 16+ messages in thread
From: Stan Shebs @ 2008-11-04 21:26 UTC (permalink / raw)
To: jeremy.bennett; +Cc: gdb
Jeremy Bennett wrote:
> On Fri, 2008-10-31 at 12:57 -0700, Stan Shebs wrote:
>
>> A bigger change would be to introduce a general notion of execution
>> history, which could subsume fork checkpoints and trace snapshots, maybe
>> tie into some versions of reverse debugging as well.
>>
>
> Which also requires a way of specifying what execution you are talking
> about. A uniform way of addressing potentially hundreds of thousands of
> threads of control individually and in arbitrary groupings.
>
>
The "inferior/thread set" syntax for multiprocess GDB has the ability to
do numerical ranges and unions and such, so it gets at least part of the
way there. One of the things that struck me about TotalView is that they
introduced dozens of special-purpose predicates as well -
"system-created lwps that were locked out but are now runnable and yet
haven't run yet". :-) I suspect that practical usage with GDB will
demonstrate that many of those are not as silly as they sound, and we'll
be wanting our own versions!
Stan
* Re: Tracepoint enhancements
2008-11-03 19:23 ` Michael Snyder
2008-11-04 14:00 ` Jakob Engblom
@ 2008-11-04 21:37 ` Stan Shebs
2008-11-04 21:58 ` Michael Snyder
2008-11-05 9:04 ` Jakob Engblom
1 sibling, 2 replies; 16+ messages in thread
From: Stan Shebs @ 2008-11-04 21:37 UTC (permalink / raw)
To: Michael Snyder; +Cc: Jakob Engblom, 'Stan Shebs', gdb
Michael Snyder wrote:
> [...] a checkpoint
> represents a machine state. If there are multiple machines,
> that complicates the picture -- but basically gdb is saying
> to the target "I want to be able to return to the state that
> you are in *right now*".
Hmm, that is a significant wrinkle to the execution history theory.
Basically it's not possible to know reliably whether the execution state
of one CPU comes sooner or later than the state of another CPU - their
clocks can't be guaranteed to be sync'ed to a sub-instruction level.
It's a little like a distributed version control system, where each
repository has its own version numbers, and any ordering derives from
explicit push/pull instructions. Each inferior can have a reliable
execution history, but if you want to go back to state X on CPU 1, you
either just affect the one inferior, or expect that other inferiors will
go back to the closest available state in their histories.
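The "closest available state" lookup could be sketched like this (illustrative only; in reality the inferiors' histories would not share a common sequence at all, which is exactly the problem):

```python
# Sketch: each inferior keeps its own ordered history of checkpoints
# (modeled here as local sequence numbers). Restoring to state X on one
# inferior maps the others to the latest checkpoint they have that is
# not newer than X.
import bisect

def closest_state(history, requested):
    """Latest checkpoint in this inferior's sorted history <= requested."""
    i = bisect.bisect_right(history, requested)
    return history[i - 1] if i else None

cpu1 = [10, 20, 30]
cpu2 = [5, 18, 40]
print(closest_state(cpu1, 20))   # 20: exact match on CPU 1
print(closest_state(cpu2, 20))   # 18: nearest earlier state on CPU 2
```

Like the distributed version-control analogy, any cross-CPU ordering would have to come from explicit synchronization points, not from the counters themselves.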
Stan
* Re: Tracepoint enhancements
2008-11-04 21:37 ` Stan Shebs
@ 2008-11-04 21:58 ` Michael Snyder
2008-11-05 9:04 ` Jakob Engblom
1 sibling, 0 replies; 16+ messages in thread
From: Michael Snyder @ 2008-11-04 21:58 UTC (permalink / raw)
To: Stan Shebs; +Cc: Jakob Engblom, gdb
Stan Shebs wrote:
> Michael Snyder wrote:
>> [...] a checkpoint
>> represents a machine state. If there are multiple machines,
>> that complicates the picture -- but basically gdb is saying
>> to the target "I want to be able to return to the state that
>> you are in *right now*".
> Hmm, that is a significant wrinkle to the execution history theory.
> Basically it's not possible to know reliably whether the execution state
> of one CPU comes sooner or later than the state of another CPU - their
> clocks can't be guaranteed to be sync'ed to a sub-instruction level.
> It's a little like a distributed version control system, where each
> repository has its own version numbers, and any ordering derives from
> explicit push/pull instructions. Each inferior can have a reliable
> execution history, but if you want to go back to state X on CPU 1, you
> either just affect the one inferior, or expect that other inferiors will
> go back to the closest available state in their histories.
All true. I think this sub-branch of the discussion
was mostly blue-sky.
* Re: Tracepoint enhancements
2008-11-04 21:17 ` Stan Shebs
@ 2008-11-05 7:14 ` Vladimir Prus
0 siblings, 0 replies; 16+ messages in thread
From: Vladimir Prus @ 2008-11-05 7:14 UTC (permalink / raw)
To: Stan Shebs; +Cc: Michael Snyder, gdb
On Wednesday 05 November 2008 00:16:33 Stan Shebs wrote:
> Michael Snyder wrote:
> > Vladimir Prus wrote:
> >> Michael Snyder wrote:
> >
> >>> One more thing, only vaguely related...
> >>>
> >>> I've thought that if we had the ability to attach an expression
> >>> (in pcode such as we use for tracepoints) to a conditional breakpoint,
> >>> we could have the conditional evaluation be done on the target
> >>> rather than by gdb, which would be a big performance win for
> >>> conditional breakpoints or watchpoints.
> >>
> >> Yes. We want conditional tracepoints, and the condition would have to
> >> be evaluated
> >> on the target. And if breakpoints and tracepoints are unified, both
> >> breakpoints and
> >> tracepoints will benefit.
> >
> > Very good point. OK, you've convinced me.
> I shall proceed on the assumption that we will make a tracepoint a kind
> of breakpoint. This means we no longer need the special
> enable/disable/delete commands. I think the original "trace" command
> should remain as-is, and I'm also inclined to leave "actions" alone for
> the moment, rather than try to merge with "commands"; while there could
> be some useful unification, it seems like more of a sweeping change to
> try to decide for every command, whether it could be part of a
> tracepoint action or not.
There's no need to do that. 'collect', 'while-stepping' and 'continue'
can be part of the command set that is supported on the target side; other
commands cannot.
From the MI standpoint, I disagree with having independent 'commands' and
'actions' properties -- this makes no sense in the long run, and MI is not
supposed to break behaviour at random, so we cannot implement 'actions' now
and take it back later.
> Ignore counts vs passcounts still mystify me a bit. They seem
> conceptually similar (modulo the sense inversion), but the documentation
> for passcounts makes it seems as though one might expect all tracing and
> all tracepoints to be disabled once a passcount is exceeded for any one
> of them - and I see where that might be the desired behavior, vs the
> per-breakpoint control of ignore counts.
Yeah, this is one bit that needs further design.
- Volodya
* RE: Tracepoint enhancements
2008-11-04 21:37 ` Stan Shebs
2008-11-04 21:58 ` Michael Snyder
@ 2008-11-05 9:04 ` Jakob Engblom
1 sibling, 0 replies; 16+ messages in thread
From: Jakob Engblom @ 2008-11-05 9:04 UTC (permalink / raw)
To: 'Stan Shebs', 'Michael Snyder'; +Cc: gdb
> > [...] a checkpoint
> > represents a machine state. If there are multiple machines,
> > that complicates the picture -- but basically gdb is saying
> > to the target "I want to be able to return to the state that
> > you are in *right now*".
> Hmm, that is a significant wrinkle to the execution history theory.
> Basically it's not possible to know reliably whether the execution state
> of one CPU comes sooner or later than the state of another CPU - their
> clocks can't be guaranteed to be sync'ed to a sub-instruction level.
> It's a little like a distributed version control system, where each
> repository has its own version numbers, and any ordering derives from
> explicit push/pull instructions. Each inferior can have a reliable
> execution history, but if you want to go back to state X on CPU 1, you
> either just affect the one inferior, or expect that other inferiors will
> go back to the closest available state in their histories.
This is true on most physical hardware, but it can often be worked around in
simulators. So I would strongly suggest letting the backend worry about that.
In a system such as Simics, or another complete simulation solution, this is
indeed feasible, since the simulator imposes a defined semantics on the
execution of the simulated system.
Or imagine connecting to a cycle-by-cycle simulation or emulation solution such
as a HAPS, Palladium, or ZeBu system -- such solutions can stop synchronously
on a single cycle. Also, on-chip trace hardware is appearing that can do
synchronized time-stamping of events from an entire SoC, exported over some
hardware debug port.
So while not always true, there are several cases where you can indeed have a
synchronized history and control over parallelism. All you need is to insert a
layer of indirection that converts parallel execution into some kind of known
sequential execution.
Best regards,
/jakob
_______________________________________________________
Jakob Engblom, PhD, Technical Marketing Manager
Virtutech Direct: +46 8 690 07 47
Drottningholmsvägen 14 Mobile: +46 709 242 646
11243 Stockholm Web: www.virtutech.com
Sweden
________________________________________________________
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: Tracepoint enhancements
[not found] ` <Pine.LNX.4.58.0811060523150.8468@vlab.hofr.at>
@ 2008-11-06 18:19 ` Vladimir Prus
0 siblings, 0 replies; 16+ messages in thread
From: Vladimir Prus @ 2008-11-06 18:19 UTC (permalink / raw)
To: Nicholas Mc Guire; +Cc: gdb
On Thursday 06 November 2008 17:38:38 Nicholas Mc Guire wrote:
>
> > Michael Snyder wrote:
> >
> > >> One possible change to consider is to merge tracepoint setting into
> > >> breakpoint setting. Among other benefits would be a single numbering
> > >> scheme for breakpoints and tracepoints, plus we will be able to share
> > >> some machinery and make things more consistent.
>
> we have implemented tracepoints for gdb 6.3-6.6, and it might help to play with
> them before redesigning things - in our implementation we did not merge
> tracepoints and breakpoints, as doing so makes things quite complicated, i.e.
> breakpoints allow multiple breakpoints at the same address - for tracepoints
> this makes little sense.
I don't see any fundamental problem here.
> the other issue is that you don't care about call overhead on breakpoints,
> but you do on tracepoints, so you probably don't want too much
> runtime searching going on for tracepoints.
What "runtime searching" do you have in mind?
>
> ftp://dslab.lzu.edu.cn/pub/gdb_tracepoints
>
>
>
> > >
> > > Just my personal opinion, I would find that confusing.
>
> and it would make scripting complicated as well.
Why?
>
> > >
> > > It seems useful to maintain a fairly sharp distinction
> > > between breakpoints and tracepoints, since their behavior
> > > is entirely different from both the implementation and the
> > > user's point of view.
> > >
> > > But I would not plan to make a fuss about it...
> >
> > I think breakpoints and tracepoints have a great deal in common.
> >
> > First of all, the logic of resolving a location specification to addresses
> > is conceptually the same. Right now, breakpoints in constructors and
> > template functions work. Tracepoints don't seem to, because they fail
> > to use the multi-location breakpoint mechanisms. Tracepoints don't have
> > conditions -- which is something we want to fix -- and handling of conditions
> > is a bit tricky too. Breakpoints in shared libraries work just fine --
> > and tracepoints should work too -- but they don't use the pending-breakpoint
> > mechanisms.
> >
>
> one simple way of handling conditional stuff would be to put it into the
> bytecode - you would incur the overhead of calling the stub,
What do you mean by "calling the stub"? I think tracepoints should work
entirely on the target, without any interaction with gdb.
> but that
> might actually be more convenient with respect to temporal distortion. The
> overhead of tracepoints is quite considerable, and thus conditional
> breakpoints (notably in multithreaded apps) need to be placed
> "synchronously" so as not to cause excessive distortion (at least our
> implementation showed a considerable overhead - but you might find a way to
> do better).
What do you mean by "synchronously"? Note that gdb can now keep breakpoints
inserted in the target at all times, so the insertion/removal overhead can
be eliminated that way.
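To make the bytecode idea concrete: under the design being discussed, a
condition attached to a tracepoint would be compiled by gdb into an
agent-expression and downloaded once, so the stub evaluates it with no
per-hit round trip to the host. A sketch of the kind of syntax under
discussion (not implemented at the time of this thread; location and
variable names are hypothetical):

```text
(gdb) trace server.c:121 if nclients > 10
(gdb) tstart            # the condition is evaluated entirely in the stub;
                        # data is collected only on hits where nclients > 10
```

The trade-off raised above is real: on-target evaluation costs stub cycles at
every hit, but avoids stopping the program and talking to gdb.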
> > On the interface (MI) level, breakpoints and tracepoints are essentially
> > the same. Breakpoints allow the user, or the frontend, to do something at
> > specific points of the program. That something can very well be printing
> > variables. In fact, KDevelop does have a "tracing" functionality for
> > breakpoints -- on a breakpoint hit, selected variables are printed and
> > execution resumes.
> > Tracepoints are exactly the same, except that:
> >
> > - they are more efficient
>
> they are actually less efficient on the stub side, as they need to use
> bytecode to get hold of compound statements, and that is definitely less
> efficient on the target than on the host.
Ok, "they are more efficient provided the expression is small enough". For small
expressions, a round trip between target and host will take more time than
on-target evaluation.
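The KDevelop-style "tracing" described above can be approximated today with
stock breakpoint command lists, at the cost of a full stop-and-resume round
trip per hit (function and variable names are hypothetical):

```text
(gdb) break handle_request
(gdb) commands
Type commands for breakpoint(s) 1, one per line.
> silent
> printf "nclients = %d\n", nclients
> continue
> end
```

A tracepoint is conceptually this same construct, minus the round trip.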
> Aside from the current spec
> having a few sub-optimal things in it (like static array of registers)
>
> > - they don't cause frontend to be involved, because to be efficient,
> > they are entirely stub-side
>
> the mechanism is a bit different, as you need to handle dynamic memory
> allocation for tracepoints, or you would end up with a large preallocated
> blob - not an issue for breakpoints.
Are you talking about the stub side of things? Yes, it's different, but for the
frontend it does not matter much how the stub implements them.
> > So it makes perfect sense to treat tracepoints as specially-optimized versions
> > of breakpoints.
>
> I doubt that - de facto the code would not share much - at least in our
> implementation this was the case, and I doubt that maintenance-wise it
> makes any sense to merge the code - notably, I would expect that the
> tracepoint code would start changing once it goes into mainline and users
> start playing with it.
I disagree. Experience will tell, I think.
- Volodya
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2008-11-06 18:19 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-10-31 20:46 Tracepoint enhancements Stan Shebs
[not found] ` <490B6CEF.2000003@vmware.com>
2008-11-01 8:40 ` Vladimir Prus
2008-11-03 18:20 ` Michael Snyder
2008-11-04 21:17 ` Stan Shebs
2008-11-05 7:14 ` Vladimir Prus
[not found] ` <Pine.LNX.4.58.0811060523150.8468@vlab.hofr.at>
2008-11-06 18:19 ` Vladimir Prus
2008-11-03 6:38 ` Jakob Engblom
2008-11-03 18:27 ` Michael Snyder
2008-11-03 18:53 ` Jakob Engblom
2008-11-03 19:23 ` Michael Snyder
2008-11-04 14:00 ` Jakob Engblom
2008-11-04 21:37 ` Stan Shebs
2008-11-04 21:58 ` Michael Snyder
2008-11-05 9:04 ` Jakob Engblom
2008-11-03 9:12 ` Jeremy Bennett
2008-11-04 21:26 ` Stan Shebs