From: "Metzger, Markus T" <markus.t.metzger@intel.com>
To: Jan Kratochvil <jan.kratochvil@redhat.com>
Cc: "gdb-patches@sourceware.org" <gdb-patches@sourceware.org>,
"Himpel, Christian" <christian.himpel@intel.com>
Subject: RE: [patch v4 03/24] btrace: change branch trace data structure
Date: Tue, 10 Sep 2013 09:11:00 -0000
Message-ID: <A78C989F6D9628469189715575E55B230A9CA7A2@IRSMSX104.ger.corp.intel.com>
In-Reply-To: <20130818190426.GC24153@host2.jankratochvil.net>
> -----Original Message-----
> From: Jan Kratochvil [mailto:jan.kratochvil@redhat.com]
> Sent: Sunday, August 18, 2013 9:04 PM
Thanks for your review.
> > @@ -248,89 +185,477 @@ ftrace_skip_file (struct btrace_func *bfun, const
> char *filename)
> > else
> > bfile = "";
> >
> > - if (filename == NULL)
> > - filename = "";
> > + if (fullname == NULL)
> > + fullname = "";
>
> The code should not assume FULLNAME cannot be ""; "" is theoretically a
> valid source file filename.
>
> A second reason is that currently no caller of ftrace_skip_file passes NULL
> as the second parameter.
>
> So the function can be just:
>
> if (sym == NULL)
> return 1;
>
> bfile = symtab_to_fullname (sym->symtab);
>
> return filename_cmp (bfile, fullname) != 0;
Sounds good. Changed it.
> And the function has only one caller, so IMO it would be easier to read if
> it were inlined.
I'd rather keep it separate for documentation purposes.
> And I am not sure it matters much, but doing two symtab_to_fullname calls
> to compare two symtabs for equality is needlessly expensive
> - symtab_to_fullname is very expensive.  There are several places in GDB
> that first do:
> /* Before we invoke realpath, which can get expensive when many
> files are involved, do a quick comparison of the basenames. */
> if (!basenames_may_differ
> && filename_cmp (lbasename (symtab1->filename),
> lbasename (symtab2->filename)) != 0)
> continue;
I would expect it does matter.  I noticed a slowdown when the trace gets
into the order of 1M instructions.  I have not done any profiling yet, but I
will get back to this once I look into performance.
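The quoted basename pre-check can be sketched as a standalone helper (hypothetical names; GDB's actual lbasename and filename_cmp additionally handle DOS-style paths and case sensitivity):

```c
#include <assert.h>
#include <string.h>

/* Return the basename component of PATH; a simplified stand-in for
   GDB's lbasename.  */
static const char *
basename_of (const char *path)
{
  const char *slash = strrchr (path, '/');
  return slash != NULL ? slash + 1 : path;
}

/* Quick check: if the basenames already differ, the full paths must
   differ too, so the expensive realpath-based comparison can be
   skipped entirely.  */
static int
filenames_may_match (const char *file1, const char *file2)
{
  return strcmp (basename_of (file1), basename_of (file2)) == 0;
}
```

Only when this cheap check passes would the expensive symtab_to_fullname comparison need to run.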
> > +static void
> > +ftrace_update_caller (struct btrace_function *bfun,
> > + struct btrace_function *caller,
> > + unsigned int flags)
>
> FLAGS should be enum btrace_function_flag (it is ORed bitmask but GDB
> displays
> enum ORed bitmasks appropriately).
Changed it. This will burn us when we want to switch to C++ someday.
> > +/* Fix up the caller for a function segment. */
>
> IIUC it should be:
>
> /* Fix up the caller for all segments of a function call. */
Thanks. Yes, that's how it should be.
> > + /* We maintain levels for a series of returns for which we have
> > + not seen the calls, but we restart at level 0, otherwise. */
> > + bfun->level = min (0, prev->level) - 1;
>
> Why is there the 'min (0, ' part?
When we return from some tail call chain, for example, and we have
not traced the actual function call that started this chain.
I added a reference to tail calls in the comment.
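The level computation discussed above can be sketched in isolation (a simplified stand-in for the btrace logic, not the actual GDB code):

```c
#include <assert.h>

/* Pick the level for a new function segment reached by a return for
   which no matching call was traced.  PREV_LEVEL is the level of the
   preceding segment.  Mirrors `min (0, prev->level) - 1`: we keep
   descending for a series of untraced returns (e.g. after a tail-call
   chain), but restart just below level 0 otherwise.  */
static int
new_return_level (int prev_level)
{
  int base = prev_level < 0 ? prev_level : 0;  /* min (0, prev_level) */
  return base - 1;
}
```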
> > - bfun = VEC_safe_push (btrace_func_s, ftrace, NULL);
> > + /* There is a call in PREV's back trace to which we should have
> > + returned. Let's remain at this level. */
> > + bfun->level = prev->level;
>
> Shouldn't here be rather:
> bfun->level = caller->level;
We did not return to this caller - otherwise, we would have found it before.
This is handling a case that should not normally occur. No matter what we
do, the indentation will likely be off in one or the other case.
> > + /* If we have symbol information for our current location, use
> > + it to check that we jump to the start of a function. */
> > + if (fun != NULL || mfun != NULL)
> > + start = get_pc_function_start (pc);
> > + else
> > + start = pc;
>
> This relies on an implementation detail of get_pc_function_start.  Rather,
> always call get_pc_function_start, but then check in all cases whether it
> failed (you do not check whether get_pc_function_start failed).
> get_pc_function_start returns 0 if it has failed.
The check is implicit since pc can't be zero.
> Or was the 'fun != NULL || mfun != NULL' check there for performance
> reasons?
That's for performance reasons. No need to call the function if we know
it won't help us.
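The performance guard described here can be sketched as follows (fake_function_start is a hypothetical stand-in for get_pc_function_start, which is only worth calling when symbol information exists):

```c
#include <assert.h>

typedef unsigned long CORE_ADDR;  /* simplified stand-in */

/* Hypothetical stand-in for get_pc_function_start: rounds PC down to
   a pretend function boundary.  */
static CORE_ADDR
fake_function_start (CORE_ADDR pc)
{
  return pc & ~0xfUL;
}

/* Only consult the (potentially expensive) symbol lookup when we have
   symbol information for the current location; otherwise fall back to
   PC itself, which trivially satisfies the start-of-function check.  */
static CORE_ADDR
start_for_jump_check (CORE_ADDR pc, int have_sym_info)
{
  return have_sym_info ? fake_function_start (pc) : pc;
}
```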
> > +static void
> > +btrace_compute_ftrace (struct btrace_thread_info *btinfo,
> > + VEC (btrace_block_s) *btrace)
>
> When doing any non-trivial trace on a buggy Nehalem (btrace enabled by a
> GDB patch), GDB locks up on "info record".  I found it looping in this
> function with a too-big btrace range:
> (gdb) p *block
> $5 = {begin = 4777824, end = 9153192}
>
> But one can break it easily with CTRL-C and hopefully on btrace-correct CPUs
> such things do not happen.
We should not normally get such a trace.  I could add a simple heuristic
that blocks can't be bigger than x bytes, but I don't think that's necessary.
> > +const struct btrace_insn *
> > +btrace_insn_get (const struct btrace_insn_iterator *it)
> > +{
> > + const struct btrace_function *bfun;
> > + unsigned int index, end;
> > +
> > + if (it == NULL)
> > + return NULL;
>
> I do not see this style in GDB, and IMO it can delay a bug report away from
> where it occurred.  Either gdb_assert (it != NULL); or just leave it
> crashing below.
OK. It's just a habit.
> > + index = it->index;
> > + bfun = it->function;
> > + if (bfun == NULL)
> > + return NULL;
>
> btrace_insn_iterator::function does not state whether NULL is allowed and
> what it means in that case.  The btrace_call_get description states "NULL
> if the iterator points past the end of the branch trace." but I do not see
> how it could be set to NULL in any current code (I expect it was so in
> older code).  btrace_insn_next returns the last instruction, not the
> last+1 pointer.
>
> IMO it should be stated that btrace_insn_iterator::function can never be
> NULL, and here should be either gdb_assert (bfun != NULL); or just
> nothing, like above.
Done. That's indeed a leftover. Thanks for pointing it out.
> > +
> > + btinfo = it->btinfo;
> > + if (btinfo == NULL)
> > + return 0;
>
> Similarly, btrace_call_iterator::btinfo does not state whether it can be
> NULL; consequently the code here should rather gdb_assert it (or ignore it
> altogether).
OK. I ignored it.
> > + bfun = it->function;
> > + if (bfun != NULL)
> > + return bfun->number;
>
> Similarly, btrace_call_iterator::function does not state whether it can be
> NULL; consequently the code here should rather gdb_assert it (or ignore it
> altogether).
For the call iterator, function can be NULL (see e.g. btrace_call_end).
It is documented in btrace.h (maybe I added this afterwards).
> > +
> > + /* The branch trace function segment.
> > + This will be NULL for the iterator pointing to the end of the trace. */
>
> btrace_call_next can return NULL in FUNCTION, while btrace_insn_next
> rather returns the very last of all instructions.  Is there a reason for
> this difference?
The end iterator points one past the last element. For instructions, this is
one past the last instruction index in the last (non-empty) function segment.
For calls, this is a NULL function segment.
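The two end-iterator conventions described above can be illustrated schematically (simplified standalone predicates, not the actual GDB iterator structures):

```c
#include <assert.h>
#include <stddef.h>

/* Instruction iterators keep pointing at the last (non-empty) function
   segment; the end iterator is the one whose index is one past the
   last instruction in that segment.  */
static int
insn_iter_is_end (unsigned int index, unsigned int insn_count)
{
  return index == insn_count;
}

/* Call iterators instead mark the end with a NULL function segment.  */
static int
call_iter_is_end (const void *function_segment)
{
  return function_segment == NULL;
}
```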
> > uiout = current_uiout;
> > uiout_cleanup = make_cleanup_ui_out_tuple_begin_end (uiout,
> > "insn history");
> > - btinfo = require_btrace ();
> > - last = VEC_length (btrace_inst_s, btinfo->itrace);
> > + low = (unsigned int) from;
> > + high = (unsigned int) to;
>
> I do not see a reason for this cast; it is not even signed vs. unsigned.
FROM and TO are ULONGEST, which may be 64-bit, whereas LOW and HIGH
are 32-bit.
> > - if (end <= begin)
> > + if (high <= low)
> > error (_("Bad range."));
>
> Function description says:
> /* Disassemble a section of the recorded execution trace from instruction
> BEGIN (inclusive) to instruction END (exclusive). */
>
> But it behaves as if END were inclusive.  Or do I not understand something?
> (gdb) record instruction-history 1925,1926
> 1925 0x00007ffff62f6afc <memset+28>: ja 0x7ffff62f6b30
> <memset+80>
> 1926 0x00007ffff62f6afe <memset+30>: cmp $0x10,%rdx
>
> If it should be inclusive then LOW == HIGH should be allowed:
> (gdb) record instruction-history 1925,1925
> Bad range.
>
> Not in this patch (in some later one) but there is also:
> /* We want both begin and end to be inclusive. */
> btrace_insn_next (&end, 1);
>
> which contradicts the description of to_insn_history_range.
Initially, I had it exclusive, and this should be the behaviour if you just
apply this patch.  Later on, I changed it to be inclusive instead, to better
match existing commands like list.
I fixed this behaviour in the respective patch, added a test, and updated
the comment in target.h.
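The inclusive-range behaviour after that fix can be sketched like this (a simplified model of the range check, not the actual record-btrace code):

```c
#include <assert.h>

/* Treat a user-supplied range [LOW, HIGH] as inclusive on both ends,
   as the later patch does, so LOW == HIGH selects exactly one element.
   Returns the number of elements selected, or -1 for a bad range.  */
static int
inclusive_range_length (unsigned int low, unsigned int high)
{
  if (high < low)
    return -1;  /* "Bad range." */
  return (int) (high - low + 1);
}
```

Under this model, `record instruction-history 1925,1925` selects one instruction rather than erroring out.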
> Unrelated to this patch but the function record_btrace_insn_history_from
> does
> not need to be virtualized. It does not access any internals of
> record-btrace.c, it could be fully implemented in the superclass record.c and
> to_insn_history_from could be deleted.
>
> The same applies for record_btrace_call_history_from and
> to_call_history_from.
Both depend on the numbering scheme, which is an implementation detail.
They both assume that counting starts at 0 (at 1 in a later patch).
This does not hold for record-full, where the lowest instruction number
may be greater than zero.
> > - if (end <= begin)
> > + if (high <= low)
> > error (_("Bad range."));
>
> Similar inclusive/exclusive question as in record_btrace_insn_history_range
> and
> to_insn_history_range.
>
> /* We want both begin and end to be inclusive. */
> btrace_call_next (&end, 1);
>
> (gdb) record function-call-history 700,701
> 700 _dl_lookup_symbol_x
> 701 _dl_fixup
> (gdb) record function-call-history 700,700
> Bad range.
I fixed this behaviour in the respective patch, added a test, and updated
the comment in target.h.
Regards,
Markus.