From: Tim Wiederhake <tim.wiederhake@intel.com>
To: gdb-patches@sourceware.org
Cc: markus.t.metzger@intel.com
Subject: [PATCH v2 08/11] btrace: Remove struct btrace_thread_info::flow.
Date: Mon, 08 May 2017 08:13:00 -0000
Message-Id: <1494231185-4709-9-git-send-email-tim.wiederhake@intel.com>
In-Reply-To: <1494231185-4709-1-git-send-email-tim.wiederhake@intel.com>
References: <1494231185-4709-1-git-send-email-tim.wiederhake@intel.com>

This used to hold a pair of pointers to the previous and next function
segment in execution flow order.  It is no longer necessary, as the
previous and next function segments are now simply the previous and
next elements in the vector of function segments.

2017-05-08  Tim Wiederhake  <tim.wiederhake@intel.com>

gdb/ChangeLog:
	* btrace.c (ftrace_new_function, ftrace_fixup_level,
	ftrace_connect_bfun, ftrace_bridge_gap, btrace_bridge_gaps,
	btrace_insn_next, btrace_insn_prev): Remove references to
	btrace_thread_info::flow.
	* btrace.h (struct btrace_function): Remove FLOW.

---
 gdb/btrace.c | 44 ++++++++++++++++++++++++--------------------
 gdb/btrace.h |  3 ---
 2 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/gdb/btrace.c b/gdb/btrace.c
index 5cd3525..e32b593 100644
--- a/gdb/btrace.c
+++ b/gdb/btrace.c
@@ -242,10 +242,6 @@ ftrace_new_function (struct btrace_thread_info *btinfo,
     {
       struct btrace_function *prev = VEC_last (btrace_fun_p, btinfo->functions);
 
-      gdb_assert (prev->flow.next == NULL);
-      prev->flow.next = bfun;
-      bfun->flow.prev = prev;
-
       bfun->number = prev->number + 1;
       bfun->insn_offset = prev->insn_offset + ftrace_call_num_insn (prev);
       bfun->level = prev->level;
@@ -694,10 +690,12 @@ ftrace_match_backtrace (struct btrace_thread_info *btinfo,
   return matches;
 }
 
-/* Add ADJUSTMENT to the level of BFUN and succeeding function segments.  */
+/* Add ADJUSTMENT to the level of BFUN and succeeding function segments.
+   BTINFO is the branch trace information for the current thread.  */
 
 static void
-ftrace_fixup_level (struct btrace_function *bfun, int adjustment)
+ftrace_fixup_level (struct btrace_thread_info *btinfo,
+		    struct btrace_function *bfun, int adjustment)
 {
   if (adjustment == 0)
     return;
@@ -705,8 +703,11 @@ ftrace_fixup_level (struct btrace_function *bfun, int adjustment)
   DEBUG_FTRACE ("fixup level (%+d)", adjustment);
   ftrace_debug (bfun, "..bfun");
 
-  for (; bfun != NULL; bfun = bfun->flow.next)
-    bfun->level += adjustment;
+  while (bfun != NULL)
+    {
+      bfun->level += adjustment;
+      bfun = ftrace_find_call_by_number (btinfo, bfun->number + 1);
+    }
 }
 
 /* Recompute the global level offset.  Traverse the function trace and compute
@@ -763,7 +764,7 @@ ftrace_connect_bfun (struct btrace_thread_info *btinfo,
   next->segment.prev = prev;
 
   /* We may have moved NEXT to a different function level.  */
-  ftrace_fixup_level (next, prev->level - next->level);
+  ftrace_fixup_level (btinfo, next, prev->level - next->level);
 
   /* If we run out of back trace for one, let's use the other's.  */
   if (prev->up == 0)
@@ -836,7 +837,8 @@ ftrace_connect_bfun (struct btrace_thread_info *btinfo,
 
 	     Otherwise we will fix up CALLER's level when we connect it to
 	     PREV's caller in the next iteration.  */
-	  ftrace_fixup_level (caller, prev->level - caller->level - 1);
+	  ftrace_fixup_level (btinfo, caller,
+			      prev->level - caller->level - 1);
 	  break;
 	}
 
@@ -934,7 +936,7 @@ ftrace_bridge_gap (struct btrace_thread_info *btinfo,
      To catch this, we already fix up the level here where we can start at RHS
      instead of at BEST_R.  We will ignore the level fixup when connecting
      BEST_L to BEST_R as they will already be on the same level.  */
-  ftrace_fixup_level (rhs, best_l->level - best_r->level);
+  ftrace_fixup_level (btinfo, rhs, best_l->level - best_r->level);
 
   ftrace_connect_backtrace (btinfo, best_l, best_r);
 
@@ -947,12 +949,14 @@
 static void
 btrace_bridge_gaps (struct thread_info *tp, VEC (bfun_s) **gaps)
 {
+  struct btrace_thread_info *btinfo;
   VEC (bfun_s) *remaining;
   struct cleanup *old_chain;
   int min_matches;
 
   DEBUG ("bridge gaps");
 
+  btinfo = &tp->btrace;
   remaining = NULL;
   old_chain = make_cleanup (VEC_cleanup (bfun_s), &remaining);
 
@@ -981,20 +985,20 @@ btrace_bridge_gaps (struct thread_info *tp, VEC (bfun_s) **gaps)
	     all but the leftmost gap in such a sequence.
 
	     Also ignore gaps at the beginning of the trace.  */
-	  lhs = gap->flow.prev;
+	  lhs = ftrace_find_call_by_number (btinfo, gap->number - 1);
	  if (lhs == NULL || lhs->errcode != 0)
	    continue;
 
	  /* Skip gaps to the right.  */
-	  for (rhs = gap->flow.next; rhs != NULL; rhs = rhs->flow.next)
-	    if (rhs->errcode == 0)
-	      break;
+	  rhs = ftrace_find_call_by_number (btinfo, gap->number + 1);
+	  while (rhs != NULL && rhs->errcode != 0)
+	    rhs = ftrace_find_call_by_number (btinfo, rhs->number + 1);
 
	  /* Ignore gaps at the end of the trace.  */
	  if (rhs == NULL)
	    continue;
 
-	  bridged = ftrace_bridge_gap (&tp->btrace, lhs, rhs, min_matches);
+	  bridged = ftrace_bridge_gap (btinfo, lhs, rhs, min_matches);
 
	  /* Keep track of gaps we were not able to bridge and try again.
	     If we just pushed them to the end of GAPS we would risk an
@@ -1024,7 +1028,7 @@ btrace_bridge_gaps (struct thread_info *tp, VEC (bfun_s) **gaps)
 
   /* We may omit this in some cases.  Not sure it is worth the extra
      complication, though.  */
-  ftrace_compute_global_level_offset (&tp->btrace);
+  ftrace_compute_global_level_offset (btinfo);
 }
 
 /* Compute the function branch trace from BTS trace.  */
@@ -2382,7 +2386,7 @@ btrace_insn_next (struct btrace_insn_iterator *it, unsigned int stride)
	{
	  const struct btrace_function *next;
 
-	  next = bfun->flow.next;
+	  next = ftrace_find_call_by_number (it->btinfo, bfun->number + 1);
	  if (next == NULL)
	    break;
 
@@ -2412,7 +2416,7 @@ btrace_insn_next (struct btrace_insn_iterator *it, unsigned int stride)
	{
	  const struct btrace_function *next;
 
-	  next = bfun->flow.next;
+	  next = ftrace_find_call_by_number (it->btinfo, bfun->number + 1);
	  if (next == NULL)
	    {
	      /* We stepped past the last function.
@@ -2461,7 +2465,7 @@ btrace_insn_prev (struct btrace_insn_iterator *it, unsigned int stride)
	{
	  const struct btrace_function *prev;
 
-	  prev = bfun->flow.prev;
+	  prev = ftrace_find_call_by_number (it->btinfo, bfun->number - 1);
	  if (prev == NULL)
	    break;
 
diff --git a/gdb/btrace.h b/gdb/btrace.h
index fe591a5..c998258 100644
--- a/gdb/btrace.h
+++ b/gdb/btrace.h
@@ -149,9 +149,6 @@ struct btrace_function
      two segments: one before the call and another after the return.  */
   struct btrace_func_link segment;
 
-  /* The previous and next function in control flow order.  */
-  struct btrace_func_link flow;
-
   /* The function segment number of the directly preceding function segment in
      a (fake) call stack.  Will be zero if there is no such function segment in
      the record.  */
-- 
2.7.4