* [patch] Performance optimize large bp_location count
@ 2009-09-04 20:17 Jan Kratochvil
From: Jan Kratochvil @ 2009-09-04 20:17 UTC (permalink / raw)
To: gdb-patches
Hi,
With more than ~500 breakpoints, GDB currently becomes very slow; `rbreak' can
easily create such situations.
Every breakpoint insertion runs check_duplicates(), which was quadratic, so the
whole operation was cubic. This patch turns n^3 into n^2*log(n).
  %   cumulative    self                self    total
 time    seconds   seconds     calls  s/call   s/call  name
64.80      89.71     89.71      2834    0.03     0.03  update_global_location_list
31.93     133.92     44.21   1616900    0.00     0.00  breakpoint_restore_shadows
(Due to -O2, the former figure also includes the inlined check_duplicates().)
After the fix, the top gprof functions are strcmp_iw and lookup_partial_symbol;
the breakpoint code is no longer significant. Many more optimizations likely
remain for other breakpoint use cases.
echo 'main(){}' | gcc -o 1 -g -x c -; time .../gdb -ex 'set pagination off' -ex start -ex rbreak -ex 'set confirm no' -ex q ./1 2>&1 | grep -c '^Breakpoint '
CVS HEAD:
2505
real 2m12.352s
user 2m12.030s
sys 0m0.238s
This patch:
2505
real 0m3.748s
user 0m3.498s
sys 0m0.251s
The only existing use of ALL_BP_LOCATIONS_SAFE could IMO already be replaced
by ALL_BP_LOCATIONS.
Delaying the update_global_location_list() call via update_locations appears to
fix a pre-existing bug in the current code. I did not try to exploit it; it was
mostly harmless before, but it would cause crashes with the new bp_location
array.
Regression tested on {x86_64,i686}-fedorarawhide-linux-gnu.
Thanks,
Jan
2009-09-04 Jan Kratochvil <jan.kratochvil@redhat.com>
Performance optimize large bp_location count.
* breakpoint.c (ALL_BP_LOCATIONS_SAFE): Remove.
(ALL_BP_LOCATIONS): New parameter BP_TMP. Use bp_location and
bp_location_count.
(bp_location, bp_location_count)
(bp_location_placed_address_before_address_max)
(bp_location_shadow_len_after_address_max): New variables.
(moribund_locations, update_watchpoint): Update the bp_location
variable name.
(breakpoint_restore_shadows): Extend the comment. Move the variable
b to local blocks. Move the variables bp_addr, bp_size and bptoffset
to a local block. New variables bc_l, bc_r and bc. New binary search
for the left range boundary. New break on reaching the right range
boundary. Move shadow existence conditionals to ...
(bp_location_has_shadow): ... a new function.
(insert_breakpoint_locations): Replace the temp variable by bp_tmp.
Use ALL_BP_LOCATIONS instead of ALL_BP_LOCATIONS_SAFE.
(remove_breakpoints, remove_hw_watchpoints, reattach_breakpoints)
(detach_breakpoints): New variable bp_tmp. Update the ALL_BP_LOCATIONS
calling convention.
(update_breakpoints_after_exec): New variable bplocp_tmp. Update the
ALL_BP_LOCATIONS calling convention.
(breakpoint_here_p, software_breakpoint_inserted_here_p)
(breakpoint_thread_match): New variable bptp_tmp. Drop the const
attribute of bpt. Update the ALL_BP_LOCATIONS calling convention.
(regular_breakpoint_inserted_here_p): Likewise. Update the bp_location
variable name.
(mark_breakpoints_out, breakpoint_init_inferior): New variable
bptp_tmp. Update the ALL_BP_LOCATIONS calling convention.
(bpstat_stop_status): New variables blp_tmp and update_locations. Drop
the const attribute of bl. Update the ALL_BP_LOCATIONS calling
convention. Protect HIT_COUNT increment by an ENABLE_STATE check.
Delay the update_global_location_list call using update_locations.
(set_default_breakpoint): Drop the check_duplicates name from comment.
(disable_breakpoints_in_shlibs, disable_breakpoints_in_unloaded_shlib):
New variable locp_tmp. Update the ALL_BP_LOCATIONS calling convention.
(bp_location_compare, bp_location_compare_for_qsort)
(bp_location_target_extensions_update): New functions.
(check_duplicates, check_duplicates_for): Remove, moving their code ...
(update_global_location_list): ... into this existing function. Remove
variables next, loc2, old_locations, ret and ix. New variables locp,
loc_first, old_location, old_locp and old_location_count. Stop using
global_next; now create the bp_location array, sort it by
bp_location_compare_for_qsort and call
bp_location_target_extensions_update. Change the quadratic iteration by
loc2 into an in-sync scan by locp and loc2p. Rename the former loc
usage to old_loc.
* breakpoint.h (struct bp_location <global_next>): Remove.
--- a/gdb/breakpoint.c
+++ b/gdb/breakpoint.c
@@ -111,8 +111,6 @@ struct breakpoint *set_raw_breakpoint (struct gdbarch *gdbarch,
struct symtab_and_line,
enum bptype);
-static void check_duplicates (struct breakpoint *);
-
static void breakpoint_adjustment_warning (CORE_ADDR, CORE_ADDR, int, int);
static CORE_ADDR adjust_breakpoint_address (struct gdbarch *gdbarch,
@@ -334,14 +332,14 @@ static int executing_startup;
B ? (TMP=B->next, 1): 0; \
B = TMP)
-/* Similar iterators for the low-level breakpoints. */
-
-#define ALL_BP_LOCATIONS(B) for (B = bp_location_chain; B; B = B->global_next)
+/* Similar iterator for the low-level breakpoints. SAFE variant is not
+ provided so update_global_location_list must not be called while executing
+ the block of ALL_BP_LOCATIONS. */
-#define ALL_BP_LOCATIONS_SAFE(B,TMP) \
- for (B = bp_location_chain; \
- B ? (TMP=B->global_next, 1): 0; \
- B = TMP)
+#define ALL_BP_LOCATIONS(B,BP_TMP) \
+ for (BP_TMP = bp_location; \
+ BP_TMP < bp_location + bp_location_count && (B = *BP_TMP); \
+ BP_TMP++)
/* Iterator for tracepoints only. */
@@ -353,10 +351,31 @@ static int executing_startup;
struct breakpoint *breakpoint_chain;
-struct bp_location *bp_location_chain;
+/* Array is sorted by bp_location_compare - primarily by the ADDRESS. */
+
+static struct bp_location **bp_location;
+
+/* Number of elements of BP_LOCATION. */
+
+static unsigned bp_location_count;
+
+/* Maximum alignment offset between bp_target_info.PLACED_ADDRESS and ADDRESS
+ for the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_placed_address_before_address_max;
+
+/* Maximum offset plus alignment between
+ bp_target_info.PLACED_ADDRESS + bp_target_info.SHADOW_LEN and ADDRESS for
+ the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_shadow_len_after_address_max;
/* The locations that no longer correspond to any breakpoint,
- unlinked from bp_location_chain, but for which a hit
+ unlinked from bp_location array, but for which a hit
may still be reported by a target. */
VEC(bp_location_p) *moribund_locations = NULL;
@@ -732,35 +751,88 @@ commands_from_control_command (char *arg, struct command_line *cmd)
}
error (_("No breakpoint number %d."), bnum);
}
-\f
+
+/* Return non-zero if BL->TARGET_INFO contains valid information. */
+
+static int
+bp_location_has_shadow (struct bp_location *bl)
+{
+ if (bl->loc_type != bp_loc_software_breakpoint)
+ return 0;
+ if (!bl->inserted)
+ return 0;
+ if (bl->target_info.shadow_len == 0)
+ /* bp isn't valid, or doesn't shadow memory. */
+ return 0;
+ return 1;
+}
+
/* Update BUF, which is LEN bytes read from the target address MEMADDR,
- by replacing any memory breakpoints with their shadowed contents. */
+ by replacing any memory breakpoints with their shadowed contents.
+
+ The range of shadowed area by each bp_location is:
+ b->address - bp_location_placed_address_before_address_max
+ up to b->address + bp_location_shadow_len_after_address_max
+ The range we were requested to resolve shadows for is:
+ memaddr ... memaddr + le
+ Thus the safe cutoff boundaries for performance optimization are
+ memaddr + len <= b->address - bp_location_placed_address_before_address_max
+ and:
+ b->address + bp_location_shadow_len_after_address_max <= memaddr */
void
breakpoint_restore_shadows (gdb_byte *buf, ULONGEST memaddr, LONGEST len)
{
- struct bp_location *b;
- CORE_ADDR bp_addr = 0;
- int bp_size = 0;
- int bptoffset = 0;
+ /* Left boundary, right boundary and media element of our binary search. */
+ unsigned bc_l, bc_r, bc;
+
+ /* Find BC_L which is a leftmost element which may affect BUF content. It is
+ safe to report lower value but a failure to report higher one. */
+
+ bc_l = 0;
+ bc_r = bp_location_count;
+ while (bc_l + 1 < bc_r)
+ {
+ struct bp_location *b;
+
+ bc = (bc_l + bc_r) / 2;
+ b = bp_location[bc];
+
+ if (b->address + bp_location_shadow_len_after_address_max >= b->address
+ && b->address + bp_location_shadow_len_after_address_max <= memaddr)
+ bc_l = bc;
+ else
+ bc_r = bc;
+ }
- ALL_BP_LOCATIONS (b)
+ /* Now do full processing of the found relevant range of elements. */
+
+ for (bc = bc_l; bc < bp_location_count; bc++)
{
+ struct bp_location *b = bp_location[bc];
+ CORE_ADDR bp_addr = 0;
+ int bp_size = 0;
+ int bptoffset = 0;
+
if (b->owner->type == bp_none)
warning (_("reading through apparently deleted breakpoint #%d?"),
b->owner->number);
- if (b->loc_type != bp_loc_software_breakpoint)
- continue;
- if (!b->inserted)
+ /* Performance optimization: any futher element can no longer affect BUF
+ content. */
+
+ if (b->address >= bp_location_placed_address_before_address_max
+ && memaddr + len <= b->address
+ - bp_location_placed_address_before_address_max)
+ break;
+
+ if (!bp_location_has_shadow (b))
continue;
+
/* Addresses and length of the part of the breakpoint that
we need to copy. */
bp_addr = b->target_info.placed_address;
bp_size = b->target_info.shadow_len;
- if (bp_size == 0)
- /* bp isn't valid, or doesn't shadow memory. */
- continue;
if (bp_addr + bp_size <= memaddr)
/* The breakpoint is entirely before the chunk of memory we
@@ -909,7 +981,7 @@ update_watchpoint (struct breakpoint *b, int reparse)
struct bp_location *loc;
bpstat bs;
- /* We don't free locations. They are stored in bp_location_chain and
+ /* We don't free locations. They are stored in bp_location array and
update_global_locations will eventually delete them and remove
breakpoints if needed. */
b->loc = NULL;
@@ -1344,7 +1416,7 @@ static void
insert_breakpoint_locations (void)
{
struct breakpoint *bpt;
- struct bp_location *b, *temp;
+ struct bp_location *b, **bp_tmp;
int error = 0;
int val = 0;
int disabled_breaks = 0;
@@ -1357,7 +1429,7 @@ insert_breakpoint_locations (void)
there was an error. */
fprintf_unfiltered (tmp_error_stream, "Warning:\n");
- ALL_BP_LOCATIONS_SAFE (b, temp)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (!should_be_inserted (b) || b->inserted)
continue;
@@ -1431,10 +1503,10 @@ You may have requested too many hardware breakpoints/watchpoints.\n");
int
remove_breakpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1445,10 +1517,10 @@ remove_breakpoints (void)
int
remove_hw_watchpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted && b->loc_type == bp_loc_hardware_watchpoint)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1459,7 +1531,7 @@ remove_hw_watchpoints (void)
int
reattach_breakpoints (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val;
struct cleanup *old_chain = save_inferior_ptid ();
struct ui_file *tmp_error_stream = mem_fileopen ();
@@ -1468,7 +1540,7 @@ reattach_breakpoints (int pid)
make_cleanup_ui_file_delete (tmp_error_stream);
inferior_ptid = pid_to_ptid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
{
@@ -1571,7 +1643,7 @@ update_breakpoints_after_exec (void)
{
struct breakpoint *b;
struct breakpoint *temp;
- struct bp_location *bploc;
+ struct bp_location *bploc, **bplocp_tmp;
/* We're about to delete breakpoints from GDB's lists. If the
INSERTED flag is true, GDB will try to lift the breakpoints by
@@ -1581,7 +1653,7 @@ update_breakpoints_after_exec (void)
breakpoints out as soon as it detects an exec. We don't do that
here instead, because there may be other attempts to delete
breakpoints after detecting an exec and before reaching here. */
- ALL_BP_LOCATIONS (bploc)
+ ALL_BP_LOCATIONS (bploc, bplocp_tmp)
gdb_assert (!bploc->inserted);
ALL_BREAKPOINTS_SAFE (b, temp)
@@ -1684,7 +1756,7 @@ update_breakpoints_after_exec (void)
int
detach_breakpoints (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
struct cleanup *old_chain = save_inferior_ptid ();
@@ -1693,7 +1765,7 @@ detach_breakpoints (int pid)
/* Set inferior_ptid; remove_breakpoint uses this global. */
inferior_ptid = pid_to_ptid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
val |= remove_breakpoint (b, mark_inserted);
@@ -1824,9 +1896,9 @@ remove_breakpoint (struct bp_location *b, insertion_state_t is)
void
mark_breakpoints_out (void)
{
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
bpt->inserted = 0;
}
@@ -1846,7 +1918,7 @@ void
breakpoint_init_inferior (enum inf_context context)
{
struct breakpoint *b, *temp;
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int ix;
/* If breakpoint locations are shared across processes, then there's
@@ -1854,7 +1926,7 @@ breakpoint_init_inferior (enum inf_context context)
if (gdbarch_has_global_breakpoints (target_gdbarch))
return;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
if (bpt->owner->enable_state != bp_permanent)
bpt->inserted = 0;
@@ -1915,10 +1987,10 @@ breakpoint_init_inferior (enum inf_context context)
enum breakpoint_here
breakpoint_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -1958,16 +2030,16 @@ moribund_breakpoint_here_p (CORE_ADDR pc)
}
/* Returns non-zero if there's a breakpoint inserted at PC, which is
- inserted using regular breakpoint_chain/bp_location_chain mechanism.
+ inserted using regular breakpoint_chain / bp_location array mechanism.
This does not check for single-step breakpoints, which are
inserted and removed using direct target manipulation. */
int
regular_breakpoint_inserted_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -2008,10 +2080,10 @@ breakpoint_inserted_here_p (CORE_ADDR pc)
int
software_breakpoint_inserted_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint)
continue;
@@ -2041,12 +2113,12 @@ software_breakpoint_inserted_here_p (CORE_ADDR pc)
int
breakpoint_thread_match (CORE_ADDR pc, ptid_t ptid)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
/* The thread and task IDs associated to PTID, computed lazily. */
int thread = -1;
int task = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -3154,16 +3226,16 @@ bpstat
bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
{
struct breakpoint *b = NULL;
- const struct bp_location *bl;
+ struct bp_location *bl, **blp_tmp;
struct bp_location *loc;
/* Root of the chain of bpstat's */
struct bpstats root_bs[1];
/* Pointer to the last thing in the chain currently. */
bpstat bs = root_bs;
int ix;
- int need_remove_insert;
+ int need_remove_insert, update_locations = 0;
- ALL_BP_LOCATIONS (bl)
+ ALL_BP_LOCATIONS (bl, blp_tmp)
{
b = bl->owner;
gdb_assert (b);
@@ -3205,14 +3277,15 @@ bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
if (bs->stop)
{
- ++(b->hit_count);
+ if (b->enable_state != bp_disabled)
+ ++(b->hit_count);
/* We will stop here */
if (b->disposition == disp_disable)
{
if (b->enable_state != bp_permanent)
b->enable_state = bp_disabled;
- update_global_location_list (0);
+ update_locations = 1;
}
if (b->silent)
bs->print = 0;
@@ -3232,6 +3305,10 @@ bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
bs->print_it = print_it_noop;
}
+ /* Delay this call which would break the ALL_BP_LOCATIONS iteration above. */
+ if (update_locations)
+ update_global_location_list (0);
+
for (ix = 0; VEC_iterate (bp_location_p, moribund_locations, ix, loc); ++ix)
{
if (loc->address == bp_addr)
@@ -4194,9 +4271,8 @@ set_default_breakpoint (int valid, CORE_ADDR addr, struct symtab *symtab,
(or use it for any other purpose either).
More specifically, each of the following breakpoint types will always
- have a zero valued address and we don't want check_duplicates() to mark
- breakpoints of any of these types to be a duplicate of an actual
- breakpoint at address zero:
+ have a zero valued address and we don't want to mark breakpoints of any of
+ these types to be a duplicate of an actual breakpoint at address zero:
bp_watchpoint
bp_hardware_watchpoint
@@ -4216,88 +4292,6 @@ breakpoint_address_is_meaningful (struct breakpoint *bpt)
&& type != bp_catchpoint);
}
-/* Rescan breakpoints at the same address and section as BPT,
- marking the first one as "first" and any others as "duplicates".
- This is so that the bpt instruction is only inserted once.
- If we have a permanent breakpoint at the same place as BPT, make
- that one the official one, and the rest as duplicates. */
-
-static void
-check_duplicates_for (CORE_ADDR address, struct obj_section *section)
-{
- struct bp_location *b;
- int count = 0;
- struct bp_location *perm_bp = 0;
-
- ALL_BP_LOCATIONS (b)
- if (b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled
- && !b->shlib_disabled
- && b->address == address /* address / overlay match */
- && (!overlay_debugging || b->section == section)
- && breakpoint_address_is_meaningful (b->owner))
- {
- /* Have we found a permanent breakpoint? */
- if (b->owner->enable_state == bp_permanent)
- {
- perm_bp = b;
- break;
- }
-
- count++;
- b->duplicate = count > 1;
- }
-
- /* If we found a permanent breakpoint at this address, go over the
- list again and declare all the other breakpoints there (except
- other permanent breakpoints) to be the duplicates. */
- if (perm_bp)
- {
- perm_bp->duplicate = 0;
-
- /* Permanent breakpoint should always be inserted. */
- if (! perm_bp->inserted)
- internal_error (__FILE__, __LINE__,
- _("allegedly permanent breakpoint is not "
- "actually inserted"));
-
- ALL_BP_LOCATIONS (b)
- if (b != perm_bp)
- {
- if (b->owner->enable_state != bp_permanent
- && b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled && !b->shlib_disabled
- && b->address == address /* address / overlay match */
- && (!overlay_debugging || b->section == section)
- && breakpoint_address_is_meaningful (b->owner))
- {
- if (b->inserted)
- internal_error (__FILE__, __LINE__,
- _("another breakpoint was inserted on top of "
- "a permanent breakpoint"));
-
- b->duplicate = 1;
- }
- }
- }
-}
-
-static void
-check_duplicates (struct breakpoint *bpt)
-{
- struct bp_location *bl = bpt->loc;
-
- if (! breakpoint_address_is_meaningful (bpt))
- return;
-
- for (; bl; bl = bl->next)
- check_duplicates_for (bl->address, bl->section);
-}
-
static void
breakpoint_adjustment_warning (CORE_ADDR from_addr, CORE_ADDR to_addr,
int bnum, int have_bnum)
@@ -4706,9 +4700,9 @@ create_solib_event_breakpoint (struct gdbarch *gdbarch, CORE_ADDR address)
void
disable_breakpoints_in_shlibs (void)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
/* We apply the check to all breakpoints, including disabled
@@ -4738,7 +4732,7 @@ disable_breakpoints_in_shlibs (void)
static void
disable_breakpoints_in_unloaded_shlib (struct so_list *solib)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
int disabled_shlib_breaks = 0;
/* SunOS a.out shared libraries are always mapped, so do not
@@ -4749,7 +4743,7 @@ disable_breakpoints_in_unloaded_shlib (struct so_list *solib)
&& bfd_get_flavour (exec_bfd) == bfd_target_aout_flavour)
return;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
if ((loc->loc_type == bp_loc_hardware_breakpoint
@@ -7357,6 +7351,82 @@ do_vec_free (void *p)
VEC_free (bp_location_p, *vec);
}
+/* A comparison function for bp_location A and B being interfaced to qsort.
+ Sort elements primarily by their ADDRESS (no matter what does
+ breakpoint_address_is_meaningful say for its OWNER), secondarily by ordering
+ first bp_permanent OWNERed elements and terciarily just ensuring the array
+ is sorted stable way despite qsort being an instable algorithm. */
+
+static int
+bp_location_compare (struct bp_location *a, struct bp_location *b)
+{
+ int a_perm = a->owner->enable_state == bp_permanent;
+ int b_perm = b->owner->enable_state == bp_permanent;
+
+ if (a->address != b->address)
+ return (a->address > b->address) - (a->address < b->address);
+
+ /* Sort permanent breakpoints first. */
+ if (a_perm != b_perm)
+ return (a_perm < b_perm) - (a_perm > b_perm);
+
+ /* Make the user-visible order stable across GDB runs. Locations of the same
+ breakpoint can be sorted in arbitrary order. */
+
+ if (a->owner->number != b->owner->number)
+ return (a->owner->number > b->owner->number)
+ - (a->owner->number < b->owner->number);
+
+ return (a > b) - (a < b);
+}
+
+/* Interface bp_location_compare as the COMPAR parameter of qsort function. */
+
+static int
+bp_location_compare_for_qsort (const void *ap, const void *bp)
+{
+ struct bp_location *a = *(void **) ap;
+ struct bp_location *b = *(void **) bp;
+
+ return bp_location_compare (a, b);
+}
+
+/* Set bp_location_placed_address_before_address_max and
+ bp_location_shadow_len_after_address_max according to the current content of
+ the bp_location array. */
+
+static void
+bp_location_target_extensions_update (void)
+{
+ struct bp_location *bl, **blp_tmp;
+
+ bp_location_placed_address_before_address_max = 0;
+ bp_location_shadow_len_after_address_max = 0;
+
+ ALL_BP_LOCATIONS (bl, blp_tmp)
+ {
+ CORE_ADDR start, end, addr;
+
+ if (!bp_location_has_shadow (bl))
+ continue;
+
+ start = bl->target_info.placed_address;
+ end = start + bl->target_info.shadow_len;
+
+ gdb_assert (bl->address >= start);
+ addr = bl->address - start;
+ if (addr > bp_location_placed_address_before_address_max)
+ bp_location_placed_address_before_address_max = addr;
+
+ /* Zero SHADOW_LEN would not pass bp_location_has_shadow. */
+
+ gdb_assert (bl->address < end);
+ addr = end - bl->address;
+ if (addr > bp_location_shadow_len_after_address_max)
+ bp_location_shadow_len_after_address_max = addr;
+ }
+}
+
/* If SHOULD_INSERT is false, do not insert any breakpoint locations
into the inferior, only remove already-inserted locations that no
longer should be inserted. Functions that delete a breakpoint or
@@ -7376,49 +7446,66 @@ static void
update_global_location_list (int should_insert)
{
struct breakpoint *b;
- struct bp_location **next = &bp_location_chain;
- struct bp_location *loc;
- struct bp_location *loc2;
- VEC(bp_location_p) *old_locations = NULL;
- int ret;
- int ix;
+ struct bp_location **locp, *loc;
struct cleanup *cleanups;
- cleanups = make_cleanup (do_vec_free, &old_locations);
- /* Store old locations for future reference. */
- for (loc = bp_location_chain; loc; loc = loc->global_next)
- VEC_safe_push (bp_location_p, old_locations, loc);
+ /* The first bp_location being the only one non-DUPLICATE for the current run
+ of the same ADDRESS. */
+ struct bp_location *loc_first;
+
+ /* Saved former bp_location array which we compare against the newly built
+ bp_location from the current state of ALL_BREAKPOINTS. */
+ struct bp_location **old_location, **old_locp;
+ unsigned old_location_count;
+
+ old_location = bp_location;
+ old_location_count = bp_location_count;
+ bp_location = NULL;
+ bp_location_count = 0;
+ cleanups = make_cleanup (xfree, old_location);
- bp_location_chain = NULL;
ALL_BREAKPOINTS (b)
- {
- for (loc = b->loc; loc; loc = loc->next)
- {
- *next = loc;
- next = &(loc->global_next);
- *next = NULL;
- }
- }
+ for (loc = b->loc; loc; loc = loc->next)
+ bp_location_count++;
+
+ bp_location = xmalloc (sizeof (*bp_location) * bp_location_count);
+ locp = bp_location;
+ ALL_BREAKPOINTS (b)
+ for (loc = b->loc; loc; loc = loc->next)
+ *locp++ = loc;
+ qsort (bp_location, bp_location_count, sizeof (*bp_location),
+ bp_location_compare_for_qsort);
+
+ bp_location_target_extensions_update ();
/* Identify bp_location instances that are no longer present in the new
list, and therefore should be freed. Note that it's not necessary that
those locations should be removed from inferior -- if there's another
location at the same address (previously marked as duplicate),
- we don't need to remove/insert the location. */
- for (ix = 0; VEC_iterate(bp_location_p, old_locations, ix, loc); ++ix)
+ we don't need to remove/insert the location.
+
+ LOCP is kept in sync with OLD_LOCP, each pointing to the current and
+ former bp_location array state respectively. */
+
+ locp = bp_location;
+ for (old_locp = old_location; old_locp < old_location + old_location_count;
+ old_locp++)
{
- /* Tells if 'loc' is found amoung the new locations. If not, we
+ struct bp_location *old_loc = *old_locp;
+
+ /* Tells if 'old_loc' is found amoung the new locations. If not, we
have to free it. */
- int found_object = 0;
+ int found_object;
/* Tells if the location should remain inserted in the target. */
int keep_in_target = 0;
int removed = 0;
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- if (loc2 == loc)
- {
- found_object = 1;
- break;
- }
+
+ /* Skip LOCP entries which will definitely never be needed. Stop either
+ at or being the one matching OLD_LOC. */
+ while (locp < bp_location + bp_location_count
+ && bp_location_compare (*locp, old_loc) < 0)
+ locp++;
+ found_object = locp < bp_location + bp_location_count && *locp == old_loc;
/* If this location is no longer present, and inserted, look if there's
maybe a new location at the same address. If so, mark that one
@@ -7426,11 +7513,11 @@ update_global_location_list (int should_insert)
don't have a time window where a breakpoint at certain location is not
inserted. */
- if (loc->inserted)
+ if (old_loc->inserted)
{
/* If the location is inserted now, we might have to remove it. */
- if (found_object && should_be_inserted (loc))
+ if (found_object && should_be_inserted (old_loc))
{
/* The location is still present in the location list, and still
should be inserted. Don't do anything. */
@@ -7441,37 +7528,46 @@ update_global_location_list (int should_insert)
/* The location is either no longer present, or got disabled.
See if there's another location at the same address, in which
case we don't need to remove this one from the target. */
- if (breakpoint_address_is_meaningful (loc->owner))
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- {
- /* For the sake of should_insert_location. The
- call to check_duplicates will fix up this later. */
- loc2->duplicate = 0;
- if (should_be_inserted (loc2)
- && loc2 != loc && loc2->address == loc->address)
- {
- loc2->inserted = 1;
- loc2->target_info = loc->target_info;
- keep_in_target = 1;
- break;
- }
- }
+
+ if (breakpoint_address_is_meaningful (old_loc->owner))
+ {
+ struct bp_location **loc2p;
+
+ for (loc2p = locp;
+ loc2p < bp_location + bp_location_count
+ && (*loc2p)->address == old_loc->address;
+ loc2p++)
+ {
+ struct bp_location *loc2 = *loc2p;
+
+ /* For the sake of should_be_inserted.
+ Duplicates check below will fix up this later. */
+ loc2->duplicate = 0;
+ if (loc2 != old_loc && should_be_inserted (loc2))
+ {
+ loc2->inserted = 1;
+ loc2->target_info = old_loc->target_info;
+ keep_in_target = 1;
+ break;
+ }
+ }
+ }
}
if (!keep_in_target)
{
- if (remove_breakpoint (loc, mark_uninserted))
+ if (remove_breakpoint (old_loc, mark_uninserted))
{
/* This is just about all we can do. We could keep this
location on the global list, and try to remove it next
time, but there's no particular reason why we will
succeed next time.
- Note that at this point, loc->owner is still valid,
+ Note that at this point, old_loc->owner is still valid,
as delete_breakpoint frees the breakpoint only
after calling us. */
printf_filtered (_("warning: Error removing breakpoint %d\n"),
- loc->owner->number);
+ old_loc->owner->number);
}
removed = 1;
}
@@ -7493,19 +7589,59 @@ update_global_location_list (int should_insert)
longer need to keep this breakpoint. This is just a
heuristic, but if it's wrong, we'll report unexpected SIGTRAP,
which is usability issue, but not a correctness problem. */
- loc->events_till_retirement = 3 * (thread_count () + 1);
- loc->owner = NULL;
+ old_loc->events_till_retirement = 3 * (thread_count () + 1);
+ old_loc->owner = NULL;
- VEC_safe_push (bp_location_p, moribund_locations, loc);
+ VEC_safe_push (bp_location_p, moribund_locations, old_loc);
}
else
- free_bp_location (loc);
+ free_bp_location (old_loc);
}
}
- ALL_BREAKPOINTS (b)
+ /* Rescan breakpoints at the same address and section,
+ marking the first one as "first" and any others as "duplicates".
+ This is so that the bpt instruction is only inserted once.
+ If we have a permanent breakpoint at the same place as BPT, make
+ that one the official one, and the rest as duplicates. Permanent
+ breakpoints are sorted first for the same address. */
+
+ loc_first = NULL;
+ ALL_BP_LOCATIONS (loc, locp)
{
- check_duplicates (b);
+ struct breakpoint *b = loc->owner;
+
+ if (b->enable_state == bp_disabled
+ || b->enable_state == bp_call_disabled
+ || b->enable_state == bp_startup_disabled
+ || !loc->enabled
+ || loc->shlib_disabled
+ || !breakpoint_address_is_meaningful (b))
+ continue;
+
+ /* Permanent breakpoint should always be inserted. */
+ if (b->enable_state == bp_permanent && ! loc->inserted)
+ internal_error (__FILE__, __LINE__,
+ _("allegedly permanent breakpoint is not "
+ "actually inserted"));
+
+ if (loc_first == NULL
+ /* address / overlay match */
+ || loc->address != loc_first->address
+ || (overlay_debugging && loc->section != loc_first->section))
+ {
+ loc_first = loc;
+ loc->duplicate = 0;
+ continue;
+ }
+
+ loc->duplicate = 1;
+
+ if (loc_first->owner->enable_state == bp_permanent && loc->inserted
+ && b->enable_state != bp_permanent)
+ internal_error (__FILE__, __LINE__,
+ _("another breakpoint was inserted on top of "
+ "a permanent breakpoint"));
}
if (breakpoints_always_inserted_mode () && should_insert
--- a/gdb/breakpoint.h
+++ b/gdb/breakpoint.h
@@ -227,10 +227,6 @@ struct bp_location
the same parent breakpoint. */
struct bp_location *next;
- /* Pointer to the next breakpoint location, in a global
- list of all breakpoint locations. */
- struct bp_location *global_next;
-
/* Type of this breakpoint location. */
enum bp_loc_type loc_type;
* Re: [patch] Performance optimize large bp_location count
From: Tom Tromey @ 2009-10-13 18:34 UTC (permalink / raw)
To: Jan Kratochvil; +Cc: gdb-patches
>>>>> "Jan" == Jan Kratochvil <jan.kratochvil@redhat.com> writes:
Jan> Every breakpoint being inserted runs check_duplicates() which was
Jan> quadratic => the whole operation was cubic. n^3 becomes n^2*log(n)
Jan> by this patch.
n^2*log(n) still seems high, but since it dropped off the profile, that
is good enough.
I puzzled over the choice of a sorted array here, wondering if there
were perhaps some better data structure. A bit of rationale and
overview in the patch email would go a long way.
Jan> + /* Left boundary, right boundary and media element of our binary search. */
Jan> + unsigned bc_l, bc_r, bc;
I think s/media/median/
Jan> + /* Find BC_L which is a leftmost element which may affect BUF content. It is
Jan> + safe to report lower value but a failure to report higher one. */
Jan> +
Jan> + bc_l = 0;
Jan> + bc_r = bp_location_count;
Jan> + while (bc_l + 1 < bc_r)
Jan> + {
Jan> + struct bp_location *b;
Jan> +
Jan> + bc = (bc_l + bc_r) / 2;
Jan> + b = bp_location[bc];
Jan> +
Jan> + if (b->address + bp_location_shadow_len_after_address_max >= b->address
Jan> + && b->address + bp_location_shadow_len_after_address_max <= memaddr)
Jan> + bc_l = bc;
Jan> + else
Jan> + bc_r = bc;
Jan> + }
It isn't obvious to me why this finds the leftmost element when there
are multiple overlapping ones. What am I missing?
Jan> - cleanups = make_cleanup (do_vec_free, &old_locations);
I think this is the only use of do_vec_free, so that function can be
removed.
Tom
* Re: [patch] Performance optimize large bp_location count
2009-10-13 18:34 ` Tom Tromey
@ 2009-10-13 22:35 ` Jan Kratochvil
2009-10-14 22:57 ` Tom Tromey
From: Jan Kratochvil @ 2009-10-13 22:35 UTC (permalink / raw)
To: Tom Tromey; +Cc: gdb-patches
On Tue, 13 Oct 2009 20:33:59 +0200, Tom Tromey wrote:
> >>>>> "Jan" == Jan Kratochvil <jan.kratochvil@redhat.com> writes:
>
> Jan> Every breakpoint being inserted runs check_duplicates() which was
> Jan> quadratic => the whole operation was cubic. n^3 becomes n^2*log(n)
> Jan> by this patch.
>
> n^2*log(n) still seems high, but since it dropped off the profile, that
> is good enough.
A single `break' command was n^2:
(gdb) break func
1 x break_command_really -> update_global_location_list
n x -> check_duplicates -> check_duplicates_for
n x loop
For n break commands typed during one GDB stop the complexity was n^3.
With my patch a single `break' command becomes n*log(n):
(gdb) break func
1 x break_command_really -> update_global_location_list
n*log(n) of qsort
For n break commands typed during one GDB stop it is n^2*log(n), which is
still bad, but according to gprof it is good enough, as you say.
In the default all-stop mode update_global_location_list could be called only
once, before resuming the inferior. In async mode
update_global_location_list could instead work incrementally on the specific
new breakpoint.
Still, this way it was a minimal changeset giving the same user experience as
the best possible breakpoints rework would. After other parts of GDB (symbol
lookups) get accelerated, more work may be useful on this part.
> I puzzled over the choice of a sorted array here, wondering if there
> were perhaps some better data structure. A bit of rationale and
> overview in the patch email would go a long way.
I do not remember it all, but probably I just did not consider the possibility
of an incremental update; I tried the easiest patch and it worked.
A different data structure would be more appropriate for an incremental update.
Another possibility would be a hash table, with O(1) lookup instead of the
O(log(n)) of a bsearch()ed sorted array, but the constant C (hidden by O) of
a hash lookup is higher, and in this GDB case it is also multiplied by
bp_location_placed_address_before_address_max
+ bp_location_shadow_len_after_address_max probes over the array. It is true
that in the x86* target case the sum of these *_max values remains 1. A hash
table would also not be usable for future possible accelerations of
breakpoints in specific ranges
(`disable_breakpoints_in_unloaded_shlib (struct so_list *solib)').
+ incremental update possible
+ O(1) better than sorted-array O(log(n))
- not range-selectable
- (risk of C1 for O(1) could get higher than the whole C2*log(n) of this patch)
A splay_tree, as used by GDB's addrmap, would enable both incremental updates
and O(log(n)) range-based selections. Still, it would have a much higher
constant C overhead: its nodes sit at random memory locations, requiring many
accesses through the FSB on modern CPUs with a high FSB multiplier, while FSB
burst mode transfers suit the sequential access of a sorted array much better.
I therefore rather tried to make the minimal code change (this patch) as
technically fast as possible, to make it already end-user acceptable without
needing larger GDB changes.
+ incremental update possible
+ range-selectable
- (higher C constant overhead)
- (some range-based selections in the future would need to extend addrmap API)
> Jan> + /* Find BC_L which is a leftmost element which may affect BUF content. It is
> Jan> + safe to report lower value but a failure to report higher one. */
> Jan> +
> Jan> + bc_l = 0;
> Jan> + bc_r = bp_location_count;
> Jan> + while (bc_l + 1 < bc_r)
> Jan> + {
> Jan> + struct bp_location *b;
> Jan> +
> Jan> + bc = (bc_l + bc_r) / 2;
> Jan> + b = bp_location[bc];
> Jan> +
> Jan> + if (b->address + bp_location_shadow_len_after_address_max >= b->address
> Jan> + && b->address + bp_location_shadow_len_after_address_max <= memaddr)
> Jan> + bc_l = bc;
> Jan> + else
> Jan> + bc_r = bc;
> Jan> + }
>
> It isn't obvious to me why this finds the leftmost element when there
> are multiple overlapping ones. What am I missing?
Maybe another comment helps? (I have put one there, before the "if (b->...)"
check.)
Thanks,
Jan
2009-10-14 Jan Kratochvil <jan.kratochvil@redhat.com>
Performance optimize large bp_location count.
* breakpoint.c (ALL_BP_LOCATIONS_SAFE): Remove.
(ALL_BP_LOCATIONS): New parameter BP_TMP. Use now bp_location and
bp_location_count.
(bp_location, bp_location_count)
(bp_location_placed_address_before_address_max)
(bp_location_shadow_len_after_address_max): New variables.
(moribund_locations, update_watchpoint): Update the bp_location
variable name.
(breakpoint_restore_shadows): Extend the comment. Move the variable
b to local blocks. Move the variables bp_addr, bp_size and bptoffset
to a local block. New variables bc_l, bc_r and bc. New binary search
for the left range boundary. New break on reaching the right range
boundary. Move shadow existence conditionals to ...
(bp_location_has_shadow): ... a new function.
(insert_breakpoint_locations): Replace the temp variable by bp_tmp.
Use now ALL_BP_LOCATIONS instead of ALL_BP_LOCATIONS_SAFE.
(remove_breakpoints, remove_hw_watchpoints, reattach_breakpoints)
(detach_breakpoints): New variable bp_tmp. Update the ALL_BP_LOCATIONS
calling convention.
(update_breakpoints_after_exec): New variable bplocp_tmp. Update the
ALL_BP_LOCATIONS calling convention.
(breakpoint_here_p, software_breakpoint_inserted_here_p)
(breakpoint_thread_match): New variable bptp_tmp. Drop the const
attribute of bpt. Update the ALL_BP_LOCATIONS calling convention.
(regular_breakpoint_inserted_here_p): Likewise. Update the bp_location
variable name.
(mark_breakpoints_out, breakpoint_init_inferior): New variable
bptp_tmp. Update the ALL_BP_LOCATIONS calling convention.
(bpstat_stop_status): New variables blp_tmp and update_locations. Drop
the const attribute of bl. Update the ALL_BP_LOCATIONS calling
convention. Protect HIT_COUNT increment by an ENABLE_STATE check.
Delay the update_global_location_list call using update_locations.
(set_default_breakpoint): Drop the check_duplicates name from comment.
(disable_breakpoints_in_shlibs, disable_breakpoints_in_unloaded_shlib):
New variable locp_tmp. Update the ALL_BP_LOCATIONS calling convention.
(bp_location_compare, bp_location_compare_for_qsort)
(bp_location_target_extensions_update): New functions.
(check_duplicates, check_duplicates_for): Remove, moving their code ...
(update_global_location_list): ... into this existing function. Remove
variables next, loc2, old_locations, ret and ix. New variables locp,
loc_first, old_location, old_locp and old_location_count. Stop using
global_next, create now the array bp_location, sort it by
bp_location_compare_for_qsort and call
bp_location_target_extensions_update. Change quadratic iteration by
loc2 into an in-sync scanning by locp and loc2p. Rename former loc
usage as old_loc.
(do_vec_free): Remove.
* breakpoint.h (struct bp_location <global_next>): Remove.
--- a/gdb/breakpoint.c
+++ b/gdb/breakpoint.c
@@ -112,8 +112,6 @@ struct breakpoint *set_raw_breakpoint (struct gdbarch *gdbarch,
struct symtab_and_line,
enum bptype);
-static void check_duplicates (struct breakpoint *);
-
static void breakpoint_adjustment_warning (CORE_ADDR, CORE_ADDR, int, int);
static CORE_ADDR adjust_breakpoint_address (struct gdbarch *gdbarch,
@@ -337,14 +335,14 @@ static int executing_startup;
B ? (TMP=B->next, 1): 0; \
B = TMP)
-/* Similar iterators for the low-level breakpoints. */
-
-#define ALL_BP_LOCATIONS(B) for (B = bp_location_chain; B; B = B->global_next)
+/* Similar iterator for the low-level breakpoints. SAFE variant is not
+ provided so update_global_location_list must not be called while executing
+ the block of ALL_BP_LOCATIONS. */
-#define ALL_BP_LOCATIONS_SAFE(B,TMP) \
- for (B = bp_location_chain; \
- B ? (TMP=B->global_next, 1): 0; \
- B = TMP)
+#define ALL_BP_LOCATIONS(B,BP_TMP) \
+ for (BP_TMP = bp_location; \
+ BP_TMP < bp_location + bp_location_count && (B = *BP_TMP); \
+ BP_TMP++)
/* Iterator for tracepoints only. */
@@ -356,10 +354,31 @@ static int executing_startup;
struct breakpoint *breakpoint_chain;
-struct bp_location *bp_location_chain;
+/* Array is sorted by bp_location_compare - primarily by the ADDRESS. */
+
+static struct bp_location **bp_location;
+
+/* Number of elements of BP_LOCATION. */
+
+static unsigned bp_location_count;
+
+/* Maximum alignment offset between bp_target_info.PLACED_ADDRESS and ADDRESS
+ for the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_placed_address_before_address_max;
+
+/* Maximum offset plus alignment between
+ bp_target_info.PLACED_ADDRESS + bp_target_info.SHADOW_LEN and ADDRESS for
+ the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_shadow_len_after_address_max;
/* The locations that no longer correspond to any breakpoint,
- unlinked from bp_location_chain, but for which a hit
+ unlinked from bp_location array, but for which a hit
may still be reported by a target. */
VEC(bp_location_p) *moribund_locations = NULL;
@@ -735,35 +754,96 @@ commands_from_control_command (char *arg, struct command_line *cmd)
}
error (_("No breakpoint number %d."), bnum);
}
-\f
+
+/* Return non-zero if BL->TARGET_INFO contains valid information. */
+
+static int
+bp_location_has_shadow (struct bp_location *bl)
+{
+ if (bl->loc_type != bp_loc_software_breakpoint)
+ return 0;
+ if (!bl->inserted)
+ return 0;
+ if (bl->target_info.shadow_len == 0)
+ /* bp isn't valid, or doesn't shadow memory. */
+ return 0;
+ return 1;
+}
+
/* Update BUF, which is LEN bytes read from the target address MEMADDR,
- by replacing any memory breakpoints with their shadowed contents. */
+ by replacing any memory breakpoints with their shadowed contents.
+
+ The range of shadowed area by each bp_location is:
+ b->address - bp_location_placed_address_before_address_max
+ up to b->address + bp_location_shadow_len_after_address_max
+ The range we were requested to resolve shadows for is:
+ memaddr ... memaddr + len
+ Thus the safe cutoff boundaries for performance optimization are
+ memaddr + len <= b->address - bp_location_placed_address_before_address_max
+ and:
+ b->address + bp_location_shadow_len_after_address_max <= memaddr */
void
breakpoint_restore_shadows (gdb_byte *buf, ULONGEST memaddr, LONGEST len)
{
- struct bp_location *b;
- CORE_ADDR bp_addr = 0;
- int bp_size = 0;
- int bptoffset = 0;
+ /* Left boundary, right boundary and median element of our binary search. */
+ unsigned bc_l, bc_r, bc;
+
+ /* Find BC_L, the leftmost element which may affect BUF content.  It is
+ safe to report a lower value, but a failure to report a higher one. */
+
+ bc_l = 0;
+ bc_r = bp_location_count;
+ while (bc_l + 1 < bc_r)
+ {
+ struct bp_location *b;
+
+ bc = (bc_l + bc_r) / 2;
+ b = bp_location[bc];
+
+ /* Check first that B->ADDRESS will not overflow due to the added constant.
+ Then advance the left boundary only if we are sure the BC element can
+ in no way affect the BUF content (MEMADDR to MEMADDR + LEN range).
+
+ Use the BP_LOCATION_SHADOW_LEN_AFTER_ADDRESS_MAX safety offset so that
+ we cannot miss a breakpoint with its shadow range tail still reaching
+ MEMADDR. */
- ALL_BP_LOCATIONS (b)
+ if (b->address + bp_location_shadow_len_after_address_max >= b->address
+ && b->address + bp_location_shadow_len_after_address_max <= memaddr)
+ bc_l = bc;
+ else
+ bc_r = bc;
+ }
+
+ /* Now do full processing of the found relevant range of elements. */
+
+ for (bc = bc_l; bc < bp_location_count; bc++)
{
+ struct bp_location *b = bp_location[bc];
+ CORE_ADDR bp_addr = 0;
+ int bp_size = 0;
+ int bptoffset = 0;
+
if (b->owner->type == bp_none)
warning (_("reading through apparently deleted breakpoint #%d?"),
b->owner->number);
- if (b->loc_type != bp_loc_software_breakpoint)
- continue;
- if (!b->inserted)
+ /* Performance optimization: any further element can no longer affect BUF
+ content. */
+
+ if (b->address >= bp_location_placed_address_before_address_max
+ && memaddr + len <= b->address
+ - bp_location_placed_address_before_address_max)
+ break;
+
+ if (!bp_location_has_shadow (b))
continue;
+
/* Addresses and length of the part of the breakpoint that
we need to copy. */
bp_addr = b->target_info.placed_address;
bp_size = b->target_info.shadow_len;
- if (bp_size == 0)
- /* bp isn't valid, or doesn't shadow memory. */
- continue;
if (bp_addr + bp_size <= memaddr)
/* The breakpoint is entirely before the chunk of memory we
@@ -912,7 +992,7 @@ update_watchpoint (struct breakpoint *b, int reparse)
struct bp_location *loc;
bpstat bs;
- /* We don't free locations. They are stored in bp_location_chain and
+ /* We don't free locations. They are stored in bp_location array and
update_global_locations will eventually delete them and remove
breakpoints if needed. */
b->loc = NULL;
@@ -1347,7 +1427,7 @@ static void
insert_breakpoint_locations (void)
{
struct breakpoint *bpt;
- struct bp_location *b, *temp;
+ struct bp_location *b, **bp_tmp;
int error = 0;
int val = 0;
int disabled_breaks = 0;
@@ -1360,7 +1440,7 @@ insert_breakpoint_locations (void)
there was an error. */
fprintf_unfiltered (tmp_error_stream, "Warning:\n");
- ALL_BP_LOCATIONS_SAFE (b, temp)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (!should_be_inserted (b) || b->inserted)
continue;
@@ -1434,10 +1514,10 @@ You may have requested too many hardware breakpoints/watchpoints.\n");
int
remove_breakpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1448,10 +1528,10 @@ remove_breakpoints (void)
int
remove_hw_watchpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted && b->loc_type == bp_loc_hardware_watchpoint)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1462,7 +1542,7 @@ remove_hw_watchpoints (void)
int
reattach_breakpoints (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val;
struct cleanup *old_chain = save_inferior_ptid ();
struct ui_file *tmp_error_stream = mem_fileopen ();
@@ -1471,7 +1551,7 @@ reattach_breakpoints (int pid)
make_cleanup_ui_file_delete (tmp_error_stream);
inferior_ptid = pid_to_ptid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
{
@@ -1574,7 +1654,7 @@ update_breakpoints_after_exec (void)
{
struct breakpoint *b;
struct breakpoint *temp;
- struct bp_location *bploc;
+ struct bp_location *bploc, **bplocp_tmp;
/* We're about to delete breakpoints from GDB's lists. If the
INSERTED flag is true, GDB will try to lift the breakpoints by
@@ -1584,7 +1664,7 @@ update_breakpoints_after_exec (void)
breakpoints out as soon as it detects an exec. We don't do that
here instead, because there may be other attempts to delete
breakpoints after detecting an exec and before reaching here. */
- ALL_BP_LOCATIONS (bploc)
+ ALL_BP_LOCATIONS (bploc, bplocp_tmp)
gdb_assert (!bploc->inserted);
ALL_BREAKPOINTS_SAFE (b, temp)
@@ -1687,7 +1767,7 @@ update_breakpoints_after_exec (void)
int
detach_breakpoints (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
struct cleanup *old_chain = save_inferior_ptid ();
@@ -1696,7 +1776,7 @@ detach_breakpoints (int pid)
/* Set inferior_ptid; remove_breakpoint uses this global. */
inferior_ptid = pid_to_ptid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
val |= remove_breakpoint (b, mark_inserted);
@@ -1827,9 +1907,9 @@ remove_breakpoint (struct bp_location *b, insertion_state_t is)
void
mark_breakpoints_out (void)
{
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
bpt->inserted = 0;
}
@@ -1849,7 +1929,7 @@ void
breakpoint_init_inferior (enum inf_context context)
{
struct breakpoint *b, *temp;
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int ix;
/* If breakpoint locations are shared across processes, then there's
@@ -1857,7 +1937,7 @@ breakpoint_init_inferior (enum inf_context context)
if (gdbarch_has_global_breakpoints (target_gdbarch))
return;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
if (bpt->owner->enable_state != bp_permanent)
bpt->inserted = 0;
@@ -1918,10 +1998,10 @@ breakpoint_init_inferior (enum inf_context context)
enum breakpoint_here
breakpoint_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -1961,16 +2041,16 @@ moribund_breakpoint_here_p (CORE_ADDR pc)
}
/* Returns non-zero if there's a breakpoint inserted at PC, which is
- inserted using regular breakpoint_chain/bp_location_chain mechanism.
+ inserted using regular breakpoint_chain / bp_location array mechanism.
This does not check for single-step breakpoints, which are
inserted and removed using direct target manipulation. */
int
regular_breakpoint_inserted_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -2011,10 +2091,10 @@ breakpoint_inserted_here_p (CORE_ADDR pc)
int
software_breakpoint_inserted_here_p (CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint)
continue;
@@ -2044,12 +2124,12 @@ software_breakpoint_inserted_here_p (CORE_ADDR pc)
int
breakpoint_thread_match (CORE_ADDR pc, ptid_t ptid)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
/* The thread and task IDs associated to PTID, computed lazily. */
int thread = -1;
int task = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -3157,16 +3237,16 @@ bpstat
bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
{
struct breakpoint *b = NULL;
- const struct bp_location *bl;
+ struct bp_location *bl, **blp_tmp;
struct bp_location *loc;
/* Root of the chain of bpstat's */
struct bpstats root_bs[1];
/* Pointer to the last thing in the chain currently. */
bpstat bs = root_bs;
int ix;
- int need_remove_insert;
+ int need_remove_insert, update_locations = 0;
- ALL_BP_LOCATIONS (bl)
+ ALL_BP_LOCATIONS (bl, blp_tmp)
{
b = bl->owner;
gdb_assert (b);
@@ -3208,14 +3288,15 @@ bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
if (bs->stop)
{
- ++(b->hit_count);
+ if (b->enable_state != bp_disabled)
+ ++(b->hit_count);
/* We will stop here */
if (b->disposition == disp_disable)
{
if (b->enable_state != bp_permanent)
b->enable_state = bp_disabled;
- update_global_location_list (0);
+ update_locations = 1;
}
if (b->silent)
bs->print = 0;
@@ -3235,6 +3316,10 @@ bpstat_stop_status (CORE_ADDR bp_addr, ptid_t ptid)
bs->print_it = print_it_noop;
}
+ /* Delay this call which would break the ALL_BP_LOCATIONS iteration above. */
+ if (update_locations)
+ update_global_location_list (0);
+
for (ix = 0; VEC_iterate (bp_location_p, moribund_locations, ix, loc); ++ix)
{
if (loc->address == bp_addr)
@@ -4197,9 +4282,8 @@ set_default_breakpoint (int valid, CORE_ADDR addr, struct symtab *symtab,
(or use it for any other purpose either).
More specifically, each of the following breakpoint types will always
- have a zero valued address and we don't want check_duplicates() to mark
- breakpoints of any of these types to be a duplicate of an actual
- breakpoint at address zero:
+ have a zero valued address and we don't want to mark breakpoints of any of
+ these types to be a duplicate of an actual breakpoint at address zero:
bp_watchpoint
bp_hardware_watchpoint
@@ -4219,88 +4303,6 @@ breakpoint_address_is_meaningful (struct breakpoint *bpt)
&& type != bp_catchpoint);
}
-/* Rescan breakpoints at the same address and section as BPT,
- marking the first one as "first" and any others as "duplicates".
- This is so that the bpt instruction is only inserted once.
- If we have a permanent breakpoint at the same place as BPT, make
- that one the official one, and the rest as duplicates. */
-
-static void
-check_duplicates_for (CORE_ADDR address, struct obj_section *section)
-{
- struct bp_location *b;
- int count = 0;
- struct bp_location *perm_bp = 0;
-
- ALL_BP_LOCATIONS (b)
- if (b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled
- && !b->shlib_disabled
- && b->address == address /* address / overlay match */
- && (!overlay_debugging || b->section == section)
- && breakpoint_address_is_meaningful (b->owner))
- {
- /* Have we found a permanent breakpoint? */
- if (b->owner->enable_state == bp_permanent)
- {
- perm_bp = b;
- break;
- }
-
- count++;
- b->duplicate = count > 1;
- }
-
- /* If we found a permanent breakpoint at this address, go over the
- list again and declare all the other breakpoints there (except
- other permanent breakpoints) to be the duplicates. */
- if (perm_bp)
- {
- perm_bp->duplicate = 0;
-
- /* Permanent breakpoint should always be inserted. */
- if (! perm_bp->inserted)
- internal_error (__FILE__, __LINE__,
- _("allegedly permanent breakpoint is not "
- "actually inserted"));
-
- ALL_BP_LOCATIONS (b)
- if (b != perm_bp)
- {
- if (b->owner->enable_state != bp_permanent
- && b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled && !b->shlib_disabled
- && b->address == address /* address / overlay match */
- && (!overlay_debugging || b->section == section)
- && breakpoint_address_is_meaningful (b->owner))
- {
- if (b->inserted)
- internal_error (__FILE__, __LINE__,
- _("another breakpoint was inserted on top of "
- "a permanent breakpoint"));
-
- b->duplicate = 1;
- }
- }
- }
-}
-
-static void
-check_duplicates (struct breakpoint *bpt)
-{
- struct bp_location *bl = bpt->loc;
-
- if (! breakpoint_address_is_meaningful (bpt))
- return;
-
- for (; bl; bl = bl->next)
- check_duplicates_for (bl->address, bl->section);
-}
-
static void
breakpoint_adjustment_warning (CORE_ADDR from_addr, CORE_ADDR to_addr,
int bnum, int have_bnum)
@@ -4710,9 +4712,9 @@ create_solib_event_breakpoint (struct gdbarch *gdbarch, CORE_ADDR address)
void
disable_breakpoints_in_shlibs (void)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
/* We apply the check to all breakpoints, including disabled
@@ -4742,7 +4744,7 @@ disable_breakpoints_in_shlibs (void)
static void
disable_breakpoints_in_unloaded_shlib (struct so_list *solib)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
int disabled_shlib_breaks = 0;
/* SunOS a.out shared libraries are always mapped, so do not
@@ -4753,7 +4755,7 @@ disable_breakpoints_in_unloaded_shlib (struct so_list *solib)
&& bfd_get_flavour (exec_bfd) == bfd_target_aout_flavour)
return;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
if ((loc->loc_type == bp_loc_hardware_breakpoint
@@ -7748,14 +7750,80 @@ breakpoint_auto_delete (bpstat bs)
}
}
-/* A cleanup function which destroys a vector. */
+/* A comparison function for bp_location A and B, interfaced to qsort.
+ Sort elements primarily by their ADDRESS (no matter what
+ breakpoint_address_is_meaningful says for their OWNER), secondarily by
+ ordering bp_permanent OWNERed elements first, and tertiarily just ensuring
+ the array is sorted in a stable way despite qsort being an unstable
+ algorithm. */
+
+static int
+bp_location_compare (struct bp_location *a, struct bp_location *b)
+{
+ int a_perm = a->owner->enable_state == bp_permanent;
+ int b_perm = b->owner->enable_state == bp_permanent;
+
+ if (a->address != b->address)
+ return (a->address > b->address) - (a->address < b->address);
+
+ /* Sort permanent breakpoints first. */
+ if (a_perm != b_perm)
+ return (a_perm < b_perm) - (a_perm > b_perm);
+
+ /* Make the user-visible order stable across GDB runs. Locations of the same
+ breakpoint can be sorted in arbitrary order. */
+
+ if (a->owner->number != b->owner->number)
+ return (a->owner->number > b->owner->number)
+ - (a->owner->number < b->owner->number);
+
+ return (a > b) - (a < b);
+}
+
+/* Interface bp_location_compare as the COMPAR parameter of qsort function. */
+
+static int
+bp_location_compare_for_qsort (const void *ap, const void *bp)
+{
+ struct bp_location *a = *(void **) ap;
+ struct bp_location *b = *(void **) bp;
+
+ return bp_location_compare (a, b);
+}
+
+/* Set bp_location_placed_address_before_address_max and
+ bp_location_shadow_len_after_address_max according to the current content of
+ the bp_location array. */
static void
-do_vec_free (void *p)
+bp_location_target_extensions_update (void)
{
- VEC(bp_location_p) **vec = p;
- if (*vec)
- VEC_free (bp_location_p, *vec);
+ struct bp_location *bl, **blp_tmp;
+
+ bp_location_placed_address_before_address_max = 0;
+ bp_location_shadow_len_after_address_max = 0;
+
+ ALL_BP_LOCATIONS (bl, blp_tmp)
+ {
+ CORE_ADDR start, end, addr;
+
+ if (!bp_location_has_shadow (bl))
+ continue;
+
+ start = bl->target_info.placed_address;
+ end = start + bl->target_info.shadow_len;
+
+ gdb_assert (bl->address >= start);
+ addr = bl->address - start;
+ if (addr > bp_location_placed_address_before_address_max)
+ bp_location_placed_address_before_address_max = addr;
+
+ /* Zero SHADOW_LEN would not pass bp_location_has_shadow. */
+
+ gdb_assert (bl->address < end);
+ addr = end - bl->address;
+ if (addr > bp_location_shadow_len_after_address_max)
+ bp_location_shadow_len_after_address_max = addr;
+ }
}
/* If SHOULD_INSERT is false, do not insert any breakpoint locations
@@ -7777,49 +7845,66 @@ static void
update_global_location_list (int should_insert)
{
struct breakpoint *b;
- struct bp_location **next = &bp_location_chain;
- struct bp_location *loc;
- struct bp_location *loc2;
- VEC(bp_location_p) *old_locations = NULL;
- int ret;
- int ix;
+ struct bp_location **locp, *loc;
struct cleanup *cleanups;
- cleanups = make_cleanup (do_vec_free, &old_locations);
- /* Store old locations for future reference. */
- for (loc = bp_location_chain; loc; loc = loc->global_next)
- VEC_safe_push (bp_location_p, old_locations, loc);
+ /* The first bp_location of each run of locations with the same ADDRESS,
+ the only one not marked as a DUPLICATE. */
+ struct bp_location *loc_first;
+
+ /* Saved former bp_location array which we compare against the newly built
+ bp_location from the current state of ALL_BREAKPOINTS. */
+ struct bp_location **old_location, **old_locp;
+ unsigned old_location_count;
+
+ old_location = bp_location;
+ old_location_count = bp_location_count;
+ bp_location = NULL;
+ bp_location_count = 0;
+ cleanups = make_cleanup (xfree, old_location);
- bp_location_chain = NULL;
ALL_BREAKPOINTS (b)
- {
- for (loc = b->loc; loc; loc = loc->next)
- {
- *next = loc;
- next = &(loc->global_next);
- *next = NULL;
- }
- }
+ for (loc = b->loc; loc; loc = loc->next)
+ bp_location_count++;
+
+ bp_location = xmalloc (sizeof (*bp_location) * bp_location_count);
+ locp = bp_location;
+ ALL_BREAKPOINTS (b)
+ for (loc = b->loc; loc; loc = loc->next)
+ *locp++ = loc;
+ qsort (bp_location, bp_location_count, sizeof (*bp_location),
+ bp_location_compare_for_qsort);
+
+ bp_location_target_extensions_update ();
/* Identify bp_location instances that are no longer present in the new
list, and therefore should be freed. Note that it's not necessary that
those locations should be removed from inferior -- if there's another
location at the same address (previously marked as duplicate),
- we don't need to remove/insert the location. */
- for (ix = 0; VEC_iterate(bp_location_p, old_locations, ix, loc); ++ix)
+ we don't need to remove/insert the location.
+
+ LOCP is kept in sync with OLD_LOCP, each pointing to the current and
+ former bp_location array state respectively. */
+
+ locp = bp_location;
+ for (old_locp = old_location; old_locp < old_location + old_location_count;
+ old_locp++)
{
- /* Tells if 'loc' is found amoung the new locations. If not, we
+ struct bp_location *old_loc = *old_locp;
+
+ /* Tells if 'old_loc' is found among the new locations. If not, we
have to free it. */
- int found_object = 0;
+ int found_object;
/* Tells if the location should remain inserted in the target. */
int keep_in_target = 0;
int removed = 0;
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- if (loc2 == loc)
- {
- found_object = 1;
- break;
- }
+
+ /* Skip LOCP entries which will definitely never be needed.  Stop either
+ at or above the one matching OLD_LOC. */
+ while (locp < bp_location + bp_location_count
+ && bp_location_compare (*locp, old_loc) < 0)
+ locp++;
+ found_object = locp < bp_location + bp_location_count && *locp == old_loc;
/* If this location is no longer present, and inserted, look if there's
maybe a new location at the same address. If so, mark that one
@@ -7827,11 +7912,11 @@ update_global_location_list (int should_insert)
don't have a time window where a breakpoint at certain location is not
inserted. */
- if (loc->inserted)
+ if (old_loc->inserted)
{
/* If the location is inserted now, we might have to remove it. */
- if (found_object && should_be_inserted (loc))
+ if (found_object && should_be_inserted (old_loc))
{
/* The location is still present in the location list, and still
should be inserted. Don't do anything. */
@@ -7842,37 +7927,46 @@ update_global_location_list (int should_insert)
/* The location is either no longer present, or got disabled.
See if there's another location at the same address, in which
case we don't need to remove this one from the target. */
- if (breakpoint_address_is_meaningful (loc->owner))
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- {
- /* For the sake of should_insert_location. The
- call to check_duplicates will fix up this later. */
- loc2->duplicate = 0;
- if (should_be_inserted (loc2)
- && loc2 != loc && loc2->address == loc->address)
- {
- loc2->inserted = 1;
- loc2->target_info = loc->target_info;
- keep_in_target = 1;
- break;
- }
- }
+
+ if (breakpoint_address_is_meaningful (old_loc->owner))
+ {
+ struct bp_location **loc2p;
+
+ for (loc2p = locp;
+ loc2p < bp_location + bp_location_count
+ && (*loc2p)->address == old_loc->address;
+ loc2p++)
+ {
+ struct bp_location *loc2 = *loc2p;
+
+ /* For the sake of should_be_inserted.
+ Duplicates check below will fix up this later. */
+ loc2->duplicate = 0;
+ if (loc2 != old_loc && should_be_inserted (loc2))
+ {
+ loc2->inserted = 1;
+ loc2->target_info = old_loc->target_info;
+ keep_in_target = 1;
+ break;
+ }
+ }
+ }
}
if (!keep_in_target)
{
- if (remove_breakpoint (loc, mark_uninserted))
+ if (remove_breakpoint (old_loc, mark_uninserted))
{
/* This is just about all we can do. We could keep this
location on the global list, and try to remove it next
time, but there's no particular reason why we will
succeed next time.
- Note that at this point, loc->owner is still valid,
+ Note that at this point, old_loc->owner is still valid,
as delete_breakpoint frees the breakpoint only
after calling us. */
printf_filtered (_("warning: Error removing breakpoint %d\n"),
- loc->owner->number);
+ old_loc->owner->number);
}
removed = 1;
}
@@ -7894,19 +7988,59 @@ update_global_location_list (int should_insert)
longer need to keep this breakpoint. This is just a
heuristic, but if it's wrong, we'll report unexpected SIGTRAP,
which is usability issue, but not a correctness problem. */
- loc->events_till_retirement = 3 * (thread_count () + 1);
- loc->owner = NULL;
+ old_loc->events_till_retirement = 3 * (thread_count () + 1);
+ old_loc->owner = NULL;
- VEC_safe_push (bp_location_p, moribund_locations, loc);
+ VEC_safe_push (bp_location_p, moribund_locations, old_loc);
}
else
- free_bp_location (loc);
+ free_bp_location (old_loc);
}
}
- ALL_BREAKPOINTS (b)
+ /* Rescan breakpoints at the same address and section,
+ marking the first one as "first" and any others as "duplicates".
+ This is so that the bpt instruction is only inserted once.
+ If we have a permanent breakpoint at the same place as BPT, make
+ that one the official one, and the rest as duplicates. Permanent
+ breakpoints are sorted first for the same address. */
+
+ loc_first = NULL;
+ ALL_BP_LOCATIONS (loc, locp)
{
- check_duplicates (b);
+ struct breakpoint *b = loc->owner;
+
+ if (b->enable_state == bp_disabled
+ || b->enable_state == bp_call_disabled
+ || b->enable_state == bp_startup_disabled
+ || !loc->enabled
+ || loc->shlib_disabled
+ || !breakpoint_address_is_meaningful (b))
+ continue;
+
+ /* Permanent breakpoint should always be inserted. */
+ if (b->enable_state == bp_permanent && ! loc->inserted)
+ internal_error (__FILE__, __LINE__,
+ _("allegedly permanent breakpoint is not "
+ "actually inserted"));
+
+ if (loc_first == NULL
+ /* address / overlay match */
+ || loc->address != loc_first->address
+ || (overlay_debugging && loc->section != loc_first->section))
+ {
+ loc_first = loc;
+ loc->duplicate = 0;
+ continue;
+ }
+
+ loc->duplicate = 1;
+
+ if (loc_first->owner->enable_state == bp_permanent && loc->inserted
+ && b->enable_state != bp_permanent)
+ internal_error (__FILE__, __LINE__,
+ _("another breakpoint was inserted on top of "
+ "a permanent breakpoint"));
}
if (breakpoints_always_inserted_mode () && should_insert
--- a/gdb/breakpoint.h
+++ b/gdb/breakpoint.h
@@ -228,10 +228,6 @@ struct bp_location
the same parent breakpoint. */
struct bp_location *next;
- /* Pointer to the next breakpoint location, in a global
- list of all breakpoint locations. */
- struct bp_location *global_next;
-
/* Type of this breakpoint location. */
enum bp_loc_type loc_type;
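The rescan loop in the hunk above replaces the old quadratic check_duplicates with one pass over the sorted array: within each run of locations sharing an address, only the first enabled one stays non-duplicate. A minimal sketch of that pass, using a simplified stand-in for bp_location (the names and struct here are illustrative, not GDB's actual declarations):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct bp_location: only the fields the
   duplicate-marking pass consults.  */
struct loc
{
  unsigned long address;
  int enabled;
  int duplicate;
};

/* One pass over LOCS (sorted by address): the first enabled location of
   each address run keeps duplicate == 0; any later location at the same
   address is marked duplicate, so the breakpoint instruction gets
   inserted only once per address.  */
static void
mark_duplicates (struct loc **locs, size_t count)
{
  struct loc *loc_first = NULL;
  size_t i;

  for (i = 0; i < count; i++)
    {
      struct loc *l = locs[i];

      if (!l->enabled)
	continue;

      if (loc_first == NULL || l->address != loc_first->address)
	{
	  /* Start of a new address run.  */
	  loc_first = l;
	  l->duplicate = 0;
	  continue;
	}

      l->duplicate = 1;
    }
}
```

Because the array is sorted, each address run is contiguous, so the whole marking is O(n) instead of the former O(n^2) rescanning.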
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: [patch] Performance optimize large bp_location count
2009-10-13 22:35 ` Jan Kratochvil
@ 2009-10-14 22:57 ` Tom Tromey
2009-10-25 19:41 ` Jan Kratochvil
0 siblings, 1 reply; 5+ messages in thread
From: Tom Tromey @ 2009-10-14 22:57 UTC (permalink / raw)
To: Jan Kratochvil; +Cc: gdb-patches
>>>>> "Jan" == Jan Kratochvil <jan.kratochvil@redhat.com> writes:
Jan> Still just this way it was a minimal changeset to get the same user
Jan> experience as the best breakpoints-modifying patch could be. After
Jan> other parts of GDB (symbol lookups) get accelerated more work may
Jan> be useful on this part.
This sounds reasonable to me.
Tom> I puzzled over the choice of a sorted array here, wondering if there
Tom> were perhaps some better data structure. A bit of rationale and
Tom> overview in the patch email would go a long way.
Jan> I do not remember it all but probably I just did not consider the
Jan> possibility of an incremental update, I tried the most easy patch
Jan> and it worked. Different data structure would be more appropriate
Jan> for the incremental update.
Thanks for this and your explanations, they were very useful.
Tom> It isn't obvious to me why this finds the leftmost element when there
Tom> are multiple overlapping ones. What am I missing?
Jan> Maybe another comment? (put there one before "if (b->...)")
Thanks.
Jan> 2009-10-14 Jan Kratochvil <jan.kratochvil@redhat.com>
Jan> Performance optimize large bp_location count.
Jan> * breakpoint.c (ALL_BP_LOCATIONS_SAFE): Remove.
[...]
Ok.
Tom
* Re: [patch] Performance optimize large bp_location count
2009-10-14 22:57 ` Tom Tromey
@ 2009-10-25 19:41 ` Jan Kratochvil
0 siblings, 0 replies; 5+ messages in thread
From: Jan Kratochvil @ 2009-10-25 19:41 UTC (permalink / raw)
To: Tom Tromey; +Cc: gdb-patches
On Thu, 15 Oct 2009 00:57:02 +0200, Tom Tromey wrote:
> >>>>> "Jan" == Jan Kratochvil <jan.kratochvil@redhat.com> writes:
>
> Jan> Still just this way it was a minimal changeset to get the same user
> Jan> experience as the best breakpoints-modifying patch could be. After
> Jan> other parts of GDB (symbol lookups) get accelerated more work may
> Jan> be useful on this part.
>
> This sounds reasonable to me.
While updating the patch I reviewed it again; although I am more critical
of it now, I still think it is mostly reusable as a step towards eventually
keeping the global locations incrementally up to date.
> Jan> 2009-10-14 Jan Kratochvil <jan.kratochvil@redhat.com>
> Jan> Performance optimize large bp_location count.
> Jan> * breakpoint.c (ALL_BP_LOCATIONS_SAFE): Remove.
> [...]
>
> Ok.
Some updates had to be made on top of the multi-exec patch:
* Use breakpoint_address_match for address comparison.
* ALL_BP_LOCATIONS update for new breakpoint_program_space_exit and
remove_breakpoints_pid.
Checked-in.
Regression retested on {x86_64,x86_64-m32,i686}-fedora12-linux-gnu.
Thanks,
Jan
http://sourceware.org/ml/gdb-cvs/2009-10/msg00181.html
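The checked-in diff below replaces the quadratic "is this old location still present?" loop with a merge-style walk of two sorted arrays. A reduced sketch of that walk, under simplified types (bare addresses instead of bp_location; the names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Compare two locations by address only -- a stand-in for the real
   bp_location_compare.  */
static int
loc_compare (const int *a, const int *b)
{
  return (*a > *b) - (*a < *b);
}

/* For each entry of OLD_ARR (sorted), record in FOUND whether the same
   pointer still appears in NEW_ARR (sorted).  A single cursor advances
   through NEW_ARR, so the whole pass is O(old_n + new_n) rather than
   O(old_n * new_n).  */
static void
scan_in_sync (int **old_arr, size_t old_n, int **new_arr, size_t new_n,
	      int *found)
{
  size_t locp = 0, i;

  for (i = 0; i < old_n; i++)
    {
      /* Skip NEW_ARR entries which can never match OLD_ARR[i]; stop
	 either at or before the matching one.  */
      while (locp < new_n && loc_compare (new_arr[locp], old_arr[i]) < 0)
	locp++;
      found[i] = locp < new_n && new_arr[locp] == old_arr[i];
    }
}
```

The cursor never moves backwards, which is exactly why both arrays must be sorted by the same comparison function before the scan starts.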
--- src/gdb/ChangeLog 2009/10/25 09:09:01 1.11002
+++ src/gdb/ChangeLog 2009/10/25 19:35:25 1.11003
@@ -1,5 +1,62 @@
2009-10-25 Jan Kratochvil <jan.kratochvil@redhat.com>
+ Performance optimize large bp_location count.
+ * breakpoint.c (ALL_BP_LOCATIONS_SAFE): Remove.
+ (ALL_BP_LOCATIONS): New parameter BP_TMP. Use now bp_location and
+ bp_location_count.
+ (bp_location_chain): Remove variable.
+ (bp_location, bp_location_count)
+ (bp_location_placed_address_before_address_max)
+ (bp_location_shadow_len_after_address_max): New variables.
+ (moribund_locations, update_watchpoint): Update the bp_location
+ variable name.
+ (breakpoint_restore_shadows): Extend the comment. Move the variable
+ b to local blocks. Move the variables bp_addr, bp_size and bptoffset
+ to a local block. New variables bc_l, bc_r and bc. New binary search
+ for the left range boundary. New break on reaching the right range
+ boundary. Move shadow existence conditionals to ...
+ (bp_location_has_shadow): ... a new function.
+ (insert_breakpoint_locations): Replace the temp variable by bp_tmp.
+ Use now ALL_BP_LOCATIONS instead of ALL_BP_LOCATIONS_SAFE.
+ (remove_breakpoints, remove_hw_watchpoints, reattach_breakpoints)
+ (detach_breakpoints): New variable bp_tmp. Update the ALL_BP_LOCATIONS
+ calling convention.
+ (update_breakpoints_after_exec): New variable bplocp_tmp. Update the
+ ALL_BP_LOCATIONS calling convention.
+ (breakpoint_here_p, software_breakpoint_inserted_here_p)
+ (breakpoint_thread_match): New variable bptp_tmp. Drop the const
+ attribute of bpt. Update the ALL_BP_LOCATIONS calling convention.
+ (regular_breakpoint_inserted_here_p): Likewise. Update the bp_location
+ variable name.
+ (mark_breakpoints_out, breakpoint_init_inferior): New variable
+ bptp_tmp. Update the ALL_BP_LOCATIONS calling convention.
+ (bpstat_stop_status): New variables blp_tmp and update_locations. Drop
+ the const attribute of bl. Update the ALL_BP_LOCATIONS calling
+ convention. Protect HIT_COUNT increment by an ENABLE_STATE check.
+ Delay the update_global_location_list call using update_locations.
+ (set_default_breakpoint): Drop the check_duplicates name from comment.
+ (disable_breakpoints_in_shlibs, disable_breakpoints_in_unloaded_shlib):
+ New variable locp_tmp. Update the ALL_BP_LOCATIONS calling convention.
+ (bp_location_compare, bp_location_compare_for_qsort)
+ (bp_location_target_extensions_update): New functions.
+ (check_duplicates, check_duplicates_for): Remove, moving their code ...
+ (update_global_location_list): ... into this existing function. Remove
+ variables next, loc2, old_locations, ret and ix. New variables locp,
+ loc_first, old_location, old_locp and old_location_count. Stop using
+ global_next, create now the array bp_location, sort it by
+ bp_location_compare_for_qsort and call
+ bp_location_target_extensions_update. Change quadratic iteration by
+ loc2 into an in-sync scanning by locp and loc2p. Rename former loc
+ usage as old_loc.
+ (do_vec_free): Remove.
+ (breakpoint_program_space_exit): Update the ALL_BP_LOCATIONS calling
+ convention.
+ (remove_breakpoints_pid): New variable b_tmp. Update the
+ ALL_BP_LOCATIONS calling convention.
+ * breakpoint.h (struct bp_location <global_next>): Remove.
+
+2009-10-25 Jan Kratochvil <jan.kratochvil@redhat.com>
+
* mep-tdep.c: Update include for the new location cgen/bitset.h.
2009-10-23 Tom Tromey <tromey@redhat.com>
--- src/gdb/breakpoint.c 2009/10/19 09:51:40 1.420
+++ src/gdb/breakpoint.c 2009/10/25 19:35:26 1.421
@@ -112,8 +112,6 @@
struct symtab_and_line,
enum bptype);
-static void check_duplicates (struct breakpoint *);
-
static void breakpoint_adjustment_warning (CORE_ADDR, CORE_ADDR, int, int);
static CORE_ADDR adjust_breakpoint_address (struct gdbarch *gdbarch,
@@ -342,14 +340,14 @@
B ? (TMP=B->next, 1): 0; \
B = TMP)
-/* Similar iterators for the low-level breakpoints. */
-
-#define ALL_BP_LOCATIONS(B) for (B = bp_location_chain; B; B = B->global_next)
-
-#define ALL_BP_LOCATIONS_SAFE(B,TMP) \
- for (B = bp_location_chain; \
- B ? (TMP=B->global_next, 1): 0; \
- B = TMP)
+/* Similar iterator for the low-level breakpoints. SAFE variant is not
+ provided so update_global_location_list must not be called while executing
+ the block of ALL_BP_LOCATIONS. */
+
+#define ALL_BP_LOCATIONS(B,BP_TMP) \
+ for (BP_TMP = bp_location; \
+ BP_TMP < bp_location + bp_location_count && (B = *BP_TMP); \
+ BP_TMP++)
/* Iterator for tracepoints only. */
@@ -361,10 +359,31 @@
struct breakpoint *breakpoint_chain;
-struct bp_location *bp_location_chain;
+/* Array is sorted by bp_location_compare - primarily by the ADDRESS. */
+
+static struct bp_location **bp_location;
+
+/* Number of elements of BP_LOCATION. */
+
+static unsigned bp_location_count;
+
+/* Maximum alignment offset between bp_target_info.PLACED_ADDRESS and ADDRESS
+ for the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_placed_address_before_address_max;
+
+/* Maximum offset plus alignment between
+ bp_target_info.PLACED_ADDRESS + bp_target_info.SHADOW_LEN and ADDRESS for
+ the current elements of BP_LOCATION which get a valid result from
+ bp_location_has_shadow. You can use it for roughly limiting the subrange of
+ BP_LOCATION to scan for shadow bytes for an address you need to read. */
+
+static CORE_ADDR bp_location_shadow_len_after_address_max;
/* The locations that no longer correspond to any breakpoint,
- unlinked from bp_location_chain, but for which a hit
+ unlinked from bp_location array, but for which a hit
may still be reported by a target. */
VEC(bp_location_p) *moribund_locations = NULL;
@@ -742,27 +761,90 @@
}
error (_("No breakpoint number %d."), bnum);
}
-\f
+
+/* Return non-zero if BL->TARGET_INFO contains valid information. */
+
+static int
+bp_location_has_shadow (struct bp_location *bl)
+{
+ if (bl->loc_type != bp_loc_software_breakpoint)
+ return 0;
+ if (!bl->inserted)
+ return 0;
+ if (bl->target_info.shadow_len == 0)
+ /* bp isn't valid, or doesn't shadow memory. */
+ return 0;
+ return 1;
+}
+
/* Update BUF, which is LEN bytes read from the target address MEMADDR,
- by replacing any memory breakpoints with their shadowed contents. */
+ by replacing any memory breakpoints with their shadowed contents.
+
+ The range of shadowed area by each bp_location is:
+ b->address - bp_location_placed_address_before_address_max
+ up to b->address + bp_location_shadow_len_after_address_max
+ The range we were requested to resolve shadows for is:
+ memaddr ... memaddr + len
+ Thus the safe cutoff boundaries for performance optimization are
+ memaddr + len <= b->address - bp_location_placed_address_before_address_max
+ and:
+ b->address + bp_location_shadow_len_after_address_max <= memaddr */
void
breakpoint_restore_shadows (gdb_byte *buf, ULONGEST memaddr, LONGEST len)
{
- struct bp_location *b;
- CORE_ADDR bp_addr = 0;
- int bp_size = 0;
- int bptoffset = 0;
+ /* Left boundary, right boundary and median element of our binary search. */
+ unsigned bc_l, bc_r, bc;
+
+ /* Find BC_L which is a leftmost element which may affect BUF content. It is
+ safe to report lower value but a failure to report higher one. */
+
+ bc_l = 0;
+ bc_r = bp_location_count;
+ while (bc_l + 1 < bc_r)
+ {
+ struct bp_location *b;
+
+ bc = (bc_l + bc_r) / 2;
+ b = bp_location[bc];
- ALL_BP_LOCATIONS (b)
+ /* First check that B->ADDRESS will not overflow due to the added constant.
+ Then advance the left boundary only if we are sure the BC element can
+ in no way affect the BUF content (MEMADDR to MEMADDR + LEN range).
+
+ Use the BP_LOCATION_SHADOW_LEN_AFTER_ADDRESS_MAX safety offset so that
+ we cannot miss a breakpoint with its shadow range tail still reaching
+ MEMADDR. */
+
+ if (b->address + bp_location_shadow_len_after_address_max >= b->address
+ && b->address + bp_location_shadow_len_after_address_max <= memaddr)
+ bc_l = bc;
+ else
+ bc_r = bc;
+ }
+
+ /* Now do full processing of the found relevant range of elements. */
+
+ for (bc = bc_l; bc < bp_location_count; bc++)
{
+ struct bp_location *b = bp_location[bc];
+ CORE_ADDR bp_addr = 0;
+ int bp_size = 0;
+ int bptoffset = 0;
+
if (b->owner->type == bp_none)
warning (_("reading through apparently deleted breakpoint #%d?"),
b->owner->number);
- if (b->loc_type != bp_loc_software_breakpoint)
- continue;
- if (!b->inserted)
+ /* Performance optimization: any further element can no longer affect BUF
+ content. */
+
+ if (b->address >= bp_location_placed_address_before_address_max
+ && memaddr + len <= b->address
+ - bp_location_placed_address_before_address_max)
+ break;
+
+ if (!bp_location_has_shadow (b))
continue;
if (!breakpoint_address_match (b->target_info.placed_address_space, 0,
current_program_space->aspace, 0))
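The bc_l/bc_r loop in the hunk above is a plain binary search for the leftmost array element that could still shadow the requested memory range; returning a slightly-too-low index is safe because the linear scan that follows filters precisely. A reduced sketch, simplified to bare addresses (illustrative names, not GDB's):

```c
#include <assert.h>

typedef unsigned long addr_t;

/* Return the leftmost index in ADDRS (sorted ascending) that might
   still shadow a read starting at MEMADDR, where MAX_AFTER plays the
   role of bp_location_shadow_len_after_address_max.  It is safe to
   return a lower index than strictly needed, never a higher one.  */
static unsigned
shadow_search_left (const addr_t *addrs, unsigned count,
		    addr_t memaddr, addr_t max_after)
{
  unsigned l = 0, r = count;

  while (l + 1 < r)
    {
      unsigned m = (l + r) / 2;
      addr_t end = addrs[m] + max_after;

      /* Advance L only when ADDRS[M]'s shadow certainly ends at or
	 before MEMADDR, and the addition did not wrap around.  */
      if (end >= addrs[m] && end <= memaddr)
	l = m;
      else
	r = m;
    }
  return l;
}
```

This turns each breakpoint_restore_shadows call from a full scan of every location into a logarithmic search plus a short scan of the overlapping range, which is where the bulk of the measured speedup in breakpoint_restore_shadows comes from.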
@@ -772,9 +854,6 @@
we need to copy. */
bp_addr = b->target_info.placed_address;
bp_size = b->target_info.shadow_len;
- if (bp_size == 0)
- /* bp isn't valid, or doesn't shadow memory. */
- continue;
if (bp_addr + bp_size <= memaddr)
/* The breakpoint is entirely before the chunk of memory we
@@ -924,7 +1003,7 @@
bpstat bs;
struct program_space *frame_pspace;
- /* We don't free locations. They are stored in bp_location_chain and
+ /* We don't free locations. They are stored in bp_location array and
update_global_locations will eventually delete them and remove
breakpoints if needed. */
b->loc = NULL;
@@ -1341,7 +1420,7 @@
breakpoint_program_space_exit (struct program_space *pspace)
{
struct breakpoint *b, *b_temp;
- struct bp_location *loc, *loc_temp;
+ struct bp_location *loc, **loc_temp;
/* Remove any breakpoint that was set through this program space. */
ALL_BREAKPOINTS_SAFE (b, b_temp)
@@ -1352,7 +1431,7 @@
/* Breakpoints set through other program spaces could have locations
bound to PSPACE as well. Remove those. */
- ALL_BP_LOCATIONS_SAFE (loc, loc_temp)
+ ALL_BP_LOCATIONS (loc, loc_temp)
{
struct bp_location *tmp;
@@ -1406,7 +1485,7 @@
insert_breakpoint_locations (void)
{
struct breakpoint *bpt;
- struct bp_location *b, *temp;
+ struct bp_location *b, **bp_tmp;
int error = 0;
int val = 0;
int disabled_breaks = 0;
@@ -1421,7 +1500,7 @@
save_current_space_and_thread ();
- ALL_BP_LOCATIONS_SAFE (b, temp)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
struct thread_info *tp;
CORE_ADDR last_addr;
@@ -1527,10 +1606,10 @@
int
remove_breakpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1543,11 +1622,11 @@
int
remove_breakpoints_pid (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **b_tmp;
int val;
struct inferior *inf = find_inferior_pid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, b_tmp)
{
if (b->pspace != inf->pspace)
continue;
@@ -1565,10 +1644,10 @@
int
remove_hw_watchpoints (void)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->inserted && b->loc_type == bp_loc_hardware_watchpoint)
val |= remove_breakpoint (b, mark_uninserted);
@@ -1580,7 +1659,7 @@
reattach_breakpoints (int pid)
{
struct cleanup *old_chain;
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val;
struct ui_file *tmp_error_stream = mem_fileopen ();
int dummy1 = 0, dummy2 = 0;
@@ -1598,7 +1677,7 @@
make_cleanup_ui_file_delete (tmp_error_stream);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->pspace != inf->pspace)
continue;
@@ -1714,7 +1793,7 @@
{
struct breakpoint *b;
struct breakpoint *temp;
- struct bp_location *bploc;
+ struct bp_location *bploc, **bplocp_tmp;
/* We're about to delete breakpoints from GDB's lists. If the
INSERTED flag is true, GDB will try to lift the breakpoints by
@@ -1724,7 +1803,7 @@
breakpoints out as soon as it detects an exec. We don't do that
here instead, because there may be other attempts to delete
breakpoints after detecting an exec and before reaching here. */
- ALL_BP_LOCATIONS (bploc)
+ ALL_BP_LOCATIONS (bploc, bplocp_tmp)
if (bploc->pspace == current_program_space)
gdb_assert (!bploc->inserted);
@@ -1831,7 +1910,7 @@
int
detach_breakpoints (int pid)
{
- struct bp_location *b;
+ struct bp_location *b, **bp_tmp;
int val = 0;
struct cleanup *old_chain = save_inferior_ptid ();
struct inferior *inf = current_inferior ();
@@ -1841,7 +1920,7 @@
/* Set inferior_ptid; remove_breakpoint_1 uses this global. */
inferior_ptid = pid_to_ptid (pid);
- ALL_BP_LOCATIONS (b)
+ ALL_BP_LOCATIONS (b, bp_tmp)
{
if (b->pspace != inf->pspace)
continue;
@@ -2006,9 +2085,9 @@
void
mark_breakpoints_out (void)
{
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
if (bpt->pspace == current_program_space)
bpt->inserted = 0;
}
@@ -2029,7 +2108,7 @@
breakpoint_init_inferior (enum inf_context context)
{
struct breakpoint *b, *temp;
- struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int ix;
struct program_space *pspace = current_program_space;
@@ -2038,7 +2117,7 @@
if (gdbarch_has_global_breakpoints (target_gdbarch))
return;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->pspace == pspace
&& bpt->owner->enable_state != bp_permanent)
@@ -2110,10 +2189,10 @@
enum breakpoint_here
breakpoint_here_p (struct address_space *aspace, CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -2155,16 +2234,16 @@
}
/* Returns non-zero if there's a breakpoint inserted at PC, which is
- inserted using regular breakpoint_chain/bp_location_chain mechanism.
+ inserted using regular breakpoint_chain / bp_location array mechanism.
This does not check for single-step breakpoints, which are
inserted and removed using direct target manipulation. */
int
regular_breakpoint_inserted_here_p (struct address_space *aspace, CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -2206,10 +2285,10 @@
int
software_breakpoint_inserted_here_p (struct address_space *aspace, CORE_ADDR pc)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
int any_breakpoint_here = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint)
continue;
@@ -2241,12 +2320,12 @@
breakpoint_thread_match (struct address_space *aspace, CORE_ADDR pc,
ptid_t ptid)
{
- const struct bp_location *bpt;
+ struct bp_location *bpt, **bptp_tmp;
/* The thread and task IDs associated to PTID, computed lazily. */
int thread = -1;
int task = 0;
- ALL_BP_LOCATIONS (bpt)
+ ALL_BP_LOCATIONS (bpt, bptp_tmp)
{
if (bpt->loc_type != bp_loc_software_breakpoint
&& bpt->loc_type != bp_loc_hardware_breakpoint)
@@ -3358,16 +3437,16 @@
CORE_ADDR bp_addr, ptid_t ptid)
{
struct breakpoint *b = NULL;
- const struct bp_location *bl;
+ struct bp_location *bl, **blp_tmp;
struct bp_location *loc;
/* Root of the chain of bpstat's */
struct bpstats root_bs[1];
/* Pointer to the last thing in the chain currently. */
bpstat bs = root_bs;
int ix;
- int need_remove_insert;
+ int need_remove_insert, update_locations = 0;
- ALL_BP_LOCATIONS (bl)
+ ALL_BP_LOCATIONS (bl, blp_tmp)
{
b = bl->owner;
gdb_assert (b);
@@ -3409,14 +3488,15 @@
if (bs->stop)
{
- ++(b->hit_count);
+ if (b->enable_state != bp_disabled)
+ ++(b->hit_count);
/* We will stop here */
if (b->disposition == disp_disable)
{
if (b->enable_state != bp_permanent)
b->enable_state = bp_disabled;
- update_global_location_list (0);
+ update_locations = 1;
}
if (b->silent)
bs->print = 0;
@@ -3436,6 +3516,10 @@
bs->print_it = print_it_noop;
}
+ /* Delay this call which would break the ALL_BP_LOCATIONS iteration above. */
+ if (update_locations)
+ update_global_location_list (0);
+
for (ix = 0; VEC_iterate (bp_location_p, moribund_locations, ix, loc); ++ix)
{
if (breakpoint_address_match (loc->pspace->aspace, loc->address,
@@ -4446,9 +4530,8 @@
(or use it for any other purpose either).
More specifically, each of the following breakpoint types will always
- have a zero valued address and we don't want check_duplicates() to mark
- breakpoints of any of these types to be a duplicate of an actual
- breakpoint at address zero:
+ have a zero valued address and we don't want to mark breakpoints of any of
+ these types to be a duplicate of an actual breakpoint at address zero:
bp_watchpoint
bp_hardware_watchpoint
@@ -4482,92 +4565,6 @@
&& addr1 == addr2);
}
-/* Rescan breakpoints at the same address and section as BPT,
- marking the first one as "first" and any others as "duplicates".
- This is so that the bpt instruction is only inserted once.
- If we have a permanent breakpoint at the same place as BPT, make
- that one the official one, and the rest as duplicates. */
-
-static void
-check_duplicates_for (struct address_space *aspace,
- CORE_ADDR address,
- struct obj_section *section)
-{
- struct bp_location *b;
- int count = 0;
- struct bp_location *perm_bp = 0;
-
- ALL_BP_LOCATIONS (b)
- if (b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled
- && !b->shlib_disabled
- && (!overlay_debugging || b->section == section)
- && breakpoint_address_is_meaningful (b->owner)
- && breakpoint_address_match (b->pspace->aspace, b->address,
- aspace, address))
- {
- /* Have we found a permanent breakpoint? */
- if (b->owner->enable_state == bp_permanent)
- {
- perm_bp = b;
- break;
- }
-
- count++;
- b->duplicate = count > 1;
- }
-
- /* If we found a permanent breakpoint at this address, go over the
- list again and declare all the other breakpoints there (except
- other permanent breakpoints) to be the duplicates. */
- if (perm_bp)
- {
- perm_bp->duplicate = 0;
-
- /* Permanent breakpoint should always be inserted. */
- if (! perm_bp->inserted)
- internal_error (__FILE__, __LINE__,
- _("allegedly permanent breakpoint is not "
- "actually inserted"));
-
- ALL_BP_LOCATIONS (b)
- if (b != perm_bp)
- {
- if (b->owner->enable_state != bp_permanent
- && b->owner->enable_state != bp_disabled
- && b->owner->enable_state != bp_call_disabled
- && b->owner->enable_state != bp_startup_disabled
- && b->enabled && !b->shlib_disabled
- && breakpoint_address_is_meaningful (b->owner)
- && breakpoint_address_match (b->pspace->aspace, b->address,
- aspace, address)
- && (!overlay_debugging || b->section == section))
- {
- if (b->inserted)
- internal_error (__FILE__, __LINE__,
- _("another breakpoint was inserted on top of "
- "a permanent breakpoint"));
-
- b->duplicate = 1;
- }
- }
- }
-}
-
-static void
-check_duplicates (struct breakpoint *bpt)
-{
- struct bp_location *bl = bpt->loc;
-
- if (! breakpoint_address_is_meaningful (bpt))
- return;
-
- for (; bl; bl = bl->next)
- check_duplicates_for (bl->pspace->aspace, bl->address, bl->section);
-}
-
static void
breakpoint_adjustment_warning (CORE_ADDR from_addr, CORE_ADDR to_addr,
int bnum, int have_bnum)
@@ -4988,9 +4985,9 @@
void
disable_breakpoints_in_shlibs (void)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
/* We apply the check to all breakpoints, including disabled
@@ -5021,7 +5018,7 @@
static void
disable_breakpoints_in_unloaded_shlib (struct so_list *solib)
{
- struct bp_location *loc;
+ struct bp_location *loc, **locp_tmp;
int disabled_shlib_breaks = 0;
/* SunOS a.out shared libraries are always mapped, so do not
@@ -5032,7 +5029,7 @@
&& bfd_get_flavour (exec_bfd) == bfd_target_aout_flavour)
return;
- ALL_BP_LOCATIONS (loc)
+ ALL_BP_LOCATIONS (loc, locp_tmp)
{
struct breakpoint *b = loc->owner;
if ((loc->loc_type == bp_loc_hardware_breakpoint
@@ -8079,14 +8076,80 @@
}
}
-/* A cleanup function which destroys a vector. */
+/* A comparison function for bp_location A and B being interfaced to qsort.
+ Sort elements primarily by their ADDRESS (no matter what
+ breakpoint_address_is_meaningful says for its OWNER), secondarily by ordering
+ first bp_permanent OWNERed elements and tertiarily just ensuring the array
+ is sorted in a stable way despite qsort being an unstable algorithm. */
+
+static int
+bp_location_compare (struct bp_location *a, struct bp_location *b)
+{
+ int a_perm = a->owner->enable_state == bp_permanent;
+ int b_perm = b->owner->enable_state == bp_permanent;
+
+ if (a->address != b->address)
+ return (a->address > b->address) - (a->address < b->address);
+
+ /* Sort permanent breakpoints first. */
+ if (a_perm != b_perm)
+ return (a_perm < b_perm) - (a_perm > b_perm);
+
+ /* Make the user-visible order stable across GDB runs. Locations of the same
+ breakpoint can be sorted in arbitrary order. */
+
+ if (a->owner->number != b->owner->number)
+ return (a->owner->number > b->owner->number)
+ - (a->owner->number < b->owner->number);
+
+ return (a > b) - (a < b);
+}
+
+/* Interface bp_location_compare as the COMPAR parameter of qsort function. */
+
+static int
+bp_location_compare_for_qsort (const void *ap, const void *bp)
+{
+ struct bp_location *a = *(void **) ap;
+ struct bp_location *b = *(void **) bp;
+
+ return bp_location_compare (a, b);
+}
+
+/* Set bp_location_placed_address_before_address_max and
+ bp_location_shadow_len_after_address_max according to the current content of
+ the bp_location array. */
static void
-do_vec_free (void *p)
+bp_location_target_extensions_update (void)
{
- VEC(bp_location_p) **vec = p;
- if (*vec)
- VEC_free (bp_location_p, *vec);
+ struct bp_location *bl, **blp_tmp;
+
+ bp_location_placed_address_before_address_max = 0;
+ bp_location_shadow_len_after_address_max = 0;
+
+ ALL_BP_LOCATIONS (bl, blp_tmp)
+ {
+ CORE_ADDR start, end, addr;
+
+ if (!bp_location_has_shadow (bl))
+ continue;
+
+ start = bl->target_info.placed_address;
+ end = start + bl->target_info.shadow_len;
+
+ gdb_assert (bl->address >= start);
+ addr = bl->address - start;
+ if (addr > bp_location_placed_address_before_address_max)
+ bp_location_placed_address_before_address_max = addr;
+
+ /* Zero SHADOW_LEN would not pass bp_location_has_shadow. */
+
+ gdb_assert (bl->address < end);
+ addr = end - bl->address;
+ if (addr > bp_location_shadow_len_after_address_max)
+ bp_location_shadow_len_after_address_max = addr;
+ }
}
/* If SHOULD_INSERT is false, do not insert any breakpoint locations
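bp_location_compare's three-level key can be exercised on its own; the sketch below uses a hypothetical simplified struct (not GDB's) to show how the tie-breakers make qsort's output deterministic even though qsort itself gives no stability guarantee:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for bp_location plus the owner fields the
   comparator consults.  */
struct loc
{
  unsigned long address;
  int permanent;	/* owner->enable_state == bp_permanent */
  int number;		/* owner breakpoint number */
};

/* Primary key: ADDRESS ascending.  Secondary: permanent locations
   first.  Tertiary: owner number, then pointer identity, so the final
   order is fully determined regardless of qsort's internal moves.  */
static int
loc_compare_qsort (const void *ap, const void *bp)
{
  const struct loc *a = *(const struct loc *const *) ap;
  const struct loc *b = *(const struct loc *const *) bp;

  if (a->address != b->address)
    return (a->address > b->address) - (a->address < b->address);
  if (a->permanent != b->permanent)
    return (a->permanent < b->permanent) - (a->permanent > b->permanent);
  if (a->number != b->number)
    return (a->number > b->number) - (a->number < b->number);
  return (a > b) - (a < b);
}
```

Sorting permanent locations to the front of each address run is what lets the later duplicate-marking pass simply take the first element of a run as the official one.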
@@ -8108,49 +8171,66 @@
update_global_location_list (int should_insert)
{
struct breakpoint *b;
- struct bp_location **next = &bp_location_chain;
- struct bp_location *loc;
- struct bp_location *loc2;
- VEC(bp_location_p) *old_locations = NULL;
- int ret;
- int ix;
+ struct bp_location **locp, *loc;
struct cleanup *cleanups;
- cleanups = make_cleanup (do_vec_free, &old_locations);
- /* Store old locations for future reference. */
- for (loc = bp_location_chain; loc; loc = loc->global_next)
- VEC_safe_push (bp_location_p, old_locations, loc);
+ /* The first bp_location in each run of locations sharing the same ADDRESS,
+ i.e. the only one of the run not marked as a DUPLICATE. */
+ struct bp_location *loc_first;
+
+ /* Saved former bp_location array which we compare against the newly built
+ bp_location from the current state of ALL_BREAKPOINTS. */
+ struct bp_location **old_location, **old_locp;
+ unsigned old_location_count;
+
+ old_location = bp_location;
+ old_location_count = bp_location_count;
+ bp_location = NULL;
+ bp_location_count = 0;
+ cleanups = make_cleanup (xfree, old_location);
- bp_location_chain = NULL;
ALL_BREAKPOINTS (b)
- {
- for (loc = b->loc; loc; loc = loc->next)
- {
- *next = loc;
- next = &(loc->global_next);
- *next = NULL;
- }
- }
+ for (loc = b->loc; loc; loc = loc->next)
+ bp_location_count++;
+
+ bp_location = xmalloc (sizeof (*bp_location) * bp_location_count);
+ locp = bp_location;
+ ALL_BREAKPOINTS (b)
+ for (loc = b->loc; loc; loc = loc->next)
+ *locp++ = loc;
+ qsort (bp_location, bp_location_count, sizeof (*bp_location),
+ bp_location_compare_for_qsort);
+
+ bp_location_target_extensions_update ();
/* Identify bp_location instances that are no longer present in the new
list, and therefore should be freed. Note that it's not necessary that
those locations should be removed from inferior -- if there's another
location at the same address (previously marked as duplicate),
- we don't need to remove/insert the location. */
- for (ix = 0; VEC_iterate(bp_location_p, old_locations, ix, loc); ++ix)
+ we don't need to remove/insert the location.
+
+ LOCP is kept in sync with OLD_LOCP, pointing into the new and the
+ former bp_location arrays respectively. */
+
+ locp = bp_location;
+ for (old_locp = old_location; old_locp < old_location + old_location_count;
+ old_locp++)
{
- /* Tells if 'loc' is found amoung the new locations. If not, we
+ struct bp_location *old_loc = *old_locp;
+
+ /* Tells if 'old_loc' is found among the new locations. If not, we
have to free it. */
- int found_object = 0;
+ int found_object;
/* Tells if the location should remain inserted in the target. */
int keep_in_target = 0;
int removed = 0;
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- if (loc2 == loc)
- {
- found_object = 1;
- break;
- }
+
+ /* Skip LOCP entries which will definitely never be needed. Stop either
+ at or before the one matching OLD_LOC. */
+ while (locp < bp_location + bp_location_count
+ && bp_location_compare (*locp, old_loc) < 0)
+ locp++;
+ found_object = locp < bp_location + bp_location_count && *locp == old_loc;
/* If this location is no longer present, and inserted, look if there's
maybe a new location at the same address. If so, mark that one
@@ -8158,11 +8238,11 @@
don't have a time window where a breakpoint at certain location is not
inserted. */
- if (loc->inserted)
+ if (old_loc->inserted)
{
/* If the location is inserted now, we might have to remove it. */
- if (found_object && should_be_inserted (loc))
+ if (found_object && should_be_inserted (old_loc))
{
/* The location is still present in the location list, and still
should be inserted. Don't do anything. */
@@ -8173,41 +8253,49 @@
/* The location is either no longer present, or got disabled.
See if there's another location at the same address, in which
case we don't need to remove this one from the target. */
- if (breakpoint_address_is_meaningful (loc->owner))
- for (loc2 = bp_location_chain; loc2; loc2 = loc2->global_next)
- {
- /* For the sake of should_insert_location. The
- call to check_duplicates will fix up this later. */
- loc2->duplicate = 0;
- if (should_be_inserted (loc2)
- && loc2 != loc
- && breakpoint_address_match (loc2->pspace->aspace,
- loc2->address,
- loc->pspace->aspace,
- loc->address))
- {
- loc2->inserted = 1;
- loc2->target_info = loc->target_info;
- keep_in_target = 1;
- break;
- }
- }
+
+ if (breakpoint_address_is_meaningful (old_loc->owner))
+ {
+ struct bp_location **loc2p;
+
+ for (loc2p = locp;
+ loc2p < bp_location + bp_location_count
+ && breakpoint_address_match ((*loc2p)->pspace->aspace,
+ (*loc2p)->address,
+ old_loc->pspace->aspace,
+ old_loc->address);
+ loc2p++)
+ {
+ struct bp_location *loc2 = *loc2p;
+
+ /* For the sake of should_be_inserted.
+ The duplicates check below will fix this up later. */
+ loc2->duplicate = 0;
+ if (loc2 != old_loc && should_be_inserted (loc2))
+ {
+ loc2->inserted = 1;
+ loc2->target_info = old_loc->target_info;
+ keep_in_target = 1;
+ break;
+ }
+ }
+ }
}
if (!keep_in_target)
{
- if (remove_breakpoint (loc, mark_uninserted))
+ if (remove_breakpoint (old_loc, mark_uninserted))
{
/* This is just about all we can do. We could keep this
location on the global list, and try to remove it next
time, but there's no particular reason why we will
succeed next time.
- Note that at this point, loc->owner is still valid,
+ Note that at this point, old_loc->owner is still valid,
as delete_breakpoint frees the breakpoint only
after calling us. */
printf_filtered (_("warning: Error removing breakpoint %d\n"),
- loc->owner->number);
+ old_loc->owner->number);
}
removed = 1;
}
@@ -8229,19 +8317,60 @@
longer need to keep this breakpoint. This is just a
heuristic, but if it's wrong, we'll report unexpected SIGTRAP,
which is usability issue, but not a correctness problem. */
- loc->events_till_retirement = 3 * (thread_count () + 1);
- loc->owner = NULL;
+ old_loc->events_till_retirement = 3 * (thread_count () + 1);
+ old_loc->owner = NULL;
- VEC_safe_push (bp_location_p, moribund_locations, loc);
+ VEC_safe_push (bp_location_p, moribund_locations, old_loc);
}
else
- free_bp_location (loc);
+ free_bp_location (old_loc);
}
}
- ALL_BREAKPOINTS (b)
+ /* Rescan breakpoints at the same address and section,
+ marking the first one as "first" and any others as "duplicates".
+ This is so that the bpt instruction is only inserted once.
+ If we have a permanent breakpoint at the same place as BPT, make
+ that one the official one, and mark the rest as duplicates. Permanent
+ breakpoints are sorted first for the same address. */
+
+ loc_first = NULL;
+ ALL_BP_LOCATIONS (loc, locp)
{
- check_duplicates (b);
+ struct breakpoint *b = loc->owner;
+
+ if (b->enable_state == bp_disabled
+ || b->enable_state == bp_call_disabled
+ || b->enable_state == bp_startup_disabled
+ || !loc->enabled
+ || loc->shlib_disabled
+ || !breakpoint_address_is_meaningful (b))
+ continue;
+
+ /* Permanent breakpoint should always be inserted. */
+ if (b->enable_state == bp_permanent && ! loc->inserted)
+ internal_error (__FILE__, __LINE__,
+ _("allegedly permanent breakpoint is not "
+ "actually inserted"));
+
+ if (loc_first == NULL
+ || (overlay_debugging && loc->section != loc_first->section)
+ || !breakpoint_address_match (loc->pspace->aspace, loc->address,
+ loc_first->pspace->aspace,
+ loc_first->address))
+ {
+ loc_first = loc;
+ loc->duplicate = 0;
+ continue;
+ }
+
+ loc->duplicate = 1;
+
+ if (loc_first->owner->enable_state == bp_permanent && loc->inserted
+ && b->enable_state != bp_permanent)
+ internal_error (__FILE__, __LINE__,
+ _("another breakpoint was inserted on top of "
+ "a permanent breakpoint"));
}
if (breakpoints_always_inserted_mode () && should_insert
--- src/gdb/breakpoint.h 2009/10/19 09:51:40 1.100
+++ src/gdb/breakpoint.h 2009/10/25 19:35:26 1.101
@@ -231,10 +231,6 @@
the same parent breakpoint. */
struct bp_location *next;
- /* Pointer to the next breakpoint location, in a global
- list of all breakpoint locations. */
- struct bp_location *global_next;
-
/* Type of this breakpoint location. */
enum bp_loc_type loc_type;
Thread overview: 5+ messages
2009-09-04 20:17 [patch] Performance optimize large bp_location count Jan Kratochvil
2009-10-13 18:34 ` Tom Tromey
2009-10-13 22:35 ` Jan Kratochvil
2009-10-14 22:57 ` Tom Tromey
2009-10-25 19:41 ` Jan Kratochvil