* [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
@ 2009-01-20 22:14 Julian Brown
2009-01-21 18:07 ` Pedro Alves
2009-02-02 20:01 ` Daniel Jacobowitz
0 siblings, 2 replies; 24+ messages in thread
From: Julian Brown @ 2009-01-20 22:14 UTC (permalink / raw)
To: gdb-patches; +Cc: pedro
[-- Attachment #1: Type: text/plain, Size: 3586 bytes --]
Hi,
These patches provide an implementation of displaced stepping support
for ARM (Linux only for now), using the generic hooks provided by GDB.
ARM support is relatively tricky compared to some other architectures,
because there's no hardware single-stepping support. However, we can
fake it by making sure that displaced instructions don't modify control
flow, and by placing a software breakpoint after each displaced
instruction. Registers are also rewritten to handle instructions which
might read or write the PC. We must of course take care that the cleanup
routine puts things back in the correct places.
As a side-effect of the lack of h/w single-stepping support, we've
enabled displaced stepping in all cases, not just when stepping over
breakpoints (a patch of Pedro Alves's, attached, but mangled by me to
apply to mainline). I'm not sure if that's the most sensible approach:
for displaced stepping, we only care about not *removing* breakpoints
which might be hit by other threads, and we can still add temporary
breakpoints for the purpose of software single-stepping.
Only the traditional ARM instruction set is covered by this patch --
there's no support for Thumb or Thumb-2. For ARM instructions, though,
I think the coverage is pretty good.
Note that though this implementation is loosely inspired by the Linux
kernel's kprobes implementation, no code has been taken from there.
Regression tested using an x86 host and a remote target running
gdbserver, with (possibly) no regressions -- although several tests
seem to fluctuate randomly between passing/failing for me (with
timeouts) with or without the patch. I'm not sure how normal that is.
Also tested with "GDBFLAGS=-ex 'set displaced-stepping on'", which
seemed OK, and of course with hand-written spot-checks.
OK to apply, or any comments?
Cheers,
Julian
ChangeLog (always use displaced stepping)
2008-11-19 Pedro Alves <pedro@codesourcery.com>
* infrun.c (displaced_step_fixup): If this is a software
single-stepping arch, don't tell the target to single-step.
(resume): If this is a software single-stepping arch, and
displaced-stepping is enabled, use it for all single-step
requests.
ChangeLog (ARM displaced stepping)
gdb/
* arm-linux-tdep.c (arch-utils.h): Include file.
(arm_linux_init_abi): Initialise displaced stepping callbacks.
* arm-tdep.c (DISPLACED_TEMPS, DISPLACED_MODIFIED_INSNS): New
macros.
(struct displaced_step_closure): Define.
(displaced_read_reg, displaced_write_reg, copy_unmodified)
(copy_preload, copy_preload_reg, copy_copro_load_store)
(copy_b_bl_blx, copy_bx_blx_reg, copy_dp_imm, copy_dp_reg)
(copy_dp_shifted_reg, modify_store_pc, copy_extra_ld_st)
(copy_ldr_str_ldrb_strb, copy_block_xfer, copy_svc, copy_undef)
(copy_unpred): New.
(cleanup_branch, cleanup_dp_imm, cleanup_dp_reg)
(cleanup_dp_shifted_reg, cleanup_load, cleanup_store)
(cleanup_block_xfer, cleanup_svc, cleanup_kernel_helper_return)
(cleanup_preload, cleanup_copro_load_store): New functions (with
forward declarations).
(decode_misc_memhint_neon, decode_unconditional)
(decode_miscellaneous, decode_dp_misc, decode_ld_st_word_ubyte)
(decode_media, decode_b_bl_ldmstm, decode_ext_reg_ld_st)
(decode_svc_copro, arm_process_displaced_insn)
(arm_catch_kernel_helper_return, arm_displaced_step_copy_insn)
(arm_displaced_step_fixup): New.
(arm_gdbarch_init): Initialise max insn length field.
* arm-tdep.h (arm_displaced_step_copy_insn)
(arm_displaced_step_fixup): Add prototypes.
[-- Attachment #2: fsf-arm-displaced-stepping-1.diff --]
[-- Type: text/x-patch, Size: 46164 bytes --]
--- .pc/arm-displaced-stepping/gdb/arm-linux-tdep.c 2009-01-20 13:22:42.000000000 -0800
+++ gdb/arm-linux-tdep.c 2009-01-20 13:23:59.000000000 -0800
@@ -37,6 +37,7 @@
#include "arm-tdep.h"
#include "arm-linux-tdep.h"
#include "glibc-tdep.h"
+#include "arch-utils.h"
#include "gdb_string.h"
@@ -647,6 +648,14 @@ arm_linux_init_abi (struct gdbarch_info
/* Core file support. */
set_gdbarch_regset_from_core_section (gdbarch,
arm_linux_regset_from_core_section);
+
+ /* Displaced stepping. */
+ set_gdbarch_displaced_step_copy_insn (gdbarch,
+ arm_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_fixup (gdbarch, arm_displaced_step_fixup);
+ set_gdbarch_displaced_step_free_closure (gdbarch,
+ simple_displaced_step_free_closure);
+ set_gdbarch_displaced_step_location (gdbarch, displaced_step_at_entry_point);
}
void
--- .pc/arm-displaced-stepping/gdb/arm-tdep.c 2009-01-20 13:22:42.000000000 -0800
+++ gdb/arm-tdep.c 2009-01-20 13:33:10.000000000 -0800
@@ -2175,6 +2175,1456 @@ arm_software_single_step (struct frame_i
return 1;
}
+/* Displaced stepping support. */
+
+/* The maximum number of temporaries available for displaced instructions. */
+#define DISPLACED_TEMPS 5
+/* The maximum number of modified instructions generated for one single-stepped
+ instruction. */
+#define DISPLACED_MODIFIED_INSNS 5
+
+struct displaced_step_closure
+{
+ ULONGEST tmp[DISPLACED_TEMPS];
+ int rd;
+ int wrote_to_pc;
+ union
+ {
+ struct
+ {
+ int xfersize;
+ int rn; /* Writeback register. */
+ unsigned int immed : 1; /* Offset is immediate. */
+ unsigned int writeback : 1; /* Perform base-register writeback. */
+ } ldst;
+
+ struct
+ {
+ unsigned long dest;
+ unsigned int link : 1;
+ unsigned int exchange : 1;
+ } branch;
+
+ struct
+ {
+ unsigned int regmask;
+ int rn;
+ CORE_ADDR xfer_addr;
+ unsigned int load : 1;
+ unsigned int user : 1;
+ unsigned int increment : 1;
+ unsigned int before : 1;
+ unsigned int writeback : 1;
+ } block;
+
+ struct
+ {
+ unsigned int immed : 1;
+ } preload;
+ } u;
+ unsigned long modinsn[DISPLACED_MODIFIED_INSNS];
+ int numinsns;
+ CORE_ADDR insn_addr;
+ void (*cleanup) (struct regcache *, struct displaced_step_closure *);
+};
+
+static void cleanup_branch (struct regcache *, struct displaced_step_closure *);
+static void cleanup_dp_imm (struct regcache *, struct displaced_step_closure *);
+static void cleanup_dp_reg (struct regcache *, struct displaced_step_closure *);
+static void cleanup_dp_shifted_reg (struct regcache *,
+ struct displaced_step_closure *);
+static void cleanup_load (struct regcache *, struct displaced_step_closure *);
+static void cleanup_store (struct regcache *, struct displaced_step_closure *);
+static void cleanup_block_xfer (struct regcache *,
+ struct displaced_step_closure *);
+static void cleanup_svc (struct regcache *, struct displaced_step_closure *);
+static void cleanup_kernel_helper_return (struct regcache *,
+ struct displaced_step_closure *);
+static void cleanup_preload (struct regcache *,
+ struct displaced_step_closure *);
+static void cleanup_copro_load_store (struct regcache *,
+ struct displaced_step_closure *);
+
+ULONGEST
+displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno)
+{
+ ULONGEST ret;
+
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n",
+ (unsigned long) from + 8);
+ return (ULONGEST) from + 8; /* Pipeline offset. */
+ }
+ else
+ {
+ regcache_cooked_read_unsigned (regs, regno, &ret);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read r%d value %.8lx\n",
+ regno, (unsigned long) ret);
+ return ret;
+ }
+}
+
+static void arm_write_pc (struct regcache *, CORE_ADDR);
+
+void
+displaced_write_reg (struct regcache *regs, struct displaced_step_closure *dsc,
+ int regno, ULONGEST val)
+{
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing pc %.8lx\n",
+ (unsigned long) val);
+ arm_write_pc (regs, val);
+ dsc->wrote_to_pc = 1;
+ }
+ else
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing r%d value %.8lx\n",
+ regno, (unsigned long) val);
+ regcache_cooked_write_unsigned (regs, regno, val);
+ }
+}
+
+static int
+copy_unmodified (unsigned long insn, const char *iname,
+ struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.8lx, "
+ "opcode/class '%s'\n", insn, iname);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+static int
+copy_preload (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ insn);
+
+ /* Preload instructions:
+
+ {pli/pld} [rn, #+/-imm]
+ ->
+ {pli/pld} [r0, #+/-imm]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val);
+
+ dsc->u.preload.immed = 1;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+static int
+copy_preload_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ ULONGEST rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ insn);
+
+ /* Preload register-offset instructions:
+
+ {pli/pld} [rn, rm {, shift}]
+ ->
+ {pli/pld} [r0, r1 {, shift}]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rn_val);
+ displaced_write_reg (regs, dsc, 1, rm_val);
+
+ dsc->u.preload.immed = 0;
+
+ dsc->modinsn[0] = (insn & 0xfff0fff0) | 0x1;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+static void
+cleanup_preload (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+ if (!dsc->u.preload.immed)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1]);
+}
+
+static int
+copy_copro_load_store (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
+ "load/store insn %.8lx\n", insn);
+
+ /* Coprocessor load/store instructions:
+
+ {stc/stc2} [<Rn>, #+/-imm] (and other immediate addressing modes)
+ ->
+ {stc/stc2} [r0, #+/-imm].
+
+ ldc/ldc2 are handled identically. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val);
+
+ dsc->u.ldst.writeback = bit (insn, 25);
+ dsc->u.ldst.rn = rn;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_copro_load_store;
+
+ return 0;
+}
+
+static void
+cleanup_copro_load_store (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rn_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val);
+}
+
+static int
+copy_b_bl_blx (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ int exchange = (cond == 0xf);
+ int link = exchange || bit (insn, 24);
+ CORE_ADDR from = dsc->insn_addr;
+ long offset;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn "
+ "%.8lx\n", (exchange) ? "blx" : (link) ? "bl" : "b",
+ insn);
+
+ offset = bits (insn, 0, 23) << 2;
+ if (bit (offset, 25))
+ offset = offset | ~0x3ffffff;
+
+ /* Implement "BL<cond> <label>" as:
+
+ Preparation: tmp <- r0; r0 <- #0
+ Insn: mov<cond> r0, #1
+ Cleanup: if (r0) { r14 <- pc; pc <- label }, r0 <- tmp.
+
+ B<cond> similar, but don't set r14 in cleanup. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ displaced_write_reg (regs, dsc, 0, 0);
+
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = exchange;
+ dsc->u.branch.dest = from + 8 + offset;
+
+ if (exchange)
+ /* Implement as actual blx? */
+ dsc->modinsn[0] = 0xe3a00001; /* mov r0, #1. */
+ else
+ dsc->modinsn[0] = (cond << 28) | 0x3a00001; /* mov<cond> r0, #1. */
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+static int
+copy_bx_blx_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ /* BX: x12xxx1x
+ BLX: x12xxx3x. */
+ int link = bit (insn, 5);
+ unsigned int rm = bits (insn, 0, 3);
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s register insn "
+ "%.8lx\n", (link) ? "blx" : "bx", insn);
+
+ /* Implement "{BX,BLX}<cond> <reg>" as:
+
+ Preparation: dest <- rm; tmp <- r0; r0 <- #0
+ Insn: mov<cond> r0, #1
+ Cleanup: if (r0) { r14 <- pc; pc <- dest; }, r0 <- tmp.
+
+ Don't set r14 in cleanup for BX. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->u.branch.dest = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, 0);
+
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = 1;
+
+ dsc->modinsn[0] = (cond << 28) | 0x3a00001; /* mov<cond> r0, #1. */
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+static void
+cleanup_branch (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ ULONGEST branch_taken = displaced_read_reg (regs, from, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+
+ if (!branch_taken)
+ return;
+
+ if (dsc->u.branch.link)
+ {
+ ULONGEST pc = displaced_read_reg (regs, from, 15);
+ displaced_write_reg (regs, dsc, 14, pc - 4);
+ }
+
+ /* FIXME: BLX immediate is probably broken! */
+
+ displaced_write_reg (regs, dsc, 15, dsc->u.branch.dest);
+}
+
+static int
+copy_dp_imm (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying immediate %s insn "
+ "%.8lx\n", is_mov ? "move" : "data-processing", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] #imm
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2 <- r0, r1;
+ r0, r1 <- rd, rn
+ Insn: <op><cond> r0, r1, #imm
+ Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rd_val = displaced_read_reg (regs, from, rd);
+ displaced_write_reg (regs, dsc, 0, rd_val);
+ displaced_write_reg (regs, dsc, 1, rn_val);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = insn & 0xfff00fff;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x10000;
+
+ dsc->cleanup = &cleanup_dp_imm;
+
+ return 0;
+}
+
+static void
+cleanup_dp_imm (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1]);
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val);
+}
+
+static int
+copy_dp_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.8lx\n",
+ is_mov ? "move" : "data-processing", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm [, <shift>]
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3 <- r0, r1, r2;
+ r0, r1, r2 <- rd, rn, rm
+ Insn: <op><cond> r0, r1, r2 [, <shift>]
+ Cleanup: rd <- r0; r0, r1, r2 <- tmp1, tmp2, tmp3
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rd_val);
+ displaced_write_reg (regs, dsc, 1, rn_val);
+ displaced_write_reg (regs, dsc, 2, rm_val);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x2;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x10002;
+
+ dsc->cleanup = &cleanup_dp_reg;
+
+ return 0;
+}
+
+static void
+cleanup_dp_reg (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val;
+ int i;
+
+ rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ for (i = 0; i < 3; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i]);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val);
+}
+
+static int
+copy_dp_shifted_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int rs = bits (insn, 8, 11);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd), i;
+ ULONGEST rd_val, rn_val, rm_val, rs_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying shifted reg %s insn "
+ "%.8lx\n", is_mov ? "move" : "data-processing", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm, <shift> rs
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3, tmp4 <- r0, r1, r2, r3
+ r0, r1, r2, r3 <- rd, rn, rm, rs
+ Insn: <op><cond> r0, r1, r2, <shift> r3
+ Cleanup: tmp5 <- r0
+ r0, r1, r2, r3 <- tmp1, tmp2, tmp3, tmp4
+ rd <- tmp5
+ */
+
+ for (i = 0; i < 4; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ rs_val = displaced_read_reg (regs, from, rs);
+ displaced_write_reg (regs, dsc, 0, rd_val);
+ displaced_write_reg (regs, dsc, 1, rn_val);
+ displaced_write_reg (regs, dsc, 2, rm_val);
+ displaced_write_reg (regs, dsc, 3, rs_val);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x302;
+ else
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x10302;
+
+ dsc->cleanup = &cleanup_dp_shifted_reg;
+
+ return 0;
+}
+
+static void
+cleanup_dp_shifted_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ int i;
+
+ for (i = 0; i < 4; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i]);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val);
+}
+
+/* FIXME: This should depend on the arch version. */
+
+static ULONGEST
+modify_store_pc (ULONGEST pc)
+{
+ return pc + 4;
+}
+
+static int
+copy_extra_ld_st (unsigned long insn, int unprivileged, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 24);
+ unsigned int op2 = bits (insn, 5, 6);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ char load[12] = {0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1};
+ char bytesize[12] = {2, 2, 2, 2, 8, 1, 8, 1, 8, 2, 8, 2};
+ int immed = (op1 & 0x4) != 0;
+ int opcode;
+ ULONGEST rt_val, rt_val2 = 0, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %sextra load/store "
+ "insn %.8lx\n", unprivileged ? "unprivileged " : "",
+ insn);
+
+ opcode = ((op2 << 2) | (op1 & 0x1) | ((op1 & 0x4) >> 1)) - 4;
+
+ if (opcode < 0)
+ internal_error (__FILE__, __LINE__,
+ _("copy_extra_ld_st: instruction decode error"));
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ if (bytesize[opcode] == 8)
+ rt_val2 = displaced_read_reg (regs, from, rt + 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val);
+ if (bytesize[opcode] == 8)
+ displaced_write_reg (regs, dsc, 1, rt_val2);
+ displaced_write_reg (regs, dsc, 2, rn_val);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = bytesize[opcode];
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+
+ if (immed)
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, #imm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, +/-rm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, +/-r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->cleanup = load[opcode] ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+static int
+copy_ldr_str_ldrb_strb (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc, int load, int byte,
+ int usermode)
+{
+ int immed = !bit (insn, 25);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3); /* Only valid if !immed. */
+ ULONGEST rt_val, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s%s insn %.8lx\n",
+ load ? (byte ? "ldrb" : "ldr")
+ : (byte ? "strb" : "str"), usermode ? "t" : "",
+ insn);
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ if (!load && rt == 15)
+ rt_val = modify_store_pc (rt_val);
+
+ displaced_write_reg (regs, dsc, 0, rt_val);
+ displaced_write_reg (regs, dsc, 2, rn_val);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = byte ? 1 : 4;
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+
+ if (immed)
+ /* {ldr,str}[b]<cond> rt, [rn, #imm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}[b]<cond> rt, [rn, rm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->cleanup = load ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+static void
+cleanup_load (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rt_val, rt_val2 = 0, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ rt_val = displaced_read_reg (regs, from, 0);
+ if (dsc->u.ldst.xfersize == 8)
+ rt_val2 = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1]);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2]);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3]);
+
+ /* Handle register writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val);
+ /* Put result in right place. */
+ displaced_write_reg (regs, dsc, dsc->rd, rt_val);
+ if (dsc->u.ldst.xfersize == 8)
+ displaced_write_reg (regs, dsc, dsc->rd + 1, rt_val2);
+}
+
+static void
+cleanup_store (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1]);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2]);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3]);
+
+ /* Writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val);
+}
+
+/* Handle ldm/stm. Doesn't handle any difficult cases (exception return,
+ user-register transfer). */
+
+static int
+copy_block_xfer (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int load = bit (insn, 20);
+ int user = bit (insn, 22);
+ int increment = bit (insn, 23);
+ int before = bit (insn, 24);
+ int writeback = bit (insn, 21);
+ int rn = bits (insn, 16, 19);
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
+ "%.8lx\n", insn);
+
+ /* ldm/stm is always emulated, because there are too many corner cases to
+ deal with otherwise. Implement as mov<cond> r0, #1, then do actual
+ transfer in cleanup routine if condition passes. FIXME: Non-privileged
+ transfers. */
+
+ /* Hmm, this might not work, because of memory permissions differing for
+ the debugger & the debugged program. I wonder what to do about that? */
+
+ dsc->u.block.xfer_addr = displaced_read_reg (regs, from, rn);
+ dsc->u.block.rn = rn;
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ displaced_write_reg (regs, dsc, 0, 0);
+
+ dsc->u.block.load = load;
+ dsc->u.block.user = user;
+ dsc->u.block.increment = increment;
+ dsc->u.block.before = before;
+ dsc->u.block.writeback = writeback;
+
+ dsc->u.block.regmask = insn & 0xffff;
+
+ dsc->modinsn[0] = (insn & 0xf0000000) | 0x3a00001; /* mov<cond> r0, #1. */
+
+ dsc->cleanup = &cleanup_block_xfer;
+
+ return 0;
+}
+
+static void
+cleanup_block_xfer (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST do_transfer;
+ ULONGEST from = dsc->insn_addr;
+ int inc = dsc->u.block.increment;
+ int bump_before = dsc->u.block.before ? (inc ? 4 : -4) : 0;
+ int bump_after = dsc->u.block.before ? 0 : (inc ? 4 : -4);
+ unsigned long regmask = dsc->u.block.regmask;
+ int regno = inc ? 0 : 15;
+ CORE_ADDR xfer_addr = dsc->u.block.xfer_addr;
+
+ do_transfer = displaced_read_reg (regs, from, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0]);
+
+ if (!do_transfer)
+ return;
+
+ /* FIXME: Implement non-privileged transfers! */
+ gdb_assert (!dsc->u.block.user);
+
+ /* FIXME: Exception return. */
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: emulating block transfer: "
+ "%s %s %s\n", dsc->u.block.load ? "ldm" : "stm",
+ dsc->u.block.increment ? "inc" : "dec",
+ dsc->u.block.before ? "before" : "after");
+
+ while (regmask)
+ {
+ if (inc)
+ while (regno <= 15 && (regmask & (1 << regno)) == 0)
+ regno++;
+ else
+ while (regno >= 0 && (regmask & (1 << regno)) == 0)
+ regno--;
+
+ xfer_addr += bump_before;
+
+ if (dsc->u.block.load)
+ {
+ unsigned long memword = read_memory_unsigned_integer (xfer_addr, 4);
+ displaced_write_reg (regs, dsc, regno, memword);
+ }
+ else
+ {
+ ULONGEST regval = displaced_read_reg (regs, from, regno);
+ if (regno == 15)
+ regval = modify_store_pc (regval);
+ write_memory_unsigned_integer (xfer_addr, 4, regval);
+ }
+
+ xfer_addr += bump_after;
+
+ regmask &= ~(1 << regno);
+ }
+
+ if (dsc->u.block.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, xfer_addr);
+}
+
+static int
+copy_svc (unsigned long insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n",
+ insn);
+
+ /* Preparation: tmp[0] <- to.
+ Insn: unmodified svc.
+ Cleanup: if (pc == <scratch>+4) pc <- insn_addr + 4;
+ else leave PC alone. */
+
+ dsc->tmp[0] = to;
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+static void
+cleanup_svc (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ CORE_ADDR to = dsc->tmp[0];
+ ULONGEST pc;
+
+ /* Note: we want the real PC, so don't use displaced_read_reg here. */
+ regcache_cooked_read_unsigned (regs, ARM_PC_REGNUM, &pc);
+
+ if (pc == to + 4)
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, from + 4);
+
+ /* FIXME: What can we do about signal trampolines? */
+}
+
+static int
+copy_undef (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn %.8lx\n",
+ insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+static int
+copy_unpred (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredictable insn "
+ "%.8lx\n", insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+static int
+decode_misc_memhint_neon (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 26), op2 = bits (insn, 4, 7);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if (op1 == 0x10 && (op2 & 0x2) == 0x0 && (rn & 0xe) == 0x0)
+ return copy_unmodified (insn, "cps", dsc);
+ else if (op1 == 0x10 && op2 == 0x0 && (rn & 0xe) == 0x1)
+ return copy_unmodified (insn, "setend", dsc);
+ else if ((op1 & 0x60) == 0x20)
+ return copy_unmodified (insn, "neon dataproc", dsc);
+ else if ((op1 & 0x71) == 0x40)
+ return copy_unmodified (insn, "neon elt/struct load/store", dsc);
+ else if ((op1 & 0x77) == 0x41)
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ else if ((op1 & 0x77) == 0x45)
+ return copy_preload (insn, regs, dsc); /* pli. */
+ else if ((op1 & 0x77) == 0x51)
+ {
+ if (rn != 0xf)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else
+ return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x77) == 0x55)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else if (op1 == 0x57)
+ switch (op2)
+ {
+ case 0x1: return copy_unmodified (insn, "clrex", dsc);
+ case 0x4: return copy_unmodified (insn, "dsb", dsc);
+ case 0x5: return copy_unmodified (insn, "dmb", dsc);
+ case 0x6: return copy_unmodified (insn, "isb", dsc);
+ default: return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x63) == 0x43)
+ return copy_unpred (insn, dsc);
+ else if ((op2 & 0x1) == 0x0)
+ switch (op1 & ~0x80)
+ {
+ case 0x61:
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ case 0x65:
+ return copy_preload_reg (insn, regs, dsc); /* pli reg. */
+ case 0x71: case 0x75:
+ return copy_preload_reg (insn, regs, dsc); /* pld/pldw reg. */
+ case 0x63: case 0x67: case 0x73: case 0x77:
+ return copy_unpred (insn, dsc);
+ default:
+ return copy_undef (insn, dsc);
+ }
+ else
+ return copy_undef (insn, dsc); /* Probably unreachable. */
+}
+
+static int
+decode_unconditional (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 27) == 0)
+ return decode_misc_memhint_neon (insn, regs, dsc);
+ /* Switch on bits: 0bxxxxx321xxx0xxxxxxxxxxxxxxxxxxxx. */
+ else switch (((insn & 0x7000000) >> 23) | ((insn & 0x100000) >> 20))
+ {
+ case 0x0: case 0x2:
+ return copy_unmodified (insn, "srs", dsc);
+
+ case 0x1: case 0x3:
+ return copy_unmodified (insn, "rfe", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_b_bl_blx (insn, regs, dsc);
+
+ case 0x8:
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3: case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+
+ case 0x2:
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+
+ case 0x9:
+ {
+ int rn_f = (bits (insn, 16, 19) == 0xf);
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3:
+ /* ldc/ldc2 imm (undefined for rn == pc). */
+ return rn_f ? copy_undef (insn, dsc)
+ : copy_copro_load_store (insn, regs, dsc);
+
+ case 0x2:
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ /* ldc/ldc2 lit (undefined for rn != pc). */
+ return rn_f ? copy_copro_load_store (insn, regs, dsc)
+ : copy_undef (insn, dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+ }
+
+ case 0xa:
+ return copy_unmodified (insn, "stc/stc2", dsc);
+
+ case 0xb:
+ if (bits (insn, 16, 19) == 0xf)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 lit. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0xc:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ case 0xd:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+/* Decode miscellaneous instructions in dp/misc encoding space. */
+
+static int
+decode_miscellaneous (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op2 = bits (insn, 4, 6);
+ unsigned int op = bits (insn, 21, 22);
+ unsigned int op1 = bits (insn, 16, 19);
+
+ switch (op2)
+ {
+ case 0x0:
+ return copy_unmodified (insn, "mrs/msr", dsc);
+
+ case 0x1:
+ if (op == 0x1) /* bx. */
+ return copy_bx_blx_reg (insn, regs, dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "clz", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x2:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bxj", dsc); /* Not really supported. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x3:
+ if (op == 0x1)
+ return copy_bx_blx_reg (insn, regs, dsc); /* blx register. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x5:
+ return copy_unmodified (insn, "saturating add/sub", dsc);
+
+ case 0x7:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bkpt", dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "smc", dsc); /* Not really supported. */
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+static int
+decode_dp_misc (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ switch (bits (insn, 20, 24))
+ {
+ case 0x10:
+ return copy_unmodified (insn, "movw", dsc);
+
+ case 0x14:
+ return copy_unmodified (insn, "movt", dsc);
+
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "msr imm", dsc);
+
+ default:
+ return copy_dp_imm (insn, regs, dsc);
+ }
+ else
+ {
+ unsigned long op1 = bits (insn, 20, 24), op2 = bits (insn, 4, 7);
+
+ if ((op1 & 0x19) != 0x10 && (op2 & 0x1) == 0x0)
+ return copy_dp_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) != 0x10 && (op2 & 0x9) == 0x1)
+ return copy_dp_shifted_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x8) == 0x0)
+ return decode_miscellaneous (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x9) == 0x8)
+ return copy_unmodified (insn, "halfword mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x00 && op2 == 0x9)
+ return copy_unmodified (insn, "mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x10 && op2 == 0x9)
+ return copy_unmodified (insn, "synch", dsc);
+ else if (op2 == 0xb || (op2 & 0xd) == 0xd)
+ /* 2nd arg means "unprivileged". */
+ return copy_extra_ld_st (insn, (op1 & 0x12) == 0x02, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_ld_st_word_ubyte (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int a = bit (insn, 25), b = bit (insn, 4);
+ unsigned long op1 = bits (insn, 20, 24);
+ int rn_f = bits (insn, 16, 19) == 0xf;
+
+ if ((!a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02)
+ || (a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x02)
+ || (a && (op1 & 0x17) == 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03)
+ || (a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x03)
+ || (a && (op1 & 0x17) == 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06)
+ || (a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x06)
+ || (a && (op1 & 0x17) == 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 1);
+ else if ((!a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07)
+ || (a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x07)
+ || (a && (op1 & 0x17) == 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 1);
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_media (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ switch (bits (insn, 20, 24))
+ {
+ case 0x00: case 0x01: case 0x02: case 0x03:
+ return copy_unmodified (insn, "parallel add/sub signed", dsc);
+
+ case 0x04: case 0x05: case 0x06: case 0x07:
+ return copy_unmodified (insn, "parallel add/sub unsigned", dsc);
+
+ case 0x08: case 0x09: case 0x0a: case 0x0b:
+ case 0x0c: case 0x0d: case 0x0e: case 0x0f:
+ return copy_unmodified (insn, "decode/pack/unpack/saturate/reverse", dsc);
+
+ case 0x18:
+ if (bits (insn, 5, 7) == 0) /* op2. */
+ {
+ if (bits (insn, 12, 15) == 0xf)
+ return copy_unmodified (insn, "usad8", dsc);
+ else
+ return copy_unmodified (insn, "usada8", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1a: case 0x1b:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "sbfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1c: case 0x1d:
+ if (bits (insn, 5, 6) == 0x0) /* op2[1:0]. */
+ {
+ if (bits (insn, 0, 3) == 0xf)
+ return copy_unmodified (insn, "bfc", dsc);
+ else
+ return copy_unmodified (insn, "bfi", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1e: case 0x1f:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "ubfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_b_bl_ldmstm (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ return copy_b_bl_blx (insn, regs, dsc);
+ else
+ return copy_block_xfer (insn, regs, dsc);
+}
+
+static int
+decode_ext_reg_ld_st (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int opcode = bits (insn, 20, 24);
+
+ switch (opcode)
+ {
+ case 0x04: case 0x05: /* VFP/Neon mrrc/mcrr. */
+ return copy_unmodified (insn, "vfp/neon mrrc/mcrr", dsc);
+
+ case 0x08: case 0x0a: case 0x0c: case 0x0e:
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "vfp/neon vstm/vpush", dsc);
+
+ case 0x09: case 0x0b: case 0x0d: case 0x0f:
+ case 0x13: case 0x17:
+ return copy_unmodified (insn, "vfp/neon vldm/vpop", dsc);
+
+ case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */
+ case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */
+ /* Note: no writeback for these instructions. Bit 25 will always be
+ zero though (via caller), so the following works OK. */
+ return copy_copro_load_store (insn, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_svc_copro (unsigned long insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 25);
+ int op = bit (insn, 4);
+ unsigned int coproc = bits (insn, 8, 11);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if ((op1 & 0x20) == 0x00 && (op1 & 0x3a) != 0x00 && (coproc & 0xe) == 0xa)
+ return decode_ext_reg_ld_st (insn, regs, dsc);
+ else if ((op1 & 0x21) == 0x00 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+ else if ((op1 & 0x21) == 0x01 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 imm/lit. */
+ else if ((op1 & 0x3e) == 0x00)
+ return copy_undef (insn, dsc);
+ else if ((op1 & 0x3e) == 0x04 && (coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "neon 64bit xfer", dsc);
+ else if (op1 == 0x04 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+ else if (op1 == 0x05 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+ else if ((op1 & 0x30) == 0x20 && !op)
+ {
+ if ((coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "vfp dataproc", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+ }
+ else if ((op1 & 0x30) == 0x20 && op)
+ return copy_unmodified (insn, "neon 8/16/32 bit xfer", dsc);
+ else if ((op1 & 0x31) == 0x20 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else if ((op1 & 0x31) == 0x21 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else if ((op1 & 0x30) == 0x30)
+ return copy_svc (insn, to, regs, dsc);
+ else
+ return copy_undef (insn, dsc); /* Possibly unreachable. */
+}
+
+static struct displaced_step_closure *
+arm_process_displaced_insn (unsigned long insn, CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+ int err = 0;
+
+ /* Most displaced instructions use a 1-instruction scratch space, so set this
+ here and override below if/when necessary. */
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->cleanup = NULL;
+ dsc->wrote_to_pc = 0;
+
+ if ((insn & 0xf0000000) == 0xf0000000)
+ err = decode_unconditional (insn, regs, dsc);
+ else switch (((insn & 0x10) >> 4) | ((insn & 0xe000000) >> 24))
+ {
+ case 0x0: case 0x1: case 0x2: case 0x3:
+ err = decode_dp_misc (insn, regs, dsc);
+ break;
+
+ case 0x4: case 0x5: case 0x6:
+ err = decode_ld_st_word_ubyte (insn, regs, dsc);
+ break;
+
+ case 0x7:
+ err = decode_media (insn, dsc);
+ break;
+
+ case 0x8: case 0x9: case 0xa: case 0xb:
+ err = decode_b_bl_ldmstm (insn, regs, dsc);
+ break;
+
+ case 0xc: case 0xd: case 0xe: case 0xf:
+ err = decode_svc_copro (insn, to, regs, dsc);
+ break;
+ }
+
+ if (err)
+ internal_error (__FILE__, __LINE__,
+ _("arm_process_displaced_insn: Instruction decode error"));
+
+ return dsc;
+}
+
+static struct displaced_step_closure *
+arm_catch_kernel_helper_return (CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->cleanup = &cleanup_kernel_helper_return;
+ /* Say we wrote to the PC, else cleanup will set PC to the next
+ instruction in the helper, which isn't helpful. */
+ dsc->wrote_to_pc = 1;
+
+ /* Preparation: tmp[0] <- r14
+ r14 <- <scratch space>+4
+ *(<scratch space>+8) <- from
+ Insn: ldr pc, [r14, #4]
+ Cleanup: r14 <- tmp[0], pc <- tmp[0]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, ARM_LR_REGNUM);
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, (ULONGEST) to + 4);
+ write_memory_unsigned_integer (to + 8, 4, from);
+
+ dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */
+
+ return dsc;
+}
+
+static void
+cleanup_kernel_helper_return (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, dsc->tmp[0]);
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, dsc->tmp[0]);
+}
+
+struct displaced_step_closure *
+arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+ const size_t len = 4;
+ gdb_byte *buf = xmalloc (len);
+ struct displaced_step_closure *dsc;
+ unsigned long insn;
+ int i;
+
+ /* A linux-specific hack. Detect when we've entered (inaccessible by GDB)
+ kernel helpers, and stop at the return location. */
+ if (gdbarch_osabi (gdbarch) == GDB_OSABI_LINUX && from > 0xffff0000)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected kernel helper "
+ "at %.8lx\n", (unsigned long) from);
+
+ dsc = arm_catch_kernel_helper_return (from, to, regs);
+ }
+ else
+ {
+ insn = read_memory_unsigned_integer (from, len);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", insn, (unsigned long) from);
+
+ dsc = arm_process_displaced_insn (insn, from, to, regs);
+ }
+
+ /* Poke modified instruction(s). FIXME: Thumb, endianness. */
+ for (i = 0; i < dsc->numinsns; i++)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing insn %.8lx at "
+ "%.8lx\n", (unsigned long) dsc->modinsn[i],
+ (unsigned long) to + i * 4);
+ write_memory_unsigned_integer (to + i * 4, 4, dsc->modinsn[i]);
+ }
+
+ /* Put breakpoint afterwards. FIXME: Likewise. */
+ write_memory (to + dsc->numinsns * 4, tdep->arm_breakpoint,
+ tdep->arm_breakpoint_size);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copy 0x%s->0x%s: ",
+ paddr_nz (from), paddr_nz (to));
+
+ return dsc;
+}
+
+void
+arm_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ if (dsc->cleanup)
+ dsc->cleanup (regs, dsc);
+
+ if (!dsc->wrote_to_pc)
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, dsc->insn_addr + 4);
+}
+
+
#include "bfd-in2.h"
#include "libcoff.h"
@@ -3252,6 +4702,10 @@ arm_gdbarch_init (struct gdbarch_info in
/* On ARM targets char defaults to unsigned. */
set_gdbarch_char_signed (gdbarch, 0);
+ /* Note: for displaced stepping, this includes the breakpoint, and one word
+ of additional scratch space. */
+ set_gdbarch_max_insn_length (gdbarch, 12);
+
/* This should be low enough for everything. */
tdep->lowest_pc = 0x20;
tdep->jb_pc = -1; /* Longjump support not enabled by default. */
--- .pc/arm-displaced-stepping/gdb/arm-tdep.h 2009-01-20 13:22:42.000000000 -0800
+++ gdb/arm-tdep.h 2009-01-20 13:24:00.000000000 -0800
@@ -178,6 +178,13 @@ CORE_ADDR arm_skip_stub (struct frame_in
CORE_ADDR arm_get_next_pc (struct frame_info *, CORE_ADDR);
int arm_software_single_step (struct frame_info *);
+extern struct displaced_step_closure *
+ arm_displaced_step_copy_insn (struct gdbarch *, CORE_ADDR, CORE_ADDR,
+ struct regcache *);
+extern void arm_displaced_step_fixup (struct gdbarch *,
+ struct displaced_step_closure *,
+ CORE_ADDR, CORE_ADDR, struct regcache *);
+
/* Functions exported from armbsd-tdep.h. */
/* Return the appropriate register set for the core section identified
[-- Attachment #3: fsf-displaced-stepping-always-1.diff --]
[-- Type: text/x-patch, Size: 3990 bytes --]
--- .pc/displaced-step-always/gdb/infrun.c 2009-01-20 13:23:02.000000000 -0800
+++ gdb/infrun.c 2009-01-20 13:23:34.000000000 -0800
@@ -825,6 +825,9 @@ displaced_step_fixup (ptid_t event_ptid,
one now. */
while (displaced_step_request_queue)
{
+ struct regcache *regcache;
+ struct gdbarch *gdbarch;
+
struct displaced_step_request *head;
ptid_t ptid;
CORE_ADDR actual_pc;
@@ -847,8 +850,12 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_prepare (ptid);
+ regcache = get_thread_regcache (ptid);
+ gdbarch = get_regcache_arch (regcache);
+
if (debug_displaced)
{
+ CORE_ADDR actual_pc = regcache_read_pc (regcache);
gdb_byte buf[4];
fprintf_unfiltered (gdb_stdlog, "displaced: run 0x%s: ",
@@ -857,7 +864,10 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_dump_bytes (gdb_stdlog, buf, sizeof (buf));
}
- target_resume (ptid, 1, TARGET_SIGNAL_0);
+ if (gdbarch_software_single_step_p (gdbarch))
+ target_resume (ptid, 0, TARGET_SIGNAL_0);
+ else
+ target_resume (ptid, 1, TARGET_SIGNAL_0);
/* Done, we're stepping a thread. */
break;
@@ -970,6 +980,7 @@ resume (int step, enum target_signal sig
struct gdbarch *gdbarch = get_regcache_arch (regcache);
struct thread_info *tp = inferior_thread ();
CORE_ADDR pc = regcache_read_pc (regcache);
+ int hw_step = step;
QUIT;
@@ -1014,7 +1025,8 @@ a command like `return' or `jump' to con
comments in the handle_inferior event for dealing with 'random
signals' explain what we do instead. */
if (use_displaced_stepping (gdbarch)
- && tp->trap_expected
+ && (tp->trap_expected
+ || (step && gdbarch_software_single_step_p (gdbarch)))
&& sig == TARGET_SIGNAL_0)
{
if (!displaced_step_prepare (inferior_ptid))
@@ -1033,11 +1045,13 @@ a command like `return' or `jump' to con
if (step && gdbarch_software_single_step_p (gdbarch))
{
+ if (use_displaced_stepping (gdbarch))
+ hw_step = 0;
/* Do it the hard way, w/temp breakpoints */
- if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
+ else if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
{
/* ...and don't ask hardware to do it. */
- step = 0;
+ hw_step = 0;
/* and do not pull these breakpoints until after a `wait' in
`wait_for_inferior' */
singlestep_breakpoints_inserted_p = 1;
@@ -1085,7 +1099,7 @@ a command like `return' or `jump' to con
/* If STEP is set, it's a request to use hardware stepping
facilities. But in that case, we should never
use singlestep breakpoint. */
- gdb_assert (!(singlestep_breakpoints_inserted_p && step));
+ gdb_assert (!(singlestep_breakpoints_inserted_p && hw_step));
if (singlestep_breakpoints_inserted_p
&& stepping_past_singlestep_breakpoint)
@@ -1139,13 +1153,14 @@ a command like `return' or `jump' to con
/* Most targets can step a breakpoint instruction, thus
executing it normally. But if this one cannot, just
continue and we will hit it anyway. */
- if (step && breakpoint_inserted_here_p (pc))
- step = 0;
+ if (hw_step && breakpoint_inserted_here_p (pc))
+ hw_step = 0;
}
if (debug_displaced
&& use_displaced_stepping (gdbarch)
- && tp->trap_expected)
+ && (tp->trap_expected
+ || (step && gdbarch_software_single_step_p (gdbarch))))
{
struct regcache *resume_regcache = get_thread_regcache (resume_ptid);
CORE_ADDR actual_pc = regcache_read_pc (resume_regcache);
@@ -1161,7 +1176,7 @@ a command like `return' or `jump' to con
happens to apply to another thread. */
tp->stop_signal = TARGET_SIGNAL_0;
- target_resume (resume_ptid, step, sig);
+ target_resume (resume_ptid, hw_step, sig);
}
discard_cleanups (old_cleanups);
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-01-20 22:14 [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux Julian Brown
@ 2009-01-21 18:07 ` Pedro Alves
2009-02-02 20:01 ` Daniel Jacobowitz
1 sibling, 0 replies; 24+ messages in thread
From: Pedro Alves @ 2009-01-21 18:07 UTC (permalink / raw)
To: Julian Brown; +Cc: gdb-patches
Hi Julian,
On Tuesday 20 January 2009 22:13:55, Julian Brown wrote:
> As a side-effect of the lack of h/w single-stepping support, we've
> enabled displaced stepping in all cases, not just when stepping over
> breakpoints (a patch of Pedro Alves's, attached, but mangled by me to
> apply to mainline). I'm not sure if that's the most sensible approach
> (for displaced stepping, we only care about not *removing* breakpoints
> which might be hit by other threads. We can still add temporary
> breakpoints for the purpose of software single-stepping).
Right, you may end up with a temporary breakpoint over another breakpoint,
though. It would be better to use the standard software
single-stepping (set temp break at next pc, continue, remove break) for
standard stepping requests, and use displaced stepping only for stepping
over breakpoints. Unfortunately, you don't get that for free --- infrun.c
and friends don't know how to handle multiple simultaneous software
single-stepping requests, and that is required in non-stop mode.
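The "set temp break at next pc" scheme Pedro describes relies on the debugger computing the next PC itself (which is what arm_get_next_pc does). As a rough standalone illustration of one piece of that — not GDB's actual code — here is how the destination of an ARM B/BL instruction can be decoded:

```c
#include <stdint.h>

/* Destination of an ARM B/BL instruction: the 24-bit immediate is
   sign-extended, shifted left two bits, and added to the PC value at
   execution time, which is the instruction address plus 8.  */
uint32_t
arm_branch_dest (uint32_t addr, uint32_t insn)
{
  /* Shift imm24 to the top of the word, then arithmetic-shift back
     down by 6: this sign-extends and multiplies by 4 in one step.  */
  int32_t offset = ((int32_t) (insn << 8)) >> 6;
  return addr + 8 + offset;
}
```

The +8 reflects the ARM pipeline's PC-ahead-by-two-instructions behaviour; the real arm_get_next_pc must of course also handle data-processing writes to PC, loads of PC, condition codes, and so on.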
On Tuesday 20 January 2009 22:13:55, Julian Brown wrote:
> 2008-11-19 Pedro Alves <pedro@codesourcery.com>
>
> * infrun.c (displaced_step_fixup): If this is a software
> single-stepping arch, don't tell the target to single-step.
> (resume): If this is a software single-stepping arch, and
> displaced-stepping is enabled, use it for all single-step
> requests.
By default, displaced stepping is only enabled in non-stop mode, so,
I'm fine with this being placed in the tree, as an incremental step.
This should not affect standard all-stop mode. It is a step in the
right direction, IMO.
You'll need someone else to look over the ARM bits. I wouldn't
mind at all if you added a general description of what you're doing
to arm-tdep.c, though. Perhaps, even based on:
On Tuesday 20 January 2009 22:13:55, Julian Brown wrote:
> ARM support is relatively tricky compared to some other architectures,
> because there's no hardware single-stepping support. However we can
> fake it by making sure that displaced instructions don't modify control
> flow, and placing a software breakpoint after each displaced
> instruction. Also registers are rewritten to handle instructions which
> might read/write the PC. We must of course take care that the cleanup
> routine puts things back in the correct places.
--
Pedro Alves
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-01-20 22:14 [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux Julian Brown
2009-01-21 18:07 ` Pedro Alves
@ 2009-02-02 20:01 ` Daniel Jacobowitz
2009-05-16 18:19 ` Julian Brown
1 sibling, 1 reply; 24+ messages in thread
From: Daniel Jacobowitz @ 2009-02-02 20:01 UTC (permalink / raw)
To: Julian Brown; +Cc: gdb-patches, pedro
On Tue, Jan 20, 2009 at 10:13:55PM +0000, Julian Brown wrote:
> OK to apply, or any comments?
General comments:
* Please make more of the functions static.
* More comments would be nice. Some of the helper functions need
individual comments, and there needs to be an overview comment
explaining the structure. For instance, "cleanup_* does X, copy_X
does Y".
* Why do you convert all register reads to fixed temporaries - is this
much simpler than detecting and replacing only PC references? Or are
there other tricky cases? This causes a lot of register reads and
writes that are not strictly required.
* If you reordered the cleanup and copy functions, you wouldn't need
all the static prototypes.
* What's the point of executing mov<cond> on the target for BL<cond>?
At that point it seems like we ought to skip the target step entirely;
just simulate the instruction. We've already got a function to check
conditions (condition_true).
* Using arm_write_pc is a bit dodgy here; I don't think it's what we
want. That function updates the CPSR based on a number of things
including symbol tables. We know exactly what is supposed to happen
to CPSR for a given instruction and should honor it. An example of
why this matters: people regularly get a blx in Cortex-M3 code by use
of bad libraries, untyped ELF symbols, or other such circumstances.
That blx had better update the CPSR even when we step over it.
* You've got FIXMEs. Let's fix them rather than introduce bug
minefields, please. If they're questions, I can probably answer them.
> + /* FIXME: BLX immediate is probably broken! */
How so?
> +static int
> +copy_dp_imm (unsigned long insn, struct regcache *regs,
> + struct displaced_step_closure *dsc)
What's "dp" mean? Data-processing?
> +/* FIXME: This should depend on the arch version. */
> +
> +static ULONGEST
> +modify_store_pc (ULONGEST pc)
> +{
> + return pc + 4;
> +}
This one we might not be able to fix in current GDB but we can at
least expand the comment... if I remember right the +4 is correct for
everything since ARMv5 and most ARMv4?
> +/* Handle ldm/stm. Doesn't handle any difficult cases (exception return,
> + user-register transfer). */
If we don't handle them we should detect them and fail noisily.
> + /* ldm/stm is always emulated, because there are too many corner cases to
> + deal with otherwise. Implement as mov<cond> r0, #1, then do actual
> > + transfer in cleanup routine if condition passes. FIXME: Non-privileged
> + transfers. */
> +
> + /* Hmm, this might not work, because of memory permissions differing for
> + the debugger & the debugged program. I wonder what to do about that? */
Yes, we just can't emulate loads or stores. Anything that could cause
an exception that won't be delayed till the next instruction, I think.
> + if (!do_transfer)
> + return;
> +
> > + /* FIXME: Implement non-privileged transfers! */
> + gdb_assert (!dsc->u.block.user);
> +
> + /* FIXME: Exception return. */
This is not an internal error; it should not be a gdb_assert. Instead
we should error().
> +static int
> +copy_svc (unsigned long insn, CORE_ADDR to, struct regcache *regs,
> + struct displaced_step_closure *dsc)
> +{
> + CORE_ADDR from = dsc->insn_addr;
> +
> + if (debug_displaced)
> + fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n",
> + insn);
> +
> + /* Preparation: tmp[0] <- to.
> + Insn: unmodified svc.
> + Cleanup: if (pc == <scratch>+4) pc <- insn_addr + 4;
> + else leave PC alone. */
What about the saved PC? Don't really want the OS service routine to
return to the scratchpad.
> +static void
> +cleanup_svc (struct regcache *regs, struct displaced_step_closure *dsc)
> +{
> + CORE_ADDR from = dsc->insn_addr;
> + CORE_ADDR to = dsc->tmp[0];
> + ULONGEST pc;
> +
> + /* Note: we want the real PC, so don't use displaced_read_reg here. */
> + regcache_cooked_read_unsigned (regs, ARM_PC_REGNUM, &pc);
> +
> + if (pc == to + 4)
> + displaced_write_reg (regs, dsc, ARM_PC_REGNUM, from + 4);
> +
> + /* FIXME: What can we do about signal trampolines? */
> +}
Maybe this is referring to the same question I asked above?
If so, I think you get to unwind and if you find the scratchpad,
update the saved PC.
> +static struct displaced_step_closure *
> +arm_catch_kernel_helper_return (CORE_ADDR from, CORE_ADDR to,
> + struct regcache *regs)
Definitely would like a comment about what's going on here.
> +struct displaced_step_closure *
> +arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
> + CORE_ADDR from, CORE_ADDR to,
> + struct regcache *regs)
> +{
> + struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
> + const size_t len = 4;
> + gdb_byte *buf = xmalloc (len);
> + struct displaced_step_closure *dsc;
> + unsigned long insn;
> + int i;
> +
> + /* A linux-specific hack. Detect when we've entered (inaccessible by GDB)
> + kernel helpers, and stop at the return location. */
> + if (gdbarch_osabi (gdbarch) == GDB_OSABI_LINUX && from > 0xffff0000)
> + {
> + if (debug_displaced)
> + fprintf_unfiltered (gdb_stdlog, "displaced: detected kernel helper "
> + "at %.8lx\n", (unsigned long) from);
> +
> + dsc = arm_catch_kernel_helper_return (from, to, regs);
> + }
> + else
> + {
> + insn = read_memory_unsigned_integer (from, len);
> +
> + if (debug_displaced)
> + fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
> + "at %.8lx\n", insn, (unsigned long) from);
> +
> + dsc = arm_process_displaced_insn (insn, from, to, regs);
> + }
Can the Linux-specific hack go in arm-linux-tdep.c? Shouldn't have to
make many functions global to do that.
> + /* Poke modified instruction(s). FIXME: Thumb, endianness. */
I didn't see any endianness problems, but testing on BE is a good idea
anyway. There ought to be an error for Thumb somewhere.
> @@ -3252,6 +4702,10 @@ arm_gdbarch_init (struct gdbarch_info in
> /* On ARM targets char defaults to unsigned. */
> set_gdbarch_char_signed (gdbarch, 0);
>
> + /* Note: for displaced stepping, this includes the breakpoint, and one word
> + of additional scratch space. */
> + set_gdbarch_max_insn_length (gdbarch, 12);
> +
> /* This should be low enough for everything. */
> tdep->lowest_pc = 0x20;
> tdep->jb_pc = -1; /* Longjump support not enabled by default. */
Does this relate to the size of modinsns, which has its own constant?
--
Daniel Jacobowitz
CodeSourcery
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-02-02 20:01 ` Daniel Jacobowitz
@ 2009-05-16 18:19 ` Julian Brown
2009-06-09 17:37 ` Daniel Jacobowitz
0 siblings, 1 reply; 24+ messages in thread
From: Julian Brown @ 2009-05-16 18:19 UTC (permalink / raw)
To: Daniel Jacobowitz; +Cc: gdb-patches, pedro
[-- Attachment #1: Type: text/plain, Size: 10243 bytes --]
Hi,
This is a new version of the patch to support displaced stepping on
ARM. Many things are fixed from the last version posted previously
(January 20th), though we're probably not 100% of the way there yet.
Pedro Alves wrote:
> Right, you may end up with a temporary breakpoint over another
> breakpoint, though. It would be better to use the standard software
> single-stepping (set temp break at next pc, continue, remove break)
> for standard stepping requests, and use displaced stepping only for
> stepping over breakpoints. Unfortunately, you don't get that for
> free --- infrun.c and friends don't know how to handle multiple
> simultaneous software single-stepping requests, and that is required
> in non-stop mode.
I'm not sure what the status is here now. For testing purposes, I've
(still) been using a local patch which uses displaced stepping for all
single-step operations.
Daniel Jacobowitz <drow@false.org> wrote:
> * What's the point of executing mov<cond> on the target for BL<cond>?
> At that point it seems like we ought to skip the target step entirely;
> just simulate the instruction. We've already got a function to check
> conditions (condition_true).
I'm now using NOP instructions and condition_true, because the current
displaced stepping support wants to execute "something" rather than
nothing.
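For readers unfamiliar with condition_true: evaluating an ARM condition field amounts to a small table over the NZCV flags in the CPSR. A minimal sketch follows — GDB's real condition_true in arm-tdep.c is the authoritative version; the 0xf encoding is treated here as always-true, which is only correct for unconditional-extension-space instructions:

```c
/* Return nonzero if ARM condition code COND (bits 28-31 of an
   instruction, shifted down) passes given CPSR flags.  N, Z, C, V
   live in CPSR bits 31, 30, 29 and 28 respectively.  */
int
condition_true (unsigned int cond, unsigned int cpsr)
{
  int n = (cpsr >> 31) & 1, z = (cpsr >> 30) & 1;
  int c = (cpsr >> 29) & 1, v = (cpsr >> 28) & 1;

  switch (cond)
    {
    case 0x0: return z;             /* EQ */
    case 0x1: return !z;            /* NE */
    case 0x2: return c;             /* CS */
    case 0x3: return !c;            /* CC */
    case 0x4: return n;             /* MI */
    case 0x5: return !n;            /* PL */
    case 0x6: return v;             /* VS */
    case 0x7: return !v;            /* VC */
    case 0x8: return c && !z;       /* HI */
    case 0x9: return !c || z;       /* LS */
    case 0xa: return n == v;        /* GE */
    case 0xb: return n != v;        /* LT */
    case 0xc: return !z && n == v;  /* GT */
    case 0xd: return z || n != v;   /* LE */
    default:  return 1;             /* AL, and 0xf (see caveat above). */
    }
}
```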
> * Using arm_write_pc is a bit dodgy here; I don't think it's what we
> want. That function updates the CPSR based on a number of things
> including symbol tables. We know exactly what is supposed to happen
> to CPSR for a given instruction and should honor it. An example of
> why this matters: people regularly get a blx in Cortex-M3 code by use
> of bad libraries, untyped ELF symbols, or other such circumstances.
> That blx had better update the CPSR even when we step over it.
Fixed, I think.
> > +/* FIXME: This should depend on the arch version. */
> > +
> > +static ULONGEST
> > +modify_store_pc (ULONGEST pc)
> > +{
> > + return pc + 4;
> > +}
>
> This one we might not be able to fix in current GDB but we can at
> least expand the comment... if I remember right the +4 is correct for
> everything since ARMv5 and most ARMv4?
I've removed this function. Stores of PC now read back the offset, so
should be architecture-version independent (the strategy is slightly
different for STR vs. STM: see below).
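The read-back strategy can be sketched as a simple subtraction: after the displaced STR executes in the scratch space, the stored word is the scratch address plus whatever offset this core applies (+8 on most implementations), so the cleanup can recover the offset without knowing the architecture version. The helper name below is hypothetical, not taken from the patch:

```c
#include <stdint.h>

/* After a displaced STR of the PC has executed at scratch address TO,
   the stored word holds TO plus an implementation-defined offset.
   Recover that offset by subtraction and apply it to the original
   instruction address FROM, giving the value a non-displaced
   execution would have stored.  */
uint32_t
fixup_stored_pc (uint32_t stored, uint32_t to, uint32_t from)
{
  uint32_t offset = stored - to;
  return from + offset;
}
```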
> Yes, we just can't emulate loads or stores. Anything that could cause
> an exception that won't be delayed till the next instruction, I think.
LDM and STM are handled substantially differently now: STM instructions
are let through unmodified, and when PC is in the register list the
cleanup routine reads back the stored value and calculates the proper
offset for PC writes. The true (non-displaced) PC value (plus offset) is
then written to the appropriate memory location.
LDM instructions shuffle registers downwards into a contiguous list (to
avoid loading PC directly), then fix up register contents afterwards in
the cleanup routine. The case with a fully-populated register list is
still emulated, for now.
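The "shuffle downwards" step can be illustrated as a bitmask transformation: an LDM register list naming, say, {r1, r7, pc} is rewritten to load into {r0, r1, r2} instead, and the cleanup then moves each loaded value to its real destination (handling PC specially). This is a sketch with hypothetical names — the patch's actual bookkeeping lives in its dsc structure:

```c
/* Rewrite an LDM register-list MASK so the same number of registers
   is loaded, but into contiguous low registers r0..r(n-1).  MAP
   records, for each new register, the register the value is really
   destined for, so a cleanup routine can shuffle values back (and
   treat a loaded PC value as a branch target) afterwards.  */
unsigned int
remap_ldm_reglist (unsigned int mask, int map[16])
{
  int i, n = 0;

  for (i = 0; i < 16; i++)
    if (mask & (1u << i))
      map[n++] = i;

  return (1u << n) - 1;
}
```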
> > +static int
> > +copy_svc (unsigned long insn, CORE_ADDR to, struct regcache *regs,
> > + struct displaced_step_closure *dsc)
> > +{
> > + CORE_ADDR from = dsc->insn_addr;
> > +
> > + if (debug_displaced)
> > + fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn
> > %.8lx\n",
> > + insn);
> > +
> > + /* Preparation: tmp[0] <- to.
> > + Insn: unmodified svc.
> > + Cleanup: if (pc == <scratch>+4) pc <- insn_addr + 4;
> > + else leave PC alone. */
>
> What about the saved PC? Don't really want the OS service routine to
> return to the scratchpad.
>
> > + /* FIXME: What can we do about signal trampolines? */
>
> Maybe this is referring to the same question I asked above?
>
> If so, I think you get to unwind and if you find the scratchpad,
> update the saved PC.
I've tried to figure this out, and have totally drawn a blank so far.
AFAICT, the problem we're trying to solve runs as follows: sometimes, a
signal may be delivered to a process whilst it is executing a system
call. In that case, the kernel writes a signal trampoline to the user
program's stack space, and rewrites the state so that the trampoline is
executed when the system call returns.
Now: if we single-step that signal trampoline, we will see a system
call ("sigreturn") which does not return to the caller: rather, it
returns to a handler (in the user program) for the signal in question.
So, the expected result at present is that if displaced stepping is
used to single-step the sigreturn call, the debugger will lose control
of the debugged program.
Unfortunately I've been unable to figure out if the above is true, and
I can't quite figure out the mechanism in enough detail to know if
there's really anything we can do about it if so. My test program
(stolen from the internet and tweaked) runs as follows:
/*
 * signal.c - A signal-catching test program
 */
#include <stdio.h>
#include <unistd.h>
#include <signal.h>

void func (int, siginfo_t *, void *);
void func2 (int, siginfo_t *, void *);

int main (int argc, char **argv) {
  struct sigaction sa;
  printf ("Starting execution\n");
  sa.sa_sigaction = func;
  sigemptyset (&sa.sa_mask);
  sa.sa_flags = SA_SIGINFO | SA_RESETHAND;
  if (sigaction (SIGHUP, &sa, NULL))
    perror ("sigaction() failed");
  sa.sa_sigaction = func2;
  if (sigaction (SIGINT, &sa, NULL))
    perror ("sigaction() failed");
  printf ("sigaction() successful. Now sleeping\n");
  while (1)
    sleep (600);
  printf ("I should not come here\n");
  return 0;
}

void
func (int sig, siginfo_t *sinf, void *foo)
{
  printf ("Signal Handler: sig=%d scp=%p\n", sig, sinf);
  if (sinf)
    {
      printf ("siginfo.si_signo=%d\n", sinf->si_signo);
      printf ("siginfo.si_errno=%d\n", sinf->si_errno);
      printf ("siginfo.si_code=%d\n", sinf->si_code);
    }
  pause ();
  printf ("func() exiting\n");
  sleep (2);
}

void
func2 (int sig, siginfo_t *sinf, void *foo)
{
  printf ("Signal Handler: sig=%d scp=%p\n", sig, sinf);
  if (sinf)
    {
      printf ("siginfo.si_signo=%d\n", sinf->si_signo);
      printf ("siginfo.si_errno=%d\n", sinf->si_errno);
      printf ("siginfo.si_code=%d\n", sinf->si_code);
    }
  printf ("func2() exiting\n");
}
Without the debugger, this can be run, then sent signal 1 (which prints
the messages from func()), and then sent signal 2 (which prints the
messages from func2() -- presumably after running a signal trampoline,
though I'm not entirely certain of that), then sleeps. But with the
debugger, the program never gets beyond func(): and that's where I got
stuck.
> > +struct displaced_step_closure *
> > +arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
> > + CORE_ADDR from, CORE_ADDR to,
> > + struct regcache *regs)
> > +{
> > + struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
> > + const size_t len = 4;
> > + gdb_byte *buf = xmalloc (len);
> > + struct displaced_step_closure *dsc;
> > + unsigned long insn;
> > + int i;
> > +
> > + /* A linux-specific hack. Detect when we've entered
> > (inaccessible by GDB)
> > + kernel helpers, and stop at the return location. */
> > + if (gdbarch_osabi (gdbarch) == GDB_OSABI_LINUX && from >
> > 0xffff0000)
> > + {
> > + if (debug_displaced)
> > + fprintf_unfiltered (gdb_stdlog, "displaced: detected
> > kernel helper "
> > + "at %.8lx\n", (unsigned long) from);
> > +
> > + dsc = arm_catch_kernel_helper_return (from, to, regs);
> > + }
> > + else
> > + {
> > + insn = read_memory_unsigned_integer (from, len);
> > +
> > + if (debug_displaced)
> > + fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn
> > %.8lx "
> > + "at %.8lx\n", insn, (unsigned long)
> > from); +
> > + dsc = arm_process_displaced_insn (insn, from, to, regs);
> > + }
>
> Can the Linux-specific hack go in arm-linux-tdep.c? Shouldn't have to
> make many functions global to do that.
Moved. Other points you (Dan) raised have been dealt with, I think.
I've hit some problems testing this patch, mainly because I can't seem
to get a reliable baseline run with my current test setup. AFAICT, there
should be no effect on behaviour unless displaced stepping is in use
(differences in passes/failures with my patch only seem to be in
"unreliable" tests, after running baseline testing three times), and of
course displaced stepping isn't present for ARM without this patch
anyway.
OK to apply?
Thanks,
Julian
ChangeLog
gdb/
* arm-linux-tdep.c (arch-utils.h, inferior.h): Include files.
(cleanup_kernel_helper_return, arm_catch_kernel_helper_return): New.
(arm_linux_displaced_step_copy_insn): New.
(arm_linux_init_abi): Initialise displaced stepping callbacks.
* arm-tdep.c (DISPLACED_STEPPING_ARCH_VERSION): New macro.
(ARM_NOP): New.
(displaced_read_reg, displaced_in_arm_mode, branch_write_pc)
(bx_write_pc, load_write_pc, alu_write_pc, displaced_write_reg)
(insn_references_pc, copy_unmodified, cleanup_preload, copy_preload)
(copy_preload_reg, cleanup_copro_load_store, copy_copro_load_store)
(cleanup_branch, copy_b_bl_blx, copy_bx_blx_reg, cleanup_alu_imm)
(copy_alu_imm, cleanup_alu_reg, copy_alu_reg)
(cleanup_alu_shifted_reg, copy_alu_shifted_reg, cleanup_load)
(cleanup_store, copy_extra_ld_st, copy_ldr_str_ldrb_strb)
(cleanup_block_load_all, cleanup_block_store_pc)
(cleanup_block_load_pc, copy_block_xfer, cleanup_svc, copy_svc)
(copy_undef, copy_unpred): New.
(decode_misc_memhint_neon, decode_unconditional)
(decode_miscellaneous, decode_dp_misc, decode_ld_st_word_ubyte)
(decode_media, decode_b_bl_ldmstm, decode_ext_reg_ld_st)
(decode_svc_copro, arm_process_displaced_insn)
(arm_displaced_init_closure, arm_displaced_step_copy_insn)
(arm_displaced_step_fixup): New.
(arm_gdbarch_init): Initialise max insn length field.
* arm-tdep.h (DISPLACED_TEMPS, DISPLACED_MODIFIED_INSNS): New
macros.
(displaced_step_closure, pc_write_style): New.
(arm_displaced_init_closure, displaced_read_reg)
(displaced_write_reg, arm_displaced_step_copy_insn)
(arm_displaced_step_fixup): Add prototypes.
[-- Attachment #2: fsf-arm-displaced-stepping-6.diff --]
[-- Type: text/x-patch, Size: 64527 bytes --]
--- .pc/displaced-stepping/gdb/arm-linux-tdep.c 2009-05-15 16:05:07.000000000 -0700
+++ gdb/arm-linux-tdep.c 2009-05-16 10:16:52.000000000 -0700
@@ -38,6 +38,8 @@
#include "arm-linux-tdep.h"
#include "linux-tdep.h"
#include "glibc-tdep.h"
+#include "arch-utils.h"
+#include "inferior.h"
#include "gdb_string.h"
@@ -590,6 +592,77 @@ arm_linux_software_single_step (struct f
return 1;
}
+/* The following two functions implement single-stepping over calls to Linux
+ kernel helper routines, which perform e.g. atomic operations on architecture
+ variants which don't support them natively. We call the helper out-of-line
+ and place a breakpoint at the return address (in our scratch space). */
+
+static void
+cleanup_kernel_helper_return (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, dsc->tmp[0], BRANCH_WRITE_PC);
+}
+
+static struct displaced_step_closure *
+arm_catch_kernel_helper_return (CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->cleanup = &cleanup_kernel_helper_return;
+ /* Say we wrote to the PC, else cleanup will set PC to the next
+ instruction in the helper, which isn't helpful. */
+ dsc->wrote_to_pc = 1;
+
+ /* Preparation: tmp[0] <- r14
+ r14 <- <scratch space>+4
+ *(<scratch space>+8) <- from
+ Insn: ldr pc, [r14, #4]
+ Cleanup: r14 <- tmp[0], pc <- tmp[0]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, ARM_LR_REGNUM);
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, (ULONGEST) to + 4,
+ CANNOT_WRITE_PC);
+ write_memory_unsigned_integer (to + 8, 4, from);
+
+ dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */
+
+ return dsc;
+}
+
+/* Linux-specific displaced step instruction copying function. Detects when
+ the program has stepped into a Linux kernel helper routine (which must be
+ handled as a special case), falling back to arm_displaced_step_copy_insn()
+ if it hasn't. */
+
+static struct displaced_step_closure *
+arm_linux_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ /* Detect when we enter an (inaccessible by GDB) Linux kernel helper, and
+ stop at the return location. */
+ if (from > 0xffff0000)
+ {
+ struct displaced_step_closure *dsc;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected kernel helper "
+ "at %.8lx\n", (unsigned long) from);
+
+ dsc = arm_catch_kernel_helper_return (from, to, regs);
+
+ return arm_displaced_init_closure (gdbarch, from, to, dsc);
+ }
+ else
+ return arm_displaced_step_copy_insn (gdbarch, from, to, regs);
+}
+
static void
arm_linux_init_abi (struct gdbarch_info info,
struct gdbarch *gdbarch)
@@ -650,6 +723,14 @@ arm_linux_init_abi (struct gdbarch_info
arm_linux_regset_from_core_section);
set_gdbarch_get_siginfo_type (gdbarch, linux_get_siginfo_type);
+
+ /* Displaced stepping. */
+ set_gdbarch_displaced_step_copy_insn (gdbarch,
+ arm_linux_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_fixup (gdbarch, arm_displaced_step_fixup);
+ set_gdbarch_displaced_step_free_closure (gdbarch,
+ simple_displaced_step_free_closure);
+ set_gdbarch_displaced_step_location (gdbarch, displaced_step_at_entry_point);
}
/* Provide a prototype to silence -Wmissing-prototypes. */
--- .pc/displaced-stepping/gdb/arm-tdep.c 2009-05-15 16:05:07.000000000 -0700
+++ gdb/arm-tdep.c 2009-05-16 10:16:52.000000000 -0700
@@ -241,6 +241,11 @@ struct arm_prologue_cache
struct trad_frame_saved_reg *saved_regs;
};
+/* Architecture version for displaced stepping. This affects the behaviour of
+ certain instructions, and really should not be hard-wired. */
+
+#define DISPLACED_STEPPING_ARCH_VERSION 5
+
/* Addresses for calling Thumb functions have the bit 0 set.
Here are some macros to test, set, or clear bit 0 of addresses. */
#define IS_THUMB_ADDR(addr) ((addr) & 1)
@@ -2175,6 +2180,1828 @@ arm_software_single_step (struct frame_i
return 1;
}
+/* ARM displaced stepping support.
+
+ Generally ARM displaced stepping works as follows:
+
+ 1. When an instruction is to be single-stepped, it is first decoded by
+ arm_process_displaced_insn (called from arm_displaced_step_copy_insn).
+ Depending on the type of instruction, it is then copied to a scratch
+ location, possibly in a modified form. The copy_* set of functions
+ performs such modification, as necessary. A breakpoint is placed after
+ the modified instruction in the scratch space to return control to GDB.
+ Note in particular that instructions which modify the PC will no longer
+ do so after modification.
+
+ 2. The instruction is single-stepped.
+
+ 3. A cleanup function (cleanup_*) is called corresponding to the copy_*
+ function used for the current instruction. This function's job is to
+ put the CPU/memory state back to what it would have been if the
+ instruction had been executed unmodified in its original location. */
+
+/* NOP instruction (mov r0, r0). */
+#define ARM_NOP 0xe1a00000
+
+/* Helper for register reads for displaced stepping. In particular, this
+ returns the PC as it would be seen by the instruction at its original
+ location. */
+
+ULONGEST
+displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno)
+{
+ ULONGEST ret;
+
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n",
+ (unsigned long) from + 8);
+ return (ULONGEST) from + 8; /* Pipeline offset. */
+ }
+ else
+ {
+ regcache_cooked_read_unsigned (regs, regno, &ret);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read r%d value %.8lx\n",
+ regno, (unsigned long) ret);
+ return ret;
+ }
+}
+
+static int
+displaced_in_arm_mode (struct regcache *regs)
+{
+ ULONGEST ps;
+
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
+
+ return (ps & CPSR_T) == 0;
+}
+
+/* Write to the PC as from a branch instruction. */
+
+static void
+branch_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (displaced_in_arm_mode (regs))
+ /* Note: If bits 0/1 are set, this branch would be unpredictable for
+ architecture versions < 6. */
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x3);
+ else
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x1);
+}
+
+/* Write to the PC as from a branch-exchange instruction. */
+
+static void
+bx_write_pc (struct regcache *regs, ULONGEST val)
+{
+ ULONGEST ps;
+
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
+
+ if ((val & 1) == 1)
+ {
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM, ps | CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffe);
+ }
+ else if ((val & 2) == 0)
+ {
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
+ ps & ~(ULONGEST) CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val);
+ }
+ else
+ /* Unpredictable behaviour. */
+ warning (_("Single-stepping BX to non-word-aligned ARM instruction."));
+}
+
+/* Write to the PC as if from a load instruction. */
+
+static void
+load_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 5)
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
+
+/* Write to the PC as if from an ALU instruction. */
+
+static void
+alu_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 7 && displaced_in_arm_mode (regs))
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
+
+/* Helper for writing to registers for displaced stepping. Writing to the PC
+ has varying effects depending on the instruction which does the write:
+ this is controlled by the WRITE_PC argument. */
+
+void
+displaced_write_reg (struct regcache *regs, struct displaced_step_closure *dsc,
+ int regno, ULONGEST val, enum pc_write_style write_pc)
+{
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing pc %.8lx\n",
+ (unsigned long) val);
+ switch (write_pc)
+ {
+ case BRANCH_WRITE_PC:
+ branch_write_pc (regs, val);
+ break;
+
+ case BX_WRITE_PC:
+ bx_write_pc (regs, val);
+ break;
+
+ case LOAD_WRITE_PC:
+ load_write_pc (regs, val);
+ break;
+
+ case ALU_WRITE_PC:
+ alu_write_pc (regs, val);
+ break;
+
+ case CANNOT_WRITE_PC:
+ warning (_("Instruction wrote to PC in an unexpected way when "
+ "single-stepping"));
+ break;
+
+ default:
+ abort ();
+ }
+
+ dsc->wrote_to_pc = 1;
+ }
+ else
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing r%d value %.8lx\n",
+ regno, (unsigned long) val);
+ regcache_cooked_write_unsigned (regs, regno, val);
+ }
+}
+
+/* This function is used to concisely determine if an instruction INSN
+ references PC. Register fields of interest in INSN should have the
+ corresponding fields of BITMASK set to 0b1111. The function returns 1
+ if any of these fields in INSN reference the PC (also 0b1111, r15), else it
+ returns 0. */
+
+static int
+insn_references_pc (unsigned long insn, unsigned long bitmask)
+{
+ unsigned long lowbit = 1;
+
+ while (bitmask != 0)
+ {
+ unsigned long mask;
+
+ for (; lowbit && (bitmask & lowbit) == 0; lowbit <<= 1)
+ ;
+
+ if (!lowbit)
+ break;
+
+ mask = lowbit * 0xf;
+
+ if ((insn & mask) == mask)
+ return 1;
+
+ bitmask &= ~mask;
+ }
+
+ return 0;
+}
+
+/* The simplest copy function. Many instructions have the same effect no
+ matter what address they are executed at: in those cases, use this. */
+
+static int
+copy_unmodified (unsigned long insn, const char *iname,
+ struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.8lx, "
+ "opcode/class '%s' unmodified\n", insn, iname);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* Preload instructions with immediate offset. */
+
+static void
+cleanup_preload (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (!dsc->u.preload.immed)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+}
+
+static int
+copy_preload (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (insn, "preload", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ insn);
+
+ /* Preload instructions:
+
+ {pli/pld} [rn, #+/-imm]
+ ->
+ {pli/pld} [r0, #+/-imm]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 1;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Preload instructions with register offset. */
+
+static int
+copy_preload_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ ULONGEST rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f000ful))
+ return copy_unmodified (insn, "preload reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ insn);
+
+ /* Preload register-offset instructions:
+
+ {pli/pld} [rn, rm {, shift}]
+ ->
+ {pli/pld} [r0, r1 {, shift}]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rm_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 0;
+
+ dsc->modinsn[0] = (insn & 0xfff0fff0) | 0x1;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Copy/cleanup coprocessor load and store instructions. */
+
+static void
+cleanup_copro_load_store (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rn_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, LOAD_WRITE_PC);
+}
+
+static int
+copy_copro_load_store (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (insn, "copro load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
+ "load/store insn %.8lx\n", insn);
+
+ /* Coprocessor load/store instructions:
+
+ {stc/stc2} [<Rn>, #+/-imm] (and other immediate addressing modes)
+ ->
+ {stc/stc2} [r0, #+/-imm].
+
+ ldc/ldc2 are handled identically. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.ldst.writeback = bit (insn, 25);
+ dsc->u.ldst.rn = rn;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_copro_load_store;
+
+ return 0;
+}
+
+/* Clean up branch instructions (actually perform the branch, by setting
+ PC). */
+
+static void
+cleanup_branch (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ unsigned long status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int branch_taken = condition_true (dsc->u.branch.cond, status);
+ enum pc_write_style write_pc = dsc->u.branch.exchange
+ ? BX_WRITE_PC : BRANCH_WRITE_PC;
+
+ if (!branch_taken)
+ return;
+
+ if (dsc->u.branch.link)
+ {
+ ULONGEST pc = displaced_read_reg (regs, from, 15);
+ displaced_write_reg (regs, dsc, 14, pc - 4, CANNOT_WRITE_PC);
+ }
+
+ displaced_write_reg (regs, dsc, 15, dsc->u.branch.dest, write_pc);
+}
+
+/* Copy B/BL/BLX instructions with immediate destinations. */
+
+static int
+copy_b_bl_blx (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ int exchange = (cond == 0xf);
+ int link = exchange || bit (insn, 24);
+ CORE_ADDR from = dsc->insn_addr;
+ long offset;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn "
+ "%.8lx\n", (exchange) ? "blx" : (link) ? "bl" : "b",
+ insn);
+
+ /* Implement "BL<cond> <label>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- label }.
+
+ B<cond> similar, but don't set r14 in cleanup. */
+
+ if (exchange)
+ /* For BLX, set bit 0 of the destination. The cleanup_branch function will
+ then arrange the switch into Thumb mode. */
+ offset = (bits (insn, 0, 23) << 2) | (bit (insn, 24) << 1) | 1;
+ else
+ offset = bits (insn, 0, 23) << 2;
+
+ if (bit (offset, 25))
+ offset = offset | ~0x3ffffff;
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = exchange;
+ dsc->u.branch.dest = from + 8 + offset;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+/* Copy BX/BLX with register-specified destinations. */
+
+static int
+copy_bx_blx_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ /* BX: x12xxx1x
+ BLX: x12xxx3x. */
+ int link = bit (insn, 5);
+ unsigned int rm = bits (insn, 0, 3);
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s register insn "
+ "%.8lx\n", (link) ? "blx" : "bx", insn);
+
+ /* Implement {BX,BLX}<cond> <reg>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- dest; }.
+
+ Don't set r14 in cleanup for BX. */
+
+ dsc->u.branch.dest = displaced_read_reg (regs, from, rm);
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = 1;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+/* Copy/cleanup arithmetic/logic instruction with immediate RHS. */
+
+static void
+cleanup_alu_imm (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_imm (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff000ul))
+ return copy_unmodified (insn, "ALU immediate", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying immediate %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] #imm
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2 <- r0, r1;
+ r0, r1 <- rd, rn
+ Insn: <op><cond> r0, r1, #imm
+ Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rd_val = displaced_read_reg (regs, from, rd);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = insn & 0xfff00fff;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x10000;
+
+ dsc->cleanup = &cleanup_alu_imm;
+
+ return 0;
+}
+
+/* Copy/cleanup arithmetic/logic insns with register RHS. */
+
+static void
+cleanup_alu_reg (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val;
+ int i;
+
+ rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ for (i = 0; i < 3; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "ALU reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.8lx\n",
+ is_mov ? "move" : "ALU", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm [, <shift>]
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3 <- r0, r1, r2;
+ r0, r1, r2 <- rd, rn, rm
+ Insn: <op><cond> r0, r1, r2 [, <shift>]
+ Cleanup: rd <- r0; r0, r1, r2 <- tmp1, tmp2, tmp3
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x2;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x10002;
+
+ dsc->cleanup = &cleanup_alu_reg;
+
+ return 0;
+}
+
+/* Cleanup/copy arithmetic/logic insns with shifted register RHS. */
+
+static void
+cleanup_alu_shifted_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ int i;
+
+ for (i = 0; i < 4; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_shifted_reg (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int rs = bits (insn, 8, 11);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd), i;
+ ULONGEST rd_val, rn_val, rm_val, rs_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000fff0ful))
+ return copy_unmodified (insn, "ALU shifted reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying shifted reg %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU", insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm, <shift> rs
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3, tmp4 <- r0, r1, r2, r3
+ r0, r1, r2, r3 <- rd, rn, rm, rs
+ Insn: <op><cond> r0, r1, r2, <shift> r3
+ Cleanup: tmp5 <- r0
+ r0, r1, r2, r3 <- tmp1, tmp2, tmp3, tmp4
+ rd <- tmp5
+ */
+
+ for (i = 0; i < 4; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ rs_val = displaced_read_reg (regs, from, rs);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 3, rs_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x302;
+ else
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x10302;
+
+ dsc->cleanup = &cleanup_alu_shifted_reg;
+
+ return 0;
+}
+
+/* Clean up load instructions. */
+
+static void
+cleanup_load (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rt_val, rt_val2 = 0, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ rt_val = displaced_read_reg (regs, from, 0);
+ if (dsc->u.ldst.xfersize == 8)
+ rt_val2 = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+
+ /* Handle register writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+ /* Put result in right place. */
+ displaced_write_reg (regs, dsc, dsc->rd, rt_val, LOAD_WRITE_PC);
+ if (dsc->u.ldst.xfersize == 8)
+ displaced_write_reg (regs, dsc, dsc->rd + 1, rt_val2, LOAD_WRITE_PC);
+}
+
+/* Clean up store instructions. */
+
+static void
+cleanup_store (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.restore_r4)
+ displaced_write_reg (regs, dsc, 4, dsc->tmp[4], CANNOT_WRITE_PC);
+
+ /* Writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+}
+
+/* Copy "extra" load/store instructions. These are halfword/doubleword
+ transfers, which have a different encoding to byte/word transfers. */
+
+static int
+copy_extra_ld_st (unsigned long insn, int unpriveleged, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 24);
+ unsigned int op2 = bits (insn, 5, 6);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ char load[12] = {0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1};
+ char bytesize[12] = {2, 2, 2, 2, 8, 1, 8, 1, 8, 2, 8, 2};
+ int immed = (op1 & 0x4) != 0;
+ int opcode;
+ ULONGEST rt_val, rt_val2 = 0, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "extra load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %sextra load/store "
+ "insn %.8lx\n", unpriveleged ? "unpriveleged " : "",
+ insn);
+
+ opcode = ((op2 << 2) | (op1 & 0x1) | ((op1 & 0x4) >> 1)) - 4;
+
+ if (opcode < 0)
+ internal_error (__FILE__, __LINE__,
+ _("copy_extra_ld_st: instruction decode error"));
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ if (bytesize[opcode] == 8)
+ rt_val2 = displaced_read_reg (regs, from, rt + 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ if (bytesize[opcode] == 8)
+ displaced_write_reg (regs, dsc, 1, rt_val2, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = bytesize[opcode];
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, #imm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, +/-rm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, +/-r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->cleanup = load[opcode] ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
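The copy routines above lean on two idioms throughout: field extraction, and rewriting the instruction word so its register operands point at low scratch registers. A minimal sketch of both, where `insn_bits` is a stand-in for GDB's own `bits` helper and the sample encoding (ldrh r3, [r5, #6]) is mine, not from the patch:

```c
#include <assert.h>

/* Stand-in for GDB's bits() helper: extract bits [START..END]
   (inclusive) of a 32-bit instruction word.  */
static unsigned int
insn_bits (unsigned long insn, int start, int end)
{
  return (unsigned int) ((insn >> start) & ((1ul << (end - start + 1)) - 1));
}

/* The register-remapping rewrite used for the immediate form: clear the
   Rt (bits 12-15) and Rn (bits 16-19) fields, then set Rn = r2; Rt
   becomes r0 as a side effect of the clear.  */
static unsigned long
remap_extra_ld_st_immed (unsigned long insn)
{
  return (insn & 0xfff00fff) | 0x20000;
}
```

Running the rewrite on ldrh r3, [r5, #6] yields ldrh r0, [r2, #6], with the original rt/rn values parked in r0/r2 by the preparation code above.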
+
+/* Copy byte/word loads and stores. */
+
+static int
+copy_ldr_str_ldrb_strb (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc, int load, int byte,
+ int usermode)
+{
+ int immed = !bit (insn, 25);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3); /* Only valid if !immed. */
+ ULONGEST rt_val, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s%s insn %.8lx\n",
+ load ? (byte ? "ldrb" : "ldr")
+ : (byte ? "strb" : "str"), usermode ? "t" : "",
+ insn);
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+ if (!load)
+ dsc->tmp[4] = displaced_read_reg (regs, from, 4);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = byte ? 1 : 4;
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+
+ /* To write PC we can do:
+
+ scratch+0: str pc, temp (*temp = scratch + 8 + offset)
+ scratch+4: ldr r4, temp
+ scratch+8: sub r4, r4, pc (r4 = scratch + 8 + offset - scratch - 8 - 8)
+ scratch+12: add r4, r4, #8 (r4 = offset)
+ scratch+16: add r0, r0, r4
+ scratch+20: str r0, [r2, #imm] (or str r0, [r2, r3])
+ scratch+24: <breakpoint>
+ scratch+28: <temp>
+
+ Otherwise we don't know what value to write for PC, since the offset is
+ architecture-dependent (sometimes PC+8, sometimes PC+12). */
+
+ if (load || rt != 15)
+ {
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}[b]<cond> rt, [rn, #imm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}[b]<cond> rt, [rn, rm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+ }
+ else
+ {
+ /* We need to use r4 as scratch. Make sure it's restored afterwards. */
+ dsc->u.ldst.restore_r4 = 1;
+
+ dsc->modinsn[0] = 0xe58ff014; /* str pc, [pc, #20]. */
+ dsc->modinsn[1] = 0xe59f4010; /* ldr r4, [pc, #16]. */
+ dsc->modinsn[2] = 0xe044400f; /* sub r4, r4, pc. */
+ dsc->modinsn[3] = 0xe2844008; /* add r4, r4, #8. */
+ dsc->modinsn[4] = 0xe0800004; /* add r0, r0, r4. */
+
+ /* As above. */
+ if (immed)
+ dsc->modinsn[5] = (insn & 0xfff00fff) | 0x20000;
+ else
+ dsc->modinsn[5] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->modinsn[6] = 0x0; /* breakpoint location. */
+ dsc->modinsn[7] = 0x0; /* scratch space. */
+
+ dsc->numinsns = 6;
+ }
+
+ dsc->cleanup = load ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+/* Clean up LDM instructions with fully-populated register list. This is an
+ unfortunate corner case: it's impossible to implement correctly by modifying
+ the instruction. The issue is as follows: we have an instruction,
+
+ ldm rN, {r0-r15}
+
+ which we must rewrite to avoid loading PC. A possible solution would be to
+ do the load in two halves, something like (with suitable cleanup
+ afterwards):
+
+ mov r8, rN
+ ldm[id][ab] r8!, {r0-r7}
+ str r7, <temp>
+ ldm[id][ab] r8, {r7-r14}
+ <bkpt>
+
+ but at present there's no suitable place for <temp>, since the scratch space
+ is overwritten before the cleanup routine is called. For now, we simply
+ emulate the instruction. */
+
+static void
+cleanup_block_load_all (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ int inc = dsc->u.block.increment;
+ int bump_before = dsc->u.block.before ? (inc ? 4 : -4) : 0;
+ int bump_after = dsc->u.block.before ? 0 : (inc ? 4 : -4);
+ unsigned long regmask = dsc->u.block.regmask;
+ int regno = inc ? 0 : 15;
+ CORE_ADDR xfer_addr = dsc->u.block.xfer_addr;
+ int exception_return = dsc->u.block.load && dsc->u.block.user
+ && (regmask & 0x8000) != 0;
+ unsigned long status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int do_transfer = condition_true (dsc->u.block.cond, status);
+
+ if (!do_transfer)
+ return;
+
+ /* If the instruction is ldm rN, {...pc}^, I don't think there's anything
+ sensible we can do here. Complain loudly. */
+ if (exception_return)
+ error (_("Cannot single-step exception return"));
+
+ /* We don't handle any stores here for now. */
+ gdb_assert (dsc->u.block.load != 0);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: emulating block transfer: "
+ "%s %s %s\n", dsc->u.block.load ? "ldm" : "stm",
+ dsc->u.block.increment ? "inc" : "dec",
+ dsc->u.block.before ? "before" : "after");
+
+ while (regmask)
+ {
+ unsigned long memword;
+
+ if (inc)
+ while (regno <= 15 && (regmask & (1 << regno)) == 0)
+ regno++;
+ else
+ while (regno >= 0 && (regmask & (1 << regno)) == 0)
+ regno--;
+
+ xfer_addr += bump_before;
+
+ memword = read_memory_unsigned_integer (xfer_addr, 4);
+ displaced_write_reg (regs, dsc, regno, memword, LOAD_WRITE_PC);
+
+ xfer_addr += bump_after;
+
+ regmask &= ~(1 << regno);
+ }
+
+ if (dsc->u.block.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, xfer_addr,
+ CANNOT_WRITE_PC);
+}
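The bump_before/bump_after arithmetic above encodes all four block addressing modes (ia/ib/da/db) in two signed bumps. As a sanity check, the address of the I'th transferred word can be written in closed form (a hypothetical helper, not part of the patch):

```c
#include <assert.h>

/* Address of the I'th word transferred by an LDM/STM starting at BASE.
   Mirrors the cleanup_block_load_all loop, which adds BUMP_BEFORE before
   each transfer and BUMP_AFTER after it.  */
static long
block_word_addr (long base, int increment, int before, int i)
{
  int bump_before = before ? (increment ? 4 : -4) : 0;
  int bump_after = before ? 0 : (increment ? 4 : -4);

  return base + (long) (i + 1) * bump_before + (long) i * bump_after;
}
```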
+
+/* Clean up an STM which included the PC in the register list. */
+
+static void
+cleanup_block_store_pc (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ unsigned long status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int store_executed = condition_true (dsc->u.block.cond, status);
+ CORE_ADDR pc_stored_at, stm_insn_addr;
+ int transferred_regs = bitcount (dsc->u.block.regmask);
+ unsigned long pc_val;
+ long offset;
+
+ /* If condition code fails, there's nothing else to do. */
+ if (!store_executed)
+ return;
+
+ if (dsc->u.block.increment)
+ {
+ pc_stored_at = dsc->u.block.xfer_addr + 4 * transferred_regs;
+
+ if (dsc->u.block.before)
+ pc_stored_at += 4;
+ }
+ else
+ {
+ pc_stored_at = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.before)
+ pc_stored_at -= 4;
+ }
+
+ pc_val = read_memory_unsigned_integer (pc_stored_at, 4);
+ stm_insn_addr = dsc->scratch_base;
+ offset = pc_val - stm_insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected PC offset %.8lx for "
+ "STM instruction\n", offset);
+
+ /* Rewrite the stored PC to the proper value for the non-displaced original
+ instruction. */
+ write_memory_unsigned_integer (pc_stored_at, 4, dsc->insn_addr + offset);
+}
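The auto-detection above works because the displaced STM executes at a known address: subtracting scratch_base from the stored value yields the implementation's PC-store offset directly, which can then be applied to the original instruction address. The arithmetic in isolation (the addresses here are made-up; real offsets are typically 8 or 12):

```c
#include <assert.h>

/* Recover the PC value an STM would have stored at original address
   ORIG, given the value it actually stored while executing at
   SCRATCH_BASE.  Mirrors the fixup in cleanup_block_store_pc.  */
static unsigned long
fixup_stored_pc (unsigned long stored_pc, unsigned long scratch_base,
                 unsigned long orig)
{
  long offset = (long) stored_pc - (long) scratch_base;

  return orig + (unsigned long) offset;
}
```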
+
+/* Clean up an LDM which includes the PC in the register list. We clumped all
+ the registers in the transferred list into a contiguous range r0...rX (to
+ avoid loading PC directly and losing control of the debugged program), so we
+ must undo that here. */
+
+static void
+cleanup_block_load_pc (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ unsigned long status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int load_executed = condition_true (dsc->u.block.cond, status);
+ unsigned int mask = dsc->u.block.regmask, write_reg = 15;
+ unsigned int regs_loaded = bitcount (mask);
+ unsigned int num_to_shuffle = regs_loaded, clobbered;
+
+ /* The method employed here will fail if the register list is fully populated
+ (we need to avoid loading PC directly). */
+ gdb_assert (num_to_shuffle < 16);
+
+ if (!load_executed)
+ return;
+
+ clobbered = (1 << num_to_shuffle) - 1;
+
+ while (num_to_shuffle > 0)
+ {
+ if ((mask & (1 << write_reg)) != 0)
+ {
+ unsigned int read_reg = num_to_shuffle - 1;
+
+ if (read_reg != write_reg)
+ {
+ ULONGEST rval = displaced_read_reg (regs, from, read_reg);
+ displaced_write_reg (regs, dsc, write_reg, rval, LOAD_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: move "
+ "loaded register r%d to r%d\n"), read_reg,
+ write_reg);
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: register "
+ "r%d already in the right place\n"),
+ write_reg);
+
+ clobbered &= ~(1 << write_reg);
+
+ num_to_shuffle--;
+ }
+
+ write_reg--;
+ }
+
+ /* Restore any registers we scribbled over. */
+ for (write_reg = 0; clobbered != 0; write_reg++)
+ {
+ if ((clobbered & (1 << write_reg)) != 0)
+ {
+ displaced_write_reg (regs, dsc, write_reg, dsc->tmp[write_reg],
+ CANNOT_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: restored "
+ "clobbered register r%d\n"), write_reg);
+ clobbered &= ~(1 << write_reg);
+ }
+ }
+
+ /* Perform register writeback manually. */
+ if (dsc->u.block.writeback)
+ {
+ ULONGEST new_rn_val = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.increment)
+ new_rn_val += regs_loaded * 4;
+ else
+ new_rn_val -= regs_loaded * 4;
+
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, new_rn_val,
+ CANNOT_WRITE_PC);
+ }
+}
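The shuffle loop above can be modelled on its own with a plain array standing in for the regcache (a sketch of the same algorithm, not the patch's code): values loaded contiguously into r0..rX are moved to their proper register numbers from the top down, so no source is overwritten before it is read.

```c
#include <assert.h>

/* Model of cleanup_block_load_pc's shuffle.  REGS[0..15] holds values
   after the modified LDM loaded MASK's registers contiguously into
   r0..rX; move each value up to the register the original insn named,
   working from the highest register downwards.  */
static void
shuffle_block_load (unsigned long regs[16], unsigned int mask)
{
  unsigned int num_to_shuffle = 0, write_reg = 15, m;

  for (m = mask; m != 0; m >>= 1)
    num_to_shuffle += m & 1;

  while (num_to_shuffle > 0)
    {
      if ((mask & (1u << write_reg)) != 0)
        {
          unsigned int read_reg = num_to_shuffle - 1;

          if (read_reg != write_reg)
            regs[write_reg] = regs[read_reg];
          num_to_shuffle--;
        }
      write_reg--;
    }
}
```

For `ldm rN, {r1, r3, pc}` the modified insn loads into r0-r2; the shuffle then moves r2's value to the PC slot, r1's to r3, and r0's to r1.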
+
+/* Handle ldm/stm, apart from some tricky cases which are unlikely to occur
+ in user-level code (in particular exception return, ldm rn, {...pc}^). */
+
+static int
+copy_block_xfer (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int load = bit (insn, 20);
+ int user = bit (insn, 22);
+ int increment = bit (insn, 23);
+ int before = bit (insn, 24);
+ int writeback = bit (insn, 21);
+ int rn = bits (insn, 16, 19);
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Block transfers which don't mention PC can be run directly out-of-line. */
+ if (rn != 15 && (insn & 0x8000) == 0)
+ return copy_unmodified (insn, "ldm/stm", dsc);
+
+ if (rn == 15)
+ {
+ warning (_("displaced: Unpredictable LDM or STM with base register r15"));
+ return copy_unmodified (insn, "unpredictable ldm/stm", dsc);
+ }
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
+ "%.8lx\n", insn);
+
+ dsc->u.block.xfer_addr = displaced_read_reg (regs, from, rn);
+ dsc->u.block.rn = rn;
+
+ dsc->u.block.load = load;
+ dsc->u.block.user = user;
+ dsc->u.block.increment = increment;
+ dsc->u.block.before = before;
+ dsc->u.block.writeback = writeback;
+ dsc->u.block.cond = bits (insn, 28, 31);
+
+ dsc->u.block.regmask = insn & 0xffff;
+
+ if (load)
+ {
+ if ((insn & 0xffff) == 0xffff)
+ {
+ /* LDM with a fully-populated register list. This case is
+ particularly tricky. Implement for now by fully emulating the
+ instruction (which might not behave perfectly in all cases, but
+ these instructions should be rare enough for that not to matter
+ too much). */
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_block_load_all;
+ }
+ else
+ {
+ /* LDM of a list of registers which includes PC. Implement by
+ rewriting the list of registers to be transferred into a
+ contiguous chunk r0...rX before doing the transfer, then shuffling
+ registers into the correct places in the cleanup routine. */
+ unsigned int regmask = insn & 0xffff;
+ unsigned int num_in_list = bitcount (regmask), new_regmask;
+ unsigned int i;
+
+ for (i = 0; i < num_in_list; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ /* Writeback makes things complicated. We need to avoid clobbering
+ the base register with one of the registers in our modified
+ register list, but just using a different register can't work in
+ all cases, e.g.:
+
+ ldm r14!, {r0-r13,pc}
+
+ which would need to be rewritten as:
+
+ ldm rN!, {r0-r14}
+
+ but that can't work, because there's no free register for N.
+
+ Solve this by turning off the writeback bit, and emulating
+ writeback manually in the cleanup routine. */
+
+ if (writeback)
+ insn &= ~(1 << 21);
+
+ new_regmask = (1 << num_in_list) - 1;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, "
+ "{..., pc}: original reg list %.4x, modified "
+ "list %.4x\n"), rn, writeback ? "!" : "",
+ (int) insn & 0xffff, new_regmask);
+
+ dsc->modinsn[0] = (insn & ~0xffff) | (new_regmask & 0xffff);
+
+ dsc->cleanup = &cleanup_block_load_pc;
+ }
+ }
+ else
+ {
+ /* STM of a list of registers which includes PC. Run the instruction
+ as-is, but out of line: this will store the wrong value for the PC,
+ so we must manually fix up the memory in the cleanup routine.
+ Doing things this way has the advantage that we can auto-detect
+ the offset of the PC write (which is architecture-dependent) in
+ the cleanup routine. */
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_block_store_pc;
+ }
+
+ return 0;
+}
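The replacement register list built above is simply the lowest num_in_list bits set; as a standalone helper (hypothetical, for illustration):

```c
#include <assert.h>

/* The contiguous replacement list copy_block_xfer builds for an LDM
   whose list includes PC: same population count, registers r0..rX.  */
static unsigned int
contiguous_regmask (unsigned int mask)
{
  unsigned int n = 0;

  while (mask != 0)
    {
      n += mask & 1;
      mask >>= 1;
    }
  return (1u << n) - 1;
}
```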
+
+/* Cleanup/copy SVC (SWI) instructions. */
+
+static void
+cleanup_svc (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Resume at the instruction following the SVC. */
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, from + 4, BRANCH_WRITE_PC);
+}
+
+static int
+copy_svc (unsigned long insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ unsigned int svc_number;
+
+ svc_number = displaced_read_reg (regs, from, 7);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx "
+ "(r7 = %d)\n", insn, svc_number);
+
+ switch (svc_number)
+ {
+ case 119:
+ case 173:
+ warning (_("displaced: Apparently single-stepping sigreturn SVC call. "
+ "This might not work properly!"));
+ }
+
+ /* Preparation: tmp[0] <- to.
+ Insn: unmodified svc.
+ Cleanup: pc <- insn_addr + 4. */
+
+ dsc->tmp[0] = to;
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+/* Copy undefined instructions. */
+
+static int
+copy_undef (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn %.8lx\n",
+ insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* Copy unpredictable instructions. */
+
+static int
+copy_unpred (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredictable insn "
+ "%.8lx\n", insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* The decode_* functions are instruction decoding helpers. They mostly follow
+ the presentation in the ARM ARM. */
+
+static int
+decode_misc_memhint_neon (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 26), op2 = bits (insn, 4, 7);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if (op1 == 0x10 && (op2 & 0x2) == 0x0 && (rn & 0xe) == 0x0)
+ return copy_unmodified (insn, "cps", dsc);
+ else if (op1 == 0x10 && op2 == 0x0 && (rn & 0xe) == 0x1)
+ return copy_unmodified (insn, "setend", dsc);
+ else if ((op1 & 0x60) == 0x20)
+ return copy_unmodified (insn, "neon dataproc", dsc);
+ else if ((op1 & 0x71) == 0x40)
+ return copy_unmodified (insn, "neon elt/struct load/store", dsc);
+ else if ((op1 & 0x77) == 0x41)
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ else if ((op1 & 0x77) == 0x45)
+ return copy_preload (insn, regs, dsc); /* pli. */
+ else if ((op1 & 0x77) == 0x51)
+ {
+ if (rn != 0xf)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else
+ return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x77) == 0x55)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else if (op1 == 0x57)
+ switch (op2)
+ {
+ case 0x1: return copy_unmodified (insn, "clrex", dsc);
+ case 0x4: return copy_unmodified (insn, "dsb", dsc);
+ case 0x5: return copy_unmodified (insn, "dmb", dsc);
+ case 0x6: return copy_unmodified (insn, "isb", dsc);
+ default: return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x63) == 0x43)
+ return copy_unpred (insn, dsc);
+ else if ((op2 & 0x1) == 0x0)
+ switch (op1 & ~0x80)
+ {
+ case 0x61:
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ case 0x65:
+ return copy_preload_reg (insn, regs, dsc); /* pli reg. */
+ case 0x71: case 0x75:
+ return copy_preload_reg (insn, regs, dsc); /* pld/pldw reg. */
+ case 0x63: case 0x67: case 0x73: case 0x77:
+ return copy_unpred (insn, dsc);
+ default:
+ return copy_undef (insn, dsc);
+ }
+ else
+ return copy_undef (insn, dsc); /* Probably unreachable. */
+}
+
+static int
+decode_unconditional (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 27) == 0)
+ return decode_misc_memhint_neon (insn, regs, dsc);
+ /* Switch on bits: 0bxxxxx321xxx0xxxxxxxxxxxxxxxxxxxx. */
+ else switch (((insn & 0x7000000) >> 23) | ((insn & 0x100000) >> 20))
+ {
+ case 0x0: case 0x2:
+ return copy_unmodified (insn, "srs", dsc);
+
+ case 0x1: case 0x3:
+ return copy_unmodified (insn, "rfe", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_b_bl_blx (insn, regs, dsc);
+
+ case 0x8:
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3: case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+
+ case 0x2:
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+
+ case 0x9:
+ {
+ int rn_f = (bits (insn, 16, 19) == 0xf);
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3:
+ /* ldc/ldc2 imm (undefined for rn == pc). */
+ return rn_f ? copy_undef (insn, dsc)
+ : copy_copro_load_store (insn, regs, dsc);
+
+ case 0x2:
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ /* ldc/ldc2 lit (undefined for rn != pc). */
+ return rn_f ? copy_copro_load_store (insn, regs, dsc)
+ : copy_undef (insn, dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+ }
+
+ case 0xa:
+ return copy_unmodified (insn, "stc/stc2", dsc);
+
+ case 0xb:
+ if (bits (insn, 16, 19) == 0xf)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 lit. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0xc:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ case 0xd:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+/* Decode miscellaneous instructions in dp/misc encoding space. */
+
+static int
+decode_miscellaneous (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op2 = bits (insn, 4, 6);
+ unsigned int op = bits (insn, 21, 22);
+ unsigned int op1 = bits (insn, 16, 19);
+
+ switch (op2)
+ {
+ case 0x0:
+ return copy_unmodified (insn, "mrs/msr", dsc);
+
+ case 0x1:
+ if (op == 0x1) /* bx. */
+ return copy_bx_blx_reg (insn, regs, dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "clz", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x2:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bxj", dsc); /* Not really supported. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x3:
+ if (op == 0x1)
+ return copy_bx_blx_reg (insn, regs, dsc); /* blx register. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x5:
+ return copy_unmodified (insn, "saturating add/sub", dsc);
+
+ case 0x7:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bkpt", dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "smc", dsc); /* Not really supported. */
+ /* Fall through to the default (undefined) case for other OP values. */
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+static int
+decode_dp_misc (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ switch (bits (insn, 20, 24))
+ {
+ case 0x10:
+ return copy_unmodified (insn, "movw", dsc);
+
+ case 0x14:
+ return copy_unmodified (insn, "movt", dsc);
+
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "msr imm", dsc);
+
+ default:
+ return copy_alu_imm (insn, regs, dsc);
+ }
+ else
+ {
+ unsigned long op1 = bits (insn, 20, 24), op2 = bits (insn, 4, 7);
+
+ if ((op1 & 0x19) != 0x10 && (op2 & 0x1) == 0x0)
+ return copy_alu_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) != 0x10 && (op2 & 0x9) == 0x1)
+ return copy_alu_shifted_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x8) == 0x0)
+ return decode_miscellaneous (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x9) == 0x8)
+ return copy_unmodified (insn, "halfword mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x00 && op2 == 0x9)
+ return copy_unmodified (insn, "mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x10 && op2 == 0x9)
+ return copy_unmodified (insn, "synch", dsc);
+ else if (op2 == 0xb || (op2 & 0xd) == 0xd)
+ /* 2nd arg means "unprivileged". */
+ return copy_extra_ld_st (insn, (op1 & 0x12) == 0x02, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_ld_st_word_ubyte (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int a = bit (insn, 25), b = bit (insn, 4);
+ unsigned long op1 = bits (insn, 20, 24);
+ int rn_f = bits (insn, 16, 19) == 0xf;
+
+ if ((!a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02)
+ || (a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x02)
+ || (a && (op1 & 0x17) == 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03)
+ || (a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x03)
+ || (a && (op1 & 0x17) == 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06)
+ || (a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x06)
+ || (a && (op1 & 0x17) == 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 1);
+ else if ((!a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07)
+ || (a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x07)
+ || (a && (op1 & 0x17) == 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 1);
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_media (unsigned long insn, struct displaced_step_closure *dsc)
+{
+ switch (bits (insn, 20, 24))
+ {
+ case 0x00: case 0x01: case 0x02: case 0x03:
+ return copy_unmodified (insn, "parallel add/sub signed", dsc);
+
+ case 0x04: case 0x05: case 0x06: case 0x07:
+ return copy_unmodified (insn, "parallel add/sub unsigned", dsc);
+
+ case 0x08: case 0x09: case 0x0a: case 0x0b:
+ case 0x0c: case 0x0d: case 0x0e: case 0x0f:
+ return copy_unmodified (insn, "decode/pack/unpack/saturate/reverse", dsc);
+
+ case 0x18:
+ if (bits (insn, 5, 7) == 0) /* op2. */
+ {
+ if (bits (insn, 12, 15) == 0xf)
+ return copy_unmodified (insn, "usad8", dsc);
+ else
+ return copy_unmodified (insn, "usada8", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1a: case 0x1b:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "sbfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1c: case 0x1d:
+ if (bits (insn, 5, 6) == 0x0) /* op2[1:0]. */
+ {
+ if (bits (insn, 0, 3) == 0xf)
+ return copy_unmodified (insn, "bfc", dsc);
+ else
+ return copy_unmodified (insn, "bfi", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1e: case 0x1f:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "ubfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_b_bl_ldmstm (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ return copy_b_bl_blx (insn, regs, dsc);
+ else
+ return copy_block_xfer (insn, regs, dsc);
+}
+
+static int
+decode_ext_reg_ld_st (unsigned long insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int opcode = bits (insn, 20, 24);
+
+ switch (opcode)
+ {
+ case 0x04: case 0x05: /* VFP/Neon mrrc/mcrr. */
+ return copy_unmodified (insn, "vfp/neon mrrc/mcrr", dsc);
+
+ case 0x08: case 0x0a: case 0x0c: case 0x0e:
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "vfp/neon vstm/vpush", dsc);
+
+ case 0x09: case 0x0b: case 0x0d: case 0x0f:
+ case 0x13: case 0x17:
+ return copy_unmodified (insn, "vfp/neon vldm/vpop", dsc);
+
+ case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */
+ case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */
+ /* Note: no writeback for these instructions. Bit 25 will always be
+ zero though (via caller), so the following works OK. */
+ return copy_copro_load_store (insn, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_svc_copro (unsigned long insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 25);
+ int op = bit (insn, 4);
+ unsigned int coproc = bits (insn, 8, 11);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if ((op1 & 0x20) == 0x00 && (op1 & 0x3a) != 0x00 && (coproc & 0xe) == 0xa)
+ return decode_ext_reg_ld_st (insn, regs, dsc);
+ else if ((op1 & 0x21) == 0x00 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+ else if ((op1 & 0x21) == 0x01 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 imm/lit. */
+ else if ((op1 & 0x3e) == 0x00)
+ return copy_undef (insn, dsc);
+ else if ((op1 & 0x3e) == 0x04 && (coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "neon 64bit xfer", dsc);
+ else if (op1 == 0x04 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+ else if (op1 == 0x05 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+ else if ((op1 & 0x30) == 0x20 && !op)
+ {
+ if ((coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "vfp dataproc", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+ }
+ else if ((op1 & 0x30) == 0x20 && op)
+ return copy_unmodified (insn, "neon 8/16/32 bit xfer", dsc);
+ else if ((op1 & 0x31) == 0x20 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else if ((op1 & 0x31) == 0x21 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else if ((op1 & 0x30) == 0x30)
+ return copy_svc (insn, to, regs, dsc);
+ else
+ return copy_undef (insn, dsc); /* Possibly unreachable. */
+}
+
+static struct displaced_step_closure *
+arm_process_displaced_insn (unsigned long insn, CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+ int err = 0;
+
+ /* Most displaced instructions use a 1-instruction scratch space, so set this
+ here and override below if/when necessary. */
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->scratch_base = to;
+ dsc->cleanup = NULL;
+ dsc->wrote_to_pc = 0;
+
+ if ((insn & 0xf0000000) == 0xf0000000)
+ err = decode_unconditional (insn, regs, dsc);
+ else switch (((insn & 0x10) >> 4) | ((insn & 0xe000000) >> 24))
+ {
+ case 0x0: case 0x1: case 0x2: case 0x3:
+ err = decode_dp_misc (insn, regs, dsc);
+ break;
+
+ case 0x4: case 0x5: case 0x6:
+ err = decode_ld_st_word_ubyte (insn, regs, dsc);
+ break;
+
+ case 0x7:
+ err = decode_media (insn, dsc);
+ break;
+
+ case 0x8: case 0x9: case 0xa: case 0xb:
+ err = decode_b_bl_ldmstm (insn, regs, dsc);
+ break;
+
+ case 0xc: case 0xd: case 0xe: case 0xf:
+ err = decode_svc_copro (insn, to, regs, dsc);
+ break;
+ }
+
+ if (err)
+ internal_error (__FILE__, __LINE__,
+ _("arm_process_displaced_insn: Instruction decode error"));
+
+ return dsc;
+}
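The dispatcher's switch key packs instruction bits 27:25 into key bits 3:1 and insn bit 4 into key bit 0, which is why the case ranges line up with the major ARM encoding groups. A sketch of the key extraction (the sample opcodes in the checks are mine, not from the patch):

```c
#include <assert.h>

/* The top-level dispatch key from arm_process_displaced_insn: insn
   bits 27:25 end up in key bits 3:1, insn bit 4 in key bit 0.  */
static unsigned int
dispatch_key (unsigned long insn)
{
  return (unsigned int) (((insn & 0x10) >> 4) | ((insn & 0xe000000) >> 24));
}
```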
+
+/* Actually set up the scratch space for a displaced instruction. */
+
+struct displaced_step_closure *
+arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc)
+{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+ unsigned int i;
+
+ /* Poke modified instruction(s). */
+ for (i = 0; i < dsc->numinsns; i++)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing insn %.8lx at "
+ "%.8lx\n", (unsigned long) dsc->modinsn[i],
+ (unsigned long) to + i * 4);
+ write_memory_unsigned_integer (to + i * 4, 4, dsc->modinsn[i]);
+ }
+
+ /* Put breakpoint afterwards. */
+ write_memory (to + dsc->numinsns * 4, tdep->arm_breakpoint,
+ tdep->arm_breakpoint_size);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copy 0x%s->0x%s: ",
+ paddr_nz (from), paddr_nz (to));
+
+ return dsc;
+}
+
+/* Entry point for copying an instruction into scratch space for displaced
+ stepping. */
+
+struct displaced_step_closure *
+arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ const size_t len = 4;
+ struct displaced_step_closure *dsc;
+ unsigned long insn;
+
+ if (!displaced_in_arm_mode (regs))
+ error (_("Displaced stepping is only supported in ARM mode"));
+
+ insn = read_memory_unsigned_integer (from, len);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", insn, (unsigned long) from);
+
+ dsc = arm_process_displaced_insn (insn, from, to, regs);
+
+ return arm_displaced_init_closure (gdbarch, from, to, dsc);
+}
+
+/* Entry point for cleaning things up after a displaced instruction has been
+ single-stepped. */
+
+void
+arm_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ if (dsc->cleanup)
+ dsc->cleanup (regs, dsc);
+
+ if (!dsc->wrote_to_pc)
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, dsc->insn_addr + 4);
+}
+
+
#include "bfd-in2.h"
#include "libcoff.h"
@@ -3252,6 +5079,11 @@ arm_gdbarch_init (struct gdbarch_info in
/* On ARM targets char defaults to unsigned. */
set_gdbarch_char_signed (gdbarch, 0);
+ /* Note: for displaced stepping, this includes the breakpoint, and one word
+ of additional scratch space. This setting isn't used for anything besides
+ displaced stepping at present. */
+ set_gdbarch_max_insn_length (gdbarch, 4 * DISPLACED_MODIFIED_INSNS);
+
/* This should be low enough for everything. */
tdep->lowest_pc = 0x20;
tdep->jb_pc = -1; /* Longjump support not enabled by default. */
--- .pc/displaced-stepping/gdb/arm-tdep.h 2009-05-15 16:05:07.000000000 -0700
+++ gdb/arm-tdep.h 2009-05-16 10:16:52.000000000 -0700
@@ -172,11 +172,96 @@ struct gdbarch_tdep
struct regset *gregset, *fpregset;
};
+/* Structures used for displaced stepping. */
+
+/* The maximum number of temporaries available for displaced instructions. */
+#define DISPLACED_TEMPS 16
+/* The maximum number of modified instructions generated for one single-stepped
+ instruction, including the breakpoint (usually at the end of the instruction
+ sequence) and any scratch words, etc. */
+#define DISPLACED_MODIFIED_INSNS 8
+
+struct displaced_step_closure
+{
+ ULONGEST tmp[DISPLACED_TEMPS];
+ int rd;
+ int wrote_to_pc;
+ union
+ {
+ struct
+ {
+ int xfersize;
+ int rn; /* Writeback register. */
+ unsigned int immed : 1; /* Offset is immediate. */
+ unsigned int writeback : 1; /* Perform base-register writeback. */
+ unsigned int restore_r4 : 1; /* Used r4 as scratch. */
+ } ldst;
+
+ struct
+ {
+ unsigned long dest;
+ unsigned int link : 1;
+ unsigned int exchange : 1;
+ unsigned int cond : 4;
+ } branch;
+
+ struct
+ {
+ unsigned int regmask;
+ int rn;
+ CORE_ADDR xfer_addr;
+ unsigned int load : 1;
+ unsigned int user : 1;
+ unsigned int increment : 1;
+ unsigned int before : 1;
+ unsigned int writeback : 1;
+ unsigned int cond : 4;
+ } block;
+
+ struct
+ {
+ unsigned int immed : 1;
+ } preload;
+ } u;
+ unsigned long modinsn[DISPLACED_MODIFIED_INSNS];
+ int numinsns;
+ CORE_ADDR insn_addr;
+ CORE_ADDR scratch_base;
+ void (*cleanup) (struct regcache *, struct displaced_step_closure *);
+};
+
+/* Values for the WRITE_PC argument to displaced_write_reg. If the register
+ write may write to the PC, specifies the way the CPSR T bit, etc. is
+ modified by the instruction. */
+
+enum pc_write_style
+{
+ BRANCH_WRITE_PC,
+ BX_WRITE_PC,
+ LOAD_WRITE_PC,
+ ALU_WRITE_PC,
+ CANNOT_WRITE_PC
+};
+
+struct displaced_step_closure *
+ arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc);
+ULONGEST displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno);
+void displaced_write_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc, int regno,
+ ULONGEST val, enum pc_write_style write_pc);
CORE_ADDR arm_skip_stub (struct frame_info *, CORE_ADDR);
CORE_ADDR arm_get_next_pc (struct frame_info *, CORE_ADDR);
int arm_software_single_step (struct frame_info *);
+extern struct displaced_step_closure *
+ arm_displaced_step_copy_insn (struct gdbarch *, CORE_ADDR, CORE_ADDR,
+ struct regcache *);
+extern void arm_displaced_step_fixup (struct gdbarch *,
+ struct displaced_step_closure *,
+ CORE_ADDR, CORE_ADDR, struct regcache *);
+
/* Functions exported from armbsd-tdep.h. */
/* Return the appropriate register set for the core section identified
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-05-16 18:19 ` Julian Brown
@ 2009-06-09 17:37 ` Daniel Jacobowitz
2009-06-10 14:58 ` Pedro Alves
0 siblings, 1 reply; 24+ messages in thread
From: Daniel Jacobowitz @ 2009-06-09 17:37 UTC (permalink / raw)
To: Julian Brown; +Cc: gdb-patches, pedro
On Sat, May 16, 2009 at 07:19:10PM +0100, Julian Brown wrote:
> Pedro Alves wrote:
>
> > Right, you may end up with a temporary breakpoint over another
> > breakpoint, though. It would be better to use the standard software
> > single-stepping (set temp break at next pc, continue, remove break)
> > for standard stepping requests, and use displaced stepping only for
> > stepping over breakpoints. Unfortunately, you don't get that for
> > free --- infrun.c and friends don't know how to handle multiple
> > simultaneous software single-stepping requests, and that is required
> > in non-stop mode.
>
> I'm not sure what the status is here now. For testing purposes, I've
> (still) been using a local patch which uses displaced stepping for all
> single-step operations.
We still can't use software single-stepping simultaneously in multiple
threads. Pedro, should we fix that or always use displaced stepping
for now?
> Daniel Jacobowitz <drow@false.org> wrote:
>
> > * What's the point of executing mov<cond> on the target for BL<cond>?
> > At that point it seems like we ought to skip the target step entirely;
> > just simulate the instruction. We've already got a function to check
> > conditions (condition_true).
>
> I'm now using NOP instructions and condition_true, because the current
> displaced stepping support wants to execute "something" rather than
> nothing.
From infrun.c:
One way to work around [software single stepping]...
would be to have gdbarch_displaced_step_copy_insn fully
simulate the effect of PC-relative instructions (and return NULL)
on architectures that use software single-stepping.
So the interface you need is there; it's just not implemented yet:
/* We don't support the fully-simulated case at present. */
gdb_assert (closure);
I think the implementation strategy will look like:
* Add another non-zero return value from displaced_step_prepare.
* Update should_resume after the call, in resume (currently unused).
* Ask Pedro how to pretend that the inferior resumed and stopped,
for higher levels. I think this will entail a new queue. Bonus
points if prepare_to_wait and wait_for_inferior do not invalidate
the perfectly good register cache at this point.
Pedro, thoughts - easy or should we stick with the NOP workaround for
now?
I know that some debug interfaces simulate instructions aggressively,
to save the target round trip.
> > Yes, we just can't emulate loads or stores. Anything that could cause
> > an exception that won't be delayed till the next instruction, I think.
>
> LDM and STM are handled substantially differently now: STM instructions
> are let through unmodified, and when PC is in the register list the
> cleanup routine reads back the stored value and calculates the proper
> offset for PC writes. The true (non-displaced) PC value (plus offset) is
> then written to the appropriate memory location.
I am not happy about creating additional memory writes from GDB, but I
agree that for the special case of the PC this is good enough.
> LDM instructions shuffle registers downwards into a contiguous list (to
> avoid loading PC directly), then fix up register contents afterwards in
> the cleanup routine. The case with a fully-populated register list is
> still emulated, for now.
This seems OK too, by the same logic.
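The register-list shuffle Julian describes can be sketched roughly as follows (a minimal illustration with hypothetical names, not the patch's actual code): count the registers in the original LDM list and substitute a contiguous run of low registers, so the PC is never loaded directly; the cleanup routine then moves each loaded value to its intended register.

```c
#include <stdint.h>

/* Sketch: for an LDM whose register list includes the PC, load the
   same number of values into r0 .. r(n-1) instead.  The returned mask
   is the register list for the displaced copy of the instruction.  */
static unsigned int
contiguous_reg_mask (unsigned int regmask)
{
  int count = 0;
  unsigned int m;

  /* Count the registers in the original list.  */
  for (m = regmask; m != 0; m &= m - 1)
    count++;

  /* Use r0 .. r(count-1) in the displaced copy.  */
  return (1u << count) - 1;
}
```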
> > > +static int
> > > +copy_svc (unsigned long insn, CORE_ADDR to, struct regcache *regs,
> > > + struct displaced_step_closure *dsc)
> > > +{
> > > + CORE_ADDR from = dsc->insn_addr;
> > > +
> > > + if (debug_displaced)
> > > + fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn
> > > %.8lx\n",
> > > + insn);
> > > +
> > > + /* Preparation: tmp[0] <- to.
> > > + Insn: unmodified svc.
> > > + Cleanup: if (pc == <scratch>+4) pc <- insn_addr + 4;
> > > + else leave PC alone. */
> >
> > What about the saved PC? Don't really want the OS service routine to
> > return to the scratchpad.
> >
> > > + /* FIXME: What can we do about signal trampolines? */
> >
> > Maybe this is referring to the same question I asked above?
> >
> > If so, I think you get to unwind and if you find the scratchpad,
> > update the saved PC.
>
> I've tried to figure this out, and have totally drawn a blank so far.
> AFAICT, the problem we're trying to solve runs as follows: sometimes, a
> signal may be delivered to a process whilst it is executing a system
> call. In that case, the kernel writes a signal trampoline to the user
> program's stack space, and rewrites the state so that the trampoline is
> executed when the system call returns.
No, not quite. That may be how things work on other platforms - this
varies quite a bit, and I'm only familiar with the Linux
implementation - but Linux does it differently. There are no signal
*call* trampolines, only signal *return* trampolines.
First we step over a system call instruction. An arbitrary amount of
work happens. A signal may be received; if so, the process should
be stopped for GDB to inspect. Here's where the first weird
single-stepping thing happens; we don't know if a handler is installed
for this signal, and we can't figure it out without races, so we don't
know where a single step would send us. The kernel handles this on
platforms with PTRACE_SINGLESTEP. Several GDB tests fail because of
this problem. Anyway, that's a digression.
If a signal was received and that signal has a handler, then when the
system call finishes (either early, or at the normal time) then the
kernel sets up a trampoline. The program state is saved on the stack,
the PC is changed to point at the handler, and the LR is changed to
point at the restorer. The restorer can be some instructions on the
stack or a function in the C library with SA_RESTORER; I believe ARM
GLIBC uses SA_RESTORER exclusively. The PC in the saved state
may point at the system call, before it, or after it - this depends on
whether the system call completed or needs to be restarted.
Then the program resumes, runs the handler, and eventually calls
sigreturn.
Sigreturn is another special case since it changes sp/pc and does not
return to the following instruction.
I don't know if you can end up needing to restart a system call even
if no signal was received, but I don't think so. Every architecture
has a restart convention. In some cases it backs up one instruction,
to the system call; this is usually used if the system call
instruction encodes the syscall number, or if it's in a different
register than error codes. Other architectures back up two
instructions and mandate a two-instruction system call sequence. ARM
backs up one instruction; MIPS usually backs up two (not always, see
kernel/signal.c).
If we step over a system call, and reach the breakpoint at the next
instruction, we're golden. If we receive a signal instead we have to
determine if the system call completed or not, probably based on the
apparent PC address. We should be able to unclobber the PC here,
by a constant relative distance from the original instruction to the
scratchpad instruction.
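The "constant relative distance" fix-up suggested here might look like this sketch (hypothetical helper, simplified types): if the interrupted PC lies inside the scratch pad, translate it back by the displacement between the scratch pad and the original instruction.

```c
#include <stdint.h>

/* If PC was interrupted inside the scratch pad, map it back to the
   corresponding address in the original code; otherwise leave it.
   FROM is the original instruction address, TO the scratch-pad copy,
   LEN the scratch-pad size in bytes.  */
static uint32_t
unclobber_pc (uint32_t pc, uint32_t from, uint32_t to, uint32_t len)
{
  if (pc >= to && pc - to < len)
    return from + (pc - to);    /* Same offset, original location.  */
  return pc;
}
```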
I'll make this more concrete. I took your test program and did this:
(gdb) b func2
Breakpoint 2 at 0x8644: file signal-step.c, line 48.
(gdb) b main
Breakpoint 3 at 0x84d8: file signal-step.c, line 14.
(gdb) r
Starting program: /home/dan/signal-step
Breakpoint 3, main (argc=1, argv=0xbeb798b4) at signal-step.c:14
14 printf ("Starting execution\n");
(gdb) x/i 0x400b5d78
0x400b5d78 <pause+24>: svc 0x00000000
(gdb) b *0x400b5d78
Breakpoint 5 at 0x400b5d78
(gdb) c
Continuing.
Starting execution
sigaction() successful. Now sleeping
[Send signal 1 from other terminal]
Program received signal SIGHUP, Hangup.
0x400b5dfc in nanosleep () from /lib/libc.so.6
(gdb) x/2i $pc - 4
0x400b5df8 <nanosleep+24>: svc 0x00000000
0x400b5dfc <nanosleep+28>: mov r7, r12
(gdb) p (int) $r0
$3 = -516
[Notice, we're apparently after the system call here... but $r0 is
ERESTART_RESTARTBLOCK. So apparently at this point we are going
to restart the system call but have not adjusted the PC to do so
yet. You can see from the kernel that because there is a handler,
ERESTART_RESTARTBLOCK and ERESTARTNOHAND will end up changing r0
to EINTR before running the signal handler. ERESTARTSYS and
ERESTARTNOINTR could potentially adjust the PC to restart the
system call. That PC adjustment has not happened yet. If you're
looking at arch/arm/kernel/signal.c, the point where the debugger gets
control is inside get_signal_to_deliver; the PC is adjusted later in
do_signal if there was no handler, and in handle_signal if there was.
Anyway, the important point is that at this moment we don't know
whether the system call instruction will be executed again. If
we adjust the PC back to the non-scratchpad location, the next
instruction could be any of:
* (Unknown) signal handler
* System call
* Instruction after system call
Complicated.]
(gdb) c
Continuing.
Signal Handler: sig=1 scp=0xbef45190
siginfo.si_signo=1
siginfo.si_errno=0
siginfo.si_code=0
Breakpoint 5, 0x400b5d78 in pause () from /lib/libc.so.6
(gdb) si
[No prompt. We're inside pause now. Send signal 2 from another
window.]
Program received signal SIGINT, Interrupt.
0x400b5d7c in pause () from /lib/libc.so.6
(gdb) p (int) $r0
$4 = -514
[We're after the syscall instruction now. This is ERESTARTNOHAND, for
the semantics of pause - only return after a signal handler is run.]
(gdb) handle SIGINT pass
SIGINT is used by the debugger.
Are you sure you want to change it? (y or n) y
Signal Stop Print Pass to program Description
SIGINT Yes Yes Yes Interrupt
(gdb) set $pc = 0x0
(gdb) c
Continuing.
Breakpoint 2, func2 (sig=2, sinf=0xbe989e00, foo=0xbe989e80) at
signal-step.c:48
48 printf ("Signal Handler: sig=%d scp=%p\n", sig, sinf);
(gdb) bt
#0 func2 (sig=2, sinf=0xbe989e00, foo=0xbe989e80) at signal-step.c:48
#1 <signal handler called>
#2 0x00000000 in ?? ()
#3 0x000085f8 in func (sig=1, sinf=0xbe98a190, foo=0xbe98a210) at signal-step.c:40
#4 <signal handler called>
#5 0x400b5dfc in nanosleep () from /lib/libc.so.6
#6 0x400b5ba8 in sleep () from /lib/libc.so.6
#7 0x00008568 in main (argc=1, argv=0xbe98a8b4) at signal-step.c:25
See what's happened? My adjustment to the PC, at the time GDB sees
the signal, has ended up in the saved signal context. So we need to
take advantage of that to remove the scratchpad from the signal
context.
Let me know if that session log straightened things out, or if my
explanation just confused the issue hopelessly.
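For reference, the r0 values in the session log above are the kernel's internal restart codes (values from the kernel's include/linux/errno.h; the helper below is just an illustration, not GDB code):

```c
#include <stdint.h>

/* Kernel-internal restart codes; user space never sees them, but a
   debugger does when it stops a thread mid-signal-delivery.  */
#define ERESTARTSYS            512
#define ERESTARTNOINTR         513
#define ERESTARTNOHAND         514  /* The -514 seen after pause.  */
#define ERESTART_RESTARTBLOCK  516  /* The -516 seen after nanosleep.  */

/* Is R0 one of the restart codes, i.e. is the kernel still deciding
   whether to rewind the PC and rerun the system call?  */
static int
is_restart_code (int32_t r0)
{
  switch (-r0)
    {
    case ERESTARTSYS:
    case ERESTARTNOINTR:
    case ERESTARTNOHAND:
    case ERESTART_RESTARTBLOCK:
      return 1;
    default:
      return 0;
    }
}
```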
> I've hit some problems testing this patch, mainly because I can't seem
> to get a reliable baseline run with my current test setup. AFAICT, there
> should be no effect on behaviour unless displaced stepping is in use
> (differences in passes/failures with my patch only seem to be in
> "unreliable" tests, after running baseline testing three times), and of
> course displaced stepping isn't present for ARM without this patch
> anyway.
What sort of tests are fluctuating? Our internal tree has at least
one testsuite reliability fix that I hadn't gotten round to posting
for the FSF tree yet, I'll do it now.
> +static struct displaced_step_closure *
> +arm_catch_kernel_helper_return (CORE_ADDR from, CORE_ADDR to,
> + struct regcache *regs)
> +{
> + struct displaced_step_closure *dsc
> + = xmalloc (sizeof (struct displaced_step_closure));
> +
> + dsc->numinsns = 1;
> + dsc->insn_addr = from;
> + dsc->cleanup = &cleanup_kernel_helper_return;
> + /* Say we wrote to the PC, else cleanup will set PC to the next
> + instruction in the helper, which isn't helpful. */
> + dsc->wrote_to_pc = 1;
> +
> + /* Preparation: tmp[0] <- r14
> + r14 <- <scratch space>+4
> + *(<scratch space>+8) <- from
> + Insn: ldr pc, [r14, #4]
> + Cleanup: r14 <- tmp[0], pc <- tmp[0]. */
> +
> + dsc->tmp[0] = displaced_read_reg (regs, from, ARM_LR_REGNUM);
> + displaced_write_reg (regs, dsc, ARM_LR_REGNUM, (ULONGEST) to + 4,
> + CANNOT_WRITE_PC);
> + write_memory_unsigned_integer (to + 8, 4, from);
> +
> + dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */
> +
> + return dsc;
> +}
You're pointing lr at scratch+4, which will be a breakpoint. Why do
we need the load? Is it because common code wants to resume execution
at the scratch space, rather than at the helper?
> + 2. The instruction is single-stepped.
This misses the complicated part, IMO :-) Execution is resumed at the
scratch space address. Thus the breakpoint.
> +/* Write to the PC as from a branch-exchange instruction. */
> +
> +static void
> +bx_write_pc (struct regcache *regs, ULONGEST val)
> +{
> + ULONGEST ps;
> +
> + regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
> +
> + if ((val & 1) == 1)
> + {
> + regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM, ps | CPSR_T);
> + regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffe);
> + }
> + else if ((val & 2) == 0)
> + {
> + regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
> + ps & ~(ULONGEST) CPSR_T);
> + regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val);
> + }
> + else
> + /* Unpredictable behaviour. */
> + warning (_("Single-stepping BX to non-word-aligned ARM instruction."));
> +}
Let's either make this an error, or else do our best - the current
version will fall through to the instruction after the BX.
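For the "do our best" option, one possible sketch (hypothetical helper) is to warn and then force word alignment, so execution at least continues near the intended target instead of falling through past the BX:

```c
#include <stdint.h>

/* BX to an address with bit 1 set and bit 0 clear is unpredictable in
   ARM state.  Rather than leaving the PC untouched (which would fall
   through to the instruction after the BX), clamp the target to the
   containing word.  */
static uint32_t
bx_misaligned_target (uint32_t val)
{
  return val & ~(uint32_t) 3;
}
```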
> +/* This function is used to concisely determine if an instruction INSN
> + references PC. Register fields of interest in INSN should have the
> + corresponding fields of BITMASK set to 0b1111. The function returns 1
> + if any of these fields in INSN reference the PC (also 0b1111, r15), else it
> + returns 0. */
> +
> +static int
> +insn_references_pc (unsigned long insn, unsigned long bitmask)
> +{
> + unsigned long lowbit = 1;
Should these be uint32_t instead of unsigned long?
Otherwise, looks OK.
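For what it's worth, the scan the quoted comment specifies could be written with uint32_t roughly like this (a sketch reconstructed from the comment, not the patch's actual body, which is truncated above):

```c
#include <stdint.h>

/* Return 1 if any 4-bit register field of INSN marked 0b1111 in
   BITMASK contains 0b1111 (i.e. references r15, the PC).  */
static int
insn_references_pc (uint32_t insn, uint32_t bitmask)
{
  uint32_t lowbit = 1;

  while (bitmask != 0)
    {
      uint32_t mask;

      /* Find the lowest set bit: it starts a marked register field.  */
      for (; lowbit != 0 && (bitmask & lowbit) == 0; lowbit <<= 1)
        ;
      if (lowbit == 0)
        return 0;

      mask = lowbit * 0xf;      /* The whole 4-bit field.  */
      if ((insn & mask) == mask)
        return 1;               /* Field is 0b1111: references PC.  */

      bitmask &= ~mask;
      lowbit <<= 4;             /* Skip past this field.  */
    }

  return 0;
}
```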
--
Daniel Jacobowitz
CodeSourcery
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-06-09 17:37 ` Daniel Jacobowitz
@ 2009-06-10 14:58 ` Pedro Alves
2009-06-10 15:05 ` Daniel Jacobowitz
0 siblings, 1 reply; 24+ messages in thread
From: Pedro Alves @ 2009-06-10 14:58 UTC (permalink / raw)
To: Daniel Jacobowitz; +Cc: Julian Brown, gdb-patches
On Tuesday 09 June 2009 18:37:09, Daniel Jacobowitz wrote:
> On Sat, May 16, 2009 at 07:19:10PM +0100, Julian Brown wrote:
> > I'm not sure what the status is here now. For testing purposes, I've
> > (still) been using a local patch which uses displaced stepping for all
> > single-step operations.
> We still can't use software single-stepping simultaneously in multiple
> threads. Pedro, should we fix that or always use displaced stepping
> for now?
It would be nice to have that fixed, for sure, so yes to the
"we should fix that" question. However, it seems to me that this
is something that can be worked on mostly independently of the ARM
bits as it's a general software single-step issue, not really ARM
specific. Unless someone wants to (and has time to) tackle it
right now, I'd say go with the always displace-step version. If
nothing else, helps in stressing the displaced stepping
implementation. :-)
>
> > Daniel Jacobowitz <drow@false.org> wrote:
> >
> > > * What's the point of executing mov<cond> on the target for BL<cond>?
> > > At that point it seems like we ought to skip the target step entirely;
> > > just simulate the instruction. We've already got a function to check
> > > conditions (condition_true).
> >
> > I'm now using NOP instructions and condition_true, because the current
> > displaced stepping support wants to execute "something" rather than
> > nothing.
>
> From infrun.c:
>
> One way to work around [software single stepping]...
> would be to have gdbarch_displaced_step_copy_insn fully
> simulate the effect of PC-relative instructions (and return NULL)
> on architectures that use software single-stepping.
>
> So the interface you need is there; it's just not implemented yet:
>
> /* We don't support the fully-simulated case at present. */
> gdb_assert (closure);
>
> I think the implementation strategy will look like:
>
> * Add another non-zero return value from displaced_step_prepare.
The thread should still be marked as running from the frontend/CLI's
perspective, as it would be stuck doing internal things, so that
the user can't try to "continue" it again ...
>
> * Update should_resume after the call, in resume (currently unused).
... so this bit in resume applies:
{
if (!displaced_step_prepare (inferior_ptid))
{
/* Got placed in displaced stepping queue. Will be resumed
later when all the currently queued displaced stepping
requests finish. The thread is not executing at this point,
and the call to set_executing will be made later. But we
need to call set_running here, since from frontend point of view,
the thread is running. */
set_running (inferior_ptid, 1);
discard_cleanups (old_cleanups);
return;
}
}
You need that, since "set_running" is usually called from target_resume,
which you'd bypass in the fully simulated case.
>
> * Ask Pedro how to pretend that the inferior resumed and stopped,
> for higher levels.
See below.
> I think this will entail a new queue.
Indeed. Every time we required something similar, we got
away with doing that shortcut inside target code, keeping infrun
agnostic of the trick. But this is gdbarch code, independent of which
target_ops is playing (e.g., could be native linux-nat.c, could be remote.c).
> Bonus
> points if prepare_to_wait and wait_for_inferior do not invalidate
> the perfectly good register cache at this point.
>
> Pedro, thoughts - easy or should we stick with the NOP workaround for
> now?
In sync mode, I can picture bypassing the target_wait call in
wait_for_inferior, by peeking a local event queue first. In
async (and non-stop) modes, that would happen in fetch_inferior_event,
which is an asynchronous event loop callback. This requires registering
a new event source from within infrun.c, and, marking it whenever
we have events to handle (that is, when the queue isn't empty). In
principle, it shouldn't be hard. Keep a local queue of events in
infrun.c (as first approximation, each event consisting of a ptid
and a target_waitstatus), and add an async_event_handler to infrun.c (look
around in remote.c for event queue and remote_async_inferior_event_token
for copy&paste sources). Then, in displaced_step_prepare, if you just
detected a fully simulated case, push a new stop
event (TARGET_WAITKIND_STOP/SIGTRAP) in this queue, and mark the
event source as having something to process. The event source
callback would be something like:
static void
infrun_async_inferior_event_handler (gdb_client_data data)
{
inferior_event_handler (INF_REG_EVENT, NULL);
}
Then, just do nothing else, returning back to the event loop, which
will eventually call inferior_event_handler->fetch_inferior_event, and
that would collect the event from the local event queue, and pass it
to handle_inferior_event as usual.
The event queue would expose a similar interface to target_wait,
so that wait_for_inferior can request a queued event from a
specific ptid (and things look neat).
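A minimal sketch of such a queue (simplified stand-in types; GDB's real ptid_t and target_waitstatus are richer structures, and the names below are hypothetical):

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-in for GDB's ptid_t.  */
typedef long fake_ptid;

struct queued_event
{
  fake_ptid ptid;
  int waitkind;               /* Stand-in for a target_waitstatus.  */
  struct queued_event *next;
};

static struct queued_event *event_queue;

/* Push a simulated stop event for PTID, as displaced_step_prepare
   would for a fully simulated instruction.  */
static void
push_event (fake_ptid ptid, int waitkind)
{
  struct queued_event *ev = malloc (sizeof *ev);
  ev->ptid = ptid;
  ev->waitkind = waitkind;
  ev->next = event_queue;
  event_queue = ev;
}

/* Pop an event for PTID, or for any thread if PTID is -1, mirroring a
   target_wait-style interface.  Returns the waitkind, or -1 if no
   matching event is queued.  */
static int
pop_event (fake_ptid ptid)
{
  struct queued_event **link;

  for (link = &event_queue; *link != NULL; link = &(*link)->next)
    if (ptid == -1 || (*link)->ptid == ptid)
      {
        struct queued_event *ev = *link;
        int kind = ev->waitkind;

        *link = ev->next;       /* Unlink before freeing.  */
        free (ev);
        return kind;
      }

  return -1;
}
```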
Care must be taken to keep the event queue in sync with target
reality --- e.g., if the thread that was doing the short-circuit (and so
has a pending event in a new infrun.c local event queue) exits (because
e.g., we lost connection to the target in between, or the whole
process exits due to another thread doing an _exit call or something
like that), then we need to remove the event from the queue,
otherwise, when you go to process it, things break while referencing
a thread that doesn't exist anymore. Should be mostly a matter of
taking care of the event queue from within infrun.c:infrun_thread_thread_exit.
Another source of care is if there's code out there that tries to resume
all threads and the target happens to try to resume such a thread behind
infrun event queue's back. This shouldn't be a problem in non-stop
mode, though, so probably not something to worry about much
for now.
--
Pedro Alves
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-06-10 14:58 ` Pedro Alves
@ 2009-06-10 15:05 ` Daniel Jacobowitz
2009-07-15 19:16 ` Julian Brown
0 siblings, 1 reply; 24+ messages in thread
From: Daniel Jacobowitz @ 2009-06-10 15:05 UTC (permalink / raw)
To: Pedro Alves; +Cc: Julian Brown, gdb-patches
On Wed, Jun 10, 2009 at 03:59:46PM +0100, Pedro Alves wrote:
> It would be nice to have that fixed, for sure, so yes to the
> "we should fix that" question. However, it seems to me that this
> is something that can be worked on mostly independently of the ARM
> bits as it's a general software single-step issue, not really ARM
> specific. Unless someone wants to (and has time to) tackle it
> right now, I'd say go with the always displace-step version. If
> nothing else, helps in stressing the displaced stepping
> implementation. :-)
Agreed - let's merge the always-displace patch for now.
> Care must be taken to keep
Thanks for the plan. I suspect this is too much to insist on before
this patch goes in :-)
--
Daniel Jacobowitz
CodeSourcery
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-06-10 15:05 ` Daniel Jacobowitz
@ 2009-07-15 19:16 ` Julian Brown
2009-07-24 2:17 ` Daniel Jacobowitz
2009-07-31 11:43 ` Julian Brown
0 siblings, 2 replies; 24+ messages in thread
From: Julian Brown @ 2009-07-15 19:16 UTC (permalink / raw)
To: gdb-patches; +Cc: Pedro Alves, Daniel Jacobowitz
[-- Attachment #1: Type: text/plain, Size: 4408 bytes --]
Here's a new version of the ARM displaced-stepping patch, together with
a new version of the patch to always use displaced stepping if it is
enabled:
Pedro wrote:
> It would be nice to have that fixed, for sure, so yes to the
> "we should fix that" question. However, it seems to me that this
> is something that can be worked on mostly independently of the ARM
> bits as it's a general software single-step issue, not really ARM
> specific. Unless someone wants to (and has time to) tackle it
> right now, I'd say go with the always displace-step version. If
> nothing else, helps in stressing the displaced stepping
> implementation. :-)
As suggested here.
Dan wrote:
> Pedro wrote:
> > Care must be taken to keep
>
> Thanks for the plan. I suspect this is too much to insist on before
> this patch goes in :-)
The current patch still uses a target round trip with a NOP
instruction, rather than fiddling with infrun.c to handle
fully-emulated instructions more cleanly (and/or faster). Something for
future improvement, perhaps.
Dan wrote:
> [a Linux signal handling explanation]
Thanks for that -- I think signal handling for displaced stepping now
works reasonably well, including stepping over sigreturn/rt_sigreturn
syscalls (for EABI). AFAICT the scratch space address never leaks into
the signal trampoline frame, so the potentially-disastrous results of
that happening are avoided already.
One possibly dubious part though is the positioning of the
insert_breakpoints() call in arm-linux-tdep.c:arm_linux_copy_svc():
without that, the momentary breakpoint used to regain control after a
sigreturn syscall never actually gets inserted into the debugged
program, because the displaced-step copy function gets called after
that normally happens. It should be safe AFAICT, but I may have
overlooked something.
Other things mentioned during previous review are fixed, hopefully.
Test results look reasonable, I think. "mi-nonstop.exp" tests fail in
Thumb mode, since this patch doesn't support Thumb. There's some noise
in threading results, but that's probably just bad luck.
OK to apply?
Cheers,
Julian
ChangeLog (displaced-stepping-always)
* infrun.c (displaced_step_fixup): If this is a software
single-stepping arch, don't tell the target to single-step.
(maybe_software_singlestep): Return 0 if we're using displaced
stepping.
(resume): If this is a software single-stepping arch, and
displaced-stepping is enabled, use it for all single-step
requests.
ChangeLog (displaced-stepping)
gdb/
* arm-linux-tdep.c: Include arch-utils.h, inferior.h, gdbthread.h and symfile.h.
(arm_linux_cleanup_svc, arm_linux_copy_svc): New.
(cleanup_kernel_helper_return, arm_catch_kernel_helper_return): New.
(arm_linux_displaced_step_copy_insn): New.
(arm_linux_init_abi): Initialise displaced stepping callbacks.
* arm-tdep.c (DISPLACED_STEPPING_ARCH_VERSION): New macro.
(ARM_NOP): New.
(displaced_read_reg, displaced_in_arm_mode, branch_write_pc)
(bx_write_pc, load_write_pc, alu_write_pc, displaced_write_reg)
(insn_references_pc, copy_unmodified, cleanup_preload, copy_preload)
(copy_preload_reg, cleanup_copro_load_store, copy_copro_load_store)
(cleanup_branch, copy_b_bl_blx, copy_bx_blx_reg, cleanup_alu_imm)
(copy_alu_imm, cleanup_alu_reg, copy_alu_reg)
(cleanup_alu_shifted_reg, copy_alu_shifted_reg, cleanup_load)
(cleanup_store, copy_extra_ld_st, copy_ldr_str_ldrb_strb)
(cleanup_block_load_all, cleanup_block_store_pc)
(cleanup_block_load_pc, copy_block_xfer, cleanup_svc, copy_svc)
(copy_undef, copy_unpred): New.
(decode_misc_memhint_neon, decode_unconditional)
(decode_miscellaneous, decode_dp_misc, decode_ld_st_word_ubyte)
(decode_media, decode_b_bl_ldmstm, decode_ext_reg_ld_st)
(decode_svc_copro, arm_process_displaced_insn)
(arm_displaced_init_closure, arm_displaced_step_copy_insn)
(arm_displaced_step_fixup): New.
(arm_gdbarch_init): Initialise max insn length field.
* arm-tdep.h (DISPLACED_TEMPS, DISPLACED_MODIFIED_INSNS): New
macros.
(displaced_step_closure, pc_write_style): New.
(arm_displaced_init_closure, displaced_read_reg)
(arm_process_displaced_insn, displaced_write_reg)
(arm_displaced_step_copy_insn, arm_displaced_step_fixup): Add
prototypes.
[-- Attachment #2: fsf-arm-displaced-stepping-8.diff --]
[-- Type: text/x-patch, Size: 88697 bytes --]
--- .pc/displaced-stepping/gdb/arm-linux-tdep.c 2009-07-15 11:14:33.000000000 -0700
+++ gdb/arm-linux-tdep.c 2009-07-15 11:15:02.000000000 -0700
@@ -38,6 +38,10 @@
#include "arm-linux-tdep.h"
#include "linux-tdep.h"
#include "glibc-tdep.h"
+#include "arch-utils.h"
+#include "inferior.h"
+#include "gdbthread.h"
+#include "symfile.h"
#include "gdb_string.h"
@@ -590,6 +594,205 @@ arm_linux_software_single_step (struct f
return 1;
}
+/* Support for displaced stepping of Linux SVC instructions. */
+
+static void
+arm_linux_cleanup_svc (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST apparent_pc;
+ int within_scratch;
+
+ regcache_cooked_read_unsigned (regs, ARM_PC_REGNUM, &apparent_pc);
+
+ within_scratch = (apparent_pc >= dsc->scratch_base
+ && apparent_pc < (dsc->scratch_base
+ + DISPLACED_MODIFIED_INSNS * 4 + 4));
+
+ if (debug_displaced)
+ {
+ fprintf_unfiltered (gdb_stdlog, "displaced: PC is apparently %.8lx after "
+ "SVC step ", (unsigned long) apparent_pc);
+ if (within_scratch)
+ fprintf_unfiltered (gdb_stdlog, "(within scratch space)\n");
+ else
+ fprintf_unfiltered (gdb_stdlog, "(outside scratch space)\n");
+ }
+
+ if (within_scratch)
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, from + 4, BRANCH_WRITE_PC);
+}
+
+static int
+arm_linux_copy_svc (uint32_t insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ struct frame_info *frame;
+ unsigned int svc_number = displaced_read_reg (regs, from, 7);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying Linux svc insn %.8lx\n",
+ (unsigned long) insn);
+
+ frame = get_current_frame ();
+
+ /* Is this a sigreturn or rt_sigreturn syscall? Note: these are only useful
+ for EABI. */
+ if (svc_number == 119 || svc_number == 173)
+ {
+ if (get_frame_type (frame) == SIGTRAMP_FRAME)
+ {
+ CORE_ADDR return_to;
+ struct symtab_and_line sal;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: found "
+ "sigreturn/rt_sigreturn SVC call. PC in frame = %lx\n",
+ (unsigned long) get_frame_pc (frame));
+
+ return_to = frame_pc_unwind (frame);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: unwind pc = %lx. "
+ "Setting momentary breakpoint.\n", (unsigned long) return_to);
+
+ gdb_assert (inferior_thread ()->step_resume_breakpoint == NULL);
+
+ sal = find_pc_line (return_to, 0);
+ sal.pc = return_to;
+ sal.section = find_pc_overlay (return_to);
+ sal.explicit_pc = 1;
+
+ frame = get_prev_frame (frame);
+
+ if (frame)
+ {
+ inferior_thread ()->step_resume_breakpoint
+ = set_momentary_breakpoint (sal, get_frame_id (frame),
+ bp_step_resume);
+
+ /* We need to make sure we actually insert the momentary
+ breakpoint set above. */
+ insert_breakpoints ();
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: couldn't find previous "
+ "frame to set momentary breakpoint for "
+ "sigreturn/rt_sigreturn\n");
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: sigreturn/rt_sigreturn "
+ "SVC call not in signal trampoline frame\n");
+ }
+
+ /* Preparation: If we detect sigreturn, set momentary breakpoint at resume
+ location, else nothing.
+ Insn: unmodified svc.
+ Cleanup: if pc lands in scratch space, pc <- insn_addr + 4
+ else leave pc alone. */
+
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &arm_linux_cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+
+/* The following two functions implement single-stepping over calls to Linux
+ kernel helper routines, which perform e.g. atomic operations on architecture
+ variants which don't support them natively.
+
+ When this function is called, the PC will be pointing at the kernel helper
+ (at an address inaccessible to GDB), and r14 will point to the return
+ address. Displaced stepping always executes code in the copy area:
+ so, make the copy-area instruction branch back to the kernel helper (the
+ "from" address), and make r14 point to the breakpoint in the copy area. In
+ that way, we regain control once the kernel helper returns, and can clean
+ up appropriately (as if we had just returned from the kernel helper, had it
+ been called from the non-displaced location). */
+
+static void
+cleanup_kernel_helper_return (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, dsc->tmp[0], BRANCH_WRITE_PC);
+}
+
+static void
+arm_catch_kernel_helper_return (CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->cleanup = &cleanup_kernel_helper_return;
+ /* Say we wrote to the PC, else cleanup will set PC to the next
+ instruction in the helper, which isn't helpful. */
+ dsc->wrote_to_pc = 1;
+
+ /* Preparation: tmp[0] <- r14
+ r14 <- <scratch space>+4
+ *(<scratch space>+8) <- from
+ Insn: ldr pc, [r14, #4]
+ Cleanup: r14 <- tmp[0], pc <- tmp[0]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, ARM_LR_REGNUM);
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, (ULONGEST) to + 4,
+ CANNOT_WRITE_PC);
+ write_memory_unsigned_integer (to + 8, 4, from);
+
+ dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */
+}
+
+/* Linux-specific displaced step instruction copying function. Detects when
+ the program has stepped into a Linux kernel helper routine (which must be
+ handled as a special case), falling back to arm_displaced_step_copy_insn()
+ if it hasn't. */
+
+static struct displaced_step_closure *
+arm_linux_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+
+ /* Detect when we enter an (inaccessible by GDB) Linux kernel helper, and
+ stop at the return location. */
+ if (from > 0xffff0000)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected kernel helper "
+ "at %.8lx\n", (unsigned long) from);
+
+ arm_catch_kernel_helper_return (from, to, regs, dsc);
+ }
+ else
+ {
+ uint32_t insn = read_memory_unsigned_integer (from, 4);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", (unsigned long) insn,
+ (unsigned long) from);
+
+ /* Override the default handling of SVC instructions. */
+ dsc->u.svc.copy_svc_os = arm_linux_copy_svc;
+
+ arm_process_displaced_insn (insn, from, to, regs, dsc);
+ }
+
+ arm_displaced_init_closure (gdbarch, from, to, dsc);
+
+ return dsc;
+}
+
static void
arm_linux_init_abi (struct gdbarch_info info,
struct gdbarch *gdbarch)
@@ -650,6 +853,14 @@ arm_linux_init_abi (struct gdbarch_info
arm_linux_regset_from_core_section);
set_gdbarch_get_siginfo_type (gdbarch, linux_get_siginfo_type);
+
+ /* Displaced stepping. */
+ set_gdbarch_displaced_step_copy_insn (gdbarch,
+ arm_linux_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_fixup (gdbarch, arm_displaced_step_fixup);
+ set_gdbarch_displaced_step_free_closure (gdbarch,
+ simple_displaced_step_free_closure);
+ set_gdbarch_displaced_step_location (gdbarch, displaced_step_at_entry_point);
}
/* Provide a prototype to silence -Wmissing-prototypes. */
--- .pc/displaced-stepping/gdb/arm-tdep.c 2009-07-15 11:14:33.000000000 -0700
+++ gdb/arm-tdep.c 2009-07-15 11:15:02.000000000 -0700
@@ -241,6 +241,11 @@ struct arm_prologue_cache
struct trad_frame_saved_reg *saved_regs;
};
+/* Architecture version for displaced stepping. This affects the behaviour of
+ certain instructions, and really should not be hard-wired. */
+
+#define DISPLACED_STEPPING_ARCH_VERSION 5
+
/* Addresses for calling Thumb functions have the bit 0 set.
Here are some macros to test, set, or clear bit 0 of addresses. */
#define IS_THUMB_ADDR(addr) ((addr) & 1)
@@ -2175,280 +2180,2099 @@ arm_software_single_step (struct frame_i
return 1;
}
-#include "bfd-in2.h"
-#include "libcoff.h"
+/* ARM displaced stepping support.
+
+ Generally ARM displaced stepping works as follows:
+
+ 1. When an instruction is to be single-stepped, it is first decoded by
+ arm_process_displaced_insn (called from arm_displaced_step_copy_insn).
+ Depending on the type of instruction, it is then copied to a scratch
+ location, possibly in a modified form. The copy_* set of functions
+ performs such modification, as necessary. A breakpoint is placed after
+ the modified instruction in the scratch space to return control to GDB.
+ Note in particular that instructions which modify the PC will no longer
+ do so after modification.
+
+ 2. The instruction is single-stepped, by setting the PC to the scratch
+ location address, and resuming. Control returns to GDB when the
+ breakpoint is hit.
+
+ 3. A cleanup function (cleanup_*) is called corresponding to the copy_*
+ function used for the current instruction. This function's job is to
+ put the CPU/memory state back to what it would have been if the
+ instruction had been executed unmodified in its original location. */
+
+/* NOP instruction (mov r0, r0). */
+#define ARM_NOP 0xe1a00000
+
+/* Helper for register reads for displaced stepping. In particular, this
+ returns the PC as it would be seen by the instruction at its original
+ location. */
+
+ULONGEST
+displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno)
+{
+ ULONGEST ret;
+
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n",
+ (unsigned long) from + 8);
+ return (ULONGEST) from + 8; /* Pipeline offset. */
+ }
+ else
+ {
+ regcache_cooked_read_unsigned (regs, regno, &ret);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read r%d value %.8lx\n",
+ regno, (unsigned long) ret);
+ return ret;
+ }
+}
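The +8 pipeline-offset rule that displaced_read_reg emulates can be illustrated in isolation. This is a minimal standalone sketch, not GDB code, and the helper name is invented for the example: in ARM state, an instruction that reads r15 observes its own address plus 8, and that value must be synthesized from the instruction's *original* address, never from its scratch-space copy.

```c
#include <stdint.h>

/* Hypothetical helper: the value an ARM-state instruction located at
   ORIG_ADDR observes when it reads the PC (r15).  The +8 reflects the
   classic three-stage ARM pipeline.  */
static uint32_t
arm_apparent_pc (uint32_t orig_addr)
{
  return orig_addr + 8;
}
```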
static int
-gdb_print_insn_arm (bfd_vma memaddr, disassemble_info *info)
+displaced_in_arm_mode (struct regcache *regs)
{
- if (arm_pc_is_thumb (memaddr))
- {
- static asymbol *asym;
- static combined_entry_type ce;
- static struct coff_symbol_struct csym;
- static struct bfd fake_bfd;
- static bfd_target fake_target;
+ ULONGEST ps;
- if (csym.native == NULL)
- {
- /* Create a fake symbol vector containing a Thumb symbol.
- This is solely so that the code in print_insn_little_arm()
- and print_insn_big_arm() in opcodes/arm-dis.c will detect
- the presence of a Thumb symbol and switch to decoding
- Thumb instructions. */
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
- fake_target.flavour = bfd_target_coff_flavour;
- fake_bfd.xvec = &fake_target;
- ce.u.syment.n_sclass = C_THUMBEXTFUNC;
- csym.native = &ce;
- csym.symbol.the_bfd = &fake_bfd;
- csym.symbol.name = "fake";
- asym = (asymbol *) & csym;
- }
+ return (ps & CPSR_T) == 0;
+}
- memaddr = UNMAKE_THUMB_ADDR (memaddr);
- info->symbols = &asym;
- }
- else
- info->symbols = NULL;
+/* Write to the PC as from a branch instruction. */
- if (info->endian == BFD_ENDIAN_BIG)
- return print_insn_big_arm (memaddr, info);
+static void
+branch_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (displaced_in_arm_mode (regs))
+ /* Note: If bits 0/1 are set, this branch would be unpredictable for
+ architecture versions < 6. */
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x3);
else
- return print_insn_little_arm (memaddr, info);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x1);
}
-/* The following define instruction sequences that will cause ARM
- cpu's to take an undefined instruction trap. These are used to
- signal a breakpoint to GDB.
-
- The newer ARMv4T cpu's are capable of operating in ARM or Thumb
- modes. A different instruction is required for each mode. The ARM
- cpu's can also be big or little endian. Thus four different
- instructions are needed to support all cases.
-
- Note: ARMv4 defines several new instructions that will take the
- undefined instruction trap. ARM7TDMI is nominally ARMv4T, but does
- not in fact add the new instructions. The new undefined
- instructions in ARMv4 are all instructions that had no defined
- behaviour in earlier chips. There is no guarantee that they will
- raise an exception, but may be treated as NOP's. In practice, it
- may only safe to rely on instructions matching:
-
- 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
- 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
- C C C C 0 1 1 x x x x x x x x x x x x x x x x x x x x 1 x x x x
-
- Even this may only true if the condition predicate is true. The
- following use a condition predicate of ALWAYS so it is always TRUE.
-
- There are other ways of forcing a breakpoint. GNU/Linux, RISC iX,
- and NetBSD all use a software interrupt rather than an undefined
- instruction to force a trap. This can be handled by by the
- abi-specific code during establishment of the gdbarch vector. */
-
-#define ARM_LE_BREAKPOINT {0xFE,0xDE,0xFF,0xE7}
-#define ARM_BE_BREAKPOINT {0xE7,0xFF,0xDE,0xFE}
-#define THUMB_LE_BREAKPOINT {0xbe,0xbe}
-#define THUMB_BE_BREAKPOINT {0xbe,0xbe}
-
-static const char arm_default_arm_le_breakpoint[] = ARM_LE_BREAKPOINT;
-static const char arm_default_arm_be_breakpoint[] = ARM_BE_BREAKPOINT;
-static const char arm_default_thumb_le_breakpoint[] = THUMB_LE_BREAKPOINT;
-static const char arm_default_thumb_be_breakpoint[] = THUMB_BE_BREAKPOINT;
-
-/* Determine the type and size of breakpoint to insert at PCPTR. Uses
- the program counter value to determine whether a 16-bit or 32-bit
- breakpoint should be used. It returns a pointer to a string of
- bytes that encode a breakpoint instruction, stores the length of
- the string to *lenptr, and adjusts the program counter (if
- necessary) to point to the actual memory location where the
- breakpoint should be inserted. */
+/* Write to the PC as from a branch-exchange instruction. */
-static const unsigned char *
-arm_breakpoint_from_pc (struct gdbarch *gdbarch, CORE_ADDR *pcptr, int *lenptr)
+static void
+bx_write_pc (struct regcache *regs, ULONGEST val)
{
- struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+ ULONGEST ps;
- if (arm_pc_is_thumb (*pcptr))
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
+
+ if ((val & 1) == 1)
{
- *pcptr = UNMAKE_THUMB_ADDR (*pcptr);
- *lenptr = tdep->thumb_breakpoint_size;
- return tdep->thumb_breakpoint;
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM, ps | CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffe);
+ }
+ else if ((val & 2) == 0)
+ {
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
+ ps & ~(ULONGEST) CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val);
}
else
{
- *lenptr = tdep->arm_breakpoint_size;
- return tdep->arm_breakpoint;
+ /* Unpredictable behaviour. Try to do something sensible (switch to ARM
+ mode, align dest to 4 bytes). */
+ warning (_("Single-stepping BX to non-word-aligned ARM instruction."));
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
+ ps & ~(ULONGEST) CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffc);
}
}
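The three BX destination cases above can be sanity-checked with a pair of pure functions. This is a sketch with invented names, mirroring (not replacing) the regcache logic: bit 0 set selects Thumb state and is cleared from the address; an ARM destination with bit 1 set is architecturally unpredictable, so word alignment is forced, as the code above does.

```c
#include <stdint.h>

/* Hypothetical helper: the PC value actually written by BX for raw
   destination VAL.  */
static uint32_t
bx_dest_pc (uint32_t val)
{
  if (val & 1)
    return val & 0xfffffffeu;   /* Thumb: clear bit 0.  */
  if ((val & 2) == 0)
    return val;                 /* ARM, already word-aligned.  */
  return val & 0xfffffffcu;     /* Unpredictable: force alignment.  */
}

/* Hypothetical helper: nonzero if BX to VAL selects Thumb state.  */
static int
bx_dest_is_thumb (uint32_t val)
{
  return val & 1;
}
```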
-/* Extract from an array REGBUF containing the (raw) register state a
- function return value of type TYPE, and copy that, in virtual
- format, into VALBUF. */
+/* Write to the PC as if from a load instruction. */
static void
-arm_extract_return_value (struct type *type, struct regcache *regs,
- gdb_byte *valbuf)
+load_write_pc (struct regcache *regs, ULONGEST val)
{
- struct gdbarch *gdbarch = get_regcache_arch (regs);
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 5)
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
- if (TYPE_CODE_FLT == TYPE_CODE (type))
+/* Write to the PC as if from an ALU instruction. */
+
+static void
+alu_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 7 && displaced_in_arm_mode (regs))
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
+
+/* Helper for writing to registers for displaced stepping. Writing to the PC
+ has varying effects depending on the instruction which does the write:
+ this is controlled by the WRITE_PC argument. */
+
+void
+displaced_write_reg (struct regcache *regs, struct displaced_step_closure *dsc,
+ int regno, ULONGEST val, enum pc_write_style write_pc)
+{
+ if (regno == 15)
{
- switch (gdbarch_tdep (gdbarch)->fp_model)
- {
- case ARM_FLOAT_FPA:
- {
- /* The value is in register F0 in internal format. We need to
- extract the raw value and then convert it to the desired
- internal type. */
- bfd_byte tmpbuf[FP_REGISTER_SIZE];
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing pc %.8lx\n",
+ (unsigned long) val);
+ switch (write_pc)
+ {
+ case BRANCH_WRITE_PC:
+ branch_write_pc (regs, val);
+ break;
- regcache_cooked_read (regs, ARM_F0_REGNUM, tmpbuf);
- convert_from_extended (floatformat_from_type (type), tmpbuf,
- valbuf, gdbarch_byte_order (gdbarch));
- }
+ case BX_WRITE_PC:
+ bx_write_pc (regs, val);
break;
- case ARM_FLOAT_SOFT_FPA:
- case ARM_FLOAT_SOFT_VFP:
- regcache_cooked_read (regs, ARM_A1_REGNUM, valbuf);
- if (TYPE_LENGTH (type) > 4)
- regcache_cooked_read (regs, ARM_A1_REGNUM + 1,
- valbuf + INT_REGISTER_SIZE);
+ case LOAD_WRITE_PC:
+ load_write_pc (regs, val);
break;
- default:
- internal_error
- (__FILE__, __LINE__,
- _("arm_extract_return_value: Floating point model not supported"));
+ case ALU_WRITE_PC:
+ alu_write_pc (regs, val);
break;
- }
- }
- else if (TYPE_CODE (type) == TYPE_CODE_INT
- || TYPE_CODE (type) == TYPE_CODE_CHAR
- || TYPE_CODE (type) == TYPE_CODE_BOOL
- || TYPE_CODE (type) == TYPE_CODE_PTR
- || TYPE_CODE (type) == TYPE_CODE_REF
- || TYPE_CODE (type) == TYPE_CODE_ENUM)
- {
- /* If the the type is a plain integer, then the access is
- straight-forward. Otherwise we have to play around a bit more. */
- int len = TYPE_LENGTH (type);
- int regno = ARM_A1_REGNUM;
- ULONGEST tmp;
- while (len > 0)
- {
- /* By using store_unsigned_integer we avoid having to do
- anything special for small big-endian values. */
- regcache_cooked_read_unsigned (regs, regno++, &tmp);
- store_unsigned_integer (valbuf,
- (len > INT_REGISTER_SIZE
- ? INT_REGISTER_SIZE : len),
- tmp);
- len -= INT_REGISTER_SIZE;
- valbuf += INT_REGISTER_SIZE;
+ case CANNOT_WRITE_PC:
+ warning (_("Instruction wrote to PC in an unexpected way when "
+ "single-stepping"));
+ break;
+
+ default:
+ abort ();
}
+
+ dsc->wrote_to_pc = 1;
}
else
{
- /* For a structure or union the behaviour is as if the value had
- been stored to word-aligned memory and then loaded into
- registers with 32-bit load instruction(s). */
- int len = TYPE_LENGTH (type);
- int regno = ARM_A1_REGNUM;
- bfd_byte tmpbuf[INT_REGISTER_SIZE];
-
- while (len > 0)
- {
- regcache_cooked_read (regs, regno++, tmpbuf);
- memcpy (valbuf, tmpbuf,
- len > INT_REGISTER_SIZE ? INT_REGISTER_SIZE : len);
- len -= INT_REGISTER_SIZE;
- valbuf += INT_REGISTER_SIZE;
- }
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing r%d value %.8lx\n",
+ regno, (unsigned long) val);
+ regcache_cooked_write_unsigned (regs, regno, val);
}
}
-
-/* Will a function return an aggregate type in memory or in a
- register? Return 0 if an aggregate type can be returned in a
- register, 1 if it must be returned in memory. */
+/* This function is used to concisely determine if an instruction INSN
+ references PC. Register fields of interest in INSN should have the
+ corresponding fields of BITMASK set to 0b1111. The function returns 1
+ if any of these fields in INSN reference the PC (encoded as 0b1111, r15),
+ and 0 otherwise. */
static int
-arm_return_in_memory (struct gdbarch *gdbarch, struct type *type)
+insn_references_pc (uint32_t insn, uint32_t bitmask)
{
- int nRc;
- enum type_code code;
+ uint32_t lowbit = 1;
- CHECK_TYPEDEF (type);
+ while (bitmask != 0)
+ {
+ uint32_t mask;
- /* In the ARM ABI, "integer" like aggregate types are returned in
- registers. For an aggregate type to be integer like, its size
- must be less than or equal to INT_REGISTER_SIZE and the
- offset of each addressable subfield must be zero. Note that bit
- fields are not addressable, and all addressable subfields of
- unions always start at offset zero.
+ for (; lowbit && (bitmask & lowbit) == 0; lowbit <<= 1)
+ ;
- This function is based on the behaviour of GCC 2.95.1.
- See: gcc/arm.c: arm_return_in_memory() for details.
+ if (!lowbit)
+ break;
- Note: All versions of GCC before GCC 2.95.2 do not set up the
- parameters correctly for a function returning the following
- structure: struct { float f;}; This should be returned in memory,
- not a register. Richard Earnshaw sent me a patch, but I do not
- know of any way to detect if a function like the above has been
- compiled with the correct calling convention. */
+ mask = lowbit * 0xf;
- /* All aggregate types that won't fit in a register must be returned
- in memory. */
- if (TYPE_LENGTH (type) > INT_REGISTER_SIZE)
- {
- return 1;
+ if ((insn & mask) == mask)
+ return 1;
+
+ bitmask &= ~mask;
}
- /* The AAPCS says all aggregates not larger than a word are returned
- in a register. */
- if (gdbarch_tdep (gdbarch)->arm_abi != ARM_ABI_APCS)
- return 0;
+ return 0;
+}
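To make the bitmask convention concrete, here is a self-contained copy of the routine above together with a couple of worked encodings (for illustration only; the worked instruction words are assumptions chosen for the example): each 4-bit register field selected by a 0b1111 nibble in BITMASK is compared against 0b1111, i.e. r15.

```c
#include <stdint.h>

/* Return 1 if any register field of INSN selected by BITMASK (each
   field marked with 0b1111) holds r15 (the PC), else 0.  */
static int
insn_references_pc (uint32_t insn, uint32_t bitmask)
{
  uint32_t lowbit = 1;

  while (bitmask != 0)
    {
      uint32_t mask;

      /* Find the lowest set bit of the remaining bitmask.  */
      for (; lowbit && (bitmask & lowbit) == 0; lowbit <<= 1)
        ;

      if (!lowbit)
        break;

      mask = lowbit * 0xf;      /* Expand to a 4-bit field mask.  */

      if ((insn & mask) == mask)
        return 1;               /* Field is 0b1111: references PC.  */

      bitmask &= ~mask;
    }

  return 0;
}
```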
- /* The only aggregate types that can be returned in a register are
- structs and unions. Arrays must be returned in memory. */
- code = TYPE_CODE (type);
- if ((TYPE_CODE_STRUCT != code) && (TYPE_CODE_UNION != code))
- {
- return 1;
- }
+/* The simplest copy function. Many instructions have the same effect no
+ matter what address they are executed at: in those cases, use this. */
- /* Assume all other aggregate types can be returned in a register.
- Run a check for structures, unions and arrays. */
- nRc = 0;
+static int
+copy_unmodified (uint32_t insn, const char *iname,
+ struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.8lx, "
+ "opcode/class '%s' unmodified\n", (unsigned long) insn,
+ iname);
- if ((TYPE_CODE_STRUCT == code) || (TYPE_CODE_UNION == code))
- {
- int i;
- /* Need to check if this struct/union is "integer" like. For
- this to be true, its size must be less than or equal to
- INT_REGISTER_SIZE and the offset of each addressable
- subfield must be zero. Note that bit fields are not
- addressable, and unions always start at offset zero. If any
- of the subfields is a floating point type, the struct/union
- cannot be an integer type. */
+ dsc->modinsn[0] = insn;
- /* For each field in the object, check:
- 1) Is it FP? --> yes, nRc = 1;
- 2) Is it addressable (bitpos != 0) and
- not packed (bitsize == 0)?
- --> yes, nRc = 1
- */
+ return 0;
+}
- for (i = 0; i < TYPE_NFIELDS (type); i++)
- {
- enum type_code field_type_code;
- field_type_code = TYPE_CODE (check_typedef (TYPE_FIELD_TYPE (type, i)));
+/* Preload instructions with immediate offset. */
- /* Is it a floating point type field? */
+static void
+cleanup_preload (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (!dsc->u.preload.immed)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+}
+
+static int
+copy_preload (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (insn, "preload", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preload instructions:
+
+ {pli/pld} [rn, #+/-imm]
+ ->
+ {pli/pld} [r0, #+/-imm]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 1;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Preload instructions with register offset. */
+
+static int
+copy_preload_reg (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ ULONGEST rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f000ful))
+ return copy_unmodified (insn, "preload reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preload register-offset instructions:
+
+ {pli/pld} [rn, rm {, shift}]
+ ->
+ {pli/pld} [r0, r1 {, shift}]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rm_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 0;
+
+ dsc->modinsn[0] = (insn & 0xfff0fff0) | 0x1;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Copy/cleanup coprocessor load and store instructions. */
+
+static void
+cleanup_copro_load_store (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rn_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, LOAD_WRITE_PC);
+}
+
+static int
+copy_copro_load_store (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (insn, "copro load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
+ "load/store insn %.8lx\n", (unsigned long) insn);
+
+ /* Coprocessor load/store instructions:
+
+ {stc/stc2} [<Rn>, #+/-imm] (and other immediate addressing modes)
+ ->
+ {stc/stc2} [r0, #+/-imm].
+
+ ldc/ldc2 are handled identically. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.ldst.writeback = bit (insn, 25);
+ dsc->u.ldst.rn = rn;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_copro_load_store;
+
+ return 0;
+}
+
+/* Clean up branch instructions (actually perform the branch, by setting
+ PC). */
+
+static void
+cleanup_branch (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int branch_taken = condition_true (dsc->u.branch.cond, status);
+ enum pc_write_style write_pc = dsc->u.branch.exchange
+ ? BX_WRITE_PC : BRANCH_WRITE_PC;
+
+ if (!branch_taken)
+ return;
+
+ if (dsc->u.branch.link)
+ {
+ ULONGEST pc = displaced_read_reg (regs, from, 15);
+ displaced_write_reg (regs, dsc, 14, pc - 4, CANNOT_WRITE_PC);
+ }
+
+ displaced_write_reg (regs, dsc, 15, dsc->u.branch.dest, write_pc);
+}
+
+/* Copy B/BL/BLX instructions with immediate destinations. */
+
+static int
+copy_b_bl_blx (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ int exchange = (cond == 0xf);
+ int link = exchange || bit (insn, 24);
+ CORE_ADDR from = dsc->insn_addr;
+ long offset;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn "
+ "%.8lx\n", (exchange) ? "blx" : (link) ? "bl" : "b",
+ (unsigned long) insn);
+
+ /* Implement "BL<cond> <label>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- label }.
+
+ B<cond> similar, but don't set r14 in cleanup. */
+
+ if (exchange)
+ /* For BLX, set bit 0 of the destination. The cleanup_branch function will
+ then arrange the switch into Thumb mode. */
+ offset = (bits (insn, 0, 23) << 2) | (bit (insn, 24) << 1) | 1;
+ else
+ offset = bits (insn, 0, 23) << 2;
+
+ if (bit (offset, 25))
+ offset = offset | ~0x3ffffff;
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = exchange;
+ dsc->u.branch.dest = from + 8 + offset;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
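The branch-destination arithmetic above can be checked on its own. In this sketch (the helper name is invented), a B/BL immediate is decoded the same way: the 24-bit field is shifted left two bits, sign-extended from bit 25, and added to the instruction address plus the 8-byte ARM-state pipeline offset.

```c
#include <stdint.h>

/* Hypothetical helper: destination of an ARM-state B/BL located at
   FROM with raw encoding INSN.  */
static uint32_t
arm_branch_dest (uint32_t from, uint32_t insn)
{
  uint32_t offset = (insn & 0x00ffffffu) << 2;

  if (offset & 0x02000000u)
    offset |= 0xfc000000u;      /* Sign-extend the 26-bit offset.  */

  return from + 8 + offset;     /* +8: ARM-state pipeline offset.  */
}
```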
+
+/* Copy BX/BLX with register-specified destinations. */
+
+static int
+copy_bx_blx_reg (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ /* BX: x12xxx1x
+ BLX: x12xxx3x. */
+ int link = bit (insn, 5);
+ unsigned int rm = bits (insn, 0, 3);
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s register insn "
+ "%.8lx\n", (link) ? "blx" : "bx", (unsigned long) insn);
+
+ /* Implement "{BX,BLX}<cond> <reg>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- dest; }.
+
+ Don't set r14 in cleanup for BX. */
+
+ dsc->u.branch.dest = displaced_read_reg (regs, from, rm);
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = 1;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+/* Copy/cleanup arithmetic/logic instruction with immediate RHS. */
+
+static void
+cleanup_alu_imm (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_imm (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff000ul))
+ return copy_unmodified (insn, "ALU immediate", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying immediate %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU",
+ (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] #imm
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2 <- r0, r1;
+ r0, r1 <- rd, rn
+ Insn: <op><cond> r0, r1, #imm
+ Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rd_val = displaced_read_reg (regs, from, rd);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = insn & 0xfff00fff;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x10000;
+
+ dsc->cleanup = &cleanup_alu_imm;
+
+ return 0;
+}
+
+/* Copy/cleanup arithmetic/logic insns with register RHS. */
+
+static void
+cleanup_alu_reg (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val;
+ int i;
+
+ rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ for (i = 0; i < 3; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_reg (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "ALU reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.8lx\n",
+ is_mov ? "move" : "ALU", (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm [, <shift>]
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3 <- r0, r1, r2;
+ r0, r1, r2 <- rd, rn, rm
+ Insn: <op><cond> r0, r1, r2 [, <shift>]
+ Cleanup: rd <- r0; r0, r1, r2 <- tmp1, tmp2, tmp3
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x2;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x10002;
+
+ dsc->cleanup = &cleanup_alu_reg;
+
+ return 0;
+}
+
+/* Cleanup/copy arithmetic/logic insns with shifted register RHS. */
+
+static void
+cleanup_alu_shifted_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ int i;
+
+ for (i = 0; i < 4; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_shifted_reg (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int rs = bits (insn, 8, 11);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd), i;
+ ULONGEST rd_val, rn_val, rm_val, rs_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000fff0ful))
+ return copy_unmodified (insn, "ALU shifted reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying shifted reg %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU",
+ (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm, <shift> rs
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3, tmp4 <- r0, r1, r2, r3
+ r0, r1, r2, r3 <- rd, rn, rm, rs
+ Insn: <op><cond> r0, r1, r2, <shift> r3
+ Cleanup: tmp5 <- r0
+ r0, r1, r2, r3 <- tmp1, tmp2, tmp3, tmp4
+ rd <- tmp5
+ */
+
+ for (i = 0; i < 4; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ rs_val = displaced_read_reg (regs, from, rs);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 3, rs_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x302;
+ else
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x10302;
+
+ dsc->cleanup = &cleanup_alu_shifted_reg;
+
+ return 0;
+}
+
+/* Clean up load instructions. */
+
+static void
+cleanup_load (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rt_val, rt_val2 = 0, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ rt_val = displaced_read_reg (regs, from, 0);
+ if (dsc->u.ldst.xfersize == 8)
+ rt_val2 = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+
+ /* Handle register writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+ /* Put result in right place. */
+ displaced_write_reg (regs, dsc, dsc->rd, rt_val, LOAD_WRITE_PC);
+ if (dsc->u.ldst.xfersize == 8)
+ displaced_write_reg (regs, dsc, dsc->rd + 1, rt_val2, LOAD_WRITE_PC);
+}
+
+/* Clean up store instructions. */
+
+static void
+cleanup_store (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.restore_r4)
+ displaced_write_reg (regs, dsc, 4, dsc->tmp[4], CANNOT_WRITE_PC);
+
+ /* Writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+}
+
+/* Copy "extra" load/store instructions. These are halfword/doubleword
+ transfers, which have a different encoding to byte/word transfers. */
+
+static int
+copy_extra_ld_st (uint32_t insn, int unprivileged, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 24);
+ unsigned int op2 = bits (insn, 5, 6);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ char load[12] = {0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1};
+ char bytesize[12] = {2, 2, 2, 2, 8, 1, 8, 1, 8, 2, 8, 2};
+ int immed = (op1 & 0x4) != 0;
+ int opcode;
+ ULONGEST rt_val, rt_val2 = 0, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "extra load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %sextra load/store "
+ "insn %.8lx\n", unprivileged ? "unprivileged " : "",
+ (unsigned long) insn);
+
+ opcode = ((op2 << 2) | (op1 & 0x1) | ((op1 & 0x4) >> 1)) - 4;
+
+ if (opcode < 0)
+ internal_error (__FILE__, __LINE__,
+ _("copy_extra_ld_st: instruction decode error"));
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ if (bytesize[opcode] == 8)
+ rt_val2 = displaced_read_reg (regs, from, rt + 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ if (bytesize[opcode] == 8)
+ displaced_write_reg (regs, dsc, 1, rt_val2, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = bytesize[opcode];
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, #imm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, +/-rm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, +/-r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->cleanup = load[opcode] ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+/* Copy byte/word loads and stores. */
+
+static int
+copy_ldr_str_ldrb_strb (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc, int load, int byte,
+ int usermode)
+{
+ int immed = !bit (insn, 25);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3); /* Only valid if !immed. */
+ ULONGEST rt_val, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (insn, "load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s%s insn %.8lx\n",
+ load ? (byte ? "ldrb" : "ldr")
+ : (byte ? "strb" : "str"), usermode ? "t" : "",
+ (unsigned long) insn);
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+ if (!load)
+ dsc->tmp[4] = displaced_read_reg (regs, from, 4);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = byte ? 1 : 4;
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+
+ /* To write PC we can do:
+
+ scratch+0: str pc, temp (*temp = scratch + 8 + offset)
+ scratch+4: ldr r4, temp
+ scratch+8: sub r4, r4, pc (r4 = scratch + 8 + offset - scratch - 8 - 8)
+ scratch+12: add r4, r4, #8 (r4 = offset)
+ scratch+16: add r0, r0, r4
+ scratch+20: str r0, [r2, #imm] (or str r0, [r2, r3])
+ scratch+24: <breakpoint>
+ scratch+28: <temp>
+
+ Otherwise we don't know what value to write for PC, since the offset is
+ architecture-dependent (sometimes PC+8, sometimes PC+12). */
+
+ if (load || rt != 15)
+ {
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}[b]<cond> rt, [rn, #imm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}[b]<cond> rt, [rn, rm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+ }
+ else
+ {
+ /* We need to use r4 as scratch. Make sure it's restored afterwards. */
+ dsc->u.ldst.restore_r4 = 1;
+
+ dsc->modinsn[0] = 0xe58ff014; /* str pc, [pc, #20]. */
+ dsc->modinsn[1] = 0xe59f4010; /* ldr r4, [pc, #16]. */
+ dsc->modinsn[2] = 0xe044400f; /* sub r4, r4, pc. */
+ dsc->modinsn[3] = 0xe2844008; /* add r4, r4, #8. */
+ dsc->modinsn[4] = 0xe0800004; /* add r0, r0, r4. */
+
+ /* As above. */
+ if (immed)
+ dsc->modinsn[5] = (insn & 0xfff00fff) | 0x20000;
+ else
+ dsc->modinsn[5] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->modinsn[6] = 0x0; /* breakpoint location. */
+ dsc->modinsn[7] = 0x0; /* scratch space. */
+
+ dsc->numinsns = 6;
+ }
+
+ dsc->cleanup = load ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+/* Cleanup LDM instructions with fully-populated register list. This is an
+ unfortunate corner case: it's impossible to implement correctly by modifying
+ the instruction. The issue is as follows: we have an instruction,
+
+ ldm rN, {r0-r15}
+
+ which we must rewrite to avoid loading PC. A possible solution would be to
+ do the load in two halves, something like (with suitable cleanup
+ afterwards):
+
+ mov r8, rN
+ ldm[id][ab] r8!, {r0-r7}
+ str r7, <temp>
+ ldm[id][ab] r8, {r7-r14}
+ <bkpt>
+
+ but at present there's no suitable place for <temp>, since the scratch space
+ is overwritten before the cleanup routine is called. For now, we simply
+ emulate the instruction. */
+
+static void
+cleanup_block_load_all (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ int inc = dsc->u.block.increment;
+ int bump_before = dsc->u.block.before ? (inc ? 4 : -4) : 0;
+ int bump_after = dsc->u.block.before ? 0 : (inc ? 4 : -4);
+ uint32_t regmask = dsc->u.block.regmask;
+ int regno = inc ? 0 : 15;
+ CORE_ADDR xfer_addr = dsc->u.block.xfer_addr;
+ int exception_return = dsc->u.block.load && dsc->u.block.user
+ && (regmask & 0x8000) != 0;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int do_transfer = condition_true (dsc->u.block.cond, status);
+
+ if (!do_transfer)
+ return;
+
+ /* If the instruction is ldm rN, {...pc}^, I don't think there's anything
+ sensible we can do here. Complain loudly. */
+ if (exception_return)
+ error (_("Cannot single-step exception return"));
+
+ /* We don't handle any stores here for now. */
+ gdb_assert (dsc->u.block.load != 0);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: emulating block transfer: "
+ "%s %s %s\n", dsc->u.block.load ? "ldm" : "stm",
+ dsc->u.block.increment ? "inc" : "dec",
+ dsc->u.block.before ? "before" : "after");
+
+ while (regmask)
+ {
+ uint32_t memword;
+
+ if (inc)
+ while (regno <= 15 && (regmask & (1 << regno)) == 0)
+ regno++;
+ else
+ while (regno >= 0 && (regmask & (1 << regno)) == 0)
+ regno--;
+
+ xfer_addr += bump_before;
+
+ memword = read_memory_unsigned_integer (xfer_addr, 4);
+ displaced_write_reg (regs, dsc, regno, memword, LOAD_WRITE_PC);
+
+ xfer_addr += bump_after;
+
+ regmask &= ~(1 << regno);
+ }
+
+ if (dsc->u.block.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, xfer_addr,
+ CANNOT_WRITE_PC);
+}
+
+/* Clean up an STM which included the PC in the register list. */
+
+static void
+cleanup_block_store_pc (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int store_executed = condition_true (dsc->u.block.cond, status);
+ CORE_ADDR pc_stored_at, stm_insn_addr;
+ int transferred_regs = bitcount (dsc->u.block.regmask);
+ uint32_t pc_val;
+ long offset;
+
+ /* If condition code fails, there's nothing else to do. */
+ if (!store_executed)
+ return;
+
+ if (dsc->u.block.increment)
+ {
+ pc_stored_at = dsc->u.block.xfer_addr + 4 * transferred_regs;
+
+ if (dsc->u.block.before)
+ pc_stored_at += 4;
+ }
+ else
+ {
+ pc_stored_at = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.before)
+ pc_stored_at -= 4;
+ }
+
+ pc_val = read_memory_unsigned_integer (pc_stored_at, 4);
+ stm_insn_addr = dsc->scratch_base;
+ offset = pc_val - stm_insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected PC offset %.8lx for "
+ "STM instruction\n", offset);
+
+ /* Rewrite the stored PC to the proper value for the non-displaced original
+ instruction. */
+ write_memory_unsigned_integer (pc_stored_at, 4, dsc->insn_addr + offset);
+}
+
+/* Clean up an LDM which includes the PC in the register list. We clumped all
+ the registers in the transferred list into a contiguous range r0...rX (to
+ avoid loading PC directly and losing control of the debugged program), so we
+ must undo that here. */
+
+static void
+cleanup_block_load_pc (struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int load_executed = condition_true (dsc->u.block.cond, status), i;
+ unsigned int mask = dsc->u.block.regmask, write_reg = 15;
+ unsigned int regs_loaded = bitcount (mask);
+ unsigned int num_to_shuffle = regs_loaded, clobbered;
+
+ /* The method employed here will fail if the register list is fully populated
+ (we need to avoid loading PC directly). */
+ gdb_assert (num_to_shuffle < 16);
+
+ if (!load_executed)
+ return;
+
+ clobbered = (1 << num_to_shuffle) - 1;
+
+ while (num_to_shuffle > 0)
+ {
+ if ((mask & (1 << write_reg)) != 0)
+ {
+ unsigned int read_reg = num_to_shuffle - 1;
+
+ if (read_reg != write_reg)
+ {
+ ULONGEST rval = displaced_read_reg (regs, from, read_reg);
+ displaced_write_reg (regs, dsc, write_reg, rval, LOAD_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: move "
+ "loaded register r%d to r%d\n"), read_reg,
+ write_reg);
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: register "
+ "r%d already in the right place\n"),
+ write_reg);
+
+ clobbered &= ~(1 << write_reg);
+
+ num_to_shuffle--;
+ }
+
+ write_reg--;
+ }
+
+ /* Restore any registers we scribbled over. */
+ for (write_reg = 0; clobbered != 0; write_reg++)
+ {
+ if ((clobbered & (1 << write_reg)) != 0)
+ {
+ displaced_write_reg (regs, dsc, write_reg, dsc->tmp[write_reg],
+ CANNOT_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: restored "
+ "clobbered register r%d\n"), write_reg);
+ clobbered &= ~(1 << write_reg);
+ }
+ }
+
+ /* Perform register writeback manually. */
+ if (dsc->u.block.writeback)
+ {
+ ULONGEST new_rn_val = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.increment)
+ new_rn_val += regs_loaded * 4;
+ else
+ new_rn_val -= regs_loaded * 4;
+
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, new_rn_val,
+ CANNOT_WRITE_PC);
+ }
+}
+
+/* Handle ldm/stm, apart from some tricky cases which are unlikely to occur
+ in user-level code (in particular exception return, ldm rn, {...pc}^). */
+
+static int
+copy_block_xfer (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int load = bit (insn, 20);
+ int user = bit (insn, 22);
+ int increment = bit (insn, 23);
+ int before = bit (insn, 24);
+ int writeback = bit (insn, 21);
+ int rn = bits (insn, 16, 19);
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Block transfers which don't mention PC can be run directly out-of-line. */
+ if (rn != 15 && (insn & 0x8000) == 0)
+ return copy_unmodified (insn, "ldm/stm", dsc);
+
+ if (rn == 15)
+ {
+ warning (_("displaced: Unpredictable LDM or STM with base register r15"));
+ return copy_unmodified (insn, "unpredictable ldm/stm", dsc);
+ }
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
+ "%.8lx\n", (unsigned long) insn);
+
+ dsc->u.block.xfer_addr = displaced_read_reg (regs, from, rn);
+ dsc->u.block.rn = rn;
+
+ dsc->u.block.load = load;
+ dsc->u.block.user = user;
+ dsc->u.block.increment = increment;
+ dsc->u.block.before = before;
+ dsc->u.block.writeback = writeback;
+ dsc->u.block.cond = bits (insn, 28, 31);
+
+ dsc->u.block.regmask = insn & 0xffff;
+
+ if (load)
+ {
+ if ((insn & 0xffff) == 0xffff)
+ {
+ /* LDM with a fully-populated register list. This case is
+ particularly tricky. Implement for now by fully emulating the
+ instruction (which might not behave perfectly in all cases, but
+ these instructions should be rare enough for that not to matter
+ too much). */
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_block_load_all;
+ }
+ else
+ {
+ /* LDM of a list of registers which includes PC. Implement by
+ rewriting the list of registers to be transferred into a
+ contiguous chunk r0...rX before doing the transfer, then shuffling
+ registers into the correct places in the cleanup routine. */
+ unsigned int regmask = insn & 0xffff;
+ unsigned int num_in_list = bitcount (regmask), new_regmask, i;
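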
+
+ for (i = 0; i < num_in_list; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ /* Writeback makes things complicated. We need to avoid clobbering
+ the base register with one of the registers in our modified
+ register list, but just using a different register can't work in
+ all cases, e.g.:
+
+ ldm r14!, {r0-r13,pc}
+
+ which would need to be rewritten as:
+
+ ldm rN!, {r0-r14}
+
+ but that can't work, because there's no free register for N.
+
+ Solve this by turning off the writeback bit, and emulating
+ writeback manually in the cleanup routine. */
+
+ if (writeback)
+ insn &= ~(1 << 21);
+
+ new_regmask = (1 << num_in_list) - 1;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, "
+ "{..., pc}: original reg list %.4x, modified "
+ "list %.4x\n"), rn, writeback ? "!" : "",
+ (int) insn & 0xffff, new_regmask);
+
+ dsc->modinsn[0] = (insn & ~0xffff) | (new_regmask & 0xffff);
+
+ dsc->cleanup = &cleanup_block_load_pc;
+ }
+ }
+ else
+ {
+ /* STM of a list of registers which includes PC. Run the instruction
+ as-is, but out of line: this will store the wrong value for the PC,
+ so we must manually fix up the memory in the cleanup routine.
+ Doing things this way has the advantage that we can auto-detect
+ the offset of the PC write (which is architecture-dependent) in
+ the cleanup routine. */
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_block_store_pc;
+ }
+
+ return 0;
+}
+
+/* Cleanup/copy SVC (SWI) instructions. These two functions are overridden
+ for Linux, where some SVC instructions must be treated specially. */
+
+static void
+cleanup_svc (struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ CORE_ADDR resume_addr = from + 4;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: cleanup for svc, resume at "
+ "%.8lx\n", (unsigned long) resume_addr);
+
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, resume_addr, BRANCH_WRITE_PC);
+}
+
+static int
+copy_svc (uint32_t insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Allow OS-specific code to override SVC handling. */
+ if (dsc->u.svc.copy_svc_os)
+ return dsc->u.svc.copy_svc_os (insn, to, regs, dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preparation: none.
+ Insn: unmodified svc.
+ Cleanup: pc <- insn_addr + 4. */
+
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+/* Copy undefined instructions. */
+
+static int
+copy_undef (uint32_t insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn %.8lx\n",
+ (unsigned long) insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* Copy unpredictable instructions. */
+
+static int
+copy_unpred (uint32_t insn, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredictable insn "
+ "%.8lx\n", (unsigned long) insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* The decode_* functions are instruction decoding helpers. They mostly follow
+ the presentation in the ARM ARM. */
+
+static int
+decode_misc_memhint_neon (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 26), op2 = bits (insn, 4, 7);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if (op1 == 0x10 && (op2 & 0x2) == 0x0 && (rn & 0xe) == 0x0)
+ return copy_unmodified (insn, "cps", dsc);
+ else if (op1 == 0x10 && op2 == 0x0 && (rn & 0xe) == 0x1)
+ return copy_unmodified (insn, "setend", dsc);
+ else if ((op1 & 0x60) == 0x20)
+ return copy_unmodified (insn, "neon dataproc", dsc);
+ else if ((op1 & 0x71) == 0x40)
+ return copy_unmodified (insn, "neon elt/struct load/store", dsc);
+ else if ((op1 & 0x77) == 0x41)
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ else if ((op1 & 0x77) == 0x45)
+ return copy_preload (insn, regs, dsc); /* pli. */
+ else if ((op1 & 0x77) == 0x51)
+ {
+ if (rn != 0xf)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else
+ return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x77) == 0x55)
+ return copy_preload (insn, regs, dsc); /* pld/pldw. */
+ else if (op1 == 0x57)
+ switch (op2)
+ {
+ case 0x1: return copy_unmodified (insn, "clrex", dsc);
+ case 0x4: return copy_unmodified (insn, "dsb", dsc);
+ case 0x5: return copy_unmodified (insn, "dmb", dsc);
+ case 0x6: return copy_unmodified (insn, "isb", dsc);
+ default: return copy_unpred (insn, dsc);
+ }
+ else if ((op1 & 0x63) == 0x43)
+ return copy_unpred (insn, dsc);
+ else if ((op2 & 0x1) == 0x0)
+ switch (op1 & ~0x80)
+ {
+ case 0x61:
+ return copy_unmodified (insn, "unallocated mem hint", dsc);
+ case 0x65:
+ return copy_preload_reg (insn, regs, dsc); /* pli reg. */
+ case 0x71: case 0x75:
+ return copy_preload_reg (insn, regs, dsc); /* pld/pldw reg. */
+ case 0x63: case 0x67: case 0x73: case 0x77:
+ return copy_unpred (insn, dsc);
+ default:
+ return copy_undef (insn, dsc);
+ }
+ else
+ return copy_undef (insn, dsc); /* Probably unreachable. */
+}
+
+static int
+decode_unconditional (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 27) == 0)
+ return decode_misc_memhint_neon (insn, regs, dsc);
+ /* Switch on bits: 0bxxxxx321xxx0xxxxxxxxxxxxxxxxxxxx. */
+ else switch (((insn & 0x7000000) >> 23) | ((insn & 0x100000) >> 20))
+ {
+ case 0x0: case 0x2:
+ return copy_unmodified (insn, "srs", dsc);
+
+ case 0x1: case 0x3:
+ return copy_unmodified (insn, "rfe", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_b_bl_blx (insn, regs, dsc);
+
+ case 0x8:
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3: case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+
+ case 0x2:
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+
+ case 0x9:
+ {
+ int rn_f = (bits (insn, 16, 19) == 0xf);
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3:
+ /* ldc/ldc2 imm (undefined for rn == pc). */
+ return rn_f ? copy_undef (insn, dsc)
+ : copy_copro_load_store (insn, regs, dsc);
+
+ case 0x2:
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ /* ldc/ldc2 lit (undefined for rn != pc). */
+ return rn_f ? copy_copro_load_store (insn, regs, dsc)
+ : copy_undef (insn, dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+ }
+
+ case 0xa:
+ return copy_unmodified (insn, "stc/stc2", dsc);
+
+ case 0xb:
+ if (bits (insn, 16, 19) == 0xf)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 lit. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0xc:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ case 0xd:
+ if (bit (insn, 4))
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+/* Decode miscellaneous instructions in dp/misc encoding space. */
+
+static int
+decode_miscellaneous (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op2 = bits (insn, 4, 6);
+ unsigned int op = bits (insn, 21, 22);
+ unsigned int op1 = bits (insn, 16, 19);
+
+ switch (op2)
+ {
+ case 0x0:
+ return copy_unmodified (insn, "mrs/msr", dsc);
+
+ case 0x1:
+ if (op == 0x1) /* bx. */
+ return copy_bx_blx_reg (insn, regs, dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "clz", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x2:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bxj", dsc); /* Not really supported. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x3:
+ if (op == 0x1)
+ return copy_bx_blx_reg (insn, regs, dsc); /* blx register. */
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x5:
+ return copy_unmodified (insn, "saturating add/sub", dsc);
+
+ case 0x7:
+ if (op == 0x1)
+ return copy_unmodified (insn, "bkpt", dsc);
+ else if (op == 0x3)
+ return copy_unmodified (insn, "smc", dsc); /* Not really supported. */
+
+ default:
+ return copy_undef (insn, dsc);
+ }
+}
+
+static int
+decode_dp_misc (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ switch (bits (insn, 20, 24))
+ {
+ case 0x10:
+ return copy_unmodified (insn, "movw", dsc);
+
+ case 0x14:
+ return copy_unmodified (insn, "movt", dsc);
+
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "msr imm", dsc);
+
+ default:
+ return copy_alu_imm (insn, regs, dsc);
+ }
+ else
+ {
+ uint32_t op1 = bits (insn, 20, 24), op2 = bits (insn, 4, 7);
+
+ if ((op1 & 0x19) != 0x10 && (op2 & 0x1) == 0x0)
+ return copy_alu_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) != 0x10 && (op2 & 0x9) == 0x1)
+ return copy_alu_shifted_reg (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x8) == 0x0)
+ return decode_miscellaneous (insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x9) == 0x8)
+ return copy_unmodified (insn, "halfword mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x00 && op2 == 0x9)
+ return copy_unmodified (insn, "mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x10 && op2 == 0x9)
+ return copy_unmodified (insn, "synch", dsc);
+ else if (op2 == 0xb || (op2 & 0xd) == 0xd)
+ /* 2nd arg means "unprivileged". */
+ return copy_extra_ld_st (insn, (op1 & 0x12) == 0x02, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_ld_st_word_ubyte (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int a = bit (insn, 25), b = bit (insn, 4);
+ uint32_t op1 = bits (insn, 20, 24);
+ int rn_f = bits (insn, 16, 19) == 0xf;
+
+ if ((!a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02)
+ || (a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x02)
+ || (a && (op1 & 0x17) == 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03)
+ || (a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x03)
+ || (a && (op1 & 0x17) == 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06)
+ || (a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x06)
+ || (a && (op1 & 0x17) == 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 0, 1, 1);
+ else if ((!a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07)
+ || (a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x07)
+ || (a && (op1 & 0x17) == 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (insn, regs, dsc, 1, 1, 1);
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_media (uint32_t insn, struct displaced_step_closure *dsc)
+{
+ switch (bits (insn, 20, 24))
+ {
+ case 0x00: case 0x01: case 0x02: case 0x03:
+ return copy_unmodified (insn, "parallel add/sub signed", dsc);
+
+ case 0x04: case 0x05: case 0x06: case 0x07:
+ return copy_unmodified (insn, "parallel add/sub unsigned", dsc);
+
+ case 0x08: case 0x09: case 0x0a: case 0x0b:
+ case 0x0c: case 0x0d: case 0x0e: case 0x0f:
+ return copy_unmodified (insn, "decode/pack/unpack/saturate/reverse", dsc);
+
+ case 0x18:
+ if (bits (insn, 5, 7) == 0) /* op2. */
+ {
+ if (bits (insn, 12, 15) == 0xf)
+ return copy_unmodified (insn, "usad8", dsc);
+ else
+ return copy_unmodified (insn, "usada8", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1a: case 0x1b:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "sbfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1c: case 0x1d:
+ if (bits (insn, 5, 6) == 0x0) /* op2[1:0]. */
+ {
+ if (bits (insn, 0, 3) == 0xf)
+ return copy_unmodified (insn, "bfc", dsc);
+ else
+ return copy_unmodified (insn, "bfi", dsc);
+ }
+ else
+ return copy_undef (insn, dsc);
+
+ case 0x1e: case 0x1f:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (insn, "ubfx", dsc);
+ else
+ return copy_undef (insn, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_b_bl_ldmstm (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ return copy_b_bl_blx (insn, regs, dsc);
+ else
+ return copy_block_xfer (insn, regs, dsc);
+}
+
+static int
+decode_ext_reg_ld_st (uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int opcode = bits (insn, 20, 24);
+
+ switch (opcode)
+ {
+ case 0x04: case 0x05: /* VFP/Neon mrrc/mcrr. */
+ return copy_unmodified (insn, "vfp/neon mrrc/mcrr", dsc);
+
+ case 0x08: case 0x0a: case 0x0c: case 0x0e:
+ case 0x12: case 0x16:
+ return copy_unmodified (insn, "vfp/neon vstm/vpush", dsc);
+
+ case 0x09: case 0x0b: case 0x0d: case 0x0f:
+ case 0x13: case 0x17:
+ return copy_unmodified (insn, "vfp/neon vldm/vpop", dsc);
+
+ case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */
+ case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */
+ /* Note: no writeback for these instructions. Bit 25 will always be
+ zero though (via caller), so the following works OK. */
+ return copy_copro_load_store (insn, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_svc_copro (uint32_t insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 25);
+ int op = bit (insn, 4);
+ unsigned int coproc = bits (insn, 8, 11);
+
+ if ((op1 & 0x20) == 0x00 && (op1 & 0x3a) != 0x00 && (coproc & 0xe) == 0xa)
+ return decode_ext_reg_ld_st (insn, regs, dsc);
+ else if ((op1 & 0x21) == 0x00 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* stc/stc2. */
+ else if ((op1 & 0x21) == 0x01 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ return copy_copro_load_store (insn, regs, dsc); /* ldc/ldc2 imm/lit. */
+ else if ((op1 & 0x3e) == 0x00)
+ return copy_undef (insn, dsc);
+ else if ((op1 & 0x3e) == 0x04 && (coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "neon 64bit xfer", dsc);
+ else if (op1 == 0x04 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcrr/mcrr2", dsc);
+ else if (op1 == 0x05 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrrc/mrrc2", dsc);
+ else if ((op1 & 0x30) == 0x20 && !op)
+ {
+ if ((coproc & 0xe) == 0xa)
+ return copy_unmodified (insn, "vfp dataproc", dsc);
+ else
+ return copy_unmodified (insn, "cdp/cdp2", dsc);
+ }
+ else if ((op1 & 0x30) == 0x20 && op)
+ return copy_unmodified (insn, "neon 8/16/32 bit xfer", dsc);
+ else if ((op1 & 0x31) == 0x20 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mcr/mcr2", dsc);
+ else if ((op1 & 0x31) == 0x21 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (insn, "mrc/mrc2", dsc);
+ else if ((op1 & 0x30) == 0x30)
+ return copy_svc (insn, to, regs, dsc);
+ else
+ return copy_undef (insn, dsc); /* Possibly unreachable. */
+}
+
+void
+arm_process_displaced_insn (uint32_t insn, CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int err = 0;
+
+ if (!displaced_in_arm_mode (regs))
+ error (_("Displaced stepping is only supported in ARM mode"));
+
+ /* Most displaced instructions use a 1-instruction scratch space, so set this
+ here and override below if/when necessary. */
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->scratch_base = to;
+ dsc->cleanup = NULL;
+ dsc->wrote_to_pc = 0;
+
+ if ((insn & 0xf0000000) == 0xf0000000)
+ err = decode_unconditional (insn, regs, dsc);
+ else switch (((insn & 0x10) >> 4) | ((insn & 0xe000000) >> 24))
+ {
+ case 0x0: case 0x1: case 0x2: case 0x3:
+ err = decode_dp_misc (insn, regs, dsc);
+ break;
+
+ case 0x4: case 0x5: case 0x6:
+ err = decode_ld_st_word_ubyte (insn, regs, dsc);
+ break;
+
+ case 0x7:
+ err = decode_media (insn, dsc);
+ break;
+
+ case 0x8: case 0x9: case 0xa: case 0xb:
+ err = decode_b_bl_ldmstm (insn, regs, dsc);
+ break;
+
+ case 0xc: case 0xd: case 0xe: case 0xf:
+ err = decode_svc_copro (insn, to, regs, dsc);
+ break;
+ }
+
+ if (err)
+ internal_error (__FILE__, __LINE__,
+ _("arm_process_displaced_insn: Instruction decode error"));
+}
+
+/* Actually set up the scratch space for a displaced instruction. */
+
+void
+arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc)
+{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+ unsigned int i;
+
+ /* Poke modified instruction(s). */
+ for (i = 0; i < dsc->numinsns; i++)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing insn %.8lx at "
+ "%.8lx\n", (unsigned long) dsc->modinsn[i],
+ (unsigned long) to + i * 4);
+ write_memory_unsigned_integer (to + i * 4, 4, dsc->modinsn[i]);
+ }
+
+ /* Put breakpoint afterwards. */
+ write_memory (to + dsc->numinsns * 4, tdep->arm_breakpoint,
+ tdep->arm_breakpoint_size);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copy 0x%s->0x%s: ",
+ paddr_nz (from), paddr_nz (to));
+}
+
+/* Entry point for copying an instruction into scratch space for displaced
+ stepping. */
+
+struct displaced_step_closure *
+arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+ uint32_t insn = read_memory_unsigned_integer (from, 4);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", (unsigned long) insn,
+ (unsigned long) from);
+
+ arm_process_displaced_insn (insn, from, to, regs, dsc);
+ arm_displaced_init_closure (gdbarch, from, to, dsc);
+
+ return dsc;
+}
+
+/* Entry point for cleaning things up after a displaced instruction has been
+ single-stepped. */
+
+void
+arm_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ if (dsc->cleanup)
+ dsc->cleanup (regs, dsc);
+
+ if (!dsc->wrote_to_pc)
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, dsc->insn_addr + 4);
+}
+
+
+#include "bfd-in2.h"
+#include "libcoff.h"
+
+static int
+gdb_print_insn_arm (bfd_vma memaddr, disassemble_info *info)
+{
+ if (arm_pc_is_thumb (memaddr))
+ {
+ static asymbol *asym;
+ static combined_entry_type ce;
+ static struct coff_symbol_struct csym;
+ static struct bfd fake_bfd;
+ static bfd_target fake_target;
+
+ if (csym.native == NULL)
+ {
+ /* Create a fake symbol vector containing a Thumb symbol.
+ This is solely so that the code in print_insn_little_arm()
+ and print_insn_big_arm() in opcodes/arm-dis.c will detect
+ the presence of a Thumb symbol and switch to decoding
+ Thumb instructions. */
+
+ fake_target.flavour = bfd_target_coff_flavour;
+ fake_bfd.xvec = &fake_target;
+ ce.u.syment.n_sclass = C_THUMBEXTFUNC;
+ csym.native = &ce;
+ csym.symbol.the_bfd = &fake_bfd;
+ csym.symbol.name = "fake";
+ asym = (asymbol *) & csym;
+ }
+
+ memaddr = UNMAKE_THUMB_ADDR (memaddr);
+ info->symbols = &asym;
+ }
+ else
+ info->symbols = NULL;
+
+ if (info->endian == BFD_ENDIAN_BIG)
+ return print_insn_big_arm (memaddr, info);
+ else
+ return print_insn_little_arm (memaddr, info);
+}
+
+/* The following define instruction sequences that will cause ARM
+ CPUs to take an undefined instruction trap. These are used to
+ signal a breakpoint to GDB.
+
+ The newer ARMv4T CPUs are capable of operating in ARM or Thumb
+ modes. A different instruction is required for each mode. The ARM
+ CPUs can also be big or little endian. Thus four different
+ instructions are needed to support all cases.
+
+ Note: ARMv4 defines several new instructions that will take the
+ undefined instruction trap. ARM7TDMI is nominally ARMv4T, but does
+ not in fact add the new instructions. The new undefined
+ instructions in ARMv4 are all instructions that had no defined
+ behaviour in earlier chips. There is no guarantee that they will
+ raise an exception; they may be treated as NOPs. In practice, it
+ may only be safe to rely on instructions matching:
+
+ 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1
+ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ C C C C 0 1 1 x x x x x x x x x x x x x x x x x x x x 1 x x x x
+
+ Even this may only be true if the condition predicate is true. The
+ following use a condition predicate of ALWAYS so it is always TRUE.
+
+ There are other ways of forcing a breakpoint. GNU/Linux, RISC iX,
+ and NetBSD all use a software interrupt rather than an undefined
+ instruction to force a trap. This can be handled by the
+ abi-specific code during establishment of the gdbarch vector. */
+
+#define ARM_LE_BREAKPOINT {0xFE,0xDE,0xFF,0xE7}
+#define ARM_BE_BREAKPOINT {0xE7,0xFF,0xDE,0xFE}
+#define THUMB_LE_BREAKPOINT {0xbe,0xbe}
+#define THUMB_BE_BREAKPOINT {0xbe,0xbe}
+
+static const char arm_default_arm_le_breakpoint[] = ARM_LE_BREAKPOINT;
+static const char arm_default_arm_be_breakpoint[] = ARM_BE_BREAKPOINT;
+static const char arm_default_thumb_le_breakpoint[] = THUMB_LE_BREAKPOINT;
+static const char arm_default_thumb_be_breakpoint[] = THUMB_BE_BREAKPOINT;
+
+/* Determine the type and size of breakpoint to insert at PCPTR. Uses
+ the program counter value to determine whether a 16-bit or 32-bit
+ breakpoint should be used. It returns a pointer to a string of
+ bytes that encode a breakpoint instruction, stores the length of
+ the string to *lenptr, and adjusts the program counter (if
+ necessary) to point to the actual memory location where the
+ breakpoint should be inserted. */
+
+static const unsigned char *
+arm_breakpoint_from_pc (struct gdbarch *gdbarch, CORE_ADDR *pcptr, int *lenptr)
+{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+
+ if (arm_pc_is_thumb (*pcptr))
+ {
+ *pcptr = UNMAKE_THUMB_ADDR (*pcptr);
+ *lenptr = tdep->thumb_breakpoint_size;
+ return tdep->thumb_breakpoint;
+ }
+ else
+ {
+ *lenptr = tdep->arm_breakpoint_size;
+ return tdep->arm_breakpoint;
+ }
+}
+
+/* Extract from an array REGBUF containing the (raw) register state a
+ function return value of type TYPE, and copy that, in virtual
+ format, into VALBUF. */
+
+static void
+arm_extract_return_value (struct type *type, struct regcache *regs,
+ gdb_byte *valbuf)
+{
+ struct gdbarch *gdbarch = get_regcache_arch (regs);
+
+ if (TYPE_CODE_FLT == TYPE_CODE (type))
+ {
+ switch (gdbarch_tdep (gdbarch)->fp_model)
+ {
+ case ARM_FLOAT_FPA:
+ {
+ /* The value is in register F0 in internal format. We need to
+ extract the raw value and then convert it to the desired
+ internal type. */
+ bfd_byte tmpbuf[FP_REGISTER_SIZE];
+
+ regcache_cooked_read (regs, ARM_F0_REGNUM, tmpbuf);
+ convert_from_extended (floatformat_from_type (type), tmpbuf,
+ valbuf, gdbarch_byte_order (gdbarch));
+ }
+ break;
+
+ case ARM_FLOAT_SOFT_FPA:
+ case ARM_FLOAT_SOFT_VFP:
+ regcache_cooked_read (regs, ARM_A1_REGNUM, valbuf);
+ if (TYPE_LENGTH (type) > 4)
+ regcache_cooked_read (regs, ARM_A1_REGNUM + 1,
+ valbuf + INT_REGISTER_SIZE);
+ break;
+
+ default:
+ internal_error
+ (__FILE__, __LINE__,
+ _("arm_extract_return_value: Floating point model not supported"));
+ break;
+ }
+ }
+ else if (TYPE_CODE (type) == TYPE_CODE_INT
+ || TYPE_CODE (type) == TYPE_CODE_CHAR
+ || TYPE_CODE (type) == TYPE_CODE_BOOL
+ || TYPE_CODE (type) == TYPE_CODE_PTR
+ || TYPE_CODE (type) == TYPE_CODE_REF
+ || TYPE_CODE (type) == TYPE_CODE_ENUM)
+ {
+ /* If the type is a plain integer, then the access is
+ straightforward. Otherwise we have to play around a bit more. */
+ int len = TYPE_LENGTH (type);
+ int regno = ARM_A1_REGNUM;
+ ULONGEST tmp;
+
+ while (len > 0)
+ {
+ /* By using store_unsigned_integer we avoid having to do
+ anything special for small big-endian values. */
+ regcache_cooked_read_unsigned (regs, regno++, &tmp);
+ store_unsigned_integer (valbuf,
+ (len > INT_REGISTER_SIZE
+ ? INT_REGISTER_SIZE : len),
+ tmp);
+ len -= INT_REGISTER_SIZE;
+ valbuf += INT_REGISTER_SIZE;
+ }
+ }
+ else
+ {
+ /* For a structure or union the behaviour is as if the value had
+ been stored to word-aligned memory and then loaded into
+ registers with 32-bit load instruction(s). */
+ int len = TYPE_LENGTH (type);
+ int regno = ARM_A1_REGNUM;
+ bfd_byte tmpbuf[INT_REGISTER_SIZE];
+
+ while (len > 0)
+ {
+ regcache_cooked_read (regs, regno++, tmpbuf);
+ memcpy (valbuf, tmpbuf,
+ len > INT_REGISTER_SIZE ? INT_REGISTER_SIZE : len);
+ len -= INT_REGISTER_SIZE;
+ valbuf += INT_REGISTER_SIZE;
+ }
+ }
+}
+
+
+/* Will a function return an aggregate type in memory or in a
+ register? Return 0 if an aggregate type can be returned in a
+ register, 1 if it must be returned in memory. */
+
+static int
+arm_return_in_memory (struct gdbarch *gdbarch, struct type *type)
+{
+ int nRc;
+ enum type_code code;
+
+ CHECK_TYPEDEF (type);
+
+ /* In the ARM ABI, "integer" like aggregate types are returned in
+ registers. For an aggregate type to be integer like, its size
+ must be less than or equal to INT_REGISTER_SIZE and the
+ offset of each addressable subfield must be zero. Note that bit
+ fields are not addressable, and all addressable subfields of
+ unions always start at offset zero.
+
+ This function is based on the behaviour of GCC 2.95.1.
+ See: gcc/arm.c: arm_return_in_memory() for details.
+
+ Note: All versions of GCC before GCC 2.95.2 do not set up the
+ parameters correctly for a function returning the following
+ structure: struct { float f;}; This should be returned in memory,
+ not a register. Richard Earnshaw sent me a patch, but I do not
+ know of any way to detect if a function like the above has been
+ compiled with the correct calling convention. */
+
+ /* All aggregate types that won't fit in a register must be returned
+ in memory. */
+ if (TYPE_LENGTH (type) > INT_REGISTER_SIZE)
+ {
+ return 1;
+ }
+
+ /* The AAPCS says all aggregates not larger than a word are returned
+ in a register. */
+ if (gdbarch_tdep (gdbarch)->arm_abi != ARM_ABI_APCS)
+ return 0;
+
+ /* The only aggregate types that can be returned in a register are
+ structs and unions. Arrays must be returned in memory. */
+ code = TYPE_CODE (type);
+ if ((TYPE_CODE_STRUCT != code) && (TYPE_CODE_UNION != code))
+ {
+ return 1;
+ }
+
+ /* Assume all other aggregate types can be returned in a register.
+ Run a check for structures, unions and arrays. */
+ nRc = 0;
+
+ if ((TYPE_CODE_STRUCT == code) || (TYPE_CODE_UNION == code))
+ {
+ int i;
+ /* Need to check if this struct/union is "integer" like. For
+ this to be true, its size must be less than or equal to
+ INT_REGISTER_SIZE and the offset of each addressable
+ subfield must be zero. Note that bit fields are not
+ addressable, and unions always start at offset zero. If any
+ of the subfields is a floating point type, the struct/union
+ cannot be an integer type. */
+
+ /* For each field in the object, check:
+ 1) Is it FP? --> yes, nRc = 1;
+ 2) Is it addressable (bitpos != 0) and
+ not packed (bitsize == 0)?
+ --> yes, nRc = 1
+ */
+
+ for (i = 0; i < TYPE_NFIELDS (type); i++)
+ {
+ enum type_code field_type_code;
+ field_type_code = TYPE_CODE (check_typedef (TYPE_FIELD_TYPE (type, i)));
+
+ /* Is it a floating point type field? */
if (field_type_code == TYPE_CODE_FLT)
{
nRc = 1;
@@ -3252,6 +5076,11 @@ arm_gdbarch_init (struct gdbarch_info in
/* On ARM targets char defaults to unsigned. */
set_gdbarch_char_signed (gdbarch, 0);
+ /* Note: for displaced stepping, this includes the breakpoint, and one word
+ of additional scratch space. This setting isn't used for anything besides
+ displaced stepping at present. */
+ set_gdbarch_max_insn_length (gdbarch, 4 * DISPLACED_MODIFIED_INSNS);
+
/* This should be low enough for everything. */
tdep->lowest_pc = 0x20;
tdep->jb_pc = -1; /* Longjump support not enabled by default. */
--- .pc/displaced-stepping/gdb/arm-tdep.h 2009-07-15 11:14:33.000000000 -0700
+++ gdb/arm-tdep.h 2009-07-15 11:15:02.000000000 -0700
@@ -172,11 +172,110 @@ struct gdbarch_tdep
struct regset *gregset, *fpregset;
};
+/* Structures used for displaced stepping. */
+
+/* The maximum number of temporaries available for displaced instructions. */
+#define DISPLACED_TEMPS 16
+/* The maximum number of modified instructions generated for one single-stepped
+ instruction, including the breakpoint (usually at the end of the instruction
+ sequence) and any scratch words, etc. */
+#define DISPLACED_MODIFIED_INSNS 8
+
+struct displaced_step_closure
+{
+ ULONGEST tmp[DISPLACED_TEMPS];
+ int rd;
+ int wrote_to_pc;
+ union
+ {
+ struct
+ {
+ int xfersize;
+ int rn; /* Writeback register. */
+ unsigned int immed : 1; /* Offset is immediate. */
+ unsigned int writeback : 1; /* Perform base-register writeback. */
+ unsigned int restore_r4 : 1; /* Used r4 as scratch. */
+ } ldst;
+
+ struct
+ {
+ unsigned long dest;
+ unsigned int link : 1;
+ unsigned int exchange : 1;
+ unsigned int cond : 4;
+ } branch;
+
+ struct
+ {
+ unsigned int regmask;
+ int rn;
+ CORE_ADDR xfer_addr;
+ unsigned int load : 1;
+ unsigned int user : 1;
+ unsigned int increment : 1;
+ unsigned int before : 1;
+ unsigned int writeback : 1;
+ unsigned int cond : 4;
+ } block;
+
+ struct
+ {
+ unsigned int immed : 1;
+ } preload;
+
+ struct
+ {
+ /* If non-NULL, override generic SVC handling (e.g. for a particular
+ OS). */
+ int (*copy_svc_os) (uint32_t insn, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc);
+ } svc;
+ } u;
+ unsigned long modinsn[DISPLACED_MODIFIED_INSNS];
+ int numinsns;
+ CORE_ADDR insn_addr;
+ CORE_ADDR scratch_base;
+ void (*cleanup) (struct regcache *, struct displaced_step_closure *);
+};
+
+/* Values for the WRITE_PC argument to displaced_write_reg. For writes that
+ may modify the PC, this specifies the way the CPSR T bit, etc. is
+ modified by the instruction. */
+
+enum pc_write_style
+{
+ BRANCH_WRITE_PC,
+ BX_WRITE_PC,
+ LOAD_WRITE_PC,
+ ALU_WRITE_PC,
+ CANNOT_WRITE_PC
+};
+
+extern void
+ arm_process_displaced_insn (uint32_t insn, CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc);
+extern void
+ arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc);
+extern ULONGEST
+ displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno);
+extern void
+ displaced_write_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc, int regno,
+ ULONGEST val, enum pc_write_style write_pc);
CORE_ADDR arm_skip_stub (struct frame_info *, CORE_ADDR);
CORE_ADDR arm_get_next_pc (struct frame_info *, CORE_ADDR);
int arm_software_single_step (struct frame_info *);
+extern struct displaced_step_closure *
+ arm_displaced_step_copy_insn (struct gdbarch *, CORE_ADDR, CORE_ADDR,
+ struct regcache *);
+extern void arm_displaced_step_fixup (struct gdbarch *,
+ struct displaced_step_closure *,
+ CORE_ADDR, CORE_ADDR, struct regcache *);
+
/* Functions exported from armbsd-tdep.h. */
/* Return the appropriate register set for the core section identified
[-- Attachment #3: fsf-displaced-stepping-always-3.diff --]
[-- Type: text/x-patch, Size: 2501 bytes --]
--- .pc/displaced-stepping-always/gdb/infrun.c 2009-07-15 00:36:51.000000000 -0700
+++ gdb/infrun.c 2009-07-15 11:16:42.000000000 -0700
@@ -825,6 +825,9 @@ displaced_step_fixup (ptid_t event_ptid,
one now. */
while (displaced_step_request_queue)
{
+ struct regcache *regcache;
+ struct gdbarch *gdbarch;
+
struct displaced_step_request *head;
ptid_t ptid;
CORE_ADDR actual_pc;
@@ -847,8 +850,12 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_prepare (ptid);
+ regcache = get_thread_regcache (ptid);
+ gdbarch = get_regcache_arch (regcache);
+
if (debug_displaced)
{
+ CORE_ADDR actual_pc = regcache_read_pc (regcache);
gdb_byte buf[4];
fprintf_unfiltered (gdb_stdlog, "displaced: run 0x%s: ",
@@ -857,7 +864,10 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_dump_bytes (gdb_stdlog, buf, sizeof (buf));
}
- target_resume (ptid, 1, TARGET_SIGNAL_0);
+ if (gdbarch_software_single_step_p (gdbarch))
+ target_resume (ptid, 0, TARGET_SIGNAL_0);
+ else
+ target_resume (ptid, 1, TARGET_SIGNAL_0);
/* Done, we're stepping a thread. */
break;
@@ -961,15 +971,19 @@ maybe_software_singlestep (struct gdbarc
{
int hw_step = 1;
- if (gdbarch_software_single_step_p (gdbarch)
- && gdbarch_software_single_step (gdbarch, get_current_frame ()))
+ if (gdbarch_software_single_step_p (gdbarch))
{
- hw_step = 0;
- /* Do not pull these breakpoints until after a `wait' in
- `wait_for_inferior' */
- singlestep_breakpoints_inserted_p = 1;
- singlestep_ptid = inferior_ptid;
- singlestep_pc = pc;
+ if (use_displaced_stepping (gdbarch))
+ hw_step = 0;
+ else if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
+ {
+ hw_step = 0;
+ /* Do not pull these breakpoints until after a `wait' in
+ `wait_for_inferior' */
+ singlestep_breakpoints_inserted_p = 1;
+ singlestep_ptid = inferior_ptid;
+ singlestep_pc = pc;
+ }
}
return hw_step;
}
@@ -1037,7 +1051,8 @@ a command like `return' or `jump' to con
comments in the handle_inferior event for dealing with 'random
signals' explain what we do instead. */
if (use_displaced_stepping (gdbarch)
- && tp->trap_expected
+ && (tp->trap_expected
+ || (step && gdbarch_software_single_step_p (gdbarch)))
&& sig == TARGET_SIGNAL_0)
{
if (!displaced_step_prepare (inferior_ptid))
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-07-15 19:16 ` Julian Brown
@ 2009-07-24 2:17 ` Daniel Jacobowitz
2009-07-31 11:43 ` Julian Brown
1 sibling, 0 replies; 24+ messages in thread
From: Daniel Jacobowitz @ 2009-07-24 2:17 UTC (permalink / raw)
To: Julian Brown; +Cc: gdb-patches, Pedro Alves
On Wed, Jul 15, 2009 at 07:27:49PM +0100, Julian Brown wrote:
> One possibly dubious part though is the positioning of the
> insert_breakpoints() call in arm-linux-tdep.c:arm_linux_copy_svc():
> without that, the momentary breakpoint used to regain control after a
> sigreturn syscall never actually gets inserted into the debugged
> program, because the displaced-step copy function gets called after
> that normally happens. It should be safe AFAICT, but I may have
> overlooked something.
set_momentary_breakpoint calls update_global_location_list_nothrow.
That's supposed to insert breakpoints. Here it is:
if (breakpoints_always_inserted_mode () && should_insert
&& (have_live_inferiors ()
|| (gdbarch_has_global_breakpoints (target_gdbarch))))
insert_breakpoint_locations ();
I'm guessing that you're using displaced stepping, but don't have
breakpoints always inserted (as they would be in typical use, since
non-stop requires it)?
I wish there were a more robust way to manage this, but I'm not sure
what it would be. We could do it centrally after setting up displaced
stepping. What you have seems OK to me.
In fact, both patches look OK to apply.
--
Daniel Jacobowitz
CodeSourcery
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-07-15 19:16 ` Julian Brown
2009-07-24 2:17 ` Daniel Jacobowitz
@ 2009-07-31 11:43 ` Julian Brown
2009-09-24 19:35 ` Ulrich Weigand
1 sibling, 1 reply; 24+ messages in thread
From: Julian Brown @ 2009-07-31 11:43 UTC (permalink / raw)
To: Julian Brown; +Cc: gdb-patches, Pedro Alves, Daniel Jacobowitz
[-- Attachment #1: Type: text/plain, Size: 476 bytes --]
On Wed, 15 Jul 2009 19:27:49 +0100
Julian Brown <julian@codesourcery.com> wrote:
> Here's a new version of the ARM displaced-stepping patch, together
> with a new version of the patch to always use displaced stepping if
> it is enabled:
FYI: These are the versions I've committed. They needed some (mostly
mechanical) changes to work again after CVS update, due to some changes
elsewhere in GDB (e.g. passing byte order to
{read,write}_memory_unsigned_integer, etc).
Julian
[-- Attachment #2: fsf-arm-displaced-stepping-9.diff --]
[-- Type: text/x-patch, Size: 72328 bytes --]
--- .pc/displaced-stepping/gdb/arm-linux-tdep.c 2009-07-30 15:33:41.000000000 -0700
+++ gdb/arm-linux-tdep.c 2009-07-30 15:34:18.000000000 -0700
@@ -38,6 +38,10 @@
#include "arm-linux-tdep.h"
#include "linux-tdep.h"
#include "glibc-tdep.h"
+#include "arch-utils.h"
+#include "inferior.h"
+#include "gdbthread.h"
+#include "symfile.h"
#include "gdb_string.h"
@@ -597,6 +601,210 @@ arm_linux_software_single_step (struct f
return 1;
}
+/* Support for displaced stepping of Linux SVC instructions. */
+
+static void
+arm_linux_cleanup_svc (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST apparent_pc;
+ int within_scratch;
+
+ regcache_cooked_read_unsigned (regs, ARM_PC_REGNUM, &apparent_pc);
+
+ within_scratch = (apparent_pc >= dsc->scratch_base
+ && apparent_pc < (dsc->scratch_base
+ + DISPLACED_MODIFIED_INSNS * 4 + 4));
+
+ if (debug_displaced)
+ {
+ fprintf_unfiltered (gdb_stdlog, "displaced: PC is apparently %.8lx after "
+ "SVC step ", (unsigned long) apparent_pc);
+ if (within_scratch)
+ fprintf_unfiltered (gdb_stdlog, "(within scratch space)\n");
+ else
+ fprintf_unfiltered (gdb_stdlog, "(outside scratch space)\n");
+ }
+
+ if (within_scratch)
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, from + 4, BRANCH_WRITE_PC);
+}
+
+static int
+arm_linux_copy_svc (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ struct frame_info *frame;
+ unsigned int svc_number = displaced_read_reg (regs, from, 7);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying Linux svc insn %.8lx\n",
+ (unsigned long) insn);
+
+ frame = get_current_frame ();
+
+ /* Is this a sigreturn or rt_sigreturn syscall? Note: these are only useful
+ for EABI. */
+ if (svc_number == 119 || svc_number == 173)
+ {
+ if (get_frame_type (frame) == SIGTRAMP_FRAME)
+ {
+ CORE_ADDR return_to;
+ struct symtab_and_line sal;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: found "
+ "sigreturn/rt_sigreturn SVC call. PC in frame = %lx\n",
+ (unsigned long) get_frame_pc (frame));
+
+ return_to = frame_unwind_caller_pc (frame);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: unwind pc = %lx. "
+ "Setting momentary breakpoint.\n", (unsigned long) return_to);
+
+ gdb_assert (inferior_thread ()->step_resume_breakpoint == NULL);
+
+ sal = find_pc_line (return_to, 0);
+ sal.pc = return_to;
+ sal.section = find_pc_overlay (return_to);
+ sal.explicit_pc = 1;
+
+ frame = get_prev_frame (frame);
+
+ if (frame)
+ {
+ inferior_thread ()->step_resume_breakpoint
+ = set_momentary_breakpoint (gdbarch, sal, get_frame_id (frame),
+ bp_step_resume);
+
+ /* We need to make sure we actually insert the momentary
+ breakpoint set above. */
+ insert_breakpoints ();
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stderr, "displaced: couldn't find previous "
+ "frame to set momentary breakpoint for "
+ "sigreturn/rt_sigreturn\n");
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: sigreturn/rt_sigreturn "
+ "SVC call not in signal trampoline frame\n");
+ }
+
+ /* Preparation: If we detect sigreturn, set momentary breakpoint at resume
+ location, else nothing.
+ Insn: unmodified svc.
+ Cleanup: if pc lands in scratch space, pc <- insn_addr + 4
+ else leave pc alone. */
+
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &arm_linux_cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+
+/* The following two functions implement single-stepping over calls to Linux
+ kernel helper routines, which perform e.g. atomic operations on architecture
+ variants which don't support them natively.
+
+ When this function is called, the PC will be pointing at the kernel helper
+ (at an address inaccessible to GDB), and r14 will point to the return
+ address. Displaced stepping always executes code in the copy area:
+ so, make the copy-area instruction branch back to the kernel helper (the
+ "from" address), and make r14 point to the breakpoint in the copy area. In
+ that way, we regain control once the kernel helper returns, and can clean
+ up appropriately (as if we had just returned from the kernel helper as it
+ would have been called from the non-displaced location). */
+
+static void
+cleanup_kernel_helper_return (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, dsc->tmp[0], BRANCH_WRITE_PC);
+}
+
+static void
+arm_catch_kernel_helper_return (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->cleanup = &cleanup_kernel_helper_return;
+ /* Say we wrote to the PC, else cleanup will set PC to the next
+ instruction in the helper, which isn't helpful. */
+ dsc->wrote_to_pc = 1;
+
+ /* Preparation: tmp[0] <- r14
+ r14 <- <scratch space>+4
+ *(<scratch space>+8) <- from
+ Insn: ldr pc, [r14, #4]
+ Cleanup: r14 <- tmp[0], pc <- tmp[0]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, ARM_LR_REGNUM);
+ displaced_write_reg (regs, dsc, ARM_LR_REGNUM, (ULONGEST) to + 4,
+ CANNOT_WRITE_PC);
+ write_memory_unsigned_integer (to + 8, 4, byte_order, from);
+
+ dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */
+}
+
+/* Linux-specific displaced step instruction copying function. Detects when
+ the program has stepped into a Linux kernel helper routine (which must be
+ handled as a special case), falling back to arm_displaced_step_copy_insn()
+ if it hasn't. */
+
+static struct displaced_step_closure *
+arm_linux_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+
+ /* Detect when we enter an (inaccessible by GDB) Linux kernel helper, and
+ stop at the return location. */
+ if (from > 0xffff0000)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected kernel helper "
+ "at %.8lx\n", (unsigned long) from);
+
+ arm_catch_kernel_helper_return (gdbarch, from, to, regs, dsc);
+ }
+ else
+ {
+ enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+ uint32_t insn = read_memory_unsigned_integer (from, 4, byte_order);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", (unsigned long) insn,
+ (unsigned long) from);
+
+ /* Override the default handling of SVC instructions. */
+ dsc->u.svc.copy_svc_os = arm_linux_copy_svc;
+
+ arm_process_displaced_insn (gdbarch, insn, from, to, regs, dsc);
+ }
+
+ arm_displaced_init_closure (gdbarch, from, to, dsc);
+
+ return dsc;
+}
+
static void
arm_linux_init_abi (struct gdbarch_info info,
struct gdbarch *gdbarch)
@@ -657,6 +865,14 @@ arm_linux_init_abi (struct gdbarch_info
arm_linux_regset_from_core_section);
set_gdbarch_get_siginfo_type (gdbarch, linux_get_siginfo_type);
+
+ /* Displaced stepping. */
+ set_gdbarch_displaced_step_copy_insn (gdbarch,
+ arm_linux_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_fixup (gdbarch, arm_displaced_step_fixup);
+ set_gdbarch_displaced_step_free_closure (gdbarch,
+ simple_displaced_step_free_closure);
+ set_gdbarch_displaced_step_location (gdbarch, displaced_step_at_entry_point);
}
/* Provide a prototype to silence -Wmissing-prototypes. */
--- .pc/displaced-stepping/gdb/arm-tdep.c 2009-07-30 15:33:41.000000000 -0700
+++ gdb/arm-tdep.c 2009-07-30 15:34:18.000000000 -0700
@@ -228,6 +228,11 @@ struct arm_prologue_cache
struct trad_frame_saved_reg *saved_regs;
};
+/* Architecture version for displaced stepping. This affects the behaviour of
+ certain instructions, and really should not be hard-wired. */
+
+#define DISPLACED_STEPPING_ARCH_VERSION 5
+
/* Addresses for calling Thumb functions have the bit 0 set.
Here are some macros to test, set, or clear bit 0 of addresses. */
#define IS_THUMB_ADDR(addr) ((addr) & 1)
@@ -2177,6 +2182,1856 @@ arm_software_single_step (struct frame_i
return 1;
}
+/* ARM displaced stepping support.
+
+ Generally ARM displaced stepping works as follows:
+
+ 1. When an instruction is to be single-stepped, it is first decoded by
+ arm_process_displaced_insn (called from arm_displaced_step_copy_insn).
+ Depending on the type of instruction, it is then copied to a scratch
+ location, possibly in a modified form. The copy_* set of functions
+ performs such modification, as necessary. A breakpoint is placed after
+ the modified instruction in the scratch space to return control to GDB.
+ Note in particular that instructions which modify the PC will no longer
+ do so after modification.
+
+ 2. The instruction is single-stepped, by setting the PC to the scratch
+ location address, and resuming. Control returns to GDB when the
+ breakpoint is hit.
+
+ 3. A cleanup function (cleanup_*) is called corresponding to the copy_*
+ function used for the current instruction. This function's job is to
+ put the CPU/memory state back to what it would have been if the
+ instruction had been executed unmodified in its original location. */
+
+/* NOP instruction (mov r0, r0). */
+#define ARM_NOP 0xe1a00000
+
+/* Helper for register reads for displaced stepping. In particular, this
+ returns the PC as it would be seen by the instruction at its original
+ location. */
+
+ULONGEST
+displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno)
+{
+ ULONGEST ret;
+
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n",
+ (unsigned long) from + 8);
+ return (ULONGEST) from + 8; /* Pipeline offset. */
+ }
+ else
+ {
+ regcache_cooked_read_unsigned (regs, regno, &ret);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: read r%d value %.8lx\n",
+ regno, (unsigned long) ret);
+ return ret;
+ }
+}
+
+static int
+displaced_in_arm_mode (struct regcache *regs)
+{
+ ULONGEST ps;
+
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
+
+ return (ps & CPSR_T) == 0;
+}
+
+/* Write to the PC as from a branch instruction. */
+
+static void
+branch_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (displaced_in_arm_mode (regs))
+ /* Note: If bits 0/1 are set, this branch would be unpredictable for
+ architecture versions < 6. */
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x3);
+ else
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & ~(ULONGEST) 0x1);
+}
+
+/* Write to the PC as from a branch-exchange instruction. */
+
+static void
+bx_write_pc (struct regcache *regs, ULONGEST val)
+{
+ ULONGEST ps;
+
+ regcache_cooked_read_unsigned (regs, ARM_PS_REGNUM, &ps);
+
+ if ((val & 1) == 1)
+ {
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM, ps | CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffe);
+ }
+ else if ((val & 2) == 0)
+ {
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
+ ps & ~(ULONGEST) CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val);
+ }
+ else
+ {
+ /* Unpredictable behaviour. Try to do something sensible (switch to ARM
+ mode, align dest to 4 bytes). */
+ warning (_("Single-stepping BX to non-word-aligned ARM instruction."));
+ regcache_cooked_write_unsigned (regs, ARM_PS_REGNUM,
+ ps & ~(ULONGEST) CPSR_T);
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, val & 0xfffffffc);
+ }
+}
+
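The interworking rule bx_write_pc implements can be summarized in isolation like this (a sketch under the same "do something sensible" fallback as above; the helper names are mine):

```c
#include <assert.h>
#include <stdint.h>

/* Does a BX to VAL switch to Thumb state?  Bit 0 of the target selects
   the instruction set.  */
static int
bx_selects_thumb (uint32_t val)
{
  return (val & 1) != 0;
}

/* The PC value actually written: Thumb targets drop bit 0; ARM targets
   are forced to 4-byte alignment (resolving the unpredictable bit-1
   case the same way the code above does).  */
static uint32_t
bx_target_pc (uint32_t val)
{
  if (val & 1)
    return val & ~(uint32_t) 1;
  return val & ~(uint32_t) 3;
}
```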
+/* Write to the PC as if from a load instruction. */
+
+static void
+load_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 5)
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
+
+/* Write to the PC as if from an ALU instruction. */
+
+static void
+alu_write_pc (struct regcache *regs, ULONGEST val)
+{
+ if (DISPLACED_STEPPING_ARCH_VERSION >= 7 && displaced_in_arm_mode (regs))
+ bx_write_pc (regs, val);
+ else
+ branch_write_pc (regs, val);
+}
+
+/* Helper for writing to registers for displaced stepping. Writing to the PC
+ has varying effects depending on the instruction which does the write:
+ this is controlled by the WRITE_PC argument. */
+
+void
+displaced_write_reg (struct regcache *regs, struct displaced_step_closure *dsc,
+ int regno, ULONGEST val, enum pc_write_style write_pc)
+{
+ if (regno == 15)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing pc %.8lx\n",
+ (unsigned long) val);
+ switch (write_pc)
+ {
+ case BRANCH_WRITE_PC:
+ branch_write_pc (regs, val);
+ break;
+
+ case BX_WRITE_PC:
+ bx_write_pc (regs, val);
+ break;
+
+ case LOAD_WRITE_PC:
+ load_write_pc (regs, val);
+ break;
+
+ case ALU_WRITE_PC:
+ alu_write_pc (regs, val);
+ break;
+
+ case CANNOT_WRITE_PC:
+ warning (_("Instruction wrote to PC in an unexpected way when "
+ "single-stepping"));
+ break;
+
+ default:
+ internal_error (__FILE__, __LINE__,
+ _("Invalid argument to displaced_write_reg"));
+ }
+
+ dsc->wrote_to_pc = 1;
+ }
+ else
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing r%d value %.8lx\n",
+ regno, (unsigned long) val);
+ regcache_cooked_write_unsigned (regs, regno, val);
+ }
+}
+
+/* This function is used to concisely determine if an instruction INSN
+ references PC. Register fields of interest in INSN should have the
+ corresponding fields of BITMASK set to 0b1111. The function returns 1
+ if any of these fields in INSN references the PC (also 0b1111, r15), else it
+ returns 0. */
+
+static int
+insn_references_pc (uint32_t insn, uint32_t bitmask)
+{
+ uint32_t lowbit = 1;
+
+ while (bitmask != 0)
+ {
+ uint32_t mask;
+
+ for (; lowbit && (bitmask & lowbit) == 0; lowbit <<= 1)
+ ;
+
+ if (!lowbit)
+ break;
+
+ mask = lowbit * 0xf;
+
+ if ((insn & mask) == mask)
+ return 1;
+
+ bitmask &= ~mask;
+ }
+
+ return 0;
+}
+
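To experiment with the bitmask walk outside GDB, here is a standalone copy with two worked encodings (the encodings are standard ARM; the function body mirrors insn_references_pc above, only the name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone copy of the insn_references_pc walk: each 0xf nibble in
   BITMASK marks a register field of INSN; report whether any marked
   field contains 15 (the PC).  */
static int
refs_pc (uint32_t insn, uint32_t bitmask)
{
  uint32_t lowbit = 1;

  while (bitmask != 0)
    {
      uint32_t mask;

      /* Find the lowest set bit of the remaining mask.  */
      for (; lowbit && (bitmask & lowbit) == 0; lowbit <<= 1)
        ;
      if (!lowbit)
        break;

      mask = lowbit * 0xf;      /* Expand to a 4-bit register field.  */
      if ((insn & mask) == mask)
        return 1;

      bitmask &= ~mask;
    }

  return 0;
}
```

For example, `ldr r0, [pc, #4]` (0xe59f0004) has Rn = 15, so checking the Rn field (bitmask 0x000f0000) reports a PC reference, while `ldr r0, [r1]` (0xe5910000) does not.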
+/* The simplest copy function. Many instructions have the same effect no
+ matter what address they are executed at: in those cases, use this. */
+
+static int
+copy_unmodified (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, uint32_t insn,
+ const char *iname, struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.8lx, "
+ "opcode/class '%s' unmodified\n", (unsigned long) insn,
+ iname);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* Preload instructions with immediate offset. */
+
+static void
+cleanup_preload (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (!dsc->u.preload.immed)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+}
+
+static int
+copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (gdbarch, insn, "preload", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preload instructions:
+
+ {pli/pld} [rn, #+/-imm]
+ ->
+ {pli/pld} [r0, #+/-imm]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 1;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Preload instructions with register offset. */
+
+static int
+copy_preload_reg (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ ULONGEST rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f000ful))
+ return copy_unmodified (gdbarch, insn, "preload reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preload register-offset instructions:
+
+ {pli/pld} [rn, rm {, shift}]
+ ->
+ {pli/pld} [r0, r1 {, shift}]. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rm_val, CANNOT_WRITE_PC);
+
+ dsc->u.preload.immed = 0;
+
+ dsc->modinsn[0] = (insn & 0xfff0fff0) | 0x1;
+
+ dsc->cleanup = &cleanup_preload;
+
+ return 0;
+}
+
+/* Copy/cleanup coprocessor load and store instructions. */
+
+static void
+cleanup_copro_load_store (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rn_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, LOAD_WRITE_PC);
+}
+
+static int
+copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ ULONGEST rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000f0000ul))
+ return copy_unmodified (gdbarch, insn, "copro load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor "
+ "load/store insn %.8lx\n", (unsigned long) insn);
+
+ /* Coprocessor load/store instructions:
+
+ {stc/stc2} [<Rn>, #+/-imm] (and other immediate addressing modes)
+ ->
+ {stc/stc2} [r0, #+/-imm].
+
+ ldc/ldc2 are handled identically. */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ rn_val = displaced_read_reg (regs, from, rn);
+ displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC);
+
+ dsc->u.ldst.writeback = bit (insn, 25);
+ dsc->u.ldst.rn = rn;
+
+ dsc->modinsn[0] = insn & 0xfff0ffff;
+
+ dsc->cleanup = &cleanup_copro_load_store;
+
+ return 0;
+}
+
+/* Clean up branch instructions (actually perform the branch, by setting
+ PC). */
+
+static void
+cleanup_branch (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int branch_taken = condition_true (dsc->u.branch.cond, status);
+ enum pc_write_style write_pc = dsc->u.branch.exchange
+ ? BX_WRITE_PC : BRANCH_WRITE_PC;
+
+ if (!branch_taken)
+ return;
+
+ if (dsc->u.branch.link)
+ {
+ ULONGEST pc = displaced_read_reg (regs, from, 15);
+ displaced_write_reg (regs, dsc, 14, pc - 4, CANNOT_WRITE_PC);
+ }
+
+ displaced_write_reg (regs, dsc, 15, dsc->u.branch.dest, write_pc);
+}
+
+/* Copy B/BL/BLX instructions with immediate destinations. */
+
+static int
+copy_b_bl_blx (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ int exchange = (cond == 0xf);
+ int link = exchange || bit (insn, 24);
+ CORE_ADDR from = dsc->insn_addr;
+ long offset;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn "
+ "%.8lx\n", (exchange) ? "blx" : (link) ? "bl" : "b",
+ (unsigned long) insn);
+
+ /* Implement "BL<cond> <label>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- label }.
+
+ B<cond> similar, but don't set r14 in cleanup. */
+
+ if (exchange)
+ /* For BLX, set bit 0 of the destination. The cleanup_branch function will
+ then arrange the switch into Thumb mode. */
+ offset = (bits (insn, 0, 23) << 2) | (bit (insn, 24) << 1) | 1;
+ else
+ offset = bits (insn, 0, 23) << 2;
+
+ if (bit (offset, 25))
+ offset = offset | ~0x3ffffff;
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = exchange;
+ dsc->u.branch.dest = from + 8 + offset;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
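The destination arithmetic used by copy_b_bl_blx for plain B/BL can be checked in isolation (a sketch, not GDB code; the BLX half-word bit and bit 0 are omitted here):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the B/BL destination computation: the 24-bit immediate is
   shifted left two bits, sign-extended from bit 25, and added to the
   pipeline-visible PC (instruction address + 8).  */
static uint32_t
b_dest (uint32_t insn_addr, uint32_t insn)
{
  int32_t offset = (insn & 0x00ffffff) << 2;

  if (offset & (1 << 25))
    offset |= ~0x03ffffff;      /* Sign-extend.  */

  return insn_addr + 8 + offset;
}
```

The canonical `b .` (branch-to-self) encoding 0xeafffffe resolves back to its own address, as expected.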
+/* Copy BX/BLX with register-specified destinations. */
+
+static int
+copy_bx_blx_reg (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int cond = bits (insn, 28, 31);
+ /* BX: x12xxx1x
+ BLX: x12xxx3x. */
+ int link = bit (insn, 5);
+ unsigned int rm = bits (insn, 0, 3);
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s register insn "
+ "%.8lx\n", (link) ? "blx" : "bx", (unsigned long) insn);
+
+ /* Implement "{BX,BLX}<cond> <reg>" as:
+
+ Preparation: cond <- instruction condition
+ Insn: mov r0, r0 (nop)
+ Cleanup: if (condition true) { r14 <- pc; pc <- dest; }.
+
+ Don't set r14 in cleanup for BX. */
+
+ dsc->u.branch.dest = displaced_read_reg (regs, from, rm);
+
+ dsc->u.branch.cond = cond;
+ dsc->u.branch.link = link;
+ dsc->u.branch.exchange = 1;
+
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_branch;
+
+ return 0;
+}
+
+/* Copy/cleanup arithmetic/logic instruction with immediate RHS. */
+
+static void
+cleanup_alu_imm (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff000ul))
+ return copy_unmodified (gdbarch, insn, "ALU immediate", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying immediate %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU",
+ (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] #imm
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2 <- r0, r1;
+ r0, r1 <- rd, rn
+ Insn: <op><cond> r0, r1, #imm
+ Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rd_val = displaced_read_reg (regs, from, rd);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = insn & 0xfff00fff;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x10000;
+
+ dsc->cleanup = &cleanup_alu_imm;
+
+ return 0;
+}
+
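The operand renumbering performed above for a non-MOV ALU immediate is a pure bit manipulation, sketched here standalone (the helper name is mine; the masks are the ones used above):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the rewrite for a non-MOV ALU immediate: clear the Rd and
   Rn fields, then substitute r0 and r1, so e.g. "add r2, pc, #4"
   (0xe28f2004) becomes "add r0, r1, #4" (0xe2810004) in the scratch
   copy.  */
static uint32_t
renumber_alu_imm (uint32_t insn)
{
  return (insn & 0xfff00fff) | 0x10000;   /* Rd <- r0, Rn <- r1.  */
}
```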
+/* Copy/cleanup arithmetic/logic insns with register RHS. */
+
+static void
+cleanup_alu_reg (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val;
+ int i;
+
+ rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+
+ for (i = 0; i < 3; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_reg (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd);
+ ULONGEST rd_val, rn_val, rm_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (gdbarch, insn, "ALU reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.8lx\n",
+ is_mov ? "move" : "ALU", (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm [, <shift>]
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3 <- r0, r1, r2;
+ r0, r1, r2 <- rd, rn, rm
+ Insn: <op><cond> r0, r1, r2 [, <shift>]
+ Cleanup: rd <- r0; r0, r1, r2 <- tmp1, tmp2, tmp3
+ */
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x2;
+ else
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x10002;
+
+ dsc->cleanup = &cleanup_alu_reg;
+
+ return 0;
+}
+
+/* Cleanup/copy arithmetic/logic insns with shifted register RHS. */
+
+static void
+cleanup_alu_shifted_reg (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rd_val = displaced_read_reg (regs, dsc->insn_addr, 0);
+ int i;
+
+ for (i = 0; i < 4; i++)
+ displaced_write_reg (regs, dsc, i, dsc->tmp[i], CANNOT_WRITE_PC);
+
+ displaced_write_reg (regs, dsc, dsc->rd, rd_val, ALU_WRITE_PC);
+}
+
+static int
+copy_alu_shifted_reg (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ unsigned int rd = bits (insn, 12, 15);
+ unsigned int rs = bits (insn, 8, 11);
+ unsigned int op = bits (insn, 21, 24);
+ int is_mov = (op == 0xd), i;
+ ULONGEST rd_val, rn_val, rm_val, rs_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000fff0ful))
+ return copy_unmodified (gdbarch, insn, "ALU shifted reg", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying shifted reg %s insn "
+ "%.8lx\n", is_mov ? "move" : "ALU",
+ (unsigned long) insn);
+
+ /* Instruction is of form:
+
+ <op><cond> rd, [rn,] rm, <shift> rs
+
+ Rewrite as:
+
+ Preparation: tmp1, tmp2, tmp3, tmp4 <- r0, r1, r2, r3
+ r0, r1, r2, r3 <- rd, rn, rm, rs
+ Insn: <op><cond> r0, r1, r2, <shift> r3
+ Cleanup: tmp5 <- r0
+ r0, r1, r2, r3 <- tmp1, tmp2, tmp3, tmp4
+ rd <- tmp5
+ */
+
+ for (i = 0; i < 4; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ rd_val = displaced_read_reg (regs, from, rd);
+ rn_val = displaced_read_reg (regs, from, rn);
+ rm_val = displaced_read_reg (regs, from, rm);
+ rs_val = displaced_read_reg (regs, from, rs);
+ displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rm_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 3, rs_val, CANNOT_WRITE_PC);
+ dsc->rd = rd;
+
+ if (is_mov)
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x302;
+ else
+ dsc->modinsn[0] = (insn & 0xfff000f0) | 0x10302;
+
+ dsc->cleanup = &cleanup_alu_shifted_reg;
+
+ return 0;
+}
+
+/* Clean up load instructions. */
+
+static void
+cleanup_load (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST rt_val, rt_val2 = 0, rn_val;
+ CORE_ADDR from = dsc->insn_addr;
+
+ rt_val = displaced_read_reg (regs, from, 0);
+ if (dsc->u.ldst.xfersize == 8)
+ rt_val2 = displaced_read_reg (regs, from, 1);
+ rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+
+ /* Handle register writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+ /* Put result in right place. */
+ displaced_write_reg (regs, dsc, dsc->rd, rt_val, LOAD_WRITE_PC);
+ if (dsc->u.ldst.xfersize == 8)
+ displaced_write_reg (regs, dsc, dsc->rd + 1, rt_val2, LOAD_WRITE_PC);
+}
+
+/* Clean up store instructions. */
+
+static void
+cleanup_store (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ ULONGEST rn_val = displaced_read_reg (regs, from, 2);
+
+ displaced_write_reg (regs, dsc, 0, dsc->tmp[0], CANNOT_WRITE_PC);
+ if (dsc->u.ldst.xfersize > 4)
+ displaced_write_reg (regs, dsc, 1, dsc->tmp[1], CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, dsc->tmp[2], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.immed)
+ displaced_write_reg (regs, dsc, 3, dsc->tmp[3], CANNOT_WRITE_PC);
+ if (!dsc->u.ldst.restore_r4)
+ displaced_write_reg (regs, dsc, 4, dsc->tmp[4], CANNOT_WRITE_PC);
+
+ /* Writeback. */
+ if (dsc->u.ldst.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.ldst.rn, rn_val, CANNOT_WRITE_PC);
+}
+
+/* Copy "extra" load/store instructions. These are halfword/doubleword
+ transfers, which have a different encoding to byte/word transfers. */
+
+static int
+copy_extra_ld_st (struct gdbarch *gdbarch, uint32_t insn, int unprivileged,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 24);
+ unsigned int op2 = bits (insn, 5, 6);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3);
+ char load[12] = {0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1};
+ char bytesize[12] = {2, 2, 2, 2, 8, 1, 8, 1, 8, 2, 8, 2};
+ int immed = (op1 & 0x4) != 0;
+ int opcode;
+ ULONGEST rt_val, rt_val2 = 0, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (gdbarch, insn, "extra load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %sextra load/store "
+ "insn %.8lx\n", unprivileged ? "unprivileged " : "",
+ (unsigned long) insn);
+
+ opcode = ((op2 << 2) | (op1 & 0x1) | ((op1 & 0x4) >> 1)) - 4;
+
+ if (opcode < 0)
+ internal_error (__FILE__, __LINE__,
+ _("copy_extra_ld_st: instruction decode error"));
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[1] = displaced_read_reg (regs, from, 1);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ if (bytesize[opcode] == 8)
+ rt_val2 = displaced_read_reg (regs, from, rt + 1);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ if (bytesize[opcode] == 8)
+ displaced_write_reg (regs, dsc, 1, rt_val2, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = bytesize[opcode];
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, #imm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}<width><cond> rt, [rt2,] [rn, +/-rm]
+ ->
+ {ldr,str}<width><cond> r0, [r1,] [r2, +/-r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->cleanup = load[opcode] ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+/* Copy byte/word loads and stores. */
+
+static int
+copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc, int load, int byte,
+ int usermode)
+{
+ int immed = !bit (insn, 25);
+ unsigned int rt = bits (insn, 12, 15);
+ unsigned int rn = bits (insn, 16, 19);
+ unsigned int rm = bits (insn, 0, 3); /* Only valid if !immed. */
+ ULONGEST rt_val, rn_val, rm_val = 0;
+ CORE_ADDR from = dsc->insn_addr;
+
+ if (!insn_references_pc (insn, 0x000ff00ful))
+ return copy_unmodified (gdbarch, insn, "load/store", dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying %s%s insn %.8lx\n",
+ load ? (byte ? "ldrb" : "ldr")
+ : (byte ? "strb" : "str"), usermode ? "t" : "",
+ (unsigned long) insn);
+
+ dsc->tmp[0] = displaced_read_reg (regs, from, 0);
+ dsc->tmp[2] = displaced_read_reg (regs, from, 2);
+ if (!immed)
+ dsc->tmp[3] = displaced_read_reg (regs, from, 3);
+ if (!load)
+ dsc->tmp[4] = displaced_read_reg (regs, from, 4);
+
+ rt_val = displaced_read_reg (regs, from, rt);
+ rn_val = displaced_read_reg (regs, from, rn);
+ if (!immed)
+ rm_val = displaced_read_reg (regs, from, rm);
+
+ displaced_write_reg (regs, dsc, 0, rt_val, CANNOT_WRITE_PC);
+ displaced_write_reg (regs, dsc, 2, rn_val, CANNOT_WRITE_PC);
+ if (!immed)
+ displaced_write_reg (regs, dsc, 3, rm_val, CANNOT_WRITE_PC);
+
+ dsc->rd = rt;
+ dsc->u.ldst.xfersize = byte ? 1 : 4;
+ dsc->u.ldst.rn = rn;
+ dsc->u.ldst.immed = immed;
+ dsc->u.ldst.writeback = bit (insn, 24) == 0 || bit (insn, 21) != 0;
+
+ /* To write PC we can do:
+
+ scratch+0: str pc, temp (*temp = scratch + 8 + offset)
+ scratch+4: ldr r4, temp
+ scratch+8: sub r4, r4, pc (r4 = scratch + 8 + offset - scratch - 8 - 8)
+ scratch+12: add r4, r4, #8 (r4 = offset)
+ scratch+16: add r0, r0, r4
+ scratch+20: str r0, [r2, #imm] (or str r0, [r2, r3])
+ scratch+24: bkpt
+ scratch+28: <temp>
+
+ Otherwise we don't know what value to write for PC, since the offset is
+ architecture-dependent (sometimes PC+8, sometimes PC+12). */
+
+ if (load || rt != 15)
+ {
+ dsc->u.ldst.restore_r4 = 0;
+
+ if (immed)
+ /* {ldr,str}[b]<cond> rt, [rn, #imm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, #imm]. */
+ dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000;
+ else
+ /* {ldr,str}[b]<cond> rt, [rn, rm], etc.
+ ->
+ {ldr,str}[b]<cond> r0, [r2, r3]. */
+ dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003;
+ }
+ else
+ {
+ /* We need to use r4 as scratch. Make sure it's restored afterwards. */
+ dsc->u.ldst.restore_r4 = 1;
+
+ dsc->modinsn[0] = 0xe58ff014; /* str pc, [pc, #20]. */
+ dsc->modinsn[1] = 0xe59f4010; /* ldr r4, [pc, #16]. */
+ dsc->modinsn[2] = 0xe044400f; /* sub r4, r4, pc. */
+ dsc->modinsn[3] = 0xe2844008; /* add r4, r4, #8. */
+ dsc->modinsn[4] = 0xe0800004; /* add r0, r0, r4. */
+
+ /* As above. */
+ if (immed)
+ dsc->modinsn[5] = (insn & 0xfff00fff) | 0x20000;
+ else
+ dsc->modinsn[5] = (insn & 0xfff00ff0) | 0x20003;
+
+ dsc->modinsn[6] = 0x0; /* breakpoint location. */
+ dsc->modinsn[7] = 0x0; /* scratch space. */
+
+ dsc->numinsns = 6;
+ }
+
+ dsc->cleanup = load ? &cleanup_load : &cleanup_store;
+
+ return 0;
+}
+
+/* Cleanup LDM instructions with fully-populated register list. This is an
+ unfortunate corner case: it's impossible to implement correctly by modifying
+ the instruction. The issue is as follows: we have an instruction,
+
+ ldm rN, {r0-r15}
+
+ which we must rewrite to avoid loading PC. A possible solution would be to
+ do the load in two halves, something like (with suitable cleanup
+ afterwards):
+
+ mov r8, rN
+ ldm[id][ab] r8!, {r0-r7}
+ str r7, <temp>
+ ldm[id][ab] r8, {r7-r14}
+ <bkpt>
+
+ but at present there's no suitable place for <temp>, since the scratch space
+ is overwritten before the cleanup routine is called. For now, we simply
+ emulate the instruction. */
+
+static void
+cleanup_block_load_all (struct gdbarch *gdbarch, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ int inc = dsc->u.block.increment;
+ int bump_before = dsc->u.block.before ? (inc ? 4 : -4) : 0;
+ int bump_after = dsc->u.block.before ? 0 : (inc ? 4 : -4);
+ uint32_t regmask = dsc->u.block.regmask;
+ int regno = inc ? 0 : 15;
+ CORE_ADDR xfer_addr = dsc->u.block.xfer_addr;
+ int exception_return = dsc->u.block.load && dsc->u.block.user
+ && (regmask & 0x8000) != 0;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int do_transfer = condition_true (dsc->u.block.cond, status);
+ enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+ if (!do_transfer)
+ return;
+
+ /* If the instruction is ldm rN, {...pc}^, I don't think there's anything
+ sensible we can do here. Complain loudly. */
+ if (exception_return)
+ error (_("Cannot single-step exception return"));
+
+ /* We don't handle any stores here for now. */
+ gdb_assert (dsc->u.block.load != 0);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: emulating block transfer: "
+ "%s %s %s\n", dsc->u.block.load ? "ldm" : "stm",
+ dsc->u.block.increment ? "inc" : "dec",
+ dsc->u.block.before ? "before" : "after");
+
+ while (regmask)
+ {
+ uint32_t memword;
+
+ if (inc)
+ while (regno <= 15 && (regmask & (1 << regno)) == 0)
+ regno++;
+ else
+ while (regno >= 0 && (regmask & (1 << regno)) == 0)
+ regno--;
+
+ xfer_addr += bump_before;
+
+ memword = read_memory_unsigned_integer (xfer_addr, 4, byte_order);
+ displaced_write_reg (regs, dsc, regno, memword, LOAD_WRITE_PC);
+
+ xfer_addr += bump_after;
+
+ regmask &= ~(1 << regno);
+ }
+
+ if (dsc->u.block.writeback)
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, xfer_addr,
+ CANNOT_WRITE_PC);
+}
+
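The bump_before/bump_after walk above encodes the four LDM/STM addressing modes; the address of the n-th transferred word can be stated directly (a sketch for this mail, not GDB code):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the LDM/STM address walk emulated above: the transfer
   address moves by 4 per register, up for increment, down for
   decrement; IB/DB modes apply the step before the access, IA/DA
   after it.  N is the zero-based index of the transferred word.  */
static uint32_t
nth_xfer_addr (uint32_t base, int increment, int before, int n)
{
  int step = increment ? 4 : -4;

  return base + step * n + (before ? step : 0);
}
```

So `ldmia` transfers its first word at the base address itself, `ldmib` at base + 4, and `ldmdb` at base - 4, matching the bump arithmetic in the cleanup routine.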
+/* Clean up an STM which included the PC in the register list. */
+
+static void
+cleanup_block_store_pc (struct gdbarch *gdbarch, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int store_executed = condition_true (dsc->u.block.cond, status);
+ CORE_ADDR pc_stored_at, transferred_regs = bitcount (dsc->u.block.regmask);
+ CORE_ADDR stm_insn_addr;
+ uint32_t pc_val;
+ long offset;
+ enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+ /* If condition code fails, there's nothing else to do. */
+ if (!store_executed)
+ return;
+
+ if (dsc->u.block.increment)
+ {
+ pc_stored_at = dsc->u.block.xfer_addr + 4 * transferred_regs;
+
+ if (dsc->u.block.before)
+ pc_stored_at += 4;
+ }
+ else
+ {
+ pc_stored_at = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.before)
+ pc_stored_at -= 4;
+ }
+
+ pc_val = read_memory_unsigned_integer (pc_stored_at, 4, byte_order);
+ stm_insn_addr = dsc->scratch_base;
+ offset = pc_val - stm_insn_addr;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: detected PC offset %.8lx for "
+ "STM instruction\n", (unsigned long) offset);
+
+ /* Rewrite the stored PC to the proper value for the non-displaced original
+ instruction. */
+ write_memory_unsigned_integer (pc_stored_at, 4, byte_order,
+ dsc->insn_addr + offset);
+}
+
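The fix-up above boils down to two subtractions and an addition; as a standalone sketch (function name mine):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of cleanup_block_store_pc's arithmetic: an STM executed at
   SCRATCH stores SCRATCH plus an implementation-defined offset
   (typically 8 or 12); subtracting recovers that offset, which is then
   applied to the original instruction address to get the value the
   non-displaced instruction would have stored.  */
static uint32_t
fixed_up_pc (uint32_t stored_pc, uint32_t scratch, uint32_t orig_addr)
{
  uint32_t offset = stored_pc - scratch;

  return orig_addr + offset;
}
```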
+/* Clean up an LDM which includes the PC in the register list. We clumped all
+ the registers in the transferred list into a contiguous range r0...rX (to
+ avoid loading PC directly and losing control of the debugged program), so we
+ must undo that here. */
+
+static void
+cleanup_block_load_pc (struct gdbarch *gdbarch ATTRIBUTE_UNUSED,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ ULONGEST from = dsc->insn_addr;
+ uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
+ int load_executed = condition_true (dsc->u.block.cond, status), i;
+ unsigned int mask = dsc->u.block.regmask, write_reg = 15;
+ unsigned int regs_loaded = bitcount (mask);
+ unsigned int num_to_shuffle = regs_loaded, clobbered;
+
+ /* The method employed here will fail if the register list is fully populated
+ (we need to avoid loading PC directly). */
+ gdb_assert (num_to_shuffle < 16);
+
+ if (!load_executed)
+ return;
+
+ clobbered = (1 << num_to_shuffle) - 1;
+
+ while (num_to_shuffle > 0)
+ {
+ if ((mask & (1 << write_reg)) != 0)
+ {
+ unsigned int read_reg = num_to_shuffle - 1;
+
+ if (read_reg != write_reg)
+ {
+ ULONGEST rval = displaced_read_reg (regs, from, read_reg);
+ displaced_write_reg (regs, dsc, write_reg, rval, LOAD_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: move "
+ "loaded register r%d to r%d\n"), read_reg,
+ write_reg);
+ }
+ else if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: register "
+ "r%d already in the right place\n"),
+ write_reg);
+
+ clobbered &= ~(1 << write_reg);
+
+ num_to_shuffle--;
+ }
+
+ write_reg--;
+ }
+
+ /* Restore any registers we scribbled over. */
+ for (write_reg = 0; clobbered != 0; write_reg++)
+ {
+ if ((clobbered & (1 << write_reg)) != 0)
+ {
+ displaced_write_reg (regs, dsc, write_reg, dsc->tmp[write_reg],
+ CANNOT_WRITE_PC);
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM: restored "
+ "clobbered register r%d\n"), write_reg);
+ clobbered &= ~(1 << write_reg);
+ }
+ }
+
+ /* Perform register writeback manually. */
+ if (dsc->u.block.writeback)
+ {
+ ULONGEST new_rn_val = dsc->u.block.xfer_addr;
+
+ if (dsc->u.block.increment)
+ new_rn_val += regs_loaded * 4;
+ else
+ new_rn_val -= regs_loaded * 4;
+
+ displaced_write_reg (regs, dsc, dsc->u.block.rn, new_rn_val,
+ CANNOT_WRITE_PC);
+ }
+}
+
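The shuffle above moves the value loaded into contiguous slot i to the register at the i-th set bit of the original mask. That mapping can be stated as a small helper (a sketch for this mail; the name is mine):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the mapping cleanup_block_load_pc undoes: the rewritten LDM
   loads into r0..rX, and the value in slot SLOT belongs in the register
   at the SLOT-th set bit (counting up from r0) of the original mask.
   Returns -1 if SLOT is out of range.  */
static int
dest_of_slot (uint32_t mask, int slot)
{
  int reg;

  for (reg = 0; reg < 16; reg++)
    if (mask & (1u << reg))
      {
        if (slot == 0)
          return reg;
        slot--;
      }

  return -1;
}
```

For `ldm rn, {r1, r4, pc}` (mask 0x8012), slots 0, 1, 2 map to r1, r4 and r15, which is why the cleanup walks from r15 downwards: the PC write happens via the proper write-pc path.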
+/* Handle ldm/stm, apart from some tricky cases which are unlikely to occur
+ in user-level code (in particular exception return, ldm rn, {...pc}^). */
+
+static int
+copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int load = bit (insn, 20);
+ int user = bit (insn, 22);
+ int increment = bit (insn, 23);
+ int before = bit (insn, 24);
+ int writeback = bit (insn, 21);
+ int rn = bits (insn, 16, 19);
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Block transfers which don't mention PC can be run directly out-of-line. */
+ if (rn != 15 && (insn & 0x8000) == 0)
+ return copy_unmodified (gdbarch, insn, "ldm/stm", dsc);
+
+ if (rn == 15)
+ {
+ warning (_("displaced: Unpredictable LDM or STM with base register r15"));
+ return copy_unmodified (gdbarch, insn, "unpredictable ldm/stm", dsc);
+ }
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn "
+ "%.8lx\n", (unsigned long) insn);
+
+ dsc->u.block.xfer_addr = displaced_read_reg (regs, from, rn);
+ dsc->u.block.rn = rn;
+
+ dsc->u.block.load = load;
+ dsc->u.block.user = user;
+ dsc->u.block.increment = increment;
+ dsc->u.block.before = before;
+ dsc->u.block.writeback = writeback;
+ dsc->u.block.cond = bits (insn, 28, 31);
+
+ dsc->u.block.regmask = insn & 0xffff;
+
+ if (load)
+ {
+ if ((insn & 0xffff) == 0xffff)
+ {
+ /* LDM with a fully-populated register list. This case is
+ particularly tricky. Implement for now by fully emulating the
+ instruction (which might not behave perfectly in all cases, but
+ these instructions should be rare enough for that not to matter
+ too much). */
+ dsc->modinsn[0] = ARM_NOP;
+
+ dsc->cleanup = &cleanup_block_load_all;
+ }
+ else
+ {
+ /* LDM of a list of registers which includes PC. Implement by
+ rewriting the list of registers to be transferred into a
+ contiguous chunk r0...rX before doing the transfer, then shuffling
+ registers into the correct places in the cleanup routine. */
+ unsigned int regmask = insn & 0xffff;
+ unsigned int num_in_list = bitcount (regmask), new_regmask;
+ unsigned int i;
+
+ for (i = 0; i < num_in_list; i++)
+ dsc->tmp[i] = displaced_read_reg (regs, from, i);
+
+ /* Writeback makes things complicated. We need to avoid clobbering
+ the base register with one of the registers in our modified
+ register list, but just using a different register can't work in
+ all cases, e.g.:
+
+ ldm r14!, {r0-r13,pc}
+
+ which would need to be rewritten as:
+
+ ldm rN!, {r0-r14}
+
+ but that can't work, because there's no free register for N.
+
+ Solve this by turning off the writeback bit, and emulating
+ writeback manually in the cleanup routine. */
+
+ if (writeback)
+ insn &= ~(1 << 21);
+
+ new_regmask = (1 << num_in_list) - 1;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, "
+ "{..., pc}: original reg list %.4x, modified "
+ "list %.4x\n"), rn, writeback ? "!" : "",
+ (int) insn & 0xffff, new_regmask);
+
+ dsc->modinsn[0] = (insn & ~0xffff) | (new_regmask & 0xffff);
+
+ dsc->cleanup = &cleanup_block_load_pc;
+ }
+ }
+ else
+ {
+ /* STM of a list of registers which includes PC. Run the instruction
+ as-is, but out of line: this will store the wrong value for the PC,
+ so we must manually fix up the memory in the cleanup routine.
+ Doing things this way has the advantage that we can auto-detect
+ the offset of the PC write (which is architecture-dependent) in
+ the cleanup routine. */
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_block_store_pc;
+ }
+
+ return 0;
+}
+
+/* Cleanup/copy SVC (SWI) instructions. These two functions are overridden
+ for Linux, where some SVC instructions must be treated specially. */
+
+static void
+cleanup_svc (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+ CORE_ADDR resume_addr = from + 4;
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: cleanup for svc, resume at "
+ "%.8lx\n", (unsigned long) resume_addr);
+
+ displaced_write_reg (regs, dsc, ARM_PC_REGNUM, resume_addr, BRANCH_WRITE_PC);
+}
+
+static int
+copy_svc (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ CORE_ADDR from = dsc->insn_addr;
+
+ /* Allow OS-specific code to override SVC handling. */
+ if (dsc->u.svc.copy_svc_os)
+ return dsc->u.svc.copy_svc_os (gdbarch, insn, to, regs, dsc);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n",
+ (unsigned long) insn);
+
+ /* Preparation: none.
+ Insn: unmodified svc.
+ Cleanup: pc <- insn_addr + 4. */
+
+ dsc->modinsn[0] = insn;
+
+ dsc->cleanup = &cleanup_svc;
+ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next
+ instruction. */
+ dsc->wrote_to_pc = 1;
+
+ return 0;
+}
+
+/* Copy undefined instructions. */
+
+static int
+copy_undef (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, uint32_t insn,
+ struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn %.8lx\n",
+ (unsigned long) insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* Copy unpredictable instructions. */
+
+static int
+copy_unpred (struct gdbarch *gdbarch ATTRIBUTE_UNUSED, uint32_t insn,
+ struct displaced_step_closure *dsc)
+{
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredictable insn "
+ "%.8lx\n", (unsigned long) insn);
+
+ dsc->modinsn[0] = insn;
+
+ return 0;
+}
+
+/* The decode_* functions are instruction decoding helpers. They mostly follow
+ the presentation in the ARM ARM. */
+
+static int
+decode_misc_memhint_neon (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 26), op2 = bits (insn, 4, 7);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if (op1 == 0x10 && (op2 & 0x2) == 0x0 && (rn & 0xe) == 0x0)
+ return copy_unmodified (gdbarch, insn, "cps", dsc);
+ else if (op1 == 0x10 && op2 == 0x0 && (rn & 0xe) == 0x1)
+ return copy_unmodified (gdbarch, insn, "setend", dsc);
+ else if ((op1 & 0x60) == 0x20)
+ return copy_unmodified (gdbarch, insn, "neon dataproc", dsc);
+ else if ((op1 & 0x71) == 0x40)
+ return copy_unmodified (gdbarch, insn, "neon elt/struct load/store", dsc);
+ else if ((op1 & 0x77) == 0x41)
+ return copy_unmodified (gdbarch, insn, "unallocated mem hint", dsc);
+ else if ((op1 & 0x77) == 0x45)
+ return copy_preload (gdbarch, insn, regs, dsc); /* pli. */
+ else if ((op1 & 0x77) == 0x51)
+ {
+ if (rn != 0xf)
+ return copy_preload (gdbarch, insn, regs, dsc); /* pld/pldw. */
+ else
+ return copy_unpred (gdbarch, insn, dsc);
+ }
+ else if ((op1 & 0x77) == 0x55)
+ return copy_preload (gdbarch, insn, regs, dsc); /* pld/pldw. */
+ else if (op1 == 0x57)
+ switch (op2)
+ {
+ case 0x1: return copy_unmodified (gdbarch, insn, "clrex", dsc);
+ case 0x4: return copy_unmodified (gdbarch, insn, "dsb", dsc);
+ case 0x5: return copy_unmodified (gdbarch, insn, "dmb", dsc);
+ case 0x6: return copy_unmodified (gdbarch, insn, "isb", dsc);
+ default: return copy_unpred (gdbarch, insn, dsc);
+ }
+ else if ((op1 & 0x63) == 0x43)
+ return copy_unpred (gdbarch, insn, dsc);
+ else if ((op2 & 0x1) == 0x0)
+ switch (op1 & ~0x80)
+ {
+ case 0x61:
+ return copy_unmodified (gdbarch, insn, "unallocated mem hint", dsc);
+ case 0x65:
+ return copy_preload_reg (gdbarch, insn, regs, dsc); /* pli reg. */
+ case 0x71: case 0x75:
+ /* pld/pldw reg. */
+ return copy_preload_reg (gdbarch, insn, regs, dsc);
+ case 0x63: case 0x67: case 0x73: case 0x77:
+ return copy_unpred (gdbarch, insn, dsc);
+ default:
+ return copy_undef (gdbarch, insn, dsc);
+ }
+ else
+ return copy_undef (gdbarch, insn, dsc); /* Probably unreachable. */
+}
+
+static int
+decode_unconditional (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 27) == 0)
+ return decode_misc_memhint_neon (gdbarch, insn, regs, dsc);
+ /* Switch on bits: 0bxxxxx321xxx0xxxxxxxxxxxxxxxxxxxx. */
+ else switch (((insn & 0x7000000) >> 23) | ((insn & 0x100000) >> 20))
+ {
+ case 0x0: case 0x2:
+ return copy_unmodified (gdbarch, insn, "srs", dsc);
+
+ case 0x1: case 0x3:
+ return copy_unmodified (gdbarch, insn, "rfe", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ return copy_b_bl_blx (gdbarch, insn, regs, dsc);
+
+ case 0x8:
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3: case 0x4: case 0x5: case 0x6: case 0x7:
+ /* stc/stc2. */
+ return copy_copro_load_store (gdbarch, insn, regs, dsc);
+
+ case 0x2:
+ return copy_unmodified (gdbarch, insn, "mcrr/mcrr2", dsc);
+
+ default:
+ return copy_undef (gdbarch, insn, dsc);
+ }
+
+ case 0x9:
+ {
+ int rn_f = (bits (insn, 16, 19) == 0xf);
+ switch ((insn & 0xe00000) >> 21)
+ {
+ case 0x1: case 0x3:
+ /* ldc/ldc2 imm (undefined for rn == pc). */
+ return rn_f ? copy_undef (gdbarch, insn, dsc)
+ : copy_copro_load_store (gdbarch, insn, regs, dsc);
+
+ case 0x2:
+ return copy_unmodified (gdbarch, insn, "mrrc/mrrc2", dsc);
+
+ case 0x4: case 0x5: case 0x6: case 0x7:
+ /* ldc/ldc2 lit (undefined for rn != pc). */
+ return rn_f ? copy_copro_load_store (gdbarch, insn, regs, dsc)
+ : copy_undef (gdbarch, insn, dsc);
+
+ default:
+ return copy_undef (gdbarch, insn, dsc);
+ }
+ }
+
+ case 0xa:
+ return copy_unmodified (gdbarch, insn, "stc/stc2", dsc);
+
+ case 0xb:
+ if (bits (insn, 16, 19) == 0xf)
+ /* ldc/ldc2 lit. */
+ return copy_copro_load_store (gdbarch, insn, regs, dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0xc:
+ if (bit (insn, 4))
+ return copy_unmodified (gdbarch, insn, "mcr/mcr2", dsc);
+ else
+ return copy_unmodified (gdbarch, insn, "cdp/cdp2", dsc);
+
+ case 0xd:
+ if (bit (insn, 4))
+ return copy_unmodified (gdbarch, insn, "mrc/mrc2", dsc);
+ else
+ return copy_unmodified (gdbarch, insn, "cdp/cdp2", dsc);
+
+ default:
+ return copy_undef (gdbarch, insn, dsc);
+ }
+}
+
+/* Decode miscellaneous instructions in dp/misc encoding space. */
+
+static int
+decode_miscellaneous (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int op2 = bits (insn, 4, 6);
+ unsigned int op = bits (insn, 21, 22);
+ unsigned int op1 = bits (insn, 16, 19);
+
+ switch (op2)
+ {
+ case 0x0:
+ return copy_unmodified (gdbarch, insn, "mrs/msr", dsc);
+
+ case 0x1:
+ if (op == 0x1) /* bx. */
+ return copy_bx_blx_reg (gdbarch, insn, regs, dsc);
+ else if (op == 0x3)
+ return copy_unmodified (gdbarch, insn, "clz", dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x2:
+ if (op == 0x1)
+ /* Not really supported. */
+ return copy_unmodified (gdbarch, insn, "bxj", dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x3:
+ if (op == 0x1)
+ return copy_bx_blx_reg (gdbarch, insn, regs, dsc); /* blx register. */
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x5:
+ return copy_unmodified (gdbarch, insn, "saturating add/sub", dsc);
+
+ case 0x7:
+ if (op == 0x1)
+ return copy_unmodified (gdbarch, insn, "bkpt", dsc);
+ else if (op == 0x3)
+ /* Not really supported. */
+ return copy_unmodified (gdbarch, insn, "smc", dsc);
+ /* Otherwise fall through to the undefined case. */
+
+ default:
+ return copy_undef (gdbarch, insn, dsc);
+ }
+}
+
+static int
+decode_dp_misc (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ switch (bits (insn, 20, 24))
+ {
+ case 0x10:
+ return copy_unmodified (gdbarch, insn, "movw", dsc);
+
+ case 0x14:
+ return copy_unmodified (gdbarch, insn, "movt", dsc);
+
+ case 0x12: case 0x16:
+ return copy_unmodified (gdbarch, insn, "msr imm", dsc);
+
+ default:
+ return copy_alu_imm (gdbarch, insn, regs, dsc);
+ }
+ else
+ {
+ uint32_t op1 = bits (insn, 20, 24), op2 = bits (insn, 4, 7);
+
+ if ((op1 & 0x19) != 0x10 && (op2 & 0x1) == 0x0)
+ return copy_alu_reg (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x19) != 0x10 && (op2 & 0x9) == 0x1)
+ return copy_alu_shifted_reg (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x8) == 0x0)
+ return decode_miscellaneous (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x19) == 0x10 && (op2 & 0x9) == 0x8)
+ return copy_unmodified (gdbarch, insn, "halfword mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x00 && op2 == 0x9)
+ return copy_unmodified (gdbarch, insn, "mul/mla", dsc);
+ else if ((op1 & 0x10) == 0x10 && op2 == 0x9)
+ return copy_unmodified (gdbarch, insn, "synch", dsc);
+ else if (op2 == 0xb || (op2 & 0xd) == 0xd)
+ /* 2nd arg means "unprivileged". */
+ return copy_extra_ld_st (gdbarch, insn, (op1 & 0x12) == 0x02, regs,
+ dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_ld_st_word_ubyte (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int a = bit (insn, 25), b = bit (insn, 4);
+ uint32_t op1 = bits (insn, 20, 24);
+ int rn_f = bits (insn, 16, 19) == 0xf;
+
+ if ((!a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02)
+ || (a && (op1 & 0x05) == 0x00 && (op1 & 0x17) != 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 0, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x02)
+ || (a && (op1 & 0x17) == 0x02 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 0, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03)
+ || (a && (op1 & 0x05) == 0x01 && (op1 & 0x17) != 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 1, 0, 0);
+ else if ((!a && (op1 & 0x17) == 0x03)
+ || (a && (op1 & 0x17) == 0x03 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 1, 0, 1);
+ else if ((!a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06)
+ || (a && (op1 & 0x05) == 0x04 && (op1 & 0x17) != 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 0, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x06)
+ || (a && (op1 & 0x17) == 0x06 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 0, 1, 1);
+ else if ((!a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07)
+ || (a && (op1 & 0x05) == 0x05 && (op1 & 0x17) != 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 1, 1, 0);
+ else if ((!a && (op1 & 0x17) == 0x07)
+ || (a && (op1 & 0x17) == 0x07 && !b))
+ return copy_ldr_str_ldrb_strb (gdbarch, insn, regs, dsc, 1, 1, 1);
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_media (struct gdbarch *gdbarch, uint32_t insn,
+ struct displaced_step_closure *dsc)
+{
+ switch (bits (insn, 20, 24))
+ {
+ case 0x00: case 0x01: case 0x02: case 0x03:
+ return copy_unmodified (gdbarch, insn, "parallel add/sub signed", dsc);
+
+ case 0x04: case 0x05: case 0x06: case 0x07:
+ return copy_unmodified (gdbarch, insn, "parallel add/sub unsigned", dsc);
+
+ case 0x08: case 0x09: case 0x0a: case 0x0b:
+ case 0x0c: case 0x0d: case 0x0e: case 0x0f:
+ return copy_unmodified (gdbarch, insn,
+ "decode/pack/unpack/saturate/reverse", dsc);
+
+ case 0x18:
+ if (bits (insn, 5, 7) == 0) /* op2. */
+ {
+ if (bits (insn, 12, 15) == 0xf)
+ return copy_unmodified (gdbarch, insn, "usad8", dsc);
+ else
+ return copy_unmodified (gdbarch, insn, "usada8", dsc);
+ }
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x1a: case 0x1b:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (gdbarch, insn, "sbfx", dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x1c: case 0x1d:
+ if (bits (insn, 5, 6) == 0x0) /* op2[1:0]. */
+ {
+ if (bits (insn, 0, 3) == 0xf)
+ return copy_unmodified (gdbarch, insn, "bfc", dsc);
+ else
+ return copy_unmodified (gdbarch, insn, "bfi", dsc);
+ }
+ else
+ return copy_undef (gdbarch, insn, dsc);
+
+ case 0x1e: case 0x1f:
+ if (bits (insn, 5, 6) == 0x2) /* op2[1:0]. */
+ return copy_unmodified (gdbarch, insn, "ubfx", dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_b_bl_ldmstm (struct gdbarch *gdbarch, int32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ if (bit (insn, 25))
+ return copy_b_bl_blx (gdbarch, insn, regs, dsc);
+ else
+ return copy_block_xfer (gdbarch, insn, regs, dsc);
+}
+
+static int
+decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int opcode = bits (insn, 20, 24);
+
+ switch (opcode)
+ {
+ case 0x04: case 0x05: /* VFP/Neon mrrc/mcrr. */
+ return copy_unmodified (gdbarch, insn, "vfp/neon mrrc/mcrr", dsc);
+
+ case 0x08: case 0x0a: case 0x0c: case 0x0e:
+ case 0x12: case 0x16:
+ return copy_unmodified (gdbarch, insn, "vfp/neon vstm/vpush", dsc);
+
+ case 0x09: case 0x0b: case 0x0d: case 0x0f:
+ case 0x13: case 0x17:
+ return copy_unmodified (gdbarch, insn, "vfp/neon vldm/vpop", dsc);
+
+ case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */
+ case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */
+ /* Note: no writeback for these instructions. Bit 25 will always be
+ zero though (via caller), so the following works OK. */
+ return copy_copro_load_store (gdbarch, insn, regs, dsc);
+ }
+
+ /* Should be unreachable. */
+ return 1;
+}
+
+static int
+decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
+ struct regcache *regs, struct displaced_step_closure *dsc)
+{
+ unsigned int op1 = bits (insn, 20, 25);
+ int op = bit (insn, 4);
+ unsigned int coproc = bits (insn, 8, 11);
+ unsigned int rn = bits (insn, 16, 19);
+
+ if ((op1 & 0x20) == 0x00 && (op1 & 0x3a) != 0x00 && (coproc & 0xe) == 0xa)
+ return decode_ext_reg_ld_st (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x21) == 0x00 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ /* stc/stc2. */
+ return copy_copro_load_store (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x21) == 0x01 && (op1 & 0x3a) != 0x00
+ && (coproc & 0xe) != 0xa)
+ /* ldc/ldc2 imm/lit. */
+ return copy_copro_load_store (gdbarch, insn, regs, dsc);
+ else if ((op1 & 0x3e) == 0x00)
+ return copy_undef (gdbarch, insn, dsc);
+ else if ((op1 & 0x3e) == 0x04 && (coproc & 0xe) == 0xa)
+ return copy_unmodified (gdbarch, insn, "neon 64bit xfer", dsc);
+ else if (op1 == 0x04 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (gdbarch, insn, "mcrr/mcrr2", dsc);
+ else if (op1 == 0x05 && (coproc & 0xe) != 0xa)
+ return copy_unmodified (gdbarch, insn, "mrrc/mrrc2", dsc);
+ else if ((op1 & 0x30) == 0x20 && !op)
+ {
+ if ((coproc & 0xe) == 0xa)
+ return copy_unmodified (gdbarch, insn, "vfp dataproc", dsc);
+ else
+ return copy_unmodified (gdbarch, insn, "cdp/cdp2", dsc);
+ }
+ else if ((op1 & 0x30) == 0x20 && op)
+ return copy_unmodified (gdbarch, insn, "neon 8/16/32 bit xfer", dsc);
+ else if ((op1 & 0x31) == 0x20 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (gdbarch, insn, "mcr/mcr2", dsc);
+ else if ((op1 & 0x31) == 0x21 && op && (coproc & 0xe) != 0xa)
+ return copy_unmodified (gdbarch, insn, "mrc/mrc2", dsc);
+ else if ((op1 & 0x30) == 0x30)
+ return copy_svc (gdbarch, insn, to, regs, dsc);
+ else
+ return copy_undef (gdbarch, insn, dsc); /* Possibly unreachable. */
+}
+
+void
+arm_process_displaced_insn (struct gdbarch *gdbarch, uint32_t insn,
+ CORE_ADDR from, CORE_ADDR to, struct regcache *regs,
+ struct displaced_step_closure *dsc)
+{
+ int err = 0;
+
+ if (!displaced_in_arm_mode (regs))
+ error (_("Displaced stepping is only supported in ARM mode"));
+
+ /* Most displaced instructions use a 1-instruction scratch space, so set this
+ here and override below if/when necessary. */
+ dsc->numinsns = 1;
+ dsc->insn_addr = from;
+ dsc->scratch_base = to;
+ dsc->cleanup = NULL;
+ dsc->wrote_to_pc = 0;
+
+ if ((insn & 0xf0000000) == 0xf0000000)
+ err = decode_unconditional (gdbarch, insn, regs, dsc);
+ else switch (((insn & 0x10) >> 4) | ((insn & 0xe000000) >> 24))
+ {
+ case 0x0: case 0x1: case 0x2: case 0x3:
+ err = decode_dp_misc (gdbarch, insn, regs, dsc);
+ break;
+
+ case 0x4: case 0x5: case 0x6:
+ err = decode_ld_st_word_ubyte (gdbarch, insn, regs, dsc);
+ break;
+
+ case 0x7:
+ err = decode_media (gdbarch, insn, dsc);
+ break;
+
+ case 0x8: case 0x9: case 0xa: case 0xb:
+ err = decode_b_bl_ldmstm (gdbarch, insn, regs, dsc);
+ break;
+
+ case 0xc: case 0xd: case 0xe: case 0xf:
+ err = decode_svc_copro (gdbarch, insn, to, regs, dsc);
+ break;
+ }
+
+ if (err)
+ internal_error (__FILE__, __LINE__,
+ _("arm_process_displaced_insn: Instruction decode error"));
+}
+
+/* Actually set up the scratch space for a displaced instruction. */
+
+void
+arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc)
+{
+ struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
+ unsigned int i;
+ enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+
+ /* Poke modified instruction(s). */
+ for (i = 0; i < dsc->numinsns; i++)
+ {
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: writing insn %.8lx at "
+ "%.8lx\n", (unsigned long) dsc->modinsn[i],
+ (unsigned long) to + i * 4);
+ write_memory_unsigned_integer (to + i * 4, 4, byte_order_for_code,
+ dsc->modinsn[i]);
+ }
+
+ /* Put breakpoint afterwards. */
+ write_memory (to + dsc->numinsns * 4, tdep->arm_breakpoint,
+ tdep->arm_breakpoint_size);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: copy %s->%s: ",
+ paddress (gdbarch, from), paddress (gdbarch, to));
+}
+
+/* Entry point for copying an instruction into scratch space for displaced
+ stepping. */
+
+struct displaced_step_closure *
+arm_displaced_step_copy_insn (struct gdbarch *gdbarch,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ struct displaced_step_closure *dsc
+ = xmalloc (sizeof (struct displaced_step_closure));
+ enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+ uint32_t insn = read_memory_unsigned_integer (from, 4, byte_order_for_code);
+
+ if (debug_displaced)
+ fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx "
+ "at %.8lx\n", (unsigned long) insn,
+ (unsigned long) from);
+
+ arm_process_displaced_insn (gdbarch, insn, from, to, regs, dsc);
+ arm_displaced_init_closure (gdbarch, from, to, dsc);
+
+ return dsc;
+}
+
+/* Entry point for cleaning things up after a displaced instruction has been
+ single-stepped. */
+
+void
+arm_displaced_step_fixup (struct gdbarch *gdbarch,
+ struct displaced_step_closure *dsc,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs)
+{
+ if (dsc->cleanup)
+ dsc->cleanup (gdbarch, regs, dsc);
+
+ if (!dsc->wrote_to_pc)
+ regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, dsc->insn_addr + 4);
+}
+
#include "bfd-in2.h"
#include "libcoff.h"
@@ -3258,6 +5113,11 @@ arm_gdbarch_init (struct gdbarch_info in
/* On ARM targets char defaults to unsigned. */
set_gdbarch_char_signed (gdbarch, 0);
+ /* Note: for displaced stepping, this includes the breakpoint, and one word
+ of additional scratch space. This setting isn't used for anything besides
+ displaced stepping at present. */
+ set_gdbarch_max_insn_length (gdbarch, 4 * DISPLACED_MODIFIED_INSNS);
+
/* This should be low enough for everything. */
tdep->lowest_pc = 0x20;
tdep->jb_pc = -1; /* Longjump support not enabled by default. */
--- .pc/displaced-stepping/gdb/arm-tdep.h 2009-07-30 15:33:41.000000000 -0700
+++ gdb/arm-tdep.h 2009-07-30 15:34:18.000000000 -0700
@@ -175,11 +175,113 @@ struct gdbarch_tdep
struct type *arm_ext_type;
};
+/* Structures used for displaced stepping. */
+
+/* The maximum number of temporaries available for displaced instructions. */
+#define DISPLACED_TEMPS 16
+/* The maximum number of modified instructions generated for one single-stepped
+ instruction, including the breakpoint (usually at the end of the instruction
+ sequence) and any scratch words, etc. */
+#define DISPLACED_MODIFIED_INSNS 8
+
+struct displaced_step_closure
+{
+ ULONGEST tmp[DISPLACED_TEMPS];
+ int rd;
+ int wrote_to_pc;
+ union
+ {
+ struct
+ {
+ int xfersize;
+ int rn; /* Writeback register. */
+ unsigned int immed : 1; /* Offset is immediate. */
+ unsigned int writeback : 1; /* Perform base-register writeback. */
+ unsigned int restore_r4 : 1; /* Used r4 as scratch. */
+ } ldst;
+
+ struct
+ {
+ unsigned long dest;
+ unsigned int link : 1;
+ unsigned int exchange : 1;
+ unsigned int cond : 4;
+ } branch;
+
+ struct
+ {
+ unsigned int regmask;
+ int rn;
+ CORE_ADDR xfer_addr;
+ unsigned int load : 1;
+ unsigned int user : 1;
+ unsigned int increment : 1;
+ unsigned int before : 1;
+ unsigned int writeback : 1;
+ unsigned int cond : 4;
+ } block;
+
+ struct
+ {
+ unsigned int immed : 1;
+ } preload;
+
+ struct
+ {
+ /* If non-NULL, override generic SVC handling (e.g. for a particular
+ OS). */
+ int (*copy_svc_os) (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc);
+ } svc;
+ } u;
+ unsigned long modinsn[DISPLACED_MODIFIED_INSNS];
+ int numinsns;
+ CORE_ADDR insn_addr;
+ CORE_ADDR scratch_base;
+ void (*cleanup) (struct gdbarch *, struct regcache *,
+ struct displaced_step_closure *);
+};
+
+/* Values for the WRITE_PC argument to displaced_write_reg. If the register
+ write may write to the PC, specifies the way the CPSR T bit, etc. is
+ modified by the instruction. */
+
+enum pc_write_style
+{
+ BRANCH_WRITE_PC,
+ BX_WRITE_PC,
+ LOAD_WRITE_PC,
+ ALU_WRITE_PC,
+ CANNOT_WRITE_PC
+};
+
+extern void
+ arm_process_displaced_insn (struct gdbarch *gdbarch, uint32_t insn,
+ CORE_ADDR from, CORE_ADDR to,
+ struct regcache *regs,
+ struct displaced_step_closure *dsc);
+extern void
+ arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from,
+ CORE_ADDR to, struct displaced_step_closure *dsc);
+extern ULONGEST
+ displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno);
+extern void
+ displaced_write_reg (struct regcache *regs,
+ struct displaced_step_closure *dsc, int regno,
+ ULONGEST val, enum pc_write_style write_pc);
CORE_ADDR arm_skip_stub (struct frame_info *, CORE_ADDR);
CORE_ADDR arm_get_next_pc (struct frame_info *, CORE_ADDR);
int arm_software_single_step (struct frame_info *);
+extern struct displaced_step_closure *
+ arm_displaced_step_copy_insn (struct gdbarch *, CORE_ADDR, CORE_ADDR,
+ struct regcache *);
+extern void arm_displaced_step_fixup (struct gdbarch *,
+ struct displaced_step_closure *,
+ CORE_ADDR, CORE_ADDR, struct regcache *);
+
/* Functions exported from armbsd-tdep.h. */
/* Return the appropriate register set for the core section identified
[-- Attachment #3: fsf-displaced-stepping-always-5.diff --]
[-- Type: text/x-patch, Size: 2500 bytes --]
--- .pc/displaced-stepping-always/gdb/infrun.c 2009-07-30 15:33:13.000000000 -0700
+++ gdb/infrun.c 2009-07-30 15:33:31.000000000 -0700
@@ -964,6 +964,7 @@ displaced_step_fixup (ptid_t event_ptid,
struct displaced_step_request *head;
ptid_t ptid;
struct regcache *regcache;
+ struct gdbarch *gdbarch;
CORE_ADDR actual_pc;
head = displaced_step_request_queue;
@@ -985,9 +986,11 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_prepare (ptid);
+ gdbarch = get_regcache_arch (regcache);
+
if (debug_displaced)
{
- struct gdbarch *gdbarch = get_regcache_arch (regcache);
+ CORE_ADDR actual_pc = regcache_read_pc (regcache);
gdb_byte buf[4];
fprintf_unfiltered (gdb_stdlog, "displaced: run %s: ",
@@ -996,7 +999,10 @@ displaced_step_fixup (ptid_t event_ptid,
displaced_step_dump_bytes (gdb_stdlog, buf, sizeof (buf));
}
- target_resume (ptid, 1, TARGET_SIGNAL_0);
+ if (gdbarch_software_single_step_p (gdbarch))
+ target_resume (ptid, 0, TARGET_SIGNAL_0);
+ else
+ target_resume (ptid, 1, TARGET_SIGNAL_0);
/* Done, we're stepping a thread. */
break;
@@ -1105,15 +1111,19 @@ maybe_software_singlestep (struct gdbarc
{
int hw_step = 1;
- if (gdbarch_software_single_step_p (gdbarch)
- && gdbarch_software_single_step (gdbarch, get_current_frame ()))
+ if (gdbarch_software_single_step_p (gdbarch))
{
- hw_step = 0;
- /* Do not pull these breakpoints until after a `wait' in
- `wait_for_inferior' */
- singlestep_breakpoints_inserted_p = 1;
- singlestep_ptid = inferior_ptid;
- singlestep_pc = pc;
+ if (use_displaced_stepping (gdbarch))
+ hw_step = 0;
+ else if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
+ {
+ hw_step = 0;
+ /* Do not pull these breakpoints until after a `wait' in
+ `wait_for_inferior' */
+ singlestep_breakpoints_inserted_p = 1;
+ singlestep_ptid = inferior_ptid;
+ singlestep_pc = pc;
+ }
}
return hw_step;
}
@@ -1179,7 +1189,8 @@ a command like `return' or `jump' to con
comments in the handle_inferior event for dealing with 'random
signals' explain what we do instead. */
if (use_displaced_stepping (gdbarch)
- && tp->trap_expected
+ && (tp->trap_expected
+ || (step && gdbarch_software_single_step_p (gdbarch)))
&& sig == TARGET_SIGNAL_0)
{
if (!displaced_step_prepare (inferior_ptid))
^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux
2009-07-31 11:43 ` Julian Brown
@ 2009-09-24 19:35 ` Ulrich Weigand
2009-09-27 21:47 ` [rfc] Fix PowerPC displaced stepping regression Ulrich Weigand
0 siblings, 1 reply; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-24 19:35 UTC (permalink / raw)
To: Julian Brown; +Cc: Julian Brown, gdb-patches, Pedro Alves, Daniel Jacobowitz
Julian Brown wrote:
> --- .pc/displaced-stepping-always/gdb/infrun.c 2009-07-30 15:33:13.000000000 -0700
> +++ gdb/infrun.c 2009-07-30 15:33:31.000000000 -0700
> @@ -1105,15 +1111,19 @@ maybe_software_singlestep (struct gdbarc
> {
> int hw_step = 1;
>
> - if (gdbarch_software_single_step_p (gdbarch)
> - && gdbarch_software_single_step (gdbarch, get_current_frame ()))
> + if (gdbarch_software_single_step_p (gdbarch))
> {
> - hw_step = 0;
> - /* Do not pull these breakpoints until after a `wait' in
> - `wait_for_inferior' */
> - singlestep_breakpoints_inserted_p = 1;
> - singlestep_ptid = inferior_ptid;
> - singlestep_pc = pc;
> + if (use_displaced_stepping (gdbarch))
> + hw_step = 0;
> + else if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
> + {
> + hw_step = 0;
> + /* Do not pull these breakpoints until after a `wait' in
> + `wait_for_inferior' */
> + singlestep_breakpoints_inserted_p = 1;
> + singlestep_ptid = inferior_ptid;
> + singlestep_pc = pc;
> + }
> }
> return hw_step;
> }
It seems this change broke displaced stepping on PowerPC.
The problem is that on PowerPC, we do have a gdbarch_software_single_step
routine (ppc_deal_with_atomic_sequence), but this is only used in very
specific circumstances. Usually, it returns zero and lets GDB use hardware
single stepping.
We also have a displaced stepping implementation, which assumes GDB will
use hardware single-stepping to step over the displaced copy (in particular,
the gdbarch_software_single_step routine should always return 0 when
looking at the displaced copy).
However, with the patch, GDB will simply always use "continue" to run
the displaced copy, which generally breaks.
I'm not sure I understand the rationale behind these changes to the
displaced stepping logic in infrun.c in the first place. Why is
everything conditioned on gdbarch_software_single_step_p, which just
says whether or not the architecture has installed a single-stepping
routine -- but this alone doesn't say whether software stepping is
actually needed in any given situation ...
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* [rfc] Fix PowerPC displaced stepping regression
2009-09-24 19:35 ` Ulrich Weigand
@ 2009-09-27 21:47 ` Ulrich Weigand
2009-09-28 16:57 ` Pedro Alves
2009-09-28 19:41 ` Pedro Alves
0 siblings, 2 replies; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-27 21:47 UTC (permalink / raw)
To: gdb-patches; +Cc: Julian Brown, Pedro Alves, Daniel Jacobowitz
I wrote:
> It seems this change broke displaced stepping on PowerPC.
>
> I'm not sure I understand the rationale behind these changes to the
> displaced stepping logic in infrun.c in the first place. Why is
> everything conditioned on gdbarch_software_single_step_p, which just
> says whether or not the architecture has installed a single-stepping
> routine -- but this alone doesn't say whether software stepping is
> actually needed in any given situation ...
OK, it seems there are two separate changes:
- In non-stop mode, we never want to use software single-step as
common code does not support this in multiple threads at once.
- On platforms with no hardware single-step available, GDB common
code should not use "step" but "continue" to run displaced copies.
The first change does make sense, also on PowerPC. It is in fact
the second change that is problematic, as it would force PowerPC
to implement a much more complex displaced stepping logic just to
avoid using hardware single-stepping the displaced copies .. which
there is no need for in the first place.
The following patch keeps the first change, but makes the second
change conditional on a new gdbarch callback instead of simply
checking for gdbarch_software_single_step_p. This allows PowerPC
to say that even though it has installed a SW single-step routine
to handle some specific corner cases, it still wants to use HW
stepping for displaced copies. The default is such that everything
should be unchanged for the ARM case.
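To make the intended decision logic concrete, here is a small standalone sketch (mock types and names, for illustration only -- not GDB's real structures):

```c
#include <assert.h>

/* Mock stand-ins for GDB's types, just to show the callback logic.  */
struct gdbarch
{
  int have_sw_single_step;   /* i.e. gdbarch_software_single_step_p ()  */
};
struct displaced_step_closure { int dummy; };

/* Default: hardware-step the displaced copy only on targets that have
   no software single-step routine installed (mirrors the patch's
   default_displaced_step_hw_singlestep).  */
static int
default_hw_singlestep (struct gdbarch *gdbarch,
                       struct displaced_step_closure *closure)
{
  (void) closure;
  return !gdbarch->have_sw_single_step;
}

/* PowerPC-style override: always hardware-step the displaced copy,
   even though a SW single-step routine exists for corner cases.  */
static int
ppc_hw_singlestep (struct gdbarch *gdbarch,
                   struct displaced_step_closure *closure)
{
  (void) gdbarch;
  (void) closure;
  return 1;
}
```

With this, an ARM-like target (SW stepping installed, default callback) gets "continue", while a PowerPC-like target gets "step" despite also installing a SW stepping routine.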
Tested on s390(x)-linux and ppc(64)-linux with no regressions,
fixes all non-stop related test case failures.
Does this look reasonable?
Bye,
Ulrich
ChangeLog:
* gdbarch.sh (displaced_step_hw_singlestep): New callback.
* gdbarch.c, gdbarch.h: Regenerate.
* arch-utils.c (default_displaced_step_hw_singlestep): New function.
* arch-utils.h (default_displaced_step_hw_singlestep): Add prototype.
* ppc-linux-tdep.c (ppc_displaced_step_hw_singlestep): New function.
(rs6000_gdbarch_init): Install it.
* infrun.c (displaced_step_fixup): Use new callback to determine
whether to "step" or "continue" displaced copy.
(resume): Likewise. Do not call maybe_software_singlestep
for displaced stepping.
(maybe_software_singlestep): Do not handle displaced stepping.
Index: gdb/arch-utils.c
===================================================================
RCS file: /cvs/src/src/gdb/arch-utils.c,v
retrieving revision 1.182
diff -c -p -r1.182 arch-utils.c
*** gdb/arch-utils.c 31 Jul 2009 14:39:11 -0000 1.182
--- gdb/arch-utils.c 27 Sep 2009 21:06:05 -0000
*************** simple_displaced_step_free_closure (stru
*** 67,72 ****
--- 67,78 ----
xfree (closure);
}
+ int
+ default_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ struct displaced_step_closure *closure)
+ {
+ return !gdbarch_software_single_step_p (gdbarch);
+ }
CORE_ADDR
displaced_step_at_entry_point (struct gdbarch *gdbarch)
Index: gdb/arch-utils.h
===================================================================
RCS file: /cvs/src/src/gdb/arch-utils.h,v
retrieving revision 1.104
diff -c -p -r1.104 arch-utils.h
*** gdb/arch-utils.h 2 Jul 2009 17:25:52 -0000 1.104
--- gdb/arch-utils.h 27 Sep 2009 21:06:05 -0000
*************** extern void
*** 49,54 ****
--- 49,59 ----
simple_displaced_step_free_closure (struct gdbarch *gdbarch,
struct displaced_step_closure *closure);
+ /* Default implementation of gdbarch_displaced_step_hw_singlestep. */
+ extern int
+ default_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ struct displaced_step_closure *closure);
+
/* Possible value for gdbarch_displaced_step_location:
Place displaced instructions at the program's entry point,
leaving space for inferior function call return breakpoints. */
Index: gdb/gdbarch.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbarch.c,v
retrieving revision 1.454
diff -c -p -r1.454 gdbarch.c
*** gdb/gdbarch.c 21 Sep 2009 05:52:05 -0000 1.454
--- gdb/gdbarch.c 27 Sep 2009 21:06:06 -0000
*************** struct gdbarch
*** 232,237 ****
--- 232,238 ----
gdbarch_skip_permanent_breakpoint_ftype *skip_permanent_breakpoint;
ULONGEST max_insn_length;
gdbarch_displaced_step_copy_insn_ftype *displaced_step_copy_insn;
+ gdbarch_displaced_step_hw_singlestep_ftype *displaced_step_hw_singlestep;
gdbarch_displaced_step_fixup_ftype *displaced_step_fixup;
gdbarch_displaced_step_free_closure_ftype *displaced_step_free_closure;
gdbarch_displaced_step_location_ftype *displaced_step_location;
*************** struct gdbarch startup_gdbarch =
*** 371,376 ****
--- 372,378 ----
0, /* skip_permanent_breakpoint */
0, /* max_insn_length */
0, /* displaced_step_copy_insn */
+ default_displaced_step_hw_singlestep, /* displaced_step_hw_singlestep */
0, /* displaced_step_fixup */
NULL, /* displaced_step_free_closure */
NULL, /* displaced_step_location */
*************** gdbarch_alloc (const struct gdbarch_info
*** 464,469 ****
--- 466,472 ----
gdbarch->elf_make_msymbol_special = default_elf_make_msymbol_special;
gdbarch->coff_make_msymbol_special = default_coff_make_msymbol_special;
gdbarch->register_reggroup_p = default_register_reggroup_p;
+ gdbarch->displaced_step_hw_singlestep = default_displaced_step_hw_singlestep;
gdbarch->displaced_step_fixup = NULL;
gdbarch->displaced_step_free_closure = NULL;
gdbarch->displaced_step_location = NULL;
*************** verify_gdbarch (struct gdbarch *gdbarch)
*** 627,632 ****
--- 630,636 ----
/* Skip verify of skip_permanent_breakpoint, has predicate */
/* Skip verify of max_insn_length, has predicate */
/* Skip verify of displaced_step_copy_insn, has predicate */
+ /* Skip verify of displaced_step_hw_singlestep, invalid_p == 0 */
/* Skip verify of displaced_step_fixup, has predicate */
if ((! gdbarch->displaced_step_free_closure) != (! gdbarch->displaced_step_copy_insn))
fprintf_unfiltered (log, "\n\tdisplaced_step_free_closure");
*************** gdbarch_dump (struct gdbarch *gdbarch, s
*** 791,796 ****
--- 795,803 ----
"gdbarch_dump: displaced_step_free_closure = <%s>\n",
host_address_to_string (gdbarch->displaced_step_free_closure));
fprintf_unfiltered (file,
+ "gdbarch_dump: displaced_step_hw_singlestep = <%s>\n",
+ host_address_to_string (gdbarch->displaced_step_hw_singlestep));
+ fprintf_unfiltered (file,
"gdbarch_dump: displaced_step_location = <%s>\n",
host_address_to_string (gdbarch->displaced_step_location));
fprintf_unfiltered (file,
*************** set_gdbarch_displaced_step_copy_insn (st
*** 3145,3150 ****
--- 3152,3174 ----
}
int
+ gdbarch_displaced_step_hw_singlestep (struct gdbarch *gdbarch, struct displaced_step_closure *closure)
+ {
+ gdb_assert (gdbarch != NULL);
+ gdb_assert (gdbarch->displaced_step_hw_singlestep != NULL);
+ if (gdbarch_debug >= 2)
+ fprintf_unfiltered (gdb_stdlog, "gdbarch_displaced_step_hw_singlestep called\n");
+ return gdbarch->displaced_step_hw_singlestep (gdbarch, closure);
+ }
+
+ void
+ set_gdbarch_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ gdbarch_displaced_step_hw_singlestep_ftype displaced_step_hw_singlestep)
+ {
+ gdbarch->displaced_step_hw_singlestep = displaced_step_hw_singlestep;
+ }
+
+ int
gdbarch_displaced_step_fixup_p (struct gdbarch *gdbarch)
{
gdb_assert (gdbarch != NULL);
Index: gdb/gdbarch.h
===================================================================
RCS file: /cvs/src/src/gdb/gdbarch.h,v
retrieving revision 1.404
diff -c -p -r1.404 gdbarch.h
*** gdb/gdbarch.h 21 Sep 2009 05:52:06 -0000 1.404
--- gdb/gdbarch.h 27 Sep 2009 21:06:06 -0000
*************** typedef struct displaced_step_closure *
*** 734,739 ****
--- 734,753 ----
extern struct displaced_step_closure * gdbarch_displaced_step_copy_insn (struct gdbarch *gdbarch, CORE_ADDR from, CORE_ADDR to, struct regcache *regs);
extern void set_gdbarch_displaced_step_copy_insn (struct gdbarch *gdbarch, gdbarch_displaced_step_copy_insn_ftype *displaced_step_copy_insn);
+ /* Return true if GDB should use hardware single-stepping to execute
+ the the displaced instruction identified by CLOSURE. If false,
+ GDB will simply restart execution at the displaced instruction
+ location, and it is up to the target to ensure GDB will receive
+ control again (e.g. by placing a software breakpoint instruction
+ into the displaced instruction buffer).
+
+ The default implementation returns false on all targets that
+ provide a gdbarch_software_single_step routine, and true otherwise. */
+
+ typedef int (gdbarch_displaced_step_hw_singlestep_ftype) (struct gdbarch *gdbarch, struct displaced_step_closure *closure);
+ extern int gdbarch_displaced_step_hw_singlestep (struct gdbarch *gdbarch, struct displaced_step_closure *closure);
+ extern void set_gdbarch_displaced_step_hw_singlestep (struct gdbarch *gdbarch, gdbarch_displaced_step_hw_singlestep_ftype *displaced_step_hw_singlestep);
+
/* Fix up the state resulting from successfully single-stepping a
displaced instruction, to give the result we would have gotten from
stepping the instruction in its original location.
Index: gdb/gdbarch.sh
===================================================================
RCS file: /cvs/src/src/gdb/gdbarch.sh,v
retrieving revision 1.497
diff -c -p -r1.497 gdbarch.sh
*** gdb/gdbarch.sh 21 Sep 2009 05:52:05 -0000 1.497
--- gdb/gdbarch.sh 27 Sep 2009 21:06:07 -0000
*************** V:ULONGEST:max_insn_length:::0:0
*** 654,659 ****
--- 654,670 ----
# here.
M:struct displaced_step_closure *:displaced_step_copy_insn:CORE_ADDR from, CORE_ADDR to, struct regcache *regs:from, to, regs
+ # Return true if GDB should use hardware single-stepping to execute
+ # the the displaced instruction identified by CLOSURE. If false,
+ # GDB will simply restart execution at the displaced instruction
+ # location, and it is up to the target to ensure GDB will receive
+ # control again (e.g. by placing a software breakpoint instruction
+ # into the displaced instruction buffer).
+ #
+ # The default implementation returns false on all targets that
+ # provide a gdbarch_software_single_step routine, and true otherwise.
+ m:int:displaced_step_hw_singlestep:struct displaced_step_closure *closure:closure::default_displaced_step_hw_singlestep::0
+
# Fix up the state resulting from successfully single-stepping a
# displaced instruction, to give the result we would have gotten from
# stepping the instruction in its original location.
Index: gdb/infrun.c
===================================================================
RCS file: /cvs/src/src/gdb/infrun.c,v
retrieving revision 1.409
diff -c -p -r1.409 infrun.c
*** gdb/infrun.c 15 Sep 2009 03:30:06 -0000 1.409
--- gdb/infrun.c 27 Sep 2009 21:06:07 -0000
*************** displaced_step_fixup (ptid_t event_ptid,
*** 1002,1011 ****
displaced_step_dump_bytes (gdb_stdlog, buf, sizeof (buf));
}
! if (gdbarch_software_single_step_p (gdbarch))
! target_resume (ptid, 0, TARGET_SIGNAL_0);
! else
target_resume (ptid, 1, TARGET_SIGNAL_0);
/* Done, we're stepping a thread. */
break;
--- 1002,1012 ----
displaced_step_dump_bytes (gdb_stdlog, buf, sizeof (buf));
}
! if (gdbarch_displaced_step_hw_singlestep
! (gdbarch, displaced_step_closure))
target_resume (ptid, 1, TARGET_SIGNAL_0);
+ else
+ target_resume (ptid, 0, TARGET_SIGNAL_0);
/* Done, we're stepping a thread. */
break;
*************** maybe_software_singlestep (struct gdbarc
*** 1114,1132 ****
{
int hw_step = 1;
! if (gdbarch_software_single_step_p (gdbarch))
{
! if (use_displaced_stepping (gdbarch))
! hw_step = 0;
! else if (gdbarch_software_single_step (gdbarch, get_current_frame ()))
! {
! hw_step = 0;
! /* Do not pull these breakpoints until after a `wait' in
! `wait_for_inferior' */
! singlestep_breakpoints_inserted_p = 1;
! singlestep_ptid = inferior_ptid;
! singlestep_pc = pc;
! }
}
return hw_step;
}
--- 1115,1129 ----
{
int hw_step = 1;
! if (gdbarch_software_single_step_p (gdbarch)
! && gdbarch_software_single_step (gdbarch, get_current_frame ()))
{
! hw_step = 0;
! /* Do not pull these breakpoints until after a `wait' in
! `wait_for_inferior' */
! singlestep_breakpoints_inserted_p = 1;
! singlestep_ptid = inferior_ptid;
! singlestep_pc = pc;
}
return hw_step;
}
*************** a command like `return' or `jump' to con
*** 1208,1217 ****
discard_cleanups (old_cleanups);
return;
}
}
/* Do we need to do it the hard way, w/temp breakpoints? */
! if (step)
step = maybe_software_singlestep (gdbarch, pc);
if (should_resume)
--- 1205,1217 ----
discard_cleanups (old_cleanups);
return;
}
+
+ step = gdbarch_displaced_step_hw_singlestep
+ (gdbarch, displaced_step_closure);
}
/* Do we need to do it the hard way, w/temp breakpoints? */
! else if (step)
step = maybe_software_singlestep (gdbarch, pc);
if (should_resume)
Index: gdb/rs6000-tdep.c
===================================================================
RCS file: /cvs/src/src/gdb/rs6000-tdep.c,v
retrieving revision 1.337
diff -c -p -r1.337 rs6000-tdep.c
*** gdb/rs6000-tdep.c 18 Sep 2009 15:48:23 -0000 1.337
--- gdb/rs6000-tdep.c 27 Sep 2009 21:06:08 -0000
*************** ppc_displaced_step_fixup (struct gdbarch
*** 1058,1063 ****
--- 1058,1072 ----
from + offset);
}
+ /* Always use hardware single-stepping to execute the
+ displaced instruction. */
+ static int
+ ppc_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
+ struct displaced_step_closure *closure)
+ {
+ return 1;
+ }
+
/* Instruction masks used during single-stepping of atomic sequences. */
#define LWARX_MASK 0xfc0007fe
#define LWARX_INSTRUCTION 0x7c000028
*************** rs6000_gdbarch_init (struct gdbarch_info
*** 3898,3903 ****
--- 3907,3914 ----
/* Setup displaced stepping. */
set_gdbarch_displaced_step_copy_insn (gdbarch,
simple_displaced_step_copy_insn);
+ set_gdbarch_displaced_step_hw_singlestep (gdbarch,
+ ppc_displaced_step_hw_singlestep);
set_gdbarch_displaced_step_fixup (gdbarch, ppc_displaced_step_fixup);
set_gdbarch_displaced_step_free_closure (gdbarch,
simple_displaced_step_free_closure);
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-27 21:47 ` [rfc] Fix PowerPC displaced stepping regression Ulrich Weigand
@ 2009-09-28 16:57 ` Pedro Alves
2009-09-28 17:12 ` Ulrich Weigand
2009-09-28 17:27 ` Ulrich Weigand
2009-09-28 19:41 ` Pedro Alves
1 sibling, 2 replies; 24+ messages in thread
From: Pedro Alves @ 2009-09-28 16:57 UTC (permalink / raw)
To: gdb-patches; +Cc: Ulrich Weigand, Julian Brown, Daniel Jacobowitz
On Sunday 27 September 2009 22:47:13, Ulrich Weigand wrote:
> I wrote:
> > It seems this change broke displaced stepping on PowerPC.
> >
> > I'm not sure I understand the rationale behind these changes to the
> > displaced stepping logic in infrun.c in the first place. Why is
> > everything conditioned on gdbarch_software_single_step_p, which just
> > says whether or not the architecture has installed a single-stepping
> > routine -- but this alone doesn't say whether software stepping is
> > actually needed in any given situation ...
>
> OK, it seems there are two separate changes:
>
> - In non-stop mode, we never want to use software single-step as
> common code does not support this in multiple threads at once.
Right. Shouldn't we switch this particular predicate to
check the non_stop global instead?
> - On platforms with no hardware single-step available, GDB common
> code should not use "step" but "continue" to run displaced copies.
>
> The first change does make sense, also on PowerPC. It is in fact
> the second change that is problematic, as it would force PowerPC
> to implement a much more complex displaced stepping logic just to
> avoid using hardware single-stepping on the displaced copies ... which
> there is no need for in the first place.
>
> The following patch keeps the first change, but makes the second
> change conditional on a new gdbarch callback instead of simply
> checking for gdbarch_software_single_step_p. This allows PowerPC
> to say that even though it has installed a SW single-step routine
> to handle some specific corner cases, it still wants to use HW
> stepping for displaced copies. The default is such that everything
> should be unchanged for the ARM case.
Did you consider making the gdbarch_displaced_step_copy_insn
callback itself return that it expects the target to be
continued instead of stepped? I see that it's
arm-tdep.c:arm_displaced_init_closure itself that inserts a breakpoint
after the relocated instructions. An original insn can be expanded
to more than one instruction at displaced_step_copy time, so it
can be useful to say "continue" instead of several single-steps
even if the target supports HW step, and this addresses the ppc/arm
issue as well.
So, displaced_step_prepare would propagate the "continue" vs
"step" up, and all its callers would do the old logic:
if (step)
{
if (gdbarch_software_single_step_p (gdbarch))
target_resume (ptid, 0, TARGET_SIGNAL_0);
else
target_resume (ptid, 1, TARGET_SIGNAL_0);
}
else
target_resume (ptid, 0, TARGET_SIGNAL_0);
... that is, we'd remove the checks for use_displaced_stepping from
maybe_software_singlestep, and use something like the
above in displaced_step_fixup, where we issue the target_resume
(with `step' being what gdbarch_displaced_step_copy_insn reported
it wanted).
--
Pedro Alves
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 16:57 ` Pedro Alves
@ 2009-09-28 17:12 ` Ulrich Weigand
2009-09-28 17:31 ` Pedro Alves
2009-09-28 17:27 ` Ulrich Weigand
1 sibling, 1 reply; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-28 17:12 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches, Julian Brown, Daniel Jacobowitz
Pedro Alves wrote:
> On Sunday 27 September 2009 22:47:13, Ulrich Weigand wrote:
> > - In non-stop mode, we never want to use software single-step as
> > common code does not support this in multiple threads at once.
>
> Right. Shouldn't we switch this particular predicate to
> check the non_stop global instead?
I'm not sure which "particular predicate" you're referring to, sorry ...
The check currently reads:
if (use_displaced_stepping (gdbarch)
&& (tp->trap_expected
|| (step && gdbarch_software_single_step_p (gdbarch)))
&& sig == TARGET_SIGNAL_0)
that is, if we'd otherwise be about to issue a single step (potentially)
treat it like stepping over a breakpoint. At what point would you
suggest to check for non_stop?
> Did you consider making the gdbarch_displaced_step_copy_insn
> callback itself return that it expects the target to be
> continued instead of stepped?
Yes, but this would have required changes to the existing gdbarch
interface that would have meant updating all existing users; and
I wanted to produce a patch that doesn't touch any platform I
cannot test at this point ...
In any case, the two interfaces should be pretty much identical:
a target can simply set a flag in its "closure" and return this
flag from the displaced_step_hw_singlestep routine. That's why
I'm passing the closure in, even though PPC doesn't need it ...
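A tiny sketch of that flag-in-the-closure pattern (illustrative names, not the real arm-tdep.c code):

```c
#include <assert.h>

/* The arch decides at copy_insn time whether the scratch pad must be
   continued (e.g. because it ends in an explicit breakpoint insn) and
   records that in its closure; the callback just reads the flag back.  */
struct displaced_step_closure
{
  int wants_continue;   /* set by the arch's copy_insn routine  */
};

static int
example_displaced_step_hw_singlestep (struct displaced_step_closure *closure)
{
  return !closure->wants_continue;
}
```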
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 16:57 ` Pedro Alves
2009-09-28 17:12 ` Ulrich Weigand
@ 2009-09-28 17:27 ` Ulrich Weigand
2009-09-28 17:39 ` Pedro Alves
1 sibling, 1 reply; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-28 17:27 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches, Julian Brown, Daniel Jacobowitz
Pedro Alves wrote:
Sorry, I missed one additional point:
> So, displaced_step_prepare would propagate the "continue" vs
> "step" up, and all its callers would do the old logic:
>
> if (step)
> {
> if (gdbarch_software_single_step_p (gdbarch))
> target_resume (ptid, 0, TARGET_SIGNAL_0);
> else
> target_resume (ptid, 1, TARGET_SIGNAL_0);
> }
> else
> target_resume (ptid, 0, TARGET_SIGNAL_0);
>
> ... that is, we'd remove the checks for use_displaced_stepping from
> maybe_software_singlestep, and use something like the
> above in displaced_step_fixup, where we issue the target_resume
> (with `step' being what gdbarch_displaced_step_copy_insn reported
> it wanted).
Maybe I misunderstood your point here, but I don't think we can
actually do SW single-step on the displaced copy (using the normal
SW single-step mechanism). The way SW single-step usually works
is to place breakpoints at all potential branch targets. But if
we have a displaced PC-relative branch, for example, the branch
target may not even point to addressable memory, so we cannot put
breakpoints there.
It seems best to never call maybe_software_singlestep on displaced
copies, like my patch does. If the target wants to place breakpoint
instructions somewhere in there, it can do so during copy_insn.
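The point can be illustrated with made-up addresses: a PC-relative branch executed verbatim from the scratch pad resolves against the new PC, so a breakpoint placed at the target computed there would land nowhere near the real destination:

```c
#include <assert.h>
#include <stdint.h>

/* A PC-relative branch resolves against the PC it executes at, not
   the PC it was originally assembled for.  */
static uint32_t
branch_target (uint32_t pc, int32_t offset)
{
  return (uint32_t) (pc + offset);
}
```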
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 17:12 ` Ulrich Weigand
@ 2009-09-28 17:31 ` Pedro Alves
2009-09-28 17:39 ` Ulrich Weigand
0 siblings, 1 reply; 24+ messages in thread
From: Pedro Alves @ 2009-09-28 17:31 UTC (permalink / raw)
To: gdb-patches; +Cc: Ulrich Weigand, Julian Brown, Daniel Jacobowitz
On Monday 28 September 2009 18:12:44, Ulrich Weigand wrote:
> Pedro Alves wrote:
>
> > On Sunday 27 September 2009 22:47:13, Ulrich Weigand wrote:
> > > - In non-stop mode, we never want to use software single-step as
> > > common code does not support this in multiple threads at once.
> >
> > Right. Shouldn't we switch this particular predicate to
> > check the non_stop global instead?
>
> I'm not sure which "particular predicate" you're referring to, sorry ...
>
> The check currently reads:
>
> if (use_displaced_stepping (gdbarch)
> && (tp->trap_expected
> || (step && gdbarch_software_single_step_p (gdbarch)))
> && sig == TARGET_SIGNAL_0)
>
> that is, if we'd otherwise be about to issue a single step (potentially)
> treat it like stepping over a breakpoint. At what point would you
> suggest to check for non_stop?
At the points where we decide to use displaced stepping because
software single-stepping doesn't work with multiple pending
simultaneous requests. So, adding a non_stop check there in
front of software_single_step_p,
if (use_displaced_stepping (gdbarch)
&& (tp->trap_expected
- || (step && gdbarch_software_single_step_p (gdbarch)))
+ || (non_stop && step && gdbarch_software_single_step_p (gdbarch)))
... and in maybe_software_singlestep:
- if (use_displaced_stepping (gdbarch))
- if (non_stop)
hw_step = 0;
(or make resume clear `step' if "non_stop && step && gdbarch_software_single_step_p (gdbarch))"
so that we'd not reach maybe_software_singlestep at all.)
But, let's ignore this. The only benefit would be for
all-stop + displaced stepping=on to not trigger displaced
stepping all the way.
> > Did you consider making the gdbarch_displaced_step_copy_insn
> > callback itself return that it expects the target to be
> > continued instead of stepped?
>
> Yes, but this would have required changes to the existing gdbarch
> interface that would have meant updating all existing users; and
> I wanted to produce a patch that doesn't touch any platform I
> cannot test at this point ...
>
> In any case, the two interfaces should be pretty much identical:
> a target can simply set a flag in its "closure" and return this
> flag from the displaced_step_hw_singlestep routine. That's why
> I'm passing the closure in, even though PPC doesn't need it ...
True. I like that!
--
Pedro Alves
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 17:31 ` Pedro Alves
@ 2009-09-28 17:39 ` Ulrich Weigand
0 siblings, 0 replies; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-28 17:39 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches, Julian Brown, Daniel Jacobowitz
Pedro Alves wrote:
> At the points where we decide to use displaced stepping because
> software single-stepping doesn't work with multiple pending
> simultaneous requests. So, adding a non_stop check there in
> front of software_single_step_p,
>
> if (use_displaced_stepping (gdbarch)
> && (tp->trap_expected
> - || (step && gdbarch_software_single_step_p (gdbarch)))
> + || (non_stop && step && gdbarch_software_single_step_p (gdbarch)))
>
>
> ... and in maybe_software_singlestep:
>
> - if (use_displaced_stepping (gdbarch))
> - if (non_stop)
> hw_step = 0;
>
> (or make resume clear `step' if "non_stop && step && gdbarch_software_single_step_p (gdbarch))"
> so that we'd not reach maybe_software_singlestep at all.)
Ah, now I've got it, thanks!
> But, let's ignore this. The only benefit would be for
> all-stop + displaced stepping=on to not trigger displaced
> stepping all the way.
Agreed.
> > > Did you consider making the gdbarch_displaced_step_copy_insn
> > > callback itself return that it expects the target to be
> > > continued instead of stepped?
> >
> > Yes, but this would have required changes to the existing gdbarch
> > interface that would have meant updating all existing users; and
> > I wanted to produce a patch that doesn't touch any platform I
> > cannot test at this point ...
> >
> > In any case, the two interfaces should be pretty much identical:
> > a target can simply set a flag in its "closure" and return this
> > flag from the displaced_step_hw_singlestep routine. That's why
> > I'm passing the closure in, even though PPC doesn't need it ...
>
> True. I like that!
OK, good :-)
Thanks,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 17:27 ` Ulrich Weigand
@ 2009-09-28 17:39 ` Pedro Alves
2009-09-28 17:45 ` Ulrich Weigand
0 siblings, 1 reply; 24+ messages in thread
From: Pedro Alves @ 2009-09-28 17:39 UTC (permalink / raw)
To: gdb-patches; +Cc: Ulrich Weigand, Julian Brown, Daniel Jacobowitz
On Monday 28 September 2009 18:27:03, Ulrich Weigand wrote:
> > ... that is, we'd remove the checks for use_displaced_stepping from
> > maybe_software_singlestep, and use something like the
> > above in displaced_step_fixup, where we issue the target_resume
> > (with `step' being what gdbarch_displaced_step_copy_insn reported
> > it wanted).
>
> Maybe I misunderstood your point here, but I don't think we can
> actually do SW single-step on the displaced copy (using the normal
> SW single-step mechanism). The way SW single-step usually works
> is to place breakpoints at all potential branch targets. But if
> we have a displaced PC-relative branch, for example, the branch
> target may not even point to addressable memory, so we cannot put
> breakpoints there.
If you get yourself such an instruction in the buffer, usually you'd
want the branch offset to be adjusted at displaced copy time;
otherwise it seems to me you're already broken. But I did post a
confusing snippet, sorry. All I meant was to have displaced_step_copy
routine to tell infrun.c to call target_resume(continue), instead
of target_resume(step). Your version works for that too.
> It seems best to never call maybe_software_single_step on displaced
> copies, like my patch does. If the target wants to place breakpoint
> instructions somewhere in there, it can do so during copy_insn.
Yes, like arm does.
--
Pedro Alves
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 17:39 ` Pedro Alves
@ 2009-09-28 17:45 ` Ulrich Weigand
2009-09-28 19:07 ` Pedro Alves
0 siblings, 1 reply; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-28 17:45 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches, Julian Brown, Daniel Jacobowitz
Pedro Alves wrote:
> On Monday 28 September 2009 18:27:03, Ulrich Weigand wrote:
> > Maybe I misunderstood your point here, but I don't think we can
> > actually do SW single-step on the displaced copy (using the normal
> > SW single-step mechanism). The way SW single-step usually works
> > is to place breakpoints at all potential branch targets. But if
> > we have a displaced PC-relative branch, for example, the branch
> > target may not even point to addressable memory, so we cannot put
> > breakpoints there.
>
> If you get yourself such an instruction in the buffer, usually you'd
> want the branch offset to be adjusted at displaced copy time;
> otherwise it seems to me you're already broken.
If that's possible. In general, the real branch target may be out of
range relative to the address of the copied instruction for a branch in
the original instruction format ... (You could redirect to some temporary
target in the copy buffer, but at this point you're probably better off
just emulating the whole thing in the first place.)
> But I did post a
> confusing snippet, sorry. All I meant was to have displaced_step_copy
> routine to tell infrun.c to call target_resume(continue), instead
> of target_resume(step). Your version works for that too.
Ah, I see.
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 17:45 ` Ulrich Weigand
@ 2009-09-28 19:07 ` Pedro Alves
0 siblings, 0 replies; 24+ messages in thread
From: Pedro Alves @ 2009-09-28 19:07 UTC (permalink / raw)
To: gdb-patches; +Cc: Ulrich Weigand, Julian Brown, Daniel Jacobowitz
On Monday 28 September 2009 18:45:18, Ulrich Weigand wrote:
> Pedro Alves wrote:
>
> > On Monday 28 September 2009 18:27:03, Ulrich Weigand wrote:
> > > Maybe I misunderstood your point here, but I don't think we can
> > > actually do SW single-step on the displaced copy (using the normal
> > > SW single-step mechanism). The way SW single-step usually works
> > > is to place breakpoints at all potential branch targets. But if
> > > we have a displaced PC-relative branch, for example, the branch
> > > target may not even point to addressable memory, so we cannot put
> > > breakpoints there.
> >
> > If you get yourself such an instruction in the buffer, usually you'd
> > want the branch offset to be adjusted at displaced copy time;
> > otherwise it seems to me you're already broken.
>
> If that's possible. In general, the real branch target may be out of
> range relative to the address of the copied instruction for a branch in
> the original instruction format ... (You could redirect to some temporary
> target in the copy buffer, but at this point you're probably better off
> just emulating the whole thing in the first place.)
Yes, of course. But, the point is that whatever ends up in the
displaced step scratch pad after displaced_step_copy time, be it simply
a copy of the original insn, an adjusted pc-relative instruction, or
a sequence of insns emulating the original insn, _could_ be single-stepped
using software single-stepping. It's the latter case of single-instruction
emulation with more than one insn that is generally more efficient to
execute in one go with a break+continue, irrespective of HW or software
single-stepping being supported.
Anyway, we're both clearly aware of these issues, and getting off topic. :-)
--
Pedro Alves
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-27 21:47 ` [rfc] Fix PowerPC displaced stepping regression Ulrich Weigand
2009-09-28 16:57 ` Pedro Alves
@ 2009-09-28 19:41 ` Pedro Alves
2009-09-29 0:59 ` Ulrich Weigand
1 sibling, 1 reply; 24+ messages in thread
From: Pedro Alves @ 2009-09-28 19:41 UTC (permalink / raw)
To: gdb-patches; +Cc: Ulrich Weigand, Julian Brown, Daniel Jacobowitz
On Sunday 27 September 2009 22:47:13, Ulrich Weigand wrote:
> + # the the displaced instruction identified by CLOSURE. If false,
Double "the".
> + /* Always use hardware single-stepping to execute the
> + displaced instruction. */
> + static int
> + ppc_displaced_step_hw_singlestep (struct gdbarch *gdbarch,
> + struct displaced_step_closure *closure)
> + {
> + return 1;
> + }
> +
Hmmm, does this mean that a breakpoint at the start of an
atomic sequence instruction wouldn't be displaced stepped properly,
as in, you'd trip on the same issue that happens when stepping over
an atomic sequence without displaced stepping?
(If broken, this was already broken before your patch and even
before the regression your patch fixes)
( A nice stress test of the displaced stepping support is to run the
whole testsuite with "set displaced-stepping on". )
I've now read through the patch carefully, and didn't spot
anything wrong. I think this would be safe for 7.0 as well.
--
Pedro Alves
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-28 19:41 ` Pedro Alves
@ 2009-09-29 0:59 ` Ulrich Weigand
2009-09-29 1:36 ` Joel Brobecker
0 siblings, 1 reply; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-29 0:59 UTC (permalink / raw)
To: Pedro Alves; +Cc: gdb-patches, Julian Brown, Daniel Jacobowitz, brobecker
Pedro Alves wrote:
> On Sunday 27 September 2009 22:47:13, Ulrich Weigand wrote:
> > + # the the displaced instruction identified by CLOSURE. If false,
>
> Double "the".
Fixed, thanks!
> Hmmm, does this mean that a breakpoint at the first instruction of an
> atomic sequence wouldn't be displaced-stepped properly,
> as in, you'd trip on the same issue that happens when stepping over
> an atomic sequence without displaced stepping?
Yes, that's true.
> (If broken, this was already broken before your patch and even
> before the regression your patch fixes)
Indeed. For now, I'm OK with restoring the state before the
regression. The new mechanism should allow fixing this particular
corner case as well, I hope, but this will be more involved ...
> ( A nice stress test of the displaced stepping support is to run the
> whole testsuite with "set displaced-stepping on". )
>
> I've now read through the patch carefully, and didn't spot
> anything wrong. I think this would be safe for 7.0 as well.
OK, thanks for the review!
I've now checked the patch into mainline. I'll hold off on
checking it into the branch until Joel has agreed on how we should
handle it ...
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-29 0:59 ` Ulrich Weigand
@ 2009-09-29 1:36 ` Joel Brobecker
2009-09-29 12:54 ` Ulrich Weigand
0 siblings, 1 reply; 24+ messages in thread
From: Joel Brobecker @ 2009-09-29 1:36 UTC (permalink / raw)
To: Ulrich Weigand; +Cc: Pedro Alves, gdb-patches, Julian Brown, Daniel Jacobowitz
> > I've now read through the patch carefully, and didn't spot
> > anything wrong. I think this would be safe for 7.0 as well.
>
> OK, thanks for the review!
>
> I've now checked the patch into mainline. I'll hold off on
> checking it into the branch until Joel has agreed on how we should
> handle it ...
If Pedro thinks it's safe, it's good enough for me. Go ahead.
--
Joel
* Re: [rfc] Fix PowerPC displaced stepping regression
2009-09-29 1:36 ` Joel Brobecker
@ 2009-09-29 12:54 ` Ulrich Weigand
0 siblings, 0 replies; 24+ messages in thread
From: Ulrich Weigand @ 2009-09-29 12:54 UTC (permalink / raw)
To: Joel Brobecker; +Cc: Pedro Alves, gdb-patches, Julian Brown, Daniel Jacobowitz
Joel Brobecker wrote:
> > > I've now read through the patch carefully, and didn't spot
> > > anything wrong. I think this would be safe for 7.0 as well.
> >
> > OK, thanks for the review!
> >
> > I've now checked the patch into mainline. I'll hold off on
> > checking it into the branch until Joel has agreed on how we should
> > handle it ...
>
> If Pedro thinks it's safe, it's good enough for me. Go ahead.
OK, thanks! I've now checked it into the branch.
Bye,
Ulrich
--
Dr. Ulrich Weigand
GNU Toolchain for Linux on System z and Cell BE
Ulrich.Weigand@de.ibm.com
end of thread, other threads:[~2009-09-29 12:54 UTC | newest]
Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-01-20 22:14 [PATCH] Displaced stepping (non-stop debugging) support for ARM Linux Julian Brown
2009-01-21 18:07 ` Pedro Alves
2009-02-02 20:01 ` Daniel Jacobowitz
2009-05-16 18:19 ` Julian Brown
2009-06-09 17:37 ` Daniel Jacobowitz
2009-06-10 14:58 ` Pedro Alves
2009-06-10 15:05 ` Daniel Jacobowitz
2009-07-15 19:16 ` Julian Brown
2009-07-24 2:17 ` Daniel Jacobowitz
2009-07-31 11:43 ` Julian Brown
2009-09-24 19:35 ` Ulrich Weigand
2009-09-27 21:47 ` [rfc] Fix PowerPC displaced stepping regression Ulrich Weigand
2009-09-28 16:57 ` Pedro Alves
2009-09-28 17:12 ` Ulrich Weigand
2009-09-28 17:31 ` Pedro Alves
2009-09-28 17:39 ` Ulrich Weigand
2009-09-28 17:27 ` Ulrich Weigand
2009-09-28 17:39 ` Pedro Alves
2009-09-28 17:45 ` Ulrich Weigand
2009-09-28 19:07 ` Pedro Alves
2009-09-28 19:41 ` Pedro Alves
2009-09-29 0:59 ` Ulrich Weigand
2009-09-29 1:36 ` Joel Brobecker
2009-09-29 12:54 ` Ulrich Weigand