From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: (qmail 932 invoked by alias); 26 Feb 2011 13:53:27 -0000 Received: (qmail 895 invoked by uid 22791); 26 Feb 2011 13:53:23 -0000 X-SWARE-Spam-Status: No, hits=-1.8 required=5.0 tests=AWL,BAYES_00,TW_BL,TW_EG,T_RP_MATCHES_RCVD X-Spam-Check-By: sourceware.org Received: from mail.codesourcery.com (HELO mail.codesourcery.com) (38.113.113.100) by sourceware.org (qpsmtpd/0.43rc1) with ESMTP; Sat, 26 Feb 2011 13:53:12 +0000 Received: (qmail 14162 invoked from network); 26 Feb 2011 13:53:09 -0000 Received: from unknown (HELO ?192.168.0.101?) (yao@127.0.0.2) by mail.codesourcery.com with ESMTPA; 26 Feb 2011 13:53:09 -0000 Message-ID: <4D6905C0.6030804@codesourcery.com> Date: Sat, 26 Feb 2011 14:07:00 -0000 From: Yao Qi User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.13) Gecko/20101208 Lightning/1.0b2 Thunderbird/3.1.7 MIME-Version: 1.0 To: Ulrich Weigand CC: gdb-patches@sourceware.org Subject: Re: [patch 2/3] Displaced stepping for 16-bit Thumb instructions References: <201102251922.p1PJM7bw003601@d06av02.portsmouth.uk.ibm.com> In-Reply-To: <201102251922.p1PJM7bw003601@d06av02.portsmouth.uk.ibm.com> Content-Type: multipart/mixed; boundary="------------080609080300050509030604" X-IsSubscribed: yes Mailing-List: contact gdb-patches-help@sourceware.org; run by ezmlm Precedence: bulk List-Id: List-Subscribe: List-Archive: List-Post: List-Help: , Sender: gdb-patches-owner@sourceware.org X-SW-Source: 2011-02/txt/msg00791.txt.bz2 This is a multi-part message in MIME format. --------------080609080300050509030604 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Content-length: 4456 On 02/26/2011 03:22 AM, Ulrich Weigand wrote: > Yao Qi wrote: > >> This new patch implements what we discussed above. There is a minor >> difference on rule #3. 
"Thumb 32-bit instruction occupies *two* slots >> with flag `Thumb-2'", because we have to choose breakpoint type (thumb >> breakpoint or thumb-2 breakpoint) according to this flag. > > Actually, there's no need to get complicated w.r.t. breakpoints. > The only reason for using a Thumb-2 breakpoint is if we *replace* > an existing 32-bit instruction and don't want to mess up instruction > stream parsing. (E.g. if the breakpoint is under an IT block and > happens to be skipped, instruction execution would continue with > the "second half" of the replaced instruction if we had used just > a regular Thumb breakpoint.) > > However, with displaced stepping, we construct a full instruction > sequence from scratch. In this case, we can just always use a > 16-bit Thumb breakpoint instruction. (In fact, throughout the > instruction sequence we construct, we can freely intermix 16-bit > and 32-bit instructions. We just cannot intermix ARM and Thumb, > of course.) > Hmm, I think you are right. I have fixed it in my new patch. >> + /* Compute pipeline offset: >> + - When executing an ARM instruction, PC reads as the address of the >> + current instruction plus 8. >> + - When executing a Thumb instruction, PC reads as the address of the >> + current instruction plus 4. */ >> + >> + if (displaced_in_arm_mode (regs)) >> + from += 8; >> + else >> + from += 4; >> + >> if (debug_displaced) >> fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n", >> - (unsigned long) from + 8); >> - return (ULONGEST) from + 8; /* Pipeline offset. */ >> + (unsigned long) from); >> + return (ULONGEST) from; /* Pipeline offset. */ > > Just remove the comment from that last line here; the offset is now > handled above. > Fixed. >> dsc->u.ldst.restore_r4 = 1; >> - dsc->modinsn[0] = 0xe92d8000; /* push {pc} */ >> - dsc->modinsn[1] = 0xe8bd0010; /* pop {r4} */ >> - dsc->modinsn[2] = 0xe044400f; /* sub r4, r4, pc. */ >> - dsc->modinsn[3] = 0xe2844008; /* add r4, r4, #8.
*/ >> - dsc->modinsn[4] = 0xe0800004; /* add r0, r0, r4. */ >> + RECORD_ARM_MODE_INSN (0, 0xe92d8000); /* push {pc} */ >> + RECORD_ARM_MODE_INSN (1, 0xe8bd0010); /* pop {r4} */ >> + RECORD_ARM_MODE_INSN (2, 0xe044400f); /* sub r4, r4, pc. */ >> + RECORD_ARM_MODE_INSN (3, 0xe2844008); /* add r4, r4, #8. */ >> + RECORD_ARM_MODE_INSN (4, 0xe0800004); /* add r0, r0, r4. */ >> >> /* As above. */ >> if (immed) >> - dsc->modinsn[5] = (insn & 0xfff00fff) | 0x20000; >> + RECORD_ARM_MODE_INSN (5, ((insn & 0xfff00fff) | 0x20000)); >> else >> - dsc->modinsn[5] = (insn & 0xfff00ff0) | 0x20003; >> - >> - dsc->modinsn[6] = 0x0; /* breakpoint location. */ >> - dsc->modinsn[7] = 0x0; /* scratch space. */ >> + RECORD_ARM_MODE_INSN (5, ((insn & 0xfff00ff0) | 0x20003)); >> >> + RECORD_ARM_MODE_INSN (6, 0x00); /* breakpoint location. */ >> + RECORD_ARM_MODE_INSN (7, 0x00); /* scratch space. */ > > This reminds me: after your latest patch in that area, we do not > actually use any scratch space in the instruction stream any more, > so this could be removed ... > Oh, yes. I'll remove it in another patch. >> + { >> + CORE_ADDR next_pc; >> + if (dsc->insn_mode == ARM) >> + next_pc = dsc->insn_addr + 4; >> + else if (dsc->insn_mode == THUMB) >> + next_pc = dsc->insn_addr + 2; >> + else >> + { >> + struct frame_info *fi = get_current_frame (); >> + enum bfd_endian byte_order_for_code >> + = gdbarch_byte_order_for_code (gdbarch); >> + unsigned short inst1 >> + = read_memory_unsigned_integer (dsc->insn_addr, 2, >> + byte_order_for_code); >> + >> + next_pc = dsc->insn_addr + thumb_insn_size (inst1); >> + } > > Huh? Shouldn't we know this already? See below ... [*] > > > [ In fact, it might be even easier to replace insn_mode with > *two* separate fields: > > * insn_size holds the size (4 or 2) of the *original* insn > * is_thumb is true if the original insn (and thus all > replacement insns) are Thumb instead of ARM ] > OK, two new fields are added to struct displaced_step_closure.
The computation of next_pc is simplified. -- Yao (齐尧) --------------080609080300050509030604 Content-Type: text/x-patch; name="0001-refactor-to-handle-both-32-bit-and-16-bit.patch" Content-Transfer-Encoding: 7bit Content-Disposition: attachment; filename="0001-refactor-to-handle-both-32-bit-and-16-bit.patch" Content-length: 14602 gdb/ * arm-tdep.h (struct displaced_step_closure): New fields insn_size and is_thumb. (RECORD_ARM_MODE_INSN, RECORD_THUMB_MODE_INSN, RECORD_THUMB2_MODE_INSN): New macros. * arm-tdep.c (copy_unmodified): Save modified insns with RECORD_ARM_MODE_INSN. (copy_preload, copy_preload_reg, copy_copro_load_store, copy_b_bl_blx): Likewise. (copy_bx_blx_reg, copy_alu_imm, copy_alu_reg, copy_alu_shifted_reg): Likewise. (copy_extra_ld_st, copy_ldr_str_ldrb_strb, copy_block_xfer): Likewise. (copy_svc, copy_undef, copy_unpred): Likewise. (displaced_read_reg): Handle both ARM and Thumb mode when reading PC. (arm_displaced_init_closure): Handle both 32-bit and 16-bit insns. (arm_displaced_step_fixup): Likewise. * arm-linux-tdep.c (arm_linux_copy_svc): Save modified insns with RECORD_ARM_MODE_INSN. (arm_catch_kernel_helper_return): Likewise. --- gdb/arm-linux-tdep.c | 4 +- gdb/arm-tdep.c | 127 +++++++++++++++++++++++++++++++++---------------- gdb/arm-tdep.h | 30 ++++++++++++ 3 files changed, 117 insertions(+), 44 deletions(-) diff --git a/gdb/arm-linux-tdep.c b/gdb/arm-linux-tdep.c index ff649d6..75a4ea4 100644 --- a/gdb/arm-linux-tdep.c +++ b/gdb/arm-linux-tdep.c @@ -827,7 +827,7 @@ arm_linux_copy_svc (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, Cleanup: if pc lands in scratch space, pc <- insn_addr + 4 else leave pc alone.
*/ - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); dsc->cleanup = &arm_linux_cleanup_svc; /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next @@ -885,7 +885,7 @@ arm_catch_kernel_helper_return (struct gdbarch *gdbarch, CORE_ADDR from, CANNOT_WRITE_PC); write_memory_unsigned_integer (to + 8, 4, byte_order, from); - dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */ + RECORD_ARM_MODE_INSN (0, 0xe59ef004); /* ldr pc, [lr, #4]. */ } /* Linux-specific displaced step instruction copying function. Detects when diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index f0e9435..d1f5d7b 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5106,6 +5106,8 @@ arm_adjust_breakpoint_address (struct gdbarch *gdbarch, CORE_ADDR bpaddr) /* NOP instruction (mov r0, r0). */ #define ARM_NOP 0xe1a00000 +static int displaced_in_arm_mode (struct regcache *regs); + /* Helper for register reads for displaced stepping. In particular, this returns the PC as it would be seen by the instruction at its original location. */ @@ -5117,10 +5119,21 @@ displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno) if (regno == 15) { + /* Compute pipeline offset: + - When executing an ARM instruction, PC reads as the address of the + current instruction plus 8. + - When executing a Thumb instruction, PC reads as the address of the + current instruction plus 4. */ + + if (displaced_in_arm_mode (regs)) + from += 8; + else + from += 4; + if (debug_displaced) fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n", - (unsigned long) from + 8); - return (ULONGEST) from + 8; /* Pipeline offset. 
*/ + (unsigned long) from); + return (ULONGEST) from; } else { @@ -5306,7 +5319,7 @@ copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, "opcode/class '%s' unmodified\n", (unsigned long) insn, iname); - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); return 0; } @@ -5349,7 +5362,7 @@ copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, dsc->u.preload.immed = 1; - dsc->modinsn[0] = insn & 0xfff0ffff; + RECORD_ARM_MODE_INSN (0, (insn & 0xfff0ffff)); dsc->cleanup = &cleanup_preload; @@ -5390,7 +5403,7 @@ copy_preload_reg (struct gdbarch *gdbarch, uint32_t insn, dsc->u.preload.immed = 0; - dsc->modinsn[0] = (insn & 0xfff0fff0) | 0x1; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff0fff0) | 0x1)); dsc->cleanup = &cleanup_preload; @@ -5443,7 +5456,7 @@ copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, dsc->u.ldst.writeback = bit (insn, 25); dsc->u.ldst.rn = rn; - dsc->modinsn[0] = insn & 0xfff0ffff; + RECORD_ARM_MODE_INSN (0, (insn & 0xfff0ffff)); dsc->cleanup = &cleanup_copro_load_store; @@ -5515,7 +5528,7 @@ copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, dsc->u.branch.exchange = exchange; dsc->u.branch.dest = from + 8 + offset; - dsc->modinsn[0] = ARM_NOP; + RECORD_ARM_MODE_INSN (0, ARM_NOP); dsc->cleanup = &cleanup_branch; @@ -5554,7 +5567,7 @@ copy_bx_blx_reg (struct gdbarch *gdbarch, uint32_t insn, dsc->u.branch.link = link; dsc->u.branch.exchange = 1; - dsc->modinsn[0] = ARM_NOP; + RECORD_ARM_MODE_INSN (0, ARM_NOP); dsc->cleanup = &cleanup_branch; @@ -5613,9 +5626,9 @@ copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, dsc->rd = rd; if (is_mov) - dsc->modinsn[0] = insn & 0xfff00fff; + RECORD_ARM_MODE_INSN (0, (insn & 0xfff00fff)); else - dsc->modinsn[0] = (insn & 0xfff00fff) | 0x10000; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00fff) | 0x10000)); dsc->cleanup = &cleanup_alu_imm; @@ -5682,9 +5695,9 @@ copy_alu_reg (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, dsc->rd = 
rd; if (is_mov) - dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x2; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00ff0) | 0x2)); else - dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x10002; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00ff0) | 0x10002)); dsc->cleanup = &cleanup_alu_reg; @@ -5757,9 +5770,9 @@ copy_alu_shifted_reg (struct gdbarch *gdbarch, uint32_t insn, dsc->rd = rd; if (is_mov) - dsc->modinsn[0] = (insn & 0xfff000f0) | 0x302; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff000f0) | 0x302)); else - dsc->modinsn[0] = (insn & 0xfff000f0) | 0x10302; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff000f0) | 0x10302)); dsc->cleanup = &cleanup_alu_shifted_reg; @@ -5883,12 +5896,12 @@ copy_extra_ld_st (struct gdbarch *gdbarch, uint32_t insn, int unpriveleged, /* {ldr,str} rt, [rt2,] [rn, #imm] -> {ldr,str} r0, [r1,] [r2, #imm]. */ - dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00fff) | 0x20000)); else /* {ldr,str} rt, [rt2,] [rn, +/-rm] -> {ldr,str} r0, [r1,] [r2, +/-r3]. */ - dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00ff0) | 0x20003)); dsc->cleanup = load[opcode] ? &cleanup_load : &cleanup_store; @@ -5971,32 +5984,31 @@ copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, /* {ldr,str}[b] rt, [rn, #imm], etc. -> {ldr,str}[b] r0, [r2, #imm]. */ - dsc->modinsn[0] = (insn & 0xfff00fff) | 0x20000; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00fff) | 0x20000)); else /* {ldr,str}[b] rt, [rn, rm], etc. -> {ldr,str}[b] r0, [r2, r3]. */ - dsc->modinsn[0] = (insn & 0xfff00ff0) | 0x20003; + RECORD_ARM_MODE_INSN (0, ((insn & 0xfff00ff0) | 0x20003)); } else { /* We need to use r4 as scratch. Make sure it's restored afterwards. */ dsc->u.ldst.restore_r4 = 1; - dsc->modinsn[0] = 0xe92d8000; /* push {pc} */ - dsc->modinsn[1] = 0xe8bd0010; /* pop {r4} */ - dsc->modinsn[2] = 0xe044400f; /* sub r4, r4, pc. */ - dsc->modinsn[3] = 0xe2844008; /* add r4, r4, #8. 
*/ - dsc->modinsn[4] = 0xe0800004; /* add r0, r0, r4. */ + RECORD_ARM_MODE_INSN (0, 0xe92d8000); /* push {pc} */ + RECORD_ARM_MODE_INSN (1, 0xe8bd0010); /* pop {r4} */ + RECORD_ARM_MODE_INSN (2, 0xe044400f); /* sub r4, r4, pc. */ + RECORD_ARM_MODE_INSN (3, 0xe2844008); /* add r4, r4, #8. */ + RECORD_ARM_MODE_INSN (4, 0xe0800004); /* add r0, r0, r4. */ /* As above. */ if (immed) - dsc->modinsn[5] = (insn & 0xfff00fff) | 0x20000; + RECORD_ARM_MODE_INSN (5, ((insn & 0xfff00fff) | 0x20000)); else - dsc->modinsn[5] = (insn & 0xfff00ff0) | 0x20003; - - dsc->modinsn[6] = 0x0; /* breakpoint location. */ - dsc->modinsn[7] = 0x0; /* scratch space. */ + RECORD_ARM_MODE_INSN (5, ((insn & 0xfff00ff0) | 0x20003)); + RECORD_ARM_MODE_INSN (6, 0x00); /* breakpoint location. */ + RECORD_ARM_MODE_INSN (7, 0x00); /* scratch space. */ dsc->numinsns = 6; } @@ -6268,7 +6280,7 @@ copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, instruction (which might not behave perfectly in all cases, but these instructions should be rare enough for that not to matter too much). */ - dsc->modinsn[0] = ARM_NOP; + RECORD_ARM_MODE_INSN (0, ARM_NOP); dsc->cleanup = &cleanup_block_load_all; } @@ -6312,7 +6324,8 @@ copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, "list %.4x\n"), rn, writeback ? "!" : "", (int) insn & 0xffff, new_regmask); - dsc->modinsn[0] = (insn & ~0xffff) | (new_regmask & 0xffff); + RECORD_ARM_MODE_INSN (0, + ((insn & ~0xffff) | (new_regmask & 0xffff))); dsc->cleanup = &cleanup_block_load_pc; } @@ -6325,7 +6338,7 @@ copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, Doing things this way has the advantage that we can auto-detect the offset of the PC write (which is architecture-dependent) in the cleanup routine. 
*/ - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); dsc->cleanup = &cleanup_block_store_pc; } @@ -6368,7 +6381,7 @@ copy_svc (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, Insn: unmodified svc. Cleanup: pc <- insn_addr + 4. */ - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); dsc->cleanup = &cleanup_svc; /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next @@ -6389,7 +6402,7 @@ copy_undef (struct gdbarch *gdbarch, uint32_t insn, "displaced: copying undefined insn %.8lx\n", (unsigned long) insn); - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); return 0; } @@ -6404,7 +6417,7 @@ copy_unpred (struct gdbarch *gdbarch, uint32_t insn, fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredictable insn " "%.8lx\n", (unsigned long) insn); - dsc->modinsn[0] = insn; + RECORD_ARM_MODE_INSN (0, insn); return 0; } @@ -6861,6 +6874,8 @@ arm_process_displaced_insn (struct gdbarch *gdbarch, CORE_ADDR from, if (!displaced_in_arm_mode (regs)) return thumb_process_displaced_insn (gdbarch, from, to, regs, dsc); + dsc->is_thumb = 0; + dsc->insn_size = 4; insn = read_memory_unsigned_integer (from, 4, byte_order_for_code); if (debug_displaced) fprintf_unfiltered (gdb_stdlog, "displaced: stepping insn %.8lx " @@ -6904,23 +6919,49 @@ arm_displaced_init_closure (struct gdbarch *gdbarch, CORE_ADDR from, CORE_ADDR to, struct displaced_step_closure *dsc) { struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch); - unsigned int i; + unsigned int i, len, offset; enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch); + int size = dsc->insn_size; + const unsigned char *bkp_insn; + offset = 0; /* Poke modified instruction(s). 
*/ for (i = 0; i < dsc->numinsns; i++) { if (debug_displaced) - fprintf_unfiltered (gdb_stdlog, "displaced: writing insn %.8lx at " - "%.8lx\n", (unsigned long) dsc->modinsn[i], - (unsigned long) to + i * 4); - write_memory_unsigned_integer (to + i * 4, 4, byte_order_for_code, + { + fprintf_unfiltered (gdb_stdlog, "displaced: writing insn "); + if (size == 4) + fprintf_unfiltered (gdb_stdlog, "%.8lx", + dsc->modinsn[i]); + else if (size == 2) + fprintf_unfiltered (gdb_stdlog, "%.4x", + (unsigned short)dsc->modinsn[i]); + + fprintf_unfiltered (gdb_stdlog, " at %.8lx\n", + (unsigned long) to + offset); + + } + write_memory_unsigned_integer (to + offset, size, + byte_order_for_code, dsc->modinsn[i]); + offset += size; + } + + /* Choose the correct breakpoint instruction. */ + if (dsc->is_thumb) + { + bkp_insn = tdep->thumb_breakpoint; + len = tdep->thumb_breakpoint_size; + } + else + { + bkp_insn = tdep->arm_breakpoint; + len = tdep->arm_breakpoint_size; } /* Put breakpoint afterwards. */ - write_memory (to + dsc->numinsns * 4, tdep->arm_breakpoint, - tdep->arm_breakpoint_size); + write_memory (to + offset, bkp_insn, len); if (debug_displaced) fprintf_unfiltered (gdb_stdlog, "displaced: copy %s->%s: ", @@ -6956,7 +6997,9 @@ arm_displaced_step_fixup (struct gdbarch *gdbarch, dsc->cleanup (gdbarch, regs, dsc); if (!dsc->wrote_to_pc) - regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, dsc->insn_addr + 4); + regcache_cooked_write_unsigned (regs, ARM_PC_REGNUM, + dsc->insn_addr + dsc->insn_size); + } #include "bfd-in2.h" diff --git a/gdb/arm-tdep.h b/gdb/arm-tdep.h index ef02002..1b3f9e4 100644 --- a/gdb/arm-tdep.h +++ b/gdb/arm-tdep.h @@ -204,6 +204,25 @@ struct gdbarch_tdep /* Structures used for displaced stepping. */ +/* Record an ARM mode instruction in one slot. 
*/ +#define RECORD_ARM_MODE_INSN(INDEX, INSN) do \ +{\ + dsc->modinsn[INDEX] = INSN;\ + } while (0) + +#define RECORD_THUMB_MODE_INSN(INDEX, INSN) do \ +{\ + dsc->modinsn[INDEX] = INSN;\ + } while (0) + +/* Record the two parts of a 32-bit Thumb-2 instruction. Each part occupies + one array element. */ +#define RECORD_THUMB2_MODE_INSN(INDEX, INSN1, INSN2) do \ +{ \ + dsc->modinsn[INDEX] = INSN1;\ + dsc->modinsn[INDEX + 1] = INSN2;\ +} while (0) + /* The maximum number of temporaries available for displaced instructions. */ #define DISPLACED_TEMPS 16 /* The maximum number of modified instructions generated for one single-stepped instruction. */ @@ -262,6 +281,17 @@ struct displaced_step_closure struct displaced_step_closure *dsc); } svc; } u; + + /* The size of the original instruction, either 2 or 4. */ + unsigned int insn_size; + /* True if the original insn (and thus all replacement insns) are Thumb + instead of ARM. */ + unsigned int is_thumb; + + /* The slots in the array are used as follows: + - an ARM instruction occupies one slot, + - a 16-bit Thumb instruction occupies one slot, + - a 32-bit Thumb-2 instruction occupies *two* slots, one for each half. */ unsigned long modinsn[DISPLACED_MODIFIED_INSNS]; int numinsns; CORE_ADDR insn_addr; -- 1.7.0.4 --------------080609080300050509030604--