From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (qmail 500 invoked by alias); 17 May 2011 14:29:01 -0000
Received: (qmail 485 invoked by uid 22791); 17 May 2011 14:28:59 -0000
X-SWARE-Spam-Status: No, hits=-1.8 required=5.0 tests=AWL,BAYES_00,TW_EG,T_RP_MATCHES_RCVD
X-Spam-Check-By: sourceware.org
Received: from mail.codesourcery.com (HELO mail.codesourcery.com) (38.113.113.100) by sourceware.org (qpsmtpd/0.43rc1) with ESMTP; Tue, 17 May 2011 14:28:43 +0000
Received: (qmail 6107 invoked from network); 17 May 2011 14:28:40 -0000
Received: from unknown (HELO ?192.168.0.102?) (yao@127.0.0.2) by mail.codesourcery.com with ESMTPA; 17 May 2011 14:28:40 -0000
Message-ID: <4DD28612.1090204@codesourcery.com>
Date: Tue, 17 May 2011 14:29:00 -0000
From: Yao Qi
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.14) Gecko/20110223 Lightning/1.0b2 Thunderbird/3.1.8
MIME-Version: 1.0
To: Ulrich Weigand
CC: gdb-patches@sourceware.org
Subject: Re: [try 2nd 4/8] Displaced stepping for Thumb 16-bit insn
References: <201105161719.p4GHJFd1032039@d06av02.portsmouth.uk.ibm.com>
In-Reply-To: <201105161719.p4GHJFd1032039@d06av02.portsmouth.uk.ibm.com>
Content-Type: multipart/mixed; boundary="------------050401020401050603060803"
X-IsSubscribed: yes
Mailing-List: contact gdb-patches-help@sourceware.org; run by ezmlm
Precedence: bulk
List-Id:
List-Subscribe:
List-Archive:
List-Post:
List-Help:
Sender: gdb-patches-owner@sourceware.org
X-SW-Source: 2011-05/txt/msg00378.txt.bz2

This is a multi-part message in MIME format.
--------------050401020401050603060803
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-length: 2402

On 05/17/2011 01:19 AM, Ulrich Weigand wrote:
> Yao Qi wrote:
>>
>> POP {r0, r1, ...., r6};
>> POP {r7};
>
> The above can use just a single POP {r0, ..., r7}, can't it?

Yes, it can.  I am not sure why I didn't combine these two instructions.

> Have you looked at how the ARM case does it?
> There, we still have just a single POP { r0, ..., rN } that pops the
> right number of registers, and then the cleanup function
> (cleanup_block_load_pc) reshuffles them.  It seems to me we could do
> the same (and actually use the same cleanup function) for the Thumb
> case too ...

Sure, we can reuse that for the Thumb case here.  In this case, when the
register list is not full, we could optimize it a little bit, as I did in
my last patch.  However, that is a separate issue, and can be addressed
separately.

>> 3. register list is empty.  This case is relatively simple.
>>
>> POP {r0}
>>
>> In cleanup, we store r0's value to PC.
>
> If we used cleanup_block_load_pc, this would handle the same case as well.
>
> (Unfortunately, handling case 1 the same way looks somewhat difficult,
> since cleanup_block_load_pc would expect the PC in register r8 ...)
>

In my new patch, there are two different cases to handle the POP
instruction:

1. The register list is full.  Use the following code sequence,

   POP {r0, r1, ...., r6, r7};  remove PC from reglist
   MOV r8, r7;                  Move value of r7 to r8;
   POP {r7};                    Store PC value into r7.

   and install the cleanup routine cleanup_pop_pc_16bit_all (renamed
   from cleanup_pop_pc_16bit).

2. The register list is not full.  Similar to the ARM part
   (arm_copy_block_xfer).

>> +cleanup_pop_pc_16bit(struct gdbarch *gdbarch, struct regcache *regs,
>> +                     struct displaced_step_closure *dsc)
>
> One more space before ( ...
>

Sorry about that.  Fixed.

>> +  else /* Cleanup procedure of case #2 and case #3 can be unified.  */
>> +    {
>> +      int rx = 0;
>> +      int rx_val = 0;
>> +
>> +      if (dsc->u.block.regmask)
>> +        {
>> +          for (rx = 0; rx < 8; rx++)
>> +            if ((dsc->u.block.regmask & (1 << rx)) == 0)
>> +              break;
>> +        }
>> +      else
>> +        rx = 0;
>
> (This is irrelevant if we decide to use cleanup_block_load_pc, but:
> the "if (dsc->u.block.regmask)" and "else rx = 0" are superfluous,
> since the for loop will terminate with rx == 0 anyway if regmask
> is zero.)
This part is removed since cleanup_block_load_pc is used.

--
Yao (齐尧)

--------------050401020401050603060803
Content-Type: text/x-patch; name="0001-Support-displaced-stepping-for-Thumb-16-bit-insns.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename*0="0001-Support-displaced-stepping-for-Thumb-16-bit-insns.patch"
Content-length: 17690

Support displaced stepping for Thumb 16-bit insns.

* arm-tdep.c (THUMB_NOP): Define.
(thumb_copy_unmodified_16bit): New.
(thumb_copy_b, thumb_copy_bx_blx_reg): New.
(thumb_copy_alu_reg): New.
(arm_copy_svc): Move some common code to ...
(install_svc): ... here.  New.
(thumb_copy_svc): New.
(install_pc_relative): New.
(thumb_copy_pc_relative_16bit): New.
(thumb_decode_pc_relative_16bit): New.
(thumb_copy_16bit_ldr_literal): New.
(thumb_copy_cbnz_cbz): New.
(cleanup_pop_pc_16bit_all): New.
(thumb_copy_pop_pc_16bit): New.
(thumb_process_displaced_16bit_insn): New.
(thumb_process_displaced_32bit_insn): New.
(thumb_process_displaced_insn): Process Thumb instruction.
---
 gdb/arm-tdep.c | 495 ++++++++++++++++++++++++++++++++++++++++++++++++++++++--
 1 files changed, 483 insertions(+), 12 deletions(-)

diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 2dd8c9e..702a8a1 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -5118,6 +5118,7 @@ arm_adjust_breakpoint_address (struct gdbarch *gdbarch, CORE_ADDR bpaddr)
 /* NOP instruction (mov r0, r0). */ #define ARM_NOP 0xe1a00000 +#define THUMB_NOP 0x4600 /* Helper for register reads for displaced stepping. In particular, this returns the PC as it would be seen by the instruction at its original
@@ -5340,6 +5341,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn,
 return 0; } +/* Copy 16-bit Thumb (Thumb and 16-bit Thumb-2) instruction without any + modification.
*/ +static int +thumb_copy_unmodified_16bit (struct gdbarch *gdbarch, unsigned int insn, + const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x, " + "opcode/class '%s' unmodified\n", insn, + iname); + + dsc->modinsn[0] = insn; + + return 0; +} + /* Preload instructions with immediate offset. */ static void @@ -5586,6 +5604,44 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return 0; } +/* Copy B Thumb instructions. */ +static int +thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, + struct displaced_step_closure *dsc) +{ + unsigned int cond = 0; + int offset = 0; + unsigned short bit_12_15 = bits (insn, 12, 15); + CORE_ADDR from = dsc->insn_addr; + + if (bit_12_15 == 0xd) + { + offset = sbits (insn, 0, 7); + cond = bits (insn, 8, 11); + } + else if (bit_12_15 == 0xe) /* Encoding T2 */ + { + offset = sbits ((insn << 1), 0, 11); + cond = INST_AL; + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying b immediate insn %.4x " + "with offset %d\n", insn, offset); + + dsc->u.branch.cond = cond; + dsc->u.branch.link = 0; + dsc->u.branch.exchange = 0; + dsc->u.branch.dest = from + 4 + offset; + + dsc->modinsn[0] = THUMB_NOP; + + dsc->cleanup = &cleanup_branch; + + return 0; +} + /* Copy BX/BLX with register-specified destinations. 
*/ static void @@ -5631,6 +5687,26 @@ arm_copy_bx_blx_reg (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_bx_blx_reg (struct gdbarch *gdbarch, uint16_t insn, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn, 7); + unsigned int rm = bits (insn, 3, 6); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x", + (unsigned short) insn); + + dsc->modinsn[0] = THUMB_NOP; + + install_bx_blx_reg (gdbarch, regs, dsc, link, INST_AL, rm); + + return 0; +} + + /* Copy/cleanup arithmetic/logic instruction with immediate RHS. */ static void @@ -5765,6 +5841,31 @@ arm_copy_alu_reg (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb_copy_alu_reg (struct gdbarch *gdbarch, uint16_t insn, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned rn, rm, rd; + + rd = bits (insn, 3, 6); + rn = (bit (insn, 7) << 3) | bits (insn, 0, 2); + rm = 2; + + if (rd != ARM_PC_REGNUM && rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_16bit (gdbarch, insn, "ALU reg", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x\n", + "ALU", (unsigned short) insn); + + dsc->modinsn[0] = ((insn & 0xff00) | 0x08); + + install_alu_reg (gdbarch, regs, dsc, rd, rn, rm); + + return 0; +} + /* Cleanup/copy arithmetic/logic insns with shifted register RHS. */ static void @@ -6439,21 +6540,16 @@ cleanup_svc (struct gdbarch *gdbarch, struct regcache *regs, displaced_write_reg (regs, dsc, ARM_PC_REGNUM, resume_addr, BRANCH_WRITE_PC); } -static int - -arm_copy_svc (struct gdbarch *gdbarch, uint32_t insn, - struct regcache *regs, struct displaced_step_closure *dsc) -{ - if (debug_displaced) - fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n", - (unsigned long) insn); +/* Common copy routine for svc instruction.
*/ +static int +install_svc (struct gdbarch *gdbarch, struct regcache *regs, + struct displaced_step_closure *dsc) +{ /* Preparation: none. Insn: unmodified svc. - Cleanup: pc <- insn_addr + 4. */ - - dsc->modinsn[0] = insn; + Cleanup: pc <- insn_addr + insn_size. */ /* Pretend we wrote to the PC, so cleanup doesn't set PC to the next instruction. */ @@ -6467,7 +6563,34 @@ arm_copy_svc (struct gdbarch *gdbarch, uint32_t insn, dsc->cleanup = &cleanup_svc; return 0; } +} + +static int +arm_copy_svc (struct gdbarch *gdbarch, uint32_t insn, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.8lx\n", + (unsigned long) insn); + + dsc->modinsn[0] = insn; + + return install_svc (gdbarch, regs, dsc); +} + +static int +thumb_copy_svc (struct gdbarch *gdbarch, uint16_t insn, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying svc insn %.4x\n", + insn); + + dsc->modinsn[0] = insn; + return install_svc (gdbarch, regs, dsc); } /* Copy undefined instructions. */ @@ -6929,11 +7052,359 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, } static void +install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs, + struct displaced_step_closure *dsc, int rd) +{ + /* ADR Rd, #imm + + Rewrite as: + + Preparation: Rd <- PC + Insn: ADD Rd, #imm + Cleanup: Null. 
+ */ + + /* Rd <- PC */ + int val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + displaced_write_reg (regs, dsc, rd, val, CANNOT_WRITE_PC); +} + +static int +thumb_copy_pc_relative_16bit (struct gdbarch *gdbarch, struct regcache *regs, + struct displaced_step_closure *dsc, + int rd, unsigned int imm) +{ + + /* Encoding T2: ADDS Rd, #imm */ + dsc->modinsn[0] = (0x3000 | (rd << 8) | imm); + + install_pc_relative (gdbarch, regs, dsc, rd); + + return 0; +} + +static int +thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn, 8, 10); + unsigned int imm8 = bits (insn, 0, 7); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d insn %.4x\n", + rd, imm8, insn); + + return thumb_copy_pc_relative_16bit (gdbarch, regs, dsc, rd, imm8); +} + +static int +thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rt = bits (insn1, 8, 10); + unsigned int pc; + int imm8 = sbits (insn1, 0, 7); + CORE_ADDR from = dsc->insn_addr; + + /* LDR Rd, #imm8 + + Rewrite as: + + Preparation: tmp2 <- R2, tmp3 <- R3, R2 <- PC, R3 <- #imm8; + if (Rd is not R0) tmp0 <- R0; + Insn: LDR R0, [R2, R3]; + Cleanup: R2 <- tmp2, R3 <- tmp3, + if (Rd is not R0) Rd <- R0, R0 <- tmp0 */ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying thumb ldr literal " + "insn %.4x\n", insn1); + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[2] = displaced_read_reg (regs, dsc, 2); + dsc->tmp[3] = displaced_read_reg (regs, dsc, 3); + pc = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 2, pc, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 3, imm8, CANNOT_WRITE_PC); + + dsc->rd = rt; + dsc->u.ldst.xfersize = 4; + dsc->u.ldst.rn = 0; + dsc->u.ldst.immed = 0; + dsc->u.ldst.writeback =
0; + dsc->u.ldst.restore_r4 = 0; + + dsc->modinsn[0] = 0x58d0; /* ldr r0, [r2, r3] */ + + dsc->cleanup = &cleanup_load; + + return 0; +} + +/* Copy Thumb cbnz/cbz instruction. */ + +static int +thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int non_zero = bit (insn1, 11); + unsigned int imm5 = (bit (insn1, 9) << 6) | (bits (insn1, 3, 7) << 1); + CORE_ADDR from = dsc->insn_addr; + int rn = bits (insn1, 0, 2); + int rn_val = displaced_read_reg (regs, dsc, rn); + + dsc->u.branch.cond = (rn_val && non_zero) || (!rn_val && !non_zero); + /* CBNZ and CBZ do not affect the condition flags. If the condition is + true, set it to INST_AL so that cleanup_branch knows the branch is + taken; otherwise, leave it unset and cleanup_branch will do nothing. */ + if (dsc->u.branch.cond) + dsc->u.branch.cond = INST_AL; + + dsc->u.branch.link = 0; + dsc->u.branch.exchange = 0; + + dsc->u.branch.dest = from + 2 + imm5; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s [r%d = 0x%x]" + " insn %.4x to %.8lx\n", non_zero ?
"cbnz" : "cbz", + rn, rn_val, insn1, dsc->u.branch.dest); + + dsc->modinsn[0] = THUMB_NOP; + + dsc->cleanup = &cleanup_branch; + return 0; +} + +static void +cleanup_pop_pc_16bit_all (struct gdbarch *gdbarch, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + /* PC <- r7 */ + int val = displaced_read_reg (regs, dsc, 7); + displaced_write_reg (regs, dsc, ARM_PC_REGNUM, val, BX_WRITE_PC); + + /* r7 <- r8 */ + val = displaced_read_reg (regs, dsc, 8); + displaced_write_reg (regs, dsc, 7, val, CANNOT_WRITE_PC); + + /* r8 <- tmp[0] */ + displaced_write_reg (regs, dsc, 8, dsc->tmp[0], CANNOT_WRITE_PC); + +} + +static int +thumb_copy_pop_pc_16bit (struct gdbarch *gdbarch, unsigned short insn1, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + dsc->u.block.regmask = insn1 & 0x00ff; + + /* Rewrite instruction: POP {rX, rY, ...,rZ, PC} + to : + + (1) register list is full, that is, r0-r7 are used. + Prepare: tmp[0] <- r8 + + POP {r0, r1, ...., r6, r7}; remove PC from reglist + MOV r8, r7; Move value of r7 to r8; + POP {r7}; Store PC value into r7. + + Cleanup: PC <- r7, r7 <- r8, r8 <-tmp[0] + + (2) register list is not full, supposing there are N registers in + register list (except PC, 0 <= N <= 7). + Prepare: for each i, 0 - N, tmp[i] <- ri. + + POP {r0, r1, ...., rN}; + + Cleanup: Set registers in original reglist from r0 - rN. Restore r0 - rN + from tmp[] properly. 
+ */ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb pop {%.8x, pc} insn %.4x\n", + dsc->u.block.regmask, insn1); + + if (dsc->u.block.regmask == 0xff) + { + dsc->tmp[0] = displaced_read_reg (regs, dsc, 8); + + dsc->modinsn[0] = (insn1 & 0xfeff); /* POP {r0,r1,...,r6, r7} */ + dsc->modinsn[1] = 0x46b8; /* MOV r8, r7 */ + dsc->modinsn[2] = 0xbc80; /* POP {r7} */ + + dsc->numinsns = 3; + dsc->cleanup = &cleanup_pop_pc_16bit_all; + } + else + { + unsigned int num_in_list = bitcount (dsc->u.block.regmask); + unsigned int new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list + 1; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + new_regmask = (1 << (num_in_list + 1)) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: POP " + "{..., pc}: original reg list %.4x," + " modified list %.4x\n"), + (int) dsc->u.block.regmask, new_regmask); + + dsc->u.block.regmask |= 0x8000; + dsc->u.block.writeback = 0; + dsc->u.block.cond = INST_AL; + + dsc->modinsn[0] = (insn1 & ~0x1ff) | (new_regmask & 0xff); + + dsc->cleanup = &cleanup_block_load_pc; + } + + return 0; +} + +static void +thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned short op_bit_12_15 = bits (insn1, 12, 15); + unsigned short op_bit_10_11 = bits (insn1, 10, 11); + int err = 0; + + /* 16-bit thumb instructions. */ + switch (op_bit_12_15) + { + /* Shift (imme), add, subtract, move and compare. */ + case 0: case 1: case 2: case 3: + err = thumb_copy_unmodified_16bit (gdbarch, insn1, + "shift/add/sub/mov/cmp", + dsc); + break; + case 4: + switch (op_bit_10_11) + { + case 0: /* Data-processing */ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, + "data-processing", + dsc); + break; + case 1: /* Special data instructions and branch and exchange. 
*/ + { + unsigned short op = bits (insn1, 7, 9); + if (op == 6 || op == 7) /* BX or BLX */ + err = thumb_copy_bx_blx_reg (gdbarch, insn1, regs, dsc); + else if (bits (insn1, 6, 7) != 0) /* ADD/MOV/CMP high registers. */ + err = thumb_copy_alu_reg (gdbarch, insn1, regs, dsc); + else + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "special data", + dsc); + } + break; + default: /* LDR (literal) */ + err = thumb_copy_16bit_ldr_literal (gdbarch, insn1, regs, dsc); + } + break; + case 5: case 6: case 7: case 8: case 9: /* Load/Store single data item */ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "ldr/str", dsc); + break; + case 10: + if (op_bit_10_11 < 2) /* Generate PC-relative address */ + err = thumb_decode_pc_relative_16bit (gdbarch, insn1, regs, dsc); + else /* Generate SP-relative address */ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "sp-relative", dsc); + break; + case 11: /* Misc 16-bit instructions */ + { + switch (bits (insn1, 8, 11)) + { + case 1: case 3: case 9: case 11: /* CBNZ, CBZ */ + err = thumb_copy_cbnz_cbz (gdbarch, insn1, regs, dsc); + break; + case 12: case 13: /* POP */ + if (bit (insn1, 8)) /* PC is in register list. */ + err = thumb_copy_pop_pc_16bit (gdbarch, insn1, regs, dsc); + else + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "pop", dsc); + break; + case 15: /* If-Then, and hints */ + if (bits (insn1, 0, 3)) + /* If-Then makes up to four following instructions conditional. + IT instruction itself is not conditional, so handle it as a + common unmodified instruction. 
*/ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "If-Then", + dsc); + else + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "hints", dsc); + break; + default: + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "misc", dsc); + } + } + break; + case 12: + if (op_bit_10_11 < 2) /* Store multiple registers */ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "stm", dsc); + else /* Load multiple registers */ + err = thumb_copy_unmodified_16bit (gdbarch, insn1, "ldm", dsc); + break; + case 13: /* Conditional branch and supervisor call */ + if (bits (insn1, 9, 11) != 7) /* conditional branch */ + err = thumb_copy_b (gdbarch, insn1, dsc); + else + err = thumb_copy_svc (gdbarch, insn1, regs, dsc); + break; + case 14: /* Unconditional branch */ + err = thumb_copy_b (gdbarch, insn1, dsc); + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_16bit_insn: Instruction decode error")); +} + +static void +thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); +} + +static void thumb_process_displaced_insn (struct gdbarch *gdbarch, CORE_ADDR from, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode")); + enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch); + uint16_t insn1 + = read_memory_unsigned_integer (from, 2, byte_order_for_code); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: process thumb insn %.4x " + "at %.8lx\n", insn1, (unsigned long) from); + + dsc->is_thumb = 1; + dsc->insn_size = thumb_insn_size (insn1); + if (thumb_insn_size (insn1) == 4) + { + uint16_t insn2 + = read_memory_unsigned_integer (from + 2, 2, byte_order_for_code); + thumb_process_displaced_32bit_insn 
(gdbarch, insn1, insn2, regs, dsc); + } + else + thumb_process_displaced_16bit_insn (gdbarch, insn1, regs, dsc); } void -- 1.7.0.4 --------------050401020401050603060803--