* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns [not found] <201107151847.p6FIlJNm001180@d06av02.portsmouth.uk.ibm.com> @ 2011-08-06 4:32 ` Yao Qi 2011-08-09 18:46 ` Ulrich Weigand 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-08-06 4:32 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches [-- Attachment #1: Type: text/plain, Size: 18153 bytes --] On 07/16/2011 02:47 AM, Ulrich Weigand wrote: > Yao Qi wrote: > >> On 05/18/2011 01:14 AM, Ulrich Weigand wrote: >>> - However, you cannot just transform a PLD/PLI "literal" (i.e. PC + immediate) >>> into an "immediate" (i.e. register + immediate) version, since in Thumb >>> mode the "literal" version supports a 12-bit immediate, while the immediate >>> version only supports an 8-bit immediate. >>> >>> I guess you could either add the immediate to the PC during preparation >>> stage and then use an "immediate" instruction with immediate zero, or >>> else load the immediate into a second register and use a "register" >>> version of the instruction. >>> >> >> The former may not be correct. PC should be set at the address of `copy >> area' in displaced stepping, instead of any other arbitrary values. The >> alternative to the former approach is to compute the new immediate value >> according to the new PC value we will set (new PC value is >> dsc->scratch_base). However, in this way, we have to worry about the >> overflow of new computed 12-bit immediate. >> >> The latter one sounds better, because we don't have to worry about >> overflow problem, and cleanup_preload can be still used as cleanup >> routine in this case. > > OK, this looks good to me now. > >>> This doesn't look right: you're replacing the RN register if it is anything >>> *but* 15 -- but those cases do not need to be replaced! >>> >> >> Oh, sorry, it is a logic error. The code should be like >> >> if (rn != ARM_PC_REGNUM) >> return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "copro >> load/store", dsc); > > Hmm, it's still the wrong way in this patch? > Sorry, fixed. > >>>> + case 2: /* op1 = 2 */ >>>> + if (op) /* Branch and misc control. */ >>>> + { >>>> + if (bit (insn2, 14)) /* BLX/BL */ >>>> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); >>>> + else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7) >>> I don't understand this condition, but it looks wrong to me ... >>> >> >> This condition is about "Conditional Branch". The 2nd half of condition >> should be "bits (insn1, 7, 9) != 0x7", corresponding to the first line >> of table A6-13 "op1 = 0x0, op is not x111xxx". > > But "!bits (insn2, 12, 14)" doesn't say "op1 = 0x0" either ... Since we > already know bit 14 is 0, this should probably just check for bit 12. OK, the condition checking is changed from "!bits (insn2, 12, 14)" to "!bit (insn2, 12)". > Some more comments on the latest patch. There's a couple of issues I > had overlooked in the previous review, in particular handling of the > load/store instructions. Most of the rest is just minor things ... > > >> +static int >> +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int rn = bits (insn1, 0, 3); >> + >> + if (rn == ARM_PC_REGNUM) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "copro load/store", dsc); > > This still needs to be rn != ARM_PC_REGNUM > Oh, sorry for missing this one in last patch. Fixed. >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " >> + "load/store insn %.4x%.4x\n", insn1, insn2); >> + >> + dsc->modinsn[0] = insn1 & 0xfff0; >> + dsc->modinsn[1] = insn2; >> + dsc->numinsns = 2; >> + >> + install_copro_load_store (gdbarch, regs, dsc, bit (insn1, 9), rn); > > Why bit 9? Isn't the writeback bit bit 5 here? But anyway, those > instructions we support here in Thumb mode (LDC/LDC2, VLDR) don't > support writeback anyway. It's probably best to just pass 0. > Right, let us pass 0 for writeback in this function. >> +static int >> +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int link = bit (insn2, 14); >> + int exchange = link && !bit (insn2, 12); >> + int cond = INST_AL; >> + long offset =0; > Space after = > Fixed. > >> +thumb2_copy_load_store (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc, int load, int size, >> + int usermode, int writeback) > > Looking at the store instructions (STR/STRB/STRH[T]), it would appear that none > of them may use PC in Thumb mode. Therefore, it seems that this routine should > just handle loads (i.e. rename to thumb2_copy_load and remove the load argument). > > There is another fundamental problem: The LDR "literal" Thumb encodings provide > a "long-form" 12-bit immediate *and* an U bit. However, the non-PC-relative > "immediate" Thumb encodings only provide a U bit with the short 8-bit immediates; > the 12-bit immediate form does not have a U bit (instead, bit 7 encodes whether > the 12-bit or 8-bit form is in use). > > This means that you cannot simply translate LDR literal into LDR immediate > forms, but probably need to handle by loading the immediate into a register, > similar to the preload case. > OK, thumb2_copy_load_store is renamed to thumb2_copy_load_reg_imm, which is to handle non-literal case, and thumb2_copy_load_literal is a new function to handle LDR "literal" instruction. >> +{ >> + int immed = !bit (insn1, 9); > > This check looks incorrect. E.g. LDR (register) also has bit 9 equals zero. > There needs to be a more complex decoding step somewhere, either in the caller, > or directly in here. (Note that decoding immediate, usermode, and writeback > flags are closely coupled, so this should probably be done in the same location.) > >> + unsigned int rt = bits (insn2, 12, 15); >> + unsigned int rn = bits (insn1, 0, 3); >> + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ >> + >> + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM >> + && (immed || rm != ARM_PC_REGNUM)) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", >> + dsc); >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, >> + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", >> + load ? (size == 1 ? "ldrb" : (size == 2 ? "ldrh" : "ldr")) >> + : (size == 1 ? "strb" : (size == 2 ? "strh" : "str")), >> + usermode ? "t" : "", >> + rt, rn, insn1, insn2); >> + >> + install_load_store (gdbarch, regs, dsc, load, immed, writeback, size, >> + usermode, rt, rm, rn); >> + >> + if (load || rt != ARM_PC_REGNUM) >> + { >> + dsc->u.ldst.restore_r4 = 0; >> + >> + if (immed) >> + /* {ldr,str}[b]<cond> rt, [rn, #imm], etc. >> + -> >> + {ldr,str}[b]<cond> r0, [r2, #imm]. */ >> + { >> + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; >> + dsc->modinsn[1] = insn2 & 0x0fff; >> + } >> + else >> + /* {ldr,str}[b]<cond> rt, [rn, rm], etc. >> + -> >> + {ldr,str}[b]<cond> r0, [r2, r3]. */ >> + { >> + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; >> + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; >> + } >> + >> + dsc->numinsns = 2; >> + } >> + else >> + { >> + /* In Thumb-32 instructions, the behavior is unpredictable when Rt is >> + PC, while the behavior is undefined when Rn is PC. Shortly, neither >> + Rt nor Rn can be PC. */ >> + >> + gdb_assert (0); >> + } > > See above, this should only be used for loads. > This is removed. > >> +static int >> +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int rn = bits (insn1, 0, 3); >> + int load = bit (insn1, 4); >> + int writeback = bit (insn1, 5); >> + >> + /* Block transfers which don't mention PC can be run directly >> + out-of-line. */ >> + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); >> + >> + if (rn == ARM_PC_REGNUM) >> + { >> + warning (_("displaced: Unpredictable LDM or STM with " >> + "base register r15")); >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "unpredictable ldm/stm", dsc); >> + } >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " >> + "%.4x%.4x\n", insn1, insn2); >> + >> + /* Clear bit 13, since it should be always zero. */ >> + dsc->u.block.regmask = (insn2 & 0xdfff); >> + dsc->u.block.rn = rn; >> + >> + dsc->u.block.load = bit (insn1, 4); > We've already read that bit into "load". > Fixed. >> + dsc->u.block.user = bit (insn1, 6); > This must always be 0 -- we're never called otherwise. > Fixed. >> +static int >> +thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + /* Data processing (shift register) instructions can be grouped according to >> + their encondings: > Typo. >> + >> + 1. Insn X Rn :inst1,3-0 Rd: insn2,8-11, Rm: insn2,3-0. Rd=15 & S=1, Insn Y. >> + Rn != PC, Rm ! = PC. >> + X: AND, Y: TST (REG) >> + X: EOR, Y: TEQ (REG) >> + X: ADD, Y: CMN (REG) >> + X: SUB, Y: CMP (REG) >> + >> + 2. Insn X Rn : ins1,3-0, Rm: insn2, 3-0; Rm! = PC, Rn != PC >> + Insn X: TST, TEQ, PKH, CMN, and CMP. >> + >> + 3. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2, 3-0. Rn != PC, Rd != PC, >> + Rm != PC. >> + X: BIC, ADC, SBC, and RSB. >> + >> + 4. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2,3-0. Rd = 15, Insn Y. >> + X: ORR, Y: MOV (REG). >> + X: ORN, Y: MVN (REG). >> + >> + 5. Insn X Rd: insn2, 8-11, Rm: insn2, 3-0. >> + X: MVN, Rd != PC, Rm != PC >> + X: MOV: Rd/Rm can be PC. >> + >> + PC is only allowed to be used in instruction MOV. >> +*/ > > Do we need this comment at all (except for the last sentence)? > > I leave this comment there to help me to remind what instruction is still missing here. It is somewhat redundant when it is done. Removed except for the last one. >> static int >> +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int rd = bits (insn2, 8, 11); >> + /* Since immeidate has the same encoding in both ADR and ADD, so we simply > Typo: immediate Fixed. >> + extract raw immediate encoding rather than computing immediate. When >> + generating ADD instruction, we can simply perform OR operation to set >> + immediate into ADD. */ >> + unsigned int imm_3_8 = insn2 & 0x70ff; >> + unsigned int imm_i = insn1 & 0x0400; /* Clear all bits except bit 10. */ >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, >> + "displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n", >> + rd, imm_i, imm_3_8, insn1, insn2); >> + >> + /* Encoding T3: ADD Rd, Rd, #imm */ >> + dsc->modinsn[0] = (0xf100 | rd | imm_i); >> + dsc->modinsn[1] = ((rd << 8) | imm_3_8); > > Hmm. So this handles the T3 encoding of ADR correctly. However, in the T2 > encoding, we need to *subtract* the immediate from PC, so we really need to > generate a SUB instead of an ADD as replacement ... > > We generate SUB (immediate) Encoding T3 for ADR Encoding T2 in new patch. >> +/* Copy Table Brach Byte/Halfword */ > Type: Branch > > Fixed. >> +static int >> +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, >> + uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int rt = bits (insn2, 12, 15); >> + int rn = bits (insn1, 0, 3); >> + int op1 = bits (insn1, 7, 8); >> + int user_mode = (bits (insn2, 8, 11) == 0xe); > > This is too simplistic. The "long immediate" forms may just accidentally > have 0xe in those bits -- they're part of the immediate there. See above > for the comments about computing immediate/writeback/usermode flags at > the same location. > This part is re-written to decode immediate/writeback/usermode bits. >> + int err = 0; >> + int writeback = 0; >> + >> + switch (bits (insn1, 5, 6)) >> + { >> + case 0: /* Load byte and memory hints */ >> + if (rt == 0xf) /* PLD/PLI */ >> + { >> + if (rn == 0xf) >> + { >> + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ >> + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); >> + } > >> + else >> + { >> + switch (op1) >> + { >> + case 0: case 2: >> + if (bits (insn2, 8, 11) == 0x1110 >> + || (bits (insn2, 8, 11) & 0x6) == 0x9) >> + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); >> + else >> + /* PLI/PLD (reigster, immediate) doesn't use PC. */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "pli/pld", dsc); >> + break; >> + case 1: /* PLD/PLDW (immediate) */ >> + case 3: /* PLI (immediate, literal) */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "pli/pld", dsc); >> + break; >> + >> + } > > I'd just make the whole block use copy_unmodified ... That's some complexity > here for no real gain. > OK. Replace them all with a single copy_unmodified routine. >> + } >> + } > > >> + else >> + { >> + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) >> + writeback = bit (insn2, 8); >> + >> + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 1, >> + user_mode, writeback); > > As discussed above, we'll have to distiguish the "literal" forms from > the immediate forms. > Fixed. >> + } > > >> + case 1: /* Load halfword and memory hints. */ >> + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ >> + { >> + if (rn == 0xf) >> + { >> + if (op1 == 0 || op1 == 1) >> + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); >> + else >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "unalloc memhint", dsc); >> + } >> + else >> + { >> + if ((op1 == 0 || op1 == 2) >> + && (bits (insn2, 8, 11) == 0xe >> + || ((bits (insn2, 8, 11) & 0x9) == 0x9))) >> + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); >> + else thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "pld/unalloc memhint", dsc); >> + } > > See above, it's probably not worth to make such fine-grained distinctions > when the result is effectively the same anyway. > OK. They are combined into a single call to copy_unmodified routine. >> + } >> + else >> + { >> + int op1 = bits (insn1, 7, 8); >> + >> + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) >> + writeback = bit (insn2, 8); >> + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, >> + 2, user_mode, writeback); > > See above for literal forms; computation of writeback etc. flags. > >> + } >> + break; >> + case 2: /* Load word */ >> + { >> + int op1 = bits (insn1, 7, 8); >> + >> + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) >> + writeback = bit (insn2, 8); >> + >> + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 4, >> + user_mode, writeback); >> + break; > > Likewise. > > >> +static int >> +decode_thumb_32bit_store_single_data_item (struct gdbarch *gdbarch, >> + uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int user_mode = (bits (insn2, 8, 11) == 0xe); >> + int size = 0; >> + int writeback = 0; >> + int op1 = bits (insn1, 5, 7); >> + >> + switch (op1) >> + { >> + case 0: case 4: size = 1; break; >> + case 1: case 5: size = 2; break; >> + case 2: case 6: size = 4; break; >> + } >> + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) >> + writeback = bit (insn2, 8); >> + >> + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, >> + dsc, 0, size, user_mode, >> + writeback); >> + >> +} > > As per the discussion above, this function is probably unnecessary, > since stores cannot use the PC in Thumb mode. > > Yes, it is removed. >> static void >> thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, >> uint16_t insn2, struct regcache *regs, >> struct displaced_step_closure *dsc) >> { > >> + case 2: /* op1 = 2 */ >> + if (op) /* Branch and misc control. */ >> + { >> + if (bit (insn2, 14)) /* BLX/BL */ >> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); >> + else if (!bits (insn2, 12, 14) && bits (insn1, 7, 9) != 0x7) >> + /* Conditional Branch */ >> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > > See above for the problems with this condition. Also, you're missing > (some) *unconditional* branch instructions (B) here; those have bit 12 > equal to 1. > > Maybe the checks should be combined into: > if (bit (insn2, 14) /* BLX/BL */ > || bit (insn2, 12) /* Unconditional branch */ > || bits (insn1, 7, 9) != 0x7)) /* Conditional branch */ > err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > Yeah, it looks right. >> + case 3: /* op1 = 3 */ >> + switch (bits (insn1, 9, 10)) >> + { >> + case 0: >> + if ((bits (insn1, 4, 6) & 0x5) == 0x1) >> + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, >> + regs, dsc); > > This check misses the "Load word" instructions. It should probably > just be "if (bit (insn1, 4))" at this point. > Fixed. >> + else >> + { >> + if (bit (insn1, 8)) /* NEON Load/Store */ >> + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "neon elt/struct load/store", >> + dsc); >> + else /* Store single data item */ >> + err = decode_thumb_32bit_store_single_data_item (gdbarch, >> + insn1, insn2, >> + regs, dsc); > > As discussed above, I think those can all be copied unmodified. > Done. -- Yao (é½å°§) [-- Attachment #2: 0003-Support-displaced-stepping-for-Thumb-32-bit-insns.patch --] [-- Type: text/x-patch, Size: 28053 bytes --] Support displaced stepping for Thumb 32-bit insns. * arm-tdep.c (thumb_copy_unmodified_32bit): New. (thumb2_copy_preload): New. (thumb2_copy_copro_load_store): New. (thumb2_copy_b_bl_blx): New. (thumb2_copy_alu_imm): New. (thumb2_copy_load_reg_imm): New. (thumb2_copy_load_literal): New (thumb2_copy_block_xfer): New. (thumb_32bit_copy_undef): New. (thumb_32bit_copy_unpred): New. (thumb2_decode_ext_reg_ld_st): New. (thumb2_decode_svc_copro): New. (decode_thumb_32bit_store_single_data_item): New. (thumb_copy_pc_relative_32bit): New. (thumb_decode_pc_relative_32bit): New. (decode_thumb_32bit_ld_mem_hints): New. (thumb2_copy_table_branch): New (thumb_process_displaced_32bit_insn): Process Thumb 32-bit instructions. --- gdb/arm-tdep.c | 805 +++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 804 insertions(+), 1 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index b0074bd..58c7c72 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5341,6 +5341,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, " + "opcode/class '%s' unmodified\n", insn1, insn2, + iname); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any modification. */ static int @@ -5408,6 +5425,54 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + unsigned int u_bit = bit (insn1, 7); + int imm12 = bits (insn2, 0, 11); + ULONGEST pc_val; + + if (rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); + + /* PC is only allowed to use in PLI (immeidate,literal) Encoding T3, and + PLD (literal) Encoding T1. */ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying pld/pli pc (0x%x) %c imm12 %.4x\n", + (unsigned int) dsc->insn_addr, u_bit ? '+' : '-', + imm12); + + if (!u_bit) + imm12 = -1 * imm12; + + /* Rewrite instruction {pli/pld} PC imm12 into: + Preapre: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12 + + {pli/pld} [r0, r1] + + Cleanup: r0 <- tmp[0], r1 <- tmp[1]. */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC); + dsc->u.preload.immed = 0; + + /* {pli/pld} [r0, r1] */ + dsc->modinsn[0] = insn1 & 0xff00; + dsc->modinsn[1] = 0xf001; + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_preload; + return 0; +} + /* Preload instructions with register offset. */ static void @@ -5517,6 +5582,32 @@ arm_copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + + if (rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "copro load/store", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " + "load/store insn %.4x%.4x\n", insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + /* This function is called for copying instruction LDC/LDC2/VLDR, which + doesn't support writeback, so pass 0. */ + install_copro_load_store (gdbarch, regs, dsc, 0, rn); + + return 0; +} + /* Clean up branch instructions (actually perform the branch, by setting PC). */ @@ -5604,6 +5695,61 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn2, 14); + int exchange = link && !bit (insn2, 12); + int cond = INST_AL; + long offset = 0; + int j1 = bit (insn2, 13); + int j2 = bit (insn2, 11); + int s = sbits (insn1, 10, 10); + int i1 = !(j1 ^ bit (insn1, 10)); + int i2 = !(j2 ^ bit (insn1, 10)); + + if (!link && !exchange) /* B */ + { + offset = (bits (insn2, 0, 10) << 1); + if (bit (insn2, 12)) /* Encoding T4 */ + { + offset |= (bits (insn1, 0, 9) << 12) + | (i2 << 22) + | (i1 << 23) + | (s << 24); + cond = INST_AL; + } + else /* Encoding T3 */ + { + offset |= (bits (insn1, 0, 5) << 12) + | (j1 << 18) + | (j2 << 19) + | (s << 20); + cond = bits (insn1, 6, 9); + } + } + else + { + offset = (bits (insn1, 0, 9) << 12); + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); + offset |= exchange ? + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s insn " + "%.4x %.4x with offset %.8lx\n", + link ? (exchange) ? "blx" : "bl" : "b", + insn1, insn2, offset); + + dsc->modinsn[0] = THUMB_NOP; + + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, link, offset); + return 0; +} + /* Copy B Thumb instructions. */ static int thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, @@ -5767,6 +5913,58 @@ arm_copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_alu_imm (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int op = bits (insn1, 5, 8); + unsigned int rn, rm, rd; + ULONGEST rd_val, rn_val; + + rn = bits (insn1, 0, 3); /* Rn */ + rm = bits (insn2, 0, 3); /* Rm */ + rd = bits (insn2, 8, 11); /* Rd */ + + /* This routine is only called for instruction MOV. */ + gdb_assert (op == 0x2 && rn == 0xf); + + if (rm != ARM_PC_REGNUM && rd != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU imm", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", + "ALU", insn1, insn2); + + /* Instruction is of form: + + <op><cond> rd, [rn,] #imm + + Rewrite as: + + Preparation: tmp1, tmp2 <- r0, r1; + r0, r1 <- rd, rn + Insn: <op><cond> r0, r1, #imm + Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2 + */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + rn_val = displaced_read_reg (regs, dsc, rn); + rd_val = displaced_read_reg (regs, dsc, rd); + displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC); + dsc->rd = rd; + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x1); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_alu_imm; + + return 0; +} + /* Copy/cleanup arithmetic/logic insns with register RHS. */ static void @@ -6134,6 +6332,113 @@ install_load_store (struct gdbarch *gdbarch, struct regcache *regs, dsc->cleanup = load ? &cleanup_load : &cleanup_store; } + +static int +thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int u_bit = bit (insn1, 7); + unsigned int rt = bits (insn2, 12, 15); + int imm12 = bits (insn2, 0, 11); + ULONGEST pc_val; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying ldr pc (0x%x) R%d %c imm12 %.4x\n", + (unsigned int) dsc->insn_addr, rt, u_bit ? '+' : '-', + imm12); + + if (!u_bit) + imm12 = -1 * imm12; + + /* Rewrite instruction LDR Rt imm12 into: + + Prepare: tmp[0] <- r0, tmp[1] <- r1, tmp[2] <- r2, r1 <- pc, r2 <- imm12 + + LDR R0, R1, R2, + + Cleanup: rt <- r0, r0 <- tmp[0], r1 <- tmp[1], r2 <- tmp[2]. */ + + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + dsc->tmp[2] = displaced_read_reg (regs, dsc, 2); + + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 1, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 2, imm12, CANNOT_WRITE_PC); + + dsc->rd = rt; + + dsc->u.ldst.xfersize = 4; + dsc->u.ldst.immed = 0; + dsc->u.ldst.writeback = 0; + dsc->u.ldst.restore_r4 = 0; + + /* LDR R0, R1, R2 */ + dsc->modinsn[0] = 0xf851; + dsc->modinsn[1] = 0x2; + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_load; + + return 0; +} + + +static int +thumb2_copy_load_reg_imm (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc, + int size, int usermode, int writeback, int immed) +{ + unsigned int rt = bits (insn2, 12, 15); + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ + /* In LDR (register), there is also a register Rm, which is not allowed to + be PC, so we don't have to check it. */ + + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", + (size == 1 ? "ldrb" : (size == 2 ? "ldrh" : "ldr")), + usermode ? "t" : "", + rt, rn, insn1, insn2); + + install_load_store (gdbarch, regs, dsc, 1, immed, writeback, size, + usermode, rt, rm, rn); + + dsc->u.ldst.restore_r4 = 0; + + if (immed) + /* ldr[b]<cond> rt, [rn, #imm], etc. + -> + ldr[b]<cond> r0, [r2, #imm]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = insn2 & 0x0fff; + } + else + /* ldr[b]<cond> rt, [rn, rm], etc. + -> + ldr[b]<cond> r0, [r2, r3]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; + } + + dsc->numinsns = 2; + + return 0; +} + + static int arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, @@ -6524,6 +6829,87 @@ arm_copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rn = bits (insn1, 0, 3); + int load = bit (insn1, 4); + int writeback = bit (insn1, 5); + + /* Block transfers which don't mention PC can be run directly + out-of-line. */ + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); + + if (rn == ARM_PC_REGNUM) + { + warning (_("displaced: Unpredictable LDM or STM with " + "base register r15")); + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unpredictable ldm/stm", dsc); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " + "%.4x%.4x\n", insn1, insn2); + + /* Clear bit 13, since it should be always zero. */ + dsc->u.block.regmask = (insn2 & 0xdfff); + dsc->u.block.rn = rn; + + dsc->u.block.load = load; + dsc->u.block.user = 0; + dsc->u.block.increment = bit (insn1, 7); + dsc->u.block.before = bit (insn1, 8); + dsc->u.block.writeback = writeback; + dsc->u.block.cond = INST_AL; + + if (load) + { + if (dsc->u.block.regmask == 0xffff) + { + /* This branch is impossible to happen. */ + gdb_assert (0); + } + else + { + unsigned int regmask = dsc->u.block.regmask; + unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + if (writeback) + insn1 &= ~(1 << 5); + + new_regmask = (1 << num_in_list) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, " + "{..., pc}: original reg list %.4x, modified " + "list %.4x\n"), rn, writeback ? "!" : "", + (int) dsc->u.block.regmask, new_regmask); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = (new_regmask & 0xffff); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_block_load_pc; + } + } + else + { + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + dsc->cleanup = &cleanup_block_store_pc; + } + return 0; +} + /* Cleanup/copy SVC (SWI) instructions. These two functions are overridden for Linux, where some SVC instructions must be treated specially. */ @@ -6609,6 +6995,23 @@ arm_copy_undef (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy unpredictable instructions. */ static int @@ -7005,6 +7408,65 @@ arm_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn, return 1; } +/* Decode shifted register instructions. */ + +static int +thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + /* PC is only allowed to be used in instruction MOV. */ + + unsigned int op = bits (insn1, 5, 8); + unsigned int rn = bits (insn1, 0, 3); + + if (op == 0x2 && rn == 0xf) /* MOV */ + return thumb2_copy_alu_imm (gdbarch, insn1, insn2, regs, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp (shift reg)", dsc); +} + + +/* Decode extension register load/store. Exactly the same as + arm_decode_ext_reg_ld_st. */ + +static int +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int opcode = bits (insn1, 4, 8); + + switch (opcode) + { + case 0x04: case 0x05: + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vmov", dsc); + + case 0x08: case 0x0c: /* 01x00 */ + case 0x0a: case 0x0e: /* 01x10 */ + case 0x12: case 0x16: /* 10x10 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vstm/vpush", dsc); + + case 0x09: case 0x0d: /* 01x01 */ + case 0x0b: case 0x0f: /* 01x11 */ + case 0x13: case 0x17: /* 10x11 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vldm/vpop", dsc); + + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vstr", dsc); + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); + } + + /* Should be unreachable. */ + return 1; +} + static int arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7051,6 +7513,49 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, return arm_copy_undef (gdbarch, insn, dsc); /* Possibly unreachable. */ } +static int +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int coproc = bits (insn2, 8, 11); + unsigned int op1 = bits (insn1, 4, 9); + unsigned int bit_5_8 = bits (insn1, 5, 8); + unsigned int bit_9 = bit (insn1, 9); + unsigned int bit_4 = bit (insn1, 4); + unsigned int rn = bits (insn1, 0, 3); + + if (bit_9 == 0) + { + if (bit_5_8 == 2) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 64bit xfer/mrrc/mrrc2/mcrr/mcrr2", + dsc); + else if (bit_5_8 == 0) /* UNDEFINED. */ + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + else + { + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ + if ((coproc & 0xe) == 0xa) + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, + dsc); + else /* coproc is not 101x. */ + { + if (bit_4 == 0) /* STC/STC2. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "stc/stc2", dsc); + else /* LDC/LDC2 {literal, immeidate}. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + } + } + } + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "coproc", dsc); + + return 0; +} + static void install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc, int rd) @@ -7100,6 +7605,43 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn, } static int +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn2, 8, 11); + /* Since immediate has the same encoding in ADR ADD and SUB, so we simply + extract raw immediate encoding rather than computing immediate. When + generating ADD or SUB instruction, we can simply perform OR operation to + set immediate into ADD. */ + unsigned int imm_3_8 = insn2 & 0x70ff; + unsigned int imm_i = insn1 & 0x0400; /* Clear all bits except bit 10. */ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n", + rd, imm_i, imm_3_8, insn1, insn2); + + if (bit (insn1, 7)) /* Encoding T2 */ + { + /* Encoding T3: SUB Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf1a0 | rd | imm_i); + dsc->modinsn[1] = ((rd << 8) | imm_3_8); + } + else /* Encoding T3 */ + { + /* Encoding T3: ADD Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf100 | rd | imm_i); + dsc->modinsn[1] = ((rd << 8) | imm_3_8); + } + dsc->numinsns = 2; + + install_pc_relative (gdbarch, regs, dsc, rd); + + return 0; +} + +static int thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7181,6 +7723,51 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1, return 0; } +/* Copy Table Branch Byte/Halfword */ +static int +thumb2_copy_table_branch (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + ULONGEST rn_val, rm_val; + int is_tbh = bit (insn2, 4); + CORE_ADDR halfwords = 0; + enum bfd_endian byte_order = gdbarch_byte_order (gdbarch); + + rn_val = displaced_read_reg (regs, dsc, bits (insn1, 0, 3)); + rm_val = displaced_read_reg (regs, dsc, bits (insn2, 0, 3)); + + if (is_tbh) + { + gdb_byte buf[2]; + + target_read_memory (rn_val + 2 * rm_val, buf, 2); + halfwords = extract_unsigned_integer (buf, 2, byte_order); + } + else + { + gdb_byte buf[1]; + + target_read_memory (rn_val + rm_val, buf, 1); + halfwords = extract_unsigned_integer (buf, 1, byte_order); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: %s base 0x%x offset 0x%x" + " offset 0x%x\n", is_tbh ? "tbh" : "tbb", + (unsigned int) rn_val, (unsigned int) rm_val, + (unsigned int) halfwords); + + dsc->u.branch.cond = INST_AL; + dsc->u.branch.link = 0; + dsc->u.branch.exchange = 0; + dsc->u.branch.dest = dsc->insn_addr + 4 + 2 * halfwords; + + dsc->cleanup = &cleanup_branch; + + return 0; +} + static void cleanup_pop_pc_16bit_all (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7374,12 +7961,228 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1, _("thumb_process_displaced_16bit_insn: Instruction decode error")); } +static int +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, + uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rt = bits (insn2, 12, 15); + int rn = bits (insn1, 0, 3); + int op1 = bits (insn1, 7, 8); + int err = 0; + + switch (bits (insn1, 5, 6)) + { + case 0: /* Load byte and memory hints */ + if (rt == 0xf) /* PLD/PLI */ + { + if (rn == 0xf) + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pli/pld", dsc); + } + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrb{reg, immediate}/ldrbt", + dsc); + + break; + case 1: /* Load halfword and memory hints. */ + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pld/unalloc memhint", dsc); + else + { + int insn2_bit_8_11 = bits (insn2, 8, 11); + + if (rn == 0xf) + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); + else + { + if (op1 == 0x1 || op1 == 0x3) + /* LDRH/LDRSH (immediate), in which bit 7 of insn1 is 1, + PC is not used. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrh/ldrht", dsc); + else if (insn2_bit_8_11 == 0xc + || (insn2_bit_8_11 & 0x9) == 0x9) + /* LDRH/LDRSH (imediate), in which bit 7 of insn1 is 0, PC + can be used. */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, 2, 0, bit (insn2, 8), 1); + else /* PC is not allowed to use in LDRH (register) and LDRHT. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrh/ldrht", dsc); + } + } + break; + case 2: /* Load word */ + { + int insn2_bit_8_11 = bits (insn2, 8, 11); + + if (rn == 0xf) + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); + else if (op1 == 0x1) /* Encoding T3 */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, 4, 0, 0, 1); + else /* op1 == 0x0 */ + { + if (insn2_bit_8_11 == 0xc || (insn2_bit_8_11 & 0x9) == 0x9) + /* LDR (immediate) */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, 4, 0, + bit (insn2, 8), 1); + else + /* LDRT and LDR (register) */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, 4, + bits (insn2, 8, 11) == 0xe, + 0, 0); + } + + break; + } + default: + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + break; + } + return 0; +} + static void thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); + int err = 0; + unsigned short op = bit (insn2, 15); + unsigned int op1 = bits (insn1, 11, 12); + + switch (op1) + { + case 1: + { + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 6)) + { + /* Load/store {dual, execlusive}, table branch. */ + if (bits (insn1, 7, 8) == 1 && bits (insn1, 4, 5) == 1 + && bits (insn2, 5, 7) == 0) + err = thumb2_copy_table_branch (gdbarch, insn1, insn2, regs, + dsc); + else + /* PC is not allowed to use in load/store {dual, exclusive} + instructions. */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "load/store dual/ex", dsc); + } + else /* load/store multiple */ + { + switch (bits (insn1, 7, 8)) + { + case 0: case 3: /* SRS, RFE */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "srs/rfe", dsc); + break; + case 1: case 2: /* LDM/STM/PUSH/POP */ + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); + break; + } + } + break; + + case 1: + /* Data-processing (shift register). */ + err = thumb2_decode_dp_shift_reg (gdbarch, insn1, insn2, regs, + dsc); + break; + default: /* Coprocessor instructions. */ + /* Thumb 32bit coprocessor instructions have the same encoding + as ARM's. */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + } + case 2: /* op1 = 2 */ + if (op) /* Branch and misc control. */ + { + if (bit (insn2, 14) /* BLX/BL */ + || bit (insn2, 12) /* Unconditional branch */ + || (bits (insn1, 7, 9) != 0x7)) /* Conditional branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else if (!bit (insn2, 12) && bits (insn1, 7, 9) != 0x7) + /* Conditional Branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "misc ctrl", dsc); + } + else + { + if (bit (insn1, 9)) /* Data processing (plain binary imm). */ + { + int op = bits (insn1, 4, 8); + int rn = bits (insn1, 0, 4); + if ((op == 0 || op == 0xa) && rn == 0xf) + err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2, + regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/pb", dsc); + } + else /* Data processing (modified immeidate) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/mi", dsc); + } + break; + case 3: /* op1 = 3 */ + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 4)) + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, + regs, dsc); + else /* NEON Load/Store and Store single data item */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon elt/struct load/store", + dsc); + break; + case 1: /* op1 = 3, bits (9, 10) == 1 */ + switch (bits (insn1, 7, 8)) + { + case 0: case 1: /* Data processing (register) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp(reg)", dsc); + break; + case 2: /* Multiply and absolute difference */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mul/mua/diff", dsc); + break; + case 3: /* Long multiply and divide */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "lmul/lmua", dsc); + break; + } + break; + default: /* Coprocessor instructions */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_32bit_insn: Instruction decode error")); + } static void -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-08-06 4:32 ` [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns Yao Qi @ 2011-08-09 18:46 ` Ulrich Weigand 2011-08-19 3:13 ` Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Ulrich Weigand @ 2011-08-09 18:46 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > Support displaced stepping for Thumb 32-bit insns. There's still a couple of issues I noticed, but overall it is looking quite good now... Thanks! > + /* Rewrite instruction {pli/pld} PC imm12 into: > + Preapre: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12 Typo: Prepare > + {pli/pld} [r0, r1] > + > + Cleanup: r0 <- tmp[0], r1 <- tmp[1]. */ > + > + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); > + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); > + > + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); > + > + displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC); > + displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC); > + dsc->u.preload.immed = 0; > + > + /* {pli/pld} [r0, r1] */ > + dsc->modinsn[0] = insn1 & 0xff00; Shouldn't this be something like 0xfff0 instead? We need to keep bit 4 set ... > +static int > +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, > + uint16_t insn1, uint16_t insn2, > + struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + int rt = bits (insn2, 12, 15); > + int rn = bits (insn1, 0, 3); > + int op1 = bits (insn1, 7, 8); > + int err = 0; > + > + switch (bits (insn1, 5, 6)) > + { > + case 0: /* Load byte and memory hints */ > + if (rt == 0xf) /* PLD/PLI */ > + { > + if (rn == 0xf) > + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ > + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); > + else > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "pli/pld", dsc); > + } > + else > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "ldrb{reg, immediate}/ldrbt", > + dsc); Hmm. What about literal variants of LDRB/LDRSB ? > + case 1: /* Load halfword and memory hints. */ > + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "pld/unalloc memhint", dsc); > + else > + { > + int insn2_bit_8_11 = bits (insn2, 8, 11); > + > + if (rn == 0xf) > + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); copy_load_literal currently only handles full-word loads ... this should really be able to handle half-word loads as well (which means it probably needs a size argument). > + else > + { > + if (op1 == 0x1 || op1 == 0x3) > + /* LDRH/LDRSH (immediate), in which bit 7 of insn1 is 1, > + PC is not used. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "ldrh/ldrht", dsc); > + else if (insn2_bit_8_11 == 0xc > + || (insn2_bit_8_11 & 0x9) == 0x9) > + /* LDRH/LDRSH (imediate), in which bit 7 of insn1 is 0, PC > + can be used. */ > + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, > + dsc, 2, 0, bit (insn2, 8), 1); Actually, it cannot ... if RT is PC, we have either UNPREDICTABLE or an Unallocated memory hint; if RN is PC, we have the literal version. It seems everything except literal can just be passed through unmodified, and we do not need to call thumb2_copy_load_reg_imm at all. > + case 2: /* Load word */ > + { > + int insn2_bit_8_11 = bits (insn2, 8, 11); > + > + if (rn == 0xf) > + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); > + else if (op1 == 0x1) /* Encoding T3 */ > + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, > + dsc, 4, 0, 0, 1); > + else /* op1 == 0x0 */ > + { > + if (insn2_bit_8_11 == 0xc || (insn2_bit_8_11 & 0x9) == 0x9) > + /* LDR (immediate) */ > + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, > + dsc, 4, 0, > + bit (insn2, 8), 1); > + else > + /* LDRT and LDR (register) */ > + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, > + dsc, 4, > + bits (insn2, 8, 11) == 0xe, > + 0, 0); LDRT also cannot use PC as target, so we really only need to check for LDR (register) here. Also, this means that thumb2_copy_load_reg_imm doesn't need a user_mode argument. (It also seems that it doesn't need a size argument: loads into PC are only allowed for the full-word instructions.) > + switch (op1) > + { > + case 1: > + { > + switch (bits (insn1, 9, 10)) > + { > + default: /* Coprocessor instructions. */ > + /* Thumb 32bit coprocessor instructions have the same encoding > + as ARM's. */ The comment isn't really correct ... > + case 2: /* op1 = 2 */ > + if (op) /* Branch and misc control. */ > + { > + if (bit (insn2, 14) /* BLX/BL */ > + || bit (insn2, 12) /* Unconditional branch */ > + || (bits (insn1, 7, 9) != 0x7)) /* Conditional branch */ > + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > + else if (!bit (insn2, 12) && bits (insn1, 7, 9) != 0x7) > + /* Conditional Branch */ > + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); The else if is now superfluous: conditional branches are covered by the first if condition. Bye, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-08-09 18:46 ` Ulrich Weigand @ 2011-08-19 3:13 ` Yao Qi 2011-08-19 16:39 ` Ulrich Weigand 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-08-19 3:13 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches [-- Attachment #1: Type: text/plain, Size: 5819 bytes --] On 08/10/2011 02:46 AM, Ulrich Weigand wrote: > Yao Qi wrote: > > >> + /* Rewrite instruction {pli/pld} PC imm12 into: >> + Preapre: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12 > > Typo: Prepare > Fixed. >> + {pli/pld} [r0, r1] >> + >> + Cleanup: r0 <- tmp[0], r1 <- tmp[1]. */ >> + >> + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); >> + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); >> + >> + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); >> + >> + displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC); >> + displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC); >> + dsc->u.preload.immed = 0; >> + >> + /* {pli/pld} [r0, r1] */ >> + dsc->modinsn[0] = insn1 & 0xff00; > > Shouldn't this be something like 0xfff0 instead? We need to > keep bit 4 set ... Yeah, we should only clear bits for register number. Fixed. >> +static int >> +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, >> + uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int rt = bits (insn2, 12, 15); >> + int rn = bits (insn1, 0, 3); >> + int op1 = bits (insn1, 7, 8); >> + int err = 0; >> + >> + switch (bits (insn1, 5, 6)) >> + { >> + case 0: /* Load byte and memory hints */ >> + if (rt == 0xf) /* PLD/PLI */ >> + { >> + if (rn == 0xf) >> + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ >> + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); >> + else >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "pli/pld", dsc); >> + } >> + else >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "ldrb{reg, immediate}/ldrbt", >> + dsc); > > Hmm. What about literal variants of LDRB/LDRSB ? > The else block is re-written like this to handle LDRB/LDRSB (literal), if (rn == 0xf) /* LDRB/LDRSB (literal) */ return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc, 1); else return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldrb{reg, immediate}/ldrbt", dsc); >> + case 1: /* Load halfword and memory hints. */ >> + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "pld/unalloc memhint", dsc); >> + else >> + { >> + int insn2_bit_8_11 = bits (insn2, 8, 11); >> + >> + if (rn == 0xf) >> + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); > > copy_load_literal currently only handles full-word loads ... this should > really be able to handle half-word loads as well (which means it probably > needs a size argument). > You are right. Add a new argument `size'. >> + else >> + { >> + if (op1 == 0x1 || op1 == 0x3) >> + /* LDRH/LDRSH (immediate), in which bit 7 of insn1 is 1, >> + PC is not used. */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "ldrh/ldrht", dsc); >> + else if (insn2_bit_8_11 == 0xc >> + || (insn2_bit_8_11 & 0x9) == 0x9) >> + /* LDRH/LDRSH (imediate), in which bit 7 of insn1 is 0, PC >> + can be used. */ >> + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, >> + dsc, 2, 0, bit (insn2, 8), 1); > > Actually, it cannot ... if RT is PC, we have either UNPREDICTABLE or > an Unallocated memory hint; if RN is PC, we have the literal version. > > It seems everything except literal can just be passed through unmodified, > and we do not need to call thumb2_copy_load_reg_imm at all. > OK. Fixed. >> + case 2: /* Load word */ >> + { >> + int insn2_bit_8_11 = bits (insn2, 8, 11); >> + >> + if (rn == 0xf) >> + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc); >> + else if (op1 == 0x1) /* Encoding T3 */ >> + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, >> + dsc, 4, 0, 0, 1); >> + else /* op1 == 0x0 */ >> + { >> + if (insn2_bit_8_11 == 0xc || (insn2_bit_8_11 & 0x9) == 0x9) >> + /* LDR (immediate) */ >> + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, >> + dsc, 4, 0, >> + bit (insn2, 8), 1); >> + else >> + /* LDRT and LDR (register) */ >> + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, >> + dsc, 4, >> + bits (insn2, 8, 11) == 0xe, >> + 0, 0); > > LDRT also cannot use PC as target, so we really only need to check for > LDR (register) here. Also, this means that thumb2_copy_load_reg_imm > doesn't need a user_mode argument. > Right. Remove user_mode argument from thumb2_copy_load_reg_imm. > (It also seems that it doesn't need a size argument: loads into PC > are only allowed for the full-word instructions.) > > `size' argument is removed. >> + switch (op1) >> + { >> + case 1: >> + { >> + switch (bits (insn1, 9, 10)) >> + { >> + default: /* Coprocessor instructions. */ >> + /* Thumb 32bit coprocessor instructions have the same encoding >> + as ARM's. */ > > The comment isn't really correct ... > It is out of date. Removed. >> + case 2: /* op1 = 2 */ >> + if (op) /* Branch and misc control. */ >> + { >> + if (bit (insn2, 14) /* BLX/BL */ >> + || bit (insn2, 12) /* Unconditional branch */ >> + || (bits (insn1, 7, 9) != 0x7)) /* Conditional branch */ >> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); >> + else if (!bit (insn2, 12) && bits (insn1, 7, 9) != 0x7) >> + /* Conditional Branch */ >> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > > The else if is now superfluous: conditional branches are covered by > the first if condition. > Yes, "else if" block is removed. -- Yao (é½å°§) [-- Attachment #2: 0003-Support-displaced-stepping-for-Thumb-32-bit-insns.patch --] [-- Type: text/x-patch, Size: 27348 bytes --] Support displaced stepping for Thumb 32-bit insns. * arm-tdep.c (thumb_copy_unmodified_32bit): New. (thumb2_copy_preload): New. (thumb2_copy_copro_load_store): New. (thumb2_copy_b_bl_blx): New. (thumb2_copy_alu_imm): New. (thumb2_copy_load_reg_imm): New. (thumb2_copy_load_literal): New (thumb2_copy_block_xfer): New. (thumb_32bit_copy_undef): New. (thumb_32bit_copy_unpred): New. (thumb2_decode_ext_reg_ld_st): New. (thumb2_decode_svc_copro): New. (decode_thumb_32bit_store_single_data_item): New. (thumb_copy_pc_relative_32bit): New. (thumb_decode_pc_relative_32bit): New. (decode_thumb_32bit_ld_mem_hints): New. (thumb2_copy_table_branch): New (thumb_process_displaced_32bit_insn): Process Thumb 32-bit instructions. --- gdb/arm-tdep.c | 789 +++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 788 insertions(+), 1 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index b436a3b..6f8ee22 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5346,6 +5346,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, " + "opcode/class '%s' unmodified\n", insn1, insn2, + iname); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any modification. */ static int @@ -5413,6 +5430,54 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + unsigned int u_bit = bit (insn1, 7); + int imm12 = bits (insn2, 0, 11); + ULONGEST pc_val; + + if (rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); + + /* PC is only allowed to use in PLI (immeidate,literal) Encoding T3, and + PLD (literal) Encoding T1. */ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying pld/pli pc (0x%x) %c imm12 %.4x\n", + (unsigned int) dsc->insn_addr, u_bit ? '+' : '-', + imm12); + + if (!u_bit) + imm12 = -1 * imm12; + + /* Rewrite instruction {pli/pld} PC imm12 into: + Prepare: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12 + + {pli/pld} [r0, r1] + + Cleanup: r0 <- tmp[0], r1 <- tmp[1]. */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC); + dsc->u.preload.immed = 0; + + /* {pli/pld} [r0, r1] */ + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = 0xf001; + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_preload; + return 0; +} + /* Preload instructions with register offset. */ static void @@ -5522,6 +5587,32 @@ arm_copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + + if (rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "copro load/store", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " + "load/store insn %.4x%.4x\n", insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + /* This function is called for copying instruction LDC/LDC2/VLDR, which + doesn't support writeback, so pass 0. */ + install_copro_load_store (gdbarch, regs, dsc, 0, rn); + + return 0; +} + /* Clean up branch instructions (actually perform the branch, by setting PC). */ @@ -5609,6 +5700,61 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn2, 14); + int exchange = link && !bit (insn2, 12); + int cond = INST_AL; + long offset = 0; + int j1 = bit (insn2, 13); + int j2 = bit (insn2, 11); + int s = sbits (insn1, 10, 10); + int i1 = !(j1 ^ bit (insn1, 10)); + int i2 = !(j2 ^ bit (insn1, 10)); + + if (!link && !exchange) /* B */ + { + offset = (bits (insn2, 0, 10) << 1); + if (bit (insn2, 12)) /* Encoding T4 */ + { + offset |= (bits (insn1, 0, 9) << 12) + | (i2 << 22) + | (i1 << 23) + | (s << 24); + cond = INST_AL; + } + else /* Encoding T3 */ + { + offset |= (bits (insn1, 0, 5) << 12) + | (j1 << 18) + | (j2 << 19) + | (s << 20); + cond = bits (insn1, 6, 9); + } + } + else + { + offset = (bits (insn1, 0, 9) << 12); + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); + offset |= exchange ? + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s insn " + "%.4x %.4x with offset %.8lx\n", + link ? (exchange) ? "blx" : "bl" : "b", + insn1, insn2, offset); + + dsc->modinsn[0] = THUMB_NOP; + + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, link, offset); + return 0; +} + /* Copy B Thumb instructions. */ static int thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, @@ -5772,6 +5918,58 @@ arm_copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_alu_imm (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int op = bits (insn1, 5, 8); + unsigned int rn, rm, rd; + ULONGEST rd_val, rn_val; + + rn = bits (insn1, 0, 3); /* Rn */ + rm = bits (insn2, 0, 3); /* Rm */ + rd = bits (insn2, 8, 11); /* Rd */ + + /* This routine is only called for instruction MOV. */ + gdb_assert (op == 0x2 && rn == 0xf); + + if (rm != ARM_PC_REGNUM && rd != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU imm", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", + "ALU", insn1, insn2); + + /* Instruction is of form: + + <op><cond> rd, [rn,] #imm + + Rewrite as: + + Preparation: tmp1, tmp2 <- r0, r1; + r0, r1 <- rd, rn + Insn: <op><cond> r0, r1, #imm + Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2 + */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + rn_val = displaced_read_reg (regs, dsc, rn); + rd_val = displaced_read_reg (regs, dsc, rd); + displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC); + dsc->rd = rd; + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x1); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_alu_imm; + + return 0; +} + /* Copy/cleanup arithmetic/logic insns with register RHS. */ static void @@ -6139,6 +6337,110 @@ install_load_store (struct gdbarch *gdbarch, struct regcache *regs, dsc->cleanup = load ? &cleanup_load : &cleanup_store; } + +static int +thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc, int size) +{ + unsigned int u_bit = bit (insn1, 7); + unsigned int rt = bits (insn2, 12, 15); + int imm12 = bits (insn2, 0, 11); + ULONGEST pc_val; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying ldr pc (0x%x) R%d %c imm12 %.4x\n", + (unsigned int) dsc->insn_addr, rt, u_bit ? '+' : '-', + imm12); + + if (!u_bit) + imm12 = -1 * imm12; + + /* Rewrite instruction LDR Rt imm12 into: + + Prepare: tmp[0] <- r0, tmp[1] <- r1, tmp[2] <- r2, r1 <- pc, r2 <- imm12 + + LDR R0, R1, R2, + + Cleanup: rt <- r0, r0 <- tmp[0], r1 <- tmp[1], r2 <- tmp[2]. */ + + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + dsc->tmp[2] = displaced_read_reg (regs, dsc, 2); + + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 1, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 2, imm12, CANNOT_WRITE_PC); + + dsc->rd = rt; + + dsc->u.ldst.xfersize = size; + dsc->u.ldst.immed = 0; + dsc->u.ldst.writeback = 0; + dsc->u.ldst.restore_r4 = 0; + + /* LDR R0, R1, R2 */ + dsc->modinsn[0] = 0xf851; + dsc->modinsn[1] = 0x2; + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_load; + + return 0; +} + +static int +thumb2_copy_load_reg_imm (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc, + int writeback, int immed) +{ + unsigned int rt = bits (insn2, 12, 15); + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ + /* In LDR (register), there is also a register Rm, which is not allowed to + be PC, so we don't have to check it. */ + + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying ldr r%d [r%d] insn %.4x%.4x\n", + rt, rn, insn1, insn2); + + install_load_store (gdbarch, regs, dsc, 1, immed, writeback, 4, + 0, rt, rm, rn); + + dsc->u.ldst.restore_r4 = 0; + + if (immed) + /* ldr[b]<cond> rt, [rn, #imm], etc. + -> + ldr[b]<cond> r0, [r2, #imm]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = insn2 & 0x0fff; + } + else + /* ldr[b]<cond> rt, [rn, rm], etc. + -> + ldr[b]<cond> r0, [r2, r3]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; + } + + dsc->numinsns = 2; + + return 0; +} + + static int arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, @@ -6529,6 +6831,87 @@ arm_copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rn = bits (insn1, 0, 3); + int load = bit (insn1, 4); + int writeback = bit (insn1, 5); + + /* Block transfers which don't mention PC can be run directly + out-of-line. */ + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); + + if (rn == ARM_PC_REGNUM) + { + warning (_("displaced: Unpredictable LDM or STM with " + "base register r15")); + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unpredictable ldm/stm", dsc); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " + "%.4x%.4x\n", insn1, insn2); + + /* Clear bit 13, since it should be always zero. */ + dsc->u.block.regmask = (insn2 & 0xdfff); + dsc->u.block.rn = rn; + + dsc->u.block.load = load; + dsc->u.block.user = 0; + dsc->u.block.increment = bit (insn1, 7); + dsc->u.block.before = bit (insn1, 8); + dsc->u.block.writeback = writeback; + dsc->u.block.cond = INST_AL; + + if (load) + { + if (dsc->u.block.regmask == 0xffff) + { + /* This branch is impossible to happen. */ + gdb_assert (0); + } + else + { + unsigned int regmask = dsc->u.block.regmask; + unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + if (writeback) + insn1 &= ~(1 << 5); + + new_regmask = (1 << num_in_list) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, " + "{..., pc}: original reg list %.4x, modified " + "list %.4x\n"), rn, writeback ? "!" : "", + (int) dsc->u.block.regmask, new_regmask); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = (new_regmask & 0xffff); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_block_load_pc; + } + } + else + { + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + dsc->cleanup = &cleanup_block_store_pc; + } + return 0; +} + /* Cleanup/copy SVC (SWI) instructions. These two functions are overridden for Linux, where some SVC instructions must be treated specially. */ @@ -6614,6 +6997,23 @@ arm_copy_undef (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy unpredictable instructions. */ static int @@ -7010,6 +7410,65 @@ arm_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn, return 1; } +/* Decode shifted register instructions. */ + +static int +thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + /* PC is only allowed to be used in instruction MOV. */ + + unsigned int op = bits (insn1, 5, 8); + unsigned int rn = bits (insn1, 0, 3); + + if (op == 0x2 && rn == 0xf) /* MOV */ + return thumb2_copy_alu_imm (gdbarch, insn1, insn2, regs, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp (shift reg)", dsc); +} + + +/* Decode extension register load/store. Exactly the same as + arm_decode_ext_reg_ld_st. */ + +static int +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int opcode = bits (insn1, 4, 8); + + switch (opcode) + { + case 0x04: case 0x05: + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vmov", dsc); + + case 0x08: case 0x0c: /* 01x00 */ + case 0x0a: case 0x0e: /* 01x10 */ + case 0x12: case 0x16: /* 10x10 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vstm/vpush", dsc); + + case 0x09: case 0x0d: /* 01x01 */ + case 0x0b: case 0x0f: /* 01x11 */ + case 0x13: case 0x17: /* 10x11 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vldm/vpop", dsc); + + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vstr", dsc); + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); + } + + /* Should be unreachable. */ + return 1; +} + static int arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7056,6 +7515,49 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, return arm_copy_undef (gdbarch, insn, dsc); /* Possibly unreachable. */ } +static int +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int coproc = bits (insn2, 8, 11); + unsigned int op1 = bits (insn1, 4, 9); + unsigned int bit_5_8 = bits (insn1, 5, 8); + unsigned int bit_9 = bit (insn1, 9); + unsigned int bit_4 = bit (insn1, 4); + unsigned int rn = bits (insn1, 0, 3); + + if (bit_9 == 0) + { + if (bit_5_8 == 2) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 64bit xfer/mrrc/mrrc2/mcrr/mcrr2", + dsc); + else if (bit_5_8 == 0) /* UNDEFINED. */ + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + else + { + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ + if ((coproc & 0xe) == 0xa) + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, + dsc); + else /* coproc is not 101x. */ + { + if (bit_4 == 0) /* STC/STC2. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "stc/stc2", dsc); + else /* LDC/LDC2 {literal, immeidate}. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + } + } + } + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "coproc", dsc); + + return 0; +} + static void install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc, int rd) @@ -7105,6 +7607,43 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn, } static int +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn2, 8, 11); + /* Since immediate has the same encoding in ADR ADD and SUB, so we simply + extract raw immediate encoding rather than computing immediate. When + generating ADD or SUB instruction, we can simply perform OR operation to + set immediate into ADD. */ + unsigned int imm_3_8 = insn2 & 0x70ff; + unsigned int imm_i = insn1 & 0x0400; /* Clear all bits except bit 10. */ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n", + rd, imm_i, imm_3_8, insn1, insn2); + + if (bit (insn1, 7)) /* Encoding T2 */ + { + /* Encoding T3: SUB Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf1a0 | rd | imm_i); + dsc->modinsn[1] = ((rd << 8) | imm_3_8); + } + else /* Encoding T3 */ + { + /* Encoding T3: ADD Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf100 | rd | imm_i); + dsc->modinsn[1] = ((rd << 8) | imm_3_8); + } + dsc->numinsns = 2; + + install_pc_relative (gdbarch, regs, dsc, rd); + + return 0; +} + +static int thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7186,6 +7725,51 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1, return 0; } +/* Copy Table Branch Byte/Halfword */ +static int +thumb2_copy_table_branch (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + ULONGEST rn_val, rm_val; + int is_tbh = bit (insn2, 4); + CORE_ADDR halfwords = 0; + enum bfd_endian byte_order = gdbarch_byte_order (gdbarch); + + rn_val = displaced_read_reg (regs, dsc, bits (insn1, 0, 3)); + rm_val = displaced_read_reg (regs, dsc, bits (insn2, 0, 3)); + + if (is_tbh) + { + gdb_byte buf[2]; + + target_read_memory (rn_val + 2 * rm_val, buf, 2); + halfwords = extract_unsigned_integer (buf, 2, byte_order); + } + else + { + gdb_byte buf[1]; + + target_read_memory (rn_val + rm_val, buf, 1); + halfwords = extract_unsigned_integer (buf, 1, byte_order); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: %s base 0x%x offset 0x%x" + " offset 0x%x\n", is_tbh ? "tbh" : "tbb", + (unsigned int) rn_val, (unsigned int) rm_val, + (unsigned int) halfwords); + + dsc->u.branch.cond = INST_AL; + dsc->u.branch.link = 0; + dsc->u.branch.exchange = 0; + dsc->u.branch.dest = dsc->insn_addr + 4 + 2 * halfwords; + + dsc->cleanup = &cleanup_branch; + + return 0; +} + static void cleanup_pop_pc_16bit_all (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7379,12 +7963,215 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1, _("thumb_process_displaced_16bit_insn: Instruction decode error")); } +static int +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, + uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rt = bits (insn2, 12, 15); + int rn = bits (insn1, 0, 3); + int op1 = bits (insn1, 7, 8); + int err = 0; + + switch (bits (insn1, 5, 6)) + { + case 0: /* Load byte and memory hints */ + if (rt == 0xf) /* PLD/PLI */ + { + if (rn == 0xf) + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pli/pld", dsc); + } + else + { + if (rn == 0xf) /* LDRB/LDRSB (literal) */ + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc, + 1); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrb{reg, immediate}/ldrbt", + dsc); + } + + break; + case 1: /* Load halfword and memory hints. */ + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pld/unalloc memhint", dsc); + else + { + int insn2_bit_8_11 = bits (insn2, 8, 11); + + if (rn == 0xf) + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc, + 2); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrh/ldrht", dsc); + } + break; + case 2: /* Load word */ + { + int insn2_bit_8_11 = bits (insn2, 8, 11); + + if (rn == 0xf) + return thumb2_copy_load_literal (gdbarch, insn1, insn2, regs, dsc, 4); + else if (op1 == 0x1) /* Encoding T3 */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, dsc, + 0, 1); + else /* op1 == 0x0 */ + { + if (insn2_bit_8_11 == 0xc || (insn2_bit_8_11 & 0x9) == 0x9) + /* LDR (immediate) */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, bit (insn2, 8), 1); + else if (insn2_bit_8_11 == 0xe) /* LDRT */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "ldrt", dsc); + else + /* LDR (register) */ + return thumb2_copy_load_reg_imm (gdbarch, insn1, insn2, regs, + dsc, 0, 0); + } + break; + } + default: + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + break; + } + return 0; +} + static void thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); + int err = 0; + unsigned short op = bit (insn2, 15); + unsigned int op1 = bits (insn1, 11, 12); + + switch (op1) + { + case 1: + { + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 6)) + { + /* Load/store {dual, execlusive}, table branch. */ + if (bits (insn1, 7, 8) == 1 && bits (insn1, 4, 5) == 1 + && bits (insn2, 5, 7) == 0) + err = thumb2_copy_table_branch (gdbarch, insn1, insn2, regs, + dsc); + else + /* PC is not allowed to use in load/store {dual, exclusive} + instructions. */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "load/store dual/ex", dsc); + } + else /* load/store multiple */ + { + switch (bits (insn1, 7, 8)) + { + case 0: case 3: /* SRS, RFE */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "srs/rfe", dsc); + break; + case 1: case 2: /* LDM/STM/PUSH/POP */ + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); + break; + } + } + break; + + case 1: + /* Data-processing (shift register). */ + err = thumb2_decode_dp_shift_reg (gdbarch, insn1, insn2, regs, + dsc); + break; + default: /* Coprocessor instructions. */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + } + case 2: /* op1 = 2 */ + if (op) /* Branch and misc control. */ + { + if (bit (insn2, 14) /* BLX/BL */ + || bit (insn2, 12) /* Unconditional branch */ + || (bits (insn1, 7, 9) != 0x7)) /* Conditional branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "misc ctrl", dsc); + } + else + { + if (bit (insn1, 9)) /* Data processing (plain binary imm). */ + { + int op = bits (insn1, 4, 8); + int rn = bits (insn1, 0, 4); + if ((op == 0 || op == 0xa) && rn == 0xf) + err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2, + regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/pb", dsc); + } + else /* Data processing (modified immeidate) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/mi", dsc); + } + break; + case 3: /* op1 = 3 */ + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 4)) + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, + regs, dsc); + else /* NEON Load/Store and Store single data item */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon elt/struct load/store", + dsc); + break; + case 1: /* op1 = 3, bits (9, 10) == 1 */ + switch (bits (insn1, 7, 8)) + { + case 0: case 1: /* Data processing (register) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp(reg)", dsc); + break; + case 2: /* Multiply and absolute difference */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mul/mua/diff", dsc); + break; + case 3: /* Long multiply and divide */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "lmul/lmua", dsc); + break; + } + break; + default: /* Coprocessor instructions */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_32bit_insn: Instruction decode error")); + } static void -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-08-19 3:13 ` Yao Qi @ 2011-08-19 16:39 ` Ulrich Weigand 2011-08-30 15:53 ` Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Ulrich Weigand @ 2011-08-19 16:39 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > Support displaced stepping for Thumb 32-bit insns. > > * arm-tdep.c (thumb_copy_unmodified_32bit): New. > (thumb2_copy_preload): New. > (thumb2_copy_copro_load_store): New. > (thumb2_copy_b_bl_blx): New. > (thumb2_copy_alu_imm): New. > (thumb2_copy_load_reg_imm): New. > (thumb2_copy_load_literal): New > (thumb2_copy_block_xfer): New. > (thumb_32bit_copy_undef): New. > (thumb_32bit_copy_unpred): New. > (thumb2_decode_ext_reg_ld_st): New. > (thumb2_decode_svc_copro): New. > (decode_thumb_32bit_store_single_data_item): New. > (thumb_copy_pc_relative_32bit): New. > (thumb_decode_pc_relative_32bit): New. > (decode_thumb_32bit_ld_mem_hints): New. > (thumb2_copy_table_branch): New > (thumb_process_displaced_32bit_insn): Process Thumb 32-bit > instructions. I'm not finding any more bugs :-) Just a couple of cosmetic issues: > + /* PC is only allowed to use in PLI (immeidate,literal) Encoding T3, and Typo: immediate > + case 1: /* Load halfword and memory hints. */ > + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "pld/unalloc memhint", dsc); > + else > + { > + int insn2_bit_8_11 = bits (insn2, 8, 11); This is now unused. Since this is (together with the previous patches that are not yet committed) is a significant change, I'm wondering a bit what additional testing we could do to catch any possibly remaining issues ... Did you try a testsuite run with a GDB build that forces displaced-stepping on by default? (I.e. change the initializer of can_use_displaced_stepping in infrun.c to can_use_displaced_stepping_on.) That would exercise the new code a lot. Thanks, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-08-19 16:39 ` Ulrich Weigand @ 2011-08-30 15:53 ` Yao Qi 2011-09-14 14:25 ` Ulrich Weigand 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-08-30 15:53 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches [-- Attachment #1: Type: text/plain, Size: 2907 bytes --] On 08/20/2011 12:39 AM, Ulrich Weigand wrote: > Since this is (together with the previous patches that are not yet committed) > is a significant change, I'm wondering a bit what additional testing we could > do to catch any possibly remaining issues ... > > Did you try a testsuite run with a GDB build that forces displaced-stepping > on by default? (I.e. change the initializer of can_use_displaced_stepping > in infrun.c to can_use_displaced_stepping_on.) That would exercise the new > code a lot. Yes, I run gdb testsuite with can_use_displaced_stepping set to can_use_displaced_stepping_on, and it does expose more problems in current patches. Three patches attached here to address these problems found so far. I don't combine them into one patch, because they belongs to different groups (thumb 16bit, thumb 32bit). After applied these three patches, there are still some failures, which are caused by some reasons, so these three patches here can be regarded as WIP patches. 1. Failures in gdb.arch/thumb2-it.exp and gdb.base/gdb1555.exp. These failures are caused by missing IT support in thumb displaced stepping. 2. Failures in gdb.base/break-interp.exp and gdb.base/nostdlib.exp. They are appeared on i686-pc-linux-gnu as well. 3. Failures (timeout) in gdb.base/sigstep.exp. IIUC, it is incorrect to displaced step instructions in signal handler, so failures are expected. 4. Failures in gdb.base/watch-vfork.exp. Displaced stepping is not completed due to a VFORK event. Current displaced stepping infrastructure or infrun logic doesn't consider the case that executing instruction in scratch can be "interrupted". When displaced stepping an vfork syscall, VFORK event comes out earlier than TRAP event. GDB will be confused. 5. Timeout failures in gdb.threads/*.exp. Similarly to #4, when execution instructions in scratch, thread context switch may happen, and GDB will be confused as well. #4 and #5 are not arm-specific problem. 6. Failures in gdb.base/watchpoint-solib.exp gdb.mi/mi-simplerun.exp. They are caused by displaced stepping instruction `mov r12, #imm`. This instruction should be unmodified-copied to scratch, and execute, but experiment shows we can't. I have a local patch that can control displaced stepping on instructions' level. Once I turn it on for `mov r12, #imm`, these tests will fail. The reason is still unknown to me. 7. Accessing some high addresses. Some instructions (alu_imm) may set PC to a hight address, such as 0xffffxxxx, and displaced stepping of this kind instruction should be handled differently. If my analysis above makes sense and is correct, we still have to fix #1 at least, to make displaced stepping really works. On the other hand, if current patches can be approved, I am happy as well, and can carry less local patches to move on. :) -- Yao (é½å°§) [-- Attachment #2: 0008-copro_load_store-install_b_bl_blx.patch --] [-- Type: text/x-patch, Size: 1308 bytes --] gdb/ * arm-tdep.c (install_copro_load_store): PC is set 4-byte aligned. (install_b_bl_blx): Likewise. --- gdb/arm-tdep.c | 11 +++++++++-- 1 files changed, 9 insertions(+), 2 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index 7df9958..67d41d2 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5558,6 +5558,8 @@ install_copro_load_store (struct gdbarch *gdbarch, struct regcache *regs, dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); rn_val = displaced_read_reg (regs, dsc, rn); + /* PC should be 4-byte aligned. */ + rn_val = rn_val & 0xfffffffc; displaced_write_reg (regs, dsc, 0, rn_val, CANNOT_WRITE_PC); dsc->u.ldst.writeback = writeback; @@ -5664,10 +5666,15 @@ install_b_bl_blx (struct gdbarch *gdbarch, struct regcache *regs, dsc->u.branch.link = link; dsc->u.branch.exchange = exchange; + dsc->u.branch.dest = dsc->insn_addr; + if (link && exchange) + /* For BLX, offset is computed from the Align (PC, 4). */ + dsc->u.branch.dest = dsc->u.branch.dest & 0xfffffffc; + if (dsc->is_thumb) - dsc->u.branch.dest = dsc->insn_addr + 4 + offset; + dsc->u.branch.dest += 4 + offset; else - dsc->u.branch.dest = dsc->insn_addr + 8 + offset; + dsc->u.branch.dest += 8 + offset; dsc->cleanup = &cleanup_branch; } -- 1.7.0.4 [-- Attachment #3: 0007-thumb-16bit.patch --] [-- Type: text/x-patch, Size: 2775 bytes --] gdb/ * arm-tdep.c (thumb_copy_b): Extract correct offset. (thumb_copy_16bit_ldr_literal): Extract correct value for rt and imm8. Set pc 4-byte aligned. Set branch dest address correctly. --- gdb/arm-tdep.c | 26 +++++++++++++++----------- 1 files changed, 15 insertions(+), 11 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index 8f13b72..7df9958 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5767,13 +5767,14 @@ thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, if (bit_12_15 == 0xd) { - offset = sbits (insn, 0, 7); + /* offset = SignExtend (imm8:0, 32) */ + offset = sbits ((insn << 1), 0, 8); cond = bits (insn, 8, 11); } else if (bit_12_15 == 0xe) /* Encoding T2 */ { offset = sbits ((insn << 1), 0, 11); - cond = INST_AL; + cond = INST_AL; } if (debug_displaced) @@ -7648,29 +7649,32 @@ thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, struct regcache *regs, struct displaced_step_closure *dsc) { - unsigned int rt = bits (insn1, 8, 7); + unsigned int rt = bits (insn1, 8, 10); unsigned int pc; - int imm8 = sbits (insn1, 0, 7); + int imm8 = (bits (insn1, 0, 7) << 2); CORE_ADDR from = dsc->insn_addr; /* LDR Rd, #imm8 Rwrite as: - Preparation: tmp2 <- R2, tmp3 <- R3, R2 <- PC, R3 <- #imm8; - if (Rd is not R0) tmp0 <- R0; + Preparation: tmp0 <- R0, tmp2 <- R2, tmp3 <- R3, R2 <- PC, R3 <- #imm8; + Insn: LDR R0, [R2, R3]; - Cleanup: R2 <- tmp2, R3 <- tmp3, - if (Rd is not R0) Rd <- R0, R0 <- tmp0 */ + Cleanup: R2 <- tmp2, R3 <- tmp3, Rd <- R0, R0 <- tmp0 */ if (debug_displaced) - fprintf_unfiltered (gdb_stdlog, "displaced: copying thumb ldr literal " - "insn %.4x\n", insn1); + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb ldr r%d [pc #%d]\n" + , rt, imm8); dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); dsc->tmp[2] = displaced_read_reg (regs, dsc, 2); dsc->tmp[3] = displaced_read_reg (regs, dsc, 3); pc = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + /* The assembler calculates the required value of the offset from the + Align(PC,4) value of this instruction to the label. */ + pc = pc & 0xfffffffc; displaced_write_reg (regs, dsc, 2, pc, CANNOT_WRITE_PC); displaced_write_reg (regs, dsc, 3, imm8, CANNOT_WRITE_PC); @@ -7712,7 +7716,7 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1, dsc->u.branch.link = 0; dsc->u.branch.exchange = 0; - dsc->u.branch.dest = from + 2 + imm5; + dsc->u.branch.dest = from + 4 + imm5; if (debug_displaced) fprintf_unfiltered (gdb_stdlog, "displaced: copying %s [r%d = 0x%x]" -- 1.7.0.4 [-- Attachment #4: 0009-thumb2.patch --] [-- Type: text/x-patch, Size: 2591 bytes --] gdb/ * arm-tdep.c (thumb2_copy_load_literal): Use register r2 and r3. (thumb2_copy_block_xfer): Set dsc->u.block.xfer_addr. (thumb_process_displaced_32bit_insn): Extract correct value for rn. --- gdb/arm-tdep.c | 23 +++++++++++++---------- 1 files changed, 13 insertions(+), 10 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index 67d41d2..6d76999 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -6367,21 +6367,23 @@ thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1, /* Rewrite instruction LDR Rt imm12 into: - Prepare: tmp[0] <- r0, tmp[1] <- r1, tmp[2] <- r2, r1 <- pc, r2 <- imm12 + Prepare: tmp[0] <- r0, tmp[1] <- r2, tmp[2] <- r3, r2 <- pc, r3 <- imm12 - LDR R0, R1, R2, + LDR R0, R2, R3, - Cleanup: rt <- r0, r0 <- tmp[0], r1 <- tmp[1], r2 <- tmp[2]. */ + Cleanup: rt <- r0, r0 <- tmp[0], r2 <- tmp[1], r3 <- tmp[2]. */ dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); - dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); dsc->tmp[2] = displaced_read_reg (regs, dsc, 2); + dsc->tmp[3] = displaced_read_reg (regs, dsc, 3); pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); - displaced_write_reg (regs, dsc, 1, pc_val, CANNOT_WRITE_PC); - displaced_write_reg (regs, dsc, 2, imm12, CANNOT_WRITE_PC); + pc_val = pc_val & 0xfffffffc; + + displaced_write_reg (regs, dsc, 2, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 3, imm12, CANNOT_WRITE_PC); dsc->rd = rt; @@ -6390,9 +6392,9 @@ thumb2_copy_load_literal (struct gdbarch *gdbarch, uint16_t insn1, dsc->u.ldst.writeback = 0; dsc->u.ldst.restore_r4 = 0; - /* LDR R0, R1, R2 */ - dsc->modinsn[0] = 0xf851; - dsc->modinsn[1] = 0x2; + /* LDR R0, R2, R3 */ + dsc->modinsn[0] = 0xf852; + dsc->modinsn[1] = 0x3; dsc->numinsns = 2; dsc->cleanup = &cleanup_load; @@ -6875,6 +6877,7 @@ thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, dsc->u.block.before = bit (insn1, 8); dsc->u.block.writeback = writeback; dsc->u.block.cond = INST_AL; + dsc->u.block.xfer_addr = displaced_read_reg (regs, dsc, rn); if (load) { @@ -8126,7 +8129,7 @@ thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, if (bit (insn1, 9)) /* Data processing (plain binary imm). */ { int op = bits (insn1, 4, 8); - int rn = bits (insn1, 0, 4); + int rn = bits (insn1, 0, 3); if ((op == 0 || op == 0xa) && rn == 0xf) err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2, regs, dsc); -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-08-30 15:53 ` Yao Qi @ 2011-09-14 14:25 ` Ulrich Weigand 2011-10-09 13:28 ` Yao Qi 2011-10-10 1:41 ` Yao Qi 0 siblings, 2 replies; 19+ messages in thread From: Ulrich Weigand @ 2011-09-14 14:25 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > Yes, I run gdb testsuite with can_use_displaced_stepping set to > can_use_displaced_stepping_on, and it does expose more problems in > current patches. OK, thanks for verifying! > Three patches attached here to address these problems > found so far. I don't combine them into one patch, because they belongs > to different groups (thumb 16bit, thumb 32bit). The three patches look good to me. > After applied these three patches, there are still some failures, which > are caused by some reasons, so these three patches here can be regarded > as WIP patches. > > 1. Failures in gdb.arch/thumb2-it.exp and gdb.base/gdb1555.exp. > These failures are caused by missing IT support in thumb displaced stepping. Ah, right. Fortunately, I think IT support should be relatively easy to add, in fact we should be able to just completely emulate it: - The first thing we do when we're about to displaced-step a Thumb insn is to check the itstate and see whether we're in an IT block. - If so, we check whether the condition is true, given the current state of the flags. - If the condition is false, we always use a NOP as the displaced instruction; otherwise, compute the displaced instruction as usual. - In either case, set the CSPR register as if we're outside of any IT block while actually executing the displaced instruction. (This also makes sure that the breakpoint at the end will always be executed.) - During fixup after execution is done, re-set IT state in the CSPR to the proper value (advanced by one instruction). See also thumb_get_next_pc_raw for how to manipulate IT state ... Does this look good to you? > 2. Failures in gdb.base/break-interp.exp and gdb.base/nostdlib.exp. > They are appeared on i686-pc-linux-gnu as well. > > 3. Failures (timeout) in gdb.base/sigstep.exp. IIUC, it is > incorrect to displaced step instructions in signal handler, so failures > are expected. > > 4. Failures in gdb.base/watch-vfork.exp. Displaced stepping is not > completed due to a VFORK event. Current displaced stepping > infrastructure or infrun logic doesn't consider the case that executing > instruction in scratch can be "interrupted". When displaced stepping an > vfork syscall, VFORK event comes out earlier than TRAP event. GDB will > be confused. > > 5. Timeout failures in gdb.threads/*.exp. Similarly to #4, when > execution instructions in scratch, thread context switch may happen, and > GDB will be confused as well. #4 and #5 are not arm-specific problem. So these are apparently all common-code problems. While those ought to be fixed, of course, IMO they should not prevent the ARM support patches from going forward at this point ... > 6. Failures in gdb.base/watchpoint-solib.exp gdb.mi/mi-simplerun.exp. > They are caused by displaced stepping instruction `mov r12, #imm`. > This instruction should be unmodified-copied to scratch, and execute, > but experiment shows we can't. I have a local patch that can control > displaced stepping on instructions' level. Once I turn it on for `mov > r12, #imm`, these tests will fail. The reason is still unknown to me. > > 7. Accessing some high addresses. Some instructions (alu_imm) may > set PC to a hight address, such as 0xffffxxxx, and displaced stepping of > this kind instruction should be handled differently. I'm afraid I don't quite understand those last two points. Could you elaborate what exactly is going wrong? > If my analysis above makes sense and is correct, we still have to fix #1 > at least, to make displaced stepping really works. On the other hand, > if current patches can be approved, I am happy as well, and can carry > less local patches to move on. :) Agreed, I think we should fix IT support before the patches go in. > gdb/ > * arm-tdep.c (thumb_copy_b): Extract correct offset. > (thumb_copy_16bit_ldr_literal): Extract correct value for rt and imm8. > Set pc 4-byte aligned. > Set branch dest address correctly. The last line refers to thumb_copy_cbnz_cbz, I think. > + fprintf_unfiltered (gdb_stdlog, > + "displaced: copying thumb ldr r%d [pc #%d]\n" > + , rt, imm8); Comma at the end of the previous line, please. Bye, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-09-14 14:25 ` Ulrich Weigand @ 2011-10-09 13:28 ` Yao Qi 2011-10-10 14:40 ` Ulrich Weigand 2011-10-10 1:41 ` Yao Qi 1 sibling, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-10-09 13:28 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches On 09/14/2011 09:39 PM, Ulrich Weigand wrote: >> > 1. Failures in gdb.arch/thumb2-it.exp and gdb.base/gdb1555.exp. >> > These failures are caused by missing IT support in thumb displaced stepping. > Ah, right. Fortunately, I think IT support should be relatively easy to > add, in fact we should be able to just completely emulate it: > > - The first thing we do when we're about to displaced-step a Thumb insn > is to check the itstate and see whether we're in an IT block. > > - If so, we check whether the condition is true, given the current state > of the flags. > > - If the condition is false, we always use a NOP as the displaced > instruction; otherwise, compute the displaced instruction as usual. > > - In either case, set the CSPR register as if we're outside of any > IT block while actually executing the displaced instruction. (This > also makes sure that the breakpoint at the end will always be > executed.) > > - During fixup after execution is done, re-set IT state in the CSPR > to the proper value (advanced by one instruction). > > See also thumb_get_next_pc_raw for how to manipulate IT state ... > > Does this look good to you? > Yes, it looks right to me in general. However, it doesn't handle the case of `stepi' in condition blocks when displaced stepping is enabled, as gdb.arch/thumb2-it.exp tested. We expect inferior stops at the next true-condition instruction instead of next instruction after typing `stepi'. In this design, inferior will stop at the next instruction regardless of condition. We may adjust PC value in fixup to skip these false-condition instructions. -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-10-09 13:28 ` Yao Qi @ 2011-10-10 14:40 ` Ulrich Weigand 0 siblings, 0 replies; 19+ messages in thread From: Ulrich Weigand @ 2011-10-10 14:40 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > On 09/14/2011 09:39 PM, Ulrich Weigand wrote: > >> > 1. Failures in gdb.arch/thumb2-it.exp and gdb.base/gdb1555.exp. > >> > These failures are caused by missing IT support in thumb displaced stepping. > > Ah, right. Fortunately, I think IT support should be relatively easy to > > add, in fact we should be able to just completely emulate it: > > > > - The first thing we do when we're about to displaced-step a Thumb insn > > is to check the itstate and see whether we're in an IT block. > > > > - If so, we check whether the condition is true, given the current state > > of the flags. > > > > - If the condition is false, we always use a NOP as the displaced > > instruction; otherwise, compute the displaced instruction as usual. > > > > - In either case, set the CSPR register as if we're outside of any > > IT block while actually executing the displaced instruction. (This > > also makes sure that the breakpoint at the end will always be > > executed.) > > > > - During fixup after execution is done, re-set IT state in the CSPR > > to the proper value (advanced by one instruction). > > > > See also thumb_get_next_pc_raw for how to manipulate IT state ... > > > > Does this look good to you? > > > > Yes, it looks right to me in general. However, it doesn't handle the > case of `stepi' in condition blocks when displaced stepping is enabled, > as gdb.arch/thumb2-it.exp tested. We expect inferior stops at the next > true-condition instruction instead of next instruction after typing > `stepi'. In this design, inferior will stop at the next instruction > regardless of condition. We may adjust PC value in fixup to skip these > false-condition instructions. OK, good point. I agree. Thanks, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-09-14 14:25 ` Ulrich Weigand 2011-10-09 13:28 ` Yao Qi @ 2011-10-10 1:41 ` Yao Qi 2011-10-10 14:39 ` Ulrich Weigand 1 sibling, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-10-10 1:41 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches On 09/14/2011 09:39 PM, Ulrich Weigand wrote: >> > 6. Failures in gdb.base/watchpoint-solib.exp gdb.mi/mi-simplerun.exp. >> > They are caused by displaced stepping instruction `mov r12, #imm`. >> > This instruction should be unmodified-copied to scratch, and execute, >> > but experiment shows we can't. I have a local patch that can control >> > displaced stepping on instructions' level. Once I turn it on for `mov >> > r12, #imm`, these tests will fail. The reason is still unknown to me. >> > >> > 7. Accessing some high addresses. Some instructions (alu_imm) may >> > set PC to a hight address, such as 0xffffxxxx, and displaced stepping of >> > this kind instruction should be handled differently. > I'm afraid I don't quite understand those last two points. Could you > elaborate what exactly is going wrong? > I don't have much details on hand for problem #6, but I can explain problem #7 a little bit here. There are some kernel helpers on ARM in a high page (0xffffXXXX), and application can access them like this, (gdb) disassemble 0x400eaba4,+4 Dump of assembler code from 0x400eaba4 to 0x400eaba8: => 0x400eaba4: sub pc, r0, #31 End of assembler dump. (gdb) p/x $r0 $2 = 0xffff0fff We have some bits in gdb to handle it (arm-linux-tdep.c:arm_catch_kernel_helper_return). The problem here is that when inferior stops at such high address, gdb stops stepping and inserts a step-resume breakpoint, as shown in this log below, displaced: stepping insn e240f01f at 40021ba4 displaced: copying immediate ALU insn e240f01f displaced: read r0 value ffff0fff displaced: read r1 value 0d696914 displaced: read r0 value ffff0fff displaced: read pc value 40021bac displaced: writing r0 value 40021bac displaced: writing r1 value ffff0fff displaced: writing insn e241001f at 000083ac displaced: copy 0x40021ba4->0x83ac: displaced: check mode of 40021ba4 instead of 000083ac displaced: displaced pc to 0x83ac displaced: restored process 2067 0x83ac displaced: read r0 value ffff0fe0 displaced: writing r0 value ffff0fff displaced: writing r1 value 0d696914 displaced: writing pc ffff0fe0 infrun: stop_pc = 0xffff0fe0 infrun: stepped into undebuggable function Obviously, it is not what we want here. What we want here is to continue stepping, and then arm_catch_kernel_helper_return has the chance to handle PC at high address, and make everything correct. -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-10-10 1:41 ` Yao Qi @ 2011-10-10 14:39 ` Ulrich Weigand 0 siblings, 0 replies; 19+ messages in thread From: Ulrich Weigand @ 2011-10-10 14:39 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > I don't have much details on hand for problem #6, but I can explain > problem #7 a little bit here. There are some kernel helpers on ARM in a > high page (0xffffXXXX), and application can access them like this, > > (gdb) disassemble 0x400eaba4,+4 > Dump of assembler code from 0x400eaba4 to 0x400eaba8: > => 0x400eaba4: sub pc, r0, #31 > End of assembler dump. > (gdb) p/x $r0 > $2 = 0xffff0fff > > We have some bits in gdb to handle it > (arm-linux-tdep.c:arm_catch_kernel_helper_return). The problem here is > that when inferior stops at such high address, gdb stops stepping and > inserts a step-resume breakpoint, as shown in this log below, > > displaced: stepping insn e240f01f at 40021ba4 > displaced: copying immediate ALU insn e240f01f > displaced: read r0 value ffff0fff > displaced: read r1 value 0d696914 > displaced: read r0 value ffff0fff > displaced: read pc value 40021bac > displaced: writing r0 value 40021bac > displaced: writing r1 value ffff0fff > displaced: writing insn e241001f at 000083ac > displaced: copy 0x40021ba4->0x83ac: displaced: check mode of 40021ba4 > instead of 000083ac > displaced: displaced pc to 0x83ac > displaced: restored process 2067 0x83ac > displaced: read r0 value ffff0fe0 > displaced: writing r0 value ffff0fff > displaced: writing r1 value 0d696914 > displaced: writing pc ffff0fe0 > infrun: stop_pc = 0xffff0fe0 > infrun: stepped into undebuggable function > > Obviously, it is not what we want here. What we want here is to > continue stepping, and then arm_catch_kernel_helper_return has the > chance to handle PC at high address, and make everything correct. I still don't quite understand what's wrong with the above sequence; GDB displaced-stepped the "sub pc" command, recognized it was now in an undebuggable function, and then inserted a step-resume breakpoint to continue out of it. This should work just fine, and should in fact work the same in ARM mode too ... The special code in arm_catch_kernel_helper_return is only needed if we actually step in code *in* the kernel helper (i.e. if we do a "si" on the "sub pc", and then *another* "si"). If *that* happens, we should run into arm_catch_kernel_helper_return -- b.t.w. it seems this function is then not Thumb-safe: dsc->modinsn[0] = 0xe59ef004; /* ldr pc, [lr, #4]. */ I guess this needs to check for Thumb mode and produce an appropiate instruction in that case ... Bye, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* [patch 0/3] Displaced stepping for 16-bit Thumb instructions @ 2010-12-25 14:17 Yao Qi 2011-03-24 13:49 ` [try 2nd 0/8] Displaced stepping for " Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2010-12-25 14:17 UTC (permalink / raw) To: gdb-patches Displaced stepping doesn't work for Thumb instructions so far. This set of patches are about support displaced stepping of 16-bit Thumb instructions. There are much more 32-bit Thumb instructions than 16-bit Thumb instructions, so it takes more time to support 32-bit Thumb instructions. I'd like to send these three patches first to review. Once these three are done, it is straight forward to support 32-bit Thumb instructions. Regression tested these three patches along with another pending patch on armv7l-unknown-linux-gnueabi. http://sourceware.org/ml/gdb-patches/2010-12/msg00427.html No regressions and some test failures are fixed. -FAIL: gdb.base/moribund-step.exp: running to main in runto -FAIL: gdb.mi/mi-nonstop-exit.exp: mi runto main (timeout) -FAIL: gdb.mi/mi-nonstop.exp: mi runto main (timeout) -FAIL: gdb.mi/mi-ns-stale-regcache.exp: mi runto main (timeout) -FAIL: gdb.mi/mi-nsintrall.exp: mi runto main (timeout) -FAIL: gdb.mi/mi-nsmoribund.exp: mi runto main (timeout) -FAIL: gdb.mi/mi-nsthrexec.exp: mi runto main (timeout) -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* [try 2nd 0/8] Displaced stepping for Thumb instructions 2010-12-25 14:17 [patch 0/3] Displaced stepping for 16-bit Thumb instructions Yao Qi @ 2011-03-24 13:49 ` Yao Qi 2011-03-24 14:05 ` [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-03-24 13:49 UTC (permalink / raw) To: gdb-patches This is the 2nd try for displaced stepping for Thumb instructions. Ulrich's comments in last thread[*] are addressed. [*] "[patch 0/3] Displaced stepping for 16-bit Thumb instructions" http://sourceware.org/ml/gdb-patches/2010-12/msg00457.html -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-03-24 13:49 ` [try 2nd 0/8] Displaced stepping for " Yao Qi @ 2011-03-24 14:05 ` Yao Qi 2011-05-05 13:25 ` Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-03-24 14:05 UTC (permalink / raw) To: gdb-patches [-- Attachment #1: Type: text/plain, Size: 74 bytes --] Displaced stepping for 32-bit Thumb instructions. -- Yao (é½å°§) [-- Attachment #2: 0005-thumb-32bit.patch --] [-- Type: text/x-patch, Size: 24236 bytes --] 2011-03-24 Yao Qi <yao@codesourcery.com> * gdb/arm-tdep.c (thumb_copy_unmodified_32bit): New. (thumb2_copy_preload): New. (thumb2_copy_preload_reg): New. (thumb2_copy_copro_load_store): New. (thumb2_copy_b_bl_blx): New. (thumb2_copy_alu_reg): New. (thumb2_copy_ldr_str_ldrb_strb): New. (thumb2_copy_block_xfer): New. (thumb_32bit_copy_undef): New. (thumb2_decode_ext_reg_ld_st): New. (thumb2_decode_svc_copro): New. (thumb_decode_pc_relative_32bit): New. (decode_thumb_32bit_ld_mem_hints): New. (thumb_process_displaced_32bit_insn): Process 32-bit Thumb insn. --- gdb/arm-tdep.c | 701 +++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 695 insertions(+), 6 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index a356451..6ba7b5b 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5333,6 +5333,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, unsigned int insn1, + unsigned int insn2, const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, " + "opcode/class '%s' unmodified\n", insn1, insn2, + iname); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any modification. */ static int @@ -5400,6 +5417,27 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + if (rn == ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", + insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + install_preload (gdbarch, regs, dsc, rn); + + return 0; +} + /* Preload instructions with register offset. */ static void @@ -5448,6 +5486,31 @@ arm_copy_preload_reg (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_preload_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); + + + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload reg", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", + insn1, insn1); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = (insn2 & 0xfff0) | 0x1; + dsc->numinsns = 2; + + install_preload_reg (gdbarch, regs, dsc, rn, rm); + return 0; +} + /* Copy/cleanup coprocessor load and store instructions. */ static void @@ -5500,6 +5563,33 @@ copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + + if (rn == ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "copro load/store", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " + "load/store insn %.4x%.4x\n", insn1, insn2); + + dsc->u.ldst.writeback = bit (insn1, 9); + dsc->u.ldst.rn = rn; + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + install_copy_copro_load_store (gdbarch, regs, dsc); + + return 0; +} + /* Clean up branch instructions (actually perform the branch, by setting PC). */ @@ -5584,6 +5674,58 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return install_b_bl_blx (gdbarch, cond, exchange, link, offset, regs, dsc); } +static int +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, unsigned short insn1, + unsigned short insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn2, 14); + int exchange = link && !bit (insn2, 12); + int cond = INST_AL; + long offset =0; + int j1 = bit (insn2, 13); + int j2 = bit (insn2, 11); + int s = sbits (insn1, 10, 10); + int i1 = !(j1 ^ bit (insn1, 10)); + int i2 = !(j2 ^ bit (insn1, 10)); + + if (!link && !exchange) /* B */ + { + cond = bits (insn1, 6, 9); + offset = (bits (insn2, 0, 10) << 1); + if (bit (insn2, 12)) /* Encoding T4 */ + { + offset |= (bits (insn1, 0, 9) << 12) + | (i2 << 22) + | (i1 << 23) + | (s << 24); + } + else /* Encoding T3 */ + offset |= (bits (insn1, 0, 5) << 12) + | (j1 << 18) + | (j2 << 19) + | (s << 20); + } + else + { + offset = (bits (insn1, 0, 9) << 12); + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); + offset |= exchange ? + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn " + "%.4x %.4x with offset %.8lx\n", + (exchange) ? "blx" : "bl", + insn1, insn2, offset); + + dsc->u.branch.dest = dsc->insn_addr + 4 + offset; + dsc->modinsn[0] = THUMB_NOP; + + return install_b_bl_blx (gdbarch, cond, exchange, 1, offset, regs, dsc); +} + /* Copy B Thumb instructions. */ static int thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, @@ -5849,6 +5991,40 @@ thumb_copy_alu_reg (struct gdbarch *gdbarch, unsigned short insn, return install_alu_reg (gdbarch, regs, dsc); } +static int +thumb2_copy_alu_reg (struct gdbarch *gdbarch, unsigned short insn1, + unsigned short insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int op2 = bits (insn2, 4, 7); + int is_mov = (op2 == 0x0); + + dsc->u.alu_reg.rn = bits (insn1, 0, 3); /* Rn */ + dsc->u.alu_reg.rm = bits (insn2, 0, 3); /* Rm */ + dsc->rd = bits (insn2, 8, 11); /* Rd */ + + /* In Thumb-2, rn, rm and rd can't be r15. */ + if (dsc->u.alu_reg.rn != ARM_PC_REGNUM + && dsc->u.alu_reg.rm != ARM_PC_REGNUM + && dsc->rd != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU reg", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", + "ALU", insn1, insn2); + + if (is_mov) + dsc->modinsn[0] = insn1; + else + dsc->modinsn[0] = ((insn1 & 0xfff0) | 0x1); + + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x2); + dsc->numinsns = 2; + + return install_alu_reg (gdbarch, regs, dsc); + +} + /* Cleanup/copy arithmetic/logic insns with shifted register RHS. */ static void @@ -6117,6 +6293,69 @@ install_ldr_str_ldrb_strb (struct gdbarch *gdbarch, struct regcache *regs, } static int +thumb2_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, unsigned short insn1, + unsigned short insn2, struct regcache *regs, + struct displaced_step_closure *dsc, + int load, int byte, int usermode, int writeback) +{ + int immed = !bit (insn1, 9); + unsigned int rt = bits (insn2, 12, 15); + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ + + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", + load ? (byte ? "ldrb" : "ldr") + : (byte ? "strb" : "str"), usermode ? "t" : "", + rt, rn, insn1, insn2); + + dsc->rd = rt; + dsc->u.ldst.rn = rn; + + install_ldr_str_ldrb_strb (gdbarch, regs, dsc, load, byte, usermode, + writeback, rm, immed); + + if (load || rt != ARM_PC_REGNUM) + { + dsc->u.ldst.restore_r4 = 0; + + if (immed) + /* {ldr,str}[b]<cond> rt, [rn, #imm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, #imm]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = insn2 & 0x0fff; + } + else + /* {ldr,str}[b]<cond> rt, [rn, rm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, r3]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; + } + + dsc->numinsns = 2; + } + else + { + /* In Thumb-32 instructions, the behavior is unpredictable when Rt is + PC, while the behavior is undefined when Rn is PC. Shortly, neither + Rt nor Rn can be PC. */ + + gdb_assert (0); + } + + return 0; +} + +static int arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, struct displaced_step_closure *dsc, @@ -6508,6 +6747,87 @@ copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rn = bits (insn1, 0, 3); + int load = bit (insn1, 4); + int writeback = bit (insn1, 5); + + /* Block transfers which don't mention PC can be run directly + out-of-line. */ + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); + + if (rn == ARM_PC_REGNUM) + { + warning (_("displaced: Unpredictable LDM or STM with " + "base register r15")); + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unpredictable ldm/stm", dsc); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " + "%.4x%.4x\n", insn1, insn2); + + /* Clear bit 13, since it should be always zero. */ + dsc->u.block.regmask = (insn2 & 0xdfff); + dsc->u.block.rn = rn; + + dsc->u.block.load = bit (insn1, 4); + dsc->u.block.user = bit (insn1, 6); + dsc->u.block.increment = bit (insn1, 7); + dsc->u.block.before = bit (insn1, 8); + dsc->u.block.writeback = writeback; + dsc->u.block.cond = INST_AL; + + if (load) + { + if (dsc->u.block.regmask == 0xffff) + { + /* This branch is impossible to happen. */ + gdb_assert (0); + } + else + { + unsigned int regmask = dsc->u.block.regmask; + unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + if (writeback) + insn1 &= ~(1 << 5); + + new_regmask = (1 << num_in_list) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, " + "{..., pc}: original reg list %.4x, modified " + "list %.4x\n"), rn, writeback ? "!" : "", + (int) dsc->u.block.regmask, new_regmask); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = (new_regmask & 0xffff); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_block_load_pc; + } + } + else + { + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + dsc->cleanup = &cleanup_block_store_pc; + } + return 0; +} + /* Cleanup/copy SVC (SWI) instructions. These two functions are overridden for Linux, where some SVC instructions must be treated specially. */ @@ -6599,6 +6919,23 @@ copy_undef (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy unpredictable instructions. */ static int @@ -6993,6 +7330,43 @@ decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn, return 1; } +/* Decode extension register load/store. Exactly the same as + arm_decode_ext_reg_ld_st. */ + +static int +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int opcode = bits (insn1, 4, 8); + + switch (opcode) + { + case 0x04: case 0x05: + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vmov", dsc); + + case 0x08: case 0x0c: /* 01x00 */ + case 0x0a: case 0x0e: /* 01x10 */ + case 0x12: case 0x16: /* 10x10 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vstm/vpush", dsc); + + case 0x09: case 0x0d: /* 01x01 */ + case 0x0b: case 0x0f: /* 01x11 */ + case 0x13: case 0x17: /* 10x11 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vldm/vpop", dsc); + + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); + } + + /* Should be unreachable. */ + return 1; +} + static int decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7040,7 +7414,105 @@ decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, } static int -copy_pc_relative (struct regcache *regs, struct displaced_step_closure *dsc, +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int coproc = bits (insn2, 8, 11); + unsigned int op1 = bits (insn1, 4, 9); + unsigned int bit_5_8 = bits (insn1, 5, 8); + unsigned int bit_9 = bit (insn1, 9); + unsigned int bit_4 = bit (insn1, 4); + unsigned int rn = bits (insn1, 0, 3); + + if (bit_9 == 0) + { + if (bit_5_8 == 2) + { + if ((coproc & 0xe) == 0xa) /* 64-bit xfer. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 64bit xfer", dsc); + else + { + if (bit_4) /* MRRC/MRRC2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mrrc/mrrc2", dsc); + else /* MCRR/MCRR2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mcrr/mcrr2", dsc); + } + } + else if (bit_5_8 == 0) /* UNDEFINED. */ + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + else + { + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ + if ((coproc & 0xe) == 0xa) + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, + dsc); + else /* coproc is not 101x. */ + { + if (bit_4 == 0) /* STC/STC2. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + else + { + if (rn == 0xf) /* LDC/LDC2 literal. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + else /* LDC/LDC2 immeidate. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + } + } + } + } + else + { + unsigned int op = bit (insn2, 4); + unsigned int bit_8 = bit (insn1, 8); + + if (bit_8) /* Advanced SIMD */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon", dsc); + else + { + /*coproc is 101x. */ + if ((coproc & 0xe) == 0xa) + { + if (op) /* 8,16,32-bit xfer. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 8/16/32 bit xfer", + dsc); + else /* VFP data processing. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp dataproc", dsc); + } + else + { + if (op) + { + if (bit_4) /* MRC/MRC2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mrc/mrc2", dsc); + else /* MCR/MCR2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mcr/mcr2", dsc); + } + else /* CDP/CDP 2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "cdp/cdp2", dsc); + } + } + } + + + + return 0; +} + +static int +decode_pc_relative (struct regcache *regs, struct displaced_step_closure *dsc, int rd, unsigned int imm, int is_32bit) { int val; @@ -7086,7 +7558,27 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, unsigned short insn, "displaced: copying thumb adr r%d, #%d insn %.4x\n", rd, imm8, insn); - return copy_pc_relative (regs, dsc, rd, imm8, 0); + return decode_pc_relative (regs, dsc, rd, imm8, 0); +} + +static int +thumb_decode_pc_relative_32bit (struct gdbarch *gdbarch, unsigned short insn1, + unsigned short insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn2, 8, 11); + /* Since immeidate has the same encoding in both ADR and ADDS, so we simply + extract raw immediate encoding rather than computing immediate. When + generating ADDS instruction, we can simply perform OR operation to set + immediate into ADDS. */ + unsigned int imm = (insn2 & 0x70ff) | (bit (insn1, 10) << 26); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d insn %.4x%.4x\n", + rd, imm, insn1, insn2); + + return decode_pc_relative (regs, dsc, rd, imm, 1); } static int @@ -7348,12 +7840,209 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, _("thumb_process_displaced_insn: Instruction decode error")); } +static int +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, + unsigned short insn1, unsigned short insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rd = bits (insn2, 12, 15); + int user_mode = (bits (insn2, 8, 11) == 0xe); + int err = 0; + int writeback = 0; + + switch (bits (insn1, 5, 6)) + { + case 0: /* Load byte and memory hints */ + if (rd == 0xf) /* PLD/PLI */ + { + if (bits (insn2, 6, 11)) + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); + } + else + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, + dsc, 1, 1, user_mode, writeback); + } + + break; + case 1: /* Load halfword and memory hints */ + if (rd == 0xf) /* PLD{W} and Unalloc memory hint */ + { + if (bits (insn2, 6, 11)) + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); + } + else + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, + dsc, 1, 0, user_mode, writeback); + } + break; + case 2: /* Load word */ + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, dsc, + 1, 0, user_mode, writeback); + break; + } + default: + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + break; + } + return 0; +} + static void thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); + int err = 0; + unsigned short op = bit (insn2, 15); + unsigned int op1 = bits (insn1, 11, 12); + + switch (op1) + { + case 1: + { + switch (bits (insn1, 9, 10)) + { + case 0: /* load/store multiple */ + switch (bits (insn1, 7, 8)) + { + case 0: case 3: /* SRS, RFE */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "srs/rfe", dsc); + break; + case 1: case 2: /* LDM/STM/PUSH/POP */ + /* These Thumb 32-bit insns have the same encodings as ARM + counterparts. */ + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); + } + break; + case 1: + /* Data-processing (shift register). In ARM archtecture reference + manual, this entry is + "Data-processing (shifted register) on page A6-31". However, + instructions in table A6-31 shows that they are `alu_reg' + instructions. There is no alu_shifted_reg instructions in + Thumb-2. */ + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, + dsc); + break; + default: /* Coprocessor instructions */ + /* Thumb 32bit coprocessor instructions have the same encoding + as ARM's. */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + } + case 2: /* op1 = 2 */ + if (op) /* Branch and misc control. */ + { + if (bit (insn2, 14)) /* BLX/BL */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7) + /* Conditional Branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "misc ctrl", dsc); + } + else + { + if (bit (insn1, 9)) /* Data processing (plain binary imm) */ + { + int op = bits (insn1, 4, 8); + int rn = bits (insn1, 0, 4); + if ((op == 0 || op == 0xa) && rn == 0xf) + err = thumb_decode_pc_relative_32bit (gdbarch, insn1, insn2, + regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/pb", dsc); + } + else /* Data processing (modified immeidate) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/mi", dsc); + } + break; + case 3: /* op1 = 3 */ + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 4)) + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, + regs, dsc); + else + { + if (bit (insn1, 8)) /* NEON Load/Store */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon elt/struct load/store", + dsc); + else /* Store single data item */ + { + int user_mode = (bits (insn2, 8, 11) == 0xe); + int byte = (bits (insn1, 5, 7) == 0 + || bits (insn1, 5, 7) == 4); + int writeback = 0; + + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) + writeback = bit (insn2, 8); + + err = thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, + regs, dsc, 0, byte, + user_mode, writeback); + } + } + break; + case 1: /* op1 = 3, bits (9, 10) == 1 */ + switch (bits (insn1, 7, 8)) + { + case 0: case 1: /* Data processing (register) */ + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, dsc); + break; + case 2: /* Multiply and absolute difference */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mul/mua/diff", dsc); + break; + case 3: /* Long multiply and divide */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "lmul/lmua", dsc); + break; + } + break; + default: /* Coprocessor instructions */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_insn: Instruction decode error")); + } static void -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-03-24 14:05 ` [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns Yao Qi @ 2011-05-05 13:25 ` Yao Qi 2011-05-17 17:14 ` Ulrich Weigand 0 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-05-05 13:25 UTC (permalink / raw) To: gdb-patches [-- Attachment #1: Type: text/plain, Size: 53 bytes --] Here is the updated version. -- Yao (é½å°§) [-- Attachment #2: 0002-thumb-32bit.patch --] [-- Type: text/x-patch, Size: 24385 bytes --] 2011-05-05 Yao Qi <yao@codesourcery.com> Support displaced stepping for Thumb 32-bit insns. * gdb/arm-tdep.c (thumb_copy_unmodified_32bit): New. (thumb2_copy_preload): New. (thumb2_copy_preload_reg): New. (thumb2_copy_copro_load_store): New. (thumb2_copy_b_bl_blx): New. (thumb2_copy_alu_reg): New. (thumb2_copy_ldr_str_ldrb_strb): New. (thumb2_copy_block_xfer): New. (thumb_32bit_copy_undef): New. (thumb2_decode_ext_reg_ld_st): New. (thumb2_decode_svc_copro): New. (thumb_copy_pc_relative_32bit): New. (thumb_decode_pc_relative_32bit): New. (decode_thumb_32bit_ld_mem_hints): New. (thumb_process_displaced_32bit_insn): Process Thumb 32-bit instructions. --- gdb/arm-tdep.c | 702 +++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 701 insertions(+), 1 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index 83ac297..6fb1eaa 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5341,6 +5341,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, " + "opcode/class '%s' unmodified\n", insn1, insn2, + iname); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any modification. */ static int @@ -5408,6 +5425,27 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + if (rn == ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", + insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + install_preload (gdbarch, regs, dsc, rn); + + return 0; +} + /* Preload instructions with register offset. */ static void @@ -5456,6 +5494,30 @@ arm_copy_preload_reg (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_preload_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); + + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload reg", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", + insn1, insn1); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = (insn2 & 0xfff0) | 0x1; + dsc->numinsns = 2; + + install_preload_reg (gdbarch, regs, dsc, rn, rm); + return 0; +} + /* Copy/cleanup coprocessor load and store instructions. */ static void @@ -5517,6 +5579,30 @@ arm_copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + + if (rn == ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "copro load/store", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " + "load/store insn %.4x%.4x\n", insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + install_copro_load_store (gdbarch, regs, dsc, bit (insn1, 9), rn); + + return 0; +} + /* Clean up branch instructions (actually perform the branch, by setting PC). */ @@ -5604,6 +5690,58 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn2, 14); + int exchange = link && !bit (insn2, 12); + int cond = INST_AL; + long offset =0; + int j1 = bit (insn2, 13); + int j2 = bit (insn2, 11); + int s = sbits (insn1, 10, 10); + int i1 = !(j1 ^ bit (insn1, 10)); + int i2 = !(j2 ^ bit (insn1, 10)); + + if (!link && !exchange) /* B */ + { + cond = bits (insn1, 6, 9); + offset = (bits (insn2, 0, 10) << 1); + if (bit (insn2, 12)) /* Encoding T4 */ + { + offset |= (bits (insn1, 0, 9) << 12) + | (i2 << 22) + | (i1 << 23) + | (s << 24); + } + else /* Encoding T3 */ + offset |= (bits (insn1, 0, 5) << 12) + | (j1 << 18) + | (j2 << 19) + | (s << 20); + } + else + { + offset = (bits (insn1, 0, 9) << 12); + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); + offset |= exchange ? + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn " + "%.4x %.4x with offset %.8lx\n", + (exchange) ? "blx" : "bl", + insn1, insn2, offset); + + dsc->modinsn[0] = THUMB_NOP; + + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, 1, offset); + return 0; +} + /* Copy B Thumb instructions. */ static int thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, @@ -5866,6 +6004,41 @@ thumb_copy_alu_reg (struct gdbarch *gdbarch, uint16_t insn, return 0; } +static int +thumb2_copy_alu_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int op2 = bits (insn2, 4, 7); + int is_mov = (op2 == 0x0); + unsigned int rn, rm, rd; + + rn = bits (insn1, 0, 3); /* Rn */ + rm = bits (insn2, 0, 3); /* Rm */ + rd = bits (insn2, 8, 11); /* Rd */ + + /* In Thumb-2, rn, rm and rd can't be r15. */ + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM + && rd != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU reg", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", + "ALU", insn1, insn2); + + if (is_mov) + dsc->modinsn[0] = insn1; + else + dsc->modinsn[0] = ((insn1 & 0xfff0) | 0x1); + + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x2); + dsc->numinsns = 2; + + install_alu_reg (gdbarch, regs, dsc, rd, rn, rm); + + return 0; +} + /* Cleanup/copy arithmetic/logic insns with shifted register RHS. */ static void @@ -6135,6 +6308,67 @@ install_ldr_str_ldrb_strb (struct gdbarch *gdbarch, struct regcache *regs, } static int +thumb2_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc, + int load, int byte, int usermode, int writeback) +{ + int immed = !bit (insn1, 9); + unsigned int rt = bits (insn2, 12, 15); + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ + + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", + load ? (byte ? "ldrb" : "ldr") + : (byte ? "strb" : "str"), usermode ? "t" : "", + rt, rn, insn1, insn2); + + install_ldr_str_ldrb_strb (gdbarch, regs, dsc, load, immed, writeback, byte, + usermode, rt, rm, rn); + + if (load || rt != ARM_PC_REGNUM) + { + dsc->u.ldst.restore_r4 = 0; + + if (immed) + /* {ldr,str}[b]<cond> rt, [rn, #imm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, #imm]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = insn2 & 0x0fff; + } + else + /* {ldr,str}[b]<cond> rt, [rn, rm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, r3]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; + } + + dsc->numinsns = 2; + } + else + { + /* In Thumb-32 instructions, the behavior is unpredictable when Rt is + PC, while the behavior is undefined when Rn is PC. Shortly, neither + Rt nor Rn can be PC. */ + + gdb_assert (0); + } + + return 0; +} + + +static int arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, struct displaced_step_closure *dsc, @@ -6524,6 +6758,87 @@ arm_copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rn = bits (insn1, 0, 3); + int load = bit (insn1, 4); + int writeback = bit (insn1, 5); + + /* Block transfers which don't mention PC can be run directly + out-of-line. */ + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); + + if (rn == ARM_PC_REGNUM) + { + warning (_("displaced: Unpredictable LDM or STM with " + "base register r15")); + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unpredictable ldm/stm", dsc); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " + "%.4x%.4x\n", insn1, insn2); + + /* Clear bit 13, since it should be always zero. */ + dsc->u.block.regmask = (insn2 & 0xdfff); + dsc->u.block.rn = rn; + + dsc->u.block.load = bit (insn1, 4); + dsc->u.block.user = bit (insn1, 6); + dsc->u.block.increment = bit (insn1, 7); + dsc->u.block.before = bit (insn1, 8); + dsc->u.block.writeback = writeback; + dsc->u.block.cond = INST_AL; + + if (load) + { + if (dsc->u.block.regmask == 0xffff) + { + /* This branch is impossible to happen. */ + gdb_assert (0); + } + else + { + unsigned int regmask = dsc->u.block.regmask; + unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + if (writeback) + insn1 &= ~(1 << 5); + + new_regmask = (1 << num_in_list) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, " + "{..., pc}: original reg list %.4x, modified " + "list %.4x\n"), rn, writeback ? "!" : "", + (int) dsc->u.block.regmask, new_regmask); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = (new_regmask & 0xffff); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_block_load_pc; + } + } + else + { + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + dsc->cleanup = &cleanup_block_store_pc; + } + return 0; +} + /* Cleanup/copy SVC (SWI) instructions. These two functions are overridden for Linux, where some SVC instructions must be treated specially. */ @@ -6609,6 +6924,23 @@ arm_copy_undef (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy unpredictable instructions. */ static int @@ -7005,6 +7337,43 @@ arm_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn, return 1; } +/* Decode extension register load/store. Exactly the same as + arm_decode_ext_reg_ld_st. */ + +static int +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int opcode = bits (insn1, 4, 8); + + switch (opcode) + { + case 0x04: case 0x05: + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vmov", dsc); + + case 0x08: case 0x0c: /* 01x00 */ + case 0x0a: case 0x0e: /* 01x10 */ + case 0x12: case 0x16: /* 10x10 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vstm/vpush", dsc); + + case 0x09: case 0x0d: /* 01x01 */ + case 0x0b: case 0x0f: /* 01x11 */ + case 0x13: case 0x17: /* 10x11 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vldm/vpop", dsc); + + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); + } + + /* Should be unreachable. */ + return 1; +} + static int arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7051,6 +7420,102 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, return arm_copy_undef (gdbarch, insn, dsc); /* Possibly unreachable. */ } +static int +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int coproc = bits (insn2, 8, 11); + unsigned int op1 = bits (insn1, 4, 9); + unsigned int bit_5_8 = bits (insn1, 5, 8); + unsigned int bit_9 = bit (insn1, 9); + unsigned int bit_4 = bit (insn1, 4); + unsigned int rn = bits (insn1, 0, 3); + + if (bit_9 == 0) + { + if (bit_5_8 == 2) + { + if ((coproc & 0xe) == 0xa) /* 64-bit xfer. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 64bit xfer", dsc); + else + { + if (bit_4) /* MRRC/MRRC2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mrrc/mrrc2", dsc); + else /* MCRR/MCRR2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mcrr/mcrr2", dsc); + } + } + else if (bit_5_8 == 0) /* UNDEFINED. */ + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + else + { + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ + if ((coproc & 0xe) == 0xa) + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, + dsc); + else /* coproc is not 101x. */ + { + if (bit_4 == 0) /* STC/STC2. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + else + { + if (rn == 0xf) /* LDC/LDC2 literal. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + else /* LDC/LDC2 immeidate. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + } + } + } + } + else + { + unsigned int op = bit (insn2, 4); + unsigned int bit_8 = bit (insn1, 8); + + if (bit_8) /* Advanced SIMD */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon", dsc); + else + { + /*coproc is 101x. */ + if ((coproc & 0xe) == 0xa) + { + if (op) /* 8,16,32-bit xfer. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 8/16/32 bit xfer", + dsc); + else /* VFP data processing. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp dataproc", dsc); + } + else + { + if (op) + { + if (bit_4) /* MRC/MRC2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mrc/mrc2", dsc); + else /* MCR/MCR2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mcr/mcr2", dsc); + } + else /* CDP/CDP 2 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "cdp/cdp2", dsc); + } + } + } + + return 0; +} + static void install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc, int rd) @@ -7100,6 +7565,42 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn, } static int +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, struct regcache *regs, + struct displaced_step_closure *dsc, + int rd, unsigned int imm) +{ + /* Encoding T3: ADDS Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf100 | rd); + dsc->modinsn[1] = (0x0 | (rd << 8) | imm); + + dsc->numinsns = 2; + + install_pc_relative (gdbarch, regs, dsc, rd); + + return 0; +} + +static int +thumb_decode_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn2, 8, 11); + /* Since immeidate has the same encoding in both ADR and ADDS, so we simply + extract raw immediate encoding rather than computing immediate. When + generating ADDS instruction, we can simply perform OR operation to set + immediate into ADDS. */ + unsigned int imm = (insn2 & 0x70ff) | (bit (insn1, 10) << 26); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d insn %.4x%.4x\n", + rd, imm, insn1, insn2); + + return thumb_copy_pc_relative_32bit (gdbarch, regs, dsc, rd, imm); +} + +static int thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7354,12 +7855,211 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1, _("thumb_process_displaced_16bit_insn: Instruction decode error")); } +static int +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, + uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rd = bits (insn2, 12, 15); + int user_mode = (bits (insn2, 8, 11) == 0xe); + int err = 0; + int writeback = 0; + + switch (bits (insn1, 5, 6)) + { + case 0: /* Load byte and memory hints */ + if (rd == 0xf) /* PLD/PLI */ + { + if (bits (insn2, 6, 11)) + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); + } + else + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, + dsc, 1, 1, user_mode, + writeback); + } + + break; + case 1: /* Load halfword and memory hints */ + if (rd == 0xf) /* PLD{W} and Unalloc memory hint */ + { + if (bits (insn2, 6, 11)) + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + else + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); + } + else + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, + dsc, 1, 0, user_mode, + writeback); + } + break; + case 2: /* Load word */ + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, dsc, + 1, 0, user_mode, writeback); + break; + } + default: + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + break; + } + return 0; +} + static void thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); + int err = 0; + unsigned short op = bit (insn2, 15); + unsigned int op1 = bits (insn1, 11, 12); + + switch (op1) + { + case 1: + { + switch (bits (insn1, 9, 10)) + { + case 0: /* load/store multiple */ + switch (bits (insn1, 7, 8)) + { + case 0: case 3: /* SRS, RFE */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "srs/rfe", dsc); + break; + case 1: case 2: /* LDM/STM/PUSH/POP */ + /* These Thumb 32-bit insns have the same encodings as ARM + counterparts. */ + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); + } + break; + case 1: + /* Data-processing (shift register). In ARM archtecture reference + manual, this entry is + "Data-processing (shifted register) on page A6-31". However, + instructions in table A6-31 shows that they are `alu_reg' + instructions. There is no alu_shifted_reg instructions in + Thumb-2. */ + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, + dsc); + break; + default: /* Coprocessor instructions */ + /* Thumb 32bit coprocessor instructions have the same encoding + as ARM's. */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + } + case 2: /* op1 = 2 */ + if (op) /* Branch and misc control. */ + { + if (bit (insn2, 14)) /* BLX/BL */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7) + /* Conditional Branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "misc ctrl", dsc); + } + else + { + if (bit (insn1, 9)) /* Data processing (plain binary imm) */ + { + int op = bits (insn1, 4, 8); + int rn = bits (insn1, 0, 4); + if ((op == 0 || op == 0xa) && rn == 0xf) + err = thumb_decode_pc_relative_32bit (gdbarch, insn1, insn2, + regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/pb", dsc); + } + else /* Data processing (modified immeidate) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/mi", dsc); + } + break; + case 3: /* op1 = 3 */ + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 4)) + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, + regs, dsc); + else + { + if (bit (insn1, 8)) /* NEON Load/Store */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon elt/struct load/store", + dsc); + else /* Store single data item */ + { + int user_mode = (bits (insn2, 8, 11) == 0xe); + int byte = (bits (insn1, 5, 7) == 0 + || bits (insn1, 5, 7) == 4); + int writeback = 0; + + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) + writeback = bit (insn2, 8); + + err = thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, + regs, dsc, 0, byte, + user_mode, writeback); + } + } + break; + case 1: /* op1 = 3, bits (9, 10) == 1 */ + switch (bits (insn1, 7, 8)) + { + case 0: case 1: /* Data processing (register) */ + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, dsc); + break; + case 2: /* Multiply and absolute difference */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mul/mua/diff", dsc); + break; + case 3: /* Long multiply and divide */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "lmul/lmua", dsc); + break; + } + break; + default: /* Coprocessor instructions */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_32bit_insn: Instruction decode error")); + } static void -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-05-05 13:25 ` Yao Qi @ 2011-05-17 17:14 ` Ulrich Weigand 2011-05-23 11:32 ` Yao Qi ` (2 more replies) 0 siblings, 3 replies; 19+ messages in thread From: Ulrich Weigand @ 2011-05-17 17:14 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > +static int > +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, > + struct regcache *regs, struct displaced_step_closure *dsc) > +{ > + unsigned int rn = bits (insn1, 0, 3); > + if (rn == ARM_PC_REGNUM) > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", > + insn1, insn2); > + > + dsc->modinsn[0] = insn1 & 0xfff0; > + dsc->modinsn[1] = insn2; > + dsc->numinsns = 2; > + > + install_preload (gdbarch, regs, dsc, rn); > + > + return 0; > +} > +static int > +thumb2_copy_preload_reg (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int rn = bits (insn1, 0, 3); > + unsigned int rm = bits (insn2, 0, 3); > + > + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload reg", > + dsc); > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", > + insn1, insn1); > + > + dsc->modinsn[0] = insn1 & 0xfff0; > + dsc->modinsn[1] = (insn2 & 0xfff0) | 0x1; > + dsc->numinsns = 2; > + > + install_preload_reg (gdbarch, regs, dsc, rn, rm); > + return 0; > +} Handling of preload instructions seems wrong for a couple of reasons: - In Thumb mode, PLD/PLI with register offset must not use PC as offset register, so those can just be copied unmodified. The only instructions to be treated specially are the "literal" variants, which do encode PC-relative offsets. This means a separate thumb2_copy_preload_reg shouldn't be needed. - However, you cannot just transform a PLD/PLI "literal" (i.e. PC + immediate) into an "immediate" (i.e. register + immediate) version, since in Thumb mode the "literal" version supports a 12-bit immediate, while the immediate version only supports an 8-bit immediate. I guess you could either add the immediate to the PC during preparation stage and then use an "immediate" instruction with immediate zero, or else load the immediate into a second register and use a "register" version of the instruction. > +static int > +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int rn = bits (insn1, 0, 3); > + > + if (rn == ARM_PC_REGNUM) > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "copro load/store", dsc); > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " > + "load/store insn %.4x%.4x\n", insn1, insn2); > + > + dsc->modinsn[0] = insn1 & 0xfff0; > + dsc->modinsn[1] = insn2; > + dsc->numinsns = 2; This doesn't look right: you're replacing the RN register if it is anything *but* 15 -- but those cases do not need to be replaced! In fact, unless I'm missing something, in Thumb mode no coprocessor instruction actually uses the PC (either RN == 15 indicates some other operation, or else it is specified as unpredictable). So those should simply all be copied unmodified ... > +static int > +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + int link = bit (insn2, 14); > + int exchange = link && !bit (insn2, 12); > + int cond = INST_AL; > + long offset =0; > + int j1 = bit (insn2, 13); > + int j2 = bit (insn2, 11); > + int s = sbits (insn1, 10, 10); > + int i1 = !(j1 ^ bit (insn1, 10)); > + int i2 = !(j2 ^ bit (insn1, 10)); > + > + if (!link && !exchange) /* B */ > + { > + cond = bits (insn1, 6, 9); Only encoding T3 has condition bits, not T4. > + offset = (bits (insn2, 0, 10) << 1); > + if (bit (insn2, 12)) /* Encoding T4 */ > + { > + offset |= (bits (insn1, 0, 9) << 12) > + | (i2 << 22) > + | (i1 << 23) > + | (s << 24); > + } > + else /* Encoding T3 */ > + offset |= (bits (insn1, 0, 5) << 12) > + | (j1 << 18) > + | (j2 << 19) > + | (s << 20); > + } > + else > + { > + offset = (bits (insn1, 0, 9) << 12); > + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); > + offset |= exchange ? > + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); > + } > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s immediate insn " > + "%.4x %.4x with offset %.8lx\n", > + (exchange) ? "blx" : "bl", > + insn1, insn2, offset); > + > + dsc->modinsn[0] = THUMB_NOP; > + > + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, 1, offset); Why do you always pass 1 for link? Shouldn't "link" be passed? > +static int > +thumb2_copy_alu_reg (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int op2 = bits (insn2, 4, 7); > + int is_mov = (op2 == 0x0); > + unsigned int rn, rm, rd; > + > + rn = bits (insn1, 0, 3); /* Rn */ > + rm = bits (insn2, 0, 3); /* Rm */ > + rd = bits (insn2, 8, 11); /* Rd */ > + > + /* In Thumb-2, rn, rm and rd can't be r15. */ This isn't quite true ... otherwise we wouldn't need the routine at all. > + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM > + && rd != ARM_PC_REGNUM) > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU reg", dsc); > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", > + "ALU", insn1, insn2); > + > + if (is_mov) > + dsc->modinsn[0] = insn1; > + else > + dsc->modinsn[0] = ((insn1 & 0xfff0) | 0x1); > + > + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x2); > + dsc->numinsns = 2; This doesn't look right. It looks like this function is called for all instructions in tables A6-22 through A6-26; those encodings differ significantly in how their fields are used. Some of them have the Rn, Rm, Rd fields as above, but others just have some of them. For some, a register field content of 15 does indeed refer to the PC and needs to be replaced; for others a register field content of 15 means instead that a different operation is to be performed (e.g. ADD vs TST, EOR vs TEQ ...) and so it must *not* be replaced; and for yet others, a register field content of 15 is unpredictable. In fact, I think only a very small number of instructions in this category actually may refer to the PC (only MOV?), so there needs to the be more instruction decoding to actually identify those. > static int > +thumb2_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc, > + int load, int byte, int usermode, int writeback) Hmmm ... this function is called for *halfwords* as well, not just for bytes and words. This means the "byte" operand is no longer sufficient to uniquely determine the size -- note that when calling down to the install_ routine, xfersize is always set to 1 or 4. > +{ > + int immed = !bit (insn1, 9); > + unsigned int rt = bits (insn2, 12, 15); > + unsigned int rn = bits (insn1, 0, 3); > + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ > + > + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) rm shouldn't be checked if immed is true > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", > + dsc); > + > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, > + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", > + load ? (byte ? "ldrb" : "ldr") > + : (byte ? "strb" : "str"), usermode ? "t" : "", > + rt, rn, insn1, insn2); > + > + install_ldr_str_ldrb_strb (gdbarch, regs, dsc, load, immed, writeback, byte, > + usermode, rt, rm, rn); > + > + if (load || rt != ARM_PC_REGNUM) > + { > + dsc->u.ldst.restore_r4 = 0; > + > + if (immed) > + /* {ldr,str}[b]<cond> rt, [rn, #imm], etc. > + -> > + {ldr,str}[b]<cond> r0, [r2, #imm]. */ > + { > + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; > + dsc->modinsn[1] = insn2 & 0x0fff; > + } > + else > + /* {ldr,str}[b]<cond> rt, [rn, rm], etc. > + -> > + {ldr,str}[b]<cond> r0, [r2, r3]. */ > + { > + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; > + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; > + } > + > + dsc->numinsns = 2; > + } > + else > + { > + /* In Thumb-32 instructions, the behavior is unpredictable when Rt is > + PC, while the behavior is undefined when Rn is PC. Shortly, neither > + Rt nor Rn can be PC. */ > + > + gdb_assert (0); > + } > + > + return 0; > +} > +/* Decode extension register load/store. Exactly the same as > + arm_decode_ext_reg_ld_st. */ > + > +static int > +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int opcode = bits (insn1, 4, 8); > + > + switch (opcode) > + { > + case 0x04: case 0x05: > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "vfp/neon vmov", dsc); > + > + case 0x08: case 0x0c: /* 01x00 */ > + case 0x0a: case 0x0e: /* 01x10 */ > + case 0x12: case 0x16: /* 10x10 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "vfp/neon vstm/vpush", dsc); > + > + case 0x09: case 0x0d: /* 01x01 */ > + case 0x0b: case 0x0f: /* 01x11 */ > + case 0x13: case 0x17: /* 10x11 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "vfp/neon vldm/vpop", dsc); > + > + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ > + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ > + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); See the comment at thumb2_copy_copro_load_store: since that function will always copy the instruction unmodified, so can this function. > +static int > +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int coproc = bits (insn2, 8, 11); > + unsigned int op1 = bits (insn1, 4, 9); > + unsigned int bit_5_8 = bits (insn1, 5, 8); > + unsigned int bit_9 = bit (insn1, 9); > + unsigned int bit_4 = bit (insn1, 4); > + unsigned int rn = bits (insn1, 0, 3); > + > + if (bit_9 == 0) > + { > + if (bit_5_8 == 2) > + { > + if ((coproc & 0xe) == 0xa) /* 64-bit xfer. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "neon 64bit xfer", dsc); > + else > + { > + if (bit_4) /* MRRC/MRRC2 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "mrrc/mrrc2", dsc); > + else /* MCRR/MCRR2 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "mcrr/mcrr2", dsc); > + } > + } > + else if (bit_5_8 == 0) /* UNDEFINED. */ > + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); > + else > + { > + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ > + if ((coproc & 0xe) == 0xa) > + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, > + dsc); > + else /* coproc is not 101x. */ > + { > + if (bit_4 == 0) /* STC/STC2. */ > + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, > + regs, dsc); > + else > + { > + if (rn == 0xf) /* LDC/LDC2 literal. */ > + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, > + regs, dsc); > + else /* LDC/LDC2 immeidate. */ > + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, > + regs, dsc); > + } > + } See above ... I don't think any of those instructions can ever use the PC in Thumb mode, so this can be simplified. > + } > + } > + else > + { > + unsigned int op = bit (insn2, 4); > + unsigned int bit_8 = bit (insn1, 8); > + > + if (bit_8) /* Advanced SIMD */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "neon", dsc); > + else > + { > + /*coproc is 101x. */ > + if ((coproc & 0xe) == 0xa) > + { > + if (op) /* 8,16,32-bit xfer. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "neon 8/16/32 bit xfer", > + dsc); > + else /* VFP data processing. */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "vfp dataproc", dsc); > + } > + else > + { > + if (op) > + { > + if (bit_4) /* MRC/MRC2 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "mrc/mrc2", dsc); > + else /* MCR/MCR2 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "mcr/mcr2", dsc); > + } > + else /* CDP/CDP 2 */ > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "cdp/cdp2", dsc); > + } Likewise I'm not sure there is any need to decode to such depth, if the instruction in the end all can be copied unmodified. > static int > +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, struct regcache *regs, > + struct displaced_step_closure *dsc, > + int rd, unsigned int imm) > +{ > + /* Encoding T3: ADDS Rd, Rd, #imm */ Why do you refer to ADDS? The instruction you generate is ADD (with no S bit), which is actually correct -- so it seems just the comment is wrong. > + dsc->modinsn[0] = (0xf100 | rd); > + dsc->modinsn[1] = (0x0 | (rd << 8) | imm); > + > + dsc->numinsns = 2; > + > + install_pc_relative (gdbarch, regs, dsc, rd); > + > + return 0; > +} > + > +static int > +thumb_decode_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, > + uint16_t insn2, struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + unsigned int rd = bits (insn2, 8, 11); > + /* Since immeidate has the same encoding in both ADR and ADDS, so we simply typo > + extract raw immediate encoding rather than computing immediate. When > + generating ADDS instruction, we can simply perform OR operation to set > + immediate into ADDS. */ See above for ADDS vs. ADD. > + unsigned int imm = (insn2 & 0x70ff) | (bit (insn1, 10) << 26); The last bit will get lost, since thumb_copy_pc_relative_32bit only or's the value to the second 16-bit halfword. > + if (debug_displaced) > + fprintf_unfiltered (gdb_stdlog, > + "displaced: copying thumb adr r%d, #%d insn %.4x%.4x\n", > + rd, imm, insn1, insn2); > + > + return thumb_copy_pc_relative_32bit (gdbarch, regs, dsc, rd, imm); > +} B.t.w. I think the distinction between a _decode_ and a _copy_ routine is pointless in this case since the _decode_ routine is only ever called for one single instruction that matches ... it doesn't actually decode anything. > +static int > +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, > + uint16_t insn1, uint16_t insn2, > + struct regcache *regs, > + struct displaced_step_closure *dsc) > +{ > + int rd = bits (insn2, 12, 15); > + int user_mode = (bits (insn2, 8, 11) == 0xe); > + int err = 0; > + int writeback = 0; > + > + switch (bits (insn1, 5, 6)) > + { > + case 0: /* Load byte and memory hints */ > + if (rd == 0xf) /* PLD/PLI */ > + { > + if (bits (insn2, 6, 11)) This check doesn't look right to me. > + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); > + else > + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); In any case, see the comments above on handling preload instructions. You should only need to handle the "literal" variants. > + } > + else > + { > + int op1 = bits (insn1, 7, 8); > + > + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) > + writeback = bit (insn2, 8); > + > + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, > + dsc, 1, 1, user_mode, > + writeback); > + } > + > + break; > + case 1: /* Load halfword and memory hints */ > + if (rd == 0xf) /* PLD{W} and Unalloc memory hint */ > + { > + if (bits (insn2, 6, 11)) > + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); > + else > + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); See above. > + } > + else > + { > + int op1 = bits (insn1, 7, 8); > + > + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) > + writeback = bit (insn2, 8); > + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, > + dsc, 1, 0, user_mode, > + writeback); > + } > + break; > + case 2: /* Load word */ > + { > + int op1 = bits (insn1, 7, 8); > + > + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) > + writeback = bit (insn2, 8); > + > + return thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, regs, dsc, > + 1, 0, user_mode, writeback); > + break; > + } > + default: > + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); > + break; > + } > + return 0; > +} > static void > thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, > uint16_t insn2, struct regcache *regs, > struct displaced_step_closure *dsc) > { > - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); > + int err = 0; > + unsigned short op = bit (insn2, 15); > + unsigned int op1 = bits (insn1, 11, 12); > + > + switch (op1) > + { > + case 1: > + { > + switch (bits (insn1, 9, 10)) > + { > + case 0: /* load/store multiple */ > + switch (bits (insn1, 7, 8)) > + { > + case 0: case 3: /* SRS, RFE */ > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "srs/rfe", dsc); > + break; > + case 1: case 2: /* LDM/STM/PUSH/POP */ > + /* These Thumb 32-bit insns have the same encodings as ARM > + counterparts. */ "same encodings" isn't quite true ... > + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); > + } > + break; Hmm, it seems this case is missing code to handle the load/store dual, load/store exclusive, and table branch instructions (page A6-24 / table A6-17); there should be a check whether bit 6 is zero or one somewhere. > + case 1: > + /* Data-processing (shift register). In ARM archtecture reference > + manual, this entry is > + "Data-processing (shifted register) on page A6-31". However, > + instructions in table A6-31 shows that they are `alu_reg' > + instructions. There is no alu_shifted_reg instructions in > + Thumb-2. */ Well ... they are not *register*-shifted register instructions like there are in ARM mode (i.e. register shifted by another register), but they are still *shifted* register instructions (i.e. register shifted by an immediate). > + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, > + dsc); (see comments at that function ...) > + break; > + default: /* Coprocessor instructions */ > + /* Thumb 32bit coprocessor instructions have the same encoding > + as ARM's. */ (see above as to "same encoding" ... also, some ARM coprocessor instruction may in fact use the PC, while no Thumb coprocessor instruction can ... so there is probably no need to decode them further at this point) > + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); > + break; > + } > + break; > + } > + case 2: /* op1 = 2 */ > + if (op) /* Branch and misc control. */ > + { > + if (bit (insn2, 14)) /* BLX/BL */ > + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > + else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7) I don't understand this condition, but it looks wrong to me ... > + /* Conditional Branch */ > + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); > + else > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "misc ctrl", dsc); > + } > + else > + { > + if (bit (insn1, 9)) /* Data processing (plain binary imm) */ > + { > + int op = bits (insn1, 4, 8); > + int rn = bits (insn1, 0, 4); > + if ((op == 0 || op == 0xa) && rn == 0xf) > + err = thumb_decode_pc_relative_32bit (gdbarch, insn1, insn2, > + regs, dsc); > + else > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "dp/pb", dsc); > + } > + else /* Data processing (modified immeidate) */ > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "dp/mi", dsc); > + } > + break; > + case 3: /* op1 = 3 */ > + switch (bits (insn1, 9, 10)) > + { > + case 0: > + if (bit (insn1, 4)) > + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, > + regs, dsc); > + else > + { > + if (bit (insn1, 8)) /* NEON Load/Store */ > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "neon elt/struct load/store", > + dsc); > + else /* Store single data item */ > + { > + int user_mode = (bits (insn2, 8, 11) == 0xe); > + int byte = (bits (insn1, 5, 7) == 0 > + || bits (insn1, 5, 7) == 4); > + int writeback = 0; > + > + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) > + writeback = bit (insn2, 8); If things get this complicated, a decode routine might be appropriate. > + > + err = thumb2_copy_ldr_str_ldrb_strb (gdbarch, insn1, insn2, > + regs, dsc, 0, byte, > + user_mode, writeback); > + } > + } > + break; > + case 1: /* op1 = 3, bits (9, 10) == 1 */ > + switch (bits (insn1, 7, 8)) > + { > + case 0: case 1: /* Data processing (register) */ > + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, dsc); > + break; > + case 2: /* Multiply and absolute difference */ > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "mul/mua/diff", dsc); > + break; > + case 3: /* Long multiply and divide */ > + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, > + "lmul/lmua", dsc); > + break; > + } > + break; > + default: /* Coprocessor instructions */ > + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); > + break; > + } > + break; > + default: > + err = 1; > + } > + > + if (err) > + internal_error (__FILE__, __LINE__, > + _("thumb_process_displaced_32bit_insn: Instruction decode error")); > + > } Thanks, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-05-17 17:14 ` Ulrich Weigand @ 2011-05-23 11:32 ` Yao Qi 2011-05-23 11:32 ` Yao Qi 2011-07-06 10:55 ` Yao Qi 2 siblings, 0 replies; 19+ messages in thread From: Yao Qi @ 2011-05-23 11:32 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches On 05/18/2011 01:14 AM, Ulrich Weigand wrote: >> > +static int >> > +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, >> > + uint16_t insn2, struct regcache *regs, >> > + struct displaced_step_closure *dsc) >> > +{ >> > + unsigned int rn = bits (insn1, 0, 3); >> > + >> > + if (rn == ARM_PC_REGNUM) >> > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> > + "copro load/store", dsc); >> > + >> > + if (debug_displaced) >> > + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " >> > + "load/store insn %.4x%.4x\n", insn1, insn2); >> > + >> > + dsc->modinsn[0] = insn1 & 0xfff0; >> > + dsc->modinsn[1] = insn2; >> > + dsc->numinsns = 2; > This doesn't look right: you're replacing the RN register if it is anything > *but* 15 -- but those cases do not need to be replaced! > Sorry, the condition check should be reversed. > In fact, unless I'm missing something, in Thumb mode no coprocessor > instruction actually uses the PC (either RN == 15 indicates some other > operation, or else it is specified as unpredictable). So those should > simply all be copied unmodified ... > I can understand almost of your comments except this one. I think you are right, but there are still some cases that PC is used in this category of instructions. thumb2_copy_copro_load_store covers instructions STC/STC2, VLDR/VSTR and LDC/LDC2 (literal and immediate). I re-read ARM ARM again, and find that, STC/STC2 doesn't use PC. ARM ARM said "if n == 15 && (wback || CurrentInstrSet() != InstrSet_ARM) then UNPREDICTABLE;" VSTR doesn't use PC. ARM ARM said "if n == 15 && CurrentInstrSet() != InstrSet_ARM then UNPREDICTABLE;" However, LDC/LDC2/VLDR can use PC. VLDR<c><q>{.32} <Sd>, [PC, #+/-<imm>] LDC, LDC2 (literal or immediate) LDC{L}<c> <coproc>,<CRd>,[PC],<option> I can write a real VLDR instruction using PC successfully. Still no luck to fix 'Illegal instruction' when running program having LDC/LDC2 using PC register, but I think LDC/LDC2 should be able to use PC register. Am I missing something here? -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-05-17 17:14 ` Ulrich Weigand 2011-05-23 11:32 ` Yao Qi @ 2011-05-23 11:32 ` Yao Qi 2011-05-27 22:11 ` Ulrich Weigand 2011-07-06 10:55 ` Yao Qi 2 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-05-23 11:32 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches On 05/18/2011 01:14 AM, Ulrich Weigand wrote: >> > +static int >> > +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, >> > + uint16_t insn2, struct regcache *regs, >> > + struct displaced_step_closure *dsc) >> > +{ >> > + unsigned int rn = bits (insn1, 0, 3); >> > + >> > + if (rn == ARM_PC_REGNUM) >> > + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> > + "copro load/store", dsc); >> > + >> > + if (debug_displaced) >> > + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " >> > + "load/store insn %.4x%.4x\n", insn1, insn2); >> > + >> > + dsc->modinsn[0] = insn1 & 0xfff0; >> > + dsc->modinsn[1] = insn2; >> > + dsc->numinsns = 2; > This doesn't look right: you're replacing the RN register if it is anything > *but* 15 -- but those cases do not need to be replaced! > Sorry, the condition check should be reversed. > In fact, unless I'm missing something, in Thumb mode no coprocessor > instruction actually uses the PC (either RN == 15 indicates some other > operation, or else it is specified as unpredictable). So those should > simply all be copied unmodified ... > I can understand almost of your comments except this one. I think you are right, but there are still some cases that PC is used in this category of instructions. thumb2_copy_copro_load_store covers instructions STC/STC2, VLDR/VSTR and LDC/LDC2 (literal and immediate). I re-read ARM ARM again, and find that, STC/STC2 doesn't use PC. ARM ARM said "if n == 15 && (wback || CurrentInstrSet() != InstrSet_ARM) then UNPREDICTABLE;" VSTR doesn't use PC. ARM ARM said "if n == 15 && CurrentInstrSet() != InstrSet_ARM then UNPREDICTABLE;" However, LDC/LDC2/VLDR can use PC. VLDR<c><q>{.32} <Sd>, [PC, #+/-<imm>] LDC, LDC2 (literal or immediate) LDC{L}<c> <coproc>,<CRd>,[PC],<option> I can write a real VLDR instruction using PC successfully. Still no luck to fix 'Illegal instruction' when running program having LDC/LDC2 using PC register, but I think LDC/LDC2 should be able to use PC register. Am I missing something here? -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-05-23 11:32 ` Yao Qi @ 2011-05-27 22:11 ` Ulrich Weigand 0 siblings, 0 replies; 19+ messages in thread From: Ulrich Weigand @ 2011-05-27 22:11 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Yao Qi wrote: > On 05/18/2011 01:14 AM, Ulrich Weigand wrote: > > In fact, unless I'm missing something, in Thumb mode no coprocessor > > instruction actually uses the PC (either RN == 15 indicates some other > > operation, or else it is specified as unpredictable). So those should > > simply all be copied unmodified ... > > I can understand almost of your comments except this one. I think you > are right, but there are still some cases that PC is used in this > category of instructions. > > thumb2_copy_copro_load_store covers instructions STC/STC2, VLDR/VSTR and > LDC/LDC2 (literal and immediate). I re-read ARM ARM again, and find that, > > STC/STC2 doesn't use PC. ARM ARM said "if n == 15 && (wback || > CurrentInstrSet() != InstrSet_ARM) then UNPREDICTABLE;" > > VSTR doesn't use PC. ARM ARM said "if n == 15 && CurrentInstrSet() != > InstrSet_ARM then UNPREDICTABLE;" > > However, LDC/LDC2/VLDR can use PC. > > VLDR<c><q>{.32} <Sd>, [PC, #+/-<imm>] > > LDC, LDC2 (literal or immediate) > LDC{L}<c> <coproc>,<CRd>,[PC],<option> > > I can write a real VLDR instruction using PC successfully. Still no > luck to fix 'Illegal instruction' when running program having LDC/LDC2 > using PC register, but I think LDC/LDC2 should be able to use PC > register. Am I missing something here? No, you're right -- I had overlooked those. LDC/LDC2/VLDR must indeed be handled here. Bye, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-05-17 17:14 ` Ulrich Weigand 2011-05-23 11:32 ` Yao Qi 2011-05-23 11:32 ` Yao Qi @ 2011-07-06 10:55 ` Yao Qi 2011-07-15 19:57 ` Ulrich Weigand 2 siblings, 1 reply; 19+ messages in thread From: Yao Qi @ 2011-07-06 10:55 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches [-- Attachment #1: Type: text/plain, Size: 16949 bytes --] On 05/18/2011 01:14 AM, Ulrich Weigand wrote: > Yao Qi wrote: > >> +static int >> +thumb2_copy_preload_reg (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int rn = bits (insn1, 0, 3); >> + unsigned int rm = bits (insn2, 0, 3); >> + >> + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload reg", >> + dsc); >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, "displaced: copying preload insn %.4x%.4x\n", >> + insn1, insn1); >> + >> + dsc->modinsn[0] = insn1 & 0xfff0; >> + dsc->modinsn[1] = (insn2 & 0xfff0) | 0x1; >> + dsc->numinsns = 2; >> + >> + install_preload_reg (gdbarch, regs, dsc, rn, rm); >> + return 0; >> +} > > Handling of preload instructions seems wrong for a couple of reasons: > > - In Thumb mode, PLD/PLI with register offset must not use PC as offset > register, so those can just be copied unmodified. The only instructions > to be treated specially are the "literal" variants, which do encode > PC-relative offsets. > > This means a separate thumb2_copy_preload_reg shouldn't be needed. > Right. thumb2_copy_preload_reg is removed. > - However, you cannot just transform a PLD/PLI "literal" (i.e. PC + immediate) > into an "immediate" (i.e. register + immediate) version, since in Thumb > mode the "literal" version supports a 12-bit immediate, while the immediate > version only supports an 8-bit immediate. > > I guess you could either add the immediate to the PC during preparation > stage and then use an "immediate" instruction with immediate zero, or > else load the immediate into a second register and use a "register" > version of the instruction. > The former may not be correct. PC should be set at the address of `copy area' in displaced stepping, instead of any other arbitrary values. The alternative to the former approach is to compute the new immediate value according to the new PC value we will set (new PC value is dsc->scratch_base). However, in this way, we have to worry about the overflow of new computed 12-bit immediate. The latter one sounds better, because we don't have to worry about overflow problem, and cleanup_preload can be still used as cleanup routine in this case. > >> +static int >> +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int rn = bits (insn1, 0, 3); >> + >> + if (rn == ARM_PC_REGNUM) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "copro load/store", dsc); >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " >> + "load/store insn %.4x%.4x\n", insn1, insn2); >> + >> + dsc->modinsn[0] = insn1 & 0xfff0; >> + dsc->modinsn[1] = insn2; >> + dsc->numinsns = 2; > > This doesn't look right: you're replacing the RN register if it is anything > *but* 15 -- but those cases do not need to be replaced! > Oh, sorry, it is a logic error. The code should be like if (rn != ARM_PC_REGNUM) return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "copro load/store", dsc); >> +static int >> +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> + >> + if (!link && !exchange) /* B */ >> + { >> + cond = bits (insn1, 6, 9); > > Only encoding T3 has condition bits, not T4. > Oh, right. Fixed. >> + >> + dsc->modinsn[0] = THUMB_NOP; >> + >> + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, 1, offset); > > Why do you always pass 1 for link? Shouldn't "link" be passed? > "link" should be passed. Fixed. >> +static int >> +thumb2_copy_alu_reg (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int op2 = bits (insn2, 4, 7); >> + int is_mov = (op2 == 0x0); >> + unsigned int rn, rm, rd; >> + >> + rn = bits (insn1, 0, 3); /* Rn */ >> + rm = bits (insn2, 0, 3); /* Rm */ >> + rd = bits (insn2, 8, 11); /* Rd */ >> + >> + /* In Thumb-2, rn, rm and rd can't be r15. */ > This isn't quite true ... otherwise we wouldn't need the routine at all. This line of comment is out of date. Remove it. >> + if (rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM >> + && rd != ARM_PC_REGNUM) >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU reg", dsc); >> + >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", >> + "ALU", insn1, insn2); >> + >> + if (is_mov) >> + dsc->modinsn[0] = insn1; >> + else >> + dsc->modinsn[0] = ((insn1 & 0xfff0) | 0x1); >> + >> + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x2); >> + dsc->numinsns = 2; > > This doesn't look right. It looks like this function is called for all > instructions in tables A6-22 through A6-26; those encodings differ > significantly in how their fields are used. Some of them have the > Rn, Rm, Rd fields as above, but others just have some of them. For > some, a register field content of 15 does indeed refer to the PC and > needs to be replaced; for others a register field content of 15 means > instead that a different operation is to be performed (e.g. ADD vs TST, > EOR vs TEQ ...) and so it must *not* be replaced; and for yet others, > a register field content of 15 is unpredictable. > > In fact, I think only a very small number of instructions in this > category actually may refer to the PC (only MOV?), so there needs > to the be more instruction decoding to actually identify those. > thumb2_copy_alu_reg is called in for two groups of instructions, 1. A6.3.11 Data-processing (shifted register) 2. A6.3.12 Data-processing (register) PC is not used in group #2. Even in group #1, PC is only used in MOV. This routine thumb2_copy_alu_reg is deleted, and thumb2_decode_dp_shift_reg is added to decode group #2. >> static int >> +thumb2_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc, >> + int load, int byte, int usermode, int writeback) > > Hmmm ... this function is called for *halfwords* as well, not just for > bytes and words. This means the "byte" operand is no longer sufficient > to uniquely determine the size -- note that when calling down to the > install_ routine, xfersize is always set to 1 or 4. > I thought "halfword" can be treated as "word" in this case, so I didn't distinguish them. I rename "thumb2_copy_ldr_str_ldrb_strb" to "thumb2_copy_load_store", and change parameter BYTE to SIZE. install_ routine and arm_ routine is updated as well. >> +{ >> + int immed = !bit (insn1, 9); >> + unsigned int rt = bits (insn2, 12, 15); >> + unsigned int rn = bits (insn1, 0, 3); >> + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ >> + >> + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM && rm != ARM_PC_REGNUM) > rm shouldn't be checked if immed is true Fixed. >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", >> + dsc); >> +/* Decode extension register load/store. Exactly the same as >> + arm_decode_ext_reg_ld_st. */ >> + >> +static int >> +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + >> + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ >> + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ >> + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); > > See the comment at thumb2_copy_copro_load_store: since that function will > always copy the instruction unmodified, so can this function. > > As we discussed VLDR may still use PC, so call thumb_copy_unmodified_32bit for VSTR in my new patch. >> +static int >> +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ [...] > > See above ... I don't think any of those instructions can ever use the PC > in Thumb mode, so this can be simplified. > It is simplified to some extent in new patch. >> + } >> + } >> + else >> + { >> + unsigned int op = bit (insn2, 4); >> + unsigned int bit_8 = bit (insn1, 8); >> + >> + if (bit_8) /* Advanced SIMD */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "neon", dsc); >> + else >> + { >> + /*coproc is 101x. */ >> + if ((coproc & 0xe) == 0xa) >> + { >> + if (op) /* 8,16,32-bit xfer. */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "neon 8/16/32 bit xfer", >> + dsc); >> + else /* VFP data processing. */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "vfp dataproc", dsc); >> + } >> + else >> + { >> + if (op) >> + { >> + if (bit_4) /* MRC/MRC2 */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "mrc/mrc2", dsc); >> + else /* MCR/MCR2 */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "mcr/mcr2", dsc); >> + } >> + else /* CDP/CDP 2 */ >> + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "cdp/cdp2", dsc); >> + } > > Likewise I'm not sure there is any need to decode to such depth, if the > instruction in the end all can be copied unmodified. OK. Patch length can be reduced then. >> static int >> +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, struct regcache *regs, >> + struct displaced_step_closure *dsc, >> + int rd, unsigned int imm) >> +{ >> + /* Encoding T3: ADDS Rd, Rd, #imm */ > Why do you refer to ADDS? The instruction you generate is ADD (with no S bit), > which is actually correct -- so it seems just the comment is wrong. It is a mistake in comment. ADR doesn't update flags, we don't have S bit in ADD. >> + dsc->modinsn[0] = (0xf100 | rd); >> + dsc->modinsn[1] = (0x0 | (rd << 8) | imm); >> + >> + dsc->numinsns = 2; >> + >> + install_pc_relative (gdbarch, regs, dsc, rd); >> + >> + return 0; >> +} >> + >> +static int >> +thumb_decode_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, >> + uint16_t insn2, struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + unsigned int rd = bits (insn2, 8, 11); >> + /* Since immeidate has the same encoding in both ADR and ADDS, so we simply > typo >> + extract raw immediate encoding rather than computing immediate. When >> + generating ADDS instruction, we can simply perform OR operation to set >> + immediate into ADDS. */ > See above for ADDS vs. ADD. s/ADDS/ADD/ in comments. >> + unsigned int imm = (insn2 & 0x70ff) | (bit (insn1, 10) << 26); > > The last bit will get lost, since thumb_copy_pc_relative_32bit only or's > the value to the second 16-bit halfword. Then, we have separately set the bit 10 (i bit) in dsc->modinsn[0] per original insn1's i bit. >> + if (debug_displaced) >> + fprintf_unfiltered (gdb_stdlog, >> + "displaced: copying thumb adr r%d, #%d insn %.4x%.4x\n", >> + rd, imm, insn1, insn2); >> + >> + return thumb_copy_pc_relative_32bit (gdbarch, regs, dsc, rd, imm); >> +} > > B.t.w. I think the distinction between a _decode_ and a _copy_ routine is > pointless in this case since the _decode_ routine is only ever called for > one single instruction that matches ... it doesn't actually decode anything. > thumb_decode_pc_relative_32bit is merged to thumb_copy_pc_relative_32bit. > >> +static int >> +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, >> + uint16_t insn1, uint16_t insn2, >> + struct regcache *regs, >> + struct displaced_step_closure *dsc) >> +{ >> + int rd = bits (insn2, 12, 15); >> + int user_mode = (bits (insn2, 8, 11) == 0xe); >> + int err = 0; >> + int writeback = 0; >> + >> + switch (bits (insn1, 5, 6)) >> + { >> + case 0: /* Load byte and memory hints */ >> + if (rd == 0xf) /* PLD/PLI */ >> + { >> + if (bits (insn2, 6, 11)) > This check doesn't look right to me. This part is re-written. >> + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); >> + else >> + return thumb2_copy_preload_reg (gdbarch, insn1, insn2, regs, dsc); > > In any case, see the comments above on handling preload instructions. You > should only need to handle the "literal" variants. > Right. thumb2_copy_preload_reg is removed, and this part of code is adjusted as well. > > >> static void >> thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, >> uint16_t insn2, struct regcache *regs, >> struct displaced_step_closure *dsc) >> { >> - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); >> + int err = 0; >> + unsigned short op = bit (insn2, 15); >> + unsigned int op1 = bits (insn1, 11, 12); >> + >> + switch (op1) >> + { >> + case 1: >> + { >> + switch (bits (insn1, 9, 10)) >> + { >> + case 0: /* load/store multiple */ >> + switch (bits (insn1, 7, 8)) >> + { >> + case 0: case 3: /* SRS, RFE */ >> + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, >> + "srs/rfe", dsc); >> + break; >> + case 1: case 2: /* LDM/STM/PUSH/POP */ >> + /* These Thumb 32-bit insns have the same encodings as ARM >> + counterparts. */ > "same encodings" isn't quite true ... This line of comment is out of date. Removed. >> + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); >> + } >> + break; > > Hmm, it seems this case is missing code to handle the load/store dual, > load/store exclusive, and table branch instructions (page A6-24 / table A6-17); > there should be a check whether bit 6 is zero or one somewhere. > routine thumb2_copy_table_branch is added to handle table branch instructions. load/store dual and load/store exclusive don't use PC, so they are copy-unmodified. >> + case 1: >> + /* Data-processing (shift register). In ARM archtecture reference >> + manual, this entry is >> + "Data-processing (shifted register) on page A6-31". However, >> + instructions in table A6-31 shows that they are `alu_reg' >> + instructions. There is no alu_shifted_reg instructions in >> + Thumb-2. */ > > Well ... they are not *register*-shifted register instructions like > there are in ARM mode (i.e. register shifted by another register), > but they are still *shifted* register instructions (i.e. register > shifted by an immediate). > Thanks for the clarification. Only leave the 1st sentence of comment, and remove the rest of them. >> + err = thumb2_copy_alu_reg (gdbarch, insn1, insn2, regs, >> + dsc); > (see comments at that function ...) Add a new function thumb2_decode_dp_shift_reg and call it here. >> + break; >> + default: /* Coprocessor instructions */ >> + /* Thumb 32bit coprocessor instructions have the same encoding >> + as ARM's. */ > (see above as to "same encoding" ... also, some ARM coprocessor instruction > may in fact use the PC, while no Thumb coprocessor instruction can ... so > there is probably no need to decode them further at this point) As we discussed, STC/STC/VLDR may still use PC. Leave it there. > >> + case 2: /* op1 = 2 */ >> + if (op) /* Branch and misc control. */ >> + { >> + if (bit (insn2, 14)) /* BLX/BL */ >> + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); >> + else if (!bits (insn2, 12, 14) && bits (insn1, 8, 10) != 0x7) > I don't understand this condition, but it looks wrong to me ... > This condition is about "Conditional Branch". The 2nd half of condition should be "bits (insn1, 7, 9) != 0x7", corresponding to the first line of table A6-13 "op1 = 0x0, op is not x111xxx". >> + else /* Store single data item */ >> + { >> + int user_mode = (bits (insn2, 8, 11) == 0xe); >> + int byte = (bits (insn1, 5, 7) == 0 >> + || bits (insn1, 5, 7) == 4); >> + int writeback = 0; >> + >> + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) >> + writeback = bit (insn2, 8); > > If things get this complicated, a decode routine might be appropriate. OK, move these logics into a new function "decode_thumb_32bit_store_single_data_item". Note that patch sits on top of this patch, [patch] refactor arm-tdep.c:install_ldr_str_ldrb_strb to handle halfword http://sourceware.org/ml/gdb-patches/2011-07/msg00183.html -- Yao [-- Attachment #2: 0003-Support-displaced-stepping-for-Thumb-32-bit-insns.patch --] [-- Type: text/x-patch, Size: 28770 bytes --] Support displaced stepping for Thumb 32-bit insns. * arm-tdep.c (thumb_copy_unmodified_32bit): New. (thumb2_copy_preload): New. (thumb2_copy_copro_load_store): New. (thumb2_copy_b_bl_blx): New. (thumb2_copy_alu_imm): New. (thumb2_copy_load_store): New. (thumb2_copy_block_xfer): New. (thumb_32bit_copy_undef): New. (thumb_32bit_copy_unpred): New. (thumb2_decode_ext_reg_ld_st): New. (thumb2_decode_svc_copro): New. (decode_thumb_32bit_store_single_data_item): New. (thumb_copy_pc_relative_32bit): New. (thumb_decode_pc_relative_32bit): New. (decode_thumb_32bit_ld_mem_hints): New. (thumb2_copy_table_branch): New (thumb_process_displaced_32bit_insn): Process Thumb 32-bit instructions. --- gdb/arm-tdep.c | 840 +++++++++++++++++++++++++++++++++++++++++++++++++++++++- 1 files changed, 839 insertions(+), 1 deletions(-) diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c index b0074bd..bd92193 100644 --- a/gdb/arm-tdep.c +++ b/gdb/arm-tdep.c @@ -5341,6 +5341,23 @@ arm_copy_unmodified (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_copy_unmodified_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, const char *iname, + struct displaced_step_closure *dsc) +{ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying insn %.4x %.4x, " + "opcode/class '%s' unmodified\n", insn1, insn2, + iname); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy 16-bit Thumb(Thumb and 16-bit Thumb-2) instruction without any modification. */ static int @@ -5408,6 +5425,54 @@ arm_copy_preload (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_preload (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + unsigned int u_bit = bit (insn1, 7); + int imm12 = bits (insn2, 0, 11); + ULONGEST pc_val; + + if (rn != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "preload", dsc); + + /* PC is only allowed to use in PLI (immeidate,literal) Encoding T3, and + PLD (literal) Encoding T1. */ + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying pld/pli pc (0x%x) %c imm12 %.4x\n", + (unsigned int) dsc->insn_addr, u_bit ? '+' : '-', + imm12); + + if (!u_bit) + imm12 = -1 * imm12; + + /* Rewrite instruction {pli/pld} PC imm12 into: + Preapre: tmp[0] <- r0, tmp[1] <- r1, r0 <- pc, r1 <- imm12 + + {pli/pld} [r0, r1] + + Cleanup: r0 <- tmp[0], r1 <- tmp[1]. */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + + pc_val = displaced_read_reg (regs, dsc, ARM_PC_REGNUM); + + displaced_write_reg (regs, dsc, 0, pc_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, imm12, CANNOT_WRITE_PC); + dsc->u.preload.immed = 0; + + /* {pli/pld} [r0, r1] */ + dsc->modinsn[0] = insn1 & 0xff00; + dsc->modinsn[1] = 0xf001; + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_preload; + return 0; +} + /* Preload instructions with register offset. */ static void @@ -5517,6 +5582,30 @@ arm_copy_copro_load_store (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_copro_load_store (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rn = bits (insn1, 0, 3); + + if (rn == ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "copro load/store", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying coprocessor " + "load/store insn %.4x%.4x\n", insn1, insn2); + + dsc->modinsn[0] = insn1 & 0xfff0; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + install_copro_load_store (gdbarch, regs, dsc, bit (insn1, 9), rn); + + return 0; +} + /* Clean up branch instructions (actually perform the branch, by setting PC). */ @@ -5604,6 +5693,61 @@ arm_copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_b_bl_blx (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int link = bit (insn2, 14); + int exchange = link && !bit (insn2, 12); + int cond = INST_AL; + long offset =0; + int j1 = bit (insn2, 13); + int j2 = bit (insn2, 11); + int s = sbits (insn1, 10, 10); + int i1 = !(j1 ^ bit (insn1, 10)); + int i2 = !(j2 ^ bit (insn1, 10)); + + if (!link && !exchange) /* B */ + { + offset = (bits (insn2, 0, 10) << 1); + if (bit (insn2, 12)) /* Encoding T4 */ + { + offset |= (bits (insn1, 0, 9) << 12) + | (i2 << 22) + | (i1 << 23) + | (s << 24); + cond = INST_AL; + } + else /* Encoding T3 */ + { + offset |= (bits (insn1, 0, 5) << 12) + | (j1 << 18) + | (j2 << 19) + | (s << 20); + cond = bits (insn1, 6, 9); + } + } + else + { + offset = (bits (insn1, 0, 9) << 12); + offset |= ((i2 << 22) | (i1 << 23) | (s << 24)); + offset |= exchange ? + (bits (insn2, 1, 10) << 2) : (bits (insn2, 0, 10) << 1); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying %s insn " + "%.4x %.4x with offset %.8lx\n", + link ? (exchange) ? "blx" : "bl" : "b", + insn1, insn2, offset); + + dsc->modinsn[0] = THUMB_NOP; + + install_b_bl_blx (gdbarch, regs, dsc, cond, exchange, link, offset); + return 0; +} + /* Copy B Thumb instructions. */ static int thumb_copy_b (struct gdbarch *gdbarch, unsigned short insn, @@ -5767,6 +5911,58 @@ arm_copy_alu_imm (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, return 0; } +static int +thumb2_copy_alu_imm (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int op = bits (insn1, 5, 8); + unsigned int rn, rm, rd; + ULONGEST rd_val, rn_val; + + rn = bits (insn1, 0, 3); /* Rn */ + rm = bits (insn2, 0, 3); /* Rm */ + rd = bits (insn2, 8, 11); /* Rd */ + + /* This routine is only called for instruction MOV. */ + gdb_assert (op == 0x2 && rn == 0xf); + + if (rm != ARM_PC_REGNUM && rd != ARM_PC_REGNUM) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ALU imm", dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying reg %s insn %.4x%.4x\n", + "ALU", insn1, insn2); + + /* Instruction is of form: + + <op><cond> rd, [rn,] #imm + + Rewrite as: + + Preparation: tmp1, tmp2 <- r0, r1; + r0, r1 <- rd, rn + Insn: <op><cond> r0, r1, #imm + Cleanup: rd <- r0; r0 <- tmp1; r1 <- tmp2 + */ + + dsc->tmp[0] = displaced_read_reg (regs, dsc, 0); + dsc->tmp[1] = displaced_read_reg (regs, dsc, 1); + rn_val = displaced_read_reg (regs, dsc, rn); + rd_val = displaced_read_reg (regs, dsc, rd); + displaced_write_reg (regs, dsc, 0, rd_val, CANNOT_WRITE_PC); + displaced_write_reg (regs, dsc, 1, rn_val, CANNOT_WRITE_PC); + dsc->rd = rd; + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = ((insn2 & 0xf0f0) | 0x1); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_alu_imm; + + return 0; +} + /* Copy/cleanup arithmetic/logic insns with register RHS. */ static void @@ -6135,6 +6331,69 @@ install_load_store (struct gdbarch *gdbarch, struct regcache *regs, } static int +thumb2_copy_load_store (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc, int load, int size, + int usermode, int writeback) +{ + int immed = !bit (insn1, 9); + unsigned int rt = bits (insn2, 12, 15); + unsigned int rn = bits (insn1, 0, 3); + unsigned int rm = bits (insn2, 0, 3); /* Only valid if !immed. */ + + if (rt != ARM_PC_REGNUM && rn != ARM_PC_REGNUM + && (immed || rm != ARM_PC_REGNUM)) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "load/store", + dsc); + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying %s%s r%d [r%d] insn %.4x%.4x\n", + load ? (size == 1 ? "ldrb" : (size == 2 ? "ldrh" : "ldr")) + : (size == 1 ? "strb" : (size == 2 ? "strh" : "str")), + usermode ? "t" : "", + rt, rn, insn1, insn2); + + install_load_store (gdbarch, regs, dsc, load, immed, writeback, size, + usermode, rt, rm, rn); + + if (load || rt != ARM_PC_REGNUM) + { + dsc->u.ldst.restore_r4 = 0; + + if (immed) + /* {ldr,str}[b]<cond> rt, [rn, #imm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, #imm]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = insn2 & 0x0fff; + } + else + /* {ldr,str}[b]<cond> rt, [rn, rm], etc. + -> + {ldr,str}[b]<cond> r0, [r2, r3]. */ + { + dsc->modinsn[0] = (insn1 & 0xfff0) | 0x2; + dsc->modinsn[1] = (insn2 & 0x0ff0) | 0x3; + } + + dsc->numinsns = 2; + } + else + { + /* In Thumb-32 instructions, the behavior is unpredictable when Rt is + PC, while the behavior is undefined when Rn is PC. Shortly, neither + Rt nor Rn can be PC. */ + + gdb_assert (0); + } + + return 0; +} + + +static int arm_copy_ldr_str_ldrb_strb (struct gdbarch *gdbarch, uint32_t insn, struct regcache *regs, struct displaced_step_closure *dsc, @@ -6524,6 +6783,87 @@ arm_copy_block_xfer (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb2_copy_block_xfer (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rn = bits (insn1, 0, 3); + int load = bit (insn1, 4); + int writeback = bit (insn1, 5); + + /* Block transfers which don't mention PC can be run directly + out-of-line. */ + if (rn != ARM_PC_REGNUM && (insn2 & 0x8000) == 0) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "ldm/stm", dsc); + + if (rn == ARM_PC_REGNUM) + { + warning (_("displaced: Unpredictable LDM or STM with " + "base register r15")); + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unpredictable ldm/stm", dsc); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying block transfer insn " + "%.4x%.4x\n", insn1, insn2); + + /* Clear bit 13, since it should be always zero. */ + dsc->u.block.regmask = (insn2 & 0xdfff); + dsc->u.block.rn = rn; + + dsc->u.block.load = bit (insn1, 4); + dsc->u.block.user = bit (insn1, 6); + dsc->u.block.increment = bit (insn1, 7); + dsc->u.block.before = bit (insn1, 8); + dsc->u.block.writeback = writeback; + dsc->u.block.cond = INST_AL; + + if (load) + { + if (dsc->u.block.regmask == 0xffff) + { + /* This branch is impossible to happen. */ + gdb_assert (0); + } + else + { + unsigned int regmask = dsc->u.block.regmask; + unsigned int num_in_list = bitcount (regmask), new_regmask, bit = 1; + unsigned int to = 0, from = 0, i, new_rn; + + for (i = 0; i < num_in_list; i++) + dsc->tmp[i] = displaced_read_reg (regs, dsc, i); + + if (writeback) + insn1 &= ~(1 << 5); + + new_regmask = (1 << num_in_list) - 1; + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, _("displaced: LDM r%d%s, " + "{..., pc}: original reg list %.4x, modified " + "list %.4x\n"), rn, writeback ? "!" : "", + (int) dsc->u.block.regmask, new_regmask); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = (new_regmask & 0xffff); + dsc->numinsns = 2; + + dsc->cleanup = &cleanup_block_load_pc; + } + } + else + { + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + dsc->cleanup = &cleanup_block_store_pc; + } + return 0; +} + /* Cleanup/copy SVC (SWI) instructions. These two functions are overridden for Linux, where some SVC instructions must be treated specially. */ @@ -6609,6 +6949,23 @@ arm_copy_undef (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_undef (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, + struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying undefined insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* Copy unpredictable instructions. */ static int @@ -6624,6 +6981,23 @@ arm_copy_unpred (struct gdbarch *gdbarch, uint32_t insn, return 0; } +static int +thumb_32bit_copy_unpred (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct displaced_step_closure *dsc) +{ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: copying unpredicatable insn " + "%.4x %.4x\n", (unsigned short) insn1, + (unsigned short) insn2); + + dsc->modinsn[0] = insn1; + dsc->modinsn[1] = insn2; + dsc->numinsns = 2; + + return 0; +} + /* The decode_* functions are instruction decoding helpers. They mostly follow the presentation in the ARM ARM. */ @@ -7005,6 +7379,91 @@ arm_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint32_t insn, return 1; } +/* Decode shifted register instructions. */ + +static int +thumb2_decode_dp_shift_reg (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + /* Data processing (shift register) instructions can be grouped according to + their encondings: + + 1. Insn X Rn :inst1,3-0 Rd: insn2,8-11, Rm: insn2,3-0. Rd=15 & S=1, Insn Y. + Rn != PC, Rm ! = PC. + X: AND, Y: TST (REG) + X: EOR, Y: TEQ (REG) + X: ADD, Y: CMN (REG) + X: SUB, Y: CMP (REG) + + 2. Insn X Rn : ins1,3-0, Rm: insn2, 3-0; Rm! = PC, Rn != PC + Insn X: TST, TEQ, PKH, CMN, and CMP. + + 3. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2, 3-0. Rn != PC, Rd != PC, + Rm != PC. + X: BIC, ADC, SBC, and RSB. + + 4. Insn X Rn:inst1,3-0 Rd:insn2,8-11, Rm:insn2,3-0. Rd = 15, Insn Y. + X: ORR, Y: MOV (REG). + X: ORN, Y: MVN (REG). + + 5. Insn X Rd: insn2, 8-11, Rm: insn2, 3-0. + X: MVN, Rd != PC, Rm != PC + X: MOV: Rd/Rm can be PC. + + PC is only allowed to be used in instruction MOV. +*/ + + unsigned int op = bits (insn1, 5, 8); + unsigned int rn = bits (insn1, 0, 3); + + if (op == 0x2 && rn == 0xf) /* MOV */ + return thumb2_copy_alu_imm (gdbarch, insn1, insn2, regs, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp (shift reg)", dsc); +} + + +/* Decode extension register load/store. Exactly the same as + arm_decode_ext_reg_ld_st. */ + +static int +thumb2_decode_ext_reg_ld_st (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int opcode = bits (insn1, 4, 8); + + switch (opcode) + { + case 0x04: case 0x05: + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vmov", dsc); + + case 0x08: case 0x0c: /* 01x00 */ + case 0x0a: case 0x0e: /* 01x10 */ + case 0x12: case 0x16: /* 10x10 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vstm/vpush", dsc); + + case 0x09: case 0x0d: /* 01x01 */ + case 0x0b: case 0x0f: /* 01x11 */ + case 0x13: case 0x17: /* 10x11 */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vfp/neon vldm/vpop", dsc); + + case 0x10: case 0x14: case 0x18: case 0x1c: /* vstr. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "vstr", dsc); + case 0x11: case 0x15: case 0x19: case 0x1d: /* vldr. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, regs, dsc); + } + + /* Should be unreachable. */ + return 1; +} + static int arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7051,6 +7510,49 @@ arm_decode_svc_copro (struct gdbarch *gdbarch, uint32_t insn, CORE_ADDR to, return arm_copy_undef (gdbarch, insn, dsc); /* Possibly unreachable. */ } +static int +thumb2_decode_svc_copro (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int coproc = bits (insn2, 8, 11); + unsigned int op1 = bits (insn1, 4, 9); + unsigned int bit_5_8 = bits (insn1, 5, 8); + unsigned int bit_9 = bit (insn1, 9); + unsigned int bit_4 = bit (insn1, 4); + unsigned int rn = bits (insn1, 0, 3); + + if (bit_9 == 0) + { + if (bit_5_8 == 2) + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon 64bit xfer/mrrc/mrrc2/mcrr/mcrr2", + dsc); + else if (bit_5_8 == 0) /* UNDEFINED. */ + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + else + { + /*coproc is 101x. SIMD/VFP, ext registers load/store. */ + if ((coproc & 0xe) == 0xa) + return thumb2_decode_ext_reg_ld_st (gdbarch, insn1, insn2, regs, + dsc); + else /* coproc is not 101x. */ + { + if (bit_4 == 0) /* STC/STC2. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "stc/stc2", dsc); + else /* LDC/LDC2 {literal, immeidate}. */ + return thumb2_copy_copro_load_store (gdbarch, insn1, insn2, + regs, dsc); + } + } + } + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, "coproc", dsc); + + return 0; +} + static void install_pc_relative (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc, int rd) @@ -7100,6 +7602,35 @@ thumb_decode_pc_relative_16bit (struct gdbarch *gdbarch, uint16_t insn, } static int +thumb_copy_pc_relative_32bit (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + unsigned int rd = bits (insn2, 8, 11); + /* Since immeidate has the same encoding in both ADR and ADD, so we simply + extract raw immediate encoding rather than computing immediate. When + generating ADD instruction, we can simply perform OR operation to set + immediate into ADD. */ + unsigned int imm_3_8 = insn2 & 0x70ff; + unsigned int imm_i = insn1 & 0x0400; /* Clear all bits except bit 10. */ + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, + "displaced: copying thumb adr r%d, #%d:%d insn %.4x%.4x\n", + rd, imm_i, imm_3_8, insn1, insn2); + + /* Encoding T3: ADD Rd, Rd, #imm */ + dsc->modinsn[0] = (0xf100 | rd | imm_i); + dsc->modinsn[1] = ((rd << 8) | imm_3_8); + + dsc->numinsns = 2; + + install_pc_relative (gdbarch, regs, dsc, rd); + + return 0; +} + +static int thumb_copy_16bit_ldr_literal (struct gdbarch *gdbarch, unsigned short insn1, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7181,6 +7712,51 @@ thumb_copy_cbnz_cbz (struct gdbarch *gdbarch, uint16_t insn1, return 0; } +/* Copy Table Brach Byte/Halfword */ +static int +thumb2_copy_table_branch (struct gdbarch *gdbarch, uint16_t insn1, + uint16_t insn2, struct regcache *regs, + struct displaced_step_closure *dsc) +{ + ULONGEST rn_val, rm_val; + int is_tbh = bit (insn2, 4); + CORE_ADDR halfwords = 0; + enum bfd_endian byte_order = gdbarch_byte_order (gdbarch); + + rn_val = displaced_read_reg (regs, dsc, bits (insn1, 0, 3)); + rm_val = displaced_read_reg (regs, dsc, bits (insn2, 0, 3)); + + if (is_tbh) + { + gdb_byte buf[2]; + + target_read_memory (rn_val + 2 * rm_val, buf, 2); + halfwords = extract_unsigned_integer (buf, 2, byte_order); + } + else + { + gdb_byte buf[1]; + + target_read_memory (rn_val + rm_val, buf, 1); + halfwords = extract_unsigned_integer (buf, 1, byte_order); + } + + if (debug_displaced) + fprintf_unfiltered (gdb_stdlog, "displaced: %s base 0x%x offset 0x%x" + " offset 0x%x\n", is_tbh ? "tbh" : "tbb", + (unsigned int) rn_val, (unsigned int) rm_val, + (unsigned int) halfwords); + + dsc->u.branch.cond = INST_AL; + dsc->u.branch.link = 0; + dsc->u.branch.exchange = 0; + dsc->u.branch.dest = dsc->insn_addr + 4 + 2 * halfwords; + + dsc->cleanup = &cleanup_branch; + + return 0; +} + static void cleanup_pop_pc_16bit_all (struct gdbarch *gdbarch, struct regcache *regs, struct displaced_step_closure *dsc) @@ -7374,12 +7950,274 @@ thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch, uint16_t insn1, _("thumb_process_displaced_16bit_insn: Instruction decode error")); } +static int +decode_thumb_32bit_ld_mem_hints (struct gdbarch *gdbarch, + uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int rt = bits (insn2, 12, 15); + int rn = bits (insn1, 0, 3); + int op1 = bits (insn1, 7, 8); + int user_mode = (bits (insn2, 8, 11) == 0xe); + int err = 0; + int writeback = 0; + + switch (bits (insn1, 5, 6)) + { + case 0: /* Load byte and memory hints */ + if (rt == 0xf) /* PLD/PLI */ + { + if (rn == 0xf) + { + /* PLD literal or Encoding T3 of PLI(immediate, literal). */ + return thumb2_copy_preload (gdbarch, insn1, insn2, regs, dsc); + } + else + { + switch (op1) + { + case 0: case 2: + if (bits (insn2, 8, 11) == 0x1110 + || (bits (insn2, 8, 11) & 0x6) == 0x9) + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); + else + /* PLI/PLD (reigster, immediate) doesn't use PC. */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pli/pld", dsc); + break; + case 1: /* PLD/PLDW (immediate) */ + case 3: /* PLI (immediate, literal) */ + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pli/pld", dsc); + break; + + } + } + } + else + { + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 1, + user_mode, writeback); + } + + break; + case 1: /* Load halfword and memory hints. */ + if (rt == 0xf) /* PLD{W} and Unalloc memory hint. */ + { + if (rn == 0xf) + { + if (op1 == 0 || op1 == 1) + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); + else + return thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "unalloc memhint", dsc); + } + else + { + if ((op1 == 0 || op1 == 2) + && (bits (insn2, 8, 11) == 0xe + || ((bits (insn2, 8, 11) & 0x9) == 0x9))) + return thumb_32bit_copy_unpred (gdbarch, insn1, insn2, dsc); + else thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "pld/unalloc memhint", dsc); + } + } + else + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, + 2, user_mode, writeback); + } + break; + case 2: /* Load word */ + { + int op1 = bits (insn1, 7, 8); + + if ((op1 == 0 || op1 == 2) && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, dsc, 1, 4, + user_mode, writeback); + break; + } + default: + return thumb_32bit_copy_undef (gdbarch, insn1, insn2, dsc); + break; + } + return 0; +} + + +static int +decode_thumb_32bit_store_single_data_item (struct gdbarch *gdbarch, + uint16_t insn1, uint16_t insn2, + struct regcache *regs, + struct displaced_step_closure *dsc) +{ + int user_mode = (bits (insn2, 8, 11) == 0xe); + int size = 0; + int writeback = 0; + int op1 = bits (insn1, 5, 7); + + switch (op1) + { + case 0: case 4: size = 1; break; + case 1: case 5: size = 2; break; + case 2: case 6: size = 4; break; + } + if (bits (insn1, 5, 7) < 3 && bit (insn2, 11)) + writeback = bit (insn2, 8); + + return thumb2_copy_load_store (gdbarch, insn1, insn2, regs, + dsc, 0, size, user_mode, + writeback); + +} + static void thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, uint16_t insn1, uint16_t insn2, struct regcache *regs, struct displaced_step_closure *dsc) { - error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions")); + int err = 0; + unsigned short op = bit (insn2, 15); + unsigned int op1 = bits (insn1, 11, 12); + + switch (op1) + { + case 1: + { + switch (bits (insn1, 9, 10)) + { + case 0: + if (bit (insn1, 6)) + { + /* Load/store {dual, execlusive}, table branch. */ + if (bits (insn1, 7, 8) == 1 && bits (insn1, 4, 5) == 1 + && bits (insn2, 5, 7) == 0) + err = thumb2_copy_table_branch (gdbarch, insn1, insn2, regs, + dsc); + else + /* PC is not allowed to use in load/store {dual, exclusive} + instructions. */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "load/store dual/ex", dsc); + } + else /* load/store multiple */ + { + switch (bits (insn1, 7, 8)) + { + case 0: case 3: /* SRS, RFE */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "srs/rfe", dsc); + break; + case 1: case 2: /* LDM/STM/PUSH/POP */ + err = thumb2_copy_block_xfer (gdbarch, insn1, insn2, regs, dsc); + break; + } + } + break; + + case 1: + /* Data-processing (shift register). */ + err = thumb2_decode_dp_shift_reg (gdbarch, insn1, insn2, regs, + dsc); + break; + default: /* Coprocessor instructions. */ + /* Thumb 32bit coprocessor instructions have the same encoding + as ARM's. */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + } + case 2: /* op1 = 2 */ + if (op) /* Branch and misc control. */ + { + if (bit (insn2, 14)) /* BLX/BL */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else if (!bits (insn2, 12, 14) && bits (insn1, 7, 9) != 0x7) + /* Conditional Branch */ + err = thumb2_copy_b_bl_blx (gdbarch, insn1, insn2, regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "misc ctrl", dsc); + } + else + { + if (bit (insn1, 9)) /* Data processing (plain binary imm). */ + { + int op = bits (insn1, 4, 8); + int rn = bits (insn1, 0, 4); + if ((op == 0 || op == 0xa) && rn == 0xf) + err = thumb_copy_pc_relative_32bit (gdbarch, insn1, insn2, + regs, dsc); + else + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/pb", dsc); + } + else /* Data processing (modified immeidate) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp/mi", dsc); + } + break; + case 3: /* op1 = 3 */ + switch (bits (insn1, 9, 10)) + { + case 0: + if ((bits (insn1, 4, 6) & 0x5) == 0x1) + err = decode_thumb_32bit_ld_mem_hints (gdbarch, insn1, insn2, + regs, dsc); + else + { + if (bit (insn1, 8)) /* NEON Load/Store */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "neon elt/struct load/store", + dsc); + else /* Store single data item */ + err = decode_thumb_32bit_store_single_data_item (gdbarch, + insn1, insn2, + regs, dsc); + + } + break; + case 1: /* op1 = 3, bits (9, 10) == 1 */ + switch (bits (insn1, 7, 8)) + { + case 0: case 1: /* Data processing (register) */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "dp(reg)", dsc); + break; + case 2: /* Multiply and absolute difference */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "mul/mua/diff", dsc); + break; + case 3: /* Long multiply and divide */ + err = thumb_copy_unmodified_32bit (gdbarch, insn1, insn2, + "lmul/lmua", dsc); + break; + } + break; + default: /* Coprocessor instructions */ + err = thumb2_decode_svc_copro (gdbarch, insn1, insn2, regs, dsc); + break; + } + break; + default: + err = 1; + } + + if (err) + internal_error (__FILE__, __LINE__, + _("thumb_process_displaced_32bit_insn: Instruction decode error")); + } static void -- 1.7.0.4 ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-07-06 10:55 ` Yao Qi @ 2011-07-15 19:57 ` Ulrich Weigand 2011-07-18 9:26 ` Yao Qi 0 siblings, 1 reply; 19+ messages in thread From: Ulrich Weigand @ 2011-07-15 19:57 UTC (permalink / raw) To: Yao Qi; +Cc: gdb-patches Hi Yao, I just sent a review of your latest patch, but it doesn't show up on gdb-patches ... Did I just mess up CC, or did you not get it at all? Thanks, Ulrich -- Dr. Ulrich Weigand GNU Toolchain for Linux on System z and Cell BE Ulrich.Weigand@de.ibm.com ^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns 2011-07-15 19:57 ` Ulrich Weigand @ 2011-07-18 9:26 ` Yao Qi 0 siblings, 0 replies; 19+ messages in thread From: Yao Qi @ 2011-07-18 9:26 UTC (permalink / raw) To: Ulrich Weigand; +Cc: gdb-patches On 07/16/2011 02:56 AM, Ulrich Weigand wrote: > Hi Yao, > > I just sent a review of your latest patch, but it doesn't show up on > gdb-patches ... Did I just mess up CC, or did you not get it at all? > I got you review mail, but gdb-patches@ was not copied. I'll reply to that mail and copy gdb-patches@. Again, thanks for your careful and patient review. -- Yao (é½å°§) ^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2011-10-10 14:40 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
[not found] <201107151847.p6FIlJNm001180@d06av02.portsmouth.uk.ibm.com>
2011-08-06 4:32 ` [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns Yao Qi
2011-08-09 18:46 ` Ulrich Weigand
2011-08-19 3:13 ` Yao Qi
2011-08-19 16:39 ` Ulrich Weigand
2011-08-30 15:53 ` Yao Qi
2011-09-14 14:25 ` Ulrich Weigand
2011-10-09 13:28 ` Yao Qi
2011-10-10 14:40 ` Ulrich Weigand
2011-10-10 1:41 ` Yao Qi
2011-10-10 14:39 ` Ulrich Weigand
2010-12-25 14:17 [patch 0/3] Displaced stepping for 16-bit Thumb instructions Yao Qi
2011-03-24 13:49 ` [try 2nd 0/8] Displaced stepping for " Yao Qi
2011-03-24 14:05 ` [try 2nd 5/8] Displaced stepping for Thumb 32-bit insns Yao Qi
2011-05-05 13:25 ` Yao Qi
2011-05-17 17:14 ` Ulrich Weigand
2011-05-23 11:32 ` Yao Qi
2011-05-23 11:32 ` Yao Qi
2011-05-27 22:11 ` Ulrich Weigand
2011-07-06 10:55 ` Yao Qi
2011-07-15 19:57 ` Ulrich Weigand
2011-07-18 9:26 ` Yao Qi
This is a public inbox, see mirroring instructions for how to clone and mirror all data and code used for this inbox