From mboxrd@z Thu Jan 1 00:00:00 1970
Message-Id: <201102172011.p1HKBa0Y002611@d06av02.portsmouth.uk.ibm.com>
Subject: Re: [patch 3/3] Displaced stepping for 16-bit Thumb instructions
To: yao@codesourcery.com (Yao Qi)
Date: Thu, 17 Feb 2011 20:55:00 -0000
From: "Ulrich Weigand"
Cc: gdb-patches@sourceware.org
In-Reply-To: <4D15FDF6.4070606@codesourcery.com> from "Yao Qi" at Dec 25, 2010 10:21:42 PM
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
X-SW-Source: 2011-02/txt/msg00436.txt.bz2

Yao Qi wrote:
> Patch 3 is about supporting 16-bit Thumb displaced stepping.  In this
> patch, we decode 16-bit instruction, and process them.  We also leave a
> slot for 32-bit Thumb instructions, and put an error there.

Thanks.  Any thoughts on how much more work the 32-bit instructions would
be?  I'm not sure it is a good idea to support Thumb only partially; with
the current setup, you will fail immediately when debugging Thumb, but
with the patch, you might happen to step over a couple of 16-bit
instructions before hitting the first 32-bit instruction ...  (That may
not be all that bad, just wondering ...)

> Test cases are updated accordingly for some PC-related instructions.

I haven't looked in detail at the test cases yet, but here are some
comments on the code so far:

> 	(cleanup_branch): Move some code to ...
> 	(cleanup_branch_1): ... here.  New.  Support Thumb mode.
> 	(cleanup_cbz_cbnz): New.
> 	(copy_b_bl_blx): Move some code to ...
> 	(arm_copy_b_bl_blx): ... here.  New.
> 	(thumb_copy_b): New.
> 	(copy_bx_blx_reg): Move some code to ...
> 	(arm_copy_bx_blx_reg): ... here.  New.
> 	(thumb_copy_bx_blx_reg): New.

I'm not sure I like those refactorings ...  It seems to me a nicer way
would be to re-use the cleanup_ routines unchanged for both ARM and Thumb
(because those only depend on the instruction *semantics*, not encoding),
but use fully separate copy_ routines for ARM and Thumb, since those are
really all about decoding the instruction format.

> 	(thumb_decode_dp): New.
> 	(thumb_decode_pc_relative): New.
> 	(thumb_copy_16bit_ldr_literal): New.
> 	(thumb_copy_cbnz_cbz): New.
> 	(thumb_process_displaced_16bit_insn): New.
> 	(thumb_process_displaced_32bit_insn): New.
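The split suggested above (shared cleanup_ routines that act only on the
decoded semantics, plus per-mode copy_ routines that do all the decoding)
could be sketched as follows.  This is a minimal stand-alone illustration,
not GDB's actual code: the struct and field names are simplified stand-ins,
and the + 8 / + 4 constants are the architectural PC read offsets for ARM
and Thumb mode respectively.

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for GDB's displaced_step_closure.  */
struct dsc
{
  uint32_t insn_addr;    /* Address the original insn came from.  */
  uint32_t branch_dest;  /* Decoded branch target.  */
  uint32_t written_pc;   /* Stand-in for the regcache PC slot.  */
};

/* Shared cleanup: depends only on branch *semantics* (the decoded
   destination), so a single copy serves both ARM and Thumb.  */
static void
cleanup_branch (struct dsc *dsc)
{
  dsc->written_pc = dsc->branch_dest;
}

/* Mode-specific copy routines: all the encoding knowledge lives here.
   In ARM mode the PC reads as the insn address + 8; in Thumb mode, + 4.  */
static void
arm_copy_b (struct dsc *dsc, int32_t offset)
{
  dsc->branch_dest = dsc->insn_addr + 8 + offset;
  cleanup_branch (dsc);
}

static void
thumb_copy_b (struct dsc *dsc, int32_t offset)
{
  dsc->branch_dest = dsc->insn_addr + 4 + offset;
  cleanup_branch (dsc);
}
```

The point of the shape is that adding a new mode only adds copy routines;
the cleanup side never needs to know how the branch was encoded.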
Just a general note on the ordering of routines in the file: the existing
code has first all the cleanup and copy routines, followed by all the
higher-level decode routines.  I think it would be cleaner to keep it the
same way for the Thumb routines.

In fact, I think it might even make sense to have the Thumb copy routines
next to the corresponding ARM routines, such that all the copy routines
that install a particular cleanup routine are in fact close by that
cleanup routine.  The decode routines can then come at the end (after the
ARM decode routines).

> @@ -4338,10 +4341,15 @@ displaced_read_reg (struct regcache *regs, CORE_ADDR from, int regno)
> 
>    if (regno == 15)
>      {
> +      if (displaced_in_arm_mode (regs))
> +	from += 8;
> +      else
> +	from += 6;

I think the 6 is wrong, it should be 4.  From the ARM manual:

- When executing an ARM instruction, PC reads as the address of the
  current instruction plus 8.
- When executing a Thumb instruction, PC reads as the address of the
  current instruction plus 4.

>        if (debug_displaced)
> 	 fprintf_unfiltered (gdb_stdlog, "displaced: read pc value %.8lx\n",
> -			     (unsigned long) from + 8);
> -      return (ULONGEST) from + 8;  /* Pipeline offset.  */
> +			     (unsigned long) from);
> +      return (ULONGEST) from;  /* Pipeline offset.  */

The "pipeline offset" comment refers to the + 8 (or + 4); as this is now
moved further up, the comment should move with it.

> +/* Clean up branch instructions (actually perform the branch, by setting
> +   PC).
    */
> +static void
> +cleanup_branch (struct gdbarch *gdbarch, struct regcache *regs,
> +		struct displaced_step_closure *dsc)
> +{
> +  ULONGEST from = dsc->insn_addr;
> +  uint32_t status = displaced_read_reg (regs, from, ARM_PS_REGNUM);
> +  int branch_taken = condition_true (dsc->u.branch.cond, status);
> +
> +  cleanup_branch_1 (gdbarch, regs, dsc, branch_taken);
> +}
> +
> +static void
> +cleanup_cbz_cbnz (struct gdbarch *gdbarch, struct regcache *regs,
> +		  struct displaced_step_closure *dsc)
> +{
> +  cleanup_branch_1 (gdbarch, regs, dsc, dsc->u.branch.cond);
> +}

I think this is unnecessary: copy_cbz_cbnz ought to be able to use
cleanup_branch as-is.  If the branch is taken, it should just set
dsc->u.branch.cond to INSN_AL; if the branch is not taken, it should
simply not use any cleanup at all, since no further action is required.

> @@ -4718,6 +4752,40 @@ copy_b_bl_blx (struct gdbarch *gdbarch, uint32_t insn,
> 
>       B similar, but don't set r14 in cleanup.  */
> 
> +
> +  dsc->u.branch.cond = cond;
> +  dsc->u.branch.link = link;
> +  dsc->u.branch.exchange = exchange;
> +
> +  if (arm_pc_is_thumb (gdbarch, from))

You should never use arm_pc_is_thumb here; the heuristics it applies are
completely unnecessary, since we know which mode we are in, and they may
just produce the wrong outcome.  In any case, as discussed above, this
ought to be two separate copy routines, one for ARM mode and one for
Thumb mode anyway.

> +    {
> +      /* Plus the size of THUMB_NOP and B/BL/BLX.  */
> +      dsc->u.branch.dest = from + 2 + 4 + offset;
> +      RECORD_MOD_16BIT_INSN (0, THUMB_NOP);

The + 2 doesn't look right to me.  The offset is relative to the PC,
which is (see above) "from + 8" in ARM mode and "from + 4" in Thumb mode.
I don't see how the size of the THUMB_NOP is involved at all here ...

> +static int
> +thumb_decode_dp (struct gdbarch *gdbarch, unsigned short insn,
> +		 struct displaced_step_closure *dsc)
> +{
> +  /* 16-bit data-processing insns are not related to PC.
     */
> +  return thumb_copy_unmodified_16bit (gdbarch, insn, "data-processing", dsc);
> +}

This doesn't need to be a separate function, I guess ...

> +static int
> +thumb_decode_pc_relative (struct gdbarch *gdbarch, unsigned short insn,
> +			  struct regcache *regs,
> +			  struct displaced_step_closure *dsc)

This seems to be a copy routine, not a decode routine ...

> +  /* ADDS Rd, #imm8 */
> +  RECORD_MOD_32BIT_INSN (0, 0x3000 | (rd << 8) | imm8);

Should be 16BIT (but see my earlier mail on the usefulness of those
macros in the first place ...).

> +static void
> +thumb_process_displaced_16bit_insn (struct gdbarch *gdbarch,
> +				    unsigned short insn1, CORE_ADDR to,

I don't think this needs TO.

> +				    struct regcache *regs,
> +				    struct displaced_step_closure *dsc)
> +{
> +  unsigned short op_bit_12_15 = bits (insn1, 12, 15);
> +  unsigned short op_bit_10_11 = bits (insn1, 10, 11);
> +  int err = 0;
> +
> +  /* 16-bit Thumb instructions.  */
> +  switch (op_bit_12_15)
> +    {
> +    /* Shift (imm), add, subtract, move and compare.  */
> +    case 0: case 1: case 2: case 3:
> +      err = thumb_copy_unmodified_16bit (gdbarch, insn1, "", dsc);
> +      break;
> +    case 4:
> +      switch (op_bit_10_11)
> +	{
> +	case 0: /* Data-processing */
> +	  err = thumb_decode_dp (gdbarch, insn1, dsc);
> +	  break;
> +	case 1: /* Special data instructions and branch and exchange */
> +	  {
> +	    unsigned short op = bits (insn1, 7, 9);
> +	    if (op == 6 || op == 7) /* BX or BLX */
> +	      err = thumb_copy_bx_blx_reg (gdbarch, insn1, regs, dsc);
> +	    else
> +	      err = thumb_copy_unmodified_16bit (gdbarch, insn1,
> +						 "special data", dsc);

These include the ADD / MOV / CMP high register instructions, which can
access the PC, so they'd need special treatment.

> +	  }
> +	  break;
> +	default: /* LDR (literal) */
> +	  err = thumb_copy_16bit_ldr_literal (gdbarch, insn1, regs, dsc);
> +	}
> +      break;
> +    case 5: case 6: case 7: case 8: case 9: /* Load/Store single data item */
> +      err = thumb_copy_unmodified_16bit (gdbarch, insn1, "ldr/str", dsc);
> +      break;
> +    
case 10:
> +      if (op_bit_10_11 < 2) /* Generate PC-relative address */
> +	err = thumb_decode_pc_relative (gdbarch, insn1, regs, dsc);
> +      else /* Generate SP-relative address */
> +	err = thumb_copy_unmodified_16bit (gdbarch, insn1, "sp-relative", dsc);
> +      break;
> +    case 11: /* Misc 16-bit instructions */
> +      {
> +	switch (bits (insn1, 8, 11))
> +	  {
> +	  case 1: case 3: case 9: case 11: /* CBNZ, CBZ */
> +	    err = thumb_copy_cbnz_cbz (gdbarch, insn1, regs, dsc);
> +	    break;
> +	  default:
> +	    err = thumb_copy_unmodified_16bit (gdbarch, insn1, "", dsc);

Hmm, what about IT?

> +	  }
> +      }
> +      break;
> +    case 12:
> +      if (op_bit_10_11 < 2) /* Store multiple registers */
> +	err = thumb_copy_unmodified_16bit (gdbarch, insn1, "stm", dsc);
> +      else /* Load multiple registers */
> +	err = thumb_copy_unmodified_16bit (gdbarch, insn1, "ldm", dsc);
> +      break;
> +    case 13: /* Conditional branch and supervisor call */
> +      if (bits (insn1, 9, 11) != 7) /* Conditional branch */
> +	err = thumb_copy_b (gdbarch, insn1, dsc);
> +      else
> +	err = thumb_copy_unmodified_16bit (gdbarch, insn1, "svc", dsc);

There is special handling in arm-linux-tdep.c for ARM SVC instructions.
Don't we need this for Thumb SVCs as well?

> +      break;
> +    case 14: /* Unconditional branch */
> +      err = thumb_copy_b (gdbarch, insn1, dsc);
> +      break;
> +    default:
> +      internal_error (__FILE__, __LINE__,
> +		      _("thumb_process_displaced_insn: Instruction decode error"));
> +    }
> +
> +  if (err)
> +    internal_error (__FILE__, __LINE__,
> +		    _("thumb_process_displaced_insn: Instruction decode error"));
> +}
> +
> +static void
> +thumb_process_displaced_32bit_insn (struct gdbarch *gdbarch, CORE_ADDR from,
> +				    CORE_ADDR to, struct regcache *regs,
> +				    struct displaced_step_closure *dsc)

This really needs neither FROM nor TO, but should rather get the two
halves of the insn (to avoid duplicate memory accesses).
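A stand-alone sketch of that suggested shape, with simplified types
(plain uint16_t instead of GDB's regcache / CORE_ADDR machinery), a
bits() helper matching the inclusive bit-range extraction the patch
uses, and a placeholder body; none of this is GDB's actual code:

```c
#include <assert.h>
#include <stdint.h>

/* Inclusive bit-range extraction, in the style used by the patch:
   bits (x, lo, hi) returns bits lo..hi of x.  */
static uint32_t
bits (uint32_t x, int lo, int hi)
{
  return (x >> lo) & ((1u << (hi - lo + 1)) - 1);
}

/* Suggested shape: the 32-bit worker receives both halfwords that the
   caller already read, instead of FROM/TO addresses, so target memory
   is touched only once.  The body is a placeholder that just returns
   a dispatch key derived from the first halfword.  */
static int
thumb_process_displaced_32bit_insn (uint16_t insn1, uint16_t insn2)
{
  /* E.g. dispatch on bits 11..12 of the first halfword ...  */
  uint32_t op = bits (insn1, 11, 12);
  (void) insn2;
  return (int) op;
}
```

The caller reads insn1 (and, for a 32-bit encoding, insn2) exactly once
and hands both down; no callee re-reads target memory.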
> +{
> +  error (_("Displaced stepping is only supported in ARM mode and Thumb 16bit instructions"));
> +}
> +
>  static void
>  thumb_process_displaced_insn (struct gdbarch *gdbarch, CORE_ADDR from,
>  			      CORE_ADDR to, struct regcache *regs,
>  			      struct displaced_step_closure *dsc)
>  {
> -  error (_("Displaced stepping is only supported in ARM mode"));
> +  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
> +  unsigned short insn1
> +    = read_memory_unsigned_integer (from, 2, byte_order_for_code);
> +
> +  if (debug_displaced)
> +    fprintf_unfiltered (gdb_stdlog, "displaced: process thumb insn %.4x "
> +			"at %.8lx\n", insn1, (unsigned long) from);
> +
> +  if ((bits (insn1, 13, 15) == 7) && (bits (insn1, 11, 12)))

You should just use thumb_insn_size ...

> +    thumb_process_displaced_32bit_insn (gdbarch, from, to, regs, dsc);
> +  else
> +    thumb_process_displaced_16bit_insn (gdbarch, insn1, to, regs, dsc);
>  }

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU Toolchain for Linux on System z and Cell BE
  Ulrich.Weigand@de.ibm.com
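P.S.: The size check mentioned above boils down to the following: per the
ARM architecture manual, a halfword whose bits 15..11 are 0b11101,
0b11110 or 0b11111 is the leading half of a 32-bit Thumb instruction
(0b11100 is the 16-bit unconditional branch B).  A simplified
stand-alone version of such a helper, not GDB's actual implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Return the length in bytes of the Thumb instruction whose first
   halfword is INSN1: 4 if it is the leading half of a 32-bit
   instruction, 2 otherwise.  Bits 15..13 == 0b111 combined with a
   nonzero value in bits 12..11 marks a 32-bit encoding.  */
static int
thumb_insn_size (uint16_t insn1)
{
  if ((insn1 & 0xe000) == 0xe000 && (insn1 & 0x1800) != 0)
    return 4;
  return 2;
}
```

With this, the dispatcher above reduces to
"if (thumb_insn_size (insn1) == 4) ... else ...".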