From: Kyle McMartin
To: gdb-patches@sourceware.org
Subject: [PATCH] aarch64: detect atomic sequences like other ll/sc architectures
Date: Mon, 24 Mar 2014 16:11:00 -0000
Message-ID: <20140324161056.GB23291@redacted.bos.redhat.com>
X-SW-Source: 2014-03/txt/msg00576.txt.bz2

Add support for single-stepping over atomic sequences, as other
load-locked/store-conditional architectures (alpha, powerpc, arm, etc.)
already do.
Verified that decode_masked_match and decode_bcond work against the
atomic sequences used in the Linux kernel's atomic.h, and also against
GCC's libatomic.  Thanks to Richard Henderson for feedback on my initial
attempt at this patch!

2014-03-23  Kyle McMartin

	* aarch64-tdep.c (aarch64_deal_with_atomic_sequence): New function.
	(aarch64_gdbarch_init): Handle single stepping of atomic sequences
	with aarch64_deal_with_atomic_sequence.

--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2509,6 +2509,82 @@ value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
 }
 
+static int
+aarch64_deal_with_atomic_sequence (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  const int insn_size = 4;
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = { -1, -1 };
+  CORE_ADDR loc = pc;
+  CORE_ADDR closing_insn = 0;
+  uint32_t insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+  int index;
+  int insn_count;
+  int bc_insn_count = 0; /* Conditional branch instruction count.  */
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+
+  /* Look for a load-exclusive to begin the sequence.  */
+  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
+    return 0;
+
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      int32_t offset;
+      unsigned cond;
+
+      loc += insn_size;
+      insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+
+      /* Look for a conditional branch, to set a breakpoint on its
+	 destination.  */
+      if (decode_bcond (loc, insn, &cond, &offset))
+	{
+	  if (bc_insn_count >= 1)
+	    return 0;
+
+	  breaks[1] = loc + offset;
+	  bc_insn_count++;
+	  last_breakpoint++;
+	}
+
+      /* ...and look for the matching store-exclusive to close the
+	 sequence.  */
+      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
+	{
+	  closing_insn = loc;
+	  break;
+	}
+    }
+
+  /* We didn't find a store-exclusive to end the sequence.  */
+  if (!closing_insn)
+    return 0;
+
+  loc += insn_size;
+  insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+
+  /* Insert a breakpoint at the end of the atomic sequence.  */
+  breaks[0] = loc;
+
+  /* Check for a duplicated breakpoint, and also check that the breakpoint
+     placed on the conditional branch destination is not within the
+     sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] <= closing_insn)))
+    last_breakpoint = 0;
+
+  /* Insert the breakpoint at the end of the sequence, and possibly at
+     the conditional branch destination.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    insert_single_step_breakpoint (gdbarch, aspace, breaks[index]);
+
+  return 1;
+}
+
 /* Initialize the current architecture based on INFO.  If possible,
    re-use an architecture from ARCHES, which is a list of
    architectures already created during this debugging session.
@@ -2624,6 +2700,8 @@ aarch64_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
   set_gdbarch_breakpoint_from_pc (gdbarch, aarch64_breakpoint_from_pc);
   set_gdbarch_cannot_step_breakpoint (gdbarch, 1);
   set_gdbarch_have_nonsteppable_watchpoint (gdbarch, 1);
+  /* Handle single stepping of atomic sequences.  */
+  set_gdbarch_software_single_step (gdbarch, aarch64_deal_with_atomic_sequence);
 
   /* Information about registers, etc.  */
   set_gdbarch_sp_regnum (gdbarch, AARCH64_SP_REGNUM);