In-Reply-To: <20140327015125.GE3075@redacted.bos.redhat.com>
References: <20140327015125.GE3075@redacted.bos.redhat.com>
Date: Thu, 27 Mar 2014 14:07:00 -0000
Subject: Re: [PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures
From: Marcus Shawcroft
To: Kyle McMartin
Cc: "gdb-patches@sourceware.org"
X-SW-Source: 2014-03/txt/msg00631.txt.bz2

Hi,

On 27 March 2014 01:51, Kyle McMartin wrote:

> + /* Look for a Load Exclusive instruction which begins the sequence. */
> + if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
> + return 0;

Are you sure these masks and patterns are accurate? It looks to me as
though this excludes many of the load-exclusive instructions and
includes part of the unallocated encoding space.
There are several different encodings to match here, covering
ld[a]xr{b,h,} and ld[a]xp. The masks and patterns will be something
like:

  0xbfff7c00 0x085f7c00
  0xbfff7c00 0x885f7c00
  0xbfff0000 0x887f0000

> + if (decode_masked_match (insn, 0x3fc00000, 0x08000000))

This also looks wrong.

> + /* Test that we can step over ldxr/stxr. This sequence should step from
> + ldxr to the following __asm __volatile. */
> + __asm __volatile ("1: ldxr %0,%2\n" \
> + " cmp %0,#1\n" \
> + " b.eq out\n" \
> + " add %0,%0,1\n" \
> + " stxr %w1,%0,%2\n" \
> + " cbnz %w1,1b" \
> + : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
> + : : "memory");
> +
> + /* This sequence should take the conditional branch and step from ldxr
> + to the return dword line. */
> + __asm __volatile ("1: ldxr %0,%2\n" \
> + " cmp %0,#1\n" \
> + " b.eq out\n" \
> + " add %0,%0,1\n" \
> + " stxr %w1,%0,%2\n" \
> + " cbnz %w1,1b\n" \
> + : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
> + : : "memory");
> +
> + dword = -1;
> +__asm __volatile ("out:\n");
> + return dword;
> +}

How about testing at least one instruction from each group of
load/store exclusives?

Cheers
/Marcus