From: "Pierre Muller" <pierre.muller@ics-cnrs.unistra.fr>
To: "'Doug Evans'" , "'Hui Zhu'"
Cc: , "'Pedro Alves'" , "'Stan Shebs'" , "'Eli Zaretskii'" , , "'Michael Snyder'"
References: <201006071700.28706.pedro@codesourcery.com> <4C19222C.2000208@codesourcery.com> <201006162016.18181.pedro@codesourcery.com> <4C19265B.7090502@codesourcery.com> <4C1A6362.3020306@vmware.com>
Subject: [RFA-new version][gdbserver] x86 agent expression bytecode compiler (speed up conditional tracepoints)
Date: Sun, 20 Jun 2010 09:30:00 -0000
Message-ID: <003301cb105b$1abd19e0$50374da0$@muller@ics-cnrs.unistra.fr>
> -----Original Message-----
> From: gdb-patches-owner@sourceware.org [mailto:gdb-patches-
> owner@sourceware.org] On behalf of Doug Evans
> Sent: Saturday, June 19, 2010 7:26 PM
> To: Hui Zhu
> Cc: gdb-patches@sourceware.org; Pedro Alves; Stan Shebs; Eli
> Zaretskii; tromey@redhat.com; Michael Snyder
> Subject: Re: [NEWS/RFA] Re: [gdbserver] x86 agent expression bytecode
> compiler (speed up conditional tracepoints)
>
> The fix to the compilation problem (for now) should be as trivial as
> applying Ian's suggested change.
> gcc doesn't optimize *inside* the asm statement.

As I said in a previous email, Ian's patch didn't work for me:
http://sourceware.org/ml/gdb-patches/2010-06/msg00424.html

I propose here another small patch that fixes the linking failure.
It uses a volatile variable, which explicitly prevents the compiler
from optimizing the guarded code away: the compiler can no longer
assume that the variable's value never changes.

This works on gcc16, and the approach seems reasonable.

Pierre Muller

gdbserver/ChangeLog entry:
2010-06-20  Pierre Muller

	* linux-x86-low.c (always_true): Delete function.
	(always_true): New volatile variable.
	(EMIT_ASM, EMIT_ASM32): Adapt to always_true change.
Index: src/gdb/gdbserver/linux-x86-low.c
===================================================================
RCS file: /cvs/src/src/gdb/gdbserver/linux-x86-low.c,v
retrieving revision 1.19
diff -u -p -r1.19 linux-x86-low.c
--- src/gdb/gdbserver/linux-x86-low.c	15 Jun 2010 10:44:48 -0000	1.19
+++ src/gdb/gdbserver/linux-x86-low.c	20 Jun 2010 06:25:07 -0000
@@ -1484,13 +1484,12 @@ add_insns (unsigned char *start, int len
   current_insn_ptr = buildaddr;
 }
 
-/* A function used to trick optimizers.  */
+/* A simple function returning the constant 1 is no longer enough to
+   trick modern optimizers.  Using a volatile variable seems to force
+   inclusion of the code, as the compiler must assume that the value
+   could be changed by some external code.  */
 
-int
-always_true (void)
-{
-  return 1;
-}
+static volatile int always_true = 1;
 
 /* Our general strategy for emitting code is to avoid specifying raw
    bytes whenever possible, and instead copy a block of inline asm
@@ -1501,7 +1500,7 @@ always_true (void)
 #define EMIT_ASM(NAME,INSNS) \
   { extern unsigned char start_ ## NAME, end_ ## NAME; \
     add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
-    if (always_true ()) \
+    if (always_true) \
       goto skipover ## NAME; \
     __asm__ ("start_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n\t"); \
   skipover ## NAME: \
@@ -1513,7 +1512,7 @@ always_true (void)
 #define EMIT_ASM32(NAME,INSNS) \
   { extern unsigned char start_ ## NAME, end_ ## NAME; \
     add_insns (&start_ ## NAME, &end_ ## NAME - &start_ ## NAME); \
-    if (always_true ()) \
+    if (always_true) \
       goto skipover ## NAME; \
     __asm__ (".code32\n\tstart_" #NAME ":\n\t" INSNS "\n\tend_" #NAME ":\n" \
	     "\t.code64\n\t"); \