From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path: 
Received: (qmail 19973 invoked by alias); 22 Nov 2002 00:43:03 -0000
Mailing-List: contact gdb-help@sources.redhat.com; run by ezmlm
Precedence: bulk
List-Subscribe: 
List-Archive: 
List-Post: 
List-Help: , 
Sender: gdb-owner@sources.redhat.com
Received: (qmail 19966 invoked from network); 22 Nov 2002 00:43:01 -0000
Received: from unknown (HELO localhost.redhat.com) (216.138.202.10) by sources.redhat.com with SMTP; 22 Nov 2002 00:43:01 -0000
Received: from redhat.com (localhost [127.0.0.1]) by localhost.redhat.com (Postfix) with ESMTP id B0FE13E4B; Thu, 21 Nov 2002 19:42:54 -0500 (EST)
Message-ID: <3DDD7D8E.2010407@redhat.com>
Date: Thu, 21 Nov 2002 16:43:00 -0000
From: Andrew Cagney 
User-Agent: Mozilla/5.0 (X11; U; NetBSD macppc; en-US; rv:1.0.0) Gecko/20020824
X-Accept-Language: en-us, en
MIME-Version: 1.0
To: Daniel Jacobowitz 
Cc: gdb@sources.redhat.com
Subject: Re: [Fwd: Re: gdb/725: Crash using debug target and regcaches (in 5.3 branch?)]
References: <3DDD6150.10407@redhat.com> <20021122001447.GA7884@nevyn.them.org>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-SW-Source: 2002-11/txt/msg00320.txt.bz2

> On Thu, Nov 21, 2002 at 05:42:24PM -0500, Andrew Cagney wrote:
>
>> FYI,
>>
>> Too many memory reads/writes was one reason for a ptrace'd threaded
>> shlib program running slow, I suspect this is the other.
>
> Maybe, maybe not... definitely needs to go though!  Thanks for such a
> thorough investigation, it gave me a good idea.

[snip]

>> currently:
>> runtest linux-dp.exp print-threads.exp  17.21s user 48.22s system 82% cpu 1:19.56 total
>> With change:
>> runtest linux-dp.exp print-threads.exp  16.67s user 45.35s system 82% cpu 1:15.27 total

Given that the numbers are being overwhelmed by all those memory-read
ptrace calls, a ~5% improvement is significant.
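(For reference — my arithmetic, not from the original mail — the total
times above work out as follows:)

```
1:19.56 = 79.56 s,  1:15.27 = 75.27 s
(79.56 - 75.27) / 79.56 ~= 5.4% faster overall
```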
Try something simpler, like running gdb under strace (tweak
testsuite/lib/gdb.exp to run 'strace $GDB' instead of $GDB) and then
count how many ptrace calls of each type occur.

>> Briefly, the GNU/Linux thread code is giving regcache.c conflicting
>> stories about which inferior ptid should be in the register cache.  As
>> a consequence, every single register fetch leads to a regcache flush
>> and re-fetch.  Ouch!
>>
>> Briefly, core GDB tries to fetch a register.  This eventually leads to
>> the call:
>>
>>   regcache_raw_read (REGNUM)
>>
>>   registers_ptid != inferior_ptid
>>   (gdb) print registers_ptid
>>   $6 = {pid = 31263, lwp = 0, tid = 0}
>>   (gdb) print inferior_ptid
>>   $7 = {pid = 31263, lwp = 31263, tid = 0}
>>   -> flush regcache
>>   -> registers_ptid = inferior_ptid
>>   -- at this point regnum is invalid
>>   target_fetch_registers (regnum)
>>
>> Since registers_ptid doesn't match inferior_ptid, the cache is
>> flushed, registers_ptid is updated, and the register is fetched.  The
>> fetch flows on down into the depths of the target and the call:
>>
>> Seen the problem yet?
>
> Yup.  Saw something else very interesting, too.
>
>> The long term fix is to have per-thread register caches; that work is
>> progressing.
>>
>> I don't know about a short term fix, though.
>
> I was working on a short-term fix and discovered it was almost entirely
> in place already.  Look at a couple of random fetch_inferior_registers
> implementations; every one that a GNU/Linux platform uses will already
> fetch the LWP's registers if the LWP is non-zero.  So why not give that
> to 'em?  Leave the inferior_ptid as it is, and make
> fetch_inferior_registers honor the LWP id.

It feels right.  I'm hoping that, eventually, the code will supply the
registers directly to a (one of many) `struct thread_info' object.

> So, thoughts on the attached patch?

Thread maintainer question (not so sure about the #ifdef linux,
though :-).

Andrew