From: Pedro Alves
To: "Ulrich Weigand"
Cc: Tom Tromey, Markus Alber, Michael Snyder, gdb@sourceware.org
Subject: Re: performance of multithreading gets gradually worse under gdb
Date: Fri, 04 Feb 2011 14:55:00 -0000
Message-Id: <201102041455.20607.pedro@codesourcery.com>
In-Reply-To: <201102032140.p13Le89f031563@d06av02.portsmouth.uk.ibm.com>

On Friday 04 February 2011 21:40:08, Ulrich Weigand wrote:
> Tom Tromey wrote:
> > >>>>> "Markus" == Markus Alber writes:
> >
> > Markus> See the attached file.  It shows a similar behaviour, although it only
> > Markus> allocates 8kB per iteration.
> > Markus> You have to wait some time before this happens.
> >
> > Thanks.
> >
> > I changed 1<<24 to 1<<15, to spare my underpowered machine, and ran gdb
> > under massif.
> >
> > This part is interesting:
> >
> > ->20.78% (2,954,016B) 0x8253683: regcache_xmalloc_1 (regcache.c:232)
> > | ->20.78% (2,954,016B) 0x8253F85: get_thread_arch_regcache (regcache.c:463)
> > | ->20.78% (2,954,016B) 0x82540B5: get_thread_regcache (regcache.c:488)
> > | ->20.78% (2,954,016B) 0x81D1579: i386_linux_resume (i386-linux-nat.c:861)
> > | | ->20.78% (2,954,016B) 0x81D7D32: linux_nat_resume (linux-nat.c:1983)
> >
> > I debugged gdb a little and it does indeed seem to be leaking here.
> >
> > I don't understand why registers_changed_ptid unconditionally clears
> > current_regcache.  I suspect that may be the source of the problem.
> >
> > Perhaps someone who knows this code better could take a look.
>
> It seems this leak was introduced by Pedro's patch here:
> http://sourceware.org/ml/gdb-patches/2010-04/msg00960.html
>
> The function used to free all regcaches in the list and then
> reset current_regcache.  The new code now takes care to selectively
> free only a subset of regcaches -- and then resets current_regcache
> anyway ...

Egads!

> I guess we should just remove the current_regcache = NULL line now.

Yeah.  And only clear current_thread_ptid if it was deleted in the
first place, and only reinit the frame cache if we deleted the regcache
of inferior_ptid?  Like the patch below.

> (Actually, now that every thread always has a thread_info, the
> best thing would probably be anyway to hang each thread's regcaches
> off the thread_info, and do away with the global list completely.)

Not sure we can do that yet, at least not as the sole mechanism for
keeping track of regcache pointers.  We have targets that do
thread<->lwp ptid translation between the thread/proc stratum layers,
and random places that do get_thread_regcache (ptid) behind the core's
back -- it appears aix-thread.c could be one of those.
Another example: linux-nat.c:cancel_breakpoint builds regcaches for
lwps before linux-thread-db.c (if active at all) has had a chance of
telling the core about new threads (in all-stop mode).

-- 
Pedro Alves

2011-02-04  Pedro Alves

	gdb/
	* regcache.c (registers_changed_ptid): Don't explicitly always
	clear `current_regcache'.  Only clear current_thread_ptid and
	current_thread_arch when PTID matches.  Only reinit the frame
	cache if PTID matches the current inferior_ptid.  Move alloca(0)
	call to ...
	(registers_changed): ... here.

---
 gdb/regcache.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

Index: src/gdb/regcache.c
===================================================================
--- src.orig/gdb/regcache.c	2011-02-01 15:27:37.000000000 +0000
+++ src/gdb/regcache.c	2011-02-04 11:56:49.273328004 +0000
@@ -530,6 +530,7 @@ void
 registers_changed_ptid (ptid_t ptid)
 {
   struct regcache_list *list, **list_link;
+  int wildcard = ptid_equal (ptid, minus_one_ptid);
 
   list = current_regcache;
   list_link = &current_regcache;
@@ -550,13 +551,24 @@ registers_changed_ptid (ptid_t ptid)
       list = *list_link;
     }
 
-  current_regcache = NULL;
+  if (wildcard || ptid_equal (ptid, current_thread_ptid))
+    {
+      current_thread_ptid = null_ptid;
+      current_thread_arch = NULL;
+    }
 
-  current_thread_ptid = null_ptid;
-  current_thread_arch = NULL;
+  if (wildcard || ptid_equal (ptid, inferior_ptid))
+    {
+      /* We just deleted the regcache of the current thread.  Need to
+         forget about any frames we have cached, too.  */
+      reinit_frame_cache ();
+    }
+}
 
-  /* Need to forget about any frames we have cached, too.  */
-  reinit_frame_cache ();
+void
+registers_changed (void)
+{
+  registers_changed_ptid (minus_one_ptid);
 
   /* Force cleanup of any alloca areas if using C alloca instead of
      a builtin alloca.  This particular call is used to clean up
@@ -567,12 +579,6 @@ registers_changed_ptid (ptid_t ptid)
 }
 
 void
-registers_changed (void)
-{
-  registers_changed_ptid (minus_one_ptid);
-}
-
-void
 regcache_raw_read (struct regcache *regcache, int regnum, gdb_byte *buf)
 {
   gdb_assert (regcache != NULL && buf != NULL);