Date: Mon, 11 Mar 2002 23:52:00 -0000
From: Kevin Buettner
Message-Id: <1020312075215.ZM22017@localhost.localdomain>
In-Reply-To: Daniel Jacobowitz "Re: [PATCH RFA/RFC] Don't use lwp_from_thread() in thread_db_wait()" (Mar 11, 10:23pm)
References: <1020311234554.ZM20650@localhost.localdomain> <20020311214703.A462@nevyn.them.org> <1020312031619.ZM21458@localhost.localdomain> <20020311222334.A3178@nevyn.them.org>
To: Daniel Jacobowitz
Cc: gdb-patches@sources.redhat.com
Subject: Re: [PATCH RFA/RFC] Don't use lwp_from_thread() in thread_db_wait()

On Mar 11, 10:23pm, Daniel Jacobowitz wrote:
> On Mon, Mar 11, 2002 at 08:16:19PM -0700, Kevin Buettner wrote:
> > I think that an LWP id cache is only useful so long as all of the
> > threads are stopped.  This is because the mappings could change in
> > the course of running the program.  So, for this particular case,
> > where the threads are running and we want to wait for one of them
> > to stop, the cache wouldn't be useful to us.
> >
> > Of course, if we have knowledge that a particular thread
> > implementation never changes its mappings, or perhaps only changes
> > its mappings for certain threads, we might be able to use such a
> > cache across the stop/start transitions.  However, I think that
> > Mark had intended for thread-db.c to be a fairly generic solution
> > that's not wedded to any one particular thread implementation.  In
> > particular, it should be possible to use it with an M:N model in
> > which a thread may migrate from one LWP to another.
>
> This implies that part of the caching should be in lin-lwp.c rather
> than in thread-db.c... that knowledge belongs with the lower level
> threading layer.  Does that make sense?

I think I see what you're driving at, though I don't think it belongs
in lin-lwp.c.  lin-lwp.c should, I hope, be usable as is by a number
of different thread implementations.

Instead, I think what you have in mind should reside in some sort of
policy adjunct to thread-db.c which understands the kinds of
relationships that can exist between thread ids and LWP ids.  If it
knows that the thread implementation uses a 1:1 model, as linuxthreads
does now, it can use aggressive caching.  (By which I mean that the
cache is allowed to persist between stops in the debugger.)  If it
uses an M:N model, it must cache more conservatively.  (I.e., the
cache must be invalidated whenever the inferior is resumed.)

I think this code could be reasonably generic, and it shouldn't be too
hard to implement.  The difficult part will be figuring out which kind
of thread library you have.  After all, if someone provided a drop-in
replacement for linuxthreads which implemented M:N threading, how
would you tell the difference?

> We could also, for instance, update the cache via thread event
> reporting...

If the thread events tell GDB when a thread has migrated from one LWP
to another, then this would work too.

...
But, for the problem at hand (i.e., the bug that my patch is intended
to fix), I think it's important that we first make it work without
caching.  As I see it, the cache ought to exist to enhance
performance, not guarantee basic correctness.  If we can't make it
work without some sort of caching or enhanced thread event reporting,
we need to understand exactly why first.

Kevin