From: Pedro Alves
To: gdb-patches@sourceware.org
Subject: [PATCH 1/6] Linux native thread create/exit events support
Date: Thu, 19 May 2016 14:48:00 -0000
Message-Id: <1463669290-30415-2-git-send-email-palves@redhat.com>
In-Reply-To: <1463669290-30415-1-git-send-email-palves@redhat.com>
References: <1463669290-30415-1-git-send-email-palves@redhat.com>

A following patch (fix for gdb/19828) makes linux-nat.c add threads to
GDB's thread list earlier in the "attach" sequence, and that causes a
surprising regression on gdb.threads/attach-many-short-lived-threads.exp
on my machine.
The extra "thread x exited" handling and traffic slows down that test
enough that GDB core has trouble keeping up with new threads that are
spawned while trying to stop existing ones.

I saw the exact same issue with remote/gdbserver a while ago and fixed
it in 65706a29bac5 (Remote thread create/exit events), so part of the
fix here is the same -- add support for thread created events to
gdb/linux-nat.c.  infrun.c:stop_all_threads enables those events when
it tries to stop threads, which ensures that new threads never get a
chance to themselves start new threads, thus fixing the race.

gdb/
yyyy-mm-dd  Pedro Alves

	PR gdb/19828
	* linux-nat.c (report_thread_events): New global.
	(linux_handle_extended_wait): Report
	TARGET_WAITKIND_THREAD_CREATED if thread event reporting is
	enabled.
	(wait_lwp, linux_nat_filter_event): Report all thread exits if
	thread event reporting is enabled.  Update comments.
	(filter_exit_event): New function.
	(linux_nat_wait_1): Use it.
	(linux_nat_thread_events): New function.
	(linux_nat_add_target): Install it as target_thread_events
	method.
---
 gdb/linux-nat.c       | 58 +++++++++++++++++++++++++++++++++++++++++++++------
 gdb/linux-thread-db.c | 14 +++++++------
 2 files changed, 60 insertions(+), 12 deletions(-)

diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c
index edde88d..5ec56c1 100644
--- a/gdb/linux-nat.c
+++ b/gdb/linux-nat.c
@@ -239,6 +239,9 @@ struct simple_pid_list
 };
 struct simple_pid_list *stopped_pids;
 
+/* Whether target_thread_events is in effect.  */
+static int report_thread_events;
+
 /* Async mode support.  */
 
 /* The read/write ends of the pipe registered as waitable file in the
@@ -1952,6 +1955,11 @@ linux_handle_extended_wait (struct lwp_info *lp, int status)
 				    status_to_str (status));
 	      new_lp->status = status;
 	    }
+	  else if (report_thread_events)
+	    {
+	      new_lp->waitstatus.kind = TARGET_WAITKIND_THREAD_CREATED;
+	      new_lp->status = status;
+	    }
 
 	  return 1;
 	}
@@ -2091,13 +2099,14 @@ wait_lwp (struct lwp_info *lp)
   /* Check if the thread has exited.  */
   if (WIFEXITED (status) || WIFSIGNALED (status))
     {
-      if (ptid_get_pid (lp->ptid) == ptid_get_lwp (lp->ptid))
+      if (report_thread_events
+	  || ptid_get_pid (lp->ptid) == ptid_get_lwp (lp->ptid))
 	{
 	  if (debug_linux_nat)
-	    fprintf_unfiltered (gdb_stdlog, "WL: Process %d exited.\n",
+	    fprintf_unfiltered (gdb_stdlog, "WL: LWP %d exited.\n",
 				ptid_get_pid (lp->ptid));
 
-	  /* This is the leader exiting, it means the whole
+	  /* If this is the leader exiting, it means the whole
 	     process is gone.  Store the status to report to the
 	     core.  Store it in lp->waitstatus, because lp->status
 	     would be ambiguous (W_EXITCODE(0,0) == 0).  */
@@ -2902,7 +2911,8 @@ linux_nat_filter_event (int lwpid, int status)
   /* Check if the thread has exited.  */
   if (WIFEXITED (status) || WIFSIGNALED (status))
     {
-      if (num_lwps (ptid_get_pid (lp->ptid)) > 1)
+      if (!report_thread_events
+	  && num_lwps (ptid_get_pid (lp->ptid)) > 1)
 	{
 	  if (debug_linux_nat)
 	    fprintf_unfiltered (gdb_stdlog,
@@ -2922,10 +2932,10 @@ linux_nat_filter_event (int lwpid, int status)
 	 resumed.  */
       if (debug_linux_nat)
 	fprintf_unfiltered (gdb_stdlog,
-			    "Process %ld exited (resumed=%d)\n",
+			    "LWP %ld exited (resumed=%d)\n",
 			    ptid_get_lwp (lp->ptid), lp->resumed);
 
-      /* This was the last lwp in the process.  Since events are
+      /* This may be the last lwp in the process.  Since events are
 	 serialized to GDB core, we may not be able report this one
 	 right now, but GDB core and the other target layers will want
 	 to be notified about the exit code/signal, leave the status
@@ -3110,6 +3120,30 @@ check_zombie_leaders (void)
     }
 }
 
+/* Convenience function that is called when the kernel reports an exit
+   event.  This decides whether to report the event to GDB as a
+   process exit event, a thread exit event, or to suppress the
+   event.  */
+
+static ptid_t
+filter_exit_event (struct lwp_info *event_child,
+		   struct target_waitstatus *ourstatus)
+{
+  ptid_t ptid = event_child->ptid;
+
+  if (num_lwps (ptid_get_pid (ptid)) > 1)
+    {
+      if (report_thread_events)
+	ourstatus->kind = TARGET_WAITKIND_THREAD_EXITED;
+      else
+	ourstatus->kind = TARGET_WAITKIND_IGNORE;
+
+      exit_lwp (event_child);
+    }
+
+  return ptid;
+}
+
 static ptid_t
 linux_nat_wait_1 (struct target_ops *ops,
 		  ptid_t ptid, struct target_waitstatus *ourstatus,
@@ -3339,6 +3373,9 @@ linux_nat_wait_1 (struct target_ops *ops,
   else
     lp->core = linux_common_core_of_thread (lp->ptid);
 
+  if (ourstatus->kind == TARGET_WAITKIND_EXITED)
+    return filter_exit_event (lp, ourstatus);
+
   return lp->ptid;
 }
 
@@ -4614,6 +4651,14 @@ linux_nat_fileio_unlink (struct target_ops *self,
   return ret;
 }
 
+/* Implementation of the to_thread_events method.  */
+
+static void
+linux_nat_thread_events (struct target_ops *ops, int enable)
+{
+  report_thread_events = enable;
+}
+
 void
 linux_nat_add_target (struct target_ops *t)
 {
@@ -4646,6 +4691,7 @@ linux_nat_add_target (struct target_ops *t)
   t->to_supports_stopped_by_sw_breakpoint
     = linux_nat_supports_stopped_by_sw_breakpoint;
   t->to_stopped_by_hw_breakpoint = linux_nat_stopped_by_hw_breakpoint;
   t->to_supports_stopped_by_hw_breakpoint
     = linux_nat_supports_stopped_by_hw_breakpoint;
+  t->to_thread_events = linux_nat_thread_events;
 
   t->to_can_async_p = linux_nat_can_async_p;
   t->to_is_async_p = linux_nat_is_async_p;
diff --git a/gdb/linux-thread-db.c b/gdb/linux-thread-db.c
index 844b05c..fe46062 100644
--- a/gdb/linux-thread-db.c
+++ b/gdb/linux-thread-db.c
@@ -1118,12 +1118,14 @@ thread_db_wait (struct target_ops *ops,
 
   ptid = beneath->to_wait (beneath, ptid, ourstatus, options);
 
-  if (ourstatus->kind == TARGET_WAITKIND_IGNORE)
-    return ptid;
-
-  if (ourstatus->kind == TARGET_WAITKIND_EXITED
-      || ourstatus->kind == TARGET_WAITKIND_SIGNALLED)
-    return ptid;
+  switch (ourstatus->kind)
+    {
+    case TARGET_WAITKIND_IGNORE:
+    case TARGET_WAITKIND_EXITED:
+    case TARGET_WAITKIND_THREAD_EXITED:
+    case TARGET_WAITKIND_SIGNALLED:
+      return ptid;
+    }
 
   info = get_thread_db_info (ptid_get_pid (ptid));
-- 
2.5.5