Subject: Re: [PATCH v3 4/5] gdb: generalize commit_resume, avoid commit-resuming when threads have pending statuses
To: Simon Marchi, Pedro Alves, gdb-patches@sourceware.org
From: Simon Marchi via Gdb-patches
Reply-To: Simon Marchi
Date: Thu, 21 Jan 2021 21:46:39 -0500

On 2021-01-11 3:28 p.m., Simon Marchi wrote:
> I see more of ~1.33 % slowdown, but it's consistent too.
>
> It could be the std::set allocation.  I changed all_process_targets to
> make it return an array of 1 element, just to see what happens; it didn't
> seem to help.  See attached patch
> "0001-Test-returning-something-else-than-std-set-in-all_pr.patch" if you
> want to try it.
>
> It's maybe due to the fact that we now iterate on all threads at every
> handled event?  fetch_inferior_event calls ~scoped_disable_commit_resumed,
> which calls maybe_commit_resumed_all_process_targets, which iterates on
> all threads.  The loop actually breaks when it finds a thread with a
> pending status, but that still makes this function O(number of threads).

I did some more testing regarding this, with the following scenarios:

- baseline: everything up to the previous patch ("gdb: move commit_resume
  to process_stratum_target") applied.
- original patch: baseline + this patch applied.
- always commit resume: the original patch, but with the loop over all
  threads (the one checking for resumed threads with a pending status)
  removed.  Instead, it just commit-resumes all the time.
- all no-op: I make these no-ops:
  - maybe_commit_resumed_all_process_targets
  - scoped_disable_commit_resumed::scoped_disable_commit_resumed
  - scoped_disable_commit_resumed::~scoped_disable_commit_resumed

  This essentially gets rid of all calls to all_process_targets and its
  std::set allocation.  It wouldn't be a correct patch, but the native
  target doesn't care; it's just to see the impact of these functions.

The results are:

- baseline:             81,534
- original patch:       78,926
- always commit resume: 80,416
- all no-op:            80,890

I don't really understand why "all no-op" would be slower than the
baseline; there isn't much left at this point.  Still, the difference
between "original patch" and "always commit resume" shows that iterating
over all threads to check for a resumed one with a pending status has a
non-negligible cost.

To speed this up, we could maintain a per-target or per-inferior count of
resumed threads with a pending status.  I think it would be a bit tricky
to get completely right, to make sure the count never gets out of sync
with reality.  Another, similar idea would be to keep a per-inferior list
/ queue of resumed threads with pending statuses.
That would serve the same purpose as the count above (quickly checking
whether a resumed thread with a pending status exists), but it could also
make random_pending_event_thread more efficient: it would just return the
first thread in the list that matches waiton_ptid (most of the time that
will be the first in the list, because we are usually looking for any
thread).  Selection would no longer be random but FIFO; I think it would
still achieve the same goal of ensuring that no thread starves because it
is never the chosen one.  It has the same difficulty as the counter,
though: making sure the list stays in sync with reality.

Simon