Message-ID: <7ad8e78a-7966-a714-d4cb-ebe1bfa606ee@simark.ca>
Date: Fri, 3 Feb 2023 08:19:38 -0500
Subject: Re: [PATCH v3 6/8] gdb/remote: Parse tdesc field in stop reply and threads list XML
From: Simon Marchi via Gdb-patches
Reply-To: Simon Marchi
To: Luis Machado, Andrew Burgess, Thiago Jung Bauermann via Gdb-patches
Cc: Thiago Jung Bauermann
References: <20230130044518.3322695-1-thiago.bauermann@linaro.org>
 <20230130044518.3322695-7-thiago.bauermann@linaro.org>
 <87edr9tq0c.fsf@redhat.com>
 <9f5deefd-52fc-9792-f9a5-dede9c415777@simark.ca>
List-Id: Gdb-patches mailing list

On 2/3/23 06:27, Luis Machado wrote:
> On 2/1/23 20:16, Simon Marchi via Gdb-patches wrote:
>>> IIUC, the tdescs would be deleted during the
>>> pop_all_targets_at_and_above, when the refcount of the remote_target
>>> gets to 0 and it gets deleted.  And the threads would be removed in
>>> generic_mourn_inferior just after.
>>>
>>> An idea could be to call generic_mourn_inferior before
>>> remote_unpush_target (no idea if it works).  Another one would be to
>>> get a temporary reference to the remote_target object in
>>> remote_unpush_target, just so that it outlives the threads.
>>> Or maybe we should say that it's a process target's responsibility
>>> to delete any thread it "owns" before getting deleted itself.
>>
>> Another question related to this popped up while reading the following
>> patch.  When creating a gdbarch from a tdesc, the gdbarch keeps a
>> pointer to that tdesc (accessible through gdbarch_target_desc).  And
>> AFAIK, we never delete gdbarches.  So I suppose the gdbarch will refer
>> to a stale target desc.  At first I thought it wouldn't be a problem
>> in practice, because while that gdbarch object still exists, nothing
>> references it (it is effectively leaked).  But then I remembered that
>> we cache gdbarches to avoid creating arches with duplicate features.
>> So later (let's say if you connect again to a remote), we might want
>> to create a gdbarch with the same features as before, and we'll dig up
>> the old gdbarch, which points to the now-deleted tdesc.
>
> The target descriptions for aarch64 are all cached using a map in
> gdb/aarch64-tdep.c:
>
> /* All possible aarch64 target descriptors.  */
> static std::unordered_map<aarch64_features, target_desc *> tdesc_aarch64_map;
>
> I don't think we should try to delete those, and they should live
> throughout the life of gdb (unless things get large, then we might
> consider cleanups).

When debugging natively with GDB, that's true.  When debugging
remotely, on the GDBserver side, that's true too.  But when debugging
remotely, on the GDB side, don't we create a new target_desc object for
each read target description?

Ok, I just saw in xml-tdesc.c:

/* A record of every XML description we have parsed.  We never discard
   old descriptions, because we never discard gdbarches.  As long as we
   have a gdbarch referencing this description, we want to have a copy
   of it here, so that if we parse the same XML document again we can
   return the same "struct target_desc *"; if they are not singletons,
   then we will create unnecessary duplicate gdbarches.  See
   gdbarch_list_lookup_by_info.
   */
static std::unordered_map<std::string, target_desc_up> xml_cache;

So, at least, a remote sending the exact same XML over and over will
lead to the same target_desc object being reused.  And there won't be
lifetime issues, since a target_desc created from XML also lives
forever.  So I guess we're good.

Simon
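For what it's worth, the "temporary reference" idea discussed earlier in
the thread can be sketched like this.  All names below are hypothetical
stand-ins, not GDB's actual API (the real remote_target is
reference-counted through the target stack); this only illustrates the
pattern of holding an extra reference so the object outlives its
threads:

```cpp
#include <cassert>

/* Hypothetical stand-in for a refcounted target such as remote_target.
   When the last reference is dropped, the object deletes itself.  */
struct refcounted_target
{
  int refcount = 0;
  bool *destroyed_flag;	/* Lets the caller observe destruction.  */

  explicit refcounted_target (bool *flag) : destroyed_flag (flag) {}
  ~refcounted_target () { *destroyed_flag = true; }

  void incref () { ++refcount; }
  void decref ()
  {
    if (--refcount == 0)
      delete this;
  }
};

/* RAII holder keeping the target alive for the duration of a scope,
   mirroring the suggestion of taking a temporary reference in
   remote_unpush_target so the target (and the tdescs it owns) outlives
   the threads being cleaned up.  */
struct scoped_target_ref
{
  refcounted_target *t;

  explicit scoped_target_ref (refcounted_target *t_) : t (t_)
  { t->incref (); }

  ~scoped_target_ref ()
  { t->decref (); }
};
```

With this, dropping the target stack's reference inside the scope does
not destroy the object; destruction is deferred until the scoped
reference goes away, after any per-thread cleanup has run.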
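And a minimal sketch of the xml-tdesc.c caching behaviour described
above: keying the cache by the raw XML text means parsing the same
document twice returns the same descriptor pointer, and entries are
never discarded, matching the fact that gdbarches are never discarded.
The target_desc type and the "parsing" here are stubs for illustration,
not GDB's real code:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <unordered_map>
#include <utility>

/* Hypothetical stand-in for GDB's target_desc.  */
struct target_desc
{
  std::string xml;
};

/* Cache of every XML description "parsed" so far, keyed by the
   document text.  Entries live for the lifetime of the program.  */
static std::unordered_map<std::string, std::unique_ptr<target_desc>>
  xml_cache;

/* Return the descriptor for DOCUMENT, reusing a cached one if the
   exact same XML was seen before.  */
static const target_desc *
tdesc_from_xml (const std::string &document)
{
  auto it = xml_cache.find (document);
  if (it != xml_cache.end ())
    return it->second.get ();	/* Same XML -> same descriptor.  */

  /* "Parse" the document (stubbed out) and cache the result.  */
  auto desc = std::make_unique<target_desc> ();
  desc->xml = document;
  const target_desc *result = desc.get ();
  xml_cache.emplace (document, std::move (desc));
  return result;
}
```

So a cached gdbarch that holds a pointer into this map never dangles,
which is the "we're good" conclusion above.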