From mboxrd@z Thu Jan 1 00:00:00 1970
Mailing-List: contact gdb-patches-help@sourceware.org; run by ezmlm
From: Pedro Franco de Carvalho
To: Ulrich Weigand
Cc: gdb-patches@sourceware.org, ulrich.weigand@de.ibm.com, rcardoso@linux.ibm.com
Subject: Re: [PATCH v2 3/3] [PowerPC] Fix debug register issues in ppc-linux-nat
In-Reply-To: <20200217174720.3CB09D802EA@oc3748833570.ibm.com>
References: <20200217174720.3CB09D802EA@oc3748833570.ibm.com>
Date: Tue, 18 Feb 2020 20:31:00 -0000
Message-ID: <87lfoziqmw.fsf@linux.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain
X-SW-Source: 2020-02/txt/msg00736.txt.bz2

"Ulrich Weigand" writes:

> Can we simply store the installed slots map in here, instead of
> requiring a whole new per-lwp map in m_installed_hw_bps?

I had considered doing this; however, low_new_fork needs to copy the
per-lwp state in case the debug registers are copied across forks, and
that function is called before the lwp_info object for the newly forked
thread is constructed, which only happens in
linux_nat_target::follow_fork.

> But it would seem cleaner to make this explicit by having an
> explicit "initialize" or "detect" call, which gets called in
> those places we expect to be "first", and which gets passed
> a ptid_t to use (where the callers will still pass inferior_ptid,
> but then at least the dependency will be explicit.

Agreed.  I'm investigating the best way to do this.
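For concreteness, here is a rough, self-contained sketch of the kind of
explicit per-process "detect" entry point being discussed: a pid-keyed
state map whose initialization takes an explicit ptid instead of
silently depending on inferior_ptid.  All names here are hypothetical
stand-ins, not GDB's actual API; the real ptid_t, ppc_hw_breakpoint
list, and the kernel query via ptrace are stubbed out.

```cpp
#include <map>
#include <cassert>

/* Hypothetical stand-in for GDB's ptid_t: only the pid matters here.  */
struct ptid_t { int pid; };

/* Per-process HW break/watchpoint state, along the lines suggested in
   the review: one structure owning all debug-register data for a
   process.  */
struct ppc_per_process_state
{
  bool interface_detected = false;  /* Set by the explicit "detect" call.  */
  /* The list of ppc_hw_breakpoint structs, DVC/WP values, etc. would
     live here in the real thing.  */
};

/* Single map from pid to per-process state.  */
static std::map<int, ppc_per_process_state> per_process_state;

/* Explicit initialization: called at the places known to run "first",
   with an explicit ptid, so the dependency is visible at the call
   sites.  */
ppc_per_process_state &
ensure_process_state (ptid_t ptid)
{
  ppc_per_process_state &state = per_process_state[ptid.pid];
  if (!state.interface_detected)
    {
      /* The real code would query the kernel's debug register
	 interface here, using the given ptid.  */
      state.interface_detected = true;
    }
  return state;
}

/* Cleanup analogous to low_forget_process.  */
void
forget_process_state (int pid)
{
  per_process_state.erase (pid);
}
```

The point of the sketch is only that both creation and teardown of the
per-process entry become explicit, pid-keyed operations, rather than
side effects of whichever query happens to run first.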
> I'm wondering if it might be preferable to have a single map from pid_t
> to a "per-process HW break/watchpoint" structure, which tracks the
> lifetime of the process (cleaned up completely in low_forget_process),
> and holds all the data (list of ppc_hw_breakpoint structs, plus a WP
> value)?

Yes, that would probably be cleaner.

> [ *Maybe* (and I'm not sure here) it would even make sense to move the
> ppc_linux_dreg_interface into that per-process struct, to clearly
> associate it with the pid that was used to query the kernel? ]

I'm not yet sure about this one; I have to think a bit more.

>> +ppc_linux_nat_target::hwdebug_point_cmp
>> +(const struct ppc_hw_breakpoint &a, const struct ppc_hw_breakpoint &b)
>
> You're using this style in a number of places, but I don't think this
> complies with the GNU coding style ... (The '(' should not be in the
> first column.)

I will change this.  I had done this because even if I broke the line
after the first argument, the line still had more than the soft limit
of columns (74):

ppc_linux_nat_target::hwdebug_point_cmp (const struct ppc_hw_breakpoint &a,
					 const struct ppc_hw_breakpoint &b)

Is this a reasonable reason to exceed the soft column limit?  It's
under the hard limit (80).  If it's not reasonable, I'll have to do
something like:

bool
ppc_linux_nat_target::hwdebug_point_cmp (const struct ppc_hw_breakpoint
					 &a,
					 const struct ppc_hw_breakpoint
					 &b)

Thanks a lot for the review!

--
Pedro Franco de Carvalho