From mboxrd@z Thu Jan 1 00:00:00 1970
Message-ID: <51D55B11.2030100@broadcom.com>
Date: Thu, 04 Jul 2013 11:23:00 -0000
From: "Andrew Burgess"
To: gdb-patches@sourceware.org
Cc: "Pedro Alves"
Subject: Re: [2/3] [PATCH] value_optimized_out and value_fetch_lazy
References: <51B5A95F.7090400@broadcom.com> <51C1D347.3020906@redhat.com> <51D1C522.5060507@broadcom.com> <51D470E8.1080708@redhat.com>
In-Reply-To: <51D470E8.1080708@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
On 03/07/2013 7:43 PM, Pedro Alves wrote:
> On 07/01/2013 07:06 PM, Andrew Burgess wrote:
>> Here's the improved patch, ok to apply?
>
> OK, with ...
>
>> gdb/ChangeLog
>>
>> 2013-07-01  Andrew Burgess
>>
>> 	* stack.c (read_frame_arg): No longer need to fetch lazy values,
>> 	checking for optimized out will ensure lazy values are loaded.
>
> Write:
>
> 	* stack.c (read_frame_arg): No longer fetch lazy values.
>
>> 	* value.c (value_optimized_out): If the value is not already marked
>> 	optimized out, and is lazy then fetch it so we can know for sure
>> 	if the value is optimized out.
>
> Write:
>
> 	* value.c (value_optimized_out): If the value is not already marked
> 	optimized out, and is lazy then fetch it.
>
> and put the "so we can know for sure if the value is optimized out."
> comment in the sources.
>
>> 	(value_primitive_field): Move optimized out check later to later in
>> 	the function after we have loaded any lazy values.
>
> "later to later" sounds like a later too much.  It'd be great to have
> a comment in the sources about this detail.
>
>> 	(value_fetch_lazy): Use optimized out flag directly rather than
>> 	calling optimized_out method to avoid triggering recursion.
>
> Write:
>
> 	(value_fetch_lazy): Use optimized out flag directly rather than
> 	calling optimized_out method.
>
> and put the "to avoid triggering recursion." comment in the sources.

Applied with the following ChangeLog:

gdb/ChangeLog

	* stack.c (read_frame_arg): No longer fetch lazy values.
	* value.c (value_optimized_out): If the value is not already marked
	optimized out, and is lazy then fetch it.
	(value_primitive_field): Move optimized out check to later in the
	function, after we have loaded any lazy values.
	(value_fetch_lazy): Use optimized out flag directly rather than
	calling optimized_out method.
And these additional comments:

diff --git a/gdb/value.c b/gdb/value.c
index e3a60dd..abaf23b 100644
--- a/gdb/value.c
+++ b/gdb/value.c
@@ -1054,6 +1054,8 @@ value_contents_equal (struct value *val1, struct value *val2)
 int
 value_optimized_out (struct value *value)
 {
+  /* We can only know if a value is optimized out once we have tried to
+     fetch it.  */
   if (!value->optimized_out && value->lazy)
     value_fetch_lazy (value);
 
@@ -2677,6 +2679,9 @@ value_primitive_field (struct value *arg1, int offset,
       if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
 	value_fetch_lazy (arg1);
 
+      /* The optimized_out flag is only set correctly once a lazy value is
+         loaded, having just loaded some lazy values we should check the
+         optimized out case now.  */
       if (arg1->optimized_out)
 	v = allocate_optimized_out_value (type);
       else
@@ -2715,6 +2720,9 @@ value_primitive_field (struct value *arg1, int offset,
       if (VALUE_LVAL (arg1) == lval_register && value_lazy (arg1))
 	value_fetch_lazy (arg1);
 
+      /* The optimized_out flag is only set correctly once a lazy value is
+         loaded, having just loaded some lazy values we should check for
+         the optimized out case now.  */
       if (arg1->optimized_out)
 	v = allocate_optimized_out_value (type);
       else if (value_lazy (arg1))
@@ -3541,6 +3549,9 @@ value_fetch_lazy (struct value *val)
   else if (VALUE_LVAL (val) == lval_computed
 	   && value_computed_funcs (val)->read != NULL)
     value_computed_funcs (val)->read (val);
+  /* Don't call value_optimized_out on val, doing so would result in a
+     recursive call back to value_fetch_lazy, instead check the
+     optimized_out flag directly.  */
   else if (val->optimized_out)
     /* Keep it optimized out.  */;
   else

Thanks,
Andrew