Date: Thu, 01 Jul 2010 19:10:00 -0000
From: Joel Brobecker
To: Jan Kratochvil
Cc: gdb-patches@sourceware.org
Subject: Re: ping: [patch 6/6] PIE: Fix back re-run
Message-ID: <20100701191041.GX2595@adacore.com>
References: <20100329161905.GE2940@host0.dyn.jankratochvil.net> <20100609151008.GF7183@host0.dyn.jankratochvil.net>
In-Reply-To: <20100609151008.GF7183@host0.dyn.jankratochvil.net>
X-SW-Source: 2010-07/txt/msg00029.txt.bz2

Sorry for the delay; it took me a long time to review everything that
you mentioned and to get a more complete picture of what was going on.
I'll once again admit that it would be an understatement to say that
I have limited knowledge of this area.  But holding up patches for
lack of review is worse, IMO, than allowing potentially buggy patches
in...

> gdb/
> 2010-03-29  Jan Kratochvil
>
> 	Fix re-run of PIE executable.
> 	* solib-svr4.c (svr4_relocate_main_executable): Remove
> 	the part of pre-set SYMFILE_OBJFILE->SECTION_OFFSETS.

Just one question, which could actually be dealt with as a followup
patch, if that's more convenient.  And the usual nit-pick on
comments...

> gdb/testsuite/
> 2010-03-29  Jan Kratochvil
>
> 	Fix re-run of PIE executable.
> 	* gdb.base/break-interp.exp (test_ld): Turn off
> 	"disable-randomization".  Remove $displacement_main to match the
> 	solib-svr4.c change.  New "kill" and re-"run" of the inferior.

OK, modulo a request about one of your comments.

BTW: I seem to be commenting a lot on your comments, but as said
before, you're one of the few who makes an active effort at writing
them, and I really think that this is extremely important and useful.
So a big Thank You.

> +  /* SYMFILE_OBJFILE->SECTION_OFFSETS may now contain displacement from the
> +     previous run of the inferior.  Re-set it according to the current value,
> +     if we can find it out.  But otherwise keep it as for remote target it may
> +     have been pre-set by the `qOffsets' packet.  */

I was having a hard time figuring out what you were trying to get at
until I had a broader picture of the various scenarios we are trying
to deal with.  I would like to rephrase your comment a little
differently:

  /* If we are re-running this executable, SYMFILE_OBJFILE->SECTION_OFFSETS
     probably contains the offsets computed using the PIE displacement
     from the previous run, which of course are irrelevant for this run.
     So we need to determine the new PIE displacement and recompute the
     section offsets accordingly, even if SYMFILE_OBJFILE->SECTION_OFFSETS
     already contains pre-computed offsets.
     If, for some reason, we cannot compute the PIE displacement
     (why???), then we leave the section offsets untouched and use them
     as is for this run.  Either:

       - These section offsets were properly reset earlier, and thus
         already contain the correct values.  This can happen for
         instance when reconnecting via the remote protocol to a target
         that supports the `qOffsets' packet.

       - The section offsets were not reset earlier, and the best we
         can hope is that the old offsets are still applicable to the
         new run.  */

But all this begs one important question: when does
svr4_exec_displacement return zero (fail)?  Based on the code, it can
happen in two situations, I think: (a) PIE is not in use; and (b) the
binary on disk is different from the program being executed.  Case (b)
is hopeless, as I understand it, but (a) shouldn't be.

So, expanding the logic in the comment I am suggesting, I think we
need to account for the situation where svr4_exec_displacement fails
because of a non-PIE executable.  Either:

  (a1) there was no PIE in sight during the previous run either
       => the offsets should already be correct;

  (a2) PIE was in use during the previous run
       => the offsets may be incorrect, but it should be trivial to
          reset them.

So, I'm wondering whether it would make sense to amend your patch to
check for PIE after svr4_exec_displacement has failed, and use a zero
displacement if PIE is not detected.

> +	# A bit better test coverage.
> +	gdb_test "set disable-randomization off"

I think that this does more than just provide "better coverage".  It
is a critical part of the testcase, which helps us obtain two runs
with different displacement addresses.  I think the comment should
make that clear, because otherwise people might think that this is not
an important part of the testcase.  Something like this might help
make things clearer:

	# We want to test the re-run of a PIE in the case where the
	# executable is loaded with a different displacement, but
	# disable-randomization prevents that from happening.
	# So turn it off.

> +	reach "dl_main" "run segv" $displacement
                          ^^
detail: two spaces?

-- 
Joel