From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (qmail 20570 invoked by alias); 17 Feb 2002 16:23:45 -0000
Mailing-List: contact gdb-patches-help@sources.redhat.com; run by ezmlm
Precedence: bulk
List-Subscribe: 
List-Archive: 
List-Post: 
List-Help: ,
Sender: gdb-patches-owner@sources.redhat.com
Received: (qmail 20491 invoked from network); 17 Feb 2002 16:23:44 -0000
Received: from unknown (HELO localhost.redhat.com) (24.112.135.44) by sources.redhat.com with SMTP; 17 Feb 2002 16:23:44 -0000
Received: from cygnus.com (localhost [127.0.0.1]) by localhost.redhat.com (Postfix) with ESMTP id 3D79C3D04; Sun, 17 Feb 2002 11:23:43 -0500 (EST)
Message-ID: <3C6FD90E.5000504@cygnus.com>
Date: Sun, 17 Feb 2002 08:23:00 -0000
From: Andrew Cagney 
User-Agent: Mozilla/5.0 (X11; U; NetBSD macppc; en-US; rv:0.9.8) Gecko/20020210
X-Accept-Language: en-us
MIME-Version: 1.0
To: "Peter.Schauer" 
Cc: gdb-patches@sources.redhat.com
Subject: Re: [RFD] How to fix FRAME_CHAIN_VALID redefinition in config/i386/tm-i386v4.h ?
References: <200202171345.OAA04571@reisser.regent.e-technik.tu-muenchen.de>
Content-Type: text/plain; charset=us-ascii; format=flowed
Content-Transfer-Encoding: 7bit
X-SW-Source: 2002-02/txt/msg00453.txt.bz2

> Due to this change:
>
> 2002-02-10  Andrew Cagney
>
>         * gdbarch.sh: For level one methods, disallow a definition
>         when partially multi-arched.  Add comments explaining rationale.
>         * gdbarch.h: Re-generate.
>
> native SVR4 based platforms (including Solaris x86) no longer compile,
> as they redefine FRAME_CHAIN_VALID in config/i386/tm-i386v4.h.
>
> I understand that this redefinition has to go, but I have no idea how
> to get back to the old behaviour cleanly.

Appreciated!

> Any ideas, suggestions ?

To expand a little on the superficial problem.
A non-multi-arch compile has:

        #include tm.h           might define FRAME_CHAIN_VALID
        #include gdbarch.h      doesn't define FRAME_CHAIN_VALID
                                (as not multi-arch)
        #include frame.h        defines FRAME_CHAIN_VALID if not defined,
                                using some convoluted #ifdef logic.

In the partial multi-arch case it ends up with:

        #include tm.h           might define FRAME_CHAIN_VALID
        #include gdbarch.h      (re)defines FRAME_CHAIN_VALID
        #include frame.h        gets ignored

The upshot is that FRAME_CHAIN_VALID's definition can silently change
when the multi-arch switch is thrown.  The above change stops this by
making the build barf :-/

Looking at frame.h, though, I think I've come across some good news.
The logic reads:

#if !defined (FRAME_CHAIN_VALID)
#if !defined (FRAME_CHAIN_VALID_ALTERNATE)
#define FRAME_CHAIN_VALID(chain, thisframe) file_frame_chain_valid (chain, thisframe)
#else
/* Use the alternate method of avoiding running up off the end of the
   frame chain or following frames back into the startup code.  See
   the comments in objfiles.h.  */
#define FRAME_CHAIN_VALID(chain, thisframe) func_frame_chain_valid (chain, thisframe)
#endif /* FRAME_CHAIN_VALID_ALTERNATE */
#endif /* FRAME_CHAIN_VALID */

Grepping through the code, FRAME_CHAIN_VALID_ALTERNATE appears to have
quietly disappeared!  Can someone confirm this?

Assuming that is the case, the above can be reduced to:

#ifndef FRAME_CHAIN_VALID
#define FRAME_CHAIN_VALID(chain, thisframe) file_frame_chain_valid (chain, thisframe)
#endif

and that, in turn, can be moved to gdbarch.*, allowing the level-1
requirement to be dropped.  Doesn't fix the underlying problem though :-(

--

> Three approaches come to mind:
>
> - Do nothing about it and let SVR4 based platforms backtrace through main.
>   This is the simplest solution, albeit ugly.

I'll immediately apply the above.  It gets you back the old behaviour.

> - Use func_frame_chain_valid instead of file_frame_chain_valid in
>   i386-tdep.c.  This would stop backtraces through main on GNU/Linux.
>   See also
>   http://sources.redhat.com/ml/gdb/2002-02/msg00117.html
>
> - Try to switch the frame_chain_valid method dynamically in
>   i386_gdbarch_init, something like:
>
>   if (os_ident != ELFOSABI_NONE)
>     set_gdbarch_frame_chain_valid (gdbarch, file_frame_chain_valid);
>   else
>     set_gdbarch_frame_chain_valid (gdbarch, func_frame_chain_valid);
>
>   This approach would work well for SVR4, but causes interesting problems
>   on GNU/Linux.  As core files have no ABI markers, we can't distinguish
>   them, and we get different backtracing behaviour when debugging an
>   executable (GNU/Linux ABI) or a core file (generic ELF ABI), so we
>   simply can't do it.
>
> I suspect that we will hit this kind of multiarching problem more often
> in native setups, where we can't discern the native ABI flavour from the
> generic one (the various native sigtramp variants come to mind).

Yes.

> Do we need a hook from XXX_gdbarch_init to some native code ?

It isn't just a native problem.  Consider a solaris-X-arm-linux-gnu GDB
debugging a remote target that includes threads, shared libraries and
sigtramps.  The current gdbarch selection mechanism is based on the ABI
and ISA but not the ``OS''.  (Strictly speaking, shlibs, sigtramps and
... can probably be classed as ABI; GDB's architecture just doesn't
reflect this.)

> Any ideas, suggestions ?

Not really.  I'm having enough fun pinning BFD down on the semantics of
bfd_architecture and bfd_machine.  Several thoughts:

- Allow multiple registrations for an architecture (e.g. i386-tdep.c,
  i386-linux-tdep.c, ...) and have gdbarch try the OS-specific one
  before the generic one.

- Let a tdep file specify the ``os'' when registering its architecture
  so that the gdbarch code can select based on that.

- Add an ``os'' field to ``struct gdbarch_info'' which can be set to
  what is known to be the OS.

- Just tweak i386-tdep.c's *gdbarch_init() so that it uses a better
  local (architecture specific) heuristic.

I suspect a combination of the first three is the best.
The moment the heuristic is pushed down to the target we end up with
inconsistent, target-dependent, behaviour.

Andrew