From mboxrd@z Thu Jan 1 00:00:00 1970
From: Richard Henderson
To: Bernd Schmidt
Cc: gcc-patches@gcc.gnu.org, gdb@sourceware.cygnus.com
Subject: Re: More SSE infrastructure
Date: Mon, 03 Jul 2000 14:02:00 -0000
Message-id: <20000703140220.A25809@cygnus.com>
References: <20000703122133.F25642@cygnus.com>
X-SW-Source: 2000-07/msg00000.html

[ For the GDB list: we're discussing what needs to be emitted as debug
  information for the 128-bit integers used with SSE.  Note that this
  is not the same as when a user has declared a proper 128-bit vector
  type, which is given in the debugging information as a struct, but
  rather the __m128 type defined by the Intel API, which does not
  define the shape of the vector (float[4], int[4], short[8], ...)
  and so is represented as a plain int. ]

On Mon, Jul 03, 2000 at 09:14:52PM +0100, Bernd Schmidt wrote:
> On Mon, 3 Jul 2000, Richard Henderson wrote:
> > On Mon, Jul 03, 2000 at 07:08:34PM +0100, Bernd Schmidt wrote:
> > > ... the only place in the compiler I've found so far that relies
> > > on it is debugging output (where TImode constants are used for
> > > TYPE_{MIN,MAX}_VALUE of 128-bit integers).
> >
> > I wonder if we can just bail on that?
>
> Possibly.  I don't know what the debugger could use this information for.

Neither do I.  It seems relatively certain that the debugger isn't going
to let us evaluate 128-bit int expressions; what other use it would have
for the bounds of the type, I don't know.  Nor do I know why it would
even need to be told the bounds, given the size of the type and its
signedness.  (Possibly to represent Pascal-like integer subranges?)

> If you think that's OK, we could leave out that part for now.

I don't think we can just do nothing right now.  We need to come to an
agreement with the gdb folks about what would be acceptable.
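[ Editorial aside: the point above, that a type's bounds follow directly
  from its size and signedness, can be illustrated with a small sketch.
  This is plain C, not GCC code; the helper name `type_max` is invented,
  and it is limited to precisions a uint64_t can hold. ]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch, not GCC internals: for an ordinary
   two's-complement integer type, the maximum value is fully
   determined by its precision and signedness, which is why it is
   not obvious what extra information TYPE_{MIN,MAX}_VALUE gives a
   debugger.  Limited here to precisions representable in uint64_t. */
static uint64_t
type_max (int precision, int is_signed)
{
  /* A signed type spends one bit on the sign.  */
  int value_bits = is_signed ? precision - 1 : precision;
  if (value_bits >= 64)
    return UINT64_MAX;
  return ((uint64_t) 1 << value_bits) - 1;
}
```

The same derivation would apply to a 128-bit type; the problem in the
thread is only that the host has no integer wide enough to hold the
result as an INTEGER_CST.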
If we need to emit *something*, it would be possible to put code in at
this point that recognizes that TYPE_{SIZE,PRECISION} is out of range
for what an INTEGER_CST can represent, and emits the maximal bounds by
hand, in octal, based on the known size of the type.

r~

PS: Oh, you'll have to watch type_for_mode, which does

  #if HOST_BITS_PER_WIDE_INT >= 64
      if (mode == TYPE_MODE (intTI_type_node))

and possibly a few other places in the compiler.  What I'd like you to
do while we're sorting out the debugging thing is to hack the debug
code not to crash (or just use -g0), and see where else you run into
problems compiling SSE code with HOST_BITS_PER_WIDE_INT=32.  Because
you'll probably need to have TYPE_{MIN,MAX}_VALUE == NULL, which could
well give fold-const (among other places) indigestion.

From bernds@masala.cygnus.co.uk Mon Jul 03 15:31:00 2000
From: Bernd Schmidt
To: Richard Henderson
Cc: gcc-patches@gcc.gnu.org, gdb@sourceware.cygnus.com
Subject: Re: More SSE infrastructure
Date: Mon, 03 Jul 2000 15:31:00 -0000
Message-id:
References: <20000703140220.A25809@cygnus.com>
X-SW-Source: 2000-07/msg00001.html
Content-length: 1061

On Mon, 3 Jul 2000, Richard Henderson wrote:
> What I'd like you to do while we're sorting out the debugging thing
> is to hack the debug code to not crash (or just use -g0) and see
> where else you run into problems compiling SSE code with
> HOST_BITS_PER_WIDE_INT=32.  Because you'll probably need to have
> TYPE_{MIN,MAX}_VALUE=NULL, which could well give fold-const (among
> other places) indigestion.

I did something like this last year.  The only difference in behaviour
I noticed was slightly different debugging output (TYPE_{MIN,MAX}_VALUE
contained bogus constant values, but not NULL).  I did not look at the
whole compiler, but I believe things like fold-const are relatively
safe, as we aren't really building any "real" expressions with 128-bit
types.  These types only show up as function call arguments, return
values, and variable declarations.
This is not something I can imagine fold-const ever wanting to touch.
What I did was by no means an exhaustive test, though.  I don't really
have a large chunk of SSE code to test with.

Bernd
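[ Editorial aside: Richard's fallback earlier in the thread — noticing
  that the precision exceeds what an INTEGER_CST can hold and emitting
  the maximal bound by hand, in octal — could be sketched roughly as
  below.  This is plain C with an invented function name, not actual
  GCC code; it only shows that the textual form of the bound can be
  produced without any host integer wide enough to hold the value. ]

```c
#include <assert.h>
#include <string.h>

/* Sketch of "emit the max bounds by hand, in octal": write the
   maximal unsigned value of a `bits`-bit type as a C octal literal,
   digit by digit.  2^bits - 1 in octal is a leading digit covering
   bits%3 high-order bits, followed by bits/3 sevens; for bits = 128
   that is "03" followed by 42 sevens.  Not GCC code. */
static void
emit_max_unsigned_octal (int bits, char *buf)
{
  int lead = bits % 3;          /* high bits not filling a full octal digit */
  char *p = buf;
  *p++ = '0';                   /* C octal prefix */
  if (lead)
    *p++ = '0' + ((1 << lead) - 1);
  for (int i = 0; i < bits / 3; i++)
    *p++ = '7';
  *p = '\0';
}
```

The signed minimum could be handled the same way with a leading minus
sign; the per-digit construction sidesteps the 32-bit
HOST_BITS_PER_WIDE_INT limitation entirely.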