I wrote:

>> What about the attached?  Isn't it simpler?
>>
>> We get passed the max number of bits the number we're parsing
>> can hold in TWOS_COMPLEMENT_BITS.
>>
>> Just parse the number as unsigned; if it overflows, it doesn't
>> matter, we just account for the bits.
>> If it doesn't overflow (host has a big long),
>> and TWOS_COMPLEMENT_BITS < sizeof (long) * HOST_CHAR_BIT,
>> and the number is signed according to bit TWOS_COMPLEMENT_BITS - 1
>> being set, sign extend the number into a long.

Blah, that doesn't work quite right in all cases.  With that scheme we
always read the number as unsigned while deciding whether it overflows,
so we overflow on small 64-bit negative numbers on 32-bit hosts, while
previously we wouldn't.  See below.

(BTW, why the heck isn't the return of read_huge_number a LONGEST?)

I looked at what fixed_points.exp is doing: a -50 .. 50 range with
delta 1.0 / 16.0, which translates into a -800 .. 800 range like:

  "s800:t(0,23)=@s64;r(0,23);01777777777777777776340;0000000001440;",128,0,0,0

Current CVS doesn't overflow on 01777777777777777776340, but both my
patch and Pierre's did.

The real problem with the original code is that it assumes that when
twos_complement_bits > 0 and the number is in octal, it must be
negative.  That isn't always true, as can be seen in the case Pierre
showed:

.stabs "long long unsigned int:t(0,7)=@s64;r(0,7);0000000000000;01777777777777777777777;",128,0,0,0

We would parse octal 0000000000000 (with twos_complement_bits == 64,
due to that @s64) until we hit -268435456 (0x10000000), at which point
it was considered an overflow, and only the number of bits was taken
care of.

So, this new patch, attached, checks whether there are enough digits
for signedness, and if so, only checks the sign bit on the first
iteration.

Another problem then shows up: prior to Michael's fix, we always read
the octals as *unsigned*, so, on i386-pc-cygwin, the
01777777777777777777777 in the "long long unsigned" case was always
considered an overflow, as it didn't fit in a long.  That case is then
handled by read_range_type, not by looking at the value of
01777777777777777777777, but at the number of bits needed to represent
the number (64).

With the twos_complement_representation code fixed, the code parses
01777777777777777777777, size_type=64 (s64), as -1, thus returning
n2=0, n3=-1, n2bits=0, n3bits=0.  But that case isn't handled correctly
later in read_range_type, where it is assumed to represent an unsigned
int (32-bit):

  /* If the upper bound is -1, it must really be an unsigned int.  */
  else if (n2 == 0 && n3 == -1)
    {
      /* It is unsigned int or unsigned long.  */
      /* GCC 2.3.3 uses this for long long too, but that is just a GDB
         3.5 compatibility hack.  */
      return init_type (TYPE_CODE_INT,
                        gdbarch_int_bit (current_gdbarch) / TARGET_CHAR_BIT,
                        TYPE_FLAG_UNSIGNED, NULL, objfile);
    }

The patch fixes it by calling init_type with 'type_size /
TARGET_CHAR_BIT' if type_size > 0.  Does anyone have such an old
version of gcc around to test whether this breaks it?

We now correctly parse everything I could find gcc outputting, and a
few things more -- see the attached .s file.  I've also run the whole
testsuite a couple of times, and it all looks fine.

Cheers,
Pedro Alves
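
P.S. To make the quoted idea above concrete for reviewers, here's a
self-contained sketch of "parse as unsigned, then sign extend".  It is
not read_huge_number itself: the helper name is made up, CHAR_BIT
stands in for HOST_CHAR_BIT, and, as discussed above, it only behaves
when the value fits in a host long (so the 64-bit cases need a 64-bit
long -- exactly what a 32-bit host doesn't have):

  #include <limits.h>
  #include <stdio.h>

  /* Hypothetical helper: parse the octal digits at P as unsigned,
     then sign extend according to a two's complement width of
     TWOS_COMPLEMENT_BITS.  */

  static long
  parse_octal_twos_complement (const char *p, int twos_complement_bits)
  {
    unsigned long n = 0;

    while (*p >= '0' && *p <= '7')
      n = n * 8 + (*p++ - '0');

    /* If the value is narrower than a host long and its sign bit
       (bit TWOS_COMPLEMENT_BITS - 1) is set, sign extend.  When the
       width equals the host long's, the unsigned wraparound plus the
       cast below already gives the two's complement value.  */
    if (twos_complement_bits < (int) (sizeof (long) * CHAR_BIT)
        && (n & (1UL << (twos_complement_bits - 1))) != 0)
      n |= ~0UL << twos_complement_bits;

    return (long) n;
  }

  int
  main (void)
  {
    /* The -800 lower bound from the fixed_points.exp stab above;
       prints -800 on a host with a 64-bit long.  */
    printf ("%ld\n",
            parse_octal_twos_complement ("1777777777777777776340", 64));
    return 0;
  }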
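
And a sketch of the "enough digits for signedness" check the new patch
does (again just the idea, with made-up names -- the attached patch is
the real code): an octal string can only encode a negative
twos_complement_bits-wide value if it has enough digits to reach the
sign bit, and then only the leading digit, i.e., the first iteration,
needs its sign bit tested:

  #include <string.h>

  /* Hypothetical predicate: does the octal string P encode a negative
     TWOS_COMPLEMENT_BITS-wide two's complement value?  Overlong
     inputs (overflow) are out of scope here.  */

  static int
  octal_is_negative (const char *p, int twos_complement_bits)
  {
    size_t len = strspn (p, "01234567");
    /* Octal digits needed to cover bit TWOS_COMPLEMENT_BITS - 1.  */
    size_t sign_digits = (twos_complement_bits + 2) / 3;
    int bit;

    if (len < sign_digits)
      return 0;  /* Too short to reach the sign bit.  */

    /* Position of the sign bit within its octal digit.  */
    bit = (twos_complement_bits - 1) - 3 * ((int) sign_digits - 1);

    return ((p[len - sign_digits] - '0') >> bit) & 1;
  }

With that, 0000000000000 (@s64) is too short to be negative, while
01777777777777777777777 has bit 63 set and parses as -1, as described
above.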
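
For the read_range_type end, the shape of the fix is roughly this (a
sketch of the hunk reconstructed from the description above, not the
literal patch):

  /* If the upper bound is -1, it must really be an unsigned int.  */
  else if (n2 == 0 && n3 == -1)
    {
      /* If the stab carried an explicit size (@sNN), honor it, so a
         64-bit unsigned range whose bounds parsed as 0 / -1 doesn't
         collapse into a 32-bit unsigned int.  */
      if (type_size > 0)
        return init_type (TYPE_CODE_INT, type_size / TARGET_CHAR_BIT,
                          TYPE_FLAG_UNSIGNED, NULL, objfile);

      /* It is unsigned int or unsigned long.  */
      /* GCC 2.3.3 uses this for long long too, but that is just a GDB
         3.5 compatibility hack.  */
      return init_type (TYPE_CODE_INT,
                        gdbarch_int_bit (current_gdbarch) / TARGET_CHAR_BIT,
                        TYPE_FLAG_UNSIGNED, NULL, objfile);
    }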