#4867: ghci displays negative floats incorrectly (was: Incorrect result
from trig functions)
-------------------------------+--------------------------------------------
    Reporter:  gwright         |        Owner:  gwright                    
        Type:  bug             |       Status:  new                        
    Priority:  high            |    Milestone:  7.0.2                      
   Component:  GHCi            |      Version:  7.0.1                      
    Keywords:                  |     Testcase:                             
   Blockedby:                  |   Difficulty:                             
          Os:  MacOS X         |     Blocking:                             
Architecture:  x86_64 (amd64)  |      Failure:  Incorrect result at runtime
-------------------------------+--------------------------------------------

Comment(by gwright):

 Replying to [comment:39 batterseapower]:
 > I think a single minus sign is the expected result, and it is what I get
 with GHC and GHCi (near HEAD) on OS X 10.6. Note that:
 >
 >     showSignedFloat showFloat 0 (-1.0) ""
 >
 > calls showFloat with the last argument '''negated''' after outputting a
 minus sign, so showFloat itself will not get a chance to produce an extra
 minus sign. (The relevant line of showSignedFloat is Float.lhs:1016) The
 question seems to be why that negation is not happening on your machine.
 >

 Checking again (after coffee), I agree with this.  My argument above was
 wrong.  The question is why the negation is not happening: `showFloat`
 ought to see a positive argument.
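
 For reference, the negation under discussion lives in `showSignedFloat`.
 Paraphrasing from GHC.Float (the exact wording and the line number quoted
 above may differ between library versions), it is roughly:

     showSignedFloat :: (RealFloat a) => (a -> ShowS) -> Int -> a -> ShowS
     showSignedFloat showPos p x
        | x < 0 || isNegativeZero x
            = showParen (p > 6) (showChar '-' . showPos (-x))
        | otherwise = showPos x

 So for a negative `x`, `showPos` (here `showFloat`) is applied to `(-x)`
 and should never see a negative value.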

 > This seems unlikely, but is it possible that your GHCi is still somehow
 finding the version of the libraries that you edited before, and that
 version accidentally removed the negation?

 No, I can edit the libraries, change the minus signs to other characters,
 and when I rebuild I see the expected result (i.e., the minus signs are
 changed to the specified characters).

 Thinking about this some more, there are probably more incorrect
 relocations.  `Double`s are negated by `xor`-ing them with a fixed
 bitmask, `0x8000000000000000`, which has a 1 only in the sign bit.  The
 thing to do is to track down the assembly code corresponding to the
 negation in `showSignedFloat`.  One thing that would cause this problem
 is if the loaded bitmask were all zeroes instead of having a 1 in the
 msb: `xor`-ing with an all-zero mask returns the `Double` value
 unchanged, so `showFloat` would still see a negative argument and emit
 its own minus sign.
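
 As a quick sanity check of that bitmask behaviour, here is a minimal
 standalone sketch (assuming a recent base that exports
 `castDoubleToWord64` / `castWord64ToDouble` from GHC.Float; on the
 7.0-era libraries one would have to go through a coercion instead):

     import Data.Bits (xor)
     import Data.Word (Word64)
     import GHC.Float (castDoubleToWord64, castWord64ToDouble)

     -- Sign-bit mask for an IEEE-754 double: a 1 in the msb, zeroes below.
     signBitMask :: Word64
     signBitMask = 0x8000000000000000

     -- Negation by flipping the sign bit, mirroring the xor the compiled
     -- code is expected to perform.
     negateViaXor :: Double -> Double
     negateViaXor =
         castWord64ToDouble . (`xor` signBitMask) . castDoubleToWord64

     main :: IO ()
     main = do
         print (negateViaXor (-1.0))    -- 1.0
         -- With a (bogus) all-zero mask the xor is a no-op and the value
         -- comes back unchanged, which would reproduce the symptom:
         print (castWord64ToDouble (castDoubleToWord64 (-1.0) `xor` 0))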

-- 
Ticket URL: <http://hackage.haskell.org/trac/ghc/ticket/4867#comment:43>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler
