On 23 September 2017 at 03:15, Tim Peters wrote:
> But if they see a rare:
>
> x = float.fromhex("0x1.aaap-4")
>
> they can Google for "python fromhex" and find the docs themselves at
> once. The odd method name makes it highly "discoverable", and I think
> that's a
On Fri, Sep 22, 2017, at 03:41, Rob Cliffe wrote:
> Unrelated thought: Users might be unsure if the exponent in a
> hexadecimal float is in decimal or in hex.
Or, for that matter, a power of two or of sixteen.
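For what it's worth, in C99's hex-float notation and in Python's float.fromhex() the answer is: the exponent digits are read as decimal, and they scale by a power of two, not sixteen. A quick check:

```python
# In float.fromhex(), the exponent after 'p' is written in decimal
# digits and scales the significand by a power of two, not sixteen.
print(float.fromhex("0x1p4"))    # 16.0  (1.0 * 2**4, not 16**4)
print(float.fromhex("0x1p10"))   # 1024.0 (decimal ten, not hex 0x10)
```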
On 22/09/2017 at 19:15, Tim Peters wrote:
> I've seen plenty of people on StackOverflow who (a) don't understand
> hex notation for integers; and/or (b) don't understand scientific
> notation for floats. Nothing is self-evident about either; they both
> have to be learned at first.
Sure.
[Antoine Pitrou ]
> ...
> The main difference is familiarity. "scientific" notation should be
> well-known and understood even by high school kids. Who knows about
> hexadecimal notation for floats, apart from floating-point experts?
Here's an example: you <0x0.2p0 wink>.
On Fri, Sep 22, 2017 at 8:37 AM, Guido van Rossum wrote:
> On Thu, Sep 21, 2017 at 9:20 PM, Nick Coghlan wrote:
>
>> >>> one_tenth = 0x1.0 / 0xA.0
>> >>> two_tenths = 0x2.0 / 0xA.0
>> >>> three_tenths = 0x3.0 / 0xA.0
>> >>> three_tenths ==
On Thu, Sep 21, 2017 at 9:20 PM, Nick Coghlan wrote:
> >>> one_tenth = 0x1.0 / 0xA.0
> >>> two_tenths = 0x2.0 / 0xA.0
> >>> three_tenths = 0x3.0 / 0xA.0
> >>> three_tenths == one_tenth + two_tenths
> False
>
OMG Regardless of whether we introduce this
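Nick's 0x1.0-style literals are hypothetical syntax today, but the same surprise can be reproduced right now with hex integer division (a sketch):

```python
# Hex integer division produces the same doubles as the decimal
# literals 0.1, 0.2 and 0.3 -- and the same rounding surprise.
one_tenth = 0x1 / 0xA
two_tenths = 0x2 / 0xA
three_tenths = 0x3 / 0xA
print(three_tenths == one_tenth + two_tenths)  # False
```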
>
> Unrelated thought: Users might be unsure if the exponent in a hexadecimal
> float is in decimal or in hex.
I was playing around with float.fromhex() for this thread, and the first
number I tried to spell used a hex exponent because that seemed like "the
obvious thing"... I figured it out
On 22/09/17 03:57, David Mertz wrote:
I think you are missing the point I was aiming at. Having a binary/hex
float literal would tempt users to think "I know EXACTLY what number I'm
spelling this way"... where most users definitely don't in edge cases.
Quite. What makes me -0 on this idea
On 21.09.17 at 18:23, Victor Stinner wrote:
My vote is now -1 on extending the Python syntax to add hexadecimal
floating literals.
While I was first in favor of extending the Python syntax, I changed
my mind. Float constants written in hexadecimal are a (very?) rare use
case, and there is already
On 22/09/2017 02:32, Steven D'Aprano wrote:
Are there actually any Python implementations or builds which have
floats not equal to 64 bits? If not, perhaps it is time to make 64 bit
floats a language guarantee.
This will be unfortunate when Intel bring out a processor with 256-bit
floats
On 22 September 2017 at 13:38, Guido van Rossum wrote:
> On Thu, Sep 21, 2017 at 8:30 PM, David Mertz wrote:
>>
>> Simply because the edge cases for working with e.g. '0xC.68p+2' in a
>> hypothetical future Python are less obvious and less simple to
When I teach, I usually present this to students:
>>> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False
This is really easy as a way to say "floating point numbers are
approximations where you often encounter rounding errors." The fact that the
"edge cases" are actually pretty central and commonplace in
On Thu, Sep 21, 2017 at 8:30 PM, David Mertz wrote:
> Simply because the edge cases for working with e.g. '0xC.68p+2' in a
> hypothetical future Python are less obvious and less simple to demonstrate,
> I feel like learners will be tempted to think that using this base-2/16
>
On Thu, Sep 21, 2017 at 7:57 PM, David Mertz wrote:
> I think you are missing the point I was aiming at. Having a binary/hex
> float literal would tempt users to think "I know EXACTLY what number I'm
> spelling this way"... where most users definitely don't in edge cases.
>
[David Mertz ]
> -1
>
> Writing a floating point literal requires A LOT more knowledge than writing
> a hex integer.
But not really more than writing a decimal float literal in
"scientific notation". People who use floats are used to the latter.
Besides using "p" instead of "e"
I think you are missing the point I was aiming at. Having a binary/hex
float literal would tempt users to think "I know EXACTLY what number I'm
spelling this way"... where most users definitely don't in edge cases.
Spelling it float.fromhex(s) makes it more obvious "this is an expert
operation
On Thu, Sep 21, 2017 at 7:32 PM, Steven D'Aprano
wrote:
> On Thu, Sep 21, 2017 at 01:09:11PM -0700, David Mertz wrote:
> > -1
> >
> > Writing a floating point literal requires A LOT more knowledge than
> writing
> > a hex integer.
> >
> > What is the bit length of floats on
On Thu, Sep 21, 2017 at 1:57 AM, Paul Moore wrote:
> ...
>
> It's also worth remembering that there will be implementations other
> than CPython that will need changes, too - Jython, PyPy, possibly
> Cython, and many editors and IDEs. So setting the bar at "someone who
>
Lucas Wiman wrote:
It is inconsistent that you can write hexadecimal integers but not
floating point numbers. Consistency in syntax is /fewer/ things to
learn, not more.
You still need to learn the details of the hex syntax for
floats, though. It's not obvious e.g. that you need to use
"p"
On Thu, Sep 21, 2017 at 01:09:11PM -0700, David Mertz wrote:
> -1
>
> Writing a floating point literal requires A LOT more knowledge than writing
> a hex integer.
>
> What is the bit length of floats on your specific Python compile?
Are there actually any Python implementations or builds which
Tablet autocorrect: bit representation of inf and -inf.
On Sep 21, 2017 1:09 PM, "David Mertz" wrote:
> -1
>
> Writing a floating point literal requires A LOT more knowledge than
> writing a hex integer.
>
> What is the bit length of floats on your specific Python compile? What
-1
Writing a floating point literal requires A LOT more knowledge than writing
a hex integer.
What is the bit length of floats on your specific Python compile? What
happens if you specify more or less precision than actually available.
Where is the underflow to subnormal numbers? What is the bit
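On the common IEEE 754 binary64 builds those questions have concrete answers one can poke at interactively; a sketch, assuming such a build:

```python
import sys

# 53 significant bits on an IEEE 754 binary64 build
print(sys.float_info.mant_dig)                           # 53
# Normal numbers bottom out at 2**-1022 ...
print(float.fromhex("0x1p-1022") == sys.float_info.min)  # True
# ... below that, gradual underflow into the subnormals:
print(float.fromhex("0x1p-1074"))                        # smallest subnormal
# Extra precision in the literal is rounded, not rejected:
print(float.fromhex("0x1.00000000000001p0"))             # 1.0
```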
On Thu, Sep 21, 2017 at 8:23 AM, Victor Stinner
wrote:
> While I was first in favor of extending the Python syntax, I changed
> my mind. Float constants written in hexadecimal are a (very?) rare use
> case, and there is already float.fromhex() available.
>
> A new syntax
2017-09-21 3:53 GMT+02:00 Steven D'Aprano :
> float.fromhex(s) if s.startswith('0x') else float(s)
My vote is now -1 on extending the Python syntax to add hexadecimal
floating literals.
While I was first in favor of extending the Python syntax, I changed
my mind. Float
On 21 September 2017 at 02:53, Steven D'Aprano wrote:
> On Thu, Sep 21, 2017 at 11:13:44AM +1000, Nick Coghlan wrote:
>
>> I think so, as consider this question: how do you write a script that
>> accepts a user-supplied string (e.g. from a CSV file) and treats it as
>> hex
On Thu, Sep 21, 2017 at 11:13:44AM +1000, Nick Coghlan wrote:
> I think so, as consider this question: how do you write a script that
> accepts a user-supplied string (e.g. from a CSV file) and treats it as
> hex floating point if it has the 0x prefix, and decimal floating point
> otherwise?
Yeah, I agree, +0. It won't confuse anyone who doesn't care about it and
those who need it will benefit.
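Today that script needs a small wrapper; a minimal sketch (`parse_float` is a hypothetical name, not an existing API):

```python
def parse_float(s):
    """Hypothetical helper: hex float if 0x-prefixed, decimal otherwise."""
    s = s.strip()
    # Accept an optional sign before the 0x prefix, case-insensitively.
    if s.lower().lstrip("+-").startswith("0x"):
        return float.fromhex(s)
    return float(s)

print(parse_float("0x1.8p1"))  # 3.0
print(parse_float("1.5"))      # 1.5
```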
On Wed, Sep 20, 2017 at 6:13 PM, Nick Coghlan wrote:
> On 21 September 2017 at 10:44, Chris Barker - NOAA Federal
> wrote:
> [Thibault]
> >> To
On 21 September 2017 at 10:44, Chris Barker - NOAA Federal
wrote:
[Thibault]
>> To sum up:
>> - In some specific context, hexadecimal floating-point constants make it
>> easy for the programmers to reproduce the exact value. Typically, a software
>> engineer who is
> And that's one of the reasons why the hexadecimal floating-point
> representation exist:
I suspect no one here thinks floathex representation is unimportant...
>
> To sum up:
> - In some specific context, hexadecimal floating-point constants make it
> easy for the programmers to reproduce
Hi everyone
>> Of course, for a lot of numbers, the decimal representation is simpler, and
>> just as accurate as the radix-2 hexadecimal representation.
>> But, due to the radix-10 and radix-2 used in the two representations, the
>> radix-2 may be much easier to use.
>
> Hex is radix 16, not
All this talk about accurate representation left aside,
please consider what a newbie would think when s/he sees:
x = 0x1.fc000p-127
There's really no need to make Python scripts cryptic. It's enough
to have a helper function that knows how to read such representations
and we already
On Wed, Sep 13, 2017 at 04:36:49PM +0200, Thibault Hilaire wrote:
> Of course, for a lot of numbers, the decimal representation is simpler, and
> just as accurate as the radix-2 hexadecimal representation.
> But, due to the radix-10 and radix-2 used in the two representations, the
> radix-2
Hi everybody
> I chose it because it's easy to write. Maybe math.pi is a better example :-)
> >>> math.pi.hex()
> '0x1.921fb54442d18p+1'
>
> 3.141592653589793 is four fewer characters to type, just as accurate,
> and far more recognizable.
Of course, for a lot of numbers, the decimal
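Both spellings above do pick out the same binary64 value, and each round-trips exactly; a quick check:

```python
import math

# The hex spelling and the shortest-repr decimal spelling are equally exact.
print(math.pi.hex())                                     # '0x1.921fb54442d18p+1'
print(float.fromhex("0x1.921fb54442d18p+1") == math.pi)  # True
print(float("3.141592653589793") == math.pi)             # True
```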
On Tue, Sep 12, 2017 at 9:20 PM, Steven D'Aprano wrote:
> On Mon, Sep 11, 2017 at 06:26:16PM -0600, Neil Schemenauer wrote:
>> On 2017-09-12, Victor Stinner wrote:
>> > Instead of modifying the Python grammar, the alternative is to enhance
>> > float(str) to support it:
>> >
On Tue, Sep 12, 2017 at 09:23:04AM +0200, Victor Stinner wrote:
> 2017-09-12 3:48 GMT+02:00 Steven D'Aprano :
> >> k = float("0x1.2492492492492p-3") # 1/7
> >
> > Why wouldn't you just write 1/7?
>
> 1/7 has an infinite binary expansion, so it's not easy to get the "exact
> value" for a 64-bit
On Mon, Sep 11, 2017 at 06:26:16PM -0600, Neil Schemenauer wrote:
> On 2017-09-12, Victor Stinner wrote:
> > Instead of modifying the Python grammar, the alternative is to enhance
> > float(str) to support it:
> >
> > k = float("0x1.2492492492492p-3") # 1/7
>
> Making it a different function
2017-09-12 1:27 GMT+02:00 Neil Schemenauer :
>> k = float("0x1.2492492492492p-3") # 1/7
>
> Making it a different function from float() would avoid backwards
> compatibility issues. I.e. float() no longer returns errors on some
> inputs.
In that case, I suggest float.fromhex() to
2017-09-12 3:48 GMT+02:00 Steven D'Aprano :
>> k = float("0x1.2492492492492p-3") # 1/7
>
> Why wouldn't you just write 1/7?
1/7 has an infinite binary expansion, so it's not easy to get the "exact
value" for a 64-bit IEEE 754 double float.
I chose it because it's easy to write. Maybe math.pi
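The quoted hex string really is the correctly rounded binary64 value of 1/7, as a quick check shows:

```python
# The hex literal and the division both yield the correctly rounded double.
k = float.fromhex("0x1.2492492492492p-3")
print(k == 1 / 7)      # True
print((1 / 7).hex())   # '0x1.2492492492492p-3'
```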
On 2017-09-12, Victor Stinner wrote:
> Instead of modifying the Python grammar, the alternative is to enhance
> float(str) to support it:
>
> k = float("0x1.2492492492492p-3") # 1/7
Making it a different function from float() would avoid backwards
compatibility issues. I.e. float() no longer
Instead of modifying the Python grammar, the alternative is to enhance
float(str) to support it:
k = float("0x1.2492492492492p-3") # 1/7
Victor
2017-09-08 8:57 GMT+02:00 Serhiy Storchaka :
> The support of hexadecimal floating literals (like 0xC.68p+2) is included in
> just
On Fri, Sep 8, 2017 at 12:05 PM, Victor Stinner
wrote:
> 2017-09-07 23:57 GMT-07:00 Serhiy Storchaka :
> > The support of hexadecimal floating literals (like 0xC.68p+2) is
> included in
> > just released C++17 standard. Seems this becomes a
2017-09-07 23:57 GMT-07:00 Serhiy Storchaka :
> The support of hexadecimal floating literals (like 0xC.68p+2) is included in
> the just released C++17 standard. Seems this is becoming mainstream.
Floating literal using base 2 (or base 2^n, like hexadecimal, 2^4) is
the only way to
Dear all
This is my very first email to python-ideas, and I strongly support this idea.
float.hex() does the job for float to hexadecimal conversion, and
float.fromhex() does the opposite. But a full support for hexadecimal
floating-point literals would be great (it bypasses the decimal to