On 11 April 2016 at 03:55, Amy Valhausen wrote:
> I tried:
>
> import mpmath
> x = Float("1.4142", 950)
> x**6000%400
>
> But got the same sort of error message again:
>
> x = Float("1.4142", 950)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
> NameError: name 'Float' is not defined
Hi, regarding the below, is it possible to increase precision for this past
15? I'm a little confused: are you saying that a float set to the default
precision will not return the accurate result, but specifying the precision
will?
Also, you mentioned a non-float expression will return
Hi Aaron,
I tried:
import mpmath
x = Float("1.4142", 950)
x**6000%400
But got the same sort of error message again:
>>> x = Float("1.4142", 950)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'Float' is not defined
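[Editorial note: the NameError above simply means Float was never imported; Float lives in sympy, not mpmath. A minimal corrected sketch of the snippet (the 950 is the number of significant decimal digits to carry):]

```python
# Float is sympy's arbitrary-precision float; importing mpmath alone
# does not bring it into scope, hence the NameError above.
from sympy import Float

x = Float("1.4142", 950)   # 950 significant decimal digits
print(x**6000 % 400)
```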
Also, I thought I read recently that the float
Oscar, you mentioned below finding the number of digits using log, and the
code:
In [1]: from sympy import mpmath
In [2]: mpmath.mp.dps = 950
In [3]: mpmath.mpf('1.4142') ** 6000 % 400
How accurate a result do you feel this will give? Are we losing any data?
Will the result be very accurate?
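[Editorial note: a rough sketch of the accuracy reasoning, using the standalone mpmath package rather than the bundled sympy.mpmath (which was removed in SymPy 1.0). The intermediate 1.4142**6000 has about 6000*log10(1.4142) ≈ 903 digits before the decimal point, so any digits the working precision lacks are lost before the "% 400" step; with dps = 950 roughly 47 digits of headroom remain for the remainder:]

```python
import math
import mpmath

# 1.4142**6000 has about 6000 * log10(1.4142) digits before the decimal
# point; everything beyond the working precision is lost before "% 400".
print(6000 * math.log10(1.4142))   # about 903

mpmath.mp.dps = 950                # ~47 digits of headroom past the 903
x = mpmath.mpf("1.4142")
print(x ** 6000 % 400)
```

[The same computation at the default dps = 15 produces pure rounding noise, which would also explain different people seeing different low-precision answers.]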
Hi Aaron,
I tried:
import mpmath
x = mpmath.mpf("1.4142")
x**6000 % 400
but this returned 32 for me, not the 272 that you got?
Also, when I tried:
>>> x = Float("1.4142", 100)
I got this error message:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'Float' is not defined
You can see how mpmath works here:
http://mpmath.org/doc/current/technical.html#representation-of-numbers
It basically uses a man*2**exp binary representation. For instance:
In [249]: Float("0.1")._mpf_
Out[249]: (0, 3602879701896397, -55, 52)
The first entry is the sign (positive). The next two are the mantissa and the
binary exponent, and the last is the bit length of the mantissa.
On 5 April 2016 at 18:21, Isuru Fernando wrote:
>
> I think the current way of representing Floats is reasonable.
>
> Float internally keeps it in binary representation, so any non-terminating
> number in base 2 is truncated when stored as a Float. That's why there is a
> string
On 5 April 2016 at 18:08, Aaron Meurer wrote:
> On Tue, Apr 5, 2016 at 12:54 PM, Oscar Benjamin
> wrote:
>>
>>>
>>> I don't know if it should be considered a bug, but it's worth noting
>>> that if you want SymPy to give the right precision in
On Tue, Apr 5, 2016 at 12:54 PM, Oscar Benjamin
wrote:
> On 5 April 2016 at 17:15, Aaron Meurer wrote:
>> On Tue, Apr 5, 2016 at 6:19 AM, Oscar Benjamin
>> wrote:
>>>
>>> I thought that it should be possible to easily do
On 5 April 2016 at 17:15, Aaron Meurer wrote:
> On Tue, Apr 5, 2016 at 6:19 AM, Oscar Benjamin
> wrote:
>>
>> I thought that it should be possible to easily do this with sympy
>> Floats but it doesn't seem to work:
>>
>> In [1]: x = S(1.4142)
>>
>>
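[Editorial note: Oscar's excerpt breaks off after x = S(1.4142), but the pitfall it points at can be sketched: sympifying a Python float only captures the ~15-16 accurate digits the double carries, and asking a Float for more precision cannot recover digits the double never stored; the string form can:]

```python
from sympy import S, Float

x = S(1.4142)            # a Float built from a Python double (~15 digits)

a = Float(1.4142, 50)    # 50 digits of the double nearest to 1.4142
b = Float("1.4142", 50)  # 50 digits of the decimal 1.4142 itself
print(a == b)            # False: they diverge after ~16 significant digits
print(x == a)            # True: same underlying binary value, zero-padded
```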
On Tue, Apr 5, 2016 at 6:19 AM, Oscar Benjamin
wrote:
> On 5 April 2016 at 01:56, Amy Valhausen wrote:
>>
>> import numpy as np
>> (np.longdouble(1.4142) ** 6000) % 400
> ...
>>
>> # The library mpmath is a good solution
> import sympy as smp
On 5 April 2016 at 01:56, Amy Valhausen wrote:
>
> import numpy as np
> (np.longdouble(1.4142) ** 6000) % 400
...
>
> # The library mpmath is a good solution
import sympy as smp
mp = smp.mpmath

mp.mp.dps = 50  # Computation precision is 50 digits
50 digits is
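[Editorial note: one caveat on this excerpt: dps = 50 falls far short here, since the intermediate 1.4142**6000 has roughly 903 digits, so about 850 digits are already lost before the modulus is taken. Also, sympy.mpmath was the pre-1.0 bundled copy; with a modern SymPy one imports mpmath directly. A sketch comparing the two precisions:]

```python
import mpmath  # standalone mpmath; sympy.mpmath was removed in SymPy 1.0

def mod_at(dps):
    """1.4142**6000 % 400 computed at `dps` decimal digits of precision."""
    mpmath.mp.dps = dps
    return mpmath.mpf("1.4142") ** 6000 % 400

low = mod_at(50)     # far too little: the intermediate has ~903 digits
high = mod_at(950)   # enough headroom for a trustworthy remainder
print(low, high)
```

[The answer only stabilizes once dps comfortably exceeds the ~903-digit size of the intermediate.]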
6 at 8:56 PM, Amy Valhausen <amy.vaulhau...@gmail.com> wrote:
Sympy vs Numpy, better accuracy in precision?
I've been trying to solve a problem with numpy and other code routines
to raise a base to a large power and then take the modulus.
Precision accuracy is very important; speed isn't as much, although it would
be convenient if I didn't have to wait
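[Editorial note: since 1.4142 is exactly the rational 7071/5000, there is also an exact route that sidesteps floating-point precision entirely: Python's Fraction supports ** and %, at the cost of carrying ~23000-digit integers. A sketch:]

```python
from fractions import Fraction

# 1.4142 == 7071/5000 exactly, so the whole chain can be done exactly.
x = Fraction("1.4142")
r = x ** 6000 % 400     # an exact rational remainder in [0, 400)
print(float(r))         # decimal approximation of the exact answer
```

[Any floating-point computation of the same expression can then be checked against this exact value.]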