Re: extend methods of decimal module

2014-02-28 Thread casevh
On Thursday, February 27, 2014 2:33:35 AM UTC-8, Mark H. Harris wrote:
> No...  was not aware of gmpy2... looks like a great project!   I am wondering
> why it would be sooo much faster?

For multiplication and division of ~1000 decimal digit numbers, gmpy2 is ~10x
faster. The numbers I gave were for ln() and sqrt().

> I was hoping that Stefan Krah's C accelerator was using FFT fast fourier
> transforms for multiplication at least...  
> .. snip ..
> I have not looked at Krah's code, so not sure what he did to speed things
> up... (something more than just writing it in C I would suppose).

IIRC, cdecimal uses a Number Theory Transform for multiplication of very large
numbers. It has been a while since I looked so I could be wrong.
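For anyone who wants to reproduce this kind of timing, here is a rough sketch. The gmpy2 import is guarded since it may not be installed, and the 3322-bit context precision is an assumption chosen to approximate 1000 decimal digits:

```python
import timeit
from decimal import Decimal, getcontext

getcontext().prec = 1000              # ~1000 decimal digits
x = Decimal(2)

t_sqrt = timeit.timeit(x.sqrt, number=10)
t_ln = timeit.timeit(x.ln, number=10)
print("decimal sqrt: %.4fs  ln: %.4fs" % (t_sqrt, t_ln))

try:
    import gmpy2
    gmpy2.get_context().precision = 3322  # bits; roughly 1000 decimal digits
    y = gmpy2.mpfr(2)
    t_gmpy = timeit.timeit(lambda: gmpy2.sqrt(y), number=10)
    print("gmpy2 sqrt: %.4fs" % t_gmpy)
except ImportError:
    pass  # gmpy2 not installed; the stdlib half still runs
```

The absolute numbers depend heavily on the machine and library versions, so treat any ratio you measure as indicative only.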

casevh

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: extend methods of decimal module

2014-02-19 Thread casevh
On Wednesday, February 19, 2014 1:30:13 PM UTC-8, Mark H. Harris wrote:
> 
> I guess what I'm really asking for are the same routines found in "bc -l"
> math library. I've finally moved my number crunching stuff to python (from
> bc) because the performance of "decimal" is finally way better than bc for
> the moment, and wrapping the math routines in python for control and
> processing is so much better.   Anyway, it sure would be nice to have a very
> speedy atan() function built-in for decimal.
> 

Have you looked at the gmpy2 ( https://code.google.com/p/gmpy/ ) module?

It supports all the transcendental functions available in the MPFR library. I
did a quick performance test of sqrt() and ln() at around 1000 decimal digits.
gmpy2 was ~200 times faster than the corresponding functions in decimal.
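In the meantime, a pure-decimal atan() is not hard to write. This is only a sketch: the Maclaurin series converges for |x| < 1, so a full bc-style library would still need argument reduction for larger inputs:

```python
from decimal import Decimal, getcontext

def d_atan(x, prec=50):
    """arctan(x) via the Maclaurin series; valid for |x| < 1."""
    getcontext().prec = prec + 10          # extra guard digits
    x = Decimal(str(x))
    eps = Decimal(10) ** -(prec + 5)
    power, total, n, x2 = x, x, 1, x * x
    while True:
        power *= -x2                       # (-1)^n * x^(2n+1)
        delta = power / (2 * n + 1)
        total += delta
        if abs(delta) < eps:
            break
        n += 1
    getcontext().prec = prec
    return +total                          # unary plus rounds to prec

print(d_atan("0.5"))
```

For values near 1 the series converges slowly; identities such as atan(x) = pi/2 - atan(1/x) are the usual remedy.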

casevh


Re: New to Py 3.3.3 having prob. with large integer div. float.

2014-02-10 Thread casevh
On Monday, February 10, 2014 6:40:03 PM UTC-8, hlauk.h...@gmail.com wrote:
> I am coming off Python 2.6.6 on a 32-bit platform and am now on Win 8.1 64-bit.
> I had no problems with gmpy in 2.6.6 and large integer floating points, where
> you could control the length of the floating point by entering the bit size
> of the divisor(s) you are using. That would give the limit length of the float
> in the correct number of bits.
> 
> In Python 3.3.3 and gmpy2 I have tried many things in the mpfr module,
> changing and trying all kinds of parameters in the gmpy2 set_context() call,
> and others.
> 
> The best I can get without an error is that the result of a large integer
> division is a/b = inf, or an integer rounded up or down.
> I can't seem to find the right settings for the limit of the remainders in the
> quotient.
> 
> My old code in the first few lines of 2.6.6 worked great and looked like this 
> -
> 
> import gmpy
> 
> BIG = (A large composite with 2048 bits)
> SML = (a smaller divisor with 1024 bits)
> 
> Y= gmpy.mpz(1)
> A= gmpy.mpf(1)
> 
> y=Y
> 
> x=BIG
> z=SML
> a=A
> k=BIG
> j=BIG
> x = gmpy.next_prime(x)
> 
> while y < 20:
>     B = gmpy.mpf(x, 1024)
>     ## the above sets the limit of the z/b float (see below) to 1024
>     b = B
>     a = z/b
>     c = int(a)
>     d = a - c
>     if d <= .001:
>         # proc. continues from here with desired results
> 
> gmpy2 seems a lot more complex but I am sure there is a work around.
> I am not interested in the mod function.
> 
> My new conversion proc. is full of ## tags on the different things
> I tried that didn't work.
> 
> TIA 
> Dan

The following example will divide two integers with a result precision
of 1024 bits:

import gmpy2

# Set mpfr precision to 1024
gmpy2.get_context().precision=1024

# Omitting code

a = gmpy2.mpz(SML)/gmpy2.mpz(x)

Python 3.x performs true division by default. When integer division involves
an mpz, the result will be an mpfr with the precision of the current context.

Does this help?
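If gmpy2 is not available, the stdlib decimal module supports the same pattern. Note the precision there is in decimal digits rather than bits, and the BIG/SML values below are made-up stand-ins for the poster's numbers:

```python
from decimal import Decimal, getcontext

# 1024 bits is roughly 308 decimal digits
getcontext().prec = 308

BIG = 10**200 + 33        # hypothetical stand-ins for the poster's values
SML = 10**100 + 7

a = Decimal(SML) / Decimal(BIG)   # fixed-precision quotient, never 'inf'
frac = a - int(a)                 # fractional part, as in the original loop
print(frac)
```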

casevh


Re: Calculate Big Number

2013-01-08 Thread casevh
On Tuesday, January 8, 2013 2:06:09 AM UTC-8, Gisle Vanem wrote:
> "Steven D'Aprano"  wrote:
> 
> > py> from timeit import Timer
> > py> t1 = Timer("(a**b)*(c**d)", setup="a,b,c,d = 10, 25, 2, 50")
> > py> min(t1.repeat(repeat=5, number=10))
> > 0.5256571769714355
> > 
> > So that's about 5 microseconds on my (slow) computer.
> 
> That's pretty fast. So is there still a need for a GMP python-binding like
> gmpy? http://code.google.com/p/gmpy/wiki/IntroductionToGmpy
> 
> GMP can include optimized assembler for the CPU you're using. But
> I guess it needs more memory. Hence disk-swapping could be an issue
> on performance.
> 
gmpy will be faster than Python as the numbers get larger. The cutover varies 
depending on the platform, but usually occurs between 50 and 100 digits.
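A sketch for locating the cutover on your own machine. Only the growth of the pure-int cost is shown; repeating the loop with gmpy.mpz values at the same sizes would find the actual crossover:

```python
from timeit import timeit

# Pure-int multiplication cost at growing operand sizes
for digits in (10, 50, 100, 1000):
    a = 10 ** digits + 3
    t = timeit("a * a", globals={"a": a}, number=100_000)
    print("%5d digits: %.4fs" % (digits, t))
```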

casevh
> 
> --gv



Re: bit count or bit set && Python3

2012-10-26 Thread casevh
On Thursday, October 25, 2012 7:56:25 AM UTC-7, Charles Hixson wrote:
> In Python3 is there any good way to count the number of on bits in an 
> integer (after an & operation)?

You may want to look at gmpy2[1] and the popcount() function.

> 
> Alternatively, is there any VERY light-weight implementation of a bit 
> set?  I'd prefer to use integers, as I'm probably going to need 
> thousands of these, if the tests work out.  But before I can test, I 
> need a decent bit counter.  (shift, xor, &, and | are already present 
> for integer values, but I also need to count the number of "true" items 
> after the logical operation.  So if a bitset is the correct approach, 
> 

Whether or not gmpy2 is considered light-weight is debatable. :)

> I'll need it to implement those operations, or their equivalents in 
> terms of union and intersection.)
> 
> 
> 
> Or do I need to drop into C for this?
> 

[1] http://code.google.com/p/gmpy/

> 
> 
> -- 
> 
> Charles Hixson
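For reference, counting set bits needs neither C nor gmpy2 in many cases. In pure Python, `bin(...).count('1')` works on every version (and Python 3.10+ later added `int.bit_count()` for the same thing):

```python
def popcount(n):
    """Count the set bits in a non-negative integer."""
    return bin(n).count("1")

# Count the "true" items after an & operation, per the question
x = 0b10110111 & 0b11011101
print(popcount(x))
```

This is slower than a C-level popcount for very large integers, but for thousands of modest bitsets it is usually fast enough.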



Re: why did GMPY change the names of its functions?

2012-03-26 Thread casevh
On Sunday, March 25, 2012 9:59:56 PM UTC-7, Mensanator wrote:
> OK, GMPY is now called GMPY2. No big deal, I can import as GMPY.
> 
> But why were scan0 and scan1 changed to bit_scan0 and bit_scan1?
> 
> What's the justification for that? I use those functions extensively
> in my library of Collatz utilities  and I had to re-edit them for no
> obvious reason.

I'll speak up as the maintainer of GMPY and GMPY2.

(My comments apply to the beta1 release which should be out in a couple of 
days.)

GMPY2 introduces many changes: 

1) the limited "mpf" type that is based on GMP has been replaced with the 
"mpfr" type from the MPFR library
2) support for multiple-precision complex arithmetic based on the MPC library
3) support for a mutable integer type optimized for in-place bit manipulations
4) support for additional number theory functions (generalized Lucas sequences 
and more primality tests)

I began to encounter name collisions; for example, should sqrt() only return 
integer square roots? I chose to give it a new name (gmpy2) and update the API 
to reflect the new choices I made. For example, sqrt() now returns an "mpfr" and 
isqrt() returns an "mpz".

As part of the documentation for the beta release, I will document the name 
changes. "import gmpy2 as gmpy; gmpy.scan0=gmpy.bit_scan0; etc" should work 
just fine.

If you encounter problems with the alpha release, please open an issue on 
gmpy's site.

Thanks,
casevh





Re: PyCrypto builds neither with MSVC nor MinGW

2012-03-12 Thread casevh
On Monday, March 12, 2012 1:38:29 PM UTC-7, Alec Taylor wrote:
> On a brand new Windows install now, with a brand new VS8 installed
> with new YASM and MPIR in c:\usr\src\include and c:\usr\src\lib.
> 
> But it still isn't working:
> 
This was a little challenging. I looked through the setup.py to figure out what 
assumptions their build process made. First, the file 
pycrypto-2.5\src\inc-msvc\config.h must be modified. Below is the file I used:

config.h
===
/* Define to 1 if you have the declaration of `mpz_powm', and to 0 if you
   don't. */
#define HAVE_DECL_MPZ_POWM 1

/* Define to 1 if you have the declaration of `mpz_powm_sec', and to 0 if you
   don't. */
#define HAVE_DECL_MPZ_POWM_SEC 0

/* Define to 1 if you have the `gmp' library (-lgmp). */
#undef HAVE_LIBGMP

/* Define to 1 if you have the `mpir' library (-lmpir). */
#define HAVE_LIBMPIR 1

/* Define to 1 if you have the <stdint.h> header file. */
#define HAVE_STDINT_H 1
===

Although I was able to specify an include directory for mpir.h with 
-Ic:\usr\include, I was not able to specify a lib directory with -Lc:\usr\lib. It 
looks like setup.py does not honor the -L option. So I finally gave up and just 
copied the mpir.h file into my Python27\include directory and the mpir.lib file 
into my Python27\libs directory. 

After copying the files, "python setup.py install" was successful. I created a 
binary installer with "python setup.py bdist_wininst".

There may be a cleaner way to build PyCrypto, but these steps worked for me.

casevh


Re: PyCrypto builds neither with MSVC nor MinGW

2012-02-06 Thread casevh
On Feb 5, 6:40 am, Alec Taylor  wrote:
> PIL, PyCrypto and many other modules require a C compiler and linker.
>
> Unfortunately neither install on my computer, with a PATH with the following:
>
> C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC
> C:\libraries\MinGW\msys\1.0\bin
> C:\libraries\MinGW
> C:\Python27\Scripts
>
> Output from G:\pycrypto>vcvarsall.bat
> Setting environment for using Microsoft Visual Studio 2010 x86 tools.
>
> Error output from G:\pycrypto>python setup.py build --compiler msvc
> http://pastebin.com/nBsuXDGg
A couple of comments. You will need to compile either GMP or MPIR
first.

MPIR is a Windows-friendly fork of GMP and I use it to create Windows
binaries for gmpy.
>
> Error output from G:\pycrypto>python setup.py build --compiler mingw32
> 1> log1 2> log2
> Log1:http://pastebin.com/yG3cbdZv
> Log2:http://pastebin.com/qvnshPeh
>
> Will there ever be support for newer MSVC versions? - Also, why

Python 2.7 uses VS2008. I use the command line compiler included in
Microsoft's SDK 7.0, which is still available for download. I have
step-by-step build instructions included in gmpy's source download. I would
try to build MPIR and gmpy first and then adapt/modify the process for
PyCrypto.

MPIR home page: www.mpir.org
gmpy source: gmpy.googlecode.com/files/gmpy-1.15.zip

> doesn't even MinGW install PyCrypto for me?
>

> Thanks for all suggestions,
>
> Alec Taylor
Hope these comments help...
casevh


Re: Counting bits in large string / bit vector

2011-09-26 Thread casevh
On Sep 26, 12:56 am, Nizamov Shawkat wrote:
> > Is there an equivalent command in python that would immediately provide the
> > number of set bits in a large bit vector/string
>
> You might be able to achieve this using numpy boolean array and, e.g,
> the arithmetic sum function or something similar.
> There is also another library  http://pypi.python.org/pypi/bitarray
> which resembles numpy's bit array.
>
> Hope it helps,
> S.Nizamov

You can also use gmpy or gmpy2.

>>> a=gmpy2.mpz(123)
>>> bin(a)
'0b1111011'
>>> gmpy2.hamdist(a,0)
6
>>>

casevh


Re: Representation of floats (-> Mark Dickinson?)

2011-09-06 Thread casevh
On Sep 6, 6:37 am, jmfauth  wrote:
> This is just an attempt to put the
> http://groups.google.com/group/comp.lang.python/browse_thread/thread/...
> discussion at a correct level.
> discussion at a correct level.
>
> With Python 2.7 a new float number representation (David Gay's
> algorithm)
> has been introduced. If this is honored well in Python 2.7, it
> seems to me there are some mismatches in the Py3 series.
>
> >>> sys.version
>
> '2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit
> (Intel)]'
>
> >>> 0.1
> 0.10000000000000001
> >>> print 0.1
> 0.1
> >>> 1.1 * 1.1
> 1.2100000000000002
> >>> print 1.1 * 1.1
> 1.21
> >>> print repr(1.1 * 1.1)
> 1.2100000000000002
>
> >>> sys.version
>
> 2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]
>
>
>
> >>> 0.1
> 0.1
> >>> print 0.1
> 0.1
> >>> 1.1 * 1.1
> 1.21
> >>> print 1.1 * 1.1
> 1.21
> >>> print repr(1.1 * 1.1)
> 1.2100000000000002
>

I tried this with the same version of Python and I get:

>>> sys.version
'2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'
>>> 1.1 * 1.1
1.2100000000000002
>>> print 1.1 * 1.1
1.21
>>> print repr(1.1 * 1.1)
1.2100000000000002
>>>

> >>> sys.version
>
> '3.1.4 (default, Jun 12 2011, 15:05:44) [MSC v.1500 32 bit (Intel)]'>>> 0.1
> 0.1
> >>> print(0.1)
> 0.1
> >>> 1.1 * 1.1
> 1.2100000000000002
> >>> print(1.1 * 1.1)
> 1.21
> >>> print(repr(1.1 * 1.1))
> 1.2100000000000002
> >>> '{:g}'.format(1.1 * 1.1)
>
> '1.21'
>
> >>> sys.version
>
> '3.2.2 (default, Sep  4 2011, 09:51:08) [MSC v.1500 32 bit (Intel)]'
>
> >>> 0.1
> 0.1
> >>> print(0.1)
> 0.1
> >>> 1.1 * 1.1
> 1.2100000000000002
> >>> print (1.1 * 1.1)
> 1.2100000000000002
> >>> print(repr((1.1 * 1.1)))
> 1.2100000000000002
>
> >>> '{:g}'.format(1.1 * 1.1)
> '1.21'
>

I get the same results as you do for Python 3.1.4 and 3.2.2. IIRC, Python
3.2 changed (for floats) __str__ to call __repr__. That should explain
the difference between 3.1.4 and 3.2.2.

Also note that 1.1 * 1.1 is not the same as 1.21.

>>> (1.1 * 1.1).as_integer_ratio()
(5449355549118301, 4503599627370496)
>>> (1.21).as_integer_ratio()
(1362338887279575, 1125899906842624)

This doesn't explain why 2.7.2 displayed a different result on your
computer. What do you get for as_integer_ratio() for (1.1 * 1.1) and
(1.21)?

casevh


> jmf



Re: relative speed of incremention syntaxes (or "i=i+1" VS "i+=1")

2011-08-21 Thread casevh
Older versions of gmpy did not implement nb_inplace_add, so the check
fails and binary_iop1 falls back to nb_add.

In recent version of gmpy and gmpy2, I implemented the nb_inplace_add
function and performance (for the gmpy.mpz type) is much better for
the in-place addition.

For the adventuresome, gmpy2 implements a mutable integer type called
xmpz. It isn't much faster until the values are so large that the
memory copy times become significant. (Some old gmpy documentation
implies that operations with mutable integers should be much faster.
With aggressive caching of deleted objects, the object creation
overhead is very low. So the big win for mutable integers is reduced
to avoiding memory copies.)
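The effect is easy to measure with plain ints, where both spellings rebind the name (ints are immutable, so `i += 1` cannot mutate in place the way gmpy2's xmpz can):

```python
from timeit import timeit

# In-place vs. binary addition on a large int; with no nb_inplace_add
# equivalent for immutable ints, both spellings allocate a new object.
setup = "i = 10 ** 10000"
t_binary = timeit("i = i + 1", setup=setup, number=10_000)
t_inplace = timeit("i += 1", setup=setup, number=10_000)
print(t_binary, t_inplace)
```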

casevh

>
> To be clear, this is nothing you should consider when writing fast code.
> Complexity wise they both are the same.



Re: Puzzled about the output of my demo of a proof of The Euler Series

2011-08-10 Thread casevh
On Aug 10, 4:57 pm, "Richard D. Moores"  wrote:
> I saw an interesting proof of the limit of The Euler Series on
> math.stackexchange.com at
> <http://math.stackexchange.com/questions/8337/different-methods-to-com...>.
> Scroll down to Hans Lundmark's post.
>
> I thought I'd try to see this "pinching down" on the limit of pi**2/6.
> See my attempt, and output for n = 150 at
> <http://pastebin.com/pvznFWsT>. What puzzles me is that
> upper_bound_partial_sum (lines 39 and 60) is always smaller than the
> limit. It should be greater than the limit, right? If not, no pinching
> between upper_bound_partial_sum and lower_bound_partial_sum.
>
> I've checked and double-checked the computation, but can't figure out
> what's wrong.
>
> Thanks,
>
> Dick Moores

The math is correct. The proof only asserts that sum(1/k^2) is between
the upper and lower partial sums. The upper and lower partial sums
both converge to pi^2/6 from below and since the sum(1/k^2) is between
the two partial sums, it must also converge to pi^2/6.

Try calculating sum(1/k^2) for k in range(1, 2**n) and compare that
with the upper and lower sums. I verified it with several values up to
n=20.

casevh


Re: Large number multiplication

2011-07-07 Thread casevh
On Jul 7, 1:30 am, Ulrich Eckhardt wrote:
> Billy Mays wrote:
> > On 07/06/2011 04:02 PM, Ian Kelly wrote:
> >> According to Wikipedia:
>
> >> """
> >> In practice the Schönhage–Strassen algorithm starts to outperform
> >> older methods such as Karatsuba and Toom–Cook multiplication for
> >> numbers beyond 2**2**15 to 2**2**17 (10,000 to 40,000 decimal digits).
> >> """
>
> >> I think most Python users are probably not working with numbers that
> >> large, and if they are, they are probably using specialized numerical
> >> libraries anyway, so there would be little benefit in implementing it
> >> in core.
>
> > You are right that not many people would gain significant use of it.
>
> Even worse, most people would actually pay for its use, because they don't
> use numbers large enough to merit the Schönhage–Strassen algorithm.
>
> > The reason I ask is because convolution has a better (best ?) complexity
> > class than the current multiplication algorithm.
>
> The "asymptotic complexity" of algorithms (I guess that's what you mean) is
> concerned with large up to infinite n elements in operations. The claim
> there always excludes any number of elements below n_0, where the complexity
> might be different, even though that is usually not repeatedly mentioned. In
> other words, lower complexity does not mean that something runs faster, only
> that for large enough n it runs faster. If you never realistically reach
> that limit, you can't reap those benefits.
>
> That said, I'm sure that the developers would accept a patch that switches
> to a different algorithm if the numbers get large enough. I believe it
> already doesn't use Karatsuba for small numbers that fit into registers,
> too.
>
> > I was more interested in finding previous discussion (if any) on why
> > Karatsuba was chosen, not so much as trying to alter the current
> > multiplication implementation.
>
> I would hope that such design decisions are documented in code or at least
> referenced from there. Otherwise the code is impossible to understand and
> argue about.
>
> Cheers!
>
> Uli
>
> --
> Domino Laser GmbH
> Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932

A quick search on the Python issue tracker (bugs.python.org) yields
the following issues:

http://bugs.python.org/issue560379

http://bugs.python.org/issue4258

The issues also refer to discussion threads on the python-dev mailing
list.

casevh


ANN: GMPY2 or How I learned to love "nan"

2011-06-08 Thread casevh
Everyone,

I'm pleased to announce a new alpha release of GMPY2.
GMPY2 is a wrapper for GMP and MPFR multiple-precision
arithmetic libraries.

GMPY2 alpha2 introduces context manager support for
MPFR arithmetic. It's now possible to trigger an exception
when comparing against "nan" (and for other
events that normally generate "nan" or "inf").

>>> import gmpy2
>>> gmpy2.context().trap_erange=True
>>> gmpy2.mpfr("nan") == gmpy2.mpfr("nan")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
gmpy2.RangeError: comparison with NaN

If you have an interest in multiple-precision arithmetic
or want more control over the handling of exceptional
events in floating point arithmetic, please check out
GMPY2!

GMPY2 is available for download from:

http://code.google.com/p/gmpy/

Experimental release


To simplify the codebase, allow for changes in the API,
and support simultaneous installation, the development
version has been renamed to GMPY2. The following is a list
of changes in GMPY2:


In 2.0.0a0
--

 * support for a mutable integer type "xmpz"
 * removal of random number functions
 * "xmpz" supports slices for setting/clearing bits
 * some methods have been renamed (scan1 -> bit_scan1)
 * support for Python prior to 2.6 has been removed
 * support for all division modes has been added:
    * ceiling - round to +Infinity
    * floor - round to -Infinity
    * truncate - round to zero
    * 2exp - division by a power of 2
 * support for is_even() and is_odd()

In 2.0.0a1
--

 * support for the MPFR floating point library

In 2.0.0a2
--
 * context manager support for controlling MPFR
   arithmetic
 * can raise Python exceptions when exceptional events
   occur with MPFR arithmetic; for example, comparing
   against "nan" can trigger an exception
 * more complete coverage for MPFR
 * many function names were changed to be more consistent

Please report any issues!

casevh



Re: integer multiplication

2011-04-04 Thread casevh
On Apr 4, 9:41 am, Terry Reedy  wrote:
> On 4/4/2011 1:51 AM, Paul Rubin wrote:
>
> > I didn't realize Python used Karatsuba.  The main issue is probably that
> > Python uses a straightforward portable C implementation that's not
> > terribly efficient,
>
> but relatively easy for a couple of people to maintain. For (C)Python 3,
> which no longer has a C int type, I believe changes were focused on
> making calculations with small integers almost as fast as in 2.x.
>
> (I believe that retaining two implementations internally was considered
> but rejected. Could be wrong.)
>
>  >If you look for the gmpy module, it gives you a way to use gmp from
>  >Python.  In crypto code (lots of 1024 bit modular exponentials) I think
>  >I found gmpy to be around 4x faster than Python longs.
>
> For specialized use, specialized gmpy is the way to go.
>
> I am curious how gmpy compares to 3.x ints (longs) with small number
> calculations like 3+5 or 3*5.
>
> --
> Terry Jan Reedy

(Disclaimer: I'm the current maintainer of gmpy.)

A quick comparison between native integers and gmpy.mpz() on Python
3.2, 64-bit Linux and gmpy 1.14.

For multiplication of single digit numbers, native integers are faster
by ~25%. The breakeven threshold for multiplication occurs around 12
digits and by 20 digits, gmpy is almost 2x faster.

I've made some improvements between gmpy 1.04 and 1.14 to decrease the
overhead for gmpy operations. The breakeven point for older versions
will be higher so if you are running performance critical code with
older versions of gmpy, I'd recommend upgrading to 1.14.

casevh






Re: Porting Python2 C-API/Swig based modules to Python 3

2011-02-23 Thread casevh
On Feb 23, 8:54 am, Adam Pridgen wrote:
> Hello,
>
> I am trying to get a compiled module to work with Python3.  The code I
> am compiling was originally intended to be used in a Python 2.*
> environment, but I updated all the Python 2.* elements and the swig
> commands used by the setup.py script.  I got the library to
> successfully compile, but I am getting the following errors on import
> (below).  I am not sure how to trouble shoot this problem, and the
> fact that only one symbol (_PyCObject_FromVoidPtr) is missing is
> disconcerting.  I Googled some, but the symbol mentioned only showed
> up in a few posts, where linking was an issue.  I have also gone
> through the setup.py script and explicitly defined all the library
> paths.
>
> My questions:
>
> - Has anyone ever ported Python 2* modules/libs to Python 3 that rely
> on swig, and are there some changes in the C-API/swig I need to be
> looking for to make this port successful?
>
> - Does anyone have any advice/insight about how I can troubleshoot,
> diagnose, and resolve this issue?
>
> Thanks in advance,
>
> -- Adam
>
> 
>
> 0:pylibpcap-0.6.2$ python3
> Python 3.2 (r32:88452, Feb 20 2011, 11:12:31)
> [GCC 4.2.1 (Apple Inc. build 5664)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import pcap
>
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File 
> "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packa­ges/pcap.py",
> line 25, in <module>
>     _pcap = swig_import_helper()
>   File 
> "/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packa­ges/pcap.py",
> line 21, in swig_import_helper
>     _mod = imp.load_module('_pcap', fp, pathname, description)
> ImportError: 
> dlopen(/Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site­-packages/_pcapmodule.so,
> 2): Symbol not found: _PyCObject_FromVoidPtr
>   Referenced from:
> /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packag­es/_pcapmodule.so
>   Expected in: flat namespace
>  in 
> /Library/Frameworks/Python.framework/Versions/3.2/lib/python3.2/site-packag­es/_pcapmodule.so
>
>
>
> >>> ^D

This is caused by a change in the C-API: PyCObject was removed in Python 3.2
and replaced by PyCapsule. See http://bugs.python.org/issue5630

casevh


Re: inverse of a matrix with Fraction entries

2010-11-27 Thread casevh
On Nov 27, 3:08 pm, Steven D'Aprano  wrote:
> On Fri, 26 Nov 2010 19:21:47 -0800, casevh wrote:
> > On Nov 26, 2:11 pm, Steven D'Aprano wrote:
> >> On Fri, 26 Nov 2010 12:54:12 -0800, John Nagle wrote:
> >> > For ordinary number crunching,
> >> > rational arithmetic is completely inappropriate.
>
> >> Why?
>
> >> --
> >> Steven
> > As you perform repeated calculations with rationals, the size of the
> > values (usually) keep growing. (Where size is measured as the length of
> > numerator and denominator.) The speed and memory requirements are no
> > longer constant.
>
> You're not comparing apples with apples. You're comparing arbitrary
> precision calculations with fixed precision calculations. If you want
> fixed memory requirements, you should use fixed-precision rationals. Most
> rationals I know of have a method for limiting the denominator to a
> maximum value (even if not necessarily convenient).
>
> On the other hand, if you want infinite precision, there are floating
> point implementations that offer that too. How well do you think they
> perform relative to rationals? (Hint: what are the memory requirements
> for an infinite precision binary float equal to fraction(1, 3)? *wink*)
>
> Forth originally didn't offer floats, because there is nothing you can do
> with floats that can't be done slightly less conveniently but more
> accurately with a pair of integers treated as a rational. Floats, after
> all, *are* rationals, where the denominator is implied rather than
> explicit.
>
> I suspect that if rational arithmetic had been given half the attention
> that floating point arithmetic has been given, most of the performance
> difficulties would be significantly reduced. Perhaps not entirely
> eliminated, but I would expect that for a fixed precision calculation, we
> should have equivalent big-Oh behaviour, differing on the multiplicative
> factors.
>
> In any case, the real lesson of your benchmark is that infinite precision
> is quite costly, no matter how you implement it :)
>
> --
> Steven

I think most users are expecting infinite precision when they use
rationals. Trying to explain limited precision rational arithmetic
might be interesting.

Knuth described "floating-slash" arithmetic that used a fixed number
of bits for both the numerator and denominator and a rounding
algorithm that prefers "simple" fractions versus more complex
fractions. IIRC, the original paper was from the 1960s.
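The stdlib offers a small taste of that idea: Fraction.limit_denominator() picks the closest fraction under a denominator bound, which naturally favors the classic "simple" approximations:

```python
from fractions import Fraction
import math

# The closest fraction to pi with denominator <= 1000 is the
# well-known approximation 355/113
approx = Fraction(math.pi).limit_denominator(1000)
print(approx)
```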

casevh


Re: inverse of a matrix with Fraction entries

2010-11-27 Thread casevh
On Nov 27, 4:00 am, m...@distorted.org.uk (Mark Wooding) wrote:
> casevh  writes:
> > I coded a quick matrix inversion function and measured running times
> > using GMPY2 rational and floating point types. For the floating point
> > tests, I used a precision of 1000 bits. With floating point values,
> > the running time grew as n^3. With rational values, the running time
> > grew as n^4*ln(n).
>
> Did you clear the denominators before you started?
>
> -- [mdw]

No. It was more of an exercise in illustrating the difference between
arithmetic operations that have a constant running time versus those
that have an n*ln(n) or worse running time.

casevh


Re: inverse of a matrix with Fraction entries

2010-11-26 Thread casevh
On Nov 26, 2:11 pm, Steven D'Aprano  wrote:
> On Fri, 26 Nov 2010 12:54:12 -0800, John Nagle wrote:
> > For ordinary number crunching,
> > rational arithmetic is completely inappropriate.
>
> Why?
>
> --
> Steven
As you perform repeated calculations with rationals, the size of the
values (usually) keeps growing. (Where size is measured as the length
of the numerator and denominator.) The speed and memory requirements are
no longer constant.

I coded a quick matrix inversion function and measured running times
using GMPY2 rational and floating point types. For the floating point
tests, I used a precision of 1000 bits. With floating point values,
the running time grew as n^3. With rational values, the running time
grew as n^4*ln(n).

On my system, inverting a 1000x1000 matrix with 1000-bit precision
floating point would take about 30 minutes. Inverting the same matrix
using rationals would take a month or longer and require much more
memory. But the rational solution would be exact.
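For anyone who wants the exact-but-slow route without gmpy2, a Gauss-Jordan inversion over stdlib Fractions is short to write. This is only a sketch, with no handling for singular matrices beyond a failing pivot search:

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix exactly using Fraction arithmetic."""
    n = len(matrix)
    # Augment with the identity: [A | I]
    aug = [[Fraction(v) for v in row] +
           [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place
        pivot = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[pivot] = aug[pivot], aug[col]
        scale = Fraction(1) / aug[col][col]
        aug[col] = [v * scale for v in aug[col]]
        # Eliminate the column from every other row
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

print(invert([[2, 1], [1, 1]]))
```

Watching the numerator/denominator sizes of the entries grow as n increases makes the n^4*ln(n) behavior easy to see.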

casevh


Re: inverse of a matrix with Fraction entries

2010-11-26 Thread casevh
On Nov 25, 1:28 pm, Daniel Fetchinson wrote:

> Okay, I see your point and I completely agree.
> Surely it will be faster to do it with integers, will give it a shot.
>
> Cheers,
> Daniel
>
> --
> Psss, psss, put it down! -http://www.cafepress.com/putitdown

You may want to look at using GMPY. GMPY wraps the Gnu Multiple-
precision library and includes support for rational arithmetic. I just
tested a few calculations involving rational numbers with hundreds to
thousands of digits and GMPY's mpq type was between 10x and 100x
faster than fractions.Fraction.
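A rough way to see the gap yourself (the gmpy2 half is guarded since it may not be installed; modern releases expose the mpq type under the gmpy2 name):

```python
from fractions import Fraction
from timeit import timeit

# Harmonic-style sum: the common denominator grows quickly,
# which is exactly the regime where GMP-backed rationals pay off.
t_frac = timeit("sum(Fraction(1, k) for k in range(1, 300))",
                setup="from fractions import Fraction", number=20)
print("Fraction : %.4fs" % t_frac)

try:
    import gmpy2
    t_mpq = timeit("sum(gmpy2.mpq(1, k) for k in range(1, 300))",
                   setup="import gmpy2", number=20)
    print("gmpy2.mpq: %.4fs" % t_mpq)
except ImportError:
    pass  # gmpy2 not installed; the Fraction timing still runs
```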

GMPY is available at http://code.google.com/p/gmpy/

casevh

Disclaimer: I'm the current maintainer of GMPY.


ANN: GMPY 1.14 and GMPY2 alpha1 released

2010-11-18 Thread casevh
Everyone,

I'm pleased to announce both a new production release and a new
experimental release of GMPY. GMPY is a wrapper for
the MPIR or GMP multiple-precision arithmetic library.

The experimental release (GMPY2) now includes support for
the MPFR floating-point library.

GMPY is available for download from:

http://code.google.com/p/gmpy/


Production release
--


GMPY 1.14 is the updated stable release. A memory leak was
fixed, so it is highly recommended that all users upgrade to
this version. In addition to a few other bug fixes, GMPY
1.14 is compatible with the changes to the hashing code in
Python 3.2a4. The 64-bit Windows installer for Python 3.2
should only be used with 3.2a4 and later.

Even though my primary development focus has shifted to
GMPY2 (see below), GMPY 1.X will continue to receive bug
and compatibility fixes.


Experimental release



To simplify the codebase, allow for changes in the API,
and support simultaneous installation, the development
version has been renamed to GMPY2. The following is a list
of changes in GMPY2:

In 2.0.0a0
--

 * support for a mutable integer type "xmpz"
 * removal of random number functions
 * "xmpz" supports slices for setting/clearing bits
 * some methods have been renamed (scan1 -> bit_scan1)
 * support for Python prior to 2.6 has been removed
 * support for all division modes has been added
* ceiling - round to +Infinity
* floor - round to -Infinity
* truncate - round to zero
* 2exp - division by a power of 2
 * support is_even() and is_odd()
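With plain Python ints, the four division modes look like this (a sketch of
the semantics only, not gmpy2's actual API):

```python
def fdiv(a, b):
    """Floor division: round the quotient toward -Infinity."""
    return a // b

def cdiv(a, b):
    """Ceiling division: round the quotient toward +Infinity."""
    return -((-a) // b)

def tdiv(a, b):
    """Truncating division: round the quotient toward zero."""
    q = abs(a) // abs(b)
    return -q if (a < 0) != (b < 0) else q

print(fdiv(-7, 2), cdiv(-7, 2), tdiv(-7, 2))  # -4 -3 -3
print(-7 >> 1)  # the 2exp mode: a right shift is floor division by 2**n
```

The three modes only disagree when the quotient is negative, which is where
most off-by-one bugs in hand-rolled division code come from.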

In 2.0.0a1
--

 * support for the MPFR floating point library

If you use GMPY regularly, please test GMPY2. There have been
several requests asking for a mutable integer and I am curious
if there are real-world performance improvements.


Please report any issues!


casevh



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Download Microsoft C/C++ compiler for use with Python 2.6/2.7 ASAP

2010-07-06 Thread casevh
On Jul 6, 9:21 am, Thomas Jollans  wrote:
> On 07/06/2010 05:50 PM, sturlamolden wrote:
>
> > It is possible to build C and Fortran extensions for official Python
> > 2.6/2.7 binaries on x86 using mingw. AFAIK, Microsoft's compiler is
> > required for C++ or amd64 though. (Intel's compiler requires VS2008,
> > which has now perished.)
>
> mingw gcc should work for building C++ extensions if it also works for C
> extensions. There's no difference on the binding side - you simply have
> to include everything as extern "C", which I am sure the header does for
> you.
>
> As for amd64 - I do not know if there is a mingw64 release for windows
> already. If there isn't, there should be ;-) But that doesn't really
> change anything: the express edition of Microsoft's VC++ doesn't include
> an amd64 compiler anyway, AFAIK.

The original version of the Windows 7 SDK includes the command line
version of the VS 2008 amd64 compiler. I've used it to compile MPIR and
GMPY successfully. The GMPY source includes a text file describing the
build process using the SDK tools.

casevh
>
> Also, VS2010 should work as well - doesn't it?
>
>
>
>
>
> > Remember Python on Windows will still require VS2008 for a long time.
> > Just take a look at the recent Python 3 loath threads.

-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: GMPY 1.12 and GMPY2 unstable released

2010-06-26 Thread casevh
Everyone,

I'm pleased to announce both a production and an experimental
release of GMPY. GMPY is a wrapper for the MPIR or GMP
multiple-precision arithmetic library. GMPY is available
for download from:

http://code.google.com/p/gmpy/

Production release
--

GMPY 1.12 is the new stable release. In addition to fixing
a few bugs, GMPY 1.12 adds support for Python 2.7 and the
current py3k development trunk. Even though my primary
development focus has shifted to GMPY2 (see below), GMPY
1.X will continue to receive bug and compatibility fixes.

Experimental release


To simplify the codebase, allow for changes in the API,
and support simultaneous installation, the development
version has been renamed to GMPY2. The following is a list
of changes in GMPY2:

 * support for a mutable integer type "xmpz"
 * removal of random number functions
 * "xmpz" supports slices for setting/clearing bits
 * some methods have been renamed (scan1 -> bit_scan1)
 * support for Python prior to 2.6 has been removed
 * support for all division modes has been added
* ceiling - round to +Infinity
* floor - round to -Infinity
* truncate - round to zero
* 2exp - division by a power of 2
 * support is_even() and is_odd()
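For reference, the renamed scan1 -> bit_scan1 behaves like this pure-Python
equivalent (a sketch of the semantics, not gmpy2 itself):

```python
def bit_scan1(n, start=0):
    """Index of the lowest set bit of n at or above position start,
    or None if there is no set bit there."""
    n >>= start
    if n == 0:
        return None
    i = start
    while not n & 1:
        n >>= 1
        i += 1
    return i

print(bit_scan1(0b10100))     # 2
print(bit_scan1(0b10100, 3))  # 4
```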

If you use GMPY regularly, please test GMPY2. There have been
several requests asking for a mutable integer and I am curious
if there are real-world performance improvements.

Future enhancements
---

I am looking for feedback on future enhancements. If you have
specific requests, please let me know. Below are some ideas
I've had:

 * An optional "max_bits" parameter for an "xmpz". If
   specified, all results would be calculated modulo
   2^max_bits.

 * I've added the ability to set/clear bits of an "xmpz"
   using slice notation. Should I allow the source to be an
   arbitrary set of bits, or require that all bits be set
   to 0 or 1? Should support for "max_bits" be required?

 * Improved floating point support.
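The "max_bits" idea can already be emulated by masking after every operation;
that masking step is all the proposed option would automate (hypothetical
sketch, names are mine):

```python
MAX_BITS = 64
MASK = (1 << MAX_BITS) - 1   # results reduced modulo 2**MAX_BITS

a = 0xFFFFFFFFFFFFFFFF       # 2**64 - 1
b = 2
product = (a * b) & MASK     # wraps around like a fixed-width register
print(hex(product))  # 0xfffffffffffffffe
```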

Comments on provided binaries
-

The pre-compiled Windows installers use later versions of
GMP and MPIR, so the performance of some operations should
be better.

The 32-bit Windows installers were compiled with MinGW32 using
GMP 5.0.1 and will automatically recognize the CPU type and use
assembly code optimized for the CPU at runtime. The 64-bit
Windows installers were compiled with Microsoft's SDK compilers
using MPIR 2.1.1. Detailed instructions are included if you
want to compile your own binary.

Please report any issues!

casevh


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Where does "make altinstall" put stuff?

2010-05-29 Thread casevh
On May 29, 11:06 pm, John Nagle  wrote:
>    I know that one is supposed to use "make altinstall" to install
> versions of Python that won't be the "primary" version.  But what
> directory names does it use for packages and other support files?
> Is this documented somewhere?
>
>    I want to make sure that no part of the existing Python installation
> on Fedora Core is overwritten by an "altinstall" of 2.6.
>
>                                 John Nagle

It's placed in the directory specified by --prefix. If --prefix is not
specified, /usr/local is used by default.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Calculating very large exponents in python

2010-03-09 Thread casevh
On Mar 8, 10:39 pm, casevh  wrote:
> [also replying to Geremy since the OP's message doesn't appear...]
>
> On Mar 8, 11:05 am, geremy condra  wrote:
>
>
>
>
>
> > On Mon, Mar 8, 2010 at 2:15 AM, Fahad Ahmad  wrote:
> > > Thanks Geremy,
>
> > > That has been an absolute bump... GOD i cant sit on my chair, it 
> > > has
> > > worked even on 512 bit number and with no time..
> > > superb i would say.
>
> > > lastly, i am using the code below to calculate Largest Prime factor of a
> > > number:
>
> > > print
> > > ('''==='''
> > >    '''  CALCULATE  HIGHEST PRIME
> > > FACTOR  '''
>
> > > '''===''')
>
> > > #!/usr/bin/env python
> > > def highest_prime_factor(n):
> > >    if isprime(n):
> > >   return n
> > >    for x in xrange(2,n ** 0.5 + 1):
> > >   if not n % x:
> > >  return highest_prime_factor(n/x)
> > > def isprime(n):
> > >    for x in xrange(2,n ** 0.5 + 1):
> > >   if not n % x:
> > >  return False
> > >    return True
> > > if  __name__ == "__main__":
> > >    import time
> > >    start = time.time()
> > >    print highest_prime_factor(1238162376372637826)
> > >    print time.time() - start
>
> > > the code works with a bit of delay on the number : "1238162376372637826" 
> > > but
> > > extending it to
> > > (109026109913291424366305511581086089650628117463925776754560048454991130443047109026109913291424366305511581086089650628117463925776754560048454991130443047)
> > >  makes python go crazy. Is there any way just like above, i can have it
> > > calculated it in no time.
>
> > > thanks for the support.
>
> > If you're just looking for the largest prime factor I would suggest using
> > a fermat factorization attack. In the example you gave, it returns
> > nearly immediately.
>
> > Geremy Condra
>
> For a Python-based solution, you might want to look at pyecm (http://
> sourceforge.net/projects/pyecm/)
>
> On a system with gmpy installed also, pyecm found the following
> factors:
>
> 101, 521, 3121, 9901, 36479, 300623, 53397071018461,
> 1900381976777332243781
>
> There still is a 98 digit unfactored composite:
>
> 60252507174568243758911151187828438446814447653986842279796823262165159406500174226172705680274911
>
> Factoring this remaining composite using ECM may not be practical.
>
> casevh

After a few hours, the remaining factors are

6060517860310398033985611921721

and

9941808367425935774306988776021629111399536914790551022447994642391

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Calculating very large exponents in python

2010-03-08 Thread casevh
[also replying to Geremy since the OP's message doesn't appear...]

On Mar 8, 11:05 am, geremy condra  wrote:
> On Mon, Mar 8, 2010 at 2:15 AM, Fahad Ahmad  wrote:
> > Thanks Geremy,
>
> > That has been an absolute bump... GOD i cant sit on my chair, it has
> > worked even on 512 bit number and with no time..
> > superb i would say.
>
> > lastly, i am using the code below to calculate Largest Prime factor of a
> > number:
>
> > print
> > ('''==='''
> >    '''  CALCULATE  HIGHEST PRIME
> > FACTOR  '''
>
> > '''===''')
>
> > #!/usr/bin/env python
> > def highest_prime_factor(n):
> >    if isprime(n):
> >   return n
> >    for x in xrange(2,n ** 0.5 + 1):
> >   if not n % x:
> >  return highest_prime_factor(n/x)
> > def isprime(n):
> >    for x in xrange(2,n ** 0.5 + 1):
> >   if not n % x:
> >  return False
> >    return True
> > if  __name__ == "__main__":
> >    import time
> >    start = time.time()
> >    print highest_prime_factor(1238162376372637826)
> >    print time.time() - start
>
> > the code works with a bit of delay on the number : "1238162376372637826" but
> > extending it to
> > (109026109913291424366305511581086089650628117463925776754560048454991130443047109026109913291424366305511581086089650628117463925776754560048454991130443047)
> >  makes python go crazy. Is there any way just like above, i can have it
> > calculated it in no time.
>
> > thanks for the support.
>
> If you're just looking for the largest prime factor I would suggest using
> a fermat factorization attack. In the example you gave, it returns
> nearly immediately.
>
> Geremy Condra

For a Python-based solution, you might want to look at pyecm (http://
sourceforge.net/projects/pyecm/)

On a system with gmpy installed also, pyecm found the following
factors:

101, 521, 3121, 9901, 36479, 300623, 53397071018461,
1900381976777332243781

There still is a 98 digit unfactored composite:

60252507174568243758911151187828438446814447653986842279796823262165159406500174226172705680274911

Factoring this remaining composite using ECM may not be practical.
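For composites of this size, trial division (as in the OP's code) is hopeless;
a method like Pollard's rho pulls out small-to-medium factors quickly. A
minimal sketch (my addition, not pyecm's implementation; it must only be
called on composite n, since a prime input would loop forever):

```python
import random
from math import gcd

def pollard_rho(n):
    """Return a nontrivial factor of composite n (retries until found)."""
    if n % 2 == 0:
        return 2
    while True:
        x = y = random.randrange(2, n - 1)
        c = random.randrange(1, n - 1)
        d = 1
        while d == 1:
            x = (x * x + c) % n          # tortoise: one step
            y = (y * y + c) % n          # hare: two steps per iteration
            y = (y * y + c) % n
            d = gcd(abs(x - y), n)
        if d != n:                       # hit the cycle without a factor: retry
            return d

f = pollard_rho(1238162376372637826)
print(f, 1238162376372637826 % f)  # prints: 2 0
```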

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: GMPY 1.11 released

2010-02-03 Thread casevh
On Feb 3, 10:22 am, Mensanator  wrote:
> On Feb 3, 10:37 am, casevh  wrote:
>
>
>
> > On Feb 2, 10:03 pm, Mensanator  wrote:
>
> > > On Feb 2, 12:45 am, casevh  wrote:
>
> > > > Everyone,
>
> > > > I'm pleased to annouce the final release of GMPY 1.11.
> > > > GMPY is a wrapper for the MPIR or GMP multiple-precision
> > > > arithmetic library. GMPY 1.11 is available for download from:
>
> > > >http://code.google.com/p/gmpy/
>
> > > > In addition to support for Python 3.x, there are several new
> > > > features in this release:
>
> > > > - Even faster conversion to/from Python longs.
> > > > - Performance improvements by reducing function overhead.
> > > > - Performance improvements by improved caching.
> > > > - Support for cdivmod, fdivmod, and tdivmod.
> > > > - Unicode strings are accepted on Python 2.x and 3.x.
> > > > - Fixed regression in GMPY 1.10 where True/False were no
> > > >   longer recognized.
>
> > > > Changes since 1.11rc1:
> > > > - Recognizes GMP 5.
> > > > - Bugs fixed in Windows binaries (MPIR 1.3.0rc3 -> 1.3.1).
>
> > > > Comments on provided binaries
>
> > > > The 32-bit Windows installers were compiled with MinGW32 using MPIR
> > > > 1.3.1 and will automatically recognize the CPU type and use code
> > > > optimized for the CPU at runtime. The 64-bit Windows installers were
> > > > compiled Microsoft's SDK compilers using MPRI 1.3.1. Detailed
> > > > instructions are included if you want to compile your own binary.
>
> > > > Please report any issues!
>
> > > My previous replies didn't show up. Something to do the .announce
> > > group? I'll trim that and try again. Sorry if they show up eventually.
>
> > > Two issues:
>
> > > 1] why does both gmpy 1.11 and gmpy 1.11rc1 both reply
>
> > > >>> gmpy.version()
>
> > > '1.11'
>
> > > Aren't these different versions? How are we supposed to tell them
> > > apart?
>
> > Check the name of source tarball?
>
> > gmpy._cvsid() will return the internal source code revision number.
> > The changes made in each revision number are listed 
> > athttp://code.google.com/p/gmpy/source/list.
>
> So, '$Id: gmpy.c 237 2010-01-10 03:46:37Z casevh $' would be Revision
> 237
> on that source list?
>
Correct.
>
>
> > I know some applications check gmpy.version(). I don't know if they'll
> > work if the format of the string changes.
>
> Then gmpy.version() isn't really intended to be a version per se,
> it's just a level of compatibility for those programs that care?

Historically, gmpy really didn't have alpha/beta/rc versions and
gmpy.version() just had the version "number" and didn't indicate the
status. If I change it, I'd rather go to "1.1.1rc1" or "1.2.0a0" but
that might break some applications.

>
>
>
> > > 2] Is it true that the only changes since 1.11rc1 are not
> > >    applicable to me since
>
> > >    - I'm not using Windows
> > >    - whether it recognizes GMP 5 is moot as GMP 5 cannot be
> > >      compiled on a Mac (according to GMP site)
>
> > Yes. The only change for GMP 5 was to recognize the new version number
> > when running the tests.
>
> Good.
>
>
>
> > > Is it possible GMP's problems with getting GMP 5 to compile
> > > are the same ones I had with 3.1 on Snow Leopard? (They bemoan
> > > not having a set of every Mac system.) Think it would behoove
> > > me to try it?
>
> > According to comments on GMP's mailing list, the latest snapshot
> > should work.ftp://ftp.gmplib.org/pub/snapshot/
>
> I'll have to see if I can get it to work this weekend. I sure hope I
> don't muck it up after after all the trouble I had getting the
> previous
> one to work.
>
> Thanks for the links.
>
>
>
> > > > casevh
>
>

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: GMPY 1.11 released

2010-02-03 Thread casevh
On Feb 2, 10:03 pm, Mensanator  wrote:
> On Feb 2, 12:45 am, casevh  wrote:
>
>
>
> > Everyone,
>
> > I'm pleased to annouce the final release of GMPY 1.11.
> > GMPY is a wrapper for the MPIR or GMP multiple-precision
> > arithmetic library. GMPY 1.11 is available for download from:
>
> >http://code.google.com/p/gmpy/
>
> > In addition to support for Python 3.x, there are several new
> > features in this release:
>
> > - Even faster conversion to/from Python longs.
> > - Performance improvements by reducing function overhead.
> > - Performance improvements by improved caching.
> > - Support for cdivmod, fdivmod, and tdivmod.
> > - Unicode strings are accepted on Python 2.x and 3.x.
> > - Fixed regression in GMPY 1.10 where True/False were no
> >   longer recognized.
>
> > Changes since 1.11rc1:
> > - Recognizes GMP 5.
> > - Bugs fixed in Windows binaries (MPIR 1.3.0rc3 -> 1.3.1).
>
> > Comments on provided binaries
>
> > The 32-bit Windows installers were compiled with MinGW32 using MPIR
> > 1.3.1 and will automatically recognize the CPU type and use code
> > optimized for the CPU at runtime. The 64-bit Windows installers were
> > compiled Microsoft's SDK compilers using MPRI 1.3.1. Detailed
> > instructions are included if you want to compile your own binary.
>
> > Please report any issues!
>
> My previous replies didn't show up. Something to do the .announce
> group? I'll trim that and try again. Sorry if they show up eventually.
>
> Two issues:
>
> 1] why does both gmpy 1.11 and gmpy 1.11rc1 both reply
>
> >>> gmpy.version()
>
> '1.11'
>
> Aren't these different versions? How are we supposed to tell them
> apart?

Check the name of source tarball?

gmpy._cvsid() will return the internal source code revision number.
The changes made in each revision number are listed at
http://code.google.com/p/gmpy/source/list.

I know some applications check gmpy.version(). I don't know if they'll
work if the format of the string changes.

>
> 2] Is it true that the only changes since 1.11rc1 are not
>    applicable to me since
>
>    - I'm not using Windows
>    - whether it recognizes GMP 5 is moot as GMP 5 cannot be
>      compiled on a Mac (according to GMP site)

Yes. The only change for GMP 5 was to recognize the new version number
when running the tests.

>
> Is it possible GMP's problems with getting GMP 5 to compile
> are the same ones I had with 3.1 on Snow Leopard? (They bemoan
> not having a set of every Mac system.) Think it would behoove
> me to try it?

According to comments on GMP's mailing list, the latest snapshot
should work.
ftp://ftp.gmplib.org/pub/snapshot/

>
>
>
> > casevh
>
>

-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: GMPY 1.11 released

2010-02-02 Thread casevh
Everyone,

I'm pleased to announce the final release of GMPY 1.11.
GMPY is a wrapper for the MPIR or GMP multiple-precision
arithmetic library. GMPY 1.11 is available for download from:


http://code.google.com/p/gmpy/


In addition to support for Python 3.x, there are several new
features in this release:


- Even faster conversion to/from Python longs.
- Performance improvements by reducing function overhead.
- Performance improvements by improved caching.
- Support for cdivmod, fdivmod, and tdivmod.
- Unicode strings are accepted on Python 2.x and 3.x.
- Fixed regression in GMPY 1.10 where True/False were no
  longer recognized.

Changes since 1.11rc1:
- Recognizes GMP 5.
- Bugs fixed in Windows binaries (MPIR 1.3.0rc3 -> 1.3.1).


Comments on provided binaries


The 32-bit Windows installers were compiled with MinGW32 using MPIR
1.3.1 and will automatically recognize the CPU type and use code
optimized for the CPU at runtime. The 64-bit Windows installers were
compiled with Microsoft's SDK compilers using MPIR 1.3.1. Detailed
instructions are included if you want to compile your own binary.


Please report any issues!


casevh


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python distutils build problems with MinGW

2010-02-01 Thread casevh
On Feb 1, 8:31 am, Andrej Mitrovic  wrote:
> On Feb 1, 4:03 am, Andrej Mitrovic  wrote:
>
>
>
>
>
> > On Feb 1, 2:59 am, Andrej Mitrovic  wrote:
>
> > > Hi,
>
> > > I've made a similar post on the Cython mailing list, however I think
> > > this is more python-specific. I'm having trouble setting up distutils
> > > to use MinGW instead of Visual Studio when building a module. Even tho
> > > I've just uninstalled VS, and cleared out any leftover VS environment
> > > variables, distutils keeps wanting to use it.
>
> > > The steps I took:
>
> > > Fresh installation of Python 3.1.1
> > > Successfully installed MinGW, added to the path variable (gcc in
> > > command prompt works)
> > > Successfully installed Cython, imports from Cython in Python work.
> > > Added a distutils.cfg file in \Python31\Lib\distutils\ directory with:
>
> > > [build]
> > > compiler=mingw32
>
> > > (also tried adding [build_ext] compiler=mingw32)
>
> > > There's a demo setup.py module that came with Cython, I tried the
> > > following commands:
>
> > > 
>
> > > > python setup.py build_ext --inplace
>
> > > error: Unable to find vcvarsall.bat
>
> > > > python setup.py build
>
> > > error: Unable to find vcvarsall.bat
> > > 
>
> > > I'm having the exact same issue with trying to build the Polygon
> > > library via MinGW. In fact, the reason I had installed Visual Studio
> > > in the first place was to be able to build the Polygon library, since
> > > I was having these errors.
>
> > > What do I need to do to make distutils/python use MinGW?
>
> > Update:
>
> > I installed and tried building with Python 2.6, it calls MinGW when I
> > have the distutils.cfg file configured properly (same configuration as
> > the Python 3.1.1 one)
>
> > But why doesn't it work on a fresh Python 3.1.1 installation as well?
> > Is this a bug?
>
> Also tried calling (Python 3.1.1):
>
> 
> python setup.py build --compiler=mingw32
>
> error: Unable to find vcvarsall.bat
> 
>
> I've tried using pexports and the dlltool to build new python31.def
> and libpython31.a files, and put them in the libs folder. That didn't
> work either.
>
> I've also tried adding some print statements in the \distutils\dist.py
> file, in the parse_config_files() function, just to see if Python
> properly parses the config file. And it does, both Python 2.6 and 3.1
> parse the distutils.cfg file properly. Yet something is making python
> 3 look for the VS/VC compiler instead of MinGW. I'll keep updating on
> any progres..

I think this is http://bugs.python.org/issue6377.

I applied the patch to my local copy of Python 3.1 and it seems to
work.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ISO module for binomial coefficients, etc.

2010-01-24 Thread casevh
On Jan 23, 2:55 pm, kj  wrote:
> Before I go off to re-invent a thoroughly invented wheel, I thought
> I'd ask around for some existing module for computing binomial
> coefficient, hypergeometric coefficients, and other factorial-based
> combinatorial indices.  I'm looking for something that can handle
> fairly large factorials (on the order of 1!), using floating-point
> approximations as needed, and is smart about optimizations,
> memoizations, etc.
>
> TIA!
>
> ~K

If you need exact values, gmpy (http://code.google.com/p/gmpy/) has
basic, but fast, support for calculating binomial coefficients and
factorials. If you are floating point approximations, check out mpmath
(http://code.google.com/p/mpmath/).
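For exact values without any extra library, the multiplicative formula keeps
the intermediates small and avoids computing full factorials (a standard-
library sketch; it will still be far slower than gmpy's C implementation for
n on the order of 10**6):

```python
def binomial(n, k):
    """Exact C(n, k) via the multiplicative formula."""
    k = min(k, n - k)        # use symmetry to shorten the loop
    if k < 0:
        return 0
    result = 1
    for i in range(k):
        # exact at every step: C(n, i+1) = C(n, i) * (n - i) / (i + 1)
        result = result * (n - i) // (i + 1)
    return result

print(binomial(10, 3))  # 120
```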

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Rounding up to the next 100

2010-01-21 Thread casevh
On Jan 21, 1:58 pm, noydb  wrote:
> Sorry, although what I really need is the string-number rounded UP
> every time.  So if the number is 3890.32, it needs to go to 3900; if
> the number is 3811.345, it needs to go to 3900 also.
>
> So, Florian's answer works.

Another option is using math.ceil and math.floor.

>>> import math
>>> 100*math.ceil(1234.5678/100)
1300
>>> 100*math.floor(1234.5678/100)
1200
>>> 100*math.ceil(-1234.5678/100)
-1200
>>> 100*math.floor(-1234.5678/100)
-1300
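If the value is already an integer, the same ceiling can be done without any
float division at all (a hypothetical helper, not from the thread):

```python
def round_up_int(n, step=100):
    """Round integer n up to the next multiple of step, int math only."""
    return -(-n // step) * step   # floor division of the negation = ceiling

print(round_up_int(3891))   # 3900
print(round_up_int(3800))   # 3800 (already a multiple, stays put)
print(round_up_int(-3891))  # -3800 (rounds toward +Infinity)
```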

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interactive terminal in Ubuntu Linux : libreadline5-dev works only in Python 2.6 not 3.1

2010-01-12 Thread casevh
On Jan 12, 9:03 pm, Dave WB3DWE wrote:
> On Sun, 10 Jan 2010 22:08:20 -0800 (PST), casevh 
> wrote:
>
>
>
>
>
> >On Jan 10, 8:16 pm, Dave WB3DWE wrote:
> >> On Sat, 9 Jan 2010 16:48:52 -0800 (PST), casevh 
> >> wrote:
>
> >> >On Jan 9, 3:10 pm, pdlem...@earthlink.net wrote:
> >> >> On Sat, 9 Jan 2010 13:27:07 -0800 (PST), casevh 
> >> >> wrote:
>
> >Are you sure you are using the new version of python3.1 (the one
> >located in /usr/local/bin/)?
>
> >What is the result of "which python3.1"?
>
> >What happens if you run "/usr/local/bin/python3.1"?
>
> >casevh
>
> You're correct : there are now two versions of Python 3 on my machine.
>
> Two weeks ago when I installed the Python-3.1.1 from the tarball somehow
> it got called up with "python3" .  Thats what I've continued to run and
> now has all the errors.
>
> in my usr/local/bin are
>     2to3  idle3  pydoc3  python3  python3.1  python3.1-config  and,
>         in light blue, python3-config
>
> Running  " python3.1 "  solves _most_ of the problems :
>     It imports the random & time modules and runs most of my modules in
> the pycode dir.
>     The libreadline5-dev works fine and all my issues with the keys are
> gone. I'm astonished.
>
> However now my modules that import  msvcrt  will not run.  I use this
> for single char keyboard input.  Trying to import msvcrt yields
>     InputError : No module named msvcrt
> I believe a module using this ran before recompilation.  Furthermore I
> can find  msvcrtmodule.c in  /home/dave/python31/Python-3.1.1/PC
> and msvcrt.rst in /home/dave/python31/Python-3.1.1/Doc/library
> I use this a lot.  Suppose I should now learn curses module.
>
> Thanks for everything         Dave WB3DWE      pdlem...@earthlink.net

msvcrt provides an interface to the Microsoft Visual C runtime, so it
won't run on Ubuntu.

The files that you see are the source code and documentation but the
source code is only compiled on Windows.
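Since msvcrt.getch won't exist on Ubuntu, the usual POSIX stand-in is a small
termios helper like this (my sketch; it assumes a real terminal on stdin):

```python
import sys
import termios
import tty

def getch():
    """Read a single keypress without waiting for Enter
    (POSIX stand-in for msvcrt.getch)."""
    fd = sys.stdin.fileno()
    old = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)                   # raw mode: no buffering, no echo
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old)

# ch = getch()   # blocks until one key is pressed
```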

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interactive terminal in Ubuntu Linux : libreadline5-dev works only in Python 2.6 not 3.1

2010-01-10 Thread casevh
On Jan 10, 8:16 pm, Dave WB3DWE wrote:
> On Sat, 9 Jan 2010 16:48:52 -0800 (PST), casevh 
> wrote:
>
> >On Jan 9, 3:10 pm, pdlem...@earthlink.net wrote:
> >> On Sat, 9 Jan 2010 13:27:07 -0800 (PST), casevh 
> >> wrote:
>
> >1) Try the commands again. Make sure all the "./configure" options are
> >on one line. Make sure to do "sudo make altinstall". (Don't use "sudo
> >make install"; it will give your version of Python the name "python"
> >and that can cause confusion on your system.)
>
> >2) Move your applications to another directory.
>
> >3) Try running "python3.1" while you are in that directory.
>
> >If this doesn't work, report back on the error messages you receive.
>
> Thanks casevh for your time & patience.
> The ./configure . . .  was a "continuous" line , although it ran over to
> next line even on 132 char wide terminal because of names of system &
> dir.  I was careful with the spaces and hyphens.
> In my reply the "make install" was a typo , I did run  make altinstall.
>
> Moved all my code to pycode dir on my home directory.  Removed all
> files from /usr/local/lib/python3.1/dlmodules and removed that dir.
>
> Twice ran the recompile :
>     make distclean
>     ./configure --prefix=/usr/local --with-computed-gotos
>             --with-wide-unicode
>         <^ one space>
>     make
>     sudo make altinstall
> After each reinstall had same problems :  cannot import  random  or
> any code using it, although random.py is in Lib :
>     Traceback
>         File "", line 1, in 
>         File "random.py", line 46, in 
>             import collections as _collections
>         File "collections.py", line 9 in 
>             from _collections import deque, default dict
>     ImportError: /usr/local/lib/python3.1/lib-dynload/collections.so:
>         undefined symbol: PyUnicodeUCS4_FromString
>
> After second reinstall today I also found that a module importing
>     "time"  would not run.  Likewise could not import  time  at  >>> .
> Same error, ending : undefined symbol: PyUnicode UCS4_FromString
>
> And my original problem still there : fouled up keys in interactive
> terminal. Seems minor now ; )   Should I try to remove everything
> and reopen the tarball ?
> Dave WB3DWE,  central Texas

Are you sure you are using the new version of python3.1 (the one
located in /usr/local/bin/)?

What is the result of "which python3.1"?

What happens if you run "/usr/local/bin/python3.1"?

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interactive terminal in Ubuntu Linux : libreadline5-dev works only in Python 2.6 not 3.1

2010-01-09 Thread casevh
On Jan 9, 3:10 pm, pdlem...@earthlink.net wrote:
> On Sat, 9 Jan 2010 13:27:07 -0800 (PST), casevh 
> wrote:
> >Did you recompile Python 3.1.1 after installing libreadline5-dev?
>
> >(From the Python 3.1.1 directory. Your options to configure may vary.)
>
> >make distclean
> >./configure --prefix=/usr/local --with-computed-gotos --with-wide-
> >unicode
> >make
> >make altinstall
>
> >casevh
>
> Thanks so much for your help . . . but I'm going backwards, probably
> due to my inadequate knowledge of Linux.
>
> Ran the above vebatim, except had to do      sudo make install    .
> Python recompiled and the system appears to be in same path/dir as
> before :  /home/dave/python31/Python-3.1.1
> Called up interactive prompt  >>>  with  $ python3 , as before.
>

It looks like you now have two copies of Python 3.1.1 installed.
That's probably a side-effect of my instructions. I'll go through
each instruction individually.

It looks like your Python source code is in "/home/dave/python31/
Python-3.1.1". The command "make distclean" should remove the results
of the prior configuration, so you won't need to extract the source
code again.

The command "./configure" accepts several options that control how the
source code is configured. To see all the available options, use the
command "./configure --help". The first option - "--prefix=/usr/local"
- identifies the location where Python 3.1.1 will be installed. The "/
usr/local" directory is a common location for user-compiled programs.
If no location is specified, many applications assume "/usr/local".
The option - "--with-computed-gotos" - is just a compiler option that
produces a slightly faster interpreter. The final option - "--with-
wide-unicode" - identifies the proper Unicode format to use. (It looks
like this option may have been split via line wrapping in my earlier
email. I think that is causing the UCS4 error.)

The command "make" compiles Python 3.1.1.

The command "sudo make altinstall" installs Python 3.1.1 under the "/
usr/local" directory. The option "altinstall" installs Python 3.1.1
and leaves its name as "python3.1". It should be located in "/usr/
local/bin/python3.1".

Your own programs (Ackermann.py) should be kept somewhere in your home
directory, for example "/home/dave/code". Typing "python3.1" will make
the shell search the directories in your PATH, where it should find
"/usr/local/bin/python3.1".

What will you need to do fix this?

1) Try the commands again. Make sure all the "./configure" options are
on one line. Make sure to do "sudo make altinstall". (Don't use "sudo
make install"; it will give your version of Python the name "python"
and that can cause confusion on your system.)

2) Move your applications to another directory.

3) Try running "python3.1" while you are in that directory.

If this doesn't work, report back on the error messages you receive.

Don't give up; we can fix this.

casevh
> Problems :
>     1. prompt keys remain fouled up as before.
>     2. will not import  random  module, either from prompt or
>             from any of my prewritten modules.  Get ImportError
>            /usr/local/lib/Python3.1/lib-dynload/_collections.so :
>             undefined symbol : PyUnicode UCS4_FromString
>                 Some other modules will import : math , sys, os
>     3. Some of my pre-existing modules will not run : have
>           not checked them all, but apparently those with random.
>     4. Attempted to read my modules with gedit.  Pulls them
>           up but refuses to save :          could not save file
>           /usr/local/lib/python3.1/dlmodules/Ackermann.py
>                You do not have permission necessary to save file.
>           This is same path my modules were in before the
>           recompile and the dir I made for them. Had no trouble
>           saving them previously.
>
> If no fix available, I'll reinstall from the Python-3.1.1 that
> I extracted from the tarball , and go back to XP for a while  : (
>
> Dave WB3DWE       pdlem...@earthlink.net

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: interactive terminal in Ubuntu Linux : libreadline5-dev works only in Python 2.6 not 3.1

2010-01-09 Thread casevh
On Jan 9, 10:06 am, Dave WB3DWE wrote:
> On Jan 6 I inquired how to "fix" the 3.1.1 interactive terminal
> in Ubuntu Linux.   Left arrow yields ^[[D , etc.
>
> casevh helped by suggesting "libreadline5-dev" be installed.
> Did so with Synaptic Package Manager.
> The behavior of the Python 3.1.1 terminal is unchanged but
> the 2.6.2 terminal is corrected.
>
> Is there any way to fix the 3.1.1 terminal command line ?
>
> Thanks,     Dave WB3DWE      pdlem...@earthlink.net

Did you recompile Python 3.1.1 after installing libreadline5-dev?

(From the Python 3.1.1 directory. Your options to configure may vary.)

make distclean
./configure --prefix=/usr/local --with-computed-gotos --with-wide-unicode
make
make altinstall

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Caching objects in a C extension

2010-01-08 Thread casevh
On Jan 8, 8:56 am, Antoine Pitrou  wrote:
> On Fri, 08 Jan 2010 08:39:17 -0800, casevh wrote:
>
>
>
> > Thanks for the reply. I realized that I missed one detail. The objects
> > are created by the extension but are deleted by Python. I don't know
> > that an object is no longer needed until its tp_dealloc is called. At
> > that point, its reference count is 0.
>
> tuple objects and others already have such a caching scheme, so you could
> download the Python source and look at e.g. Objects/tupleobject.c to see
> how it's done.
>
> Regards
>
> Antoine.

Thanks. That's exactly the information I was looking for.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Caching objects in a C extension

2010-01-08 Thread casevh
On Jan 8, 9:19 am, "Diez B. Roggisch"  wrote:
> casevh schrieb:
>
>
>
>
>
> > On Jan 8, 2:59 am, "Diez B. Roggisch"  wrote:
> >> casevh schrieb:
>
> >>> I'm working with a C extension that needs to rapidly create and delete
> >>> objects. I came up with an approach to cache objects that are being
> >>> deleted and resurrect them instead of creating new objects. It appears
> >>> to work well but I'm afraid I may be missing something (besides
> >>> heeding the warning in the documentation that _Py_NewReference is for
> >>> internal interpreter use only).
> >>> Below is a simplified version of the approach I'm using:
> >>> MyType_dealloc(MyTypeObject *self)
> >>> {
> >>>     if(I_want_to_save_MyType(self)) {
> >>>         // Save the object pointer in a cache
> >>>         save_it(self);
> >>>     } else {
> >>>         PyObject_Del(self);
> >>>     }
> >>> }
> >>> MyType_new(void)
> >>> {
> >>>     MyTypeObject *self;
> >>>     if(there_is_an_object_in_the_cache) {
> >>>         self = get_object_from_cache;
> >>>         _Py_NewReference((PyObject*)self);
> >>>     } else {
> >>>         if(!(self = PyObjectNew(MyTypeObject, &MyType))
> >>>             return NULL;
> >>>         initialize_the_new_object(self);
> >>>     }
> >>>     return self;
> >>> }
> >>> The objects referenced in the cache have a reference count of 0 and I
> >>> don't increment the reference count until I need to resurrect the
> >>> object. Could these objects be clobbered by the garbage collector?
> >>> Would it be safer to create the new reference before stuffing the
> >>> object into the cache (even though it will look like there is a memory
> >>> leak when running under a debug build)?
> >> Deep out of my guts I'd say keeping a reference, and using you own
> >> LRU-scheme would be the safest without residing to use dark magic.
>
> >> Diez
>
> > Thanks for the reply. I realized that I missed one detail. The objects
> > are created by the extension but are deleted by Python. I don't know
> > that an object is no longer needed until its tp_dealloc is called. At
> > that point, its reference count is 0.
>
> I don't fully understand. Whoever creates these objects, you get a
> reference to them at some point. Then you increment (through the
> destined Macros) the ref-count.
>
> All objects in your pool with refcount 1 are canditates for removal. All
> you need to do is to keep a kind of timestamp together with them, since
> when they are released. If that's to old, fully release them.
>
> Diez

These are numeric objects created by gmpy. I'm trying to minimize the
overhead for using mpz with small numbers. Objects are created and
deleted very often by the interpreter as expressions are evaluated. I
don't keep ownership of the objects.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Caching objects in a C extension

2010-01-08 Thread casevh
On Jan 8, 2:59 am, "Diez B. Roggisch"  wrote:
> casevh schrieb:
>
>
>
>
>
> > I'm working with a C extension that needs to rapidly create and delete
> > objects. I came up with an approach to cache objects that are being
> > deleted and resurrect them instead of creating new objects. It appears
> > to work well but I'm afraid I may be missing something (besides
> > heeding the warning in the documentation that _Py_NewReference is for
> > internal interpreter use only).
>
> > Below is a simplified version of the approach I'm using:
>
> > MyType_dealloc(MyTypeObject *self)
> > {
> >     if(I_want_to_save_MyType(self)) {
> >         // Save the object pointer in a cache
> >         save_it(self);
> >     } else {
> >         PyObject_Del(self);
> >     }
> > }
>
> > MyType_new(void)
> > {
> >     MyTypeObject *self;
> >     if(there_is_an_object_in_the_cache) {
> >         self = get_object_from_cache;
> >         _Py_NewReference((PyObject*)self);
> >     } else {
> >         if(!(self = PyObjectNew(MyTypeObject, &MyType))
> >             return NULL;
> >         initialize_the_new_object(self);
> >     }
> >     return self;
> > }
>
> > The objects referenced in the cache have a reference count of 0 and I
> > don't increment the reference count until I need to resurrect the
> > object. Could these objects be clobbered by the garbage collector?
> > Would it be safer to create the new reference before stuffing the
> > object into the cache (even though it will look like there is a memory
> > leak when running under a debug build)?
>
> Deep out of my guts I'd say keeping a reference, and using you own
> LRU-scheme would be the safest without residing to use dark magic.
>
> Diez

Thanks for the reply. I realized that I missed one detail. The objects
are created by the extension but are deleted by Python. I don't know
that an object is no longer needed until its tp_dealloc is called. At
that point, its reference count is 0.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Caching objects in a C extension

2010-01-08 Thread casevh
I'm working with a C extension that needs to rapidly create and delete
objects. I came up with an approach to cache objects that are being
deleted and resurrect them instead of creating new objects. It appears
to work well but I'm afraid I may be missing something (besides
heeding the warning in the documentation that _Py_NewReference is for
internal interpreter use only).

Below is a simplified version of the approach I'm using:

MyType_dealloc(MyTypeObject *self)
{
    if(I_want_to_save_MyType(self)) {
        // Save the object pointer in a cache
        save_it(self);
    } else {
        PyObject_Del(self);
    }
}

MyType_new(void)
{
    MyTypeObject *self;
    if(there_is_an_object_in_the_cache) {
        self = get_object_from_cache;
        _Py_NewReference((PyObject*)self);
    } else {
        if(!(self = PyObject_New(MyTypeObject, &MyType)))
            return NULL;
        initialize_the_new_object(self);
    }
    return self;
}

The objects referenced in the cache have a reference count of 0 and I
don't increment the reference count until I need to resurrect the
object. Could these objects be clobbered by the garbage collector?
Would it be safer to create the new reference before stuffing the
object into the cache (even though it will look like there is a memory
leak when running under a debug build)?
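As an aside for readers following along in Python: the same free-list idea can be sketched at the Python level. Everything here (PooledPoint, release) is an invented illustration, not gmpy's actual code:

```python
class PooledPoint(object):
    """Free-list sketch: cache dead instances and resurrect them
    instead of allocating new ones, mirroring the C scheme above."""
    _pool = []        # plays the role of the C-level cache
    _POOL_MAX = 128   # bound the cache so it can't grow without limit

    def __new__(cls, x, y):
        if cls._pool:
            self = cls._pool.pop()              # resurrect a cached object
        else:
            self = super(PooledPoint, cls).__new__(cls)
        self.x, self.y = x, y
        return self

    def release(self):
        # stands in for the "save it" branch of tp_dealloc
        if len(PooledPoint._pool) < PooledPoint._POOL_MAX:
            PooledPoint._pool.append(self)
```

Releasing an instance and then constructing a new one hands back the very same object, which is the effect the C code gets from _Py_NewReference.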

Thanks in advance for any comments,

casevh





-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python interactive terminal in Ubuntu Linux : some keys fouled up

2010-01-06 Thread casevh
On Jan 6, 2:40 pm, pdlem...@earthlink.net wrote:
> Have recently moved from XP to Ubuntu Linux.
> Successfully installed Python 3.1.1 in the Ubuntu 9.04
> release on my desktop.
> Problem is the python interactive terminal  >>> .
> Many of the keys now do not work.
> eg pressing left-arrow yields  ^[[D
>                     right-arrow         ^[[C
>                     Home                  ^[OH
>                     Del                      ^[[3~
>                     up-arrow            ^[[A                  
>
> Frustrating as I use all these , esp  up-arrow to
> repeat recent lines.  Found the same thing on
> my sons MacBook.
>
> This is not mentioned in two recent Python books
> or one on Ubuntu. Nor could I found help on the www.
>
> Is there any work-around  ?  Should I just forget
> the python prompt >>>   ?                  
>
> Thanks,             Dave        pdlem...@earthlink.net

Assuming you compiled the source code, you will also need to install
"libreadline5-dev" via Synaptic or apt-get.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: the need for 64 bits

2009-12-28 Thread casevh
On Dec 28, 2:13 am, Mark Dickinson  wrote:
> On Dec 28, 6:50 am, Mensanator  wrote:
>
>
>
> > But with a 64-bit processor, that limitation no longer stops me.
>
> > i: 11   bits: 10,460,353,205   decimals:  3,148,880,080
> > i: 12   bits: 94,143,178,829   decimals: 28,339,920,715
>
> > Wow! 94 billion bits! 28 billion decimal digits!
>
> > Of course, once one wall falls, you get to go up against the next
> > one.
> > For generation 13, I get:
>
> > gmp: overflow in mpz type
> > Abort trap
>
> > Hmm, not sure what "overflow" means in this context, but I suspect
> > it ran out of memory, I probably should have gotten the MacBook Pro
> > with 8 GB of ram. But then, maybe it wouldn't help.
>
> I don't think this was due to running out of memory:  it looks like
> gmp uses the 'int' C type to count the number of limbs in an mpz,
> which would make the maximum number of bits 2**31 * 64, or around 137
> billion, on a typical 64-bit machine.  Maybe there's a configure
> option to change this?
>
> For Python longs, the number of limbs is stored as a signed size_t
> type, so on a 64-bit machine memory really is the only limitation.
>
> --
> Mark

Based on comments on the GMP website, the maximum number of bits on a
64-bit platform is limited to 2**37 or 41 billion decimal digits. A
number this size requires 16GB of RAM. A future version of GMP (5.x)
is supposed to remove that limit and also work well with disk-based
storage.
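As a quick sanity check on those figures (plain Python arithmetic; the 2**37 limit is the one quoted above):

```python
import math

BITS_LIMIT = 2 ** 37                      # GMP's per-number limit on 64-bit
digits = int(BITS_LIMIT * math.log10(2))  # convert bits to decimal digits
ram_gib = BITS_LIMIT // 8 // 2 ** 30      # limb storage for one such number

# digits comes out near 41.4 billion, and ram_gib is exactly 16
```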

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How do I install GMPY 1.11 on a Mac with OS X 10.6 and Python 3.1?

2009-12-26 Thread casevh
> >> >> > $ /Library/Frameworks/Python.framework/Versions/3.1/bin/python3
> >> >> > setup.py install
> >> >> > running install
> >> >> > running build
> >> >> > running build_ext
> >> >> > building 'gmpy' extension
> >> >> > creating build/temp.macosx-10.3-fat-3.1
> >> >> > creating build/temp.macosx-10.3-fat-3.1/src
> >> >> > Compiling with an SDK that doesn't seem to exist: /Developer/SDKs/
> >> >> > MacOSX10.4u.sdk
> >> >> > Please check your Xcode installation
> >> >> > gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -
> >> >> > fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -I./src -I/
> >> >> > opt/local/include -I/Library/Frameworks/Python.framework/Versions/3.1/
> >> >> > include/python3.1 -c src/gmpy.c -o build/temp.macosx-10.3-fat-3.1/src/
> >> >> > gmpy.o
> >> >> > In file included from src/gmpy.c:206:
> >> >> > /Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
> >> >> > Python.h:11:20: error: limits.h: No such file or directory
> >> >> > /Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
> >> >> > Python.h:14:2: error: #error "Something's broken. UCHAR_MAX should be
> >> >> > defined in limits.h."
> >> >> > /Library/Frameworks/Python.framework/Versions/3.1/include/python3.1/
> >> >> > Python.h:18:
>
> >> >> > Any other ideas? Do I have to install a separate Python 3?
>
> >> >> That's not a Python 3 problem. It appears to be a problem in the build 
> >> >> script.
>
> >> >> > Compiling with an SDK that doesn't seem to exist: /Developer/SDKs/
> >> >> > MacOSX10.4u.sdk
>
> >> >> My guess would be you're on Snow Leopard while the original developer
> >> >> is either on Tiger or Leopard. The script wants to use the 10.4 SDK
> >> >> but Apple only includes the SDKs for the latest 2 versions of OS X.
>
> >> > I just thought of something. Why I am able to do the build for python
> >> > 2.6?
> >> > Wouldn't that also fail for lack of a 10.4 SDK?
>
> >> I think you'd need different C sources for 2.x and 3.x because I think
> >> the C API changed quite a bit.
>
> > I can see that.
>
> >> That might be why it worked for 2.6 but
> >> failed for 3.1
>
> > But Python 2.6 must not care about whether the 10.4SDK
> > exists or not. It must be using 10.5 or 10.6 since they
> > are te only SDKs that do exist.
>
> > As far as different sources, only one set is supplied and
> > it specifically states in the comments that the source
> > supports 3.x. There are, after all, 3.1 versions of the
> > windows installer.
>
> > I certainly woudn't be surprised by the need for seperate
> > sources, but wouldn't such seperate sources have been
> > supplied and seperate setup scripts been provided?
>
> I was just digging though the sources- turns out it is a Python issue.
> GMPY compiles using distutil's makefile.
> /Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/config/Makefile
>
> That makefile specifies the 10.4 SDK and a MACOSX_DEPLOYMENT_TARGET of
> 10.3. I don't know how you'd fix this to work on all supported
> versions of OS X, but that seems to be the problem.
(I'm one of the GMPY maintainers but I have no access to Macs)

The same GMPY source should should compile with all version of Python
2.4 and later.

I think the makefile is created when that specific version of Python
is compiled. Was Python 3.1 included with OS X or was it installed
separately? If it was installed separately, was it installed from
macports or python.org?

I have a couple other generic questions.

Is Python 2.6 built as a 32 or 64-bit application? (sys.maxint)

Is the gmp library 32 or 64-bit? (gmpy.gmp_limbsize())

For best performance with large numbers, GMPY should be compiled as a
64-bit application. If Python and gmp are not compiled in 64-bit mode,
you probably will want to compile both of them from source or find 64-
bit versions.

casevh
>
>
>
> >> >> >> > --
> >> >> >> >http://mail.python.org/mailman/listinfo/python-list
>
> >> >> > --
> >> >> >http://mail.python.org/mailman/listinfo/python-list
>
> >> > --
> >> >http://mail.python.org/mailman/listinfo/python-list
>
> > --
> >http://mail.python.org/mailman/listinfo/python-list
>
>

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyArg_ParseTupleAndKeywords in Python3.1

2009-12-18 Thread casevh
On Dec 18, 10:28 am, Joachim Dahl  wrote:
> My mistake seems to be that I declared
>
> char a, b;
>
> instead of
>
> int a, b;
>
> Thank you for sorting this out.
>
> Joachim

I think you need to initialize them, too.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyArg_ParseTupleAndKeywords in Python3.1

2009-12-18 Thread casevh
On Dec 17, 11:14 am, Joachim Dahl  wrote:
> In the Ubuntu 9.10 version of Python 3.1 (using your patch), there's a
> related bug:
>
> >>> foo(b='b')
>
> will set the value of a in the extension module to zero, thus clearing
> whatever
> default value it may have had.  In other words, the optional character
> arguments
> that are skipped seem to be nulled by PyArg_ParseTupleAndKeywords().

The following code seems to work fine for me:

static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
{
int a=65, b=66;
char *kwlist[] = {"a", "b", NULL};
if (!PyArg_ParseTupleAndKeywords(args, kwrds, "|CC", kwlist, &a,
&b))
return NULL;
return Py_BuildValue("(CC)", a, b);
}

The default values seem to remain as 'A' and 'B'.

>>> foo()
('A', 'B')
>>> foo(b='b')
('A', 'b')
>>> foo()
('A', 'B')
>>> foo('a')
('a', 'B')
>>> foo('a', b='b')
('a', 'b')
>>> foo()
('A', 'B')
>>>

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyArg_ParseTupleAndKeywords in Python3.1

2009-11-30 Thread casevh
On Nov 30, 2:18 pm, Joachim Dahl  wrote:
> I think that "C" encoding is what I need, however I run into an odd
> problem.
> If I use the following C code
>
> static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
> {
>   char a, b;
>   char *kwlist[] = {"a", "b", NULL};
>   if (!PyArg_ParseTupleAndKeywords(args, kwrds, "|CC", kwlist, &a,
> &b))
>     return NULL;
>   ...
>
> then the following works:
>
> >>> foo('a')
> >>> foo('a','b')
> >>> foo(a='a',b='b')
>
> but the following fails:>>> foo(b='b')
>
> RuntimeError: impossible: 'CC'
>
> Is this error-message expected?

Nope. It appears to be a bug in Python. The format code 'C' is missing
in the switch statement in skipitem() in getargs.c. I added "case
'C': /* int */" after "case 'c': /* char */" and then the example worked
for me.

I'll open a bug report.

casevh
>
> On Nov 30, 10:19 pm, casevh  wrote:
>
> > On Nov 30, 1:04 pm, Joachim Dahl  wrote:
>
> > > Obviously the name of the C function and the char variable cannot both
> > > be foo,
> > > so the C code should be:
>
> > > static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
> > > {
> > >   char foochar;
> > >   char *kwlist[] = {"foochar", NULL};
> > >   if (!PyArg_ParseTupleAndKeywords(args, kwrds, "c", kwlist,
> > > &foochar))
> > >     return NULL;
> > >   ...
>
> > > The question remains the same: why can't I pass a single character
> > > argument to this function under Python3.1?
>
> > > Thanks.
> > > Joachim
>
> > > On Nov 30, 9:52 pm, Joachim Dahl  wrote:
>
> > > > I am updating an extension module from Python2.6 to Python3.
>
> > > > I used to pass character codes to the extension module, for example, I
> > > > would write:
>
> > > > >>> foo('X')
>
> > > > with the corresponding C extension routine defined as follows:
> > > > static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
> > > > {
> > > >   char foo;
> > > >   char *kwlist[] = {"foo", NULL};
> > > >   if (!PyArg_ParseTupleAndKeywords(args, kwrds, "c", kwlist, &foo))
> > > >     return NULL;
> > > >   ...
>
> > > > In Python3.0 this also works, but in Python3.1 I get the following
> > > > error:
> > > > TypeError: argument 1 must be a byte string of length 1, not str
>
> > > > and I seem to be supposed to write>>> foo(b'X')
>
> > > > instead. From the Python C API, I have not been able to explain this
> > > > new behavior.
> > > > What is the correct way to pass a single character argument to
> > > > Python3.1
> > > > extension modules?
>
> > Python 3.1 uses "c" (lowercase c) to parse a single character from a
> > byte-string and uses "C" (uppercase c) to parse a single character
> > from a Unicode string. I don't think there is an easy way to accept a
> > character from both.
>
> > HTH,
>
> > casevh
>
>

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: PyArg_ParseTupleAndKeywords in Python3.1

2009-11-30 Thread casevh
On Nov 30, 1:04 pm, Joachim Dahl  wrote:
> Obviously the name of the C function and the char variable cannot both
> be foo,
> so the C code should be:
>
> static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
> {
>   char foochar;
>   char *kwlist[] = {"foochar", NULL};
>   if (!PyArg_ParseTupleAndKeywords(args, kwrds, "c", kwlist,
> &foochar))
>     return NULL;
>   ...
>
> The question remains the same: why can't I pass a single character
> argument to this function under Python3.1?
>
> Thanks.
> Joachim
>
> On Nov 30, 9:52 pm, Joachim Dahl  wrote:
>
>
>
> > I am updating an extension module from Python2.6 to Python3.
>
> > I used to pass character codes to the extension module, for example, I
> > would write:
>
> > >>> foo('X')
>
> > with the corresponding C extension routine defined as follows:
> > static PyObject* foo(PyObject *self, PyObject *args, PyObject *kwrds)
> > {
> >   char foo;
> >   char *kwlist[] = {"foo", NULL};
> >   if (!PyArg_ParseTupleAndKeywords(args, kwrds, "c", kwlist, &foo))
> >     return NULL;
> >   ...
>
> > In Python3.0 this also works, but in Python3.1 I get the following
> > error:
> > TypeError: argument 1 must be a byte string of length 1, not str
>
> > and I seem to be supposed to write>>> foo(b'X')
>
> > instead. From the Python C API, I have not been able to explain this
> > new behavior.
> > What is the correct way to pass a single character argument to
> > Python3.1
> > extension modules?

Python 3.1 uses "c" (lowercase c) to parse a single character from a
byte-string and uses "C" (uppercase c) to parse a single character
from a Unicode string. I don't think there is an easy way to accept a
character from both.

HTH,

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: GMPY 1.11rc1 is available

2009-11-29 Thread casevh
Everyone,

I'm pleased to announce that a new version of GMPY is available.
GMPY is a wrapper for the MPIR or GMP multiple-precision
arithmetic library. GMPY 1.11rc1 is available for download from:

http://code.google.com/p/gmpy/

In addition to support for Python 3.x, there are several new
features in this release:

- Even faster conversion to/from Python longs.
- Performance improvements by reducing function overhead.
- Performance improvements by improved caching.
- Support for cdivmod, fdivmod, and tdivmod.
- Unicode strings are accepted on Python 2.x and 3.x.
- Fixed regression in GMPY 1.10 where True/False were no
  longer recognized.

Comments on provided binaries

The 32-bit Windows installers were compiled with MinGW32 using MPIR
1.3.0rc3 and will automatically recognize the CPU type and use code
optimized for the CPU at runtime. The 64-bit Windows installers were
compiled with Microsoft's SDK compilers using MPIR 1.3.0rc3. Detailed
instructions are included if you want to compile your own binary.

Future plans

On releasing the GIL: I have compared releasing the GIL versus the
multiprocessing module and the multiprocessing module offers better
and more predictable performance for embarrassingly parallel tasks
than releasing the GIL. If there are requests, I can add a compile-
time option to enable threading support but it is unlikely to
become the default.
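The kind of comparison I ran looks roughly like this (an illustrative sketch written for a modern Python 3, not the actual benchmark; `work` is a stand-in for a CPU-bound gmpy operation):

```python
from multiprocessing import Pool

def work(n):
    # stand-in for an embarrassingly parallel, CPU-bound bignum task
    return pow(3, n + 10 ** 5, 2 ** 61 - 1)

if __name__ == '__main__':
    # each worker runs in its own process, so the GIL never serializes them
    with Pool(2) as pool:
        results = pool.map(work, range(4))
    assert results == [work(n) for n in range(4)]
```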

On mutable integers: The performance advantages of mutable integers
appears to be 20% to 30% for some operations. I plan to add a new
mutable integer type in the next release of GMPY. If you want to
experiment with mutable integers now, GMPY can be compiled with a
mutable version of the standard 'mpz' type. Please see the file
"mutable_mpz.txt" for more information.

Please report any issues!

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C api and checking for integers

2009-11-12 Thread casevh
On Nov 12, 1:28 am, "lallous"  wrote:
> Hello,
>
> I am a little confused on how to check if a python variable is an integer or
> not.
>
> Sometimes PyInt_Check() fails and PyLong_Check() succeeds.

I assume you are using Python 2.x. There are two integer types: (1)
PyInt which stores small values that can be stored in a single C long
and (2) PyLong which stores values that may or may not fit in a single
C long. The number 2 could arrive as either a PyInt or a PyLong.

Try something like the following:

if (PyInt_CheckExact(obj)) {
    myvar = PyInt_AS_LONG(obj);
} else if (PyLong_CheckExact(obj)) {
    myvar = PyLong_AsLong(obj);
    if ((myvar == -1) && PyErr_Occurred()) {
        /* Too big to fit in a C long */
    }
}

Python 3.x is a little easier since everything is a PyLong.
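A Python-level analogue of that overflow check, using ctypes to find the platform's C long width (`fits_in_c_long` is my own helper name, not an API):

```python
import ctypes

C_LONG_BITS = 8 * ctypes.sizeof(ctypes.c_long)
C_LONG_MAX = 2 ** (C_LONG_BITS - 1) - 1
C_LONG_MIN = -(2 ** (C_LONG_BITS - 1))

def fits_in_c_long(n):
    # mirrors the test PyLong_AsLong performs internally: values outside
    # this range make it return -1 and set OverflowError
    return C_LONG_MIN <= n <= C_LONG_MAX
```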

>
> How to properly check for integer values?
>
> OTOH, I tried PyNumber_Check() and:
>
> (1) The doc says: Returns 1 if the object o provides numeric protocols, and
> false otherwise. This function always succeeds.
>
> What do they mean: "always succeeds" ?

That it will always return true or false; it won't raise an error.

>
> (2) It seems PyNumber_check(py_val) returns true when passed an instance!
>
> Please advise.
>
> --
> Elias

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python3.1 shell and Arrow Keys ?

2009-10-08 Thread casevh
On Oct 8, 2:47 pm, Peter Billam  wrote:
> Greetings.  I've upgraded 3.0->3.1 and now my arrow-keys don't work
> at the shell command-line, I just get:
>   >>> ^[[A
> etc. Is there some environment-variable or option I have to set ?
> I'm using xterm on debian lenny, and compiled python3.1.1, as far
> as I remember, just with sh configure, make, make install ...
>
> Regards,  Peter
>
> --
> Peter Billam      www.pjb.com.au   www.pjb.com.au/comp/contact.html

You probably need to install the readline-devel package. I don't use
debian so I'm not sure of exact the package name.

HTH,

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turtle Graphics are incompatible with gmpy

2009-08-05 Thread casevh
On Aug 5, 12:19 am, Steven D'Aprano  wrote:
> On Wed, 5 Aug 2009 03:49 pm Mensanator wrote:
>
> > In 3.1, tracing is now a screen attribute, not a turtle atribute.
> > I have no idea why
>
> >   tooter = turtle.Turtle()
> >   tooter.tracer(False)
>
> > doesn't give me an error (I thought silent errors were a bad thing).
>
> What makes it an error? Do you consider the following an error?
>
> >>> class Test:
>
> ...     pass
> ...
>
> >>> t = Test()
> >>> t.tracer = 5
>
> Perhaps you mean, it's an API change you didn't know about, and you wish to
> protest that Turtle Graphics made an incompatible API change without
> telling you?
>
> > Naturally, having tracing on caused my program to crash.
>
> It seg faulted or raised an exception?
>
> [...]
>
> > Unfortunately, that calculation of nhops is illegal if diffsq is
> > an .mpf (gmpy floating point). Otherwise, you get
>
> How does diffsq get to be a mpf? Are gmpy floats supposed to be supported?
>
> --
> Steven

The root cause of the error is that GMP, the underlying library for
gmpy, provides only the basic floating point operations. gmpy
implements a very limited exponentiation function. Python's math
library will convert an mpf to a float automatically, so I think the
revised calculation for nhops should work with any numerical
type that supports __float__.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Turtle Graphics are incompatible with gmpy

2009-08-05 Thread casevh
On Aug 4, 10:49 pm, Mensanator  wrote:
> I hadn't noticed this before, but the overhaul of Turtle Graphics
> dating back to 2.6 has been broken as far as gmpy is concerned.
> The reason is that code in turtle.py was changed from
>
> v2.5
>         if self._drawing:
>             if self._tracing:
>                 dx = float(x1 - x0)
>                 dy = float(y1 - y0)
>                 distance = hypot(dx, dy)
>                 nhops = int(distance)
>
> to
>
> v3.1
>        if self._speed and screen._tracing == 1:
>             diff = (end-start)
>             diffsq = (diff[0]*screen.xscale)**2 + (diff[1]
> *screen.yscale)**2
>             nhops = 1+int((diffsq**0.5)/(3*(1.1**self._speed)
> *self._speed))
>
> Unfortunately, that calculation of nhops is illegal if diffsq is
> an .mpf (gmpy
> floating point). Otherwise, you get
>
> Traceback (most recent call last):
>   File "K:\user_python26\turtle\turtle_xy_Py3.py", line 95, in
> 
>     tooter.goto(the_coord)
>   File "C:\Python31\lib\turtle.py", line 1771, in goto
>     self._goto(Vec2D(*x))
>   File "C:\Python31\lib\turtle.py", line 3165, in _goto
>     nhops = 1+int((diffsq**0.5)/(3*(1.1**self._speed)*self._speed))
> ValueError: mpq.pow fractional exponent, inexact-root
>
Warning: Completely untested fix ahead!

What happens if you change turtle.py to use

nhops = 1 + int(math.sqrt(diffsq)/(3*math.pow(1.1, self._speed)
*self._speed))
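To sanity-check that expression, here it is factored into a standalone helper (my own refactoring, assuming speed > 0; note that math.sqrt accepts any type providing __float__, which is the point of the fix):

```python
import math
from fractions import Fraction

def nhops(diffsq, speed):
    # the corrected turtle.py expression, pulled out for testing
    return 1 + int(math.sqrt(diffsq) / (3 * math.pow(1.1, speed) * speed))

# math.sqrt converts its argument via __float__, so exact rational
# types (and, by the same mechanism, gmpy's mpf) work too
rational_hops = nhops(Fraction(100), 5)
```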

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: pack an integer into a string

2009-07-24 Thread casevh
On Jul 24, 3:28 pm, superpollo  wrote:
> is there a pythonic and synthetic way (maybe some standard module) to
> "pack" an integer (maybe a *VERY* big one) into a string? like this:
>
>   >>> number = 252509952
>   >>> hex(number)
>   '0xf0cff00'
>   >>>
>
> so i would like a string like '\xf0\xcf\xf0\x00'
>
> i wrote some code to do it, so ugly i am ashamed to post :-(
>
> i tried xdrlib, but does not quite do what i mean...
>
> bye

This works, I think.

>>> def hex_string(n):
...     a = hex(n)
...     if a.startswith('0x'): a = a[2:]
...     if a.endswith('L'): a = a[:-1]
...     if len(a) % 2: a = '0' + a  # pad to a whole number of bytes
...     ch_list = []
...     for i in range(0, len(a), 2):
...         ch_list.append(chr(int(a[i:i+2], 16)))
...     return ''.join(ch_list)
...
>>> hex_string(252509952)
'\x0f\x0c\xff\x00'
>>> hex_string(283691163101781L)
'\x01\x02\x03\xff\x00\xaaU'
>>>
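For completeness, the standard library can also do the byte conversion directly; here is a sketch (`int_to_bytes` is my own name, and the padding handles odd-length hex strings):

```python
import binascii

def int_to_bytes(n):
    # hex() gives e.g. '0xf0cff00'; drop the '0x' prefix (and the
    # trailing 'L' a Python 2 long would carry), then left-pad to an
    # even number of digits so each byte is exactly two hex characters
    h = hex(n).rstrip('L')[2:]
    if len(h) % 2:
        h = '0' + h
    return binascii.unhexlify(h)
```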
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: integer square roots

2009-07-23 Thread casevh
On Jul 23, 9:43 pm, timro21  wrote:
> Hi Tim, sure, I'm looking at the perfect cuboid problem.
>
> I've just spent a couple of very frustrating hours.  Given that I'm in
> my 50's but have the brain of a retarded 3-year-old, can someone
> please explain what I have to do to download gmpy to use with
> ActivePython 2.6 on a Windows system?  I downloaded (or so I thought)
> GMPY 1.04 from http://code.google.com/p/gmpy/ but all the files look
> way too small...and then how do I import it into a Python
> program...I'm sure I could do this if it wasn't for my extreme
> stupidity...

You should just download the Windows installer from the Downloads tab.
I would recommend version 1.10 since it should be faster and fixes
some bugs in 1.04. The direct link is

http://gmpy.googlecode.com/files/gmpy-1.10-beta.win32-py2.6.exe

Just run the installer and it should automatically detect your
existing Python installation. The executable is small. ;-)

casevh

p.s. If you're using a 64-bit Windows system, let me know and I'll
step you through the manual process.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: integer square roots

2009-07-23 Thread casevh
> comp.lang.python is a good place to get answers about Python.  It's
> probably not such a good source of answers about computational number
> theory.  Also, Python is more about productivity than speed, so
> answers involving Python may not be the most efficient possible
> answers.  One obvious point though is that GMP/gmpy is pretty simple
> to use from Python and will be several times faster than Python's
> built in longs.  Also, as Mensanator pointed out, GMP has built-in
> functions that will help you with precise checks (I'd forgotten or not
> known about them).  I still think you'd get a speedup from first
> filtering out obviously non-square candidates before resorting to
> multi-precision arithmetic.  Some other fairly simple optimization may
> be possible too.
>
gmpy.is_square() is quite fast. On an older 32-bit Linux box, it can
test approximately 400,000 100-digits numbers per second. The time
includes conversion from a string. If the numbers are already Python
longs, it can check 800,000 per second. Checking a billion is not
unreasonable.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: GMPY 1.10 alpha with support for Python 3

2009-07-19 Thread casevh
GMPY 1.10 beta is now available. This version fixes an issue where
very large objects would be cached for reuse instead of being freed.

Source code and Windows installers may be found at
http://code.google.com/p/gmpy/downloads/list

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ANN: GMPY 1.10 alpha with support for Python 3

2009-07-08 Thread casevh
On Jul 8, 5:03 pm, Mensanator  wrote:
> On Jul 7, 12:47 am, Mensanator  wrote:
>
>
>
> > On Jul 7, 12:16 am, casevh  wrote:
>
> > > I discovered a serious bug with comparisons and have posted alpha2
> > > which fixes that bug and adds Unicode support for Python 2.x
>
> > > casevh
>
> > Damn! I was just congratulating myself for pulling off
> > a hat trick (there had been no point in downloading
> > 3.x without gmpy so I have been putting it off):
>
> > - installing Python 3.1
> > - installing gmpy 1.10
> > - converting my Collatz Function library to 3.1 syntax
>
> > And it all worked smoothly, just had to add parentheses
> > to my print statements, change xrange to range and all
> > my / to // (the library is exclusively integer). I had
> > gmpy running in my library on 3.1 in about 10 minutes.
>
> > So I'll have to re-do the gmpy install. Shouldn't be
> > any big deal.
>
> > I started doing some real world tests. Generally, things
> > look good (nothing crashes, timing looks not bad) but
> > I'm getting some funny results on one of my tests, so
> > I'll report back when I have more information.
>
> As I said, I was getting funny results from one of my tests.
>
> It seemed to work ok, but timing varied from 2 to 88 seconds,
> which seemed odd. The other tests were generally consistent
> for a given environment (cpu speed, OS, Python version, gmpy
> version).
>
> At some point I watched the memory usage profile from Windows.
> On the same machine I have both Python 2.6 and 3.1 installed,
> with the appropriate gmpy 1.10 version loaded for each.
>
> In Python 2.6, it looks like this:
>
> memory usage profile Python 2.6 gmpy 1.1 Vista
>
>       /--\  /---\ .RESTART SHELL
>      /               .\/         \
> /                .            \
>    .                 .
>    .                 .
>    start of RUN      start of RUN
>
> The first "start of RUN" is the first time the test is run
> (from IDLE). That change in usage represents about 700 MB
> (I'm testing really BIG numbers, up to 500 million digits).
>
> The memory remains allocated after the program terminates
> (the flat plateau). When I run a second time, we see the
> allocation dip, then climb back up to the plateau, so it
> appears that the allocation never climbs above 1.1 GB.
>
> Finally, doing a RESTART SHELL seems to completely free
> the allocated memory. I assume this is normal behaviour.
>
> With Python 3.1, it get this profile:
>
> memory usage profile Python 3.1 gmpy 1.1 Vista
>
>                          /-
>                         / |
>       //   \--\ .RESTART SHELL
>      /                 .           \
> /                  .            \___
>    .                   .
>    .                   .
>    start of RUN        start of RUN
>
> Here, at the start of the second RUN, it appears that new
> memory is allocated BEFORE the previous is cleared. Is this
> a quirk in the way 3.1 behaves? Here, the peak usage climbs
> to 1.8 GB which I think causes VM thrashing accounting for
> the increased execution times.
>

Hmmm. It looks like memory is not being released properly. I don't see
that behavior under Linux. The running time is a very consistent 1.35
seconds. I'm traveling at the moment so it will be at least a week
before I can test under Windows.

Thanks for the report. I'll try to see if I can figure out what is
going on.

casevh
> My guess is that gmpy is provoking, but not causing this
> behaviour.
>
> The actual test is:
>
> t0 = time.time()
> n=10
> for k in range(1,n):
>   for i in range(1,n-2):
>     print((str(cf.gmpy.numdigits(cf.Type12MH(k,i))).zfill(n)),end=' ')
>   print()
> print()
> t1 = time.time()
>
> The library function Type12MH is:
>
> def Type12MH(k,i):
>     """Find ith, kth Generation Type [1,2] Mersenne Hailstone using
> the closed form equation
>
>     Type12MH(k,i)
>     k: generation
>     i: member of generation
>     returns Hailstone (a)
>     """
>     ONE = gmpy.mpz(1)
>     TWO = gmpy.mpz(2)
>     SIX = gmpy.mpz(6)
>     NIN = gmpy.mpz(9)
>
>     if (k<1) or (i<1): return 0
>
>     i = gmpy.mpz(i)
>     k = gmpy.mpz(k)
>
>     # a = (i-1)*9**(k-1) + (9**(k-1) - 1)//2 + 1
>     # return 2**(6*a - 1) - 1
>
>     a = (i-ONE)*NIN**(k-ONE) + (NIN**(k-ONE) - ONE)//TWO + ONE
>     return TWO**(SIX*a - ONE) - ONE
>
> ##  Sample runs
> ##
> ##  Test 1 - create numbers up to 500 million digits
> ##
>

Re: ANN: GMPY 1.10 alpha with support for Python 3

2009-07-06 Thread casevh
I discovered a serious bug with comparisons and have posted alpha2
which fixes that bug and adds Unicode support for Python 2.x

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


ANN: GMPY 1.10 alpha with support for Python 3

2009-07-06 Thread casevh
An alpha release of GMPY that supports Python 2 and 3 is available.
GMPY is a wrapper for the GMP multiple-precision arithmetic
library. The MPIR multiple-precision arithmetic library is also
supported. GMPY is available for download from
http://code.google.com/p/gmpy/

Support for Python 3 required many changes to the logic used to
convert between different numerical types. The result type of some
combinations has changed. For example, 'mpz' + 'float' now returns
an 'mpf' instead of a 'float'. See the file "changes.txt" for more
information.

In addition to support for Python 3, there are several other
changes and bug fixes:

- Bug fixes in mpz.binary() and mpq.binary().

- Unicode strings are accepted as input on Python 2.
  (Known bug: works for mpz, fails for mpq and mpf)

- The overhead for calling GMPY routines has been reduced.
  If one operand is a small integer, it is not converted to mpz.

- 'mpf' and 'mpq' now support % and divmod.

Comments on provided binaries

The 32-bit Windows installers were compiled using MPIR 1.2.1 and
will automatically recognize the CPU type and use code optimized for
that CPU.

Please test with your applications and report any issues found!

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to locate the bit in bits string?

2009-04-29 Thread casevh
On Apr 28, 5:39 pm, Li Wang  wrote:
> 2009/4/29 Tim Chase :
>
> >> I want to concatenate two bits string together: say we have '1001' and
> >> '111' which are represented in integer. I want to concatenate them to
> >> '100' (also in integer form), my method is:
> >> ('1001' << 3) | 111
> >> which is very time consuming.
>
> > You omit some key details -- namely how do you know that "1001" is 4 bits
> > and not "1001" (8-bits)?  If it's a string (as your current code shows),
> > you can determine the length.  However, if they are actually ints, your code
> > should work fine & be O(1).
>
> Actually, what I have is a list of integer numbers [3,55,99,44], and
> by using Huffman coding or fixed length coding, I will know how the
> bits-length for each number. When I try to concatenate them (say
> 10,000 items in the list) all together, the speed is going down
> quickly (because of the shifting operations of python long).
>
>
>
> > This can be abstracted if you need:
>
> >  def combine_bits(int_a, int_b, bit_len_b):
> >    return (int_a << bit_len_b) | int_b
>
> >  a = 0x09
> >  b = 0x07
> >  print combine_bits(a, b, 3)
>
> > However, if you're using gargantuan ints (as discussed before), it's a lot
> > messier.  You'd have to clarify the storage structure (a byte string?  a
> > python long?)
>
> I am using a single python long to store all the items in the list
> (say, 10,000 items), so the work does become messier...

Using GMPY (http://code.google.com/p/gmpy/) may offer a performance
improvement. When shifting multi-thousand bit numbers, GMPY is several
times faster than Python longs. GMPY also support functions to scan
for 0 or 1 bits.
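
For reference, the bit-scanning helper mentioned above can be approximated
in pure Python. The name scan1 mirrors gmpy's function, but this version is
just an illustrative stand-in, not the library API.

```python
def scan1(x, start=0):
    """Index of the lowest 1 bit at or above position start, or -1 if none."""
    x >>= start
    if x == 0:
        return -1
    return start + (x & -x).bit_length() - 1  # x & -x isolates the low 1 bit

print(scan1(0b100100))      # 2
print(scan1(0b100100, 3))   # 5
```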

>
> > -tkc
>
> > PS:  You may want to CC the mailing list so that others get a crack at
> > answering your questions...I've been adding it back in, but you've been
> > replying just to me.
>
> Sorry, this is the first time I am using mail-listand always
> forgot "reply to all"
>
> Thank you very much:D
>
>
>
> --
> Li
> --
> Time is all we have
> and you may find one day
> you have less than you think

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and GMP.

2009-04-21 Thread casevh
On Apr 21, 5:47 am, Paul Rubin <http://phr...@nospam.invalid> wrote:
> casevh  writes:
> > > Could you test pow(a,b,c) where a,b,c are each 300 decimal digits?
>
> > $ py25 -m timeit -s  "a=long('23'*150);b=long('47'*150);m=long
> > ('79'*150)" "c=pow(a,b,m)"
> > 10 loops, best of 3: 52.7 msec per loop
> > $ py31 -m timeit -s 
> > 100 loops, best of 3: 8.85 msec per loop
> > $ py25 -m timeit -s  ..."import gmpy ...
> > 1000 loops, best of 3: 1.26 msec per loop
>
> Wow, thanks.  gmpy = 40x faster than py2.5.  Ouch.

Remember this is on a 64-bit platform using the latest versions of
MPIR or GMP. The ratio would be less for older versions or 32-bit
versions.

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and GMP.

2009-04-21 Thread casevh
On Apr 21, 12:11 am, Paul Rubin <http://phr...@nospam.invalid> wrote:
> casevh  writes:
> > Python 3.1 is significantly faster than Python 2.x on 64-bit
> > platforms. The following times are for multiplication with 2, 30 and
> > 300 decimal digits.
>
> Could you test pow(a,b,c) where a,b,c are each 300 decimal digits?
> This is an important operation in cryptography, that GMP is carefully
> optimized for.  Thanks.

$ py25 -m timeit -s  "a=long('23'*150);b=long('47'*150);m=long
('79'*150)" "c=pow(a,b,m)"
10 loops, best of 3: 52.7 msec per loop
$ py31 -m timeit -s  "a=int('23'*150);b=int('47'*150);m=int('79'*150)"
"c=pow(a,b,m)"
100 loops, best of 3: 8.85 msec per loop
$ py25 -m timeit -s  "import gmpy;a=gmpy.mpz('23'*150);b=gmpy.mpz
('47'*150);m=gmpy.mpz('79'*150)" "c=pow(a,b,m)"
1000 loops, best of 3: 1.26 msec per loop

casevh

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python and GMP.

2009-04-20 Thread casevh
On Apr 20, 11:39 am, Benjamin Peterson  wrote:
>   gmail.com> writes:
>
>
>
> > There are reasons why Python not used the GMP library for implementing
> > its long type?
>
> Basically, GMP only becomes faster when the numbers are huge.

Python 3.1 is significantly faster than Python 2.x on 64-bit
platforms. The following times are for multiplication with 2, 30 and
300 decimal digits.

Testing 2 digits. This primarily measures the overhead for call GMP
via an extension module.

$ py25 -m timeit -s  "a=long('23'*1);b=long('47'*1)" "c=a*b"
1000 loops, best of 3: 0.15 usec per loop
$ py25 -m timeit -s  "a=int('23'*1);b=int('47'*1)" "c=a*b"
1000 loops, best of 3: 0.0735 usec per loop
$ py31 -m timeit -s  "a=int('23'*1);b=int('47'*1)" "c=a*b"
1000 loops, best of 3: 0.074 usec per loop
$ py25 -m timeit -s  "import gmpy;a=gmpy.mpz('23'*1);b=gmpy.mpz
('47'*1)" "c=a*b"
1000 loops, best of 3: 0.121 usec per loop


Testing 30 digits. No significant increase in time with GMP.

$ py25 -m timeit -s  "a=long('23'*15);b=long('47'*15)" "c=a*b"
100 loops, best of 3: 0.343 usec per loop
$ py31 -m timeit -s  "a=int('23'*15);b=int('47'*15)" "c=a*b"
1000 loops, best of 3: 0.142 usec per loop
$ py25 -m timeit -s  "import gmpy;a=gmpy.mpz('23'*15);b=gmpy.mpz
('47'*15)" "c=a*b"
1000 loops, best of 3: 0.125 usec per loop

Testing 300 digits.

$ py25 -m timeit -s  "a=long('23'*150);b=long('47'*150)" "c=a*b"
10 loops, best of 3: 12.5 usec per loop
$ py31 -m timeit -s  "a=int('23'*150);b=int('47'*150)" "c=a*b"
10 loops, best of 3: 3.13 usec per loop
$ py25 -m timeit -s  "import gmpy;a=gmpy.mpz('23'*150);b=gmpy.mpz
('47'*150)" "c=a*b"
100 loops, best of 3: 0.673 usec per loop

Platform is 64-bit Linux with Core2 Duo processor. gmpy was linked
against MPIR 1.1. MPIR is an LGPLv2 fork of GMP and is significantly
faster than GMP 4.2.x.  The newly released GMP 4.3.0 is about 10%
faster yet.

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: Number of bits/sizeof int

2009-02-02 Thread casevh
On Jan 30, 11:03 pm, Jon Clements  wrote:
> Hi Group,
>
> This has a certain amount of irony (as this is what I'm pretty much
> after):-
> From http://docs.python.org/dev/3.0/whatsnew/3.1.html:
> "The int() type gained a bit_length method that returns the number of
> bits necessary to represent its argument in binary:"
>
> Any tips on how to get this in 2.5.2 as that's the production version
> I'm stuck with.
>
> Cheers,
>
> Jon.

If performance does become an issue, the newly released gmpy 1.04
includes bit_length().
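
If pulling in gmpy is overkill, a pure-Python fallback is straightforward.
This sketch is slow for huge integers but adequate for occasional use, and it
works on Python 2.5:

```python
def bit_length(n):
    """Number of bits needed to represent abs(n); pure-Python fallback."""
    n = abs(n)
    bits = 0
    while n:
        n >>= 1     # shift off one bit per iteration
        bits += 1
    return bits

print(bit_length(255))   # 8
print(bit_length(256))   # 9
print(bit_length(0))     # 0
```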

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: nth root

2009-02-01 Thread casevh
On Feb 1, 10:02 pm, Mensanator  wrote:
> On Feb 1, 8:20 pm, casevh  wrote:
>
>
>
> > On Feb 1, 1:04 pm, Mensanator  wrote:
>
> > > On Feb 1, 2:27 am, casevh  wrote:
>
> > > > On Jan 31, 9:36 pm, "Tim Roberts"  wrote:
>
> > > > > Actually, all I'm interested in is whether the 100 digit numbers have 
> > > > > an exact integral root, or not. At the moment, because of accuracy 
> > > > > concerns, I'm doing something like
>
> > > > > for root in powersp:
> > > > > nroot = round(bignum**(1.0/root))
> > > > > if bignum==long(nroot)**root:
> > > > > .
> > > > > which is probably very inefficient, but I can't see anything 
> > > > > better.
>
> > > > > Tim
>
> > > > Take a look at gmpy and the is_power function. I think it will do
> > > > exactly what you want.
>
> > > And the root function will give you the root AND tell you whether
> > > it was an integral root:
>
> > > >>> gmpy.root(a,13)
>
> > > (mpz(3221), 0)
>
> > > In this case, it wasn't.
>
> > I think the original poster wants to know if a large number has an
> > exact integral root for any exponent. is_power will give you an answer
> > to that question but won't tell you what the root or exponent is. Once
> > you know that the number is a perfect power, you can root to find the
> > root.
>
> But how do you know what exponent to use?

That's the gotcha. :) You still need to test all prime exponents until
you find the correct one. But it is much faster to use is_power to
check whether or not a number has a representation as a**b and then try
all the possible exponents than to just try all the possible exponents
on all the numbers.
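
The search described above can be sketched in pure Python. gmpy's is_power
does the screening much faster in C; this is only an outline of the logic,
with exact_root using bisection so the check stays exact for arbitrarily
large integers.

```python
def exact_root(n, k):
    """Return the integer k-th root of n if n is a perfect k-th power, else None."""
    lo, hi = 1, 1 << (n.bit_length() // k + 1)   # bracket the root
    while lo <= hi:
        mid = (lo + hi) // 2
        p = mid ** k
        if p == n:
            return mid
        elif p < n:
            lo = mid + 1
        else:
            hi = mid - 1
    return None

def find_power(n):
    """Find (root, exp) with root**exp == n and exp >= 2, or None."""
    for k in range(2, n.bit_length() + 1):       # exponent cannot exceed log2(n)
        r = exact_root(n, k)
        if r is not None:
            return r, k
    return None

print(find_power(3221 ** 13))   # (3221, 13)
```

Note that this returns the smallest exponent that works, e.g. 1024 comes back
as (32, 2) rather than (2, 10).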

>
>
>
> > > >http://code.google.com/p/gmpy/
>
> > > > casevh
>
>

--
http://mail.python.org/mailman/listinfo/python-list


Re: nth root

2009-02-01 Thread casevh
On Feb 1, 1:04 pm, Mensanator  wrote:
> On Feb 1, 2:27 am, casevh  wrote:
>
> > On Jan 31, 9:36 pm, "Tim Roberts"  wrote:
>
> > > Actually, all I'm interested in is whether the 100 digit numbers have an 
> > > exact integral root, or not.  At the moment, because of accuracy 
> > > concerns, I'm doing something like
>
> > >                     for root in powersp:
> > >                             nroot = round(bignum**(1.0/root))
> > >                             if bignum==long(nroot)**root:
> > >                                                              .
> > > which is probably very inefficient, but I can't see anything better.
>
> > > Tim
>
> > Take a look at gmpy and the is_power function. I think it will do
> > exactly what you want.
>
> And the root function will give you the root AND tell you whether
> it was an integral root:
>
> >>> gmpy.root(a,13)
>
> (mpz(3221), 0)
>
> In this case, it wasn't.
>
I think the original poster wants to know if a large number has an
exact integral root for any exponent. is_power will give you an answer
to that question but won't tell you what the root or exponent is. Once
you know that the number is a perfect power, you can use root() to find
the root.

>
> >http://code.google.com/p/gmpy/
>
> > casevh
>
>

--
http://mail.python.org/mailman/listinfo/python-list


Re: nth root

2009-02-01 Thread casevh
On Jan 31, 9:36 pm, "Tim Roberts"  wrote:
> Actually, all I'm interested in is whether the 100 digit numbers have an 
> exact integral root, or not.  At the moment, because of accuracy concerns, 
> I'm doing something like
>
>                     for root in powersp:
>                             nroot = round(bignum**(1.0/root))
>                             if bignum==long(nroot)**root:
>                                                              .
> which is probably very inefficient, but I can't see anything better.
>
> Tim

Take a look at gmpy and the is_power function. I think it will do
exactly what you want.

http://code.google.com/p/gmpy/

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: gmpy and counting None

2008-10-13 Thread casevh
On Oct 13, 12:43 pm, <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I just stumbled upon the following issue (I am running Debian):
>
> $ python
> Python 2.5.2 (r252:60911, Sep 29 2008, 21:15:13)
> [GCC 4.3.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> [2, None].count(None)
> 1
> >>> from gmpy import mpz
> >>> [mpz(2), None].count(None)
>
> Traceback (most recent call last):
>   File "", line 1, in 
> TypeError: coercion to gmpy.mpz type failed
>
>
>
> Is this a bug in gmpy?
>
> If yes is there any way to issue the bug at http://code.google.com/p/gmpy/
> without creating a gmail account?

I've added it for you.

>
> Thanks
>
> Martin

Thanks for reporting the issue.

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: numeric emulation and __pos__

2008-07-08 Thread casevh
On Jul 7, 4:12 pm, Ethan Furman <[EMAIL PROTECTED]> wrote:
> Greetings, List!
>
> I'm working on a numeric data type for measured values that will keep
> track of and limit results to the number of significant digits
> originally defined for the values in question.
>
> I am doing this primarily because I enjoy playing with numbers, and also
> to get some experience with unit testing.
>
> At this point I have the __init__ portion finished, and am starting on
> the various operator functions.
>
> Questions for the group:
>
> 1) Any reason to support the less common operators?
>         i.e. <<, >>, &, ^, |
Assuming you are working with decimal numbers, the &, ^, | may not be
of any use for your application. The shift operators may be useful but
there are two possible ways to define their behavior:

1) Multiplication or division by powers of 2. This mimics the common
use of those operators as used with binary numbers.

2) Multiplication or division by powers of 10.
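
For the decimal interpretation (option 2), Decimal.scaleb already provides
the primitive; a measured-value class could delegate its shift operators to
it. The helper name shift10 here is hypothetical, chosen just for this
sketch.

```python
from decimal import Decimal

def shift10(d, n):
    """Shift a Decimal by n decimal places (multiply by 10**n)."""
    return d.scaleb(n)

print(shift10(Decimal("1.23456"), 2))    # 123.456
print(shift10(Decimal("1.23456"), -2))   # 0.0123456
```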

>
> 2) What, exactly, does .__pos__() do?  An example would help, too.

The unary + operator is frequently added for symmetry with -, however
it is also used to force an existing number to match a new precision
setting. For example, using the decimal module:

>>> from decimal import *
>>> t=Decimal('1.23456')
>>> t
Decimal("1.23456")
>>> getcontext().prec = 5
>>> +t
Decimal("1.2346")

>
> Thanks for the feedback.
> --
> Ethan

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: conflict between multiple installs of python (linux)

2008-07-05 Thread casevh
On Jul 5, 11:09 am, david <[EMAIL PROTECTED]> wrote:
> You learn something new every day:
>
> On my ubuntu, update-manager is supposed to use the python2.5
> installed on /usr/bin. Well, I had subsequently installed a whole
> bunch of stuff in /usr/local (/usr/local/bin/python and /usr/local/lib/
> python2.5 etc), which I have happily been using for development for a
> year. I had thought that the two pythons were completely independent.
>
> Well, I was wrong. When /usr/bin/python2.5 runs, *by default*, it
> adds /usr/local/lib/python2.5 to its sys path - and apparently there
> are things in /usr/local which are inconsistent with those at /usr
> (not suprising).
>
> I have fixed the problem - but I had to modify the actual update-
> manager .py file itself. At the beginning, I set the sys.path in
> python *explicitly* to not include the /usr/local stuff.
>
> But this is clearly a kludge. My question: how do I keep the Ubuntu
> python stuff at /usr/bin/python2.5 from adding /usr/local/lib/
> python2.5 to the import search path in a clean and global way? I
> really want both pythons completely isolated from one another!
>
> Thankyou.

Python's path is built by site.py. In the file /usr/lib/python2.5/
site.py, look for the line "prefixes.insert(0, '/usr/local')" and
comment it out. That should do it.

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: Generating list of possible configurations

2008-07-03 Thread casevh
On Jul 3, 6:54 am, Mensanator <[EMAIL PROTECTED]> wrote:
>
> Well, it will be great at some point in the future
> when Python 2.6/3.0 have actually been released and
> third party extensions such as gmpy have caught up.
>
> Until then, such solutions are worthless, i.e., of
> no value.

gmpy is supported on Python 2.6. A new version and Windows binaries
were released shortly after 2.6b1 was released.

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprecision arithmetic library question.

2008-06-19 Thread casevh
> No, I do not know that. Define desperate.
> Does Python support the extended Euclidean algorithm
> and other number theory functions?

No.

> How fast does Python multiply?

Python uses the Karatsuba algorithm, which is O(n^1.585). Division is
still O(n^2).
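
The Karatsuba recursion is easy to sketch. CPython's longs use a tuned C
version of the same idea, so this pure-Python copy is for illustration only
and handles non-negative integers.

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers (illustrative)."""
    if x < 10 or y < 10:
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)      # split x into high/low halves
    yh, yl = y >> n, y & ((1 << n) - 1)
    a = karatsuba(xh, yh)
    b = karatsuba(xl, yl)
    c = karatsuba(xh + xl, yh + yl) - a - b  # three recursive multiplies, not four
    return (a << (2 * n)) + (c << n) + b

print(karatsuba(12345, 6789) == 12345 * 6789)   # True
```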

> Not that the latter is particularly important,
> as C is built for speed.
>
> I've been fooling around. Ran dir(gmpy), and
> it does not show the full complement of GMP
> library functions, such as the various division
> functions. e.g. mpz_tdiv_qr.

gmpy implements the Python numeric model using GMP and exposes some of
the high-level functions. Are you looking for low-level wrapper that
exposes all the GMP library functions?

> --
> Michael Press

--
http://mail.python.org/mailman/listinfo/python-list


Re: Hrounding error

2008-06-18 Thread casevh
>
> So it seems then that python might not be very good for doing
> precision floating point work, because there is a good chance its
> floating points will be off by a (very small) amount?  Or is there a
> way to get around this and be guaranteed an accurate answer?- Hide quoted 
> text -
>

It's not a Python problem. That is just the behavior for floating-
point arithmetic.
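
To make the point concrete, here is the same subtraction in binary floating
point and in Decimal, which represents these decimal inputs exactly:

```python
from decimal import Decimal

print(234 - 23234.2345)                        # close to, but not exactly, -23000.2345
print(Decimal("234") - Decimal("23234.2345"))  # -23000.2345, exactly
```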

casevh

--
http://mail.python.org/mailman/listinfo/python-list


Re: Hrounding error

2008-06-17 Thread casevh
On Jun 17, 10:47 pm, [EMAIL PROTECTED] wrote:
> Hi,
> I am new to python.  I was messing around in the interperator checking
> out the floating point handling and I believe I may have found a
> rounding bug:
>
> >>> 234 - 23234.2345
>
> -23000.2344
>
> This is not correct by my calculations.
>
> I am using python 2.5.2 in ubuntu 8.04.  I am wondering if this is a
> known bug, or if I am just not understanding some feature of python.
> Thanks for the help!
>
> -Cooper Quintinhttp://www.bitsamurai.net

It is a side-effect of binary floating-point.

http://www.python.org/doc/faq/general/#why-are-floating-point-calculations-so-inaccurate

casevh

--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprecision arithmetic library question.

2008-06-17 Thread casevh
On Jun 17, 5:13 pm, Michael Press <[EMAIL PROTECTED]> wrote:
> I already compiled and installed the GNU multiprecision library
> on Mac OS X, and link to it in C programs.
> How do I link to the library from Python?
> I do not want to download and install redundant material.
> (I am new to Python)
>
> --
> Michael Press

GMPY provides the interface between Python and GMP. It is available at

http://code.google.com/p/gmpy/downloads/list

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: 32 bit or 64 bit?

2008-06-15 Thread casevh
> Not yet: I was kind of set back when I saw their homepage was last
> updated 2002. But I'll give it a try. You think it's the best thing
> there is?
>
> Thanks,
> Ram.

gmpy has moved to Google.

http://code.google.com/p/gmpy/

gmpy only supports the basic floating point operations so it may not be
sufficient for your needs.

A couple alternatives:

1) mpmath is a pure-Python, arbitrary precision floating-point
package. It will be faster than Decimal but slower than gmpy.
http://code.google.com/p/mpmath/

2) Sage is an open-source mathematics software package. It uses Python
as its glue/scripting language. It includes support for MPFR, a
multiple-precision floating point library based on GMP.   www.sagemath.org

casevh
--
http://mail.python.org/mailman/listinfo/python-list


Re: Integer dicision

2008-04-10 Thread casevh
On Apr 10, 9:28 pm, bdsatish <[EMAIL PROTECTED]> wrote:
> How does (a/b) work when both 'a' and 'b' are pure integers ?

Python defines the quotient and remainder from integer division so
that a = qb + r and 0 <= r < abs(b). C/C++ lets the remainder be
negative.

>>> divmod(-9,2)
(-5, 1)
>>> divmod(9,2)
(4, 1)

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: float / rounding question

2008-02-25 Thread casevh
On Feb 25, 2:44 am, [EMAIL PROTECTED] wrote:
> Hi I'm very much a beginner with Python.
> I want to write a function to convert celcius to fahrenheit like this
> one:
>
> def celciusToFahrenheit(tc):
> tf = (9/5)*tc+32
> return tf
>
> I want the answer correct to one decimal place, so
> celciusToFahrenheit(12) would return 53.6.
>
> Of course the function above returns 53.601.
>
> How do I format it correctly?

That is the normal behavior for binary (radix-2) numbers. Just like it
is impossible write 1/3 exactly as a decimal (radix-10) number, 536/10
cannot be written exactly as a binary number. If you really need
decimal numbers, use the Decimal class.

See http://docs.python.org/tut/node16.html.
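
If all that is needed is one decimal place in the output, rounding at
display time is usually enough; Decimal matters only when the intermediate
arithmetic itself must be exact. A quick sketch (keeping the thread's
spelling of the function name):

```python
def celcius_to_fahrenheit(tc):
    return 9.0 / 5.0 * tc + 32

# Round only when formatting for display:
print("%.1f" % celcius_to_fahrenheit(12))   # 53.6
```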

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How about adding rational fraction to Python?

2008-02-24 Thread casevh
On Feb 24, 7:56 pm, Mensanator <[EMAIL PROTECTED]> wrote:

> But that doesn't mean they become less manageable than
> other unlimited precision usages. Did you see my example
> of the polynomial finder using Newton's Forward Differences
> Method? The denominator's certainly don't settle out, neither
> do they become unmanageable. And that's general mathematics.
>

Since you are expecting to work with unlimited (or at least, very
high) precision, then the behavior of rationals is not a surprise. But
a naive user may be surprised when the running time for a calculation
varies greatly based on the values of the numbers. In contrast, the
running time for standard binary floating point operations is fairly
constant.

>
> If the point was as SDA suggested, where things like 16/16
> are possible, I see that point. As gmpy demonstrates thouigh,
> such concerns are moot as that doesn't happen. There's no
> reason to suppose a Python native rational type would be
> implemented stupidly, is there?

In the current version of GMP, the running time for the calculation of
the greatest common divisor is O(n^2). If you include reduction to
lowest terms, the running time for a rational add is now O(n^2)
instead of O(n) for a high-precision floating point addition or O(1)
for a standard floating point addition. If you need an exact rational
answer, then the change in running time is fine. But you can't just
use rationals and expect a constant running time.
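
The effect is easy to see with Python's fractions module (added in 2.6):
every addition reduces to lowest terms, and the denominators grow as exact
results accumulate.

```python
from fractions import Fraction

total = Fraction(0)
for k in range(1, 11):
    total += Fraction(1, k)   # each += does a gcd to reduce to lowest terms
print(total)                  # 7381/2520

# The float version loses exactness but runs in constant time per add:
print(sum(1.0 / k for k in range(1, 11)))
```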

There are trade-offs between IEEE-754 binary, Decimal, and Rational
arithmetic. They all have their appropriate problem domains.

And sometimes you just need unlimited precision, radix-6, fixed-point
arithmetic.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How about adding rational fraction to Python?

2008-02-24 Thread casevh
>
> Out of curiosity, of what use is denominator limits?
>
> The problems where I've had to use rationals have
> never afforded me such luxury, so I don't see what
> your point is

In Donald Knuth's The Art of Computer Programming, he described
floating slash arithmetic where the total number of bits by the
numerator and denominator was bounded. IIRC, a use case was matrix
inversion.

casevh
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Looking for an interpreter that does not request internet access

2007-06-28 Thread casevh
On Jun 25, 6:07 pm, James Alan Farrell <[EMAIL PROTECTED]> wrote:
> Hello,
> I recently installed new anti-virus software and was surprised the
> next time I brought up IDLE, that it was accessing the internet.
>
> I dislike software accessing the internet without telling me about it,
> especially because of my slow dial up connection (there is no option
> where I live), but also because I feel it unsafe.
>
> Can anyone recommend an interpreter that does not access the internet
> when it starts (or when it is running, unless I specifically write a
> program that causes it to do so, so as a browser)?
>
> James Alan Farrell

It is a false alarm. The IP address 127.0.0.1 is a reserved address
that means "this computer". It is more commonly known as "localhost".
A server application can bind to 127.0.0.1 and it can be accessed by a
client application running on the same computer. This is what IDLE is
doing. It is not accessing the Internet. It is only using the IP
protocol to communicate between different parts of the application.
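
A quick demonstration that binding to 127.0.0.1 never touches the network:
the socket below lives entirely on the loopback interface (port 0 asks the
OS for any free port).

```python
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))    # loopback only; port 0 = pick a free port
server.listen(1)
host, port = server.getsockname()
print(host)                      # 127.0.0.1
server.close()
```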

The anti-virus application should be configured to allow use of
127.0.0.1. But coming from a corporate IT world, I'm not surprised
that it is not reasonably configured.

casevh

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How do I tell the difference between the end of a text file, and an empty line in a text file?

2007-05-16 Thread casevh
On May 16, 2:47 pm, walterbyrd <[EMAIL PROTECTED]> wrote:
> Python's lack of an EOF character is giving me a hard time.
>
> I've tried:
>
> -
> s = f.readline()
> while s:
> .
> .
> s = f.readline()
> 
>
> and
>
> ---
> s = f.readline()
> while s != ''
> .
> .
> s = f.readline()
> ---
>
> In both cases, the loop ends as soon it encounters an empty line in
> the file, i.e.
>
> xx
> xxx
> xxx
>   < - - -  loop end here
> xx
> xx
> x
>    <  loop should end here

Assuming f is initialized as in your example, try

-
for s in f:
    print s
-

casevh

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.5, problems reading large ( > 4Gbyes) files on win2k

2007-03-03 Thread casevh
On Mar 2, 10:09 am, [EMAIL PROTECTED] wrote:
> Folks,
>
> I've a Python 2.5 app running on 32 bit Win 2k SP4 (NTFS volume).
> Reading a file of 13 GBytes, one line at a time.  It appears that,
> once the read line passes the 4 GByte boundary, I am getting
> occasional random line concatenations.  Input file is confirmed good
> via UltraEdit.  Groovy version of the same app runs fine.
>
> Any ideas?
>
> Cheers

It appears to be a bug. I am able to reproduce the problem with the
code fragment below. It creates a 12GB file with line lengths ranging
from 0 to 126 bytes, and repeating that set of lines 150 times. It
fails on W2K SP4 with both Python 2.4 and 2.5. It works correctly on
Linux (Ubuntu 6.10).

I have reported on SourceForge as bug 1672853.

# Read and write a huge file.
import sys

def write_file(end = 126, loops = 150, fname='bigfile'):
    fh = open(fname, 'w')
    buff = 'A' * end
    for k in range(loops):
        for t in range(end+1):
            fh.write(buff[:t]+'\n')
    fh.close()

def read_file(end = 126, fname = 'bigfile'):
    fh = open(fname, 'r')
    offset = 0
    loops = 0
    for rec in fh:
        if offset != len(rec.strip()):
            print 'Error at loop:', loops
            print 'Expected record length:', offset
            print 'Actual record length:', len(rec.strip())
            sys.exit(0)
        offset += 1
        if offset > end:
            offset = 0
            loops += 1
            if not loops % 1: print loops
    fh.close()

if __name__ == '__main__':
    write_file(loops=150)
    read_file()

casevh

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: finding out the precision of floats

2007-02-27 Thread casevh
> Why should you? It only gives you 28 significant digits, while 64-bit
> float (as in 32-bit version of Python) gives you 53 significant
> digits. Also note, that on x86 FPU uses 80-bit registers. An then
> Decimal executes over 1500 times slower.

64-bit floating point only gives you 53 binary bits, not 53 digits.
That's approximately 16 decimal digits. And anyway, Decimal can be
configured to support more than 28 digits.

>
> >>> from timeit import Timer
> >>> t1 = Timer('(1.0/3.0)*3.0 - 1.0')
> >>> t2 = Timer('(Decimal(1)/Decimal(3))*Decimal(3)-Decimal(1)',
>
> 'from decimal import Decimal')>>> t2.timeit()/t1.timeit()
>
> 1621.7838879255889
>
> If that's not enough to forget about Decimal, take a look at this:
>
> >>> (Decimal(1)/Decimal(3))*Decimal(3) == Decimal(1)
> False
> >>> ((1.0/3.0)*3.0) == 1.0
>
> True

Try ((15.0/11.0)*11.0) == 15.0. Decimal is actually returning the
correct result. Your example was just lucky.

Decimal was intended to solve a different class of problems. It
provides predictable arithmetic using "decimal" floating point.
IEEE-754 provides predictable arithmetic using "binary" floating
point.
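A minimal sketch of that distinction (the 0.1 + 0.2 lines are my own
addition for illustration, not from the thread):

```python
from decimal import Decimal, getcontext

getcontext().prec = 28  # the default; raise it for more significant digits

# Binary doubles happen to round-trip 1/3, but not 15/11:
lucky = (1.0 / 3.0) * 3.0 == 1.0
unlucky = (15.0 / 11.0) * 11.0 == 15.0

# Decimal is exact for decimal fractions; binary doubles are not:
exact_decimal = Decimal('0.1') + Decimal('0.2') == Decimal('0.3')
exact_binary = 0.1 + 0.2 == 0.3

# Neither base can represent 1/3 exactly:
third = (Decimal(1) / Decimal(3)) * Decimal(3) == Decimal(1)
```

Each system is predictable for fractions its own base can represent;
neither is "more accurate" in general.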

casevh



Re: Python / Socket speed

2007-02-26 Thread casevh
On Feb 26, 7:05 am, "Paul Boddie" <[EMAIL PROTECTED]> wrote:
> On 26 Feb, 15:54, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
>
> > Seems like sockets are about 6 times faster on OpenSUSE than on
> > Windows XP in Python.
>
> >http://pyfanatic.blogspot.com/2007/02/socket-performance.html
>
> > Is this related to Python or the OS?
> From the output:
> > TCP window size: 8.00 KByte (default)
> > TCP window size: 49.4 KByte (default)
>
> I don't pretend to be an expert on TCP/IP, but might the window size
> have something to do with it?
>
> Paul

Tuning the TCP window size will make a big difference with Windows XP
performance. I'm more curious about the original script. Either the
test was against the loopback address, or he has a very impressive
network to sustain 1.8Gbit/s.
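For what it's worth, the per-socket buffer sizes (which bound the usable
TCP window) can be raised from Python; the 64 KB figure below is an
arbitrary example, and it must be set before connect() to influence
window-scaling negotiation:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request larger kernel buffers for this socket; the OS may round the
# value up or clamp it to a system-wide maximum.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
rcvbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
```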

casevh



Re: Rational numbers

2007-02-23 Thread casevh
> Looks pretty much the same as mx.Number
>
> Does this warning come up from gmp? Do I have to compile it with
> different flags?
> Or do both wrappers use the same code?
>
> I would like to encourage the python community (i.e. the guys
> maintaining the python docs) to point out a recommended rational
> library. In one of the rational PEPs, it is stated that there are
> multiple robust rational wrappers for python. But it is not stated,
> which ones were thoroughly reviewed.
>
That specific error message is only a warning that occurs the first
time a comparison operation is performed. I've successfully used gmpy
and just ignored the one-time only error. You can use the "warnings"
module to filter out that error. If I remember correctly, it was fixed
in version 1.01. I know it is fixed in the SVN version.

Which specific version of gmpy are you using? (What is
gmpy.version()?)

I just compiled Alex's most recent SVN version on Linux without any
problems. I'll make Windows binaries next.

casevh







Re: Rational numbers

2007-02-23 Thread casevh
On Feb 23, 3:27 pm, [EMAIL PROTECTED] wrote:
> On Feb 23, 12:00 pm, [EMAIL PROTECTED] wrote:
>...
>
> > > > + gmpy is looking pretty unmaintained (dead) to me (newest update of
> > > > cvs 10 months ago).
>
> > I worked with Alex Martelli (gmpy's maintainer) to fix a bug found by
> > mensanator. With Alex's permission, I released it as gmpy 1.04a. Alex
> > has not updated cvs with the fix.
>
> Heh, I see why one might get that impression -- I'm in the process of
> moving gmpy from sourceforge (where I find it harder and harder, and
> ever more problematic, to work) to code.google.com's new hosting
> facility -- gmpy 1.02 prerelease (more updated than that "1.04a", and
> particularly including your fix, Case) is already available at
> http://code.google.com/p/gmpy/ but I have made no official
> announcement yet (partly because what's available is yet limited:
> sources, and binaries for Python 2.3, 2.4 and 2.5 but only for MacOSX
> 10.4 on Macs with intel processors)... building binaries for Windows
> (not having a Windows machine or development system) or Universal
> binaries for the Mac (due to problems building Universal versions of
> the underlying GMP in its latest, 4.2 incarnation... I'm running out
> of PPC-based Macs, and have none left with MacOSX 10.3...) is much
> more problematic for me.
>
> To call this (Google Code) release 1.02, with a "1.04" (?) out from
> another source, may be confusing, but I'd rather not "force" the
> number upwards
>
> I do have one new co-owner on the Google Code "version" of gmpy (Chip
> Turner, once author of a similar GMP wrapper for perl, now a Python
> convert and a colleague of mine) but I suspect that won't make the
> building of Windows (and Universal Mac) binaries much easier.  If
> anybody who has easy access to Microsoft's MSVC++.NET (and is willing
> to try building GMP 4.2 with/for it), or a PPC Mac with XCode
> installed (possibly with MacOSX 10.3...), wants to volunteer to build
> "the missing binaries" for the platforms that the current owners of
> gmpy can't easily support, we could complete, test and release the
> definitive 1.02, and move on with the development (I could get
> enthusiastic about this again, if I could develop just the sources,
> and the binaries for the one architecture I really use -- Macs w/intel
> -- rather than strive each time with binaries for architectures that
> are quite a pain for me...!-).
>
> Anybody who's interested in helping out is welcome to mail me and/or
> use the "wiki" and "issues" entry of the Google Code gmpy site...
>
> Thanks,
>
> Alex

I can keep building gmpy for Windows. I actually use MINGW since
getting GMP compiled under MSVC is "challenging". I should be able to
build new binaries for Windows this weekend. And I would be happy to
point everyone to a real release.

casevh



Re: Rational numbers

2007-02-23 Thread casevh

> Am I hallucinating? Didn't I see at least some version
> of gmpy for Python 2.5 on SourceForge awhile back?
> I distinctly remember thinking that I don't have to
> direct people to your site, but SourceForge is not
> showing anything beyond vesion 1.01 for Python 2.4.

Alex released versions 1.02 and 1.03 as CVS updates only. I think he
may have made an announcement that 1.02 included alpha support for
Python 2.5. 1.04a is 1.03 with one additional fix. I don't think there
has been an official release, though.

casevh



Re: Rational numbers

2007-02-23 Thread casevh
On Feb 23, 10:34 am, "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> wrote:
> On Feb 23, 10:39 am, Martin Manns <[EMAIL PROTECTED]> wrote:
>
> > On Fri, 23 Feb 2007 09:52:06 -0600
>
> > Larry Bates <[EMAIL PROTECTED]> wrote:
> > > I quick search of Google turned up:
>
> > >http://books.google.com/books?id=1Shx_VXS6ioC&pg=PA625&lpg=PA625&dq=p...
> > >http://calcrpnpy.sourceforge.net/clnum.html
> > >http://gmpy.sourceforge.net/
>
> > Sorry that I did not point these out initially.
>
> > + clnum seems to be slower and for speed may be compiled to wrap gmp so
> > that it is just an additional layer between python and gmp .
>
> > + gmpy is looking pretty unmaintained (dead) to me (newest update of
> > cvs 10 months ago).

I worked with Alex Martelli (gmpy's maintainer) to fix a bug found by
mensanator. With Alex's permission, I released it as gmpy 1.04a. Alex
has not updated cvs with the fix.

gmpy 1.04a compiles cleanly with the latest releases of Python and
GMP, so I consider it stable.

>
> Actually, gmpy is being maintained even if SourceForge isn't up to
> date.
>
> I got my gmpy 1.04a for Python 2.5 Windows binary from
>
> <http://home.comcast.net/~casevh>
>
> I haven't used the rationals all that much, but been very
> happy with them when I have.
>

casevh



Re: code optimization (calc PI) / New Algorithme for PI

2007-01-04 Thread casevh

> Yes, this "gmpy" sounds good for calc things like that.
> But not available on my machine.
> ImportError: No module named gmpy

What type of machine?

The home page for gmpy is http://sourceforge.net/projects/gmpy/

I have Windows versions available at http://home.comcast.net/~casevh/

casevh



Re: code optimization (calc PI) / Full Code of PI calc in Python and C.

2007-01-04 Thread casevh

> Here is my attempt to convert the C code, not written with speed in mind
> and I was too lazy too time it.  :-)
>
> from itertools import izip
>
> def pi():
>     result = list()
>     d = 0
>     e = 0
>     f = [2000] * 2801
>     for c in xrange(2800, 0, -14):
>         for b, g in izip(xrange(c, 1, -1), xrange((c * 2) - 1, 0, -2)):
>             d += f[b] * 10000
>             h, f[b] = divmod(d, g)
>             d = h * b
>         h, i = divmod(d, 10000)
>         result.append('%.4d' % (e + h))
>         e = i
>     return ''.join(result)
>

Your version: .36 seconds

It's ugly, but unrolling the divmod into two separate statements is
faster for small operands. The following takes .28 seconds:

from time import clock
from itertools import izip

def pi():
    result = list()
    d = 0
    e = 0
    f = [2000] * 2801
    for c in xrange(2800, 0, -14):
        for b, g in izip(xrange(c, 1, -1), xrange((c * 2) - 1, 0, -2)):
            d += f[b] * 10000
            f[b] = d % g
            h = d // g
            d = h * b
        h, i = divmod(d, 10000)
        result.append('%.4d' % (e + h))
        e = i
    return ''.join(result)

start_time = clock()
pi = pi()
print pi
print "Total time elapsed:", round(clock() - start_time, 2), "s"
print len(pi)
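The divmod-versus-split difference can be checked in isolation; a rough
micro-benchmark sketch (operand values chosen arbitrarily):

```python
import timeit

setup = 'd, g = 123456789, 5297'
# divmod() pays a global-name lookup plus a function call per operation;
# the split form uses two plain binary operators instead.
t_divmod = timeit.timeit('h, r = divmod(d, g)', setup=setup, number=100000)
t_split = timeit.timeit('r = d % g; h = d // g', setup=setup, number=100000)
```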

casevh



Re: code optimization (calc PI) / Full Code of PI calc in Python and C.

2007-01-03 Thread casevh

Michael M. wrote:
> Ok, here is the code. It is a translation of the following code, found
> on the internet.
>
> * The C is very fast, Python not.
> * Target: Do optimization, that Python runs nearly like C.

There is an error in the translated code. It returns 1600 digits
instead of 800 digits.

>
> counter=c
> while 0<=counter+4000:
>     f.append(2000)  #   f.append( int(a/5) )
>     counter=counter-1
>     # b=b+1

This creates a list f with length 9601. It should have a length of
2801.

I found an explanation of the original C program at
http://rooster.stanford.edu/~ben/maths/pi/code.html

Using the variable names from the above explanation and editing the
code posted by bearophile to match the explanation, I have the
following:

from time import clock

def compute_pi():
    pi = []
    a = 10000
    i = k = b = d = c = 0
    k = 2800
    r = [2000] * 2801
    while k:
        d = 0
        i = k
        while True:
            d += r[i] * a
            b = 2 * i - 1
            r[i] = d % b
            d //= b
            i -= 1
            if i == 0: break
            d *= i
        k -= 14
        pi.append("%04d" % int(c + d // a))
        c = d % a
    return "".join(pi)

start_time = clock()
pi = compute_pi()
print pi
print "Total time elapsed:", round(clock() - start_time, 2), "s"
print len(pi)

Your original version takes 2.8 seconds on my computer. The above
version takes .36 seconds. I tried a couple of optimizations but
couldn't make any more improvements.

casevh



Re: A stupid question

2006-12-28 Thread casevh

[EMAIL PROTECTED] wrote:
> It says exactly:
>
> The specified module could not be found.
> LoadLibrary(pythondll) failed

> > >I bought a floor model computer, and it came with all sorts of
> > >ridiculousness on it that I promptly uninstalled. However, now whenever
> > >I start windows I get a message saying "LoadLibrary (pythondll
> > >) failed." It also says this when I try to download into a bittorrent
> > >client, and it keeps it from downloading. What does this mean, and how
> > >can I make it go away?

Python is a programming language used to develop (some of) the
bittorrent clients. You were a little exuberant in uninstalling
applications and removed the Python files that are required for the
bittorrent client.

You could try downloading and installing Python, but I don't know which
version you'll need.

You could try uninstalling the bittorrent client and reinstalling a
client with all its required dependencies.

HTH,

casevh



Re: Python speed on Solaris 10

2006-11-14 Thread casevh

Chris Miles wrote:
> I have found that the sunfreeware.com build of Python 2.4.3 for Solaris
> 10 is faster than one I can build myself, on the same system.
> sunfreeware.com doesn't bother showing the options they used to
> configure and build the software, so does anyone know what the optimal
> build options are for Solaris 10 (x86)?
>
> Here are some pybench/pystone results, and I include the same comparison
> of Python2.4.3 running on CentOS 4.3 on the same hardware (which is what
> prompted me to find a faster Python build in the first place).
>
> Python 2.4.3:
>System   pybench  Pystone (pystones/s)
> 
>Sol10 my build   3836.00 ms   37313.4
>Sol10 sunfreeware3235.00 ms   43859.6
>CentOS   3569.00 ms   44247.8
>
>
> My build:
> Python 2.4.3 (#1, Oct 15 2006, 16:00:33)
> [GCC 3.4.3 (csl-sol210-3_4-branch+sol_rpath)] on sunos5
>
> sunfreeware.com build:
> Python 2.4.3 (#1, Jul 31 2006, 05:14:51)
> [GCC 3.4.6] on sunos5
>
> My build on CentOS 4.3:
> Python 2.4.3 (#1, Jul 19 2006, 17:52:43)
> [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2
>
>
> is the difference purely gcc minor version?

I noticed that speed difference, too. After running ./configure, I
edited the resulting Makefile to pass "-march=athlon-mp" to the C
compiler. ./configure seems to ignore CFLAGS, so I just edited the
Makefile to suit my environment. You should set the values appropriate
for your system, of course.

I've also compiled Python using the Sun Studio compiler. Some tests
were faster, some tests were slower.

casevh

> -- 
> http://chrismiles.info/



Re: Building C extensions

2006-11-06 Thread casevh

Paolo Pantaleo wrote:
> Well I'm just courious: if I want to buid a C extension, I shoul use
> the same compiler that has been used to build python (right?). Since
> python has been built using Visual C, how can I build an extension if
> I don't have Visual Studio?
>
> PAolo

Use mingw32. It should work fine for most extensions.

For example, see
http://groups.google.com/group/comp.lang.python/msg/8e2260fe4d4b7de9
and the followup messages.

casevh



Re: "best" rational number library for Python?

2006-10-31 Thread casevh
Oops, on the double-post.

[EMAIL PROTECTED] wrote:
> > A guy at work asked for functionality commonly found with rational numbers,
> > so I said I'd find and install something.  I figured gmpy would be suitable,
> > alas I'm having trouble successfully building the underlying GMP 4.2.1
> > library on a PC running Solaris 10 (won't compile with the default --host,
> > fails "make check" if I go the no-assembly route).  Before I invest a bunch
> > of time into this, am I barking up the wrong tree?
> >
> I've successfully compiled GMP 4.2.1 on Solaris 10 x86 using both the
> GCC and Sun Studio compilers on AMD 32-bit platform.
>
> I just compiled GMP 4.2.1 on a P4 using
>
> $ CFLAGS="" CC=gcc ./configure
> $ gmake; gmake check
>

You must use "gmake"; "make" fails during "make check".

> and all tests passed.
> 
> casevh



Re: "best" rational number library for Python?

2006-10-31 Thread casevh
> A guy at work asked for functionality commonly found with rational numbers,
> so I said I'd find and install something.  I figured gmpy would be suitable,
> alas I'm having trouble successfully building the underlying GMP 4.2.1
> library on a PC running Solaris 10 (won't compile with the default --host,
> fails "make check" if I go the no-assembly route).  Before I invest a bunch
> of time into this, am I barking up the wrong tree?
>
I've successfully compiled GMP 4.2.1 on Solaris 10 x86 using both the
GCC and Sun Studio compilers on AMD 32-bit platform.

I just compiled GMP 4.2.1 on a P4 using

$ CFLAGS="" CC=gcc ./configure
$ gmake; gmake check

and all tests passed.

casevh



Re: Fwd: Re: How to upgrade python from 2.4.3 to 2.4.4 ?

2006-10-21 Thread casevh
>
> The link for pexports-0.42h.zip is broken so I can't
> test it on an extension.
>

pexports is only needed for Python 2.3. It is not required for 2.4 or
2.5.

casevh



Re: Fwd: Re: How to upgrade python from 2.4.3 to 2.4.4 ?

2006-10-21 Thread casevh

Anthony Baxter wrote:
> On 21 Oct 2006 21:39:51 -0700, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> > mingw32 is supported and can compile many extensions. See the following
> > post:
> >
> > http://groups.google.com/group/comp.lang.python/msg/8e2260fe4d4b7de9
> >
> > If you meant something else with your comment, please explain.
>
> That's for Python 2.4. I'm not sure it works the same way with Python
> 2.5. If someone has information to the contrary, it would be excellent
> to get confirmation and the steps that are necessary...

I've used mingw32 to build gmpy for Python 2.5 without any problems. It
looks like mingw32 works just fine with Python 2.5 (assuming the
extension will compile with mingw32).

casevh


