On Thu, 2008-06-05 at 08:24 +0200, Michael Abshoff wrote:
I am not what I would call familiar with numpy internals, so is there a
magic thing I can do to make numpy aware that ctypes exists?
I don't think any magic is needed. As long as importing ctypes works
from the python you used to
Michael Abshoff wrote:
Jonathan Wright wrote:
...etc. We needed this for generating the .so library file name for
ctypes
Can you elaborate on this a little?
The "we" referred to another project (not numpy) where we needed to
distinguish 32-bit from 64-bit platforms. We have code for picking
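As a sketch of that kind of platform check (not the project's actual code, which isn't shown in the thread), the pointer size reported by the struct module is a common way to tell a 32-bit Python build from a 64-bit one:

```python
import struct
import ctypes

# Size of a C pointer in this Python build: 4 bytes on 32-bit, 8 on 64-bit.
pointer_bits = struct.calcsize("P") * 8
print(pointer_bits)  # 32 or 64

# ctypes reports the same thing via the size of c_void_p.
assert ctypes.sizeof(ctypes.c_void_p) * 8 == pointer_bits
```

Either value could then be folded into a platform-specific .so file name, as described above.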
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit processor.
In this setting, I'm working with large arrays of binary data. E.g., I want to
make calls like:
Z = numpy.inner(a,b)
where a and b are fairly large -- e.g. 2 rows by 100 columns.
However, when such a call
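For reference, a minimal runnable version of the call under discussion (with small placeholder shapes, since the sizes in the original message are truncated here):

```python
import numpy as np

# Two arrays with the same number of columns; numpy.inner contracts
# over the last axis, so the result has shape (a_rows, b_rows).
a = np.arange(200, dtype=np.float64).reshape(2, 100)
b = np.ones((2, 100))

Z = np.inner(a, b)  # shape (2, 2); each entry is a row-by-row dot product
print(Z.shape)      # (2, 2)
```

The memory problem in the thread appears when the row counts are large, since the intermediate and result buffers must all fit in the process's address space.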
On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit processor.
In this setting, I'm working with large arrays of binary data. E.g., I want to
make calls like:
Z = numpy.inner(a,b)
where a and b are
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
So, I have three questions about this:
1) Why is mmap being called in the first place? I've written to Travis
Oliphant, and he's explained that numpy.inner does NOT directly do any
memory
mapping and shouldn't call mmap. Instead, it should just
I don't know much about OSX, but I do know that many malloc()
implementations take advantage of a modern operating system's virtual
memory when allocating large blocks of memory. For small blocks,
malloc uses memory arenas, but if you ask for a large block malloc()
will request a whole bunch
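What malloc() does for large blocks can be imitated directly from Python with an anonymous mapping; this is a sketch of the mechanism being described, not of numpy's own allocator:

```python
import mmap

# Large malloc() requests are typically satisfied by an anonymous
# mmap() of fresh pages, rather than by the small-block arenas.
n = 16 * 1024 * 1024      # 16 MB: large enough that malloc would mmap it
buf = mmap.mmap(-1, n)    # -1 = anonymous mapping, not backed by a file

buf[:5] = b"hello"        # pages are demand-zeroed and writable
print(buf[:5])
buf.close()
```

This is why mmap shows up in a trace even though numpy.inner never calls it directly: the allocation of the big result buffer goes through malloc, which in turn maps pages.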
On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris [EMAIL PROTECTED]
wrote:
On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit processor.
In this setting, I'm working with large arrays of binary data. E.g., I
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
Anne, thanks so much for your help. I'm still a little confused. If your
scenario about how the memory allocation is working is right, does that mean
that even if I put a lot of ram on the machine, e.g. 16GB, I still can't
request it in blocks larger
On Wed, Jun 4, 2008 at 7:41 PM, Dan Yamins [EMAIL PROTECTED] wrote:
On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit
processor.
On Wed, 2008-06-04 at 21:38 -0400, Dan Yamins wrote:
Anne, thanks so much for your help. I'm still a little confused. If
your scenario about how the memory allocation is working is right,
does that mean that even if I put a lot of ram on the machine, e.g.
16GB, I still can't request it in
Dan Yamins wrote:
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit
processor. In this setting, I'm working with large arrays of binary
data. E.g., I want to
make calls like:
Z = numpy.inner(a,b)
where a and b are fairly large -- e.g. 2 rows by 100
On Wed, Jun 4, 2008 at 10:07 PM, David Cournapeau
[EMAIL PROTECTED] wrote:
On Wed, 2008-06-04 at 21:38 -0400, Dan Yamins wrote:
Anne, thanks so much for your help. I'm still a little confused. If
your scenario about how the memory allocation is working is right,
does that mean that
On Wed, Jun 4, 2008 at 7:41 PM, Dan Yamins [EMAIL PROTECTED] wrote:
On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
On Wed, Jun 4, 2008 at 6:42 PM, Dan Yamins [EMAIL PROTECTED] wrote:
I'm using python 2.5.2 on OS X, with 8 GB of ram, and a 64-bit
processor.
Hey Dan. Now that you mention you are using OS X, I'm fairly
confident that the problem is that you are using a 32-bit version of
Python (i.e. you are not running in full 64-bit mode and so the 4GB
limit applies).
The most common Python on OS X is 32-bit python. I think a few people
in
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
Try
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit python/numpy.
Chuck
Check, the answer is 4, as you got for the 32-bit. What would the
On Wed, Jun 4, 2008 at 9:50 PM, Dan Yamins [EMAIL PROTECTED] wrote:
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit python/numpy.
Check, the answer is 4, as you got for the 32-bit.
Hi Dan,
On Wed, Jun 4, 2008 at 8:50 PM, Dan Yamins [EMAIL PROTECTED] wrote:
Try
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit python/numpy.
Chuck
Check, the answer is 4, as
On Wed, Jun 4, 2008 at 9:07 PM, Anne Archibald [EMAIL PROTECTED]
wrote:
2008/6/4 Dan Yamins [EMAIL PROTECTED]:
Try
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size in bytes of the integer needed to hold a pointer. The
output above is for 32 bit
What Charles pointed out was that while the inner product is very big,
it seems to fit into memory on his 32-bit Linux machine; is it
possible that OSX is preventing your python process from using even
the meager 2-3 GB that a 32-bit process ought to get?
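One way to check whether the OS is capping the process's address space (a Unix-only diagnostic sketch, not something from the thread itself):

```python
import resource

# RLIMIT_AS is the per-process cap on total virtual address space.
# On a 32-bit build the usable space tops out near 2-4 GB regardless
# of how much physical RAM is installed.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("soft:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
print("hard:", "unlimited" if hard == resource.RLIM_INFINITY else hard)
```

If the soft limit is well below the expected 2-3 GB, the shell or system configuration, rather than Python, is the culprit.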
Yes -- I think this is what is
Dan Yamins wrote:
Hello folks,
I did port Sage and hence Python with numpy and scipy to 64 bit OSX and
below are some sample build instructions for just building python and
numpy in 64 bit mode.
Try
In [3]: numpy.dtype(numpy.uintp).itemsize
Out[3]: 4
which is the size
Dan Yamins wrote:
On Wed, Jun 4, 2008 at 9:06 PM, Charles R Harris
[EMAIL PROTECTED] wrote:
Are both python and your version of OS X fully 64 bits?
I'm not sure.
From python:
python2.5 -c 'import platform;print platform.architecture()'
('32bit', 'ELF')
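A couple of complementary checks (a sketch; note that platform.architecture() can be misleading on OS X universal binaries, and sys.maxsize requires Python 2.6 or later -- on 2.5 one would use sys.maxint instead):

```python
import struct
import sys

# The pointer size and sys.maxsize are more direct indicators of a
# 64-bit build than platform.architecture().
is_64bit = struct.calcsize("P") == 8
print(is_64bit, sys.maxsize > 2**32)  # the two checks should agree
```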