Re: Python 2.7 range Function provokes a Memory Error

2023-03-06 Thread Chris Angelico
On Tue, 7 Mar 2023 at 16:53, Stephen Tucker  wrote:
>
> Hi again,
>
> I tried xrange, but I got an error telling me that my integer was too big
> for a C long.
>
> Clearly, xrange in Py2 is not capable of dealing with Python (that is,
> possibly very long) integers.

That's because Py2 has two different integer types, int and long.
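For instance, a minimal Python 2.7 sketch (assuming a 64-bit CPython build,
so the exact cutoff differs per platform) of the int/long split and the
C-long bound that xrange() enforces:

# Python 2.7 illustration: small values are machine-word ints, larger
# ones are arbitrary-precision longs, and xrange() only accepts bounds
# that fit in a C long.
big = 10 ** 19

print type(1)      # <type 'int'>  - machine-word integer
print type(big)    # <type 'long'> - arbitrary-precision integer

try:
    xrange(big)
except OverflowError as exc:
    print exc      # Python int too large to convert to C long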

> I am raising this because,
>
> (a) IF xrange in Py3 is a simple "port" from Py2, then it won't handle
> Python integers either.
>
> AND
>
> (b) IF xrange in Py3 is intended to be equivalent to range (which, even in
> Py2, does handle Python integers)
>
> THEN
>
> It could be argued that xrange in Py3 needs some attention from the
> developer(s).


Why don't you actually try Python 3 instead of making assumptions
based on the state of Python from more than a decade ago?

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7 range Function provokes a Memory Error

2023-03-06 Thread Stephen Tucker
Hi again,

I tried xrange, but I got an error telling me that my integer was too big
for a C long.

Clearly, xrange in Py2 is not capable of dealing with Python (that is,
possibly very long) integers.

I am raising this because,

(a) IF xrange in Py3 is a simple "port" from Py2, then it won't handle
Python integers either.

AND

(b) IF xrange in Py3 is intended to be equivalent to range (which, even in
Py2, does handle Python integers)

THEN

It could be argued that xrange in Py3 needs some attention from the
developer(s).

Stephen Tucker.


On Thu, Mar 2, 2023 at 6:24 PM Jon Ribbens via Python-list <
python-list@python.org> wrote:

> On 2023-03-02, Stephen Tucker  wrote:
> > The range function in Python 2.7 (and yes, I know that it is now
> superseded), provokes a Memory Error when asked to deliver a very long
> > list of values.
> >
> > I assume that this is because the function produces a list which it then
> > iterates through.
> >
> > 1. Does the  range  function in Python 3.x behave the same way?
>
> No, in Python 3 it is an iterator which produces the next number in the
> sequence each time.
>
> > 2. Is there any equivalent way that behaves more like a  for loop (that
> is,
> > without producing a list)?
>
> Yes, 'xrange' in Python 2 behaves like 'range' in Python 3.
> --
> https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7 range Function provokes a Memory Error

2023-03-02 Thread Jon Ribbens via Python-list
On 2023-03-02, Stephen Tucker  wrote:
> The range function in Python 2.7 (and yes, I know that it is now
> superseded), provokes a Memory Error when asked to deliver a very long
> list of values.
>
> I assume that this is because the function produces a list which it then
> iterates through.
>
> 1. Does the  range  function in Python 3.x behave the same way?

No, in Python 3 it is an iterator which produces the next number in the
sequence each time.

> 2. Is there any equivalent way that behaves more like a  for loop (that is,
> without producing a list)?

Yes, 'xrange' in Python 2 behaves like 'range' in Python 3.
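A quick Python 2.7 sketch of the difference (illustrative only):

n = 10 ** 9
lazy = xrange(n)     # stores only start/stop/step; constant memory
print lazy[12345]    # indexing works without expanding anything
# eager = range(n)   # would try to build a billion-element list and
#                    # typically fails with MemoryError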
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7 range Function provokes a Memory Error

2023-03-02 Thread Chris Angelico
On Thu, 2 Mar 2023 at 22:27, Stephen Tucker  wrote:
>
> Hi,
>
> The range function in Python 2.7 (and yes, I know that it is now
> superseded), provokes a Memory Error when asked to deliver a very long
> list of values.
>
> I assume that this is because the function produces a list which it then
> iterates through.
>
> 1. Does the  range  function in Python 3.x behave the same way?

No, but list(range(x)) might, for the same reason. In Py2, range
returns a list, which means it needs a gigantic collection of integer
objects. In Py3, a range object just defines its start/stop/step, but
if you call list() on it, you get the same sort of problem.
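A small Python 3 sketch of that distinction (assuming a 64-bit build for
the very large length used here):

import sys

r = range(10 ** 12)
print(sys.getsizeof(r))      # ~48 bytes, independent of the length
print(len(r), r[-1])         # length and indexing are computed lazily
print(10 ** 12 - 1 in r)     # membership is arithmetic, not a search
# list(r) would try to materialise 10**12 ints -> MemoryError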

> 2. Is there any equivalent way that behaves more like a  for loop (that is,
> without producing a list)?
>
> To get round the problem I have written my own software that is used in a
> for  loop.

xrange is an iterator in Py2, so that's the easiest way to handle it.
Obviously migrating to Py3 would be the best way, but in the meantime,
xrange will probably do what you need.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python 2.7 range Function provokes a Memory Error

2023-03-02 Thread 2QdxY4RzWzUUiLuE
On 2023-03-02 at 11:25:49 +,
Stephen Tucker  wrote:

> The range function in Python 2.7 (and yes, I know that it is now
> superseded), provokes a Memory Error when asked to deliver a very long
> list of values.
> 
> I assume that this is because the function produces a list which it then
> iterates through.
> 
> 1. Does the  range  function in Python 3.x behave the same way?

No.

> 2. Is there any equivalent way that behaves more like a  for loop (that is,
> without producing a list)?

Try xrange.
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 2.7 range Function provokes a Memory Error

2023-03-02 Thread Stephen Tucker
Hi,

The range function in Python 2.7 (and yes, I know that it is now
superseded), provokes a Memory Error when asked to deliver a very long
list of values.

I assume that this is because the function produces a list which it then
iterates through.

1. Does the  range  function in Python 3.x behave the same way?

2. Is there any equivalent way that behaves more like a  for loop (that is,
without producing a list)?

To get round the problem I have written my own software that is used in a
for  loop.
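(For reference, a minimal sketch of the kind of generator that can stand in
for range here; this is an assumed illustration, not the actual code referred
to above.)

def long_range(start, stop, step=1):
    # Lazy replacement for range() that copes with arbitrarily large
    # Python 2 long integers.
    value = start
    while value < stop:
        yield value
        value += step

for i in long_range(0, 10 ** 30, 10 ** 27):
    print i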

Stephen Tucker.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-05-02 Thread Ned Deily


Change by Ned Deily :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed
versions: +Python 3.10, Python 3.8




[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-05-02 Thread Łukasz Langa

Łukasz Langa  added the comment:


New changeset b29d0a5a7811418c0a1082ca188fd4850185e290 by Ned Deily in branch 
'3.8':
[3.8] bpo-41100: Support macOS 11 Big Sur and Apple Silicon Macs (#25806)
https://github.com/python/cpython/commit/b29d0a5a7811418c0a1082ca188fd4850185e290


--
nosy: +lukasz.langa




[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-05-02 Thread Ned Deily


Change by Ned Deily :


--
pull_requests: +24493
pull_request: https://github.com/python/cpython/pull/25806




[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-04-08 Thread Max Bélanger

Change by Max Bélanger :


--
nosy: +maxbelanger
nosy_count: 4.0 -> 5.0
pull_requests: +24011
pull_request: https://github.com/python/cpython/pull/25274




[issue42688] ctypes memory error on Apple Silicon with external libffi

2021-01-31 Thread Ned Deily


Ned Deily  added the comment:


New changeset 7e729978fa08a360cbf936dc215ba7dd25a06a08 by Miss Islington (bot) 
in branch '3.9':
bpo-42688: Fix ffi alloc/free when using external libffi on macos (GH-23868) 
(GH-23888)
https://github.com/python/cpython/commit/7e729978fa08a360cbf936dc215ba7dd25a06a08


--




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-22 Thread miss-islington


Change by miss-islington :


--
pull_requests: +22747
pull_request: https://github.com/python/cpython/pull/23888




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-22 Thread miss-islington


miss-islington  added the comment:


New changeset b3c77ecbbe0ad3e3cc6dbd885792203e9e6ec858 by erykoff in branch 
'master':
bpo-42688: Fix ffi alloc/free when using external libffi on macos (GH-23868)
https://github.com/python/cpython/commit/b3c77ecbbe0ad3e3cc6dbd885792203e9e6ec858


--
nosy: +miss-islington




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-20 Thread Eli Rykoff


Eli Rykoff  added the comment:

Thanks for your quick feedback!  I signed the CLA after submitting the PR, but 
I think it takes a bit of time to percolate through the system.

As for the "why", until 3.9.1 conda-forge had been successfully using an 
external ffi (with 3.9.0 + osx-arm64 patches) and then suddenly it broke.  For 
the time being conda-forge is using the system ffi, which does have other 
advantages as well, having all the latest patches.  However, I was curious as 
to _why_ it broke, and that led me to discover this bug, and it seemed 
straightforward to fix.  When/if conda-forge will switch back to external ffi 
is TBD, but if that decision is made this issue (at least) will be taken care 
of.

--




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-20 Thread Ronald Oussoren


Ronald Oussoren  added the comment:

Could you  please sign the CLA? (See the PR for details on that)

The PR looks sane.

Out of interest: why do you use an external version of libffi? AFAIK the system 
copy of libffi contains some magic sauce to work nicer with signed binaries.

--




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-20 Thread Ronald Oussoren


Change by Ronald Oussoren :


--
components: +macOS
nosy: +ned.deily, ronaldoussoren




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-19 Thread Eli Rykoff


Change by Eli Rykoff :


--
keywords: +patch
pull_requests: +22730
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/23868




[issue42688] ctypes memory error on Apple Silicon with external libffi

2020-12-19 Thread Eli Rykoff


New submission from Eli Rykoff :

Building Python 3.9.1 on Apple Silicon compiled against an external 
(non-os-provided) libffi makes the following code return a MemoryError:

Test:

import ctypes

@ctypes.CFUNCTYPE(None, ctypes.c_int, ctypes.c_char_p)
def error_handler(fif, message):
pass

I have tracked this down to the following code in malloc_closure.c:

#if USING_APPLE_OS_LIBFFI && HAVE_FFI_CLOSURE_ALLOC
if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) {
ffi_closure_free(p);
return;
}
#endif

and

#if USING_APPLE_OS_LIBFFI && HAVE_FFI_CLOSURE_ALLOC
if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) {
return ffi_closure_alloc(size, codeloc);
}
#endif

In fact, while the __builtin_available() call should be guarded by 
USING_APPLE_OS_LIBFFI, the call to ffi_closure_alloc() should only be guarded 
by HAVE_FFI_CLOSURE_ALLOC, as this is set as the result of an independent check 
in setup.py and should be used with external libffi when supported.

The following code does work instead:

#if HAVE_FFI_CLOSURE_ALLOC
#if USING_APPLE_OS_LIBFFI
if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) {
#endif
ffi_closure_free(p);
return;
#if USING_APPLE_OS_LIBFFI
}
#endif
#endif


#if HAVE_FFI_CLOSURE_ALLOC
#if USING_APPLE_OS_LIBFFI
if (__builtin_available(macos 10.15, ios 13, watchos 6, tvos 13, *)) {
#endif
return ffi_closure_alloc(size, codeloc);
#if USING_APPLE_OS_LIBFFI
}
#endif
#endif

--
components: ctypes
messages: 383419
nosy: erykoff
priority: normal
severity: normal
status: open
title: ctypes memory error on Apple Silicon with external libffi
type: behavior
versions: Python 3.9




[issue29818] Py_SetStandardStreamEncoding leads to a memory error in debug mode

2019-05-14 Thread STINNER Victor


STINNER Victor  added the comment:

I fixed this issue in Python 3.7. Py_SetStandardStreamEncoding() now uses:

PyMemAllocatorEx old_alloc;
_PyMem_SetDefaultAllocator(PYMEM_DOMAIN_RAW, &old_alloc);

... _PyMem_RawStrdup() ...

PyMem_SetAllocator(PYMEM_DOMAIN_RAW, &old_alloc);

--
components: +Interpreter Core
resolution:  -> fixed
stage: needs patch -> resolved
status: open -> closed
versions:  -Python 3.6




[issue29818] Py_SetStandardStreamEncoding leads to a memory error in debug mode

2017-03-15 Thread Nick Coghlan

Changes by Nick Coghlan :


--
versions: +Python 3.6, Python 3.7




[issue29818] Py_SetStandardStreamEncoding leads to a memory error in debug mode

2017-03-15 Thread Nick Coghlan

New submission from Nick Coghlan:

For PEP 538, setting PYTHONIOENCODING turned out to have undesirable side 
effects on Python 2 instances in subprocesses, since Python 2 has no 
'surrogateescape' error handler.

So I switched to using the "Py_SetStandardStreamEncoding" API defined in 
http://bugs.python.org/issue16129 instead, but this turns out to have 
problematic interactions with the dynamic memory allocator management, so it 
fails with a fatal exception in debug mode. An example of the error can be seen 
here: https://travis-ci.org/python/cpython/jobs/211293576

The problem appears to be that between the allocation of the memory with 
`_PyMem_RawStrdup` in `Py_SetStandardStreamEncoding` and the release of that 
memory in `initstdio`, the active memory manager has changed (at least in a 
debug build), so the deallocation as part of the interpreter startup fails.

That interpretation is based on this comment in Programs/python.c:

```
/* Force again malloc() allocator to release memory blocks allocated
   before Py_Main() */
(void)_PyMem_SetupAllocators("malloc");
```

The allocations in Py_SetStandardStreamEncoding happen before the call to 
Py_Main/Py_Initialize, but the deallocation happens in Py_Initialize.

The "fix" I applied to the PEP branch was to make the default allocator 
conditional in Programs/python.c as well:

```
#ifdef Py_DEBUG
(void)_PyMem_SetupAllocators("malloc_debug");
#  else
(void)_PyMem_SetupAllocators("malloc");
#  endif
```

While that works (at least in the absence of a PYTHONMALLOC setting) it seems 
fragile. It would be nicer if there was a way for Py_SetStandardStreamEncoding 
to indicate which allocator should be used for the deallocation.

--
messages: 289668
nosy: haypo, ncoghlan
priority: normal
severity: normal
stage: needs patch
status: open
title: Py_SetStandardStreamEncoding leads to a memory error in debug mode
type: crash




Re: Memory error while using pandas dataframe

2015-06-10 Thread Jason Swails
On Mon, Jun 8, 2015 at 3:32 AM, naren naren...@gmail.com wrote:

 Memory Error while working with pandas dataframe.

 Description of Environment Windows 7 python 3.4.2 32-bit version pandas
 0.16.0

 We are running into the error described below. Any help provided will be
 sincerely appreciated.

 We are able to read a 300MB Csv file into a dataframe using the read_csv
 function. While working with the dataframe we ran into memory error. We
 used the pd.Concat function to concatenate two dataframes. So we decided to
 use chunksize for lazy reading. Chunking returns an object of type
 TextFileReader.


 http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk

 We are able to iterate over this object once as a debugging measure. The
 iterator gets exhausted after iterating once. So we are not able to convert
 the TextFileReader object back into a dataframe, using the pd.concat
 function.

It looks like you already figured out what your problem is.  The
TextFileReader is exhausted (i.e., at EOF), so you end up getting None from
it.


What is your question?  You want to be able to iterate through
TextFileReader again?

If so, try rewinding the file object that you passed to pd.concat.  If you
saved a reference to the file object, just call seek(0) on that object.
If you didn't, access it as the f attribute on the TextFileReader object
and call seek(0) on that instead.

That might work.  Otherwise, you should be more specific with your question
and provide a full segment of code that is as small as possible to
reproduce the error you're seeing.
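For what it's worth, a minimal sketch of the chunked-read-then-concat
pattern (the file name and chunk size here are placeholders, and this still
builds the final frame in memory; it just avoids consuming the
TextFileReader twice):

import pandas as pd

def load_big_csv(path, chunksize=100000):
    reader = pd.read_csv(path, chunksize=chunksize)  # lazy TextFileReader
    chunks = [chunk for chunk in reader]             # iterate exactly once
    return pd.concat(chunks, ignore_index=True)

df = load_big_csv("data.csv")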

HTH,
Jason
-- 
https://mail.python.org/mailman/listinfo/python-list


Memory error while using pandas dataframe

2015-06-08 Thread naren
Memory Error while working with pandas dataframe.

Description of Environment Windows 7 python 3.4.2 32-bit version pandas
0.16.0

We are running into the error described below. Any help provided will be
sincerely appreciated.

We are able to read a 300MB Csv file into a dataframe using the read_csv
function. While working with the dataframe we ran into memory error. We
used the pd.Concat function to concatenate two dataframes. So we decided to
use chunksize for lazy reading. Chunking returns an object of type
TextFileReader.

http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk

We are able to iterate over this object once as a debugging measure. The
iterator gets exhausted after iterating once. So we are not able to convert
the TextFileReader object back into a dataframe, using the pd.concat
function.

Error

Traceback (most recent call last):
  File "psindia.py", line 60, in <module>
    data=pd.concat(tp,ignore_index=True)
  File "C:\Python34\lib\site-packages\pandas\tools\merge.py", line 754, in concat
    copy=copy)
  File "C:\Python34\lib\site-packages\pandas\tools\merge.py", line 799, in __init__
    raise ValueError('All objects passed were None')
ValueError: All objects passed were None

Thanks for your time.

-- 
Narendran Elango
B.Tech(2014)
Department of Computer Engineering
National Institute of Technology Karnataka, Surathkal
-- 
https://mail.python.org/mailman/listinfo/python-list


Memory Error while using pandas dataframe

2015-06-08 Thread narencr7
Memory Error while working with pandas dataframe.

Description of Environment Windows 7 python 3.4.2 32-bit version pandas 0.16.0

We are running into the error described below. Any help provided will be 
sincerely appreciated.

We are able to read a 300MB Csv file into a dataframe using the read_csv 
function. While working with the dataframe we ran into memory error. We used 
the pd.Concat function to concatenate two dataframes. So we decided to use 
chunksize for lazy reading. Chunking returns an object of type TextFileReader.

http://pandas.pydata.org/pandas-docs/stable/io.html#iterating-through-files-chunk-by-chunk

We are able to iterate over this object once as a debugging measure. The 
iterator gets exhausted after iterating once. So we are not able to convert the 
TextFileReader object back into a dataframe, using the pd.concat function.

Error

Traceback (most recent call last):
  File "psindia.py", line 60, in <module>
    data=pd.concat(tp,ignore_index=True)
  File "C:\Python34\lib\site-packages\pandas\tools\merge.py", line 754, in concat
    copy=copy)
  File "C:\Python34\lib\site-packages\pandas\tools\merge.py", line 799, in __init__
    raise ValueError('All objects passed were None')
ValueError: All objects passed were None
Thanks for your time.
-- 
https://mail.python.org/mailman/listinfo/python-list


[issue6676] expat parser throws Memory Error when parsing multiple files

2014-03-27 Thread Roundup Robot

Roundup Robot added the comment:

New changeset 74faca1ac59c by Ned Deily in branch '2.7':
Issue #6676: Ensure a meaningful exception is raised when attempting
http://hg.python.org/cpython/rev/74faca1ac59c

New changeset 9e3fc66ee0b8 by Ned Deily in branch '3.4':
Issue #6676: Ensure a meaningful exception is raised when attempting
http://hg.python.org/cpython/rev/9e3fc66ee0b8

New changeset ee0034434e65 by Ned Deily in branch 'default':
Issue #6676: merge from 3.4
http://hg.python.org/cpython/rev/ee0034434e65

--
nosy: +python-dev




[issue6676] expat parser throws Memory Error when parsing multiple files

2014-03-27 Thread Ned Deily

Ned Deily added the comment:

Applied for release in 3.5.0, 3.4.1 and 2.7.7.  Thanks, everyone!

--
resolution:  -> fixed
stage: patch review -> committed/rejected
status: open -> closed
versions: +Python 3.5 -Python 3.3




Re: Memory error

2014-03-25 Thread dieter
Jamie Mitchell jamiemitchell1...@gmail.com writes:
 ...
 I then get a memory error:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py",
 line 2409, in pearsonr
     x = np.asarray(x)
   File "/usr/local/sci/lib/python2.7/site-packages/numpy/core/numeric.py",
 line 321, in asarray
     return array(a, dtype, copy=False, order=order)
 MemoryError

MemoryError means that Python cannot get sufficient memory
from the operating system.


You have already found out one mistake. Should you continue to
get MemoryError after this is fixed, then your system does not
provide enough resources (memory) to solve the problem at hand.
You would need to find a way to provide more resources.

-- 
https://mail.python.org/mailman/listinfo/python-list


Memory error

2014-03-24 Thread Jamie Mitchell
Hello all,

I'm afraid I am new to all this so bear with me...

I am looking to find the statistical significance between two large netCDF data 
sets.

Firstly I've loaded the two files into python:

swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/averages/swh_control_concat.nc',
 'r')

swh_2050s=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/averages/swh_2050s_concat.nc',
 'r')

I have then isolated the variables I want to perform the pearson correlation on:

hs=swh.variables['hs']

hs_2050s=swh_2050s.variables['hs']

Here is the metadata for those files:

print hs
<type 'netCDF4.Variable'>
int16 hs(time, latitude, longitude)
standard_name: significant_height_of_wind_and_swell_waves
long_name: significant_wave_height
units: m
add_offset: 0.0
scale_factor: 0.002
_FillValue: -32767
missing_value: -32767
unlimited dimensions: time
current shape = (86400, 350, 227)

print hs_2050s
<type 'netCDF4.Variable'>
int16 hs(time, latitude, longitude)
standard_name: significant_height_of_wind_and_swell_waves
long_name: significant_wave_height
units: m
add_offset: 0.0
scale_factor: 0.002
_FillValue: -32767
missing_value: -32767
unlimited dimensions: time
current shape = (86400, 350, 227)


Then to perform the pearsons correlation:

from scipy.stats.stats import pearsonr

pearsonr(hs,hs_2050s)

I then get a memory error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py", line 2409, in pearsonr
    x = np.asarray(x)
  File "/usr/local/sci/lib/python2.7/site-packages/numpy/core/numeric.py", line 321, in asarray
    return array(a, dtype, copy=False, order=order)
MemoryError

This also happens when I try to create numpy arrays from the data.

Does anyone know how I can alleviate these memory errors?

Cheers,

Jamie
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Memory error

2014-03-24 Thread Jamie Mitchell
On Monday, March 24, 2014 11:32:31 AM UTC, Jamie Mitchell wrote:
 Hello all,
 
 
 
 I'm afraid I am new to all this so bear with me...
 
 
 
 I am looking to find the statistical significance between two large netCDF 
 data sets.
 
 
 
 Firstly I've loaded the two files into python:
 
 
 
 swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/averages/swh_control_concat.nc',
  'r')
 
 
 
 swh_2050s=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/averages/swh_2050s_concat.nc',
  'r')
 
 
 
 I have then isolated the variables I want to perform the pearson correlation 
 on:
 
 
 
 hs=swh.variables['hs']
 
 
 
 hs_2050s=swh_2050s.variables['hs']
 
 
 
 Here is the metadata for those files:
 
 
 
 print hs
 
 type 'netCDF4.Variable'
 
 int16 hs(time, latitude, longitude)
 
 standard_name: significant_height_of_wind_and_swell_waves
 
 long_name: significant_wave_height
 
 units: m
 
 add_offset: 0.0
 
 scale_factor: 0.002
 
 _FillValue: -32767
 
 missing_value: -32767
 
 unlimited dimensions: time
 
 current shape = (86400, 350, 227)
 
 
 
 print hs_2050s
 
 type 'netCDF4.Variable'
 
 int16 hs(time, latitude, longitude)
 
 standard_name: significant_height_of_wind_and_swell_waves
 
 long_name: significant_wave_height
 
 units: m
 
 add_offset: 0.0
 
 scale_factor: 0.002
 
 _FillValue: -32767
 
 missing_value: -32767
 
 unlimited dimensions: time
 
 current shape = (86400, 350, 227)
 
 
 
 
 
 Then to perform the pearsons correlation:
 
 
 
 from scipy.stats.stats import pearsonr
 
 
 
 pearsonr(hs,hs_2050s)
 
 
 
 I then get a memory error:
 
 
 
 Traceback (most recent call last):
 
   File stdin, line 1, in module
 
   File /usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py, 
 line 2409, in pearsonr
 
 x = np.asarray(x)
 
   File /usr/local/sci/lib/python2.7/site-packages/numpy/core/numeric.py, 
 line 321, in asarray
 
 return array(a, dtype, copy=False, order=order)
 
 MemoryError
 
 
 
 This also happens when I try to create numpy arrays from the data.
 
 
 
 Does anyone know how I can alleviate theses memory errors?
 
 
 
 Cheers,
 
 
 
 Jamie

Just realised that obviously pearson correlation requires two 1D arrays and 
mine are 3D, silly mistake!
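For the record, a rough sketch of one way around that (illustrative only:
it reuses the dataset paths from the original post, reads just a slice of
the time axis, and ignores the fill/missing values, which a real analysis
would need to mask out):

import netCDF4
import numpy as np
from scipy.stats.stats import pearsonr

swh = netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/averages/swh_control_concat.nc', 'r')
swh_2050s = netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/averages/swh_2050s_concat.nc', 'r')

# Read a manageable subset (first 1000 time steps) and flatten to 1-D.
hs = np.asarray(swh.variables['hs'][:1000]).ravel()
hs_2050s = np.asarray(swh_2050s.variables['hs'][:1000]).ravel()

r, p = pearsonr(hs, hs_2050s)
print r, p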
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Memory error

2014-03-24 Thread Gary Herron

On 03/24/2014 04:32 AM, Jamie Mitchell wrote:

Hello all,

I'm afraid I am new to all this so bear with me...

I am looking to find the statistical significance between two large netCDF data 
sets.

Firstly I've loaded the two files into python:

swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/averages/swh_control_concat.nc',
 'r')

swh_2050s=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/averages/swh_2050s_concat.nc',
 'r')

I have then isolated the variables I want to perform the pearson correlation on:

hs=swh.variables['hs']

hs_2050s=swh_2050s.variables['hs']


This is not really a Python question.  It's a question about netCDF 
(whatever that may be), or perhaps its interface to Python, python-netCDF4.


You may get an answer here, but you are far more likely to get one 
quickly and accurately from a forum dedicated to netCDF or python-netCDF4.


Good luck.

Gary Herron
--
https://mail.python.org/mailman/listinfo/python-list


[issue6676] expat parser throws Memory Error when parsing multiple files

2014-02-26 Thread Ned Deily

Ned Deily added the comment:

Thanks for the reminder, David.  Here are patches for 3.x and 2.7 that include 
updated versions of the proposed pyexpat.c and test_pyexpat.py patches along 
with a doc update along the lines suggested by David.

--
stage:  -> patch review
versions:  -Python 3.2
Added file: http://bugs.python.org/file34240/issue6676_3x.patch




[issue6676] expat parser throws Memory Error when parsing multiple files

2014-02-26 Thread Ned Deily

Changes by Ned Deily n...@acm.org:


Added file: http://bugs.python.org/file34241/issue6676_27.patch




[issue6676] expat parser throws Memory Error when parsing multiple files

2014-02-19 Thread David H. Gutteridge

David H. Gutteridge added the comment:

Updating to reflect the Python 3.4 documentation is now also relevant to this 
discussion. Perhaps someone could commit a change something like my suggestion 
in msg143295?

--
versions: +Python 3.4




Re: Memory error with quadratic interpolation

2013-01-24 Thread Oscar Benjamin
On 23 January 2013 17:33, Isaac Won winef...@gmail.com wrote:
 On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 14:57, Isaac Won winef...@gmail.com wrote:

  On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:

 Unless I've misunderstood how this function is supposed to be used, it
 just doesn't really seem to work for arrays of much more than a few
 hundred elements.


The solution is to use UnivariateSpline. I don't know what the
difference is but it works where the other fails:

import numpy as np
from scipy.interpolate import UnivariateSpline
x = np.array(1 * [0.0], float)
indices = np.arange(len(x))
interp = UnivariateSpline(indices, x, k=2)
print(interp(1.5))
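And roughly how that would slot into the not_nan/indices pattern used in
the rest of the thread (sketch only; the random data with artificial gaps
just stands in for the real file):

import numpy as np
from scipy.interpolate import UnivariateSpline

x = np.random.rand(31747)
x[::50] = np.nan                      # artificial gaps, for illustration
not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
spline = UnivariateSpline(indices[not_nan], x[not_nan], k=2)
p = spline(indices)                   # filled series, same length as x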


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Ulrich Eckhardt

Am 23.01.2013 05:06, schrieb Isaac Won:

I have tried to use different interpolation methods with Scipy. My
code seems just fine with linear interpolation, but shows memory
error with quadratic. I am a novice for python. I will appreciate any
help.



#code
f = open(filin, "r")


Check out the with open(...) as f syntax.



for columns in ( raw.strip().split() for raw in f ):


For the record, this first builds a sequence and then iterates over that 
sequence. This is not very memory-efficient, try this instead:


   for line in f:
   columns = line.strip().split()


Concerning the rest of your problems, there is lots of code and the 
datafile missing. However, there is also too much of it, try replacing 
the file with generated data and remove everything from the code that is 
not absolutely necessary.


Good luck!

Uli


--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Oscar Benjamin
On 23 January 2013 08:55, Ulrich Eckhardt
ulrich.eckha...@dominolaser.com wrote:
 Am 23.01.2013 05:06, schrieb Isaac Won:

 I have tried to use different interpolation methods with Scipy. My
 code seems just fine with linear interpolation, but shows memory
 error with quadratic. I am a novice for python. I will appreciate any
 help.

[SNIP]


 Concerning the rest of your problems, there is lots of code and the datafile
 missing. However, there is also too much of it, try replacing the file with
 generated data and remove everything from the code that is not absolutely
 necessary.

Also please copy paste the actual error message rather than paraphrasing it.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Tuesday, January 22, 2013 10:06:41 PM UTC-6, Isaac Won wrote:
 Hi all,
 
 
 
 I have tried to use different interpolation methods with Scipy. My code seems 
 just fine with linear interpolation, but shows memory error with quadratic. I 
 am a novice for python. I will appreciate any help.
 
 
 
 #code
 
 f = open(filin, r)
 
 for columns in ( raw.strip().split() for raw in f ):
 
 a.append(columns[5])
 
 x = np.array(a, float)
 
 
 
 
 
 not_nan = np.logical_not(np.isnan(x))
 
 indices = np.arange(len(x))
 
 interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
 
 
 
 p = interp(indices)
 
 
 
 
 
 The number of data is 31747.
 
 
 
 Thank you,
 
 
 
 Isaac

I really appreciate to both Ulich and Oscar.

To Oscar
My actual error message is:
File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
self._spline = splmake(x,oriented_y,order=order)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
coefs = func(xk, yk, order, conds, B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
u,s,vh = np.dual.svd(B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you,

Hoonill
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 08:55, Ulrich Eckhardt
 
 ulrich.eckha...@dominolaser.com wrote:
 
  Am 23.01.2013 05:06, schrieb Isaac Won:
 
 
 
  I have tried to use different interpolation methods with Scipy. My
 
  code seems just fine with linear interpolation, but shows memory
 
  error with quadratic. I am a novice for python. I will appreciate any
 
  help.
 
 
 
 [SNIP]
 
 
 
 
 
  Concerning the rest of your problems, there is lots of code and the datafile
 
  missing. However, there is also too much of it, try replacing the file with
 
  generated data and remove everything from the code that is not absolutely
 
  necessary.
 
 
 
 Also please copy paste the actual error message rather than paraphrasing it.
 
 
 
 
 
 Oscar

I really appreciate to both Ulich and Oscar. 

To Oscar 
My actual error message is: 
File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
 self._spline = splmake(x,oriented_y,order=order) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
 coefs = func(xk, yk, order, conds, B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
 u,s,vh = np.dual.svd(B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
 full_matrices=full_matrices, overwrite_a = overwrite_a) 
MemoryError 
-- 
Thank you, 

Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 08:55, Ulrich Eckhardt
 
 
 
  Am 23.01.2013 05:06, schrieb Isaac Won:
 
 
 
  I have tried to use different interpolation methods with Scipy. My
 
  code seems just fine with linear interpolation, but shows memory
 
  error with quadratic. I am a novice for python. I will appreciate any
 
  help.
 
 
 
 [SNIP]
 
 
 
 
 
  Concerning the rest of your problems, there is lots of code and the datafile
 
  missing. However, there is also too much of it, try replacing the file with
 
  generated data and remove everything from the code that is not absolutely
 
  necessary.
 
 
 
 Also please copy paste the actual error message rather than paraphrasing it.
 
 
 
 
 
 Oscar

I really appreciate to both Ulich and Oscar. 

To Oscar 
My actual error message is: 
File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
 self._spline = splmake(x,oriented_y,order=order) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
 coefs = func(xk, yk, order, conds, B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
 u,s,vh = np.dual.svd(B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
 full_matrices=full_matrices, overwrite_a = overwrite_a) 
MemoryError 
-- 
Thank you, 

Hoonill 
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Oscar Benjamin
On 23 January 2013 14:28, Isaac Won winef...@gmail.com wrote:
 On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:

 To Oscar
 My actual error message is:
 File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 311, in __init__
  self._spline = splmake(x,oriented_y,order=order)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 809, in splmake
  coefs = func(xk, yk, order, conds, B)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 530, in _find_smoothest
  u,s,vh = np.dual.svd(B)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
  line 91, in svd
  full_matrices=full_matrices, overwrite_a = overwrite_a)
 MemoryError

Are you sure that's the *whole* error message? The traceback only
refers to the scipy modules. I can't see the line from your code that
is generating the error.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Wednesday, January 23, 2013 2:55:14 AM UTC-6, Ulrich Eckhardt wrote:
 Am 23.01.2013 05:06, schrieb Isaac Won:
 
  I have tried to use different interpolation methods with Scipy. My
 
  code seems just fine with linear interpolation, but shows memory
 
  error with quadratic. I am a novice for python. I will appreciate any
 
  help.
 
  
 
  #code
 
  f = open(filin, r)
 
 
 
 Check out the with open(...) as f syntax.
 
 
 
 
 
  for columns in ( raw.strip().split() for raw in f ):
 
 
 
 For the record, this first builds a sequence and then iterates over that 
 
 sequence. This is not very memory-efficient, try this instead:
 
 
 
 for line in f:
 
 columns = line.strip().split()
 
 
 
 
 
 Concerning the rest of your problems, there is lots of code and the 
 
 datafile missing. However, there is also too much of it, try replacing 
 
 the file with generated data and remove everything from the code that is 
 
 not absolutely necessary.
 
 
 
 Good luck!
 
 
 
 Uli

Hi Ulich,

I tried to change the code following your advice, but it doesn't seem to work 
still.

My adjusted code is:

a = []

with open(filin, "r") as f:
    for line in f:
        columns = line.strip().split()
        a.append(columns[5])
x = np.array(a, float)


not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
p = interp(indices)
-
And full error message is:
   interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
self._spline = splmake(x,oriented_y,order=order)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
coefs = func(xk, yk, order, conds, B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
u,s,vh = np.dual.svd(B)
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
---
Could you give me some advice for this situation?

Thank you always,

Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 14:28, Isaac Won winef...@gmail.com wrote:
 
  On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
 
 
 
  To Oscar
 
  My actual error message is:
 
  File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 311, in __init__
 
   self._spline = splmake(x,oriented_y,order=order)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 809, in splmake
 
   coefs = func(xk, yk, order, conds, B)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 530, in _find_smoothest
 
   u,s,vh = np.dual.svd(B)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
   line 91, in svd
 
   full_matrices=full_matrices, overwrite_a = overwrite_a)
 
  MemoryError
 
 
 
 Are you sure that's the *whole* error message? The traceback only
 
 refers to the scipy modules. I can't see the line from your code that
 
 is generating the error.
 
 
 
 
 
 Oscar

Dear Oscar,

Following is full error message after I adjusted following Ulich's advice:

interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic') 
File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 311, in __init__
 self._spline = splmake(x,oriented_y,order=order) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 809, in splmake
 coefs = func(xk, yk, order, conds, B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
 line 530, in _find_smoothest
 u,s,vh = np.dual.svd(B) 
  File 
/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
 line 91, in svd
 full_matrices=full_matrices, overwrite_a = overwrite_a) 
MemoryError 
--
Thank you,

Hoonill
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Oscar Benjamin
On 23 January 2013 14:57, Isaac Won winef...@gmail.com wrote:
 On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 14:28, Isaac Won winef...@gmail.com wrote:

[SNIP]

 Following is full error message after I adjusted following Ulich's advice:

 interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
 File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 311, in __init__
  self._spline = splmake(x,oriented_y,order=order)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 809, in splmake
  coefs = func(xk, yk, order, conds, B)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
  line 530, in _find_smoothest
  u,s,vh = np.dual.svd(B)
   File 
 /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
  line 91, in svd
  full_matrices=full_matrices, overwrite_a = overwrite_a)
 MemoryError

Where is the new code? You should show full working code (with the
import statements) and the full error that is generated by exactly
that code. If possible you should also write code that someone else
could run even without having access to your data files. If you did
that in your first post, you'd probably have an answer to your problem
by now.

Here is a version of your code that many people on this list can test
straight away:

import numpy as np
from scipy.interpolate import interp1d
x = np.array(31747 * [0.0], float)
indices = np.arange(len(x))
interp = interp1d(indices, x, kind='quadratic')

Running this gives the following error:

~$ python tmp.py
Traceback (most recent call last):
  File tmp.py, line 5, in module
interp = interp1d(indices, x, kind='quadratic')
  File /usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py,
line 308, in __init__
self._spline = splmake(x,oriented_y,order=order)
  File /usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py,
line 805, in splmake
B = _fitpack._bsplmat(order, xk)
MemoryError

Unless I've misunderstood how this function is supposed to be used, it
just doesn't really seem to work for arrays of much more than a few
hundred elements.


Oscar
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error with quadratic interpolation

2013-01-23 Thread Isaac Won
On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
 On 23 January 2013 14:57, Isaac Won winef...@gmail.com wrote:
 
  On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
 
  On 23 January 2013 14:28, Isaac Won winef...@gmail.com wrote:
 
 
 
 [SNIP]
 
 
 
  Following is full error message after I adjusted following Ulich's advice:
 
 
 
  interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
 
  File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 311, in __init__
 
   self._spline = splmake(x,oriented_y,order=order)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 809, in splmake
 
   coefs = func(xk, yk, order, conds, B)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py,
   line 530, in _find_smoothest
 
   u,s,vh = np.dual.svd(B)
 
File 
  /lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py,
   line 91, in svd
 
   full_matrices=full_matrices, overwrite_a = overwrite_a)
 
  MemoryError
 
 
 
 Where is the new code? You should show full working code (with the
 
 import statements) and the full error that is generated by exactly
 
 that code. If possible you should also write code that someone else
 
 could run even without having access to your data files. If you did
 
 that in your first post, you'd probably have an answer to your problem
 
 by now.
 
 
 
 Here is a version of your code that many people on this list can test
 
 straight away:
 
 
 
 import numpy as np
 
 from scipy.interpolate import interp1d
 
 x = np.array(31747 * [0.0], float)
 
 indices = np.arange(len(x))
 
 interp = interp1d(indices, x, kind='quadratic')
 
 
 
 Running this gives the following error:
 
 
 
 ~$ python tmp.py
 
 Traceback (most recent call last):
 
   File tmp.py, line 5, in module
 
 interp = interp1d(indices, x, kind='quadratic')
 
   File /usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py,
 
 line 308, in __init__
 
 self._spline = splmake(x,oriented_y,order=order)
 
   File /usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py,
 
 line 805, in splmake
 
 B = _fitpack._bsplmat(order, xk)
 
 MemoryError
 
 
 
 Unless I've misunderstood how this function is supposed to be used, it
 
 just doesn't really seem to work for arrays of much more than a few
 
 hundred elements.
 
 
 
 
 
 Oscar

Thank you Oscar for your help and advice.

I agree with you. So, I tried to find the way to solve this problem.

My full code adjusted is:
from scipy.interpolate import interp1d

import numpy as np
import matplotlib.pyplot as plt



with open(filin, "r") as f:
    for line in f:
        columns = line.strip().split()
        a.append(columns[5])
x = np.array(a, float)


not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')

p = interp(indices)




k = np.arange(31747)

plt.subplot(211)
plt.plot(k, p)
plt.xlabel('Quadratic interpolation')
plt.subplot(212)
plt.plot(k, x)

plt.show()
-
Whole error message was:

Traceback (most recent call last):
  File "QI1.py", line 22, in <module>
    interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you again Oscar,

Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


Memory error with quadratic interpolation

2013-01-22 Thread Isaac Won
Hi all,

I have tried to use different interpolation methods with Scipy. My code seems 
just fine with linear interpolation, but shows memory error with quadratic. I 
am a novice for python. I will appreciate any help.

#code
f = open(filin, "r")
for columns in ( raw.strip().split() for raw in f ):
    a.append(columns[5])
x = np.array(a, float)


not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')

p = interp(indices)


The number of data is 31747.

Thank you,

Isaac
-- 
http://mail.python.org/mailman/listinfo/python-list


How to deal with python 32bit memory error

2012-07-24 Thread Sammy Danso
Hello Experts,
I am having a 'memory error', which suggests that I 
have run out of memory, but I am not sure this is the case as I have a 
considerable amount of memory unused on my computer. 

A little 
search suggests this is a limitation of 32-bit Python and an option is to 
have a 64-bit build. I however have other plug-ins, which are tied to the 32-bit
 version, so this is not the best option in my case. 

I was wondering whether there is an elegant way of dealing with this without 
installing a 64-bit version of Python.

Thanks very much in advance.
Sammy
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to deal with python 32bit memory error

2012-07-24 Thread Dave Angel
On 07/24/2012 04:06 AM, Sammy Danso wrote:
 Hello Experts,
 I am having a 'memory error',

Please post the actual error message.

  which suggest that I 
 have run out of memory, but I am not sure this is the case as I have 
 considerable amount of memory unused on my computer. 

What OS, version, and bitness?  What 32-bit Python?

How do you know you have unused memory?  How much total memory, how much
swapspace, and what are you using to tell how much of each is unused?

How big is the process when it dies?  And how are you determining that?

 A little 
 search suggest this is a limitation of python 32 and an option is to 
 have a 64bit. I however have other plug-ins, which are tied to the 32bit
  so this is not the best option in my case. 
There are some limitations to 32 bits that have nothing to do with
Python specifically.  However, they have a different impact depending on
the environment you're running in.  First and foremost, addresses are
32 bits, which limits them to 4GB of RAM.  So out of your 32GB of
swapspace, a best-case maximum regardless of OS is 4GB.  No operating
system lets you actually get that high.

 I was wondering whether there is an elegant way to dealing with this without 
 installing a 6bit version of python.

The most elegant way of solving it is to reduce the amount of memory
your program is using.  For example, instead of building large lists,
perhaps you can use generators.  Simplest example (for Python 2.x) is
xrange rather than range.

For another example, reduce the globals.  Create large objects inside a
limited function, and discard them when done (by returning from the
function).
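A tiny Python 2 sketch of both points (generator instead of list, and large
intermediate objects confined to a function):

def squares_list(n):
    return [i * i for i in xrange(n)]   # holds all n results at once

def squares_total(n):
    # Generator version: one value at a time, so memory stays flat, and
    # everything created here is freed when the function returns.
    return sum(i * i for i in xrange(n))

print squares_total(10 ** 7)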

 Thanks very much in advance.
 Sammy


When responding, please remember to post your response *AFTER* the part
you're quoting.  Most of your previous posts here have been top-posted.

-- 

DaveA

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to deal with python 32bit memory error

2012-07-24 Thread Christian Heimes
Am 24.07.2012 11:58, schrieb Dave Angel:
 There are some limitations to 32 bits, that have nothing to do with
 Python specifically.  However, they have different impact, depending on
 the environment you're running in.  First and foremost, address are
 32bits, which limits them to 4gb of ram.  So out of your 32Gig of
 swapspace, a best-case maximum regardless of OS is 4Gig.  No operating
 system lets you actually get that high.

The usable amount of memory is much lower than 4 GB for a 32-bit program.
A typical program can use about 2.4 to 2.7 GB of virtual address space for
heap. The exact amount is hard to predict as it depends on the
operating system, memory allocator and other settings. The rest of the 4
GB virtual address space is reserved for stack, mapping of dynamic
libraries and operating system routines.

The amount of usable memory on the heap (area returned by malloc()) is
often lowered by memory fragmentation. If your program tries to malloc
100 MB of memory but the largest contiguous area is just 98 MB you'll
get a memory error, too.

Christian

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory error

2011-09-29 Thread questions anon
Hello All,
I am still having trouble with memory errors when I try to process many
netcdf files.
Originally I would get the memory error as mentioned in the previous post
but when I added gc.collect() after each for loop I receive the error:
GEOS_ERROR: bad allocation
with no additional information!
The error used to occur at the point when a new netcdf file was to be opened
and plotted but with the things I have 'fixed' thanks to suggestions from
this list it seems to happen while processing the second file.
I am just trying to plot 3hourly data for each file and each file contains
hourly data for a month and I am trying to do this for many months.
It seems like I cannot close down the last file properly so the computer has
a clean memory to start the next one.
Any feedback will be greatly appreciated.
My latest version of the code:

##

from netCDF4 import Dataset
import numpy as N
import matplotlib.pyplot as plt
from numpy import ma as MA
from mpl_toolkits.basemap import Basemap
from netcdftime import utime
from datetime import datetime
import os


shapefile1=E:/DSE_BushfireClimatologyProject/griddeddatasamples/test_GIS/DSE_REGIONS
OutputFolder=rE:/DSE_BushfireClimatologyProject/griddeddatasamples/GriddedData/OutputsforValidation

def plotrawdata(variable):
if variable=='TSFC':
ncvariablename='T_SFC'

MainFolder=rE:/DSE_BushfireClimatologyProject/griddeddatasamples/GriddedData/InputsforValidation/T_SFC/
ticks=[-5,0,5,10,15,20,25,30,35,40,45,50]
Title='Surface Temperature'
cmap=plt.cm.jet

elif variable=='RHSFC':
ncvariablename='RH_SFC'

MainFolder=rE:/DSE_BushfireClimatologyProject/griddeddatasamples/GriddedData/InputsforValidation/RH_SFC/
ticks=[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
Title='Surface RH'
cmap=plt.cm.jet_r


fileforlatlon=Dataset(E:/DSE_BushfireClimatologyProject/griddeddatasamples/GriddedData/InputsforValidation/T_SFC/TSFC_1974_01/IDZ00026_VIC_ADFD_T_SFC.nc,
'r+', 'NETCDF4')
LAT=fileforlatlon.variables['latitude'][:]
LON=fileforlatlon.variables['longitude'][:]

startperiod=raw_input("Start slice (e.g. 1 ): ")
endperiod=raw_input("End slice (e.g. 2): ")
skipperiod=raw_input("skip slice (e.g. 1): ")
if startperiod == "":
startperiod = None
else:
startperiod = int(startperiod)
if endperiod == "":
endperiod = None
else:
endperiod = int(endperiod)
if skipperiod == "":
skipperiod = None
else:
skipperiod= int(skipperiod)

for (path, dirs, files) in os.walk(MainFolder):
for dir in dirs:
print dir
path=path+'/'

for ncfile in files:
if ncfile[-3:]=='.nc':
print "dealing with ncfiles:", path+ncfile
ncfile=os.path.join(path,ncfile)
ncfile=Dataset(ncfile, 'r+', 'NETCDF4')
#global TSFC

variable=ncfile.variables[ncvariablename][startperiod:endperiod:skipperiod]

TIME=ncfile.variables['time'][startperiod:endperiod:skipperiod]

fillvalue=ncfile.variables[ncvariablename]._FillValue
ncfile.close()

for variable, TIME in zip((variable[:]),(TIME[:])):
#for variable, TIME in zip((variable[sliceperiod]),(TIME[sliceperiod])):

cdftime=utime('seconds since 1970-01-01 00:00:00')

ncfiletime=cdftime.num2date(TIME)
print ncfiletime
timestr=str(ncfiletime)
d = datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
date_string = d.strftime('%Y%m%d_%H%M')
#Set up basemap using mercator projection
#http://matplotlib.sourceforge.net/basemap/doc/html/users/merc.html
map = Basemap(projection='merc',llcrnrlat=-40,urcrnrlat=-33,
llcrnrlon=139.0,urcrnrlon=151.0,lat_ts=0,resolution='i')
x,y=map(*N.meshgrid(LON,LAT))

map.drawcoastlines(linewidth=0.5)
map.readshapefile(shapefile1, 'DSE_REGIONS')
map.drawstates()

plt.title(Title+' %s UTC'%ncfiletime)

CS = map.contourf(x,y,variable, ticks, cmap=cmap)
l,b,w,h =0.1,0.1,0.8,0.8

memory error

2011-09-14 Thread questions anon
Hello All,
I keep coming across a memory error when processing many netcdf files. I
assume it has something to do with how I loop things and maybe need to close
things off properly.
In the code below I am looping through a bunch of netcdf files (each file is
hourly data for one month) and within each netcdf file I am outputting a
*.png file every three hours.
This works for one netcdf file but when it begins to process the next netcdf
file I receive this memory error:

*Traceback (most recent call last):
  File
d:/plot_netcdf_merc_multiplot_across_multifolders_mkdirs_memoryerror.py,
line 44, in module
TSFC=ncfile.variables['T_SFC'][:]
  File netCDF4.pyx, line 2473, in netCDF4.Variable.__getitem__
(netCDF4.c:23094)
MemoryError*

To reduce processing requirements I have tried making LAT and LON
use only [0], but I also receive an error:

*Traceback (most recent call last):
  File
d:/plot_netcdf_merc_multiplot_across_multifolders_mkdirs_memoryerror.py,
line 75, in module
x,y=map(*N.meshgrid(LON,LAT))
  File C:\Python27\lib\site-packages\numpy\lib\function_base.py, line
3256, in meshgrid
numRows, numCols = len(y), len(x)  # yes, reversed
TypeError: len() of unsized object*

finally I have added gc.collect() in a couple of places but that doesn't
seem to do anything to help.
I am using :*Python 2.7.2 |EPD 7.1-2 (32-bit)| (default, Jul  3 2011,
15:13:59) [MSC v.1500 32 bit (Intel)] on win32*
Any feedback will be greatly appreciated!


from netCDF4 import Dataset
import numpy
import numpy as N
import matplotlib.pyplot as plt
from numpy import ma as MA
from mpl_toolkits.basemap import Basemap
from netcdftime import utime
from datetime import datetime
import os
import gc

print "start processing"

inputpath=r'E:/GriddedData/Input/'
outputpath=r'E:/GriddedData/Validation/'
shapefile1="E:/test_GIS/DSE_REGIONS"
for (path, dirs, files) in os.walk(inputpath):
for dir in dirs:
print dir
sourcepath=os.path.join(path,dir)
relativepath=os.path.relpath(sourcepath,inputpath)
newdir=os.path.join(outputpath,relativepath)
if not os.path.exists(newdir):
os.makedirs(newdir)

for ncfile in files:
if ncfile[-3:]=='.nc':
print "dealing with ncfiles:", ncfile
ncfile=os.path.join(sourcepath,ncfile)
#print ncfile
ncfile=Dataset(ncfile, 'r+', 'NETCDF4')
TSFC=ncfile.variables['T_SFC'][:,:,:]
TIME=ncfile.variables['time'][:]
LAT=ncfile.variables['latitude'][:]
LON=ncfile.variables['longitude'][:]
fillvalue=ncfile.variables['T_SFC']._FillValue
TSFC=MA.masked_values(TSFC, fillvalue)
ncfile.close()
gc.collect()
print "garbage collected"


for TSFC, TIME in zip((TSFC[1::3]),(TIME[1::3])):
print TSFC, TIME
#convert time from numbers to date and prepare it to have no symbols for saving to filename
cdftime=utime('seconds since 1970-01-01 00:00:00')
ncfiletime=cdftime.num2date(TIME)
print ncfiletime
timestr=str(ncfiletime)
d = datetime.strptime(timestr, '%Y-%m-%d %H:%M:%S')
date_string = d.strftime('%Y%m%d_%H%M')

#Set up basemap using mercator projection
#http://matplotlib.sourceforge.net/basemap/doc/html/users/merc.html
map = Basemap(projection='merc',llcrnrlat=-40,urcrnrlat=-33,
llcrnrlon=139.0,urcrnrlon=151.0,lat_ts=0,resolution='i')

# compute map projection coordinates for lat/lon grid.
x,y=map(*N.meshgrid(LON,LAT))
map.drawcoastlines(linewidth=0.5)
map.readshapefile(shapefile1, 'DSE_REGIONS')
map.drawstates()

plt.title('Surface temperature at %s UTC'%ncfiletime)
ticks=[-5,0,5,10,15,20,25,30,35,40,45,50]
CS = map.contourf(x,y,TSFC, ticks, cmap=plt.cm.jet)
l,b,w,h =0.1,0.1,0.8,0.8
cax = plt.axes([l+w+0.025, b, 0.025, h], )
cbar=plt.colorbar(CS, cax=cax, drawedges=True)

#save map as *.png and plot netcdf file

plt.savefig((os.path.join(newdir,'TSFC'+date_string+'UTC.png')))
plt.close()
gc.collect()
print "garbage collected again"
print "end of processing"
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue6676] expat parser throws Memory Error when parsing multiple files

2011-08-31 Thread David H. Gutteridge

David H. Gutteridge dhgutteri...@sympatico.ca added the comment:

Ned: My proposed wording is: "Note that only one document can be parsed by a 
given instance; it is not possible to reuse an instance to parse multiple 
files."  To provide more detail, one could also add something like: "The 
isfinal argument of the Parse() method is intended to allow the parsing of a 
single file in fragments, not the submission of multiple files."

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2011-08-30 Thread Ned Deily

Ned Deily n...@acm.org added the comment:

I agree that, at a minimum, the documentation should be updated to include a 
warning about not reusing a parser instance.  Whether it's worth trying to plug 
all the holes in the expat library is another issue (see, for instance, 
issue12829).  David, would you be willing to propose a wording for a 
documentation change?

--
nosy: +ned.deily
versions: +Python 3.2, Python 3.3 -Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2011-08-30 Thread Ned Deily

Ned Deily n...@acm.org added the comment:

Also, note issue1208730 proposes a feature to expose a binding for 
XML_ParserReset and has the start of a patch.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2011-08-24 Thread David H. Gutteridge

David H. Gutteridge dhgutteri...@sympatico.ca added the comment:

The documentation should definitely be updated to clarify that a parser 
instance is not reusable with more than one file.  I had a look at the 
equivalent documentation for Perl and TCL, and Perl's implementation explicitly 
does not allow attempts to reuse the parser instance (which is clearly noted in 
the documentation), and TCL's implementation (or one of them, anyway) offers a 
reset call that explicitly resets the parser in preparation for another file to 
be submitted.

--
nosy: +dhgutteridge

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



How to catch an memory error in Windows?

2011-07-25 Thread António Rocha
Greetings

I'm using subprocess module to run an external Windows binary. Due to some
limitations, sometimes all memory is consumed in this process. How can I
catch this error?
Antonio
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to catch an memory error in Windows?

2011-07-25 Thread Thomas Jollans
On 25/07/11 16:55, António Rocha wrote:
 Greetings
 
 I'm using subprocess module to run an external Windows binary. Due to
 some limitations, sometimes all memory is consumed in this process. How
 can I catch this error?
 Antonio
 

How is this relevant to the Python part?

Also, "no memory left" is not necessarily an error, it's simply a fact
of life. How to catch it depends on how you're (here, "you" means the
external process) allocating the memory - the POSIX C malloc function
will return NULL and set errno to ENOMEM, for example.

Thomas
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: How to catch an memory error in Windows?

2011-07-25 Thread António Rocha
Hi
I just want to use the Python part to catch this event as an error
try:
 subprocess
except ERROR:

I just wanted to know if there is any special error that I can use to catch
this error?
Thanks
-- Forwarded message --
From: Thomas Jollans t...@jollybox.de
To: python-list@python.org
Date: Mon, 25 Jul 2011 17:14:46 +0200
Subject: Re: How to catch an memory error in Windows?
On 25/07/11 16:55, António Rocha wrote:
 Greetings

 I'm using subprocess module to run an external Windows binary. Due to
 some limitations, sometimes all memory is consumed in this process. How
 can I catch this error?
 Antonio


How is this relevant to the Python part?

Also, "no memory left" is not necessarily an error, it's simply a fact
of life. How to catch it depends on how you're (here, "you" means the
external process) allocating the memory - the POSIX C malloc function
will return NULL and set errno to ENOMEM, for example.

Thomas
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue9180] Memory Error

2010-08-13 Thread Mark Dickinson

Changes by Mark Dickinson dicki...@gmail.com:


--
status: pending - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9180
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9180] Memory Error

2010-07-06 Thread Peter Wolf

New submission from Peter Wolf freakcy...@optusnet.com.au:

I am using Ubuntu 10.04 32 bit  and python 2.6.When I type the following line 
in a terminal

python mydatafile.py

I get the following error message on the next line

MemoryError

That is all.


File details : 

It is a 2d list of floating point numbers 86Mb in size.
Here is the start -

mydata=[[1.51386,1.51399,1.51386,1.51399],
[1.51386,1.51401,1.51401,1.51386],
[1.51391,1.51406,1.51395,1.51401],
[1.51392,1.514,1.51397,1.51395],
[1.51377,1.5142,1.51387,1.51397],

here is the end -

[1.5631,1.5633,1.5631,1.5631],
[1.5631,1.5632,1.5631,1.5631],
[1.5631,1.5633,1.5631,1.5631],
[1.563,1.5631,1.5631,1.5631]]


I will add that exactly the same type of file, but 49MB in size, compiled 
with 1GB of RAM, although there was a lot of disk activity and the CPU seemed to 
be working very hard. The 86MB file produced the above error. I upgraded to 3.4GB 
and still got the same error.

--
components: None
messages: 109392
nosy: freakcycle
priority: normal
severity: normal
status: open
title: Memory Error
type: compile error
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9180
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9180] Memory Error

2010-07-06 Thread Mark Dickinson

Mark Dickinson dicki...@gmail.com added the comment:

Thanks for the extra information; that helps a lot.

I think this is expected behaviour:  Python really does need that much memory 
to parse the file (as a Python file).  Partly this is because Python objects 
actually do take up a fair amount of space:  a length-4 list of floats on my 
(64-bit) machine takes 200 bytes, though on a 32-bit machine this number should 
be a bit smaller.  But partly it's that the compilation stage itself uses a lot 
of memory:  for example, each of the floats in your input gets put into a dict 
during compilation;  this dict is used to recognize multiple references to the 
same float, so that only one float object needs to be created for each distinct 
float value.  And those dicts are going to get pretty big.

I don't think that storing huge amounts of data in a .py file like this is 
usual practice, so I'm not particularly worried that importing a huge .py file 
can cause a MemoryError.

For your case, I'd suggest parsing your datafile manually:  reading the file 
line by line from within Python.
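
As a rough sketch of that suggestion (assuming the numbers are first exported
to a plain text file with one comma-separated row per line; the file name
here is only an example):

def iter_rows(path):
    # Yield one small list of floats per line instead of building one huge object.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                yield [float(x) for x in line.split(',')]

# e.g. first_columns = [row[0] for row in iter_rows("mydata.txt")]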

Suggest closing this issue as won't fix.

--
nosy: +mark.dickinson

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9180
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue9180] Memory Error

2010-07-06 Thread Mark Dickinson

Mark Dickinson dicki...@gmail.com added the comment:

Just an additional note:  have you considered using the pickle or json modules?

--
resolution:  - wont fix
status: open - pending

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9180
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2010-02-02 Thread Will Grainger

Will Grainger willgrain...@gmail.com added the comment:

I don't think this is a python specific problem. I have just seen 
the same error when working with the expat library from C, and the cause
is using the same parser to read multiple files.

--
nosy: +willgrainger

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



RE: memory error

2009-12-04 Thread Ahmed, Shakir
 

 

From: python-list-bounces+shahmed=sfwmd@python.org 
[mailto:python-list-bounces+shahmed=sfwmd@python.org] On Behalf Of Stephen 
Hansen
Sent: Thursday, December 03, 2009 10:22 PM
To: python-list@python.org
Subject: Re: memory error

 

On Thu, Dec 3, 2009 at 5:51 AM, Ahmed, Shakir shah...@sfwmd.gov wrote:

I am getting a memory error while executing a script. Any idea is highly
appreciated.

Error message:  The instruction at 0x1b009032 referenced memory at
0x0804:, The memory could not be written

This error is appearing and I have to exit from the script.

 

Vastly insufficient information; that basically is like saying, "Something 
broke." People can't really help you with that. You sorta need to show some 
code and/or at least describe what's going on at the time.

 

But-- the image does say Pythonwin... are you running this from the Pythonwin 
editor/IDE? Does this script crash out if you run it through the normal 
'python'(or pythonw) commands? If not, are you attempting to do any sort of GUI 
work in this script? That rarely works within Pythonwin directly.

 




--S

 

I am using Python to do some gp (geo processing) for accuracy analysis. This 
analysis is based on application numbers. The script goes through each 
application number to process the data, looping through them. The error appears 
after running a few loops (meaning it processes a few applications). There is no 
certainty of how many loops it gets through, but it stops with the error 
message.

 

The code is attached herewith, so I hope it makes things clearer to you. Any 
help is highly appreciated.

 

--sk 

 



ReverseBufferOverLay.py
Description: ReverseBufferOverLay.py
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory error

2009-12-04 Thread Stephen Hansen

 But-- the image does say Pythonwin... are you running this from the
 Pythonwin editor/IDE? Does this script crash out if you run it through the
 normal 'python'(or pythonw) commands? If not, are you attempting to do any
 sort of GUI work in this script? That rarely works within Pythonwin
 directly.

 I am using python to do some gp ( geo processing ) for accuracy analysis.
 This analysis is based on application numbers. The script is going through
 each application number to process the data and looping through. The error
 appears after running few loops ( mean it process few applications). There
 is no certainty of how many loops it is going through but stopped with the
 error message and.




You didn't answer my other questions-- have you run this with python
directly and not PythonWin? It doesn't look like it you're doing anything
GUI-ish, but I don't know anything about arcgisscripting... which is
almost certainly where the error is happening; it's really hard to get access
violations in pure Python. You'll probably need to add a lot of logging
into the script to narrow down precisely where this error is happening, that
it happens 'eventually' and 'somewhere' in the loop isn't going to help. I'd
guess that eventually a certain piece of data in there gets passed to this
library and everything explodes.

--S
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory error

2009-12-03 Thread Rami Chowdhury
On Thursday 03 December 2009 05:51:05 Ahmed, Shakir wrote:
 I am getting a memory error while executing a script. Any idea is
  highly appreciated.
 
 Error message:  The instruction at 0x1b009032 referenced memory at
 0x0804:, The memory could not be written
 
 This error is appearing and I have to exit from the script.
 
 Thanks
 sk
 

I'm afraid you'll really have to provide us a little more information. 
When does the error happen? At the beginning of the script? Halfway 
through? When you're closing the program? 

I don't know anything about Pythonwin, so I won't comment further, but 
IME Python scripts rarely get direct memory access errors so I would 
suspect a problem in Pythonwin.


Rami Chowdhury
-BEGIN GEEK CODE BLOCK-
Version: 3.1
GO d-(+++) s-:++ a-- C++ ULX+ P++ L++
E+ W+++ w-- PS+ PE t+ b+++ e++ !r z?
--END GEEK CODE BLOCK--
408-597-7068 (US) / 07875-841-046 (UK) / 0189-245544 (BD)
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: memory error

2009-12-03 Thread Lie Ryan

On 12/4/2009 12:51 AM, Ahmed, Shakir wrote:

I am getting a memory error while executing a script. Any idea is highly
appreciated.

Error message:  The instruction at 0x1b009032 referenced memory at
0x0804:, The memory could not be written

This error is appearing and I have to exit from the script.

Thanks
sk


Would you mind telling what you were doing at that time?
--
http://mail.python.org/mailman/listinfo/python-list


Re: memory error

2009-12-03 Thread Stephen Hansen
On Thu, Dec 3, 2009 at 5:51 AM, Ahmed, Shakir shah...@sfwmd.gov wrote:

 I am getting a memory error while executing a script. Any idea is highly
 appreciated.

 Error message:  The instruction at 0x1b009032 referenced memory at
 0x0804:, The memory could not be written

 This error is appearing and I have to exit from the script.


Vastly insufficient information; that basically is like saying, "Something
broke." People can't really help you with that. You sorta need to show some
code and/or at least describe what's going on at the time.

But-- the image does say Pythonwin... are you running this from the
Pythonwin editor/IDE? Does this script crash out if you run it through the
normal 'python'(or pythonw) commands? If not, are you attempting to do any
sort of GUI work in this script? That rarely works within Pythonwin
directly.


--S
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Andy Balaam

Andy Balaam m...@artificialworlds.net added the comment:

I am also seeing this with Python 2.5.2 on Ubuntu.

--
nosy: +andybalaam

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Andy Balaam

Andy Balaam m...@artificialworlds.net added the comment:

Just in case it wasn't obvious - the workaround is to create a new
parser (with xml.parsers.expat.ParserCreate()) for every XML file you
want to parse.
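
For illustration, a minimal sketch of that workaround (the file names are
placeholders):

import xml.parsers.expat

for path in ("first.xml", "second.xml"):
    parser = xml.parsers.expat.ParserCreate()  # fresh parser for each document
    with open(path, "rb") as f:
        parser.ParseFile(f)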

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

I'm not familiar with expat, but we can see what is happening more
clearly with the attached ad hoc patch.

Traceback (most recent call last):
  File expat-error.py, line 14, in module
p.ParseFile(file)
xml.parsers.expat.ExpatError: parsing finished: line 2, column 482

It seems ParseFile() doesn't support a second call. I'm not sure whether this is
intended behavior or not.

--
keywords: +patch
nosy: +ocean-city
versions: +Python 2.7
Added file: http://bugs.python.org/file15089/pyexpat_addhok.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

The patch is good; a test would be appreciated.

The difference now is that in case of true low-memory conditions,
ExpatError(no memory) is raised instead of MemoryError.
This is acceptable IMO.

 It seems ParseFile() doesn't support second call
This is correct; the C expat library has a function XML_ParserReset()
which could be called before calling ParseFile() again, but pyexpat does
not expose it yet (see issue1208730).

--
nosy: +amaury.forgeotdarc

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Well, I tried to write test like this.

1. Check if xml.parsers.expat.error is raised.
2. Compare *code* attribute of error object with
xml.parsers.expat.errors.XML_ERROR_FINISHED

But I noticed XML_ERROR_FINISHED is not an integer but a string. (!)

According to
http://docs.python.org/library/pyexpat.html#expaterror-objects

 ExpatError.code

Expat’s internal error number for the specific error. This will
match one of the constants defined in the errors object from
this module.

Is this a documentation bug or an implementation bug? Personally, I think the string
'parsing finished' as an error constant might be useless...

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

Looks like an implementation bug to me; far too late to change it, though.

In your test, you could use
  pyexpat.ErrorString(e.code) == pyexpat.errors.XML_ERROR_FINISHED
And the docs could mention this trick.
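
A hedged sketch of that check, assuming a build where reusing a finished
parser raises ExpatError (as with the patch above) rather than MemoryError:

import xml.parsers.expat as expat

p = expat.ParserCreate()
p.Parse("<doc/>", True)           # the first document parses fine
try:
    p.Parse("<doc/>", True)       # reusing the finished parser fails
except expat.ExpatError as e:
    assert expat.ErrorString(e.code) == expat.errors.XML_ERROR_FINISHED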

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Here is the patch. I'm not confident with my English comment though.

--
Added file: http://bugs.python.org/file15090/pyexpat.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Changes by Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp:


Removed file: http://bugs.python.org/file15089/pyexpat_addhok.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Amaury Forgeot d'Arc

Amaury Forgeot d'Arc amaur...@gmail.com added the comment:

Do you know the new context manager feature of assertRaises? it makes
it easier to check for exceptions.
I join a new patch that uses it.

--
Added file: http://bugs.python.org/file15094/pyexpat-2.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

I knew of the existence of that new feature, but didn't know how to use it.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-10-09 Thread Hirokazu Yamamoto

Hirokazu Yamamoto ocean-c...@m2.ccsnet.ne.jp added the comment:

Hmm, looks useful. I think your patch is good. The only problem is that
we cannot use this new feature in Python 2.6. If we use my patch in that
branch, I think there is no problem.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-08-10 Thread Matthew

New submission from Matthew webmas...@adurosolutions.com:

I'm using the Expat python interface to parse multiple XML files in an
application and have found that it throws a Memory Error exception if
multiple calls are made to xmlparser.ParseFile(file) on the same
xmlparser object. This occurs even with a vanilla xmlparser object
created with xml.parsers.expat.ParserCreate().

Python Version: 2.6.2
Operating System: Ubuntu

--
components: XML
files: expat-error.py
messages: 91452
nosy: realpolitik
severity: normal
status: open
title: expat parser throws Memory Error when parsing multiple files
type: behavior
versions: Python 2.6
Added file: http://bugs.python.org/file14684/expat-error.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6676] expat parser throws Memory Error when parsing multiple files

2009-08-10 Thread Matthew

Matthew webmas...@adurosolutions.com added the comment:

This also occurs with Python 2.5.1 on OS X

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6676
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Memory error due to big input file

2009-07-13 Thread sityee kong
Hi All,

I have a similar problem that many new python users might encounter. I would
really appreciate if you could help me fix the error.
I have a big text file with a size of more than 2GB. It turned out that a memory error
occurred when reading in this file. Here is my Python script; the error occurred at
the line -- self.fh.readlines().

import math
import time

class textfile:
  def __init__(self,fname):
 self.name=fname
 self.fh=open(fname)
 self.fh.readline()
 self.lines=self.fh.readlines()

a=textfile("/home/sservice/nfbc/GenoData/CompareCalls3.diff")

lfile=len(a.lines)

def myfun(snp,start,end):
  subdata=a.lines[start:end+1]
  NEWmiss=0
  OLDmiss=0
  DIFF=0
  for row in subdata:
     k=row.split()
     if (k[3]=="0/0") & (k[4]!="0/0"):
        NEWmiss=NEWmiss+1
     elif (k[3]!="0/0") & (k[4]=="0/0"):
        OLDmiss=OLDmiss+1
     elif (k[3]!="0/0") & (k[4]!="0/0"):
        DIFF=DIFF+1
  result.write(snp+" "+str(NEWmiss)+" "+str(OLDmiss)+" "+str(DIFF)+"\n")

result=open("Summary_noLoop_diff3.txt","w")
result.write("SNP NEWmiss OLDmiss DIFF\n")

start=0
snp=0
for i in range(lfile):
  if (i==0): continue
  after=a.lines[i].split()
  before=a.lines[i-1].split()
  if (before[0]==after[0]):
    if (i!=(lfile-1)): continue
    else:
      end=lfile-1
      myfun(before[0],start,end)
      snp=snp+1
  else:
    end=i-1
    myfun(before[0],start,end)
    snp=snp+1
    start=i
    if (i ==(lfile-1)):
      myfun(after[0],start,start)
      snp=snp+1

result.close()

  sincerely, phoebe
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread MRAB

sityee kong wrote:

Hi All,

I have a similar problem that many new python users might encounter. I 
would really appreciate if you could help me fix the error.
I have a big text file with size more than 2GB. It turned out memory 
error when reading in this file. Here is my python script, the error 
occurred at line -- self.fh.readlines().



[snip code]
Your 'error' is that you're running it on a computer with insufficient
memory.

--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread skip

phoebe I have a big text file with size more than 2GB. It turned out
phoebe memory error when reading in this file. Here is my python
phoebe script, the error occurred at line -- self.fh.readlines().

phoebe import math
phoebe import time

phoebe class textfile:
phoebe   def __init__(self,fname):
phoebe  self.name=fname
phoebe  self.fh=open(fname)
phoebe  self.fh.readline()
phoebe  self.lines=self.fh.readlines()

Don't do that.  The problem is that you are trying to read the entire file
into memory.  Learn to operate a line (or a few lines) at a time.  Try
something like:

a = open("/home/sservice/nfbc/GenoData/CompareCalls3.diff")
for line in a:
do your per-line work here

-- 
Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/
when i wake up with a heart rate below 40, i head right for the espresso
machine. -- chaos @ forums.usms.org
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread Dave Angel

sityee kong wrote:

Hi All,

I have a similar problem that many new python users might encounter. I would
really appreciate if you could help me fix the error.
I have a big text file with size more than 2GB. It turned out memory error
when reading in this file. Here is my python script, the error occurred at
line -- self.fh.readlines().

import math
import time

class textfile:
  def __init__(self,fname):
 self.name=fname
 self.fh=open(fname)
 self.fh.readline()
 self.lines=self.fh.readlines()

a=textfile(/home/sservice/nfbc/GenoData/CompareCalls3.diff)

lfile=len(a.lines)

def myfun(snp,start,end):
  subdata=a.lines[start:end+1]
  NEWmiss=0
  OLDmiss=0
  DIFF=0
  for row in subdata:
 k=row.split()
 if (k[3]==0/0)  (k[4]!=0/0):
NEWmiss=NEWmiss+1
 elif (k[3]!=0/0)  (k[4]==0/0):
OLDmiss=OLDmiss+1
 elif (k[3]!=0/0)  (k[4]!=0/0):
DIFF=DIFF+1
  result.write(snp+ +str(NEWmiss)+ +str(OLDmiss)+ +str(DIFF)+\n)

result=open(Summary_noLoop_diff3.txt,w)
result.write(SNP NEWmiss OLDmiss DIFF\n)

start=0
snp=0
for i in range(lfile):
  if (i==0): continue
  after=a.lines[i].split()
  before=a.lines[i-1].split()
  if (before[0]==after[0]):
if (i!=(lfile-1)): continue
else:
  end=lfile-1
  myfun(before[0],start,end)
  snp=snp+1
  else:
end=i-1
myfun(before[0],start,end)
snp=snp+1
start=i
if (i ==(lfile-1)):
  myfun(after[0],start,start)
  snp=snp+1

result.close()

  sincerely, phoebe

  
Others have pointed out that you have too little memory for a 2gig data 
structure.  If you're running on a 32-bit system, chances are it won't 
matter how much memory you add: a process is limited to 4GB, the OS 
typically takes about half of it, your code and other data take some, 
and you don't have 2GB left.   A 64-bit version of Python, running on a 
64-bit OS, might be able to just work.


Anyway, loading the whole file into a list is seldom the best answer, 
except for files under a meg or so.  It's usually better to process the 
file in sequence.  It looks like you're also making slices of that data, 
so they could potentially be pretty big as well.


If you can be assured that you only need the current line and the 
previous two (for example), then you can use a list of just those three, 
and delete the oldest one, and add a new one to that list each time 
through the loop.
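
A quick sketch of that idea (not the poster's code), using a deque from
Python 2.6+ so that old lines fall off automatically:

from collections import deque

window = deque(maxlen=3)          # the current line plus the two before it
with open("/home/sservice/nfbc/GenoData/CompareCalls3.diff") as f:
    for line in f:
        window.append(line)
        # work with window[-1] (newest) and window[0] (oldest) here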


Or, you can add some methods to that 'textfile' class that fetch a line 
by index.  Brute force, you could pre-scan the file, and record all the 
file offsets for the lines you find, rather than storing the actual 
line.  So you still have just as big a list, but it's a list of 
integers.  Then when somebody calls your method, he passes an integer, 
and you return the particular line.  A little caching for performance, 
and you're good to go.
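
A rough sketch of that offset-index approach (illustrative only, not the
class from the original post):

class IndexedTextFile(object):
    # Pre-scan once, remember where each line starts, seek back on demand.
    def __init__(self, fname):
        self.fh = open(fname, "rb")
        self.offsets = []
        offset = 0
        for line in self.fh:
            self.offsets.append(offset)
            offset += len(line)

    def line(self, i):
        self.fh.seek(self.offsets[i])
        return self.fh.readline()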


Anyway, if you organize it that way, you can code the rest of the module 
to not care whether the whole file is really in memory or not.


BTW, you should derive all your classes from something.  If nothing 
else, use object.

 class textfile(object):


--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread Aaron Scott
 BTW, you should derive all your classes from something.  If nothing
 else, use object.
   class textfile(object):

Just out of curiosity... why is that? I've been coding in Python for
a long time, and I never derive my base classes. What's the advantage
to deriving them?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread Vilya Harvey
2009/7/13 Aaron Scott aaron.hildebra...@gmail.com:
 BTW, you should derive all your classes from something.  If nothing
 else, use object.
   class textfile(object):

 Just out of curiousity... why is that? I've been coding in Python for
 a long time, and I never derive my base classes. What's the advantage
 to deriving them?

class Foo:

uses the old object model.

class Foo(object):

uses the new object model.

See http://docs.python.org/reference/datamodel.html (specifically
section 3.3) for details of the differences.

Vil.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to big input file

2009-07-13 Thread Chris Rebert
On Mon, Jul 13, 2009 at 2:51 PM, Vilya Harveyvilya.har...@gmail.com wrote:
 2009/7/13 Aaron Scott aaron.hildebra...@gmail.com:
 BTW, you should derive all your classes from something.  If nothing
 else, use object.
   class textfile(object):

 Just out of curiousity... why is that? I've been coding in Python for
 a long time, and I never derive my base classes. What's the advantage
 to deriving them?

    class Foo:

 uses the old object model.

    class Foo(object):

 uses the new object model.

 See http://docs.python.org/reference/datamodel.html (specifically
 section 3.3) for details of the differences.

Note that Python 3.0 makes explicitly subclassing `object` unnecessary
since it removes old-style classes; a class that doesn't explicitly
subclass anything will implicitly subclass `object`.

Cheers,
Chris
-- 
http://blog.rebertia.com
-- 
http://mail.python.org/mailman/listinfo/python-list


[issue6311] virtual memory error with archivemail

2009-06-19 Thread helgekraak

New submission from helgekraak he...@kraak.info:

Hi,

I'm neither a Python nor a Unix specialist, so please understand that I
can't give a very professional bug report.

My issue seems to be related to issues 1092502 and 1389051. When I run
archivemail with Python 2.6.2 after a couple of minutes the virtual
memory in the activity monitor suddenly goes up to more than 2 GB and
then the python process quits as it seems to have reached a limit of the
virtual memory (the virtual memory available is about 10 GB). Most of
the time the virtual memory is stable around 40 MB and only sporadically
the memory suddenly increases to values between 100 MB and 2 GB but goes
back to 40 MB again until the one time that it goes so high that the
python process quits. The error happens much faster when I use a non
secured imap connection compared to a secured connection.

I get this error message:

command: archivemail --date='23 April 2030' --include-flagged
--output-dir=/temp  --copy  imaps://*:**...@*/inbox


error message: Python(1238) malloc: *** vm_allocate(size=15396864)
failed (error code=3)
Python(1238) malloc: *** error: can't allocate region
Python(1238) malloc: *** set a breakpoint in szone_error to debug
Traceback (most recent call last):
  File /opt/local/bin/archivemail, line 1603, in ?
main()
  File /opt/local/bin/archivemail, line 702, in main
archive(mailbox_path)
  File /opt/local/bin/archivemail, line 1145, in archive
_archive_imap(mailbox_name, final_archive_name)
  File /opt/local/bin/archivemail, line 1424, in _archive_imap
result, response = imap_srv.fetch(msn, '(RFC822)')
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 426, in fetch
typ, dat = self._simple_command(name, message_set, message_parts)
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 1028, in _simple_command
return self._command_complete(name, self._command(name, *args))
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 858, in _command_complete
typ, data = self._get_tagged_response(tag)
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 959, in _get_tagged_response
self._get_response()
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 921, in _get_response
data = self.read(size)
  File
/opt/local/Library/Frameworks/Python.framework/Versions/2.4/lib/python2.4/imaplib.py,
line 1123, in read
data = self.sslobj.read(size-read)
MemoryError


Thanks for your time!

Helge

--
messages: 89520
nosy: helgekraak
severity: normal
status: open
title: virtual memory error with archivemail
versions: Python 2.6

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6311
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue6311] virtual memory error with archivemail

2009-06-19 Thread Martin v. Löwis

Martin v. Löwis mar...@v.loewis.de added the comment:

Unfortunately, without a much more detailed analysis, I don't think
there is much we can do. I recommend to report this to the author of
archivemail first.

--
nosy: +loewis
resolution:  - wont fix
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6311
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



Re: Memory error due to the huge/huge input file size

2008-11-20 Thread tejsupra
On Nov 10, 4:47 pm, [EMAIL PROTECTED] wrote:
 Hello Everyone,

 I need to read a .csv file which has a size of 2.26 GB . And I wrote a
 Python script , where I need to read this file. And my Computer has 2
 GB RAM Please see the code as follows:

 
 This program has been developed to retrieve all the promoter sequences
 for the specified
 list of genes in the given cluster

 So, this program will act as a substitute to the whole EZRetrieve
 system

 Input arguments:

 1) Cluster.txt or DowRatClust161718bwithDummy.txt
 2) TransProCrossReferenceAndSequences.csv - This is the file that has
 all the promoter sequences
 3) -2000
 4) 500
 

 import time
 import csv
 import sys
 import linecache
 import re
 from sets import Set
 import gc

 print time.localtime()

 fileInputHandler = open(sys.argv[1],r)
 line = fileInputHandler.readline()

 refSeqIDsinTransPro = []
 promoterSequencesinTransPro = []
 reader2 = csv.reader(open(sys.argv[2],rb))
 reader2_list = []
 reader2_list.extend(reader2)

 for data2 in reader2_list:
    refSeqIDsinTransPro.append(data2[3])
 for data2 in reader2_list:
    promoterSequencesinTransPro.append(data2[4])

 while line:
    l = line.rstrip('\n')
    for j in range(1,len(refSeqIDsinTransPro)):
       found = re.search(l,refSeqIDsinTransPro[j])
       if found:
          promoterSequencesinTransPro[j]  
          print l

    line = fileInputHandler.readline()

 fileInputHandler.close()

 The error that I got is given as follows:
 Traceback (most recent call last):
   File RefSeqsToPromoterSequences.py, line 31, in module
     reader2_list.extend(reader2)
 MemoryError

 I understand that the issue is Memory error and it is caused because
 of the  line reader2_list.extend(reader2). Is there any other
 alternative method in reading the .csv file  line by line?

 sincerely,
 Suprabhath

Thanks a lot, James Mills. It worked.

--
http://mail.python.org/mailman/listinfo/python-list


Memory error due to the huge/huge input file size

2008-11-10 Thread tejsupra
Hello Everyone,

I need to read a .csv file which has a size of 2.26 GB, and I wrote a
Python script where I need to read this file. My computer has 2
GB RAM. Please see the code as follows:


This program has been developed to retrieve all the promoter sequences
for the specified
list of genes in the given cluster

So, this program will act as a substitute to the whole EZRetrieve
system

Input arguments:

1) Cluster.txt or DowRatClust161718bwithDummy.txt
2) TransProCrossReferenceAndSequences.csv - This is the file that has
all the promoter sequences
3) -2000
4) 500


import time
import csv
import sys
import linecache
import re
from sets import Set
import gc

print time.localtime()

fileInputHandler = open(sys.argv[1],"r")
line = fileInputHandler.readline()

refSeqIDsinTransPro = []
promoterSequencesinTransPro = []
reader2 = csv.reader(open(sys.argv[2],"rb"))
reader2_list = []
reader2_list.extend(reader2)

for data2 in reader2_list:
   refSeqIDsinTransPro.append(data2[3])
for data2 in reader2_list:
   promoterSequencesinTransPro.append(data2[4])

while line:
   l = line.rstrip('\n')
   for j in range(1,len(refSeqIDsinTransPro)):
  found = re.search(l,refSeqIDsinTransPro[j])
  if found:
 promoterSequencesinTransPro[j]  
 print l

   line = fileInputHandler.readline()


fileInputHandler.close()


The error that I got is given as follows:
Traceback (most recent call last):
  File RefSeqsToPromoterSequences.py, line 31, in module
reader2_list.extend(reader2)
MemoryError

I understand that the issue is a memory error and it is caused by
the line reader2_list.extend(reader2). Is there any alternative
method of reading the .csv file line by line?

sincerely,
Suprabhath
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to the huge/huge input file size

2008-11-10 Thread James Mills
On Tue, Nov 11, 2008 at 7:47 AM,  [EMAIL PROTECTED] wrote:
 refSeqIDsinTransPro = []
 promoterSequencesinTransPro = []
 reader2 = csv.reader(open(sys.argv[2],rb))
 reader2_list = []
 reader2_list.extend(reader2)

Without testing, this looks like you're reading the _ENTIRE_
input stream into memory! Try this:

def readCSV(file):

   if type(file) == str:
  fd = open(file, "rU")
   else:
  fd = file

   sniffer = csv.Sniffer()
   dialect = sniffer.sniff(fd.readline())
   fd.seek(0)

   reader = csv.reader(fd, dialect)
   for line in reader:
  yield line

for line in readCSV(open("foo.csv", "r")):
   ...

--JamesMills

-- 
--
-- Problems are solved by method
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error due to the huge/huge input file size

2008-11-10 Thread John Machin
On Nov 11, 8:47 am, [EMAIL PROTECTED] wrote:

 import linecache

Why???

 reader2 = csv.reader(open(sys.argv[2],rb))
 reader2_list = []
 reader2_list.extend(reader2)

 for data2 in reader2_list:
    refSeqIDsinTransPro.append(data2[3])
 for data2 in reader2_list:
    promoterSequencesinTransPro.append(data2[4])


All you need to do is replace the above by:

reader2 = csv.reader(open(sys.argv[2],"rb"))

for data2 in reader2:
   refSeqIDsinTransPro.append(data2[3])
   promoterSequencesinTransPro.append(data2[4])
--
http://mail.python.org/mailman/listinfo/python-list


Memory Error

2008-08-27 Thread Sibtey Mehdi
I am using the cPickle module for serialization and de-serialization of a heavy
Python object (80 MB). When I try to save the object it gives a memory
error. Can anyone help me out of this problem?

 

I am pickling the object as:

 

def savePklFile(pickleFile, data):  

pickledFile = open(pickleFile, 'wb')

cPickle.dump(data, pickledFile, -1)

pickledFile.close()

 

my system has 2 GB of RAM.

 

Thanks,

Sibtey

--
http://mail.python.org/mailman/listinfo/python-list

Memory error while saving dictionary of size 65000X50 using pickle

2008-07-07 Thread Nagu
I am trying to save a dictionary of size 65000X50 to a local file and
I get the memory error problem.

How do I go about resolving this? Is there a way to partition the pickle
object and combine it later, if this is a problem due to limited resources
(memory) on the machine (it is a 32-bit machine, Win XP, with 4GB RAM)?


Here is the detail description of the error:

Traceback (most recent call last):
  File pyshell#12, line 1, in module
s = pickle.dumps(itemsim)
  File C:\Python25\lib\pickle.py, line 1366, in dumps
Pickler(file, protocol).dump(obj)
  File C:\Python25\lib\pickle.py, line 224, in dump
self.save(obj)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 649, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python25\lib\pickle.py, line 663, in _batch_setitems
save(v)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 600, in save_list
self._batch_appends(iter(obj))
  File C:\Python25\lib\pickle.py, line 615, in _batch_appends
save(x)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 562, in save_tuple
save(element)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 477, in save_float
self.write(FLOAT + repr(obj) + '\n')
MemoryError: out of memory
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error while saving dictionary of size 65000X50 using pickle

2008-07-07 Thread Larry Bates

Nagu wrote:

I am trying to save a dictionary of size 65000X50 to a local file and
I get the memory error problem.

How do I go about resolving this? Is there way to partition the pickle
object and combine later if this is a problem due to limited resources
(memory) on the machine (it is 32 bit machine Win XP, with 4GB RAM).


Here is the detail description of the error:

Traceback (most recent call last):
  File pyshell#12, line 1, in module
s = pickle.dumps(itemsim)
  File C:\Python25\lib\pickle.py, line 1366, in dumps
Pickler(file, protocol).dump(obj)
  File C:\Python25\lib\pickle.py, line 224, in dump
self.save(obj)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 649, in save_dict
self._batch_setitems(obj.iteritems())
  File C:\Python25\lib\pickle.py, line 663, in _batch_setitems
save(v)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 600, in save_list
self._batch_appends(iter(obj))
  File C:\Python25\lib\pickle.py, line 615, in _batch_appends
save(x)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 562, in save_tuple
save(element)
  File C:\Python25\lib\pickle.py, line 286, in save
f(self, obj) # Call unbound method with explicit self
  File C:\Python25\lib\pickle.py, line 477, in save_float
self.write(FLOAT + repr(obj) + '\n')
MemoryError: out of memory


I've generated some rather large pickled dicts and have never seen this 
before.  You can, of course, split your dictionary into several smaller ones, 
pickle them individually and combine them back together on unpickle/read using 
the dictionary update method.
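
A hedged sketch of that split-and-recombine approach (the helper names and
the four-way split are only illustrative):

import cPickle as pickle   # Python 2, as in the thread

def dump_in_parts(d, prefix, parts=4):
    # Write several smaller pickles instead of one huge one.
    keys = list(d)
    for n in range(parts):
        chunk = dict((k, d[k]) for k in keys[n::parts])
        f = open("%s.%d.pkl" % (prefix, n), "wb")
        pickle.dump(chunk, f, -1)
        f.close()

def load_parts(prefix, parts=4):
    # Read the pieces back and merge them with dict.update().
    combined = {}
    for n in range(parts):
        f = open("%s.%d.pkl" % (prefix, n), "rb")
        combined.update(pickle.load(f))
        f.close()
    return combined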


BTW - 32-bit Windows can only address 3.5Gb of memory maximum.  If you have more 
installed, it is ignored.


-Larry
--
http://mail.python.org/mailman/listinfo/python-list


Memory error while saving dictionary using pickle

2008-07-07 Thread Nagu
I am trying to save a dictionary of size 65000X50 to a local file and
I get the memory error problem.

How do I go about resolving this? Is there a way to partition the pickle
object and combine it later, if this is a problem due to limited resources
(memory) on the machine (it is a 32-bit machine, Win XP, with 4GB RAM)?

Please advise.

Thank you,
Nagu

Here is the detail description of the error:

Traceback (most recent call last):
 File pyshell#12, line 1, in module
   s = pickle.dumps(itemsim)
 File C:\Python25\lib\pickle.py, line 1366, in dumps
   Pickler(file, protocol).dump(obj)
 File C:\Python25\lib\pickle.py, line 224, in dump
   self.save(obj)
 File C:\Python25\lib\pickle.py, line 286, in save
   f(self, obj) # Call unbound method with explicit self
 File C:\Python25\lib\pickle.py, line 649, in save_dict
   self._batch_setitems(obj.iteritems())
 File C:\Python25\lib\pickle.py, line 663, in _batch_setitems
   save(v)
 File C:\Python25\lib\pickle.py, line 286, in save
   f(self, obj) # Call unbound method with explicit self
 File C:\Python25\lib\pickle.py, line 600, in save_list
   self._batch_appends(iter(obj))
 File C:\Python25\lib\pickle.py, line 615, in _batch_appends
   save(x)
 File C:\Python25\lib\pickle.py, line 286, in save
   f(self, obj) # Call unbound method with explicit self
 File C:\Python25\lib\pickle.py, line 562, in save_tuple
   save(element)
 File C:\Python25\lib\pickle.py, line 286, in save
   f(self, obj) # Call unbound method with explicit self
 File C:\Python25\lib\pickle.py, line 477, in save_float
   self.write(FLOAT + repr(obj) + '\n')
MemoryError: out of memory
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error while saving dictionary of size 65000X50 using pickle

2008-07-07 Thread Martin v. Löwis
Nagu wrote:
 I am trying to save a dictionary of size 65000X50 to a local file and
 I get the memory error problem.

What do you mean by this size specification? When I interpret X as
multiplication, I can't see a problem: the code

import pickle

d = {}

for i in xrange(65000*50):
    d[i]=i
print "Starting dump"
s = pickle.dumps(d)

works just fine for me. Can you please modify it so that it does cause
a problem?

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error while saving dictionary of size 65000X50 using pickle

2008-07-07 Thread Nagu
I didn't have the problem with dumping as a string. When I tried to
save this object to a file, a memory error popped up.

I am sorry for the mention of size for a dictionary. What I meant by
65000X50 is that it has 65000 keys and each key has a list of 50
tuples.

I was able to save a dictionary object with 65000 keys and a list of
15-tuple values to a file. But I could not do the same when I have a
list of 25-tuple values for 65000 keys.

Your example works just fine on my side.

Thank you,
Nagu
--
http://mail.python.org/mailman/listinfo/python-list


Re: Memory error while saving dictionary of size 65000X50 using pickle

2008-07-07 Thread Martin v. Löwis
 I didn't have the problem with dumping as a string. When I tried to
 save this object to a file, memory error pops up.

That's not what the backtrace says. The backtrace says that the error
occurs inside pickle.dumps() (and it is consistent with the functions
being called, so it's plausible).

 I am sorry for the mention of size for a dictionary. What I meant by
 65000X50 is that it has 65000 keys and each key has a list of 50
 tuples.
[...]
 
 You exmple works just fine on my side.

I can get the program

import pickle

d = {}

for i in xrange(65000):
    d[i]=[(x,) for x in range(50)]
print "Starting dump"
s = pickle.dumps(d)

to complete successfully, also, however, it consumes a lot
of memory. I can reduce memory usage slightly by
a) dumping directly to a file, and
b) using cPickle instead of pickle
i.e.

import cPickle as pickle

d = {}

for i in xrange(65000):
    d[i]=[(x,) for x in range(50)]
print "Starting dump"
pickle.dump(d,open("/tmp/t.pickle","wb"))

The memory consumed originates primarily from the need to determine
shared references. If you are certain that no object sharing occurs
in your graph, you can do
import cPickle as pickle

d = {}

for i in xrange(65000):
    d[i]=[(x,) for x in range(50)]
print "Starting dump"
p = pickle.Pickler(open("/tmp/t.pickle","wb"))
p.fast = True
p.dump(d)

With that, I see no additional memory usage, and pickling completes
really fast.

Regards,
Martin
--
http://mail.python.org/mailman/listinfo/python-list

