do you suggest doing something like

for a in xc:
    for b in yc:
        newphi.setValue(phi((array([a]), array([b]))))

this is horribly slow...
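Benny's "chunks" suggestion can be done without a Python-level double loop: slice the array of new cell centers and run the vectorized nearest-cell search on one slice at a time, so the intermediate distance array stays small. Below is a minimal pure-numpy sketch of the idea; the function name, the chunk size, and the final `setValue` step are my assumptions, not FiPy API:

```python
import numpy as np

def nearest_cell_ids_chunked(cellcenters, points, chunksize=256):
    """For each query point, return the index of the nearest cell center.

    cellcenters has shape (dim, M), points has shape (dim, N).
    Processing the query points chunk by chunk keeps the intermediate
    distance array at size (M, chunksize) instead of (M, N), which is
    what blows up the memory in the all-at-once version.
    """
    n = points.shape[1]
    ids = np.empty(n, dtype=int)
    for start in range(0, n, chunksize):
        chunk = points[:, start:start + chunksize]            # (dim, c)
        diff = cellcenters[:, :, None] - chunk[:, None, :]    # (dim, M, c)
        dist2 = (diff ** 2).sum(axis=0)                       # (M, c)
        ids[start:start + chunksize] = dist2.argmin(axis=0)
    return ids
```

With the ids in hand, something like `newphi.setValue(phi.getValue()[ids])` should give the same piecewise-constant interpolation onto the new mesh (again an assumption about the FiPy 2.x accessor names; I have not tried it).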

On Tue, Feb 8, 2011 at 3:52 PM, Benny Malengier
<[email protected]> wrote:

>
>
> 2011/2/8 Julien Derr <[email protected]>
>
>> Thanks very much Benny for the explanation, but what do you mean by
>> working in chunks?
>>
>> I have a variable newphi, defined on a new mesh, and phi, the old
>> variable that was defined on the original mesh.
>>
>> How can I copy phi to newphi in "chunks"? In any case, the huge array
>> needs to be created, no?
>>
>
> The problem is not in the final array, but in the intermediate ones. I see
> this in the error:
>
> nearestCellIDs = self.getMesh()._getNearestCellID(points)
>
> So my guess is that every new point has to be compared with all mesh points
> to extract the nearest one. If you do one point at a time, this is
> doable; 10 points at a time also, ..., but not all points in one go.
> On a rectangular regular grid this is also easy, but not on an
> unstructured one.
>
> Benny
>
>
>> Julien
>>
>>
>>
>>
>>
>> On Tue, Feb 8, 2011 at 2:47 PM, Benny Malengier <
>> [email protected]> wrote:
>>
>>> You do
>>>
>>> ... value=phi(mesh.getCellCenters()))
>>>
>>> This is what causes the problem. You can probably work around it by
>>> creating value yourself.
>>> FiPy uses some vectorized algorithms, which work great but have the
>>> disadvantage that there must be enough memory to allocate the required
>>> arrays.
>>> In this case, by working in chunks to create value, you can work around
>>> this (I believe; I did not try anything).
>>>
>>> Benny
>>>
>>> 2011/2/8 Julien Derr <[email protected]>
>>>
>>> Hi everyone,
>>>>
>>>> I am dealing with an irregular mesh defined between a big circle and a
>>>> small polygon.
>>>>
>>>> It looks like I run into memory issues when the polygon has more than a
>>>> few hundred sides. Is that a typical limitation of FiPy? Is there a way
>>>> to deal with big polygonal shapes?
>>>>
>>>> See the error below, if you can make sense of it:
>>>>
>>>> Traceback (most recent call last):
>>>>   File "growth.py", line 175, in <module>
>>>>     newphi = CellVariable(name = "solution variable",    mesh = mesh,
>>>> value=phi(mesh.getCellCenters()))
>>>>   File "/usr/lib/pymodules/python2.6/fipy/variables/cellVariable.py",
>>>> line 197, in __call__
>>>>     nearestCellIDs = self.getMesh()._getNearestCellID(points)
>>>>   File "/usr/lib/pymodules/python2.6/fipy/meshes/common/mesh.py", line
>>>> 772, in _getNearestCellID
>>>>     return numerix.argmin(numerix.dot(tmp, tmp, axis = 0), axis=0)
>>>>   File "/usr/lib/pymodules/python2.6/fipy/tools/numerix.py", line 844,
>>>> in dot
>>>>     return sum(a1*a2, axis)
>>>>   File "/usr/lib/pymodules/python2.6/fipy/tools/numerix.py", line 241,
>>>> in sum
>>>>     return NUMERIX.sum(arr, axis)
>>>>   File "/usr/lib/python2.6/dist-packages/numpy/core/fromnumeric.py",
>>>> line 1252, in sum
>>>>     return sum(axis, dtype, out)
>>>> MemoryError
>>>>
>>>> thanks for your help !
>>>>
>>>> Julien
>>>>
>>>> PS: I am trying to reproduce a typical DLA/Saffman-Taylor problem, so I
>>>> guess the alternative would be to use a simple rectangular lattice and a
>>>> phase parameter to determine what is solid and what is not (like in the
>>>> dendritic solidification example in the FiPy help).
>>>> But the issue then is to make some diffusion happen (for another field
>>>> c) only in the region where phi=0. Is that possible?
>>>>
>>>>
>>>>
>>>
>>
>
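On the PS question (diffusion of c only where phi=0): FiPy allows a spatially varying diffusion coefficient, so a common trick is to multiply D by a mask that vanishes in the solid phase. The sketch below illustrates the idea with a plain 1-D explicit finite-difference step in numpy rather than FiPy itself; the function name and all parameter choices are made up for illustration. Flux through any face touching a solid cell is simply zeroed, so the solid region neither gains nor loses c:

```python
import numpy as np

def masked_diffusion_step(c, solid, D=1.0, dx=1.0, dt=0.1):
    """One explicit diffusion step for c, with flux blocked through
    cells flagged as solid (boolean array, same length as c).

    The face diffusivity is zero whenever either neighboring cell is
    solid, so solid cells are left untouched and total mass of c is
    conserved.
    """
    # Diffusivity on interior faces: zero if either side is solid.
    Dface = np.where(solid[:-1] | solid[1:], 0.0, D)
    flux = -Dface * (c[1:] - c[:-1]) / dx    # Fick's law, flux in +x at faces
    dc = np.zeros_like(c)
    dc[:-1] -= flux * dt / dx                # loss through each cell's right face
    dc[1:] += flux * dt / dx                 # gain through each cell's left face
    return c + dc
```

In FiPy proper the analogous move would be a spatially varying coefficient, roughly `DiffusionTerm(coeff=D0 * (phase < 0.5))`, so the equation is still solved on the whole lattice but transport through the solid region is suppressed (the 0.5 threshold and variable names are assumptions about your setup).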
