[ https://issues.apache.org/jira/browse/ARROW-2879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16790190#comment-16790190 ]

Wenjun Si commented on ARROW-2879:
----------------------------------

Excuse me for the delay. We tested the issue with pyarrow 0.12.1, installed with 
Anaconda as follows:
{code}
conda create -n pyarrow12 python=3.7 numpy
conda activate pyarrow12
conda install -c conda-forge pyarrow=0.12.1
{code}
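The installed version can be confirmed with the snippet below (just a sanity check 
of the environment, not part of the reproduction):
{code}
import pyarrow

print(pyarrow.__version__)  # expected to print 0.12.1 in this environment
{code}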
We then reproduced the issue with the code below:
{code}
import pyarrow.lib
import pyarrow.plasma as plasma

for m_size in range(1, 30):
    # start a store whose requested capacity is m_size GiB
    alloc_total = m_size * 1024 ** 3
    with plasma.start_plasma_store(alloc_total) as (sckt, _):
        client = plasma.connect(sckt, '', 0)
        usable_size = 0
        objs = []
        alloc_unit = 4 * 1024 ** 2  # allocate 4 MiB per object
        while True:
            obj_id = plasma.ObjectID.from_random()
            try:
                buf = client.create(obj_id, alloc_unit)
            except pyarrow.lib.PlasmaStoreFull:
                # the store refuses to allocate any further objects
                break
            client.seal(obj_id)

            usable_size += alloc_unit
            [buf] = client.get_buffers([obj_id])
            objs.append(buf)
        objs = []  # drop buffer references before the store shuts down
        print((alloc_total, usable_size))
{code}
The output printed before an out-of-memory error was raised is shown below:
{code}
(1073741824, 528482304)
(2147483648, 1061158912)
(3221225472, 2130706432)
(4294967296, 2130706432)
(5368709120, 4273995776)
(6442450944, 4273995776)
(7516192768, 4273995776)
(8589934592, 4273995776)
(9663676416, 8564768768)
(10737418240, 8564768768)
{code}
It seems that the memory size that can actually be allocated is aligned to roughly 
a power of two (2 ** k).
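To make that concrete, here is a quick check (not part of the original test, just a 
sanity check on the numbers printed above) of how close each usable size is to the 
next power of two:
{code}
import math

# (store size requested, usable size measured) pairs from the output above
results = [
    (1073741824, 528482304),
    (2147483648, 1061158912),
    (3221225472, 2130706432),
    (4294967296, 2130706432),
    (5368709120, 4273995776),
    (6442450944, 4273995776),
    (7516192768, 4273995776),
    (8589934592, 4273995776),
    (9663676416, 8564768768),
    (10737418240, 8564768768),
]

for alloc_total, usable in results:
    pow2 = 1 << math.ceil(math.log2(usable))  # smallest power of two >= usable
    print(alloc_total, usable, pow2, '{:.1f}%'.format(100.0 * usable / pow2))
{code}
Every measured usable size lands within a couple of percent of a power of two, 
regardless of the capacity requested for the store.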

The reason why we stopped using *use_one_memory_mapped_file* is that it fails to 
work when we allocate large memory sizes, for instance, 1GB.

> [Python] Arrow plasma can only use a small part of specified shared memory
> --------------------------------------------------------------------------
>
>                 Key: ARROW-2879
>                 URL: https://issues.apache.org/jira/browse/ARROW-2879
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>            Reporter: chineking
>            Priority: Major
>             Fix For: 0.14.0
>
>
> Hi, thanks for the great work on arrow, it helps us a lot.
> However, we encountered a problem when using plasma.
> The sample code:
> {code:python}
> import numpy as np
> import pyarrow as pa
> import pyarrow.plasma as plasma
> client = plasma.connect("/tmp/plasma", "", 0)
> puts = []
> nbytes = 0
> while True:
>     a = np.ones((1000, 1000))
>     try:
>         oid = client.put(a)
>         puts.append(client.get(oid))
>         nbytes += a.nbytes
>     except pa.lib.PlasmaStoreFull:
>         print('use nbytes', nbytes)
>         break
> {code}
> We started a plasma store with 1 GB of memory, but the nbytes output above is only 
> 496000000, which does not even reach half of the memory we specified.
> I cannot figure out why plasma can only use such a small part of the shared 
> memory. Could anybody help me? Thanks a lot.


