[jira] [Commented] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files

2020-05-09 Thread Mike Macpherson (Jira)


[ https://issues.apache.org/jira/browse/ARROW-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17103363#comment-17103363 ]

Mike Macpherson commented on ARROW-8654:


Thank you for this context; it's very helpful.

What would you think of adding documentation on parquet column-count limits to 
the pandas and/or pyarrow docs? I'd be glad to contribute the PR(s) if we can 
pin down what those limits are. That would also be a natural place to note 
that performance may degrade as the number of columns grows.
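
As a strawman for what such a docs example could look like (a sketch made up 
for this comment, not taken from the pandas or pyarrow docs; the shapes and 
column counts are arbitrary), something like this would show the cost curve 
directly:

{code:python}
import time

import numpy as np
import pandas as pd

# Time a parquet round-trip at a few column counts.
for num_cols in (1000, 10000, 40000):
    df = pd.DataFrame(
        np.random.randint(0, 256, size=(100, num_cols)).astype(np.uint8)
    )
    df.columns = df.columns.astype(str)  # parquet requires string column names
    outfile = f"wide_{num_cols}.parquet"

    start = time.perf_counter()
    df.to_parquet(outfile)
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    pd.read_parquet(outfile)
    read_s = time.perf_counter() - start

    print(f"{num_cols:>6} cols: write {write_s:.2f}s, read {read_s:.2f}s")
{code}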

> [Python] pyarrow 0.17.0 fails reading "wide" parquet files
> --
>
> Key: ARROW-8654
> URL: https://issues.apache.org/jira/browse/ARROW-8654
> Project: Apache Arrow
>  Issue Type: Bug
>Reporter: Mike Macpherson
>Priority: Major
>
> {code:python}
> import pandas as pd
> import numpy as np
> num_rows, num_cols = 1000, 45000
> df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))
> outfile = "test.parquet"
> df.to_parquet(outfile)
> del df
> df = pd.read_parquet(outfile)
> {code}
> Yields:
> {noformat}
> df = pd.read_parquet(outfile)
>   File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
>     return impl.read(path, columns=columns, **kwargs)
>   File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
>     path, columns=columns, **kwargs
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
>     partitioning=partitioning)
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
>     self.validate_schemas()
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
>     self.schema = self.pieces[0].get_metadata().schema
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
>     f = self.open()
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
>     reader = self.open_file_func(self.path)
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
>     buffer_size=dataset.buffer_size
>   File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
>     read_dictionary=read_dictionary, metadata=metadata)
>   File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
>   File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
> OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
> {noformat}
> This is pandas 1.0.3 and pyarrow 0.17.0.
>
> I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.
>
> I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.
>
> Thanks for all your work on this project!
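
(Not from the thread, but an interim workaround sketch while on 0.17.0: 
splitting the wide frame into column chunks keeps each file's footer metadata 
small enough to deserialize. The 10,000-column chunk size below is an 
arbitrary choice.)

{code:python}
import numpy as np
import pandas as pd

num_rows, num_cols, chunk = 1000, 45000, 10000

df = pd.DataFrame(
    np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8)
)
df.columns = df.columns.astype(str)  # parquet requires string column names

# Write column slices to separate, narrower files.
paths = []
for start in range(0, num_cols, chunk):
    path = f"test_{start}.parquet"
    df.iloc[:, start:start + chunk].to_parquet(path)
    paths.append(path)

# Read the slices back and reassemble along the columns.
df2 = pd.concat([pd.read_parquet(p) for p in paths], axis=1)
assert df2.shape == (num_rows, num_cols)
{code}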





[jira] [Updated] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files

2020-04-30 Thread Mike Macpherson (Jira)


 [ 
https://issues.apache.org/jira/browse/ARROW-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Macpherson updated ARROW-8654:
---
Description: 
{code:python}
import pandas as pd
import numpy as np

num_rows, num_cols = 1000, 45000

df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))

outfile = "test.parquet"
df.to_parquet(outfile)
del df

df = pd.read_parquet(outfile)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
    path, columns=columns, **kwargs
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
    partitioning=partitioning)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
    self.validate_schemas()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
    self.schema = self.pieces[0].get_metadata().schema
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
    f = self.open()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
    reader = self.open_file_func(self.path)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
    buffer_size=dataset.buffer_size
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
    read_dictionary=read_dictionary, metadata=metadata)
  File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3 and pyarrow 0.17.0.

I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.

I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.

Thanks for all your work on this project!

  was:
{code:python}
import pandas as pd

num_rows, num_cols = 1000, 45000

df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))

outfile = "test.parquet"
df.to_parquet(outfile)
del df

df = pd.read_parquet(outfile)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
    path, columns=columns, **kwargs
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
    partitioning=partitioning)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
    self.validate_schemas()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
    self.schema = self.pieces[0].get_metadata().schema
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
    f = self.open()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
    reader = self.open_file_func(self.path)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
    buffer_size=dataset.buffer_size
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
    read_dictionary=read_dictionary, metadata=metadata)
  File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3 and pyarrow 0.17.0.

I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.

I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.

Thanks for all your work on this project!



[jira] [Updated] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files

2020-04-30 Thread Mike Macpherson (Jira)


 [ 
https://issues.apache.org/jira/browse/ARROW-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Macpherson updated ARROW-8654:
---
Description: 
{code:python}
import pandas as pd

num_rows, num_cols = 1000, 45000

df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))

outfile = "test.parquet"
df.to_parquet(outfile)
del df

df = pd.read_parquet(outfile)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
    path, columns=columns, **kwargs
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
    partitioning=partitioning)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
    self.validate_schemas()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
    self.schema = self.pieces[0].get_metadata().schema
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
    f = self.open()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
    reader = self.open_file_func(self.path)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
    buffer_size=dataset.buffer_size
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
    read_dictionary=read_dictionary, metadata=metadata)
  File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3 and pyarrow 0.17.0.

I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.

I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.

Thanks for all your work on this project!

  was:
{code:python}
import pandas as pd

num_rows, num_cols = 1000, 45000

df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))

outfile = "test.parquet"
df.to_parquet(outfile)
del df

df = pd.read_parquet(fout)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
    path, columns=columns, **kwargs
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
    partitioning=partitioning)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
    self.validate_schemas()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
    self.schema = self.pieces[0].get_metadata().schema
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
    f = self.open()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
    reader = self.open_file_func(self.path)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
    buffer_size=dataset.buffer_size
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
    read_dictionary=read_dictionary, metadata=metadata)
  File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3 and pyarrow 0.17.0.

I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.

I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.

Thanks for all your work on this project!



[jira] [Created] (ARROW-8654) [Python] pyarrow 0.17.0 fails reading "wide" parquet files

2020-04-30 Thread Mike Macpherson (Jira)
Mike Macpherson created ARROW-8654:
--

 Summary: [Python] pyarrow 0.17.0 fails reading "wide" parquet files
 Key: ARROW-8654
 URL: https://issues.apache.org/jira/browse/ARROW-8654
 Project: Apache Arrow
  Issue Type: Bug
Reporter: Mike Macpherson


{code:python}
import numpy as np
import pandas as pd

num_rows, num_cols = 1000, 45000

df = pd.DataFrame(np.random.randint(0, 256, size=(num_rows, num_cols)).astype(np.uint8))

outfile = "test.parquet"
df.to_parquet(outfile)
del df

df = pd.read_parquet(outfile)
{code}
Yields:
{noformat}
df = pd.read_parquet(outfile)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 310, in read_parquet
    return impl.read(path, columns=columns, **kwargs)
  File "/jupyter/venv/lib/python3.6/site-packages/pandas/io/parquet.py", line 125, in read
    path, columns=columns, **kwargs
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1530, in read_table
    partitioning=partitioning)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1189, in __init__
    self.validate_schemas()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1217, in validate_schemas
    self.schema = self.pieces[0].get_metadata().schema
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 662, in get_metadata
    f = self.open()
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 669, in open
    reader = self.open_file_func(self.path)
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 1040, in _open_dataset_file
    buffer_size=dataset.buffer_size
  File "/jupyter/venv/lib/python3.6/site-packages/pyarrow/parquet.py", line 210, in __init__
    read_dictionary=read_dictionary, metadata=metadata)
  File "pyarrow/_parquet.pyx", line 1023, in pyarrow._parquet.ParquetReader.open
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
OSError: Couldn't deserialize thrift: TProtocolException: Exceeded size limit
{noformat}
This is pandas 1.0.3 and pyarrow 0.17.0.

I tried this with pyarrow 0.16.0, and it works; 0.15.1 did as well.

I also tried with 40,000 columns instead of 45,000 as above, and that does work with 0.17.0.

Thanks for all your work on this project!
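
The error is raised while deserializing the file's thrift-encoded footer 
metadata, which grows with the number of columns. A quick way to watch that 
growth (a sketch, not from this report; on 0.17.0 the read_metadata call may 
itself hit the limit once the file is wide enough):

{code:python}
import numpy as np
import pandas as pd
import pyarrow.parquet as pq

for num_cols in (10000, 40000):
    df = pd.DataFrame(
        np.random.randint(0, 256, size=(10, num_cols)).astype(np.uint8)
    )
    df.columns = df.columns.astype(str)  # parquet requires string column names
    path = f"meta_{num_cols}.parquet"
    df.to_parquet(path)

    # FileMetaData.serialized_size reports the size of the thrift footer.
    meta = pq.read_metadata(path)
    print(num_cols, "columns ->", meta.serialized_size, "bytes of footer metadata")
{code}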



--
This message was sent by Atlassian Jira
(v8.3.4#803005)