[jira] [Created] (ARROW-1979) [JS] JS builds hanging in es2015:umd tests

2018-01-09 Thread Wes McKinney (JIRA)
Wes McKinney created ARROW-1979:
---
Summary: [JS] JS builds hanging in es2015:umd tests
Key: ARROW-1979
URL: https://issues.apache.org/jira/browse/ARROW-1979
Project: Apache Arrow
Issue Type:

[jira] [Created] (ARROW-1980) [Python] Race condition in `write_to_dataset`

2018-01-09 Thread Jim Crist (JIRA)
Jim Crist created ARROW-1980:
Summary: [Python] Race condition in `write_to_dataset`
Key: ARROW-1980
URL: https://issues.apache.org/jira/browse/ARROW-1980
Project: Apache Arrow
Issue Type: Bug

Board report

2018-01-09 Thread Jacques Nadeau
Hey all, Does anyone want to help draft a board report? Just noticed it is due soon. Thanks Jacques

Re: How to get "standard" binary columns out of a pyarrow table

2018-01-09 Thread Eli
Hey Wes, The database in question accepts columnar chunks of "regular" binary data over the network, one of the sources of which is parquet. Thus, data only comes out of parquet on my side, and I was wondering how to get it out as "regular" binary columns. Something like tobytes() for an Arrow
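A minimal sketch of two ways this could be done with pyarrow, pulling a binary column out of a Table as plain Python bytes; the file name "data.parquet" and column name "payload" are assumptions for illustration, not names from the thread:

    import pyarrow as pa
    import pyarrow.parquet as pq

    # Hypothetical file and column names.
    table = pq.read_table("data.parquet")
    col = table.column("payload")          # ChunkedArray of binary values

    # Option 1: materialize each value as an individual Python bytes object.
    values = [v for chunk in col.chunks for v in chunk.to_pylist()]

    # Option 2: grab the raw Arrow buffers (validity bitmap, offsets, data)
    # for each chunk instead of copying value by value.
    for chunk in col.chunks:
        validity, offsets, data = chunk.buffers()
        raw = data.to_pybytes()            # contiguous bytes of all values in the chunk

Option 2 hands you the Arrow memory layout directly (offsets plus a single data buffer), which may be closer to the "columnar chunks of binary data" the database expects than a Python list of bytes objects.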

Re: How to get "standard" binary columns out of a pyarrow table

2018-01-09 Thread Wes McKinney
hi Eli, I'm wondering what kind of API you would want, if the perfect one existed. If I understand correctly, you are embedding objects in a BYTE_ARRAY column in Parquet, and need to do some post-processing as the data goes in / comes out of Parquet? Thanks, Wes On Sat, Jan 6, 2018 at 8:37 AM,
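A minimal sketch of the pattern Wes is asking about: objects serialized into a binary (BYTE_ARRAY) column on the way into Parquet and decoded again on the way out. The use of pickle and the names "objects.parquet" / "payload" are assumptions for illustration only:

    import pickle
    import pyarrow as pa
    import pyarrow.parquet as pq

    # Going in: serialize each object to bytes and store them in a binary column.
    objects = [{"id": 1}, {"id": 2}]       # hypothetical payload
    encoded = pa.array([pickle.dumps(o) for o in objects], type=pa.binary())
    pq.write_table(pa.table({"payload": encoded}), "objects.parquet")

    # Coming out: the column reads back as binary; post-process to recover objects.
    col = pq.read_table("objects.parquet").column("payload")
    decoded = [pickle.loads(v) for chunk in col.chunks for v in chunk.to_pylist()]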

Re: JDBC Adapter for Apache-Arrow

2018-01-09 Thread Jacques Nadeau
We have some stuff in Dremio that we've planned on open sourcing but haven't yet done so. We should try to get that out for others to consume. On Jan 7, 2018 11:49 AM, "Uwe L. Korn" wrote: > Has anyone made progress on the JDBC adapter yet? > > I recently came across a lot