[
https://issues.apache.org/jira/browse/BEAM-11742?focusedWorklogId=575805&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-575805
]
ASF GitHub Bot logged work on BEAM-11742:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 01/Apr/21 22:48
Start Date: 01/Apr/21 22:48
Worklog Time Spent: 10m
Work Description: TheNeuralBit commented on a change in pull request
#14335:
URL: https://github.com/apache/beam/pull/14335#discussion_r605958809
##########
File path: sdks/python/apache_beam/io/parquetio_test.py
##########
@@ -104,14 +104,19 @@ def setUp(self):
'name': 'Percy',
'favorite_number': 6,
'favorite_color': 'Green'
+ },
+ {
+ 'name': 'Peter',
+ 'favorite_number': 3,
+ 'favorite_color': None
}]
- self.SCHEMA = pa.schema([('name', pa.string()),
- ('favorite_number', pa.int64()),
+ self.SCHEMA = pa.schema([('name', pa.string(), False),
+ ('favorite_number', pa.int64(), False),
Review comment:
Thanks for testing :)
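For context, a minimal sketch of what the new 3-tuples in the test expand to:
pa.schema() accepts (name, type, nullable) tuples as pa.field() arguments. The
favorite_color field sits outside the hunk shown above, so keeping it
nullable=True here is an assumption based on the None value added to the test
data.
```python
import pyarrow as pa

# Two equivalent spellings of the test schema: pa.schema() turns each
# (name, type, nullable) tuple into a pa.field(name, type, nullable).
tuple_schema = pa.schema([('name', pa.string(), False),
                          ('favorite_number', pa.int64(), False),
                          ('favorite_color', pa.string(), True)])
field_schema = pa.schema([
    pa.field('name', pa.string(), nullable=False),
    pa.field('favorite_number', pa.int64(), nullable=False),
    # Not shown in the diff hunk above; assumed to stay nullable.
    pa.field('favorite_color', pa.string(), nullable=True),
])

assert tuple_schema.equals(field_schema)
```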
##########
File path: sdks/python/apache_beam/io/parquetio.py
##########
@@ -570,7 +570,7 @@ def _flush_buffer(self):
for x, y in enumerate(self._buffer):
arrays[x] = pa.array(y, type=self._schema.types[x])
self._buffer[x] = []
- rb = pa.RecordBatch.from_arrays(arrays, self._schema.names)
+ rb = pa.RecordBatch.from_arrays(arrays, schema=self._schema)
Review comment:
I had to remind myself of why the old way (just specifying the type on
each array) doesn't work. Figured I should leave an explanation here for
posterity.
It's because nullability is tracked as part of the `Field`, not part of the
`Type`:
https://github.com/apache/arrow/blob/6e29200cebeb94f6014c72d25b1bc3a1be9cff1c/format/Schema.fbs#L348-L356
(A Schema is just a list of Fields)
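A minimal sketch of that distinction with plain pyarrow (the sample arrays are
made up here): the nullable flag is carried by each `Field`, so `from_arrays()`
with only names rebuilds nullable=True fields from the array types, while
`from_arrays(schema=...)` keeps the original fields.
```python
import pyarrow as pa

schema = pa.schema([
    pa.field('name', pa.string(), nullable=False),
    pa.field('favorite_color', pa.string(), nullable=True),
])
arrays = [
    pa.array(['Percy', 'Peter'], type=pa.string()),
    pa.array(['Green', None], type=pa.string()),
]

# Names only: fields are rebuilt from the array types, so nullability is
# lost and every field in the batch schema comes back nullable=True.
rb_names = pa.RecordBatch.from_arrays(arrays, schema.names)
print(rb_names.schema.field('name').nullable)   # True

# Passing the schema keeps the original Field objects, nullable flags included.
rb_schema = pa.RecordBatch.from_arrays(arrays, schema=schema)
print(rb_schema.schema.field('name').nullable)  # False
```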
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 575805)
Time Spent: 1h (was: 50m)
> Use schema when creating record batch in ParquetSink
> ----------------------------------------------------
>
> Key: BEAM-11742
> URL: https://issues.apache.org/jira/browse/BEAM-11742
> Project: Beam
> Issue Type: Improvement
> Components: io-py-parquet
> Reporter: Wenbing Bai
> Assignee: Wenbing Bai
> Priority: P2
> Time Spent: 1h
> Remaining Estimate: 0h
>
> Before pyarrow 0.15, it was not possible to create a pyarrow record batch
> with a schema. So in apache_beam.io.parquetio._ParquetSink, when creating a
> pyarrow record batch we use
>
> {code:python}
> rb = pa.RecordBatch.from_arrays(arrays, self._schema.names){code}
> An error is raised because the parquet table to be created (from the record
> batch schema) has a different schema from the one specified (self._schema).
> For example, when the specified schema marks a field as not null, the record
> batch schema does not carry that information, and the error is raised.
>
> The fix is to pass schema instead of names to pa.RecordBatch.from_arrays:
> {code:python}
> rb = pa.RecordBatch.from_arrays(arrays, schema=self._schema){code}
>
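A standalone sketch of the mismatch described above, using plain pyarrow
instead of Beam's _ParquetSink (the in-memory sink and the sample data are
illustrative only, not from the original report):
{code:python}
import io

import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([pa.field('name', pa.string(), nullable=False)])
arrays = [pa.array(['Percy', 'Peter'], type=pa.string())]
writer = pq.ParquetWriter(io.BytesIO(), schema)

# Names only: the batch's field defaults to nullable=True, its schema no
# longer equals `schema`, and write_table() raises a ValueError about the
# table schema not matching the schema used to create the file.
rb = pa.RecordBatch.from_arrays(arrays, schema.names)
try:
    writer.write_table(pa.Table.from_batches([rb]))
except ValueError as exc:
    print(exc)

# With schema= the nullable=False field is preserved and the write succeeds.
rb = pa.RecordBatch.from_arrays(arrays, schema=schema)
writer.write_table(pa.Table.from_batches([rb]))
writer.close()
{code}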
--
This message was sent by Atlassian Jira
(v8.3.4#803005)