GitHub user rdblue commented on the issue:
https://github.com/apache/spark/pull/21306
@tigerquoll, the proposal isn't to make partitions part of the table
configuration. It is to make the partitioning scheme part of the table
configuration. How individual partitions are handled is up to each source.
How those partitions are exposed through Spark is a separate API, because
the current v2 data source design covers tables that appear to be
unpartitioned.
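To make the distinction concrete, here is a minimal sketch of what a table
configuration carrying a partitioning scheme might look like. All names here
(`TableConfig`, `Transform`, `ApplyTransform`, etc.) are illustrative
assumptions for this discussion, not the actual v2 API:

```scala
// Hypothetical sketch, not Spark's actual v2 API: the table configuration
// carries the partitioning *scheme* as an expression, while the concrete
// set of partitions stays internal to the source.
case class ColumnRef(name: String)

sealed trait Transform
// identity partitioning on a column, e.g. partitioned by (data)
case class IdentityTransform(col: ColumnRef) extends Transform
// a named function applied to column references and literals
case class ApplyTransform(fn: String, args: Seq[Any]) extends Transform

case class TableConfig(
    name: String,
    schema: Seq[(String, String)],  // column name -> type, simplified
    partitioning: Seq[Transform],   // the scheme, not a partition list
    provider: String)
```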
We could support range partitioning with the strategy that was discussed on
the dev list, where the configuration is a function application with column
references and literals. So your partitioning could be expressed like this:
```sql
create table t (id bigint, ts timestamp, data string)
partitioned by (range(ts, '2016-01-01', '2017-01-01', '2017-06-01'))
using kudu
```
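Reusing the hypothetical types from the sketch above, that DDL would reduce
to a configuration like this (again, illustrative only):

```scala
// The range(...) clause is just a function application over a column
// reference and literal boundaries.
val t = TableConfig(
  name = "t",
  schema = Seq("id" -> "bigint", "ts" -> "timestamp", "data" -> "string"),
  partitioning = Seq(
    ApplyTransform("range",
      Seq(ColumnRef("ts"), "2016-01-01", "2017-01-01", "2017-06-01"))),
  provider = "kudu")

// If the three literals act as split points, they divide ts into four
// ranges; how the source materializes those partitions is its own concern:
//   (-inf, 2016-01-01)       [2016-01-01, 2017-01-01)
//   [2017-01-01, 2017-06-01) [2017-06-01, +inf)
```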