>>> ... ColumnarBatch if the columns to scan
>>> are all simple types.
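
A minimal sketch of the condition referred to there: whether every projected column is a simple (non-nested) type. The helper below is hypothetical, not Spark source; it only illustrates the kind of schema check that would gate a ColumnarBatch scan, using the public Spark SQL type classes.

    import org.apache.spark.sql.types.{ArrayType, MapType, StructType}

    // Hypothetical helper: a columnar scan is only possible when every column
    // to scan is a simple type; any struct/array/map column would force the
    // row-by-row path instead.
    def supportsColumnarScan(schema: StructType): Boolean =
      schema.fields.forall { field =>
        field.dataType match {
          case _: StructType | _: ArrayType | _: MapType => false // nested
          case _                                          => true // simple
        }
      }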
>>>
>>> On Tue, Apr 17, 2018 at 11:38 AM, Felix Cheung <
>>> felixcheun...@hotmail.com> wrote:
>>>
>>>> Is it required for DataReader to support all known DataFormat?
>>>>
>>>> How specifically are we going to express the capability of the given
>>>> reader of its supported format(s), or specific support for each of
>>>> “real-time data in row format, and history data in columnar format”?
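
To make the question concrete, here is one hypothetical shape such a capability declaration could take; the DataFormat enumeration and supportedFormats method below are invented for illustration and do not exist in Spark.

    // Invented names, for illustration only.
    object DataFormat extends Enumeration {
      val RowFormat, ColumnarFormat = Value
    }

    trait FormatCapableReader {
      // The formats this reader can actually emit, e.g. row format for the
      // real-time data and columnar format for the history data.
      def supportedFormats: Set[DataFormat.Value]
    }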

------------------------------
From: Wenchen Fan
Sent: Sunday, April 15, 2018 7:45:01 PM
To: Spark dev list
Subject: [discuss][data source v2] remove type parameter in
DataReader/WriterFactory
Hi all,
I'd like to propose an API change to the data source v2.
One design goal of data source v2 is API type safety. The FileFormat API is
a bad example: it asks the implementation to return InternalRow even when it's
actually ColumnarBatch. In data source v2 we add a type parameter to
DataReader/WriterFactory ...
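
For readers skimming the thread, a minimal sketch of where that type parameter sits. The traits below mirror the shape of the Spark 2.3-era data source v2 reader interfaces (a factory parameterized by the record type it creates readers for), but they are standalone stand-ins written for illustration, not the interfaces copied from Spark.

    import java.io.Closeable

    // Stand-ins mirroring the typed v2 reader API shape.
    trait DataReaderFactory[T] extends Serializable {
      def createDataReader(): DataReader[T]
    }

    trait DataReader[T] extends Closeable {
      def next(): Boolean // advance; false once this partition is exhausted
      def get(): T        // the current record, with its real type in the API
    }

    // With the type parameter, an implementation declares up front what it
    // returns (e.g. DataReaderFactory[InternalRow] vs
    // DataReaderFactory[ColumnarBatch]) instead of promising InternalRow
    // while actually producing ColumnarBatch, as FileFormat does.
    class RangeReaderFactory(start: Long, end: Long)
        extends DataReaderFactory[Long] {
      override def createDataReader(): DataReader[Long] = new DataReader[Long] {
        private var current = start - 1
        override def next(): Boolean = { current += 1; current < end }
        override def get(): Long = current
        override def close(): Unit = ()
      }
    }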