Is it required for DataReader to support all known DataFormats?

Hopefully not, as assumed by the `throw` in the interface. But then how, specifically, are we going to express the capability of a given reader, i.e., which format(s) it supports, or specific support for each of "real-time data in row format, and history data in columnar format"?
From: Wenchen Fan <cloud0...@gmail.com>
Sent: Sunday, April 15, 2018 7:45:01 PM
To: Spark dev list
Subject: [discuss][data source v2] remove type parameter in DataReader/WriterFactory
Hi all,

I'd like to propose an API change to the data source v2.

One design goal of data source v2 is API type safety. The FileFormat API is a bad example: it asks the implementation to return InternalRow even when the data is actually a ColumnarBatch. In data source v2 we add a type parameter to
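The type-safety point above can be sketched with hypothetical stand-in types (these are not Spark's actual interfaces): an untyped reader forces the caller to cast blindly, while a type parameter makes the produced type part of the compile-time contract.

```java
public class TypeSafetySketch {
    // Stand-ins for Spark's InternalRow and ColumnarBatch.
    static class InternalRow {}
    static class ColumnarBatch {}

    // Untyped variant: the declared contract says nothing about what is
    // really produced, so the engine must guess and cast at runtime.
    interface UntypedReader {
        Object next();
    }

    // Typed variant: the type parameter states exactly what is produced.
    interface TypedReader<T> {
        T next();
    }

    public static void main(String[] args) {
        // A columnar reader declares ColumnarBatch; no cast is needed and a
        // mismatch would be a compile error rather than a runtime failure.
        TypedReader<ColumnarBatch> columnar = ColumnarBatch::new;
        ColumnarBatch batch = columnar.next();
        System.out.println(batch != null); // prints "true"
    }
}
```

The trade-off under discussion is between this per-reader type parameter and a single untyped interface plus a capability/format negotiation, as raised in the question at the top of the thread.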