I think explaining the rationale would help clarify the situation.

It's related to skipping the conversion of params to lowercase before
assigning them to the Kafka parameters
(https://github.com/apache/spark/pull/23612). If the interface(s) guarantee
lowercase keys, we can simply pass them through to Kafka as-is; if not, we
may want to convert them to lowercase ourselves to ensure safety.

On Fri, Jan 25, 2019 at 3:27 AM, Joseph Torres <joseph.tor...@databricks.com> wrote:

> I wouldn't be opposed to also documenting that we canonicalize the keys as
> lowercase, but the case-insensitivity is I think the primary property. It's
> important to call out that data source developers don't have to worry about
> a semantic difference between option("mykey", "value") and option("myKey",
> "value").
>
> On Thu, Jan 24, 2019 at 9:58 AM Gabor Somogyi <gabor.g.somo...@gmail.com>
> wrote:
>
>> Hi All,
>>
>> Given org.apache.spark.sql.sources.v2.DataSourceOptions which states the
>> following:
>>
>> * An immutable string-to-string map in which keys are case-insensitive. This 
>> is used to represent
>> * data source options.
>>
>> Case-insensitivity can be achieved in many ways. The implementation
>> provides a lowercase solution.
>>
>> I've seen code that takes advantage of this implementation detail.
>> My questions are:
>>
>> 1. As the class only states case-insensitive, is the lowercase behavior
>> subject to change?
>> 2. If it's not subject to change, wouldn't it be better to change
>> case-insensitive to lowercase or something similar?
>>
>> I've seen a similar pattern on other interfaces...
>>
>> Thanks in advance!
>>
>> BR,
>> G
>>
>>