implementation. Spark will try each class specified
until one of them returns the resource information for that resource. It
tries the discovery script last if none of the plugins return information
for that resource. (Since: 3.0.0)
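For reference, a discovery script is just an executable that prints the resource information as JSON on stdout. A minimal sketch (hard-coding two GPU indices rather than probing real hardware, which a real script would do via something like nvidia-smi):

```python
import json

def discover_gpus(indices):
    # Format the payload a discovery script is expected to print:
    # a JSON object naming the resource and listing its addresses.
    return json.dumps({"name": "gpu", "addresses": [str(i) for i in indices]})

if __name__ == "__main__":
    # Hypothetical hard-coded device indices for illustration only.
    print(discover_gpus([0, 1]))
```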
--
Best Regards,
Ayan Guha
>>>>> ingest it with LinkedIn Gobblin to HDFS / S3.
>>>>>
>>>>> *Approach 2:*
>>>>>
>>>>> Run a scheduled Spark job - read from HBase, do the transformations, and
>>>>> maintain a flag column at the HBase level.
>>>>>
>>>>> In both approaches above, I need to maintain column-level flags, such as
>>>>> 0 - default, 1 - sent, 2 - sent and acknowledged. So next time the producer
>>>>> will take another batch of 1000 rows where the flag is 0 or 1.
>>>>>
>>>>> I am looking for a best-practice approach using any distributed tool.
>>>>>
>>>>> Thanks.
>>>>>
>>>>> - Chetan Khatri
>>>>>
>>>>
>>>>
>>>
>>
>
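The flag handling described above can be sketched in plain Python (the function names and the in-memory row list are hypothetical; in practice the flags would live in an HBase column):

```python
# Flag values as described above: 0 by default, 1 once sent,
# 2 once sent and acknowledged.
PENDING, SENT, ACKED = 0, 1, 2

def next_batch(rows, batch_size=1000):
    # The producer picks up the next batch of rows whose flag is 0 or 1,
    # i.e. anything not yet acknowledged.
    return [r for r in rows if r["flag"] in (PENDING, SENT)][:batch_size]

def mark_sent(rows, batch):
    # After publishing a batch (e.g. to Kafka), flip its flags to SENT;
    # a separate acknowledgement handler would later set them to ACKED.
    sent_ids = {r["id"] for r in batch}
    for r in rows:
        if r["id"] in sent_ids:
            r["flag"] = SENT
```

Rows stay eligible for re-selection until they are acknowledged, which is what keeps an unacknowledged batch from being silently dropped.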
> LOWING // no need to
> specify
>
> If we go with option 2, we should throw exceptions if users specify
> multiple from's or to's. A variant of option 2 is to require explicit
> specification of begin/end even in the case of an unbounded boundary, e.g.:
>
> Window.rowsFromBeginning().rowsTo(-3)
> or
> Window.rowsFromUnboundedPreceding().rowsTo(-3)
>
>
>
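The duplicate-boundary check that option 2 implies can be illustrated with a toy builder. This is a sketch of the proposed semantics only, not Spark's actual API (Spark's existing API expresses the same frame as Window.rowsBetween(start, end)):

```python
UNBOUNDED_PRECEDING = float("-inf")  # stand-in for an unbounded frame start

class WindowSpec:
    """Toy builder enforcing exactly one 'from' and one 'to'."""

    def __init__(self):
        self._start = None
        self._end = None

    def rows_from(self, start):
        if self._start is not None:
            # Option 2: specifying multiple from's is an error.
            raise ValueError("frame start specified more than once")
        self._start = start
        return self

    def rows_to(self, end):
        if self._end is not None:
            # Likewise for multiple to's.
            raise ValueError("frame end specified more than once")
        self._end = end
        return self

# Mirrors Window.rowsFromUnboundedPreceding().rowsTo(-3) from the proposal.
spec = WindowSpec().rows_from(UNBOUNDED_PRECEDING).rows_to(-3)
```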