de more details?
>>
>> The reason we want to disable the LZ4 test is because it requires the
>> native LZ4 library when running with Hadoop 2.x, which the Spark CI doesn't
>> have.
>>
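(Editorial sketch) Spark's own suites use ScalaTest, but the gating idea described above — only run the LZ4 test when the environment can support it — can be sketched in a few lines of Python. All names here, such as `should_run_lz4_test`, are hypothetical helpers for illustration, not Spark's actual API:

```python
def should_run_lz4_test(hadoop_major: int, native_lz4_available: bool) -> bool:
    """Decide whether the LZ4 codec test can run in this environment.

    Hypothetical rule, per the discussion above: with Hadoop 3.x the codec
    works without the native library, while Hadoop 2.x needs the native LZ4
    library installed on the host (which the Spark CI machines lack).
    """
    return hadoop_major >= 3 or native_lz4_available
```

A test harness would call this inside a skip/assume guard before exercising the codec, so the suite is skipped rather than failing on CI hosts without the library.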
>> On Tue, Sep 21, 2021 at 3:46 PM Venkatakrishnan Sowrirajan <
>> vs
Hi Chao,
But there are tests failing in core as well, for
example org.apache.spark.FileSuite. These tests pass in 3.1, so why do
you think we should disable them for Hadoop versions < 3.x?
Regards
Venkata krishnan
On Tue, Sep 21, 2021 at 3:33 PM Chao Sun wrote:
> I just created SPARK
I have created a JIRA (https://issues.apache.org/jira/browse/SPARK-36810)
to track this issue and will look into it further in the coming days.
Regards
Venkata krishnan
On Tue, Sep 7, 2021 at 5:57 AM Steve Loughran
wrote:
> FileContext came in Hadoop 2.x with a cleaner split of client API
+1. Interesting indeed :)
Regards
Venkata krishnan
On Mon, Sep 14, 2020 at 11:14 AM Xingbo Jiang wrote:
> +1 This is an exciting new feature!
>
> On Sun, Sep 13, 2020 at 8:00 PM Mridul Muralidharan
> wrote:
>
>> Hi,
>>
>> I'd like to call for a vote on SPARK-30602 - SPIP: Support push-based
>
I think Spark itself doesn't allow the DFOC (DirectFileOutputCommitter) when
append mode is enabled. So DFOC works only for insert-overwrite
queries/overwrite mode, not for append mode.
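(Editorial sketch) The rule above follows from how a direct committer works: it writes straight to the final location, which is only safe when the query owns the whole output. This hypothetical helper, not Spark's internal check, captures that condition:

```python
def direct_committer_allowed(save_mode: str) -> bool:
    # A direct output committer skips the temporary/staging directory and
    # writes to the destination path immediately. That is only safe when the
    # job replaces the output wholesale (overwrite); with append, a failed
    # task could leave partial files mixed into pre-existing data.
    return save_mode.lower() == "overwrite"
```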
Regards
Venkata krishnan
On Fri, Jun 16, 2017 at 9:35 PM, sririshindra
wrote:
> Hi Ryan and Steve,
>
> Thanks very much for your reply.
>
>
Hi Rachana,
Are you by any chance doing something like this in your code:
"sparkConf.setMaster("yarn-cluster");"
Creating a SparkContext with yarn-cluster mode set this way is not supported.
I think you are hitting this bug:
https://issues.apache.org/jira/browse/SPARK-7504. This got fixed in
Spark 1.4.0.
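(Editorial sketch) Cluster deploy mode is normally selected on the spark-submit command line rather than via setMaster in driver code. This hypothetical helper just assembles such an invocation; the jar and class names are placeholders:

```python
def spark_submit_cluster_args(app_jar: str, main_class: str) -> list[str]:
    # Select YARN cluster mode at submit time instead of calling
    # sparkConf.setMaster("yarn-cluster") inside the application.
    return [
        "spark-submit",
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--class", main_class,
        app_jar,
    ]
```

The returned list could be passed to a process launcher (e.g. subprocess) on a host with Spark installed.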