spark.sql.hive.exec.dynamic.partition description

2019-04-29 Thread Mike Chan
Hi Guys,

Does anyone have detailed descriptions of the Hive parameters in Spark, such
as spark.sql.hive.exec.dynamic.partition? I couldn't find any reference to it
in my Spark 2.3.2 configuration.

I'm looking into a problem where Spark doesn't seem to understand Hive
partitioning at all. My Hive table has 1,000 partitions; however, when I read
the same table in Spark as an RDD, it becomes 105 partitions if I query
df.rdd.getNumPartitions()

Because Spark creates one task per partition on read, reading is painfully
slow: a single task ends up reading many Hive folders sequentially. My goal is
to spin up more tasks and increase parallelism during read operations. Hope
this makes sense.
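
For reference, a minimal Scala sketch of how I check the partition count and
the blunt workaround I would rather avoid (the table name and target
partition count below are placeholders):

import org.apache.spark.sql.SparkSession

// Placeholder table name and partition count -- adjust for the actual setup.
val spark = SparkSession.builder()
  .appName("partition-parallelism-check")
  .enableHiveSupport()
  .getOrCreate()

val df = spark.table("db.my_table")
println(s"partitions on read: ${df.rdd.getNumPartitions}")   // reports 105 in my case

// Workaround: repartition after the read (adds a shuffle), whereas I'd prefer
// more parallelism at read time. For file-based reads, lowering
// spark.sql.files.maxPartitionBytes can also produce more input splits.
val repartitioned = df.repartition(1000)
println(s"partitions after repartition: ${repartitioned.rdd.getNumPartitions}")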

Thank you

Best Regards,
Mike



Re: Anaconda installation with Pyspark/Pyarrow (2.3.0+) on cloudera managed server

2019-04-29 Thread Rishi Shah
I modified the subject and would like to clarify that I am looking to create
an Anaconda parcel with pyarrow and other libraries, so that I can distribute
it on the Cloudera cluster.

On Tue, Apr 30, 2019 at 12:21 AM Rishi Shah 
wrote:

> Hi All,
>
> I have been trying to figure out a way to build an Anaconda parcel with
> pyarrow included for distribution on my Cloudera-managed cluster, but this
> doesn't seem to work right. Could someone please help?
>
> I have tried installing Anaconda on one of the management nodes of the
> Cloudera cluster... and tarring the directory, but that directory doesn't
> include all the packages needed to form a proper parcel for distribution.
>
> Any help is much appreciated!
>
> --
> Regards,
>
> Rishi Shah
>


-- 
Regards,

Rishi Shah


Anaconda installation with Pyspark on cloudera managed server

2019-04-29 Thread Rishi Shah
Hi All,

I have been trying to figure out a way to build an Anaconda parcel with
pyarrow included for distribution on my Cloudera-managed cluster, but this
doesn't seem to work right. Could someone please help?

I have tried installing Anaconda on one of the management nodes of the
Cloudera cluster... and tarring the directory, but that directory doesn't
include all the packages needed to form a proper parcel for distribution.

Any help is much appreciated!

-- 
Regards,

Rishi Shah


Re: [EXT] handling skewness issues

2019-04-29 Thread Jules Damji
Yes, indeed! A few of the developer talks and deep-dive sessions address the
data skew issue and how to deal with it.

I shall let the group know when the talk sessions are available.

Cheers 
Jules

Sent from my iPhone
Pardon the dumb thumb typos :)

> On Apr 29, 2019, at 2:13 PM, Michael Mansour  
> wrote:
> 
> There were recently some fantastic talks about this at the SparkSummit 
> conference in San Francisco.  I suggest you check out the SparkSummit YouTube 
> channel after May 9th for a deep dive into this topic.
>  
> From: rajat kumar 
> Date: Monday, April 29, 2019 at 9:34 AM
> To: "user@spark.apache.org" 
> Subject: [EXT] handling skewness issues
>  
> Hi All,
>  
> How do we overcome data skew issues in Spark?
>  
> I read that we can add some randomness to the key column before the join
> and remove that random part after the join.
>  
> Is there any better way? The above method seems like a workaround.
>  
> thanks 
> rajat


Re: Handle Null Columns in Spark Structured Streaming Kafka

2019-04-29 Thread Jason Nerothin
See also here:
https://stackoverflow.com/questions/44671597/how-to-replace-null-values-with-a-specific-value-in-dataframe-using-spark-in-jav

On Mon, Apr 29, 2019 at 5:27 PM Jason Nerothin 
wrote:

> Spark SQL has had an na.fill function since at least 2.1. Would that
> work for you?
>
>
> https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/DataFrameNaFunctions.html
>
> On Mon, Apr 29, 2019 at 4:57 PM Shixiong(Ryan) Zhu <
> shixi...@databricks.com> wrote:
>
>> Hey Snehasish,
>>
>> Do you have a reproducer for this issue?
>>
>> Best Regards,
>> Ryan
>>
>>
>> On Wed, Apr 24, 2019 at 7:24 AM SNEHASISH DUTTA 
>> wrote:
>>
>>> Hi,
>>>
>>> While writing to Kafka using Spark Structured Streaming, if all the
>>> values in a certain column are null, the column gets dropped.
>>> Is there any way to override this, other than using the na.fill functions?
>>>
>>> Regards,
>>> Snehasish
>>>
>>
>
> --
> Thanks,
> Jason
>


-- 
Thanks,
Jason


Re: Handle Null Columns in Spark Structured Streaming Kafka

2019-04-29 Thread Jason Nerothin
Spark SQL has had an na.fill function since at least 2.1. Would that
work for you?

https://spark.apache.org/docs/2.1.0/api/java/org/apache/spark/sql/DataFrameNaFunctions.html
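
A minimal sketch of what I mean, assuming a DataFrame with a string column
"key" and a numeric column "count" (the column names and replacement values
are only placeholders):

import org.apache.spark.sql.DataFrame

// Replace nulls per column before writing, so an all-null column is not
// silently dropped downstream; names and defaults here are placeholders.
def replaceNulls(df: DataFrame): DataFrame =
  df.na.fill(Map("key" -> "missing", "count" -> 0))

Applied just before building the Kafka payload (e.g. before a to_json over
the output columns), this should keep the column present even when every
value was null.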

On Mon, Apr 29, 2019 at 4:57 PM Shixiong(Ryan) Zhu 
wrote:

> Hey Snehasish,
>
> Do you have a reproducer for this issue?
>
> Best Regards,
> Ryan
>
>
> On Wed, Apr 24, 2019 at 7:24 AM SNEHASISH DUTTA 
> wrote:
>
>> Hi,
>>
>> While writing to Kafka using Spark Structured Streaming, if all the
>> values in a certain column are null, the column gets dropped.
>> Is there any way to override this, other than using the na.fill functions?
>>
>> Regards,
>> Snehasish
>>
>

-- 
Thanks,
Jason


Re: [EXT] handling skewness issues

2019-04-29 Thread Michael Mansour
There were recently some fantastic talks about this at the SparkSummit 
conference in San Francisco.  I suggest you check out the SparkSummit YouTube 
channel after May 9th for a deep dive into this topic.

From: rajat kumar 
Date: Monday, April 29, 2019 at 9:34 AM
To: "user@spark.apache.org" 
Subject: [EXT] handling skewness issues

Hi All,

How do we overcome data skew issues in Spark?

I read that we can add some randomness to the key column before the join
and remove that random part after the join.

Is there any better way? The above method seems like a workaround.

thanks
rajat


Re: spark hive concurrency

2019-04-29 Thread Mich Talebzadeh
That assertion seems to be true: Spark does not appear to hold locks when
doing DML on a Hive table.

I cannot recall whether I checked this in previous versions of Spark.
However, in Spark 2.3 I can see that it holds true with Hive 3.0.

This may be an oversight, as Spark SQL and Hive are drifting away
from each other.

HTH

Dr Mich Talebzadeh



LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com


*Disclaimer:* Use it at your own risk. Any and all responsibility for any
loss, damage or destruction of data or any other property which may arise
from relying on this email's technical content is explicitly disclaimed.
The author will in no case be liable for any monetary damages arising from
such loss, damage or destruction.




On Mon, 29 Apr 2019 at 09:54, CPC  wrote:

> Hi All,
>
> Does Spark 2 support concurrency on Hive tables? I mean, when we query with
> Hive and issue SHOW LOCKS, we can see shared locks. But when we query the
> tables with Spark SQL, we cannot see any locks on them.
>
> Thanks in advance..
>


Issue with offset management using Spark on Dataproc

2019-04-29 Thread Austin Weaver
Hey guys, relatively new Spark dev here. I'm seeing some Kafka offset
issues and was wondering if you could help me out.

I am currently running a Spark job on Dataproc and am getting errors trying
to re-join a group and read data from a Kafka topic. I have done some
digging and am not sure what the issue is. I have auto.offset.reset set to
earliest, so it should be reading from the earliest available
non-committed offset. Initially my Spark logs look like this:

19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-11 to offset 5553330.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-2 to offset 553.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-3 to offset 484.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-4 to offset 586.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-5 to offset 502.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-6 to offset 561.
19/04/29 16:30:30 INFO
org.apache.kafka.clients.consumer.internals.Fetcher: [Consumer
clientId=consumer-1, groupId=demo-group] Resetting offset for
partition demo.topic-7 to offset 542.

But then on the very next line I get an error about trying to read from a
nonexistent offset on the server (you can see that the offset for the
partition differs from the one listed above, so I have no idea why it would
be attempting to read from that offset). Here is the error on the next line:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets
out of range with no configured reset policy for partitions:
{demo.topic-11=4544296}

Any ideas as to why my Spark job keeps going back to this offset
(4544296) instead of the one it logs originally (5553330)?

It seems to be contradicting itself with a) the offset it says it is on
versus the one it attempts to read, and b) the claim that there is no
configured reset policy.
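
For reference, this is roughly how the consumer side is wired up, assuming
the spark-streaming-kafka-0-10 DStream API (broker, group and topic names
below are placeholders rather than my real config):

import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

val ssc = new StreamingContext(new SparkConf().setAppName("demo"), Seconds(30))

val kafkaParams = Map[String, Object](
  ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG        -> "broker:9092",
  ConsumerConfig.GROUP_ID_CONFIG                 -> "demo-group",
  ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG   -> classOf[StringDeserializer],
  ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> classOf[StringDeserializer],
  ConsumerConfig.AUTO_OFFSET_RESET_CONFIG        -> "earliest",
  ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG       -> (false: java.lang.Boolean)
)

// My understanding is that the direct stream tracks offsets on the driver, so
// a stale starting position (e.g. from an old checkpoint or committed group
// offset whose data retention has already removed) can still surface as
// OffsetOutOfRangeException on the executors, regardless of auto.offset.reset.
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("demo.topic"), kafkaParams)
)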
-- 
Austin Weaver
Software Engineer
FLYR, Inc.   www.flyrlabs.com


handling skewness issues

2019-04-29 Thread rajat kumar
Hi All,

How do we overcome data skew issues in Spark?

I read that we can add some randomness to the key column before the join
and remove that random part after the join.

Is there any better way? The above method seems like a workaround.
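
For reference, this is the salting approach I was describing, as a minimal
sketch with made-up data, column names and salt factor:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

val spark = SparkSession.builder().appName("salting-sketch").getOrCreate()
import spark.implicits._

// Toy inputs: a heavily skewed "large" side and a small lookup side.
val large = Seq.fill(1000)(("hot_key", 1)).toDF("key", "value")
val small = Seq(("hot_key", "dim")).toDF("key", "dim")

val saltBuckets = 10

// 1. Append a random salt (0 to saltBuckets - 1) to the skewed side.
val saltedLarge = large.withColumn("salt", (rand() * saltBuckets).cast("int"))

// 2. Replicate the small side once per salt value so the join still matches.
val saltedSmall = small.withColumn(
  "salt", explode(array((0 until saltBuckets).map(lit): _*)))

// 3. Join on (key, salt), then drop the salt: same result, less skew per task.
val joined = saltedLarge.join(saltedSmall, Seq("key", "salt")).drop("salt")
joined.show(5)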

thanks
rajat


Spark 2.4.1 on Kubernetes - DNS resolution of driver fails

2019-04-29 Thread Olivier Girardot
Hi everyone,
I have ~300 Spark jobs on Kubernetes (GKE) using the cluster auto-scaler,
and sometimes while running these jobs a pretty bad thing happens: the
driver (in cluster mode) gets scheduled on Kubernetes and launches many
executor pods.
So far so good, but the k8s "Service" associated with the driver does not
seem to be propagated in terms of DNS resolution, so all the executors fail
with a "spark-application-..cluster.svc.local does not exist" error.

With all executors failing, the driver should fail too, but it treats this
as a "pending" initial allocation and stays stuck forever in a loop of
"Initial job has not accepted any resources, please check Cluster UI".

Has anyone else observed this kind of behaviour?
We had it on 2.3.1, and I upgraded to 2.4.1, but the issue still seems to
exist even after the "big refactoring" of the Kubernetes cluster scheduler
backend.

I can work on a fix or workaround, but I'd like to check with you on the
proper way forward:

   - Some processes (like the Airflow Helm recipe) rely on a "sleep 30s"
   before launching the dependent pods (that could be added to the
   /opt/entrypoint.sh used in the Kubernetes packaging)
   - We can add a simple step to the init container that tries the DNS
   resolution and fails after 60s if it does not succeed (a rough sketch
   follows below)
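
To make the second option concrete, a rough Scala sketch of the check I have
in mind (hostname, timeout and retry interval are placeholders; the same loop
could just as well live in the init container as a small shell script):

import java.net.InetAddress
import scala.util.Try

// Retry DNS resolution of the driver service until it succeeds or a deadline
// passes; the executor could abort early instead of failing on connect.
def waitForDriverDns(host: String,
                     timeoutMs: Long = 60000L,
                     retryIntervalMs: Long = 2000L): Boolean = {
  val deadline = System.currentTimeMillis() + timeoutMs
  var resolved = Try(InetAddress.getByName(host)).isSuccess
  while (!resolved && System.currentTimeMillis() < deadline) {
    Thread.sleep(retryIntervalMs)
    resolved = Try(InetAddress.getByName(host)).isSuccess
  }
  resolved
}

// e.g. abort if the (placeholder) service name never resolves in time:
// if (!waitForDriverDns("spark-app-driver-svc.namespace.svc.cluster.local")) sys.exit(1)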

But these steps won't change the fact that the driver will stay stuck,
thinking we're still within the initial allocation delay.

Thoughts ?

-- 
*Olivier Girardot*
o.girar...@lateral-thoughts.com


Re: Getting EOFFileException while reading from sequence file in spark

2019-04-29 Thread Prateek Rajput
I checked and removed the 0-sized files, but the error still occurs, and
sometimes it happens even when there is no 0-sized file at all.
I also checked whether the data is corrupted by opening the files directly
and inspecting them; I traced through the whole dataset but did not find any
issue. With Hadoop MapReduce no such issue occurs; it happens only with Spark.
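
For completeness, this is roughly how I filter out empty parts before reading
(the input path and the Text/Text key/value types are placeholders for my
actual job):

import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.io.Text
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("seqfile-read-sketch").getOrCreate()
val sc = spark.sparkContext

val inputDir = "/data/input"   // placeholder path
val fs = FileSystem.get(sc.hadoopConfiguration)

// Keep only non-empty files so a zero-length part cannot trigger an EOF.
val nonEmptyPaths = fs.listStatus(new Path(inputDir))
  .filter(status => status.isFile && status.getLen > 0)
  .map(_.getPath.toString)

// Copy out of the reused Writables immediately.
val rdd = sc.sequenceFile(nonEmptyPaths.mkString(","), classOf[Text], classOf[Text])
  .map { case (k, v) => (k.toString, v.toString) }

println(rdd.count())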

On Mon, Apr 29, 2019 at 2:50 PM Deepak Sharma  wrote:

> This can happen if the file size is 0
>
> On Mon, Apr 29, 2019 at 2:28 PM Prateek Rajput
>  wrote:
>
>> Hi guys,
>> I am getting this strange error again and again while reading from a
>> sequence file in Spark.
>> User class threw exception: org.apache.spark.SparkException: Job aborted.
>> at
>> org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>> at
>> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>> at
>> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
>> at
>> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>> at
>> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>> at
>> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
>> at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
>> at
>> org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:550)
>> at
>> org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45)
>> at
>> com.flipkart.prognos.spark.UniqueDroppedFSN.main(UniqueDroppedFSN.java:42)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
>> Caused by: org.apache.spark.SparkException: Job aborted due to stage
>> failure: Task 186 in stage 0.0 failed 4 times, most recent failure: Lost
>> task 186.3 in stage 0.0 (TID 179, prod-fdphadoop-krios-dn-1039, executor
>> 1): java.io.EOFException
>> at java.io.DataInputStream.readFully(DataInputStream.java:197)
>> at
>> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
>> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
>> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2436)
>> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2568)
>> at
>> org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
>> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:293)
>> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:224)
>> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
>> at
>> 

Re: Getting EOFFileException while reading from sequence file in spark

2019-04-29 Thread Deepak Sharma
This can happen if the file size is 0

On Mon, Apr 29, 2019 at 2:28 PM Prateek Rajput
 wrote:

> Hi guys,
> I am getting this strange error again and again while reading from a
> sequence file in Spark.
> User class threw exception: org.apache.spark.SparkException: Job aborted.
> at
> org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
> at
> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
> at
> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
> at
> org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
> at
> org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
> at
> org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
> at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
> at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
> at
> org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
> at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
> at
> org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:550)
> at
> org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45)
> at
> com.flipkart.prognos.spark.UniqueDroppedFSN.main(UniqueDroppedFSN.java:42)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
> Caused by: org.apache.spark.SparkException: Job aborted due to stage
> failure: Task 186 in stage 0.0 failed 4 times, most recent failure: Lost
> task 186.3 in stage 0.0 (TID 179, prod-fdphadoop-krios-dn-1039, executor
> 1): java.io.EOFException
> at java.io.DataInputStream.readFully(DataInputStream.java:197)
> at
> org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
> at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2436)
> at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2568)
> at
> org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:293)
> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:224)
> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
> at
> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
> at
> org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
> at
> org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
> at
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
> at
> 

Getting EOFFileException while reading from sequence file in spark

2019-04-29 Thread Prateek Rajput
Hi guys,
I am getting this strange error again and again while reading from a
sequence file in Spark.
User class threw exception: org.apache.spark.SparkException: Job aborted.
at
org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at
org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at
org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at
org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at
org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
at
org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
at
org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:550)
at
org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45)
at
com.flipkart.prognos.spark.UniqueDroppedFSN.main(UniqueDroppedFSN.java:42)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
Caused by: org.apache.spark.SparkException: Job aborted due to stage
failure: Task 186 in stage 0.0 failed 4 times, most recent failure: Lost
task 186.3 in stage 0.0 (TID 179, prod-fdphadoop-krios-dn-1039, executor
1): java.io.EOFException
at java.io.DataInputStream.readFully(DataInputStream.java:197)
at
org.apache.hadoop.io.DataOutputBuffer$Buffer.write(DataOutputBuffer.java:70)
at org.apache.hadoop.io.DataOutputBuffer.write(DataOutputBuffer.java:120)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2436)
at org.apache.hadoop.io.SequenceFile$Reader.next(SequenceFile.java:2568)
at
org.apache.hadoop.mapred.SequenceFileRecordReader.next(SequenceFileRecordReader.java:82)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:293)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:224)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:73)
at
org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at
org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
at
org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:62)
at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
at
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at 

spark hive concurrency

2019-04-29 Thread CPC
Hi All,

Does Spark 2 support concurrency on Hive tables? I mean, when we query with
Hive and issue SHOW LOCKS, we can see shared locks. But when we query the
tables with Spark SQL, we cannot see any locks on them.

Thanks in advance..