Spark shutdown hook just wipes
> temp files
>
> On Thu, Mar 23, 2017 at 10:55 AM, Jörn Franke <jornfra...@gmail.com
> <mailto:jornfra...@gmail.com>> wrote:
> What do you mean by clear? What is the use case?
>
> On 23 Mar 2017, at 10:16, nayan sharma <nayans
Does Spark clear the persisted RDD if a task fails?
Regards,
Nayan
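For completeness, a cached RDD can also be released explicitly rather than relying on shutdown cleanup; a minimal spark-shell sketch (the input path and storage level are illustrative, not taken from the thread):

import org.apache.spark.storage.StorageLevel

val cached = sc.textFile("/data/input.txt").persist(StorageLevel.MEMORY_AND_DISK)  // hypothetical path
cached.count()                        // materialise the cache
cached.unpersist(blocking = true)     // drop the cached blocks now instead of at context shutdown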
Hi,
I want to skip the header of every CSV present in a directory.
After searching on Google, I learned that it can be done using
sc.wholeTextFiles.
Can anyone suggest how to do that in Scala?
Thanks & Regards,
Nayan Sh
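One way to do this with sc.wholeTextFiles, sketched below under the assumption that each file fits in an executor's memory (the directory path is hypothetical): read every file whole and drop its first line.

val dir = "/data/csvs"                          // hypothetical directory of CSV files
val withoutHeaders = sc.wholeTextFiles(dir)     // RDD[(fileName, fileContent)]
  .flatMap { case (_, content) =>
    content.split("\n").drop(1)                 // drop the header line, keep the data lines
  }
withoutHeaders.take(5).foreach(println)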
scalaVersion := "2.10.5"
> On 06-Apr-2017, at 7:35 PM, Jörn Franke <jornfra...@gmail.com> wrote:
>
> Maybe your Spark is based on Scala 2.11, but you compile it for 2.10, or the
> other way around?
>
> On 6. Apr 2017, at 15:54, nayan sharma <nayansharm...
spark version 1.6.2
scala version 2.10.5
> On 06-Apr-2017, at 8:05 PM, Jörn Franke <jornfra...@gmail.com> wrote:
>
> And which version does your Spark cluster use?
>
> On 6. Apr 2017, at 16:11, nayan sharma <nayansharm...@gmail.com
> <mailto:nayansharm...@gmail.c
il.com> wrote:
>
> Is the library in your assembly jar?
>
> On 6. Apr 2017, at 15:06, nayan sharma <nayansharm...@gmail.com
> <mailto:nayansharm...@gmail.com>> wrote:
>
>> Hi All,
>> I am getting an error while loading a CSV file.
>>
>> v
In addition, I am using Spark version 1.6.2.
Is there any chance the error is coming because the Scala version or dependencies
do not match? I am just guessing.
Thanks,
Nayan
> On 06-Apr-2017, at 7:16 PM, nayan sharma <nayansharm...@gmail.com> wrote:
>
> Hi Jorn,
> Thanks for repl
Hi All,
I am getting an error while loading a CSV file.
val datacsv = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true").load("timeline.csv")
java.lang.NoSuchMethodError:
org.apache.commons.csv.CSVFormat.withQuote(Ljava/lang/Character;)Lorg/apache/commons/csv/CSVFormat;
I
you
> please try that and let us know:
> Command:
> spark-submit --packages com.databricks:spark-csv_2.11:1.4.0
>
> On Fri, 7 Apr 2017 at 00:39 nayan sharma <nayansharm...@gmail.com
> <mailto:nayansharm...@gmail.com>> wrote:
> spark version 1.6.2
> scala versio
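Since the cluster above runs Spark 1.6.2 on Scala 2.10.5, the spark-csv artifact has to match that Scala version, and a NoSuchMethodError on CSVFormat.withQuote usually means an older commons-csv is on the classpath. A minimal build.sbt sketch under those assumptions (the version pins are illustrative, not taken from the thread):

scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  "org.apache.spark"   %% "spark-core"  % "1.6.2" % "provided",
  "org.apache.spark"   %% "spark-sql"   % "1.6.2" % "provided",
  "com.databricks"     %% "spark-csv"   % "1.4.0",        // resolves to spark-csv_2.10
  "org.apache.commons" %  "commons-csv" % "1.2"           // CSVFormat.withQuote is missing from older releases
)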
isin query
> Date: 17 April 2017 at 8:13:24 PM IST
> To: nayan sharma <nayansharm...@gmail.com>, user@spark.apache.org
>
> How about using OR operator in filter?
>
> On Tue, 18 Apr 2017 at 12:35 am, nayan sharma <nayansharm...@gmail.com
> <mailto:nayansharm..
I have a DataFrame (df) with a column msrid (String) containing the values
m_123, m_111, m_145, m_098, m_666.
I want to filter the rows whose value is m_123, m_111, or m_145.
df.filter($"msrid".isin("m_123","m_111","m_145")).count
returns count = 0, while
df.filter($"msrid".isin("m_123")).count
returns count = 121212.
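For reference, a minimal sketch of the OR form suggested in the reply, next to the equivalent isin call (the column name and literals come from the thread; everything else is illustrative):

import org.apache.spark.sql.functions.col

// OR form
val orCount = df.filter(col("msrid") === "m_123" || col("msrid") === "m_111" || col("msrid") === "m_145").count()

// equivalent multi-value isin form
val isinCount = df.filter(col("msrid").isin("m_123", "m_111", "m_145")).count()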
I have a DataFrame where some columns contain multiple values, always
separated by ^.
Input:
phone|contact|
ERN~58XX7~^EPN~5X551~|C~MXXX~MSO~^CAxxE~~3XXX5|
Desired output:
phone1|phone2|contact1|contact2|
ERN~5XXX7|EPN~5891551~|C~MXXXH~MSO~|CAxxE~~3XXX5|
How can this be achieved using a loop?
t 3:29 AM, ayan guha <guha.a...@gmail.com> wrote:
>
> You are looking for explode function.
>
> On Mon, 17 Jul 2017 at 4:25 am, nayan sharma <nayansharm...@gmail.com
> <mailto:nayansharm...@gmail.com>> wrote:
> I’ve a Dataframe where in some columns there a
ames.zipWithIndex.view) {
>   val data = firtRow(idx).asInstanceOf[String].split("\\^")
>   var j = 0
>   for (d <- data) {
>     schema = schema + colNames + j + ","
>     j = j + 1
>   }
> }
> schema = schema.substring(0, schema.length - 1)
>
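Besides explode, the desired wide layout (phone1, phone2, contact1, contact2) can be built with split plus getItem; a minimal sketch assuming each listed column holds at most two ^-separated parts (the column list and part count are assumptions):

import org.apache.spark.sql.functions.{col, split}

val multiValueCols = Seq("phone", "contact")               // columns carrying ^-separated values
val widened = multiValueCols.foldLeft(df) { (acc, c) =>
  val parts = split(col(c), "\\^")                         // split on the literal ^ separator
  acc.withColumn(c + "1", parts.getItem(0))
     .withColumn(c + "2", parts.getItem(1))
     .drop(c)                                              // keep only the numbered columns
}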
Test
Hi All,
ERROR:-
Caused by: org.apache.spark.util.TaskCompletionListenerException: Connection
error (check network and/or proxy settings)- all nodes failed; tried
[[10.0.1.8*:9200, 10.0.1.**:9200, 10.0.1.***:9200]]
I am getting this error while trying to show the DataFrame.
df.count = 5190767
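Port 9200 in the node list suggests the DataFrame is backed by the elasticsearch-hadoop connector; that is an assumption, but if so, these are the usual connection settings to check when the connector reports "all nodes failed" (assuming a SparkSession named spark; addresses and index name are placeholders):

val esDf = spark.read
  .format("org.elasticsearch.spark.sql")
  .option("es.nodes", "10.0.1.8")           // placeholder; the real addresses are masked above
  .option("es.port", "9200")
  .option("es.nodes.wan.only", "true")      // talk only to the listed nodes (useful behind NAT/proxies)
  .load("myindex/mytype")                   // hypothetical index/type
esDf.count()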
arn user and my job is also running from the
same user.
Thanks,
Nayan
> On Mar 22, 2018, at 12:54 PM, Jorge Machado <jom...@me.com> wrote:
>
> Seems to me a permissions problem! Can you check your user / folder
> permissions?
>
> Jorge Machado
>
>
>
>
Hi All,
Druid uses Hadoop MapReduce to ingest batch data, but I am trying Spark for ingesting data into Druid, taking reference from https://github.com/metamx/druid-spark-batch. We are stuck at the following error.
Application log:
2018-03-20T07:54:28,782 INFO [task-runner-0-priority-0]
/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Thanks & Regards,
Nayan Sharma
*+91-8095382952*
<https://www.linkedin.com/in/nayan-sharma>
<http://stackoverflow.com/users/3687426/nayan-sharma?tab=profile>
Hi Users,
I am trying to build a fault-tolerant Spark Solace consumer.
Issue: we have to restart the job due to multiple issues; load
average is one of them. At that point, whatever Spark is processing and the
batches in the queue are lost. We can't replay them because we had already
sent the ack while
e : " + sr.toString)
    sr.toString
  }
}
Thread.sleep(12)
try {
  val messagesJson = spark.read.json(messages)  // ===> getting NPE here after restarting using WAL
  messagesJson.write.mode("append").parquet(data)
}
catch {
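The NPE on spark.read.json after a WAL/checkpoint restart is commonly caused by the session reference captured at first startup no longer being valid in the recovered closures. The Spark Streaming programming guide's pattern is to re-obtain the session inside foreachRDD from the RDD's own SparkContext; a sketch assuming Spark 2.2+ and a DStream[String] of messages (all names are illustrative):

import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.dstream.DStream

def writeBatches(stream: DStream[String], outputPath: String): Unit = {
  stream.foreachRDD { rdd =>
    if (!rdd.isEmpty()) {
      // re-obtain the session from the RDD's SparkContext so it is valid after recovery
      val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
      import spark.implicits._
      val messagesJson = spark.read.json(rdd.toDS())       // parse each micro-batch as JSON
      messagesJson.write.mode("append").parquet(outputPath)
    }
  }
}

Separately, acknowledging a Solace message only after its batch has been written, rather than on receipt, is what leaves room to replay it after a restart.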