Re: spark sql StackOverflow

2018-05-15 Thread Jörn Franke
3,000 filters do not look like something reasonable. This is very difficult to
test and verify, as well as impossible to maintain.
Could it be that your filters are another table that you should join with?
The example is a bit too artificial to understand the underlying business case.
Can you provide a more realistic one?

Maybe a Bloom filter or something similar could make sense for you? Basically,
you want to know whether the key pair is in a given set of pairs?
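
As a rough sketch of that set-membership idea (the schema, the input path, and the way the pairs are loaded below are assumptions, not details from this thread), the wanted (key1, key2) pairs could be broadcast as a plain Scala Set and checked once per record instead of being expanded into a huge OR chain:

import org.apache.spark.sql.SparkSession

// Sketch only: schema and paths are assumed, not taken from the thread.
case class Record(key1: String, key2: String, value: String)

object PairSetFilterSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("PairSetFilterSketch").getOrCreate()
    import spark.implicits._

    // The ~3,000 wanted (key1, key2) pairs, however they are actually loaded (DB, file, ...).
    val wantedPairs: Set[(String, String)] = Set(("something", "somethingElse"))
    val wantedBc = spark.sparkContext.broadcast(wantedPairs)

    // Incoming records; a CSV with a header line is assumed here.
    val records = spark.read.option("header", "true").csv("/some/input/dir").as[Record]

    // One hash-set lookup per record instead of a 3,000-way OR chain.
    val matched = records.filter(r => wantedBc.value.contains((r.key1, r.key2)))
    matched.groupBy("key1", "key2").count().show()
  }
}

With fewer than 10 thousand distinct values per key, the exact set is small enough to broadcast; a Bloom filter only becomes interesting if the set is too large to ship to every executor.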

> On 15. May 2018, at 11:48, Alessandro Solimando 
>  wrote:
> 
> From the information you provided I would tackle this as a batch problem, 
> because this way you have access to more sophisticated techniques and you 
> have more flexibility (maybe HDFS and a SparkJob, but also think about a 
> datastore offering good indexes for the kind of data types and values you 
> have for your keys, and benefit from filter push-downs).
> 
> I personally use streaming only when real-time ingestion is needed.
> 
> Hth,
> Alessandro
> 
>> On 15 May 2018 at 09:11, onmstester onmstester  wrote:
>> 
>> How many distinct key1 (resp. key2) values do you have? Are these values 
>> reasonably stable over time?
>> 
>> Less than 10 thousand, and these filters would change every 2-3 days. They
>> would be written to and loaded from a database.
>> 
>> Are these records ingested in real-time or are they loaded from a datastore?
>> 
>> Records would be loaded from some text files that are copied into a
>> directory over and over.
>> 
>> Are you suggesting that I don't need to use Spark Streaming?
>> Sent using Zoho Mail
>> 
>> 
>> 
>> On Tue, 15 May 2018 11:26:42 +0430 Alessandro Solimando wrote:
>> 
>> Hi,
>> I am not familiar with ATNConfigSet, but some thoughts that might help.
>> 
>> How many distinct key1 (resp. key2) values do you have? Are these values 
>> reasonably stable over time?
>> 
>> Are these records ingested in real-time or are they loaded from a datastore?
>> 
>> In the latter case, the DB might be able to perform the filtering
>> efficiently, especially if equipped with a proper index over key1/key2 (or a
>> composite one).
>> 
>> In such a case, filter push-down could be very effective (I didn't get whether
>> you just need to count or do something more with the matching records).
>> 
>> Alternatively, you could try to group by (key1,key2), and then filter (it 
>> again depends on the kind of output you have in mind).
>> 
>> If the datastore/stream is distributed and supports partitioning, you could 
>> partition your records by either key1 or key2 (or key1+key2), so they are 
>> already "separated" and can be consumed more efficiently (e.g., the groupby 
>> could then be local to a single partition).
>> 
>> Best regards,
>> Alessandro
>> 
>> On 15 May 2018 at 08:32, onmstester onmstester  wrote:
>> 
>> 
>> Hi, 
>> 
>> I need to run some queries on a huge amount of input records. The input rate
>> is 100K records/second.
>> A record is like (key1,key2,value) and the application should report
>> occurrences of key1 = something && key2 == somethingElse.
>> The problem is that there are too many filters in my query: more than 3 thousand
>> pairs of key1 and key2 should be filtered.
>> I was simply putting 1 million records in a temp table each time and running a
>> SQL query with spark-sql on the temp table:
>> select * from mytemptable where (key1 = something && key2 == somethingElse)
>> or (key1 = someOtherThing && key2 == someAnotherThing) or ... (3 thousand ORs!!!)
>> And I encounter a StackOverflow at ATNConfigSet.java line 178.
>> 
>> So I have two options IMHO:
>> 1. Either put all key1 and key2 filter pairs in another temp table and do a
>> join between the two temp tables.
>> 2. Or use Spark Streaming, which I'm not familiar with, and I don't know if it
>> could handle 3K filters.
>> 
>> Which way do you suggest? What is the best solution for my problem,
>> performance-wise?
>> 
>> Thanks in advance
>> 
>> 
> 


Re: spark sql StackOverflow

2018-05-15 Thread Alessandro Solimando
From the information you provided I would tackle this as a batch problem,
because this way you have access to more sophisticated techniques and you
have more flexibility (maybe HDFS and a SparkJob, but also think about a
datastore offering good indexes for the kind of data types and values you
have for your keys, and benefit from filter push-downs).
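
As one possible illustration of the push-down point (the storage format, path, and column names below are assumptions, not details from the thread): with the records persisted in a format such as Parquet, or behind a JDBC source with an index on key1/key2, an equality filter written against the DataFrame is handed to the source instead of being evaluated row by row inside Spark:

import org.apache.spark.sql.SparkSession

object PushdownSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("PushdownSketch").getOrCreate()
    import spark.implicits._

    // Records persisted as Parquet (an assumption); Parquet sources support filter push-down.
    val records = spark.read.parquet("/warehouse/records")

    val matched = records.filter($"key1" === "something" && $"key2" === "somethingElse")
    matched.explain(true) // the predicates should appear under "PushedFilters" in the physical plan
    println(matched.count())
  }
}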

I personally use streaming only when real-time ingestion is needed.

Hth,
Alessandro

On 15 May 2018 at 09:11, onmstester onmstester  wrote:

>
> How many distinct key1 (resp. key2) values do you have? Are these values
> reasonably stable over time?
>
> Less than 10 thousand, and these filters would change every 2-3 days. They
> would be written to and loaded from a database.
>
> Are these records ingested in real-time or are they loaded from a
> datastore?
>
>
> Records would be loaded from some text files that are copied into a
> directory over and over.
>
> Are you suggesting that I don't need to use Spark Streaming?
>
> Sent using Zoho Mail 
>
>
> On Tue, 15 May 2018 11:26:42 +0430 Alessandro Solimando wrote:
>
> Hi,
> I am not familiar with ATNConfigSet, but some thoughts that might help.
>
> How many distinct key1 (resp. key2) values do you have? Are these values
> reasonably stable over time?
>
> Are these records ingested in real-time or are they loaded from a
> datastore?
>
> In the latter case, the DB might be able to perform the filtering
> efficiently, especially if equipped with a proper index over key1/key2 (or a
> composite one).
>
> In such a case, filter push-down could be very effective (I didn't get whether
> you just need to count or do something more with the matching records).
>
> Alternatively, you could try to group by (key1,key2), and then filter (it
> again depends on the kind of output you have in mind).
>
> If the datastore/stream is distributed and supports partitioning, you
> could partition your records by either key1 or key2 (or key1+key2), so they
> are already "separated" and can be consumed more efficiently (e.g., the
> groupby could then be local to a single partition).
>
> Best regards,
> Alessandro
>
> On 15 May 2018 at 08:32, onmstester onmstester 
> wrote:
>
>
> Hi,
>
> I need to run some queries on a huge amount of input records. The input rate
> is 100K records/second.
> A record is like (key1,key2,value) and the application should report
> occurrences of key1 = something && key2 == somethingElse.
> The problem is that there are too many filters in my query: more than 3 thousand
> pairs of key1 and key2 should be filtered.
> I was simply putting 1 million records in a temp table each time and running a
> SQL query with spark-sql on the temp table:
> select * from mytemptable where (key1 = something && key2 == somethingElse)
> or (key1 = someOtherThing && key2 == someAnotherThing) or ... (3 thousand ORs!!!)
> And I encounter a StackOverflow at ATNConfigSet.java line 178.
>
> So I have two options IMHO:
> 1. Either put all key1 and key2 filter pairs in another temp table and do a
> join between the two temp tables.
> 2. Or use Spark Streaming, which I'm not familiar with, and I don't know if it
> could handle 3K filters.
>
> Which way do you suggest? What is the best solution for my problem,
> performance-wise?
>
> Thanks in advance
>
>
>
>


Re: spark sql StackOverflow

2018-05-15 Thread Alessandro Solimando
Hi,
I am not familiar with ATNConfigSet, but some thoughts that might help.

How many distinct key1 (resp. key2) values do you have? Are these values
reasonably stable over time?

Are these records ingested in real-time or are they loaded from a datastore?

In the latter case, the DB might be able to perform the filtering
efficiently, especially if equipped with a proper index over key1/key2 (or a
composite one).

In such a case, filter push-down could be very effective (I didn't get whether
you just need to count or do something more with the matching records).

Alternatively, you could try to group by (key1,key2), and then filter (it
again depends on the kind of output you have in mind).

If the datastore/stream is distributed and supports partitioning, you could
partition your records by either key1 or key2 (or key1+key2), so they are
already "separated" and can be consumed more efficiently (e.g., the groupby
could then be local to a single partition).
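
A small sketch of the group-by variant (the input source and names are assumptions): aggregate once per (key1, key2) pair and only then restrict the much smaller aggregate to the wanted pairs, optionally repartitioning by the keys first so that each pair is handled inside a single partition, as suggested above:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object GroupByThenFilterSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("GroupByThenFilterSketch").getOrCreate()
    import spark.implicits._

    // Assumed layout: columns key1, key2, value.
    val records = spark.read.option("header", "true").csv("/some/input/dir")
    // The ~3,000 wanted pairs, kept as their own small DataFrame.
    val wanted = Seq(("something", "somethingElse")).toDF("key1", "key2")

    val countsPerPair = records
      .repartition($"key1", $"key2") // co-locate identical pairs, as suggested above
      .groupBy("key1", "key2")
      .count()

    // Restrict the (small) aggregate to the wanted pairs with a broadcast join.
    val report = countsPerPair.join(broadcast(wanted), Seq("key1", "key2"))
    report.show()
  }
}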

Best regards,
Alessandro

On 15 May 2018 at 08:32, onmstester onmstester  wrote:

> Hi,
>
> I need to run some queries on a huge amount of input records. The input rate
> is 100K records/second.
> A record is like (key1,key2,value) and the application should report
> occurrences of key1 = something && key2 == somethingElse.
> The problem is that there are too many filters in my query: more than 3 thousand
> pairs of key1 and key2 should be filtered.
> I was simply putting 1 million records in a temp table each time and running a
> SQL query with spark-sql on the temp table:
> select * from mytemptable where (key1 = something && key2 == somethingElse)
> or (key1 = someOtherThing && key2 == someAnotherThing) or ... (3 thousand ORs!!!)
> And I encounter a StackOverflow at ATNConfigSet.java line 178.
>
> So I have two options IMHO:
> 1. Either put all key1 and key2 filter pairs in another temp table and do a
> join between the two temp tables.
> 2. Or use Spark Streaming, which I'm not familiar with, and I don't know if it
> could handle 3K filters.
>
> Which way do you suggest? What is the best solution for my problem,
> performance-wise?
>
> Thanks in advance
>
>


spark sql StackOverflow

2018-05-15 Thread onmstester onmstester
Hi, 



I need to run some queries on a huge amount of input records. The input rate is
100K records/second.

A record is like (key1,key2,value) and the application should report occurrences
of key1 = something && key2 == somethingElse.

The problem is that there are too many filters in my query: more than 3 thousand
pairs of key1 and key2 should be filtered.

I was simply putting 1 million records in a temp table each time and running a
SQL query with spark-sql on the temp table:

select * from mytemptable where (key1 = something && key2 == somethingElse)
or (key1 = someOtherThing && key2 == someAnotherThing) or ... (3 thousand ORs!!!)

And I encounter a StackOverflow at ATNConfigSet.java line 178.



So I have two options IMHO:

1. Either put all key1 and key2 filter pairs in another temp table and do a
join between the two temp tables.

2. Or use Spark Streaming, which I'm not familiar with, and I don't know if it
could handle 3K filters.



Which way do you suggest? What is the best solution for my problem,
performance-wise?



Thanks in advance
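
For what it's worth, a minimal sketch of option 1 expressed in Spark SQL itself (table names, column names, and the input source are assumptions): register the records and the filter pairs as temp views and replace the 3,000-way OR with a join, so the SQL text stays tiny no matter how many pairs there are:

import org.apache.spark.sql.SparkSession

object TempTableJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TempTableJoinSketch").getOrCreate()
    import spark.implicits._

    // Assumed layout: columns key1, key2, value.
    val records = spark.read.option("header", "true").csv("/some/input/dir")
    val pairs = Seq(("something", "somethingElse"), ("someOtherThing", "someAnotherThing"))
      .toDF("key1", "key2")

    records.createOrReplaceTempView("mytemptable")
    pairs.createOrReplaceTempView("filterpairs")

    // The join replaces the huge OR chain; with ~3,000 rows the pair table is
    // typically broadcast, so this behaves like a hash lookup per record.
    val matched = spark.sql(
      """SELECT t.*
        |FROM mytemptable t
        |JOIN filterpairs f
        |  ON t.key1 = f.key1 AND t.key2 = f.key2""".stripMargin)
    matched.show()
  }
}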




RE: Spark SQL Stackoverflow error

2015-03-10 Thread jishnu.prathap
import com.google.gson.{GsonBuilder, JsonParser}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Examines the collected tweets and trains a model based on them.
 */
object ExamineAndTrain {
  val jsonParser = new JsonParser()
  val gson = new GsonBuilder().setPrettyPrinting().create()

  def main(args: Array[String]) {
    val outputModelDir = "C:\\outputmode111"
    val tweetInput = "C:\\test"
    val numClusters = 10
    val numIterations = 20

    val conf = new SparkConf()
      .setAppName(this.getClass.getSimpleName)
      .setMaster("local[4]")
      .set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)

    val tweets = sc.textFile(tweetInput)
    val vectors = tweets.map(Utils.featurize).cache()
    vectors.count() // Calls an action on the RDD to populate the vectors cache.
    val model = KMeans.train(vectors, numClusters, numIterations)
    sc.makeRDD(model.clusterCenters, numClusters).saveAsObjectFile(outputModelDir)

    val some_tweets = tweets.take(2)
    println("Example tweets from the clusters")
    for (i <- 0 until numClusters) {
      println(s"\nCLUSTER $i:")
      some_tweets.foreach { t =>
        if (model.predict(Utils.featurize(t)) == i) {
          println(t)
        }
      }
    }
  }
}

From: lovelylavs [via Apache Spark User List] 
[mailto:ml-node+s1001560n21956...@n3.nabble.com]
Sent: Sunday, March 08, 2015 2:34 AM
To: Jishnu Menath Prathap (WT01 - BAS)
Subject: Re: Spark SQL Stackoverflow error


Thank you so much for your reply. If it is possible, can you please provide me
with the code?



Thank you so much.



Lavanya.


From: Jishnu Prathap [via Apache Spark User List]
Sent: Sunday, March 1, 2015 3:03 AM
To: Nadikuda, Lavanya
Subject: RE: Spark SQL Stackoverflow error

Hi,
The issue was not fixed.
I removed the in-between SQL layer and directly created the features from the file.

Regards
Jishnu Prathap

From: lovelylavs [via Apache Spark User List]
Sent: Sunday, March 01, 2015 4:44 AM
To: Jishnu Menath Prathap (WT01 - BAS)
Subject: Re: Spark SQL Stackoverflow error

Hi,

how was this issue fixed?


RE: Spark SQL Stackoverflow error

2015-03-01 Thread Jishnu Prathap
Hi,
The issue was not fixed.
I removed the in-between SQL layer and directly created the features from the file.

Regards
Jishnu Prathap

From: lovelylavs [via Apache Spark User List] 
[mailto:ml-node+s1001560n21862...@n3.nabble.com]
Sent: Sunday, March 01, 2015 4:44 AM
To: Jishnu Menath Prathap (WT01 - BAS)
Subject: Re: Spark SQL Stackoverflow error

Hi,

how was this issue fixed?


Re: Spark SQL Stackoverflow error

2014-12-09 Thread Jishnu Prathap
Hi,

Unfortunately I am also getting the same error.
Has anybody solved it?
Exception in thread "main" java.lang.StackOverflowError
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
   at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
   at
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
   at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
   at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
   at
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)

Exact line where the exception occurs:
sqlContext.sql("SELECT text FROM tweetTable LIMIT 10").collect().foreach(println)

The complete code is from github

https://github.com/databricks/reference-apps/blob/master/twitter_classifier/scala/src/main/scala/com/databricks/apps/twitter_classifier/ExamineAndTrain.scala

import com.google.gson.{GsonBuilder, JsonParser}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

/**
 * Examines the collected tweets and trains a model based on them.
 */
object ExamineAndTrain {
  val jsonParser = new JsonParser()
  val gson = new GsonBuilder().setPrettyPrinting().create()

  def main(args: Array[String]) {
    // Process program arguments and set properties
    /*
    if (args.length < 3) {
      System.err.println("Usage: " + this.getClass.getSimpleName +
        " tweetInput outputModelDir numClusters numIterations")
      System.exit(1)
    }
    */
    val outputModelDir = "C:\\MLModel"
    val tweetInput = "C:\\MLInput"
    val numClusters = 10
    val numIterations = 20

    // val Array(tweetInput, outputModelDir, Utils.IntParam(numClusters),
    //   Utils.IntParam(numIterations)) = args

    val conf = new SparkConf()
      .setAppName(this.getClass.getSimpleName)
      .setMaster("local[4]")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    // Pretty print some of the tweets.
    val tweets = sc.textFile(tweetInput)
    println("Sample JSON Tweets---")
    for (tweet <- tweets.take(5)) {
      println(gson.toJson(jsonParser.parse(tweet)))
    }

    val tweetTable = sqlContext.jsonFile(tweetInput).cache()
    tweetTable.registerTempTable("tweetTable")
    println("--Tweet table Schema---")
    tweetTable.printSchema()

    println("Sample Tweet Text-")
    sqlContext.sql("SELECT text FROM tweetTable LIMIT 10").collect().foreach(println)

    println("--Sample Lang, Name, text---")
    sqlContext.sql("SELECT user.lang, user.name, text FROM tweetTable LIMIT 1000").collect().foreach(println)

    println("--Total count by languages Lang, count(*)---")
    sqlContext.sql("SELECT user.lang, COUNT(*) as cnt FROM tweetTable GROUP BY user.lang ORDER BY cnt DESC LIMIT 25").collect.foreach(println)

    println("--- Training the model and persist it")
    val texts = sqlContext.sql("SELECT text from tweetTable").map(_.head.toString)
    // Cache the vectors RDD since it will be used for all the KMeans iterations.
    val vectors = texts.map(Utils.featurize).cache()
    vectors.count() // Calls an action on the RDD to populate the vectors cache.

    val model = KMeans.train(vectors, numClusters, numIterations)
    sc.makeRDD(model.clusterCenters, numClusters).saveAsObjectFile(outputModelDir)

    val some_tweets = texts.take(100)
    println("Example tweets from the clusters")
    for (i <- 0 until numClusters) {
      println(s"\nCLUSTER $i:")
      some_tweets.foreach { t =>
        if (model.predict(Utils.featurize(t)) == i) {
          println(t)
        }
      }
    }
  }
}






RE: Spark SQL Stackoverflow error

2014-08-14 Thread Cheng, Hao
I couldn't reproduce the exception; it's probably solved in the latest code.

From: Vishal Vibhandik [mailto:vishal.vibhan...@gmail.com]
Sent: Thursday, August 14, 2014 11:17 AM
To: user@spark.apache.org
Subject: Spark SQL Stackoverflow error

Hi,
I tried running the sample SQL code JavaSparkSQL but keep getting this error:

The error comes on the line:
JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

C:\spark-submit --class org.apache.spark.examples.sql.JavaSparkSQL --master 
local \spark-1.0.2-bin-hadoop1\lib\spark-examples-1.0.2-hadoop1.0.4.jar
=== Data source: RDD ===
WARN  org.apache.spark.util.SizeEstimator: Failed to check whether 
UseCompressedOops is set; assuming yes
Exception in thread "main" java.lang.StackOverflowError
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at 
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)



Spark SQL Stackoverflow error

2014-08-13 Thread Vishal Vibhandik
Hi,
I tried running the sample SQL code JavaSparkSQL but keep getting this error:

The error comes on the line:
JavaSchemaRDD teenagers = sqlCtx.sql("SELECT name FROM people WHERE age >= 13 AND age <= 19");

C:\spark-submit --class org.apache.spark.examples.sql.JavaSparkSQL
--master local
\spark-1.0.2-bin-hadoop1\lib\spark-examples-1.0.2-hadoop1.0.4.jar
=== Data source: RDD ===
WARN  org.apache.spark.util.SizeEstimator: Failed to check whether
UseCompressedOops is set; assuming yes
Exception in thread "main" java.lang.StackOverflowError
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
at
scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)