Re: df.groupBy('m).agg(sum('n)).show dies with 10^3 elements?

2016-09-06 Thread Jacek Laskowski
Hi Josh,

Yes, that seems to be the issue. As I commented in the JIRA, just
yesterday (after I had sent the email) even queries as simple as the
following killed spark-shell:

Seq(1).toDF.groupBy('value).count.show

I'm hoping to see it resolved soon. If there's anything I can help
with to fix or reproduce the issue, let me know. I wish I knew how to
write a unit test for this. Where in the code should I look for inspiration?
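
Something along these lines is what I have in mind (a rough sketch
only, modelled on DataFrameAggregateSuite in sql/core; the suite name,
test name and expected rows are just my guesses):

import org.apache.spark.sql.{QueryTest, Row}
import org.apache.spark.sql.functions.sum
import org.apache.spark.sql.test.SharedSQLContext

class GroupByAggregationOOMSuite extends QueryTest with SharedSQLContext {
  import testImplicits._

  test("groupBy + sum over 10^3 rows should not exhaust execution memory") {
    // Same shape as the repro in this thread: n = 1..1000, m = n % 2
    val df = (1 to 1000).toDF("n").withColumn("m", 'n % 2)
    // Expected: the even n sum to 250500, the odd n to 250000
    checkAnswer(
      df.groupBy('m).agg(sum('n)),
      Seq(Row(0, 250500L), Row(1, 250000L)))
  }
}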

Pozdrawiam,
Jacek Laskowski

https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski


On Tue, Sep 6, 2016 at 11:51 PM, Josh Rosen  wrote:
> I think that this is a simpler case of
> https://issues.apache.org/jira/browse/SPARK-17405. I'm going to comment on
> that ticket with your simpler reproduction.
>
> On Tue, Sep 6, 2016 at 1:32 PM Jacek Laskowski  wrote:
>>
>> Hi,
>>
>> I'm concerned with the OOME in local mode with the version built today:
>>
>> scala> val intsMM = 1 to math.pow(10, 3).toInt
>> intsMM: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4,
>> 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
>> 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
>> 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
>> 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
>> 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
>> 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
>> 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120,
>> 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
>> 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
>> 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
>> 163, 164, 165, 166, 167, 168, 169, 1...
>> scala> val df = intsMM.toDF("n").withColumn("m", 'n % 2)
>> df: org.apache.spark.sql.DataFrame = [n: int, m: int]
>>
>> scala> df.groupBy('m).agg(sum('n)).show
>> ...
>> 16/09/06 22:28:02 ERROR Executor: Exception in task 6.0 in stage 0.0 (TID 6)
>> java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 0
>> ...
>>
>> Please see
>> https://gist.github.com/jaceklaskowski/906d62b830f6c967a7eee5f8eb6e9237
>> and let me know if I should file an issue. I don't think 10^3 elements
>> and groupBy should kill spark-shell.
>>
>> Pozdrawiam,
>> Jacek Laskowski
>> 
>> https://medium.com/@jaceklaskowski/
>> Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
>> Follow me at https://twitter.com/jaceklaskowski
>>
>>
>




Unable to run docker jdbc integration tests?

2016-09-06 Thread Suresh Thalamati
Hi, 


I am getting the following error when I try to run the JDBC Docker
integration tests on my laptop. Any ideas what I might be doing wrong?

build/mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive-thriftserver -Phive -DskipTests clean install
build/mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 compile test

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512m; 
support was removed in 8.0
Discovery starting.
Discovery completed in 200 milliseconds.
Run starting. Expected test count is: 10
MySQLIntegrationSuite:

Error:
16/09/06 11:52:00 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 9.31.117.25, 51868)
*** RUN ABORTED ***
  java.lang.AbstractMethodError:
  at org.glassfish.jersey.model.internal.CommonConfig.configureAutoDiscoverableProviders(CommonConfig.java:622)
  at org.glassfish.jersey.client.ClientConfig$State.configureAutoDiscoverableProviders(ClientConfig.java:357)
  at org.glassfish.jersey.client.ClientConfig$State.initRuntime(ClientConfig.java:392)
  at org.glassfish.jersey.client.ClientConfig$State.access$000(ClientConfig.java:88)
  at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:120)
  at org.glassfish.jersey.client.ClientConfig$State$3.get(ClientConfig.java:117)
  at org.glassfish.jersey.internal.util.collection.Values$LazyValueImpl.get(Values.java:340)
  at org.glassfish.jersey.client.ClientConfig.getRuntime(ClientConfig.java:726)
  at org.glassfish.jersey.client.ClientRequest.getConfiguration(ClientRequest.java:285)
  at org.glassfish.jersey.client.JerseyInvocation.validateHttpMethodAndEntity(JerseyInvocation.java:126)
  ...
16/09/06 11:52:00 INFO SparkContext: Invoking stop() from shutdown hook
16/09/06 11:52:00 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
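
The AbstractMethodError makes me suspect mixed Jersey / JAX-RS versions on
the test classpath (just a guess on my part). I was going to check what the
module actually pulls in with something like:

build/mvn -Pdocker-integration-tests -pl :spark-docker-integration-tests_2.11 dependency:tree | grep -i jersey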



Thanks
-suresh



BlockMatrix Multiplication fails with Out of Memory

2016-09-06 Thread vinodep
Hi, 
I am trying to multiply a matrix of size 67584*67584 in a loop. In the first
iteration the multiplication goes through, but in the second iteration it
fails with a Java heap out-of-memory error. I'm using PySpark, and the
configuration is below.
Setup:
70 nodes (1 driver + 69 workers) with
SPARK_DRIVER_MEMORY=32g, SPARK_WORKER_CORES=16, SPARK_WORKER_MEMORY=20g, SPARK_EXECUTOR_MEMORY=5g, spark.executor.cores=5

Data: a 67584 x 67584 matrix, block size 1024.
I load a number of MATLAB .mat files using textFile, form a block RDD with
each file read being a block, and create a BlockMatrix (A).
Then I multiply the matrix with itself in a loop to get its powers
(A^2, A^4). The multiplication always fails with out-of-memory errors after
the second iteration. I'm using the multiply method from BlockMatrix:

for i in range(3):
    A = A.multiply(A)

What am I missing? What is the correct way to load a big matrix file (.mat)
from the local filesystem into an RDD, create a BlockMatrix, and do repeated
multiplication?
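
One thing I have been wondering about: if the blocks are dense, each matrix
is 66 x 66 = 4356 blocks of 1024 x 1024 doubles, i.e. roughly 34 GB per
matrix, so at least two of them plus shuffle data have to fit across the
executors at once. Would it help to persist each product, materialize it,
and drop the previous one, so memory and lineage stay bounded per iteration?
A sketch of that idea (shown in Scala; as far as I can tell pyspark's
BlockMatrix exposes the same persist/cache and blocks members, and the
storage level here is just my assumption):

import org.apache.spark.mllib.linalg.distributed.BlockMatrix
import org.apache.spark.storage.StorageLevel

// Sketch only: square the matrix `times` times, keeping only the latest
// product materialized so the previous power's blocks can be dropped.
def repeatedSquare(a: BlockMatrix, times: Int): BlockMatrix = {
  var current = a
  for (_ <- 1 to times) {
    val next = current.multiply(current)
    next.persist(StorageLevel.MEMORY_AND_DISK)   // allow spilling instead of OOM
    next.blocks.count()                          // force materialization now
    if (current ne a) current.blocks.unpersist() // drop the previous power's blocks
    current = next
  }
  current
}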







df.groupBy('m).agg(sum('n)).show dies with 10^3 elements?

2016-09-06 Thread Jacek Laskowski
Hi,

I'm concerned with the OOME in local mode with the version built today:

scala> val intsMM = 1 to math.pow(10, 3).toInt
intsMM: scala.collection.immutable.Range.Inclusive = Range(1, 2, 3, 4,
5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23,
24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74,
75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120,
121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134,
135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148,
149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162,
163, 164, 165, 166, 167, 168, 169, 1...
scala> val df = intsMM.toDF("n").withColumn("m", 'n % 2)
df: org.apache.spark.sql.DataFrame = [n: int, m: int]

scala> df.groupBy('m).agg(sum('n)).show
...
16/09/06 22:28:02 ERROR Executor: Exception in task 6.0 in stage 0.0 (TID 6)
java.lang.OutOfMemoryError: Unable to acquire 262144 bytes of memory, got 0
...

Please see 
https://gist.github.com/jaceklaskowski/906d62b830f6c967a7eee5f8eb6e9237
and let me know if I should file an issue. I don't think 10^3 elements
and groupBy should kill spark-shell.
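
If I do file one, I will attach the query plans too, e.g.:

scala> df.groupBy('m).agg(sum('n)).explain(extended = true)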

Pozdrawiam,
Jacek Laskowski

https://medium.com/@jaceklaskowski/
Mastering Apache Spark 2.0 http://bit.ly/mastering-apache-spark
Follow me at https://twitter.com/jaceklaskowski
