[jira] [Commented] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-07 Thread Andrew Conegliano (JIRA)


[ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16504965#comment-16504965
 ] 

Andrew Conegliano commented on SPARK-24481:
---

Thanks Marco.

Forgot to mention: this error doesn't happen in 2.0.2 or 2.2.0. And on 2.3.0, 
even though the error is logged, the query still runs, because Spark falls back 
to disabling whole-stage codegen for that stage. The main problem is that in a 
Spark Streaming context the error is logged for every message, so the logs fill 
the disk very quickly.
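
As a stopgap (a workaround sketch, not a fix for this ticket, and assuming the slower interpreted path is acceptable), whole-stage codegen can be disabled for the session so the oversized method is never compiled and nothing is logged per message:

{code:java}
// Workaround sketch: with whole-stage codegen off, Spark never attempts to
// compile the oversized project_doConsume method, so the per-message
// "grows beyond 64 KB" error disappears from the logs.
spark.conf.set("spark.sql.codegen.wholeStage", "false")
{code}

The plan itself is unchanged; this only trades the codegen speedup for quiet logs.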

> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0 and Databricks Cloud 4.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = 
> df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
> .toDF("id","event_time","url","action","page_name","event_name","navigation_action")
> var a = "case "
> for(i <- 1 to 300){
>   a = a + s"when action like '$i%' THEN '$i' "
> }
> a = a + " else null end as task_id"
> val expression = expr(a)
> df = df.filter("id is not null and id <> '' and event_time is not null")
> val transformationExpressions: mutable.HashMap[String, Column] = 
> mutable.HashMap(
> "action" -> expr("coalesce(action, navigation_action) as action"),
> "task_id" -> expression
> )
> for((col, expr) <- transformationExpressions)
> df = df.withColumn(col, expr)
> df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")
> df.show
> {code}
>  
> Exception:
> {code:java}
> 18/06/07 01:06:34 ERROR CodeGenerator: failed to compile: 
> org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": 
> Code of method 
> "project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
>  of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
>  grows beyond 64 KB
> org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": 
> Code of method 
> "project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
>  of class 
> "org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
>  grows beyond 64 KB
>   at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:361)
>   at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:234)
>   at 
> org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:446)
>   at 
> org.codehaus.janino.ClassBodyEvaluator.compileToClass(ClassBodyEvaluator.java:313)
>   at 
> org.codehaus.janino.ClassBodyEvaluator.cook(ClassBodyEvaluator.java:235)
>   at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:204)
>   at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:1444)
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1523)
>   at 
> org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1520)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
>   at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
>   at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
> ...
> {code}

[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Description: 
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
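
One way to keep the 300-way mapping out of the generated projection is to move it into a Scala UDF, which whole-stage codegen treats as a single opaque call instead of inlining 300 branches. This is a sketch of a possible rewrite, not part of the report; find() scans 1 to 300 in order, mirroring the first-match-wins semantics of the CASE, since each LIKE '$i%' is a prefix test:

{code:java}
import org.apache.spark.sql.functions.{col, udf}

// Hypothetical workaround: the 300 prefix tests run inside one Scala function,
// so the generated code contains a single UDF call instead of 300 branches.
// Returning Option[String] maps None to SQL NULL, like the CASE's ELSE branch.
val taskId = udf { action: String =>
  if (action == null) None
  else (1 to 300).map(_.toString).find(action.startsWith)
}
df = df.withColumn("task_id", taskId(col("action")))
{code}

This only sidesteps the code-size blowup in this repro; it does not explain why the generated method is not split automatically.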
 

Exception:
{code:java}
18/06/07 01:06:34 ERROR CodeGenerator: failed to compile: 
org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Code 
of method 
"project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
 grows beyond 64 KB
org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Code 
of method 
"project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
 grows beyond 64 KB
at org.codehaus.janino.UnitCompiler.compileUnit(UnitCompiler.java:361)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:234)
at 
org.codehaus.janino.SimpleCompiler.compileToClassLoader(SimpleCompiler.java:446)
at 
org.codehaus.janino.ClassBodyEvaluator.compileToClass(ClassBodyEvaluator.java:313)
at 
org.codehaus.janino.ClassBodyEvaluator.cook(ClassBodyEvaluator.java:235)
at org.codehaus.janino.SimpleCompiler.cook(SimpleCompiler.java:204)
at org.codehaus.commons.compiler.Cookable.cook(Cookable.java:80)
at 
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.org$apache$spark$sql$catalyst$expressions$codegen$CodeGenerator$$doCompile(CodeGenerator.scala:1444)
at 
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1523)
at 
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$$anon$1.load(CodeGenerator.scala:1520)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3522)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2315)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2278)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2193)
at com.google.common.cache.LocalCache.get(LocalCache.java:3932)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3936)
at 
com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4806)
at 
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator$.compile(CodeGenerator.scala:1392)
at 
org.apache.spark.sql.execution.WholeStageCodegenExec.liftedTree1$1(WholeStageCodegenExec.scala:579)
at 
org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:578)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:135)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$3.apply(SparkPlan.scala:167)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at 
org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:164)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at 
org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:61)
...
{code}

[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Description: 
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
 

Exception:
{code:java}
18/06/07 01:06:34 ERROR CodeGenerator: failed to compile: 
org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Code 
of method 
"project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
 grows beyond 64 KB
org.codehaus.janino.InternalCompilerException: Compiling "GeneratedClass": Code 
of method 
"project_doConsume$(Lorg/apache/spark/sql/catalyst/expressions/GeneratedClass$GeneratedIteratorForCodegenStage1;Lorg/apache/spark/sql/catalyst/InternalRow;)V"
 of class 
"org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1"
 grows beyond 64 KB
{code}
 

Log file is attached
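
For anyone reproducing this, the oversized generated method can be inspected before the compile fails, using Spark's debug helpers (a diagnostic sketch; debugCodegen comes from the org.apache.spark.sql.execution.debug package):

{code:java}
// Prints the whole-stage-generated Java source for each codegen subtree,
// making the oversized project_doConsume method visible in full.
import org.apache.spark.sql.execution.debug._
df.debugCodegen()
{code}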

  was:
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached


> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0 and Databricks Cloud 4.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = ...
> {code}

[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Environment: Emr 5.13.0 and Databricks Cloud 4.0  (was: Emr 5.13.0)

> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0 and Databricks Cloud 4.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = 
> df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
> .toDF("id","event_time","url","action","page_name","event_name","navigation_action")
> var a = "case "
> for(i <- 1 to 300){
>   a = a + s"when action like '$i%' THEN '$i' "
> }
> a = a + " else null end as task_id"
> val expression = expr(a)
> df = df.filter("id is not null and id <> '' and event_time is not null")
> val transformationExpressions: mutable.HashMap[String, Column] = 
> mutable.HashMap(
> "action" -> expr("coalesce(action, navigation_action) as action"),
> "task_id" -> expression
> )
> for((col, expr) <- transformationExpressions)
> df = df.withColumn(col, expr)
> df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")
> df.show
> {code}
> Log file is attached






[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Description: 
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached

  was:
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
// Databricks notebook source
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached


> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = 
> df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
> .toDF("id","event_time","url","action","page_name","event_name","navigation_action")
> var a = "case "
> for(i <- 1 to 300){
>   a = a + s"when action like '$i%' THEN '$i' "
> }
> a = a + " else null end as task_id"
> val expression = expr(a)
> df = df.filter("id is not null and id <> '' and event_time is not null")
> val transformationExpressions: mutable.HashMap[String, Column] = 
> mutable.HashMap(
> "action" -> expr("coalesce(action, navigation_action) as action"),
> "task_id" -> expression
> )
> for((col, expr) <- transformationExpressions)
> df = df.withColumn(col, expr)
> df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")
> df.show
> {code}
> Log file is attached





[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Description: 
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
// Databricks notebook source
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300){
  a = a + s"when action like '$i%' THEN '$i' "
}
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached

  was:
Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
// Databricks notebook source
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300)
a = a + s"when action like '$i%' THEN '$i' "
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached


> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> // Databricks notebook source
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = 
> df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
> .toDF("id","event_time","url","action","page_name","event_name","navigation_action")
> var a = "case "
> for(i <- 1 to 300){
>   a = a + s"when action like '$i%' THEN '$i' "
> }
> a = a + " else null end as task_id"
> val expression = expr(a)
> df = df.filter("id is not null and id <> '' and event_time is not null")
> val transformationExpressions: mutable.HashMap[String, Column] = 
> mutable.HashMap(
> "action" -> expr("coalesce(action, navigation_action) as action"),
> "task_id" -> expression
> )
> for((col, expr) <- transformationExpressions)
> df = df.withColumn(col, expr)
> df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")
> df.show
> {code}
> Log file is attached




[jira] [Updated] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-24481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Conegliano updated SPARK-24481:
--
Attachment: log4j-active(1).log

> GeneratedIteratorForCodegenStage1 grows beyond 64 KB
> 
>
> Key: SPARK-24481
> URL: https://issues.apache.org/jira/browse/SPARK-24481
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 2.3.0
> Environment: Emr 5.13.0
>Reporter: Andrew Conegliano
>Priority: Major
> Attachments: log4j-active(1).log
>
>
> Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
> {code:java}
> // Databricks notebook source
> import org.apache.spark.sql.functions._
> import scala.collection.mutable
> import org.apache.spark.sql.Column
> var rdd = sc.parallelize(Array("""{
> "event":
> {
> "timestamp": 1521086591110,
> "event_name": "yu",
> "page":
> {
> "page_url": "https://;,
> "page_name": "es"
> },
> "properties":
> {
> "id": "87",
> "action": "action",
> "navigate_action": "navigate_action"
> }
> }
> }
> """))
> var df = spark.read.json(rdd)
> df = 
> df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
> .toDF("id","event_time","url","action","page_name","event_name","navigation_action")
> var a = "case "
> for(i <- 1 to 300)
> a = a + s"when action like '$i%' THEN '$i' "
> a = a + " else null end as task_id"
> val expression = expr(a)
> df = df.filter("id is not null and id <> '' and event_time is not null")
> val transformationExpressions: mutable.HashMap[String, Column] = 
> mutable.HashMap(
> "action" -> expr("coalesce(action, navigation_action) as action"),
> "task_id" -> expression
> )
> for((col, expr) <- transformationExpressions)
> df = df.withColumn(col, expr)
> df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")
> df.show
> {code}
> Log file is attached






[jira] [Created] (SPARK-24481) GeneratedIteratorForCodegenStage1 grows beyond 64 KB

2018-06-06 Thread Andrew Conegliano (JIRA)
Andrew Conegliano created SPARK-24481:
-

 Summary: GeneratedIteratorForCodegenStage1 grows beyond 64 KB
 Key: SPARK-24481
 URL: https://issues.apache.org/jira/browse/SPARK-24481
 Project: Spark
  Issue Type: Bug
  Components: SQL
Affects Versions: 2.3.0
 Environment: Emr 5.13.0
Reporter: Andrew Conegliano
 Attachments: log4j-active(1).log

Similar to other "grows beyond 64 KB" errors. Happens with a large case statement:
{code:java}
// Databricks notebook source
import org.apache.spark.sql.functions._
import scala.collection.mutable
import org.apache.spark.sql.Column

var rdd = sc.parallelize(Array("""{
"event":
{
"timestamp": 1521086591110,
"event_name": "yu",
"page":
{
"page_url": "https://;,
"page_name": "es"
},
"properties":
{
"id": "87",
"action": "action",
"navigate_action": "navigate_action"
}
}
}
"""))

var df = spark.read.json(rdd)
df = 
df.select("event.properties.id","event.timestamp","event.page.page_url","event.properties.action","event.page.page_name","event.event_name","event.properties.navigate_action")
.toDF("id","event_time","url","action","page_name","event_name","navigation_action")

var a = "case "
for(i <- 1 to 300)
a = a + s"when action like '$i%' THEN '$i' "
a = a + " else null end as task_id"

val expression = expr(a)

df = df.filter("id is not null and id <> '' and event_time is not null")

val transformationExpressions: mutable.HashMap[String, Column] = 
mutable.HashMap(
"action" -> expr("coalesce(action, navigation_action) as action"),
"task_id" -> expression
)

for((col, expr) <- transformationExpressions)
df = df.withColumn(col, expr)

df = df.filter("(action is not null and action <> '') or (page_name is not null and page_name <> '')")

df.show
{code}
Log file is attached
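
To narrow down where the limit is crossed, the branch count can be bisected on a stripped-down plan (a rough sketch only; the reported plan also carries the filters and extra columns, so the exact crossover point will differ):

{code:java}
// Hypothetical bisection: rebuild the CASE with n branches and watch the logs
// for the first n at which "grows beyond 64 KB" appears.
for (n <- Seq(50, 100, 200, 300)) {
  val cases = (1 to n)
    .map(i => s"when action like '$i%' then '$i'")
    .mkString("case ", " ", " else null end")
  spark.range(1)
    .selectExpr("cast(id as string) as action")
    .selectExpr(s"$cases as task_id")
    .show()
}
{code}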



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org