[jira] [Commented] (SPARK-17403) Fatal Error: Scan cached strings
[ https://issues.apache.org/jira/browse/SPARK-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15470293#comment-15470293 ]

Ruben Hernando commented on SPARK-17403:
----------------------------------------

I think the error happens when a cached data frame is queried by two different tasks at once and both tasks try to cache that data frame. If I use serialised persistence instead, I get a "ConcurrentModificationException" while the serialised object is being written.

> Fatal Error: Scan cached strings
> --------------------------------
>
>                 Key: SPARK-17403
>                 URL: https://issues.apache.org/jira/browse/SPARK-17403
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>        Environment: Spark standalone cluster (3 Workers, 47 cores)
> Ubuntu 14
> Java 8
>            Reporter: Ruben Hernando
>
> The process creates views from a JDBC (SQL Server) source and combines them to create other views. Finally it dumps the results via JDBC.
> Error:
> {quote}
> # JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
> # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64)
> # Problematic frame:
> # J 4895 C1 org.apache.spark.unsafe.Platform.getLong(Ljava/lang/Object;J)J (9 bytes) @ 0x7fbb355dfd6c [0x7fbb355dfd60+0xc]
> #
> {quote}
> SQL query plan (fields truncated):
> {noformat}
> == Parsed Logical Plan ==
> 'Project [*]
> +- 'UnresolvedRelation `COEQ_63`
>
> == Analyzed Logical Plan ==
> InstanceId: bigint, price: double, ZoneId: int, priceItemId: int, priceId: int
> Project [InstanceId#20236L, price#20237, ZoneId#20239, priceItemId#20242, priceId#20244]
> +- SubqueryAlias coeq_63
>    +- Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
>       +- SubqueryAlias 6__input
>          +- Relation[_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,SL_Xcl_C#152,SL_Css_Cojs#153L,SL_Config#154,SL_CREATEDON# ... 36 more fields] JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables]. ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input)
>
> == Optimized Logical Plan ==
> Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
> +- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
>    :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], [SLTables].[TableSHID], [TPSLTables].[SL_ACT_GI_DTE], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,... 36 more fields]
>
> == Physical Plan ==
> *Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
> +- InMemoryTableScan [_TableSL_SID#143L, SL_RD_ColR_N#189]
>    :  +- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
>    :     :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,... 36 more fields]
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
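The race described in the comment above can be sketched roughly as follows. This is not code from the ticket: the JDBC URL, the pushed-down query, and the column name are placeholders, and the two concurrent actions simply stand in for whatever queries hit the not-yet-materialized cache at the same time.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

import org.apache.spark.sql.SparkSession

object ConcurrentCacheSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("concurrent-cache-sketch")
      .getOrCreate()

    // Hypothetical JDBC source standing in for the ticket's SQL Server views.
    val df = spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://host;databaseName=db") // placeholder
      .option("dbtable", "(select * from sch.SLTables) input") // placeholder
      .load()
      .cache() // default deserialized MEMORY_AND_DISK, matching the plan's
               // StorageLevel(disk, memory, deserialized, 1 replicas)

    // Two actions fired before the cache is materialized: both jobs may try
    // to build the same cached partitions concurrently, which is the
    // situation the comment suspects triggers the crash.
    val a = Future(df.count())
    val b = Future(df.distinct().count())
    Await.result(a, 10.minutes)
    Await.result(b, 10.minutes)
    spark.stop()
  }
}
```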
[jira] [Commented] (SPARK-17403) Fatal Error: Scan cached strings
[ https://issues.apache.org/jira/browse/SPARK-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15468528#comment-15468528 ]

Ruben Hernando commented on SPARK-17403:
----------------------------------------

I'm sorry, I can't share the data. This is a join of two tables, each with nearly 37 million records; one of them has about 40 columns. As far as I can see, the string columns are identifiers (many null values). By the way, since the problematic frame is "org.apache.spark.unsafe.Platform.getLong", wouldn't that mean it is a long field?
[jira] [Commented] (SPARK-17403) Fatal Error: SIGSEGV on Jdbc joins
[ https://issues.apache.org/jira/browse/SPARK-17403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15465539#comment-15465539 ]

Ruben Hernando commented on SPARK-17403:
----------------------------------------

You're right. After removing cache(), the errors disappeared.
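The confirmed workaround, and the variant the reporter said still fails, can be sketched as below. This assumes a SparkSession `spark` and a registered view `COEQ_63` as in the plans above; it is an illustration of the two options discussed in the thread, not code from the ticket.

```scala
import org.apache.spark.storage.StorageLevel

// Workaround confirmed in this thread: skip caching and let each query
// re-scan the JDBC source.
val combined = spark.sql("SELECT * FROM COEQ_63") // no .cache()

// Alternative the reporter tried: serialized persistence instead of the
// default deserialized cache. Reported to fail with a
// ConcurrentModificationException, so shown only for completeness.
val serialized = spark.sql("SELECT * FROM COEQ_63")
  .persist(StorageLevel.MEMORY_AND_DISK_SER)
```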
[jira] [Commented] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15465353#comment-15465353 ]

Ruben Hernando commented on SPARK-15822:
----------------------------------------

Created https://issues.apache.org/jira/browse/SPARK-17403

> segmentation violation in o.a.s.unsafe.types.UTF8String
> -------------------------------------------------------
>
>                 Key: SPARK-15822
>                 URL: https://issues.apache.org/jira/browse/SPARK-15822
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0
>        Environment: linux amd64
> openjdk version "1.8.0_91"
> OpenJDK Runtime Environment (build 1.8.0_91-b14)
> OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)
>            Reporter: Pete Robbins
>            Assignee: Herman van Hovell
>            Priority: Blocker
>             Fix For: 2.0.0
>
>
> Executors fail with segmentation violation while running application with
> spark.memory.offHeap.enabled true
> spark.memory.offHeap.size 512m
> Also now reproduced with
> spark.memory.offHeap.enabled false
> {noformat}
> #
> # A fatal error has been detected by the Java Runtime Environment:
> #
> # SIGSEGV (0xb) at pc=0x7f4559b4d4bd, pid=14182, tid=139935319750400
> #
> # JRE version: OpenJDK Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14)
> # Java VM: OpenJDK 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 compressed oops)
> # Problematic frame:
> # J 4816 C2 org.apache.spark.unsafe.types.UTF8String.compareTo(Lorg/apache/spark/unsafe/types/UTF8String;)I (64 bytes) @ 0x7f4559b4d4bd [0x7f4559b4d460+0x5d]
> {noformat}
> We initially saw this on IBM java on PowerPC box but is recreatable on linux with OpenJDK. On linux with IBM Java 8 we see a null pointer exception at the same code point:
> {noformat}
> 16/06/08 11:14:58 ERROR Executor: Exception in task 1.0 in stage 5.0 (TID 48)
> java.lang.NullPointerException
>     at org.apache.spark.unsafe.types.UTF8String.compareTo(UTF8String.java:831)
>     at org.apache.spark.unsafe.types.UTF8String.compare(UTF8String.java:844)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.findNextInnerJoinRows$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$doExecute$2$$anon$2.hasNext(WholeStageCodegenExec.scala:377)
>     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>     at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:30)
>     at org.spark_project.guava.collect.Ordering.leastOf(Ordering.java:664)
>     at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
>     at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$30.apply(RDD.scala:1365)
>     at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$30.apply(RDD.scala:1362)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:757)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:757)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:85)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.lang.Thread.run(Thread.java:785)
> {noformat}
[jira] [Created] (SPARK-17403) Fatal Error: SIGSEGV on Jdbc joins
Ruben Hernando created SPARK-17403:
--------------------------------------

             Summary: Fatal Error: SIGSEGV on Jdbc joins
                 Key: SPARK-17403
                 URL: https://issues.apache.org/jira/browse/SPARK-17403
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 2.0.0
        Environment: Spark standalone cluster (3 Workers, 47 cores)
Ubuntu 14
Java 8
            Reporter: Ruben Hernando


The process creates views from a JDBC (SQL Server) source and combines them to create other views. Finally it dumps the results via JDBC.

Error:
{quote}
# JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64)
# Problematic frame:
# J 4895 C1 org.apache.spark.unsafe.Platform.getLong(Ljava/lang/Object;J)J (9 bytes) @ 0x7fbb355dfd6c [0x7fbb355dfd60+0xc]
#
{quote}

SQL query plan (fields truncated):
{quote}
== Parsed Logical Plan ==
'Project [*]
+- 'UnresolvedRelation `COEQ_63`

== Analyzed Logical Plan ==
InstanceId: bigint, price: double, ZoneId: int, priceItemId: int, priceId: int
Project [InstanceId#20236L, price#20237, ZoneId#20239, priceItemId#20242, priceId#20244]
+- SubqueryAlias coeq_63
   +- Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
      +- SubqueryAlias 6__input
         +- Relation[_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,SL_Xcl_C#152,SL_Css_Cojs#153L,SL_Config#154,SL_CREATEDON# ... 36 more fields] JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables]. ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input)

== Optimized Logical Plan ==
Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
+- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
   :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], [SLTables].[TableSHID], [TPSLTables].[SL_ACT_GI_DTE], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,... 36 more fields]

== Physical Plan ==
*Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
+- InMemoryTableScan [_TableSL_SID#143L, SL_RD_ColR_N#189]
   :  +- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
   :     :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,... 36 more fields]
{quote}
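The pipeline described above (read SQL Server views over JDBC, combine them as temporary views, write the result back via JDBC) can be sketched as below. Everything here is a placeholder standing in for the ticket's setup: the URL, credentials, table names, and the select list of the pushed-down subquery are hypothetical, and the derived-view SQL is a guess at the shape of `COEQ_63` from the plan.

```scala
import java.util.Properties

import org.apache.spark.sql.SparkSession

object JdbcViewPipelineSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("jdbc-view-pipeline").getOrCreate()

    val url = "jdbc:sqlserver://host;databaseName=db" // placeholder
    val props = new Properties()
    props.setProperty("user", "user")       // placeholder
    props.setProperty("password", "secret") // placeholder

    // A pushed-down subquery, matching the plan's JDBCRelation((select ...) input)
    val input = spark.read.jdbc(
      url,
      """(SELECT s._TableSL_SID, s.SL_RD_ColR_N
        |FROM sch.SLTables s
        |JOIN sch.TPSLTables t ON t._TableSL_SID = s._TableSL_SID
        |WHERE t._TP = 24) input""".stripMargin,
      props)
    input.createOrReplaceTempView("6__input")

    // Combine base views into derived views, e.g. COEQ_63 from the plan.
    spark.sql(
      """SELECT _TableSL_SID AS InstanceId, SL_RD_ColR_N AS price,
        |24 AS ZoneId, 6 AS priceItemId, 63 AS priceId
        |FROM `6__input`""".stripMargin)
      .createOrReplaceTempView("COEQ_63")

    // Dump results back over JDBC.
    spark.sql("SELECT * FROM COEQ_63")
      .write.mode("append").jdbc(url, "dbo.Results", props)
    spark.stop()
  }
}
```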
[jira] [Commented] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15465342#comment-15465342 ]

Ruben Hernando commented on SPARK-15822:
----------------------------------------

Thank you! I'll do it.
[jira] [Commented] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15465316#comment-15465316 ]

Ruben Hernando commented on SPARK-15822:
----------------------------------------

{quote}
== Parsed Logical Plan ==
'Project [*]
+- 'UnresolvedRelation `COEQ_63`

== Analyzed Logical Plan ==
InstanceId: bigint, price: double, ZoneId: int, priceItemId: int, priceId: int
Project [InstanceId#20236L, price#20237, ZoneId#20239, priceItemId#20242, priceId#20244]
+- SubqueryAlias coeq_63
   +- Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
      +- SubqueryAlias 6__input
         +- Relation[_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,SL_Xcl_C#152,SL_Css_Cojs#153L,SL_Config#154,SL_CREATEDON# ... 36 more fields] JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables]. ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input)

== Optimized Logical Plan ==
Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
+- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
   :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], [SLTables].[TableSHID], [TPSLTables].[SL_ACT_GI_DTE], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,TableP_DCID#148,TableSHID#149,SL_ACT_GI_DTE#150,SL_Xcl_C#151,... 36 more fields]

== Physical Plan ==
*Project [_TableSL_SID#143L AS InstanceId#20236L, SL_RD_ColR_N#189 AS price#20237, 24 AS ZoneId#20239, 6 AS priceItemId#20242, 63 AS priceId#20244]
+- InMemoryTableScan [_TableSL_SID#143L, SL_RD_ColR_N#189]
   :  +- InMemoryRelation [_TableSL_SID#143L, _TableP_DC_SID#144L, _TableSH_SID#145L, ID#146, Name#147, ... 36 more fields], true, 1, StorageLevel(disk, memory, deserialized, 1 replicas)
   :     :  +- *Scan JDBCRelation((select [SLTables].[_TableSL_SID], [SLTables].[_TableP_DC_SID], [SLTables].[_TableSH_SID], [SLTables].[ID], [SLTables].[Name], [SLTables].[TableP_DCID], ... [...] FROM [sch].[SLTables] [SLTables] JOIN sch.TPSLTables TPSLTables ON [TPSLTables].[_TableSL_SID] = [SLTables].[_TableSL_SID] where _TP = 24) input) [_TableSL_SID#143L,_TableP_DC_SID#144L,_TableSH_SID#145L,ID#146,Name#147,... 36 more fields]
{quote}
[jira] [Issue Comment Deleted] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ruben Hernando updated SPARK-15822: --- Comment: was deleted (was: I'm doing joins between SQL views. Data source is JDBC to SQL server) > segmentation violation in o.a.s.unsafe.types.UTF8String > > > Key: SPARK-15822 > URL: https://issues.apache.org/jira/browse/SPARK-15822 > Project: Spark > Issue Type: Bug > Components: SQL >Affects Versions: 2.0.0 > Environment: linux amd64 > openjdk version "1.8.0_91" > OpenJDK Runtime Environment (build 1.8.0_91-b14) > OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode) >Reporter: Pete Robbins >Assignee: Herman van Hovell >Priority: Blocker > Fix For: 2.0.0 > > > Executors fail with segmentation violation while running application with > spark.memory.offHeap.enabled true > spark.memory.offHeap.size 512m > Also now reproduced with > spark.memory.offHeap.enabled false > {noformat} > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGSEGV (0xb) at pc=0x7f4559b4d4bd, pid=14182, tid=139935319750400 > # > # JRE version: OpenJDK Runtime Environment (8.0_91-b14) (build 1.8.0_91-b14) > # Java VM: OpenJDK 64-Bit Server VM (25.91-b14 mixed mode linux-amd64 > compressed oops) > # Problematic frame: > # J 4816 C2 > org.apache.spark.unsafe.types.UTF8String.compareTo(Lorg/apache/spark/unsafe/types/UTF8String;)I > (64 bytes) @ 0x7f4559b4d4bd [0x7f4559b4d460+0x5d] > {noformat} > We initially saw this on IBM java on PowerPC box but is recreatable on linux > with OpenJDK. 
> On linux with IBM Java 8 we see a null pointer exception at the same code point:
> {noformat}
> 16/06/08 11:14:58 ERROR Executor: Exception in task 1.0 in stage 5.0 (TID 48)
> java.lang.NullPointerException
>     at org.apache.spark.unsafe.types.UTF8String.compareTo(UTF8String.java:831)
>     at org.apache.spark.unsafe.types.UTF8String.compare(UTF8String.java:844)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.findNextInnerJoinRows$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$doExecute$2$$anon$2.hasNext(WholeStageCodegenExec.scala:377)
>     at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
>     at scala.collection.convert.Wrappers$IteratorWrapper.hasNext(Wrappers.scala:30)
>     at org.spark_project.guava.collect.Ordering.leastOf(Ordering.java:664)
>     at org.apache.spark.util.collection.Utils$.takeOrdered(Utils.scala:37)
>     at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$30.apply(RDD.scala:1365)
>     at org.apache.spark.rdd.RDD$$anonfun$takeOrdered$1$$anonfun$30.apply(RDD.scala:1362)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:757)
>     at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:757)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:318)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:282)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
>     at org.apache.spark.scheduler.Task.run(Task.scala:85)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1153)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>     at java.lang.Thread.run(Thread.java:785)
> {noformat}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
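The crashing frame above is `UTF8String.compareTo`, which orders two strings by comparing their backing UTF-8 bytes as unsigned values. As a simplified, pure-Java sketch of that comparison loop (on-heap `byte[]` only; Spark's real implementation reads through `Platform.getLong` against base-object/offset pairs, which is what faults here):

```java
// Simplified sketch of unsigned, byte-wise lexicographic comparison,
// the kind of loop UTF8String.compareTo performs over backing bytes.
// This is an illustrative analogue, not Spark's off-heap implementation.
public class ByteCompare {
    static int compare(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            // Mask to compare as unsigned bytes (0..255), as UTF-8
            // binary ordering requires; signed byte compare would be wrong.
            int res = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (res != 0) return res;
        }
        // Common prefix equal: the shorter string sorts first.
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] x = "abc".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        byte[] y = "abd".getBytes(java.nio.charset.StandardCharsets.UTF_8);
        System.out.println(compare(x, y) < 0); // true: "abc" < "abd"
    }
}
```

The segfault and the IBM-Java NullPointerException both surface inside this loop because the bytes are fetched via unsafe memory reads rather than array indexing, so a bad base object or offset fails at the read itself instead of with a bounds check.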
[jira] [Commented] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15464860#comment-15464860 ] Ruben Hernando commented on SPARK-15822:

I'm doing joins between SQL views. The data source is JDBC to SQL Server.
[jira] [Commented] (SPARK-15822) segmentation violation in o.a.s.unsafe.types.UTF8String
[ https://issues.apache.org/jira/browse/SPARK-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15464793#comment-15464793 ] Ruben Hernando commented on SPARK-15822:

Encountered a similar error (using 2.0.0):
{quote}
# SIGSEGV (0xb) at pc=0x7fbb355dfd6c, pid=5592, tid=0x7faa9f2f4700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_101-b13) (build 1.8.0_101-b13)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.101-b13 mixed mode linux-amd64 )
# Problematic frame:
# J 4895 C1 org.apache.spark.unsafe.Platform.getLong(Ljava/lang/Object;J)J (9 bytes) @ 0x7fbb355dfd6c [0x7fbb355dfd60+0xc]
#
{quote}
I'm not sure if it's related.
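The problematic frame in this last report is `Platform.getLong(Object, long)`, Spark's thin wrapper over `sun.misc.Unsafe` that reinterprets 8 raw bytes at a base-object/offset pair as a `long`; if the base or offset is stale or corrupt, the read faults and takes the whole JVM down, which matches the SIGSEGV above. As a safe, self-contained analogue of the read semantics (using `ByteBuffer` over an on-heap array instead of `Unsafe`, and assuming little-endian byte order as on the amd64 hosts in these reports):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Safe analogue of Platform.getLong(base, offset): reinterpret 8 bytes
// at a given offset as a long. Unlike Unsafe, ByteBuffer bounds-checks,
// so a bad offset throws IndexOutOfBoundsException instead of SIGSEGV.
public class GetLongDemo {
    static long getLong(byte[] base, int offset) {
        return ByteBuffer.wrap(base)
                .order(ByteOrder.LITTLE_ENDIAN) // native order on amd64
                .getLong(offset);
    }

    public static void main(String[] args) {
        byte[] buf = new byte[16];
        buf[0] = 1; // little-endian: lowest-address byte is least significant
        System.out.println(getLong(buf, 0)); // 1
    }
}
```

The contrast is the point: off-heap reads via `Unsafe` trade away exactly this bounds checking for speed, so a pointer that would raise an exception in safe code instead crashes the executor.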