[ https://issues.apache.org/jira/browse/SPARK-32275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166132#comment-17166132 ]
Hyukjin Kwon commented on SPARK-32275:
--------------------------------------

Looks like it tries to access JVM instances within UDFs, which is disallowed.
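For illustration, a minimal hypothetical sketch of that pattern (not the reporter's code): any SparkContext/SparkSession access inside a UDF body, or inside a module imported while the UDF is being unpickled, runs in a Python worker on an executor, where no JVM gateway is available.

{code:python}
# Hypothetical sketch of the disallowed pattern, assuming Spark 2.4 / Python 3.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()  # fine: runs on the driver

@udf(returnType=StringType())
def tag(value):
    # Disallowed: this runs inside a Python worker on an executor. There is
    # no JVM gateway there, so getOrCreate() tries to construct a brand-new
    # JavaSparkContext and fails with the same
    # "None.org.apache.spark.api.java.JavaSparkContext" Py4JError.
    app_id = SparkSession.builder.getOrCreate().sparkContext.applicationId
    return "%s-%s" % (app_id, value)

df = spark.createDataFrame([("a",), ("b",)], ["value"])
df.select(tag("value")).show()  # the error surfaces here, at task execution
{code}

In the traceback below the access happens in the second flavor: it is triggered while unpickling the UDF, not while executing its body.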
> "None.org.apache.spark.api.java.JavaSparkContext" Issue With Spark-Mllib Algorithm and JDBC Connectors
> -------------------------------------------------------------------------------------------------------
>
>                  Key: SPARK-32275
>                  URL: https://issues.apache.org/jira/browse/SPARK-32275
>              Project: Spark
>           Issue Type: Bug
>           Components: PySpark
>     Affects Versions: 2.4.4
>          Environment: Pyspark 2.4.4
>                       Python 3.7
>                       Running on AWS EC2 instances with RHEL.
>             Reporter: Luke Chu
>             Priority: Minor
>
> While calling a spark-mllib package algorithm, specifically FPGrowth, and passing in a dataframe from a JDBC connector, specifically DataStax's spark-cassandra-connector, the following is thrown at the task level:
>
> {code:java}
> 20/05/29 01:56:07 WARN TaskSetManager: Lost task 96.0 in stage 8.0 (TID 802, 10.168.0.43, executor 0): org.apache.spark.api.python.PythonException:
> Traceback (most recent call last):
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 366, in main
>     func, profiler, deserializer, serializer = read_udfs(pickleSer, infile, eval_type)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 241, in read_udfs
>     arg_offsets, udf = read_single_udf(pickleSer, infile, eval_type, runner_conf)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 168, in read_single_udf
>     f, return_type = read_command(pickleSer, infile)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/worker.py", line 69, in read_command
>     command = serializer._read_with_length(file)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 172, in _read_with_length
>     return self.loads(obj)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/serializers.py", line 580, in loads
>     return pickle.loads(obj, encoding=encoding)
>   File "build/bdist.linux-x86_64/egg/zoran_core/__init__.py", line 5, in <module>
>   File "build/bdist.linux-x86_64/egg/zoran_core/config/conf.py", line 17, in <module>
>   File "build/bdist.linux-x86_64/egg/zoran_core/utils/logger.py", line 5, in getSparkLogger
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/sql/session.py", line 173, in getOrCreate
>     sc = SparkContext.getOrCreate(sparkConf)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 367, in getOrCreate
>     SparkContext(conf=conf or SparkConf())
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
>     conf, jsc, profiler_cls)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
>     self._jsc = jsc or self._initialize_context(self._conf._jconf)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
>     return self._jvm.JavaSparkContext(jconf)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
>     answer, self._gateway_client, None, self._fqn)
>   File "/autoid/spark/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 336, in get_return_value
>     format(target_id, ".", name))
> py4j.protocol.Py4JError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext
> 	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:456)
> 	at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:81)
> 	at org.apache.spark.sql.execution.python.PythonUDFRunner$$anon$1.read(PythonUDFRunner.scala:64)
> 	at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:410)
> 	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
> 	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.agg_doAggregateWithKeys_0$(Unknown Source)
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage3.processNext(Unknown Source)
> 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:636)
> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
> 	at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:123)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> {code}
>
> Note the py4j.protocol.Py4JError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
>
> At the top level it is a WARN, so execution continues and ultimately succeeds. This doesn't happen when the dataframe passed to the algorithm is read from CSV. Also, I suspect this isn't unique to spark-mllib or the spark-cassandra-connector, based on this thread: [http://mail-archives.apache.org/mod_mbox/spark-user/201701.mbox/%3ccaohmdzfvxrwzjh6yesiann-lumz467bv3key68-nvzjzeno...@mail.gmail.com%3E] There, the user seems to have run into the same problem using GraphFrames and JDBC connectors.
>
> Just in case, this happens at the 'fit' step:
>
> {code:python}
> from pyspark.ml.fpm import FPGrowth
>
> fp_growth = FPGrowth(itemsCol='feats',
>                      minSupport=min_support,
>                      minConfidence=min_confidence,
>                      numPartitions=num_partitions)
> model = fp_growth.fit(features_df)
> {code}
>
> where features_df is sourced from Cassandra.
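A possible reading of the traceback above (hedged, since only the file names are visible): the pickled UDF drags in the reporter's zoran_core package, whose import chain (__init__.py -> config/conf.py -> utils/logger.py getSparkLogger) reaches SparkSession/SparkContext.getOrCreate at import time; when that import re-runs inside a Python worker, the JavaSparkContext constructor fails. A sketch of a worker-safe variant, assuming getSparkLogger wraps a JVM log4j logger (that detail is an assumption, not from the issue):

{code:python}
# Hypothetical rewrite of zoran_core/utils/logger.py; the names come from the
# traceback, but the body is an assumption rather than the reporter's code.

def getSparkLogger(name="zoran_core"):
    """Return a JVM log4j logger. Driver-only: must not run at import time."""
    from pyspark.sql import SparkSession  # deferred import, driver-side path

    spark = SparkSession.builder.getOrCreate()
    # The JVM access below is exactly what is disallowed inside UDFs/workers,
    # so nothing at module level should call this function.
    log4j = spark.sparkContext._jvm.org.apache.log4j
    return log4j.LogManager.getLogger(name)
{code}

With module-level logger creation removed (e.g. config/conf.py calling getSparkLogger() lazily inside driver-side functions), unpickling the UDF on executors would no longer touch the JVM.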