alberttwong opened a new issue, #12985:
URL: https://github.com/apache/hudi/issues/12985

   **Describe the problem you faced**
   
   Running the Hudi Quickstart on Databricks 13.3 LTS (Scala 2.12 with Spark 3.4) fails: the first `df.show()` aborts with `org.apache.spark.SparkException: Job aborted due to stage failure: Failed to serialize task 0`, ultimately caused by `java.lang.ClassNotFoundException: org.apache.spark.HoodieSparkKryoRegistrar`.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   
   Run the quickstart at https://hudi.apache.org/docs/quick-start-guide/ with org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.1 installed as a cluster library, as sketched below.
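   
   A minimal sketch of the notebook steps that trigger the failure (the column names and sample row follow the quickstart example; the cluster-level Spark configs listed in the comments are the ones the quickstart asks for, and `spark` is the session Databricks provides):
   
   ```
   # Cluster library: org.apache.hudi:hudi-spark3.4-bundle_2.12:0.14.1
   # Cluster Spark config, per the quickstart:
   #   spark.serializer                 org.apache.spark.serializer.KryoSerializer
   #   spark.kryo.registrator           org.apache.spark.HoodieSparkKryoRegistrar
   #   spark.sql.extensions             org.apache.spark.sql.hudi.HoodieSparkSessionExtension
   #   spark.sql.catalog.spark_catalog  org.apache.hudi.catalog.HoodieCatalog
   
   # Build the quickstart's sample trips DataFrame from a local collection.
   columns = ["ts", "uuid", "rider", "driver", "fare", "city"]
   data = [(1695159649087, "334e26e9-8355-45cc-97c6-c31daf0df330",
            "rider-A", "driver-K", 19.10, "san_francisco")]
   df = spark.createDataFrame(data).toDF(*columns)
   
   df.printSchema()  # succeeds: no Spark job is run
   df.show()         # fails: serializing the task forces Kryo to load the registrator
   ```
   
   Note that the failure happens before any Hudi write is attempted: as soon as `spark.kryo.registrator` names a class the Databricks classloader cannot load, every task serialization fails, producing the trace below.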
   
   ```
   ---------------------------------------------------------------------------
   Py4JJavaError                             Traceback (most recent call last)
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:188, in capture_sql_exception.<locals>.deco(*a, **kw)
       187 try:
   --> 188     return f(*a, **kw)
       189 except Py4JJavaError as e:
   
   File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py:326, in get_return_value(answer, gateway_client, target_id, name)
       325 if answer[1] == REFERENCE_TYPE:
   --> 326     raise Py4JJavaError(
       327         "An error occurred while calling {0}{1}{2}.\n".
       328         format(target_id, ".", name), value)
       329 else:
   
   Py4JJavaError: An error occurred while calling o456.showString.
   : org.apache.spark.SparkException: Job aborted due to stage failure: Failed to serialize task 0, not attempting to retry it.
        at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:3694)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:3616)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:3603)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:3603)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1548)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1548)
        at scala.Option.foreach(Option.scala:407)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1548)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:3939)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3851)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:3839)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:51)
        at org.apache.spark.scheduler.DAGScheduler.$anonfun$runJob$1(DAGScheduler.scala:1272)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:1260)
        at org.apache.spark.SparkContext.runJobInternal(SparkContext.scala:2961)
        at org.apache.spark.sql.execution.collect.Collector.$anonfun$runSparkJobs$1(Collector.scala:338)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.execution.collect.Collector.runSparkJobs(Collector.scala:282)
        at org.apache.spark.sql.execution.collect.Collector.$anonfun$collect$1(Collector.scala:366)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.execution.collect.Collector.collect(Collector.scala:363)
        at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:117)
        at org.apache.spark.sql.execution.collect.Collector$.collect(Collector.scala:124)
        at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:126)
        at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:114)
        at org.apache.spark.sql.execution.qrc.InternalRowFormat$.collect(cachedSparkResults.scala:94)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$computeResult$1(ResultCacheManager.scala:553)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.collectResult$1(ResultCacheManager.scala:545)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.computeResult(ResultCacheManager.scala:565)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.$anonfun$getOrComputeResultInternal$1(ResultCacheManager.scala:426)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.getOrComputeResultInternal(ResultCacheManager.scala:419)
        at org.apache.spark.sql.execution.qrc.ResultCacheManager.getOrComputeResult(ResultCacheManager.scala:313)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeCollectResult$1(SparkPlan.scala:519)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.sql.execution.SparkPlan.executeCollectResult(SparkPlan.scala:516)
        at org.apache.spark.sql.Dataset.collectResult(Dataset.scala:3670)
        at org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:4595)
        at org.apache.spark.sql.Dataset.$anonfun$head$1(Dataset.scala:3378)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$3(Dataset.scala:4586)
        at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:981)
        at org.apache.spark.sql.Dataset.$anonfun$withAction$2(Dataset.scala:4584)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$9(SQLExecution.scala:303)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:533)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withCustomExecutionEnv$1(SQLExecution.scala:226)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1148)
        at org.apache.spark.sql.execution.SQLExecution$.withCustomExecutionEnv(SQLExecution.scala:155)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:482)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:4584)
        at org.apache.spark.sql.Dataset.head(Dataset.scala:3378)
        at org.apache.spark.sql.Dataset.take(Dataset.scala:3601)
        at org.apache.spark.sql.Dataset.getRows(Dataset.scala:321)
        at org.apache.spark.sql.Dataset.showString(Dataset.scala:356)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)
        at py4j.Gateway.invoke(Gateway.java:306)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:119)
        at java.lang.Thread.run(Thread.java:750)
   Caused by: java.io.IOException: org.apache.spark.SparkException: Failed to register classes with Kryo
        at org.apache.spark.util.SparkErrorUtils$.tryOrIOException(SparkErrorUtils.scala:41)
        at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1749)
        at org.apache.spark.rdd.ParallelCollectionPartition.writeObject(ParallelCollectionRDD.scala:50)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at java.io.ObjectStreamClass.invokeWriteObject(ObjectStreamClass.java:1154)
        at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1496)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
        at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
        at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
        at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
        at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
        at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
        at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
        at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:115)
        at org.apache.spark.scheduler.TaskSetManager.$anonfun$prepareLaunchingTask$1(TaskSetManager.scala:681)
        at com.databricks.spark.scheduler.SchedulerProfiler$.time(SchedulerProfiler.scala:176)
        at com.databricks.spark.scheduler.SchedulerProfiler$.$anonfun$getTimer$2(SchedulerProfiler.scala:164)
        at org.apache.spark.scheduler.TaskSetManager.serializationFn$1(TaskSetManager.scala:679)
        at org.apache.spark.scheduler.TaskSetManager.$anonfun$prepareLaunchingTask$7(TaskSetManager.scala:768)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.serializeTask(CoarseGrainedSchedulerBackend.scala:760)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.$anonfun$launchSingleTask$2(CoarseGrainedSchedulerBackend.scala:771)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.scheduler.SchedulerProfiler$.time(SchedulerProfiler.scala:176)
        at com.databricks.spark.scheduler.SchedulerProfiler$.$anonfun$getTimer$2(SchedulerProfiler.scala:164)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.$anonfun$launchSingleTask$1(CoarseGrainedSchedulerBackend.scala:770)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.launchSingleTask(CoarseGrainedSchedulerBackend.scala:769)
        at org.apache.spark.scheduler.cluster.CoarseGrainedSchedulerBackend.$anonfun$acquireResourcesAndLaunchTask$1(CoarseGrainedSchedulerBackend.scala:824)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.$anonfun$run$1(SparkThreadLocalForwardingThreadPoolExecutor.scala:118)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at com.databricks.spark.util.IdentityClaim$.withClaim(IdentityClaim.scala:48)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.$anonfun$runWithCaptured$4(SparkThreadLocalForwardingThreadPoolExecutor.scala:81)
        at com.databricks.unity.UCSEphemeralState$Handle.runWith(UCSEphemeralState.scala:45)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:80)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingHelper.runWithCaptured$(SparkThreadLocalForwardingThreadPoolExecutor.scala:66)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.runWithCaptured(SparkThreadLocalForwardingThreadPoolExecutor.scala:115)
        at org.apache.spark.util.threads.SparkThreadLocalCapturingRunnable.run(SparkThreadLocalForwardingThreadPoolExecutor.scala:118)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        ... 1 more
   Caused by: org.apache.spark.SparkException: Failed to register classes with Kryo
        at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$5(KryoSerializer.scala:187)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.Utils$.withContextClassLoader(Utils.scala:258)
        at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:175)
        at org.apache.spark.serializer.KryoSerializer$$anon$1.create(KryoSerializer.scala:106)
        at com.esotericsoftware.kryo.pool.KryoPoolQueueImpl.borrow(KryoPoolQueueImpl.java:48)
        at org.apache.spark.serializer.KryoSerializer$PoolWrapper.borrow(KryoSerializer.scala:113)
        at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:410)
        at org.apache.spark.serializer.KryoSerializationStream.<init>(KryoSerializer.scala:276)
        at org.apache.spark.serializer.KryoSerializerInstance.serializeStream(KryoSerializer.scala:527)
        at org.apache.spark.util.Utils$.serializeViaNestedStream(Utils.scala:188)
        at org.apache.spark.rdd.ParallelCollectionPartition.$anonfun$writeObject$1(ParallelCollectionRDD.scala:64)
        at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
        at org.apache.spark.util.SparkErrorUtils$.tryOrIOException(SparkErrorUtils.scala:34)
        ... 44 more
   Caused by: java.lang.ClassNotFoundException: org.apache.spark.HoodieSparkKryoRegistrar not found in com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader@4cbfcabf
        at com.databricks.backend.daemon.driver.ClassLoaders$MultiReplClassLoader.loadClass(ClassLoaders.scala:115)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.util.SparkClassUtils.classForName(SparkClassUtils.scala:40)
        at org.apache.spark.util.SparkClassUtils.classForName$(SparkClassUtils.scala:33)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:104)
        at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$7(KryoSerializer.scala:182)
        at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
        at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
        at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
        at scala.collection.TraversableLike.map(TraversableLike.scala:286)
        at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
        at scala.collection.AbstractTraversable.map(Traversable.scala:108)
        at org.apache.spark.serializer.KryoSerializer.$anonfun$newKryo$5(KryoSerializer.scala:182)
        ... 57 more
   Caused by: java.lang.ClassNotFoundException: org.apache.spark.HoodieSparkKryoRegistrar
        at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
        at com.databricks.backend.daemon.driver.ClassLoaders$LibraryClassLoader.loadClass(ClassLoaders.scala:151)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
        at com.databricks.backend.daemon.driver.ClassLoaders$MultiReplClassLoader.loadClass(ClassLoaders.scala:112)
        ... 72 more
   
   
   During handling of the above exception, another exception occurred:
   
   Py4JError                                 Traceback (most recent call last)
   File <command-275843039130838>, line 2
         1 df.printSchema()
   ----> 2 df.show()
   
   File /databricks/spark/python/pyspark/instrumentation_utils.py:48, in _wrap_function.<locals>.wrapper(*args, **kwargs)
        46 start = time.perf_counter()
        47 try:
   ---> 48     res = func(*args, **kwargs)
        49     logger.log_success(
        50         module_name, class_name, function_name, time.perf_counter() - start, signature
        51     )
        52     return res
   
   File /databricks/spark/python/pyspark/sql/dataframe.py:934, in DataFrame.show(self, n, truncate, vertical)
       928     raise PySparkTypeError(
       929         error_class="NOT_BOOL",
       930         message_parameters={"arg_name": "vertical", "arg_type": type(vertical).__name__},
       931     )
       933 if isinstance(truncate, bool) and truncate:
   --> 934     print(self._jdf.showString(n, 20, vertical))
       935 else:
       936     try:
   
   File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py:1355, in JavaMember.__call__(self, *args)
      1349 command = proto.CALL_COMMAND_NAME +\
      1350     self.command_header +\
      1351     args_command +\
      1352     proto.END_COMMAND_PART
      1354 answer = self.gateway_client.send_command(command)
   -> 1355 return_value = get_return_value(
      1356     answer, self.gateway_client, self.target_id, self.name)
      1358 for temp_arg in temp_args:
      1359     if hasattr(temp_arg, "_detach"):
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:190, in capture_sql_exception.<locals>.deco(*a, **kw)
       188     return f(*a, **kw)
       189 except Py4JJavaError as e:
   --> 190     converted = convert_exception(e.java_exception)
       191     if not isinstance(converted, UnknownException):
       192         # Hide where the exception came from that shows a non-Pythonic
       193         # JVM exception message.
       194         raise converted from None
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:182, in convert_exception(e)
       176     msg = (
       177         "\n  An exception was thrown from the Python worker. "
       178         "Please see the stack trace below.\n%s" % c.getMessage()
       179     )
       180     return PythonException(msg, stacktrace)
   --> 182 return UnknownException(desc=e.toString(), stackTrace=stacktrace, cause=c)
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:65, in CapturedException.__init__(self, desc, stackTrace, cause, origin)
        59 assert SparkContext._jvm is not None
        60 self.stackTrace = (
        61     stackTrace
        62     if stackTrace is not None
        63     else (SparkContext._jvm.org.apache.spark.util.Utils.exceptionString(origin))
        64 )
   ---> 65 self.cause = convert_exception(cause) if cause is not None else None
        66 if self.cause is None and origin is not None and origin.getCause() is not None:
        67     self.cause = convert_exception(origin.getCause())
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:182, in convert_exception(e)
       176     msg = (
       177         "\n  An exception was thrown from the Python worker. "
       178         "Please see the stack trace below.\n%s" % c.getMessage()
       179     )
       180     return PythonException(msg, stacktrace)
   --> 182 return UnknownException(desc=e.toString(), stackTrace=stacktrace, cause=c)
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:65, in CapturedException.__init__(self, desc, stackTrace, cause, origin)
        59 assert SparkContext._jvm is not None
        60 self.stackTrace = (
        61     stackTrace
        62     if stackTrace is not None
        63     else (SparkContext._jvm.org.apache.spark.util.Utils.exceptionString(origin))
        64 )
   ---> 65 self.cause = convert_exception(cause) if cause is not None else None
        66 if self.cause is None and origin is not None and origin.getCause() is not None:
        67     self.cause = convert_exception(origin.getCause())
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:182, in convert_exception(e)
       176     msg = (
       177         "\n  An exception was thrown from the Python worker. "
       178         "Please see the stack trace below.\n%s" % c.getMessage()
       179     )
       180     return PythonException(msg, stacktrace)
   --> 182 return UnknownException(desc=e.toString(), stackTrace=stacktrace, cause=c)
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:65, in CapturedException.__init__(self, desc, stackTrace, cause, origin)
        59 assert SparkContext._jvm is not None
        60 self.stackTrace = (
        61     stackTrace
        62     if stackTrace is not None
        63     else (SparkContext._jvm.org.apache.spark.util.Utils.exceptionString(origin))
        64 )
   ---> 65 self.cause = convert_exception(cause) if cause is not None else None
        66 if self.cause is None and origin is not None and origin.getCause() is not None:
        67     self.cause = convert_exception(origin.getCause())
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:160, in convert_exception(e)
       156 # BEGIN-EDGE
       157 # For Delta exception improvement.
       158 from delta.exceptions.captured import _convert_delta_exception
   --> 160 delta_exception = _convert_delta_exception(e)
       161 if delta_exception is not None:
       162     return delta_exception
   
   File /databricks/spark/python/delta/exceptions/captured.py:135, in _convert_delta_exception(e)
       131 """
       132 Convert Delta's Scala concurrent exceptions to the corresponding Python exceptions.
       133 """
       134 s: str = e.toString()
   --> 135 c: "JavaObject" = e.getCause()
       137 jvm: "JVMView" = SparkContext._jvm  # type: ignore[attr-defined]
       138 gw = SparkContext._gateway  # type: ignore[attr-defined]
   
   File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py:1355, in JavaMember.__call__(self, *args)
      1349 command = proto.CALL_COMMAND_NAME +\
      1350     self.command_header +\
      1351     args_command +\
      1352     proto.END_COMMAND_PART
      1354 answer = self.gateway_client.send_command(command)
   -> 1355 return_value = get_return_value(
      1356     answer, self.gateway_client, self.target_id, self.name)
      1358 for temp_arg in temp_args:
      1359     if hasattr(temp_arg, "_detach"):
   
   File /databricks/spark/python/pyspark/errors/exceptions/captured.py:188, in capture_sql_exception.<locals>.deco(*a, **kw)
       186 def deco(*a: Any, **kw: Any) -> Any:
       187     try:
   --> 188         return f(*a, **kw)
       189     except Py4JJavaError as e:
       190         converted = convert_exception(e.java_exception)
   
   File /databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py:330, in get_return_value(answer, gateway_client, target_id, name)
       326         raise Py4JJavaError(
       327             "An error occurred while calling {0}{1}{2}.\n".
       328             format(target_id, ".", name), value)
       329     else:
   --> 330         raise Py4JError(
       331             "An error occurred while calling {0}{1}{2}. Trace:\n{3}\n".
       332             format(target_id, ".", name, value))
       333 else:
       334     raise Py4JError(
       335         "An error occurred while calling {0}{1}{2}".
       336         format(target_id, ".", name))
   
   Py4JError: An error occurred while calling o490.getCause. Trace:
   py4j.security.Py4JSecurityException: Method public java.lang.Throwable java.lang.ClassNotFoundException.getCause() is not whitelisted on class class java.lang.ClassNotFoundException
        at py4j.security.WhitelistingPy4JSecurityManager.checkCall(WhitelistingPy4JSecurityManager.java:473)
        at py4j.Gateway.invoke(Gateway.java:305)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:199)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:119)
        at java.lang.Thread.run(Thread.java:750)
   
   ```
   
   **Expected behavior**
   
   The quickstart runs to completion on Databricks: `df.show()` displays the sample trips rows instead of raising an exception.
   
   **Environment Description**
   
   * Hudi version : 0.14.1
   
   * Spark version : 3.4
   
   * Hive version :
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) :
   
   
   **Additional context**
   
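   The innermost `Caused by` is `java.lang.ClassNotFoundException: org.apache.spark.HoodieSparkKryoRegistrar`. That class ships in the Hudi Spark bundle, so the trace indicates the bundle jar is not visible to Databricks' `LibraryClassLoader` at the point where the driver builds its Kryo instance. One hedged way to narrow this down from a notebook cell (not part of the quickstart; py4j calls like this may themselves be blocked on clusters with the py4j security whitelist enabled):
   
   ```
   # Hypothetical diagnostic: ask the driver JVM to load the registrator class directly.
   # If this raises the same ClassNotFoundException, the Hudi bundle jar is not on the
   # classpath Spark uses, and installing the jar / setting the Kryo configs at cluster
   # creation time (rather than in the notebook session) is the thing to check.
   spark._jvm.java.lang.Class.forName("org.apache.spark.HoodieSparkKryoRegistrar")
   ```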
   
   **Stacktrace**
   
   See the full stack trace under **To Reproduce** above.
   
   

