[
https://issues.apache.org/jira/browse/TOREE-531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17511916#comment-17511916
]
SHOBHIT SHUKLA commented on TOREE-531:
--------------------------------------
Refer to the example below to reproduce the issue:
The simple example shows that, on a syntax error, the Scala 2.12 with Spark
notebook does not raise the error the way it is raised in the Python 3.7
notebook. A closing double quote (") was removed from the end of the
assignment to the variable test2 to showcase this issue.
!f5b22480-8a67-11ec-88d3-93fd083c850b.png!
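The same broken-quote cell can be illustrated outside a notebook (a minimal sketch of the comparison above, not the Toree code path; the cell text is illustrative): plain Python surfaces the full SyntaxError as soon as the cell source is compiled, which is the behavior the Python 3.7 notebook shows and the Scala notebook does not.

```python
# Mimic the reproduction: the assignment to test2 with its closing
# double quote deliberately removed, held as a string of cell source.
broken_cell = 'test2 = "missing closing quote'  # trailing " removed

try:
    # Compiling the cell is what a Python kernel effectively does first;
    # the unterminated string is rejected before anything runs.
    compile(broken_cell, "<cell>", "exec")
except SyntaxError as err:
    # The error type and message are reported in full.
    print(type(err).__name__, "-", err.msg)
```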
> notebook failures do not print the whole stack trace for Spark session DF
> --------------------------------------------------------------------------
>
> Key: TOREE-531
> URL: https://issues.apache.org/jira/browse/TOREE-531
> Project: TOREE
> Issue Type: Bug
> Components: Build
> Affects Versions: 0.4.0, 0.5.0
> Reporter: SHOBHIT SHUKLA
> Priority: Major
> Attachments: f5b22480-8a67-11ec-88d3-93fd083c850b.png
>
>
> *Actual:*
> The following stack trace is printed. It is not the whole stack trace: the
> lines at the beginning (the exception type and message) are missing.
> at wdpshadow.org.apache.spark.sql.util.ArrowUtils$.fromArrowType(ArrowUtils.scala:75)
> at wdpshadow.org.apache.spark.sql.util.ArrowUtils$.fromArrowField(ArrowUtils.scala:125)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.createFieldWriter(ArrowWriter.scala:48)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.$anonfun$create$1(ArrowWriter.scala:41)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$$$Lambda$2701/0x0000000075bfb4a0.apply(Unknown Source)
> at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> at scala.collection.TraversableLike$$Lambda$34/0x00000000f27b9810.apply(Unknown Source)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at scala.collection.TraversableLike.map(TraversableLike.scala:238)
> at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> at scala.collection.AbstractTraversable.map(Traversable.scala:108)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.create(ArrowWriter.scala:39)
> at com.ibm.connect.spark.arrow.AbstractArrowDataSource.inferSchema(AbstractArrowDataSource.scala:65)
> at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.getTableFromProvider(DataSourceV2Utils.scala:81)
> at org.apache.spark.sql.DataFrameReader.$anonfun$load$1(DataFrameReader.scala:274)
> at org.apache.spark.sql.DataFrameReader$$Lambda$2635/0x00000000756d4b20.apply(Unknown Source)
> at scala.Option.map(Option.scala:230)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:248)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)
> *Expected:*
> java.lang.UnsupportedOperationException: Unsupported data type: Timestamp(MILLISECOND, null)
> at wdpshadow.org.apache.spark.sql.util.ArrowUtils$.fromArrowType(ArrowUtils.scala:75)
> at wdpshadow.org.apache.spark.sql.util.ArrowUtils$.fromArrowField(ArrowUtils.scala:125)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.createFieldWriter(ArrowWriter.scala:48)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.$anonfun$create$1(ArrowWriter.scala:41)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$$$Lambda$2822/0x000000005d1c9070.apply(Unknown Source)
> at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
> at scala.collection.TraversableLike$$Lambda$34/0x00000000de7979c0.apply(Unknown Source)
> at scala.collection.Iterator.foreach(Iterator.scala:941)
> at scala.collection.Iterator.foreach$(Iterator.scala:941)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
> at scala.collection.IterableLike.foreach(IterableLike.scala:74)
> at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
> at scala.collection.TraversableLike.map(TraversableLike.scala:238)
> at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
> at scala.collection.AbstractTraversable.map(Traversable.scala:108)
> at wdpshadow.org.apache.spark.sql.execution.arrow.ArrowWriter$.create(ArrowWriter.scala:39)
> at com.ibm.connect.spark.arrow.AbstractArrowDataSource.inferSchema(AbstractArrowDataSource.scala:65)
> at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.getTableFromProvider(DataSourceV2Utils.scala:81)
> at org.apache.spark.sql.DataFrameReader.$anonfun$load$1(DataFrameReader.scala:274)
> at org.apache.spark.sql.DataFrameReader$$Lambda$2787/0x000000005d86a2f0.apply(Unknown Source)
> at scala.Option.map(Option.scala:230)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:248)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:221)
--
This message was sent by Atlassian Jira
(v8.20.1#820001)