murphycrosby commented on issue #13760:
URL: https://github.com/apache/iceberg/issues/13760#issuecomment-3168967975
Here is the log. The Databricks cluster has org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.9.2 installed. There is an authentication error before the TABLE_OR_VIEW_NOT_FOUND error. Databricks has a service principal and credentials set up, but I am not sure if/how the iceberg-spark-runtime uses them. However, even if I set all of the fs.azure.account.auth.type.{storage_account}.dfs.core.windows.net, fs.azure.account.oauth.provider.type.{storage_account}.dfs.core.windows.net, etc. properties (sketched below, ahead of the log), it still tries to use the CredentialScopeADLSTokenProvider.
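
For reference, this is roughly what I set on the Spark session (equivalently in the cluster Spark config). It is a minimal sketch assuming service-principal auth with a client secret; the client id, secret, and tenant id values are placeholders for our real ones:

```python
# Sketch of the ABFS OAuth configuration I tried; all values are placeholders.
storage_account = "someazstaccount"            # placeholder (matches the log)
client_id = "<service-principal-client-id>"    # placeholder
client_secret = "<service-principal-secret>"   # placeholder
tenant_id = "<azure-tenant-id>"                # placeholder

suffix = f"{storage_account}.dfs.core.windows.net"

# Use OAuth client-credentials auth for this storage account.
spark.conf.set(f"fs.azure.account.auth.type.{suffix}", "OAuth")
spark.conf.set(
    f"fs.azure.account.oauth.provider.type.{suffix}",
    "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
)
spark.conf.set(f"fs.azure.account.oauth2.client.id.{suffix}", client_id)
spark.conf.set(f"fs.azure.account.oauth2.client.secret.{suffix}", client_secret)
spark.conf.set(
    f"fs.azure.account.oauth2.client.endpoint.{suffix}",
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
)
```

Even with all of these set, the AzureBlobFileSystem in the log below is still initialized with CredentialScopeADLSTokenProvider rather than the OAuth provider.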
25/08/08 16:21:16 INFO DatabricksFileSystemV2Factory: Creating abfss file
system for abfss://[email protected]
25/08/08 16:21:16 INFO AzureBlobFileSystem:V3: Initializing
AzureBlobFileSystem for
abfss://[email protected]/1752359545852644 with credential
= FixedSASTokenProvider with jvmId = 582
25/08/08 16:21:16 INFO DbfsHadoop3: Initialized DBFS with DBFSV2 as the
delegate.
25/08/08 16:21:16 INFO DriverCorral: DBFS health check ok
25/08/08 16:21:17 INFO HiveMetaStore: 1: get_database: default
25/08/08 16:21:17 INFO audit: ugi=root ip=unknown-ip-addr
cmd=get_database: default
25/08/08 16:21:17 INFO HiveMetaStore: 1: Opening raw store with
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
25/08/08 16:21:17 INFO ObjectStore: ObjectStore, initialize called
25/08/08 16:21:17 INFO ObjectStore: Initialized ObjectStore
25/08/08 16:21:17 INFO DriverCorral: Metastore health check ok
25/08/08 16:21:21 WARN JupyterKernelListener: Received Jupyter debug message
with unknown command: null
25/08/08 16:21:23 WARN JupyterKernelListener: Received Jupyter debug message
with unknown command: null
25/08/08 16:21:50 INFO ProgressReporter$: Added result fetcher for
1754669739780_7416712908102305456_85f6a4ce93bf4564b7e9e60f6b0db2df
25/08/08 16:21:50 INFO ProgressReporter$: Removed result fetcher for
1754669739780_7416712908102305456_85f6a4ce93bf4564b7e9e60f6b0db2df
25/08/08 16:21:50 INFO DatabricksStreamingQueryListener: Command execution
complete on Driver, no active Streaming query runs associated with command
1754669739780_7416712908102305456_85f6a4ce93bf4564b7e9e60f6b0db2df
25/08/08 16:21:52 INFO ProgressReporter$: Added result fetcher for
1754669739780_6172583903742288693_ab54727d993f4f8f9e4f249064c56ea5
25/08/08 16:21:52 INFO AzureBlobFileSystem:V3: Initializing
AzureBlobFileSystem for
abfss://containername@someazstaccount.dfs.core.windows.net/lakehouse/domain/thing/blah
with credential = CredentialScopeADLSTokenProvider with jvmId = 582
25/08/08 16:21:52 ERROR AbfsClient: HttpRequest:
403,err=,appendpos=,cid=0801-192641-fr0t4xlw------:5b351aae-8ead-4666-9d33-2ee9618fbbd4:24347f68-5b25-459b-8e7e-a676d46471f6:::GF:0,rid=f01b1e19-301f-0014-3b80-080582000000,connMs=0,sendMs=0,recvMs=1,sent=0,recv=0,method=HEAD,https://someazstaccount.dfs.core.windows.net/containername/_delta_log?upn=false&action=getStatus&timeout=90&st=2025-08-08T15:36:15Z&sv=2020-02-10&ske=2025-08-08T17:36:15Z&sig=XXXXX&sktid=8c91e3f4-7f37-4334-9f70-fae3f5235c18&se=2025-08-08T17:17:07Z&sdd=1&skoid=d3098274-b8ab-46ceXXXXXXXXXXXXXXXXXX&spr=https&sks=b&skt=2025-08-08T15:36:15Z&sp=rl&skv=2025-01-05&sr=d
25/08/08 16:21:52 WARN DeltaTableUtils: Access error while exploring path
hierarchy for a delta log.original
path=abfss://containername@someazstaccount.dfs.core.windows.net/lakehouse/domain/thing/blah,
path with error=abfss://containername@someazstaccount.dfs.core.windows.net/.
Operation failed: "Server failed to authenticate the request. Make sure the
value of Authorization header is formed correctly including the signature.",
403, HEAD, ,
https://someazstaccount.dfs.core.windows.net/containername/_delta_log?upn=false&action=getStatus&timeout=90&st=2025-08-08T15:36:15Z&sv=2020-02-10&ske=2025-08-08T17:36:15Z&sig=XXXXX&sktid=8c91e3f4-7f37-4334-9f70-fae3f5235c18&se=2025-08-08T17:17:07Z&sdd=1&skoid=d3098274-b8ab-46ceXXXXXXXXXXXXXXXXXX&spr=https&sks=b&skt=2025-08-08T15:36:15Z&sp=rl&skv=2025-01-05&sr=d
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.completeExecute(AbfsRestOperation.java:316)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.lambda$execute$0(AbfsRestOperation.java:251)
at
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.measureDurationOfInvocation(IOStatisticsBinding.java:494)
at
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfInvocation(IOStatisticsBinding.java:465)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:249)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsClient.getPathStatus(AbfsClient.java:1069)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.getFileStatus(AzureBlobFileSystemStore.java:1402)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:1029)
at
shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.getFileStatus(AzureBlobFileSystem.java:1019)
at
com.databricks.common.filesystem.LokiABFS.getFileStatusNoCache(LokiABFS.scala:54)
at
com.databricks.common.filesystem.LokiABFS.getFileStatus(LokiABFS.scala:44)
at
com.databricks.sql.io.LokiFileSystem.getFileStatus(LokiFileSystem.scala:241)
at
com.databricks.sql.acl.fs.CredentialScopeFileSystem.getFileStatus(CredentialScopeFileSystem.scala:332)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1862)
at
com.databricks.sql.transaction.tahoe.DeltaTableUtils$.findDeltaTableRootThrowOnError(DeltaTable.scala:367)
at
com.databricks.sql.transaction.tahoe.DeltaTableUtils$.findDeltaTableRoot(DeltaTable.scala:315)
at
com.databricks.sql.transaction.tahoe.DeltaTableUtils$.findDeltaTableRoot(DeltaTable.scala:306)
at
com.databricks.sql.transaction.tahoe.DeltaTableUtils$.findDeltaTableRoot(DeltaTable.scala:298)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.liftedTree1$1(ResolveDataSource.scala:259)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.deltaTable$lzycompute$1(ResolveDataSource.scala:258)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.deltaTable$1(ResolveDataSource.scala:258)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.org$apache$spark$sql$catalyst$analysis$ResolveDataSource$$preprocessDeltaLoading(ResolveDataSource.scala:290)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:83)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:85)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:418)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:137)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:133)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:114)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:113)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:56)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$16(RuleExecutor.scala:480)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule(RuleExecutor.scala:629)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule$(RuleExecutor.scala:613)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.processRule(RuleExecutor.scala:131)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$15(RuleExecutor.scala:480)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$14(RuleExecutor.scala:479)
at
scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at
scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$13(RuleExecutor.scala:475)
at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:452)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22(RuleExecutor.scala:585)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22$adapted(RuleExecutor.scala:585)
at scala.collection.immutable.List.foreach(List.scala:431)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:585)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:349)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:498)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:397)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:416)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:219)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:252)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:96)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:131)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:87)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:478)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:425)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:478)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$3(QueryExecution.scala:294)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:548)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:151)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1307)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:661)
at
com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:658)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at
org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:658)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:288)
at
com.databricks.sql.util.MemoryTrackerHelper.withMemoryTracking(MemoryTrackerHelper.scala:80)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:287)
at scala.util.Try$.apply(Try.scala:213)
at
org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1684)
at org.apache.spark.util.LazyTry.tryT$lzycompute(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.tryT(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
at
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:321)
at
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:267)
at org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:108)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at
org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1469)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1469)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:106)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:265)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:238)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)
at py4j.Gateway.invoke(Gateway.java:306)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:197)
at py4j.ClientServerConnection.run(ClientServerConnection.java:117)
at java.base/java.lang.Thread.run(Thread.java:840)
25/08/08 16:21:52 INFO QueryPlanningTracker: Query phase analysis took 0s
before execution.
25/08/08 16:21:52 INFO ClusterLoadMonitor: Added query with execution ID:6.
Current active queries:1
25/08/08 16:21:52 INFO ClusterLoadMonitor: Removed query with execution
ID:6. Current active queries:0
25/08/08 16:21:52 INFO SQLExecution: 0 QueryExecution(s) are running
25/08/08 16:21:52 INFO DBCEventLoggingListener: Rolling event log;
numTimesRolledOver = 1
25/08/08 16:21:52 INFO DBCEventLoggingListener: Rolled active log file
/databricks/driver/eventlogs/8549087355160894129/eventlog to
/databricks/driver/eventlogs/8549087355160894129/eventlog-2025-08-08--16-20,
size = 3920473
25/08/08 16:21:52 ERROR ReplAwareSparkDataSourceListener: Unexpected
exception when attempting to handle SparkListenerSQLExecutionEnd event. Please
report this error, along with the following stacktrace, on
https://github.com/mlflow/mlflow/issues:
org.apache.spark.sql.catalyst.analysis.NoSuchTableException:
[TABLE_OR_VIEW_NOT_FOUND] The table or view
`default_iceberg`.`abfss://containername@someazstaccount.dfs.core.windows.net/lakehouse/domain/thing`.`blah`
cannot be found. Verify the spelling and correctness of the schema and catalog.
If you did not qualify the name with a schema, verify the current_schema()
output, or qualify the name with the correct schema and catalog.
To tolerate the error on drop use DROP VIEW IF EXISTS or DROP TABLE IF
EXISTS. SQLSTATE: 42P01
at
org.apache.spark.sql.errors.QueryCompilationErrors$.noSuchTableError(QueryCompilationErrors.scala:2177)
at
org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.$anonfun$loadTable$1(V2SessionCatalog.scala:110)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:93)
at
org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.loadTable(DelegatingCatalogExtension.java:77)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.super$loadTable(DeltaCatalog.scala:654)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$loadTable$1(DeltaCatalog.scala:654)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:418)
at
com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:416)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:140)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.loadTable(DeltaCatalog.scala:653)
at
com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.loadTable(UnityCatalogV2Proxy.scala:223)
at
org.apache.spark.sql.connector.catalog.CatalogV2Util$.getTable(CatalogV2Util.scala:455)
at
org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.loadV2Source(DataSourceV2Utils.scala:247)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.$anonfun$applyOrElse$1(ResolveDataSource.scala:96)
at scala.Option.flatMap(Option.scala:271)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:94)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:85)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:418)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:137)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:133)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:114)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:113)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:56)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$16(RuleExecutor.scala:480)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule(RuleExecutor.scala:629)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule$(RuleExecutor.scala:613)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.processRule(RuleExecutor.scala:131)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$15(RuleExecutor.scala:480)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$14(RuleExecutor.scala:479)
at
scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at
scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$13(RuleExecutor.scala:475)
at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:452)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22(RuleExecutor.scala:585)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22$adapted(RuleExecutor.scala:585)
at scala.collection.immutable.List.foreach(List.scala:431)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:585)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:349)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:498)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:397)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:416)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:219)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:252)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:96)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:131)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:87)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:478)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:425)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:478)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$3(QueryExecution.scala:294)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:548)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:151)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1307)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:661)
at
com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:658)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at
org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:658)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:288)
at
com.databricks.sql.util.MemoryTrackerHelper.withMemoryTracking(MemoryTrackerHelper.scala:80)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:287)
at scala.util.Try$.apply(Try.scala:213)
at
org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1684)
at
org.apache.spark.util.Utils$.getTryWithCallerStacktrace(Utils.scala:1745)
at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
at
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:321)
at
org.mlflow.spark.autologging.DatasourceAttributeExtractorBase.getTableInfos(DatasourceAttributeExtractor.scala:88)
at
org.mlflow.spark.autologging.DatasourceAttributeExtractorBase.getTableInfos$(DatasourceAttributeExtractor.scala:85)
at
org.mlflow.spark.autologging.ReplAwareDatasourceAttributeExtractor$.getTableInfos(DatasourceAttributeExtractor.scala:142)
at
org.mlflow.spark.autologging.ReplAwareSparkDataSourceListener.onSQLExecutionEnd(ReplAwareSparkDataSourceListener.scala:49)
at
org.mlflow.spark.autologging.SparkDataSourceListener.$anonfun$onOtherEvent$1(SparkDataSourceListener.scala:39)
at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at
org.mlflow.spark.autologging.ExceptionUtils$.tryAndLogUnexpectedError(ExceptionUtils.scala:26)
at
org.mlflow.spark.autologging.SparkDataSourceListener.onOtherEvent(SparkDataSourceListener.scala:39)
at
org.apache.spark.scheduler.SparkListenerBus.doPostEvent(SparkListenerBus.scala:108)
at
org.apache.spark.scheduler.SparkListenerBus.doPostEvent$(SparkListenerBus.scala:28)
at
org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:46)
at
org.apache.spark.scheduler.AsyncEventQueue.doPostEvent(AsyncEventQueue.scala:46)
at org.apache.spark.util.ListenerBus.postToAll(ListenerBus.scala:208)
at org.apache.spark.util.ListenerBus.postToAll$(ListenerBus.scala:172)
at
org.apache.spark.scheduler.AsyncEventQueue.super$postToAll(AsyncEventQueue.scala:130)
at
org.apache.spark.scheduler.AsyncEventQueue.$anonfun$dispatch$1(AsyncEventQueue.scala:130)
at
scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.java:23)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at
org.apache.spark.scheduler.AsyncEventQueue.org$apache$spark$scheduler$AsyncEventQueue$$dispatch(AsyncEventQueue.scala:116)
at
org.apache.spark.scheduler.AsyncEventQueue$$anon$2.$anonfun$run$1(AsyncEventQueue.scala:112)
at org.apache.spark.util.Utils$.tryOrStopSparkContext(Utils.scala:1572)
at
org.apache.spark.scheduler.AsyncEventQueue$$anon$2.run(AsyncEventQueue.scala:112)
Suppressed: org.apache.spark.util.Utils$OriginalTryStackTraceException:
Full stacktrace of original doTryWithCallerStacktrace caller
at
org.apache.spark.sql.errors.QueryCompilationErrors$.noSuchTableError(QueryCompilationErrors.scala:2177)
at
org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.$anonfun$loadTable$1(V2SessionCatalog.scala:110)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.execution.datasources.v2.V2SessionCatalog.loadTable(V2SessionCatalog.scala:93)
at
org.apache.spark.sql.connector.catalog.DelegatingCatalogExtension.loadTable(DelegatingCatalogExtension.java:77)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.super$loadTable(DeltaCatalog.scala:654)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.$anonfun$loadTable$1(DeltaCatalog.scala:654)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile(DeltaLogging.scala:418)
at
com.databricks.sql.transaction.tahoe.metering.DeltaLogging.recordFrameProfile$(DeltaLogging.scala:416)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.recordFrameProfile(DeltaCatalog.scala:140)
at
com.databricks.sql.transaction.tahoe.catalog.DeltaCatalog.loadTable(DeltaCatalog.scala:653)
at
com.databricks.sql.managedcatalog.UnityCatalogV2Proxy.loadTable(UnityCatalogV2Proxy.scala:223)
at
org.apache.spark.sql.connector.catalog.CatalogV2Util$.getTable(CatalogV2Util.scala:455)
at
org.apache.spark.sql.execution.datasources.v2.DataSourceV2Utils$.loadV2Source(DataSourceV2Utils.scala:247)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.$anonfun$applyOrElse$1(ResolveDataSource.scala:96)
at scala.Option.flatMap(Option.scala:271)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:94)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource$$anonfun$apply$1.applyOrElse(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$3(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(origin.scala:85)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.$anonfun$resolveOperatorsUpWithPruning$1(AnalysisHelper.scala:141)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:418)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning(AnalysisHelper.scala:137)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUpWithPruning$(AnalysisHelper.scala:133)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUpWithPruning(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp(AnalysisHelper.scala:114)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.resolveOperatorsUp$(AnalysisHelper.scala:113)
at
org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:42)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:58)
at
org.apache.spark.sql.catalyst.analysis.ResolveDataSource.apply(ResolveDataSource.scala:56)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$16(RuleExecutor.scala:480)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule(RuleExecutor.scala:629)
at
org.apache.spark.sql.catalyst.rules.RecoverableRuleExecutionHelper.processRule$(RuleExecutor.scala:613)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.processRule(RuleExecutor.scala:131)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$15(RuleExecutor.scala:480)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$14(RuleExecutor.scala:479)
at
scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
at
scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
at scala.collection.immutable.List.foldLeft(List.scala:91)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$13(RuleExecutor.scala:475)
at
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeBatch$1(RuleExecutor.scala:452)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22(RuleExecutor.scala:585)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$22$adapted(RuleExecutor.scala:585)
at scala.collection.immutable.List.foreach(List.scala:431)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:585)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:349)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeSameContext(Analyzer.scala:498)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$execute$1(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.AnalysisContext$.withNewAnalysisContext(Analyzer.scala:397)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:491)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:416)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:219)
at
org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:341)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.resolveInFixedPoint(HybridAnalyzer.scala:252)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.$anonfun$apply$1(HybridAnalyzer.scala:96)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.withTrackedAnalyzerBridgeState(HybridAnalyzer.scala:131)
at
org.apache.spark.sql.catalyst.analysis.resolver.HybridAnalyzer.apply(HybridAnalyzer.scala:87)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.$anonfun$executeAndCheck$1(Analyzer.scala:478)
at
org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:425)
at
org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:478)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$3(QueryExecution.scala:294)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:548)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$5(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.SQLExecution$.withExecutionPhase(SQLExecution.scala:151)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$4(QueryExecution.scala:668)
at
org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:1307)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:661)
at
com.databricks.util.LexicalThreadLocal$Handle.runWith(LexicalThreadLocal.scala:63)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:658)
at
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at
org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:658)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$2(QueryExecution.scala:288)
at
com.databricks.sql.util.MemoryTrackerHelper.withMemoryTracking(MemoryTrackerHelper.scala:80)
at
org.apache.spark.sql.execution.QueryExecution.$anonfun$lazyAnalyzed$1(QueryExecution.scala:287)
at scala.util.Try$.apply(Try.scala:213)
at
org.apache.spark.util.Utils$.doTryWithCallerStacktrace(Utils.scala:1684)
at
org.apache.spark.util.LazyTry.tryT$lzycompute(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.tryT(LazyTry.scala:46)
at org.apache.spark.util.LazyTry.get(LazyTry.scala:58)
at
org.apache.spark.sql.execution.QueryExecution.analyzed(QueryExecution.scala:321)
at
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:267)
at
org.apache.spark.sql.Dataset$.$anonfun$ofRows$1(Dataset.scala:108)
at
org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:1462)
at
org.apache.spark.sql.SparkSession.$anonfun$withActiveAndFrameProfiler$1(SparkSession.scala:1469)
at
com.databricks.spark.util.FrameProfiler$.record(FrameProfiler.scala:94)
at
org.apache.spark.sql.SparkSession.withActiveAndFrameProfiler(SparkSession.scala:1469)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:106)
at
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:265)
at
org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:238)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:397)
at py4j.Gateway.invoke(Gateway.java:306)
at
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at
py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:197)
at
py4j.ClientServerConnection.run(ClientServerConnection.java:117)
at java.base/java.lang.Thread.run(Thread.java:840)
25/08/08 16:21:52 INFO DBCEventLoggingListener: Logging events to
eventlogs/8549087355160894129/eventlog
25/08/08 16:21:52 INFO DBCEventLoggingListener: Compressed rolled file
/databricks/driver/eventlogs/8549087355160894129/eventlog-2025-08-08--16-20 to
/databricks/driver/eventlogs/8549087355160894129/eventlog-2025-08-08--16-20.gz
in 109ms, size = 587447
25/08/08 16:21:52 INFO DBCEventLoggingListener: Deleted rolled file
eventlogs/8549087355160894129/eventlog-2025-08-08--16-20
25/08/08 16:21:53 WARN JupyterDriverLocal$: Unsupported payload:
ExecuteReplyPayload(error,Some(Map(errorClass -> TABLE_OR_VIEW_NOT_FOUND,
pysparkFragment -> , sqlState -> 42P01, pysparkCallSite -> , messageParameters
-> Map(relationName ->
`default_iceberg`.`abfss://containername@someazstaccount.dfs.core.windows.net/lakehouse/domain/thing`.`blah`))))
25/08/08 16:21:53 INFO ProgressReporter$: Removed result fetcher for
1754669739780_6172583903742288693_ab54727d993f4f8f9e4f249064c56ea5
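
For completeness, the read that triggers all of this comes from a Python notebook via py4j (the DataFrameReader.load frames above); it is essentially a path-based Iceberg load of this shape, using the same abfss path that appears in the log:

```python
# Essentially the failing call: a path-based Iceberg load.
# The abfss URI is the same (sanitized) one shown in the log above.
df = spark.read.format("iceberg").load(
    "abfss://containername@someazstaccount.dfs.core.windows.net/lakehouse/domain/thing/blah"
)
```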