Hi devs,
I got another report about configuring the v2 session catalog: when Spark
fails to instantiate the configured catalog, it currently just logs an
error message without the exception information and silently falls back to
the default session catalog.
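For context, a minimal sketch of the configuration involved; the catalog
class name here is hypothetical, but the key is the v2 session catalog
setting:

import org.apache.spark.sql.SparkSession

// spark.sql.catalog.spark_catalog configures the v2 session catalog.
// If com.example.MyCatalog (hypothetical) cannot be instantiated, Spark
// currently logs an error without the underlying exception and silently
// falls back to the built-in session catalog.
val spark = SparkSession.builder()
  .appName("catalog-config-demo")
  .config("spark.sql.catalog.spark_catalog", "com.example.MyCatalog")
  .getOrCreate()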
That's a compile error. If it said the call were ambiguous, I'd say this is
probably another instance where the legacy overloads for Java become
ambiguous in Scala 2.12 / Spark 3.0, so you have to cast your function to
the specific Scala overload. That's not quite the error you're seeing,
though; still, it might be worth trying.
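For reference, a sketch of that workaround: ascribing an explicit Scala
function type to the lambda so the compiler picks foreachBatch's Scala
overload rather than the Java VoidFunction2 one (df stands for some
streaming DataFrame; the batch body is illustrative):

import org.apache.spark.sql.DataFrame

// The explicit (DataFrame, Long) => Unit type selects the Scala overload.
val handleBatch: (DataFrame, Long) => Unit = (batchDF, batchId) => {
  batchDF.persist()
  batchDF.count()     // illustrative work on the batch
  batchDF.unpersist() // result discarded; the expected type is Unit
}

df.writeStream.foreachBatch(handleBatch).start()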
As you say, fair enough.
Sorry, it does compile, but when you run it, it fails:
https://stackoverflow.com/questions/63642364/how-to-use-foreachbatch-batchdf-unpersist-appropriately-in-structured-streamin
[image: Captura de pantalla 2020-10-22 a las 16.01.33.png]
On Thu, 22 Oct 2020 at 15:53, Sean Owen wrote:
Probably for purposes of chaining, though it won't be very useful here. Like
df.unpersist().cache(... some other settings ...)
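For illustration, a sketch of what such chaining could look like (the
storage level chosen here is arbitrary, and df stands for some cached
DataFrame):

import org.apache.spark.storage.StorageLevel

// unpersist() returns the Dataset itself, so the call chains:
// drop the old cache, then re-cache at a different storage level.
val recached = df.unpersist().persist(StorageLevel.MEMORY_AND_DISK)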
foreachBatch wants a function that evaluates to Unit, but this still
qualifies: it doesn't matter what the value of the block is, since it's
ignored.
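To see the value-discard rule in plain Scala, independent of Spark (run is
a hypothetical helper): when the expected type is Unit, the compiler
silently discards the block's value, so a lambda whose last expression is
not Unit still type-checks (possibly with a warning under -Wvalue-discard).

// run expects Int => Unit; the lambda body evaluates to an Int,
// which is discarded because the expected result type is Unit.
def run(f: Int => Unit): Unit = f(42)
run { x => x + 1 }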
This does seem to compile; are you
Hello!
I'd like to ask if there is any reason to return *this.type* when calling
*dataframe.unpersist*:
def unpersist(blocking: Boolean): this.type = {
  sparkSession.sharedState.cacheManager.uncacheQuery(
    sparkSession, logicalPlan, cascade = false, blocking)
  this
}
Just pointing it out
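For context, a minimal sketch (not Spark code) of what returning this.type
buys over returning the class itself: the concrete subtype is preserved
across chained calls.

class Base {
  // this.type is the singleton type of the receiver, so calling
  // reset() on a Derived yields a value typed as that Derived.
  def reset(): this.type = { this }
}
class Derived extends Base {
  def extra(): this.type = { this }
}

// Type-checks only because reset() returns this.type rather than Base:
val d: Derived = new Derived().reset().extra()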