Github user JDrit commented on the pull request:
https://github.com/apache/spark/pull/7478#issuecomment-123475683
`FunctionRegistry` is in the catalyst package, so wouldn't using
`SparkPartitionID` there cause a circular dependency problem?
Would it be better to do something similar to the analyzer's extended resolution rules
in `SQLContext`, so that functions can be added from the SQL package?
Something like:
```
protected[sql] val extendedFunctions: Map[String, FunctionBuilder] = Map(
  expression[SparkPartitionID]("spark__partition__id")
)

// TODO: how to handle the temp function per user session?
@transient
protected[sql] lazy val functionRegistry: FunctionRegistry = {
  val reg = FunctionRegistry.builtin
  extendedFunctions.foreach { case (name, fun) =>
    reg.registerFunction(name, fun)
  }
  reg
}
```
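For context, a minimal usage sketch of what this would enable (assuming the registration above goes through; `someTable` is a hypothetical registered temp table): once the expression is in the registry, the function becomes callable from SQL like any built-in:
```
// Hypothetical usage: `someTable` is an assumed temp table name,
// and spark__partition__id comes from the registration sketched above.
sqlContext.sql("SELECT spark__partition__id() FROM someTable").show()
```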