Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/9003#issuecomment-149721248
@sethah The storage layout of the `InternalRow` buffer is an implicit contract.
Exposing `InternalRow` to subclasses makes the code harder to understand. For
example, to verify the following code, one needs to go back to the
implementation of `CentralMomentAgg`.
~~~scala
val M0 = buffer.getDouble(mutableAggBufferOffset)
val M2 = buffer.getDouble(mutableAggBufferOffset + 2)
val M4 = buffer.getDouble(mutableAggBufferOffset + 4)
~~~
I suggest adding an abstract method `def getStatistic(n: Double, mean: Double,
moments: Array[Double]): Double` to `CentralMomentAgg` and implementing `eval`
in the base class:
~~~scala
override final def eval(buffer: InternalRow): Any = {
  val n = buffer.getDouble(offset)
  val mean = buffer.getDouble(offset + 1)
  val moments = Array.ofDim[Double](maxMoment + 1)
  moments(0) = 1.0
  moments(1) = 0.0
  // fill in other moments up to maxMoment
  getStatistic(n, mean, moments)
}
~~~
Then we can hide the storage of `InternalRow` from subclasses.
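To make the idea concrete, here is a standalone sketch of the proposed design, not the actual Spark API: a plain `Array[Double]` stands in for `InternalRow`, and the names `maxMoment` and `getStatistic` follow the suggestion above. The base class owns the buffer layout (`[n, mean, M2, M3, ...]`) and implements `eval`; a subclass only ever sees `(n, mean, moments)`.

~~~scala
// Sketch only: buffer layout [n, mean, M2, M3, ...] is hidden in the base class.
abstract class CentralMomentAgg {
  // Highest central moment the subclass needs (2 for variance, 4 for kurtosis).
  protected def maxMoment: Int

  // Subclasses compute the final statistic without touching the buffer.
  protected def getStatistic(n: Double, mean: Double, moments: Array[Double]): Double

  final def eval(buffer: Array[Double], offset: Int): Double = {
    val n = buffer(offset)
    val mean = buffer(offset + 1)
    val moments = Array.ofDim[Double](maxMoment + 1)
    moments(0) = 1.0
    moments(1) = 0.0
    var k = 2
    while (k <= maxMoment) {       // fill in other moments up to maxMoment
      moments(k) = buffer(offset + k)
      k += 1
    }
    getStatistic(n, mean, moments)
  }
}

// Example subclass: population variance = M2 / n.
class Variance extends CentralMomentAgg {
  override protected val maxMoment: Int = 2
  override protected def getStatistic(n: Double, mean: Double, moments: Array[Double]): Double =
    if (n == 0.0) Double.NaN else moments(2) / n
}

// new Variance().eval(Array(4.0, 2.5, 5.0), 0)  // returns 1.25
~~~

With this shape, changing the buffer layout only requires touching `eval` in the base class; no subclass has to know which offset holds which moment.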