revans2 commented on a change in pull request #29506:
URL: https://github.com/apache/spark/pull/29506#discussion_r474716510
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/columnar/compression/compressionSchemes.scala
##########
@@ -318,7 +318,7 @@ private[columnar] case object RunLengthEncoding extends CompressionScheme {
var valueCountLocal = 0
var currentValueLocal: Long = 0
- while (valueCountLocal < runLocal || (pos < capacity)) {
+ while (pos < capacity) {
Review comment:
Please note that this change is not needed. I made it so that all of the decompress code is now consistent, using
```
while (pos < capacity) {
```
Because the two conditions were joined with `||`, the loop would keep iterating past `pos == capacity` whenever `valueCountLocal < runLocal`. So the extra `valueCountLocal < runLocal` check would only serve to make the code crash by walking past the end of the batch if, for some reason, the compressed data was incorrect.
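To make the safety argument concrete, here is a minimal, self-contained sketch. This is not the actual Spark implementation: `RleDecompressSketch`, the `decompress` signature, and the `(value, runLength)` iterator are all hypothetical simplifications. It shows why bounding the loop solely on `pos < capacity` can never write past the end of the output batch, even if the encoded runs claim more elements than the batch holds:
```scala
// Minimal sketch (assumed names, not Spark's actual API): decompress
// run-length encoded (value, runLength) pairs into an array of `capacity`
// elements. The loop bound `pos < capacity` alone guarantees we never
// index past the end of the output, even for corrupt run lengths.
object RleDecompressSketch {
  def decompress(runs: Iterator[(Long, Int)], capacity: Int): Array[Long] = {
    val out = new Array[Long](capacity)
    var pos = 0
    var valueCountLocal = 0          // elements of the current run emitted so far
    var runLocal = 0                 // length of the current run
    var currentValueLocal: Long = 0  // value being repeated

    while (pos < capacity) {         // the only bound that matters for safety
      if (valueCountLocal == runLocal) {
        // Current run exhausted: read the next (value, runLength) pair.
        val (value, runLength) = runs.next()
        currentValueLocal = value
        runLocal = runLength
        valueCountLocal = 0
      }
      out(pos) = currentValueLocal
      valueCountLocal += 1
      pos += 1
    }
    out
  }

  def main(args: Array[String]): Unit = {
    // Two runs claim 5 elements in total, but capacity is only 4:
    // the loop still stops cleanly at the batch boundary.
    val decoded = decompress(Iterator((7L, 3), (9L, 2)), capacity = 4)
    println(decoded.mkString(", ")) // 7, 7, 7, 9
  }
}
```
With the extra `valueCountLocal < runLocal` condition OR-ed in, the same corrupt input would instead drive `pos` past `capacity` and throw an `ArrayIndexOutOfBoundsException`.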