Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r23881983
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -657,6 +657,10 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
*
* Load data from a flat binary file, assuming the length of each record is constant.
*
+ * '''Note:''' Normally getBytes returns an array padded with extra values,
--- End diff ---
This is referring to the `BytesWritable` values: their `getBytes` method is technically allowed to return a padded byte array, so in general callers should take this into account; in this case, however, we are guaranteeing that `BytesWritable.getBytes` will return an unpadded array.

Actually, this raises a good point: the fact that we return unpadded arrays is an implementation detail, so maybe we don't want to make it part of our API contract (unless it is explicitly part of the `FixedLengthBinaryInputFormat` contract).
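To illustrate the padding concern, here is a minimal standalone Java sketch. The byte array and its valid length are hypothetical stand-ins for a `BytesWritable`'s backing buffer and its `getLength()` value (this is not Spark or Hadoop code); the point is that a caller should copy only the first `getLength()` bytes rather than use the raw buffer:

```java
import java.util.Arrays;

public class PaddedBufferDemo {
    public static void main(String[] args) {
        // Hypothetical backing buffer: capacity 8, but only the first
        // 3 bytes are the actual record -- the rest is padding, just as
        // getBytes() may return a buffer longer than getLength().
        byte[] backing = new byte[] {1, 2, 3, 0, 0, 0, 0, 0};
        int validLength = 3; // stand-in for getLength()

        // Unsafe: using the raw buffer keeps the padding bytes.
        byte[] raw = backing;

        // Safe: copy only the valid prefix, which is what a caller of
        // getBytes() should do in general when padding is possible.
        byte[] record = Arrays.copyOf(raw, validLength);

        System.out.println(raw.length + " " + record.length); // prints "8 3"
    }
}
```

The guarantee discussed above would let Spark's callers skip this defensive copy, which is exactly why exposing it in the API contract deserves caution.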