Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/3301#issuecomment-65016448
  
    @ash211 My understanding is that `WholeCombineFileRecordReader` exists
    purely as plumbing to pass Hadoop configuration settings down to the
    `WholeTextFileRecordReader`s, since `CombineFileRecordReader` isn't
    `Configurable` and doesn't pass configurations down to the record readers
    that it creates.  I suppose we could make this a bit clearer by replacing
    `WholeCombineFileRecordReader` with a class named
    `ConfigurableCombineFileRecordReader` that's parameterized by a
    `Configurable` `RecordReader`.
    
    Do you think that would be clearer?
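
    For concreteness, here's a rough sketch (my own wording, not code from this
    PR) of what that `ConfigurableCombineFileRecordReader` could look like: a
    `CombineFileRecordReader` that is itself `Configurable` and hands its
    `Configuration` to each `Configurable` delegate reader as it is created.
    The type parameter and the null guards are assumptions on my part:

        import org.apache.hadoop.conf.{Configurable, Configuration}
        import org.apache.hadoop.mapreduce.{InputSplit, RecordReader, TaskAttemptContext}
        import org.apache.hadoop.mapreduce.lib.input.{CombineFileRecordReader, CombineFileSplit}

        private[spark] class ConfigurableCombineFileRecordReader[K, V, T <: RecordReader[K, V] with Configurable](
            split: InputSplit,
            context: TaskAttemptContext,
            recordReaderClass: Class[T])
          extends CombineFileRecordReader[K, V](
            split.asInstanceOf[CombineFileSplit],
            context,
            recordReaderClass)
          with Configurable {

          private var conf: Configuration = _

          override def setConf(c: Configuration): Unit = {
            conf = c
            // The superclass constructor already created the first delegate reader
            // before setConf could run, so push the configuration onto it here.
            if (curReader != null) {
              curReader.asInstanceOf[Configurable].setConf(c)
            }
          }

          override def getConf: Configuration = conf

          // CombineFileRecordReader calls this each time it advances to the next
          // file in the split; curReader then points at the freshly created
          // delegate, so this is the natural place to hand the configuration down.
          override def initNextRecordReader(): Boolean = {
            val hasNext = super.initNextRecordReader()
            if (hasNext && curReader != null && conf != null) {
              curReader.asInstanceOf[Configurable].setConf(conf)
            }
            hasNext
          }
        }

    Assuming `WholeTextFileRecordReader` implements `Configurable` (which is what
    the current plumbing relies on), it could be plugged in as the delegate
    unchanged, and the class name would then say exactly why the wrapper exists.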


