GitHub user paul-rogers opened a pull request: https://github.com/apache/drill/pull/789
DRILL-5356: Refactor Parquet Record Reader

The Parquet reader is Drill's premier data source and has worked very well for many years. As with any piece of code, it has grown in complexity over that time and has become hard to understand and maintain.

In work on another project, we found that the Parquet reader accidentally creates "low density" batches: record batches with little actual data compared to the amount of memory allocated. We'd like to fix that. However, the current complexity of the reader code creates a barrier to making improvements: the code is so complex that it is often better to leave bugs unfixed than to risk spending large amounts of time struggling to make small changes.

This commit helps revitalize the Parquet reader. Functionality is identical to the code in master, but the code has been pulled apart into various classes, each of which focuses on one part of the task: building up a schema, keeping track of read state, a strategy for reading various combinations of records, and so on. The idea is that several small, focused classes are easier to understand than one huge, complex class. Indeed, the idea of small, focused classes is common in the industry; it is nothing new.

Unit tests pass with the change. Since no logic has changed (lines of code were only moved), that is a good indication that everything still works.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/paul-rogers/drill DRILL-5356

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/drill/pull/789.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #789

----

commit f54fc657ef4bda5db2743032ab64b504183f93c8
Author: Paul Rogers <prog...@maprtech.com>
Date:   2017-03-15T20:49:07Z

    DRILL-5356: Refactor Parquet Record Reader
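The "low density" batch problem described in the PR can be made concrete with a small sketch. This is a hypothetical illustration, not Drill's actual API: it simply defines density as the fraction of a batch's allocated buffer memory that holds real record data, which is the quantity the PR says ends up too small.

```java
// Hypothetical sketch of the "low density batch" idea (names are
// illustrative, not Drill's real classes): a record batch is backed by
// allocated buffers, and density measures how much of that allocation
// actually holds record data.
public class BatchDensity {

    /** Fraction of allocated memory occupied by real data. */
    static double density(long dataBytes, long allocatedBytes) {
        if (allocatedBytes <= 0) {
            throw new IllegalArgumentException("allocatedBytes must be positive");
        }
        return (double) dataBytes / allocatedBytes;
    }

    public static void main(String[] args) {
        // A batch holding 40 KB of data in a 1 MB allocation is "low
        // density": roughly 96% of the memory is unused overhead.
        double d = density(40 * 1024, 1024 * 1024);
        System.out.printf("density = %.4f%n", d);
    }
}
```

A reader producing many such batches allocates far more memory than the data requires, which is the waste the refactoring is meant to make fixable.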