tustvold opened a new issue, #2594: URL: https://github.com/apache/arrow-rs/issues/2594
**TL;DR: rather than fighting entropy, let's just brute-force compilation**

**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**

The arrow crate is getting rather large, and is starting to show up as a non-trivial bottleneck when compiling code, see #2170. There have been some efforts to reduce the amount of generated code, see #1858, but this will be a perpetual losing battle against new feature additions. I think there are a few problems currently:

1. Limited build parallelism, especially if [codegen-units](https://doc.rust-lang.org/stable/rustc/codegen-options/#codegen-units) is set low
2. Downstream crates have to "depend" on functionality they don't need, e.g. `parquet` depending on compute kernels
3. Minor changes force large amounts of recompilation, with incremental compilation only helping marginally
4. Codegen is rarely linear in complexity; consequently, larger codegen units take longer to compile than the same amount of code split into smaller units

These all conspire to produce an `arrow`-shaped hole in compilation, where CPUs are left idle. Some numbers from my local machine:

* Release with default features: 232 seconds
* Release with default features without comparison kernels: 150 seconds
* Release with default features without compute kernels: 70 seconds
* Release without default features without compute kernels: 60 seconds

**For the vast majority of this time, all but a single core sits idle.**

**Describe the solution you'd like**

I would like to propose we split the arrow crate into a number of sub-crates that are then re-exported by the top-level `arrow` crate. Users can then choose to depend on the batteries-included `arrow` crate, or on the more granular crates. Initially I would propose the following split:

* arrow-csv: CSV reader support
* arrow-ipc: IPC support
* arrow-json: JSON support (related to #2300)
* arrow-compute: contents of the compute module
* arrow-test: arrow test_utils (not published)
* arrow-core: everything else

There is definitely scope for splitting the crates further after this; in particular, the comparison kernels might be a good candidate to live on their own, but let's start small and go from there. I suspect a fair amount of disentangling will be necessary to achieve this.
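To make the proposal concrete, here is a minimal sketch of how the top-level `arrow` facade could re-export the sub-crates so that existing import paths keep working. The `arrow_core`, `arrow_compute`, `arrow_csv`, `arrow_ipc` and `arrow_json` crate names are assumptions derived from the split listed above; none of these crates exist yet.

```rust
// Hypothetical `lib.rs` of the batteries-included `arrow` facade crate.

// Re-export the core pieces at the crate root so existing paths such as
// `arrow::array::Int32Array` continue to resolve unchanged.
pub use arrow_core::*;

// Re-export each sub-crate under its familiar module name so that paths
// such as `arrow::csv::ReaderBuilder` or `arrow::compute::kernels` also
// keep working. These re-exports could additionally be gated behind cargo
// features if that proves useful.
pub use arrow_compute as compute;
pub use arrow_csv as csv;
pub use arrow_ipc as ipc;
pub use arrow_json as json;
```

Downstream crates such as `parquet` could then depend directly on the sub-crates they actually use, rather than on the facade, allowing them to start compiling without waiting for e.g. the compute kernels.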
**Describe alternatives you've considered**

Feature flags are another way this could be handled; however, they have a number of limitations:

* It is impractical to test the full combinatorial explosion of feature combinations, which allows bugs to sneak through
* Features are unified per target, which limits build parallelism: the fact that, say, DataFusion depends on arrow with CSV support shouldn't force the `parquet` crate to wait for the CSV code to compile before it can start compiling
* Poor UX:
  * Discoverability is limited; it can be hard to determine what features gate what functionality
  * It is hard to determine whether the enabled feature set is minimal; there is no equivalent of cargo-udeps
  * It can be a non-trivial detective exercise to determine why a given feature is being enabled
* They necessitate counter-intuitive hacks to play nicely in multi-crate workspaces - see [workspace hack](https://docs.rs/cargo-hakari/latest/cargo_hakari/about/index.html#what-are-workspace-hack-crates)

**Additional context**

FYI @alamb @jhorstmann @nevi-me
