Dear AsterixDB devs,

I am currently trying out the new support for Parquet files on S3 (still in the context of my high-energy-physics use case [1]). It works great so far and performance is generally decent. However, I noticed that queries do not use more than 16 cores, even though 96 logical cores are available and even though I run long-running queries (several minutes) on large data sets with many files (I tried 128 files of 17 GB each).

Is this an arbitrary/artificial limitation that can be changed somehow (potentially with a small patch plus recompiling), or would lifting it require more substantial development? FYI, I am currently on commit 03fd6d0f, which should include all S3/Parquet commits on master.
Cheers,
Ingo

[1] https://arxiv.org/abs/2104.12615
