alamb commented on issue #5855: URL: https://github.com/apache/arrow-rs/issues/5855#issuecomment-2155280567
To be clear, I don't personally have a need (and I don't think InfluxData has a technical need) at the moment to invest the engineering time to make the thrift decoding faster. Instead, my goal in filing these tickets is to leave sufficient information and analysis for anyone for whom it is important (e.g. machine learning use cases, potential users of a "Parquet V3", etc.) so that they could undertake the work if it were actually critical to their workload.

I tend to agree with @tustvold that there are likely only a few real-world scenarios where the system bottleneck is Parquet decoding, though I am sure we can come up with them.
