Github user sameeragarwal commented on the issue:

    https://github.com/apache/spark/pull/14176
  
    @clockfly as Qifan said, the rationale for not deleting the old vectorized 
hashmap code in the short term was to let us quickly benchmark and compare 
the two implementations across a wide variety of workloads.
    
    That said, I think the high-level issue is that our generated code doesn't 
currently expose a good interface or hooks for testing custom operator 
implementations while running benchmarks or tests (... and since this 
first-level aggregate hashmap is generated entirely during query compilation, 
injecting a class that works for all schema types during testing isn't very 
straightforward).
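    The kind of hook described above could look something like the sketch 
below. This is purely hypothetical and not Spark's actual API: the 
`AggregateHashMap` interface, the `RowBasedMap` class, and all method names 
are invented for illustration. The point is that if the generated code called 
through a stable interface, a test or benchmark could swap in a custom 
implementation without regenerating code per schema.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical hook interface (not part of Spark) that generated
// aggregation code could call, allowing tests and benchmarks to
// inject a custom hashmap implementation.
interface AggregateHashMap {
    // Return the current aggregate value for `key`, inserting `init` if absent.
    long findOrInsert(long key, long init);
    void update(long key, long value);
    long get(long key);
}

// Trivial reference implementation standing in for a generated
// row-based hashmap; a vectorized variant could implement the same
// interface and be swapped in for comparison.
class RowBasedMap implements AggregateHashMap {
    private final Map<Long, Long> map = new HashMap<>();
    public long findOrInsert(long key, long init) {
        return map.computeIfAbsent(key, k -> init);
    }
    public void update(long key, long value) { map.put(key, value); }
    public long get(long key) { return map.getOrDefault(key, 0L); }
}

public class HashMapHookSketch {
    // A sum-by-key aggregation written against the interface rather than
    // a concrete generated class, so either implementation can be tested.
    static long sumByKey(AggregateHashMap map, long[] keys, long[] values) {
        for (int i = 0; i < keys.length; i++) {
            long current = map.findOrInsert(keys[i], 0L);
            map.update(keys[i], current + values[i]);
        }
        long total = 0;
        for (long k : new long[]{1L, 2L}) total += map.get(k);
        return total;
    }

    public static void main(String[] args) {
        AggregateHashMap map = new RowBasedMap(); // swap implementations here
        long result = sumByKey(map, new long[]{1L, 2L, 1L},
                               new long[]{10L, 20L, 30L});
        System.out.println(result); // prints 60
    }
}
```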


