[ https://issues.apache.org/jira/browse/SPARK-37034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436898#comment-17436898 ]
Wenchen Fan edited comment on SPARK-37034 at 11/1/21, 3:23 PM:
---------------------------------------------------------------
This is a question, not a feature request; please ask it on the dev list instead of JIRA. A quick answer: there is no plan to add vectorized execution to Spark in the near future (at least I haven't seen any proposal), but there are third-party libraries that do it via the columnar API in the Spark query plan.

was (Author: cloud_fan):
This is a question, not a feature request; please ask it on the dev list instead of JIRA. A quick answer: there is no plan to add vectorized execution to Spark in the near future, but there are third-party libraries that do it via the columnar API in the Spark query plan.

> What's the progress of vectorized execution for Spark?
> ------------------------------------------------------
>
>                 Key: SPARK-37034
>                 URL: https://issues.apache.org/jira/browse/SPARK-37034
>             Project: Spark
>          Issue Type: New Feature
>          Components: Spark Core
>    Affects Versions: 3.2.0
>            Reporter: xiaoli
>          Priority: Major
>
> Spark already supports vectorized reads for ORC and Parquet. What is the progress of
> other vectorized execution, e.g. vectorized writes, joins, aggregations, and simple
> operators (string functions, math functions)?
> Hive has supported vectorized query execution since an early version
> (https://cwiki.apache.org/confluence/display/hive/vectorized+query+execution).
> Since Spark is often positioned as a replacement for Hive, I guess the reason Spark
> does not support vectorized execution may be that the design or implementation is
> harder in Spark. What is the main issue blocking vectorized execution in Spark?

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
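For context on "the columnar API in the Spark query plan" mentioned in the comment: since Spark 3.0, `SparkSessionExtensions.injectColumnarRule` lets a plugin register a `ColumnarRule` that rewrites physical operators into columnar equivalents, which is the hook third-party accelerators use. The sketch below is a minimal, do-nothing skeleton of that wiring; the class names `MyColumnarRule` and `MyExtensions` are hypothetical, not from the source, and a real implementation would replace supported operators with columnar versions inside `apply`.

```scala
import org.apache.spark.sql.SparkSessionExtensions
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.{ColumnarRule, SparkPlan}

// Hypothetical rule: a real plugin (e.g. a GPU or SIMD accelerator) would
// rewrite supported operators in the physical plan to columnar equivalents.
class MyColumnarRule extends ColumnarRule {
  override def preColumnarTransitions: Rule[SparkPlan] = new Rule[SparkPlan] {
    override def apply(plan: SparkPlan): SparkPlan = {
      // Inspect `plan` and substitute columnar operator implementations here;
      // this skeleton leaves the plan unchanged.
      plan
    }
  }
}

// Registered with Spark via the extensions mechanism, typically enabled by
// setting spark.sql.extensions to this class name.
class MyExtensions extends (SparkSessionExtensions => Unit) {
  override def apply(ext: SparkSessionExtensions): Unit =
    ext.injectColumnarRule(_ => new MyColumnarRule)
}
```

This compiles only against a Spark 3.x dependency and does nothing on its own; it just shows where vectorized/columnar execution can be plugged in without changes to Spark itself.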