[jira] [Commented] (SPARK-23442) Reading from partitioned and bucketed table uses only bucketSpec.numBuckets partitions in all cases
[ https://issues.apache.org/jira/browse/SPARK-23442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16494806#comment-16494806 ]

Apache Spark commented on SPARK-23442:
--------------------------------------

User 'wangyum' has created a pull request for this issue:
https://github.com/apache/spark/pull/21460

> Reading from partitioned and bucketed table uses only bucketSpec.numBuckets
> partitions in all cases
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-23442
>                 URL: https://issues.apache.org/jira/browse/SPARK-23442
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, SQL
>    Affects Versions: 2.2.1
>            Reporter: Pranav Rao
>            Priority: Major
>
> Through the DataFrameWriter[T] interface I have created an external Hive table
> with 5000 (horizontal) partitions and 50 buckets in each partition. Overall
> the dataset is 600GB and the provider is Parquet.
> This works great when joining with a similarly bucketed dataset - Spark is
> able to avoid a shuffle.
> However, any action on this DataFrame (from _spark.table("tablename")_) runs
> with only 50 RDD partitions. This happens because of
> [createBucketedReadRDD|https://github.com/apache/spark/blob/branch-2.3/sql/core/src/main/scala/org/apache/spark/sql/execution/DataSourceScanExec.scala],
> which builds exactly one read partition per bucket id, regardless of how many
> horizontal partitions the table has.
> So the 600GB dataset is read through only 50 tasks, which makes this
> partitioning + bucketing scheme not useful.
> I cannot expose the base directory of the Parquet files for reading the
> dataset, because the partition locations do not follow a (basePath + partSpec)
> format.
> In the meantime, are there workarounds to use higher parallelism while reading
> such a table?
> Let me know if I can help in any way.
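For reference, here is a minimal sketch of the write/read pattern the report describes. The table name, external path, and column names are hypothetical (not taken from the report); only the 50-bucket figure mirrors the issue.

{code:scala}
// Minimal sketch of the reported pattern (table, path, and column names
// are hypothetical, not taken from the report).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bucketed-read-sketch")
  .enableHiveSupport()
  .getOrCreate()

val df = spark.range(0, 1000000).selectExpr(
  "id AS user_id",
  "CAST(id % 100 AS STRING) AS dt", // hypothetical partition column
  "id * 2 AS value")

// Write a partitioned + bucketed external table through DataFrameWriter.
df.write
  .format("parquet")
  .partitionBy("dt")
  .bucketBy(50, "user_id")
  .sortBy("user_id")
  .option("path", "/warehouse/tablename") // external location
  .saveAsTable("tablename")

// Read it back: the scan RDD has only bucketSpec.numBuckets partitions,
// no matter how many horizontal partitions (dt values) exist, because
// createBucketedReadRDD folds every file sharing a bucket id into a
// single FilePartition.
val t = spark.table("tablename")
println(t.rdd.getNumPartitions) // 50
{code}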
[jira] [Commented] (SPARK-23442) Reading from partitioned and bucketed table uses only bucketSpec.numBuckets partitions in all cases
[ https://issues.apache.org/jira/browse/SPARK-23442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16368117#comment-16368117 ]

Pranav Rao commented on SPARK-23442:
------------------------------------

Repartitioning is unlikely to help a user, for two reasons (see the sketch below):
* The map side of the repartition is still limited to num_buckets tasks, so the read itself remains very slow and does not utilise the available parallelism.
* The user pre-partitioned and bucketed the dataset and persisted it precisely to avoid a repartition/shuffle at read time, so the purpose of this feature is lost.
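To illustrate the first point, a sketch (table name hypothetical, continuing the example above): even after a repartition, the shuffle-map stage that actually scans the Parquet files still runs with only num_buckets tasks.

{code:scala}
// Sketch of why repartition() does not fix the read parallelism
// (table name hypothetical, continuing the example above).
val t = spark.table("tablename")   // scan RDD: 50 partitions (numBuckets)

val wide = t.repartition(1000)     // inserts a full shuffle (Exchange)
println(wide.rdd.getNumPartitions) // 1000, but only on the reduce side

// The shuffle-map stage that reads the 600GB of Parquet is still the
// 50-partition scan, so the read runs as 50 tasks either way - and the
// shuffle itself is exactly the cost bucketing was meant to avoid.
{code}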
[jira] [Commented] (SPARK-23442) Reading from partitioned and bucketed table uses only bucketSpec.numBuckets partitions in all cases
[ https://issues.apache.org/jira/browse/SPARK-23442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16366898#comment-16366898 ]

Marco Gaido commented on SPARK-23442:
-------------------------------------

I am not sure this is what you are looking for, but you can repartition the resulting DataFrame in order to have more partitions.
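As a sketch, the suggested workaround (table name hypothetical):

{code:scala}
// Suggested workaround: shuffle the scanned data into more partitions
// (table name hypothetical).
val t = spark.table("tablename").repartition(1000)
println(t.rdd.getNumPartitions) // 1000 for downstream stages, at the cost of a shuffle
{code}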