[jira] [Updated] (SPARK-17047) Spark 2 cannot create table when CLUSTERED.
[ https://issues.apache.org/jira/browse/SPARK-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-17047:
----------------------------------
    Component/s: SQL

> Spark 2 cannot create table when CLUSTERED.
> -------------------------------------------
>
>                 Key: SPARK-17047
>                 URL: https://issues.apache.org/jira/browse/SPARK-17047
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.0.0, 2.1.1, 2.2.0
>            Reporter: Dr Mich Talebzadeh
>             Fix For: 2.3.0
>
> The CLUSTERED BY clause does not work in Spark 2:
>
> CREATE TABLE test.dummy2
> (
>   ID INT
> , CLUSTERED INT
> , SCATTERED INT
> , RANDOMISED INT
> , RANDOM_STRING VARCHAR(50)
> , SMALL_VC VARCHAR(10)
> , PADDING VARCHAR(10)
> )
> CLUSTERED BY (ID) INTO 256 BUCKETS
> STORED AS ORC
> TBLPROPERTIES (
>   "orc.compress"="SNAPPY",
>   "orc.create.index"="true",
>   "orc.bloom.filter.columns"="ID",
>   "orc.bloom.filter.fpp"="0.05",
>   "orc.stripe.size"="268435456",
>   "orc.row.index.stride"="1"
> )
>
> scala> HiveContext.sql(sqltext)
> org.apache.spark.sql.catalyst.parser.ParseException:
> Operation not allowed: CREATE TABLE ... CLUSTERED BY (line 2, pos 0)

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
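[Editor's note: until the parser support that the Fix Version (2.3.0) above refers to, a common Spark 2.x workaround is to create the bucketed table through the `DataFrameWriter` API, which has supported `bucketBy` since 2.0, rather than through `CREATE TABLE ... CLUSTERED BY` DDL. The sketch below uses the table and column names from the report; the source table `test.dummy_source` and an existing Hive-enabled `SparkSession` named `spark` are assumptions.]

```scala
// Sketch of a Spark 2.x workaround: write a bucketed table via the
// DataFrameWriter API instead of CLUSTERED BY DDL, which the parser rejects.
// Assumes `spark` is a SparkSession built with .enableHiveSupport(),
// and that test.dummy_source is a hypothetical table holding the data.
val df = spark.table("test.dummy_source")

df.write
  .format("orc")
  .option("compression", "snappy")   // ORC compression, as in the TBLPROPERTIES
  .bucketBy(256, "ID")               // 256 buckets on ID, matching the DDL
  .sortBy("ID")
  .saveAsTable("test.dummy2")
```

[Note that tables bucketed this way use Spark's own bucketing layout, which Hive does not recognize as Hive-bucketed; the other ORC properties (bloom filters, stripe size, index stride) have no direct `DataFrameWriter` equivalent here.]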
[jira] [Updated] (SPARK-17047) Spark 2 cannot create table when CLUSTERED.
[ https://issues.apache.org/jira/browse/SPARK-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-17047:
----------------------------------
    Affects Version/s: 2.1.1
                       2.2.0

(Quoted issue description unchanged; see the first message above.)
[jira] [Updated] (SPARK-17047) Spark 2 cannot create table when CLUSTERED.
[ https://issues.apache.org/jira/browse/SPARK-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Dongjoon Hyun updated SPARK-17047:
----------------------------------
    Summary: Spark 2 cannot create table when CLUSTERED.  (was: Spark 2 cannot create ORC table when CLUSTERED.)

(Quoted issue description unchanged; see the first message above.)