[jira] [Commented] (CARBONDATA-1148) Can't load data to carbon_table

2017-06-19 Thread Ashwini K (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16055143#comment-16055143
 ] 

Ashwini K commented on CARBONDATA-1148:
---

Is this resolved? If not, please share additional details on the table structure 
and the load data (CSV file) for which you are getting this error.

> Can't load data to carbon_table
> 
>
> Key: CARBONDATA-1148
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1148
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.1.0
> Environment: HDP  2.6
> Spark 2.1.0.2.6.0.3-8
> HDFS  2.7.3.2.6
> YARN  2.7.3
> Hive  1.2.1.2.6
> Java  1.8.0_112
> Scala 2.11.8
> CarbonData    1.1.0 (carbondata_2.11-1.1.0-shade-hadoop2.7.3.jar)
>Reporter: lonly
>Priority: Critical
>  Labels: carbon, spark
>
> scala> carbon.sql("LOAD DATA INPATH 'hdfs://hmly10:8020/testdata/carbondata/sample.csv' INTO TABLE carbon.test_table")
> 17/06/09 15:53:11 WARN TaskSetManager: Lost task 0.0 in stage 6.0 (TID 6, hmly11, executor 1): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
>   at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
>   at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2024)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
>   at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:373)
>   at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
>   at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1058)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1909)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)
>   at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1808)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1353)
>   at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2018)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1942)

[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1058
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/501/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2599/





[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread QiangCai
Github user QiangCai commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
retest this please




[GitHub] carbondata issue #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1042
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/500/





[GitHub] carbondata issue #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1042
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2598/





[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1058
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/499/





[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2597/





[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122864627
  
--- Diff: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowPartitionsCommand.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
--- End diff --

Why is this class name different from the file name?
Just my doubt.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122864102
  
--- Diff: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowPartitionsCommand.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+// Column names are based on Hive.
+AttributeReference("ID", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition id").build())(),
+AttributeReference("Name", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition name").build())(),
+AttributeReference("Value(" + columnName + "=)", StringType, nullable = true,
+  new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
+val relation = CarbonEnv.get.carbonMetastore
+  .lookupRelation1(tableIdentifier)(sqlContext).
+  asInstanceOf[CarbonRelation]
+val carbonTable = relation.tableMeta.carbonTable
+var partitionInfo = carbonTable.getPartitionInfo(
+  carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
+var partitionType = partitionInfo.getPartitionType
+var result = Seq.newBuilder[Row]
+columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName
+if (PartitionType.RANGE.equals(partitionType)) {
+  result.+=(RowFactory.create("0", "", "default"))
+  var id = 1
+  // var name = "partition_"
--- End diff --

Please delete this unused comment.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122864076
  
--- Diff: 
integration/spark/src/main/scala/org/apache/spark/sql/execution/command/ShowPartitionsCommand.scala
 ---
@@ -0,0 +1,95 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.execution.RunnableCommand
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+// Column names are based on Hive.
+AttributeReference("ID", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition id").build())(),
+AttributeReference("Name", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition name").build())(),
+AttributeReference("Value(" + columnName + "=)", StringType, nullable = true,
+  new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sqlContext: SQLContext): Seq[Row] = {
--- End diff --

Suggest using a `case` match for the different partition types.
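The suggestion above can be sketched as follows. This is a hypothetical illustration, not code from the PR: the PR's command is Scala, where the dispatch would be a `match` expression; the same per-type dispatch is shown here as a switch over a stand-in enum (the real type is `org.apache.carbondata.core.metadata.schema.partition.PartitionType`, which is not available in this sketch).

```java
// Stand-in enum mirroring the PartitionType values used in the diff.
enum PartitionType { RANGE, LIST, HASH }

public class PartitionDispatch {
  // One exhaustive switch replaces the chained equals() checks; in the
  // Scala command this would be `partitionType match { case RANGE => ... }`.
  static String describe(PartitionType t) {
    switch (t) {
      case RANGE: return "range partition";
      case LIST:  return "list partition";
      case HASH:  return "hash partition";
      default:    return "unknown partition type";
    }
  }

  public static void main(String[] args) {
    System.out.println(describe(PartitionType.RANGE)); // prints "range partition"
  }
}
```

The win over chained `PartitionType.RANGE.equals(partitionType)` checks is that each partition type gets exactly one branch, and adding a new type is a one-branch change.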




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122863911
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/ShowPartitionsCommand.scala
 ---
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hive.CarbonRelation
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+// Column names are based on Hive.
+AttributeReference("ID", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition id").build())(),
+AttributeReference("Name", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition name").build())(),
+AttributeReference("Value(" + columnName + "=)", StringType, nullable = true,
+  new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sparkSession: SparkSession): Seq[Row] = {
+val relation = CarbonEnv.getInstance(sparkSession).carbonMetastore
+  .lookupRelation(tableIdentifier)(sparkSession).
+  asInstanceOf[CarbonRelation]
+val carbonTable = relation.tableMeta.carbonTable
+var partitionInfo = carbonTable.getPartitionInfo(
+  carbonTable.getAbsoluteTableIdentifier.getCarbonTableIdentifier.getTableName)
+var partitionType = partitionInfo.getPartitionType
+var result = Seq.newBuilder[Row]
+columnName = partitionInfo.getColumnSchemaList.get(0).getColumnName
+if (PartitionType.RANGE.equals(partitionType)) {
+  result.+=(RowFactory.create("0", "", "default"))
+  var id = 1
+  // var name = "partition_"
--- End diff --

Please delete this unused comment.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122863841
  
--- Diff: 
integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/ShowPartitionsCommand.scala
 ---
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.execution.command
+
+import java.util
+
+import scala.collection.JavaConverters._
+import scala.collection.mutable.ListBuffer
+
+import org.apache.spark.sql._
+import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}
+import org.apache.spark.sql.catalyst.TableIdentifier
+import org.apache.spark.sql.hive.CarbonRelation
+import org.apache.spark.sql.types._
+
+import org.apache.carbondata.common.logging.LogServiceFactory
+import org.apache.carbondata.core.metadata.schema.partition.PartitionType
+
+
+
+private[sql] case class ShowCarbonPartitionsCommand(
+tableIdentifier: TableIdentifier) extends RunnableCommand {
+  val LOGGER = LogServiceFactory.getLogService(ShowCarbonPartitionsCommand.getClass.getName)
+  var columnName = ""
+  override val output: Seq[Attribute] = Seq(
+// Column names are based on Hive.
+AttributeReference("ID", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition id").build())(),
+AttributeReference("Name", StringType, nullable = false,
+  new MetadataBuilder().putString("comment", "partition name").build())(),
+AttributeReference("Value(" + columnName + "=)", StringType, nullable = true,
+  new MetadataBuilder().putString("comment", "partition value").build())()
+  )
+  override def run(sparkSession: SparkSession): Seq[Row] = {
+val relation = CarbonEnv.getInstance(sparkSession).carbonMetastore
--- End diff --

Suggest using a `case` match for the different partition types.




[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1058
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/498/





[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2596/





[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
retest this please




[GitHub] carbondata pull request #1053: [CARBONDATA-1188] fixed codec for UpscaleFloa...

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1053#discussion_r122855602
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/ColumnPageStatsVO.java
 ---
@@ -96,8 +96,7 @@ public void update(Object value) {
   case DOUBLE:
 max = ((double) max > (double) value) ? max : value;
 min = ((double) min < (double) value) ? min : value;
-int num = getDecimalCount((double) value);
-decimal = decimal > num ? decimal : num;
+decimal = getDecimalCount((double) value);
--- End diff --

This seems incorrect; the decimal count should be updated only if the decimal 
place of this value is longer.
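The reviewer's point can be sketched as below. This is an illustration, not the actual patch: `getDecimalCount` here is a simplified stand-in for the method in `ColumnPageStatsVO`, and only the max-tracking behavior of `update` is the point — the stored decimal count is the maximum number of decimal places seen across all values, so it must only ever grow.

```java
import java.math.BigDecimal;

// Sketch of the fix: keep the maximum decimal count seen so far
// instead of overwriting it with the latest value's count.
public class DecimalStats {
  private int decimal = 0;

  // Simplified stand-in for ColumnPageStatsVO.getDecimalCount.
  static int getDecimalCount(double value) {
    BigDecimal bd = BigDecimal.valueOf(value).stripTrailingZeros();
    return Math.max(bd.scale(), 0);
  }

  void update(double value) {
    int num = getDecimalCount(value);
    // Update only if this value has more decimal places -- the behavior
    // the review says the patch accidentally dropped.
    decimal = Math.max(decimal, num);
  }

  int decimalCount() { return decimal; }

  public static void main(String[] args) {
    DecimalStats stats = new DecimalStats();
    stats.update(1.25);  // 2 decimal places
    stats.update(1.5);   // 1 decimal place; must not shrink the count
    System.out.println(stats.decimalCount()); // prints 2
  }
}
```

With the unconditional assignment in the diff, the second `update(1.5)` would reset the count to 1 and later values could be encoded with too little precision; the `Math.max` keeps it at 2.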




[GitHub] carbondata issue #1064: [CARBONDATA-<1173>] Stream ingestion - write path fr...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1064
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/497/





[GitHub] carbondata issue #1064: [CARBONDATA-<1173>] Stream ingestion - write path fr...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1064
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1064: [CARBONDATA-<1173>] Stream ingestion - write path fr...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1064
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1064: [CARBONDATA-<1173>] Stream ingestion - write ...

2017-06-19 Thread aniketadnaik
GitHub user aniketadnaik opened a pull request:

https://github.com/apache/carbondata/pull/1064

[CARBONDATA-<1173>] Stream ingestion - write path framework 

Description:
This is an overall streaming write-path framework implementation. It is 
mainly targeted at the "**streaming_ingest**" branch. This framework 
implementation will help other developers leverage a common code path for 
developing other pieces of the streaming ingestion feature. 
- Whether new unit test cases have been added, or why no new tests are 
required?
- This is just a framework implementation; unit tests will be added 
with individual functionality commits.
- What manual testing have you done?
   - mvn clean verify 
[INFO] 

[INFO] Reactor Summary:
[INFO] 
[INFO] Apache CarbonData :: Parent  SUCCESS [  1.533 s]
[INFO] Apache CarbonData :: Common  SUCCESS [  1.335 s]
[INFO] Apache CarbonData :: Core .. SUCCESS [02:34 min]
[INFO] Apache CarbonData :: Processing  SUCCESS [  5.158 s]
[INFO] Apache CarbonData :: Hadoop  SUCCESS [  5.208 s]
[INFO] Apache CarbonData :: Spark Common .. SUCCESS [ 13.989 s]
[INFO] Apache CarbonData :: Spark . SUCCESS [02:33 min]
[INFO] Apache CarbonData :: Spark Common Test . SUCCESS [05:15 min]
[INFO] Apache CarbonData :: Assembly .. SUCCESS [  1.683 s]
[INFO] Apache CarbonData :: Spark Examples  SUCCESS [  5.123 s]
[INFO] 

[INFO] BUILD SUCCESS
[INFO] 

[INFO] Total time: 10:58 min
[INFO] Finished at: 2017-06-16T17:24:23-07:00
[INFO] Final Memory: 85M/1078M
[INFO] 


   - Made sure write path class invocation happens correctly with 
Spark Structured Streaming (2.1)
   - Made sure the write path execution workflow works with Structured 
Streaming (2.1) using both socket and file sources

- Any additional information to help reviewers in testing this change.
   - Structured Streaming now accepts carbondata as a file format in 
addition to as a data source
   - Temporary functionality writes out data in text format instead of 
carbondata format; it will be replaced with new functionality additions.
 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/aniketadnaik/carbondataStreamIngest 
streamIngest-1173

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1064.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1064


commit 5403972898dc0d62471296476d6d5603dccbda10
Author: Aniket Adnaik 
Date:   2017-06-15T18:57:43Z

[CARBONDATA-1173] Streaming Ingest - write path framework implementation

commit 5270952a43b5e373957682dd75edd15048cb5aa3
Author: Aniket Adnaik 
Date:   2017-06-15T19:11:48Z

[CARBONDATA-1173] Streaming Ingest - write path framework implementation

commit ac3f1bbbee9c33472726afe10bac73a17a607cce
Author: Aniket Adnaik 
Date:   2017-06-16T18:04:19Z

[CARBONDATA-1173] Streaming Ingest - write path framework implementation

commit 4fc68b6c7a986ceda2c02713884d9293d29961fc
Author: Aniket Adnaik 
Date:   2017-06-19T22:19:21Z

[CARBONDATA-1173] Streaming Ingest - write path framework implementation




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1042
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/496/





[GitHub] carbondata issue #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1042
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2595/





[GitHub] carbondata issue #1033: spark2/CarbonSQLCLIDriver.scala storePath is not hdf...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1033
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/495/

Failed Tests (2) in carbondata-pr-spark-1.6, org.apache.carbondata:carbondata-core:
- org.apache.carbondata.core.writer.CarbonFooterWriterTest.testWriteFactMetadata
- org.apache.carbondata.core.writer.CarbonFooterWriterTest.testReadFactMetadata





[jira] [Resolved] (CARBONDATA-1145) Single-pass loading not work on partition table

2017-06-19 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1145.
--
   Resolution: Fixed
Fix Version/s: 1.1.1
   1.2.0

> Single-pass loading not work on partition table
> ---
>
> Key: CARBONDATA-1145
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1145
> Project: CarbonData
>  Issue Type: Bug
>Reporter: QiangCai
>Assignee: QiangCai
> Fix For: 1.2.0, 1.1.1
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1008: [CARBONDATA-1145] Fix single-pass issue for m...

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1008




[GitHub] carbondata issue #1008: [CARBONDATA-1145] Fix single-pass issue for multi-ta...

2017-06-19 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1008
  
LGTM




[jira] [Resolved] (CARBONDATA-1194) Problem in filling/processing multiple implicit columns

2017-06-19 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1194.
--
   Resolution: Fixed
Fix Version/s: 1.1.1
   1.2.0

> Problem in filling/processing multiple implicit columns
> ---
>
> Key: CARBONDATA-1194
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1194
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Reporter: Manohar Vanam
>Assignee: Manohar Vanam
>Priority: Minor
> Fix For: 1.2.0, 1.1.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> If carbon.enable.vector.reader = true, multiple implicit columns are currently not handled.
> Support multiple implicit columns.





[GitHub] carbondata pull request #1063: [CARBONDATA-1194] Problem in filling/processi...

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1063




[GitHub] carbondata issue #1063: [CARBONDATA-1194] Problem in filling/processing mult...

2017-06-19 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1063
  
LGTM




[GitHub] carbondata issue #1063: [CARBONDATA-1194] Problem in filling/processing mult...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1063
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/493/





[GitHub] carbondata pull request #1063: [CARBONDATA-1194] Problem in filling/processi...

2017-06-19 Thread ManoharVanam
GitHub user ManoharVanam opened a pull request:

https://github.com/apache/carbondata/pull/1063

[CARBONDATA-1194] Problem in filling/processing multiple implicit columns

Handling multiple implicit columns

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ManoharVanam/incubator-carbondata ImplicitColumnsBug

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1063.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1063


commit 236fa0eff9bfe7814a0864d48db0bfc4707644f2
Author: Manohar 
Date:   2017-06-19T14:39:40Z

Handling multiple implicit columns






[GitHub] carbondata issue #1063: [CARBONDATA-1194] Problem in filling/processing mult...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1063
  
Can one of the admins verify this patch?




[jira] [Created] (CARBONDATA-1194) Problem in filling/processing multiple implicit columns

2017-06-19 Thread Manohar Vanam (JIRA)
Manohar Vanam created CARBONDATA-1194:
-

 Summary: Problem in filling/processing multiple implicit columns
 Key: CARBONDATA-1194
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1194
 Project: CarbonData
  Issue Type: Bug
  Components: core
Reporter: Manohar Vanam
Assignee: Manohar Vanam
Priority: Minor


If carbon.enable.vector.reader = true, multiple implicit columns are currently not handled.

Support multiple implicit columns.






[GitHub] carbondata pull request #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread QiangCai
Github user QiangCai commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1058#discussion_r122719926
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/store/writer/v3/CarbonFactDataWriterImplV3.java
 ---
@@ -447,9 +447,7 @@ private void writeDataToFile(FileChannel channel, byte[][] dataChunkBytes) {
 for (int j = 0; j < nodeHolderList.size(); j++) {
   nodeHolder = nodeHolderList.get(j);
   bufferSize = nodeHolder.getDataArray()[i].length;
-  buffer = ByteBuffer.allocate(bufferSize);
-  buffer.put(nodeHolder.getDataArray()[i]);
-  buffer.flip();
+  buffer = ByteBuffer.wrap(nodeHolder.getDataArray()[i]);
--- End diff --

can we remove the buffer variable?
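For context, the diff above swaps a three-step allocate/put/flip sequence for a single ByteBuffer.wrap call. A minimal standalone sketch (plain JDK, no CarbonData types; the WrapVsAllocate class name is only for illustration) showing that the two approaches produce equivalent read-ready buffers:

```java
import java.nio.ByteBuffer;

public class WrapVsAllocate {
    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};

        // Old approach: allocate a fresh buffer, copy the bytes in,
        // then flip() so the buffer is readable (position = 0, limit = 4).
        ByteBuffer copied = ByteBuffer.allocate(data.length);
        copied.put(data);
        copied.flip();

        // New approach: wrap the existing array without copying.
        // A wrapped buffer is already read-ready (position = 0, limit = 4).
        ByteBuffer wrapped = ByteBuffer.wrap(data);

        // Both buffers expose the same remaining bytes.
        System.out.println(copied.equals(wrapped)); // prints "true"
    }
}
```

The practical difference is that wrap() shares the backing array instead of copying it, saving one array copy per data chunk; this is safe as long as the source array is not mutated before the buffer is written to the channel.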




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122717016
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.partition
+
+import java.sql.Timestamp
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.common.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+
+class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
+  override def beforeAll = {
+    dropTable
+
+    CarbonProperties.getInstance()
+      .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "dd-MM-")
+
+  }
+
+  test("show partition table: hash table") {
+    sql(
+      """
+        | CREATE TABLE hashTable (empname String, designation String, doj Timestamp,
+        |  workgroupcategory int, workgroupcategoryname String, deptno int, deptname String,
+        |  projectcode int, projectjoindate Timestamp, projectenddate Timestamp,attendance int,
+        |  utilization int,salary int)
+        | PARTITIONED BY (empno int)
+        | STORED BY 'org.apache.carbondata.format'
+        | TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='3')
+      """.stripMargin)
+    sql(s"""LOAD DATA local inpath '$resourcesPath/data.csv' INTO TABLE hashTable OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '"')""")
+
+    // EqualTo
+    checkAnswer(sql("show partitions hashTable"), Seq(Row("HASH PARTITION", "", "3")))
+
+    sql("drop table hashTable")
+  }
+
+  test("show partition table: range partition") {
+    sql(
+      """
+        | CREATE TABLE rangeTable (empno int, empname String, designation String,
+        |  workgroupcategory int, workgroupcategoryname String, deptno int, deptname String,
+        |  projectcode int, projectjoindate Timestamp, projectenddate Timestamp,attendance int,
+        |  utilization int,salary int)
+        | PARTITIONED BY (doj Timestamp)
+        | STORED BY 'org.apache.carbondata.format'
+        | TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+        |  'RANGE_INFO'='01-01-2010, 01-01-2015')
+      """.stripMargin)
+    sql(s"""LOAD DATA local inpath '$resourcesPath/data.csv' INTO TABLE rangeTable OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '"')""")
+
+    // EqualTo
+    checkAnswer(sql("show partitions rangeTable"), Seq(Row("0", "", "default"), Row("1", "", "< 01-01-2010"), Row("2", "", "< 01-01-2015")))
--- End diff --

over 100 chars ?




[GitHub] carbondata issue #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1058
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2593/





[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122715807
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.partition
+
+import java.sql.Timestamp
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.common.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+
+class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
--- End diff --

extra space between “TestShowPartition” and “extend”




[GitHub] carbondata pull request #1060: [CARBONDATA-1191] Remove carbon-spark-shell s...

2017-06-19 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1060




[GitHub] carbondata issue #1060: [CARBONDATA-1191] Remove carbon-spark-shell script

2017-06-19 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/1060
  
LGTM




[GitHub] carbondata pull request #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1058#discussion_r122713717
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/store/writer/v3/CarbonFactDataWriterImplV3.java
 ---
@@ -447,9 +447,7 @@ private void writeDataToFile(FileChannel channel, byte[][] dataChunkBytes) {
 for (int j = 0; j < nodeHolderList.size(); j++) {
   nodeHolder = nodeHolderList.get(j);
   bufferSize = nodeHolder.getDataArray()[i].length;
-  buffer = ByteBuffer.allocate(bufferSize);
-  buffer.put(nodeHolder.getDataArray()[i]);
-  buffer.flip();
+  buffer = ByteBuffer.wrap(nodeHolder.getDataArray()[i]);
--- End diff --

@mohammadshahidkhan fixed




[GitHub] carbondata issue #1040: [CARBONDATA-1171] Added support for show partitions ...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1040
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2592/





[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1053
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/489/

Failed Tests (1) in carbondata-pr-spark-1.6, org.apache.carbondata:carbondata-spark-common-test:
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxDefaultFormat.test data load with double datatype





[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1053
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2591/





[jira] [Reopened] (CARBONDATA-707) Less ( < ) than operator does not work properly in carbondata.

2017-06-19 Thread SWATI RAO (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

SWATI RAO reopened CARBONDATA-707:
--

not working

> Less ( < ) than operator does not work properly in carbondata. 
> ---
>
> Key: CARBONDATA-707
> URL: https://issues.apache.org/jira/browse/CARBONDATA-707
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.1.0
> Environment: Spark 2.1
>Reporter: SWATI RAO
>Priority: Minor
> Fix For: 1.1.0
>
> Attachments: 100_hive_test.csv
>
>
> Incorrect result is displayed.
> Steps to reproduce:
> 1:Create table using following Command
> " create table Carbon_automation (imei string,deviceInformationId int,MAC 
> string,deviceColor string,device_backColor string,modelId string,marketName 
> string,AMSize string,ROMSize string,CUPAudit string,CPIClocked string,series 
> string,productionDate timestamp,bomCode string,internalModels string, 
> deliveryTime string, channelsId string, channelsName string , deliveryAreaId 
> string, deliveryCountry string, deliveryProvince string, deliveryCity 
> string,deliveryDistrict string, deliveryStreet string, oxSingleNumber string, 
> ActiveCheckTime string, ActiveAreaId string, ActiveCountry string, 
> ActiveProvince string, Activecity string, ActiveDistrict string, ActiveStreet 
> string, ActiveOperatorId string, Active_releaseId string, Active_EMUIVersion 
> string, Active_operaSysVersion string, Active_BacVerNumber string, 
> Active_BacFlashVer string, Active_webUIVersion string, 
> Active_webUITypeCarrVer string,Active_webTypeDataVerNumber string, 
> Active_operatorsVersion string, Active_phonePADPartitionedVersions string, 
> Latest_YEAR int, Latest_MONTH int, Latest_DAY int, Latest_HOUR string, 
> Latest_areaId string, Latest_country string, Latest_province string, 
> Latest_city string, Latest_district string, Latest_street string, 
> Latest_releaseId string, Latest_EMUIVersion string, Latest_operaSysVersion 
> string, Latest_BacVerNumber string, Latest_BacFlashVer string, 
> Latest_webUIVersion string, Latest_webUITypeCarrVer string, 
> Latest_webTypeDataVerNumber string, Latest_operatorsVersion string, 
> Latest_phonePADPartitionedVersions string, Latest_operatorId string, 
> gamePointDescription string,gamePointId double,contractNumber 
> double,imei_count int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ('DICTIONARY_INCLUDE'='deviceInformationId,Latest_YEAR,Latest_MONTH,Latest_DAY')"
> 2:Load Data with following command
> " LOAD DATA INPATH 'HDFS_URL/BabuStore/Data/HiveData' INTO TABLE 
> Carbon_automation 
> OPTIONS('DELIMITER'=',','QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,deviceInformationId,MAC,deviceColor,device_backColor,modelId,marketName,AMSize,ROMSize,CUPAudit,CPIClocked,series,productionDate,bomCode,internalModels,deliveryTime,channelsId,channelsName,deliveryAreaId,deliveryCountry,deliveryProvince,deliveryCity,deliveryDistrict,deliveryStreet,oxSingleNumber,contractNumber,ActiveCheckTime,ActiveAreaId,ActiveCountry,ActiveProvince,Activecity,ActiveDistrict,ActiveStreet,ActiveOperatorId,Active_releaseId,Active_EMUIVersion,Active_operaSysVersion,Active_BacVerNumber,Active_BacFlashVer,Active_webUIVersion,Active_webUITypeCarrVer,Active_webTypeDataVerNumber,Active_operatorsVersion,Active_phonePADPartitionedVersions,Latest_YEAR,Latest_MONTH,Latest_DAY,Latest_HOUR,Latest_areaId,Latest_country,Latest_province,Latest_city,Latest_district,Latest_street,Latest_releaseId,Latest_EMUIVersion,Latest_operaSysVersion,Latest_BacVerNumber,Latest_BacFlashVer,Latest_webUIVersion,Latest_webUITypeCarrVer,Latest_webTypeDataVerNumber,Latest_operatorsVersion,Latest_phonePADPartitionedVersions,Latest_operatorId,gamePointId,gamePointDescription,imei_count')"
> 3:Run the Query 
> " Select imei,gamePointId, channelsId,series from Carbon_automation where  
> channelsId < 4 ORDER BY gamePointId limit 5 "
> 4:Incorrect Result displays as follows:
> +------------+--------------+-------------+----------+
> |    imei    | gamePointId  | channelsId  |  series  |
> +------------+--------------+-------------+----------+
> | 1AA100050  | 29.0         | 1           | 2Series  |
> | 1AA100014  | 151.0        | 3           | 5Series  |
> | 1AA100011  | 202.0        | 1           | 0Series  |
> | 1AA100018  | 441.0        | 4           | 8Series  |
> | 1AA100060  | 538.0        | 4           | 8Series  |
> +------------+--------------+-------------+----------+
> 5 rows selected (0.237 seconds)
> 5:CSV Attached: "100_hive_test.csv"
> Expected Result: Rows with channelsId 4 should not be displayed, as per the query.





[jira] [Comment Edited] (CARBONDATA-1188) Incorrect data is displayed for double data type

2017-06-19 Thread Kunal Kapoor (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053860#comment-16053860
 ] 

Kunal Kapoor edited comment on CARBONDATA-1188 at 6/19/17 11:51 AM:


duplicate to 
https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-1184?filter=allopenissues



was (Author: kunal):
duplicate to 
[#https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-1184?filter=allopenissues]


> Incorrect data is displayed for double data type
> 
>
> Key: CARBONDATA-1188
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1188
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>  Labels: duplicate
> Attachments: 100.csv
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> create table Comp_VMALL_DICTIONARY_EXCLUDE (imei string,gamePointId double)  
> STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei')
> LOAD DATA INPATH  '/home/kunal/Downloads/100.csv' INTO table 
> Comp_VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 'QUOTECHAR'='"', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,gamePointId')
> select * from Comp_VMALL_DICTIONARY_EXCLUDE





[jira] [Comment Edited] (CARBONDATA-1188) Incorrect data is displayed for double data type

2017-06-19 Thread Kunal Kapoor (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053860#comment-16053860
 ] 

Kunal Kapoor edited comment on CARBONDATA-1188 at 6/19/17 11:47 AM:


duplicate to 
[#https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-1184?filter=allopenissues]



was (Author: kunal):
duplicate to 
https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-1184?filter=allopenissues


> Incorrect data is displayed for double data type
> 
>
> Key: CARBONDATA-1188
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1188
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>  Labels: duplicate
> Attachments: 100.csv
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> create table Comp_VMALL_DICTIONARY_EXCLUDE (imei string,gamePointId double)  
> STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei')
> LOAD DATA INPATH  '/home/kunal/Downloads/100.csv' INTO table 
> Comp_VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 'QUOTECHAR'='"', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,gamePointId')
> select * from Comp_VMALL_DICTIONARY_EXCLUDE





[jira] [Commented] (CARBONDATA-1188) Incorrect data is displayed for double data type

2017-06-19 Thread Kunal Kapoor (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053860#comment-16053860
 ] 

Kunal Kapoor commented on CARBONDATA-1188:
--

duplicate to 
https://issues.apache.org/jira/projects/CARBONDATA/issues/CARBONDATA-1184?filter=allopenissues


> Incorrect data is displayed for double data type
> 
>
> Key: CARBONDATA-1188
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1188
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>  Labels: duplicate
> Attachments: 100.csv
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> create table Comp_VMALL_DICTIONARY_EXCLUDE (imei string,gamePointId double)  
> STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei')
> LOAD DATA INPATH  '/home/kunal/Downloads/100.csv' INTO table 
> Comp_VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 'QUOTECHAR'='"', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,gamePointId')
> select * from Comp_VMALL_DICTIONARY_EXCLUDE





[jira] [Updated] (CARBONDATA-1188) Incorrect data is displayed for double data type

2017-06-19 Thread Kunal Kapoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kunal Kapoor updated CARBONDATA-1188:
-
Labels: duplicate  (was: )

> Incorrect data is displayed for double data type
> 
>
> Key: CARBONDATA-1188
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1188
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>  Labels: duplicate
> Attachments: 100.csv
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> create table Comp_VMALL_DICTIONARY_EXCLUDE (imei string,gamePointId double)  
> STORED BY 'org.apache.carbondata.format' 
> TBLPROPERTIES('DICTIONARY_EXCLUDE'='imei')
> LOAD DATA INPATH  '/home/kunal/Downloads/100.csv' INTO table 
> Comp_VMALL_DICTIONARY_EXCLUDE options ('DELIMITER'=',', 'QUOTECHAR'='"', 
> 'BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='imei,gamePointId')
> select * from Comp_VMALL_DICTIONARY_EXCLUDE





[GitHub] carbondata issue #1056: [CARBONDATA-1185] Job Aborted due to Stage Failure (...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1056
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/486/

Failed Tests (3) in carbondata-pr-spark-1.6, org.apache.carbondata:carbondata-spark-common-test:
- org.apache.carbondata.spark.testsuite.dataload.TestBatchSortDataLoad.test batch sort load by passing option and compaction
- org.apache.carbondata.spark.testsuite.dataload.TestBatchSortDataLoad.test batch sort load by passing option in one load and without option in other load and then do compaction
- org.apache.carbondata.spark.testsuite.dataretention.DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause I...

2017-06-19 Thread jatin9896
GitHub user jatin9896 opened a pull request:

https://github.com/apache/carbondata/pull/1062

[CARBONDATA-982] Fixed Bug For NotIn Clause In Presto

Resolved NotIn clause for presto integration

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jatin9896/incubator-carbondata 
feature/CARBONDATA-982

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1062.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1062


commit ab85bce11e537614a633c7915c17a44093920442
Author: Geetika gupta 
Date:   2017-06-16T07:37:52Z

Resolved NotIn clause for presto integration




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1062: [CARBONDATA-982] Fixed Bug For NotIn Clause In Prest...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1062
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1049
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/485/





[GitHub] carbondata pull request #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1058#discussion_r122675461
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/store/writer/v3/CarbonFactDataWriterImplV3.java
 ---
@@ -447,9 +447,7 @@ private void writeDataToFile(FileChannel channel, 
byte[][] dataChunkBytes) {
 for (int j = 0; j < nodeHolderList.size(); j++) {
   nodeHolder = nodeHolderList.get(j);
   bufferSize = nodeHolder.getDataArray()[i].length;
-  buffer = ByteBuffer.allocate(bufferSize);
-  buffer.put(nodeHolder.getDataArray()[i]);
-  buffer.flip();
+  buffer = ByteBuffer.wrap(nodeHolder.getDataArray()[i]);
--- End diff --

Thanks, I will check
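[Editor's note] The diff above replaces an allocate/put/flip sequence with ByteBuffer.wrap, which yields an equivalent read-ready buffer without copying the array. A minimal sketch of the equivalence using made-up data (not CarbonData types); the caveat is that wrap shares the caller's backing array:

```java
import java.nio.ByteBuffer;

public class WrapVsAllocate {

    // Old approach: allocate a fresh buffer, copy the array in, then flip
    // so the buffer is ready for reading (position 0, limit = length).
    static ByteBuffer viaAllocate(byte[] data) {
        ByteBuffer buffer = ByteBuffer.allocate(data.length);
        buffer.put(data);
        buffer.flip();
        return buffer;
    }

    // New approach: wrap the array directly -- no copy, already read-ready.
    static ByteBuffer viaWrap(byte[] data) {
        return ByteBuffer.wrap(data);
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};
        ByteBuffer copied = viaAllocate(data);
        ByteBuffer wrapped = viaWrap(data);

        // Equal remaining content, position, and limit: interchangeable
        // for a subsequent channel write.
        System.out.println(copied.equals(wrapped)); // prints "true"

        // Caveat: the wrapped buffer aliases the caller's array, so a later
        // mutation of `data` is visible through `wrapped` but not `copied`.
        data[0] = 9;
        System.out.println(wrapped.get(0) == 9 && copied.get(0) == 1); // prints "true"
    }
}
```

The trade-off is that wrap avoids a copy but aliases the source array; that is safe only if the array is not mutated before the buffer is consumed.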




[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1061
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/484/

Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
- org.apache.carbondata.spark.testsuite.allqueries.InsertIntoCarbonTableTestCase.insert into carbon table from carbon table union query





[jira] [Commented] (CARBONDATA-1180) loading data failed for dictionary file id is locked for updation

2017-06-19 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053724#comment-16053724
 ] 

Liu Shaohui commented on CARBONDATA-1180:
-

[~chenerlu]
After checking the executor log, I found that the root cause is a missing write permission on the table data path.
Thanks for your attention~
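[Editor's note] The misleading "Dictionary file is locked for updation" error here masked a missing write permission on the table data path. A hedged sketch of a fail-fast pre-check one could run before a load -- shown for a local path via java.nio.file (hypothetical helper, not CarbonData code; on HDFS the analogous call would be FileSystem.access):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WritePermissionCheck {

    // Hypothetical pre-flight check: fail with a clear message when the
    // load target is not writable, instead of surfacing a confusing
    // lock-timeout error later in the load.
    static void requireWritable(Path tableDataPath) {
        if (!Files.isWritable(tableDataPath)) {
            throw new IllegalStateException(
                "No write permission for table data path: " + tableDataPath);
        }
    }

    public static void main(String[] args) {
        // The temp directory stands in for the table data path here.
        Path path = Paths.get(System.getProperty("java.io.tmpdir"));
        requireWritable(path);
        System.out.println("writable: " + path);
    }
}
```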

>  loading data failed for dictionary file id is locked for updation 
> ---
>
> Key: CARBONDATA-1180
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1180
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
>Reporter: Liu Shaohui
>
> use Spark 2.1 in yarn-client mode and query from beeline to spark sql 
> thriftserver
> {code}
> CREATE TABLE IF NOT EXISTS carbondata_test(id string, name string, city 
> string, age Int) STORED BY 'carbondata';
> LOAD DATA INPATH 'hdfs:///user/sample-data/sample.csv' INTO TABLE 
> carbondata_test;
> {code}
> Data load is failed for following exception.
> {code}
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time +details
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time
>   at scala.sys.package$.error(package.scala:27)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:407)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:345)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>   at org.apache.spark.scheduler.Task.run(Task.scala:99)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The 1.2.0 contains the fix in CARBONDATA-614.
> Any suggestion about this problem? Thanks~





[jira] [Closed] (CARBONDATA-1180) loading data failed for dictionary file id is locked for updation

2017-06-19 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui closed CARBONDATA-1180.
---
Resolution: Not A Problem

>  loading data failed for dictionary file id is locked for updation 
> ---
>
> Key: CARBONDATA-1180
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1180
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
>Reporter: Liu Shaohui
>
> use Spark 2.1 in yarn-client mode and query from beeline to spark sql 
> thriftserver
> {code}
> CREATE TABLE IF NOT EXISTS carbondata_test(id string, name string, city 
> string, age Int) STORED BY 'carbondata';
> LOAD DATA INPATH 'hdfs:///user/sample-data/sample.csv' INTO TABLE 
> carbondata_test;
> {code}
> Data load is failed for following exception.
> {code}
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time +details
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time
>   at scala.sys.package$.error(package.scala:27)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:407)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:345)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>   at org.apache.spark.scheduler.Task.run(Task.scala:99)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The 1.2.0 contains the fix in CARBONDATA-614.
> Any suggestion about this problem? Thanks~





[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1053
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/483/

Failed Tests: 3
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 3
- org.apache.carbondata.integration.spark.testsuite.complexType.TestComplexTypeQuery.Test ^ * special character data loading for complex types
- org.apache.carbondata.integration.spark.testsuite.complexType.TestComplexTypeQuery.select * from complexcarbontable
- org.apache.carbondata.spark.testsuite.dataload.TestLoadDataWithHiveSyntaxDefaultFormat.test data load with double datatype





[GitHub] carbondata issue #1053: [CARBONDATA-1188] fixed codec for UpscaleFloatingCod...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1053
  
Build failed with Spark 2.1.0. Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2586/





[GitHub] carbondata issue #1061: [CARBONDATA-1193] ViewFS Support - improvement

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1061
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1061: [CARBONDATA-1193] ViewFS Support - improvemen...

2017-06-19 Thread dhatchayani
GitHub user dhatchayani opened a pull request:

https://github.com/apache/carbondata/pull/1061

[CARBONDATA-1193] ViewFS Support - improvement



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/dhatchayani/incubator-carbondata viewfs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1061.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1061








[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1049
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/482/





[jira] [Created] (CARBONDATA-1193) ViewFS Support

2017-06-19 Thread dhatchayani (JIRA)
dhatchayani created CARBONDATA-1193:
---

 Summary: ViewFS Support
 Key: CARBONDATA-1193
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1193
 Project: CarbonData
  Issue Type: Improvement
Reporter: dhatchayani
Assignee: dhatchayani
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1049
  
Build failed with Spark 2.1.0. Please check CI:
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2585/





[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122658883
  
--- Diff: 
integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/partition/TestShowPartitions.scala
 ---
@@ -0,0 +1,127 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.spark.testsuite.partition
+
+import java.sql.Timestamp
+
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.common.util.QueryTest
+import org.scalatest.BeforeAndAfterAll
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+
+
+class TestShowPartition  extends QueryTest with BeforeAndAfterAll {
+  override def beforeAll = {
+dropTable
+
+CarbonProperties.getInstance()
+  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, 
"dd-MM-")
+
+  }
+
+  test("show partition table: hash table") {
+sql(
+  """
+| CREATE TABLE hashTable (empname String, designation String, doj 
Timestamp,
+|  workgroupcategory int, workgroupcategoryname String, deptno 
int, deptname String,
+|  projectcode int, projectjoindate Timestamp, projectenddate 
Timestamp,attendance int,
+|  utilization int,salary int)
+| PARTITIONED BY (empno int)
+| STORED BY 'org.apache.carbondata.format'
+| TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='3')
+  """.stripMargin)
+sql(s"""LOAD DATA local inpath '$resourcesPath/data.csv' INTO TABLE 
hashTable OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '"')""")
+
+// EqualTo
+checkAnswer(sql("show partitions hashTable"), Seq(Row("HASH 
PARTITION", "", "3")))
+
+sql("drop table hashTable")
+  }
+
+  test("show partition table: range partition") {
+sql(
+  """
+| CREATE TABLE rangeTable (empno int, empname String, designation 
String,
+|  workgroupcategory int, workgroupcategoryname String, deptno 
int, deptname String,
+|  projectcode int, projectjoindate Timestamp, projectenddate 
Timestamp,attendance int,
+|  utilization int,salary int)
+| PARTITIONED BY (doj Timestamp)
+| STORED BY 'org.apache.carbondata.format'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+|  'RANGE_INFO'='01-01-2010, 01-01-2015')
+  """.stripMargin)
+sql(s"""LOAD DATA local inpath '$resourcesPath/data.csv' INTO TABLE 
rangeTable OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '"')""")
+
+// EqualTo
+checkAnswer(sql("show partitions rangeTable"), Seq(Row("0", "", 
"default"), Row("1", "", "< 01-01-2010"), Row("2", "", "< 01-01-2015")))
+sql("drop table rangeTable")
+  }
+
+  test("show partition table: list partition") {
+sql(
+  """
+| CREATE TABLE listTable (empno int, empname String, designation 
String, doj Timestamp,
+|  workgroupcategoryname String, deptno int, deptname String,
+|  projectcode int, projectjoindate Timestamp, projectenddate 
Timestamp,attendance int,
+|  utilization int,salary int)
+| PARTITIONED BY (workgroupcategory int)
+| STORED BY 'org.apache.carbondata.format'
+| TBLPROPERTIES('PARTITION_TYPE'='LIST',
+|  'LIST_INFO'='0, 1, (2, 3)')
+  """.stripMargin)
+sql(s"""LOAD DATA local inpath '$resourcesPath/data.csv' INTO TABLE 
listTable OPTIONS('DELIMITER'= ',', 'QUOTECHAR'= '"')""")
+
+// EqualTo
+checkAnswer(sql("show partitions listTable"), Seq(Row("0", "", "0"), 
Row("1", "", "1"), Row("2", "", "2, 3")))
+
+  sql("drop table listTable")
+  }
+  test("show partition table: not default db") {
+sql(s"CREATE DATABASE if not exists partitionDB")
+
+sql(
+  """
+| CREATE TABLE partitionDB.listTable (empno int, empname 

[GitHub] carbondata issue #1052: [CARBONDATA-1182] Resolved packaging issue in presto...

2017-06-19 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1052
  
@geetikagupta16 yes, now I can reproduce it, thanks.
Can you compile it based on the parent pom.xml and see if you can reproduce it there as well?





[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122658340
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.spark.sql.SparkSession
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val storeLocation = s"$rootPath/examples/spark2/target/store"
+val warehouse = s"$rootPath/examples/spark2/target/warehouse"
+val metastoredb = s"$rootPath/examples/spark2/target"
+val testData = 
s"$rootPath/examples/spark2/src/main/resources/bitmaptest2.csv"
+import org.apache.spark.sql.CarbonSession._
+val spark = SparkSession
+  .builder()
+  .master("local")
+  .appName("CarbonDataLoad")
+  .config("spark.sql.warehouse.dir", warehouse)
+  .getOrCreateCarbonSession(storeLocation, metastoredb)
+
+// range partition
+spark.sql("DROP TABLE IF EXISTS t1")
+// hash partition
+spark.sql("DROP TABLE IF EXISTS t3")
+// list partition
+spark.sql("DROP TABLE IF EXISTS t5")
+
+spark.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Long,
+| country String,
+| area String
+| )
+| PARTITIONED BY (logdate Timestamp)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+| 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
+  """.stripMargin)
+
+spark.sql("""
+| CREATE TABLE IF NOT EXISTS t3
+| (
+| logdate Timestamp,
+| phonenumber Long,
+| country String,
+| area String
+| )
+| PARTITIONED BY (vin String)
+| STORED BY 'carbondata'
+| 
TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='5')
+""".stripMargin)
+
+spark.sql("""
+   | CREATE TABLE IF NOT EXISTS t5
+   | (
+   | vin String,
+   | logdate Timestamp,
+   | phonenumber Long,
+   | area String
+   |)
+   | PARTITIONED BY (country string)
+   | STORED BY 'carbondata'
+   | TBLPROPERTIES('PARTITION_TYPE'='LIST',
+   | 'LIST_INFO'='(China,United States),UK ,japan,(Canada,Russia), 
South Korea ')
+   """.stripMargin)
+
+spark.sparkContext.setLogLevel("WARN")
+spark.sql(s"""
+  SHOW PARTITIONS t1
+ """).show()
+spark.sql(s"""
+  SHOW PARTITIONS t3
+ """).show()
+spark.sql(s"""
+  SHOW PARTITIONS t5
+ """).show()
+
+// range partition
+spark.sql("DROP TABLE IF EXISTS t1")
+// hash partition
+spark.sql("DROP TABLE IF EXISTS t3")
+// list partition
+spark.sql("DROP TABLE IF EXISTS t5")
+
--- End diff --

Suggest closing the Spark session at the end.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122658297
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala
 ---
@@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import java.io.File
+
+import org.apache.spark.sql.SparkSession
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val rootPath = new File(this.getClass.getResource("/").getPath
+  + "../../../..").getCanonicalPath
+val storeLocation = s"$rootPath/examples/spark2/target/store"
+val warehouse = s"$rootPath/examples/spark2/target/warehouse"
+val metastoredb = s"$rootPath/examples/spark2/target"
+val testData = 
s"$rootPath/examples/spark2/src/main/resources/bitmaptest2.csv"
+import org.apache.spark.sql.CarbonSession._
+val spark = SparkSession
+  .builder()
+  .master("local")
+  .appName("CarbonDataLoad")
+  .config("spark.sql.warehouse.dir", warehouse)
+  .getOrCreateCarbonSession(storeLocation, metastoredb)
+
+// range partition
+spark.sql("DROP TABLE IF EXISTS t1")
+// hash partition
+spark.sql("DROP TABLE IF EXISTS t3")
+// list partition
+spark.sql("DROP TABLE IF EXISTS t5")
+
+spark.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Long,
+| country String,
+| area String
+| )
+| PARTITIONED BY (logdate Timestamp)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+| 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
+  """.stripMargin)
+
+spark.sql("""
+| CREATE TABLE IF NOT EXISTS t3
+| (
+| logdate Timestamp,
+| phonenumber Long,
+| country String,
+| area String
+| )
+| PARTITIONED BY (vin String)
+| STORED BY 'carbondata'
+| 
TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='5')
+""".stripMargin)
+
+spark.sql("""
+   | CREATE TABLE IF NOT EXISTS t5
+   | (
+   | vin String,
+   | logdate Timestamp,
+   | phonenumber Long,
+   | area String
+   |)
+   | PARTITIONED BY (country string)
+   | STORED BY 'carbondata'
+   | TBLPROPERTIES('PARTITION_TYPE'='LIST',
+   | 'LIST_INFO'='(China,United States),UK ,japan,(Canada,Russia), 
South Korea ')
+   """.stripMargin)
+
+spark.sparkContext.setLogLevel("WARN")
+spark.sql(s"""
+  SHOW PARTITIONS t1
+ """).show()
+spark.sql(s"""
--- End diff --

Same problem here: this can be one line.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122657713
  
--- Diff: 
examples/spark2/src/main/scala/org/apache/carbondata/examples/CarbonPartitionExample.scala
 ---
@@ -107,7 +105,6 @@ object CarbonPartitionExample {
| vin String,
| logdate Timestamp,
| phonenumber Long,
-   | country String,
--- End diff --

already have PR-1049 to modify this class, you can help to review.




[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122656172
  
--- Diff: 
examples/spark/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala
 ---
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val cc = ExampleUtils.createCarbonContext("CarbonShowPartitionInfo")
+val testData = ExampleUtils.currentPath + 
"/src/main/resources/data.csv"
+
+// range partition
+cc.sql("DROP TABLE IF EXISTS t1")
+
+cc.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Int,
+| country String,
+| area String
+| )
+| PARTITIONED BY (logdate Timestamp)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+| 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
+  """.stripMargin)
+cc.sql(s"""
+  SHOW PARTITIONS t1
+ """).show()
+
+cc.sql("""
+| CREATE TABLE IF NOT EXISTS t3
+| (
+| logdate Timestamp,
+| phonenumber Int,
+| country String,
+| area String
+| )
+| PARTITIONED BY (vin String)
+| STORED BY 'carbondata'
+| 
TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='5')
+""".stripMargin)
+cc.sql(s"""
+  SHOW PARTITIONS t3
+ """).show()
+// list partition
+cc.sql("DROP TABLE IF EXISTS t5")
+
+cc.sql("""
+   | CREATE TABLE IF NOT EXISTS t5
+   | (
+   | vin String,
+   | logdate Timestamp,
+   | phonenumber Int,
+   | area String
+   |)
+   | PARTITIONED BY (country string)
+   | STORED BY 'carbondata'
+   | TBLPROPERTIES('PARTITION_TYPE'='LIST',
+   | 'LIST_INFO'='(China,United States),UK ,japan,(Canada,Russia), 
South Korea ')
+   """.stripMargin)
+cc.sql(s"""
+  SHOW PARTITIONS t5
+ """).show()
+
+cc.sql(s"DROP TABLE IF EXISTS partitionDB.$tableName")
+cc.sql(s"DROP DATABASE IF EXISTS partitionDB")
+cc.sql(s"CREATE DATABASE partitionDB")
+cc.sql(s"""
+| CREATE TABLE IF NOT EXISTS partitionDB.$tableName
+| (
+| logdate Timestamp,
+| phonenumber Int,
+| country String,
+| area String
+| )
+| PARTITIONED BY (vin String)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='5')
+""".stripMargin)
+cc.sql(s"""
+  SHOW PARTITIONS partitionDB.$tableName
+ """).show()
+
+cc.sql(s"""
+  SHOW PARTITIONS $tableName
+ """).show()
+
--- End diff --

Suggest doing some cleanup work, such as dropping the created tables and the database.
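For illustration, the suggested cleanup might look like this sketch (not part of the patch), reusing the same `cc` context and the table/database names created earlier in the example:

```scala
// Sketch only: drop everything the example created.
// Assumes `cc` is the CarbonContext from ExampleUtils.createCarbonContext
// and `tableName` is the parameter passed to extracted().
cc.sql("DROP TABLE IF EXISTS t1")
cc.sql("DROP TABLE IF EXISTS t3")
cc.sql("DROP TABLE IF EXISTS t5")
cc.sql(s"DROP TABLE IF EXISTS partitionDB.$tableName")
cc.sql("DROP DATABASE IF EXISTS partitionDB")
```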


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

[GitHub] carbondata issue #1060: [CARBONDATA-1191] Remove carbon-spark-shell script

2017-06-19 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1060
  
LGTM


---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122652358
  
--- Diff: examples/spark/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala ---
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val cc = ExampleUtils.createCarbonContext("CarbonShowPartitionInfo")
+val testData = ExampleUtils.currentPath + "/src/main/resources/data.csv"
+
+// range partition
+cc.sql("DROP TABLE IF EXISTS t1")
+
+cc.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Int,
+| country String,
+| area String
+| )
+| PARTITIONED BY (logdate Timestamp)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+| 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
+  """.stripMargin)
+cc.sql(s"""
+  SHOW PARTITIONS t1
+ """).show()
--- End diff --

Suggest putting the command “SHOW PARTITIONS t1” on one line.
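For illustration, the one-line form the comment suggests would read (sketch, same `cc` context as the example):

```scala
cc.sql("SHOW PARTITIONS t1").show()
```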


---


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1049
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/481/
Failed Tests: 2
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-core: 2
org.apache.carbondata.core.dictionary.client.DictionaryClientTest.testClient
org.apache.carbondata.core.dictionary.client.DictionaryClientTest.testToCheckIfCorrectTimeOutExceptionMessageIsThrown



---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122650800
  
--- Diff: examples/spark/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala ---
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val cc = ExampleUtils.createCarbonContext("CarbonShowPartitionInfo")
+val testData = ExampleUtils.currentPath + "/src/main/resources/data.csv"
+
+// range partition
+cc.sql("DROP TABLE IF EXISTS t1")
+
+cc.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Int,
+| country String,
+| area String
+| )
--- End diff --

I think keywords and data types in SQL commands should be upper case, 
for example "String" -> "STRING".
Anyway, it's not a big problem.
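For illustration, the same DDL with upper-cased keywords and data types would look like this sketch (values unchanged from the example):

```scala
// Sketch of the reviewer's suggestion: upper-case SQL keywords and data types.
cc.sql("""
        | CREATE TABLE IF NOT EXISTS t1
        | (
        | vin STRING,
        | phonenumber INT,
        | country STRING,
        | area STRING
        | )
        | PARTITIONED BY (logdate TIMESTAMP)
        | STORED BY 'carbondata'
        | TBLPROPERTIES('PARTITION_TYPE'='RANGE',
        | 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
      """.stripMargin)
```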


---


[GitHub] carbondata pull request #1042: [CARBONDATA-1181] Show partitions

2017-06-19 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1042#discussion_r122650261
  
--- Diff: examples/spark/src/main/scala/org/apache/carbondata/examples/CarbonShowPartitionInfo.scala ---
@@ -0,0 +1,114 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object CarbonShowPartitionInfo {
+  def main(args: Array[String]) {
+
+CarbonShowPartitionInfo.extracted("t3", args)
+  }
+  def extracted(tableName: String, args: Array[String]): Unit = {
+val cc = ExampleUtils.createCarbonContext("CarbonShowPartitionInfo")
+val testData = ExampleUtils.currentPath + "/src/main/resources/data.csv"
+
+// range partition
+cc.sql("DROP TABLE IF EXISTS t1")
+
+cc.sql("""
+| CREATE TABLE IF NOT EXISTS t1
+| (
+| vin String,
+| phonenumber Int,
+| country String,
+| area String
+| )
+| PARTITIONED BY (logdate Timestamp)
+| STORED BY 'carbondata'
+| TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+| 'RANGE_INFO'='20140101, 2015/01/01 ,2016-01-01')
--- End diff --

We will add a partition check mechanism later, so change 'RANGE_INFO' to 
'2014/01/01,2015/01/01,2016/01/01'; otherwise it will fail once the check 
mechanism is merged.
Besides, it would be better to specify a timestamp format suitable for 
values like '2014/01/01' before running the SQL command.
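For illustration, setting the timestamp format before the CREATE might look like this sketch; it assumes `CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT` is the relevant property key (the example file already imports `CarbonProperties` and `CarbonCommonConstants`):

```scala
// Sketch only: configure a timestamp format matching the RANGE_INFO values
// before issuing the CREATE TABLE command.
CarbonProperties.getInstance()
  .addProperty(CarbonCommonConstants.CARBON_TIMESTAMP_FORMAT, "yyyy/MM/dd")
```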


---


[jira] [Commented] (CARBONDATA-1180) loading data failed for dictionary file id is locked for updation

2017-06-19 Thread chenerlu (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053607#comment-16053607
 ] 

chenerlu commented on CARBONDATA-1180:
--

Does this always happen? Could you please remove the carbondata_test related 
metafiles and retry?

>  loading data failed for dictionary file id is locked for updation 
> ---
>
> Key: CARBONDATA-1180
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1180
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.2.0
>Reporter: Liu Shaohui
>
> use Spark 2.1 in yarn-client mode and query from beeline to spark sql 
> thriftserver
> {code}
> CREATE TABLE IF NOT EXISTS carbondata_test(id string, name string, city 
> string, age Int) STORED BY 'carbondata';
> LOAD DATA INPATH 'hdfs:///user/sample-data/sample.csv' INTO TABLE 
> carbondata_test;
> {code}
> Data load is failed for following exception.
> {code}
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time +details
> java.lang.RuntimeException: Dictionary file id is locked for updation. Please 
> try after some time
>   at scala.sys.package$.error(package.scala:27)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:407)
>   at 
> org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:345)
>   at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>   at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>   at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>   at org.apache.spark.scheduler.Task.run(Task.scala:99)
>   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> The 1.2.0 contains the fix in CARBONDATA-614.
> Any suggestion about this problem? Thanks~



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1049
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/480/



---


[GitHub] carbondata issue #1060: [CARBONDATA-1191] Remove carbon-spark-shell script

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1060
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/479/



---


[GitHub] carbondata issue #1060: [CARBONDATA-1191] Remove carbon-spark-shell script

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1060
  
Can one of the admins verify this patch?


---


[GitHub] carbondata pull request #1060: [CARBONDATA-1191] Remove carbon-spark-shell s...

2017-06-19 Thread chenerlu
GitHub user chenerlu opened a pull request:

https://github.com/apache/carbondata/pull/1060

[CARBONDATA-1191] Remove carbon-spark-shell script

As discussed in the mailing list, we reached an agreement to remove this 
unused feature, so this PR removes the related classes.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenerlu/incubator-carbondata RemoveScript

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1060.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1060


commit dd018331c2aa79ec9c8111cf3f885dbb55e92dec
Author: chenerlu 
Date:   2017-06-19T07:53:40Z

delete carbon-spark-shell script




---


[jira] [Assigned] (CARBONDATA-1187) Fix Documentation links pointing to wrong urls in useful-tips-on-carbondata and faq

2017-06-19 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen reassigned CARBONDATA-1187:
--

Assignee: Jatin  (was: Pallavi Singh)

> Fix Documentation links pointing to wrong urls in useful-tips-on-carbondata 
> and faq 
> 
>
> Key: CARBONDATA-1187
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1187
> Project: CarbonData
>  Issue Type: Bug
>  Components: docs
>Reporter: Jatin
>Assignee: Jatin
>Priority: Minor
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--


[GitHub] carbondata pull request #1058: [CARBONDATA-1190] Wrap bytes in V3 Writer

2017-06-19 Thread mohammadshahidkhan
Github user mohammadshahidkhan commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1058#discussion_r122639818
  
--- Diff: processing/src/main/java/org/apache/carbondata/processing/store/writer/v3/CarbonFactDataWriterImplV3.java ---
@@ -447,9 +447,7 @@ private void writeDataToFile(FileChannel channel, byte[][] dataChunkBytes) {
 for (int j = 0; j < nodeHolderList.size(); j++) {
   nodeHolder = nodeHolderList.get(j);
   bufferSize = nodeHolder.getDataArray()[i].length;
-  buffer = ByteBuffer.allocate(bufferSize);
-  buffer.put(nodeHolder.getDataArray()[i]);
-  buffer.flip();
+  buffer = ByteBuffer.wrap(nodeHolder.getDataArray()[i]);
--- End diff --

@jackylk If I am not wrong, ByteBuffer.wrap could be used for the below 
lines of code as well:
375-377
406-408
441-443
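For illustration, a plain-JVM sketch (not part of the patch) of why the substitution is behavior-preserving: `ByteBuffer.wrap` yields a buffer with the same readable contents as the allocate/put/flip sequence, without copying the array:

```scala
import java.nio.ByteBuffer

object WrapVsAllocate {
  def main(args: Array[String]): Unit = {
    val data = Array[Byte](1, 2, 3)

    // Old pattern: allocate a fresh buffer, copy the bytes in, flip for reading.
    val copied = ByteBuffer.allocate(data.length)
    copied.put(data)
    copied.flip()

    // New pattern: wrap the existing array (position 0, limit = length, no copy).
    val wrapped = ByteBuffer.wrap(data)

    assert(copied == wrapped)       // equal remaining contents
    assert(wrapped.array() eq data) // wrap shares the backing array
  }
}
```

The three call sites the comment lists follow the same allocate/put/flip shape, so the same one-line replacement would presumably apply there too.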


---


[GitHub] carbondata issue #1057: [CARBONDATA-1187]Fixed linking and content issues

2017-06-19 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1057
  
LGTM


---


[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1059
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2581/



---


[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1059
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/477/



---


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1049
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2580/



---


[jira] [Created] (CARBONDATA-1192) Unable to Select Data From more than one table in hive

2017-06-19 Thread anubhav tarar (JIRA)
anubhav tarar created CARBONDATA-1192:
-

 Summary: Unable to Select Data From more than one table in hive
 Key: CARBONDATA-1192
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1192
 Project: CarbonData
  Issue Type: Bug
  Components: hive-integration
Affects Versions: 1.2.0
 Environment: hive 1.2,spark 2.1
Reporter: anubhav tarar
Assignee: anubhav tarar
 Fix For: 1.2.0


inside spark shell

carbon.sql("DROP TABLE IF EXISTS CUSTOMER ")

carbon.sql("CREATE TABLE CUSTOMER ( C_CUSTKEY INT ,\n C_NAME STRING ,\n 
C_ADDRESS STRING ,\n " +
   "C_NATIONKEY INT ,\n C_PHONE STRING ,\n C_ACCTBAL DECIMAL(15,2) ,\n 
C_MKTSEGMENT " +
   "STRING ,\n C_COMMENT STRING ) STORED BY 'carbondata' ")

carbon.sql("LOAD DATA INPATH \"hdfs://localhost:54310/user1/customer.csv\" 
INTO TABLE customer " +
  "OPTIONS('DELIMITER'='|' , 'QUOTECHAR'='\"' , 
'FILEHEADER'='C_CUSTKEY,C_NAME," +
  
"C_ADDRESS,C_NATIONKEY,C_PHONE,C_ACCTBAL,C_MKTSEGMENT,C_COMMENT')")

 carbon.sql("DROP TABLE IF EXISTS ORDERS ")

carbon.sql("CREATE TABLE ORDERS ( O_ORDERKEY INT ,O_CUSTKEY INT ,O_ORDERSTATUS 
STRING ,O_TOTALPRICE DECIMAL(15,2) , O_ORDERDATE TIMESTAMP , O_ORDERPRIORITY 
STRING , O_CLERK STRING , O_SHIPPRIORITY INT , O_COMMENT STRING ) STORED BY 
'carbondata' ")

carbon.sql("LOAD DATA INPATH 'hdfs://localhost:54310/user1/orders.csv' INTO 
TABLE orders " +
  "OPTIONS('DELIMITER'='|' , 
'QUOTECHAR'='\"','FILEHEADER'='O_ORDERKEY,O_CUSTKEY," +
  
"O_ORDERSTATUS,O_TOTALPRICE,O_ORDERDATE,O_ORDERPRIORITY,O_CLERK,O_SHIPPRIORITY,"
 +
  "O_COMMENT')")

read data from hive shell

hive> select o_custkey,c_custkey from orders,customer limit 2;
Warning: Shuffle Join JOIN[4][tables = [orders, customer]] in Stage 
'Stage-1:MAPRED' is a cross product
Query ID = hduser_20170619125257_d889efa9-261f-436e-9489-fd15d6b76beb
Total jobs = 1
Stage-1 is selected by condition resolver.
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Job running in-process (local Hadoop)
2017-06-19 12:53:01,987 Stage-1 map = 0%,  reduce = 0%
2017-06-19 12:53:49,113 Stage-1 map = 38%,  reduce = 0%
2017-06-19 12:53:51,127 Stage-1 map = 100%,  reduce = 0%
Ended Job = job_local1708233203_0001 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:8080/
FAILED: Execution Error, return code 2 from 
org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-1:  HDFS Read: 12033731 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

 





--


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1049
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/476/



---


[GitHub] carbondata issue #1059: [CARBONDATA-1124] Use raw compression while encoding

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1059
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2578/



---


[GitHub] carbondata issue #1052: [CARBONDATA-1182] Resolved packaging issue in presto...

2017-06-19 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1052
  
@geetikagupta16 Please verify whether this issue is still valid; I can't 
reproduce it on my local machine with "mvn package" for 
integration/presto/pom.xml.


---


[jira] [Commented] (CARBONDATA-1182) Error for packaging carbondata-preto

2017-06-19 Thread Liang Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16053539#comment-16053539
 ] 

Liang Chen commented on CARBONDATA-1182:


[~lonly] Can you try it again based on the master code? I can't reproduce it 
on my machine.
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-presto ---
[INFO] Building jar: 
/Users/apple/Carbon-dev/mergepr/integration/presto/target/carbondata-presto-1.2.0-SNAPSHOT.jar
[INFO]
[INFO] --- provisio-maven-plugin:0.1.40:provision (default-provision) @ 
carbondata-presto ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-presto ---
[INFO] 
[INFO] BUILD SUCCESS
[INFO] 
[INFO] Total time: 42.884 s
[INFO] Finished at: 2017-06-19T15:03:22+08:00
[INFO] Final Memory: 39M/651M
[INFO] 
ChenLiangs-MAC:presto apple$ mvn package


> Error for packaging carbondata-preto
> 
>
> Key: CARBONDATA-1182
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1182
> Project: CarbonData
>  Issue Type: Bug
>  Components: presto-integration
>Affects Versions: 1.2.0
>Reporter: lonly
>Assignee: Pallavi Singh
>  Labels: build, maven
> Attachments: 微信图片_20170616151720.png, 微信图片_20170616151731.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> When I ran mvn package on carbondata-presto (master), I encountered this 
> problem about “incompatible types: 
> org.apache.carbondata.core.metadata.ColumnarFormatVersion cannot be converted 
> to org.apache.carbondata.core.datastore.block.BlockletInfos”
> 



--


[GitHub] carbondata issue #1056: [CARBONDATA-1185] Job Aborted due to Stage Failure (...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1056
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/473/
Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
org.apache.carbondata.spark.testsuite.allqueries.InsertIntoCarbonTableTestCase.insert into carbon table from carbon table union query



---


[GitHub] carbondata issue #1056: [CARBONDATA-1185] Job Aborted due to Stage Failure (...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1056
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2577/



---


[GitHub] carbondata pull request #1059: [CARBONDATA-1124] Use raw compression while e...

2017-06-19 Thread jackylk
GitHub user jackylk opened a pull request:

https://github.com/apache/carbondata/pull/1059

[CARBONDATA-1124] Use raw compression while encoding measures

Use zero-copy raw compression from Snappy to encode measures 
(UnsafeColumnPage)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jackylk/incubator-carbondata rawcomp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1059.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1059


commit ca8fa5148f0629f7b376eef3873d32b9f6c19806
Author: jackylk 
Date:   2017-06-19T06:52:03Z

use raw compression




---


[GitHub] carbondata pull request #1040: [CARBONDATA-1171] Added support for show part...

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1040#discussion_r122628873
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/execution/command/carbonTableSchema.scala ---
@@ -921,3 +923,28 @@ private[sql] case class CleanFiles(
 Seq.empty
   }
 }
+
+private[sql] case class ShowPartitions(tableIdentifier: TableIdentifier)
+  extends RunnableCommand {
+
+  override val output: Seq[Attribute] = {
+AttributeReference("partition", StringType, nullable = false)() :: Nil
+  }
+
+  def run(sqlContext: SQLContext): Seq[Row] = {
--- End diff --

It seems this command is implemented twice, in spark and spark2; can you 
move it to the spark-common module?


---


[GitHub] carbondata issue #1049: [CARBONDATA-1183] Update CarbonPartitionExample beca...

2017-06-19 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1049
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2576/



---


[GitHub] carbondata pull request #1040: [CARBONDATA-1171] Added support for show part...

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1040#discussion_r122628649
  
--- Diff: integration/spark/src/main/scala/org/apache/spark/sql/CarbonCatalystOperators.scala ---
@@ -75,6 +75,17 @@ case class DescribeFormattedCommand(sql: String, tblIdentifier: TableIdentifier)
     Seq(AttributeReference("result", StringType, nullable = false)())
 }
 
+case class ShowPartitionsCommand(tableIdentifier: TableIdentifier)
+  extends LogicalPlan with Command {
+
+  override def output: Seq[AttributeReference] =
+    Seq(AttributeReference("partitions", StringType, nullable = false)())
+
+  override def children: Seq[LogicalPlan] = Seq.empty
+}
+
+
--- End diff --

remove unnecessary empty lines




[GitHub] carbondata pull request #1040: [CARBONDATA-1171] Added support for show part...

2017-06-19 Thread jackylk
Github user jackylk commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1040#discussion_r122628497
  
--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala ---
@@ -435,4 +437,18 @@ object CommonUtil {
     }
   }
 
+  def getPartitionsBasedOnType(partitionInfo: PartitionInfo): Seq[String] = {
+    val columnSchema = partitionInfo.getColumnSchemaList.get(0)
+    partitionInfo.getPartitionType match {
+      case PartitionType.HASH => Seq(s"${ columnSchema.getColumnName }=HASH_NUMBER(${ partitionInfo
--- End diff --

move to next line after `=>`
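The style the reviewer is asking for puts the long expression on its own line after the `=>`, keeping each case arm within the line-length limit. A self-contained sketch with stand-in types; `PartitionType`, `Hash`, and `describePartition` here are illustrative, not the real CarbonData definitions:

```scala
sealed trait PartitionType
case object Hash extends PartitionType

// Break after `=>` so the interpolated string starts on its own line
// instead of wrapping awkwardly mid-expression.
def describePartition(columnName: String,
    partitionType: PartitionType,
    numPartitions: Int): String =
  partitionType match {
    case Hash =>
      s"$columnName=HASH_NUMBER($numPartitions)"
  }
```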




[GitHub] carbondata issue #1033: spark2/CarbonSQLCLIDriver.scala storePath is not hdf...

2017-06-19 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1033
  

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/carbondata-pr-spark-1.6/471/

Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 1
org.apache.carbondata.spark.testsuite.dataretention.DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date




