Jenkins build is back to normal : carbondata-master-spark-2.1 #2031

2018-02-03 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : carbondata-master-spark-2.1 » Apache CarbonData :: Core #2031

2018-02-03 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : carbondata-master-spark-2.2 » Apache CarbonData :: Core #66

2018-02-03 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : carbondata-master-spark-2.2 #66

2018-02-03 Thread Apache Jenkins Server
See 




Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2030

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 582 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.1 #2030

2018-02-03 Thread Apache Jenkins Server
See 


--
[...truncated 11.87 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 0 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:check (default-check) @ carbondata-common 

Build failed in Jenkins: carbondata-master-spark-2.2 #65

2018-02-03 Thread Apache Jenkins Server
See 


--
[...truncated 11.82 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 3 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:check (default-check) @ carbondata-common 

svn commit: r24676 - in /dev/carbondata/1.3.0-rc2: ./ apache-carbondata-1.3.0-source-release.zip apache-carbondata-1.3.0-source-release.zip.asc apache-carbondata-1.3.0-source-release.zip.md5 apache-ca

2018-02-03 Thread ravipesala
Author: ravipesala
Date: Sat Feb  3 21:41:41 2018
New Revision: 24676

Log:
Upload 1.3.0-rc2

Added:
dev/carbondata/1.3.0-rc2/
dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip   (with 
props)
dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.asc
dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.md5
dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.sha512

Added: dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip
==
Binary file - no diff available.

Propchange: dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip
--
svn:mime-type = application/octet-stream

Added: dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.asc
==
--- dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.asc 
(added)
+++ dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.asc Sat 
Feb  3 21:41:41 2018
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+Version: GnuPG v1
+
+iQIcBAABAgAGBQJadhImAAoJELrXKninsbLuUIoQAJVVrL52sWLJB+1k6waQoubX
+K5gcRVaGo/2M6NPCcOmFw2+w5PljMEMfnOODlDJL6+qqKa9nClGd07QwS0+uY6Ic
+BX4ljCg9jF5ZP+Zahv/aZZ2b+2/NebvTmAuwYg/pMoWfbQH0mSNdNH+An0I2DDr7
+sTsqIt6GaR1J3qM5SVPeoYVvK9iSnVlyX1ytsOQTRrSYO3TySCXshW5m6ltHEZll
+O1IkWIDFEKqMU53ruY6hCiyddemfUjzvYEZ69oJ9RC2s6E/td7fH/a4/N+lEJO/w
+xH3EJwAZOtphgHkbrVNg8W0UsWqkoejc1qIg35xT2SzrYbm2PsG4RawoYAFc6t7j
+bUuvbJUbiuPNVa4cnBjYvGNLpZUIR9HZWb21xO6Rdy3VArXeHJIdneKsnPKqbzFk
+rAsoc5yns76qNmhykeR0lX9+vaFd1UbMb37OFV2EKetDvYmx6sc2usqqLJo81quL
+VBq/DGTuxX1JoX/MVRkOnZzyGXHHbtbls26mZKCKP0fzF7ax3lKo+g5D7qdKtaoa
+f+5TrFlv4GEjvrbBB3SOE6XmlcnZjd4lR5b2Q2IG2+x87GKeyiv8BE3CDotk8xd3
+YnOw5xdZvtN15Fx0FL3VTql3dXA+rTi2oIIfNjEqaWmG/uU+CD1AlweB22hEZ9fI
+msdGqFctu/3P/oJNkqrh
+=xdP0
+-END PGP SIGNATURE-

Added: dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.md5
==
--- dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.md5 
(added)
+++ dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.md5 Sat 
Feb  3 21:41:41 2018
@@ -0,0 +1 @@
+290f4647ec4791ce41e623c26571486c  apache-carbondata-1.3.0-source-release.zip

Added: 
dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.sha512
==
--- dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.sha512 
(added)
+++ dev/carbondata/1.3.0-rc2/apache-carbondata-1.3.0-source-release.zip.sha512 
Sat Feb  3 21:41:41 2018
@@ -0,0 +1 @@
+e6f6235fee96fde575217cab4c626fe300716b4e9dde51a350220892774eb3e749bd3e0ff349fcef6e341c90d1a2cac978632fdbefa92338a350fd4f345b4c85
  apache-carbondata-1.3.0-source-release.zip
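The upload above publishes an .md5 and .sha512 alongside the source zip. A minimal sketch of verifying a downloaded artifact against those values; the local path is an assumption for illustration, and the expected MD5 is the one committed above:

```java
// Sketch: verify a downloaded release artifact against its published digest.
// The file name comes from the svn commit above; running this next to the
// downloaded zip is an assumption, not part of the release process itself.
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ChecksumVerify {
  // Compute the hex digest of a file with the given algorithm (e.g. "MD5",
  // "SHA-512"), streaming it in 8 KB chunks so large zips are not held in memory.
  static String digestHex(String path, String algo) throws Exception {
    MessageDigest md = MessageDigest.getInstance(algo);
    try (InputStream in = Files.newInputStream(Paths.get(path))) {
      byte[] buf = new byte[8192];
      int n;
      while ((n = in.read(buf)) > 0) {
        md.update(buf, 0, n);
      }
    }
    StringBuilder sb = new StringBuilder();
    for (byte b : md.digest()) {
      sb.append(String.format("%02x", b));
    }
    return sb.toString();
  }

  public static void main(String[] args) throws Exception {
    String zip = "apache-carbondata-1.3.0-source-release.zip";
    if (Files.exists(Paths.get(zip))) {
      // Expected value taken from the .md5 file committed above.
      System.out.println("MD5 matches: "
          + "290f4647ec4791ce41e623c26571486c".equals(digestHex(zip, "MD5")));
    }
  }
}
```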




[26/50] [abbrv] carbondata git commit: [CARBONDATA-2108]Updated unsafe sort memory configuration

2018-02-03 Thread ravipesala
[CARBONDATA-2108]Updated unsafe sort memory configuration

Deprecated old property: sort.inmemory.size.inmb
Added new property: carbon.sort.storage.inmemory.size.inmb.
If the user has configured the old property, it is internally converted to the 
new property.
For example: if the user has configured sort.inmemory.size.inmb, then 20% of 
that memory is used as working memory and the rest as storage memory.
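The 20/80 split described above can be sketched as follows; the percentage and the method names are assumptions drawn from this commit description, not CarbonData's actual CarbonProperties API:

```java
// Illustrative sketch of the legacy-property conversion described above.
// Assumption: when only the deprecated sort.inmemory.size.inmb is set,
// 20% of it becomes unsafe working memory and the remaining 80% becomes
// sort storage memory. Names here are hypothetical.
public class SortMemorySplit {
  static final int WORKING_MEMORY_PERCENT = 20;

  /** Returns {workingMemoryMb, storageMemoryMb} for a legacy total size. */
  static long[] splitLegacySortMemory(long legacyTotalMb) {
    long working = legacyTotalMb * WORKING_MEMORY_PERCENT / 100;
    long storage = legacyTotalMb - working;  // everything left over
    return new long[] { working, storage };
  }

  public static void main(String[] args) {
    long[] split = splitLegacySortMemory(1024);
    System.out.println("working=" + split[0] + "MB, storage=" + split[1] + "MB");
  }
}
```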

This closes #1896


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/27ec6515
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/27ec6515
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/27ec6515

Branch: refs/heads/branch-1.3
Commit: 27ec6515a143dc3b697ac914bfcd4cfe10a49e17
Parents: 2610a60
Author: kumarvishal 
Authored: Wed Jan 31 18:43:02 2018 +0530
Committer: ravipesala 
Committed: Fri Feb 2 23:18:21 2018 +0530

--
 .../core/constants/CarbonCommonConstants.java   |  5 +
 .../core/memory/UnsafeMemoryManager.java|  2 +-
 .../core/memory/UnsafeSortMemoryManager.java|  6 +-
 .../carbondata/core/util/CarbonProperties.java  | 99 
 4 files changed, 108 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 87eec8a..8480758 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1585,6 +1585,11 @@ public final class CarbonCommonConstants {
 
   public static final String 
CARBON_ENABLE_PAGE_LEVEL_READER_IN_COMPACTION_DEFAULT = "true";
 
+  @CarbonProperty
+  public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB =
+  "carbon.sort.storage.inmemory.size.inmb";
+  public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT = 
"512";
+
   private CarbonCommonConstants() {
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java 
b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
index 4222e14..d3b9b48 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeMemoryManager.java
@@ -47,7 +47,7 @@ public class UnsafeMemoryManager {
   .getProperty(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB,
   CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT));
 } catch (Exception e) {
-  size = 
Long.parseLong(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+  size = 
Long.parseLong(CarbonCommonConstants.UNSAFE_WORKING_MEMORY_IN_MB_DEFAULT);
   LOGGER.info("Wrong memory size given, "
   + "so setting default value to " + size);
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/27ec6515/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
 
b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
index c63b320..67bb6cc 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/memory/UnsafeSortMemoryManager.java
@@ -75,10 +75,10 @@ public class UnsafeSortMemoryManager {
 long size;
 try {
   size = Long.parseLong(CarbonProperties.getInstance()
-  .getProperty(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB,
-  CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT));
+  
.getProperty(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB,
+  
CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT));
 } catch (Exception e) {
-  size = 
Long.parseLong(CarbonCommonConstants.IN_MEMORY_FOR_SORT_DATA_IN_MB_DEFAULT);
+  size = 
Long.parseLong(CarbonCommonConstants.IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT);
   LOGGER.info("Wrong memory size given, " + "so setting default value to " 
+ size);
 }
 if (size < 1024) {


[45/50] [abbrv] carbondata git commit: [CARBONDATA-2127] Documentation for Hive Standard Partition

2018-02-03 Thread ravipesala
[CARBONDATA-2127] Documentation for Hive Standard Partition

This closes #1926


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a7bcc763
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a7bcc763
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a7bcc763

Branch: refs/heads/branch-1.3
Commit: a7bcc763b5d1dea35f5015dadabb37a051a4f881
Parents: 4a251ba
Author: sgururajshetty 
Authored: Sat Feb 3 21:04:23 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:53:38 2018 +0530

--
 docs/data-management-on-carbondata.md | 104 -
 1 file changed, 103 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a7bcc763/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index d9d4420..3acb711 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -20,12 +20,13 @@
 This tutorial is going to introduce all commands and data operations on 
CarbonData.
 
 * [CREATE TABLE](#create-table)
-* [CREATE DATABASE] (#create-database)
+* [CREATE DATABASE](#create-database)
 * [TABLE MANAGEMENT](#table-management)
 * [LOAD DATA](#load-data)
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
+* [HIVE STANDARD PARTITION](#hive-standard-partition)
 * [PRE-AGGREGATE TABLES](#agg-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
@@ -765,6 +766,107 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let 
other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition 
column.
 
+## HIVE STANDARD PARTITION
+
+  CarbonData supports its own custom-implemented partitioning, but for 
compatibility reasons that scheme does not let you use Hive's partition 
features. With Hive standard partitioning, you can use the partition features 
available in Hive.
+
+### Create Partition Table
+
+  This command allows you to create a table with partitions.
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+[(col_name data_type , ...)]
+[COMMENT table_comment]
+[PARTITIONED BY (col_name data_type , ...)]
+[STORED BY file_format]
+[TBLPROPERTIES (property_name=property_value, ...)]
+[AS select_statement];
+  ```
+  
+  Example:
+  ```
+   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+productNumber Int,
+productName String,
+storeCity String,
+storeProvince String,
+saleQuantity Int,
+revenue Int)
+  PARTITIONED BY (productCategory String, productBatch String)
+  STORED BY 'carbondata'
+  ```
+   
+### Load Data Using Static Partition
+
+  This command allows you to load data into a static partition.
+  
+  ```
+  LOAD DATA [LOCAL] INPATH 'folder_path' 
+INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
+OPTIONS(property_name=property_value, ...)
+  INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) SELECT 
STATEMENT 
+  ```
+  
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+INTO TABLE locationTable
+PARTITION (country = 'US', state = 'CA')
+
+  INSERT INTO TABLE locationTable
+PARTITION (country = 'US', state = 'AL')
+SELECT * FROM another_user au 
+WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Load Data Using Dynamic Partition
+
+  This command allows you to load data using dynamic partitions. If the 
partition spec is not specified, then the partition is considered dynamic.
+
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+INTO TABLE locationTable
+  
+  INSERT INTO TABLE locationTable
+SELECT * FROM another_user au 
+WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Show Partitions
+
+  This command gets the Hive partition information of the table.
+
+  ```
+  SHOW PARTITIONS [db_name.]table_name
+  ```
+
+### Drop Partition
+
+  This command drops the specified Hive partition only.
+  ```
+  ALTER TABLE table_name DROP [IF EXISTS] (PARTITION part_spec, ...)
+  ```
+
+### Insert OVERWRITE
+  
+  This command allows you to insert or load overwrite on a specific partition.
+  
+  ```
+   INSERT OVERWRITE TABLE table_name
+PARTITION (column = 'partition_name')
+select_statement
+  ```
+  
+  Example:

carbondata git commit: [maven-release-plugin] prepare for next development iteration

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/branch-1.3 c055c8f33 -> 607b4cef6


[maven-release-plugin] prepare for next development iteration


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/607b4cef
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/607b4cef
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/607b4cef

Branch: refs/heads/branch-1.3
Commit: 607b4cef646b2b9a3c2a8fc687dc40342165979a
Parents: c055c8f
Author: ravipesala 
Authored: Sun Feb 4 02:01:53 2018 +0530
Committer: ravipesala 
Committed: Sun Feb 4 02:01:53 2018 +0530

--
 assembly/pom.xml  | 2 +-
 common/pom.xml| 2 +-
 core/pom.xml  | 2 +-
 examples/spark2/pom.xml   | 2 +-
 format/pom.xml| 2 +-
 hadoop/pom.xml| 2 +-
 integration/hive/pom.xml  | 2 +-
 integration/presto/pom.xml| 2 +-
 integration/spark-common-test/pom.xml | 2 +-
 integration/spark-common/pom.xml  | 2 +-
 integration/spark2/pom.xml| 2 +-
 pom.xml   | 4 ++--
 processing/pom.xml| 2 +-
 streaming/pom.xml | 2 +-
 14 files changed, 15 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index cb5b821..461bed7 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/common/pom.xml
--
diff --git a/common/pom.xml b/common/pom.xml
index 39047f2..b136141 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/core/pom.xml
--
diff --git a/core/pom.xml b/core/pom.xml
index 53453d9..d5a1c0b 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/examples/spark2/pom.xml
--
diff --git a/examples/spark2/pom.xml b/examples/spark2/pom.xml
index b31725e..3d2260f 100644
--- a/examples/spark2/pom.xml
+++ b/examples/spark2/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/format/pom.xml
--
diff --git a/format/pom.xml b/format/pom.xml
index 08f1e70..ddc8027 100644
--- a/format/pom.xml
+++ b/format/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/hadoop/pom.xml
--
diff --git a/hadoop/pom.xml b/hadoop/pom.xml
index ea809c1..bac5785 100644
--- a/hadoop/pom.xml
+++ b/hadoop/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/integration/hive/pom.xml
--
diff --git a/integration/hive/pom.xml b/integration/hive/pom.xml
index c685b0a..8c38b74 100644
--- a/integration/hive/pom.xml
+++ b/integration/hive/pom.xml
@@ -22,7 +22,7 @@
 
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../../pom.xml
 
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/integration/presto/pom.xml
--
diff --git a/integration/presto/pom.xml b/integration/presto/pom.xml
index b261434..a4c482d 100644
--- a/integration/presto/pom.xml
+++ b/integration/presto/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0
+1.3.1-SNAPSHOT
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/607b4cef/integration/spark-common-test/pom.xml

[carbondata] Git Push Summary

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Tags:  refs/tags/apache-carbondata-1.3.0-rc2 [created] dade91bde


carbondata git commit: [maven-release-plugin] prepare release apache-carbondata-1.3.0-rc2

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/branch-1.3 e16e87818 -> c055c8f33


[maven-release-plugin] prepare release apache-carbondata-1.3.0-rc2


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c055c8f3
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c055c8f3
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/c055c8f3

Branch: refs/heads/branch-1.3
Commit: c055c8f33123bfb6e1103456bea23a0ff8c944ca
Parents: e16e878
Author: ravipesala 
Authored: Sun Feb 4 02:01:00 2018 +0530
Committer: ravipesala 
Committed: Sun Feb 4 02:01:00 2018 +0530

--
 assembly/pom.xml  |  2 +-
 common/pom.xml|  2 +-
 core/pom.xml  |  2 +-
 examples/spark2/pom.xml   |  2 +-
 format/pom.xml|  2 +-
 hadoop/pom.xml|  2 +-
 integration/hive/pom.xml  |  2 +-
 integration/presto/pom.xml|  2 +-
 integration/spark-common-test/pom.xml | 14 +++---
 integration/spark-common/pom.xml  |  2 +-
 integration/spark2/pom.xml|  2 +-
 pom.xml   |  4 ++--
 processing/pom.xml|  2 +-
 streaming/pom.xml |  6 ++
 14 files changed, 22 insertions(+), 24 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/assembly/pom.xml
--
diff --git a/assembly/pom.xml b/assembly/pom.xml
index 76ad8a5..cb5b821 100644
--- a/assembly/pom.xml
+++ b/assembly/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/common/pom.xml
--
diff --git a/common/pom.xml b/common/pom.xml
index 6343361..39047f2 100644
--- a/common/pom.xml
+++ b/common/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/core/pom.xml
--
diff --git a/core/pom.xml b/core/pom.xml
index f874615..53453d9 100644
--- a/core/pom.xml
+++ b/core/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/examples/spark2/pom.xml
--
diff --git a/examples/spark2/pom.xml b/examples/spark2/pom.xml
index da39f1d..b31725e 100644
--- a/examples/spark2/pom.xml
+++ b/examples/spark2/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/format/pom.xml
--
diff --git a/format/pom.xml b/format/pom.xml
index ae84f0b..08f1e70 100644
--- a/format/pom.xml
+++ b/format/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/hadoop/pom.xml
--
diff --git a/hadoop/pom.xml b/hadoop/pom.xml
index 206246f..ea809c1 100644
--- a/hadoop/pom.xml
+++ b/hadoop/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../pom.xml
   
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/integration/hive/pom.xml
--
diff --git a/integration/hive/pom.xml b/integration/hive/pom.xml
index e0ad499..c685b0a 100644
--- a/integration/hive/pom.xml
+++ b/integration/hive/pom.xml
@@ -22,7 +22,7 @@
 
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../../pom.xml
 
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/c055c8f3/integration/presto/pom.xml
--
diff --git a/integration/presto/pom.xml b/integration/presto/pom.xml
index 98bbe99..b261434 100644
--- a/integration/presto/pom.xml
+++ b/integration/presto/pom.xml
@@ -22,7 +22,7 @@
   
 org.apache.carbondata
 carbondata-parent
-1.3.0-SNAPSHOT
+1.3.0
 ../../pom.xml
   
 


[48/50] [abbrv] carbondata git commit: [CARBONDATA-2122] Add validation for empty bad record path

2018-02-03 Thread ravipesala
[CARBONDATA-2122] Add validation for empty bad record path

A data load that redirects bad records to an empty location should throw an 
Invalid Path exception.
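A minimal sketch of the tightened validation shown in the diff below; `isEmpty` mirrors commons-lang `StringUtils.isEmpty` (true for null or ""), while `isFileExists` is a simplified local stand-in for CarbonUtil's check, which additionally resolves HDFS URLs:

```java
// Illustrative re-implementation of the validation added in this commit.
// The real CarbonUtil.isValidBadStorePath also runs the path through
// checkAndAppendHDFSUrl before the existence check; that step is omitted here.
import java.nio.file.Files;
import java.nio.file.Paths;

public class BadStorePathCheck {
  // Mirrors org.apache.commons.lang.StringUtils.isEmpty semantics.
  static boolean isEmpty(String s) {
    return s == null || s.length() == 0;
  }

  // Simplified stand-in for CarbonUtil's HDFS-aware existence check.
  static boolean isFileExists(String path) {
    return Files.exists(Paths.get(path));
  }

  static boolean isValidBadStorePath(String badRecordsLocation) {
    if (isEmpty(badRecordsLocation)) {
      return false;                             // empty location is now rejected
    }
    return isFileExists(badRecordsLocation);    // and the path must actually exist
  }
}
```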

This closes #1914


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a2a2d1b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a2a2d1b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a2a2d1b

Branch: refs/heads/branch-1.3
Commit: 4a2a2d1b74901f96efc4ecf9cc16e9804884b929
Parents: 50e2f2c
Author: Jatin 
Authored: Fri Feb 2 19:55:16 2018 +0530
Committer: kunal642 
Committed: Sun Feb 4 00:23:19 2018 +0530

--
 .../apache/carbondata/core/util/CarbonUtil.java |  7 +-
 .../sdv/generated/AlterTableTestCase.scala  |  2 -
 .../sdv/generated/DataLoadingTestCase.scala |  5 +-
 .../badrecordloger/BadRecordActionTest.scala| 71 +++-
 .../badrecordloger/BadRecordEmptyDataTest.scala |  5 --
 .../badrecordloger/BadRecordLoggerTest.scala|  5 --
 .../StandardPartitionBadRecordLoggerTest.scala  |  5 --
 .../carbondata/spark/util/DataLoadingUtil.scala |  2 +-
 .../spark/sql/test/TestQueryExecutor.scala  | 16 ++---
 .../BadRecordPathLoadOptionTest.scala   | 11 ++-
 .../DataLoadFailAllTypeSortTest.scala   | 28 +---
 .../NumericDimensionBadRecordTest.scala |  6 +-
 .../AlterTableValidationTestCase.scala  |  3 -
 .../carbon/datastore/BlockIndexStoreTest.java   |  2 -
 14 files changed, 93 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
--
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java 
b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index b62b77d..c208154 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -98,6 +98,7 @@ import com.google.gson.GsonBuilder;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -1891,7 +1892,11 @@ public final class CarbonUtil {
* @return
*/
   public static boolean isValidBadStorePath(String badRecordsLocation) {
-return !(null == badRecordsLocation || badRecordsLocation.length() == 0);
+if (StringUtils.isEmpty(badRecordsLocation)) {
+  return false;
+} else {
+  return isFileExists(checkAndAppendHDFSUrl(badRecordsLocation));
+}
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index 8899f5c..4e53ea3 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -1016,8 +1016,6 @@ class AlterTableTestCase extends QueryTest with 
BeforeAndAfterAll {
 prop.addProperty("carbon.compaction.level.threshold", "2,1")
 prop.addProperty("carbon.enable.auto.load.merge", "false")
 prop.addProperty("carbon.bad.records.action", "FORCE")
-prop.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-  TestQueryExecutor.warehouse+"/baaadrecords")
   }
 
   override def afterAll: Unit = {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
index 52396ee..24a5aa4 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
+++ 

[38/50] [abbrv] carbondata git commit: [CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and other command

2018-02-03 Thread ravipesala
[CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and 
other command

More test cases are added for concurrent execution of insert overwrite and other commands.
Fixed a bug where delete segment and clean files could run while an insert overwrite was in progress.
Changed processMetadata in all commands to throw ProcessMetaDataException instead of sys.error.

This closes #1891
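The delete-segment guard this patch tightens (visible in the SegmentStatusManager hunk) can be sketched as follows. Names, the enum, and the boolean "still running" check are illustrative stand-ins, not CarbonData's exact API.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the guard: a segment whose status is "in progress" (and whose
// load is verified to still be running) must not be deleted; its load id is
// collected as invalid instead of being removed.
public class DeleteSegmentGuard {

    enum SegmentStatus { SUCCESS, INSERT_IN_PROGRESS, INSERT_OVERWRITE_IN_PROGRESS }

    static List<String> deleteSegment(String loadId, SegmentStatus status,
                                      boolean loadStillRunning) {
        List<String> invalidLoadIds = new ArrayList<>();
        boolean inProgress = status == SegmentStatus.INSERT_IN_PROGRESS
                || status == SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS;
        if (inProgress && loadStillRunning) {
            // cannot delete a segment that is being loaded or overwritten
            invalidLoadIds.add(loadId);
        }
        return invalidLoadIds;
    }

    public static void main(String[] args) {
        System.out.println(deleteSegment("2", SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS, true)); // [2]
        System.out.println(deleteSegment("3", SegmentStatus.SUCCESS, false));                     // []
    }
}
```

The second condition matters: a stale "in progress" status from a crashed load should not block deletion forever, which is why the status check is paired with a liveness check.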


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/55bffbe2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/55bffbe2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/55bffbe2

Branch: refs/heads/branch-1.3
Commit: 55bffbe2dbe880bb2f7a1e51cf02d49e828098d9
Parents: 91911af
Author: Jacky Li 
Authored: Wed Jan 31 13:10:11 2018 +0800
Committer: ravipesala 
Committed: Sat Feb 3 17:24:47 2018 +0530

--
 .../statusmanager/SegmentStatusManager.java |  59 ++--
 .../sdv/register/TestRegisterCarbonTable.scala  |   7 +-
 .../preaggregate/TestPreAggCreateCommand.scala  |   7 +-
 .../TestLoadTableConcurrentScenario.scala   |  78 -
 .../iud/InsertOverwriteConcurrentTest.scala | 204 -
 .../TestInsertAndOtherCommandConcurrent.scala   | 304 +++
 .../partition/TestShowPartitions.scala  |   9 +-
 .../StandardPartitionTableQueryTestCase.scala   |   4 +-
 .../command/CarbonTableSchemaCommonSuite.scala  |  11 +-
 .../exception/ConcurrentOperationException.java |  28 +-
 .../exception/ProcessMetaDataException.java |  26 ++
 .../datamap/CarbonDropDataMapCommand.scala  |   2 +-
 .../CarbonAlterTableCompactionCommand.scala |  15 +-
 .../CarbonAlterTableFinishStreaming.scala   |   4 +-
 .../management/CarbonCleanFilesCommand.scala|   7 +
 .../CarbonDeleteLoadByIdCommand.scala   |   9 +-
 .../CarbonDeleteLoadByLoadDateCommand.scala |   8 +
 .../management/RefreshCarbonTableCommand.scala  |   3 +-
 .../CarbonProjectForDeleteCommand.scala |   8 +-
 .../CarbonProjectForUpdateCommand.scala |   8 +-
 .../spark/sql/execution/command/package.scala   |   6 +
 ...rbonAlterTableDropHivePartitionCommand.scala |   2 +-
 .../CarbonAlterTableDropPartitionCommand.scala  |  16 +-
 .../CarbonAlterTableSplitPartitionCommand.scala |  11 +-
 .../CarbonShowCarbonPartitionsCommand.scala |   7 +-
 .../CarbonAlterTableAddColumnCommand.scala  |   3 +-
 .../CarbonAlterTableDataTypeChangeCommand.scala |  15 +-
 .../CarbonAlterTableDropColumnCommand.scala |  13 +-
 .../schema/CarbonAlterTableRenameCommand.scala  |  16 +-
 .../schema/CarbonAlterTableUnsetCommand.scala   |   4 +-
 .../table/CarbonCreateTableCommand.scala|   7 +-
 .../command/table/CarbonDropTableCommand.scala  |  27 +-
 .../TestStreamingTableOperation.scala   |  14 +-
 .../register/TestRegisterCarbonTable.scala  |   3 +-
 .../restructure/AlterTableRevertTestCase.scala  |  17 +-
 .../AlterTableValidationTestCase.scala  |  11 +-
 .../vectorreader/AddColumnTestCases.scala   |   4 +-
 .../vectorreader/ChangeDataTypeTestCases.scala  |   6 +-
 .../vectorreader/DropColumnTestCases.scala  |   4 +-
 .../processing/util/CarbonLoaderUtil.java   |   4 +-
 40 files changed, 532 insertions(+), 459 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
index 01f810e..9d14c62 100755
--- 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
@@ -509,13 +509,13 @@ public class SegmentStatusManager {
 invalidLoadIds.add(loadId);
 return invalidLoadIds;
   } else if (SegmentStatus.INSERT_IN_PROGRESS == segmentStatus
-  && checkIfValidLoadInProgress(absoluteTableIdentifier, loadId)) {
+  && isLoadInProgress(absoluteTableIdentifier, loadId)) {
 // if the segment status is in progress then no need to delete 
that.
 LOG.error("Cannot delete the segment " + loadId + " which is load 
in progress");
 invalidLoadIds.add(loadId);
 return invalidLoadIds;
   } else if (SegmentStatus.INSERT_OVERWRITE_IN_PROGRESS == 
segmentStatus
-  && checkIfValidLoadInProgress(absoluteTableIdentifier, loadId)) {
+  && isLoadInProgress(absoluteTableIdentifier, loadId)) {
 // if the segment status is overwrite in 

[14/50] [abbrv] carbondata git commit: [CARBONDATA-2094] Filter DataMap Tables in Show Table Command

2018-02-03 Thread ravipesala
[CARBONDATA-2094] Filter DataMap Tables in Show Table Command

Currently the Show Table command lists datamap tables (aggregate tables), but it should not show aggregate tables. Solution: handle the Show Table command on the Carbon side, filter out the datamap tables, and return the remaining tables.

This closes #1089
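The core idea of the fix can be sketched like this: intercept the list of tables the metastore returns and drop the ones registered as datamap (child aggregate) tables. In the real command the child-table set comes from CarbonTable metadata; here it is supplied explicitly, and all names are illustrative.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Sketch of filtering datamap child tables out of a SHOW TABLES result.
public class ShowTablesFilter {

    public static List<String> filterDataMapTables(List<String> allTables,
                                                   Set<String> dataMapTables) {
        return allTables.stream()
                .filter(t -> !dataMapTables.contains(t))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> all = Arrays.asList("tbl_1", "tbl_1_preagg_sum", "sparktable");
        Set<String> children = new HashSet<>(Collections.singleton("tbl_1_preagg_sum"));
        System.out.println(filterDataMapTables(all, children)); // [tbl_1, sparktable]
    }
}
```

Non-Carbon tables (like `sparktable` in the test cases) pass through untouched, since only names registered as datamap children are removed.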


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/ee1c4d42
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/ee1c4d42
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/ee1c4d42

Branch: refs/heads/branch-1.3
Commit: ee1c4d42fc0837e515ac222c676bd46fe93795d5
Parents: 19fdd4d
Author: BJangir 
Authored: Mon Jan 29 23:46:56 2018 +0530
Committer: kumarvishal 
Committed: Thu Feb 1 18:42:05 2018 +0530

--
 .../preaggregate/TestPreAggCreateCommand.scala  | 36 +
 .../preaggregate/TestPreAggregateDrop.scala |  9 ++-
 .../command/table/CarbonShowTablesCommand.scala | 82 
 .../spark/sql/hive/CarbonSessionState.scala | 11 ++-
 .../spark/sql/hive/CarbonSessionState.scala | 11 ++-
 5 files changed, 142 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/ee1c4d42/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index 23132de..f1d7396 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -233,6 +233,20 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
   }
 
   val timeSeries = TIMESERIES.toString
+  test("remove agg tables from show table command") {
+sql("DROP TABLE IF EXISTS tbl_1")
+sql("DROP TABLE IF EXISTS sparktable")
+sql("create table if not exists  tbl_1(imei string,age int,mac string 
,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 
'carbondata' ")
+sql("create table if not exists sparktable(a int,b string)")
+sql(
+  s"""create datamap preagg_sum on table tbl_1 using 'preaggregate' as 
select mac,avg(age) from tbl_1 group by mac"""
+.stripMargin)
+sql(
+  "create datamap agg2 on table tbl_1 using 'preaggregate' DMPROPERTIES 
('timeseries" +
+  ".eventTime'='prodate', 
'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select prodate," +
+  "mac from tbl_1 group by prodate,mac")
+checkExistence(sql("show tables"), false, 
"tbl_1_preagg_sum","tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
+  }
 
   test("test pre agg  create table 21: create with preaggregate and 
hierarchy") {
 sql("DROP TABLE IF EXISTS maintabletime")
@@ -287,6 +301,28 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
   }
 
+  test("remove  agg tables from show table command") {
+sql("DROP TABLE IF EXISTS tbl_1")
+sql("create table if not exists  tbl_1(imei string,age int,mac string 
,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 
'carbondata' ")
+sql("create datamap agg1 on table tbl_1 using 'preaggregate' as select 
mac, sum(age) from tbl_1 group by mac")
+sql("create table if not exists  sparktable(imei string,age int,mac string 
,prodate timestamp,update timestamp,gamepoint double,contrid double) ")
+checkExistence(sql("show tables"), false, "tbl_1_agg1")
+checkExistence(sql("show tables"), true, "sparktable","tbl_1")
+  }
+
+
+  test("remove TimeSeries agg tables from show table command") {
+sql("DROP TABLE IF EXISTS tbl_1")
+sql("create table if not exists  tbl_1(imei string,age int,mac string 
,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 
'carbondata' ")
+sql(
+  "create datamap agg2 on table tbl_1 using 'preaggregate' DMPROPERTIES 
('timeseries" +
+  ".eventTime'='prodate', 
'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select prodate," +
+  "mac from tbl_1 group by prodate,mac")
+checkExistence(sql("show tables"), false, 
"tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
+  }
+
+
+
   def 

[44/50] [abbrv] carbondata git commit: [CARBONDATA-2128] Documentation for table path while creating the table

2018-02-03 Thread ravipesala
[CARBONDATA-2128] Documentation for table path while creating the table

This closes #1927


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a251ba1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a251ba1
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a251ba1

Branch: refs/heads/branch-1.3
Commit: 4a251ba168236ea1d19c5e15ea6877145952d301
Parents: 349be00
Author: sgururajshetty 
Authored: Sat Feb 3 21:20:41 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:49:11 2018 +0530

--
 docs/data-management-on-carbondata.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a251ba1/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index fef2371..d9d4420 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -32,12 +32,13 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
 
 ## CREATE TABLE
 
-  This command can be used to create a CarbonData table by specifying the list 
of fields along with the table properties.
+  This command can be used to create a CarbonData table by specifying the list 
of fields along with the table properties. You can also specify the location 
where the table needs to be stored.
   
   ```
   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)]
   STORED BY 'carbondata'
   [TBLPROPERTIES (property_name=property_value, ...)]
+  [LOCATION 'path']
   ```  
   
 ### Usage Guidelines



[12/50] [abbrv] carbondata git commit: [CARBONDATA-2012] Add support to load pre-aggregate in one transaction

2018-02-03 Thread ravipesala
[CARBONDATA-2012] Add support to load pre-aggregate in one transaction

Currently, if a table (t1) has two pre-aggregate tables (p1, p2), then while loading, all the pre-aggregate tables are committed (table status written) before the parent table is committed.
After this PR the flow would be like this:

load t1
load p1
load p2
write table status for p2 with transactionID
write table status for p1 with transactionID
rename tablestatus_UUID to tablestatus for p2
rename tablestatus_UUID to tablestatus for p1
write table status for t1

This closes #1781
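The commit protocol described above can be sketched in two phases: each child table's status is first written to `tablestatus_<UUID>`, and only after every write has succeeded is each temporary file renamed to `tablestatus`, so a mid-way failure leaves every previously committed status file untouched. Paths and file names below are illustrative, not CarbonData's exact layout.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Minimal sketch of a write-then-rename table status commit.
public class TwoPhaseStatusCommit {

    // phase 1: write the new status under a transaction-scoped name
    static void writeStatus(Path tableDir, String uuid, String status) throws IOException {
        Files.write(tableDir.resolve("tablestatus_" + uuid), status.getBytes());
    }

    // phase 2: the rename makes the new status the visible one
    static void commitStatus(Path tableDir, String uuid) throws IOException {
        Files.move(tableDir.resolve("tablestatus_" + uuid),
                tableDir.resolve("tablestatus"),
                StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path p1 = Files.createTempDirectory("p1");
        Path p2 = Files.createTempDirectory("p2");
        String uuid = "tx-001"; // illustrative transaction id
        writeStatus(p2, uuid, "Success"); // children written first ...
        writeStatus(p1, uuid, "Success");
        commitStatus(p2, uuid);           // ... then renamed, child by child
        commitStatus(p1, uuid);
        System.out.println(Files.exists(p1.resolve("tablestatus"))); // true
    }
}
```

A rename within one directory is far less likely to fail partially than a content write, which is why the visible file only ever flips between complete versions.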


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d680e9cf
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d680e9cf
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d680e9cf

Branch: refs/heads/branch-1.3
Commit: d680e9cf5016475e6e9b320c27be6503e1c6e66c
Parents: c9a02fc
Author: kunal642 
Authored: Mon Jan 15 14:35:56 2018 +0530
Committer: ravipesala 
Committed: Thu Feb 1 14:42:05 2018 +0530

--
 .../datastore/filesystem/LocalCarbonFile.java   |   2 +-
 .../statusmanager/SegmentStatusManager.java |  29 ++-
 .../core/util/path/CarbonTablePath.java |   8 +
 .../hadoop/api/CarbonOutputCommitter.java   |   4 +
 .../carbondata/events/AlterTableEvents.scala|  10 +
 .../spark/rdd/AggregateDataMapCompactor.scala   |  31 ++-
 .../spark/rdd/CarbonDataRDDFactory.scala|  37 +++-
 .../spark/rdd/CarbonTableCompactor.scala|  33 ++-
 .../scala/org/apache/spark/sql/CarbonEnv.scala  |   4 +-
 .../management/CarbonLoadDataCommand.scala  |  25 ++-
 .../CreatePreAggregateTableCommand.scala|   7 +-
 .../preaaggregate/PreAggregateListeners.scala   | 220 +--
 .../preaaggregate/PreAggregateUtil.scala|  35 +--
 .../processing/loading/events/LoadEvents.java   |  13 ++
 .../processing/util/CarbonLoaderUtil.java   |  49 -
 15 files changed, 431 insertions(+), 76 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
 
b/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
index 4ce78be..5df5a81 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/datastore/filesystem/LocalCarbonFile.java
@@ -233,7 +233,7 @@ public class LocalCarbonFile implements CarbonFile {
 
   @Override public boolean renameForce(String changetoName) {
 File destFile = new File(changetoName);
-if (destFile.exists()) {
+if (destFile.exists() && 
!file.getAbsolutePath().equals(destFile.getAbsolutePath())) {
   if (destFile.delete()) {
 return file.renameTo(new File(changetoName));
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d680e9cf/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
index 6af0304..01f810e 100755
--- 
a/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/statusmanager/SegmentStatusManager.java
@@ -178,23 +178,42 @@ public class SegmentStatusManager {
* @return
*/
   public static LoadMetadataDetails[] readLoadMetadata(String 
metadataFolderPath) {
+String metadataFileName = metadataFolderPath + 
CarbonCommonConstants.FILE_SEPARATOR
++ CarbonCommonConstants.LOADMETADATA_FILENAME;
+return readTableStatusFile(metadataFileName);
+  }
+
+  /**
+   * Reads the table status file with the specified UUID if non empty.
+   */
+  public static LoadMetadataDetails[] readLoadMetadata(String 
metaDataFolderPath, String uuid) {
+String tableStatusFileName;
+if (uuid.isEmpty()) {
+  tableStatusFileName = metaDataFolderPath + 
CarbonCommonConstants.FILE_SEPARATOR
+  + CarbonCommonConstants.LOADMETADATA_FILENAME;
+} else {
+  tableStatusFileName = metaDataFolderPath + 
CarbonCommonConstants.FILE_SEPARATOR
+  + CarbonCommonConstants.LOADMETADATA_FILENAME + 
CarbonCommonConstants.UNDERSCORE + uuid;
+}
+return readTableStatusFile(tableStatusFileName);
+  }
+
+  public static LoadMetadataDetails[] readTableStatusFile(String 
tableStatusPath) {
 Gson 

[47/50] [abbrv] carbondata git commit: [CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case of table having more pages

2018-02-03 Thread ravipesala
[CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case 
of table having more pages

Problem: a like '%' filter throws ArrayIndexOutOfBoundsException when a table has multiple pages.
Solution: in RowLevelFilter, the row-count array should be filled with the number of rows in each page.

This closes #1909
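The shape of the bug and the fix can be sketched as follows: the restructure path used a one-element array holding the block's total row count, so scanning any page with index >= 1 indexed past the end of the array. The fix sizes the array per page. Page sizes below are illustrative.

```java
// Sketch of the row-count array before and after the fix.
public class PerPageRowCount {

    // old (buggy) shape: length == 1 regardless of how many pages exist,
    // so numberOfRows[i] fails for i >= 1 (ArrayIndexOutOfBoundsException)
    static int[] oldRowCounts(int totalRows) {
        return new int[] { totalRows };
    }

    // fixed shape: one entry per page, filled from each page's own row count
    static int[] newRowCounts(int[] pageRowCounts) {
        int[] numberOfRows = new int[pageRowCounts.length];
        for (int i = 0; i < pageRowCounts.length; i++) {
            numberOfRows[i] = pageRowCounts[i];
        }
        return numberOfRows;
    }

    public static void main(String[] args) {
        int[] pages = { 32000, 32000, 1500 }; // 3 pages, illustrative sizes
        System.out.println(oldRowCounts(65500).length); // 1
        System.out.println(newRowCounts(pages).length); // 3
    }
}
```

The same per-page count also replaces `nodeSize()` in TrueFilterExecutor, so each page's BitSet is flipped over exactly its own row range rather than the block total.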


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/50e2f2c8
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/50e2f2c8
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/50e2f2c8

Branch: refs/heads/branch-1.3
Commit: 50e2f2c8f2cc6ee4b72839b704a038666ae629ba
Parents: 54b7db5
Author: dhatchayani 
Authored: Fri Feb 2 10:55:19 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 22:55:36 2018 +0530

--
 .../executer/RowLevelFilterExecuterImpl.java| 10 ++--
 .../filter/executer/TrueFilterExecutor.java |  2 +-
 .../filterexpr/FilterProcessorTestCase.scala| 25 
 3 files changed, 34 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
index 224a69f..89489a2 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
@@ -205,7 +205,10 @@ public class RowLevelFilterExecuterImpl implements 
FilterExecuter {
   } else {
 // specific for restructure case where default values need to be filled
 pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() 
};
+numberOfRows = new int[pageNumbers];
+for (int i = 0; i < pageNumbers; i++) {
+  numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+}
   }
 }
 if (msrColEvalutorInfoList.size() > 0) {
@@ -217,7 +220,10 @@ public class RowLevelFilterExecuterImpl implements 
FilterExecuter {
   } else {
 // specific for restructure case where default values need to be filled
 pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() 
};
+numberOfRows = new int[pageNumbers];
+for (int i = 0; i < pageNumbers; i++) {
+  numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+}
   }
 }
 BitSetGroup bitSetGroup = new BitSetGroup(pageNumbers);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
index 92396ae..4b3738a 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
@@ -39,7 +39,7 @@ public class TrueFilterExecutor implements FilterExecuter {
 BitSetGroup group = new BitSetGroup(numberOfPages);
 for (int i = 0; i < numberOfPages; i++) {
   BitSet set = new BitSet();
-  set.flip(0, blockChunkHolder.getDataBlock().nodeSize());
+  set.flip(0, blockChunkHolder.getDataBlock().getPageRowCount(i));
   group.setBitSet(set, i);
 }
 return group;

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
index b92b379..d54906f 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
+++ 

[31/50] [abbrv] carbondata git commit: [CARBONDATA-2115] Documentation - Scenarios in which aggregate query is not fetching
2018-02-03 Thread ravipesala
[CARBONDATA-2115] Documentation - Scenarios in which aggregate query is not fetching

Added an FAQ entry describing the scenarios in which an aggregate query does not fetch data from the aggregate table.

This closes #1905


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/88757754
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/88757754
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/88757754

Branch: refs/heads/branch-1.3
Commit: 88757754e26423d741bb51f3f5c0222fea8de9f5
Parents: b48a8c2
Author: sgururajshetty 
Authored: Thu Feb 1 17:59:17 2018 +0530
Committer: chenliang613 
Committed: Sat Feb 3 16:10:17 2018 +0800

--
 docs/faq.md | 37 +
 1 file changed, 37 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/88757754/docs/faq.md
--
diff --git a/docs/faq.md b/docs/faq.md
index 6bbd4f7..baa46cc 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -25,6 +25,7 @@
 * [What is Carbon Lock Type?](#what-is-carbon-lock-type)
 * [How to resolve Abstract Method 
Error?](#how-to-resolve-abstract-method-error)
 * [How Carbon will behave when execute insert operation in abnormal 
scenarios?](#how-carbon-will-behave-when-execute-insert-operation-in-abnormal-scenarios)
+* [Why aggregate query is not fetching data from aggregate table?] 
(#why-aggregate-query-is-not-fetching-data-from-aggregate-table)
 
 ## What are Bad Records?
 Records that fail to get loaded into the CarbonData due to data type 
incompatibility or are empty or have incompatible format are classified as Bad 
Records.
@@ -141,4 +142,40 @@ INSERT INTO TABLE carbon_table SELECT id, city FROM 
source_table;
 
 When the column type in carbon table is different from the column specified in 
select statement. The insert operation will still success, but you may get NULL 
in result, because NULL will be substitute value when conversion type failed.
 
+## Why aggregate query is not fetching data from aggregate table?
+Following are the aggregate queries that won’t fetch data from aggregate 
table:
+
+- **Scenario 1** :
+When SubQuery predicate is present in the query.
+
+Example 
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 
'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, 
sum(gdp) from gdp group by ctry;
+select ctry from pop1 where ctry in (select cntry from gdp21 group by cntry)
+```
+
+- **Scenario 2** : 
+When aggregate function along with ‘in’ filter. 
+
+Example.
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 
'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, 
sum(gdp) from gdp group by ctry;
+select cntry, sum(gdp) from gdp21 where cntry in (select ctry from pop1) group 
by cntry;
+```
+
+- **Scenario 3** : 
+When aggregate function having ‘join’ with Equal filter.
+
+Example.
+
+```
+create table gdp21(cntry smallint, gdp double, y_year date) stored by 
'carbondata'
+create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, 
sum(gdp) from gdp group by ctry;
+select cntry,sum(gdp) from gdp21,pop1 where cntry=ctry group by cntry;
+```
+
 



[19/50] [abbrv] carbondata git commit: [CARBONDATA-1626] Documentation for add datasize and index size to table status file

2018-02-03 Thread ravipesala
[CARBONDATA-1626] Documentation for adding data size and index size to the table status file

Added the parameter that records data size and index size in the table status file.

This closes #1897


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a3638adb
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a3638adb
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a3638adb

Branch: refs/heads/branch-1.3
Commit: a3638adbc392c9a12e7858f5a61427266a8937a1
Parents: 473bd31
Author: sgururajshetty 
Authored: Wed Jan 31 19:14:06 2018 +0530
Committer: manishgupta88 
Committed: Fri Feb 2 11:33:53 2018 +0530

--
 docs/configuration-parameters.md | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a3638adb/docs/configuration-parameters.md
--
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index fe207f2..b68a2d1 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -111,6 +111,8 @@ This section provides the details of all the configurations 
required for CarbonD
 | carbon.tempstore.location | /opt/Carbon/TempStoreLoc | Temporary store 
location. By default it takes System.getProperty("java.io.tmpdir"). |
 | carbon.load.log.counter | 50 | Data loading records count logger. |
 | carbon.skip.empty.line | false | Setting this property ignores the empty 
lines in the CSV file during the data load |
+| carbon.enable.calculate.size | true | **For Load Operation**: Setting this 
property calculates the size of the carbon data file (.carbondata) and carbon 
index file (.carbonindex) for every load and updates the table status file. 
**For Describe Formatted**: Setting this property calculates the total size of 
the carbon data files and carbon index files for the respective table and 
displays in describe formatted command. | 
+
 
 
 * **Compaction Configuration**



[33/50] [abbrv] carbondata git commit: [Documentation] Data types for Dictionary exclude & sort column

2018-02-03 Thread ravipesala
[Documentation] Data types for Dictionary exclude & sort column

Data types for Dictionary exclude & sort column

This closes #1907


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/22f78fab
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/22f78fab
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/22f78fab

Branch: refs/heads/branch-1.3
Commit: 22f78faba1b7314297434cf23d898aa5346dea37
Parents: e527c05
Author: sgururajshetty 
Authored: Thu Feb 1 20:30:12 2018 +0530
Committer: manishgupta88 
Committed: Sat Feb 3 14:17:18 2018 +0530

--
 docs/data-management-on-carbondata.md | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/22f78fab/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 0b35ed9..66cc048 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -45,13 +45,15 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   
- **Dictionary Encoding Configuration**
 
- Dictionary encoding is turned off for all columns by default from 1.3 
onwards, you can use this command for including columns to do dictionary 
encoding.
+ Dictionary encoding is turned off for all columns by default from 1.3 
onwards, you can use this command for including or excluding columns to do 
dictionary encoding.
  Suggested use cases : do dictionary encoding for low cardinality columns, 
it might help to improve data compression ratio and performance.
 
  ```
  TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
- ```
+```
  
+NOTE: DICTIONARY_EXCLUDE supports only int, string, timestamp, long, 
bigint, and varchar data types.
+
- **Inverted Index Configuration**
 
  By default inverted index is enabled, it might help to improve 
compression ratio and query speed, especially for low cardinality columns which 
are in reward position.
@@ -64,8 +66,9 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
- **Sort Columns Configuration**
 
  This property is for users to specify which columns belong to the 
MDK(Multi-Dimensions-Key) index.
- * If users don't specify "SORT_COLUMN" property, by default MDK index be 
built by using all dimension columns except complex datatype column. 
- * If this property is specified but with empty argument, then the table 
will be loaded without sort..
+ * If users don't specify "SORT_COLUMN" property, by default MDK index be 
built by using all dimension columns except complex data type column. 
+ * If this property is specified but with empty argument, then the table 
will be loaded without sort.
+* This supports only string, date, timestamp, short, int, long, and 
boolean data types.
  Suggested use cases : Only build MDK index for required columns,it might 
help to improve the data loading performance.
 
  ```



[17/50] [abbrv] carbondata git commit: [CARBONDATA-2064] Add compaction listener

2018-02-03 Thread ravipesala
[CARBONDATA-2064] Add compaction listener

This closes #1847


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/54a381c2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/54a381c2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/54a381c2

Branch: refs/heads/branch-1.3
Commit: 54a381c27024ece07d400a4a1d36917bd3ca09f9
Parents: 1202e20
Author: dhatchayani 
Authored: Tue Jan 23 15:26:26 2018 +0530
Committer: ravipesala 
Committed: Thu Feb 1 22:20:33 2018 +0530

--
 .../core/constants/CarbonCommonConstants.java   |   7 -
 .../hadoop/api/CarbonOutputCommitter.java   |  32 ++--
 .../sdv/generated/MergeIndexTestCase.scala  |  30 ++--
 .../CarbonIndexFileMergeTestCase.scala  |  48 +++---
 .../dataload/TestGlobalSortDataLoad.scala   |   2 +-
 .../StandardPartitionTableLoadingTestCase.scala |   5 -
 .../carbondata/events/AlterTableEvents.scala|  14 +-
 .../spark/rdd/CarbonMergeFilesRDD.scala |  84 --
 .../carbondata/spark/util/CommonUtil.scala  |  51 --
 .../spark/rdd/CarbonDataRDDFactory.scala|  14 --
 .../spark/rdd/CarbonTableCompactor.scala|   2 -
 .../CarbonAlterTableCompactionCommand.scala | 165 +--
 .../sql/execution/strategy/DDLStrategy.scala|  17 --
 .../CarbonGetTableDetailComandTestCase.scala|   6 +-
 .../processing/loading/events/LoadEvents.java   |  12 ++
 .../processing/merger/CompactionType.java   |   1 -
 16 files changed, 155 insertions(+), 335 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 77e8db8..7ae3034 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1478,13 +1478,6 @@ public final class CarbonCommonConstants {
 
   public static final String BITSET_PIPE_LINE_DEFAULT = "true";
 
-  /**
-   * It is internal configuration and used only for test purpose.
-   * It will merge the carbon index files with in the segment to single 
segment.
-   */
-  public static final String CARBON_MERGE_INDEX_IN_SEGMENT = 
"carbon.merge.index.in.segment";
-
-  public static final String CARBON_MERGE_INDEX_IN_SEGMENT_DEFAULT = "true";
 
   public static final String AGGREGATIONDATAMAPSCHEMA = 
"AggregateDataMapHandler";
   /*

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54a381c2/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
--
diff --git 
a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
 
b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
index 9cca1bb..555ddd2 100644
--- 
a/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
+++ 
b/hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonOutputCommitter.java
@@ -25,18 +25,15 @@ import java.util.Set;
 
 import org.apache.carbondata.common.logging.LogService;
 import org.apache.carbondata.common.logging.LogServiceFactory;
-import org.apache.carbondata.core.constants.CarbonCommonConstants;
 import org.apache.carbondata.core.metadata.PartitionMapFileStore;
 import org.apache.carbondata.core.metadata.schema.table.CarbonTable;
 import org.apache.carbondata.core.mutate.CarbonUpdateUtil;
 import org.apache.carbondata.core.statusmanager.LoadMetadataDetails;
 import org.apache.carbondata.core.statusmanager.SegmentStatus;
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager;
-import org.apache.carbondata.core.util.CarbonProperties;
 import org.apache.carbondata.core.util.CarbonSessionInfo;
 import org.apache.carbondata.core.util.ThreadLocalSessionInfo;
 import org.apache.carbondata.core.util.path.CarbonTablePath;
-import org.apache.carbondata.core.writer.CarbonIndexFileMergeWriter;
 import org.apache.carbondata.events.OperationContext;
 import org.apache.carbondata.events.OperationListenerBus;
 import org.apache.carbondata.processing.loading.events.LoadEvents;
@@ -126,7 +123,16 @@ public class CarbonOutputCommitter extends 
FileOutputCommitter {
 }
   }
   CarbonLoaderUtil.recordNewLoadMetadata(newMetaEntry, loadModel, false, 
overwriteSet);
-  mergeCarbonIndexFiles(segmentPath);
+  if (operationContext != null) {
+

[36/50] [abbrv] carbondata git commit: [CARBONDATA-2112] Fixed bug for select operation on datamap with avg and a column name

2018-02-03 Thread ravipesala
[CARBONDATA-2112] Fixed bug for select operation on datamap with avg and a 
column name

Problem: When applying a select operation (having a column name and an aggregate 
function) on a table having a datamap, the data came out wrong because the group by 
expression and aggregate expression were created incorrectly.

Solution: While creating the aggregate and group by expressions, the child column 
was looked up by the parent column name alone, which could resolve to the wrong 
column, so a new check was added there to get the correct child column.
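The idea behind the check can be sketched as follows; the method and column names here are illustrative assumptions, not CarbonData's actual API (the real fix lives in `AggregationDataMapSchema`). For an `avg` aggregate, several child columns (e.g. a `_sum` and a `_count` column) can share the same parent column, so matching on the parent name alone is ambiguous; additionally requiring the child column name to end with the requested name disambiguates.

```java
// Hedged sketch of the child-column lookup fix (names are hypothetical).
import java.util.Arrays;
import java.util.List;

public class ChildColumnLookup {
    // Returns the child column matching parentName, or null if none matches.
    static String findChild(List<String> childColumns, String parentName) {
        for (String child : childColumns) {
            // Old check alone (child derives from the parent name) matches both
            // "maintable_age_sum" and "maintable_age"; the added endsWith check
            // keeps only the column whose name actually ends with the request.
            if (child.contains(parentName) && child.endsWith(parentName)) {
                return child;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("maintable_age_sum", "maintable_age");
        System.out.println(findChild(cols, "age")); // picks "maintable_age"
    }
}
```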

This closes #1910


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/91911af2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/91911af2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/91911af2

Branch: refs/heads/branch-1.3
Commit: 91911af231583b9e2b210dd685770836b358bcd0
Parents: 44e70d0
Author: Geetika Gupta 
Authored: Fri Feb 2 14:02:38 2018 +0530
Committer: kunal642 
Committed: Sat Feb 3 16:25:30 2018 +0530

--
 .../metadata/schema/table/AggregationDataMapSchema.java  |  3 ++-
 .../testsuite/preaggregate/TestPreAggregateLoad.scala| 11 +++
 2 files changed, 13 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/91911af2/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
index e061812..2a16e1f 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/AggregationDataMapSchema.java
@@ -151,7 +151,8 @@ public class AggregationDataMapSchema extends DataMapSchema 
{
   List parentColumnTableRelations =
   columnSchema.getParentColumnTableRelations();
   if (null != parentColumnTableRelations && 
parentColumnTableRelations.size() == 1
-  && 
parentColumnTableRelations.get(0).getColumnName().equals(columName)) {
+  && 
parentColumnTableRelations.get(0).getColumnName().equals(columName) &&
+  columnSchema.getColumnName().endsWith(columName)) {
 return columnSchema;
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/91911af2/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
index b6b7a17..da1ffb5 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
@@ -405,4 +405,15 @@ test("check load and select for avg double datatype") {
 sql("drop table if exists maintable")
   }
 
+  test("check load and select for avg int datatype and group by") {
+sql("drop table if exists maintable ")
+sql("CREATE TABLE maintable(id int, city string, age int) stored by 
'carbondata'")
+sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+sql(s"LOAD DATA LOCAL INPATH '$testData' into table maintable")
+val rows = sql("select age,avg(age) from maintable group by age").collect()
+sql("create datamap maintbl_douoble on table maintable using 
'preaggregate' as select avg(age) from maintable group by age")
+checkAnswer(sql("select age,avg(age) from maintable group by age"), rows)
+  }
+
 }



[06/50] [abbrv] carbondata git commit: [CARBONDATA-2089]SQL exception is masked due to assert(false) inside try catch and exception block always asserting true

2018-02-03 Thread ravipesala
[CARBONDATA-2089] SQL exception is masked due to assert(false) inside try catch 
and exception block always asserting true

Correct all SDV test cases to use intercept[Exception]
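The anti-pattern being removed, and the intercept-style replacement, can be sketched in plain Java; the `intercept` helper below is a minimal analogue of ScalaTest's `intercept[Exception]`, not a CarbonData API.

```java
// Sketch: why `try { ...; assert(false) } catch { assert(true) }` masks failures.
public class InterceptExample {
    // Minimal analogue of ScalaTest's intercept[Exception] { ... }: runs the
    // action and returns the thrown exception, failing if none is thrown.
    static Exception intercept(Runnable action) {
        try {
            action.run();
        } catch (Exception e) {
            return e; // the expected failure path
        }
        throw new AssertionError("Expected an exception but none was thrown");
    }

    public static void main(String[] args) {
        // Anti-pattern: ANY exception (even a typo in the setup SQL) lands in
        // the catch block and makes the test pass, hiding the real failure.
        boolean passed = false;
        try {
            throw new IllegalStateException("setup bug, not the behaviour under test");
        } catch (Exception e) {
            passed = true; // masks the real failure
        }

        // Intercept style: the test pins down exactly which call must fail.
        Exception e = intercept(() -> {
            throw new RuntimeException("rename to an existing table failed");
        });
        System.out.println(passed + " / " + e.getMessage());
    }
}
```

With intercept, an unexpected exception in setup still fails the test, while the expected exception from the call under test is returned for further assertions on its message.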

This closes #1871


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/3dff273b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/3dff273b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/3dff273b

Branch: refs/heads/branch-1.3
Commit: 3dff273b4f1308fa76a91f6f22bb40eb2d2d9553
Parents: b2139ca
Author: Raghunandan S 
Authored: Sat Jan 27 20:49:47 2018 +0530
Committer: Jacky Li 
Committed: Wed Jan 31 19:28:09 2018 +0800

--
 .../sdv/generated/AlterTableTestCase.scala  | 250 ++-
 .../sdv/generated/BatchSortLoad1TestCase.scala  |  39 +--
 .../sdv/generated/BatchSortLoad2TestCase.scala  |  32 +-
 .../sdv/generated/BatchSortQueryTestCase.scala  | 290 +++--
 .../sdv/generated/BucketingTestCase.scala   |  12 +-
 .../sdv/generated/ColumndictTestCase.scala  |  60 +---
 .../sdv/generated/DataLoadingIUDTestCase.scala  | 318 ---
 .../sdv/generated/DataLoadingTestCase.scala |   7 +-
 .../sdv/generated/InvertedindexTestCase.scala   |  14 +-
 .../sdv/generated/OffheapQuery1TestCase.scala   | 287 +++--
 .../sdv/generated/OffheapQuery2TestCase.scala   | 286 +++--
 .../sdv/generated/OffheapSort1TestCase.scala|  10 +-
 .../sdv/generated/OffheapSort2TestCase.scala|  10 +-
 .../sdv/generated/PartitionTestCase.scala   |  71 ++---
 .../sdv/generated/SinglepassTestCase.scala  |  76 ++---
 15 files changed, 423 insertions(+), 1339 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index b1a0f34..8899f5c 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -120,141 +120,107 @@ class AlterTableTestCase extends QueryTest with 
BeforeAndAfterAll {
 
   //Check alter table when the altered name is already present in the database
   test("RenameTable_001_08", Include) {
-try {
-   sql(s"""create table test1 (name string, id int) stored by 
'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""create table test2 (name string, id int) stored by 
'carbondata'""").collect
+intercept[Exception] {
+  sql(s"""create table test1 (name string, id int) stored by 
'carbondata'""").collect
+  sql(s"""insert into test1 select 'xx',1""").collect
+  sql(s"""create table test2 (name string, id int) stored by 
'carbondata'""").collect
   sql(s"""alter table test1 RENAME TO test2""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
- sql(s"""drop table if exists test1""").collect
-   sql(s"""drop table if exists test2""").collect
+
+sql(s"""drop table if exists test1""").collect
+sql(s"""drop table if exists test2""").collect
   }
 
 
   //Check alter table when the altered name is given multiple times
   test("RenameTable_001_09", Include) {
-try {
-   sql(s"""create table test1 (name string, id int) stored by 
'carbondata'""").collect
-   sql(s"""insert into test1 select 'xx',1""").collect
+intercept[Exception] {
+  sql(s"""create table test1 (name string, id int) stored by 
'carbondata'""").collect
+  sql(s"""insert into test1 select 'xx',1""").collect
   sql(s"""alter table test1 RENAME TO test2 test3""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
- sql(s"""drop table if exists test1""").collect
+sql(s"""drop table if exists test1""").collect
   }
 
 
   //Check delete column for dimension column
   test("DeleteCol_001_01", Include) {
-try {
- sql(s"""create table test1 (name string, id int) stored by 'carbondata' 
TBLPROPERTIES('DICTIONARY_INCLUDE'='id') """).collect
-   sql(s"""insert into test1 select 'xx',1""").collect
-   sql(s"""alter table test1 drop columns (name)""").collect
+intercept[Exception] {
+  sql(s"""create table test1 (name string, id int) stored by 'carbondata' 

[50/50] [abbrv] carbondata git commit: [CARBONDATA-1454]false expression handling and block pruning

2018-02-03 Thread ravipesala
[CARBONDATA-1454] False expression handling and block pruning

Issue :- In case of a wrong/invalid value for the timestamp and date data types, all 
blocks are identified for scan.

Solution :- Add False Expression handling and a False Filter Executor; it can be 
used to handle an invalid filter value.
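The pruning effect can be sketched with a bitset of candidate blocks; the class and method names below are illustrative, not the actual `FalseFilterExecutor` from the patch. A filter whose literal can never match (e.g. an unparseable timestamp) resolves to an executor that selects no blocks, instead of falling back to a row-level scan of every block.

```java
// Hedged sketch of "false filter" block pruning (hypothetical names).
import java.util.BitSet;

public class FalseFilterSketch {
    // A false filter can never match: return an all-clear bitset, so no block
    // is selected for scanning.
    static BitSet applyFalseFilter(int blockCount) {
        return new BitSet(blockCount);
    }

    // For contrast: the pre-fix behaviour effectively selected every block.
    static BitSet selectAllBlocks(int blockCount) {
        BitSet all = new BitSet(blockCount);
        all.set(0, blockCount);
        return all;
    }

    public static void main(String[] args) {
        System.out.println(applyFalseFilter(1000).cardinality());  // 0 blocks scanned
        System.out.println(selectAllBlocks(1000).cardinality());   // 1000 blocks scanned
    }
}
```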

This closes #1915


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e16e8781
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e16e8781
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e16e8781

Branch: refs/heads/branch-1.3
Commit: e16e878189baa82bee5ca8af8d1229b7733b454a
Parents: fa6cd8d
Author: BJangir 
Authored: Fri Feb 2 16:33:45 2018 +0530
Committer: ravipesala 
Committed: Sun Feb 4 00:59:25 2018 +0530

--
 .../scan/filter/FilterExpressionProcessor.java  |  3 +-
 .../carbondata/core/scan/filter/FilterUtil.java |  3 +
 .../filter/executer/FalseFilterExecutor.java| 60 
 .../scan/filter/intf/FilterExecuterType.java|  2 +-
 .../FalseConditionalResolverImpl.java   | 61 
 .../filterexpr/FilterProcessorTestCase.scala| 74 +++-
 .../apache/spark/sql/CarbonBoundReference.scala |  4 ++
 .../execution/CastExpressionOptimization.scala  | 60 +---
 .../strategy/CarbonLateDecodeStrategy.scala |  2 +
 .../spark/sql/optimizer/CarbonFilters.scala |  4 ++
 10 files changed, 259 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
index 5a1b7df..3e23aa3 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
@@ -63,6 +63,7 @@ import 
org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
 import 
org.apache.carbondata.core.scan.filter.resolver.LogicalFilterResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.RowLevelFilterResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.RowLevelRangeFilterResolverImpl;
+import 
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.FalseConditionalResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.TrueConditionalResolverImpl;
 import org.apache.carbondata.core.scan.partition.PartitionUtil;
 import org.apache.carbondata.core.scan.partition.Partitioner;
@@ -398,7 +399,7 @@ public class FilterExpressionProcessor implements 
FilterProcessor {
 ConditionalExpression condExpression = null;
 switch (filterExpressionType) {
   case FALSE:
-return new RowLevelFilterResolverImpl(expression, false, false, 
tableIdentifier);
+return new FalseConditionalResolverImpl(expression, false, false, 
tableIdentifier);
   case TRUE:
 return new TrueConditionalResolverImpl(expression, false, false, 
tableIdentifier);
   case EQUALS:

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
index 3268ca3..a08edc0 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
@@ -74,6 +74,7 @@ import 
org.apache.carbondata.core.scan.filter.executer.AndFilterExecuterImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.DimColumnExecuterFilterInfo;
 import 
org.apache.carbondata.core.scan.filter.executer.ExcludeColGroupFilterExecuterImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.ExcludeFilterExecuterImpl;
+import org.apache.carbondata.core.scan.filter.executer.FalseFilterExecutor;
 import org.apache.carbondata.core.scan.filter.executer.FilterExecuter;
 import 
org.apache.carbondata.core.scan.filter.executer.ImplicitIncludeFilterExecutorImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.IncludeColGroupFilterExecuterImpl;
@@ -176,6 +177,8 @@ public final class FilterUtil {
   .getFilterRangeValues(segmentProperties), segmentProperties);
 case TRUE:
   return new TrueFilterExecutor();
+case FALSE:
+ 

[42/50] [abbrv] carbondata git commit: [CARBONDATA-2123] Refactor datamap schema thrift and datamap provider to use short name and classname

2018-02-03 Thread ravipesala
[CARBONDATA-2123] Refactor datamap schema thrift and datamap provider to use 
short name and classname

Updated the schema thrift file for the datamap schema to correct the typo errors 
and update the names.
Added the class name to the schema file and updated the short name for each enum.
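The short-name/class-name pattern the refactor describes can be sketched as an enum; the enum constants and class names below are invented for illustration and are not the actual `DataMapProvider` contents.

```java
// Hedged sketch: each provider carries a user-facing short name plus the
// implementing class name, and lookup accepts either form.
public enum DataMapProviderSketch {
    PREAGGREGATE("preaggregate", "org.example.PreAggregateHandler"),
    TIMESERIES("timeseries", "org.example.TimeSeriesHandler");

    private final String shortName;
    private final String className;

    DataMapProviderSketch(String shortName, String className) {
        this.shortName = shortName;
        this.className = className;
    }

    public String getClassName() { return className; }

    // Resolve a provider from its short name or its fully qualified class name.
    public static DataMapProviderSketch fromName(String name) {
        for (DataMapProviderSketch p : values()) {
            if (p.shortName.equalsIgnoreCase(name) || p.className.equals(name)) {
                return p;
            }
        }
        throw new IllegalArgumentException("Unknown datamap provider: " + name);
    }

    public static void main(String[] args) {
        System.out.println(fromName("preaggregate").getClassName());
    }
}
```

This is why a DDL like `USING "preaggregate"` can keep working while the schema file stores the full class name.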

This closes #1919


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/46d9bf96
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/46d9bf96
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/46d9bf96

Branch: refs/heads/branch-1.3
Commit: 46d9bf966910afb98a4e4e9cf879f2a9beef5b72
Parents: 4677fc6
Author: ravipesala 
Authored: Sat Feb 3 00:18:10 2018 +0530
Committer: Jacky Li 
Committed: Sat Feb 3 22:05:37 2018 +0800

--
 .../core/constants/CarbonCommonConstants.java   |  2 -
 .../ThriftWrapperSchemaConverterImpl.java   | 10 ++---
 .../schema/datamap/DataMapProvider.java | 39 +++-
 .../schema/table/DataMapSchemaFactory.java  | 12 +++---
 format/src/main/thrift/schema.thrift| 14 ---
 .../preaggregate/TestPreAggCreateCommand.scala  |  3 +-
 .../timeseries/TestTimeSeriesCreateTable.scala  |  4 +-
 .../datamap/CarbonCreateDataMapCommand.scala| 28 --
 .../CreatePreAggregateTableCommand.scala|  5 ++-
 9 files changed, 80 insertions(+), 37 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 8480758..a799e51 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1455,8 +1455,6 @@ public final class CarbonCommonConstants {
 
   public static final String BITSET_PIPE_LINE_DEFAULT = "true";
 
-
-  public static final String AGGREGATIONDATAMAPSCHEMA = 
"AggregateDataMapHandler";
   /*
* The total size of carbon data
*/

http://git-wip-us.apache.org/repos/asf/carbondata/blob/46d9bf96/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
 
b/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
index e9c5505..21ab797 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/metadata/converter/ThriftWrapperSchemaConverterImpl.java
@@ -343,7 +343,7 @@ public class ThriftWrapperSchemaConverterImpl implements 
SchemaConverter {
 
.setDatabaseName(wrapperChildSchema.getRelationIdentifier().getDatabaseName());
 
relationIdentifier.setTableName(wrapperChildSchema.getRelationIdentifier().getTableName());
 
relationIdentifier.setTableId(wrapperChildSchema.getRelationIdentifier().getTableId());
-thriftChildSchema.setRelationIdentifire(relationIdentifier);
+thriftChildSchema.setChildTableIdentifier(relationIdentifier);
   }
   thriftChildSchema.setProperties(wrapperChildSchema.getProperties());
   thriftChildSchema.setClassName(wrapperChildSchema.getClassName());
@@ -648,11 +648,11 @@ public class ThriftWrapperSchemaConverterImpl implements 
SchemaConverter {
 DataMapSchema childSchema = new 
DataMapSchema(thriftDataMapSchema.getDataMapName(),
 thriftDataMapSchema.getClassName());
 childSchema.setProperties(thriftDataMapSchema.getProperties());
-if (null != thriftDataMapSchema.getRelationIdentifire()) {
+if (null != thriftDataMapSchema.getChildTableIdentifier()) {
   RelationIdentifier relationIdentifier =
-  new 
RelationIdentifier(thriftDataMapSchema.getRelationIdentifire().getDatabaseName(),
-  thriftDataMapSchema.getRelationIdentifire().getTableName(),
-  thriftDataMapSchema.getRelationIdentifire().getTableId());
+  new 
RelationIdentifier(thriftDataMapSchema.getChildTableIdentifier().getDatabaseName(),
+  thriftDataMapSchema.getChildTableIdentifier().getTableName(),
+  thriftDataMapSchema.getChildTableIdentifier().getTableId());
   childSchema.setRelationIdentifier(relationIdentifier);
   childSchema.setChildSchema(
   

[29/50] [abbrv] carbondata git commit: [CARBONDATA-2098]Add Documentation for Pre-Aggregate tables

2018-02-03 Thread ravipesala
[CARBONDATA-2098] Add Documentation for Pre-Aggregate tables

Add Documentation for Pre-Aggregate tables

This closes #1886


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/71f8828b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/71f8828b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/71f8828b

Branch: refs/heads/branch-1.3
Commit: 71f8828be56ae9f3927a5fc4a5047794a740c6d1
Parents: da129d5
Author: Raghunandan S 
Authored: Mon Jan 29 08:54:49 2018 +0530
Committer: chenliang613 
Committed: Sat Feb 3 15:45:30 2018 +0800

--
 docs/data-management-on-carbondata.md   | 245 +++
 .../examples/PreAggregateTableExample.scala | 145 +++
 .../TimeSeriesPreAggregateTableExample.scala| 103 
 3 files changed, 493 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/71f8828b/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 3119935..0b35ed9 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -25,6 +25,7 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
+* [PRE-AGGREGATE TABLES](#agg-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
 
@@ -748,6 +749,250 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let 
other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition 
column.
 
+## PRE-AGGREGATE TABLES
+  CarbonData supports pre-aggregation of data so that OLAP-style queries can fetch data 
+  much faster. Aggregate tables are created as datamaps so that the handling is as 
+  efficient as other indexing support. Users can create as many aggregate tables as they 
+  require as datamaps to improve their query performance, provided the storage 
+  requirements and loading speeds are acceptable.
+  
+  For main table called **sales** which is defined as 
+  
+  ```
+  CREATE TABLE sales (
+  order_time timestamp,
+  user_id string,
+  sex string,
+  country string,
+  quantity int,
+  price bigint)
+  STORED BY 'carbondata'
+  ```
+  
+  user can create pre-aggregate tables using the DDL
+  
+  ```
+  CREATE DATAMAP agg_sales
+  ON TABLE sales
+  USING "preaggregate"
+  AS
+  SELECT country, sex, sum(quantity), avg(price)
+  FROM sales
+  GROUP BY country, sex
+  ```
+  
+Functions supported in pre-aggregate tables
+
+| Function | Rollup supported |
+|---||
+| SUM | Yes |
+| AVG | Yes |
+| MAX | Yes |
+| MIN | Yes |
+| COUNT | Yes |
+
+
+# How pre-aggregate tables are selected
+For the main table **sales** and pre-aggregate table **agg_sales** created 
above, queries of the 
+kind
+```
+SELECT country, sex, sum(quantity), avg(price) from sales GROUP BY country, sex
+
+SELECT sex, sum(quantity) from sales GROUP BY sex
+
+SELECT sum(price), country from sales GROUP BY country
+``` 
+
+will be transformed by Query Planner to fetch data from pre-aggregate table 
**agg_sales**
+
+But queries of the kind
+```
+SELECT user_id, country, sex, sum(quantity), avg(price) from sales GROUP BY 
country, sex
+
+SELECT sex, avg(quantity) from sales GROUP BY sex
+
+SELECT max(price), country from sales GROUP BY country
+```
+
+will fetch the data from the main table **sales**
+
+# Loading data to pre-aggregate tables
+For an existing table with loaded data, the data load to the pre-aggregate table is 
+triggered by the CREATE DATAMAP statement when the user creates the pre-aggregate table.
+For incremental loads after aggregate tables are created, loading data to the main 
+table triggers the load to the pre-aggregate tables once the main table load is 
+complete. These loads are atomic, meaning that data on the main table and aggregate 
+tables is only visible to the user after all tables are loaded.
+
+# Querying data from pre-aggregate tables
+Pre-aggregate tables cannot be queried directly. Queries are to be made on the main 
+table. Internally, CarbonData will check the pre-aggregate tables associated with the 
+main table and, if a pre-aggregate table satisfies the query, the plan is transformed 
+automatically to use the pre-aggregate table to fetch the data.
+
+# Compacting pre-aggregate tables
+Compaction is an optional operation for pre-aggregate table. If compaction is 
performed on main 
+table but not performed on 

[25/50] [abbrv] carbondata git commit: [CARBONDATA-1918] Incorrect data is displayed when String is updated using Sentences

2018-02-03 Thread ravipesala
[CARBONDATA-1918] Incorrect data is displayed when String is updated using 
Sentences

Incorrect data is displayed when updating a String column using the sentences 
UDF. The sentences UDF returns an array; when a string column is updated with an 
array, wrong data gets written. Therefore, we have to check for a supported type 
before updating.
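The guard can be sketched as a type check on the update expression's result type before the update is applied; the class, method, and message below are illustrative assumptions, not CarbonData's actual implementation (the real change is in `CarbonProjectForUpdateCommand`).

```java
// Hedged sketch: reject complex-typed update values that a scalar column
// cannot hold, instead of silently writing wrong data.
public class UpdateTypeCheck {
    static void validateUpdateType(String resultType) {
        // sentences() yields array<array<string>>; any array/map/struct result
        // cannot be assigned to a plain string column.
        if (resultType.startsWith("array") || resultType.startsWith("map")
                || resultType.startsWith("struct")) {
            throw new UnsupportedOperationException(
                "Unsupported data type: " + resultType);
        }
    }

    public static void main(String[] args) {
        validateUpdateType("string"); // scalar value: allowed
        try {
            validateUpdateType("array<array<string>>"); // sentences() result
        } catch (UnsupportedOperationException e) {
            System.out.println(e.getMessage());
        }
    }
}
```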

This closes  #1704


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/2610a609
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/2610a609
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/2610a609

Branch: refs/heads/branch-1.3
Commit: 2610a6091623c271552b7a69d402dded79ba3517
Parents: a9a0201
Author: dhatchayani 
Authored: Wed Dec 20 18:16:10 2017 +0530
Committer: kumarvishal 
Committed: Fri Feb 2 21:22:30 2018 +0530

--
 .../sdv/generated/DataLoadingIUDTestCase.scala  |  8 
 .../testsuite/iud/UpdateCarbonTableTestCase.scala   | 13 +
 .../mutation/CarbonProjectForUpdateCommand.scala| 16 
 3 files changed, 33 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/2610a609/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
index b4459ab..4c232be 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
@@ -1858,13 +1858,13 @@ ignore("IUD-01-01-01_040-23", Include) {

 
 //Check for updating carbon table set column value to a value returned by 
split function
+//Split will give us array value
 test("IUD-01-01-01_040-25", Include) {
sql(s"""create table if not exists default.t_carbn01 (Active_status 
String,Item_type_cd INT,Qty_day_avg INT,Qty_total INT,Sell_price 
BIGINT,Sell_pricep DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code 
String,Item_name String,Outlet_name String,Update_time TIMESTAMP,Create_date 
String)STORED BY 'org.apache.carbondata.format'""").collect
  sql(s"""insert into default.t_carbn01  select * from 
default.t_carbn01b""").collect
- sql(s"""update default.t_carbn01  set (active_status)= (split('t','a')) 
""").collect
-  checkAnswer(s""" select active_status from default.t_carbn01  group by 
active_status """,
-Seq(Row("t\\")), "DataLoadingIUDTestCase_IUD-01-01-01_040-25")
-   sql(s"""drop table default.t_carbn01  """).collect
+ intercept[Exception] {
+   sql(s"""update default.t_carbn01  set (active_status)= (split('t','a')) 
""").collect
+ }
 }

 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/2610a609/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
index cf4fc07..98c9a16 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/iud/UpdateCarbonTableTestCase.scala
@@ -691,6 +691,19 @@ class UpdateCarbonTableTestCase extends QueryTest with 
BeforeAndAfterAll {
  CarbonCommonConstants.FILE_SEPARATOR + "Part0")
 assert(f.list().length == 2)
   }
+  test("test sentences func in update statement") {
+sql("drop table if exists senten")
+sql("create table senten(name string, comment string) stored by 
'carbondata'")
+sql("insert into senten select 'aaa','comment for aaa'")
+sql("insert into senten select 'bbb','comment for bbb'")
+sql("select * from senten").show()
+val errorMessage = intercept[Exception] {
+  sql("update senten set(comment)=(sentences('Hello there! How are 
you?'))").show()
+}.getMessage
+assert(errorMessage
+  .contains("Unsupported data type: Array"))
+sql("drop table if exists senten")
+  }
 
   override def afterAll {
 sql("use default")


[28/50] [abbrv] carbondata git commit: [CARBONDATA-2110]deprecate 'tempCSV' option of dataframe load

2018-02-03 Thread ravipesala
[CARBONDATA-2110] Deprecate the 'tempCSV' option of dataframe load

Deprecate the 'tempCSV' option of dataframe load: no temporary CSV file is
generated on HDFS, regardless of the value of tempCSV.

This closes #1916


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/da129d52
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/da129d52
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/da129d52

Branch: refs/heads/branch-1.3
Commit: da129d5277babe498fa5686fe53d01433d112bab
Parents: 6c097cb
Author: qiuchenjian <807169...@qq.com>
Authored: Sat Feb 3 00:14:07 2018 +0800
Committer: Jacky Li 
Committed: Sat Feb 3 15:29:08 2018 +0800

--
 .../testsuite/dataload/TestLoadDataFrame.scala  | 19 
 .../spark/sql/CarbonDataFrameWriter.scala   | 98 +---
 2 files changed, 20 insertions(+), 97 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/da129d52/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
index 6f03493..693c145 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestLoadDataFrame.scala
@@ -29,6 +29,7 @@ class TestLoadDataFrame extends QueryTest with 
BeforeAndAfterAll {
   var df: DataFrame = _
   var dataFrame: DataFrame = _
   var df2: DataFrame = _
+  var df3: DataFrame = _
   var booldf:DataFrame = _
 
 
@@ -52,6 +53,10 @@ class TestLoadDataFrame extends QueryTest with 
BeforeAndAfterAll {
   .map(x => ("key_" + x, "str_" + x, x, x * 2, x * 3))
   .toDF("c1", "c2", "c3", "c4", "c5")
 
+df3 = sqlContext.sparkContext.parallelize(1 to 3)
+  .map(x => (x.toString + "te,s\nt", x))
+  .toDF("c1", "c2")
+
 val boolrdd = sqlContext.sparkContext.parallelize(
   Row("anubhav",true) ::
 Row("prince",false) :: Nil)
@@ -74,6 +79,7 @@ class TestLoadDataFrame extends QueryTest with 
BeforeAndAfterAll {
 sql("DROP TABLE IF EXISTS carbon9")
 sql("DROP TABLE IF EXISTS carbon10")
 sql("DROP TABLE IF EXISTS carbon11")
+sql("DROP TABLE IF EXISTS carbon12")
 sql("DROP TABLE IF EXISTS df_write_sort_column_not_specified")
 sql("DROP TABLE IF EXISTS df_write_specify_sort_column")
 sql("DROP TABLE IF EXISTS df_write_empty_sort_column")
@@ -261,6 +267,19 @@ test("test the boolean data type"){
 val isStreaming: String = 
descResult.collect().find(row=>row(0).asInstanceOf[String].trim.equalsIgnoreCase("streaming")).get.get(1).asInstanceOf[String]
 assert(isStreaming.contains("true"))
   }
+
+  test("test datasource table with specified char") {
+
+df3.write
+  .format("carbondata")
+  .option("tableName", "carbon12")
+  .option("tempCSV", "true")
+  .mode(SaveMode.Overwrite)
+  .save()
+checkAnswer(
+  sql("select count(*) from carbon12"), Row(3)
+)
+  }
   private def getSortColumnValue(tableName: String): Array[String] = {
 val desc = sql(s"desc formatted $tableName")
 val sortColumnRow = desc.collect.find(r =>

http://git-wip-us.apache.org/repos/asf/carbondata/blob/da129d52/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
--
diff --git 
a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
 
b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
index 2b06375..2be89b1 100644
--- 
a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
+++ 
b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonDataFrameWriter.scala
@@ -17,16 +17,12 @@
 
 package org.apache.spark.sql
 
-import org.apache.hadoop.fs.Path
-import org.apache.hadoop.io.compress.GzipCodec
 import org.apache.spark.sql.execution.command.management.CarbonLoadDataCommand
 import org.apache.spark.sql.types._
 import org.apache.spark.sql.util.CarbonException
 
 import org.apache.carbondata.common.logging.LogServiceFactory
-import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.metadata.datatype.{DataTypes => CarbonType}
-import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.spark.CarbonOption
 
 class CarbonDataFrameWriter(sqlContext: SQLContext, val dataFrame: 

[02/50] [abbrv] carbondata git commit: [CARBONDATA-2096] Add query test case for 'merge_small_files' distribution

2018-02-03 Thread ravipesala
[CARBONDATA-2096] Add query test case for 'merge_small_files' distribution

Add query test case for 'merge_small_files' distribution

This closes #1882


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d90280af
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d90280af
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d90280af

Branch: refs/heads/branch-1.3
Commit: d90280afc8adcab741c7aa29a99b450af78cd8e9
Parents: 24ba2fe
Author: QiangCai 
Authored: Tue Jan 30 17:07:24 2018 +0800
Committer: Jacky Li 
Committed: Wed Jan 31 19:21:04 2018 +0800

--
 .../dataload/TestGlobalSortDataLoad.scala   | 27 ++--
 .../apache/spark/sql/test/util/QueryTest.scala  |  1 +
 2 files changed, 26 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d90280af/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
index 9ce9675..50a38f1 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/dataload/TestGlobalSortDataLoad.scala
@@ -25,14 +25,15 @@ import 
org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.spark.exception.MalformedCarbonCommandException
 import org.apache.spark.sql.Row
+import org.apache.spark.sql.execution.BatchedDataSourceScanExec
 import org.apache.spark.sql.test.TestQueryExecutor.projectPath
 import org.apache.spark.sql.test.util.QueryTest
 import org.scalatest.{BeforeAndAfterAll, BeforeAndAfterEach}
 
 import 
org.apache.carbondata.core.indexstore.blockletindex.SegmentIndexFileStore
 import org.apache.carbondata.core.metadata.CarbonMetadata
-import org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.util.path.CarbonStorePath
+import org.apache.carbondata.spark.rdd.CarbonScanRDD
 
 class TestGlobalSortDataLoad extends QueryTest with BeforeAndAfterEach with 
BeforeAndAfterAll {
   var filePath: String = s"$resourcesPath/globalsort"
@@ -272,7 +273,29 @@ class TestGlobalSortDataLoad extends QueryTest with 
BeforeAndAfterEach with Befo
 val carbonTable = CarbonMetadata.getInstance().getCarbonTable("default", 
"carbon_globalsort")
 val carbonTablePath = 
CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
 val segmentDir = carbonTablePath.getSegmentDir("0", "0")
-assertResult(5)(new File(segmentDir).listFiles().length)
+assertResult(Math.max(4, defaultParallelism) + 1)(new File(segmentDir).listFiles().length)
+  }
+
+  test("Query with small files") {
+try {
+  CarbonProperties.getInstance().addProperty(
+CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES)
+  for (i <- 0 until 10) {
+sql(s"insert into carbon_globalsort select $i, 'name_$i', 'city_$i', 
${ i % 100 }")
+  }
+  val df = sql("select * from carbon_globalsort")
+  val scanRdd = df.queryExecution.sparkPlan.collect {
+case b: BatchedDataSourceScanExec if b.rdd.isInstanceOf[CarbonScanRDD] =>
+  b.rdd.asInstanceOf[CarbonScanRDD]
+  }.head
+  assertResult(defaultParallelism)(scanRdd.getPartitions.length)
+  assertResult(10)(df.count)
+} finally {
+  CarbonProperties.getInstance().addProperty(
+CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_DEFAULT)
+}
   }
 
   // --- INSERT INTO 
---

http://git-wip-us.apache.org/repos/asf/carbondata/blob/d90280af/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
--
diff --git 
a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
 
b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
index 0079d1e..b87473a 100644
--- 
a/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
+++ 
b/integration/spark-common/src/main/scala/org/apache/spark/sql/test/util/QueryTest.scala
@@ -107,6 +107,7 @@ 

[32/50] [abbrv] carbondata git commit: [CARBONDATA-2093] Use small file feature of global sort to minimise the carbondata file count

2018-02-03 Thread ravipesala
[CARBONDATA-2093] Use small file feature of global sort to minimise the 
carbondata file count

This closes #1876


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e527c059
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e527c059
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e527c059

Branch: refs/heads/branch-1.3
Commit: e527c059e81e58503d568c82a3e7ac822a8a5b47
Parents: 8875775
Author: ravipesala 
Authored: Sun Jan 28 20:37:21 2018 +0530
Committer: QiangCai 
Committed: Sat Feb 3 16:36:30 2018 +0800

--
 .../StandardPartitionTableLoadingTestCase.scala |  77 ++-
 .../load/DataLoadProcessBuilderOnSpark.scala| 130 +--
 .../carbondata/spark/util/DataLoadingUtil.scala | 127 ++
 .../management/CarbonLoadDataCommand.scala  |  94 ++
 .../sort/sortdata/SortParameters.java   |   4 +
 5 files changed, 249 insertions(+), 183 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e527c059/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
index 16f252b..669d6e7 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableLoadingTestCase.scala
@@ -16,11 +16,12 @@
  */
 package org.apache.carbondata.spark.testsuite.standardpartition
 
-import java.io.{File, IOException}
+import java.io.{File, FileWriter, IOException}
 import java.util
 import java.util.concurrent.{Callable, ExecutorService, Executors}
 
 import org.apache.commons.io.FileUtils
+import org.apache.spark.sql.execution.BatchedDataSourceScanExec
 import org.apache.spark.sql.test.util.QueryTest
 import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row}
 import org.scalatest.BeforeAndAfterAll
@@ -30,7 +31,8 @@ import 
org.apache.carbondata.core.datastore.filesystem.{CarbonFile, CarbonFileFi
 import org.apache.carbondata.core.datastore.impl.FileFactory
 import org.apache.carbondata.core.metadata.CarbonMetadata
 import org.apache.carbondata.core.util.CarbonProperties
-import org.apache.carbondata.core.util.path.CarbonTablePath
+import org.apache.carbondata.core.util.path.{CarbonStorePath, CarbonTablePath}
+import org.apache.carbondata.spark.rdd.CarbonScanRDD
 
 class StandardPartitionTableLoadingTestCase extends QueryTest with 
BeforeAndAfterAll {
   var executorService: ExecutorService = _
@@ -409,6 +411,75 @@ class StandardPartitionTableLoadingTestCase extends 
QueryTest with BeforeAndAfte
   sql("select * from  casesensitivepartition where empno=17"))
   }
 
+  test("Partition LOAD with small files") {
+sql("DROP TABLE IF EXISTS smallpartitionfiles")
+sql(
+  """
+| CREATE TABLE smallpartitionfiles(id INT, name STRING, age INT) 
PARTITIONED BY(city STRING)
+| STORED BY 'org.apache.carbondata.format'
+  """.stripMargin)
+val inputPath = new File("target/small_files").getCanonicalPath
+val folder = new File(inputPath)
+if (folder.exists()) {
+  FileUtils.deleteDirectory(folder)
+}
+folder.mkdir()
+for (i <- 0 to 100) {
+  val file = s"$folder/file$i.csv"
+  val writer = new FileWriter(file)
+  writer.write("id,name,city,age\n")
+  writer.write(s"$i,name_$i,city_${i % 5},${ i % 100 }")
+  writer.close()
+}
+sql(s"LOAD DATA LOCAL INPATH '$inputPath' INTO TABLE smallpartitionfiles")
+FileUtils.deleteDirectory(folder)
+val carbonTable = CarbonMetadata.getInstance().getCarbonTable("default", 
"smallpartitionfiles")
+val carbonTablePath = 
CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+val segmentDir = carbonTablePath.getSegmentDir("0", "0")
+assert(new File(segmentDir).listFiles().length < 50)
+  }
+
+  test("verify partition read with small files") {
+try {
+  
CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_TASK_DISTRIBUTION,
+CarbonCommonConstants.CARBON_TASK_DISTRIBUTION_MERGE_FILES)
+  sql("DROP TABLE IF EXISTS smallpartitionfilesread")
+  sql(
+"""
+  | CREATE TABLE smallpartitionfilesread(id INT, name 

[10/50] [abbrv] carbondata git commit: [CARBONDATA-2092] Fix compaction bug to prevent the compaction flow from going through the restructure compaction flow

2018-02-03 Thread ravipesala
[CARBONDATA-2092] Fix compaction bug to prevent the compaction flow from going 
through the restructure compaction flow

Problem and analysis:
During data load, the current schema timestamp is written to the carbondata
file header. During compaction it is used to decide whether a block is a
restructured block or conforms to the latest schema. Because the blocklet
information is now stored in the index file, the carbondata file header is not
read while loading it into memory, so the schema timestamp never gets set on
the blocklet information. As a result, during compaction the current schema
timestamp does not match the timestamp stored in the block, and the flow goes
through the restructure compaction flow instead of the normal one.

Impact:
Compaction performance degradation as restructure compaction flow involves 
sorting of data again.

Solution:
Modified the code so that compaction goes through the restructure compaction
flow only when a restructure (add or drop column) operation has actually been
performed.
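
The decision the fix restores can be sketched as follows. This is an illustrative model, not CarbonData's actual API: route a block to the restructure compaction flow only when it carries a schema timestamp that is present and differs from the table's current one, and keep blocks from old index files (which never stored a timestamp) on the normal flow.

```java
// Illustrative sketch only -- class and method names are assumptions, not
// CarbonData's actual API. It models the routing decision described above.
public class SchemaTimestampCheck {

    // blockSchemaTime == 0L models an old-format index file that never
    // stored a schema timestamp.
    public static boolean isRestructuredBlock(long currentSchemaTime, long blockSchemaTime) {
        if (blockSchemaTime == 0L) {
            // Old index header: no timestamp was written, so do not force
            // the (slower) restructure flow that re-sorts the data.
            return false;
        }
        return blockSchemaTime != currentSchemaTime;
    }

    public static void main(String[] args) {
        // Matching timestamps: normal compaction flow.
        if (isRestructuredBlock(100L, 100L)) throw new AssertionError();
        // A block written under an older schema: restructure flow.
        if (!isRestructuredBlock(100L, 90L)) throw new AssertionError();
        // Old index file without a timestamp: stay on the normal flow.
        if (isRestructuredBlock(100L, 0L)) throw new AssertionError();
    }
}
```

Before the fix, the missing timestamp on blocklets loaded from the index file behaved like a mismatch, which is why every compaction took the restructure path.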

This closes #1875


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7ed144c5
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/7ed144c5
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/7ed144c5

Branch: refs/heads/branch-1.3
Commit: 7ed144c537b48353de1ee8bf710c884d555c01ce
Parents: f34ea5c
Author: manishgupta88 
Authored: Tue Jan 23 21:12:39 2018 +0530
Committer: ravipesala 
Committed: Thu Feb 1 12:05:01 2018 +0530

--
 .../util/AbstractDataFileFooterConverter.java   |  4 
 .../core/util/CarbonMetadataUtil.java   |  6 -
 .../apache/carbondata/core/util/CarbonUtil.java | 24 +---
 .../core/util/CarbonMetadataUtilTest.java   |  3 ++-
 format/src/main/thrift/carbondata_index.thrift  |  1 +
 .../carbondata/hadoop/CarbonInputSplit.java |  2 ++
 .../carbondata/spark/rdd/CarbonMergerRDD.scala  |  3 +++
 .../CarbonGetTableDetailComandTestCase.scala|  5 ++--
 .../processing/merger/CarbonCompactionUtil.java | 11 -
 .../store/writer/AbstractFactDataWriter.java|  3 ++-
 10 files changed, 53 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
 
b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
index 5ebf4cf..c7bc6aa 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/util/AbstractDataFileFooterConverter.java
@@ -165,6 +165,10 @@ public abstract class AbstractDataFileFooterConverter {
 dataFileFooter.setBlockInfo(new BlockInfo(tableBlockInfo));
 dataFileFooter.setSegmentInfo(segmentInfo);
 dataFileFooter.setVersionId(tableBlockInfo.getVersion());
+// In case of old schema the time stamp will not be found in the index header
+if (readIndexHeader.isSetSchema_time_stamp()) {
+  dataFileFooter.setSchemaUpdatedTimeStamp(readIndexHeader.getSchema_time_stamp());
+}
 if (readBlockIndexInfo.isSetBlocklet_info()) {
   List blockletInfoList = new ArrayList();
   BlockletInfo blockletInfo = new DataFileFooterConverterV3()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/7ed144c5/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java 
b/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
index 0ca0df8..9880b4d 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonMetadataUtil.java
@@ -234,10 +234,12 @@ public class CarbonMetadataUtil {
*
* @param columnCardinality cardinality of each column
* @param columnSchemaList  list of column present in the table
+   * @param bucketNumber
+   * @param schemaTimeStamp current timestamp of schema
* @return Index header object
*/
   public static IndexHeader getIndexHeader(int[] columnCardinality,
-  List columnSchemaList, int bucketNumber) {
+  List columnSchemaList, int bucketNumber, long 
schemaTimeStamp) {
 // create segment info object
 SegmentInfo segmentInfo = new SegmentInfo();
 // 

[11/50] [abbrv] carbondata git commit: [CARBONDATA-2111] Fix the decoder issue when multiple joins are present in the TPCH query

2018-02-03 Thread ravipesala
[CARBONDATA-2111] Fix the decoder issue when multiple joins are present in the 
TPCH query

Problem
A TPCH query with multiple joins returns no rows.

Analysis
Dictionary-column decoding does not happen for some joins when the query is a
self join and the same column participates in multiple joins.

Solution
If project-list attributes are not among the attributes the decoder will
decode, add them to the notDecodeCarryForward list; otherwise decoding of
those columns may be skipped in the join case. If the left and right plans
both use the same attribute, but it is decoded on one side and not on the
other, the decision should be based on the project-list plan above.

This closes #1895


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c9a02fc2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c9a02fc2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/c9a02fc2

Branch: refs/heads/branch-1.3
Commit: c9a02fc2a8389288085fae4ba5d7375d11de22ff
Parents: 7ed144c
Author: ravipesala 
Authored: Wed Jan 31 18:39:05 2018 +0530
Committer: manishgupta88 
Committed: Thu Feb 1 12:43:48 2018 +0530

--
 .../allqueries/AllDataTypesTestCase.scala   | 47 +++-
 .../CarbonDecoderOptimizerHelper.scala  |  4 ++
 .../sql/optimizer/CarbonLateDecodeRule.scala| 26 +++
 3 files changed, 76 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/c9a02fc2/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
index e739091..afff2d0 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/allqueries/AllDataTypesTestCase.scala
@@ -17,8 +17,10 @@
 
 package org.apache.carbondata.spark.testsuite.allqueries
 
-import org.apache.spark.sql.{Row, SaveMode}
+import org.apache.spark.sql.catalyst.plans.logical.Join
+import org.apache.spark.sql.{CarbonDictionaryCatalystDecoder, Row, SaveMode}
 import org.scalatest.BeforeAndAfterAll
+
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.test.util.QueryTest
@@ -1154,4 +1156,47 @@ class AllDataTypesTestCase extends QueryTest with 
BeforeAndAfterAll {
 
   }
 
+  test("TPCH query issue with not joining with decoded values") {
+
+sql("drop table if exists SUPPLIER")
+sql("drop table if exists PARTSUPP")
+sql("drop table if exists CUSTOMER")
+sql("drop table if exists NATION")
+sql("drop table if exists REGION")
+sql("drop table if exists PART")
+sql("drop table if exists LINEITEM")
+sql("drop table if exists ORDERS")
+sql("create table if not exists SUPPLIER(S_COMMENT string,S_SUPPKEY 
string,S_NAME string, S_ADDRESS string, S_NATIONKEY string, S_PHONE string, 
S_ACCTBAL double) STORED BY 'org.apache.carbondata.format'TBLPROPERTIES 
('DICTIONARY_EXCLUDE'='S_COMMENT, S_SUPPKEY, S_NAME, S_ADDRESS, S_NATIONKEY, 
S_PHONE','table_blocksize'='300','SORT_COLUMNS'='')")
+sql("create table if not exists PARTSUPP (  PS_PARTKEY int,  PS_SUPPKEY  
string,  PS_AVAILQTY  int,  PS_SUPPLYCOST  double,  PS_COMMENT  string) STORED 
BY 'org.apache.carbondata.format'TBLPROPERTIES 
('DICTIONARY_EXCLUDE'='PS_SUPPKEY,PS_COMMENT', 'table_blocksize'='300', 
'no_inverted_index'='PS_SUPPKEY, PS_COMMENT','SORT_COLUMNS'='')")
+sql("create table if not exists CUSTOMER(  C_MKTSEGMENT string,  
C_NATIONKEY string,  C_CUSTKEY string,  C_NAME string,  C_ADDRESS string,  
C_PHONE string,  C_ACCTBAL double,  C_COMMENT string) STORED BY 
'org.apache.carbondata.format'TBLPROPERTIES 
('DICTIONARY_INCLUDE'='C_MKTSEGMENT,C_NATIONKEY','DICTIONARY_EXCLUDE'='C_CUSTKEY,C_NAME,C_ADDRESS,C_PHONE,C_COMMENT',
 'table_blocksize'='300', 
'no_inverted_index'='C_CUSTKEY,C_NAME,C_ADDRESS,C_PHONE,C_COMMENT','SORT_COLUMNS'='C_MKTSEGMENT')")
+sql("create table if not exists NATION (  N_NAME string,  N_NATIONKEY 
string,  N_REGIONKEY string,  N_COMMENT  string) STORED BY 
'org.apache.carbondata.format'TBLPROPERTIES 
('DICTIONARY_INCLUDE'='N_REGIONKEY','DICTIONARY_EXCLUDE'='N_COMMENT', 

[07/50] [abbrv] carbondata git commit: [HOTFIX] Correct the order of dropping pre-aggregate tables. Pre-aggregate tables must be dropped before the main table is dropped

2018-02-03 Thread ravipesala
[HOTFIX] Correct the order of dropping pre-aggregate tables. Pre-aggregate
tables must be dropped before the main table is dropped.

This closes #1900


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/c8a3eb5c
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/c8a3eb5c
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/c8a3eb5c

Branch: refs/heads/branch-1.3
Commit: c8a3eb5cf79a3e68e7b4f19311f35ceca95b3003
Parents: 3dff273
Author: Raghunandan S 
Authored: Thu Feb 1 07:42:09 2018 +0530
Committer: Jacky Li 
Committed: Thu Feb 1 10:37:17 2018 +0800

--
 .../standardpartition/StandardPartitionTableQueryTestCase.scala   | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/c8a3eb5c/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
index b1fc0a7..0a86dee 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/standardpartition/StandardPartitionTableQueryTestCase.scala
@@ -274,7 +274,6 @@ test("Creation of partition table should fail if the 
colname in table schema and
 
   test("drop partition on preAggregate table should fail"){
 sql("drop table if exists partitionTable")
-sql("drop datamap if exists preaggTable on table partitionTable")
 sql("create table partitionTable (id int,city string,age int) partitioned 
by(name string) stored by 'carbondata'".stripMargin)
 sql(
   s"""create datamap preaggTable on table partitionTable using 
'preaggregate' as select id,sum(age) from partitionTable group by id"""
@@ -285,6 +284,7 @@ test("Creation of partition table should fail if the 
colname in table schema and
 intercept[Exception]{
   sql("alter table partitionTable drop PARTITION(name='John')")
 }
+sql("drop datamap if exists preaggTable on table partitionTable")
   }
 
 
@@ -318,7 +318,6 @@ test("Creation of partition table should fail if the 
colname in table schema and
 sql("drop table if exists badrecordsPartitionintnull")
 sql("drop table if exists badrecordsPartitionintnullalt")
 sql("drop table if exists partitionTable")
-sql("drop datamap if exists preaggTable on table partitionTable")
   }
 
 }



[40/50] [abbrv] carbondata git commit: [HOTFIX] Fix streaming test case issue for file input source

2018-02-03 Thread ravipesala
[HOTFIX] Fix streaming test case issue for file input source

Fix streaming test case issue for file input source

This closes #1922


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/36ff9321
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/36ff9321
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/36ff9321

Branch: refs/heads/branch-1.3
Commit: 36ff93216d7acce7bde7287a285d00944065da3b
Parents: 11f2371
Author: QiangCai 
Authored: Sat Feb 3 18:04:49 2018 +0800
Committer: chenliang613 
Committed: Sat Feb 3 21:27:31 2018 +0800

--
 .../spark/carbondata/TestStreamingTableOperation.scala   | 11 ---
 1 file changed, 4 insertions(+), 7 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/36ff9321/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
--
diff --git 
a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
 
b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
index e1e41dc..a368cef 100644
--- 
a/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
+++ 
b/integration/spark2/src/test/scala/org/apache/spark/carbondata/TestStreamingTableOperation.scala
@@ -233,6 +233,9 @@ class TestStreamingTableOperation extends QueryTest with 
BeforeAndAfterAll {
   sql("select count(*) from streaming.stream_table_file"),
   Seq(Row(25))
 )
+
+val row = sql("select * from streaming.stream_table_file order by id").head()
+assertResult(Row(10, "name_10", "city_10", 10.0))(row)
   }
 
   // bad records
@@ -875,13 +878,7 @@ class TestStreamingTableOperation extends QueryTest with 
BeforeAndAfterAll {
   .add("file", "string")
 var qry: StreamingQuery = null
 try {
-  val readSocketDF = spark.readStream
-.format("csv")
-.option("sep", ",")
-.schema(inputSchema)
-.option("path", csvDataDir)
-.option("header", "false")
-.load()
+  val readSocketDF = spark.readStream.text(csvDataDir)
 
   // Write data from socket stream to carbondata file
   qry = readSocketDF.writeStream



[22/50] [abbrv] carbondata git commit: [CARBONDATA-2078][CARBONDATA-1516] Add 'if not exists' for creating datamap

2018-02-03 Thread ravipesala
[CARBONDATA-2078][CARBONDATA-1516] Add 'if not exists' for creating datamap

Add 'if not exists' function for creating datamap

This closes #1861


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f9606e9d
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f9606e9d
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/f9606e9d

Branch: refs/heads/branch-1.3
Commit: f9606e9d03d55bf57925c2ac176e92553c213d49
Parents: 02eefca
Author: xubo245 <601450...@qq.com>
Authored: Thu Jan 25 21:27:27 2018 +0800
Committer: kunal642 
Committed: Fri Feb 2 12:08:54 2018 +0530

--
 .../preaggregate/TestPreAggCreateCommand.scala  | 55 ++-
 .../preaggregate/TestPreAggregateLoad.scala | 96 ++-
 .../timeseries/TestTimeSeriesCreateTable.scala  | 85 +
 .../timeseries/TestTimeseriesDataLoad.scala | 99 +++-
 .../datamap/CarbonCreateDataMapCommand.scala| 38 ++--
 .../CreatePreAggregateTableCommand.scala|  5 +-
 .../sql/parser/CarbonSpark2SqlParser.scala  |  9 +-
 7 files changed, 353 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
index f1d7396..0cb1045 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala
@@ -19,7 +19,7 @@ package 
org.apache.carbondata.integration.spark.testsuite.preaggregate
 
 import scala.collection.JavaConverters._
 
-import org.apache.spark.sql.CarbonDatasourceHadoopRelation
+import org.apache.spark.sql.{AnalysisException, CarbonDatasourceHadoopRelation}
 import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
 import org.apache.spark.sql.execution.datasources.LogicalRelation
 import org.apache.spark.sql.hive.CarbonRelation
@@ -321,7 +321,60 @@ class TestPreAggCreateCommand extends QueryTest with 
BeforeAndAfterAll {
 checkExistence(sql("show tables"), false, 
"tbl_1_agg2_day","tbl_1_agg2_hour","tbl_1_agg2_month","tbl_1_agg2_year")
   }
 
+  test("test pre agg create table 21: should support 'if not exists'") {
+try {
+  sql(
+"""
+  | CREATE DATAMAP IF NOT EXISTS agg0 ON TABLE mainTable
+  | USING 'preaggregate'
+  | AS SELECT
+  |   column3,
+  |   sum(column3),
+  |   column5,
+  |   sum(column5)
+  | FROM maintable
+  | GROUP BY column3,column5,column2
+""".stripMargin)
+
+  sql(
+"""
+  | CREATE DATAMAP IF NOT EXISTS agg0 ON TABLE mainTable
+  | USING 'preaggregate'
+  | AS SELECT
+  |   column3,
+  |   sum(column3),
+  |   column5,
+  |   sum(column5)
+  | FROM maintable
+  | GROUP BY column3,column5,column2
+""".stripMargin)
+  assert(true)
+} catch {
+  case _: Exception =>
+assert(false)
+}
+sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
+  }
 
+  test("test pre agg create table 22: don't support 'create datamap if 
exists'") {
+val e: Exception = intercept[AnalysisException] {
+  sql(
+"""
+  | CREATE DATAMAP IF EXISTS agg0 ON TABLE mainTable
+  | USING 'preaggregate'
+  | AS SELECT
+  |   column3,
+  |   sum(column3),
+  |   column5,
+  |   sum(column5)
+  | FROM maintable
+  | GROUP BY column3,column5,column2
+""".stripMargin)
+  assert(true)
+}
+assert(e.getMessage.contains("identifier matching regex"))
+sql("DROP DATAMAP IF EXISTS agg0 ON TABLE maintable")
+  }
 
   def getCarbontable(plan: LogicalPlan) : CarbonTable ={
 var carbonTable : CarbonTable = null

http://git-wip-us.apache.org/repos/asf/carbondata/blob/f9606e9d/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateLoad.scala
--
diff --git 

[35/50] [abbrv] carbondata git commit: [CARBONDATA-2117] Fixed synchronization issue in CarbonEnv

2018-02-03 Thread ravipesala
[CARBONDATA-2117] Fixed synchronization issue in CarbonEnv

Problem: when creating many sessions (around 100), session initialisation fails
with the error below:

java.lang.IllegalArgumentException: requirement failed: Config entry
enable.unsafe.sort already registered!

Solution: CarbonEnv currently updates the global (shared) configuration and the
location configuration inside an instance-level synchronized block. With
multiple sessions that lock does not serialize the updates, so a global lock is
needed to ensure only one thread updates the global configuration at a time.
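The race and the fix can be sketched in plain Java (hypothetical class and
field names, not the actual CarbonEnv code): a per-instance lock cannot
serialize two different sessions, so the check-then-register step on the shared
configuration must be guarded by a lock object shared by all sessions.

```java
import java.util.HashMap;
import java.util.Map;

public class GlobalLockDemo {
    // Shared registry: throws if the same key is registered twice, mimicking
    // Spark's "Config entry ... already registered!" failure.
    static final Map<String, String> registry = new HashMap<>();

    static void register(String key, String value) {
        if (registry.containsKey(key)) {
            throw new IllegalArgumentException("Config entry " + key + " already registered!");
        }
        registry.put(key, value);
    }

    // Shared lock object (the patch reuses the shared carbonEnvMap for this role).
    static final Object globalLock = new Object();

    static void initSession() {
        // synchronized(this) would only guard one instance; two sessions could
        // still race on the registry. Locking a shared object makes the
        // check-then-register below atomic across every session.
        synchronized (globalLock) {
            if (!registry.containsKey("enable.unsafe.sort")) {
                register("enable.unsafe.sort", "true");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(GlobalLockDemo::initSession);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Despite 100 concurrent sessions, the entry is registered exactly once.
        System.out.println("registered once: " + registry.size());
    }
}
```

Running the 100-session simulation above never hits the duplicate-registration
exception, which is the behaviour the patch restores.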

This closes #1908


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/44e70d08
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/44e70d08
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/44e70d08

Branch: refs/heads/branch-1.3
Commit: 44e70d08e0c73e2c65e9a0d147cbbbe965aaf9f7
Parents: 6fd778a
Author: kumarvishal 
Authored: Thu Feb 1 23:13:54 2018 +0530
Committer: Jacky Li 
Committed: Sat Feb 3 17:36:40 2018 +0800

--
 .../spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala| 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/44e70d08/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
--
diff --git 
a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala 
b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 40035ce..6b12008 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -68,7 +68,8 @@ class CarbonEnv {
 
 // added for handling timeseries function like hour, minute, day , month , 
year
 sparkSession.udf.register("timeseries", new TimeSeriesFunction)
-synchronized {
+// acquiring global level lock so global configuration will be updated by 
only one thread
+CarbonEnv.carbonEnvMap.synchronized {
   if (!initialized) {
 // update carbon session parameters , preserve thread parameters
 val currentThreadSesssionInfo = 
ThreadLocalSessionInfo.getCarbonSessionInfo



[43/50] [abbrv] carbondata git commit: [CARBONDATA-2101]Restrict direct query on pre aggregate and timeseries datamap

2018-02-03 Thread ravipesala
[CARBONDATA-2101] Restrict direct query on pre-aggregate and timeseries datamap

Restricts direct queries on pre-aggregate and timeseries datamaps.
Adds a property to allow direct queries on a datamap for testing purposes:
validate.support.direct.query.on.datamap=true
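A hedged sketch of the gate this patch adds (hypothetical helper, not the
actual CarbonData code): the two constants introduced in the diff below give a
validation toggle (default true) and an enable flag (default false), so a
query-time check might behave like this.

```java
import java.util.HashMap;
import java.util.Map;

public class DirectQueryGate {
    // Sketch only: property names are taken from the constants added in this
    // commit; the surrounding logic is an assumption for illustration.
    static boolean directQueryAllowed(Map<String, String> sessionParams) {
        boolean validate = Boolean.parseBoolean(
            sessionParams.getOrDefault("carbon.query.validate.directqueryondatamap", "true"));
        if (!validate) {
            return true; // validation switched off, e.g. by the test suites
        }
        return Boolean.parseBoolean(
            sessionParams.getOrDefault("carbon.query.directQueryOnDataMap.enabled", "false"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        System.out.println(directQueryAllowed(params)); // false by default
        params.put("carbon.query.directQueryOnDataMap.enabled", "true");
        System.out.println(directQueryAllowed(params)); // true once enabled
    }
}
```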

This closes #1888


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/349be007
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/349be007
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/349be007

Branch: refs/heads/branch-1.3
Commit: 349be007fd20fb8c4a39b318e45b47445d2e798c
Parents: 46d9bf9
Author: kumarvishal 
Authored: Tue Jan 30 20:54:12 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:32:08 2018 +0530

--
 .../core/constants/CarbonCommonConstants.java   | 10 +
 .../carbondata/core/util/SessionParams.java |  2 +
 .../spark/sql/common/util/QueryTest.scala   |  4 ++
 .../apache/spark/sql/test/util/QueryTest.scala  |  3 ++
 .../spark/rdd/AggregateDataMapCompactor.scala   |  2 +
 .../sql/CarbonDatasourceHadoopRelation.scala|  1 +
 .../scala/org/apache/spark/sql/CarbonEnv.scala  | 18 +
 .../preaaggregate/PreAggregateUtil.scala|  2 +
 .../sql/hive/CarbonPreAggregateRules.scala  |  9 +
 .../sql/optimizer/CarbonLateDecodeRule.scala| 40 +++-
 10 files changed, 89 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index a799e51..6e6482d 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1588,6 +1588,16 @@ public final class CarbonCommonConstants {
   "carbon.sort.storage.inmemory.size.inmb";
   public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT = 
"512";
 
+  @CarbonProperty
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP =
+  "carbon.query.directQueryOnDataMap.enabled";
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = 
"false";
+
+  @CarbonProperty
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP =
+  "carbon.query.validate.directqueryondatamap";
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = 
"true";
+
   private CarbonCommonConstants() {
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java 
b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
index ddc7539..a6ff61e 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
@@ -199,6 +199,8 @@ public class SessionParams implements Serializable {
   }
 } else if 
(key.startsWith(CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS)) {
   isValid = true;
+} else if 
(key.equalsIgnoreCase(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP)) {
+  isValid = true;
 } else {
   throw new InvalidConfigurationException(
   "The key " + key + " not supported for dynamic configuration.");

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
index d80efb8..9c5bc38 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
@@ -33,7 +33,9 @@ import org.apache.spark.sql.test.{ResourceRegisterAndCopier, 
TestQueryExecutor}
 import org.apache.spark.sql.{CarbonSession, DataFrame, Row, SQLContext}
 import org.scalatest.Suite
 
+import org.apache.carbondata.core.constants.CarbonCommonConstants
 import 

[16/50] [abbrv] carbondata git commit: [CARBONDATA-2113] Compatibility fix for V2

2018-02-03 Thread ravipesala
[CARBONDATA-2113] Compatibility fix for V2

Fixes related to backward compatibility:

Count(*) issue:
when count(*) is run on an old store whose file format version is V2, the files
could not be found: while forming the file path (when creating the table block
info object from the index file info) the path was built with a double slash
(//), because V2 stored a local path. Looking that path up by key therefore
failed. Fix: form the correct file path.

Select * issue:
when select * is run, only the data chunk was treated as the whole data, and
uncompressing the measure data failed. Fix: read the proper data and data chunk
before uncompressing.

Read metadata file:
when readMetadataFile is called, reading the schema explicitly returned null,
so the columns were null and a NullPointerException followed. Fix: do not
return null; return the proper schema by reading the footer.

Version compatibility:
the calculation of the number of pages to fill based on row count was not
handled for the V2 version; this is now handled, since the number of rows per
page differs between V2 and V3.
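The count(*) part of the fix comes down to joining path segments without
producing a double slash, since the block map is keyed by the exact path
string. A minimal sketch (hypothetical helper, not the actual CarbonData
path-handling code):

```java
public class PathFix {
    // Sketch: a V2 store recorded a local directory path; naively
    // concatenating it with a file name can yield "dir//file", and a lookup
    // keyed by the exact string then misses. Normalize the separator instead.
    static String join(String dir, String file) {
        if (dir.endsWith("/")) {
            dir = dir.substring(0, dir.length() - 1);
        }
        if (!file.startsWith("/")) {
            file = "/" + file;
        }
        return dir + file;
    }

    public static void main(String[] args) {
        // Both calls produce the same single-slash path.
        System.out.println(join("/store/table/", "/part-0.carbondata"));
        System.out.println(join("/store/table", "part-0.carbondata"));
    }
}
```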

This closes #1901


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1202e209
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1202e209
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1202e209

Branch: refs/heads/branch-1.3
Commit: 1202e209eccda172f9671fa0cc2deeebeb4af456
Parents: 1248bd4
Author: akashrn5 
Authored: Thu Feb 1 12:42:49 2018 +0530
Committer: ravipesala 
Committed: Thu Feb 1 22:16:39 2018 +0530

--
 .../core/constants/CarbonVersionConstants.java   |  5 +
 .../v2/CompressedMeasureChunkFileBasedReaderV2.java  | 15 ++-
 .../blockletindex/BlockletDataRefNodeWrapper.java| 13 +++--
 .../carbondata/core/mutate/CarbonUpdateUtil.java | 10 --
 .../core/util/AbstractDataFileFooterConverter.java   |  3 ++-
 .../core/util/DataFileFooterConverter2.java  |  2 +-
 6 files changed, 33 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
index 2d58b0b..22fbaf2 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonVersionConstants.java
@@ -49,6 +49,11 @@ public final class CarbonVersionConstants {
*/
   public static final String CARBONDATA_BUILD_DATE;
 
+  /**
+   * number of rows per blocklet column page default value for V2 version
+   */
+  public static final int NUMBER_OF_ROWS_PER_BLOCKLET_COLUMN_PAGE_DEFAULT_V2 = 
12;
+
   static {
 // create input stream for CARBONDATA_VERSION_INFO_FILE
 InputStream resourceStream = Thread.currentThread().getContextClassLoader()

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1202e209/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
 
b/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
index 2ddc202..d61f98a 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/datastore/chunk/reader/measure/v2/CompressedMeasureChunkFileBasedReaderV2.java
@@ -52,7 +52,14 @@ public class CompressedMeasureChunkFileBasedReaderV2 extends 
AbstractMeasureChun
   throws IOException {
 int dataLength = 0;
 if (measureColumnChunkOffsets.size() - 1 == columnIndex) {
-  dataLength = measureColumnChunkLength.get(columnIndex);
+  DataChunk2 metadataChunk = null;
+  synchronized (fileReader) {
+metadataChunk = CarbonUtil.readDataChunk(ByteBuffer.wrap(fileReader
+.readByteArray(filePath, 
measureColumnChunkOffsets.get(columnIndex),
+measureColumnChunkLength.get(columnIndex))), 0,
+measureColumnChunkLength.get(columnIndex));
+  }
+  dataLength = measureColumnChunkLength.get(columnIndex) + 
metadataChunk.data_page_length;
 } else {
   long currentMeasureOffset = 

[01/50] [abbrv] carbondata git commit: [HOTFIX] Correct CI url and add standard partition usage [Forced Update!]

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/branch-1.3 ba7589805 -> e16e87818 (forced update)


[HOTFIX] Correct CI url and add standard partition usage

This closes #1889


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/24ba2fe2
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/24ba2fe2
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/24ba2fe2

Branch: refs/heads/branch-1.3
Commit: 24ba2fe2226f9168dcde6c216948f8656488293d
Parents: 8a86d3f
Author: chenliang613 
Authored: Tue Jan 30 22:35:02 2018 +0800
Committer: Jacky Li 
Committed: Wed Jan 31 19:18:26 2018 +0800

--
 README.md   | 12 +++
 docs/data-management-on-carbondata.md   | 38 ++--
 .../examples/StandardPartitionExample.scala |  7 ++--
 .../preaggregate/TestPreAggCreateCommand.scala  | 17 +
 4 files changed, 61 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/README.md
--
diff --git a/README.md b/README.md
index 15dba93..3b6792e 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@
 
 
 
-Apache CarbonData is an indexed columnar data format for fast analytics on big 
data platform, e.g.Apache Hadoop, Apache Spark, etc.
+Apache CarbonData is an indexed columnar data store solution for fast 
analytics on big data platform, e.g.Apache Hadoop, Apache Spark, etc.
 
 You can find the latest CarbonData document and learn more at:
 [http://carbondata.apache.org](http://carbondata.apache.org/)
@@ -25,14 +25,9 @@ You can find the latest CarbonData document and learn more 
at:
 [CarbonData cwiki](https://cwiki.apache.org/confluence/display/CARBONDATA/)
 
 ## Status
-Spark2.1:
-[![Build 
Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-2.1)](https://builds.apache.org/view/A-D/view/CarbonData/job/carbondata-master-spark-2.1/badge/icon)
+Spark2.2:
+[![Build 
Status](https://builds.apache.org/buildStatus/icon?job=carbondata-master-spark-2.2)](https://builds.apache.org/view/A-D/view/CarbonData/job/carbondata-master-spark-2.2/lastBuild/testReport)
 [![Coverage 
Status](https://coveralls.io/repos/github/apache/carbondata/badge.svg?branch=master)](https://coveralls.io/github/apache/carbondata?branch=master)
-## Features
-CarbonData file format is a columnar store in HDFS, it has many features that 
a modern columnar format has, such as splittable, compression schema ,complex 
data type etc, and CarbonData has following unique features:
-* Stores data along with index: it can significantly accelerate query 
performance and reduces the I/O scans and CPU resources, where there are 
filters in the query.  CarbonData index consists of multiple level of indices, 
a processing framework can leverage this index to reduce the task it needs to 
schedule and process, and it can also do skip scan in more finer grain unit 
(called blocklet) in task side scanning instead of scanning the whole file. 
-* Operable encoded data :Through supporting efficient compression and global 
encoding schemes, can query on compressed/encoded data, the data can be 
converted just before returning the results to the users, which is "late 
materialized". 
-* Supports for various use cases with one single Data format : like 
interactive OLAP-style query, Sequential Access (big scan), Random Access 
(narrow scan). 
 
 ## Building CarbonData
 CarbonData is built using Apache Maven, to [build 
CarbonData](https://github.com/apache/carbondata/blob/master/build)
@@ -50,6 +45,7 @@ CarbonData is built using Apache Maven, to [build 
CarbonData](https://github.com
 
 ## Other Technical Material
 [Apache CarbonData meetup 
material](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=66850609)
+[Use Case 
Articles](https://cwiki.apache.org/confluence/display/CARBONDATA/CarbonData+Articles)
 
 ## Fork and Contribute
 This is an active open source project for everyone, and we are always open to 
people who want to use this system or contribute to it. 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/24ba2fe2/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 3af95ac..d7954e1 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -567,9 +567,43 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   ALTER TABLE table_name COMPACT 'MAJOR'
   ```
 
-## PARTITION
+  - **CLEAN SEGMENTS AFTER Compaction**
+  
+  Clean the segments which are compacted:
+  ```

[46/50] [abbrv] carbondata git commit: [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

2018-02-03 Thread ravipesala
[CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

Problem: the load model was not being deserialized in the executor, so two
different CarbonTable objects were created.

Solution: reconstruct the CarbonTable from TableInfo if it has not already been
created.
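The transient-flag pattern used in the CarbonDataLoadSchema change can be
sketched in plain Java (hypothetical class and field names): a transient
boolean resets to false when the object is deserialized on the executor, so
the one-time update is re-applied there exactly once.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LoadSchemaDemo implements Serializable {
    private final String table;
    // transient: not serialized, so it is false again after deserialization
    // and the update below runs once more on the executor side.
    private transient boolean updated;
    private transient int updateCount; // instrumentation for the demo only

    LoadSchemaDemo(String table) { this.table = table; }

    String getTable() {
        if (!updated) {
            updateCount++; // stands in for CarbonTable.updateTableInfo(...)
            updated = true;
        }
        return table;
    }

    int updates() { return updateCount; }

    // Serialize and deserialize, simulating shipping the model to an executor.
    static LoadSchemaDemo roundTrip(LoadSchemaDemo in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(in);
        oos.flush();
        ObjectInputStream ois =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (LoadSchemaDemo) ois.readObject();
    }

    public static void main(String[] args) throws Exception {
        LoadSchemaDemo driver = new LoadSchemaDemo("t1");
        driver.getTable();
        driver.getTable();                            // updated once on the "driver"
        LoadSchemaDemo executor = roundTrip(driver);  // transient flag reset
        executor.getTable();
        executor.getTable();                          // updated once more here
        System.out.println(driver.updates() + " " + executor.updates());
    }
}
```

Each side applies the update exactly once, even though the accessor is called
repeatedly, which is the idempotence the `updatedDataTypes` flag provides.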

This closes #1911


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/54b7db51
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/54b7db51
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/54b7db51

Branch: refs/heads/branch-1.3
Commit: 54b7db51906340d6d7b417058f9665731fa51a21
Parents: a7bcc76
Author: kunal642 
Authored: Fri Feb 2 17:37:51 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 22:02:54 2018 +0530

--
 .../core/metadata/schema/table/CarbonTable.java  |  2 +-
 .../processing/loading/model/CarbonDataLoadSchema.java   | 11 ++-
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
index 4bb0d20..09ff440 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
@@ -141,7 +141,7 @@ public class CarbonTable implements Serializable {
*
* @param tableInfo
*/
-  private static void updateTableInfo(TableInfo tableInfo) {
+  public static void updateTableInfo(TableInfo tableInfo) {
 List dataMapSchemas = new ArrayList<>();
 for (DataMapSchema dataMapSchema : tableInfo.getDataMapSchemaList()) {
   DataMapSchema newDataMapSchema = DataMapSchemaFactory.INSTANCE

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
--
diff --git 
a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
 
b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
index d7aa103..a9d7bd8 100644
--- 
a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
+++ 
b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
@@ -37,6 +37,11 @@ public class CarbonDataLoadSchema implements Serializable {
   private CarbonTable carbonTable;
 
   /**
+   * Used to determine if the dataTypes have already been updated or not.
+   */
+  private transient boolean updatedDataTypes;
+
+  /**
* CarbonDataLoadSchema constructor which takes CarbonTable
*
* @param carbonTable
@@ -51,7 +56,11 @@ public class CarbonDataLoadSchema implements Serializable {
* @return carbonTable
*/
   public CarbonTable getCarbonTable() {
+if (!updatedDataTypes) {
+  CarbonTable.updateTableInfo(carbonTable.getTableInfo());
+  updatedDataTypes = true;
+}
 return carbonTable;
   }
 
-}
+}
\ No newline at end of file



[20/50] [abbrv] carbondata git commit: [CARBONDATA-2116] Documentation for CTAS

2018-02-03 Thread ravipesala
[CARBONDATA-2116] Documentation for CTAS

Added the documentation for CTAS

This closes #1906


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1b224a4a
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1b224a4a
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1b224a4a

Branch: refs/heads/branch-1.3
Commit: 1b224a4a971597c36e931eb8e17ccbd24cea642e
Parents: a3638ad
Author: sgururajshetty 
Authored: Thu Feb 1 20:04:54 2018 +0530
Committer: manishgupta88 
Committed: Fri Feb 2 11:48:04 2018 +0530

--
 docs/data-management-on-carbondata.md | 13 -
 1 file changed, 12 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1b224a4a/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index d7954e1..3119935 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -144,7 +144,18 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   'streaming'='true',
'ALLOWED_COMPACTION_DAYS'='5')
```
-
+
+## CREATE TABLE As SELECT
+  This function allows you to create a Carbon table from any of the 
Parquet/Hive/Carbon table. This is beneficial when the user wants to create 
Carbon table from any other Parquet/Hive table and use the Carbon query engine 
to query and achieve better query results for cases where Carbon is faster than 
other file formats. Also this feature can be used for backing up the data.
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name STORED BY 'carbondata' 
[TBLPROPERTIES (key1=val1, key2=val2, ...)] AS select_statement;
+  ```
+
+### Examples
+  ```
+  CREATE TABLE ctas_select_parquet STORED BY 'carbondata' as select * from 
parquet_ctas_test;
+  ```
+   
 ## TABLE MANAGEMENT  
 
 ### SHOW TABLE



[05/50] [abbrv] carbondata git commit: [CARBONDATA-2089]SQL exception is masked due to assert(false) inside try catch and exception block always asserting true

2018-02-03 Thread ravipesala
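The diffs below replace try/catch blocks ending in assert(false) with
ScalaTest's intercept. The difference can be sketched in plain Java
(hypothetical helper, not ScalaTest itself): the try/catch form swallows the
real exception and reports only a bare assertion failure, while an intercept
helper returns the exception so the test can inspect its message.

```java
public class InterceptDemo {
    // Minimal intercept: run the body, return the expected exception,
    // fail loudly if nothing (or the wrong type) is thrown.
    static <T extends Throwable> T intercept(Class<T> type, Runnable body) {
        try {
            body.run();
        } catch (Throwable t) {
            if (type.isInstance(t)) {
                return type.cast(t);
            }
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + type.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = intercept(IllegalArgumentException.class,
            () -> { throw new IllegalArgumentException("bad column"); });
        // The original failure message is preserved for inspection.
        System.out.println(e.getMessage());
    }
}
```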
http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
index d6fa3ca..b4459ab 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingIUDTestCase.scala
@@ -20,6 +20,8 @@ package org.apache.carbondata.cluster.sdv.generated
 
 import java.sql.Timestamp
 
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util._
 import org.scalatest.{BeforeAndAfter, BeforeAndAfterAll, BeforeAndAfterEach}
@@ -60,6 +62,9 @@ class DataLoadingIUDTestCase extends QueryTest with 
BeforeAndAfterAll with Befor
 sql("drop table if exists t_carbn01b").collect
 sql("drop table if exists T_Hive1").collect
 sql("drop table if exists T_Hive6").collect
+sql(s"""create table default.t_carbn01b(Active_status String,Item_type_cd 
INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep 
DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name 
String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 
'org.apache.carbondata.format'""").collect
+sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' INTO 
table default.t_carbn01B options ('DELIMITER'=',', 'QUOTECHAR'='\', 
'FILEHEADER'='Active_status,Item_type_cd,Qty_day_avg,Qty_total,Sell_price,Sell_pricep,Discount_price,Profit,Item_code,Item_name,Outlet_name,Update_time,Create_date')""").collect
+
   }
 
   override def before(fun: => Any) {
@@ -75,9 +80,7 @@ class DataLoadingIUDTestCase extends QueryTest with 
BeforeAndAfterAll with Befor
 
 //NA
 test("IUD-01-01-01_001-001", Include) {
-   sql(s"""create table default.t_carbn01b(Active_status String,Item_type_cd 
INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep 
DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name 
String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 
'org.apache.carbondata.format'""").collect
- sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' INTO 
table default.t_carbn01B options ('DELIMITER'=',', 'QUOTECHAR'='\', 
'FILEHEADER'='Active_status,Item_type_cd,Qty_day_avg,Qty_total,Sell_price,Sell_pricep,Discount_price,Profit,Item_code,Item_name,Outlet_name,Update_time,Create_date')""").collect
-  sql("create table T_Hive1(Active_status BOOLEAN, Item_type_cd TINYINT, 
Qty_day_avg SMALLINT, Qty_total INT, Sell_price BIGINT, Sell_pricep FLOAT, 
Discount_price DOUBLE , Profit DECIMAL(3,2), Item_code STRING, Item_name 
VARCHAR(50), Outlet_name CHAR(100), Update_time TIMESTAMP, Create_date DATE) 
row format delimited fields terminated by ',' collection items terminated by 
'$'")
+   sql("create table T_Hive1(Active_status BOOLEAN, Item_type_cd TINYINT, 
Qty_day_avg SMALLINT, Qty_total INT, Sell_price BIGINT, Sell_pricep FLOAT, 
Discount_price DOUBLE , Profit DECIMAL(3,2), Item_code STRING, Item_name 
VARCHAR(50), Outlet_name CHAR(100), Update_time TIMESTAMP, Create_date DATE) 
row format delimited fields terminated by ',' collection items terminated by 
'$'")
  sql(s"""LOAD DATA INPATH '$resourcesPath/Data/InsertData/T_Hive1.csv' 
overwrite into table T_Hive1""").collect
  sql("create table T_Hive6(Item_code STRING, Sub_item_cd ARRAY)row 
format delimited fields terminated by ',' collection items terminated by '$'")
  sql(s"""load data inpath '$resourcesPath/Data/InsertData/T_Hive1.csv' 
overwrite into table T_Hive6""").collect
@@ -115,16 +118,13 @@ test("IUD-01-01-01_001-02", Include) {
 
 //Check for update Carbon table using a data value on a string column without 
giving values in semi quote
 test("IUD-01-01-01_001-03", Include) {
-  try {
+  intercept[Exception] {
sql(s"""drop table IF EXISTS default.t_carbn01""").collect
  sql(s"""create table default.t_carbn01 (Active_status String,Item_type_cd 
INT,Qty_day_avg INT,Qty_total INT,Sell_price BIGINT,Sell_pricep 
DOUBLE,Discount_price DOUBLE,Profit DECIMAL(3,2),Item_code String,Item_name 
String,Outlet_name String,Update_time TIMESTAMP,Create_date String)STORED BY 
'org.apache.carbondata.format'""").collect
  sql(s"""insert into default.t_carbn01  select * from 
default.t_carbn01b""").collect
  sql(s"""update default.t_carbn01  set (active_status) = (NO) """).collect
 sql(s"""NA""").collect
 
-

[08/50] [abbrv] carbondata git commit: [CARBONDATA-1616] Add CarbonData Streaming Ingestion Guide

2018-02-03 Thread ravipesala
[CARBONDATA-1616] Add CarbonData Streaming Ingestion Guide

Add CarbonData Streaming Ingestion Guide

This closes #1880


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/cdff1932
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/cdff1932
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/cdff1932

Branch: refs/heads/branch-1.3
Commit: cdff193255418e56ab4a98c441eb6b809142c9a2
Parents: c8a3eb5
Author: QiangCai 
Authored: Thu Jan 4 11:52:07 2018 +0800
Committer: chenliang613 
Committed: Thu Feb 1 10:59:36 2018 +0800

--
 README.md   |   1 +
 docs/streaming-guide.md | 169 +++
 2 files changed, 170 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/cdff1932/README.md
--
diff --git a/README.md b/README.md
index 3b6792e..952392b 100644
--- a/README.md
+++ b/README.md
@@ -39,6 +39,7 @@ CarbonData is built using Apache Maven, to [build 
CarbonData](https://github.com
 * [Data Management on 
CarbonData](https://github.com/apache/carbondata/blob/master/docs/data-management-on-carbondata.md)
 * [Cluster Installation and 
Deployment](https://github.com/apache/carbondata/blob/master/docs/installation-guide.md)
 * [Configuring 
Carbondata](https://github.com/apache/carbondata/blob/master/docs/configuration-parameters.md)
+* [Streaming 
Ingestion](https://github.com/apache/carbondata/blob/master/docs/streaming-guide.md)
 * [FAQ](https://github.com/apache/carbondata/blob/master/docs/faq.md)
 * [Trouble 
Shooting](https://github.com/apache/carbondata/blob/master/docs/troubleshooting.md)
 * [Useful 
Tips](https://github.com/apache/carbondata/blob/master/docs/useful-tips-on-carbondata.md)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/cdff1932/docs/streaming-guide.md
--
diff --git a/docs/streaming-guide.md b/docs/streaming-guide.md
new file mode 100644
index 000..201f8e0
--- /dev/null
+++ b/docs/streaming-guide.md
@@ -0,0 +1,169 @@
+# CarbonData Streaming Ingestion
+
+## Quick example
+Download and unzip spark-2.2.0-bin-hadoop2.7.tgz, and export $SPARK_HOME
+
+Package carbon jar, and copy 
assembly/target/scala-2.11/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar 
to $SPARK_HOME/jars
+```shell
+mvn clean package -DskipTests -Pspark-2.2
+```
+
+Start a socket data server in a terminal
+```shell
+ nc -lk 9099
+```
+ type some CSV rows as following
+```csv
+1,col1
+2,col2
+3,col3
+4,col4
+5,col5
+```
+
+Start spark-shell in new terminal, type :paste, then copy and run the 
following code.
+```scala
+ import java.io.File
+ import org.apache.spark.sql.{CarbonEnv, SparkSession}
+ import org.apache.spark.sql.CarbonSession._
+ import org.apache.spark.sql.streaming.{ProcessingTime, StreamingQuery}
+ import org.apache.carbondata.core.util.path.CarbonStorePath
+ 
+ val warehouse = new File("./warehouse").getCanonicalPath
+ val metastore = new File("./metastore").getCanonicalPath
+ 
+ val spark = SparkSession
+   .builder()
+   .master("local")
+   .appName("StreamExample")
+   .config("spark.sql.warehouse.dir", warehouse)
+   .getOrCreateCarbonSession(warehouse, metastore)
+
+ spark.sparkContext.setLogLevel("ERROR")
+
+ // drop table if exists previously
+ spark.sql(s"DROP TABLE IF EXISTS carbon_table")
+ // Create target carbon table and populate with initial data
+ spark.sql(
+   s"""
+  | CREATE TABLE carbon_table (
+  | col1 INT,
+  | col2 STRING
+  | )
+  | STORED BY 'carbondata'
+  | TBLPROPERTIES('streaming'='true')""".stripMargin)
+
+ val carbonTable = CarbonEnv.getCarbonTable(Some("default"), 
"carbon_table")(spark)
+ val tablePath = 
CarbonStorePath.getCarbonTablePath(carbonTable.getAbsoluteTableIdentifier)
+ 
+ // batch load
+ var qry: StreamingQuery = null
+ val readSocketDF = spark.readStream
+   .format("socket")
+   .option("host", "localhost")
+   .option("port", 9099)
+   .load()
+
+ // Write data from socket stream to carbondata file
+ qry = readSocketDF.writeStream
+   .format("carbondata")
+   .trigger(ProcessingTime("5 seconds"))
+   .option("checkpointLocation", tablePath.getStreamingCheckpointDir)
+   .option("dbName", "default")
+   .option("tableName", "carbon_table")
+   .start()
+
+ // start new thread to show data
+ new Thread() {
+   override def run(): Unit = {
+ do {
+   spark.sql("select * from carbon_table").show(false)
+   Thread.sleep(10000) // pause between refreshes
+ } while (true)
+   }
+ }.start()
+
+ qry.awaitTermination()
+```
+
+Continue to type rows into the data server; spark-shell will show the new data in the table.
+
+## Create table with streaming property
+Streaming 

[04/50] [abbrv] carbondata git commit: [CARBONDATA-2089]SQL exception is masked due to assert(false) inside try catch and exception block always asserting true

2018-02-03 Thread ravipesala
http://git-wip-us.apache.org/repos/asf/carbondata/blob/3dff273b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
index dab6e41..c57bd04 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/SinglepassTestCase.scala
@@ -21,9 +21,9 @@ package org.apache.carbondata.cluster.sdv.generated
 import org.apache.spark.sql.Row
 import org.apache.spark.sql.common.util._
 import org.scalatest.BeforeAndAfterAll
-
 import org.apache.carbondata.core.constants.CarbonCommonConstants
 import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.spark.sql.test.TestQueryExecutor
 
 /**
  * Test class for SinglepassTestCase to verify all scenarios
@@ -55,80 +55,51 @@ class SinglepassTestCase extends QueryTest with 
BeforeAndAfterAll {
 
   //To check data loading from CSV with incomplete data
   test("Loading-004-01-01-01_001-TC_003", Include) {
-try {
+intercept[Exception] {
  sql(s"""drop table if exists uniqdata""").collect
sql(s"""CREATE TABLE if not exists uniqdata (CUST_ID int,CUST_NAME 
String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 
bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
int) STORED BY 'org.apache.carbondata.format'""").collect
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE 
uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 
'FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
-
   }
 
 
   //To check data loading from CSV with bad records
   test("Loading-004-01-01-01_001-TC_004", Include) {
-try {
-
+intercept[Exception] {
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_badrec.csv' INTO TABLE uniqdata 
OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 'FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
-
   }
 
 
   //To check data loading from CSV with no data
   test("Loading-004-01-01-01_001-TC_005", Include) {
-try {
-
+intercept[Exception] {
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_nodata.csv' INTO TABLE uniqdata 
OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='TRUE', 'FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
-
   }
 
 
   //To check data loading from CSV with incomplete data
   test("Loading-004-01-01-01_001-TC_006", Include) {
-try {
-
+intercept[Exception] {
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE 
uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 
'FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
-
   }
 
 
   //To check data loading from CSV with wrong data
   test("Loading-004-01-01-01_001-TC_007", Include) {
-try {
-
+intercept[Exception] {
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_incomplete.csv' INTO TABLE 
uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 
'FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""").collect
-  assert(false)
-} catch {
-  case _ => assert(true)
 }
-
   }
 
 
   //To check data loading from CSV with no data and 'SINGLEPASS' = 'FALSE'
   test("Loading-004-01-01-01_001-TC_008", Include) {
-try {
-
+intercept[Exception] {
   sql(s"""LOAD DATA INPATH 
'$resourcesPath/Data/singlepass/2000_UniqData_nodata.csv.csv' INTO TABLE 
uniqdata OPTIONS('DELIMITER'=',', 'QUOTECHAR'= '"','SINGLE_PASS'='FALSE', 
'FILEHEADER'= 

[41/50] [abbrv] carbondata git commit: [CARBONDATA-2126] Documentation for create database and custom location

2018-02-03 Thread ravipesala
[CARBONDATA-2126] Documentation for create database and custom location

Documentation for create database and custom location

This closes #1923


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4677fc6b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4677fc6b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4677fc6b

Branch: refs/heads/branch-1.3
Commit: 4677fc6b437deadc47a234ae535a5017c6a2c4d8
Parents: 36ff932
Author: sgururajshetty 
Authored: Sat Feb 3 18:38:10 2018 +0530
Committer: chenliang613 
Committed: Sat Feb 3 21:54:59 2018 +0800

--
 docs/data-management-on-carbondata.md | 12 
 1 file changed, 12 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4677fc6b/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 66cc048..fef2371 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -20,6 +20,7 @@
 This tutorial is going to introduce all commands and data operations on 
CarbonData.
 
 * [CREATE TABLE](#create-table)
+* [CREATE DATABASE](#create-database)
 * [TABLE MANAGEMENT](#table-management)
 * [LOAD DATA](#load-data)
 * [UPDATE AND DELETE](#update-and-delete)
@@ -149,6 +150,17 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
'ALLOWED_COMPACTION_DAYS'='5')
```
 
+## CREATE DATABASE 
+  This command creates a new database. By default the database is created in the Carbon store location, but you can also specify a custom location.
+  ```
+  CREATE DATABASE [IF NOT EXISTS] database_name [LOCATION path];
+  ```
+  
+### Example
+  ```
+  CREATE DATABASE carbon LOCATION "hdfs://name_cluster/dir1/carbonstore";
+  ```
+
 ## CREATE TABLE As SELECT
  This function allows you to create a Carbon table from any Parquet/Hive/Carbon table. It is beneficial when you want to create a Carbon table from another Parquet/Hive table and use the Carbon query engine to achieve better query performance in cases where Carbon is faster than other file formats. This feature can also be used for backing up data.
   ```



[37/50] [abbrv] carbondata git commit: [CARBONDATA-2104] Add testcase for concurrent execution of insert overwrite and other command

2018-02-03 Thread ravipesala
http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
--
diff --git 
a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
 
b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
index f38304e..13d6274 100644
--- 
a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
+++ 
b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonCreateTableCommand.scala
@@ -22,8 +22,7 @@ import scala.collection.JavaConverters._
 import org.apache.spark.sql.{CarbonEnv, Row, SparkSession, _}
 import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException
 import org.apache.spark.sql.execution.SQLExecution.EXECUTION_ID_KEY
-import org.apache.spark.sql.execution.command.{Field, MetadataCommand, 
TableModel, TableNewProcessor}
-import org.apache.spark.sql.util.CarbonException
+import org.apache.spark.sql.execution.command.MetadataCommand
 
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.constants.CarbonCommonConstants
@@ -79,7 +78,7 @@ case class CarbonCreateTableCommand(
   }
 
   if (tableInfo.getFactTable.getListOfColumns.size <= 0) {
-CarbonException.analysisException("Table should have at least one 
column.")
+throwMetadataException(dbName, tableName, "Table should have at least 
one column.")
   }
 
   val operationContext = new OperationContext
@@ -125,7 +124,7 @@ case class CarbonCreateTableCommand(
 val msg = s"Create table'$tableName' in database '$dbName' failed"
 LOGGER.audit(msg.concat(", ").concat(e.getMessage))
 LOGGER.error(e, msg)
-CarbonException.analysisException(msg.concat(", 
").concat(e.getMessage))
+throwMetadataException(dbName, tableName, msg)
 }
   }
   val createTablePostExecutionEvent: CreateTablePostExecutionEvent =

http://git-wip-us.apache.org/repos/asf/carbondata/blob/55bffbe2/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
--
diff --git 
a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
 
b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
index 9c0eb57..7c895ab 100644
--- 
a/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
+++ 
b/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/table/CarbonDropTableCommand.scala
@@ -20,11 +20,10 @@ package org.apache.spark.sql.execution.command.table
 import scala.collection.JavaConverters._
 import scala.collection.mutable.ListBuffer
 
-import org.apache.spark.sql.{AnalysisException, CarbonEnv, Row, SparkSession}
+import org.apache.spark.sql.{CarbonEnv, Row, SparkSession}
 import org.apache.spark.sql.catalyst.TableIdentifier
 import org.apache.spark.sql.catalyst.analysis.NoSuchTableException
 import org.apache.spark.sql.execution.command.AtomicRunnableCommand
-import org.apache.spark.sql.util.CarbonException
 
 import org.apache.carbondata.common.logging.{LogService, LogServiceFactory}
 import org.apache.carbondata.core.cache.dictionary.ManageDictionaryAndBTree
@@ -34,6 +33,7 @@ import 
org.apache.carbondata.core.metadata.schema.table.CarbonTable
 import org.apache.carbondata.core.statusmanager.SegmentStatusManager
 import org.apache.carbondata.core.util.CarbonUtil
 import org.apache.carbondata.events._
+import org.apache.carbondata.spark.exception.{ConcurrentOperationException, 
ProcessMetaDataException}
 
 case class CarbonDropTableCommand(
 ifExistsSet: Boolean,
@@ -55,8 +55,11 @@ case class CarbonDropTableCommand(
   locksToBeAcquired foreach {
 lock => carbonLocks += CarbonLockUtil.getLockObject(identifier, lock)
   }
-  LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
   carbonTable = CarbonEnv.getCarbonTable(databaseNameOp, 
tableName)(sparkSession)
+  if (SegmentStatusManager.isLoadInProgressInTable(carbonTable)) {
+throw new ConcurrentOperationException(carbonTable, "loading", "drop 
table")
+  }
+  LOGGER.audit(s"Deleting table [$tableName] under database [$dbName]")
   if (carbonTable.isStreamingTable) {
 // streaming table should acquire streaming.lock
 carbonLocks += CarbonLockUtil.getLockObject(identifier, 
LockUsage.STREAMING_LOCK)
@@ -65,8 +68,9 @@ case class CarbonDropTableCommand(
   if (relationIdentifiers != null && !relationIdentifiers.isEmpty) {
 if (!dropChildTable) 

[27/50] [abbrv] carbondata git commit: [CARBONDATA-2120]Fixed is null filter issue

2018-02-03 Thread ravipesala
[CARBONDATA-2120]Fixed is null filter issue

Problem: The IS NULL filter fails for numeric data types (no-dictionary columns).

Root cause: The min/max calculation is wrong when the no-dictionary column is not the first column. Because nulls can then appear anywhere in the data, the min/max for null values was updated only when the first row happened to be null.

Solution: Update the min/max for every value, null or not, for all types.

This closes #1912
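The fix routes every value, empty or not, through the same min/max comparison. A minimal standalone sketch of that flow follows; the class name is hypothetical, and a plain unsigned lexicographic comparison stands in for CarbonData's ByteUtil.UnsafeComparer.

```java
import java.util.Arrays;

// Sketch of the fixed min/max collection for LV-encoded (length + value)
// byte arrays. A two-byte input is an empty (null) value; it now participates
// in the min/max update instead of being returned early.
public class LvMinMaxSketch {
    private byte[] min;
    private byte[] max;

    // value is LV encoded: the first two bytes are a big-endian length prefix
    public void update(byte[] value) {
        byte[] newValue;
        if (value.length == 2) {
            newValue = new byte[0];                       // empty / null value
        } else {
            newValue = Arrays.copyOfRange(value, 2, value.length);
        }
        if (min == null && max == null) {
            min = newValue;
            max = newValue;
        } else {
            if (compare(min, newValue) > 0) min = newValue;
            if (compare(max, newValue) < 0) max = newValue;
        }
    }

    // unsigned lexicographic comparison; a shorter prefix sorts first
    private static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    public byte[] getMin() { return min; }
    public byte[] getMax() { return max; }
}
```

With the old flow, an empty value arriving after non-empty rows never updated min, so IS NULL filters could be pruned incorrectly; here min becomes the empty array as soon as any null arrives.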


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/6c097cbf
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/6c097cbf
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/6c097cbf

Branch: refs/heads/branch-1.3
Commit: 6c097cbf310e8d2199e57cde4fcc417122a8a1ca
Parents: 27ec651
Author: kumarvishal 
Authored: Fri Feb 2 17:53:57 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 00:23:39 2018 +0530

--
 .../page/statistics/LVStringStatsCollector.java | 28 -
 .../core/util/path/CarbonTablePath.java |  2 +-
 .../src/test/resources/newsample.csv|  7 +
 .../testsuite/filterexpr/TestIsNullFilter.scala | 32 
 4 files changed, 53 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
 
b/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
index 61acec9..23795c5 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/LVStringStatsCollector.java
@@ -73,28 +73,26 @@ public class LVStringStatsCollector implements 
ColumnPageStatsCollector {
   @Override
   public void update(byte[] value) {
 // input value is LV encoded
+byte[] newValue = null;
 assert (value.length >= 2);
 if (value.length == 2) {
   assert (value[0] == 0 && value[1] == 0);
-  if (min == null && max == null) {
-min = new byte[0];
-max = new byte[0];
-  }
-  return;
+  newValue = new byte[0];
+} else {
+  int length = (value[0] << 8) + (value[1] & 0xff);
+  assert (length > 0);
+  newValue = new byte[value.length - 2];
+  System.arraycopy(value, 2, newValue, 0, newValue.length);
 }
-int length = (value[0] << 8) + (value[1] & 0xff);
-assert (length > 0);
-byte[] v = new byte[value.length - 2];
-System.arraycopy(value, 2, v, 0, v.length);
 if (min == null && max == null) {
-  min = v;
-  max = v;
+  min = newValue;
+  max = newValue;
 } else {
-  if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(min, v) > 0) {
-min = v;
+  if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(min, newValue) > 0) {
+min = newValue;
   }
-  if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(max, v) < 0) {
-max = v;
+  if (ByteUtil.UnsafeComparer.INSTANCE.compareTo(max, newValue) < 0) {
+max = newValue;
   }
 }
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java 
b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
index d8c64c4..5a63d2f 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
@@ -75,7 +75,7 @@ public class CarbonTablePath extends Path {
* @param carbonFilePath
*/
   public static String getFolderContainingFile(String carbonFilePath) {
-return carbonFilePath.substring(0, 
carbonFilePath.lastIndexOf(File.separator));
+return carbonFilePath.substring(0, carbonFilePath.lastIndexOf('/'));
   }
 
   /**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/6c097cbf/integration/spark-common-test/src/test/resources/newsample.csv
--
diff --git a/integration/spark-common-test/src/test/resources/newsample.csv 
b/integration/spark-common-test/src/test/resources/newsample.csv
new file mode 100644
index 000..38cd3dd
--- /dev/null
+++ b/integration/spark-common-test/src/test/resources/newsample.csv
@@ -0,0 +1,7 @@

[21/50] [abbrv] carbondata git commit: [Compatibility] Added changes for backward compatibility

2018-02-03 Thread ravipesala
[Compatibility] Added changes for backward compatibility

This PR fixes issues related to compatibility between old and new versions.
Issues fixed:
1. The schema file name was different in one of the previous versions.
2. Bucket numbers were not supported in previous versions.
3. Table parameters were stored in lowercase, while the current version reads them in camel case.

This closes #1747
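The schema-file part of the fix (issue 1) can be sketched without CarbonData's FileFactory/CarbonFile abstractions: accept any file in the metadata directory whose name starts with the canonical schema file name, and fall back to the default path when none is found. The class and constant names below are hypothetical.

```java
import java.io.File;

// Sketch of the backward-compatible schema file lookup: older stores may use
// a variant of the schema file name, so accept any file whose name starts
// with "schema" and fall back to the canonical path when none exists.
public class SchemaPathSketch {
    static final String METADATA_DIR = "Metadata";
    static final String SCHEMA_FILE = "schema";

    public static String getActualSchemaFilePath(String tablePath) {
        File metaDir = new File(tablePath, METADATA_DIR);
        File[] schemaFiles = metaDir.listFiles(
            (dir, name) -> name.startsWith(SCHEMA_FILE));
        if (schemaFiles != null && schemaFiles.length > 0) {
            return schemaFiles[0].getAbsolutePath();   // legacy variant found
        }
        // default location used by current versions
        return new File(metaDir, SCHEMA_FILE).getAbsolutePath();
    }
}
```

The bucket-number fix (issue 2) in the diff below follows the same fallback shape: a sentinel bucket number (-1) selects the legacy index file name without the bucket segment.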


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/02eefca1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/02eefca1
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/02eefca1

Branch: refs/heads/branch-1.3
Commit: 02eefca15862a8667d53e247272afb68efe7af60
Parents: 1b224a4
Author: kunal642 
Authored: Mon Nov 20 20:36:54 2017 +0530
Committer: manishgupta88 
Committed: Fri Feb 2 12:08:50 2018 +0530

--
 .../core/util/path/CarbonTablePath.java | 22 -
 .../carbondata/spark/util/CarbonScalaUtil.scala | 47 
 .../org/apache/spark/sql/CarbonSource.scala | 33 --
 3 files changed, 87 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/02eefca1/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java 
b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
index fab6289..d8c64c4 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/util/path/CarbonTablePath.java
@@ -233,7 +233,7 @@ public class CarbonTablePath extends Path {
* @return absolute path of schema file
*/
   public String getSchemaFilePath() {
-return getMetaDataDir() + File.separator + SCHEMA_FILE;
+return getActualSchemaFilePath(tablePath);
   }
 
   /**
@@ -242,7 +242,22 @@ public class CarbonTablePath extends Path {
* @return schema file path
*/
   public static String getSchemaFilePath(String tablePath) {
-return tablePath + File.separator + METADATA_DIR + File.separator + 
SCHEMA_FILE;
+return getActualSchemaFilePath(tablePath);
+  }
+
+  private static String getActualSchemaFilePath(String tablePath) {
+String metaPath = tablePath + CarbonCommonConstants.FILE_SEPARATOR + 
METADATA_DIR;
+CarbonFile carbonFile = FileFactory.getCarbonFile(metaPath);
+CarbonFile[] schemaFile = carbonFile.listFiles(new CarbonFileFilter() {
+  @Override public boolean accept(CarbonFile file) {
+return file.getName().startsWith(SCHEMA_FILE);
+  }
+});
+if (schemaFile != null && schemaFile.length > 0) {
+  return schemaFile[0].getAbsolutePath();
+} else {
+  return metaPath + CarbonCommonConstants.FILE_SEPARATOR + SCHEMA_FILE;
+}
   }
 
   /**
@@ -351,6 +366,9 @@ public class CarbonTablePath extends Path {
 
   private static String getCarbonIndexFileName(String taskNo, int bucketNumber,
   String factUpdatedtimeStamp) {
+if (bucketNumber == -1) {
+  return taskNo + "-" + factUpdatedtimeStamp + INDEX_FILE_EXT;
+}
 return taskNo + "-" + bucketNumber + "-" + factUpdatedtimeStamp + 
INDEX_FILE_EXT;
   }
 

http://git-wip-us.apache.org/repos/asf/carbondata/blob/02eefca1/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
--
diff --git 
a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
 
b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
index 86d25b4..262adf2 100644
--- 
a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
+++ 
b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CarbonScalaUtil.scala
@@ -404,4 +404,51 @@ object CarbonScalaUtil {
   })
 otherFields
   }
+
+  /**
+   * If the table is from an old store then the table parameters are in 
lowercase. In the current
+   * code we are reading the parameters as camel case.
+   * This method will convert all the schema parts to camel case
+   *
+   * @param parameters
+   * @return
+   */
+  def getDeserializedParameters(parameters: Map[String, String]): Map[String, 
String] = {
+val keyParts = 
parameters.getOrElse("spark.sql.sources.options.keys.numparts", "0").toInt
+if (keyParts == 0) {
+  parameters
+} else {
+  val keyStr = 0 until keyParts map {
+i => parameters(s"spark.sql.sources.options.keys.part.$i")
+  }
+  val finalProperties = 
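The Scala above is truncated by the archive. The mechanism it relies on is that Spark's catalog serialization can split data source option keys into numbered parts under spark.sql.sources.options.keys.part.N, with the count stored in ...keys.numparts. A hedged Java sketch of reassembling the key list (property names taken from the visible code; the final lowercase-to-camel-case mapping is omitted since it is cut off):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch: reassemble data source option keys that Spark's catalog
// serialization split into numbered parts. Reads numparts, then collects
// spark.sql.sources.options.keys.part.0 .. part.(n-1) in order.
public class OptionKeySketch {
    public static List<String> reassembleKeys(Map<String, String> parameters) {
        int numParts = Integer.parseInt(
            parameters.getOrDefault("spark.sql.sources.options.keys.numparts", "0"));
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < numParts; i++) {
            keys.add(parameters.get("spark.sql.sources.options.keys.part." + i));
        }
        return keys;
    }
}
```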

[13/50] [abbrv] carbondata git commit: [CARBONDATA-2107]Fixed query failure in case if average case

2018-02-03 Thread ravipesala
[CARBONDATA-2107]Fixed query failure in case if average case

An average query fails when the data map contains both sum(column) and avg(column)

This closes #1894


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/19fdd4d7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/19fdd4d7
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/19fdd4d7

Branch: refs/heads/branch-1.3
Commit: 19fdd4d7581477557f2771909cf54a95a0b6665d
Parents: d680e9c
Author: kumarvishal 
Authored: Wed Jan 31 17:44:55 2018 +0530
Committer: kunal642 
Committed: Thu Feb 1 17:44:21 2018 +0530

--
 .../preaggregate/TestPreAggregateTableSelection.scala| 11 ++-
 .../apache/spark/sql/hive/CarbonPreAggregateRules.scala  |  8 ++--
 2 files changed, 16 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/19fdd4d7/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
index f9ac354..5fb7b02 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateTableSelection.scala
@@ -29,6 +29,7 @@ class TestPreAggregateTableSelection extends QueryTest with 
BeforeAndAfterAll {
 
   override def beforeAll: Unit = {
 sql("drop table if exists mainTable")
+sql("drop table if exists mainTableavg")
 sql("drop table if exists agg0")
 sql("drop table if exists agg1")
 sql("drop table if exists agg2")
@@ -47,7 +48,10 @@ class TestPreAggregateTableSelection extends QueryTest with 
BeforeAndAfterAll {
 sql("create datamap agg6 on table mainTable using 'preaggregate' as select 
name,min(age) from mainTable group by name")
 sql("create datamap agg7 on table mainTable using 'preaggregate' as select 
name,max(age) from mainTable group by name")
 sql("create datamap agg8 on table maintable using 'preaggregate' as select 
name, sum(id), avg(id) from maintable group by name")
+sql("CREATE TABLE mainTableavg(id int, name string, city string, age 
bigint) STORED BY 'org.apache.carbondata.format'")
+sql("create datamap agg0 on table mainTableavg using 'preaggregate' as 
select name,sum(age), avg(age) from mainTableavg group by name")
 sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/measureinsertintotest.csv' 
into table mainTable")
+sql(s"LOAD DATA LOCAL INPATH '$resourcesPath/measureinsertintotest.csv' 
into table mainTableavg")
   }
 
   test("test sum and avg on same column should give proper results") {
@@ -191,7 +195,6 @@ class TestPreAggregateTableSelection extends QueryTest with 
BeforeAndAfterAll {
 preAggTableValidator(df.queryExecution.analyzed, "maintable")
   }
 
-
   def preAggTableValidator(plan: LogicalPlan, actualTableName: String) : Unit 
={
 var isValidPlan = false
 plan.transform {
@@ -312,8 +315,14 @@ test("test PreAggregate table selection with timeseries 
and normal together") {
 
 sql("select var_samp(name) from maintabletime  where name='Mikka' ")
   }
+
+  test("test PreAggregate table selection For Sum And Avg in aggregate table 
with bigint") {
+val df = sql("select avg(age) from mainTableavg")
+preAggTableValidator(df.queryExecution.analyzed, "mainTableavg_agg0")
+  }
   override def afterAll: Unit = {
 sql("drop table if exists mainTable")
+sql("drop table if exists mainTable_avg")
 sql("drop table if exists lineitem")
 sql("DROP TABLE IF EXISTS maintabletime")
   }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/19fdd4d7/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
--
diff --git 
a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
 
b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
index 79cbe05..de58805 100644
--- 
a/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
+++ 
b/integration/spark2/src/main/scala/org/apache/spark/sql/hive/CarbonPreAggregateRules.scala
@@ -1023,10 +1023,14 @@ case class 

[09/50] [abbrv] carbondata git commit: [CARBONDATA-1840]Updated configuration-parameters.md for V3 format

2018-02-03 Thread ravipesala
[CARBONDATA-1840]Updated configuration-parameters.md for V3 format

Updated configuration-parameters.md for V3 format

This closes #1883


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f34ea5c7
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f34ea5c7
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/f34ea5c7

Branch: refs/heads/branch-1.3
Commit: f34ea5c70b38ac6934d9203264de4626d22f68e4
Parents: cdff193
Author: vandana 
Authored: Tue Jan 30 15:12:34 2018 +0530
Committer: chenliang613 
Committed: Thu Feb 1 11:03:29 2018 +0800

--
 docs/configuration-parameters.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/f34ea5c7/docs/configuration-parameters.md
--
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 522d222..fe207f2 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -35,7 +35,7 @@ This section provides the details of all the configurations 
required for the Car
 | carbon.storelocation | /user/hive/warehouse/carbon.store | Location where 
CarbonData will create the store, and write the data in its own format. NOTE: 
Store location should be in HDFS. |
| carbon.ddl.base.hdfs.url | hdfs://hacluster/opt/data | This property is used to configure the HDFS relative path; the path configured in carbon.ddl.base.hdfs.url is appended to the HDFS path configured in fs.defaultFS. If this path is configured, the user need not pass the complete path during data load. For example: if the absolute path of the CSV file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" comes from the property fs.defaultFS and the user can configure /data/cnbc/ as carbon.ddl.base.hdfs.url. During data load the user can then specify the CSV path as /2016/xyz.csv. |
 | carbon.badRecords.location | /opt/Carbon/Spark/badrecords | Path where the 
bad records are stored. |
-| carbon.data.file.version | 2 | If this parameter value is set to 1, then 
CarbonData will support the data load which is in old format(0.x version). If 
the value is set to 2(1.x onwards version), then CarbonData will support the 
data load of new format only.|
+| carbon.data.file.version | 3 | If this parameter is set to 1, CarbonData supports loading data in the old format (0.x versions). If it is set to 2, CarbonData supports the new format only (1.x onwards). The default value is 3 (the latest version is the default), which improves query performance by roughly 20% to 50%. To configure the V3 format explicitly, add carbon.data.file.version = V3 in the carbon.properties file. |
 | carbon.streaming.auto.handoff.enabled | true | If this parameter value is 
set to true, auto trigger handoff function will be enabled.|
 | carbon.streaming.segment.max.size | 102400 | This parameter defines the 
maximum size of the streaming segment. Setting this parameter to appropriate 
value will avoid impacting the streaming ingestion. The value is in bytes.|
 
@@ -60,6 +60,7 @@ This section provides the details of all the configurations 
required for CarbonD
 | carbon.options.is.empty.data.bad.record | false | If false, then empty ("" 
or '' or ,,) data will not be considered as bad record and vice versa. | |
| carbon.options.bad.record.path |  | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must be configured by the user if the bad record logger is enabled or the bad record action is redirect. | |
| carbon.enable.vector.reader | true | This parameter increases the performance of select queries, as it fetches a columnar batch of 4*1024 rows instead of fetching data row by row. | |
+| carbon.blockletgroup.size.in.mb | 64 MB | Data is read as a group of blocklets, called a blocklet group. This parameter specifies the size of each blocklet group. Higher values result in better sequential IO access. The minimum value is 16 MB; any value lower than 16 MB is reset to the default (64 MB). |  |
 
 * **Compaction Configuration**
   



[23/50] [abbrv] carbondata git commit: [CARBONDATA-2062] Configure the temp directory to be used for streaming handoff

2018-02-03 Thread ravipesala
[CARBONDATA-2062] Configure the temp directory to be used for streaming handoff

This closes #1841


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/d3b228fb
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/d3b228fb
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/d3b228fb

Branch: refs/heads/branch-1.3
Commit: d3b228fb8cde5bace2fc932124ee68b8b2e4ee8c
Parents: f9606e9
Author: Raghunandan S 
Authored: Mon Jan 22 11:47:28 2018 +0530
Committer: QiangCai 
Committed: Fri Feb 2 14:52:05 2018 +0800

--
 .../spark/rdd/AlterTableLoadPartitionRDD.scala  | 34 ++-
 .../carbondata/spark/rdd/CarbonMergerRDD.scala  | 31 ++
 .../carbondata/spark/util/CommonUtil.scala  | 44 +++-
 .../carbondata/streaming/StreamHandoffRDD.scala |  4 ++
 4 files changed, 52 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/d3b228fb/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
--
diff --git 
a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
 
b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
index 35a8ea7..76c99f2 100644
--- 
a/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
+++ 
b/integration/spark-common/src/main/scala/org/apache/carbondata/spark/rdd/AlterTableLoadPartitionRDD.scala
@@ -18,21 +18,19 @@
 package org.apache.carbondata.spark.rdd
 
 import scala.collection.JavaConverters._
-import scala.util.Random
 
-import org.apache.spark.{Partition, SparkContext, SparkEnv, TaskContext}
+import org.apache.spark.{Partition, TaskContext}
 import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.execution.command.AlterPartitionModel
 import org.apache.spark.util.PartitionUtils
 
 import org.apache.carbondata.common.logging.LogServiceFactory
 import org.apache.carbondata.core.metadata.AbsoluteTableIdentifier
-import org.apache.carbondata.core.util.CarbonProperties
 import org.apache.carbondata.processing.loading.TableProcessingOperations
 import org.apache.carbondata.processing.partition.spliter.RowResultProcessor
 import org.apache.carbondata.processing.util.{CarbonDataProcessorUtil, 
CarbonLoaderUtil}
 import org.apache.carbondata.spark.AlterPartitionResult
-import org.apache.carbondata.spark.util.Util
+import org.apache.carbondata.spark.util.CommonUtil
 
 class AlterTableLoadPartitionRDD[K, V](alterPartitionModel: 
AlterPartitionModel,
 result: AlterPartitionResult[K, V],
@@ -65,33 +63,7 @@ class AlterTableLoadPartitionRDD[K, V](alterPartitionModel: 
AlterPartitionModel,
 carbonLoadModel.setTaskNo(String.valueOf(partitionId))
 carbonLoadModel.setSegmentId(segmentId)
 carbonLoadModel.setPartitionId("0")
-val tempLocationKey = CarbonDataProcessorUtil
-  .getTempStoreLocationKey(carbonLoadModel.getDatabaseName,
-  carbonLoadModel.getTableName,
-  segmentId,
-  carbonLoadModel.getTaskNo,
-  false,
-  true)
-// this property is used to determine whether temp location for 
carbon is inside
-// container temp dir or is yarn application directory.
-val carbonUseLocalDir = CarbonProperties.getInstance()
-  .getProperty("carbon.use.local.dir", "false")
-
-if (carbonUseLocalDir.equalsIgnoreCase("true")) {
-
-val storeLocations = 
Util.getConfiguredLocalDirs(SparkEnv.get.conf)
-if (null != storeLocations && storeLocations.nonEmpty) {
-storeLocation = 
storeLocations(Random.nextInt(storeLocations.length))
-}
-if (storeLocation == null) {
-storeLocation = System.getProperty("java.io.tmpdir")
-}
-} else {
-storeLocation = System.getProperty("java.io.tmpdir")
-}
-storeLocation = storeLocation + '/' + System.nanoTime() + '/' + 
split.index
-CarbonProperties.getInstance().addProperty(tempLocationKey, 
storeLocation)
-LOGGER.info(s"Temp storeLocation taken is $storeLocation")
+CommonUtil.setTempStoreLocation(split.index, carbonLoadModel, 
false, true)
 
 val tempStoreLoc = 
CarbonDataProcessorUtil.getLocalDataFolderLocation(databaseName,
 factTableName,


[30/50] [abbrv] carbondata git commit: [CARBONDATA-1880] Documentation for merging small files

2018-02-03 Thread ravipesala
[CARBONDATA-1880] Documentation for merging small files

Documentation for merging small files

This closes #1903


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/b48a8c21
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/b48a8c21
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/b48a8c21

Branch: refs/heads/branch-1.3
Commit: b48a8c21f75d642c5729bdc3f147a50685447f65
Parents: 71f8828
Author: sgururajshetty 
Authored: Wed Jan 31 19:25:16 2018 +0530
Committer: chenliang613 
Committed: Sat Feb 3 16:05:56 2018 +0800

--
 docs/configuration-parameters.md | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/b48a8c21/docs/configuration-parameters.md
--
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index b68a2d1..621574d 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -61,6 +61,7 @@ This section provides the details of all the configurations 
required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad 
records are stored. By default the value is Null. This path must to be 
configured by the user if bad record logger is enabled or bad record action 
redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the 
performance of select queries as it fetch columnar batch of size 4*1024 rows 
instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of 
blocklets which are called blocklet groups. This parameter specifies the size 
of the blocklet group. Higher value results in better sequential IO access.The 
minimum value is 16MB, any value lesser than 16MB will reset to the default 
value (64MB). |  |
+| carbon.task.distribution | block | **block**: Setting this value will launch 
one task per block. This setting is suggested in case of concurrent queries and 
queries having big shuffling scenarios. **custom**: Setting this value will 
group the blocks and distribute it uniformly to the available resources in the 
cluster. This enhances the query performance but not suggested in case of 
concurrent queries and queries having big shuffling scenarios. **blocklet**: 
Setting this value will launch one task per blocklet. This setting is suggested 
in case of concurrent queries and queries having big shuffling scenarios. 
**merge_small_files**: Setting this value will merge all the small partitions 
to a size of (128 MB) during querying. The small partitions are combined to a 
map task to reduce the number of read task. This enhances the performance. | | 
 
 * **Compaction Configuration**
   



[15/50] [abbrv] carbondata git commit: Problem: For old store the measure min and max values are written opposite (i.e min in place of max and max in place of min). Due to this computing of measure fi

2018-02-03 Thread ravipesala
Problem:
For the old store, the measure min and max values are written in reverse (i.e.
min in place of max and max in place of min). Because of this, computing
measure filters with the current code is impacted.
This problem specifically occurs when the measure data has negative values.

Impact:
Filter queries on measures.

Solution:
To sync with the current min and max values for the old store, the measure min
and max values are reversed using an old-store flag.
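The reversal described above can be sketched as follows. This is a minimal illustration of the idea, not the actual CarbonUtil code; only the isDataBlockFromOldStore flag name comes from the commit, the rest is assumed:

```java
// Illustrative sketch: for blocks written by the old store (version 1.1),
// the persisted measure min and max arrays are swapped, so they are
// exchanged before being used for filter evaluation.
import java.util.Arrays;

public class MeasureMinMaxFix {
    // Swap min and max in place when the block comes from the old store.
    static void alignMinMax(byte[][] minValues, byte[][] maxValues,
                            boolean isDataBlockFromOldStore) {
        if (!isDataBlockFromOldStore) {
            return;                       // current store: already correct
        }
        for (int i = 0; i < minValues.length; i++) {
            byte[] tmp = minValues[i];
            minValues[i] = maxValues[i];  // stored "max" is really the min
            maxValues[i] = tmp;           // stored "min" is really the max
        }
    }

    public static void main(String[] args) {
        byte[][] min = { {9} };           // old store wrote the max here
        byte[][] max = { {1} };           // ...and the min here
        alignMinMax(min, max, true);
        System.out.println(Arrays.deepToString(min));  // [[1]]
        System.out.println(Arrays.deepToString(max));  // [[9]]
    }
}
```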

This closes #1879


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1248bd4b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1248bd4b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/1248bd4b

Branch: refs/heads/branch-1.3
Commit: 1248bd4b7ff4bb45392106082011dde7f9db460f
Parents: ee1c4d4
Author: manishgupta88 
Authored: Tue Jan 30 08:56:13 2018 +0530
Committer: ravipesala 
Committed: Thu Feb 1 22:13:36 2018 +0530

--
 .../core/datastore/block/TableBlockInfo.java| 14 +
 .../blockletindex/BlockletDMComparator.java |  2 +-
 .../blockletindex/BlockletDataMap.java  | 61 +---
 .../BlockletDataRefNodeWrapper.java | 39 -
 .../executor/impl/AbstractQueryExecutor.java| 11 
 .../core/scan/filter/ColumnFilterInfo.java  |  9 ++-
 .../apache/carbondata/core/util/CarbonUtil.java | 47 +++
 .../carbondata/core/util/CarbonUtilTest.java| 46 +++
 8 files changed, 178 insertions(+), 51 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
 
b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
index c3cc551..b27b5fc 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/datastore/block/TableBlockInfo.java
@@ -72,6 +72,12 @@ public class TableBlockInfo implements Distributable, 
Serializable {
   private String[] locations;
 
   private ColumnarFormatVersion version;
+
+  /**
+   * flag to determine whether the data block is from old store (version 1.1)
+   * or current store
+   */
+  private boolean isDataBlockFromOldStore;
   /**
* The class holds the blockletsinfo
*/
@@ -410,4 +416,12 @@ public class TableBlockInfo implements Distributable, 
Serializable {
   public void setBlockletId(String blockletId) {
 this.blockletId = blockletId;
   }
+
+  public boolean isDataBlockFromOldStore() {
+return isDataBlockFromOldStore;
+  }
+
+  public void setDataBlockFromOldStore(boolean dataBlockFromOldStore) {
+isDataBlockFromOldStore = dataBlockFromOldStore;
+  }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
 
b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
index fccbda8..9a50600 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDMComparator.java
@@ -63,7 +63,7 @@ public class BlockletDMComparator implements 
Comparator {
 int compareResult = 0;
 int processedNoDictionaryColumn = numberOfNoDictSortColumns;
 byte[][] firstBytes = splitKey(first.getByteArray(0));
-byte[][] secondBytes = splitKey(first.getByteArray(0));
+byte[][] secondBytes = splitKey(second.getByteArray(0));
 byte[] firstNoDictionaryKeys = firstBytes[1];
 ByteBuffer firstNoDictionaryKeyBuffer = 
ByteBuffer.wrap(firstNoDictionaryKeys);
 byte[] secondNoDictionaryKeys = secondBytes[1];

http://git-wip-us.apache.org/repos/asf/carbondata/blob/1248bd4b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
 
b/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
index b097c66..699f9e1 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/indexstore/blockletindex/BlockletDataMap.java
+++ 

[24/50] [abbrv] carbondata git commit: [CARBONDATA-2082] Timeseries pre-aggregate table should support the blank space

2018-02-03 Thread ravipesala
[CARBONDATA-2082] Timeseries pre-aggregate table should support the blank space

Timeseries pre-aggregate tables should support blank space, including in
event_time and the different granularity keys.
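The normalization this change implies can be sketched as below. This is an assumed illustration, not the actual TimeSeriesUtil code: DMPROPERTIES keys and values such as ' event_time ' are trimmed (and keys lower-cased) before lookup, so surrounding blanks no longer break the datamap definition:

```java
// Minimal sketch, under assumed names, of trimming DMPROPERTIES entries
// so that keys/values with surrounding blank space still resolve.
import java.util.HashMap;
import java.util.Map;

public class DmPropertiesNormalizer {
    static Map<String, String> normalize(Map<String, String> raw) {
        Map<String, String> cleaned = new HashMap<>();
        for (Map.Entry<String, String> e : raw.entrySet()) {
            // trim surrounding blanks; treat keys as case-insensitive
            cleaned.put(e.getKey().trim().toLowerCase(), e.getValue().trim());
        }
        return cleaned;
    }

    public static void main(String[] args) {
        Map<String, String> raw = new HashMap<>();
        raw.put(" event_time ", " dataTime");
        raw.put(" MONTH_GRANULARITY ", " 1");
        Map<String, String> props = normalize(raw);
        System.out.println(props.get("event_time"));        // dataTime
        System.out.println(props.get("month_granularity")); // 1
    }
}
```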

This closes  #1902


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a9a0201b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a9a0201b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a9a0201b

Branch: refs/heads/branch-1.3
Commit: a9a0201b468505c79d1881607fb0673ee588d85a
Parents: d3b228f
Author: xubo245 <601450...@qq.com>
Authored: Thu Feb 1 15:32:36 2018 +0800
Committer: kumarvishal 
Committed: Fri Feb 2 18:38:44 2018 +0530

--
 .../timeseries/TestTimeSeriesCreateTable.scala  | 76 
 .../datamap/CarbonCreateDataMapCommand.scala| 17 +++--
 .../command/timeseries/TimeSeriesUtil.scala | 11 ++-
 3 files changed, 92 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a9a0201b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
index b63fd53..f3bbcaf 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
+++ 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/timeseries/TestTimeSeriesCreateTable.scala
@@ -368,6 +368,82 @@ class TestTimeSeriesCreateTable extends QueryTest with 
BeforeAndAfterAll {
 assert(e.getMessage.contains("identifier matching regex"))
   }
 
+  test("test timeseries create table 33: support event_time and granularity 
key with space") {
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+sql(
+  s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+ |USING '$timeSeries'
+ |DMPROPERTIES (
+ |   ' event_time '='dataTime',
+ |   ' MONTH_GRANULARITY '='1')
+ |AS SELECT dataTime, SUM(age) FROM mainTable
+ |GROUP BY dataTime
+""".stripMargin)
+checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, 
"maintable_agg1_month")
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+
+  test("test timeseries create table 34: support event_time value with space") 
{
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+sql(
+  s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+ |USING '$timeSeries'
+ |DMPROPERTIES (
+ |   'event_time '=' dataTime',
+ |   'MONTH_GRANULARITY '='1')
+ |AS SELECT dataTime, SUM(age) FROM mainTable
+ |GROUP BY dataTime
+""".stripMargin)
+checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, 
"maintable_agg1_month")
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+  test("test timeseries create table 35: support granularity value with 
space") {
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+sql(
+  s"""CREATE DATAMAP agg1_month ON TABLE mainTable
+ |USING '$timeSeries'
+ |DMPROPERTIES (
+ |   'event_time '='dataTime',
+ |   'MONTH_GRANULARITY '=' 1')
+ |AS SELECT dataTime, SUM(age) FROM mainTable
+ |GROUP BY dataTime
+""".stripMargin)
+checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, 
"maintable_agg1_month")
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+  }
+
+  test("test timeseries create table 36: support event_time and granularity 
value with space") {
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+sql(
+  s"""
+ | CREATE DATAMAP agg1_month ON TABLE mainTable
+ | USING '$timeSeries'
+ | DMPROPERTIES (
+ |   'EVENT_TIME'='dataTime   ',
+ |   'MONTH_GRANULARITY'=' 1  ')
+ | AS SELECT dataTime, SUM(age) FROM mainTable
+ | GROUP BY dataTime
+""".stripMargin)
+checkExistence(sql("SHOW DATAMAP ON TABLE maintable"), true, 
"maintable_agg1_month")
+  }
+
+  test("test timeseries create table 37:  unsupport event_time error value") {
+sql("DROP DATAMAP IF EXISTS agg1_month ON TABLE maintable")
+intercept[NullPointerException] {
+  sql(
+s"""CREATE DATAMAP agg1_month ON TABLE mainTable USING '$timeSeries'
+   |DMPROPERTIES (
+ 

[49/50] [abbrv] carbondata git commit: [HOTFIX] Some basic fix for 1.3.0 release

2018-02-03 Thread ravipesala
[HOTFIX] Some basic fix for 1.3.0 release

This closes #1924


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/fa6cd8d5
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/fa6cd8d5
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/fa6cd8d5

Branch: refs/heads/branch-1.3
Commit: fa6cd8d58632357cd29731d59398d1a43b282447
Parents: 4a2a2d1
Author: chenliang613 
Authored: Sat Feb 3 21:06:55 2018 +0800
Committer: ravipesala 
Committed: Sun Feb 4 00:33:13 2018 +0530

--
 docs/configuration-parameters.md|   2 +-
 docs/data-management-on-carbondata.md   | 216 ---
 .../examples/StandardPartitionExample.scala |  11 +-
 integration/spark2/pom.xml  |   3 +
 4 files changed, 107 insertions(+), 125 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/configuration-parameters.md
--
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 621574d..91f6cf5 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -61,7 +61,7 @@ This section provides the details of all the configurations 
required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad 
records are stored. By default the value is Null. This path must to be 
configured by the user if bad record logger is enabled or bad record action 
redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the 
performance of select queries as it fetch columnar batch of size 4*1024 rows 
instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of 
blocklets which are called blocklet groups. This parameter specifies the size 
of the blocklet group. Higher value results in better sequential IO access.The 
minimum value is 16MB, any value lesser than 16MB will reset to the default 
value (64MB). |  |
-| carbon.task.distribution | block | **block**: Setting this value will launch 
one task per block. This setting is suggested in case of concurrent queries and 
queries having big shuffling scenarios. **custom**: Setting this value will 
group the blocks and distribute it uniformly to the available resources in the 
cluster. This enhances the query performance but not suggested in case of 
concurrent queries and queries having big shuffling scenarios. **blocklet**: 
Setting this value will launch one task per blocklet. This setting is suggested 
in case of concurrent queries and queries having big shuffling scenarios. 
**merge_small_files**: Setting this value will merge all the small partitions 
to a size of (128 MB) during querying. The small partitions are combined to a 
map task to reduce the number of read task. This enhances the performance. | | 
+| carbon.task.distribution | block | **block**: Setting this value will launch 
one task per block. This setting is suggested in case of concurrent queries and 
queries having big shuffling scenarios. **custom**: Setting this value will 
group the blocks and distribute it uniformly to the available resources in the 
cluster. This enhances the query performance but not suggested in case of 
concurrent queries and queries having big shuffling scenarios. **blocklet**: 
Setting this value will launch one task per blocklet. This setting is suggested 
in case of concurrent queries and queries having big shuffling scenarios. 
**merge_small_files**: Setting this value will merge all the small partitions 
to a size of (128 MB is the default value of 
"spark.sql.files.maxPartitionBytes",it is configurable) during querying. The 
small partitions are combined to a map task to reduce the number of read task. 
This enhances the performance. | | 
 
 * **Compaction Configuration**
   

http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 3acb711..9bb6c20 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -26,8 +26,7 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
-* [HIVE STANDARD PARTITION](#hive-standard-partition)
-* [PRE-AGGREGATE TABLES](#agg-tables)
+* [PRE-AGGREGATE TABLES](#pre-aggregate-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
 
@@ -54,8 +53,6 @@ This tutorial is going to introduce all commands and 

[18/50] [abbrv] carbondata git commit: [CARBONDATA-2043] Configurable wait time for requesting executors and minimum registered executors ratio to continue the block distribution - carbon.dynamicAlloc

2018-02-03 Thread ravipesala
[CARBONDATA-2043] Configurable wait time for requesting executors and minimum
registered executors ratio to continue the block distribution
- carbon.dynamicAllocation.schedulerTimeout: configures the wait time. Default
5 sec, min 5 sec, max 15 sec.
- carbon.scheduler.minRegisteredResourcesRatio: min 0.1, max 1.0, default 0.8;
configures the minimum registered executors ratio.
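The two ranges above imply a simple clamp on the configured values. The sketch below is illustrative only (the names are not the CarbonProperties API), showing a plausible validation under those stated defaults and bounds:

```java
// Hypothetical sketch of validating the two properties described above:
// the scheduler timeout is clamped to [5, 15] seconds (default 5), and the
// registered-resources ratio to [0.1, 1.0] (default 0.8).
public class SchedulerConfigValidation {
    static int schedulerTimeoutSeconds(String configured) {
        int value;
        try {
            value = Integer.parseInt(configured.trim());
        } catch (NumberFormatException e) {
            return 5;                                 // default on bad input
        }
        return Math.max(5, Math.min(15, value));      // clamp into [5, 15]
    }

    static double minRegisteredRatio(String configured) {
        double value;
        try {
            value = Double.parseDouble(configured.trim());
        } catch (NumberFormatException e) {
            return 0.8;                               // default on bad input
        }
        return Math.max(0.1, Math.min(1.0, value));   // clamp into [0.1, 1.0]
    }

    public static void main(String[] args) {
        System.out.println(schedulerTimeoutSeconds("30")); // clamped to 15
        System.out.println(minRegisteredRatio("0.05"));    // clamped to 0.1
    }
}
```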

This closes #1822


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/473bd319
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/473bd319
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/473bd319

Branch: refs/heads/branch-1.3
Commit: 473bd3197a69e3c0574f8c07f04c29e43f7a023d
Parents: 54a381c
Author: mohammadshahidkhan 
Authored: Fri Dec 22 17:30:31 2017 +0530
Committer: Venkata Ramana G 
Committed: Fri Feb 2 11:10:23 2018 +0530

--
 .../core/constants/CarbonCommonConstants.java   | 71 ++-
 .../carbondata/core/util/CarbonProperties.java  | 90 +++-
 .../core/CarbonPropertiesValidationTest.java| 42 +
 .../spark/sql/hive/DistributionUtil.scala   | 67 ++-
 4 files changed, 205 insertions(+), 65 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/473bd319/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index 7ae3034..87eec8a 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1149,29 +1149,6 @@ public final class CarbonCommonConstants {
*/
   public static final int DEFAULT_MAX_NUMBER_OF_COLUMNS = 2;
 
-  /**
-   * Maximum waiting time (in seconds) for a query for requested executors to 
be started
-   */
-  @CarbonProperty
-  public static final String CARBON_EXECUTOR_STARTUP_TIMEOUT =
-  "carbon.max.executor.startup.timeout";
-
-  /**
-   * default value for executor start up waiting time out
-   */
-  public static final String CARBON_EXECUTOR_WAITING_TIMEOUT_DEFAULT = "5";
-
-  /**
-   * Max value. If the value configured by the user is more than this, then this value will be
-   * considered
-   */
-  public static final int CARBON_EXECUTOR_WAITING_TIMEOUT_MAX = 60;
-
-  /**
-   * time for which thread will sleep and check again if the requested number 
of executors
-   * have been started
-   */
-  public static final int CARBON_EXECUTOR_STARTUP_THREAD_SLEEP_TIME = 250;
 
   /**
* to enable unsafe column page in write step
@@ -1537,6 +1514,54 @@ public final class CarbonCommonConstants {
   public static final long HANDOFF_SIZE_DEFAULT = 1024L * 1024 * 1024;
 
   /**
+   * minimum required registered resource for starting block distribution
+   */
+  @CarbonProperty
+  public static final String CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO =
+  "carbon.scheduler.minregisteredresourcesratio";
+  /**
+   * default minimum required registered resource for starting block 
distribution
+   */
+  public static final String 
CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_DEFAULT = "0.8d";
+  /**
+   * minimum required registered resource for starting block distribution
+   */
+  public static final double 
CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MIN = 0.1d;
+  /**
+   * max minimum required registered resource for starting block distribution
+   */
+  public static final double 
CARBON_SCHEDULER_MIN_REGISTERED_RESOURCES_RATIO_MAX = 1.0d;
+
+  /**
+   * To define how much time scheduler should wait for the
+   * resource in dynamic allocation.
+   */
+  public static final String CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT =
+  "carbon.dynamicallocation.schedulertimeout";
+
+  /**
+   * default scheduler wait time
+   */
+  public static final String 
CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_DEFAULT = "5";
+
+  /**
+   * default value for executor start up waiting time out
+   */
+  public static final int CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MIN = 5;
+
+  /**
+   * Max value. If the value configured by the user is more than this, then this value will be
+   * considered
+   */
+  public static final int CARBON_DYNAMIC_ALLOCATION_SCHEDULER_TIMEOUT_MAX = 15;
+
+  /**
+   * time for which thread will sleep and check again if the requested number 
of executors
+   * have been started
+   */
+  public static final int 
CARBON_DYNAMIC_ALLOCATION_SCHEDULER_THREAD_SLEEP_TIME = 250;
+
+  /**
* 

Build failed in Jenkins: carbondata-master-spark-2.1 #2029

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-1454]false expression handling and block pruning

--
[...truncated 11.60 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2029

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-1454]false expression handling and block pruning

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 582 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #64

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-1454]false expression handling and block pruning

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 582 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #64

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-1454]false expression handling and block pruning

--
[...truncated 11.55 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution 

carbondata git commit: [CARBONDATA-1454]false expression handling and block pruning

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master fa6cd8d58 -> e16e87818


[CARBONDATA-1454]false expression handling and block pruning

Issue :- When a wrong or invalid value is given for a timestamp or date data type filter, all
blocks are identified for scan.

Solution :- Add FalseExpression handling and a FalseFilterExecutor, which can be
used to handle invalid filter values.

This closes #1915
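
The idea behind the fix can be sketched in a few lines: an always-false predicate needs no min/max evaluation, so its executor can report that no block requires scanning and return an empty result bit set. This is a minimal illustrative sketch, not the actual CarbonData classes; the interface and class names (`FilterExecutor`, `AlwaysFalseFilterExecutor`) are simplified stand-ins for `FilterExecuter`/`FalseFilterExecutor` in the patch below.

```java
import java.util.BitSet;

// Simplified stand-in for the filter-executor contract: decide whether a
// block must be scanned, and produce the set of matching rows if it is.
interface FilterExecutor {
    BitSet applyFilter(int rowCount);
    boolean isScanRequired(Object[] blockMin, Object[] blockMax);
}

// An always-false predicate (e.g. an unparseable timestamp literal) can
// never match, so it prunes every block and matches no rows.
class AlwaysFalseFilterExecutor implements FilterExecutor {
    @Override
    public BitSet applyFilter(int rowCount) {
        // Empty bit set: every row is filtered out without being inspected.
        return new BitSet(rowCount);
    }

    @Override
    public boolean isScanRequired(Object[] blockMin, Object[] blockMax) {
        // No min/max comparison needed; the block is pruned unconditionally.
        return false;
    }
}

public class Demo {
    public static void main(String[] args) {
        FilterExecutor exec = new AlwaysFalseFilterExecutor();
        System.out.println(exec.isScanRequired(null, null)); // false -> block pruned
        System.out.println(exec.applyFilter(1000).cardinality()); // 0 rows match
    }
}
```

Before this change, the FALSE expression fell through to `RowLevelFilterResolverImpl`, which still considered every block a scan candidate; routing it to a dedicated resolver/executor is what enables the pruning shown above.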


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e16e8781
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e16e8781
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e16e8781

Branch: refs/heads/master
Commit: e16e878189baa82bee5ca8af8d1229b7733b454a
Parents: fa6cd8d
Author: BJangir 
Authored: Fri Feb 2 16:33:45 2018 +0530
Committer: ravipesala 
Committed: Sun Feb 4 00:59:25 2018 +0530

--
 .../scan/filter/FilterExpressionProcessor.java  |  3 +-
 .../carbondata/core/scan/filter/FilterUtil.java |  3 +
 .../filter/executer/FalseFilterExecutor.java| 60 
 .../scan/filter/intf/FilterExecuterType.java|  2 +-
 .../FalseConditionalResolverImpl.java   | 61 
 .../filterexpr/FilterProcessorTestCase.scala| 74 +++-
 .../apache/spark/sql/CarbonBoundReference.scala |  4 ++
 .../execution/CastExpressionOptimization.scala  | 60 +---
 .../strategy/CarbonLateDecodeStrategy.scala |  2 +
 .../spark/sql/optimizer/CarbonFilters.scala |  4 ++
 10 files changed, 259 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
index 5a1b7df..3e23aa3 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterExpressionProcessor.java
@@ -63,6 +63,7 @@ import 
org.apache.carbondata.core.scan.filter.resolver.FilterResolverIntf;
 import 
org.apache.carbondata.core.scan.filter.resolver.LogicalFilterResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.RowLevelFilterResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.RowLevelRangeFilterResolverImpl;
+import 
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.FalseConditionalResolverImpl;
 import 
org.apache.carbondata.core.scan.filter.resolver.resolverinfo.TrueConditionalResolverImpl;
 import org.apache.carbondata.core.scan.partition.PartitionUtil;
 import org.apache.carbondata.core.scan.partition.Partitioner;
@@ -398,7 +399,7 @@ public class FilterExpressionProcessor implements 
FilterProcessor {
 ConditionalExpression condExpression = null;
 switch (filterExpressionType) {
   case FALSE:
-return new RowLevelFilterResolverImpl(expression, false, false, 
tableIdentifier);
+return new FalseConditionalResolverImpl(expression, false, false, 
tableIdentifier);
   case TRUE:
 return new TrueConditionalResolverImpl(expression, false, false, 
tableIdentifier);
   case EQUALS:

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e16e8781/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
index 3268ca3..a08edc0 100644
--- a/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/scan/filter/FilterUtil.java
@@ -74,6 +74,7 @@ import 
org.apache.carbondata.core.scan.filter.executer.AndFilterExecuterImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.DimColumnExecuterFilterInfo;
 import 
org.apache.carbondata.core.scan.filter.executer.ExcludeColGroupFilterExecuterImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.ExcludeFilterExecuterImpl;
+import org.apache.carbondata.core.scan.filter.executer.FalseFilterExecutor;
 import org.apache.carbondata.core.scan.filter.executer.FilterExecuter;
 import 
org.apache.carbondata.core.scan.filter.executer.ImplicitIncludeFilterExecutorImpl;
 import 
org.apache.carbondata.core.scan.filter.executer.IncludeColGroupFilterExecuterImpl;
@@ -176,6 +177,8 @@ public final class FilterUtil {
   .getFilterRangeValues(segmentProperties), segmentProperties);
 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2028

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.7 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.1 #2028

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [HOTFIX] Some basic fix for 1.3.0 release

--
[...truncated 11.58 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/1/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #63

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #63

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [HOTFIX] Some basic fix for 1.3.0 release

--
[...truncated 11.54 KB...]
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/1/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:check (default-check) @ carbondata-common 
---
[INFO] 

carbondata git commit: [HOTFIX] Some basic fix for 1.3.0 release

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master 4a2a2d1b7 -> fa6cd8d58


[HOTFIX] Some basic fix for 1.3.0 release

This closes #1924


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/fa6cd8d5
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/fa6cd8d5
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/fa6cd8d5

Branch: refs/heads/master
Commit: fa6cd8d58632357cd29731d59398d1a43b282447
Parents: 4a2a2d1
Author: chenliang613 
Authored: Sat Feb 3 21:06:55 2018 +0800
Committer: ravipesala 
Committed: Sun Feb 4 00:33:13 2018 +0530

--
 docs/configuration-parameters.md|   2 +-
 docs/data-management-on-carbondata.md   | 216 ---
 .../examples/StandardPartitionExample.scala |  11 +-
 integration/spark2/pom.xml  |   3 +
 4 files changed, 107 insertions(+), 125 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/configuration-parameters.md
--
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 621574d..91f6cf5 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -61,7 +61,7 @@ This section provides the details of all the configurations 
required for CarbonD
 | carbon.options.bad.record.path |  | Specifies the HDFS path where bad 
records are stored. By default the value is Null. This path must to be 
configured by the user if bad record logger is enabled or bad record action 
redirect. | |
 | carbon.enable.vector.reader | true | This parameter increases the 
performance of select queries as it fetch columnar batch of size 4*1024 rows 
instead of fetching data row by row. | |
 | carbon.blockletgroup.size.in.mb | 64 MB | The data are read as a group of 
blocklets which are called blocklet groups. This parameter specifies the size 
of the blocklet group. Higher value results in better sequential IO access.The 
minimum value is 16MB, any value lesser than 16MB will reset to the default 
value (64MB). |  |
-| carbon.task.distribution | block | **block**: Setting this value will launch 
one task per block. This setting is suggested in case of concurrent queries and 
queries having big shuffling scenarios. **custom**: Setting this value will 
group the blocks and distribute it uniformly to the available resources in the 
cluster. This enhances the query performance but not suggested in case of 
concurrent queries and queries having big shuffling scenarios. **blocklet**: 
Setting this value will launch one task per blocklet. This setting is suggested 
in case of concurrent queries and queries having big shuffling scenarios. 
**merge_small_files**: Setting this value will merge all the small partitions 
to a size of (128 MB) during querying. The small partitions are combined to a 
map task to reduce the number of read task. This enhances the performance. | | 
+| carbon.task.distribution | block | **block**: Setting this value will launch 
one task per block. This setting is suggested in case of concurrent queries and 
queries having big shuffling scenarios. **custom**: Setting this value will 
group the blocks and distribute it uniformly to the available resources in the 
cluster. This enhances the query performance but not suggested in case of 
concurrent queries and queries having big shuffling scenarios. **blocklet**: 
Setting this value will launch one task per blocklet. This setting is suggested 
in case of concurrent queries and queries having big shuffling scenarios. 
**merge_small_files**: Setting this value will merge all the small partitions 
to a size of (128 MB is the default value of 
"spark.sql.files.maxPartitionBytes",it is configurable) during querying. The 
small partitions are combined to a map task to reduce the number of read task. 
This enhances the performance. | | 
 
 * **Compaction Configuration**
   

http://git-wip-us.apache.org/repos/asf/carbondata/blob/fa6cd8d5/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index 3acb711..9bb6c20 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -26,8 +26,7 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
-* [HIVE STANDARD PARTITION](#hive-standard-partition)
-* [PRE-AGGREGATE TABLES](#agg-tables)
+* [PRE-AGGREGATE TABLES](#pre-aggregate-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT 

Build failed in Jenkins: carbondata-master-spark-2.1 #2027

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[kunalkapoor642] [CARBONDATA-2122] Add validation for empty bad record path

--
[...truncated 11.59 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/1/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 3 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2027

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[kunalkapoor642] [CARBONDATA-2122] Add validation for empty bad record path

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/1/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #62

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[kunalkapoor642] [CARBONDATA-2122] Add validation for empty bad record path

--
[...truncated 11.55 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 3 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #62

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[kunalkapoor642] [CARBONDATA-2122] Add validation for empty bad record path

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

carbondata git commit: [CARBONDATA-2122] Add validation for empty bad record path

2018-02-03 Thread kunalkapoor
Repository: carbondata
Updated Branches:
  refs/heads/master 50e2f2c8f -> 4a2a2d1b7


[CARBONDATA-2122] Add validation for empty bad record path

A data load using bad-record REDIRECT with an empty bad-record location should 
throw an Invalid Path exception.

This closes #1914


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a2a2d1b
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a2a2d1b
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a2a2d1b

Branch: refs/heads/master
Commit: 4a2a2d1b74901f96efc4ecf9cc16e9804884b929
Parents: 50e2f2c
Author: Jatin 
Authored: Fri Feb 2 19:55:16 2018 +0530
Committer: kunal642 
Committed: Sun Feb 4 00:23:19 2018 +0530

--
 .../apache/carbondata/core/util/CarbonUtil.java |  7 +-
 .../sdv/generated/AlterTableTestCase.scala  |  2 -
 .../sdv/generated/DataLoadingTestCase.scala |  5 +-
 .../badrecordloger/BadRecordActionTest.scala| 71 +++-
 .../badrecordloger/BadRecordEmptyDataTest.scala |  5 --
 .../badrecordloger/BadRecordLoggerTest.scala|  5 --
 .../StandardPartitionBadRecordLoggerTest.scala  |  5 --
 .../carbondata/spark/util/DataLoadingUtil.scala |  2 +-
 .../spark/sql/test/TestQueryExecutor.scala  | 16 ++---
 .../BadRecordPathLoadOptionTest.scala   | 11 ++-
 .../DataLoadFailAllTypeSortTest.scala   | 28 +---
 .../NumericDimensionBadRecordTest.scala |  6 +-
 .../AlterTableValidationTestCase.scala  |  3 -
 .../carbon/datastore/BlockIndexStoreTest.java   |  2 -
 14 files changed, 93 insertions(+), 75 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
--
diff --git a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java 
b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
index b62b77d..c208154 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/CarbonUtil.java
@@ -98,6 +98,7 @@ import com.google.gson.GsonBuilder;
 import org.apache.commons.codec.binary.Base64;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.lang.ArrayUtils;
+import org.apache.commons.lang.StringUtils;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
@@ -1891,7 +1892,11 @@ public final class CarbonUtil {
* @return
*/
   public static boolean isValidBadStorePath(String badRecordsLocation) {
-return !(null == badRecordsLocation || badRecordsLocation.length() == 0);
+if (StringUtils.isEmpty(badRecordsLocation)) {
+  return false;
+} else {
+  return isFileExists(checkAndAppendHDFSUrl(badRecordsLocation));
+}
   }
 
   /**
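For reference, the patched validation above can be sketched as a minimal standalone class. This is an illustration, not the CarbonData API: the hypothetical `pathExists` helper stands in for the real `isFileExists(checkAndAppendHDFSUrl(...))` call, and a plain null/empty check replaces commons-lang `StringUtils.isEmpty` so the sketch compiles without dependencies.

```java
public class BadRecordPathCheck {

    // Hypothetical stand-in for isFileExists(checkAndAppendHDFSUrl(path)),
    // which in CarbonData resolves HDFS/local URLs via its own file API.
    static boolean pathExists(String path) {
        return new java.io.File(path).exists();
    }

    // Mirrors the patched isValidBadStorePath: null or empty locations are
    // rejected up front, and non-empty locations must also actually exist.
    public static boolean isValidBadStorePath(String badRecordsLocation) {
        if (badRecordsLocation == null || badRecordsLocation.isEmpty()) {
            return false;
        }
        return pathExists(badRecordsLocation);
    }

    public static void main(String[] args) {
        System.out.println(isValidBadStorePath(null));  // false
        System.out.println(isValidBadStorePath(""));    // false
        System.out.println(isValidBadStorePath(System.getProperty("java.io.tmpdir")));
    }
}
```

Before the patch, an empty string passed the `null`/length check's intent only accidentally; the new version short-circuits on emptiness and additionally verifies the path exists.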

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
index 8899f5c..4e53ea3 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/AlterTableTestCase.scala
@@ -1016,8 +1016,6 @@ class AlterTableTestCase extends QueryTest with BeforeAndAfterAll {
 prop.addProperty("carbon.compaction.level.threshold", "2,1")
 prop.addProperty("carbon.enable.auto.load.merge", "false")
 prop.addProperty("carbon.bad.records.action", "FORCE")
-prop.addProperty(CarbonCommonConstants.CARBON_BADRECORDS_LOC,
-  TestQueryExecutor.warehouse+"/baaadrecords")
   }
 
   override def afterAll: Unit = {

http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a2a2d1b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/carbondata/cluster/sdv/generated/DataLoadingTestCase.scala
index 52396ee..24a5aa4 100644
--- 

Build failed in Jenkins: carbondata-master-spark-2.1 #2026

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2125] like% filter is giving 
ArrayIndexOutOfBoundException

--
[...truncated 11.49 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2026

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2125] like% filter is giving 
ArrayIndexOutOfBoundException

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #61

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2125] like% filter is giving 
ArrayIndexOutOfBoundException

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #61

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2125] like% filter is giving 
ArrayIndexOutOfBoundException

--
[...truncated 11.44 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to 

carbondata git commit: [CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case of table having more pages

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master 54b7db519 -> 50e2f2c8f


[CARBONDATA-2125] like% filter is giving ArrayIndexOutOfBoundException in case 
of table having more pages

Problem: the like% filter throws ArrayIndexOutOfBoundsException when a table 
has more than one page.
Solution: in RowLevelFilter, the numberOfRows array should be filled per page, 
from each page's row count.

This closes #1909


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/50e2f2c8
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/50e2f2c8
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/50e2f2c8

Branch: refs/heads/master
Commit: 50e2f2c8f2cc6ee4b72839b704a038666ae629ba
Parents: 54b7db5
Author: dhatchayani 
Authored: Fri Feb 2 10:55:19 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 22:55:36 2018 +0530

--
 .../executer/RowLevelFilterExecuterImpl.java| 10 ++--
 .../filter/executer/TrueFilterExecutor.java |  2 +-
 .../filterexpr/FilterProcessorTestCase.scala| 25 
 3 files changed, 34 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
index 224a69f..89489a2 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/RowLevelFilterExecuterImpl.java
@@ -205,7 +205,10 @@ public class RowLevelFilterExecuterImpl implements FilterExecuter {
   } else {
 // specific for restructure case where default values need to be filled
 pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() };
+numberOfRows = new int[pageNumbers];
+for (int i = 0; i < pageNumbers; i++) {
+  numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+}
   }
 }
 if (msrColEvalutorInfoList.size() > 0) {
@@ -217,7 +220,10 @@ public class RowLevelFilterExecuterImpl implements FilterExecuter {
   } else {
 // specific for restructure case where default values need to be filled
 pageNumbers = blockChunkHolder.getDataBlock().numberOfPages();
-numberOfRows = new int[] { blockChunkHolder.getDataBlock().nodeSize() };
+numberOfRows = new int[pageNumbers];
+for (int i = 0; i < pageNumbers; i++) {
+  numberOfRows[i] = blockChunkHolder.getDataBlock().getPageRowCount(i);
+}
   }
 }
 BitSetGroup bitSetGroup = new BitSetGroup(pageNumbers);

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
index 92396ae..4b3738a 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/scan/filter/executer/TrueFilterExecutor.java
@@ -39,7 +39,7 @@ public class TrueFilterExecutor implements FilterExecuter {
 BitSetGroup group = new BitSetGroup(numberOfPages);
 for (int i = 0; i < numberOfPages; i++) {
   BitSet set = new BitSet();
-  set.flip(0, blockChunkHolder.getDataBlock().nodeSize());
+  set.flip(0, blockChunkHolder.getDataBlock().getPageRowCount(i));
   group.setBitSet(set, i);
 }
 return group;
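The effect of both hunks can be illustrated with a small self-contained sketch. The page row counts below are hypothetical example values, and the array logic mirrors, rather than calls, the CarbonData data-block API: sizing the row-count array by total node size instead of per page is exactly what broke once a table had more than one page.

```java
import java.util.BitSet;

public class PageRowCountSketch {
    // Hypothetical table layout: three pages with these row counts.
    static final int[] PAGE_ROW_COUNTS = {32000, 32000, 8000};

    // Old behavior: a single-element array holding the total row count.
    // Indexing it with page 1 or 2 throws ArrayIndexOutOfBoundsException.
    static int[] buggyNumberOfRows() {
        int total = 0;
        for (int c : PAGE_ROW_COUNTS) total += c;
        return new int[] { total };
    }

    // Fixed behavior, mirroring the patch: one entry per page, each filled
    // from that page's own row count.
    static int[] fixedNumberOfRows() {
        int[] rows = new int[PAGE_ROW_COUNTS.length];
        for (int i = 0; i < PAGE_ROW_COUNTS.length; i++) {
            rows[i] = PAGE_ROW_COUNTS[i];
        }
        return rows;
    }

    public static void main(String[] args) {
        int[] rows = fixedNumberOfRows();
        for (int page = 0; page < rows.length; page++) {
            // Same pattern as the TrueFilterExecutor fix: flip exactly the
            // rows that exist in this page, not the whole node size.
            BitSet set = new BitSet();
            set.flip(0, rows[page]);
            System.out.println("page " + page + " -> " + set.cardinality() + " rows");
        }
    }
}
```

With the old single-entry array, flipping `nodeSize()` bits for every page also selected non-existent rows on the last, shorter page; the per-page counts keep each BitSet sized to its page.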

http://git-wip-us.apache.org/repos/asf/carbondata/blob/50e2f2c8/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
--
diff --git 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
 
b/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
index b92b379..d54906f 100644
--- 
a/integration/spark-common-test/src/test/scala/org/apache/carbondata/spark/testsuite/filterexpr/FilterProcessorTestCase.scala
+++ 

Build failed in Jenkins: carbondata-master-spark-2.1 #2025

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

--
[...truncated 11.45 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2025

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.7 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #60

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

--
[...truncated 11.41 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to 

carbondata git commit: [CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master a7bcc763b -> 54b7db519


[CARBONDATA-2119] Fixed deserialization issues for carbonLoadModel

Problem:
The load model was not getting deserialized on the executor, because of which two 
different CarbonTable objects were being created.
Solution:
Reconstruct the CarbonTable from the TableInfo if it has not already been created.

This closes #1911
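
The transient-flag pattern this fix relies on can be sketched as below. The class and field names are simplified stand-ins for the real CarbonData types (not the actual API), and the string transformation merely mimics what updateTableInfo() does; the point is only that a transient flag is reset to false by deserialization, so the update re-runs exactly once on the executor side.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class LazyUpdateDemo {

    // Simplified stand-in for CarbonDataLoadSchema: the transient flag is
    // not serialized, so it is false again after deserialization and the
    // first accessor call re-applies the update exactly once.
    static class LoadSchema implements Serializable {
        private String tableInfo;           // stands in for TableInfo
        private transient boolean updated;  // false after readObject()

        LoadSchema(String tableInfo) {
            this.tableInfo = tableInfo;
        }

        String getTableInfo() {
            if (!updated) {  // mirrors the guard added in getCarbonTable()
                // stands in for CarbonTable.updateTableInfo(...)
                tableInfo = tableInfo.toUpperCase();
                updated = true;
            }
            return tableInfo;
        }
    }

    public static void main(String[] args) throws Exception {
        LoadSchema driverCopy = new LoadSchema("sales");

        // Round-trip through Java serialization, analogous to Spark shipping
        // the load model from the driver to an executor.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(driverCopy);
        oos.flush();
        LoadSchema executorCopy = (LoadSchema) new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray())).readObject();

        // The transient flag was reset, so the update runs lazily on first use.
        System.out.println(executorCopy.getTableInfo());
    }
}
```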


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/54b7db51
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/54b7db51
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/54b7db51

Branch: refs/heads/master
Commit: 54b7db51906340d6d7b417058f9665731fa51a21
Parents: a7bcc76
Author: kunal642 
Authored: Fri Feb 2 17:37:51 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 22:02:54 2018 +0530

--
 .../core/metadata/schema/table/CarbonTable.java  |  2 +-
 .../processing/loading/model/CarbonDataLoadSchema.java   | 11 ++-
 2 files changed, 11 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
index 4bb0d20..09ff440 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/metadata/schema/table/CarbonTable.java
@@ -141,7 +141,7 @@ public class CarbonTable implements Serializable {
    *
    * @param tableInfo
    */
-  private static void updateTableInfo(TableInfo tableInfo) {
+  public static void updateTableInfo(TableInfo tableInfo) {
     List<DataMapSchema> dataMapSchemas = new ArrayList<>();
     for (DataMapSchema dataMapSchema : tableInfo.getDataMapSchemaList()) {
       DataMapSchema newDataMapSchema = DataMapSchemaFactory.INSTANCE

http://git-wip-us.apache.org/repos/asf/carbondata/blob/54b7db51/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
--
diff --git 
a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
 
b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
index d7aa103..a9d7bd8 100644
--- 
a/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
+++ 
b/processing/src/main/java/org/apache/carbondata/processing/loading/model/CarbonDataLoadSchema.java
@@ -37,6 +37,11 @@ public class CarbonDataLoadSchema implements Serializable {
   private CarbonTable carbonTable;
 
   /**
+   * Used to determine if the dataTypes have already been updated or not.
+   */
+  private transient boolean updatedDataTypes;
+
+  /**
* CarbonDataLoadSchema constructor which takes CarbonTable
*
* @param carbonTable
@@ -51,7 +56,11 @@ public class CarbonDataLoadSchema implements Serializable {
* @return carbonTable
*/
   public CarbonTable getCarbonTable() {
+    if (!updatedDataTypes) {
+      CarbonTable.updateTableInfo(carbonTable.getTableInfo());
+      updatedDataTypes = true;
+    }
 return carbonTable;
   }
 
-}
+}
\ No newline at end of file



Build failed in Jenkins: carbondata-master-spark-2.1 #2024

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2127] Documentation for Hive Standard Partition

--
[...truncated 11.45 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2024

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #59

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #59

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2127] Documentation for Hive Standard Partition

--
[...truncated 11.40 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing 

carbondata git commit: [CARBONDATA-2127] Documentation for Hive Standard Partition

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master 4a251ba16 -> a7bcc763b


[CARBONDATA-2127] Documentation for Hive Standard Partition

This closes #1926


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a7bcc763
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/a7bcc763
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/a7bcc763

Branch: refs/heads/master
Commit: a7bcc763b5d1dea35f5015dadabb37a051a4f881
Parents: 4a251ba
Author: sgururajshetty 
Authored: Sat Feb 3 21:04:23 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:53:38 2018 +0530

--
 docs/data-management-on-carbondata.md | 104 -
 1 file changed, 103 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/a7bcc763/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index d9d4420..3acb711 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -20,12 +20,13 @@
 This tutorial is going to introduce all commands and data operations on 
CarbonData.
 
 * [CREATE TABLE](#create-table)
-* [CREATE DATABASE] (#create-database)
+* [CREATE DATABASE](#create-database)
 * [TABLE MANAGEMENT](#table-management)
 * [LOAD DATA](#load-data)
 * [UPDATE AND DELETE](#update-and-delete)
 * [COMPACTION](#compaction)
 * [PARTITION](#partition)
+* [HIVE STANDARD PARTITION](#hive-standard-partition)
 * [PRE-AGGREGATE TABLES](#agg-tables)
 * [BUCKETING](#bucketing)
 * [SEGMENT MANAGEMENT](#segment-management)
@@ -765,6 +766,107 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
   * The partitioned column can be excluded from SORT_COLUMNS, this will let 
other columns to do the efficient sorting.
   * When writing SQL on a partition table, try to use filters on the partition 
column.
 
+## HIVE STANDARD PARTITION
+
+  Carbon supports a custom partition implementation of its own, but due to 
compatibility issues it does not allow you to use Hive's partition features. By 
using this function, you can use the standard partition features available in Hive.
+
+### Create Partition Table
+
+  This command allows you to create table with partition.
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+[(col_name data_type , ...)]
+[COMMENT table_comment]
+[PARTITIONED BY (col_name data_type , ...)]
+[STORED BY file_format]
+[TBLPROPERTIES (property_name=property_value, ...)]
+[AS select_statement];
+  ```
+  
+  Example:
+  ```
+   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+productNumber Int,
+productName String,
+storeCity String,
+storeProvince String,
+saleQuantity Int,
+revenue Int)
+  PARTITIONED BY (productCategory String, productBatch String)
+  STORED BY 'carbondata'
+  ```
+   
+### Load Data Using Static Partition
+
+  This command allows you to load data using static partition.
+  
+  ```
+  LOAD DATA [LOCAL] INPATH 'folder_path' 
+INTO TABLE [db_name.]table_name PARTITION (partition_spec) 
+OPTIONS(property_name=property_value, ...)
+  INSERT INTO TABLE [db_name.]table_name PARTITION (partition_spec) SELECT 
STATEMENT 
+  ```
+  
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+INTO TABLE locationTable
+PARTITION (country = 'US', state = 'CA')
+
+  INSERT INTO TABLE locationTable
+PARTITION (country = 'US', state = 'AL')
+SELECT * FROM another_user au 
+WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Load Data Using Dynamic Partition
+
+  This command allows you to load data using dynamic partition. If the partition 
spec is not specified, then the partition is considered dynamic.
+
+  Example:
+  ```
+  LOAD DATA LOCAL INPATH '${env:HOME}/staticinput.txt'
+INTO TABLE locationTable
+  
+  INSERT INTO TABLE locationTable
+SELECT * FROM another_user au 
+WHERE au.country = 'US' AND au.state = 'AL';
+  ```
+
+### Show Partitions
+
+  This command gets the Hive partition information of the table.
+
+  ```
+  SHOW PARTITIONS [db_name.]table_name
+  ```
+
+### Drop Partition
+
+  This command drops the specified Hive partition only.
+  ```
+  ALTER TABLE table_name DROP [IF EXISTS] (PARTITION part_spec, ...)
+  ```
+
+### Insert OVERWRITE
+  
+  This command allows you to insert or load overwrite on a specific partition.
+  
+  ```
+   INSERT OVERWRITE TABLE table_name
+

Build failed in Jenkins: carbondata-master-spark-2.1 #2023

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2128] Documentation for table path while creating the 
table

--
[...truncated 11.46 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due 

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2023

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #58

2018-02-03 Thread Apache Jenkins Server
See 


--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.7 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] -
[ERROR] COMPILATION ERROR : 
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.2 #58

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2128] Documentation for table path while creating the 
table

--
[...truncated 11.57 KB...]
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing 

carbondata git commit: [CARBONDATA-2128] Documentation for table path while creating the table

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master 349be007f -> 4a251ba16


[CARBONDATA-2128] Documentation for table path while creating the table

This closes #1927


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/4a251ba1
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/4a251ba1
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/4a251ba1

Branch: refs/heads/master
Commit: 4a251ba168236ea1d19c5e15ea6877145952d301
Parents: 349be00
Author: sgururajshetty 
Authored: Sat Feb 3 21:20:41 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:49:11 2018 +0530

--
 docs/data-management-on-carbondata.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/4a251ba1/docs/data-management-on-carbondata.md
--
diff --git a/docs/data-management-on-carbondata.md 
b/docs/data-management-on-carbondata.md
index fef2371..d9d4420 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -32,12 +32,13 @@ This tutorial is going to introduce all commands and data 
operations on CarbonDa
 
 ## CREATE TABLE
 
-  This command can be used to create a CarbonData table by specifying the list 
of fields along with the table properties.
+  This command can be used to create a CarbonData table by specifying the list 
of fields along with the table properties. You can also specify the location 
where the table needs to be stored.
   
   ```
   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)]
   STORED BY 'carbondata'
   [TBLPROPERTIES (property_name=property_value, ...)]
+  [LOCATION 'path']
   ```  
   
 ### Usage Guidelines
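
As a hedged illustration of the syntax documented in the diff above — the database name, table name, columns, property, and path below are all hypothetical, not taken from the patch:

```sql
-- Illustrative only: create a CarbonData table at an explicit storage path.
-- Everything except the STORED BY / TBLPROPERTIES / LOCATION clause shape
-- (which the doc change above describes) is an assumed example value.
CREATE TABLE IF NOT EXISTS sales_db.orders (
  order_id INT,
  customer STRING,
  amount   DOUBLE
)
STORED BY 'carbondata'
TBLPROPERTIES ('SORT_COLUMNS'='order_id')
LOCATION 'hdfs://namenode:8020/user/carbon/orders'
```

If LOCATION is omitted, the table is stored under the configured default store path; specifying it pins the table data to the given directory.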



Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Core #2022

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2101]Restrict direct query on pre aggregate and 
timeseries

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.9 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

Build failed in Jenkins: carbondata-master-spark-2.1 #2022

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2101]Restrict direct query on pre aggregate and 
timeseries

--
[...truncated 11.46 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 2 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due 

Build failed in Jenkins: carbondata-master-spark-2.2 #57

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2101]Restrict direct query on pre aggregate and 
timeseries

--
[...truncated 11.42 KB...]
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ 
carbondata-parent ---
[INFO] Installing 
 to 
/home/jenkins/jenkins-slave/maven-repositories/0/org/apache/carbondata/carbondata-parent/1.3.0-SNAPSHOT/carbondata-parent-1.3.0-SNAPSHOT.pom
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Common 1.3.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-common ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-common ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 9 source files to 

[INFO] 
[INFO] >>> findbugs-maven-plugin:3.0.4:check (analyze-compile) > :findbugs @ 
carbondata-common >>>
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:findbugs (findbugs) @ carbondata-common 
---
[INFO] Fork Value is true
[INFO] Done FindBugs Analysis
[INFO] 
[INFO] <<< findbugs-maven-plugin:3.0.4:check (analyze-compile) < :findbugs @ 
carbondata-common <<<
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.4:check (analyze-compile) @ 
carbondata-common ---
[INFO] BugInstance size is 0
[INFO] Error size is 0
[INFO] No errors/warnings found
[INFO] 
[INFO] --- maven-resources-plugin:2.7:testResources (default-testResources) @ 
carbondata-common ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:testCompile (default-testCompile) @ 
carbondata-common ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 6 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ carbondata-common 
---
[JENKINS] Recording test results
[INFO] 
[INFO] --- maven-jar-plugin:2.5:jar (default-jar) @ carbondata-common ---
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
carbondata-common ---
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent-integration 
(default-prepare-agent-integration) @ carbondata-common ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.17:check (default) @ carbondata-common ---
[INFO] Starting audit...
Audit done.
[INFO] 
[INFO] --- scalastyle-maven-plugin:0.8.0:check (default) @ carbondata-common ---
[WARNING] sourceDirectory is not specified or does not exist 
value=
Saving to 
outputFile=
Processed 0 file(s)
Found 0 errors
Found 0 warnings
Found 0 infos
Finished in 1 ms
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report (default-report) @ 
carbondata-common ---
[INFO] Skipping JaCoCo execution due to missing execution data file.
[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:report-integration 
(default-report-integration) @ carbondata-common ---
[INFO] Skipping JaCoCo execution due to 

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Core #57

2018-02-03 Thread Apache Jenkins Server
See 


Changes:

[ravipesala] [CARBONDATA-2101]Restrict direct query on pre aggregate and 
timeseries

--
[INFO] 
[INFO] 
[INFO] Building Apache CarbonData :: Core 1.3.0-SNAPSHOT
[INFO] 
[INFO] Downloading: 
http://repo1.maven.org/maven2/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloading: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
[INFO] Downloaded: 
http://repository.apache.org/snapshots/org/apache/carbondata/carbondata-format/1.3.0-SNAPSHOT/maven-metadata.xml
 (791 B at 1.8 KB/sec)
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ carbondata-core ---
[INFO] Deleting 

[INFO] 
[INFO] --- jacoco-maven-plugin:0.7.9:prepare-agent (default-prepare-agent) @ 
carbondata-core ---
[INFO] argLine set to 
-javaagent:/home/jenkins/jenkins-slave/maven-repositories/0/org/jacoco/org.jacoco.agent/0.7.9/org.jacoco.agent-0.7.9-runtime.jar=destfile=
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ 
carbondata-core ---
[INFO] 
[INFO] --- exec-maven-plugin:1.2.1:exec (default) @ carbondata-core ---
[INFO] 
[INFO] --- maven-resources-plugin:2.7:resources (default-resources) @ 
carbondata-core ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] Copying 1 resource
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.2:compile (default-compile) @ 
carbondata-core ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 580 source files to 

[INFO] 
:
 

 uses or overrides a deprecated API.
[INFO] 
:
 Recompile with -Xlint:deprecation for details.
[INFO] 
:
 Some input files use unchecked or unsafe operations.
[INFO] 
:
 Recompile with -Xlint:unchecked for details.
[INFO] -
[WARNING] COMPILATION WARNING : 
[INFO] -
[WARNING] 
:[23,16]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[43,18]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[47,21]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[49,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[WARNING] 
:[70,17]
 sun.misc.Unsafe is internal proprietary API and may be removed in a future 
release
[INFO] 5 warnings 
[INFO] -
[INFO] 

carbondata git commit: [CARBONDATA-2101]Restrict direct query on pre aggregate and timeseries datamap

2018-02-03 Thread ravipesala
Repository: carbondata
Updated Branches:
  refs/heads/master 46d9bf966 -> 349be007f


[CARBONDATA-2101]Restrict direct query on pre aggregate and timeseries datamap

Restricting direct query on PreAggregate and timeseries data map
Added Property to run direct query on data map for testing purpose
validate.support.direct.query.on.datamap=true

This closes #1888
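
Based on the constants this patch adds to CarbonCommonConstants.java (quoted further down), enabling a direct query on a datamap table for testing might look like the following. This is a sketch, not the committed test code: the SET form assumes the dynamic-configuration path the patch adds to SessionParams.java, and the table name is illustrative.

```sql
-- Hypothetical usage sketch. The property key is taken from the
-- SUPPORT_DIRECT_QUERY_ON_DATAMAP constant added in this commit;
-- maintable_agg0 is an assumed pre-aggregate datamap table name.
SET carbon.query.directQueryOnDataMap.enabled=true;
SELECT * FROM maintable_agg0;
```

With the property left at its default of false, the same SELECT against a pre-aggregate or timeseries datamap table would be rejected, which is the restriction this commit introduces.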


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/349be007
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/349be007
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/349be007

Branch: refs/heads/master
Commit: 349be007fd20fb8c4a39b318e45b47445d2e798c
Parents: 46d9bf9
Author: kumarvishal 
Authored: Tue Jan 30 20:54:12 2018 +0530
Committer: ravipesala 
Committed: Sat Feb 3 21:32:08 2018 +0530

--
 .../core/constants/CarbonCommonConstants.java   | 10 +
 .../carbondata/core/util/SessionParams.java |  2 +
 .../spark/sql/common/util/QueryTest.scala   |  4 ++
 .../apache/spark/sql/test/util/QueryTest.scala  |  3 ++
 .../spark/rdd/AggregateDataMapCompactor.scala   |  2 +
 .../sql/CarbonDatasourceHadoopRelation.scala|  1 +
 .../scala/org/apache/spark/sql/CarbonEnv.scala  | 18 +
 .../preaaggregate/PreAggregateUtil.scala|  2 +
 .../sql/hive/CarbonPreAggregateRules.scala  |  9 +
 .../sql/optimizer/CarbonLateDecodeRule.scala| 40 +++-
 10 files changed, 89 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
index a799e51..6e6482d 100644
--- 
a/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
+++ 
b/core/src/main/java/org/apache/carbondata/core/constants/CarbonCommonConstants.java
@@ -1588,6 +1588,16 @@ public final class CarbonCommonConstants {
   "carbon.sort.storage.inmemory.size.inmb";
   public static final String IN_MEMORY_STORAGE_FOR_SORTED_DATA_IN_MB_DEFAULT = 
"512";
 
+  @CarbonProperty
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP =
+  "carbon.query.directQueryOnDataMap.enabled";
+  public static final String SUPPORT_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = 
"false";
+
+  @CarbonProperty
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP =
+  "carbon.query.validate.directqueryondatamap";
+  public static final String VALIDATE_DIRECT_QUERY_ON_DATAMAP_DEFAULTVALUE = 
"true";
+
   private CarbonCommonConstants() {
   }
 }

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
--
diff --git 
a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java 
b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
index ddc7539..a6ff61e 100644
--- a/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
+++ b/core/src/main/java/org/apache/carbondata/core/util/SessionParams.java
@@ -199,6 +199,8 @@ public class SessionParams implements Serializable {
   }
 } else if 
(key.startsWith(CarbonCommonConstants.VALIDATE_CARBON_INPUT_SEGMENTS)) {
   isValid = true;
+} else if 
(key.equalsIgnoreCase(CarbonCommonConstants.SUPPORT_DIRECT_QUERY_ON_DATAMAP)) {
+  isValid = true;
 } else {
   throw new InvalidConfigurationException(
   "The key " + key + " not supported for dynamic configuration.");

http://git-wip-us.apache.org/repos/asf/carbondata/blob/349be007/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
--
diff --git 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
index d80efb8..9c5bc38 100644
--- 
a/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
+++ 
b/integration/spark-common-cluster-test/src/test/scala/org/apache/spark/sql/common/util/QueryTest.scala
@@ -33,7 +33,9 @@ import org.apache.spark.sql.test.{ResourceRegisterAndCopier, 
TestQueryExecutor}
 import org.apache.spark.sql.{CarbonSession, DataFrame, Row, SQLContext}
 import org.scalatest.Suite
 
+import 

Jenkins build is back to stable : carbondata-master-spark-2.2 #55

2018-02-03 Thread Apache Jenkins Server
See 




Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #55

2018-02-03 Thread Apache Jenkins Server
See 




Jenkins build became unstable: carbondata-master-spark-2.2 #54

2018-02-03 Thread Apache Jenkins Server
See 



