[jira] [Created] (KYLIN-3630) remove unused fields in the implementations of MeasureType

2018-10-14 Thread jiatao.tao (JIRA)
jiatao.tao created KYLIN-3630:
-

 Summary: remove unused fields in the implementations of MeasureType
 Key: KYLIN-3630
 URL: https://issues.apache.org/jira/browse/KYLIN-3630
 Project: Kylin
  Issue Type: Improvement
Reporter: jiatao.tao
Assignee: jiatao.tao
 Attachments: image-2018-10-14-18-56-29-010.png

 !image-2018-10-14-18-56-29-010.png! 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] asfgit commented on issue #289: KYLIN-3630, remove unused fields in the implementations of MeasureType

2018-10-14 Thread GitBox
asfgit commented on issue #289: KYLIN-3630, remove unused fields in the 
implementations of MeasureType
URL: https://github.com/apache/kylin/pull/289#issuecomment-429617849
 
 
   Can one of the admins verify this patch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tttMelody opened a new pull request #289: KYLIN-3630, remove unused fields in the implementations of MeasureType

2018-10-14 Thread GitBox
tttMelody opened a new pull request #289: KYLIN-3630, remove unused fields in 
the implementations of MeasureType
URL: https://github.com/apache/kylin/pull/289
 
 
   




[GitHub] codecov-io commented on issue #289: KYLIN-3630, remove unused fields in the implementations of MeasureType

2018-10-14 Thread GitBox
codecov-io commented on issue #289: KYLIN-3630, remove unused fields in the 
implementations of MeasureType
URL: https://github.com/apache/kylin/pull/289#issuecomment-429619135
 
 
   # [Codecov](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=h1) Report
   > :exclamation: No coverage uploaded for pull request base 
(`master@ec10114`). [Click here to learn what that 
means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
   > The diff coverage is `100%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/kylin/pull/289/graphs/tree.svg?width=650&token=JawVgbgsVo&height=150&src=pr)](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##             master      #289   +/-   ##
   ==========================================
     Coverage          ?     21.3%
     Complexity        ?      4441
   ==========================================
     Files             ?      1087
     Lines             ?     69953
     Branches          ?     10108
   ==========================================
     Hits              ?     14906
     Misses            ?     53644
     Partials          ?      1403
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=tree) | Coverage Δ 
| Complexity Δ | |
   |---|---|---|---|
   | 
[...sure/extendedcolumn/ExtendedColumnMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS9leHRlbmRlZGNvbHVtbi9FeHRlbmRlZENvbHVtbk1lYXN1cmVUeXBlLmphdmE=)
 | `30.27% <100%> (ø)` | `5 <1> (?)` | |
   | 
[...org/apache/kylin/measure/topn/TopNMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS90b3BuL1RvcE5NZWFzdXJlVHlwZS5qYXZh)
 | `4.5% <100%> (ø)` | `3 <1> (?)` | |
   | 
[...apache/kylin/measure/bitmap/BitmapMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS9iaXRtYXAvQml0bWFwTWVhc3VyZVR5cGUuamF2YQ==)
 | `19.6% <100%> (ø)` | `4 <1> (?)` | |
   | 
[...a/org/apache/kylin/measure/raw/RawMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS9yYXcvUmF3TWVhc3VyZVR5cGUuamF2YQ==)
 | `9.37% <100%> (ø)` | `3 <1> (?)` | |
   | 
[...org/apache/kylin/measure/hllc/HLLCMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS9obGxjL0hMTENNZWFzdXJlVHlwZS5qYXZh)
 | `65% <100%> (ø)` | `5 <1> (?)` | |
   | 
[...ylin/measure/percentile/PercentileMeasureType.java](https://codecov.io/gh/apache/kylin/pull/289/diff?src=pr&el=tree#diff-Y29yZS1tZXRhZGF0YS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUva3lsaW4vbWVhc3VyZS9wZXJjZW50aWxlL1BlcmNlbnRpbGVNZWFzdXJlVHlwZS5qYXZh)
 | `55% <100%> (ø)` | `4 <1> (?)` | |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=continue).
   > **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by 
[Codecov](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=footer). Last 
update 
[ec10114...f32c098](https://codecov.io/gh/apache/kylin/pull/289?src=pr&el=lastupdated).
 Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[GitHub] shaofengshi closed pull request #288: KYLIN-3597 Improve code smell

2018-10-14 Thread GitBox
shaofengshi closed pull request #288: KYLIN-3597 Improve code smell
URL: https://github.com/apache/kylin/pull/288
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git 
a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java 
b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
index 6f1dfd9144..386a73ca6f 100644
--- a/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
+++ b/core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java
@@ -57,6 +57,12 @@
 private static final long serialVersionUID = 1L;
 private static final Logger logger = 
LoggerFactory.getLogger(KylinConfigBase.class);
 
+private static final String FALSE = "false";
+private static final String TRUE = "true";
+private static final String DEFAULT = "default";
+private static final String KYLIN_ENGINE_MR_JOB_JAR = 
"kylin.engine.mr.job-jar";
+private static final String KYLIN_STORAGE_HBASE_COPROCESSOR_LOCAL_JAR = 
"kylin.storage.hbase.coprocessor-local-jar";
+
 /*
  * DON'T DEFINE CONSTANTS FOR PROPERTY KEYS!
  *
@@ -84,13 +90,13 @@ public static String getKylinHomeWithoutWarn() {
 public static String getSparkHome() {
 String sparkHome = System.getenv("SPARK_HOME");
 if (StringUtils.isNotEmpty(sparkHome)) {
-logger.info("SPARK_HOME was set to " + sparkHome);
+logger.info("SPARK_HOME was set to {}", sparkHome);
 return sparkHome;
 }
 
 sparkHome = System.getProperty("SPARK_HOME");
 if (StringUtils.isNotEmpty(sparkHome)) {
-logger.info("SPARK_HOME was set to " + sparkHome);
+logger.info("SPARK_HOME was set to {}", sparkHome);
 return sparkHome;
 }
 
@@ -195,7 +201,7 @@ final protected String getRequired(String prop) {
  * Use with care, properties should be read-only. This is for testing only.
  */
 final public void setProperty(String key, String value) {
-logger.info("Kylin Config was updated with " + key + " : " + value);
+logger.info("Kylin Config was updated with {} : {}", key, value);
 properties.setProperty(BCC.check(key), value);
 }
 
@@ -228,7 +234,6 @@ public String getDeployEnv() {
 }
 
 private String cachedHdfsWorkingDirectory;
-private String cachedBigCellDirectory;
 
 public String getHdfsWorkingDirectory() {
 if (cachedHdfsWorkingDirectory != null)
@@ -301,7 +306,7 @@ public String getZookeeperConnectString() {
 }
 
 public boolean isZookeeperAclEnabled() {
-return 
Boolean.parseBoolean(getOptional("kylin.env.zookeeper-acl-enabled", "false"));
+return 
Boolean.parseBoolean(getOptional("kylin.env.zookeeper-acl-enabled", FALSE));
 }
 
 public String getZKAuths() {
@@ -381,7 +386,7 @@ public String getHBaseMappingAdapter() {
 }
 
 public boolean isCheckCopyOnWrite() {
-return 
Boolean.parseBoolean(getOptional("kylin.metadata.check-copy-on-write", 
"false"));
+return 
Boolean.parseBoolean(getOptional("kylin.metadata.check-copy-on-write", FALSE));
 }
 
 public String getHbaseClientScannerTimeoutPeriod() {
@@ -401,7 +406,7 @@ public String getHbaseClientRetriesNumber() {
 // 

 
 public boolean isUseForestTrieDictionary() {
-return 
Boolean.parseBoolean(getOptional("kylin.dictionary.use-forest-trie", "true"));
+return 
Boolean.parseBoolean(getOptional("kylin.dictionary.use-forest-trie", TRUE));
 }
 
 public int getTrieDictionaryForestMaxTrieSizeMB() {
@@ -413,11 +418,11 @@ public int getCachedDictMaxEntrySize() {
 }
 
 public boolean isGrowingDictEnabled() {
-return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.growing-enabled", 
"false"));
+return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.growing-enabled", 
FALSE));
 }
 
 public boolean isDictResuable() {
-return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.resuable", "false"));
+return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.resuable", FALSE));
 }
 
 public int getAppendDictEntrySize() {
@@ -453,7 +458,7 @@ public double getExtTableSnapshotLocalCacheMaxSizeGB() {
 }
 
 public boolean isShrunkenDictFromGlobalEnabled() {
-return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.shrunken-from-global-enabled",
 "false"));
+return 
Boolean.parseBoolean(this.getOptional("kylin.dictionary.shrunken-from-global-enabled",
 FALSE));
 }
 
 // 

@@ -502,7 +507,7 @@ public int getCub
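The logging changes in the diff above replace string concatenation with SLF4J-style parameterized messages, so the message string is only assembled when the log level is actually enabled. A minimal sketch of that idea (a hypothetical demo class using `java.text.MessageFormat`, not SLF4J itself):

```java
import java.text.MessageFormat;

// Hypothetical demo, not Kylin code: mimics why logger.info("x {}", arg)
// is cheaper than logger.info("x " + arg) when the level is disabled.
public class LazyLogDemo {
    static boolean infoEnabled = false; // pretend INFO is turned off
    static int renderCount = 0;         // counts how often a message is built

    // Builds the final message; only ever called when the level is enabled.
    static String render(String pattern, Object... args) {
        renderCount++;
        return MessageFormat.format(pattern, args);
    }

    static void info(String pattern, Object... args) {
        if (infoEnabled) {
            System.out.println(render(pattern, args));
        }
    }

    public static void main(String[] args) {
        // With concatenation, "SPARK_HOME was set to " + home would be built
        // unconditionally; with the parameterized form, nothing is rendered here.
        info("SPARK_HOME was set to {0}", "/opt/spark");
        System.out.println(renderCount); // prints 0
    }
}
```

The extracted constants (`FALSE`, `TRUE`, etc.) in the same diff serve the separate code-smell goal of removing repeated string literals.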

[GitHub] shaofengshi closed pull request #289: KYLIN-3630, remove unused fields in the implementations of MeasureType

2018-10-14 Thread GitBox
shaofengshi closed pull request #289: KYLIN-3630, remove unused fields in the 
implementations of MeasureType
URL: https://github.com/apache/kylin/pull/289
 
 
   


diff --git 
a/core-metadata/src/main/java/org/apache/kylin/measure/bitmap/BitmapMeasureType.java
 
b/core-metadata/src/main/java/org/apache/kylin/measure/bitmap/BitmapMeasureType.java
index 403d1b6c6d..f724257de5 100644
--- 
a/core-metadata/src/main/java/org/apache/kylin/measure/bitmap/BitmapMeasureType.java
+++ 
b/core-metadata/src/main/java/org/apache/kylin/measure/bitmap/BitmapMeasureType.java
@@ -52,7 +52,7 @@
 
 @Override
 public MeasureType createMeasureType(String funcName, 
DataType dataType) {
-return new BitmapMeasureType(funcName, dataType);
+return new BitmapMeasureType();
 }
 
 @Override
@@ -71,10 +71,7 @@ public String getAggrDataTypeName() {
 }
 }
 
-public DataType dataType;
-
-public BitmapMeasureType(String funcName, DataType dataType) {
-this.dataType = dataType;
+public BitmapMeasureType() {
 }
 
 @Override
diff --git 
a/core-metadata/src/main/java/org/apache/kylin/measure/extendedcolumn/ExtendedColumnMeasureType.java
 
b/core-metadata/src/main/java/org/apache/kylin/measure/extendedcolumn/ExtendedColumnMeasureType.java
index b38299ac3b..4c20ef3fe2 100644
--- 
a/core-metadata/src/main/java/org/apache/kylin/measure/extendedcolumn/ExtendedColumnMeasureType.java
+++ 
b/core-metadata/src/main/java/org/apache/kylin/measure/extendedcolumn/ExtendedColumnMeasureType.java
@@ -56,7 +56,7 @@
 
 @Override
 public MeasureType createMeasureType(String funcName, 
DataType dataType) {
-return new ExtendedColumnMeasureType(funcName, dataType);
+return new ExtendedColumnMeasureType(dataType);
 }
 
 @Override
@@ -75,7 +75,7 @@ public String getAggrDataTypeName() {
 }
 }
 
-public ExtendedColumnMeasureType(String funcName, DataType dataType) {
+public ExtendedColumnMeasureType(DataType dataType) {
 this.dataType = dataType;
 }
 
diff --git 
a/core-metadata/src/main/java/org/apache/kylin/measure/hllc/HLLCMeasureType.java
 
b/core-metadata/src/main/java/org/apache/kylin/measure/hllc/HLLCMeasureType.java
index 51c5a66217..daa104baf7 100644
--- 
a/core-metadata/src/main/java/org/apache/kylin/measure/hllc/HLLCMeasureType.java
+++ 
b/core-metadata/src/main/java/org/apache/kylin/measure/hllc/HLLCMeasureType.java
@@ -43,7 +43,7 @@
 
 @Override
 public MeasureType createMeasureType(String funcName, 
DataType dataType) {
-return new HLLCMeasureType(funcName, dataType);
+return new HLLCMeasureType(dataType);
 }
 
 @Override
@@ -66,7 +66,7 @@ public String getAggrDataTypeName() {
 
 private final DataType dataType;
 
-public HLLCMeasureType(String funcName, DataType dataType) {
+public HLLCMeasureType(DataType dataType) {
 // note at query parsing phase, the data type may be null, because 
only function and parameters are known
 this.dataType = dataType;
 }
diff --git 
a/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
 
b/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
index 2c79784696..44bd2133b2 100644
--- 
a/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
+++ 
b/core-metadata/src/main/java/org/apache/kylin/measure/percentile/PercentileMeasureType.java
@@ -39,7 +39,7 @@
 public static final String FUNC_PERCENTILE_APPROX = "PERCENTILE_APPROX";
 public static final String DATATYPE_PERCENTILE = "percentile";
 
-public PercentileMeasureType(String funcName, DataType dataType) {
+public PercentileMeasureType(DataType dataType) {
 this.dataType = dataType;
 }
 
@@ -47,7 +47,7 @@ public PercentileMeasureType(String funcName, DataType 
dataType) {
 
 @Override
 public MeasureType createMeasureType(String 
funcName, DataType dataType) {
-return new PercentileMeasureType(funcName, dataType);
+return new PercentileMeasureType(dataType);
 }
 
 @Override
diff --git 
a/core-metadata/src/main/java/org/apache/kylin/measure/raw/RawMeasureType.java 
b/core-metadata/src/main/java/org/apache/kylin/measure/raw/RawMeasureType.java
index 2add0602ad..e9f1c82c45 100644
--- 
a/core-metadata/src/main/java/org/apache/kylin/measure/raw/RawMeasureType.java
+++ 
b/core-metadata/src/main/java/org/apache/kylin/measure/raw/RawMeasureType.java
@@ -57,7 +57,7 @@
 
 @Override
 public MeasureType> createMeasureType(String funcName, 
DataT

[GitHub] nichunen commented on a change in pull request #286: KYLIN-3617 Use job's cache in job scheduler

2018-10-14 Thread GitBox
nichunen commented on a change in pull request #286: KYLIN-3617 Use job's cache 
in job scheduler
URL: https://github.com/apache/kylin/pull/286#discussion_r224993945
 
 

 ##
 File path: core-job/src/main/java/org/apache/kylin/job/dao/ExecutableDao.java
 ##
 @@ -391,4 +405,13 @@ public void deleteJobOutput(String uuid) throws 
PersistentException {
 throw new PersistentException(e);
 }
 }
+
+public void reloadAll() throws IOException {
+try (AutoReadWriteLock.AutoLock lock = 
executableDigestMapLock.lockForWrite()) {
+executableDigestCrud.reloadAll();
 
 Review comment:
   @shaofengshi You are right, it was a mistake; lockForWrite should be used during 
reloading. PR updated, thanks.
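The point of the fix, sketched with the JDK's plain `ReentrantReadWriteLock` (an illustrative stand-in, not Kylin's `AutoReadWriteLock`): lookups can share a read lock, but a full reload mutates the cached map and must hold the write lock to exclude concurrent readers.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of a digest cache guarded by a read/write lock.
public class DigestCache {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Map<String, String> digests = new HashMap<>();

    // Readers take the shared read lock.
    public String get(String uuid) {
        lock.readLock().lock();
        try {
            return digests.get(uuid);
        } finally {
            lock.readLock().unlock();
        }
    }

    // Reload replaces the whole map, so it needs the exclusive write lock.
    public void reloadAll(Map<String, String> fresh) {
        lock.writeLock().lock();
        try {
            digests.clear();
            digests.putAll(fresh);
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```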




[GitHub] codecov-io commented on issue #286: KYLIN-3617 Use job's cache in job scheduler

2018-10-14 Thread GitBox
codecov-io commented on issue #286: KYLIN-3617 Use job's cache in job scheduler
URL: https://github.com/apache/kylin/pull/286#issuecomment-429630698
 
 
   # [Codecov](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=h1) Report
   > :exclamation: No coverage uploaded for pull request base 
(`master@50f1758`). [Click here to learn what that 
means](https://docs.codecov.io/docs/error-reference#section-missing-base-commit).
   > The diff coverage is `43.75%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/kylin/pull/286/graphs/tree.svg?width=650&token=JawVgbgsVo&height=150&src=pr)](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=tree)
   
   ```diff
   @@            Coverage Diff             @@
   ##             master      #286   +/-   ##
   ==========================================
     Coverage          ?     21.3%
     Complexity        ?
   ==========================================
     Files             ?      1087
     Lines             ?     69983
     Branches          ?     10109
   ==========================================
     Hits              ?     14907
     Misses            ?     53673
     Partials          ?      1403
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=tree) | Coverage Δ 
| Complexity Δ | |
   |---|---|---|---|
   | 
[...lin/job/impl/threadpool/PriorityFetcherRunner.java](https://codecov.io/gh/apache/kylin/pull/286/diff?src=pr&el=tree#diff-Y29yZS1qb2Ivc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2pvYi9pbXBsL3RocmVhZHBvb2wvUHJpb3JpdHlGZXRjaGVyUnVubmVyLmphdmE=)
 | `0% <0%> (ø)` | `0 <0> (?)` | |
   | 
[...he/kylin/job/impl/threadpool/DefaultScheduler.java](https://codecov.io/gh/apache/kylin/pull/286/diff?src=pr&el=tree#diff-Y29yZS1qb2Ivc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2pvYi9pbXBsL3RocmVhZHBvb2wvRGVmYXVsdFNjaGVkdWxlci5qYXZh)
 | `68.53% <100%> (ø)` | `10 <1> (?)` | |
   | 
[...n/java/org/apache/kylin/job/dao/ExecutableDao.java](https://codecov.io/gh/apache/kylin/pull/286/diff?src=pr&el=tree#diff-Y29yZS1qb2Ivc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2pvYi9kYW8vRXhlY3V0YWJsZURhby5qYXZh)
 | `35.38% <33.33%> (ø)` | `24 <2> (?)` | |
   | 
[.../apache/kylin/job/execution/ExecutableManager.java](https://codecov.io/gh/apache/kylin/pull/286/diff?src=pr&el=tree#diff-Y29yZS1qb2Ivc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2pvYi9leGVjdXRpb24vRXhlY3V0YWJsZU1hbmFnZXIuamF2YQ==)
 | `34.7% <42.85%> (ø)` | `35 <2> (?)` | |
   | 
[...ylin/job/impl/threadpool/DefaultFetcherRunner.java](https://codecov.io/gh/apache/kylin/pull/286/diff?src=pr&el=tree#diff-Y29yZS1qb2Ivc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2pvYi9pbXBsL3RocmVhZHBvb2wvRGVmYXVsdEZldGNoZXJSdW5uZXIuamF2YQ==)
 | `72.72% <85.71%> (ø)` | `10 <0> (?)` | |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=continue).
   > **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)
   > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by 
[Codecov](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=footer). Last 
update 
[50f1758...ee848c9](https://codecov.io/gh/apache/kylin/pull/286?src=pr&el=lastupdated).
 Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   




[jira] [Created] (KYLIN-3631) Utilize Arrays#parallelSort for better performance

2018-10-14 Thread Ted Yu (JIRA)
Ted Yu created KYLIN-3631:
-

 Summary: Utilize Arrays#parallelSort for better performance
 Key: KYLIN-3631
 URL: https://issues.apache.org/jira/browse/KYLIN-3631
 Project: Kylin
  Issue Type: Task
Reporter: Ted Yu


Arrays#parallelSort has been available since Java 1.8.

We can utilize Arrays#parallelSort to achieve better performance on large arrays.
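For illustration (a minimal sketch; the class and method names are hypothetical), the two calls are drop-in replacements for one another, producing the same sorted result with different execution strategies:

```java
import java.util.Arrays;

// Hypothetical demo, not Kylin code: Arrays.sort vs Arrays.parallelSort.
public class ParallelSortDemo {
    // Single-threaded dual-pivot quicksort on a defensive copy.
    static long[] sortSequential(long[] input) {
        long[] copy = Arrays.copyOf(input, input.length);
        Arrays.sort(copy);
        return copy;
    }

    // Fork/join-based parallel merge sort (Java 1.8+) on a defensive copy.
    static long[] sortParallel(long[] input) {
        long[] copy = Arrays.copyOf(input, input.length);
        Arrays.parallelSort(copy);
        return copy;
    }

    public static void main(String[] args) {
        long[] data = {5L, 3L, 9L, 1L};
        System.out.println(Arrays.toString(sortParallel(data))); // prints [1, 3, 5, 9]
    }
}
```

Note that parallelSort only pays off above an internal size threshold; for small arrays it falls back to the sequential sort anyway.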



--


Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread Lijun Cao
Hi liuzhixin:

Which platform are you using?

CDH 6.0.x or HDP 3.0?

Best Regards

Lijun Cao

> On 2018-10-12 at 21:14, liuzhixin wrote:
> 
> Logging initialized using configuration in 
> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties 
> Async: true
> OK
> Time taken: 4.512 seconds
> OK
> Time taken: 1.511 seconds
> OK
> Time taken: 0.272 seconds
> OK
> Time taken: 0.185 seconds
> Exception in thread "main" java.lang.NoSuchMethodError: 
> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>   at 
> com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.(GeneratedMessageV3.java:1704)
>   at org.apache.calcite.avatica.proto.Common.(Common.java:18927)
>   at 
> org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
>   at 
> org.apache.calcite.avatica.ConnectionPropertiesImpl.(ConnectionPropertiesImpl.java:38)
>   at org.apache.calcite.avatica.MetaImpl.(MetaImpl.java:72)
>   at 
> org.apache.calcite.jdbc.CalciteMetaImpl.(CalciteMetaImpl.java:88)
>   at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.(AvaticaConnection.java:121)
>   at 
> org.apache.calcite.jdbc.CalciteConnectionImpl.(CalciteConnectionImpl.java:113)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.(CalciteJdbc41Factory.java:114)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>   at 
> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>   at 
> org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>   at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:208)
>   at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>   at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>   at 
> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>   at 
> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>   at 
> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>   at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>   at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>   at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>   at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
>   at 
> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
>   at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>   at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
> The command is:
> hive -e "USE default;





[GitHub] shaofengshi closed pull request #286: KYLIN-3617 Use job's cache in job scheduler

2018-10-14 Thread GitBox
shaofengshi closed pull request #286: KYLIN-3617 Use job's cache in job 
scheduler
URL: https://github.com/apache/kylin/pull/286
 
 
   


diff --git a/core-job/src/main/java/org/apache/kylin/job/dao/ExecutableDao.java 
b/core-job/src/main/java/org/apache/kylin/job/dao/ExecutableDao.java
index 0cc6c8e5fb..8352005234 100644
--- a/core-job/src/main/java/org/apache/kylin/job/dao/ExecutableDao.java
+++ b/core-job/src/main/java/org/apache/kylin/job/dao/ExecutableDao.java
@@ -23,6 +23,7 @@
 import java.util.Collections;
 import java.util.List;
 import java.util.NavigableSet;
+import java.util.Set;
 
 import org.apache.kylin.common.KylinConfig;
 import org.apache.kylin.common.persistence.JsonSerializer;
@@ -241,6 +242,10 @@ private long writeJobOutputResource(String path, 
ExecutableOutputPO output) thro
 }
 }
 
+public ExecutableOutputPO getJobOutputDigest(String uuid) {
+return executableOutputDigestMap.get(uuid);
+}
+
 public List getJobOutputDigests(long timeStart, long 
timeEndExclusive) {
 List jobOutputDigests = Lists.newArrayList();
 for (ExecutableOutputPO po : executableOutputDigestMap.values()) {
@@ -268,6 +273,10 @@ private long writeJobOutputResource(String path, 
ExecutableOutputPO output) thro
 }
 }
 
+public ExecutablePO getJobDigest(String uuid) {
+return executableDigestMap.get(uuid);
+}
+
 public List getJobDigests(long timeStart, long 
timeEndExclusive) {
 List jobDigests = Lists.newArrayList();
 for (ExecutablePO po : executableDigestMap.values()) {
@@ -277,6 +286,11 @@ private long writeJobOutputResource(String path, 
ExecutableOutputPO output) thro
 return jobDigests;
 }
 
+public List getJobIdsInCache() {
+Set idSet = executableDigestMap.keySet();
+return Lists.newArrayList(idSet);
+}
+
 public List getJobIds() throws PersistentException {
 try {
 NavigableSet resources = 
store.listResources(ResourceStore.EXECUTE_RESOURCE_ROOT);
@@ -391,4 +405,13 @@ public void deleteJobOutput(String uuid) throws 
PersistentException {
 throw new PersistentException(e);
 }
 }
+
+public void reloadAll() throws IOException {
+try (AutoReadWriteLock.AutoLock lock = 
executableDigestMapLock.lockForWrite()) {
+executableDigestCrud.reloadAll();
+}
+try (AutoReadWriteLock.AutoLock lock = 
executableOutputDigestMapLock.lockForWrite()) {
+executableOutputDigestCrud.reloadAll();
+}
+}
 }
diff --git 
a/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java 
b/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
index 5cc8a0f7d7..b866618e05 100644
--- 
a/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
+++ 
b/core-job/src/main/java/org/apache/kylin/job/execution/ExecutableManager.java
@@ -155,6 +155,10 @@ public AbstractExecutable getJob(String uuid) {
 }
 }
 
+public AbstractExecutable getJobDigest(String uuid) {
+return parseTo(executableDao.getJobDigest(uuid));
+}
+
 public Output getOutput(String uuid) {
 try {
 final ExecutableOutputPO jobOutput = 
executableDao.getJobOutput(uuid);
@@ -166,6 +170,12 @@ public Output getOutput(String uuid) {
 }
 }
 
+public Output getOutputDigest(String uuid) {
+final ExecutableOutputPO jobOutput = 
executableDao.getJobOutputDigest(uuid);
+Preconditions.checkArgument(jobOutput != null, "there is no related 
output for job id:" + uuid);
+return parseOutput(jobOutput);
+}
+
 private DefaultOutput parseOutput(ExecutableOutputPO jobOutput) {
 final DefaultOutput result = new DefaultOutput();
 result.setExtra(jobOutput.getInfo());
@@ -286,6 +296,10 @@ public void updateAllRunningJobsToError() {
 }
 }
 
+public List getAllJobIdsInCache() {
+return executableDao.getJobIdsInCache();
+}
+
 public void resumeAllRunningJobs() {
 try {
 final List jobOutputs = 
executableDao.getJobOutputs();
@@ -439,6 +453,10 @@ public void updateJobOutput(String jobId, ExecutableState 
newStatus, Map 
newConcurrentMap(), jobEngineConfig.getConfig());
 
 logger.info("Staring resume all running jobs.");
+ExecutableManager executableManager = getExecutableManager();
 executableManager.resumeAllRunningJobs();
 logger.info("Finishing resume all running jobs.");
 
diff --git 
a/core-job/src/main/java/org/apache/kylin/job/impl/threadpool/PriorityFetcherRunner.java
 
b/core-job/src/main/java/org/apache/kylin/job/impl/threadpool/PriorityFetcherRu

Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Hi Lijun Cao,

The platform is Ambari HDP 3.0, with Hive 2.3.3 and HBase 2.0.

I have compiled the source code with Hive 2.3.3, but the module atopcalcite 
depends on protobuf 3.1.0, while the other modules depend on protobuf 2.5.0.

> On 2018-10-15 at 08:40, Lijun Cao <641507...@qq.com> wrote:
> 
> Hi liuzhixin:
> 
> Which platform did you use?
> 
> The CDH 6.0.x or HDP 3.0 ? 
> 
> Best Regards
> 
> Lijun Cao
> 



Can't create EnumerableAggregate! with cte in where condition

2018-10-14 Thread yiwang
Hello kylin team,

I got an error:
"
Can't create EnumerableAggregate! while executing SQL: "with cte1 as( select
* from (select * from kylin_sales) where seller_id <> 0 ), cte2 as( select
distinct trans_id from kylin_sales where trans_id <> 0 ) select part_dt,
sum(price) as total_selled, count(distinct seller_id) as seller from cte1
where cte1.trans_id in( select trans_id from cte2) group by part_dt order by
part_dt LIMIT 5"
" 
I ran my SQL as below on sample cube:

with cte1 as(
select * from (select * from kylin_sales) where seller_id !=0
),
cte2 as(
select distinct trans_id from kylin_sales where trans_id != 0
)
select part_dt, sum(price) as total_selled, count(distinct seller_id) as
seller from cte1 where cte1.trans_id in(select trans_id from cte2)
group by part_dt order by part_dt 

Does Kylin support subqueries over CTEs?
Could you please give any advice? This has been blocking me for a long time.

Thanks,
Yi



--
Sent from: http://apache-kylin.74782.x6.nabble.com/
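Independent of whether Kylin's planner accepts the original form, the `IN (select ... from cte2)` predicate can be rewritten as an inner join on the distinct key, which Calcite-based engines often plan more readily. Below is a minimal, Kylin-independent sketch of that rewrite using SQLite and invented toy data (the table and column names mirror the query in this thread; this is an illustration of the SQL equivalence, not a statement about Kylin's behavior):

```python
import sqlite3

# Toy stand-in for KYLIN_SALES; column names follow the query in the thread.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE kylin_sales (trans_id INT, part_dt TEXT, seller_id INT, price REAL)"
)
conn.executemany(
    "INSERT INTO kylin_sales VALUES (?, ?, ?, ?)",
    [(1, "2012-01-01", 10, 5.0), (2, "2012-01-01", 11, 7.5),
     (0, "2012-01-02", 12, 3.0), (3, "2012-01-02", 0, 4.0)],
)

# Original shape: the CTE is filtered in WHERE via an IN-subquery.
with_in = """
WITH cte1 AS (SELECT * FROM kylin_sales WHERE seller_id <> 0),
     cte2 AS (SELECT DISTINCT trans_id FROM kylin_sales WHERE trans_id <> 0)
SELECT part_dt, SUM(price) AS total_selled, COUNT(DISTINCT seller_id) AS seller
FROM cte1 WHERE cte1.trans_id IN (SELECT trans_id FROM cte2)
GROUP BY part_dt ORDER BY part_dt
"""

# Equivalent rewrite: the IN-subquery becomes an inner join on the distinct key.
with_join = """
WITH cte1 AS (SELECT * FROM kylin_sales WHERE seller_id <> 0),
     cte2 AS (SELECT DISTINCT trans_id FROM kylin_sales WHERE trans_id <> 0)
SELECT part_dt, SUM(price) AS total_selled, COUNT(DISTINCT seller_id) AS seller
FROM cte1 JOIN cte2 ON cte1.trans_id = cte2.trans_id
GROUP BY part_dt ORDER BY part_dt
"""

print(conn.execute(with_in).fetchall())    # [('2012-01-01', 12.5, 2)]
print(conn.execute(with_join).fetchall())  # [('2012-01-01', 12.5, 2)]
```

Because `cte2` is already `SELECT DISTINCT`, the join cannot duplicate rows, so the two forms return identical results.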


Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread Lijun Cao
Hi liuzhixin:

As I remember, the Hive version in HDP 3 is 3.1.0 . 

You can update Hive to 3.1.0 and then have another try.

And according to my previous test, the binary package 
apache-kylin-2.5.0-bin-hadoop3.tar.gz works properly on HDP 3. You can get 
it from the official site.

Best Regards

Lijun Cao

> 在 2018年10月15日,10:22,liuzhixin  写道:
> 
> hi cao lijun,
> #
> the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
> 
> I have compile the source code with hive 2.3.3, 
> 
> but the module atopcalcite depends on protobuf 3.1.0,
> 
> other module depends on protobuf 2.5.0. 
> 
> 
>> 在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com> 写道:
>> 
>> Hi liuzhixin:
>> 
>> Which platform did you use?
>> 
>> The CDH 6.0.x or HDP 3.0 ? 
>> 
>> Best Regards
>> 
>> Lijun Cao
>> 
>>> 在 2018年10月12日,21:14,liuzhixin  写道:
>>> 
>>> Logging initialized using configuration in 
>>> file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties 
>>> Async: true
>>> OK
>>> Time taken: 4.512 seconds
>>> OK
>>> Time taken: 1.511 seconds
>>> OK
>>> Time taken: 0.272 seconds
>>> OK
>>> Time taken: 0.185 seconds
>>> Exception in thread "main" java.lang.NoSuchMethodError: 
>>> com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>>> at 
>>> com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.(GeneratedMessageV3.java:1704)
>>> at org.apache.calcite.avatica.proto.Common.(Common.java:18927)
>>> at 
>>> org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
>>> at 
>>> org.apache.calcite.avatica.ConnectionPropertiesImpl.(ConnectionPropertiesImpl.java:38)
>>> at org.apache.calcite.avatica.MetaImpl.(MetaImpl.java:72)
>>> at 
>>> org.apache.calcite.jdbc.CalciteMetaImpl.(CalciteMetaImpl.java:88)
>>> at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>>> at 
>>> org.apache.calcite.avatica.AvaticaConnection.(AvaticaConnection.java:121)
>>> at 
>>> org.apache.calcite.jdbc.CalciteConnectionImpl.(CalciteConnectionImpl.java:113)
>>> at 
>>> org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.(CalciteJdbc41Factory.java:114)
>>> at 
>>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>>> at 
>>> org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>>> at 
>>> org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>>> at 
>>> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>>> at java.sql.DriverManager.getConnection(DriverManager.java:664)
>>> at java.sql.DriverManager.getConnection(DriverManager.java:208)
>>> at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>>> at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:11138)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:286)
>>> at 
>>> org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
>>> at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:512)
>>> at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1317)
>>> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1457)
>>> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1237)
>>> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1227)
>>> at 
>>> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:233)
>>> at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:184)
>>> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:403)
>>> at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:336)
>>> at 
>>> org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:787)
>>> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
>>> at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:686)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at 
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at 
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
>>> at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
>>> The command is:
>>> hive -e "USE default;
>> 
>> 
> 



Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread ShaoFeng Shi
Hi Zhixin,

The error log is thrown from Hive, not from Kylin I think. Please verify
your hive is properly installed; You can manually run that hive command :

hive -e "use default; xxx"

Lijun Cao <641507...@qq.com> 于2018年10月15日周一 上午11:01写道:

> Hi liuzhixin:
>
> As I remember, the Hive version in HDP 3 is 3.1.0 .
>
> You can update Hive to 3.1.0 and then have another try.
>
> And according to my previous test, the binary package
> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can
> get it form official site.
>
> Best Regards
>
> Lijun Cao
>
> > 在 2018年10月15日,10:22,liuzhixin  写道:
> >
> > hi cao lijun,
> > #
> > the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
> >
> > I have compile the source code with hive 2.3.3,
> >
> > but the module atopcalcite depends on protobuf 3.1.0,
> >
> > other module depends on protobuf 2.5.0.
> >
> >
> >> 在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com> 写道:
> >>
> >> Hi liuzhixin:
> >>
> >> Which platform did you use?
> >>
> >> The CDH 6.0.x or HDP 3.0 ?
> >>
> >> Best Regards
> >>
> >> Lijun Cao
> >>
> >>> 在 2018年10月12日,21:14,liuzhixin  写道:
> >>>

Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Hi Cao Lijun

Yeah! You are right.

Our platform uses Ambari HDP 3.0, but with standalone Hive 2.3.3.

So I need to compile Kylin for Hive version 2.3.3.

And now it's not compatible with protobuf-java version 3.1.0, which comes from 
atopcalcite.

Best wishes for you.
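A standard way to let two protobuf versions coexist on one classpath is to shade and relocate the newer one with the Maven Shade plugin, so the module's bundled protobuf 3.x classes no longer collide with protobuf 2.5.0 elsewhere. The snippet below is an illustrative sketch of that technique only; the relocation pattern is an assumption and is not copied from Kylin's actual build configuration:

```xml
<!-- Illustrative only: relocate the bundled protobuf 3.x classes so they
     cannot clash with protobuf 2.5.0 provided by other modules/cluster. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.kylin.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

After relocation, the bytecode of the shaded jar references the renamed package, so `NoSuchMethodError` from a mismatched `Descriptors$Descriptor.getOneofs()` can no longer occur between the two versions.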

> 在 2018年10月15日,上午11:00,Lijun Cao <641507...@qq.com> 写道:
> 
> Hi liuzhixin:
> 
> As I remember, the Hive version in HDP 3 is 3.1.0 . 
> 
> You can update Hive to 3.1.0 and then have another try.
> 
> And according to my previous test, the binary package 
> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can get 
> it form official site.
> 
> Best Regards
> 
> Lijun Cao
> 
>> 在 2018年10月15日,10:22,liuzhixin  写道:
>> 
>> hi cao lijun,
>> #
>> the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
>> 
>> I have compile the source code with hive 2.3.3, 
>> 
>> but the module atopcalcite depends on protobuf 3.1.0,
>> 
>> other module depends on protobuf 2.5.0. 
>> 
>> 
>>> 在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com> 写道:
>>> 
>>> Hi liuzhixin:
>>> 
>>> Which platform did you use?
>>> 
>>> The CDH 6.0.x or HDP 3.0 ? 
>>> 
>>> Best Regards
>>> 
>>> Lijun Cao
>>> 
 在 2018年10月12日,21:14,liuzhixin  写道:
 

Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Hi ShaoFeng Shi

Yes, the error is from Hive version 2.3.3,

and Kylin needs Hive version 3.1.0.

So how can this be solved?

Best wishes!

> 在 2018年10月15日,上午11:10,ShaoFeng Shi  写道:
> 
> Hi Zhixin,
> 
> The error log is thrown from Hive, not from Kylin I think. Please verify
> your hive is properly installed; You can manually run that hive command :
> 
> hive -e "use default; xxx"
> 
> Lijun Cao <641507...@qq.com> 于2018年10月15日周一 上午11:01写道:
> 
>> Hi liuzhixin:
>> 
>> As I remember, the Hive version in HDP 3 is 3.1.0 .
>> 
>> You can update Hive to 3.1.0 and then have another try.
>> 
>> And according to my previous test, the binary package
>> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can
>> get it form official site.
>> 
>> Best Regards
>> 
>> Lijun Cao
>> 
>>> 在 2018年10月15日,10:22,liuzhixin  写道:
>>> 
>>> hi cao lijun,
>>> #
>>> the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
>>> 
>>> I have compile the source code with hive 2.3.3,
>>> 
>>> but the module atopcalcite depends on protobuf 3.1.0,
>>> 
>>> other module depends on protobuf 2.5.0.
>>> 
>>> 
 在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com> 写道:
 
 Hi liuzhixin:
 
 Which platform did you use?
 
 The CDH 6.0.x or HDP 3.0 ?
 
 Best Regards
 
 Lijun Cao
 
> 在 2018年10月12日,21:14,liuzhixin  写道:
> 

[jira] [Created] (KYLIN-3632) Add configuration that can switch on/off preparedStatement cache in Kylin server

2018-10-14 Thread Ma Gang (JIRA)
Ma Gang created KYLIN-3632:
--

 Summary: Add configuration that can switch on/off 
preparedStatement cache in Kylin server
 Key: KYLIN-3632
 URL: https://issues.apache.org/jira/browse/KYLIN-3632
 Project: Kylin
  Issue Type: Improvement
  Components: Query Engine
Reporter: Ma Gang
Assignee: Ma Gang
 Fix For: v2.5.1


Since Kylin 2.5 we have introduced a preparedStatement cache feature. It can be turned 
on/off for each request by adding a new field "enableStatementCache" in the 
query request; by default it is on. We need to add a switch at the server level: 
the preparedStatement cache can take effect only when that switch is on, and by 
default the switch will be set to false.
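For reference, the per-request field mentioned above lives in the body of Kylin's SQL query REST call. A hedged sketch of such a request body (the `sql` and `project` fields follow Kylin's public query API, typically posted to `/kylin/api/query`; the values here are made up for illustration):

```json
{
  "sql": "select part_dt, sum(price) from KYLIN_SALES group by part_dt",
  "project": "learn_kylin",
  "enableStatementCache": false
}
```

With the proposed server-level switch off, this per-request flag would have no effect.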



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread ShaoFeng Shi
Hi zhixin,

I think the problem is how to run Hive 2 with HDP 3; it has no relation to
Kylin.

Usually, we don't encourage users to customize the component versions in a
release, because that may bring dependency conflicts.

I suggest you use the original Hive version in HDP 3.

liuzhixin  于2018年10月15日周一 上午11:25写道:

> Hi ShaoFeng Shi
>
> Yes, the error from hive version 2.3.3,
>
> And Kylin need hive version 3.1.0.
>
> So how to solve the question?
>
> Best wishes!
>
> > 在 2018年10月15日,上午11:10,ShaoFeng Shi  写道:
> >
> > Hi Zhixin,
> >
> > The error log is thrown from Hive, not from Kylin I think. Please verify
> > your hive is properly installed; You can manually run that hive command :
> >
> > hive -e "use default; xxx"
> >
> > Lijun Cao <641507...@qq.com> 于2018年10月15日周一 上午11:01写道:
> >
> >> Hi liuzhixin:
> >>
> >> As I remember, the Hive version in HDP 3 is 3.1.0 .
> >>
> >> You can update Hive to 3.1.0 and then have another try.
> >>
> >> And according to my previous test, the binary package
> >> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You
> can
> >> get it form official site.
> >>
> >> Best Regards
> >>
> >> Lijun Cao
> >>
> >>> 在 2018年10月15日,10:22,liuzhixin  写道:
> >>>
> >>> hi cao lijun,
> >>> #
> >>> the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
> >>>
> >>> I have compile the source code with hive 2.3.3,
> >>>
> >>> but the module atopcalcite depends on protobuf 3.1.0,
> >>>
> >>> other module depends on protobuf 2.5.0.
> >>>
> >>>
>  在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com> 写道:
> 
>  Hi liuzhixin:
> 
>  Which platform did you use?
> 
>  The CDH 6.0.x or HDP 3.0 ?
> 
>  Best Regards
> 
>  Lijun Cao
> 
> > 在 2018年10月12日,21:14,liuzhixin  写道:
> >

[GitHub] allenma opened a new pull request #290: KYLIN-3632 Add configuration that can switch on/off preparedStatement cache

2018-10-14 Thread GitBox
allenma opened a new pull request #290: KYLIN-3632 Add configuration that can 
switch on/off preparedStatement cache
URL: https://github.com/apache/kylin/pull/290
 
 
   Add configuration that can switch on/off preparedStatement cache in kylin 
server


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asfgit commented on issue #290: KYLIN-3632 Add configuration that can switch on/off preparedStatement cache

2018-10-14 Thread GitBox
asfgit commented on issue #290: KYLIN-3632 Add configuration that can switch 
on/off preparedStatement cache
URL: https://github.com/apache/kylin/pull/290#issuecomment-429714335
 
 
   Can one of the admins verify this patch?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread liuzhixin
Thank you for the answer!

I can’t choose the Hive version.

And Hive version 2.3.3 can work well with HDP 3.

Perhaps you can test Kylin with Hive version 2.3.3.

Maybe it’s another error. Thanks!

Best wishes!


> 在 2018年10月15日,下午1:24,ShaoFeng Shi  写道:
> 
> Hi zhixin,
> 
> I think the problem is how to run Hive 2 with HDP 3, no relation with Kylin. 
> 
> Usually, we don't encourage user to customize the component version in a 
> release, because that may bring dependency conflicts.
> 
> I suggest you use the original Hive version in HDP 3.
> 
> liuzhixin mailto:liuz...@163.com>> 于2018年10月15日周一 上午11:25写道:
> Hi ShaoFeng Shi
> 
> Yes, the error from hive version 2.3.3,
> 
> And Kylin need hive version 3.1.0.
> 
> So how to solve the question?
> 
> Best wishes!
> 
> > 在 2018年10月15日,上午11:10,ShaoFeng Shi  > > 写道:
> > 
> > Hi Zhixin,
> > 
> > The error log is thrown from Hive, not from Kylin I think. Please verify
> > your hive is properly installed; You can manually run that hive command :
> > 
> > hive -e "use default; xxx"
> > 
> > Lijun Cao <641507...@qq.com > 于2018年10月15日周一 
> > 上午11:01写道:
> > 
> >> Hi liuzhixin:
> >> 
> >> As I remember, the Hive version in HDP 3 is 3.1.0 .
> >> 
> >> You can update Hive to 3.1.0 and then have another try.
> >> 
> >> And according to my previous test, the binary package
> >> apache-kylin-2.5.0-bin-hadoop3.tar.gz can work properly on HDP 3. You can
> >> get it form official site.
> >> 
> >> Best Regards
> >> 
> >> Lijun Cao
> >> 
> >>> 在 2018年10月15日,10:22,liuzhixin mailto:liuz...@163.com>> 
> >>> 写道:
> >>> 
> >>> hi cao lijun,
> >>> #
> >>> the platform is ambari hdp3.0, and hive is 2.3.3, hbase version is 2.0
> >>> 
> >>> I have compile the source code with hive 2.3.3,
> >>> 
> >>> but the module atopcalcite depends on protobuf 3.1.0,
> >>> 
> >>> other module depends on protobuf 2.5.0.
> >>> 
> >>> 
>  在 2018年10月15日,上午8:40,Lijun Cao <641507...@qq.com 
>  > 写道:
>  
>  Hi liuzhixin:
>  
>  Which platform did you use?
>  
>  The CDH 6.0.x or HDP 3.0 ?
>  
>  Best Regards
>  
>  Lijun Cao
>  
> > 在 2018年10月12日,21:14,liuzhixin  > > 写道:
> > 

[GitHub] codecov-io commented on issue #290: KYLIN-3632 Add configuration that can switch on/off preparedStatement cache

2018-10-14 Thread GitBox
codecov-io commented on issue #290: KYLIN-3632 Add configuration that can 
switch on/off preparedStatement cache
URL: https://github.com/apache/kylin/pull/290#issuecomment-429717664
 
 
   # [Codecov](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=h1) Report
   > Merging [#290](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=desc) 
into 
[master](https://codecov.io/gh/apache/kylin/commit/5f6007ff2b5aead4781c876fa3203dd67188d90b?src=pr&el=desc)
 will **decrease** coverage by `<.01%`.
   > The diff coverage is `0%`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/kylin/pull/290/graphs/tree.svg?width=650&token=JawVgbgsVo&height=150&src=pr)](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=tree)
   
    ```diff
    @@            Coverage Diff             @@
    ##           master     #290      +/-   ##
    ==========================================
    - Coverage    21.29%   21.29%    -0.01%
    + Complexity    4444     4442        -2
    ==========================================
      Files         1087     1087
      Lines        69983    69984        +1
      Branches     10109    10109
    ==========================================
    - Hits         14903    14902        -1
    - Misses       53676    53677        +1
    - Partials      1404     1405        +1
    ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=tree) | Coverage Δ 
| Complexity Δ | |
   |---|---|---|---|
   | 
[...va/org/apache/kylin/rest/service/QueryService.java](https://codecov.io/gh/apache/kylin/pull/290/diff?src=pr&el=tree#diff-c2VydmVyLWJhc2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL3Jlc3Qvc2VydmljZS9RdWVyeVNlcnZpY2UuamF2YQ==)
 | `0% <0%> (ø)` | `0 <0> (ø)` | :arrow_down: |
   | 
[.../java/org/apache/kylin/common/KylinConfigBase.java](https://codecov.io/gh/apache/kylin/pull/290/diff?src=pr&el=tree#diff-Y29yZS1jb21tb24vc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2t5bGluL2NvbW1vbi9LeWxpbkNvbmZpZ0Jhc2UuamF2YQ==)
 | `14.66% <0%> (-0.03%)` | `36 <0> (ø)` | |
   | 
[...he/kylin/dict/lookup/cache/RocksDBLookupTable.java](https://codecov.io/gh/apache/kylin/pull/290/diff?src=pr&el=tree#diff-Y29yZS1kaWN0aW9uYXJ5L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9reWxpbi9kaWN0L2xvb2t1cC9jYWNoZS9Sb2Nrc0RCTG9va3VwVGFibGUuamF2YQ==)
 | `72.97% <0%> (-5.41%)` | `6% <0%> (-1%)` | |
   | 
[...a/org/apache/kylin/dict/Number2BytesConverter.java](https://codecov.io/gh/apache/kylin/pull/290/diff?src=pr&el=tree#diff-Y29yZS1kaWN0aW9uYXJ5L3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9reWxpbi9kaWN0L051bWJlcjJCeXRlc0NvbnZlcnRlci5qYXZh)
 | `81.74% <0%> (-0.8%)` | `17% <0%> (-1%)` | |
   | 
[...rg/apache/kylin/cube/inmemcubing/MemDiskStore.java](https://codecov.io/gh/apache/kylin/pull/290/diff?src=pr&el=tree#diff-Y29yZS1jdWJlL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9reWxpbi9jdWJlL2lubWVtY3ViaW5nL01lbURpc2tTdG9yZS5qYXZh)
 | `70.21% <0%> (+0.6%)` | `7% <0%> (ø)` | :arrow_down: |
   
   --
   
   [Continue to review full report at 
Codecov](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=continue).
   > **Legend** - [Click here to learn 
more](https://docs.codecov.io/docs/codecov-delta)
    > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
   > Powered by 
[Codecov](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=footer). Last 
update 
[5f6007f...3fcb1d5](https://codecov.io/gh/apache/kylin/pull/290?src=pr&el=lastupdated).
 Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] coveralls commented on issue #290: KYLIN-3632 Add configuration that can switch on/off preparedStatement cache

2018-10-14 Thread GitBox
coveralls commented on issue #290: KYLIN-3632 Add configuration that can switch 
on/off preparedStatement cache
URL: https://github.com/apache/kylin/pull/290#issuecomment-429717932
 
 
   ## Pull Request Test Coverage Report for [Build 
3769](https://coveralls.io/builds/19518586)
   
   * **0** of **2**   **(0.0%)**  changed or added relevant lines in **2** 
files are covered.
   * **1** unchanged line in **1** file lost coverage.
   * Overall coverage decreased (**-0.0003%**) to **23.304%**
   
   ---
   
   |  Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
    | :--- | ---: | ---: | ---: |
   | 
[core-common/src/main/java/org/apache/kylin/common/KylinConfigBase.java](https://coveralls.io/builds/19518586/source?filename=core-common%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fkylin%2Fcommon%2FKylinConfigBase.java#L1461)
 | 0 | 1 | 0.0%
   | 
[server-base/src/main/java/org/apache/kylin/rest/service/QueryService.java](https://coveralls.io/builds/19518586/source?filename=server-base%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fkylin%2Frest%2Fservice%2FQueryService.java#L590)
 | 0 | 1 | 0.0%
   
   
   |  Files with Coverage Reduction | New Missed Lines | % |
    | :--- | ---: | ---: |
   | 
[core-dictionary/src/main/java/org/apache/kylin/dict/lookup/cache/RocksDBLookupTable.java](https://coveralls.io/builds/19518586/source?filename=core-dictionary%2Fsrc%2Fmain%2Fjava%2Forg%2Fapache%2Fkylin%2Fdict%2Flookup%2Fcache%2FRocksDBLookupTable.java#L62)
 | 1 | 81.08% |
   
   
   |  Totals | [![Coverage 
Status](https://coveralls.io/builds/19518586/badge)](https://coveralls.io/builds/19518586)
 |
   | :-- | --: |
   | Change from base [Build 3768](https://coveralls.io/builds/19516792): |  
-0.0003% |
   | Covered Lines: | 16309 |
   | Relevant Lines: | 69984 |
   
   ---
   # 💛  - [Coveralls](https://coveralls.io)
   




Re: Some wrong with kylin2.5-hbase2.* for protobuf-java

2018-10-14 Thread ShaoFeng Shi
Does Hive version 2.3.3 really work well with HDP 3? Can you try running the
HiveQL that Kylin executed outside of Kylin? If it works, then something is
likely wrong in Kylin.

On Mon, Oct 15, 2018 at 1:47 PM liuzhixin <liuz...@163.com> wrote:

> Thank you for the answer!
>
> I can’t change the Hive version.
>
> And Hive version 2.3.3 works well with HDP 3.
>
> Perhaps you can test Kylin with Hive version 2.3.3.
>
> Maybe it’s a different error. Thanks!
>
> Best wishes!
>
>
> On October 15, 2018, at 1:24 PM, ShaoFeng Shi wrote:
>
> Hi zhixin,
>
> I think the problem is how to run Hive 2 with HDP 3; it is not related to
> Kylin.
>
> Usually, we don't encourage users to customize component versions in a
> release, because that may bring dependency conflicts.
>
> I suggest you use the original Hive version in HDP 3.
>
> On Mon, Oct 15, 2018 at 11:25 AM liuzhixin <liuz...@163.com> wrote:
>
>> Hi ShaoFeng Shi
>>
>> Yes, the error is from Hive version 2.3.3,
>>
>> and Kylin needs Hive version 3.1.0.
>>
>> So how can this be solved?
>>
>> Best wishes!
>>
>> > On October 15, 2018, at 11:10 AM, ShaoFeng Shi wrote:
>> >
>> > Hi Zhixin,
>> >
>> > The error log is thrown from Hive, not from Kylin, I think. Please verify
>> > that your Hive is properly installed; you can manually run that Hive
>> > command:
>> >
>> > hive -e "use default; xxx"
>> >
>> > On Mon, Oct 15, 2018 at 11:01 AM Lijun Cao <641507...@qq.com> wrote:
>> >
>> >> Hi liuzhixin:
>> >>
>> >> As I remember, the Hive version in HDP 3 is 3.1.0.
>> >>
>> >> You can update Hive to 3.1.0 and then have another try.
>> >>
>> >> And according to my previous test, the binary package
>> >> apache-kylin-2.5.0-bin-hadoop3.tar.gz works properly on HDP 3. You can
>> >> get it from the official site.
>> >>
>> >> Best Regards
>> >>
>> >> Lijun Cao
>> >>
>> >>> On October 15, 2018, at 10:22, liuzhixin <liuz...@163.com> wrote:
>> >>>
>> >>> Hi Lijun Cao,
>> >>> 
>> >>> The platform is Ambari HDP 3.0; Hive is 2.3.3 and the HBase version is 2.0.
>> >>> 
>> >>> I have compiled the source code with Hive 2.3.3,
>> >>> 
>> >>> but the module atopcalcite depends on protobuf 3.1.0,
>> >>> 
>> >>> while the other modules depend on protobuf 2.5.0.
>> >>>
>> >>>
>> On October 15, 2018, at 8:40 AM, Lijun Cao <641507...@qq.com> wrote:
>> 
>>  Hi liuzhixin:
>> 
>>  Which platform did you use?
>> 
>>  The CDH 6.0.x or HDP 3.0 ?
>> 
>>  Best Regards
>> 
>>  Lijun Cao
>> 
>> > On October 12, 2018, at 21:14, liuzhixin <liuz...@163.com> wrote:
>> >
>> > Logging initialized using configuration in
>> > file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties
>> > Async: true
>> > OK
>> > Time taken: 4.512 seconds
>> > OK
>> > Time taken: 1.511 seconds
>> > OK
>> > Time taken: 0.272 seconds
>> > OK
>> > Time taken: 0.185 seconds
>> > Exception in thread "main" java.lang.NoSuchMethodError:
>> > com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
>> >     at com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
>> >     at org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
>> >     at org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
>> >     at org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
>> >     at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
>> >     at org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
>> >     at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
>> >     at org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
>> >     at org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
>> >     at org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
>> >     at org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
>> >     at org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
>> >     at org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
>> >     at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>> >     at java.sql.DriverManager.getConnection(DriverManager.java:664)
>> >     at java.sql.DriverManager.getConnection(DriverManager.java:208)
>> >     at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
>> >     at org.apache.calcite.tools.Frameworks.withPlanner(Frameworks.java:106)
>> >     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.logicalPlan(CalcitePlanner.java:1069)
>> >     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.getOptimizedAST(CalcitePlanner.java:1085)
>> >     at org.apache.hadoop.hive.ql.parse.CalcitePlanner.genOPTree(CalcitePlanner.java:364)
>> >     at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(Se

UNION ALL is not working with count()

2018-10-14 Thread yiwang
Hi Kylin Team,

When I ran the SQL "SELECT count(TRANS_ID) as TRANS_ID FROM KYLIN_SALES where
TRANS_ID <> 1 union all select count(TRANS_ID) as TRANS_ID FROM KYLIN_SALES"
on the sample cube, I got this error:
Error while compiling generated Java code: public static class Record2_1
implements java.io.Serializable { public Long TRANS_ID; public long
_KY_COUNT__; public Record2_1() {} public boolean equals(Object o) { if
(this == o) { return true; } if (!(o instanceof Record2_1)) { return false;
} return java.util.Objects.equals(this.TRANS_ID, ((Record2_1) o).TRANS_ID)
&& this._KY_COUNT__ == ((Record2_1) o)._KY_COUNT__; } public int hashCode()
{ int h = 0; h = org.apache.calcite.runtime.Utilities.hash(h,
this.TRANS_ID); h = org.apache.calcite.runtime.Utilities.hash(h,
this._KY_COUNT__); return h; } public int compareTo(Record2_1 that) { int c;
c = org.apache.calcite.runtime.Utilities.compareNullsLast(this.TRANS_ID,
that.TRANS_ID); if (c != 0) { return c; } c =
org.apache.calcite.runtime.Utilities.compare(this._KY_COUNT__,
that._KY_COUNT__); if (c != 0) { return c; } return 0; } public String
toString() { return "{TRANS_ID=" + this.TRANS_ID + ", _KY_COUNT__=" +
this._KY_COUNT__ + "}"; } } public static class Record1_0 implements
java.io.Serializable { public long f0; public Record1_0() {} public boolean
equals(Object o) { if (this == o) { return true; } if (!(o instanceof
Record1_0)) { return false; } return this.f0 == ((Record1_0) o).f0; } public
int hashCode() { int h = 0; h = org.apache.calcite.runtime.Utilities.hash(h,
this.f0); return h; } public int compareTo(Record1_0 that) { final int c; c
= org.apache.calcite.runtime.Utilities.compare(this.f0, that.f0); if (c !=
0) { return c; } return 0; } public String toString() { return "{f0=" +
this.f0 + "}"; } } org.apache.calcite.DataContext root; public
org.apache.calcite.linq4j.Enumerable bind(final
org.apache.calcite.DataContext root0) { root = root0; final
org.apache.calcite.linq4j.Enumerable _inputEnumerable =
((org.apache.kylin.query.schema.OLAPTable)
root.getRootSchema().getSubSchema("DEFAULT").getTable("KYLIN_SALES")).executeOLAPQuery(root,
1); final org.apache.calcite.linq4j.AbstractEnumerable child = new
org.apache.calcite.linq4j.AbstractEnumerable(){ public
org.apache.calcite.linq4j.Enumerator enumerator() { return new
org.apache.calcite.linq4j.Enumerator(){ public final
org.apache.calcite.linq4j.Enumerator inputEnumerator =
_inputEnumerable.enumerator(); public void reset() {
inputEnumerator.reset(); } public boolean moveNext() { while
(inputEnumerator.moveNext()) { final Long inp0_ = (Long) ((Object[])
inputEnumerator.current())[0]; if (inp0_ != null && inp0_.longValue() != 1L)
{ return true; } } return false; } public void close() {
inputEnumerator.close(); } public Object current() { final Object[] current
= (Object[]) inputEnumerator.current(); return new Object[] { current[0],
current[11]}; } }; } }; final org.apache.calcite.linq4j.Enumerable
_inputEnumerable0 = ((org.apache.kylin.query.schema.OLAPTable)
root.getRootSchema().getSubSchema("DEFAULT").getTable("KYLIN_SALES")).executeOLAPQuery(root,
2); final org.apache.calcite.linq4j.AbstractEnumerable child1 = new
org.apache.calcite.linq4j.AbstractEnumerable(){ public
org.apache.calcite.linq4j.Enumerator enumerator() { return new
org.apache.calcite.linq4j.Enumerator(){ public final
org.apache.calcite.linq4j.Enumerator inputEnumerator =
_inputEnumerable0.enumerator(); public void reset() {
inputEnumerator.reset(); } public boolean moveNext() { return
inputEnumerator.moveNext(); } public void close() { inputEnumerator.close();
} public Object current() { final Object[] current = (Object[])
inputEnumerator.current(); return new Record2_1( (Long) current[0],
org.apache.calcite.runtime.SqlFunctions.toLong(current[11])); } }; } };
return
org.apache.calcite.linq4j.Linq4j.singletonEnumerable(child.aggregate(new
org.apache.calcite.linq4j.function.Function0() { public Object apply() {
long a0s0; a0s0 = 0; Record1_0 record0; record0 = new Record1_0();
record0.f0 = a0s0; return record0; } } .apply(), new
org.apache.calcite.linq4j.function.Function2() { public Record1_0
apply(Record1_0 acc, Object[] in) { acc.f0 = acc.f0 +
org.apache.calcite.runtime.SqlFunctions.toLong(in[1]); return acc; } public
Record1_0 apply(Object acc, Object in) { return apply( (Record1_0) acc,
(Object[]) in); } } , new org.apache.calcite.linq4j.function.Function1() {
public long apply(Record1_0 acc) { return acc.f0; } public Object
apply(Object acc) { return apply( (Record1_0) acc); } }
)).concat(org.apache.calcite.linq4j.Linq4j.singletonEnumerable(child1.aggregate(new
org.apache.calcite.linq4j.function.Function0() { public Object apply() {
long a0s0; a0s0 = 0; Record1_0 record0; record0 = new Record1_0();
record0.f0 = a0s0; return record0; } } .apply(), new
org.apache.calcite.linq4j.function.Function2() { public Record1_0
apply(Record1_0 acc, Record2_1 in) { acc.f0 = acc.f0 + in._KY_COUNT__;
return acc; } public Record1_0 apply(Object acc, Object 
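
A rewrite that sometimes sidesteps Calcite code-generation failures on a UNION
ALL of two aggregates is to compute each count() in its own named subquery, so
both branches of the union are plain projections with the same record shape.
This is an untested sketch of that idea against the sample cube, not a
confirmed fix for the error above:

SELECT t1.TRANS_ID FROM
  (SELECT count(TRANS_ID) AS TRANS_ID FROM KYLIN_SALES WHERE TRANS_ID <> 1) t1
UNION ALL
SELECT t2.TRANS_ID FROM
  (SELECT count(TRANS_ID) AS TRANS_ID FROM KYLIN_SALES) t2

If the rewritten form compiles, that would point at the union-of-aggregates
plan, rather than the measures themselves, as the trigger.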

Re: Some wrong with kylin2.5-hbase2.* for protobuf-java-3.1.0

2018-10-14 Thread liuzhixin
Hi shaofeng:

Yes, I can run the command successfully in the Hive shell.

I can’t find calcite-core version 1.13.0-kylin-r4. 

Best wishes.
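
A version clash like the one described in this thread (Calcite/Avatica built
against protobuf 3.x while Hadoop and Hive still ship protobuf 2.5.0) is
usually resolved by shading: relocating the newer protobuf into a private
package inside the module that needs it, so both copies can coexist on one
classpath. A minimal maven-shade-plugin sketch, assuming it is added to the
affected module's pom.xml; the relocation target package here is illustrative,
not Kylin's actual one:

    <!-- Sketch only: relocate protobuf 3.x classes so they cannot collide
         with the protobuf 2.5.0 that Hadoop/Hive put on the classpath. -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-shade-plugin</artifactId>
      <executions>
        <execution>
          <phase>package</phase>
          <goals><goal>shade</goal></goals>
          <configuration>
            <relocations>
              <relocation>
                <pattern>com.google.protobuf</pattern>
                <!-- hypothetical target package -->
                <shadedPattern>org.shaded.protobuf</shadedPattern>
              </relocation>
            </relocations>
          </configuration>
        </execution>
      </executions>
    </plugin>

Running `mvn dependency:tree -Dincludes=com.google.protobuf` at the root of
the source tree is a quick way to see which modules still pull in which
protobuf version.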

> On October 15, 2018, at 2:20 PM, ShaoFeng Shi wrote:
> 
> Does Hive version 2.3.3 really work well with HDP 3? Can you try running the
> HiveQL that Kylin executed outside of Kylin? If it works, then something is
> likely wrong in Kylin.
> 
> On Mon, Oct 15, 2018 at 1:47 PM liuzhixin <liuz...@163.com> wrote:
> 
>> Thank you for the answer!
>> 
>> I can’t change the Hive version.
>> 
>> And Hive version 2.3.3 works well with HDP 3.
>> 
>> Perhaps you can test Kylin with Hive version 2.3.3.
>> 
>> Maybe it’s a different error. Thanks!
>> 
>> Best wishes!
>> 
>> 
>> On October 15, 2018, at 1:24 PM, ShaoFeng Shi wrote:
>> 
>> Hi zhixin,
>> 
>> I think the problem is how to run Hive 2 with HDP 3; it is not related to
>> Kylin.
>> 
>> Usually, we don't encourage users to customize component versions in a
>> release, because that may bring dependency conflicts.
>> 
>> I suggest you use the original Hive version in HDP 3.
>> 
>> On Mon, Oct 15, 2018 at 11:25 AM liuzhixin <liuz...@163.com> wrote:
>> 
>>> Hi ShaoFeng Shi
>>> 
>>> Yes, the error is from Hive version 2.3.3,
>>> 
>>> and Kylin needs Hive version 3.1.0.
>>> 
>>> So how can this be solved?
>>> 
>>> Best wishes!
>>> 
 On October 15, 2018, at 11:10 AM, ShaoFeng Shi wrote:
 
 Hi Zhixin,
 
 The error log is thrown from Hive, not from Kylin, I think. Please verify
 that your Hive is properly installed; you can manually run that Hive
 command:
 
 hive -e "use default; xxx"
 
 On Mon, Oct 15, 2018 at 11:01 AM Lijun Cao <641507...@qq.com> wrote:
 
> Hi liuzhixin:
> 
> As I remember, the Hive version in HDP 3 is 3.1.0.
> 
> You can update Hive to 3.1.0 and then have another try.
> 
> And according to my previous test, the binary package
> apache-kylin-2.5.0-bin-hadoop3.tar.gz works properly on HDP 3. You can
> get it from the official site.
> 
> Best Regards
> 
> Lijun Cao
> 
>> On October 15, 2018, at 10:22, liuzhixin <liuz...@163.com> wrote:
>> 
>> Hi Lijun Cao,
>> 
>> The platform is Ambari HDP 3.0; Hive is 2.3.3 and the HBase version is 2.0.
>> 
>> I have compiled the source code with Hive 2.3.3,
>> 
>> but the module atopcalcite depends on protobuf 3.1.0,
>> 
>> while the other modules depend on protobuf 2.5.0.
>> 
>> 
>>> On October 15, 2018, at 8:40 AM, Lijun Cao <641507...@qq.com> wrote:
>>> 
>>> Hi liuzhixin:
>>> 
>>> Which platform did you use?
>>> 
>>> The CDH 6.0.x or HDP 3.0 ?
>>> 
>>> Best Regards
>>> 
>>> Lijun Cao
>>> 
 On October 12, 2018, at 21:14, liuzhixin <liuz...@163.com> wrote:
 
 Logging initialized using configuration in
 file:/data/hadoop-enviorment/apache-hive-2.3.3/conf/hive-log4j2.properties
 Async: true
 OK
 Time taken: 4.512 seconds
 OK
 Time taken: 1.511 seconds
 OK
 Time taken: 0.272 seconds
 OK
 Time taken: 0.185 seconds
 Exception in thread "main" java.lang.NoSuchMethodError:
 com.google.protobuf.Descriptors$Descriptor.getOneofs()Ljava/util/List;
     at com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.<init>(GeneratedMessageV3.java:1704)
     at org.apache.calcite.avatica.proto.Common.<clinit>(Common.java:18927)
     at org.apache.calcite.avatica.proto.Common$ConnectionProperties.getDescriptor(Common.java:1264)
     at org.apache.calcite.avatica.ConnectionPropertiesImpl.<init>(ConnectionPropertiesImpl.java:38)
     at org.apache.calcite.avatica.MetaImpl.<init>(MetaImpl.java:72)
     at org.apache.calcite.jdbc.CalciteMetaImpl.<init>(CalciteMetaImpl.java:88)
     at org.apache.calcite.jdbc.Driver.createMeta(Driver.java:169)
     at org.apache.calcite.avatica.AvaticaConnection.<init>(AvaticaConnection.java:121)
     at org.apache.calcite.jdbc.CalciteConnectionImpl.<init>(CalciteConnectionImpl.java:113)
     at org.apache.calcite.jdbc.CalciteJdbc41Factory$CalciteJdbc41Connection.<init>(CalciteJdbc41Factory.java:114)
     at org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:59)
     at org.apache.calcite.jdbc.CalciteJdbc41Factory.newConnection(CalciteJdbc41Factory.java:44)
     at org.apache.calcite.jdbc.CalciteFactory.newConnection(CalciteFactory.java:53)
     at org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
     at java.sql.DriverManager.getConnection(DriverManager.java:664)
     at java.sql.DriverManager.getConnection(DriverManager.java:208)
     at org.apache.calcite.tools.Frameworks.withPrepare(Frameworks.java:145)
     at