Re: Welcome new Apache Kylin committer: Jianhua Peng

2018-02-03
Congratulations, Jianhua!

2018-02-02 19:02 GMT+08:00 kangkaisen :

> Welcome Jianhua!
>
>
> --
> Sent from: http://apache-kylin.74782.x6.nabble.com/
>


Re: Error on adding udf to_date to support superset

2018-01-31
It doesn't take effect. [image: inline image 1]

2018-01-31 10:59 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:

> you can try this expression in Superset column editor
> ```
> (SUBSTRING(CAST(DIM_INTEGER as VARCHAR) FROM 1 FOR 4) || '-' ||
> SUBSTRING(CAST(DIM_INTEGER as VARCHAR) FROM 5 FOR 2) || '-' ||
> SUBSTRING(CAST(DIM_INTEGER as VARCHAR) FROM 7 FOR 2) )
> ```
>
>
> ​
>
> On Wed, Jan 31, 2018 at 9:36 AM, 杨浩 <yangha...@gmail.com> wrote:
>
>> int, such as 20180130
>>
>> 2018-01-30 21:39 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:
>>
>> > What is your type of dimension, date? varchar?
>> >
>> > On Tue, Jan 30, 2018 at 6:17 PM, 杨浩 <yangha...@gmail.com> wrote:
>> >
>> > > The format of "DATE" is yyyyMMdd, and Superset changes it to the format
>> > > yyyy-MM-dd, so we need to use the UDF.
>> > > [image: inline image 1]
>> > >
>> > > 2018-01-30 16:00 GMT+08:00 杨浩 <yangha...@gmail.com>:
>> > >
>> > >> Or the problem may be that how should we write UDF to support
>> function
>> > in
>> > >> filter position
>> > >>
>> > >> 2018-01-30 15:45 GMT+08:00 杨浩 <yangha...@gmail.com>:
>> > >>
>> > >>> Do you mean sqllab in superset? We have solved relevant problems,
>> > except
>> > >>> for the exception supplied by me
>> > >>>
>> > >>> 2018-01-30 15:38 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:
>> > >>>
>> > >>>> Are you in sqllab write this SQL?
>> > >>>>
>> > >>>> On Tue, Jan 30, 2018 at 2:57 PM, 杨浩 <yangha...@gmail.com> wrote:
>> > >>>>
>> > >>>> > kylin developers
>> > >>>> >We have used superset as BI tool. Superset uses to_date to
>> > >>>> represent
>> > >>>> > time, and we add the to_date udf in our env. A query may be like
>> > >>>> select ***
>> > >>>> > from table_1 where 'DATE' >= TO_DATE('2017-12-31 00:00:00',
>> > >>>> 'yyyy-MM-dd').
>> > >>>> > The executing result is right, but the query will not use kylin
>> > >>>> optimize,
>> > >>>> > because some error has happened , and every query will scan all
>> > hbase
>> > >>>> > table. How should we add the udf ?
>> > >>>> >
>> > >>>> > 2018-01-30 13:10:03,400 WARN  [Query
>> > >>>> > >> f029cbac-2aba-456c-b857-f65c8661e39c-90]
>> > >>>> > >> filter.BuiltInFunctionTupleFilter:143 : Reflection failed for
>> > >>>> > methodParams.
>> > >>>> > >
>> > >>>> > > java.lang.NullPointerException
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.metadata.filter.BuiltInFunctionTupleFilter.
>> > >>>> addChild(
>> > >>>> > BuiltInFunctionTupleFilter.java:136)
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.metadata.filter.TupleFilterSerializer.deser
>> > >>>> ialize(
>> > >>>> > TupleFilterSerializer.java:146)
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.storage.gtrecord.CubeSegmentScanner.<
>> > >>>> > init>(CubeSegmentScanner.java:65)
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase.
>> search(
>> > >>>> > GTCubeStorageQueryBase.java:93)
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
>> > >>>> > queryStorage(OLAPEnumerator.java:117)
>> > >>>> > >
>> > >>>> > > at
>> > >>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
>> > >>>> > moveNext(OLAPEnumerator.java:64)
>> > >>>> > >
>> > >>>> > >   

Re: Error on adding udf to_date to support superset

2018-01-30
int, such as 20180130

2018-01-30 21:39 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:

> What is your type of dimension, date? varchar?
>
> On Tue, Jan 30, 2018 at 6:17 PM, 杨浩 <yangha...@gmail.com> wrote:
>
> > The format of "DATE" is yyyyMMdd, and Superset changes it to the format
> > yyyy-MM-dd, so we need to use the UDF.
> > [image: inline image 1]
> >
> > 2018-01-30 16:00 GMT+08:00 杨浩 <yangha...@gmail.com>:
> >
> >> Or the problem may be that how should we write UDF to support function
> in
> >> filter position
> >>
> >> 2018-01-30 15:45 GMT+08:00 杨浩 <yangha...@gmail.com>:
> >>
> >>> Do you mean sqllab in superset? We have solved relevant problems,
> except
> >>> for the exception supplied by me
> >>>
> >>> 2018-01-30 15:38 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:
> >>>
> >>>> Are you in sqllab write this SQL?
> >>>>
> >>>> On Tue, Jan 30, 2018 at 2:57 PM, 杨浩 <yangha...@gmail.com> wrote:
> >>>>
> >>>> > kylin developers
> >>>> >We have used superset as BI tool. Superset uses to_date to
> >>>> represent
> >>>> > time, and we add the to_date udf in our env. A query may be like
> >>>> select ***
> >>>> > from table_1 where 'DATE' >= TO_DATE('2017-12-31 00:00:00',
> >>>> 'yyyy-MM-dd').
> >>>> > The executing result is right, but the query will not use kylin
> >>>> optimize,
> >>>> > because some error has happened , and every query will scan all
> hbase
> >>>> > table. How should we add the udf ?
> >>>> >
> >>>> > 2018-01-30 13:10:03,400 WARN  [Query
> >>>> > >> f029cbac-2aba-456c-b857-f65c8661e39c-90]
> >>>> > >> filter.BuiltInFunctionTupleFilter:143 : Reflection failed for
> >>>> > methodParams.
> >>>> > >
> >>>> > > java.lang.NullPointerException
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.metadata.filter.BuiltInFunctionTupleFilter.
> >>>> addChild(
> >>>> > BuiltInFunctionTupleFilter.java:136)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.metadata.filter.TupleFilterSerializer.deser
> >>>> ialize(
> >>>> > TupleFilterSerializer.java:146)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.storage.gtrecord.CubeSegmentScanner.<
> >>>> > init>(CubeSegmentScanner.java:65)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase.search(
> >>>> > GTCubeStorageQueryBase.java:93)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
> >>>> > queryStorage(OLAPEnumerator.java:117)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
> >>>> > moveNext(OLAPEnumerator.java:64)
> >>>> > >
> >>>> > > at Baz$1$1.moveNext(Unknown Source)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy_(
> >>>> > EnumerableDefaults.java:826)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy(
> >>>> > EnumerableDefaults.java:761)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.calcite.linq4j.DefaultEnumerable.groupBy(
> >>>> > DefaultEnumerable.java:302)
> >>>> > >
> >>>> > > at Baz.bind(Unknown Source)
> >>>> > >
> >>>> > > at
> >>>> > >> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enum
> >>>> erable(
> >>>> > CalcitePrepare.java:335)
> >>>> > >
> >>>> > > at
> &g

Re: Error on adding udf to_date to support superset

2018-01-30
The format of "DATE" is yyyyMMdd, and Superset changes it to the format
yyyy-MM-dd, so we need to use the UDF.
[image: inline image 1]

2018-01-30 16:00 GMT+08:00 杨浩 <yangha...@gmail.com>:

> Or the problem may be how we should write a UDF to support functions in the
> filter position
>
> 2018-01-30 15:45 GMT+08:00 杨浩 <yangha...@gmail.com>:
>
>> Do you mean sqllab in superset? We have solved relevant problems, except
>> for the exception supplied by me
>>
>> 2018-01-30 15:38 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:
>>
>>> Are you in sqllab write this SQL?
>>>
>>> On Tue, Jan 30, 2018 at 2:57 PM, 杨浩 <yangha...@gmail.com> wrote:
>>>
>>> > kylin developers
>>> >We have used superset as BI tool. Superset uses to_date to represent
>>> > time, and we add the to_date udf in our env. A query may be like
>>> select ***
>>> > from table_1 where 'DATE' >= TO_DATE('2017-12-31 00:00:00',
>>> 'yyyy-MM-dd').
>>> > The executing result is right, but the query will not use kylin
>>> optimize,
>>> > because some error has happened , and every query will scan all hbase
>>> > table. How should we add the udf ?
>>> >
>>> > 2018-01-30 13:10:03,400 WARN  [Query
>>> > >> f029cbac-2aba-456c-b857-f65c8661e39c-90]
>>> > >> filter.BuiltInFunctionTupleFilter:143 : Reflection failed for
>>> > methodParams.
>>> > >
>>> > > java.lang.NullPointerException
>>> > >
>>> > > at
>>> > >> org.apache.kylin.metadata.filter.BuiltInFunctionTupleFilter.
>>> addChild(
>>> > BuiltInFunctionTupleFilter.java:136)
>>> > >
>>> > > at
>>> > >> org.apache.kylin.metadata.filter.TupleFilterSerializer.deserialize(
>>> > TupleFilterSerializer.java:146)
>>> > >
>>> > > at
>>> > >> org.apache.kylin.storage.gtrecord.CubeSegmentScanner.<
>>> > init>(CubeSegmentScanner.java:65)
>>> > >
>>> > > at
>>> > >> org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase.search(
>>> > GTCubeStorageQueryBase.java:93)
>>> > >
>>> > > at
>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
>>> > queryStorage(OLAPEnumerator.java:117)
>>> > >
>>> > > at
>>> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
>>> > moveNext(OLAPEnumerator.java:64)
>>> > >
>>> > > at Baz$1$1.moveNext(Unknown Source)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy_(
>>> > EnumerableDefaults.java:826)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy(
>>> > EnumerableDefaults.java:761)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.linq4j.DefaultEnumerable.groupBy(
>>> > DefaultEnumerable.java:302)
>>> > >
>>> > > at Baz.bind(Unknown Source)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(
>>> > CalcitePrepare.java:335)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(
>>> > CalciteConnectionImpl.java:294)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(
>>> > CalciteMetaImpl.java:559)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(
>>> > CalciteMetaImpl.java:550)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.avatica.AvaticaResultSet.execute(
>>> > AvaticaResultSet.java:204)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalciteResultSet.execute(
>>> > CalciteResultSet.java:67)
>>> > >
>>> > > at
>>> > >> org.apache.calcite.jdbc.CalciteResultSet.execute(
>>> > CalciteResultSet.java:44)
>>> > >
>>> > > at
>>> > >> org.apa

Re: Error on adding udf to_date to support superset

2018-01-29
Do you mean SQL Lab in Superset? We have solved the related problems, except
for the exception I reported.

2018-01-30 15:38 GMT+08:00 yongjie zhao <yongjie.z...@gmail.com>:

> Are you in sqllab write this SQL?
>
> On Tue, Jan 30, 2018 at 2:57 PM, 杨浩 <yangha...@gmail.com> wrote:
>
> > kylin developers
> >We have used superset as BI tool. Superset uses to_date to represent
> > time, and we add the to_date udf in our env. A query may be like select
> ***
> > from table_1 where 'DATE' >= TO_DATE('2017-12-31 00:00:00',
> 'yyyy-MM-dd').
> > The executing result is right, but the query will not use kylin optimize,
> > because some error has happened , and every query will scan all hbase
> > table. How should we add the udf ?
> >
> > 2018-01-30 13:10:03,400 WARN  [Query
> > >> f029cbac-2aba-456c-b857-f65c8661e39c-90]
> > >> filter.BuiltInFunctionTupleFilter:143 : Reflection failed for
> > methodParams.
> > >
> > > java.lang.NullPointerException
> > >
> > > at
> > >> org.apache.kylin.metadata.filter.BuiltInFunctionTupleFilter.addChild(
> > BuiltInFunctionTupleFilter.java:136)
> > >
> > > at
> > >> org.apache.kylin.metadata.filter.TupleFilterSerializer.deserialize(
> > TupleFilterSerializer.java:146)
> > >
> > > at
> > >> org.apache.kylin.storage.gtrecord.CubeSegmentScanner.<
> > init>(CubeSegmentScanner.java:65)
> > >
> > > at
> > >> org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase.search(
> > GTCubeStorageQueryBase.java:93)
> > >
> > > at
> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
> > queryStorage(OLAPEnumerator.java:117)
> > >
> > > at
> > >> org.apache.kylin.query.enumerator.OLAPEnumerator.
> > moveNext(OLAPEnumerator.java:64)
> > >
> > > at Baz$1$1.moveNext(Unknown Source)
> > >
> > > at
> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy_(
> > EnumerableDefaults.java:826)
> > >
> > > at
> > >> org.apache.calcite.linq4j.EnumerableDefaults.groupBy(
> > EnumerableDefaults.java:761)
> > >
> > > at
> > >> org.apache.calcite.linq4j.DefaultEnumerable.groupBy(
> > DefaultEnumerable.java:302)
> > >
> > > at Baz.bind(Unknown Source)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(
> > CalcitePrepare.java:335)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(
> > CalciteConnectionImpl.java:294)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(
> > CalciteMetaImpl.java:559)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(
> > CalciteMetaImpl.java:550)
> > >
> > > at
> > >> org.apache.calcite.avatica.AvaticaResultSet.execute(
> > AvaticaResultSet.java:204)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteResultSet.execute(
> > CalciteResultSet.java:67)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteResultSet.execute(
> > CalciteResultSet.java:44)
> > >
> > > at
> > >> org.apache.calcite.avatica.AvaticaConnection$1.execute(
> > AvaticaConnection.java:630)
> > >
> > > at
> > >> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(
> > CalciteMetaImpl.java:607)
> > >
> > > at
> > >> org.apache.calcite.avatica.AvaticaConnection.
> prepareAndExecuteInternal(
> > AvaticaConnection.java:638)
> > >
> > > at
> > >> org.apache.calcite.avatica.AvaticaStatement.executeInternal(
> > AvaticaStatement.java:149)
> > >
> > > at
> > >> org.apache.calcite.avatica.AvaticaStatement.executeQuery(
> > AvaticaStatement.java:218)
> > >
> > > at
> > >> org.apache.kylin.rest.service.QueryService.execute(
> > QueryService.java:845)
> > >
> > > at
> > >> org.apache.kylin.rest.service.QueryService.queryWithSqlMassage(
> > QueryService.java:572)
> > >
> > > at
> > >> org.apache.kylin.rest.service.QueryService.query(
> QueryService.java:

Error on adding udf to_date to support superset

2018-01-29
Kylin developers,
   We use Superset as our BI tool. Superset uses to_date to represent
time, so we added a to_date UDF in our environment. A query may look like select ***
from table_1 where 'DATE' >= TO_DATE('2017-12-31 00:00:00', 'yyyy-MM-dd').
The execution result is correct, but the query does not benefit from Kylin's
cube optimization, because an error occurs and every query scans the whole
HBase table. How should we add the UDF?

2018-01-30 13:10:03,400 WARN  [Query
>> f029cbac-2aba-456c-b857-f65c8661e39c-90]
>> filter.BuiltInFunctionTupleFilter:143 : Reflection failed for methodParams.
>
> java.lang.NullPointerException
>
> at
>> org.apache.kylin.metadata.filter.BuiltInFunctionTupleFilter.addChild(BuiltInFunctionTupleFilter.java:136)
>
> at
>> org.apache.kylin.metadata.filter.TupleFilterSerializer.deserialize(TupleFilterSerializer.java:146)
>
> at
>> org.apache.kylin.storage.gtrecord.CubeSegmentScanner.(CubeSegmentScanner.java:65)
>
> at
>> org.apache.kylin.storage.gtrecord.GTCubeStorageQueryBase.search(GTCubeStorageQueryBase.java:93)
>
> at
>> org.apache.kylin.query.enumerator.OLAPEnumerator.queryStorage(OLAPEnumerator.java:117)
>
> at
>> org.apache.kylin.query.enumerator.OLAPEnumerator.moveNext(OLAPEnumerator.java:64)
>
> at Baz$1$1.moveNext(Unknown Source)
>
> at
>> org.apache.calcite.linq4j.EnumerableDefaults.groupBy_(EnumerableDefaults.java:826)
>
> at
>> org.apache.calcite.linq4j.EnumerableDefaults.groupBy(EnumerableDefaults.java:761)
>
> at
>> org.apache.calcite.linq4j.DefaultEnumerable.groupBy(DefaultEnumerable.java:302)
>
> at Baz.bind(Unknown Source)
>
> at
>> org.apache.calcite.jdbc.CalcitePrepare$CalciteSignature.enumerable(CalcitePrepare.java:335)
>
> at
>> org.apache.calcite.jdbc.CalciteConnectionImpl.enumerable(CalciteConnectionImpl.java:294)
>
> at
>> org.apache.calcite.jdbc.CalciteMetaImpl._createIterable(CalciteMetaImpl.java:559)
>
> at
>> org.apache.calcite.jdbc.CalciteMetaImpl.createIterable(CalciteMetaImpl.java:550)
>
> at
>> org.apache.calcite.avatica.AvaticaResultSet.execute(AvaticaResultSet.java:204)
>
> at
>> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:67)
>
> at
>> org.apache.calcite.jdbc.CalciteResultSet.execute(CalciteResultSet.java:44)
>
> at
>> org.apache.calcite.avatica.AvaticaConnection$1.execute(AvaticaConnection.java:630)
>
> at
>> org.apache.calcite.jdbc.CalciteMetaImpl.prepareAndExecute(CalciteMetaImpl.java:607)
>
> at
>> org.apache.calcite.avatica.AvaticaConnection.prepareAndExecuteInternal(AvaticaConnection.java:638)
>
> at
>> org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:149)
>
> at
>> org.apache.calcite.avatica.AvaticaStatement.executeQuery(AvaticaStatement.java:218)
>
> at
>> org.apache.kylin.rest.service.QueryService.execute(QueryService.java:845)
>
> at
>> org.apache.kylin.rest.service.QueryService.queryWithSqlMassage(QueryService.java:572)
>
> at
>> org.apache.kylin.rest.service.QueryService.query(QueryService.java:181)
>
> at
>> org.apache.kylin.rest.service.QueryService.doQueryWithCache(QueryService.java:428)
>
>
The ToDateUDF looks like this, and I have added it in KylinConfigBase.getUDFs:


import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;

import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ToDateUDF {
    private static final Logger logger = LoggerFactory.getLogger(ToDateUDF.class);

    // Parses sourceDateStr with the given source format and re-formats it as "yyyyMMdd".
    public String eval(String sourceDateStr, String sourceDateFormat) {
        sourceDateStr = sourceDateStr.replaceAll("'", "").trim();
        sourceDateFormat = sourceDateFormat.replaceAll("'", "").trim();
        try {
            SimpleDateFormat dateFormat = new SimpleDateFormat(sourceDateFormat);
            long ts = dateFormat.parse(sourceDateStr).getTime();
            return getFormatTime(ts, "yyyyMMdd");
        } catch (ParseException e) {
            logger.error("parse error", e);
            logger.error("sourceDateStr:{}, sourceDateFormat:{}", sourceDateStr, sourceDateFormat);
            return "";
        }
    }

    // Formats a timestamp (milliseconds) with the given pattern, defaulting to "yyyyMMdd".
    public static String getFormatTime(long timeStamp, String format) {
        if (StringUtils.isBlank(format)) {
            format = "yyyyMMdd";
        }
        Calendar cal = Calendar.getInstance();
        SimpleDateFormat formatter = new SimpleDateFormat(format);
        cal.setTimeInMillis(timeStamp);
        return formatter.format(cal.getTime());
    }
}
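
For context, a minimal runnable check of the UDF above (the demo class name is mine, not part of Kylin or the original mail) shows that it turns the filter value from the failing query back into the integer-style yyyyMMdd form used by the "DATE" dimension:

public class ToDateUDFDemo {
    public static void main(String[] args) {
        ToDateUDF udf = new ToDateUDF();
        // Mirrors the filter in the failing query:
        //   'DATE' >= TO_DATE('2017-12-31 00:00:00', 'yyyy-MM-dd')
        // SimpleDateFormat parses the leading "2017-12-31" and ignores the rest,
        // so eval() returns the dimension's integer-style form.
        System.out.println(udf.eval("'2017-12-31 00:00:00'", "'yyyy-MM-dd'")); // 20171231
    }
}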


Re: Auto merge not working on Kylin 2.2

2018-01-17
It seems to be KYLIN-3165. I think you can merge that fix into Kylin 2.2.0 to
solve the problem.

2017-12-26 14:23 GMT+08:00 ketan dikshit :

> Hi ShaoFeng,
> We are using Hive as data source for pulling in fact data for cube
> building,
> Also the commit id as I see is: 0169dc0950fc278239413cfb71313ff9ae8edf7d
>
> Will also investigate myself, and try to resolve the same and share a
> patch.
>
> Thanks,
> Ketan
>
> > On 22-Dec-2017, at 7:41 PM, ShaoFeng Shi  wrote:
> >
> > Hi Ketan,
> >
> > Is your Cube data from Hive or Kafka?
> >
> > The line number seems couldn't match with the Kylin 2.2 source code; Can
> > you double confirm the Kylin version? In Kylin installation folder, there
> > is a commit id sha file, please provide that file content to us so we can
> > exactly match the line numbers. Thanks.
> >
> > 2017-12-22 18:12 GMT+08:00 ShaoFeng Shi :
> >
> >> It should be a bug; I took a quick look but didn't find the root cause
> >> (very busy in the year-end). Could you please investigate it and then
> >> contribute a patch? Thanks!
> >>
> >> 2017-12-22 16:41 GMT+08:00 ketan dikshit 
> :
> >>
> >>> Hi Guys,
> >>> Any update on this, as I see this is a bug or ?
> >>>
> >>> We are passing in null as TSRange and the check
> >>> Preconditions.checkArgument(tsRange != null); is always failing.
> >>>
> >>> Help would be appreciated.
> >>>
> >>> Thanks,
> >>> Ketan
> >>>
>  On 21-Dec-2017, at 3:06 PM, ketan dikshit
> 
> >>> wrote:
> 
>  Hi Guys,
>  I am using Kylin 2.2 and getting some issues with Kylin auto merge.
>  The error log in kylin.log is here:
> 
>  2017-12-21 08:31:18,811 INFO  [Thread-50937] service.CubeService:552 :
> >>> checking keepCubeRetention
>  2017-12-21 08:31:18,811 DEBUG [Thread-50937] model.Segments:201 : Cube
> >>> publishers_v4 has 1 building segments
>  2017-12-21 08:31:18,811 ERROR [Thread-50937] service.CacheService:87 :
> >>> Error in updateOnNewSegmentReady()
>  java.lang.IllegalArgumentException
>   at com.google.common.base.Preconditions.checkArgument(Precondit
> >>> ions.java:76)
>   at org.apache.kylin.cube.CubeManager.mergeSegments(CubeManager.
> >>> java:544)
>   at org.apache.kylin.rest.service.CubeService.mergeCubeSegment(C
> >>> ubeService.java:601)
>   at org.apache.kylin.rest.service.CubeService.updateOnNewSegment
> >>> Ready(CubeService.java:545)
>   at org.apache.kylin.rest.service.CubeService$$FastClassBySpring
> >>> CGLIB$$17a07c0e.invoke()
>   at org.springframework.cglib.proxy.MethodProxy.invoke(MethodPro
> >>> xy.java:204)
>   at org.springframework.aop.framework.CglibAopProxy$DynamicAdvis
> >>> edInterceptor.intercept(CglibAopProxy.java:669)
>   at org.apache.kylin.rest.service.CubeService$$EnhancerBySpringC
> >>> GLIB$$e4f6b188.updateOnNewSegmentReady()
>   at org.apache.kylin.rest.service.CacheService$1$1.run(CacheServ
> >>> ice.java:85)
> 
> 
>  Auto Merge Thresholds
>  8 (Hours)
>  1 (Days)
>  7 (Days)
>  15 (Days)
> 
> 
>  As I see inside the code, I see,
>  private void mergeCubeSegment(String cubeName) {
>    CubeInstance cube = getCubeManager().getCube(cubeName);
>    if (!cube.needAutoMerge()) {
>    return;
>    }
> 
>    synchronized (CubeService.class) {
>    try {
>    cube = getCubeManager().getCube(cubeName);
>    SegmentRange offsets = cube.autoMergeCubeSegments();
>    if (offsets != null) {
>    CubeSegment newSeg = getCubeManager().
> mergeSegments(cube,
> >>> null, offsets, true);
>    logger.debug("Will submit merge job on " + newSeg);
>    DefaultChainedExecutable job =
> >>> EngineFactory.createBatchMergeJob(newSeg, "SYSTEM");
>    getExecutableManager().addJob(job);
>    } else {
>    logger.debug("Not ready for merge on cube " + cubeName);
>    }
>    } catch (IOException e) {
>    logger.error("Failed to auto merge cube " + cubeName, e);
>    }
>    }
>  }
> 
>  ie; getCubeManager().mergeSegments(cube, null, offsets, true); is
> >>> passing the TSRange as ‘null’;
>  to CubeManager:public CubeSegment mergeSegments(CubeInstance cube,
> >>> TSRange tsRange, SegmentRange segRange, boolean force) method;
>  which is defined as:
>  if (cube.getSegments().isEmpty())
>    throw new IllegalArgumentException("Cube " + cube + " has no
> >>> segments");
> 
>  checkInputRanges(tsRange, segRange);
>  checkBuildingSegment(cube);
>  checkCubeIsPartitioned(cube);
> 
>  if (cube.getSegments().getFirstSegment().isOffsetCube()) {
>    // offset cube, merge by date 

Re: [jira] [Updated] (KYLIN-3170) Kylin generate wrong cuboids

2018-01-16
What's your configured value of kylin.cube.aggrgroup.is-mandatory-only-valid?
Can you try setting it to true?

2018-01-17 11:18 GMT+08:00 Xingxing Di (JIRA) :

>
>  [ https://issues.apache.org/jira/browse/KYLIN-3170?page=
> com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
> Xingxing Di updated KYLIN-3170:
> ---
> Affects Version/s: v1.6.0
>
> > Kylin generate wrong cuboids
> > 
> >
> > Key: KYLIN-3170
> > URL: https://issues.apache.org/jira/browse/KYLIN-3170
> > Project: Kylin
> >  Issue Type: Bug
> >  Components: Job Engine
> >Affects Versions: v1.6.0, v2.2.0
> > Environment: kylin2.2.0, hbase1.2.4
> >Reporter: Xingxing Di
> >Assignee: Shaofeng SHI
> >Priority: Major
> >
> > We have a cube which has 5 dimensions and 2 agg groups:
> > 5 dimensions : A, B, C, D, E
> > agg group1 : Mandatory Dimensions (A, B),  Hierarchy Dimensions (D, E)
> > agg group2 : Mandatory Dimensions (A, B),  Hierarchy Dimensions (C, E)
> >
> > The cube generate 5 cuboids :
> >
> > Cuboid 1
> >  Cuboid 11011
> >  Cuboid 11010
> >  Cuboid 11101
> >  Cuboid 11100
> >
> > there this no cuboid 11000 , I'm confused about this,  I think
> this should be a bug.
> >
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>


Re: [jira] [Updated] (KYLIN-3089) Query exception on SortedIteratorMergerWithLimit

2017-12-11
Can anyone help review the logic?

2017-12-12 10:38 GMT+08:00 Yang Hao (JIRA) :

>
>  [ https://issues.apache.org/jira/browse/KYLIN-3089?page=
> com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
>
> Yang Hao updated KYLIN-3089:
> 
> Component/s: Query Engine
>
> > Query exception on SortedIteratorMergerWithLimit
> > 
> >
> > Key: KYLIN-3089
> > URL: https://issues.apache.org/jira/browse/KYLIN-3089
> > Project: Kylin
> >  Issue Type: Bug
> >  Components: Query Engine
> >Affects Versions: v2.1.0
> >Reporter: Yang Hao
> >
> > The executing error only exists on some special case. I have a simple
> sql, and the query is routing onto SortedIteratorMergerWithLimit. When
> iterate data, it triggers such error
> > {code:java}
> >//TODO: remove this check when validated
> > if (last != null) {
> > if (comparator.compare(last, fetched) > 0)
> > throw new IllegalStateException("Not sorted! last: "
> + last + " fetched: " + fetched);
> > }
> > {code}
> > sql is as belows.
> > {code:java}
> > select "DATE",appid,dim_1,dim_2, sum(uv) as uv
> > from table_1
> > where appid =  and "DATE" = 2017
> > group by "DATE",appid,dim_1,dim_2
> > limit 5
> > {code}
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.4.14#64029)
>


Shall we define the base cuboid?

2017-11-19
In the article
http://kylin.apache.org/blog/2016/02/18/new-aggregation-group/ , two aggregation
groups are defined at the end. AGG1 is the base cuboid; shall we define it
explicitly? It seems the base cuboid is computed implicitly.


Re: how to change the order of rowkey_columns

2017-11-14
Thanks for that

2017-11-14 22:50 GMT+08:00 Billy Liu <billy...@apache.org>:

> You could change the rowkey order by drag-and-drop from GUI directly.
>
> 2017-11-14 10:52 GMT+08:00 杨浩 <yangha...@gmail.com>:
>
>>To speed up the query, we should position the frequent column before
>> others, such as partition column date should be at the first for every
>> query would contain it. Then how to change the order of  rowkey_columns, or
>> in other words,  what decides the order of rowkey_columns?
>>
>
>


how to change the order of rowkey_columns

2017-11-13
   To speed up queries, we should place frequently used columns before
others; for example, the partition date column should come first, since every
query contains it. So how do we change the order of rowkey_columns, or,
in other words, what decides the order of rowkey_columns?


Re: [VOTE] Release apache-kylin-2.2.0 (RC1)

2017-10-30
+1

2017-10-31 10:50 GMT+08:00 ShaoFeng Shi :

> +1 (binding)
>
> mvn test passed
>
> Checked signature and MD5 hash; Looks good to me.
>
> 2017-10-30 22:34 GMT+08:00 Billy Liu :
>
> > +1 (binding)
> >
> > mvn test passed
> > md5 verified with Java 1.8.0_91-b14 on MacOS 10.13
> >
> > 2017-10-30 21:55 GMT+08:00 Dong Li :
> >
> > > Hi all,
> > >
> > > I have created a build for Apache Kylin 2.2.0, release candidate 1.
> > >
> > > Changes highlights:
> > >
> > > KYLIN-2963 - Remove Beta for Spark Cubing
> > > KYLIN-2703 - kylin supports managing ACL through apache ranger.
> > > KYLIN-2761 - Table Level ACL
> > >
> > > And more than 74 issues fixed.
> > >
> > > Thanks to everyone who has contributed to this release.
> > > Here’s release notes:
> > > *https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > > projectId=12316121=12341139
> > >  > > projectId=12316121=12341139>*
> > >
> > > The commit to be voted upon:
> > >
> > > *https://github.com/apache/kylin/commit/850c0a6a3b9296d26121f9986701bb
> > > 191921b1bf
> > >  > > 191921b1bf>*
> > >
> > > Its hash is 850c0a6a3b9296d26121f9986701bb191921b1bf.
> > >
> > > The artifacts to be voted on are located here:
> > > https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.2.0-rc1/
> > >
> > > (The binary packages for HBase 1.x and CDH 5.7 are also provided for
> > > testing)
> > >
> > > The hashes of the artifacts are as follows:
> > > apache-kylin-2.2.0-src.tar.gz.md5 49d166925dd841f6d0333c2a89e66497
> > > apache-kylin-2.2.0-src.tar.gz.sha1 477cf78734382916537b951b0ebaae
> > > 39122441f3
> > >
> > > A staged Maven repository is available for review at:
> > > *https://repository.apache.org/content/repositories/
> orgapachekylin-1045
> > >  orgapachekylin-1045
> > >*
> > >
> > > Release artifacts are signed with the following key:
> > > https://people.apache.org/keys/committer/lidong.asc
> > >
> > > Please vote on releasing this package as Apache Kylin 2.2.0.
> > >
> > > The vote is open for the next 72 hours and passes if a majority of
> > > at least three +1 PPMC votes are cast.
> > >
> > > [ ] +1 Release this package as Apache Kylin 2.2.0
> > > [ ]  0 I don't feel strongly about it, but I'm okay with the release
> > > [ ] -1 Do not release this package because...
> > >
> > > Here is my vote:
> > >
> > > +1 (binding)
> > >
> > > Thanks,
> > > Dong Li
> > >
> >
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>


what's the best practice for choosing machine for kylin

2017-10-30
What's the best practice for choosing machines for Kylin?


Re: question about joint dimensions

2017-10-27
I see; we cannot reuse the old cube data if we want to add a new aggregation
group to speed up queries.

2017-10-27 9:18 GMT+08:00 ShaoFeng Shi <shaofeng...@apache.org>:

> You can disable the cube, purge the data, edit it to add new group, and
> then rebuild it.
>
> Or you can clone it into a new cube, and then make the change on it. After
> the new cube is ready, disable the old one.
>
> 2017-10-26 16:31 GMT+08:00 杨浩 <yangha...@gmail.com>:
>
> > Thanks for your explanation. If we want to add new aggregation group ,
> what
> > should I do?
> >
> > 2017-10-25 18:36 GMT+08:00 ShaoFeng Shi <shaofeng...@apache.org>:
> >
> > > Hi hao,
> > >
> > > It will go to the base cuboid: ABCDE, and aggregate D at running time.
> So
> > > in theory, it can answer all combinations even if you customized the
> > > groups.
> > >
> > > 2017-10-25 14:58 GMT+08:00 杨浩 <yangha...@gmail.com>:
> > >
> > > > I have used aggregation groups to reduce cuboids, but it seems not
> take
> > > > effect. A table has dimension A, B, C, D, E, and set this aggregation
> > > > groups:
> > > > group 1: A,B,C which has joint dimensions A,B,C
> > > > group 2: A,B,D which has joint dimensions A,B,D
> > > > group 3: A,B,E which has joint dimensions A,B,E
> > > > but I can query the table using "group by A B C E" ,  is it OK?
> > > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > >
> > > Shaofeng Shi 史少锋
> > >
> >
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>


Re: question about joint dimensions

2017-10-26
Thanks for your explanation. If we want to add a new aggregation group, what
should I do?

2017-10-25 18:36 GMT+08:00 ShaoFeng Shi <shaofeng...@apache.org>:

> Hi hao,
>
> It will go to the base cuboid: ABCDE, and aggregate D at running time. So
> in theory, it can answer all combinations even if you customized the
> groups.
>
> 2017-10-25 14:58 GMT+08:00 杨浩 <yangha...@gmail.com>:
>
> > I have used aggregation groups to reduce cuboids, but it seems not take
> > effect. A table has dimension A, B, C, D, E, and set this aggregation
> > groups:
> > group 1: A,B,C which has joint dimensions A,B,C
> > group 2: A,B,D which has joint dimensions A,B,D
> > group 3: A,B,E which has joint dimensions A,B,E
> > but I can query the table using "group by A B C E" ,  is it OK?
> >
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>


question about joint dimensions

2017-10-25
I have used aggregation groups to reduce cuboids, but they don't seem to take
effect. A table has dimensions A, B, C, D, E, with these aggregation
groups:
group 1: A,B,C which has joint dimensions A,B,C
group 2: A,B,D which has joint dimensions A,B,D
group 3: A,B,E which has joint dimensions A,B,E
But I can still query the table using "group by A B C E"; is that OK?
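
Assuming a joint's dimensions appear either all together or not at all, the three groups above materialize only the cuboids ABC, ABD and ABE besides the base cuboid ABCDE, which is why the replies above say a "group by A B C E" query is answered from the base cuboid with runtime aggregation. A minimal sketch (plain Java for illustration, not Kylin code):

import java.util.LinkedHashSet;
import java.util.Set;

public class CuboidSketch {
    public static void main(String[] args) {
        Set<String> cuboids = new LinkedHashSet<>();
        cuboids.add("ABCDE"); // base cuboid, always built
        cuboids.add("ABC");   // group 1: joint (A, B, C) appears as one unit
        cuboids.add("ABD");   // group 2: joint (A, B, D)
        cuboids.add("ABE");   // group 3: joint (A, B, E)

        // "group by A B C E" would need cuboid ABCE, which is not materialized,
        // so Kylin falls back to the base cuboid ABCDE and aggregates D away at
        // query time -- correct results, but slower than a dedicated cuboid.
        System.out.println(cuboids.contains("ABCE")); // prints: false
    }
}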


Re: Re: How to clean up the invisible cube

2017-10-20
You can delete cube_0831 like this:

  bin/metastore.sh remove /cube/cube_0831.json
  bin/metastore.sh remove /cube_desc/cube_0831.json


2017-09-20 13:24 GMT+08:00 ShaoFeng Shi :

> You can backup all the metadata to a local folder, and then run
> "metastore.sh reset" to clean up them from hbase; In local metadata, you
> can remove the redundant cube json file, and also remove the reference in
> the project json file. When all done, restore that to hbase.
>
> All these operations need be taken when Kylin is stopped. A backup is
> required before start.
>
> 2017-09-19 19:00 GMT+08:00 apache_...@163.com :
>
> > Info show Loaded 12 Cube(s),but Only 9  can lists(2017-09-19
> > 18:58:33,788 INFO  [http-bio-7070-exec-2] cube.CubeManager:795 : Loaded 9
> > cubes, fail on 0 cubes)
> >
> >
> >
> > 2017-09-19 18:56:49,105 DEBUG [Thread-15] metadata.MetadataManager:388 :
> > Reloading Table_exd info from folder kylin_metadata(key='/table_
> > exd')@kylin_metadata@hbase
> > 2017-09-19 18:56:49,883 DEBUG [Thread-15] metadata.MetadataManager:397 :
> > Loaded 26 SourceTable EXD(s)
> > 2017-09-19 18:56:49,883 DEBUG [Thread-15] metadata.MetadataManager:572 :
> > Reloading DataModel from folder kylin_metadata(key='/model_
> > desc')@kylin_metadata@hbase
> > 2017-09-19 18:56:50,015 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/BUSS_INFO.json
> > 2017-09-19 18:56:50,104 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/M_BUSS_INFO.json
> > 2017-09-19 18:56:50,108 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/kylin_sales_model.json
> > 2017-09-19 18:56:50,113 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/model.json
> > 2017-09-19 18:56:50,117 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/model_loan.json
> > 2017-09-19 18:56:50,130 INFO  [Thread-15] metadata.MetadataManager:580 :
> > Reloading data model at /model_desc/yewubill.json
> > 2017-09-19 18:56:50,132 DEBUG [Thread-15] metadata.MetadataManager:588 :
> > Loaded 6 DataModel(s)
> > 2017-09-19 18:56:50,133 DEBUG [Thread-15] metadata.MetadataManager:453 :
> > Reloading ExternalFilter from folder kylin_metadata(key='/ext_
> > filter')@kylin_metadata@hbase
> > 2017-09-19 18:56:50,151 DEBUG [Thread-15] metadata.MetadataManager:462 :
> > Loaded 0 ExternalFilter(s)
> >
> >
> >
> > se
> > 2017-09-19 18:58:33,748 WARN  [http-bio-7070-exec-2] common.
> BackwardCompatibilityConfig:93
> > : Config 'kylin.hbase.region.cut' is deprecated, use
> > 'kylin.storage.hbase.region-cut-gb' instead
> > 2017-09-19 18:58:33,749 WARN  [http-bio-7070-exec-2] common.
> BackwardCompatibilityConfig:93
> > : Config 'kylin.hbase.region.count.min' is deprecated, use
> > 'kylin.storage.hbase.min-region-count' instead
> > 2017-09-19 18:58:33,749 WARN  [http-bio-7070-exec-2] common.
> BackwardCompatibilityConfig:93
> > : Config 'kylin.hbase.region.count.max' is deprecated, use
> > 'kylin.storage.hbase.max-region-count' instead
> > 2017-09-19 18:58:33,775 INFO  [http-bio-7070-exec-2]
> > cube.CubeDescManager:340 : Loaded 12 Cube(s)
> > 2017-09-19 18:58:33,776 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube C_4187e78b4ce54017aae2f161ea3840dd being CUBE[name=C_
> > 4187e78b4ce54017aae2f161ea3840dd] having 1 segments
> > 2017-09-19 18:58:33,777 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube asdfds being CUBE[name=asdfds] having 0 segments
> > 2017-09-19 18:58:33,779 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube cube_loan being CUBE[name=cube_loan] having 1 segments
> > 2017-09-19 18:58:33,781 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube day1 being CUBE[name=day1] having 2 segments
> > 2017-09-19 18:58:33,782 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube group being CUBE[name=group] having 0 segments
> > 2017-09-19 18:58:33,784 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube kylin_sales_cube being CUBE[name=kylin_sales_cube]
> having 1
> > segments
> > 2017-09-19 18:58:33,785 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube kylin_sales_cube_clone being CUBE[name=kylin_sales_cube_
> clone]
> > having 0 segments
> > 2017-09-19 18:58:33,787 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube sadf being CUBE[name=sadf] having 0 segments
> > 2017-09-19 18:58:33,788 INFO  [http-bio-7070-exec-2] cube.CubeManager:834
> > : Reloaded cube yewucube being CUBE[name=yewucube] having 1 segments
> > 2017-09-19 18:58:33,788 INFO  [http-bio-7070-exec-2] cube.CubeManager:795
> > : Loaded 9 cubes, fail on 0 cubes
> > 2017-09-19 18:58:53,775 INFO  [pool-8-thread-1]
> > threadpool.DefaultScheduler:123 : Job Fetcher: 0 should running, 0
> actual
> > running, 0 stopped, 0 ready, 136 already succeed, 

Re: [Announce] New Apache Kylin committer Cheng Wang

2017-10-19
Congratulations to Cheng!

2017-10-17 16:30 GMT+08:00 ShaoFeng Shi :

> Welcome Cheng!
>
> 2017-10-17 12:48 GMT+08:00 Cheng Wang :
>
> > Thanks!
> >
> >
> >
> > On 10/17/17, 11:17 AM, "Chen, Sammi"  wrote:
> >
> > >Congratulations to Cheng!
> > >
> > >-Original Message-
> > >From: Luke Han [mailto:luke...@apache.org]
> > >Sent: Monday, October 16, 2017 6:33 PM
> > >To: dev ; user ; Apache
> > Kylin PMC 
> > >Subject: [Announce] New Apache Kylin committer Cheng Wang
> > >
> > >On behalf of the Apache Kylin PMC, I am very pleased to announce that
> > Cheng Wang has accepted the PMC's invitation to become a committer on the
> > project.
> > >
> > >We appreciate all of Cheng's generous contributions about many bug
> fixes,
> > patches, helped many users. We are so glad to have him to be our new
> > committer and looking forward to his continued involvement.
> > >
> > >Congratulations and Welcome, Cheng!
> >
>
>
>
> --
> Best regards,
>
> Shaofeng Shi 史少锋
>


How to remove error model?

2017-10-12
There are some broken models/cubes in Kylin that cannot be seen in the web UI,
and I cannot create a new model with the same name. How can I delete them?

The command bin/metastore.sh clean --delete true cannot delete the dirty model.


Re: how to filter long tail data

2017-09-03
Okay, our team wants to use Kylin as an ETL tool, but there is a lot of long
tail data after building. Can this data be filtered directly by Kylin, or
do we have to make some changes to the code?

2017-09-03 19:42 GMT+08:00 Li Yang <liy...@apache.org>:

> Please ask Kylin related question here.
>
> On Fri, Sep 1, 2017 at 2:47 PM, 杨浩 <yangha...@gmail.com> wrote:
>
> > If a index is less than 2, we don't want to store it in hbase . How to
> > filter the long tail data ?
> >
>


How to filter the long tail data?

2017-09-01
If an index value is less than 2, we don't want to store it in HBase. How can
we filter out the long tail data?


how to filter long tail data

2017-09-01
If an index value is less than 2, we don't want to store it in HBase. How can
we filter out the long tail data?


Re: [VOTE] Release apache-kylin-2.1.0 (RC1)

2017-08-06
+1

2017-08-06 20:39 GMT+08:00 Li Yang :

> +1
>
> mvn test passed
>
> java version "1.7.0_95"
> OpenJDK Runtime Environment (rhel-2.6.4.0.el6_7-x86_64 u95-b00)
> OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
>
>
>
> On Sat, Aug 5, 2017 at 6:37 PM, ShaoFeng Shi 
> wrote:
>
> > Hi all,
> >
> > I have created a build for Apache Kylin 2.1.0, release candidate 1.
> >
> > Changes highlights:
> > KYLIN-2506 - Refactor global dictionary
> > KYLIN-2575 - Experimental feature: Computed Column
> > KYLIN-2579 KYLIN-2580  - Improvement on subqueries
> > KYLIN-2633 - Upgrade Spark to 2.1
> > KYLIN-2646 - Project level query authorization
> >
> > And more than 60 bug fixes.
> >
> > Thanks to everyone who has contributed to this release.
> > Here’s release notes:
> > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12316121=12340443
> >
> > The commit to be voted upon:
> >
> > https://github.com/apache/kylin/commit/47b5a0ded63e721736dbe6c4ecf1f0
> > 2b0b97ba43
> >
> > Its hash is 47b5a0ded63e721736dbe6c4ecf1f02b0b97ba43.
> >
> > The artifacts to be voted on are located here:
> > https://dist.apache.org/repos/dist/dev/kylin/apache-kylin-2.1.0-rc1/
> >
> > (The binary packages for HBase 1.x and CDH 5.7are also provided for
> > testing)
> >
> > The hashes of the artifacts are as follows:
> > apache-kylin-2.1.0-src.tar.gz.md5 bb0458bff380f0670ccea57773d809d9
> > apache-kylin-2.1.0-src.tar.gz.sha1 5e5eebacdd72ded25508c4947565a1
> > ef48784ddb
> >
> > A staged Maven repository is available for review at:
> > https://repository.apache.org/content/repositories/orgapachekylin-1042/
> >
> > Release artifacts are signed with the following key:
> > https://people.apache.org/keys/committer/shaofengshi.asc
> >
> > Please vote on releasing this package as Apache Kylin 2.1.0.
> >
> > The vote is open for the next 72 hours and passes if a majority of
> > at least three +1 PPMC votes are cast.
> >
> > [ ] +1 Release this package as Apache Kylin 2.1.0
> > [ ]  0 I don't feel strongly about it, but I'm okay with the release
> > [ ] -1 Do not release this package because...
> >
> > Here is my vote:
> >
> > +1 (binding)
> >
> > --
> > Best regards,
> >
> > Shaofeng Shi 史少锋
> >
>


use standalone secure Hive and MR

2017-06-27
Our company uses a Hadoop distribution different from Apache Hadoop, and
Kerberos is used for security. Our group wants to use Kylin with the Hive
and MR from our company, but with the HBase maintained by our team, which is
Apache HBase. We have tried Hive beeline, but it cannot read the metadata from
Hive. Does anyone know how to configure Kylin, or change the source code, to
use a standalone Hive and MR?