[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-14 Thread Xiaoxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoxiang Yu updated KYLIN-4625:

Fix Version/s: v4.0.0-beta

> Debug the code of Kylin on Parquet without hadoop environment
> -------------------------------------------------------------
>
> Key: KYLIN-4625
> URL: https://issues.apache.org/jira/browse/KYLIN-4625
> Project: Kylin
>  Issue Type: Improvement
>  Components: Spark Engine
> Reporter: wangrupeng
> Assignee: wangrupeng
> Priority: Major
> Fix For: v4.0.0-beta
>
> Attachments: image-2020-07-08-17-41-35-954.png, 
> image-2020-07-08-17-42-09-603.png, screenshot-1.png
>
>
> Currently, Kylin on Parquet already supports debugging the source code with
> local CSV files, without depending on a remote HDP sandbox, but the setup is
> a little complex. The steps are as follows:
>  * Edit the properties in
> $KYLIN_SOURCE_DIR/examples/test_case_data/sandbox/kylin.properties to point at local resources:
> {code:java}
>  kylin.metadata.url=$LOCAL_META_DIR
>  kylin.env.zookeeper-is-local=true
>  kylin.env.hdfs-working-dir=file:///path/to/local/dir
>  kylin.engine.spark-conf.spark.master=local
>  kylin.engine.spark-conf.spark.eventLog.dir=/path/to/local/dir
>  kylin.env=UT{code}
>  * Debug org.apache.kylin.rest.DebugTomcat in IDEA and add the VM option
> "-Dspark.local=true".
>  !image-2020-07-08-17-41-35-954.png|width=574,height=363!
>  * Load a CSV data source via "Data Source -> Load CSV File as Table" on the
> "Model" page, set the schema for your table, then press "Submit" to save.
>  !image-2020-07-08-17-42-09-603.png|width=577,height=259!
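For illustration, the "Load CSV File as Table" step conceptually pairs the schema you declare with each row of the CSV file. A minimal sketch of that idea (not Kylin's actual implementation; the class and column names are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration only, not Kylin code: pair a declared column schema with the
// fields of one CSV line, the way a loaded CSV table maps rows to columns.
public class CsvRowMapper {
    static Map<String, String> mapRow(String[] columns, String csvLine) {
        String[] values = csvLine.split(",", -1); // -1 keeps trailing empty fields
        Map<String, String> row = new LinkedHashMap<>();
        for (int i = 0; i < columns.length; i++) {
            // Missing trailing fields become null rather than throwing.
            row.put(columns[i], i < values.length ? values[i] : null);
        }
        return row;
    }

    public static void main(String[] args) {
        String[] schema = {"TRANS_ID", "PART_DT", "PRICE"};
        System.out.println(mapRow(schema, "1,2012-01-01,12.5"));
        // prints {TRANS_ID=1, PART_DT=2012-01-01, PRICE=12.5}
    }
}
```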
> Most of the time when debugging we just want to build and query a cube
> quickly and focus on the bug we want to resolve. But the current way is
> cumbersome: we must load CSV tables and create a model and cube by hand, and
> it is hard to use the Kylin sample cube. So I want to add a CSV source that
> uses the Kylin sample data model directly when the debug Tomcat starts.
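The local-override step above can be sanity-checked before launching DebugTomcat. A minimal sketch, assuming only the property keys listed in the steps; the LocalProfileCheck class and its helper are hypothetical, not part of Kylin:

```java
import java.util.Properties;

// Hypothetical helper (not Kylin code): verify that kylin.properties has been
// switched to the local-debug profile described in the steps above.
public class LocalProfileCheck {
    static boolean checkLocalProfile(Properties p) {
        // Local debugging needs a file:// working dir, a local Spark master,
        // and the UT environment flag.
        return p.getProperty("kylin.env.hdfs-working-dir", "").startsWith("file://")
                && "local".equals(p.getProperty("kylin.engine.spark-conf.spark.master"))
                && "UT".equals(p.getProperty("kylin.env"));
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("kylin.env.hdfs-working-dir", "file:///tmp/kylin");
        p.setProperty("kylin.engine.spark-conf.spark.master", "local");
        p.setProperty("kylin.env", "UT");
        System.out.println(checkLocalProfile(p)); // prints "true"
    }
}
```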



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-14 Thread Xiaoxiang Yu (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoxiang Yu updated KYLIN-4625:

Sprint:   (was: Sprint 53)



[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-09 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Sprint: Sprint 53



[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 


[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Attachment: screenshot-1.png



[jira] [Updated] (KYLIN-4625) Debug the code of Kylin on Parquet without hadoop environment

2020-07-08 Thread wangrupeng (Jira)


 [ 
https://issues.apache.org/jira/browse/KYLIN-4625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

wangrupeng updated KYLIN-4625:
--
Description: 
Currently, Kylin on Parquet already supports debugging source code with local 
CSV files, but it's a little bit complex. The steps are as follows:
* Edit the properties in 
$KYLIN_SOURCE_DIR/examples/test_case_data/sandbox/kylin.properties for local mode:
   ```log
   kylin.metadata.url=$LOCAL_META_DIR
   kylin.env.zookeeper-is-local=true
   kylin.env.hdfs-working-dir=file:///path/to/local/dir
   kylin.engine.spark-conf.spark.master=local
   kylin.engine.spark-conf.spark.eventLog.dir=/path/to/local/dir
   kylin.env=UT
   ```
* Debug org.apache.kylin.rest.DebugTomcat in IDEA and add the VM option 
"-Dspark.local=true".
!image-2020-07-08-17-41-35-954.png! 
* Load a CSV data source by clicking "Data Source -> Load CSV File as Table" on 
the "Model" page, then set the schema for your table and click "Submit" to save.
 !image-2020-07-08-17-42-09-603.png! 

Most of the time when debugging, we just want to build and query a cube quickly. 
But the current way is cumbersome: we have to load CSV tables and create the 
model and cube by hand. So I want to add a CSV source that uses the model of the 
Kylin sample data directly when the debug Tomcat starts.
