[jira] [Resolved] (GRIFFIN-185) [UI] download miss records

2018-08-21 Thread Lionel Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/GRIFFIN-185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lionel Liu resolved GRIFFIN-185.

   Resolution: Done
Fix Version/s: 1.0.0-incubating

Tested this function in the docker image griffin_spark2:0.2.1.

> [UI] download miss records
> --
>
> Key: GRIFFIN-185
> URL: https://issues.apache.org/jira/browse/GRIFFIN-185
> Project: Griffin (Incubating)
>  Issue Type: New Feature
>Reporter: Juan Li
>Assignee: Juan Li
>Priority: Major
> Fix For: 1.0.0-incubating
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (GRIFFIN-188) Docker dev question

2018-08-21 Thread Lionel Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/GRIFFIN-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16588281#comment-16588281
 ] 

Lionel Liu edited comment on GRIFFIN-188 at 8/22/18 2:19 AM:
-

Hi [~djkooks], it seems like you're using the docker containers as griffin's 
dependent environment, and running GriffinWebApplication locally or via your 
IDE, am I right?

1. In 'service/src/main/resources/application.properties' you set: 

```

spring.datasource.url=jdbc:postgresql://192.168.99.100:5432/quartz?autoReconnect=true&useSSL=false

```

192.168.99.100 should be your docker host IP address. Since you cannot access 
the docker container directly, we've mapped port 5432 of the docker container 
to port 35432 of the docker host in the docker-compose.yml file. Thus you need 
to set it like this:

```

spring.datasource.url=jdbc:postgresql://192.168.99.100:35432/quartz?autoReconnect=true&useSSL=false

```
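The port mapping described above can be sketched as a docker-compose.yml fragment. This is a minimal illustration only; the service name and image are assumptions, not the project's actual compose file:

```yaml
# Minimal sketch of the host-port mapping described above.
# Service name and image are illustrative assumptions.
services:
  postgres:
    image: postgres
    ports:
      - "35432:5432"   # host port 35432 -> container port 5432
```

With this mapping, anything running on the docker host (such as GriffinWebApplication in your IDE) reaches the container's PostgreSQL through host port 35432.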

 

2. I've noticed that you're running the code of the master branch. Because 
we've modified the JSON format of the measure module recently, the docker 
image `bhlx3lyx7/griffin_spark2:0.2.0` is out of date. We've updated the 
docker image in the past few days; you can pull the new docker image 
`bhlx3lyx7/griffin_spark2:0.2.1`, and modify the version number in the 
docker-compose.yml you're using as well.
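The version bump can be done with a one-liner, assuming the compose file is named docker-compose.yml and sits in the current directory:

```shell
# Bump the image tag in docker-compose.yml (file name and path assumed).
sed -i 's|bhlx3lyx7/griffin_spark2:0.2.0|bhlx3lyx7/griffin_spark2:0.2.1|g' docker-compose.yml
# Verify the change took effect.
grep 'griffin_spark2' docker-compose.yml
```

Afterwards, `docker pull bhlx3lyx7/griffin_spark2:0.2.1` fetches the updated image before restarting the containers.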

We'll also update the document later.

 

Hope this helps you, thanks.



> Docker dev question
> ---
>
> Key: GRIFFIN-188
> URL: https://issues.apache.org/jira/browse/GRIFFIN-188
> Project: Griffin (Incubating)
>  Issue Type: Task
>Reporter: Kwang-in (Dennis) JUNG
>Assignee: Lionel Liu
>Priority: Trivial
>
> Hello,
> I'm following guide in `environment for dev`, and finished docker containers 
> setup(API goes well via postman).
> Now, I setup the properties value and run GriffinWebApplication, but it 
> failed:
> ```
> 2018-08-21 14:45:12.385 INFO 7667 --- [ main] o.a.g.c.c.EnvConfig : {
>  "spark" : {
>  "log.level" : "WARN",
>  "checkpoint.dir" : "hdfs:///griffin/checkpoint/${JOB_NAME}",
>  "init.clear" : true,
>  "batch.interval" : "1m",
>  "process.interval" : "5m",
>  "config" : {
>  "spark.default.parallelism" : 4,
>  "spark.task.maxFailures" : 5,
>  "spark.streaming.kafkaMaxRatePerPartition" : 1000,
>  "spark.streaming.concurrentJobs" : 4,
>  "spark.yarn.maxAppAttempts" : 5,
>  "spark.yarn.am.attemptFailuresValidityInterval" : "1h",
>  "spark.yarn.max.executor.failures" : 120,
>  "spark.yarn.executor.failuresValidityInterval" : "1h",
>  "spark.hadoop.fs.hdfs.impl.disable.cache" : true
>  }
>  },
>  "sinks" : [ {
>  "type" : "CONSOLE",
>  "config" : {
>  "max.log.lines" : 100
>  }
>  }, {
>  "type" : "HDFS",
>  "config" : {
>  "path" : "hdfs:///griffin/persist",
>  "max.persist.lines" : 1,
>  "max.lines.per.file" : 1
>  }
>  }, {
>  "type" : "ELASTICSEARCH",
>  "config" : {
>  "method" : "post",
>  "api" : "http://es:9200/griffin/accuracy"
>  }
>  } ],
>  "griffin.checkpoint" : [ {
>  "type" : "zk",
>  "config" : {
>  "hosts" : "zk:2181",
>  "namespace" : "griffin/infocache",
>  "lock.path" : "lock",
>  "mode" : "persist",
>  "init.clear" : false,
>  "close.clear" : false
>  }
>  } ]
> }
> 2018-08-21 14:45:12.387 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.396 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.397 INFO 7667 --- [ main] o.s.b.f.c.PropertiesFactoryBean 
> : Loading properties file from class path resource [quartz.properties]
> 2018-08-21 

[jira] [Commented] (GRIFFIN-188) Docker dev question

2018-08-21 Thread Lionel Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/GRIFFIN-188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16588281#comment-16588281
 ] 

Lionel Liu commented on GRIFFIN-188:


Hi [~djkooks], it seems like you're using the docker containers as griffin's 
dependent environment, and running GriffinWebApplication locally or via your 
IDE, am I right?

1. In 'service/src/main/resources/application.properties' you set: 

```

spring.datasource.url=jdbc:postgresql://192.168.99.100:5432/quartz?autoReconnect=true&useSSL=false

```

192.168.99.100 should be your docker host IP address. Since you cannot access 
the docker container directly, we've mapped port 5432 of the docker container 
to port 35432 of the docker host in the docker-compose.yml file. Thus you need 
to set it like this:

```

spring.datasource.url=jdbc:postgresql://192.168.99.100:35432/quartz?autoReconnect=true&useSSL=false

```

 

2. I've noticed that you're running the code of the master branch. Because 
we've modified the JSON format of the measure module recently, the docker 
image `bhlx3lyx7/griffin_spark2:0.2.0` is out of date. We've updated the 
docker image in the past few days; you can pull the new docker image 
`bhlx3lyx7/griffin_spark2:0.2.1`, and modify the version number in the 
docker-compose.yml you're using as well.

 

Hope this helps you, thanks.

> Docker dev question
> ---
>
> Key: GRIFFIN-188
> URL: https://issues.apache.org/jira/browse/GRIFFIN-188
> Project: Griffin (Incubating)
>  Issue Type: Task
>Reporter: Kwang-in (Dennis) JUNG
>Assignee: Lionel Liu
>Priority: Trivial
>
> Hello,
> I'm following guide in `environment for dev`, and finished docker containers 
> setup(API goes well via postman).
> Now, I setup the properties value and run GriffinWebApplication, but it 
> failed:
> ```
> 2018-08-21 14:45:12.385 INFO 7667 --- [ main] o.a.g.c.c.EnvConfig : {
>  "spark" : {
>  "log.level" : "WARN",
>  "checkpoint.dir" : "hdfs:///griffin/checkpoint/${JOB_NAME}",
>  "init.clear" : true,
>  "batch.interval" : "1m",
>  "process.interval" : "5m",
>  "config" : {
>  "spark.default.parallelism" : 4,
>  "spark.task.maxFailures" : 5,
>  "spark.streaming.kafkaMaxRatePerPartition" : 1000,
>  "spark.streaming.concurrentJobs" : 4,
>  "spark.yarn.maxAppAttempts" : 5,
>  "spark.yarn.am.attemptFailuresValidityInterval" : "1h",
>  "spark.yarn.max.executor.failures" : 120,
>  "spark.yarn.executor.failuresValidityInterval" : "1h",
>  "spark.hadoop.fs.hdfs.impl.disable.cache" : true
>  }
>  },
>  "sinks" : [ {
>  "type" : "CONSOLE",
>  "config" : {
>  "max.log.lines" : 100
>  }
>  }, {
>  "type" : "HDFS",
>  "config" : {
>  "path" : "hdfs:///griffin/persist",
>  "max.persist.lines" : 1,
>  "max.lines.per.file" : 1
>  }
>  }, {
>  "type" : "ELASTICSEARCH",
>  "config" : {
>  "method" : "post",
>  "api" : "http://es:9200/griffin/accuracy"
>  }
>  } ],
>  "griffin.checkpoint" : [ {
>  "type" : "zk",
>  "config" : {
>  "hosts" : "zk:2181",
>  "namespace" : "griffin/infocache",
>  "lock.path" : "lock",
>  "mode" : "persist",
>  "init.clear" : false,
>  "close.clear" : false
>  }
>  } ]
> }
> 2018-08-21 14:45:12.387 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.396 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.397 INFO 7667 --- [ main] o.s.b.f.c.PropertiesFactoryBean 
> : Loading properties file from class path resource [quartz.properties]
> 2018-08-21 14:45:12.400 INFO 7667 --- [ main] o.a.g.c.u.PropertiesUtil : Read 
> properties successfully from /quartz.properties.
> 2018-08-21 14:45:12.516 INFO 7667 --- [ main] o.q.i.StdSchedulerFactory : 
> Using default implementation for ThreadExecutor
> 2018-08-21 14:45:12.605 INFO 7667 --- [ main] o.q.c.SchedulerSignalerImpl : 
> Initialized Scheduler Signaller of type: class 
> org.quartz.core.SchedulerSignalerImpl
> 2018-08-21 14:45:12.605 INFO 7667 --- [ main] o.q.c.QuartzScheduler : Quartz 
> Scheduler v.2.2.2 created.
> 2018-08-21 14:45:22.613 INFO 7667 --- [ main] o.s.s.q.LocalDataSourceJobStore 
> : Could not detect database type. Assuming locks can be taken.
> 2018-08-21 14:45:22.613 INFO 7667 --- [ main] o.s.s.q.LocalDataSourceJobStore 
> : Using db table-based data access locking (synchronization).
> Aug 21, 2018 2:45:22 PM org.apache.tomcat.jdbc.pool.ConnectionPool init
> SEVERE: Unable to create initial connections of pool.
> org.postgresql.util.PSQLException: The connection attempt failed.
>  at 
> org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
>  at 
> org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
>  at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:215)
>  at 

[jira] [Assigned] (GRIFFIN-188) Docker dev question

2018-08-21 Thread Lionel Liu (JIRA)


 [ 
https://issues.apache.org/jira/browse/GRIFFIN-188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lionel Liu reassigned GRIFFIN-188:
--

Assignee: Lionel Liu

> Docker dev question
> ---
>
> Key: GRIFFIN-188
> URL: https://issues.apache.org/jira/browse/GRIFFIN-188
> Project: Griffin (Incubating)
>  Issue Type: Task
>Reporter: Kwang-in (Dennis) JUNG
>Assignee: Lionel Liu
>Priority: Trivial
>
> Hello,
> I'm following guide in `environment for dev`, and finished docker containers 
> setup(API goes well via postman).
> Now, I setup the properties value and run GriffinWebApplication, but it 
> failed:
> ```
> 2018-08-21 14:45:12.385 INFO 7667 --- [ main] o.a.g.c.c.EnvConfig : {
>  "spark" : {
>  "log.level" : "WARN",
>  "checkpoint.dir" : "hdfs:///griffin/checkpoint/${JOB_NAME}",
>  "init.clear" : true,
>  "batch.interval" : "1m",
>  "process.interval" : "5m",
>  "config" : {
>  "spark.default.parallelism" : 4,
>  "spark.task.maxFailures" : 5,
>  "spark.streaming.kafkaMaxRatePerPartition" : 1000,
>  "spark.streaming.concurrentJobs" : 4,
>  "spark.yarn.maxAppAttempts" : 5,
>  "spark.yarn.am.attemptFailuresValidityInterval" : "1h",
>  "spark.yarn.max.executor.failures" : 120,
>  "spark.yarn.executor.failuresValidityInterval" : "1h",
>  "spark.hadoop.fs.hdfs.impl.disable.cache" : true
>  }
>  },
>  "sinks" : [ {
>  "type" : "CONSOLE",
>  "config" : {
>  "max.log.lines" : 100
>  }
>  }, {
>  "type" : "HDFS",
>  "config" : {
>  "path" : "hdfs:///griffin/persist",
>  "max.persist.lines" : 1,
>  "max.lines.per.file" : 1
>  }
>  }, {
>  "type" : "ELASTICSEARCH",
>  "config" : {
>  "method" : "post",
>  "api" : "http://es:9200/griffin/accuracy"
>  }
>  } ],
>  "griffin.checkpoint" : [ {
>  "type" : "zk",
>  "config" : {
>  "hosts" : "zk:2181",
>  "namespace" : "griffin/infocache",
>  "lock.path" : "lock",
>  "mode" : "persist",
>  "init.clear" : false,
>  "close.clear" : false
>  }
>  } ]
> }
> 2018-08-21 14:45:12.387 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.396 INFO 7667 --- [ main] o.a.g.c.u.FileUtil : Location 
> is empty. Read from default path.
> 2018-08-21 14:45:12.397 INFO 7667 --- [ main] o.s.b.f.c.PropertiesFactoryBean 
> : Loading properties file from class path resource [quartz.properties]
> 2018-08-21 14:45:12.400 INFO 7667 --- [ main] o.a.g.c.u.PropertiesUtil : Read 
> properties successfully from /quartz.properties.
> 2018-08-21 14:45:12.516 INFO 7667 --- [ main] o.q.i.StdSchedulerFactory : 
> Using default implementation for ThreadExecutor
> 2018-08-21 14:45:12.605 INFO 7667 --- [ main] o.q.c.SchedulerSignalerImpl : 
> Initialized Scheduler Signaller of type: class 
> org.quartz.core.SchedulerSignalerImpl
> 2018-08-21 14:45:12.605 INFO 7667 --- [ main] o.q.c.QuartzScheduler : Quartz 
> Scheduler v.2.2.2 created.
> 2018-08-21 14:45:22.613 INFO 7667 --- [ main] o.s.s.q.LocalDataSourceJobStore 
> : Could not detect database type. Assuming locks can be taken.
> 2018-08-21 14:45:22.613 INFO 7667 --- [ main] o.s.s.q.LocalDataSourceJobStore 
> : Using db table-based data access locking (synchronization).
> Aug 21, 2018 2:45:22 PM org.apache.tomcat.jdbc.pool.ConnectionPool init
> SEVERE: Unable to create initial connections of pool.
> org.postgresql.util.PSQLException: The connection attempt failed.
>  at 
> org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:272)
>  at 
> org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:51)
>  at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:215)
>  at org.postgresql.Driver.makeConnection(Driver.java:404)
>  at org.postgresql.Driver.connect(Driver.java:272)
>  at 
> org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:310)
>  at 
> org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:203)
>  at 
> org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:732)
>  at 
> org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:664)
>  at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:479)
>  at org.apache.tomcat.jdbc.pool.ConnectionPool.<init>(ConnectionPool.java:154)
>  at 
> org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:118)
>  at 
> org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:107)
>  at 
> org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:131)
>  at 
> org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:111)
>  at 
> org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:77)
>  at 
> org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:326)
>  at 
> 

[GitHub] incubator-griffin pull request #384: Initialize java code checks rules for g...

2018-08-21 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/incubator-griffin/pull/384


---


[GitHub] incubator-griffin issue #384: Initialize java code checks rules for griffin

2018-08-21 Thread toyboxman
Github user toyboxman commented on the issue:

https://github.com/apache/incubator-griffin/pull/384
  
@guoyuepeng 

I appended a guide section about how to enable style checks in Maven and IDEA. 
I think we can add Scala and TypeScript code style rules here as well.

thanks


---