Hi 逸秋

Could you use HBase 0.98 instead?

HBase 1.x introduced a lot of API changes that Eagle does not support yet.

If you still have trouble, please paste your error log.
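On your question 3: collectd's write_kafka plugin (with Format "JSON") emits an array of value-list objects, which is probably not the shape the system-metrics parser expects, and that mismatch would explain the Storm parse error. Below is a minimal sketch of what such a message looks like and how it flattens into metrics; the sample payload is illustrative, not captured from your cluster, and the metric-name joining is my assumption, not Eagle's actual scheme:

```python
import json

# Shape of a message produced by collectd's write_kafka plugin with
# Format "JSON" (illustrative sample, not captured from a real cluster).
raw = ('[{"values":[0.25],"dstypes":["gauge"],"dsnames":["value"],'
       '"time":1494835200.0,"interval":10.0,"host":"datanode01",'
       '"plugin":"memory","plugin_instance":"","type":"memory",'
       '"type_instance":"used"}]')

def parse_collectd_json(message):
    """Flatten one write_kafka JSON message into (metric, host, value) tuples."""
    out = []
    for rec in json.loads(message):
        # Each record carries parallel lists of data-source names and values.
        for name, value in zip(rec["dsnames"], rec["values"]):
            metric = ".".join(p for p in (rec["plugin"], rec["plugin_instance"],
                                          rec["type"], rec["type_instance"], name) if p)
            out.append((metric, rec["host"], value))
    return out

print(parse_collectd_json(raw))
# -> [('memory.memory.used.value', 'datanode01', 0.25)]
```

Comparing a message decoded this way against what the topology actually consumes should show where the parser fails.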



Regards,
Qingwen Zhao | Apache Eagle
Desk: 20998167 | Cell: +86 18818215025






On 5/15/17, 2:57 PM, "丁逸秋" <[email protected]> wrote:

>Hello, I am having trouble using the 0.5.0-SNAPSHOT version and hope to
>get your help.
>The environment is configured as follows:
>System: CentOS 7.0
>JDK: 1.8.121
>Hadoop: 2.7.3
>HBase: 1.3.0
>Storm: 0.9.7
>MySQL: 5.7.16
>ZooKeeper: 3.4.9
>
>Pulled master on 2017-05-03 and compiled it on CentOS 7.0.
>
>My eagle.conf is as follows:
>service {
> env = "asdf"
> host = "datanode01"
> port = 9090
> username = "admin"
> password = "secret"
> readTimeOutSeconds = 60
> context = "/rest"
> timezone = "UTC"
>}
>
>zookeeper {
> zkQuorum = "datanode01:2181,datanode02:2181,datanode03:2181"
> zkSessionTimeoutMs : 15000
> zkRetryTimes : 3
> zkRetryInterval : 20000
>}
>
>storage {
> type = "hbase"
> hbase {
>  autoCreateTable = true
>  zookeeperQuorum = "datanode01,datanode02,datanode03"
>  zookeeperPropertyClientPort = 2181
>  zookeeperZnodeParent = "/hbase"
>  tableNamePrefixedWithEnvironment = false
>  coprocessorEnabled = false
> }
>}
>metadata {
> store = org.apache.eagle.metadata.service.memory.MemoryMetadataStore
> jdbc {
>  username = "eagle"
>  password = "eagle123456"
>  driverClassName = com.mysql.jdbc.Driver
>  database = "eagles"
>  connection = "jdbc:mysql://datanode01:3306/eagles"
> }
>}
>application {
> stream {
>  provider = org.apache.eagle.app.messaging.KafkaStreamProvider
> }
> storm {
>  nimbusHost = "datanode01"
>  nimbusThriftPort = 6627
> }
> updateStatus: {
>  initialDelay: 10
>  period: 10
> }
> healthCheck {
>  initialDelay = 30
>  period = 60
>  publisher {
>   publisherImpl = org.apache.eagle.app.service.impl.ApplicationHealthCheckEmailPublisher
>   dailySendHour = 11
>   mail.smtp.host = ""
>   mail.smtp.port = 25
>   mail.smtp.recipients = ""
>   mail.smtp.subject = "Eagle Application Health Check"
>   mail.smtp.template = "HealthCheckTemplate.vm"
>  }
> }
> mailService {
>  mailSmtpServer = "",
>  mailSmtpPort = 25,
>  mailSmtpAuth = "false"
>  //mailSmtpConn = "plaintext",
>  mailSmtpUsername = ""
>  mailSmtpPassword = ""
>  mailSmtpDebug = false
> }
> dailyJobReport {
>  reportHourTime: 1
>  reportPeriodInHour: 12
>  numTopUsers: 10
>  jobOvertimeLimitInHour: 6
>  subject: "Job Report For 12 hours"
>  recipients: ""
>  template: "JobReportTemplate.vm"
> }
> analyzerReport {
>  sender: ""
>  recipients: ""
>  template: "AnalyzerReportTemplate.vm"
>  cc: ""
> }
>}
>coordinator {
> # boltParallelism = 5
> policyDefaultParallelism = 5
> boltLoadUpbound = 0.8
> topologyLoadUpbound = 0.8
> numOfAlertBoltsPerTopology = 5
> policiesPerBolt = 10
> streamsPerBolt = 10
> reuseBoltInStreams = true
> zkConfig {
>  zkQuorum = "datanode01:2181"
>  zkRoot = "/alert"
>  zkSessionTimeoutMs = 10000
>  connectionTimeoutMs = 10000
>  zkRetryTimes = 3
>  zkRetryInterval = 3000
> }
> metadataService {
>  host = "datanode01",
>  port = 9090,
>  context = "/rest"
> }
> metadataDynamicCheck {
>  initDelayMillis = 1000
>  delayMillis = 30000
>  stateClearPeriodMin = 1440
>  stateReservedCapacity = 100
> }
>}
>server.yml: no changes.
>My questions are as follows:
>1. On each startup, the MySQL "eagles" database has no tables.
>2. On each startup, visiting port 9090 asks me to create a Site and then
>re-install the applications, but some HBase tables still have data.
>3. When installing the application "Topology Health Check", the
>configuration is as follows:
>{
>"topology.fetchDataIntervalInSecs": "300",
>"topology.parseThreadPoolSize": "5",
>"topology.message.timeout.secs": "60",
>"topology.numDataFetcherSpout": "1",
>"topology.numEntityPersistBolt": "1",
>"topology.numOfKafkaSinkBolt": "2",
>"topology.rackResolverCls":
>"org.apache.eagle.topology.resolver.impl.DefaultTopologyRackResolver",
>"topology.resolverAPIUrl": "http://namenode01:8088/ws/v1/cluster/nodes",
>"dataSourceConfig.hbase.enabled": "true",
>"dataSourceConfig.hbase.zkQuorum": "datanode01,datanode02,datanode03",
>"dataSourceConfig.hbase.zkZnodeParent": "/hbase",
>"dataSourceConfig.hbase.zkPropertyClientPort": "2181",
>"dataSourceConfig.hdfs.enabled": "true",
>"dataSourceConfig.hdfs.namenodeUrl": "http://namenode01:50070,
>http://namenode02:50070",
>"dataSourceConfig.mr.enabled": "true",
>"dataSourceConfig.mr.rmUrl": "http://namenode01:8088",
>"dataSourceConfig.mr.historyServerUrl": "http://namenode01:19888",
>"dataSourceConfig.system.enabled": "true",
>"dataSourceConfig.system.topic": "topology_system_sandbox",
>"dataSourceConfig.system.zkConnection":
>"datanode01:2181,datanode02:2181,datanode03:2181",
>"dataSourceConfig.system.schemeCls": "storm.kafka.StringScheme",
>"dataSourceConfig.system.dataSendBatchSize": "1",
>"dataSinkConfig.topic": "topology_check_sandbox",
>"dataSinkConfig.brokerList":
>"datanode01:9092,datanode02:9092,datanode03:9092",
>"dataSinkConfig.serializerClass": "kafka.serializer.StringEncoder",
>"dataSinkConfig.keySerializerClass": "kafka.serializer.StringEncoder"
>}
>Then after starting up, on the site I can only see "several nodes alive"
>or "several nodes in good health"; the other data (for example Metrics,
>MemoryUsage, DFSCapacity...) shows NO DATA.
>I see that the "Topology Health Check" settings configure a Kafka topic
>("System Topology Source Topic"), but I do not know where its data should
>be collected from. Following eagle/eagle-external/hadoop_jmx_collectd/, I
>deployed collectd 5.6.2 and used its write_kafka plugin to write to
>Kafka. When I consume the topic "topology_system_sandbox" and look at the
>format of the data, Storm reports an error parsing it.
>
>These questions have troubled me for many days.
>I hope you can help me.
>Thank you so much.
