arvindshmicrosoft commented on a change in pull request #338:
URL: https://github.com/apache/fluo-muchos/pull/338#discussion_r459842371
##########
File path: ansible/docker.yml
##########
@@ -30,13 +31,15 @@
register: swarm_status
changed_when: "'active' not in swarm_status.stdout_lines"
- name: initialize swarm
- shell: >
+ shell: |
+ docker swarm leave --force || echo "leaving swarm for multiple setups"
Review comment:
Could you please clarify the intent of having the `docker swarm leave
--force` here?
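If the intent is to make re-runs idempotent, one alternative (a sketch only, assuming the `swarm_status` register from the earlier task in this playbook) would be to guard the forced leave with a condition instead of masking its failure with `|| echo`:

```yaml
# Sketch: reuse the swarm_status register from the preceding task so the
# forced leave runs only when this node is already part of a swarm.
- name: leave existing swarm before re-initializing
  command: docker swarm leave --force
  when: "'active' in swarm_status.stdout"
```

That way the task also reports "changed" accurately, since it only runs when a swarm actually exists.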
##########
File path: ansible/roles/elasticsearch/tasks/main.yml
##########
@@ -0,0 +1,83 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+#
+# Installing Elasticsearch
+#
+
+# Add the Elasticsearch yum repo.
+- name: "download elasticsearch rpm"
+ get_url:
+ args:
+ url: https://artifacts.elastic.co/downloads/elasticsearch/{{
elasticsearch_rpm }}
+ dest: /tmp/{{ elasticsearch_rpm }}
+ checksum: "{{ elasticsearch_sha512 }}"
Review comment:
it would be better to use `_checksum` instead of `_sha512`, to align
with the other similarly named variables. There are other places where this
variable name would need to be changed; please update those references as
well.
```suggestion
checksum: "{{ elasticsearch_checksum }}"
```
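For illustration, the rename would pair the defaults entry with the task reference roughly like this (a sketch; the placeholder checksum value below is not the real one):

```yaml
# ansible/roles/elasticsearch/defaults/main.yml (sketch)
elasticsearch_checksum: "sha512:<sha512 of the pinned elasticsearch rpm>"

# ansible/roles/elasticsearch/tasks/main.yml (sketch)
- name: "download elasticsearch rpm"
  get_url:
    url: "https://artifacts.elastic.co/downloads/elasticsearch/{{ elasticsearch_rpm }}"
    dest: "/tmp/{{ elasticsearch_rpm }}"
    checksum: "{{ elasticsearch_checksum }}"
    force: no
```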
##########
File path: README.md
##########
@@ -255,6 +257,7 @@ different for your cluster.
* Grafana Metrics and Monitoring -
[http://metrics:3000/](http://metrics:3000/)
* Mesos status - [http://leader1:5050/](http://leader1:5050/) (if
`mesosmaster` configured on leader1)
* Marathon status - [http://leader1:8080/](http://leader1:8080/) (if
`mesosmaster` configured on leader1)
+ * Kibana status - [http;//leader1:5601/](http://leader1:5601) (But Kibana is
configured on all nodes)
Review comment:
Typo: `http;//` should be `http://`.
```suggestion
* Kibana status - [http://leader1:5601/](http://leader1:5601) (But Kibana
is configured on all nodes)
```
##########
File path: ansible/roles/logstash/templates/logstash-simple-2.conf
##########
@@ -0,0 +1,150 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,e
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+input {
+ beats {
+ port => 5044
+ }
+
+ file {
+ path => "/media/ephemeral0/logs/accumulo/*.log"
+ type => "accumulo"
+ start_position => "beginning"
+ }
+
+ file {
+ path => "/media/ephemeral0/logs/hadoop/*.log"
+ type => "hadoop"
+ start_position => "beginning"
+ }
+
+ file {
+ path => "/media/ephemeral0/logs/zookeeper/*.out"
+ type => "zookeeper"
+ start_position => "beginning"
+ }
+}
+
+filter {
+ if [type] == "accumulo" {
+
+ grok {
+
+ #2020-03-30 14:01:06,322 [gc.GarbageCollectWriteAheadLogs] DEBUG: New
tablet servers noticed: [worker3:9997[1000005c5390005],
worker1:9997[1000005c5390004], worker2:9997[1000005c5390006]]
+ match => { "message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className\] %{LOGLEVEL:level}:
%{GREEDYDATA:message}\[%{GREEDYDATA:tservername}:%{GREEDYDATA:tservername}\:%{GREEDYDATA:tservername}\:%{GREEDYDATA:hostname}\]"}
+
+ #2020-06-18 14:27:15,793 [impl.MetricsConfig] INFO : Loaded properties
from hadoop-metrics2-accumulo.properties
+ match => { "message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level}:
%{GREEDYDATA:message}\[%{GREEDYDATA:module}\]"}
+
+ #2020-03-31 04:43:48,553 [gc.GarbageCollectWriteAheadLogs] DEBUG: New
tablet servers noticed: []
+ #2020-04-01 01:56:04,271 [gc.GarbageCollectWriteAheadLogs] DEBUG:
Tablet servers removed: []
+ #2020-03-30 14:00:35,898 [gc.SimpleGarbageCollector] DEBUG: Got GC
ZooKeeper lock
+ #2020-03-30 14:03:39,048 [master.Master] DEBUG: [Root Table]: scan
time 0.00 seconds
+ #2020-03-30 14:03:29,331 [master.Master] DEBUG: Finished gathering
information from 3 of 3 servers in 0.02 seconds
+ match => { "message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level}:
%{GREEDYDATA:message}"}
+
+ #2020-04-01 12:56:22,176 [state.ZooTabletStateStore] DEBUG: Returning
root tablet state:
+r<<@(null,worker1:9997[1000005c5390004],worker1:9997[1000005c5390004])
+ #2020-03-30 14:03:39,048 [master.Master] DEBUG: Root Table location
State: +r<<@(null,worker1:9997[1000005c5390004],worker1:9997[1000005c5390004])
+ match => { "message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level}:
%{GREEDYDATA:message}\(%{GREEDYDATA:tservers}\)"}
+
+ #2020-06-17 14:06:03,898 [sasl.SaslDataTransferClient] INFO : SASL
encryption trust check: localHostTrusted = false, remoteHostTrusted = false
+ #2020-06-17 13:40:55,503 [compress.CodecPool] INFO : Got brand-new
decompressor [.deflate]
+ #2020-06-17 13:40:19,188 [bcfile.Compression] INFO : Trying to load
Lzo codec class: org.apache.hadoop.io.compress.LzoCodec
+ #2020-06-17 13:40:11,052 [lru.LruBlockCacheManager] INFO : Creating
INDEX cache with configuration
org.apache.accumulo.core.file.blockfile.cache.lru.LruBlockCacheConfiguration@22e49d8,
acceptableFactor: 0.85, minFactor: 0.75, singleFactor: 0.25, multiFactor: 0.5,
memoryFactor: 0.25, mapLoadFactor: 0.75, mapConcurrencyLevel: 16,
useEvictionThread: true
+ #2020-03-30 14:01:06,223 [gc.SimpleGarbageCollector] INFO : Number of
successfully deleted data files: 0
+ #2020-06-17 13:40:11,212 [tserver.TabletServer] INFO : Tablet server
starting on 0.0.0.0
+ #2020-06-17 13:40:10,742 [server.ServerUtil] INFO :
tserver.memory.manager =
org.apache.accumulo.server.tabletserver.LargestFirstMemoryManager
+ #2020-06-17 13:40:09,222 [conf.SiteConfiguration] INFO : Found
Accumulo configuration on classpath at
/home/centos/install/accumulo-2.0.0/conf/accumulo.properties
+ #2020-06-24 14:15:58,727 [compact.CompactRange] INFO : No iterators or
compaction strategy
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}] %{LOGLEVEL:level} :
%{GREEDYDATA:message}"}
+
+ #2020-06-24 11:10:00,094 [tracer.TraceServer] INFO : Instance
a7f82873-7c32-4510-8560-665523d9b57f
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level} :
%{GREEDYDATA:instance} %{GREEDYDATA:instanceID}"}
+
+ #2020-06-24 11:09:59,929 [server.ServerUtil] INFO : instance.volumes =
hdfs://leader1:8020/accumulo
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level} :
%{GREEDYDATA:instance} %{URI:path}"}
+
+ #2020-06-24 11:09:55,624 [conf.SiteConfiguration] INFO : Found
Accumulo configuration on classpath at
/home/centos/install/accumulo-2.0.0/conf/accumulo.properties
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level} :
%{GREEDYDATA:message} %{PATH:path}"}
+
+ #2020-06-24 15:06:05,196 [zookeeper.DistributedReadWriteLock] INFO :
Added lock entry 0 userData 620e830c39101580 lockTpye READ
+ #2020-06-24 14:10:58,054 [tableOps.Utils] INFO : table +r
(79bfe74ef98dc6c) locked for read operation: COMPACT
+ #2020-06-24 15:11:06,486 [tableOps.Utils] INFO : namespace +accumulo
(24c7ba560088d55b) locked for read operation: COMPACT
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level} :
%{GREEDYDATA:message} %{WORD:operation}"}
+
+
+ #2020-06-24 11:10:00,900 [security.SecurityOperation] INFO : Granted
table permission READ for user root on the table 1 at the request of user
!SYSTEM
+ #2020-06-24 11:10:00,926 [security.SecurityOperation] INFO : Granted
table permission WRITE for user root on the table 1 at the request of user
!SYSTEM
+ #2020-06-24 11:10:00,935 [security.SecurityOperation] INFO : Granted
table permission BULK_IMPORT for user root on the table 1 at the request of
user !SYSTEM
+ #2020-06-24 11:10:00,939 [security.SecurityOperation] INFO : Granted
table permission ALTER_TABLE for user root on the table 1 at the request of
user !SYSTEM
+ #2020-06-24 11:10:00,955 [security.SecurityOperation] INFO : Granted
table permission GRANT for user root on the table 1 at the request of user
!SYSTEM
+ #2020-06-24 11:10:00,962 [security.SecurityOperation] INFO : Granted
table permission DROP_TABLE for user root on the table 1 at the request of user
!SYSTEM
+ #2020-06-24 11:10:00,965 [security.SecurityOperation] INFO : Granted
table permission GET_SUMMARIES for user root on the table 1 at the request of
user !SYSTEM
+ #2020-06-24 11:10:00,989 [zookeeper.DistributedReadWriteLock] INFO :
Added lock entry 0 userData 0665dc60a64ce2cd lockType WRITE
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:module}\.%{GREEDYDATA:className}\] %{LOGLEVEL:level} :
%{GREEDYDATA:message}%{WORD:operation}"}
+
+
+
+
+
+ }
+ }
+ else if [type] == "hadoop"{
+ grok {
+ #2020-03-30 15:01:17,801 INFO org.apache.hadoop.hdfs.StateChange:
DIR* completeFile: /accumulo/tables/+r/root_tablet/F000003o.rf_tmp is closed by
DFSClient_NONMAPREDUCE_69506301_11
+ match => { "message" => "%{TIMESTAMP_ISO8601}%{SPACE}%{WORD:log
level}%{GREEDYDATA:hostname}%{GREEDYDATA:LINK}:%{GREEDYDATA:state}%{LOGLEVEL:level}:
%{GREEDYDATA:message}"}
+ match => { "message" => "%{TIMESTAMP_ISO8601}%{SPACE}%{WORD:log
level}%{GREEDYDATA:ip}%{GREEDYDATA:LINK}:%{GREEDYDATA:state}%{LOGLEVEL:level}:
%{GREEDYDATA:message}"}
+
+ #2020-06-17 13:39:10,434 INFO
org.apache.hadoop.yarn.logaggregation.AggregatedLogDeletionService: aggregated
log deletion finished.
+ #2020-06-17 13:39:06,483 INFO
org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer: registered UNIX signal
handlers for [TERM, HUP, INT]
+ #2020-06-17 13:39:07,064 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: Loaded properties from
hadoop-metrics2.properties
+ #2020-06-17 13:39:07,147 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled Metric snapshot
period at 10 second(s).
+ #2020-06-17 13:39:07,703 INFO
org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system
[hdfs://leader1:8020]
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
%{LOGLEVEL:level} %{GREEDYDATA:className}: %{GREEDYDATA:message}"}
+ }
+}
+
+ else if [type] == "zookeeper" {
+
+ grok {
+ #2020-06-15 17:15:15,275 [myid:] - INFO
[NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted
socket connection from /172.31.66.109:55402
+ match => { "message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:ID}:]\ - %{LOGLEVEL:level} \[%{GREEDYDATA:location}\]\ -
%{GREEDYDATA:message}" }
+
+ #2020-06-16 14:34:20,639 [myid:] - INFO
[main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
+ #2020-06-16 14:34:20,623 [myid:] - INFO [main:Environment@100] -
Server environment:java.compiler=<NA>
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:ID}:]\ - %{LOGLEVEL:level}
\[%{GREEDYDATA:method}:%{GREEDYDATA:className}\]\ \- %{GREEDYDATA:message}"}
+
+ #2020-06-16 18:41:28,734 [myid:] - INFO [ProcessThread(sid:0
cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when
processing sessionid:0x1000007cddd000b type:create cxid:0xb7ef zxid:0x164a
txntype:-1 reqpath:n/a Error
Path:/accumulo/4d65522e-62cd-4540-9270-2272a3db2f85/masters/tick
Error:KeeperErrorCode = NodeExists for
/accumulo/4d65522e-62cd-4540-9270-2272a3db2f85/masters/tick
+ #2020-06-16 14:34:20,622 [myid:] - INFO [main:Environment@100] -
Server
environment:java.class.path=/home/centos/install/zookeeper-3.4.14/bin/../zookeeper-server/target/classes:/home/centos/install/zookeeper-3.4.14/bin/../build/classes:/home/centos/install/zookeeper-3.4.14/bin/../zookeeper-server/target/lib/*.jar:/home/centos/install/zookeeper-3.4.14/bin/../build/lib/*.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/slf4j-log4j12-1.7.25.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/slf4j-api-1.7.25.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/netty-3.10.6.Final.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/log4j-1.2.17.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/jline-0.9.94.jar:/home/centos/install/zookeeper-3.4.14/bin/../lib/audience-annotations-0.5.0.jar:/home/centos/install/zookeeper-3.4.14/bin/../zookeeper-3.4.14.jar:/home/centos/install/zookeeper-3.4.14/bin/../zookeeper-server/src/main/resources/lib/*.jar:/home/centos/install/zookee
per-3.4.14/bin/../conf:
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:ID}:]\ - %{LOGLEVEL:level}
\[%{GREEDYDATA:method}:%{GREEDYDATA:className}\]\ \-
%{GREEDYDATA:message}\:%{PATH}"}
+
+ #2020-06-17 13:37:17,041 [myid:] - INFO [main:ZooKeeperServerMain@98]
- Starting server
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:ID}:]\ - %{LOGLEVEL:level}
\[%{GREEDYDATA:method}:%{GREEDYDATA:className}\] - %{GREEDYDATA:message}"}
+
+ match => {"message" => "%{TIMESTAMP_ISO8601:TIMESTAMP}
\[%{GREEDYDATA:ID}:]\ - %{LOGLEVEL:level}
\[%{GREEDYDATA:method}:%{GREEDYDATA:className}\]\ \- %{GREEDYDATA:message}\
/%{IPORHOST}"}
+
+
+
+ }
+
+ }
+ }
+output {
+ elasticsearch {
+ hosts => ["http://leader1:9200"]
Review comment:
Please see my other comment about not hard-coding hostnames. Instead use
a reference to the appropriate host from the Ansible host inventory.
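For example, something along these lines (a sketch, assuming the inventory group is named `nodes`, as in the other suggestion in this review):

```
output {
  elasticsearch {
    hosts => ["http://{{ groups['nodes'][0] }}:9200"]
  }
}
```

Since this file lives under `templates/`, a Jinja2 expression like the above would be rendered by the Ansible `template` module at deploy time.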
##########
File path: ansible/roles/logstash/defaults/main.yml
##########
@@ -0,0 +1,19 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+logstash_rpm: logstash-oss-7.8.0.rpm
+logstash_sha512:
"sha512:6640d44c13b6d9243c07ba8fcd25aa9c81f95bcaeefed1d3462c880a64180fce94a966d6c11bf6ebec39c4ab31dec1ec53ff5a6fe3a5e8fd41583db320f9cb3b"
Review comment:
```suggestion
logstash_checksum:
"sha512:6640d44c13b6d9243c07ba8fcd25aa9c81f95bcaeefed1d3462c880a64180fce94a966d6c11bf6ebec39c4ab31dec1ec53ff5a6fe3a5e8fd41583db320f9cb3b"
```
##########
File path: ansible/roles/logstash/templates/logstash-simple-2.conf
##########
@@ -0,0 +1,150 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,e
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+input {
+ beats {
+ port => 5044
+ }
+
+ file {
+ path => "/media/ephemeral0/logs/accumulo/*.log"
Review comment:
These paths should reference the `worker_data_dirs` variable instead
of being hard-coded.
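A sketch of what that could look like in the template (assuming `worker_data_dirs` is a list of base data directories, with logs under `<dir>/logs/...` as elsewhere in Muchos):

```
file {
  path => "{{ worker_data_dirs[0] }}/logs/accumulo/*.log"
  type => "accumulo"
  start_position => "beginning"
}
```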
##########
File path: ansible/roles/filebeat/defaults/main.yml
##########
@@ -0,0 +1,21 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,e
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+filebeat_rpm: filebeat-7.8.0-x86_64.rpm
+filebeat_sha512:
"sha512:d3483622cc55f8e0e5a3198bb179bd6e74d66a50a6b6d8df82a3e22eae9b345886505fce9439003b8ab57b3cf70bcfbcfaec40ff3fbd41ff385f1fb9ae6a34e0"
Review comment:
Similar to previous suggestion; change `_sha512` naming suffixes to
`_checksum` in this declaration, and also in related references.
```suggestion
filebeat_checksum:
"sha512:d3483622cc55f8e0e5a3198bb179bd6e74d66a50a6b6d8df82a3e22eae9b345886505fce9439003b8ab57b3cf70bcfbcfaec40ff3fbd41ff385f1fb9ae6a34e0"
```
##########
File path: ansible/roles/elasticsearch/tasks/main.yml
##########
@@ -0,0 +1,83 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+#
+# Installing Elasticsearch
+#
+
+# Add the Elasticsearch yum repo.
+- name: "download elasticsearch rpm"
+ get_url:
+ args:
+ url: https://artifacts.elastic.co/downloads/elasticsearch/{{
elasticsearch_rpm }}
+ dest: /tmp/{{ elasticsearch_rpm }}
+ checksum: "{{ elasticsearch_sha512 }}"
+ force: no
+
+# Install elasticsearch
+- name: "ensure elasticsearch is installed"
+ become: true
+ yum:
+ name: /tmp/{{ elasticsearch_rpm }}
+ state: present
+
+# Update Elasticsearch config file to allow access (to secure Elasticsearch,
bind to 'localhost').
+- name: Updating the config file to allow outside access
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "network.host:"
+ line: "network.host: 0.0.0.0"
+ become: true
+
+# Update Elasticsearch port in config file
+- name: Updating the port in config file
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "http.port:"
+ line: "http.port: 9200"
+ become: true
+
+# Update Discovery Settings in config file
+- name: Update the Discovery Seed Host
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "discovery.seed_hosts:"
+ line: 'discovery.seed_hosts: ["leader1"]'
Review comment:
Wondering if this can be parameterized to the first node by using
something like `"{{ groups['nodes'][0] }}"` ? The hard-coded reference to
`leader1` will break when different hostnames are used for the nodes in the
cluster.
```suggestion
line: "discovery.seed_hosts: [\"{{ groups['nodes'][0] }}\"]"
```
##########
File path: ansible/roles/filebeat/templates/filebeat.reference.yml
##########
@@ -0,0 +1,2071 @@
+#
Review comment:
Is this file used for runtime configuration, or is it just provided as
a reference (as the name suggests)? I am not familiar with Filebeat, so I'm
trying to gauge how deeply this file needs to be reviewed (or not).
##########
File path: ansible/roles/kibana/defaults/main.yml
##########
@@ -0,0 +1,19 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+kibana_rpm: kibana-oss-7.8.0-x86_64.rpm
+kibana_sha512:
"sha512:2d86268fb951f91a00a981b121e8fa8e77d67671400c180ef81d74d135c24838ea44faad69d76223f0bd8e86ca4ba5e893a75807e84b43287e0a3c323f1fd837"
Review comment:
Similar to previous suggestions on renaming these kinds of variables.
```suggestion
kibana_checksum:
"sha512:2d86268fb951f91a00a981b121e8fa8e77d67671400c180ef81d74d135c24838ea44faad69d76223f0bd8e86ca4ba5e893a75807e84b43287e0a3c323f1fd837"
```
##########
File path: lib/muchos/config.py
##########
@@ -0,0 +1,647 @@
+#
Review comment:
We no longer use this file; it was replaced by the various files under
lib/muchos/config/. It should be deleted and should not be part of your PR.
##########
File path: ansible/elk.yml
##########
@@ -0,0 +1,10 @@
+---
Review comment:
Please add the license header.
##########
File path: ansible/roles/elasticsearch/tasks/main.yml
##########
@@ -0,0 +1,83 @@
+---
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+#
+# Installing Elasticsearch
+#
+
+# Add the Elasticsearch yum repo.
+- name: "download elasticsearch rpm"
+ get_url:
+ args:
+ url: https://artifacts.elastic.co/downloads/elasticsearch/{{
elasticsearch_rpm }}
+ dest: /tmp/{{ elasticsearch_rpm }}
+ checksum: "{{ elasticsearch_sha512 }}"
+ force: no
+
+# Install elasticsearch
+- name: "ensure elasticsearch is installed"
+ become: true
+ yum:
+ name: /tmp/{{ elasticsearch_rpm }}
+ state: present
+
+# Update Elasticsearch config file to allow access (to secure Elasticsearch,
bind to 'localhost').
+- name: Updating the config file to allow outside access
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "network.host:"
+ line: "network.host: 0.0.0.0"
+ become: true
+
+# Update Elasticsearch port in config file
+- name: Updating the port in config file
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "http.port:"
+ line: "http.port: 9200"
+ become: true
+
+# Update Discovery Settings in config file
+- name: Update the Discovery Seed Host
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "discovery.seed_hosts:"
+ line: 'discovery.seed_hosts: ["leader1"]'
+ become: true
+
+# Update Discovery Settings in config file
+- name: Update the Discovery Seed Host
+ lineinfile:
+ destfile: /etc/elasticsearch/elasticsearch.yml
+ regexp: "cluster.initial_master_nodes:"
+ line: 'cluster.initial_master_nodes: ["leader1"]'
Review comment:
See previous suggestion to avoid hard-coding this hostname.
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]