[ambari] 03/03: AMBARI-25683 Regenerate swagger API docs
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit de371c14fb93f9f3385a9d29b351fbf8cbe41378 Author: Szabolcs Beki AuthorDate: Tue Nov 2 16:59:08 2021 +0100 AMBARI-25683 Regenerate swagger API docs --- ambari-server/docs/api/generated/index.html | 396 ambari-server/docs/api/generated/swagger.json | 424 ++ ambari-server/docs/configuration/index.md | 6 +- 3 files changed, 435 insertions(+), 391 deletions(-) diff --git a/ambari-server/docs/api/generated/index.html b/ambari-server/docs/api/generated/index.html index 690154f..239cfda 100644 --- a/ambari-server/docs/api/generated/index.html +++ b/ambari-server/docs/api/generated/index.html @@ -890,18 +890,18 @@ margin-bottom: 20px; defs.AlertGroup = { "type" : "object", "properties" : { -"default" : { - "type" : "boolean", - "default" : false +"name" : { + "type" : "string" }, -"id" : { +"cluster_id" : { "type" : "integer", "format" : "int64" }, -"name" : { - "type" : "string" +"default" : { + "type" : "boolean", + "default" : false }, -"cluster_id" : { +"id" : { "type" : "integer", "format" : "int64" } @@ -910,19 +910,6 @@ margin-bottom: 20px; defs.AlertTargetInfo = { "type" : "object", "properties" : { -"name" : { - "type" : "string" -}, -"properties" : { - "type" : "object", - "additionalProperties" : { -"type" : "string" - } -}, -"id" : { - "type" : "integer", - "format" : "int64" -}, "enabled" : { "type" : "boolean", "default" : false @@ -948,6 +935,19 @@ margin-bottom: 20px; "global" : { "type" : "boolean", "default" : false +}, +"name" : { + "type" : "string" +}, +"properties" : { + "type" : "object", + "additionalProperties" : { +"type" : "string" + } +}, +"id" : { + "type" : "integer", + "format" : "int64" } } }; @@ -979,10 +979,10 @@ margin-bottom: 20px; "service_name" : { "type" : "string" }, -"artifact_name" : { +"stack_name" : { "type" : "string" }, -"stack_name" : { 
+"artifact_name" : { "type" : "string" } } @@ -1004,45 +1004,45 @@ margin-bottom: 20px; defs.BatchRequestRequest = { "type" : "object", "properties" : { -"type" : { +"RequestBodyInfo" : { + "$ref" : "#/definitions/RequestBodyInfo" +}, +"uri" : { "type" : "string" }, "order_id" : { "type" : "integer", "format" : "int64" }, -"uri" : { +"type" : { "type" : "string" -}, -"RequestBodyInfo" : { - "$ref" : "#/definitions/RequestBodyInfo" } } }; defs.BatchRequestResponse = { "type" : "object", "properties" : { -"request_type" : { +"request_status" : { "type" : "string" }, "request_body" : { "type" : "string" }, -"request_status" : { +"request_uri" : { "type" : "string" }, "return_code" : { "type" : "integer", "format" : "int32" }, +"response_message" : { + "type" : "string" +}, "order_id" : { "type" : "integer", "format" : "int64" }, -"response_message" : { - "type" : "string" -}, -"request_uri" : { +"request_type" : { "type" : "string" }
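Most hunks in this regenerated doc, like the AlertGroup and BatchRequestRequest ones above, only reorder JSON object properties. Property order carries no meaning in JSON Schema / Swagger definitions, which a quick check makes concrete (trimmed, hypothetical versions of the AlertGroup schema, not the full generated one):

```python
import json

# Old and new orderings of the same trimmed schema fragment.
old = json.loads('{"type": "object", "properties": {'
                 '"default": {"type": "boolean"}, '
                 '"id": {"type": "integer"}, '
                 '"name": {"type": "string"}}}')
new = json.loads('{"type": "object", "properties": {'
                 '"name": {"type": "string"}, '
                 '"default": {"type": "boolean"}, '
                 '"id": {"type": "integer"}}}')

# Python dicts compare by content, not insertion order, so a reordered
# schema is equal even though the textual diff looks large.
assert old == new
```

Only the hunks that add or change properties (the 424-line swagger.json growth also includes genuine additions) change the API contract.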
[ambari] branch branch-2.7 updated (0fc5d44 -> de371c1)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 0fc5d44 AMBARI-25695 Pull request tests fail due to Maven 3.8.x blocks http repositories (santal) (#3321) new 4548cb2 Adding new PGP public keys to KEYS file new f86716b AMBARI-25683 Updated pom.xmls with version 2.7.6.0.0 new de371c1 AMBARI-25683 Regenerate swagger API docs The 3 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: KEYS | 56 +++ ambari-admin/pom.xml | 4 +- ambari-agent/pom.xml | 4 +- ambari-funtest/pom.xml | 4 +- ambari-infra/ambari-infra-solr-plugin/pom.xml | 2 +- ambari-metrics/ambari-metrics-assembly/pom.xml | 4 +- ambari-metrics/ambari-metrics-common/pom.xml | 2 +- ambari-metrics/ambari-metrics-flume-sink/pom.xml | 4 +- ambari-metrics/ambari-metrics-grafana/pom.xml | 4 +- ambari-metrics/ambari-metrics-hadoop-sink/pom.xml | 4 +- .../ambari-metrics-host-aggregator/pom.xml | 4 +- .../ambari-metrics-host-monitoring/pom.xml | 4 +- ambari-metrics/ambari-metrics-kafka-sink/pom.xml | 4 +- .../ambari-metrics-storm-sink-legacy/pom.xml | 4 +- ambari-metrics/ambari-metrics-storm-sink/pom.xml | 4 +- .../ambari-metrics-timelineservice/pom.xml | 4 +- ambari-metrics/pom.xml | 2 +- ambari-project/pom.xml | 4 +- ambari-server/docs/api/generated/index.html| 396 +-- ambari-server/docs/api/generated/swagger.json | 424 +++-- ambari-server/docs/configuration/index.md | 6 +- ambari-server/pom.xml | 4 +- ambari-utility/pom.xml | 2 +- ambari-views/examples/calculator-view/pom.xml | 4 +- ambari-views/examples/cluster-view/pom.xml | 6 +- ambari-views/examples/favorite-view/pom.xml| 4 +- ambari-views/examples/hello-servlet-view/pom.xml | 6 +- ambari-views/examples/hello-spring-view/pom.xml| 6 +- 
ambari-views/examples/helloworld-view/pom.xml | 6 +- .../examples/phone-list-upgrade-view/pom.xml | 8 +- ambari-views/examples/phone-list-view/pom.xml | 4 +- ambari-views/examples/pom.xml | 4 +- .../examples/property-validator-view/pom.xml | 6 +- ambari-views/examples/property-view/pom.xml| 6 +- ambari-views/examples/restricted-view/pom.xml | 4 +- ambari-views/examples/simple-view/pom.xml | 6 +- ambari-views/pom.xml | 4 +- ambari-web/pom.xml | 4 +- contrib/ambari-scom/ambari-scom-server/pom.xml | 4 +- contrib/ambari-scom/metrics-sink/pom.xml | 2 +- contrib/ambari-scom/pom.xml| 2 +- contrib/management-packs/microsoft-r_mpack/pom.xml | 2 +- contrib/management-packs/pom.xml | 2 +- contrib/views/ambari-views-package/pom.xml | 4 +- contrib/views/capacity-scheduler/pom.xml | 4 +- contrib/views/commons/pom.xml | 6 +- contrib/views/files/pom.xml| 6 +- contrib/views/pig/pom.xml | 10 +- contrib/views/pom.xml | 8 +- contrib/views/utils/pom.xml| 4 +- contrib/views/wfmanager/pom.xml| 6 +- docs/pom.xml | 58 +-- pom.xml| 2 +- 53 files changed, 624 insertions(+), 524 deletions(-)
[ambari] 02/03: AMBARI-25683 Updated pom.xmls with version 2.7.6.0.0
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit f86716b749511becacec7794a72355f9f8901605 Author: Szabolcs Beki AuthorDate: Mon Nov 1 22:07:58 2021 +0100 AMBARI-25683 Updated pom.xmls with version 2.7.6.0.0 --- ambari-admin/pom.xml | 4 +- ambari-agent/pom.xml | 4 +- ambari-funtest/pom.xml | 4 +- ambari-infra/ambari-infra-solr-plugin/pom.xml | 2 +- ambari-metrics/ambari-metrics-assembly/pom.xml | 4 +- ambari-metrics/ambari-metrics-common/pom.xml | 2 +- ambari-metrics/ambari-metrics-flume-sink/pom.xml | 4 +- ambari-metrics/ambari-metrics-grafana/pom.xml | 4 +- ambari-metrics/ambari-metrics-hadoop-sink/pom.xml | 4 +- .../ambari-metrics-host-aggregator/pom.xml | 4 +- .../ambari-metrics-host-monitoring/pom.xml | 4 +- ambari-metrics/ambari-metrics-kafka-sink/pom.xml | 4 +- .../ambari-metrics-storm-sink-legacy/pom.xml | 4 +- ambari-metrics/ambari-metrics-storm-sink/pom.xml | 4 +- .../ambari-metrics-timelineservice/pom.xml | 4 +- ambari-metrics/pom.xml | 2 +- ambari-project/pom.xml | 4 +- ambari-server/pom.xml | 4 +- ambari-utility/pom.xml | 2 +- ambari-views/examples/calculator-view/pom.xml | 4 +- ambari-views/examples/cluster-view/pom.xml | 6 +-- ambari-views/examples/favorite-view/pom.xml| 4 +- ambari-views/examples/hello-servlet-view/pom.xml | 6 +-- ambari-views/examples/hello-spring-view/pom.xml| 6 +-- ambari-views/examples/helloworld-view/pom.xml | 6 +-- .../examples/phone-list-upgrade-view/pom.xml | 8 +-- ambari-views/examples/phone-list-view/pom.xml | 4 +- ambari-views/examples/pom.xml | 4 +- .../examples/property-validator-view/pom.xml | 6 +-- ambari-views/examples/property-view/pom.xml| 6 +-- ambari-views/examples/restricted-view/pom.xml | 4 +- ambari-views/examples/simple-view/pom.xml | 6 +-- ambari-views/pom.xml | 4 +- ambari-web/pom.xml | 4 +- contrib/ambari-scom/ambari-scom-server/pom.xml | 4 +- 
contrib/ambari-scom/metrics-sink/pom.xml | 2 +- contrib/ambari-scom/pom.xml| 2 +- contrib/management-packs/microsoft-r_mpack/pom.xml | 2 +- contrib/management-packs/pom.xml | 2 +- contrib/views/ambari-views-package/pom.xml | 4 +- contrib/views/capacity-scheduler/pom.xml | 4 +- contrib/views/commons/pom.xml | 6 +-- contrib/views/files/pom.xml| 6 +-- contrib/views/pig/pom.xml | 10 ++-- contrib/views/pom.xml | 8 +-- contrib/views/utils/pom.xml| 4 +- contrib/views/wfmanager/pom.xml| 6 +-- docs/pom.xml | 58 +++--- pom.xml| 2 +- 49 files changed, 133 insertions(+), 133 deletions(-) diff --git a/ambari-admin/pom.xml b/ambari-admin/pom.xml index 9ab45a8..646bec7 100644 --- a/ambari-admin/pom.xml +++ b/ambari-admin/pom.xml @@ -19,7 +19,7 @@ org.apache.ambari ambari-project -2.7.5.0.0 +2.7.6.0.0 ../ambari-project 4.0.0 @@ -27,7 +27,7 @@ ambari-admin jar Ambari Admin View - 2.7.5.0.0 + 2.7.6.0.0 Admin control panel diff --git a/ambari-agent/pom.xml b/ambari-agent/pom.xml index 214aa85..5bcb6f6 100644 --- a/ambari-agent/pom.xml +++ b/ambari-agent/pom.xml @@ -19,13 +19,13 @@ org.apache.ambari ambari-project -2.7.5.0.0 +2.7.6.0.0 ../ambari-project 4.0.0 org.apache.ambari ambari-agent - 2.7.5.0.0 + 2.7.6.0.0 Ambari Agent Ambari Agent diff --git a/ambari-funtest/pom.xml b/ambari-funtest/pom.xml index 8ed4457..36f5611 100644 --- a/ambari-funtest/pom.xml +++ b/ambari-funtest/pom.xml @@ -13,12 +13,12 @@ org.apache.ambari ambari-project -2.7.5.0.0 +2.7.6.0.0 ../ambari-project org.apache.ambari ambari-funtest - 2.7.5.0.0 + 2.7.6.0.0 ${packagingFormat} Ambari Functional Tests Ambari Functional Tests diff --git a/ambari-infra/ambari-infra-solr-plugin/pom.xml b/ambari-infra/ambari-infra-solr-plugin/pom.xml index 6badf57..67f5d1b 100644 --- a/ambari-infra/ambari-infra-solr-plugin/pom.xml +++ b/ambari-infra/ambari-infra-solr-plugin/pom.xml @@ -45,7 +45,7 @@ org.apache.ambari ambari-metrics-common - 2.7.5.0.0 + 2.7.6.0.0 org.apache.solr diff --git a/ambari-metrics/ambari-metrics
[ambari] 01/03: Adding new PGP public keys to KEYS file
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 4548cb29522d1b18d023d0b8bbc69559f1cad9fc Author: Szabolcs Beki AuthorDate: Mon Nov 1 21:02:10 2021 +0100 Adding new PGP public keys to KEYS file --- KEYS | 56 1 file changed, 56 insertions(+) diff --git a/KEYS b/KEYS index bc09fe5..314e21c 100644 --- a/KEYS +++ b/KEYS @@ -718,3 +718,59 @@ WoJhMGUvAlZ9vFtd2mvth9JeikaKHJH+N4oC+cUj2uT+uY8rd6xWaibIiaSOJgld EK0Ns8Tn8JViIhAQGXKjrQ== =thUO -END PGP PUBLIC KEY BLOCK- +pub rsa4096 2021-11-01 [SC] + 3F3834D40E36902AD0F8012A324890C5AFA3C379 +uid [ultimate] Szabolcs Beki +sub rsa4096 2021-11-01 [E] + +-BEGIN PGP PUBLIC KEY BLOCK- + +mQINBGF/83wBEADqScRWW5kB1+R/p0cBZeSHp6VTa1Car5tBZ6aHzzOoxDZup8YP +Vt43Z81nJJjO/IPDSxLmYKpASc5vqK1I7kT+U7vSNLySNkb7sM3Y3td7HIBdm4XL +hqyTtQypCWCZ19e5Z9GNNX6vTsDoau8O0ntqMCBorVsvy2H3CbhTpcMMmiY+qXCf +XbcReXAZK5gH3aD8EtEBGbnJFnjj1wX/8NVJavEE5uCmpRgp1aKYi7Y3F9C8NprP +vUs7S5PyVMSMtyeBNBHYVGLB3iTD7nz/ZM1Wnq0CGpEJnB+7BCbYSSdruGi078y2 +CNKfeudEYxbgZ1dp2YerZR5tUNg4UFhwDfkzjGCIumo6i0sb/nwGFgrlKyD/x42H +hMdt+IO0QdCaPlkfXiw3IoIdewPrOpZq/YFAxGGQzu7KYSF+QymPT+weEBw/Lofe +C5GABXMJrummloXVnc5jsOl++eRI0U9CRua3+XlSOgU0dXnzvyvYK0JEOi6YhCSC +StHKrmEyyOjgqIkjIO7FSnsHgjsDxRHDIMv1K87eRNWGPC/4nlQvCEQHQZoXE9yu +PWgzlG95hw5uDEGek7Hg3jtXanGvtH9HTaQjEoMDKiB0qh3+n/mR3hJEzPDbX8c3 +yr33qTCnivbSdq56PobKNeXtsICafvVRIUQvrx9NFewWIuDVulsKnASW8QARAQAB +tCBTemFib2xjcyBCZWtpIDxzemFiaUBhcGFjaGUub3JnPokCTgQTAQgAOBYhBD84 +NNQONpAq0PgBKjJIkMWvo8N5BQJhf/N8AhsDBQsJCAcCBhUKCQgLAgQWAgMBAh4B +AheAAAoJEDJIkMWvo8N5JwEP/jgmttBBGx3FJJpWor+Iph9eK/zHl8z0aMjNrG1K +z8/L5Jbo6MsNLXUQdI4eXGpEUQ2t+5PizsKuJ4DHbiT9H4haFzvPRKG6pXFjfEoS +7s30Z7JJkmEu1ifDNSzJGmHvGzZJLBzVU0TukKZn/FREewLE2FlhpEfIqIjosgG6 +j5nS54y7p3CzLICpa0gdaJsRum5jcKfLkCgVuP32HwpZvWq46G7sdnMetDMmKmA6 +lMNRqoNCazrU2WXuyAbL1gdFCZ7tWx7hKqxcileQKAafEA3rmHRpbte1/mUCkdUI 
+12jcNEsC2G15F0/kCIBFWWP2HajzSpXj/ZXkTBljIPAS862/rbN4pp01vDMnkCJP +wBeTLAbTY5PhFBlVnlyObk7GV566i2GN3W3xZY7O2u/nOpgUu09mEVP/tdu4Byhk +O4xG9ci7fI8+SqChRcudC5arOwTlQrd08nvrErlCh+y5gW813bfw7vmcFB0x4f5p +omajXR97wH64cx9PU7Ab/zS4Ap18xEwQdmHH0en7Uaq8HmsfMPriSC0wz2dsmX8O +MV4VY3VHySLgI6rbx7deLKChWNBc1hFOmx8Oz7p75PIRODwg4psP3aH2OBFuYyXN +ZqxjMVXiPkNt8jOCJYIkGW4cz34lFAQoSds6dI2OQhTHvXI3/OT54bGlUpT+HoMo +IreguQINBGF/83wBEADI4bJP7vLfkgWkI6sPfMsMo3hDtzx3QGSctqb1I8k38+8z +bjEB6g7ccsJSV47ALA3KbwxH3hWLxe9ENu3ZOF3fWqoCvOXOlKicvwmmwEiLP20b +rY9c38Ixv+7SJBr7D7k/FAQANTwda4OnW+bCF5sJnG0fAM9MlvFC9ma9T75E0EiD +EpPO896KGVD3GrgpjEJBTq+hQ0A4TCoMgI4yoodjELYVuw+1U7sYxAWkIlFy25Pj +aNM5fn5SmdpC8U2F4Flwe4wLsY8zJOELPUfQfse7PnMN3eet4DbUpf2rE05fqM7z +B+6EvPsyvkCBx63mfVD+XNgC+YC+EWG6ll9K70or6QO/p/60e0ynKdfalguqu17c +kOcn/NadFgFj0ijS5Mrv7FSJ+UJo9gzxK1HSdYgncEc0sPg5t90PW0VcPIV1O2KO +mindTf/7cT+fK8cdu0A27d/9W/hc1lsiexEx+eYe10G13zYcOHdDGRQQwbIKqBG+ +L+/EIC7dIARvhOfdJGD+QZHxCktu/dnPkn/XM80m9qgnAxr9MCbR74pq+52Zxg88 +zf3w2uc0+wHJPTbvdoOhnsCrJbU3cl4B1LLmvuMrGNtI7+dgbBzSziV9F9Ytm9O+ +urSxF7iUVsYN12p6UGMfILkpvsFrwM12LP4CLtbgUWRsDSPZakRUQnhOJgdW2QAR +AQABiQI2BBgBCAAgFiEEPzg01A42kCrQ+AEqMkiQxa+jw3kFAmF/83wCGwwACgkQ +MkiQxa+jw3nlmw/9GLY9ee1sRUqemoo/ft0r9irnLb5CF3fKZdZ7fJV9uOc16pSg +3zyChjuFu5Y5zfcn1JFq/koPHM7s+spLanKOTmzwF6fPnld+bNHX49hFfSC4p/Zv +jnWv51Bd/dJS/uMCkzWxnyx1sPA/GnuBOBbGrNr4VcmTo0kLdxXgNoVnXEEQ2dzW +nxqhuIhESw39BBPXzhyeHB5tJUNbRqptUAa/LvxTy45RF3QkxYnzG0jfxHVkavJK +V7VWxuyBuhzC8dJ1iuPC+0XVOFB15I4UfLbqpRwUsHA7Y5Ti3iN082su/AVGA5kv +1jfQSZlJd9CKws9tDFzgD1IGZwX4WOdcf4yVMkxzvaeiY9AJj9Y6uiTCo7jV7t4r +CbSABbBeRaZNAnRbxeFTdZSzHOT8/Ujij5opXuts/wdDldYFchvEongMY+dUn+31 +Do/VxJ5UcTvvGQ/51mV4vt8ir6swNaVlZvLIGpavoFsEAQJzGnCsfP7d8XLYWlXj +XKEN9H1kvZZ7W2o9dv9XdI4ovywTu4MasuJ+LtpG2xU5+YSt1unV51YOEZG997hb +q8dGC6MetBAVUqyuSRra26j9QYjQ/FKXvGjLGu7YT4nybOv2bvotCKdV7oehUmaU +6YP58pcA0QHij0eM6Jc9CkcSbZQUtCZyjbLX36AcoFUDYemBLJn85o/UouY= +=YhQV +-END PGP PUBLIC KEY BLOCK-
[ambari] branch branch-2.7 updated (2e60a76 -> e415212)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git.

  from 2e60a76 AMBARI-25599 Consider to eliminate HDP public binary references (santal) (#3283)
   new da83488 AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
   new 50645e1 AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
   new e415212 AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)

Summary of changes:
 .../python/ambari_agent/AlertStatusReporter.py | 27 +++---
 1 file changed, 24 insertions(+), 3 deletions(-)
[ambari] 02/03: AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 50645e187f517a1eae594dc7443632c7b673bdec Author: Dmytro Vitiuk AuthorDate: Tue Feb 23 14:29:39 2021 +0200 AMBARI-25621. Ambari soft alert never become hard. (dvitiuk) --- .../python/ambari_agent/AlertStatusReporter.py | 33 +- 1 file changed, 19 insertions(+), 14 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py index 09c554f..7a9f337 100644 --- a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py +++ b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py @@ -96,21 +96,26 @@ class AlertStatusReporter(threading.Thread): alert_name = alert['name'] alert_state = alert['state'] - alert_definition = filter(lambda definition: definition['name'] == alert_name, - self.alert_definitions_cache[cluster_id]['alertDefinitions'])[0] - definition_tolerance_enabled = alert_definition['repeat_tolerance_enabled'] - if definition_tolerance_enabled: -alert_tolerance = int(alert_definition['repeat_tolerance']) + alert_definitions = filter(lambda definition: definition['name'] == alert_name, + self.alert_definitions_cache[cluster_id]['alertDefinitions']) + if alert_definitions: +alert_definition = alert_definitions[0] +definition_tolerance_enabled = alert_definition['repeat_tolerance_enabled'] +if definition_tolerance_enabled: + alert_tolerance = int(alert_definition['repeat_tolerance']) +else: + alert_tolerance = int(self.initializer_module.configurations_cache[cluster_id]['configurations']['cluster-env']['alerts_repeat_tolerance']) + +# if status changed then add alert + reset counter +# if status not changed and counter is not satisfied then add alert (but only for not-OK) +if [alert[field] for field in self.FIELDS_CHANGED_RESEND_ALERT] != 
self.reported_alerts[cluster_id][alert_name]: + changed_alerts.append(alert) + self.alert_repeats[cluster_id][alert_name] = 0 +elif self.alert_repeats[cluster_id][alert_name] < alert_tolerance and alert_state != 'OK': + changed_alerts.append(alert) else: -alert_tolerance = int(self.initializer_module.configurations_cache[cluster_id]['configurations']['cluster-env']['alerts_repeat_tolerance']) - - # if status changed then add alert + reset counter - # if status not changed and counter is not satisfied then add alert (but only for not-OK) - if [alert[field] for field in self.FIELDS_CHANGED_RESEND_ALERT] != self.reported_alerts[cluster_id][alert_name]: -changed_alerts.append(alert) -self.alert_repeats[cluster_id][alert_name] = 0 - elif self.alert_repeats[cluster_id][alert_name] < alert_tolerance and alert_state != 'OK': -changed_alerts.append(alert) +logger.warn("An alert '{0}' was appeared with state '{1}' for not-existing alert definition." +.format(alert_name, alert_state)) return changed_alerts
[ambari] 01/03: AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit da83488a1701d8e54d8790ac6c9fdf74f2a5feb2 Author: Dmytro Vitiuk AuthorDate: Tue Feb 23 14:02:16 2021 +0200 AMBARI-25621. Ambari soft alert never become hard. (dvitiuk) --- .../main/python/ambari_agent/AlertStatusReporter.py| 18 +- 1 file changed, 17 insertions(+), 1 deletion(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py index dd17c5a..09c554f 100644 --- a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py +++ b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py @@ -42,6 +42,7 @@ class AlertStatusReporter(threading.Thread): self.stale_alerts_monitor = initializer_module.stale_alerts_monitor self.server_responses_listener = initializer_module.server_responses_listener self.reported_alerts = defaultdict(lambda:defaultdict(lambda:[])) +self.alert_repeats = defaultdict(lambda:defaultdict(lambda:[])) self.send_alert_changes_only = initializer_module.config.send_alert_changes_only threading.Thread.__init__(self) @@ -83,6 +84,7 @@ class AlertStatusReporter(threading.Thread): alert_name = alert['name'] self.reported_alerts[cluster_id][alert_name] = [alert[field] for field in self.FIELDS_CHANGED_RESEND_ALERT] + self.alert_repeats[cluster_id][alert_name] += 1 def get_changed_alerts(self, alerts): """ @@ -92,9 +94,23 @@ class AlertStatusReporter(threading.Thread): for alert in alerts: cluster_id = alert['clusterId'] alert_name = alert['name'] - + alert_state = alert['state'] + + alert_definition = filter(lambda definition: definition['name'] == alert_name, + self.alert_definitions_cache[cluster_id]['alertDefinitions'])[0] + definition_tolerance_enabled = alert_definition['repeat_tolerance_enabled'] + if definition_tolerance_enabled: +alert_tolerance = 
int(alert_definition['repeat_tolerance']) + else: +alert_tolerance = int(self.initializer_module.configurations_cache[cluster_id]['configurations']['cluster-env']['alerts_repeat_tolerance']) + + # if status changed then add alert + reset counter + # if status not changed and counter is not satisfied then add alert (but only for not-OK) if [alert[field] for field in self.FIELDS_CHANGED_RESEND_ALERT] != self.reported_alerts[cluster_id][alert_name]: changed_alerts.append(alert) +self.alert_repeats[cluster_id][alert_name] = 0 + elif self.alert_repeats[cluster_id][alert_name] < alert_tolerance and alert_state != 'OK': +changed_alerts.append(alert) return changed_alerts
[ambari] 03/03: AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git

commit e4152124c0fd3b7a8f4fde83b1645e9cd812ceb3
Author: Dmytro Vitiuk
AuthorDate: Wed Feb 24 00:52:42 2021 +0200

    AMBARI-25621. Ambari soft alert never become hard. (dvitiuk)
---
 ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py
index 7a9f337..c512ff2 100644
--- a/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py
+++ b/ambari-agent/src/main/python/ambari_agent/AlertStatusReporter.py
@@ -114,7 +114,7 @@ class AlertStatusReporter(threading.Thread):
         elif self.alert_repeats[cluster_id][alert_name] < alert_tolerance and alert_state != 'OK':
           changed_alerts.append(alert)
         else:
-          logger.warn("An alert '{0}' was appeared with state '{1}' for not-existing alert definition."
+          logger.warn("Cannot find alert definition for alert='{0}', alert_state='{1}'."
                       .format(alert_name, alert_state))

     return changed_alerts
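Taken together, the three AMBARI-25621 commits above implement a per-alert repeat counter: an alert is always re-sent when its state changes, and a non-OK alert keeps being re-sent until the configured repeat tolerance is satisfied, so the server can promote it from SOFT to HARD. A minimal standalone sketch of that rule (hypothetical class and method names, not the real `AlertStatusReporter`):

```python
from collections import defaultdict


class RepeatToleranceFilter:
    """Sketch of the repeat-tolerance rule: re-send a non-OK alert until it
    has been reported `tolerance` times since its last state change."""

    def __init__(self, tolerance):
        self.tolerance = tolerance
        self.reported_state = {}         # alert name -> last state sent
        self.repeats = defaultdict(int)  # alert name -> sends since last change

    def should_send(self, name, state):
        if self.reported_state.get(name) != state:
            # State changed: always send and reset the counter.
            self.reported_state[name] = state
            self.repeats[name] = 1
            return True
        if state != 'OK' and self.repeats[name] < self.tolerance:
            # Unchanged but still "soft": repeat until tolerance is satisfied.
            self.repeats[name] += 1
            return True
        return False
```

With a tolerance of 3, a CRITICAL alert is reported on three consecutive cycles and then suppressed until its state changes again, which is exactly the behavior the bug report says was missing (soft alerts never became hard).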
[ambari] branch branch-2.7 updated: AMBARI-25604. During blueprint deploy tasks sometimes fail due to KeyError on large clusters (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git

The following commit(s) were added to refs/heads/branch-2.7 by this push:
     new 1745d5a AMBARI-25604. During blueprint deploy tasks sometimes fail due to KeyError on large clusters (aonishuk)

1745d5a is described below

commit 1745d5aa265ec811a235026d976012b1eebb6b7a
Author: Andrew Onishchuk
AuthorDate: Thu Dec 10 20:49:54 2020 +0200

    AMBARI-25604. During blueprint deploy tasks sometimes fail due to KeyError on large clusters (aonishuk)
---
 .../src/main/python/ambari_agent/ClusterTopologyCache.py | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterTopologyCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterTopologyCache.py
index b7863c6..90987ca 100644
--- a/ambari-agent/src/main/python/ambari_agent/ClusterTopologyCache.py
+++ b/ambari-agent/src/main/python/ambari_agent/ClusterTopologyCache.py
@@ -109,7 +109,14 @@ class ClusterTopologyCache(ClusterCache):
     cluster_host_info = defaultdict(lambda: [])
     for component_dict in self[cluster_id].components:
       component_name = component_dict.componentName
-      hostnames = [self.hosts_to_id[cluster_id][host_id].hostName for host_id in component_dict.hostIds]
+      hostnames = []
+      for host_id in component_dict.hostIds:
+        if host_id in self.hosts_to_id[cluster_id]:
+          hostnames.append(self.hosts_to_id[cluster_id][host_id].hostName)
+        else:
+          # In theory this should never happen. But in practice it happened when ambari-server had corrupt DB cache.
+          logger.warning("Cannot find host_id={} in cluster_id={}".format(host_id, cluster_id))
+
       cluster_host_info[component_name.lower()+"_hosts"] += hostnames

     cluster_host_info['all_hosts'] = []
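The fix above replaces a list comprehension that raised KeyError for a host id missing from the agent-side cache with a loop that logs and skips the unknown id. The same defensive pattern in isolation (hypothetical function and data shapes, not the real cache classes):

```python
import logging

logger = logging.getLogger(__name__)


def hostnames_for(host_ids, hosts_by_id):
    """Resolve host ids to hostnames, skipping ids missing from the cache
    instead of raising KeyError (the failure mode fixed in AMBARI-25604)."""
    hostnames = []
    for host_id in host_ids:
        if host_id in hosts_by_id:
            hostnames.append(hosts_by_id[host_id])
        else:
            # Should not happen, but was observed with a corrupt server-side cache.
            logger.warning("Cannot find host_id=%s in topology cache", host_id)
    return hostnames
```

The design choice here is deliberate: on a large cluster, failing one blueprint task over a single stale cache entry is worse than serving a slightly incomplete host list and logging the inconsistency.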
[ambari] 02/02: AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 8c3292b8fee28a6b6204b3876af610599446cf52
Author: Andrew Onishchuk
AuthorDate: Thu Dec 10 20:33:14 2020 +0200

    AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk)
---
 .../before-START/files/fast-hdfs-resource.jar | Bin 19286899 -> 22893810 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git a/ambari-server/src/main/resources/stack-hooks/before-START/files/fast-hdfs-resource.jar b/ambari-server/src/main/resources/stack-hooks/before-START/files/fast-hdfs-resource.jar
index b8f633f..27a2082 100644
Binary files a/ambari-server/src/main/resources/stack-hooks/before-START/files/fast-hdfs-resource.jar and b/ambari-server/src/main/resources/stack-hooks/before-START/files/fast-hdfs-resource.jar differ
[ambari] branch branch-2.7 updated (a4990d9 -> 8c3292b)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git.

  from a4990d9 AMBARI-25569 Reassess Ambari Metrics data migration - 2nd part (#3254)
   new e33cada AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk)
   new 8c3292b AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk)

Summary of changes:
 .../before-START/files/fast-hdfs-resource.jar | Bin 19286899 -> 22893810 bytes
 contrib/fast-hdfs-resource/pom.xml | 39 ++---
 2 files changed, 2 insertions(+), 37 deletions(-)
[ambari] 01/02: AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit e33cada7d5acfbe3566ff2d5edffb617edc978f3 Author: Andrew Onishchuk AuthorDate: Thu Dec 10 20:29:06 2020 +0200 AMBARI-25602. BlackDuck scan: vulnerable org.apache.hadoop 1.2.1 in fast-hdfs (aonishuk) --- contrib/fast-hdfs-resource/pom.xml | 39 ++ 1 file changed, 2 insertions(+), 37 deletions(-) diff --git a/contrib/fast-hdfs-resource/pom.xml b/contrib/fast-hdfs-resource/pom.xml index 8aa9186..75842bf 100644 --- a/contrib/fast-hdfs-resource/pom.xml +++ b/contrib/fast-hdfs-resource/pom.xml @@ -38,43 +38,8 @@ org.apache.hadoop - hadoop-tools - 1.2.1 - - - org.apache.hadoop - hadoop-core - 1.2.1 - - - tomcat - jasper-runtime - - - tomcat - jasper-compiler - - - org.mortbay.jetty - jetty - - - org.mortbay.jetty - jetty-util - - - org.mortbay.jetty - jsp-api-2.1 - - - org.mortbay.jetty - jsp-2.1 - - - org.mortbay.jetty - jetty - - + hadoop-common + 2.7.7 com.google.code.gson
[ambari] branch branch-2.7 updated: AMBARI-25589. When hearbeat is lost sometimes start/stop tasks can hang for a long time. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 93a4d3d AMBARI-25589. When hearbeat is lost sometimes start/stop tasks can hang for a long time. (aonishuk) 93a4d3d is described below commit 93a4d3ddf19edfd6bbbc58e6c8396679a5f68617 Author: Andrew Onishuk AuthorDate: Tue Nov 24 13:21:50 2020 +0200 AMBARI-25589. When hearbeat is lost sometimes start/stop tasks can hang for a long time. (aonishuk) --- .../main/python/ambari_agent/CommandStatusDict.py| 20 +--- 1 file changed, 9 insertions(+), 11 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/CommandStatusDict.py b/ambari-agent/src/main/python/ambari_agent/CommandStatusDict.py index d27dad4..89d0a3d 100644 --- a/ambari-agent/src/main/python/ambari_agent/CommandStatusDict.py +++ b/ambari-agent/src/main/python/ambari_agent/CommandStatusDict.py @@ -68,19 +68,17 @@ class CommandStatusDict(): """ Stores new version of report for command (replaces previous) """ -from ActionQueue import ActionQueue - -key = command['taskId'] -# delete stale data about this command -self.delete_command_data(key) +with self.lock: + key = command['taskId'] + # delete stale data about this command + self.delete_command_data(key) + self.queue_report_sending(key, command, report) -is_sent, correlation_id = self.force_update_to_server({command['clusterId']: [report]}) -updatable = report['status'] == CommandStatus.in_progress and self.command_update_output + report_dict = {command['clusterId']: [report]} + is_sent, correlation_id = self.force_update_to_server(report_dict) -if not is_sent or updatable: - self.queue_report_sending(key, command, report) -else: - self.server_responses_listener.listener_functions_on_error[correlation_id] = lambda headers, message: self.queue_report_sending(key, command, report) + 
self.server_responses_listener.listener_functions_on_success[correlation_id] = lambda headers, message: \ +self.clear_reported_reports(report_dict) def queue_report_sending(self, key, command, report): with self.lock:
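The change above does two things: it moves the delete-stale/queue steps under the instance lock, and it removes a queued report only after the server acknowledges the send (instead of optimistically skipping the queue while a command is in flight). A compressed sketch of that pattern, with a plain callable standing in for Ambari's STOMP transport (hypothetical names throughout, not the real `CommandStatusDict`):

```python
import threading


class ReportQueue:
    """Sketch: queue a report under a lock; clear it only on server ack."""

    def __init__(self, send):
        self.lock = threading.RLock()
        self.pending = {}   # task id -> queued report
        self.send = send    # callable returning True when the server accepted

    def put_report(self, task_id, report):
        with self.lock:
            # Replace any stale report for this task before queueing the new one.
            self.pending.pop(task_id, None)
            self.pending[task_id] = report
        if self.send(report):
            self.ack(task_id, report)

    def ack(self, task_id, report):
        with self.lock:
            # Drop the report only if the acknowledged one is still queued.
            if self.pending.get(task_id) is report:
                del self.pending[task_id]
```

Because the report stays queued until the ack arrives, a heartbeat loss between send and acknowledgment no longer strands a task's final status, which is the hang the commit title describes.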
[ambari] branch branch-2.7 updated (b10d71d -> f020c5a)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git.

  from b10d71d [AMBARI-25587] Metrics cannot be stored and the exception message is null when metric value is NaN (#3262) (akiyamaneko via dgrinenko)
   add f020c5a AMBARI-25590. Ambari Get Hosts API returning empty json (aonishuk)

No new revisions were added by this update.

Summary of changes:
 .../org/apache/ambari/server/state/cluster/ClusterImpl.java | 13 +
 1 file changed, 13 insertions(+)
[ambari] branch branch-2.7 updated: AMBARI-25567. Fix escaping issues in check database command (#3240)
[ambari] branch branch-2.7 updated: AMBARI-25567. Fix escaping issues in check database command (#3240)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new fe2cca7 AMBARI-25567. Fix escaping issues in check database command (#3240) fe2cca7 is described below commit fe2cca79cad9ce4f9872f0217e8ae31f25431bcd Author: aonishuk AuthorDate: Fri Oct 9 18:30:03 2020 +0300 AMBARI-25567. Fix escaping issues in check database command (#3240) --- .../0.12.0.2.0/package/scripts/hive_service.py | 7 +- .../resources/custom_actions/scripts/check_host.py | 7 +- .../test/python/custom_actions/TestCheckHost.py| 3 - .../stacks/2.0.6/HIVE/test_hive_metastore.py | 51 --- .../python/stacks/2.0.6/HIVE/test_hive_server.py | 100 +++-- .../python/stacks/2.1/HIVE/test_hive_metastore.py | 42 +++-- .../python/stacks/2.5/HIVE/test_hive_server_int.py | 1 + 7 files changed, 153 insertions(+), 58 deletions(-) diff --git a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py index 4a9ecc9..75a2d7a 100644 --- a/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py +++ b/ambari-server/src/main/resources/common-services/HIVE/0.12.0.2.0/package/scripts/hive_service.py @@ -33,6 +33,7 @@ from resource_management.libraries.functions import get_user_call_output from resource_management.libraries.functions.show_logs import show_logs from resource_management.libraries.functions import StackFeature from resource_management.libraries.functions.stack_features import check_stack_feature +from resource_management.core.utils import PasswordString from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl from ambari_commons import OSConst @@ -149,8 +150,10 @@ def validate_connection(target_path_to_jdbc, hive_lib_path): " in 
hive lib dir. So, db connection check can fail. Please run 'ambari-server setup --jdbc-db={db_name} --jdbc-driver={path_to_jdbc} on server host.'" Logger.error(error_message) - db_connection_check_command = format( -"{java64_home}/bin/java -cp {check_db_connection_jar}:{path_to_jdbc} org.apache.ambari.server.DBConnectionVerification '{hive_jdbc_connection_url}' {hive_metastore_user_name} {hive_metastore_user_passwd!p} {hive_jdbc_driver}") + db_connection_check_command = (format('{java64_home}/bin/java'), '-cp', + format('{check_db_connection_jar}:{path_to_jdbc}'), 'org.apache.ambari.server.DBConnectionVerification', + params.hive_jdbc_connection_url, params.hive_metastore_user_name, + PasswordString(params.hive_metastore_user_passwd), params.hive_jdbc_driver) try: Execute(db_connection_check_command, diff --git a/ambari-server/src/main/resources/custom_actions/scripts/check_host.py b/ambari-server/src/main/resources/custom_actions/scripts/check_host.py index 9fd2fe5..f3a72a6 100644 --- a/ambari-server/src/main/resources/custom_actions/scripts/check_host.py +++ b/ambari-server/src/main/resources/custom_actions/scripts/check_host.py @@ -39,6 +39,7 @@ from resource_management.core.exceptions import Fail from ambari_commons.constants import AMBARI_SUDO_BINARY from resource_management.core import shell from resource_management.core.logger import Logger +from resource_management.core.utils import PasswordString # WARNING. 
If you are adding a new host check that is used by cleanup, add it to BEFORE_CLEANUP_HOST_CHECKS @@ -445,9 +446,9 @@ class CheckHost(Script): user_name = "SYS AS SYSDBA" # try to connect to db -db_connection_check_command = format("{java_exec} -cp {check_db_connection_path}{class_path_delimiter}" \ - "{jdbc_jar_path} -Djava.library.path={java_library_path} org.apache.ambari.server.DBConnectionVerification \"{db_connection_url}\" " \ - "\"{user_name}\" {user_passwd!p} {jdbc_driver_class}") +db_connection_check_command = (java_exec, '-cp', format("{check_db_connection_path}{class_path_delimiter}{jdbc_jar_path}"), + format("-Djava.library.path={java_library_path}"), "org.apache.ambari.server.DBConnectionVerification", + db_connection_url, user_name, PasswordString(user_passwd), jdbc_driver_class) if db_name == DB_SQLA: db_connection_check_command = "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:{0}{1} {2}".format(agent_cache_dir, diff --git a/ambari-server/src/test/python/custom_actions/TestCheckHost.py b/ambari-server/src/test/python/custom_ac
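The AMBARI-25567 fix above replaces a single shell-formatted command string (which needed careful quoting of the JDBC URL, user name, and password) with a tuple of arguments wrapped in `PasswordString`. The underlying idea is that a command passed as an argument list is never re-parsed by a shell. A minimal sketch of that idea, assuming an illustrative function name and signature (the real code uses resource_management's `Execute` and `format`):

```python
def build_db_check_command(java_exec, classpath, jdbc_url, user, password, driver):
    """Build the DB connection check command as an argument list.

    Passing a list (rather than one shell-formatted string) means each
    argument reaches the JVM verbatim: special characters in the password
    or JDBC URL need no quoting and cannot be mangled by a shell.
    """
    return [java_exec, "-cp", classpath,
            "org.apache.ambari.server.DBConnectionVerification",
            jdbc_url, user, password, driver]
```

In the actual patch the password is additionally wrapped in `PasswordString` so that log output masks it while the executed command still receives the raw value.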
[ambari] branch branch-2.7 updated (3c99042 -> 1e12cfc)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 3c99042 AMBARI-25559. Clarify the difference between num_llap_nodes and (#3228) add 1e12cfc Revert "AMBARI-25558. Upgrade fails because host is out of sync (#3227)" (#3230) No new revisions were added by this update. Summary of changes: .../events/listeners/upgrade/HostVersionOutOfSyncListener.java | 10 ++ 1 file changed, 2 insertions(+), 8 deletions(-)
[ambari] 01/01: Revert "AMBARI-25558. Upgrade fails because host is out of sync (#3227)"
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch revert-3227-AMBARI-25558-branch-2.7-10 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 271695c03bb5ca03a5f1801dd39752da25879cfd Author: aonishuk AuthorDate: Wed Sep 23 17:09:30 2020 +0300 Revert "AMBARI-25558. Upgrade fails because host is out of sync (#3227)" This reverts commit 36e5735bfa84efe8b34e7f509e5deef2f3bddd4b. --- .../events/listeners/upgrade/HostVersionOutOfSyncListener.java | 10 ++ 1 file changed, 2 insertions(+), 8 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/events/listeners/upgrade/HostVersionOutOfSyncListener.java b/ambari-server/src/main/java/org/apache/ambari/server/events/listeners/upgrade/HostVersionOutOfSyncListener.java index 48083e2..4b3e42a 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/events/listeners/upgrade/HostVersionOutOfSyncListener.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/events/listeners/upgrade/HostVersionOutOfSyncListener.java @@ -137,14 +137,8 @@ public class HostVersionOutOfSyncListener { hostStackId.getStackName(), hostStackId.getStackVersion(), serviceName, componentName); continue; } - -ComponentInfo component = ami.get().getService(hostStackId.getStackName(), hostStackId.getStackVersion(), - serviceName).getComponentByName(componentName); - -// Skip lookup if stack does not contain the component -if (component == null) { - continue; -} +ComponentInfo component = ami.get().getComponent(hostStackId.getStackName(), +hostStackId.getStackVersion(), serviceName, componentName); if (!component.isVersionAdvertised()) { RepositoryVersionState state = checkAllHostComponents(hostStackId, hostVersionEntity.getHostEntity());
[ambari] branch revert-3227-AMBARI-25558-branch-2.7-10 created (now 271695c)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch revert-3227-AMBARI-25558-branch-2.7-10 in repository https://gitbox.apache.org/repos/asf/ambari.git. at 271695c Revert "AMBARI-25558. Upgrade fails because host is out of sync (#3227)" This branch includes the following new commits: new 271695c Revert "AMBARI-25558. Upgrade fails because host is out of sync (#3227)" The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[ambari] branch branch-2.7 updated (7ead925 -> 36e5735)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 7ead925 AMBARI-25550. Add viewFS protocol to DFS_PROTOCOLS_REGEX (dlysnichenko) (#3226) add 36e5735 AMBARI-25558. Upgrade fails because host is out of sync (#3227) No new revisions were added by this update. Summary of changes: .../events/listeners/upgrade/HostVersionOutOfSyncListener.java | 10 -- 1 file changed, 8 insertions(+), 2 deletions(-)
[ambari] branch branch-2.7 updated: AMBARI-25471. After node reboot autostart of components takes too much time. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 24147b6 AMBARI-25471. After node reboot autostart of components takes too much time. (aonishuk) 24147b6 is described below commit 24147b6191801fba23e2dfaf9ef6cf9a51d4670c Author: Andrew Onishuk AuthorDate: Mon Feb 3 14:16:54 2020 +0200 AMBARI-25471. After node reboot autostart of components takes too much time. (aonishuk) --- .../src/main/python/ambari_agent/ActionQueue.py | 20 +++- .../python/ambari_agent/ComponentStatusExecutor.py | 2 +- .../src/main/python/ambari_agent/RecoveryManager.py | 9 - .../src/test/python/ambari_agent/TestAlerts.py | 2 +- .../test/python/ambari_agent/TestRecoveryManager.py | 1 + 5 files changed, 26 insertions(+), 8 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/ActionQueue.py b/ambari-agent/src/main/python/ambari_agent/ActionQueue.py index 3403b98..eede227 100644 --- a/ambari-agent/src/main/python/ambari_agent/ActionQueue.py +++ b/ambari-agent/src/main/python/ambari_agent/ActionQueue.py @@ -58,6 +58,9 @@ class ActionQueue(threading.Thread): # How much time(in seconds) we need wait for new incoming execution command before checking status command queue EXECUTION_COMMAND_WAIT_TIME = 2 + # key name in command dictionary + IS_RECOVERY_COMMAND = "isRecoveryCommand" + def __init__(self, initializer_module): super(ActionQueue, self).__init__() self.commandQueue = Queue.Queue() @@ -134,7 +137,11 @@ class ActionQueue(threading.Thread): if command is None: break -self.process_command(command) +# Recovery commands should be run in parallel (since we don't know the ordering on agent) +if self.IS_RECOVERY_COMMAND in command and command[self.IS_RECOVERY_COMMAND]: + self.start_parallel_command(command) +else: + self.process_command(command) else: # If parallel execution is enabled, 
just kick off all available # commands using separate threads @@ -150,10 +157,7 @@ class ActionQueue(threading.Thread): if 'commandParams' in command and 'command_retry_enabled' in command['commandParams']: retry_able = command['commandParams']['command_retry_enabled'] == "true" if retry_able: -logger.info("Kicking off a thread for the command, id={} taskId={}".format(command['commandId'], command['taskId'])) -t = threading.Thread(target=self.process_command, args=(command,)) -t.daemon = True -t.start() +self.start_parallel_command(command) else: self.process_command(command) break @@ -165,6 +169,12 @@ class ActionQueue(threading.Thread): logger.exception("ActionQueue thread failed with exception. Re-running it") logger.info("ActionQueue thread has successfully finished") + def start_parallel_command(self, command): +logger.info("Kicking off a thread for the command, id={} taskId={}".format(command['commandId'], command['taskId'])) +t = threading.Thread(target=self.process_command, args=(command,)) +t.daemon = True +t.start() + def fill_recovery_commands(self): if self.recovery_manager.enabled() and not self.tasks_in_progress_or_pending(): self.put(self.recovery_manager.get_recovery_commands()) diff --git a/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py b/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py index 7bf00df..e1fe52b 100644 --- a/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py +++ b/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py @@ -112,7 +112,7 @@ class ComponentStatusExecutor(threading.Thread): if result: cluster_reports[cluster_id].append(result) - +self.recovery_manager.statuses_computed_at_least_once = True cluster_reports = self.discard_stale_reports(cluster_reports) self.send_updates_to_server(cluster_reports) except ConnectionIsAlreadyClosed: # server and agent disconnected during sending data. 
Not an issue diff --git a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py index 66da323..6712997 100644 --- a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py +++ b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py @@ -45,6 +45,7 @@ class RecoveryManager: HAS_STALE_CONFIG = "hasStaleConfigs" EXECUTION_COMMAND_DETAILS = "executionCommandDetails"
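The AMBARI-25471 patch above extracts the thread-spawning logic into `start_parallel_command` so that recovery commands, whose ordering on the agent is unknown, can run concurrently instead of queueing serially. A self-contained sketch of that pattern (the `process_fn` parameter stands in for the real `self.process_command` bound method):

```python
import threading

def start_parallel_command(process_fn, command):
    # Each recovery command gets its own daemon thread, mirroring
    # ActionQueue.start_parallel_command: daemon=True ensures these
    # worker threads never keep the agent process alive on shutdown.
    t = threading.Thread(target=process_fn, args=(command,))
    t.daemon = True
    t.start()
    return t
```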
[ambari] branch branch-2.7 updated: AMBARI-25464. Components autostart does not work sometimes and ambari-agent restart
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 6f830c9 AMBARI-25464. Components autostart does not work sometimes and ambari-agent restart 6f830c9 is described below commit 6f830c9382eb6e69d9e4182f60b649c2c8ef2d47 Author: aonishuk AuthorDate: Mon Jan 27 12:50:05 2020 +0200 AMBARI-25464. Components autostart does not work sometimes and ambari-agent restart * AMBARI-25464. Components autostart does not work sometimes and ambari-agent restart (aonishuk) * AMBARI-25464. Components autostart does not work sometimes and ambari-agent restart (aonishuk) --- ambari-agent/src/main/python/ambari_agent/ClusterCache.py | 9 + .../main/python/ambari_agent/ClusterConfigurationCache.py | 9 - .../main/python/ambari_agent/ClusterHostLevelParamsCache.py | 13 - .../src/main/python/ambari_agent/InitializerModule.py | 10 -- .../src/main/python/ambari_agent/RecoveryManager.py | 7 ++- .../ambari_agent/listeners/ConfigurationEventListener.py| 5 - .../ambari_agent/listeners/HostLevelParamsEventListener.py | 9 - 7 files changed, 31 insertions(+), 31 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterCache.py index 2e13f16..50ad8ee 100644 --- a/ambari-agent/src/main/python/ambari_agent/ClusterCache.py +++ b/ambari-agent/src/main/python/ambari_agent/ClusterCache.py @@ -75,6 +75,7 @@ class ClusterCache(dict): # Example: hostname change and restart causes old topology loading to fail with exception logger.exception("Loading saved cache for {0} failed".format(self.__class__.__name__)) self.rewrite_cache({}, None) + os.remove(self.__current_cache_hash_file) def get_cluster_indepedent_data(self): return self[ClusterCache.COMMON_DATA_CLUSTER] @@ -99,7 +100,7 @@ class ClusterCache(dict): del self[cache_id_to_delete] 
self.on_cache_update() -self.persist_cache() +self.persist_cache(cache_hash) # if all of above are sucessful finally set the hash self.hash = cache_hash @@ -131,7 +132,7 @@ class ClusterCache(dict): with self._cache_lock: self[cluster_id] = immutable_cache - def persist_cache(self): + def persist_cache(self, cache_hash): # ensure that our cache directory exists if not os.path.exists(self.cluster_cache_dir): os.makedirs(self.cluster_cache_dir) @@ -140,9 +141,9 @@ class ClusterCache(dict): with open(self.__current_cache_json_file, 'w') as f: json.dump(self, f, indent=2) - if self.hash is not None: + if cache_hash is not None: with open(self.__current_cache_hash_file, 'w') as fp: - fp.write(self.hash) + fp.write(cache_hash) def _get_mutable_copy(self): with self._cache_lock: diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterConfigurationCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterConfigurationCache.py index 677fff2..1bdc581 100644 --- a/ambari-agent/src/main/python/ambari_agent/ClusterConfigurationCache.py +++ b/ambari-agent/src/main/python/ambari_agent/ClusterConfigurationCache.py @@ -30,13 +30,20 @@ class ClusterConfigurationCache(ClusterCache): configuration properties. """ - def __init__(self, cluster_cache_dir): + def __init__(self, cluster_cache_dir, initializer_module): """ Initializes the configuration cache. :param cluster_cache_dir: directory the changed json are saved :return: """ +self.initializer_module = initializer_module super(ClusterConfigurationCache, self).__init__(cluster_cache_dir) + + def on_cache_update(self): +for cluster_id, configurations in self.iteritems(): + # FIXME: Recovery manager does not support multiple cluster as of now. 
+ self.initializer_module.recovery_manager.cluster_id = cluster_id + self.initializer_module.recovery_manager.on_config_update(configurations) def get_cache_name(self): return 'configurations' diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterHostLevelParamsCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterHostLevelParamsCache.py index 3e490c5..9dd062f 100644 --- a/ambari-agent/src/main/python/ambari_agent/ClusterHostLevelParamsCache.py +++ b/ambari-agent/src/main/python/ambari_agent/ClusterHostLevelParamsCache.py @@ -33,13 +33,24 @@ class ClusterHostLevelParamsCache(ClusterCache): differently for every host. """ - def __init__(self, cluster_cache_dir): + def __init__(self, cluster_cache_dir, initializer_module): """ Initializes the host level params cach
[ambari] branch branch-2.7 updated: AMBARI-25455. Ambari-agent does not restart the agent when memory leak happens (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 5e9fe7f AMBARI-25455. Ambari-agent does not restart the agent when memory leak happens (aonishuk) 5e9fe7f is described below commit 5e9fe7fe0550caf7d9ab3e3fd0de139376ffc1a8 Author: Andrew Onishuk AuthorDate: Wed Jan 8 11:03:49 2020 +0200 AMBARI-25455. Ambari-agent does not restart the agent when memory leak happens (aonishuk) --- .../src/main/python/ambari_agent/AmbariConfig.py| 8 .../src/main/python/ambari_agent/HeartbeatThread.py | 17 + 2 files changed, 25 insertions(+) diff --git a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py index fedd063..85bca49 100644 --- a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py +++ b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py @@ -191,6 +191,14 @@ class AmbariConfig: return int(self.get('heartbeat', 'state_interval_seconds', '60')) @property + def max_ram_soft(self): +return int(self.get('agent', 'memory_threshold_soft_mb', default='0')) + + @property + def max_ram_hard(self): +return int(self.get('agent', 'memory_threshold_hard_mb', default='0')) + + @property def log_max_symbols_size(self): return int(self.get('heartbeat', 'log_max_symbols_size', '90')) diff --git a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py b/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py index 9210e79..acce6cf 100644 --- a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py +++ b/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py @@ -38,10 +38,14 @@ from ambari_agent.listeners.HostLevelParamsEventListener import HostLevelParamsE from ambari_agent.listeners.AlertDefinitionsEventListener import AlertDefinitionsEventListener from ambari_agent import security from 
ambari_stomp.adapter.websocket import ConnectionIsAlreadyClosed +from ambari_commons.os_utils import get_used_ram HEARTBEAT_INTERVAL = 10 REQUEST_RESPONSE_TIMEOUT = 10 +AGENT_AUTO_RESTART_EXIT_CODE = 77 +AGENT_RAM_OVERUSE_MESSAGE = "Ambari-agent RAM usage {used_ram} MB went above {config_name}={max_ram} MB. Restarting ambari-agent to clean the RAM." + logger = logging.getLogger(__name__) class HeartbeatThread(threading.Thread): @@ -94,6 +98,8 @@ class HeartbeatThread(threading.Thread): if not self.initializer_module.is_registered: self.register() +self.check_for_memory_leak() + heartbeat_body = self.get_heartbeat_body() logger.debug("Heartbeat body is {0}".format(heartbeat_body)) response = self.blocking_request(heartbeat_body, Constants.HEARTBEAT_ENDPOINT) @@ -276,3 +282,14 @@ class HeartbeatThread(threading.Thread): return self.server_responses_listener.responses.blocking_pop(correlation_id, timeout=timeout) except BlockingDictionary.DictionaryPopTimeout: raise Exception("{0} seconds timeout expired waiting for response from server at {1} to message from {2}".format(timeout, Constants.SERVER_RESPONSES_TOPIC, destination)) + + def check_for_memory_leak(self): +used_ram = get_used_ram()/1000 +# dealing with a possible memory leaks +if self.config.max_ram_soft and used_ram >= self.config.max_ram_soft and not self.initializer_module.action_queue.tasks_in_progress_or_pending(): + logger.error(AGENT_RAM_OVERUSE_MESSAGE.format(used_ram=used_ram, config_name="memory_threshold_soft_mb", max_ram=self.config.max_ram_soft)) + Utils.restartAgent(self.stop_event) +if self.config.max_ram_hard and used_ram >= self.config.max_ram_hard: + logger.error(AGENT_RAM_OVERUSE_MESSAGE.format(used_ram=used_ram, config_name="memory_threshold_hard_mb", max_ram=self.config.max_ram_hard)) + Utils.restartAgent(self.stop_event) + \ No newline at end of file
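The AMBARI-25455 change above adds a two-level RAM check to the heartbeat loop: the soft threshold restarts the agent only when no tasks are in progress or pending, while the hard threshold restarts unconditionally, and a threshold of 0 disables the check. The decision logic can be isolated as a pure function (the name `should_restart` is illustrative; the real code calls `Utils.restartAgent` inline):

```python
def should_restart(used_ram_mb, soft_limit_mb, hard_limit_mb, tasks_pending):
    """Decide whether the agent should restart to reclaim leaked memory.

    soft limit: restart only when the agent is idle (no pending tasks).
    hard limit: restart regardless of workload.
    A limit of 0 disables that check, matching AmbariConfig's defaults.
    """
    if hard_limit_mb and used_ram_mb >= hard_limit_mb:
        return True
    if soft_limit_mb and used_ram_mb >= soft_limit_mb and not tasks_pending:
        return True
    return False
```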
[ambari] branch branch-2.7 updated: AMBARI-25450. [ubuntu16] HDP install failed for upgrade from HDP-3.0.1.0-187 to HDP-3.1.5.0-139 (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new bceec5c AMBARI-25450. [ubuntu16] HDP install failed for upgrade from HDP-3.0.1.0-187 to HDP-3.1.5.0-139 (aonishuk) bceec5c is described below commit bceec5c53cdc50afb0f0809c58775d64df6871dd Author: Andrew Onishuk AuthorDate: Mon Dec 16 12:49:32 2019 +0200 AMBARI-25450. [ubuntu16] HDP install failed for upgrade from HDP-3.0.1.0-187 to HDP-3.1.5.0-139 (aonishuk) --- .../src/main/python/ambari_commons/repo_manager/apt_manager.py | 5 - 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/ambari-common/src/main/python/ambari_commons/repo_manager/apt_manager.py b/ambari-common/src/main/python/ambari_commons/repo_manager/apt_manager.py index eba8dfb..f7fbab9 100644 --- a/ambari-common/src/main/python/ambari_commons/repo_manager/apt_manager.py +++ b/ambari-common/src/main/python/ambari_commons/repo_manager/apt_manager.py @@ -155,7 +155,7 @@ class AptManager(GenericManager): def transform_baseurl_to_repoid(self, base_url): """ -Transforms the URL looking like proto://localhost/some/long/path to localhost_some_long_path +Transforms the URL looking like proto://login:password@localhost/some/long/path to localhost_some_long_path :type base_url str :rtype str @@ -165,6 +165,9 @@ class AptManager(GenericManager): if url_proto_pos > 0: base_url = base_url[url_proto_pos+len(url_proto_mask):] +if "@" in base_url: + base_url = base_url.split("@", 1)[1] + return base_url.replace("/", "_").replace(" ", "_") def get_available_packages_in_repos(self, repos):
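The AMBARI-25450 diff above extends `transform_baseurl_to_repoid` to drop a `login:password@` prefix before flattening the URL into a repo id, so credentialed base URLs no longer produce broken apt source names. The patched method, reconstructed as a standalone function for clarity:

```python
def transform_baseurl_to_repoid(base_url):
    # Strip the protocol (e.g. "http://"), drop any login:password@
    # prefix (the AMBARI-25450 fix), then flatten path separators and
    # spaces into underscores to form a repository id.
    url_proto_mask = "://"
    pos = base_url.find(url_proto_mask)
    if pos > 0:
        base_url = base_url[pos + len(url_proto_mask):]
    if "@" in base_url:
        base_url = base_url.split("@", 1)[1]
    return base_url.replace("/", "_").replace(" ", "_")
```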
[ambari] branch branch-2.7 updated: AMBARI-25446. Credentials should not be shown on cleartext on Ambari UI (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 6e28cc2 AMBARI-25446. Credentials should not be shown on cleartext on Ambari UI (aonishuk) 6e28cc2 is described below commit 6e28cc22720cde5ca77e1f825a3fa2e2b99a67f3 Author: Andrew Onishuk AuthorDate: Wed Dec 11 14:17:09 2019 +0200 AMBARI-25446. Credentials should not be shown on cleartext on Ambari UI (aonishuk) --- .../server/controller/internal/RepositoryResourceProvider.java | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java index 82f7903..b844495 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java @@ -44,6 +44,7 @@ import org.apache.ambari.server.controller.spi.ResourceAlreadyExistsException; import org.apache.ambari.server.controller.spi.SystemException; import org.apache.ambari.server.controller.spi.UnsupportedPropertyException; import org.apache.ambari.server.controller.utilities.PropertyHelper; +import org.apache.ambari.server.utils.URLCredentialsHider; import org.apache.commons.lang.BooleanUtils; public class RepositoryResourceProvider extends AbstractControllerResourceProvider { @@ -178,11 +179,11 @@ public class RepositoryResourceProvider extends AbstractControllerResourceProvid setResourceProperty(resource, REPOSITORY_REPO_NAME_PROPERTY_ID, response.getRepoName(), requestedIds); setResourceProperty(resource, REPOSITORY_DISTRIBUTION_PROPERTY_ID, response.getDistribution(), requestedIds); 
setResourceProperty(resource, REPOSITORY_COMPONENTS_PROPERTY_ID, response.getComponents(), requestedIds); -setResourceProperty(resource, REPOSITORY_BASE_URL_PROPERTY_ID, response.getBaseUrl(), requestedIds); +setResourceProperty(resource, REPOSITORY_BASE_URL_PROPERTY_ID, URLCredentialsHider.hideCredentials(response.getBaseUrl()), requestedIds); setResourceProperty(resource, REPOSITORY_OS_TYPE_PROPERTY_ID, response.getOsType(), requestedIds); setResourceProperty(resource, REPOSITORY_REPO_ID_PROPERTY_ID, response.getRepoId(), requestedIds); setResourceProperty(resource, REPOSITORY_MIRRORS_LIST_PROPERTY_ID, response.getMirrorsList(), requestedIds); -setResourceProperty(resource, REPOSITORY_DEFAULT_BASE_URL_PROPERTY_ID, response.getDefaultBaseUrl(), requestedIds); +setResourceProperty(resource, REPOSITORY_DEFAULT_BASE_URL_PROPERTY_ID, URLCredentialsHider.hideCredentials(response.getDefaultBaseUrl()), requestedIds); setResourceProperty(resource, REPOSITORY_UNIQUE_PROPERTY_ID, response.isUnique(), requestedIds); setResourceProperty(resource, REPOSITORY_TAGS_PROPERTY_ID, response.getTags(), requestedIds); setResourceProperty(resource, REPOSITORY_APPLICABLE_SERVICES_PROPERTY_ID, response.getApplicableServices(), requestedIds);
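The AMBARI-25446 patch above routes repository base URLs through `URLCredentialsHider.hideCredentials` before they reach the UI. That class is Java; a hedged Python analogue of the idea (the `****` mask string is illustrative, not necessarily what Ambari emits) looks like this:

```python
from urllib.parse import urlsplit, urlunsplit

MASK = "****"

def hide_credentials(url):
    """Replace any user:password pair embedded in a URL with a mask,
    so credentials are not shown in cleartext (cf. URLCredentialsHider)."""
    parts = urlsplit(url)
    if parts.username is None:
        return url  # nothing to hide
    netloc = "{}@{}".format(MASK, parts.hostname)
    if parts.port:
        netloc += ":{}".format(parts.port)
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```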
[ambari] branch branch-2.7 updated: AMBARI-25445. VDF registration fails with SunCertPathBuilderException (#3158)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new bc2fd78 AMBARI-25445. VDF registration fails with SunCertPathBuilderException (#3158) bc2fd78 is described below commit bc2fd78f175d6d85b84be259e87ce3a00513f38d Author: aonishuk AuthorDate: Tue Dec 10 20:34:35 2019 +0200 AMBARI-25445. VDF registration fails with SunCertPathBuilderException (#3158) * AMBARI-25445. VDF registration fails with SunCertPathBuilderException: unable to find valid certification path to requested target' on HTTPS cluster (aonishuk) * AMBARI-25445. VDF registration fails with SunCertPathBuilderException: unable to find valid certification path to requested target' on HTTPS cluster (aonishuk) --- .../controller/AmbariManagementControllerImpl.java | 2 +- .../controller/internal/URLRedirectProvider.java | 44 +- .../VersionDefinitionResourceProvider.java | 2 +- 3 files changed, 44 insertions(+), 4 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java index fc8c965..0220e20 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariManagementControllerImpl.java @@ -4464,7 +4464,7 @@ public class AmbariManagementControllerImpl implements AmbariManagementControlle * @throws AmbariException if verification fails */ private void verifyRepository(RepositoryRequest request) throws AmbariException { -URLRedirectProvider usp = new URLRedirectProvider(REPO_URL_CONNECT_TIMEOUT, REPO_URL_READ_TIMEOUT); +URLRedirectProvider usp = new URLRedirectProvider(REPO_URL_CONNECT_TIMEOUT, REPO_URL_READ_TIMEOUT, true); 
String repoName = request.getRepoName(); if (StringUtils.isEmpty(repoName)) { diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLRedirectProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLRedirectProvider.java index aed89fc..1ec508c 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLRedirectProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLRedirectProvider.java @@ -21,7 +21,13 @@ package org.apache.ambari.server.controller.internal; import java.io.IOException; import java.io.InputStream; import java.nio.charset.StandardCharsets; +import java.security.KeyManagementException; +import java.security.KeyStoreException; +import java.security.NoSuchAlgorithmException; +import javax.net.ssl.SSLContext; + +import org.apache.ambari.server.AmbariException; import org.apache.ambari.server.utils.URLCredentialsHider; import org.apache.commons.io.IOUtils; import org.apache.http.HttpEntity; @@ -29,8 +35,15 @@ import org.apache.http.HttpStatus; import org.apache.http.client.config.RequestConfig; import org.apache.http.client.methods.CloseableHttpResponse; import org.apache.http.client.methods.HttpGet; +import org.apache.http.config.RegistryBuilder; +import org.apache.http.conn.socket.ConnectionSocketFactory; +import org.apache.http.conn.socket.PlainConnectionSocketFactory; +import org.apache.http.conn.ssl.NoopHostnameVerifier; +import org.apache.http.conn.ssl.SSLConnectionSocketFactory; import org.apache.http.impl.client.CloseableHttpClient; import org.apache.http.impl.client.HttpClientBuilder; +import org.apache.http.impl.conn.PoolingHttpClientConnectionManager; +import org.apache.http.ssl.SSLContextBuilder; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -42,14 +55,16 @@ public class URLRedirectProvider { private final int connTimeout; private final int readTimeout; + private final boolean 
skipSslCertificateCheck; - public URLRedirectProvider(int connectionTimeout, int readTimeout) { + public URLRedirectProvider(int connectionTimeout, int readTimeout, boolean skipSslCertificateCheck) { this.connTimeout = connectionTimeout; this.readTimeout = readTimeout; +this.skipSslCertificateCheck = skipSslCertificateCheck; } public RequestResult executeGet(String spec) throws IOException { -try (CloseableHttpClient httpClient = HttpClientBuilder.create().build()) { +try (CloseableHttpClient httpClient = buildHttpClient()) { HttpGet httpGet = new HttpGet(spec); RequestConfig requestConfig = RequestConfig.custom() @@ -74,6 +89,31 @@ public class URLRedirectProvider { } } + private CloseableHttpClient buildHttpClient() throws AmbariException { +HttpClientBuilder httpClientBuilder
[ambari] branch branch-2.7 updated: AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 15939ef AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk) 15939ef is described below commit 15939ef421ace98996d46504a1cd68a000a15f7a Author: Andrew Onishuk AuthorDate: Mon Dec 9 17:54:24 2019 +0200 AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk) --- .../ambari/server/controller/internal/RepositoryResourceProvider.java| 1 - 1 file changed, 1 deletion(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java index bba9331..82f7903 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java @@ -44,7 +44,6 @@ import org.apache.ambari.server.controller.spi.ResourceAlreadyExistsException; import org.apache.ambari.server.controller.spi.SystemException; import org.apache.ambari.server.controller.spi.UnsupportedPropertyException; import org.apache.ambari.server.controller.utilities.PropertyHelper; -import org.apache.ambari.server.utils.URLCredentialsHider; import org.apache.commons.lang.BooleanUtils; public class RepositoryResourceProvider extends AbstractControllerResourceProvider {
[ambari] branch branch-2.7 updated: AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 9d4dec9 AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk) 9d4dec9 is described below commit 9d4dec96387b863dfae1aaa8724db8c4f350d73f Author: Andrew Onishuk AuthorDate: Mon Dec 9 15:23:48 2019 +0200 AMBARI-25444. Deploy fails with 401:Unauthorized on HDP-GPL; whereas url is actually accessible with credentials supplied by Releng team (aonishuk) --- .../ambari/server/controller/internal/RepositoryResourceProvider.java | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java index b844495..bba9331 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/RepositoryResourceProvider.java @@ -179,11 +179,11 @@ public class RepositoryResourceProvider extends AbstractControllerResourceProvid setResourceProperty(resource, REPOSITORY_REPO_NAME_PROPERTY_ID, response.getRepoName(), requestedIds); setResourceProperty(resource, REPOSITORY_DISTRIBUTION_PROPERTY_ID, response.getDistribution(), requestedIds); setResourceProperty(resource, REPOSITORY_COMPONENTS_PROPERTY_ID, response.getComponents(), requestedIds); -setResourceProperty(resource, REPOSITORY_BASE_URL_PROPERTY_ID, URLCredentialsHider.hideCredentials(response.getBaseUrl()), requestedIds); +setResourceProperty(resource, REPOSITORY_BASE_URL_PROPERTY_ID, response.getBaseUrl(), requestedIds); setResourceProperty(resource, 
REPOSITORY_OS_TYPE_PROPERTY_ID, response.getOsType(), requestedIds); setResourceProperty(resource, REPOSITORY_REPO_ID_PROPERTY_ID, response.getRepoId(), requestedIds); setResourceProperty(resource, REPOSITORY_MIRRORS_LIST_PROPERTY_ID, response.getMirrorsList(), requestedIds); -setResourceProperty(resource, REPOSITORY_DEFAULT_BASE_URL_PROPERTY_ID, URLCredentialsHider.hideCredentials(response.getDefaultBaseUrl()), requestedIds); +setResourceProperty(resource, REPOSITORY_DEFAULT_BASE_URL_PROPERTY_ID, response.getDefaultBaseUrl(), requestedIds); setResourceProperty(resource, REPOSITORY_UNIQUE_PROPERTY_ID, response.isUnique(), requestedIds); setResourceProperty(resource, REPOSITORY_TAGS_PROPERTY_ID, response.getTags(), requestedIds); setResourceProperty(resource, REPOSITORY_APPLICABLE_SERVICES_PROPERTY_ID, response.getApplicableServices(), requestedIds);
[ambari] branch branch-2.7 updated: AMBARI-25433. Adding VDF fails with paywalled repos/urls (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 4fbcf42 AMBARI-25433. Adding VDF fails with paywalled repos/urls (aonishuk) 4fbcf42 is described below commit 4fbcf42a1a2b630fc4c69c8a50f1c8ae1a50e1f5 Author: Andrew Onishuk AuthorDate: Fri Dec 6 14:47:47 2019 +0200 AMBARI-25433. Adding VDF fails with paywalled repos/urls (aonishuk) --- .../controller/internal/URLStreamProvider.java | 48 +- 1 file changed, 47 insertions(+), 1 deletion(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLStreamProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLStreamProvider.java index 429d5c8..454a5c5 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLStreamProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/URLStreamProvider.java @@ -24,16 +24,25 @@ import java.io.IOException; import java.io.InputStream; import java.net.HttpURLConnection; import java.net.URL; +import java.net.URLConnection; +import java.security.KeyManagementException; import java.security.KeyStore; +import java.security.NoSuchAlgorithmException; +import java.security.SecureRandom; +import java.security.cert.X509Certificate; import java.util.Collections; import java.util.HashMap; import java.util.List; import java.util.Map; +import javax.net.ssl.HostnameVerifier; import javax.net.ssl.HttpsURLConnection; import javax.net.ssl.SSLContext; +import javax.net.ssl.SSLSession; import javax.net.ssl.SSLSocketFactory; +import javax.net.ssl.TrustManager; import javax.net.ssl.TrustManagerFactory; +import javax.net.ssl.X509TrustManager; import org.apache.ambari.server.configuration.ComponentSSLConfiguration; import 
org.apache.ambari.server.controller.utilities.StreamProvider; @@ -288,12 +297,49 @@ public class URLStreamProvider implements StreamProvider { return cookies + "; " + newCookie; } + public static class TrustAllHostnameVerifier implements HostnameVerifier + { +public boolean verify(String hostname, SSLSession session) { return true; } + } + + public static class TrustAllManager implements X509TrustManager + { +public X509Certificate[] getAcceptedIssuers() +{ + return new X509Certificate[0]; +} +public void checkClientTrusted(X509Certificate[] certs, String authType) {} +public void checkServerTrusted(X509Certificate[] certs, String authType) {} + } // - helper methods // Get a connection protected HttpURLConnection getConnection(URL url) throws IOException { -return (HttpURLConnection) url.openConnection(); +URLConnection connection = url.openConnection(); + +if (!setupTruststoreForHttps) { + HttpsURLConnection httpsConnection = (HttpsURLConnection) connection; + + // Create a trust manager that does not validate certificate chains + TrustManager[] trustAllCerts = new TrustManager[] { + new TrustAllManager() + }; + + // Ignore differences between given hostname and certificate hostname + HostnameVerifier hostnameVerifier = new TrustAllHostnameVerifier(); + // Install the all-trusting trust manager + try { +SSLContext sc = SSLContext.getInstance("SSL"); +sc.init(null, trustAllCerts, new SecureRandom()); +httpsConnection.setSSLSocketFactory(sc.getSocketFactory()); +httpsConnection.setHostnameVerifier(hostnameVerifier); + } catch (NoSuchAlgorithmException | KeyManagementException e) { +throw new IllegalStateException("Cannot create unverified ssl context.", e); + } +} + +return (HttpURLConnection) connection; } // Get an ssl connection
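The AMBARI-25433 change above installs an all-trusting `TrustManager` and hostname verifier on the `HttpsURLConnection` when the truststore is not set up. A minimal Python sketch of the same "skip certificate checks" idea (the function name and flag are illustrative, not Ambari API):

```python
import ssl

def make_ssl_context(skip_ssl_certificate_check=False):
    """Return an SSLContext; when the flag is set, disable both
    certificate-chain validation (the TrustAllManager analogue) and
    hostname verification (the TrustAllHostnameVerifier analogue)."""
    ctx = ssl.create_default_context()
    if skip_ssl_certificate_check:
        # check_hostname must be cleared before verify_mode can be CERT_NONE
        ctx.check_hostname = False       # accept any hostname
        ctx.verify_mode = ssl.CERT_NONE  # accept any certificate chain
    return ctx
```

As in the Java code, the permissive context should only be installed on the one connection that needs it, not process-wide.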
[ambari] branch branch-2.7 updated (afbbb73 -> 3d379f0)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch branch-2.7
in repository https://gitbox.apache.org/repos/asf/ambari.git.

    from afbbb73  AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk)
     add 3d379f0  AMBARI-25433. Adding VDF fails with paywalled repos/urls (aonishuk)

No new revisions were added by this update.

Summary of changes:
 .../server/controller/internal/VersionDefinitionResourceProvider.java | 4 ----
 .../main/java/org/apache/ambari/server/state/stack/RepositoryXml.java | 4 +++-
 2 files changed, 3 insertions(+), 5 deletions(-)
[ambari] 02/02: AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit afbbb73db5a4f2adad2dd4e0e777f7c5109b2099 Author: Andrew Onishuk AuthorDate: Thu Nov 28 14:30:40 2019 +0200 AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk) --- .../internal/VersionDefinitionResourceProvider.java | 16 ++-- 1 file changed, 10 insertions(+), 6 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java index 97cf917..0779f26 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java @@ -21,6 +21,7 @@ import java.io.InputStream; import java.io.UnsupportedEncodingException; import java.net.MalformedURLException; import java.net.URI; +import java.net.URISyntaxException; import java.net.URL; import java.util.ArrayList; import java.util.Collection; @@ -581,6 +582,7 @@ public class VersionDefinitionResourceProvider extends AbstractAuthorizedResourc } else { URLStreamProvider provider = new URLStreamProvider(connectTimeout, readTimeout, ComponentSSLConfiguration.instance()); +provider.setSetupTruststoreForHttps(false); stream = provider.readFrom(definitionUrl); } @@ -615,12 +617,14 @@ public class VersionDefinitionResourceProvider extends AbstractAuthorizedResourc entity.setStack(stackEntity); -String credentials; -try { - URL url = new URL(holder.url); - credentials = url.getUserInfo(); -} catch (MalformedURLException e) { - throw new AmbariException(String.format("Could not parse url %s", holder.url), e); +String credentials = null; +if (holder.url != null) { + try { 
+URI uri = new URI(holder.url); +credentials = uri.getUserInfo(); + } catch (URISyntaxException e) { +throw new AmbariException(String.format("Could not parse url %s", holder.url), e); + } } List repos = holder.xml.repositoryInfo.getRepositories(credentials);
(The fix replaces `new URL(holder.url)` with `new URI(holder.url)` and guards against a null `holder.url`, so a VDF registered from a local file — where no URL exists — no longer throws `MalformedURLException`.)
[ambari] branch branch-2.7 updated (767a168 -> afbbb73)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch branch-2.7
in repository https://gitbox.apache.org/repos/asf/ambari.git.

    from 767a168  Merge pull request #3140 from hiveww/AMBARI-25425-branch-2.7
     new 9a1c248  AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk)
     new afbbb73  AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk)

The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.

Summary of changes:
 .../VersionDefinitionResourceProvider.java | 18 +++-
 .../ambari/server/state/stack/RepositoryXml.java | 25 --
 2 files changed, 40 insertions(+), 3 deletions(-)

[ambari] 01/02: AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 9a1c24863dee62d769c0e3d80e8daa25be0fc17c Author: Andrew Onishuk AuthorDate: Thu Nov 28 13:03:54 2019 +0200 AMBARI-25433. Ambari should add login and password to urls populated from VDF (aonishuk) --- .../VersionDefinitionResourceProvider.java | 14 +++- .../ambari/server/state/stack/RepositoryXml.java | 25 -- 2 files changed, 36 insertions(+), 3 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java index 9882147..97cf917 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/VersionDefinitionResourceProvider.java @@ -19,7 +19,9 @@ package org.apache.ambari.server.controller.internal; import java.io.InputStream; import java.io.UnsupportedEncodingException; +import java.net.MalformedURLException; import java.net.URI; +import java.net.URL; import java.util.ArrayList; import java.util.Collection; import java.util.Collections; @@ -29,6 +31,8 @@ import java.util.List; import java.util.Map; import java.util.Map.Entry; import java.util.Set; +import java.util.regex.Matcher; +import java.util.regex.Pattern; import org.apache.ambari.annotations.Experimental; import org.apache.ambari.annotations.ExperimentalFeature; @@ -611,7 +615,15 @@ public class VersionDefinitionResourceProvider extends AbstractAuthorizedResourc entity.setStack(stackEntity); -List repos = holder.xml.repositoryInfo.getRepositories(); +String credentials; +try { + URL url = new URL(holder.url); + credentials = url.getUserInfo(); +} catch (MalformedURLException e) { + throw new 
AmbariException(String.format("Could not parse url %s", holder.url), e); +} + +List repos = holder.xml.repositoryInfo.getRepositories(credentials); // Add service repositories (these are not contained by the VDF but are there in the stack model) ListMultimap stackReposByOs = diff --git a/ambari-server/src/main/java/org/apache/ambari/server/state/stack/RepositoryXml.java b/ambari-server/src/main/java/org/apache/ambari/server/state/stack/RepositoryXml.java index ccb25e8..c872deb 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/state/stack/RepositoryXml.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/state/stack/RepositoryXml.java @@ -22,7 +22,8 @@ import java.util.Collection; import java.util.HashSet; import java.util.List; import java.util.Set; - +import java.util.regex.Matcher; +import java.util.regex.Pattern; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlAttribute; @@ -31,6 +32,7 @@ import javax.xml.bind.annotation.XmlElementWrapper; import javax.xml.bind.annotation.XmlRootElement; import javax.xml.bind.annotation.XmlTransient; +import com.google.common.base.Strings; import org.apache.ambari.server.stack.Validable; import org.apache.ambari.server.state.RepositoryInfo; @@ -40,6 +42,7 @@ import org.apache.ambari.server.state.RepositoryInfo; @XmlRootElement(name="reposinfo") @XmlAccessorType(XmlAccessType.FIELD) public class RepositoryXml implements Validable{ + private static final Pattern HTTP_URL_PROTOCOL_PATTERN = Pattern.compile("((http(s)*:\\/\\/))"); @XmlElement(name="latest") private String latestUri; @@ -219,6 +222,16 @@ public class RepositoryXml implements Validable{ * @return the list of repositories consumable by the web service. */ public List getRepositories() { +return getRepositories(null); + } + + /** + * @param credentials string with colon-separated username and password to be inserted in baseurl.
+ *If set to null baseurl is not changed. + * + * @return the list of repositories consumable by the web service. + */ + public List getRepositories(String credentials) { List repos = new ArrayList<>(); for (RepositoryXml.Os o : getOses()) { @@ -227,7 +240,15 @@ public class RepositoryXml implements Validable{ for (RepositoryXml.Repo r : o.getRepos()) { RepositoryInfo ri = new RepositoryInfo(); - ri.setBaseUrl(r.getBaseUrl()); + String baseUrl = r.getBaseUrl(); + + // add credentials from VDF url to baseurl. + if (!Strings.isNullOrEmpty(credentials)) { +Matcher matcher = HTTP_URL_PROTOCOL_PATTERN.matcher(baseUrl); +baseUrl = matcher.replaceAll("$1" + credentials
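The AMBARI-25433 commits above do two things: extract `user:password` from the VDF URL via `URI.getUserInfo()`, and splice those credentials into each repository base URL right after the scheme using `HTTP_URL_PROTOCOL_PATTERN`. A Python sketch of both steps (function names are illustrative; the diff is truncated before the `replaceAll` call completes, and the `"@"` suffix after the credentials is an assumption):

```python
import re
from urllib.parse import urlsplit

# mirrors HTTP_URL_PROTOCOL_PATTERN = Pattern.compile("((http(s)*:\\/\\/))")
HTTP_URL_PROTOCOL = re.compile(r"(https?://)")

def extract_credentials(vdf_url):
    """Return 'user:password' from a VDF URL, or None when absent
    (the URI.getUserInfo() step in VersionDefinitionResourceProvider)."""
    parts = urlsplit(vdf_url)
    if parts.username is None:
        return None
    return "%s:%s" % (parts.username, parts.password)

def add_credentials(base_url, credentials):
    """Insert credentials after the scheme, as getRepositories(credentials)
    does; assumed to produce scheme://user:password@host/... form."""
    if not credentials:
        return base_url
    # a callable replacement avoids backslash/group pitfalls in credentials
    return HTTP_URL_PROTOCOL.sub(lambda m: m.group(1) + credentials + "@",
                                 base_url)
```

This is why the AMBARI-25444 follow-ups then had to stop masking the base URL with `URLCredentialsHider` on read-back: the credentials embedded here must survive the round trip.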
[ambari] branch branch-2.7 updated (e9f71dc -> b1ddba6)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch branch-2.7
in repository https://gitbox.apache.org/repos/asf/ambari.git.

    from e9f71dc  AMBARI-25379 fix import order for checkstyle
     add b1ddba6  AMBARI-25403 Ambari Management Pack: Ambari throws 500 error while downloading OneFS client configuration (santal)

No new revisions were added by this update.

Summary of changes:
 .../addon-services/ONEFS/1.0.0/package/scripts/params_linux.py | 2 ++
 1 file changed, 2 insertions(+)
[ambari] branch branch-2.7 updated (447b5e3 -> e9f71dc)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 447b5e3 AMBARI-25349 Move Ambari metrics to guava 28.0-jre (dgrinenko) (#3096) add c12203b AMBARI-25379 Upgrade AMS Grafana version to 6.3.5 add 7e7ef60 AMBARI-25379 Upgrade AMS Grafana version to 6.3.5 add 5f3da43 AMBARI-25379 Upgrade AMS Grafana version to 6.3.5 add 523cb11 AMBARI-25379 Upgrade AMS Grafana version to 6.3.5 add d564ecd Fix the missing brackets in datasource.js add 5a13d2d Use the 6.4.2 version package of Grafana. add daa76be AMBARI-25379 Bump Ambari Metrics version to 0.2.0 add bd5a7fe AMBARI-25379 Minor code cleanup add d7894c9 AMBARI-25379 Fix topic selection on kafka topics dasboard add c9f76fb AMBARI-25379 Fix 'All' selection regarding templated variables add afb41fb AMBARI-25379 Bumb datasource version to 1.0.2 add e8d12b0 AMBARI-25379 fix missing quotation mark add 86cf3d7 AMBARI-25379 fix indentation add e9f71dc AMBARI-25379 fix import order for checkstyle No new revisions were added by this update. 
Summary of changes: ambari-metrics/ambari-metrics-assembly/pom.xml | 13 + .../{directives.js => config_ctrl.d.ts}| 23 +- .../{directives.js => config_ctrl.js} | 32 +- .../{directives.js => config_ctrl.js.map} | 19 +- .../{directives.js => config_ctrl.ts} | 24 +- .../{directives.js => datasource.d.ts} | 20 +- .../ambari-metrics/datasource.js | 290 ++-- .../ambari-metrics/directives.js | 1 + .../ambari-metrics/img/ams-screenshot.png | Bin 0 -> 157579 bytes .../ambari-metrics/img/apache-ambari-logo-sm.png | Bin 0 -> 11354 bytes .../ambari-metrics/img/apache-ambari-logo.png | Bin 0 -> 25912 bytes .../ambari-metrics/img}/apache-ambari-project.png | Bin .../ambari-metrics/{directives.js => module.d.ts} | 23 +- .../ambari-metrics/{directives.js => module.js}| 35 +- .../{directives.js => module.js.map} | 19 +- .../ambari-metrics/{directives.js => module.ts}| 26 +- .../{config.html => annotations.editor.html} | 4 +- .../ambari-metrics/partials/config.html| 3 +- .../ambari-metrics/partials/query.editor.html | 222 -- .../ambari-metrics/partials/query.options.html | 42 -- .../ambari-metrics/plugin.json | 33 +- .../ambari-metrics/queryCtrl.js| 160 --- .../{directives.js => query_ctrl.d.ts} | 39 +- .../ambari-metrics/query_ctrl.js | 152 +++ .../ambari-metrics/query_ctrl.js.map | 19 + .../ambari-metrics/query_ctrl.ts | 169 +++ .../conf/unix/ams-grafana.ini | 93 +++- ambari-metrics/ambari-metrics-grafana/pom.xml | 6 - .../src/main/python/core/config_reader.py | 15 + ambari-metrics/pom.xml | 4 +- .../server/state/PropertyUpgradeBehavior.java | 9 + .../server/upgrade/AbstractUpgradeCatalog.java | 1 + .../ambari/server/upgrade/UpgradeCatalog275.java | 9 +- .../0.2.0/configuration/ams-grafana-ini.xml| 178 +++- .../AMBARI_METRICS/0.2.0/metainfo.xml | 27 ++ .../0.2.0/package/scripts/metrics_grafana_util.py | 490 + .../templates/metrics_grafana_datasource.json.j2 | 33 ++ 37 files changed, 1535 insertions(+), 698 deletions(-) copy 
ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => config_ctrl.d.ts} (61%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => config_ctrl.js} (58%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => config_ctrl.js.map} (61%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => config_ctrl.ts} (61%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => datasource.d.ts} (61%) create mode 100644 ambari-metrics/ambari-metrics-grafana/ambari-metrics/img/ams-screenshot.png create mode 100644 ambari-metrics/ambari-metrics-grafana/ambari-metrics/img/apache-ambari-logo-sm.png create mode 100644 ambari-metrics/ambari-metrics-grafana/ambari-metrics/img/apache-ambari-logo.png copy {docs/src/site/resources/images => ambari-metrics/ambari-metrics-grafana/ambari-metrics/img}/apache-ambari-project.png (100%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => module.d.ts} (61%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => module.js} (51%) copy ambari-metrics/ambari-metrics-grafana/ambari-metrics/{directives.js => module.js.map} (61%) copy ambari-metrics/ambari-metrics-grafana/ambari-met
[ambari] branch branch-2.7 updated (fd305e7 -> 2dbaddb)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch branch-2.7
in repository https://gitbox.apache.org/repos/asf/ambari.git.

    from fd305e7  AMBARI-25394 Ambari Metrics whitelisting is failing on * wildcard for HBase Tables (santal) (#3104)
     add 2dbaddb  AMBARI-25399 Add hive PAM support for service check and alerts (ihorlukianov)

No new revisions were added by this update.

Summary of changes:
 .../resource_management/libraries/functions/hive_check.py | 8 +++-
 .../package/alerts/alert_hive_interactive_thrift_port.py | 13 +++--
 .../0.12.0.2.0/package/alerts/alert_hive_thrift_port.py | 13 +++--
 .../HIVE/0.12.0.2.0/package/scripts/params_linux.py | 5 +
 .../HIVE/0.12.0.2.0/package/scripts/service_check.py | 3 ++-
 5 files changed, 36 insertions(+), 6 deletions(-)
[ambari] branch branch-2.7 updated (ee5a90c -> 49827cc)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch branch-2.7
in repository https://gitbox.apache.org/repos/asf/ambari.git.

    from ee5a90c  AMBARI-25327 : Prevent NPE for bindNotificationDispatchers and getServiceConfigVersionRequest(backport to branch-2.7) (#3038)
     add 49827cc  AMBARI-25156. /var/log/messages gets filled with unhandled Python exception for client modules in ambari-agent (aonishuk)

No new revisions were added by this update.

Summary of changes:
 .../src/main/python/resource_management/libraries/script/script.py | 4 ++++
 1 file changed, 4 insertions(+)
[ambari] branch branch-2.7 updated: AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 461b5ba AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk) 461b5ba is described below commit 461b5bafe8a0ea66081700cff0a1fe4a96109b48 Author: Andrew Onishuk AuthorDate: Mon Oct 7 11:09:37 2019 +0300 AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk) --- ambari-server/src/main/assemblies/server.xml | 5 - 1 file changed, 5 deletions(-) diff --git a/ambari-server/src/main/assemblies/server.xml b/ambari-server/src/main/assemblies/server.xml index 6293dd4..67858e7 100644 --- a/ambari-server/src/main/assemblies/server.xml +++ b/ambari-server/src/main/assemblies/server.xml @@ -393,11 +393,6 @@ 755 - src/main/resources/index.html - /var/lib/ambari-server/resources - - - 755 src/main/resources/kerberos.json /var/lib/ambari-server/resources
[ambari] branch branch-2.7 updated: AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new a00a76d AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk) a00a76d is described below commit a00a76dd33343d54f8e2f9e504f5cea80e48c0d1 Author: Andrew Onishuk AuthorDate: Fri Oct 4 14:18:36 2019 +0300 AMBARI-25390. Disable indexing in /resources endpoint and sub-directories (aonishuk) --- .../apache/ambari/server/controller/AmbariServer.java | 1 + ambari-server/src/main/resources/index.html | 17 - 2 files changed, 1 insertion(+), 17 deletions(-) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariServer.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariServer.java index bd99527..b7f44ff 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariServer.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/AmbariServer.java @@ -511,6 +511,7 @@ public class AmbariServer { File resourcesDirectory = new File(configs.getResourceDirPath()); ServletHolder resources = new ServletHolder(DefaultServlet.class); resources.setInitParameter("resourceBase", resourcesDirectory.getParent()); + resources.setInitParameter("dirAllowed", "false"); root.addServlet(resources, "/resources/*"); resources.setInitOrder(5); diff --git a/ambari-server/src/main/resources/index.html b/ambari-server/src/main/resources/index.html deleted file mode 100644 index 734b094..000 --- a/ambari-server/src/main/resources/index.html +++ /dev/null @@ -1,17 +0,0 @@ -
[ambari] branch branch-2.7 updated: AMBARI-25359. Hive Service Check fails during Rolling Upgrade from HDP-3.1.0.0 to HDP-3.1.4.0 (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 8573015 AMBARI-25359. Hive Service Check fails during Rolling Upgrade from HDP-3.1.0.0 to HDP-3.1.4.0 (aonishuk) 8573015 is described below commit 85730159a4c7aab93b7801d215f76a06da9f5350 Author: Andrew Onishuk AuthorDate: Wed Aug 14 14:19:53 2019 +0300 AMBARI-25359. Hive Service Check fails during Rolling Upgrade from HDP-3.1.0.0 to HDP-3.1.4.0 (aonishuk) --- .../main/python/resource_management/libraries/functions/hive_check.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/functions/hive_check.py b/ambari-common/src/main/python/resource_management/libraries/functions/hive_check.py index fa3eb7e..538bcfb 100644 --- a/ambari-common/src/main/python/resource_management/libraries/functions/hive_check.py +++ b/ambari-common/src/main/python/resource_management/libraries/functions/hive_check.py @@ -78,8 +78,8 @@ def check_thrift_port_sasl(address, port, hive_auth="NOSASL", key=None, kinitcmd # -n the user to connect as (ignored when using the hive principal in the URL, can be different from the user running the beeline command) # -e ';' executes a SQL command of NOOP - cmd = ("beeline -u '%s' %s -e ';' 2>&1 | awk '{print}' | grep -i " + \ - "-e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery'") % \ + cmd = ("! (beeline -u '%s' %s -e ';' 2>&1 | awk '{print}' | grep -vz -i " + \ + "-e 'Connected to:' -e 'Transaction isolation:' -e 'inactive HS2 instance; use service discovery')") % \ (format(";".join(beeline_url)), format(credential_str)) Execute(cmd,
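The rewritten pipeline above succeeds only when the beeline output contains one of the healthy markers: `grep -z` treats the whole stream as a single record, `-v` selects it only when it matches none of the patterns, and the leading `!` inverts the result. The predicate that pipeline computes can be modelled directly (a sketch; the marker list is copied from the diff, the function name is illustrative):

```python
def hs2_output_is_healthy(beeline_output):
    """True when the beeline output contains one of the markers the
    service check greps for (case-insensitively), i.e. when the
    `! (... | grep -vz -i ...)` pipeline would exit 0."""
    markers = (
        "connected to:",
        "transaction isolation:",
        "inactive hs2 instance; use service discovery",
    )
    text = beeline_output.lower()
    return any(marker in text for marker in markers)
```

The old command grepped line-by-line for the same markers, so an output with none of them still exited 0 via the pipeline's last stage in some shells; the negated whole-record match makes the failure explicit so `Execute(cmd, ...)` raises.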
[ambari] 01/02: AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 (ihor lukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit f2063984190b932e5a77220c4a80f4665e0617fa Author: Ihor Lukianov AuthorDate: Thu Jun 20 12:40:50 2019 +0300 AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 (ihor lukianov) --- .../ambari-metrics/datasource.js | 4 + .../core/timeline/PhoenixHBaseAccessor.java| 101 + .../discovery/TimelineMetricMetadataManager.java | 123 + .../timeline/query/MetadataQueryCondition.java | 82 ++ .../core/timeline/query/PhoenixTransactSQL.java| 50 + .../timeline/query/TransientMetricCondition.java | 29 - .../timeline/discovery/TestMetadataManager.java| 82 +- 7 files changed, 422 insertions(+), 49 deletions(-) diff --git a/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js b/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js index e7cd850..a5796b3 100644 --- a/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js +++ b/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js @@ -257,6 +257,10 @@ define([ // To speed up querying on templatized dashboards. var getAllHostData = function(target) { var instanceId = typeof target.templatedCluster == 'undefined' ? '' : '=' + target.templatedCluster; +var appId = target.app; +if ((appId === 'nifi' || appId === 'druid') && (!instanceId || instanceId === '=')) { +instanceId = "%" +} var precision = target.precision === 'default' || typeof target.precision == 'undefined' ? '' : '=' + target.precision; var metricAggregator = target.aggregator === "none" ? 
'' : '._' + target.aggregator; diff --git a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/ambari/metrics/core/timeline/PhoenixHBaseAccessor.java b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/ambari/metrics/core/timeline/PhoenixHBaseAccessor.java index c0427e7..b6ff202 100644 --- a/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/ambari/metrics/core/timeline/PhoenixHBaseAccessor.java +++ b/ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/ambari/metrics/core/timeline/PhoenixHBaseAccessor.java @@ -121,6 +121,7 @@ import org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataK import org.apache.ambari.metrics.core.timeline.discovery.TimelineMetricMetadataManager; import org.apache.ambari.metrics.core.timeline.query.Condition; import org.apache.ambari.metrics.core.timeline.query.DefaultPhoenixDataSource; +import org.apache.ambari.metrics.core.timeline.query.MetadataQueryCondition; import org.apache.ambari.metrics.core.timeline.query.PhoenixConnectionProvider; import org.apache.ambari.metrics.core.timeline.query.PhoenixTransactSQL; import org.apache.ambari.metrics.core.timeline.query.SplitByMetricNamesCondition; @@ -1990,6 +1991,106 @@ public class PhoenixHBaseAccessor { return metadataMap; } + public List scanMetricMetadataForWildCardRequest(Collection metricNames, + String appId, + String instanceId) throws SQLException { +List metadataList = new ArrayList<>(); +Connection conn = getConnection(); +PreparedStatement stmt = null; +ResultSet rs = null; + +MetadataQueryCondition metadataQueryCondition = new MetadataQueryCondition(new ArrayList<>(metricNames), appId, instanceId); +stmt = PhoenixTransactSQL.prepareScanMetricMetadataSqlStmt(conn, metadataQueryCondition); +try { + if (stmt != null) { +rs = stmt.executeQuery(); +while (rs.next()) { + TimelineMetricMetadata metadata = new TimelineMetricMetadata( +rs.getString("METRIC_NAME"), +rs.getString("APP_ID"), 
+rs.getString("INSTANCE_ID"), +null, +null, +null, +false, +true + ); + + metadata.setUuid(rs.getBytes("UUID")); + metadataList.add(metadata); +} + } +} finally { + if (rs != null) { +try { + rs.close(); +} catch (SQLException e) { + // Ignore +} + } + if (stmt != null) { +try { + stmt.close(); +} catch (SQLException e) { + // Ignore +} + } + if (conn != null) { +try { + conn.close(); +} catch (SQLException sql) { + // Ignore +} + } +} + +return metada
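The new `scanMetricMetadataForWildCardRequest` above issues one Phoenix query whose metric-name predicates may contain SQL `%` wildcards, then maps each row into a `TimelineMetricMetadata` and closes the result set, statement, and connection in a `finally` block. A minimal Python sketch of the same shape, using sqlite3 in place of Phoenix and a hypothetical `metrics_metadata` table (table and column names are ours, chosen to echo the Java code):

```python
import sqlite3

def scan_metric_metadata(conn, metric_patterns, app_id):
    """Hedged sketch of a wildcard metadata scan: one LIKE predicate per
    requested pattern, every matching row mapped to a plain dict."""
    clause = ' OR '.join('METRIC_NAME LIKE ?' for _ in metric_patterns)
    sql = ('SELECT METRIC_NAME, APP_ID, INSTANCE_ID FROM metrics_metadata '
           'WHERE (%s) AND APP_ID = ?' % clause)
    cur = conn.execute(sql, list(metric_patterns) + [app_id])
    try:
        return [dict(zip(('metric_name', 'app_id', 'instance_id'), row))
                for row in cur.fetchall()]
    finally:
        cur.close()  # mirrors the Java finally-block cleanup of rs/stmt
```

The `finally` cleanup is kept deliberately, since the Java original cannot use try-with-resources on this code path.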
[ambari] branch branch-2.7 updated (54dc60b -> 9e9ddf7)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 54dc60b [AMBARI-25324] : Upgrade commons-lang/lang3 to latest versions (branch-2.7) (#3031) new f206398 AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 (ihor lukianov) new 9e9ddf7 AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 - small adjustment(ihor lukianov) The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../ambari-metrics/datasource.js | 4 + .../core/timeline/PhoenixHBaseAccessor.java| 101 + .../discovery/TimelineMetricMetadataManager.java | 123 + .../timeline/query/MetadataQueryCondition.java | 82 ++ .../core/timeline/query/PhoenixTransactSQL.java| 50 + .../timeline/query/TransientMetricCondition.java | 29 - .../timeline/discovery/TestMetadataManager.java| 82 +- 7 files changed, 422 insertions(+), 49 deletions(-) create mode 100644 ambari-metrics/ambari-metrics-timelineservice/src/main/java/org/apache/ambari/metrics/core/timeline/query/MetadataQueryCondition.java
[ambari] 02/02: AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 - small adjustment(ihor lukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 9e9ddf752f36aa9340d6dfdc23ecccfcd646fc53 Author: Ihor Lukianov AuthorDate: Thu Jun 27 12:37:11 2019 +0300 AMBARI-25320 Backport AMBARI-24872 and AMBARI-24723 for ambari 2.7.4 - small adjustment(ihor lukianov) --- ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js b/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js index a5796b3..ec5ee10 100644 --- a/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js +++ b/ambari-metrics/ambari-metrics-grafana/ambari-metrics/datasource.js @@ -259,7 +259,7 @@ define([ var instanceId = typeof target.templatedCluster == 'undefined' ? '' : '=' + target.templatedCluster; var appId = target.app; if ((appId === 'nifi' || appId === 'druid') && (!instanceId || instanceId === '=')) { -instanceId = "%" +instanceId = "=%" } var precision = target.precision === 'default' || typeof target.precision == 'undefined' ? '' : '=' + target.precision;
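Taken together, the two AMBARI-25320 commits change how the Grafana datasource builds the `instanceId` query fragment: for `nifi` and `druid`, an empty templated cluster now expands to a SQL-style wildcard, and the 02/02 adjustment restores the missing `=` separator. A small Python restatement of that logic (the function name is ours, not the datasource's):

```python
def instance_id_fragment(app_id, templated_cluster):
    # '' when no cluster is templated, '=<cluster>' otherwise -- as in datasource.js
    instance_id = '' if templated_cluster is None else '=' + templated_cluster
    if app_id in ('nifi', 'druid') and instance_id in ('', '='):
        # the first patch used '%', which dropped the '=' separator from the
        # query string; the follow-up commit makes it '=%' so the wildcard
        # actually reaches the collector as instanceId=%
        instance_id = '=%'
    return instance_id
```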
[ambari] branch branch-2.7 updated: AMBARI-25277. Security Concern as ambari-server.log and ambari-agent.log shows cleartext passwords. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new bb76491 AMBARI-25277. Security Concern as ambari-server.log and ambari-agent.log shows cleartext passwords. (aonishuk) bb76491 is described below commit bb764916e2d8e6f380dc8ac4d1686aae1b5eda5b Author: Andrew Onishuk AuthorDate: Mon May 13 14:24:09 2019 +0300 AMBARI-25277. Security Concern as ambari-server.log and ambari-agent.log shows cleartext passwords. (aonishuk) --- .../src/main/python/ambari_agent/listeners/ServerResponsesListener.py | 2 +- ambari-server/src/main/resources/stacks/stack_advisor.py| 1 - 2 files changed, 1 insertion(+), 2 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py b/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py index 02d60a5..a4af571 100644 --- a/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py +++ b/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py @@ -74,7 +74,7 @@ class ServerResponsesListener(EventListener): This string will be used to log received messsage of this type """ if Constants.CORRELATION_ID_STRING in headers: - correlation_id = headers[Constants.CORRELATION_ID_STRING] + correlation_id = int(headers[Constants.CORRELATION_ID_STRING]) if correlation_id in self.logging_handlers: message_json = self.logging_handlers[correlation_id](headers, message_json) diff --git a/ambari-server/src/main/resources/stacks/stack_advisor.py b/ambari-server/src/main/resources/stacks/stack_advisor.py index 850c21c..d82cf80 100644 --- a/ambari-server/src/main/resources/stacks/stack_advisor.py +++ b/ambari-server/src/main/resources/stacks/stack_advisor.py @@ -1502,7 +1502,6 @@ class DefaultStackAdvisor(StackAdvisor): if siteProperties is not None: 
siteRecommendations = recommendedDefaults[siteName]["properties"] self.logger.info("SiteName: %s, method: %s" % (siteName, method.__name__)) -self.logger.info("Site properties: %s" % str(siteProperties)) self.logger.info("Recommendations: %s" % str(siteRecommendations)) return method(siteProperties, siteRecommendations, configurations, services, hosts) return []
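The server-side half of AMBARI-25277 simply deletes the `Site properties: …` log line so recommended configurations can no longer leak passwords into ambari-server.log. A hedged alternative sketch (not what the commit does) is to mask sensitive-looking keys instead of dropping the line entirely; the key pattern below is our assumption:

```python
import re

SENSITIVE_KEY = re.compile(r'pass(word)?|secret|credential', re.IGNORECASE)

def redact(site_properties):
    """Return a copy safe for INFO-level logging: values under
    sensitive-looking keys are replaced with a fixed mask."""
    return {k: ('*****' if SENSITIVE_KEY.search(k) else v)
            for k, v in site_properties.items()}
```

Dropping the line, as the commit does, is the safer choice when no key pattern can be trusted to catch every secret.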
[ambari] branch branch-2.5 updated (0d99a88 -> a0f4e5c)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch branch-2.5 in repository https://gitbox.apache.org/repos/asf/ambari.git. from 0d99a88 Add jdeb support (#1323) new 182c4f3 [AMBARI-25266] fix test failures on branch-2.5 (ihorlukianov) new a627708 [AMBARI-25266] fix build error at Findbugs with Maven 3.6 (ihorlukianov) new dda60fb [AMBARI-25266] Ambari Metrics Storm Sink compilation error due to storm-1.1.0-SNAPSHOT (ihorlukianov) new a0f4e5c [AMBARI-25266] fixed ambari-web unit test (ihorlukianov) The 4 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: ambari-metrics/ambari-metrics-storm-sink/pom.xml | 2 +- ambari-server/pom.xml | 2 +- .../server/metadata/RoleCommandOrderTest.java | 45 +- .../ambari/server/metadata/RoleGraphTest.java | 9 - .../server/stageplanner/TestStagePlanner.java | 5 +++ .../HDP/{2.0.5 => 0.2}/role_command_order.json | 0 ambari-web/test/utils/date/timezone_test.js| 2 +- 7 files changed, 42 insertions(+), 23 deletions(-) copy ambari-server/src/test/resources/stacks/HDP/{2.0.5 => 0.2}/role_command_order.json (100%)
[ambari] 02/04: [AMBARI-25266] fix build error at Findbugs with Maven 3.6 (ihorlukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.5 in repository https://gitbox.apache.org/repos/asf/ambari.git commit a6277084a08c10e93fe7cd5e1c4c23c1916ca3fa Author: Ihor Lukianov AuthorDate: Mon May 6 16:02:20 2019 +0300 [AMBARI-25266] fix build error at Findbugs with Maven 3.6 (ihorlukianov) --- ambari-server/pom.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-server/pom.xml b/ambari-server/pom.xml index 2b93300..c4d59bd 100644 --- a/ambari-server/pom.xml +++ b/ambari-server/pom.xml @@ -540,7 +540,7 @@ org.codehaus.mojo findbugs-maven-plugin -3.0.3 +3.0.5 false Low
[ambari] 01/04: [AMBARI-25266] fix test failures on branch-2.5 (ihorlukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.5 in repository https://gitbox.apache.org/repos/asf/ambari.git commit 182c4f3b730265917d4f6b1ae3842e9874b81533 Author: Ihor Lukianov AuthorDate: Fri May 3 11:51:26 2019 +0300 [AMBARI-25266] fix test failures on branch-2.5 (ihorlukianov) --- .../server/metadata/RoleCommandOrderTest.java | 45 + .../ambari/server/metadata/RoleGraphTest.java | 9 +- .../server/stageplanner/TestStagePlanner.java | 5 + .../stacks/HDP/0.2/role_command_order.json | 104 + 4 files changed, 143 insertions(+), 20 deletions(-) diff --git a/ambari-server/src/test/java/org/apache/ambari/server/metadata/RoleCommandOrderTest.java b/ambari-server/src/test/java/org/apache/ambari/server/metadata/RoleCommandOrderTest.java index a8eadb6..7613c17 100644 --- a/ambari-server/src/test/java/org/apache/ambari/server/metadata/RoleCommandOrderTest.java +++ b/ambari-server/src/test/java/org/apache/ambari/server/metadata/RoleCommandOrderTest.java @@ -18,6 +18,15 @@ package org.apache.ambari.server.metadata; +import static junit.framework.Assert.assertEquals; +import static junit.framework.Assert.assertFalse; +import static junit.framework.Assert.assertNotNull; +import static junit.framework.Assert.assertTrue; +import static org.easymock.EasyMock.createMock; +import static org.easymock.EasyMock.expect; +import static org.easymock.EasyMock.replay; +import static org.easymock.EasyMock.verify; + import java.io.IOException; import java.io.InputStream; import java.sql.SQLException; @@ -55,14 +64,6 @@ import com.google.inject.Guice; import com.google.inject.Injector; import junit.framework.Assert; -import static junit.framework.Assert.assertEquals; -import static junit.framework.Assert.assertFalse; -import static junit.framework.Assert.assertNotNull; -import static junit.framework.Assert.assertTrue; -import static org.easymock.EasyMock.createMock; -import static org.easymock.EasyMock.expect; -import static 
org.easymock.EasyMock.replay; -import static org.easymock.EasyMock.verify; public class RoleCommandOrderTest { @@ -95,7 +96,8 @@ public class RoleCommandOrderTest { ClusterImpl cluster = createMock(ClusterImpl.class); Service service = createMock(Service.class); expect(cluster.getClusterId()).andReturn(1L); -expect(cluster.getCurrentStackVersion()).andReturn(new StackId("HDP", "2.0.6")); +expect(cluster.getClusterName()).andReturn("c1"); +expect(cluster.getDesiredStackVersion()).andReturn(new StackId("HDP", "2.0.6")); expect(cluster.getService("GLUSTERFS")).andReturn(service); expect(cluster.getService("HDFS")).andReturn(null); expect(cluster.getService("YARN")).andReturn(null); @@ -138,12 +140,13 @@ public class RoleCommandOrderTest { ClusterImpl cluster = createMock(ClusterImpl.class); expect(cluster.getService("GLUSTERFS")).andReturn(null); expect(cluster.getClusterId()).andReturn(1L); +expect(cluster.getClusterName()).andReturn("c1"); Service hdfsService = createMock(Service.class); expect(cluster.getService("HDFS")).andReturn(hdfsService).atLeastOnce(); expect(cluster.getService("YARN")).andReturn(null).atLeastOnce(); expect(hdfsService.getServiceComponent("JOURNALNODE")).andReturn(null); -expect(cluster.getCurrentStackVersion()).andReturn(new StackId("HDP", "2.0.6")); +expect(cluster.getDesiredStackVersion()).andReturn(new StackId("HDP", "2.0.6")); replay(cluster); replay(hdfsService); @@ -181,13 +184,14 @@ public class RoleCommandOrderTest { ClusterImpl cluster = createMock(ClusterImpl.class); expect(cluster.getService("GLUSTERFS")).andReturn(null); expect(cluster.getClusterId()).andReturn(1L); +expect(cluster.getClusterName()).andReturn("c1"); Service hdfsService = createMock(Service.class); ServiceComponent journalnodeSC = createMock(ServiceComponent.class); expect(cluster.getService("HDFS")).andReturn(hdfsService).atLeastOnce(); expect(cluster.getService("YARN")).andReturn(null); 
expect(hdfsService.getServiceComponent("JOURNALNODE")).andReturn(journalnodeSC); -expect(cluster.getCurrentStackVersion()).andReturn(new StackId("HDP", "2.0.6")); +expect(cluster.getDesiredStackVersion()).andReturn(new StackId("HDP", "2.0.6")); replay(cluster); replay(hdfsService); @@ -222,8 +226,9 @@ public class RoleCommandOrderTest { ServiceComponentHost sch2 = createMock(ServiceComponentHostImpl.class); expect(cluster.getService("GLUSTERFS")).andReturn(null); expect(cluster.getClusterId()).andReturn(1L); +expec
[ambari] 04/04: [AMBARI-25266] fixed ambari-web unit test (ihorlukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.5 in repository https://gitbox.apache.org/repos/asf/ambari.git commit a0f4e5c820b79c73c3857fbf94fb1508eebaa15c Author: Ihor Lukianov AuthorDate: Tue May 7 11:56:13 2019 +0300 [AMBARI-25266] fixed ambari-web unit test (ihorlukianov) --- ambari-web/test/utils/date/timezone_test.js | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-web/test/utils/date/timezone_test.js b/ambari-web/test/utils/date/timezone_test.js index 37a8c8c..ebf5e68 100644 --- a/ambari-web/test/utils/date/timezone_test.js +++ b/ambari-web/test/utils/date/timezone_test.js @@ -139,7 +139,7 @@ describe('timezoneUtils', function () { it('Detect UTC+1', function () { mockTimezoneOffset(0, 60); var tz = timezoneUtils.detectUserTimezone(); - expect(tz).to.contain('0-60|Atlantic'); + expect(tz).to.contain('0-60|Africa'); }); it('Detect UTC+1 for Europe', function () {
[ambari] 03/04: [AMBARI-25266] Ambari Metrics Storm Sink compilation error due to storm-1.1.0-SNAPSHOT (ihorlukianov)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.5 in repository https://gitbox.apache.org/repos/asf/ambari.git commit dda60fb7dce5b1d8ddf620b232987e68090f40a4 Author: Ihor Lukianov AuthorDate: Mon May 6 18:09:05 2019 +0300 [AMBARI-25266] Ambari Metrics Storm Sink compilation error due to storm-1.1.0-SNAPSHOT (ihorlukianov) --- ambari-metrics/ambari-metrics-storm-sink/pom.xml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-metrics/ambari-metrics-storm-sink/pom.xml b/ambari-metrics/ambari-metrics-storm-sink/pom.xml index 779ee1c..5f17bc1 100644 --- a/ambari-metrics/ambari-metrics-storm-sink/pom.xml +++ b/ambari-metrics/ambari-metrics-storm-sink/pom.xml @@ -31,7 +31,7 @@ limitations under the License. jar -1.1.0-SNAPSHOT +1.1.0
[ambari] branch branch-2.7 updated: AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 11fd4eb AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk) 11fd4eb is described below commit 11fd4eb25c8bd4d744c532e853d8bfad72a7b77b Author: Andrew Onishuk AuthorDate: Fri Feb 1 11:20:49 2019 +0200 AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk) --- ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py | 1 + 1 file changed, 1 insertion(+) diff --git a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py index 6d15e78..1ce2ae5 100644 --- a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py +++ b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py @@ -347,6 +347,7 @@ class CustomServiceOrchestrator(object): # forces a hash challenge on the directories to keep them updated, even # if the return type is not used +self.file_cache.get_host_scripts_base_dir(command) base_dir = self.file_cache.get_service_base_dir(command) script_path = self.resolve_script_path(base_dir, script) script_tuple = (script_path, base_dir)
[ambari] branch trunk updated: AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 8bc8679 AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk) 8bc8679 is described below commit 8bc86792ea24162a9f366926d660ce9d251db0da Author: Andrew Onishuk AuthorDate: Thu Jan 24 09:10:19 2019 +0200 AMBARI-25123. /var/lib/ambari-agent/cache not updating (Ambari 2.7) (aonishuk) --- ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py | 1 + 1 file changed, 1 insertion(+) diff --git a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py index 13829f9..4443db3 100644 --- a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py +++ b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py @@ -352,6 +352,7 @@ class CustomServiceOrchestrator(object): # forces a hash challenge on the directories to keep them updated, even # if the return type is not used +self.file_cache.get_host_scripts_base_dir(command) base_dir = self.file_cache.get_service_base_dir(command) script_path = self.resolve_script_path(base_dir, script) script_tuple = (script_path, base_dir)
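The AMBARI-25123 fix in both branches is a single added call whose return value is discarded: `get_host_scripts_base_dir(command)` exists only to trigger the hash challenge that re-syncs the agent cache before the service base dir is resolved. A hedged sketch of the patched flow (helper name `resolve_script` is ours):

```python
import os

def resolve_script(file_cache, command, script):
    # The first call's result is intentionally unused: it forces a hash
    # challenge so /var/lib/ambari-agent/cache/host_scripts is refreshed
    # even on code paths that never read host scripts directly.
    file_cache.get_host_scripts_base_dir(command)
    base_dir = file_cache.get_service_base_dir(command)
    return os.path.join(base_dir, script)
```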
[ambari] branch trunk updated: [AMBARI-25106] Add Ozone JMX ports (dsen)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 8f670c7 [AMBARI-25106] Add Ozone JMX ports (dsen) 8f670c7 is described below commit 8f670c780c9c16efed9d966d1f0f6e71837c465d Author: Dmitry Sen AuthorDate: Thu Jan 17 13:13:32 2019 +0700 [AMBARI-25106] Add Ozone JMX ports (dsen) --- .../org/apache/ambari/server/controller/jmx/JMXPropertyProvider.java| 2 ++ 1 file changed, 2 insertions(+) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/jmx/JMXPropertyProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/jmx/JMXPropertyProvider.java index c22f90e..166edaf 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/jmx/JMXPropertyProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/jmx/JMXPropertyProvider.java @@ -110,6 +110,8 @@ public class JMXPropertyProvider extends ThreadPoolEnabledPropertyProvider { DEFAULT_JMX_PORTS.put("NODEMANAGER", "8042"); DEFAULT_JMX_PORTS.put("JOURNALNODE", "8480"); DEFAULT_JMX_PORTS.put("STORM_REST_API", "8745"); +DEFAULT_JMX_PORTS.put("OZONE_MANAGER", "9874"); +DEFAULT_JMX_PORTS.put("STORAGE_CONTAINER_MANAGER", "9876"); AD_HOC_PROPERTIES.put("NAMENODE", Collections.singletonMap("metrics/dfs/FSNamesystem/HAState",
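AMBARI-25106 extends a static component-to-port map. A Python sketch of how such a map is typically consulted when no port is configured explicitly (the `jmx_url` helper is hypothetical; the port values are the ones visible in the Java map):

```python
DEFAULT_JMX_PORTS = {
    'NODEMANAGER': '8042',
    'JOURNALNODE': '8480',
    'STORM_REST_API': '8745',
    'OZONE_MANAGER': '9874',              # added by AMBARI-25106
    'STORAGE_CONTAINER_MANAGER': '9876',  # added by AMBARI-25106
}

def jmx_url(host, component, configured_port=None):
    # Prefer an explicitly configured port; fall back to the static default.
    port = configured_port or DEFAULT_JMX_PORTS.get(component)
    if port is None:
        raise KeyError('no JMX port known for %s' % component)
    return 'http://%s:%s/jmx' % (host, port)
```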
[ambari] 04/04: Delete server.sh
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit e8af38203520ddff14bcc8c8ea2b5c5c010fb642 Author: aonishuk AuthorDate: Thu Jan 10 08:22:17 2019 +0200 Delete server.sh --- server.sh | 52 1 file changed, 52 deletions(-) diff --git a/server.sh b/server.sh deleted file mode 100644 index 880ab44..000 --- a/server.sh +++ /dev/null @@ -1,52 +0,0 @@ -#!/bin/bash -yum install wget -y -wget -O /etc/yum.repos.d/ambari.repo http://10.240.0.30/ambari.repo -yum clean all; yum install ambari-server -y -sed -i -f /home/ambari/ambari-server/src/main/resources/stacks/PERF/install_packages.sed /var/lib/ambari-server/resources/custom_actions/scripts/install_packages.py -sed -i -f /home/ambari/ambari-server/src/main/resources/stacks/PERF/install_packages.sed /var/lib/ambari-agent/cache/custom_actions/scripts/install_packages.py - - -cd /; wget http://central.maven.org/maven2/mysql/mysql-connector-java/5.1.40/mysql-connector-java-5.1.40.jar; -mkdir /usr/share/java; chmod 777 /usr/share/java;cp mysql-connector-java-5.1.40.jar /usr/share/java/; chmod 777 /usr/share/java/mysql-connector-java-5.1.40.jar; -ln -s /usr/share/java/mysql-connector-java-5.1.40.jar /usr/share/java/mysql-connector-java.jar; -cd /etc/yum.repos.d/; wget http://repo.mysql.com/mysql-community-release-el6-5.noarch.rpm; rpm -ivh mysql-community-release-el6-5.noarch.rpm;yum clean all; yum install mysql-server -y -sed -i -e 's/mysqld]/mysqld]\nmax_allowed_packet=1024M\njoin_buffer_size=512M\nsort_buffer_size=128M\nread_rnd_buffer_size=128M\ninnodb_buffer_pool_size=16G\ninnodb_file_io_threads=16\ninnodb_thread_concurrency=32\nkey_buffer_size=16G\nquery_cache_limit=16M\nquery_cache_size=512M\nthread_cache_size=128\ninnodb_log_buffer_size=512M/1' /etc/my.cnf -service mysqld start -mysql -uroot -e "CREATE DATABASE ambari;" -mysql -uroot -e "SOURCE 
/var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql;" ambari -mysql -uroot -e "CREATE USER 'ambari'@'%' IDENTIFIED BY 'bigdata';" -mysql -uroot -e "GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%%';" -mysql -uroot -e "CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'bigdata';" -mysql -uroot -e "GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';" -mysql -uroot -e "CREATE USER 'ambari'@'perf-server-test-perf-1.c.pramod-thangali.internal' IDENTIFIED BY 'bigdata';" -mysql -uroot -e "GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'perf-server-test-perf-1.c.pramod-thangali.internal';" -mysql -uroot -e "FLUSH PRIVILEGES;" - - -ambari-server setup -s -ambari-server setup --database mysql --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar --databasehost=localhost --databaseport=3306 --databasename=ambari --databaseusername=ambari --databasepassword=bigdata -sed -i -e 's/=postgres/=mysql/g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/server.persistence.type=local/server.persistence.type=remote/g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/local.database.user=postgres//g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/server.jdbc.postgres.schema=ambari//g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/agent.threadpool.size.max=25/agent.threadpool.size.max=100/g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/client.threadpool.size.max=25/client.threadpool.size.max=65/g' /etc/ambari-server/conf/ambari.properties -sed -i -e 's/false/true/g' /var/lib/ambari-server/resources/stacks/PERF/1.0/metainfo.xml -sed -i -e 's/false/true/g' /var/lib/ambari-server/resources/stacks/PERF/2.0/metainfo.xml -sed -i -e 's/-Xmx2048m/-Xmx16384m/g' /var/lib/ambari-server/ambari-env.sh - -echo 'server.jdbc.driver=com.mysql.jdbc.Driver' >> /etc/ambari-server/conf/ambari.properties -echo 'server.jdbc.rca.url=jdbc:mysql://perf-server-test-perf-1.c.pramod-thangali.internal:3306/ambari' >> /etc/ambari-server/conf/ambari.properties 
-echo 'server.jdbc.rca.driver=com.mysql.jdbc.Driver' >> /etc/ambari-server/conf/ambari.properties -echo 'server.jdbc.url=jdbc:mysql://perf-server-test-perf-1.c.pramod-thangali.internal:3306/ambari' >> /etc/ambari-server/conf/ambari.properties -echo 'server.jdbc.port=3306' >> /etc/ambari-server/conf/ambari.properties -echo 'server.jdbc.hostname=localhost' >> /etc/ambari-server/conf/ambari.properties -echo 'server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar' >> /etc/ambari-server/conf/ambari.properties -echo 'alerts.cache.enabled=true' >> /etc/ambari-server/conf/ambari.properties -echo 'alerts.cache.size=10' >> /etc/ambari-server/conf/ambari.properties -echo 'alerts.execution.scheduler.maxThreads=4' >> /etc/ambari-server/conf/ambari.properties -echo 'security.temporary.keys
[ambari] 03/04: Delete deploy-gce-perf-cluster.py.rej
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit b3be244b0d6b68cffd128c7db43830cae8077a51 Author: aonishuk AuthorDate: Thu Jan 10 08:22:03 2019 +0200 Delete deploy-gce-perf-cluster.py.rej --- contrib/utils/perf/deploy-gce-perf-cluster.py.rej | 27 --- 1 file changed, 27 deletions(-) diff --git a/contrib/utils/perf/deploy-gce-perf-cluster.py.rej b/contrib/utils/perf/deploy-gce-perf-cluster.py.rej deleted file mode 100644 index 06eca6e..000 --- a/contrib/utils/perf/deploy-gce-perf-cluster.py.rej +++ /dev/null @@ -1,27 +0,0 @@ contrib/utils/perf/deploy-gce-perf-cluster.py -+++ contrib/utils/perf/deploy-gce-perf-cluster.py -@@ -28,7 +28,7 @@ import re - import socket - - cluster_prefix = "perf" --ambari_repo_file_url = "http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.0/ambaribn.repo; -+ambari_repo_file_url = "http://10.240.0.30/ambari.repo; - - public_hostname_script = "foo" - hostname_script = "foo" -@@ -397,13 +394,11 @@ def create_agent_script(server_host_name): - # TODO, instead of cloning Ambari repo on each VM, do it on the server once and distribute to all of the agents. 
- contents = "#!/bin/bash\n" + \ - "wget -O /etc/yum.repos.d/ambari.repo {0}\n".format(ambari_repo_file_url) + \ -- "yum clean all; yum install krb5-workstation git ambari-agent -y\n" + \ -- "mkdir /home ; cd /home; git clone https://github.com/apache/ambari.git ; cd ambari ; git checkout branch-2.5\n" + \ -- "cp -r /home/ambari/ambari-server/src/main/resources/stacks/PERF /var/lib/ambari-agent/cache/stacks/PERF\n" + \ -+ "yum clean all; yum install krb5-workstation ambari-agent -y\n" + \ - "sed -i -f /var/lib/ambari-agent/cache/stacks/PERF/PythonExecutor.sed /usr/lib/python2.6/site-packages/ambari_agent/PythonExecutor.py\n" + \ - "sed -i -e 's/hostname=localhost/hostname={0}/g' /etc/ambari-agent/conf/ambari-agent.ini\n".format(server_host_name) + \ - "sed -i -e 's/agent]/agent]\\nhostname_script={0}\\npublic_hostname_script={1}\\n/1' /etc/ambari-agent/conf/ambari-agent.ini\n".format(hostname_script, public_hostname_script) + \ -- "python /home/ambari/ambari-agent/conf/unix/agent-multiplier.py start\n" + \ -+ "wget http://10.240.0.30/agent-multiplier.py ; python agent-multiplier.py start\n" + \ - "exit 0" - - with open("agent.sh", "w") as f:
[ambari] branch trunk updated (04e120a -> e8af382)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from 04e120a AMBARI-25094 Remove Flume Live widget from Ambari, alongside the Flume service during upgrade to HDP3. (ababiichuk) new 7c3275f AMBARI-25095. deploy-gce-perf-cluster.py fails after upgrade on gce controller (aonishuk) new acfb45b Delete agent.sh new b3be244 Delete deploy-gce-perf-cluster.py.rej new e8af382 Delete server.sh The 4 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: contrib/utils/perf/deploy-gce-perf-cluster.py | 11 ++- 1 file changed, 6 insertions(+), 5 deletions(-)
[ambari] 01/04: AMBARI-25095. deploy-gce-perf-cluster.py fails after upgrade on gce controller (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit 7c3275fde332585b5b3ce7c93afec7a84177498a Author: Andrew Onishuk AuthorDate: Thu Jan 10 08:20:26 2019 +0200 AMBARI-25095. deploy-gce-perf-cluster.py fails after upgrade on gce controller (aonishuk) --- agent.sh | 12 ++ contrib/utils/perf/deploy-gce-perf-cluster.py | 11 ++--- contrib/utils/perf/deploy-gce-perf-cluster.py.rej | 27 server.sh | 52 +++ 4 files changed, 97 insertions(+), 5 deletions(-) diff --git a/agent.sh b/agent.sh new file mode 100644 index 000..f1966f9 --- /dev/null +++ b/agent.sh @@ -0,0 +1,12 @@ +#!/bin/bash +yum install wget -y +wget -O /etc/yum.repos.d/ambari.repo http://10.240.0.30/ambari.repo +yum clean all; yum install krb5-workstation git ambari-agent -y +mkdir /home ; cd /home; git clone https://github.com/apache/ambari.git ; cd ambari ; git checkout branch-2.5 +cp -r /home/ambari/ambari-server/src/main/resources/stacks/PERF /var/lib/ambari-agent/cache/stacks/PERF +sed -i -f /var/lib/ambari-agent/cache/stacks/PERF/PythonExecutor.sed /usr/lib/ambari-agent/lib/ambari_agent/PythonExecutor.py +sed -i -f /var/lib/ambari-agent/cache/stacks/PERF/check_host.sed /var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py +sed -i -e 's/hostname=localhost/hostname=perf-server-test-perf-1.c.pramod-thangali.internal/g' /etc/ambari-agent/conf/ambari-agent.ini +sed -i -e 's/agent]/agent]\nhostname_script=foo\npublic_hostname_script=foo\n/1' /etc/ambari-agent/conf/ambari-agent.ini +wget http://10.240.0.30/agent-multiplier.py ; python /home/ambari/ambari-agent/conf/unix/agent-multiplier.py start +exit 0 \ No newline at end of file diff --git a/contrib/utils/perf/deploy-gce-perf-cluster.py b/contrib/utils/perf/deploy-gce-perf-cluster.py index a1259eb..e34b07c 100644 --- a/contrib/utils/perf/deploy-gce-perf-cluster.py +++ b/contrib/utils/perf/deploy-gce-perf-cluster.py @@ -475,22 
+475,23 @@ def get_vms_list(args): def __get_vms_list_from_name(args, cluster_name): """ - Method to parse "gce fqdn {cluster-name}" command output and get hosts and ips pairs for every host in cluster + Method to parse "gce info {cluster-name}" command output and get hosts and ips pairs for every host in cluster :param args: Command line args :return: Mapping of VM host name to ip. """ - gce_fqdb_cmd = '/opt/gce-utils/gce fqdn {0}'.format(cluster_name) + gce_fqdb_cmd = '/opt/gce-utils/gce info {0}'.format(cluster_name) out = execute_command(args, args.controller, gce_fqdb_cmd, "Failed to get VMs list!", "-tt") lines = out.split('\n') #print "LINES=" + str(lines) if lines[0].startswith("Using profile") and not lines[1].strip(): result = {} -for s in lines[2:]: # Ignore non-meaningful lines +for s in lines[4:]: # Ignore non-meaningful lines if not s: continue - match = re.match(r'^([\d\.]*)\s+([\w\.-]*)\s+([\w\.-]*)\s+$', s, re.M) + + match = re.match(r'^ [^ ]+ ([\w\.-]*)\s+([\d\.]*).*$', s, re.M) if match: -result[match.group(2)] = match.group(1) +result[match.group(1)] = match.group(2) else: raise Exception('Cannot parse "{0}"'.format(s)) return result diff --git a/contrib/utils/perf/deploy-gce-perf-cluster.py.rej b/contrib/utils/perf/deploy-gce-perf-cluster.py.rej new file mode 100644 index 000..06eca6e --- /dev/null +++ b/contrib/utils/perf/deploy-gce-perf-cluster.py.rej @@ -0,0 +1,27 @@ +--- contrib/utils/perf/deploy-gce-perf-cluster.py contrib/utils/perf/deploy-gce-perf-cluster.py +@@ -28,7 +28,7 @@ import re + import socket + + cluster_prefix = "perf" +-ambari_repo_file_url = "http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/updates/2.5.0.0/ambaribn.repo; ++ambari_repo_file_url = "http://10.240.0.30/ambari.repo; + + public_hostname_script = "foo" + hostname_script = "foo" +@@ -397,13 +394,11 @@ def create_agent_script(server_host_name): + # TODO, instead of cloning Ambari repo on each VM, do it on the server once and distribute to all of the 
agents. + contents = "#!/bin/bash\n" + \ + "wget -O /etc/yum.repos.d/ambari.repo {0}\n".format(ambari_repo_file_url) + \ +- "yum clean all; yum install krb5-workstation git ambari-agent -y\n" + \ +- "mkdir /home ; cd /home; git clone https://github.com/apache/ambari.git ; cd ambari ; git checkout branch-2.5\n" + \ +- "cp -r /home/ambari/ambari-server/src/main/resources/stacks/PERF /var/lib/ambari-agent/cache/stacks/PERF\n" + \ ++ "yum clean all; yum install krb5-workstation ambari-agent -y\n" + \ +
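The subtle part of the `deploy-gce-perf-cluster.py` change above is the switch from `gce fqdn` to `gce info` output: meaningful host rows now start four lines in and put the hostname before the IP, hence the new regex and the `lines[4:]` slice. A standalone sketch of the new parsing loop (the sample row format in the test is inferred from the regex, not taken from real `gce info` output):

```python
import re

HOST_ROW = re.compile(r'^ [^ ]+ ([\w\.-]*)\s+([\d\.]*).*$')

def parse_gce_info(lines):
    """Map hostname -> ip from `gce info <cluster>` output lines."""
    result = {}
    for s in lines[4:]:  # skip header/blank lines, as the patched code does
        if not s:
            continue
        match = HOST_ROW.match(s)
        if not match:
            raise Exception('Cannot parse "{0}"'.format(s))
        result[match.group(1)] = match.group(2)
    return result
```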
[ambari] 02/04: Delete agent.sh
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit acfb45bafbb57e7c37eb02e6f139ff7fceee221d Author: aonishuk AuthorDate: Thu Jan 10 08:21:50 2019 +0200 Delete agent.sh --- agent.sh | 12 1 file changed, 12 deletions(-) diff --git a/agent.sh b/agent.sh deleted file mode 100644 index f1966f9..000 --- a/agent.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/bash -yum install wget -y -wget -O /etc/yum.repos.d/ambari.repo http://10.240.0.30/ambari.repo -yum clean all; yum install krb5-workstation git ambari-agent -y -mkdir /home ; cd /home; git clone https://github.com/apache/ambari.git ; cd ambari ; git checkout branch-2.5 -cp -r /home/ambari/ambari-server/src/main/resources/stacks/PERF /var/lib/ambari-agent/cache/stacks/PERF -sed -i -f /var/lib/ambari-agent/cache/stacks/PERF/PythonExecutor.sed /usr/lib/ambari-agent/lib/ambari_agent/PythonExecutor.py -sed -i -f /var/lib/ambari-agent/cache/stacks/PERF/check_host.sed /var/lib/ambari-agent/cache/custom_actions/scripts/check_host.py -sed -i -e 's/hostname=localhost/hostname=perf-server-test-perf-1.c.pramod-thangali.internal/g' /etc/ambari-agent/conf/ambari-agent.ini -sed -i -e 's/agent]/agent]\nhostname_script=foo\npublic_hostname_script=foo\n/1' /etc/ambari-agent/conf/ambari-agent.ini -wget http://10.240.0.30/agent-multiplier.py ; python /home/ambari/ambari-agent/conf/unix/agent-multiplier.py start -exit 0 \ No newline at end of file
[ambari] branch trunk updated: AMBARI-25031. Infra cluster fails to start if fs.defaultsFS is set to file:/// (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new e2f9318 AMBARI-25031. Infra cluster fails to start if fs.defaultsFS is set to file:/// (aonishuk) e2f9318 is described below commit e2f9318474d876252e606ac637fe2b2cd146a096 Author: Andrew Onishuk AuthorDate: Tue Dec 11 12:34:40 2018 +0200 AMBARI-25031. Infra cluster fails to start if fs.defaultsFS is set to file:/// (aonishuk) --- .../python/resource_management/libraries/providers/hdfs_resource.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index b6f0fc4..af6218a 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -687,7 +687,7 @@ class HdfsResourceProvider(Provider): if self.has_core_configs: path_protocol = urlparse(self.resource.target).scheme.lower() - self.create_as_root = path_protocol == 'file' or self.default_protocol == 'file' and path_protocol == None + self.create_as_root = path_protocol == 'file' or self.default_protocol == 'file' and not path_protocol # for protocols which are different that defaultFs webhdfs will not be able to create directories # so for them fast-hdfs-resource.jar should be used
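The one-character fix above works because `urlparse` reports a missing scheme as an empty string, never `None`, so the replaced `path_protocol == None` comparison could never fire for scheme-less local paths; `not path_protocol` catches `''` as well. A minimal Python 3 sketch (the sample paths are illustrative, not Ambari's):

```python
from urllib.parse import urlparse

# urlparse reports a missing scheme as '' (empty string), never None,
# so the old `path_protocol == None` check could never be true.
path_protocol = urlparse("/tmp/data").scheme.lower()
assert path_protocol == ""
assert not (path_protocol == None)   # old, broken check: never fires
assert not path_protocol             # fixed check: true for '' as well

# Note the precedence: `and` binds tighter than `or`, so this parses as
# (path is file) or ((defaultFs is file) and (path has no scheme)).
default_protocol = "file"
create_as_root = path_protocol == "file" or default_protocol == "file" and not path_protocol
assert create_as_root
```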
[ambari] branch trunk updated: AMBARI-25005. Ambari hides information about cred_store generation failures. Resulting in confusing errors at later stages (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new fcd53d1 AMBARI-25005. Ambari hides information about cred_store generation failures. Resulting in confusing errors at later stages (aonishuk) fcd53d1 is described below commit fcd53d1383c5d66cca173eeb8f83986b7a921901 Author: Andrew Onishuk AuthorDate: Thu Dec 6 13:13:28 2018 +0200 AMBARI-25005. Ambari hides information about cred_store generation failures. Resulting in confusing errors at later stages (aonishuk) --- .../src/main/python/ambari_agent/CustomServiceOrchestrator.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py index 0ea3656..13829f9 100644 --- a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py +++ b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py @@ -30,11 +30,12 @@ import ambari_simplejson as json from collections import defaultdict from ConfigParser import NoOptionError -from ambari_commons import shell, subprocess32 +from ambari_commons import shell from ambari_commons.constants import AGENT_TMP_DIR from resource_management.libraries.functions.log_process_information import log_process_information from resource_management.core.utils import PasswordString from resource_management.core.encryption import ensure_decrypted +from resource_management.core import shell as rmf_shell from ambari_agent.models.commands import AgentCommand from ambari_agent.Utils import Utils @@ -305,8 +306,7 @@ class CustomServiceOrchestrator(object): cmd = (java_bin, '-cp', cs_lib_path, self.credential_shell_cmd, 'create', alias, '-value', protected_pwd, '-provider', provider_path) logger.info(cmd) -cmd_result = subprocess32.call(cmd) -logger.info('cmd_result = {0}'.format(cmd_result)) +rmf_shell.checked_call(cmd) os.chmod(file_path, 0644) # group and others should have read access so that the service user can read # Add JCEKS provider path instead config[self.CREDENTIAL_PROVIDER_PROPERTY_NAME] = provider_path
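The switch from `subprocess32.call` to `rmf_shell.checked_call` matters because a plain `call` reports failure only through its return code, letting a failed keystore command pass silently, whereas a checked call raises on non-zero exit. The same contrast shown with the standard library (a sketch of the idea, not Ambari's `checked_call` implementation):

```python
import subprocess

# A plain call reports failure only via its return code -- easy to ignore,
# which is exactly how the cred_store error was being swallowed.
rc = subprocess.call(["false"])
assert rc != 0

# A checked call raises on non-zero exit, so the failure surfaces immediately.
try:
    subprocess.check_call(["false"])
    raised = False
except subprocess.CalledProcessError:
    raised = True
assert raised
```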
[ambari] branch trunk updated: AMBARI-25004. Directory/File creation hangs if relative path is supplied with cd_access (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new e1ceec2 AMBARI-25004. Directory/File creation hangs if relative path is supplied with cd_access (aonishuk) e1ceec2 is described below commit e1ceec262ddcc02c095941024507048fbccbc255 Author: Andrew Onishuk AuthorDate: Thu Dec 6 10:56:49 2018 +0200 AMBARI-25004. Directory/File creation hangs if relative path is supplied with cd_access (aonishuk) --- .../src/main/python/resource_management/core/providers/system.py| 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-common/src/main/python/resource_management/core/providers/system.py b/ambari-common/src/main/python/resource_management/core/providers/system.py index acb19d4..a286d03 100644 --- a/ambari-common/src/main/python/resource_management/core/providers/system.py +++ b/ambari-common/src/main/python/resource_management/core/providers/system.py @@ -101,7 +101,7 @@ def _ensure_metadata(path, user, group, mode=None, cd_access=None, recursive_own raise Fail("'cd_acess' value '%s' is not valid" % (cd_access)) dir_path = re.sub('/+', '/', path) -while dir_path != os.sep: +while dir_path and dir_path != os.sep: if sudo.path_isdir(dir_path): sudo.chmod_extended(dir_path, cd_access+"+rx")
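The hang came from the ancestor walk in `_ensure_metadata`: for a relative path, `os.path.dirname` bottoms out at `''` (and `dirname('') == ''`), so a loop guarded only by `dir_path != os.sep` never terminates. A sketch of the walk with the fixed guard (the function name is hypothetical; the real code chmods each directory instead of collecting them):

```python
import os
import re

def ancestor_dirs(path):
    """Collect each ancestor directory of `path`, stopping at the root
    or when a relative path is exhausted ('')."""
    dir_path = re.sub('/+', '/', path)       # collapse duplicate slashes, as the original does
    seen = []
    # With only `dir_path != os.sep` (the old guard), a relative path loops
    # forever on '' because os.path.dirname('') == ''.
    while dir_path and dir_path != os.sep:
        seen.append(dir_path)
        dir_path = os.path.dirname(dir_path)
    return seen

# Absolute paths terminate at '/':
assert ancestor_dirs("/var/lib//ambari") == ["/var/lib/ambari", "/var/lib", "/var"]
# Relative paths terminate at '' thanks to the added `dir_path and` check:
assert ancestor_dirs("relative/path") == ["relative/path", "relative"]
```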
[ambari] branch trunk updated: AMBARI-24991. Commands timeout if stdout has non-unicode symbols. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 17f9098 AMBARI-24991. Commands timeout if stdout has non-unicode symbols. (aonishuk) 17f9098 is described below commit 17f90987b1aa2b0fbb0bf787200d9148f0d6e785 Author: Andrew Onishuk AuthorDate: Tue Dec 4 14:24:37 2018 +0200 AMBARI-24991. Commands timeout if stdout has non-unicode symbols. (aonishuk) --- ambari-agent/src/main/python/ambari_agent/ActionQueue.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/ActionQueue.py b/ambari-agent/src/main/python/ambari_agent/ActionQueue.py index 072083a..3c979a6 100644 --- a/ambari-agent/src/main/python/ambari_agent/ActionQueue.py +++ b/ambari-agent/src/main/python/ambari_agent/ActionQueue.py @@ -336,8 +336,8 @@ class ActionQueue(threading.Thread): role_result = self.commandStatuses.generate_report_template(command) role_result.update({ - 'stdout': command_result['stdout'], - 'stderr': command_result['stderr'], + 'stdout': unicode(command_result['stdout'], errors='replace'), + 'stderr': unicode(command_result['stderr'], errors='replace'), 'exitCode': command_result['exitcode'], 'status': status, })
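The fix wraps the command output in `unicode(..., errors='replace')`, which substitutes U+FFFD for undecodable bytes instead of raising `UnicodeDecodeError` (a failure that here surfaced as a command timeout). A Python 3 sketch of the same idea using `bytes.decode` (the sample output is made up):

```python
# Command output may contain bytes that are not valid UTF-8.
raw_stdout = b"service started \xfe\xff"   # hypothetical agent output

# A strict decode raises on the first bad byte.
try:
    raw_stdout.decode("utf-8")
    strict_ok = True
except UnicodeDecodeError:
    strict_ok = False
assert not strict_ok

# errors='replace' maps each undecodable byte to U+FFFD and always succeeds.
text = raw_stdout.decode("utf-8", errors="replace")
assert text == "service started \ufffd\ufffd"
```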
[ambari] branch trunk updated: AMBARI-24942. Dir creation fails if webhdfs is enabled (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 5c7f4c1 AMBARI-24942. Dir creation fails if webhdfs is enabled (aonishuk) 5c7f4c1 is described below commit 5c7f4c1bf419b004503425504b94b44d9ca985e0 Author: Andrew Onishuk AuthorDate: Thu Nov 22 12:43:05 2018 +0200 AMBARI-24942. Dir creation fails if webhdfs is enabled (aonishuk) --- .../libraries/providers/hdfs_resource.py | 33 +++--- 1 file changed, 17 insertions(+), 16 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 55d82d8..b6f0fc4 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -70,6 +70,8 @@ EXCEPTIONS_TO_RETRY = { "RetriableException": ("", 20, 6), } +DFS_WHICH_SUPPORT_WEBHDFS = ['hdfs'] + class HdfsResourceJar: """ This is slower than HdfsResourceWebHDFS implementation of HdfsResouce, but it works in any cases on any DFS types. 
@@ -92,16 +94,9 @@ class HdfsResourceJar: if not nameservices or len(nameservices) < 2: self.action_delayed_for_nameservice(None, action_name, main_resource) else: - default_fs_protocol = urlparse(main_resource.resource.default_fs).scheme - - if not default_fs_protocol or default_fs_protocol == "viewfs": -protocol = dfs_type.lower() - else: -protocol = default_fs_protocol - for nameservice in nameservices: try: - nameservice = protocol + "://" + nameservice + nameservice = main_resource.default_protocol + "://" + nameservice self.action_delayed_for_nameservice(nameservice, action_name, main_resource) except namenode_ha_utils.NoActiveNamenodeException as ex: # one of ns can be down (during initial start forexample) no need to worry for federated cluster @@ -212,12 +207,17 @@ class WebHDFSUtil: self.run_user = run_user self.security_enabled = security_enabled self.logoutput = logoutput - + @staticmethod - def is_webhdfs_available(is_webhdfs_enabled, dfs_type): -# only hdfs seems to support webHDFS -return (is_webhdfs_enabled and dfs_type.lower() == 'hdfs') + def get_default_protocol(default_fs, dfs_type): +default_fs_protocol = urlparse(default_fs).scheme.lower() +is_viewfs = default_fs_protocol == 'viewfs' +return dfs_type.lower() if is_viewfs else default_fs_protocol + @staticmethod + def is_webhdfs_available(is_webhdfs_enabled, default_protocol): +return (is_webhdfs_enabled and default_protocol in DFS_WHICH_SUPPORT_WEBHDFS) + def run_command(self, *args, **kwargs): """ This functions is a wrapper for self._run_command which does retry routine for it. 
@@ -637,6 +637,8 @@ class HdfsResourceProvider(Provider): self.assert_parameter_is_set('dfs_type') self.fsType = getattr(resource, 'dfs_type').lower() + +self.default_protocol = WebHDFSUtil.get_default_protocol(resource.default_fs, self.fsType) self.can_use_webhdfs = True if self.fsType == 'hdfs': @@ -684,13 +686,12 @@ class HdfsResourceProvider(Provider): if self.has_core_configs: path_protocol = urlparse(self.resource.target).scheme.lower() - default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() - self.create_as_root = path_protocol == 'file' or default_fs_protocol == 'file' and not path_protocol + self.create_as_root = path_protocol == 'file' or self.default_protocol == 'file' and path_protocol == None # for protocols which are different that defaultFs webhdfs will not be able to create directories # so for them fast-hdfs-resource.jar should be used - if path_protocol and default_fs_protocol != "viewfs" and path_protocol != default_fs_protocol: + if path_protocol and path_protocol != self.default_protocol: self.can_use_webhdfs = False Logger.info("Cannot use webhdfs for {0} defaultFs = {1} has different protocol".format(self.resource.target, self.resource.default_fs)) else: @@ -723,7 +724,7 @@ class HdfsResourceProvider(Provider): HdfsResourceJar().action_execute(self, sudo=True) def get_hdfs_resource_executor(self): -if self.can_use_webhdfs and WebHDFSUtil.is_webhdfs_available(self.webhdfs_enabled, self.fsType): +if self.can_use_webhdfs and WebHDFSUtil.is_webhdfs_available(self.webhdfs_enabled, self.default_protocol): return HdfsResourceWebHDFS() else: return HdfsResourceJar()
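The refactor above centralizes protocol detection: `viewfs` is only a client-side mount table, so the effective protocol falls back to the underlying DFS type, and webhdfs is offered only for protocols listed in `DFS_WHICH_SUPPORT_WEBHDFS`. A Python 3 restatement of the two helpers from the diff (the original is Python 2; sample URIs are illustrative):

```python
from urllib.parse import urlparse

DFS_WHICH_SUPPORT_WEBHDFS = ['hdfs']

def get_default_protocol(default_fs, dfs_type):
    # viewfs is only a mount table, so fall back to the underlying DFS type.
    default_fs_protocol = urlparse(default_fs).scheme.lower()
    is_viewfs = default_fs_protocol == 'viewfs'
    return dfs_type.lower() if is_viewfs else default_fs_protocol

def is_webhdfs_available(is_webhdfs_enabled, default_protocol):
    return is_webhdfs_enabled and default_protocol in DFS_WHICH_SUPPORT_WEBHDFS

assert get_default_protocol("hdfs://nn:8020", "HDFS") == "hdfs"
assert get_default_protocol("viewfs://cluster/", "HDFS") == "hdfs"
assert get_default_protocol("s3a://bucket/", "HDFS") == "s3a"
assert is_webhdfs_available(True, "hdfs")
assert not is_webhdfs_available(True, "s3a")
```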
[ambari] 01/02: AMBARI-24920. LocalFS (file:///) directory creation fails (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit 3da344ab8e7d878868998ee514dd0dbc99a2be55 Author: Andrew Onishuk AuthorDate: Mon Nov 19 12:16:10 2018 +0200 AMBARI-24920. LocalFS (file:///) directory creation fails (aonishuk) --- .../libraries/providers/hdfs_resource.py | 49 ++ .../libraries/providers/hdfs_resource.py.rej | 11 + 2 files changed, 43 insertions(+), 17 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 52b501d..33aa96a 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -113,8 +113,14 @@ class HdfsResourceJar: def action_delayed_for_nameservice(self, nameservice, action_name, main_resource): resource = {} env = Environment.get_instance() -if not 'hdfs_files' in env.config: - env.config['hdfs_files'] = [] +env_dict_key = 'hdfs_files_sudo' if main_resource.create_as_root else 'hdfs_files' + +if main_resource.create_as_root: + Logger.info("Will create {0} as root user".format(main_resource.resource.target)) + + +if not env_dict_key in env.config: + env.config[env_dict_key] = [] # Put values in dictionary-resource for field_name, json_field_name in RESOURCE_TO_JSON_FIELDS.iteritems(): @@ -130,22 +136,25 @@ class HdfsResourceJar: resource['nameservice'] = nameservice # Add resource to create -env.config['hdfs_files'].append(resource) +env.config[env_dict_key].append(resource) - def action_execute(self, main_resource): + def action_execute(self, main_resource, sudo=False): env = Environment.get_instance() +env_dict_key = 'hdfs_files_sudo' if sudo else 'hdfs_files' +if not env_dict_key in env.config or not env.config[env_dict_key]: + 
return + # Check required parameters -if main_resource.has_core_configs: +if not sudo: main_resource.assert_parameter_is_set('user') + user = main_resource.resource.user +else: + user = None + -if not 'hdfs_files' in env.config or not env.config['hdfs_files']: - Logger.info("No resources to create. 'create_on_execute' or 'delete_on_execute' or 'download_on_execute' wasn't triggered before this 'execute' action.") - return - hadoop_bin_dir = main_resource.resource.hadoop_bin_dir hadoop_conf_dir = main_resource.resource.hadoop_conf_dir -user = main_resource.resource.user if main_resource.has_core_configs else None security_enabled = main_resource.resource.security_enabled keytab_file = main_resource.resource.keytab kinit_path = main_resource.resource.kinit_path_local @@ -161,18 +170,19 @@ class HdfsResourceJar: # Write json file to disk File(json_path, owner = user, - content = json.dumps(env.config['hdfs_files']) + content = json.dumps(env.config[env_dict_key]) ) # Execute jar to create/delete resources in hadoop -Execute(format("hadoop --config {hadoop_conf_dir} jar {jar_path} {json_path}"), +Execute(('hadoop', '--config', hadoop_conf_dir, 'jar', jar_path, json_path), user=user, path=[hadoop_bin_dir], logoutput=logoutput, +sudo=sudo, ) # Clean -env.config['hdfs_files'] = [] +env.config[env_dict_key] = [] class WebHDFSCallException(Fail): @@ -618,7 +628,8 @@ class HdfsResourceProvider(Provider): self.has_core_configs = not is_empty(getattr(resource, 'default_fs')) self.ignored_resources_list = HdfsResourceProvider.get_ignored_resources_list(self.resource.hdfs_resource_ignore_file) - +self.create_as_root = False + if not self.has_core_configs: self.webhdfs_enabled = False self.fsType = None @@ -670,10 +681,12 @@ class HdfsResourceProvider(Provider): def action_delayed(self, action_name): self.assert_parameter_is_set('type') - + if self.has_core_configs: path_protocol = urlparse(self.resource.target).scheme.lower() default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() + + self.create_as_root = path_protocol == 'file' or default_fs_protocol == 'file' and path_protocol == None # for protocols which are different that defaultFs webhdfs will not be able to create directories # so for them fast-hdfs-resource.jar should be used @@ -682,6 +695,7 @@ class HdfsResourceProvider(Provider): Logger.info("Cannot use webhdfs for {0} defaultFs = {1} has different protocol".format(self.resource.targ
[ambari] branch trunk updated: AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 9e6965d AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk) 9e6965d is described below commit 9e6965d0bff7d31fc188d9b5be42aac80f94e286 Author: Andrew Onishuk AuthorDate: Mon Nov 19 12:37:27 2018 +0200 AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk) --- .../python/resource_management/libraries/providers/hdfs_resource.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 33aa96a..887b4fb 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -216,7 +216,7 @@ class WebHDFSUtil: @staticmethod def is_webhdfs_available(is_webhdfs_enabled, dfs_type): # only hdfs seems to support webHDFS -return (is_webhdfs_enabled and dfs_type == 'hdfs') +return (is_webhdfs_enabled and dfs_type.lower() == 'hdfs') def run_command(self, *args, **kwargs): """
[ambari] branch trunk updated (72213a6 -> 5bc2e57)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from 72213a6 AMBARI-24905 Service display name on left navigation bar should be suffixed with "Client" if only client service component is present for a service new 3da344a AMBARI-24920. LocalFS (file:///) directory creation fails (aonishuk) new 5bc2e57 Delete hdfs_resource.py.rej The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../libraries/providers/hdfs_resource.py | 49 ++ 1 file changed, 32 insertions(+), 17 deletions(-)
[ambari] 02/02: Delete hdfs_resource.py.rej
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit 5bc2e578942ef3af46c7be86d11803a58fbb126c Author: aonishuk AuthorDate: Mon Nov 19 12:34:24 2018 +0200 Delete hdfs_resource.py.rej --- .../libraries/providers/hdfs_resource.py.rej | 11 --- 1 file changed, 11 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py.rej b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py.rej deleted file mode 100644 index 0220898..000 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py.rej +++ /dev/null @@ -1,11 +0,0 @@ ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py -+++ ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py -@@ -686,7 +686,7 @@ class HdfsResourceProvider(Provider): - path_protocol = urlparse(self.resource.target).scheme.lower() - default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() - -- self.create_as_root = path_protocol == 'file' or default_fs_protocol == 'file' and path_protocol == None -+ self.create_as_root = path_protocol == 'file' or default_fs_protocol == 'file' and not path_protocol - - # for protocols which are different that defaultFs webhdfs will not be able to create directories - # so for them fast-hdfs-resource.jar should be used
[ambari] branch trunk updated: AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 3d21552 AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk) 3d21552 is described below commit 3d215524b279bf106a6950e36808f392cc2567b4 Author: Andrew Onishuk AuthorDate: Thu Nov 15 13:29:25 2018 +0200 AMBARI-24904. JAR does not exist: /var/lib/ambari-agent/lib/fast-hdfs-resource.jar (aonishuk) --- .../python/resource_management/libraries/providers/hdfs_resource.py | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 70d2a27..52b501d 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -206,7 +206,7 @@ class WebHDFSUtil: @staticmethod def is_webhdfs_available(is_webhdfs_enabled, dfs_type): # only hdfs seems to support webHDFS -return (is_webhdfs_enabled and dfs_type == 'HDFS') +return (is_webhdfs_enabled and dfs_type == 'hdfs') def run_command(self, *args, **kwargs): """ @@ -625,10 +625,10 @@ class HdfsResourceProvider(Provider): return self.assert_parameter_is_set('dfs_type') -self.fsType = getattr(resource, 'dfs_type') +self.fsType = getattr(resource, 'dfs_type').lower() self.can_use_webhdfs = True -if self.fsType == 'HDFS': +if self.fsType == 'hdfs': self.assert_parameter_is_set('hdfs_site') self.webhdfs_enabled = self.resource.hdfs_site['dfs.webhdfs.enabled'] else:
[ambari] branch trunk updated: AMBARI-24903. hdfsResource should create resources in file:/// if core-site is not available. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 17c47cf AMBARI-24903. hdfsResource should create resources in file:/// if core-site is not available. (aonishuk) 17c47cf is described below commit 17c47cff1ae2b5b0c0901fbdeea6e914ecebc575 Author: Andrew Onishuk AuthorDate: Thu Nov 15 11:49:41 2018 +0200 AMBARI-24903. hdfsResource should create resources in file:/// if core-site is not available. (aonishuk) --- .../libraries/providers/hdfs_resource.py | 33 +- .../before-START/scripts/shared_initialization.py | 20 +++-- 2 files changed, 25 insertions(+), 28 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index b22e9b2..70d2a27 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -83,7 +83,7 @@ class HdfsResourceJar: def action_delayed(self, action_name, main_resource): dfs_type = main_resource.resource.dfs_type -if main_resource.resource.nameservices is None: # all nameservices +if main_resource.resource.nameservices is None and main_resource.has_core_configs: # all nameservices nameservices = namenode_ha_utils.get_nameservices(main_resource.resource.hdfs_site) else: nameservices = main_resource.resource.nameservices @@ -136,7 +136,8 @@ class HdfsResourceJar: env = Environment.get_instance() # Check required parameters -main_resource.assert_parameter_is_set('user') +if main_resource.has_core_configs: + main_resource.assert_parameter_is_set('user') if not 'hdfs_files' in env.config or not env.config['hdfs_files']: Logger.info("No resources to create. 
'create_on_execute' or 'delete_on_execute' or 'download_on_execute' wasn't triggered before this 'execute' action.") @@ -144,7 +145,7 @@ class HdfsResourceJar: hadoop_bin_dir = main_resource.resource.hadoop_bin_dir hadoop_conf_dir = main_resource.resource.hadoop_conf_dir -user = main_resource.resource.user +user = main_resource.resource.user if main_resource.has_core_configs else None security_enabled = main_resource.resource.security_enabled keytab_file = main_resource.resource.keytab kinit_path = main_resource.resource.kinit_path_local @@ -616,6 +617,7 @@ class HdfsResourceProvider(Provider): super(HdfsResourceProvider,self).__init__(resource) self.has_core_configs = not is_empty(getattr(resource, 'default_fs')) +self.ignored_resources_list = HdfsResourceProvider.get_ignored_resources_list(self.resource.hdfs_resource_ignore_file) if not self.has_core_configs: self.webhdfs_enabled = False @@ -626,8 +628,6 @@ class HdfsResourceProvider(Provider): self.fsType = getattr(resource, 'dfs_type') self.can_use_webhdfs = True -self.ignored_resources_list = HdfsResourceProvider.get_ignored_resources_list(self.resource.hdfs_resource_ignore_file) - if self.fsType == 'HDFS': self.assert_parameter_is_set('hdfs_site') self.webhdfs_enabled = self.resource.hdfs_site['dfs.webhdfs.enabled'] @@ -669,20 +669,19 @@ class HdfsResourceProvider(Provider): return hdfs_resources_to_ignore def action_delayed(self, action_name): -if not self.has_core_configs: - Logger.info("Cannot find core-site or core-site/fs.defaultFs. Assuming usage of external filesystem for services. 
Ambari will not manage the directories.") - return - self.assert_parameter_is_set('type') -path_protocol = urlparse(self.resource.target).scheme.lower() -default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() +if self.has_core_configs: + path_protocol = urlparse(self.resource.target).scheme.lower() + default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() -# for protocols which are different that defaultFs webhdfs will not be able to create directories -# so for them fast-hdfs-resource.jar should be used -if path_protocol and default_fs_protocol != "viewfs" and path_protocol != default_fs_protocol: + # for protocols which are different that defaultFs webhdfs will not be able to create directories + # so for them fast-hdfs-resource.jar should be used + if path_protocol and default_fs_protocol != "viewfs" and path_protocol != default_fs_protocol: +self.can_use_webhdfs = False +Logger.info("Cannot use webhdfs for {0} defaultFs = {1} has different protocol".format(self.resource.target, self.resource.default_f
[ambari] branch trunk updated: AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (#2609)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new f231e83 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (#2609) f231e83 is described below commit f231e836afbd05d4ffd961226d1f6e5996b43816 Author: aonishuk AuthorDate: Thu Nov 15 11:48:30 2018 +0200 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (#2609) * AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) * AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) --- .../dummy_files/alert_definitions.json | 32 - .../libraries/providers/hdfs_resource.py | 36 +++ .../before-START/files/fast-hdfs-resource.jar | Bin 19286899 -> 16202231 bytes .../stack-hooks/before-START/scripts/params.py | 3 ++ .../before-START/scripts/shared_initialization.py | 27 +++--- .../stack-hooks/before-START/test_before_start.py | 32 - .../apache/ambari/fast_hdfs_resource/Runner.java | 40 +++-- 7 files changed, 108 insertions(+), 62 deletions(-) diff --git a/ambari-agent/src/test/python/ambari_agent/dummy_files/alert_definitions.json b/ambari-agent/src/test/python/ambari_agent/dummy_files/alert_definitions.json index d9a82a7..5962ac6 100644 --- a/ambari-agent/src/test/python/ambari_agent/dummy_files/alert_definitions.json +++ b/ambari-agent/src/test/python/ambari_agent/dummy_files/alert_definitions.json @@ -1,33 +1,33 @@ { "0": { -"clusterName": "c1", -"hash": "12341234134412341243124", -"hostName": "c6401.ambari.apache.org", +"clusterName": "c1", +"hash": "12341234134412341243124", +"hostName": "c6401.ambari.apache.org", "alertDefinitions": [ { -"name": "namenode_process", -"service": "HDFS", -"enabled": true, -"interval": 6, -"component": "NAMENODE", -"label": "NameNode process", +"name": "namenode_process", +"service": "HDFS", +"component": "NAMENODE", 
+"interval": 6, +"enabled": true, +"label": "NameNode process", "source": { "reporting": { "critical": { "text": "Could not load process info: {0} on host {1}:{2}" -}, +}, "ok": { "text": "TCP OK - {0:.4f} response time on port {1}" } - }, - "type": "PORT", - "uri": "{{hdfs-site/dfs.namenode.http-address}}", + }, + "type": "PORT", + "uri": "{{hdfs-site/dfs.namenode.http-address}}", "default_port": 50070 -}, -"scope": "HOST", +}, +"scope": "HOST", "uuid": "3f82ae27-fa6a-465b-b77d-67963ac55d2f" } -], +], "configurations": { "hdfs-site": { "dfs.namenode.http-address": "c6401.ambari.apache.org:50070" diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 4a22c39..b22e9b2 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -35,6 +35,7 @@ from resource_management.core.logger import Logger from resource_management.core.providers import Provider from resource_management.core.resources.system import Execute from resource_management.core.resources.system import File +from resource_management.libraries.functions.is_empty import is_empty from resource_management.libraries.functions import format from resource_management.libraries.functions import namenode_ha_utils from resource_management.libraries.functions.get_user_call_output import get_user_call_output @@ -270,6 +271,8 @@ class WebHDFSUtil: if file_to_put: cmd += ["--data-binary", "@"+file_to_put, "-H", "Content-Type: application/octet-stream"] + else: +cmd += ["-d&quo
[ambari] branch trunk updated: AMBARI-24902. ATS 1.5 does not start in DL cluster without HDFS (aonishuk) (#2610)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 2b73664 AMBARI-24902. ATS 1.5 does not start in DL cluster without HDFS (aonishuk) (#2610) 2b73664 is described below commit 2b7366441b37ddf0a2cb7acb41fa196553c27115 Author: aonishuk AuthorDate: Thu Nov 15 11:17:18 2018 +0200 AMBARI-24902. ATS 1.5 does not start in DL cluster without HDFS (aonishuk) (#2610) --- .../resources/stack-hooks/before-ANY/scripts/hook.py | 3 ++- .../before-ANY/scripts/shared_initialization.py| 18 ++ 2 files changed, 12 insertions(+), 9 deletions(-) diff --git a/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/hook.py b/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/hook.py index 8b93f7b..02ce109 100644 --- a/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/hook.py +++ b/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/hook.py @@ -18,7 +18,7 @@ limitations under the License. 
""" -from shared_initialization import setup_users, setup_hadoop_env, setup_java +from shared_initialization import setup_users, setup_hadoop_env, setup_java, setup_env from resource_management import Hook @@ -31,6 +31,7 @@ class BeforeAnyHook(Hook): setup_users() if params.has_namenode or params.dfs_type == 'HCFS': setup_hadoop_env() +setup_env() setup_java() diff --git a/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/shared_initialization.py b/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/shared_initialization.py index 27679e0..97a812f 100644 --- a/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/shared_initialization.py +++ b/ambari-server/src/main/resources/stack-hooks/before-ANY/scripts/shared_initialization.py @@ -194,14 +194,16 @@ def setup_hadoop_env(): File(os.path.join(params.hadoop_conf_dir, 'hadoop-env.sh'), owner=tc_owner, group=params.user_group, content=InlineTemplate(params.hadoop_env_sh_template)) - -# Create tmp dir for java.io.tmpdir -# Handle a situation when /tmp is set to noexec -Directory(params.hadoop_java_io_tmpdir, - owner=params.hdfs_user, - group=params.user_group, - mode=01777 -) + +def setup_env(): + import params + # Create tmp dir for java.io.tmpdir + # Handle a situation when /tmp is set to noexec + Directory(params.hadoop_java_io_tmpdir, +owner=params.hdfs_user if params.has_namenode else None, +group=params.user_group, +mode=01777 + ) def setup_java(): """
[ambari] branch trunk updated: AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 42a6363 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) 42a6363 is described below commit 42a6363c229a6fda95aa4e3f32567fc8196287d5 Author: aonishuk AuthorDate: Wed Nov 7 12:22:01 2018 +0200 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) --- .../python/resource_management/libraries/providers/hdfs_resource.py | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index f74382c..4a22c39 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -663,8 +663,8 @@ class HdfsResourceProvider(Provider): path_protocol = urlparse(self.resource.target).scheme.lower() default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() -if path_protocol == "s3a" or default_fs_protocol == "s3a" and path_protocol == None: - Logger.info("Skipping creation of {0} in {1} since auto-creation of s3a resource is currently not supported.".format(self.resource.target, self.resource.default_fs)) +if path_protocol and default_fs_protocol != "viewfs" and path_protocol != default_fs_protocol: + Logger.info("Skipping creation of {0} since it is not in default filesystem.".format(self.resource.target)) return parsed_path = HdfsResourceProvider.parse_path(self.resource.target)
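The reworked condition in this commit skips auto-creation whenever the target carries an explicit URL scheme different from the default filesystem's, with a carve-out for `viewfs`. A self-contained sketch of that predicate (the helper name is illustrative; the real logic lives inline in `HdfsResourceProvider.action_delayed`):

```python
from urllib.parse import urlparse  # `from urlparse import urlparse` on Python 2

def should_skip_creation(target, default_fs):
    """True when `target` names a filesystem other than the default one,
    so Ambari should not try to auto-create it (e.g. an s3a:// hbase.rootdir
    on an hdfs:// cluster). A scheme-less path always uses the default fs."""
    path_protocol = urlparse(target).scheme.lower()
    default_fs_protocol = urlparse(default_fs).scheme.lower()
    return (bool(path_protocol)
            and default_fs_protocol != "viewfs"
            and path_protocol != default_fs_protocol)
```

Note that `urlparse("/apps/hbase").scheme` is the empty string, so plain paths fall through to normal creation, and `viewfs` is exempted because a viewfs mount table can legitimately front multiple underlying schemes.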
[ambari] branch trunk updated: AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 6a2ddb7 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) 6a2ddb7 is described below commit 6a2ddb7d91c1f723140a5e6b6830667fef4aa5ca Author: Andrew Onishuk AuthorDate: Mon Nov 5 10:44:16 2018 +0200 AMBARI-24839. Ambari is trying to create hbase.rootdir using s3 url (aonishuk) --- .../resource_management/libraries/providers/hdfs_resource.py | 7 +++ 1 file changed, 7 insertions(+) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py index 8f56539..f74382c 100644 --- a/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py +++ b/ambari-common/src/main/python/resource_management/libraries/providers/hdfs_resource.py @@ -660,6 +660,13 @@ class HdfsResourceProvider(Provider): def action_delayed(self, action_name): self.assert_parameter_is_set('type') +path_protocol = urlparse(self.resource.target).scheme.lower() +default_fs_protocol = urlparse(self.resource.default_fs).scheme.lower() + +if path_protocol == "s3a" or default_fs_protocol == "s3a" and path_protocol == None: + Logger.info("Skipping creation of {0} in {1} since auto-creation of s3a resource is currently not supported.".format(self.resource.target, self.resource.default_fs)) + return + parsed_path = HdfsResourceProvider.parse_path(self.resource.target) parsed_not_managed_paths = [HdfsResourceProvider.parse_path(path) for path in self.resource.immutable_paths]
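The condition added in this first version of the fix is worth a close look: in Python, `and` binds tighter than `or`, so `A or B and C` parses as `A or (B and C)`. A small sketch making the grouping explicit (and since `urlparse(...).scheme` is always a string, the `path_protocol == None` branch can never fire there, which is part of why the follow-up commit above generalized the check):

```python
def original_condition(path_protocol, default_fs_protocol):
    # As committed; `and` binds tighter than `or`.
    return path_protocol == "s3a" or default_fs_protocol == "s3a" and path_protocol == None

def parenthesized(path_protocol, default_fs_protocol):
    # The grouping the committed expression actually has.
    return path_protocol == "s3a" or (default_fs_protocol == "s3a" and path_protocol == None)
```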
[ambari] branch trunk updated: AMBARI-24732. Datanode and Nodemanagers need to check in to the respective Masters to mark successful restart (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new ffaeaed AMBARI-24732. Datanode and Nodemanagers need to check in to the respective Masters to mark successful restart (aonishuk) ffaeaed is described below commit ffaeaed70dceaf0803a039664601275a8020682c Author: Andrew Onishuk AuthorDate: Thu Oct 4 13:23:47 2018 +0300 AMBARI-24732. Datanode and Nodemanagers need to check in to the respective Masters to mark successful restart (aonishuk) --- .../resource_management/libraries/script/script.py | 21 ++--- 1 file changed, 14 insertions(+), 7 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/script/script.py b/ambari-common/src/main/python/resource_management/libraries/script/script.py index 6c79a10..67bfca1 100644 --- a/ambari-common/src/main/python/resource_management/libraries/script/script.py +++ b/ambari-common/src/main/python/resource_management/libraries/script/script.py @@ -970,11 +970,13 @@ class Script(object): upgrade_type_command_param = "" direction = None +is_rolling_restart = None if config is not None: command_params = config["commandParams"] if "commandParams" in config else None if command_params is not None: upgrade_type_command_param = command_params["upgrade_type"] if "upgrade_type" in command_params else "" direction = command_params["upgrade_direction"] if "upgrade_direction" in command_params else None +is_rolling_restart = command_params["rolling_restart"] if "rolling_restart" in command_params else None upgrade_type = Script.get_upgrade_type(upgrade_type_command_param) is_stack_upgrade = upgrade_type is not None @@ -1044,24 +1046,29 @@ class Script(object): self.start(env) self.post_start(env) + if is_rolling_restart: +self.post_rolling_restart(env) + if is_stack_upgrade: -# Remain backward compatible with the rest of the 
services that haven't switched to using -# the post_upgrade_restart method. Once done. remove the else-block. -if "post_upgrade_restart" in dir(self): - self.post_upgrade_restart(env, upgrade_type=upgrade_type) -else: - self.post_rolling_restart(env) +self.post_upgrade_restart(env, upgrade_type=upgrade_type) + if self.should_expose_component_version("restart"): self.save_component_version_to_structured_out("restart") + def post_upgrade_restart(env, upgrade_type=None): +""" +To be overridden by subclasses +""" +pass # TODO, remove after all services have switched to post_upgrade_restart def post_rolling_restart(self, env): """ To be overridden by subclasses """ -pass +# Mostly Actions are the same for both of these cases. If they are different this method should be overriden. +self.post_upgrade_restart(env, UPGRADE_TYPE_ROLLING) def configure(self, env, upgrade_type=None, config_dir=None): """
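The commit above reads three optional flags out of `commandParams` with the same guarded-lookup pattern. A compact sketch of that extraction (the helper name and tuple return are illustrative; the real code assigns locals inside `Script.restart`):

```python
def read_restart_flags(config):
    """Pull optional restart flags from a command config dict. Missing keys
    fall back to the same defaults the diff uses: "" for upgrade_type,
    None for upgrade_direction and rolling_restart."""
    command_params = (config or {}).get("commandParams") or {}
    upgrade_type = command_params.get("upgrade_type", "")
    direction = command_params.get("upgrade_direction")
    is_rolling_restart = command_params.get("rolling_restart")
    return upgrade_type, direction, is_rolling_restart
```

Using `dict.get` keeps the chain of `x if "k" in d else default` expressions from the diff readable while preserving the behavior for absent `config`, absent `commandParams`, and absent individual keys.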
[ambari] branch trunk updated: AMBARI-24846. Ambari-agent stop hangs if ambari-server is stopped. (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 0f0bc6b AMBARI-24846. Ambari-agent stop hangs if ambari-server is stopped. (aonishuk) 0f0bc6b is described below commit 0f0bc6b39eda33e3990b1c34a272dbad80f37784 Author: Andrew Onishuk AuthorDate: Tue Oct 30 14:43:23 2018 +0200 AMBARI-24846. Ambari-agent stop hangs if ambari-server is stopped. (aonishuk) --- ambari-agent/src/main/python/ambari_agent/main.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-agent/src/main/python/ambari_agent/main.py b/ambari-agent/src/main/python/ambari_agent/main.py index 492f94c..32317d8 100644 --- a/ambari-agent/src/main/python/ambari_agent/main.py +++ b/ambari-agent/src/main/python/ambari_agent/main.py @@ -476,7 +476,7 @@ def main(initializer_module, heartbeat_stop_callback=None): stopped = False # Keep trying to connect to a server or bail out if ambari-agent was stopped - while not connected and not stopped: + while not connected and not stopped and not initializer_module.stop_event.is_set(): for server_hostname in server_hostnames: server_url = config.get_api_url(server_hostname) try:
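The one-line fix above adds a `threading.Event` check so the connect-retry loop honors a stop request even while the server is unreachable. A minimal sketch of the pattern (function and parameter names are illustrative, not the agent's actual API):

```python
import threading

def connect_with_stop(server_urls, try_connect, stop_event, stopped=lambda: False):
    """Keep trying servers until one connects, but bail out promptly
    when stop_event is set -- the missing condition in the original loop."""
    connected = False
    while not connected and not stopped() and not stop_event.is_set():
        for url in server_urls:
            if stop_event.is_set():
                break  # also abandon the inner sweep on shutdown
            if try_connect(url):
                connected = True
                break
    return connected
```

Checking the event both in the `while` guard and inside the per-server sweep is what makes `ambari-agent stop` return quickly instead of hanging until a full retry cycle completes.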
[ambari] branch branch-2.7 updated: AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 3c2bc7a AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk) 3c2bc7a is described below commit 3c2bc7abd66cf7c5930a52b192196d8cc2ffdbbf Author: Andrew Onishuk AuthorDate: Tue Oct 30 11:51:29 2018 +0200 AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk) --- .../src/main/python/ambari_agent/CustomServiceOrchestrator.py | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py index 91c7385..6d15e78 100644 --- a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py +++ b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py @@ -415,8 +415,10 @@ class CustomServiceOrchestrator(object): self.commands_for_component_in_progress[cluster_id][command['role']] += 1 incremented_commands_for_component = True -# reset status which was reported, so agent re-reports it after command finished - self.initializer_module.component_status_executor.reported_component_status[cluster_id][command['role']]['STATUS'] = None +if 'serviceName' in command: + service_component_name = command['serviceName'] + "/" + command['role'] + # reset status which was reported, so agent re-reports it after command finished + self.initializer_module.component_status_executor.reported_component_status[cluster_id][service_component_name]['STATUS'] = None for py_file, current_base_dir in filtered_py_file_list: log_info_on_failure = command_name not in self.DONT_DEBUG_FAILURES_FOR_COMMANDS
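The fix above re-keys the reported-status cache by `"serviceName/role"` instead of bare `role`, so the reset hits the entry the status executor actually wrote. A sketch of that reset step, using a nested `defaultdict` as a stand-in for the agent's status map (the helper name and return value are illustrative):

```python
from collections import defaultdict

def reset_reported_status(reported, cluster_id, command):
    """Clear the cached STATUS for a component so the agent re-reports it
    after the command finishes. Keyed by serviceName/role, matching the fix."""
    if 'serviceName' in command:
        key = command['serviceName'] + "/" + command['role']
        reported[cluster_id][key]['STATUS'] = None
        return key
    return None  # commands without a serviceName have no status entry to clear
```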
[ambari] branch trunk updated: AMBARI-24559. Diff in Downloaded client config: Host file has Stack info where as downloaded file has 'None' in "user.agent.prefix" properties (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 18eefee AMBARI-24559. Diff in Downloaded client config: Host file has Stack info where as downloaded file has 'None' in "user.agent.prefix" properties (aonishuk) 18eefee is described below commit 18eefee38e7731176937d255ad4130221b19a0dc Author: Andrew Onishuk AuthorDate: Wed Oct 24 11:29:14 2018 +0300 AMBARI-24559. Diff in Downloaded client config: Host file has Stack info where as downloaded file has 'None' in "user.agent.prefix" properties (aonishuk) --- .../server/controller/internal/ClientConfigResourceProvider.java | 9 + 1 file changed, 9 insertions(+) diff --git a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java index 01e8f37..3940c83 100644 --- a/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java +++ b/ambari-server/src/main/java/org/apache/ambari/server/controller/internal/ClientConfigResourceProvider.java @@ -76,6 +76,7 @@ import org.apache.ambari.server.state.DesiredConfig; import org.apache.ambari.server.state.PropertyInfo.PropertyType; import org.apache.ambari.server.state.Service; import org.apache.ambari.server.state.ServiceComponent; +import org.apache.ambari.server.state.ServiceComponentHost; import org.apache.ambari.server.state.ServiceInfo; import org.apache.ambari.server.state.ServiceOsSpecific; import org.apache.ambari.server.state.StackId; @@ -400,10 +401,17 @@ public class ClientConfigResourceProvider extends AbstractControllerResourceProv TreeMap clusterLevelParams = null; TreeMap ambariLevelParams = null; +TreeMap topologyCommandParams = new TreeMap<>(); if (getManagementController() 
instanceof AmbariManagementControllerImpl){ AmbariManagementControllerImpl controller = ((AmbariManagementControllerImpl)getManagementController()); clusterLevelParams = controller.getMetadataClusterLevelParams(cluster, stackId); ambariLevelParams = controller.getMetadataAmbariLevelParams(); + + Service s = cluster.getService(serviceName); + ServiceComponent sc = s.getServiceComponent(componentName); + ServiceComponentHost sch = sc.getServiceComponentHost(response.getHostname()); + + topologyCommandParams = controller.getTopologyCommandParams(cluster.getClusterId(), serviceName, componentName, sch); } TreeMap agentLevelParams = new TreeMap<>(); agentLevelParams.put("hostname", hostName); @@ -414,6 +422,7 @@ public class ClientConfigResourceProvider extends AbstractControllerResourceProv commandParams.put("env_configs_list", envConfigs); commandParams.put("properties_configs_list", propertiesConfigs); commandParams.put("output_file", componentName + "-configs" + Configuration.DEF_ARCHIVE_EXTENSION); +commandParams.putAll(topologyCommandParams); Map jsonContent = new TreeMap<>(); jsonContent.put("configurations", configurations);
[ambari] branch branch-2.7 updated: AMBARI-24813. Merge missing agent commits to branch-2.7 (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 232e3ba AMBARI-24813. Merge missing agent commits to branch-2.7 (aonishuk) 232e3ba is described below commit 232e3ba25d5a809acf84cdb7d15f0b0984a27ac1 Author: Andrew Onishuk AuthorDate: Tue Oct 23 11:35:26 2018 +0300 AMBARI-24813. Merge missing agent commits to branch-2.7 (aonishuk) --- .../resource_management/TestRepositoryResource.py | 45 +++--- .../libraries/functions/repository_util.py | 10 +- .../libraries/providers/__init__.py| 6 +- .../libraries/providers/repository.py | 153 ++--- .../libraries/resources/repository.py | 9 +- .../custom_actions/scripts/update_repo.py | 3 +- .../before-INSTALL/scripts/repo_initialization.py | 4 +- .../python/custom_actions/TestInstallPackages.py | 66 - .../test/python/custom_actions/TestUpdateRepo.py | 2 +- .../python/stacks/2.0.6/OOZIE/test_oozie_server.py | 48 +++ .../hooks/before-INSTALL/test_before_install.py| 23 ++-- 11 files changed, 191 insertions(+), 178 deletions(-) diff --git a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py index b1a4757..3bb91e8 100644 --- a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py +++ b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py @@ -64,6 +64,8 @@ class TestRepositoryResource(TestCase): @patch.object(OSCheck, "is_ubuntu_family") @patch.object(OSCheck, "is_redhat_family") @patch("resource_management.libraries.providers.repository.File") +@patch("filecmp.cmp", new=MagicMock(return_value=False)) +@patch("os.path.isfile", new=MagicMock(return_value=True)) @patch.object(System, "os_family", new='redhat') def test_create_repo_redhat(self, file_mock, is_redhat_family, is_ubuntu_family, 
is_suse_family): @@ -77,6 +79,8 @@ class TestRepositoryResource(TestCase): mirror_list='https://mirrors.base_url.org/?repo=Repository=$basearch', repo_file_name='Repository', repo_template=RHEL_SUSE_DEFAULT_TEMPLATE) + +Repository(None, action="create") self.assertTrue('hadoop' in env.resources['Repository']) defined_arguments = env.resources['Repository']['hadoop'].arguments @@ -91,20 +95,16 @@ class TestRepositoryResource(TestCase): self.assertEqual(defined_arguments, expected_arguments) self.assertEqual(file_mock.call_args[0][0], '/etc/yum.repos.d/Repository.repo') -template_item = file_mock.call_args[1]['content'] -template = str(template_item.name) -expected_template_arguments.update({'repo_id': 'hadoop'}) - -self.assertEqual(expected_template_arguments, template_item.context._dict) -self.assertEqual(RHEL_SUSE_DEFAULT_TEMPLATE, template) - @patch.object(OSCheck, "is_suse_family") @patch.object(OSCheck, "is_ubuntu_family") @patch.object(OSCheck, "is_redhat_family") @patch.object(System, "os_family", new='suse') +@patch("resource_management.libraries.providers.repository.checked_call") +@patch("os.path.isfile", new=MagicMock(return_value=True)) +@patch("filecmp.cmp", new=MagicMock(return_value=False)) @patch("resource_management.libraries.providers.repository.File") -def test_create_repo_suse(self, file_mock, +def test_create_repo_suse(self, file_mock, checked_call, is_redhat_family, is_ubuntu_family, is_suse_family): is_redhat_family.return_value = False is_ubuntu_family.return_value = False @@ -116,6 +116,8 @@ class TestRepositoryResource(TestCase): mirror_list='https://mirrors.base_url.org/?repo=Repository=$basearch', repo_template = RHEL_SUSE_DEFAULT_TEMPLATE, repo_file_name='Repository') + +Repository(None, action="create") self.assertTrue('hadoop' in env.resources['Repository']) defined_arguments = env.resources['Repository']['hadoop'].arguments @@ -130,13 +132,6 @@ class TestRepositoryResource(TestCase): self.assertEqual(defined_arguments, 
expected_arguments) self.assertEqual(file_mock.call_args[0][0], '/etc/zypp/repos.d/Repository.repo') -template_item = file_mock.call_args[1]['
[ambari] branch trunk updated (a153e5e -> 18f6d37)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from a153e5e AMBARI-24783 : removed dependencies which are having security issues (nitirajrathore) (#2493) add 18f6d37 AMBARI-24811. Remove unused files from ambari-agent (aonishuk) No new revisions were added by this update. Summary of changes: .../src/main/python/ambari_agent/Controller.py | 673 -- .../src/main/python/ambari_agent/Heartbeat.py | 125 .../ambari_agent/PythonReflectiveExecutor.py | 113 --- .../python/ambari_agent/StatusCommandsExecutor.py | 88 --- .../src/main/python/ambari_agent/client_example.py | 69 -- .../src/main/python/ambari_agent/test.json | 69 -- .../test/python/ambari_agent/TestAmbariAgent.py| 1 - .../src/test/python/ambari_agent/TestController.py | 764 - .../src/test/python/ambari_agent/TestHeartbeat.py | 254 --- .../src/test/python/ambari_agent/TestMain.py | 11 +- .../src/test/python/ambari_agent/TestSecurity.py | 1 - 11 files changed, 2 insertions(+), 2166 deletions(-) delete mode 100644 ambari-agent/src/main/python/ambari_agent/Controller.py delete mode 100644 ambari-agent/src/main/python/ambari_agent/Heartbeat.py delete mode 100644 ambari-agent/src/main/python/ambari_agent/PythonReflectiveExecutor.py delete mode 100644 ambari-agent/src/main/python/ambari_agent/StatusCommandsExecutor.py delete mode 100644 ambari-agent/src/main/python/ambari_agent/client_example.py delete mode 100644 ambari-agent/src/main/python/ambari_agent/test.json delete mode 100644 ambari-agent/src/test/python/ambari_agent/TestController.py delete mode 100644 ambari-agent/src/test/python/ambari_agent/TestHeartbeat.py
[ambari] branch branch-2.7 updated: AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 347c61a AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk) 347c61a is described below commit 347c61a71bcab605b1c05f7f95e5704ac27527f9 Author: Andrew Onishuk AuthorDate: Tue Oct 16 13:35:41 2018 +0300 AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk) --- .../src/main/package/dependencies.properties | 2 +- .../resource_management/TestRepositoryResource.py | 16 +++ .../python/ambari_commons/resources/os_family.json | 3 +- .../libraries/providers/repository.py | 49 -- .../resource_management/libraries/script/script.py | 7 +++- .../src/main/resources/version_definition.xsd | 1 + 6 files changed, 54 insertions(+), 24 deletions(-) diff --git a/ambari-agent/src/main/package/dependencies.properties b/ambari-agent/src/main/package/dependencies.properties index 07b0b68..ec64264 100644 --- a/ambari-agent/src/main/package/dependencies.properties +++ b/ambari-agent/src/main/package/dependencies.properties @@ -29,4 +29,4 @@ # however should be encouraged manually in pom.xml. 
rpm.dependency.list=openssl,\nRequires: rpm-python,\nRequires: zlib,\nRequires: python >= 2.6 -deb.dependency.list=openssl, zlibc, python (>= 2.6) \ No newline at end of file +deb.dependency.list=openssl, python (>= 2.6) \ No newline at end of file diff --git a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py index a69b57b..b1a4757 100644 --- a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py +++ b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py @@ -188,7 +188,7 @@ class TestRepositoryResource(TestCase): @patch.object(OSCheck, "is_suse_family") @patch.object(OSCheck, "is_ubuntu_family") @patch.object(OSCheck, "is_redhat_family") -@patch("resource_management.libraries.providers.repository.checked_call") +@patch("resource_management.libraries.providers.repository.call") @patch.object(tempfile, "NamedTemporaryFile") @patch("resource_management.libraries.providers.repository.Execute") @patch("resource_management.libraries.providers.repository.File") @@ -197,13 +197,13 @@ class TestRepositoryResource(TestCase): @patch.object(System, "os_release_name", new='precise') @patch.object(System, "os_family", new='ubuntu') def test_create_repo_ubuntu_repo_exists(self, file_mock, execute_mock, -tempfile_mock, checked_call_mock, is_redhat_family, is_ubuntu_family, is_suse_family): +tempfile_mock, call_mock, is_redhat_family, is_ubuntu_family, is_suse_family): is_redhat_family.return_value = False is_ubuntu_family.return_value = True is_suse_family.return_value = False tempfile_mock.return_value = MagicMock(spec=file) tempfile_mock.return_value.__enter__.return_value.name = "/tmp/1.txt" - checked_call_mock.return_value = 0, "The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 123ABCD" + call_mock.return_value = 0, "The following signatures couldn't be verified because the public key 
is not available: NO_PUBKEY 123ABCD" with Environment('/') as env: with patch.object(repository, "__file__", new='/ambari/test/repo/dummy/path/file'): @@ -228,10 +228,10 @@ class TestRepositoryResource(TestCase): #'apt-get update -qq -o Dir::Etc::sourcelist="sources.list.d/HDP.list" -o APT::Get::List-Cleanup="0"') execute_command_item = execute_mock.call_args_list[0][0][0] - self.assertEqual(checked_call_mock.call_args_list[0][0][0], ['apt-get', 'update', '-qq', '-o', 'Dir::Etc::sourcelist=sources.list.d/HDP.list', '-o', 'Dir::Etc::sourceparts=-', '-o', 'APT::Get::List-Cleanup=0']) + self.assertEqual(call_mock.call_args_list[0][0][0], ['apt-get', 'update', '-qq', '-o', 'Dir::Etc::sourcelist=sources.list.d/HDP.list', '-o', 'Dir::Etc::sourceparts=-', '-o', 'APT::Get::List-Cleanup=0']) self.assertEqual(execute_command_item, ('apt-key', 'adv', '--recv-keys', '--keyserver', 'keyserver.ubuntu.com', '123ABCD')) -@patch("resource_management.libraries.providers.repository.checked_call") +@patch("resource_management.libraries.providers.repository.call") @patch.object(tempfile, "NamedTemporaryFile") @patch("resource_management.libraries.providers.reposito
[ambari] branch trunk updated: AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 345da1a AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk) 345da1a is described below commit 345da1abed7c55b6b26c921a0a2ab6b92b37db8d Author: Andrew Onishuk AuthorDate: Tue Oct 16 15:04:32 2018 +0300 AMBARI-24784. Ambari-agent cannot register sometimes (aonishuk) --- .../src/main/python/ambari_agent/CustomServiceOrchestrator.py | 6 -- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py index 74b7d92..8abb479 100644 --- a/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py +++ b/ambari-agent/src/main/python/ambari_agent/CustomServiceOrchestrator.py @@ -417,8 +417,10 @@ class CustomServiceOrchestrator(object): self.commands_for_component_in_progress[cluster_id][command['role']] += 1 incremented_commands_for_component = True -# reset status which was reported, so agent re-reports it after command finished - self.initializer_module.component_status_executor.reported_component_status[cluster_id][command['role']]['STATUS'] = None +if 'serviceName' in command: + service_component_name = command['serviceName'] + "/" + command['role'] + # reset status which was reported, so agent re-reports it after command finished + self.initializer_module.component_status_executor.reported_component_status[cluster_id][service_component_name]['STATUS'] = None for py_file, current_base_dir in filtered_py_file_list: log_info_on_failure = command_name not in self.DONT_DEBUG_FAILURES_FOR_COMMANDS
[ambari] 02/02: Delete repository.py.rej
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit f4a520f25d6cb3e06b93634e37d9a6798c038094 Author: aonishuk AuthorDate: Tue Oct 16 13:03:56 2018 +0300 Delete repository.py.rej --- .../libraries/providers/repository.py.rej | 45 -- 1 file changed, 45 deletions(-) diff --git a/ambari-common/src/main/python/resource_management/libraries/providers/repository.py.rej b/ambari-common/src/main/python/resource_management/libraries/providers/repository.py.rej deleted file mode 100644 index 37b977c..000 --- a/ambari-common/src/main/python/resource_management/libraries/providers/repository.py.rej +++ /dev/null @@ -1,45 +0,0 @@ ambari-common/src/main/python/resource_management/libraries/providers/repository.py -+++ ambari-common/src/main/python/resource_management/libraries/providers/repository.py -@@ -65,7 +66,14 @@ class RepositoryProvider(Provider): - content = StaticFile(tmpf.name) - ) - --self.update(repo_file_path) -+try: -+ self.update(repo_file_path) -+except: -+ # remove created file or else ambari will consider that update was successful and skip repository operations -+ File(repo_file_path, -+ action = "delete", -+ ) -+ raise - - RepositoryProvider.repo_files_content.clear() - -@@ -136,12 +144,23 @@ class UbuntuRepositoryProvider(RepositoryProvider): - def update(self, repo_file_path): - repo_file_name = os.path.basename(repo_file_path) - update_cmd_formatted = [format(x) for x in self.update_cmd] --# this is time expensive --retcode, out = checked_call(update_cmd_formatted, sudo=True, quiet=False) -+update_failed_exception = None -+ -+try: -+ # this is time expensive -+ retcode, out = call(update_cmd_formatted, sudo=True, quiet=False) -+except ExecutionFailed as ex: -+ out = ex.out -+ update_failed_exception = ex - --# add public keys for new repos - missing_pkeys = set(re.findall(self.missing_pkey_regex, out)) -+ -+# failed but NOT due to 
missing pubkey -+if update_failed_exception and not missing_pkeys: -+ raise update_failed_exception -+ - for pkey in missing_pkeys: -+ # add public keys for new repos - Execute(self.app_pkey_cmd_prefix + (pkey,), - timeout = 15, # in case we are on the host w/o internet (using localrepo), we should ignore hanging - ignore_failures = True,
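The deleted `.rej` hunk above shows the intended flow: run `apt-get update` with a non-raising `call`, scrape the output for missing public keys, and only re-raise when the failure was not a pubkey problem. A sketch of the key-extraction step; the actual `missing_pkey_regex` is not visible in the diff, so the pattern below is a plausible assumption, not the real one:

```python
import re

# ASSUMPTION: the real missing_pkey_regex is not shown in the diff;
# this pattern matches apt's usual "NO_PUBKEY <keyid>" complaint.
MISSING_PKEY_RE = r"NO_PUBKEY (\w+)"

def extract_missing_pubkeys(apt_output):
    """Collect key ids apt-get complained about, so apt-key adv --recv-keys
    can be retried once per key (mirroring the provider's recovery loop)."""
    return set(re.findall(MISSING_PKEY_RE, apt_output))
```

Returning a `set` deduplicates keys that apt reports once per affected repository line, so each key is fetched only once.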
[ambari] 01/02: AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit ecaba72046455efae8ac99a8c8eed723b614e3e9 Author: Andrew Onishuk AuthorDate: Tue Oct 16 12:45:49 2018 +0300 AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk) --- .../src/main/package/dependencies.properties | 2 +- .../resource_management/TestRepositoryResource.py | 16 .../python/ambari_commons/resources/os_family.json | 3 +- .../libraries/providers/repository.py | 29 +++--- .../libraries/providers/repository.py.rej | 45 ++ .../resource_management/libraries/script/script.py | 7 +++- .../src/main/resources/version_definition.xsd | 1 + 7 files changed, 86 insertions(+), 17 deletions(-) diff --git a/ambari-agent/src/main/package/dependencies.properties b/ambari-agent/src/main/package/dependencies.properties index 07b0b68..ec64264 100644 --- a/ambari-agent/src/main/package/dependencies.properties +++ b/ambari-agent/src/main/package/dependencies.properties @@ -29,4 +29,4 @@ # however should be encouraged manually in pom.xml. 
rpm.dependency.list=openssl,\nRequires: rpm-python,\nRequires: zlib,\nRequires: python >= 2.6 -deb.dependency.list=openssl, zlibc, python (>= 2.6) \ No newline at end of file +deb.dependency.list=openssl, python (>= 2.6) \ No newline at end of file diff --git a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py index 91d4939..3bb91e8 100644 --- a/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py +++ b/ambari-agent/src/test/python/resource_management/TestRepositoryResource.py @@ -188,7 +188,7 @@ class TestRepositoryResource(TestCase): @patch.object(OSCheck, "is_suse_family") @patch.object(OSCheck, "is_ubuntu_family") @patch.object(OSCheck, "is_redhat_family") -@patch("resource_management.libraries.providers.repository.checked_call") +@patch("resource_management.libraries.providers.repository.call") @patch.object(tempfile, "NamedTemporaryFile") @patch("resource_management.libraries.providers.repository.Execute") @patch("resource_management.libraries.providers.repository.File") @@ -197,13 +197,13 @@ class TestRepositoryResource(TestCase): @patch.object(System, "os_release_name", new='precise') @patch.object(System, "os_family", new='ubuntu') def test_create_repo_ubuntu_repo_exists(self, file_mock, execute_mock, -tempfile_mock, checked_call_mock, is_redhat_family, is_ubuntu_family, is_suse_family): +tempfile_mock, call_mock, is_redhat_family, is_ubuntu_family, is_suse_family): is_redhat_family.return_value = False is_ubuntu_family.return_value = True is_suse_family.return_value = False tempfile_mock.return_value = MagicMock(spec=file) tempfile_mock.return_value.__enter__.return_value.name = "/tmp/1.txt" - checked_call_mock.return_value = 0, "The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 123ABCD" + call_mock.return_value = 0, "The following signatures couldn't be verified because the public key 
is not available: NO_PUBKEY 123ABCD" with Environment('/') as env: with patch.object(repository, "__file__", new='/ambari/test/repo/dummy/path/file'): @@ -229,10 +229,10 @@ class TestRepositoryResource(TestCase): #'apt-get update -qq -o Dir::Etc::sourcelist="sources.list.d/HDP.list" -o APT::Get::List-Cleanup="0"') execute_command_item = execute_mock.call_args_list[0][0][0] - self.assertEqual(checked_call_mock.call_args_list[0][0][0], ['apt-get', 'update', '-qq', '-o', 'Dir::Etc::sourcelist=sources.list.d/HDP.list', '-o', 'Dir::Etc::sourceparts=-', '-o', 'APT::Get::List-Cleanup=0']) + self.assertEqual(call_mock.call_args_list[0][0][0], ['apt-get', 'update', '-qq', '-o', 'Dir::Etc::sourcelist=sources.list.d/HDP.list', '-o', 'Dir::Etc::sourceparts=-', '-o', 'APT::Get::List-Cleanup=0']) self.assertEqual(execute_command_item, ('apt-key', 'adv', '--recv-keys', '--keyserver', 'keyserver.ubuntu.com', '123ABCD')) -@patch("resource_management.libraries.providers.repository.checked_call") +@patch("resource_management.libraries.providers.repository.call") @patch.object(tempfile, "NamedTemporaryFile") @patch("resource_management.libraries.providers.repository.Execute") @patch("resource_management.libraries.providers.repository.File") @@ -241,13 +241,13 @@ class
[ambari] branch trunk updated (3adfe2e -> f4a520f)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from 3adfe2e AMBARI-24677. Directory resource cannot work with symlinks if link target is a relative path (aonishuk) new ecaba72 AMBARI-24782. Introduce support for Ubuntu 18 LTS (aonishuk) new f4a520f Delete repository.py.rej The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../src/main/package/dependencies.properties | 2 +- .../resource_management/TestRepositoryResource.py | 16 ++-- .../python/ambari_commons/resources/os_family.json | 3 ++- .../libraries/providers/repository.py | 29 ++ .../resource_management/libraries/script/script.py | 7 -- .../src/main/resources/version_definition.xsd | 1 + 6 files changed, 41 insertions(+), 17 deletions(-)
[ambari] branch trunk updated: AMBARI-24677. Directory resource cannot work with symlinks if link target is a relative path (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 3adfe2e AMBARI-24677. Directory resource cannot work with symlinks if link target is a relative path (aonishuk) 3adfe2e is described below commit 3adfe2e72e5dda0bbfd7e27e6aadd537f3efff3d Author: Andrew Onishuk AuthorDate: Mon Sep 24 13:11:44 2018 +0300 AMBARI-24677. Directory resource cannot work with symlinks if link target is a relative path (aonishuk) --- .../src/main/python/resource_management/core/providers/system.py | 4 1 file changed, 4 insertions(+) diff --git a/ambari-common/src/main/python/resource_management/core/providers/system.py b/ambari-common/src/main/python/resource_management/core/providers/system.py index 95c12ad..acb19d4 100644 --- a/ambari-common/src/main/python/resource_management/core/providers/system.py +++ b/ambari-common/src/main/python/resource_management/core/providers/system.py @@ -176,8 +176,12 @@ class DirectoryProvider(Provider): if path in followed_links: raise Fail("Applying %s failed, looped symbolic links found while resolving %s" % (self.resource, path)) followed_links.add(path) + prev_path = path path = sudo.readlink(path) + if not os.path.isabs(path): +path = os.path.join(os.path.dirname(prev_path), path) + if path != self.resource.path: Logger.info("Following the link {0} to {1} to create the directory".format(self.resource.path, path))
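The AMBARI-24677 fix above addresses a subtlety of `readlink`: for a symlink with a relative target, it returns the raw relative path, which must be joined against the link's own directory before the loop can continue. A hedged sketch of the repaired resolution loop (`resolve_link_chain` is an illustrative name, not the Ambari API):

```python
import os
import tempfile

def resolve_link_chain(path):
    """Follow a chain of symlinks by hand, handling relative targets."""
    followed_links = set()
    while os.path.islink(path):
        if path in followed_links:
            raise RuntimeError("looped symbolic links while resolving %s" % path)
        followed_links.add(path)
        prev_path = path
        path = os.readlink(path)
        # the core of the fix: a relative target is relative to the link's dir
        if not os.path.isabs(path):
            path = os.path.join(os.path.dirname(prev_path), path)
    return path

# demo: a symlink whose target is the *relative* name "target"
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "target"))
os.symlink("target", os.path.join(base, "link"))
print(resolve_link_chain(os.path.join(base, "link")))
```

Without the `os.path.isabs` check, the loop would try to follow `"target"` relative to the process working directory and fail, which is exactly the Directory-resource bug the commit describes.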
[ambari] 02/03: Update AmbariAgent.py
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit 39aedf67d94931a077d067b28dc8bef25546fab0 Author: aonishuk AuthorDate: Wed Oct 10 15:33:56 2018 +0300 Update AmbariAgent.py --- ambari-agent/src/main/python/ambari_agent/AmbariAgent.py | 2 -- 1 file changed, 2 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py b/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py index d3cbd14..6f5a14e 100644 --- a/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py +++ b/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py @@ -51,8 +51,6 @@ def main(): mergedArgs = [PYTHON, AGENT_SCRIPT] + args while status == AGENT_AUTO_RESTART_EXIT_CODE: -with open("/tmp/x" , "w") as fp: - fp.write(str(mergedArgs)) mainProcess = subprocess32.Popen(mergedArgs) mainProcess.communicate() status = mainProcess.returncode
[ambari] branch trunk updated (87d652f -> cd9d442)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from 87d652f Merge pull request #2433 from hiveww/AMBARI-24751-trunk new 9103295 AMBARI-24758. Ambari-agent takes up too many cpu of perf (aonishuk) new 39aedf6 Update AmbariAgent.py new cd9d442 Update ams_alert.py The 3 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../src/main/python/ambari_agent/Facter.py | 30 ++ .../ambari_agent/HostCheckReportFileHandler.py | 5 ++-- .../src/main/python/ambari_agent/HostCleanup.py| 3 ++- .../src/main/python/ambari_agent/HostInfo.py | 5 ++-- .../main/python/ambari_agent/alerts/ams_alert.py | 4 +-- .../main/python/ambari_agent/alerts/base_alert.py | 3 ++- .../python/ambari_agent/alerts/metric_alert.py | 4 +-- .../python/ambari_agent/alerts/script_alert.py | 3 ++- ambari-agent/src/main/python/ambari_agent/main.py | 3 ++- 9 files changed, 36 insertions(+), 24 deletions(-)
[ambari] 03/03: Update ams_alert.py
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit cd9d442b60521dc73363f1588f6715f17754367c Author: aonishuk AuthorDate: Fri Oct 12 09:38:15 2018 +0300 Update ams_alert.py --- ambari-agent/src/main/python/ambari_agent/alerts/ams_alert.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-agent/src/main/python/ambari_agent/alerts/ams_alert.py b/ambari-agent/src/main/python/ambari_agent/alerts/ams_alert.py index 32ac725..74511ed 100644 --- a/ambari-agent/src/main/python/ambari_agent/alerts/ams_alert.py +++ b/ambari-agent/src/main/python/ambari_agent/alerts/ams_alert.py @@ -212,7 +212,7 @@ def f(args): self.minimum_value = metric_info['minimum_value'] if 'value' in metric_info: - realcode = REALCODE_REGEXP.sub('(\{(\d+)\})', 'args[\g<2>][k]', metric_info['value']) + realcode = REALCODE_REGEXP.sub('args[\g<2>][k]', metric_info['value']) self.custom_value_module = imp.new_module(str(uuid.uuid4())) code = self.DYNAMIC_CODE_VALUE_TEMPLATE.format(realcode)
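The one-line ams_alert fix above removes a duplicated pattern argument: a compiled regex already carries its pattern, so `compiled.sub(pattern, repl, string)` shifts every argument by one position. A small sketch of the corrected call — the placeholder pattern below is an assumption standing in for Ambari's `REALCODE_REGEXP`:

```python
import re

# Hypothetical stand-in for the alert's placeholder pattern: {0}, {1}, ...
REALCODE_REGEXP = re.compile(r'(\{(\d+)\})')

# Correct form: compiled_pattern.sub(repl, string). Passing the pattern
# again (as the pre-fix code did) would push the target string into the
# `count` slot and break the substitution.
expr = "{0} - {1}"
realcode = REALCODE_REGEXP.sub(r'args[\g<2>][k]', expr)
print(realcode)  # -> args[0][k] - args[1][k]
```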
[ambari] 01/03: AMBARI-24758. Ambari-agent takes up too many cpu of perf (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit 9103295a7a8a8da3efe0f1b96cf6d915bb8cbb1d Author: Andrew Onishuk AuthorDate: Wed Oct 10 15:30:43 2018 +0300 AMBARI-24758. Ambari-agent takes up too many cpu of perf (aonishuk) --- .../src/main/python/ambari_agent/AmbariAgent.py| 2 ++ .../src/main/python/ambari_agent/Facter.py | 30 ++ .../ambari_agent/HostCheckReportFileHandler.py | 5 ++-- .../src/main/python/ambari_agent/HostCleanup.py| 3 ++- .../src/main/python/ambari_agent/HostInfo.py | 5 ++-- .../main/python/ambari_agent/alerts/ams_alert.py | 4 +-- .../main/python/ambari_agent/alerts/base_alert.py | 3 ++- .../python/ambari_agent/alerts/metric_alert.py | 4 +-- .../python/ambari_agent/alerts/script_alert.py | 3 ++- ambari-agent/src/main/python/ambari_agent/main.py | 3 ++- 10 files changed, 38 insertions(+), 24 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py b/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py index 6f5a14e..d3cbd14 100644 --- a/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py +++ b/ambari-agent/src/main/python/ambari_agent/AmbariAgent.py @@ -51,6 +51,8 @@ def main(): mergedArgs = [PYTHON, AGENT_SCRIPT] + args while status == AGENT_AUTO_RESTART_EXIT_CODE: +with open("/tmp/x" , "w") as fp: + fp.write(str(mergedArgs)) mainProcess = subprocess32.Popen(mergedArgs) mainProcess.communicate() status = mainProcess.returncode diff --git a/ambari-agent/src/main/python/ambari_agent/Facter.py b/ambari-agent/src/main/python/ambari_agent/Facter.py index 3859ff2..366160d 100644 --- a/ambari-agent/src/main/python/ambari_agent/Facter.py +++ b/ambari-agent/src/main/python/ambari_agent/Facter.py @@ -372,6 +372,15 @@ class FacterWindows(Facter): @OsFamilyImpl(os_family=OsFamilyImpl.DEFAULT) class FacterLinux(Facter): + FIRST_WORDS_REGEXP = re.compile(r',$') + IFNAMES_REGEXP = re.compile("^\d") + 
SE_STATUS_REGEXP = re.compile('(enforcing|permissive|enabled)') + DIGITS_REGEXP = re.compile("\d+") + FREEMEM_REGEXP = re.compile("MemFree:.*?(\d+) .*") + TOTALMEM_REGEXP = re.compile("MemTotal:.*?(\d+) .*") + SWAPFREE_REGEXP = re.compile("SwapFree:.*?(\d+) .*") + SWAPTOTAL_REGEXP = re.compile("SwapTotal:.*?(\d+) .*") + # selinux command GET_SE_LINUX_ST_CMD = "/usr/sbin/sestatus" GET_IFCONFIG_SHORT_CMD = "ifconfig -s" @@ -436,7 +445,7 @@ class FacterLinux(Facter): try: retcode, out, err = run_os_command(FacterLinux.GET_SE_LINUX_ST_CMD) - se_status = re.search('(enforcing|permissive|enabled)', out) + se_status = FacterLinux.SE_STATUS_REGEXP.search(out) if se_status: return True except OSError: @@ -449,19 +458,18 @@ class FacterLinux(Facter): if i.strip(): result = result + i.split()[0].strip() + "," -result = re.sub(r',$', "", result) +result = FacterLinux.FIRST_WORDS_REGEXP.sub("", result) return result def return_ifnames_from_ip_link(self, ip_link_output): list = [] -prog = re.compile("^\d") for line in ip_link_output.splitlines(): - if prog.match(line): + if FacterLinux.IFNAMES_REGEXP.match(line): list.append(line.split()[1].rstrip(":")) return ",".join(list) def data_return_first(self, patern, data): -full_list = re.findall(patern, data) +full_list = patern.findall(data) result = "" if full_list: result = full_list[0] @@ -518,7 +526,7 @@ class FacterLinux(Facter): # Return uptime seconds def getUptimeSeconds(self): try: - return int(self.data_return_first("\d+", self.DATA_UPTIME_OUTPUT)) + return int(self.data_return_first(FacterLinux.DIGITS_REGEXP, self.DATA_UPTIME_OUTPUT)) except ValueError: log.warn("Can't get an uptime value from {0}".format(self.DATA_UPTIME_OUTPUT)) return 0 @@ -527,7 +535,7 @@ class FacterLinux(Facter): def getMemoryFree(self): #:memoryfree_mb => "MemFree", try: - return int(self.data_return_first("MemFree:.*?(\d+) .*", self.DATA_MEMINFO_OUTPUT)) + return int(self.data_return_first(FacterLinux.FREEMEM_REGEXP, self.DATA_MEMINFO_OUTPUT)) 
except ValueError: log.warn("Can't get free memory size from {0}".format(self.DATA_MEMINFO_OUTPUT)) return 0 @@ -535,7 +543,7 @@ class FacterLinux(Facter): # Return memorytotal def getMemoryTotal(self): try: - return int(self.data_return_first("MemTotal:.*?(\d+) .*", self.DATA_MEMINFO_OUTPUT)) + return int(self.data_return_first(FacterLinux.TOTALMEM_REG
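The AMBARI-24758 change above hoists every regex to a module-level `re.compile` and passes the compiled object into `data_return_first`. Functions like `re.findall` re-resolve the pattern through an internal cache on every call, which adds measurable CPU cost in the agent's hot polling loops. A minimal sketch of the pattern (the sample `/proc/meminfo` text is illustrative):

```python
import re

# Compiled once at import time, as in the patched Facter class
FREEMEM_REGEXP = re.compile(r"MemFree:.*?(\d+) .*")

def data_return_first(pattern, data):
    """Return the first match of a precompiled pattern, '' if none —
    mirrors the patched helper, which now takes a compiled object."""
    full_list = pattern.findall(data)
    return full_list[0] if full_list else ""

meminfo = "MemTotal: 8010144 kB\nMemFree: 2103404 kB\n"
print(int(data_return_first(FREEMEM_REGEXP, meminfo)))
```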
[ambari] branch branch-2.7 updated: AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch branch-2.7 in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/branch-2.7 by this push: new 4da9f42 AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk) 4da9f42 is described below commit 4da9f42584e7185be7c776574ce0e15ce4cc25d5 Author: Andrew Onishuk AuthorDate: Wed Oct 10 10:49:19 2018 +0300 AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk) --- ambari-agent/src/main/python/ambari_agent/AmbariConfig.py| 5 +++-- .../src/main/python/resource_management/libraries/script/script.py | 4 ++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py index 0d80dd0..fedd063 100644 --- a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py +++ b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py @@ -24,6 +24,7 @@ import StringIO import hostname import ambari_simplejson as json import os +import ssl from ambari_agent.FileCache import FileCache from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl @@ -376,7 +377,8 @@ class AmbariConfig: :return: protocol name, PROTOCOL_TLSv1_2 by default """ -return self.get('security', 'force_https_protocol', default="PROTOCOL_TLSv1_2") +default = "PROTOCOL_TLSv1_2" if hasattr(ssl, "PROTOCOL_TLSv1_2") else "PROTOCOL_TLSv1" +return self.get('security', 'force_https_protocol', default=default) def get_force_https_protocol_value(self): """ @@ -384,7 +386,6 @@ class AmbariConfig: :return: protocol value """ -import ssl return getattr(ssl, self.get_force_https_protocol_name()) def get_ca_cert_file_path(self): diff --git 
a/ambari-common/src/main/python/resource_management/libraries/script/script.py b/ambari-common/src/main/python/resource_management/libraries/script/script.py index 3725aec..2a17208 100644 --- a/ambari-common/src/main/python/resource_management/libraries/script/script.py +++ b/ambari-common/src/main/python/resource_management/libraries/script/script.py @@ -24,6 +24,7 @@ __all__ = ["Script"] import re import os import sys +import ssl import logging import platform import inspect @@ -129,7 +130,7 @@ class Script(object): # Class variable tmp_dir = "" - force_https_protocol = "PROTOCOL_TLSv1_2" + force_https_protocol = "PROTOCOL_TLSv1_2" if hasattr(ssl, "PROTOCOL_TLSv1_2") else "PROTOCOL_TLSv1" ca_cert_file_path = None def load_structured_out(self): @@ -622,7 +623,6 @@ class Script(object): :return: protocol value """ -import ssl return getattr(ssl, Script.get_force_https_protocol_name()) @staticmethod
[ambari] branch trunk updated: AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 2415a03 AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk) 2415a03 is described below commit 2415a03b24dab92e703f71e7bca0a502a66bb402 Author: Andrew Onishuk AuthorDate: Wed Oct 10 10:46:54 2018 +0300 AMBARI-24757. Grafana start failing on U14 fails with error "AttributeError: 'module' object has no attribute 'PROTOCOL_TLSv1_2'" (aonishuk) --- ambari-agent/src/main/python/ambari_agent/AmbariConfig.py| 5 +++-- .../src/main/python/resource_management/libraries/script/script.py | 4 ++-- 2 files changed, 5 insertions(+), 4 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py index 0d80dd0..fedd063 100644 --- a/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py +++ b/ambari-agent/src/main/python/ambari_agent/AmbariConfig.py @@ -24,6 +24,7 @@ import StringIO import hostname import ambari_simplejson as json import os +import ssl from ambari_agent.FileCache import FileCache from ambari_commons.os_family_impl import OsFamilyFuncImpl, OsFamilyImpl @@ -376,7 +377,8 @@ class AmbariConfig: :return: protocol name, PROTOCOL_TLSv1_2 by default """ -return self.get('security', 'force_https_protocol', default="PROTOCOL_TLSv1_2") +default = "PROTOCOL_TLSv1_2" if hasattr(ssl, "PROTOCOL_TLSv1_2") else "PROTOCOL_TLSv1" +return self.get('security', 'force_https_protocol', default=default) def get_force_https_protocol_value(self): """ @@ -384,7 +386,6 @@ class AmbariConfig: :return: protocol value """ -import ssl return getattr(ssl, self.get_force_https_protocol_name()) def get_ca_cert_file_path(self): diff --git 
a/ambari-common/src/main/python/resource_management/libraries/script/script.py b/ambari-common/src/main/python/resource_management/libraries/script/script.py index a792271..9b47f0e 100644 --- a/ambari-common/src/main/python/resource_management/libraries/script/script.py +++ b/ambari-common/src/main/python/resource_management/libraries/script/script.py @@ -24,6 +24,7 @@ __all__ = ["Script"] import re import os import sys +import ssl import logging import platform import inspect @@ -134,7 +135,7 @@ class Script(object): # Class variable tmp_dir = "" - force_https_protocol = "PROTOCOL_TLSv1_2" + force_https_protocol = "PROTOCOL_TLSv1_2" if hasattr(ssl, "PROTOCOL_TLSv1_2") else "PROTOCOL_TLSv1" ca_cert_file_path = None def load_structured_out(self): @@ -663,7 +664,6 @@ class Script(object): :return: protocol value """ -import ssl return getattr(ssl, Script.get_force_https_protocol_name()) @staticmethod
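Both branches apply the same AMBARI-24757 idea: instead of hard-coding `PROTOCOL_TLSv1_2` — which old interpreters such as Ubuntu 14's Python lack, producing the `AttributeError` in the ticket title — probe the `ssl` module with `hasattr` and fall back. A condensed sketch of the fallback logic:

```python
import ssl

def get_force_https_protocol_name():
    # pick the strongest TLS constant the running Python actually exposes
    return "PROTOCOL_TLSv1_2" if hasattr(ssl, "PROTOCOL_TLSv1_2") else "PROTOCOL_TLSv1"

def get_force_https_protocol_value():
    # resolve the name to the real ssl module constant
    return getattr(ssl, get_force_https_protocol_name())

print(get_force_https_protocol_name())
```

Moving `import ssl` to the top of the module (rather than inside the getter) is what lets the class-level `force_https_protocol` default evaluate the same `hasattr` check at import time.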
[ambari] branch trunk updated: AMBARI-23058. yum installation fails if there is any transaction files (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 9204bb5 AMBARI-23058. yum installation fails if there is any transaction files (aonishuk) 9204bb5 is described below commit 9204bb59abaf4df097130f737d8c5c1c54f1c66c Author: Andrew Onishuk AuthorDate: Thu Oct 4 13:49:56 2018 +0300 AMBARI-23058. yum installation fails if there is any transaction files (aonishuk) --- .../src/main/python/ambari_commons/repo_manager/yum_manager.py | 3 +++ 1 file changed, 3 insertions(+) diff --git a/ambari-common/src/main/python/ambari_commons/repo_manager/yum_manager.py b/ambari-common/src/main/python/ambari_commons/repo_manager/yum_manager.py index 8e2931e..95e92cf 100644 --- a/ambari-common/src/main/python/ambari_commons/repo_manager/yum_manager.py +++ b/ambari-common/src/main/python/ambari_commons/repo_manager/yum_manager.py @@ -450,6 +450,9 @@ Ambari has detected that there are incomplete Yum transactions on this host. Thi - Identify the pending transactions with the command 'yum history list ' - Revert each pending transaction with the command 'yum history undo' - Flush the transaction log with 'yum-complete-transaction --cleanup-only' + +If the issue persists, old transaction files may be the cause. +Please delete them from /var/lib/yum/transaction* """ for line in help_msg.split("\n"):
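The extended help text above points operators at leftover files under `/var/lib/yum/transaction*`. A hypothetical helper that detects that condition (the function name and demo directory are illustrative, not part of Ambari's yum_manager API):

```python
import glob
import os
import tempfile

def leftover_yum_transactions(yum_lib_dir="/var/lib/yum"):
    """List incomplete-transaction files that can make yum refuse to
    install packages -- the situation AMBARI-23058 warns about."""
    return sorted(glob.glob(os.path.join(yum_lib_dir, "transaction*")))

# demo against a throwaway directory instead of the real /var/lib/yum
fake = tempfile.mkdtemp()
open(os.path.join(fake, "transaction-all.2018-10-04"), "w").close()
print(leftover_yum_transactions(fake))
```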
[ambari] branch trunk updated: AMBARI-24717. Ambari-agent does for save data hashes correctly (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new b497744 AMBARI-24717. Ambari-agent does for save data hashes correctly (aonishuk) b497744 is described below commit b497744c3fd5db56c0c70954b1f8f96c244bbe8b Author: Andrew Onishuk AuthorDate: Mon Oct 1 14:32:14 2018 +0300 AMBARI-24717. Ambari-agent does for save data hashes correctly (aonishuk) --- ambari-agent/src/main/python/ambari_agent/ClusterCache.py| 12 ++-- .../src/main/python/ambari_agent/ClusterMetadataCache.py | 9 - .../src/main/python/ambari_agent/InitializerModule.py| 2 +- ambari-agent/src/main/python/ambari_agent/RecoveryManager.py | 9 + .../python/ambari_agent/listeners/MetadataEventListener.py | 6 -- 5 files changed, 24 insertions(+), 14 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterCache.py index 2e13f16..ea3498d 100644 --- a/ambari-agent/src/main/python/ambari_agent/ClusterCache.py +++ b/ambari-agent/src/main/python/ambari_agent/ClusterCache.py @@ -99,10 +99,7 @@ class ClusterCache(dict): del self[cache_id_to_delete] self.on_cache_update() -self.persist_cache() - -# if all of above are sucessful finally set the hash -self.hash = cache_hash +self.persist_cache(cache_hash) def cache_update(self, update_dict, cache_hash): """ @@ -131,7 +128,7 @@ class ClusterCache(dict): with self._cache_lock: self[cluster_id] = immutable_cache - def persist_cache(self): + def persist_cache(self, cache_hash): # ensure that our cache directory exists if not os.path.exists(self.cluster_cache_dir): os.makedirs(self.cluster_cache_dir) @@ -142,7 +139,10 @@ class ClusterCache(dict): if self.hash is not None: with open(self.__current_cache_hash_file, 'w') as fp: - fp.write(self.hash) + fp.write(cache_hash) + +# if all of above are successful finally 
set the hash +self.hash = cache_hash def _get_mutable_copy(self): with self._cache_lock: diff --git a/ambari-agent/src/main/python/ambari_agent/ClusterMetadataCache.py b/ambari-agent/src/main/python/ambari_agent/ClusterMetadataCache.py index 2ae7962..6c9fc8e 100644 --- a/ambari-agent/src/main/python/ambari_agent/ClusterMetadataCache.py +++ b/ambari-agent/src/main/python/ambari_agent/ClusterMetadataCache.py @@ -30,14 +30,21 @@ class ClusterMetadataCache(ClusterCache): topology properties. """ - def __init__(self, cluster_cache_dir): + def __init__(self, cluster_cache_dir, config): """ Initializes the topology cache. :param cluster_cache_dir: :return: """ +self.config = config super(ClusterMetadataCache, self).__init__(cluster_cache_dir) + def on_cache_update(self): +try: + self.config.update_configuration_from_metadata(self['-1']['agentConfigs']) +except KeyError: + pass + def cache_delete(self, cache_update, cache_hash): """ Only deleting cluster is supported here diff --git a/ambari-agent/src/main/python/ambari_agent/InitializerModule.py b/ambari-agent/src/main/python/ambari_agent/InitializerModule.py index b15aaec..c5d9bee 100644 --- a/ambari-agent/src/main/python/ambari_agent/InitializerModule.py +++ b/ambari-agent/src/main/python/ambari_agent/InitializerModule.py @@ -84,7 +84,7 @@ class InitializerModule: """ self.is_registered = False -self.metadata_cache = ClusterMetadataCache(self.config.cluster_cache_dir) +self.metadata_cache = ClusterMetadataCache(self.config.cluster_cache_dir, self.config) self.topology_cache = ClusterTopologyCache(self.config.cluster_cache_dir, self.config) self.host_level_params_cache = ClusterHostLevelParamsCache(self.config.cluster_cache_dir) self.configurations_cache = ClusterConfigurationCache(self.config.cluster_cache_dir) diff --git a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py index e178457..4842353 100644 --- 
a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py +++ b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py @@ -103,6 +103,15 @@ class RecoveryManager: self.actions = {} self.update_config(6, 60, 5, 12, recovery_enabled, auto_start_only, auto_install_start) +# FIXME: Recovery manager does not support multiple clusters as of now. +if len(self.initializer_module.configurations_cache): + self.cluster_id = self.initializer_module.configurations_cache.keys()[0] + self.on_config_update() + +if len(self.initializer_module.host_level_params_
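The ClusterCache change above reorders persistence: the new hash is written to disk inside `persist_cache` and only then stored in memory, so a failure mid-write can no longer leave the agent believing it holds data it never persisted. A minimal sketch of that write-then-commit ordering (simplified; the real class also persists per-cluster files under a lock):

```python
import json
import os
import tempfile

class ClusterCache(dict):
    def __init__(self, cache_dir):
        super(ClusterCache, self).__init__()
        self.cache_dir = cache_dir
        self.hash = None

    def persist_cache(self, cache_hash):
        with open(os.path.join(self.cache_dir, "cache.json"), "w") as fp:
            json.dump(dict(self), fp)
        with open(os.path.join(self.cache_dir, "cache.hash"), "w") as fp:
            fp.write(cache_hash)
        # if all of the above succeeded, finally set the in-memory hash
        self.hash = cache_hash

cache = ClusterCache(tempfile.mkdtemp())
cache["cluster-1"] = {"state": "ok"}
cache.persist_cache("abc123")
print(cache.hash)  # -> abc123
```

The pre-fix code set `self.hash` before writing, so a crash between the two steps could persist a stale payload under a fresh hash and silently skip the next server update.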
[ambari] branch trunk updated: AMBARI-24715. Clean up ambari-agent.log (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git The following commit(s) were added to refs/heads/trunk by this push: new 33a42c3 AMBARI-24715. Clean up ambari-agent.log (aonishuk) 33a42c3 is described below commit 33a42c32bc45fc0ea5732819214d7b891c3c829f Author: Andrew Onishuk AuthorDate: Mon Oct 1 13:02:50 2018 +0300 AMBARI-24715. Clean up ambari-agent.log (aonishuk) --- ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py | 2 +- ambari-agent/src/main/python/ambari_agent/RecoveryManager.py | 1 - ambari-agent/src/main/python/ambari_agent/alerts/collector.py| 5 + .../src/main/python/ambari_agent/listeners/CommandsEventListener.py | 4 .../main/python/ambari_agent/listeners/ServerResponsesListener.py| 2 +- ambari-agent/src/main/python/ambari_agent/main.py| 2 ++ ambari-agent/src/main/python/ambari_agent/security.py| 2 +- 7 files changed, 14 insertions(+), 4 deletions(-) diff --git a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py b/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py index 9210e79..36b88d6 100644 --- a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py +++ b/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py @@ -143,7 +143,7 @@ class HeartbeatThread(threading.Thread): try: listener.on_event({}, response) except: - logger.exception("Exception while handing response to request at {0}. 
{1}".format(endpoint, response)) + logger.exception("Exception while handing response to request at {0} {1}".format(endpoint, response)) raise finally: with listener.event_queue_lock: diff --git a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py index ba6507c..e178457 100644 --- a/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py +++ b/ambari-agent/src/main/python/ambari_agent/RecoveryManager.py @@ -165,7 +165,6 @@ class RecoveryManager: component_status = copy.deepcopy(self.default_component_status) component_status["current"] = state self.statuses[component] = component_status - logger.info("New status, current status is set to %s for %s", self.statuses[component]["current"], component) finally: self.__status_lock.release() pass diff --git a/ambari-agent/src/main/python/ambari_agent/alerts/collector.py b/ambari-agent/src/main/python/ambari_agent/alerts/collector.py index 089301f..daac7ee 100644 --- a/ambari-agent/src/main/python/ambari_agent/alerts/collector.py +++ b/ambari-agent/src/main/python/ambari_agent/alerts/collector.py @@ -66,6 +66,11 @@ class AlertCollector(): for cluster,alert_map in self.__buckets.iteritems(): for alert_name in alert_map.keys(): alert = alert_map[alert_name] + + if not 'uuid' in alert: +logger.warn("Alert {0} does not have uuid key.".format(alert)) +continue + if alert['uuid'] == alert_uuid: self.remove(cluster, alert_name) finally: diff --git a/ambari-agent/src/main/python/ambari_agent/listeners/CommandsEventListener.py b/ambari-agent/src/main/python/ambari_agent/listeners/CommandsEventListener.py index b25ec69..51ba7df 100644 --- a/ambari-agent/src/main/python/ambari_agent/listeners/CommandsEventListener.py +++ b/ambari-agent/src/main/python/ambari_agent/listeners/CommandsEventListener.py @@ -74,6 +74,10 @@ class CommandsEventListener(EventListener): command['repositoryFile'] = '...' if 'commandParams' in command: command['commandParams'] = '...' 
+ if 'clusterHostInfo' in command: +command['clusterHostInfo'] = '...' + if 'componentVersionMap' in command: +command['componentVersionMap'] = '...' except KeyError: pass diff --git a/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py b/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py index 02d60a5..a4af571 100644 --- a/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py +++ b/ambari-agent/src/main/python/ambari_agent/listeners/ServerResponsesListener.py @@ -74,7 +74,7 @@ class ServerResponsesListener(EventListener): This string will be used to log received messsage of this type """ if Constants.CORRELATION_ID_STRING in headers: - correlation_id = headers[Constants.CORRELATION_ID_STRING] + correlation_id = int(headers[Constants.CORRELATION_ID_STRING]) if correlation_id in self.logging_handlers: message_json = self.logging_handlers[correl
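The CommandsEventListener change above extends the set of bulky command fields collapsed to `'...'` before logging, adding `clusterHostInfo` and `componentVersionMap`. A sketch of the redaction idea (field list and helper name follow the diff, but this is an illustration, not the listener's exact code):

```python
import copy

LARGE_FIELDS = ("repositoryFile", "commandParams", "clusterHostInfo", "componentVersionMap")

def shrink_for_logging(command):
    """Return a copy of a command dict with oversized sub-structures
    replaced by '...' so the agent log stays readable."""
    command = copy.deepcopy(command)  # never mutate the real command
    for field in LARGE_FIELDS:
        if field in command:
            command[field] = '...'
    return command

cmd = {"commandId": "5-1", "clusterHostInfo": {"all_hosts": ["h1", "h2"]}}
print(shrink_for_logging(cmd))
```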
[ambari] branch trunk updated (204e777 -> e64f3df)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a change to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git. from 204e777 AMBARI-24689. All component statuses should be re-send on registration (aonishuk) new 50d9550 AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk) new e64f3df AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk) The 2 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: .../test/python/resource_management/TestLinkResource.py | 2 +- .../python/resource_management/core/providers/system.py | 15 +++ .../src/main/python/resource_management/core/source.py| 2 +- .../src/main/python/resource_management/core/sudo.py | 11 +-- 4 files changed, 22 insertions(+), 8 deletions(-)
[ambari] 02/02: AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk)
This is an automated email from the ASF dual-hosted git repository. aonishuk pushed a commit to branch trunk in repository https://gitbox.apache.org/repos/asf/ambari.git commit e64f3dfb2365cfdfa14db553b0d9b55a3e80f6e2 Author: Andrew Onishuk AuthorDate: Wed Sep 26 11:42:36 2018 +0300 AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk) --- ambari-agent/src/test/python/resource_management/TestLinkResource.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/ambari-agent/src/test/python/resource_management/TestLinkResource.py b/ambari-agent/src/test/python/resource_management/TestLinkResource.py index 221bf6b..6a1377b 100644 --- a/ambari-agent/src/test/python/resource_management/TestLinkResource.py +++ b/ambari-agent/src/test/python/resource_management/TestLinkResource.py @@ -34,7 +34,7 @@ class TestLinkResource(TestCase): @patch.object(os.path, "realpath") @patch("resource_management.core.sudo.path_lexists") - @patch("resource_management.core.sudo.path_lexists") + @patch("resource_management.core.sudo.path_islink") @patch("resource_management.core.sudo.unlink") @patch("resource_management.core.sudo.symlink") def test_action_create_relink(self, symlink_mock, unlink_mock,
[ambari] 01/02: AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 50d955084256a79c2b03599cfc0e3658779057d5
Author: Andrew Onishuk
AuthorDate: Wed Sep 26 10:03:01 2018 +0300

    AMBARI-24670. Directory creation sometimes fails with parallel_execution=1 (aonishuk)
---
 .../python/resource_management/core/providers/system.py | 15 +++
 .../src/main/python/resource_management/core/source.py  |  2 +-
 .../src/main/python/resource_management/core/sudo.py    | 11 +--
 3 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/ambari-common/src/main/python/resource_management/core/providers/system.py b/ambari-common/src/main/python/resource_management/core/providers/system.py
index 6293436..95c12ad 100644
--- a/ambari-common/src/main/python/resource_management/core/providers/system.py
+++ b/ambari-common/src/main/python/resource_management/core/providers/system.py
@@ -172,7 +172,7 @@ class DirectoryProvider(Provider):
       if self.resource.follow:
         # Follow symlink until the tail.
         followed_links = set()
-        while sudo.path_lexists(path):
+        while sudo.path_islink(path):
           if path in followed_links:
             raise Fail("Applying %s failed, looped symbolic links found while resolving %s" % (self.resource, path))
           followed_links.add(path)
@@ -188,8 +188,15 @@ class DirectoryProvider(Provider):
         if not sudo.path_isdir(dirname):
           raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
 
-      sudo.makedir(path, self.resource.mode or 0755)
-
+      try:
+        sudo.makedir(path, self.resource.mode or 0755)
+      except Exception as ex:
+        # race condition (somebody created the file before us)
+        if "File exists" in str(ex):
+          sudo.makedirs(path, self.resource.mode or 0755)
+        else:
+          raise
+
     if not sudo.path_isdir(path):
       raise Fail("Applying %s failed, file %s already exists" % (self.resource, path))
@@ -216,7 +223,7 @@ class LinkProvider(Provider):
       oldpath = os.path.realpath(path)
       if oldpath == self.resource.to:
         return
-      if not sudo.path_lexists(path):
+      if not sudo.path_islink(path):
         raise Fail("%s trying to create a symlink with the same name as an existing file or directory" % self.resource)
       Logger.info("%s replacing old symlink to %s" % (self.resource, oldpath))
diff --git a/ambari-common/src/main/python/resource_management/core/source.py b/ambari-common/src/main/python/resource_management/core/source.py
index 32c5cad..a2b1598 100644
--- a/ambari-common/src/main/python/resource_management/core/source.py
+++ b/ambari-common/src/main/python/resource_management/core/source.py
@@ -72,7 +72,7 @@ class StaticFile(Source):
       basedir = self.env.config.basedir
       path = os.path.join(basedir, "files", self.name)
 
-    if not sudo.path_isfile(path) and not sudo.path_lexists(path):
+    if not sudo.path_isfile(path):
       raise Fail("{0} Source file {1} is not found".format(repr(self), path))
 
     return self.read_file(path)
diff --git a/ambari-common/src/main/python/resource_management/core/sudo.py b/ambari-common/src/main/python/resource_management/core/sudo.py
index f103c8d..990b293 100644
--- a/ambari-common/src/main/python/resource_management/core/sudo.py
+++ b/ambari-common/src/main/python/resource_management/core/sudo.py
@@ -159,7 +159,10 @@ if os.geteuid() == 0:
   def path_isdir(path):
     return os.path.isdir(path)
-  
+
+  def path_islink(path):
+    return os.path.islink(path)
+
   def path_lexists(path):
     return os.path.lexists(path)
@@ -267,10 +270,14 @@ else:
   # os.path.isdir
   def path_isdir(path):
     return (shell.call(["test", "-d", path], sudo=True)[0] == 0)
+
+  # os.path.islink
+  def path_islink(path):
+    return (shell.call(["test", "-L", path], sudo=True)[0] == 0)
 
   # os.path.lexists
   def path_lexists(path):
-    return (shell.call(["test", "-L", path], sudo=True)[0] == 0)
+    return (shell.call(["test", "-e", path], sudo=True)[0] == 0)
 
   # os.readlink
   def readlink(path):
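The makedir change above follows an EAFP pattern: attempt the single mkdir, and treat a "File exists" failure as a benign race with a concurrent agent process rather than a resource failure. A minimal standalone sketch of the same idea, using plain `os` calls instead of Ambari's `sudo` wrappers (`ensure_dir` is an illustrative name, not Ambari code):

```python
import errno
import os
import tempfile


def ensure_dir(path, mode=0o755):
    """Create `path`, tolerating a concurrent process creating it first.

    Mirrors the patch: try the mkdir, and if it fails because another
    actor won the race, treat the existing directory as success.
    """
    try:
        os.mkdir(path, mode)
    except OSError as ex:
        # benign race: somebody created the directory before us
        if ex.errno == errno.EEXIST and os.path.isdir(path):
            pass
        else:
            raise


base = tempfile.mkdtemp()
target = os.path.join(base, "conf")
ensure_dir(target)
ensure_dir(target)  # second call is a no-op, not a failure
print(os.path.isdir(target))  # -> True
```

Checking `errno.EEXIST` is more precise than matching the string "File exists", but the patch matches the string because the error surfaces through the `sudo` shell wrapper where only the message text is available.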
[ambari] 01/02: AMBARI-24689. All component statuses should be re-send on registration (aonishuk)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 1aae0de04bb29b9e14603722be18fe8393c4fbd8
Author: Andrew Onishuk
AuthorDate: Wed Sep 26 11:26:44 2018 +0300

    AMBARI-24689. All component statuses should be re-send on registration (aonishuk)
---
 .../python/ambari_agent/ComponentStatusExecutor.py | 47 +-
 .../main/python/ambari_agent/HeartbeatThread.py    |  6 ++-
 .../python/resource_management/TestLinkResource.py |  2 +-
 3 files changed, 43 insertions(+), 12 deletions(-)

diff --git a/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py b/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py
index df72c88..7bf00df 100644
--- a/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py
+++ b/ambari-agent/src/main/python/ambari_agent/ComponentStatusExecutor.py
@@ -43,6 +43,7 @@ class ComponentStatusExecutor(threading.Thread):
     self.logger = logging.getLogger(__name__)
     self.reports_to_discard = []
     self.reports_to_discard_lock = threading.RLock()
+    self.reported_component_status_lock = threading.RLock()
     threading.Thread.__init__(self)
 
   def run(self):
@@ -178,7 +179,7 @@ class ComponentStatusExecutor(threading.Thread):
           'clusterId': cluster_id,
         }
 
-        if status != self.reported_component_status[cluster_id][component_name][command_name]:
+        if status != self.reported_component_status[cluster_id]["{0}/{1}".format(service_name, component_name)][command_name]:
           logging.info("Status for {0} has changed to {1}".format(component_name, status))
           self.recovery_manager.handle_status_change(component_name, status)
@@ -191,6 +192,29 @@ class ComponentStatusExecutor(threading.Thread):
         return result
     return None
 
+  def force_send_component_statuses(self):
+    """
+    Forcefully resends all component statuses which are currently in cache.
+    """
+    cluster_reports = defaultdict(lambda:[])
+
+    with self.reported_component_status_lock:
+      for cluster_id, component_to_command_dict in self.reported_component_status.iteritems():
+        for service_and_component_name, commands_status in component_to_command_dict.iteritems():
+          service_name, component_name = service_and_component_name.split("/")
+          for command_name, status in commands_status.iteritems():
+            report = {
+              'serviceName': service_name,
+              'componentName': component_name,
+              'command': command_name,
+              'status': status,
+              'clusterId': cluster_id,
+            }
+
+            cluster_reports[cluster_id].append(report)
+
+    self.send_updates_to_server(cluster_reports)
+
   def send_updates_to_server(self, cluster_reports):
     if not cluster_reports or not self.initializer_module.is_registered:
       return
@@ -199,18 +223,21 @@ class ComponentStatusExecutor(threading.Thread):
     self.server_responses_listener.listener_functions_on_success[correlation_id] = lambda headers, message: self.save_reported_component_status(cluster_reports)
 
   def save_reported_component_status(self, cluster_reports):
-    for cluster_id, reports in cluster_reports.iteritems():
-      for report in reports:
-        component_name = report['componentName']
-        command = report['command']
-        status = report['status']
+    with self.reported_component_status_lock:
+      for cluster_id, reports in cluster_reports.iteritems():
+        for report in reports:
+          component_name = report['componentName']
+          service_name = report['serviceName']
+          command = report['command']
+          status = report['status']
 
-        self.reported_component_status[cluster_id][component_name][command] = status
+          self.reported_component_status[cluster_id]["{0}/{1}".format(service_name, component_name)][command] = status
 
   def clean_not_existing_clusters_info(self):
     """
     This needs to be done to remove information about clusters which where deleted (e.g. ambari-server reset)
     """
-    for cluster_id in self.reported_component_status.keys():
-      if cluster_id not in self.topology_cache.get_cluster_ids():
-        del self.reported_component_status[cluster_id]
+    with self.reported_component_status_lock:
+      for cluster_id in self.reported_component_status.keys():
+        if cluster_id not in self.topology_cache.get_cluster_ids():
+          del self.reported_component_status[cluster_id]
diff --git a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py b/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py
index ded5edd..9210e79 100644
--- a/ambari-agent/src/main/python/ambari_agent/HeartbeatThread.py
+++ b/ambari-agent/src/
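The pattern in the commit above is a lock-guarded status cache keyed by cluster, then by a combined "SERVICE/COMPONENT" key, then by command, which can be flattened back into per-cluster report lists on re-registration. A self-contained miniature of that pattern (the `StatusCache` class and its method names are illustrative, not the real `ComponentStatusExecutor` API; written in Python 3, so `items()` replaces the diff's Python 2 `iteritems()`):

```python
import threading
from collections import defaultdict


class StatusCache(object):
    """Toy model of the reported-status cache guarded by one RLock."""

    def __init__(self):
        self._lock = threading.RLock()
        # cluster_id -> "SERVICE/COMPONENT" -> command -> status
        self._cache = defaultdict(lambda: defaultdict(dict))

    def save(self, cluster_id, service, component, command, status):
        # writers and the resender serialize on the same lock,
        # matching reported_component_status_lock in the patch
        with self._lock:
            key = "{0}/{1}".format(service, component)
            self._cache[cluster_id][key][command] = status

    def force_resend_reports(self):
        """Flatten the whole cache into per-cluster report lists."""
        cluster_reports = defaultdict(list)
        with self._lock:
            for cluster_id, by_component in self._cache.items():
                for key, commands in by_component.items():
                    service_name, component_name = key.split("/")
                    for command, status in commands.items():
                        cluster_reports[cluster_id].append({
                            'serviceName': service_name,
                            'componentName': component_name,
                            'command': command,
                            'status': status,
                            'clusterId': cluster_id,
                        })
        return dict(cluster_reports)


cache = StatusCache()
cache.save("2", "HDFS", "DATANODE", "STATUS", "STARTED")
reports = cache.force_resend_reports()
print(reports["2"][0]["componentName"])  # -> DATANODE
```

Keying by "SERVICE/COMPONENT" instead of the bare component name is what lets the resend path recover both names with a single `split("/")`, which is exactly why the commit reworked the cache key.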
[ambari] 02/02: AMBARI-24689. All component statuses should be re-send on registration (aonishuk)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a commit to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git

commit 204e7770ace9b017fb3d6be127d5e3f3938ea01b
Author: Andrew Onishuk
AuthorDate: Wed Sep 26 11:39:16 2018 +0300

    AMBARI-24689. All component statuses should be re-send on registration (aonishuk)
---
 ambari-agent/src/test/python/resource_management/TestLinkResource.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ambari-agent/src/test/python/resource_management/TestLinkResource.py b/ambari-agent/src/test/python/resource_management/TestLinkResource.py
index 6a1377b..221bf6b 100644
--- a/ambari-agent/src/test/python/resource_management/TestLinkResource.py
+++ b/ambari-agent/src/test/python/resource_management/TestLinkResource.py
@@ -34,7 +34,7 @@ class TestLinkResource(TestCase):
   @patch.object(os.path, "realpath")
   @patch("resource_management.core.sudo.path_lexists")
-  @patch("resource_management.core.sudo.path_lexists")
+  @patch("resource_management.core.sudo.path_islink")
   @patch("resource_management.core.sudo.unlink")
   @patch("resource_management.core.sudo.symlink")
   def test_action_create_relink(self, symlink_mock, unlink_mock,
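The one-line test fix above is needed because `LinkProvider` now calls `sudo.path_islink`, so the decorator must patch that name or the real function escapes the mocks. A self-contained sketch of the mechanics (the `sudo` class and `relink` function here are toy stand-ins, not the real `resource_management` modules); note that stacked `@patch` decorators apply bottom-up, so the innermost one supplies the first mock argument:

```python
from unittest import mock


# Toy stand-in for resource_management.core.sudo; names are illustrative.
class sudo(object):
    @staticmethod
    def path_islink(path):
        # A stale @patch target (e.g. patching path_lexists) would hit this.
        raise RuntimeError("unmocked call")

    @staticmethod
    def unlink(path):
        raise RuntimeError("unmocked call")


def relink(path, to):
    # Mimics LinkProvider after AMBARI-24670: it asks path_islink now.
    if sudo.path_islink(path):
        sudo.unlink(path)
    return to


# Bottom-up decorator application mirrors the decorator/argument pairing
# in TestLinkResource.test_action_create_relink.
@mock.patch.object(sudo, "unlink")
@mock.patch.object(sudo, "path_islink")
def test_relink(islink_mock, unlink_mock):
    islink_mock.return_value = True
    assert relink("/etc/conf", "/opt/conf") == "/opt/conf"
    unlink_mock.assert_called_once_with("/etc/conf")


test_relink()
print("ok")  # -> ok
```

This is why a rename in production code silently breaks tests that patch by string path: `@patch("...path_lexists")` still succeeds as a patch, it just no longer intercepts anything the code under test calls.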
[ambari] branch trunk updated (f26bad4 -> 204e777)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git.

 from f26bad4  AMBARI-24535 : File View not accessible in Ambari 2.7 after enabling 3 namenodes in HDP 3.0 (nitirajrathore) (#2352)
  new 1aae0de  AMBARI-24689. All component statuses should be re-send on registration (aonishuk)
  new 204e777  AMBARI-24689. All component statuses should be re-send on registration (aonishuk)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.

Summary of changes:
 .../python/ambari_agent/ComponentStatusExecutor.py | 47 +-
 .../main/python/ambari_agent/HeartbeatThread.py    |  6 ++-
 2 files changed, 42 insertions(+), 11 deletions(-)
[ambari] branch trunk updated (aca66a0 -> eadc7df)
This is an automated email from the ASF dual-hosted git repository.

aonishuk pushed a change to branch trunk
in repository https://gitbox.apache.org/repos/asf/ambari.git.

 from aca66a0  AMBARI-24676. Issues w.r.t Manage Journal Nodes (Particularly deletion) in HA enabled cluster (akovalenko)
  add eadc7df  AMBARI-24661. While registering agent can miss updates from server (aonishuk)

No new revisions were added by this update.

Summary of changes:
 .../src/main/python/ambari_agent/HeartbeatThread.py   | 19 +++
 .../main/python/ambari_agent/listeners/__init__.py    |  6 ++
 2 files changed, 17 insertions(+), 8 deletions(-)