Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-15 Thread Jonathan Hurley

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149134
---


Ship it!




OK - I think Alejandro and I got to the bottom of this. The code in question is 
leftover from the HDP 2.1 days. Under normal circumstances it is not triggered 
to create /etc/<package>/conf, since the conditions are not met for packages 
which aren't installed.

However, because of how Hive is packaging Atlas, it's tricking us into thinking 
Atlas needs to have its /etc/atlas/conf bootstrapped. Therefore, I am fine with 
this surgical approach. We need to open a Jira to remove this block of code now 
that it's no longer needed, since we don't support HDP 2.1.

- Jonathan Hurley


On Sept. 14, 2016, 9:57 p.m., Alejandro Fernandez wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/51903/
> ---
> 
> (Updated Sept. 14, 2016, 9:57 p.m.)
> 
> 
> Review request for Ambari, Dmytro Grinenko, Dmitro Lisnichenko, Jonathan 
> Hurley, and Nate Cole.
> 
> 
> Bugs: AMBARI-18368
> https://issues.apache.org/jira/browse/AMBARI-18368
> 
> 
> Repository: ambari
> 
> 
> Description
> ---
> 
> Steps to Reproduce:
> 
> * Install Ambari 2.2.2 with HDP 2.4 with HBase, Solr, and Hive (this is 
> important)
> * Perform EU/RU to HDP 2.5 
> * Add Atlas Service
> 
> Atlas Server log contains,
> 
> Caused by: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at 
> http://natu146-ehbs-dgm10toeriesec-u14-1.openstacklocal:8886/solr: Can not 
> find the specified config set: vertex_index  
> 
> Fix:
> The Hive RPM installs /usr/$stack/$version/atlas with some partial packages 
> that contain Hive hooks, while the Atlas RPM is responsible for installing 
> the full content.
> If the user does not have Atlas currently installed on their stack, then 
> /usr/$stack/current/atlas-client will be a broken symlink, and we should not 
> create the symlink /etc/atlas/conf -> /usr/$stack/current/atlas-client/conf .
> If we mistakenly create this symlink, then when the user performs an EU/RU 
> and then adds the Atlas service, the Atlas RPM will not be able to copy its 
> artifacts into the /etc/atlas/conf directory, which prevents Ambari from 
> copying those unmanaged contents into /etc/atlas/$version/0
> 
> Further, when installing Atlas service, we must copy the artifacts from 
> /etc/atlas/conf.backup/* to /etc/atlas/conf (which is now a symlink to 
> /usr/hdp/current/atlas-client/conf/) with the no-clobber flag.
> 
> 
> Diffs
> -
> 
>   
> ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
>  c60b324 
>   
> ambari-server/src/test/python/stacks/2.0.6/hooks/after-INSTALL/test_after_install.py
>  06a366e 
> 
> Diff: https://reviews.apache.org/r/51903/diff/
> 
> 
> Testing
> ---
> 
> --
> Total run:1125
> Total errors:0
> Total failures:0
> OK
> 
> 
> Need to perform more tests on a live cluster.
> 
> 
> Thanks,
> 
> Alejandro Fernandez
> 
>



Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-15 Thread Alejandro Fernandez


> On Sept. 15, 2016, 2:49 a.m., Jonathan Hurley wrote:
> > I think we can do better. Why not just use `os.path.exists` to check the 
> > `current_dir` structure. In the case of Atlas, the pseudo-code would read:
> > 
> > if "/etc/atlas/conf" is a directory and if "/usr/hdp/current/atlas-client" 
> > is a valid link
> >   then seed stuff
> >   
> > Basically we just want to seed IFF both the `conf_dir` is a physical 
> > directory and the `current_dir` is valid (indicates installed)
> 
> Alejandro Fernandez wrote:
> The seeding happens when we install the new HDP 2.5 bits and before the Atlas 
> RPM is installed, so /etc/atlas/conf will not exist and 
> /usr/hdp/current/atlas-client will not be a valid link (it still points to 
> /usr/hdp/2.4/atlas-client, which does not exist).
> Line 361 is needed because we cannot create the symlink /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf, since the source will be needed once the 
> Atlas RPM is installed.
> 
> Line 618 is needed because the Atlas RPM brings in artifacts not known to 
> Ambari that are first stored in /etc/atlas/conf, which then gets copied to 
> /etc/atlas/conf.backup because we will need to make /etc/atlas/conf a 
> symlink to /usr/hdp/current/atlas-client/conf
> 
> Jonathan Hurley wrote:
> But the problem here is that /etc/atlas/conf does exist after a non-Atlas 
> RPM creates that directory. I'm saying cut this off right in the beginning of 
> the method:
> 
> ```
>   stack_name = Script.get_stack_name()
>   bad_dirs = []
>   for dir_def in dirs:
>     # if /etc/<package>/conf doesn't exist
>     if not os.path.exists(dir_def['conf_dir']):
>       bad_dirs.append(dir_def['conf_dir'])
>     # if /etc/<package>/conf exists but /usr/hdp/current/atlas-client is invalid
>     elif not os.path.exists(dir_def['current_dir']):
>       bad_dirs.append(dir_def['conf_dir'])
> ```

I think we're talking about different things here. 

Assume the cluster is on HDP 2.4 with Hive and no Atlas. A Package Install of 
HDP 2.5 in preparation for EU/RU will hit lines 347-366.
When we run yum install hive_2_5_0_*, it installs the RPM 
atlas-metadata_2_5_0_0_*-hive-plugin.noarch, which brings in 
/usr/hdp/2.5.0.0-/atlas (but only partially, with just the hook and hook-bin 
dirs) and not /etc/atlas/conf.

The first problem is that line 366, Link(conf_dir, to=current_dir), would have 
attempted to create the symlink /etc/atlas/conf -> 
/usr/hdp/current/atlas-client/conf.
At this point in time, /usr/hdp/current/atlas-client is still a broken symlink 
left over from HDP 2.4, so line 360 is enough of a check (I don't explicitly 
need to check for "atlas").

The second problem is that Atlas has artifacts not known to Ambari. After the 
EU/RU to HDP 2.5 completes and the user adds the Atlas service, that brings in 
the /etc/atlas/conf directory with *all* of the artifacts. The rest of the code 
then copies it to /etc/atlas/conf.backup/ since we need to delete 
/etc/atlas/conf and instead make the symlink /etc/atlas/conf -> 
/usr/hdp/current/atlas-client/conf.
At this point, we need to seed it, copying from /etc/atlas/conf.backup to 
/etc/atlas/conf with no-clobber.

The type of seeding of configs we already do today is for services like Knox 
that needed to copy configs from say HDP 2.3 to 2.4 whenever the 2.4 packages 
are installed.
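
For reference, the no-clobber copy described above could be sketched like this 
(a hypothetical standalone helper with an invented name, not the actual 
conf_select.py code; `cp -n` semantics: never overwrite an existing destination):

```python
import os
import shutil

def seed_no_clobber(backup_dir, conf_dir):
    """Copy files from backup_dir into conf_dir, skipping any file that
    already exists at the destination (the behavior of `cp -n`).
    Returns the sorted list of file names that were actually copied."""
    if not os.path.isdir(backup_dir):
        return []
    copied = []
    for name in sorted(os.listdir(backup_dir)):
        src = os.path.join(backup_dir, name)
        dst = os.path.join(conf_dir, name)
        # only seed plain files that are missing from the destination
        if os.path.isfile(src) and not os.path.exists(dst):
            shutil.copy2(src, dst)
            copied.append(name)
    return copied
```

Files already present in /etc/atlas/conf win; only missing ones are filled in 
from /etc/atlas/conf.backup, so the Atlas RPM's artifacts are preserved.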


- Alejandro


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149019
---



Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-15 Thread Jonathan Hurley


> On Sept. 14, 2016, 10:49 p.m., Jonathan Hurley wrote:
> > I think we can do better. Why not just use `os.path.exists` to check the 
> > `current_dir` structure. In the case of Atlas, the pseudo-code would read:
> > 
> > if "/etc/atlas/conf" is a directory and if "/usr/hdp/current/atlas-client" 
> > is a valid link
> >   then seed stuff
> >   
> > Basically we just want to seed IFF both the `conf_dir` is a physical 
> > directory and the `current_dir` is valid (indicates installed)
> 
> Alejandro Fernandez wrote:
> The seeding happens when we install the new HDP 2.5 bits and before the Atlas 
> RPM is installed, so /etc/atlas/conf will not exist and 
> /usr/hdp/current/atlas-client will not be a valid link (it still points to 
> /usr/hdp/2.4/atlas-client, which does not exist).
> Line 361 is needed because we cannot create the symlink /etc/atlas/conf -> 
> /usr/hdp/current/atlas-client/conf, since the source will be needed once the 
> Atlas RPM is installed.
> 
> Line 618 is needed because the Atlas RPM brings in artifacts not known to 
> Ambari that are first stored in /etc/atlas/conf, which then gets copied to 
> /etc/atlas/conf.backup because we will need to make /etc/atlas/conf a 
> symlink to /usr/hdp/current/atlas-client/conf

But the problem here is that /etc/atlas/conf does exist after a non-Atlas RPM 
creates that directory. I'm saying cut this off right in the beginning of the 
method:

```
  stack_name = Script.get_stack_name()
  bad_dirs = []
  for dir_def in dirs:
    # if /etc/<package>/conf doesn't exist
    if not os.path.exists(dir_def['conf_dir']):
      bad_dirs.append(dir_def['conf_dir'])
    # if /etc/<package>/conf exists but /usr/hdp/current/atlas-client is invalid
    elif not os.path.exists(dir_def['current_dir']):
      bad_dirs.append(dir_def['conf_dir'])
```


- Jonathan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149019
---





Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-15 Thread Dmitro Lisnichenko

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149030
---




ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 (line 346)


are we missing other elements of the list?



ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 (line 613)


the same here and in other places


- Dmitro Lisnichenko





Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-14 Thread Alejandro Fernandez


> On Sept. 15, 2016, 2:49 a.m., Jonathan Hurley wrote:
> > I think we can do better. Why not just use `os.path.exists` to check the 
> > `current_dir` structure. In the case of Atlas, the pseudo-code would read:
> > 
> > if "/etc/atlas/conf" is a directory and if "/usr/hdp/current/atlas-client" 
> > is a valid link
> >   then seed stuff
> >   
> > Basically we just want to seed IFF both the `conf_dir` is a physical 
> > directory and the `current_dir` is valid (indicates installed)

The seeding happens when we install the new HDP 2.5 bits and before the Atlas 
RPM is installed, so /etc/atlas/conf will not exist and 
/usr/hdp/current/atlas-client will not be a valid link (it still points to 
/usr/hdp/2.4/atlas-client, which does not exist).
Line 361 is needed because we cannot create the symlink /etc/atlas/conf -> 
/usr/hdp/current/atlas-client/conf, since the source will be needed once the 
Atlas RPM is installed.

Line 618 is needed because the Atlas RPM brings in artifacts not known to 
Ambari that are first stored in /etc/atlas/conf, which then gets copied to 
/etc/atlas/conf.backup because we will need to make /etc/atlas/conf a 
symlink to /usr/hdp/current/atlas-client/conf


- Alejandro


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149019
---





Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-14 Thread Jonathan Hurley

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149019
---



I think we can do better. Why not just use `os.path.exists` to check the 
`current_dir` structure. In the case of Atlas, the pseudo-code would read:

if "/etc/atlas/conf" is a directory and if "/usr/hdp/current/atlas-client" is a 
valid link
  then seed stuff
  
Basically we just want to seed IFF both the `conf_dir` is a physical directory 
and the `current_dir` is valid (indicates installed)
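
The condition above could be sketched as follows (a hypothetical standalone 
helper, not the code in the patch; note that `os.path.exists` follows symlinks, 
so it already returns False for a broken link):

```python
import os

def should_seed(conf_dir, current_dir):
    """Seed configs only when conf_dir is a physical directory (not a
    symlink) and current_dir points at a target that actually exists,
    which indicates the package is really installed."""
    conf_is_physical_dir = os.path.isdir(conf_dir) and not os.path.islink(conf_dir)
    # os.path.exists() resolves symlinks, so a broken link yields False
    current_is_valid = os.path.exists(current_dir)
    return conf_is_physical_dir and current_is_valid
```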

- Jonathan Hurley





Re: Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-14 Thread Sumit Mohanty

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/#review149018
---




ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 (line 346)


Any concern if ATLAS was installed before the upgrade? In that case, 
/etc/atlas/conf is probably pointing to a valid folder. If we have not seen 
this scenario fail in our tests, then my comment can be ignored.


- Sumit Mohanty





Review Request 51903: [PREVIEW] Atlas web UI alert after performing stack upgrade to HDP 2.5 and adding Atlas Service

2016-09-14 Thread Alejandro Fernandez

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/51903/
---

Review request for Ambari, Dmytro Grinenko, Dmitro Lisnichenko, Jonathan 
Hurley, and Nate Cole.


Bugs: AMBARI-18368
https://issues.apache.org/jira/browse/AMBARI-18368


Repository: ambari


Description
---

Steps to Reproduce:

* Install Ambari 2.2.2 with HDP 2.4 with HBase, Solr, and Hive (this is 
important)
* Perform EU/RU to HDP 2.5 
* Add Atlas Service

Atlas Server log contains,

Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at 
http://natu146-ehbs-dgm10toeriesec-u14-1.openstacklocal:8886/solr: Can not find 
the specified config set: vertex_index  

Fix:
The Hive RPM installs /usr/$stack/$version/atlas with some partial packages 
that contain Hive hooks, while the Atlas RPM is responsible for installing the 
full content.
If the user does not have Atlas currently installed on their stack, then 
/usr/$stack/current/atlas-client will be a broken symlink, and we should not 
create the symlink /etc/atlas/conf -> /usr/$stack/current/atlas-client/conf .
If we mistakenly create this symlink, then when the user performs an EU/RU and 
then adds the Atlas service, the Atlas RPM will not be able to copy its 
artifacts into the /etc/atlas/conf directory, which prevents Ambari from 
copying those unmanaged contents into /etc/atlas/$version/0

Further, when installing Atlas service, we must copy the artifacts from 
/etc/atlas/conf.backup/* to /etc/atlas/conf (which is now a symlink to 
/usr/hdp/current/atlas-client/conf/) with the no-clobber flag.
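
The broken-symlink check this fix relies on can be expressed directly (a 
hypothetical sketch with an invented helper name, not the patch itself): a path 
is a broken symlink when `os.path.islink` is true but `os.path.exists` is 
false, because `exists` follows the link to its target.

```python
import os

def is_broken_symlink(path):
    """True when path is a symlink whose target does not exist,
    e.g. /usr/$stack/current/atlas-client before the Atlas RPM
    has been installed."""
    return os.path.islink(path) and not os.path.exists(path)
```

If this returns True for /usr/$stack/current/atlas-client, we must not create 
the /etc/atlas/conf symlink pointing into it.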


Diffs
-

  
ambari-common/src/main/python/resource_management/libraries/functions/conf_select.py
 c60b324 
  
ambari-server/src/test/python/stacks/2.0.6/hooks/after-INSTALL/test_after_install.py
 06a366e 

Diff: https://reviews.apache.org/r/51903/diff/


Testing
---

--
Total run:1125
Total errors:0
Total failures:0
OK


Need to perform more tests on a live cluster.


Thanks,

Alejandro Fernandez