[jira] [Updated] (AMBARI-23287) Lack of synchronization accessing topologyHolder in HostResourceProvider#processDeleteHostRequests

2018-09-15 Thread Ted Yu (JIRA)


 [ https://issues.apache.org/jira/browse/AMBARI-23287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated AMBARI-23287:

Description: 
HostResourceProvider#processDeleteHostRequests accesses topologyHolder without any synchronization, so concurrent host-delete requests may act on a stale or inconsistent view of the topology.

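A minimal, self-contained sketch of one way to make the access safe, assuming topologyHolder is a mutable shared field (the class and method bodies below are stand-ins, not Ambari's actual types):

{code}
import java.util.concurrent.atomic.AtomicReference;

class TopologyAccessSketch {
  // Stand-in for the shared holder. AtomicReference gives safe publication
  // and visibility without holding a lock across the whole delete path.
  private final AtomicReference<Object> topologyHolder = new AtomicReference<>();

  void processDeleteHostRequests() {
    Object snapshot = topologyHolder.get(); // safe concurrent read
    // ... act on the snapshot while processing the delete requests ...
  }

  void updateTopology(Object newHolder) {
    topologyHolder.set(newHolder); // safe publication of a new topology
  }
}
{code}

A synchronized block around every access to the field would work equally well; the point is that reads and writes must follow the same locking discipline.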

> Lack of synchronization accessing topologyHolder in 
> HostResourceProvider#processDeleteHostRequests
> --
>
> Key: AMBARI-23287
> URL: https://issues.apache.org/jira/browse/AMBARI-23287
> Project: Ambari
> Issue Type: Bug
> Reporter: Ted Yu
> Priority: Minor
>
> HostResourceProvider#processDeleteHostRequests accesses topologyHolder without any synchronization, so concurrent host-delete requests may act on a stale or inconsistent view of the topology.
>  





[jira] [Updated] (AMBARI-23288) stateWatcherClient should be closed upon return from OutputSolr#createSolrStateWatcher

2018-09-15 Thread Ted Yu (JIRA)


 [ https://issues.apache.org/jira/browse/AMBARI-23288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated AMBARI-23288:

Description: 
{code}
CloudSolrClient stateWatcherClient = createSolrClient();
{code}
stateWatcherClient should be closed upon return from the method; otherwise the client's underlying connections leak.

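A minimal, self-contained sketch of the usual fix, with a stand-in Closeable in place of the real CloudSolrClient (which also implements Closeable):

{code}
import java.io.Closeable;
import java.io.IOException;

class WatcherSketch {
  // Stand-in for createSolrClient(); the real factory returns a CloudSolrClient.
  Closeable createSolrClient() {
    return () -> { /* release connections here */ };
  }

  void createSolrStateWatcher() throws IOException {
    // try-with-resources closes stateWatcherClient on every exit path,
    // including exceptions thrown while setting up the watcher.
    try (Closeable stateWatcherClient = createSolrClient()) {
      // ... use stateWatcherClient to read collection state ...
    }
  }
}
{code}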

> stateWatcherClient should be closed upon return from 
> OutputSolr#createSolrStateWatcher
> --
>
> Key: AMBARI-23288
> URL: https://issues.apache.org/jira/browse/AMBARI-23288
> Project: Ambari
> Issue Type: Bug
> Reporter: Ted Yu
> Priority: Minor
>
> {code}
> CloudSolrClient stateWatcherClient = createSolrClient();
> {code}
> stateWatcherClient should be closed upon return from the method; otherwise the client's underlying connections leak.





[jira] [Updated] (AMBARI-22621) Ensure value for hbase.coprocessor.abortonerror is true

2018-09-15 Thread Ted Yu (JIRA)


 [ https://issues.apache.org/jira/browse/AMBARI-22621?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated AMBARI-22621:

Description: 
In the coprocessor refactor for hbase-2, Server#abort has been taken out of reach of coprocessors.

We should ensure that hbase.coprocessor.abortonerror is set to true so that a coprocessor can still abort the server by throwing an exception.

See HBASE-19341 for related details.

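A minimal sketch of the check, using the standard HBase configuration API; the wrapper class and the hard failure are illustrative, not Ambari's actual validation hook:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

class AbortOnErrorCheck {
  static void ensureAbortOnError(Configuration conf) {
    // The property defaults to true in HBase itself; flag any deployment
    // where an operator has overridden it to false.
    if (!conf.getBoolean("hbase.coprocessor.abortonerror", true)) {
      throw new IllegalStateException(
          "hbase.coprocessor.abortonerror must be true on hbase-2: "
              + "coprocessors can no longer call Server#abort directly");
    }
  }

  public static void main(String[] args) {
    ensureAbortOnError(HBaseConfiguration.create());
  }
}
{code}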


> Ensure value for hbase.coprocessor.abortonerror is true
> ---
>
> Key: AMBARI-22621
> URL: https://issues.apache.org/jira/browse/AMBARI-22621
> Project: Ambari
> Issue Type: Improvement
> Reporter: Ted Yu
> Priority: Major
>
> In the coprocessor refactor for hbase-2, Server#abort has been taken out of reach of coprocessors.
> We should ensure that hbase.coprocessor.abortonerror is set to true so that a coprocessor can still abort the server by throwing an exception.
> See HBASE-19341 for related details.





[jira] [Updated] (AMBARI-24607) rand should not be used in WebSocketProtocol.h

2018-09-15 Thread Ted Yu (JIRA)


 [ https://issues.apache.org/jira/browse/AMBARI-24607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated AMBARI-24607:

Description: 
In ambari-admin/src/main/resources/ui/admin-web/node_modules/karma/node_modules/socket.io/node_modules/engine.io/node_modules/uws/src/WebSocketProtocol.h:

{code}
if (!isServer) {
    dst[1] |= 0x80;
    uint32_t random = rand();
{code}
rand() is typically a linear congruential generator, whose output is predictable and therefore too easy to break; the client frame masking key generated here (the mask bit is set on the line above) must be unpredictable.

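The fix itself belongs in the C++ file above; as an illustration of the idea, here is a short Java sketch (java.util.Random is likewise a linear congruential generator) that draws the 4-byte masking key from a cryptographically strong source instead:

{code}
import java.security.SecureRandom;

class MaskingKeySketch {
  private static final SecureRandom RNG = new SecureRandom();

  // WebSocket client frames carry a 4-byte masking key. An attacker who can
  // predict the key can defeat the masking, so an LCG such as rand() is unsafe.
  static byte[] newMaskingKey() {
    byte[] key = new byte[4];
    RNG.nextBytes(key);
    return key;
  }
}
{code}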


> rand should not be used in WebSocketProtocol.h
> --
>
> Key: AMBARI-24607
> URL: https://issues.apache.org/jira/browse/AMBARI-24607
> Project: Ambari
> Issue Type: Bug
> Reporter: Ted Yu
> Priority: Minor
>
> In ambari-admin/src/main/resources/ui/admin-web/node_modules/karma/node_modules/socket.io/node_modules/engine.io/node_modules/uws/src/WebSocketProtocol.h:
> {code}
> if (!isServer) {
>     dst[1] |= 0x80;
>     uint32_t random = rand();
> {code}
> rand() is typically a linear congruential generator, whose output is predictable and therefore too easy to break; the client frame masking key generated here must be unpredictable.





[jira] [Updated] (AMBARI-23353) Provide sanity check for hbase in memory flush parameters

2018-09-15 Thread Ted Yu (JIRA)


 [ https://issues.apache.org/jira/browse/AMBARI-23353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ted Yu updated AMBARI-23353:

Description: 
For the hbase 2.0 release, the following parameters are correlated:

* hbase.memstore.inmemoryflush.threshold.factor : threshold for the active segment
* hbase.hregion.compacting.pipeline.segments.limit : pipeline length

For SSD, a threshold of 2% for the active segment (hbase.memstore.inmemoryflush.threshold.factor=0.02) and a pipeline length of 4 (hbase.hregion.compacting.pipeline.segments.limit=4) are recommended.

For HDD, hbase.hregion.compacting.pipeline.segments.limit should be 3 (due to the lower throughput of HDDs).

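A minimal sketch of such a sanity check, with the recommended pairings taken from the description above (the wrapper class is hypothetical, not Ambari's actual stack-advisor code):

{code}
import org.apache.hadoop.conf.Configuration;

class InMemoryFlushSanityCheck {
  // Recommended values from above: SSD -> factor 0.02 with pipeline limit 4;
  // HDD -> pipeline limit 3 (lower throughput).
  static void check(Configuration conf, boolean onSsd) {
    double factor = conf.getDouble(
        "hbase.memstore.inmemoryflush.threshold.factor", 0.02);
    int pipeline = conf.getInt(
        "hbase.hregion.compacting.pipeline.segments.limit", onSsd ? 4 : 3);

    if (factor <= 0.0 || factor >= 1.0) {
      System.out.println("threshold.factor should be a fraction in (0, 1), got " + factor);
    }
    int recommended = onSsd ? 4 : 3;
    if (pipeline != recommended) {
      System.out.println("pipeline.segments.limit is " + pipeline + "; "
          + recommended + " is recommended for " + (onSsd ? "SSD" : "HDD"));
    }
  }
}
{code}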


> Provide sanity check for hbase in memory flush parameters
> -
>
> Key: AMBARI-23353
> URL: https://issues.apache.org/jira/browse/AMBARI-23353
> Project: Ambari
> Issue Type: Task
> Reporter: Ted Yu
> Priority: Major
>
> For the hbase 2.0 release, the following parameters are correlated:
> * hbase.memstore.inmemoryflush.threshold.factor : threshold for the active segment
> * hbase.hregion.compacting.pipeline.segments.limit : pipeline length
> For SSD, a threshold of 2% for the active segment (hbase.memstore.inmemoryflush.threshold.factor=0.02) and a pipeline length of 4 (hbase.hregion.compacting.pipeline.segments.limit=4) are recommended.
> For HDD, hbase.hregion.compacting.pipeline.segments.limit should be 3 (due to the lower throughput of HDDs).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)