[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978099#comment-15978099
 ] 

Duo Zhang commented on HBASE-17125:
---

Oh, it seems the user called setMaxVersions(1). I believe the problem is that 
he/she found that the filter would return old values, then used 
setMaxVersions(1) hoping that would solve the problem.

So it is clear that in this user's mind, setMaxVersions should be used to 
control the number of versions passed to the filter. This is exactly what we 
provide in the latest patch. With the patch in place, the user does not need to 
call setMaxVersions(1) anymore.

Thanks.
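
A minimal sketch of the post-patch behavior described above (the filter and 
values are illustrative, not taken from the patch):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// With the patch, setMaxVersions bounds the versions handed to the filter.
// The default is 1, so a plain scan lets the filter see only the newest
// version per column and an old value can no longer be "resurrected".
Scan scan = new Scan(); // no setMaxVersions(1) needed anymore
scan.setFilter(new ValueFilter(CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("expected"))));
{code}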

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, and we write 4 versions of this column. 
> The oldest version is not removed immediately, but from the user's point of 
> view it is gone. When the user queries with a filter, if the filter skips a 
> newer version, the oldest version becomes visible again. After the region is 
> compacted, however, the oldest version can never be seen. This is confusing 
> for the user: the query returns inconsistent results before and after region 
> compaction.
> The cause is the matchColumn method of UserScanQueryMatcher. It first checks 
> the cell against the filter, then checks the number of versions needed. So if 
> the filter skips the newer version, the oldest version becomes visible again 
> as long as it has not been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of versions 
> first, then check the cell against the filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes and 
> max versions have been run.
> {code}
>   /**
>    * Apply the specified server-side filter when performing the Query.
>    * Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>    * for ttl, column match, deletes and max versions have been run.
>    * @param filter filter to run on the server
>    * @return this for invocation chaining
>    */
>   public Query setFilter(Filter filter) {
>     this.filter = filter;
>     return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user's query needs only 3 versions, we check the version count first and then 
> check the cell against the filter, so the result may contain fewer than 3 
> cells even though 2 more versions were never evaluated.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the cell against the filter
> 3. check the number of versions the user requested
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are welcome.
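> For reference, a minimal sketch that reproduces the inconsistency (the 
> connected {{table}} and the row/family/qualifier names are illustrative):
> {code}
> byte[] ROW = Bytes.toBytes("r"), F = Bytes.toBytes("f"), Q = Bytes.toBytes("q");
> // Family F keeps 3 versions; write 4 versions of the same cell.
> for (long ts = 1; ts <= 4; ts++) {
>   table.put(new Put(ROW).addColumn(F, Q, ts, Bytes.toBytes("v" + ts)));
> }
> // A filter that only matches the oldest value skips the three newer
> // versions; before compaction the over-quota "v1" is returned, after a
> // major compaction the same scan returns nothing.
> Scan scan = new Scan().setFilter(new ValueFilter(
>     CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("v1"))));
> {code}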



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978098#comment-15978098
 ] 

Guanghao Zhang commented on HBASE-17125:


For the user mailing list case, the column's max versions is 1, so the user 
doesn't need to call setMaxVersions(). If the most recent value does not match, 
he gets nothing.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978092#comment-15978092
 ] 

Duo Zhang commented on HBASE-17125:
---

{quote}
So there also the user has to set this filter on get/scan and setMaxversions()?
{quote}
No. After the patch here the problem is gone. Just keep the code as it is and 
everything will be OK.

And the trade-off of this fix is that we can no longer use setMaxVersions to 
control the number of returned versions when a filter is used. So we introduce 
a SpecifiedNumVersionsColumnFilter to solve that problem.

Thanks.
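
A sketch of the intended usage, under the assumption that the proposed 
SpecifiedNumVersionsColumnFilter takes the number of versions to return in its 
constructor (the value filter is illustrative):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// Assumption: SpecifiedNumVersionsColumnFilter(n) caps how many versions per
// column are returned, while setMaxVersions controls how many the filter sees.
Filter valueMatch = new ValueFilter(CompareOp.EQUAL,
    new BinaryComparator(Bytes.toBytes("expected")));
Scan scan = new Scan();
scan.setMaxVersions(); // let the filters test all stored versions
scan.setFilter(new FilterList(valueMatch,
    new SpecifiedNumVersionsColumnFilter(1))); // return at most one match
{code}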




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Status: Patch Available  (was: Open)

> Any HTableDescriptor we give out should be immutable
> 
>
> Key: HBASE-15583
> URL: https://issues.apache.org/jira/browse/HBASE-15583
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Gabor Liptak
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-15583.v0.patch, HBASE-15583.v1.patch, 
> HBASE-15583.v2.patch, HBASE-15583.v3.patch, HBASE-15583.v4.patch, 
> HBASE-15583.v5.patch
>
>
> From [~enis] in https://issues.apache.org/jira/browse/HBASE-15505:
> PS Should UnmodifyableHTableDescriptor be renamed to 
> UnmodifiableHTableDescriptor?
> It should be named ImmutableHTableDescriptor to be consistent with 
> collections naming. Let's do this as a subtask of the parent jira, not here. 
> Thinking about it though, why would we return an Immutable HTD in 
> HTable.getTableDescriptor() versus a mutable HTD in 
> Admin.getTableDescriptor(). It does not make sense. Should we just get rid of 
> the Immutable ones?
> We also have UnmodifyableHRegionInfo which is not used at the moment it 
> seems. 
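> A minimal sketch of the read-only wrapper pattern under discussion (not the 
> actual patch; the overridden setter is just one example):
> {code}
> public class ImmutableHTableDescriptor extends HTableDescriptor {
>   public ImmutableHTableDescriptor(HTableDescriptor desc) {
>     super(desc); // copy-construct from the mutable descriptor
>   }
>   @Override
>   public HTableDescriptor setMaxFileSize(long maxFileSize) {
>     throw new UnsupportedOperationException("HTableDescriptor is read-only");
>   }
>   // ...and similarly for every other setter
> }
> {code}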



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Attachment: HBASE-15583.v5.patch

Fixed the license error. All failed tests pass locally.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-15583:
---
Status: Open  (was: Patch Available)




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978087#comment-15978087
 ] 

Anoop Sam John commented on HBASE-17125:


On the user@ mailing list there was a question from a user about a similar 
issue while using a value filter. The most recent value of a cell does not 
match the filter, yet he still gets an older version of the cell. He needs only 
one version (the latest). So there also the user has to set this filter on the 
get/scan and call setMaxVersions()?




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978085#comment-15978085
 ] 

Duo Zhang commented on HBASE-17125:
---

{quote}
So setMaxVersions with any value >= 5 (not only 5), then the server can check 
all versions.
{quote}

Yes, just call scan.setMaxVersions(); there is no need to give a specific 
value.
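
For example (a sketch of the call described above):

{code}
import org.apache.hadoop.hbase.client.Scan;

Scan scan = new Scan();
scan.setMaxVersions(); // no argument: read all available versions, so any
                       // filter gets to test every stored version
{code}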




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978080#comment-15978080
 ] 

Guanghao Zhang commented on HBASE-17125:


bq. It will be so complicated for a user to set this as 5 and then the filter 
with 3.
The default number of versions is 1, so the scan only checks the latest 
version. The user needs to set a bigger value if he wants to read more than one 
version. In this scenario the column's max versions is 5, so setMaxVersions 
with any value >= 5 (not only 5) lets the server check all versions.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978078#comment-15978078
 ] 

Duo Zhang commented on HBASE-17125:
---

Why would users need to setMaxVersions to 5 if they only want 3 versions? The 
fact is, if you do not use a filter, just use setMaxVersions to control the 
number of versions returned. If you use a filter, please use 
SpecifiedNumVersionsColumnFilter to control the number of versions returned, as 
the max versions are tested before the filter. setMaxVersions is used to 
control the number of versions passed to the filter. I think this is clear 
enough?

Thanks.
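
Putting the two rules together, a sketch (SpecifiedNumVersionsColumnFilter is 
the filter proposed in this issue; the value filter is illustrative):

{code}
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.ValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

// No filter: setMaxVersions controls the number of versions returned.
Scan plain = new Scan();
plain.setMaxVersions(3); // return up to 3 versions per column

// With a filter: setMaxVersions controls what the filter sees; cap the
// returned versions with the proposed filter instead.
Scan filtered = new Scan();
filtered.setMaxVersions(); // let the filter test all stored versions
filtered.setFilter(new FilterList(
    new ValueFilter(CompareOp.EQUAL, new BinaryComparator(Bytes.toBytes("v"))),
    new SpecifiedNumVersionsColumnFilter(3))); // return at most 3 matches
{code}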




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978067#comment-15978067
 ] 

Anoop Sam John commented on HBASE-17125:


I don't think it is correct to ask users to call setMaxVersions(5). It will be 
so complicated for a user to set this as 5 and then the filter with 3. This is 
like passing our implementation headache to the user, IMHO.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978055#comment-15978055
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #124 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/124/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
0b5440d6d1a6fb1943917d68655b3abb8bd483b0)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete Coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, e.g. 
> index updates) since we have released the locks and advanced the mvcc at this 
> point. The flush turns the memstore size negative, since the value subtracted 
> is the actual value flushed from the stores. The negative value impacts 
> future flushes, among other things that depend on the memstore size.
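> A toy model of the interleaving (an illustration only, not the HRegion code):
> {code}
> import java.util.concurrent.atomic.AtomicLong;
>
> // The global counter is only updated in the finally block, so a flush that
> // runs during a slow coprocessor call subtracts bytes that were never added.
> public class NegativeMemstoreSizeSketch {
>   public static void main(String[] args) {
>     AtomicLong memstoreSize = new AtomicLong(0);
>     long batchBytes = 1024; // bytes the batch put into the stores
>     // t1: cells added to the stores, locks released, mvcc advanced
>     // t2: a queued flush runs during the slow postPut hook and subtracts
>     //     everything actually sitting in the stores:
>     System.out.println(memstoreSize.addAndGet(-batchBytes)); // -1024
>     // t3: only now does the finally block add the batch delta:
>     System.out.println(memstoreSize.addAndGet(batchBytes));  // back to 0
>   }
> }
> {code}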



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17924) Consider sorting the row order when processing multi() ops before taking rowlocks

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978052#comment-15978052
 ] 

Hadoop QA commented on HBASE-17924:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
23s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
27s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
58m 12s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 53s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 149m 15s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.backup.TestBackupHFileCleaner |
|   | hadoop.hbase.regionserver.TestHRegionFileSystem |
|   | hadoop.hbase.master.locking.TestLockProcedure |
|   | hadoop.hbase.master.locking.TestLockManager |
|   | hadoop.hbase.procedure.TestProcedureManager |
|   | hadoop.hbase.master.balancer.TestRegionLocationFinder |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864410/HBASE-17924.v4.patch |
| JIRA Issue | HBASE-17924 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 0de54256086c 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 49cba2c |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6523/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6523/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6523/testReport/ |

[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978041#comment-15978041
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2896 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2896/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
49cba2c237ecc1b3285d942f1ad176ea50c44cd1)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978043#comment-15978043
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #147 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/147/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
3dcbb733e040f32e3f6bd9ab4063f5efe6bd522b)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978036#comment-15978036
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #159 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/159/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
3dcbb733e040f32e3f6bd9ab4063f5efe6bd522b)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17941:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch. [~s9514171]

> CellArrayMap#getCell may throw IndexOutOfBoundsException
> 
>
> Key: HBASE-17941
> URL: https://issues.apache.org/jira/browse/HBASE-17941
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Hsin-Ying Lee
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-17941.v0.patch
>
>
> {noformat}
>   @Override
>   protected Cell getCell(int i) {
>     if( (i < minCellIdx) && (i >= maxCellIdx) ) return null;
>     return block[i];
>   }
> {noformat}
> && -> ||
> We check the index bounds before calling this method, so the exception 
> doesn't happen on current trunk.
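> The corrected condition, per the "&& -> ||" note above:
> {noformat}
>   @Override
>   protected Cell getCell(int i) {
>     if( (i < minCellIdx) || (i >= maxCellIdx) ) return null;
>     return block[i];
>   }
> {noformat}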



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978020#comment-15978020
 ] 

Hudson commented on HBASE-17937:


FAILURE: Integrated in Jenkins build HBase-1.4 #699 (See 
[https://builds.apache.org/job/HBase-1.4/699/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
d69a6366f6d36ce229df80447998e71ca4654518)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978015#comment-15978015
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1860 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1860/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
93ac76ef6164d8eb183f048ed727dc8b4290e0fa)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978011#comment-15978011
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1943 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1943/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
93ac76ef6164d8eb183f048ed727dc8b4290e0fa)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java





--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17943) The in-memory flush size is different for each CompactingMemStore located in the same region

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-17943:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the reviews. [~yuzhih...@gmail.com], [~anoop.hbase], and [~ram_krish]

> The in-memory flush size is different for each CompactingMemStore located in 
> the same region 
> -
>
> Key: HBASE-17943
> URL: https://issues.apache.org/jira/browse/HBASE-17943
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0
>
> Attachments: HBASE-17943.v0.patch
>
>
> {noformat}
>   private void initInmemoryFlushSize(Configuration conf) {
>     long memstoreFlushSize = getRegionServices().getMemstoreFlushSize();
>     int numStores = getRegionServices().getNumStores();
>     if (numStores <= 1) {
>       // Family number might also be zero in some of our unit test cases
>       numStores = 1;
>     }
>     inmemoryFlushSize = memstoreFlushSize / numStores;
>   }
> {noformat}
> We initialize each store in parallel, so the return value from getNumStores() 
> may be different for each CompactingMemStore.
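> One way to make the divisor deterministic (a sketch; the table-descriptor 
> accessor is an assumption for illustration, not necessarily the committed 
> fix):
> {noformat}
>   private void initInmemoryFlushSize(Configuration conf) {
>     long memstoreFlushSize = getRegionServices().getMemstoreFlushSize();
>     // Derive the divisor from the region's table descriptor, which is fixed,
>     // instead of from the set of stores initialized so far.
>     int numStores = Math.max(1, getTableDesc().getFamilies().size());
>     inmemoryFlushSize = memstoreFlushSize / numStores;
>   }
> {noformat}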



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-13288) Fix naming of parameter in Delete constructor

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-13288:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks for the patch. [~ashish singhi]

> Fix naming of parameter in Delete constructor
> -
>
> Key: HBASE-13288
> URL: https://issues.apache.org/jira/browse/HBASE-13288
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Ashish Singhi
>Priority: Trivial
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-13288.patch
>
>
> We have these two variants:
> {code}
> Delete(byte[] row, long timestamp)
> Delete(final byte[] rowArray, final int rowOffset, final int rowLength, long ts)
> {code}
> Both should use {{timestamp}} as the parameter name, not this mix.
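> With consistent naming, both would read:
> {code}
> Delete(byte[] row, long timestamp)
> Delete(final byte[] rowArray, final int rowOffset, final int rowLength, long timestamp)
> {code}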



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978000#comment-15978000
 ] 

Guanghao Zhang commented on HBASE-17125:


bq. If the user calls scan#setMaxVersions(5), the server would check more 
versions (than 3). However, there is a chance that more than 3 versions would 
be returned.
This can be addressed by scan.setFilter(new 
SpecifiedNumVersionsColumnFilter(3)). setMaxVersions determines how many 
versions will be checked, and SpecifiedNumVersionsColumnFilter determines how 
many versions will be returned.

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this 
> column. The oldest version is not removed immediately, but from the user's 
> point of view it is gone. When the user queries with a filter, if the filter 
> skips a new version, the oldest version will be seen again. After the region 
> is compacted, though, the oldest version will never be seen. This is 
> confusing for the user: the query gets inconsistent results before and after 
> region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first 
> checks the cell by filter, then checks the number of versions needed. So if 
> the filter skips the new version, the oldest version will be seen again 
> while it has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of 
> versions first, then check the cell by filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes 
> and max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user query only needs 3 versions, we first check the number of versions, 
> then check the cell by filter. So the number of cells in the result may be 
> less than 3, even though there are 2 more versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv by filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977993#comment-15977993
 ] 

Ted Yu commented on HBASE-17125:


Let's look at the scenario again:

bq. if a column's max version is 5 and the user query only need 3 versions

If the user calls scan#setMaxVersions(5), the server would check more 
versions (than 3). However, there is a chance that more than 3 versions would 
be returned.
Instead of letting the user deal with the slack, it would be better to handle 
this on the server side.

My proposal only involves a few lines of change to your latest patch - though 
there may be some unit test failure(s).

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this 
> column. The oldest version is not removed immediately, but from the user's 
> point of view it is gone. When the user queries with a filter, if the filter 
> skips a new version, the oldest version will be seen again. After the region 
> is compacted, though, the oldest version will never be seen. This is 
> confusing for the user: the query gets inconsistent results before and after 
> region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first 
> checks the cell by filter, then checks the number of versions needed. So if 
> the filter skips the new version, the oldest version will be seen again 
> while it has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of 
> versions first, then check the cell by filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes 
> and max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user query only needs 3 versions, we first check the number of versions, 
> then check the cell by filter. So the number of cells in the result may be 
> less than 3, even though there are 2 more versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv by filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977994#comment-15977994
 ] 

Chia-Ping Tsai commented on HBASE-17941:


LGTM. +1

> CellArrayMap#getCell may throw IndexOutOfBoundsException
> 
>
> Key: HBASE-17941
> URL: https://issues.apache.org/jira/browse/HBASE-17941
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Hsin-Ying Lee
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-17941.v0.patch
>
>
> {noformat}
>   @Override
>   protected Cell getCell(int i) {
> if( (i < minCellIdx) && (i >= maxCellIdx) ) return null;
> return block[i];
>   }
> {noformat}
> && -> ||
> We check the index bounds before calling this method, so the exception 
> doesn't happen on current trunk.
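With the suggested operator change, the bounds check would read as follows 
(sketch only; the surrounding CellArrayMap class is elided):

{code}
@Override
protected Cell getCell(int i) {
  // Reject any index outside [minCellIdx, maxCellIdx). The original &&
  // condition can never be true, so out-of-range indexes reached block[i].
  if ((i < minCellIdx) || (i >= maxCellIdx)) return null;
  return block[i];
}
{code}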



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17890) FuzzyRow tests fail if unaligned support is false

2017-04-20 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977990#comment-15977990
 ] 

Chia-Ping Tsai commented on HBASE-17890:


The principal aim of the v3 patch is to unify the fixed/non-fixed (-1/0) byte 
in the FuzzyRowFilter. It is worthwhile to make the fix.
# It avoids confusion in unit tests: we don't need to pass different bytes 
for unsafe/non-unsafe.
# It avoids another bug. The scenario is that unsafe is enabled on the 
client side and disabled on the server side. The FuzzyRowFilter instantiated 
by the client converts the fixed/non-fixed byte for unsafe, but the 
FuzzyRowFilter created on the server side doesn't convert it back for 
non-unsafe. (The server doesn't know which kind of fixed/non-fixed byte the 
client uses.)
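A schematic of the cross-configuration hazard in point 2, with simplified 
values (the real FuzzyRowFilter preprocesses its mask internally; this only 
illustrates the asymmetry, and the concrete byte values are assumptions):

{code}
// Client side (unsafe enabled): rewrites the fixed-position marker for the
// unsafe-optimized matcher, e.g. 0 -> -1, before the filter is serialized.
byte[] userMask = {0, 0, 1};     // 0 = fixed, 1 = non-fixed
byte[] wireMask = {-1, -1, 1};   // what the serialized filter carries

// Server side (unsafe disabled): expects the original 0/1 encoding and has
// no way to know the client already converted, so fixed positions are
// misread and the filter matches the wrong rows.
boolean firstByteFixed = (wireMask[0] == 0);  // false here -> wrong result
{code}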


> FuzzyRow tests fail if unaligned support is false
> -
>
> Key: HBASE-17890
> URL: https://issues.apache.org/jira/browse/HBASE-17890
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.2.5
>Reporter: Jerry He
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2
>
> Attachments: HBASE-17890.v0.branch-1.patch, HBASE-17890.v0.patch, 
> HBASE-17890.v1.branch-1.patch, HBASE-17890.v1.patch, HBASE-17890.v2.patch, 
> HBASE-17890.v3.patch, HBASE-17890.v3.patch, HBASE-17890.v3.patch, 
> HBASE-17890.v3.patch, HBASE-17890.v3.patch
>
>
> When unaligned support is false, FuzzyRow tests fail:
> {noformat}
> Failed tests:
>   TestFuzzyRowAndColumnRangeFilter.Test:134->runTest:157->runScanner:186 
> expected:<10> but was:<0>
>   TestFuzzyRowFilter.testSatisfiesForward:81 expected: but was:
>   TestFuzzyRowFilter.testSatisfiesReverse:121 expected: but 
> was:
>   TestFuzzyRowFilterEndToEnd.testEndToEnd:247->runTest1:278->runScanner:343 
> expected:<6250> but was:<0>
>   TestFuzzyRowFilterEndToEnd.testFilterList:385->runTest:417->runScanner:445 
> expected:<5> but was:<0>
>   TestFuzzyRowFilterEndToEnd.testHBASE14782:204 expected:<6> but was:<0>
> {noformat}
> This can be reproduced in the case described in HBASE-17869. Or on a platform 
> really without unaligned support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17864) Implement async snapshot/cloneSnapshot/restoreSnapshot methods

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977989#comment-15977989
 ] 

Guanghao Zhang commented on HBASE-17864:


+1. The failed UT is not related.

> Implement async snapshot/cloneSnapshot/restoreSnapshot methods
> --
>
> Key: HBASE-17864
> URL: https://issues.apache.org/jira/browse/HBASE-17864
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Zheng Hu
>Assignee: Zheng Hu
> Fix For: 2.0.0
>
> Attachments: HBASE-17864.v1.patch, HBASE-17864.v2.patch, 
> HBASE-17864.v3.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977981#comment-15977981
 ] 

Guanghao Zhang commented on HBASE-17125:


bq. Can we pass this information (let's call it the slack) to ColumnTracker 
ctor ?
The javadoc of scan.setMaxVersions has been changed. If the user sets max 
versions to less than the column's max versions, it means the user doesn't 
want to check all versions. So I think we don't need this information?

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this 
> column. The oldest version is not removed immediately, but from the user's 
> point of view it is gone. When the user queries with a filter, if the filter 
> skips a new version, the oldest version will be seen again. After the region 
> is compacted, though, the oldest version will never be seen. This is 
> confusing for the user: the query gets inconsistent results before and after 
> region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first 
> checks the cell by filter, then checks the number of versions needed. So if 
> the filter skips the new version, the oldest version will be seen again 
> while it has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of 
> versions first, then check the cell by filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes 
> and max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user query only needs 3 versions, we first check the number of versions, 
> then check the cell by filter. So the number of cells in the result may be 
> less than 3, even though there are 2 more versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv by filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977972#comment-15977972
 ] 

Guanghao Zhang commented on HBASE-17125:


bq. How is the above addressed in the current patch ?
Now scan.setMaxVersions means how many versions will be checked. So this can 
be addressed by scan.setMaxVersions(5).

> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this 
> column. The oldest version is not removed immediately, but from the user's 
> point of view it is gone. When the user queries with a filter, if the filter 
> skips a new version, the oldest version will be seen again. After the region 
> is compacted, though, the oldest version will never be seen. This is 
> confusing for the user: the query gets inconsistent results before and after 
> region compaction.
> The reason is the matchColumn method of UserScanQueryMatcher. It first 
> checks the cell by filter, then checks the number of versions needed. So if 
> the filter skips the new version, the oldest version will be seen again 
> while it has not yet been removed.
> After an offline discussion with [~Apache9] and [~fenghh], we now have two 
> solutions for this problem. The first idea is to check the number of 
> versions first, then check the cell by filter. As the javadoc of setFilter 
> says, the filter is called after all tests for ttl, column match, deletes 
> and max versions have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem: if a column's max versions is 5 and the 
> user query only needs 3 versions, we first check the number of versions, 
> then check the cell by filter. So the number of cells in the result may be 
> less than 3, even though there are 2 more versions that are never read.
> So the second idea has three steps:
> 1. check against the max versions of this column
> 2. check the kv by filter
> 3. check the number of versions the user needs
> But this makes the ScanQueryMatcher more complicated, and it breaks the 
> javadoc of Query.setFilter.
> We don't have a final solution for this problem yet. Suggestions are 
> welcome.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977950#comment-15977950
 ] 

Hudson commented on HBASE-17937:


FAILURE: Integrated in Jenkins build HBase-1.2-JDK8 #120 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/120/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
0b5440d6d1a6fb1943917d68655b3abb8bd483b0)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.
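A simplified illustration of how the accounting window can go negative (this 
is not the actual HRegion code; sizes and ordering are made up to show the 
interleaving):

{code}
import java.util.concurrent.atomic.AtomicLong;

AtomicLong memstoreSize = new AtomicLong(0);

// doMiniBatchMutate writes ~100 bytes into the memstore, but defers the
// size update to its finally block.
// Slow postPut/postDelete coprocessors run; locks released, mvcc advanced.

// A queued flush fires now: it flushes the real memstore content and
// subtracts the actual flushed size, which the counter never received.
memstoreSize.addAndGet(-100);  // counter drops to -100

// The finally block of doMiniBatchMutate adds its delta afterwards.
memstoreSize.addAndGet(100);   // back to 0, but decisions made in the
                               // negative window (e.g. flush sizing) are off
{code}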



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977936#comment-15977936
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.2-IT #859 (See 
[https://builds.apache.org/job/HBase-1.2-IT/859/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
0b5440d6d1a6fb1943917d68655b3abb8bd483b0)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java


> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17514) Warn when Thrift Server 1 is configured for proxy users but not the HTTP transport

2017-04-20 Thread lv zehui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977935#comment-15977935
 ] 

lv zehui commented on HBASE-17514:
--

OK. I can take this one. Could you please assign this jira to me?

> Warn when Thrift Server 1 is configured for proxy users but not the HTTP 
> transport
> --
>
> Key: HBASE-17514
> URL: https://issues.apache.org/jira/browse/HBASE-17514
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift, Usability
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner
>
> The config {{hbase.thrift.support.proxyuser}} is ignored if the Thrift Server 
> 1 isn't configured to use an HTTP transport with 
> {{hbase.regionserver.thrift.http}}.
> We should emit a warning if our configs request proxy user support but don't 
> specify that HTTP should be used for the transport.
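A minimal sketch of the proposed warning (the config key names come from the 
description; {{conf}} and {{LOG}} are the usual server-side handles):

{code}
boolean proxyUsers = conf.getBoolean("hbase.thrift.support.proxyuser", false);
boolean httpTransport = conf.getBoolean("hbase.regionserver.thrift.http", false);
if (proxyUsers && !httpTransport) {
  // Proxy user support silently does nothing without the HTTP transport.
  LOG.warn("hbase.thrift.support.proxyuser is enabled, but it only takes "
      + "effect when hbase.regionserver.thrift.http is also enabled.");
}
{code}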



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17924) Consider sorting the row order when processing multi() ops before taking rowlocks

2017-04-20 Thread Allan Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-17924:
---
Attachment: HBASE-17924.v4.patch

> Consider sorting the row order when processing multi() ops before taking 
> rowlocks
> -
>
> Key: HBASE-17924
> URL: https://issues.apache.org/jira/browse/HBASE-17924
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.1.8
>Reporter: Andrew Purtell
>Assignee: Allan Yang
> Fix For: 2.0.0
>
> Attachments: HBASE-17924.patch, HBASE-17924.v0.patch, 
> HBASE-17924.v2.patch, HBASE-17924.v3.patch, HBASE-17924.v4.patch
>
>
> When processing a batch mutation, we take row locks in whatever order the 
> mutations were added to the multi op by the client.
>  
> {noformat}
> RSRpcServices#multi -> RSRpcServices#mutateRows -> HRegion#mutateRow -> 
> HRegion#mutateRowsWithLocks -> HRegion#processRowsWithLocks
> {noformat}
> Or
> {noformat}
> RSRpcServices#multi -> RSRpcServices#doNonAtomicRegionMutation ->
>   HRegion#get 
> | HRegion#append 
> | HRegion#increment 
> | HRegionServer#doBatchOp -> HRegion#batchMutate -> 
> HRegion#doMiniBatchMutation
> {noformat}
>  
> multi() is fed by client APIs that accept a RowMutations object containing 
> actions for multiple rows. The container for ops inside RowMutations is an 
> ArrayList, which doesn't change the ordering of objects added to it. The 
> protobuf implementation of the messages for multi ops does not reorder the 
> list of actions. When processing multi ops we iterate over the actions in 
> the order rehydrated from protobuf.
> We should discuss sorting the ops by row key when processing multi() ops 
> before taking row locks. Does this make lock ordering more predictable for 
> server-side operations? Yes, but it is potentially surprising for the 
> client, right? Is there any legitimate reason we should take locks out of 
> row-key-sorted order just because the client structured the request that way?
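One way to make the lock order deterministic, sketched under the assumption 
that the server may reorder the batch before locking (Bytes.BYTES_COMPARATOR 
is HBase's lexicographic byte[] comparator):

{code}
import java.util.Arrays;
import java.util.Comparator;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.util.Bytes;

// Sort the batch by row key before taking row locks, so two concurrent
// multi() calls always acquire their locks in the same order and cannot
// deadlock on each other, regardless of the order the client used.
static void sortByRow(Mutation[] batch) {
  Arrays.sort(batch,
      Comparator.comparing(Mutation::getRow, Bytes.BYTES_COMPARATOR));
}
{code}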



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977922#comment-15977922
 ] 

Hudson commented on HBASE-17937:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #28 (See 
[https://builds.apache.org/job/HBase-1.3-IT/28/])
HBASE-17937 Memstore size becomes negative in case of expensive (zhangduo: rev 
3dcbb733e040f32e3f6bd9ab4063f5efe6bd522b)
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestNegativeMemstoreSizeWithSlowCoprocessor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java


> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17937:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Pushed to branch-1.1+. Thanks [~abhishek.chouhan] for contributing. And thanks 
all for reviewing.

> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17937:
--
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.11
   1.3.2
   1.2.6
   1.4.0
   2.0.0
  Component/s: regionserver

> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2, 1.1.11
>
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977899#comment-15977899
 ] 

Duo Zhang commented on HBASE-17937:
---

The patch can be applied to branch-1.1, and I have run the new UT on 
branch-1.1; it passed.

Let me commit and resolve this issue.

> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17933) [hbase-spark] Support Java api for bulkload

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977890#comment-15977890
 ] 

Hadoop QA commented on HBASE-17933:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 1m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
54m 49s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 19s 
{color} | {color:red} hbase-spark generated 5 new + 18 unchanged - 0 fixed = 23 
total (was 18) {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 22s 
{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
12s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 78m 38s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864390/HBase-17933-V2.patch |
| JIRA Issue | HBASE-17933 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  scalac  scaladoc  |
| uname | Linux 19e4e7ee4cd5 4.8.3-std-1 #1 SMP Fri Oct 21 11:15:43 UTC 2016 
x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 40cc666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6522/artifact/patchprocess/diff-javadoc-javadoc-hbase-spark.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6522/testReport/ |
| modules | C: hbase-spark U: hbase-spark |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6522/console |
| Powered by | Apache 

[jira] [Commented] (HBASE-16549) Procedure v2 - Add new AM metrics

2017-04-20 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977815#comment-15977815
 ] 

Umesh Agashe commented on HBASE-16549:
--

We can ignore the previous failure. This patch applies on top of the patch 
for HBASE-14614, which applies on master. I think the failure is because the 
hbase-14614 branch doesn't exist.

> Procedure v2 - Add new AM metrics
> -
>
> Key: HBASE-16549
> URL: https://issues.apache.org/jira/browse/HBASE-16549
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-16549-hbase-14614.v1.patch
>
>
> With the new AM we can add a bunch of metrics
>  - assign/unassign time
>  - server crash time
>  - grouping related metrics? (how many batch we do, and similar?)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17933) [hbase-spark] Support Java api for bulkload

2017-04-20 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977799#comment-15977799
 ] 

Yi Liang commented on HBASE-17933:
--

Patch v2 carries Sean's suggestion and still keeps the wrapper class.

> [hbase-spark]  Support Java api for bulkload
> 
>
> Key: HBASE-17933
> URL: https://issues.apache.org/jira/browse/HBASE-17933
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: HBase-17933-V1.patch, HBase-17933-V2.patch
>
>
> In JavaHBaseContext, there are Java APIs for bulkPut and bulkDelete, but no 
> Java API for bulkload. This jira will add a bulkload Java API to hbase-spark.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17933) [hbase-spark] Support Java api for bulkload

2017-04-20 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-17933:
-
Attachment: HBase-17933-V2.patch

> [hbase-spark]  Support Java api for bulkload
> 
>
> Key: HBASE-17933
> URL: https://issues.apache.org/jira/browse/HBASE-17933
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: HBase-17933-V1.patch, HBase-17933-V2.patch
>
>
> In JavaHBaseContext, there are Java APIs for bulkPut and bulkDelete, but no 
> Java API for bulkload. This jira will add a bulkload Java API to hbase-spark.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1591#comment-1591
 ] 

Hadoop QA commented on HBASE-17448:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s {color} 
| {color:red} HBASE-17448 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864387/HBASE-17448.patch |
| JIRA Issue | HBASE-17448 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6521/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 1.3.1
>
> Attachments: HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 
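A sketch of the kind of instrumentation being proposed, with made-up metric 
objects (the real patch plugs into HBase's hadoop-compat metrics framework; 
this only shows the wrapping idea around one op type):

{code}
long start = System.nanoTime();
try {
  byte[] data = zooKeeper.getData(path, false, null);
  readLatencyHisto.update(System.nanoTime() - start);   // per-op-type latency
  return data;
} catch (KeeperException e) {
  failedOps.increment();                                // total failed ops
  if (e.code() == KeeperException.Code.CONNECTIONLOSS) {
    connectionLossCount.increment();                    // failures by cause
  }
  throw e;
}
{code}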



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-20 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated HBASE-17448:
-
Attachment: HBASE-17448.patch

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 1.3.1
>
> Attachments: HBASE-17448.patch
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-20 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni updated HBASE-17448:
-
   Labels: patch  (was: )
Fix Version/s: 1.3.1
Affects Version/s: 1.3.1
   Status: Patch Available  (was: Open)

Added metrics for RecoverableZooKeeper related to specific exceptions, total 
failed ZooKeeper API calls and latency histograms for read, write and sync 
operations. Also added unit tests for the same. Added service provider for the 
ZooKeeper metrics implementation inside the hadoop compatibility module.

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Affects Versions: 1.3.1
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>  Labels: patch
> Fix For: 1.3.1
>
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (HBASE-17448) Export metrics from RecoverableZooKeeper

2017-04-20 Thread Chinmay Kulkarni (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chinmay Kulkarni reassigned HBASE-17448:


Assignee: Chinmay Kulkarni

> Export metrics from RecoverableZooKeeper
> 
>
> Key: HBASE-17448
> URL: https://issues.apache.org/jira/browse/HBASE-17448
> Project: HBase
>  Issue Type: Improvement
>  Components: Zookeeper
>Reporter: Andrew Purtell
>Assignee: Chinmay Kulkarni
>
> Consider adding instrumentation to RecoverableZooKeeper that exposes metrics 
> on the performance and health of the embedded ZooKeeper client: latency 
> histograms for each op type, number of reconnections, number of ops where a 
> reconnection was necessary to proceed, number of failed ops due to 
> CONNECTIONLOSS, number of failed ops due to SESSIONEXPIRED, number of failed 
> ops due to OPERATIONTIMEOUT. 
> RecoverableZooKeeper is a class in hbase-client so we can hook up the new 
> metrics to both client- and server-side metrics reporters. Probably this 
> metrics source should be a process singleton. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17890) FuzzyRow tests fail if unaligned support is false

2017-04-20 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977687#comment-15977687
 ] 

Jerry He commented on HBASE-17890:
--

[~chia7712]  The v3 patch added other non-trivial changes. Are they 
necessary? It will take longer for us to understand your patch.
I still like your clean v2 patch.
Could you explain why the original TestFuzzyRowAndColumnRangeFilter and 
TestFuzzyRowFilterEndToEnd fail if unaligned support is false, and just do 
the minimum to make them pass? Do they always fail? It is ok to add the two 
new medium tests.

> FuzzyRow tests fail if unaligned support is false
> -
>
> Key: HBASE-17890
> URL: https://issues.apache.org/jira/browse/HBASE-17890
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 1.2.5
>Reporter: Jerry He
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0, 1.2.6, 1.3.2
>
> Attachments: HBASE-17890.v0.branch-1.patch, HBASE-17890.v0.patch, 
> HBASE-17890.v1.branch-1.patch, HBASE-17890.v1.patch, HBASE-17890.v2.patch, 
> HBASE-17890.v3.patch, HBASE-17890.v3.patch, HBASE-17890.v3.patch, 
> HBASE-17890.v3.patch, HBASE-17890.v3.patch
>
>
> When unaligned support is false, FuzzyRow tests fail:
> {noformat}
> Failed tests:
>   TestFuzzyRowAndColumnRangeFilter.Test:134->runTest:157->runScanner:186 
> expected:<10> but was:<0>
>   TestFuzzyRowFilter.testSatisfiesForward:81 expected: but was:
>   TestFuzzyRowFilter.testSatisfiesReverse:121 expected: but 
> was:
>   TestFuzzyRowFilterEndToEnd.testEndToEnd:247->runTest1:278->runScanner:343 
> expected:<6250> but was:<0>
>   TestFuzzyRowFilterEndToEnd.testFilterList:385->runTest:417->runScanner:445 
> expected:<5> but was:<0>
>   TestFuzzyRowFilterEndToEnd.testHBASE14782:204 expected:<6> but was:<0>
> {noformat}
> This can be reproduced in the case described in HBASE-17869. Or on a platform 
> really without unaligned support.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977605#comment-15977605
 ] 

Enis Soztutar commented on HBASE-17944:
---

I meant the JDK7 property in the class declaration.

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)
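A hedged sketch of version parsing that tolerates both the pre-9 "1.8.0_121" 
style and the Java 9 "9-ea" style (not the actual ClassSize code):

{code}
// Derive the major Java version without assuming the string contains ".".
static int majorJavaVersion(String version) {   // "1.8.0_121", "9-ea", "9"
  String v = version.split("[-+]")[0];          // strip "-ea", "+181", ...
  String[] parts = v.split("\\.");
  int first = Integer.parseInt(parts[0]);
  // Pre-9 JVMs report "1.x"; 9 and later report the major version directly.
  return first == 1 ? Integer.parseInt(parts[1]) : first;
}
{code}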



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-14619) Procedure V2: Implement balancer to be Procedure-based

2017-04-20 Thread Umesh Agashe (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977594#comment-15977594
 ] 

Umesh Agashe commented on HBASE-14619:
--

Hi [~syuanjiang], I was looking around AMv2 and found this ticket. How will 
it work? This looks interesting; can you add more details?

Thanks,
Umesh


> Procedure V2: Implement balancer to be Procedure-based
> --
>
> Key: HBASE-14619
> URL: https://issues.apache.org/jira/browse/HBASE-14619
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2
>Affects Versions: 2.0.0
>Reporter: Stephen Yuan Jiang
>Assignee: Stephen Yuan Jiang
> Fix For: 2.0.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977488#comment-15977488
 ] 

Andrew Purtell commented on HBASE-17937:


This is also an issue for 1.1, which we are still releasing. I can port the 
1.2 change back, no problem. The test is the same. The change is the same in 
HRegion too, though the code there is syntactically different.

> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, but a queued 
> flush can be triggered during the coprocessor calls (if they take time, 
> e.g. index updates) since we have released the locks and advanced mvcc at 
> that point. The flush turns the memstore size negative since the value 
> subtracted is the actual value flushed from the stores. The negative value 
> impacts future flushes, among other things that depend on the memstore size.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977476#comment-15977476
 ] 

Sean Busbey commented on HBASE-17944:
-

JDK7 is still used in the 1.y release lines, which this fix targets currently.

Is Java 9 close enough now that we should update the docs to list it? 
(presumably NT or X)

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17933) [hbase-spark] Support Java api for bulkload

2017-04-20 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977394#comment-15977394
 ] 

Yi Liang commented on HBASE-17933:
--

{quote}is there a pre-existing Pair or 2 Tuple?{quote}
I will try to find an existing pair class, or design a new one, that will be 
better than the current one :)

{quote}would it be better to still take arbitrary RDDs {quote}
I agree with this suggestion. This gives the user more flexibility.

> [hbase-spark]  Support Java api for bulkload
> 
>
> Key: HBASE-17933
> URL: https://issues.apache.org/jira/browse/HBASE-17933
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: HBase-17933-V1.patch
>
>
> In JavaHBaseContext, there are Java APIs for bulkPut and bulkDelete, but no 
> Java API for bulkload. This jira will add a bulkload Java API to hbase-spark.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17514) Warn when Thrift Server 1 is configured for proxy users but not the HTTP transport

2017-04-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977273#comment-15977273
 ] 

Sean Busbey commented on HBASE-17514:
-

yeah that looks good.

I don't know off the top of my head if the thrift2 server supports proxy users. 
I would guess I only talked about the thrift1 server in this jira because I 
happened to be looking at it at the time. If it looks like thrift2 has the same 
issue, feel free to do both under this issue.

> Warn when Thrift Server 1 is configured for proxy users but not the HTTP 
> transport
> --
>
> Key: HBASE-17514
> URL: https://issues.apache.org/jira/browse/HBASE-17514
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift, Usability
>Reporter: Sean Busbey
>Priority: Minor
>  Labels: beginner
>
> The config {{hbase.thrift.support.proxyuser}} is ignored if the Thrift Server 
> 1 isn't configured to use an HTTP transport with 
> {{hbase.regionserver.thrift.http}}.
> We should emit a warning if our configs request proxy user support but don't 
> specify that HTTP should be used for the transport.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977242#comment-15977242
 ] 

Hadoop QA commented on HBASE-17941:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 25s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 5s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
44m 21s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 201m 17s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
47s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 269m 54s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.util.TestHBaseFsckTwoRS |
|   | hadoop.hbase.security.access.TestCoprocessorWhitelistMasterObserver |
|   | hadoop.hbase.util.TestHBaseFsckReplicas |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
|   | hadoop.hbase.snapshot.TestSecureExportSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864276/HBASE-17941.v0.patch |
| JIRA Issue | HBASE-17941 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 8718631b5b5d 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 40cc666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6515/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/6515/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6515/testReport/ |
| modules | C: hbase-server U: 

[jira] [Work started] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-04-20 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-16290 started by Sreeram Venkatasubramanian.
--
> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Attachments: Sample Summary.txt
>
>
> Being able to get a clue what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?
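As a rough, self-contained illustration of the kind of per-call-type rollup 
such a dump could print ({{QueuedCall}} below is a stand-in type, not HBase's 
actual queued-call class; the attached Sample Summary.txt shows the real 
format):
{code}
import java.util.Collection;
import java.util.Map;
import java.util.TreeMap;

class QueuedCall {             // stand-in for one queued RPC
  String methodName;
  long sizeBytes;
}

class CallQueueSummary {
  // Roll the backed-up queue up by call type: count and total bytes each.
  static void dump(Collection<QueuedCall> queue, StringBuilder out) {
    Map<String, long[]> byMethod = new TreeMap<>(); // method -> [count, bytes]
    for (QueuedCall call : queue) {
      long[] agg = byMethod.computeIfAbsent(call.methodName, k -> new long[2]);
      agg[0]++;
      agg[1] += call.sizeBytes;
    }
    byMethod.forEach((method, agg) -> out.append(
        String.format("%s: count=%d, totalBytes=%d%n", method, agg[0], agg[1])));
  }
}
{code}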



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-04-20 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: Sample Summary.txt

Sample call queue summary as it appears in the region server debug dump.

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Attachments: Sample Summary.txt
>
>
> Being able to get a clue what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-04-20 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: (was: Sample Summary.txt)

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
>
> Being able to get a clue what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17933) [hbase-spark] Support Java api for bulkload

2017-04-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977224#comment-15977224
 ] 

Sean Busbey commented on HBASE-17933:
-

Hurm. Is there a pre-existing Pair, 2-tuple, or similar we could use? Would 
that even be less awkward than this...

Even if we need to keep the wrapper classes, would it be better to still take 
arbitrary RDDs and a transforming function whose destination is the wrappers?

(I'll try to clear up my mulling over this while you work on the other pieces 
of review feedback; see the sketch below.)
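For concreteness, one hypothetical shape for such an API, mirroring the 
existing {{JavaHBaseContext.bulkPut(rdd, tableName, function)}} style; the 
interface, signature, and staging-dir parameter below are a guess at one 
possible design, not the patch under review:
{code}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.spark.KeyFamilyQualifier;
import org.apache.hadoop.hbase.util.Pair;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;

// Hypothetical signature only; the body is elided.
interface JavaBulkLoadApi {
  // Callers keep their natural RDD element type T; the transforming function
  // produces the (row/family/qualifier, value) pairs the bulk load needs,
  // so no wrapper-typed RDD has to be built up front.
  <T> void bulkLoad(JavaRDD<T> rdd, TableName tableName,
      Function<T, Pair<KeyFamilyQualifier, byte[]>> mapper, String stagingDir);
}
{code}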

> [hbase-spark]  Support Java api for bulkload
> 
>
> Key: HBASE-17933
> URL: https://issues.apache.org/jira/browse/HBASE-17933
> Project: HBase
>  Issue Type: New Feature
>  Components: spark
>Affects Versions: 2.0.0
>Reporter: Yi Liang
>Assignee: Yi Liang
> Fix For: 2.0.0
>
> Attachments: HBase-17933-V1.patch
>
>
> In JavaHBaseContext, there are Java APIs for bulkPut and bulkDelete, but no 
> Java API for bulkload. This jira will add a bulkload Java API to hbase-spark.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-04-20 Thread Sreeram Venkatasubramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977220#comment-15977220
 ] 

Sreeram Venkatasubramanian commented on HBASE-16290:


[~saint@gmail.com] I have attached a sample call queue summary that appears 
in the region server debug dump. Can you please take a look and let me know if 
it looks fine? 

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Attachments: Sample Summary.txt
>
>
> Being able to get a clue what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-16549) Procedure v2 - Add new AM metrics

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977218#comment-15977218
 ] 

Hadoop QA commented on HBASE-16549:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 10s {color} 
| {color:red} HBASE-16549 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864335/HBASE-16549-hbase-14614.v1.patch
 |
| JIRA Issue | HBASE-16549 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6520/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Procedure v2 - Add new AM metrics
> -
>
> Key: HBASE-16549
> URL: https://issues.apache.org/jira/browse/HBASE-16549
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-16549-hbase-14614.v1.patch
>
>
> With the new AM we can add a bunch of metrics
>  - assign/unassign time
>  - server crash time
>  - grouping related metrics? (how many batch we do, and similar?)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-04-20 Thread Sreeram Venkatasubramanian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sreeram Venkatasubramanian updated HBASE-16290:
---
Attachment: Sample Summary.txt

Sample call queue summary that appears in the region server debug dump.

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>Priority: Critical
>  Labels: beginner
> Attachments: Sample Summary.txt
>
>
> Being able to get a clue what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16549) Procedure v2 - Add new AM metrics

2017-04-20 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-16549:
-
Status: Patch Available  (was: In Progress)

> Procedure v2 - Add new AM metrics
> -
>
> Key: HBASE-16549
> URL: https://issues.apache.org/jira/browse/HBASE-16549
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-16549-hbase-14614.v1.patch
>
>
> With the new AM we can add a bunch of metrics
>  - assign/unassign time
>  - server crash time
>  - grouping related metrics? (how many batch we do, and similar?)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (HBASE-17945) Procedure V2: Add new Snapshot metrics to Snapshot procedure

2017-04-20 Thread Umesh Agashe (JIRA)
Umesh Agashe created HBASE-17945:


 Summary: Procedure V2: Add new Snapshot metrics to Snapshot 
procedure
 Key: HBASE-17945
 URL: https://issues.apache.org/jira/browse/HBASE-17945
 Project: HBase
  Issue Type: Improvement
  Components: proc-v2
Reporter: Umesh Agashe
Assignee: Umesh Agashe


Please refer to the doc in HBASE-16549.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-16549) Procedure v2 - Add new AM metrics

2017-04-20 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Agashe updated HBASE-16549:
-
Attachment: HBASE-16549-hbase-14614.v1.patch

Patch adds the following metrics:

1. AssignProcedure:
  * assignSubmittedCount
  * assignTime
  * assignFailedCount

2. UnassignProcedure
  * unassignSubmittedCount
  * unassignTime
  * unassignFailedCount

3. MergeTableRegionProcedure
  * mergeSubmittedCount
  * mergeTime
  * mergeFailedCount

4. SplitTableRegionProcedure
  * splitSubmittedCount
  * splitTime
  * splitFailedCount

5. ServerCrashProcedure
  * serverCrashSubmittedCount
  * serverCrashTime
  * serverCrashFailedCount
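
Each procedure above gets the same submitted/time/failed trio; as a hedged 
sketch of that shape (the interface and method names are illustrative, not 
taken from the patch):
{code}
// Illustrative only: the recurring counter trio each procedure exposes.
interface ProcedureMetrics {
  void incrSubmitted();           // e.g. assignSubmittedCount
  void updateTime(long millis);   // e.g. assignTime (a time histogram)
  void incrFailed();              // e.g. assignFailedCount
}
{code}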


> Procedure v2 - Add new AM metrics
> -
>
> Key: HBASE-16549
> URL: https://issues.apache.org/jira/browse/HBASE-16549
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
> Attachments: HBASE-16549-hbase-14614.v1.patch
>
>
> With the new AM we can add a bunch of metrics
>  - assign/unassign time
>  - server crash time
>  - grouping related metrics? (how many batch we do, and similar?)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Work started] (HBASE-16549) Procedure v2 - Add new AM metrics

2017-04-20 Thread Umesh Agashe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-16549 started by Umesh Agashe.

> Procedure v2 - Add new AM metrics
> -
>
> Key: HBASE-16549
> URL: https://issues.apache.org/jira/browse/HBASE-16549
> Project: HBase
>  Issue Type: Sub-task
>  Components: proc-v2, Region Assignment
>Affects Versions: 2.0.0
>Reporter: Matteo Bertozzi
>Assignee: Umesh Agashe
> Fix For: 2.0.0
>
>
> With the new AM we can add a bunch of metrics
>  - assign/unassign time
>  - server crash time
>  - grouping related metrics? (how many batch we do, and similar?)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17937) Memstore size becomes negative in case of expensive postPut/Delete Coprocessor call

2017-04-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977123#comment-15977123
 ] 

Enis Soztutar commented on HBASE-17937:
---

+1. 

> Memstore size becomes negative in case of expensive postPut/Delete 
> Coprocessor call
> ---
>
> Key: HBASE-17937
> URL: https://issues.apache.org/jira/browse/HBASE-17937
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0, 1.3.1, 0.98.24
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-17937.branch-1.001.patch, 
> HBASE-17937.branch-1.002.patch, HBASE-17937.master.001.patch, 
> HBASE-17937.master.002.patch, HBASE-17937.master.002.patch, 
> HBASE-17937.master.003.patch, HBASE-17937.master.003.patch
>
>
> We ran into a situation where the memstore size became negative due to 
> expensive postPut/Delete Coprocessor calls in doMiniBatchMutate. We update 
> the memstore size in the finally block of doMiniBatchMutate, however a queued 
> flush can be triggered during the coprocessor calls(if they are taking time 
> eg. index updates) since we have released the locks and advanced mvcc at this 
> point. The flush will turn the memstore size negative since the value 
> subtracted is the actual value flushed from stores. The negative value 
> impacts the future flushes amongst others that depend on memstore size.
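A toy, self-contained illustration of that ordering (plain Java, not HBase 
code): the flush's subtraction lands before the mutation's finally-block 
addition, so the counter is negative in between.
{code}
import java.util.concurrent.atomic.AtomicLong;

public class NegativeMemstoreDemo {
  public static void main(String[] args) throws InterruptedException {
    AtomicLong memstoreSize = new AtomicLong(0);
    long delta = 100; // bytes a mutation wrote into the memstore

    // Flush thread runs while the slow coprocessor hooks hold things up;
    // it subtracts the bytes it actually flushed from the stores.
    Thread flush = new Thread(() -> memstoreSize.addAndGet(-delta));
    flush.start();
    flush.join();

    System.out.println("during the window: " + memstoreSize.get()); // -100

    // The mutation's finally block accounts for the write only afterwards.
    memstoreSize.addAndGet(delta);
    System.out.println("after the late add: " + memstoreSize.get()); // 0
  }
}
{code}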



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977093#comment-15977093
 ] 

Enis Soztutar commented on HBASE-17944:
---

{{JDK7}} is not used any more. Let's get rid of this code block altogether 
instead. 

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977079#comment-15977079
 ] 

Hadoop QA commented on HBASE-17944:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 0s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s 
{color} | {color:red} hbase-common generated 1 new + 0 unchanged - 0 fixed = 1 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 45s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
6s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 35m 49s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-common |
|  |  Load of known null value in 
org.apache.hadoop.hbase.util.ClassSize.<clinit>()  At 
ClassSize.java:in org.apache.hadoop.hbase.util.ClassSize.<clinit>()  At ClassSize.java:[line 144] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864299/0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
 |
| JIRA Issue | HBASE-17944 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux d6bb996e29cd 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 40cc666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6518/artifact/patchprocess/new-findbugs-hbase-common.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6518/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6518/console |
| Powered by | Apache Yetus 0.3.0   

[jira] [Commented] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977073#comment-15977073
 ] 

Hadoop QA commented on HBASE-17906:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 1m 48s 
{color} | {color:red} Docker failed to build yetus/hbase:ef91163. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864316/HBASE-17906.branch-0.98.004.patch
 |
| JIRA Issue | HBASE-17906 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6519/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.branch-0.98.004.patch, 
> HBASE-17906.master.001.patch, HBASE-17906.master.002.patch, 
> hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: (was: HBASE-17906.branch-0.98.004.patch)

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.branch-0.98.004.patch, 
> HBASE-17906.master.001.patch, HBASE-17906.master.002.patch, 
> hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: HBASE-17906.branch-0.98.004.patch

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.branch-0.98.004.patch, 
> HBASE-17906.master.001.patch, HBASE-17906.master.002.patch, 
> hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: HBASE-17906.branch-0.98.004.patch

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.branch-0.98.004.patch, 
> HBASE-17906.master.001.patch, HBASE-17906.master.002.patch, 
> hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: (was: HBASE-17906.branch-0.98.001.patch)

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.master.001.patch, 
> HBASE-17906.master.002.patch, hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: (was: HBASE-17906.branch-0.98.003.patch)

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.master.001.patch, 
> HBASE-17906.master.002.patch, hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: (was: HBASE-17906.branch-0.98.002.patch)

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.master.001.patch, 
> HBASE-17906.master.002.patch, hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977057#comment-15977057
 ] 

Ted Yu commented on HBASE-17944:


Just curious: are you deploying Java 9 in your cluster?

Thanks

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977047#comment-15977047
 ] 

Hadoop QA commented on HBASE-17944:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
54s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 48s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 58s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
7s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 23s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864296/0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
 |
| JIRA Issue | HBASE-17944 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux af336a30f712 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 40cc666 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6517/testReport/ |
| modules | C: hbase-common U: hbase-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/6517/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> 

[jira] [Commented] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977038#comment-15977038
 ] 

Hadoop QA commented on HBASE-17906:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 57s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
48s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s 
{color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} 0.98 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} 0.98 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 4s 
{color} | {color:red} hbase-thrift in 0.98 has 1 extant Findbugs warnings. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s 
{color} | {color:green} 0.98 passed {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s 
{color} | {color:red} hbase-thrift in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 20s 
{color} | {color:red} hbase-thrift in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 20s {color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 0m 51s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.4.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 1m 41s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.4.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 2m 31s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.0. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 3m 19s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 4m 12s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.5.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 1s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.1. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 5m 50s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.2. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 6m 37s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.6.3. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red} 7m 27s 
{color} | {color:red} The patch causes 14 errors with Hadoop v2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 12s 
{color} | {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s {color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
11s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 33m 12s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:ef91163 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864292/HBASE-17906.branch-0.98.003.patch
 |
| JIRA Issue | HBASE-17906 |
| Optional Tests |  asflicense  javac  javadoc  unit  

[jira] [Comment Edited] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976972#comment-15976972
 ] 

Ted Yu edited comment on HBASE-17125 at 4/20/17 4:40 PM:
-

bq. has another problem, if a column's max version is 5 and the user query only 
need 3 versions. It first check the version's number, then check the cell by 
filter. So the cells number of the result may less than 3. But there are 2 
versions which don't read anymore.

How is the above addressed in the current patch?

Currently the max versions is obtained this way (see UserScanQueryMatcher):
{code}
int maxVersions = scan.isRaw() ? scan.getMaxVersions()
: Math.min(scan.getMaxVersions(), scanInfo.getMaxVersions());
{code}
The column tracker loses some information when the column's max versions is 
greater than that specified in the Scan.
Can we pass this information (let's call it the slack) to the ColumnTracker 
ctor?

When filterResponse is SKIP, we can utilize the extra information to address 
the scenario described above by calling a new method of ColumnTracker (let's 
call it retract) which decrements the currentCount field if the slack permits.
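
A rough sketch of that slack/retract idea (class, field, and method names 
follow the comment above, not an actual patch):
{code}
// Sketch only: the tracker is told how many extra versions the column
// family retains beyond what the Scan asked for (the "slack").
class VersionTrackerSketch {
  private final int slack;   // cf maxVersions - scan maxVersions, if positive
  private int currentCount;  // versions already matched for this column
  private int used;          // slack already spent

  VersionTrackerSketch(int slack) {
    this.slack = slack;
  }

  void countVersion() {
    currentCount++;
  }

  // Called when the filter SKIPs a cell that was already counted:
  // give the count back so an older, still-live version can be matched.
  void retract() {
    if (used < slack) {
      used++;
      currentCount--;
    }
  }
}
{code}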



was (Author: yuzhih...@gmail.com):
bq. has another problem, if a column's max version is 5 and the user query only 
need 3 versions. It first check the version's number, then check the cell by 
filter. So the cells number of the result may less than 3. But there are 2 
versions which don't read anymore.

How is the above addressed in the current patch ?

Currently the max versions is obtained this way (see UserScanQueryMatcher):
{code}
int maxVersions = scan.isRaw() ? scan.getMaxVersions()
: Math.min(scan.getMaxVersions(), scanInfo.getMaxVersions());
{code}
The column tracker loses some information when column's max versions is greater 
than that specified in the Scan.
Can we pass this information to ColumnTracker so that the column tracker can 
return richer information (thru a tuple, e.g.) from checkVersions() ?
{code}
colChecker = columns.checkVersions(cell, timestamp, typeByte, false);
{code}
That way, when filterResponse is SKIP, we can utilize the extra information to 
address the scenario described above.


> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this column. 
> The oldest version doesn't remove immediately. But from the user view, the 
> oldest version has gone. When user use a filter to query, if the filter skip 
> a new version, then the oldest version will be seen again. But after compact 
> the region, then the oldest version will never been seen. So it is weird for 
> user. The query will get inconsistent result before and after region 
> compaction.
> The reason is matchColumn method of UserScanQueryMatcher. It first check the 
> cell by filter, then check the number of versions needed. So if the filter 
> skip the new version, then the oldest version will be seen again when it is 
> not removed.
> Have a discussion offline with [~Apache9] and [~fenghh], now we have two 
> solution for this problem. The first idea is check the number of versions 
> first, then check the cell by filter. As the comment of setFilter, the filter 
> is called after all tests for ttl, column match, deletes and max versions 
> have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem, if a column's max version is 5 and the 
> user query only need 3 versions. It first check the version's number, then 
> check the cell by filter. So the cells number of the result may less than 3. 
> But there are 2 versions which don't read anymore.
> So the second idea has three steps.
> 1. check by the max versions of this column
> 2. check the kv by filter
> 3. check the versions which user need.
> But this will lead the ScanQueryMatcher more complicated. And this will break 
> the javadoc of Query.setFilter.
> Now we don't have a final solution for this problem. Suggestions are welcomed.

[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15977000#comment-15977000
 ] 

Colm O hEigeartaigh commented on HBASE-17944:
-

Good point! I have updated the patch just to throw the RuntimeException if the 
version is null.

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HBASE-17944:

Attachment: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HBASE-17944:

Attachment: (was: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch)

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976988#comment-15976988
 ] 

Ted Yu commented on HBASE-17944:


{code}
140 } else if (version != null && version.startsWith("9")) {
141   JDK7 = false;
{code}
What if Java 10 rolls out :-)
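
One defensive way to handle both schemes (an illustrative sketch, not the 
attached patch): take the leading numeric token and apply the legacy {{1.x}} 
convention, which covers "1.8.0_121", "9-ea", and a future "10" alike.
{code}
// Illustrative sketch: derive the major version from java.version strings.
static int majorJavaVersion(String version) {
  String[] parts = version.split("[._-]");  // "1.8.0_121" / "9-ea" / "10.0.1"
  int first = Integer.parseInt(parts[0]);
  return first == 1 ? Integer.parseInt(parts[1]) : first;  // 8 / 9 / 10
}
{code}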

> ClassSize fails with Java 9
> ---
>
> Key: HBASE-17944
> URL: https://issues.apache.org/jira/browse/HBASE-17944
> Project: HBase
>  Issue Type: Bug
>Reporter: Colm O hEigeartaigh
> Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2
>
> Attachments: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch
>
>
> ClassSize fails when run with Java 9. The static block assumes that the java 
> version contains "." which is not necessarily the case with Java 9:
> Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
>   at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17542) Move backup system table into separate namespace

2017-04-20 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-17542:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> Move backup system table into separate namespace
> 
>
> Key: HBASE-17542
> URL: https://issues.apache.org/jira/browse/HBASE-17542
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17542-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17542) Move backup system table into separate namespace

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976980#comment-15976980
 ] 

Hudson commented on HBASE-17542:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2893 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2893/])
HBASE-17542 Move backup system table into separate namespace (tedyu: rev 
b1ef8dd43aa0f0102f296ea9b3eb76b5623052f5)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/backup/BackupRestoreConstants.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/backup/BackupHFileCleaner.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/backup/impl/BackupSystemTable.java


> Move backup system table into separate namespace
> 
>
> Key: HBASE-17542
> URL: https://issues.apache.org/jira/browse/HBASE-17542
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-17542-v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17915) Implement async replication admin methods

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976981#comment-15976981
 ] 

Hudson commented on HBASE-17915:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2893 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2893/])
HBASE-17915 Implement async replication admin methods (zghao: rev 
40cc666ac984e846a8c7105b771ce6bec90c4ad3)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncAdminBase.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/replication/ReplicationSerDeHelper.java
* (add) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncReplicationAdminApi.java


> Implement async replication admin methods
> -
>
> Key: HBASE-17915
> URL: https://issues.apache.org/jira/browse/HBASE-17915
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17915.master.001.patch, 
> HBASE-17915.master.001.patch, HBASE-17915.master.001.patch, 
> HBASE-17915.master.002.patch, HBASE-17915.master.003.patch, 
> HBASE-17915.master.004.patch, HBASE-17915.master.005.patch, 
> HBASE-17915.master.005.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (HBASE-17125) Inconsistent result when use filter to read data

2017-04-20 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15976972#comment-15976972
 ] 

Ted Yu commented on HBASE-17125:


bq. has another problem, if a column's max version is 5 and the user query only 
need 3 versions. It first check the version's number, then check the cell by 
filter. So the cells number of the result may less than 3. But there are 2 
versions which don't read anymore.

How is the above addressed in the current patch ?

Currently the max versions is obtained this way (see UserScanQueryMatcher):
{code}
int maxVersions = scan.isRaw() ? scan.getMaxVersions()
: Math.min(scan.getMaxVersions(), scanInfo.getMaxVersions());
{code}
The column tracker loses some information when column's max versions is greater 
than that specified in the Scan.
Can we pass this information to ColumnTracker so that the column tracker can 
return richer information (thru a tuple, e.g.) from checkVersions() ?
{code}
colChecker = columns.checkVersions(cell, timestamp, typeByte, false);
{code}
That way, when filterResponse is SKIP, we can utilize the extra information to 
address the scenario described above.


> Inconsistent result when use filter to read data
> 
>
> Key: HBASE-17125
> URL: https://issues.apache.org/jira/browse/HBASE-17125
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: example.diff, HBASE-17125.master.001.patch, 
> HBASE-17125.master.002.patch, HBASE-17125.master.002.patch, 
> HBASE-17125.master.003.patch, HBASE-17125.master.004.patch, 
> HBASE-17125.master.005.patch, HBASE-17125.master.006.patch, 
> HBASE-17125.master.007.patch, HBASE-17125.master.008.patch
>
>
> Assume a column's max versions is 3, then we write 4 versions of this column. 
> The oldest version doesn't remove immediately. But from the user view, the 
> oldest version has gone. When user use a filter to query, if the filter skip 
> a new version, then the oldest version will be seen again. But after compact 
> the region, then the oldest version will never been seen. So it is weird for 
> user. The query will get inconsistent result before and after region 
> compaction.
> The reason is matchColumn method of UserScanQueryMatcher. It first check the 
> cell by filter, then check the number of versions needed. So if the filter 
> skip the new version, then the oldest version will be seen again when it is 
> not removed.
> Have a discussion offline with [~Apache9] and [~fenghh], now we have two 
> solution for this problem. The first idea is check the number of versions 
> first, then check the cell by filter. As the comment of setFilter, the filter 
> is called after all tests for ttl, column match, deletes and max versions 
> have been run.
> {code}
>   /**
>* Apply the specified server-side filter when performing the Query.
>* Only {@link Filter#filterKeyValue(Cell)} is called AFTER all tests
>* for ttl, column match, deletes and max versions have been run.
>* @param filter filter to run on the server
>* @return this for invocation chaining
>*/
>   public Query setFilter(Filter filter) {
> this.filter = filter;
> return this;
>   }
> {code}
> But this idea has another problem, if a column's max version is 5 and the 
> user query only need 3 versions. It first check the version's number, then 
> check the cell by filter. So the cells number of the result may less than 3. 
> But there are 2 versions which don't read anymore.
> So the second idea has three steps.
> 1. check by the max versions of this column
> 2. check the kv by filter
> 3. check the versions which user need.
> But this will lead the ScanQueryMatcher more complicated. And this will break 
> the javadoc of Query.setFilter.
> Now we don't have a final solution for this problem. Suggestions are welcomed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Updated] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HBASE-17944:

Attachment: 0001-HBASE-17944-ClassSize-fails-with-Java-9.patch



[jira] [Updated] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colm O hEigeartaigh updated HBASE-17944:

Status: Patch Available  (was: Open)



[jira] [Created] (HBASE-17944) ClassSize fails with Java 9

2017-04-20 Thread Colm O hEigeartaigh (JIRA)
Colm O hEigeartaigh created HBASE-17944:
---

 Summary: ClassSize fails with Java 9
 Key: HBASE-17944
 URL: https://issues.apache.org/jira/browse/HBASE-17944
 Project: HBase
  Issue Type: Bug
Reporter: Colm O hEigeartaigh
 Fix For: 2.0.0, 1.1.10, 1.2.6, 1.3.2


ClassSize fails when run with Java 9. The static block assumes that the Java 
version string contains ".", which is not necessarily the case with Java 9:

Caused by: java.lang.RuntimeException: Unexpected version format: 9-ea
at org.apache.hadoop.hbase.util.ClassSize.<clinit>(ClassSize.java:119)
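
A minimal sketch of the kind of parsing this implies (hypothetical helper, not 
necessarily what the attached patch does), assuming the goal is to stop 
requiring a "." in the version string:
{code}
// Hypothetical sketch: derive the major Java version without assuming the
// version string contains a '.' -- Java 9 early-access builds report "9-ea".
static int majorJavaVersion(String version) {
  int dash = version.indexOf('-');
  if (dash > 0) {
    version = version.substring(0, dash); // strip a pre-release suffix such as "-ea"
  }
  String[] parts = version.split("\\.");
  int first = Integer.parseInt(parts[0]);
  // "1.8.0_121" -> 8; "9" or "9.0.1" -> 9.
  return first == 1 ? Integer.parseInt(parts[1].split("_")[0]) : first;
}
{code}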






[jira] [Updated] (HBASE-17906) When a huge amount of data writing to hbase through thrift2, there will be a deadlock error.

2017-04-20 Thread Albert Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Albert Lee updated HBASE-17906:
---
Attachment: HBASE-17906.branch-0.98.003.patch

Added a unit test for this patch.

> When a huge amount of data writing to hbase through thrift2, there will be a 
> deadlock error.
> 
>
> Key: HBASE-17906
> URL: https://issues.apache.org/jira/browse/HBASE-17906
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
> Environment: hadoop 2.5.2, hbase 0.98.20 jdk1.8.0_77
>Reporter: Albert Lee
>  Labels: patch
> Fix For: 0.98.21, 0.98.22, 0.98.23, 0.98.24, 0.98.25
>
> Attachments: HBASE-17906.branch-0.98.001.patch, 
> HBASE-17906.branch-0.98.002.patch, HBASE-17906.branch-0.98.003.patch, 
> HBASE-17906.master.001.patch, HBASE-17906.master.002.patch, 
> hbase-thrift-17906-ForRecurr.zip
>
>
> When a huge amount of data is written to HBase through thrift2, a deadlock 
> error occurs.





[jira] [Commented] (HBASE-15583) Any HTableDescriptor we give out should be immutable

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976949#comment-15976949
 ] 

Hadoop QA commented on HBASE-15583:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 5s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 5s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 21 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
23s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 1s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patched modules with no Java source: . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 17s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 109m 33s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 5s 
{color} | {color:green} hbase-rsgroup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 42s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 36s 
{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 114m 58s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 1m 46s 
{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 307m 

[jira] [Commented] (HBASE-16851) User-facing documentation for the In-Memory Compaction feature

2017-04-20 Thread Edward Bortnikov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976828#comment-15976828
 ] 

Edward Bortnikov commented on HBASE-16851:
--

Reference guide draft available at 
https://docs.google.com/document/d/1Xi1jh_30NKnjE3wSR-XF5JQixtyT6H_CdFTaVi78LKw/edit.
Please review. Thanks.

> User-facing documentation for the In-Memory Compaction feature
> --
>
> Key: HBASE-16851
> URL: https://issues.apache.org/jira/browse/HBASE-16851
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: Edward Bortnikov
>Assignee: Edward Bortnikov
> Attachments: Accordion HBase In-Memory Compaction - Nov 1 .pdf, 
> Accordion HBase In-Memory Compaction - Nov 23.pdf, Accordion_ HBase In-Memory 
> Compaction - Oct 27.pdf, HBaseAcceleratedHbaseConf-final.pptx
>
>






[jira] [Commented] (HBASE-17924) Consider sorting the row order when processing multi() ops before taking rowlocks

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976766#comment-15976766
 ] 

Hadoop QA commented on HBASE-17924:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
6s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 32s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 5m 2s 
{color} | {color:red} hbase-server generated 2 new + 0 unchanged - 0 fixed = 2 
total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 173m 26s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
34s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 257m 24s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hbase-server |
|  |  Dead store to mutationActionMap in 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(ClientProtos$RegionActionResult$Builder,
 Region, OperationQuota, List, CellScanner)  At 
RSRpcServices.java:org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(ClientProtos$RegionActionResult$Builder,
 Region, OperationQuota, List, CellScanner)  At RSRpcServices.java:[line 886] |
|  |  org.apache.hadoop.hbase.wal.WALSplitter$MutationReplay defines equals and 
uses Object.hashCode()  At WALSplitter.java:Object.hashCode()  At 
WALSplitter.java:[lines 2305-2308] |
| Failed junit tests | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
| Timed out junit tests | 
org.apache.hadoop.hbase.master.cleaner.TestSnapshotFromMaster |
|   | org.apache.hadoop.hbase.master.TestWarmupRegion |
|   | org.apache.hadoop.hbase.filter.TestFuzzyRowFilterEndToEnd |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.03.0-ce Server=17.03.0-ce Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864224/HBASE-17924.v3.patch |
| JIRA Issue | HBASE-17924 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  

[jira] [Commented] (HBASE-17864) Implement async snapshot/cloneSnapshot/restoreSnapshot methods

2017-04-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976760#comment-15976760
 ] 

Hadoop QA commented on HBASE-17864:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 7s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 31s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
4s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
33m 23s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 25s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 120m 24s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
56s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 182m 59s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestBlockEvictionFromClient |
|   | hadoop.hbase.snapshot.TestMobExportSnapshot |
|   | hadoop.hbase.snapshot.TestExportSnapshot |
|   | hadoop.hbase.snapshot.TestMobSecureExportSnapshot |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12864238/HBASE-17864.v3.patch |
| JIRA Issue | HBASE-17864 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 13f890e4a7c6 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 40cc666 |
| Default Java | 1.8.0_121 |
| findbugs | 

[jira] [Updated] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Hsin-Ying Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hsin-Ying Lee updated HBASE-17941:
--
Status: Patch Available  (was: Open)

> CellArrayMap#getCell may throw IndexOutOfBoundsException
> 
>
> Key: HBASE-17941
> URL: https://issues.apache.org/jira/browse/HBASE-17941
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Chia-Ping Tsai
>Assignee: Hsin-Ying Lee
>Priority: Minor
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-17941.v0.patch
>
>
> {noformat}
>   @Override
>   protected Cell getCell(int i) {
> if( (i < minCellIdx) && (i >= maxCellIdx) ) return null;
> return block[i];
>   }
> {noformat}
> && -> ||
> We check the index bounds before calling this method, so the exception 
> doesn't occur on current trunk.
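> 
> For clarity, applying the suggested "&& -> ||" change would give:
> {code}
>   @Override
>   protected Cell getCell(int i) {
>     // Reject any index outside the valid half-open range [minCellIdx, maxCellIdx).
>     if ((i < minCellIdx) || (i >= maxCellIdx)) return null;
>     return block[i];
>   }
> {code}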





[jira] [Updated] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Hsin-Ying Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hsin-Ying Lee updated HBASE-17941:
--
Attachment: HBASE-17941.v0.patch



[jira] [Assigned] (HBASE-17941) CellArrayMap#getCell may throw IndexOutOfBoundsException

2017-04-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reassigned HBASE-17941:
--

Assignee: Hsin-Ying Lee



[jira] [Commented] (HBASE-17940) HMaster can not start due to Jasper related classes conflict

2017-04-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976572#comment-15976572
 ] 

Hudson commented on HBASE-17940:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2892 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2892/])
HBASE-17940 HMaster can not start due to Jasper related classes conflict 
(zhangduo: rev 0953c144700c18b16f0d34de5ccec90e7c9cef3d)
* (edit) pom.xml
* (edit) hbase-server/pom.xml


> HMaster can not start due to Jasper related classes conflict
> 
>
> Key: HBASE-17940
> URL: https://issues.apache.org/jira/browse/HBASE-17940
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, pom
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: HBASE-17940.patch
>
>
> I got this
> {noformat}
> java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMaster.
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2692)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:235)
> at 
> org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at 
> org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
> at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2703)
> Caused by: java.lang.NoSuchFieldError: IS_SECURITY_ENABLED
> at 
> org.apache.jasper.compiler.JspRuntimeContext.<clinit>(JspRuntimeContext.java:194)
> at org.apache.jasper.servlet.JspServlet.init(JspServlet.java:159)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initServlet(ServletHolder.java:640)
> at 
> org.eclipse.jetty.servlet.ServletHolder.initialize(ServletHolder.java:419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:875)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:348)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1379)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1341)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:772)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:261)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:517)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
> at org.eclipse.jetty.server.Server.start(Server.java:405)
> at 
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:106)
> at 
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
> at org.eclipse.jetty.server.Server.doStart(Server.java:372)
> at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
> at org.apache.hadoop.hbase.http.HttpServer.start(HttpServer.java:969)
> at org.apache.hadoop.hbase.http.InfoServer.start(InfoServer.java:100)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.putUpWebUI(HRegionServer.java:1887)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:620)
> at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:461)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
> Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at 
> org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2685)
> ... 5 more
> {noformat}
> The problem is that we have the same classes in two jars, 
> javax.servlet.jsp-2.3.2.jar and jasper-compiler-5.5.23.jar, such as 
> org.apache.jasper.Constants and 

[jira] [Commented] (HBASE-16438) Create a cell type so that chunk id is embedded in it

2017-04-20 Thread Anastasia Braginsky (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976548#comment-15976548
 ] 

Anastasia Braginsky commented on HBASE-16438:
-

[~ram_krish], thank you very much!

> Create a cell type so that chunk id is embedded in it
> -
>
> Key: HBASE-16438
> URL: https://issues.apache.org/jira/browse/HBASE-16438
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: 
> HBASE-16438_10_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_11_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_12_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_13_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_13_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_14_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_14_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_1.patch, HBASE-16438_3_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_4_ChunkCreatorwrappingChunkPool.patch, 
> HBASE-16438_8_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_9_ChunkCreatorwrappingChunkPool_withchunkRef.patch, 
> HBASE-16438_addendum.patch, HBASE-16438.patch, 
> MemstoreChunkCell_memstoreChunkCreator_oldversion.patch, 
> MemstoreChunkCell_trunk.patch
>
>
> For CellChunkMap we may need a cell such that the id of the chunk out of 
> which it was created is embedded in it, so that when doing flattening we can 
> use the chunk id as metadata. More details will follow once the initial 
> tasks are completed.
> Why we need to embed the chunk id in the Cell is described by [~anastas] in 
> this remark over in the parent issue 
> https://issues.apache.org/jira/browse/HBASE-14921?focusedCommentId=15244119&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15244119





[jira] [Updated] (HBASE-13288) Fix naming of parameter in Delete constructor

2017-04-20 Thread Chia-Ping Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai updated HBASE-13288:
---
Affects Version/s: (was: 1.0.0)
   2.0.0

> Fix naming of parameter in Delete constructor
> -
>
> Key: HBASE-13288
> URL: https://issues.apache.org/jira/browse/HBASE-13288
> Project: HBase
>  Issue Type: Bug
>  Components: API
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Ashish Singhi
>Priority: Trivial
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: HBASE-13288.patch
>
>
> We have these two variants:
> {code}
> Delete(byte[] row, long timestamp)
> Delete(final byte[] rowArray, final int rowOffset, final int rowLength, long 
> ts)
> {code}
> Both should use {{timestamp}} as the parameter name, not this mix.
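> 
> For example, with the rename applied the two variants would read:
> {code}
> Delete(byte[] row, long timestamp)
> Delete(final byte[] rowArray, final int rowOffset, final int rowLength,
>     final long timestamp)
> {code}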




