[jira] [Updated] (HBASE-13659) Improve test run time for TestMetaWithReplicas class

2015-05-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13659:
--
Affects Version/s: 1.1.0
Fix Version/s: 1.1.1
   1.2.0
   2.0.0

 Improve test run time for TestMetaWithReplicas class
 

 Key: HBASE-13659
 URL: https://issues.apache.org/jira/browse/HBASE-13659
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 1.1.0
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13659.patch


 In TestMetaWithReplicas, the mini cluster is started and shut down at the 
 start and end of every test in the class, which makes the test class take 
 more time to complete. Instead we can start and stop the mini cluster only 
 once per class.
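The cost difference can be sketched in plain Java (a simulation only; the real class uses JUnit's @BeforeClass/@AfterClass hooks with HBaseTestingUtility, and the names below are hypothetical stand-ins):

```java
import java.util.List;

// Sketch: moving an expensive setup from per-test to per-class lifecycle.
// startMiniCluster/shutdownMiniCluster stand in for the real mini cluster.
class PerClassSetupDemo {
    static int clusterStarts = 0;

    static void startMiniCluster() { clusterStarts++; } // expensive in reality
    static void shutdownMiniCluster() { }               // expensive in reality

    // Per-test lifecycle: what TestMetaWithReplicas did before the patch.
    static int runSuitePerTest(List<Runnable> tests) {
        int before = clusterStarts;
        for (Runnable t : tests) {
            startMiniCluster();    // like @Before
            t.run();
            shutdownMiniCluster(); // like @After
        }
        return clusterStarts - before;
    }

    // Per-class lifecycle: what the patch moves to @BeforeClass/@AfterClass.
    static int runSuitePerClass(List<Runnable> tests) {
        int before = clusterStarts;
        startMiniCluster();        // like @BeforeClass
        for (Runnable t : tests) {
            t.run();
        }
        shutdownMiniCluster();     // like @AfterClass
        return clusterStarts - before;
    }
}
```

With N tests the per-test variant pays for N cluster starts, the per-class variant for exactly one.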



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13704) Hbase throws OutOfOrderScannerNextException when MultiRowRangeFilter is used

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551899#comment-14551899
 ] 

Hudson commented on HBASE-13704:


SUCCESS: Integrated in HBase-1.2 #89 (See 
[https://builds.apache.org/job/HBase-1.2/89/])
HBASE-13704 Hbase throws OutOfOrderScannerNextException when 
MultiRowRangeFilter is used (Aleksandr Maksymenko) (tedyu: rev 
181ec60510b502fc3fff890e8c9236a95a80832f)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultiRowRangeFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java


 Hbase throws OutOfOrderScannerNextException when MultiRowRangeFilter is used
 

 Key: HBASE-13704
 URL: https://issues.apache.org/jira/browse/HBASE-13704
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Aleksandr Maksymenko
Assignee: Aleksandr Maksymenko
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: 13704-v1.txt


 When using MultiRowRangeFilter with ranges so close to each other that 
 there are no rows between them, an OutOfOrderScannerNextException is 
 thrown.
 In the filterRowKey method, when the range is switched to the next range, 
 currentReturnCode is set to SEEK_NEXT_USING_HINT (MultiRowRangeFilter line 118 in 
 v1.1.0). But if the new range already contains this row, then we should include 
 this row, not seek for another one.
 Replacing line 118 with this code seems to work fine:
 {code}
 if (range.contains(buffer, offset, length)) {
   currentReturnCode = ReturnCode.INCLUDE;
 } else {
   currentReturnCode = ReturnCode.SEEK_NEXT_USING_HINT;
 }
 {code}
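A minimal model of the range-switch decision (simplified string keys, not the actual filter code; names are illustrative): when the scan steps into the next range, the current row may already lie inside it, in which case it must be included rather than skipped via a seek hint.

```java
// Sketch: return code when the scan has moved past the current range.
// Simplified stand-in for the decision in MultiRowRangeFilter.filterRowKey.
class RangeSwitch {
    enum ReturnCode { INCLUDE, SEEK_NEXT_USING_HINT }

    // Each range is [start, stop), over simple string keys.
    static ReturnCode onRangeSwitch(String row, String nextStart, String nextStop) {
        boolean contains = row.compareTo(nextStart) >= 0
                        && row.compareTo(nextStop) < 0;
        // The HBASE-13704 fix: include the row if the new range already
        // contains it; only emit the seek hint when it does not.
        return contains ? ReturnCode.INCLUDE : ReturnCode.SEEK_NEXT_USING_HINT;
    }
}
```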



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13693) [HBase MOB] Mob files are not encrypting.

2015-05-20 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551877#comment-14551877
 ] 

Jingcheng Du commented on HBASE-13693:
--

Sure, I can open another JIRA to address this.

 [HBase MOB] Mob files are not encrypting.
 -

 Key: HBASE-13693
 URL: https://issues.apache.org/jira/browse/HBASE-13693
 Project: HBase
  Issue Type: Bug
  Components: mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Ashutosh Jindal
 Fix For: hbase-11339

 Attachments: HBASE-13693-hbase-11339-v2.patch, 
 HBASE-13693-hbase-11339-v3.patch, HBASE-13693-hbase-11339.patch


 Mob HFiles are not encrypted.
 Steps to reproduce:
 ===
 1. Create a table with a mob-enabled column family and enable AES 
 encryption for that column family.
 2. Insert mob data into the table.
 3. Flush the mob table.
 4. Check whether the hfiles for the mob data are created.
 5. Check whether the hfiles in hdfs are encrypted, using the hfile tool.
 {code}
 hfile tool output for mob reference hfile meta
 Block index size as per heapsize: 392
 reader=/hbase/data/default/mobTest/1587e00c3e257969c48d9872994ce57c/mobcf/8c33ab9e8201449e9ac709eb9e4263d6,
 Trailer:
 fileinfoOffset=527,
 loadOnOpenDataOffset=353,
 dataIndexCount=1,
 metaIndexCount=0,
 totalUncomressedBytes=5941,
 entryCount=9,
 compressionCodec=GZ,
 uncompressedDataIndexSize=34,
 numDataIndexLevels=1,
 firstDataBlockOffset=0,
 lastDataBlockOffset=0,
 comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
 encryptionKey=PRESENT,
 majorVersion=3,
 minorVersion=0
 {code}
 {code}
 hfile tool output for mob hfile meta
 Block index size as per heapsize: 872
 reader=/hbase/mobdir/data/default/mobTest/46844d8b9f699e175a4d7bd57848c576/mobcf/d41d8cd98f00b204e9800998ecf8427e20150512bf18fa62a98c40d7bd6e810f790c6291,
 Trailer:
 fileinfoOffset=1018180,
 loadOnOpenDataOffset=1017959,
 dataIndexCount=9,
 metaIndexCount=0,
 totalUncomressedBytes=1552619,
 entryCount=9,
 compressionCodec=GZ,
 uncompressedDataIndexSize=266,
 numDataIndexLevels=1,
 firstDataBlockOffset=0,
 lastDataBlockOffset=904852,
 comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
 encryptionKey=NONE,
 majorVersion=3,
 minorVersion=0
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13098) HBase Connection Control

2015-05-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi resolved HBASE-13098.
---
Resolution: Not A Problem

The number of connections/requests to a table/namespace can be controlled 
using quotas.

 HBase Connection Control
 

 Key: HBASE-13098
 URL: https://issues.apache.org/jira/browse/HBASE-13098
 Project: HBase
  Issue Type: New Feature
  Components: security
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Attachments: HBASE-13098.patch, HBase Connection Control.pdf


 It is desirable to limit the number of client connections permitted to the 
 HBase server by controlling certain system variables/parameters. Too many 
 connections to the HBase server imply too many queries and MR jobs running 
 against HBase. This can slow down the performance of the system and lead to 
 denial of service, so such connections need to be controlled. Using too many 
 connections may just cause thrashing rather than get more useful work done.
 This is partly inspired by 
 http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/#.VO2JXXyUe9y



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13658) Improve the test run time for TestAccessController class

2015-05-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13658:
--
Fix Version/s: 1.2.0

 Improve the test run time for TestAccessController class
 

 Key: HBASE-13658
 URL: https://issues.apache.org/jira/browse/HBASE-13658
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.98.12
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: 13658.patch, HBASE-13658-v1.patch, HBASE-13658.patch


 Improve the test run time for TestAccessController class



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13719) Asynchronous scanner -- cache size-in-bytes bug fix

2015-05-20 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13719:
--
Attachment: HBASE-13071-trunk-bug-fix.patch

 Asynchronous scanner -- cache size-in-bytes bug fix
 ---

 Key: HBASE-13719
 URL: https://issues.apache.org/jira/browse/HBASE-13719
 Project: HBase
  Issue Type: Bug
Reporter: Eshcar Hillel
 Attachments: HBASE-13071-trunk-bug-fix.patch


 HBase Streaming Scan is a feature recently added to trunk.
 In this feature, an asynchronous scanner pre-loads data to the cache based on 
 its size (both row count and size in bytes). In one of the locations where 
 the scanner polls an item from the cache, the variable holding the estimated 
 byte size of the cache is not updated. This affects the decision of when to 
 load the next batch of data.
 A bug fix patch is attached - it comprises only local changes to the 
 ClientAsyncPrefetchScanner.java file.
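The accounting bug can be illustrated with a minimal sketch (hypothetical names, not the actual ClientAsyncPrefetchScanner code): every poll from the prefetch cache must also decrement the estimated byte count, otherwise the prefetcher believes the cache is fuller than it is and delays loading the next batch.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a prefetch cache that tracks its estimated size in bytes.
// Hypothetical stand-in for the accounting inside the async scanner.
class PrefetchCache {
    private final Deque<byte[]> cache = new ArrayDeque<>();
    private long estimatedSizeBytes = 0;

    void add(byte[] row) {
        cache.addLast(row);
        estimatedSizeBytes += row.length;
    }

    byte[] poll() {
        byte[] row = cache.pollFirst();
        if (row != null) {
            // The bug being fixed: omitting this decrement leaves
            // estimatedSizeBytes stale, so the prefetcher thinks the
            // cache is still full and postpones the next batch.
            estimatedSizeBytes -= row.length;
        }
        return row;
    }

    boolean shouldLoadNextBatch(long thresholdBytes) {
        return estimatedSizeBytes < thresholdBytes;
    }

    long estimatedSizeBytes() { return estimatedSizeBytes; }
}
```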



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13720) Mob files are not encrypting in mob compaction and Sweeper

2015-05-20 Thread Jingcheng Du (JIRA)
Jingcheng Du created HBASE-13720:


 Summary: Mob files are not encrypting in mob compaction and Sweeper
 Key: HBASE-13720
 URL: https://issues.apache.org/jira/browse/HBASE-13720
 Project: HBase
  Issue Type: Sub-task
Reporter: Jingcheng Du
Assignee: Jingcheng Du


The mob files are not encrypted. Part of the issue was fixed in HBASE-13693. 
We still have more places that need encryption too, for example the writer 
used in mob file compaction and the Sweeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13686) Fail to limit rate in RateLimiter

2015-05-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551871#comment-14551871
 ] 

Ashish Singhi commented on HBASE-13686:
---

I will fix the checkstyle warnings in the next patch. Waiting for feedback 
on the patch.

 Fail to limit rate in RateLimiter
 -

 Key: HBASE-13686
 URL: https://issues.apache.org/jira/browse/HBASE-13686
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Guanghao Zhang
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13686.patch


 While using the patch from HBASE-11598, I found that RateLimiter can't 
 limit the rate correctly.
 {code}
  /**
   * given the time interval, are there enough available resources to allow execution?
   * @param now the current timestamp
   * @param lastTs the timestamp of the last update
   * @param amount the number of required resources
   * @return true if there are enough available resources, otherwise false
   */
  public synchronized boolean canExecute(final long now, final long lastTs, final long amount) {
    return avail >= amount ? true : refill(now, lastTs) >= amount;
  }
 {code}
 When avail >= amount, avail is not refilled. But by the next call to 
 canExecute, lastTs may have been updated, so avail loses some of the time it 
 could have used to refill. Even if we use a smaller rate than the limit, 
 canExecute can return false.
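The intended behavior is easier to see in a self-contained sketch (simplified names, not the actual HBase RateLimiter): if the bucket is refilled unconditionally on every check, no elapsed time is ever lost, regardless of whether the previous check succeeded.

```java
// Sketch of a token-bucket limiter that credits refill on every check,
// so no elapsed time is lost between calls. Simplified; not the HBase class.
class SimpleRateLimiter {
    private final long limit;      // tokens per second, also the bucket cap
    private long avail;
    private long lastRefillMs;

    SimpleRateLimiter(long limit, long nowMs) {
        this.limit = limit;
        this.avail = limit;
        this.lastRefillMs = nowMs;
    }

    private void refill(long nowMs) {
        long delta = (nowMs - lastRefillMs) * limit / 1000;
        if (delta > 0) {
            avail = Math.min(limit, avail + delta);
            lastRefillMs = nowMs;
        }
    }

    synchronized boolean canExecute(long nowMs, long amount) {
        refill(nowMs);             // refill unconditionally, then test
        return avail >= amount;
    }

    synchronized void consume(long amount) {
        avail -= amount;
    }
}
```

Because `lastRefillMs` only advances when tokens are actually credited, a check that happens to succeed on current availability cannot silently discard accrual time.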



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13717) TestBoundedRegionGroupingProvider#setMembershipDedups need to set HDFS diretory for WAL

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551881#comment-14551881
 ] 

Hudson commented on HBASE-13717:


SUCCESS: Integrated in HBase-TRUNK #6496 (See 
[https://builds.apache.org/job/HBase-TRUNK/6496/])
HBASE-13717 TestBoundedRegionGroupingProvider#setMembershipDedups need to set 
HDFS diretory for WAL (Stephen Yuan Jiang) (enis: rev 
0ef4a1088224b6fa3a2c85ef1d4efba6b7b48673)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/wal/TestBoundedRegionGroupingProvider.java


 TestBoundedRegionGroupingProvider#setMembershipDedups need to set HDFS 
 diretory for WAL
 ---

 Key: HBASE-13717
 URL: https://issues.apache.org/jira/browse/HBASE-13717
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13717.patch


 org.apache.hadoop.hbase.wal.TestBoundedRegionGroupingProvider#setMembershipDedups()
  fails during testing in windows:
 {noformat}
 java.lang.IllegalArgumentException: Pathname 
 /C:/tmp/hbase-myuser/hbase/WALs/setMembershipDedups from 
 hdfs://127.0.0.1:61737/C:/tmp/hbase-myuser/hbase/WALs/setMembershipDedups is 
 not a valid DFS filename.
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getPathName(DistributedFileSystem.java:197)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.access$000(DistributedFileSystem.java:106)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301)
   at 
 org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
   at 
 org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301)
   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424)
   at 
 org.apache.hadoop.hbase.regionserver.wal.FSHLog.init(FSHLog.java:477)
   at 
 org.apache.hadoop.hbase.wal.DefaultWALProvider.init(DefaultWALProvider.java:97)
   at 
 org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)
   at 
 org.apache.hadoop.hbase.wal.BoundedRegionGroupingProvider.init(BoundedRegionGroupingProvider.java:56)
   at 
 org.apache.hadoop.hbase.wal.WALFactory.getProvider(WALFactory.java:147)
   at org.apache.hadoop.hbase.wal.WALFactory.init(WALFactory.java:179)
   at 
 org.apache.hadoop.hbase.wal.TestBoundedRegionGroupingProvider.setMembershipDedups(TestBoundedRegionGroupingProvider.java:161)
 {noformat}
 This is due to using the local file system path as root directory.  We should 
 set the HDFS directory as the root directory.
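The failure is easy to model in miniature (plain Java; a hypothetical simplification of the rule enforced by DistributedFileSystem.getPathName): a DFS pathname may not contain a drive-letter colon, which is exactly what a Windows local directory injects when it is used as the root directory.

```java
// Sketch: DFS rejects path components containing ':', which is why a
// Windows local directory like /C:/tmp/... is "not a valid DFS filename".
class DfsNameCheck {
    // Hypothetical simplification of the HDFS rule: absolute path, no ':'.
    static boolean isValidDfsPath(String path) {
        return path.startsWith("/") && !path.contains(":");
    }
}
```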



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551923#comment-14551923
 ] 

zhangduo commented on HBASE-13716:
--

There is an {{HdfsUtils.isHealthy(URI)}} method in hdfs; it has been 
available since at least hadoop-2.2.0. Could we make use of this method 
instead of calling {{DistributedFileSystem.setSafeMode}}?

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1


 The FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13658) Improve the test run time for TestAccessController class

2015-05-20 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-13658:
--
Affects Version/s: 0.98.12
Fix Version/s: 1.1.1
   1.0.2
   0.98.13
   2.0.0

 Improve the test run time for TestAccessController class
 

 Key: HBASE-13658
 URL: https://issues.apache.org/jira/browse/HBASE-13658
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.98.12
Reporter: Ashish Singhi
Assignee: Ashish Singhi
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.1.1

 Attachments: 13658.patch, HBASE-13658-v1.patch, HBASE-13658.patch


 Improve the test run time for TestAccessController class



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13719) Asynchronous scanner -- cache size-in-bytes bug fix

2015-05-20 Thread Eshcar Hillel (JIRA)
Eshcar Hillel created HBASE-13719:
-

 Summary: Asynchronous scanner -- cache size-in-bytes bug fix
 Key: HBASE-13719
 URL: https://issues.apache.org/jira/browse/HBASE-13719
 Project: HBase
  Issue Type: Bug
Reporter: Eshcar Hillel


HBase Streaming Scan is a feature recently added to trunk.
In this feature, an asynchronous scanner pre-loads data to the cache based on 
its size (both row count and size in bytes). In one of the locations where the 
scanner polls an item from the cache, the variable holding the estimated byte 
size of the cache is not updated. This affects the decision of when to load the 
next batch of data.

A bug fix patch is attached - it comprises only local changes to the 
ClientAsyncPrefetchScanner.java file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13670) [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more day after they are expired

2015-05-20 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-13670:
-
Component/s: documentation
Description: Currently the ExpiredMobFileCleaner cleans expired mob files 
according to the date in the mob file name. The minimum unit of that date is 
a day, so ExpiredMobFileCleaner only cleans expired mob files one day after 
they are expired. We need to document this.  (was: 
ExpiredMobFileCleaner tool is not deleting the expired mob data.

steps to reproduce:
===
1. Create the table with one column family as mob and set a very small TTL 
for the mob column family.
{code}
hbase(main):020:0> describe 'mobtab'
Table mobtab is ENABLED
mobtab
COLUMN FAMILIES DESCRIPTION
{NAME => 'mobcf', IS_MOB => 'true', MOB_THRESHOLD => '102400', VERSIONS => '1', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => '60 SECONDS (1 MINUTE)', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', IN_MEMORY => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
{NAME => 'norcf', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', COMPRESSION => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
2 row(s) in 0.0650 seconds
{code}
2. Then insert mob data into the table (mobcf), and normal data into the 
other column family (norcf).
3. Flush the table.
4. Scan the table before the TTL expires (able to fetch the data).
5. Scan the table after the TTL has expired; as a result the mob data should 
not display, and the mob file should still exist in hdfs.
6. Run the ExpiredMobFileCleaner tool manually to clean the expired mob 
data:
{code}
./hbase org.apache.hadoop.hbase.mob.ExpiredMobFileCleaner mobtab mobcf
{code}

{code}
client log_message:

2015-05-09 18:03:37,731 INFO  [main] mob.ExpiredMobFileCleaner: Cleaning the 
expired MOB files of mobcf in mobtab
2015-05-09 18:03:37,734 INFO  [main] hfile.CacheConfig: CacheConfig:disabled
2015-05-09 18:03:37,738 INFO  [main] mob.MobUtils: MOB HFiles older than 8 May 
2015 18:30:00 GMT will be deleted!
2015-05-09 18:03:37,971 DEBUG [main] mob.MobUtils: Checking file 
d41d8cd98f00b204e9800998ecf8427e20150509c9108e1a9252418abbfd54323922c518
2015-05-09 18:03:37,971 INFO  [main] mob.MobUtils: 0 expired mob files are 
deleted
2015-05-09 18:03:37,971 INFO  [main] 
client.ConnectionManager$HConnectionImplementation: Closing master protocol: 
MasterService
{code}

*problem:*

If we run the ExpiredMobFileCleaner tool manually, it does not delete the 
expired mob data. For deletion it considers the default time period 
hbase.master.mob.ttl.cleaner.period, but that time period should only be 
considered by the ExpiredMobFileCleanerChore.

{code}
conf:

<property>
  <name>hbase.master.mob.ttl.cleaner.period</name>
  <value>8640</value>
  <source>hbase-default.xml</source>
</property>

{code})
 Issue Type: Improvement  (was: Bug)
Summary: [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later 
for one more day after they are expired  (was: [HBase MOB] 
ExpiredMobFileCleaner tool is not deleting the expired mob data.)

 [HBase MOB] ExpiredMobFileCleaner tool deletes mob files later for one more 
 day after they are expired
 --

 Key: HBASE-13670
 URL: https://issues.apache.org/jira/browse/HBASE-13670
 Project: HBase
  Issue Type: Improvement
  Components: documentation, mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
 Fix For: hbase-11339


 Currently the ExpiredMobFileCleaner cleans expired mob files according to 
 the date in the mob file name. The minimum unit of that date is a day, so 
 ExpiredMobFileCleaner only cleans expired mob files one day after they are 
 expired. We need to document this.
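The day-granularity behavior can be sketched as follows (a hypothetical helper; the real cleaner's date handling may differ in detail). Because the cleaner only knows the day a mob file was written, it must treat everything in the file as written at the end of that day, so deletion can lag actual expiry by up to one day.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

// Sketch: decide whether a mob file is cleanable when the cleaner only
// knows the file's date at day granularity (hypothetical helper).
class MobFileExpiry {
    // fileDate: the day encoded in the mob file name.
    // now: current time; ttlSeconds: the column family TTL.
    static boolean cleanable(LocalDate fileDate, LocalDateTime now, long ttlSeconds) {
        // Everything written on fileDate is treated as written at the *end*
        // of that day, so the file survives until the whole day has expired.
        LocalDateTime endOfDay = fileDate.plusDays(1).atStartOfDay();
        return endOfDay.plusSeconds(ttlSeconds).isBefore(now);
    }
}
```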



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13719) Asynchronous scanner -- cache size-in-bytes bug fix

2015-05-20 Thread Eshcar Hillel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eshcar Hillel updated HBASE-13719:
--
Status: Patch Available  (was: Open)

 Asynchronous scanner -- cache size-in-bytes bug fix
 ---

 Key: HBASE-13719
 URL: https://issues.apache.org/jira/browse/HBASE-13719
 Project: HBase
  Issue Type: Bug
Reporter: Eshcar Hillel
 Attachments: HBASE-13071-trunk-bug-fix.patch


 HBase Streaming Scan is a feature recently added to trunk.
 In this feature, an asynchronous scanner pre-loads data to the cache based on 
 its size (both row count and size in bytes). In one of the locations where 
 the scanner polls an item from the cache, the variable holding the estimated 
 byte size of the cache is not updated. This affects the decision of when to 
 load the next batch of data.
 A bug fix patch is attached - it comprises only local changes to the 
 ClientAsyncPrefetchScanner.java file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13693) [HBase MOB] Mob files are not encrypting.

2015-05-20 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551874#comment-14551874
 ] 

Jonathan Hsieh commented on HBASE-13693:


If there is more to be added, let's file another jira to follow up and to add 
testing for it.  

 [HBase MOB] Mob files are not encrypting.
 -

 Key: HBASE-13693
 URL: https://issues.apache.org/jira/browse/HBASE-13693
 Project: HBase
  Issue Type: Bug
  Components: mob
Affects Versions: hbase-11339
Reporter: Y. SREENIVASULU REDDY
Assignee: Ashutosh Jindal
 Fix For: hbase-11339

 Attachments: HBASE-13693-hbase-11339-v2.patch, 
 HBASE-13693-hbase-11339-v3.patch, HBASE-13693-hbase-11339.patch


 Mob HFiles are not encrypted.
 Steps to reproduce:
 ===
 1. Create a table with a mob-enabled column family and enable AES 
 encryption for that column family.
 2. Insert mob data into the table.
 3. Flush the mob table.
 4. Check whether the hfiles for the mob data are created.
 5. Check whether the hfiles in hdfs are encrypted, using the hfile tool.
 {code}
 hfile tool output for mob reference hfile meta
 Block index size as per heapsize: 392
 reader=/hbase/data/default/mobTest/1587e00c3e257969c48d9872994ce57c/mobcf/8c33ab9e8201449e9ac709eb9e4263d6,
 Trailer:
 fileinfoOffset=527,
 loadOnOpenDataOffset=353,
 dataIndexCount=1,
 metaIndexCount=0,
 totalUncomressedBytes=5941,
 entryCount=9,
 compressionCodec=GZ,
 uncompressedDataIndexSize=34,
 numDataIndexLevels=1,
 firstDataBlockOffset=0,
 lastDataBlockOffset=0,
 comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
 encryptionKey=PRESENT,
 majorVersion=3,
 minorVersion=0
 {code}
 {code}
 hfile tool output for mob hfile meta
 Block index size as per heapsize: 872
 reader=/hbase/mobdir/data/default/mobTest/46844d8b9f699e175a4d7bd57848c576/mobcf/d41d8cd98f00b204e9800998ecf8427e20150512bf18fa62a98c40d7bd6e810f790c6291,
 Trailer:
 fileinfoOffset=1018180,
 loadOnOpenDataOffset=1017959,
 dataIndexCount=9,
 metaIndexCount=0,
 totalUncomressedBytes=1552619,
 entryCount=9,
 compressionCodec=GZ,
 uncompressedDataIndexSize=266,
 numDataIndexLevels=1,
 firstDataBlockOffset=0,
 lastDataBlockOffset=904852,
 comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
 encryptionKey=NONE,
 majorVersion=3,
 minorVersion=0
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13720) Mob files are not encrypting in mob compaction and Sweeper

2015-05-20 Thread Jingcheng Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingcheng Du updated HBASE-13720:
-
Attachment: HBASE-13720.diff

Add the encryption context to the writer used in the mob file compactor and 
the sweeper, and supplement unit tests for them.
[~jmhsieh], [~anoopsamjohn], [~ram_krish], could you please review and 
comment? Thanks a lot!

 Mob files are not encrypting in mob compaction and Sweeper
 --

 Key: HBASE-13720
 URL: https://issues.apache.org/jira/browse/HBASE-13720
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBASE-13720.diff


 The mob files are not encrypted. Part of the issue was fixed in HBASE-13693. 
 We still have more places that need encryption too, for example the writer 
 used in mob file compaction and the Sweeper.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13375) Provide HBase superuser higher priority over other users in the RPC handling

2015-05-20 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552093#comment-14552093
 ] 

Mikhail Antonov commented on HBASE-13375:
-

Speaking of the proper place for these 2 methods - I would agree that they may 
not be totally coherent with the User class scope, but from a practical 
standpoint I'm not sure I see any clearly better way. These methods were 
moved here from the AccessControlLists class, which is in the hbase-server 
module and isn't accessible from places like hbase-common. I'm somewhat 
reluctant to create a new AuthenticationUtil class just to hold 2 one-liner 
methods (also, getGroupName() for example basically produces a substring of 
another string - that's not really authentication functionality).

Speaking of the jira at a higher level - this one started as an optimization 
of RPC priority handling, then it was found that the way we retrieve the list 
of superusers isn't the best one (it was originally in VisibilityUtils), and 
then in HBASE-10619 it was pointed out that we have 4 or 5 places where we 
parse and cache this information and it'd be better to reimplement it to keep 
this information in one place (the User class was proposed). So I made these 
changes here, and I think HBASE-10619 is now blocked (?) waiting for changes 
in that part of the API.

So I'm thinking maybe we could move on with the implementation in the latest 
patch on this jira, which would also unblock HBASE-10619, and open another 
jira if needed to discuss whether we should create an AuthenticationUtil 
class, and if yes, whether it should be a singleton, etc.?

 Provide HBase superuser higher priority over other users in the RPC handling
 

 Key: HBASE-13375
 URL: https://issues.apache.org/jira/browse/HBASE-13375
 Project: HBase
  Issue Type: Improvement
  Components: rpc
Reporter: Devaraj Das
Assignee: Mikhail Antonov
 Fix For: 1.1.1

 Attachments: HBASE-13375-v0.patch, HBASE-13375-v1.patch, 
 HBASE-13375-v1.patch, HBASE-13375-v1.patch, HBASE-13375-v2.patch, 
 HBASE-13375-v3.patch, HBASE-13375-v4.patch, HBASE-13375-v5.patch, 
 HBASE-13375-v6.patch, HBASE-13375-v7.patch


 HBASE-13351 annotates Master RPCs so that RegionServer RPCs are treated with 
 a higher priority compared to user RPCs (and they are handled by a separate 
 set of handlers, etc.). It may be good to stretch this to users too - hbase 
 superuser (configured via hbase.superuser) gets higher priority over other 
 users in the RPC handling. That way the superuser can always perform 
 administrative operations on the cluster even if all the normal priority 
 handlers are occupied (for example, we had a situation where all the master's 
 handlers were tied up with many simultaneous createTable RPC calls from 
 multiple users and the master wasn't able to perform any operations initiated 
 by the admin). (Discussed this some with [~enis] and [~elserj]).
 Does this make sense to others?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13375) Provide HBase superuser higher priority over other users in the RPC handling

2015-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552104#comment-14552104
 ] 

Anoop Sam John commented on HBASE-13375:


A static method on the public User class (loadSuperUsers(Configuration conf)) 
is also a worrying thing. Anyway, as I said, I am just -0. If others feel +1, 
please go ahead with the commit.

 Provide HBase superuser higher priority over other users in the RPC handling
 

 Key: HBASE-13375
 URL: https://issues.apache.org/jira/browse/HBASE-13375
 Project: HBase
  Issue Type: Improvement
  Components: rpc
Reporter: Devaraj Das
Assignee: Mikhail Antonov
 Fix For: 1.1.1

 Attachments: HBASE-13375-v0.patch, HBASE-13375-v1.patch, 
 HBASE-13375-v1.patch, HBASE-13375-v1.patch, HBASE-13375-v2.patch, 
 HBASE-13375-v3.patch, HBASE-13375-v4.patch, HBASE-13375-v5.patch, 
 HBASE-13375-v6.patch, HBASE-13375-v7.patch


 HBASE-13351 annotates Master RPCs so that RegionServer RPCs are treated with 
 a higher priority compared to user RPCs (and they are handled by a separate 
 set of handlers, etc.). It may be good to stretch this to users too - hbase 
 superuser (configured via hbase.superuser) gets higher priority over other 
 users in the RPC handling. That way the superuser can always perform 
 administrative operations on the cluster even if all the normal priority 
 handlers are occupied (for example, we had a situation where all the master's 
 handlers were tied up with many simultaneous createTable RPC calls from 
 multiple users and the master wasn't able to perform any operations initiated 
 by the admin). (Discussed this some with [~enis] and [~elserj]).
 Does this make sense to others?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13375) Provide HBase superuser higher priority over other users in the RPC handling

2015-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552166#comment-14552166
 ] 

Anoop Sam John commented on HBASE-13375:


This is the new class I had in mind:
{code}
class Authentication** { // Not getting a name

  // singleton instance
  private List<String> superUsers, superGroups;

  public static initialize(Configuration) {
    // initialize super user/group lists here
  }

  public static getInstance() {
    // throw RTE when called before initialize
  }

  public boolean isSuperUser(User) {
  }

  // Group related utils.
  public static isGroup()
  public static getGroupName()

}
{code}


 Provide HBase superuser higher priority over other users in the RPC handling
 

 Key: HBASE-13375
 URL: https://issues.apache.org/jira/browse/HBASE-13375
 Project: HBase
  Issue Type: Improvement
  Components: rpc
Reporter: Devaraj Das
Assignee: Mikhail Antonov
 Fix For: 1.1.1

 Attachments: HBASE-13375-v0.patch, HBASE-13375-v1.patch, 
 HBASE-13375-v1.patch, HBASE-13375-v1.patch, HBASE-13375-v2.patch, 
 HBASE-13375-v3.patch, HBASE-13375-v4.patch, HBASE-13375-v5.patch, 
 HBASE-13375-v6.patch, HBASE-13375-v7.patch


 HBASE-13351 annotates Master RPCs so that RegionServer RPCs are treated with 
 a higher priority compared to user RPCs (and they are handled by a separate 
 set of handlers, etc.). It may be good to stretch this to users too - hbase 
 superuser (configured via hbase.superuser) gets higher priority over other 
 users in the RPC handling. That way the superuser can always perform 
 administrative operations on the cluster even if all the normal priority 
 handlers are occupied (for example, we had a situation where all the master's 
 handlers were tied up with many simultaneous createTable RPC calls from 
 multiple users and the master wasn't able to perform any operations initiated 
 by the admin). (Discussed this some with [~enis] and [~elserj]).
 Does this make sense to others?





[jira] [Commented] (HBASE-13686) Fail to limit rate in RateLimiter

2015-05-20 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552138#comment-14552138
 ] 

Guanghao Zhang commented on HBASE-13686:


Assume there is a RateLimiter whose limit and avail are Long.MAX_VALUE. Then 
consume(1). The avail will be Long.MAX_VALUE - 1. After a long time, call 
canExecute(1) again. This will refill again. The delta will be much greater 
than 1, so available + delta will overflow and become negative.
bq. Why you think so ? the new avail value will be calculated based on this 
refillAmount in the canExecute which I thought is ok.
Yeah, your code is ok. But as the code below shows, if refill returns the new 
avail, the code in canExecute will be much simpler to understand. Different 
refill strategies can refill avail by themselves. canExecute() should not 
have to handle the special cases of a refillStrategy.
{code}
return refillStrategy.refill(limit, avail) >= amount;
{code}
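To make the refill-returns-avail idea concrete, here is a minimal, self-contained sketch. The class name, the proportional-refill formula, and the overflow clamp are illustrative assumptions, not the actual HBase RateLimiter code:

```java
/**
 * Illustrative sketch only: refill() computes the new avail itself and
 * clamps positive overflow, so canExecute() stays a one-liner.
 */
public class RateLimiterSketch {
  private final long limit;  // resources allowed per second
  private long avail;        // currently available resources
  private long lastTs;       // timestamp (ms) of the last refill

  public RateLimiterSketch(long limit) {
    this.limit = limit;
    this.avail = limit;
    this.lastTs = 0L;
  }

  /** Refill proportionally to elapsed time; never exceed limit. */
  private long refill(long now) {
    // Note: a real implementation must also guard this multiplication
    // against overflow for very large limits.
    long delta = (limit * (now - lastTs)) / 1000L;
    if (delta > 0) {
      lastTs = now;
      // avail + delta may overflow past Long.MAX_VALUE; clamp at limit first
      avail = (avail > limit - delta) ? limit : avail + delta;
    }
    return avail;
  }

  public synchronized boolean canExecute(long now, long amount) {
    return refill(now) >= amount;
  }

  public synchronized void consume(long amount) {
    avail -= amount;
  }

  public static void main(String[] args) {
    RateLimiterSketch rl = new RateLimiterSketch(10);  // 10 resources/sec
    rl.consume(10);                                    // drain the bucket
    System.out.println(rl.canExecute(100, 5));   // prints false: only 1 refilled
    System.out.println(rl.canExecute(1000, 5));  // prints true: enough time passed
  }
}
```

Because refill owns the clamping, every refill strategy can handle its own overflow case while canExecute stays a single comparison.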

 Fail to limit rate in RateLimiter
 -

 Key: HBASE-13686
 URL: https://issues.apache.org/jira/browse/HBASE-13686
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Guanghao Zhang
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13686.patch


 While using the patch in HBASE-11598, I found that RateLimiter can't 
 limit the rate correctly.
 {code} 
  /**
* given the time interval, are there enough available resources to allow 
 execution?
* @param now the current timestamp
* @param lastTs the timestamp of the last update
* @param amount the number of required resources
* @return true if there are enough available resources, otherwise false
*/
   public synchronized boolean canExecute(final long now, final long lastTs, 
 final long amount) {
 return avail = amount ? true : refill(now, lastTs) = amount;
   }
 {code}
 When avail >= amount, avail is not refilled. But by the next call to 
 canExecute, lastTs may have been updated. So avail will waste some of the 
 time it could have used to refill. Even if we use a smaller rate than the 
 limit, canExecute will return false. 





[jira] [Commented] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552177#comment-14552177
 ] 

Anoop Sam John commented on HBASE-13448:


Seeing the number of times the getKeyLength() call happens and the decoding of 
the keylength, I feel we have to cache that also. I can see this keylength 
decoding is required while getting lengths/offsets such as the qualifier's. The 
calls to these will increase as we do the other cleanup (mentioned by Stack). 
While we did profiling with the offheap work and patch, we saw this 
getKeyLength() in the hot path as well.
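As an illustration of the caching idea (not the HBASE-13448 patch itself; the class name and the serialization layout are assumed for the example), a cell wrapper can decode the key length from its backing array once and reuse it on every later call:

```java
/**
 * Illustrative sketch: decode the key length from the backing buffer once
 * and cache it, so repeated component-offset lookups skip re-decoding.
 */
public class CachedKeyLengthCell {
  private final byte[] bytes;
  private final int offset;
  private int keyLength = -1;  // -1 means "not decoded yet"

  public CachedKeyLengthCell(byte[] bytes, int offset) {
    this.bytes = bytes;
    this.offset = offset;
  }

  /** Big-endian int; a KeyValue-style serialization is assumed here. */
  private static int readInt(byte[] b, int off) {
    return ((b[off] & 0xff) << 24) | ((b[off + 1] & 0xff) << 16)
        | ((b[off + 2] & 0xff) << 8) | (b[off + 3] & 0xff);
  }

  public int getKeyLength() {
    if (keyLength == -1) {           // decode once, reuse afterwards
      keyLength = readInt(bytes, offset);
    }
    return keyLength;
  }

  public static void main(String[] args) {
    byte[] buf = {0, 0, 0, 42, 9, 9};  // length field 42, then payload
    CachedKeyLengthCell cell = new CachedKeyLengthCell(buf, 0);
    System.out.println(cell.getKeyLength());  // prints 42 (decodes)
    System.out.println(cell.getKeyLength());  // prints 42 (cached)
  }
}
```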

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: 13291-0.98.txt, HBASE-13448.patch, HBASE-13448_V2.patch, 
 HBASE-13448_V3.patch, gc.png, hits.png


 This can be extension to KeyValue and can be instantiated and used in read 
 path.





[jira] [Commented] (HBASE-13704) Hbase throws OutOfOrderScannerNextException when MultiRowRangeFilter is used

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552020#comment-14552020
 ] 

Hudson commented on HBASE-13704:


SUCCESS: Integrated in HBase-TRUNK #6497 (See 
[https://builds.apache.org/job/HBase-TRUNK/6497/])
HBASE-13704 Hbase throws OutOfOrderScannerNextException when 
MultiRowRangeFilter is used (Aleksandr Maksymenko) (tedyu: rev 
132573792dc4947f2d7846f9e8093c9227c189da)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestMultiRowRangeFilter.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/filter/MultiRowRangeFilter.java


 Hbase throws OutOfOrderScannerNextException when MultiRowRangeFilter is used
 

 Key: HBASE-13704
 URL: https://issues.apache.org/jira/browse/HBASE-13704
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 1.1.0
Reporter: Aleksandr Maksymenko
Assignee: Aleksandr Maksymenko
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: 13704-v1.txt


 When using the filter MultiRowRangeFilter with ranges so close to each other 
 that there are no rows between them, an OutOfOrderScannerNextException is 
 thrown.
 In the filterRowKey method, when the range is switched to the next range, 
 currentReturnCode is set to SEEK_NEXT_USING_HINT (MultiRowRangeFilter:118 in 
 v1.1.0). But if the new range already contains this row, then we should 
 include this row, not seek for another one.
 Replacing line 118 with this code seems to work fine:
 {code}
 if (range.contains(buffer, offset, length)) {
 currentReturnCode = ReturnCode.INCLUDE;
 } else {
 currentReturnCode = ReturnCode.SEEK_NEXT_USING_HINT;
 }
 {code}





[jira] [Commented] (HBASE-5980) Scanner responses from RS should include metrics on rows/KVs filtered

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14551993#comment-14551993
 ] 

Hadoop QA commented on HBASE-5980:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734067/5980v5.txt
  against master branch at commit 132573792dc4947f2d7846f9e8093c9227c189da.
  ATTACHMENT ID: 12734067

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1900 checkstyle errors (more than the master's current 1898 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   \003(\014\\324\002\n\003Get\022\013\n\003row\030\001 
\002(\014\022\027\n\006column\030\002 \003( +
+  unt\030\002 \001(\005\022\016\n\006exists\030\003 
\001(\010\022\024\n\005stale\030\004 \001(\010 +
+  \001 \002(\014\022\016\n\006family\030\002 
\002(\014\022\021\n\tqualifier\030\003 \002(\014 +
+  new java.lang.String[] { Region, Scan, ScannerId, 
NumberOfRows, CloseScanner, NextCallSeq, ClientHandlesPartials, 
ClientHandlesHeartbeats, TrackScanMetrics, });
+  new java.lang.String[] { CellsPerResult, ScannerId, 
MoreResults, Ttl, Results, Stale, PartialFlagPerResult, 
MoreResultsInRegion, HeartbeatMessage, ScanMetrics, });

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.camel.component.jetty.jettyproducer.HttpJettyProducerRecipientListCustomThreadPoolTest.testRecipientList(HttpJettyProducerRecipientListCustomThreadPoolTest.java:40)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14111//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14111//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14111//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14111//console

This message is automatically generated.

 Scanner responses from RS should include metrics on rows/KVs filtered
 -

 Key: HBASE-5980
 URL: https://issues.apache.org/jira/browse/HBASE-5980
 Project: HBase
  Issue Type: Improvement
  Components: Client, metrics, regionserver
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: Jonathan Lawlor
Priority: Minor
 Attachments: 5980v5.txt, 5980v5.txt, HBASE-5980-branch-1.patch, 
 HBASE-5980-v1.patch, HBASE-5980-v2.patch, HBASE-5980-v2.patch, 
 HBASE-5980-v3.patch, HBASE-5980-v4.patch


 Currently it's difficult to know, when issuing a filter, what percentage of 
 rows were skipped by that filter. We should expose some basic counters back 
 to the client scanner object. For example:
 - number of rows filtered by row key alone (filterRowKey())
 - number of times each filter response was returned by filterKeyValue() - 
 corresponding to Filter.ReturnCode
 What would be slickest is if this could actually return a tree of counters 
 for cases where FilterList or other combining filters are used. But a 
 top-level is a good start.





[jira] [Commented] (HBASE-13719) Asynchronous scanner -- cache size-in-bytes bug fix

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552077#comment-14552077
 ] 

Hadoop QA commented on HBASE-13719:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12734082/HBASE-13071-trunk-bug-fix.patch
  against master branch at commit 132573792dc4947f2d7846f9e8093c9227c189da.
  ATTACHMENT ID: 12734082

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14112//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14112//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14112//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14112//console

This message is automatically generated.

 Asynchronous scanner -- cache size-in-bytes bug fix
 ---

 Key: HBASE-13719
 URL: https://issues.apache.org/jira/browse/HBASE-13719
 Project: HBase
  Issue Type: Bug
Reporter: Eshcar Hillel
 Attachments: HBASE-13071-trunk-bug-fix.patch


 Hbase Streaming Scan is a feature recently added to trunk.
 In this feature, an asynchronous scanner pre-loads data to the cache based on 
 its size (both row count and size in bytes). In one of the locations where 
 the scanner polls an item from the cache, the variable holding the estimated 
 byte size of the cache is not updated. This affects the decision of when to 
 load the next batch of data.
 A bug fix patch is attached - it comprises only local changes to the 
 ClientAsyncPrefetchScanner.java file.





[jira] [Commented] (HBASE-13686) Fail to limit rate in RateLimiter

2015-05-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552001#comment-14552001
 ] 

Ashish Singhi commented on HBASE-13686:
---

[~mbertozzi] can you please review the patch? I would also like to hear from 
you, as you are the main developer of this feature.

 Fail to limit rate in RateLimiter
 -

 Key: HBASE-13686
 URL: https://issues.apache.org/jira/browse/HBASE-13686
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Guanghao Zhang
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13686.patch


 While using the patch in HBASE-11598, I found that RateLimiter can't 
 limit the rate correctly.
 {code} 
  /**
* given the time interval, are there enough available resources to allow 
 execution?
* @param now the current timestamp
* @param lastTs the timestamp of the last update
* @param amount the number of required resources
* @return true if there are enough available resources, otherwise false
*/
   public synchronized boolean canExecute(final long now, final long lastTs, 
 final long amount) {
 return avail >= amount ? true : refill(now, lastTs) >= amount;
   }
 {code}
 When avail >= amount, avail is not refilled. But by the next call to 
 canExecute, lastTs may have been updated. So avail will waste some of the 
 time it could have used to refill. Even if we use a smaller rate than the 
 limit, canExecute will return false. 





[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552248#comment-14552248
 ] 

Sean Busbey commented on HBASE-13716:
-

I have a patch that switches to HdfsConstants and works for now. I also have an 
open request on the HDFS ticket about what we're supposed to use. It could use 
more details about what we're trying to check.

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.





[jira] [Created] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-13722:
--

 Summary: Avoid non static method from BloomFilterUtil
 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0


This is an unused method and slipped into this Util class from ByteBloomFilter 
during the cleanup.

boolean contains(byte[] buf, ByteBuffer bloom)





[jira] [Updated] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13722:
---
Attachment: HBASE-13722.patch

Trivial patch.  Will commit after QA run.  FYI [~ram_krish]

 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method and slipped into this Util class from 
 ByteBloomFilter during the cleanup.
 boolean contains(byte[] buf, ByteBuffer bloom)





[jira] [Updated] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2015-05-20 Thread Liu Shaohui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu Shaohui updated HBASE-12451:

Attachment: HBASE-12451-v2.diff

Rebase on master

 IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits 
 in rolling update of cluster
 

 Key: HBASE-12451
 URL: https://issues.apache.org/jira/browse/HBASE-12451
 Project: HBase
  Issue Type: Bug
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-12451-v1.diff, HBASE-12451-v2.diff


 Currently IncreasingToUpperBoundRegionSplitPolicy is the default region split 
 policy. In this policy, the split size is the number of regions that are on 
 this server and belong to the same table, cubed, times 2x the region flush 
 size.
 But when unloading regions of a regionserver in a cluster using 
 region_mover.rb, the number of regions on this server belonging to the same 
 table will decrease, and the split size will decrease too, which may cause 
 the remaining regions on the regionserver to split. Region splits also 
 happen when loading regions onto a regionserver in a cluster. 
 An improvement may be to set a minimum split size in 
 IncreasingToUpperBoundRegionSplitPolicy.
 Suggestions are welcomed. Thanks~
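The split-size formula described above can be sketched as follows. This is a hedged illustration: the method name, the zero-region fallback, and the overflow guard are assumptions, not the shipped policy code.

```java
/**
 * Sketch of the described formula: (same-table regions on this server)^3,
 * times 2x the memstore flush size, capped at the configured max file size.
 */
public class SplitSizeSketch {
  public static long sizeToCheck(int regionCount, long flushSize, long maxFileSize) {
    if (regionCount <= 0) {
      return maxFileSize;  // no same-table regions tracked: use the max
    }
    long cubed = (long) regionCount * regionCount * regionCount;
    long initial = 2L * flushSize;
    if (cubed > maxFileSize / initial) {
      return maxFileSize;  // cubed * initial would overflow or exceed max
    }
    return Math.min(maxFileSize, cubed * initial);
  }

  public static void main(String[] args) {
    long flush = 128L * 1024 * 1024;      // 128 MB flush size
    long max = 10L * 1024 * 1024 * 1024;  // 10 GB max file size
    System.out.println(sizeToCheck(2, flush, max));    // 8 * 2 * 128 MB = 2147483648
    System.out.println(sizeToCheck(100, flush, max));  // capped at max: 10737418240
  }
}
```

With only two same-table regions on the server the check size is 2 GB, which shows how quickly the threshold shrinks when regions are moved off during a rolling update.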





[jira] [Updated] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13722:
---
Status: Patch Available  (was: Open)

 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method and slipped into this Util class from 
 ByteBloomFilter during the cleanup.
 boolean contains(byte[] buf, ByteBuffer bloom)





[jira] [Commented] (HBASE-13686) Fail to limit rate in RateLimiter

2015-05-20 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552301#comment-14552301
 ] 

Ashish Singhi commented on HBASE-13686:
---

The work of refillStrategy.refill is to give the number of resources that were 
refilled, which can then be checked and added to the existing avail to form a 
new avail.

bq. The canExecute() should not to handle the special case of refillStrategy
All the special cases of a refillStrategy are handled by the strategy itself in 
its {{refill}} method. {{canExecute}} has only the code that is common to all 
refill strategies after they finish calculating the refill amount, which avoids 
duplicating the code.

{quote}
Assume there is a RateLimiter which limit and avail is Long.MAX_VALUE. Then 
consume(1). The avail will be Long.MAX_VALUE - 1. After a long time, 
canExecute(1) again. This will refill again. The delta will be much greater 
than 1. Then available + delta will be negative.
{quote}
You have already answered this in your first comment: a positive-overflow check 
can catch this case. So I feel it is better to leave this piece of code in 
{{canExecute}} only and avoid duplicating it in each refill method for now.

I do not see any strong reason to move this code into {{refill}} of each 
{{RefillStrategy}}.

 Fail to limit rate in RateLimiter
 -

 Key: HBASE-13686
 URL: https://issues.apache.org/jira/browse/HBASE-13686
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Guanghao Zhang
Assignee: Ashish Singhi
 Fix For: 2.0.0, 1.2.0, 1.1.1

 Attachments: HBASE-13686.patch


 While using the patch in HBASE-11598, I found that RateLimiter can't 
 limit the rate correctly.
 {code} 
  /**
* given the time interval, are there enough available resources to allow 
 execution?
* @param now the current timestamp
* @param lastTs the timestamp of the last update
* @param amount the number of required resources
* @return true if there are enough available resources, otherwise false
*/
   public synchronized boolean canExecute(final long now, final long lastTs, 
 final long amount) {
 return avail >= amount ? true : refill(now, lastTs) >= amount;
   }
 {code}
 When avail >= amount, avail is not refilled. But by the next call to 
 canExecute, lastTs may have been updated. So avail will waste some of the 
 time it could have used to refill. Even if we use a smaller rate than the 
 limit, canExecute will return false. 





[jira] [Commented] (HBASE-13158) When client supports CellBlock, return the result Cells as controller payload for get(Get) API also

2015-05-20 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552226#comment-14552226
 ] 

Anoop Sam John commented on HBASE-13158:


Maybe in 3.0 we can remove the client version check? What about the client and 
server version compatibility requirement in general?

 When client supports CellBlock, return the result Cells as controller payload 
 for get(Get) API also
 ---

 Key: HBASE-13158
 URL: https://issues.apache.org/jira/browse/HBASE-13158
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13158.patch, HBASE-13158_V2.patch, 
 HBASE-13158_V3.patch








[jira] [Created] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Jean-Marc Spaggiari (JIRA)
Jean-Marc Spaggiari created HBASE-13721:
---

 Summary: Improve shell scan performances when using LIMIT
 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari


When doing a scan which is expected to return exactly as many rows as the LIMIT 
we give, we still scan the entire table before returning the row(s) and only 
then test the number of rows we have. This can take a lot of time.

Example:
scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
STARTROW => '000a', LIMIT => 1 }

This is because we break on the limit condition AFTER we ask for the next 
row. If there is none, we scan the entire table and then exit.

Goal of this patch is to handle this specific case without impacting the others.
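The ordering fix can be illustrated in isolation: test the limit condition *before* asking for the next row, so a scan whose limit is already satisfied never triggers one more (possibly table-sweeping) fetch. The iterator below is a stand-in for the shell's scanner, not the real client API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/** Illustration only: check the limit first, then fetch. */
public class LimitedScan {
  public static <T> List<T> take(Iterator<T> rows, int limit) {
    List<T> out = new ArrayList<>();
    // Short-circuit && means hasNext() (the "fetch") is never called
    // once the limit has been reached.
    while (out.size() < limit && rows.hasNext()) {
      out.add(rows.next());
    }
    return out;
  }

  public static void main(String[] args) {
    List<Integer> rows = Arrays.asList(1, 2, 3, 4);
    System.out.println(take(rows.iterator(), 2));  // prints [1, 2]
  }
}
```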





[jira] [Commented] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2015-05-20 Thread Liu Shaohui (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552293#comment-14552293
 ] 

Liu Shaohui commented on HBASE-12451:
-

Please help to review at https://reviews.apache.org/r/34467/

 IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits 
 in rolling update of cluster
 

 Key: HBASE-12451
 URL: https://issues.apache.org/jira/browse/HBASE-12451
 Project: HBase
  Issue Type: Bug
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-12451-v1.diff, HBASE-12451-v2.diff


 Currently IncreasingToUpperBoundRegionSplitPolicy is the default region split 
 policy. In this policy, the split size is the number of regions that are on 
 this server and belong to the same table, cubed, times 2x the region flush 
 size.
 But when unloading regions of a regionserver in a cluster using 
 region_mover.rb, the number of regions on this server belonging to the same 
 table will decrease, and the split size will decrease too, which may cause 
 the remaining regions on the regionserver to split. Region splits also 
 happen when loading regions onto a regionserver in a cluster. 
 An improvement may be to set a minimum split size in 
 IncreasingToUpperBoundRegionSplitPolicy.
 Suggestions are welcomed. Thanks~





[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552264#comment-14552264
 ] 

zhangduo commented on HBASE-13716:
--

{quote}
 I also have an open request on the HDFs ticket for what we're supposed to use. 
It could use more details about what we're trying to check.
{quote}
Do you mean open an HDFS issue that adds methods for HBase?

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.





[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552351#comment-14552351
 ] 

Sean Busbey commented on HBASE-13716:
-

I meant on HDFS-8135, but now I see you already made it over there. Essentially 
I should stop trying to follow up on jira when I'm on a bus. :)

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.





[jira] [Updated] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13716:

Status: Patch Available  (was: Open)

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.1.0, 1.0.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13716.1.patch


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.





[jira] [Updated] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13716:

Attachment: HBASE-13716.1.patch

-01
  * switch to HdfsConstants


initial works-at-all patch while we figure out what the sustainable course of 
action is on HDFS-8135.

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13716.1.patch


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.





[jira] [Commented] (HBASE-13700) Allow Thrift2 HSHA server to have configurable threads

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552545#comment-14552545
 ] 

Elliott Clark commented on HBASE-13700:
---

Ping?

 Allow Thrift2 HSHA server to have configurable threads
 --

 Key: HBASE-13700
 URL: https://issues.apache.org/jira/browse/HBASE-13700
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13700-v1.patch, HBASE-13700-v2.patch, 
 HBASE-13700.patch


 The half sync half async server by default starts 5 worker threads. For busy 
 servers that might not be enough. That should be configurable.
 For the threadpool there should be a way to set the max number of threads so 
 that creating threads doesn't run away. That should be configurable.





[jira] [Commented] (HBASE-13712) Backport HBASE-13199 to branch-1

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552530#comment-14552530
 ] 

Elliott Clark commented on HBASE-13712:
---

The patch uses Connection and other branch-1 constructs pretty liberally so it 
doesn't apply cleanly.
If someone has time and the desire to backport it to 0.98 I'll commit it.

 Backport HBASE-13199 to branch-1
 

 Key: HBASE-13712
 URL: https://issues.apache.org/jira/browse/HBASE-13712
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 1.2.0

 Attachments: HBASE-13712-branch-1.patch


 HBASE-13199 is practically a requirement for large clusters trying to use 
 Canary; we should port it to branch-1 so that it's usable on clusters with 
 10k regions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552609#comment-14552609
 ] 

ramkrishna.s.vasudevan commented on HBASE-13722:


+1

 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method and slipped into this Util class from 
 ByteBloomFilter during the cleanup.
 boolean contains(byte[] buf, ByteBuffer bloom)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552718#comment-14552718
 ] 

Hadoop QA commented on HBASE-13721:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12734136/HBASE-13721-v0-trunk.txt
  against master branch at commit 132573792dc4947f2d7846f9e8093c9227c189da.
  ATTACHMENT ID: 12734136

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14116//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14116//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14116//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14116//console

This message is automatically generated.

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.
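A language-agnostic way to see the fix: test the row count against LIMIT before asking the scanner for another row, not after. Below is a small Java stand-in for the shell's scan loop; the real change is in the Ruby shell, so this is only a sketch:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ShellLimitSketch {
    /** Old shape: the next row is fetched before the limit is tested, so with
     *  exactly LIMIT matching rows the scanner keeps searching (possibly to
     *  the end of the table) for a row that will only be discarded. */
    static int oldLoop(Iterator<String> scanner, int limit) {
        int rows = 0;
        while (scanner.hasNext()) {   // probing for one more row may scan far ahead
            scanner.next();           // fetch first...
            if (rows >= limit) break; // ...then notice we already had enough rows
            rows++;
        }
        return rows;
    }

    /** Fixed shape: stop as soon as LIMIT rows have been returned, before
     *  asking the scanner for another row. */
    static int newLoop(Iterator<String> scanner, int limit) {
        int rows = 0;
        while (rows < limit && scanner.hasNext()) {
            scanner.next();
            rows++;
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> table = Arrays.asList("r1"); // exactly LIMIT matching rows
        System.out.println(oldLoop(table.iterator(), 1)); // 1, after a wasted probe
        System.out.println(newLoop(table.iterator(), 1)); // 1, no extra probe
    }
}
```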



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13698) Add RegionLocator methods to Thrift2 proxy.

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552544#comment-14552544
 ] 

Elliott Clark commented on HBASE-13698:
---

Ping?

 Add RegionLocator methods to Thrift2 proxy.
 ---

 Key: HBASE-13698
 URL: https://issues.apache.org/jira/browse/HBASE-13698
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13698-v1.patch, HBASE-13698.patch


 Thrift2 doesn't provide the same functionality as the java client for getting 
 region locations. We should change that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552664#comment-14552664
 ] 

ramkrishna.s.vasudevan commented on HBASE-13721:


Thanks for the patch [~jmspaggi]. Thanks for the review [~eclark].

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552722#comment-14552722
 ] 

Lars Hofhansl commented on HBASE-13448:
---

bq. keylength, I feel we have to cache that also

It is very hard to quantify GC cost. A run might finish very quickly, but 
generate a lot of garbage that is collected later, slowing things down then.
Let's get numbers so that we do not have to guess what we should do :)
I'll double check my 0.98 patch and test run (I don't see how the patch would 
make things slower, so there must be something I am not doing right).


 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: 13291-0.98.txt, HBASE-13448.patch, HBASE-13448_V2.patch, 
 HBASE-13448_V3.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the 
 read path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13722:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Ram. 


 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method and slipped into this Util class from 
 ByteBloomFilter during the cleanup.
 boolean contains(byte[] buf, ByteBuffer bloom)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552708#comment-14552708
 ] 

stack commented on HBASE-13716:
---

+1

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13716.1.patch


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552611#comment-14552611
 ] 

ramkrishna.s.vasudevan commented on HBASE-13721:


Should this be committed in branch-1 and 0.98 also?

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13724) ReplicationSource dies under certain conditions reading a sequence file

2015-05-20 Thread churro morales (JIRA)
churro morales created HBASE-13724:
--

 Summary: ReplicationSource dies under certain conditions reading a 
sequence file
 Key: HBASE-13724
 URL: https://issues.apache.org/jira/browse/HBASE-13724
 Project: HBase
  Issue Type: Bug
Reporter: churro morales


A little background, 

We run our server in -ea mode and have seen quite a few replication sources 
silently die over the past few months.

Note: the stacktrace I posted below comes from a regionserver running 0.94 but 
quickly looking at this issue, I believe this will happen in 98 too.  

Should we harden replication source to deal with these types of assertion 
errors by catching throwables, should we be dealing with this at the sequence 
file reader level?  Still looking into the root cause of this issue, but when 
we manually shut down our regionservers, the regionserver that recovered its queue 
replicated that log just fine.  So in our case a simple retry would've worked 
just fine.  

{code}
2015-05-08 11:04:23,348 ERROR 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: Unexpected 
exception in ReplicationSource, 
currentPath=hdfs://hm6.xxx.flurry.com:9000/hbase/.logs/x.yy.flurry.com,60020,1426792702998/x.atl.flurry.com%2C60020%2C1426792702998.1431107922449
java.lang.AssertionError
at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos(SequenceFileLogReader.java:121)
at 
org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1489)
at 
org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1479)
at 
org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1474)
at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:55)
at 
org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
at 
org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:734)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:69)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:583)
at 
org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:373)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552527#comment-14552527
 ] 

Hadoop QA commented on HBASE-13722:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734123/HBASE-13722.patch
  against master branch at commit 132573792dc4947f2d7846f9e8093c9227c189da.
  ATTACHMENT ID: 12734123

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14114//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14114//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14114//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14114//console

This message is automatically generated.

 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method and slipped into this Util class from 
 ByteBloomFilter during the cleanup.
 boolean contains(byte[] buf, ByteBuffer bloom)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552713#comment-14552713
 ] 

Andrew Purtell commented on HBASE-13721:


+1 for 0.98

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552568#comment-14552568
 ] 

Elliott Clark commented on HBASE-13721:
---

+1 looks good.

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552614#comment-14552614
 ] 

Elliott Clark commented on HBASE-13721:
---

Yeah, I would think so. If this applies, let's commit it everywhere.

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552657#comment-14552657
 ] 

ramkrishna.s.vasudevan commented on HBASE-13721:


Pushed to master and branch-1 and above.
[~apurtell]
You need this for 0.98 also?  Just for confirmation.

 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return the exact same number of rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we will break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 Goal of this patch is to handle this specific case without impacting the 
 others.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13716) Stop using Hadoop's FSConstants

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552697#comment-14552697
 ] 

Hadoop QA commented on HBASE-13716:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734132/HBASE-13716.1.patch
  against master branch at commit 132573792dc4947f2d7846f9e8093c9227c189da.
  ATTACHMENT ID: 12734132

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14115//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14115//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14115//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14115//console

This message is automatically generated.

 Stop using Hadoop's FSConstants
 ---

 Key: HBASE-13716
 URL: https://issues.apache.org/jira/browse/HBASE-13716
 Project: HBase
  Issue Type: Task
Affects Versions: 1.0.0, 1.1.0
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13716.1.patch


 the FSConstants class was removed in HDFS-8135 (currently slated for Hadoop 
 2.8.0). I'm trying to have it reverted in branch-2, but we should migrate off 
 of it sooner rather than later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-20 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552739#comment-14552739
 ] 

Lars Hofhansl edited comment on HBASE-13448 at 5/20/15 5:44 PM:


Full 0.98 patch this time.
[~anoop.hbase], did I miss anything in that patch? I'll do my test-run again, 
and lemme tune GC a bit more carefully on my test box (since this is an important 
part)


was (Author: lhofhansl):
Full 0.98 patch this time.
[~anoop.hbase], did I miss anything in that patch? I'll do my test-run again.

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: 13448-0.98.txt, HBASE-13448.patch, HBASE-13448_V2.patch, 
 HBASE-13448_V3.patch, gc.png, hits.png


 This can be an extension to KeyValue and can be instantiated and used in the 
 read path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552776#comment-14552776
 ] 

Enis Soztutar commented on HBASE-13711:
---

+1 for the master patch and branch-1.1 patch. We can commit that to 1.0 as 
well. 

We use Int.MAX for this. It is unlikely that you would want 2B versions, so it 
should be fine. 
{code}
+  // TODO: Allow minVersion and maxVersion of 0 to be the way you say 
"Keep all versions."
{code}

2 spaces instead of 4 here? 
{code}
+if (maxVersions < minVersions) {
+throw new IllegalArgumentException("Unable to set MaxVersion to " + 
maxVersions
+
{code}

 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would throw from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().  
 Here is an example (trying to set max version to 1 while currently min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of set - set min version first and then 
 set max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets the version.  Not sure whether this is 
 by design).
 Another solution is to provide an API to set both min and max version in one 
 function call.  
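The second option can be sketched as a single validated setter; the class and method names here are hypothetical, not the API the patch actually adds:

```java
public class VersionedDescriptor {
    private int minVersions = 0;
    private int maxVersions = 1;

    /** Sets both bounds in one validated call, so callers never hit the
     *  ordering trap of lowering max below the still-current min. */
    public VersionedDescriptor setVersions(int min, int max) {
        if (max < min) {
            throw new IllegalArgumentException("Unable to set MaxVersion to " + max
                + " while minVersion is " + min);
        }
        this.minVersions = min;
        this.maxVersions = max;
        return this;
    }

    public static void main(String[] args) {
        // Start at min=2/max=2, then shrink both to 1: with separate setters,
        // calling setMaxVersions(1) first would throw because min is still 2.
        VersionedDescriptor d = new VersionedDescriptor().setVersions(2, 2);
        d.setVersions(1, 1);
        System.out.println(d.minVersions + "," + d.maxVersions); // 1,1
    }
}
```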



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13724) ReplicationSource dies under certain conditions reading a sequence file

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552817#comment-14552817
 ] 

stack commented on HBASE-13724:
---

Running with asserts in production is not usual practice so you will probably 
find lots of 'interesting' issues.

Regarding this current one, our assert should print out something better than 
just that it tripped.  I wonder what realLength is coming back as in this case. 
 Looks like we'll go back and start reading earlier in the file so double 
replication -- probably not the end of the world but to be fixed for sure.

bq. Should we harden replication source to deal with these types of assertion 
errors ... 

Yes. Convert to an exception... and sounds like a retry might be in order here 
as you suggest.
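The hardening suggested here (convert the Throwable into an exception and retry a bounded number of times) can be sketched as follows; the names are hypothetical and the real ReplicationSource code differs:

```java
public class RetryingReaderOpen {
    /** Stand-in for the WAL-reader open call; the name is hypothetical. */
    interface ReaderOpener { void open() throws Exception; }

    /** Catches Throwable (so an AssertionError cannot silently kill the
     *  source thread), retries a bounded number of times, then rethrows
     *  the last failure wrapped in an ordinary exception. */
    static void openWithRetries(ReaderOpener opener, int attempts) throws Exception {
        Throwable last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                opener.open();
                return;                 // success
            } catch (Throwable t) {     // includes Errors such as AssertionError
                last = t;
            }
        }
        throw new Exception("giving up after " + attempts + " attempts", last);
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails once with an AssertionError, then succeeds, matching the
        // reporter's case where a single retry would have been enough.
        openWithRetries(() -> {
            if (calls[0]++ == 0) throw new AssertionError();
        }, 3);
        System.out.println("opened after " + calls[0] + " calls"); // 2 calls
    }
}
```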

 ReplicationSource dies under certain conditions reading a sequence file
 ---

 Key: HBASE-13724
 URL: https://issues.apache.org/jira/browse/HBASE-13724
 Project: HBase
  Issue Type: Bug
Reporter: churro morales

 A little background, 
 We run our server in -ea mode and have seen quite a few replication sources 
 silently die over the past few months.
 Note: the stacktrace I posted below comes from a regionserver running 0.94 
 but quickly looking at this issue, I believe this will happen in 98 too.  
 Should we harden replication source to deal with these types of assertion 
 errors by catching throwables, should we be dealing with this at the sequence 
 file reader level?  Still looking into the root cause of this issue, but when 
 we manually shut down our regionservers, the regionserver that recovered its queue 
 replicated that log just fine.  So in our case a simple retry would've worked 
 just fine.  
 {code}
 2015-05-08 11:04:23,348 ERROR 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource: 
 Unexpected exception in ReplicationSource, 
 currentPath=hdfs://hm6.xxx.flurry.com:9000/hbase/.logs/x.yy.flurry.com,60020,1426792702998/x.atl.flurry.com%2C60020%2C1426792702998.1431107922449
 java.lang.AssertionError
 at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader$WALReaderFSDataInputStream.getPos(SequenceFileLogReader.java:121)
 at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1489)
 at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1479)
 at 
 org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1474)
 at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.init(SequenceFileLogReader.java:55)
 at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:178)
 at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:734)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationHLogReaderManager.openReader(ReplicationHLogReaderManager.java:69)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:583)
 at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:373)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13728) Remove use of Hadoop's GenericOptionsParser

2015-05-20 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-13728:
---

 Summary: Remove use of Hadoop's GenericOptionsParser
 Key: HBASE-13728
 URL: https://issues.apache.org/jira/browse/HBASE-13728
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker


GenericOptionsParser has been IA.Private for all of Hadoop 2 (handled in 
HADOOP-6668) we shouldn't be using it.





[jira] [Commented] (HBASE-13158) When client supports CellBlock, return the result Cells as controller payload for get(Get) API also

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14552936#comment-14552936
 ] 

stack commented on HBASE-13158:
---

[~anoop.hbase] did some profiling and found the version check expensive. Let's 
see if we can do a cheaper feature-present check.

 When client supports CellBlock, return the result Cells as controller payload 
 for get(Get) API also
 ---

 Key: HBASE-13158
 URL: https://issues.apache.org/jira/browse/HBASE-13158
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13158.patch, HBASE-13158_V2.patch, 
 HBASE-13158_V3.patch








[jira] [Updated] (HBASE-13728) Remove use of Hadoop's GenericOptionsParser

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13728:

Fix Version/s: 1.1.1
   1.2.0
   1.0.2
   Status: Patch Available  (was: Open)

 Remove use of Hadoop's GenericOptionsParser
 ---

 Key: HBASE-13728
 URL: https://issues.apache.org/jira/browse/HBASE-13728
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13728.1.patch


 GenericOptionsParser has been IA.Private for all of Hadoop 2 (handled in 
 HADOOP-6668) we shouldn't be using it.





[jira] [Commented] (HBASE-13698) Add RegionLocator methods to Thrift2 proxy.

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553066#comment-14553066
 ] 

Elliott Clark commented on HBASE-13698:
---

Committed to branch-1 and master. Thanks for the review.

 Add RegionLocator methods to Thrift2 proxy.
 ---

 Key: HBASE-13698
 URL: https://issues.apache.org/jira/browse/HBASE-13698
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13698-v1.patch, HBASE-13698.patch


 Thrift2 doesn't provide the same functionality as the java client for getting 
 region locations. We should change that.





[jira] [Commented] (HBASE-13710) Remove use of Hadoop's ReflectionUtil in favor of our own.

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553075#comment-14553075
 ] 

Hadoop QA commented on HBASE-13710:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12733893/HBASE-13710.2.patch
  against master branch at commit 88f19ab6979c7012c3dd22b2f45db9f746c7736d.
  ATTACHMENT ID: 12733893

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.wal.TestWALSplit.testCorruptedFileGetsArchivedIfSkipErrors(TestWALSplit.java:520)
at 
org.apache.hadoop.hbase.wal.TestWALSplit.testLogsGetArchivedAfterSplit(TestWALSplit.java:648)
at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpointNoMaster.testReplayedEditsAreSkipped(TestRegionReplicaReplicationEndpointNoMaster.java:295)
at 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationWALReaderManager.test(TestReplicationWALReaderManager.java:169)
at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationIgnoresDisabledTables(TestRegionReplicaReplicationEndpoint.java:349)
at 
org.apache.hadoop.hbase.replication.regionserver.TestRegionReplicaReplicationEndpoint.testRegionReplicaReplicationIgnoresDroppedTables(TestRegionReplicaReplicationEndpoint.java:335)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14118//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14118//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14118//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14118//console

This message is automatically generated.

 Remove use of Hadoop's ReflectionUtil in favor of our own.
 --

 Key: HBASE-13710
 URL: https://issues.apache.org/jira/browse/HBASE-13710
 Project: HBase
  Issue Type: Improvement
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Minor
 Attachments: HBASE-13710.1.patch, HBASE-13710.2.patch


 HttpServer makes use of Hadoop's ReflectionUtil instead of our own. AFAICT 
 it's using 1 extra method. Just copy that one over to our own ReflectionUtil.





[jira] [Updated] (HBASE-13158) When client supports CellBlock, return the result Cells as controller payload for get(Get) API also

2015-05-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13158:
--
Attachment: 13158v4.suggestion.txt

Here is a suggestion, [~anoop.hbase]. It saves on parsing of the String on each 
invocation. If we need to run with less friction still, I can work on something 
more radical. Do you have a means of testing?  Thanks.

 When client supports CellBlock, return the result Cells as controller payload 
 for get(Get) API also
 ---

 Key: HBASE-13158
 URL: https://issues.apache.org/jira/browse/HBASE-13158
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: 13158v4.suggestion.txt, HBASE-13158.patch, 
 HBASE-13158_V2.patch, HBASE-13158_V3.patch








[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-05-20 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553208#comment-14553208
 ] 

Jean-Marc Spaggiari commented on HBASE-8329:


But someone running 0.98 will not be able to upgrade to 1.0 because it lacks 
this feature, right? That's a bit strange. They will have to jump to 1.1 just 
because of that? That might be what we want, but it's just not usual I think.

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially for request bursts.





[jira] [Commented] (HBASE-13721) Improve shell scan performances when using LIMIT

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553017#comment-14553017
 ] 

Hudson commented on HBASE-13721:


SUCCESS: Integrated in HBase-TRUNK #6498 (See 
[https://builds.apache.org/job/HBase-TRUNK/6498/])
HBASE-13721 - Improve shell scan performances when using LIMIT (JMS) 
(ramkrishna: rev 1fbde3abd3c5186540113cfd271f33f8484b1235)
* hbase-shell/src/main/ruby/hbase/table.rb


 Improve shell scan performances when using LIMIT
 

 Key: HBASE-13721
 URL: https://issues.apache.org/jira/browse/HBASE-13721
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 1.1.0
Reporter: Jean-Marc Spaggiari
Assignee: Jean-Marc Spaggiari
 Attachments: HBASE-13721-v0-trunk.txt


 When doing a scan which is expected to return exactly as many rows 
 as the LIMIT we give, we still scan the entire table until we return the 
 row(s) and then test the number of rows we have. This can take a lot of time.
 Example:
 scan 'sensors', { COLUMNS => ['v:f92acb5b-079a-42bc-913a-657f270a3dc1'], 
 STARTROW => '000a', LIMIT => 1 }
 This is because we break on the limit condition AFTER we ask for the 
 next row. If there is none, we scan the entire table and then exit.
 The goal of this patch is to handle this specific case without impacting the 
 others.
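The fix described above — testing the LIMIT before asking the scanner for another row — can be sketched generically as follows (an illustrative sketch only; the real change lives in hbase-shell's table.rb, not this code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: check the LIMIT *before* fetching another row, so a
// scan whose limit is already satisfied never keeps reading to table end.
public class LimitedScan {
    public static List<String> scan(Iterator<String> scanner, int limit) {
        List<String> rows = new ArrayList<>();
        // Stop as soon as 'limit' rows are collected; only then consider
        // asking the (possibly expensive) scanner for the next row.
        while (rows.size() < limit && scanner.hasNext()) {
            rows.add(scanner.next());
        }
        return rows;
    }

    public static void main(String[] args) {
        Iterator<String> it = Arrays.asList("r1", "r2", "r3", "r4").iterator();
        System.out.println(scan(it, 1));  // [r1] -- later rows never touched
    }
}
```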





[jira] [Commented] (HBASE-13722) Avoid non static method from BloomFilterUtil

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553016#comment-14553016
 ] 

Hudson commented on HBASE-13722:


SUCCESS: Integrated in HBase-TRUNK #6498 (See 
[https://builds.apache.org/job/HBase-TRUNK/6498/])
HBASE-13722 Avoid non static method from BloomFilterUtil. (anoopsamjohn: rev 
88f19ab6979c7012c3dd22b2f45db9f746c7736d)
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/BloomFilterUtil.java


 Avoid non static method from BloomFilterUtil
 

 Key: HBASE-13722
 URL: https://issues.apache.org/jira/browse/HBASE-13722
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
Priority: Trivial
 Fix For: 2.0.0

 Attachments: HBASE-13722.patch


 This is an unused method that slipped into this Util class from 
 ByteBloomFilter during the cleanup:
 boolean contains(byte[] buf, ByteBuffer bloom)





[jira] [Commented] (HBASE-13726) stop using Hadoop's IOUtils

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553205#comment-14553205
 ] 

Hadoop QA commented on HBASE-13726:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734198/HBASE-13726.1.patch
  against master branch at commit 77d9719e2bd7db20b0ad3bafb255c9d797d2b49d.
  ATTACHMENT ID: 12734198

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at org.apache.hadoop.mapred.TestMerge.testMerge(TestMerge.java:87)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14123//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14123//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14123//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14123//console

This message is automatically generated.

 stop using Hadoop's IOUtils
 ---

 Key: HBASE-13726
 URL: https://issues.apache.org/jira/browse/HBASE-13726
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
 Attachments: HBASE-13726.1.patch


 In HBaseFsck we make use of Hadoop's IOUtils for ignore-errors-while-closing.
 All of these methods (in the way we call them) behave the same as 
 commons-io's IOUtils.closeQuietly. One of the methods in the Hadoop version 
 also uses a parameter that isn't in org.apache.hadoop.
 We already have commons-io as a dependency in this module, so we should just 
 use the commons-io version since it is stable and more limited in surface.
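A self-contained sketch of the ignore-errors-while-closing behaviour the issue describes; commons-io's IOUtils.closeQuietly behaves like the helper shown inline here (swallow IOException, tolerate null), reproduced locally so the example needs no extra dependency:

```java
import java.io.Closeable;
import java.io.IOException;

// Sketch of the ignore-errors-while-closing pattern; mirrors the behaviour
// of commons-io's IOUtils.closeQuietly for illustration purposes.
public class QuietClose {
    public static void closeQuietly(Closeable c) {
        if (c == null) return;  // null-safe, like commons-io
        try {
            c.close();
        } catch (IOException ignored) {
            // deliberately swallowed: we are in cleanup code
        }
    }

    public static void main(String[] args) {
        Closeable failing = () -> { throw new IOException("disk gone"); };
        closeQuietly(failing);  // no exception propagates
        closeQuietly(null);     // no NullPointerException either
        System.out.println("cleanup finished");
    }
}
```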





[jira] [Commented] (HBASE-5980) Scanner responses from RS should include metrics on rows/KVs filtered

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553021#comment-14553021
 ] 

Hadoop QA commented on HBASE-5980:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734208/5980v5.txt
  against master branch at commit 7f2b33dbbf90474a8f73e4d38ea8f6817ee3dcdb.
  ATTACHMENT ID: 12734208

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14125//console

This message is automatically generated.

 Scanner responses from RS should include metrics on rows/KVs filtered
 -

 Key: HBASE-5980
 URL: https://issues.apache.org/jira/browse/HBASE-5980
 Project: HBase
  Issue Type: Improvement
  Components: Client, metrics, regionserver
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: Jonathan Lawlor
Priority: Minor
 Attachments: 5980v5.txt, 5980v5.txt, 5980v5.txt, 
 HBASE-5980-branch-1.patch, HBASE-5980-v1.patch, HBASE-5980-v2.patch, 
 HBASE-5980-v2.patch, HBASE-5980-v3.patch, HBASE-5980-v4.patch


 Currently it's difficult to know, when issuing a filter, what percentage of 
 rows were skipped by that filter. We should expose some basic counters back 
 to the client scanner object. For example:
 - number of rows filtered by row key alone (filterRowKey())
 - number of times each filter response was returned by filterKeyValue() - 
 corresponding to Filter.ReturnCode
 What would be slickest is if this could actually return a tree of counters 
 for cases where FilterList or other combining filters are used. But a 
 top-level is a good start.
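The per-ReturnCode counters suggested above could look like the following sketch; the enum here is a local stand-in mirroring a subset of HBase's Filter.ReturnCode, not the real class:

```java
import java.util.EnumMap;
import java.util.Map;

// Sketch of per-ReturnCode counters a scanner could accumulate and ship
// back to the client. ReturnCode below is a hypothetical stand-in for
// (a subset of) HBase's Filter.ReturnCode.
public class FilterMetrics {
    public enum ReturnCode { INCLUDE, SKIP, NEXT_ROW, SEEK_NEXT_USING_HINT }

    private final Map<ReturnCode, Long> counts = new EnumMap<>(ReturnCode.class);

    public void record(ReturnCode rc) {
        counts.merge(rc, 1L, Long::sum);  // bump the counter for this response
    }

    public long get(ReturnCode rc) {
        return counts.getOrDefault(rc, 0L);
    }

    public static void main(String[] args) {
        FilterMetrics m = new FilterMetrics();
        m.record(ReturnCode.SKIP);
        m.record(ReturnCode.SKIP);
        m.record(ReturnCode.INCLUDE);
        System.out.println("skipped=" + m.get(ReturnCode.SKIP)
            + " included=" + m.get(ReturnCode.INCLUDE));  // skipped=2 included=1
    }
}
```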





[jira] [Updated] (HBASE-13698) Add RegionLocator methods to Thrift2 proxy.

2015-05-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13698:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Add RegionLocator methods to Thrift2 proxy.
 ---

 Key: HBASE-13698
 URL: https://issues.apache.org/jira/browse/HBASE-13698
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13698-v1.patch, HBASE-13698.patch


 Thrift2 doesn't provide the same functionality as the java client for getting 
 region locations. We should change that.





[jira] [Commented] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553108#comment-14553108
 ] 

Hudson commented on HBASE-13711:


FAILURE: Integrated in HBase-1.0 #922 (See 
[https://builds.apache.org/job/HBase-1.0/922/])
HBASE-13711 Provide an API to set min and max versions in HColumnDescriptor 
(Stephen Yuan Jiang) (ndimiduk: rev 95ec0a475ebf738bc3f0e0ec80fca9129dff8706)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeVersionsAction.java


 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().  
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first 
 and then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it.  Not sure whether this is 
 by design).
 Another solution is to provide an API to set both min and max version in one 
 function call.  
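Both workarounds can be sketched with a stand-in descriptor (a hypothetical class, not the real HColumnDescriptor) whose setMaxVersions enforces max >= min as HBase's does:

```java
// Hypothetical stand-in for HColumnDescriptor, enforcing max >= min in
// setMaxVersions the way HBase does, to illustrate the two workarounds.
public class VersionsDescriptor {
    private int minVersions = 0;
    private int maxVersions = 1;

    public void setMinVersions(int v) { this.minVersions = v; }

    public void setMaxVersions(int v) {
        if (v < minVersions) {
            throw new IllegalArgumentException("Set MaxVersion to " + v
                + " while minVersion is " + minVersions);
        }
        this.maxVersions = v;
    }

    // Second workaround: one call that sets both, validating once.
    public void setVersions(int min, int max) {
        if (max < min) {
            throw new IllegalArgumentException("max must be >= min");
        }
        this.minVersions = min;  // set min first, so max cannot conflict
        this.maxVersions = max;
    }

    public static void main(String[] args) {
        VersionsDescriptor d = new VersionsDescriptor();
        d.setMinVersions(2);
        d.setMaxVersions(3);
        // First workaround: lowering both to 1 works if min is set first.
        d.setMinVersions(1);
        d.setMaxVersions(1);
        d.setVersions(2, 5);  // second workaround: combined setter
        System.out.println("ok");
    }
}
```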





[jira] [Commented] (HBASE-13700) Allow Thrift2 HSHA server to have configurable threads

2015-05-20 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553106#comment-14553106
 ] 

Elliott Clark commented on HBASE-13700:
---

Pushed to master and branch-1. Thanks for the review [~stack]

 Allow Thrift2 HSHA server to have configurable threads
 --

 Key: HBASE-13700
 URL: https://issues.apache.org/jira/browse/HBASE-13700
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13700-v1.patch, HBASE-13700-v2.patch, 
 HBASE-13700.patch


 The half sync half async server by default starts 5 worker threads. For busy 
 servers that might not be enough. That should be configurable.
 For the threadpool there should be a way to set the max number of threads so 
 that creating threads doesn't run away. That should be configurable.
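The configurable-pool idea can be sketched with a plain ThreadPoolExecutor; the sizes below are illustrative defaults, not actual HBase configuration values:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of a worker pool with a configurable core size and a hard cap on
// thread creation, the behaviour the issue asks for. Note that with a
// bounded queue, extra threads beyond the core are only created once the
// queue fills up.
public class BoundedWorkers {
    public static ThreadPoolExecutor newPool(int coreThreads, int maxThreads) {
        return new ThreadPoolExecutor(
            coreThreads, maxThreads,
            60L, TimeUnit.SECONDS,             // idle extra threads die after 60s
            new LinkedBlockingQueue<>(1000));  // bounded backlog of waiting tasks
    }

    public static void main(String[] args) throws InterruptedException {
        // Instead of a hard-coded 5 workers, both bounds are parameters.
        ThreadPoolExecutor pool = newPool(5, 32);
        pool.execute(() -> System.out.println("task ran"));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```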



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553169#comment-14553169
 ] 

Hudson commented on HBASE-13711:


FAILURE: Integrated in HBase-1.1 #495 (See 
[https://builds.apache.org/job/HBase-1.1/495/])
HBASE-13711 Provide an API to set min and max versions in HColumnDescriptor 
(Stephen Yuan Jiang) (ndimiduk: rev 29f67d3cc1afa57b5bf2827ed09c433895b2863d)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeVersionsAction.java


 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().  
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first 
 and then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it.  Not sure whether this is 
 by design).
 Another solution is to provide an API to set both min and max version in one 
 function call.  





[jira] [Commented] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553181#comment-14553181
 ] 

Hudson commented on HBASE-13711:


SUCCESS: Integrated in HBase-1.2 #91 (See 
[https://builds.apache.org/job/HBase-1.2/91/])
HBASE-13711 Provide an API to set min and max versions in HColumnDescriptor 
(Stephen Yuan Jiang) (ndimiduk: rev 49fc6c817d854d5876c42e6ccb88c24639e0a3ce)
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/chaos/actions/ChangeVersionsAction.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java


 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().  
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first 
 and then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it.  Not sure whether this is 
 by design).
 Another solution is to provide an API to set both min and max version in one 
 function call.  





[jira] [Updated] (HBASE-13656) Rename getDeadServers to getDeadServersSize in Admin

2015-05-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13656:
--
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to master. Thanks [~lars_francke]

 Rename getDeadServers to getDeadServersSize in Admin
 

 Key: HBASE-13656
 URL: https://issues.apache.org/jira/browse/HBASE-13656
 Project: HBase
  Issue Type: Improvement
Reporter: Lars Francke
Assignee: Lars Francke
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-13656.patch


 The name is inconsistent with the other methods (e.g. {{getServersSize}} and 
 {{getBackupMastersSize}}).





[jira] [Commented] (HBASE-13616) Move ServerShutdownHandler to Pv2

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14553088#comment-14553088
 ] 

Hadoop QA commented on HBASE-13616:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734182/13616v10.branch-1.txt
  against branch-1 branch at commit 88f19ab6979c7012c3dd22b2f45db9f746c7736d.
  ATTACHMENT ID: 12734182

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+regionsOnCrashedServer_ = 
java.util.Collections.unmodifiableList(regionsOnCrashedServer_);
+  new java.lang.String[] { UserInfo, UnmodifiedTableSchema, 
ModifiedTableSchema, DeleteColumnFamilyInModify, });
+  new java.lang.String[] { UserInfo, PreserveSplits, 
TableName, TableSchema, RegionInfo, });
+  new java.lang.String[] { ServerName, DistributedLogReplay, 
RegionsOnCrashedServer, RegionsToAssign, CarryingMeta, ShouldSplitWal, 
});

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14120//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14120//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14120//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14120//console

This message is automatically generated.

 Move ServerShutdownHandler to Pv2
 -

 Key: HBASE-13616
 URL: https://issues.apache.org/jira/browse/HBASE-13616
 Project: HBase
  Issue Type: Sub-task
  Components: proc-v2
Affects Versions: 1.1.0
Reporter: stack
Assignee: stack
 Attachments: 13616.wip.txt, 13616.wip.v3.branch-1.txt, 
 13616.wip.v4.branch-1.txt, 13616.wip.v5.branch-1.1.txt, 
 13616.wip.v6.branch-1.txt, 13616.wip.v7.branch-1.txt, 13616v10.branch-1.txt, 
 13616v8.branch-1.txt, 13616v9.branch-1.txt, 13616v9.branch-1.txt, 
 13616wip.v2.txt


 Move ServerShutdownHandler to run on ProcedureV2. Need this for DLR to work. 
 See HBASE-13567.





[jira] [Updated] (HBASE-13700) Allow Thrift2 HSHA server to have configurable threads

2015-05-20 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13700:
--
   Resolution: Fixed
Fix Version/s: 1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

 Allow Thrift2 HSHA server to have configurable threads
 --

 Key: HBASE-13700
 URL: https://issues.apache.org/jira/browse/HBASE-13700
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13700-v1.patch, HBASE-13700-v2.patch, 
 HBASE-13700.patch


 The half sync half async server by default starts 5 worker threads. For busy 
 servers that might not be enough. That should be configurable.
 For the threadpool there should be a way to set the max number of threads so 
 that creating threads doesn't run away. That should be configurable.
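A bounded worker pool along the lines proposed above could be sketched as follows. This is only an illustration using plain java.util.concurrent; the property values and class name are hypothetical, not the actual keys or code added by the patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedWorkerPool {
    // Build a pool with a configurable floor and ceiling on thread count.
    public static ExecutorService create(int minThreads, int maxThreads) {
        // SynchronousQueue hands each task off directly to a thread, so the
        // pool grows up to maxThreads instead of queueing without bound.
        return new ThreadPoolExecutor(minThreads, maxThreads,
            60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) create(5, 20);
        System.out.println(pool.getCorePoolSize() + " " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```

With a cap like this, thread creation cannot run away under load; excess submissions are rejected once maxThreads is reached rather than spawning more threads.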



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13718) Add a pretty printed table description to the table detail page of HBase's master

2015-05-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553024#comment-14553024
 ] 

Hadoop QA commented on HBASE-13718:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12734180/D38649.diff
  against master branch at commit 88f19ab6979c7012c3dd22b2f45db9f746c7736d.
  ATTACHMENT ID: 12734180

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:red}-1 site{color}.  The patch appears to cause mvn site goal to 
fail.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestImportExport

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14121//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14121//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14121//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14121//console

This message is automatically generated.

 Add a pretty printed table description to the table detail page of HBase's 
 master
 -

 Key: HBASE-13718
 URL: https://issues.apache.org/jira/browse/HBASE-13718
 Project: HBase
  Issue Type: Improvement
  Components: hbase
Affects Versions: 2.0.0
Reporter: Joao Girao
Assignee: Joao Girao
Priority: Minor
 Fix For: 2.0.0, 1.2.0

 Attachments: D38649.diff, D38649.diff.txt, Screen Shot 2015-05-18 at 
 1.57.50 PM.png


 HBase's master has an info server that's useful for debugging and getting a 
 general overview of what's in the cluster. It has a page dedicated to 
 describing a cluster. You can reach it by going to something like: 
 http://localhost:54677/table.jsp?name=cluster_test
 That page currently doesn't have anything about the current table schema. It 
 would be nice to have a table that lists the different column families and 
 how they are set up.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13729) Old hbase.regionserver.global.memstore.upperLimit is ignored if present

2015-05-20 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-13729:
--
Summary: Old hbase.regionserver.global.memstore.upperLimit is ignored if 
present  (was: old hbase.regionserver.global.memstore.upperLimit is ignored if 
present)

 Old hbase.regionserver.global.memstore.upperLimit is ignored if present
 ---

 Key: HBASE-13729
 URL: https://issues.apache.org/jira/browse/HBASE-13729
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 2.0.0, 1.0.1, 1.1.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Critical

 If hbase.regionserver.global.memstore.upperLimit is present we should use it 
 instead of hbase.regionserver.global.memstore.size. The current implementation 
 of HeapMemorySizeUtil.getGlobalMemStorePercent() assumes that if 
 hbase.regionserver.global.memstore.size is not defined then it should use the 
 old configuration; however, it should be the other way around.
 This has a large impact, especially when doing a rolling upgrade of a cluster 
 in which the memstore upper limit has been changed from the default.
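The intended precedence can be sketched as below. This is a minimal illustration using a plain Map in place of HBase's Configuration; the method body is an assumption about the proposed fix, not the actual HeapMemorySizeUtil code:

```java
import java.util.HashMap;
import java.util.Map;

public class MemStoreLimitDemo {
    static final String OLD_KEY = "hbase.regionserver.global.memstore.upperLimit";
    static final String NEW_KEY = "hbase.regionserver.global.memstore.size";

    // Prefer the legacy key when it is explicitly set, falling back to the
    // new key and then the default, the opposite of the buggy order.
    static float globalMemStorePercent(Map<String, String> conf) {
        String old = conf.get(OLD_KEY);
        if (old != null) {
            return Float.parseFloat(old);
        }
        String current = conf.get(NEW_KEY);
        return current != null ? Float.parseFloat(current) : 0.4f;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(NEW_KEY, "0.4");
        conf.put(OLD_KEY, "0.35"); // operator's pre-upgrade setting wins
        System.out.println(globalMemStorePercent(conf));
    }
}
```

This way a rolling upgrade keeps honoring the operator's tuned upper limit instead of silently reverting to the default.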



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13729) old hbase.regionserver.global.memstore.upperLimit is ignored if present

2015-05-20 Thread Esteban Gutierrez (JIRA)
Esteban Gutierrez created HBASE-13729:
-

 Summary: old hbase.regionserver.global.memstore.upperLimit is 
ignored if present
 Key: HBASE-13729
 URL: https://issues.apache.org/jira/browse/HBASE-13729
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 1.1.0, 1.0.1, 2.0.0
Reporter: Esteban Gutierrez
Assignee: Esteban Gutierrez
Priority: Critical


If hbase.regionserver.global.memstore.upperLimit is present we should use it 
instead of hbase.regionserver.global.memstore.size. The current implementation 
of HeapMemorySizeUtil.getGlobalMemStorePercent() assumes that if 
hbase.regionserver.global.memstore.size is not defined then it should use the 
old configuration; however, it should be the other way around.

This has a large impact, especially when doing a rolling upgrade of a cluster in 
which the memstore upper limit has been changed from the default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-5980) Scanner responses from RS should include metrics on rows/KVs filtered

2015-05-20 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-5980:
-
Attachment: 5980v5.txt

Retry

 Scanner responses from RS should include metrics on rows/KVs filtered
 -

 Key: HBASE-5980
 URL: https://issues.apache.org/jira/browse/HBASE-5980
 Project: HBase
  Issue Type: Improvement
  Components: Client, metrics, regionserver
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: Jonathan Lawlor
Priority: Minor
 Attachments: 5980v5.txt, 5980v5.txt, 5980v5.txt, 5980v5.txt, 
 HBASE-5980-branch-1.patch, HBASE-5980-v1.patch, HBASE-5980-v2.patch, 
 HBASE-5980-v2.patch, HBASE-5980-v3.patch, HBASE-5980-v4.patch


 Currently it's difficult to know, when issuing a filter, what percentage of 
 rows were skipped by that filter. We should expose some basic counters back 
 to the client scanner object. For example:
 - number of rows filtered by row key alone (filterRowKey())
 - number of times each filter response was returned by filterKeyValue() - 
 corresponding to Filter.ReturnCode
 What would be slickest is if this could actually return a tree of counters 
 for cases where FilterList or other combining filters are used. But a 
 top-level is a good start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5980) Scanner responses from RS should include metrics on rows/KVs filtered

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553044#comment-14553044
 ] 

stack commented on HBASE-5980:
--

Committed by mistake against trunk. Reverted till we're sure this is good.

 Scanner responses from RS should include metrics on rows/KVs filtered
 -

 Key: HBASE-5980
 URL: https://issues.apache.org/jira/browse/HBASE-5980
 Project: HBase
  Issue Type: Improvement
  Components: Client, metrics, regionserver
Affects Versions: 0.95.2
Reporter: Todd Lipcon
Assignee: Jonathan Lawlor
Priority: Minor
 Attachments: 5980v5.txt, 5980v5.txt, 5980v5.txt, 5980v5.txt, 
 HBASE-5980-branch-1.patch, HBASE-5980-v1.patch, HBASE-5980-v2.patch, 
 HBASE-5980-v2.patch, HBASE-5980-v3.patch, HBASE-5980-v4.patch


 Currently it's difficult to know, when issuing a filter, what percentage of 
 rows were skipped by that filter. We should expose some basic counters back 
 to the client scanner object. For example:
 - number of rows filtered by row key alone (filterRowKey())
 - number of times each filter response was returned by filterKeyValue() - 
 corresponding to Filter.ReturnCode
 What would be slickest is if this could actually return a tree of counters 
 for cases where FilterList or other combining filters are used. But a 
 top-level is a good start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-05-20 Thread Jean-Marc Spaggiari (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553087#comment-14553087
 ] 

Jean-Marc Spaggiari commented on HBASE-8329:


Is this in 1.0.x too? I can find it in trunk, and it says it's fixed in 0.98, but 
I'm not able to find it in 1.0.0. Is that normal?

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially to handle request bursts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-8329) Limit compaction speed

2015-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553198#comment-14553198
 ] 

Nick Dimiduk commented on HBASE-8329:
-

No, it's not on 1.0.x because it's a new feature and we no longer add new 
features to released branches in patch releases. This is by design in the 
1.0+ world, according to Semantic Versioning aspirations, 
http://hbase.apache.org/book.html#hbase.versioning.post10

New features that are backwards compatible only go into minor releases. Patch 
releases only get bug fixes.

 Limit compaction speed
 --

 Key: HBASE-8329
 URL: https://issues.apache.org/jira/browse/HBASE-8329
 Project: HBase
  Issue Type: Improvement
  Components: Compaction
Reporter: binlijin
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 0.98.11

 Attachments: HBASE-8329-0.98-addendum.patch, HBASE-8329-0.98.patch, 
 HBASE-8329-10.patch, HBASE-8329-11.patch, HBASE-8329-12.patch, 
 HBASE-8329-2-trunk.patch, HBASE-8329-3-trunk.patch, HBASE-8329-4-trunk.patch, 
 HBASE-8329-5-trunk.patch, HBASE-8329-6-trunk.patch, HBASE-8329-7-trunk.patch, 
 HBASE-8329-8-trunk.patch, HBASE-8329-9-trunk.patch, 
 HBASE-8329-branch-1.patch, HBASE-8329-trunk.patch, HBASE-8329_13.patch, 
 HBASE-8329_14.patch, HBASE-8329_15.patch, HBASE-8329_16.patch, 
 HBASE-8329_17.patch


 There is no speed or resource limit for compaction. I think we should add this 
 feature, especially to handle request bursts.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13728) Remove use of Hadoop's GenericOptionsParser

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13728:

Attachment: HBASE-13728.1.patch

This should behave the same as before and TestImportTsv is clean. Can 
definitely go back through branch-1 versions. Not sure about 0.98.

-01
  * rely on the ToolRunner invocation that calls generic option parsing for us.

 Remove use of Hadoop's GenericOptionsParser
 ---

 Key: HBASE-13728
 URL: https://issues.apache.org/jira/browse/HBASE-13728
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13728.1.patch


 GenericOptionsParser has been IA.Private for all of Hadoop 2 (handled in 
 HADOOP-6668) we shouldn't be using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13728) Remove use of Hadoop's GenericOptionsParser

2015-05-20 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14553027#comment-14553027
 ] 

Sean Busbey commented on HBASE-13728:
-

docs for Hadoop 1 indicate it should be safe on 0.98 as well.

 Remove use of Hadoop's GenericOptionsParser
 ---

 Key: HBASE-13728
 URL: https://issues.apache.org/jira/browse/HBASE-13728
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13728.1.patch


 GenericOptionsParser has been IA.Private for all of Hadoop 2 (handled in 
 HADOOP-6668) we shouldn't be using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13728) Remove use of Hadoop's GenericOptionsParser

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13728:

Status: Open  (was: Patch Available)

cancelling patch. should have searched before thinking I was done. :)

 Remove use of Hadoop's GenericOptionsParser
 ---

 Key: HBASE-13728
 URL: https://issues.apache.org/jira/browse/HBASE-13728
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
Priority: Blocker
 Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13728.1.patch


 GenericOptionsParser has been IA.Private for all of Hadoop 2 (handled in 
 HADOOP-6668) we shouldn't be using it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13727) Codehaus repository is out of service

2015-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13727:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to 0.98 and the branches 1.

 Codehaus repository is out of service
 -

 Key: HBASE-13727
 URL: https://issues.apache.org/jira/browse/HBASE-13727
 Project: HBase
  Issue Type: Bug
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Fix For: 0.98.13, 1.0.2, 1.2.0, 1.1.1

 Attachments: HBASE-13727-0.98.patch, HBASE-13727-branch-1.patch


 The Codehaus repository is now out of service and this can break our builds, 
 as found by BIGTOP-1874. Let's remove the dead repo entry from our POMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13725) [book] Pseudo-Distributed Local Install can link to hadoop instructions

2015-05-20 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created HBASE-13725:


 Summary: [book] Pseudo-Distributed Local Install can link to 
hadoop instructions
 Key: HBASE-13725
 URL: https://issues.apache.org/jira/browse/HBASE-13725
 Project: HBase
  Issue Type: Improvement
Reporter: Nick Dimiduk
Priority: Minor


The below is no longer true, we can link to 
http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html

{quote}
Hadoop Configuration
This procedure assumes that you have configured Hadoop and HDFS on your local 
system and or a remote system, and that they are running and available. It also 
assumes you are using Hadoop 2. Currently, the documentation on the Hadoop 
website does not include a quick start for Hadoop 2, but the guide at 
link:http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide
 is a good starting point.
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13711:
---
Attachment: HBASE-13711-v2.patch

 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first and 
 then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it. Not sure whether this is by design).
 Another solution is to provide an API that sets both min and max versions in 
 one function call.
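The combined-setter idea could look like the following sketch. The class and validation here are hypothetical stand-ins for HColumnDescriptor, just to show why a single call sidesteps the ordering problem:

```java
public class VersionsDemo {
    static int minVersions = 2;
    static int maxVersions = 3;

    // Hypothetical combined setter: validate the pair first, then assign
    // both fields, so the ordering of individual setters no longer matters.
    static void setVersions(int min, int max) {
        if (max < min) {
            throw new IllegalArgumentException(
                "Maximum versions must be >= minimum versions");
        }
        minVersions = min;
        maxVersions = max;
    }

    public static void main(String[] args) {
        // Calling setMaxVersions(1) alone would throw here, because
        // minVersions is still 2; the combined call succeeds.
        setVersions(1, 1);
        System.out.println(minVersions + " " + maxVersions);
    }
}
```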



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Stephen Yuan Jiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated HBASE-13711:
---
Attachment: (was: HBASE-13711-v2.patch)

 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first and 
 then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it. Not sure whether this is by design).
 Another solution is to provide an API that sets both min and max versions in 
 one function call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13698) Add RegionLocator methods to Thrift2 proxy.

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552866#comment-14552866
 ] 

stack commented on HBASE-13698:
---

+1

 Add RegionLocator methods to Thrift2 proxy.
 ---

 Key: HBASE-13698
 URL: https://issues.apache.org/jira/browse/HBASE-13698
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13698-v1.patch, HBASE-13698.patch


 Thrift2 doesn't provide the same functionality as the java client for getting 
 region locations. We should change that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13726) stop using Hadoop's IOUtils

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13726:

Attachment: HBASE-13726.1.patch

 stop using Hadoop's IOUtils
 ---

 Key: HBASE-13726
 URL: https://issues.apache.org/jira/browse/HBASE-13726
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
 Attachments: HBASE-13726.1.patch


 In HBaseFsck we make use of Hadoop's IOUtils for ignore-errors-while-closing.
 All of these methods (in the way we call them) behave the same as 
 commons-io's IOUtils.closeQuietly. One of the methods in the Hadoop version 
 also uses a parameter that isn't in org.apache.hadoop.
 We already have commons-io as a dependency in this module, we should just use 
 the commons-io version since it is stable and more limited in surface.
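What these calls amount to can be shown in a few lines. The helper below is a sketch of the behavior we rely on (close, swallow IOException, tolerate null), not the commons-io source itself:

```java
import java.io.Closeable;
import java.io.IOException;
import java.io.StringReader;

public class CloseQuietlyDemo {
    // Equivalent of IOUtils.closeQuietly as we use it: close the resource
    // and deliberately ignore any IOException raised while closing.
    static void closeQuietly(Closeable c) {
        if (c == null) {
            return;
        }
        try {
            c.close();
        } catch (IOException ignored) {
            // best-effort close; errors are intentionally swallowed
        }
    }

    public static void main(String[] args) {
        closeQuietly(new StringReader("x"));
        closeQuietly(null); // null-safe, like the commons-io version
        System.out.println("closed");
    }
}
```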



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13726) stop using Hadoop's IOUtils

2015-05-20 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-13726:

Status: Patch Available  (was: Open)

 stop using Hadoop's IOUtils
 ---

 Key: HBASE-13726
 URL: https://issues.apache.org/jira/browse/HBASE-13726
 Project: HBase
  Issue Type: Task
Reporter: Sean Busbey
Assignee: Sean Busbey
 Attachments: HBASE-13726.1.patch


 In HBaseFsck we make use of Hadoop's IOUtils for ignore-errors-while-closing.
 All of these methods (in the way we call them) behave the same as 
 commons-io's IOUtils.closeQuietly. One of the methods in the Hadoop version 
 also uses a parameter that isn't in org.apache.hadoop.
 We already have commons-io as a dependency in this module, we should just use 
 the commons-io version since it is stable and more limited in surface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13727) Codehaus repository is out of service

2015-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13727:
---
Attachment: HBASE-13727-branch-1.patch
HBASE-13727-0.98.patch

Only a problem for 0.98 and branch-1 (and -1.0 and -1.1)

 Codehaus repository is out of service
 -

 Key: HBASE-13727
 URL: https://issues.apache.org/jira/browse/HBASE-13727
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.13, 1.0.2, 1.2.0, 1.1.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-13727-0.98.patch, HBASE-13727-branch-1.patch


 The Codehaus repository is now out of service and this can break our builds, 
 as found by BIGTOP-1874. Let's remove the dead repo entry from our POMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13727) Codehaus repository is out of service

2015-05-20 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13727:
---
Affects Version/s: (was: 2.0.0)

 Codehaus repository is out of service
 -

 Key: HBASE-13727
 URL: https://issues.apache.org/jira/browse/HBASE-13727
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.13, 1.0.2, 1.2.0, 1.1.1
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-13727-0.98.patch, HBASE-13727-branch-1.patch


 The Codehaus repository is now out of service and this can break our builds, 
 as found by BIGTOP-1874. Let's remove the dead repo entry from our POMs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12451) IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits in rolling update of cluster

2015-05-20 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552927#comment-14552927
 ] 

stack commented on HBASE-12451:
---

Looking at the patch, I like the way you have the master returning data to the 
regionserver.

This changes the behavior of IncreasingToUpperBoundRegionSplitPolicy (though it 
broke, as you note at the head of this issue, during rolling restart/rebalance). 
Have you tried this patch? Does it work? Does using the average rather than the 
RS count provide enough damping that we put off splits during a rolling update? 
On the face of it, it should.

 IncreasingToUpperBoundRegionSplitPolicy may cause unnecessary region splits 
 in rolling update of cluster
 

 Key: HBASE-12451
 URL: https://issues.apache.org/jira/browse/HBASE-12451
 Project: HBase
  Issue Type: Bug
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor
 Fix For: 2.0.0

 Attachments: HBASE-12451-v1.diff, HBASE-12451-v2.diff


 Currently IncreasingToUpperBoundRegionSplitPolicy is the default region split 
 policy. In this policy, the split size is the number of regions of the same 
 table on this server, cubed, times 2x the region flush size.
 But when unloading regions from a regionserver in a cluster using 
 region_mover.rb, the number of regions of the same table on this server 
 decreases, and the split size decreases too, which may cause the remaining 
 regions on the regionserver to split. Region splits also happen when loading 
 regions onto a regionserver in a cluster.
 An improvement may be to set a minimum split size in 
 IncreasingToUpperBoundRegionSplitPolicy.
 Suggestions are welcome. Thanks~



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13448) New Cell implementation with cached component offsets/lengths

2015-05-20 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-13448:
--
Attachment: 13448-0.98.txt

Full 0.98 patch this time.
[~anoop.hbase], did I miss anything in that patch? I'll do my test-run again.

 New Cell implementation with cached component offsets/lengths
 -

 Key: HBASE-13448
 URL: https://issues.apache.org/jira/browse/HBASE-13448
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0

 Attachments: 13448-0.98.txt, HBASE-13448.patch, HBASE-13448_V2.patch, 
 HBASE-13448_V3.patch, gc.png, hits.png


 This can be extension to KeyValue and can be instantiated and used in read 
 path.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13711) Provide an API to set min and max versions in HColumnDescriptor

2015-05-20 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14552740#comment-14552740
 ] 

Nick Dimiduk commented on HBASE-13711:
--

+1 on patch for branch-1.1.

This all ready to go in [~syuanjiang]? I can commit this morning.

 Provide an API to set min and max versions in HColumnDescriptor
 ---

 Key: HBASE-13711
 URL: https://issues.apache.org/jira/browse/HBASE-13711
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: Stephen Yuan Jiang
Assignee: Stephen Yuan Jiang
Priority: Minor
 Fix For: 2.0.0, 0.98.14, 1.2.0, 1.1.1

 Attachments: HBASE-13711-v2.patch, HBASE-13711.patch, 
 HBASE-13711.v1-branch-1.1.patch


 In org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction#perform(), it 
 tries to update the max and min versions in a column descriptor: 
 {code}
  for(HColumnDescriptor descriptor:columnDescriptors) {
descriptor.setMaxVersions(versions);
descriptor.setMinVersions(versions);
  }
 {code}
 If the current minimum version is greater than the new max version, an 
 IllegalArgumentException would be thrown from 
 org.apache.hadoop.hbase.HColumnDescriptor#setMaxVersions().
 Here is an example (trying to set the max version to 1 while the current min 
 version is 2):
 {noformat}
 java.lang.IllegalArgumentException: Set MaxVersion to 1 while minVersion is 
 2. Maximum versions must be >= minimum versions
 at 
 org.apache.hadoop.hbase.HColumnDescriptor.setMaxVersions(HColumnDescriptor.java:634)
 at 
 org.apache.hadoop.hbase.chaos.actions.ChangeVersionsAction.perform(ChangeVersionsAction.java:62)
 {noformat}
 One solution is to change the order of the calls: set the min version first and 
 then set the max version (note: the current implementation of 
 org.apache.hadoop.hbase.HColumnDescriptor#setMinVersions() does not check the 
 min version value and blindly sets it. Not sure whether this is by design).
 Another solution is to provide an API that sets both min and max versions in 
 one function call.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-13618) ReplicationSource is too eager to remove sinks

2015-05-20 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-13618.
---
Resolution: Fixed

Changed resolution to Fixed. 

 ReplicationSource is too eager to remove sinks
 --

 Key: HBASE-13618
 URL: https://issues.apache.org/jira/browse/HBASE-13618
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 2.0.0, 0.98.13, 1.0.2, 1.1.1

 Attachments: 13618-v2.txt, 13618.txt


 Looking at the replication for some other reason, I noticed that the 
 replication source might be a bit too eager to remove sinks from the list of 
 valid sinks.
 The current logic allows a sink to fail N times (default 3) and then it will 
 be removed from the sinks. But note that this failure count is never reduced, 
 so given enough runtime and some network glitches _every_ sink will 
 eventually be removed. When all sinks are removed, the source picks new sinks 
 and the counter is set to 0 for all of them.
 I think we should change this to reset the counter each time we successfully 
 replicate something to the sink (which proves the sink isn't dead). Or we 
 could decrease the counter each time we successfully replicate; that might 
 be better: if we consistently fail more attempts than we succeed, the sink 
 should be removed.
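The decrease-on-success variant can be sketched in a few lines. The class below is a hypothetical illustration of the counter logic, not the actual ReplicationSource code:

```java
public class SinkHealth {
    private final int maxFailures;
    private int failures = 0;

    SinkHealth(int maxFailures) {
        this.maxFailures = maxFailures;
    }

    void onFailure() {
        failures++;
    }

    // The suggested change: decrease the counter on each success instead of
    // never reducing it, so transient glitches do not accumulate forever.
    void onSuccess() {
        failures = Math.max(0, failures - 1);
    }

    boolean shouldRemove() {
        return failures >= maxFailures;
    }

    public static void main(String[] args) {
        SinkHealth sink = new SinkHealth(3);
        sink.onFailure();
        sink.onFailure();
        sink.onSuccess(); // counter drops back to 1
        sink.onFailure();
        System.out.println(sink.shouldRemove()); // 2 failures, below the limit of 3
    }
}
```

Under the current never-decreasing scheme the same sequence of events would count as 3 failures and evict the sink; with decay it survives because successes outweigh the glitches.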



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

