[jira] [Updated] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-10-25 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-10378:

Release Note: 
HBase internals for the write ahead log have been refactored. Advanced users of 
HBase should be aware of the following changes.
  - The command for analyzing write ahead logs has been renamed from 'hlog' to 
'wal'. The old usage is deprecated and will be removed in a future version.
  - Some utility methods in HBaseTestingUtility related to testing 
write-ahead logs were changed in incompatible ways. No functionality has been 
removed, but method names and arguments have changed. See the javadoc for 
HBaseTestingUtility for details.
  - The labeling of server metrics on the region server status pages changed. 
Previously, the number of backing files for the write ahead log was labeled 
'Num. HLog Files'. If you wish to see this statistic now, please look for the 
label 'Num. WAL Files.'  If you rely on JMX for these metrics, their location 
has not changed.

Adding release notes in progress.

 Divide HLog interface into User and Implementor specific interfaces
 ---

 Key: HBASE-10378
 URL: https://issues.apache.org/jira/browse/HBASE-10378
 Project: HBase
  Issue Type: Sub-task
  Components: wal
Reporter: Himanshu Vashishtha
Assignee: Sean Busbey
 Fix For: 2.0.0, 0.99.2

 Attachments: 10378-1.patch, 10378-2.patch


 HBASE-5937 introduced the HLog interface as a first step toward supporting 
 multiple WAL implementations. The interface is a good start, but it has some 
 limitations/drawbacks in its current state, such as:
 1) There is no clear distinction between User and Implementor APIs: it 
 provides APIs both for WAL users (append, sync, etc.) and for WAL 
 implementors (Reader/Writer interfaces, etc.). Some APIs are very much 
 implementation specific (getFileNum, etc.), and a user such as a 
 RegionServer shouldn't need to know about them.
 2) There are about 14 methods in FSHLog which are not present in the HLog 
 interface but are used in several places in the unit test code. These tests 
 typecast HLog to FSHLog, which makes it very difficult to test multiple WAL 
 implementations without some ugly checks.
 I'd like to propose some changes to the HLog interface that would ease the 
 multi-WAL story:
 1) Have two interfaces, WAL and WALService. WAL provides APIs for 
 implementors; WALService provides APIs for users (such as the RegionServer).
 2) Provide a skeleton implementation of the above two interfaces as the base 
 class for other WAL implementations (AbstractWAL). It holds the fields 
 required by all subclasses (fs, conf, log dir, etc.). Define a minimal set of 
 test-only methods and add that set to AbstractWAL.
 3) Have HLogFactory return a WALService reference when creating a WAL 
 instance; if a user needs to access impl-specific APIs (there are unit tests 
 which get the WAL from an HRegionServer and then call impl-specific APIs), 
 cast to AbstractWAL.
 4) Make TestHLog abstract and let each implementor provide its own test 
 class extending TestHLog (TestFSHLog, for example).
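The proposed split can be sketched in a few lines of Java. All signatures below are illustrative, not the real HBase API; only the names WAL, WALService, AbstractWAL, FSHLog, and append/sync/getFileNum come from the proposal above.

```java
public class WalSketch {

    /** User-facing API: all a RegionServer should need. */
    public interface WALService {
        long append(String edit);
        void sync();
    }

    /** Implementor-facing API: adds impl-specific details such as file numbers. */
    public interface WAL extends WALService {
        long getFileNum();
    }

    /** Skeleton base class holding state shared by all implementations. */
    public static abstract class AbstractWAL implements WAL {
        protected long fileNum;   // stand-ins for fs, conf, log dir, ...
        protected long entries;
        public long getFileNum() { return fileNum; }
    }

    /** Toy filesystem-backed implementation. */
    public static class FSHLog extends AbstractWAL {
        public long append(String edit) { return ++entries; }
        public void sync() { /* would flush to the filesystem here */ }
    }

    public static void main(String[] args) {
        WALService wal = new FSHLog();   // the factory would hand out WALService
        wal.append("edit-1");
        wal.sync();
        // Tests needing impl details cast to AbstractWAL, not to FSHLog,
        // so any WAL implementation can be swapped in.
        AbstractWAL impl = (AbstractWAL) wal;
        System.out.println(impl.getFileNum());
    }
}
```

The point of the cast in main is item 3 above: test code depends on the skeleton, never on a concrete implementation.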



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10378) Divide HLog interface into User and Implementor specific interfaces

2014-10-25 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184001#comment-14184001
 ] 

Sean Busbey commented on HBASE-10378:
-

Cancelled the patch while I fix stack's last round of comments and make the 
change to HLogKey binary compatible.



[jira] [Updated] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12202:
---
Attachment: HBASE-12202-addendum.patch

ByteBufferUtils#compareTo has an issue; the addendum fixes it.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184024#comment-14184024
 ] 

Dima Spivak commented on HBASE-11912:
-

Looks like [50 tests started 
failing|https://builds.apache.org/job/HBase-TRUNK/5699/] after this went in, 
[~apurtell]. :-\

 Catch some bad practices at compile time with error-prone
 -

 Key: HBASE-11912
 URL: https://issues.apache.org/jira/browse/HBASE-11912
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
 Attachments: HBASE-11912.patch, HBASE-11912.patch, HBASE-11912.patch


 Google's error-prone (https://code.google.com/p/error-prone/) wraps javac 
 with additional static analysis that generates extra warnings or errors at 
 compile time if certain bug patterns 
 (https://code.google.com/p/error-prone/wiki/BugPatterns) are detected. What's 
 nice about this approach, as opposed to findbugs, is that the compile-time 
 detection and erroring out prevent the detected problems from getting into 
 the codebase in the first place.
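As one concrete example of such a bug pattern, error-prone's StringEquality check flags reference comparison of strings, which plain javac happily compiles (example code, not from the patch):

```java
public class StringEqualityDemo {
    // error-prone flags this comparison at compile time (StringEquality);
    // plain javac accepts it, and at runtime it silently compares references.
    public static boolean brokenEquals(String a, String b) {
        return a == b; // should be a.equals(b)
    }

    public static void main(String[] args) {
        String copy = new String("hbase");  // distinct object, same contents
        System.out.println(brokenEquals(copy, "hbase")); // false
        System.out.println(copy.equals("hbase"));        // true
    }
}
```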





[jira] [Updated] (HBASE-11368) Multi-column family BulkLoad fails if compactions go on too long

2014-10-25 Thread Qiang Tian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Qiang Tian updated HBASE-11368:
---
Attachment: key_stacktrace_hbase10882.TXT

Hi [~stack],
Sorry for the confusion; let me explain from scratch:
1) The root cause of the problem - HRegion#lock.
From the stacktrace in HBASE-10882 (also see the attached 
key_stacktrace_hbase10882.TXT), the event sequence is:
1.1) The compaction acquires the read lock of HRegion#lock.
1.2) The bulk load tries to acquire the write lock of HRegion#lock if there are 
multiple CFs; it has to wait for the compaction to release the read lock.
1.3) Scanners try to acquire the read lock of HRegion#lock; they have to wait 
for the bulk load to release the write lock.
So both the bulk load and the scanners are blocked on HRegion#lock by the 
compaction.
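Steps 1.1 and 1.2 can be reproduced with a plain JDK ReentrantReadWriteLock (the class backing HRegion#lock); this standalone sketch is not HBase code:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RegionLockDemo {
    /**
     * Returns true if a "bulk load" write lock can be taken while a
     * "compaction" read lock is held. It cannot: the write lock stays
     * unavailable until every reader releases, and meanwhile new readers
     * (the scanners of step 1.3) typically queue behind the waiting writer.
     */
    public static boolean writeLockAvailableDuringRead() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();                        // 1.1: compaction running
        boolean acquired = lock.writeLock().tryLock(); // 1.2: bulk load attempt
        if (acquired) {
            lock.writeLock().unlock();
        }
        lock.readLock().unlock();
        return acquired;
    }

    public static void main(String[] args) {
        System.out.println(writeLockAvailableDuringRead()); // false
    }
}
```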

2) What is HRegion#lock used for?
Investigation shows HRegion#lock was originally designed to protect region 
close ONLY: if someone, such as a region split, wants to close the region, it 
needs to wait for the others to release the read lock.
Then HBASE-4552 used the lock to solve the multi-CF bulk load consistency 
issue; now we see it is too heavy.

3) Can we avoid HRegion#lock in the bulk load?
The answer is yes.
Internally, HStore#DefaultStoreFileManager#storefiles keeps track of the 
on-disk HFiles for a CF. The bulk load takes these steps:
3.1) Move the HFiles directly into the region directory.
3.2) Add them to the {{storefiles}} list.
3.3) Notify StoreScanner that the HFile list has changed, which is done by 
resetting StoreScanner#heap to null. This forces existing StoreScanner 
instances to reinitialize based on the new HFiles seen on disk at the next 
scan/read request.
Steps 3.2 and 3.3 are synchronized by HStore#lock, so we have CF-level 
scan-bulkload consistency.
 
To achieve multi-CF scan-bulkload consistency without HRegion#lock, we still 
need another region-level lock --- a RegionScanner is composed of multiple 
StoreScanners, and a StoreScanner (a CF scanner) is composed of a 
MemStoreScanner and multiple StoreFileScanners.

RegionScannerImpl#storeHeap (and joinedHeap) is just the entry point to the 
multiple StoreScanners. To have multi-CF consistency we need synchronization 
here - a lock is needed, but it is used only between scan and bulk load.

Regarding the code change you referenced, 
performance_improvement_verification_98.5.patch simulates the event sequence 
described in #1, for testing purposes only.

Currently I use 0.98.5 for testing since it is stable and it is easy to 
evaluate the effect of the change.
Thanks.









 Multi-column family BulkLoad fails if compactions go on too long
 

 Key: HBASE-11368
 URL: https://issues.apache.org/jira/browse/HBASE-11368
 Project: HBase
  Issue Type: Bug
Reporter: stack
Assignee: Qiang Tian
 Attachments: hbase-11368-0.98.5.patch, key_stacktrace_hbase10882.TXT, 
 performance_improvement_verification_98.5.patch


 Compactions take a read lock.  If it is a multi-column-family region, we want 
 to take a write lock on the region before bulk loading.  If the compaction 
 takes too long, the bulk load fails.
 Various recipes include:
 + Making smaller regions (lame)
 + [~victorunique] suggests major compacting just before bulk loading over in 
 HBASE-10882 as a workaround.
 Does the compaction need a read lock for that long?  Does the bulk load need 
 a full write lock when there are multiple column families?  Can we fail more 
 gracefully at least?





[jira] [Commented] (HBASE-11368) Multi-column family BulkLoad fails if compactions go on too long

2014-10-25 Thread Qiang Tian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184066#comment-14184066
 ] 

Qiang Tian commented on HBASE-11368:


the attachments:
{{key_stacktrace_hbase10882.TXT}} : the problem stacktrace
{{hbase-11368-0.98.5.patch}} : the fix
{{performance_improvement_verification_98.5.patch}}: the testcase to verify 
performance improvement






[jira] [Commented] (HBASE-11425) Cell/DBB end-to-end on the read-path

2014-10-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184075#comment-14184075
 ] 

Anoop Sam John commented on HBASE-11425:


{quote}
Testing with 2 million Cells, with a single cell per row.
Writing all cells to a BB/DBB and then seeking to the last kv (to make the 
compare go across all cells in the BB/DBB).
The seek code is like what we have in ScannerV3#blockSeek.
With an RK length of 17 bytes (first 13 bytes the same) I get almost the same 
result.
With an RK length of 117 bytes (first 113 bytes the same) the DBB based read 
is ~3% slower.
{quote}
Well, in that test both the read and the compare went through the HBB and DBB 
APIs, and those are almost the same.
Our CellComparator has an Unsafe based optimization, which my old test did not 
use. With Unsafe based reads from HBB#array() [this is what happens in 
HFileReaderV2/V3] there is a significant perf diff with DBB: with an RK length 
of 117 bytes, 2 million cells, and a seek to the last cell, the DBB test is 
50% slower. :(

I am thinking of doing Unsafe based compares for data in DBBs as well.

Having just done Unsafe based access for both DBB and HBB, we are in better 
shape: the DBB based version of the above test is ~12% slower than the old 
HBB.array() based compares. Will raise a subtask and attach the approach there.
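The non-Unsafe baseline being discussed can be sketched as an absolute-get comparator that treats heap and direct buffers uniformly. This is illustrative code, not the real ByteBufferUtils implementation; the Unsafe variant speeds it up by comparing 8 bytes per step.

```java
import java.nio.ByteBuffer;

public class BBCompare {
    /**
     * Lexicographic unsigned-byte comparison via absolute gets, which works
     * for heap and direct ByteBuffers alike without touching array().
     */
    public static int compareTo(ByteBuffer a, int aOff, int aLen,
                                ByteBuffer b, int bOff, int bLen) {
        int n = Math.min(aLen, bLen);
        for (int i = 0; i < n; i++) {
            int x = a.get(aOff + i) & 0xff;  // unsigned compare, byte by byte
            int y = b.get(bOff + i) & 0xff;
            if (x != y) return x - y;
        }
        return aLen - bLen;                  // shorter key sorts first
    }

    /** Convenience overload for heap arrays. */
    public static int compareTo(byte[] x, byte[] y) {
        return compareTo(ByteBuffer.wrap(x), 0, x.length,
                         ByteBuffer.wrap(y), 0, y.length);
    }

    public static void main(String[] args) {
        ByteBuffer direct = ByteBuffer.allocateDirect(3);
        direct.put(new byte[]{1, 2, 4});
        // heap {1,2,3} sorts before direct {1,2,4}
        System.out.println(compareTo(ByteBuffer.wrap(new byte[]{1, 2, 3}), 0, 3,
                                     direct, 0, 3) < 0); // true
    }
}
```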


 Cell/DBB end-to-end on the read-path
 

 Key: HBASE-11425
 URL: https://issues.apache.org/jira/browse/HBASE-11425
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 0.99.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John

 Umbrella jira to make sure we can have blocks cached in an offheap-backed 
 cache. Across the entire read path, we can then refer to this offheap buffer 
 and avoid onheap copying.
 The high-level items I can identify as of now are:
 1. Avoid the array() call on the BB in the read path. (This is there in many 
 classes; we can handle it class by class.)
 2. Support Buffer-based getter APIs in Cell. In the read path we will create 
 a new Cell backed by a BB. This will be needed in CellComparator, Filters 
 (like SCVF), CPs, etc.
 3. Avoid KeyValue.ensureKeyValue() calls in the read path - these make byte 
 copies.
 4. Remove all CP hooks (which are already deprecated) which deal with KVs 
 (in the read path).
 Will add subtasks under this.





[jira] [Created] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-12345:
--

 Summary: Unsafe based Comparator for BB 
 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John








[jira] [Updated] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12345:
---
Attachment: HBASE-12345.patch



[jira] [Assigned] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reassigned HBASE-12345:
--

Assignee: Anoop Sam John  (was: Anoop Sam John)



[jira] [Assigned] (HBASE-11425) Cell/DBB end-to-end on the read-path

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reassigned HBASE-11425:
--

Assignee: Anoop Sam John  (was: Anoop Sam John)



[jira] [Assigned] (HBASE-12213) HFileBlock backed by Array of ByteBuffers

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reassigned HBASE-12213:
--

Assignee: Anoop Sam John  (was: Anoop Sam John)

 HFileBlock backed by Array of ByteBuffers
 -

 Key: HBASE-12213
 URL: https://issues.apache.org/jira/browse/HBASE-12213
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John

 In the L2 (offheap) cache, an HFile block might have been cached into 
 multiple chunks of buffers. If HFileBlock needs a single BB, we end up 
 recreating a bigger BB and copying. Instead we can make HFileBlock serve 
 data from an array of BBs.
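The idea can be sketched with a reader that translates a block-relative offset into (chunk, offset-within-chunk) instead of materializing one big buffer. The class and byte[][] chunks below are hypothetical stand-ins for the cached ByteBuffer chunks:

```java
public class ChunkedBlock {
    private final byte[][] chunks; // stand-ins for the L2 cache's BB chunks

    public ChunkedBlock(byte[][] chunks) { this.chunks = chunks; }

    /** Read one byte at a block-relative offset without copying any chunk. */
    public byte get(int offset) {
        int i = 0;
        while (offset >= chunks[i].length) { // walk to the chunk owning offset
            offset -= chunks[i].length;
            i++;
        }
        return chunks[i][offset];
    }

    public static void main(String[] args) {
        // A "block" of 5 bytes cached as two chunks.
        ChunkedBlock block = new ChunkedBlock(
            new byte[][]{{10, 11, 12}, {13, 14}});
        System.out.println(block.get(4)); // 14: second chunk, offset 1
    }
}
```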





[jira] [Commented] (HBASE-12282) Ensure Cells and its implementations work with Buffers also

2014-10-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184111#comment-14184111
 ] 

Anoop Sam John commented on HBASE-12282:


Instead of adding new Buffer based APIs to Cell, we can have a new interface 
extending Cell and let the server-side Cells in the read path implement this 
new interface, adding the getXXXBuffer() APIs there.

We should avoid adding new fields to KeyValue, which would increase heap usage.

A getXXXBuffer() API returning a reference to the actual buffer can be 
problematic: we might need to pass it even to CP hooks, and any op changing 
the position of the buffer can be an issue. This buffer will be the same 
buffer backing the HFileBlock. Duplicating/slicing the actual buffer every 
time can cause too much object creation. So we can do one thing: the Cell 
impl refers to a wrapper BB object which wraps the actual buffer. The wrapper 
should allow absolute-positioned read-only APIs only, and all other APIs 
should be illegal ops.

Based on the type of the Cell, the CellComparator can then use either the 
buffer based APIs or the array based APIs.
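That wrapper idea can be sketched as a class exposing only absolute reads over the shared buffer; the class name and method set below are hypothetical:

```java
import java.nio.ByteBuffer;

/**
 * Hypothetical wrapper along the lines suggested above: only absolute,
 * read-only accessors, so a Cell (or a CP hook) can never disturb the
 * position/limit of the shared HFileBlock buffer.
 */
public final class ReadOnlyRange {
    private final ByteBuffer buf; // shared backing buffer, never exposed

    public ReadOnlyRange(ByteBuffer buf) { this.buf = buf; }

    public static ReadOnlyRange wrap(byte[] bytes) {
        return new ReadOnlyRange(ByteBuffer.wrap(bytes));
    }

    public byte get(int index)    { return buf.get(index); }    // absolute read
    public int  getInt(int index) { return buf.getInt(index); } // absolute read

    /** Positional ops are illegal, as proposed. */
    public void position(int newPosition) {
        throw new UnsupportedOperationException("read-only absolute access");
    }

    public static void main(String[] args) {
        ReadOnlyRange r = ReadOnlyRange.wrap(new byte[]{0, 0, 0, 42});
        System.out.println(r.getInt(0)); // 42, without moving any position
    }
}
```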


 Ensure Cells and its implementations work with Buffers also
 ---

 Key: HBASE-12282
 URL: https://issues.apache.org/jira/browse/HBASE-12282
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Affects Versions: 0.99.1
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12224_2.patch


 This issue can be used to brainstorm and then make the necessary changes for 
 the offheap work.  All impls of Cell deal with byte[], but when we change the 
 HFileBlocks/Readers to work purely with Buffers, the byte[] usage would mean 
 that the data is always copied onheap.  Cell may need some interface changes 
 to implement this.





[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184123#comment-14184123
 ] 

Andrew Purtell commented on HBASE-11912:


All failures are TestHBaseFsck and related tests in util.hbck.*; something 
weird happened. I kicked off another trunk build to see if it's repeatable. 



[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184134#comment-14184134
 ] 

Andrew Purtell commented on HBASE-11912:


TestHBaseFsck isn't happy. Reverted for now. 



[jira] [Updated] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-11870:
---
   Resolution: Fixed
Fix Version/s: 0.99.2
   2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, Ram & Andy.
Pushed to 0.99 and master.

 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags.  We can avoid this:
 create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to this old buffer.
 It will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells, not KVs, in the write path.
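A minimal sketch of the wrapping idea follows; the Cell interface here is heavily simplified, and only the class name TagRewriteCell comes from the patch:

```java
public class TagRewriteSketch {

    /** Heavily simplified stand-in for org.apache.hadoop.hbase.Cell. */
    public interface Cell {
        byte[] getValueArray();
        byte[] getTagsArray();
    }

    public static class SimpleCell implements Cell {
        private final byte[] value;
        public SimpleCell(byte[] value) { this.value = value; }
        public byte[] getValueArray() { return value; }
        public byte[] getTagsArray() { return new byte[0]; }
    }

    /** Wraps an existing cell; only the tags get new storage. */
    public static class TagRewriteCell implements Cell {
        private final Cell cell;   // original cell, untouched
        private final byte[] tags; // replacement tags
        public TagRewriteCell(Cell cell, byte[] tags) {
            this.cell = cell;
            this.tags = tags;
        }
        public byte[] getValueArray() { return cell.getValueArray(); } // no copy
        public byte[] getTagsArray() { return tags; }
    }

    public static void main(String[] args) {
        Cell original = new SimpleCell(new byte[]{1, 2});
        Cell tagged = new TagRewriteCell(original, new byte[]{9});
        // Same backing array for the value: nothing was copied to add tags.
        System.out.println(tagged.getValueArray() == original.getValueArray());
    }
}
```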





[jira] [Updated] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11912:
---
Assignee: Andrew Purtell
  Status: Patch Available  (was: Open)



[jira] [Updated] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11912:
---
Attachment: HBASE-11912.patch

Found the problem. I made some changes to ClusterStatus in this patch to fix 
error-prone ERRORs, and those changes had unintended consequences only exposed 
by TestHBaseFsck; nobody else asks for the dead server list from a 
ClusterStatus unpacked from protobuf.

Attaching an updated patch.

I also went back and further minimized this patch's changes to TestHBaseFsck 
for setting detailed logging: rather than convert some static method callers 
to static invocations, I just dropped the 'static' qualifier from the relevant 
hbck method; none of the others like it are declared static. So it's a 
one-line change instead of ~10. 

Kicking off a Jenkins run.



[jira] [Comment Edited] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184167#comment-14184167
 ] 

Andrew Purtell edited comment on HBASE-11912 at 10/25/14 5:17 PM:
--

Found the problem. I made some changes to ClusterStatus in this patch to fix 
error-prone ERRORs, and those changes had unintended consequences only exposed 
by TestHBaseFsck; nobody else asks for the dead server list from a 
ClusterStatus unpacked from protobuf except integration tests.

Attaching an updated patch.

I also went back and further minimized this patch's changes to TestHBaseFsck 
for setting detailed logging: rather than convert some static method callers 
to static invocations, I just dropped the 'static' qualifier from the relevant 
hbck method; none of the others like it are declared static. So it's a 
one-line change instead of ~10. 

Kicking off a Jenkins run.





[jira] [Commented] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184169#comment-14184169
 ] 

Ted Yu commented on HBASE-12345:


{code}
+  theUnsafe = (Unsafe) AccessController.doPrivileged(new 
PrivilegedAction<Object>() {
{code}
AccessController is in the hbase-server module. Can it be used by a class in 
hbase-common?



[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184172#comment-14184172
 ] 

Hudson commented on HBASE-11870:


SUCCESS: Integrated in HBase-TRUNK #5701 (See 
[https://builds.apache.org/job/HBase-TRUNK/5701/])
HBASE-11870 Optimization : Avoid copy of key and value for tags addition in AC 
and VC. (anoop.s.john: rev 0fb4c4d5f08fd4f59771193c1542c28cf8154f35)
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/SettableTimestamp.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/SettableSequenceId.java


 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 This will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KVs.
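The wrapping idea above can be sketched as follows. This uses a simplified stand-in interface; the real org.apache.hadoop.hbase.Cell has many more accessors, and the actual TagRewriteCell may differ in shape:

```java
import java.util.Arrays;

public class TagRewriteDemo {
    // Simplified stand-in for the real Cell interface (hypothetical).
    interface SimpleCell {
        byte[] getRowArray();
        byte[] getValueArray();
        byte[] getTagsArray();
    }

    // Wraps an existing cell: key/value reads go straight to the original
    // backing array; only the tags live in new byte[] state, so adding a
    // tag requires no copy of key or value.
    static final class TagRewriteCellSketch implements SimpleCell {
        private final SimpleCell cell;
        private final byte[] tags;

        TagRewriteCellSketch(SimpleCell cell, byte[] tags) {
            this.cell = cell;
            this.tags = tags;
        }

        @Override public byte[] getRowArray()   { return cell.getRowArray(); }
        @Override public byte[] getValueArray() { return cell.getValueArray(); }
        @Override public byte[] getTagsArray()  { return tags; }
    }

    public static void main(String[] args) {
        byte[] row = "r1".getBytes();
        byte[] val = "v1".getBytes();
        SimpleCell original = new SimpleCell() {
            @Override public byte[] getRowArray()   { return row; }
            @Override public byte[] getValueArray() { return val; }
            @Override public byte[] getTagsArray()  { return new byte[0]; }
        };
        SimpleCell tagged = new TagRewriteCellSketch(original, "acl-tag".getBytes());
        // Same backing array for the key parts, new state only for tags.
        System.out.println(tagged.getRowArray() == row);
        System.out.println(Arrays.equals(tagged.getTagsArray(), "acl-tag".getBytes()));
    }
}
```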



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184171#comment-14184171
 ] 

Hudson commented on HBASE-11912:


SUCCESS: Integrated in HBase-TRUNK #5701 (See 
[https://builds.apache.org/job/HBase-TRUNK/5701/])
Revert HBASE-11912 Catch some bad practices at compile time with error-prone 
(apurtell: rev ff5bc351b24512357292025eb48adef3ec328ba1)
* hbase-client/pom.xml
* hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift2/HTablePool.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestScannerSelectionUsingTTL.java
* hbase-server/pom.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/hbck/OfflineMetaRepair.java
* hbase-examples/pom.xml
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestSeekTo.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestTableSnapshotInputFormat.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/coprocessor/TestMasterCoprocessorExceptionWithAbort.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestCase.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/hbck/HbckTestingUtil.java
* hbase-hadoop-compat/pom.xml
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/HTableWrapper.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestStoreFile.java
* hbase-hadoop2-compat/pom.xml
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/util/TestMergeTool.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/ClusterStatus.java
* pom.xml
* 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/keyvalue/TestKeyValueTool.java
* hbase-it/pom.xml
* 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestPrefixTreeSearcher.java
* hbase-shell/pom.xml
* hbase-prefix-tree/pom.xml
* hbase-thrift/pom.xml
* 
hbase-prefix-tree/src/test/java/org/apache/hadoop/hbase/codec/prefixtree/row/TestRowData.java
* hbase-common/pom.xml
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/TestHeapSize.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java


 Catch some bad practices at compile time with error-prone
 -

 Key: HBASE-11912
 URL: https://issues.apache.org/jira/browse/HBASE-11912
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-11912.patch, HBASE-11912.patch, HBASE-11912.patch, 
 HBASE-11912.patch


 Google's error-prone (https://code.google.com/p/error-prone/) wraps javac 
 with some additional static analysis that will generate additional warnings 
 or errors at compile time if certain bug patterns 
 (https://code.google.com/p/error-prone/wiki/BugPatterns) are detected. What's 
 nice about this approach, as opposed to findbugs, is the compile time 
 detection and erroring out prevent the detected problems from getting into 
 the codebase up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12335) IntegrationTestRegionReplicaPerf is flaky

2014-10-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184175#comment-14184175
 ] 

Nick Dimiduk commented on HBASE-12335:
--

Removing meta's RS from the candidate RSs that can be clobbered improved 
things drastically. The 99.99pct is consistently faster now, roughly 2x the 99.9pct 
instead of 10x the 99.9pct. Full numbers are on a new tab in the spreadsheet.

 IntegrationTestRegionReplicaPerf is flaky
 -

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.99.0, 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12335.00-0.99.patch, HBASE-12335.00.patch, 
 HBASE-12335.00.patch, HBASE-12335.00.patch


 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.
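The proposed assertion — compare the average spread of latencies rather than raw speed — can be sketched as below. The numbers are invented purely for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class LatencyVarianceCheck {
    // Population standard deviation of a list of latency samples.
    static double stdev(List<Double> xs) {
        double mean = xs.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double var = xs.stream()
                       .mapToDouble(x -> (x - mean) * (x - mean))
                       .average().orElse(0);
        return Math.sqrt(var);
    }

    public static void main(String[] args) {
        // Hypothetical per-run latencies (ms), with and without read replicas.
        List<Double> withoutReplicas = Arrays.asList(10.0, 50.0, 12.0, 48.0);
        List<Double> withReplicas = Arrays.asList(20.0, 24.0, 21.0, 23.0);
        // Proposed check: enabling replicas should reduce latency variance,
        // even when mean latency is not strictly lower.
        boolean varianceReduced = stdev(withReplicas) < stdev(withoutReplicas);
        System.out.println(varianceReduced);
    }
}
```

This is a steadier signal for a flaky integration test than asserting on absolute completion time.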



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11870) Optimization : Avoid copy of key and value for tags addition in AC and VC

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184188#comment-14184188
 ] 

Hudson commented on HBASE-11870:


SUCCESS: Integrated in HBase-1.0 #360 (See 
[https://builds.apache.org/job/HBase-1.0/360/])
HBASE-11870 Optimization : Avoid copy of key and value for tags addition in AC 
and VC. (anoop.s.john: rev 4d385d1509d2b20fb0e455c20f28cf0d3b059001)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/wal/HLogSplitter.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/Tag.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/SettableSequenceId.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/SettableTimestamp.java


 Optimization : Avoid copy of key and value for tags addition in AC and VC
 -

 Key: HBASE-11870
 URL: https://issues.apache.org/jira/browse/HBASE-11870
 Project: HBase
  Issue Type: Improvement
  Components: Performance, security
Affects Versions: 0.98.0
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-11870.patch


 In AC and VC we have to add the per-cell ACL tags / visibility tags to Cells. 
 We get KeyValue objects, which need one backing array with key, value and 
 tags. So in order to add a tag we have to recreate the buffer and copy the 
 entire key, value and tags. We can avoid this:
 Create a new Cell impl which wraps the original Cell and, for the non-tag 
 parts, just refers to the old buffer.
 This will contain a byte[] state for the tags part.
 Also we have to ensure we deal with Cells in the write path, not KVs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12335) IntegrationTestRegionReplicaPerf is flaky

2014-10-25 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-12335:
-
Attachment: HBASE-12335.01.patch
HBASE-12335.01-0.99.patch

New patches for master and branch-1.

 IntegrationTestRegionReplicaPerf is flaky
 -

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.99.0, 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12335.00-0.99.patch, HBASE-12335.00.patch, 
 HBASE-12335.00.patch, HBASE-12335.00.patch, HBASE-12335.01-0.99.patch, 
 HBASE-12335.01.patch


 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12335) IntegrationTestRegionReplicaPerf is flaky

2014-10-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184200#comment-14184200
 ] 

Nick Dimiduk commented on HBASE-12335:
--

FYI, here's the script to parse output produced by this test. 
https://gist.github.com/ndimiduk/91af1dbc4f7ca815f21d

 IntegrationTestRegionReplicaPerf is flaky
 -

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.99.0, 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12335.00-0.99.patch, HBASE-12335.00.patch, 
 HBASE-12335.00.patch, HBASE-12335.00.patch, HBASE-12335.01-0.99.patch, 
 HBASE-12335.01.patch


 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184225#comment-14184225
 ] 

Hadoop QA commented on HBASE-11912:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677117/HBASE-11912.patch
  against trunk revision .
  ATTACHMENT ID: 12677117

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 53 new 
or modified tests.

{color:red}-1 javac{color}.  The applied patch generated 113 javac compiler 
warnings (more than the trunk's current 53 warnings).

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11466//console

This message is automatically generated.

 Catch some bad practices at compile time with error-prone
 -

 Key: HBASE-11912
 URL: https://issues.apache.org/jira/browse/HBASE-11912
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-11912.patch, HBASE-11912.patch, HBASE-11912.patch, 
 HBASE-11912.patch


 Google's error-prone (https://code.google.com/p/error-prone/) wraps javac 
 with some additional static analysis that will generate additional warnings 
 or errors at compile time if certain bug patterns 
 (https://code.google.com/p/error-prone/wiki/BugPatterns) are detected. What's 
 nice about this approach, as opposed to findbugs, is the compile time 
 detection and erroring out prevent the detected problems from getting into 
 the codebase up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184244#comment-14184244
 ] 

Andrew Purtell commented on HBASE-11912:


Looks good 

 Catch some bad practices at compile time with error-prone
 -

 Key: HBASE-11912
 URL: https://issues.apache.org/jira/browse/HBASE-11912
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-11912.patch, HBASE-11912.patch, HBASE-11912.patch, 
 HBASE-11912.patch


 Google's error-prone (https://code.google.com/p/error-prone/) wraps javac 
 with some additional static analysis that will generate additional warnings 
 or errors at compile time if certain bug patterns 
 (https://code.google.com/p/error-prone/wiki/BugPatterns) are detected. What's 
 nice about this approach, as opposed to findbugs, is the compile time 
 detection and erroring out prevent the detected problems from getting into 
 the codebase up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11912) Catch some bad practices at compile time with error-prone

2014-10-25 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184245#comment-14184245
 ] 

Andrew Purtell commented on HBASE-11912:


The new Javac warnings are this tool functioning as expected. We can adjust the 
expected number of Javac warnings for trunk here or with another patch. 

 Catch some bad practices at compile time with error-prone
 -

 Key: HBASE-11912
 URL: https://issues.apache.org/jira/browse/HBASE-11912
 Project: HBase
  Issue Type: Sub-task
Reporter: Andrew Purtell
Assignee: Andrew Purtell
 Attachments: HBASE-11912.patch, HBASE-11912.patch, HBASE-11912.patch, 
 HBASE-11912.patch


 Google's error-prone (https://code.google.com/p/error-prone/) wraps javac 
 with some additional static analysis that will generate additional warnings 
 or errors at compile time if certain bug patterns 
 (https://code.google.com/p/error-prone/wiki/BugPatterns) are detected. What's 
 nice about this approach, as opposed to findbugs, is the compile time 
 detection and erroring out prevent the detected problems from getting into 
 the codebase up front.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12335) IntegrationTestRegionReplicaPerf is flaky

2014-10-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184246#comment-14184246
 ] 

Hadoop QA commented on HBASE-12335:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677121/HBASE-12335.01.patch
  against trunk revision .
  ATTACHMENT ID: 12677121

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
3792 checkstyle errors (more than the trunk's current 3791 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-rest.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//artifact/patchprocess/checkstyle-aggregate.html

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11467//console

This message is automatically generated.

 IntegrationTestRegionReplicaPerf is flaky
 -

 Key: HBASE-12335
 URL: https://issues.apache.org/jira/browse/HBASE-12335
 Project: HBase
  Issue Type: Test
  Components: test
Affects Versions: 0.99.0, 2.0.0
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12335.00-0.99.patch, HBASE-12335.00.patch, 
 HBASE-12335.00.patch, HBASE-12335.00.patch, HBASE-12335.01-0.99.patch, 
 HBASE-12335.01.patch


 I find that this test often fails; the assertion that running with read 
 replicas should complete faster than without is usually false. I need to 
 investigate further as to why this is the case and how we should tune it.
 In the mean time, I'd like to change the test to assert instead on the 
 average of the stdev across all the test runs in each category. Meaning, 
 enabling this feature should reduce the overall latency variance experienced 
 by the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-10-25 Thread Arijit Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184253#comment-14184253
 ] 

Arijit Banerjee commented on HBASE-11118:
-

What is the workaround for running such an application through Oozie? Setting 
HADOOP_CLASSPATH in Java and MapReduce actions is not possible. There seems to 
be no provision to do that. 

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.2
Reporter: André Kelpe
Assignee: stack
Priority: Blocker
 Fix For: 0.99.0, 0.98.4, 2.0.0

 Attachments: 8.098-0.txt, 8.098.txt, 8.bytestringer.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 HBASE-8-0.98.00.patch, HBASE-8-0.98.01.patch, 
 HBASE-8-0.98.02.patch, HBASE-8-0.98.03.patch, 
 HBASE-8-0.98.patch.gz, HBASE-8-trunk.patch.gz, HBASE-8.00.patch, 
 HBASE-8.01.patch, HBASE-8.02.patch, HBASE-8_0.98_addendum.patch, 
 HBASE-8_master_addendum.patch, shade_attempt.patch


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10304) Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString

2014-10-25 Thread Arijit Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184254#comment-14184254
 ] 

Arijit Banerjee commented on HBASE-10304:
-

What is the workaround for running such an application through Oozie? Setting 
HADOOP_CLASSPATH in Java and MapReduce actions is not possible. There seems to 
be no provision to do that. 

 Running an hbase job jar: IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 

 Key: HBASE-10304
 URL: https://issues.apache.org/jira/browse/HBASE-10304
 Project: HBase
  Issue Type: Bug
  Components: documentation, mapreduce
Affects Versions: 0.98.0, 0.96.1.1
Reporter: stack
Assignee: Nick Dimiduk
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10304.docbook.patch, hbase-10304_not_tested.patch, 
 jobjar.xml


 (Jimmy has been working on this one internally.  I'm just the messenger 
 raising this critical issue upstream).
 So, if you make job jar and bundle up hbase inside in it because you want to 
 access hbase from your mapreduce task, the deploy of the job jar to the 
 cluster fails with:
 {code}
 14/01/05 08:59:19 INFO Configuration.deprecation: 
 topology.node.switch.mapping.impl is deprecated. Instead, use 
 net.topology.node.switch.mapping.impl
 14/01/05 08:59:19 INFO Configuration.deprecation: io.bytes.per.checksum is 
 deprecated. Instead, use dfs.bytes-per-checksum
 Exception in thread main java.lang.IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:124)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:64)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.main(HBaseMapReduceIndexerTool.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}
 So, ZCLBS is a hack.  This class is in the hbase-protocol module.  It is in 
 the com.google.protobuf package.  All is well and good usually.
 But when we make a job jar and bundle up hbase inside it, our 'trick' breaks. 
  RunJar makes a new class loader to run the job jar.  This URLClassLoader 
 'attaches' all the jars and classes that are in the job jar so they can be found 
 when it goes to do a lookup. Only, classloaders work by always delegating to 
 their parent first (unless you are a WAR file in a container where delegation 
 is 'off' for the most part), and in this case the parent classloader will 
 have access to a pb jar since pb is in the hadoop CLASSPATH.  So, the parent 
 loads the pb classes.
 We then load ZCLBS, only this is done in the classloader made by RunJar; 
 ZCLBS has a different 
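The parent-first delegation described above can be demonstrated in isolation. This is a hedged sketch, not the RunJar code itself: an empty child URLClassLoader stands in for the job-jar loader, and the resolution of a class visible to the parent shows that the child never gets a chance to define its own copy:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DelegationDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader parent = DelegationDemo.class.getClassLoader();
        // An (empty) child loader standing in for RunJar's job-jar loader.
        URLClassLoader jobJarLoader = new URLClassLoader(new URL[0], parent);
        // Parent-first: the child delegates upward before searching its own
        // URLs, so the class resolves through the parent chain. For
        // java.lang.String that chain ends at the bootstrap loader (null).
        Class<?> c = jobJarLoader.loadClass("java.lang.String");
        System.out.println(c.getClassLoader() == null); // prints "true"
        jobJarLoader.close();
    }
}
```

The same mechanism is why a pb jar on the hadoop CLASSPATH wins over the copy bundled inside the job jar.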

[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.Literal

2014-10-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14184287#comment-14184287
 ] 

Nick Dimiduk commented on HBASE-11118:
--

hi [~ariforu]. Per the JIRA subject, after this commit, no environment variable 
manipulation is required. Are you seeing something different? Please send a 
note to the user@hbase list describing your hbase version, Oozie setup, environment, 
and any stack trace you're seeing at job launch. We'll help you get it 
resolved.

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.2
Reporter: André Kelpe
Assignee: stack
Priority: Blocker
 Fix For: 0.99.0, 0.98.4, 2.0.0

 Attachments: 8.098-0.txt, 8.098.txt, 8.bytestringer.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 HBASE-8-0.98.00.patch, HBASE-8-0.98.01.patch, 
 HBASE-8-0.98.02.patch, HBASE-8-0.98.03.patch, 
 HBASE-8-0.98.patch.gz, HBASE-8-trunk.patch.gz, HBASE-8.00.patch, 
 HBASE-8.01.patch, HBASE-8.02.patch, HBASE-8_0.98_addendum.patch, 
 HBASE-8_master_addendum.patch, shade_attempt.patch


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-10304) Running an hbase job jar: IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString

2014-10-25 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184290#comment-14184290
 ] 

Nick Dimiduk commented on HBASE-10304:
--

This was resolved via HBASE-11118. Please see my comment at the end of that 
ticket.

 Running an hbase job jar: IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 

 Key: HBASE-10304
 URL: https://issues.apache.org/jira/browse/HBASE-10304
 Project: HBase
  Issue Type: Bug
  Components: documentation, mapreduce
Affects Versions: 0.98.0, 0.96.1.1
Reporter: stack
Assignee: Nick Dimiduk
Priority: Blocker
 Fix For: 0.98.0, 0.99.0

 Attachments: HBASE-10304.docbook.patch, hbase-10304_not_tested.patch, 
 jobjar.xml


 (Jimmy has been working on this one internally.  I'm just the messenger 
 raising this critical issue upstream).
 So, if you make job jar and bundle up hbase inside in it because you want to 
 access hbase from your mapreduce task, the deploy of the job jar to the 
 cluster fails with:
 {code}
 14/01/05 08:59:19 INFO Configuration.deprecation: 
 topology.node.switch.mapping.impl is deprecated. Instead, use 
 net.topology.node.switch.mapping.impl
 14/01/05 08:59:19 INFO Configuration.deprecation: io.bytes.per.checksum is 
 deprecated. Instead, use dfs.bytes-per-checksum
 Exception in thread main java.lang.IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
   at java.lang.ClassLoader.defineClass1(Native Method)
   at java.lang.ClassLoader.defineClass(ClassLoader.java:792)
   at 
 java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
   at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
   at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
   at java.security.AccessController.doPrivileged(Native Method)
   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
   at 
 org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:818)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.convertScanToString(TableMapReduceUtil.java:433)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:186)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:147)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:270)
   at 
 org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil.initTableMapperJob(TableMapReduceUtil.java:100)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:124)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.run(HBaseMapReduceIndexerTool.java:64)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at 
 com.ngdata.hbaseindexer.mr.HBaseMapReduceIndexerTool.main(HBaseMapReduceIndexerTool.java:51)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}
 So, ZCLBS is a hack.  This class is in the hbase-protocol module.  It is in 
 the com.google.protobuf package.  All is well and good usually.
 But when we make a job jar and bundle up hbase inside it, our 'trick' breaks. 
  RunJar makes a new classloader to run the job jar.  This URLClassLoader 
 'attaches' all the jars and classes that are in the job jar so they can be 
 found when it goes to do a lookup.  Only, classloaders work by always 
 delegating to their parent first (unless you are a WAR file in a container 
 where delegation is 'off' for the most part), and in this case the parent 
 classloader will have access to a pb jar since pb is in the hadoop CLASSPATH. 
  So, the parent loads the pb classes.
 We then load ZCLBS, only this is done in the classloader made by RunJar; 
 ZCLBS has a different classloader from its superclass and we get the above 
 IllegalAccessError.
 Now (Jimmy's work comes in here), this 
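The delegation behavior described above can be sketched in plain Java. This is an illustrative toy, not HBase or Hadoop code (the class name `ChildFirstClassLoader` is hypothetical): a child-first URLClassLoader that resolves job-jar classes before asking the parent, so a subclass and its superclass in the same package end up with the same defining loader and the IllegalAccessError above cannot arise.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Illustrative sketch: invert the usual parent-first delegation. RunJar's
// stock URLClassLoader asks its parent first, so LiteralByteString resolves
// from the Hadoop CLASSPATH while the job jar's ZeroCopyLiteralByteString
// resolves from the child loader -- two defining loaders for one package,
// hence the IllegalAccessError. Loading child-first keeps both together.
public class ChildFirstClassLoader extends URLClassLoader {
    public ChildFirstClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Try the job jar's own URLs first...
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // ...and only then fall back to normal parent delegation.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }
}
```

This is roughly what servlet containers do for WAR files, which is why the parenthetical above notes that delegation is 'off' there.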

[jira] [Commented] (HBASE-11118) non environment variable solution for IllegalAccessError: class com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass com.google.protobuf.LiteralByteString

2014-10-25 Thread Arijit Banerjee (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184296#comment-14184296
 ] 

Arijit Banerjee commented on HBASE-11118:
-

Thanks Nick for the quick response. We are using CDH 5.1.0 with HBase 0.98.1. It 
seems from HBASE-11118 that this issue is fixed as of HBase 0.98.4. Our 
application works fine when submitted directly from the command line after 
setting the following environment variable, but it fails when spawned via 
Oozie. We have set the env variable in hadoop-env.sh on all data nodes for all 
users, but without luck (it is probably being overridden somewhere). Wondering 
if there is any workaround for 0.98.1 with Oozie 4.0.

export 
HADOOP_CLASSPATH=/usr/share/cmf/lib/cdh5/hbase-protocol-0.98.1-cdh5.1.0.jar:/etc/hbase/conf
 

 non environment variable solution for IllegalAccessError: class 
 com.google.protobuf.ZeroCopyLiteralByteString cannot access its superclass 
 com.google.protobuf.LiteralByteString
 --

 Key: HBASE-11118
 URL: https://issues.apache.org/jira/browse/HBASE-11118
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.2
Reporter: André Kelpe
Assignee: stack
Priority: Blocker
 Fix For: 0.99.0, 0.98.4, 2.0.0

 Attachments: 8.098-0.txt, 8.098.txt, 8.bytestringer.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 1118.suggested.undoing.optimization.on.clientside.txt, 
 HBASE-8-0.98.00.patch, HBASE-8-0.98.01.patch, 
 HBASE-8-0.98.02.patch, HBASE-8-0.98.03.patch, 
 HBASE-8-0.98.patch.gz, HBASE-8-trunk.patch.gz, HBASE-8.00.patch, 
 HBASE-8.01.patch, HBASE-8.02.patch, HBASE-8_0.98_addendum.patch, 
 HBASE-8_master_addendum.patch, shade_attempt.patch


 I am running into the problem described in 
 https://issues.apache.org/jira/browse/HBASE-10304, while trying to use a 
 newer version within cascading.hbase 
 (https://github.com/cascading/cascading.hbase).
 One of the features of cascading.hbase is that you can use it from lingual 
 (http://www.cascading.org/projects/lingual/), our SQL layer for hadoop. 
 lingual has a notion of providers, which are fat jars that we pull down 
 dynamically at runtime. Those jars give users the ability to talk to any 
 system or format from SQL. They are added to the classpath programmatically 
 before we submit jobs to a hadoop cluster.
 Since lingual does not know upfront which providers are going to be used in 
 a given run, the HADOOP_CLASSPATH trick proposed in the JIRA above is really 
 clunky and breaks the ease of use we had before. No other provider requires 
 this right now.
 It would be great to have a programmatic way to fix this when using fat 
 jars.





[jira] [Commented] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184351#comment-14184351
 ] 

Anoop Sam John commented on HBASE-12345:


This is
+import java.security.AccessController;


 Unsafe based Comparator for BB 
 ---

 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-12345.patch








[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184357#comment-14184357
 ] 

Ted Yu commented on HBASE-12202:


+1 on addendum.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Commented] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184358#comment-14184358
 ] 

Ted Yu commented on HBASE-12345:


I see.

 Unsafe based Comparator for BB 
 ---

 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-12345.patch








[jira] [Reopened] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John reopened HBASE-12202:


Reopening to commit the addendum.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Resolved] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John resolved HBASE-12202.

Resolution: Fixed

Pushed addendum to 0.99+.  Thanks Ted.

 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Commented] (HBASE-12075) Preemptive Fast Fail

2014-10-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184385#comment-14184385
 ] 

stack commented on HBASE-12075:
---

This is a load of code.

'new' in the method name does not convey builder.  How about 
getRpcRetryingCallerFactoryBuilder?

Can we have an example of how it is used by an application in the release notes?

Sorry [~manukranthk], adding a method getNewRpcRetryingCallerFactory to 
ClusterConnection is kinda ugly but given ClusterConnection is internal and 
that it has things like getAsyncProcess, I think this addition is ok.

Can classes like 'public class FailureInfo' be package protected?  Ditto 
FastFailInterceptorContext.


In HTable we have this:

this.rpcCallerFactory = 
connection.getNewRpcRetryingCallerFactory(configuration);

Does that mean we fail fast always?  If we commit this patch, client behavior 
changes?

What happens when NoOpRetryableCallerInterceptor is in place?  What kind of 
behavior can we expect?

Yeah, can all these client package classes be package protected at least?

Pardon dumb questions.



 Preemptive Fast Fail
 

 Key: HBASE-12075
 URL: https://issues.apache.org/jira/browse/HBASE-12075
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.99.0, 2.0.0, 0.98.6.1
Reporter: Manukranth Kolloju
Assignee: Manukranth Kolloju
 Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-HBASE-12075-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch, 
 0001-Implement-Preemptive-Fast-Fail.patch


 In multi threaded clients, we use a feature developed on the 0.89-fb branch 
 called Preemptive Fast Fail. This lets client threads that would 
 potentially fail do so fast. The idea behind this feature is that, among 
 the hundreds of client threads, we allow one thread to try to establish a 
 connection with the regionserver and, if that succeeds, we mark it as a live 
 node again. Meanwhile, the other threads trying to establish a connection to 
 the same server would otherwise just sit in timeouts, which is effectively 
 unfruitful. In those cases we can return appropriate exceptions to those 
 clients instead of letting them retry.
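The single-prober scheme described above can be sketched in a few lines of plain Java. This is an illustrative toy, not the HBASE-12075 patch (the class and method names `FastFailRegistry`, `mayAttempt`, `probeDone` are hypothetical): one thread at a time wins the right to retry a failed server; everyone else fails fast instead of burning a connect timeout.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of preemptive fast fail: track failed servers, let a
// single thread probe a failed server, and tell all other callers to fail
// fast until the probe succeeds and the server is marked live again.
public class FastFailRegistry {
    private static final class FailureInfo {
        final AtomicBoolean retryInProgress = new AtomicBoolean(false);
    }

    private final ConcurrentMap<String, FailureInfo> failures =
            new ConcurrentHashMap<>();

    /** Record that an operation against this server failed. */
    public void markFailed(String server) {
        failures.putIfAbsent(server, new FailureInfo());
    }

    /**
     * True if the caller may attempt the server: either it is not marked
     * failed, or this thread won the CAS to be the single prober.
     * False means the caller should throw a fast-fail exception instead.
     */
    public boolean mayAttempt(String server) {
        FailureInfo info = failures.get(server);
        if (info == null) {
            return true;
        }
        return info.retryInProgress.compareAndSet(false, true);
    }

    /** Called by the probing thread when its attempt finishes. */
    public void probeDone(String server, boolean success) {
        if (success) {
            failures.remove(server);  // server is live again
        } else {
            FailureInfo info = failures.get(server);
            if (info != null) {
                info.retryInProgress.set(false);  // let the next prober try
            }
        }
    }
}
```

In the real patch this logic lives behind a retrying-caller interceptor rather than being consulted directly by application threads.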





[jira] [Commented] (HBASE-2609) Harmonize the Get and Delete operations

2014-10-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184388#comment-14184388
 ] 

stack commented on HBASE-2609:
--

Across Scan, Get, and Delete addFamily/addColumn/setTimestamp, there is not 
enough coherency to make an Interface -- what to call it?  Delete has addColumns 
vs addColumn, and sometimes addColumn takes a timestamp, sometimes not.

I always thought Get and Delete should be the same, since both are about 
specifying coordinates.  Could work on this in another issue.

Let me commit this for now.

 Harmonize the Get and Delete operations
 ---

 Key: HBASE-2609
 URL: https://issues.apache.org/jira/browse/HBASE-2609
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Jeff Hammerbacher
Assignee: stack
 Fix For: 0.99.2

 Attachments: 2609.txt, 2609v2.txt


 In my work on HBASE-2400, implementing deletes for the Avro server felt quite 
 awkward. Rather than the clean API of the Get object, which allows 
 restrictions on the result set from a row to be expressed with addColumn, 
 addFamily, setTimeStamp, setTimeRange, setMaxVersions, and setFilters, the 
 Delete object hides these semantics behind various constructors to 
 deleteColumn[s] and deleteFamily. From my naive vantage point, I see no reason 
 why it would be a bad idea to mimic the Get API exactly, though I could quite 
 possibly be missing something. Thoughts?
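The asymmetry the description complains about can be made concrete with a toy sketch. This is not the HBase API (the names `CoordinateQuery`, `ToyGet`, `ToyDelete` are hypothetical); it only illustrates what a shared coordinate-narrowing interface for Get and Delete could look like, so both expose one addFamily/addColumn/setTimeStamp vocabulary instead of Delete hiding the same semantics behind deleteColumn[s]/deleteFamily variants.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of a harmonized coordinate-specifying interface. The self-typed
// generic lets each implementer return its own type for method chaining.
interface CoordinateQuery<T extends CoordinateQuery<T>> {
    T addFamily(byte[] family);
    T addColumn(byte[] family, byte[] qualifier);
    T setTimeStamp(long ts);
}

class ToyGet implements CoordinateQuery<ToyGet> {
    final List<String> coords = new ArrayList<>();
    long ts = Long.MAX_VALUE;
    public ToyGet addFamily(byte[] f) { coords.add(new String(f)); return this; }
    public ToyGet addColumn(byte[] f, byte[] q) {
        coords.add(new String(f) + ":" + new String(q)); return this;
    }
    public ToyGet setTimeStamp(long ts) { this.ts = ts; return this; }
}

class ToyDelete implements CoordinateQuery<ToyDelete> {
    final List<String> coords = new ArrayList<>();
    long ts = Long.MAX_VALUE;
    public ToyDelete addFamily(byte[] f) { coords.add(new String(f)); return this; }
    public ToyDelete addColumn(byte[] f, byte[] q) {
        coords.add(new String(f) + ":" + new String(q)); return this;
    }
    public ToyDelete setTimeStamp(long ts) { this.ts = ts; return this; }
}
```

Callers then specify the same coordinates the same way for both operations, which is the harmonization the issue asks for.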





[jira] [Updated] (HBASE-2609) Harmonize the Get and Delete operations

2014-10-25 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2609:
-
   Resolution: Fixed
Fix Version/s: 2.0.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for reviews.  Committed to branch-1+

 Harmonize the Get and Delete operations
 ---

 Key: HBASE-2609
 URL: https://issues.apache.org/jira/browse/HBASE-2609
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Jeff Hammerbacher
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 2609.txt, 2609v2.txt


 In my work on HBASE-2400, implementing deletes for the Avro server felt quite 
 awkward. Rather than the clean API of the Get object, which allows 
 restrictions on the result set from a row to be expressed with addColumn, 
 addFamily, setTimeStamp, setTimeRange, setMaxVersions, and setFilters, the 
 Delete object hides these semantics behind various constructors to 
 deleteColumn[s] and deleteFamily. From my naive vantage point, I see no reason 
 why it would be a bad idea to mimic the Get API exactly, though I could quite 
 possibly be missing something. Thoughts?





[jira] [Commented] (HBASE-12345) Unsafe based Comparator for BB

2014-10-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184390#comment-14184390
 ] 

stack commented on HBASE-12345:
---

Is unsafe faster?

 Unsafe based Comparator for BB 
 ---

 Key: HBASE-12345
 URL: https://issues.apache.org/jira/browse/HBASE-12345
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: HBASE-12345.patch








[jira] [Commented] (HBASE-12313) Redo the hfile index length optimization so cell-based rather than serialized KV key

2014-10-25 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184391#comment-14184391
 ] 

stack commented on HBASE-12313:
---

That a +1 [~anoop.hbase]?

 Redo the hfile index length optimization so cell-based rather than serialized 
 KV key
 

 Key: HBASE-12313
 URL: https://issues.apache.org/jira/browse/HBASE-12313
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: stack
Assignee: stack
 Attachments: 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 
 0001-HBASE-12313-Redo-the-hfile-index-length-optimization.patch, 12313v5.txt


 Trying to remove the API that returns the 'key' of a KV serialized into a byte 
 array is thorny.
 I tried to move the first and last key serializations and the hfile 
 index entries over to be Cell, but the patch was turning massive.  Here is a 
 smaller patch that just redoes the optimization that tries to find 'short' 
 midpoints between the last key of the last block and the first key of the 
 next block, so it is Cell-based rather than byte array based (presuming keys 
 serialized in a certain way).  Adds unit tests, which we didn't have before.
 Also removes CellKey.  Not needed... at least not yet.  It's just a utility 
 for toString.
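The 'short midpoint' idea referred to above can be sketched on raw byte arrays. This is an illustrative simplification, not the HBase implementation (the class name `ShortMidpoint` is hypothetical): an index entry only needs some separator s with lastOfPrev < s <= firstOfNext, and truncating firstOfNext just past the first differing byte yields a shorter key with the same search behavior.

```java
import java.util.Arrays;

// Illustrative sketch of the hfile index-length optimization: pick a short
// fake key to index the next block with, instead of storing its full first
// key. Any s with left < s <= right (lexicographic, unsigned bytes) works.
public class ShortMidpoint {
    /**
     * Precondition: left < right lexicographically.
     * Returns a key k with left < k <= right, never longer than right.
     */
    public static byte[] midpoint(byte[] left, byte[] right) {
        int min = Math.min(left.length, right.length);
        int diff = 0;
        while (diff < min && left[diff] == right[diff]) {
            diff++;
        }
        // Truncate right one byte past the shared prefix: the result is a
        // prefix of right (so <= right), is strictly greater than left at
        // the first differing byte, and is usually much shorter than right.
        return Arrays.copyOf(right, Math.min(right.length, diff + 1));
    }
}
```

The real patch does this comparison Cell by Cell against the row/family/qualifier components rather than against a pre-serialized KV key, which is exactly the refactor the description discusses.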





[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184392#comment-14184392
 ] 

Hudson commented on HBASE-12202:


SUCCESS: Integrated in HBase-1.0 #361 (See 
[https://builds.apache.org/job/HBase-1.0/361/])
HBASE-12202 Support DirectByteBuffer usage in HFileBlock - addendum 
(anoop.s.john: rev 37ac17f62638420430e1004aa48e29d5291e07b5)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java


 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Commented] (HBASE-12202) Support DirectByteBuffer usage in HFileBlock

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184393#comment-14184393
 ] 

Hudson commented on HBASE-12202:


SUCCESS: Integrated in HBase-TRUNK #5702 (See 
[https://builds.apache.org/job/HBase-TRUNK/5702/])
HBASE-12202 Support DirectByteBuffer usage in HFileBlock - addendum 
(anoop.s.john: rev 34f9962618c85ad041ca7eac4913453335a81647)
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/ByteBufferUtils.java


 Support DirectByteBuffer usage in HFileBlock
 

 Key: HBASE-12202
 URL: https://issues.apache.org/jira/browse/HBASE-12202
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.2

 Attachments: HBASE-12202-addendum.patch, HBASE-12202.patch, 
 HBASE-12202_V2.patch








[jira] [Commented] (HBASE-2609) Harmonize the Get and Delete operations

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184409#comment-14184409
 ] 

Hudson commented on HBASE-2609:
---

SUCCESS: Integrated in HBase-TRUNK #5703 (See 
[https://builds.apache.org/job/HBase-TRUNK/5703/])
HBASE-2609 Harmonize the Get and Delete operations (stack: rev 
1d6c4678bb7964af34fb42a6c8bbf0553880bba3)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java


 Harmonize the Get and Delete operations
 ---

 Key: HBASE-2609
 URL: https://issues.apache.org/jira/browse/HBASE-2609
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Jeff Hammerbacher
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 2609.txt, 2609v2.txt


 In my work on HBASE-2400, implementing deletes for the Avro server felt quite 
 awkward. Rather than the clean API of the Get object, which allows 
 restrictions on the result set from a row to be expressed with addColumn, 
 addFamily, setTimeStamp, setTimeRange, setMaxVersions, and setFilters, the 
 Delete object hides these semantics behind various constructors to 
 deleteColumn[s] and deleteFamily. From my naive vantage point, I see no reason 
 why it would be a bad idea to mimic the Get API exactly, though I could quite 
 possibly be missing something. Thoughts?





[jira] [Commented] (HBASE-2609) Harmonize the Get and Delete operations

2014-10-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184414#comment-14184414
 ] 

Hudson commented on HBASE-2609:
---

SUCCESS: Integrated in HBase-1.0 #362 (See 
[https://builds.apache.org/job/HBase-1.0/362/])
HBASE-2609 Harmonize the Get and Delete operations (stack: rev 
3fa96fb3c78b5561ca55b286a7019e11e9d365f0)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Delete.java


 Harmonize the Get and Delete operations
 ---

 Key: HBASE-2609
 URL: https://issues.apache.org/jira/browse/HBASE-2609
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Jeff Hammerbacher
Assignee: stack
 Fix For: 2.0.0, 0.99.2

 Attachments: 2609.txt, 2609v2.txt


 In my work on HBASE-2400, implementing deletes for the Avro server felt quite 
 awkward. Rather than the clean API of the Get object, which allows 
 restrictions on the result set from a row to be expressed with addColumn, 
 addFamily, setTimeStamp, setTimeRange, setMaxVersions, and setFilters, the 
 Delete object hides these semantics behind various constructors to 
 deleteColumn[s] and deleteFamily. From my naive vantage point, I see no reason 
 why it would be a bad idea to mimic the Get API exactly, though I could quite 
 possibly be missing something. Thoughts?


