[jira] [Commented] (HBASE-7111) hbase zkcli will not start if the zookeeper server choosen to connectted to is not available

2012-11-08 Thread Zhou wenjian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493030#comment-13493030
 ] 

Zhou wenjian commented on HBASE-7111:
-

The modified test case asserts my change.

I have also tested it on my local test cluster: if any one of the ZooKeeper servers 
in the conf is up, the script will connect to ZooKeeper.


 hbase zkcli will not start if the zookeeper server choosen to connectted to  
 is not available
 -

 Key: HBASE-7111
 URL: https://issues.apache.org/jira/browse/HBASE-7111
 Project: HBase
  Issue Type: Bug
  Components: Zookeeper
Affects Versions: 0.94.2
Reporter: Zhou wenjian
Assignee: Zhou wenjian
 Fix For: 0.94.4

 Attachments: HBASE-7111-trunk.patch, HBASE-7111-trunk-v2.patch


 there are 3 zookeeper servers in my cluster.
 s1
 s2
 s3
 after killing s3, I found that hbase zkcli will not start again.
 It keeps trying to connect to s3 continually.
 12/11/07 11:01:01 INFO zookeeper.ClientCnxn: Opening socket connection to 
 server s3
 12/11/07 11:01:01 WARN zookeeper.ClientCnxn: Session 0x0 for server null, 
 unexpected error, closing socket connection and attempting reconnect
 java.net.ConnectException: Connection refused
 From the code:
   public String parse(final Configuration c) {
     // Note that we do not simply grab the property
     // HConstants.ZOOKEEPER_QUORUM from the HBaseConfiguration because the
     // user may be using a zoo.cfg file.
     Properties zkProps = ZKConfig.makeZKProps(c);
     String host = null;
     String clientPort = null;
     for (Entry<Object, Object> entry : zkProps.entrySet()) {
       String key = entry.getKey().toString().trim();
       String value = entry.getValue().toString().trim();
       if (key.startsWith("server.") && host == null) {
         String[] parts = value.split(":");
         host = parts[0];
       } else if (key.endsWith("clientPort")) {
         clientPort = value;
       }
       if (host != null && clientPort != null) break;
     }
     return host != null && clientPort != null ? host + ":" + clientPort : null;
   }
 the code always picks a single fixed ZooKeeper server (here the unavailable s3), 
 which makes the script fail.
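The shape of the fix can be sketched as follows (hypothetical names, not necessarily the attached patch): collect every "server.N" host from the zoo.cfg properties and join them all into the quorum string, so the client library can fall back to a live server when one is down.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map.Entry;
import java.util.Properties;

// Hedged sketch: build the quorum string from every "server.N" entry
// instead of returning only the first host found.
public class ZkQuorumSketch {
  public static String parseQuorum(Properties zkProps) {
    List<String> hosts = new ArrayList<>();
    String clientPort = null;
    for (Entry<Object, Object> entry : zkProps.entrySet()) {
      String key = entry.getKey().toString().trim();
      String value = entry.getValue().toString().trim();
      if (key.startsWith("server.")) {
        hosts.add(value.split(":")[0]);  // keep every host, not just the first
      } else if (key.endsWith("clientPort")) {
        clientPort = value;
      }
    }
    if (hosts.isEmpty() || clientPort == null) return null;
    StringBuilder sb = new StringBuilder();
    for (String host : hosts) {
      if (sb.length() > 0) sb.append(',');
      sb.append(host).append(':').append(clientPort);
    }
    return sb.toString();  // e.g. "s1:2181,s2:2181,s3:2181"
  }
}
```

With a multi-host connect string, the ZooKeeper client itself retries the remaining servers, which matches the behavior observed on the test cluster above.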

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6563) s.isMajorCompaction() throws npe will cause current major Compaction checking abort

2012-11-08 Thread Zhou wenjian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493060#comment-13493060
 ] 

Zhou wenjian commented on HBASE-6563:
-

[~lhofhansl]
I don't quite follow you; the NPE will be thrown in trunk too, I think.

 private boolean isMajorCompaction(final List<StoreFile> filesToCompact) throws 
 IOException {
   boolean result = false;
   long mcTime = getNextMajorCompactTime();
   if (filesToCompact == null || filesToCompact.isEmpty() || mcTime == 0) {
     return result;
   }
   long lowTimestamp = getLowestTimestamp(filesToCompact);
   long now = System.currentTimeMillis();
   if (lowTimestamp > 0l && lowTimestamp < (now - mcTime)) {
     // Major compaction time has elapsed.
     if (filesToCompact.size() == 1) {
       // Single file
       StoreFile sf = filesToCompact.get(0);
       long oldest =
           (sf.getReader().timeRangeTracker == null) ?
               Long.MIN_VALUE :
               now - sf.getReader().timeRangeTracker.minimumTimestamp;

If the file to compact is closed after the check, an NPE will be thrown too, 
the same as in 0.94 and 0.90.
The bad side is that the current major-compaction check for the region is interrupted.
IMO, we need to catch the exception and just ignore it.
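The catch-and-continue idea can be sketched like this (hypothetical shape, not HBase's actual chore API): each per-region check runs inside its own try/catch, so an NPE from one closed store file cannot abort the check for the remaining regions.

```java
import java.util.List;
import java.util.concurrent.Callable;

// Hedged sketch: run every per-region major-compaction check, swallowing
// failures so one bad region does not kill the whole chore pass.
public class CompactionCheckSketch {
  public static int checkAll(List<Callable<Boolean>> regionChecks) {
    int completed = 0;
    for (Callable<Boolean> check : regionChecks) {
      try {
        check.call();  // may throw NPE if a file was closed meanwhile
        completed++;
      } catch (Exception e) {
        // log and continue with the next region instead of aborting
      }
    }
    return completed;
  }
}
```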

 s.isMajorCompaction() throws npe will cause current major Compaction checking 
 abort
 ---

 Key: HBASE-6563
 URL: https://issues.apache.org/jira/browse/HBASE-6563
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Zhou wenjian
Assignee: Zhou wenjian
 Attachments: HBASE-6563-trunk.patch, HBASE-6563-trunk-v2.patch, 
 HBASE-6563-trunk-v3.patch


 2012-05-05 00:49:43,265 ERROR 
 org.apache.hadoop.hbase.regionserver.HRegionServer$MajorCompactionChecker: 
 Caught exception
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.regionserver.Store.isMajorCompaction(Store.java:938)
 at 
 org.apache.hadoop.hbase.regionserver.Store.isMajorCompaction(Store.java:917)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.isMajorCompaction(HRegion.java:3250)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer$MajorCompactionChecker.chore(HRegionServer.java:1222)
 at org.apache.hadoop.hbase.Chore.run(Chore.java:66)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7117) UnixOperationSystemMXBean compile error with open JDK 1.6

2012-11-08 Thread Li Ping Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Ping Zhang updated HBASE-7117:
-

Affects Version/s: (was: 0.90.4)
   0.94.0

 UnixOperationSystemMXBean compile  error with open JDK 1.6
 --

 Key: HBASE-7117
 URL: https://issues.apache.org/jira/browse/HBASE-7117
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.92.0, 0.94.0
 Environment: RHEL 5.3, open JDK 1.6
Reporter: Li Ping Zhang
  Labels: patch
   Original Estimate: 96h
  Remaining Estimate: 96h

 UnixOperationSystemMXBean compile error with open JDK 1.6.
 UnixOperatingSystemMXBean does not exist in OpenJDK 1.6; OpenJDK has no 
 get*FileDescriptorCount method in OperatingSystemMXBean at all, so we 
 need to provide a corresponding method for it.
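One possible workaround can be sketched as follows (an assumption for illustration, not necessarily the attached patch): look the method up reflectively, so the code compiles on any JDK and degrades gracefully where the OperatingSystemMXBean implementation lacks getOpenFileDescriptorCount.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.reflect.Method;

// Hedged sketch: avoid a compile-time dependency on the Sun-only
// UnixOperatingSystemMXBean by resolving the accessor via reflection.
public class FdCountSketch {
  public static long openFdCount() {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    try {
      Method m = os.getClass().getMethod("getOpenFileDescriptorCount");
      m.setAccessible(true);
      return (Long) m.invoke(os);
    } catch (Exception e) {
      return -1;  // method not available on this JVM
    }
  }
}
```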

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7117) UnixOperationSystemMXBean compile error with open JDK 1.6

2012-11-08 Thread Li Ping Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Ping Zhang updated HBASE-7117:
-

Affects Version/s: (was: 0.92.0)

 UnixOperationSystemMXBean compile  error with open JDK 1.6
 --

 Key: HBASE-7117
 URL: https://issues.apache.org/jira/browse/HBASE-7117
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
 Environment: RHEL 5.3, open JDK 1.6
Reporter: Li Ping Zhang
  Labels: patch
   Original Estimate: 96h
  Remaining Estimate: 96h

 UnixOperationSystemMXBean compile error with open JDK 1.6.
 UnixOperatingSystemMXBean does not exist in OpenJDK 1.6; OpenJDK has no 
 get*FileDescriptorCount method in OperatingSystemMXBean at all, so we 
 need to provide a corresponding method for it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Hiroshi Ikeda (JIRA)
Hiroshi Ikeda created HBASE-7128:


 Summary: Reduced annoying catch clauses of 
UnsupportedEncodingException that is never thrown because of UTF-8
 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial


There is some code that catches UnsupportedEncodingException and logs or ignores 
it, because Java always supports UTF-8 (see the Javadoc of Charset).

The catch clauses are annoying, and they should be replaced by methods of Bytes.
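The improvement can be sketched as follows (hypothetical helper names): the Charset overloads String#getBytes(Charset) and new String(byte[], Charset) never throw UnsupportedEncodingException, unlike the String-named-encoding overloads, so the catch clause disappears entirely.

```java
import java.nio.charset.Charset;

// Hedged sketch of Bytes-style helpers: the Charset overloads below
// declare no checked exception, so callers need no try/catch.
public class Utf8Sketch {
  public static final Charset UTF8 = Charset.forName("UTF-8");

  public static byte[] toBytes(String s) {
    return s.getBytes(UTF8);     // no UnsupportedEncodingException
  }

  public static String toString(byte[] b) {
    return new String(b, UTF8);  // likewise exception-free
  }
}
```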

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7117) UnixOperationSystemMXBean compile error with open JDK 1.6

2012-11-08 Thread Li Ping Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493118#comment-13493118
 ] 

Li Ping Zhang commented on HBASE-7117:
--

Hi Stack, sorry, there is a typo; the affected version should be 0.94.0. Thanks 
for your comments! 

 UnixOperationSystemMXBean compile  error with open JDK 1.6
 --

 Key: HBASE-7117
 URL: https://issues.apache.org/jira/browse/HBASE-7117
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.0
 Environment: RHEL 5.3, open JDK 1.6
Reporter: Li Ping Zhang
  Labels: patch
   Original Estimate: 96h
  Remaining Estimate: 96h

 UnixOperationSystemMXBean compile error with open JDK 1.6.
 UnixOperatingSystemMXBean does not exist in OpenJDK 1.6; OpenJDK has no 
 get*FileDescriptorCount method in OperatingSystemMXBean at all, so we 
 need to provide a corresponding method for it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Hiroshi Ikeda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hiroshi Ikeda updated HBASE-7128:
-

Attachment: HBASE-7128.patch

Added a patch.

This also breaks the dependency from Bytes to HConstants, to avoid confusion 
caused by their mutual calls during initialization.

 Reduced annoying catch clauses of UnsupportedEncodingException that is never 
 thrown because of UTF-8
 

 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial
 Attachments: HBASE-7128.patch


 There is some code that catches UnsupportedEncodingException and logs or 
 ignores it, because Java always supports UTF-8 (see the Javadoc of Charset).
 The catch clauses are annoying, and they should be replaced by methods of 
 Bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Gustavo Anatoly (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493176#comment-13493176
 ] 

Gustavo Anatoly commented on HBASE-7106:


Hi, Liyin.

I understand your idea, but I'm confused about how to replace the NULL qualifier 
when the [ -Pjdk7 ] profile is executed. I was thinking that when running the 
profile we would have a separate package with tests specific to JDK 7, using 
HConstants.EMPTY_BYTE_ARRAY.

Could you explain it to me, please?


 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK 7, putting a NULL into a TreeSet throws an NPE, and in the unit 
 tests a user can add a NULL qualifier into the family map for a GET or SCAN. 
 So we shall do the following: 
 1) Make sure the semantics of the NULL column qualifier are equal to those of 
 the EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace the NULL 
 qualifier in the family map for the GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a JDK option in the pom.xml (assuming the user installed the fb-packaged 
 JDK), 
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7
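Fix (2) above can be sketched as follows (hypothetical helper, not the actual patch): normalize a NULL qualifier to the empty byte array before it ever reaches the TreeSet, since JDK 7's TreeSet throws an NPE for null keys under comparators that dereference their arguments.

```java
// Hedged sketch: map a null qualifier to the empty byte array so the
// family map's TreeSet never sees null (NPE-safe on JDK 7).
public class QualifierSketch {
  static final byte[] EMPTY_BYTE_ARRAY = new byte[0];

  public static byte[] normalize(byte[] qualifier) {
    return qualifier == null ? EMPTY_BYTE_ARRAY : qualifier;
  }
}
```

Callers would apply normalize() at the point where qualifiers are added to the GET or SCAN family map, keeping everything else backward compatible.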

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493252#comment-13493252
 ] 

Lars Hofhansl commented on HBASE-4583:
--

Yep. that line was removed by accident. Cool that a test caught it.
Thanks Ted!

 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493258#comment-13493258
 ] 

Lars Hofhansl commented on HBASE-7103:
--

Does my idea from above:

bq. First try to create a ZK node, then write to the journal.

Fix this? In that case the parallel split request would fail before it writes 
anything in its journal and hence would not attempt to clean up the ZK state.


 Need to fail split if SPLIT znode is deleted even before the split is 
 completed.
 

 Key: HBASE-7103
 URL: https://issues.apache.org/jira/browse/HBASE-7103
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-7103_testcase.patch


 This came up after the following mail in dev list
 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
 The following is the reason for the problem
 The following steps happen
 - Initially the parent region P1 starts splitting.
 - The split is going on normally.
 - Another split starts at the same time for the same region P1. (Not sure 
 why this started).
 - Rollback happens seeing an already existing node.
 - This node gets deleted in rollback and nodeDeleted Event starts.
 - In nodeDeleted event the RIT for the region P1 gets deleted.
 - Because of this there is no region in RIT.
 - Now the first split finishes.  Here the problem is that we try to transition 
 the node from SPLITTING to SPLIT, but the node does not even exist.
 We don't take any action on this; we think it succeeded.
 - Because of this SplitRegionHandler never gets invoked.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4583:
-

Release Note: 
This issue fixes MVCC issues with Increment and Append. To retain the current 
performance characteristics, VERSIONS should be set to 1 on column families 
with columns to be incremented/appended-to.


  was:
This issue fixes MVCC issues with Increment and Append. To retain the current 
performance characteristics VERSIONS should be set to 1 on column families with 
columns to be incremented/appended-to.



 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4583:
-

Release Note: 
This issue fixes MVCC issues with Increment and Append. To retain the current 
performance characteristics VERSIONS should be set to 1 on column families with 
columns to be incremented/appended-to.


 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493278#comment-13493278
 ] 

Lars Hofhansl commented on HBASE-4583:
--

bq. Does the above condition match the comment? If only one version older than 
the readpoint is required, versionsOlderThanReadpoint == 1 should be enough, right?

I think the comment and the code are correct. We need to make sure that no 
scanner sees the KV to be removed, which means there must be one KV that is 
newer than this one but still older than the readpoint.

(A possible change, though, would be to count KVs with cur.getMemstoreTS() <= 
readpoint, instead of cur.getMemstoreTS() < readpoint.)
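A toy illustration of the counting rule under discussion (hypothetical code, not HBase's actual upsert implementation): the oldest version may be removed only if some other version is visible at or below the readpoint, so no scanner loses data.

```java
import java.util.List;

// Hedged sketch: memstoreTs holds the memstore timestamps of the existing
// versions, newest first. The oldest may be removed only if a newer version
// is still visible to every scanner at or below the readpoint.
public class UpsertCountSketch {
  public static boolean canRemoveOldest(List<Long> memstoreTs, long readpoint) {
    long oldest = memstoreTs.get(memstoreTs.size() - 1);
    int versionsOlderThanReadpoint = 0;
    for (long ts : memstoreTs) {
      if (ts != oldest && ts <= readpoint) {
        versionsOlderThanReadpoint++;  // a newer, still-visible version
      }
    }
    return versionsOlderThanReadpoint >= 1;
  }
}
```

With versions {5, 3, 1} and readpoint 4, version 3 covers version 1, so 1 is removable; with readpoint 2, only version 1 is visible and must be kept.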


 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-4583:
-

Attachment: 4583-mixed-v4.txt

Version that does this.

 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-mixed-v4.txt, 4583-trunk-less-radical.txt, 
 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, 
 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, 
 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, 
 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, 
 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-7128:
--

Fix Version/s: 0.96.0
   Status: Patch Available  (was: Open)

 Reduced annoying catch clauses of UnsupportedEncodingException that is never 
 thrown because of UTF-8
 

 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-7128.patch


 There is some code that catches UnsupportedEncodingException and logs or 
 ignores it, because Java always supports UTF-8 (see the Javadoc of Charset).
 The catch clauses are annoying, and they should be replaced by methods of 
 Bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493312#comment-13493312
 ] 

Ted Yu commented on HBASE-7128:
---

In HConstants:
{code}
+  /** When we encode strings, we always specify UTF8 encoding */
+  public static final Charset UTF8_CHARSET = Bytes.UTF8_CHARSET;
{code}
The HConstants and Bytes classes are both in hbase-common.
Can UTF8_CHARSET be removed from HConstants?

Patch looks good.

 Reduced annoying catch clauses of UnsupportedEncodingException that is never 
 thrown because of UTF-8
 

 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-7128.patch


 There is some code that catches UnsupportedEncodingException and logs or 
 ignores it, because Java always supports UTF-8 (see the Javadoc of Charset).
 The catch clauses are annoying, and they should be replaced by methods of 
 Bytes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493316#comment-13493316
 ] 

Hadoop QA commented on HBASE-4583:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552670/4583-mixed-v4.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 17 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3264//console

This message is automatically generated.

 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-mixed-v4.txt, 4583-trunk-less-radical.txt, 
 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, 
 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, 
 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, 
 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, 
 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2012-11-08 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493317#comment-13493317
 ] 

Francis Liu commented on HBASE-6721:


{quote}
Have you also considered adding a state mirror in ZK to avoid the need for 
random assignment of catalog tables and the group table if it is available on 
(re)start?
{quote}
Another approach is to mirror only the catalog and group table assignment 
information in ZK. This would add less complexity and minimize the cost of 
having inconsistent data between the two stores.

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.96.0

 Attachments: HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_3.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721-DesigDoc.pdf


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to them provides a client application with a level 
 of isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493323#comment-13493323
 ] 

Jimmy Xiang commented on HBASE-7106:


I have some fixes in HBASE-6206.

 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK 7, putting a NULL into a TreeSet throws an NPE, and in the unit 
 tests a user can add a NULL qualifier into the family map for a GET or SCAN. 
 So we shall do the following: 
 1) Make sure the semantics of the NULL column qualifier are equal to those of 
 the EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace the NULL 
 qualifier in the family map for the GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a JDK option in the pom.xml (assuming the user installed the fb-packaged 
 JDK), 
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2012-11-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493330#comment-13493330
 ] 

Jimmy Xiang commented on HBASE-6721:


Do you have a patch for 0.96 on review board?

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.96.0

 Attachments: HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_3.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721-DesigDoc.pdf


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to it provides a client application with a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.



[jira] [Commented] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493338#comment-13493338
 ] 

Hadoop QA commented on HBASE-7128:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552649/HBASE-7128.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 21 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 15 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3265//console

This message is automatically generated.

 Reduced annoying catch clauses of UnsupportedEncodingException that is never 
 thrown because of UTF-8
 

 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-7128.patch


 There is some code that catches UnsupportedEncodingException and logs or 
 ignores it, because Java always supports UTF-8 (see the javadoc of Charset).
 The catch clauses are annoying, and they should be replaced by methods of 
 Bytes.
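The pattern being removed looks roughly like the sketch below; the helper names are illustrative of what a Bytes-style method hides, not the exact signatures in the patch:

```java
import java.io.UnsupportedEncodingException;
import java.nio.charset.Charset;

public class Utf8Helper {
    // Old pattern: the String overload taking a charset *name* declares a
    // checked UnsupportedEncodingException, even though the Charset javadoc
    // guarantees UTF-8 is present on every Java platform.
    static byte[] toBytesVerbose(String s) {
        try {
            return s.getBytes("UTF-8");
        } catch (UnsupportedEncodingException e) {
            // Can never happen for UTF-8; the catch clause is pure noise.
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    // The cleanup direction: the Charset overload throws no checked
    // exception, so a utility method can drop the boilerplate entirely.
    private static final Charset UTF8 = Charset.forName("UTF-8");

    static byte[] toBytes(String s) {
        return s.getBytes(UTF8);
    }
}
```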



[jira] [Commented] (HBASE-7128) Reduced annoying catch clauses of UnsupportedEncodingException that is never thrown because of UTF-8

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493352#comment-13493352
 ] 

stack commented on HBASE-7128:
--

[~ikeda] Thank you for doing this.

HConstants should not reference Bytes -- a class in the top-level package should not 
reference a subpackage class (I don't mind a double definition of UTF8_ENCODING if you have to).

Otherwise, the patch is great.



 Reduced annoying catch clauses of UnsupportedEncodingException that is never 
 thrown because of UTF-8
 

 Key: HBASE-7128
 URL: https://issues.apache.org/jira/browse/HBASE-7128
 Project: HBase
  Issue Type: Improvement
Reporter: Hiroshi Ikeda
Priority: Trivial
 Fix For: 0.96.0

 Attachments: HBASE-7128.patch


 There is some code that catches UnsupportedEncodingException and logs or 
 ignores it, because Java always supports UTF-8 (see the javadoc of Charset).
 The catch clauses are annoying, and they should be replaced by methods of 
 Bytes.



[jira] [Commented] (HBASE-7103) Need to fail split if SPLIT znode is deleted even before the split is completed.

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493354#comment-13493354
 ] 

stack commented on HBASE-7103:
--

Yeah, Lars' idea is like I was saying.  Else, can't we keep a dictionary, keyed by 
region, of the currently splitting regions in the RS?

 Need to fail split if SPLIT znode is deleted even before the split is 
 completed.
 

 Key: HBASE-7103
 URL: https://issues.apache.org/jira/browse/HBASE-7103
 Project: HBase
  Issue Type: Bug
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.94.3, 0.96.0

 Attachments: HBASE-7103_testcase.patch


 This came up after the following mail on the dev list:
 'infinite loop of RS_ZK_REGION_SPLIT on .94.2'.
 The reason for the problem is the following sequence of steps:
 - Initially the parent region P1 starts splitting.
 - The split is going on normally.
 - Another split starts at the same time for the same region P1. (Not sure 
 why this started.)
 - Rollback happens on seeing an already existing node.
 - This node gets deleted in rollback and a nodeDeleted event starts.
 - In the nodeDeleted event the RIT for the region P1 gets deleted.
 - Because of this there is no region in RIT.
 - Now the first split gets over.  Here the problem is that we try to transition the 
 node from SPLITTING to SPLIT, but the node does not even exist.
 We don't take any action on this; we think it was successful.
 - Because of this SplitRegionHandler never gets invoked.



[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2012-11-08 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493367#comment-13493367
 ] 

Vandana Ayyalasomayajula commented on HBASE-6721:
-

[~jxiang] -- We have not started work on the patch for trunk, as we wanted 
the patch for branch-94 to address all the review comments first. Hopefully after one 
more round of review, we will start working on the patch for trunk.

 RegionServer Group based Assignment
 ---

 Key: HBASE-6721
 URL: https://issues.apache.org/jira/browse/HBASE-6721
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.96.0

 Attachments: HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, 
 HBASE-6721_94_3.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
 HBASE-6721-DesigDoc.pdf


 In multi-tenant deployments of HBase, it is likely that a RegionServer will 
 be serving out regions from a number of different tables owned by various 
 client applications. Being able to group a subset of running RegionServers 
 and assign specific tables to it provides a client application with a level of 
 isolation and resource allocation.
 The proposal essentially is to have an AssignmentManager which is aware of 
 RegionServer groups and assigns tables to region servers based on groupings. 
 Load balancing will occur on a per group basis as well. 
 This is essentially a simplification of the approach taken in HBASE-4120. See 
 attached document.



[jira] [Updated] (HBASE-6863) Offline snapshots

2012-11-08 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates updated HBASE-6863:
---

Attachment: hbase-6863-v3.patch

Updated diff with the changes from Matteo and Jon applied. This is what is 
currently up on RB.

 Offline snapshots
 -

 Key: HBASE-6863
 URL: https://issues.apache.org/jira/browse/HBASE-6863
 Project: HBase
  Issue Type: Sub-task
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: hbase-6055

 Attachments: hbase-6863-v3.patch


 Create a snapshot of a table while the table is offline. This also should 
 handle a lot of the common utils/scaffolding for taking snapshots 
 (HBASE-6055) with minimal overhead as the code itself is pretty simple.



[jira] [Commented] (HBASE-6863) Offline snapshots

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493371#comment-13493371
 ] 

Hadoop QA commented on HBASE-6863:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552685/hbase-6863-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3266//console

This message is automatically generated.

 Offline snapshots
 -

 Key: HBASE-6863
 URL: https://issues.apache.org/jira/browse/HBASE-6863
 Project: HBase
  Issue Type: Sub-task
Reporter: Jesse Yates
Assignee: Jesse Yates
 Fix For: hbase-6055

 Attachments: hbase-6863-v3.patch


 Create a snapshot of a table while the table is offline. This also should 
 handle a lot of the common utils/scaffolding for taking snapshots 
 (HBASE-6055) with minimal overhead as the code itself is pretty simple.



[jira] [Commented] (HBASE-6222) Add per-KeyValue Security

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493387#comment-13493387
 ] 

Ted Yu commented on HBASE-6222:
---

@Himanshu:
I have some questions about your design doc.
bq. Versions will be kept for each unique visibility expression.
Would this inflate memstore because we are keeping potentially many more 
versions of KeyValue which differ by visibility expression only ?

bq. HTable-level property: a property “ENABLE_CELL_LEVEL_SECURITY” in
Since HFile metadata would include similar information, it looks like storing such 
information at the column-family level is better.

bq. The property “ENABLE_CELL_LEVEL_SECURITY” can be change only after 
enabling/disabling the table.
I assume that the table is in disabled state when this property is changed.

on page 12:
bq. It attaches the CVFilter to the Get object,
Suppose a user belongs to more than one group; would multiple CVFilter 
instances be attached to the Get object ?

 bq. it passes only the ones which pass the “Secret” visibility expression. He 
 will get v1 and v4.
The above is inconsistent with the description on page 11, where v2 and v4 are said 
to be returned.

Thanks

 Add per-KeyValue Security
 -

 Key: HBASE-6222
 URL: https://issues.apache.org/jira/browse/HBASE-6222
 Project: HBase
  Issue Type: New Feature
  Components: security
Reporter: stack
Assignee: Andrew Purtell
 Attachments: HBaseCellRow-LevelSecurityDesignDoc.docx, 
 HBaseCellRow-LevelSecurityPRD.docx


 Saw an interesting article: 
 http://www.fiercegovernmentit.com/story/sasc-accumulo-language-pro-open-source-say-proponents/2012-06-14
 The  Senate Armed Services Committee version of the fiscal 2013 national 
 defense authorization act (S. 3254) would require DoD agencies to foreswear 
 the Accumulo NoSQL database after Sept. 30, 2013, unless the DoD CIO 
 certifies that there exists either no viable commercial open source database 
 with security features comparable to [Accumulo] (such as the HBase or 
 Cassandra databases)...
 Not sure what a 'commercial open source database' is, and I'm not sure what's 
 going on in the article, but tra-la-la'ing, if we had per-KeyValue 'security' 
 like Accumulo's, we might put ourselves in the running for federal 
 contributions?



[jira] [Updated] (HBASE-2645) HLog writer can do 1-2 sync operations after lease has been recovered for split process.

2012-11-08 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-2645:
-

Attachment: 2645_hacking.txt

Hacked up patch.  Has two different users on two different filesystems 
(different dfsclients).  Enables the hdfs logging so I can see the above is 
indeed the case.  The 'regionserver' thread hangs in sync until it gets an IOE 
'Error Recovery for block failed because recovery from primary datanode ... 
failed 6 times' ... after 40 seconds (ugh), not the lease exception I'd expect.  

 HLog writer can do 1-2 sync operations after lease has been recovered for 
 split process.
 

 Key: HBASE-2645
 URL: https://issues.apache.org/jira/browse/HBASE-2645
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.90.4
Reporter: Cosmin Lehene
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 2645_hacking.txt, 2645.txt, 2645v2.txt, 2645v3.txt, 
 hdfs_1.0_editswriter_recoverlease.txt, 
 hdfs_trunk_editswriter_recoverlease.txt, 
 org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit-output.txt, 
 org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit-output.txt


 TestHLogSplit.testLogCannotBeWrittenOnceParsed is failing. 
 This test starts a thread that writes one edit to the log, syncs and counts. 
 During this, a HLog.splitLog operation is started. splitLog recovers the log 
 lease before reading the log, so that the original regionserver could not 
 wake up and write after the split process started.  
 The test compares the number of edits reported by the split process and by 
 the writer thread. The writer thread (called zombie in the test) should report a 
 count <= that of splitLog (sync() might raise after the last edit gets written and 
 the edit won't get counted by the zombie thread). However it appears that the 
 zombie counts 1-2 more edits, so it looks like it can sync without a lease.
 This might be a hdfs-0.20 related issue. 



[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493396#comment-13493396
 ] 

Sergey Shelukhin commented on HBASE-7110:
-

[~enis] can you please review? Thanks.

 refactor the compaction selection and config code similarly to 0.89-fb changes
 --

 Key: HBASE-7110
 URL: https://issues.apache.org/jira/browse/HBASE-7110
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-v5-refactor-only-squashed.patch


 Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
 code review)



[jira] [Updated] (HBASE-7121) Fix TestHFileOutputFormat after moving RS to metrics2

2012-11-08 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-7121:
-

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed revision 1407216.

 Fix TestHFileOutputFormat after moving RS to metrics2
 -

 Key: HBASE-7121
 URL: https://issues.apache.org/jira/browse/HBASE-7121
 Project: HBase
  Issue Type: Sub-task
  Components: metrics
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.96.0

 Attachments: HBASE-7121-0.patch


 When spinning up lots of threads in a single jvm it's possible that the 
 metrics wrapper can touch variables that are not initialized.



[jira] [Commented] (HBASE-5416) Improve performance of scans with some kind of filters.

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493399#comment-13493399
 ] 

Sergey Shelukhin commented on HBASE-5416:
-

[~stack] My point was that the approach is sound, and that the change being risky is 
not, on its own, a good reason not to make it. +1 on tests/perf tests :)

 Improve performance of scans with some kind of filters.
 ---

 Key: HBASE-5416
 URL: https://issues.apache.org/jira/browse/HBASE-5416
 Project: HBase
  Issue Type: Improvement
  Components: Filters, Performance, regionserver
Affects Versions: 0.90.4
Reporter: Max Lapan
Assignee: Max Lapan
 Fix For: 0.96.0

 Attachments: 5416-Filtered_scans_v6.patch, 5416-v5.txt, 5416-v6.txt, 
 Filtered_scans.patch, Filtered_scans_v2.patch, Filtered_scans_v3.patch, 
 Filtered_scans_v4.patch, Filtered_scans_v5.1.patch, Filtered_scans_v5.patch, 
 Filtered_scans_v7.patch


 When a scan is performed, the whole row is loaded into the result list, and after 
 that the filter (if one exists) is applied to decide whether the row is needed.
 But when a scan is performed on several CFs and the filter checks only data from 
 a subset of these CFs, the data from the CFs not checked by the filter is not needed 
 at the filter stage; it is needed only once we have decided to include the current 
 row. In such a case we can significantly reduce the amount of IO performed by a scan 
 by loading only the values actually checked by the filter.
 For example, we have two CFs: flags and snap. Flags is quite small (a bunch of 
 megabytes) and is used to filter large entries from snap. Snap is very large 
 (10s of GB) and quite costly to scan. If we need only rows with 
 some flag specified, we use a SingleColumnValueFilter to limit the result to only a 
 small subset of the region. But the current implementation loads both CFs to 
 perform the scan, when only a small subset is needed.
 The attached patch adds one routine to the Filter interface to allow a filter to 
 specify which CFs it needs for its operation. In HRegion, we separate all 
 scanners into two groups: those needed by the filter and the rest (joined). When a 
 new row is considered, only the needed data is loaded and the filter applied; only 
 if the filter accepts the row is the rest of the data loaded. On our data, this 
 speeds up such scans 30-50 times. It also gives us a way to better 
 normalize the data into separate columns by optimizing the scans performed.
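The two-group scanner split described above can be sketched as follows. RowFilter here is a toy interface with an assumed isFamilyEssential-style hook, not the actual HBase classes, and the row is modeled as a simple family-to-value map:

```java
import java.util.Map;
import java.util.TreeMap;

public class TwoPassScanSketch {
    // Toy stand-in for the routine the patch adds to the Filter interface:
    // a filter declares which column families it actually inspects.
    interface RowFilter {
        boolean isFamilyEssential(String family);
        boolean acceptsRow(Map<String, byte[]> essentialCells);
    }

    // First load only the essential families and run the filter; fetch the
    // remaining ("joined") families only for rows the filter accepts.
    static Map<String, byte[]> scanRow(Map<String, byte[]> row, RowFilter filter) {
        Map<String, byte[]> essential = new TreeMap<String, byte[]>();
        Map<String, byte[]> joined = new TreeMap<String, byte[]>();
        for (Map.Entry<String, byte[]> cell : row.entrySet()) {
            if (filter.isFamilyEssential(cell.getKey())) {
                essential.put(cell.getKey(), cell.getValue());
            } else {
                joined.put(cell.getKey(), cell.getValue()); // IO deferred
            }
        }
        if (!filter.acceptsRow(essential)) {
            return null; // row rejected: the large "joined" CFs were never read
        }
        essential.putAll(joined); // row accepted: now load the rest
        return essential;
    }
}
```

In the flags/snap example this means the small flags CF is read for every row, while the large snap CF is read only for the rows that pass the flag check.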



[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Liyin Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493401#comment-13493401
 ] 

Liyin Tang commented on HBASE-7106:
---

Gustavo Anatoly: I didn't fully understand your questions :) The pom change is 
orthogonal to the code change.

Jimmy, the semantics of the NULL column qualifier are equal to those of the 
EMPTY_BYTE_ARRAY column qualifier. 
However, the fix in HBASE-6206 will skip the NULL qualifier:
-set.add(qualifier);
+if (qualifier != null) {
+  set.add(qualifier);
+}

=
I think the correct fix shall be:

if (qualifier != null) {
  set.add(qualifier);
} else {
  set.add(HConstants.EMPTY_BYTE_ARRAY);
}

 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK7, an NPE is thrown if you put a NULL into a TreeSet. And in the unit 
 tests, a user can add NULL as a qualifier into the family map for a GET or SCAN. 
 So we should do the following: 
 1) Make sure the semantics of the NULL column qualifier are equal to those of the 
 EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace the NULL 
 qualifier in the family map for GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a jdk option in the pom.xml (assuming the user installed the fb-packaged 
 jdk),
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7



[jira] [Commented] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493400#comment-13493400
 ] 

stack commented on HBASE-4583:
--

I'm no expert in this area.  I like the way you make it so we can do both upsert 
and add (we need to release note it).  I like how you add the test-only methods to 
HStore and not to the Interface.  Is that test ok w/ 100 threads?  It's pretty 
resource heavy?  It runs ok?  I'm +1 on committing to trunk if all tests pass.  
Good stuff Lars.

 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v3.txt, 
 4583-mixed-v4.txt, 4583-trunk-less-radical.txt, 
 4583-trunk-less-radical-v2.txt, 4583-trunk-less-radical-v3.txt, 
 4583-trunk-less-radical-v4.txt, 4583-trunk-less-radical-v5.txt, 
 4583-trunk-less-radical-v6.txt, 4583-trunk-radical.txt, 
 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 4583.txt, 4583-v2.txt, 
 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC and hence a 
 client could see the results of multiple such operation mixed in the same 
 Get/Scan.
 The semantics might be a bit more interesting here as upsert adds and removes 
 to and from the memstore.



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493405#comment-13493405
 ] 

Sergey Shelukhin commented on HBASE-7055:
-

I didn't do any targeted testing, just a general run on one box (w/ multiple 
RS-es). Not sure about the interplay with dynamic config; I'd assume that if you 
mine data for a table's R/W patterns, take it offline, and apply the config once, 
you'd be good for a long time w/ better compactions. Cannot tell how much tweaking is 
normal w/o running it in production for a long time.
If we make the column config specific to columns, maybe it can be refreshed 
separately from the dynamic-config JIRA's xml-file refresh. I haven't looked at the 
column config code yet. Will create a separate JIRA when I do, unless someone 
does so earlier.

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
  Components: Compaction
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch, 
 HBASE-6371-v3-refactor-only-squashed.patch, 
 HBASE-6371-v4-refactor-only-squashed.patch, 
 HBASE-6371-v5-refactor-only-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Commented] (HBASE-7055) port HBASE-6371 tier-based compaction from 0.89-fb to trunk

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493407#comment-13493407
 ] 

Sergey Shelukhin commented on HBASE-7055:
-

By "column config specific to columns" I mean e.g. config stored in the table/CF 
metadata, not in an xml file.

 port HBASE-6371 tier-based compaction from 0.89-fb to trunk
 ---

 Key: HBASE-7055
 URL: https://issues.apache.org/jira/browse/HBASE-7055
 Project: HBase
  Issue Type: Task
  Components: Compaction
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Fix For: 0.96.0

 Attachments: HBASE-6371-squashed.patch, HBASE-6371-v2-squashed.patch, 
 HBASE-6371-v3-refactor-only-squashed.patch, 
 HBASE-6371-v4-refactor-only-squashed.patch, 
 HBASE-6371-v5-refactor-only-squashed.patch


 There's divergence in the code :(
 See HBASE-6371 for details.



[jira] [Commented] (HBASE-5416) Improve performance of scans with some kind of filters.

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493410#comment-13493410
 ] 

stack commented on HBASE-5416:
--

[~sershe] If after sufficient tests (and perf), for sure.  I think the case 
that the change has sufficient test needs to be built before it goes in.

 Improve performance of scans with some kind of filters.
 ---

 Key: HBASE-5416
 URL: https://issues.apache.org/jira/browse/HBASE-5416
 Project: HBase
  Issue Type: Improvement
  Components: Filters, Performance, regionserver
Affects Versions: 0.90.4
Reporter: Max Lapan
Assignee: Max Lapan
 Fix For: 0.96.0

 Attachments: 5416-Filtered_scans_v6.patch, 5416-v5.txt, 5416-v6.txt, 
 Filtered_scans.patch, Filtered_scans_v2.patch, Filtered_scans_v3.patch, 
 Filtered_scans_v4.patch, Filtered_scans_v5.1.patch, Filtered_scans_v5.patch, 
 Filtered_scans_v7.patch


 When a scan is performed, the whole row is loaded into the result list, and after 
 that the filter (if one exists) is applied to decide whether the row is needed.
 But when a scan is performed on several CFs and the filter checks only data from 
 a subset of these CFs, the data from the CFs not checked by the filter is not needed 
 at the filter stage; it is needed only once we have decided to include the current 
 row. In such a case we can significantly reduce the amount of IO performed by a scan 
 by loading only the values actually checked by the filter.
 For example, we have two CFs: flags and snap. Flags is quite small (a bunch of 
 megabytes) and is used to filter large entries from snap. Snap is very large 
 (10s of GB) and quite costly to scan. If we need only rows with 
 some flag specified, we use a SingleColumnValueFilter to limit the result to only a 
 small subset of the region. But the current implementation loads both CFs to 
 perform the scan, when only a small subset is needed.
 The attached patch adds one routine to the Filter interface to allow a filter to 
 specify which CFs it needs for its operation. In HRegion, we separate all 
 scanners into two groups: those needed by the filter and the rest (joined). When a 
 new row is considered, only the needed data is loaded and the filter applied; only 
 if the filter accepts the row is the rest of the data loaded. On our data, this 
 speeds up such scans 30-50 times. It also gives us a way to better 
 normalize the data into separate columns by optimizing the scans performed.



[jira] [Created] (HBASE-7129) Need documentation for REST atomic operations (HBASE-4720)

2012-11-08 Thread Joe Pallas (JIRA)
Joe Pallas created HBASE-7129:
-

 Summary: Need documentation for REST atomic operations (HBASE-4720)
 Key: HBASE-7129
 URL: https://issues.apache.org/jira/browse/HBASE-7129
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Joe Pallas
Priority: Minor


HBASE-4720 added checkAndPut/checkAndDelete capability to the REST interface, 
but the REST documentation (in the package summary) needs to be updated so 
people know that this feature exists and how to use it.



[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493419#comment-13493419
 ] 

Gregory Chanan commented on HBASE-4913:
---

Committed to trunk.  Going to look into backporting to 0.94.

I'll also add a release note.

 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch






[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-4913:
--

Fix Version/s: 0.94.4

 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch






[jira] [Commented] (HBASE-6966) Compressed RPCs for HBase (HBASE-5355) port to trunk

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493427#comment-13493427
 ] 

Hadoop QA commented on HBASE-6966:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552427/6966-v1.2.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 15 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3267//console

This message is automatically generated.

 Compressed RPCs for HBase (HBASE-5355) port to trunk
 --

 Key: HBASE-6966
 URL: https://issues.apache.org/jira/browse/HBASE-6966
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: 0.96.0

 Attachments: 6966-1.patch, 6966-v1.1.txt, 6966-v1.2.txt, 6966-v2.txt


 This jira will address the port of the compressed RPC implementation to 
 trunk. I am expecting the patch to be significantly different due to the PB 
 stuff in trunk, and hence filed a separate jira.



[jira] [Updated] (HBASE-5778) Turn on WAL compression by default

2012-11-08 Thread Jean-Daniel Cryans (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jean-Daniel Cryans updated HBASE-5778:
--

Attachment: HBASE-5778-0.94-v2.patch

bq. If so, do you plan to address the test failure mentioned @ 13/Apr/12 02:53 ?

Eventually I'd like to turn it on by default but I was mostly interested in 
making replication work first.

So I took a look at testAppendClose and it was a simple matter of changing the 
reader to use the one that HBase provides. In that regard I'd say that the test 
was doing something wrong. The effect was that the SequenceFile reader, knowing 
nothing about compression, couldn't read compressed HLog entries.

This v2 patch fixes the test for when WAL compression is enabled.

 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5778.addendum, 5778-addendum.txt, HBASE-5778-0.94.patch, 
 HBASE-5778-0.94-v2.patch, HBASE-5778.patch


 I ran some tests to verify if WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage vs 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement for the insert 
 run time and CPU usage was 33% higher (600% CPU usage VS 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage, it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.



[jira] [Commented] (HBASE-2645) HLog writer can do 1-2 sync operations after lease has been recovered for split process.

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-2645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493440#comment-13493440
 ] 

stack commented on HBASE-2645:
--

Hmmm... this patch sort of works but I want to learn more about this 40 second 
hang in the writer.  Running more tests.

 HLog writer can do 1-2 sync operations after lease has been recovered for 
 split process.
 

 Key: HBASE-2645
 URL: https://issues.apache.org/jira/browse/HBASE-2645
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.90.4
Reporter: Cosmin Lehene
Assignee: Todd Lipcon
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 2645_hacking.txt, 2645.txt, 2645v2.txt, 2645v3.txt, 
 hdfs_1.0_editswriter_recoverlease.txt, 
 hdfs_trunk_editswriter_recoverlease.txt, 
 org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit-output.txt, 
 org.apache.hadoop.hbase.regionserver.wal.TestHLogSplit-output.txt


 TestHLogSplit.testLogCannotBeWrittenOnceParsed is failing. 
 This test starts a thread that writes one edit to the log, syncs and counts. 
 During this, a HLog.splitLog operation is started. splitLog recovers the log 
 lease before reading the log, so that the original regionserver could not 
 wake up and write after the split process started.  
 The test compares the number of edits reported by the split process and by 
 the writer thread. The writer thread (called zombie in the test) should 
 report <= the count from splitLog (sync() might raise after the last edit 
 gets written, so that edit won't get counted by the zombie thread). However 
 it appears that the zombie counts 1-2 more edits. So it looks like it can 
 sync without a lease.
 This might be a hdfs-0.20 related issue. 



[jira] [Commented] (HBASE-5258) Move coprocessors set out of RegionLoad

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493441#comment-13493441
 ] 

Sergey Shelukhin commented on HBASE-5258:
-

There may be potential for the master to use region server load info that is 
more realtime/critical than the metrics channel, for balancing etc. Probably 
not important short term.
W.r.t. zk: why can't the master read the zk nodes and apply the rules too? The 
ZK nodes may need to be made smarter for this purpose. 
Assuming RSs have to talk to ZK anyway, for large clusters this avoids extra 
all-to-one communication. Although the master reading ZK to make decisions may 
introduce additional delay, especially if the master just watches the nodes and 
builds up internal state instead of querying the entire state periodically.
Regardless, does having different coprocessors on the regions of the same table 
make sense? If a user has logic relying on coprocessors for data querying or 
correctness, it seems dangerous.


 Move coprocessors set out of RegionLoad
 ---

 Key: HBASE-5258
 URL: https://issues.apache.org/jira/browse/HBASE-5258
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Priority: Critical

 When I worked on HBASE-5256, I revisited the code related to Ser/De of the 
 coprocessors set in RegionLoad.
 I think the rationale for embedding the coprocessors set is maximum 
 flexibility, where each region can load different coprocessors.
 This flexibility is causing extra cost in the region server to Master 
 communication and increasing the footprint of the Master heap.
 Would HServerLoad be a better place for this set?
 If required, the region server should calculate the disparity of loaded 
 coprocessors among regions and send a report through HServerLoad.



[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493450#comment-13493450
 ] 

Enis Soztutar commented on HBASE-7110:
--

Can you please rebase the patch? It does not apply to trunk now. 

 refactor the compaction selection and config code similarly to 0.89-fb changes
 --

 Key: HBASE-7110
 URL: https://issues.apache.org/jira/browse/HBASE-7110
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-v5-refactor-only-squashed.patch


 Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
 code review)



[jira] [Commented] (HBASE-5778) Turn on WAL compression by default

2012-11-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493453#comment-13493453
 ] 

stack commented on HBASE-5778:
--

Adding compression context to the general HLog Interface seems incorrect to me. 
This kind of thing will not make sense for all implementations of HLog.  As is, 
this patch works against the effort to turn HLog into an Interface.

Ditto for ReplicationSource having to know anything about HLog compression and 
carrying a compression context (it seems 'off' to have to do +import 
org.apache.hadoop.hbase.regionserver.wal.CompressionContext; in 
ReplicationSource).  What happens if an HLog has a different kind of 
compression than our current type?  Will it all break?

This seems wrong having to do this over in ReplicationSource:

{code}
+    // If we're compressing logs and the oldest recovered log's last position
+    // is greater than 0, we need to rebuild the dictionary up to that point
+    // without replicating the edits again. The rebuilding part is simply done
+    // by reading the log.
{code}

Why can't the internal implementation do the skipping if the dictionary is 
empty and we are at an offset > 0?

Rather than passing a compression context to SequenceFileLogReader, can we not 
have a CompressedSequenceLogReader that manages compression contexts 
internally, never letting them outside of CSLR?
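The encapsulation the reviewer suggests can be illustrated with a small decorator: a compressed reader owns the dictionary and rebuilds it as it reads, so callers never see a compression context. All names and the toy literal/back-reference encoding below are hypothetical:

```java
import java.util.*;

// Sketch of keeping the compression context inside a compressed-reader
// decorator so callers like ReplicationSource never see it. Resuming from an
// offset then just means reading (and discarding) earlier entries, which
// rebuilds the dictionary as a side effect.
public class CompressedReaderSketch {
    interface LogReader {
        String next(); // returns the next entry, or null at end of log
    }

    // Plain reader over pre-encoded tokens: "l<text>" is a literal,
    // "d<index>" is a dictionary back-reference.
    static class RawReader implements LogReader {
        private final Iterator<String> it;
        RawReader(List<String> encoded) { this.it = encoded.iterator(); }
        public String next() { return it.hasNext() ? it.next() : null; }
    }

    // Decorator owning the dictionary; it is never exposed to callers.
    static class CompressedLogReader implements LogReader {
        private final LogReader inner;
        private final List<String> dict = new ArrayList<>(); // internal only
        CompressedLogReader(LogReader inner) { this.inner = inner; }
        public String next() {
            String token = inner.next();
            if (token == null) return null;
            if (token.startsWith("l")) {                 // literal: grow dictionary
                String value = token.substring(1);
                dict.add(value);
                return value;
            }
            return dict.get(Integer.parseInt(token.substring(1))); // back-ref
        }
    }

    public static void main(String[] args) {
        List<String> encoded = List.of("lrow1", "lrow2", "d0", "d1");
        LogReader r = new CompressedLogReader(new RawReader(encoded));
        StringBuilder out = new StringBuilder();
        for (String e; (e = r.next()) != null; ) out.append(e).append(' ');
        System.out.println(out.toString().trim()); // prints: row1 row2 row1 row2
    }
}
```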


 Turn on WAL compression by default
 --

 Key: HBASE-5778
 URL: https://issues.apache.org/jira/browse/HBASE-5778
 Project: HBase
  Issue Type: Improvement
Reporter: Jean-Daniel Cryans
Assignee: Jean-Daniel Cryans
Priority: Blocker
 Fix For: 0.96.0

 Attachments: 5778.addendum, 5778-addendum.txt, HBASE-5778-0.94.patch, 
 HBASE-5778-0.94-v2.patch, HBASE-5778.patch


 I ran some tests to verify if WAL compression should be turned on by default.
 For a use case where it's not very useful (values two orders of magnitude 
 bigger than the keys), the insert time wasn't different and the CPU usage was 
 15% higher (150% CPU usage vs 130% when not compressing the WAL).
 When values are smaller than the keys, I saw a 38% improvement for the insert 
 run time and CPU usage was 33% higher (600% CPU usage VS 450%). I'm not sure 
 WAL compression accounts for all the additional CPU usage, it might just be 
 that we're able to insert faster and we spend more time in the MemStore per 
 second (because our MemStores are bad when they contain tens of thousands of 
 values).
 Those are two extremes, but it shows that for the price of some CPU we can 
 save a lot. My machines have 2 quads with HT, so I still had a lot of idle 
 CPUs.



[jira] [Updated] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7046:
---

Status: Patch Available  (was: Open)

 Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
 -

 Key: HBASE-7046
 URL: https://issues.apache.org/jira/browse/HBASE-7046
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBASE-7046.patch


 This method creates a writer but never closes it.



[jira] [Updated] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7046:
---

Attachment: HBASE-7046.patch

one line fix.

 Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
 -

 Key: HBASE-7046
 URL: https://issues.apache.org/jira/browse/HBASE-7046
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBASE-7046.patch


 This method creates a writer but never closes it.



[jira] [Updated] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7010:
-

Attachment: 7010.txt

Here's a patch that does that.

 PrefixFilter should seek to first matching row
 --

 Key: HBASE-7010
 URL: https://issues.apache.org/jira/browse/HBASE-7010
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.4

 Attachments: 7010.txt


 Currently a PrefixFilter will happily scan all KVs < prefix.
 It should seek forward to the prefix if the current KV < prefix.
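The seek the issue asks for can be illustrated with a sorted map: jump straight to the first key >= prefix instead of examining every earlier key. This is a standalone sketch, not the HBase filter code:

```java
import java.util.*;

// Illustration of seeking to the first matching row: a TreeMap's tailMap
// jumps directly to the first key >= prefix, skipping all smaller keys
// without visiting them (the analogue of a SEEK_NEXT_USING_HINT-style jump).
public class PrefixSeekSketch {
    public static void main(String[] args) {
        NavigableMap<String, String> store = new TreeMap<>();
        for (String k : List.of("aaa", "abc", "row1", "row2", "zzz")) {
            store.put(k, "v-" + k);
        }
        String prefix = "row";
        // Seek: first key >= "row"; "aaa" and "abc" are never touched.
        List<String> hits = new ArrayList<>();
        for (Map.Entry<String, String> e : store.tailMap(prefix, true).entrySet()) {
            if (!e.getKey().startsWith(prefix)) break; // past the prefix range
            hits.add(e.getKey());
        }
        System.out.println(hits); // prints [row1, row2]
    }
}
```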



[jira] [Updated] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-7010:
-

Status: Patch Available  (was: Open)

 PrefixFilter should seek to first matching row
 --

 Key: HBASE-7010
 URL: https://issues.apache.org/jira/browse/HBASE-7010
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.4

 Attachments: 7010.txt


 Currently a PrefixFilter will happily scan all KVs < prefix.
 It should seek forward to the prefix if the current KV < prefix.



[jira] [Commented] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493461#comment-13493461
 ] 

Ted Yu commented on HBASE-7046:
---

+1 on patch.

 Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
 -

 Key: HBASE-7046
 URL: https://issues.apache.org/jira/browse/HBASE-7046
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBASE-7046.patch


 This method creates a writer but never closes it.



[jira] [Assigned] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-7010:


Assignee: Lars Hofhansl

 PrefixFilter should seek to first matching row
 --

 Key: HBASE-7010
 URL: https://issues.apache.org/jira/browse/HBASE-7010
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.4

 Attachments: 7010.txt


 Currently a PrefixFilter will happily scan all KVs < prefix.
 It should seek forward to the prefix if the current KV < prefix.



[jira] [Commented] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493463#comment-13493463
 ] 

Hadoop QA commented on HBASE-7010:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552699/7010.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.filter.TestFilterList

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3272//console

This message is automatically generated.

 PrefixFilter should seek to first matching row
 --

 Key: HBASE-7010
 URL: https://issues.apache.org/jira/browse/HBASE-7010
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.4

 Attachments: 7010.txt


 Currently a PrefixFilter will happily scan all KVs < prefix.
 It should seek forward to the prefix if the current KV < prefix.



[jira] [Commented] (HBASE-7010) PrefixFilter should seek to first matching row

2012-11-08 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493467#comment-13493467
 ] 

Lars Hofhansl commented on HBASE-7010:
--

Failure is probably related. Will take a look later.

 PrefixFilter should seek to first matching row
 --

 Key: HBASE-7010
 URL: https://issues.apache.org/jira/browse/HBASE-7010
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
Priority: Minor
 Fix For: 0.96.0, 0.94.4

 Attachments: 7010.txt


 Currently a PrefixFilter will happily scan all KVs  prefix.
 If should seek forward to the prefix if the current KV  prefix.



[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Gustavo Anatoly (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493468#comment-13493468
 ] 

Gustavo Anatoly commented on HBASE-7106:


Hi, Liyin.

:) I understood the changes using HConstants.EMPTY_BYTE_ARRAY, but I 
misunderstood part of your explanation about the pom profile. 

Thanks.

Could I contribute a patch for review?

 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK7, putting a NULL into a TreeSet throws an NPE, and in the unit 
 tests a user can add a NULL qualifier into the family map for a GET or SCAN. 
 So we shall do the following: 
 1) Make sure the semantics of a NULL column qualifier are equal to those of 
 the EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace a NULL 
 qualifier in the family map for the GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a jdk option in the pom.xml (assuming the user installed the 
 fb-packaged jdk), 
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7
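Point 2) can be sketched in a few lines of plain Java; the helper below is hypothetical and only shows normalizing a null qualifier to an empty byte array before it reaches a TreeSet:

```java
import java.util.*;

// Minimal reproduction of the fix idea: a TreeSet under JDK7 throws an NPE
// when it has to compare a null element, so normalize null qualifiers to an
// empty byte array before inserting. addQualifier is an illustrative helper.
public class NullQualifierSketch {
    static final byte[] EMPTY_BYTE_ARRAY = new byte[0];

    static void addQualifier(NavigableSet<byte[]> set, byte[] qualifier) {
        // The fix: treat null as the empty qualifier instead of inserting null.
        set.add(qualifier == null ? EMPTY_BYTE_ARRAY : qualifier);
    }

    public static void main(String[] args) {
        NavigableSet<byte[]> qualifiers =
            new TreeSet<>(Comparator.comparing((byte[] b) -> new String(b)));
        addQualifier(qualifiers, null);            // would NPE without the guard
        addQualifier(qualifiers, "q1".getBytes());
        System.out.println(qualifiers.size()); // prints 2
    }
}
```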



[jira] [Created] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-7130:
--

 Summary: NULL qualifier is ignored
 Key: HBASE-7130
 URL: https://issues.apache.org/jira/browse/HBASE-7130
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0


HBASE-6206 ignored a NULL qualifier so the qualifier list could be empty, but 
the request converter doesn't skip an empty qualifier list either.
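A minimal sketch of the converter behavior the issue implies, using a plain string model of the column list rather than the real protobuf converter (all names are illustrative):

```java
import java.util.*;

// Sketch: when a family's qualifier list is empty (a NULL qualifier was
// dropped earlier), request the whole family rather than emitting an empty
// qualifier list. "family:qualifier" strings stand in for protobuf columns.
public class EmptyQualifierConverterSketch {
    static List<String> buildColumns(Map<String, List<String>> familyMap) {
        List<String> columns = new ArrayList<>();
        for (Map.Entry<String, List<String>> e : familyMap.entrySet()) {
            if (e.getValue() == null || e.getValue().isEmpty()) {
                columns.add(e.getKey());                 // whole family
            } else {
                for (String q : e.getValue()) {
                    columns.add(e.getKey() + ":" + q);   // family:qualifier
                }
            }
        }
        return columns;
    }

    public static void main(String[] args) {
        Map<String, List<String>> familyMap = new LinkedHashMap<>();
        familyMap.put("cf1", Collections.emptyList()); // NULL qualifier was dropped
        familyMap.put("cf2", List.of("q1", "q2"));
        System.out.println(buildColumns(familyMap)); // prints [cf1, cf2:q1, cf2:q2]
    }
}
```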



[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493471#comment-13493471
 ] 

Jimmy Xiang commented on HBASE-7106:


Liyin, you are right. I will fix that (in trunk). I filed HBASE-7130.

 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK7, putting a NULL into a TreeSet throws an NPE, and in the unit 
 tests a user can add a NULL qualifier into the family map for a GET or SCAN. 
 So we shall do the following: 
 1) Make sure the semantics of a NULL column qualifier are equal to those of 
 the EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace a NULL 
 qualifier in the family map for the GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a jdk option in the pom.xml (assuming the user installed the 
 fb-packaged jdk), 
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7



[jira] [Commented] (HBASE-5776) HTableMultiplexer

2012-11-08 Thread Otis Gospodnetic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493477#comment-13493477
 ] 

Otis Gospodnetic commented on HBASE-5776:
-

[~liangly] Any plans/ETA for getting this in trunk?

 HTableMultiplexer 
 --

 Key: HBASE-5776
 URL: https://issues.apache.org/jira/browse/HBASE-5776
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Assignee: Liyin Tang
 Attachments: ASF.LICENSE.NOT.GRANTED--D2775.1.patch, 
 ASF.LICENSE.NOT.GRANTED--D2775.1.patch, 
 ASF.LICENSE.NOT.GRANTED--D2775.2.patch, 
 ASF.LICENSE.NOT.GRANTED--D2775.2.patch, 
 ASF.LICENSE.NOT.GRANTED--D2775.3.patch, 
 ASF.LICENSE.NOT.GRANTED--D2775.4.patch, ASF.LICENSE.NOT.GRANTED--D2775.5.patch


 There is a known issue in the HBase client where a single slow/dead region 
 server can slow down multiput operations across all the region servers, so 
 the HBase client becomes as slow as the slowest region server in the cluster. 
  
 To solve this problem, HTableMultiplexer separates the multiput submitting 
 threads from the flush threads, which makes the multiput operation 
 nonblocking. 
 The submitting thread shards all the puts into different queues based on 
 their destination region servers and returns immediately. The flush threads 
 flush the puts from each queue to its destination region server. 
 Currently the HTableMultiplexer only supports the put operation.
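The queue-per-server scheme described above can be sketched with standard java.util.concurrent pieces; the class below is a toy model, not the real HTableMultiplexer API:

```java
import java.util.*;
import java.util.concurrent.*;

// Toy version of the multiplexer idea: submit() shards a put onto its
// destination server's queue and returns immediately; one flush thread per
// server drains the queue in the background, so a slow server only backs up
// its own queue. Class and method names are illustrative.
public class MultiplexerSketch implements AutoCloseable {
    private final Map<String, BlockingQueue<String>> queues = new ConcurrentHashMap<>();
    private final ExecutorService flushers = Executors.newCachedThreadPool();
    private final ConcurrentMap<String, Integer> flushed = new ConcurrentHashMap<>();

    // Nonblocking submit: enqueue and return.
    public boolean submit(String server, String put) {
        BlockingQueue<String> q = queues.computeIfAbsent(server, s -> {
            BlockingQueue<String> nq = new LinkedBlockingQueue<>();
            flushers.submit(() -> {                     // per-server flush thread
                try {
                    while (true) {
                        nq.take();                      // "flush" one put
                        flushed.merge(s, 1, Integer::sum);
                    }
                } catch (InterruptedException ignored) { }
            });
            return nq;
        });
        return q.offer(put);
    }

    public int flushedCount(String server) { return flushed.getOrDefault(server, 0); }

    @Override public void close() { flushers.shutdownNow(); }

    public static void main(String[] args) throws Exception {
        try (MultiplexerSketch m = new MultiplexerSketch()) {
            for (int i = 0; i < 5; i++) m.submit("rs1", "put-" + i);
            m.submit("rs2", "put-x");
            Thread.sleep(200);                          // let the flushers drain
            System.out.println(m.flushedCount("rs1") + " " + m.flushedCount("rs2"));
        }
    }
}
```

A real implementation would bound the queues and batch flushes; the sketch only shows why submission never blocks on a slow server.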



[jira] [Commented] (HBASE-7106) [89-fb] Fix the NPE in unit tests for JDK7

2012-11-08 Thread Liyin Tang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493478#comment-13493478
 ] 

Liyin Tang commented on HBASE-7106:
---

Gustavo Anatoly, sure!

 [89-fb] Fix the NPE in unit tests for JDK7
 --

 Key: HBASE-7106
 URL: https://issues.apache.org/jira/browse/HBASE-7106
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang
Priority: Trivial

 In JDK7, putting a NULL into a TreeSet throws an NPE, and in the unit 
 tests a user can add a NULL qualifier into the family map for GET or SCAN. 
 So we shall do the following: 
 1) Make sure the semantics of a NULL column qualifier are equal to those of 
 the EMPTY_BYTE_ARRAY column qualifier.
 2) An easy fix is to use the EMPTY_BYTE_ARRAY qualifier to replace the NULL 
 qualifier in the family map for the GET or SCAN objects; everything else 
 shall be backward compatible.
 3) Add a JDK option in the pom.xml (assuming the user has installed the 
 fb-packaged JDK), 
 e.g.: mvn test -Dtest=TestFromClientSide -Pjdk7
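The fix in (1)/(2) amounts to normalizing a NULL qualifier to the empty byte array before it reaches a sorted container. A sketch (illustrative Python; the real change substitutes HConstants.EMPTY_BYTE_ARRAY in the Java family map):

```python
def normalize_qualifier(qualifier):
    """Treat a None (NULL) column qualifier as the empty byte string, so it
    can live in a sorted set without error -- None cannot be ordered against
    bytes, mirroring the TreeSet NPE on a null element under JDK7."""
    return b"" if qualifier is None else qualifier

def add_qualifier(family_map, family, qualifier):
    """Add a qualifier to the family map, normalizing NULL on the way in,
    and return the family's qualifiers in sorted order."""
    family_map.setdefault(family, set()).add(normalize_qualifier(qualifier))
    return sorted(family_map[family])

fm = {}
print(add_qualifier(fm, b"cf", None))   # [b'']
print(add_qualifier(fm, b"cf", b"a"))   # [b'', b'a']
```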

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7046) Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493480#comment-13493480
 ] 

Hadoop QA commented on HBASE-7046:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552696/HBASE-7046.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.coprocessor.TestRegionServerCoprocessorExceptionWithAbort

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3271//console

This message is automatically generated.

 Fix resource leak in TestHLogSplit#testOldRecoveredEditsFileSidelined
 -

 Key: HBASE-7046
 URL: https://issues.apache.org/jira/browse/HBASE-7046
 Project: HBase
  Issue Type: Bug
  Components: wal
Affects Versions: 0.96.0
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBASE-7046.patch


 This method creates a writer but never closes it.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Description: HBASE-6206 ignored the NULL qualifier, so the qualifier list could 
be empty. But the request converter skips an empty qualifier list too.  (was: 
HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But the 
request converter doesn't skip empty qualifier list too.)

 NULL qualifier is ignored
 -

 Key: HBASE-7130
 URL: https://issues.apache.org/jira/browse/HBASE-7130
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0


 HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
 the request converter skips empty qualifier list too.
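The converter behavior can be sketched like so (illustrative Python; `convert_buggy`/`convert_fixed` are made-up names modeling the request converter's logic, not real HBase code):

```python
def convert_buggy(family_map):
    # Skips families whose qualifier list is empty -- so a Get/Scan whose
    # NULL qualifier was dropped (HBASE-6206) silently loses the family.
    return [(fam, list(quals)) for fam, quals in family_map.items() if quals]

def convert_fixed(family_map):
    # Keeps the family even with an empty qualifier list, which means
    # "all columns in this family".
    return [(fam, list(quals)) for fam, quals in family_map.items()]

fm = {b"cf": []}   # qualifier list emptied by the NULL handling
print(len(convert_buggy(fm)), len(convert_fixed(fm)))   # 0 1
```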

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-7115:
--

Description: 
HBASE-5428 added this capability to thrift interface but the configuration 
parameter name is thrift specific.

This patch introduces a more generic parameter hbase.user.filters using which 
the user defined custom filters can be specified in the configuration and 
loaded in any client that needs to use the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.

Example usage: Let's say I have written a couple of custom filters with class 
names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
*{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
use them from HBase shell using the filter language.

To do that, I would add the following configuration to {{hbase-site.xml}}

{panel}{{<property>}}
{{  <name>hbase.user.filters</name>}}
{{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
{{</property>}}{panel}

Once this is configured, I can launch HBase shell and use these filters in my 
{{get}} or {{scan}} just the way I would use a built-in filter.

{code}
hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
SilverBulletFilter(42)}
ROW  COLUMN+CELL
 status  column=cf:a, 
timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}

  was:
HBASE-5428 added this capability to thrift interface but the configuration 
parameter name is thrift specific.

This patch introduces a more generic parameter hbase.user.filters using which 
the user custom filters can be specified in the configuration and loaded in any 
client that needs to use the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.


 [shell] Provide a way to register custom filters with the Filter Language 
 Parser
 

 Key: HBASE-7115
 URL: https://issues.apache.org/jira/browse/HBASE-7115
 Project: HBase
  Issue Type: Improvement
  Components: Filters, shell
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-7115_trunk.patch


 HBASE-5428 added this capability to thrift interface but the configuration 
 parameter name is thrift specific.
 This patch introduces a more generic parameter hbase.user.filters using 
 which the user defined custom filters can be specified in the configuration 
 and loaded in any client that needs to use the filter language parser.
 The patch then uses this new parameter to register any user specified filters 
 while invoking the HBase shell.
 Example usage: Let's say I have written a couple of custom filters with class 
 names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
 *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
 use them from HBase shell using the filter language.
 To do that, I would add the following configuration to {{hbase-site.xml}}
 {panel}{{<property>}}
 {{  <name>hbase.user.filters</name>}}
 {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
 {{</property>}}{panel}
 Once this is configured, I can launch HBase shell and use these filters in my 
 {{get}} or {{scan}} just the way I would use a built-in filter.
 {code}
 hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
 SilverBulletFilter(42)}
 ROW  COLUMN+CELL
  status  column=cf:a, 
 timestamp=30438552, value=world_peace
 1 row(s) in 0. seconds
 {code}
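The `hbase.user.filters` value format above (`Name:fully.qualified.Class`, comma-separated) can be parsed as in this sketch (illustrative Python; `parse_user_filters` is a made-up helper, the real registration happens in the Java ParseFilter):

```python
def parse_user_filters(value):
    """Split a "Name:fully.qualified.Class,Name2:Class2" configuration value
    into a {name: class} mapping, tolerating surrounding whitespace."""
    filters = {}
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        name, _, klass = entry.partition(":")
        filters[name.strip()] = klass.strip()
    return filters

conf = ("SuperDuperFilter:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,"
        "SilverBulletFilter:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter")
print(parse_user_filters(conf)["SuperDuperFilter"])
# org.apache.hadoop.hbase.filter.custom.SuperDuperFilter
```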

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7109:


Attachment: HBASE-7109-v2-squashed.patch

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493507#comment-13493507
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

renamed class and parameter, added some javadocs

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Attachment: trunk-7130.patch

 NULL qualifier is ignored
 -

 Key: HBASE-7130
 URL: https://issues.apache.org/jira/browse/HBASE-7130
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: trunk-7130.patch


 HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
 the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-7130:
---

Status: Patch Available  (was: Open)

 NULL qualifier is ignored
 -

 Key: HBASE-7130
 URL: https://issues.apache.org/jira/browse/HBASE-7130
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: trunk-7130.patch


 HBASE-6206 ignored NULL qualifier so the qualifier list could be empty. But 
 the request converter skips empty qualifier list too.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493516#comment-13493516
 ] 

Aditya Kishore commented on HBASE-7115:
---

[~stack] Have updated the JIRA description.

 [shell] Provide a way to register custom filters with the Filter Language 
 Parser
 

 Key: HBASE-7115
 URL: https://issues.apache.org/jira/browse/HBASE-7115
 Project: HBase
  Issue Type: Improvement
  Components: Filters, shell
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-7115_trunk.patch


 HBASE-5428 added this capability to thrift interface but the configuration 
 parameter name is thrift specific.
 This patch introduces a more generic parameter hbase.user.filters using 
 which the user defined custom filters can be specified in the configuration 
 and loaded in any client that needs to use the filter language parser.
 The patch then uses this new parameter to register any user specified filters 
 while invoking the HBase shell.
 Example usage: Let's say I have written a couple of custom filters with class 
 names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
 *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
 use them from HBase shell using the filter language.
 To do that, I would add the following configuration to {{hbase-site.xml}}
 {panel}{{<property>}}
 {{  <name>hbase.user.filters</name>}}
 {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
 {{</property>}}{panel}
 Once this is configured, I can launch HBase shell and use these filters in my 
 {{get}} or {{scan}} just the way I would use a built-in filter.
 {code}
 hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
 SilverBulletFilter(42)}
 ROW  COLUMN+CELL
  status  column=cf:a, 
 timestamp=30438552, value=world_peace
 1 row(s) in 0. seconds
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493520#comment-13493520
 ] 

Aditya Kishore commented on HBASE-7115:
---

And yes, this only registers the custom filters with the Filter Language Parser 
and does not add the JARs to the client/server classpath. Let me think about 
it. Probably we can load the filter jars the same way coprocessor jars are 
picked up.

 [shell] Provide a way to register custom filters with the Filter Language 
 Parser
 

 Key: HBASE-7115
 URL: https://issues.apache.org/jira/browse/HBASE-7115
 Project: HBase
  Issue Type: Improvement
  Components: Filters, shell
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-7115_trunk.patch


 HBASE-5428 added this capability to thrift interface but the configuration 
 parameter name is thrift specific.
 This patch introduces a more generic parameter hbase.user.filters using 
 which the user defined custom filters can be specified in the configuration 
 and loaded in any client that needs to use the filter language parser.
 The patch then uses this new parameter to register any user specified filters 
 while invoking the HBase shell.
 Example usage: Let's say I have written a couple of custom filters with class 
 names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
 *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
 use them from HBase shell using the filter language.
 To do that, I would add the following configuration to {{hbase-site.xml}}
 {panel}{{<property>}}
 {{  <name>hbase.user.filters</name>}}
 {{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
 {{</property>}}{panel}
 Once this is configured, I can launch HBase shell and use these filters in my 
 {{get}} or {{scan}} just the way I would use a built-in filter.
 {code}
 hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
 SilverBulletFilter(42)}
 ROW  COLUMN+CELL
  status  column=cf:a, 
 timestamp=30438552, value=world_peace
 1 row(s) in 0. seconds
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-7115) [shell] Provide a way to register custom filters with the Filter Language Parser

2012-11-08 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-7115:
--

Description: 
HBASE-5428 added this capability to thrift interface but the configuration 
parameter name is thrift specific.

This patch introduces a more generic parameter hbase.user.filters using which 
the user defined custom filters can be specified in the configuration and 
loaded in any client that needs to use the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.

Example usage: Let's say I have written a couple of custom filters with class 
names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
*{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
use them from HBase shell using the filter language.

To do that, I would add the following configuration to {{hbase-site.xml}}

{panel}{{<property>}}
{{  <name>hbase.user.filters</name>}}
{{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
{{</property>}}{panel}

Once this is configured, I can launch HBase shell and use these filters in my 
{{get}} or {{scan}} just the way I would use a built-in filter.

{code}
hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
SilverBulletFilter(42)}
ROW  COLUMN+CELL
 status  column=cf:a, 
timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}

To use this feature in any client, the client needs to make the following 
function call as part of its initialization.
{code}
ParseFilter.registerUserFilters(configuration);
{code}

  was:
HBASE-5428 added this capability to thrift interface but the configuration 
parameter name is thrift specific.

This patch introduces a more generic parameter hbase.user.filters using which 
the user defined custom filters can be specified in the configuration and 
loaded in any client that needs to use the filter language parser.

The patch then uses this new parameter to register any user specified filters 
while invoking the HBase shell.

Example usage: Let's say I have written a couple of custom filters with class 
names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
*{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
use them from HBase shell using the filter language.

To do that, I would add the following configuration to {{hbase-site.xml}}

{panel}{{<property>}}
{{  <name>hbase.user.filters</name>}}
{{  <value>}}*{{SuperDuperFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SuperDuperFilter,}}*{{SilverBulletFilter}}*{{:org.apache.hadoop.hbase.filter.custom.SilverBulletFilter</value>}}
{{</property>}}{panel}

Once this is configured, I can launch HBase shell and use these filters in my 
{{get}} or {{scan}} just the way I would use a built-in filter.

{code}
hbase(main):001:0> scan 't', {FILTER => SuperDuperFilter(true) AND 
SilverBulletFilter(42)}
ROW  COLUMN+CELL
 status  column=cf:a, 
timestamp=30438552, value=world_peace
1 row(s) in 0. seconds
{code}


 [shell] Provide a way to register custom filters with the Filter Language 
 Parser
 

 Key: HBASE-7115
 URL: https://issues.apache.org/jira/browse/HBASE-7115
 Project: HBase
  Issue Type: Improvement
  Components: Filters, shell
Affects Versions: 0.96.0
Reporter: Aditya Kishore
Assignee: Aditya Kishore
 Fix For: 0.96.0

 Attachments: HBASE-7115_trunk.patch


 HBASE-5428 added this capability to thrift interface but the configuration 
 parameter name is thrift specific.
 This patch introduces a more generic parameter hbase.user.filters using 
 which the user defined custom filters can be specified in the configuration 
 and loaded in any client that needs to use the filter language parser.
 The patch then uses this new parameter to register any user specified filters 
 while invoking the HBase shell.
 Example usage: Let's say I have written a couple of custom filters with class 
 names *{{org.apache.hadoop.hbase.filter.custom.SuperDuperFilter}}* and 
 *{{org.apache.hadoop.hbase.filter.custom.SilverBulletFilter}}* and I want to 
 use them from HBase shell using the filter language.
 To do that, I would add the following configuration to {{hbase-site.xml}}
 {panel}{{<property>}}
 {{  <name>hbase.user.filters</name>}}
 {{  

[jira] [Updated] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-7110:


Attachment: HBASE-7110-v6-squashed.patch

 refactor the compaction selection and config code similarly to 0.89-fb changes
 --

 Key: HBASE-7110
 URL: https://issues.apache.org/jira/browse/HBASE-7110
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
 HBASE-7110-v6-squashed.patch


 Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
 code review)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493535#comment-13493535
 ] 

Sergey Shelukhin commented on HBASE-7110:
-

updated

 refactor the compaction selection and config code similarly to 0.89-fb changes
 --

 Key: HBASE-7110
 URL: https://issues.apache.org/jira/browse/HBASE-7110
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
 HBASE-7110-v6-squashed.patch


 Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
 code review)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6826:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review. 

 [WINDOWS] TestFromClientSide failures
 -

 Key: HBASE-6826
 URL: https://issues.apache.org/jira/browse/HBASE-6826
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch, 
 hbase-6826_v2-0.94.patch, hbase-6826_v2-trunk.patch


 The following tests fail for TestFromClientSide: 
 {code}
 testPoolBehavior()
 testClientPoolRoundRobin()
 testClientPoolThreadLocal()
 {code}
 The first test fails because it (wrongly) assumes that 
 ThreadPoolExecutor can reclaim the thread immediately. 
 The second and third tests seem to fail because the Puts to the table do 
 not specify an explicit timestamp, but on Windows consecutive calls to put 
 happen to finish in the same millisecond, so the resulting mutations have 
 the same timestamp and thus there is only one version of the cell value.  
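The versioning collision can be modeled as follows (illustrative Python, assuming one stored value per cell timestamp, as HBase keeps for a given row/column):

```python
def apply_put(cell_versions, value, ts):
    """Model of HBase cell versioning: versions are keyed by timestamp, so
    two puts landing in the same millisecond overwrite each other instead
    of accumulating as separate versions."""
    cell_versions[ts] = value
    return len(cell_versions)

versions = {}
apply_put(versions, "v1", ts=1000)
print(apply_put(versions, "v2", ts=1000))   # same millisecond: still 1 version
print(apply_put(versions, "v3", ts=1001))   # distinct timestamp: 2 versions
```

Supplying explicit, distinct timestamps in the test Puts sidesteps the fast-clock behavior on Windows.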

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6822:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review.

 [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
 -

 Key: HBASE-6822
 URL: https://issues.apache.org/jira/browse/HBASE-6822
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0

 Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch


 TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
 is working by launching 5 threads corresponding to zk servers. 
 NIOServerCnxnFactory.configure() configures the socket as:
 {code}
 this.ss = ServerSocketChannel.open();
 ss.socket().setReuseAddress(true);
 {code}
 setReuseAddress() is set because it allows the server to come back up and 
 bind to the same port before the socket is timed out by the kernel.
 Under Windows, the behavior of ServerSocket.setReuseAddress() is different 
 from Linux: it allows any process to bind to an already-bound 
 port. This causes ZK nodes starting on the same node to bind to 
 the same port. 
 The following part of the patch at 
 https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
 Hadoop:
 {code}
 if(Shell.WINDOWS) {
 +  // result of setting the SO_REUSEADDR flag is different on Windows
 +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
 +  // without this 2 NN's can start on the same machine and listen on 
 +  // the same port with indeterminate routing of incoming requests to 
 them
 +  ret.setReuseAddress(false);
 +}
 {code}
 We should do the same in Zookeeper (I'll open a ZOOKEEPER issue). But in the 
 meantime, we can fix the hbase tests to not rely on BindException to detect 
 bind errors. In particular, in MiniZKCluster.startup(), when starting more 
 than one server we already know that we have to increment the port number. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6820:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for the review. 

 [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
 shutdown()
 --

 Key: HBASE-6820
 URL: https://issues.apache.org/jira/browse/HBASE-6820
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch


 MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
 NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
 ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
 and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
 ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
 Tests affected by this are
 {code}
 TestSplitLogManager
 TestSplitLogWorker
 TestOfflineMetaRebuildBase
 TestOfflineMetaRebuildHole
 TestOfflineMetaRebuildOverlap
 {code}
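The leak and its fix can be modeled as follows (illustrative Python; the class and method names mirror the ZooKeeper ones but are stand-ins, not the real API):

```python
class ZKDatabaseSketch:
    """Stand-in for ZKDatabase; only tracks whether close() was called."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

class ZooKeeperServerSketch:
    """Models the deprecated constructor path: the server builds its own
    ZKDatabase, but shutdown() never closes it (the leak described above)."""
    def __init__(self):
        self.zk_db = ZKDatabaseSketch()
    def shutdown(self):
        pass   # note: does NOT close self.zk_db

def mini_cluster_shutdown(server):
    # The fix: MiniZookeeperCluster.shutdown() closes the ZKDatabase itself.
    server.shutdown()
    server.zk_db.close()

s = ZooKeeperServerSketch()
mini_cluster_shutdown(s)
print(s.zk_db.closed)   # True
```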

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-4913:
--

Attachment: HBASE-4913-addendum.patch

* Attached HBASE-4913-addendum.patch *

When doing some testing on the 94 patch, I noticed the ruby parsing isn't that 
great; if you pass more arguments than are supported, it just ignores the 
command rather than giving you an error message.

 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913-addendum.patch, HBASE-4913.trunk.v1.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913-trunk-v3.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a call to DaughterOpener.start()

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6823:
-

Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've committed the v2 patch. It is just a rebase of the v1, w/o the imports. 
Thanks Stack for the review. 

 [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a 
 call to DaughterOpener.start()
 ---

 Key: HBASE-6823
 URL: https://issues.apache.org/jira/browse/HBASE-6823
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch, 
 hbase-6823_v2-0.94.patch, hbase-6823_v2-trunk.patch


 There are two unit test cases in HBase RegionServer test failed in the clean 
 up stage that failed to delete the files/folders created in the test. 
 testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction):
  Failed delete of ./target/test-
 data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
 testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
 Failed delete of ./target/test-data/6
 1a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
 The root cause is a call to DaughterOpener.start() in 
 \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java
  (the openDaughters() function). It leaves handles to the split folders/files 
 open, which causes the deletion to fail on Windows.
 Windows does not allow deleting a file while there are open handles to it.
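The failure mode above is easy to reproduce in miniature: on Windows, File.delete() fails while a stream still holds the file open, so the fix is to release handles (ideally via try-with-resources) before cleanup. A minimal standalone sketch, not the actual test code:

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class HandleCleanupDemo {
    // Writes to a file, closes the handle, then deletes it.
    // Returns true when the delete succeeds.
    static boolean writeThenDelete(File f) throws IOException {
        // try-with-resources guarantees the handle is released before delete(),
        // which is required on Windows (and good practice everywhere).
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(42);
        }
        return f.delete();
    }

    public static void main(String[] args) throws IOException {
        File f = File.createTempFile("handle-demo", ".tmp");
        System.out.println("deleted: " + writeThenDelete(f));
    }
}
```

Had the stream been left open, the same delete() call is exactly where the Windows test teardown fails.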



[jira] [Updated] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated HBASE-4913:
--

Attachment: HBASE-4913-94.patch

* Attached HBASE-4913-94.patch *

94 version of patch.

 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
 HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch






[jira] [Resolved] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar resolved HBASE-6827.
--

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed

I've committed this. Thanks Stack for the review. 

 [WINDOWS] TestScannerTimeout fails expecting a timeout
 --

 Key: HBASE-6827
 URL: https://issues.apache.org/jira/browse/HBASE-6827
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0

 Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch


 TestScannerTimeout.test2481() fails with:
 {code}
 java.lang.AssertionError: We should be timing out
   at org.junit.Assert.fail(Assert.java:93)
   at 
 org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
 {code}



[jira] [Commented] (HBASE-7121) Fix TestHFileOutputFormat after moving RS to metrics2

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493552#comment-13493552
 ] 

Hudson commented on HBASE-7121:
---

Integrated in HBase-TRUNK #3521 (See 
[https://builds.apache.org/job/HBase-TRUNK/3521/])
HBASE-7121 Fix TestHFileOutputFormat after moving RS to metrics2 (Revision 
1407216)

 Result = FAILURE
eclark : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionWrapperImpl.java


 Fix TestHFileOutputFormat after moving RS to metrics2
 -

 Key: HBASE-7121
 URL: https://issues.apache.org/jira/browse/HBASE-7121
 Project: HBase
  Issue Type: Sub-task
  Components: metrics
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 0.96.0

 Attachments: HBASE-7121-0.patch


 When spinning up lots of threads in a single jvm it's possible that the 
 metrics wrapper can touch variables that are not initialized.



[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493553#comment-13493553
 ] 

Hudson commented on HBASE-4913:
---

Integrated in HBase-TRUNK #3521 (See 
[https://builds.apache.org/job/HBase-TRUNK/3521/])
HBASE-4913 Per-CF compaction Via the Shell (Mubarak and Gregory) (Revision 
1407227)

 Result = FAILURE
gchanan : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/RequestConverter.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/protobuf/generated/AdminProtos.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java
* /hbase/trunk/hbase-server/src/main/protobuf/Admin.proto
* /hbase/trunk/hbase-server/src/main/ruby/hbase/admin.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/compact.rb
* /hbase/trunk/hbase-server/src/main/ruby/shell/commands/major_compact.rb
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompactionState.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionServerBulkLoad.java


 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
 HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch






[jira] [Updated] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6828:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for review. 

 [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
 

 Key: HBASE-6828
 URL: https://issues.apache.org/jira/browse/HBASE-6828
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch


 TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
 difference.



[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493559#comment-13493559
 ] 

Hadoop QA commented on HBASE-7109:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552708/HBASE-7109-v2-squashed.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3273//console

This message is automatically generated.

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.



[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493560#comment-13493560
 ] 

Ted Yu commented on HBASE-7109:
---

{code}
+public class ClassFinder {
{code}
Please add audience and stability annotations.
{code}
+  public List<Class<?>> findClasses(String packageName, boolean proceedOnExceptions)
{code}
The above method calls findTestClassesFromFiles() and findTestClassesFromJar(). 
This gives me the impression that ClassFinder is already geared towards finding 
test classes.
Can ClassFinder and ClassTestFinder be merged?
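For illustration, the directory-scanning half of such a finder can be sketched as below. This is a hypothetical simplification; the actual patch also scans jars and applies name/class filters:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: walk a class-dir tree and turn relative .class
// file paths into fully qualified class names.
public class ClassFinderSketch {
    static List<String> findClassNames(File root) {
        List<String> names = new ArrayList<>();
        walk(root, "", names);
        return names;
    }

    private static void walk(File dir, String pkg, List<String> out) {
        File[] children = dir.listFiles();
        if (children == null) return;
        for (File f : children) {
            if (f.isDirectory()) {
                // each subdirectory adds one package segment
                walk(f, pkg + f.getName() + ".", out);
            } else if (f.getName().endsWith(".class")) {
                // strip ".class" and prepend the accumulated package prefix
                String simple = f.getName().substring(0, f.getName().length() - 6);
                out.add(pkg + simple);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(findClassNames(new File(".")));
    }
}
```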

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.



[jira] [Updated] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-11-08 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-6831:
-

   Resolution: Fixed
Fix Version/s: 0.96.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this. Thanks Stack for review. 

 [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
 session
 ---

 Key: HBASE-6831
 URL: https://issues.apache.org/jira/browse/HBASE-6831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch


 TestReplicationPeer fails because it forces the zookeeper session expiration 
 by calling HBaseTestingUtility.expireSession(), but that function fails to do 
 so.



[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493568#comment-13493568
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

Forgot to rename when splitting them :) Will rename. These have different 
responsibilities; I think it's a good idea to keep them split.

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.



[jira] [Commented] (HBASE-7109) integration tests on cluster are not getting picked up from distribution

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493569#comment-13493569
 ] 

Sergey Shelukhin commented on HBASE-7109:
-

ClassFinder can be used to find classes according to different rules, etc.

 integration tests on cluster are not getting picked up from distribution
 

 Key: HBASE-7109
 URL: https://issues.apache.org/jira/browse/HBASE-7109
 Project: HBase
  Issue Type: Sub-task
  Components: test
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-7109-squashed.patch, HBASE-7109-v2-squashed.patch


 The method of finding test classes only works on local build (or its full 
 copy), not if the distribution is used.



[jira] [Commented] (HBASE-6466) Enable multi-thread for memstore flush

2012-11-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493572#comment-13493572
 ] 

Sergey Shelukhin commented on HBASE-6466:
-

Updating trunk patch. I will run some tests...

 Enable multi-thread for memstore flush
 --

 Key: HBASE-6466
 URL: https://issues.apache.org/jira/browse/HBASE-6466
 Project: HBase
  Issue Type: Improvement
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, HBASE-6466v3.patch


 If the KV is large or the HLog is closed under high-pressure writes, we found 
 the memstore is often above the high water mark, which blocks the puts.
 So should we enable multi-threading for memstore flush?
 Some performance test data for reference:
 1. test environment: 
 random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 writer 
 threads per client
 2. test results:
 one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
 regionserver, with many aboveGlobalMemstoreLimit blocks
 two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
 regionserver
 200 thread handlers per client and two cacheFlush handlers: tps 16.1k/s per 
 regionserver, flush 18.6MB/s per regionserver
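The proposal boils down to draining one queue of flush requests with N handler threads instead of one. A hypothetical sketch of that idea follows; it is not the actual patch, and the real flush queue, delays, and memstore accounting are omitted:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Sketch: N flush handlers draining one shared queue of flush requests,
// so a single slow flush no longer stalls all the others.
public class MultiFlushSketch {
    static List<String> flushAll(List<String> regions, int handlers) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>(regions);
        List<String> flushed = new CopyOnWriteArrayList<>();
        ExecutorService pool = Executors.newFixedThreadPool(handlers);
        for (int i = 0; i < handlers; i++) {
            pool.execute(() -> {
                String region;
                // poll() is atomic, so each request is flushed exactly once
                while ((region = queue.poll()) != null) {
                    flushed.add(region);  // stand-in for the real memstore flush
                }
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return flushed;
    }

    public static void main(String[] args) {
        System.out.println(flushAll(List.of("r1", "r2", "r3"), 2));
    }
}
```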



[jira] [Commented] (HBASE-7130) NULL qualifier is ignored

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493573#comment-13493573
 ] 

Hadoop QA commented on HBASE-7130:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552711/trunk-7130.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
87 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 16 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.coprocessor.TestAggregateProtocol
  org.apache.hadoop.hbase.master.TestRollingRestart

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3274//console

This message is automatically generated.

 NULL qualifier is ignored
 -

 Key: HBASE-7130
 URL: https://issues.apache.org/jira/browse/HBASE-7130
 Project: HBase
  Issue Type: Bug
  Components: Client, Protobufs
Affects Versions: 0.96.0
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang
 Fix For: 0.96.0

 Attachments: trunk-7130.patch


 HBASE-6206 ignores a NULL qualifier, so the qualifier list can be empty. But 
 the request converter skips an empty qualifier list too.



[jira] [Updated] (HBASE-6466) Enable multi-thread for memstore flush

2012-11-08 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin updated HBASE-6466:


Attachment: HBASE-6466v3.1.patch

 Enable multi-thread for memstore flush
 --

 Key: HBASE-6466
 URL: https://issues.apache.org/jira/browse/HBASE-6466
 Project: HBase
  Issue Type: Improvement
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-6466.patch, HBASE-6466v2.patch, 
 HBASE-6466v3.1.patch, HBASE-6466v3.patch


 If the KV is large or the HLog is closed under high-pressure writes, we found 
 the memstore is often above the high water mark, which blocks the puts.
 So should we enable multi-threading for memstore flush?
 Some performance test data for reference:
 1. test environment: 
 random writing; upper memstore limit 5.6GB; lower memstore limit 4.8GB; 400 
 regions per regionserver; row len=50 bytes, value len=1024 bytes; 5 
 regionservers, 300 ipc handlers per regionserver; 5 clients, 50 writer 
 threads per client
 2. test results:
 one cacheFlush handler: tps 7.8k/s per regionserver, flush 10.1MB/s per 
 regionserver, with many aboveGlobalMemstoreLimit blocks
 two cacheFlush handlers: tps 10.7k/s per regionserver, flush 12.46MB/s per 
 regionserver
 200 thread handlers per client and two cacheFlush handlers: tps 16.1k/s per 
 regionserver, flush 18.6MB/s per regionserver



[jira] [Commented] (HBASE-4913) Per-CF compaction Via the Shell

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493575#comment-13493575
 ] 

Hadoop QA commented on HBASE-4913:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12552725/HBASE-4913-94.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3275//console

This message is automatically generated.

 Per-CF compaction Via the Shell
 ---

 Key: HBASE-4913
 URL: https://issues.apache.org/jira/browse/HBASE-4913
 Project: HBase
  Issue Type: Sub-task
  Components: Client, regionserver
Reporter: Nicolas Spiegelberg
Assignee: Mubarak Seyed
 Fix For: 0.96.0, 0.94.4

 Attachments: HBASE-4913-94.patch, HBASE-4913-addendum.patch, 
 HBASE-4913.trunk.v1.patch, HBASE-4913.trunk.v2.patch, 
 HBASE-4913.trunk.v2.patch, HBASE-4913-trunk-v3.patch






[jira] [Updated] (HBASE-7122) Proper warning message when opening a log file with no entries (idle cluster)

2012-11-08 Thread Himanshu Vashishtha (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Himanshu Vashishtha updated HBASE-7122:
---

Attachment: HBase-7122.patch

Tested it on a cluster; it stops emitting the exception, and the other 
behavior remains the same.

 Proper warning message when opening a log file with no entries (idle cluster)
 -

 Key: HBASE-7122
 URL: https://issues.apache.org/jira/browse/HBASE-7122
 Project: HBase
  Issue Type: Sub-task
  Components: Replication
Affects Versions: 0.94.2
Reporter: Himanshu Vashishtha
Assignee: Himanshu Vashishtha
 Fix For: 0.96.0

 Attachments: HBase-7122.patch


 In case the cluster is idle and the log has rolled (offset at 0), 
 replicationSource tries to open the log and gets an EOF exception. This gets 
 printed every 10 seconds until an entry is inserted in it.
 {code}
 2012-11-07 15:47:40,924 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(487)) - Opening log for replication 
 c0315.hal.cloudera.com%2C40020%2C1352324202860.1352327804874 at 0
 2012-11-07 15:47:40,926 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(543)) - 1 Got: 
 java.io.EOFException
   at java.io.DataInputStream.readFully(DataInputStream.java:180)
   at java.io.DataInputStream.readFully(DataInputStream.java:152)
   at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1508)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
   at 
 org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
   at 
 org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:175)
   at 
 org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:716)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.openReader(ReplicationSource.java:491)
   at 
 org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:290)
 2012-11-07 15:47:40,927 WARN  regionserver.ReplicationSource 
 (ReplicationSource.java:openReader(547)) - Waited too long for this file, 
 considering dumping
 2012-11-07 15:47:40,927 DEBUG regionserver.ReplicationSource 
 (ReplicationSource.java:sleepForRetries(562)) - Unable to open a reader, 
 sleeping 1000 times 10
 {code}
 We should reduce the log spewing in this case (or print a more informative 
 message based on the offset).
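The EOFException itself comes from SequenceFile's header read, which uses DataInputStream.readFully() and therefore throws on a zero-length (freshly rolled, still empty) file rather than reporting "no entries". A minimal reproduction of just that root cause, with no Hadoop dependency:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

// Minimal reproduction: readFully() on an empty stream throws EOFException,
// which is what ReplicationSource sees when opening an idle, rolled log.
public class EmptyLogDemo {
    static boolean headerReadFails(byte[] fileBytes) {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(fileBytes));
        try {
            // SequenceFile begins by reading a 3-byte "SEQ" magic header
            in.readFully(new byte[3]);
            return false;
        } catch (EOFException e) {
            return true;   // zero-length file: header read fails immediately
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("empty file fails: " + headerReadFails(new byte[0]));
    }
}
```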



[jira] [Commented] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493604#comment-13493604
 ] 

Hudson commented on HBASE-6820:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6820. [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is 
closed upon shutdown() (Revision 1407287)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


 [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
 shutdown()
 --

 Key: HBASE-6820
 URL: https://issues.apache.org/jira/browse/HBASE-6820
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch


 MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
 NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
 ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
 and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
 ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
 Tests affected by this are
 {code}
 TestSplitLogManager
 TestSplitLogWorker
 TestOfflineMetaRebuildBase
 TestOfflineMetaRebuildHole
 TestOfflineMetaRebuildOverlap
 {code}



[jira] [Commented] (HBASE-6823) [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a call to DaughterOpener.start()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493605#comment-13493605
 ] 

Hudson commented on HBASE-6823:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6823. [WINDOWS] TestSplitTransaction fails due to the Log handle not 
released by a call to DaughterOpener.start() (Revision 1407289)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestSplitTransaction.java


 [WINDOWS] TestSplitTransaction fails due to the Log handle not released by a 
 call to DaughterOpener.start()
 ---

 Key: HBASE-6823
 URL: https://issues.apache.org/jira/browse/HBASE-6823
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6823_v1-0.94.patch, hbase-6823_v1-trunk.patch, 
 hbase-6823_v2-0.94.patch, hbase-6823_v2-trunk.patch


 There are two unit test cases in HBase RegionServer test failed in the clean 
 up stage that failed to delete the files/folders created in the test. 
 testWholesomeSplit(org.apache.hadoop.hbase.regionserver.TestSplitTransaction):
  Failed delete of ./target/test-
 data/1c386abc-f159-492e-b21f-e89fab24d85b/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/a588d813fd26280c2b42e93565ed960c
 testRollback(org.apache.hadoop.hbase.regionserver.TestSplitTransaction): 
 Failed delete of ./target/test-data/6
 1a1a14b-0cc9-4dd6-93fd-4dc021e2bfcc/org.apache.hadoop.hbase.regionserver.TestSplitTransaction/table/8090abc89528461fa284288c257662cd
 The root cause is a call to DaughterOpener.start() in 
 \src\hbase\src\main\java\org\apache\hadoop\hbase\regionserver\SplitTransaction.java
  (the openDaughters() function). It leaves handles to the split folders/files 
 open, which causes the deletion to fail on Windows.
 Windows does not allow deleting a file while there are open handles to it.



[jira] [Commented] (HBASE-6822) [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493606#comment-13493606
 ] 

Hudson commented on HBASE-6822:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6822. [WINDOWS] MiniZookeeperCluster multiple daemons bind to the 
same port (Revision 1407286)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


 [WINDOWS] MiniZookeeperCluster multiple daemons bind to the same port
 -

 Key: HBASE-6822
 URL: https://issues.apache.org/jira/browse/HBASE-6822
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0

 Attachments: hbase-6822_v1-0.94.patch, hbase-6822_v1-trunk.patch


 TestHBaseTestingUtility.testMiniZooKeeper() tests whether the mini zk cluster 
 is working by launching 5 threads corresponding to zk servers. 
 NIOServerCnxnFactory.configure() configures the socket as:
 {code}
 this.ss = ServerSocketChannel.open();
 ss.socket().setReuseAddress(true);
 {code}
 setReuseAddress() is set because it allows the server to come back up and 
 bind to the same port before the kernel times out the old socket.
 Under Windows, the behavior of ServerSocket.setReuseAddress() is different 
 than on Linux: it allows any process to bind to an already-bound port. This 
 lets ZK nodes starting on the same machine bind to the same port. 
 The following part of the patch at 
 https://issues.apache.org/jira/browse/HADOOP-8223 deals with this case for 
 Hadoop:
 {code}
 +if(Shell.WINDOWS) {
 +  // result of setting the SO_REUSEADDR flag is different on Windows
 +  // http://msdn.microsoft.com/en-us/library/ms740621(v=vs.85).aspx
 +  // without this 2 NN's can start on the same machine and listen on
 +  // the same port with indeterminate routing of incoming requests to them
 +  ret.setReuseAddress(false);
 +}
 {code}
 We should do the same in Zookeeper (I'll open a ZOOKEEPER issue). But in the 
 meantime, we can fix the hbase tests not to rely on a BindException to detect 
 bind errors. In particular, in MiniZKCluster.startup(), when starting more 
 than one server, we already know that we have to increment the port number. 
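A small standalone sketch of that idea (not the MiniZKCluster code itself; the helper name and port range are made up): instead of relying on SO_REUSEADDR semantics, catch the BindException and move to the next port.

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class IncrementingBind {
    /** Try to bind starting at basePort, stepping to the next port on BindException. */
    static ServerSocket bindWithRetry(int basePort, int maxAttempts) throws IOException {
        for (int i = 0; i < maxAttempts; i++) {
            try {
                // Note: not setting setReuseAddress(true); on Windows that flag
                // would let a second server bind an already-bound port.
                return new ServerSocket(basePort + i, 0, InetAddress.getLoopbackAddress());
            } catch (BindException e) {
                // Port taken: increment and retry instead of failing.
            }
        }
        throw new BindException("no free port in range");
    }

    public static void main(String[] args) throws IOException {
        try (ServerSocket first = bindWithRetry(21810, 100);
             // Starting the second search at the first server's port forces a retry.
             ServerSocket second = bindWithRetry(first.getLocalPort(), 100)) {
            System.out.println("distinct ports: "
                + (first.getLocalPort() != second.getLocalPort()));
        }
    }
}
```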



[jira] [Commented] (HBASE-6828) [WINDOWS] TestMemoryBoundedLogMessageBuffer failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493608#comment-13493608
 ] 

Hudson commented on HBASE-6828:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6828. [WINDOWS] TestMemoryBoundedLogMessageBuffer failures (Revision 
1407298)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/monitoring/TestMemoryBoundedLogMessageBuffer.java


 [WINDOWS] TestMemoryBoundedLogMessageBuffer failures
 

 Key: HBASE-6828
 URL: https://issues.apache.org/jira/browse/HBASE-6828
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6828_v1-0.94.patch, hbase-6828_v1-trunk.patch


 TestMemoryBoundedLogMessageBuffer fails because of a suspected \n line ending 
 difference.



[jira] [Commented] (HBASE-6827) [WINDOWS] TestScannerTimeout fails expecting a timeout

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493609#comment-13493609
 ] 

Hudson commented on HBASE-6827:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6827. [WINDOWS] TestScannerTimeout fails expecting a timeout 
(Revision 1407290)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestScannerTimeout.java


 [WINDOWS] TestScannerTimeout fails expecting a timeout
 --

 Key: HBASE-6827
 URL: https://issues.apache.org/jira/browse/HBASE-6827
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 0.96.0

 Attachments: hbase-6827_v1-0.94.patch, hbase-6827_v1-trunk.patch


 TestScannerTimeout.test2481() fails with:
 {code}
 java.lang.AssertionError: We should be timing out
   at org.junit.Assert.fail(Assert.java:93)
   at 
 org.apache.hadoop.hbase.client.TestScannerTimeout.test2481(TestScannerTimeout.java:117)
 {code}



[jira] [Commented] (HBASE-6826) [WINDOWS] TestFromClientSide failures

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493610#comment-13493610
 ] 

Hudson commented on HBASE-6826:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6826. [WINDOWS] TestFromClientSide failures (Revision 1407285)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestFromClientSide.java


 [WINDOWS] TestFromClientSide failures
 -

 Key: HBASE-6826
 URL: https://issues.apache.org/jira/browse/HBASE-6826
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6826_v1-0.94.patch, hbase-6826_v1-trunk.patch, 
 hbase-6826_v2-0.94.patch, hbase-6826_v2-trunk.patch


 The following tests fail for TestFromClientSide: 
 {code}
 testPoolBehavior()
 testClientPoolRoundRobin()
 testClientPoolThreadLocal()
 {code}
 The first test fails because the test (wrongly) assumes that 
 ThreadPoolExecutor can reclaim threads immediately. 
 The second and third tests fail because the Puts to the table do not specify 
 an explicit timestamp; on Windows, consecutive calls to put can finish within 
 the same millisecond, so the resulting mutations get the same timestamp and 
 only one version of the cell value is kept.  
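A standalone sketch of the collision and a common workaround (this is not the HBase patch; the generator below is an illustrative pattern, assign explicit, strictly increasing timestamps so no two mutations share one):

```java
public class MonotonicTimestamps {
    private static long lastTs = 0;

    /** Return a strictly increasing millisecond timestamp, even when called
     *  several times within the same wall-clock millisecond. */
    static synchronized long nextTimestamp() {
        long now = System.currentTimeMillis();
        lastTs = Math.max(now, lastTs + 1);
        return lastTs;
    }

    public static void main(String[] args) {
        // Two back-to-back clock reads often land in the same millisecond --
        // exactly what collapsed the cell versions in the failing tests.
        long a = System.currentTimeMillis();
        long b = System.currentTimeMillis();
        System.out.println("same millisecond possible: " + (a == b));

        // The generator never repeats, so each mutation gets its own version.
        long t1 = nextTimestamp();
        long t2 = nextTimestamp();
        long t3 = nextTimestamp();
        System.out.println("strictly increasing: " + (t1 < t2 && t2 < t3));
    }
}
```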



[jira] [Commented] (HBASE-6831) [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper session

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493607#comment-13493607
 ] 

Hudson commented on HBASE-6831:
---

Integrated in HBase-TRUNK #3522 (See 
[https://builds.apache.org/job/HBase-TRUNK/3522/])
HBASE-6831. [WINDOWS] HBaseTestingUtility.expireSession() does not expire 
zookeeper session (Revision 1407300)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java


 [WINDOWS] HBaseTestingUtility.expireSession() does not expire zookeeper 
 session
 ---

 Key: HBASE-6831
 URL: https://issues.apache.org/jira/browse/HBASE-6831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6831_v1-0.94.patch, hbase-6831_v1-trunk.patch


 TestReplicationPeer fails because it forces zookeeper session expiration by 
 calling HBaseTestingUtility.expireSession(), but that function fails to do so.



[jira] [Updated] (HBASE-4583) Integrate RWCC with Append and Increment operations

2012-11-08 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-4583:
--

Attachment: (was: 4583-mixed-v3.txt)

 Integrate RWCC with Append and Increment operations
 ---

 Key: HBASE-4583
 URL: https://issues.apache.org/jira/browse/HBASE-4583
 Project: HBase
  Issue Type: Bug
Reporter: Lars Hofhansl
Assignee: Lars Hofhansl
 Fix For: 0.96.0

 Attachments: 4583-mixed.txt, 4583-mixed-v2.txt, 4583-mixed-v4.txt, 
 4583-trunk-less-radical.txt, 4583-trunk-less-radical-v2.txt, 
 4583-trunk-less-radical-v3.txt, 4583-trunk-less-radical-v4.txt, 
 4583-trunk-less-radical-v5.txt, 4583-trunk-less-radical-v6.txt, 
 4583-trunk-radical.txt, 4583-trunk-radical_v2.txt, 4583-trunk-v3.txt, 
 4583.txt, 4583-v2.txt, 4583-v3.txt, 4583-v4.txt


 Currently Increment and Append operations do not work with RWCC, and hence a 
 client could see the results of multiple such operations mixed in the same 
 Get/Scan.
 The semantics may be a bit more interesting here, as upsert both adds to and 
 removes from the memstore.



[jira] [Commented] (HBASE-7110) refactor the compaction selection and config code similarly to 0.89-fb changes

2012-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493617#comment-13493617
 ] 

Hadoop QA commented on HBASE-7110:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12552718/HBASE-7110-v6-squashed.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified tests.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
85 warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 findbugs{color}.  The patch appears to introduce 17 new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.io.TestHeapSize

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/3276//console

This message is automatically generated.

 refactor the compaction selection and config code similarly to 0.89-fb changes
 --

 Key: HBASE-7110
 URL: https://issues.apache.org/jira/browse/HBASE-7110
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.96.0
Reporter: Sergey Shelukhin
Assignee: Sergey Shelukhin
 Attachments: HBASE-6371-v5-refactor-only-squashed.patch, 
 HBASE-7110-v6-squashed.patch


 Separate JIRA for refactoring changes from HBASE-7055 (and further ones after 
 code review)



[jira] [Commented] (HBASE-6820) [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon shutdown()

2012-11-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13493621#comment-13493621
 ] 

Hudson commented on HBASE-6820:
---

Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #253 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/253/])
HBASE-6820. [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is 
closed upon shutdown() (Revision 1407287)

 Result = FAILURE
enis : 
Files : 
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/zookeeper/MiniZooKeeperCluster.java


 [WINDOWS] MiniZookeeperCluster should ensure that ZKDatabase is closed upon 
 shutdown()
 --

 Key: HBASE-6820
 URL: https://issues.apache.org/jira/browse/HBASE-6820
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.3, 0.96.0
Reporter: Enis Soztutar
Assignee: Enis Soztutar
  Labels: windows
 Fix For: 0.96.0

 Attachments: hbase-6820_v1-0.94.patch, hbase-6820_v1-trunk.patch


 MiniZookeeperCluster.shutdown() shuts down the ZookeeperServer and 
 NIOServerCnxnFactory. However, MiniZookeeperCluster uses a deprecated 
 ZookeeperServer constructor, which in turn constructs its own FileTxnSnapLog, 
 and ZKDatabase. Since ZookeeperServer.shutdown() does not close() the 
 ZKDatabase, we have to explicitly close it in MiniZookeeperCluster.shutdown().
 Tests affected by this are
 {code}
 TestSplitLogManager
 TestSplitLogWorker
 TestOfflineMetaRebuildBase
 TestOfflineMetaRebuildHole
 TestOfflineMetaRebuildOverlap
 {code}
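The ownership pattern behind that fix can be sketched in plain Java (these classes are stand-ins, not the real ZooKeeperServer/ZKDatabase API): when a wrapper constructs a resource that the wrapped server's shutdown() will not close, the wrapper's own shutdown() must close it explicitly.

```java
import java.io.Closeable;
import java.io.IOException;

public class MiniClusterShutdown {
    /** Stand-in for ZKDatabase: a resource the server constructs but never closes. */
    static class Database implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    /** Stand-in for ZooKeeperServer: shutdown() does NOT close its database. */
    static class Server {
        final Database db = new Database();
        void shutdown() { /* stops threads, but leaves db open */ }
    }

    /** Stand-in for MiniZookeeperCluster: it must close the database itself. */
    static class MiniCluster {
        final Server server = new Server();
        void shutdown() throws IOException {
            server.shutdown();
            server.db.close();  // the explicit close the patch adds
        }
    }

    public static void main(String[] args) throws IOException {
        MiniCluster cluster = new MiniCluster();
        cluster.shutdown();
        System.out.println("database closed: " + cluster.server.db.closed);
    }
}
```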



  1   2   >