Re: [VOTE] Sixth release candidate for HBase 1.0.0 (RC5) is available. Please vote by Feb 19 2015

2015-02-16 Thread Jean-Marc Spaggiari
No, 1.0.0 to 1.0.0 ;) Installed 1.0.0, changed a config in hbase-site.xml
and did a rolling restart to have it taken into account.

I still have the migration from 0.94 to 1.0.0 to test, and a rolling restart
from 0.9x to 1.0.0... And of course, the performance...

JM

2015-02-16 10:40 GMT-05:00 Ted Yu yuzhih...@gmail.com:

 bq. Did a rolling restart from 1.0.0 to 1.0.0

 Did you mean from 0.98 to 1.0.0 ?

 Cheers

 On Mon, Feb 16, 2015 at 7:37 AM, Jean-Marc Spaggiari 
 jean-m...@spaggiari.org wrote:

  Download and un-packed passed.
 
  Checked Changes.txt => Passed.
  Checked documentation => Link "Why does HBase care about /etc/hosts?"
  (http://devving.com/?p=414) in section "Quick Start - Standalone HBase"
  doesn't work.
  Run test suite => Failed 3 times in a row with JDK 1.7
  Failed tests:
    TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
  expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

    TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
  expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

    TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
  expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>

  Tried with JDK8 and got a lot of:
  Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
  MaxPermSize=256m; support was removed in 8.0
  And the same errors + some others; ran it twice, twice the same errors.
  Same as with JDK 1.7 + the SSL one.
  Failed tests:
    TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
  expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

    TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
  expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

    TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
  expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>
 
  Tests in error:
  org.apache.hadoop.hbase.http.TestSSLHttpServer.org.apache.hadoop.hbase.http.TestSSLHttpServer
    Run 1: TestSSLHttpServer.setup:71 » Certificate Subject class type
  invalid.
    Run 2: TestSSLHttpServer.cleanup:102 NullPointer

  Checked RAT => Passed
 
  While running in standalone mode, got this exception in the logs when
  clicking on "Debug Dump" in the master interface:
  2015-02-16 10:19:39,172 ERROR [666059465@qtp-2106900153-3] mortbay.log: /dump
  java.lang.NullPointerException
  at org.apache.hadoop.hbase.regionserver.RSDumpServlet.dumpQueue(RSDumpServlet.java:106)
  at org.apache.hadoop.hbase.master.MasterDumpServlet.doGet(MasterDumpServlet.java:105)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
  at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
  at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1351)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
  at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
  at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
  at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
  at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
  at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
  at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
  at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
  at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
  at org.mortbay.jetty.Server.handle(Server.java:326)
  at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
  at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
  at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
  at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
  at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
  at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
  at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
 
  Ran some create table, puts, alter, get, scans from 

Re: Problem Migrating a CustomFilter to Hbase 0.98

2015-02-16 Thread Ted Yu
Mind pastebin'ing the relevant portion of your MyCustomFilter?

Thanks

On Sun, Feb 15, 2015 at 9:37 PM, Grant Pan gt84646...@gmail.com wrote:

 I have a filter, MyCustomFilter, that is similar to
 SingleColumnValueFilter in the sense that I use the toByteArray(),
 parseFrom(), and convert() functions in the same way, and it also extends
 FilterBase.

 The filter was originally written for an old version of HBase that used the
 write and readFields methods. Despite having the same methods and putting
 console output in the parseFrom, toByteArray, and convert functions, when
 running my code that sets this filter, I can see the toByteArray and convert
 functions being called 9 times based on the logs.

 Then I get the following errors:
 java.util.concurrent.TimeoutException
 at java.util.concurrent.FutureTask.get(FutureTask.java:201)
 at
 Mon Feb 16 05:12:16 UTC 2015, null, java.net.SocketTimeoutException:
 callTimeout

 *Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
 parseFrom called on base Filter, but should be called on derived type*
 at org.apache.hadoop.hbase.filter.Filter.parseFrom(Filter.java:267)
 ... 12 more

 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

 I recall that this error means your custom filter class does not have a
 parseFrom method, and indeed I see that no console output from within my
 parseFrom function has been written at all. What is the problem?



Problem Migrating a CustomFilter to Hbase 0.98

2015-02-16 Thread Grant Pan
I have a filter, MyCustomFilter, that is similar to
SingleColumnValueFilter in the sense that I use the toByteArray(),
parseFrom(), and convert() functions in the same way, and it also extends
FilterBase.

The filter was originally written for an old version of HBase that used the
write and readFields methods. Despite having the same methods and putting
console output in the parseFrom, toByteArray, and convert functions, when
running my code that sets this filter, I can see the toByteArray and convert
functions being called 9 times based on the logs.

Then I get the following errors:
java.util.concurrent.TimeoutException
at java.util.concurrent.FutureTask.get(FutureTask.java:201)
at
Mon Feb 16 05:12:16 UTC 2015, null, java.net.SocketTimeoutException:
callTimeout

*Caused by: org.apache.hadoop.hbase.exceptions.DeserializationException:
parseFrom called on base Filter, but should be called on derived type*
at org.apache.hadoop.hbase.filter.Filter.parseFrom(Filter.java:267)
... 12 more

at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)

I recall that this error means your custom filter class does not have a
parseFrom method, and indeed I see that no console output from within my
parseFrom function has been written at all. What is the problem?
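For list readers hitting the same migration issue, here is a minimal, self-contained sketch (plain java.lang.reflect, not HBase's actual code; BaseFilter, GoodFilter and BadFilter are hypothetical stand-ins) of why the derived class must declare its own static parseFrom: the method is looked up reflectively on the filter's class, and Java's lookup falls back to the inherited base-class method when the subclass does not declare one, so the base implementation throws.

```java
import java.lang.reflect.Method;

class BaseFilter {
  // Mirrors the base-class behavior described in the error message.
  public static BaseFilter parseFrom(byte[] bytes) {
    throw new RuntimeException(
        "parseFrom called on base Filter, but should be called on derived type");
  }
}

class GoodFilter extends BaseFilter {
  // Declares its own static parseFrom, so reflection resolves to this one.
  public static GoodFilter parseFrom(byte[] bytes) { return new GoodFilter(); }
}

class BadFilter extends BaseFilter {
  // Forgot to declare parseFrom -> reflection resolves to BaseFilter.parseFrom.
}

public class ParseFromDemo {
  // Simplified version of a reflective "deserialize this filter" call.
  static Object deserialize(Class<?> clazz, byte[] bytes) throws Exception {
    Method m = clazz.getMethod("parseFrom", byte[].class);
    return m.invoke(null, (Object) bytes);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(deserialize(GoodFilter.class, new byte[0])
        .getClass().getSimpleName()); // prints GoodFilter
    try {
      deserialize(BadFilter.class, new byte[0]);
    } catch (Exception e) {
      // The reflective call wraps the base-class exception.
      System.out.println("failed: " + e.getCause().getMessage());
    }
  }
}
```

In other words: declaring `public static MyCustomFilter parseFrom(byte[])` on the derived class itself is what makes the reflective lookup find it.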


[jira] [Created] (HBASE-13052) Explain each region split policy

2015-02-16 Thread Misty Stanley-Jones (JIRA)
Misty Stanley-Jones created HBASE-13052:
---

 Summary: Explain each region split policy
 Key: HBASE-13052
 URL: https://issues.apache.org/jira/browse/HBASE-13052
 Project: HBase
  Issue Type: Bug
  Components: documentation
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones


{quote}
There are five region split policies today, so let's add a section which
explains:

1. How each policy works. We can start from the current javadoc:
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/KeyPrefixRegionSplitPolicy.html
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/DelimitedKeyPrefixRegionSplitPolicy.html
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/DisabledRegionSplitPolicy.html
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/ConstantSizeRegionSplitPolicy.html
http://hbase.apache.org/devapidocs/org/apache/hadoop/hbase/regionserver/IncreasingToUpperBoundRegionSplitPolicy.html
2. How users can choose a good policy for their scenario
3. Pros and cons of each policy
{quote}
from [~daisuke.kobayashi]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13053) Add support of Visibility Labels in PerformanceEvaluation

2015-02-16 Thread Jerry He (JIRA)
Jerry He created HBASE-13053:


 Summary: Add support of Visibility Labels in PerformanceEvaluation
 Key: HBASE-13053
 URL: https://issues.apache.org/jira/browse/HBASE-13053
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.10.1, 1.0.0
Reporter: Jerry He
Assignee: Jerry He
Priority: Minor
 Fix For: 2.0.0, 1.0.1, 0.98.11


Add support of Visibility Labels in PerformanceEvaluation:

During write operations, support adding a visibility expression to KVs.
During read/scan operations, support using visibility authorization.

Here is the usage:
{noformat}
Options:
...
visibilityExp   Writes the visibility expression along with KVs. Use for write
commands. Visibility labels need to pre-exist.
visibilityAuth  Specify the visibility auths (comma-separated labels) used in
read or scan. Visibility labels need to pre-exist.
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13054) Provide more tracing information for locking/latching events.

2015-02-16 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created HBASE-13054:
---

 Summary: Provide more tracing information for locking/latching 
events.
 Key: HBASE-13054
 URL: https://issues.apache.org/jira/browse/HBASE-13054
 Project: HBase
  Issue Type: Improvement
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 2.0.0, 1.0.1, 1.1.0


Currently there is not much tracing information available for locking and
latching events, such as row-level locking during mini-batch mutations and
region-level locking during flush, close, and so on. It would be better to
provide more information for such events.
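As an illustration of the kind of instrumentation being requested, here is a generic, runnable sketch (plain java.util.concurrent, not HBase's actual tracing code; the 100 ms threshold and the message format are made up) that records how long a caller waited to acquire a lock, which is the latency a trace span annotation would capture:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TracedLock {
  private final ReentrantLock lock = new ReentrantLock();
  private final String name;
  private volatile long lastWaitNanos;

  public TracedLock(String name) { this.name = name; }

  public void lock() {
    long start = System.nanoTime();
    lock.lock();                                 // may block; this wait is what we trace
    lastWaitNanos = System.nanoTime() - start;
    if (lastWaitNanos > TimeUnit.MILLISECONDS.toNanos(100)) {
      // In real code this would become a trace span annotation, not stdout.
      System.out.println(name + " waited "
          + TimeUnit.NANOSECONDS.toMillis(lastWaitNanos) + " ms for the lock");
    }
  }

  public void unlock() { lock.unlock(); }

  public long lastWaitNanos() { return lastWaitNanos; }

  public static void main(String[] args) {
    TracedLock rowLock = new TracedLock("row-lock");
    rowLock.lock();      // uncontended, so the recorded wait is tiny
    rowLock.unlock();
    System.out.println("uncontended wait: " + rowLock.lastWaitNanos() + " ns");
  }
}
```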



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13055) Wrong HRegion FIXED_OVERHEAD

2015-02-16 Thread zhangduo (JIRA)
zhangduo created HBASE-13055:


 Summary: Wrong HRegion FIXED_OVERHEAD
 Key: HBASE-13055
 URL: https://issues.apache.org/jira/browse/HBASE-13055
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10, 1.0.0, 2.0.0, 1.1.0
Reporter: zhangduo
Assignee: zhangduo
 Fix For: 2.0.0, 1.0.1, 1.1.0, 0.98.11


HRegion.FIXED_OVERHEAD says we have 4 booleans on all branches. But actually we 
have 5 booleans.
{noformat}
isLoadingCfsOnDemandDefault
disallowWritesInRecovering
isRecovering
splitRequest
regionStatsEnabled
{noformat}
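One way such a drift could be caught by a unit test is to count the instance boolean fields reflectively and compare the count against the constant. A runnable sketch (RegionLike is a hypothetical stand-in declaring the five booleans listed above, not the real HRegion):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

public class BooleanFieldCount {
  // Stand-in for HRegion, declaring only the five booleans from the report.
  static class RegionLike {
    boolean isLoadingCfsOnDemandDefault;
    boolean disallowWritesInRecovering;
    boolean isRecovering;
    boolean splitRequest;
    boolean regionStatsEnabled;
  }

  // Counts non-static boolean instance fields declared on the class.
  static int countBooleanFields(Class<?> c) {
    int n = 0;
    for (Field f : c.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers()) && f.getType() == boolean.class) {
        n++;
      }
    }
    return n;
  }

  public static void main(String[] args) {
    // FIXED_OVERHEAD assumed 4 booleans; the class actually declares 5.
    System.out.println(countBooleanFields(RegionLike.class)); // prints 5
  }
}
```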



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13050) Hbase shell create_namespace command throws ArrayIndexOutOfBoundException for (invalid) empty text input.

2015-02-16 Thread Abhishek Kumar (JIRA)
Abhishek Kumar created HBASE-13050:
--

 Summary: Hbase shell create_namespace command throws 
ArrayIndexOutOfBoundException for (invalid) empty text input.
 Key: HBASE-13050
 URL: https://issues.apache.org/jira/browse/HBASE-13050
 Project: HBase
  Issue Type: Bug
Reporter: Abhishek Kumar
Priority: Trivial


{noformat}
hbase(main):008:0> create_namespace ''

ERROR: java.io.IOException: 0
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2072)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at 
org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:222)
at 
org.apache.hadoop.hbase.TableName.isLegalNamespaceName(TableName.java:205)
{noformat}

TableName.isLegalNamespaceName tries to access namespaceName[offset] for empty
text input. Also, the 'offset == length' check in this method seems
unnecessary; an empty-input validation check can be put at the beginning of
the method instead:
{noformat}
public static void isLegalNamespaceName(byte[] namespaceName, int offset, int length) {
  // can add empty check in the beginning
  if (length == 0) {
    throw new IllegalArgumentException("Namespace name must not be empty");
  }
  // end
  for (int i = offset; i < length; i++) {
    if (Character.isLetterOrDigit(namespaceName[i]) || namespaceName[i] == '_') {
      continue;
    }
    throw new IllegalArgumentException("Illegal character <" + namespaceName[i] +
        "> at " + i + ". Namespaces can only contain " +
        "'alphanumeric characters': i.e. [a-zA-Z_0-9]: " +
        Bytes.toString(namespaceName, offset, length));
  }
  // can remove the check below
  if (offset == length) {
    throw new IllegalArgumentException("Illegal character <" + namespaceName[offset] +
        "> at " + offset + ". Namespaces can only contain " +
        "'alphanumeric characters': i.e. [a-zA-Z_0-9]: " +
        Bytes.toString(namespaceName, offset, length));
  }
}
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Sixth release candidate for HBase 1.0.0 (RC5) is available. Please vote by Feb 19 2015

2015-02-16 Thread Jean-Marc Spaggiari
Hi Enis,

Quick question: how do you validate the signature? It seems to be a compressed
format; I am not sure if there is a specific command to validate it.

Thanks,

JM

2015-02-15 0:55 GMT-05:00 Enis Söztutar e...@apache.org:

 It gives me great pleasure to announce that the sixth release candidate for
 release 1.0.0 (HBase-1.0.0RC5) is available for download at
 https://dist.apache.org/repos/dist/dev/hbase/hbase-1.0.0RC5/

 Maven artifacts are also available in the temporary repository
 https://repository.apache.org/content/repositories/orgapachehbase-1065

 Signed with my code signing key E964B5FF. Can be found here:
 https://people.apache.org/keys/committer/enis.asc

  Signed tag in the repository can be found here:

 https://git-wip-us.apache.org/repos/asf?p=hbase.git;a=tag;h=c4660912e9b46c917a9aba2106be4bf74182a764

 HBase 1.0.0 is the next stable release, and the start of semantic
 versioned
 releases (See [1]).

 The theme of 1.0.0 release is to become a stable base for future 1.x series
 of releases. We aim to achieve at least the same level of stability of 0.98
 releases.

 1.0.0 contains 202 fixes on top of the 0.99.2 release. Together with the
 previous 0.99.x releases, the major changes in 1.0.0 are listed (but not
 limited to) below. Note that all previous 0.99.x releases are developer
 preview releases, and will NOT be supported in any form.

 API Cleanup and changes
   1.0.0 introduces new APIs, and deprecates some of the commonly-used
   client-side APIs (HTableInterface, HTable and HBaseAdmin).
   We advise updating your application to use the new style of APIs, since
   deprecated APIs might be removed in future releases (2.x). See [2] and [3]
   for an overview of the changes. All client-side APIs are marked with the
   InterfaceAudience.Public class, indicating that the class/method is an
   official client API for HBase. All 1.x releases are planned to be API
   compatible for these classes. See [1] for an overview.

 Master runs a Region Server as well
   Starting with 1.0.0, the HBase master server and backup master servers will
   also act as region servers. The RPC port and the info port for the web UI
   are shared between the master and region server roles. The active master
   can host regions of defined tables if configured (disabled by default).
   Backup masters will not host regions.

 Read availability using timeline consistent region replicas
   This release contains the Phase 1 items of the experimental "read
   availability using timeline consistent region replicas" feature. A region
   can be hosted by multiple region servers in read-only mode. One of the
   replicas for the region will be primary, accepting writes, and the other
   replicas will share the same data files. Read requests can be served by any
   replica for the region, with backup RPCs for high availability under
   timeline consistency guarantees. More information can be found at
   HBASE-10070.

 Online config change and other forward ports from the 0.89-fb branch
   HBASE-12147 forward-ported online config change, which enables some of the
   configuration on the server to be reloaded without restarting the region
   servers.

 Other notable improvements in 1.0.0 (including previous 0.99.x) are
  - A new web skin in time for 1.0 (http://hbase.apache.org)
  - Automatic tuning of global memstore and block cache sizes
  - Various security, tags and visibility labels improvements
  - Bucket cache improvements (usability and compressed data blocks)
  - A new pluggable replication endpoint to plug in to HBase's inter-cluster
replication to replicate to a custom data store
  - A Dockerfile to easily build and run HBase from source
  - Truncate table command
  - Region assignment to use hbase:meta table instead of zookeeper for
 faster
region assignment (disabled by default)
  - Extensive documentation improvements
  - [HBASE-12511] - namespace permissions - add support from table creation
 privilege in a namespace 'C'
  - [HBASE-12568] - Adopt Semantic Versioning and document it in the book
  - [HBASE-12640] - Add Thrift-over-HTTPS and doAs support for Thrift Server
  - [HBASE-12651] - Backport HBASE-12559 'Provide LoadBalancer with online
 configuration capability' to branch-1
  - [HBASE-10560] - Per cell TTLs
  - [HBASE-11997] - CopyTable with bulkload
  - [HBASE-11990] - Make setting the start and stop row for a specific
 prefix easier
  - [HBASE-12220] - Add hedgedReads and hedgedReadWins metrics
  - [HBASE-12090] - Bytes: more Unsafe, more Faster
  - [HBASE-12032] - Script to stop regionservers via RPC
  - [HBASE-11907] - Use the joni byte[] regex engine in place of j.u.regex
 in RegexStringComparator
  - [HBASE-11796] - Add client support for atomic checkAndMutate
  - [HBASE-11804] - Raise default heap size if unspecified
  - [HBASE-11890] - HBase REST Client is hard coded to http protocol
  - [HBASE-12126] - Region server coprocessor endpoint
  - [HBASE-12183] - FuzzyRowFilter doesn't support reverse scans
  - 

[jira] [Created] (HBASE-13049) wal_roll ruby command doesn't work.

2015-02-16 Thread Bhupendra Kumar Jain (JIRA)
Bhupendra Kumar Jain created HBASE-13049:


 Summary: wal_roll ruby command doesn't work. 
 Key: HBASE-13049
 URL: https://issues.apache.org/jira/browse/HBASE-13049
 Project: HBase
  Issue Type: Bug
  Components: shell
Reporter: Bhupendra Kumar Jain


On execution of the wal_roll command in the shell, an error message gets
displayed as shown below:

hbase(main):005:0> wal_roll 'host-10-19-92-94,16201,1424081618286'

*ERROR: cannot convert instance of class org.jruby.RubyString to class
org.apache.hadoop.hbase.ServerName*

It's because the Admin Java API expects a ServerName object, but the script
passes the server name as a string.
Currently the script is as below:
{code}
@admin.rollWALWriter(server_name)
{code}

It should be like 
{code}
@admin.rollWALWriter(ServerName.valueOf(server_name))
{code}
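For context, the string the shell passes is in the "host,port,startcode" form that ServerName.valueOf understands. An illustrative, standalone sketch of that parse (not HBase's actual ServerName source; ServerNameString is a made-up name):

```java
public class ServerNameString {
  final String host;
  final int port;
  final long startcode;

  ServerNameString(String host, int port, long startcode) {
    this.host = host;
    this.port = port;
    this.startcode = startcode;
  }

  // valueOf-style parse of the "host,port,startcode" form used by the shell.
  static ServerNameString valueOf(String s) {
    String[] parts = s.split(",");
    if (parts.length != 3) {
      throw new IllegalArgumentException("expected host,port,startcode but got: " + s);
    }
    return new ServerNameString(parts[0],
        Integer.parseInt(parts[1]), Long.parseLong(parts[2]));
  }

  public static void main(String[] args) {
    ServerNameString sn = valueOf("host-10-19-92-94,16201,1424081618286");
    System.out.println(sn.host + ":" + sn.port); // prints host-10-19-92-94:16201
  }
}
```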



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13056) Refactor table.jsp code to remove repeated code and make it easier to add new checks

2015-02-16 Thread Vikas Vishwakarma (JIRA)
Vikas Vishwakarma created HBASE-13056:
-

 Summary: Refactor table.jsp code to remove repeated code and make 
it easier to add new checks
 Key: HBASE-13056
 URL: https://issues.apache.org/jira/browse/HBASE-13056
 Project: HBase
  Issue Type: Improvement
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
 Fix For: 2.0.0


While trying to fix HBASE-13001, I realized that there is a lot of HTML code
repetition in table.jsp, which makes adding new checks slightly difficult, in
the sense that I will have to either:
1. Add the check at multiple places in the code
or
2. Repeat the HTML code again for the new check

So I am proposing to refactor the table.jsp code such that the common HTML
header/body is loaded without any condition check, and then we generate the
condition-specific HTML code.

snapshot.jsp follows the same format, as explained below:

{noformat}
Current implementation:
=======================

if( x ) {

  title_x
  common_html_header
  common_html_body
  x_specific_html_body

} else {

  title_y
  common_html_header
  common_html_body
  y_specific_html_body

}

New Implementation:
===================
if( x ) {

  title_x

} else {

  title_y

}
common_html_header
common_html_body

if( x ) {

  x_specific_html_body

} else {

  y_specific_html_body

}
{noformat}
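The proposed shape can be sketched in plain Java string building (the identifiers title_x, common_html_header, etc. are the placeholders from the pseudocode above, not real table.jsp content):

```java
public class PageRender {
  static String render(boolean x) {
    StringBuilder sb = new StringBuilder();
    // Only the variant-specific pieces stay inside conditionals.
    sb.append(x ? "title_x" : "title_y").append('\n');
    // The shared header/body is emitted exactly once, unconditionally.
    sb.append("common_html_header\n");
    sb.append("common_html_body\n");
    sb.append(x ? "x_specific_html_body" : "y_specific_html_body");
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(render(true));
  }
}
```

A new check then only needs one extra conditional branch instead of a full copy of the shared markup.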



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13051) Custom line separator option for bulk loading

2015-02-16 Thread sivakumar (JIRA)
sivakumar created HBASE-13051:
-

 Summary: Custom line separator option for bulk loading
 Key: HBASE-13051
 URL: https://issues.apache.org/jira/browse/HBASE-13051
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.98.10.1
Reporter: sivakumar


While bulk loading data through ImportTsv, there is no option to choose a
custom line separator; it defaults to the newline character (\n). This request
is created to enhance the utility to support a custom line separator.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Sixth release candidate for HBase 1.0.0 (RC5) is available. Please vote by Feb 19 2015

2015-02-16 Thread Jean-Marc Spaggiari
Download and un-packed passed.

Checked Changes.txt => Passed.
Checked documentation => Link "Why does HBase care about /etc/hosts?"
(http://devving.com/?p=414) in section "Quick Start - Standalone HBase"
doesn't work.
Run test suite => Failed 3 times in a row with JDK 1.7
Failed tests:
  TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

  TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

  TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>

Tried with JDK8 and got a lot of:
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=256m; support was removed in 8.0
And the same errors + some others; ran it twice, twice the same errors.
Same as with JDK 1.7 + the SSL one.
Failed tests:
  TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

  TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

  TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>

Tests in error:
org.apache.hadoop.hbase.http.TestSSLHttpServer.org.apache.hadoop.hbase.http.TestSSLHttpServer
  Run 1: TestSSLHttpServer.setup:71 » Certificate Subject class type
invalid.
  Run 2: TestSSLHttpServer.cleanup:102 NullPointer



Checked RAT => Passed

While running in standalone mode, got this exception in the logs when clicking
on "Debug Dump" in the master interface:
2015-02-16 10:19:39,172 ERROR [666059465@qtp-2106900153-3] mortbay.log: /dump
java.lang.NullPointerException
at org.apache.hadoop.hbase.regionserver.RSDumpServlet.dumpQueue(RSDumpServlet.java:106)
at org.apache.hadoop.hbase.master.MasterDumpServlet.doGet(MasterDumpServlet.java:105)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1351)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

Ran some create table, puts, alter, get, scans from the command line => Passed.

I'm also running PE on another cluster, but I'm not happy with the way it
runs. Not related to HBase, more related to PE itself.  Might move to YCSB
or open some JIRAs.

Did a rolling restart from 1.0.0 to 1.0.0. => Passed.

Overall, everything seems to be working fine, but I am not sure about the test
failures and the dump exception.

0 for me. Because of the SSL issue with JDK 8, I'm not sure of the impact on a
secured cluster. Also, I have not been able to get any successful run of the
tests, and I have not been able to validate the performance.

JM

2015-02-16 8:00 GMT-05:00 Jean-Marc Spaggiari jean-m...@spaggiari.org:

 Hi Enis,

 Quick question, how do you validate the signature? Seems to be a
 compressed format, not sure if there 

Re: [VOTE] Sixth release candidate for HBase 1.0.0 (RC5) is available. Please vote by Feb 19 2015

2015-02-16 Thread Ted Yu
bq. Did a rolling restart from 1.0.0 to 1.0.0

Did you mean from 0.98 to 1.0.0 ?

Cheers

On Mon, Feb 16, 2015 at 7:37 AM, Jean-Marc Spaggiari 
jean-m...@spaggiari.org wrote:

 Download and un-packed passed.

 Checked Changes.txt => Passed.
 Checked documentation => Link "Why does HBase care about /etc/hosts?"
 (http://devving.com/?p=414) in section "Quick Start - Standalone HBase"
 doesn't work.
 Run test suite => Failed 3 times in a row with JDK 1.7
 Failed tests:
   TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
 expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

   TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
 expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

   TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
 expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>

 Tried with JDK8 and got a lot of:
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
 MaxPermSize=256m; support was removed in 8.0
 And the same errors + some others; ran it twice, twice the same errors.
 Same as with JDK 1.7 + the SSL one.
 Failed tests:
   TestNodeHealthCheckChore.testHealthCheckerFail:69->healthCheckerTest:90
 expected:<FAILED> but was:<FAILED_WITH_EXCEPTION>

   TestNodeHealthCheckChore.testHealthCheckerSuccess:63->healthCheckerTest:90
 expected:<SUCCESS> but was:<FAILED_WITH_EXCEPTION>

   TestNodeHealthCheckChore.testHealthCheckerTimeout:75->healthCheckerTest:90
 expected:<TIMED_OUT> but was:<FAILED_WITH_EXCEPTION>

 Tests in error:
 org.apache.hadoop.hbase.http.TestSSLHttpServer.org.apache.hadoop.hbase.http.TestSSLHttpServer
   Run 1: TestSSLHttpServer.setup:71 » Certificate Subject class type
 invalid.
   Run 2: TestSSLHttpServer.cleanup:102 NullPointer



 Checked RAT => Passed

 While running in standalone mode, got this exception in the logs when
 clicking on "Debug Dump" in the master interface:
 2015-02-16 10:19:39,172 ERROR [666059465@qtp-2106900153-3] mortbay.log: /dump
 java.lang.NullPointerException
 at org.apache.hadoop.hbase.regionserver.RSDumpServlet.dumpQueue(RSDumpServlet.java:106)
 at org.apache.hadoop.hbase.master.MasterDumpServlet.doGet(MasterDumpServlet.java:105)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
 at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
 at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
 at org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter.doFilter(HttpServer.java:1351)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.apache.hadoop.hbase.http.NoCacheFilter.doFilter(NoCacheFilter.java:49)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
 at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
 at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
 at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

 Ran some create table, puts, alter, get, scans from the command line => Passed.

 I'm also running PE on another cluster, but I'm not happy with the way it
 runs. Not related to HBase, more related to PE itself.  Might move to YCSB
 or open some JIRAs.

 Did a rolling restart from 1.0.0 to 1.0.0. => Passed.

 Overall, everything seems to be working fine, but I am not sure about the
 test failures and the dump exception.

 0 for me. Because of the SSL issue with JDK 8 I'm not sure of the impact on
 a secured cluster. Also, has not been 

[jira] [Created] (HBASE-13048) Use hbase.crypto.wal.algorithm in SecureProtobufLogReader while decrypting the data

2015-02-16 Thread Ashish Singhi (JIRA)
Ashish Singhi created HBASE-13048:
-

 Summary: Use hbase.crypto.wal.algorithm in SecureProtobufLogReader 
while decrypting the data
 Key: HBASE-13048
 URL: https://issues.apache.org/jira/browse/HBASE-13048
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10
Reporter: Ashish Singhi
Assignee: Ashish Singhi
Priority: Minor


We are using hbase.crypto.wal.algorithm in SecureProtobufLogWriter for
encrypting the data, but hbase.crypto.key.algorithm in SecureProtobufLogReader
while decrypting the data. Fix this typo.
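For reference, the writer-side setting lives in hbase-site.xml, and the reader should honor the same key. (The AES value below is shown only for illustration; check the security documentation for the supported algorithms.)

```xml
<!-- The key the WAL writer reads; the reader should use this same key. -->
<property>
  <name>hbase.crypto.wal.algorithm</name>
  <value>AES</value>
</property>
```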



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13047) Add HBase Configuration link missing on the table details pages

2015-02-16 Thread Vikas Vishwakarma (JIRA)
Vikas Vishwakarma created HBASE-13047:
-

 Summary: Add HBase Configuration link missing on the table 
details pages
 Key: HBASE-13047
 URL: https://issues.apache.org/jira/browse/HBASE-13047
 Project: HBase
  Issue Type: Bug
  Components: hbase
Affects Versions: 2.0.0
Reporter: Vikas Vishwakarma
Priority: Trivial


On the table details pages, the "HBase Configuration" link is missing from the
navigation bar, which is inconsistent with other pages.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)