FYI, still on 0.99.0 RC0 for now. I have been able to stabilize the tests
on my side. The issues were caused by zombie daemons left over from previous
tests, so I now kill them all and clear the tmp directory before restarting.
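For reference, the pre-run cleanup looks roughly like this (a sketch only; the process patterns and the tmp path are assumptions about a typical Maven/HBase test setup, adjust for your environment):

```shell
# Pre-run cleanup sketch: kill zombie test JVMs/daemons, clear the tmp dir.
# The match patterns and the /tmp/hbase-$USER path are assumptions.
USER="${USER:-$(id -un)}"
pkill -f surefirebooter 2>/dev/null || true           # forked Maven test JVMs
pkill -f 'HMaster|HRegionServer' 2>/dev/null || true  # stray HBase daemons
rm -rf "/tmp/hbase-$USER"                             # default test tmp dir
```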
Now, with JDK 1.8, I have been able to run it 5 times and got the same
error all 5 times.
Tests in error:
org.apache.hadoop.hbase.http.TestSSLHttpServer: Subject class type invalid.
org.apache.hadoop.hbase.http.TestSSLHttpServer
Tests run: 918, Failures: 0, Errors: 2, Skipped: 5
On the log side:
2014-09-11 21:42:59,198 DEBUG [pool-1-thread-1] log.Slf4jLog(40): stopped org.mortbay.jetty.webapp.WebAppContext@a5993cf{/,file:/home/jmspaggi/hbase-0.99.0/hbase-server/target/test-classes/webapps/test}
2014-09-11 21:42:59,198 INFO [pool-1-thread-1] log.Slf4jLog(67): Stopped SslSocketConnector@localhost:0
2014-09-11 21:42:59,200 DEBUG [2116514935@qtp-1330098355-2] log.Slf4jLog(49): EXCEPTION
java.net.SocketException: Socket closed
    at java.net.SocketInputStream.read(SocketInputStream.java:190)
    at java.net.SocketInputStream.read(SocketInputStream.java:122)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:442)
    at sun.security.ssl.InputRecord.read(InputRecord.java:480)
    at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:927)
    at sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:884)
    at sun.security.ssl.AppInputStream.read(AppInputStream.java:102)
    at org.mortbay.io.ByteArrayBuffer.readFrom(ByteArrayBuffer.java:382)
    at org.mortbay.io.bio.StreamEndPoint.fill(StreamEndPoint.java:114)
    at org.mortbay.jetty.bio.SocketConnector$Connection.fill(SocketConnector.java:198)
    at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:290)
    at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
    at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
    at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)
    at org.mortbay.jetty.security.SslSocketConnector$SslConnection.run(SslSocketConnector.java:713)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
2014-09-11 21:42:59,201 DEBUG [2116514935@qtp-1330098355-2] log.Slf4jLog(40): EOF
2014-09-11 21:42:59,201 DEBUG [pool-1-thread-1] log.Slf4jLog(40): stopped SslSocketConnector@localhost:0
2014-09-11 21:42:59,201 DEBUG [2090681887@qtp-1330098355-0] log.Slf4jLog(49): EXCEPTION
java.net.SocketException: Socket closed
    [same stack trace as above, from the second connection thread]
2014-09-11 21:42:59,201 DEBUG [pool-1-thread-1] log.Slf4jLog(40): stopping Server@6ba1e06e
2014-09-11 21:42:59,201 DEBUG [2090681887@qtp-1330098355-0] log.Slf4jLog(40): EOF
2014-09-11 21:42:59,201 DEBUG [pool-1-thread-1] log.Slf4jLog(40): stopping ContextHandlerCollection@50958cf6
On JDK 1.7 I got a failure in
testBalanceOnMasterFailoverScenarioWithClosedNode(org.apache.hadoop.hbase.master.TestAssignmentManager):
test timed out after 60000 milliseconds
Most probably because a previous test using the mini cluster did not shut it
down correctly, or because two were running at the same time?
2014-09-16 09:23:03,226 DEBUG [pool-1-thread-1] zookeeper.MiniZooKeeperCluster(171): Failed binding ZK Server to client port: 59854
java.net.BindException: Adresse déjà utilisée (Address already in use)
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:444)
And
testGetPreviousRecoveryMode(org.apache.hadoop.hbase.master.TestSplitLogManager)
failed too; not the first time I have seen it.
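Before re-running, a quick way to tell whether a leftover process still holds the port is to check it directly (a sketch only, assuming a Linux box with `ss` available; 59854 is the port from the BindException above):

```shell
# Check whether anything still listens on the ZK client port from the log.
PORT=59854
if ss -tln 2>/dev/null | grep -q ":$PORT "; then
  STATUS="in use"   # a zombie process still holds the port
else
  STATUS="free"     # safe to re-run the test
fi
echo "port $PORT is $STATUS"
```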
I have been able to get a correct run from time to time, but not that often.
JM
2014-09-15 17:50 GMT-04:00 Enis Söztutar <[email protected]>:
> Yeah, it will be even more confusing for having colocation on for 0.99.0,
> but off for 0.99.1 and 1.0.
>
> Let me spin up another RC today, and do a 3 day vote.
>
> Enis
>
> On Mon, Sep 15, 2014 at 1:06 PM, lars hofhansl <[email protected]> wrote:
>
> > I think HBASE-11604 warrants a new RC. Maybe in a dev release we could be
> > more relaxed about this, but it would still be confusing for folks who
> > play with this the first time, see the changed behavior, and then play
> > again and it's back to what it was before.
> >
> > -- Lars
> >
> >
> >
> > ----- Original Message -----
> > From: Stack <[email protected]>
> > To: HBase Dev List <[email protected]>
> > Cc:
> > Sent: Monday, September 15, 2014 9:15 AM
> > Subject: Re: First release candidate for HBase 0.99.0 (RC0) is available.
> > Please vote by 09/17/2014
> >
> > On Sun, Sep 14, 2014 at 11:05 PM, Enis Söztutar <[email protected]>
> > wrote:
> >
> > > Ok,
> > >
> > > Let me sink this RC, and spin another quick one containing HBASE-11604.
> > > Will do tomorrow.
> > >
> > >
> > You don't want to just fix in a 0.99.1?
> >
> >
> >
> > > Should I wait for HBASE-11967?
> > >
> >
> >
> > I'd say no. Non-critical. Takes some work to repro. We've had this
> > problem always, it seems.
> >
> > St.Ack
> >
> >
> >
> >
> >
> >
> >
> > > Enis
> > >
> > > On Fri, Sep 12, 2014 at 11:04 PM, Andrew Purtell <
> > [email protected]
> > > >
> > > wrote:
> > >
> > > > I agree it would be surprising to have masters running RegionServers
> > and
> > > > hosting regions. Maybe we can take that kind of departure for 2.0?
> (Or
> > > even
> > > > 1.1?) It's not clear what state that will end up in. Default-on
> > features
> > > in
> > > > 1.0 should carry forward and promote stability and familiarity?
> > > >
> > > >
> > > > > On Sep 11, 2014, at 10:02 AM, Stack <[email protected]> wrote:
> > > > >
> > > > > Thanks for doing the helpful writeup, Enis. It helps with
> > > > > evaluation.
> > > > >
> > > > > I have one question below.
> > > > >
> > > > > On Thu, Sep 11, 2014 at 1:56 AM, Enis Söztutar <[email protected]> wrote:
> > > > > ...
> > > > >
> > > > >>
> > > > >> Starting with 0.99.0, the HBase master server and backup master
> > > > >> servers will also act as a region server. The RPC port and info
> > > > >> port for the web UI are shared between the master and region server
> > > > >> roles. The active master will host the meta table (and other hbase
> > > > >> system tables, acl and namespace) by default (unless configured
> > > > >> otherwise). The master and backup masters will not host user-level
> > > > >> regions. See HBASE-10569 and HBASE-11604 for more details.
> > > > >
> > > > > I think we should change this so this is NOT the default in 1.0.
> > > > > What do folks think? The new deploy topology will surprise folks
> > > > > coming from an earlier version. Better that folks enable it
> > > > > explicitly*?
> > > > > St.Ack
> > > > >
> > > > > * I used to be in favor of this feature being on by default but I
> > > > > have since changed my mind, given how I see meta hosting evolving
> > > > > in the near future.
> > > >
> > >
> >
>