See <https://builds.apache.org/job/Phoenix-master/988/changes>

Changes:

[jtaylor] Revert "PHOENIX-2462 Add tephra test dependency to spark module"

------------------------------------------
[...truncated 86117 lines...]
03:39:38.548 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x15156793dd60006
03:39:38.548 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x15156793dd60006
03:39:38.549 [ProcessThread(sid:0 cport:-1):] INFO  o.a.z.server.PrepRequestProcessor - Processed session termination for sessionid: 0x15156793dd60006
03:39:38.566 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - Processing request:: sessionid:0x15156793dd60006 type:closeSession cxid:0x3 zxid:0x4e txntype:-11 reqpath:n/a
03:39:38.566 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - sessionid:0x15156793dd60006 type:closeSession cxid:0x3 zxid:0x4e txntype:-11 reqpath:n/a
03:39:38.567 [RS:0;asf910:41515-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15156793dd60006, packet:: clientPath:null serverPath:null finished:false header:: 3,-11  replyHeader:: 3,78,0  request:: null response:: null
03:39:38.567 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:58509] INFO  o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:44639 which had sessionid 0x15156793dd60006
03:39:38.567 [RS:0;asf910:41515-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - An exception was thrown while closing send thread for session 0x15156793dd60006 : Unable to read additional data from server sessionid 0x15156793dd60006, likely server has closed socket
03:39:38.567 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x15156793dd60006
03:39:38.567 [RS:0;asf910:41515-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down
03:39:38.568 [RS:0;asf910:41515] INFO  org.apache.zookeeper.ZooKeeper - Session: 0x15156793dd60006 closed
03:39:38.568 [RS:0;asf910:41515] DEBUG o.a.h.hbase.ipc.AbstractRpcClient - Stopping rpc client
03:39:38.568 [RS:0;asf910:41515] INFO  o.apache.hadoop.hbase.ipc.RpcServer - Stopping server on 41515
03:39:38.568 [RpcServer.listener,port=41515] INFO  o.apache.hadoop.hbase.ipc.RpcServer - RpcServer.listener,port=41515: stopping
03:39:38.570 [RpcServer.responder] INFO  o.apache.hadoop.hbase.ipc.RpcServer - RpcServer.responder: stopped
03:39:38.570 [RpcServer.responder] INFO  o.apache.hadoop.hbase.ipc.RpcServer - RpcServer.responder: stopping
03:39:38.583 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - Processing request:: sessionid:0x15156793dd60001 type:delete cxid:0x3f zxid:0x4f txntype:2 reqpath:n/a
03:39:38.584 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - sessionid:0x15156793dd60001 type:delete cxid:0x3f zxid:0x4f txntype:2 reqpath:n/a
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15156793dd60000
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/hbase/rs/asf910.gq1.ygridcore.net,41515,1448854766509 for sessionid 0x15156793dd60000
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15156793dd60001
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeDeleted path:/hbase/rs/asf910.gq1.ygridcore.net,41515,1448854766509 for sessionid 0x15156793dd60001
03:39:38.584 [ScalaTest-2-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - regionserver:41515-0x15156793dd60001, quorum=localhost:58509, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,41515,1448854766509
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15156793dd60000
03:39:38.584 [ScalaTest-2-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - master:40469-0x15156793dd60000, quorum=localhost:58509, baseZNode=/hbase Received ZooKeeper Event, type=NodeDeleted, state=SyncConnected, path=/hbase/rs/asf910.gq1.ygridcore.net,41515,1448854766509
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/hbase/rs for sessionid 0x15156793dd60000
03:39:38.584 [ScalaTest-2-EventThread] INFO  o.a.h.h.z.RegionServerTracker - RegionServer ephemeral node deleted, processing expiration [asf910.gq1.ygridcore.net,41515,1448854766509]
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got notification sessionid:0x15156793dd60001
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Got WatchedEvent state:SyncConnected type:NodeChildrenChanged path:/hbase/rs for sessionid 0x15156793dd60001
03:39:38.584 [ScalaTest-2-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - regionserver:41515-0x15156793dd60001, quorum=localhost:58509, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
03:39:38.584 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15156793dd60001, packet:: clientPath:null serverPath:null finished:false header:: 63,2  replyHeader:: 63,79,0  request:: '/hbase/rs/asf910.gq1.ygridcore.net%2C41515%2C1448854766509,-1  response:: null
03:39:38.584 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ZooKeeper - Closing session: 0x15156793dd60001
03:39:38.584 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ClientCnxn - Closing client for session: 0x15156793dd60001
03:39:38.585 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - Processing request:: sessionid:0x15156793dd60000 type:getData cxid:0x10d zxid:0xfffffffffffffffe txntype:unknown reqpath:/hbase/region-in-transition/1588230740
03:39:38.585 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - sessionid:0x15156793dd60000 type:getData cxid:0x10d zxid:0xfffffffffffffffe txntype:unknown reqpath:/hbase/region-in-transition/1588230740
03:39:38.585 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15156793dd60000, packet:: clientPath:null serverPath:null finished:false header:: 269,4  replyHeader:: 269,79,-101  request:: '/hbase/region-in-transition/1588230740,T  response:: 
03:39:38.585 [ScalaTest-2-EventThread] DEBUG o.a.h.hbase.master.AssignmentManager - based on AM, current region=hbase:meta,,1.1588230740 is on server=asf910.gq1.ygridcore.net,41515,1448854766509 server being checked: asf910.gq1.ygridcore.net,41515,1448854766509
03:39:38.587 [ProcessThread(sid:0 cport:-1):] INFO  o.a.z.server.PrepRequestProcessor - Processed session termination for sessionid: 0x15156793dd60001
03:39:38.589 [ScalaTest-2-EventThread] DEBUG o.a.h.hbase.master.ServerManager - Added=asf910.gq1.ygridcore.net,41515,1448854766509 to dead servers, submitted shutdown handler to be executed meta=true
03:39:38.589 [ScalaTest-2-EventThread] DEBUG o.a.h.h.zookeeper.ZooKeeperWatcher - master:40469-0x15156793dd60000, quorum=localhost:58509, baseZNode=/hbase Received ZooKeeper Event, type=NodeChildrenChanged, state=SyncConnected, path=/hbase/rs
03:39:38.590 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] INFO  o.a.h.h.m.h.MetaServerShutdownHandler - Splitting hbase:meta logs for asf910.gq1.ygridcore.net,41515,1448854766509
03:39:38.590 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #240
03:39:38.591 [Socket Reader #1 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #240
03:39:38.591 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#240 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.591 [IPC Server handler 0 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.591 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'jenkins'
03:39:38.591 [IPC Server handler 0 on 34521] INFO  o.a.h.h.s.n.FSNamesystem.audit - allowed=true  ugi=jenkins (auth:SIMPLE)  ip=/127.0.0.1  cmd=getfileinfo  src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509  dst=null  perm=null  proto=rpc
03:39:38.592 [IPC Server handler 0 on 34521] INFO  org.apache.hadoop.ipc.Server - Served: getFileInfo queueTime= 0 procesingTime= 1
03:39:38.592 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#240 Retry#0
03:39:38.592 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins got value #240
03:39:38.592 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 2ms
03:39:38.593 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #241
03:39:38.593 [Socket Reader #1 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #241
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#240 Retry#0 Wrote 92 bytes.
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from 127.0.0.1:44550 Call#241 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.rename: /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509 to /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.renameTo: /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509 to /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'jenkins'
03:39:38.593 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.renameTo: /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509 to /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.594 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedRenameTo: /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509 is renamed to /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.594 [IPC Server handler 0 on 34521] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.changelease:  src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509, dest=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.594 [IPC Server handler 0 on 34521] DEBUG o.a.h.h.server.namenode.LeaseManager - LeaseManager.findLease: prefix=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509
03:39:38.594 [IPC Server handler 0 on 34521] INFO  o.a.h.h.s.n.FSNamesystem.audit - allowed=true  ugi=jenkins (auth:SIMPLE)  ip=/127.0.0.1  cmd=rename  src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509  dst=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting  perm=jenkins.hfs.0:supergroup:rwxr-xr-x  proto=rpc
03:39:38.594 [IPC Server handler 0 on 34521] INFO  org.apache.hadoop.ipc.Server - Served: rename queueTime= 0 procesingTime= 1
03:39:38.594 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from 127.0.0.1:44550 Call#241 Retry#0
03:39:38.595 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins got value #241
03:39:38.595 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: rename took 3ms
03:39:38.595 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.h.hbase.master.MasterFileSystem - Renamed region directory: hdfs://localhost:34521/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
03:39:38.595 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] INFO  o.a.h.hbase.master.SplitLogManager - dead splitlog workers [asf910.gq1.ygridcore.net,41515,1448854766509]
03:39:38.595 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #242
03:39:38.596 [Socket Reader #1 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #242
03:39:38.596 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.rename from 127.0.0.1:44550 Call#241 Retry#0 Wrote 35 bytes.
03:39:38.596 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 34521: org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#242 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.596 [IPC Server handler 1 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.596 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'jenkins'
03:39:38.596 [IPC Server handler 1 on 34521] INFO  o.a.h.h.s.n.FSNamesystem.audit - allowed=true  ugi=jenkins (auth:SIMPLE)  ip=/127.0.0.1  cmd=getfileinfo  src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting  dst=null  perm=null  proto=rpc
03:39:38.596 [IPC Server handler 1 on 34521] INFO  org.apache.hadoop.ipc.Server - Served: getFileInfo queueTime= 0 procesingTime= 0
03:39:38.597 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#242 Retry#0
03:39:38.597 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 Call#242 Retry#0 Wrote 92 bytes.
03:39:38.597 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins got value #242
03:39:38.597 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 2ms
03:39:38.597 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #243
03:39:38.598 [Socket Reader #1 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #243
03:39:38.598 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 127.0.0.1:44550 Call#243 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.598 [IPC Server handler 0 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.598 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.security.Groups - Returning cached groups for 'jenkins'
03:39:38.598 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - Processing request:: sessionid:0x15156793dd60001 type:closeSession cxid:0x40 zxid:0x50 txntype:-11 reqpath:n/a
03:39:38.598 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - sessionid:0x15156793dd60001 type:closeSession cxid:0x40 zxid:0x50 txntype:-11 reqpath:n/a
03:39:38.598 [IPC Server handler 0 on 34521] INFO  o.a.h.h.s.n.FSNamesystem.audit - allowed=true  ugi=jenkins (auth:SIMPLE)  ip=/127.0.0.1  cmd=listStatus  src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting  dst=null  perm=null  proto=rpc
03:39:38.598 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - Processing request:: sessionid:0x15156793dd60000 type:getChildren cxid:0x10e zxid:0xfffffffffffffffe txntype:unknown reqpath:/hbase/rs
03:39:38.598 [SyncThread:0] DEBUG o.a.z.server.FinalRequestProcessor - sessionid:0x15156793dd60000 type:getChildren cxid:0x10e zxid:0xfffffffffffffffe txntype:unknown reqpath:/hbase/rs
03:39:38.598 [IPC Server handler 0 on 34521] INFO  org.apache.hadoop.ipc.Server - Served: getListing queueTime= 0 procesingTime= 0
03:39:38.598 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 127.0.0.1:44550 Call#243 Retry#0
03:39:38.598 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15156793dd60001, packet:: clientPath:null serverPath:null finished:false header:: 64,-11  replyHeader:: 64,80,0  request:: null response:: null
03:39:38.598 [RS:0;asf910:41515] DEBUG org.apache.zookeeper.ClientCnxn - Disconnecting client for session: 0x15156793dd60001
03:39:38.598 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins got value #243
03:39:38.599 [ScalaTest-2-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down
03:39:38.599 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:58509] INFO  o.a.zookeeper.server.NIOServerCnxn - Closed socket connection for client /0:0:0:0:0:0:0:1:44626 which had sessionid 0x15156793dd60001
03:39:38.599 [RS:0;asf910:41515] INFO  org.apache.zookeeper.ZooKeeper - Session: 0x15156793dd60001 closed
03:39:38.599 [RS:0;asf910:41515] INFO  o.a.h.h.regionserver.HRegionServer - stopping server asf910.gq1.ygridcore.net,41515,1448854766509; zookeeper connection closed.
03:39:38.599 [RS:0;asf910:41515] INFO  o.a.h.h.regionserver.HRegionServer - RS:0;asf910:41515 exiting
03:39:38.599 [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f9246fc] INFO  o.a.hadoop.hbase.MiniHBaseCluster - Hook closing fs=org.apache.hadoop.hbase.fs.HFileSystem@1f9246fc
03:39:38.600 [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f9246fc] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: org.apache.hadoop.ipc.Client@afa6041
03:39:38.600 [Shutdown of org.apache.hadoop.hbase.fs.HFileSystem@1f9246fc] DEBUG org.apache.hadoop.ipc.Client - stopping client from cache: org.apache.hadoop.ipc.Client@afa6041
03:39:38.600 [Thread-2] INFO  o.a.h.h.regionserver.ShutdownHook - Starting fs shutdown hook thread.
03:39:38.601 [Thread-278] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient writeChunk allocating new packet seqno=7, src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log, packetSize=65532, chunksPerPacket=127, bytesCurBlock=3072
03:39:38.601 [Thread-278] DEBUG org.apache.hadoop.hdfs.DFSClient - Queued packet 7
03:39:38.601 [ScalaTest-2-SendThread(localhost:58509)] DEBUG org.apache.zookeeper.ClientCnxn - Reading reply sessionid:0x15156793dd60000, packet:: clientPath:null serverPath:null finished:false header:: 270,8  replyHeader:: 270,80,0  request:: '/hbase/rs,T  response:: v{} 
03:39:38.601 [Thread-278] DEBUG org.apache.hadoop.hdfs.DFSClient - Queued packet 8
03:39:38.601 [Thread-278] DEBUG org.apache.hadoop.hdfs.DFSClient - Waiting for ack for: 8
03:39:38.599 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: getListing took 2ms
03:39:38.601 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] INFO  o.a.h.hbase.master.SplitLogManager - hdfs://localhost:34521/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting is empty dir, no logs to split
03:39:38.601 [DataStreamer for file /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008] DEBUG org.apache.hadoop.hdfs.DFSClient - DataStreamer block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008 sending packet packet seqno:7 offsetInBlock:3072 lastPacketInBlock:false lastByteOffsetInBlock: 3288
03:39:38.601 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG o.a.h.hbase.master.SplitLogManager - Scheduling batch of logs to split
03:39:38.601 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] INFO  o.a.h.hbase.master.SplitLogManager - started splitting 0 logs in [hdfs://localhost:34521/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting] for [asf910.gq1.ygridcore.net,41515,1448854766509]
03:39:38.601 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - Receiving one packet for block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008: PacketHeader with packetLen=220 header data: offsetInBlock: 3072
seqno: 7
lastPacketInBlock: false
dataLen: 216

03:39:38.602 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=7, lastPacketInBlock=false, offsetInBlock=3288, ackEnqueueNanoTime=16006570699924082, ackStatus=SUCCESS)
03:39:38.602 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[], replyAck=seqno: 7 status: SUCCESS downstreamAckTimeNanos: 0
03:39:38.602 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]: seqno=-2 waiting for local datanode to finish write.
03:39:38.602 [ResponseProcessor for block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient seqno: 7 status: SUCCESS downstreamAckTimeNanos: 0
03:39:38.602 [DataStreamer for file /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008] DEBUG org.apache.hadoop.hdfs.DFSClient - DataStreamer block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008 sending packet packet seqno:8 offsetInBlock:3288 lastPacketInBlock:true lastByteOffsetInBlock: 3288
03:39:38.603 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - Receiving one packet for block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008: PacketHeader with packetLen=0 header data: offsetInBlock: 3288
seqno: 8
lastPacketInBlock: true
dataLen: 0

03:39:38.603 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - Receiving an empty packet or the end of the block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008
03:39:38.603 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]: enqueue Packet(seqno=8, lastPacketInBlock=true, offsetInBlock=3288, ackEnqueueNanoTime=16006570701034214, ackStatus=SUCCESS)
03:39:38.605 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: responding to org.apache.hadoop.hdfs.protocol.ClientProtocol.getListing from 127.0.0.1:44550 Call#243 Retry#0 Wrote 37 bytes.
03:39:38.605 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] DEBUG o.a.h.h.s.d.f.impl.FsDatasetImpl - addBlock: Moved <https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data2/current/BP-1998386433-67.195.81.154-1448854761513/current/rbw/blk_1073741832_1008.meta> to <https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data2/current/BP-1998386433-67.195.81.154-1448854761513/current/finalized/blk_1073741832_1008.meta> and <https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data2/current/BP-1998386433-67.195.81.154-1448854761513/current/rbw/blk_1073741832> to <https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data2/current/BP-1998386433-67.195.81.154-1448854761513/current/finalized/blk_1073741832>
03:39:38.605 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] INFO  o.a.h.h.s.d.DataNode.clienttrace - src: /127.0.0.1:57555, dest: /127.0.0.1:51419, bytes: 3288, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_1808156532_10, offset: 0, srvID: 9edf6e45-33f9-4554-9ca3-d194e938a0d0, blockid: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, duration: 9549576022
03:39:38.606 [ResponseProcessor for block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008] DEBUG org.apache.hadoop.hdfs.DFSClient - DFSClient seqno: 8 status: SUCCESS downstreamAckTimeNanos: 0
03:39:38.606 [DataStreamer for file /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008] DEBUG org.apache.hadoop.hdfs.DFSClient - Closing old block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008
03:39:38.606 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[], replyAck=seqno: 8 status: SUCCESS downstreamAckTimeNanos: 0
03:39:38.606 [PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]] INFO  o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
03:39:38.606 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - PacketResponder: BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008, type=LAST_IN_PIPELINE, downstreams=0:[]: closing
03:39:38.606 [DataXceiver for client DFSClient_NONMAPREDUCE_1808156532_10 at /127.0.0.1:57555 [Receiving block BP-1998386433-67.195.81.154-1448854761513:blk_1073741832_1008]] DEBUG o.a.h.hdfs.server.datanode.DataNode - 127.0.0.1:51419:Number of active connections is: 2
03:39:38.607 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #244
03:39:38.607 [Socket Reader #2 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #244
03:39:38.607 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 34521: org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReceivedAndDeleted from 127.0.0.1:44549 Call#244 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.607 [IPC Server handler 1 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.608 [IPC Server handler 1 on 34521] DEBUG BlockStateChange - *BLOCK* NameNode.blockReceivedAndDeleted: from DatanodeRegistration(127.0.0.1, datanodeUuid=9edf6e45-33f9-4554-9ca3-d194e938a0d0, infoPort=47971, ipcPort=45582, storageInfo=lv=-55;cid=testClusterID;nsid=150682456;c=0) 1 blocks.
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG o.a.h.h.s.b.BlockManager - Reported block blk_1073741832_1008 on 127.0.0.1:51419 size 3288 replicaState = FINALIZED
03:39:38.609 [IPC Parameter Sending Thread #0] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to localhost/127.0.0.1:34521 from jenkins sending #245
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG o.a.h.h.s.b.BlockManager - In memory blockUCState = UNDER_CONSTRUCTION
03:39:38.609 [Socket Reader #1 for port 34521] DEBUG org.apache.hadoop.ipc.Server -  got #245
03:39:38.609 [IPC Server handler 1 on 34521] INFO  BlockStateChange - BLOCK* addStoredBlock: blockMap updated: 127.0.0.1:51419 is added to blk_1073741832_1008{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[[DISK]DS-3c92e93c-e657-477e-bf9b-c3ad8fda67de:NORMAL|RBW]]} size 464
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG BlockStateChange - BLOCK* block RECEIVED_BLOCK: blk_1073741832_1008 is received from DatanodeRegistration(127.0.0.1, datanodeUuid=9edf6e45-33f9-4554-9ca3-d194e938a0d0, infoPort=47971, ipcPort=45582, storageInfo=lv=-55;cid=testClusterID;nsid=150682456;c=0)
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG BlockStateChange - *BLOCK* NameNode.processIncrementalBlockReport: from DatanodeRegistration(127.0.0.1, datanodeUuid=9edf6e45-33f9-4554-9ca3-d194e938a0d0, infoPort=47971, ipcPort=45582, storageInfo=lv=-55;cid=testClusterID;nsid=150682456;c=0) receiving: 0,  received: 1,  deleted: 0
03:39:38.609 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 0 on 34521: org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 127.0.0.1:44550 Call#245 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.609 [IPC Server handler 1 on 34521] INFO  org.apache.hadoop.ipc.Server - Served: blockReceivedAndDeleted queueTime= 1 procesingTime= 1
03:39:38.609 [IPC Server handler 0 on 34521] DEBUG o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server - IPC Server handler 1 on 34521: responding to org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReceivedAndDeleted from 127.0.0.1:44549 Call#244 Retry#0
03:39:38.609 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.hdfs.StateChange - *DIR* NameNode.complete: /user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log
 fileId=16410 for DFSClient_NONMAPREDUCE_1808156532_10
03:39:38.609 [IPC Server handler 0 on 34521] DEBUG 
org.apache.hadoop.hdfs.StateChange - DIR* NameSystem.completeFile: 
/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log
 for DFSClient_NONMAPREDUCE_1808156532_10
03:39:38.609 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 
from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) 
connection to localhost/127.0.0.1:34521 from jenkins got value #244
03:39:38.609 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 1 on 34521: responding to 
org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.blockReceivedAndDeleted 
from 127.0.0.1:44549 Call#244 Retry#0 Wrote 33 bytes.
03:39:38.609 [DataNode: 
[[[DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data1/>,
 
[DISK]<https://builds.apache.org/job/Phoenix-master/ws/phoenix-spark/target/test-data/722983c4-349f-4e3e-9df8-2f5e867ba464/dfscluster_867b12de-4147-48bc-87cb-074bf7f630f5/dfs/data/data2/>]]
  heartbeating to localhost/127.0.0.1:34521] DEBUG 
o.a.hadoop.ipc.ProtobufRpcEngine - Call: blockReceivedAndDeleted took 3ms
03:39:38.610 [IPC Server handler 0 on 34521] DEBUG 
org.apache.hadoop.hdfs.StateChange - closeFile: 
/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log
 with 1 blocks is persisted to the file system
03:39:38.610 [IPC Server handler 0 on 34521] INFO  
org.apache.hadoop.hdfs.StateChange - DIR* completeFile: 
/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/MasterProcWALs/state-00000000000000000001.log
 is closed by DFSClient_NONMAPREDUCE_1808156532_10
03:39:38.610 [IPC Server handler 0 on 34521] INFO  org.apache.hadoop.ipc.Server 
- Served: complete queueTime= 0 procesingTime= 1
03:39:38.610 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 0 on 34521: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 127.0.0.1:44550 
Call#245 Retry#0
03:39:38.610 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 
from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) 
connection to localhost/127.0.0.1:34521 from jenkins got value #245
03:39:38.610 [Thread-278] DEBUG o.a.hadoop.ipc.ProtobufRpcEngine - Call: 
complete took 2ms
03:39:38.610 [Thread-278] DEBUG org.apache.hadoop.ipc.Client - stopping client 
from cache: org.apache.hadoop.ipc.Client@afa6041
03:39:38.610 [IPC Server handler 0 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 0 on 34521: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 127.0.0.1:44550 
Call#245 Retry#0 Wrote 35 bytes.
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.use.legacy.blockreader.local = 
false
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.read.shortcircuit = false
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.a.hadoop.hdfs.BlockReaderLocal - dfs.client.domain.socket.data.traffic = false
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.a.hadoop.hdfs.BlockReaderLocal - dfs.domain.socket.path = 
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.apache.hadoop.io.retry.RetryUtils - multipleLinearRandomRetry = null
03:39:38.611 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
org.apache.hadoop.ipc.Client - getting client out of cache: 
org.apache.hadoop.ipc.Client@afa6041
03:39:38.612 [IPC Parameter Sending Thread #0] DEBUG 
org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to 
localhost/127.0.0.1:34521 from jenkins sending #246
03:39:38.612 [Thread-2] INFO  o.a.h.h.regionserver.ShutdownHook - Shutdown hook 
finished.
03:39:38.612 [Socket Reader #1 for port 34521] DEBUG 
org.apache.hadoop.ipc.Server -  got #246
03:39:38.612 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 1 on 34521: 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 
Call#246 Retry#0 for RpcKind RPC_PROTOCOL_BUFFER
03:39:38.612 [IPC Server handler 1 on 34521] DEBUG 
o.a.h.security.UserGroupInformation - PrivilegedAction as:jenkins (auth:SIMPLE) 
from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
03:39:38.614 [IPC Server handler 1 on 34521] DEBUG 
org.apache.hadoop.security.Groups - Returning cached groups for 'jenkins'
03:39:38.614 [IPC Server handler 1 on 34521] INFO  
o.a.h.h.s.n.FSNamesystem.audit - allowed=true        ugi=jenkins (auth:SIMPLE)  
     ip=/127.0.0.1   cmd=getfileinfo 
src=/user/jenkins/test-data/33ca65d8-335b-4eec-858a-b067710c45d4/WALs/asf910.gq1.ygridcore.net,41515,1448854766509-splitting
    dst=null        perm=null       proto=rpc
03:39:38.614 [IPC Server handler 1 on 34521] INFO  org.apache.hadoop.ipc.Server 
- Served: getFileInfo queueTime= 0 procesingTime= 2
03:39:38.614 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 1 on 34521: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 
Call#246 Retry#0
03:39:38.614 [IPC Server handler 1 on 34521] DEBUG org.apache.hadoop.ipc.Server 
- IPC Server handler 1 on 34521: responding to 
org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from 127.0.0.1:44550 
Call#246 Retry#0 Wrote 92 bytes.
03:39:38.615 [IPC Client (137371501) connection to localhost/127.0.0.1:34521 
from jenkins] DEBUG org.apache.hadoop.ipc.Client - IPC Client (137371501) 
connection to localhost/127.0.0.1:34521 from jenkins got value #246
03:39:38.615 [MASTER_META_SERVER_OPERATIONS-asf910:40469-0] DEBUG 
o.a.hadoop.ipc.ProtobufRpcEngine - Call: getFileInfo took 3ms
03:39:38.615 [IPC Parameter Sending Thread #0] DEBUG 
org.apache.hadoop.ipc.Client - IPC Client (137371501) connection to 
localhost/127.0.0.1:34521 from jenkins sending #247
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix .................................... SUCCESS [3.680s]
[INFO] Phoenix Core ...................................... SUCCESS [52:12.030s]
[INFO] Phoenix - Flume ................................... SUCCESS [1:11.516s]
[INFO] Phoenix - Pig ..................................... SUCCESS [2:45.368s]
[INFO] Phoenix Query Server Client ....................... SUCCESS [1.283s]
[INFO] Phoenix Query Server .............................. SUCCESS [1:43.447s]
[INFO] Phoenix - Pherf ................................... SUCCESS [1:59.843s]
[INFO] Phoenix - Spark ................................... FAILURE [48.814s]
[INFO] Phoenix Assembly .................................. SKIPPED
[INFO] Phoenix - Tracing Web Application ................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:00:46.628s
[INFO] Finished at: Mon Nov 30 03:39:38 UTC 2015
[INFO] Final Memory: 78M/617M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test 
(integration-test) on project phoenix-spark: There are test failures -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.scalatest:scalatest-maven-plugin:1.0:test (integration-test) on project 
phoenix-spark: There are test failures
        at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
        at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
        at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
        at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
        at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
        at 
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
        at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
        at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:320)
        at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
        at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
        at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
        at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
        at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
        at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
        at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test failures
        at org.scalatest.tools.maven.TestMojo.execute(TestMojo.java:107)
        at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:101)
        at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:209)
        ... 19 more
[ERROR] 
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-spark
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 1.48 GB of artifacts by 30.3% relative to #975
Updating PHOENIX-2462
Recording test results