See <https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/190/changes>
Changes:
[umamahesh] HDFS-8412. Fix the test failures in HTTPFS: In some tests
setReplication called after fs close. Contributed by Uma Maheswara Rao G.
[aw] HADOOP-11884. test-patch.sh should pull the real findbugs version (Kengo
Seki via aw)
[aw] HADOOP-11944. add option to test-patch to avoid relocating patch process
directory (Sean Busbey via aw)
[aw] HADOOP-11949. Add user-provided plugins to test-patch (Sean Busbey via aw)
[arp] HDFS-8345. Storage policy APIs must be exposed via the FileSystem
interface. (Arpit Agarwal) A usage sketch follows this change list.
[szetszwo] HDFS-8405. Fix a typo in NamenodeFsck. Contributed by Takanobu
Asanuma
[raviprak] HDFS-4185. Add a metric for number of active leases (Rakesh R via
raviprak)
[xgong] YARN-3541. Add version info on timeline service / generic history web
UI and REST API. Contributed by Zhijie Shen
[jing9] HADOOP-1540. Support file exclusion list in distcp. Contributed by Rich
Haase.
[vinayakumarb] HDFS-6348. SecondaryNameNode not terminating properly on runtime
exceptions (Contributed by Rakesh R)
[aajisaka] HADOOP-10971. Add -C flag to make `hadoop fs -ls` print filenames
only. Contributed by Kengo Seki.
[aajisaka] Move HADOOP-8934 in CHANGES.txt from 3.0.0 to 2.8.0.
[vinayakumarb] HADOOP-11103. Clean up RemoteException (Contributed by Sean
Busbey)
[aajisaka] Move HADOOP-11581 in CHANGES.txt from 3.0.0 to 2.8.0.
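Of the changes above, HDFS-8345 is the one that adds to the public FileSystem
API. Below is a minimal Java usage sketch of what client code could look like
once storage policies are reachable through FileSystem; the exact method
signatures are assumptions based on the JIRA summary, not taken from this
build.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StoragePolicySketch {
      public static void main(String[] args) throws Exception {
        // Assumes fs.defaultFS points at an HDFS cluster; storage policies are an HDFS feature.
        FileSystem fs = FileSystem.get(new Configuration());
        Path cold = new Path("/data/cold");
        fs.mkdirs(cold);
        // Set a policy via the generic FileSystem interface instead of casting to
        // DistributedFileSystem (signature assumed: setStoragePolicy(Path, String)).
        fs.setStoragePolicy(cold, "COLD");
        // Enumerate the policies the cluster advertises (also assumed to be part of HDFS-8345).
        fs.getAllStoragePolicies().forEach(p -> System.out.println(p.getName()));
      }
    }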
------------------------------------------
[...truncated 8399 lines...]
[exec] 2015-05-19 14:26:22,735 INFO datanode.DataNode
(BlockPoolManager.java:doRefreshNamenodes(194)) - Starting BPOfferServices for
nameservices: <default>
[exec] 2015-05-19 14:26:22,746 INFO datanode.DataNode
(BPServiceActor.java:run(791)) - Block pool <registering> (Datanode Uuid
unassigned) service to localhost/127.0.0.1:35907 starting to offer service
[exec] 2015-05-19 14:26:22,752 INFO ipc.Server (Server.java:run(692)) -
IPC Server listener on 46119: starting
[exec] 2015-05-19 14:26:22,753 INFO ipc.Server (Server.java:run(852)) -
IPC Server Responder: starting
[exec] 2015-05-19 14:26:23,004 INFO common.Storage
(Storage.java:tryLock(715)) - Lock on
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/in_use.lock>
acquired by nodename [email protected]
[exec] 2015-05-19 14:26:23,005 INFO common.Storage
(DataStorage.java:loadStorageDirectory(272)) - Storage directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1>
is not formatted for BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,005 INFO common.Storage
(DataStorage.java:loadStorageDirectory(274)) - Formatting ...
[exec] 2015-05-19 14:26:23,024 INFO common.Storage
(BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage
directories for bpid BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,024 INFO common.Storage
(Storage.java:lock(675)) - Locking is disabled for
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756>
[exec] 2015-05-19 14:26:23,025 INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage
directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756>
is not formatted for BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,025 INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
[exec] 2015-05-19 14:26:23,025 INFO common.Storage
(BlockPoolSliceStorage.java:format(267)) - Formatting block pool
BP-7584180-67.195.81.150-1432045580756 directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756/current>
[exec] 2015-05-19 14:26:23,027 INFO common.Storage
(Storage.java:tryLock(715)) - Lock on
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/in_use.lock>
acquired by nodename [email protected]
[exec] 2015-05-19 14:26:23,027 INFO common.Storage
(DataStorage.java:loadStorageDirectory(272)) - Storage directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2>
is not formatted for BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,027 INFO common.Storage
(DataStorage.java:loadStorageDirectory(274)) - Formatting ...
[exec] 2015-05-19 14:26:23,042 INFO common.Storage
(BlockPoolSliceStorage.java:recoverTransitionRead(241)) - Analyzing storage
directories for bpid BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,042 INFO common.Storage
(Storage.java:lock(675)) - Locking is disabled for
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756>
[exec] 2015-05-19 14:26:23,042 INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(158)) - Block pool storage
directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756>
is not formatted for BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,043 INFO common.Storage
(BlockPoolSliceStorage.java:loadStorageDirectory(160)) - Formatting ...
[exec] 2015-05-19 14:26:23,043 INFO common.Storage
(BlockPoolSliceStorage.java:format(267)) - Formatting block pool
BP-7584180-67.195.81.150-1432045580756 directory
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756/current>
[exec] 2015-05-19 14:26:23,044 INFO datanode.DataNode
(DataNode.java:initStorage(1405)) - Setting up storage:
nsid=549639075;bpid=BP-7584180-67.195.81.150-1432045580756;lv=-56;nsInfo=lv=-63;cid=testClusterID;nsid=549639075;c=0;bpid=BP-7584180-67.195.81.150-1432045580756;dnuuid=null
[exec] 2015-05-19 14:26:23,046 INFO datanode.DataNode
(DataNode.java:checkDatanodeUuid(1234)) - Generated and persisted new Datanode
UUID a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
[exec] 2015-05-19 14:26:23,067 INFO impl.FsDatasetImpl
(FsVolumeList.java:addVolume(305)) - Added new volume:
DS-2108349f-36fa-4a07-9a8a-3dfa1545438e
[exec] 2015-05-19 14:26:23,068 INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addVolume(403)) - Added volume -
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current,>
StorageType: DISK
[exec] 2015-05-19 14:26:23,068 INFO impl.FsDatasetImpl
(FsVolumeList.java:addVolume(305)) - Added new volume:
DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a
[exec] 2015-05-19 14:26:23,068 INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addVolume(403)) - Added volume -
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current,>
StorageType: DISK
[exec] 2015-05-19 14:26:23,071 INFO impl.FsDatasetImpl
(FsDatasetImpl.java:registerMBean(2079)) - Registered FSDatasetState MBean
[exec] 2015-05-19 14:26:23,078 INFO datanode.DirectoryScanner
(DirectoryScanner.java:start(332)) - Periodic Directory Tree Verification scan
starting at 1432066256078 with interval 21600000
[exec] 2015-05-19 14:26:23,078 INFO impl.FsDatasetImpl
(FsDatasetImpl.java:addBlockPool(2535)) - Adding block pool
BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,079 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(404)) - Scanning block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
[exec] 2015-05-19 14:26:23,079 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(404)) - Scanning block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
[exec] 2015-05-19 14:26:23,094 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(409)) - Time taken to scan block pool
BP-7584180-67.195.81.150-1432045580756 on
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>:
15ms
[exec] 2015-05-19 14:26:23,095 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shouldWait(2316)) - dnInfo.length != numDataNodes
[exec] 2015-05-19 14:26:23,095 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:waitActive(2268)) - Waiting for cluster to become active
[exec] 2015-05-19 14:26:23,095 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(409)) - Time taken to scan block pool
BP-7584180-67.195.81.150-1432045580756 on
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>:
15ms
[exec] 2015-05-19 14:26:23,095 INFO impl.FsDatasetImpl
(FsVolumeList.java:addBlockPool(435)) - Total time to scan all replicas for
block pool BP-7584180-67.195.81.150-1432045580756: 16ms
[exec] 2015-05-19 14:26:23,096 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(190)) - Adding replicas to map for block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current...>
[exec] 2015-05-19 14:26:23,096 INFO impl.BlockPoolSlice
(BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current/BP-7584180-67.195.81.150-1432045580756/current/replicas>
doesn't exist
[exec] 2015-05-19 14:26:23,097 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(190)) - Adding replicas to map for block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current...>
[exec] 2015-05-19 14:26:23,097 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(195)) - Time to add replicas to map for block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data1/current>:
1ms
[exec] 2015-05-19 14:26:23,097 INFO impl.BlockPoolSlice
(BlockPoolSlice.java:readReplicasFromCache(688)) - Replica Cache file:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current/BP-7584180-67.195.81.150-1432045580756/current/replicas>
doesn't exist
[exec] 2015-05-19 14:26:23,097 INFO impl.FsDatasetImpl
(FsVolumeList.java:run(195)) - Time to add replicas to map for block pool
BP-7584180-67.195.81.150-1432045580756 on volume
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/data/data2/current>:
0ms
[exec] 2015-05-19 14:26:23,097 INFO impl.FsDatasetImpl
(FsVolumeList.java:getAllVolumesMap(221)) - Total time to add all replicas to
map: 2ms
[exec] 2015-05-19 14:26:23,099 INFO datanode.DataNode
(BPServiceActor.java:register(746)) - Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
beginning handshake with NN
[exec] 2015-05-19 14:26:23,110 INFO hdfs.StateChange
(DatanodeManager.java:registerDatanode(883)) - BLOCK* registerDatanode: from
DatanodeRegistration(127.0.0.1:56401,
datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329,
infoSecurePort=0, ipcPort=46119,
storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0) storage
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
[exec] 2015-05-19 14:26:23,110 INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage
changes from 0 to 0
[exec] 2015-05-19 14:26:23,111 INFO net.NetworkTopology
(NetworkTopology.java:add(418)) - Adding a new node:
/default-rack/127.0.0.1:56401
[exec] 2015-05-19 14:26:23,115 INFO datanode.DataNode
(BPServiceActor.java:register(764)) - Block pool Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
successfully registered with NN
[exec] 2015-05-19 14:26:23,115 INFO datanode.DataNode
(BPServiceActor.java:offerService(625)) - For namenode
localhost/127.0.0.1:35907 using BLOCKREPORT_INTERVAL of 21600000msec
CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
[exec] 2015-05-19 14:26:23,125 INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateHeartbeatState(448)) - Number of failed storage
changes from 0 to 0
[exec] 2015-05-19 14:26:23,125 INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID
DS-2108349f-36fa-4a07-9a8a-3dfa1545438e for DN 127.0.0.1:56401
[exec] 2015-05-19 14:26:23,126 INFO blockmanagement.DatanodeDescriptor
(DatanodeDescriptor.java:updateStorage(859)) - Adding new storage ID
DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a for DN 127.0.0.1:56401
[exec] 2015-05-19 14:26:23,134 INFO datanode.DataNode
(BPOfferService.java:updateActorStatesFromHeartbeat(511)) - Namenode Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
trying to claim ACTIVE state with txid=1
[exec] 2015-05-19 14:26:23,135 INFO datanode.DataNode
(BPOfferService.java:updateActorStatesFromHeartbeat(523)) - Acknowledging
ACTIVE Namenode Block pool BP-7584180-67.195.81.150-1432045580756 (Datanode
Uuid a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
[exec] 2015-05-19 14:26:23,147 INFO blockmanagement.BlockManager
(BlockManager.java:processReport(1816)) - Processing first storage report for
DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a from datanode
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
[exec] 2015-05-19 14:26:23,147 INFO BlockStateChange
(BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage
DS-67a2fab5-3ae2-42fa-9373-85d9719bd70a node
DatanodeRegistration(127.0.0.1:56401,
datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329,
infoSecurePort=0, ipcPort=46119,
storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0), blocks: 0,
hasStaleStorage: true, processing time: 0 msecs
[exec] 2015-05-19 14:26:23,148 INFO blockmanagement.BlockManager
(BlockManager.java:processReport(1816)) - Processing first storage report for
DS-2108349f-36fa-4a07-9a8a-3dfa1545438e from datanode
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb
[exec] 2015-05-19 14:26:23,148 INFO BlockStateChange
(BlockManager.java:processReport(1865)) - BLOCK* processReport: from storage
DS-2108349f-36fa-4a07-9a8a-3dfa1545438e node
DatanodeRegistration(127.0.0.1:56401,
datanodeUuid=a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb, infoPort=54329,
infoSecurePort=0, ipcPort=46119,
storageInfo=lv=-56;cid=testClusterID;nsid=549639075;c=0), blocks: 0,
hasStaleStorage: false, processing time: 0 msecs
[exec] 2015-05-19 14:26:23,163 INFO datanode.DataNode
(BPServiceActor.java:blockReport(490)) - Successfully sent block report
0x9ad282ae030b6f0e, containing 2 storage report(s), of which we sent 2. The
reports had 0 total blocks and used 1 RPC(s). This took 3 msec to generate and
25 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
[exec] 2015-05-19 14:26:23,163 INFO datanode.DataNode
(BPOfferService.java:processCommandFromActive(693)) - Got finalize command for
block pool BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,204 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:waitActive(2299)) - Cluster is active
[exec] 2015-05-19 14:26:23,210 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shutdown(1728)) - Shutting down the Mini HDFS Cluster
[exec] 2015-05-19 14:26:23,210 INFO hdfs.MiniDFSCluster
(MiniDFSCluster.java:shutdownDataNodes(1773)) - Shutting down DataNode 0
[exec] 2015-05-19 14:26:23,211 WARN datanode.DirectoryScanner
(DirectoryScanner.java:shutdown(378)) - DirectoryScanner: shutdown has been
called
[exec] 2015-05-19 14:26:23,211 INFO datanode.DataNode
(DataXceiverServer.java:closeAllPeers(263)) - Closing all peers.
[exec] 2015-05-19 14:26:23,212 INFO mortbay.log (Slf4jLog.java:info(67))
- Stopped SelectChannelConnector@localhost:0
[exec] 2015-05-19 14:26:23,323 INFO ipc.Server (Server.java:stop(2569)) -
Stopping server on 46119
[exec] 2015-05-19 14:26:23,324 INFO ipc.Server (Server.java:run(724)) -
Stopping IPC Server listener on 46119
[exec] 2015-05-19 14:26:23,324 INFO ipc.Server (Server.java:run(857)) -
Stopping IPC Server Responder
[exec] 2015-05-19 14:26:23,324 WARN datanode.DataNode
(BPServiceActor.java:offerService(701)) - BPOfferService for Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
interrupted
[exec] 2015-05-19 14:26:23,324 WARN datanode.DataNode
(BPServiceActor.java:run(831)) - Ending block pool service for: Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb) service to localhost/127.0.0.1:35907
[exec] 2015-05-19 14:26:23,426 INFO datanode.DataNode
(BlockPoolManager.java:remove(102)) - Removed Block pool
BP-7584180-67.195.81.150-1432045580756 (Datanode Uuid
a2ce3683-c6bd-4cc3-a4fe-c2dc39dcf8eb)
[exec] 2015-05-19 14:26:23,426 INFO impl.FsDatasetImpl
(FsDatasetImpl.java:shutdownBlockPool(2545)) - Removing block pool
BP-7584180-67.195.81.150-1432045580756
[exec] 2015-05-19 14:26:23,427 INFO impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:shutdown(183)) - Shutting down all async disk
service threads
[exec] 2015-05-19 14:26:23,427 INFO impl.FsDatasetAsyncDiskService
(FsDatasetAsyncDiskService.java:shutdown(191)) - All async disk service threads
have been shut down
[exec] 2015-05-19 14:26:23,427 INFO impl.RamDiskAsyncLazyPersistService
(RamDiskAsyncLazyPersistService.java:shutdown(165)) - Shutting down all async
lazy persist service threads
[exec] 2015-05-19 14:26:23,427 INFO impl.RamDiskAsyncLazyPersistService
(RamDiskAsyncLazyPersistService.java:shutdown(172)) - All async lazy persist
service threads have been shut down
[exec] 2015-05-19 14:26:23,433 INFO datanode.DataNode
(DataNode.java:shutdown(1821)) - Shutdown complete.
[exec] 2015-05-19 14:26:23,433 INFO namenode.FSNamesystem
(FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for
active state
[exec] 2015-05-19 14:26:23,435 INFO namenode.FSEditLog
(FSEditLog.java:endCurrentLogSegment(1291)) - Ending log segment 1
[exec] 2015-05-19 14:26:23,435 INFO namenode.FSNamesystem
(FSNamesystem.java:run(4325)) - NameNodeEditLogRoller was interrupted, exiting
[exec] 2015-05-19 14:26:23,435 INFO namenode.FSNamesystem
(FSNamesystem.java:run(4405)) - LazyPersistFileScrubber was interrupted, exiting
[exec] 2015-05-19 14:26:23,436 INFO namenode.FSEditLog
(FSEditLog.java:printStatistics(698)) - Number of transactions: 2 Total time
for transactions(ms): 2 Number of transactions batched in Syncs: 0 Number of
syncs: 3 SyncTimes(ms): 2 1
[exec] 2015-05-19 14:26:23,438 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_inprogress_0000000000000000001>
->
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name1/current/edits_0000000000000000001-0000000000000000002>
[exec] 2015-05-19 14:26:23,439 INFO namenode.FileJournalManager
(FileJournalManager.java:finalizeLogSegment(134)) - Finalizing edits file
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_inprogress_0000000000000000001>
->
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/native/build/test/data/dfs/name2/current/edits_0000000000000000001-0000000000000000002>
[exec] 2015-05-19 14:26:23,440 INFO ipc.Server (Server.java:stop(2569)) -
Stopping server on 35907
[exec] 2015-05-19 14:26:23,441 INFO ipc.Server (Server.java:run(724)) -
Stopping IPC Server listener on 35907
[exec] 2015-05-19 14:26:23,441 INFO ipc.Server (Server.java:run(857)) -
Stopping IPC Server Responder
[exec] 2015-05-19 14:26:23,441 INFO blockmanagement.BlockManager
(BlockManager.java:run(3686)) - Stopping ReplicationMonitor.
[exec] 2015-05-19 14:26:23,477 INFO namenode.FSNamesystem
(FSNamesystem.java:stopActiveServices(1221)) - Stopping services started for
active state
[exec] 2015-05-19 14:26:23,477 INFO namenode.FSNamesystem
(FSNamesystem.java:stopStandbyServices(1311)) - Stopping services started for
standby state
[exec] 2015-05-19 14:26:23,479 INFO mortbay.log (Slf4jLog.java:info(67))
- Stopped SelectChannelConnector@localhost:0
[exec] 2015-05-19 14:26:23,580 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:stop(211)) - Stopping DataNode metrics system...
[exec] 2015-05-19 14:26:23,582 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:stop(217)) - DataNode metrics system stopped.
[exec] 2015-05-19 14:26:23,583 INFO impl.MetricsSystemImpl
(MetricsSystemImpl.java:shutdown(601)) - DataNode metrics system shutdown
complete.
[echo] Finished test_native_mini_dfs
[INFO] Executed tasks
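The console output above is the MiniDFSCluster that test_native_mini_dfs spins
up: the block pool directories are formatted, the single DataNode registers
with the NameNode, the cluster reports active, and everything is shut down
again. For readers who want to reproduce that lifecycle from Java rather than
the native libhdfs harness, a minimal sketch (hypothetical class name, not part
of this build):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MiniDfsLifecycleSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        // One DataNode, matching the trace above (a single DN backed by two storage directories).
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(1)
            .build();                 // formats the name/data dirs and starts NN + DN
        try {
          cluster.waitActive();       // "Waiting for cluster to become active" ... "Cluster is active"
          FileSystem fs = cluster.getFileSystem();
          Path probe = new Path("/probe");
          fs.mkdirs(probe);
          System.out.println("probe exists: " + fs.exists(probe));
        } finally {
          cluster.shutdown();         // "Shutting down the Mini HDFS Cluster"
        }
      }
    }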
[INFO]
[INFO] --- maven-jar-plugin:2.5:jar (prepare-jar) @ hadoop-hdfs ---
[INFO] Building jar:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT.jar>
[INFO]
[INFO] --- maven-jar-plugin:2.5:test-jar (prepare-test-jar) @ hadoop-hdfs ---
[INFO] Building jar:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-tests.jar>
[INFO]
[INFO] >>> maven-source-plugin:2.3:jar (default) @ hadoop-hdfs >>>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @
hadoop-hdfs ---
[INFO]
[INFO] <<< maven-source-plugin:2.3:jar (default) @ hadoop-hdfs <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:jar (default) @ hadoop-hdfs ---
[INFO] Building jar:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-sources.jar>
[INFO]
[INFO] >>> maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs >>>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[INFO] Executed tasks
[INFO]
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @
hadoop-hdfs ---
[INFO]
[INFO] <<< maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs <<<
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar (default) @ hadoop-hdfs ---
[INFO] Building jar:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-hdfs-3.0.0-SNAPSHOT-test-sources.jar>
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default) @ hadoop-hdfs ---
[INFO] Fork Value is true
[java] Warnings generated: 1
[INFO] Done FindBugs Analysis....
[INFO]
[INFO] --- maven-dependency-plugin:2.2:copy (site) @ hadoop-hdfs ---
[INFO] Configured Artifact: jdiff:jdiff:1.0.9:jar
[INFO] Configured Artifact:
org.apache.hadoop:hadoop-annotations:3.0.0-SNAPSHOT:jar
[INFO] Configured Artifact: xerces:xercesImpl:2.11.0:jar
[INFO] Copying jdiff-1.0.9.jar to
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/jdiff.jar>
[INFO] Copying hadoop-annotations-3.0.0-SNAPSHOT.jar to
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/hadoop-annotations.jar>
[INFO] Copying xercesImpl-2.11.0.jar to
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/xerces.jar>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (site) @ hadoop-hdfs ---
[INFO] Executing tasks
main:
[mkdir] Created dir:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src>
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HttpFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS BookKeeper Journal
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Hadoop HDFS-NFS
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] ------------------------------------------------------------------------
[INFO] Building Apache Hadoop HDFS Project 3.0.0-SNAPSHOT
[INFO] ------------------------------------------------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project
---
[INFO] Deleting
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target>
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project
---
[INFO] Executing tasks
main:
[mkdir] Created dir:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/target/test-dir>
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @
hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @
hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @
hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @
hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable
package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project
---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @
hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @
hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [ 46.973 s]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 02:53 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.053 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 02:53 h
[INFO] Finished at: 2015-05-19T14:28:29+00:00
[INFO] Final Memory: 54M/265M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project
hadoop-hdfs: An Ant BuildException has occured:
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/src/main/docs>
does not exist.
[ERROR] around Ant part ...<copy
todir="<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/docs-src">...>
@ 5:127 in
<https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/ws/hadoop-hdfs-project/hadoop-hdfs/target/antrun/build-main.xml>
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please
read the following articles:
[ERROR] [Help 1]
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #175
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 805181 bytes
Compression is 0.0%
Took 39 sec
Recording test results
Updating HADOOP-11581
Updating HADOOP-11949
Updating HADOOP-11944
Updating YARN-3541
Updating HADOOP-1540
Updating HADOOP-8934
Updating HADOOP-10971
Updating HADOOP-11103
Updating HADOOP-11884
Updating HDFS-8345
Updating HDFS-8405
Updating HDFS-8412
Updating HDFS-6348
Updating HDFS-4185