PIG-3290 tracks this failure: https://issues.apache.org/jira/browse/PIG-3290

In addition, PIG-3286 tracks another failing unit test: https://issues.apache.org/jira/browse/PIG-3286

On Mon, Apr 22, 2013 at 3:32 PM, Apache Jenkins Server <[email protected]> wrote: > See <https://builds.apache.org/job/Pig-trunk/1463/changes> > > Changes: > > [daijy] PIG-2767: Pig creates wrong schema after dereferencing nested > tuple fields > > ------------------------------------------ > [...truncated 38216 lines...] > [junit] at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185) > [junit] at > sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159) > [junit] at > sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131) > [junit] at java.lang.Thread.run(Thread.java:662) > [junit] > [junit] 439752 [IPC Server listener on 56499] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server listener on 56499 > [junit] 439753 > [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@baf589] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Exiting DataXceiveServer > [junit] 439756 > [org.apache.hadoop.hdfs.server.datanode.DataBlockScanner@7ec028] INFO > org.apache.hadoop.hdfs.server.datanode.DataBlockScanner - Exiting > DataBlockScanner thread. 
> [junit] 440083 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_-6177855010699382237_1189 file > build/test/data/dfs/data/data1/current/blk_-6177855010699382237 for deletion > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_-2088787686098981270_1184 file > build/test/data/dfs/data/data2/current/blk_-2088787686098981270 for deletion > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_1448845170710403685_1186 file > build/test/data/dfs/data/data2/current/blk_1448845170710403685 for deletion > [junit] 440084 [Thread-307] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_-6177855010699382237_1189 at file > build/test/data/dfs/data/data1/current/blk_-6177855010699382237 > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_1505983587429670804_1190 file > build/test/data/dfs/data/data2/current/blk_1505983587429670804 for deletion > [junit] 440084 [Thread-256] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_-2088787686098981270_1184 at file > build/test/data/dfs/data/data2/current/blk_-2088787686098981270 > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_1969082855003938608_1191 file > build/test/data/dfs/data/data1/current/blk_1969082855003938608 for deletion > [junit] 440084 [Thread-256] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_1448845170710403685_1186 at file > 
build/test/data/dfs/data/data2/current/blk_1448845170710403685 > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_2600316783254142406_1192 file > build/test/data/dfs/data/data2/current/blk_2600316783254142406 for deletion > [junit] 440084 [Thread-307] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_1969082855003938608_1191 at file > build/test/data/dfs/data/data1/current/blk_1969082855003938608 > [junit] 440084 [Thread-256] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_1505983587429670804_1190 at file > build/test/data/dfs/data/data2/current/blk_1505983587429670804 > [junit] 440084 [Thread-256] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_2600316783254142406_1192 at file > build/test/data/dfs/data/data2/current/blk_2600316783254142406 > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_3621221455178812140_1185 file > build/test/data/dfs/data/data1/current/blk_3621221455178812140 for deletion > [junit] 440084 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] WARN > org.apache.hadoop.hdfs.server.datanode.DataNode - Unexpected error trying > to delete block blk_4492901830886386128_1183. BlockInfo not found in > volumeMap. > [junit] 440084 [Thread-307] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_3621221455178812140_1185 at file > build/test/data/dfs/data/data1/current/blk_3621221455178812140 > [junit] 440085 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.mortbay.log - Completed FSVolumeSet.checkDirs. Removed=0volumes. 
List > of current volumes: < > https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/jenkins/jenkins-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current > > > [junit] 440085 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] WARN > org.apache.hadoop.hdfs.server.datanode.DataNode - Error processing > datanode Command > [junit] java.io.IOException: Error in deleting blocks. > [junit] at > org.apache.hadoop.hdfs.server.datanode.FSDataset.invalidate(FSDataset.java:1835) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1084) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.processCommand(DataNode.java:1046) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.offerService(DataNode.java:901) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1429) > [junit] at java.lang.Thread.run(Thread.java:662) > [junit] 440492 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_-6177855010699382237_1189 file > build/test/data/dfs/data/data4/current/blk_-6177855010699382237 for deletion > [junit] 440492 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_1505983587429670804_1190 file > build/test/data/dfs/data/data3/current/blk_1505983587429670804 for deletion > [junit] 440492 [Thread-311] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_-6177855010699382237_1189 at file > build/test/data/dfs/data/data4/current/blk_-6177855010699382237 > [junit] 440492 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_1969082855003938608_1191 file > 
build/test/data/dfs/data/data4/current/blk_1969082855003938608 for deletion > [junit] 440493 [Thread-312] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_1505983587429670804_1190 at file > build/test/data/dfs/data/data3/current/blk_1505983587429670804 > [junit] 440493 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_2600316783254142406_1192 file > build/test/data/dfs/data/data3/current/blk_2600316783254142406 for deletion > [junit] 440493 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_4534108920867883282_1182 file > build/test/data/dfs/data/data3/current/blk_4534108920867883282 for deletion > [junit] 440493 [Thread-312] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_2600316783254142406_1192 at file > build/test/data/dfs/data/data3/current/blk_2600316783254142406 > [junit] 440493 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Scheduling block > blk_7667914583613760056_1188 file > build/test/data/dfs/data/data3/current/blk_7667914583613760056 for deletion > [junit] 440493 [Thread-312] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_4534108920867883282_1182 at file > build/test/data/dfs/data/data3/current/blk_4534108920867883282 > [junit] 440493 [Thread-311] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_1969082855003938608_1191 at file > build/test/data/dfs/data/data4/current/blk_1969082855003938608 > [junit] 440493 [Thread-312] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Deleted block > blk_7667914583613760056_1188 at file > build/test/data/dfs/data/data3/current/blk_7667914583613760056 > [junit] 440753 [main] INFO > 
org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - DatanodeRegistration( > 127.0.0.1:60844, > storageID=DS-193010016-67.195.138.24-60844-1366669481053, infoPort=43720, > ipcPort=56499):Finishing DataNode in: FSDataset{dirpath='< > https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data5/current,/home/jenkins/jenkins-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data6/current' > }> > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.ipc.Server - Stopping server on 56499 > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > Shutting down all async disk service threads... > [junit] 440753 [DataNode: > [build/test/data/dfs/data/data5,build/test/data/dfs/data/data6]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - All > async disk service threads have been shut down. 
> [junit] 440753 [main] WARN org.apache.hadoop.metrics2.util.MBeans - > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1460287628 > [junit] javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1460287628 > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403) > [junit] at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506) > [junit] at > org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71) > [junit] at > org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:87) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68) > [junit] at > org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:138) > [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > [junit] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > [junit] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > [junit] at java.lang.reflect.Method.invoke(Method.java:597) > [junit] at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit] at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit] at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit] at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) > [junit] at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) > [junit] at > junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) > [junit] 440754 [main] WARN > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > AsyncDiskService has already shut down. > [junit] Shutting down DataNode 1 > [junit] 440754 [main] INFO org.mortbay.log - Stopped > SelectChannelConnector@localhost:0 > [junit] 440755 [main] INFO org.apache.hadoop.ipc.Server - Stopping > server on 39601 > [junit] 440755 [IPC Server handler 0 on 39601] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 0 on 39601: exiting > [junit] 440755 [IPC Server handler 1 on 39601] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 1 on 39601: exiting > [junit] 440755 [IPC Server handler 2 on 39601] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 2 on 39601: exiting > [junit] 440755 [IPC Server listener on 39601] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server listener on 39601 > [junit] 440755 [IPC Server Responder] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server Responder > [junit] 440756 [main] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 440756 [main] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for 
threadgroup > to exit, active threads is 1 > [junit] 440756 > [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@64160e] WARN > org.apache.hadoop.hdfs.server.datanode.DataNode - DatanodeRegistration( > 127.0.0.1:55308, > storageID=DS-1289720787-67.195.138.24-55308-1366669480666, infoPort=38298, > ipcPort=39601):DataXceiveServer:java.nio.channels.AsynchronousCloseException > [junit] at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185) > [junit] at > sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159) > [junit] at > sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131) > [junit] at java.lang.Thread.run(Thread.java:662) > [junit] > [junit] 440756 > [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@64160e] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Exiting DataXceiveServer > [junit] 441388 > [org.apache.hadoop.hdfs.server.datanode.DataBlockScanner@72d873] INFO > org.apache.hadoop.hdfs.server.datanode.DataBlockScanner - Exiting > DataBlockScanner thread. 
> [junit] 441730 > [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1cc5069] > INFO org.apache.hadoop.hdfs.StateChange - BLOCK* ask 127.0.0.1:60844 to > delete blk_1505983587429670804_1190 blk_7667914583613760056_1188 > blk_4534108920867883282_1182 blk_2600316783254142406_1192 > blk_1969082855003938608_1191 > [junit] 441730 > [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1cc5069] > INFO org.apache.hadoop.hdfs.StateChange - BLOCK* ask 127.0.0.1:47643 to > delete blk_7667914583613760056_1188 blk_4492901830886386128_1183 > blk_4534108920867883282_1182 blk_-6177855010699382237_1189 > [junit] 441756 [main] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - DatanodeRegistration( > 127.0.0.1:55308, > storageID=DS-1289720787-67.195.138.24-55308-1366669480666, infoPort=38298, > ipcPort=39601):Finishing DataNode in: FSDataset{dirpath='< > https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data3/current,/home/jenkins/jenkins-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data4/current' > }> > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.ipc.Server - Stopping server on 39601 > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > Shutting 
down all async disk service threads... > [junit] 441756 [DataNode: > [build/test/data/dfs/data/data3,build/test/data/dfs/data/data4]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - All > async disk service threads have been shut down. > [junit] 441756 [main] WARN org.apache.hadoop.metrics2.util.MBeans - > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-267238330 > [junit] javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-267238330 > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403) > [junit] at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506) > [junit] at > org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71) > [junit] at > org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:87) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68) > [junit] at > org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:138) > [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > [junit] at > 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > [junit] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > [junit] at java.lang.reflect.Method.invoke(Method.java:597) > [junit] at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit] at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit] at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit] at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) > [junit] at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) > [junit] at > junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) > [junit] 441757 [main] WARN > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > AsyncDiskService has already shut down. 
> [junit] Shutting down DataNode 0 > [junit] 441761 [main] INFO org.mortbay.log - Stopped > SelectChannelConnector@localhost:0 > [junit] 441862 [main] INFO org.apache.hadoop.ipc.Server - Stopping > server on 52940 > [junit] 441862 [IPC Server handler 0 on 52940] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 0 on 52940: exiting > [junit] 441862 [IPC Server handler 2 on 52940] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 2 on 52940: exiting > [junit] 441862 [IPC Server Responder] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server Responder > [junit] 441862 [IPC Server listener on 52940] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server listener on 52940 > [junit] 441862 [main] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 441862 [IPC Server handler 1 on 52940] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 1 on 52940: exiting > [junit] 441862 [main] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 1 > [junit] 441862 > [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@11bd9c9] WARN > org.apache.hadoop.hdfs.server.datanode.DataNode - DatanodeRegistration( > 127.0.0.1:51709, > storageID=DS-433870727-67.195.138.24-51709-1366669480269, infoPort=48683, > ipcPort=52940):DataXceiveServer:java.nio.channels.AsynchronousCloseException > [junit] at > java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:185) > [junit] at > sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:159) > [junit] at > sun.nio.ch.ServerSocketAdaptor.accept(ServerSocketAdaptor.java:84) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:131) > [junit] at java.lang.Thread.run(Thread.java:662) > [junit] > [junit] 441862 > [org.apache.hadoop.hdfs.server.datanode.DataXceiverServer@11bd9c9] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Exiting 
DataXceiveServer > [junit] 442004 > [org.apache.hadoop.hdfs.server.datanode.DataBlockScanner@f6f1b6] INFO > org.apache.hadoop.hdfs.server.datanode.DataBlockScanner - Exiting > DataBlockScanner thread. > [junit] 442862 [main] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 442862 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - DatanodeRegistration( > 127.0.0.1:51709, > storageID=DS-433870727-67.195.138.24-51709-1366669480269, infoPort=48683, > ipcPort=52940):Finishing DataNode in: FSDataset{dirpath='< > https://builds.apache.org/job/Pig-trunk/ws/trunk/build/test/data/dfs/data/data1/current,/home/jenkins/jenkins-slave/workspace/Pig-trunk/trunk/build/test/data/dfs/data/data2/current' > }> > [junit] 442862 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] WARN > org.apache.hadoop.metrics2.util.MBeans - > Hadoop:service=DataNode,name=DataNodeInfo > [junit] javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=DataNodeInfo > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403) > [junit] at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506) > [junit] at > org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.unRegisterMXBean(DataNode.java:513) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:726) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.run(DataNode.java:1442) > [junit] at 
java.lang.Thread.run(Thread.java:662) > [junit] 442863 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.ipc.Server - Stopping server on 52940 > [junit] 442863 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 442863 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.DataNode - Waiting for threadgroup > to exit, active threads is 0 > [junit] 442863 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > Shutting down all async disk service threads... > [junit] 442863 [DataNode: > [build/test/data/dfs/data/data1,build/test/data/dfs/data/data2]] INFO > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - All > async disk service threads have been shut down. 
> [junit] 442863 [main] WARN org.apache.hadoop.metrics2.util.MBeans - > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1230879502 > [junit] javax.management.InstanceNotFoundException: > Hadoop:service=DataNode,name=FSDatasetState-UndefinedStorageId-1230879502 > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getMBean(DefaultMBeanServerInterceptor.java:1094) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:415) > [junit] at > com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403) > [junit] at > com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506) > [junit] at > org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:71) > [junit] at > org.apache.hadoop.hdfs.server.datanode.FSDataset.shutdown(FSDataset.java:1934) > [junit] at > org.apache.hadoop.hdfs.server.datanode.DataNode.shutdown(DataNode.java:788) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdownDataNodes(MiniDFSCluster.java:566) > [junit] at > org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:550) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsClusters(MiniGenericCluster.java:87) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutdownMiniDfsAndMrClusters(MiniGenericCluster.java:77) > [junit] at > org.apache.pig.test.MiniGenericCluster.shutDown(MiniGenericCluster.java:68) > [junit] at > org.apache.pig.test.TestStore.oneTimeTearDown(TestStore.java:138) > [junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native > Method) > [junit] at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) > [junit] at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > [junit] at java.lang.reflect.Method.invoke(Method.java:597) > [junit] at > 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit] at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit] at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit] at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33) > [junit] at > org.junit.runners.ParentRunner.run(ParentRunner.java:309) > [junit] at > junit.framework.JUnit4TestAdapter.run(JUnit4TestAdapter.java:38) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:420) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:911) > [junit] at > org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:768) > [junit] 442863 [main] WARN > org.apache.hadoop.hdfs.server.datanode.FSDatasetAsyncDiskService - > AsyncDiskService has already shut down. > [junit] 442864 [main] INFO org.mortbay.log - Stopped > SelectChannelConnector@localhost:0 > [junit] 442965 > [org.apache.hadoop.hdfs.server.namenode.FSNamesystem$ReplicationMonitor@1cc5069] > WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem - > ReplicationMonitor thread received > InterruptedException.java.lang.InterruptedException: sleep interrupted > [junit] 442965 > [org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor@1dccedd] > INFO org.apache.hadoop.hdfs.server.namenode.DecommissionManager - > Interrupted Monitor > [junit] java.lang.InterruptedException: sleep interrupted > [junit] at java.lang.Thread.sleep(Native Method) > [junit] at > org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65) > [junit] at java.lang.Thread.run(Thread.java:662) > [junit] 442965 [main] INFO > org.apache.hadoop.hdfs.server.namenode.FSNamesystem - Number of > transactions: 963 Total time for transactions(ms): 16Number of transactions > batched in 
Syncs: 102 Number of syncs: 658 SyncTimes(ms): 8389 595 > [junit] 442973 [main] INFO org.apache.hadoop.ipc.Server - Stopping > server on 36870 > [junit] 442973 [IPC Server listener on 36870] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server listener on 36870 > [junit] 442973 [main] INFO > org.apache.hadoop.ipc.metrics.RpcInstrumentation - shut down > [junit] 442973 [IPC Server Responder] INFO > org.apache.hadoop.ipc.Server - Stopping IPC Server Responder > [junit] 442973 [IPC Server handler 2 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 2 on 36870: exiting > [junit] 442973 [IPC Server handler 0 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 0 on 36870: exiting > [junit] 442973 [IPC Server handler 4 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 4 on 36870: exiting > [junit] 442973 [IPC Server handler 9 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 9 on 36870: exiting > [junit] 442974 [IPC Server handler 3 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 3 on 36870: exiting > [junit] 442974 [IPC Server handler 8 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 8 on 36870: exiting > [junit] 442974 [IPC Server handler 5 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 5 on 36870: exiting > [junit] 442974 [IPC Server handler 6 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 6 on 36870: exiting > [junit] 442974 [IPC Server handler 7 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 7 on 36870: exiting > [junit] 442974 [IPC Server handler 1 on 36870] INFO > org.apache.hadoop.ipc.Server - IPC Server handler 1 on 36870: exiting > [junit] Tests run: 17, Failures: 0, Errors: 0, Time elapsed: 434.34 sec > [junit] Running org.apache.pig.test.TestStringUDFs > [junit] 0 [main] WARN org.apache.pig.builtin.LAST_INDEX_OF - No > logger object provided to UDF: org.apache.pig.builtin.LAST_INDEX_OF. 
Failed > to process input; error - null > [junit] 5 [main] WARN org.apache.pig.builtin.SUBSTRING - No > logger object provided to UDF: org.apache.pig.builtin.SUBSTRING. > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > [junit] 6 [main] WARN org.apache.pig.builtin.SUBSTRING - No > logger object provided to UDF: org.apache.pig.builtin.SUBSTRING. > java.lang.StringIndexOutOfBoundsException: String index out of range: -8 > [junit] 6 [main] WARN org.apache.pig.builtin.SUBSTRING - No > logger object provided to UDF: org.apache.pig.builtin.SUBSTRING. > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > [junit] 7 [main] WARN org.apache.pig.builtin.SUBSTRING - No > logger object provided to UDF: org.apache.pig.builtin.SUBSTRING. > java.lang.NullPointerException > [junit] 7 [main] WARN org.apache.pig.builtin.SUBSTRING - No > logger object provided to UDF: org.apache.pig.builtin.SUBSTRING. > java.lang.StringIndexOutOfBoundsException: String index out of range: -1 > [junit] 9 [main] WARN org.apache.pig.builtin.INDEXOF - No logger > object provided to UDF: org.apache.pig.builtin.INDEXOF. Failed to process > input; error - null > [junit] Tests run: 13, Failures: 0, Errors: 0, Time elapsed: 0.253 sec > [delete] Deleting directory /tmp/pig_junit_tmp116276669 > > BUILD FAILED > <https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:786: The > following error occurred while executing this line: > <https://builds.apache.org/job/Pig-trunk/ws/trunk/build.xml>:854: Tests > failed! > > Total time: 19 minutes 15 seconds > Build step 'Execute shell' marked build as failure > [FINDBUGS] Skipping publisher since build result is FAILURE > Recording test results > Publishing Javadoc > Archiving artifacts > Recording fingerprints >
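For context on the TestStringUDFs warnings above: SUBSTRING is surfacing StringIndexOutOfBoundsException for negative or out-of-range indices, which the UDF then swallows with a WARN and a null result. A minimal sketch of the bounds-checking pattern involved (this is a hypothetical helper for illustration, not Pig's actual SUBSTRING implementation; safeSubstring is an assumed name):

```java
// Hypothetical illustration of defensive substring bounds handling.
// Instead of letting String.substring throw StringIndexOutOfBoundsException
// for indices like -2 or -8 (as seen in the warnings above), the guard
// returns null for any out-of-range request.
public class SafeSubstring {
    static String safeSubstring(String s, int begin, int end) {
        if (s == null) {
            return null; // mirrors the NullPointerException case in the log
        }
        if (begin < 0 || end > s.length() || begin > end) {
            return null; // out-of-range indices yield null rather than throwing
        }
        return s.substring(begin, end);
    }

    public static void main(String[] args) {
        System.out.println(safeSubstring("hello", 1, 3));  // in range
        System.out.println(safeSubstring("hello", -2, 3)); // negative start -> null
    }
}
```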
