Is there a chance of a version issue, since I am running ZooKeeper 3.4.5 while Nutch 2.2.1 builds against ZooKeeper 3.3.1?
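One quick way I can think of to rule that in or out is to compare the ZooKeeper client jar that the Nutch job ships with against what the ensemble is running. This is only a sketch: the jar/connection commands in the comments and the idea that the major.minor pair is what matters for compatibility are my assumptions, not something established in this thread.

```shell
# Sketch: compare the ZooKeeper client version bundled with Nutch against
# the version the ensemble runs. The version strings below are examples.
minor_of() { echo "$1" | cut -d. -f1-2; }   # keep only "major.minor"

bundled="3.3.1"   # e.g. found via: unzip -l runtime/deploy/*.job | grep -i zookeeper
cluster="3.4.5"   # e.g. reported by: echo stat | nc dev1o 2181 | head -1

if [ "$(minor_of "$bundled")" != "$(minor_of "$cluster")" ]; then
  echo "minor version mismatch: client $bundled vs server $cluster"
fi
```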
On Thu, Nov 14, 2013 at 3:44 PM, Jon Uhal <[email protected]> wrote:

> So I think it has to do with Accumulo somehow. I reverted the
> conf/gora.properties mock setting from false to:
>
> gora.datastore.accumulo.mock=true
>
> and after rebuilding and re-running, the runtime deploy job completed
> successfully. Trying to see if I can track down the issue.
>
>
> On Thu, Nov 14, 2013 at 2:24 PM, Jon Uhal <[email protected]> wrote:
>
>> So I broke down and tried to use the 1.7 release to do the inject step
>> just to see if it would work. It did. So there is something that I either
>> broke with my 2.2.1 setup or I'm doing something wrong with my 2.2.1
>> configs. Going to try to rebuild 2.2.1 from source and try again.
>>
>>
>> On Thu, Nov 14, 2013 at 12:13 PM, Jon Uhal <[email protected]> wrote:
>>
>>> Lewis,
>>>
>>> Here is what I can find. I have all logging levels set to DEBUG to try
>>> to figure this out, which is overkill, so I'm trying to pick and choose
>>> the right things to post.
>>>
>>> Here is the meat of my zoo.cfg file. I'm not sure where the clientPath
>>> or serverPath would be getting set from. I'm still learning ZooKeeper...
>>>
>>> tickTime=2000
>>> initLimit=10
>>> syncLimit=5
>>> dataDir=/webroot/search/nutch/zookeeper-3.4.5/data
>>> clientPort=2181
>>> maxClientCnxns=100
>>> server.1=dev1o:2888:3888
>>> server.2=dev5o:2888:3888
>>> server.3=dev6o:2888:3888
>>>
>>> Here are the extracted parts of hadoop-search-tasktracker-dev5o.log.
>>> I removed the heartbeat lines and reduced the FSInputChecker lines.
>>>
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.Counters: Creating group org.apache.hadoop.mapred.Task$Counter with bundle
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.Counters: Adding SPILLED_RECORDS
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.Counters: Creating group org.apache.hadoop.mapred.Task$Counter with bundle
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
>>> 2013-11-14 16:50:31,485 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 1 0:0
>>> 2013-11-14 16:50:31,486 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201311141534_0005_m_000002_0 task's state:UNASSIGNED
>>> 2013-11-14 16:50:31,486 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201311141534_0005_m_000002_0 which needs 1 slots
>>> 2013-11-14 16:50:31,486 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201311141534_0005_m_000002_0 which needs 1 slots
>>> 2013-11-14 16:50:31,487 DEBUG org.apache.hadoop.ipc.Client: The ping interval is60000ms.
>>> 2013-11-14 16:50:31,487 DEBUG org.apache.hadoop.ipc.Client: Use SIMPLE authentication for protocol ClientProtocol
>>> 2013-11-14 16:50:31,487 DEBUG org.apache.hadoop.ipc.Client: Connecting to dev1/10.2.156.115:9000
>>> 2013-11-14 16:50:31,487 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15080
>>> 2013-11-14 16:50:31,487 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search: starting, having connections 2
>>> 2013-11-14 16:50:31,489 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15080
>>> 2013-11-14 16:50:31,489 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 2
>>> 2013-11-14 16:50:31,489 DEBUG org.apache.hadoop.mapred.TaskTracker: localizingJobTokenFile from sd=/webroot/search/hadoop/mapred/system/job_201311141534_0005/jobToken to /tmp/hadoop-search/mapred/local/ttprivate/taskTracker/search/jobcache/job_201311141534_0005/jobToken
>>> 2013-11-14 16:50:31,489 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15081
>>> 2013-11-14 16:50:31,491 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15081
>>> 2013-11-14 16:50:31,491 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 2
>>> 2013-11-14 16:50:31,491 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15082
>>> 2013-11-14 16:50:31,492 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15082
>>> 2013-11-14 16:50:31,492 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 1
>>> 2013-11-14 16:50:31,492 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15083
>>> 2013-11-14 16:50:31,493 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15083
>>> 2013-11-14 16:50:31,494 DEBUG org.apache.hadoop.ipc.RPC: Call: getBlockLocations 2
>>> 2013-11-14 16:50:31,497 DEBUG org.apache.hadoop.hdfs.DFSClient: Connecting to /10.2.156.207:50010
>>> 2013-11-14 16:50:31,502 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock true packetLen 114
>>> 2013-11-14 16:50:31,503 DEBUG org.apache.hadoop.mapreduce.security.TokenCache: Task: Loaded jobTokenFile from: /tmp/hadoop-search/mapred/local/ttprivate/taskTracker/search/jobcache/job_201311141534_0005/jobToken; num of sec keys = 0 Number of tokens 1
>>> 2013-11-14 16:50:31,503 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:search from:org.apache.hadoop.mapred.TaskTracker.getFS(TaskTracker.java:742)
>>> 2013-11-14 16:50:31,503 DEBUG org.apache.hadoop.fs.FileSystem: Creating filesystem for hdfs://dev1:9000/tmp/hadoop-search/mapred/staging/search/.staging/job_201311141534_0005/job.xml
>>> 2013-11-14 16:50:31,503 DEBUG org.apache.hadoop.io.retry.RetryUtils: multipleLinearRandomRetry = null
>>> 2013-11-14 16:50:31,504 DEBUG org.apache.hadoop.ipc.Client: The ping interval is60000ms.
>>> 2013-11-14 16:50:31,504 DEBUG org.apache.hadoop.ipc.Client: Use SIMPLE authentication for protocol ClientProtocol
>>> 2013-11-14 16:50:31,504 DEBUG org.apache.hadoop.ipc.Client: Connecting to dev1/10.2.156.115:9000
>>> 2013-11-14 16:50:31,504 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15084
>>> 2013-11-14 16:50:31,504 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search: starting, having connections 3
>>> 2013-11-14 16:50:31,506 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15084
>>> 2013-11-14 16:50:31,506 DEBUG org.apache.hadoop.ipc.RPC: Call: getProtocolVersion 2
>>> 2013-11-14 16:50:31,506 DEBUG org.apache.hadoop.hdfs.DFSClient: Short circuit read is false
>>> 2013-11-14 16:50:31,506 DEBUG org.apache.hadoop.hdfs.DFSClient: Connect to datanode via hostname is false
>>> 2013-11-14 16:50:31,506 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15085
>>> 2013-11-14 16:50:31,507 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15085
>>> 2013-11-14 16:50:31,507 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 1
>>> 2013-11-14 16:50:31,508 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15086
>>> 2013-11-14 16:50:31,509 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15086
>>> 2013-11-14 16:50:31,509 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 1
>>> 2013-11-14 16:50:31,509 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15087
>>> 2013-11-14 16:50:31,510 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15087
>>> 2013-11-14 16:50:31,510 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 1
>>> 2013-11-14 16:50:31,510 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15088
>>> 2013-11-14 16:50:31,512 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15088
>>> 2013-11-14 16:50:31,512 DEBUG org.apache.hadoop.ipc.RPC: Call: getBlockLocations 2
>>> 2013-11-14 16:50:31,513 DEBUG org.apache.hadoop.hdfs.DFSClient: Connecting to /10.2.156.207:50010
>>> 2013-11-14 16:50:31,514 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock false packetLen 49024
>>> 2013-11-14 16:50:31,514 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1 offsetInBlock 48640 lastPacketInBlock true packetLen 271
>>> 2013-11-14 16:50:31,515 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:search from:org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1311)
>>> 2013-11-14 16:50:31,531 DEBUG org.apache.hadoop.fs.FileSystem: Creating filesystem for file:///
>>> 2013-11-14 16:50:31,539 INFO org.apache.hadoop.mapred.JobLocalizer: Initializing user search on this TT.
>>> 2013-11-14 16:50:31,558 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15089
>>> 2013-11-14 16:50:31,560 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15089
>>> 2013-11-14 16:50:31,560 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 2
>>> 2013-11-14 16:50:31,560 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15090
>>> 2013-11-14 16:50:31,561 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15090
>>> 2013-11-14 16:50:31,561 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 1
>>> 2013-11-14 16:50:31,561 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15091
>>> 2013-11-14 16:50:31,563 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15091
>>> 2013-11-14 16:50:31,563 DEBUG org.apache.hadoop.ipc.RPC: Call: getFileInfo 2
>>> 2013-11-14 16:50:31,563 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search sending #15092
>>> 2013-11-14 16:50:31,564 DEBUG org.apache.hadoop.ipc.Client: IPC Client (47) connection to dev1/10.2.156.115:9000 from search got value #15092
>>> 2013-11-14 16:50:31,565 DEBUG org.apache.hadoop.ipc.RPC: Call: getBlockLocations 2
>>> 2013-11-14 16:50:31,565 DEBUG org.apache.hadoop.hdfs.DFSClient: Connecting to /10.2.156.207:50010
>>> 2013-11-14 16:50:31,566 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,567 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1 offsetInBlock 65536 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,567 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 2 offsetInBlock 131072 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,567 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 3 offsetInBlock 196608 lastPacketInBlock false packetLen 66052
>>> (REMOVED FOR BREVITY)
>>> 2013-11-14 16:50:31,912 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1021 offsetInBlock 66912256 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,913 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1022 offsetInBlock 66977792 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,913 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1023 offsetInBlock 67043328 lastPacketInBlock true packetLen 66052
>>> 2013-11-14 16:50:31,913 DEBUG org.apache.hadoop.hdfs.DFSClient: Connecting to /10.2.156.207:50010
>>> 2013-11-14 16:50:31,915 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 0 offsetInBlock 0 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,915 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 1 offsetInBlock 65536 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,916 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 2 offsetInBlock 131072 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,917 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 3 offsetInBlock 196608 lastPacketInBlock false packetLen 66052
>>> (REMOVED FOR BREVITY)
>>> 2013-11-14 16:50:31,942 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 64 offsetInBlock 4194304 lastPacketInBlock false packetLen 66052
>>> 2013-11-14 16:50:31,942 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 65 offsetInBlock 4259840 lastPacketInBlock false packetLen 4132
>>> 2013-11-14 16:50:31,942 DEBUG org.apache.hadoop.fs.FSInputChecker: DFSClient readChunk got seqno 66 offsetInBlock 4263936 lastPacketInBlock true packetLen 410
>>> 2013-11-14 16:50:32,684 DEBUG org.apache.hadoop.mapred.TaskRunner: putting jobToken file name into environment /tmp/hadoop-search/mapred/local/taskTracker/search/jobcache/job_201311141534_0005/jobToken
>>> 2013-11-14 16:50:32,684 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201311141534_0005_m_-1017739470
>>> 2013-11-14 16:50:32,684 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201311141534_0005_m_-1017739470 spawned.
>>> 2013-11-14 16:50:32,686 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /tmp/hadoop-search/mapred/local/ttprivate/taskTracker/search/jobcache/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/taskjvm.sh
>>> 2013-11-14 16:50:33,290 DEBUG org.apache.hadoop.ipc.Server: Server connection from 127.0.0.1:46720; # active connections: 1; # queued calls: 0
>>> 2013-11-14 16:50:33,299 DEBUG org.apache.hadoop.ipc.Server: Successfully authorized org.apache.hadoop.mapred.TaskUmbilicalProtocol-job_201311141534_0005
>>> 2013-11-14 16:50:33,299 DEBUG org.apache.hadoop.ipc.Server: got #0
>>> 2013-11-14 16:50:33,299 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50882: has #0 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,300 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:33,300 DEBUG org.apache.hadoop.ipc.Server: Served: getProtocolVersion queueTime= 1 procesingTime= 0
>>> 2013-11-14 16:50:33,300 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #0 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,301 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #0 from 127.0.0.1:46720 Wrote 22 bytes.
>>> 2013-11-14 16:50:33,303 DEBUG org.apache.hadoop.ipc.Server: got #1
>>> 2013-11-14 16:50:33,303 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50882: has #1 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,303 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:33,303 DEBUG org.apache.hadoop.mapred.TaskTracker: JVM with ID : jvm_201311141534_0005_m_-1017739470 asked for a task
>>> 2013-11-14 16:50:33,303 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201311141534_0005_m_-1017739470 given task: attempt_201311141534_0005_m_000002_0
>>> 2013-11-14 16:50:33,303 DEBUG org.apache.hadoop.ipc.Server: Served: getTask queueTime= 0 procesingTime= 0
>>> 2013-11-14 16:50:33,304 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #1 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,304 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #1 from 127.0.0.1:46720 Wrote 417 bytes.
>>> 2013-11-14 16:50:33,862 DEBUG org.apache.hadoop.ipc.Server: got #3
>>> 2013-11-14 16:50:33,863 DEBUG org.apache.hadoop.mapred.Counters: Creating group FileSystemCounters with nothing
>>> 2013-11-14 16:50:33,863 DEBUG org.apache.hadoop.mapred.Counters: Creating group org.apache.hadoop.mapred.Task$Counter with bundle
>>> 2013-11-14 16:50:33,863 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50882: has #3 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,863 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:33,863 INFO org.apache.hadoop.mapred.TaskTracker: attempt_201311141534_0005_m_000002_0 0.0% setup
>>> 2013-11-14 16:50:33,864 DEBUG org.apache.hadoop.ipc.Server: Served: statusUpdate queueTime= 0 procesingTime= 1
>>> 2013-11-14 16:50:33,864 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #3 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,864 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #3 from 127.0.0.1:46720 Wrote 18 bytes.
>>> 2013-11-14 16:50:33,864 DEBUG org.apache.hadoop.ipc.Server: got #4
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 3 on 50882: has #4 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:33,865 INFO org.apache.hadoop.mapred.TaskTracker: Task attempt_201311141534_0005_m_000002_0 is done.
>>> 2013-11-14 16:50:33,865 INFO org.apache.hadoop.mapred.TaskTracker: reported output size for attempt_201311141534_0005_m_000002_0 was -1
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.ipc.Server: Served: done queueTime= 0 procesingTime= 0
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.mapred.TaskTracker: Cleaning up attempt_201311141534_0005_m_000002_0
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #4 from 127.0.0.1:46720
>>> 2013-11-14 16:50:33,865 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #4 from 127.0.0.1:46720 Wrote 95 bytes.
>>> 2013-11-14 16:50:33,865 INFO org.apache.hadoop.mapred.TaskTracker: addFreeSlot : current free slots : 2
>>> 2013-11-14 16:50:33,866 DEBUG org.apache.hadoop.mapred.CleanupQueue: DELETED dir(search,/jobcache/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/work)
>>> 2013-11-14 16:50:33,919 DEBUG org.apache.hadoop.ipc.Server: IPC Server listener on 50882: disconnecting client 127.0.0.1:46720. Number of active connections: 1
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.Counters: Creating group org.apache.hadoop.mapred.Task$Counter with bundle
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.Counters: Adding SPILLED_RECORDS
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.Counters: Creating group org.apache.hadoop.mapred.Task$Counter with bundle
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 0 0:0
>>> 2013-11-14 16:50:33,933 DEBUG org.apache.hadoop.mapred.SortedRanges: currentIndex 1 0:0
>>> 2013-11-14 16:50:33,933 INFO org.apache.hadoop.mapred.TaskTracker: LaunchTaskAction (registerTask): attempt_201311141534_0005_m_000000_0 task's state:UNASSIGNED
>>> 2013-11-14 16:50:33,933 INFO org.apache.hadoop.mapred.TaskTracker: Trying to launch : attempt_201311141534_0005_m_000000_0 which needs 1 slots
>>> 2013-11-14 16:50:33,933 INFO org.apache.hadoop.mapred.TaskTracker: In TaskLauncher, current free slots : 2 and trying to launch attempt_201311141534_0005_m_000000_0 which needs 1 slots
>>> 2013-11-14 16:50:33,943 DEBUG org.apache.hadoop.mapred.TaskRunner: putting jobToken file name into environment /tmp/hadoop-search/mapred/local/taskTracker/search/jobcache/job_201311141534_0005/jobToken
>>> 2013-11-14 16:50:33,943 INFO org.apache.hadoop.mapred.JvmManager: In JvmRunner constructed JVM ID: jvm_201311141534_0005_m_-331531893
>>> 2013-11-14 16:50:33,943 INFO org.apache.hadoop.mapred.JvmManager: JVM Runner jvm_201311141534_0005_m_-331531893 spawned.
>>> 2013-11-14 16:50:33,947 INFO org.apache.hadoop.mapred.TaskController: Writing commands to /tmp/hadoop-search/mapred/local/ttprivate/taskTracker/search/jobcache/job_201311141534_0005/attempt_201311141534_0005_m_000000_0/taskjvm.sh
>>> 2013-11-14 16:50:34,035 INFO org.apache.hadoop.mapred.JvmManager: JVM : jvm_201311141534_0005_m_-1017739470 exited with exit code 0. Number of tasks it ran: 1
>>> 2013-11-14 16:50:34,036 DEBUG org.apache.hadoop.mapred.CleanupQueue: DELETED dir(search,/jobcache/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/work)
>>> 2013-11-14 16:50:34,040 DEBUG org.apache.hadoop.mapred.TaskLogsTruncater: Truncation is not needed for /data/search/hadoop/hadoop-1.2.1/libexec/../logs/userlogs/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/stdout
>>> 2013-11-14 16:50:34,040 DEBUG org.apache.hadoop.mapred.TaskLogsTruncater: Truncation is not needed for /data/search/hadoop/hadoop-1.2.1/libexec/../logs/userlogs/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/stderr
>>> 2013-11-14 16:50:34,040 DEBUG org.apache.hadoop.mapred.TaskLogsTruncater: Truncation is not needed for /data/search/hadoop/hadoop-1.2.1/libexec/../logs/userlogs/job_201311141534_0005/attempt_201311141534_0005_m_000002_0/syslog
>>> 2013-11-14 16:50:34,531 DEBUG org.apache.hadoop.ipc.Server: Server connection from 127.0.0.1:46724; # active connections: 1; # queued calls: 0
>>> 2013-11-14 16:50:34,540 DEBUG org.apache.hadoop.ipc.Server: Successfully authorized org.apache.hadoop.mapred.TaskUmbilicalProtocol-job_201311141534_0005
>>> 2013-11-14 16:50:34,540 DEBUG org.apache.hadoop.ipc.Server: got #0
>>> 2013-11-14 16:50:34,540 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50882: has #0 from 127.0.0.1:46724
>>> 2013-11-14 16:50:34,540 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:34,541 DEBUG org.apache.hadoop.ipc.Server: Served: getProtocolVersion queueTime= 1 procesingTime= 0
>>> 2013-11-14 16:50:34,541 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #0 from 127.0.0.1:46724
>>> 2013-11-14 16:50:34,541 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #0 from 127.0.0.1:46724 Wrote 22 bytes.
>>> 2013-11-14 16:50:34,543 DEBUG org.apache.hadoop.ipc.Server: got #1
>>> 2013-11-14 16:50:34,543 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50882: has #1 from 127.0.0.1:46724
>>> 2013-11-14 16:50:34,543 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:34,544 DEBUG org.apache.hadoop.mapred.TaskTracker: JVM with ID : jvm_201311141534_0005_m_-331531893 asked for a task
>>> 2013-11-14 16:50:34,544 INFO org.apache.hadoop.mapred.TaskTracker: JVM with ID: jvm_201311141534_0005_m_-331531893 given task: attempt_201311141534_0005_m_000000_0
>>> 2013-11-14 16:50:34,544 DEBUG org.apache.hadoop.ipc.Server: Served: getTask queueTime= 0 procesingTime= 1
>>> 2013-11-14 16:50:34,544 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #1 from 127.0.0.1:46724
>>> 2013-11-14 16:50:34,544 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #1 from 127.0.0.1:46724 Wrote 544 bytes.
>>> 2013-11-14 16:50:34,639 INFO org.apache.hadoop.mapred.TaskTracker: Received KillTaskAction for task: attempt_201311141534_0005_m_000002_0
>>> 2013-11-14 16:50:34,639 INFO org.apache.hadoop.mapred.TaskTracker: About to purge task: attempt_201311141534_0005_m_000002_0
>>> 2013-11-14 16:50:34,639 DEBUG org.apache.hadoop.mapred.TaskTracker: Cleaning up attempt_201311141534_0005_m_000002_0
>>> 2013-11-14 16:50:34,639 INFO org.apache.hadoop.mapred.IndexCache: Map ID attempt_201311141534_0005_m_000002_0 not found in cache
>>> 2013-11-14 16:50:34,639 DEBUG org.apache.hadoop.mapred.CleanupQueue: DELETED dir(search,/jobcache/job_201311141534_0005/attempt_201311141534_0005_m_000002_0)
>>> 2013-11-14 16:50:37,878 DEBUG org.apache.hadoop.ipc.Server: got #4
>>> 2013-11-14 16:50:37,879 DEBUG org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50882: has #4 from 127.0.0.1:46724
>>> 2013-11-14 16:50:37,879 DEBUG org.apache.hadoop.security.UserGroupInformation: PriviledgedAction as:job_201311141534_0005 from:org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)
>>> 2013-11-14 16:50:37,879 DEBUG org.apache.hadoop.ipc.Server: Served: ping queueTime= 0 procesingTime= 0
>>> 2013-11-14 16:50:37,879 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #4 from 127.0.0.1:46724
>>> 2013-11-14 16:50:37,879 DEBUG org.apache.hadoop.ipc.Server: IPC Server Responder: responding to #4 from 127.0.0.1:46724 Wrote 18 bytes.
>>> 2013-11-14 16:50:39,236 DEBUG org.apache.hadoop.mapred.JettyBugMonitor: Jetty CPU usage: 0.0%
>>>
>>>
>>> On Thu, Nov 14, 2013 at 11:29 AM, Lewis John Mcgibbney <[email protected]> wrote:
>>>
>>>> Hi Jon,
>>>>
>>>> On Thu, Nov 14, 2013 at 4:15 PM, <[email protected]> wrote:
>>>>
>>>> > Unable to inject seeds with
>>>> > 29017 by: Jon Uhal
>>>> >
>>>> > First, here is my environment:
>>>> >
>>>> > Hadoop 1.2.1
>>>> > Accumulo 1.4.4
>>>> > Zookeeper 3.4.5
>>>> > Gora 0.3
>>>> > Solr 4.5.1
>>>>
>>>> All software revisions look fine, so a good start :)
>>>>
>>>> > ...but when I switched over to try the runtime/deploy nutch job, the
>>>> > inject step would just time out.
>>>>
>>>> Can you please post some of your Hadoop logging from when the
>>>> InjectorJob is run on the cluster?
>>>>
>>>> > 2013-11-14 15:35:44,481 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=dev1o,dev5o,dev6o sessionTimeout=30000 watcher=org.apache.accumulo.core.zookeeper.ZooSession$AccumuloWatcher@df2d38
>>>> > 2013-11-14 15:35:44,483 DEBUG org.apache.zookeeper.ClientCnxn: zookeeper.disableAutoWatchReset is false
>>>> > 2013-11-14 15:35:44,493 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server dev1o/10.2.156.115:2181
>>>> > 2013-11-14 15:35:44,494 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to dev1o/10.2.156.115:2181, initiating session
>>>> > 2013-11-14 15:35:44,496 DEBUG org.apache.zookeeper.ClientCnxn: Session establishment request sent on dev1o/10.2.156.115:2181
>>>> > 2013-11-14 15:35:44,503 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server dev1o/10.2.156.115:2181, sessionid = 0x142573f8b440007, negotiated timeout = 30000
>>>> > 2013-11-14 15:35:44,601 DEBUG org.apache.zookeeper.ClientCnxn: Reading reply sessionid:0x142573f8b440007, packet:: clientPath:null serverPath:null finished:false header:: 1,3 replyHeader:: 1,81604378745,0 request:: '/accumulo/instances/dev1o,T response:: s{25769803868,25769803868,1384364228759,1384364228759,0,0,0,0,36,0,25769803868}
>>>>
>>>> Although the ZooKeeper INFO and DEBUG logging indicates that server and
>>>> client connections are opened and successfully managed, I am concerned
>>>> that the ZooKeeper clientPath and serverPath entries later display as
>>>> null! Can you confirm that ZooKeeper is correctly installed on all
>>>> nodes and that connections are being managed?
>>>
>>>
>>> --
>>> Jon Uhal
>>
>>
>> --
>> Jon Uhal
>
>
> --
> Jon Uhal

--
Jon Uhal
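For what it's worth, as far as I can tell the null clientPath/serverPath fields also show up in packet traces from healthy sessions (the successful s{...} Stat after "response::" looks like the stronger health signal here), so I'm not sure that is the problem; treat that as my assumption rather than established fact. If you want to scan a log for those fields anyway, a throwaway extraction like this works (the sample line is taken from the trace quoted above):

```shell
# Sketch: pull the clientPath field out of a ZooKeeper ClientCnxn
# "Reading reply" line so you can grep a whole log for non-null values.
line='Reading reply sessionid:0x142573f8b440007, packet:: clientPath:null serverPath:null finished:false'
client_path=$(printf '%s\n' "$line" | sed -n 's/.*clientPath:\([^ ]*\).*/\1/p')
echo "$client_path"
```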

