Hi all,

I’m trying to replicate the GDELT data ingest example from 
https://github.com/geomesa/geomesa-tutorials/tree/master/geomesa-examples-gdelt

Is there anything wrong with the run? In particular:

File Input Format Counters
        Bytes Read=744844275
File Output Format Counters
        Bytes Written=0
org.locationtech.geomesa.jobs.output
        failed=0
        written=2649862

Why “Bytes Written=0”?
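If I’m reading the tutorial code right, the mapper writes each feature straight to Accumulo through a GeoTools FeatureWriter instead of emitting map output, so the File Output Format counter never moves, while GeoMesa’s own `written` counter ends up at 2,649,862. A toy sketch of that pattern (plain Java, no Hadoop; all names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the ingest mapper: records go to an external sink (a stand-in
// for the Accumulo BatchWriter), not to the MapReduce OutputFormat, so the
// "Bytes Written" counter stays 0 while a custom counter tracks real writes.
public class CounterSketch {
    static long outputFormatBytesWritten = 0;     // what "Bytes Written" measures
    static long written = 0;                      // stand-in for the GeoMesa "written" counter
    static List<String> accumulo = new ArrayList<>(); // stand-in for the BatchWriter

    static void map(String record) {
        accumulo.add(record); // side-channel write, bypasses the OutputFormat
        written++;            // custom counter incremented per feature
        // context.write(...) is never called, so outputFormatBytesWritten never moves
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            map("event-" + i);
        }
        System.out.println("Bytes Written=" + outputFormatBytesWritten); // prints 0
        System.out.println("written=" + written);                        // prints 5
    }
}
```

If that’s what is happening, then “Bytes Written=0” would be expected rather than an error, but I’d appreciate confirmation.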

Can anyone take a look at the log below and check whether anything went wrong?

Thank you.

Best,

Max.

----

[gzhang200@ebdp-ch2-s032p geomesa-tutorials-master]$ hadoop jar 
geomesa-examples-gdelt/target/geomesa-examples-gdelt-1.3.0.0-m1-SNAPSHOT.jar \
> com.example.geomesa.gdelt.GDELTIngest \
> -instanceId hdp-accumulo-instance \
> -zookeepers 
> ebdp-ch2-s012p.sys.comcast.net,ebdp-ch2-s013p.sys.comcast.net,ebdp-ch2-s014p.sys.comcast.net
>  \
> -user gzhang200 \
> -password 123456 \
> -auths user,admin \
> -tableName gdelt -featureName event \
> -ingestFile /user/gzhang200/gdelt/gdelt.tsv
WARNING: Use "yarn jar" to launch YARN applications.
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:zookeeper.version=3.4.6-227--1, built on 09/09/2016 22:17 GMT
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:host.name=ebdp-ch2-s032p.sys.comcast.net
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.version=1.8.0_91
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.vendor=Oracle Corporation
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.home=/usr/java/jdk1.8.0_91/jre
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.library.path=:/usr/hdp/2.4.3.0-227/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.3.0-227/hadoop/lib/native
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.io.tmpdir=/tmp
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:java.compiler=<NA>
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:os.version=2.6.32-504.30.3.el6.x86_64
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:user.name=gzhang200
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:user.home=/home/gzhang200
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Client 
environment:user.dir=/home/gzhang200/geomesa-tutorials-master
16/12/22 16:07:35 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=ebdp-ch2-s012p.sys.comcast.net,ebdp-ch2-s013p.sys.comcast.net,ebdp-ch2-s014p.sys.comcast.net
 sessionTimeout=30000 
watcher=org.apache.accumulo.fate.zookeeper.ZooSession$ZooWatcher@113050bf
16/12/22 16:07:35 INFO zookeeper.ClientCnxn: Opening socket connection to 
server ebdp-ch2-s014p.sys.comcast.net/172.26.7.248:2181. Will not attempt to 
authenticate using SASL (unknown error)
16/12/22 16:07:35 INFO zookeeper.ClientCnxn: Socket connection established to 
ebdp-ch2-s014p.sys.comcast.net/172.26.7.248:2181, initiating session
16/12/22 16:07:35 INFO zookeeper.ClientCnxn: Session establishment complete on 
server ebdp-ch2-s014p.sys.comcast.net/172.26.7.248:2181, sessionid = 
0x358fe03469d7c84, negotiated timeout = 30000
16/12/22 16:07:36 INFO imps.CuratorFrameworkImpl: Starting
16/12/22 16:07:36 INFO zookeeper.ZooKeeper: Initiating client connection, 
connectString=ebdp-ch2-s012p.sys.comcast.net,ebdp-ch2-s013p.sys.comcast.net,ebdp-ch2-s014p.sys.comcast.net
 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@2b3d5c09
16/12/22 16:07:36 INFO zookeeper.ClientCnxn: Opening socket connection to 
server ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181. Will not attempt to 
authenticate using SASL (unknown error)
16/12/22 16:07:36 INFO zookeeper.ClientCnxn: Socket connection established to 
ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181, initiating session
16/12/22 16:07:36 INFO zookeeper.ClientCnxn: Session establishment complete on 
server ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:2181, sessionid = 
0x2591c1e76c820ec, negotiated timeout = 40000
16/12/22 16:07:36 INFO state.ConnectionStateManager: State change: CONNECTED
16/12/22 16:07:39 WARN data.AccumuloDataStore: Configured server-side iterators 
do not match client version - client version: 1.3.0-m1-SNAPSHOT, server 
version: 1.2.6
16/12/22 16:07:43 INFO zookeeper.ZooKeeper: Session: 0x2591c1e76c820ec closed
16/12/22 16:07:43 INFO zookeeper.ClientCnxn: EventThread shut down
16/12/22 16:07:45 INFO impl.TimelineClientImpl: Timeline service address: 
http://ebdp-ch2-s013p.sys.comcast.net:8188/ws/v1/timeline/
16/12/22 16:07:46 WARN ipc.Client: Failed to connect to server: 
ebdp-ch2-s013p.sys.comcast.net/172.26.7.247:8032: retries get failed due to 
exceeded maximum allowed retries number: 0
java.net.ConnectException: Connection refused
                at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
                at 
sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
                at 
org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
                at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
                at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
                at 
org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:649)
                at 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:744)
                at 
org.apache.hadoop.ipc.Client$Connection.access$3000(Client.java:397)
                at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
                at org.apache.hadoop.ipc.Client.call(Client.java:1431)
                at org.apache.hadoop.ipc.Client.call(Client.java:1392)
                at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
                at com.sun.proxy.$Proxy27.getNewApplication(Unknown Source)
                at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getNewApplication(ApplicationClientProtocolPBClientImpl.java:221)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:258)
                at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
                at com.sun.proxy.$Proxy28.getNewApplication(Unknown Source)
                at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getNewApplication(YarnClientImpl.java:220)
                at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.createApplication(YarnClientImpl.java:228)
                at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getNewJobID(ResourceMgrDelegate.java:188)
                at 
org.apache.hadoop.mapred.YARNRunner.getNewJobID(YARNRunner.java:231)
                at 
org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:153)
                at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
                at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
                at java.security.AccessController.doPrivileged(Native Method)
                at javax.security.auth.Subject.doAs(Subject.java:422)
                at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
                at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
                at 
com.example.geomesa.gdelt.GDELTIngest.runMapReduceJob(GDELTIngest.java:147)
                at 
com.example.geomesa.gdelt.GDELTIngest.main(GDELTIngest.java:124)
                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
                at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                at java.lang.reflect.Method.invoke(Method.java:498)
                at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
                at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
16/12/22 16:07:46 INFO client.ConfiguredRMFailoverProxyProvider: Failing over 
to rm2
16/12/22 16:07:46 WARN mapreduce.JobResourceUploader: Hadoop command-line 
option parsing not performed. Implement the Tool interface and execute your 
application with ToolRunner to remedy this.
16/12/22 16:07:47 INFO input.FileInputFormat: Total input paths to process : 1
16/12/22 16:07:47 INFO mapreduce.JobSubmitter: number of splits:6
16/12/22 16:07:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
job_1481638664938_0123
16/12/22 16:07:48 INFO impl.YarnClientImpl: Submitted application 
application_1481638664938_0123
16/12/22 16:07:48 INFO mapreduce.Job: The url to track the job: 
http://ebdp-ch2-s036p.sys.comcast.net:8088/proxy/application_1481638664938_0123/
16/12/22 16:07:48 INFO mapreduce.Job: Running job: job_1481638664938_0123
16/12/22 16:07:58 INFO mapreduce.Job: Job job_1481638664938_0123 running in 
uber mode : false
16/12/22 16:07:58 INFO mapreduce.Job:  map 0% reduce 0%
16/12/22 16:08:15 INFO mapreduce.Job:  map 1% reduce 0%
16/12/22 16:08:18 INFO mapreduce.Job:  map 2% reduce 0%
16/12/22 16:08:20 INFO mapreduce.Job:  map 3% reduce 0%
16/12/22 16:08:21 INFO mapreduce.Job:  map 4% reduce 0%
16/12/22 16:08:23 INFO mapreduce.Job:  map 5% reduce 0%
16/12/22 16:08:24 INFO mapreduce.Job:  map 7% reduce 0%
16/12/22 16:08:26 INFO mapreduce.Job:  map 8% reduce 0%
16/12/22 16:08:27 INFO mapreduce.Job:  map 9% reduce 0%
16/12/22 16:08:29 INFO mapreduce.Job:  map 10% reduce 0%
16/12/22 16:08:30 INFO mapreduce.Job:  map 12% reduce 0%
16/12/22 16:08:32 INFO mapreduce.Job:  map 13% reduce 0%
16/12/22 16:08:33 INFO mapreduce.Job:  map 15% reduce 0%
16/12/22 16:08:35 INFO mapreduce.Job:  map 16% reduce 0%
16/12/22 16:08:36 INFO mapreduce.Job:  map 18% reduce 0%
16/12/22 16:08:38 INFO mapreduce.Job:  map 19% reduce 0%
16/12/22 16:08:39 INFO mapreduce.Job:  map 22% reduce 0%
16/12/22 16:08:41 INFO mapreduce.Job:  map 23% reduce 0%
16/12/22 16:08:42 INFO mapreduce.Job:  map 25% reduce 0%
16/12/22 16:08:44 INFO mapreduce.Job:  map 26% reduce 0%
16/12/22 16:08:45 INFO mapreduce.Job:  map 28% reduce 0%
16/12/22 16:08:47 INFO mapreduce.Job:  map 30% reduce 0%
16/12/22 16:08:48 INFO mapreduce.Job:  map 32% reduce 0%
16/12/22 16:08:50 INFO mapreduce.Job:  map 34% reduce 0%
16/12/22 16:08:51 INFO mapreduce.Job:  map 36% reduce 0%
16/12/22 16:08:53 INFO mapreduce.Job:  map 38% reduce 0%
16/12/22 16:08:54 INFO mapreduce.Job:  map 40% reduce 0%
16/12/22 16:08:56 INFO mapreduce.Job:  map 43% reduce 0%
16/12/22 16:08:57 INFO mapreduce.Job:  map 44% reduce 0%
16/12/22 16:08:58 INFO mapreduce.Job:  map 45% reduce 0%
16/12/22 16:08:59 INFO mapreduce.Job:  map 47% reduce 0%
16/12/22 16:09:00 INFO mapreduce.Job:  map 49% reduce 0%
16/12/22 16:09:02 INFO mapreduce.Job:  map 51% reduce 0%
16/12/22 16:09:03 INFO mapreduce.Job:  map 53% reduce 0%
16/12/22 16:09:04 INFO mapreduce.Job:  map 54% reduce 0%
16/12/22 16:09:05 INFO mapreduce.Job:  map 56% reduce 0%
16/12/22 16:09:06 INFO mapreduce.Job:  map 58% reduce 0%
16/12/22 16:09:07 INFO mapreduce.Job:  map 59% reduce 0%
16/12/22 16:09:08 INFO mapreduce.Job:  map 60% reduce 0%
16/12/22 16:09:09 INFO mapreduce.Job:  map 62% reduce 0%
16/12/22 16:09:11 INFO mapreduce.Job:  map 64% reduce 0%
16/12/22 16:09:12 INFO mapreduce.Job:  map 66% reduce 0%
16/12/22 16:09:13 INFO mapreduce.Job:  map 67% reduce 0%
16/12/22 16:09:14 INFO mapreduce.Job:  map 68% reduce 0%
16/12/22 16:09:15 INFO mapreduce.Job:  map 70% reduce 0%
16/12/22 16:09:16 INFO mapreduce.Job:  map 71% reduce 0%
16/12/22 16:09:17 INFO mapreduce.Job:  map 72% reduce 0%
16/12/22 16:09:18 INFO mapreduce.Job:  map 74% reduce 0%
16/12/22 16:09:19 INFO mapreduce.Job:  map 75% reduce 0%
16/12/22 16:09:20 INFO mapreduce.Job:  map 76% reduce 0%
16/12/22 16:09:21 INFO mapreduce.Job:  map 77% reduce 0%
16/12/22 16:09:23 INFO mapreduce.Job:  map 78% reduce 0%
16/12/22 16:09:26 INFO mapreduce.Job:  map 79% reduce 0%
16/12/22 16:09:32 INFO mapreduce.Job:  map 81% reduce 0%
16/12/22 16:09:33 INFO mapreduce.Job:  map 82% reduce 0%
16/12/22 16:09:34 INFO mapreduce.Job:  map 83% reduce 0%
16/12/22 16:09:35 INFO mapreduce.Job:  map 85% reduce 0%
16/12/22 16:09:36 INFO mapreduce.Job:  map 87% reduce 0%
16/12/22 16:09:38 INFO mapreduce.Job:  map 89% reduce 0%
16/12/22 16:09:39 INFO mapreduce.Job:  map 90% reduce 0%
16/12/22 16:09:40 INFO mapreduce.Job:  map 91% reduce 0%
16/12/22 16:09:42 INFO mapreduce.Job:  map 93% reduce 0%
16/12/22 16:09:44 INFO mapreduce.Job:  map 94% reduce 0%
16/12/22 16:09:46 INFO mapreduce.Job:  map 95% reduce 0%
16/12/22 16:09:47 INFO mapreduce.Job:  map 96% reduce 0%
16/12/22 16:09:48 INFO mapreduce.Job:  map 97% reduce 0%
16/12/22 16:09:51 INFO mapreduce.Job:  map 98% reduce 0%
16/12/22 16:09:53 INFO mapreduce.Job:  map 99% reduce 0%
16/12/22 16:09:57 INFO mapreduce.Job:  map 100% reduce 0%
16/12/22 16:09:58 INFO mapreduce.Job: Job job_1481638664938_0123 completed 
successfully
16/12/22 16:09:58 INFO mapreduce.Job: Counters: 33
                File System Counters
                                FILE: Number of bytes read=0
                                FILE: Number of bytes written=873414
                                FILE: Number of read operations=0
                                FILE: Number of large read operations=0
                                FILE: Number of write operations=0
                                HDFS: Number of bytes read=744844983
                                HDFS: Number of bytes written=0
                                HDFS: Number of read operations=12
                                HDFS: Number of large read operations=0
                                HDFS: Number of write operations=0
                Job Counters
                                Launched map tasks=6
                                Data-local map tasks=3
                                Rack-local map tasks=3
                                Total time spent by all maps in occupied slots 
(ms)=627466
                                Total time spent by all reduces in occupied 
slots (ms)=0
                                Total time spent by all map tasks (ms)=627466
                                Total vcore-seconds taken by all map 
tasks=627466
                                Total megabyte-seconds taken by all map 
tasks=2570100736
                Map-Reduce Framework
                                Map input records=2649862
                                Map output records=2649862
                                Input split bytes=708
                                Spilled Records=0
                                Failed Shuffles=0
                                Merged Map outputs=0
                                GC time elapsed (ms)=7458
                                CPU time spent (ms)=714560
                                Physical memory (bytes) snapshot=3888177152
                                Virtual memory (bytes) snapshot=33075073024
                                Total committed heap usage (bytes)=2762997760
                File Input Format Counters
                                Bytes Read=744844275
                File Output Format Counters
                                Bytes Written=0
                org.locationtech.geomesa.jobs.output
                                failed=0
                                written=2649862
[gzhang200@ebdp-ch2-s032p geomesa-tutorials-master]$
_______________________________________________
Geoserver-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/geoserver-users