Cool. Glad to hear that. Thank you.

Fang, Yan
[email protected]
+1 (206) 849-4108


On Tue, Aug 12, 2014 at 9:49 AM, Telles Nobrega <[email protected]> wrote:

> That was the problem. Thanks for the help, I was able to run it.
>
> I really appreciate all the time you guys took to help me out.
>
>
>
> On Tue, Aug 12, 2014 at 1:43 PM, Yan Fang <[email protected]> wrote:
>
> > Yes, the tar.gz should have all the necessary libs. If this error does
> > not pop up when you run "run-job", my guess is that you may have
> > forgotten to re-upload the tar.gz package after recompiling.
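> >
> > For example, a minimal rebuild-and-reupload sequence (just a sketch; the
> > package name and HDFS path are examples, adjust them to your layout):
> >
> >   # rebuild the job package with the new classes
> >   mvn clean package
> >   # replace the old package in HDFS so YARN localizes the fresh one
> >   hdfs dfs -rm /samza/samza-job-package-0.7.0-dist.tar.gz
> >   hdfs dfs -put target/samza-job-package-0.7.0-dist.tar.gz /samza/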
> >
> > Fang, Yan
> > [email protected]
> > +1 (206) 849-4108
> >
> >
> > On Tue, Aug 12, 2014 at 6:34 AM, Telles Nobrega <[email protected]> wrote:
> >
> > > What is the expected behavior here? The tar.gz file is in HDFS, so it
> > > should find all the necessary libs in the tar.gz, right?
> > >
> > >
> > > On Tue, Aug 12, 2014 at 10:19 AM, Telles Nobrega <[email protected]> wrote:
> > >
> > > > Chris and Yan,
> > > >
> > > > I was able to run the job, but I got this error:
> > > >
> > > > Exception in thread "main" java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.DistributedFileSystem could not be instantiated
> > > >         at java.util.ServiceLoader.fail(ServiceLoader.java:224)
> > > >         at java.util.ServiceLoader.access$100(ServiceLoader.java:181)
> > > >         at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:377)
> > > >         at java.util.ServiceLoader$1.next(ServiceLoader.java:445)
> > > >         at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2400)
> > > >         at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)
> > > >         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
> > > >         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
> > > >         at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
> > > >         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
> > > >         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
> > > >         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)
> > > >         at org.apache.samza.job.yarn.SamzaAppMasterTaskManager.startContainer(SamzaAppMasterTaskManager.scala:278)
> > > >         at org.apache.samza.job.yarn.SamzaAppMasterTaskManager.onContainerAllocated(SamzaAppMasterTaskManager.scala:126)
> > > >         at org.apache.samza.job.yarn.YarnAppMaster$$anonfun$run$8$$anonfun$apply$2.apply(YarnAppMaster.scala:66)
> > > >         at org.apache.samza.job.yarn.YarnAppMaster$$anonfun$run$8$$anonfun$apply$2.apply(YarnAppMaster.scala:66)
> > > >         at scala.collection.immutable.List.foreach(List.scala:318)
> > > >         at org.apache.samza.job.yarn.YarnAppMaster$$anonfun$run$8.apply(YarnAppMaster.scala:66)
> > > >         at org.apache.samza.job.yarn.YarnAppMaster$$anonfun$run$8.apply(YarnAppMaster.scala:66)
> > > >         at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> > > >         at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> > > >         at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> > > >         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> > > >         at org.apache.samza.job.yarn.YarnAppMaster.run(YarnAppMaster.scala:66)
> > > >         at org.apache.samza.job.yarn.SamzaAppMaster$.main(SamzaAppMaster.scala:81)
> > > >         at org.apache.samza.job.yarn.SamzaAppMaster.main(SamzaAppMaster.scala)
> > > > Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration$DeprecationDelta
> > > >         at org.apache.hadoop.hdfs.HdfsConfiguration.addDeprecatedKeys(HdfsConfiguration.java:66)
> > > >         at org.apache.hadoop.hdfs.HdfsConfiguration.<clinit>(HdfsConfiguration.java:31)
> > > >         at org.apache.hadoop.hdfs.DistributedFileSystem.<clinit>(DistributedFileSystem.java:106)
> > > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > > >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > > >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > > >         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> > > >         at java.lang.Class.newInstance(Class.java:374)
> > > >         at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:373)
> > > >         ... 23 more
> > > > Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.conf.Configuration$DeprecationDelta
> > > >         at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
> > > >         at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
> > > >         at java.security.AccessController.doPrivileged(Native Method)
> > > >         at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
> > > >         at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
> > > >         at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
> > > >         at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> > > >         ... 32 more
> > > >
> > > > This is on the machine that is running the job. Do I need to put the
> > > > jar files there too? And where?
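> > > >
> > > > (For reference, one way to see which Hadoop jars actually shipped in
> > > > the uploaded package; the HDFS path here is only an example:
> > > >
> > > >   # list the hadoop-* jars inside the tarball without downloading it
> > > >   hdfs dfs -cat /samza/samza-job-package-0.7.0-dist.tar.gz | tar -tzf - | grep hadoop
> > > >
> > > > A mix of 2.2.0 and 2.3.0 hadoop-* jars there would explain the missing
> > > > Configuration$DeprecationDelta, which, as far as I can tell, only
> > > > exists from the 2.3 line on.)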
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Tue, Aug 12, 2014 at 9:17 AM, Telles Nobrega <[email protected]> wrote:
> > > >
> > > >> Sorry for bothering you so much.
> > > >>
> > > >>
> > > >> On Tue, Aug 12, 2014 at 9:17 AM, Telles Nobrega <[email protected]> wrote:
> > > >>
> > > >>> Now I have this error:
> > > >>>
> > > >>> Exception in thread "main" java.net.ConnectException: Call From telles-samza-master/10.1.0.79 to telles-samza-master:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
> > > >>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> > > >>>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> > > >>>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> > > >>>         at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> > > >>>         at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
> > > >>>         at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
> > > >>>         at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> > > >>>         at org.apache.hadoop.ipc.Client.call(Client.java:1359)
> > > >>>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > >>>         at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > > >>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > >>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>>         at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > >>>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > >>>         at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > > >>>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671)
> > > >>>         at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1746)
> > > >>>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
> > > >>>         at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
> > > >>>         at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > > >>>         at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
> > > >>>         at org.apache.samza.job.yarn.ClientHelper.submitApplication(ClientHelper.scala:111)
> > > >>>         at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:55)
> > > >>>         at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:48)
> > > >>>         at org.apache.samza.job.JobRunner.run(JobRunner.scala:62)
> > > >>>         at org.apache.samza.job.JobRunner$.main(JobRunner.scala:37)
> > > >>>         at org.apache.samza.job.JobRunner.main(JobRunner.scala)
> > > >>> Caused by: java.net.ConnectException: Connection refused
> > > >>>         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> > > >>>         at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
> > > >>>         at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> > > >>>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
> > > >>>         at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
> > > >>>         at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:601)
> > > >>>         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:696)
> > > >>>         at org.apache.hadoop.ipc.Client$Connection.access$2700(Client.java:367)
> > > >>>         at org.apache.hadoop.ipc.Client.getConnection(Client.java:1458)
> > > >>>         at org.apache.hadoop.ipc.Client.call(Client.java:1377)
> > > >>>         ... 22 more
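> > > >>>
> > > >>> (A "Connection refused" on telles-samza-master:8020 usually means
> > > >>> nothing is listening on the NameNode RPC port there. Two quick checks
> > > >>> on the master, as a sketch:
> > > >>>
> > > >>>   jps                        # is a NameNode process running at all?
> > > >>>   netstat -ltn | grep 8020   # is anything listening on the RPC port?
> > > >>>
> > > >>> If the NameNode is up but on a different port, fs.defaultFS in
> > > >>> core-site.xml shows the address it actually uses.)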
> > > >>>
> > > >>>
> > > >>>
> > > >>> On Tue, Aug 12, 2014 at 3:39 AM, Yan Fang <[email protected]> wrote:
> > > >>>
> > > >>>> Hi Telles,
> > > >>>>
> > > >>>> I think you put the wrong port. Usually, the HDFS RPC port is
> > > >>>> 8020, not 50070 (50070 is the NameNode web UI port). You should put
> > > >>>> something like:
> > > >>>> hdfs://telles-samza-master:8020/path/to/samza-job-package.tar.gz
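> > > >>>>
> > > >>>> You can confirm the address the NameNode actually uses with, for
> > > >>>> example:
> > > >>>>
> > > >>>>   # prints the configured fs.defaultFS, e.g. hdfs://host:8020
> > > >>>>   hdfs getconf -confKey fs.defaultFS
> > > >>>>
> > > >>>> and then use the same scheme, host and port in yarn.package.path.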
> > > >>>> Thanks.
> > > >>>>
> > > >>>> Fang, Yan
> > > >>>> [email protected]
> > > >>>> +1 (206) 849-4108
> > > >>>>
> > > >>>>
> > > >>>> On Mon, Aug 11, 2014 at 8:31 PM, Telles Nobrega <[email protected]> wrote:
> > > >>>>
> > > >>>> > I tried moving from HDFS to HttpFileSystem, and I'm getting the
> > > >>>> > HttpFileSystem-not-found exception. I have done the steps in the
> > > >>>> > tutorial that Chris pasted below (I had done that before, but I'm
> > > >>>> > not sure what the problem is). It seems that since I have the
> > > >>>> > compiled file on one machine (the resource manager), and I submit
> > > >>>> > the job and the node managers try to download it, they don't have
> > > >>>> > samza-yarn.jar (I don't know how to include it, since the run will
> > > >>>> > be done on the resource manager).
> > > >>>> >
> > > >>>> > Can you give me a tip on how to solve this?
> > > >>>> >
> > > >>>> > Thanks in advance.
> > > >>>> >
> > > >>>> > P.S. The folder and the tar.gz of the job are located on one
> > > >>>> > machine only. Is that the right way to do it, or do I need to
> > > >>>> > replicate hello-samza on all machines to run it?
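> > > >>>> >
> > > >>>> > (If it helps: with HttpFileSystem the node managers fetch the
> > > >>>> > package over HTTP, so serving one copy from the resource manager
> > > >>>> > should be enough. A sketch, with a hypothetical port and package
> > > >>>> > name:
> > > >>>> >
> > > >>>> >   # serve the built package from the resource manager
> > > >>>> >   cd target && python -m SimpleHTTPServer 8000
> > > >>>> >
> > > >>>> > and then point yarn.package.path at
> > > >>>> > http://<resource-manager-host>:8000/samza-job-package-0.7.0-dist.tar.gz)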
> > > >>>> > On 11 Aug 2014, at 23:12, Telles Nobrega <[email protected]> wrote:
> > > >>>> >
> > > >>>> > > What is your suggestion here: should I keep going on this quest
> > > >>>> > > to fix HDFS, or should I try to run using HttpFileSystem?
> > > >>>> > > On 11 Aug 2014, at 23:01, Telles Nobrega <[email protected]> wrote:
> > > >>>> > >
> > > >>>> > >> The port is right, isn't it? 50070. I have no idea what is
> > > >>>> > >> happening now.
> > > >>>> > >>
> > > >>>> > >> On 11 Aug 2014, at 22:33, Telles Nobrega <[email protected]> wrote:
> > > >>>> > >>
> > > >>>> > >>> Right now the error is the following:
> > > >>>> > >>> Exception in thread "main" java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "telles-samza-master/10.1.0.79"; destination host is: "telles-samza-master":50070;
> > > >>>> > >>>     at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
> > > >>>> > >>>     at org.apache.hadoop.ipc.Client.call(Client.java:1410)
> > > >>>> > >>>     at org.apache.hadoop.ipc.Client.call(Client.java:1359)
> > > >>>> > >>>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
> > > >>>> > >>>     at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > > >>>> > >>>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > > >>>> > >>>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> > > >>>> > >>>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > > >>>> > >>>     at java.lang.reflect.Method.invoke(Method.java:606)
> > > >>>> > >>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
> > > >>>> > >>>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> > > >>>> > >>>     at com.sun.proxy.$Proxy14.getFileInfo(Unknown Source)
> > > >>>> > >>>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:671)
> > > >>>> > >>>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1746)
> > > >>>> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1112)
> > > >>>> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1108)
> > > >>>> > >>>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> > > >>>> > >>>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1108)
> > > >>>> > >>>     at org.apache.samza.job.yarn.ClientHelper.submitApplication(ClientHelper.scala:111)
> > > >>>> > >>>     at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:55)
> > > >>>> > >>>     at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:48)
> > > >>>> > >>>     at org.apache.samza.job.JobRunner.run(JobRunner.scala:62)
> > > >>>> > >>>     at org.apache.samza.job.JobRunner$.main(JobRunner.scala:37)
> > > >>>> > >>>     at org.apache.samza.job.JobRunner.main(JobRunner.scala)
> > > >>>> > >>> Caused by: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.
> > > >>>> > >>>     at com.google.protobuf.InvalidProtocolBufferException.invalidEndTag(InvalidProtocolBufferException.java:94)
> > > >>>> > >>>     at com.google.protobuf.CodedInputStream.checkLastTagWas(CodedInputStream.java:124)
> > > >>>> > >>>     at com.google.protobuf.AbstractParser.parsePartialFrom(AbstractParser.java:202)
> > > >>>> > >>>     at com.google.protobuf.AbstractParser.parsePartialDelimitedFrom(AbstractParser.java:241)
> > > >>>> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:253)
> > > >>>> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:259)
> > > >>>> > >>>     at com.google.protobuf.AbstractParser.parseDelimitedFrom(AbstractParser.java:49)
> > > >>>> > >>>     at org.apache.hadoop.ipc.protobuf.RpcHeaderProtos$RpcResponseHeaderProto.parseDelimitedFrom(RpcHeaderProtos.java:2364)
> > > >>>> > >>>     at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1051)
> > > >>>> > >>>     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:945)
> > > >>>> > >>>
> > > >>>> > >>> I feel that I'm close to making it run. Thanks in advance for
> > > >>>> > >>> the help.
> > > >>>> > >>> On 11 Aug 2014, at 22:06, Telles Nobrega <[email protected]> wrote:
> > > >>>> > >>>
> > > >>>> > >>>> Hi, I downloaded hadoop-common-2.3.0.jar and it worked
> > > >>>> > >>>> better. Now I'm having a configuration problem with my host,
> > > >>>> > >>>> but it looks like HDFS is not a problem anymore.
> > > >>>> > >>>>
> > > >>>> > >>>>
> > > >>>> > >>>>
> > > >>>> > >>>>
> > > >>>> > >>>> On 11 Aug 2014, at 22:04, Telles Nobrega <[email protected]> wrote:
> > > >>>> > >>>>
> > > >>>> > >>>>> So, I added hadoop-hdfs-2.3.0.jar as a Maven dependency,
> > > >>>> > >>>>> recompiled the project, and extracted it to deploy/samza,
> > > >>>> > >>>>> and the problem still happens. I also downloaded
> > > >>>> > >>>>> hadoop-client-2.3.0.jar and the problem still happens.
> > > >>>> > >>>>> hadoop-common is 2.2.0; is that a problem? I will try with
> > > >>>> > >>>>> 2.3.0.
> > > >>>> > >>>>>
> > > >>>> > >>>>> Actually, a lot of the hadoop jars are 2.2.0.
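> > > >>>> > >>>>>
> > > >>>> > >>>>> (A sketch of how to audit the versions, assuming the
> > > >>>> > >>>>> hello-samza layout:
> > > >>>> > >>>>>
> > > >>>> > >>>>>   # what Maven actually resolved
> > > >>>> > >>>>>   mvn dependency:tree -Dincludes=org.apache.hadoop
> > > >>>> > >>>>>   # what ended up in the extracted package
> > > >>>> > >>>>>   ls deploy/samza/lib | grep hadoop
> > > >>>> > >>>>>
> > > >>>> > >>>>> Mixed 2.2.0/2.3.0 artifacts in either listing would need to
> > > >>>> > >>>>> be pinned to one version.)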
> > > >>>> > >>>>>
> > > >>>> > >>>>> On 11 Aug 2014, at 21:33, Yan Fang <[email protected]> wrote:
> > > >>>> > >>>>>
> > > >>>> > >>>>>> <include>org.apache.hadoop:hadoop-hdfs</include>
> > > >>>> > >>>>>
> > > >>>> > >>>>
> > > >>>> > >>>
> > > >>>> > >>
> > > >>>> > >
> > > >>>> >
> > > >>>> >
> > > >>>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> --
> > > >>> ------------------------------------------
> > > >>> Telles Mota Vidal Nobrega
> > > >>> M.sc. Candidate at UFCG
> > > >>> B.sc. in Computer Science at UFCG
> > > >>> Software Engineer at OpenStack Project - HP/LSD-UFCG
> > > >>>
> > > >>
> > > >>
> > > >>
> > > >> --
> > > >> ------------------------------------------
> > > >> Telles Mota Vidal Nobrega
> > > >> M.sc. Candidate at UFCG
> > > >> B.sc. in Computer Science at UFCG
> > > >> Software Engineer at OpenStack Project - HP/LSD-UFCG
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > ------------------------------------------
> > > > Telles Mota Vidal Nobrega
> > > > M.sc. Candidate at UFCG
> > > > B.sc. in Computer Science at UFCG
> > > > Software Engineer at OpenStack Project - HP/LSD-UFCG
> > > >
> > >
> > >
> > >
> > > --
> > > ------------------------------------------
> > > Telles Mota Vidal Nobrega
> > > M.sc. Candidate at UFCG
> > > B.sc. in Computer Science at UFCG
> > > Software Engineer at OpenStack Project - HP/LSD-UFCG
> > >
> >
>
>
>
> --
> ------------------------------------------
> Telles Mota Vidal Nobrega
> M.sc. Candidate at UFCG
> B.sc. in Computer Science at UFCG
> Software Engineer at OpenStack Project - HP/LSD-UFCG
>
