Re: Hadoop usage in uploading downloading big data

2014-06-10 Thread rahul.soa
Thanks for your reply. The data is design data: a mix of ASCII, binary, and design views (IPs, libraries, Cadence design data, etc.). Currently it's used exactly the way we use any other repository server such as svn or git (checkin, checkout, add, tag, etc.). Users store the data on

Permission denied

2014-06-10 Thread EdwardKing
I want to try a Hadoop example, but it raises the following error. Where is it wrong, and how do I correct it? Thanks.

[root@localhost ~]# useradd -g hadoop yarn
useradd: user 'yarn' already exists
[root@localhost ~]# gpasswd -a hdfs hadoop
Adding user hdfs to group hadoop
[root@localhost ~]# su - hdfs
[hdfs@localhost

Re: Permission denied

2014-06-10 Thread Nitin Pawar
Your HDFS default user is yarn; you may want to change that to hdfs to run your job as user yarn. On Tue, Jun 10, 2014 at 1:32 PM, EdwardKing zhan...@neusoft.com wrote: I want to try a Hadoop example, but it raises the following error. Where is it wrong, and how do I correct it? Thanks. [root@localhost ~]#
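
A minimal sketch of the suggestion above, assuming an insecure (non-Kerberos) cluster: in simple-auth mode the client-side HADOOP_USER_NAME variable overrides the user a job is submitted as, so you can submit as hdfs without switching accounts. The jar path below is an example, not taken from the thread.

```shell
# Assumption: simple (non-Kerberos) authentication, where the client-side
# HADOOP_USER_NAME variable overrides the user a job is submitted as.
export HADOOP_USER_NAME=hdfs

# Then submit as usual, e.g. (path to the examples jar varies per install):
#   hadoop jar hadoop-mapreduce-examples-2.2.0.jar pi 4 1000
# Or switch accounts instead of setting the variable:
#   su - hdfs

echo "submitting as: $HADOOP_USER_NAME"
```

On a Kerberos-secured cluster this variable is ignored and you must authenticate (kinit) as the right principal instead.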

Re: priority in the container request

2014-06-10 Thread Krishna Kishore Bonagiri
Thanks, Vinod, for the quick answer. It seems to be working when I request all containers with the same specification, but not when I have multiple container requests with different host names specified. Is this expected behavior? Kishore On Mon, Jun 9, 2014 at 10:51 PM, Vinod Kumar

Writable RPC had a lot of leftover TCP connections in CLOSE_WAIT after RPC_TIMEOUT is enabled

2014-06-10 Thread Anfernee Xu
Hi, I'm using hadoop-2.2.0 and taking advantage of Hadoop's WritableRpcEngine to build my distributed application. I have a 'heartbeat' interface in my application to check availability periodically; in order to detect any potential failure, I enabled rpc_timeout when creating the proxy as below
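
Not a fix, but a quick way to confirm the leak described above; a sketch assuming Linux, where /proc/net/tcp encodes the socket state in its fourth column and 08 means CLOSE_WAIT:

```shell
# Count sockets stuck in CLOSE_WAIT (state code 08 in /proc/net/tcp).
# Assumption: Linux; for IPv6 sockets check /proc/net/tcp6 as well.
close_wait=$(awk 'NR > 1 && $4 == "08"' /proc/net/tcp | wc -l)
echo "CLOSE_WAIT sockets: $close_wait"
```

A count that grows steadily across heartbeat cycles confirms the client side is not closing timed-out connections; on the Java side, RPC.stopProxy is the call that releases a proxy's underlying connection when you are done with it.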

Re: Writable RPC had a lot of leftover TCP connections in CLOSE_WAIT after RPC_TIMEOUT is enabled

2014-06-10 Thread Ted Yu
Why don't you base your application on ProtobufRpcEngine? Cheers On Tue, Jun 10, 2014 at 10:42 AM, Anfernee Xu anfernee...@gmail.com wrote: Hi, I'm using hadoop-2.2.0 and taking advantage of Hadoop's WritableRpcEngine to build my distributed application, and I have a 'heartbeat' interface in my

Re: Writable RPC had a lot of leftover TCP connections in CLOSE_WAIT after RPC_TIMEOUT is enabled

2014-06-10 Thread Anfernee Xu
Because it's a legacy system I built 4-5 years back on a Hadoop 0.2.x release, and we recently moved to the 2.2.0 release. Moving to ProtocolBuffer is one option, but we would need to migrate our infrastructure (Hadoop and so on) first and get it working (no regressions). Is this a known issue? Thanks

Where is the run result of hadoop

2014-06-10 Thread EdwardKing
I tried hadoop-mapreduce-examples-2.2.0.jar, and the screen information is: mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. The final result should be as follows, but I don't find it. Why doesn't it show, and where is the result? Estimated value of Pi is
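
A sketch of where to look, with hypothetical paths and a made-up application id (none of these come from the thread): the pi example prints its estimate to the submitting console rather than writing it to HDFS.

```shell
# The pi example prints "Estimated value of Pi is ..." on the client console
# once the job completes; it does not leave the estimate in HDFS.
#   hadoop jar hadoop-mapreduce-examples-2.2.0.jar pi 4 1000
#
# If the line scrolled past, fetch the job's aggregated logs from YARN
# (requires yarn.log-aggregation-enable=true; the application id is
# printed at submission time, and the one below is a made-up example):
#   yarn logs -applicationId application_1402382400000_0001
#
# Examples that do write files (e.g. wordcount) put results in HDFS instead:
#   hdfs dfs -ls output
#   hdfs dfs -cat output/part-r-00000
```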