that's not really an option.
Thanks!
Chris
--
Chris Anderson
http://jchris.mfdz.com

the task
trackers, in order to make the limit apply? That seems unlikely - I'd
really like to manage this parameter on a per-job level.
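(For the archives: if the setting does turn out to be job-scoped, the
streaming jar accepts per-job overrides on the command line via
-jobconf. A sketch, with a made-up parameter name standing in for
whichever limit this thread is about, and a placeholder jar path:

  hadoop jar /path/to/hadoop-streaming.jar \
    -jobconf some.per.job.limit=4 \
    -input in -output out \
    -mapper map.rb -reducer reduce.rb

Whether any given parameter is actually honored per-job rather than
cluster-wide depends on the parameter itself.)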
Thanks for any input!
Chris
--
Chris Anderson
http://jchris.mfdz.com

for managing resources used
by the processes spawned by the streaming jar? Ideally I'd like to run my
ruby scripts under nice.
I can hack something together with wrappers, but I'm thinking there
might be a configuration option to handle this within the streaming jar.
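(In case the wrapper route is the way to go, it can be as small as
this -- an untested sketch; the script name and mapper path are
placeholders:

  #!/bin/sh
  # nice_map.sh: re-exec the real mapper at low CPU priority
  exec nice -n 19 ruby mapper.rb "$@"

Make it executable with chmod +x, then ship both files with the job,
e.g. -mapper nice_map.sh -file nice_map.sh -file mapper.rb, so the
tasktrackers have them locally.)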
Thanks for any suggestions!
--
Chris

you describe, and it's working well.
Chris
--
Chris Anderson
http://jchris.mfdz.com

of
known hosts.
localhost: no tasktracker to stop
stopping namenode
localhost: no datanode to stop
localhost: no secondarynamenode to stop
conf files in /usr/local/hadoop-0.17.0
==
# cat conf/slaves
localhost
# cat conf/masters
localhost
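(One quick way to see which daemons are actually alive before and
after running the stop scripts is jps, which ships with the JDK; the
PIDs and process list below are illustrative only:

  # jps
  11121 NameNode
  11321 JobTracker
  11534 Jps

If stop-all.sh reports "no tasktracker to stop" while jps still shows
one running, the pid files under the hadoop pid directory are likely
stale or missing.)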
--
Chris Anderson
http://jchris.mfdz.com

on EC2 :), we do protect the hadoop web processes by
putting a proxy in front of them. A user connects to the proxy,
authenticates, and then gets the output from the hadoop process. All of the
redirection magic happens via a localhost connection, so no data is leaked
unprotected.
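(A lighter-weight variant of the same idea, if you have ssh access to
the master, is a SOCKS tunnel rather than a full authenticating
proxy -- a sketch, with a placeholder hostname:

  ssh -D 6666 user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

then point the browser's SOCKS proxy at localhost:6666 and hit the
jobtracker and namenode UIs on ports 50030 and 50070 as usual, so
nothing crosses the wire outside the ssh session.)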
--
Chris

impractical.
Better to do the proxy thing.
This would be a nice addition to the Hadoop EC2 AMI (which is super
helpful, by the way). Thanks to whoever put it together.
--
Chris Anderson
http://jchris.mfdz.com