I have a machine that is part of the cluster, but I'd like to dedicate it to being the web server and running the db, while still being able to start jobs and get data out of HDFS. In other words, I'd like its cores, memory, and disk to be only minimally affected by jobs running on the cluster, yet still have easy access when I need to pull data out.

I assume I can do something like set the max number of tasks for the node to 0, and something similar for HDFS? Is there a recommended way to go about this?
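Concretely, something like this is what I had in mind (I'm guessing at the Hadoop 1.x property names here; the exclude-file path and hostname are just examples):

    # 1. Stop tasks landing on the node: remove its hostname from
    #    conf/slaves so the start scripts don't launch a TaskTracker
    #    there, or zero out the task slots in that node's mapred-site.xml:
    <property>
      <name>mapred.tasktracker.map.tasks.maximum</name>
      <value>0</value>
    </property>
    <property>
      <name>mapred.tasktracker.reduce.tasks.maximum</name>
      <value>0</value>
    </property>

    # 2. Get its blocks off HDFS: point the namenode at an exclude file
    #    (hdfs-site.xml on the namenode), list the host in it, and refresh
    #    so the blocks re-replicate to the other datanodes:
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/excludes</value>
    </property>

    $ echo mininode.example.com >> /etc/hadoop/conf/excludes
    $ hadoop dfsadmin -refreshNodes

The machine would keep the cluster client configs, so 'hadoop fs' and 'hadoop jar' should still work from it after decommissioning. Is that the right approach, or is there a cleaner one?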