Hi Gary,
It looks like port 8080 is already taken on your machine by XDB.
You should shut XDB down to free up port 8080 and re-launch the Sandbox VM.
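If you want to verify what is holding the port first, a quick check on a Linux host (process names and PIDs will vary) is:

    # show the process listening on TCP port 8080
    sudo lsof -iTCP:8080 -sTCP:LISTEN
    # alternative using netstat
    sudo netstat -tlnp | grep :8080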
Then you should be able to log in to Ambari using ambari/ambari.
Yusaku
On Sat, Jan 4, 2014 at 3:19 PM, Ted Yu yuzhih...@gmail.com wrote:
Hi,
While creating the blocks for a file containing some number of lines, how does
Hadoop avoid cutting a line in the middle when
creating blocks?
Is this taken care of by Hadoop?
Thanks
Shalish.
HDFS is agnostic about the contents of the data you store. Think about
it: a line-ending character is not a universal way for files to
separate their records.
This question has been asked several times before (search on
http://search-hadoop.com, for example). Read
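The short version: HDFS cuts blocks at byte offsets, and it is the InputFormat (e.g. TextInputFormat's line record reader) that reassembles whole lines at read time, by reading past the end of its split to finish the last line and skipping the leading partial line of the next split. A minimal sketch to see the byte-based cuts yourself, assuming a running 2.x HDFS (the path and the 1 MB block size are just for illustration):

    # create a file big enough to span several blocks
    seq 1 500000 > lines.txt
    # upload with a deliberately small block size
    hdfs dfs -D dfs.blocksize=1048576 -put lines.txt /tmp/lines.txt
    # block boundaries fall at byte offsets, not at line endings;
    # the record reader stitches straddling lines back together at read time
    hdfs fsck /tmp/lines.txt -files -blocks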
Are there any limits on the total size of LocalResources that a YARN app
requests? Do the PUBLIC ones age out of cache over time? Are there settable
controls?
Thanks
John
Ted,
Thanks for the pointer. But when I read about the RESTful API:
http://hadoop.apache.org/docs/r2.2.0/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Application_API
I only see a method to query the AM logs, not the task container logs. How
does one get from AppID to the list
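Not a definitive answer, but two hedged routes (host, port, and application ID below are placeholders): with log aggregation enabled, the yarn CLI dumps every container's logs for a finished application, and the RM REST API exposes per-attempt information (including the AM container and node) under appattempts, if your version has that endpoint:

    # all container logs for a completed application (requires log aggregation)
    yarn logs -applicationId application_1388773100000_0001

    # per-attempt info (AM container, node) via the RM REST API, where available
    curl http://resourcemanager:8088/ws/v1/cluster/apps/application_1388773100000_0001/appattempts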
Hi,
I’m trying to run Nutch 2.2.1 on a Hadoop 2-node cluster. My Hadoop cluster is
running fine and I’ve successfully added the input and output directories to
HDFS. But when I run
$HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job
org.apache.nutch.crawl.Crawler urls -dir crawl -depth
I am trying to set up a 4-node cluster on EC2.
The EC2 machine setup is as follows:
1 namenode (master), 1 secondary namenode, and 2 slave nodes.
After issuing start-all.sh on the master, all daemons start as expected, with
only one issue:
on slave2 the datanode and tasktracker start, but on slave1 only
Can you pastebin the stack trace involving the NPE?
Thanks
On Jan 4, 2014, at 9:25 AM, Manikandan Saravanan
manikan...@thesocialpeople.net wrote:
Hi,
I’m trying to run Nutch 2.2.1 on a Hadoop 2-node cluster. My Hadoop cluster
is running fine and I’ve successfully added the input and
You can start/stop a Hadoop daemon manually on a machine via:
bin/hadoop-daemon.sh [start|stop] [namenode | secondarynamenode | datanode |
jobtracker | tasktracker]
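For example, to take a slave out of the cluster by hand you would run, on that slave (Hadoop 1.x daemon names, matching the list above):

    # stop the MapReduce worker first, then the HDFS worker
    bin/hadoop-daemon.sh stop tasktracker
    bin/hadoop-daemon.sh stop datanode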
On Fri, Jan 3, 2014 at 11:47 AM, navaz navaz@gmail.com wrote:
How do I remove one of the slave nodes?
I have a namenode (
Hmm.. I just removed the “crawl” directory (output directory) from the command
and it works! I’m storing the output in a Cassandra cluster using Gora anyway.
So I don’t think I want to store that on HDFS :)
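In other words, the working invocation just drops the -dir option; roughly (the depth value here is illustrative):

    $HADOOP_HOME/bin/hadoop jar /nutch/apache-nutch-2.2.1.job \
      org.apache.nutch.crawl.Crawler urls -depth 3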
--
Manikandan Saravanan
Architect - Technology
TheSocialPeople
On 4 January 2014 at
Hi guys,
I’m trying to connect to the Hortonworks Sandbox VM 1.3
using
http://127.0.0.1:8080/
I use
user=admin
pwd=admin
Unable to connect. Does anybody know what the default user/password is to log
in to the Ambari admin screen on the Hortonworks Sandbox 1.3?
thanks
Gary B
You can also exclude data nodes via conf/hdfs-site.xml:
dfs.hosts / dfs.hosts.exclude: list of permitted/excluded DataNodes. If
necessary, use these files to control the list of allowable datanodes.
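A hedged sketch of that flow (the file path and hostname are placeholders): point dfs.hosts.exclude at an exclude file, add the node, and tell the NameNode to re-read it:

    <!-- hdfs-site.xml -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

    # then, on the namenode:
    echo slave1.example.com >> /etc/hadoop/conf/dfs.exclude
    hadoop dfsadmin -refreshNodes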
On Sat, Jan 4, 2014 at 12:37 PM, Hardik Pandya smarty.ju...@gmail.com wrote:
You can
Are you using Cloudera Manager? It would be easy to remove the node using that.
On Sat, Jan 4, 2014 at 11:20 PM, Hardik Pandya smarty.ju...@gmail.com wrote:
You can also exclude data nodes via conf/hdfs-site.xml:
dfs.hosts / dfs.hosts.exclude: list of permitted/excluded DataNodes. If
Maybe this will clarify some aspects of your questions:
Resource Localization in YARN Deep Dive
http://hortonworks.com/blog/resource-localization-in-yarn-deep-dive/
The threshold for local files is dictated by the configuration property
*yarn.nodemanager.localizer.cache.target-size-mb*, described
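So yes, PUBLIC (and PRIVATE) resources are evicted once the local cache grows past a settable target; a minimal yarn-site.xml sketch (the 10 GB value is just an example):

    <!-- yarn-site.xml -->
    <property>
      <name>yarn.nodemanager.localizer.cache.target-size-mb</name>
      <value>10240</value>
    </property>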
https://www.odesk.com/jobs/common-crawl-hadoop-files-MySQL-cluster-and-Cassandra_~~887c486d56d12da9
Mohammad Alkahtani
P.O.Box 102275
Riyadh 11675
Saudi Arabia
mobile: 00966 555 33 1717
Answering my own question:
slave1's mapred-site.xml was missing the mapred.job.tracker property and was
thus taking the default value *local* for the host.
mapred.job.tracker (default: local): The host and port that the MapReduce job
tracker runs at. If "local", then jobs are run in-process as a single map and
reduce task.
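The fix, then, is one property in slave1's mapred-site.xml (hostname and port are placeholders for your JobTracker):

    <!-- mapred-site.xml on slave1 -->
    <property>
      <name>mapred.job.tracker</name>
      <value>master.example.com:54311</value>
    </property>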
Please send email to user-unsubscr...@hadoop.apache.org
On Sat, Jan 4, 2014 at 6:59 PM, Brent Nikolaus bnikol...@gmail.com wrote: