The job was actually running and it finished successfully. I was able to
monitor it by using wget and the resource manager web interface
(http://host:port/ws/v1/cluster/apps) and the yarn application command that
Xuan suggested.
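For reference, a minimal sketch of that kind of check, assuming the ResourceManager web UI is on its default port (8088); the host name is a placeholder:

```shell
# Fetch the list of applications from the ResourceManager REST API:
wget -qO- "http://rm-host:8088/ws/v1/cluster/apps"

# The equivalent view from the command line:
yarn application -list
```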
Thank you all for your help.
On 11 March 2014 06:11, sudhakara st
I need to make a performance comparison between sequential code and Hadoop
code written in C++ with Pipes. What is the easiest solution?
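One simple approach is to time both runs with the shell's `time` builtin. A sketch, assuming a word-count program; the binary names and HDFS paths below are placeholders:

```shell
# 1. Time the sequential C++ binary on local data:
time ./wordcount-seq input.txt

# 2. Ship the Pipes binary to HDFS and time the equivalent job:
hadoop fs -put wordcount-pipes /bin/wordcount-pipes
time hadoop pipes \
  -D hadoop.pipes.java.recordreader=true \
  -D hadoop.pipes.java.recordwriter=true \
  -program /bin/wordcount-pipes \
  -input /data/input \
  -output /data/output
```

Note that wall-clock time for the Hadoop run includes job-submission and scheduling overhead, which dominates on small inputs.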
After executing this command:
mvn clean install
I am getting this error.
Failed tests:
TestMetricsSystemImpl.testMultiThreadedPublish:232 expected:<0> but was:<5>
TestNetUtils.testNormalizeHostName:617 null
TestFsShellReturnCode.testGetWithInvalidSourcePathShouldNotDisplayNullInConsole:307
Hi,
I'm currently trying out Hadoop 2.x and noticed that when I load a file in
Pig, it reports the following while reading, even though the file has
multiple records. The weird thing is that if I dump the variable, it does
show the Pig tuples:
Successfully read 0 records from: /tmp/sample.txt
Any reason?
Those are just test failures; I suggest you skip the tests as you did
earlier and run mvn clean install.
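Concretely, the Surefire plugin's skip flag does that:

```shell
# Build and install without running the unit tests:
mvn clean install -DskipTests
```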
On Tue, Mar 11, 2014 at 5:10 AM, Avinash Kujur avin...@gmail.com wrote:
after executing this command :
mvn clean install
i am getting this error.
Failed tests:
This is a Pig problem, not a Hadoop 2.x one - can you please ask it
at u...@pig.apache.org? You may have to subscribe to it first.
On Tue, Mar 11, 2014 at 1:03 PM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi,
I'm currently trying to use Hadoop-2.x and noticed when I try to load the
Hi,
I'm running the YARN 2.2.0 release, and I'm wondering how I can learn the
task failure rate by machine.
In the older release there was a web page that showed that information, so I
could exclude machines with a higher failure rate from the cluster. Is there
an equivalent feature in the 2.2.0 release?
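One place to look, assuming a ResourceManager on its default web port, is the nodes endpoint of the RM REST API. It exposes each node's state and health report rather than a failure rate as such, but it can be polled to track per-machine behavior externally:

```shell
# Per-node state and health from the ResourceManager REST API:
wget -qO- "http://rm-host:8088/ws/v1/cluster/nodes"
```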
Thanks
I want to subscribe to this list.
Thanks
A
On Fri, Mar 7, 2014 at 7:37 AM, hwpstorage hwpstor...@gmail.com wrote:
Hello,
It looks like the HDFS caching does not work well.
The cached log file is around 200MB. The hadoop cluster has 3 nodes, each
has 4GB memory.
-bash-4.1$ hdfs cacheadmin -addPool wptest1
Successfully added cache
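For completeness, adding a pool alone caches nothing; a directive has to point a path at the pool. A sketch, with a placeholder path (note that centralized cache management only became available in the 2.3.0 release):

```shell
# Tell HDFS to cache a file in the pool created above:
hdfs cacheadmin -addDirective -path /tmp/sample.log -pool wptest1

# Check how many bytes are actually cached versus needed:
hdfs cacheadmin -listDirectives -stats
```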
Hi,
Recently I have faced a heap size error when I run
$MAHOUT_HOME/bin/mahout wikipediaXMLSplitter -d
$MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles.xml -o
wikipedia/chunks -c 64
Here are the specs:
1- XML file size = 44GB
2- System memory = 54GB (on virtualbox)
3- Heap size = 51GB
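In the Mahout versions I have seen, the launcher script reads the MAHOUT_HEAPSIZE variable (in MB) to size the JVM heap, so one way to control it is:

```shell
# Give the Mahout driver JVM a 4 GB heap (example value, not a recommendation):
export MAHOUT_HEAPSIZE=4096
$MAHOUT_HOME/bin/mahout wikipediaXMLSplitter \
  -d $MAHOUT_HOME/examples/temp/enwiki-latest-pages-articles.xml \
  -o wikipedia/chunks -c 64
```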
As I posted earlier, here is the result of a successful test:
a 5.4GB XML file (which is larger than enwiki-latest-pages-articles10.xml)
with 4GB of RAM and -Xmx128m took 5 minutes to complete.
I didn't find a larger Wikipedia XML file; I need to test 10GB, 20GB, and
30GB files.
Regards,
Mahmood
You have already subscribed to this list.
2014-03-12 2:22 GMT+07:00 anu238 . anurag.guj...@gmail.com:
i want to subscribe to this list.
Thanks
A
--
Alpha Bagus Sunggono, CBSP
(Certified Brownies Solution Provider)
You can start the ResourceManager without starting any NodeManager, and the
source code for the ResourceManager and the NodeManager lives in separate
Maven sub-projects.
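In a built Hadoop 2.x distribution, the daemon can be started on its own; a sketch, assuming HADOOP_HOME points at the install directory:

```shell
# Start only the ResourceManager daemon; no NodeManager is needed
# for the RM process itself to come up.
$HADOOP_HOME/sbin/yarn-daemon.sh start resourcemanager
```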
On Wed, Mar 12, 2014 at 7:06 AM, anu238 . anurag.guj...@gmail.com wrote:
Hi All,
I am sorry for the blast, I wanted to use only the
There's no limitation on same-subnet.
Regards,
*Stanley Shi,*
On Wed, Mar 12, 2014 at 1:31 PM, navaz navaz@gmail.com wrote:
Hi all
Question regarding Hadoop architecture. Generally, in a Hadoop cluster,
nodes are placed in racks and all the nodes in a rack are connected to the
top-of-rack switch.