Hi,
I am new to Hadoop.
I am trying to use the following:
for (StatisticsCollector.TimeWindow window :
    tracker.getStatistics().collector.DEFAULT_COLLECT_WINDOWS) {
  JobTrackerStatistics.TaskTrackerStat ttStat =
      tracker.getStatistics().getTaskTrackerStat(tt.getTrackerName());
  out.println("" + ttStat.totalTask
I already have it working using the XML files. I was trying to see which
parameters I need to pass to the conf object. Should I take all the
parameters from the XML files and set them on the conf object?
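For illustration, a minimal sketch of setting the usual connection parameters
directly on the conf object instead of relying on the XML files (the host
names and ports are placeholders, not values from this thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration conf = HBaseConfiguration.create();
// Placeholder addresses -- substitute your own cluster's values.
conf.set("fs.default.name", "hdfs://namenode-host:8020");
conf.set("hbase.zookeeper.quorum", "zk-host1,zk-host2,zk-host3");
conf.set("hbase.zookeeper.property.clientPort", "2181");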
On Mon, Nov 12, 2012 at 7:17 PM, Yanbo Liang wrote:
> There are two candidates:
> 1) You
There are two candidates:
1) You need to copy your Hadoop/HBase configuration files, such as
core-site.xml, hdfs-site.xml, or hbase-site.xml, from the "etc" or
"conf" subdirectory of the Hadoop/HBase installation directory into the Java
project directory. Then the configuration of Hadoop/HBase will be auto
Try copying the files from hadoop and hbase into each other's conf directories.
Regards,
Mohammad Tariq
On Tue, Nov 13, 2012 at 5:04 AM, Mohit Anchlia wrote:
> Is it necessary to add hadoop and hbase site xmls in the classpath of the
> java client? Is there any other way we can configure it using g
I was actually looking for an example of doing it in the Java code. But I
think I've found a way to do it by iterating over all the files using the
globStatus() method.
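A minimal sketch of that approach, for illustration (the directory path is a
placeholder, and copyBytes() is just one way to consume each file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(conf);
// Expand every entry under the directory (placeholder path).
for (FileStatus status : fs.globStatus(new Path("/user/mohit/input-dir/*"))) {
  if (status.isDir()) {
    continue;                                        // skip sub-directories
  }
  FSDataInputStream in = fs.open(status.getPath());
  try {
    IOUtils.copyBytes(in, System.out, conf, false);  // dump file contents
  } finally {
    in.close();
  }
}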
On Mon, Nov 12, 2012 at 5:50 PM, yinghua hu wrote:
> Hi, Mohit
>
> You can input everything in a directory. See the step 12 in this li
Hi,
My Hadoop cluster logs at the INFO level, and it writes every piece of
block-related information to the HDFS log. I would like to keep the INFO
level, but suppress the block-related information. Is it possible? Can one
control Hadoop logs by component?
Thank you. Sincerely,
Mark
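Per-component control is possible through log4j logger names. A minimal
sketch, assuming the block messages come from the NameNode's state-change
logger (verify the logger name against your own log lines before relying on it):

# conf/log4j.properties on the HDFS daemons: the root logger stays at INFO,
# but the state-change logger (the source of the "BLOCK* ..." lines) is raised
# to WARN so the block-related INFO messages are suppressed.
log4j.logger.org.apache.hadoop.hdfs.StateChange=WARN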
When submitting a job, the ToolRunner or JobClient just distributes your jars
to HDFS,
so that the tasktrackers can launch/"re-run" it.
In your case, you should have your dynamic classes re-generated in the
mapper/reducer's setup method,
or the runtime classloader will miss them all.
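A minimal sketch of that idea (MyClassGenerator is a hypothetical stand-in for
however you generate the class; the key/value types are placeholders):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class DynamicClassMapper extends Mapper<LongWritable, Text, Text, Text> {
  private Object dynamicInstance;

  @Override
  protected void setup(Context context) throws IOException, InterruptedException {
    // Classes generated in the client JVM are not part of the job jar shipped
    // to the tasktrackers, so re-generate them inside the task JVM here.
    Class<?> generated = MyClassGenerator.generate(context.getConfiguration()); // hypothetical helper
    try {
      dynamicInstance = generated.newInstance();
    } catch (Exception e) {
      throw new IOException("Could not instantiate generated class", e);
    }
  }
}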
On Tue, Nov 13, 2012 at 7:
Hi, Mohit
You can pass in a whole directory as the input. See step 12 at this link.
http://raseshmori.wordpress.com/
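As an illustrative sketch (the job name and paths are placeholders), pointing a
MapReduce job at a directory makes it read every file inside it:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

Job job = new Job(conf, "read-whole-directory");
FileInputFormat.addInputPath(job, new Path("/user/mohit/input-dir"));  // a directory, not a single file
FileOutputFormat.setOutputPath(job, new Path("/user/mohit/output"));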
On Mon, Nov 12, 2012 at 5:40 PM, Mohit Anchlia wrote:
> Using Java dfs api is it possible to read all the files in a directory? Or
> do I need to list all the files in the directory
Using the Java DFS API, is it possible to read all the files in a directory? Or
do I need to list all the files in the directory and then read them one by one?
I found the following:
http://cloudfront.blogspot.com/2012/06/hbase-counters-part-i.html
http://palominodb.com/blog/2012/08/24/distributed-counter-performance-hbase-part-1
Which HBase version are you using?
Cheers
On Mon, Nov 12, 2012 at 4:21 PM, Mesika, Asaf wrote:
> Hi,
>
> Can anyone refe
I was simply able to read it using the code below. I didn't have to decompress. It
looks like the reader automatically detects the compression and decompresses the
file before returning the data to the user.
On Mon, Nov 12, 2012 at 3:16 PM, Mohit Anchlia wrote:
> I am looking for an example that read snappy compressed snappy file. C
Hi,
Can anyone refer me to an article/blog detailing how Counters are
implemented in HBase?
We are seeing very low throughput for a batch of Increments relative to a batch of Puts, and
I would like to investigate why, by understanding the basic workflow of
updating a counter - does it use the block cache? D
Is it necessary to add the hadoop and hbase site XMLs to the classpath of the
Java client? Is there any other way we can configure it, e.g. using a general
properties file with key=value pairs?
I am looking for an example that reads a Snappy-compressed SequenceFile. Could
someone point me to it? What I have so far is this:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
Path path = new Path(uri);
SequenceFile.Reader reader = null;
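For reference, a minimal sketch of how the snippet might continue (it also needs
org.apache.hadoop.io.{SequenceFile, Writable, IOUtils} and
org.apache.hadoop.util.ReflectionUtils; the printing is just an example):

try {
  reader = new SequenceFile.Reader(fs, path, conf);
  // The key/value classes and the compression codec (Snappy here) are read
  // from the SequenceFile header, so no explicit decompression step is needed.
  Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
  Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
  while (reader.next(key, value)) {
    System.out.println(key + "\t" + value);
  }
} finally {
  IOUtils.closeStream(reader);
}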
Hello everybody,
We are using CapacityScheduler and Hadoop 0.20.2 (CDH3u3) on a cluster
composed of 20 nodes, each with:
16 cores, 24 GB memory (AMD 4274), JVM HotSpot 1.7.0_05
We run scientific jobs as Map/Reduce tasks (using Cascading 1.2). We use
CapacityScheduler to avoid memory
Hi everyone,
Does anyone know if the new Corona tools (which Facebook just released as open
source) are compatible with Hadoop 1.0.x, or just 0.20.x?
Thanks.
Good to know it works as well. Thanks for sharing.
From: yinghua hu [mailto:yinghua...@gmail.com]
Sent: Friday, November 09, 2012 7:48 PM
To: user@hadoop.apache.org
Subject: Re: error running pi program
Hi, Ted and Andy
I tried both internal and external hostnames. They both worked. But I will
Hi All,
We want to use Hadoop archives for our internal project. We just wanted
to know how stable this HAR functionality is.
Is anyone using it in production? If so, did you face any problems
using HAR?
I appreciate all the help!
regards,
R
Hi,
I have a question:
Is there an existing tool that I can use to copy data from
hadoop-0.20-append to CDH3 (across data centers), like Facebook's Continuous
Copier (as far as I know, it is not open sourced, right?).
I can't use distcp; when I use it, it seems that hadoop-0.20-append and
CDH3 can't commu