Hi Arun,
TestCodec only writes once to the deflateFilter; that's why the test
works with LzoCodec.
I've tried out the change mentioned below, on streaming data as well as
on compressing files, and it works.
I have one app that sends ~200MB (uncompressed size) of lzo-compressed
data over HTTP.
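For anyone who wants to reproduce the case TestCodec misses, here is a
rough sketch that writes to the codec stream twice (the class name and
output file are made up, and LzoCodec needs the native LZO library on
the box):

    import java.io.FileOutputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionOutputStream;
    import org.apache.hadoop.util.ReflectionUtils;

    public class MultiWriteCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(
            conf.getClassByName("org.apache.hadoop.io.compress.LzoCodec"), conf);
        CompressionOutputStream out =
            codec.createOutputStream(new FileOutputStream("out.lzo"));
        out.write("first write\n".getBytes());
        // A second write is exactly what TestCodec never exercises.
        out.write("second write\n".getBytes());
        out.close();
      }
    }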
Moving this to hadoop-user.
Just to clarify, did you set test.randomwrite.maps_per_host to 5 in the run
with Ceph?
-----Original Message-----
From: Esteban Molina-Estolano [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2007 1:45 PM
To: [EMAIL PROTECTED]
Subject: Adding new filesystem to
Looks like you have a good point! I think you are right.
Let me raise a JIRA to handle this issue more generally, i.e., fix all
the places where this kind of check needs to be done.
-----Original Message-----
From: Calvin Yu [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 01, 2007 8:50 PM
To:
HBase is another application that needs write-append.
Every HBase update is written both to a RAM-based log and a
file-system-based log. Periodically, the RAM-based log is flushed to the
filesystem. The RAM-based log and its flushes are used for fielding queries.
The sympathetic file-system-based log
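In sketch form, that logging scheme looks something like the class
below; the names are invented for illustration and are not HBase's
actual API:

    import java.io.BufferedWriter;
    import java.io.File;
    import java.io.FileWriter;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class DualLog {
      private final List<String> ramLog = new ArrayList<String>();
      private final BufferedWriter fsLog;

      public DualLog(File logFile) throws IOException {
        fsLog = new BufferedWriter(new FileWriter(logFile, true));
      }

      // Every update goes to both logs; queries are answered from the RAM copy.
      public synchronized void append(String update) throws IOException {
        ramLog.add(update);
        fsLog.write(update);
        fsLog.newLine();
      }

      // Run on a period: force buffered records to disk, then clear the RAM log.
      // Without a working append/sync in the underlying filesystem, records
      // written since the last flush can be lost on a crash -- hence the
      // interest in write-append.
      public synchronized void flush() throws IOException {
        fsLog.flush();
        ramLog.clear();
      }
    }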
Calvin Yu wrote:
The problem seems to be with the MapTask's (MapTask.java) sort
progress thread (line #196) not stopping after the sort is completed,
and hence the call to join() (line #190) never returns. This is
because that thread is only catching the InterruptedException, and not
checking
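Here is a standalone reconstruction of that pattern (illustrative only,
not MapTask's actual code); run it and join() blocks forever:

    public class SwallowedInterrupt {
      public static void main(String[] args) throws InterruptedException {
        Thread progress = new Thread(new Runnable() {
          public void run() {
            while (true) {
              try {
                Thread.sleep(1000); // stand-in for the progress report
              } catch (InterruptedException e) {
                // Swallowed: the loop keeps running, so the thread never exits.
                // A "return;" here would let interrupt() stop it.
              }
            }
          }
        });
        progress.start();
        progress.interrupt();
        progress.join(); // never returns with the catch block as written
      }
    }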
You're right, Doug. I ran a simple test to verify that interrupt() will
result in an InterruptedException on a call to sleep(), so my hang
problem is something else. I'm going to rerun my job and post a
thread dump of the hang.
Calvin
On 6/1/07, Doug Cutting <[EMAIL PROTECTED]> wrote:
Are you certain that interrupt() is called before sleep()? If
interrupt() is called during the sleep() then it should clearly throw
the InterruptedException. The question is whether it is thrown if the
call to interrupt() precedes the call to sleep(). Please feel free to
post your test
Mark Meissonnier wrote:
Sweet. It works. Thanks!
Someone should put it on this wiki page:
http://wiki.apache.org/lucene-hadoop/hadoop-0.1-dev/bin/hadoop_dfs
I don't have editing privileges.
Anyone can create themselves a wiki account and edit pages. Just use
the Login button at the top of the page.
Is there another way to upgrade this DFS cluster? Any help is
appreciated.
Denis, the latest Hadoop supports upgrade/rollback/finalize. You do not need
to manually back up data/image directories.
Do you run a secondary name-node on your cluster? It is a good idea to
have one if you want an extra
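If it helps, the upgrade cycle with the stock scripts looks roughly
like this (from memory; check the upgrade guide for your release before
relying on it):

    bin/stop-all.sh
    bin/start-dfs.sh -upgrade
    bin/hadoop dfsadmin -upgradeProgress status    # repeat until the upgrade is done
    bin/hadoop dfsadmin -finalizeUpgrade           # commit the upgrade, or...
    bin/start-dfs.sh -rollback                     # ...revert to the old state instead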
I've looked over the code and it looks right. I like the
InterruptedException for telling threads to stop. The only gotcha is
that a lot of the old Hadoop code ignores InterruptedException. But
looking at the code in that thread, there is only one handler and it
re-interrupts the thread (the idiom is sketched in isolation after the
test below). So
public class Test {
  public static void main(String[] args) {
    System.out.println("interrupting..");
    // Set this thread's interrupt flag before sleeping.
    Thread.currentThread().interrupt();
    try {
      Thread.sleep(100);
      System.out.println("done.");
    } catch (InterruptedException e) {
      // Reached immediately: sleep() throws because the flag was already
      // set, so "done." is never printed.
      e.printStackTrace();
    }
  }
}
Granted, this is
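And the re-interrupt idiom mentioned above, in isolation (a sketch, not
the actual MapTask handler):

    public class Reinterrupt {
      static void pause() {
        try {
          Thread.sleep(1000);
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt(); // restore the flag so callers can see it
        }
      }

      public static void main(String[] args) {
        Thread.currentThread().interrupt();
        pause(); // returns immediately: sleep() throws because the flag is already set
        System.out.println(Thread.interrupted()); // prints "true" -- the flag survived pause()
      }
    }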
Owen,
Thank you for the corrections and the single-node
properties suggestion for hadoop-site.xml.
Since I'm also running Apache Tomcat, a web server, and
the James mail server, the JAVA_HOME improvement will
provide benefits across the board.
I still need to set JAVA_PLATFORM on a dual 2 GHz PowerPC
with
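For reference, a single-node hadoop-site.xml of this era looks roughly
like the following (the ports are just the conventional quickstart
examples; property names assume the 0.1x series):

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>localhost:9000</value>
      </property>
      <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>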