HOD supports a PBS environment, namely Torque. Torque is a vastly
improved fork of OpenPBS. You may be able to get HOD working on OpenPBS,
or better still persuade your cluster admins to upgrade to a more recent
version of Torque (e.g. at least 2.1.x).
Craig
Craig Macdonald cra...@dcs.gla.ac.uk wrote:
Hello all,
Following recent hardware discussions, I thought I'd ask a related
question. Our cluster nodes have 3 drives: 1x 160GB system/scratch and
2x 500GB DFS drives.
The 160GB system drive is partitioned such that 100GB is for job
mapred.local space. However, we find that for our
Hi Jacky,
Pleased to hear that fuse-dfs is working for you.
Do you mean that you want to mount dfs://localhost:9000/users at /mnt/hdfs?
If so, fuse-dfs doesn't currently support this, but it would be a good
idea for a future improvement.
Craig
jacky_ji wrote:
i can use fuse-dfs to mount
Yes, tmpwatch may cause problems by deleting files that haven't been
accessed for a while.
On Fedora/Red Hat, chmod -x /etc/cron.daily/tmpwatch disables it completely.
C
Mark Kerzner wrote:
Indeed, this was the right answer, and in the morning the file system was as
fresh as in the evening.
Hi Arifa,
The O_APPEND flag is the subject of
https://issues.apache.org/jira/browse/HADOOP-4494
Craig
Arifa Nisar wrote:
Hello All,
I am using hadoop 0.19.0, whose release notes include HADOOP-1700
("Introduced append operation for HDFS files"). I am trying to test this
new feature using my
Hi Roopa,
I can't comment on the S3 specifics. However, fuse-dfs is based on a C
interface called libhdfs which allows C programs (such as fuse-dfs) to
connect to the Hadoop file system Java API. This being the case,
fuse-dfs should (theoretically) be able to connect to any file system
that
solve my problem?
Roopa
On Jan 28, 2009, at 1:03 PM, Craig Macdonald wrote:
Hi Roopa,
I can't comment on the S3 specifics. However, fuse-dfs is based on a C
interface called libhdfs which allows C programs (such as fuse-dfs)
to connect to the Hadoop file system Java API. This being the case
the logs anywhere? I don't see anything in
/var/log/messages either
It looks like it tries to create the file system in hdfs.c, but I'm not
sure where it fails.
I have the Hadoop home set, so I believe it gets the config info.
Any ideas?
Thanks,
Roopa
On Jan 28, 2009, at 1:59 PM, Craig Macdonald wrote:
and was failing with that
error. I created another mount point, which resolved the
transport endpoint error.
Also, I had the -d option on my command. :)
Roopa
On Jan 28, 2009, at 6:35 PM, Craig Macdonald wrote:
Hi Roopa,
Firstly, can you get fuse-dfs working for an instance
Hi,
I guess that the java on your PATH is different from the setting of your
$JAVA_HOME env variable.
Try $JAVA_HOME/bin/java -version?
Also, there is a program called Java Preferences on each system for
changing the default java version used.
Craig
nitesh bhatia wrote:
Hi
I am trying to
Hello Hadoop Core,
I have a very brief question: Our map tasks create side-effect files, in
the directory returned by FileOutputFormat.getWorkOutputPath().
This works fine for getting the side-effect files that can be
accessed by the reducers.
However, as these map-generated
Tamas,
There is a patch attached to the issue, which you should be able to
apply to get O_APPEND:
https://issues.apache.org/jira/browse/HADOOP-4494
Craig
Tamás Szokol wrote:
Hi!
I'm using the latest stable 0.19.0 version of hadoop. I'd like to try
the new append functionality. Is it
Hi Hemanth,
While HOD does not do this automatically, please note that since you
are bringing up a Map/Reduce cluster on the allocated nodes, you can
submit map/reduce parameters with which to bring up the cluster when
allocating jobs. The relevant options are
Hemanth,
snip
Just FYI, at Yahoo! we've set Torque to allocate separate nodes for
the number specified to HOD. In other words, the number corresponds to
the number of nodes, not processors. This has proved simpler to
manage. I forget right now, but I think you can make Torque behave
like this
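For what it's worth, one scheduler-side way to get this whole-node
behaviour is a node access policy. This is an assumption about the setup:
the fragment below is for the Maui scheduler often run alongside Torque,
not for Torque itself, so check your own scheduler's documentation:

```
# maui.cfg (hypothetical fragment): give each job exclusive use of
# the nodes it is assigned, so node counts mean nodes, not CPUs
NODEACCESSPOLICY SINGLEJOB
```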
Hello,
We have two HOD questions:
(1) For our current Torque PBS setup, the number of nodes requested by
HOD (-l nodes=X) corresponds to the number of CPUs allocated; however,
these CPUs can be spread across various partially-filled or empty nodes.
Unfortunately, HOD does not appear to honour the
I have a related question - I have a class which is both mapper and
reducer. How can I tell in configure() if the current task is a map or a
reduce task? Parse the task id?
C
Owen O'Malley wrote:
On Dec 4, 2008, at 9:19 PM, abhinit wrote:
I have set some variable using the JobConf object.
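Parsing the task id does seem workable. A minimal sketch, assuming the
old-API attempt id read from conf.get("mapred.task.id") has the form
attempt_<timestamp>_<job>_m_<task>_<attempt> for maps (and _r_ for
reduces); the class and method names here are illustrative, not part of
the Hadoop API:

```java
public class TaskIdCheck {
    // Returns true if the given attempt id belongs to a map task.
    // Splitting on "_" yields: attempt, timestamp, job, m/r, task, attempt;
    // index 3 is "m" for map attempts and "r" for reduce attempts.
    static boolean isMapTask(String taskId) {
        String[] parts = taskId.split("_");
        return parts.length > 3 && "m".equals(parts[3]);
    }

    public static void main(String[] args) {
        System.out.println(isMapTask("attempt_200812040919_0001_m_000000_0")); // true
        System.out.println(isMapTask("attempt_200812040919_0001_r_000000_0")); // false
    }
}
```

In configure() you would pass conf.get("mapred.task.id") into such a
helper instead of the hard-coded strings above.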