Hi,
I have an application which creates a simple text file on HDFS. There is a
second application which processes this file. The second application picks
up the file for processing only when the file has not been modified for 10
minutes. In this way, the second application can be sure that this file is
no longer being written to.
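The pickup rule described above can be sketched as a simple timestamp comparison. This is a minimal Python sketch of the logic only (the real component would read the file's modification time from HDFS, e.g. via FileStatus.getModificationTime() in the Java API; the function and constant names here are illustrative):

```python
import time

STABLE_AFTER_SECS = 10 * 60  # pick up files untouched for 10 minutes

def is_stable(mtime_secs, now_secs=None, threshold=STABLE_AFTER_SECS):
    """Return True if the file has not been modified for `threshold` seconds."""
    if now_secs is None:
        now_secs = time.time()
    return (now_secs - mtime_secs) >= threshold

# A file last modified 11 minutes ago is safe to process:
print(is_stable(mtime_secs=0, now_secs=11 * 60))  # True
# A file modified 5 minutes ago is not:
print(is_stable(mtime_secs=0, now_secs=5 * 60))   # False
```

Note that this only works if HDFS actually updates the modification time as expected; as discussed later in this thread, modification times may only be updated when the file is closed.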
Yes. The component which will be interfacing with Hadoop will be deployed in
an application server, and the application server runs on Java 1.5 only.
Therefore, we cannot migrate to 1.6.
At present I am thinking of decoupling the component from Hadoop so that
this component can run from within the application server.
Hi,
I am trying to create a Hadoop cluster which can handle 2,000 write requests
per second. In each write request I would be writing a 1 KB line to a file.
I would be using machines with the following configuration:
Platform: Red Hat Linux 9.0
CPU: 2.07 GHz
RAM: 1 GB
Can anyone help in sizing a cluster for this workload?
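As a rough back-of-envelope check on the raw ingest rate (assuming a sustained rate and ignoring HDFS replication overhead and NameNode memory, which real capacity planning must also account for):

```python
requests_per_sec = 2000
bytes_per_request = 1024  # one 1 KB line per write request

ingest_bytes_per_sec = requests_per_sec * bytes_per_request
ingest_mb_per_sec = ingest_bytes_per_sec / (1024 * 1024)
gb_per_day = ingest_bytes_per_sec * 86400 / (1024 ** 3)

# Roughly 2 MB/s sustained, on the order of 165 GB/day before replication.
print(f"~{ingest_mb_per_sec:.2f} MB/s, ~{gb_per_day:.0f} GB/day")
```

With the default replication factor of 3, the cluster-wide write volume would be about three times this figure.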
We provided a patch for 0.16 that could be retrofitted into 0.19.
Our internal use of this has shown that jstack can hang in some
situations, and that just sending SIGQUIT is safer.
https://issues.apache.org/jira/browse/HADOOP-3994
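SIGQUIT works here because the JVM installs its own handler for it and prints a full thread dump to stdout without attaching a debugger, then keeps running. A minimal sketch of the signalling side (the function name is illustrative; the demo uses a plain `sleep` process as a stand-in, which, unlike a JVM, simply dies on SIGQUIT):

```python
import os
import signal
import subprocess

def request_thread_dump(pid):
    """Ask the process at `pid` to handle SIGQUIT (a JVM dumps its threads)."""
    os.kill(pid, signal.SIGQUIT)

# Demo with a stand-in process. A real JVM would print its stack traces
# and continue; `sleep` has no SIGQUIT handler and terminates instead.
proc = subprocess.Popen(["sleep", "30"])
request_thread_dump(proc.pid)
proc.wait()
```

Against a real TaskTracker or DataNode JVM, the thread dump lands in the process's stdout log, which is what makes this usable when jstack itself hangs.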
Ryan LeCompte wrote:
For what it's worth, I started seeing
I believe that file modification times are updated only when the file is
closed. Are you appending to a preexisting file?
thanks,
dhruba
On Tue, Dec 30, 2008 at 3:14 AM, Sandeep Dhawan dsand...@hcl.in wrote:
Hi,
I have a application which creates a simple text file on hdfs. There is a
Hello,
I am new to Hadoop. I am using Hadoop 0.17, and I am trying to run it in
pseudo-distributed mode.
I get a NotReplicatedYetException while executing 'bin/hadoop dfs' commands.
The following is the partial text of the exception
08/12/31 02:38:26 INFO dfs.DFSClient:
org.apache.hadoop.ipc.RemoteException:
It looks like you do not have datanodes running.
Can you check the datanode logs and see whether they started without errors?
Thanks,
Lohit
- Original Message
From: sagar arlekar sagar.arle...@gmail.com
To: core-user@hadoop.apache.org
Sent: Tuesday, December 30, 2008 1:00:04 PM
Subject:
Thanks Lohit.
For now I have reinstalled Hadoop and it's working fine. Hopefully the
exception won't recur.
On Wed, Dec 31, 2008 at 2:49 AM, lohit lohit.vijayar...@yahoo.com wrote:
It looks like you do not have datanodes running.
Can you check the datanode logs and see whether they started without errors?
The path separator is a major issue with a number of items in the
configuration data set that are multiple items packed together via the
path separator:
the class path
the distributed cache
the input path set
All of these suffer from the path.separator issue for two reasons, the
first being the difference across operating systems (':' on Unix versus
';' on Windows).
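The list above all hinges on the same platform-dependent character. Python exposes the same distinction as Java's path.separator versus file.separator, so a small illustration of packing and unpacking such lists portably:

```python
import os

# os.pathsep separates entries in a packed list (':' on Unix, ';' on Windows);
# os.sep separates components within a single path ('/' versus '\\').
# Hard-coding ':' breaks the moment the packed string is read on Windows.
entries = ["/jobcache/app.jar", "/jobcache/lib.jar"]

packed = os.pathsep.join(entries)       # portable packing
unpacked = packed.split(os.pathsep)     # portable unpacking

assert unpacked == entries
print(packed)
```

The same round-trip done with a hard-coded separator only works on the platform it was written on, which is exactly the failure mode described for the class path, distributed cache, and input path set.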
Hey all,
I have a similar issue. I am specifically having problems with the config
option mapred.child.java.opts. I set it to -Xmx1024m and it uses -Xmx200m
regardless. I am running Hadoop 0.18.2 and I'm pretty sure this option was
working in the previous versions of Hadoop I was using.
I am
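One common cause of a setting like this being ignored is resource-loading order: a configuration file loaded later silently overrides an earlier value unless that value is protected. This is a simplified, hypothetical sketch of such a merge rule (not Hadoop's actual Configuration class; `merge_configs` and `final_keys` are illustrative names):

```python
def merge_configs(base, override, final_keys=frozenset()):
    """Apply `override` on top of `base`, but keep keys marked as final."""
    merged = dict(base)
    for key, value in override.items():
        if key not in final_keys:  # final keys resist later overrides
            merged[key] = value
    return merged

user = {"mapred.child.java.opts": "-Xmx1024m"}
site_defaults = {"mapred.child.java.opts": "-Xmx200m"}

# If a later-loaded file wins unconditionally, the default clobbers the
# user's setting:
print(merge_configs(user, site_defaults))
# Protecting the key preserves the user's value:
print(merge_configs(user, site_defaults,
                    final_keys={"mapred.child.java.opts"}))
```

If a value like -Xmx200m shows up regardless of what the job sets, it is worth checking which configuration files are on the classpath and in what order they are read.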
Hi Dhruba,
The file is being closed properly, but the timestamp does not get modified.
The modification timestamp still shows the file creation time.
I am creating a new file and writing data into this file.
Thanks,
Sandeep
Dhruba Borthakur-2 wrote:
I believe that file modification times