Re: Check in sync journal nodes

2014-05-27 Thread lohit
that journalnode service is running, which is simple, I'd like to check latency or sync status. Are there any APIs or commands to check it? Regards Juan Carlos Fernandez -- Have a Nice Day! Lohit

Re: Writing an application based on YARN 2.2

2013-11-12 Thread lohit
if anyone can share some experiences here. Many thanks. Bill -- Have a Nice Day! Lohit

Re: Question about Name Spaces…

2013-05-15 Thread Lohit
On May 15, 2013, at 7:17 AM, Michael Segel michael_se...@hotmail.com wrote: Quick question... So when we have a cluster which has multiple namespaces (multiple name nodes) , why would you have a file in two different namespaces? Are you saying why one would create same file in two

Re: Question about Name Spaces…

2013-05-15 Thread lohit
a single name space. The reason I am asking is that I'm trying to see how people view and use namespaces. Does that make sense? Thx On May 15, 2013, at 9:24 AM, Lohit lohit.vijayar...@yahoo.com wrote: On May 15, 2013, at 7:17 AM, Michael Segel michael_se...@hotmail.com wrote

Re: Encryption in HDFS

2013-02-25 Thread lohit
in HDFS. Are these implementations enough to secure HDFS? best regards, seonpark * Sorry for my bad english -- Have a Nice Day! Lohit

Re: Memory based scheduling

2012-10-30 Thread lohit
space. Is it possible to limit the number of tasks (mapper) per computer to 1 or 2 for these kinds of jobs ? Regards, Marco -- Have a Nice Day! Lohit
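One common way to cap concurrent map tasks per machine (a sketch; the property name is from 0.18-era Hadoop configuration, and the value is a placeholder) is in each TaskTracker's hadoop-site.xml:

```xml
<!-- Limit each TaskTracker to at most 2 concurrent map tasks.
     Restart the TaskTrackers for the change to take effect. -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>2</value>
</property>
```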

Re: HDFS using SAN

2012-10-16 Thread lohit
views.* *** ** ** Thanks, Abhishek ** ** -- Have a Nice Day! Lohit

Re: HDFS federation

2012-10-16 Thread lohit
single linux box i have 3 ips with me. -- Have a Nice Day! Lohit

Re: map-red with many input paths

2012-10-16 Thread Lohit
at MultiFileInputFormat if you want to club multiple files per map task. It is best to move completed job directories to some other path so as to avoid filtering altogether Lohit On Oct 16, 2012, at 5:25 PM, Koert Kuipers ko...@tresata.com wrote: currently i run a map-reduce job that reads from

Re: Hbase Web UI Interface on hbase 0.90.3 ?

2011-06-03 Thread lohit
to have hbase-default.xml in conf directory. hbase jar already has hbase-default.xml. So do I need to set all those configurations on my own for this new version ?? Thanks, Praveenesh -- Have a Nice Day! Lohit

Re: FileSystem API - Moving files in HDFS

2011-05-13 Thread lohit
There is no FileSystem API to copy. You could try hadoop dfs -cp src dest which basically reads the file and writes to new file. The code for this is in FsShell.java - Original Message From: Jim Twensky jim.twen...@gmail.com To: core-u...@hadoop.apache.org Sent: Fri, May 13, 2011

Re: Socket closed Exception

2009-04-01 Thread lohit
Thanks Koji, Raghu. This seemed to solve our problem; haven't seen this happen in the past 2 days. What is the typical value of ipc.client.idlethreshold on big clusters? Does the default value of 4000 suffice? Lohit - Original Message From: Koji Noguchi knogu...@yahoo-inc.com To: core

Re: Socket closed Exception

2009-03-30 Thread lohit
they do RPC for create/open/getFileInfo. I will give this a try. Thanks again, Lohit - Original Message From: Koji Noguchi knogu...@yahoo-inc.com To: core-user@hadoop.apache.org Sent: Sunday, March 29, 2009 11:44:29 PM Subject: RE: Socket closed Exception Hi Lohit, My initial guess

Re: Socket closed Exception

2009-03-30 Thread lohit
Thanks Raghu, is the log level at DEBUG? I do not see any socket close exception at NameNode at WARN/INFO level. Lohit - Original Message From: Raghu Angadi rang...@yahoo-inc.com To: core-user@hadoop.apache.org Sent: Monday, March 30, 2009 12:08:19 PM Subject: Re: Socket closed

Socket closed Exception

2009-03-29 Thread lohit
/TaskTracker/Task logs. (This is on HDFS 0.15) Are there cases where NameNode closes the socket due to heavy load or during contention of resources of any kind? Thanks, Lohit

Re: What happens when you do a ctrl-c on a big dfs -rmr

2009-03-11 Thread lohit
time. You do not need to reformat HDFS. Lohit - Original Message From: bzheng bing.zh...@gmail.com To: core-user@hadoop.apache.org Sent: Wednesday, March 11, 2009 7:48:41 PM Subject: What happens when you do a ctrl-c on a big dfs -rmr I did a ctrl-c immediately after issuing a hadoop dfs

Re: copyFromLocal *

2009-02-09 Thread lohit
Which version of hadoop are you using? I think from 0.18 or 0.19 copyFromLocal accepts multiple files as input, but the destination should be a directory. Lohit - Original Message From: S D sd.codewarr...@gmail.com To: Hadoop Mailing List core-user@hadoop.apache.org Sent: Monday, February

Re: using HDFS for a distributed storage system

2009-02-09 Thread lohit
I am planning to add the individual files initially, and after a while (lets say 2 days after insertion) will make a SequenceFile out of each directory (I am currently looking into SequenceFile) and delete the previous files of that directory from HDFS. That way in future, I can access any

Re: Bad connection to FS.

2009-02-04 Thread lohit
/*namenode*.log Lohit - Original Message From: Amandeep Khurana ama...@gmail.com To: core-user@hadoop.apache.org Sent: Wednesday, February 4, 2009 5:26:43 PM Subject: Re: Bad connection to FS. Here's what I had done.. 1. Stop the whole system 2. Delete all the data in the directories where

Re: Hadoop-KFS-FileSystem API

2009-02-03 Thread lohit
-site.xml and your FileSystem API talk to KFS. 5. Alternatively you could also create an object of KosmosFileSystem, which extends from FileSystem. Look at org.apache.hadoop.fs.kfs.KosmosFileSystem for example. Lohit - Original Message From: Wasim Bari wasimb...@msn.com To: core-user

Re: stop the running job?

2009-01-12 Thread Lohit
Try ./bin/hadoop job -h Lohit On Jan 12, 2009, at 6:10 PM, Samuel Guo guosi...@gmail.com wrote: Hi all, Is there any method that I can use to stop or suspend a runing job in Hadoop? Regards, Samuel

Re: NotReplicatedYetException by 'bin/hadoop dfs' commands

2008-12-30 Thread lohit
It looks like you do not have datanodes running. Can you check the datanode logs and see if they were started without errors? Thanks, Lohit - Original Message From: sagar arlekar sagar.arle...@gmail.com To: core-user@hadoop.apache.org Sent: Tuesday, December 30, 2008 1:00:04 PM Subject

Re: Copy data between HDFS instances...

2008-12-17 Thread lohit
try hadoop distcp more info here http://hadoop.apache.org/core/docs/current/distcp.html Documentation is for the current release, but running hadoop distcp should print out a help message. Thanks, Lohit - Original Message From: C G parallel...@yahoo.com To: core-user@hadoop.apache.org

Re: dead node

2008-12-03 Thread lohit
Hi Nik, Can you explain the steps you did. Was NameNode/JobTracker running on the node where the datanode ran? In a cluster with more than one node, stopping one datanode does not stop the whole cluster. Thanks, Lohit - Original Message From: Nikolay Grebnev [EMAIL PROTECTED] To: core-user

Re: how can I decommission nodes on-the-fly?

2008-11-26 Thread lohit
, by decommissioning you would be asking the NameNode to copy over the blocks it has to some other datanode. Thanks, Lohit - Original Message From: Amareshwari Sriramadasu [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, November 25, 2008 11:51:21 PM Subject: Re: how can I decommission

Re: How to retrieve rack ID of a datanode

2008-11-26 Thread lohit
I take that back. I forgot about the changes in new version of HDFS. If you are testing this take a look at TestReplication.java Lohit - Original Message From: Ramya R [EMAIL PROTECTED] To: core-user@hadoop.apache.org Cc: [EMAIL PROTECTED] Sent: Tuesday, November 25, 2008 11:15:28 PM

Re: Getting Reduce Output Bytes

2008-11-25 Thread Lohit
Thanks sharad and paco. Lohit On Nov 25, 2008, at 5:34 AM, Paco NATHAN [EMAIL PROTECTED] wrote: Hi Lohit, Our teams collects those kinds of measurements using this patch: https://issues.apache.org/jira/browse/HADOOP-4559 Some example Java code in the comments shows how to access the data

Re: 64 bit namenode and secondary namenode 32 bit datanode

2008-11-25 Thread lohit
. Thanks, lohit - Original Message From: Sagar Naik [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, November 25, 2008 3:58:53 PM Subject: 64 bit namenode and secondary namenode 32 bit datanode I am trying to migrate from 32 bit jvm and 64 bit for namenode only. *setup* NN

Re: 64 bit namenode and secondary namenode 32 bit datanode

2008-11-25 Thread lohit
. Thanks, Lohit - Original Message From: Sagar Naik [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, November 25, 2008 4:32:26 PM Subject: Re: 64 bit namenode and secondary namenode 32 bit datanode lohit wrote: I might be wrong, but my assumption is running SN either

Re: Performing a Lookup in Multiple MapFiles?

2008-11-18 Thread lohit
) or reuse the one already used by Hadoop. http://hadoop.apache.org/core/docs/r0.18.2/api/org/apache/hadoop/mapred/Partitioner.html has details. I think this http://hadoop.apache.org/core/docs/r0.18.2/api/org/apache/hadoop/examples/SleepJob.html has its usage example. (look for SleepJob.java) -Lohit

Re: Recovery of files in hadoop 18

2008-11-14 Thread lohit
. you would lose all changes that have happened since the last checkpoint. Hope that helps, Lohit - Original Message From: Sagar Naik [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, November 14, 2008 10:38:45 AM Subject: Recovery of files in hadoop 18 Hi, I

Re: Recovery of files in hadoop 18

2008-11-14 Thread lohit
started this namenode with old image and empty edits. You do not want your latest edits to be replayed, which has your delete transactions. Thanks, Lohit - Original Message From: Sagar Naik [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, November 14, 2008 12:11:46 PM

Re: Recovery of files in hadoop 18

2008-11-14 Thread lohit
would have been to open edits in a hex editor or similar to check), but this should work. Once done, you could start. Thanks, Lohit - Original Message From: Sagar Naik [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, November 14, 2008 1:59:04 PM Subject: Re: Recovery

Re: Cleaning up files in HDFS?

2008-11-14 Thread lohit
/hdfs_design.html Thanks, Lohit - Original Message From: Erik Holstad [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, November 14, 2008 5:08:03 PM Subject: Cleaning up files in HDFS? Hi! We would like to run a delete script that deletes all files older than x days

Re: Namenode Failure

2008-11-13 Thread lohit
Hi Ankur, We have had this kind of failure reported by others earlier on this list. This might help you http://markmail.org/message/u6l6lwus33oeivcd Thanks, Lohit - Original Message From: ANKUR GOEL [EMAIL PROTECTED] To: [EMAIL PROTECTED]; core-user@hadoop.apache.org Sent: Thursday

Re: How to exclude machines from a cluster

2008-11-13 Thread lohit
two, maybe having the files compressed might help. Lohit - Original Message From: Zhou, Yunqing [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Thursday, November 13, 2008 1:06:00 AM Subject: How to exclude machines from a cluster Here is a cluster with 13 machines. And due

Re: Caching data selectively on slaves

2008-11-11 Thread lohit
to access DFS multiple times. If you know that the each 'D' is read by one 'R' then you are not buying much with DistributedCache. Although you should also keep in mind if you are read takes long time you reducers might timeout failing to report status. Thanks, Lohit - Original Message

Re: reduce more than one way

2008-11-07 Thread lohit
of reducers. Thanks, Lohit - Original Message From: Elia Mazzawi [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, November 7, 2008 12:35:44 PM Subject: reduce more than one way Hello, I'm writing hadoop programs in Java, I have 2 hadooop map/reduce programs that have the same map

Re: Urgent: Need -importCheckpoint equivalent for 0.15.3

2008-10-13 Thread lohit
It's good to take a backup of the existing data storage (namenode and secondary namenode). Konstantine has explained the steps in this JIRA https://issues.apache.org/jira/browse/HADOOP-2585?focusedCommentId=12558173#action_12558173 HTH, Lohit - Original Message From: Stu Hood [EMAIL PROTECTED

Re: counter for number of mapper records

2008-09-24 Thread lohit
Yes, take a look at src/mapred/org/apache/hadoop/mapred/Task_Counter.properties Those are all the counters available for a task. -Lohit - Original Message From: Sandy [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Wednesday, September 24, 2008 5:09:39 PM Subject: counter

Re: Tips on sorting using Hadoop

2008-09-20 Thread lohit
this is done. http://svn.apache.org/repos/asf/hadoop/core/trunk/src/examples/org/apache/hadoop/examples/terasort/TeraSort.java Thanks, Lohit - Original Message From: Edward J. Yoon [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Saturday, September 20, 2008 10:53:40 AM Subject

Re: Hadoop Streaming and Multiline Input

2008-09-09 Thread lohit
If your webpage is xml tagged and you are looking into using streaming. This might help http://hadoop.apache.org/core/docs/r0.18.0/streaming.html#How+do+I+parse+XML+documents+using+streaming%3F -Lohit - Original Message From: Jim Twensky [EMAIL PROTECTED] To: core-user

Re: Compare data on HDFS side

2008-09-04 Thread Lohit Vijayarenu
this task trivial. Lohit On Sep 4, 2008, at 6:51 AM, Andrey Pankov [EMAIL PROTECTED] wrote: Hello, Does anyone know if it is possible to compare data on HDFS while avoiding copying data to a local box? I mean, if I'd like to find the difference between local text files I can use the diff command. If files

Re: can i run multiple datanode in one pc?

2008-09-04 Thread lohit
dirs. -Lohit - Original Message From: 叶双明 [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Thursday, September 4, 2008 12:01:48 AM Subject: Re: can i run multiple datanode in one pc? Thanks lohit. I run start datanod by comman: bin/hadoop datanode -conf conf/hadoop-site.xml

Re: can i run multiple datanode in one pc?

2008-09-03 Thread lohit
Yes, each datanode should point to a different config. So, if you have conf/hadoop-site.xml, make another conf2/hadoop-site.xml with ports for datanode-specific stuff and you should be able to start multiple datanodes on the same node. -Lohit - Original Message From: 叶双明 [EMAIL PROTECTED
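A sketch of what the datanode-specific overrides in conf2/hadoop-site.xml might look like (property names from 0.17/0.18-era Hadoop; ports and the storage path are illustrative, not prescriptive):

```xml
<!-- Second datanode on the same host: distinct storage dir and ports
     so it does not collide with the first datanode's defaults. -->
<property>
  <name>dfs.data.dir</name>
  <value>/data/dn2</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:50011</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:50076</value>
</property>
<property>
  <name>dfs.datanode.ipc.address</name>
  <value>0.0.0.0:50021</value>
</property>
```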

Re: parallel hadoop process reading same input file

2008-08-28 Thread lohit
Hi Deepak, Can you explain what process and what files they are trying to read? If you are talking about map/reduce tasks reading files on DFS, then yes, parallel reads are allowed. Multiple writers are not. -Lohit - Original Message From: Deepak Diwakar [EMAIL PROTECTED] To: core

Re: Load balancing in HDFS

2008-08-27 Thread lohit
to your cluster or would like to rebalance your cluster you could use the rebalancer utility http://hadoop.apache.org/core/docs/current/hdfs_user_guide.html#Rebalancer -Lohit - Original Message From: Mork0075 [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Wednesday, August 27, 2008

Re: java.io.IOException: Could not get block locations. Aborting...

2008-08-12 Thread lohit
). Thanks, Lohit - Original Message From: Piotr Kozikowski [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Monday, August 11, 2008 12:20:05 PM Subject: Re: java.io.IOException: Could not get block locations. Aborting... Hi again, The Could not get block locations exception was gone

Re: NameNode hardware specs

2008-08-12 Thread lohit
not want your JobTracker or NameNode to be on that system. PS: Could you point to the wiki you are referring to? We might need to make some corrections. Thanks, Lohit - Original Message From: Manish Shah [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, August 12, 2008 11

Re: Difference between Hadoop Streaming and Normal mode

2008-08-12 Thread lohit
, but other than that you should be able to do most of the stuff. Lots of applications use streaming. -Lohit - Original Message From: John DeTreville [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, August 12, 2008 3:33:57 PM Subject: RE: Difference between Hadoop

Re: Random block placement

2008-08-12 Thread lohit
Hi John, This file should be a good starting point for you. src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java There have been discussions about a pluggable block placement policy https://issues.apache.org/jira/browse/HADOOP-3799 Thanks, Lohit - Original Message

Re: what is the correct usage of hdfs metrics

2008-08-08 Thread lohit
directory. Specify file name and period to monitor the metrics. Thanks, Lohit - Original Message From: Ivan Georgiev [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, August 8, 2008 4:39:36 AM Subject: what is the correct usage of hdfs metrics Hi, I have been unable
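As a sketch of the file-based metrics setup being described (context class and property names as used by the era's hadoop-metrics.properties; the path is illustrative):

```properties
# Write dfs metrics to a local file, flushed every 10 seconds.
dfs.class=org.apache.hadoop.metrics.file.FileContext
dfs.period=10
dfs.fileName=/tmp/dfs_metrics.log
```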

Re: namenode jobtracker: joint or separate, which is better?

2008-08-08 Thread lohit
down nodes for malicious programs, in such cases you do not want your jobtracker or namenode to be on those nodes. Also, running multiple jvms might slow down the node and your process. I would recommend you run at least the NameNode on a dedicated node. Thanks, Lohit - Original Message

Re: How to enable compression of blockfiles?

2008-08-08 Thread lohit
not splittable, meaning each map will consume the whole .gz file. Thanks, Lohit - Original Message From: Michael K. Tung [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, August 8, 2008 1:09:01 PM Subject: How to enable compression of blockfiles? Hello, I have a simple question
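One era-appropriate way around the non-splittable .gz problem is block-compressed SequenceFile output; a hedged hadoop-site.xml (or per-job JobConf) sketch, with property names as in 0.18-era Hadoop:

```xml
<!-- Compress job output as block-compressed SequenceFiles,
     which remain splittable, unlike whole-file .gz output. -->
<property>
  <name>mapred.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapred.output.compression.type</name>
  <value>BLOCK</value>
</property>
```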

Re: DFS. How to read from a specific datanode

2008-08-06 Thread lohit
(https://issues.apache.org/jira/secure/CreateIssue!default.jspa) as improvement request and continue the discussion there? -Lohit - Original Message From: Kevin [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Wednesday, August 6, 2008 10:37:44 AM Subject: Re: DFS. How to read from

Re: DFS. How to read from a specific datanode

2008-08-05 Thread lohit
I haven't tried it, but see if you can create a DFSClient object and use its open() and read() calls to get the job done. Basically you would have to force currentNode to be your node of interest in there. Just curious, what is the use case for your request? Thanks, Lohit - Original

Re: EOFException while starting name node

2008-08-04 Thread lohit
edits, start the namenode and run 'hadoop fsck /' to see if you have any corrupt files and fix/get rid of them. PS : Take a back up of dfs.name.dir before updating and playing around with it. Thanks, Lohit - Original Message From: steph [EMAIL PROTECTED] To: core-user@hadoop.apache.org

Re: How to control the map and reduce step sequentially

2008-07-29 Thread lohit
The wiki and documentation should help. Otherwise, please open a JIRA asking for better documentation; that will help everyone :) - Original Message From: Daniel Yu [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, July 29, 2008 9:22:00 AM Subject: Re: How to control the map and reduce step sequentially I am studying abroad right now, and my graduation project happens to use Hadoop and HBase. Having a Chinese-language community is a pretty nice thing

Re: Multiple master nodes

2008-07-29 Thread lohit
It would be really helpful for many if you could create a twiki of this. Those ideas could be used while implementing HA. Thanks, Lohit - Original Message From: paul [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Tuesday, July 29, 2008 11:56:44 AM Subject: Re: Multiple master

Re: partitioning the inputs to the mapper

2008-07-27 Thread lohit
://wiki.apache.org/hadoop/FAQ#10 -Lohit

Re: How to add/remove slave nodes on run time

2008-07-11 Thread lohit
the datanode and once decommissioned just kill DataNode process. This is described in there http://wiki.apache.org/hadoop/FAQ#17 Thanks, Lohit - Original Message From: Kevin [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, July 11, 2008 3:43:41 PM Subject: How to add/remove
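The decommission flow referenced in the FAQ boils down to pointing the namenode at an exclude file in hadoop-site.xml and then refreshing; a sketch (the exclude-file path is illustrative):

```xml
<!-- Hosts listed in this file are decommissioned after running:
     hadoop dfsadmin -refreshNodes
     Once a host reports Decommissioned, its datanode can be killed. -->
<property>
  <name>dfs.hosts.exclude</name>
  <value>/path/to/conf/excludes</value>
</property>
```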

Re: How to add/remove slave nodes on run time

2008-07-11 Thread lohit
tasktracker as well. Thanks, Lohit - Original Message From: Keliang Zhao [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, July 11, 2008 4:31:05 PM Subject: Re: How to add/remove slave nodes on run time May I ask what is the right command to start a datanode on a slave? I

Re: Is Hadoop Really the right framework for me?

2008-07-10 Thread lohit
this test src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java HTH, Lohit - Original Message From: Sandy [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Thursday, July 10, 2008 2:47:21 PM Subject: Is Hadoop Really the right framework for me? Hello, I have been posting

Re: Is Hadoop Really the right framework for me?

2008-07-10 Thread lohit
://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java copy it to your .mapred/lib directory, rebuild everything and try it out. I assume it should work, but I havent tried it out yet. Thanks, Lohit - Original Message From

Re: ERROR dfs.NameNode - java.io.EOFException

2008-07-05 Thread lohit
I remember dhruba telling me about this once. Yes, Take a backup of the whole current directory. As you have seen, remove the last line from edits and try to start the NameNode. If it starts, then run fsck to find out which file had the problem. Thanks, Lohit - Original Message From

Re: HDFS blocks

2008-06-27 Thread lohit
basically merges all small files into one file. In hadoop 0.18 we have archives and once HADOOP-1700 is done, one could open the file to append to it. Thanks, Lohit - Original Message From: Goel, Ankur [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, June 27, 2008 2:27:57 AM

Re: best command line way to check up/down status of HDFS?

2008-06-27 Thread lohit
If NameNode is down, secondary namenode does not serve requests. It is used to update the fsimage. (http://hadoop.apache.org/core/docs/r0.17.0/hdfs_user_guide.html#Secondary+Namenode) Thanks, Lohit - Original Message From: Miles Osborne [EMAIL PROTECTED] To: core-user

Re: Global Variables via DFS

2008-06-25 Thread lohit
/jambajuice); /code Thanks, Lohit - Original Message From: Steve Loughran [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Wednesday, June 25, 2008 9:15:55 AM Subject: Re: Global Variables via DFS javaxtreme wrote: Hello all, I am having a bit of a problem with a seemingly simple

Re: Compiling Word Count in C++ : Hadoop Pipes

2008-06-25 Thread lohit
ant -Dcompile.c++=yes compile-c++-examples I picked it up from build.xml Thanks, Lohit - Original Message From: Sandy [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Wednesday, June 25, 2008 10:44:20 AM Subject: Compiling Word Count in C++ : Hadoop Pipes Hi, I am currently

Re: Question about Hadoop

2008-06-12 Thread lohit
++. Check info about streaming here http://hadoop.apache.org/core/docs/r0.17.0/streaming.html And information about parsing XML files in streaming in here http://hadoop.apache.org/core/docs/r0.17.0/streaming.html#How+do+I+parse+XML+documents+using+streaming%3F Thanks, Lohit - Original

Re: Map Task timed out?

2008-06-12 Thread lohit
Yes, there is a timeout defined by mapred.task.timeout; the default was 600 seconds. And here silent means the task (either map or reduce) has not reported any status using the reporter you get with the map/reduce function. Thanks, Lohit - Original Message From: Edward J. Yoon [EMAIL PROTECTED
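The timeout described above is a single configuration knob; a sketch of setting it explicitly (the value is in milliseconds, so 600000 ms matches the 600-second default):

```xml
<!-- Fail a task that reports no status through the reporter
     for 10 minutes. Value is in milliseconds. -->
<property>
  <name>mapred.task.timeout</name>
  <value>600000</value>
</property>
```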

Re: setrep

2008-06-04 Thread lohit
seeing very long delays? Thanks, Lohit

Re: About Metrics update

2008-06-02 Thread lohit
In MetricsIntValue, incrMetrics() was being called on pushMetrics(), instead of setMetrics(). This used to cause the values to be incremented periodically. Thanks, Lohit - Original Message From: Ion Badita [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Saturday, May 31, 2008 4

Re: About Metrics update

2008-05-30 Thread lohit
Hi Ion, Which version of Hadoop are you using? The problem you reported about safeModeTime and fsImageLoadTime keep growing was fixed in 0.18 (or trunk) Thanks, Lohit - Original Message From: Ion Badita [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, May 30, 2008 8:10

Re: Making the case for Hadoop

2008-05-16 Thread lohit
You could also find some info about companies/projects using Hadoop at PoweredBy page http://wiki.apache.org/hadoop/PoweredBy Thanks, Lohit - Original Message From: Ted Dunning [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, May 16, 2008 10:02:25 AM Subject: Re: Making

Re: HDFS corrupt...how to proceed?

2008-05-12 Thread lohit
suggests that namenode state has been updated, meaning blocks which were missing earlier might be reported now. Check with full options and see which blocks from which files are missing. Thanks, Lohit - Original Message From: C G [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Sunday

Re: Corrupt HDFS and salvaging data

2008-05-09 Thread lohit
would have been added to new datanode. You can set the replication factor of a file using the 'hadoop dfs -setrep' command. Thanks, Lohit - Original Message From: Otis Gospodnetic [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Friday, May 9, 2008 7:16:42 AM Subject: Re: Corrupt HDFS

Re: Corrupt HDFS and salvaging data

2008-05-08 Thread lohit
it would ask for confirmation. Thanks, Lohit - Original Message From: Otis Gospodnetic [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Thursday, May 8, 2008 9:00:34 PM Subject: Re: Corrupt HDFS and salvaging data Hi, Update: It seems fsck reports HDFS is corrupt when

Re: hadoop and deprecation

2008-04-24 Thread lohit
If a method is deprecated in version 0.14, it could be removed in version 0.15 at the earliest; that is, it might be removed anytime starting with 0.15. - Original Message From: Karl Wettin [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Thursday, April 24, 2008 4:07:48 AM Subject: hadoop and

Re: Run DfsShell command after your job is complete?

2008-04-21 Thread lohit
Yes FsShell.java implements most of the Shell commands. You could also use the FileSystem API http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/fs/FileSystem.html Simple example http://wiki.apache.org/hadoop/HadoopDfsReadWriteExample Thanks, Lohit - Original Message From

Re: could only be replicated to 0 nodes, instead of 1

2008-04-12 Thread lohit
Can you check the datanode and namenode logs and see if all are up and running? I am assuming you are running this on single host hence replication of 1. Thanks, Lohit - Original Message From: John Menzer [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent: Saturday, April 12, 2008

Re: how to set logging level to debug

2008-04-10 Thread lohit
You could use hadoop daemonlog to get and set LOG levels to set FSNameSystem to DEBUG you would do something like this hadoop daemonlog -setlevel namenode:50070 org.apache.hadoop.dfs.FSNameSystem DEBUG Thanks, Lohit - Original Message From: Cagdas Gerede [EMAIL PROTECTED] To: core

Re: [core] problems while coping files from local file system to dfs

2008-03-24 Thread lohit
pointed out, use a node which is not in your cluster as a datanode. In this case, the first copy would be placed on a random node in the cluster because your client is no longer a datanode. Thanks, Lohit - Original Message From: Ted Dunning [EMAIL PROTECTED] To: core-user

Re: [core] problems while coping files from local file system to dfs

2008-03-24 Thread lohit
in your case you have only 2 copies and it increases the probability of losing replicas. There has been discussion about having different policies for different files, but this hasn't been implemented yet. On 24/03/2008, lohit [EMAIL PROTECTED] wrote: If your client use to copy is one of the datanodes

Re: using a perl script with argument variables which point to config files on the DFS as a mapper

2008-03-06 Thread Lohit
you could use -cacheFile or -file option for this. Check streaming doc for examples. On Mar 6, 2008, at 2:32 PM, Theodore Van Rooy [EMAIL PROTECTED] wrote: I would like to convert a perl script that currently uses argument variables to run with Hadoop Streaming. Normally I would

Re: Processing multiple files - need to identify in map

2008-03-04 Thread lohit
paths before submitting your job more info here http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/JobConf.html#addInputPath(org.apache.hadoop.fs.Path) Thanks, Lohit - Original Message From: Tarandeep Singh [EMAIL PROTECTED] To: core-user@hadoop.apache.org Sent

Re: how to recover if master node goes down?

2008-02-02 Thread lohit.vijayarenu
You should be able to see all your earlier files once the namenode is up again. Check this detailed document which describes how this is achieved. http://hadoop.apache.org/core/docs/r0.15.3/hdfs_design.html Thanks, Lohit - Original Message From: Ben Kucinich [EMAIL PROTECTED] To: core