Re: ZKFC ActiveBreadCrumb Value

2018-09-14 Thread Wellington Chevreuil
You could still use Harsh's solution programmatically, or maybe an easier way is to use HAUtil.getAddressOfActive() [1] for that? Ideally we should not need to query ZK directly. [1]
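
For illustration only, a minimal sketch of that lookup (assuming the method lives in org.apache.hadoop.hdfs.HAUtil with the signature getAddressOfActive(FileSystem), and that the client's core-site.xml/hdfs-site.xml describe the HA nameservice):

    import java.net.InetSocketAddress;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.hdfs.HAUtil;

    public class ActiveNnLookup {
      public static void main(String[] args) throws Exception {
        // Loads the HA client configuration from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Resolves the currently active NameNode's RPC address without querying ZK.
        InetSocketAddress active = HAUtil.getAddressOfActive(fs);
        System.out.println("Active NameNode: " + active);
      }
    }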

Re: How to use webhdfs CONCAT?

2017-07-27 Thread Wellington Chevreuil
01182348739=o02nv4on4FXbhlijJ+R/KXvhooQ="; > Path=/; Expires=Thu, 27-Jul-2017 19:05:48 GMT; HttpOnly > Transfer-Encoding: chunked > > {"RemoteException":{"exception":"FileNotFoundException","javaClassName":"java.io.FileNotFo

Re: How to use webhdfs CONCAT?

2017-07-25 Thread Wellington Chevreuil
Hi Cinyoung, Concat has some restrictions, like the source file's last block size needing to be the same as the configured dfs.block.size. If all the conditions are met, the command example below should work (where we are concatenating /user/root/file-2 into /user/root/file-1): curl -i -X
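
As a sketch (not from the original mail), the same CONCAT call issued from Java against the documented WebHDFS endpoint; the NameNode host/port and the user.name value are assumptions to adapt to your cluster:

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class WebHdfsConcat {
      public static void main(String[] args) throws Exception {
        // Appends /user/root/file-2 onto /user/root/file-1 (the source file is removed on success).
        URL url = new URL("http://namenode.example.com:50070/webhdfs/v1/user/root/file-1"
            + "?op=CONCAT&sources=/user/root/file-2&user.name=root");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        System.out.println("HTTP " + conn.getResponseCode()); // 200 with an empty body on success
        conn.disconnect();
      }
    }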

Re: HDFS HA(Based on QJM) Failover Frequently with Large FSimage and Busy Requests

2017-05-03 Thread Wellington Chevreuil
Hi Yizhou, Yes, this might be causing the failovers. I've seen situations where the download of a large fsimage from the SBNN, plus additional requests to the ANN, led to longer disk latency, which caused any Service RPC request that requires the HDFS WRITE LOCK to take longer to be processed. This can cause

Re: Encrypt a directory using some key (JAVA)

2016-12-14 Thread Wellington Chevreuil
Hi Aneela, All methods from the DFS CLI are exposed in the KMS HTTP REST API. Your Java application can then make HTTP requests to KMS. Here is an example of the related HTTP request format for creating a key: POST http://HOST:PORT/kms/v1/keys Content-Type: application/json { "name": "",
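
A hedged sketch of that create-key request sent from Java (the KMS host/port, key name, cipher and length are illustrative; with Kerberos you would need SPNEGO authentication instead of the plain user.name parameter):

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class KmsCreateKey {
      public static void main(String[] args) throws Exception {
        URL url = new URL("http://kms.example.com:16000/kms/v1/keys?user.name=hdfs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        // JSON body as described in the KMS REST API docs for key creation.
        String body = "{\"name\":\"mykey\",\"cipher\":\"AES/CTR/NoPadding\",\"length\":128}";
        try (OutputStream out = conn.getOutputStream()) {
          out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("HTTP " + conn.getResponseCode()); // 201 Created on success
        conn.disconnect();
      }
    }

The key created this way can then be used when setting up an encryption zone on the directory (an HDFS admin operation).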

Re: GridMix doesn't run ClassNotFoundException Rumen

2015-12-15 Thread Wellington Chevreuil
Hi Simone, You should make sure to include hadoop-rumen-2.6.0.jar on the classpath for the NodeManagers, or include it on the classpath of your job. > On 14 Dec 2015, at 09:56, siscia wrote: > > Hello folks, > > I am trying to run a simulation with GridMix but

Re: GridMix doesn't run ClassNotFoundException Rumen

2015-12-15 Thread Wellington Chevreuil
-2.6.1.jar stax-api-1.0-2.jar > hadoop-rumen-2.6.1.jar xmlenc-0.52.jar > hadoop-sls-2.6.1.jar xz-1.0.jar > hadoop-streaming-2.6.1.jar zookeeper-3.4.6.jar > > Am I doing something wrong ? How do I check the classpath of the >

Re: HTTPFS without impersonation

2015-06-03 Thread Wellington Chevreuil
Hi, do you have the below properties in the core-site.xml file used by your HDFS?

<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>

Hello all, We need to run several HTTPFS instances on our

Re: HTTPFS without impersonation

2015-06-03 Thread Wellington Chevreuil
If that doesn't work, you may need to define one entry for these properties for each user running an HttpFS instance. See below: http://hadoop.apache.org/docs/current/hadoop-hdfs-httpfs/ServerSetup.html On 03/06/2015 12:40, Wellington Chevreuil wellington.chevre...@gmail.com wrote: Hi, do you

Re: Connection Refused error on Hadoop2.6 running Ubuntu 15.04 Desktop on Pseudo-distributed mode

2015-04-27 Thread Wellington Chevreuil
There might be some FATAL/ERROR/WARN or Exception messages in this log file that can explain why the NN process is dying. Can you paste some of the last lines of the log file? On 27 Apr 2015, at 09:37, Susheel Kumar Gadalay skgada...@gmail.com wrote: jps listing is not showing namenode daemon.

Re: Connection Refused error on Hadoop2.6 running Ubuntu 15.04 Desktop on Pseudo-distributed mode

2015-04-27 Thread Wellington Chevreuil
Anand Murali 11/7, 'Anand Vihar', Kandasamy St, Mylapore Chennai - 600 004, India Ph: (044)- 28474593/ 43526162 (voicemail) On Monday, April 27, 2015 2:46 PM, Wellington Chevreuil wellington.chevre...@gmail.com wrote: There might be some FATAL/ERROR/WARN or Exception messages

Re: Connection Refused error on Hadoop2.6 running Ubuntu 15.04 Desktop on Pseudo-distributed mode

2015-04-27 Thread Wellington Chevreuil
/ 43526162 (voicemail) On Monday, April 27, 2015 4:16 PM, Wellington Chevreuil wellington.chevre...@gmail.com wrote: Hello Anand, This error means the NN could not find its metadata directory. You probably need to run the hadoop namenode -format command before trying to start hdfs

Re: Connection Refused error on Hadoop2.6 running Ubuntu 15.04 Desktop on Pseudo-distributed mode

2015-04-27 Thread Wellington Chevreuil
Hello Anand, Per your original email, this would be: /home/anand_vihar/hadoop-2.6.0/logs/hadoop-anand_vihar-namenode-Latitude-E5540.out Cheers. On 27 Apr 2015, at 09:41, Anand Murali anand_vi...@yahoo.com wrote: Susheel: Since I am new to this, what log file should I look for in the log

Re: Datanode not allowed to connect to the Namenode in Hadoop 2.3.0 cluster.

2014-08-05 Thread Wellington Chevreuil
You should have /etc/hosts properly configured on all your cluster nodes. On 5 Aug 2014, at 07:28, S.L simpleliving...@gmail.com wrote: when you say /etc/hosts file, do you mean only on the master or on both the master and slaves? On Tue, Aug 5, 2014 at 1:20 AM, Satyam Singh

Re: Exception in hadoop and java

2014-08-04 Thread Wellington Chevreuil
This indicates some lib version conflicts - UnsupportedOperationException: setXIncludeAware is not supported on this JAXP implementation or earlier: class gnu.xml.dom.JAXPFactory That class is in the gnujaxp jar. This chart API probably brought in a different version of this lib from the version

Re: Create HDFS directory fails

2014-07-29 Thread Wellington Chevreuil
Hmm, I'm not sure, but I think through the API you have to create each folder level one at a time. For instance, if your current path is /user/logger and you want to create /user/logger/dev2/tmp2, you have to first do hdfs.create(new Path(/user/logger/dev2)), then hdfs.create(new
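
For reference, a minimal sketch using the FileSystem API (note that in current Hadoop releases mkdirs() behaves like mkdir -p and creates any missing parent directories in one call; paths are taken from the example above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MkDirsExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem hdfs = FileSystem.get(conf);
        // Creates /user/logger/dev2 and /user/logger/dev2/tmp2 if they do not exist yet.
        hdfs.mkdirs(new Path("/user/logger/dev2/tmp2"));
      }
    }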

Re: One datanode is down then write/read starts failing

2014-07-28 Thread Wellington Chevreuil
Can you make sure you still have enough HDFS space once you kill this DN? If not, HDFS will automatically enter safemode if it detects there's no hdfs space available. The error message on the logs should have some hints on this. Cheers. On 28 Jul 2014, at 16:56, Satyam Singh

Re: Decommissioning a data node and problems bringing it back online

2014-07-24 Thread Wellington Chevreuil
You should not face any data loss. The replicas were just moved away from that node to other nodes in the cluster during decommission. Once you recommission the node and re-balance your cluster, HDFS will re-distribute replicas between the nodes evenly, and the recommissioned node will receive

Re: Replace a block with a new one

2014-07-17 Thread Wellington Chevreuil
Hi, there's no way to do that, as HDFS does not provide file updates features. You'll need to write a new file with the changes. Notice that even if you manage to find the physical block replica files on the disk, corresponding to the part of the file you want to change, you can't simply

Re: Submitting a mapreduce job to remote jobtracker

2014-07-16 Thread Wellington Chevreuil
Hi, You should have proper core-site.xml, hdfs-site.xml and mapred-site.xml files on your classpath. These files should be available in /etc/hadoop/conf, so that the hadoop jar command is able to load them. Thanks, Wellington. On 16 Jul 2014, at 06:25, harish tangella harish.tange...@gmail.com
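
A small sketch (not from the original mail) to verify which configuration the client actually picks up; the property names are the classic MR1 ones, and it should be run with the same classpath used for the hadoop jar command:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ConfCheck {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // If these still show local defaults, the *-site.xml files were not found on the classpath.
        System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS"));
        System.out.println("mapred.job.tracker = " + conf.get("mapred.job.tracker"));
        System.out.println("Filesystem in use: " + FileSystem.get(conf).getUri());
      }
    }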

Re: Exception in Jobtracker (java.lang.OutOfMemoryError: Java heap space)

2014-04-14 Thread Wellington Chevreuil
Hi Viswanathan, this looks like your job history is full, and is filling up your jobtracker heap: 2014-04-12 02:25:47,963 ERROR org.apache.hadoop.mapred.JobHistory: Unable to move history file to DONE canonical subfolder. java.lang.OutOfMemoryError: Java heap space Have you set any value

Re: Replication HDFS

2014-03-28 Thread Wellington Chevreuil
Hi Victor, if by replication you mean copying from one cluster to another, you can use the distcp command. Cheers. On 28 Mar 2014, at 16:30, Serge Blazhievsky hadoop...@gmail.com wrote: You mean replication between two different hadoop clusters or you just need data to be replicated between two

Re: How check sum are generated for blocks in data node

2014-03-28 Thread Wellington Chevreuil
Hi Reena, the pipeline is per block. If you have half of your file in data node A only, that means the pipeline had only one node (node A in this case, probably because the replication factor is set to 1), and so data node A has the checksums for its block. The same applies to data node B.

Re: How exactly Oozie works internally?

2013-08-12 Thread Wellington Chevreuil
Hi Kasa, did you create the oozie user on the target ssh server, and does this user have all the rights to execute what it should on the target server? Regards, Wellington. 2013/8/12 Kasa V Varun Tej kasava...@gmail.com Folks, I have been working on this oozie SSH action for the past 2 days. I'm

Re: Uploading file to HDFS

2013-04-19 Thread Wellington Chevreuil
Can't you use Flume for that? 2013/4/19 David Parks davidpark...@yahoo.com I just realized another trick you might try. The Hadoop dfs client can read input from STDIN, so you could use netcat to pipe the stuff across to HDFS without hitting the hard drive. I haven’t tried it, but here’s

Re: How to process only input files containing 100% valid rows

2013-04-19 Thread Wellington Chevreuil
How about using a combiner to mark as dirty all rows from a dirty file, for instance, putting a dirty flag as part of the key; then in the reducer you can simply ignore these rows and/or output the bad file name. It will still have to pass through the whole file, but at least it avoids the case where you

Re: Getting custom input splits from files that are not byte-aligned or line-aligned

2013-02-23 Thread Wellington Chevreuil
Hi, I think you'll have to implement your own custom FileInputFormat, using this lib you mentioned to properly read your file records and split them across map tasks. Regards, Wellington. On 23/02/2013 14:14, Public Network Services publicnetworkservi...@gmail.com wrote: Hi... I use an

Re: Newbie: HBase good for Tree like structure?

2013-02-19 Thread Wellington Chevreuil
Hi José, I think your structure is OK to define HBase row keys. The main issue you'll have then is how you'll be able to build these keys, so that you can properly access your tree nodes. Regarding your scalability concerns, you should not worry about starting with a small Hadoop/HBase cluster (even

Re: Hadoop 0.23.1 installation

2012-03-01 Thread Wellington Chevreuil
Hi, can you tell us how you are trying to format your HDFS? As it's a NoClassDefFoundError, your hadoop lib is probably not on your classpath. Thanks, Wellington. 2012/2/29 Marcos Ortiz mlor...@uci.cu: On 03/01/2012 04:48 AM, raghavendhra rahul wrote: Hi, I tried to configure hadoop

Re: job taking input file, which is being written by its preceding job's map phase

2012-02-09 Thread Wellington Chevreuil
Hi Harsh, I had noticed that this ChainMapper belongs to the old-API package (org.apache.hadoop.mapred instead of org.apache.hadoop.mapreduce). Although it takes generic Class types as its method arguments, is this class able to work with Mappers from the new-API package

Re: Hadoop map reduce merge algorithm

2012-01-12 Thread Wellington Chevreuil
Intermediate data from the map phase is written to disk by the Mapper. After that, the data will be sent to the Reducer(s), which perform 3 steps: - shuffle: where all output data from the mappers is fetched as input to the Reducer(s); - sort: output data from the mappers is merged and grouped by key. This is

Re: Hadoop file uploads

2011-10-04 Thread Wellington Chevreuil
Hey Sadak, you don't need to write an MR job for that. You can make your Java program use the Hadoop Java API instead. You would need to use FileSystem (http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/fs/FileSystem.html) and Path
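
A minimal sketch of such an upload (local and HDFS paths are illustrative; it assumes the cluster's core-site.xml/hdfs-site.xml are on the program's classpath):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsUpload {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Copies a local file into HDFS, creating the destination file on the cluster.
        fs.copyFromLocalFile(new Path("/tmp/local-file.txt"),
                             new Path("/user/sadak/local-file.txt"));
        fs.close();
      }
    }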