Re: Failed to start namenode.

2016-06-09 Thread Rakesh Radhakrishnan
Hi, Could you please check whether the Kerberos principal name is specified correctly in "hdfs-site.xml", since it is used to authenticate against Kerberos. If you are using the _HOST variable in hdfs-site.xml, ensure that the hostname resolves and matches the principal name. If the keytab file defined in "hdfs-
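
As a quick sanity check (a sketch only; the keytab path and realm below are placeholders, not from the original thread), you can verify that _HOST resolution and the keytab entries line up:

  # Show the hostname that _HOST will expand to (should be the FQDN)
  hostname -f
  # List the principals stored in the NN keytab (path is an example)
  klist -kt /etc/security/keytabs/nn.service.keytab
  # Try a manual login with the exact principal from hdfs-site.xml
  kinit -kt /etc/security/keytabs/nn.service.keytab nn/$(hostname -f)@EXAMPLE.COM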

Re: Failed to start namenode.

2016-06-09 Thread Rakesh Radhakrishnan
>> This JIRA gives you some background: https://issues.apache.org/jira/browse/HADOOP-4487 >> >> After the setup of your cluster, if you are still having issues with >> other services or HDFS, this is a diagnostic tool that can help you: >> https://github.com/stevelough

Re: Setting up secure Multi-Node cluster

2016-06-27 Thread Rakesh Radhakrishnan
Hi Aneela, IIUC, the Namenode and Datanode use the _HOST pattern in their principals, and separate principals need to be created for the NN and DN if they run on different machines. I hope the below explanation will help you. "dfs.namenode.kerberos.principal" is typically set to nn/_HOST@REALM. Each Namenode will
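
For reference, a minimal hdfs-site.xml sketch of the principal settings being described (the REALM is a placeholder):

  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>nn/_HOST@EXAMPLE.COM</value> <!-- _HOST expands to each NN's FQDN -->
  </property>
  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>dn/_HOST@EXAMPLE.COM</value> <!-- resolves to a distinct principal per DN host -->
  </property>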

Re: Error in Hbase backup from secure to normal cluster.

2016-07-11 Thread Rakesh Radhakrishnan
Hi, I hope you are executing the 'distcp' command from the secured cluster. Are you executing the command as a non-super user? Please share the command/way you are executing it so I can understand how you "entered super user credentials" and the -D command line args. Also, please share your hdf
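
For context, a common shape for such a copy from a secure to an insecure cluster looks like the sketch below (host names and paths are placeholders; the fallback flag is the usual knob for mixed-security distcp):

  # Run from the secure cluster; let the client fall back to
  # simple auth when talking to the insecure destination
  hadoop distcp -D ipc.client.fallback-to-simple-auth-allowed=true \
    hdfs://secure-nn:8020/hbase/backup hdfs://insecure-nn:8020/hbase/backup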

Re: Error in Hbase backup from secure to normal cluster.

2016-07-11 Thread Rakesh Radhakrishnan
. Regards, Rakesh On Mon, Jul 11, 2016 at 6:13 PM, Rakesh Radhakrishnan wrote: > Hi, > > I hope you are executing the 'distcp' command from the secured cluster. Are you > executing the command as a non-super user? Please share the > command/way you are executing t

Re: Error in Hbase backup from secure to normal cluster.

2016-07-12 Thread Rakesh Radhakrishnan
your information, while executing the restore command the table is > restored successfully without this kind of issue. > > So could you please explain how to solve this exception? > > Looking forward to your response, > > Thanks, > Matheskrishna > > > On Mon, Jul 11, 20

Re: Subcribe

2016-07-17 Thread Rakesh Radhakrishnan
Hi Sandeep, Please go through the web page "https://hadoop.apache.org/mailing_lists.html" and subscribe by following the steps there. Regards, Rakesh On Mon, Jul 18, 2016 at 8:32 AM, sandeep vura wrote: > Hi Team, > > please add my email id to the subscribe list. > > Regards, > Sandeep.v >

Re: Standby Namenode getting RPC latency alerts

2016-07-17 Thread Rakesh Radhakrishnan
Hi Sandeep, This alert could be triggered if the NN operations exceed a certain threshold value. Sometimes an increase in the RPC processing time increases the length of the call queue and results in this situation. Could you please provide more details about the client operations you are performing an
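
One way to watch these numbers (a sketch; the host, web port, and metric bean below assume default NameNode settings) is the NameNode's JMX servlet:

  # CallQueueLength and RpcProcessingTimeAvgTime indicate queue build-up
  curl 'http://nn-host:50070/jmx?qry=Hadoop:service=NameNode,name=RpcActivityForPort8020'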

Re: Hadoop Installation on Windows 7 in 64 bit

2016-07-17 Thread Rakesh Radhakrishnan
>>> I couldn't find folder 'conf' in hadoop home. Could you check the %HADOOP_HOME%/etc/hadoop/hadoop-env.cmd path? Maybe it is at U:/Desktop/hadoop-2.7.2/etc/hadoop/hadoop-env.cmd. Typically HADOOP_CONF_DIR will be set to %HADOOP_HOME%/etc/hadoop. Could you check the "HADOOP_CONF_DIR" env variable valu
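
A quick way to check from a Windows command prompt (a sketch, using the poster's 2.x layout):

  REM Show the current value, if any
  echo %HADOOP_CONF_DIR%
  REM Point it at the 2.x config directory if it is unset or wrong
  set HADOOP_CONF_DIR=%HADOOP_HOME%\etc\hadoop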

Re: About Archival Storage

2016-07-19 Thread Rakesh Radhakrishnan
Does that mean I should set dfs.replication to 1? If more than one, should I not use Lazy_Persist policies? The idea of the Lazy_Persist policy is that, while writing blocks, one replica will be placed in memory first and then lazily persisted to DISK. It doesn't mean that you are not
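
For reference, a minimal sketch of applying the policy (the path is an example):

  # Mark a directory for memory-first writes, then verify
  hdfs storagepolicies -setStoragePolicy -path /data/fast -policy LAZY_PERSIST
  hdfs storagepolicies -getStoragePolicy -path /data/fast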

Re: ZKFC do not work in Hadoop HA

2016-07-19 Thread Rakesh Radhakrishnan
Hi Alexandr, I could see the following warning message in your logs, and it is the reason for the unsuccessful fencing. Could you please check 'fuser' command execution on your system? 2016-07-19 14:43:23,705 WARN org.apache.hadoop.ha.SshFenceByTcpPort: PATH=$PATH:/sbin:/usr/sbin fuser -v -k -n tcp 8020
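
A quick check for the failing command (a sketch; the package name assumes an RHEL/CentOS-style distro):

  # sshfence relies on fuser being present on the target NN host
  which fuser || yum install -y psmisc
  # The same probe the fencer runs (without -k, so nothing is killed)
  fuser -v -n tcp 8020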

Re: ZKFC do not work in Hadoop HA

2016-07-19 Thread Rakesh Radhakrishnan
ve just installed it. ZKFC began to work properly! > > Best regards, > Alexandr > > On Tue, Jul 19, 2016 at 5:29 PM, Rakesh Radhakrishnan > wrote: > >> Hi Alexandr, >> >> I could see the following warning message in your logs, and it is the reason >> for the unsuc

Re: ZKFC fencing problem after the active node crash

2016-07-19 Thread Rakesh Radhakrishnan
Hi Alexandr, Since you powered off the Active NN machine, during failover the SNN timed out connecting to this machine and fencing failed. Typically, fencing methods should be configured not to allow multiple writers to the same shared storage. It looks like you are using 'QJM', and it supports the f
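
For reference, a common hdfs-site.xml sketch for QJM setups lists an always-succeeding shell fence after sshfence, so failover can proceed even when the old active host is unreachable (an assumption about your setup, not a universal recommendation):

  <property>
    <name>dfs.ha.fencing.methods</name>
    <!-- QJM itself already prevents split-brain writes to the edit log -->
    <value>
      sshfence
      shell(/bin/true)
    </value>
  </property>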

Re: About Archival Storage

2016-07-19 Thread Rakesh Radhakrishnan
ta from hot to cold automatically? Does it use an algorithm > like LRU or LFU? > > 2016-07-19 19:55 GMT+08:00 Rakesh Radhakrishnan : > >> >>>> Does that mean I should set dfs.replication to 1? If more than >> one, should I not use Lazy_Persist policies? >> >

Re: About Archival Storage

2016-07-19 Thread Rakesh Radhakrishnan
have come to cold, I don't need to tell it exactly what files/dirs > need to be moved now? > Of course I should tell it what files/dirs need monitoring. > > 2016-07-20 12:35 GMT+08:00 Rakesh Radhakrishnan : > >> >>> I have another question: the hdfs mover (A New
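
For reference, the mover invocation being discussed looks like this (the paths are examples):

  # Scan only the given paths and move replicas to match their storage policy
  hdfs mover -p /archive /colddata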

Re: Start client side daemon

2016-07-22 Thread Rakesh Radhakrishnan
Hi Kun, HDFS won't start any client-side object (for example, DistributedFileSystem). I can say: HDFS Client -> user applications access the file system using the HDFS client, a library that exports the HDFS file system interface. Perhaps you can visit the API docs, https://hadoop.apache.org/docs/r2.6
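
In practice this means any machine with the Hadoop client libraries and a configuration pointing at the cluster can act as a client; a sketch (the path and host are placeholders):

  # No daemon needed on the client box; just libraries + configuration
  export HADOOP_CONF_DIR=/etc/hadoop/conf   # contains core-site.xml / hdfs-site.xml
  hadoop fs -ls hdfs://nn-host:8020/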

Re: Start client side daemon

2016-07-22 Thread Rakesh Radhakrishnan
Ren wrote: > Thanks for your reply. So the clients can be located on any machine that > has the HDFS client library, correct? > > On Fri, Jul 22, 2016 at 10:50 AM, Rakesh Radhakrishnan > wrote: > >> Hi Kun, >> >> HDFS won't start any client-side object (for

Re: Improving recovery performance for degraded reads

2016-07-27 Thread Rakesh Radhakrishnan
can > display to the client, do you think striping would still help? > Is there a possibility that, since I know that all the segments of the HD > image would always be read together, by striping and distributing it on > different nodes, I am ignoring its spatial/temporal localit

Re: [DISCUSS] Retire BKJM from trunk?

2016-07-27 Thread Rakesh Radhakrishnan
If I remember correctly, Huawei also adopted the QJM component. I hope @Vinay might have discussed this internally in Huawei before starting this e-mail discussion thread. I'm +1 for removing the bkjm contrib from the trunk code. Also, there are quite a few open sub-tasks under the HDFS-3399 umbrella jira, whic

Re: Teradata into hadoop Migration

2016-08-01 Thread Rakesh Radhakrishnan
Hi Bhagaban, Perhaps you can try "Apache Sqoop" to transfer data to Hadoop from Teradata. Apache Sqoop provides an efficient approach for transferring large volumes of data between Hadoop-related systems and structured data stores. It allows support for a data store to be added as a so-called connector and
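
As a rough illustration of the connector approach (everything below — host, database, table, user — is a placeholder, and the Teradata connector/driver you end up using may differ):

  sqoop import \
    --connect jdbc:teradata://td-host/DATABASE=sales \
    --driver com.teradata.jdbc.TeraDriver \
    --username etl_user -P \
    --table ORDERS \
    --target-dir /user/etl/orders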

Re: Teradata into hadoop Migration

2016-08-04 Thread Rakesh Radhakrishnan
> Thanks in advance for your help. > > Bhagaban > > On Mon, Aug 1, 2016 at 6:07 PM, Rakesh Radhakrishnan > wrote: > >> Hi Bhagaban, >> >> Perhaps, you can try "Apache Sqoop" to transfer data to Hadoop from >> Teradata. Apache Sqoop provides an effici

Re: issue starting regionserver with SASL authentication failed

2016-08-06 Thread Rakesh Radhakrishnan
Hey Aneela, I've filtered the below output from your log messages. It looks like you have a "/ranger" directory under the root directory, and directory listing is working fine. Found 1 items drwxr-xr-x - hdfs supergroup 0 2016-08-02 14:44 /ranger I think it's putting all the log messa

Re: Cannot run Hadoop on Windows

2016-08-08 Thread Rakesh Radhakrishnan
Hi Atri, I suspect the problem is due to the space in the path -> "Program Files". Instead of C:\Program Files\Java\jdk1.8.0_101, please copy the JDK dir to C:\java\jdk1.8.0_101 and try once. Rakesh Intel On Mon, Aug 8, 2016 at 4:34 PM, Atri Sharma wrote: > Hi All, > > I am trying to run a compiled Hado
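
If you move the JDK, remember to point the environment at the space-free path as well (a sketch):

  REM In hadoop-env.cmd or the system environment; no spaces in the path
  set JAVA_HOME=C:\java\jdk1.8.0_101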

Re: Connecting JConsole to ResourceManager

2016-08-09 Thread Rakesh Radhakrishnan
Hi Atri, Do you mean something like jconsole [processID]? AFAIK, local JMX uses the local filesystem. I hope your processes are running under the same user, to ensure there are no permission issues. Also, could you please check the %TEMP% and %TMP% environment variables and make sure YOUR_USER_NAME
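
For local attach, a minimal sketch:

  REM List running JVMs with their main classes, then attach to the RM's PID
  jps -l
  jconsole <pid>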

Re: Journal nodes in HA

2016-08-12 Thread Rakesh Radhakrishnan
Hi Konstantinos, The typical deployment is three Journal Nodes (JNs); you can collocate two of the three JNs on the same machines where the Namenodes (2 NNs) are running. The third one can be deployed on a machine where a ZK server is running (assuming the ZK cluster has 3 nodes). I'd recommend having a dedica
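
For reference, the JN set is wired in via the shared edits URI, e.g. (host names and nameservice are placeholders):

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <!-- all three JNs, regardless of which machines they share -->
    <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
  </property>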

Re: Journal nodes in HA

2016-08-12 Thread Rakesh Radhakrishnan
mance benchmarks showing what bandwidth we can sustain with shared vs >> dedicated storage for the journal nodes? >> >> Thank you, >> Konstantinos >> >> >> >> >> On Fri, Aug 12, 2016 at 2:26 PM, Rakesh Radhakrishnan > > wrote: >> >>

Re: Is it possible to configure hdfs in a federation mode and in an HA mode in the same time?

2016-08-18 Thread Rakesh Radhakrishnan
Yes, it is possible to enable HA mode and Automatic Failover in a federated namespace. Following are some quick references; I feel it's worth reading these blogs to get more insight into this. I think you can start prototyping a test cluster with this and post your queries to this forum i
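
The configuration shape is the federated one, with the HA settings repeated per nameservice; a minimal sketch (all names are placeholders):

  <property>
    <name>dfs.nameservices</name>
    <value>ns1,ns2</value> <!-- two federated namespaces -->
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value> <!-- HA pair within ns1; repeat for ns2 -->
  </property>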

Re: Namenode Unable to Authenticate to QJM in Secure mode.

2016-08-19 Thread Rakesh Radhakrishnan
Hi Akash, In general, "GSSException: No valid credentials provided" means you don't have valid Kerberos credentials. I suspect some issue related to SPNEGO; could you please revisit all of your Kerberos-related configurations? Probably you can start from the below configuration. Please share *-si
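
The SPNEGO side is driven by the web-authentication properties; a sketch of what to re-check (realm and keytab path are placeholders):

  <property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@EXAMPLE.COM</value> <!-- must use the HTTP/ service name -->
  </property>
  <property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/etc/security/keytabs/spnego.service.keytab</value>
  </property>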

Re: java.lang.NoSuchFieldError: HADOOP_CLASSPATH

2016-08-26 Thread Rakesh Radhakrishnan
Hi Senthil, There might be a case of including the wrong version of a jar file. Could you please check the "Environment.HADOOP_CLASSPATH" enum variable in the "org.apache.hadoop.yarn.api.ApplicationConstants" class in your hadoop jar file? I think it is throwing "NoSuchFieldError" as it's not seeing th
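
A quick way to hunt for the mismatch (a sketch; adjust the search paths to your install):

  # See which jars are actually on the classpath
  hadoop classpath | tr ':' '\n' | grep yarn-api
  # Check whether more than one version of the jar is present
  find /opt /usr/lib -name 'hadoop-yarn-api*.jar' 2>/dev/null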

Re: Running a HA HDFS cluster on alpine linux

2016-08-28 Thread Rakesh Radhakrishnan
Hi Francis, There could be connection fluctuations between the ZKFC and the ZK server; I've observed the following message in your logs. I'd suggest you start analyzing all your ZooKeeper servers' log messages and check the ZooKeeper cluster status during this period. BTW, could you tell me the ZK
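
To check cluster status quickly (a sketch; hosts/ports are placeholders, and the four-letter-word command assumes it is enabled on your ZK version):

  # Ask each ZK server for its role and connection stats
  echo stat | nc zk1 2181
  # Or use the bundled script on each ZK host
  zkServer.sh status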

Re: java.lang.NoSuchFieldError: HADOOP_CLASSPATH

2016-08-29 Thread Rakesh Radhakrishnan
’t see the ENUM HADOOP_CLASSPATH in Yarn API .. > > > > --Senthil > > *From:* Rakesh Radhakrishnan [mailto:rake...@apache.org] > *Sent:* Friday, August 26, 2016 8:26 PM > *To:* kumar, Senthil(AWF) > *Cc:* user.hadoop > *Subject:* Re: java.lang.NoSuchFieldError: HADOOP_CLAS

Re: HDFS Balancer Stuck after 10 Minz

2016-09-08 Thread Rakesh Radhakrishnan
, Rakesh Radhakrishnan wrote: > Have you taken multiple thread dumps (jstack) and observed the operations > being performed during this period of time? Perhaps there could be a > high chance it is searching for data blocks which it can move around to > balance the cluster. > > Cou

Re: HDFS ACL | Unable to define ACL automatically for child folders

2016-09-18 Thread Rakesh Radhakrishnan
It looks like '/user/test3' has owner "hdfs" and is denying access while performing operations as the "shashi" user. One idea is to recursively set the ACL on sub-directories and files as follows: hdfs dfs -setfacl -R -m default:user:shashi:rwx /user The -R option can be used to

Re: HDFS ACL | Unable to define ACL automatically for child folders

2016-09-19 Thread Rakesh Radhakrishnan
at 11:22 AM, Shashi Vishwakarma < shashi.vish...@gmail.com> wrote: > Thanks Rakesh. > > Just last question, is there any Java API available for recursively > applying ACL or I need to iterate on all folders of dir and apply acl for > each? > > Thanks > Shashi >

Re: hdfs2.7.3 kerberos can not startup

2016-09-20 Thread Rakesh Radhakrishnan
>> Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user Could you please check whether the Kerberos principal name is specified correctly in "hdfs-site.xml", since it is used to authenticate against Kerberos. If the keytab file defined in "hdfs-site.xml" doesn't exist or
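
That particular LoginException usually points at the keytab file itself; a quick check (the path is a placeholder):

  # The file must exist and be readable by the user running the NN
  ls -l /etc/security/keytabs/nn.service.keytab
  # The entries must contain the exact principal named in hdfs-site.xml
  klist -kt /etc/security/keytabs/nn.service.keytab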

Re: hdfs2.7.3 kerberos can not startup

2016-09-21 Thread Rakesh Radhakrishnan
>>> For the namenode httpserver start failure, please check Rakesh's comments. >>> >>> This is probably due to some missing configuration. >>> >>> Could you please re-check the ssl-server.xml,

Re: hdfs2.7.3 not work with kerberos

2016-09-21 Thread Rakesh Radhakrishnan
I could see "Ticket cache: KEYRING:persistent:1004:1004" in your env. May be KEYRING persistent cache setting is causing trouble, Kerberos libraries to store the krb cache in a location and the Hadoop libraries can't seem to access it. Please refer these links, https://community.hortonworks.com/q

Re: LeaseExpiredException: No lease on /user/biadmin/analytic‐root/SX5XPWPPDPQH/.

2016-10-16 Thread Rakesh Radhakrishnan
Hi Jian Feng, Could you please check your code for any possibility of simultaneous access to the same file? Mostly this situation happens when multiple clients try to access the same file. Code reference: https://github.com/apache/hadoop/blob/branch-2.2/hadoop-hdfs-project/hadoop-hdfs/s

Re: Connecting Hadoop HA cluster via java client

2016-10-18 Thread Rakesh Radhakrishnan
Hi, dfs.namenode.http-address is the fully-qualified HTTP address for each NameNode to listen on. Similarly to the rpc-address configuration, set the addresses for both NameNodes' HTTP servers (Web UI) to listen on; you can then browse the status of the Active/Standby NN in a web browser. Also, hdfs supports
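
For an HA pair the pattern is one entry per NameNode ID (the nameservice and hosts below are placeholders):

  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>machine1.example.com:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>machine2.example.com:50070</value>
  </property>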

Re: Erasure Coding Policies

2019-07-01 Thread Rakesh Radhakrishnan
Hi, RS Legacy is a pure Java-based implementation. Probably you can look at the encoding/decoding logic in the github repo: https://github.com/Jerry-Xin/hadoop/blob/master/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/RSRawEncoderLegacy.java https://github.c
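
To try the legacy codec end-to-end (a sketch; the directory is a placeholder, and RS-LEGACY-6-3-1024k assumes the built-in policy set of Hadoop 3.x):

  # List built-in policies, enable the legacy one, and apply it to a dir
  hdfs ec -listPolicies
  hdfs ec -enablePolicy -policy RS-LEGACY-6-3-1024k
  hdfs ec -setPolicy -path /data/ec -policy RS-LEGACY-6-3-1024k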