Self-healing hdfs functionality

2018-02-02 Thread sidharth kumar
face in day-to-day life which can be automated as a self-healing feature. So I would like to request all community members to provide a list of issues they face every day that can be taken up as features for self-healing HDFS. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 LinkedIn: www.linkedin.com

Hbase trace

2018-01-24 Thread sidharth kumar
Hi Team, I want to know what read and write operations are being carried out on HBase. I enabled trace in log4j but could not get the info. Could you please help me understand how to extract this info from HBase and which log would give me better info. Warm Regards Sidharth Kumar | Mob: +91 8197
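For reference, per-logger TRACE levels in HBase are set in log4j.properties on the RegionServers; a minimal sketch (the exact logger granularity is an assumption, check your HBase version's docs):

```properties
# log4j.properties on each RegionServer (restart, or use the "Log Level"
# servlet in the RegionServer web UI, to apply). Output lands in the
# RegionServer log, which is where per-request read/write detail lives.
log4j.logger.org.apache.hadoop.hbase.regionserver=TRACE
```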

Local read-only users in ambari

2017-12-01 Thread sidharth kumar
server and register the cluster as a remote cluster and set up Hive views, but I still have the same problem. Kindly help me resolve this. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Apache ambari

2017-09-08 Thread sidharth kumar
Hi, Apache Ambari is open source. So, can we set up Apache Ambari to manage an existing Apache Hadoop cluster? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkumar2792

spark on yarn error -- Please help

2017-08-28 Thread sidharth kumar
Hi, I have configured Apache Spark on YARN. I am able to run a MapReduce job successfully, but spark-shell gives the error below. Kindly help me resolve this issue. *SPARK-DEFAULT.CONF* spark.master spark://master2:7077 spark.eventLog.enabled true
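For what it's worth, spark.master spark://master2:7077 points spark-shell at a standalone Spark master rather than YARN; a hedged sketch of a spark-defaults.conf for YARN mode (the event-log path is an assumed example):

```properties
# spark-defaults.conf -- submit to YARN instead of a standalone master
spark.master            yarn
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs:///spark-logs
```

With spark.master set to yarn, spark-shell picks up the cluster location from the Hadoop configuration in HADOOP_CONF_DIR/YARN_CONF_DIR.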

Hadoop 3.0

2017-07-09 Thread sidharth kumar
Hi, is there any documentation from which we can learn what changes are targeted in Hadoop 3.0? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: reconfiguring storage

2017-07-07 Thread sidharth kumar
Hi, just want to add to Daemeon's point: if the misconfiguration happened on a couple of nodes, it's better to do it one node at a time, or else take a backup of your data. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: Kafka or Flume

2017-07-02 Thread Sidharth Kumar
Thank you very much for your help. What about the flow NiFi --> Kafka --> Storm for real-time processing, and then storing into HBase? Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 02-Jul-2017 12:40 PM,

Re: Kafka or Flume

2017-07-01 Thread Sidharth Kumar
ta stored in hadoop. So can you suggest a flow with a little more in detail Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 01-Jul-2017 9:46 PM, "Gagan Brahmi" <gaganbra...@gmail.com> wrote: I'd say the da

Re: Kafka or Flume

2017-07-01 Thread Sidharth Kumar
confluent Kafka HDFS >> connector) to put data into HDFS then >> Write tool to read data from topic, validate and store in other topic. >> >> We are using combination of these steps to process over 10 million >> events/second. >> >> I hope it helps.. >>

RE: Kafka or Flume

2017-06-29 Thread Sidharth Kumar
Thanks! What about Kafka with Flume? I would also like to mention that the everyday data intake is in the millions and we can't afford to lose even a single piece of data, which creates a need for high availability. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn

Kafka or Flume

2017-06-29 Thread Sidharth Kumar
of historical data which is stored in Hadoop. So, my question is: which ingestion tool will be best for this, Kafka or Flume? Any suggestions would be a great help. Warm Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792

RE: Lots of warning messages and exception in namenode logs

2017-06-29 Thread Sidharth Kumar
Regards Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 29-Jun-2017 3:45 PM, "omprakash" <ompraka...@cdac.in> wrote: > Hi Ravi, > > > > I have 5 nodes in Hadoop cluster and all have same configurations. Aft

Re: GARBAGE COLLECTOR

2017-06-19 Thread Sidharth Kumar
of what it does. > > On Mon, 19 Jun 2017 at 14:20 Sidharth Kumar <sidharthkumar2...@gmail.com> > wrote: > >> Hi Team, >> >> How feasible will it be, if I configure CMS Garbage collector for Hadoop >> daemons and configure G1 for Map Reduce jobs which run for hour

GARBAGE COLLECTOR

2017-06-19 Thread Sidharth Kumar
Hi Team, how feasible would it be if I configured the CMS garbage collector for the Hadoop daemons and G1 for MapReduce jobs which run for hours? Thanks for your help...! -- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>
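Mixing collectors this way is workable because the daemon JVMs and the task JVMs are configured independently; a sketch, with flag placement assumed for Hadoop 2.x (the -XX flags themselves are standard HotSpot options):

```shell
# hadoop-env.sh -- CMS for the long-lived daemon JVMs
export HADOOP_NAMENODE_OPTS="-XX:+UseConcMarkSweepGC ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-XX:+UseConcMarkSweepGC ${HADOOP_DATANODE_OPTS}"

# For the MapReduce task JVMs, set G1 in mapred-site.xml instead, e.g.:
#   mapreduce.map.java.opts    = -Xmx2g -XX:+UseG1GC
#   mapreduce.reduce.java.opts = -Xmx4g -XX:+UseG1GC
```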

Re: How to monitor YARN application memory per container?

2017-06-13 Thread Sidharth Kumar
Hi, I guess you can get it from http://:/jmx or /metrics Regards Sidharth LinkedIn: www.linkedin.com/in/sidharthkumar2792 On 13-Jun-2017 6:26 PM, "Shmuel Blitz" wrote: > (This question has also been published on StackOveflow >

Re: When i run wordcount of Hadoop in Win10, i got wrong info

2017-06-12 Thread Sidharth Kumar
Check /tmp directory permissions and owners Sidharth On 13-Jun-2017 3:20 AM, "Deng Yong" wrote: > D:\hdp\sbin>yarn jar d:/hdp/share/hadoop/mapreduce/ > hadoop-mapreduce-examples-2.7.3.jar wordcount /aa.txt /out > > 17/06/10 15:27:32 INFO client.RMProxy: Connecting to

How to Contribute as hadoop admin

2017-05-31 Thread Sidharth Kumar
Hi, I have been working as a Hadoop admin for 2 years. I subscribed to this group 3 months ago, but since then I have never been able to figure out what a Hadoop admin can contribute. It would be great if someone helped me find a way to contribute to Hadoop 3.0 development. Thanks for help in

Re: Why hdfs don't have current working directory

2017-05-26 Thread Sidharth Kumar
riharan > > On Fri, May 26, 2017 at 3:44 PM, Sidharth Kumar < > sidharthkumar2...@gmail.com> wrote: > >> Hi, >> >> Can you kindly explain me why hdfs doesnt have current directory concept. >> Why Hadoop is not implement to use pwd? Why command like cd and P

Why hdfs don't have current working directory

2017-05-26 Thread Sidharth Kumar
Hi, can you kindly explain why HDFS doesn't have a current-directory concept? Why is Hadoop not implemented to use pwd? Why can commands like cd and pwd not be implemented in HDFS? Regards Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: access error while trying to run distcp from source cluster

2017-05-25 Thread Sidharth Kumar
Hi, it may be because the user doesn't have write permission on the destination cluster path. For example: $ su - abcde $ hadoop distcp /data/sample1 hdfs://destclstnn:8020/data/ In the above case, user abcde should have write permission on the destination path hdfs://destclstnn:8020/data/ Regards
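The fix above can be checked and applied from the destination side; a hedged sketch (requires a running cluster; the choice of chown vs. chmod and the mode bits are assumptions):

```shell
# On the destination cluster, as the HDFS superuser:
hdfs dfs -ls hdfs://destclstnn:8020/data          # inspect current owner/perms
hdfs dfs -chown abcde hdfs://destclstnn:8020/data # give abcde ownership, or...
hdfs dfs -chmod 775 hdfs://destclstnn:8020/data   # ...open group write instead
```

After that, re-run the distcp as user abcde.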

Re: Block pool error in datanode

2017-05-24 Thread Sidharth Kumar
So I guess this is due to a change in the block pool ID. If you have an older fsimage backup, start the namenode using that fsimage, or delete the current directory of the datanodes' HDFS storage and re-format the namenode once again. Regards Sidharth Mob: +91 819799 LinkedIn:

RE: Hdfs default block size

2017-05-22 Thread Sidharth Kumar
/hdfs-default.xml > > > > Regards > > Surendra > > > > > > *From:* Sidharth Kumar [mailto:sidharthkumar2...@gmail.com] > *Sent:* 22 May 2017 19:36 > *To:* common-u...@hadoop.apache.org > *Subject:* Hdfs default block size > > > > Hi, >

Hdfs default block size

2017-05-22 Thread Sidharth Kumar
Hi, can you kindly tell me what the default block size is in Apache Hadoop 2.7.3? Is it 64 MB or 128 MB? Thanks Sidharth
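(For the record: in Apache Hadoop 2.x, including 2.7.3, dfs.blocksize defaults to 134217728 bytes, i.e. 128 MB; 64 MB was the 1.x default.) A sketch of overriding it in hdfs-site.xml, with 256 MB as an arbitrary example value:

```xml
<!-- hdfs-site.xml: dfs.blocksize is in bytes; 268435456 = 256 MB -->
<property>
  <name>dfs.blocksize</name>
  <value>268435456</value>
</property>
```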

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-17 Thread Sidharth Kumar
shan.patha...@gmail.com> wrote: Apologies for the delayed reply, was away due to some personal issues. I tried the telnet command as well, but no luck. I get the response that 'Name or service not known' Thanks Bhushan Pathak Thanks Bhushan Pathak On Wed, May 3, 2017 at 7:48 AM, Sidh

Re: Hadoop 2.7.3 cluster namenode not starting

2017-05-02 Thread Sidharth Kumar
Can you check whether the ports are open by running the telnet command? Run the command below from the source machine to the destination machine and check if this helps: $telnet Ex: $telnet 192.168.1.60 9000 Let's Hadooping! Bests Sidharth Mob: +91 819799 LinkedIn: www.linkedin.com/in/sidharthkumar2792

Re: Hdfs read and write operation

2017-04-20 Thread Sidharth Kumar
Hi, could anyone kindly help me clear my doubts below? Thanks On 19-Apr-2017 8:08 PM, "Sidharth Kumar" <sidharthkumar2...@gmail.com> wrote: Hi, please help me to understand it 1) If we read the anatomy of the HDFS read in the Hadoop definitive guide, it says the data queue is consumed by s

Re: Hadoop namespace format user and permissions

2017-04-20 Thread Sidharth Kumar
Hi James, please create a user hadoop or hdfs and change the ownership of the directory to hdfs:hadoop. HDFS runs as the hdfs user. This should probably resolve your issue. If you need, I can share a document which I made for a pseudo-mode installation to help my mates. Please let me know if the issue still

Hdfs read and write operation

2017-04-19 Thread Sidharth Kumar
Hi, please help me to understand this. 1) If we read the anatomy of the HDFS read in the Hadoop definitive guide, it says the data queue is consumed by the streamer. So, can you just tell me whether there will be only one streamer in a cluster which consumes packets from the data queue and creates a pipeline for each packet to

Re: Disk full errors in local-dirs, what data is stored in yarn.nodemanager.local-dirs?

2017-04-12 Thread Sidharth Kumar
tory > > > > - > To unsubscribe, e-mail: user-unsubscr...@hadoop.apache.org > For additional commands, e-mail: user-h...@hadoop.apache.org > > -- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>

Re: Anatomy of read in hdfs

2017-04-10 Thread Sidharth Kumar
e: > > > On Mon, Apr 10, 2017 at 11:46 AM, Sidharth Kumar < > sidharthkumar2...@gmail.com> wrote: > >> Thanks Philippe, >> >> I am looking for answer only restricted to HDFS. Because we can do read >> and write operations from CLI using commands like &qu

Re: Anatomy of read in hdfs

2017-04-10 Thread Sidharth Kumar
t; block size). There is no java framework per se for splitting up an file >> (technically not so, but let's simplify, outside of your own custom code). >> >> >> *...* >> >> >> >> *Daemeon C.M. ReiydelleUSA (+1) 415.501.0198 L

Re: Anatomy of read in hdfs

2017-04-09 Thread Sidharth Kumar
one java program then it's just a single thread process and will read the data sequentially. On Friday, April 7, 2017, Sidharth Kumar <sidharthkumar2...@gmail.com> wrote: > Thanks for your response . But I dint understand yet,if you don't mind can > you tell me what do you mea

Re: Anatomy of read in hdfs

2017-04-07 Thread Sidharth Kumar
processing framework like MapReduce. Regards, Philippe On Thu, Apr 6, 2017 at 9:55 PM, Sidharth Kumar <sidharthkumar2...@gmail.com> wrote: > Hi Genies, > > I have a small doubt that hdfs read operation is parallel or sequential > process. Because from my understanding i

Anatomy of read in hdfs

2017-04-06 Thread Sidharth Kumar
Hi Genies, I have a small doubt: is the HDFS read operation a parallel or a sequential process? From my understanding it should be parallel, but if I read "hadoop definitive guide 4", in the anatomy of a read it says "*Data is streamed from the datanode back **to the client, which calls read()

Customize Sqoop default property

2017-04-06 Thread Sidharth Kumar
Hi, I am importing data from an RDBMS into Hadoop using Sqoop, but my RDBMS data is multi-valued and contains the "," special character. While importing data into Hadoop, Sqoop by default separates the columns with the "," character. Is there any property through which we can customize
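Sqoop does expose delimiter controls; a command sketch (the connect string, table, and target dir are hypothetical; --fields-terminated-by and --optionally-enclosed-by are real Sqoop import options):

```shell
# Hypothetical connection and table; the delimiter flags are the point here.
sqoop import \
  --connect jdbc:mysql://dbhost/mydb \
  --table orders \
  --fields-terminated-by '\001' \
  --optionally-enclosed-by '"' \
  --target-dir /data/orders
```

Using the non-printing ^A (\001) delimiter, or quoting fields, keeps embedded commas in the data from being read as column boundaries.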

Request for Hadoop mailing list subscription and 3.0.0 issues

2017-03-28 Thread Sidharth Kumar
the configurations. While the same set of configurations worked fine for Hadoop 2.7.2 and other stable versions. Thanks for your help in advance -- Regards Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn <https://www.linkedin.com/in/sidharthkumar2792/>