face in day-to-day life and which could be automated as a self-healing feature.
So I would like to request all community members to provide a list of issues they
face every day that could be taken up as features for a self-healing HDFS.
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599
LinkedIn:www.linkedin.com
Hi Team,
I want to know which read and write request operations are being carried out on
HBase. I enabled TRACE in log4j but could not get the information. Could you please
help me extract this information from HBase, and tell me which log would give me better detail.
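(A minimal sketch of one way to surface per-request activity, assuming log4j.properties is used; the logger names below are the standard HBase region-server and RPC server loggers, and TRACE output is very verbose, so it is usually enabled only briefly:)
# hedged example: raise region-server and RPC logging to TRACE in conf/log4j.properties
log4j.logger.org.apache.hadoop.hbase.regionserver=TRACE
log4j.logger.org.apache.hadoop.hbase.ipc.RpcServer=TRACE
Aggregate read/write request counters per region are also exposed on each region server's /jmx endpoint, which may be enough if individual requests are not needed.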
Warm Regards
Sidharth Kumar | Mob: +91 8197
server and registered the
cluster as a remote cluster and set up Hive Views, but I still have the same problem.
Kindly help me resolve this.
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599
LinkedIn: www.linkedin.com/in/sidharthkumar2792
Hi,
Apache Ambari is open source. So, can we set up Apache Ambari to manage an existing
Apache Hadoop cluster?
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367
LinkedIn:www.linkedin.com/in/sidharthkumar2792
Hi,
I have configured Apache Spark over YARN. I am able to run MapReduce jobs
successfully, but spark-shell gives the error below.
Kindly help me resolve this issue.
*spark-defaults.conf*
spark.master spark://master2:7077
spark.eventLog.enabled true
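(A minimal sketch, not a confirmed fix since the actual error is not shown above: when Spark is meant to run over YARN, spark.master is normally set to yarn rather than a standalone spark:// URL such as spark://master2:7077. The event-log directory below is a hypothetical path.)
# hedged sketch of spark-defaults.conf for a YARN deployment
spark.master                 yarn
spark.eventLog.enabled       true
spark.eventLog.dir           hdfs:///spark-logs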
Hi,
Is there any documentation through which we can find out what changes are
targeted for Hadoop 3.0?
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367
LinkedIn:www.linkedin.com/in/sidharthkumar2792
Hi,
Just want to add to Daemeon's point: if the misconfiguration happened on a couple of
nodes, it's better to fix them one at a time, or else take a backup of your data first.
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599 / 7892 192 367
LinkedIn:www.linkedin.com/in/sidharthkumar2792
Thank you very much for your help. What about a flow such as NiFi --> Kafka
--> Storm for real-time processing, and then storing into HBase?
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn:
www.linkedin.com/in/sidharthkumar2792
On 02-Jul-2017 12:40 PM,
ta
stored in Hadoop.
So can you suggest a flow in a little more detail?
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn:
www.linkedin.com/in/sidharthkumar2792
On 01-Jul-2017 9:46 PM, "Gagan Brahmi" <gaganbra...@gmail.com> wrote:
I'd say the da
confluent Kafka HDFS
>> connector) to put data into HDFS, then
>> write a tool to read data from the topic, validate it, and store it in another topic.
>>
>> We are using a combination of these steps to process over 10 million
>> events/second.
>>
>> I hope it helps.
>>
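(For reference, a minimal sketch of what a Confluent HDFS sink connector configuration typically looks like; the connector name, topic, and NameNode address below are illustrative, not taken from the thread:)
# hedged example of an HDFS sink connector properties file
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=4
topics=validated-events
hdfs.url=hdfs://namenode:8020
flush.size=10000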
Thanks! What about Kafka with Flume? I would also like to mention that the
daily data intake is in the millions, and we can't afford to lose even a single
piece of data, which creates a need for high availability.
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn
of
historical data which is stored in Hadoop. So, my question is: which
ingestion tool will be best for this, Kafka or Flume?
Any suggestions would be a great help to me.
Warm Regards
Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn:
www.linkedin.com/in/sidharthkumar2792
Regards
Sidharth Kumar | Mob: +91 8197 555 599/7892 192 367 | LinkedIn:
www.linkedin.com/in/sidharthkumar2792
On 29-Jun-2017 3:45 PM, "omprakash" <ompraka...@cdac.in> wrote:
> Hi Ravi,
>
>
>
> I have 5 nodes in the Hadoop cluster and all have the same configuration. Aft
of what it does.
>
> On Mon, 19 Jun 2017 at 14:20 Sidharth Kumar <sidharthkumar2...@gmail.com>
> wrote:
>
>> Hi Team,
>>
>> How feasible will it be if I configure the CMS garbage collector for Hadoop
>> daemons and configure G1 for MapReduce jobs which run for hour
Hi Team,
How feasible will it be if I configure the CMS garbage collector for Hadoop
daemons and configure G1 for MapReduce jobs which run for hours?
Thanks for your help ...!
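(A minimal sketch of how that split is commonly expressed, assuming a Hadoop 2.x layout; the heap sizes are illustrative only:)
# hedged sketch in hadoop-env.sh: CMS for the long-lived daemons
export HADOOP_NAMENODE_OPTS="-XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-XX:+UseConcMarkSweepGC $HADOOP_DATANODE_OPTS"
# hedged sketch for MapReduce tasks (per job or in mapred-site.xml): G1 for long-running jobs
mapreduce.map.java.opts=-Xmx2048m -XX:+UseG1GC
mapreduce.reduce.java.opts=-Xmx4096m -XX:+UseG1GC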
--
Regards
Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn
<https://www.linkedin.com/in/sidharthkumar2792/>
Hi,
I guess you can get it from http://<host>:<port>/jmx or from
/metrics
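(A minimal sketch, assuming a Hadoop 2.x NameNode with the default HTTP port 50070; replace the host placeholder with the actual NameNode address:)
curl http://<namenode-host>:50070/jmx
# the qry parameter narrows the output to a single bean, e.g. the FSNamesystem state
curl "http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"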
Regards
Sidharth
LinkedIn: www.linkedin.com/in/sidharthkumar2792
On 13-Jun-2017 6:26 PM, "Shmuel Blitz" wrote:
> (This question has also been published on StackOverflow
>
Check the /tmp directory permissions and owners.
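(On a Linux node, a minimal sketch of that check; /tmp is normally world-writable with the sticky bit. This may not apply directly to the Windows paths shown in the message below.)
ls -ld /tmp            # expect drwxrwxrwt
sudo chmod 1777 /tmp   # restore the usual mode if it differs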
Sidharth
On 13-Jun-2017 3:20 AM, "Deng Yong" wrote:
> D:\hdp\sbin>yarn jar d:/hdp/share/hadoop/mapreduce/
> hadoop-mapreduce-examples-2.7.3.jar wordcount /aa.txt /out
>
> 17/06/10 15:27:32 INFO client.RMProxy: Connecting to
Hi,
I have been working as a Hadoop admin for 2 years. I subscribed to this
group 3 months ago, but since then I have never been able to figure out what
a Hadoop admin can contribute. It would be great if someone could
help me out with contributing to Hadoop 3.0 development.
Thanks for help in
riharan
>
> On Fri, May 26, 2017 at 3:44 PM, Sidharth Kumar <
> sidharthkumar2...@gmail.com> wrote:
>
>> Hi,
>>
>> Can you kindly explain to me why HDFS doesn't have a current-directory concept?
>> Why is Hadoop not implemented to use pwd? Why can commands like cd and P
Hi,
Can you kindly explain to me why HDFS doesn't have a current-directory concept?
Why is Hadoop not implemented to use pwd? Why can't commands like cd and pwd
be implemented in HDFS?
Regards
Sidharth
Mob: +91 819799
LinkedIn: www.linkedin.com/in/sidharthkumar2792
Hi,
It may be because the user doesn't have write permission on the destination
cluster path.
For example:
$ su - abcde
$ hadoop distcp /data/sample1 hdfs://destclstnn:8020/data/
So, in the above case, user abcde should have write permission on the
destination path hdfs://destclstnn:8020/data/
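(A minimal sketch of checking and granting that permission; the chown/chmod commands normally have to be run as the HDFS superuser:)
hdfs dfs -ls -d hdfs://destclstnn:8020/data/
hdfs dfs -chown abcde hdfs://destclstnn:8020/data/   # or
hdfs dfs -chmod 775 hdfs://destclstnn:8020/data/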
Regards
So I guess this is due to a change in the block pool ID. If you have an older fsimage
backup, start the NameNode using that fsimage, or delete the current directory
of the DataNodes' HDFS storage and re-format the NameNode once again.
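(A destructive, hedged sketch of that second option, only if the data is disposable; /data/dn stands in for whatever dfs.datanode.data.dir actually points to:)
# on every datanode: remove the stale block-pool data
rm -rf /data/dn/current
# on the namenode: re-format and restart HDFS
hdfs namenode -format
start-dfs.sh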
Regards
Sidharth
Mob: +91 819799
LinkedIn:
/hdfs-default.xml
>
>
>
> Regards
>
> Surendra
>
>
>
>
>
> *From:* Sidharth Kumar [mailto:sidharthkumar2...@gmail.com]
> *Sent:* 22 May 2017 19:36
> *To:* common-u...@hadoop.apache.org
> *Subject:* Hdfs default block size
>
>
>
> Hi,
>
Hi,
Can you kindly tell me what the default block size is in Apache Hadoop
2.7.3? Is it 64 MB or 128 MB?
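(For reference: in Hadoop 2.x the dfs.blocksize default is 134217728 bytes, i.e. 128 MB; a minimal way to confirm the value a cluster is actually using:)
hdfs getconf -confKey dfs.blocksize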
Thanks
Sidharth
shan.patha...@gmail.com>
wrote:
Apologies for the delayed reply; I was away due to some personal issues.
I tried the telnet command as well, but no luck. I get the response
'Name or service not known'.
Thanks
Bhushan Pathak
On Wed, May 3, 2017 at 7:48 AM, Sidh
Can you check whether the ports are open by running the telnet command?
Run the command below from the source machine to the destination machine and check if
this helps:
$ telnet <host> <port>
Example: $ telnet 192.168.1.60 9000
Let's Hadooping!
Bests
Sidharth
Mob: +91 819799
LinkedIn: www.linkedin.com/in/sidharthkumar2792
Hi,
Could anyone kindly help me clear up my doubts below?
Thanks
On 19-Apr-2017 8:08 PM, "Sidharth Kumar" <sidharthkumar2...@gmail.com>
wrote:
Hi,
Please help me understand this:
1) If we read the anatomy of an HDFS read in Hadoop: The Definitive Guide, it says the data
queue is consumed by s
Hi James,
Please create a user hadoop or hdfs and change the ownership of the directory
to hdfs:hadoop. HDFS runs as the hdfs user. This should probably resolve your
issue. If you need it, I can share a document which I made for pseudo-distributed
mode installation to help my colleagues.
Please let me know if the issue still
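(A minimal sketch of those steps, assuming /data/hdfs as a hypothetical DFS data directory:)
sudo groupadd hadoop
sudo useradd -g hadoop hdfs
sudo chown -R hdfs:hadoop /data/hdfs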
Hi,
Please help me understand this:
1) If we read the anatomy of an HDFS read in Hadoop: The Definitive Guide, it says the data
queue is consumed by the streamer. So, can you just tell me: will there be
only one streamer in a cluster which consumes packets from the data queue and
creates a pipeline for each packet to
tory
>
>
>
> -
>
>
--
Regards
Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn
<https://www.linkedin.com/in/sidharthkumar2792/>
e:
>
>
> On Mon, Apr 10, 2017 at 11:46 AM, Sidharth Kumar <
> sidharthkumar2...@gmail.com> wrote:
>
>> Thanks Philippe,
>>
>> I am looking for an answer restricted only to HDFS, because we can do read
>> and write operations from the CLI using commands like
block size). There is no Java framework per se for splitting up a file
>> (technically not so, but let's simplify, outside of your own custom code).
>>
>>
>> *...*
>>
>>
>>
>> *Daemeon C.M. Reiydelle* | USA (+1) 415.501.0198
one Java program, then it's just a
single-threaded process and will read the data sequentially.
On Friday, April 7, 2017, Sidharth Kumar <sidharthkumar2...@gmail.com>
wrote:
> Thanks for your response. But I didn't understand yet; if you don't mind, can
> you tell me what you mea
processing framework like MapReduce.
Regards,
Philippe
On Thu, Apr 6, 2017 at 9:55 PM, Sidharth Kumar <sidharthkumar2...@gmail.com>
wrote:
> Hi Genies,
>
> I have a small doubt about whether the HDFS read operation is a parallel or a sequential
> process, because from my understanding i
Hi Geniuses,
I have a small doubt about whether the HDFS read operation is a parallel or a sequential
process, because from my understanding it should be parallel; but if I read
"Hadoop: The Definitive Guide, 4th edition", in the anatomy of a read it says "Data is streamed
from the datanode back to the client, which calls read()
Hi,
I am importing data from an RDBMS into Hadoop using Sqoop, but my RDBMS data is
multi-valued and contains the "," special character.
So, while importing data into Hadoop using Sqoop, it by default
separates the columns using the "," character. Is there any property through
which we can customize
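(A minimal sketch of the usual approach, using Sqoop's delimiter options; the connection string, table, and target directory below are illustrative:)
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --fields-terminated-by '\t' \
  --optionally-enclosed-by '"' \
  --escaped-by '\\' \
  --target-dir /data/orders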
the configurations, while the same set of
configurations worked fine for Hadoop 2.7.2 and other stable versions.
Thanks for your help in advance.
--
Regards
Sidharth Kumar | Mob: +91 8197 555 599 | LinkedIn
<https://www.linkedin.com/in/sidharthkumar2792/>