Hi Alvaro,
I think you can configure a custom hostname for docker containers as
well.
The hostname should be provided during launch of the container using the -h
parameter.
And with a user-created docker network, DNS resolution of these hostnames
among the containers is possible; you can also provide --network-alias.
I think you might need to change the IP itself.
Try something similar to 192.168.1.20
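A minimal sketch of this (the network name, container names and image are
just placeholders):

  # create a user-defined network so docker's embedded DNS resolves names
  docker network create hadoop-net

  # launch containers with fixed hostnames and extra DNS aliases
  docker run -d --name master -h hadoop-master --network hadoop-net \
      --network-alias hadoop-master my-hadoop-image
  docker run -d --name worker1 -h hadoop-worker-1 --network hadoop-net \
      --network-alias hadoop-worker-1 my-hadoop-image

  # from any container on hadoop-net the others resolve by hostname/alias
  # (assumes ping is installed in the image)
  docker exec worker1 ping -c 1 hadoop-master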
-Vinay
On 27 Apr 2017 8:20 pm, "Bhushan Pathak" wrote:
> Hello
>
> I have a 3-node cluster where I have installed hadoop 2.7.3. I have
> updated core-site.xml, mapred-site.xml,
Hi All,
BKJM (the BookKeeper-based JournalManager) was active and was made quite
stable back when NameNode HA was first implemented and QJM did not exist yet.
Now QJM is present, is much more stable, and has been adopted by many
production environments.
I wonder whether it would be a good time to retire BKJM from trunk?
Are there
Hi
You might be hitting https://issues.apache.org/jira/browse/HDFS-9530.
The fix will be included in the upcoming 2.7.3 release. ☺
-Vinay
From: Ophir Etzion [mailto:op...@foursquare.com]
Sent: 22 July 2016 19:03
To: user@hadoop.apache.org
Subject: wrong remaining space reported by Data Nodes
Hi,
I
menode side.
-Vinay
From: Aneela Saleem [mailto:ane...@platalytics.com]
Sent: 30 June 2016 13:24
To: Vinayakumar B <vinayakumar...@huawei.com>
Cc: user@hadoop.apache.org
Subject: Re: datanode is unable to connect to namenode
Thanks Vinayakumar
Yes you got it right i was using different pri
Hi Aneela,
1. Looks like you have attached the hdfs-site.xml from the 'hadoop-master' node.
For this node the datanode connection is successful, as shown in the logs below.
2016-06-29 10:01:35,700 INFO
SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for
The rack awareness feature was introduced to place data blocks distributed
among multiple racks, to avoid data loss in case of a whole-rack failure.
Now, while reading/writing data blocks, data locality w.r.t. the client is
considered to find the closest replica. To know the nearest datanode in terms
of
un the command?
>
> If I want to change some code, Could you please explain a little more
> about how to debug/run my new modified code? Thanks so much.
>
>
>
> On Tue, Apr 19, 2016 at 2:17 PM, Vinayakumar B <vinayakum...@apache.org>
> wrote:
>
>>
>>
-Vinay
-- Forwarded message --
From: Vinayakumar B <vinayakum...@apache.org>
Date: Tue, Apr 19, 2016 at 11:47 PM
Subject: Re: Eclipse debug HDFS server side code
To: Kun Ren <ren.h...@gmail.com>
1. Since you are debugging remote code, you can't change the code
dynami
Hi Kun Ren,
You can follow the steps below.
1. Configure HADOOP_NAMENODE_OPTS="-Xdebug
-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=3988" in
hadoop-env.sh
2. Start the Namenode.
3. The Namenode will now listen on debug port 3988.
4. Configure your remote debug application to connect to :3988 in
Hi All,
Just wanted to know, what is the maximum and practical dfs.block.size used in
production/test clusters.
The current default value is 128MB and it can support up to 128TB (yes, really;
it's just a configuration value though).
I have seen clusters using up to 1GB block size for big files.
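For reference, a minimal sketch of setting a non-default block size per file
at write time (the file name, path and size are just examples):

  # write a file with a 1 GB block size instead of the configured default
  hdfs dfs -D dfs.blocksize=1073741824 -put big-dataset.bin /data/big-dataset.bin

  # confirm the block size actually used
  hdfs fsck /data/big-dataset.bin -files -blocks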
Hi Chef,
Can you confirm the points below?
1) Did you upgrade all datanodes to 2.7.2?
2) Did you finalize the upgrade using the following command?
Run "hdfs dfsadmin -rollingUpgrade finalize"
+1,
Thanks Arpit
-Vinay
From: Brahma Reddy Battula [mailto:brahmareddy.batt...@hotmail.com]
Sent: Friday, November 06, 2015 8:27 AM
To: user@hadoop.apache.org
Subject: RE: Unsubscribe footer for user@h.a.o messages
+1 (non-binding)..
Nice thought, Arpit..
Thanks And Regards
Brahma Reddy
That's cool.
-Vinay
On Tue, Nov 3, 2015 at 9:34 PM, Shashi Vishwakarma <shashi.vish...@gmail.com
> wrote:
> Thanks all...It was a cluster issue...Its working for me now:)
> On 3 Nov 2015 7:01 am, "Vinayakumar B" <vinayakumar...@huawei.com> wrote:
>
>>
For simplicity, you can just copy HADOOP_CONF_DIR from one of the cluster's
machines and place it on the classpath of the client program.
The principal you are using to log in is the client principal. It can be
different from the server principal.
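For example, a rough sketch (the conf path, keytab and principal are
hypothetical):

  # conf dir copied from a cluster node (core-site.xml, hdfs-site.xml, ...)
  export HADOOP_CONF_DIR=/etc/hadoop/conf

  # log in with the client principal; it need not match any server principal
  kinit -kt /home/client/client.keytab client@EXAMPLE.COM

  # run the client program with the conf dir on its classpath
  java -cp "$HADOOP_CONF_DIR:myclient.jar:$(hadoop classpath)" com.example.HdfsClient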
-Vinay
On Nov 2, 2015 22:37, "Vishwakarma, Chhaya" <
Hi Shashi,
Did you copy the conf directory (e.g. /etc/hadoop by default) from any of
the cluster machines' Hadoop installation, as mentioned in #1 of Andreina's
reply below?
If the cluster is running successfully with Kerberos enabled, it should
have a configuration
Looks like this issue is present in the latest code as well.
Please file a ticket in JIRA, and if you have the fix, you can provide the
patch as well.
-Vinay
From: daniedeng(邓飞) [mailto:danied...@tencent.com]
Sent: Friday, October 16, 2015 1:15 PM
To: hdfs-issues; user@hadoop.apache.org
ISTEN 22944/java
>>
>> I understand what you're saying about a gateway often existing at that
>> address for a subnet. I'm not familiar enough with Vagrant to answer this
>> right now, but I will put in a question there.
>>
>> I can also change the other two IP addres
192.168.51.1 might be the gateway to the 51.* subnet, right?
Can you verify whether connections from outside the 51 subnet to the 51.4 machine
use the other subnet's IP as the remote IP?
You can create any connection; it doesn't have to be namenode-datanode.
For example: a connection from the 192.168.52.4 datanode to the 192.168.51.4 namenode.
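One quick way to check this (a sketch; 8020 is assumed here as the NameNode
RPC port, and nc/ss are assumed to be installed):

  # from the 192.168.52.4 machine, open a test connection to the namenode
  nc -vz 192.168.51.4 8020

  # on the 192.168.51.4 machine, see which remote IP the connection came from
  ss -tn | grep ':8020'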
Sharing a link to a simple video for understanding why and what Erasure
Coding is.
http://www.intel.com/content/www/us/en/storage/erasure-code-isa-l-solution-video.html
Thanks to Intel for such a nice video.
Regards,
Vinay
You can change the replication factor using the following command:
hdfs dfs -setrep [-R] <rep> <path>
Once this is done, you can re-commission the datanode, and then all the
over-replicated blocks will be removed.
If they are not removed, restart the datanode.
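For example (the replication factor and path here are only illustrative):

  # set replication factor 2 recursively for everything under /data
  hdfs dfs -setrep -R 2 /data

  # check for over-replicated blocks afterwards
  hdfs fsck /data | grep -i 'over-replicated'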
Regards,
Vinayakumar B
From: Phan, Truong Q
Correct, David.
sshfence does not handle network unavailability.
Since the JournalNodes ensure that only one NN can write, fencing of the old
active is handled automatically. So configuring the fence method as shell(/bin/true)
should be fine.
Regards,
Vinayakumar B.
From: david marion [mailto:dlmar
It's simple:
bytes read from the local file system: File_bytes_read
bytes read from the HDFS file system: hdfs_bytes_read
Regards,
Vinayakumar B
From: Sai Sai [mailto:saigr...@yahoo.in]
Sent: 14 March 2014 14:51
To: user@hadoop.apache.org
Subject: File_bytes_read vs hdfs_bytes_read
Just wondering what
Hi Satyam,
Check whether your Camel client-side configurations are pointing to the correct
NameNode(s).
What is the deployment? HA or non-HA?
Also check whether the same exception is present in the (Active) NameNode logs.
If not, the request is going to some other NameNode.
Regards,
Vinayakumar B
in the following way, by constructing a CLASSPATH
which includes HADOOP_CONF_DIR:
java -cp CLASSPATH MAIN-CLASS args
or
simply use: hadoop jar test.jar
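As a concrete sketch (the jar name, main class and arguments are hypothetical):

  # explicit classpath including the Hadoop conf directory
  java -cp "/etc/hadoop/conf:test.jar:$(hadoop classpath)" com.example.MyJob /input /output

  # or let the hadoop script build the classpath for you
  hadoop jar test.jar com.example.MyJob /input /output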
Cheers,
Vinayakumar B
From: Chris Mawata [mailto:chris.maw...@gmail.com]
Sent: 25 February 2014 20:08
To: user@hadoop.apache.org
Subject: Re: Wrong
Hi Anil,
I think multiple clients/tasks are trying to write to the same file with overwrite
enabled.
The second client is overwriting the first client's file, and the first client is
getting the exception mentioned below.
Please check.
Regards,
Vinayakumar B
From: AnilKumar B [mailto:akumarb2
. Run the job again.
3. Try to find the files written by the reducers using the hdfs-audit log,
and identify the exact file which is overwritten before it is closed, as in the
sketch below.
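A minimal sketch of that audit-log check (the log path and output file name
are hypothetical):

  # find all create calls for the suspect reducer output file
  grep 'cmd=create' /var/log/hadoop/hdfs-audit.log | grep 'part-r-00000'

  # more than one create entry for the same src path means it was overwritten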
Regards,
Vinayakumar B
From: AnilKumar B [mailto:akumarb2...@gmail.com]
Sent: 24 February 2014 16:15
To: user
Send a simple mail to
user-unsubscr...@hadoop.apache.org
FYI, http://hadoop.apache.org/mailing_lists.html
From: Suresh M03 [mailto:suresh@mphasis.com]
Sent: 24 February 2014 11:40
To: user@hadoop.apache.org
Subject: RE: No job shown in Hadoop
/32 bit) as
of machine...?
Regards,
Vinayakumar B
From: Mr 0 [mailto:bobwolf...@hotmail.com]
Sent: 08 January 2014 10:15
To: user@hadoop.apache.org
Subject: RE: JAVA cannot execute binary file
I have had, at some point on earlier versions of hadoop:
Inside hadoop-env.sh where you set /usr/lib/jvm
configurations are for Hadoop 2.x.
Configure different subdirectories if you are using the same disk for multiple
processes.
Ex: /hadoop/data1/dfs/data
and
/hadoop/data1/yarn/nm-local-dir
Cheers,
Vinayakumar B
From: Tao Xiao [mailto:xiaotao.cs
Hi Krishna,
Please check the .out files of the daemons as well. You may find something.
Cheers,
Vinayakumar B
From: Krishna Kishore Bonagiri [mailto:write2kish...@gmail.com]
Sent: 16 December 2013 16:50
To: user@hadoop.apache.org
Subject: Re: Yarn -- one of the daemons getting killed
Hi Vinod
directory in eclipse project.
4. Rebuild hadoop-hdfs and run the test.
If you hit any more problems, let me know.
Cheers,
Vinayakumar B
From: Karim Awara [mailto:karim.aw...@kaust.edu.sa]
Sent: 15 December 2013 22:26
To: user
Subject: Re: MiniDFSCluster setup
I imported all the projects under the root
, YARN_CONF_DIR,
YARN_PID_DIR
3. And start both clusters with the different environment variables set, as in
the sketch below.
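A rough sketch of what that could look like (all paths here are hypothetical):

  # environment for the second Hadoop instance on the same machines
  export HADOOP_CONF_DIR=/opt/hadoop2/etc/hadoop
  export YARN_CONF_DIR=$HADOOP_CONF_DIR
  export HADOOP_PID_DIR=/var/run/hadoop2
  export YARN_PID_DIR=/var/run/hadoop2

  # start the second cluster's daemons with those settings
  /opt/hadoop2/sbin/start-dfs.sh
  /opt/hadoop2/sbin/start-yarn.sh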
Thanks and Regards,
Vinayakumar B
From: Geelong Yao [mailto:geelong...@gmail.com]
Sent: 12 December 2013 07:09
To: user@hadoop.apache.org
Subject: two version on the same cluster?
Hi Everyone
Hi Viswa,
Sorry for the late reply,
Have you restarted NodeManagers after copying the lzo jars to lib?
Thanks and Regards,
Vinayakumar B
From: Viswanathan J [mailto:jayamviswanat...@gmail.com]
Sent: 06 December 2013 23:32
To: user@hadoop.apache.org
Subject: Compression LZO class not found issue
are killed in between, these files will remain in
HDFS showing under-replicated blocks.
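To locate such files, a sketch (fsck over the whole namespace may be slow on
large clusters):

  # list blocks reported as under-replicated
  hdfs fsck / | grep -iE 'under[- ]replicated'

  # list corrupt blocks, if any
  hdfs fsck / -list-corruptfileblocks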
Thanks and Regards,
Vinayakumar B
From: ch huang [mailto:justlo...@gmail.com]
Sent: 11 December 2013 06:48
To: user@hadoop.apache.org
Subject: Re: how to handle the corrupt block in HDFS?
By default this higher
It's simple. :)
Shuffled Maps = Number of Map Tasks * Number of Reducers
For example, a job with 10 map tasks and 4 reducers reports 40 shuffled maps.
Thanks and Regards,
Vinayakumar B
From: ch huang [mailto:justlo...@gmail.com]
Sent: 11 December 2013 10:56
To: user@hadoop.apache.org
Subject: issue about Shuffled Maps in MR job summary
hi,maillist:
i run
Hi Ch huang,
Please check whether all datanodes in your cluster have enough disk space,
and that the number of non-decommissioned nodes is non-zero.
Thanks and regards,
Vinayakumar B
From: ch huang [mailto:justlo...@gmail.com]
Sent: 06 December 2013 07:14
To: user@hadoop.apache.org
Subject: error
should be able to view the Job details in JobHistoryServer once
the Job execution is over.
Thanks and Regards,
Vinayakumar B
From: Jian He [mailto:j...@hortonworks.com]
Sent: 03 December 2013 12:07
To: user@hadoop.apache.org
Subject: Re: Problem viewing a job in hadoop v2 web UI
Can you try
Hi Siddharth,
Looks like the issue is with one of the machines. Or is it happening on other
machines as well?
I don't think it's a problem with JVM heap memory.
I suggest you check this once:
http://stackoverflow.com/questions/8384000/java-io-ioexception-error-11
Thanks and Regards,
Vinayakumar
Is core-site.xml in your Eclipse classpath?
The directory which contains the site XMLs should be on the classpath, not the
XML files directly.
Make sure fs.defaultFS points to the correct HDFS path.
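One quick way to verify what a given conf directory resolves to (a sketch):

  # print the fs.defaultFS value produced by that conf directory
  HADOOP_CONF_DIR=/path/to/conf hdfs getconf -confKey fs.defaultFS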
Regards,
Vinayakumar B
On Nov 2, 2013 5:21 PM, Harsh J ha...@cloudera.com wrote:
Your job configuration
and generate code using Protobuf 2.5.
Compile and run again.
Regards,
Vinayakumar B
On Sep 10, 2013 8:58 AM, sam liu samliuhad...@gmail.com wrote:
This is an env issue. Hadoop-2.1.0-beta upgraded protobuf to 2.5 from
2.4.1, but the version of protobuf in my env is still 2.4.1, so the sqoop
unit tests
to decide based on
your use case.
Regards,
Vinayakumar B
On Sep 10, 2013 9:02 AM, kun yan yankunhad...@gmail.com wrote:
Hi all
Can I modify the HDFS data block size to 32MB? I know the default is 64MB.
thanks
--
In the Hadoop world, I am just a novice, explore the entire Hadoop
ecosystem, I hope one
Hi,
If you are moving from non-HA (single master) to HA, then follow the steps
below.
1. Configure the other namenode's configuration in the running
namenode's and all datanodes' configurations. And configure a logical
fs.defaultFS.
2. Configure the shared storage related
,
Vinayakumar B
__
From: Vamshi Krishna [vamshi2...@gmail.com]
Sent: Tuesday, February 14, 2012 8:28 PM
To: mapreduce-user@hadoop.apache.org
Subject: how to specify key and value for an input to mapreduce job
Hi all,
i have a job which read all the rows
same as the block size of the input file. If the split size is more
than the block size, then a task may need to get the block data from multiple
datanodes.
Thanks and Regards,
Vinayakumar B
From: GUOJUN Zhu [mailto:guojun_...@freddiemac.com]
Sent: Saturday, February 11, 2012 3:50 AM
Please check the defect in the MAPREDUCE JIRA:
https://issues.apache.org/jira/browse/MAPREDUCE-2264
This is because compression is enabled for map outputs, and statistics
are taken on the compressed data instead of the original data.
-Original Message-
From: Joey Echeverria
Hi All,
I need help setting up the Next Gen MapReduce.
Please provide links to documents/guides, if any, to start setting up the Next
Gen MR.
Thanks and Regards,
Vinayakumar B
Thanks Praveen.
I was able to run the sample word count job after reading
http://svn.apache.org/repos/asf/hadoop/common/branches/MR-279/mapreduce/INSTALL
Thanks and Regards,
Vinayakumar B