t; file types and have InputFormat#createRecordReader() delegate to the two
> different InputFormats.
>
> Erik
>
> On Wed, Jun 21, 2017 at 9:42 PM, vivek wrote:
>
Hi,
I have two different files having different names. I want to process input files
depending upon the name of the file and use a different input format for each.
Eg:
If I have two files, abc_file1 and bcd_file2, in the same input folder, and I
want to use TextInputFormat for file1 and some other form
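A minimal sketch of the dispatch idea Erik describes: pick a format based on the file-name prefix. The prefixes come from the example above, but the class and format names here are illustrative assumptions; in a real job this selection would sit inside a custom InputFormat whose createRecordReader() delegates to the chosen format.

```java
// Sketch: map a file name to the InputFormat that should read it.
// "abc_" and "bcd_" are the example prefixes from the question; the
// returned format names are illustrative, not a fixed Hadoop mapping.
public class FormatChooser {
    public static String chooseFormat(String fileName) {
        if (fileName.startsWith("abc_")) {
            return "TextInputFormat";         // e.g. plain-text files
        } else if (fileName.startsWith("bcd_")) {
            return "SequenceFileInputFormat"; // e.g. binary files
        }
        return "TextInputFormat";             // default
    }

    public static void main(String[] args) {
        System.out.println(chooseFormat("abc_file1"));
        System.out.println(chooseFormat("bcd_file2"));
    }
}
```

Inside the custom InputFormat, createRecordReader() would call this kind of check on the split's path and construct the matching reader.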
Hi,
Now getting:
Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is:
"IMPETUS-NL146/192.168.56.1"; destination host is:
"sandbox.hortonworks.com":8020;
Tried to use cacheTicket and created a proxy user.
-Vivek
From: Vivek Mishra [mailto:vivek.
Hi Daniel,
Thanks so much. Let me give it a try and grab another opportunity to thank you
again ☺
Sincerely,
-Vivek
From: Daniel Schulz [mailto:danielschulz2...@hotmail.com]
Sent: 13 March 2016 14:59
To: Vivek Mishra
Cc: Benoy Antony ; user@hadoop.apache.org
Subject: Re: Kerberos Hadoop access
ess and run a MapReduce job but
struggling with the Java API.
Sincerely,
-Vivek
From: Daniel Schulz [mailto:danielschulz2...@hotmail.com]
Sent: 13 March 2016 14:10
To: Vivek Mishra
Cc: Benoy Antony ; user@hadoop.apache.org
Subject: Re: Kerberos Hadoop access
Hi Vivek,
Benoy is right: when you log i
connect with the remote KDC server first for kinit?
Here in my case, the KDC is on machine X and the secured Hadoop cluster is on machine Y.
Please suggest.
Sincerely,
-Vivek
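For the client to reach a KDC on machine X while the cluster is on machine Y, the client machine's /etc/krb5.conf must point at that KDC. A sketch, with the realm and host names as placeholders (assumptions, not values from this thread):

```ini
# /etc/krb5.conf on the client machine (realm and hosts are placeholders)
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = machine-x.example.com          ; the KDC host (machine X)
        admin_server = machine-x.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
```

With this in place, kinit (or a keytab login from Java) on the client can obtain tickets from the remote KDC before talking to the cluster on machine Y.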
From: Benoy Antony [mailto:bant...@gmail.com]
Sent: 13 March 2016 02:43
To: Vivek Mishra
Cc: user@hadoop.apache.org
Subject: Re
Hi,
Can anyone point me to a reference for running a MapReduce job or creating an
HDFS file on a Kerberos-secured HDFS cluster (from a remote client machine)?
I spent the entire day trying different tweaks using UserGroupInformation and
SecurityUtil.
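Since the question mentions UserGroupInformation specifically, here is a minimal sketch of a keytab-based login from a remote Java client. The principal, keytab path, and filesystem URI are placeholders (assumptions), and this presumes the Hadoop client jars plus a correct krb5.conf on the client machine.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosHdfsClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Must match the cluster's core-site.xml; values are placeholders.
        conf.set("fs.defaultFS", "hdfs://sandbox.hortonworks.com:8020");
        conf.set("hadoop.security.authentication", "kerberos");

        UserGroupInformation.setConfiguration(conf);
        // Log in with a keytab instead of a ticket cache (placeholder paths).
        UserGroupInformation.loginUserFromKeytab(
                "vivek@EXAMPLE.COM", "/etc/security/keytabs/vivek.keytab");

        // Any FileSystem call after login runs as the authenticated user.
        try (FileSystem fs = FileSystem.get(conf)) {
            fs.create(new Path("/tmp/kerberos-test.txt")).close();
        }
    }
}
```

The usual failure mode behind "Client cannot authenticate via:[TOKEN, KERBEROS]" is that setConfiguration() was never called (so the client stays in SIMPLE auth mode) or that the login happened after the FileSystem was created.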
jar myjar.jar arg0 arg1"
>
> How can I fix this?
>
> Patcharee
>
--
Thanks and Regards,
VIVEK KOUL
" says we used
> 642.8G. We set the replication factor to 3.
> Using 93*3 = 279G would make sense.
> So my question: what is using the other disk space? How can I clean it?
>
> Thanks.
> Tang
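The gap Tang describes can be checked directly: 93G of logical data at replication factor 3 should occupy about 279G, well under the reported 642.8G. A sketch of the usual audit (the hdfs commands are shown as comments for reference; the common culprits are un-expunged trash and non-DFS data on the same disks):

```shell
# Expected raw usage for 93G of data at replication factor 3:
echo $((93 * 3))   # GB
# To find where the rest of the 642.8G went, the usual checks are
# (run these on the cluster):
#   hdfs dfs -du -s -h /                 # logical size per top-level path
#   hdfs dfs -du -s -h /user/*/.Trash    # deleted-but-not-expunged data
#   hdfs dfsadmin -report                # per-datanode DFS vs non-DFS used
#   hdfs dfs -expunge                    # empty the trash
```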
>
>
--
Thanks and Regards,
VIVEK KOUL
hot below, all is ok
>>
>> But when I run the JPS command, I do not see the datanode process
>>
>> [screenshot omitted]
>>
>> That’s where my worry is ☹
>>
>> Standing by ….
>>
>> *From:* vivek [mailto:viv
electronic message is susceptible to alteration.
> SOCIETE GENERALE and its subsidiaries decline all liability for this
> message if it has been altered, distorted or falsified.
> Please consult the site http://swapdisclosure.sgcib.com for
> important information about derivative products.
> *
>
--
Thanks and Regards,
VIVEK KOUL
/54180991637732354
*Discussion Topics:*
* What is Big Data?
* Challenges in Big Data
* What is Hadoop?
* Opportunities in Hadoop / Big Data
For further details visit us at www.soapttrainings.com
Best Regards,
*Kumar Vivek |* Director
M +91-7675824584| si...@soapt.com
is possible? Is it because the buffer cannot
completely fit in memory and I need to increase the memory?
Thanks
Vivek
l hard for me to compromise with
> huge downtime.
> Is there an optimal way to merge two Hadoop clusters?
>
> ~Shashank
>
>
>
--
Thanks and Regards,
VIVEK KOUL
Hi,
Have you run the command "namenode -format"?
Thanks & regards,
Vivek
On Fri, Aug 24, 2012 at 12:58 PM, Abhay Ratnaparkhi <
abhay.ratnapar...@gmail.com> wrote:
> Hello,
>
> I had a running hadoop cluster.
> I restarted it and after that the NameNode is unable to start. I am
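Before reaching for "namenode -format", it is worth finding out why the NameNode fails to start, since formatting erases all HDFS metadata. A sketch of the usual checks; the log and storage-directory paths below are typical defaults, i.e. assumptions, not values from this thread:

```shell
# Shown for reference; run these on the NameNode host.
# 1. Read the tail of the NameNode log for the real exception:
#      tail -n 100 /var/log/hadoop/hadoop-*-namenode-*.log
# 2. A common cause after an earlier format is a clusterID mismatch
#    between NameNode and DataNode storage directories; compare:
#      cat /hadoop/dfs/name/current/VERSION
#      cat /hadoop/dfs/data/current/VERSION
# 3. Only as a last resort -- formatting ERASES all HDFS metadata:
#      hdfs namenode -format
echo "check the namenode log before formatting"
```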