Have you tried with
String fileName = ((org.apache.hadoop.mapreduce.lib.input.FileSplit)
context.getInputSplit()).getPath().getName();
?
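For context, here is a minimal sketch of a Mapper built around that call, useful when merging several inputs and needing to know which file each record came from. Class and variable names are illustrative, not from the thread; and note that if the job is wired up through MultipleInputs, the split handed to the mapper is wrapped (TaggedInputSplit), so the direct cast to FileSplit may throw a ClassCastException there.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Illustrative sketch only: tags each record with the name of the
// input file it came from, so the reducer can tell the merged
// inputs apart. Assumes the new (org.apache.hadoop.mapreduce) API
// and a plain file-based input format (not MultipleInputs, whose
// wrapped splits would break the cast below).
public class FileNameTaggingMapper
        extends Mapper<LongWritable, Text, Text, Text> {

    private String fileName;

    @Override
    protected void setup(Context context) {
        // Resolve the source file name once per split.
        fileName = ((FileSplit) context.getInputSplit())
                .getPath().getName();
    }

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Emit the file name as the key alongside the raw record.
        context.write(new Text(fileName), value);
    }
}
```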
hope it helps
Olivier
On 6 Dec 2012, at 00:24, Hans Uhlig wrote:
> I am currently using multiple inputs to merge quite a few different but
> related file
anyone?
Begin forwarded message:
> From: Olivier Varene - echo
> Subject: ReduceTask > ShuffleRamManager : Java Heap memory error
> Date: 4 December 2012 09:34:06 CET
> To: mapreduce-user@hadoop.apache.org
> Reply-To: mapreduce-user@hadoop.apache.org
>
>
> Hi to all,
> first, many thanks
Olivier,
Sorry, missed this.
The historical reason, if I remember right, is that we used to have a single
byte buffer and hence the limit.
We should definitely remove it now since we don't use a single buffer. Mind
opening a jira?
http://wiki.apache.org/hadoop/HowToContribute
thanks!
Arun
Yes, I will.
Thanks for the answer.
regards
Olivier
On 6 Dec 2012, at 19:41, Arun C Murthy wrote:
> Olivier,
>
> Sorry, missed this.
>
> The historical reason, if I remember right, is that we used to have a single
> byte buffer and hence the limit.
>
> We should definitely remove it now since we don't use a single buffer.