I just noticed that when I run "hadoop jar
my-fat-jar-with-all-dependencies.jar", it unjars the job jar into
/tmp/hadoop-username/hadoop-unjar-xxxx/ and extracts all the classes
there.

The fat jar is pretty big, so the extracted files take up a lot of space
(particularly inodes), and I ran out of quota.

I wonder why we have to unjar these classes on the **client node**? The
jar won't even be needed until it reaches the compute nodes, right?
