Thanks.
On Fri, Jul 18, 2014 at 10:41 PM, Rich Haase <[email protected]> wrote:
> HDFS handles the splitting of files into multiple blocks. It's a file
> system operation that is transparent to the user.
>
> On Fri, Jul 18, 2014 at 11:07 AM, Ashish Dobhal <[email protected]> wrote:
>> Rich Haase, thanks.
>> But if the copy operations do not run as a MapReduce job, how does the
>> splitting of a file into several blocks take place?
>>
>> On Fri, Jul 18, 2014 at 10:24 PM, Rich Haase <[email protected]> wrote:
>>> File copy operations do not run as MapReduce jobs. All hadoop fs
>>> commands run as operations against HDFS and do not use MapReduce.
>>>
>>> On Fri, Jul 18, 2014 at 10:49 AM, Ashish Dobhal <[email protected]> wrote:
>>>> Do the normal operations of Hadoop, such as uploading a file to HDFS
>>>> or downloading one from it, run as a MapReduce job?
>>>> If so, why can't I see the job running on my task tracker and job
>>>> tracker?
>>>> Thank you.
>>>
>>> --
>>> *Kernighan's Law*
>>> "Debugging is twice as hard as writing the code in the first place.
>>> Therefore, if you write the code as cleverly as possible, you are, by
>>> definition, not smart enough to debug it."
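To illustrate the point made above: when you run `hadoop fs -put`, the HDFS client itself streams the file and cuts it into fixed-size blocks (128 MB by default in Hadoop 2.x, via `dfs.blocksize`) as it writes to the DataNodes; no MapReduce job is involved, which is why nothing shows up in the JobTracker. Below is a minimal Python sketch of that chunking idea only; it is not the real HDFS client code, and the function name `split_into_blocks` is made up for illustration.

```python
import io

# Default dfs.blocksize in Hadoop 2.x (128 MB); Hadoop 1.x used 64 MB.
BLOCK_SIZE = 128 * 1024 * 1024

def split_into_blocks(stream, block_size=BLOCK_SIZE):
    """Yield successive fixed-size chunks from a stream, the way the HDFS
    client conceptually cuts a file while writing it out block by block.
    The final block may be shorter than block_size."""
    while True:
        chunk = stream.read(block_size)
        if not chunk:
            break
        yield chunk

# Demo with a tiny in-memory "file" and a tiny block size:
data = io.BytesIO(b"x" * 10)
blocks = list(split_into_blocks(data, block_size=4))
sizes = [len(b) for b in blocks]  # three blocks: 4, 4, and a short last block of 2
```

The takeaway is that this is plain streaming I/O done by the client, transparent to the user, and entirely separate from the MapReduce execution path.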
