[bcc: sqoop-u...@cloudera.org, to:sqoop-user@incubator.apache.org.
Please move the conversation over to Apache mailing lists.]

Hi Sonal,

If the only jobs you have run so far transferred just 32 KB of data to
HDFS, then the numbers you have stated below make sense. DFS Used %
remains 0 because 32 KB is negligible relative to the overall
configured capacity of 119.14 GB -- roughly 0.00003%, which the web
UI rounds down to 0%.

If, however, you are running multiple Sqoop jobs and the DFS Used
value is not changing at all, there may be something else going on.
Can you confirm that the Sqoop import did indeed copy data over from
the source database?
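
One quick way to check is to inspect the import's target directory
with the Hadoop filesystem shell, then compare against the cluster-wide
report. A rough sketch is below; the path /user/sonal/mytable is only a
placeholder -- substitute the actual target directory of your Sqoop
import (by default /user/<username>/<table-name>):

```shell
# List the files the import wrote to HDFS (placeholder path).
hadoop fs -ls /user/sonal/mytable

# Total bytes under that directory.
hadoop fs -dus /user/sonal/mytable

# Cluster-wide DFS usage -- same numbers as the namenode web UI.
hadoop dfsadmin -report
```

For per-job and per-mapper stats, the jobtracker web UI you already
have open shows counters such as HDFS_BYTES_WRITTEN for each job and
for each individual map task.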

Thanks,
Arvind

On Wed, Aug 17, 2011 at 12:45 AM, Sonal <imsonalku...@gmail.com> wrote:
> Hi,
>
> I have a stand alone setup with client/ server on same machine. I have
> started the namenode, datanode, jobtracker, tasktracker daemons in my
> machine.
> I can see the web interface for namenode, jobtracker
> This is namenode configuration
>
> Configured Capacity     :       119.14 GB
> DFS Used        :       32 KB
> Non DFS Used    :       111.69 GB
> DFS Remaining   :       7.45 GB
> DFS Used%       :       0 %
> DFS Remaining%  :       6.26 %
> Live Nodes      :       1
> Dead Nodes      :       0
> Decommissioning Nodes   :       0
> Number of Under-Replicated Blocks       :       0
>
> But when I run Sqoop import jobs, DFS Used is still 0%.
> Why is DFS Used not changing?
> How can I collect stats for each job/mapper running during the import?
>
> Thanks & Regards,
> Sonal Kumar
>
> --
> NOTE: The mailing list sqoop-u...@cloudera.org is deprecated in favor of 
> Apache Sqoop mailing list sqoop-user@incubator.apache.org. Please subscribe 
> to it by sending an email to incubator-sqoop-user-subscr...@apache.org.
>
