[ https://issues.apache.org/jira/browse/SQOOP-2920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15283033#comment-15283033 ]

Ruslan Dautkhanov commented on SQOOP-2920:
------------------------------------------

Thank you [~maugli] for the patch.

We tested this patch on a sample of our production data (21.4m rows, 716 
columns).

*Before patch*:
{quote}
16/05/12 20:39:36 INFO mapreduce.ExportJobBase: Transferred 8.4101 GB in 4,953.4655 seconds (1.7386 MB/sec)
16/05/12 20:39:36 INFO mapreduce.ExportJobBase: Exported 21399476 records.

  GC time elapsed (ms)=1745751
  CPU time spent (ms)=238899370
  Physical memory (bytes) snapshot=240646844416
  Virtual memory (bytes) snapshot=491522174976
  Total committed heap usage (bytes)=204771688448
{quote}

*After patch*:
{quote}
16/05/12 18:17:36 INFO mapreduce.ExportJobBase: Transferred 8.4101 GB in 744.7664 seconds (11.5633 MB/sec)
16/05/12 18:17:36 INFO mapreduce.ExportJobBase: Exported 21399476 records.

  GC time elapsed (ms)=1640876
  CPU time spent (ms)=59953350
  Physical memory (bytes) snapshot=319115075584
  Virtual memory (bytes) snapshot=486723493888
  Total committed heap usage (bytes)=281407389696
{quote}

That's a great improvement.

CPU time dropped about 4x (from 238,899,370 ms to 59,953,350 ms), and the sqoop run time dropped from roughly 4,953 seconds to 745 seconds, better than a 6x speedup. getFieldMap0()'s share of CPU time dropped from 93% to 6.7%. Great progress.
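
For context, the hotspot here is the per-record column-name-to-position lookup that gets repeated for every one of the 716 columns across all 21.4M rows. Purely as an illustrative sketch of the general fix (hypothetical class and names, not the actual SQOOP-2920 patch), resolving the positions once per task and reusing them turns the per-record work into a plain map lookup:

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: build the column-name -> position map once per task
// instead of re-resolving it for every record/field. With 700+ columns and
// ~21M rows, repeated lookups dominate CPU time; a one-time cache makes the
// per-record cost a constant-time map access.
public class ColumnPositionCache {

  private final Map<String, Integer> positionByName = new HashMap<>();

  public ColumnPositionCache(List<String> schemaColumns) {
    for (int i = 0; i < schemaColumns.size(); i++) {
      positionByName.put(schemaColumns.get(i).toLowerCase(), i);
    }
  }

  /** O(1) per field instead of a per-record schema scan or reflection call. */
  public int position(String columnName) {
    Integer pos = positionByName.get(columnName.toLowerCase());
    if (pos == null) {
      throw new IllegalArgumentException("Unknown column: " + columnName);
    }
    return pos;
  }
}
{code}

With the positions cached up front, each exported record only pays a constant-time lookup per field, which lines up with getFieldMap0()'s share of CPU falling from 93% to 6.7%.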

> sqoop performance deteriorates significantly on wide datasets; sqoop 100% on 
> cpu
> --------------------------------------------------------------------------------
>
>                 Key: SQOOP-2920
>                 URL: https://issues.apache.org/jira/browse/SQOOP-2920
>             Project: Sqoop
>          Issue Type: Bug
>          Components: connectors/oracle, hive-integration, metastore
>    Affects Versions: 1.4.5
>         Environment: - sqoop export on a very wide dataset (over 700 columns)
> - sqoop export to oracle
> - subset of columns is exported (using --columns argument)
> - parquet files
> - --table --hcatalog-database --hcatalog-table options are used
>            Reporter: Ruslan Dautkhanov
>            Priority: Critical
>              Labels: columns, hive, oracle, perfomance
>         Attachments: jstack.zip, top - sqoop mappers hog cpu.png
>
>
> We sqoop export from our data lake to Oracle quite often.
> Every time we sqoop "narrow" datasets, Oracle is the side with scalability 
> issues: our 3-node all-flash Oracle RAC normally can't keep up with more than 
> 45-55 sqoop mappers, while the map-reduce framework shows the sqoop mappers 
> themselves are not heavily loaded. On wide datasets the picture is the 
> opposite: Oracle shows 95% of sessions idle, waiting for new INSERTs, even 
> when we go over a hundred mappers. Sqoop has serious scalability issues on 
> very wide datasets (and our company normally has very wide datasets).
> For example, on the latest sqoop export, which started ~2.5 hours ago, the 95 
> mappers have already accumulated
> CPU time spent (ms)   1,065,858,760
> (looking at this metric through the map-reduce framework stats).
> That is over a million seconds of CPU time, or about 11,219.57 seconds per 
> mapper, which is roughly 3.11 hours of CPU time per mapper in a ~2.5-hour run. 
> So the mappers are pegged at 100% CPU.
> I will also attach jstack files.
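
As a quick sanity check on the figures quoted above (only restating the reporter's arithmetic, nothing new), the per-mapper CPU time works out like this:

{code:java}
public class PerMapperCpuCheck {
  public static void main(String[] args) {
    long totalCpuMs = 1_065_858_760L; // "CPU time spent (ms)" from the MR counters above
    int mappers = 95;

    double perMapperSec = totalCpuMs / (double) mappers / 1000.0; // ~11,219.57 s
    double perMapperHours = perMapperSec / 3600.0;                // ~3.12 h

    System.out.printf("CPU per mapper: %.2f s (%.2f h)%n", perMapperSec, perMapperHours);
    // ~3.1 hours of CPU inside a ~2.5-hour elapsed run means each mapper keeps a
    // core fully busy, i.e. the mappers, not Oracle, are the bottleneck here.
  }
}
{code}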



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
