[ https://issues.apache.org/jira/browse/SPARK-15376?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
xiaoyu chen updated SPARK-15376:
--------------------------------
Affects Version/s: 1.5.0, 1.6.1
> DataFrame write.jdbc() inserts more rows than actual
> ----------------------------------------------------
>
> Key: SPARK-15376
> URL: https://issues.apache.org/jira/browse/SPARK-15376
> Project: Spark
> Issue Type: Bug
> Affects Versions: 1.4.1, 1.5.0, 1.6.1
> Environment: CentOS 6 cluster mode
> Cores: 300 (300 granted, 0 left)
> Executor Memory: 45.0 GB
> Submit Date: Wed May 18 10:26:40 CST 2016
> Reporter: xiaoyu chen
> Labels: DataFrame
>
> It's an odd bug; it occurs under the following conditions:
>
> {code:title=Bar.scala}
> val rddRaw = sc.textFile("xxx").map(xxx).sample(false, 0.15)
> // The number of rows actually inserted into MySQL is larger than the RDD's
> // record count: 239994 in the RDD vs. ~241300 rows in the database.
> println(rddRaw.count())
>
> // Iterating over the rows another way, i.e. dropping the Range for loop,
> // makes the bug disappear.
> for (some_id <- Range(some_ids_all_range)) {
>   rddRaw.filter(_._2 == some_id).randomSplit(Array(x, x, x), 1)
>     .foreach { rd =>
>       // val curCnt = rd.count() // invoking count() on rd before the write avoids the problem
>       rd.map(x => new TestRow(null, xxx)).toDF().write.mode(SaveMode.Append).jdbc(xxx)
>     }
> }
> {code}
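> A possible workaround, assuming the uncached sample() lineage is being
> recomputed by each of the many jobs the loop launches: pin the sample with
> cache() before the loop so every iteration reads the same rows. This is a
> minimal, self-contained sketch of that idea, not a confirmed diagnosis; the
> object name, local master, and synthetic data below are all hypothetical.
> {code:title=Workaround.scala}
> import org.apache.spark.{SparkConf, SparkContext}
>
> object SampleJdbcSketch {
>   def main(args: Array[String]): Unit = {
>     val sc = new SparkContext(
>       new SparkConf().setAppName("spark-15376-sketch").setMaster("local[2]"))
>
>     val raw = sc.parallelize(1 to 1000000)
>
>     // Pin one materialization of the sample before looping, so jobs launched
>     // afterwards all read the same rows instead of re-evaluating the
>     // sampling step as part of the lineage.
>     val sampled = raw.sample(withReplacement = false, fraction = 0.15).cache()
>     val expected = sampled.count() // forces the cache to be populated
>
>     // Splits of the cached RDD now partition a fixed data set, so the
>     // per-split counts always sum to the sampled total.
>     val splits = sampled.randomSplit(Array(0.3, 0.3, 0.4), seed = 1)
>     assert(splits.map(_.count()).sum == expected)
>
>     sc.stop()
>   }
> }
> {code}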