Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/7990#discussion_r36479927
--- Diff: sql/core/src/test/scala/org/apache/spark/sql/execution/RowFormatConvertersSuite.scala ---
@@ -87,4 +91,36 @@ class RowFormatConvertersSuite extends SparkPlanTest {
input.map(Row.fromTuple)
)
}
+
+ test("SPARK-9683: we should deep copy UTF8String when convert unsafe row
to safe row") {
+ SparkPlan.currentContext.set(TestSQLContext)
+ val schema = ArrayType(StringType)
+ val rows = (1 to 100).map { i =>
+ InternalRow(new
GenericArrayData(Array[Any](UTF8String.fromString(i.toString))))
+ }
+ val relation = LocalTableScan(Seq(AttributeReference("t", schema)()),
rows)
+
+ val plan =
+ DummyPlan(
+ ConvertToSafe(
+ ConvertToUnsafe(relation)))
+ assert(plan.execute().collect().map(_.getUTF8String(0).toString) ===
(1 to 100).map(_.toString))
+ }
+}
+
+case class DummyPlan(child: SparkPlan) extends UnaryNode {
+
+ override protected def doExecute(): RDD[InternalRow] = {
+ child.execute().mapPartitions { iter =>
+ // cache all strings to make sure we have deep copied UTF8String
inside incoming
+ // safe InternalRow.
+ val strings = new scala.collection.mutable.ArrayBuffer[UTF8String]
+ iter.foreach { row =>
+ strings += row.getArray(0).getUTF8String(0)
--- End diff --
But in https://github.com/apache/spark/pull/7840 it seems we deep copy the row and the string anyway? I just want to follow that approach.
Since `toSeq(schema: StructType)` has been merged, let me try to remove the `ConvertToSafe`.
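
For readers of this thread, here is a minimal standalone sketch (not part of this PR) of the aliasing problem the test's inline comment guards against: a generated `UnsafeProjection` reuses its output row and backing buffer, so a `UTF8String` read from the unsafe row must be deep copied before it is buffered. The `UnsafeProjection`/`clone()` usage below is illustrative only and is not the code path the PR changes.

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.catalyst.expressions.UnsafeProjection
import org.apache.spark.sql.types._
import org.apache.spark.unsafe.types.UTF8String

object DeepCopySketch {
  def main(args: Array[String]): Unit = {
    // The generated projection reuses one output UnsafeRow (and its backing
    // buffer) for every input row, which is why buffered values must be copied.
    val proj = UnsafeProjection.create(Array[DataType](StringType))
    val buffered = new scala.collection.mutable.ArrayBuffer[UTF8String]
    (1 to 3).foreach { i =>
      val unsafeRow = proj(InternalRow(UTF8String.fromString(i.toString)))
      // Without the clone(), every buffered element would point into the same
      // reused buffer and read back as the last value written.
      buffered += unsafeRow.getUTF8String(0).clone()
    }
    assert(buffered.map(_.toString) == Seq("1", "2", "3"))
  }
}
```

The quoted test exercises the same requirement through `ConvertToUnsafe`/`ConvertToSafe`: `DummyPlan` buffers the strings it reads, so they must survive the upstream operators reusing their rows.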