aokolnychyi commented on code in PR #38005:
URL: https://github.com/apache/spark/pull/38005#discussion_r1022077282
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/WriteToDataSourceV2Exec.scala:
##########
@@ -477,6 +507,73 @@ object DataWritingSparkTask extends Logging {
}
}
+object DataWritingSparkTask extends WritingSparkTask[DataWriter[InternalRow]] {
+  override protected def write(writer: DataWriter[InternalRow], row: InternalRow): Unit = {
+    writer.write(row)
+  }
+}
+
+case class DeltaWritingSparkTask(
+    projections: WriteDeltaProjections) extends WritingSparkTask[DeltaWriter[InternalRow]] {
+
+  private lazy val rowProjection = projections.rowProjection.orNull
Review Comment:
I am reusing the same `ProjectingInternalRow` instance for all rows to avoid
extra copies. Without reuse, performance would be fairly poor. My assumption
is that sources don't usually accumulate incoming records before passing them
to the writer, and if they do, they can call `copy()`.
Any feedback on this point is highly appreciated.
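
To illustrate what that contract means for a connector, here is a minimal
sketch of a `DataWriter` that buffers rows before forwarding them. The names
`BufferingDataWriter` and `flushThreshold` are hypothetical and not part of
this PR; the point is only that a writer holding rows across `write` calls
must copy them, because the task reuses one mutable projecting row.

```scala
import scala.collection.mutable.ArrayBuffer

import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.connector.write.{DataWriter, WriterCommitMessage}

// Hypothetical buffering writer, not from this PR. It wraps another
// DataWriter and flushes accumulated rows in batches.
class BufferingDataWriter(
    delegate: DataWriter[InternalRow],
    flushThreshold: Int) extends DataWriter[InternalRow] {

  private val buffer = new ArrayBuffer[InternalRow]()

  override def write(row: InternalRow): Unit = {
    // The incoming row is backed by a reused ProjectingInternalRow, so it
    // must be copied before being retained across write() calls.
    buffer += row.copy()
    if (buffer.size >= flushThreshold) flush()
  }

  private def flush(): Unit = {
    buffer.foreach(delegate.write)
    buffer.clear()
  }

  override def commit(): WriterCommitMessage = {
    flush()
    delegate.commit()
  }

  override def abort(): Unit = delegate.abort()

  override def close(): Unit = delegate.close()
}
```

A writer that forwards each row immediately (as `DataWritingSparkTask` does
above) needs no copy, which is why reuse is the cheap default.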
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]