metesynnada opened a new issue, #6050:
URL: https://github.com/apache/arrow-datafusion/issues/6050
### Is your feature request related to a problem or challenge?
The `MemoryExec` operator currently clones its `RecordBatch`es, which is entirely avoidable. The methods
```rust
impl MemoryExec {
    /// Create a new execution plan for reading in-memory record batches
    /// The provided `schema` should not have the projection applied.
    pub fn try_new(
        partitions: &[Vec<RecordBatch>],
        schema: SchemaRef,
        projection: Option<Vec<usize>>,
    ) -> Result<Self> {
        let projected_schema = project_schema(&schema, projection.as_ref())?;
        Ok(Self {
            partitions: partitions.to_vec(),
            schema,
            projected_schema,
            projection,
            sort_information: None,
        })
    }
}

impl ExecutionPlan for MemoryExec {
    fn execute(
        &self,
        partition: usize,
        _context: Arc<TaskContext>,
    ) -> Result<SendableRecordBatchStream> {
        Ok(Box::pin(MemoryStream::try_new(
            self.partitions[partition].clone(),
            self.projected_schema.clone(),
            self.projection.clone(),
        )?))
    }
}
```
clone the data to transfer ownership. This copy can be avoided by keeping the batches behind a shared pointer.
### Describe the solution you'd like
Share the partition data behind an `Arc<RwLock<..>>` so that `execute` hands out a cheap pointer clone instead of copying every `RecordBatch`, avoiding the clones and improving performance.
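
As a rough sketch of what this could look like, the example below keeps the partitions behind an `Arc<RwLock<..>>` so that handing a partition to a stream only bumps a reference count. The `SharedMemoryExec` and `Batch` types are hypothetical stand-ins used only for illustration, not DataFusion APIs; the real change would have to plug into the existing `ExecutionPlan`/`MemoryStream` signatures.

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for `RecordBatch`, used only to illustrate the
// ownership change; it is not a DataFusion type.
struct Batch(Vec<i32>);

struct SharedMemoryExec {
    // The partitions live behind a shared pointer instead of being owned
    // directly, so handing a partition to a stream is an `Arc` clone
    // (a reference-count bump) rather than a deep copy of every batch.
    partitions: Arc<RwLock<Vec<Vec<Batch>>>>,
}

impl SharedMemoryExec {
    fn new(partitions: Vec<Vec<Batch>>) -> Self {
        Self {
            partitions: Arc::new(RwLock::new(partitions)),
        }
    }

    /// Returns a handle a stream could iterate over; only the `Arc` is
    /// cloned, the batches themselves are never copied.
    fn execute(&self, _partition: usize) -> Arc<RwLock<Vec<Vec<Batch>>>> {
        Arc::clone(&self.partitions)
    }
}

fn main() {
    let exec = SharedMemoryExec::new(vec![
        vec![Batch(vec![1, 2, 3])],
        vec![Batch(vec![4, 5])],
    ]);
    let handle = exec.execute(0);
    let guard = handle.read().unwrap();
    println!(
        "partition 0 has {} batch(es), first batch has {} row(s)",
        guard[0].len(),
        guard[0][0].0.len()
    );
}
```

With this layout, `execute` no longer deep-copies a `Vec<RecordBatch>`: each stream would hold its own clone of the `Arc` and read the batches for its partition through the lock.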
### Describe alternatives you've considered
_No response_
### Additional context
_No response_