hengfeiyang commented on PR #3700:
URL:
https://github.com/apache/arrow-datafusion/pull/3700#issuecomment-1271752120
@alamb I tested with #3733, and it looks like `with_column` still doesn't work.
I tested using this config:
Cargo.toml
```
datafusion = { git = "https://github.com/apache/arrow-datafusion" }
```
with the example code:
```
use std::sync::Arc;

use datafusion::arrow::array::Int32Array;
use datafusion::arrow::datatypes::{DataType, Field, Schema};
use datafusion::arrow::record_batch::RecordBatch;
use datafusion::datasource::MemTable;
use datafusion::error::Result;
use datafusion::from_slice::FromSlice;
use datafusion::prelude::{col, lit, SessionContext};

/// This example demonstrates how to use the DataFrame API against in-memory data.
#[tokio::main]
async fn main() -> Result<()> {
    // define a schema.
    let schema = Arc::new(Schema::new(vec![Field::new("f.c", DataType::Int32, false)]));

    // define data.
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![Arc::new(Int32Array::from_slice([1, 10, 10, 100]))],
    )?;

    // declare a new context. In the Spark API, this corresponds to a new Spark SQL session
    let ctx = SessionContext::new();

    // declare a table in memory. In the Spark API, this corresponds to createDataFrame(...).
    let provider = MemTable::try_new(schema.clone(), vec![vec![batch]])?;
    ctx.register_table("t", Arc::new(provider))?;
    let df = ctx.table("t")?;

    // construct an expression corresponding to "SELECT * FROM t WHERE f.c = 10" in SQL
    let filter = col("f.c").eq(lit(10));
    let df = df.filter(filter)?;

    // print the results
    df.show().await?;

    Ok(())
}
```
Result:
```
Error: SchemaError(FieldNotFound { qualifier: Some("f"), name: "c", valid_fields: Some(["t.f.c"]) })
```
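The error suggests that `col("f.c")` treats the dot as a qualifier separator, so the expression resolves to column `c` of table `f` rather than the literal column name `f.c` in table `t`. As a rough, self-contained sketch of that behavior (this is an illustrative helper I wrote, not DataFusion's actual parsing code):

```rust
// Hypothetical sketch of qualified-name splitting, assuming: an unquoted
// dot separates a table qualifier from a column name, while a name wrapped
// in double quotes is kept as a single unqualified column name.
fn split_qualified_name(name: &str) -> (Option<String>, String) {
    // A double-quoted name is a single identifier: "f.c" stays one column.
    if name.len() >= 2 && name.starts_with('"') && name.ends_with('"') {
        return (None, name[1..name.len() - 1].to_string());
    }
    // Otherwise split on the first dot: f.c -> qualifier f, column c.
    match name.split_once('.') {
        Some((qualifier, column)) => (Some(qualifier.to_string()), column.to_string()),
        None => (None, name.to_string()),
    }
}

fn main() {
    // Interpreted as table "f", column "c" -- the failing lookup above.
    println!("{:?}", split_qualified_name("f.c"));
    // Quoting keeps the dot inside one column name.
    println!("{:?}", split_qualified_name("\"f.c\""));
}
```

If this matches DataFusion's behavior, quoting the name in the expression (e.g. `col(r#""f.c""#)`) may be a workaround, though the underlying `with_column` issue would remain.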