hengfeiyang opened a new issue, #3682:
URL: https://github.com/apache/arrow-datafusion/issues/3682

   **Describe the bug**
   Field names containing a period, such as `f.c`, cannot be referenced through the DataFrame API: `col("f.c")` is interpreted as a qualified reference to column `c` of a table `f`, rather than as a single column named `f.c`.
   
   **To Reproduce**
   Steps to reproduce the behavior:
   
   ```rust
   use std::sync::Arc;
   
   use datafusion::arrow::array::Int32Array;
   use datafusion::arrow::datatypes::{DataType, Field, Schema};
   use datafusion::arrow::record_batch::RecordBatch;
   use datafusion::datasource::MemTable;
   use datafusion::error::Result;
   use datafusion::from_slice::FromSlice;
   use datafusion::prelude::{col, lit, SessionContext};
   
   /// This example demonstrates how to use the DataFrame API against in-memory data.
   #[tokio::main]
   async fn main() -> Result<()> {
       // define a schema.
       let schema = Arc::new(Schema::new(vec![Field::new("f.c", DataType::Int32, false)]));
   
       // define data.
       let batch = RecordBatch::try_new(
           schema.clone(),
           vec![Arc::new(Int32Array::from_slice(&[1, 10, 10, 100]))],
       )?;
   
       // declare a new context. In the Spark API, this corresponds to a new SparkSession.
       let ctx = SessionContext::new();
   
       // declare a table in memory. In the Spark API, this corresponds to createDataFrame(...).
       let provider = MemTable::try_new(schema.clone(), vec![vec![batch]])?;
       ctx.register_table("t", Arc::new(provider))?;
       let df = ctx.table("t")?;
   
       // construct an expression corresponding to "SELECT * FROM t WHERE f.c = 10" in SQL
       let filter = col("f.c").eq(lit(10));
   
       let df = df.filter(filter)?;
   
       // print the results
       df.show().await?;
   
       Ok(())
   }
   ```
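
   A possible workaround (a sketch only, not verified against DataFusion 12.0; the exact module paths for `Column` and `Expr` may differ between releases) is to construct the column expression directly from `Column::from_name`, which takes the whole string as a bare column name instead of parsing it as a qualified `table.column` reference:
   
   ```rust
   use datafusion::common::Column;
   use datafusion::prelude::{lit, Expr};
   
   /// Build a filter on a column whose name contains a period,
   /// bypassing the qualified-name parsing done by `col(...)`.
   fn filter_on_dotted_column() -> Expr {
       // `Column::from_name` keeps "f.c" intact as the column name,
       // so the period is not treated as a table/column separator.
       Expr::Column(Column::from_name("f.c")).eq(lit(10))
   }
   ```
   
   With this, `df.filter(filter_on_dotted_column())?` should resolve the field by its literal name, assuming the logical plan does not re-parse the name later.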
   
   **Expected behavior**
   The filter should resolve the column named `f.c` and the query should return the rows where it equals 10.
   
   **Additional context**
   DataFusion 12.0
