Jefffrey commented on code in PR #5343:
URL: https://github.com/apache/arrow-datafusion/pull/5343#discussion_r1119909964


##########
docs/source/user-guide/example-usage.md:
##########
@@ -118,8 +118,8 @@ async fn main() -> datafusion::error::Result<()> {
   let ctx = SessionContext::new();
  let df = ctx.read_csv("tests/data/capitalized_example.csv", CsvReadOptions::new()).await?;
 
-  let df = df.filter(col("A").lt_eq(col("c")))?
-           .aggregate(vec![col("A")], vec![min(col("b"))])?
+  let df = df.filter(col("\"A\"").lt_eq(col("c")))?
+           .aggregate(vec![col("\"A\"")], vec![min(col("b"))])?
            .limit(0, Some(100))?;

Review Comment:
   Yeah, that would have the benefit of being less breaking and a bit more explicit about what is happening. `parse_col` would be even more explicit, though I'm not really a fan of that name; it feels a bit clunky.
   
   For what it's worth, Spark does seem to parse the input to its `col` function rather than taking it wholly unqualified, I believe.
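   
   For context, a minimal, self-contained version of the snippet the doc diff above touches could look like the sketch below. It assumes the quoted-identifier behaviour shown in the proposed change (i.e. that a capitalized column is referenced as `col("\"A\"")`), which is exactly what this thread is weighing against a parsing `col` / `parse_col` alternative; the CSV path and header names come from the docs example.
   
   ```rust
   use datafusion::prelude::*;
   
   #[tokio::main]
   async fn main() -> datafusion::error::Result<()> {
       let ctx = SessionContext::new();
   
       // Same file as in the doc snippet above; assumed to have mixed-case
       // headers such as "A", "b" and "c".
       let df = ctx
           .read_csv("tests/data/capitalized_example.csv", CsvReadOptions::new())
           .await?;
   
       // Under the behaviour documented by the diff, an unquoted name would be
       // normalized, so the capitalized column "A" is referenced by embedding a
       // quoted identifier in the string passed to `col`.
       let df = df
           .filter(col("\"A\"").lt_eq(col("c")))?
           .aggregate(vec![col("\"A\"")], vec![min(col("b"))])?
           .limit(0, Some(100))?;
   
       df.show().await?;
       Ok(())
   }
   ```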


