[GitHub] [spark] amaliujia commented on a diff in pull request #38546: [SPARK-41036][CONNECT][PYTHON] `columns` API should use `schema` API to avoid data fetching

2022-11-10 Thread GitBox


amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1019478368


##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
         if self._plan is None:
             return []
         if "columns" not in self._cache and self._plan is not None:
-            pdd = self.limit(0).toPandas()

Review Comment:
   I removed the caching in this PR. Once we support enough of the API 
surface, we can come back and build a caching layer for all metadata-like 
APIs to save RPC calls.
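The change under review derives `columns` from `schema` instead of round-tripping row data via `limit(0).toPandas()`. A minimal standalone sketch of that pattern (plain Python; `StructField` and `ConnectDataFrame` here are simplified hypothetical stand-ins, not the actual Spark Connect classes):

```python
from typing import List


class StructField:
    """Simplified stand-in for a schema field; only `name` matters here."""

    def __init__(self, name: str):
        self.name = name


class ConnectDataFrame:
    """Sketch of a DataFrame whose `columns` derives from `schema`,
    so no row data is ever fetched."""

    def __init__(self, fields: List[StructField]):
        self._fields = fields
        self.schema_calls = 0  # counts how often the (expensive) schema fetch runs

    @property
    def schema(self) -> List[StructField]:
        self.schema_calls += 1  # stands in for one gRPC round trip
        return self._fields

    @property
    def columns(self) -> List[str]:
        # Derive column names from the schema instead of limit(0).toPandas().
        return [f.name for f in self.schema]


df = ConnectDataFrame([StructField("id"), StructField("name")])
print(df.columns)  # ['id', 'name']
```

The key point of the PR title: only schema metadata crosses the wire, never rows.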



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] amaliujia commented on a diff in pull request #38546: [SPARK-41036][CONNECT][PYTHON] `columns` API should use `schema` API to avoid data fetching

2022-11-09 Thread GitBox


amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1018712010


##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
         if self._plan is None:
             return []
         if "columns" not in self._cache and self._plan is not None:
-            pdd = self.limit(0).toPandas()

Review Comment:
   If you think this is a bit over-engineered, I can remove it.







[GitHub] [spark] amaliujia commented on a diff in pull request #38546: [SPARK-41036][CONNECT][PYTHON] `columns` API should use `schema` API to avoid data fetching

2022-11-09 Thread GitBox


amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1018710135


##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
         if self._plan is None:
             return []
         if "columns" not in self._cache and self._plan is not None:
-            pdd = self.limit(0).toPandas()

Review Comment:
   Hmm, if users call this API multiple times, we only need one gRPC call. 
That should be useful, right?
   
   something like:
   ```
   df.columns()
   
   
   xxx
   df.columns()
   
   
   df.columns()
   ```
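The repeated-call pattern above is exactly what a memoized column lookup would optimize. A hedged sketch of such a cache (standalone Python with hypothetical names like `CachedColumnsDataFrame`; not the real Spark Connect implementation):

```python
from typing import Dict, List


class CachedColumnsDataFrame:
    """Sketch: memoize the column list so repeated `columns` lookups
    trigger at most one schema round trip."""

    def __init__(self, column_names: List[str]):
        self._column_names = column_names
        self._cache: Dict[str, List[str]] = {}
        self.rpc_calls = 0

    def _fetch_schema_names(self) -> List[str]:
        self.rpc_calls += 1  # stands in for one gRPC round trip
        return list(self._column_names)

    @property
    def columns(self) -> List[str]:
        # Only the first access pays the RPC cost; later accesses hit the cache.
        if "columns" not in self._cache:
            self._cache["columns"] = self._fetch_schema_names()
        return self._cache["columns"]


df = CachedColumnsDataFrame(["id", "name"])
df.columns
df.columns
df.columns
print(df.rpc_calls)  # 1
```

Three accesses, one round trip: this is the trade-off being weighed in the thread, cached speed versus the extra cache-invalidation machinery it requires.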









[GitHub] [spark] amaliujia commented on a diff in pull request #38546: [SPARK-41036][CONNECT][PYTHON] `columns` API should use `schema` API to avoid data fetching

2022-11-08 Thread GitBox


amaliujia commented on code in PR #38546:
URL: https://github.com/apache/spark/pull/38546#discussion_r1017484449


##
python/pyspark/sql/connect/dataframe.py:
##
@@ -139,11 +139,9 @@ def columns(self) -> List[str]:
         if self._plan is None:
             return []
         if "columns" not in self._cache and self._plan is not None:
-            pdd = self.limit(0).toPandas()

Review Comment:
   I would prefer not to depend on the underlying API when doing caching...
   
   E.g. what if someday the cache on the schema is gone, but this API is not 
aware of it, etc.
   
   Basically, do not make assumptions :) 
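The concern above can be made concrete: if `columns` silently piggybacks on a cache inside `schema`, and that cache is later removed, every `columns` call becomes an RPC again without anyone noticing. A standalone sketch (hypothetical `SchemaClient` and `DataFrameOwnCache` names, not the real Spark Connect code) of `columns` owning its own cache entry instead:

```python
from typing import List, Optional


class SchemaClient:
    """Stand-in for the RPC layer; counts round trips."""

    def __init__(self, names: List[str]):
        self._names = names
        self.rpc_calls = 0

    def fetch(self) -> List[str]:
        self.rpc_calls += 1
        return list(self._names)


class DataFrameOwnCache:
    """`columns` keeps its own cache entry rather than assuming `schema` caches."""

    def __init__(self, client: SchemaClient):
        self._client = client
        self._columns_cache: Optional[List[str]] = None

    @property
    def schema(self) -> List[str]:
        # Imagine a refactor removed caching here: every call is now an RPC.
        return self._client.fetch()

    @property
    def columns(self) -> List[str]:
        if self._columns_cache is None:
            self._columns_cache = self.schema
        return self._columns_cache


client = SchemaClient(["id", "name"])
df = DataFrameOwnCache(client)
df.columns
df.columns
print(client.rpc_calls)  # 1
```

Because `columns` holds its own cache, it stays one-RPC even if the caching behavior of `schema` changes underneath it, which is the "do not make assumptions" point.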


