zero323 commented on a change in pull request #34466:
URL: https://github.com/apache/spark/pull/34466#discussion_r751071337
##########
File path: python/pyspark/context.py
##########
@@ -15,6 +15,7 @@
# limitations under the License.
#
+from __future__ import annotations
Review comment:
Think about these cases:
```python
# foo.py
class Foo:
    def foo(self) -> Foo: ...
```
In a normal execution environment, `Foo` in the return type is evaluated before the `Foo` class is defined, so at runtime you'll get an exception
```
Traceback (most recent call last):
  File "foo.py", line 3, in <module>
    class Foo:
  File "foo.py", line 4, in Foo
    def foo(self) -> Foo: ...
NameError: name 'Foo' is not defined
```
However, it will type check, because mypy has used delayed evaluation for a while now. Hence, we quote the annotation:
```python
# foo.py
class Foo:
    def foo(self) -> "Foo": ...
```
Quoting wouldn't be necessary in 3.7+ with `from __future__ import annotations`, and eventually not at all once delayed evaluation becomes the default.
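As a minimal sketch of that, the same module with the future import in place runs cleanly without quoting the self-reference:
```python
# foo.py
from __future__ import annotations  # annotations are stored as strings, not evaluated at definition time

class Foo:
    # No quotes needed; the annotation is no longer evaluated when the class body runs.
    def foo(self) -> Foo: ...
```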
As for `TYPE_CHECKING` blocks, @ueshin and @xinrong-databricks have done a lot of work here and set certain conventions, so it is best to analyze their work, but in general (see the sketch after this list):
- Imports of non-code objects (`Protocol`s, aliases, and such) that live only in stub files must go under `TYPE_CHECKING`.
- Imports that are added only for the sake of type checking and would cause cyclic dependencies between modules should go under `TYPE_CHECKING`.
- Imports from `typing_extensions` must go under `TYPE_CHECKING`, to avoid a hard dependency on this package, but I don't think we have any need for these in `.py` files.
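A minimal sketch of the pattern (the imported names and module paths here are hypothetical, just to show the shape of a `TYPE_CHECKING` block):
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # These imports are seen only by the type checker and are skipped at runtime,
    # which avoids import cycles and hard runtime dependencies.
    from pyspark.sql._typing import ColumnOrName  # hypothetical stub-only alias
    from typing_extensions import Literal  # hypothetical; no runtime dependency


def select_column(name: "ColumnOrName") -> None:
    # Names imported under TYPE_CHECKING must be quoted (or the module must use
    # `from __future__ import annotations`), since they don't exist at runtime.
    ...
```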