1zg12 commented on PR #37738:
URL: https://github.com/apache/spark/pull/37738#issuecomment-1233696662

   > Hm, skipping them doesn't seem right either. Not sure if this should be an 
option; it is just something that doesn't make sense to encode
   
   If a developer/application is comfortable with a field having a 
self/circular reference, then from Spark's perspective it should let the 
developer stop the loop gracefully (that is, skip further processing of that 
field in the loop, at the developer's own judgement).
   
   This PR does not force either behavior (failing the whole application 
immediately, as today, or skipping the field if the developer chooses to); it 
leaves the choice to the developer. Ultimately, the developers building the 
application have the best knowledge of how to handle it.
   
   I guess Spark assumed a circular reference must be a mistake made by the 
developer/application. But it can really be a valid case, even if a rare one.
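   The two behaviors under discussion can be sketched outside Spark with plain 
reflection (a hypothetical illustration, not Spark's actual encoder code; the 
`Employee`/`inferFields` names are made up for this sketch):

   ```java
   import java.lang.reflect.Method;
   import java.util.ArrayList;
   import java.util.HashSet;
   import java.util.List;
   import java.util.Set;

   // A bean with a self-reference that is a perfectly valid domain model.
   class Employee {
       public String getName() { return "n"; }
       public Employee getManager() { return null; } // circular: Employee -> Employee
   }

   public class CircularRefSketch {
       // Walk a bean's getters to infer a "schema". On revisiting a class
       // (a circular reference), either fail fast (today's behavior) or
       // skip the field (the behavior this PR would allow opting into).
       static List<String> inferFields(Class<?> cls, Set<Class<?>> seen,
                                       boolean skipCircular) {
           List<String> fields = new ArrayList<>();
           seen.add(cls);
           for (Method m : cls.getDeclaredMethods()) {
               if (!m.getName().startsWith("get")) continue;
               Class<?> t = m.getReturnType();
               String name = m.getName().substring(3);
               if (seen.contains(t)) {
                   if (skipCircular) continue; // developer opted to stop the loop here
                   throw new UnsupportedOperationException(
                       "cannot encode circular reference: " + t.getName());
               }
               if (t == String.class) {
                   fields.add(name);
               } else {
                   // Nested bean: recurse, tracking classes seen on this path.
                   for (String f : inferFields(t, new HashSet<>(seen), skipCircular)) {
                       fields.add(name + "." + f);
                   }
               }
           }
           return fields;
       }

       public static void main(String[] args) {
           // Skipping yields a usable schema; failing fast throws instead.
           System.out.println(inferFields(Employee.class, new HashSet<>(), true));
       }
   }
   ```

   With `skipCircular=true` the `manager` field is simply dropped; with 
`false` the traversal aborts, which is what Spark does today.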


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

