[ 
https://issues.apache.org/jira/browse/SPARK-12731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15126886#comment-15126886
 ] 

Bryan Cutler commented on SPARK-12731:
--------------------------------------

Just to add my two cents, since I've been working on a similar cleanup task: I 
always thought the 80-char limit was to be friendly to people who had to work 
out of a limited terminal, maybe doing sys-admin or embedded work.  I can't 
imagine someone doing serious work on Spark where this is a necessity, and I 
think a 100-char limit across the board is still fair for readability and 
much simpler to enforce.

> PySpark docstring cleanup
> -------------------------
>
>                 Key: SPARK-12731
>                 URL: https://issues.apache.org/jira/browse/SPARK-12731
>             Project: Spark
>          Issue Type: Improvement
>          Components: Documentation, PySpark
>            Reporter: holdenk
>            Priority: Trivial
>
> We don't currently have any automated checks that our PySpark docstring lines 
> are within pep8/275/276 length limits (since the pep8 checker doesn't handle 
> this). As such there are ~400 non-conformant docstring lines. This JIRA is to 
> fix those docstring lines and add a command to lint-python to fail on long 
> lines.
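
For anyone curious what such a check could look like, here's a minimal standard-library sketch (not the actual Spark lint script — the function name and the 100-character limit are assumptions for illustration). It walks the AST, finds docstrings, and reports source lines inside them that exceed the limit:

```python
import ast

def long_docstring_lines(source, max_len=100):
    """Return 1-based line numbers of docstring lines longer than max_len."""
    lines = source.splitlines()
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Module, ast.FunctionDef,
                             ast.AsyncFunctionDef, ast.ClassDef)):
            body = getattr(node, "body", [])
            # A docstring is a bare string constant as the first statement.
            if (body and isinstance(body[0], ast.Expr)
                    and isinstance(body[0].value, ast.Constant)
                    and isinstance(body[0].value.value, str)):
                doc = body[0]
                # Check the raw source lines the docstring spans,
                # so indentation and quotes count toward the limit.
                for n in range(doc.lineno, doc.end_lineno + 1):
                    if len(lines[n - 1]) > max_len:
                        offenders.append(n)
    return sorted(set(offenders))
```

Wiring something like this into the lint script and failing the build on a non-empty result would cover the docstring gap that the plain pep8 checker misses.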



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
