[ https://issues.apache.org/jira/browse/SPARK-13823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15195036#comment-15195036 ]
Apache Spark commented on SPARK-13823:
--------------------------------------

User 'srowen' has created a pull request for this issue:
https://github.com/apache/spark/pull/11725

> Always specify Charset in String <-> byte[] conversions (and remaining
> Coverity items)
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-13823
>                 URL: https://issues.apache.org/jira/browse/SPARK-13823
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core, SQL, Streaming
>    Affects Versions: 2.0.0
>            Reporter: Sean Owen
>            Assignee: Sean Owen
>            Priority: Minor
>             Fix For: 2.0.0
>
>
> Most of the remaining items from the last Coverity scan concern using, for
> example, the constructor {{new String(byte[])}} or the method
> {{String.getBytes()}}, or similarly the constructors of {{InputStreamReader}}
> and {{OutputStreamWriter}}. These use the platform default encoding, which
> means their behavior may change in different locales; that is undesirable in
> all cases in Spark.
> It makes sense to specify UTF-8 everywhere; where a charset is already
> specified, it's UTF-8 in 95% of cases. A few tests set US-ASCII, but UTF-8 is
> a superset.
> We should also consistently use {{StandardCharsets.UTF_8}} rather than
> "UTF-8" or Guava's {{Charsets.UTF_8}} to specify this.
> (Finally, we should touch up the other few remaining Coverity scan items,
> which are trivial, while we're here.)

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
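To illustrate the pattern the issue describes (this is a minimal standalone sketch, not code from the linked pull request): the charset-less overloads silently use the platform default encoding, while passing {{StandardCharsets.UTF_8}} makes the String <-> byte[] round trip deterministic across locales. The class name {{CharsetDemo}} is invented for this example.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) throws IOException {
        // Non-ASCII character: round-trips correctly only if the
        // encoding used to encode matches the one used to decode.
        String s = "café";

        // Risky: s.getBytes() and new String(bytes) use the platform
        // default charset, which varies by locale/JVM configuration.
        // Preferred: pass StandardCharsets.UTF_8 explicitly.
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        String back = new String(utf8, StandardCharsets.UTF_8);
        System.out.println(back.equals(s));  // true
        System.out.println(utf8.length);     // 5: 'é' is two bytes in UTF-8

        // The same applies to the stream reader/writer constructors:
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (OutputStreamWriter w =
                 new OutputStreamWriter(bos, StandardCharsets.UTF_8)) {
            w.write(s);
        }
        try (InputStreamReader r = new InputStreamReader(
                 new ByteArrayInputStream(bos.toByteArray()),
                 StandardCharsets.UTF_8)) {
            char[] buf = new char[16];
            int n = r.read(buf);
            System.out.println(new String(buf, 0, n).equals(s));  // true
        }
    }
}
```

Using {{StandardCharsets.UTF_8}} (a {{Charset}} constant) instead of the string {{"UTF-8"}} also avoids the checked {{UnsupportedEncodingException}} that the string-based overloads declare.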