GitHub user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/9724#discussion_r44884741
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameReader.scala ---
@@ -227,6 +227,15 @@ class DataFrameReader private[sql](sqlContext: SQLContext) extends Logging {
 * This function goes through the input once to determine the input schema. If you know the
 * schema in advance, use the version that specifies the schema to avoid the extra scan.
 *
+ * You can set the following JSON-specific options to deal with non-standard JSON files:
+ * <li>`primitivesAsString` (default `false`): infers all primitive values as a string type</li>
+ * <li>`allowComments` (default `false`): ignores Java/C++ style comments in JSON records</li>
+ * <li>`allowUnquotedFieldNames` (default `false`): allows unquoted JSON field names</li>
+ * <li>`allowSingleQuotes` (default `true`): allows single quotes in addition to double quotes
+ * </li>
+ * <li>`allowNumericLeadingZeros` (default `false`): allows leading zeros in numbers
+ * (e.g. 00012)</li>
--- End diff --
I think we skipped it in the past because it had very little impact on
performance, so in most cases it is better to just use 1.0... Maybe we should
even deprecate that option.
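
For reference, here is a minimal sketch of how the JSON-specific options documented in the diff above could be passed through the reader API. The SQLContext setup, the object name, and the input path "people.json" are illustrative assumptions, not part of the diff; the option keys and defaults come from the documentation being added.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object JsonOptionsSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("JsonOptionsSketch").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        // Each .option(...) call corresponds to one of the JSON-specific
        // options listed in the Scaladoc above; values are passed as strings.
        val df = sqlContext.read
          .option("primitivesAsString", "true")       // infer every primitive as a string type
          .option("allowComments", "true")            // tolerate // and /* */ comments in records
          .option("allowUnquotedFieldNames", "true")  // accept {field: 1}
          .option("allowSingleQuotes", "true")        // accept {'field': 1} (true by default)
          .option("allowNumericLeadingZeros", "true") // accept {"field": 00012}
          .json("people.json")                        // hypothetical input path

        df.printSchema()
        sc.stop()
      }
    }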