[
https://issues.apache.org/jira/browse/FLINK-8714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Michal Klempa updated FLINK-8714:
---------------------------------
Description:
When a newcomer (like me) goes through the docs, there are several places
where examples encourage reading the input data using the
{{env.readTextFile()}} method.
This method variant does not take a second argument for the character set (see
[https://ci.apache.org/projects/flink/flink-docs-release-1.4/api/java/org/apache/flink/streaming/api/environment/StreamExecutionEnvironment.html#readTextFile-java.lang.String-]).
According to the Javadoc, this version relies on the system default: "The file
will be read with the system's default character set."
This is also the default behavior elsewhere in Java, e.g. in the
{{java.lang.String.getBytes()}} method, where not supplying a charset means
using the system locale, or the one the JVM was started with (see
[https://stackoverflow.com/questions/64038/setting-java-locale-settings]).
There are two ways to set the locale prior to JVM start: {{-D}} arguments or
the {{LC_ALL}} environment variable.
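To make the dependence on the JVM default concrete, here is a minimal JDK-only sketch (no Flink involved); the class name and sample string are just illustrations:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DefaultCharsetDemo {
    public static void main(String[] args) {
        String s = "žluťoučký"; // non-ASCII sample text
        // No charset argument: the JVM's environment-dependent default is used.
        byte[] platformBytes = s.getBytes();
        // Explicit charset: identical result on every machine.
        byte[] utf8Bytes = s.getBytes(StandardCharsets.UTF_8);
        System.out.println("JVM default charset: " + Charset.defaultCharset());
        System.out.println("UTF-8 byte length: " + utf8Bytes.length); // 13
        System.out.println("platform byte length: " + platformBytes.length); // varies with locale
    }
}
```

The platform byte length matches the UTF-8 one only when the JVM happens to default to UTF-8, which is exactly the kind of "works on localhost" coincidence this issue is about.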
Given that this is something a new Flink user may not know about, and that it
can cost hours of tracking down an environment-related bug (it works on
localhost, but production uses a different locale), I would kindly suggest a
documentation change: let's migrate the examples to the two-argument version,
{{readTextFile(filePath, charsetName)}}.
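The failure mode this guards against can be reproduced with plain JDK I/O (no Flink needed): decoding bytes with the wrong charset, which is what happens implicitly when the platform default differs from the file's actual encoding, silently garbles the data. The file name and sample text below are illustrative only:

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CharsetMismatchDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("charset-demo", ".txt");
        Files.write(tmp, "über".getBytes(StandardCharsets.UTF_8)); // file is UTF-8 on disk

        byte[] raw = Files.readAllBytes(tmp);
        // Explicit, correct charset: deterministic everywhere.
        System.out.println(new String(raw, StandardCharsets.UTF_8));      // über
        // Wrong charset (e.g. an ISO-8859-1 platform default): mojibake.
        System.out.println(new String(raw, StandardCharsets.ISO_8859_1)); // Ã¼ber

        Files.delete(tmp);
    }
}
```

Passing the charset explicitly, as in {{env.readTextFile(filePath, "UTF-8")}}, pins the decoding the same way the explicit {{StandardCharsets.UTF_8}} does here.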
I am open to criticism and suggestions. The listing of {{readTextFile}} I was
able to grep in docs is:
{code:java}
./dev/datastream_api.md:- `readTextFile(path)` - Reads text files, i.e. files
that respect the `TextInputFormat` specification, line-by-line and returns them
as Strings.
./dev/datastream_api.md:- `readTextFile(path)` - Reads text files, i.e. files
that respect the `TextInputFormat` specification, line-by-line and returns them
as Strings.
./dev/libs/storm_compatibility.md:DataStream<String> text =
env.readTextFile(localFilePath);
./dev/cluster_execution.md: DataSet<String> data =
env.readTextFile("hdfs://path/to/file");
./dev/batch/index.md:- `readTextFile(path)` / `TextInputFormat` - Reads files
line wise and returns them as Strings.
./dev/batch/index.md:- `readTextFileWithValue(path)` / `TextValueInputFormat` -
Reads files line wise and returns them as
./dev/batch/index.md:DataSet<String> localLines =
env.readTextFile("file:///path/to/my/textfile");
./dev/batch/index.md:DataSet<String> hdfsLines =
env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile");
./dev/batch/index.md:DataSet<String> logs =
env.readTextFile("file:///path/with.nested/files")
./dev/batch/index.md:- `readTextFile(path)` / `TextInputFormat` - Reads files
line wise and returns them as Strings.
./dev/batch/index.md:- `readTextFileWithValue(path)` / `TextValueInputFormat` -
Reads files line wise and returns them as
./dev/batch/index.md:val localLines =
env.readTextFile("file:///path/to/my/textfile")
./dev/batch/index.md:val hdfsLines =
env.readTextFile("hdfs://nnHost:nnPort/path/to/my/textfile")
./dev/batch/index.md:env.readTextFile("file:///path/with.nested/files").withParameters(parameters)
./dev/batch/index.md:DataSet<String> lines = env.readTextFile(pathToTextFile);
./dev/batch/index.md:val lines = env.readTextFile(pathToTextFile)
./dev/batch/examples.md:DataSet<String> text =
env.readTextFile("/path/to/file");
./dev/batch/examples.md:val text = env.readTextFile("/path/to/file")
./dev/api_concepts.md:DataStream<String> text =
env.readTextFile("file:///path/to/file");
./dev/api_concepts.md:val text: DataStream[String] =
env.readTextFile("file:///path/to/file")
./dev/local_execution.md: DataSet<String> data =
env.readTextFile("file:///path/to/file");
./ops/deployment/aws.md:env.readTextFile("s3://<bucket>/<endpoint>");{code}
> Suggest new users to use env.readTextFile method with 2 arguments (using the
> charset), not to rely on system charset (which varies across environments)
> -------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: FLINK-8714
> URL: https://issues.apache.org/jira/browse/FLINK-8714
> Project: Flink
> Issue Type: Improvement
> Components: Documentation
> Affects Versions: 1.4.0
> Reporter: Michal Klempa
> Priority: Trivial
> Labels: easyfix, newbie
>
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)