Jason van Zyl wrote:
> If it's right most of the time, and it saves the user from having to know
> or worry about it then yes I would use it.

Could you elaborate on this a little more? Say we start with an easy case: a
build with just about 100 Java source files. Do you suggest peeking at each
of them before passing them to a tool like javac, or only at a subset? If
the latter, how should that subset be determined? What should be done when
the charset detection reports different encodings for the set of files to
process? Will the charset detection happen over and over again for each
plugin (javac, javadoc, jxr)? And what do you consider "most of the time"?
Telling the members of the ISO-8859 family apart is not really easy. My
impression is that using JChardet would significantly increase code
complexity without giving me a solid build.
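
To make the ISO-8859 point concrete, here is a minimal, self-contained
sketch (plain JDK, no JChardet involved; the bytes are made up for
illustration) showing why byte-level detection cannot reliably tell two
members of that family apart:

import java.io.UnsupportedEncodingException;

public class EncodingAmbiguity {
    public static void main(String[] args) throws UnsupportedEncodingException {
        // "cost: " followed by byte 0xA4, which is the currency sign in
        // ISO-8859-1 but the euro sign in ISO-8859-15.
        byte[] bytes = { 0x63, 0x6F, 0x73, 0x74, 0x3A, 0x20, (byte) 0xA4 };

        // Both decodings succeed, because every byte value is valid in both
        // charsets; a detector can only guess from character statistics
        // which one the author intended.
        System.out.println(new String(bytes, "ISO-8859-1"));  // cost: <currency sign>
        System.out.println(new String(bytes, "ISO-8859-15")); // cost: <euro sign>
    }
}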

Also, I believe it's a bad idea to free users from worrying about the
encoding. It would be similar to the dubious magic the JRE provides with its
default encoding: it encourages developers to ignore the encoding issue
altogether, leading to platform-dependent behavior. Platform-dependent Java
code is bad practice, and Maven, as far as I know, aims at promoting best
practices. The file encoding is a parameter that affects your build output
just like the source/target settings used for the compiler, and hence it
should be controlled explicitly.
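
To illustrate the kind of magic I mean, compare these two ways of reading a
source file (a sketch only; Foo.java is a placeholder name):

import java.io.FileInputStream;
import java.io.FileReader;
import java.io.InputStreamReader;
import java.io.Reader;

public class ExplicitEncoding {
    public static void main(String[] args) throws Exception {
        // Implicit: FileReader silently falls back to the JRE default
        // encoding, so the same bytes decode differently on a Cp1252
        // Windows box and on a UTF-8 Linux box.
        Reader implicit = new FileReader("Foo.java");
        implicit.close();

        // Explicit: the encoding is passed as a parameter, exactly like
        // the source/target settings handed to javac.
        Reader explicit =
            new InputStreamReader(new FileInputStream("Foo.java"), "UTF-8");
        explicit.close();
    }
}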

While we are talking about it: what is the agreed file encoding for the
Maven sources (MNGSITE-46)?


Benjamin

