Okay, legacy web application - what appserver?
Seems like the bug here is that, if NetBeans is starting the application
server, it should detect (from configuration files or wherever) the
encoding that application server will use for logging (or know what it
is). That would be an actual fix.
I'm working right now on a legacy web project that uses a bunch of
System.out.println for logging and debugging. It was plagued by a lot of
encoding problems, and I fixed most of them by forcing everything to be
UTF-8. There is no sense in using the platform default encoding when the
text output
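Forcing everything to UTF-8 can be done without touching every existing `System.out.println` call, by swapping in a UTF-8 `PrintStream` at startup. A minimal sketch (the class and method names are mine, not from the project; the String-charset constructor is used so it also works on older JVMs):

```java
import java.io.FileDescriptor;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.io.PrintStream;
import java.io.UnsupportedEncodingException;

public class Utf8Console {

    // Wrap any OutputStream in an auto-flushing UTF-8 PrintStream.
    static PrintStream utf8(OutputStream out) {
        try {
            return new PrintStream(out, true, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            // UTF-8 support is mandatory for every JVM, so this cannot happen.
            throw new AssertionError("UTF-8 must be supported", e);
        }
    }

    public static void main(String[] args) {
        // Force stdout to UTF-8 regardless of the platform default, so
        // every existing System.out.println keeps working unchanged.
        System.setOut(utf8(new FileOutputStream(FileDescriptor.out)));
        System.out.println("UTF-8 safe: äöü €");
    }
}
```

Doing this once, early in startup, is usually less invasive than auditing every call site for encoding assumptions.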
No argument that the situation needs a fix.
But you didn't answer my question: *What* are you running when the problem
shows up? Your own Java project? If so, Ant, Maven or something else
(i.e. build system where this is settable/detectable or not?)? Or some
application server or third
TB> That's not that uncommon, but the right solution is to *detect* that
the output is UTF-8 when the IDE runs whatever it is you're running.
That's hard to do in general, unfortunately. Web browsers do character set
detection by a statistical analysis of character frequencies in input
documents
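A full frequency-based detector is indeed hard, but a much cheaper heuristic (just a sketch, not what any browser or IDE actually ships) is to check whether the bytes even decode as valid UTF-8, since longer non-ASCII text that happens to be well-formed UTF-8 is rarely anything else:

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class Utf8Sniffer {

    // Returns true if the bytes form well-formed UTF-8. REPORT makes the
    // decoder throw on malformed input instead of substituting U+FFFD.
    static boolean looksLikeUtf8(byte[] bytes) {
        CharsetDecoder decoder = StandardCharsets.UTF_8.newDecoder()
                .onMalformedInput(CodingErrorAction.REPORT)
                .onUnmappableCharacter(CodingErrorAction.REPORT);
        try {
            decoder.decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        byte[] utf8 = {(byte) 0xC3, (byte) 0xA9};  // "é" encoded as UTF-8
        byte[] cp1252 = {(byte) 0xE9};             // "é" encoded as Windows-1252
        System.out.println(looksLikeUtf8(utf8));   // true
        System.out.println(looksLikeUtf8(cp1252)); // false
    }
}
```

The caveat is the usual one: pure ASCII is valid in almost every encoding, so this only disambiguates once non-ASCII bytes actually appear.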
In my case, I'm running on Windows, with the dreaded and hated Windows-1252
default encoding.
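For anyone wanting to confirm which default their JVM actually picked up, a quick check (the actual values printed depend on OS and locale):

```java
import java.nio.charset.Charset;

public class ShowEncoding {
    public static void main(String[] args) {
        // On a Western-locale Windows this is typically Cp1252/windows-1252;
        // on most Linux and macOS setups it is UTF-8.
        System.out.println("file.encoding  = " + System.getProperty("file.encoding"));
        System.out.println("defaultCharset = " + Charset.defaultCharset());
    }
}
```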
Using the default OS encoding is really bad for portability and causes a lot
of encoding problems. See this JEP draft, possibly targeting Java 11:
http://openjdk.java.net/jeps/8187041 - There are three proposed
Your problem is most likely your operating system's default file encoding
here (perhaps MacRoman?). The IDE is assuming that process output is
whatever your operating system's default encoding is, which is the right
assumption, since that *is* what command-line utilities will output. It
happens
I have long had problems with the console output encoding in NetBeans,
which always presented garbled non-ASCII characters for me.
After deciding I'd had enough, I went searching for a solution and found a
very simple one on StackOverflow. Just add
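(The message is cut off here, but the widely circulated StackOverflow fix is passing a `file.encoding` override to the JVM that runs the IDE via `netbeans.conf`; the exact line below is my reconstruction of that fix, not quoted from this message:)

```
# etc/netbeans.conf: append the -J-D switch to the existing options line.
# -J passes the flag through the NetBeans launcher to the IDE's JVM.
netbeans_default_options="... -J-Dfile.encoding=UTF-8"
```

Note this changes the encoding for the whole IDE process, not just the output window, which is exactly why detecting the child process's encoding (as discussed above) would be the cleaner fix.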