The java.awt.Graphics.drawBytes() method has never specified how it interprets
the byte data.

So an implementation might
- interpret the bytes as ASCII
- interpret the bytes as UTF-8
- interpret the bytes using the default (host) encoding
- let the underlying rendering system interpret the bytes, which probably,
  but not necessarily, equates to the host encoding.
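To see why the choice matters, here is a minimal sketch (not tied to any particular JRE) showing that the same byte can decode to different characters, or fail to decode at all, depending on which of these interpretations an implementation picks:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DecodeDivergence {
    public static void main(String[] args) {
        // 0xE9 is 'e' with an acute accent in ISO-8859-1, but it is a
        // malformed standalone byte in UTF-8.
        byte[] data = { (byte) 0xE9 };
        String latin1 = new String(data, StandardCharsets.ISO_8859_1);
        String utf8   = new String(data, StandardCharsets.UTF_8);
        System.out.println(latin1.equals("\u00E9")); // true
        System.out.println(utf8.equals("\uFFFD"));   // true: replacement char
        // The default (host) encoding varies from machine to machine.
        System.out.println(Charset.defaultCharset().name());
    }
}
```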

Presently I don't see how any application could reliably depend on
this API when the results are unspecified.

It has been proposed to tighten/clarify the specification so that
the bytes are always interpreted in the default (host) encoding of the JRE.
This means that, implicitly, something like new String(byteData) would occur.
But an application can do that conversion itself anyway, and then use drawString.
The current performance of drawBytes and drawString is essentially identical.
This change would add the overhead of new String(byteData), so drawBytes would
be definitely slower than it is now, although somewhat more functional.
That may not matter if drawBytes is rarely used, which is almost certainly
the case if we are all writing applications we expect to be useful outside
of Western European language markets.
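The conversion an application can already perform itself might look like this (a sketch; the helper names and the Graphics plumbing are illustrative, not part of any AWT API):

```java
import java.awt.Graphics;

public class DrawBytesWorkaround {
    // What the proposed drawBytes spec would amount to: decode the bytes
    // in the platform default encoding, then delegate to drawString.
    // (Illustrative helper, not an actual AWT method.)
    static void drawBytesDefault(Graphics g, byte[] data, int offset,
                                 int length, int x, int y) {
        g.drawString(decodeDefault(data, offset, length), x, y);
    }

    // The implicit new String(byteData) step, isolated for clarity.
    static String decodeDefault(byte[] data, int offset, int length) {
        return new String(data, offset, length); // default (host) encoding
    }

    public static void main(String[] args) {
        // Encoding and decoding through the same default charset
        // round-trips ASCII-range text losslessly.
        byte[] bytes = "Hello".getBytes();
        System.out.println(decodeDefault(bytes, 0, bytes.length));
    }
}
```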

An alternative is to specify that the bytes are interpreted as ASCII.
This is what currently happens in JDK 1.4.x, but it isn't necessarily what
all implementations have done.
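Specifying ASCII would make the behavior deterministic but lossy: with Java's US-ASCII decoder, any byte above 0x7F comes back as the replacement character. A small illustration:

```java
import java.nio.charset.StandardCharsets;

public class AsciiLoss {
    public static void main(String[] args) {
        // Two bytes encoded in ISO-8859-1: 'A' followed by an accented 'e'.
        // The second byte is outside the ASCII range.
        byte[] data = { (byte) 'A', (byte) 0xE9 };
        String s = new String(data, StandardCharsets.US_ASCII);
        System.out.println(s.equals("A\uFFFD")); // true: the accent is lost
    }
}
```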

Note also that the APIs that let you measure the size of text as it will be
displayed aren't helpful here, since they work with Strings and chars.
If measurement matters to you, you'd probably want to convert the byte data
to one of those formats anyway.
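A sketch of that convert-then-measure step (the charset choice is the application's to make; the headless property setting is just so the example runs without a display):

```java
import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.nio.charset.StandardCharsets;

public class MeasureBytes {
    public static void main(String[] args) {
        System.setProperty("java.awt.headless", "true");
        byte[] data = "Hello".getBytes(StandardCharsets.US_ASCII);
        // The measurement APIs take Strings/chars, so convert first.
        String text = new String(data, StandardCharsets.US_ASCII);
        Font font = new Font("Dialog", Font.PLAIN, 12);
        FontRenderContext frc = new FontRenderContext(null, true, true);
        double width = font.getStringBounds(text, frc).getWidth();
        System.out.println(width > 0); // non-empty text has a positive advance
    }
}
```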

So I am interested to hear how developers currently use drawBytes, to help
decide which action, if any, to take.

-Phil.
