Hi.

I hacked the TXTRenderer for i18n.

Currently the org.apache.fop.render.pcl.PCLStream class is
used as the OutputStream in TXTRenderer. The add method in
the PCLStream class is as follows:

    public void add(String str) {
        if (!doOutput)
            return;

        byte buff[] = new byte[str.length()];
        int countr;
        int len = str.length();
        for (countr = 0; countr < len; countr++)
            buff[countr] = (byte)str.charAt(countr);
        try {
            out.write(buff);
        } catch (IOException e) {
            // e.printStackTrace();
            // e.printStackTrace(System.out);
            throw new RuntimeException(e.toString());
        }
    }

I think that this algorithm is wrong for characters above 127.
The reason is that a char is 2 bytes wide while a byte is only
1 byte, so the cast throws away the high byte of every
character. To avoid this problem, I think the following
algorithm is better than the current one:
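To make the problem concrete, here is a tiny standalone demo (not FOP code) of how the per-char cast destroys any character whose code point does not fit in one byte, while an encoding-aware conversion preserves it:

```java
public class CastTruncationDemo {
    public static void main(String[] args) throws Exception {
        char ch = '\u3042';                 // Japanese Hiragana "a", code point 0x3042
        byte castByte = (byte) ch;          // keeps only the low byte: 0x42 ('B')
        byte[] utf8 = String.valueOf(ch).getBytes("UTF-8");
        System.out.println((int) castByte); // prints 66 -- the character is destroyed
        System.out.println(utf8.length);    // prints 3 -- UTF-8 keeps all the bits
    }
}
```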

    public void add(String str) {
        if (!doOutput)
            return;

        try {
            byte buff[] = str.getBytes("UTF-8");
            out.write(buff);
        } catch (IOException e) {
            throw new RuntimeException(e.toString());
        }
    }

This algorithm may not be good for PCLRenderer, because I don't
know whether PCL printers support the UTF-8 encoding or not.
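One way to satisfy both renderers might be to parameterize the encoding rather than hard-coding UTF-8. The sketch below is hypothetical (the class name EncodedStream and its constructor are my own invention, not FOP API); it would let PCLRenderer keep a single-byte encoding such as ISO-8859-1 while TXTRenderer selects UTF-8:

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: the encoding becomes a constructor argument
// instead of being baked into the add() method.
public class EncodedStream {
    private final OutputStream out;
    private final String encoding;  // e.g. "ISO-8859-1" for PCL, "UTF-8" for text
    private boolean doOutput = true;

    public EncodedStream(OutputStream out, String encoding) {
        this.out = out;
        this.encoding = encoding;
    }

    public void setDoOutput(boolean doOutput) {
        this.doOutput = doOutput;
    }

    public void add(String str) {
        if (!doOutput)
            return;
        try {
            out.write(str.getBytes(encoding));
        } catch (IOException e) {
            throw new RuntimeException(e.toString());
        }
    }
}
```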

However, I think that the TXTRenderer could use a
multilingual-capable encoding, because it is possible to include
several languages in the same single fo file.

Therefore I think that the TXTRenderer should not use
PCLStream and should instead use its own OutputStream (such as
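If TXTRenderer owned its own output path, a plain OutputStreamWriter with an explicit encoding would avoid the byte-casting problem entirely. This is only a sketch of that idea (the class and method names are hypothetical, not existing FOP code):

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Writer;

// Hypothetical sketch: a TXTRenderer-owned helper that writes text
// through a Writer with an explicit encoding, bypassing PCLStream.
public class TxtOutput {
    public static void writeText(OutputStream out, String str)
            throws IOException {
        Writer writer = new OutputStreamWriter(out, "UTF-8");
        writer.write(str);
        writer.flush();  // leave closing the underlying stream to the caller
    }
}
```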

Is my thinking wrong?

Best Regards.

