Hi, Art.

I have attached the simplest possible changes to this mail.

I created a new org.apache.fop.render.txt.TXTStream class and 
modified the TXTRenderer class.

One difference in behavior from the existing code is that the 
generated text is written with UTF-8 encoding (not ISO-8859-1).
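
For reference, here is a minimal sketch of what such a TXTStream
class could look like. The package and class name are taken from
above, but the constructor, the doOutput flag and the fixed UTF-8
charset are only my assumptions here; the code in the attached
patch may differ in detail.

    package org.apache.fop.render.txt;

    import java.io.IOException;
    import java.io.OutputStream;

    /** Sketch: writes strings to the underlying stream as UTF-8 bytes. */
    public class TXTStream {

        private OutputStream out;
        private boolean doOutput = true;

        public TXTStream(OutputStream os) {
            out = os;
        }

        public void add(String str) {
            if (!doOutput) {
                return;
            }
            try {
                // encode the whole string instead of casting each char to a byte
                out.write(str.getBytes("UTF-8"));
            } catch (IOException e) {
                throw new RuntimeException(e.toString());
            }
        }

        public void setDoOutput(boolean doout) {
            doOutput = doout;
        }
    }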

It might be even better if users could specify a charset encoding 
somewhere. However, I also think that most users will not need 
more functionality than the current TXTRenderer provides, so I 
think these changes are enough to view the text.
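
If such an option is ever wanted, the change would be small. Below
is a purely hypothetical variation of the sketch above; the extra
constructor argument and the encoding field are my invention and
are not part of the attached patch.

    import java.io.IOException;
    import java.io.OutputStream;

    public class TXTStream {

        private OutputStream out;
        private String encoding = "UTF-8";    // default when no charset is given

        public TXTStream(OutputStream os) {
            out = os;
        }

        // hypothetical: let the renderer pass a user-configured charset
        public TXTStream(OutputStream os, String encoding) {
            out = os;
            this.encoding = encoding;
        }

        public void add(String str) {
            try {
                out.write(str.getBytes(encoding));
            } catch (IOException e) {
                throw new RuntimeException(e.toString());
            }
        }
    }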

By the way, the generated text is still quite messy :)

---
Satoshi Ishigami   VIC TOKAI CORPORATION



On Mon, 28 Jan 2002 12:01:54 -0500 , Art Welch wrote:

> You are probably correct. The TXTRenderer probably should not use the same
> add method as the PCL renderer. Since it should just generate plain text,
> there probably is not a reason that it should not be able to support i18n.
> As coded however, it may be more aptly named the "ASCIIRenderer" (or maybe
> that should be "PC-8").
> 
> Without looking at the code, I am not sure how the TXTRenderer would handle
> chars instead of bytes. My guess is that some (simple) code changes would
> need to be made.
> 
> Personally I do not know that the TXTRenderer is useful enough to be worth
> spending much effort on. But if the changes are simple and useful to
> someone... Certainly it would be good for FOP (and all of its components) to
> support i18n.
> 
> Art
> 
> -----Original Message-----
> From: Satoshi Ishigami [mailto:[EMAIL PROTECTED]]
> Sent: Sunday, January 27, 2002 6:35 PM
> To: [EMAIL PROTECTED]
> Subject: i18n in TXTRenderer
> 
> 
> 
> Hi.
> 
> I hacked the TXTRenderer for i18n.
> 
> Currently the org.apache.fop.render.pcl.PCLStream class is
> used as the output stream in TXTRenderer. The add method in the
> PCLStream class is as follows:
> 
>     public void add(String str) {
>         if (!doOutput)
>             return;
> 
>         byte buff[] = new byte[str.length()];
>         int countr;
>         int len = str.length();
>         for (countr = 0; countr < len; countr++)
>             buff[countr] = (byte)str.charAt(countr);
>         try {
>             out.write(buff);
>         } catch (IOException e) {
>             // e.printStackTrace();
>             // e.printStackTrace(System.out);
>             throw new RuntimeException(e.toString());
>         }
>     }
> 
> I think that this algorithm is wrong for characters > 127.
> The reason is that a Java char is 2 bytes while a byte is only
> 1 byte, so the cast throws away the high byte of each character.
> To avoid this problem, I think that the following algorithm is
> better than the current one.
> 
>     public void add(String str) {
>         if (!doOutput) return;
>         try {
>             byte buff[] = str.getBytes("UTF-8");
>             out.write(buff);
>         } catch (IOException e) {
>             throw new RuntimeException(e.toString());
>         }
>     }
> 
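> To make the effect concrete, here is a small stand-alone demo
> (not part of FOP; the class name and the sample character are
> only an illustration) comparing the byte cast with
> getBytes("UTF-8"):
> 
>     public class CastVsUtf8 {
>         public static void main(String[] args) throws Exception {
>             String str = "\u3042";    // HIRAGANA LETTER A, a char > 127
> 
>             // current PCLStream approach: keep only the low 8 bits
>             byte[] cast = new byte[str.length()];
>             for (int i = 0; i < str.length(); i++) {
>                 cast[i] = (byte) str.charAt(i);
>             }
>             // cast[0] is 0x42, i.e. the letter 'B' -- the character is lost
> 
>             // proposed approach: encode the whole string as UTF-8
>             byte[] utf8 = str.getBytes("UTF-8");
>             // utf8 is { 0xE3, 0x81, 0x82 }, which decodes back to \u3042
> 
>             System.out.println(cast.length + " byte vs " + utf8.length + " bytes");
>         }
>     }
> 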
> This algorithm may not be good for the PCLRenderer, because
> I don't know whether PCL printers support the UTF-8 encoding
> or not.
> 
> However, I think that the TXTRenderer could use a multilingual
> encoding, because it is possible to include several languages in
> a single fo file.
> 
> Therefore I think that the TXTRenderer should not use
> PCLStream, and had better use its own output stream class (such
> as TXTStream).
> 
> Is my thinking wrong?
> 
> Best Regards.
> 
> ---
> Satoshi Ishigami   VIC TOKAI CORPORATION
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, email: [EMAIL PROTECTED]
> 

Attachment: patch.tar.gz
Description: Binary data
