Re: Property Cache: Null Pointer Exception

2007-11-15 Thread Jeremias Maerki
Sounds pretty good. About the potential issue you described: writing
unit tests for the class can go a long way toward catching potential
problems early. It might actually be worth it, considering such a critical
component in the FO tree area. Just a thought.

Jeremias Maerki



On 15.11.2007 01:18:32 Andreas L Delmelle wrote:
 
 On Nov 14, 2007, at 21:38, Jeremias Maerki wrote:
 
 Hi Jeremias, Chris,
 
  jm-PropertyCache-MemLeak.diff.txt
 
 
 My proposal, incorporating the changes in Jeremias' diff, below.
 
 To sum it up:
 Only one CacheCleaner per PropertyCache, and one accompanying thread.
 If, after a put(), cleanup seems to be needed *and* the thread is not  
 alive, lock on the cleaner, and start the thread.
 
 If the thread is busy, I guess it suffices to continue, assuming that  
 the hash distribution should eventually lead some put() back to the same  
 bucket/segment...?
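 As a hedged sketch of that trigger (class and field names are illustrative,
 not the actual PropertyCache code; note that a java.lang.Thread cannot be
 started twice, so the single "accompanying thread" has to be recreated once
 it has died):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the proposed trigger logic, not the committed FOP code.
class CacheSketch {

    // Stand-in for the bucket traversal; counts runs so the effect is visible.
    final AtomicInteger cleanups = new AtomicInteger();

    private final Runnable cleaner = new Runnable() {
        public void run() {
            cleanups.incrementAndGet(); // real code would drop cleared entries
        }
    };

    private Thread cleanerThread = new Thread(cleaner);

    void maybeCleanup(boolean cleanupNeeded) {
        // Cheap unsynchronized check first, as in the proposal.
        if (cleanupNeeded && !cleanerThread.isAlive()) {
            synchronized (cleaner) {
                // Re-check under the lock; another put() may have started it.
                if (!cleanerThread.isAlive()) {
                    // A Thread can only be started once, so the "one
                    // accompanying thread" must be recreated here.
                    cleanerThread = new Thread(cleaner);
                    cleanerThread.start();
                }
            }
        }
    }

    // Test hook: wait for the current cleaner thread to finish.
    void awaitCleaner() throws InterruptedException {
        cleanerThread.join();
    }
}
```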
 
 As Jeremias noted, rehash is currently called every time an attempt  
 to clean up fails. Maybe this needs improvement... OTOH, rehash now  
 becomes a bit simpler, since it does not need to take into account  
 interfering cleaners. Only one remains: CacheCleaner.run() is now  
 synchronized on the cleaner, and rehash() itself is done within the  
 cleaner thread.
 
 What I see as a possible issue, though, is that there is a theoretical  
 limit to rehash() having any effect whatsoever. If the cache grows to  
 64 buckets, then the maximum number of segments that exceed the  
 threshold can never be greater than half the table-size... This might  
 be a non-issue, as it would only be triggered if the cache's size  
 is at least 2048 instances (not counting the elements in the buckets  
 that don't exceed the threshold). No problem for enums and keeps.  
 Strings and numbers, though?
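 As a back-of-the-envelope check of those figures (the constants here follow
 the discussion: 32 segments, a 64-bucket table, and a per-segment cleanup
 threshold of 2 * table.length; the exact trigger condition is my reading,
 not the committed code):

```java
// Sanity check of the figures quoted above; illustrative only.
public class RehashLimit {
    public static void main(String[] args) {
        int segments = 32;                        // fixed number of cache segments
        int maxBuckets = 64;                      // table size at which rehash stops helping
        int perSegmentThreshold = 2 * maxBuckets; // 128 entries per segment

        // With only 32 segments, the number of over-threshold segments can
        // never exceed half of a 64-bucket table:
        System.out.println(segments <= maxBuckets / 2);            // prints: true

        // Half the segments over threshold already implies at least
        // 16 * 128 = 2048 cached instances:
        System.out.println((segments / 2) * perSegmentThreshold);  // prints: 2048
    }
}
```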
 
 
 
 Cheers
 
 Andreas
 
 Index: src/java/org/apache/fop/fo/properties/PropertyCache.java
 ===================================================================
 --- src/java/org/apache/fop/fo/properties/PropertyCache.java (revision 594306)
 +++ src/java/org/apache/fop/fo/properties/PropertyCache.java (working copy)
 @@ -41,6 +41,9 @@
      /** the table of hash-buckets */
      private CacheEntry[] table = new CacheEntry[8];
 
 +    private CacheCleaner cleaner = new CacheCleaner();
 +    private Thread cleanerThread = new Thread(cleaner);
 +
      /* same hash function as used by java.util.HashMap */
      private static int hash(Object x) {
          int h = x.hashCode();
 @@ -77,6 +80,10 @@
              this.hash = old.hash;
          }
 
 +        boolean isCleared() {
 +            return (ref == null || ref.get() == null);
 +        }
 +
      }
 
      /* Wrapper objects to synchronize on */
 @@ -85,7 +92,7 @@
      }
 
      /*
 -     * Class modeling a cleanup thread.
 +     * Class modeling the cleanup thread.
       *
       * Once run() is called, the segment is locked and the hash-bucket
       * will be traversed, removing any obsolete entries.
 @@ -95,50 +102,51 @@
 
          private int hash;
 
 -        CacheCleaner(int hash) {
 +        CacheCleaner() {
 +        }
 +
 +        void init(int hash) {
              this.hash = hash;
          }
 
          public void run() {
 -            //System.out.println("Cleaning segment " + this.segment);
 -            CacheSegment segment = segments[this.hash & SEGMENT_MASK];
 -            int oldCount;
 -            int newCount;
 -            synchronized (segment) {
 -                oldCount = segment.count;
 -                /* check first to see if another cleaner thread already
 -                 * pushed the number of entries back below the threshold;
 -                 * if so, return immediately
 -                 */
 -                if (segment.count < (2 * table.length)) {
 -                    return;
 -                }
 -
 -                int index = this.hash & (table.length - 1);
 -                CacheEntry first = table[index];
 -                WeakReference ref;
 -                for (CacheEntry e = first; e != null; e = e.next) {
 -                    ref = e.ref;
 -                    if (ref != null && ref.get() == null) {
 -                        /* remove obsolete entry
 -                         * 1. clear value, cause interference for non-blocking get() */
 -                        e.ref = null;
 -
 -                        /* 2. clone the segment, without the obsolete entry */
 -                        CacheEntry head = e.next;
 -                        for (CacheEntry c = first; c != e; c = c.next) {
 -                            head = new CacheEntry(c, head);
 +            synchronized (this) {
 +                //System.out.println("Cleaning segment " + this.segment);
 +                CacheSegment segment = segments[this.hash & SEGMENT_MASK];
 +                int oldCount;
 +                int newCount;
 +                synchronized (segment) {
 +                    oldCount = segment.count;
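For reference, the isCleared() helper added in the diff above reduces to a
null check on the entry's WeakReference referent. A standalone illustration
(the Entry class here is a simplification for demonstration, not the FOP
CacheEntry):

```java
import java.lang.ref.WeakReference;

public class ClearedCheck {

    static class Entry {
        WeakReference ref;

        Entry(WeakReference ref) { this.ref = ref; }

        // Mirrors the isCleared() added in the diff: an entry is obsolete
        // once its referent has been garbage-collected, or once the cleaner
        // has explicitly nulled the reference.
        boolean isCleared() {
            return (ref == null || ref.get() == null);
        }
    }

    public static void main(String[] args) {
        Object value = new Object();                        // strong reference keeps referent alive
        Entry live = new Entry(new WeakReference(value));
        System.out.println(live.isCleared());               // prints: false

        Entry dropped = new Entry(null);                    // ref cleared by the cleaner
        System.out.println(dropped.isCleared());            // prints: true
    }
}
```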

DO NOT REPLY [Bug 43143] - [PATCH] ExpertEncoding and ExpertSubsetEncoding not detected for Type 1 fonts

2007-11-15 Thread bugzilla
DO NOT REPLY TO THIS EMAIL, BUT PLEASE POST YOUR BUG
RELATED COMMENTS THROUGH THE WEB INTERFACE AVAILABLE AT
http://issues.apache.org/bugzilla/show_bug.cgi?id=43143.
ANY REPLY MADE TO THIS MESSAGE WILL NOT BE COLLECTED AND
INSERTED IN THE BUG DATABASE.

http://issues.apache.org/bugzilla/show_bug.cgi?id=43143





--- Additional Comments From [EMAIL PROTECTED]  2007-11-15 05:32 ---
I had to revert part of the patch since it produced problems with fonts that
worked before. Hopefully, this fixes it. More info in the commit message:
http://svn.apache.org/viewvc?rev=595297&view=rev

-- 
Configure bugmail: http://issues.apache.org/bugzilla/userprefs.cgi?tab=email
--- You are receiving this mail because: ---
You are the assignee for the bug, or are watching the assignee.


Re: Property Cache: Null Pointer Exception

2007-11-15 Thread Chris Bowditch

Jeremias Maerki wrote:


The attached patch should fix the memory leak Chris described (in one of
my runs, the number of CacheEntry instances after a full GC went down
from 10575 (169KB, before the change) to 386 (6KB, after the change)).
I've run various documents through the multi-threading testbed
application (in the test directory) with up to 17 threads on my
dual-core CPU. The output seems to be fine. No corruption or failures.


Thanks Jeremias. I have applied the patch and tested it; FOP runs 
faster, and memory usage stays constant at 60MB.


snip/

Thanks,

Chris




Re: Property Cache: Null Pointer Exception

2007-11-15 Thread Andreas L Delmelle

On Nov 15, 2007, at 16:30, Chris Bowditch wrote:



What I see as a possible issue, though, is that there is a theoretical
limit to rehash() having any effect whatsoever. If the cache grows to
64 buckets, then the maximum number of segments that exceed the
threshold can never be greater than half the table-size... This might
be a non-issue, as it would only be triggered if the cache's size is
at least 2048 instances (not counting the elements in the buckets that
don't exceed the threshold). No problem for enums and keeps. Strings
and numbers, though?


2048 doesn't sound good enough as a maximum number of instances if  
Strings and integers are included. Why can't this number be  
increased by having more buckets and/or segments?


It's not so much a maximum number of instances; rather, the number of 
buckets will not grow beyond 64. If the cache were to grow further, 
the instances would still be divided over 64 buckets (which means 
slightly longer retrieval times).
This is not due to the number of segments, but simply due to the 
naïve condition that triggers a rehash. I'll see if I can come up 
with a better check, and will repost the patch after that.



Later

Andreas



FOP Enhancements

2007-11-15 Thread mckacl

Hello,

I have taken responsibility for the FOP implementation at my company. 
Unfortunately, in the past the framework was customized directly for our
product requirements. 

We are upgrading to the current version of FOP, and going forward we will not
customize the framework directly; we hope to use extensions instead.  I have two
enhancements I need assistance with.

1. An "overflow-to" attribute was added to fo:block-container.  Basically, if
text does not fit in the block, the overflow is added to another
fo:block-container.  The overflow block may be on any page.


2. A "smallest-font-size" attribute was added to fo:block-container.  If text
cannot fit within the block, the font-size is reduced until it fits or the
smallest-font-size is reached.  In addition, the block supports the
"overflow-to" attribute.

My company is in the health-care industry so the overflow is extremely
important for patient instructions and warnings.

Let me know if you think these enhancements are suitable for extensions, or
if some other means is appropriate.

Thanks,
Andrew

-- 
View this message in context: 
http://www.nabble.com/FOP-Enhancements-tf4816337.html#a13778915
Sent from the FOP - Dev mailing list archive at Nabble.com.



Upgrade from FOP 0.20.5 to 0.94 text-align xsl used to convert the HTML to PDF

2007-11-15 Thread KarenT

The documentation seems to indicate that text-align="right", "left", "center"
works, but I am getting the following error:  Nov 15, 2007 12:32:35 PM
org.apache.fop.fo.PropertyList 

would the following be a problem?   <fo:block text-align="{q1/@align}"> 

Are variables a problem?
-- 
View this message in context: 
http://www.nabble.com/Upgrade-from-FOP-0.20.5-to-0.94--text-align--xsl-used-to-convert-the-HTML-to-PDF-tf4816566.html#a13779745
Sent from the FOP - Dev mailing list archive at Nabble.com.



Re: Property Cache: Null Pointer Exception

2007-11-15 Thread Chris Bowditch

Andreas L Delmelle wrote:



On Nov 14, 2007, at 21:38, Jeremias Maerki wrote:

Hi Jeremias, Chris,


jm-PropertyCache-MemLeak.diff.txt




My proposal, incorporating the changes in Jeremias' diff, below.


Thanks for the diff. Unfortunately I have been unsuccessful in applying 
it after several attempts. First I tried using the Tortoise SVN client, then 
I downloaded GNUWin32 Patch, which fails to apply all but hunk 7. I 
also asked a colleague working on Linux to try and apply the patch, but 
it fails for him too (although one more hunk is successful).


I guess I could manually make the updates, but I would prefer to work 
out what's going wrong here, to avoid similar problems in the future and 
to minimize the risk of error.




To sum it up:
Only one CacheCleaner per PropertyCache, and one accompanying thread.
If, after a put(), cleanup seems to be needed *and* the thread is not  
alive, lock on the cleaner, and start the thread.


If the thread is busy, I guess it suffices to continue, assuming that
the hash distribution should eventually lead some put() back to the same
bucket/segment...?


As Jeremias noted, rehash is currently called every time an attempt 
to clean up fails. Maybe this needs improvement... OTOH, rehash now 
becomes a bit simpler, since it does not need to take into account 
interfering cleaners. Only one remains: CacheCleaner.run() is now 
synchronized on the cleaner, and rehash() itself is done within the 
cleaner thread.


What I see as a possible issue, though, is that there is a theoretical
limit to rehash() having any effect whatsoever. If the cache grows to
64 buckets, then the maximum number of segments that exceed the
threshold can never be greater than half the table-size... This might
be a non-issue, as it would only be triggered if the cache's size is
at least 2048 instances (not counting the elements in the buckets that
don't exceed the threshold). No problem for enums and keeps. Strings
and numbers, though?


2048 doesn't sound good enough as a maximum number of instances if 
Strings and integers are included. Why can't this number be increased by 
having more buckets and/or segments?


snip/

Chris




Re: Upgrade from FOP 0.20.5 to 0.94 text-align xsl used to convert the HTML to PDF

2007-11-15 Thread Andreas L Delmelle

On Nov 15, 2007, at 20:18, KarenT wrote:

Hi

Please direct such questions to fop-users@ in the future. Thanks!



The documentation seems to indicate that text-align="right",
"left", "center"
works, but I am getting the following error:
Nov 15, 2007 12:32:35 PM
org.apache.fop.fo.PropertyList


I'm somehow suspecting a piece to be missing here. Can you post the  
full error message?




would the following be a problem?   <fo:block text-align="{q1/@align}">


Are variables a problem?


Which variable are you talking about?
In XSLT, the above is a shortcut for:

<xsl:element name="fo:block">
  <xsl:attribute name="text-align"><xsl:value-of select="q1/@align" />
  </xsl:attribute>

</xsl:element>

No variable here.

In FO, the token "{q1/@align}" means nothing, and is an illegal value  
for the text-align property. No variable here, either.


Either the expression is not evaluated during the XSLT phase, which  
would point to an error in the stylesheet,
or the expression is correctly evaluated but leads to an invalid  
value for text-align.



Andreas



Re: Property Cache: Null Pointer Exception

2007-11-15 Thread Andreas L Delmelle

On Nov 15, 2007, at 16:30, Chris Bowditch wrote:



Thanks for the diff. Unfortunately I have been unsuccessful in  
applying it after several attempts. First I tried using the Tortoise  
SVN client, then I downloaded GNUWin32 Patch, which fails to  
apply all but hunk 7. I also asked a colleague working on Linux to  
try and apply the patch, but it fails for him too (although one more  
hunk is successful).


I guess I could manually make the updates, but I would prefer to  
work out what's going wrong here, to avoid similar problems in the  
future and to minimize the risk of error.


Updated diff attached. No idea why the patching would fail on your  
end... maybe something to do with the encoding?

I've now saved the file explicitly with ISO encoding, just to be sure.

Both the 'theoretical limit' issue and the 'too many threads' issue  
should be resolved.


The cleanup + rehash logic is now roughly:

* if the total number of elements in a segment becomes double the  
amount of buckets

=> trigger a cleanup, but only if none is running

* if this cleanup did not have an effect
=> register the segment in cleaner.votesForRehash

* if the total number of votes exceeds 8 (= 32 segments / 4), then  
rehash
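
A hedged Java sketch of that voting scheme (the field name votesForRehash
comes from the description above; everything else, including the use of a
BitSet and the reset-on-rehash, is my assumption, not the actual patch):

```java
import java.util.BitSet;

// Illustrative only: one bit per segment; rehash once more than a quarter
// of the 32 segments have voted that cleanup alone no longer helps.
class RehashVotes {

    private static final int SEGMENTS = 32;

    private final BitSet votesForRehash = new BitSet(SEGMENTS);

    /** Called when a cleanup of the given segment had no effect.
     *  Returns true when the caller should trigger a rehash. */
    synchronized boolean cleanupHadNoEffect(int segmentIndex) {
        votesForRehash.set(segmentIndex);
        if (votesForRehash.cardinality() > SEGMENTS / 4) { // more than 8 votes
            votesForRehash.clear(); // start over after deciding to rehash
            return true;
        }
        return false;
    }
}
```

A BitSet rather than a counter means a segment voting twice before the
rehash is still counted only once, which seems closest to "register the
segment".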


I only got it working with a newly created Thread for each cleanup,  
though the number alive at the same time is already drastically  
reduced.

Reusing the Thread instance never seemed to work for me.
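
That is expected behaviour: java.lang.Thread.start() may only be called
once per instance, and calling it again after the thread has terminated
throws IllegalThreadStateException, so a single long-lived Thread cannot
simply be restarted for each cleanup. A minimal demonstration:

```java
public class ThreadRestart {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() { /* no-op stand-in for the cleanup */ }
        });
        t.start();
        t.join(); // the thread has now terminated
        try {
            t.start(); // restarting a terminated Thread is not allowed
            System.out.println("restarted");
        } catch (IllegalThreadStateException e) {
            System.out.println("IllegalThreadStateException"); // always lands here
        }
    }
}
```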




propcache.diff
Description: Binary data