On 14.08.2008, at 16:24, Van Daele, Koen wrote:

> Hi,

Hi Koen,


> Thanks for the great work. I'm looking forward to seeing the rest of  
> the tutorial. The tutorial mentions a sample application, but I  
> can't seem to find a link to it, is it available somewhere?

It will be shortly, yes. It completely slipped our minds that this
needs to be done as well; good catch!

> Some questions:
> * How do people handle the FPF and xhtml entities like &copy;?
> Previously I had dom_resolve_externals set to true, but lately I've
> been getting a lot of errors from libxml that it was unable to load
> files from www.w3.org. I've gone back to setting
> dom_resolve_externals to false and using &#169; instead of &copy;
> (seems to work ok). Are there other ways of dealing with this? It
> seems annoying that FPF is dependent on the www.w3.org website. I
> believe someone mentioned caching DTDs? Any info on how to do that?
> Are there other options? This might be something that new users find
> confusing, so maybe it should be mentioned in the tutorial too.

This is a limitation of DOM. I assume that your page encoding is
UTF-8, so you can simply insert a literal © character. The only
entity besides the five predefined XML entities (&lt;, &gt;, &quot;,
&amp; and &apos;) you'll likely ever need in your templates is
&#160;, the numeric character reference for the non-breaking space
(&nbsp; in HTML).
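For example, a UTF-8 XHTML template fragment can use the literal character and the numeric reference directly (an illustrative snippet, not from the tutorial):

```xml
<!-- UTF-8 template: literal © instead of &copy;, &#160; instead of &nbsp; -->
<p>Copyright © 2008 Example&#160;Corp.</p>
```

Since &#160; is a numeric character reference rather than a named entity, libxml can resolve it without loading any DTD.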

You can use XML Catalogs to enable loading of external DTDs from your
local file system. By default, libxml looks for a catalog file in
/etc/xml/catalog. These links might help with setting this up:
http://www.whump.com/moreLikeThis/2004/01/16/03815/ and
http://www.cafeconleche.org/books/effectivexml/chapters/47.html (the
former is also mentioned in
http://php.net/manual/en/book.dom.php#57274). The second link shows
how the location of the catalog can also be customized through
environment variables. After that, DOM will no longer load the DTD
over the web (which is also horribly slow).
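A minimal catalog might look like the following (the local DTD paths are assumptions; adjust them to wherever you keep a copy of the XHTML DTDs):

```xml
<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <!-- Map the public identifier of XHTML 1.0 Strict to a local copy -->
  <public publicId="-//W3C//DTD XHTML 1.0 Strict//EN"
          uri="file:///usr/share/xml/xhtml/xhtml1-strict.dtd"/>
  <!-- Rewrite all system identifiers under the W3C DTD directory -->
  <rewriteSystem systemIdStartString="http://www.w3.org/TR/xhtml1/DTD/"
                 rewritePrefix="file:///usr/share/xml/xhtml/"/>
</catalog>
```

libxml also honors the XML_CATALOG_FILES environment variable if you want to point it at a catalog somewhere other than /etc/xml/catalog.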

As mentioned, however, it's typically best to avoid the need for this
altogether by not using non-XML entities, which is easy to do with
UTF-8 encoding.

Another option is Tidy, which I believe can be configured to replace
named entities with their numeric equivalents.



> * Is there any more info available on validation? I know the basics
> of the system, but it seems to be capable of much more. I'd like to
> do the following: check all parameters on a read request, trim every
> string (possible since 0.11.2 I believe) and remove any parameters
> that are empty.
> E.g. if the url is http://www.foo.com/search?keywords=&user=++test++&page=5
> I'd like to strip the whitespace around test and have the keywords
> parameter removed. Is this possible?

Not at this stage I'm afraid, but we're also working on a reference
manual that will have detailed information on all the areas that need
exhaustive documentation, such as validation.

If your validation mode is "strict" or "conditional" (which it should  
be), then any request data elements that haven't been validated will  
be removed. Trimming is indeed possible as of 0.11.2.

Validation uses the respective request data holder information to
determine whether or not an argument is empty. Keywords, in this
case, would be regarded as empty for web requests, and thus shouldn't
show up if the validator has required="true".

If you want a bunch of validators to be specific to a request method,
wrap them in a <validators method="read"> block.
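For your example URL, the validation configuration might look roughly like this. This is only a sketch: the exact validator class and parameter names should be checked against the Agavi documentation, and the trim parameter assumes 0.11.2 or later:

```xml
<validators method="read">
  <!-- keywords: optional; an empty value is dropped in strict/conditional mode -->
  <validator class="string" required="false">
    <argument>keywords</argument>
  </validator>
  <!-- user: trim surrounding whitespace (the "++test++" padding) -->
  <validator class="string" required="true">
    <argument>user</argument>
    <parameter name="trim">true</parameter>
  </validator>
  <!-- page: must be numeric -->
  <validator class="number" required="true">
    <argument>page</argument>
  </validator>
</validators>
```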

Hope that helps,

David
_______________________________________________
users mailing list
[email protected]
http://lists.agavi.org/mailman/listinfo/users
