The number of rules (and thus of reported errors) will keep growing, and the 
length of the texts to check is beyond our control. So speeding things up is 
the only option left.

Apart from profiling the code itself, one could consider profiling the 
individual rules: measure how long each rule takes and how many hits it 
produces, weighed against the seriousness of the error it reports.

Suppose there is a slow/expensive rule that almost never matches and only 
warns about some low-priority issue; maybe it is worth switching it off then.
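To sketch what that per-rule profiling could look like: the snippet below 
times each rule over a small corpus and counts its hits, so that slow rules 
with few matches stand out. Note this is just an illustration with a made-up 
Rule interface, not the actual LanguageTool API.

```java
import java.util.ArrayList;
import java.util.List;

public class RuleProfiler {

    // Hypothetical rule interface for illustration only.
    interface Rule {
        String id();
        int match(String sentence); // number of hits in the sentence
    }

    static class Stats {
        final String id;
        long nanos;  // total time spent in this rule
        long hits;   // total number of matches it produced
        Stats(String id) { this.id = id; }
    }

    // Run every rule over the whole corpus, recording time and hit count.
    static List<Stats> profile(List<Rule> rules, List<String> corpus) {
        List<Stats> result = new ArrayList<>();
        for (Rule rule : rules) {
            Stats s = new Stats(rule.id());
            for (String sentence : corpus) {
                long t0 = System.nanoTime();
                s.hits += rule.match(sentence);
                s.nanos += System.nanoTime() - t0;
            }
            result.add(s);
        }
        return result;
    }

    public static void main(String[] args) {
        Rule cheapFrequent = new Rule() {
            public String id() { return "CHEAP_FREQUENT"; }
            public int match(String s) { return s.contains("teh") ? 1 : 0; }
        };
        Rule expensiveRare = new Rule() {
            public String id() { return "EXPENSIVE_RARE"; }
            public int match(String s) {
                // simulate a costly regex scan that rarely matches
                return s.matches(".*\\bvery\\s+very\\s+very\\b.*") ? 1 : 0;
            }
        };
        List<String> corpus =
            List.of("I teh think", "hello world", "teh end");
        for (Stats s : profile(List.of(cheapFrequent, expensiveRare), corpus)) {
            System.out.println(s.id + ": hits=" + s.hits
                + " time=" + s.nanos + "ns");
        }
    }
}
```

A real report would sort by time-per-hit (nanos/hits); rules at the top of 
that list with a low-priority message would be the candidates for disabling.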


Ruud.

On 21-11-12 18:46, Daniel Naber wrote:
> On 21.11.2012, 09:11:46 R.J. Baars wrote:
>
>> Wouldn't a faster server, faster compiler (if available) be easier
>> solutions?
> Well, it's always easier in the short run to buy more hardware. But our
> algorithm depends on both the text length (no surprise) and the number of
> rules, i.e. its runtime is like number_of_rules * length_of_text. So the
> more rules we get, the slower checking becomes.
>
>> Or could moving some processing to the client side be an option?
> Not really, the server needs to do all the work, because that's what the
> API is for.
>
> Regards
>   Daniel
>


_______________________________________________
Languagetool-devel mailing list
Languagetool-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/languagetool-devel
