There was a patch by Sean Timm you should investigate as well.

It limited a query to a maximum of X seconds of execution time and simply returned whatever rows it had found within that window.
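Unless I'm mixing things up, that patch is what became the timeAllowed request parameter (SOLR-502, if I recall correctly), which takes a budget in milliseconds and returns whatever has been collected when it expires. A capped query would look something like:

    http://localhost:8983/solr/select?q=*:*&timeAllowed=2000

That should give back partial results after at most two seconds rather than letting a runaway query tie up the server.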


Feak, Todd wrote:
I see value in this in the form of protecting the client from itself.

For example, our Solr isn't accessible from the Internet. It's all
behind firewalls. But the client applications can make programming
mistakes. I would love the ability to lock them down to a certain number
of rows (see the sketch below), just in case someone makes a typo and
puts in 1000 instead of 100, or the like.
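One knob that already exists on the Solr side is an invariant parameter on the request handler. A minimal sketch against solrconfig.xml (note that an invariant pins rows to a fixed value and silently overrides the client, rather than enforcing a ceiling):

    <requestHandler name="standard" class="solr.SearchHandler">
      <lst name="invariants">
        <int name="rows">100</int>
      </lst>
    </requestHandler>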

Admittedly, testing and QA should catch these things, but sometimes it's
nice to put in a few safeguards to stop the obvious mistakes from
occurring.

-Todd Feak

-----Original Message-----
From: Matthias Epheser [mailto:[EMAIL PROTECTED]]
Sent: Monday, November 17, 2008 9:07 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr security

Ryan McKinley wrote:
however I have found that in any site where stability/load and uptime
are a serious concern, this is better handled in a tier in front of
java -- typically the loadbalancer / haproxy / whatever -- and managed
by people more cautious than me.

Full ack. What do you think about the only Solr-related thing "left",
the parameter filtering/blocking (e.g. rows < 1000)? Would it be suitable
to do that in a Filter delivered with Solr? Of course only as an optional
alternative.

ryan
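For what it's worth, here is a rough sketch of what such an optional servlet filter could look like. The class name MaxRowsFilter and the 1000 cap are made up for illustration; nothing like this ships with Solr today:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical filter: rejects requests whose rows parameter exceeds a cap.
    public class MaxRowsFilter implements Filter {
        private static final int MAX_ROWS = 1000; // illustrative limit

        public void init(FilterConfig config) {}

        public void doFilter(ServletRequest req, ServletResponse res,
                             FilterChain chain)
                throws IOException, ServletException {
            String rows = req.getParameter("rows");
            if (rows != null) {
                try {
                    if (Integer.parseInt(rows) > MAX_ROWS) {
                        ((HttpServletResponse) res).sendError(
                                HttpServletResponse.SC_BAD_REQUEST,
                                "rows may not exceed " + MAX_ROWS);
                        return;
                    }
                } catch (NumberFormatException e) {
                    // malformed value: pass through and let Solr report it
                }
            }
            chain.doFilter(req, res);
        }

        public void destroy() {}
    }

Mapped to the /select path in web.xml, it would stop the typo before the query ever reaches the search handler.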
