, which is not supported here.
On Dec 8, 2010, at 5:38 AM, Markus Jelsma wrote:
Hi,
Got another issue here. This time it's the PHP serialized response writer
throwing the following exception only when spatial parameters are set
using LocalParams in Solr 1.4.1 using JTeam's plugin
--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
Don't know if its useful but from the old thread:
http://code.google.com/p/solr-uima/wiki/5MinutesTutorial
On Wednesday 08 December 2010 16:18:06 webdev1977 wrote:
Any luck with a tutorial? :-)
Should be no problem but please paste the log output etc.
All,
I have a csv file and I want to store one of the fields as a tdouble type.
It does not like that at all...Is there a way to cast the string value to a
tdouble?
Thanks,
Adam
.
I suspect the origin of the problem is that PHPSerializedWriter overrides
writeDoc, which prevented the writeMapOpener(-1) from ever happening;
but then writeSolrDocument was added, which PHPSerializedWriter doesn't
override.
-Hoss
SolrHome with
Solr data dir, so a mistake occurs: all the indexes are put under
$TOMCAT_HOME/bin. This is NOT what I expect; I hope the indexes are under
SolrHome.
Could you please give me a hand?
Best,
Bing Li
That smells like: http://www.jteam.nl/news/spatialsolr.html
My partner is using a publicly available plugin for GeoSpatial. It is used
both during indexing and during search. It forms some kind of gridding
system and puts 10 fields per row related to that. Doing a Radius search
(vs a bounding
There can be numerous explanations such as your configuration (cache warm
queries, merge factor, replication events etc) but also I/O having trouble
flushing everything to disk. It could also be a memory problem, the OS might
start swapping if you allocate too much RAM to the JVM leaving little
Pradeep is right, but check the solrconfig; the query parser is defined there.
Look for the basedOn attribute in the queryParser element.
You said you were using a third party plugin. What do you expect people
here to know? Solr plugins don't have parameters lat, long, radius and
Maybe you've overlooked the build parameter?
http://wiki.apache.org/solr/SpellCheckComponent#spellcheck.build
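For reference, the spellchecker index can be built per request by adding the parameter to a normal query; the host, port, and handler path below are assumptions, not the poster's actual setup:

```
http://localhost:8983/solr/select?q=foo&spellcheck=true&spellcheck.build=true
```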
Hi,
the spellchecker component already provides a buildOnCommit and
buildOnOptimize option.
Since we have several spellchecker indices building on each commit is
not really what
. It is usually a
better idea to learn from others’ mistakes, so you do not have to make
them yourself. from
'http://blogs.techrepublic.com.com/security/?p=4501&tag=nl.e036'
EARTH has a Right To Life,
otherwise we all die.
- Original Message
From: Markus Jelsma markus.jel
Where did you put the jar?
All,
Can anyone shed some light on this error. I can't seem to get this
class to load. I am using the distribution of Solr from Lucid
Imagination and the Spatial Plugin from here
https://issues.apache.org/jira/browse/SOLR-773. I don't know how to
apply a patch
<str name="doctitle">ISA Mailing pack letter</str>
<str name="signature">fd9d9e1c0de32fb5</str>
</doc>
If you wish to view the St. James's Place email disclaimer, please use the
link below
http://www.sjp.co.uk/portal/internet/SJPemaildisclaimer
Anyway, try putting the jar in
work/Jetty_0_0_0_0_8983_solr.war__solr__k1kf17/webapp/WEB-INF/lib/
On Tuesday 14 December 2010 11:10:47 Markus Jelsma wrote:
Where did you put the jar?
All,
Can anyone shed some light on this error. I can't seem to get this
class to load. I am using
The GeoDistanceComponent triggers the problem. It may be an issue in the
component but it could very well be a Solr issue. It seems you missed a very
recent thread on this one.
https://issues.apache.org/jira/browse/SOLR-2278
I finally figured out how to use curl to GET results, i.e. just turn
No. But why is it a problem? A standard XML parser won't feel the difference.
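A quick way to convince yourself that an empty element serialized as one tag or as an open/close pair parses identically; a minimal sketch using Python's standard library:

```python
import xml.etree.ElementTree as ET

# Solr can emit an empty field either as <str/> or as <str></str>;
# both forms parse to exactly the same element.
a = ET.fromstring('<response><str name="title"/></response>')
b = ET.fromstring('<response><str name="title"></str></response>')

# .text is None in both cases; a consumer cannot tell them apart.
print(a.find('str').text, b.find('str').text)
```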
Hi,
In SOLR XML the blank spaces are displayed with just <str/> tags.
Is there a way I can make SOLR XML display the blank values as
<str></str>
instead of just
<str/>
Also has anyone parsed the blank value
replicate the solrconfig.xml, so an include of
solr/corename/conf/file.xml will not work in the cores i replicate it to, and i
can't embed some corename property in the href to make it generic.
Anyone knows a trick here? Thanks!
Cheers,
="solr.LowerCaseFilterFactory"/>
<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
</analyzer>
Thank you in advance; any suggestions are welcome!
Sebastian
These
HTTP Status 500 - null java.lang.NullPointerException at
java.io.StringReader.<init>(StringReader.java:50) at
are returned in HTML. I use Nginx to detect the HTTP error code and return a
JSON encoded body with the appropriate content type. Maybe it could be done in
the servlet container
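A minimal sketch of the Nginx approach described above; the port, location names, and JSON shape are assumptions, not the poster's actual configuration:

```nginx
# Proxy Solr and intercept backend error pages, answering with JSON instead.
location /solr/ {
    proxy_pass http://127.0.0.1:8983;
    proxy_intercept_errors on;
    error_page 500 502 503 = @solr_error;
}

location @solr_error {
    default_type application/json;
    return 500 '{"error":{"code":500,"msg":"internal server error"}}';
}
```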
Check your configuration and log file. And, remember, log files will only get
replicated if their hashes are different. And, new configuration files will not
be replicated, you'll need to upload them to the slaves manually for the first
time. Slaves will not replicate what they don't have.
won't replicate configuration files
iirc.
Regards,
Stevo.
On Tue, Dec 28, 2010 at 1:06 PM, Markus Jelsma
markus.jel...@openindex.io wrote:
Check your configuration and log file. And, remember, log files will only
get replicated if their hashes are different. And, new configuration
guidance.
this message in context:
http://lucene.472066.n3.nabble.com/Sort-Facet-Query-tp2167635p2167635.html
Sent from the Solr - User mailing list archive at Nabble.com.
pointers on how to proceed? Thanks.
] appears to have started a thread named
[MultiThreadedHttpConnectionManager cleanup] but has failed to stop it. This
is very likely to create a memory leak.
Jan 4, 2011 3:09:48 PM org.apache.coyote.http11.Http11Protocol destroy
INFO: Stopping Coyote HTTP/1.1 on http-8080
MultiThreadedHttpConnectionManager to
CoreContainer.
-Yonik
http://www.lucidimagination.com
the mess and (due to low
interval polling) make another attempt. If i, however, restart (instead of
abort-fetch) the old temporary directory will stay and needs to be deleted
manually.
Cheers,
I don't have Windows :)
Is this on Windows or Unix? Windows will not delete a file that is still
open.
On Tue, Jan 4, 2011 at 10:07 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Is it possible this problem has something to do with my old index files
not being removed
http://wiki.apache.org/solr/UpdateXmlMessages#A.22rollback.22
Hi,
Is there a way to specify to abort (rollback) the data import should there
be an error/exception?
If everything runs smoothly, commit the data import.
Thanks,
Tri
I have no Windows.
On Tuesday 04 January 2011 23:20:00 Lance Norskog wrote:
Is this on Windows or Unix? Windows will not delete a file that is still
open.
On Tue, Jan 4, 2011 at 10:07 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
Is it possible this problem has something to do
Any thoughts on this one? Should i add a ticket?
On Tuesday 04 January 2011 20:08:40 Markus Jelsma wrote:
Hi,
It seems abort-fetch nicely removes the index directory which i'm
replicating to, which is fine. Restarting, however, does not trigger
the same feature as the abort-fetch command
No, it also depends on the queries you execute (sorting is a big consumer) and
the number of concurrent users.
Is that a general rule of thumb? That it is best to have about the
same amount of RAM as the size of your index?
So, with a 5GB index, I should have between 4GB and 8GB of RAM
Any sources to cite for this statement? And are you talking about RAM
allocated to the JVM or available for OS cache?
Not sure if this was mentioned yet, but if you are doing slave/master
replication you'll need 2x the RAM at replication time. Just something to
keep in mind.
-mike
On
Hi,
It works just like boolean operators in the main query:
fq=-status:refunded
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html#Boolean operators
Cheers
hello.
i need to filter a field. i want all fields are not like the given string.
e.g.: ...fq=status!=refunded
how can
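As a sketch, the negated filter query from the answer above embedded in a full request URL; the host and core are hypothetical:

```python
from urllib.parse import urlencode

# Exclude every document whose status field equals "refunded".
params = urlencode({"q": "*:*", "fq": "-status:refunded"})
url = "http://localhost:8983/solr/select?" + params  # host is hypothetical
print(url)
```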
will
I haven't used edismax but i can imagine it's a feature. This is because
inconsistent use of stopwords in the analyzers of the fields specified in qf can
yield really unexpected results because of the mm parameter.
In dismax, if one analyzer removes stopwords and the other doesn't, the mm
parameter
This is supposed to be dealt with outside the index. All input must be UTF-8
encoded. Failing to do so will give unexpected results.
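To illustrate the point, a minimal sketch of catching non-UTF-8 input before indexing; the Latin-1 source encoding here is an assumption for the example:

```python
# Raw bytes from a third-party document; here they happen to be
# Latin-1 encoded, which is an assumption for this example.
raw = b"caf\xe9"

try:
    text = raw.decode("utf-8")
except UnicodeDecodeError:
    # Not valid UTF-8: decode from the known source encoding instead,
    # then re-encode as UTF-8 before sending the document to Solr.
    text = raw.decode("latin-1")

utf8_bytes = text.encode("utf-8")
print(utf8_bytes)
```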
We've created an index from a number of different documents that are
supplied by third parties. We want the index to only contain UTF-8
encoded characters. I
Have used edismax and Stopword filters as well. But usually use the fq
parameter e.g. fq=title:the life and never had any issues.
That is because filter queries are not relevant for the mm parameter, which is
being used for the main query.
Can you turn on debugQuery and check what's the
Ingram Content Group
(615) 213-4311
-Original Message-
From: Markus Jelsma [mailto:markus.jel...@openindex.io]
Sent: Wednesday, January 12, 2011 4:44 PM
To: solr-user@lucene.apache.org
Cc: Jayendra Patil
Subject: Re: StopFilterFactory and qf containing some fields that use it
and some
dig deeper than that into what goes on.
Perhaps it would be more useful to RTFM instead of messing around on the
mailing list: http://wiki.apache.org/solr/CommonQueryParameters#start
Please, read every wiki page you can find and write notes.
Do I even need a body for this message? ;-)
Dennis Gearon
Signature Warning
Please visit the Nutch project. It is a powerful crawler and can integrate
with Solr.
http://nutch.apache.org/
Hi Solr users,
I hope you can help. We are migrating our intranet web site management
system to Windows 2008 and need a replacement for Index Server to do the
text searching. I
something that will index all the files in a given folder, rather than
follow links like a crawler. Can Nutch do this? As well as the other
requirements below?
Regards
Cathy
On 14 January 2011 12:09, Markus Jelsma markus.jel...@openindex.io wrote:
Please visit the Nutch project
I think Steve wants the 1000th, 2000th and 3000th document from the query. And
since there's no method of doing so, you're constrained to executing three
queries with rows=1 and start set to 1000, 2000 and 3000 respectively.
If you want these documents to return you will have to do multiple queries
with
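The three-query workaround can be sketched like this; the host and core are hypothetical:

```python
from urllib.parse import urlencode

base = "http://localhost:8983/solr/select"  # hypothetical host and core

# One request per desired position: rows=1 returns a single document
# at the given zero-based offset.
urls = [base + "?" + urlencode({"q": "*:*", "rows": 1, "start": s})
        for s in (1000, 2000, 3000)]
for u in urls:
    print(u)
```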
There is always CPU and RAM involved for every nice component you use. Just
how much the penalty is depends completely on your hardware, index and type of
query. Under heavy load the numbers will change.
Since we don't know your situation and it's hard to predict without
benchmarks, you should
In my opinion it should work for every update handler. If you're really sure
your configuration is fine and it still doesn't work, you might have to file an
issue.
Your configuration looks alright but don't forget you've configured
overwriteDupes=false!
Hello,
here is an excerpt of my
:30.577:WARN::handle failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
Thanks,
Isan.
://localhost:8080/solr/cs/select?q=(poi_id:3)&defType=dismax&mm=1
What I wanted to do when I specify mm=1 is to say at least 1 query
parameter matches.
What am I missing?
Thanks,
Tri
Hi,
This is a slave polling the master for its index version but it seems the
master fails to respond.
From the javadoc:
public class NoHttpResponseException
extends IOException
Signals that the target server failed to respond with a valid HTTP
response.
Cheers,
I see a large number
Oh, and this should not have the INFO level in my opinion. Other log lines
indicating a problem with the master (such as a time out or unreachable host)
are not flagged as INFO.
Maybe you could file a Jira ticket? Don't forget to specify your Solr version.
Also, please check the master log
Why create two threads for the same problem? Anyway, is your servlet
container capable of accepting UTF-8 in the URL? Also, is SolrNet capable of
handling those characters? To confirm, try a tool like curl.
Dear all,
After reading some pages on the Web, I created the index with the
On Wed, Jan 19, 2011 at 2:34 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Why create two threads for the same problem? Anyway, is your servlet
container capable of accepting UTF-8 in the URL? Also, is SolrNet capable
of
handling those characters? To confirm, try a tool like curl
Hi,
You get an error because LocalParams need to be in the beginning of a
parameter's value. So no parenthesis first. The second query should not give an
error because it's a valid query.
Anyway, i assume you're looking for :
http://wiki.apache.org/solr/SimpleFacetParameters#Multi-
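To make the rule concrete, a hypothetical pair of parameter values; the field and tag names are invented for illustration:

```
fq={!tag=st}status:active      valid: the LocalParams open the value
fq=({!tag=st}status:active)    invalid: a parenthesis precedes the LocalParams
```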
[X] ASF Mirrors (linked in our release announcements or via the Lucene
website)
[] Maven repository (whether you use Maven, Ant+Ivy, Buildr, etc.)
[X] I/we build them from source via an SVN/Git checkout.
[] Other (someone in your company mirrors them internally or via a
downstream
://master_host:port/solr/replication?command=indexversion
http://slave_host:port/solr/replication?command=details
Issue created:
https://issues.apache.org/jira/browse/SOLR-2323
On Tuesday 04 January 2011 20:08:40 Markus Jelsma wrote:
Hi,
It seems abort-fetch nicely removes the index directory which i'm
replicating to, which is fine. Restarting, however, does not trigger
the same feature as the abort
may be able to suggest other alternatives.
That someone should just visit the wiki:
http://wiki.apache.org/solr/SolrResources
If someone is looking for good documentation and getting started guides, I
am putting this in the newsgroups to be searched upon. I recommend:
A/ The Wikis: (FREE)
http://wiki.apache.org/solr/FrontPage
http://lucene.apache.org/solr/#getstarted
I would like to index the information of my employees to be able to get
through some fields such as: e-mail, registration, ID, cell phone, name.
I am very new to SOLR and would like to know how to index these fields this
way and how to search
You only need so much for Solr so it can do its thing. Faceting can take quite
some memory on a large index but sorting can be a really big RAM consumer.
As Erick pointed out, inspect and tune the cache settings and adjust RAM
allocated to the JVM if required. Using tools like JConsole you can
Hi,
I've never seen Solr's behaviour with a huge number of values in a
multi-valued field but i think it should work alright. You can then store a
list of user IDs along with each book document and use filter queries to
include or exclude the book from the result set.
Cheers,
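A sketch of the suggested approach; the field name user_ids and the example query are assumptions:

```python
from urllib.parse import urlencode

# Each book document carries the permitted user IDs in a multi-valued
# field (called "user_ids" here). A filter query then restricts results
# to books the current user may see.
user_id = 42
params = urlencode({"q": "category:cooking", "fq": "user_ids:%d" % user_id})
print("http://localhost:8983/solr/select?" + params)  # host is hypothetical
```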
Hi,
I'm
Hi,
I'm unsure if i completely understand but you first had the error for
local.code and then set the property in solr.xml? Then of course it will give
an error for the next undefined property that has no default set.
If you use a property without default it _must_ be defined in solr.xml or
can be found.
We do have sorting but not faceting. OK so I guess there is no 'hard and
fast rule' as such so I will play with it and see.
Thanks for the help
On Wed, Jan 19, 2011 at 11:48 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
You only need so much for Solr so it can do
You have set the property already but i haven't seen you use that same
property for the dataDir setting in solrconfig.
I've checked the archive, and plenty of people have suggested an
arrangement where you can have two cores which share a configuration but
maintain separate data paths. But I
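A sketch of how two cores can share a config while keeping separate data paths via per-core property substitution; core names and paths are hypothetical:

```xml
<!-- solr.xml: each core defines its own dataDir property -->
<cores adminPath="/admin/cores">
  <core name="core0" instanceDir="shared/">
    <property name="dataDir" value="/var/solr/core0/data"/>
  </core>
  <core name="core1" instanceDir="shared/">
    <property name="dataDir" value="/var/solr/core1/data"/>
  </core>
</cores>

<!-- the shared solrconfig.xml then picks the property up -->
<dataDir>${dataDir}</dataDir>
```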
Hi,
Are there performance issues during the index switch?
What do you mean by index switch?
As the size of index gets bigger, response time slows down? Are there any
studies on this?
I haven't seen any studies as of yet but response time will slow down for some
components. Sorting
for the dataimport.delta values? that
doesn't seem right
On Wed, Jan 19, 2011 at 11:57 AM, Markus Jelsma
markus.jel...@openindex.iowrote:
Hi,
I'm unsure if i completely understand but you first had the error for
local.code and then set the property in solr.xml? Then of course it will
give
an error
#System_property_substitution
the error I am getting is that I have no default value
for ${dataimporter.last_index_time}
should I just define -00-00 00:00:00 as the default for that field?
On Wed, Jan 19, 2011 at 12:45 PM, Markus Jelsma
markus.jel...@openindex.iowrote:
No, you only need defaults
Did i write wt? Oh dear. The q and w are too close =)
Markus,
It's not wt, it's qt; wt is for the response type.
Also, qt is not for the query parser, it's for the request handler. In
solrconfig.xml many request handlers can be defined using the dismax query
parser or the lucene query parser.
If you
!!!
Jörg
--
Markus Jelsma - CTO - Openindex
http://www.linkedin.com/in/markus17
050-8536620 / 06-50258350
overridden by the one replicated.
Is there a way to have the slaves having their solrconfig replicated, but
with some special configurations?
I want to avoid having to enter to each slave to configure it, i prefer to
do it in a centralized way.
21, 2011 at 10:16 AM, Ezequiel Calderara
ezech...@gmail.comwrote:
Thanks!, thats what i needed!
There is always some much to learn about Solr/Lucene!
On Fri, Jan 21, 2011 at 10:08 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
solrcore.properties
Hi,
Please take a look at Apache Nutch. It can crawl through a file system or over FTP.
After crawling, it can use Tika to extract the content from your PDF files and
other formats. Finally, you can send the data to your Solr server for indexing.
http://nutch.apache.org/
Hi All,
Is there is any
Hi,
You can use Solr 1.4.1 and a third party plugin [1]. It does a pretty good job
in spatial search. You could also try the Solr 3.1 branch which also has some
spatial features on-board. It, however, does not return computed distances but
can filter and sort using the great circle algorithm
for Search-Requests - commit every Minute - 4GB Xmx
- Solr2 for Update-Request - delta every 2 Minutes - 4GB Xmx
.
Regds
dhanesh s.r
.
Please do let me know the reason. Is there anything I need to do for the core
migration? I don't have any data in these cores. Also, if there was data, is
there a nice way of migrating from 1.4.0 to 1.4.1 (which does not involve
reindexing)?
Regards,
Prasad
Hi,
You haven't defined the field in Solr's schema.xml configuration so it needs to
be added first. Perhaps following the tutorial might be a good idea.
http://lucene.apache.org/solr/tutorial.html
Cheers.
Hello Team:
I am in the process of setting up Solr 1.4 with Magento ENterprise
other tuning I can
do but any other hints, tips, tricks or cluebats gratefully received.
Even if it's just Yeah, we had that problem and we added more slaves
and periodically restarted them
thanks,
Simon
it is ok to reduce the cache sizes? Would this increase disk
i/o, or would the index be held in the OS's disk cache?
Yes! If you also allocate less RAM to the JVM then there is more for the OS to
cache.
Do have other recommendations to follow / questions?
Thanx cheers,
Martin
for version, so I don't think it's the version of Luke.
Then you don't need NGrams at all. A wildcard will suffice or you can use the
TermsComponent.
If these strings are indexed as single tokens (KeywordTokenizer with
LowercaseFilter) you can simply do field:app* to retrieve the apple milk
shake. You can also use the string field type but then you
Oh, i should perhaps mention that EdgeNGrams will yield results a lot quicker
than using wildcards at the cost of a larger index. You should, of course, use
EdgeNGrams if you worry about performance and have a huge index and a high
number of queries per second.
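For reference, a hedged sketch of an EdgeNGram-based prefix field type; the name and gram sizes are illustrative:

```xml
<fieldType name="text_prefix" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.EdgeNGramFilterFactory" minGramSize="1" maxGramSize="20"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```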
Then you don't need NGrams at all. A
This should shed some light on the matter
http://lucene.apache.org/java/2_9_0/fileformats.html
I am saying there is a list of tokens that have been parsed (a table of
them) for each column? Or one for the whole index?
Dennis Gearon
Signature Warning
It is always a
Not right now:
https://issues.apache.org/jira/browse/SOLR-1909
Hi - I have the SOLR deduplication configured and working well.
Is there any way I can tell which documents have been not added to the
index as a result of the deduplication rejecting subsequent identical
documents?
Many
Hi,
If your query yields 1000 documents and the rows parameter is 10 then you'll
get only 10 documents. Consult the wiki on the start and rows parameters:
http://wiki.apache.org/solr/CommonQueryParameters
Cheers.
Dear all,
I got a weird problem. The number of searched documents is much
http://wiki.apache.org/solr/ClusteringComponent
http://wiki.apache.org/solr/FieldCollapsing
if
the description of one product already exists in the index, do not import this
new product.
^
, try xmllint on your shell to
check the given xml?
Regards
Stefan
On Tue, Feb 1, 2011 at 4:43 PM, Markus Jelsma
markus.jel...@openindex.io wrote:
There is an issue with the XML response writer. It cannot cope with some
very exotic characters or possibly the right-to-left writing systems
in firefox has the 'exotic' characters you are
expecting. There might also be some issues on your platform with mixing
script direction but that is probably not likely.
Cheers
François
On Feb 1, 2011, at 10:43 AM, Markus Jelsma wrote:
There is an issue with the XML response writer. It cannot
expect in every case that the XML output produced by Solr is well-formed
even if the libraries used under the hood return garbage.
-Sascha
p.s. I can provide the pdf file in question, if anybody would like to
see it in action.
On 01.02.2011 16:43, Markus Jelsma wrote
http://wiki.apache.org/solr/ExtractingRequestHandler
On Wednesday 02 February 2011 16:49:12 Thumuluri, Sai wrote:
Good Morning,
I am planning to get started on indexing MS office using ApacheSolr -
can someone please direct me where I should start?
Thanks,
Sai Thumuluri
Or decrease the mergeFactor,
or change the index to a compound index in
solrconfig.xml: <useCompoundFile>true</useCompoundFile>
so Solr creates one index file and not thousands.
-
--- System
One Server, 12 GB RAM, 2
Hi
I've seen almost all funky charsets but gothic is always trouble. I'm also
unsure if it's really a bug in Solr. It could well be Xerces being unable
to cope. Besides, most systems indeed don't go well with gothic. This mail
client does, but my terminal can't find its cursor after
Heap usage can spike after a commit. Existing caches are still in use and new
caches are being generated and/or auto warmed. Can you confirm this is the
case?
On Friday 28 January 2011 00:34:42 Simon Wistow wrote:
On Tue, Jan 25, 2011 at 01:28:16PM +0100, Markus Jelsma said:
Are you sure you
It would be quite annoying if it behaves as you were hoping for. This way it
is possible to use different field types (and analyzers) for the same field
value. In faceting, for example, this can be important because you should use
analyzed fields for q and fq but unanalyzed fields for
There is no measurable performance penalty when setting the parameter, except
maybe the execution of the query with a high value for rows. To make things
easy, you can define q.alt=*:* as a default in your request handler. No need to
specify it in the URL.
Hi,
I use dismax handler with
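A sketch of setting that default in solrconfig.xml; the handler name is hypothetical:

```xml
<requestHandler name="dismax" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">dismax</str>
    <str name="q.alt">*:*</str>
  </lst>
</requestHandler>
```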
Field values are copied before being analyzed. There is no cascading of
analyzers.
Hello list,
if I have a field title which is copied to text and a field text that is
copied to text.stemmed, am I going to get the copy from the field title to
the field text.stemmed or should I include it?
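Since copyField takes the raw source value and copies do not cascade, the title has to be copied explicitly; the field names below follow the question:

```xml
<copyField source="title" dest="text"/>
<copyField source="text"  dest="text.stemmed"/>
<!-- copies do not cascade, so title must be copied explicitly -->
<copyField source="title" dest="text.stemmed"/>
```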
is particularly powerful in there so it'd be nice to see
what's happening.
Any logging category I need to activate?
paul
Le 8 févr. 2011 à 03:22, Markus Jelsma a écrit :
There is no measurable performance penalty when setting the parameter,
except maybe the execution of the query with a high value