Hello Nagendra,
Congratulations on the new release!
In terms of downloading: does one need to be registered on the site to
download the bundle? The download links lead to
http://solr-ra.tgels.org/solr-ra.jsp.
Regards,
Dmitry Kan
On Tue, Dec 27, 2011 at 4:30 PM, Nagendra Nagarajayya
nnagaraja
with non-intersecting hits.
That practically means that the merger should simply concatenate the
shard results into one list (automatically pre-sorted by design).
Can something be improved in the SOLR merger facet logic here? Should we
look at something else as well?
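The concatenation idea above can be sketched in plain Java (a hedged sketch with hypothetical result lists, assuming each shard really does hold a disjoint, pre-sorted range):

```java
import java.util.ArrayList;
import java.util.List;

public class ConcatMerge {
    // If the shards are known to return pre-sorted, non-intersecting result
    // lists (all of shard 1's hits sort before all of shard 2's, etc.), the
    // merge step reduces to concatenation in shard order.
    static List<Integer> merge(List<List<Integer>> shardResults) {
        List<Integer> merged = new ArrayList<>();
        for (List<Integer> shard : shardResults) {
            merged.addAll(shard); // each shard's list is already sorted
        }
        return merged;
    }
}
```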
--
Thanks,
Dmitry Kan
at 1:56 PM, Dmitry Kan dmitry@gmail.com wrote:
Hello list,
This might not be the right place to ask JMX-specific questions, but I
decided to try, as we are polling SOLR statistics through JMX.
We currently have two solr cores with different schemas A and B being run
under
That's absolutely right. Thanks for the suggestion.
On Thu, Dec 29, 2011 at 2:47 PM, Gora Mohanty g...@mimirtech.com wrote:
On Thu, Dec 29, 2011 at 6:15 PM, Dmitry Kan dmitry@gmail.com wrote:
Well, we don't use multicore feature of SOLR, so in our case SOLR
instances
are just separate
-to-force-substitutions-in-data-tp3646195p3646195.html
Sent from the Solr - User mailing list archive at Nabble.com.
--
Regards,
Dmitry Kan
submitted too. Is there a way of suppressing the original query?
--
Regards,
Dmitry Kan
.
--
Regards,
Dmitry Kan
on resultset count.
On Fri, Jan 13, 2012 at 5:44 PM, Dmitry Kan dmitry@gmail.com wrote:
You could do this on the client side, just read 10 first facets off the
top
of the list and mark the remaining as Others.
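That client-side step might look like this (a hedged sketch; the facet names and counts are made up, and Solr is assumed to have returned the entries already sorted by count):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FacetTopN {
    // Keep the first n facet entries and collapse the rest into "Others".
    static Map<String, Long> topNWithOthers(Map<String, Long> facets, int n) {
        Map<String, Long> out = new LinkedHashMap<>();
        long others = 0;
        int i = 0;
        for (Map.Entry<String, Long> e : facets.entrySet()) {
            if (i++ < n) {
                out.put(e.getKey(), e.getValue());
            } else {
                others += e.getValue(); // everything past the top n
            }
        }
        if (others > 0) {
            out.put("Others", others);
        }
        return out;
    }
}
```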
On Fri, Jan 13, 2012 at 12:47 PM, Manish Bafna
manish.bafna...@gmail.com wrote:
:49 PM, Dmitry Kan dmitry@gmail.com wrote:
Hello list,
I need to split the incoming original facet query into a list of
sub-queries. The logic is done and each sub-query gets added into outgoing
queue with rb.addRequest(), where rb is instance of ResponseBuilder.
In the logs I see
didn't say
how, and in your follow-up question, it sounds like you are still hitting
the limit of maxBooleanClauses.
So what exactly have you changed or done, and what is the
new problem?
-Hoss
--
Regards,
Dmitry Kan
, Dmitry Kan dmitry@gmail.com wrote:
OK, let me clarify it:
if solrconfig has maxBooleanClauses set to 1000, for example, then queries
with more than 1000 clauses will be rejected with the mentioned
exception.
What I want to do is automatically split such queries into sub-queries
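The splitting itself can be sketched like this (plain Java; the clause representation is simplified to strings, and OR-ing the sub-results back together is left to the caller):

```java
import java.util.ArrayList;
import java.util.List;

public class ClauseSplitter {
    // Partition a flat list of clauses into chunks that each stay within
    // maxBooleanClauses; each chunk then becomes one sub-query.
    static List<List<String>> split(List<String> clauses, int maxBooleanClauses) {
        List<List<String>> subQueries = new ArrayList<>();
        for (int i = 0; i < clauses.size(); i += maxBooleanClauses) {
            int end = Math.min(i + maxBooleanClauses, clauses.size());
            subQueries.add(new ArrayList<>(clauses.subList(i, end)));
        }
        return subQueries;
    }
}
```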
the Java Heap size for this shard? Or is
there another method to avoid this slow calls?
Thank you
Daniel
--
Regards,
Dmitry Kan
the Solr
instance?
Thanks
On Tue, Jan 17, 2012 at 1:56 PM, Dmitry Kan dmitry@gmail.com wrote:
I had a similar problem for a similar task. And in my case merging the
results from two shards turned out to be a culprit. If you can logically
store your data just in one shard, your
. Maybe more. It depends.
On Tue, Jan 17, 2012 at 2:14 PM, Dmitry Kan dmitry@gmail.com
wrote:
Hi Daniel,
My index is 6.5G. I'm sure it can be bigger. The facet.limit we ask for is
beyond 100 thousand. Responses are sub-second. I run it with -Xms1024m
-Xmx12000m under Tomcat
--
Regards,
Dmitry Kan
On http://localhost:7070/solr/docs/admin/analysis.jsp I passed the query
*lock and did not find ReversedWildcardFilterFactory on the indexer side, or any
other filters that could do the reversing.
-Shyam
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Wednesday, January
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Wednesday, January 18, 2012 4:26 PM
To: solr-user@lucene.apache.org
Subject: Re: Question on Reverse Indexing
OK. Not sure what your system architecture is there, but could your queries
stay cached in some server caches
into a text_rev field? I *think* that it only
needs to be in text_rev, but I want to make sure before I go mucking
with my schema.
--
Regards,
Dmitry Kan
Dimitry,
Completed a clean index and I still see the same behavior.
Did not use Luke but from the search page we use leading wild card search
is working.
-Shyam
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Wednesday, January 18, 2012 5:07 PM
To: solr-user
<str name="defType">edismax</str>
<str name="qf">
title^15.0 indexed_content^1.0 attachment_titles^5.0
attachment_bodies^1.0
</str>
</lst>
</requestHandler>
-Shyam
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Thursday, January 19, 2012 4:20 PM
To: solr
but when I remove the filter the size decreases but in
either case I am seeing the leading wild card query working.
-Shyam
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Thursday, January 19, 2012 5:57 PM
To: solr-user@lucene.apache.org
Subject: Re: Question
Hi,
The article you gave mentions 13GB of index size. It is quite a small index
from our perspective. We have noticed that at least Solr 3.4 has some sort
of choking point with respect to growing index size. It just becomes
substantially slower than what we need (a query on avg taking more than
will have to do a load test to identify the cutoff point to begin
using the strategy of shards.
Thanks
2012/1/24, Dmitry Kan dmitry@gmail.com:
Hi,
The article you gave mentions 13GB of index size. It is quite a small
index
from our perspective. We have noticed that at least Solr
, continue searching with the
remaining shard set.
What would be the proven way to achieve this?
--
Regards,
Dmitry Kan
Off-topic: as some of my questions went unnoticed too, I can recommend
asking them somewhere else in parallel, for example stackoverflow.com.
But as SOLR and its ecosystem sometimes pose tough questions and
problems, Stack Overflow can ignore them as well. Anyhow, just another
opportunity.
as an internal or external command,
operable program or batch file.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Fail-to-compile-Java-code-trying-to-use-SolrJ-with-Solr-tp3708902p3708923.html
--
Regards,
Dmitry
=true&sort=score+desc&fl=sitzung,gremium,betreff,datum,timestamp,score,aktenzeichen,typ,id,anhang&start=0&q=Am+Heidstamm&hl.fl=betreff&wt=standard&fq=&hl=true&rows=10&version=2.2}
hits=14 status=0 QTime=244
Thanks!
--
Regards,
Dmitry Kan
Actually, I wouldn't count on it and just specify index and query sides
explicitly. Just to play it safe.
On Fri, Feb 3, 2012 at 8:34 PM, Marian Steinbach mar...@sendung.de wrote:
2012/2/3 Dmitry Kan dmitry@gmail.com:
What about query side of the field?
It's identical. At least that's
... was there literally a + in the query or was that
urlencoded? Try debugQuery=true for both queries and see what you get for
the query parsing output.
Erik
On Feb 3, 2012, at 14:18 , Dmitry Kan wrote:
Actually, I wouldn't count on it and just specify index and query sides
Hi,
This talk has some interesting details on setting up an Lucene index in RAM:
http://www.lucidimagination.com/devzone/events/conferences/revolution/2011/lucene-yelp
Would be great to hear your findings!
Dmitry
2012/2/8 James ljatreey...@163.com
Is there any practice to load index into
of this.
Thanks.
--
Regards,
Dmitry Kan
well, you should add these fields to schema.xml, otherwise Solr won't know
about them.
On Wed, Feb 8, 2012 at 2:48 PM, Radu Toev radut...@gmail.com wrote:
The schema.xml is the default file that comes with Solr 3.5, didn't change
anything there.
On Wed, Feb 8, 2012 at 2:45 PM, Dmitry Kan dmitry
getting a total of 2k.
Where could be the problem?
Thanks
--
Regards,
Dmitry Kan
...@gmail.com wrote:
1. Nothing in the logs
2. No.
On Thu, Feb 16, 2012 at 12:44 PM, Dmitry Kan dmitry@gmail.com wrote:
1. Do you see any errors / exceptions in the logs?
2. Could you have duplicates?
On Thu, Feb 16, 2012 at 10:15 AM, Radu Toev radut...@gmail.com wrote:
Hello,
I
them ok.
The same if I do the 1k database separately. It indexes ok.
On Thu, Feb 16, 2012 at 2:11 PM, Dmitry Kan dmitry@gmail.com wrote:
It sounds a bit as if SOLR stopped processing data once it queried all
from the smaller dataset. That's why you have 2000. If you just have
<field column="m_sv_name"/>
<field column="m_c_cluster_major"/>
<field column="m_c_cluster_minor"/>
<field column="m_c_country"/>
<field column="m_c_code"/>
</entity>
</document>
</dataConfig>
I've removed the connection params
The unique key is id.
On Thu, Feb 16, 2012 at 2:27 PM, Dmitry Kan dmitry@gmail.com wrote
Toev radut...@gmail.com wrote:
I'm not sure I follow.
The idea is to have only one document. Do the multiple documents have the
same structure then (different datasources), and if so how are they actually
indexed?
Thanks.
On Thu, Feb 16, 2012 at 4:40 PM, Dmitry Kan dmitry@gmail.com
no problem, hope it helps, you're welcome.
On Thu, Feb 16, 2012 at 5:03 PM, Radu Toev radut...@gmail.com wrote:
Really good point on the ids, I completely overlooked that matter.
I will give it a try.
Thanks again.
On Thu, Feb 16, 2012 at 5:00 PM, Dmitry Kan dmitry@gmail.com wrote
For example, this way:
1. Implement a filter factory:
[code]
package com.mycomp.solr.analysis;
import org.apache.lucene.analysis.TokenStream;
import org.apache.solr.analysis.BaseTokenFilterFactory;
import org.apache.solr.common.ResourceLoader;
import
--
Regards,
Dmitry Kan
to
change the schema too much) and improve the indexing speed (from the current 212
days for 12MM docs) would be much appreciated.
thank you
Peyman
--
Regards,
Dmitry Kan
.
Thanks,
Ramo
--
Regards,
Dmitry Kan
: 0.00
cumulative_inserts : 0
cumulative_evictions : 0
Is there something to be optimized?
Thanks,
Ramo
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Monday, March 12, 2012 15:06
To: solr-user@lucene.apache.org
Subject: Re: Performance
impact it has.
Thanks,
Ramo
-Original Message-
From: Dmitry Kan [mailto:dmitry@gmail.com]
Sent: Monday, March 12, 2012 16:21
To: solr-user@lucene.apache.org
Subject: Re: Performance (responsetime) on request
you can optimize the documentCache by setting maxSize
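For reference, a documentCache entry in solrconfig.xml looks roughly like the following; in the stock example configs the attribute is size, and the numbers below are illustrative, not recommendations:

```xml
<documentCache class="solr.LRUCache"
               size="512"
               initialSize="512"
               autowarmCount="0"/>
```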
Hi,
Actually we ran into the same issue with the ids parameter, in the Solr
front with a shards architecture (the exception is thrown in the Solr front). Were
you able to solve it by using the key:value syntax or some other way?
BTW, there was a related issue:
So I solved it by using key:(id1 OR ... idn).
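A sketch of that workaround in plain Java (key and the id values here are placeholders, not from the original thread):

```java
import java.util.List;

public class IdsQueryBuilder {
    // Build a query of the form key:(id1 OR id2 OR ... idn) as a
    // substitute for the ids parameter.
    static String buildIdsQuery(String key, List<String> ids) {
        return key + ":(" + String.join(" OR ", ids) + ")";
    }
}
```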
On Tue, Mar 27, 2012 at 9:14 AM, Dmitry Kan dmitry@gmail.com wrote:
Hi,
Actually we ran into the same issue with the ids parameter, in the Solr
front with a shards architecture (the exception is thrown in the Solr front). Were
you able to solve
,
Dmitry Kan
--
Regards,
Dmitry Kan
,
Dmitry Kan
= rb.resultIds.get(id);
[/code]
returns sdoc=null, which causes the next line of code to fail with an NPE:
[code]
int idx = sdoc.positionInResponse;
[/code]
Am I missing anything? Can something be done for solving this issue?
Thanks.
--
Regards,
Dmitry Kan
Can anyone help me out with this? Is this too complicated / unclear? I
could share more detail if needed.
On Wed, Apr 11, 2012 at 3:16 PM, Dmitry Kan dmitry@gmail.com wrote:
Hello,
Hopefully this question is not too complex to handle, but I'm currently
stuck with it.
We have a system
of IDS can help you. Could you please provide
more detailed info like stacktraces, etc. Btw, have you checked trunk for
your case?
On Thu, Apr 12, 2012 at 7:08 PM, Dmitry Kan dmitry@gmail.com wrote:
Can anyone help me out with this? Is this too complicated / unclear? I
could share more
interested). To
make it part of a releasable trunk, one would most probably need to provide
some way to configure 1st tier level.
Thanks,
Dmitry
On Thu, Apr 12, 2012 at 9:34 PM, Yonik Seeley yo...@lucidimagination.com wrote:
On Wed, Apr 11, 2012 at 8:16 AM, Dmitry Kan dmitry@gmail.com wrote:
We
-with-Solr-tp3925557p3925557.html
--
Regards,
Dmitry Kan
not be case sensitive.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Auto-suggest-on-indexed-file-content-filtered-based-on-user-tp3934565p3937370.html
--
Regards,
Dmitry Kan
Problem here is that e.g. New York is stored as two different tokens in
your index, as you use white space tokenizer. The easiest solution would be
to detect and break the incoming one-word query tokens into several tokens,
i.e. newyork => new york. That's probably possible only if there is a
am not getting any result .
--
View this message in context:
http://lucene.472066.n3.nabble.com/problem-with-date-searching-tp3961761p3961833.html
--
Regards,
Dmitry Kan
:-)
Could you help me, please?
Thank you very much!
Bye.
--
Regards,
Dmitry Kan
multivalued fields which may have duplicates
and I would like to be able to get a count of how many documents that
term appears (currently what faceting does) but also how many times
that term appears in general.
--
Regards,
Dmitry Kan
<double name="tf-idf">0.0025412960609911056</double>
</lst>
</lst>
</lst>
...
</lst>
-Dmitry
On Sat, May 5, 2012 at 12:05 AM, Jamie Johnson jej2...@gmail.com wrote:
it might be...can you provide an example of the request/response?
On Fri, May 4, 2012 at 3:31 PM, Dmitry Kan dmitry@gmail.com wrote:
I have tried (as a test
.n3.nabble.com/file/n3974922/conf.rar conf.rar
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-query-issues-tp3974922.html
--
Regards,
Dmitry Kan
-tp3974922p3975167.html
--
Regards,
Dmitry Kan
counts from the merged
docSets. How do I do it? Any pointers would be appreciated.
--
With Thanks and Regards,
Ramprakash Ramamoorthy,
Project Trainee,
Zoho Corporation.
+91 9626975420
--
Regards,
Dmitry Kan
(inefficiently) at the application layer, but I
was wondering if there have been any attempts within the community to solve
similar problems efficiently, without paying a hefty response-time price?
thank you
Peyman
[1] http://en.wikipedia.org/wiki/Kernel_methods
--
Regards,
Dmitry Kan
,
Dmitry Kan
seems best and accept that it doesn't
handle the other cases.
BTW, the proper name is PricewaterhouseCoopers.
-- Jack Krupansky
-Original Message- From: Dmitry Kan
Sent: Wednesday, November 07, 2012 1:58 AM
To: solr-user@lucene.apache.org
Subject: searching camel cased terms
.
-- Jack Krupansky
-Original Message- From: Dmitry Kan
Sent: Wednesday, November 07, 2012 1:58 AM
To: solr-user@lucene.apache.org
Subject: searching camel cased terms with phrase queries
Hello list,
There have apparently been a number of threads about handling camel-cased words
in the past
By specifying the tokenizer in question as a filter in schema.xml for your
text field type. In case it is your custom tokenizer, it must adhere to the
Lucene / SOLR API to submit tokens properly down the processing stream,
like these: http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters.
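The registration described above would look roughly like this in schema.xml (a sketch: the field type name, the tokenizer choice, and the custom factory class are all hypothetical):

```xml
<fieldType name="text_custom" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- your custom factory, registered like any other filter -->
    <filter class="com.mycomp.solr.analysis.MyCustomFilterFactory"/>
  </analyzer>
</fieldType>
```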
--
Regards,
Dmitry Kan
Dubai, U.A.E.
--
Regards,
Dmitry Kan
,
Dmitry Kan
if you test
and find any.
Regards,
Dmitry Kan
On Fri, Dec 7, 2012 at 5:50 PM, Neil Ireson n.ire...@dcs.shef.ac.uk wrote:
In case it is of use, I have just uploaded an updated and mavenised
version of the Luke code to the Luke discussion list, see
https://groups.google.com/d/topic/luke-discuss
Hi,
Have you tried looking at the admin analysis page? You can see how i-pod gets
indexed and highlight query results there too.
Best,
Dmitry Kan
On Wed, Dec 26, 2012 at 10:08 AM, Jose Yadao josesya...@gmail.com wrote:
Hi and Happy Holidays to everyone.
I have a question regarding the use
field.
As for the performance, no major delays compared to the original proximity
search implementation have been noticed.
Best,
Dmitry Kan
On Wed, Dec 19, 2012 at 10:53 AM, Dmitry Kan solrexp...@gmail.com wrote:
Dear list,
We are currently evaluating proximity searches ("term1 term2"~slope
Hi,
Have a look at TokenFilter. Extending it will give you access to a
TokenStream.
Regards,
Dmitry Kan
On Fri, Dec 21, 2012 at 9:05 AM, Xi Shen davidshe...@gmail.com wrote:
Hi,
I am looking for a token filter that can combine 2 terms into 1? E.g.
the input has been tokenized by white
#hl.snippets
Dmitry
On Mon, Jan 14, 2013 at 3:14 PM, Dmitry Kan solrexp...@gmail.com wrote:
Hello!
I'm playing with the regex feature of highlighting in SOLR. The regex I
have is pretty simple and, given a keyword query, it hits in a few places
inside each document.
Is there a way of highlighting
...@griddynamics.com wrote:
Dmitry,
I have some relevant experience and am ready to help, but I cannot get the
core problem. Could you please expand the description and/or provide a
sample?
On Tue, Jan 15, 2013 at 11:01 AM, Dmitry Kan solrexp...@gmail.com wrote:
Hello!
Is there a simple way
Hi,
For the sake of story completeness, I was able to fix the highlighter to
work with the token matches that go beyond the length of the text field.
The solution was to mod on matched token positions, if they exceed the
length of the text.
Dmitry
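A minimal sketch of that mod, with made-up names (the actual fix lives inside the highlighter code):

```java
public class PositionWrap {
    // Wrap a matched token position back into the text's range when it
    // exceeds the length of the (truncated) text field.
    static int wrapPosition(int matchPosition, int textLength) {
        return matchPosition >= textLength ? matchPosition % textLength
                                           : matchPosition;
    }
}
```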
On Thu, Dec 27, 2012 at 10:13 AM, Dmitry Kan
with facet by function patch
https://issues.apache.org/jira/browse/SOLR-1581 accomplished by tf()
function.
It doesn't seem like much help.
On Fri, Jan 18, 2013 at 12:42 PM, Dmitry Kan solrexp...@gmail.com wrote:
that we actually require the count of the sentences inside
each document where
Hello!
Does SOLR 4.x support / is going to support the multi-term phrase search
inside proximity searches?
To illustrate, we would like the following to work:
"a b" c~10
which would return hits with a b 10 tokens away from c in no particular
order.
It looks like
SurroundQueryParser: http://wiki.apache.org/solr/SurroundQueryParser
But, surround does not support regex terms, just wildcards.
-- Jack Krupansky
-Original Message- From: Dmitry Kan
Sent: Friday, January 18, 2013 8:59 AM
To: solr-user@lucene.apache.org
Subject: SOLR 4.x: multiterm phrase inside
at 4:44 PM, Jack Krupansky j...@basetechnology.com wrote:
Unfortunately, yes.
-- Jack Krupansky
-Original Message- From: Dmitry Kan
Sent: Friday, January 18, 2013 9:42 AM
To: solr-user@lucene.apache.org
Subject: Re: SOLR 4.x: multiterm phrase inside proximity searches possible
Yep, that's my issue: we still use solr 3.4.
On Fri, Jan 18, 2013 at 4:57 PM, Jack Krupansky j...@basetechnology.com wrote:
LUCENE-2754 is already in Lucene 4.0 - SpanMultiTermQueryWrapper.
-- Jack Krupansky
-Original Message- From: Dmitry Kan
Sent: Friday, January 18, 2013 9:50 AM
Hello!
Is there some activity on SOLR-1604? Can one of the contributors answer two
simple questions?
https://issues.apache.org/jira/browse/SOLR-1604?focusedCommentId=13557053&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13557053
Regards,
Dmitry
spans
- if you are in 3.x you'll have a problem with disjunction queries.
it seems challenging, doesn't it?
On 18.01.2013 at 17:40, Dmitry Kan solrexp...@gmail.com wrote:
Mikhail,
Do you say, that it is not possible to access the matched terms positions
in the FacetComponent
in one of the legs and try to access the matched spans
- if you are in 3.x you'll have a problem with disjunction queries.
it seems challenging, doesn't it?
On 18.01.2013 at 17:40, Dmitry Kan solrexp...@gmail.com wrote:
Mikhail,
Do you say, that it is not possible to access
-2000
docs with 3-5 facet fields. It shows 100 q/sec on an average datacenter
box.
On Mon, Jan 21, 2013 at 5:23 PM, Dmitry Kan solrexp...@gmail.com wrote:
Mikhail,
Thanks for the guidance! This indeed sounds challenging, esp. given the
bonus of fighting with solr 3.x in light
(start off-topic) Alexandre, nice ideas. The last one in the list is a bit
far-fetched, but still good. I would add one more: how to have exact matches
and inexact matches in the same analyzed field. (end off-topic)
On Wed, Jan 23, 2013 at 2:40 PM, Alexandre Rafalovitch
arafa...@gmail.com wrote:
Does debugQuery=true tell anything useful for these? Like what is the
component taking most of the 30 seconds. Do you have evictions in your solr
caches?
Dmitry
On Thu, Jan 31, 2013 at 10:01 AM, Mou mouna...@gmail.com wrote:
I am running solr 3.4 on tomcat 7.
Our index is very big, two
.
This is the way all of Lucene's own analysis tests work: e.g.
http://svn.apache.org/repos/asf/lucene/dev/trunk/lucene/analysis/common/src/test/org/apache/lucene/analysis/en/TestEnglishMinimalStemFilter.java
On Thu, Feb 14, 2013 at 7:40 AM, Dmitry Kan solrexp...@gmail.com wrote:
Hello,
Asked
Amazing. Thanks!
On Fri, Feb 15, 2013 at 7:07 PM, Robert Muir rcm...@gmail.com wrote:
For 3.4, extend ReusableAnalyzerBase
On Fri, Feb 15, 2013 at 12:06 PM, Dmitry Kan solrexp...@gmail.com wrote:
Thanks a lot, Robert.
I need to study a bit more closely the link you have sent. I have
To clarify a bit:
I did a quick test with my example and it seemed to fail with []
but pass with [].
did you mean to use {} in one of these?
Dmitry
On Sun, Feb 17, 2013 at 4:22 AM, Alexandre Rafalovitch
arafa...@gmail.com wrote:
I am looking at the Solr WIKI and some of the examples seem
at
once. Lately, it doesn't seem to be working. (Anonymous - via GTD book)
On Tue, Feb 19, 2013 at 3:11 AM, Dmitry Kan solrexp...@gmail.com wrote:
To clarify a bit:
I did a quick test with my example and it seemed to fail with []
but pass with [].
did you mean to use {} in one
Can you also show how you define the field rawData in the schema?
Dmitry
On Mon, Mar 4, 2013 at 4:13 PM, Van Tassell, Kristian
kristian.vantass...@siemens.com wrote:
Does anyone have any ideas? I don't understand how the query can match, as
I am querying against the same field, and yet get
Hello,
Look towards Tika. It can handle these MS Word file formats:
http://tika.apache.org/1.3/formats.html#Microsoft_Office_document_formats
Solr Wiki:
http://wiki.apache.org/solr/ExtractingRequestHandler
I don't have a link for a tutorial with example schemas.
Dmitry
On Tue, Mar 5, 2013
Probably, the bulk indexing feature is not implemented for Tika processing,
but you can easily put together a script yourself:
extract in a loop over the Word files in a directory:
curl "http://localhost:8983/solr/update/extract?literal.id=doc5&defaultField=text" \
  --data-binary @tutorial.html -H
PM, Dmitry Kan solrexp...@gmail.com wrote:
Thanks Alexandre for correcting the link and Mikhail for sharing the ideas!
Mikhail,
I will need to look closer at your customization of SpansFacetComponent on
the blogpost.
Is it so, that in this component, you are accessing and counting
Hello,
is the spanNOT operator supported in this patch? If not, is there a need for
this feature for anyone?
Regards,
Dmitry
Thanks Mikhail.
On Tue, Mar 5, 2013 at 8:23 PM, Mikhail Khludnev mkhlud...@griddynamics.com
wrote:
Something like this.
On Tue, Mar 5, 2013 at 6:16 PM, Dmitry Kan solrexp...@gmail.com wrote:
Hello,
I spent some more time on this and used Mikhail's suggestions of which
classes would
--
Regards,
Dmitry Kan