In debug mode it writes only 10 because there is a rows parameter
which is set to 10 by default.
Make it 100 or so and you should see all docs. But in non-debug
mode there is no such parameter.
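For example, a debug-mode full-import request with a larger row cap might look like this (host, port, and handler path are assumptions based on a default setup):

```
http://localhost:8983/solr/dataimport?command=full-import&debug=on&rows=100
```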
On Sun, Oct 12, 2008 at 11:00 PM, con [EMAIL PROTECTED] wrote:
I wrote a jdbc program to
Thanks Noble
I tried debug mode with rows=100 and it is accepting all the result
sets.
So I suppose there is nothing wrong with the query.
But I am not able to update the index, since this is available only in
debug mode.
Can you please give some suggestions based on this?
thanks
con
Hello rameshgalla,
Monday, October 13, 2008, 8:25:56 AM, you wrote:
r Hi,
r I don't know if there is a better solution for this one, but I resolved the
r problem in my application like this:
r After getting the spell suggestion I have performed the search operation
r without displaying the results.
Hello Gene,
On Monday, 13 Oct 2008, at 23:32 +1300, ristretto.rb wrote:
How does one use this field type?
Forums, wiki, Lucene in Action, all coming up empty.
If there's a doc somewhere please point me there.
I use pysolr to index. But, that's not a requirement.
I'm not sure how one
Thanks a lot!
I downloaded a dictionary called de_DR.xml and put it into my conf
directory...
Then I changed my schema.xml to:

<filter class="solr.DictionaryCompoundWordTokenFilterFactory"
        dictFile="./conf/de_DR.xml"
        minWordSize="5"
        minSubwordSize="2"
        maxSubwordSize="15"
        onlyLongestMatch="true"/>
but solr can't find the dictionary file :-(
Now just do a normal full-import and do not enable debug. I guess it
should be just fine.
On Mon, Oct 13, 2008 at 1:20 PM, con [EMAIL PROTECTED] wrote:
Thanks Noble
I tried debug mode with rows=100 and it is accepting all the result
sets.
So I suppose there is nothing wrong in the
Hi,
I would like to properly manage a multi-language search engine,
and I would like your advice about what I have done.
Solr1.3
tomcat55
http://www.nabble.com/file/p19954805/schema.xml schema.xml
Thanks a lot,
This came up the other day, too; see
http://lucene.markmail.org/message/cnrrkw3d35wqxhzz?q=How+to+tokenize/analyze+docs+for+the+spellchecker
I think we could add this to the Lucene spellchecker, or at least to
the SpellCheckComponent. rameshgalla, care to write up your code as a
patch?
Hi,
is it really necessary to put it all into one index? You could also use the
Solr MultiCore/MultipleIndexes feature and separate by language.
Regards,
Hannes
On Mon, Oct 13, 2008 at 3:20 PM, sunnyfr [EMAIL PROTECTED] wrote:
Hi,
I would like to manage properly multi language search
Fairly nebulous requirements, but I recently was involved in a
multilingual search platform.
The approach, translated to solr 1.3 would be to use multicore - one
core per geography. Then a schema.xml per core, each with a different
language in the porter algorithm, stopwords etc - taken from
Hi,
Thanks guys for your answer, but I don't think I can use a core for each
language,
because, for example, if somebody connects from Italy and there are not
that many Italian books,
then by default I would show the few Italian books, but all the English ones as
well.
Do you have an
Well, it's this section shown below, which would change from geography
to geography.
Parameterise the EnglishPorterFilterFactory and protwords.
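Concretely, the language-dependent part of each core's schema would mainly be the stemming filter, e.g. (factory names as shipped with Solr 1.3; exact attributes should be checked against your version):

```xml
<!-- English core -->
<filter class="solr.EnglishPorterFilterFactory" protected="protwords.txt"/>
<!-- Italian core -->
<filter class="solr.SnowballPorterFilterFactory" language="Italian"/>
```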
You could introduce logic in the front end which checks whether the number of
results is zero and then queries the English-language core, but it doesn't make
logical
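The zero-results fallback just described can be sketched in a few lines; here `search_fn` is a stand-in for whatever client (e.g. pysolr) performs the actual per-core query, so this is only an illustration of the control flow, not a real Solr client:

```python
def search_with_fallback(search_fn, query, lang, fallback_lang="en"):
    # Try the user's language-specific core first.
    results = search_fn(query, lang)
    # If that core returned nothing, fall back to the English core.
    if not results and lang != fallback_lang:
        results = search_fn(query, fallback_lang)
    return results
```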
What is the problem with the way that I've done it?
Does that mean that there are some that are linked to languages we
won't manage in search? There are too many languages; the application will be for
video.
We will manage around 10 languages, but in our database we have around 25
languages.
Hannes Carl Meyer wrote:
Hi,
is it really necessary to put it all into one index? You could also use the
Solr MultiCore/MultipleIndexes feature and separate by language.
Is there a good webpage with info about the multi-index feature?
I know http://wiki.apache.org/solr/MultipleIndexes
In your schema you define each field type as follows:

<fieldtype name="text_it" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StandardFilterFactory"/>
    <filter class="solr.ISOLatin1AccentFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter
Hi Ralf,
you should also check on the example inside the Solr 1.3 download package!
The management of multiple languages inside multiple indexes really makes
sense in terms of configuration effort (look at your big kahuna
configuration file!) and performance, and gives additional scalability
But I don't get it: if you look at my schema.xml, isn't that what I've done, a
multi-index?
So I was right?
Hannes Carl Meyer-2 wrote:
Hi Ralf,
you should also check on the example inside the Solr 1.3 download package!
The management of multiple languages inside multiple indexes really makes
Nope, your schema defines a single index with all languages being stored.
The other way would be MultiCore/MultipleIndexes as described here:
http://wiki.apache.org/solr/CoreAdmin and
http://wiki.apache.org/solr/MultipleIndexes#head-e517417ef9b96e32168b2cf35ab6ff393f360d59
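For reference, with that approach the per-language cores would be declared in a solr.xml along these lines (core names and paths here are invented for illustration):

```xml
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="books_en" instanceDir="cores/books_en"/>
    <core name="books_it" instanceDir="cores/books_it"/>
  </cores>
</solr>
```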
On Mon, Oct 13, 2008
Ok, so actually multi-core is multi-index?
Cheers for these links.
Hannes Carl Meyer-2 wrote:
Nope, your schema defines a single index with all languages being stored.
The other way would be MultiCore/MultipleIndexes as described here:
http://wiki.apache.org/solr/CoreAdmin and
How does one use this field type?
Forums, wiki, Lucene in Action, all coming up empty.
If there's a doc somewhere please point me there.
I use pysolr to index. But, that's not a requirement.
I'm not sure how one adds multivalues to a document. And once added,
if you want to remove one
how
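On removing one value from a multivalued field: as far as I know Solr has no per-value delete, so the usual approach (an assumption on my part, not something pysolr automates) is to rebuild the document client-side with the unwanted value removed and re-add it, which overwrites the stored copy via the unique key:

```python
def remove_multivalue(doc, field, value):
    """Return a copy of `doc` with `value` dropped from the multivalued `field`.

    Re-adding the returned doc (same unique key) replaces the stored one.
    """
    updated = dict(doc)
    updated[field] = [v for v in doc.get(field, []) if v != value]
    return updated
```

With pysolr this would be followed by something like `solr.add([updated])` and a commit.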
Hi Ralf,
On 10/13/2008 at 5:45 AM, Kraus, Ralf | pixelhouse GmbH wrote:
but solr can't find the dictionary file :-(
Try using the name of the file without a path - I believe the conf/ directory
is in the search path used by Solr when loading resources, i.e.:
dictFile=de_DR.xml
As an
: but as soon as I edit schema.xml in any of the cores and restart tomcat..and
: view schema file in schema browser of solr admin, it doesn't reflect the
: changes.
are you sure you are editing the right schema.xml file? what is the
absolute path of the file you are editing? when you look at
Svein Parnas-2 wrote:
One way to boost exact match of one occurrence of a multivalued field
is to add some kind of special start-of-field token and end-of-field
token in the data, eg:
<document>
  <field name="professor">John Dane</field>
  <field name="courses">softok Algorithms eoftok</field>
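The start/end token trick can be sketched like this; the sentinel names `softok`/`eoftok` follow the example above, and the query string shown is ordinary Lucene phrase syntax, so this is an illustration rather than a complete indexing pipeline:

```python
START, END = "softok", "eoftok"

def wrap_value(value):
    # Surround each stored value with sentinel tokens at index time.
    return f"{START} {value} {END}"

def exact_phrase_query(field, value):
    # An exact-match query is then a phrase containing both sentinels, so it
    # only matches when the value spans a whole occurrence of the field.
    return f'{field}:"{wrap_value(value)}"'
```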
Hi
I have now recreated the whole index with new index files and all is back to
normal again. I think something had happened to our old index files.
Many thanks to you who tried to help.
Uwe
On Mon, Oct 6, 2008 at 5:39 PM, Uwe Klosa [EMAIL PROTECTED] wrote:
I already had the chance to setup a
Hi Grant,
Thanks for your response. I'm trying to simulate our production
environment's search traffic which has very low cache hit rate.
Turning off the caches can help us better understand query times and
the load on the slaves when distribution occurs with a small list of
pre-canned
Is this possible? Thinking of a two-phase migration process, I'd like to
upgrade my index-generating master first, and
then after that I'll upgrade my index consumer.
Is this possible, or will I have any issues? I'm using the embedded server and
not HTTP.
Thanks a lot!
[]s,
Lucas
It may be possible; however, you would need to use 1.2 with the
Lucene libraries from 1.3. The index format has changed, so a newer
index cannot be read by an older Lucene.
ryan
On Oct 13, 2008, at 3:25 PM, Lucas F. A. Teixeira wrote:
Is this possible? Thinking in a two-phase
Hello Grant,
GI This came up the other day, too, see
http://lucene.markmail.org/message/cnrrkw3d35wqxhzz?q=How+to+tokenize/analyze+docs+for+the+spellchecker
GI .
GI I think we could add this to the Lucene spellchecker, or at least to
GI the SpellCheckComponent. rameshgalla, care to write
Hello.
I use Solr 1.3 and I have a problem with ShingleFilterFactory.
I read about ShingleFilterFactory and decided to try it.
I created a new type, just for experimenting:

<fieldType name="my_type" class="solr.TextField">
  <analyzer type="index">
    <tokenizer
Hi Aleksey,
KeywordTokenizerFactory creates a single token out of the input given it.
You probably want something like WhitespaceTokenizerFactory instead - it
creates tokens at whitespace boundaries.
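The behavioural difference can be illustrated with a toy sketch (these are not the real Lucene implementations, just models of their output):

```python
def keyword_tokenize(text):
    # KeywordTokenizer: the entire input becomes a single token.
    return [text]

def whitespace_tokenize(text):
    # WhitespaceTokenizer: tokens are split at whitespace boundaries.
    return text.split()
```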
Steve
On 10/13/2008 at 5:30 PM, Aleksey Gogolev wrote:
Hello.
I use Solr 1.3 and I
Hi !
For custom faceting of numerical fields (and similar applications), it
would be super-useful if the list of terms for each numerical field in
the index
(accessible via FieldCache.StringIndex.lookup), could be stored in
numerical rather than natural (alphabetical) order.
(For example
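Until something like that exists, a common workaround (my suggestion, not part of the proposal above) is to zero-pad numbers at index time, so that the natural lexicographic term order coincides with numeric order:

```python
def pad_number(n, width=10):
    # Lexicographic order of zero-padded strings equals numeric order
    # (for non-negative integers that fit in `width` digits).
    return str(n).zfill(width)
```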
Gene, I think you can think of multi-valued fields as just regular fields with
multiple values concatenated together. There is nothing super magical about
them.
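That concatenation can be made concrete with a toy model of how token positions are laid out for a multivalued field; the `gap` mirrors the `positionIncrementGap` attribute in Solr schemas (the model is mine, not actual Lucene code):

```python
def token_positions(values, gap=100):
    # Each value's tokens get consecutive positions; a large gap is inserted
    # between values so phrase queries don't match across value boundaries.
    positions = {}
    pos = 0
    for value in values:
        for tok in value.split():
            positions.setdefault(tok, []).append(pos)
            pos += 1
        pos += gap
    return positions
```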
Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch
- Original Message
From: ristretto.rb [EMAIL
: Try using the name of the file without a path - I believe the conf/ directory
is in the search path used by Solr when loading resources, i.e.:
:
:dictFile=de_DR.xml
according to the code, the param name is "dictionary", not "dictFile".
I'll add a better error message.
-Hoss
: :dictFile=de_DR.xml
:
: according to the code the param name is dictionary not dictFile.
PS: the dictionary file shouldn't be an XML file; it should look just
like a stopwords file (one word per line)
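For reference, such a dictionary file is just plain words, one per line; for German decompounding it would contain the compound parts (these entries are only illustrative):

```
donau
dampf
schiff
fahrt
```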
-Hoss