I'm now considering whether Solr (Lucene) is a good choice when we have a
huge number of indexed documents and a large number of new documents
need to be indexed every day.
Maybe I'm wrong, but my feeling is that the way the sort caches are
handled (recreated after each new commit, not shared between
Hi Todd,
I have no idea whether, when I query the MySQL database, I get the right characters:
My 'japon' query file: select title from video where title like '%画%' and
video.language='ja' limit 2;
[EMAIL PROTECTED]:/# mysql -A -pass -u solr -h vip.videos.com dailymotion
japon
title
恐怖映画、こわくない版
映画のミステイク・ムービー
Maybe it comes from my data-config ??
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://master.vip.videos.com/videos"
              user="solr"
              password="pass"
              batchSize="-1"/>
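For reference, a common fix for this symptom (a standard MySQL Connector/J setting, not something confirmed in this thread) is to force UTF-8 on the JDBC connection via URL parameters. A sketch, with the parameter separator written as &amp;amp; since it sits inside an XML attribute:

```xml
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="jdbc:mysql://master.vip.videos.com/videos?useUnicode=true&amp;characterEncoding=UTF-8"
            user="solr"
            password="pass"
            batchSize="-1"/>
```

Without these parameters, Connector/J falls back to the server's default character set, which here is latin1.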
And even accents don't work :( it's really a problem with my UTF-8
je suis allée le voir et comme d'abitude les paysages sont magnifiques,
mais l'histoire n'est p
Maybe it comes from my data-config ??
<dataConfig>
  <dataSource type="JdbcDataSource"
              driver="com.mysql.jdbc.Driver"
Hi,
I have Solr 1.3 and Tomcat 5.5.
When I try to index a bit of data and then query for everything, my accents and
UTF-8 encoding are obviously not taken into account.
<doc>
  <date name="created">2006-12-14T15:28:27Z</date>
  <str name="description_ja">
    Le 1er film de Goro Miyazaki (fils de Hayao)
    <br/>je suis allée ...
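One common cause outside the database (an assumption, not something confirmed in this thread) is the Tomcat connector, which decodes URI query parameters as ISO-8859-1 unless told otherwise. A sketch of the relevant server.xml setting:

```xml
<!-- server.xml: make Tomcat decode GET query strings as UTF-8 -->
<Connector port="8080" protocol="HTTP/1.1"
           URIEncoding="UTF-8" />
```

This only affects how query parameters reach Solr; data indexed through a mis-configured JDBC connection has to be fixed separately.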
Hello everybody,
Thank you all for your help and ideas; it works now.
what are we doing wrong?
Florian
Actually, I am not sure what we did wrong. After we started it again
from scratch and with the simplified query, it all worked as expected.
Regards
Florian
Hi Jerome,
I tried to chat with you on your website but you weren't there, or ...?? lol.
OK, I tried what you did, and opening the file in gedit gives me:
<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">0</int><int
name="QTime">0</int><lst name="params"><str
It actually comes from the MySQL database variables:
| character_set_client     | latin1 |
| character_set_connection | latin1 |
so now I don't really know how to configure my
Hi,
How can I manage this??
| character_set_client     | latin1 |
| character_set_connection | latin1 |
| character_set_database   | utf8   |
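For reference, a sketch of the usual way to bring the client and connection character sets up to UTF-8 (standard MySQL configuration, not taken from this thread):

```sql
-- Per connection: sets character_set_client, character_set_connection
-- and character_set_results in one statement
SET NAMES utf8;
-- (Server-wide, the same effect comes from default-character-set=utf8
--  in the [mysqld] and [client] sections of my.cnf.)
```

When connecting through JDBC rather than the mysql client, the equivalent is the useUnicode/characterEncoding URL parameters.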
Hi,
I am facing various issues when using facets in Solr queries.
Scenario 1:
I have 11 indexes with the tag: <str name="Index_Type_s">productIndex</str>
My search query is appended with the facet parameters:
facet=true&facet.field=Index_Type_s&qt=dismaxrequest
The facet node I am getting in the Solr result is:
-
Hi,
I have gone through the archives in search of Hierarchical Faceting, but it was not
clear what exactly I should do to achieve it.
Suppose I have 3 categories like politics, science and sports. In the schema,
I am defining a field type called 'Category'. I don't have a sub-category field
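For reference, one widely used approach to hierarchical faceting (a general technique, not something prescribed in this thread) is to index the full category path, prefixed with its depth, into a single string field and drill down with facet.prefix. A sketch with a hypothetical field name category_path:

```
Indexed values (one per ancestor level):
  0/politics
  1/politics/elections

Top-level facets, then drill into politics:
  facet=true&facet.field=category_path&facet.prefix=0/
  facet=true&facet.field=category_path&facet.prefix=1/politics/
```

The depth prefix keeps each facet request to a single level of the tree.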
Hello !
Thank you, it is working. I've done a query, and my facet query result is:
facet_queries: {
  "published_year_facet:[1999 TO 2005]": 95,
  "rating_facet:[3 TO 3.99]": 25,
  "rating_facet:[1 TO 1.99]": 1
},
Is it possible to 'group' these kinds of queries (published together, rating
together
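There is no built-in grouping of facet.query results, but in Solr 1.4+ the key local param can rename each entry, which makes client-side grouping by key prefix straightforward. A hypothetical sketch (the key names are made up):

```
facet.query={!key=rating/1-2}rating_facet:[1 TO 1.99]
facet.query={!key=rating/3-4}rating_facet:[3 TO 3.99]
facet.query={!key=year/1999-2005}published_year_facet:[1999 TO 2005]
```

The client can then split each returned key on '/' to group the counts.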
I do have that in my config. Its existence doesn't seem to affect this
particular issue. I've tried it with and without.
-Todd
-Original Message-
From: Ryan McKinley [mailto:[EMAIL PROTECTED]
Sent: Monday, October 20, 2008 4:36 PM
To: solr-user@lucene.apache.org
Subject: Re: Problem
Hi Jeffrey,
How did you manage, with your database connection in latin-1, to get your
information properly into UTF-8?
And how to manage stemming for everything???
Thanks a lot,
How did you manage if
Tiong Jeffrey wrote:
Hi Ajanta,
Thanks! Since I used PHP, I managed to use the PHP decode
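The specific PHP call is cut off above, but the usual trick when a latin-1 connection delivers UTF-8 data is to re-interpret the raw bytes (the job PHP's utf8_encode/utf8_decode pair is typically drafted for). A sketch of the same idea in Python, with a hypothetical helper name:

```python
def fix_mojibake(s: str) -> str:
    """Undo a latin-1 misdecode of UTF-8 bytes.

    Text fetched over a latin-1 MySQL connection often arrives as UTF-8
    bytes that were decoded as latin-1; round-tripping the bytes
    recovers the original characters.
    """
    return s.encode("latin-1").decode("utf-8")

print(fix_mojibake("allÃ©e"))  # prints "allée"
```

This only repairs data after the fact; fixing the connection character set avoids the problem entirely.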
On 10/21/2008 at 12:14 AM, Noble Paul നോബിള് नोब्ळ् wrote:
On Tue, Oct 21, 2008 at 12:56 AM, Shalin Shekhar Mangar [EMAIL PROTECTED]
wrote:
Your data-config looks fine except for one thing -- you do not need to
escape the '&' character in an XML attribute. It may be throwing off the
parsing.
Wow, I really should read more closely before I respond - I see now, Noble,
that you were talking about DIH's ability to parse escaped '&'s in attribute
values, rather than about whether '&' was an acceptable character in attribute
values.
I should repurpose my remarks to note to Shalin, though,
Any idea? What can I do?
sunnyfr wrote:
Hi,
How can I manage this??
| character_set_client     | latin1 |
| character_set_connection | latin1 |
: I need to deploy the Solr using winstone servlet engine. Please help me
: how to configure it.
I've never heard of winstone until reading your thread, but according to
the home page...
http://winstone.sourceforge.net/
...you just specify the name of the war on the command line...
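Concretely, the invocation described on that page looks like the following (a sketch; the jar version, war filename and port are assumptions, not quoted from the winstone docs):

```shell
# Launch Solr's war under winstone on port 8983 (hypothetical paths)
java -jar winstone-0.9.10.jar --warfile=apache-solr-1.3.0.war --httpPort=8983
```

Winstone unpacks the war and serves it without any further container configuration.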
: Is it possible to do date math in a FunctionQuery? This doesn't work, but I'm
: looking for something like:
:
: bf=recip((NOW-updated),1,200,10) when using DisMax to get the elapsed time
: between NOW and when the document was updated (where updated is a Date field).
Date Math (as
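For reference: plain subtraction on date fields is not supported in Solr 1.3's function queries, but Solr 1.4 added an ms() function for exactly this pattern (the form below is the one documented on the Solr FunctionQuery wiki page; 3.16e-11 is roughly 1/milliseconds-per-year):

```
bf=recip(ms(NOW,updated),3.16e-11,1,1)
```

This yields a boost near 1 for just-updated documents that decays as the document ages.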
: Can somebody explain meaning of different patterns?
Vicky: there is a lot of good information about the SynonymFilterFactory
on the wiki...
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters#SynonymFilter
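In brief, that page distinguishes two patterns in synonyms.txt (examples adapted from the wiki):

```
# Comma-separated groups: all terms are mutual (bidirectional) synonyms
GB, gib, gigabyte, gigabytes

# "=>" rules: terms on the left are replaced by the terms on the right
sea biscuit, sea biscit => seabiscuit
```

Whether a comma group expands in both directions at query time also depends on the filter's expand attribute, which the wiki covers.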
-Hoss
: Admin screen, the indexed document comes up correctly in the response. But
: when I look for the index (.cfs) file in the folders, the file is not created
: at all.
:
: Solr Config xml entry (core 1) :
:
: <dataDir>${solr.data.dir:./solr/data}</dataDir>
unless you are defining a solr.data.dir
Thanks for the reply Hoss.
As far as our application goes, commits and reads are done to the index during
normal business hours. However, we observed the max warmers error happening
during a nightly job when the only operation is 4 parallel threads committing data
to the index and optimizing it.
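For context (standard solrconfig.xml, not quoted from this thread): the error in question is governed by the maxWarmingSearchers limit. Each commit opens and warms a new searcher, so four threads committing in parallel can easily exceed the cap:

```xml
<!-- solrconfig.xml: how many searchers may warm concurrently -->
<maxWarmingSearchers>2</maxWarmingSearchers>
```

Serializing the nightly commits, or committing once at the end of the job, usually avoids the error without raising this limit.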
: I want to use Solr to provide a web-based read-only view of this index.
: I require that the index not be locked in any way while Solr is using it
: (so the Java program can continue updating it) and that Solr is able to
: see new documents added to the index, if not immediately, at least a
Hi,
I'm pretty intrigued by the Ocean search stuff and the Lucene patch; I'm
wondering if it's something that a tweaked Solr with a modified Lucene can run
now? Has anyone tried merging that patch and running it with Solr? I'm
sure there is more to it than just swapping out the libs, but the real
time
So I tried to look on Google for an answer to this before I posted
here. Basically, I am trying to understand how prefix searching works.
I have a dynamic text field (indexed and stored), full_name_t.
I have some data in my index, specifically a record with full_name_t =
Robert P Page
A search on:
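As background (general Lucene/Solr behavior, since the message is cut off here): a prefix query is not run through the field's analyzers, so if full_name_t lowercases terms at index time, an upper-case prefix matches nothing. A sketch:

```
q=full_name_t:Rob*   (matches nothing if the index holds "robert")
q=full_name_t:rob*   (matches the lowercased indexed term)
```

Lowercasing the prefix on the client side before querying is the usual workaround.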
Hello,
I've been having issues with out-of-memory errors on searches in Solr. I
was wondering if I'm hitting a limit with Solr or if I've configured
something seriously wrong.
Solr Setup
- 3 cores
- 3163615 documents each
- 10 GB size
- approx 10 fields
- document sizes vary from a few kb to
How much RAM in the box total? How many sort fields and what types?
Sorts on each core?
Willie Wong wrote:
Hello,
I've been having issues with out-of-memory errors on searches in Solr. I
was wondering if I'm hitting a limit with Solr or if I've configured
something seriously wrong.
Solr
: The problem is that I will have hundreds of users doing queries, and a
: continuous flow of document coming in.
: So a delay in warming up a cache could be acceptable if I do it a few times
: per day, but not on too regular a basis (right now, the first query that loads
: the cache takes 150s).
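For context, the standard way to take that 150s hit off the first user is autowarming in solrconfig.xml: carry entries over from the old caches and replay a seed query whenever a new searcher opens. A sketch (the cache sizes and the seed query's sort field are assumptions, not from this thread):

```xml
<!-- solrconfig.xml: copy the hottest entries from the old cache -->
<filterCache class="solr.LRUCache" size="16384"
             initialSize="4096" autowarmCount="1024"/>

<!-- replay a representative sorted query against each new searcher -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="sort">updated desc</str></lst>
  </arr>
</listener>
```

The warming cost still has to be paid, but it is paid in the background instead of by the first query after each commit.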