I'm trying to index a set of stores and their articles. I have two
XML files, one that contains the data of the stores and one that
contains the articles for each store. I'm using DIH with
XPathEntityProcessor to process the file containing the stores, and
using a nested entity I try to get all
So here is the problem: I have a requirement to implement search by
person name. Names consist of:
- first name
- middle name
- last name
- nickname
There is a list of synonyms which should be applied only to the first
name and middle name.
In search, all fields should be searched
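One way this is commonly sketched (all field names, type names, and the synonyms file name below are assumptions, not from the original post) is to give the first/middle name fields their own field type whose analyzer applies SynonymFilterFactory, leave last name and nickname on a plain text type, and search all four fields at query time (e.g. with edismax and qf=first_name middle_name last_name nickname):

```xml
<!-- Hypothetical schema.xml fragment: a text type with synonyms for
     first/middle name, and a plain one for the other name fields -->
<fieldType name="text_name_syn" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="name_synonyms.txt"
            ignoreCase="true" expand="true"/>
  </analyzer>
</fieldType>
<fieldType name="text_name" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

<field name="first_name"  type="text_name_syn" indexed="true" stored="true"/>
<field name="middle_name" type="text_name_syn" indexed="true" stored="true"/>
<field name="last_name"   type="text_name"     indexed="true" stored="true"/>
<field name="nickname"    type="text_name"     indexed="true" stored="true"/>
```

This way the synonym expansion is confined to the two name fields that need it, while the other fields still participate in the search.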
Hi Michael,
I uninstalled Tomcat 6, Java, etc., and reinstalled all packages. I will
see if it's OK with a fresh install.
I will keep you informed, thanks!
On 21/07/2012 17:05, Michael Della Bitta wrote:
Yeah, that's Tomcat's memory leak detector. Technically that's a
memory leak, but in practice
It happens in 3.6; for this reason I thought of moving to Solandra.
If I do a commit, all the documents are persisted without any issues.
There are no issues in terms of functionality, but what happens is that
physical RAM usage goes higher and higher, stops at the maximum, and it
never
My uniqueKey in schema.xml is id. I've tried adding pk="id" to the store
entity but it makes no difference.
The result is the same if I set rootEntity="false" on the store entity.
However, I added debug and verbose output to the DataImportHandler and I
noticed a slight change in how the nested queries
22 July 2012, Apache Solr 3.6.1 available
The Lucene PMC is pleased to announce the release of Apache Solr 3.6.1.
Solr is the popular, blazing fast open source enterprise search platform
from the Apache Lucene project. Its major features include powerful
full-text search, hit highlighting,
Hi,
It seems that both of you simply don't understand what's happening in your
operating system kernel. Please read the blog post again!
It happens in 3.6; for this reason I thought of moving to Solandra.
If I do a commit, all the documents are persisted without any issues.
There is no
I am still struggling with nested DIH myself, but I notice that your
correlation condition is at the field level (@StoreId='${store.id}').
Were you planning to repeat it for each field definition?
Have you tried putting it instead in the forEach section?
Alternatively, maybe you need to use
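To make the suggestion concrete, here is a minimal sketch of what moving the correlation into the nested entity's forEach could look like (file paths, entity names, and the XML layout are assumptions; note that later messages in this thread report that variable resolution in the nested xpath/forEach may not work as expected):

```xml
<!-- Hypothetical data-config.xml: stores.xml drives the outer entity,
     articles.xml is re-read per store, correlated in forEach -->
<dataConfig>
  <dataSource type="FileDataSource" encoding="UTF-8"/>
  <document>
    <entity name="store" processor="XPathEntityProcessor"
            url="/path/to/stores.xml" forEach="/stores/store">
      <field column="id"   xpath="/stores/store/@id"/>
      <field column="name" xpath="/stores/store/name"/>
      <!-- the store/article correlation lives in forEach, not on a field -->
      <entity name="article" processor="XPathEntityProcessor"
              url="/path/to/articles.xml"
              forEach="/articles/article[@StoreId='${store.id}']">
        <field column="articleId" xpath="/articles/article/@id"/>
      </entity>
    </entity>
  </document>
</dataConfig>
```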
Hi Ahmet,
Thanks for the reply. Yes, actually after I posted the first question,
I found that edismax is very helpful in this use case. There is another
problem, which is about hyphens in the search query.
I guess I need to post it in another email.
Thank you very much
On Sun, Jul 22, 2012 at
Haven't done this in code myself, but take a look at
MultiCoreJettyExampleTest and the associated base
class; that might give you some pointers
Best
Erick
On Thu, Jul 19, 2012 at 9:35 PM, Nicholas Ball
nicholas.b...@nodelay.com wrote:
What is the best way to redirect a SolrQueryRequest to
Hi -- thanks for the response. It's the right direction. However, on closer
look I don't think I can use it directly. The reason is that in my case
the query string is always *:*; we use filter queries to get different
results. When fq=(field1:xyz) we want to boost one document and let sort=
to take
Wait, by using filter queries with *:* you're essentially
disabling scoring. *:*
resolves to a ConstantScoreQuery, and filter queries don't lend any
scoring at all.
It really sounds like you're shooting yourself in the foot by using *:*;
what happens if you use q= instead? QEV can be used in
The articleId field is the only field in the correlation file, so I just
need to get that one working.
I tried putting the condition in the forEach section. If I hardcode a value,
like 0104, it works, but it doesn't work with the variable. Haven't looked
at the source code yet, but maybe forEach
Hey Erick,
Managed to do this in the end by reconstructing a new SolrQueryRequest
with SolrRequestParsers (method buildRequestFrom()) and then calling
core.execute().
Took some fiddling but it seems to be working now! :)
Thanks for the help!
Nick
On Sun, 22 Jul 2012 10:58:16 -0400, Erick
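The approach described above might be sketched roughly as follows. This is only a hypothetical illustration against the Solr 3.6/4.0-era API, not the poster's actual code: the class, method, handler path, and null-streams argument are all assumptions, and it only compiles against the Solr core jars.

```java
// Hypothetical sketch: rebuild a SolrQueryRequest from a set of params
// and dispatch it to a core's registered /select handler.
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.core.SolrCore;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;
import org.apache.solr.servlet.SolrRequestParsers;

public class RedirectSketch {
  public static SolrQueryResponse redirect(SolrCore core, ModifiableSolrParams params)
      throws Exception {
    // Build a fresh request against the target core; passing no content
    // streams here (assumption: a plain query, not an update).
    SolrQueryRequest req = new SolrRequestParsers(core.getSolrConfig())
        .buildRequestFrom(core, params, null);
    SolrQueryResponse rsp = new SolrQueryResponse();
    try {
      // Hand the rebuilt request to the handler registered at /select
      core.execute(core.getRequestHandler("/select"), req, rsp);
    } finally {
      req.close(); // release the searcher reference held by the request
    }
    return rsp;
  }
}
```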
Or for names that are more involved, you can use a special
tokenizer/filter chain and index different variants of the name into
one index
example:
https://github.com/romanchyla/montysolr/blob/solr-trunk/contrib/adsabs/src/java/org/apache/lucene/analysis/synonym/AuthorSynonymFilter.java
roman
On
It's almost what I've been doing, but I didn't write my own filter,
I used SynonymFilterFactory.
Thanks
On Sun, Jul 22, 2012 at 12:45 PM, Roman Chyla roman.ch...@gmail.com wrote:
Or for names that are more involved, you can use special
tokenizer/filter chain and index different variants of
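For reference, a SynonymFilterFactory mapping for first names might look like the following; the entries are made-up examples, not from this thread:

```text
# Hypothetical name_synonyms.txt: equivalent first-name forms per line
robert, rob, bob, bobby
william, will, bill, billy
elizabeth, liz, beth, betsy
```

With expand="true", each form on a line is indexed (or expanded at query time) as all of the others.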
Good luck!
Michael Della Bitta
Appinions, Inc. -- Where Influence Isn’t a Game.
http://www.appinions.com
On Sun, Jul 22, 2012 at 6:38 AM, Bruno Mannina bmann...@free.fr wrote:
Hi Michael,
I uninstalled Tomcat 6, Java, etc., and reinstalled
Hi,
I have an index of about 50m documents. The fields in this index are
basically hierarchical tokens: token1, token2, ... token10.
When searching the index, I start by getting a list of the query tokens
(1..10) and then request the documents that match those query tokens.
I always want about
I've installed
rpm -qa | grep -i ^tomcat-7
tomcat-7.0.27-7.1.noarch
with
update-alternatives --query java | grep Value
Value: /usr/lib64/jvm/jre-1.7.0-openjdk/bin/java
on
GNU/Linux
x86_64
kernel 3.1.10
Tomcat is started
I get a similar situation using Windows 2008 and Solr 3.6. Memory used by
mmap is never released, even if I turn off traffic, commit, and do a manual
GC. If the size of the index is 3 GB then memory used will be heap + 3 GB of
shared; if I use a 6 GB index I get heap + 6 GB. If I turn off
Hi!
I am very excited to announce the availability of Solr 4.0-ALPHA with
RankingAlgorithm 1.4.4 with Realtime NRT. The Realtime NRT
implementation now supports both RankingAlgorithm and Lucene. Realtime
NRT is a high-performance and more granular NRT implementation compared
to soft commit. The
What exactly is Realtime NRT (Near Real Time)?
On Sun, 2012-07-22 at 14:07 -0700, Nagendra Nagarajayya wrote:
Hi!
I am very excited to announce the availability of Solr 4.0-ALPHA with
RankingAlgorithm 1.4.4 with Realtime NRT. The Realtime NRT
implementation now supports both
Ok, problem found by digging in the source code. Whether it is a bug or
works by design I don't know, but the cause is in how the translation of
the variable ${store.id} is made.
The translation is made in the method initXpathReader() with these lines:
String xpath = field.get(XPATH);
/srv/www sounds like a doc root for a web server...
On Jul 22, 2012, at 1:24 PM, k9...@operamail.com wrote:
I've installed
rpm -qa | grep -i ^tomcat-7
tomcat-7.0.27-7.1.noarch
with
update-alternatives --query java | grep Value
Value:
On Sun, Jul 22, 2012, at 02:08 PM, Jon Sharp wrote:
/srv/www sounds like a doc root for a web server...
It's a simple directory.
It's not configured as doc root for my web server.
It is hopeless to talk to both of you; you don't understand virtual memory:
I get a similar situation using Windows 2008 and Solr 3.6. Memory using
mmap is never released. Even if I turn off traffic and commit and do a
manual
gc. If the size of the index is 3gb then memory used will be heap +
Hi Uwe,
Thanks Uwe. Have you checked the bug in the JRE for MMapDirectory? I was
mentioning this; it is posted on the Oracle site, and in the API doc.
They accept this as a bug; have you seen it?
MMapDirectory (http://lucene.apache.org/java/3_0_2/api/core/org/apache/lucene/store/MMapDirectory.html) uses
Realtime NRT is an NRT implementation available for Solr 1.4.1 to Solr
4.0. To enable NRT it makes an NRTIndexReader available to the
IndexSearcher for searching the index. It does not close the
SolrIndexSearcher, which is a very heavy object with caches, etc., to do
this. Since the Searcher is