Yeah, I know.
Could anyone tell me which one is the right way?
Regards,
What an interesting application :-)
Dennis Gearon
Signature Warning
It is always a good idea to learn from your own mistakes. It is usually a
better idea to learn from others’ mistakes, so you do
Is that right?
On Tue, Oct 19, 2010 at 11:08 PM, findbestopensource
findbestopensou...@gmail.com wrote:
Hello all,
I have posted an article Lucene vs Solr
http://www.findbestopensource.com/article-detail/lucene-vs-solr
Please feel free to add your comments.
Regards
Aditya
On 10/19/2010 2:40 PM, Chris Hostetter wrote:
The formats are not currently compatible. The first priority was to get
the format fixed so it was using true UTF8 (instead of Java's bastardized
modified UTF8) in a way that would generate a clear error if people
attempted to use an older SolrJ to
Thanks Jonathan.
To further clarify, I understand that the match of
my blue rabbit
would have to be found in 1 element (of my multi-valued defined field) for the
phrase boost on that field to kick in.
If for example my document had the following 3 entries for the multi-value
field
I tried this work-around, but it seems not to work for me.
I still get an array of scores in the response.
I have two physical servers, A and B
localhost -- A
test --B
I issue query to A like this
OK, I did a little test after my previous email. The work-around that Hoss
provided does not work when you issue the query *:*
I tried a query like key:aaa and the work-around works no matter whether the
shard nodes number one, two, or more.
Thanks, Hoss. And maybe you could try and help me confirm whether this situation is
Can anyone suggest how to do multiple partial-word searching?
On Wed, Oct 20, 2010 at 11:42 AM, Chamnap Chhorn chamnapchh...@gmail.com wrote:
Hi,
I have a problem with combining the query with multiple partial-word
searching in the dismax handler. In order to make multiple partial word
Wouldn't it be easier to ensure that your config.aspx returns valid
XML? Wrap your existing code with some exception handling and return your
fallback XML if something goes wrong?
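A rough sketch of that suggested pattern (illustrative Python, not from the thread; the fallback document is made up):

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal fallback; any well-formed document your Solr-side
# consumer can accept would do.
FALLBACK_XML = "<response><status>error</status></response>"

def safe_config_xml(raw):
    """Return raw if it parses as XML, otherwise a valid fallback document."""
    try:
        ET.fromstring(raw)
        return raw
    except ET.ParseError:
        return FALLBACK_XML

# An empty string is not well-formed XML, so the fallback kicks in:
print(safe_config_xml(""))
print(safe_config_xml("<ok/>"))
```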
Hi,
I'm having trouble with searching on number fields. If the field contains
alphanumerics then search works perfectly, but not with all numbers. Can
anyone suggest a solution?
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer
Hi
Thanks for your reply. In actual fact, the config.aspx will either return
valid XML, or it will return an empty string - and unfortunately an empty
string is not considered valid XML by the Solr XML parser.
The config.aspx is a rather general application, returning all sorts of data,
Hello experts,
Has anyone succeeded in configuring and running Solr on WebSphere 7, and would
you be kind enough to help me with this?
New to Solr and WebSphere, I am looking for any hints on how to
configure Solr on WebSphere 7. I was able to configure and run it on
Tomcat and from the embedded Jetty.
The wiki
I had the same problem. The work around was to send mails in plain text.
On Wed, Oct 20, 2010 at 10:21 AM, Abdullah Shaikh
abdullah.shaik...@gmail.com wrote:
Just a test mail to check if my mails are reaching the ML.
I don't know, but my mails are failing to reach the ML with the following
Initcron Labs Announces Blaze - Appliance for Solr .
Read more and download at: http://www.initcron.org/blaze
Blaze is a tailor made appliance preinstalled and preconfigured with Apache
Solr running within Tomcat servlet container. It lets you focus on
developing applications based on
Hi everyone! (my first post)
I am new, but really curious about the usefulness of Lucene/Solr for document
search from web applications. I use Ruby on Rails to create one, with the
plugin acts_as_solr_reloaded, which makes the connection between the web app
and Solr easy.
So I am in a point, where I know
Sounds good, but there is nothing to download on Sourceforge?
Is this free or do you charge for it?
Cheers,
Stefan
Am 20.10.2010 13:03, schrieb Initcron Labs:
Initcron Labs Announces Blaze - Appliance for Solr .
Read more at and download from : http://www.initcron.org/blaze
Blaze is a
Did you visit http://sourceforge.net/projects/blazeappliance/files/ ?
There are currently Blaze__Appliance_for_Solr.i686-0.1.1.oem.tar.gz (412MB)
and Blaze__Appliance_for_Solr.i686-0.1.1.ovf.tar.gz (434MB) to download.
On Wed, Oct 20, 2010 at 3:23 PM, Stefan Moises moi...@shoptimax.de wrote:
Thanks, will look into those.
Andu
On Mon, Oct 18, 2010 at 4:14 PM, Ahmet Arslan iori...@yahoo.com wrote:
I know but I can't figure out what
functions to use. :)
Oh, I see. Why not just use {!boost b=log(vote)}?
May be scale(vote,0.5,10)?
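For illustration, a Python sketch of what those two function queries compute (to my understanding, Solr's log() is base 10, and scale() maps the observed value range linearly; the vote values are invented):

```python
import math

def solr_log(x):
    # Solr's log() function query is base 10
    return math.log10(x)

def solr_scale(values, lo, hi):
    # scale(field, lo, hi): map the observed min..max linearly into [lo, hi]
    vmin, vmax = min(values), max(values)
    if vmax == vmin:
        return [lo for _ in values]
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

votes = [1, 10, 100, 1000]
print([solr_log(v) for v in votes])   # 0, 1, 2, 3: tames huge vote counts
print(solr_scale(votes, 0.5, 10))     # 1 maps to 0.5, 1000 maps to 10
```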
Hello all,
I'm just wondering what the benefits/consequences are of using shards or
merging all the cores into a single core. Personally I have tried both, but
my document set is not large enough that I can actually test performance and
whatnot.
What is a better approach of implementing a
oh, I guess they have just uploaded it... when I checked, the file
list was empty :)
Am 20.10.2010 15:36, schrieb Stefan Matheis:
Did you visit http://sourceforge.net/projects/blazeappliance/files/ ?
There are currently Blaze__Appliance_for_Solr.i686-0.1.1.oem.tar.gz (412MB)
There's approximately a 100% chance that you are going to go through a
server-side language (PHP, Ruby, Perl, Java, VB/ASP/.NET [cough, cough]),
before you get to Solr/Lucene. I'd recommend it anyway.
This code should look at the user's browser locale (en_US, pl_PL, es_CO,
etc). The server
Careful comparing apples to oranges ;-)
For one, your lucene code doesn't retrieve stored fields.
Did you try the solr request more than once (with a different q, but
the same filters?)
Also, by default, Solr independently caches the filters. This can be
higher up-front cost, but a win when
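Roughly what that caching buys you, as a toy Python model (the doc set and filter logic are invented; Solr's filterCache is keyed per filter query):

```python
from functools import lru_cache

# Toy corpus: doc id -> category (a stand-in for an index)
DOCS = {1: "corporate", 2: "personal", 3: "corporate", 4: "other"}

@lru_cache(maxsize=512)
def filter_docs(fq_value):
    """First call pays the full scan; repeats are cache hits, which is
    the up-front-cost-then-win trade-off described above."""
    return frozenset(d for d, cat in DOCS.items() if cat == fq_value)

filter_docs("corporate")              # computed
filter_docs("corporate")              # served from the cache
print(filter_docs.cache_info().hits)  # at least one hit now
```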
2010/10/20 Dennis Gearon gear...@sbcglobal.net
There's approximately a 100% chance that you are going to go through a
server-side language (PHP, Ruby, Perl, Java, VB/ASP/.NET [cough, cough]),
before you get to Solr/Lucene. I'd recommend it anyway.
I use a server side language (Ruby) as I build
Under the category facet, there are multiple selections, which can be
personal, corporate, or other.
How can I get both personal and corporate ones? I tried
fq=category:corporate&fq=category:personal
It looks easy, but I can't find the solution.
--
Yavuz Selim YILMAZ
Hi all,
We've booked a London Search Social for Thursday the 28th Sept. Come
along if you fancy geeking out about search and related technology
over a beer.
Please note that we're not meeting in the same place as usual. Details
on the meetup page.
http://www.meetup.com/london-search-social/
Wow, apologies for utter stupidity. Both subject line and body should
have read 28th OCT.
On 20 October 2010 15:42, Richard Marr richard.m...@gmail.com wrote:
Hi all,
We've booked a London Search Social for Thursday the 28th Sept. Come
along if you fancy geeking out about search and related
It should work fine. Make sure the field is indexed and check your index.
On Wednesday 20 October 2010 16:39:03 Yavuz Selim YILMAZ wrote:
Under the category facet, there are multiple selections, which can be
personal, corporate, or other
How can I get both personal and corporate ones, I
Hi,
I have a very common question but couldn't find any post related to my
question in this forum.
I am currently initiating a full import each week, but the data that has
been deleted in the source is not updated in my documents, as I am using
clean=false.
We are indexing multiple data by data
Can't you, on each delete of that data, save the IDs in another table?
And then process those IDs against Solr to delete them?
On Wed, Oct 20, 2010 at 11:51 AM, bbarani bbar...@gmail.com wrote:
Hi,
I have a very common question but couldn't find any post related to my
question in this forum,
I
fq=(category:corporate category:personal)
On Wed, Oct 20, 2010 at 7:39 AM, Yavuz Selim YILMAZ yvzslmyilm...@gmail.com
wrote:
Under the category facet, there are multiple selections, which can be
personal, corporate, or other
How can I get both personal and corporate ones, I tried
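To spell out why the original attempt returned nothing, a small URL-building sketch (Python only for illustration; note the parenthesized form relies on the default operator being OR):

```python
from urllib.parse import urlencode

# Two separate fq parameters are intersected: a document must match BOTH
# filters, and no document is in two categories at once.
both = urlencode([("q", "*:*"),
                  ("fq", "category:corporate"),
                  ("fq", "category:personal")])

# One fq with an explicit OR selects documents in EITHER category.
either = urlencode([("q", "*:*"),
                    ("fq", "category:(corporate OR personal)")])

print(both)    # two fq clauses -> intersection -> empty result here
print(either)  # one fq clause -> union of the two categories
```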
ironicnet,
Thanks for your reply.
We actually use a virtual DB modelling tool to fetch the data from various
sources during run time, hence we don't have any control over the source.
We consolidate the data from more than one source and index the consolidated
data using Solr. We don't have any
Since you are performing a complete reload of all of your data, I don't
understand why you can't create a new core, load your new data, swap
your application to look at the new core, and then erase the old one, if
you want.
Even so, you could track the timestamps on all your documents, which
Thanks for your response Grant.
I already have the bounding box based implementation in place. And on a
document base of around 350K it is super fast.
What about a document base of millions of documents? While a tier based
approach will narrow down the document space significantly this concern
As Prasad said:
fq=(category:corporate category:personal)
But you might want to check your schema.xml to see what you have here:
<!-- SolrQueryParser configuration: defaultOperator="AND|OR" -->
<solrQueryParser defaultOperator="AND"/>
You can always specify your operator in
Sorry, what Pradeep said, not Prasad. My apologies Pradeep.
-Original Message-
From: Tim Gilbert
Sent: Wednesday, October 20, 2010 12:18 PM
To: 'solr-user@lucene.apache.org'
Subject: RE: Mulitple facet - fq
As Prasad said:
fq=(category:corporate category:personal)
But you
Hi,
I am trying to use EmbeddedSolrServer with just one core and I'd like to
load solrconfig.xml, schema.xml and other configuration files from a jar
via getResourceAsStream(...).
I've tried to use SolrResourceLoader, but all my attempts failed with a
RuntimeException: Can't find resource [...].
On 10/20/2010 9:59 AM, bbarani wrote:
We actually use a virtual DB modelling tool to fetch the data from various
sources during run time, hence we don't have any control over the source.
We consolidate the data from more than one source and index the consolidated
data using Solr. We don't have any
oh, I guess they have just uploaded it... when I checked, the file list
was empty :)
Yes. Upload is still in progress.
Currently all formats are on the SUSE Gallery page. On Sourceforge I
have managed to upload four formats now, including live CD, preload CD,
HDD/USB image and OVF
Hi,
Is it possible to define different Similarity classes for different fields?
We have a use case where we are interested in avoiding term frequency (tf)
when our fields are multiValued.
Regards,
Raimon Bosch.
Hi Solr Users,
I used the TermsComponent to walk through all the indexed terms and find
ones of particular interest (named entities). And now, I'd like to search
for documents that contain these particular entities. I have both query-time
and index-time stemming set for the field, which means I
Also you can set an expiration policy maybe, and delete files that expire
after some time and aren't older than others... but I don't know if you can
iterate over the existing IDs...
On Wed, Oct 20, 2010 at 1:34 PM, Shawn Heisey s...@elyograg.org wrote:
On 10/20/2010 9:59 AM, bbarani wrote:
We
In our current search app, we have sorting and filtering based on item
prices. We'd like to extend this to support sorting and filtering in the
buyer's native currency with the items themselves listed in the seller's
native currency. E.g: as a buyer, if my native currency is the Euro, my
search of
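One common approach, as a Python sketch (the static rates are made up; a real system would index a normalized price field or convert with live rates):

```python
# Made-up conversion rates into a base currency (EUR).
RATES_TO_EUR = {"EUR": 1.0, "USD": 0.72, "GBP": 1.13}

def price_in(amount, currency, target="EUR"):
    """Convert a seller's listed price into the buyer's currency so all
    listings sort and filter on one comparable scale."""
    in_eur = amount * RATES_TO_EUR[currency]
    return round(in_eur / RATES_TO_EUR[target], 2)

listings = [(100, "USD"), (60, "GBP"), (75, "EUR")]
# Sort by what each listing costs a EUR buyer, cheapest first:
print(sorted(listings, key=lambda item: price_in(*item)))
```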
Here's what I would do -
Search all the fields everytime regardless of language. Use one handler and
specify all of these in qf and pf.
question_en, answer_en,
question_fr, answer_fr,
question_pl, answer_pl
Individual field based analyzers will take care of appropriate tokenization
and you will
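The advice above, sketched as parameter-building code (Python; the language list and field names follow the example fields, everything else is assumed):

```python
LANGS = ["en", "fr", "pl"]

def dismax_params(user_query):
    """Span qf/pf over every language's fields; each field's own analyzer
    tokenizes appropriately, so only the matching language contributes."""
    fields = [f"{name}_{lang}" for lang in LANGS
              for name in ("question", "answer")]
    return {"defType": "dismax", "q": user_query,
            "qf": " ".join(fields), "pf": " ".join(fields)}

print(dismax_params("blue rabbit")["qf"])
```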
We are trying to convert a Lucene-based search solution to a
Solr/Lucene-based solution. The problem we have is that we currently have
our data split into many indexes and Solr expects things to be in a single
index unless you're sharding. In addition to this, our indexes wouldn't
work well
I am trying to import MODS XML data into Solr using the xml/http datasource.
This does not work with the XPathEntityProcessor of the data import handler:
xpath="/mods/name/namePart[@type = 'date']"
I actually have 143 records with type attribute as 'date' for element
namePart.
Thank you
Parinita
Well, it all depends (tm). Your example wouldn't match, but if you
didn't have an increment gap greater than 1, "black cat his blue" *would*
match.
Best
Erick
On Wed, Oct 20, 2010 at 3:22 AM, Jason Brown jason.br...@sjp.co.uk wrote:
Thanks Jonathan.
To further clarify, I understand that the
I don't see anything obvious. Try going to the admin page and click the
analysis link. That'll let you see pretty much exactly how things get
parsed both for indexing and querying.
Unless your synonyms are somehow getting in the way, but I don't
see how.
Best
Erick
On Wed, Oct 20, 2010 at 5:15
This may be a wild herring, but have you tried raw? NOTE: I'm a little
out of my depth here on what this actually does, so don't waste time by
thinking I'm an authority on this one. See:
http://lucene.apache.org/solr/api/org/apache/solr/search/RawQParserPlugin.html
and
Thank you very much~! I'll try it :)
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-can-i-get-collect-search-result-from-custom-filtered-query-tp1723055p1742898.html
Sent from the Solr - User mailing list archive at Nabble.com.
We are indexing multiple data by data types, hence we can't delete the index
and do a complete re-indexing each week; also we want to delete the orphan
Solr documents (for which the data is not present in the back-end DB) on a
daily basis.
Can you make delete by query work? Something like delete all Solr
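If you can pull the full UID list from both sides, the comparison itself is just a set difference (illustrative Python; fetching the IDs from Solr and the DB is left out, and the sample IDs are invented):

```python
def orphaned_uids(solr_uids, db_uids):
    """UIDs still in the index but gone from the source DB; these are the
    documents a daily cleanup would remove (e.g. one delete-by-id each)."""
    return sorted(set(solr_uids) - set(db_uids))

solr_ids = ["a1", "a2", "a3", "b7"]
db_ids = ["a1", "a3"]
print(orphaned_uids(solr_ids, db_ids))  # ['a2', 'b7']
```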
That looks very promising based on a couple of quick queries. Any objections
if I move the javadoc help into the wiki, specifically:
Create a term query from the input value without any text analysis or
transformation whatsoever. This is useful in debugging, or when raw terms
are returned from
See below:
But also search the archives for multilanguage, this topic has been
discussed
many times before. Lucid Imagination maintains a Solr-powered (of course)
searchable
list at: http://www.lucidimagination.com/search/
http://www.lucidimagination.com/search/
On Wed, Oct 20, 2010 at 9:03 AM,
Which is why the positionIncrementGap is set to a high number normally (100 in
the sample schema.xml). With this being so, phrases won't match across values
in a multi-valued field. If for some reason you were using a dismax ps phrase
slop that was higher than your positionIncrementGap, you
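A toy model of how the gap keeps phrases from spanning values (Python; the position bookkeeping is simplified from what Lucene actually does):

```python
def token_positions(values, gap=100):
    """Assign positions to whitespace tokens across a multi-valued field,
    jumping by `gap` between values (the positionIncrementGap)."""
    positions, pos = {}, 0
    for value in values:
        for token in value.split():
            positions.setdefault(token, []).append(pos)
            pos += 1
        pos += gap  # next value starts far away in position space
    return positions

# "cat" ends value 1, "his" starts value 2:
pos = token_positions(["black cat", "his blue rabbit"], gap=100)
print(pos["his"][0] - pos["cat"][0])  # 101: beyond any sane phrase slop
# With gap=1 the distance shrinks to 2, close enough for a sloppy phrase.
```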
It seems to me that multiple cores are along the lines you
need, a single instance of Solr that can search across multiple
sub-indexes that do not necessarily share schemas, and are
independently maintainable..
This might be a good place to start: http://wiki.apache.org/solr/CoreAdmin
HTH
Help updating/clarifying the Wiki is *always* appreciated
Erick
On Wed, Oct 20, 2010 at 9:10 PM, Sasank Mudunuri sas...@gmail.com wrote:
That looks very promising based on a couple of quick queries. Any
objections
if I move the javadoc help into the wiki, specifically:
Create a term
Thanks Erick. The problem with multiple cores is that the documents are scored
independently in each core. I would like to be able to search across both
cores and have the scores 'normalized' in a way that's similar to what Lucene's
MultiSearcher would do. As far as I understand, multiple
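This isn't what MultiSearcher does (it shares global term statistics across sub-indexes), but as a crude merge heuristic, per-core min-max normalization would look like (Python sketch, invented scores):

```python
def normalize(scores):
    """Min-max normalize one core's scores into [0, 1] so independently
    scored result lists can be merged on a roughly comparable scale."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [1.0] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

core_a = {"doc1": 8.0, "doc2": 2.0}   # raw scores from core A
core_b = {"doc3": 0.9, "doc4": 0.3}   # raw scores from core B
merged = {}
for core in (core_a, core_b):
    merged.update(zip(core, normalize(list(core.values()))))
print(merged)  # each core's best doc lands at 1.0, worst at 0.0
```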
Now my question is: is there a way I can use preImportDeleteQuery to delete
the documents from Solr for which the data doesn't exist in the back-end DB?
I don't have anything called a delete status in the DB; instead I need to get
all the UIDs from the Solr documents and compare them with all the UIDs in
Thanks - I was hoping it wouldn't match - and I believe you've confirmed it
won't in my case, as the default positionIncrementGap is set.
Many Thanks
Jason.
-Original Message-
From: Jonathan Rochkind [mailto:rochk...@jhu.edu]
Sent: Thu 21/10/2010 02:27
To: solr-user@lucene.apache.org
If you are a Solr/Lucene developer in Pune, India and are interested in a
consulting opportunity overseas,
or on Projects local to the area, please get in touch with me.
Thanks
On Tue, Oct 19, 2010 at 9:34 PM, danomano dshopk...@earthlink.net wrote:
Hi folks, I was wondering if there is any native support for posting gzipped
files to solr?
i.e. I'm testing a project where we inject our log files into Solr for
indexing; these log files are gzipped, and I figure it
On Mon, Oct 18, 2010 at 8:22 PM, Jason, Kim hialo...@gmail.com wrote:
Sorry for the delay in replying. Was caught up in various things this
week.
Thank you for the reply, Gora.
But I still have several questions.
Did you use separate indexes?
If so, you indexed 0.7 million XML files per instance
Hi all,
I increased my RAM size to 8GB and I want 4GB of it to be used
for Solr itself. Can anyone tell me the way to allocate the RAM for
Solr.
Regards,
satya