You could use a copyField.
On 1/17/10 1:47 PM, Koji Sekiguchi k...@r.email.ne.jp wrote:
Pradeep Pujari wrote:
how can I specify uniqueKey value in schema.xml as a concatenation of 3
columns. Like prod_id+attr_name+att_value?
Thanks,
Pradeep
Solr doesn't support it. You should
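Since Solr doesn't support a concatenated uniqueKey natively, the usual workaround is to build the composite key client-side before posting the document. A minimal sketch using the column names from the question (the separator choice is an assumption):

```python
def make_unique_key(prod_id, attr_name, attr_value, sep="_"):
    """Build a composite uniqueKey from the three source columns.

    Pick a separator that cannot occur in the source values so the
    keys stay unambiguous.
    """
    return sep.join(str(v) for v in (prod_id, attr_name, attr_value))

# Document as it would be posted to Solr, with the derived id:
doc = {
    "id": make_unique_key(42, "color", "red"),
    "prod_id": 42,
    "attr_name": "color",
    "attr_value": "red",
}
```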
One way to do it is to group by city and then sort=geodist() asc
select?group=true&group.field=city&sort=geodist() desc&rows=10&fl=city
It might require 2 calls to SOLR to get it the way you want.
On Wed, Feb 15, 2012 at 5:51 PM, Eric Grobler impalah...@googlemail.com wrote:
Hi Solr community,
I
Please backport to 3x.
On Mon, Feb 20, 2012 at 2:22 PM, Yonik Seeley
yo...@lucidimagination.com wrote:
This should be fixed in trunk by LUCENE-2566
QueryParser: Unary operators +,-,! will not be treated as operators if
they are followed by whitespace.
-Yonik
lucidimagination.com
On
What I would like to do is ONLY boost if there is a match on terms in
SOLR 3.5. For example:
1. q=smith&defType=dismax&qf=user_query&sort=score desc
2. I want to add a boost by distance (closest = highest score), ONLY
if there is a hit on #1.
This one only multiplies by the smith *
I also get an issue with "." with edismax.
For example: Dr. Smith gives me different results than dr Smith
On Thu, Mar 1, 2012 at 10:18 PM, Way Cool way1.wayc...@gmail.com wrote:
Thanks Ahmet! That's good to know someone else also tried to make phrase
queries to fix the multi-word synonym issue.
Actually the results are great with lucene. The issue is with edismax.
I did figure out the issue...
The scoring was putting different results based on distance, when I
really need the scoring to be:
score=tf(user_query,smith) and add geodist() only if tf > 0. This is
pretty difficult to do in
Here is a performance question for you...
I want to be able to return results 160 km from Denver, CO. We have
run some performance numbers and we know that
doing bbox is MUCH faster than geofilt.
However we want to order the queries and run bbox AND then run geofilt
on those results, OR we can
Yeah I am a bit afraid when people want to use the join() feature. To
get good performance you really need to try to stick to the
recommendation of denormalizing your database into multiValued search
fields.
You can also use external fields, or store formatted info into a
String field in json or
Why not wrap the call into a service and then call the right handler?
On Fri, Mar 9, 2012 at 10:11 AM, geeky2 gee...@hotmail.com wrote:
hello all,
does solr have a mechanism that could intercept a request (before it is
handed off to a request handler).
the intent (from the business) is to
debugQuery tells you.
On Fri, Mar 9, 2012 at 1:05 PM, Russell Black rbl...@fold3.com wrote:
When searching across multiple fields, is there a way to identify which
field(s) resulted in a match without using highlighting or stored fields?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Great answer Robert.
On Fri, Mar 9, 2012 at 12:06 PM, Robert Stewart bstewart...@gmail.com wrote:
Split up index into say 100 cores, and then route each search to a specific
core by some mod operator on the user id:
core_number = userid % num_cores
core_name = core+core_number
That way
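The routing described above is just a deterministic hash of the user id onto a core name; sketched in Python (the core count and naming convention are taken from the mail):

```python
NUM_CORES = 100  # number of cores the index is split across

def core_for_user(user_id: int, num_cores: int = NUM_CORES) -> str:
    """Route a user id to its core: core_number = userid % num_cores."""
    return f"core{user_id % num_cores}"

# Every search for a given user always lands on the same core,
# so each core only ever holds that user's slice of the index.
</antml_code>```

For example, `core_for_user(1234)` returns `"core34"`, and every later query for user 1234 is sent to that core.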
For our use case this is a no-no. When the index is updated, we need
all indexes to be updated at the same time.
We put all indexes (slaves) behind a load balancer and the user would
expect the same results from page to page.
On Tue, Mar 20, 2012 at 5:36 AM, Eric Pugh
What type of logging were you using?
Did you try log back? We get a pretty large increase when using that.
On Fri, Mar 23, 2012 at 2:57 PM, dw5ight dw5i...@gmail.com wrote:
Hey All-
we run a http://carsabi.com car search engine with Solr and did some
benchmarking recently after we switched
I am also very confused at the use case for the Suggester component.
With collate on, it will try to combine random words together, not the
actual phrases that are there.
I get better mileage out of EDGE grams and tokenize on whitespace...
Left to right... Since that is how most people think.
Can you also include a /select?q=*:*&wt=xml
?
On Thu, Mar 29, 2012 at 11:47 AM, Erick Erickson
erickerick...@gmail.com wrote:
Hmmm, looking at your schema, faceting on a uniqueKey really doesn't make
all that much sense, there will always be exactly one of them. At
least it's highly
Why don't you contribute RA to the source so that it is a
feature/module inside SOLR?
On Thu, Mar 29, 2012 at 8:32 AM, Nagendra Nagarajayya
nnagaraja...@transaxtions.com wrote:
It is from build 2012-03-19 from the trunk (part of the email). No fork.
Regards,
Nagendra Nagarajayya
If you have degree of separation (like friend). You could do something like:
...&defType=dismax&bq=degree_of_separation:1^100
Thanks.
On Thu, Apr 5, 2012 at 12:55 AM, Monmohan Singh monmo...@gmail.com wrote:
Hi,
Any inputs or experience that others have come across will be really
helpful to
One idea was to wrap the field with CDATA. Or base64 encode it.
On Fri, Apr 27, 2012 at 7:50 PM, Bill Bell billnb...@gmail.com wrote:
We are indexing a simple XML field from SQL Server into Solr as a stored
field. We have noticed that the &amp; is output as &amp;amp; when using
wt=XML.
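The base64 idea mentioned above sidesteps XML escaping entirely: encode the raw XML before indexing and decode it after retrieval, so Solr never re-escapes the entities. A sketch (the sample XML content is made up):

```python
import base64

raw_xml = '<note kind="a&amp;b">already-escaped &amp; content</note>'

# Before indexing: store the XML opaquely in a string field.
stored = base64.b64encode(raw_xml.encode("utf-8")).decode("ascii")

# After retrieval: decode back to the original bytes, untouched.
roundtrip = base64.b64decode(stored).decode("utf-8")
```

The trade-off is that the field contents are no longer searchable or readable in query responses without the decode step.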
I am getting a post.jar failure when trying to post the following
CDATA field... It used to work on older versions. This is in Solr 3.6.
<add>
<doc>
<field name="id">SP2514N</field>
<field name="name">Samsung SpinPoint P120 SP2514N - hard drive - 250
GB - ATA-133</field>
<field name="manu">Samsung Electronics
-Original Message- From: William Bell
Sent: Monday, April 30, 2012 4:18 PM
To: solr-user@lucene.apache.org
Subject: post.jar failing
I am getting a post.jar failure when trying to post the following
CDATA field... It used to work on older versions. This is in Solr 3.6.
<add>
<doc>
<field
- From: William Bell
Sent: Tuesday, May 01, 2012 11:09 AM
To: solr-user@lucene.apache.org
Subject: Re: post.jar failing
OK. I am using SOLR 3.6.
I restarted SOLR and it started working. No idea why. You were right I
showed the error log from a different document.
We might want to add
Does anyone have the slides or sample code from:
Building Query Auto-Completion Systems with Lucene 4.0
Presented by Sudarshan Gaikaiwari, Software Engineer,Yelp
We want to implement WFST with GEO boosting.
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
I want to do a geodist() calculation on 2 different sfields. How would
I do that?
http://localhost:8983/solr/select?q={!func}add(geodist(),geodist())&fq={!geofilt}&pt=39.86347,-105.04888&d=100&sfield=store_lat_lon
But I really want geodist() for one pt, and another geodist() for another pt.
Can I
Let me know if this solved your problem!
*Juan*
On Wed, Aug 31, 2011 at 11:58 PM, William Bell billnb...@gmail.com wrote:
I want to do a geodist() calculation on 2 different sfields. How would
I do that?
http://localhost:8983/solr/select?q={!func}add(geodist(),geodist())&fq
We are going to be posting a Solr Command Utility - Windows version.
https://github.com/justengland/Solr-Command-Utility
It is a work in progress. Next step is to document it.
1. Use it to force a DIH full or delta index.
2. It works with 2 cores. A primary core does the indexing, and if
there
I am using 3.3 SOLR. I tried passing in -Denable.master=true and
-Denable.slave=true on the Slave machine.
Then I changed solrconfig.xml to reference each as per:
http://wiki.apache.org/solr/SolrReplication#enable.2BAC8-disable_master.2BAC8-slave_in_a_node
But this is not working. The enable
Thoughts?
On Wed, Feb 2, 2011 at 10:38 PM, Bill Bell billnb...@gmail.com wrote:
This is posted as an enhancement on SOLR-2345.
I am willing to work on it. But I am stuck. I would like to loop through
the lat/long values when they are stored in a multiValue list. But it
appears that I cannot
latCombined and longcombined and
calculate the closests distance to the user-defined point.
hth,
Geert-Jan
2011/2/3 William Bell billnb...@gmail.com
Thoughts?
On Wed, Feb 2, 2011 at 10:38 PM, Bill Bell billnb...@gmail.com wrote:
This is posted as an enhancement on SOLR-2345.
I am
I am not sure I understand your question.
But you can boost the result based on one value over another value.
Look at bf
http://wiki.apache.org/solr/SolrRelevancyFAQ#How_can_I_change_the_score_of_a_document_based_on_the_.2Avalue.2A_of_a_field_.28say.2C_.22popularity.22.29
On Wed, Feb 9, 2011
The first two questions are almost like religion. I am not sure we
want to start a debate.
Core setup is fairly easy. Add a solr.xml file and subdirs one per
core (see example/) directory. Make sure you use the right URL for the
admin console.
On Mon, Feb 14, 2011 at 3:38 AM, Rosa (Anuncios)
We use it in production, but the # of docs is only 2.5M.
2011/2/19 François Schiettecatte fschietteca...@gmail.com:
I use it in a production setting, but I don't have a very large data set or a
very heavy query load, the reason I use it is for edismax.
François
On Feb 19, 2011, at 9:50
I have used ram disks on slaves, since the master is already persisted.
On Sun, Feb 27, 2011 at 7:00 PM, Nick Jenkin njen...@gmail.com wrote:
You could also try using a ram disk,
mkdir /var/ramdisk
mount -t tmpfs none /var/ramdisk -o size=m
Obviously, if you lose power you will lose
See http://wiki.apache.org/solr/SpatialSearch and yes, use sort=geodist()+asc
This Wiki page has everything you should need.
On Tue, Mar 1, 2011 at 3:49 PM, Alexandre Rocco alel...@gmail.com wrote:
Hi Bill,
I was using a different approach to sort by the distance with the dist()
function,
I am not 100% sure, but why did you not use the standard config for text?
<fieldType name="text" class="solr.TextField"
positionIncrementGap="100" autoGeneratePhraseQueries="true">
<analyzer type="index">
<tokenizer class="solr.WhitespaceTokenizerFactory"/>
<!-- in this example, we will
<field column="date" dateTimeFormat="yyyy-MM-dd'T'hh:mm:ss" />
Did you convert the date to standard GMT format as above in DIH?
Also add transformer=DateFormatTransformer,...
http://lucene.apache.org/solr/api/org/apache/solr/schema/DateField.html
On Tue, Mar 1, 2011 at 7:54 PM, cyang2010
My patch is for 4.0 trunk.
On Fri, Mar 11, 2011 at 10:05 PM, rajini maski rajinima...@gmail.com wrote:
Thanks Bill Bell. This query works after applying the patch you referred
to, is it? Please can you let me know how do I need to update the current
war (apache solr 1.4.1) file with this new
There is a bug that leaves old index.* directories in the Solr data directory.
Here is a script that will clean it up. I wanted to make sure this is
okay, without doing a core reload.
Thanks.
#!/bin/bash
DIR=/mnt/servers/solr/data
LIST=`ls $DIR`
INDEX=`cat $DIR/index.properties | grep index\=
Thank you for pointing out #2. The commitsToKeep is interesting, but I
thought each commit would create a segment (before optimized) and be
self contained in the index.* directory?
I would only run this on the slave.
Bill
On Tue, Apr 5, 2011 at 2:54 PM, Markus Jelsma
markus.jel...@openindex.io
Just set up your schema with a string multivalued field...
On Wed, Apr 13, 2011 at 12:47 AM, shrinath.m shrinat...@webyog.com wrote:
For example, I am storing email ids of a person. If the person has 3 email
ids, I want to store them as
email = 'x...@whatever.com'
email = 'a...@blah.com'
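With a multiValued string field declared in the schema, the values are just repeated under the same field name when you post the document; as JSON that is simply a list (the id value here is made up):

```python
import json

# One document; "email" is a multiValued field, so it takes a list.
doc = {
    "id": "person-1",
    "email": ["x...@whatever.com", "a...@blah.com"],
}

# Body for an update request such as POST /solr/<core>/update
payload = json.dumps([doc])
```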
Yeah you don't need Java to use Solr. PHP, Curl, Python, HTTP Request
APIs all work fine.
The purpose of Solr is to wrap Lucene into a REST-like API that anyone
can call using HTTP.
On Thu, May 5, 2011 at 4:35 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Short answer: Yes, you can
Are you giving that solution away? What are the costs? etc!!
On Thu, May 5, 2011 at 2:58 PM, Otis Gospodnetic
otis_gospodne...@yahoo.com wrote:
Hi,
I haven't used Suggester yet, but couldn't you feed it all lowercase content
and
then lowercase whatever the user is typing before sending it
Is there a parser that can take a string and tell you what part is an
address, and what is not?
Split the field into 2 fields?
Search: Dr. Bell in Denver, CO
Search: Dr. Smith near 10722 Main St, Denver, CO
Search: Denver, CO for Cardiologist
Thoughts?
2011/5/5 François Schiettecatte
I am not a fan of code in a wiki page that is not tested. The purpose
of JIRA is so that we apply patches, and get it committed.
Let's try to move in that direction.
Bill
2011/6/24 Noble Paul നോബിള് नोब्ळ् noble.p...@gmail.com:
On Thu, Jun 23, 2011 at 9:13 PM, simon mtnes...@gmail.com
Yeah we use this in production. Yonik: What are the performance
implications of doing this? Will the fq be cached?
On Sat, Jun 25, 2011 at 7:27 AM, Yonik Seeley
yo...@lucidimagination.com wrote:
On Sat, Jun 25, 2011 at 5:56 AM, marthinal jm.rodriguez.ve...@gmail.com
wrote:
sfield, pt and d
RoySolr,
Not sure what language your client is written in, but this is a simple
if statement.
if (category == "TV") {
  qStr = "q=*:*&facet=true&facet.field=tv_size&facet.field=resolution";
} else if (category == "Computer") {
  qStr = "q=*:*&facet=true&facet.field=cpu&facet.field=gpu";
}
curl
Setup the filter on query and indexing to make it case insensitive...
Then reindex.
On Fri, Jul 8, 2011 at 1:26 AM, Romi romijain3...@gmail.com wrote:
Hello, I am using solr search. my search field contains both diamond and
Diamond.
But when i search for Diamond/diamond it gives me correct
dismax does not work with q=*:*
defType=dismax&q=*:* gives no hits.
You need to switch this to:
defType=dismax&q.alt=*:*
On Mon, Jul 18, 2011 at 8:44 PM, Erick Erickson erickerick...@gmail.com wrote:
What are qf_dismax and pf_dismax? They are meaningless to
Solr. Try adding
What do the committers think about adding an index queue in Solr?
Then we can have lots of one-off index requests that would queue up...
On Fri, Jul 22, 2011 at 3:14 AM, Pierre GOSSE pierre.go...@arisem.com wrote:
Solr still respond to search queries during commit, only new indexations
Let me give the full use case... There has been a little
misunderstanding, but really some good discussions...
1. Some Cardiologists are also Family Doctors and Internal Medicine
doctors (Internist).
2. The use case that confuses the users is the output of the query
when using dismax across 2
Let's just wait until SOLR 4.0 is out in a couple months.
On Fri, May 25, 2012 at 9:06 AM, Maciej Lisiewski c2h...@poczta.fm wrote:
There is some discussion here:
https://issues.apache.org/jira/browse/SOLR-3159
I've seen it - it's one of the Jira tickets I was referring to: Jetty 8 is
You went over the max limit for number of docs.
On Monday, May 28, 2012, tosenthu wrote:
Hi
I have a index of size 1 Tb.. And I prepared this by setting up a
background
script to index records. The index was fine last 2 days, and i have not
disturbed the process. Suddenly when i queried
We are using SOLR 1.4, and we are experiencing full index replication
every 15 minutes.
I have checked the solrconfig and it has maxsegments set to 20. It
appears like it is indexing a segment, but replicating the whole
index.
How can I verify it and possibly fix the issue?
--
Bill Bell
For the search results we actually put the small amount of data in the core.
Once someone clicks the results and we need to go to the item to
display the detailed results, we create another core with a stored XML
string field and an ID. The ID is indexable, and the string field is
only stored.
We all know that MMapDirectory is fastest. However we cannot always
use it since you might run out of memory on large indexes right?
Here is how I got SimpleFSDirectoryFactory to work. Just set
-Dsolr.directoryFactory=solr.SimpleFSDirectoryFactory.
Your solrconfig.xml:
<directoryFactory
Yep.
-Dsolr.directoryFactory=solr.SimpleFSDirectoryFactory
or
-Dsolr.directoryFactory=solr.MMapDirectoryFactory
works great.
On Mon, Jul 16, 2012 at 7:55 PM, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Hi Bill,
Standard picks one for you. Otherwise, you can hardcode the
-Dfile.encoding=UTF-8... Is this usually recommended for SOLR indexes?
Or is the encoding usually just handled by the servlet container like Jetty?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Same issue here. Also in the file there are multiple last index times for
each entity and we cannot reference the individual ones anymore.
DIH.entity1.last_index_time does not pass through to the query anymore.
On Friday, April 12, 2013, jimtronic wrote:
My data-config files use the
OK, is d in degrees or miles?
On Fri, Apr 12, 2013 at 10:20 PM, David Smiley (@MITRE.org)
dsmi...@mitre.org wrote:
Bill,
I responded to the issue you created about this:
https://issues.apache.org/jira/browse/SOLR-4704
In summary, use {!geofilt}.
~ David
Billnbell wrote
I would
We are getting an issue when using a GUID for a field in Solr 4.2. Solr 3.6
is fine. Something like:
fl=098765-765-788558-7654_userid as a string stored.
The issue is when the GUID begins with a numeric and then a minus.
This is a bug
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
You can update a row. Just allow a request parameter in the DIH and add it
to your query.
id=65
Then in your query you can use that. See the Wiki on DIH.
On Friday, April 19, 2013, Gora Mohanty wrote:
On 19 April 2013 19:50, hassancrowdc hassancrowdc...@gmail.com
wrote:
I
Guys,
Getting results to return with higher or lower precedence has to do with
relative scores.
For example
I want exact match to be scored highest and then text matching. You
generally use a copyField into 2 or more fields and set up different
fieldType and then boost one field over the other.
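The two-field boosting idea above translates into an edismax qf with per-field boosts; a sketch of the query-string construction (the field names name_exact and name_text and the boost values are hypothetical):

```python
from urllib.parse import urlencode

params = {
    "q": "smith",
    "defType": "edismax",
    # copyField targets: one analyzed minimally (exact), one fully
    # (text); the exact field gets the larger boost so exact matches
    # score highest, followed by ordinary text matches.
    "qf": "name_exact^10 name_text^1",
}
query_string = urlencode(params)
```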
You can store JSON in Solr as a string field. For searching you need to
pull out into separate fields.
To store JSON and use wt=json without messing with the field, try my patch,
SOLR-4685, and there is a field patch to take XML and convert to JSON if you
need that.
Can we get this in please to 4.3?
https://issues.apache.org/jira/browse/SOLR-4746
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
I am getting no results when using dynamic field, and the name begins with
numbers.
This is okay on 3.6, but does not work in 4.2.
dynamic name: 1234566_user
fl=1234566_user
If I change it to name: user_1234566 it works.
This appears to be a bug.
--
Bill Bell
billnb...@gmail.com
cell
I also get this. 4.2+
On Fri, Apr 19, 2013 at 10:43 PM, Eric Myers badllam...@gmail.com wrote:
I have multiple parallel entities in my document and when I run an import
there are times like
xxx.last_index_time
where xxx is the name of the entity.
I tried accessing these using
https://issues.apache.org/jira/browse/LUCENE-4226
It mentions that we can set compression mode:
FAST, HIGH_COMPRESSION, FAST_UNCOMPRESSION.
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
On 24 Apr 2013 at 07:02, William Bell billnb...@gmail.com wrote:
Can we get this in please to 4.3?
https://issues.apache.org/jira/browse/SOLR-4746
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Why don't we add a parameter to allow non programmers to change it?
Compression=FAST|etc
On Thursday, April 25, 2013, Chris Hostetter wrote:
: Subject: How do set compression for compression on stored fields in SOLR
4.2.1
:
: https://issues.apache.org/jira/browse/LUCENE-4226
: It mentions
Since facets are now included in Lucene, why don't we add a pass through
from Solr? The current facet code can live on but we could create new param
like facet.lucene=true?
Seems like a great enhancement !
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
Lucene facets in Solr. Maybe that could
be one of the key turning points for what defines Lucene/Solr 5.0.
Is there a Jira for this? I don't recall one.
-- Jack Krupansky
-Original Message- From: William Bell
Sent: Friday, April 26, 2013 4:01 AM
To: solr-user@lucene.apache.org
It does not work anymore in 4.x.
${dih.last_index_time} does work, but the entity version does not.
Bill
On Tue, May 7, 2013 at 4:19 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
Using ${dih.entity_name.last_index_time} should work. Make sure you put
it in quotes in your query.
Try https://issues.apache.org/jira/browse/SOLR-4685
It allows you to return raw JSON from a string field.
Also to convert a XML field to JSON you can use a plugin for DIH
https://issues.apache.org/jira/browse/SOLR-4692
On Monday, May 13, 2013, Chris Hostetter wrote:
: I don't want to use
Did you switch the JVM too?
On Thu, May 16, 2013 at 7:14 PM, Wei Zhao dweiz...@gmail.com wrote:
We are migrating from Solr 3.5 to Solr 4.2.
After some performance testing, we found 4.2's memory usage is a lot higher
than 3.5. Our 12GB max heap process used to handle the test pretty well
Yeah, how do you turn off the index writer?
On Friday, May 17, 2013, Andre Bois-Crettez wrote:
Can you explain your setup more ?
ie. is it master/slave, indexing in parallel, etc ?
We had to commit more often to reduce JVM memory usage due to
transaction logs in SolrCloud mode, compared with
It would be beneficial. Lucene facets are really fast without caching and
are what I call v2 since the drill sideways also adds capabilities.
On Wed, May 22, 2013 at 8:41 PM, Brendan Grainger
brendan.grain...@gmail.com wrote:
Thanks Jack, no urgency here. I'm unsure that it would even be
We have a 3GB index. We index on the master and then replicate to the
slaves.
But the issue is that after the slaves switch over - we get deadlocking, #
of threads increase to 500, and most times the SOLR instance just plain
locks up.
We tried adding a bunch of warming queries, but we still have
I solved this:
https://issues.apache.org/jira/browse/SOLR-4685
To get the field in there from XMl to JSON:
https://issues.apache.org/jira/browse/SOLR-4692
EnjoY!
On Wed, May 22, 2013 at 6:03 PM, Karthick Duraisamy Soundararaj
karthick.soundara...@gmail.com wrote:
Hello all,
OK here is the use case:
- Someone types Dr. Joe Smith
- We have the lat long of the user (say Denver, CO)
- We want to limit to 50 km around Denver, but if there is an exact match
we want to put that one at the top of the results.
- Is there an elegant way to do this? Or do we need to run 2
Thanks David !
On Sun, May 26, 2013 at 8:02 AM, David Smiley (@MITRE.org)
dsmi...@mitre.org wrote:
Hi Bill.
So it seems you want an exact match to be first even if it is outside the
spatial region, right? Your suggested implementation suggests this. And
apparently you want to sort
This simple feature of sort=geodist() asc is very powerful since it
enables us to move from SOLR 3 to SOLR 4 without rewriting all our queries.
We also use boost=geodist() in some cases, and some bf/bq.
bf=recip(geodist(),2,200,20)&sort=score
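Solr's recip(x,m,a,b) computes a/(m*x+b), so the bf above decays smoothly with distance; a quick numeric check of that shape, plugging in the values from the bf (distances in km are illustrative):

```python
def recip(x: float, m: float, a: float, b: float) -> float:
    """Solr's recip() function query: a / (m*x + b)."""
    return a / (m * x + b)

# bf=recip(geodist(),2,200,20): boost at various distances
boost_at_0 = recip(0, 2, 200, 20)    # strongest boost at the point itself
boost_at_40 = recip(40, 2, 200, 20)  # fades at 40 km
boost_at_90 = recip(90, 2, 200, 20)  # nearly flat by 90 km
```

Tuning m and b controls how quickly the boost falls off, which is why the same function shows up in both the bf and boost variants mentioned above.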
When is 4.3.1 coming out?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
It would be good to see some CMS configs too... Can you send your java
params?
On Wed, Jun 19, 2013 at 8:46 PM, Shawn Heisey s...@elyograg.org wrote:
On 6/19/2013 4:18 PM, Timothy Potter wrote:
I'm sure there's some site to do this but wanted to get a feel for
who's running Solr 4 on Java
Is there a simpler way to kick off a DIH handler update when it is running?
Scenario:
1. Doing an update using DIH
2. We need to kick off another update. Cannot since DIH is already running.
So the program inserts into a table (ID=55)
3. Since the DIH is still running old update, we cannot fire
Who is using varnish in front of SOLR?
Anyone have any configs that work with the cache control headers of SOLR?
--
Bill Bell
billnb...@gmail.com
cell 720-256-8076
I agree. It is even slower when the slave is being pounded.
On Fri, Jun 21, 2013 at 3:35 AM, Ted zhanghailian...@qq.com wrote:
Solr replication is extremely slow (less than 1 MB/s).
When the replication is running, network and disk occupancy rates remained at
a very low level.
I've tried
It does restart the MMap stuff though.
On Fri, Jun 21, 2013 at 12:26 PM, Michael Ryan mr...@moreover.com wrote:
Restarting Solr won't clear the disk cache. When I'm doing perf testing,
I'll sometimes run this on the server before each test to clear out the
disk cache:
echo 1
OK.
Here is the answer for us. Here is a sample default.vcl. We are validating
the LastModified ( if (!beresp.http.last-modified) )
is changing when the core is indexed and the version changes of the index.
This does 10 minutes caching and a 1hr grace period (if solr is down, it
will deliver
SOLR calls every 15 to 20 minutes.
One varnish was able to handle it with almost no lingering connections, and
load average of 1.
Varnish is very optimized and worth trying.
On Sat, Jun 29, 2013 at 6:47 PM, William Bell billnb...@gmail.com wrote:
OK.
Here is the answer for us. Here
Maybe to ignore it?
You can set a dynamicField to an ignored type as well.
On Wed, Jul 3, 2013 at 9:22 AM, Ali, Saqib docbook@gmail.com wrote:
Hello all,
What would be the use case for such a field:
<field name="stored_on" type="tdate" indexed="false"
stored="false"/>
and
<field
In your schema you can define a fieldType and have it remove anything
after the ".".
Or use something like
http://wiki.apache.org/solr/DataImportHandler#RegexTransformer
<field column="fileName" regex=".*?(.+)\..*" sourceColName="full_name"/>
On Wed, Jul 3, 2013 at 12:35 AM, archit2112
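The RegexTransformer pattern above keeps everything before the last dot; the same regex behaves like this in isolation (the sample filenames are illustrative):

```python
import re

# Same pattern as the DIH field: lazy prefix, greedy capture group,
# then the final ".ext" falls outside the group.
pattern = re.compile(r".*?(.+)\..*")

base = pattern.match("report.txt").group(1)       # drops ".txt"
base2 = pattern.match("archive.tar.gz").group(1)  # drops only ".gz"
```

Because the capture group is greedy, only the last extension is stripped, which matches the transformer's behavior on names with multiple dots.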
We should consider adding another parameter RealTime in the log. That
would really help all of us trying to figure out how much time a query is
taking.
On Tue, Jun 4, 2013 at 5:14 PM, Otis Gospodnetic otis.gospodne...@gmail.com
wrote:
Right. The main takeway is that QTime is not exactly what
If you are a programmer, you can modify it and attach a patch in Jira...
On Tue, Jun 4, 2013 at 4:25 AM, Marcin Rzewucki mrzewu...@gmail.com wrote:
Hi there,
StatsComponent currently does not have median on the list of results. Is
there a plan to add it in the next release(s) ? Shall I
Can it do Geo Spatial searching? (i.e. Find documents within 10 miles of a
lat,long?)
On Fri, Jul 5, 2013 at 12:53 PM, Fergus McDowall
fergusmcdow...@gmail.comwrote:
Here is some news that might be of interest to users and implementers of
Solr
I submitted a JIRA ticket a while ago, since I thought that having a way to
use the Lucene facets in SOLR could speed up our faceting. However, no one
seems to have picked up the development.
https://issues.apache.org/jira/browse/SOLR-4774
What is involved with hooking it into SOLR ? Similar to
Why is LUCENE-474 not committed?
On Thu, Jul 4, 2013 at 4:21 PM, Koji Sekiguchi k...@r.email.ne.jp wrote:
Hi Dotan,
(13/07/04 23:51), Dotan Cohen wrote:
Thank you Jack and Koji. I will take a look at MLT and also at the
.zip files from LUCENE-474. Koji, did you have to modify the code
I have a field that has omitNorms=true, but when I look at debugQuery I see
that
the field is being normalized for the score.
What can I do to turn off normalization in the score?
I want a simple way to do 2 things:
boost geodist() highest at 1 mile and lowest at 100 miles.
plus add a boost for
Can we get a sample fieldType and field definition?
Thanks.
On Mon, Jul 8, 2013 at 8:40 AM, Jack Krupansky j...@basetechnology.comwrote:
Yes, you should be able to used nested query parsers to mix the queries.
Solr 4.1(?) made it easier.
-- Jack Krupansky
-Original Message- From:
Hmmm. One way is:
http://localhost:8983/solr/core/select/?q=*%3A*&facet=true&facet.field=id&facet.offset=10&rows=0&facet.limit=1
If you get a result back, you have > 10 results.
to the list, queryNorm is calculated in the Similarity
object, I need to dig further, but that's probably a good place to start.
On 10 July 2013 04:57, William Bell billnb...@gmail.com wrote:
I have a field that has omitNorms=true, but when I look at debugQuery I
see
that
the field