I also think that's a good question, and currently I don't have a use for this
answer :-)
I think it shouldn't be hard to write a Solr service that queries ZK and
replicates both the conf and the indexes (via SnapPuller or ZK itself), so that
such a node is responsible for backing up the whole cluster to secure storage.
Hi
It appears from the Solr documentation that it is not possible to group by
multi-valued fields. Is this correct?
Also, grouping only works on text fields - not, for example, int fields. I was
wondering what the basis for this decision was, and whether it is actually
possible to group by an int.
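For reference, a minimal grouping request might look like the sketch below. The field name is hypothetical; in 3.x/4.0 the group.field must be single-valued and indexed, and string fields are the safe choice:

```
q=*:*&group=true&group.field=category_s&group.limit=3
```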
Hi Jack,
Your suggestion works perfectly!
Thank you very much!!
it ended up being something like this:
query=_query_:'status:1 AND NOT priority:\-1' AND _query_:'{!frange l=3000
u=5000}max(sum(suser_count), sum(user_count))'
Regards,
Dirceu
On Thu, Sep 20, 2012 at 10:46 PM, Jack Krupansky
Hi Markus,
thank you very much, it helped. After many tries and reorderings in
config-files it works.
Greetings, tom
--
View this message in context:
http://lucene.472066.n3.nabble.com/poor-language-detection-tp4008624p4009374.html
Sent from the Solr - User mailing list archive at
Dear Solr community,
I am rather new to Solr, however I already find it kind of attractive. We are
developing a research application, which contains a Solr index with three
different kinds of documents, here the basic idea:
- A document of type doc consisting of fields id, docid,
The ReplicationHandler still works when you use SolrCloud, right? Can't you
just replicate from one (or N, depending on the number of shards) of the
nodes in the cluster? That way you could keep a Solr instance that's only
used to replicate the indexes, and you could have it somewhere else (other
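A sketch of what that could look like with the ReplicationHandler's HTTP API (host names and paths are examples, not from this thread): have the backup instance pull the index from one node, then snapshot it locally:

```
http://backup1:8983/solr/replication?command=fetchindex&masterUrl=http://node1:8983/solr/replication
http://backup1:8983/solr/replication?command=backup&location=/mnt/backups
```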
Looks like some sort of foul-up with Groovy versions and Solr 3.6.1 as I had
to roll back to Groovy 1.7.10 to get this to work. Started with Groovy 2 and
then 1.8 before 1.7.10. What's odd is that I implemented the same calls made
in ScriptTransformer.java in a test program and they worked
Hi,
I'm updating my Solr from version 3.4 to version 3.6.1 and I'm facing a
little problem with the DIH.
In the delta-import I'm using the parentDeltaQuery feature of the DIH
to update the parent entity.
I don't think this is working properly.
I realized that it's just executing the
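For context, a parentDeltaQuery setup usually looks something like this (adapted from the DIH wiki example; table and column names are illustrative):

```xml
<entity name="item" pk="id"
        query="SELECT * FROM item"
        deltaQuery="SELECT id FROM item WHERE last_modified &gt; '${dih.last_index_time}'">
  <entity name="feature" pk="item_id"
          query="SELECT description FROM feature WHERE item_id='${item.id}'"
          deltaQuery="SELECT item_id FROM feature WHERE last_modified &gt; '${dih.last_index_time}'"
          parentDeltaQuery="SELECT id FROM item WHERE id='${feature.item_id}'"/>
</entity>
```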
Thanks. If Solr doesn't have any special logic for dealing with
algorithmic-complexity attack-like overloads, then it sounds like Jetty and
Tomcat are responsible for Solr's unusually good performance in my
experiments (unusual compared to other non-Java web applications).
Cheers,
Mike
On Wed,
Hi Bernd,
You mentioned: Only one slave is online; the other is for backup. The backup
gets replicated first.
After that the servers are switched and the online one becomes the backup.
Could you please let us know how you do the switch? We use SWAP to switch
in SolrCloud. After SWAP, when we
Gian,
The only way to handle it is to provide a test case and attach to jira.
Thanks
On Fri, Sep 21, 2012 at 6:03 PM, Gian Marco Tagliani
gm.tagli...@gmail.com wrote:
Hi,
I'm updating my Solr from version 3.4 to version 3.6.1 and I'm facing a
little problem with the DIH.
In the
Hi David, I've installed the latest nightly, and am trying to use the spatial
queries. I've defined a field called Rectangle as such:
<field name="Rectangle" type="location_rpt" indexed="true" stored="true" multiValued="true"/>
Can you provide some guidance on how to index a field and how to query it?
On 9/20/2012 1:02 AM, Alok Bhandari wrote:
Hello,
I am using Solr 3.6.0, and I have observed many connections in the CLOSE_WAIT
state after using the Solr server for some time. On further analysis and
googling, I found that I need to close the idle connections from the client
which is connecting to Solr to
On 9/21/2012 11:20 AM, Shawn Heisey wrote:
On 9/20/2012 1:02 AM, Alok Bhandari wrote:
I am using solr 3.6.0 , I have observed many connection in CLOSE_WAIT
state
This is a me too email.
One difference - I am running 3.5.0.
Thanks,
Shawn
I have a search text field that contains all the search terms. I'm taking
user input and breaking it up into tokens of term1, term2, term3, etc., and
then submitting to Dismax.
q=search_text:term1* AND search_text:term2* AND search_text:term3*
This works great. The problem is when a user
http://wiki.apache.org/solr/SolrAdaptersForLuceneSpatial4
Definitely needs some updating; I will try to get to that this weekend.
-
Author: http://www.packtpub.com/apache-solr-3-enterprise-search-server/book
The following is the working query, with more weight given to title. I am using
the default parser. But the published_date of the results of this query is not
in order. I want the dates to be in order.
?q=content_type:video AND (title:(obama budget)^2 OR
export_headline:(obama budget))&fl=title,score,export_headline
Thanks David, that's exactly what I needed. One thing, from my experiments,
the order seems to be Xmin Ymin Xmax Ymax for both the indexing and the query.
Eric.
Date: Fri, 21 Sep 2012 10:34:07 -0700
From: dsmi...@mitre.org
To: solr-user@lucene.apache.org
Subject: Re: Using Solr-3304
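Putting the thread together, indexing and querying a rectangle would look roughly like this (coordinate values are made-up examples; the order is Xmin Ymin Xmax Ymax for both):

```xml
<field name="Rectangle">-74.1 40.5 -73.7 40.9</field>
```

and a matching query:

```
fq=Rectangle:"Intersects(-74.2 40.4 -73.6 41.0)"
```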
On 9/21/2012 11:22 AM, cleonard wrote:
Now a mistyped term is no problem. I still get results. The issue now is
that I get too many results back. What I want is something that effectively
does an AND if a term is matched, but does an OR when a term is not found.
To say it a different way -- if
I've played with the mm parameter quite a bit. It does sort of do what I
need if I do multiple queries decreasing the mm parameter with each call.
However, I'm doing this for a web form auto complete or suggester so I
really want to make this happen in a single request if at all possible.
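For what it's worth, a single edismax request with a conditional mm spec can come close. This is just a sketch (field name and values assumed): with the spec below, queries of three or more clauses are allowed to miss one:

```
q=term1 term2 term3&defType=edismax&qf=search_text&mm=2<-1
```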
--
Hi,
I'm curious... why do you issue multiple queries for autocomplete purposes?
Have you tried using the Suggester? You may also want to check out
http://sematext.com/products/autocomplete/index.html which works
nicely with Solr.
Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance
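For reference, a minimal Suggester setup in solrconfig.xml looks roughly like this (based on the Solr wiki example; the field name is taken from earlier in the thread and may need adapting):

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.tst.TSTLookup</str>
    <str name="field">search_text</str>
  </lst>
</searchComponent>
```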
Hi,
If you want to get results sorted by the published_date field, then you
need to sort by it: sort=published_date+asc. And then you don't really
need field boosting/weights.
Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring -
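Applied to the query from earlier in the thread, that would be something like this (untested sketch):

```
?q=content_type:video AND (title:(obama budget) OR export_headline:(obama budget))&sort=published_date asc&fl=title,published_date
```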
I am having the same problem after upgrading from 3.2 to 4.0. I have
sharedLib=lib added in the tag and I still get the same error. I deleted
all the files from the Solr home directory and copied the files from the 4.0
package. I still see this error. Where else could the old lib files be
: Below is my solr.xml configuration, and already set persistent to true.
...
: Then publish 1 record to test1, and query. it's ok now.
Ok, first off -- please provide more details on how exactly you are
running Solr. Your initial email said...
In Solr 3.6, core swap function works
: The ReplicationHandler still works when you use SolrCloud, right? can't you
: just replicate from one (or N, depending on the number of shards) of the
: nodes in the cluster? That way you could keep a Solr instance that's only
: used to replicate the indexes, and you could have it somewhere
Otis Gospodnetic-5 wrote
Hi,
I'm curious... why do you issue multiple queries for autocomplete
purposes?
Have you tried using Suggester? May also want
http://sematext.com/products/autocomplete/index.html which works
nicely with Solr.
Otis
Search Analytics -
David, I tried increasing the maxDetailDist, as I need 9 decimal places of
precision:
<fieldType name="rectangle" class="solr.SpatialRecursivePrefixTreeFieldType" geo="false" distErrPct="0.025" maxDetailDist="0.1"/>
But when I do, I get the following error: Data
SEVERE:
We're running Solr 3.4, a fairly out-of-the-box solr/jetty setup, with
-Xms1200m -Xmx3500m . When we start pushing more than a couple documents
per second at it (PDFs, they go through SolrCell/Tika/PDFBox), the java
process hangs, becoming completely unresponsive.
We thought it might be an issue
For your use-case of time ranges, set geo=false (as you've done). At this
point you have a quad tree but it doesn't (yet) work properly for the default
min max numbers that a double can store, so you need to specify the boundary
rectangle explicitly and to the particular numbers for your
I have to deal with 3 parameters: time filtering, a groupid (1 to 2000), and a
quality value (1 to 5), and was hoping to use an X format = Group.Ticks and Y =
quality level, where ticks is the number of ticks for a given time, rounded to
the minute. In other words, my field indexing would look
Spatial doesn't (yet) support 3d. If you have multi-value relationships across
all 3 parameters you mentioned, then you're a bit stuck. I thought you had 1d
(time) multi-value ranges without needing to correlate that to other numeric
ranges that are also multi-value.
On Sep 21, 2012, at 5:03
The requirements have evolved. :-) This is still the best solution for my
needs; I'm close, and I believe this can work. Removing quality from the
equation, I have to deal with pairs of GroupIds and Times. If I set the Y axis
to 0, as you mentioned, can I create a pair of X values with the
If you can stick to two dimensions then great. Remember to set the boundary
attribute on the field type as I described so that spatial knows the
numerical boundaries that all the data must fit in. e.g. boundary=0 0
10 2.5 (substituting whatever appropriate number of time units you need
for
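To illustrate the idea (all field names and numbers here are hypothetical): each time range becomes a rectangle whose Y span is pinned to 0, and overlap checks use Intersects:

```xml
<!-- index: time range [120, 480], Y pinned to 0 -->
<field name="timeRange">120 0 480 0</field>
```

and a query for anything overlapping [200, 300]:

```
fq=timeRange:"Intersects(200 0 300 0)"
```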
Hi Kevin,
Try taking a heap dump snapshot and analyzing it with something like
YourKit to see what's eating the memory.
SPM for Solr (see signature) will show you JVM heap and GC
numbers/graphs/activity that may shed some light on the issue.
You could also turn on verbose GC logging and/or use
When I said boundary I meant worldBounds.
Oh, and set distErrPct=0 to get precise shapes; the default is non-zero.
It'll use more disk space of course, which is all the more reason to
choose your world bounds carefully.
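So a field type along these lines (the bounds and numbers are placeholders to adapt to your own data):

```xml
<fieldType name="timeRange" class="solr.SpatialRecursivePrefixTreeFieldType"
           geo="false" distErrPct="0"
           worldBounds="0 0 600000 5"/>
```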
Thanks David, I'll play around with it. I appreciate the help,
Eric.
Date: Fri, 21 Sep 2012 14:47:36 -0700
From: dsmi...@mitre.org
To: solr-user@lucene.apache.org
Subject: RE: Using Solr-3304
When I said boundary I meant worldBounds.
Oh, and set distErrPct=0 to get precise shapes; the
: I am using solr 3.6.0 , I have observed many connection in CLOSE_WAIT state
: after using solr server for some time. On further analysis and googling
: found that I need to close the idle connections from the client which is
: connecting to solr to query data and it does reduce the number
: The following is the working query with more weight to title. I am using
: default parser. But the published_date of the results of this query is not
: in order. I want date is in order.
your request seems to contradict itself -- you say the results are not in
order because you want date is
Hi Chris,
Thanks for your help. Today I tried again to figure out the reason.
1. set up an external zookeeper server.
2. changed /opt/solr/apache-solr-4.0.0-BETA/example/solr/solr.xml persistent
to true, and ran the command below to upload the config to ZK. (renamed
multicore to solr, and need
I can't compile Solr 4.0, but I can compile trunk fine.
ant create-package fails with:
BUILD FAILED
/usr/local/jboss/.jenkins/jobs/Solr/workspace/solr/common-build.xml:229:
The following error occurred while executing this line:
: I cant compile SOLR 4.0, but i can compile trunk fine.
Hmmm... that's surprising; the part of the build file you pointed out as
causing you problems on 4x also exists in trunk.
: Documentation for ant - https://ant.apache.org/manual/Tasks/script.html it
: requires some external library. I
On 21 September 2012 09:16, bhaveshjogi bhavesh.jogi...@gmail.com wrote:
Hi, I am using this link for mapping my XML file:
http://wiki.apache.org/solr/DataImportHandler#wikipedia
but it does not work because of the complex XML file. Like
Yonik, et al.
I believe I found the section of code pushing me into 'insanity' status:
---snip---
int[] collapseIDs = null;
float[] hotnessValues = null;
String[] artistIDs = null;
try {
    collapseIDs =