On Mon, 2013-04-29 at 18:22 +0200, Dmitry Kan wrote:
Does it even make sense to paginate in facet searches, if we require deep
paging?
Whether it makes sense logically is up to you.
Technically deep paging in facets could be implemented in approximately
the same way as deep paging in search
I'm using the latest solr 4.2.1 with apache-tomcat-7.0.33
What exactly is the purpose of the linkconfig command? When I run it
before creating the collection, it creates a file in zookeeper. After
creating the collection, the file is gone and a directory takes its place.
On 04/29/2013
Anyone have an idea how to debug this?
Thx!
On 04/25/2013 09:18 AM, Arkadi Colson wrote:
Hi
It seems not to work in my case. We are using the solr php module for
talking to Solr. Currently we have 2 collections 'intradesk' and 'lvs'
for 10 solr hosts (shards: 5 - repl: 2). Because there is no
Facet pagination would be logical for us to use, if it worked. It just
doesn't, probably due to the amount of data we store and the imposed RAM settings.
On Tue, Apr 30, 2013 at 10:39 AM, Toke Eskildsen
t...@statsbiblioteket.dk wrote:
On Mon, 2013-04-29 at 18:22 +0200, Dmitry Kan wrote:
Does it
When I look at admin gui I see that for a leader:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
and that for a replica:
Replication (Master) Version Gen Size
Master: 1367309548534 84 779.87 MB
Isn't it confusing that the leader is a slave and
Hi,
Is there any upper limit on the number of facet queries I can include in a
single query? Also, is there any performance hit if I include too many facet
queries in a single query?
Any help would be appreciated
Dear Experts,
I have a requirement to return exact matches first and apply alphabetical
sorting thereafter.
To illustrate, exact matches should be sorted first, with all later results
in alphabetical order.
So, if there are 5 documents as below
Doc 1
title: trees
Doc 2
title: plum trees
Doc 3
Be sure to test the bloom postings format on your own use case ... in
my tests (heavy PK lookups) it was slower.
But to answer your question: I would expect a single segment index to
have much faster PK lookups than a multi-segment one, with and without
the bloom postings format, but bloom may
I'm getting this error in tomcat log:
Apr 30, 2013 11:21:44 AM org.apache.solr.common.SolrException log
SEVERE: null:org.apache.solr.common.SolrException: Error trying to proxy
request for url:
http://solr03.officemeeuwen.smartbit.be:8983/solr/intradesk/select/
at
This is what I get on the solr host where the query is proxied to:
Apr 30, 2013 11:36:28 AM org.apache.solr.core.SolrCore execute
INFO: [intradesk_shard1_replica1] webapp=/solr path=/admin/ping/
params={indent=on&wt=json&version=2.2} hits=0 status=0 QTime=1
Apr 30, 2013 11:36:28 AM
Hi Folks;
I can backup my indexes at SolrCloud via
http://_master_host_:_port_/solr/replication?command=backup
and it creates a file called snapshot. I know that I should copy that
directory to some other safe place (a backup store). However, what should I do
to restore from that backup file?
I use SolrCloud, 4.2.1 of Solr.
Here is a detail from my admin page:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
When I use command=abortfetch, the file sizes are still not the same. Any idea?
Never mind about these last 2 posts. debugQuery parameter must be false
or true instead of 0 or 1
On 04/30/2013 01:54 PM, Arkadi Colson wrote:
This is what I get on the solr host where the query is proxied to:
Apr 30, 2013 11:36:28 AM org.apache.solr.core.SolrCore execute
INFO:
I've set up a new test situation with the same results. 3 solr nodes and
3 collections with 2 shards and 0 replicas.
Collection Intradesk is on solr01 and solr02. When querying to solr03 I
got no results:
Uncaught exception 'SolrClientException' with message 'Unsuccessful
query request :
There's no fixed limit of facet.query's. But certainly there is a performance
impact, which is often mitigated by warming and caching. You'll need to test
the impact in your environment.
Erik
On Apr 30, 2013, at 02:22 , vicky desai wrote:
Hi,
Is there any upper limit on the
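As Erik notes, there is no fixed limit on repeated facet.query parameters. A minimal sketch in Python of assembling such a request's query string (the field name and ranges are made up for illustration, not from the thread):

```python
from urllib.parse import urlencode

# Repeated facet.query parameters in one request; "price" and its
# ranges are hypothetical examples.
params = [("q", "*:*"), ("facet", "true")]
params += [("facet.query", fq) for fq in
           ("price:[0 TO 10]", "price:[10 TO 100]", "price:[100 TO *]")]
qs = urlencode(params)
print(qs)
```

Passing a list of tuples to urlencode keeps the repeated parameter name, which is how Solr expects multiple facet.query values.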
Hallo,
I have more than one sortable field. Example:
I have address data and I want to sort by zip code and street name.
Both fields are declared as sortable in the schema, and I
can sort by them.
q=*&sort=zip+asc
or
q=*&sort=street+asc
But how can I sort by both (first zip code then
Please review:
http://wiki.apache.org/solr/UsingMailingLists
You haven't given us nearly enough information to answer your question.
My guess is you're trying to return very large data sets, something
Solr isn't designed to do, but that's only a guess.
Possibly you have not set lazy field loading
Actually, look at the referenced JIRA
https://issues.apache.org/jira/browse/SOLR-2438 and you'll see it's
changed in 3.6.
Best
Erick
On Mon, Apr 29, 2013 at 9:36 AM, geeky2 gee...@hotmail.com wrote:
here is the jira link:
https://issues.apache.org/jira/browse/SOLR-219
I don't think you can do that. You're essentially
trying to mix ordering of the result set. You
_might_ be able to kludge some of this with
grouping, but I doubt it.
You'll need two queries I'd guess.
Best
Erick
On Mon, Apr 29, 2013 at 9:44 AM, Sandeep Mestry sanmes...@gmail.com wrote:
Dear
I haven't found that, but I did find this:
Apr 30, 2013 9:38:10 AM org.apache.solr.core.CoreContainer create
INFO: Creating SolrCore 'collection1' using instanceDir: solr/collection1
Apr 30, 2013 9:38:10 AM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for directory:
hello erik,
thank you for the info - yes - i did notice ;)
one more reason for us to upgrade from 3.5.
thx
mark
--
View this message in context:
http://lucene.472066.n3.nabble.com/why-does-affect-case-sensitivity-of-query-results-tp4059801p406.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 4/30/2013 6:13 AM, Furkan KAMACI wrote:
I use SolrCloud, 4.2.1 of Solr.
Here is a detail from my admin page:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
When I use command=abortfetch, the file sizes are still not the same. Any
On 4/30/2013 2:51 AM, Furkan KAMACI wrote:
When I look at admin gui I see that for a leader:
Replication (Slave) Version Gen Size
Master: 1367309548534 84 779.87 MB
Slave: 1367307649512 82 784.44 MB
and that for a replica:
Replication (Master) Version Gen Size
Master: 1367309548534
On 4/30/2013 8:16 AM, Shawn Heisey wrote:
Master info gets listed as soon as the replication handler has started.
If that core has ever been used as a replication target since the last
restart, then it will show the slave information as well.
After thinking about it. I'm not 100% sure that
In Solr Cloud, commits can happen at different times across replicas. Which
means merges also may happen at different times. So there's no expectation
of the cores of different replicas being totally similar.
Michael Della Bitta
Appinions
18 East
Thanks, Jan, for your reply.
My application has thousands of users, and I don't know yet how many of them
will use this feature. They can exclude one document from their search
results or 200,000 documents. It's much more natural that they
exclude something like 50-300 documents. More
That directory is the data directory for the core... you'd just swap it in.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Tue, Apr 30, 2013 at 8:06 AM,
Thanks Erick,
I tried grouping and it appears to work okay. However, I will need to
change the client to parse the output..
fq=title:(tree)&group=true&group.query=title:(trees) NOT
title_ci:trees&group.query=title_ci:blair&group.sort=title_sort
desc&sort=score desc,title_sort asc
I used the actual
Thanks Jack Krupansky, Its very helpful :)
Jack Krupansky-2 wrote
The WDF types will treat a character the same regardless of where it
appears.
For something conditional, like a dot between letters vs. a dot not preceded
and followed by a letter, you either have to have a custom tokenizer or
I think that replication occurs after commit by default. It has been a long
time, however, and there is still a mismatch between leader and replica
(approximately 5 MB). I tried to pull the index from the leader, but it is
still the same.
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
In Solr
Peter, try sorting them only using one sort parameter, separating the fields
by comma.
sort=zip+asc,street+asc
--
View this message in context:
http://lucene.472066.n3.nabble.com/More-than-one-sort-criteria-tp4059989p4060015.html
Sent from the Solr - User mailing list archive at Nabble.com.
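The comma-separated sort suggested above can also be assembled programmatically. A minimal sketch in Python (field names taken from the thread; the helper name is hypothetical):

```python
from urllib.parse import urlencode

def build_sort(fields):
    """Join (field, direction) pairs into a single Solr sort parameter."""
    return ",".join(f"{field} {direction}" for field, direction in fields)

sort_param = build_sort([("zip", "asc"), ("street", "asc")])
print(sort_param)  # zip asc,street asc
# URL-encoded form of the full request parameters:
print(urlencode({"q": "*:*", "sort": sort_param}))
```

Solr applies the fields in order: documents are sorted by zip first, and street breaks ties within equal zip values.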
I'm a little confused. Are you using Solr Cloud, or ordinary replication?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Tue, Apr 30, 2013 at 10:33 AM,
I have created 2 versions of Solr core in different servers. one is simple
core having all records in one core. And other is shards core, distributed
over 3 cores on server.
Simple core :
http://localhost:8080/sorl/core0/select?q=text:hoers~1
Distributed core :
A fuzzy query itself does not know about distributed search - Lucene simply
scores the query results based on the local index. Then Solr merges the
query results from the different nodes.
Try the query locally for each node and set debugQuery=true and see how each
document gets
Should I stop the node first? And what will happen to the transaction logs?
Should I back them up too?
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
That directory is the data directory for the core... you'd just swap it in.
Michael Della Bitta
I use Solr 4.2.1 as SolrCloud
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I'm a little confused. Are you using Solr Cloud, or ordinary replication?
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York,
I use Solr 4.2.1
What happens if I unload a core - I mean, what does Solr do? solr.xml
didn't change, and I think Solr should write something somewhere or
delete something from somewhere?
Shawn, why don't they have the same data, byte for byte? Can I force the slave
to pull it? I tried, but it didn't work.
2013/4/30 Furkan KAMACI furkankam...@gmail.com
I use Solr 4.2.1 as SolrCloud
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I'm a little confused. Are you using
Jon
Did you upgrade from an earlier Solr installation? If so, clearing your browser
cache might help. There is a fix in place for 4.3.
If it does not, what is the output of
http://localhost:8983/solr/admin/cores?wt=json ? Does it contain the
collection1 core?
To try a basic thing, what do you
Then there is no replication, and no slaves or masters. There's a leader
and followers. Documents themselves are sent from the leader to followers,
not cores or segments. You should not expect the bits on disk across
leaders and followers to be the same, because of the reasons I mentioned
Presumably you'd only be restoring a backup in the face of a catastrophe.
Yes, you'd need to stop the node. And the transaction logs may not be
useful in this case. You'd have trouble reconciling them with the version
of the index in your backup I would think.
Anybody who knows more about this
Thanks James for your reply.
I have updated to 3.6.2. Now the NullPointerException is gone, but the
entities with CachedSqlEntityProcessor don't add anything to Solr,
while entities without CachedSqlEntityProcessor are working fine.
Why don't entities with CachedSqlEntityProcessor do anything?
By default, an unload action will only unregister the Solr core (locally
and from zookeeper if running in cloud mode) to stop it from taking
requests. It will not delete any files.
The UNLOAD action also accepts the following parameters:
1. deleteIndex=true -- will delete the solr index after the
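A hedged sketch of how such an UNLOAD request could be assembled (host, port, and core name are placeholders; only the action, core, and deleteIndex parameters described above are used):

```python
from urllib.parse import urlencode

def unload_url(base, core, delete_index=False):
    """Build a CoreAdmin UNLOAD request URL; helper name is hypothetical."""
    params = {"action": "UNLOAD", "core": core}
    if delete_index:
        # deleteIndex=true asks Solr to also delete the index files,
        # rather than just unregistering the core.
        params["deleteIndex"] = "true"
    return f"{base}/admin/cores?{urlencode(params)}"

print(unload_url("http://localhost:8983/solr", "collection1"))
print(unload_url("http://localhost:8983/solr", "collection1", delete_index=True))
```

Without deleteIndex, the core stops taking requests but its files stay on disk, matching the default behavior described above.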
However, I am using SolrCloud with 5 shards. Every leader has a replica.
What do you mean by followers?
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
Then there is no replication, and no slaves nor masters. There's a leader
and followers. Documents themselves are sent from
I have closed my application and restarted it. If that didn't change anything,
could you tell me why the admin page says:
There are no SolrCores running.
Using the Solr Admin UI currently requires at least one SolrCore.
I think it stored something somewhere?
2013/4/30 Shalin Shekhar Mangar
The UNLOAD command removes the core name from solr.xml
On Tue, Apr 30, 2013 at 11:28 PM, Furkan KAMACI furkankam...@gmail.com wrote:
I have closed my application and restarted it. If that didn't change anything,
could you tell me why the admin page says:
There are no SolrCores running.
Using the
Oops, it changes solr.xml, OK.
2013/4/30 Furkan KAMACI furkankam...@gmail.com
I have closed my application and restarted it. If that didn't change anything,
could you tell me why the admin page says:
There are no SolrCores running.
Using the Solr Admin UI currently requires at least one SolrCore.
I'd say a follower is a participant in a shard that's not the leader.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
www.appinions.com
Where Influence Isn’t a Game
On Tue, Apr 30, 2013 at 1:27 PM, Furkan
In SolrCloud terminology, do 'follower' and 'replica' mean the same thing?
Is there any documentation on the wiki for that?
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I'd say a follower is a participant in a shard that's not the leader.
Michael Della Bitta
I could be getting this wrong, and the wiki is down at the moment, but I
think a replica can be a leader, whereas a follower is definitely not.
Michael Della Bitta
Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271
I have index and tlog folders under my data folder, and I have a snapshot
folder too after I make a backup. However, what should I do next if I want to
use the backup: remove the index and tlog folders and put just my snapshot
folder in their place? What do folks do?
2013/4/30 Michael Della Bitta
It would be nice to learn what a follower means and how to define one
(I know the replica example, but I haven't seen an example of a follower yet).
2013/4/30 Michael Della Bitta michael.della.bi...@appinions.com
I could be getting this wrong, and the wiki is down at the moment, but I
think a
Could you define your use-case in some more detail? On the
surface, this query doesn't really make a lot of sense. How
would merchant_end_of_day_in_utc_epoch be determined?
Presumably there are zillions of values across your index for
this value, depending on the document. Which one should be
Solr 4.0 was indexing data and the machine crashed.
Any suggestions on how to recover my index since I don't want to delete my
data directory?
When I try to start it again, I get this error:
ERROR 12:01:46,493 Failed to load Solr core: xyz.index1
ERROR 12:01:46,493 Cause:
ERROR 12:01:46,494
I have a question regarding boosting the exact match queries to top,
followed by partial match and if there is no exact match then give me
partial match. The following 2 solutions have yielded different results, and
I was not clear on it why
This is the schema I have
field name=f1
Erick,
I believe Indika wants to do this SQL WHERE clause in Solr:
WHERE start_time_utc_epoch >= '1970-01-01T00:00:00Z' AND start_time_utc_epoch
<= merchant_end_of_day_in_utc_epoch
On Tue, Apr 30, 2013 at 11:49 AM, Erick Erickson erickerick...@gmail.com wrote:
Could you define your use-case in
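A standard Solr range query compares a field against constants, not against another field. One common workaround for the field-to-field comparison in that WHERE clause (a sketch only, assuming both fields are indexed numeric epoch values) is a function range query, where the filter keeps documents whose field difference is non-negative:

```python
def field_le_field_fq(smaller, larger):
    """Build a {!frange} filter keeping docs where smaller <= larger.
    The helper name is hypothetical; field names come from the thread."""
    # sub(larger, smaller) >= 0  <=>  smaller <= larger
    return f"{{!frange l=0}}sub({larger},{smaller})"

fq = field_le_field_fq("start_time_utc_epoch",
                       "merchant_end_of_day_in_utc_epoch")
print(fq)  # {!frange l=0}sub(merchant_end_of_day_in_utc_epoch,start_time_utc_epoch)
```

The constant lower bound (start_time_utc_epoch >= '1970-01-01T00:00:00Z') can stay a plain range filter; only the field-to-field half needs frange.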
The stdout and stderr logs are blank. The request log had info, but nothing
that is related.
Persistent flag is set to true in all environments.
After backing up the solr.xml file, the collections were manually erased
from the file allowing me to list collections without breaking! Yay.
This
We have a master-slave Solr set up and run live queries only against the
slave. Full import (with optimize) happens on master every day at 2 a.m.
Delta imports happen every 10 min for one entity and every hour for another
entity.
The following exceptions occur a few times every day in our app
Hi,
How, practically, would a user end up with 200,000 documents excluded? Is there
some way in your application to exclude categories of documents with one
click? If so, I would index those category IDs on all docs in that category,
and then do fq=-cat:123 instead of adding all the individual
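The category-exclusion filter suggested here is easy to generate; a minimal sketch (the field name cat follows the example above, and the helper name is hypothetical):

```python
def exclusion_fq(category_ids):
    """Build one negative filter query excluding all given category IDs."""
    return "-cat:(" + " OR ".join(str(c) for c in category_ids) + ")"

print(exclusion_fq([123]))       # -cat:(123)
print(exclusion_fq([123, 456]))  # -cat:(123 OR 456)
```

One fq with a handful of category IDs is far cheaper than hundreds of per-document exclusion clauses, and it caches well as a filter.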
Hi,
The pf feature will only kick in for phrases, i.e. multiple tokens. By
definition a string is one single token, so it will never kick in for strings.
A workaround can be found here: https://github.com/cominvent/exactmatch
--
Jan Høydahl, search solution architect
Cominvent AS -
On 4/30/2013 8:33 AM, Furkan KAMACI wrote:
I think that replication occurs after commit by default. It has been long
time however there is still mismatch between leader and replica
(approximately 5 MB). I tried to pull indexes from leader but it is still
same.
My mail server has been down most
I agree with Michael that you'll only ever need your backup if you
lose all nodes hosting a shard (leader + all other replicas), so the
tlog doesn't really factor in when recovering from backup.
The snapshot created by the replication handler is the index only and
it makes most sense in my mind
Hi,
Try running the CheckIndex tool.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 30, 2013 3:10 PM, Utkarsh Sengar utkarsh2...@gmail.com wrote:
Solr 4.0 was indexing data and the machine crashed.
Any suggestions on how to recover my index since I don't want to delete my
data
An alternative would be a custom SearchComponent that post-processes hits.
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 30, 2013 10:27 AM, Sandeep Mestry sanmes...@gmail.com wrote:
Thanks Erick,
I tried grouping and it appears to work okay. However, I will need to
change the
Yes, the SQL statement is what I am trying to achieve. As for the
merchant_end_of_day_in_utc_epoch, we map the time to start of epoch and
convert that to UTC, so that all the merchants are in the same timezone
which would make it easier to query for open ones.
For the use case when we need to
FWIW, one of our current clients runs queries with 6000 facet queries...
Otis
Solr ElasticSearch Support
http://sematext.com/
On Apr 30, 2013 5:22 AM, vicky desai vicky.de...@germinait.com wrote:
Hi,
Is there any upper limit on the number of facet queries I can include in a
single query.