Ok, I found another way of doing it which preserves the QueryResponse
object. I used DefaultHttpClient, set the credentials, and finally passed it
into the CloudSolrClient constructor.
DefaultHttpClient httpclient = new DefaultHttpClient();
UsernamePasswordCredentials defaultcreds = new
Thanks for replying.
PERFORMANCE WARNING: Overlapping onDeckSearchers=2
One more warning is appearing; please advise on this as well.
On Wed, May 11, 2016 at 7:53 PM, Ahmet Arslan
wrote:
> Hi Midas,
>
> It looks like you are committing too frequently, cache warming
..=aaa:1 bbb:2&..
On Wed, May 11, 2016 at 11:34 PM, baggadonuts wrote:
> Refer to the following documentation: https://wiki.apache.org/solr/Join
>
> According to the documentation the SOLR equivalent of this SQL query:
>
> SELECT xxx, yyy
> FROM collection1
>
Hi,
I'm looking into the option of adding basic authentication using Solrj
API. Currently, I'm using the following code for querying Solr.
SolrClient client = new CloudSolrClient("127.0.0.1:9983");
((CloudSolrClient)client).setDefaultCollection("gettingstarted");
ModifiableSolrParams param =
A couple of ideas. If this is 5x consider Streaming Aggregation.
The idea here is that you stream the docs back to a SolrJ client and
slice and dice them there. SA is designed to export 400K docs/sec,
but the returned values must be DocValues (i.e. no text types, strings
are OK).
Have you seen
Personally I'd just let it do the default "hash modulo #shards".
I don't see how you could shard based on location and I don't know
why you'd want to. Let's say you have some kind of restriction like
"we'll never return a doc from any state except the one our location is in".
So you'd have your
Well, it can always be rebuilt from the backed-up index. That suggester
reads the _stored_ fields from the docs to build up the suggester
index. With a lot of documents that could take a very long time though.
If you desperately need it, AFAIK you'll have to back it up whenever
you build it I'm
Fields that don't match for a particular document just don't contribute to the
score. The boost is multiplied into the score calculated for that field and
term. So if for doc1 the calculated score is 5 and you boost by 2, the result is
10. If doc2 has a calculated score of 20 and you boost by 1,
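The arithmetic above can be sketched in a couple of lines (a toy illustration of the multiplication, not Solr's actual scoring code):

```java
public class BoostDemo {
    // A field boost is multiplied into the score computed for that field;
    // a non-matching field simply contributes nothing to the total.
    static double boosted(double fieldScore, double boost) {
        return fieldScore * boost;
    }

    public static void main(String[] args) {
        System.out.println(boosted(5, 2));   // doc1: 5 * 2 = 10.0
        System.out.println(boosted(20, 1));  // doc2: 20 * 1 = 20.0
    }
}
```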
If you're able to use Solr 6 then you can use Streaming Expressions to
solve this. The docs for Streaming Expressions in Solr 6 can be found at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61330338.
One option would be to use an intersect to find documents in both sets.
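Using the search and intersect decorators from that documentation, such an expression might look roughly like this (collection, field, and sort choices are made-up placeholders):

```
intersect(
  search(collectionA, q="*:*", fl="id", sort="id asc", qt="/export"),
  search(collectionB, q="*:*", fl="id", sort="id asc", qt="/export"),
  on="id"
)
```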
I just added you to the contributors group, you should be able to post now.
On Wed, May 11, 2016 at 4:22 PM, Chris Hostetter
wrote:
>
> If you re-load the jira you should see at the top this message...
>
> ---
> Jira is in Temporary Lockdown mode as a spam
Hi Shawn,
Thanks for the suggestion about Zookeeper. For the 'buyout', I think the
misunderstanding is my fault since my description was kind of vague. Actually, the
'buyout' is requested by special customers. Their budget can only buy some
special service that "must be" (not just be able to)
Originally, I wanted to experiment with both master-slave and SolrCloud on my PC
but also wanted to save the time of installing another Solr server. If I have to
do that, I think I have to change the default port for the 2nd Solr server, right?
However, after reading the mails from Toke and Charlie, I've decided to delve
Hi Toke and Charlie,
Thanks for sharing your cases and your helpful suggestions. After reading
through your mails, I'll delve into SolrCloud. One thing I'd like to share with
everyone on the mailing list: a Chinese corpus can create a dramatically large
index size with respect to what
Hi Nick,
Thanks for the reply. Given my requirements I can use only option
one. I thought about that solution but I was a bit lazy to implement it
since I have many modules and Solr cores. If I'm going to configure request
handlers for each drop-down value in each component it seems like a
Refer to the following documentation: https://wiki.apache.org/solr/Join
According to the documentation the SOLR equivalent of this SQL query:
SELECT xxx, yyy
FROM collection1
WHERE outer_id IN (SELECT inner_id FROM collection1 where zzz = "vvv")
is this:
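The excerpt is cut off here; going by the linked wiki page, the Solr form is presumably the join query parser, something like:

```
fl=xxx,yyy&q={!join from=inner_id to=outer_id}zzz:vvv
```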
Hi,
I am trying to configure Cross Data Center Replication using Solr 6.0.
I am having an issue configuring solrconfig.xml on both the target and source
sides. I keep getting the error
"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Solr instance is not configured with the
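For context, a minimal source-side CDCR request handler in solrconfig.xml looks roughly like this (this is a sketch; the zkHost and collection names are placeholders):

```xml
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <lst name="replica">
    <str name="zkHost">target-zk-host:2181</str>
    <str name="source">source_collection</str>
    <str name="target">target_collection</str>
  </lst>
</requestHandler>
```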
If you re-load the jira you should see at the top this message...
---
Jira is in Temporary Lockdown mode as a spam countermeasure. Only
logged-in users with active roles (committer, contributor, PMC, etc.) will
be able to create issues or comments during this time. Lockdown period
from 11 May
On 5/11/2016 1:32 PM, A Laxmi wrote:
> Is it possible to determine how complex a document is using Solr?
> Complexity in terms of whether the document is readable by a 7th grader vs. a
> PhD grad?
Out of the box? No. You can of course embed any custom component
you're willing to find or write.
In
On 5/11/2016 6:06 AM, Horváth Péter Gergely wrote:
> If there is no such research document available, I would be much obliged if
> you could give some hints on what and how to measure in Solr / Solr cloud
> world. (E.g. what the optimal resource utilization of a Solr instance is,
> how to
On 5/11/2016 3:55 AM, scott.chu wrote:
> If I use SolrCloud, I know I have to setup Zookeeper. I know there're
> something called 'quorum' or 'ensemble' in Zookeeper terminologies. I
> also know there is a need for (2n+1) Zookeeper nodes per n SolrCloud
> nodes. Is
On 5/9/2016 10:56 PM, Sandy Foley wrote:
> Question #1: Is there a SINGLE command that can be issued to each server from
> a load balancer to check the ping status of each server?
I am not aware of a single request that will test every collection.
The way I have things set up, each load balancer
On 5/10/2016 9:02 AM, Mugeesh Husain wrote:
> I am using solr 5.3 version with inbuilt jetty server.
>
> I am looking for a proxy kind of thing with which I could prevent outside
> users from accessing all of the links; I would give access only to the select
> and core select URLs, nothing beyond that
Hello.
Somehow, I am no longer able to comment on Solr Jira tickets.
When I go to https://issues.apache.org/jira/browse/SOLR-7963
I am logged in... I can edit the ticket, but there is no comment box or
comment button visible.
Any help would be very appreciated.
Thank you very much.
--
Hi Xavi.
The issue of blenderType=linear not working was introduced in
https://issues.apache.org/jira/browse/LUCENE-6939
"linear" has been refactored to "position_linear"
I would be grateful if a committer could help update the wiki with the
comments at
There are many different “readability scores”. The most common is
Flesch-Kincaid, which uses the number of words, number of sentences, and number
of syllables. Solr has the word count, but not the other two.
https://en.wikipedia.org/wiki/Readability_test
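For reference, the Flesch reading-ease formula itself is easy to compute once you have the three counts; a minimal sketch (the sentence and syllable counts would have to come from your own analysis step, since Solr doesn't provide them):

```java
public class Readability {
    // Flesch reading ease: higher scores mean easier text.
    // Solr only has word counts; sentences and syllables must be
    // counted by a custom analysis step outside Solr.
    static double fleschReadingEase(int words, int sentences, int syllables) {
        return 206.835
             - 1.015 * ((double) words / sentences)
             - 84.6  * ((double) syllables / words);
    }

    public static void main(String[] args) {
        // e.g. 100 words, 5 sentences, 130 syllables
        System.out.println(fleschReadingEase(100, 5, 130));
    }
}
```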
Excellent! That file gave me fits at first. It lives in two locations, but
the one that counts for booting SOLR is the /etc/default one.
On May 11, 2016 12:53 PM, "Tom Gullo" wrote:
That helps. I ended up updating the solr.in.sh file in /etc/default and
that was the one getting
>
> What I mean is that a technical paper will have a different type of
> complexity from, let's say, a Shakespearean play, because the former will
> have technical jargon, while the latter will have a really high-level
> vocabulary.
Good point. But I am thinking a 7th grader might find both of
them
Please correct me if I'm wrong, but I think what Joel means is the variety
of words in a document.
One more aspect that will come into play here, I think, is the different
types of complexity.
What I mean is that a technical paper will have a different type of
complexity from let's say a
Yes, length of the words would be one way but was wondering if there are
any other ways to identify the complexity.
On Wed, May 11, 2016 at 3:46 PM, A Laxmi wrote:
> Yes, length of the words would be one way but was wondering if there are
> any ways to identify the
Yes, length of the words would be one way but was wondering if there are
any ways to identify the complexity.
On Wed, May 11, 2016 at 3:36 PM, Joel Bernstein wrote:
> I'm wondering if the size of the vocabulary used would be enough for this?
>
> Joel Bernstein
>
Hi there,
I am trying to configure Cross Data Center Replication using Solr 6.0.
I am having an issue creating collections or reloading old collections with
the new solrconfig.xml on both the target and source sides. I keep getting
the error
I'm wondering if the size of the vocabulary used would be enough for this?
Joel Bernstein
http://joelsolr.blogspot.com/
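Vocabulary size in this sense is just the distinct-token count; a crude standalone sketch (the tokenization here is a naive regex split, not a Solr analyzer):

```java
import java.util.Arrays;

public class VocabSize {
    // Distinct-token count as a rough proxy for vocabulary size.
    static long vocabularySize(String text) {
        return Arrays.stream(text.toLowerCase().split("\\W+"))
                     .filter(t -> !t.isEmpty())
                     .distinct()
                     .count();
    }

    public static void main(String[] args) {
        System.out.println(vocabularySize("The cat saw the cat"));  // 3
    }
}
```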
On Wed, May 11, 2016 at 3:32 PM, A Laxmi wrote:
> Hi,
>
> Is it possible to determine how complex a document is using Solr?
> Complexity in terms of
Hi,
Is it possible to determine how complex a document is using Solr?
Complexity in terms of whether the document is readable by a 7th grader vs. a
PhD grad?
Thanks!
AL
Aliasing works great, I implemented it after upgrading to Solr 5 and it
allows us to do this exact thing. The only thing you have to watch out for
is indexing new items (if they overwrite old ones) while you are
re-indexing.
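For context, the alias switch itself is a single Collections API call, roughly like this (the alias and collection names are made-up placeholders):

```
http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=products&collections=products_v2
```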
I took it a step further for another collection that stores a lot of
That helps. I ended up updating the solr.in.sh file in /etc/default and that
was the one getting picked up. Thanks
> On May 11, 2016, at 2:05 PM, Tom Gullo wrote:
>
> My Solr installation is running on Tomcat on port 8080 with a web context
> name that is different than
Brian,
Thanks for your reply. My first post was a bit convoluted; I tried to explain
the issue in the subsequent post. Here's the security JSON. I have solr and
beehive assigned the admin role, which allows them to have access to "update"
and "read". This works as expected. Then I add a new role "browseRole"
Oh, I see -
Hmmm... I just did a disaster recovery work up for my IT guys and basically
I recommended they build SOLR from scratch and reindex rather than try to
recover (same for changing versions)
However, we've got a small-ish data set and that may not work for everyone.
Any chance you can
Yup - bottom of solr.in.sh - if you used the "install for production"
script.
/etc/default/solr.in.sh (on linux which is all I do these days)
Hope that helps... Ping back if not.
SOLR_PID_DIR="/var/solr"
SOLR_HOME="/var/solr/data"
LOG4J_PROPS="/var/solr/log4j.properties"
My Solr installation is running on Tomcat on port 8080 with a web context name
that is different than /solr. We want to move to a basic jetty setup with all
the defaults. I haven’t found a clean way to do this. A lot of the values
like baseurl and /leader/elect/shard1 have values that need
I may be answering the wrong question - but SolrCloud goes in by default on
8983, yes? Is yours currently on 8080?
I don't recall where, but I think I saw a config file setting for the port
number (In Solr I mean)
Am I on the right track or are you asking something other than how to get
Solr on
I need to change the web context and the port for a SolrCloud installation.
Example, change:
host:8080/some-api-here/
to this:
host:8983/solr/
Does anyone know how to do this with SolrCloud? There are values stored in
clusterstate.json and /leader/elect and I could change them but
that
Correcting typo in original post and making it a little clearer
Hi
Can someone help us understand how null values affect boosting?
Say we have field_1 (with boost ^10.1) and field_2 (with boost ^9.1).
We search for foo.
Document A : field_1 : does not exist
Field_2
Hi
Can someone help us understand how null values affect boosting?
Say we have field_1 (with boost ^10.1) and field_2 (with boost ^9.1).
We search for foo. Document A has field_1(foo match) and field_2(empty) and
Document B has field_2(foo match) but no field_1.
As per our understanding the
I can't say I followed your entire example, but I think you're running
into a couple of issues:
1) Users don't get any roles by default. So, when your initial setup
includes this:
{
"name": "all",
"role": "all"
}
but nobody has the "all" role, it doesn't surprise
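With the RuleBasedAuthorizationPlugin, a role only has effect once some user is mapped to it; a sketch of the shape the authorization section would need (the user name here is a placeholder):

```json
"authorization": {
  "class": "solr.RuleBasedAuthorizationPlugin",
  "permissions": [
    { "name": "all", "role": "all" }
  ],
  "user-role": {
    "solr": ["all"]
  }
}
```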
CREATE OR REPLACE FUNCTION page(IN i_app_name character varying, IN
i_photo_id bigint, IN i_page integer, IN i_member_id bigint, OUT
o_similar_page_name character varying, OUT o_similar_page_id bigint, OUT
o_similar_photo_id bigint[])
DECLARE
v_limit INTEGER := 4;
v_offset INTEGER;
ERROR 0-thread-7 o.a.s.c.SolrCore <> Too many close [count:-1] on
org.apache.solr.core.SolrCore@3d6f8ad3. Please report this exception to
solr-user@lucene.apache.org
There are only two ways I can think of to accomplish this, and neither of
them dynamically sets the suggester field. As it looks according to the
docs (which do sometimes lack info, so I might be wrong), you cannot set
something like suggest.fl=combo_box_field at query time. But
Hi Shawn,
Thanks for your input and help.
What you just guessed is right: we run Solr in Jetty using start.jar, and the
params are what I sent you in my last mail.
About GC, I will check it carefully, thanks.
--
Sent from my NetEase Mail mobile client
On 2016-05-11 21:32:33, "Shawn Heisey" wrote:
Anyone ?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Issues-with-Authentication-Role-based-authorization-tp4276024p4276153.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have a client whose Solr installation creates an
analyzingInfixSuggesterIndexDir directory besides index and tlog. I notice that
this analyzingInfixSuggesterIndexDir is not included in backups (created by
replication?command=backup). Is there a way to include this? Or does it not
need to be
On 5/10/2016 10:34 PM, scott.chu wrote:
> A further question: Can master-slave and SolrCloud exist simultaneously in
> one Solr server? If yes, how can I do it?
No. SolrCloud uses replication internally for automated recovery on an
as-needed basis. SolrCloud completely manages multiple
Hi Midas,
It looks like you are committing too frequently; cache warming cannot catch up.
Either lower your commit rate, or disable cache auto warm (autowarmCount=0).
You can also remove queries registered at newSearcher event if you have defined
some.
Ahmet
On Wednesday, May 11, 2016 2:51
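Disabling cache auto-warming as suggested is a solrconfig.xml change; for example, on the filter cache (the sizes here are illustrative):

```xml
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="512"
             autowarmCount="0"/>
```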
Hi Kishor,
You can try escaping the search phrase: "Garmin Class A" -> Garmin\ Class\ A
Lasitha Wattaladeniya
Software Engineer
Mobile : +6593896893
Blog : techreadme.blogspot.com
On Wed, May 11, 2016 at 6:12 PM, Ahmet Arslan
wrote:
> Hi,
>
> You can be explicit
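The escaping suggested above can be sketched as a small helper. SolrJ ships ClientUtils.escapeQueryChars for this; the standalone version below only approximates it (the exact character set is an assumption):

```java
public class QueryEscape {
    // Backslash-escape Lucene query syntax characters, including spaces,
    // similarly to SolrJ's ClientUtils.escapeQueryChars (approximation).
    static String escape(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if ("\\+-!():^[]\"{}~*?|&;/ ".indexOf(c) >= 0) {
                sb.append('\\');
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(escape("Garmin Class A"));  // Garmin\ Class\ A
    }
}
```

In real SolrJ code, prefer the library's own ClientUtils.escapeQueryChars over a hand-rolled helper.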
Hi Bastien,
Please use magic _query_ field, q=hospital AND _query_:"{!q.op=AND v=$a}"
ahmet
On Wednesday, May 11, 2016 2:35 PM, Latard - MDPI AG
wrote:
Hi Everybody,
Is there a way to pass only some of the data by reference and some
others in the q param?
e.g.:
Hello devs,
I'm trying to implement auto complete text suggestions using solr. I have a
text box and next to that there's a combo box. So the auto complete should
suggest based on the value selected in the combo box.
Basically I should be able to change the suggest field based on the value
On 5/10/2016 7:46 PM, lltvw wrote:
> the args used to start solr are as following, and upload my screen shot to
> http://www.yupoo.com/photos/qzone3927066199/96064170/, please help to take a
> look, thanks.
>
> -DSTOP.PORT=7989
> -DSTOP.KEY=
> -DzkHost=node1:2181,node2:2181,node3:2181/solr
>
On 5/11/2016 3:08 AM, scott.chu wrote:
> I see there's a -h option for bin\solr start command. What's that for?
When we create a core, say 'abc', the request URL is something like
http://<host>/solr/abc. I'd like to change 'solr' to another name; how can I
do it?
The "host" is what SolrCloud will
Hi All,
I am wondering if there is any recommendation or convention regarding
planning and benchmarking a Solr node / Solr Cloud cluster infrastructure.
I am looking for a somewhat more structured approach than trying with our
forecast data volumes and keep adding more resources (CPU, RAM, disk
On 11/05/2016 10:55, scott.chu wrote:
I just found that the mailing list doesn't seem to accept colored fonts (because
I received my own message back from the list and saw the blue color was gone!).
I'm using rows of asterisks to highlight my questions and am sending this again.
Answers inline below.
C
- Original Message -
Hi, I am getting the following error:
org.apache.solr.common.SolrException: Error opening new searcher.
exceeded limit of maxWarmingSearchers=2, try again later.
What should I do to resolve it?
Hi Everybody,
Is there a way to pass only some of the data by reference and some
others in the q param?
e.g.:
q1. http://localhost:8983/solr/my_core/select?{!q.op=OR
v=$a}=abstract,title=hospital Leapfrog=true
q1a. http://localhost:8983/solr/my_core/select?q=hospital AND
On Wed, 2016-05-11 at 11:27 +0800, scott.chu wrote:
> I want to build a Solr engine for over 60-year news articles. My
> requests are (I use Solr 5.4.1):
Charlie Hull has given you a fine answer, which I agree with fully, so
I'll just add a bit from our experience.
We are running a similar
Hi,
You can be explicit about the field that you want to search on. e.g.
q=product_name:(Garmin Class A)
Or you can use lucene query parser with default field (df) parameter. e.g.
q={!lucene df=product_name}Garmin Class A
Its all about query parsers.
Ahmet
On Wednesday, May 11, 2016 9:12
Hi Thrinadh,
Why don't you use a plain wildcard search? There are two operators, star and
question mark, for this purpose.
Ahmet
On Wednesday, May 11, 2016 4:31 AM, Thrinadh Kuppili
wrote:
Thank you. Yes, I am aware that surrounding it with quotes will result in a
match for the space
I just tried the method. It is throwing an exception after simply
passing a Solr document from a Solr response to the method.
My source code:
SolrDocument currentDoc = DocumentList.get(f);
DocumentObjectBinder binder = new DocumentObjectBinder();
SolrInputDocument inputDoc =
I just found that the mailing list doesn't seem to accept colored fonts (because
I received my own message back from the list and saw the blue color was gone!).
I'm using rows of asterisks to highlight my questions and am sending this again.
- Original Message -
From: scott(自己)
To: solr-user
To:
Date: 2016/5/11 (週三) 17:34
Hi, Charlie,
Thanks first for your concrete answer. I have further questions as written
in blue color below.
scott.chu,scott@udngroup.com
2016/5/11 (週三)
- Original Message -
From: Charlie Hull
To: solr-user@lucene.apache.org
CC:
Date: 2016/5/11 (週三) 16:21
Subject: Re:
I see there's a -h option for the bin\solr start command. What's that for? When
we create a core, say 'abc', the request URL is something like
http://<host>/solr/abc. I'd like to change 'solr' to another name; how can I do it?
We have a horrible Solr query that groups by a field and then sorts by
another. My understanding is that for this to happen it has to sort by the
grouping field, group it and then sort the resulting result set. It's not a
fast query.
Unfortunately our documents now need to be grouped as well
On 11/05/2016 04:27, scott.chu wrote:
Fix some typos, add some words and resend same question =>
I want to build a Solr engine for over 60-year news articles. My
requests are (I use Solr 5.4.1):
Hi Scott,
We've actually done something very similar for our client NLA Media
Access in the
On Wed, May 11, 2016 at 10:16 AM, Derek Poh wrote:
> Hi Erick
>
> Yes we have identified and fixed the page slow loading.
>
Derek,
Can you elaborate more? What did you fix?
>
> I was wondering if there are any best practices when it comes to deciding
> to create a
Hi All,
I've an application that has location based data. The data is expected to
grow rapidly, and the search is also based on location, i.e. the search is
done using a geospatial distance range.
I am wondering what is the best possible way to shard the index. Any
pointer/input is highly
Hi Erick
Yes we have identified and fixed the page slow loading.
I was wondering if there are any best practices when it comes to
deciding whether to create a single collection that stores all information
in it or to create multiple sub-collections. I understand now it depends on
the use case.
My
Ok, I'm really struggling to figure out the right approach here. I wanted to
make it simple and started fresh. I removed the existing nodes (node1 and
node2), started the server in Cloud mode and uploaded the following
security.json.
{
"authentication": {
"blockUnknown": true,
"class":
I want to search for a product whose name is "Garmin Class A", so I expect the
results to be products matching the string "Garmin Class A", but it searches
the terms separately and I don't know why or how that happens. Please guide me
on how to search for a string in only one field, not in other fields. "debug": {