The restart issues aside, I'm trying to lock down usage of the Collections API,
but that does not seem to be working either.
Here is my security.json. I’m using the “collection-admin-edit” permission and
assigning it to the “adminRole”. However, after uploading the new
security.json and
My initial thought was to use scheduling built with DIH:
http://wiki.apache.org/solr/DataImportHandler#Scheduling
But I think just a cron job should do the same for me.
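If it helps, a minimal sketch of what such a cron entry might look like; the host, port, core name, and handler path are assumptions for illustration, not something from this thread:

```
# Hypothetical crontab entry: kick off a DIH full-import at 02:00, Monday through Friday.
# Assumes Solr at localhost:8983 with a /dataimport handler on a core named "collection1".
0 2 * * 1-5 curl -s "http://localhost:8983/solr/collection1/dataimport?command=full-import&clean=true" > /dev/null
```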
Thanks
On Tue, Sep 1, 2015 at 8:51 AM, Davis, Daniel (NIH/NLM) [C] <
daniel.da...@nih.gov> wrote:
> On 8/31/2015 11:26 AM,
On 9/1/2015 11:45 AM, Troy Edwards wrote:
> My initial thought was to use scheduling built with DIH:
> http://wiki.apache.org/solr/DataImportHandler#Scheduling
>
> But I think just a cron job should do the same for me.
The dataimport scheduler does not exist in any Solr version. This is a
We have a Solr cloud (4.7) consisting of 5 servers.
At some point we noticed that one of the servers had a very high CPU and
was not responding. A few minutes later, the other 4 servers were
responding very slowly. A restart was required.
Looking at the Solr logs, we mainly saw symptoms, i.e.
Thanks for the clarification!
So is the wiki page incorrect at
https://cwiki.apache.org/confluence/display/solr/Basic+Authentication+Plugin
which says that the admin ui will require authentication once the authorization
plugin is activated?
"An authorization plugin is also available to
you are attempting to write your signature to your ID field. That's not
a good idea. You are generating your signature from the content field,
which seems okay. Change your signatureField to be
your 'signature' field instead of id, and something different will
happen :-)
Upayavira
On Tue, Sep 1, 2015, at
The Admin UI is not protected by any of these permissions. Only if you try
to perform a protected operation does it ask for a password.
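For anyone trying to reproduce this, a hedged sketch of a security.json along the lines described in this thread; the user name and password hash are the placeholder ones from the Basic Authentication Plugin documentation (password "SolrRocks"), not anything from this thread:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "collection-admin-edit", "role": "adminRole" }
    ],
    "user-role": { "solr": "adminRole" }
  }
}
```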
I'll investigate the restart problem and report my findings
On Tue, Sep 1, 2015 at 3:10 AM, Kevin Lee wrote:
> Anyone else running into
Take a step back. *why* do you need a blend? Can you adjust the scores
on your shards to make the normal algorithm work better for you?
Upayavira
On Mon, Aug 31, 2015, at 08:47 PM, Mohan gupta wrote:
> Hi Folks,
>
> I need to merge docs received from multiple shards via a custom logic, a
>
I removed that statement
"If activating the authorization plugin doesn't protect the admin ui,
how does one protect access to it?"
One does not need to protect the admin UI. You only need to protect
the relevant API calls. I mean it's OK to not protect the CSS and
HTML stuff. But if you
I'm investigating why a restart or first-time start does not read the
security.json
On Tue, Sep 1, 2015 at 1:00 PM, Noble Paul wrote:
> I removed that statement
>
> "If activating the authorization plugin doesn't protect the admin ui,
> how does one protect access to it?"
>
I wonder if this resolves it [1]. It has been applied to trunk, but not
to the 5.x release branch.
If you needed it in 5.x, I wonder if there's a way that particular
choice could be made configurable.
Upayavira
[1] https://issues.apache.org/jira/browse/LUCENE-6711
On Tue, Sep 1, 2015, at 02:43
Erick
Yes, we see documents changing their position in the list due to having
deleted docs.
In our search result, we apply a higher boost (bq) to a group of matched
documents to have them display in the top tier of the result.
At times 1 or 2 of these documents are not returned in the top tier; they
Hi Yonik,
Thanks for pointing out the difference.
I've made the modification and tried with the command below for the JSON
Facet, but it still has a QTime of 410, compared to the Legacy Facet
QTime of 22:
http://localhost:8983/solr/collection1/select?q=paint&json.facet={f:{field:content}}&rows=0
Is this the
Hi Kevin,
Were you able to get a workaround / fix for your problem? I'm also
looking to secure Collection and Update APIs by upgrading to 5.3. Just
wondering if it's worth the upgrade or should I wait for the next version,
which will probably address this.
Regards,
Shamik
We should add a simple scheduler in the UI. It is very useful. To schedule
various actions:
- Full index
- Delta Index
- Replicate
On Tue, Sep 1, 2015 at 12:41 PM, Shawn Heisey wrote:
> On 9/1/2015 11:45 AM, Troy Edwards wrote:
> > My initial thought was to use
While it may be useful to have a scheduler for simple cases, I think there are
too many variables for it to work for everyone's case. For example, I
recently wrote a script that uses the data import handler api to get the
status, kick off the import, etc. However, before allowing it to
I'm really confused :|
I'm really anxious about the cost of updating the like count.
And as you said, you indexed the like_count field, and I think it costs a
lot to update and index the docs again, because the like count changes
more and more.
So isn't it better to set indexed="false" on that field?
On Tue, Sep 1, 2015
No worries, thanks again I'll begin teaching this
On Mon, Aug 31, 2015, 5:16 PM Tomás Fernández Löbbe
wrote:
> Sorry Jamie, I totally missed this email. There was no Jira that I could
> find. I created SOLR-7996
>
> On Sat, Aug 29, 2015 at 5:26 AM, Jamie Johnson
Hi,
In the link which you have sent, I cannot see how to connect to SolrCloud
for synchronization of indexes.
From the description, this is straightforward SolrCloud, where you
have replicas on separate machines; see:
Hi Upayavira,
I've tried to change id to be signature, but nothing is indexed into Solr
either. Is that what you mean?
Besides that, I've also included a copyField to copy the content field into
the signature field. Both versions (with and without copyField) have
nothing indexed into Solr.
So you want to be able to sort by the "number of likes" value for a
post?
What version of Solr are you using? How many posts do you have?
There's a neat feature in Solr 5.2.1 (I'm pretty sure it is there, not
5.3) called score joins. Using that you can have two cores, one
containing your posts,
you don't need to use a dynamic field, just a normal field will work for
you. But, you *will* want to index it, and you may benefit from
docValues, so:
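The message appears cut off here, but a sketch of the kind of field definition being suggested might be (the field name like_count is taken from the question; the rest is an assumption):

```
<!-- Hypothetical schema.xml entry: a plain indexed int with docValues for sorting -->
<field name="like_count" type="int" indexed="true" stored="true" docValues="true"/>
```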
Upayavira
On Tue, Sep 1, 2015, at 10:59 AM, sara hajili wrote:
> my solr version is 5.2.1
> i have a question.
> if i create 2 core .one for
Hi,
I have tried the solution from your blog with my schema and with the
example from the blog post, with solr-5.3.0 and with
solr-5.4.0-2015-08-12, and I get this error:
"responseHeader":{
"status":500,
"QTime":32},
"error":{
"msg":"child query must only match non-parent docs,
Mikhail,
The purpose of using EmbeddedSolrServer is for testing, not for running as
main().
Is there a best practice for doing integration testing of Solr? Or for
validating that queries to Solr return the expected result?
E.g. I have this bit of production code:
private String
Hi.
First of all, I'm sorry for my bad English!
I have a social app, and I want to use Solr for searching in this app.
I have many documents (in my case, text that people posted on my social app),
and I have indexed them.
But I have one issue, and it is this:
I have many docs (posts) and they have a property "like". Is it
Tracking not teaching... Auto complete is fun...
On Tue, Sep 1, 2015, 6:34 AM Jamie Johnson wrote:
> No worries, thanks again I'll begin teaching this
>
> On Mon, Aug 31, 2015, 5:16 PM Tomás Fernández Löbbe
> wrote:
>
>> Sorry Jamie, I totally missed
Hi,
I'm using Solr 5.2.1, and I would like to find out: what is the difference
between Legacy Facets and JSON Facets in Solr? I was told that JSON Facets
have a much lower request latency, but I couldn't find any major difference
in speed. Or must we have a larger index in order to have any
My Solr version is 5.2.1, and I have a question.
If I create 2 cores, one for posts and one for likes, must I index the like
count? I mean, in the schema for the like core, must I write:
Am I right?
On Tue, Sep 1, 2015 at 2:42 AM, Upayavira wrote:
> So you want to be able to sort by the "number
http://blog.griddynamics.com/2015/08/scoring-join-party-in-solr-53.html
shows how to keep updates in a separate core. Notice that it is an
intermediate-level article for query syntax.
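For readers who haven't seen it, a hedged sketch of the score-join syntax that article covers; the core and field names here are made up for illustration:

```
# Hypothetical query: join from a "likes" core into the main posts core,
# using the joined documents' scores to rank the posts.
q={!join fromIndex=likes from=post_id to=id score=max}like_count:[1 TO *]
```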
For Persian text analysis, there is a pre-built analyzer definition in
the techproducts example; start from that.
I suspect URL encoding might mess with the MUST (+) clause. Can you post
debugQuery=true output to make sure that the query parsed right?
Then make sure that the child query and parent filter are fully orthogonal, e.g.
+type_s:product +color_s:Red returns no result.
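To make the orthogonality point concrete, a hedged sketch reusing the example field names above (everything else is assumed):

```
# Parent filter: must match all parent docs and only parent docs.
# Child query: must match only child docs.
q={!parent which="type_s:product"}color_s:Red

# If any doc matching color_s:Red is itself a parent (type_s:product),
# Solr fails with "child query must only match non-parent docs".
```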
Last check to check, make sure that you
Endre,
Here is the problem. SolrTestCaseJ4 already brings up a Solr core/container and
a sort of server, already orchestrated by a complex harness. Thus, adding
EmbeddedSolrServer makes everything quite complicated; it's challenging to
understand which one misbehaves. Given that you need to debug a DIH config,
I
On 1 September 2015 at 08:29, Mikhail Khludnev
wrote:
> Last check to check, make sure that you don't have deleted document in the
> index for a while. You can check in at SolrAdmin.
What's the significance of that particular advice? Is something in the
join including
I used the pre-built Persian analyzer, but it isn't enough for me.
It only works for searching exactly the indexed word!
Like: I search 'go' and the search result is 'go'.
As I said, in English we have stemming, synonym, etc. filters that help us
to have a flexible search, and I want some filters like that for Persian.
That pre-built
Do you mean that normally you do get stuff indexed but when you make
any of these changes the indexing stops working and you get empty
index? If so, you probably misconfigured something and should be
getting error messages.
If, on the other hand, you see no changes, check that you are actually
And another question is:
My docs are Persian, and I use text_fa for the fieldType, but I want to have
a Persian text field that handles search problems such as stemming,
word distance, synonyms, etc., like the English types.
As I said, I want to handle "میخواهم" and "خواستن" and so on.
Can you suggest a fieldType for
If Solr's Persian configuration is not sufficient, you could look into a
commercial implementation from Basis Tech (I haven't tested it):
http://www.basistech.com/text-analytics/rosette/base-linguistics/for-arabic/
(it says it supports Persian at the bottom of the page).
I would also open a JIRA with
No, I've tested it several times after committing it.
I've tried executing it several times. The QTime for the JSON Facet is
always between the range of 400 to 600, while the QTime for the Legacy
Facet is usually between 15 to 30.
I have indexed about 1GB of rich-text data into the collection
How many document total in your corpus? And how many do you
intend to have?
My point is that if you are testing this with a small corpus, the results
are very likely different than when you test on a reasonable corpus.
So if you expect your "real" index will contain many more docs than
what
That's pretty strange...
There can be caching differences. Is this the first time the request
is executed after a commit?
What does executing it again show?
-Yonik
On Tue, Sep 1, 2015 at 9:47 PM, Zheng Lin Edwin Yeo
wrote:
> Hi Yonik,
>
> Thanks for pointing out the
The type of the field is text_general.
I found that the problem mainly happens in the content field of the
collections with rich-text documents.
It works fine for other files, and also for collections indexed with CSV
documents, even if the fieldType is text_general.
Regards,
Edwin
On 2 September 2015
I posted this question on Stack Overflow and would like some suggestions:
solr - Custom sharding or auto Sharding on SolrCloud? - Stack Overflow
http://stackoverflow.com/questions/32343813/custom-sharding-or-auto-sharding-on-solrcloud
Scott Chu, scott@udngroup.com
2015/9/2
On Tue, Sep 1, 2015 at 11:51 PM, Zheng Lin Edwin Yeo
wrote:
> No, I've tested it several times after committing it.
Hmmm, well something is really wrong for an orders-of-magnitude
difference. I've never seen anything like that, and we should
definitely try to get to the
On 9/1/2015 12:53 AM, Arnon Yogev wrote:
> We have a Solr cloud (4.7) consisting of 5 servers.
> At some point we noticed that one of the servers had a very high CPU and
> was not responding. A few minutes later, the other 4 servers were
> responding very slowly. A restart was required.
> Looking
On 1 September 2015 at 09:10, Mikhail Khludnev
wrote:
>> Not many
>> people know about it, may help to disambiguate the syntax.
>>
> Oh, c'mon! It's been announced for ages: http://yonik.com/solr/query-syntax/
Not everybody reads and keeps track of every feature of Solr.
I've tried the following commands, and I found that Legacy Faceting is
actually much faster than JSON Faceting. Not sure why this is so, when the
document at this link http://yonik.com/solr-count-distinct/ states that
JSON Facets have a much lower request latency.
(For Legacy Facet) - QTime:
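For anyone comparing the two APIs, hedged sketches of the two request shapes; the host, collection, and field names are assumptions, not the actual commands from this thread:

```
# Legacy field facet:
http://localhost:8983/solr/collection1/select?q=*:*&rows=0&facet=true&facet.field=content

# JSON Facet API equivalent (a terms facet on the same field):
http://localhost:8983/solr/collection1/select?q=*:*&rows=0&json.facet={content_facet:{type:terms,field:content}}
```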
Hello, I have had this issue as well. I patched QueryComponent and some other
files that are used by QueryComponent, so that it is finally possible to
extend QueryComponent.
https://issues.apache.org/jira/browse/SOLR-7968
-Original message-
> From:Mohan gupta
Yes, here it is. These are in solrconfig.xml:

<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">content</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
</updateRequestProcessorChain>
Regards,
Edwin
On 1 September 2015 at 22:26, Upayavira wrote:
> Can you repeat the config you have for the dedup update chain?
>
> Thx
>
*Bump*
On Tue, Sep 1, 2015 at 1:17 AM, Mohan gupta wrote:
> Hi Folks,
>
> I need to merge docs received from multiple shards via a custom logic, a
> straightforward score based priority queue doesn't work for my scenario (I
> need to maintain a blend/distribution of
On 8/31/2015 11:26 AM, Troy Edwards wrote:
> I am having a hard time finding documentation on DataImportHandler
> scheduling in SolrCloud. Can someone please post a link to that? I
> have a requirement that the DIH should be initiated at a specific time
> Monday through Friday.
Troy, is your
Hi Alexandre,
Yes, the indexing worked fine previously, until the following line was added
to my /update handler in solrconfig.xml:
<str name="update.chain">dedupe</str>
Regards,
Edwin
On 1 September 2015 at 20:25, Alexandre Rafalovitch
wrote:
> Do you mean that normally you do get stuff
Hi Upayavira,
Yes, I tried with a completely new index. I found that once I added the
line below to my /update handler in solrconfig.xml, the indexing doesn't
work anymore:
<str name="update.chain">dedupe</str>
Besides that, it is also not able to do any deletion on the index when this
line is added.
Regards,
Edwin
On 1
Can you repeat the config you have for the dedup update chain?
Thx
On Tue, Sep 1, 2015, at 02:57 PM, Zheng Lin Edwin Yeo wrote:
> Hi Upayavira,
>
> Yes, I tried with a completely new index. I found that once I added the
> line below to my /update handler in solrconfig.xml, the indexing doesn't
On Tue, Sep 1, 2015 at 3:44 PM, Alexandre Rafalovitch
wrote:
> On 1 September 2015 at 08:29, Mikhail Khludnev
> wrote:
> > Last check to check, make sure that you don't have deleted document in
> the
> > index for a while. You can check in at
Have you tried with a completely clean index? Are you deduping, or just
calculating the signature? Is it possible dedup is preventing your
documents from indexing (because it thinks they are dups)?
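As an illustration of why dedup can make documents "disappear", here is a tiny sketch of signature-based deduplication in plain Python; MD5 stands in for Solr's Lookup3Signature, and the whole thing is only a model of the behaviour, not Solr code:

```python
import hashlib

def signature(content: str) -> str:
    # Stand-in for Solr's Lookup3Signature: derive a signature from the content field.
    return hashlib.md5(content.encode("utf-8")).hexdigest()

# Model an index keyed by signature, mimicking overwriteDupes=true semantics.
index = {}

def add_doc(doc: dict) -> None:
    sig = signature(doc["content"])
    doc["signature"] = sig
    index[sig] = doc  # a later doc with the same content replaces the earlier one

add_doc({"id": "1", "content": "hello world"})
add_doc({"id": "2", "content": "hello world"})  # same content -> same signature

print(len(index))                              # 1: the second doc overwrote the first
print(index[signature("hello world")]["id"])   # 2
```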
On Tue, Sep 1, 2015, at 09:46 AM, Zheng Lin Edwin Yeo wrote:
> Hi Upayavira,
>
> I've tried to
They aren't doing the same thing...
The first URL is doing a straight facet on the content field.
The second URL is doing a facet on the content field and asking for an
additional statistic for each bucket.
-Yonik
On Tue, Sep 1, 2015 at 11:08 AM, Zheng Lin Edwin Yeo
Derek:
Why do you care? What evidence do you have that this matters _practically_?
If you've looked at scoring with a small number of documents, you'll see
significant differences due to deleted documents. In most cases, as you get
a larger number of documents, the ranking of documents in an index
Is this for multi-datacenter? If so, you may want to review Apple's
presentation at the last Solr Revolution:
https://www.youtube.com/watch?v=_Erkln5WWLw&index=2&list=PLU6n9Voqu_1FM8nmVwiWWDRtsEjlPqhgP
Regards,
Alex.
Solr Analyzers, Tokenizers, Filters, URPs and even a newsletter:
Looks like there is a bug in that. On start/restart the security.json
is not loaded.
I shall open a ticket:
https://issues.apache.org/jira/browse/SOLR-8000
On Tue, Sep 1, 2015 at 1:01 PM, Noble Paul wrote:
> I'm investigating why restarts or first time start does not read
I don't really understand your problem here. The whole point
behind SolrCloud is that you simply create a collection where
each shard has replicas and the rest is automatic.
So, assuming you have one collection already running under
SolrCloud (i.e. it's all being coordinated through Zookeeper),