PS: I am using Solr 1.4
Regards,
Raakhi
On Wed, Jun 30, 2010 at 12:05 PM, Rakhi Khatwani wrote:
> Hi,
>I am trying out solr security on my setup from the following links:
> http://wiki.apache.org/solr/SolrSecurity
>
> http://www.lucidimagination.com/search/document/d1e338dc452db2e4/how_c
Hi,
I am trying out solr security on my setup from the following links:
http://wiki.apache.org/solr/SolrSecurity
http://www.lucidimagination.com/search/document/d1e338dc452db2e4/how_can_i_protect_the_solr_cores
Following is my configuration:
realms.properties:
admin: admin,server-administr
Is there any way to override/change the default PhraseQuery class that is
used... similar to how you can change out the Similarity class?
Let me explain what I am trying to do. I would like to override how the TF is
calculated... always returning a max of 1 for phraseFreq.
For example:
Query: "fo
2010/6/27 Jason Chaffee
> The solr docs say it is RESTful, yet it seems that it doesn't use http
> headers in a RESTful way. For example, it doesn't seem to use the Accept:
> request header to determine the media-type to be returned. Instead, it
> requires a query parameter to be used in the UR
Yes, the StatsComponent returns the values in XML.
http://wiki.apache.org/solr/StatsComponent
On Tue, Jun 29, 2010 at 7:23 AM, Na_D wrote:
>
> I knew that the jsp page= http://localhost:8983/solr/admin/stats.jsp
> shows the different statistics but actually I am trying to read the
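Reading those hit rates programmatically mostly comes down to parsing the XML the stats page returns. A minimal sketch, assuming a response shaped like the wiki example (the field name and values below are made up):

```python
import xml.etree.ElementTree as ET

# Trimmed sample in the shape of a StatsComponent response;
# in practice this would come from an HTTP request to Solr.
SAMPLE = """\
<response>
  <lst name="stats">
    <lst name="stats_fields">
      <lst name="price">
        <double name="min">0.0</double>
        <double name="max">2199.0</double>
        <double name="sum">5251.27</double>
        <double name="count">15</double>
      </lst>
    </lst>
  </lst>
</response>"""

def read_stats(xml_text):
    """Return {field: {stat: value}} from a StatsComponent-style response."""
    root = ET.fromstring(xml_text)
    stats = {}
    for field in root.findall(".//lst[@name='stats_fields']/lst"):
        stats[field.get("name")] = {
            child.get("name"): float(child.text) for child in field
        }
    return stats

print(read_stats(SAMPLE)["price"]["max"])  # 2199.0
```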
To highlight a field, Solr needs some extra Lucene values. If these
are not configured for the field in the schema, Solr has to re-analyze
the field to highlight it. If you want faster highlighting, you have
to add term vectors to the schema. Here is the grand map of such
things:
http://wiki.apach
There is memory used for each facet. All of the facets are loaded for
any facet query. Your best shot is to limit the number of facets.
On Tue, Jun 29, 2010 at 11:42 AM, olivier sallou
wrote:
> I have given 6G to Tomcat. Using facet.method=enum and facet.limit seems to
> fix the issue with a few
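As a rough sanity check of why faceting eats memory, here is a back-of-the-envelope sketch. The formula is my own assumption, not an official Solr rule: for facet.method=fc on a single-valued field, budget roughly one ordinal entry per document plus the distinct term strings themselves.

```python
# Crude estimate (an assumption, not an official Solr formula) of the
# memory one faceted field needs with facet.method=fc:
# a doc -> term-ordinal array plus the distinct term values.
def facet_field_mb(max_doc, unique_terms, avg_term_bytes=16, ord_bytes=4):
    ords = max_doc * ord_bytes             # one ordinal per document
    terms = unique_terms * avg_term_bytes  # the distinct facet values
    return (ords + terms) / (1024 * 1024)

# 200M docs and 1M distinct facet values -> roughly 778 MB for one field
print(round(facet_field_mb(200_000_000, 1_000_000)))
```

At those sizes a single faceted field already dwarfs a default servlet-container heap, which matches the advice to limit the number of facets.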
The 'bind error' means that you already had another Solr running. Use
'jps' to find all of the processes called 'start.jar' and kill them.
Lance
On Mon, Jun 28, 2010 at 2:36 PM, Lance Hill wrote:
> Hi,
>
>
>
> I am trying to get db indexing up and running, but I am having trouble
> getting it wo
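The jps step above is just output filtering; a sketch of it, run here against canned `jps -l`-style output (the PIDs are made up, and in practice you would feed in the real command output):

```python
# Sample output in the shape `jps -l` prints; PIDs are illustrative.
JPS_OUTPUT = """\
12345 start.jar
23456 org.apache.catalina.startup.Bootstrap
34567 start.jar
45678 sun.tools.jps.Jps"""

def solr_pids(jps_text):
    """Return PIDs of every process whose main class/jar is start.jar."""
    pids = []
    for line in jps_text.splitlines():
        pid, _, name = line.partition(" ")
        if name == "start.jar":
            pids.append(int(pid))
    return pids

print(solr_pids(JPS_OUTPUT))  # [12345, 34567]
```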
Solr supports multi-valued fields. You can add various skills to one
field and it will store all of the values in order. You can search on
any of the values. For numbers, you might want a subtype_value
convention: skillYears1_9 as one of the values for the skillYears
field.
Lance
On Mon, Jun 28,
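The subtype_value convention above can be sketched as a simple pack/unpack pair. The helper names are illustrative, not a Solr API; the point is that one multi-valued string field can carry (skill, years) pairs:

```python
# Sketch of the subtype_value idea: pack "skill id, years" pairs into
# one multi-valued field as values like "skillYears1_9", and unpack
# them again when reading results back.
PREFIX = "skillYears"

def pack(subtype_id, value):
    return "%s%d_%d" % (PREFIX, subtype_id, value)

def unpack(token):
    subtype, _, value = token[len(PREFIX):].partition("_")
    return int(subtype), int(value)

values = [pack(1, 9), pack(2, 3)]  # all stored in one multi-valued field
print(values)                      # ['skillYears1_9', 'skillYears2_3']
print(unpack("skillYears1_9"))     # (1, 9)
```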
Not at all. For one thing, a RESTful service does not allow a GET to
alter any data. It is just an HTTP-based web service.
On Sat, Jun 26, 2010 at 5:29 PM, Jason Chaffee wrote:
> The solr docs say it is RESTful, yet it seems that it doesn't use http
> headers in a RESTful way. For example, it d
Yes, it is better to use ints for ids than strings. Also, the Trie int
fields have a compressed format that may cut the storage needs even
more. 8M * 4 bytes = 32MB, times "a few hundred", we'll say 300, is 9,600MB of
IDs. I don't know how these fields are stored, but if they are
separate objects we've bl
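Spelling that estimate out (using decimal megabytes, as the message does):

```python
# 8 million docs, 4 bytes per int id, "a few hundred" (say 300) id fields.
docs = 8_000_000
bytes_per_int = 4
fields = 300

per_field_mb = docs * bytes_per_int / 1_000_000  # MB for one id field
total_mb = per_field_mb * fields                 # MB across all fields

print(per_field_mb, total_mb)  # 32.0 9600.0
```

That is before any per-object overhead, which is exactly the "if they are separate objects" worry.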
What are your actual highlighting requirements? You could try
things like maxAnalyzedChars, requireFieldMatch, etc
http://wiki.apache.org/solr/HighlightingParameters
has a good list, but you've probably already seen that page
Best
Erick
On Tue, Jun 29, 2010 at 9:11 PM, Peter Spam wrote:
To follow up, I've found that my queries are very fast (even with &fq=), until
I add &hl=true. What can I do to speed up highlighting? Should I consider
injecting a line at a time, rather than the entire file as a field?
-Pete
On Jun 29, 2010, at 11:07 AM, Peter Spam wrote:
> Thanks for eve
This may help:
http://lucene.apache.org/java/2_4_0/queryparsersyntax.html#Boolean%20operators
But the clause you specified translates roughly as "find all the
documents that contain R, then remove any of them that match
"* TO *". * TO * contains all the documents with R, so everything
you just mat
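One common way to express "topic is R, or no topic at all" is to wrap the pure-negative clause with *:* so it becomes "all docs minus docs that have any topic". This is a general Lucene/Solr idiom, not something taken verbatim from this thread; a sketch of building the request:

```python
from urllib.parse import urlencode

# The *:* turns the otherwise pure-negative clause into a valid
# "everything minus docs that have a topic" subquery.
params = {
    "q": "topic:R OR (*:* -topic:[* TO *])",
    "q.op": "OR",
}
query_string = urlencode(params)
print("/select?" + query_string)
```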
(10/06/30 1:11), Lance Hill wrote:
How do I know if solr is actually loading my database driver properly? I
added the mysql connector to the solr/lib directory, I added <lib dir="./lib" /> to the solrconfig.xml just to be sure it would find the
connector. When I start the application, I see it loaded my dataImporte
Hello, I am trying to find the right max and min settings for Java 1.6 on a 20GB
index with 8 million docs, running the 1.6_018 JVM with Solr 1.4, and currently
have Java set to an even 4GB (export JAVA_OPTS="-Xmx4096m -Xms4096m") for both
min and max which is doing pretty well but occasionally stil
Jan,
Looks interesting. I will try this.
Thanks!
Darren
On Mon, 2010-06-28 at 19:54 +0200, Jan Høydahl / Cominvent wrote:
> Hi,
>
> You might also want to check out the new Lucene-Hunspell stemmer at
> http://code.google.com/p/lucene-hunspell/
> It uses OpenOffice dictionaries with known st
> We've got an app in production that executes leading
> wildcard queries just
> fine.
> 0
> 1298
>
> title:*news
>
> The same app in dev/qa has undergone a major
> schema/solrconfig overhaul,
> including introducing multiple cores, and leading wildcard
> queries no lon
Hi,
I'm a little confused about how either SolrJ or Solr is working.
I'm using Solr 1.4.
@Test (groups = {"integration"}, enabled = true)
public void testDate() throws Exception
{
SolrServer solr =
SolrServerFactory.getStreamingUpdateSolrServer(searchDataIngestConfigur
We've got an app in production that executes leading wildcard queries just
fine.
0
1298
title:*news
The same app in dev/qa has undergone a major schema/solrconfig overhaul,
including introducing multiple cores, and leading wildcard queries no longer
work...
org.apache.lucene.qu
I tried query="select cast(concat(replytable.comment_id,',', replytable.SID) as
char)", it works now!
Thank you, Alex :)
Vivian
-Original Message-
From: Alexey Serba [mailto:ase...@gmail.com]
Sent: Tuesday, June 29, 2010 4:38 PM
To: solr-user@lucene.apache.org
Subject: Re: solr data
It's weird. I tried and it works for me.
1) Try to add convertType="true" to JdbcDataSource definition
See
http://wiki.apache.org/solr/DataImportHandlerFaq#Blob_values_in_my_table_are_added_to_the_Solr_document_as_object_strings_like_B.401f23c5
2) Try to apply cast operation to whole result, i.e
Britske, good workaround!
I had not thought about the possibility of using subqueries.
Regards
- Mitch
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-I-can-use-score-value-for-my-function-tp899662p931448.html
Sent from the Solr - User mailing list archive at Nabble.com.
I have given 6G to Tomcat. Using facet.method=enum and facet.limit seems to
fix the issue with a few tests, but I do know that it is not a "final"
solution; it will only work under certain configurations.
The real issue is being able to know the required RAM for a given index...
2010/6/29 Nagelberg, Kal
How much memory have you given the Solr JVM? Many servlet containers have a
small amount by default.
-Kal
-Original Message-
From: olivier sallou [mailto:olivier.sal...@gmail.com]
Sent: Tuesday, June 29, 2010 2:04 PM
To: solr-user@lucene.apache.org
Subject: Faceted search outofmemory
Hi,
I already use facet.limit in my query. I tried however facet.method=enum and
though it does not seem to fix everything, I have some requests without the
outofmemory error.
It would be best to have a rule for calculating the required memory for this
type of query.
2010/6/29 Markus Jelsma
> http://wiki.ap
http://wiki.apache.org/solr/SimpleFacetParameters#facet.limit
-Original message-
From: olivier sallou
Sent: Tue 29-06-2010 20:11
To: solr-user@lucene.apache.org;
Subject: Re: Faceted search outofmemory
How do I do paging over facets?
2010/6/29 Ankit Bhatnagar
>
> Did you trying pag
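Paging through facet values combines facet.limit with facet.offset; both parameters exist in Solr 1.4. A sketch of generating the per-page parameters (the URL layout is illustrative):

```python
from urllib.parse import urlencode

# Page through facet values with facet.limit + facet.offset.
def facet_page_params(field, page, page_size=100):
    return urlencode({
        "q": "*:*",
        "rows": 0,              # only the facet counts are wanted
        "facet": "true",
        "facet.field": field,
        "facet.limit": page_size,
        "facet.offset": page * page_size,
    })

print(facet_page_params("category", page=2))
```

Combined with facet.method=enum, this keeps each request from materializing the whole value list at once.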
Thanks for everyone's help - I have this working now, but sometimes the queries
are incredibly slow!! For example, 461360. Also, I
had to bump up the min/max RAM size to 1GB/3.5GB for things to inject without
throwing heap memory errors. However, my data set is very small! 36 text
files, fo
How do I do paging over facets?
2010/6/29 Ankit Bhatnagar
>
> Did you try paging them?
>
>
> -Original Message-
> From: olivier sallou [mailto:olivier.sal...@gmail.com]
> Sent: Tuesday, June 29, 2010 2:04 PM
> To: solr-user@lucene.apache.org
> Subject: Faceted search outofmemory
>
> H
I tried query="select concat(cast(replytable.comment_id as char), ',',
cast(replytable.SID as char)) as commentreply from commenttable right join
replytable on replytable.comment_id=commenttable.comment_id where
commenttable.story_id='${story.story_id}'" too, but I still got strange
characters
Did you try paging them?
-Original Message-
From: olivier sallou [mailto:olivier.sal...@gmail.com]
Sent: Tuesday, June 29, 2010 2:04 PM
To: solr-user@lucene.apache.org
Subject: Faceted search outofmemory
Hi,
I try to make a faceted search on a very large index (around 200GB with 200
Hi,
I am trying to do a faceted search on a very large index (around 200GB with
200M docs).
I have an out of memory error. With no facet it works fine.
There are quite a few questions around this, but I could not find the answer.
How can we know the required memory when facets are used so that I try to
s
Hi Ahmet,
it works, thanks a lot!
To be honest, I have no idea what the problem is with
defType=lucene&q.op=OR&df=topic&q=R NOT [* TO *]
-Sascha
Ahmet Arslan wrote:
I have a (multi-valued) field topic in my index which does
not need to exist in every document. Now, I'm struggling
with formulating
> Yes, it is registered exactly as you
> indicated in solrconfig and when the
> application starts up, I can see a message indicating the
> data-config is
> loaded successfully. So although the data config is loaded
> successfully, I
> cannot seem to access the dataimport handler.
Strange, solr/da
Yes, it is registered exactly as you indicated in solrconfig and when the
application starts up, I can see a message indicating the data-config is
loaded successfully. So although the data config is loaded successfully, I
cannot seem to access the dataimport handler.
Regards,
L. Hill
-Origin
On Tue, Jun 22, 2010 at 9:38 AM, Stephen Duncan Jr
wrote:
> I'm prototyping using StreamingUpdateSolrServer. I want to send a commit
> (or optimize) after I'm done adding all of my docs, rather than wait for the
> autoCommit to kick in. However, since StreamingUpdateSolrServer is
> multi-threade
Hi,
Check out the wiki [1] on this subject.
[1]: http://wiki.apache.org/solr/SolrSecurity
Cheers,
-Original message-
From: Vladimir Sutskever
Sent: Tue 29-06-2010 18:05
To: solr-user@lucene.apache.org;
Subject: Disabling Access to Solr Admin Panel
Hi All,
How can I forbid
> How do I know if solr is actually
> loading my database driver properly? I
> added the mysql connector to the solr/lib directory, I
> added <lib dir="./lib" /> to the solrconfig.xml just to be sure it
> would find the
> connector. When I start the application, I see it loaded my
> dataImporter
> data
How do I know if solr is actually loading my database driver properly? I
added the mysql connector to the solr/lib directory, I added <lib dir="./lib" /> to the solrconfig.xml just to be sure it would find the
connector. When I start the application, I see it loaded my dataImporter
data config, but when I try to acce
Hi All,
How can I forbid access to the SOLR index admin panel?
Can I configure this in the /jetty.xml -
I understand that it's not "true" security, considering
updates/delete/re-indexing commands will still be allowed via GET requests.
Kind regards,
Vladimir Sutskever
Investment Bank - T
In case anyone's interested (and I know at least one person is because
they asked me where to find the solr.TemporalCoverage class - sorry
that was my fault, I shouldn't have used the default package name),
here's how I got around the problem.
It's not the neatest solution in the world, but
> I have a (multi-valued) field topic in my index which does
> not need to exist in every document. Now, I'm struggling
> with formulating a query that returns all documents that
> either have no topic field at all *or* whose topic field
> value is R.
Does this work?
&defType=lucene&q.op=OR&q=topi
On Tue, Jun 22, 2010 at 9:38 AM, Stephen Duncan Jr wrote:
> I'm prototyping using StreamingUpdateSolrServer. I want to send a commit
> (or optimize) after I'm done adding all of my docs, rather than wait for the
> autoCommit to kick in. However, since StreamingUpdateSolrServer is
> multi-thread
Hi folks,
I have a (multi-valued) field topic in my index which does not need to
exist in every document. Now, I'm struggling with formulating a query
that returns all documents that either have no topic field at all *or*
whose topic field value is R.
Unfortunately, the query
/select?q={!lu
I knew that the jsp page= http://localhost:8983/solr/admin/stats.jsp
shows the different statistics but actually I am trying to read the hit
rate of the Solr caches via Java code. That's why I asked if the same is
exposed via the Solr APIs... Please share if you know about the same.
Thank
Thank you, but I didn't find anything like "Merge Thread" and I continued to
have the lock file.
The segments were not merged, so I stopped Solr and restarted it.
The lock disappeared, but I guess the optimization didn't complete.
I'll try again tomorrow.
-Original Message-
From: Alexander
Hi,
The AdminRequestHandler exposes a JSP [1] that'll return a nice XML document
with all the information you need about cache statistics and more.
[1]: http://localhost:8983/solr/admin/stats.jsp
Cheers,
On Tuesday 29 June 2010 15:52:56 Na_D wrote:
> This is just an enquiry. I just wanted to k
It's possible using function queries. See this link.
http://wiki.apache.org/solr/FunctionQuery#query
2010/6/29 MitchK
>
> Ramzesua,
>
> this is not possible, because Solr does not know what is the resulting
> score
> at query-time (as far as I know).
> The score will be computed, when every hit
Ramzesua,
this is not possible, because Solr does not know what the resulting score is
at query-time (as far as I know).
The score will be computed, when every hit from every field is combined by
the scorer.
Furthermore, I have shown you an alternative in the other threads. It does
not do exactly wh
This is just an enquiry. I just wanted to know if the cache hit rates of Solr
are exposed via the Solr API?
--
View this message in context:
http://lucene.472066.n3.nabble.com/Cache-hits-exposed-by-API-tp930602p930602.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thank you for your answer, Alex.
I tried it, but I got some weird output
"commentreply":["[...@1b06a21","[...@107dcfe","[...@13dcd27","[...@67cd84","[...@e5ace9","[...@bb05de","[...@7e56
It is supposed to be "commentreply":["234234,2","87979,343",...
Both comment_id and reply_id are integers.
On 29.06.2010, at 15:01, Jan Høydahl / Cominvent wrote:
> When you mix query handlers like this you will need to add a "+" or an "AND"
> in front of the _query_: part as well, in order for it to be required.
> You will see the difference when you try the above query directly on your
> Solr ins
When you mix query handlers like this you will need to add a "+" or an "AND" in
front of the _query_: part as well, in order for it to be required.
I.e.
hl.fragsize=0&facet=true&sort=score+desc&hl.simple.pre=&hl.fl=*&hl=true&rows=21&fl=*,score&start=0&q=%2Btag_ids:(23)++%2Bdocument_code_prefix:(A
On 29.06.2010, at 13:38, Lukas Kahwe Smith wrote:
>
> On 29.06.2010, at 13:24, Jan Høydahl / Cominvent wrote:
>
>> Hi,
>>
>> In DisMax the "mm" parameter controls whether terms are required or
>> optional. The default is 100% which means all terms required, i.e. you do
>> not need to add "+"
I just saw that it works exactly as expected below; anyway, thanks.
On Tuesday 29 June 2010 13:05:15 Alexander Rothenberg wrote:
> Hi,
> was curious if the field-property 'required' can be added to any field, not
> just the unique-field. Wiki has no info about it.
>
> I would like to set that pro
On 6/27/10 4:51 PM, David Smiley (@MITRE.org) wrote:
>
> I just noticed that field compression (e.g. compressed="true") is no longer
> in Solr, nor can I find why this was done. Can a committer offer an
> explanation? If the reason is that it eats up CPU, then I'd rather accept
> this tradeoff f
On 29.06.2010, at 13:24, Jan Høydahl / Cominvent wrote:
> Hi,
>
> In DisMax the "mm" parameter controls whether terms are required or optional.
> The default is 100% which means all terms required, i.e. you do not need to
> add "+". You can change to mm=0 and you will get the same behaviour as
Hi pal :)
Unfortunately copyField works only BEFORE analysis and you cannot "chain"
them...
The simplest solution would be to duplicate your copyFields:
Another way would be to look into the UpdateProcessorChain and write a "copy"
processor which does whatever you need.
--
Jan Høydahl,
Hi,
In DisMax the "mm" parameter controls whether terms are required or optional.
The default is 100% which means all terms required, i.e. you do not need to add
"+". You can change to mm=0 and you will get the same behaviour as standard
parser, i.e. an "OR" behaviour, where the "+" would say t
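The two mm settings described above can be sketched as request parameters. The parameter names (defType, q, mm) are real dismax parameters; the queries themselves are made up:

```python
from urllib.parse import urlencode

# mm=100% behaves like "all terms required" (the dismax default);
# mm=0 gives plain-OR behaviour like the standard parser with q.op=OR.
def dismax_params(query, mm):
    return urlencode({"defType": "dismax", "q": query, "mm": mm})

print(dismax_params("solr faceting", "100%"))  # all terms required
print(dismax_params("solr faceting", "0"))     # OR-like behaviour
```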
To determine if the optimize is still in progress, you can look at the
admin-frontend on the page "THREAD DUMP" for something like "Lucene Merge
Thread". If it's there, then the optimize is still running. Also, index file sizes
and filenames in your index-dir are changing a lot...
On Tuesday 29 June
Hi,
was curious if the field-property 'required' can be added to any field, not
just the unique-field. Wiki has no info about it.
I would like to set that property on some fields in the schema.xml that don't
belong to the root-entity of the document-schema (looking at
data-config.xml)... I want
Hi,
I'm using the Solr 1.4.0 default installation.
Is there a place where I can find the optimization status?
I sent an optimize HTTP request and it should have finished by now, but I
still see the lock file in the index folder.
Can I see somewhere whether the optimization is still running?
Thanks,
(sorry if this message ends up being sent twice)
We have a use-case where we'd like to fill a field from multiple sources,
i.e.
… (other source-fields are copied into text as well)
and then analyze the resulting text-field in a number of ways, each
requiring its own field.
Is it possib
Hi,
I am fetching the following details programmatically:
---
---
Name :: /replication
Class :: org.apache.solr.handler.ReplicationHandler
Version :: $Revision: 829682 $
Description :: ReplicationHandler provides replication of index
What I forgot to mention is that those errors only occur on the slave; the
master is working just fine.
Ram/Hardware/Java Version/Config/Startup parameters etc. are exactly the same
on both Machines.
-Original Message-
From: Bastian Spitzer [mailto:bspit...@magix.net]
Sent: Tue
Hi,
I was wondering if anyone was aware of any existing functionality where
clients/server components could register some search criteria and be
notified of newly committed data matching the search when it becomes
available
- a 'push/streaming' search, rather than 'pull'?
Thanks!
Hi,
We just migrated from Solr 1.4 to 1.4.1. We are observing some new
errors in the logs that didn't occur
before the migration, so we want to share them with you and are hoping
to get some help solving them.
We are using 1Master and 1Slave with replication on 2 different machines
running only
Hi,
I am a bit confused about the +/- syntax. Am I understanding it properly that
when using the normal query handler + means required and - means prohibited,
whereas in the dismax handler + means required and - means optional?
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html
The "+" or
> <fieldType ... positionIncrementGap="100">
>   <analyzer type="index">
>     ...
>     <filter ... minGramSize="1" maxGramSize="25"/>
>   </analyzer>
>   <analyzer type="query">
>     ...
>   </analyzer>
> </fieldType>
>
> Results:
>
> http:
David,
well, I am no committer, but I noticed that Lucene no longer takes care of
compressing (I think this was because of the trouble it caused) and maybe
this is the reason why Solr no longer makes this option available.
Unfortunately, I do not have a link for it, but I think this w