Hi,
I tried to create a core by simply hitting the URL below:
http://localhost:8983/solr/admin/cores?action=CREATE&name=core3&instanceDir=C://solr&config=solrconfig.xml&schema=schema.xml&dataDir=C://solr/data
It made an entry in the solr.xml file, but the core directory was not
created.
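For what it's worth, the parameters in that URL have to be separated by & and URL-encoded, or Solr sees one mangled parameter. A minimal plain-Java sketch of building such a URL (the buildCreateUrl helper is illustrative, not part of SolrJ):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

class CoreAdminUrl {
    // Hypothetical helper: joins Core Admin CREATE parameters with '&',
    // URL-encoding each value so paths like C://solr survive intact.
    static String buildCreateUrl(String base, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(base).append("/admin/cores?action=CREATE");
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append('&').append(e.getKey()).append('=')
              // encode(String, Charset) overload requires Java 10+
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("name", "core3");
        p.put("instanceDir", "C://solr");
        p.put("config", "solrconfig.xml");
        p.put("schema", "schema.xml");
        p.put("dataDir", "C://solr/data");
        System.out.println(buildCreateUrl("http://localhost:8983/solr", p));
    }
}
```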
Please
On 7 May 2012 at 13:30, Marcelo Carvalho Fernandes wrote:
Anything else?
If you fear DoS attacks from overly large queries (e.g. if you have millions of
documents), consider writing a query component that can limit the queries.
I believe that there's nothing else.
paul
Dear all,
I am using solr for log search.
During every search, based on the input request, I will have to search
through n index directories dynamically. n may range from 1 to 100.
With n at the higher end, firing up 100 cores wouldn't be a viable
solution. How do I achieve this in Solr, in
Hi,
I am trying to create cores dynamically. What are the configuration
steps that need to be followed to do this? Please let me know if you
have any ideas on that.
Thanks
Prabhakarn.P
This page gives you everything you need:
http://wiki.apache.org/solr/CoreAdmin#CREATE
Regards,
Dave
On 9 May 2012, at 08:32, pprabhcisco123 wrote:
Hi,
I am trying to create cores dynamically. What are the configuration
steps that need to be followed to do this? Please let
On 08.05.2012 at 23:23, Lance Norskog wrote:
Lucene does not support more than 2^32 unique documents, so you need to
partition.
Just a small note:
I doubt that Solr supports more than 2^31 unique documents, like most
other Java applications that use int values.
Greetings,
Kuli
Hi Dave,
I tried to create a core programmatically as below, but I am getting the
following error.
CoreAdminResponse statusResponse = CoreAdminRequest.getStatus(indexName, solr);
coreExists = statusResponse.getCoreStatus(indexName).size() > 0;
I'm trying to think through a Solr-based email alerting engine that
would have the following properties:
1. Users can enter queries they want to be alerted on, and the syntax
for alert queries should be the same syntax as my regular solr
(dismax) queries.
1a. Corollary: Because of not just
Otis,
I've just subscribed to nutch mailing list, however it's a very
low-volume one (at least that's what I came across), so can't I ask here?
Regards,
On 5/8/12 11:54 PM, Otis Gospodnetic wrote:
Tolga - you should ask on the Nutch mailing list, not Solr one. :)
Otis
Performance
Thanks Lance
There is already a clear partition - as you assumed, by date.
My requirement is for the best setup for:
1. A *single machine*
2. A quickly changing index - so I need the option to load and unload
partitions dynamically
Do you think that the sharding model that solr offers is
Hi
I tried to create cores dynamically using the code below:
CoreAdminResponse statusResponse = CoreAdminRequest.getStatus(indexName, solr);
coreExists = statusResponse.getCoreStatus(indexName).size() > 0;
System.out.println("got the
Hi ,
I have a requirement to create a core for each customer. I tried
creating cores using the code below:
CoreAdminRequest.Create create = new CoreAdminRequest.Create();
CoreAdminRequest.createCore(indexName + i, "C://solr/", solr);
It
With n at the higher end, firing up 100 cores wouldn't be a viable
solution. How do I achieve this in Solr; in short, I would like to
have a single core and get results out of multiple index searchers and
that implies multiple index readers.
When you'd want to have a single core with multiple
Hello SOLR experts,
I have my own indexing web-application which talks in XML to SOLR. It works
wonderfully well.
The queue is displayed in the indexer, so that experts can track that everything
went well into the index.
However, I currently see no way to display that Solr's searcher includes
My question is: is it possible to run multiple combinations of search
queries and get only the result counts in a single trip, without using
facet.query?
No. AFAIK.
Yes, you're right. I just tried googling this and I'm finding that a
requirement similar to mine has been filed under New
Hello.
I am running Solr replication. It works well, but I need to replicate my
dataimport.properties.
When server1 replicates this file, it creates a new file with a *.timestamp
suffix every time, because the first replication run created this file with the
wrong permissions ...
How can I tell Solr
On Wed, May 9, 2012 at 3:26 PM, pravesh suyalprav...@yahoo.com wrote:
With n at the higher end, firing up 100 cores wouldn't be a viable
solution. How do I achieve this in Solr; in short, I would like to
have a single core and get results out of multiple index searchers and
that implies multiple
What command are you using in your cron job on the slave to rebuild only the
spellcheck index?
I have only found the option to query the slave with a dummy string and attach
spellcheck.build=true as a URL parameter.
E.g.
slave-solr:8983/solr/my-index/spell/?q=helllo&spellcheck.build=true&wt=xml
--
For such an alerting service, I would make it a requirement that it's WYSIWYG -
e.g. let the user enter a search, and then refine it through facets, filters,
ranges etc. until he is satisfied with ALL the results returned. Do not rely on
relevance here, but sort the results by date or similar.
Afternoon,
We are testing an updated version of our Solr server running solr 3.5.0 and we
are experiencing some performance issues with regard to updates and commits.
Searches are working well.
There are approximately 80,000 documents and the index is about 2.5 GB. This
does not seem to be
Hi,
eDismax does its own query parsing before shipping the terms to Analysis (which
is responsible for stopword removal). That's why these are not treated as
stopwords. The quickest solution for you is probably to remove (or|OR|and|AND)
before sending the query to Solr.
Also see SOLR-3086 for
I think we have to add this for Java-based replication.
+1
Hi,
I’m totally out of my depth here but I am trying, so I apologise if this is
a bit of a basic question. I need the following information to be indexed
and then made searchable by Solr:
Title – A title for the company
Company – The name of the company
Description – A description of the company
Why would you replicate data import properties? The master does the importing
not the slave...
Sent from my Mobile device
720-256-8076
On May 9, 2012, at 7:23 AM, stockii stock.jo...@googlemail.com wrote:
Hello.
I am running Solr replication. It works well, but I need to replicate my
Dear list,
I recently figured out that the FrenchLightStemFilterFactory performs
some interestingly undocumented normalization on tokens...
There's a norm() helper called for each produced token that performs,
amongst other things, deletions on repeated characters... Only for
tokens with
+1 as well especially for larger indexes
Sent from my Mobile device
720-256-8076
On May 9, 2012, at 9:46 AM, Jan Høydahl jan@cominvent.com wrote:
I think we have to add this for Java-based replication.
+1
My setup includes asynchronous replication.
This means both are master AND slave at the same time, so I can easily switch
master and slave on the fly without restarting any server or a mass of
scripts ... I trigger a replication via cronjob and check every time whether the
server is master or slave. Only
Hi :)
I remember that in a Lucene query, there is something like mandatory
values. I just have to add a + symbol in front of the mandatory
parameter, like: +myField:my value
I was wondering if there was something similar in Solr queries? Or is
this behaviour activated by default?
Gary
You will have to create the directory and configs yourself. You will need
to call the command once you create the directory and give permissions; that
URL only creates the data folder and makes an entry in solr.xml.
Refer to: http://blog.dustinrue.com/archives/690
Regards
Sujahta
On Wed,
Hello,
I have successfully installed “Solr 3.6” under “Tomcat” (inside a folder
under C:\ I have 2 subfolders: tomcat - the “tomcat” installation, and solr -
the “solr” home). I have copied the solr folder from the “examples”. Then I
tried to index data from the database. The indexing was successful. But I have a
Hi,
Have you defined your default search field in schema.xml? If not, or if in
doubt, just prefix your query with a field name explicitly, something like
q=search_field_name:word
-- Dmitry
On Wed, May 9, 2012 at 9:12 PM, anarchos78 rigasathanasio...@hotmail.comwrote:
Hello,
I have successfully
Yes.
See http://wiki.apache.org/solr/SolrQuerySyntax - The standard Solr Query
Parser syntax is a superset of the Lucene Query Parser syntax.
Which links to http://lucene.apache.org/core/3_6_0/queryparsersyntax.html
Note - Based on the info on these pages I believe the + symbol is to be
Yes, I have already done it!
The schema.xml file is:
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="example" version="1.5">
  <types>
    <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
    <fieldType name="boolean" class="solr.BoolField"
      sortMissingLast="true"/>
Well, that's not going to work: since your search field is just a string, only
a literal match against it would work. You should instead define the search
field type to have analyzers and/or tokenizers, assuming of course it contains
some form of searchable text. Have a look at text_en,
Hi Richard,
An attempt to add a single document and commit is taking many minutes but the
time taken is not consistent.
Are you committing after every doc? If so, don't do it. :) Check Solr ML
archives (e.g. http://search-lucene.com/ ) for past discussions on this topic.
Do you have any
Paul,
Are you asking how to figure out the time between add doc and see doc?
I suppose it could be useful to have Solr expose info about how much time
until the next autocommit and then you could add that to the warmup time from
previous warming and estimate.
Otis
Performance Monitoring
On 5/9/2012 7:01 AM, richard.pog...@holidaylettings.co.uk wrote:
We are testing an updated version of our Solr server running solr 3.5.0 and we
are experiencing some performance issues with regard to updates and commits.
Searches are working well.
There are approximately 80,000 documents and
Hi Chris,
I think there is some confusion here.
When people say things about relevance scores they talk about comparing them
across queries.
What you have is a different situation, or at least a situation that lends
itself to working around this, at least partially.
You have N users.
Each user
Another option is to remove autowarming, and instead create a small
bunch of queries that go most of the way. If you sort on a field, do
that sort; facet queries also. This will load the basic Lucene data
structures. Also, just getting the index data loaded into the OS disk
cache helps a lot.
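One common way to wire up such a "small bunch of queries" is with the QuerySenderListener warming hooks in solrconfig.xml; a sketch (the q, sort and facet values are placeholders for your own common queries):

```
<!-- fired when a new searcher is opened after a commit -->
<listener event="newSearcher" class="solr.QuerySenderListener">
  <arr name="queries">
    <lst><str name="q">*:*</str><str name="sort">date desc</str></lst>
    <lst><str name="q">*:*</str><str name="facet">true</str><str name="facet.field">category</str></lst>
  </arr>
</listener>
```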
On
Can you give me any example on how to do this?
I am really stuck
Thank you in advance
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-query-issues-tp3974922p3975384.html
Sent from the Solr - User mailing list archive at Nabble.com.
There is a lat/long type and geosearch queries for it. Did you plan to
use that? See the solr/example schemas for use of geosearch.
On Wed, May 9, 2012 at 6:59 AM, Spadez james_will...@hotmail.com wrote:
Hi,
I’m totally out of my depth here but I am trying, so I apologise if this is
a bit of
I have imported data from a database. When I set a type other than string,
Solr throws an error: Unknown fieldtype 'text' specified on field biog at
org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:511)
Hi Otis,
I was not so much trying to get estimates as trying to find out whether it was
done.
I understand the indexing works in batches, after which there's a commit
followed by a warm phase: if my add could be answered with a commit id, and
one could check whether that commit is now available,
Thank you for the feedback. Yes, they are used for geospatial. After doing a
bit of homework I found this correction. Is this how it should be done?
<field name="lat" type="tdouble" indexed="true" stored="true"/>
<field name="lon" type="tdouble" indexed="true" stored="true"/>
+ before a term is correct; in Lucene a term includes field and value.
Query ::= ( Clause )*
Clause ::= ["+", "-"] [<TERM> ":"] ( <TERM> | "(" Query ")" )
<#_TERM_CHAR: ( <_TERM_START_CHAR> | <_ESCAPED_CHAR> | "-" | "+" ) >
<#_ESCAPED_CHAR: "\\" ~[] >
In Lucene query syntax, you can't express a term value that includes a space.
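To make a clause with a space in its value mandatory, the usual workaround is a quoted phrase rather than escaping (field names here are illustrative):

```
+myField:"my value"   mandatory phrase clause
-status:deleted       prohibited clause
```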
Thanks, Jan.
For now, I would go for the quick solution and just have something that
removes and|or before sending the query to Solr.
The issue in https://issues.apache.org/jira/browse/SOLR-3086 is
not what I need. I don't want to totally disable boolean operators, just
limit them
I am on an e-commerce project right now, and I have a requirement like this:
I have a lot of commodities in my Solr indexes, and each commodity has a price
field. Now I want to do a facet range query.
I refer to the Solr wiki; the facet range query needs to specify
*facet.range.gap* or specify
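For reference, a fixed-gap range facet over a price field is usually expressed with parameters like these (the field name and bounds are illustrative):

```
q=*:*&facet=true
  &facet.range=price
  &facet.range.start=0
  &facet.range.end=1000
  &facet.range.gap=100
```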
You can also add a copyField to your schema to copy from *_s (or whatever
schema fields you are storing your strings in) to the text field (which has a
type of text_general or something similar). It's best to do separate copyFields
for only the specific string fields that have text you want to search
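In schema.xml terms, that suggestion might look like the following sketch (field names are placeholders, assuming the example schema's text_general type):

```
<!-- a stored string field, plus an analyzed catch-all search field -->
<field name="title_s" type="string" indexed="true" stored="true"/>
<field name="text" type="text_general" indexed="true" stored="false" multiValued="true"/>

<!-- copy only the specific string fields you want searchable -->
<copyField source="title_s" dest="text"/>
```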
I don't personally know the details, but I heard somebody at the conference
say that you could hit some solr admin stats URL to access some MBeans stat
that tells you whether there are pending documents that are not yet
committed.
I see a reference to docsPending mentioned here:
The query treatment is probably correct: the default operator is
AND, so when "and" gets treated as a stop word and ignored, the default
operator is still AND; but when "or" is treated as a stop word and ignored,
the operator changes from OR to the default implicit AND.
-- Jack Krupansky
And you will have to define a text field type, or use one of the existing
text field types, such as text_general in the example Solr schema.
-- Jack Krupansky
-----Original Message-----
From: Spadez
Sent: Wednesday, May 09, 2012 9:59 AM
To: solr-user@lucene.apache.org
Subject: Newbie Tries