Hi,
Is there some place I should indicate what parameters are included in
the JSON objects sent? I was able to test books.json without the
error.
Yes, in Solr's schema.xml (under the conf/ directory). See
http://wiki.apache.org/solr/SchemaXml for more details.
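As an illustration (field names here are hypothetical, not from the poster's setup), fields are declared in schema.xml along these lines, and the fields in your JSON update documents must match a declared field or a dynamicField pattern:

```xml
<!-- schema.xml, under the conf/ directory -->
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="author" type="string" indexed="true" stored="true"/>
```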
please help ...
--
View this message in context:
http://lucene.472066.n3.nabble.com/Solr-3-6-1-and-facet-query-regular-expression-tp4047628p4047947.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi, I am kind of new in here. Got the same question...
I am using Java version 1.6 and Lucene version 3.3.
Can the index file size increase automatically over night?
During the evening I see the size around 11GB; the next morning it is
18GB, and then the size reduces to around 8GB again.
I have
Well, if nothing is going on at all, it's hard to see why the index would
increase. So I suspect _something_ is going on. Possibilities:
1 you have indexing activity going on. Even if it's just replacing docs
that already exist (which is actually an add and a delete), the index will
grow for a
Why do you think that during a backup any write activity would corrupt your
backup? Solr backs up live indexes all the time, that's what's happening,
in effect, when you replicate from a master to a slave. There's no
requirement that the master stop indexing. The replication backup command
Ahmet
See admin/analysis, it's invaluable. Probably
The terms are being searched against your default text field which I'd
guess is not title.
Also, try adding debug=all to your query and look in the debug info at the
parsed form of the query to see what's actually being searched.
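For example, assuming the stock example URL and a field called title (debug=all is the 4.x form; debugQuery=true works on older versions):

```
http://localhost:8983/solr/collection1/select?q=title:foo&debug=all
```

The parsedquery entry in the debug section shows which field each term was actually searched against.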
Best
Erick
On
There are sometimes caching issues with the browser, nothing to do with
Solr itself. Try clearing your browser cache, that would be my bet...
Best
Erick
On Fri, Mar 15, 2013 at 1:52 AM, Santoash Rajaram raja...@apple.com wrote:
I don't have an answer but I have seen this before too. I assumed
bq: the master did not index fast enough
Stop. Pause, analyze G.
Maybe you've already done this, but have you identified _why_ the master
doesn't index fast enough? If you're indexing from SolrJ, try commenting
out _just_ the line server.add(doc/doclist). I can't tell you how many
setups
Hi
I am using Solr 4.2. I created 2 collections by using
http://localhost:8983/solr/admin/collections?action=CREATE&name=collection1&numShards=1&replicationFactor=0&maxShardsPerNode=1
Ah, good catch! Coincidentally yesterday, while in the midst of looking
at some other JIRAs, I noticed that some pages on the Wiki said 4.2 and
changed what I ran across to 4.3. I originally started the Wikis when I
thought I would go fast enough to get it into 4.2, sorry for the confusion!
If
Hi,
This is perhaps a trivial question but somehow I could not pin-down:
when trying to index a file (using solr 3.6.1) I got the error:
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.extraction.ExtractingRequestHandler'
I know in solrconfig.xml we have defined
You mean replication factor of 1 not 0.
I'm cleaning up the collection api responses for 4.3 so that you would get a
clear error for this type of thing
(https://issues.apache.org/jira/browse/SOLR-4494). There should be info about
it in the overseer log currently.
I think I've also already fixed
On 3/16/2013 9:52 AM, Erick Erickson wrote:
Ah, good catch! Coincidentally yesterday, while in the midst of looking
at some other JIRAs, I noticed that some pages on the Wiki said 4.2 and
changed what I ran across to 4.3. I originally started the Wikis when I
thought I would go fast enough to
aahha… I used a replication factor of 0.
I thought 0 meant no replication of the original..
Should that be 1 if I want no replication?
./zahoor
On 16-Mar-2013, at 8:46 PM, Mark Miller markrmil...@gmail.com wrote:
You mean replication factor of 1 not 0.
I'm cleaning up the collection api
On Sat, Mar 16, 2013 at 11:36 AM, J Mohamed Zahoor jmo...@gmail.com wrote:
aahha… I used a replication factor of 0.
I thought 0 meant no replication of the original..
Should that be 1 if I want no replication?
Think of it as the number of copies of a book at a library.
replicationFactor is the
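Following the library analogy, replicationFactor=1 asks for exactly one copy of each shard, i.e. no extra replicas. A hedged example of the corrected CREATE call (host and names as in the earlier message):

```
http://localhost:8983/solr/admin/collections?action=CREATE&name=collection1&numShards=1&replicationFactor=1&maxShardsPerNode=1
```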
Got it.. Thanks.
./Zahoor
On 16-Mar-2013, at 9:13 PM, Yonik Seeley yo...@lucidworks.com wrote:
On Sat, Mar 16, 2013 at 11:36 AM, J Mohamed Zahoor jmo...@gmail.com wrote:
aahha… I used a replication factor of 0.
I thought 0 meant no replication of the original..
Should that be 1 if I want no
Cool, I'll need to try this. I could have sworn that it didn't work that
way in 4.0, but maybe my test was bunk.
-g
On Fri, Mar 15, 2013 at 9:41 PM, Mark Miller markrmil...@gmail.com wrote:
You can do this - just modify your starting Solr example to have no cores
in solr.xml. You won't be
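A minimal sketch of what that solr.xml might look like in the 4.x example (attribute values assumed from the stock distribution), with the cores element left empty so cores are created later via the API:

```xml
<solr persistent="true">
  <cores adminPath="/cores" host="${host:}" hostPort="${jetty.port:}">
    <!-- no <core> elements here -->
  </cores>
</solr>
```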
Hi,
So, will search time be the same for the case when fields are indexed only vs
the case when they are indexed and stored?
Thanks.
Alex.
-Original Message-
From: Otis Gospodnetic otis.gospodne...@gmail.com
To: solr-user solr-user@lucene.apache.org
Sent: Fri, Mar 15, 2013 8:09
Yeah, I don't know that I've ever tried with 4.0, but I've done this with 4.1
and 4.2.
- Mark
On Mar 16, 2013, at 12:19 PM, Gary Yngve gary.yn...@gmail.com wrote:
Cool, I'll need to try this. I could have sworn that it didn't work that
way in 4.0, but maybe my test was bunk.
-g
On
Search depends only on the index. But... returning field values for each
of the matched documents does require access to the stored values. So,
search time is in no way impacted by the existence or non-existence of
stored values, but total query processing time would of course include both
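In schema.xml terms (field names hypothetical), the distinction is just the two flags:

```xml
<!-- searchable, but its value cannot be returned in results -->
<field name="body" type="text_general" indexed="true" stored="false"/>
<!-- searchable and returnable -->
<field name="title" type="text_general" indexed="true" stored="true"/>
```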
Here's the normal path to the example configuration in Solr 4.1:
.../solr-4.1.0/example/solr/collection1/conf
That's the directory in which the example schema.xml and other configuration
files live.
There is no solr-4.1.0/example/conf directory, unless you managed to create
one yourself.
Hi,
Whats the best option for backing up solrcloud,
replicate each shard ?
Thanks
msj
It's not something that's been specifically tackled, but you would probably
want to use replication, or replication's built-in backup command (copies out a
snapshot of the index). Then you could rsync a copy to somewhere safe. I think
there is a lot more we could do with backup, but that's what
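The backup command is issued per core/shard against the replication handler, along these lines (host, core name, and backup location are assumptions):

```
http://localhost:8983/solr/core1/replication?command=backup&location=/backups/solr
```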
Hi xavier,
It's not clear to me what you want. Is the edge you're referring to the
beginning of a field? E.g. raw text "one two three four" with EdgeShingleFilter
configured to produce unigrams, bigrams and trigrams would produce "one", "one
two", and "one two three", but nothing else?
If so, I
Steve,
Yes, I want only "one", "one two", and "one two three", but nothing else.
Cool if this can be achieved without java code even better, I'll check that
filter.
I need this for building a field used for suggestions, the user
specifically wants no match only from the edge.
thanks!
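For reference, the stock shingle filter produces shingles at every position, not only at the edge, which is why a custom filter comes up at all. A sketch of the standard configuration (fieldType name is made up):

```xml
<fieldType name="text_shingle" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <!-- emits shingles starting at every token position; restricting
         output to shingles anchored at position 1 needs custom code -->
    <filter class="solr.ShingleFilterFactory" minShingleSize="2"
            maxShingleSize="3" outputUnigrams="true"/>
  </analyzer>
</fieldType>
```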
On Sat, Mar 16,
I read your reply too fast, so I thought you meant configuring the
LimitTokenPositionFilter. I see you mean I have to write one, ok...
On Sat, Mar 16, 2013 at 10:33 PM, xavier jmlucjav jmluc...@gmail.comwrote:
Steve,
Yes, I want only "one", "one two", and "one two three", but nothing else.
Cool
Ok, I have created a processor which manages to update the external file.
Basically, until a commit request, the hidden document IDs are stored in a Set,
and when a commit is requested, a new file is created by copying the last one,
then the additional IDs are appended to the external file.
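The copy-then-append flow described above can be sketched outside Solr. This is a minimal illustration in Python of the same idea (file naming, layout, and the key=value line format are assumptions, not the poster's actual code):

```python
import shutil
from pathlib import Path

def flush_hidden_ids(pending_ids, data_dir):
    """On commit: copy the newest external file, append pending IDs, clear the set."""
    data_dir = Path(data_dir)
    existing = sorted(data_dir.glob("external_*.txt"))
    new_file = data_dir / f"external_{len(existing):04d}.txt"
    if existing:
        shutil.copy(existing[-1], new_file)  # start from the last snapshot
    else:
        new_file.touch()                     # first commit: empty file
    with new_file.open("a") as f:
        for doc_id in sorted(pending_ids):
            f.write(f"{doc_id}=0\n")         # one key=value line per hidden doc
    pending_ids.clear()                      # Set is emptied once flushed
    return new_file
```

Each commit therefore produces a new file containing everything from the previous snapshot plus the IDs accumulated since.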
Now
The /replication handler in solrconfig.xml has a commented-out master
section which has a confFiles element which specifies which configuration
files to replicate:
<str name="confFiles">schema.xml,stopwords.txt</str>
You can add your external file to that comma-separated list.
-- Jack Krupansky
Hi Jack,
the external files involved in External File Fields are not stored in the
configuration directory and cannot be replicated this way; furthermore, in
SolrCloud, additional files are not replicated anymore.
There is something like that in the code:
if (confFileNameAlias.size() > 1 ||
(13/03/16 4:08), Van Tassell, Kristian wrote:
Hello everyone,
If I search for a term “baz” and tell it to highlight it, it highlights just
fine.
If, however, I search for “foo bar” using the q parameter, which appears in
that same document/same field, and use the hl.q parameter to
Ah, yes, with SolrCloud... configuration files are kept in Zookeeper:
http://wiki.apache.org/solr/SolrCloud#Getting_your_Configuration_Files_into_ZooKeeper
And, yes, EFF reads from the index directory.
Maybe you could have a custom handler/component that simply copied the EFF
file(s) from