OK - I figured out the logging. Here is the logging output plus the
console output and the stack trace:
[main] INFO org.apache.solr.core.SolrResourceLoader - new
SolrResourceLoader for directory: '/Users/carlroberts/dev/solr-4.10.3/'
[main] INFO org.apache.solr.core.SolrResourceLoader - Adding
On 1/21/2015 9:56 AM, Carl Roberts wrote:
BTW - I don't know if this will help also, but here is a screen shot
of my classpath in eclipse.
The URL in the slf4j error message does describe the problem with
logging, but if you know nothing about slf4j, it probably won't help you
much.
Make sure
Hello Everyone,
I am hitting a few issues with Solr replicas going into recovery and then
doing a full index copy. I am trying to understand the Solr recovery
process. I have read a few blogs on this and saw that when the leader notifies
a replica to recover (in my case it is due to connection
On 1/21/2015 9:15 AM, Clemens Wyss DEV wrote:
What I meant is:
If I do SolrServer#rollback after 11 documents were added, will then only 1
or all 11 documents that have been added in the SolrServer
transaction/context be rolled back?
If autoCommit is set to 10 docs and openSearcher is true, it would roll
I had to hardcode the path in solrconfig.xml from this:
${solr.install.dir:}
to this:
/Users/carlroberts/dev/solr-4.10.3/
to avoid the classloader warnings, but I still get the same error. I am
not sure where the ${solr.install.dir:} value gets pulled from but
apparently that is
Ah, OK, you need to include a logging jar in your classpath - the log4j and
slf4j-log4j jars in the solr distribution will help here. Once you've got some
logging set up, then you should be able to work out what's going wrong!
Alan Woodward
www.flax.co.uk
On 21 Jan 2015, at 16:53, Carl
Hi,
Could there be a bug in the EmbeddedSolrServer that is causing this?
Is it still supported in version 4.10.3?
If it is, can someone please provide me assistance with this?
Regards,
Joe
On 1/21/15, 12:18 PM, Carl Roberts wrote:
I had to hardcode the path in solrconfig.xml from this:
Hi everyone -
I posted a question on stackoverflow but in hindsight this would have been
a better place to start. Below is the link.
Basically I can't get the example working when using an external ZK cluster
and auto-core discovery. Solr 4.10.1 works fine, but the newest release
never gets new
Hi,
Is Solr a good candidate to index 100s of nodes in one XML file?
I have an RSS feed XML file that has 100s of nodes with several elements
in each node that I have to index, so I was planning to parse the XML
with Stax and extract the data from each node and add it to Solr. There
will
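The plan described above (stream the XML, extract the fields from each node, build one Solr document per node) can be sketched outside Solr with a streaming parser; the feed shape and field names here are invented for illustration, and the resulting dicts stand in for the documents you would add to Solr:

```python
# Illustrative sketch: stream over an RSS feed's <item> nodes and build one
# document per node, without loading hundreds of nodes into memory at once.
import io
import xml.etree.ElementTree as ET

RSS = b"""<rss><channel>
  <item><title>First</title><link>http://a</link><description>d1</description></item>
  <item><title>Second</title><link>http://b</link><description>d2</description></item>
</channel></rss>"""

def docs_from_rss(stream):
    docs = []
    for event, elem in ET.iterparse(stream, events=("end",)):
        if elem.tag == "item":
            docs.append({child.tag: child.text for child in elem})
            elem.clear()  # free the parsed subtree as we go
    return docs

docs = docs_from_rss(io.BytesIO(RSS))
print(len(docs))         # 2
print(docs[0]["title"])  # First
```

(As noted in a reply further down, Solr's DIH ships with an RSS example that can do this without hand-written parsing.)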
I am trying to implement type-ahead suggestion for a single field which
should ignore whitespace, underscores, or special characters in autosuggest.
It works as suggested by Alex using KeywordTokenizerFactory, but how to
ignore whitespace, underscores...
Example itemName data can be :
ABC E12 : if
Already did. And the logging gets me no closer to fixing the issue.
Here is the logging.
[main] INFO org.apache.solr.core.SolrResourceLoader - new
SolrResourceLoader for directory: '/Users/carlroberts/dev/solr-4.10.3/'
[main] INFO org.apache.solr.core.SolrResourceLoader - Adding
I was confused because I couldn't believe my jars might be out of sync. But
of course they were. I had to create a new eclipse project to sort it out,
but that exception has disappeared. Sorry for the confusing post.
--
View this message in context:
On 1/21/2015 9:13 AM, Nitin Solanki wrote:
Thanks, great explanation. One more thing I want to ask: which is best,
doing only hard commit, or both hard and soft commits? I want to index 21 GB
of data.
My recommendations for the autoCommit settings are on that URL that I
linked - maxTime set to
Hi Yusniel,
Solr manages documents as a whole. This means updating an existing document
means replacing it. So you should/could index metadata and full text in one
step, one Solr document under one unique ID. That would be the simplest case.
You could also use nested child documents to use
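The "updating means replacing" semantics described above can be modeled with a toy sketch; this is only an illustration of the behavior, not the SolrJ API, and the field names are invented:

```python
# Toy model: a Solr "update" with the same unique ID replaces the whole
# stored document; it does not merge new fields into the old one.
index = {}

def add(doc):
    index[doc["id"]] = doc  # same ID => full replacement

add({"id": "doc1", "title": "PDF paper", "fulltext": "body text"})
add({"id": "doc1", "title": "PDF paper (rev 2)"})  # re-add without fulltext

print(index["doc1"])
# {'id': 'doc1', 'title': 'PDF paper (rev 2)'} -- 'fulltext' is gone,
# because the second add replaced the document entirely
```

This is why indexing metadata and full text in one step, under one ID, is the simplest approach: a partial re-add would silently drop the fields it omits.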
Hi,
I have downloaded the code and documentation for Solr version 4.10.3.
I am trying to follow SolrJ Wiki guide and I am running into errors.
The latest error is this one:
Exception in thread "main" org.apache.solr.common.SolrException: No such
core: db
at
So far I have not been able to get the logging to work - here is what I
get in the console prior to the exception:
SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See
Thanks, great explanation. One more thing I want to ask: which is best,
doing only hard commit, or both hard and soft commits? I want to index 21 GB
of data.
On Wed, Jan 21, 2015 at 7:48 PM, Shawn Heisey apa...@elyograg.org wrote:
On 1/21/2015 6:01 AM, Nitin Solanki wrote:
How much of maximum
What I meant is:
If I do SolrServer#rollback after 11 documents were added, will then only 1 or
all 11 documents that have been added in the SolrServer transaction/context be
rolled back?
-Original Message-
From: Shawn Heisey [mailto:apa...@elyograg.org]
Sent: Wednesday, 21 January 2015 15:24
Tomoko Uchida wrote
Hi,
Strictly speaking, MultiPhraseQuery and BooleanQuery wrapping PhraseQuerys
are not equal.
For each query, Query.rewrite() returns a different object. (with Lucene
4.10.3)
q1.rewrite(reader).toString() returns:
body:blueberry chocolate (pie tart), where q1 is
That certainly looks like it ought to work. Is there log output that you could
show us as well?
Alan Woodward
www.flax.co.uk
On 21 Jan 2015, at 16:09, Carl Roberts wrote:
Hi,
I have downloaded the code and documentation for Solr version 4.10.3.
I am trying to follow SolrJ Wiki guide
Aha, I think you're being stung by
https://issues.apache.org/jira/browse/SOLR-6643, which will be fixed in the
upcoming 5.0 release; or you can patch your system with the patch attached to
that issue.
Alan Woodward
www.flax.co.uk
On 21 Jan 2015, at 19:44, Carl Roberts wrote:
Already did.
All,
How can I reduce the logging level to SEVERE in a way that survives a Tomcat
restart or a machine reboot in Solr? As you may know, I can change the logging
levels from the logging page in the admin console, but those changes are not
persistent across a Tomcat server restart or machine reboot.
Following
On 1/21/2015 12:53 PM, Carl Roberts wrote:
Is Solr a good candidate to index 100s of nodes in one XML file?
I have an RSS feed XML file that has 100s of nodes with several
elements in each node that I have to index, so I was planning to parse
the XML with Stax and extract the data from each
I am running Solr 4.10.2 with geofilt (~20% of docs have 30+ lat/lon
points) and everything works hunky-dory. Then I added a bf with geodist
along the lines of:
recip(geodist(),5,20,5). After a few hours of running I end up with OOM
GC overhead limit exceeded. I've seen this
Hi,
Not sure, but I think that the PatternReplaceFilterFactory or
the PatternReplaceCharFilterFactory could help you deleting those
characters.
Regards.
On Jan 21, 2015 7:59 PM, Vishal Swaroop vishal@gmail.com wrote:
I am trying to implement type-ahead suggestion for single field which
Solr is just fine for this.
It even ships with an example of how to read an RSS file under the DIH
directory. DIH is also most likely what you will use for the first
implementation. Don't need to worry about Stax or anything, unless
your file format is very weird or has overlapping namespaces
This is what we use for our autosuggest field in Solr 3.4. It works for us as
you describe below.
<fieldType name="autocomplete_edge" class="solr.TextField">
  <analyzer type="index">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
Hi,
Just add log4j.logger.org.apache.solr=SEVERE to your log4j properties.
*Thanks,*
*Rajesh,*
*(mobile) : 8328789519.*
On Wed, Jan 21, 2015 at 3:14 PM, Nemani, Raj raj.nem...@turner.com wrote:
All,
How can I reduce the logging levels to SEVERE that survives a Tomcat
restart or a machine
On Wed, 21 Jan 2015, Mihran Shahinian wrote:
: Date: Wed, 21 Jan 2015 16:06:18 -0600
: From: Mihran Shahinian slowmih...@gmail.com
: Reply-To: solr-user@lucene.apache.org
: To: solr-user@lucene.apache.org
: Subject: boosting by geodist - GC Overhead Limit exceeded
:
: I am running solr 4.10.2
Hi Darren,
Can you please show the contents of the clusterstate.json from ZooKeeper?
Please use a github gist or a pastebin-like service. The Admin UI has a
dump screen which shows the entire content of ZooKeeper as a json.
On Wed, Jan 21, 2015 at 6:15 PM, Darren Spehr darre...@gmail.com wrote:
On 1/21/2015 5:16 PM, Carl Roberts wrote:
BTW - it seems that it is very hard to get started with the Embedded
server. The doc is out of date. The code seems to be untested and buggy.
On 1/21/15, 7:15 PM, Carl Roberts wrote:
Hmmm... It looks like FutureTask is calling setException(Throwable t)
: I'm facing a problem with multiple field sort in Solr. I'm using the
: following fields in sort :
:
: PublishDate asc,DocumentType asc
correction: you are using: PublishDate desc,DocumentType desc
: The sort is only happening on PublishDate, DocumentType seems to be completely
: ignored.
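The semantics Hoss is clarifying (the first sort field is primary; the second only breaks ties) can be shown with a quick sketch; the documents and values here are invented for illustration:

```python
# Sketch of multi-field sort semantics: "PublishDate desc,DocumentType desc"
# means PublishDate is the primary key, and DocumentType only decides the
# order among documents whose PublishDate values are equal. Data is made up.
docs = [
    {"PublishDate": "2015-01-20", "DocumentType": "article"},
    {"PublishDate": "2015-01-21", "DocumentType": "article"},
    {"PublishDate": "2015-01-21", "DocumentType": "report"},
]

ordered = sorted(
    docs,
    key=lambda d: (d["PublishDate"], d["DocumentType"]),
    reverse=True,  # both fields descending
)

print([(d["PublishDate"], d["DocumentType"]) for d in ordered])
# [('2015-01-21', 'report'), ('2015-01-21', 'article'),
#  ('2015-01-20', 'article')]
```

Note that if no two documents share a PublishDate, the secondary field can never visibly affect the ordering, which can make it look "ignored".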
Hi Nishanth,
The recovery happens as follows:
1. PeerSync is attempted first. If the number of new updates on leader is
less than 100 then the missing documents are fetched directly and indexed
locally. The tlog tells us the last 100 updates very quickly. Other uses of
the tlog are for
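The decision Shalin describes can be sketched as a simple branch; the threshold of 100 is from his explanation, while the function and label names are illustrative, not Solr's internal API:

```python
# Sketch of the recovery decision described above: if the replica is fewer
# than ~100 updates behind the leader, PeerSync fetches and indexes just the
# missing updates from the tlog; otherwise a full index copy is required.
PEERSYNC_WINDOW = 100  # the tlog tracks roughly the last 100 updates

def choose_recovery(leader_update_count, replica_update_count):
    missing = leader_update_count - replica_update_count
    if missing < PEERSYNC_WINDOW:
        return "peersync"          # fetch only the missing updates
    return "full_replication"      # copy the whole index from the leader

print(choose_recovery(1050, 1000))  # peersync (50 updates behind)
print(choose_recovery(6000, 1000))  # full_replication (5000 behind)
```

This also shows why a very high indexing rate tends to force full replication: the replica falls more than 100 updates behind almost immediately.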
Hi Vishal,
Maybe the next pattern can help you (the conf attached by David is really
nice):
...pattern="(\s)+" replacement="" replace="all"/>
Hope it helps.
On Wed, Jan 21, 2015 at 10:57 PM, David M Giannone david.giann...@gm.com
wrote:
This is what we use for our autosuggest field in Solr 3.4. It
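What that `(\s)+` pattern does to the autosuggest input can be previewed with the same regex outside Solr; the extra underscore pattern below is an assumption to cover the original question, not something from the suggested filter:

```python
import re

# Preview of the char-filter behavior suggested above: strip whitespace runs
# before the KeywordTokenizer emits its single token, so "ABC E12" and
# "ABCE12" match the same prefix. The underscore rule is an added assumption.
def normalize(term):
    term = re.sub(r"(\s)+", "", term)  # pattern="(\s)+" replacement="" replace="all"
    term = re.sub(r"_+", "", term)     # assumed extra pattern for underscores
    return term.lower()

print(normalize("ABC E12"))  # abce12
print(normalize("ABC_e12"))  # abce12
```

Special characters beyond these would need further patterns in the same style.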
Hi,
I'm facing a problem with multiple field sort in Solr. I'm using the
following fields in sort :
PublishDate asc,DocumentType asc
The sort is only happening on PublishDate, DocumentType seems to be completely
ignored. Here's my field type definition.
<field name="PublishDate" type="tdate"
: I posted a question on stackoverflow but in hindsight this would have been
: a better place to start. Below is the link.
:
: Basically I can't get the example working when using an external ZK cluster
: and auto-core discovery. Solr 4.10.1 works fine, but the newest release
your SO URL shows
On 1/21/2015 1:14 PM, Nemani, Raj wrote:
How can I reduce the logging levels to SEVERE that survives a Tomcat restart
or a machine reboot in Solr. As you may know, I can change the logging
levels from the logging page in admin console but those changes are not
persistent across Tomcat
On 1/21/2015 7:24 PM, Shawn Heisey wrote:
I have no way to know what container or logging framework you're using.
Followup on this:
Unless you have modified the solr war for version 3.2.0 to change the
logging jars, you will definitely be using java.util.logging. Here's
some URLs that may
Thanks Hoss, this is exactly what I needed. I had previously run the
example using nothing more than an external ZK hosting my own
configuration. This of course means one of two things - my conf was bad, or
Solr was at fault. The conf has been working for ages so I didn't test a
replacement (it's
I *indexed* *2GB* of data. Now I want to *change* the *type* of *field*
from *textSpell* to *string* type in
*schema.xml.*
Detail Explanation on Stackoverflow. Below is the link:
On 22 January 2015 at 11:23, Nitin Solanki nitinml...@gmail.com wrote:
I *indexed* *2GB* of data. Now I want to *change* the *type* of *field*
from *textSpell* to *string* type in
Yes, one would need to reindex.
Regards,
Gora
Hi Shawn,
Many thanks for all your help. Moving the lucene JARs from
solr.solr.home/lib to the same classpath directory as the solr JARs plus
adding a bunch more dependency JAR files and most of the files from the
collection1/conf directory - these ones to be exact, has me a lot closer
to
On 1/21/2015 7:02 PM, Carl Roberts wrote:
Got it all working...:)
I just replaced the solrconfig.xml and schema.xml files that I was using
with the ones from collection1 in one of the examples. I had modified
those files to remove certain sections which I thought were not needed
and
Ah - OK - let me try that. BTW - I applied the fix from the bug link
you gave me to log the errors and I am now at least getting the actual
errors:
*default core name=db
solr home=/Users/carlroberts/dev/solr-4.10.3/
db is loaded=false
core init
Thanks Hoss for clearing up my doubt. I was confused by the ordering. So I
guess the first field is always the primary sort field, followed by the
secondary.
Thanks again.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Issue-with-Solr-multiple-sort-tp4181056p4181062.html
Got it all working...:)
I just replaced the solrconfig.xml and schema.xml files that I was using
with the ones from collection1 in one of the examples. I had modified
those files to remove certain sections which I thought were not needed
and apparently I don't understand those files very
Thank you Shalin. So in a system where the indexing rate is more than 5K TPS
or so, the replica will never be able to recover through the peer sync
process. In my case I have mostly seen step 3, where a full copy happens,
and if the index size is huge it takes a very long time for replicas to
Norgorn [lsunnyd...@mail.ru] wrote:
So, as we see, the memory used by the first shard to group wasn't released.
Caches are already nearly zero.
It should be one or the other: Either the memory is released or there is
something in the caches. Anyway, DocValues is the way to go, so ensure that it
We are trying to run SOLR with big index, using as little RAM as possible.
Simple search for our cases works nice, but field collapsing (group=true)
queries fall with OOM.
Our setup is several shards per SOLR entity, each shard on its own HDD.
We've tried same queries, but to one specific shard,
Ok. Thanx
On Thu, Jan 22, 2015 at 11:38 AM, Gora Mohanty g...@mimirtech.com wrote:
On 22 January 2015 at 11:23, Nitin Solanki nitinml...@gmail.com wrote:
I *indexed* *2GB* of data. Now I want to *change* the *type* of *field*
from *textSpell* to *string* type in
Yes, one would need to
Any ideas?
--
View this message in context:
http://lucene.472066.n3.nabble.com/MultiPhraseQuery-Rewrite-to-BooleanQuery-tp4180638p4180820.html
Sent from the Solr - User mailing list archive at Nabble.com.
On Tue, 2015-01-20 at 15:41 +0100, Jürgen Wagner (DVT) wrote:
[Snip: Valid concerns]
3. Cardinality: there may be rather large collections and some smaller
collections in the federation. If you use SolrCloud to obtain results,
the ones from smaller collections will get more significance in
On Wed, 2015-01-21 at 09:46 +0100, Toke Eskildsen wrote:
Anyway, RAID 0 does really help for random access, [...]
Should have been ...does not really help
- Toke Eskildsen
On Wed, 2015-01-21 at 07:56 +0100, Nimrod Cohen wrote:
RAID [0] configuration
each shard has data on each one of the 8 disks in the RAID; on each
query to get 1K docs, each shard requests data from the one RAID
disk, so we get 8 requests to get data from all of the disks and we get
a
I am working on solr spell checker along with suggester. I am saving
document like this :
{ngram:the,count:10}
{ngram:the age,count:5}
{ngram:the age of,count:3}
where *ngram* is the unique key, and I applied *StandardTokenizer* and
*ShingleFilterFactory* (1 to 5 size).
So, when I search word *the* it
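The ngram documents shown above (1- to 5-word shingles with counts) can be generated with a short sketch; this only emulates the shingling idea outside Solr, it is not the Solr filter itself, and the sample sentences are invented:

```python
from collections import Counter

# Emulate 1-to-5-word shingling and count each shingle, producing entries
# shaped like the documents above: {"ngram": "the age", "count": 2}.
def shingles(text, max_size=5):
    words = text.lower().split()
    for size in range(1, max_size + 1):
        for i in range(len(words) - size + 1):
            yield " ".join(words[i:i + size])

counts = Counter()
for sentence in ["the age of steam", "the age of sail"]:
    counts.update(shingles(sentence))

print(counts["the"])               # 2
print(counts["the age"])           # 2
print(counts["the age of steam"])  # 1
```

Making *ngram* the unique key then means re-indexing the same shingle updates (replaces) its document rather than duplicating it.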
How much data, at maximum, can we commit on Solr using hard commit without
using soft commit?
maxTime is 1000 in autoCommit
Details explanation is on Stackoverflow
http://stackoverflow.com/questions/28067853/how-much-maximum-data-can-we-hard-commit-in-solr
.
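For reference, the autoCommit settings being asked about live in the updateHandler section of solrconfig.xml. A sketch of the relevant knobs follows; the 1000 ms maxTime is the value stated above, while the autoSoftCommit value is a placeholder for illustration, not a recommendation from this thread:

```xml
<!-- Sketch only: tune these values for your own indexing load -->
<autoCommit>
  <maxTime>1000</maxTime>            <!-- the current setting mentioned above -->
  <openSearcher>false</openSearcher> <!-- hard commits for durability only -->
</autoCommit>
<autoSoftCommit>
  <maxTime>60000</maxTime>           <!-- placeholder visibility interval -->
</autoSoftCommit>
```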
Thanks a lot Alex...
It looks like it works as expected... I removed EdgeNGramFilterFactory
from the query section and used KeywordTokenizerFactory in index... this
is the final version:
<fieldType name="text_general_edge_ngram" class="solr.TextField"
    positionIncrementGap="100">
  <analyzer type="index">
Hi,
Strictly speaking, MultiPhraseQuery and BooleanQuery wrapping PhraseQuerys
are not equal.
For each query, Query.rewrite() returns a different object. (with Lucene
4.10.3)
q1.rewrite(reader).toString() returns:
body:blueberry chocolate (pie tart), where q1 is your first multi
phrase query.
On 1/20/2015 10:43 PM, Yusniel Hidalgo Delgado wrote:
I am diving into Solr recently and I need help with the following usage
scenario. I am working on a project to extract and search bibliographic
metadata from PDF files. Firstly, my PDF files are processed to extract
bibliographic metadata
On 1/21/2015 6:01 AM, Nitin Solanki wrote:
How much data, at maximum, can we commit on Solr using hard commit without
using soft commit?
maxTime is 1000 in autoCommit
Details explanation is on Stackoverflow
On 1/20/2015 11:42 PM, Clemens Wyss DEV wrote:
But then what happens if:
Autocommit is set to 10 docs
and
I add 11 docs and then decide (due to an exception?) to rollback.
Will only one (i.e. the last added) document be rolled back?
The way I understand the low-level architecture, yes --
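Shawn's reading above can be modeled with a toy sketch (not the SolrJ API; all names are invented): rollback discards only what has not yet been committed, and with autoCommit at 10 docs, 10 of the 11 adds are already durable when the rollback arrives.

```python
# Toy model of the rollback semantics discussed above: rollback discards
# only updates since the last (auto)commit, so adding 11 docs with
# autoCommit at 10 and then rolling back loses only the 11th.
class ToyIndex:
    def __init__(self, autocommit_docs=10):
        self.committed = []
        self.pending = []
        self.autocommit_docs = autocommit_docs

    def add(self, doc):
        self.pending.append(doc)
        if len(self.pending) >= self.autocommit_docs:
            self.commit()  # autoCommit fires after N pending docs

    def commit(self):
        self.committed.extend(self.pending)
        self.pending = []

    def rollback(self):
        self.pending = []  # only uncommitted updates are discarded

idx = ToyIndex(autocommit_docs=10)
for n in range(1, 12):   # add 11 documents
    idx.add(f"doc{n}")
idx.rollback()

print(len(idx.committed))  # 10 -- the autoCommit already made these durable
```

The practical consequence is that rollback is only a safety net for the window since the last commit; frequent autoCommits shrink that window.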