I wrote a simple test to reproduce a stack trace very similar to the one in
the above issue, differing only in some line numbers.
Any ideas as to why the following happens? Any help would be very
appreciated.
* The test case:
@Test
public void documentCommitAndRollbackTest() throws
Hi
We have a problem that seems to be due to memory leaks during search on
Solr 4.0. Haven't dived into it yet, so I am certainly not sure, but I just
wanted to ask upfront: does 4.0 contain any known memory leaks? And if so,
have they been fixed?
Regards, Per Steffensen
How do you know that it is Solr and nothing else?
Have you checked with MemoryAnalyzer?
http://wiki.eclipse.org/index.php/MemoryAnalyzer
As we are always using the most recent released version we
have never seen any memory leaks with Solr so far.
Regards
Bernd
On 15.03.2013 08:21, Per wrote:
We are currently using
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM (1.7.0_07 23.3-b01)
Runs excellently, and no memory parameter tweaking is necessary.
Give it enough physical and JVM memory, use -XX:+UseG1GC, and that's it.
Also no saw-tooth pattern or GC timeouts from the JVM as with earlier
Thanks for the support so far,
I was running the dataimport on a replica! Now I start it on the leader and
it goes at 590 docs/s. I think all docs were going to another node and then
coming back.
Is there a way to get the leader? If there is, I can detect the leader with
a script and start the
Hi, I have a question regarding how to facet-query a particular field using a
regular expression. I have data like the below:
{
  "responseHeader": {
    "status": 0,
    "QTime": 10,
    "params": {
      "hl.fragsize": "500",
      "facet": "true",
      "sort": "score desc",
      "indent": "on",
      "facet.limit": "-1",
Hi,
@Lance - thanks, it's a pleasure to give something back to the community. Even
if it is comparatively small. :-)
@Paul - it's definitely not 15 min but rather 2 min. Actually, the testing part
of this setup is very regular compared to other Maven projects. The copying of
the WAR file and
Hello,
currently when we set qt=tvrh&tv.all=true, it returns all the words
which are there in the text of the field.
Is there any way to get term vector information for a specific word
only? I.e., I pass the word, and it returns just the term position and
frequency for that word?
and
The purpose of the schema is to associate a type with a field name.
That's it.
A dynamic field associates a type with a range of names.
An empty field in a Lucene index doesn't take any space, so having 450
fields doesn't in itself cause a problem. The point at which you may
have a problem is
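A sketch of what such declarations look like in schema.xml (the field names and types here are illustrative, not taken from the thread):

```xml
<!-- a concrete field: one name, one type -->
<field name="title" type="text_general" indexed="true" stored="true"/>

<!-- a dynamic field: any field whose name ends in _s gets the string type -->
<dynamicField name="*_s" type="string" indexed="true" stored="true"/>
```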
Hi,
I want to find what your experiences are with different storage setups.
We tried running a master/slave setup on the SAN but quickly realized that the
master did not index fast enough. We didn't run with soft commit though – maybe
that would change the conclusion?
The slaves seemed to run
I don't have an answer but I have seen this before too. I assumed this is an
issue with the admin UI. In my case the number returned by the query looked
closer to the truth than the one in the UI. I even tried a hard commit and
optimize via the admin UI. It didn't help.
If you want to try hard
Hi,
My web service has all the DB-related information (username, password,
entity names, fields, etc.). I want to pass this data to the Solr
DataImportHandler to do the import (full-import or delta-import).
Is it possible to pass the DB information and do the data import from
Solr? (I want
I use HTTP GET /solr/replication?command=indexversion URLs to get the
index versions on the master and slave. The replication works fine, but
the index versions from /solr/replication?command=indexversion differ.
Best regards,
Rafal.
2013/3/14 Mark Miller markrmil...@gmail.com:
What calls are you using
On Fri, Mar 15, 2013 at 6:46 AM, raulgrande83 raulgrand...@hotmail.com wrote:
Thank you for your help. I'm afraid it won't be so easy to change the JVM
version, because it is required at the moment.
It seems that Solr 4.2 supports Java 1.6 at least. Is that correct?
Could you find any clue of
On 3/15/13 9:13 AM, Bernd Fehling wrote:
How do you know that it is Solr and nothing else?
It is memory usage inside the Jetty/Solr JVM we monitor, so by
definition it is Solr (or Jetty, but I couldn't imagine that). The lower
border (after full GC) of memory usage is increasing.
Have you check
On 15.03.2013 12:24, Per Steffensen wrote:
On 3/15/13 9:13 AM, Bernd Fehling wrote:
How do you know that it is Solr and nothing else?
It is memory usage inside the Jetty/Solr JVM we monitor, so by definition it
is Solr (or Jetty, but I couldn't imagine that). The lower border (after
full GC)
You can either have those values stored in variables (${varname}) and have
those configured somewhere else in Solr (there are several options).
Or, if you have a data source in your servlet container, you can use
its jndiName instead of configuring it in Solr itself.
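As a sketch, the two options might look like this in a DIH data-config.xml (the property names and the use of ${dataimporter.request.*} request parameters are illustrative assumptions, not from the thread):

```xml
<!-- option 1: resolve connection details from variables passed on the request -->
<dataSource type="JdbcDataSource"
            driver="com.mysql.jdbc.Driver"
            url="${dataimporter.request.dburl}"
            user="${dataimporter.request.dbuser}"
            password="${dataimporter.request.dbpassword}"/>

<!-- option 2: reference a pool managed by the servlet container via JNDI -->
<dataSource type="JdbcDataSource" jndiName="java:comp/env/jdbc/myDS"/>
```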
Regards,
Alex.
The main issue with dynamic fields is that because you have one definition,
you can also have only one treatment.
So, all of your fields (covered by one dynamicField definition) will have to be
of the same type. They will all have to be single- or multi-valued. They
will all have to be stored or
Yes, you can know that; you must understand shard partitioning.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Advice-solrCloud-DIH-tp4047339p4047673.html
Sent from the Solr - User mailing list archive at Nabble.com.
Hi,
Most of our clients/customers use local storage. Some use SSDs and some
SANs, and those with extra cash use SANs with SSDs.
But what you wrote needs more detail because sources of poor performance
can come from many places and there are a lot or very different setups out
there that work in
Mark
--
View this message in context:
http://lucene.472066.n3.nabble.com/How-should-I-configure-Solr-to-support-multi-word-synonyms-tp4044578p4047678.html
NRT seems not to work in my case when doing a soft commit every 2
seconds. My conf looks like this:
<autoCommit>
  <maxTime>1</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>2000</maxTime>
</autoSoftCommit>
No result from Solr when searching for a word in the file. When doing
Hi Andy,
maybe you can look at Scotas products, www.scotas.com/products. They
combine data synchronization in near real time between Oracle and Solr,
and you can also consume data during SQL query time with new operators
and functions, or go direct to Solr.
Bye!
2013/3/12 Andy Lester
Hi Jack, thanks a lot for your reply. I did that: <dynamicField name="*"
type="text" multiValued="true" />. However, when I run Solr it gives me a
bunch of errors. It actually displays the content of my files on my command
line and shows some logs like this:
org.apache.solr.common.SolrException:
Hi,
one of our clients provides, for an important Argentine telco, a complete
system to integrate and organize, in a simple system, large volumes of data
with information about customers, transactions, security risk, and potential
frauds, among other activities, all in real time. For text searching they
On 15 March 2013 19:28, Luis reneonta...@gmail.com wrote:
Hi Jack, thanks a lot for your reply. I did that: <dynamicField name="*"
type="text" multiValued="true" />. However, when I run Solr it gives me a
bunch of errors. It actually displays the content of my files on my command
line and shows some
Hi Gora, thank you for your reply. I am not using any commands; I just go to
the Solr dashboard, the db Dataimport page, and execute a full-import.
*My schema.xml looks like this:*
<field name="id" type="string" indexed="true" stored="true" required="true"
multiValued="false" />
<field name="sku" type="textTight"
Niklas,
In Linux, the API for watching for filesystem changes is called
inotify. You'd need to write something to listen to those events and
react accordingly.
Here's a brief discussion about it:
http://stackoverflow.com/questions/4062806/inotify-how-to-use-it-linux
Michael Della Bitta
Erick, before I do that - which I'll be happy to - I just want to make
sure I'm testing the right thing. The wiki seems to indicate this is a
4.2+ feature, but the ticket marks it as fixed in 4.3. Maybe just a
documentation bug?
-Mike
On 3/14/13 9:44 PM, Erick Erickson wrote:
Hmmm, could you
Take a look at ManifoldCF, which has a file system crawler that can track
changed files.
-- Jack Krupansky
-Original Message-
From: Niklas Langvig
Sent: Friday, March 15, 2013 11:10 AM
To: solr-user@lucene.apache.org
Subject: solr cell
We have all our documents (doc, docx, pdf) on a
java.lang.Float cannot be cast to java.lang.String typically means you
wrote <float name="x">n</float> for a parameter which is expected to be a
string (<str ...>).
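As an illustrative example (the parameter name "x" is made up):

```xml
<!-- triggers the ClassCastException: the handler expects a string here -->
<float name="x">1.5</float>

<!-- what the handler wants -->
<str name="x">1.5</str>
```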
-- Jack Krupansky
-Original Message-
From: Rohan Thakur
Sent: Friday, March 15, 2013 10:01 AM
To: solr-user@lucene.apache.org
The explain section that is returned if you specify the debugQuery=true
parameter provides the details of what terms matched for each document.
-- Jack Krupansky
-Original Message-
From: Rohan Thakur
Sent: Friday, March 15, 2013 9:19 AM
To: solr-user@lucene.apache.org
Subject:
Hi, I have a requirement. Below is my dataset:
Client_name      rep_name           acct_name
SUSAN CHILTON    GERARD BUCHANAN    CHILTON S
LARRY CHILTON    GERARD BUCHANAN    CHILTON L
My schema.xml ... SEARCH: I need the response as: search for CHILTON, group by Client
: Just from this observation, it seems like the code for SOLR 4.1 takes a
: wrong turn somewhere for large responses if it comes across the same query
: with a different fl list again.If the spinning query is pre-cached via
There definitely seems to be a problem with lazy field loading +
On 15 March 2013 20:16, Luis reneonta...@gmail.com wrote:
Hi Gora, thank you for your reply. I am not using any commands, I just go
on
the Solr dashboard, db Dataimport and execute a full-import.
In that case, you are not using the ExtractingRequestHandler, but
using the DataImportHandler,
Hi,
I wondered: does Solr search on indexed fields only, or on the entire index? In
more detail, let's say I have fields id, title and content, all indexed and
stored. Will a search load all of these fields into memory, or only the indexed
parts of these fields?
Thanks.
Alex.
And up! :-)
I've been wondering if using CloudSolrServer has something to do here. Does
it perform badly when a CloudSolrServer singleton receives
multiple queries? Is it recommended to have a list of CloudSolrServer
instances and select one of them with a round-robin criterion?
Sorry, Gora. It is ${fileSourcePaths.urlpath} actually.
*My complete schema.xml is this:*
<?xml version="1.0" encoding="UTF-8" ?>
<schema name="db" version="1.1">
  <types>
    <fieldType name="text_general" class="solr.TextField"
      positionIncrementGap="100" />
    <fieldType name="string"
Hi,
I'm interested in using the new Analyzing Suggester described by Mike
McCandless [1], but I'm not sure how it should be configured.
I've set up my SpellCheckComponent with
<str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
<str
We use the toString call on the query in our logs. For some numeric types, the
encoded form of the number is being printed instead of the readable form.
This makes tail and some other tools very unhappy...
Here is a partial example of a query.toString() that would have had binary in
it. As a
You def have to use multiple threads with it for it to be fast, but 3 or 4 docs
a second still sounds absurdly slow.
- Mark
On Mar 15, 2013, at 2:58 PM, Luis Cappa Banda luisca...@gmail.com wrote:
And up! :-)
I've been wondering if using CloudSolrServer has something to do here. Does
it
Is there a document that tells how to create multiple threads? Search
returns many hits which orbit this idea, but I haven't spotted one
which tells how.
Thanks
Jack
On Fri, Mar 15, 2013 at 1:01 PM, Mark Miller markrmil...@gmail.com wrote:
You def have to use multiple threads with it for it to
Me neither. Please, Mark, can you tell us how?
2013/3/15 Jack Park jackp...@topicquests.org
Is there a document that tells how to create multiple threads? Search
returns many hits which orbit this idea, but I haven't spotted one
which tells how.
Thanks
Jack
On Fri, Mar 15, 2013 at 1:01
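A minimal sketch of the multi-threaded indexing pattern Mark describes. The actual SolrJ call (e.g. server.add(doc)) is replaced by a counting placeholder so the sketch runs stand-alone; the class and method names are made up for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelIndexer {

    // Placeholder for a real SolrJ call such as server.add(doc);
    // here it only counts, so no Solr instance is needed to run the sketch.
    static void sendToSolr(int docId, AtomicInteger counter) {
        counter.incrementAndGet();
    }

    // Feed docs 0..n-1 to "Solr" from a fixed-size thread pool.
    public static int indexAll(int n, int threads) throws InterruptedException {
        AtomicInteger indexed = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < n; i++) {
            final int docId = i;
            pool.submit(() -> sendToSolr(docId, indexed));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return indexed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("indexed " + indexAll(1000, 4) + " docs");
    }
}
```

With a real CloudSolrServer or ConcurrentUpdateSolrServer in place of the placeholder, each worker thread keeps a request in flight, which is what makes the throughput difference.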
Thanks, Robert.
Am I correct in thinking that queryAnalyzerFieldType isn't needed at all
if I'm using spellcheck.q rather than just q?
Eoghan
On 15 March 2013 20:07, Robert Muir rcm...@gmail.com wrote:
On Fri, Mar 15, 2013 at 3:04 PM, Eoghan Ó Carragáin
eoghan.ocarrag...@gmail.com wrote:
You have to open searchers for the new data to show up. Try this:
<autoCommit>
  <maxTime>1</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>
<autoSoftCommit>
  <maxTime>2000</maxTime>
  <openSearcher>true</openSearcher>
</autoSoftCommit>
Make sure that you have low autowarm counts otherwise you need to
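The autowarm counts mentioned above are configured per cache in solrconfig.xml; a sketch (the cache sizes are illustrative):

```xml
<filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
<queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
```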
Hi all,
Running the solr server:
~/solr-4.1.0/example$ java -jar start.jar
For updating Solr with JSON, I followed the convention at:
example/exampledocs/books.json
which has:
[
  {
    "id" : "978-0641723445",
    "cat" : ["book","hardcover"],
    "name" : "The Lightning Thief",
    "author" : "Rick
Is there some place I should indicate what parameters are included in
the JSON objects sent? I was able to test books.json without the
error.
Yes, in Solr's schema.xml (under the conf/ directory). See
http://wiki.apache.org/solr/SchemaXml for more details.
Erik
On Mar 15, 2013,
I tried it and I get the same error response! Which is because... I don't
have a field named datasource.
You need to check the Solr schema.xml for the available fields and then add
any fields that your JSON uses that are not already there. Be sure to
shut down and restart Solr after editing
Another option similar to this would be the new file system
WatchService available in Java 7:
http://docs.oracle.com/javase/tutorial/essential/io/notification.html
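A minimal, self-contained sketch of that WatchService API; the watched directory is a temp dir and the file name is made up for the demo:

```java
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

public class WatchDemo {

    // Registers a watcher on dir, creates fileName inside it, and returns
    // the events reported for the next filesystem change as "KIND:name" strings.
    public static List<String> watchOnce(Path dir, String fileName) throws Exception {
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        Files.createFile(dir.resolve(fileName)); // simulate a dropped document
        WatchKey key = watcher.take();           // blocks until an event arrives
        List<String> events = new ArrayList<>();
        for (WatchEvent<?> event : key.pollEvents()) {
            events.add(event.kind() + ":" + event.context());
        }
        key.reset(); // re-arm the key so further events are delivered
        watcher.close();
        return events;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("watched");
        System.out.println(watchOnce(dir, "new-doc.pdf"));
    }
}
```

A real indexing listener would loop on take() and push each changed file to Solr instead of returning after one event.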
Arcadius.
On 15 March 2013 15:22, Michael Della Bitta
michael.della.bi...@appinions.com wrote:
Niklas,
In Linux, the API for
Sorry, should have specified. 4.1
On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller markrmil...@gmail.com wrote:
What Solr version? 4.0, 4.1 4.2?
- Mark
On Mar 15, 2013, at 7:19 PM, Gary Yngve gary.yn...@gmail.com wrote:
my solr cloud has been running fine for weeks, but about a week ago,
Also, looking at overseer_elect, everything looks fine. node is valid and
live.
On Fri, Mar 15, 2013 at 4:47 PM, Gary Yngve gary.yn...@gmail.com wrote:
Sorry, should have specified. 4.1
On Fri, Mar 15, 2013 at 4:33 PM, Mark Miller markrmil...@gmail.comwrote:
What Solr version? 4.0,
Strange - we hardened that loop in 4.1 - so I'm not sure what happened here.
Can you do a stack dump on the overseer and see if you see an Overseer thread
running perhaps? Or just post the results?
To recover, you should be able to just restart the Overseer node and have
someone else take over
I restarted the overseer node and another took over, queues are empty now.
the server with core production_things_shard1_2
is having these errors:
shard update error RetryNode:
http://10.104.59.189:8883/solr/production_things_shard11_replica1/:org.apache.solr.client.solrj.SolrServerException:
it doesn't appear to be a shard1 vs shard11 issue... 60% of my followers
are red now in the solr cloud graph.. trying to figure out what that
means...
On Fri, Mar 15, 2013 at 6:48 PM, Gary Yngve gary.yn...@gmail.com wrote:
I restarted the overseer node and another took over, queues are empty
I think those followers are red from trying to forward requests to the
overseer while it was being restarted. I guess I'll see if they become
green over time, or I guess I can restart them one at a time.
On Fri, Mar 15, 2013 at 6:53 PM, Gary Yngve gary.yn...@gmail.com wrote:
it doesn't
It looks like they are not picking up the new leader state for some reason…
That's where it says the local state doesn't match the zookeeper state. If
the local state doesn't match the zookeeper state within a short amount of time
when a new leader comes, everything will bail because it assumes
On Mar 15, 2013, at 10:04 PM, Gary Yngve gary.yn...@gmail.com wrote:
I think those followers are red from trying to forward requests to the
overseer while it was being restarted. I guess I'll see if they become
green over time, or I guess I can restart them one at a time.
Restarting the
Hi,
I think you are asking if the original/raw content of those fields will be
read. No, it won't, not for the search itself. If you want to
retrieve/return those fields then, of course, they will be read for the
documents being returned.
Otis
--
Solr ElasticSearch Support
On 16 March 2013 00:30, Luis reneonta...@gmail.com wrote:
Sorry, Gora. It is ${fileSourcePaths.urlpath} actually.
Most likely, there is some issue with the selected urlpath
not pointing to a proper http or file source. E.g., urlpath
could be something like http://example.com/myfile.pdf .
Please
Hi,
I also have been using that plugin (https://github.com/healthonnet/hon-
lucene-synonyms) in a project and it's been working pretty well. But I
think Solr should handle multi-word synonyms natively (BTW, there is a
story in jira for that https://issues.apache.org/jira/browse/SOLR-4381).
One
I will upgrade to 4.2 this weekend and see what happens. We are on ec2 and
have had a few issues with hostnames with both zk and solr. (but in this
case i haven't rebooted any instances either)
it's a relative pain to do the upgrade because we have a query/scorer
fork of lucene along with
On Mar 16, 2013, at 12:30 AM, Gary Yngve gary.yn...@gmail.com wrote:
I will upgrade to 4.2 this weekend and see what happens. We are on ec2 and
have had a few issues with hostnames with both zk and solr. (but in this
case i haven't rebooted any instances either)
There is actually a new