OK, this is very surprising.
I just ran the curl command
curl --silent 'http://xx.xx.xx.xx:8985/solr/collectionABC/update/?commit=true&openSearcher=false'
And in the Solr log file I can see these messages:
Dec 16, 2012 10:44:14 PM org.apache.solr.update.DirectUpdateHandler2 commit
INFO: start
I don't think autocommit is deprecated; it's just commented out of the config,
and using commitWithin (assuming you're working from SolrJ) is preferred if
possible.
But what governs a particular set of docs? What are the criteria
that determine when
you want to commit? Flushes and commits are
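(A minimal sketch of the commitWithin approach mentioned above, as a plain XML update; the host, port, core path, and id field are placeholders:)
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' \
  --data-binary '<add commitWithin="10000"><doc><field name="id">doc1</field></doc></add>'
# Solr will commit within 10 seconds of this add; SolrJ exposes the same
# thing via an add overload taking a commit-within time in milliseconds.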
On May 16, 2012, at 5:23 AM, marco crivellaro wrote:
Hi all,
this might be a silly question but I've found different opinions on the
subject.
When a search is run after a commit is performed, will the result include all
documents committed up to the last commit?
use case (sync):
1- add
In the 3.6 world, LukeRequestHandler does some...er...really expensive
things when you click into the admin/schema browser. This is _much_
better in trunk BTW.
So, as Yonik says, LukeRequestHandler probably accounts for
one of the threads.
Does this occur when nobody is playing around with the
Hi,
This is what the thread dump looks like.
Any ideas?
Mav
Java HotSpot(TM) 64-Bit Server VM 20.1-b02, Thread Count: current=19,
peak=20, daemon=6
'DestroyJavaVM' Id=26, RUNNABLE on lock=, total cpu time=198450 ms, user time=196890 ms
'Timer-2' Id=25, TIMED_WAITING on
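(For reference, a thread dump like the one above can be captured with the JDK's jstack tool, or by sending SIGQUIT to the JVM; the pid 12345 is a placeholder:)
jstack -l 12345 > threaddump.txt
# or: kill -3 12345   (the dump goes to the JVM's stdout)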
On Sat, Apr 28, 2012 at 7:02 AM, mav.p...@holidaylettings.co.uk wrote:
Hi,
This is what the thread dump looks like.
Any ideas?
Looks like the thread taking up CPU is in LukeRequestHandler
'1062730578@qtp-1535043768-5' Id=16, RUNNABLE on lock=, total cpu
One more thing I noticed is that the schema browser in the admin interface also
eventually times out…
Any ideas from anyone ?
From: Mav Peri
To: solr-user@lucene.apache.org
Subject: commit stops
Hi
On Fri, Apr 27, 2012 at 9:18 AM, mav.p...@holidaylettings.co.uk wrote:
We have an index of about 3.5GB which seems to work fine until it suddenly
stops accepting new commits.
Users can still search on the front end but nothing new can be committed and
it
Thanks for the reply
The client expects a response within 2 minutes and after that will report
an error. When we build fresh it seems to work and the operation takes a
second or two to complete. Once it gets to the stage where it hangs, it simply
won't accept any further commits. I did an index check and
On Fri, Apr 27, 2012 at 8:23 PM, mav.p...@holidaylettings.co.uk wrote:
Hi again,
This is the only log entry I can find, regarding the failed commits…
Still timing out as far as the client is concerned and there is actually
nothing happening on the server in
We also see extreme slowness using Solr 3.6 when trying to commit a delete. We
also get hangs. We do 1 commit at most a week. Rebuilding from scratch using
DIH works fine and has never hung.
Bill Bell
Sent from mobile
On Apr 27, 2012, at 5:59 PM, mav.p...@holidaylettings.co.uk
What issues? It really shouldn't be a problem.
On Mar 22, 2012, at 11:44 PM, I-Chiang Chen ichiangc...@gmail.com wrote:
At this time we are not leveraging the NRT functionality. This is the
initial data load process where the idea is to just add all 200 million
records first. Then do a
We did some tests too with many millions of documents and auto-commit enabled.
It didn't take long for the indexer to stall and in the meantime the number of
open files exploded, to over 16k, then 32k.
On Friday 23 March 2012 12:20:15 Mark Miller wrote:
What issues? It really shouldn't be a
We saw a couple of distinct errors, and all machines in a shard are identical:
-On the leader of the shard
Mar 21, 2012 1:58:34 AM org.apache.solr.common.SolrException log
SEVERE: shard update error StdNode:
http://blah.blah.net:8983/solr/master2-slave1/:org.apache.solr.common.SolrException:
Map failed
On Mar 23, 2012, at 12:49 PM, I-Chiang Chen wrote:
Caused by: java.lang.OutOfMemoryError: Map failed
Hmm...looks like this is the key info here.
- Mark Miller
lucidimagination.com
On Mar 21, 2012, at 9:37 PM, I-Chiang Chen wrote:
We are currently experimenting with SolrCloud functionality in Solr 4.0.
The goal is to see if Solr 4.0 trunk in its current state is able to
handle roughly 200 million documents. The document size is not big: around 40
fields, no more than a
At this time we are not leveraging the NRT functionality. This is the
initial data load process where the idea is to just add all 200 million
records first. Then do a single commit at the end to make them searchable.
We actually disabled auto commit at this time.
We have tried to leave auto
Right, I suspect you're hitting merges. How often are you
committing? In other words, why are you committing explicitly?
It's often better to use commitWithin on the add command
and just let Solr do its work without explicitly committing.
Going forward, this is fixed in trunk by the
On 07.02.2012 15:12, Erick Erickson wrote:
Right, I suspect you're hitting merges.
Guess so.
How often are you
committing?
One time, after all work is done.
In other words, why are you committing explicitly?
It's often better to use commitWithin on the add command
and just let Solr do
Hi,
Yep, anything added between two commits must be regarded as lost in case of a
crash.
You can of course minimize this interval by using a low commitWithin. But
after a crash you should always investigate whether the last minutes of adds
made it.
A transaction log feature is being developed,
On Fri, Jan 27, 2012 at 3:25 PM, Jan Høydahl jan@cominvent.com wrote:
Hi,
Yep, anything added between two commits must be regarded as lost in case of a
crash.
You can of course minimize this interval by using a low commitWithin. But
after a crash you should always investigate whether the
Hmmm, does it work just to put this in the master's index, let
replication do its tricks, and issue your commit on the master?
Or am I missing something here?
Best
Erick
On Tue, Jan 3, 2012 at 1:33 PM, Martin Koch m...@issuu.com wrote:
Hi List
I have a Solr cluster set up in a master/slave
Yes.
However, something must actually have been updated in the index before a
commit on the master causes the slave to update (this is what was confusing
me).
Since I'll be updating the index fairly often, this will not be a problem
for me.
If, however, the external file field is updated often,
Bug (ahem, that is nudge) the committers over on the dev list to pick
it up and commit it. They'll alter the status etc.
Best
Erick
On Thu, Sep 1, 2011 at 2:37 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
Hi list,
I have fixed an issue and created a patch (SOLR-2726) but how to
Subject: RE: commit time and lock
Hi Mark
I've read that in a thread titled "Weird optimize performance degradation", where Erick Erickson
states that "Older versions of Lucene would search faster on an optimized index, but this is no longer
necessary.", and more recently in a thread you initiated a month ago
To: solr-user@lucene.apache.org
Subject: Re: commit time and lock
Actually I'm worried about the response time. I'm committing around 500
docs every 5 minutes. As I know (correct me if I'm wrong), at the
time of committing the Solr server stops responding. My concern is how to
minimize the response
Rhods [mailto:jonty.rh...@gmail.com]
Sent: Thursday, July 21, 2011 20:27
To: solr-user@lucene.apache.org
Subject: Re: commit time and lock
Actually I'm worried about the response time. I'm committing around 500
docs every 5 minutes. As I know (correct me if I'm wrong), at the
time of committing
[mailto:jonty.rh...@gmail.com]
Sent: Friday, July 22, 2011 12:45
To: solr-user@lucene.apache.org
Subject: Re: commit time and lock
Thanks for the clarity.
One more thing I want to know about optimization.
Right now I am planning to optimize the server every 24 hours. Optimization
also takes time (last
Subject: Re: commit time and lock
Hello,
Pierre, can you tell us where you read that?
I've read here that optimization is not always a requirement for an
efficient index, due to some low-level changes in Lucene 3.x.
Marc.
On Fri, Jul 22, 2011 at 2:10 PM, Pierre GOSSE pierre.go...@arisem.com wrote:
On 7/22/2011 8:23 AM, Pierre GOSSE wrote:
I've read that in a thread titled "Weird optimize performance degradation", where Erick Erickson
states that "Older versions of Lucene would search faster on an optimized index, but this is no longer
necessary.", and more recently in a thread you initiated
2011 16:42
To: solr-user@lucene.apache.org
Subject: Re: commit time and lock
On 7/22/2011 8:23 AM, Pierre GOSSE wrote:
I've read that in a thread titled "Weird optimize performance degradation",
where Erick Erickson states that "Older versions of Lucene would search
faster on an optimized index
On 7/22/2011 9:32 AM, Pierre GOSSE wrote:
Merging does not happen often enough to keep deleted documents to a low enough
count?
Maybe there's a need to have partial optimization available in Solr, meaning
that segments with too many deleted documents could be copied to a new file without
Actually I'm worried about the response time. I'm committing around 500
docs every 5 minutes. As I know (correct me if I'm wrong), at the
time of committing the Solr server stops responding. My concern is how to
minimize the response time so the user does not need to wait, or whether any
other logic will be required for my
Dear all,
Kindly help me..
thanks
On Tuesday 21 June 2011 11:46 AM, Jonty Rhods wrote:
I am using solrj to index the data. I have around 5 docs indexed. As at
the time of commit, due to the lock, the server stops responding, so I was
calculating the commit time:
double starttemp =
What is it you want help with? You haven't told us what the
problem you're trying to solve is. Are you asking how to
speed up indexing? What have you tried? Have you
looked at: http://wiki.apache.org/solr/FAQ#Performance?
Best
Erick
On Tue, Jun 21, 2011 at 2:16 AM, Jonty Rhods
Are you optimizing? That is unnecessary when committing, and is often the
culprit.
Best
Erick
On Tue, Jun 7, 2011 at 5:42 AM, Rohit Gupta ro...@in-rev.com wrote:
Hi,
My commit seems to be taking too much time; if you notice from the DataImport
status given below, to commit 1000 docs it's
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/example/solr/conf/solrconfig.xml
Look for autocommit and maxDocs.
Hi,
I'm using DIH and want to perform a commit every N processed documents; how
can I do this?
thanks in advance
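(A minimal sketch of the autoCommit block referred to above, as it appears commented out in the example solrconfig.xml; the thresholds here are illustrative only:)
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <maxDocs>10000</maxDocs> <!-- commit after this many added documents -->
    <maxTime>60000</maxTime> <!-- or after this many milliseconds -->
  </autoCommit>
</updateHandler>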
DIH Response XML:
<lst name="statusMessages">
  <str name="Total Requests made to DataSource">2</str>
  <str name="Total Rows Fetched">1</str>
  <str name="Total Documents Skipped">0</str>
  <str name="Delta Dump started">2010-11-24 09:56:11</str>
  <str name="Identifying Delta">2010-11-24 09:56:11</str>
  <str name="Deltas Obtained">2010-11-24
It's so strange ...
- I copied the solrconfig.xml from the core that works, and made no changes
- I deleted all fields in my query and changed it to a simple query with two
fields. No commit ...
=( Does anyone have an idea?
--
View this message in context:
http://lucene.472066.n3.nabble.com/commit-true-has-no-effect-tp1952567p1953391.html
Sent from the Solr - User mailing list archive at Nabble.com.
Patience, my friend. It's still early in the morning and people are thinking
about Thanksgiving G...
We need more details. My first guess, since only the SQL statement changed,
is that something's wrong with the new SQL. There's a little-known
debug console for DIH you might want to
Okay, sorry and thanks for the reply.
I know the links that you posted and I know most of the DIH settings from the
wiki. I'm not new to Solr ... DIH tells me after a delta that some documents
changed, but it doesn't want to commit. The query is not broken, I checked this,
changed the query and experimented with it. But with
Thanks Eric. For the record, we are using 1.4.1 and SolrJ.
On 31 October 2010 01:54, Erick Erickson erickerick...@gmail.com wrote:
What version of Solr are you using?
About committing. I'd just let the solr defaults handle that. You configure
this in the autocommit section of solrconfig.xml.
What version of Solr are you using?
About committing. I'd just let the solr defaults handle that. You configure
this in the autocommit section of solrconfig.xml. I'm pretty sure this gets
triggered even if you're using SolrJ.
That said, it's probably wise to issue a commit after all your data
I am not sure why some commits take a very long time.
Hmm... Because it merges index segments... How large is your index?
Also is there a way to reduce the time it takes?
You can disable commit in the DIH call and use autoCommit instead. It's
kind of a hack because you postpone the commit operation and
On 7/23/10 5:59 PM, Alexey Serba wrote:
Another option is to set optimize=false in DIH call ( it's true by
default ).
Ouch - that should really be changed then.
- Mark
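(For context, a sketch of the DIH request being discussed, assuming a handler registered at /dataimport; in this era commit and optimize both default to true, so they have to be disabled explicitly:)
curl 'http://localhost:8983/solr/dataimport?command=full-import&commit=false&optimize=false'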
Hi,
On 07.05.2010 22:47, Chris Hostetter wrote:
so it's the full request time, and would be inclusive of any postCommit
event handlers -- that's important to know. the logs will help clear up
whether the underlying commit is really taking up a large amount of time
or if it's some postCommit
Hi,
On 05.05.2010 03:49, Chris Hostetter wrote:
: Are you accidentally building the spellchecker database on each commit?
...
: This could also be caused by performing an optimize after the commit, or
it
: could be caused by auto warming the caches, or a combination of both.
The mail servers are often not too friendly with attachments, so people
either inline configs or put them on a server and post the URL.
HTH
Erick
On Wed, May 5, 2010 at 12:06 PM, Markus Fischer mar...@fischer.name wrote:
Hi,
On 05.05.2010 03:49, Chris Hostetter wrote:
: Are you
Hi,
On 04.05.2010 03:24, Mark Miller wrote:
On 5/3/10 9:06 AM, Markus Fischer wrote:
we recently began having trouble with our Solr 1.4 instance. We've about
850k documents in the index which is about 1.2GB in size; the JVM which
runs tomcat/solr (no other apps are deployed) has been given
It might be worth checking the VMWare environment - if you're using the
VMWare scsi vmdk and it's shared across multiple VMs and there's a lot of
disk contention (i.e. multiple VMs are all busy reading/writing to/from the
same disk channel), this can really slow down I/O operations.
On Tue, May
On 04.05.2010 11:01, Peter Sturge wrote:
It might be worth checking the VMWare environment - if you're using the
VMWare scsi vmdk and it's shared across multiple VMs and there's a lot of
disk contention (i.e. multiple VMs are all busy reading/writing to/from the
same disk channel), this can
To: solr-user@lucene.apache.org
Subject: Re: Commit takes 1 to 2 minutes, CPU usage affects other apps
On 04.05.2010 11:01, Peter Sturge wrote:
It might be worth checking the VMWare environment - if you're using
the
VMWare scsi vmdk and it's shared across multiple VMs and there's a
lot of
disk contention (i.e
: Are you accidentally building the spellchecker database on each commit?
...
: This could also be caused by performing an optimize after the commit, or it
: could be caused by auto warming the caches, or a combination of both.
The heart of the matter being: it's pretty much impossible
On 5/3/10 9:06 AM, Markus Fischer wrote:
Hi,
we recently began having trouble with our Solr 1.4 instance. We've about
850k documents in the index which is about 1.2GB in size; the JVM which
runs tomcat/solr (no other apps are deployed) has been given 2GB.
We've a forum and run a process every
(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:619)
Date: Thu, 21 Jan 2010 15:33:50 -0800
Subject: Re: commit fails on weblogic
From: goks...@gmail.com
To: solr-user@lucene.apache.org
There might be a limit in Weblogic on the number or length of
parameters allowed in a POST.
On Thu
There might be a limit in Weblogic on the number or length of
parameters allowed in a POST.
On Thu, Jan 21, 2010 at 7:37 AM, Joe Kessel isjust...@hotmail.com wrote:
Using Solr 1.4 and the StreamingUpdateSolrServer on Weblogic 10.3 and get the
following error on commit. The data seems to load
2009/11/11 Licinio Fernández Maurelo licinio.fernan...@gmail.com
Hi folks,
i'm getting this error while committing after a dataimport of only 12 docs
!!!
Exception while solr commit.
java.io.IOException: background merge hit exception: _3kta:C2329239
_3ktb:c11-_3ktb into _3ktc [optimize]
Thanks Israel, I've done a successful import using optimize=false
2009/11/11 Israel Ekpo israele...@gmail.com
2009/11/11 Licinio Fernández Maurelo licinio.fernan...@gmail.com
Hi folks,
i'm getting this error while committing after a dataimport of only 12
docs
!!!
Exception while
Hey Ashish,
If commit fails, the documents won't be indexed! You can look at your index
by pointing luke http://www.getopt.org/luke/ to your data folder (a Solr
index is a Lucene index) or hit:
http://host:port/solr/admin/luke/
to get an XML reply of what your index looks like.
You can commit
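(For example, something like the following, with host and port as placeholders; numTerms=0 skips the expensive per-field term statistics:)
curl 'http://localhost:8983/solr/admin/luke?numTerms=0'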
Hi,
Any idea whether documents on the Solr server are cleared even if a commit
fails, or can I try the commit again after some time?
Thanks,
Ashish
Ashish P wrote:
If I add 10 documents to solrServer as in solrServer.addIndex(docs) (using
Embedded) and then I commit and the commit fails for some
Hi Hossman,
I would love to know how you manage this.
thanks,
Shalin Shekhar Mangar wrote:
On Fri, Mar 6, 2009 at 8:47 AM, Steve Conover scono...@gmail.com wrote:
That's exactly what I'm doing, but I'm explicitly replicating, and
committing. Even under these circumstances,
: My application is in prod and quite frequently getting NullPointerException.
...
: java.lang.NullPointerException
: at
com.fm.search.incrementalindex.service.AuctionCollectionServiceImpl.indexData(AuctionCollectionServiceImpl.java:251)
: at
)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:529)
Thanks,
Mahendra
--- On Sat, 3/14/09, Yonik Seeley yo...@lucidimagination.com wrote:
From: Yonik Seeley yo...@lucidimagination.com
Subject: Re: Commit is taking very long time
To: solr-user@lucene.apache.org
Date: Saturday
From your logs, it looks like the time is spent in closing of the index.
There may be some pending deletes buffered, but they shouldn't take too long.
There could also be a merge triggered... but this would only happen
sometimes, not every time you commit.
One more relatively recent change in
On Fri, Mar 6, 2009 at 8:47 AM, Steve Conover scono...@gmail.com wrote:
That's exactly what I'm doing, but I'm explicitly replicating, and
committing. Even under these circumstances, what could explain the
delay after commit before the new index becomes available?
How are you explicitly
Yep, I notice the default is true/true, but I explicitly specified
both those things too and there's no difference in behavior.
On Wed, Mar 4, 2009 at 7:39 PM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Thu, Mar 5, 2009 at 6:06 AM, Steve Conover scono...@gmail.com wrote:
I'm doing
On Thu, Mar 5, 2009 at 10:30 PM, Steve Conover scono...@gmail.com wrote:
Yep, I notice the default is true/true, but I explicitly specified
both those things too and there's no difference in behavior.
Perhaps you are indexing on the master and then searching on the slaves? It
may be the delay
That's exactly what I'm doing, but I'm explicitly replicating, and
committing. Even under these circumstances, what could explain the
delay after commit before the new index becomes available?
On Thu, Mar 5, 2009 at 10:55 AM, Shalin Shekhar Mangar
shalinman...@gmail.com wrote:
On Thu, Mar 5,
: I suspect this has something to do with waiting for the searcher to
: warm and switch over (?). Though, I'm confused because when I print
: out /solr/admin/registry.jsp, the hashcode of the Searcher changes
: immediately (as the commit docs say, the commit operation blocks by
: default until a
On Thu, Mar 5, 2009 at 6:06 AM, Steve Conover scono...@gmail.com wrote:
I'm doing some testing of a solr master/slave config and find that,
after syncing my slave, I need to sleep for about 400ms after commit
to see the new index state. i.e. if I don't sleep, and I execute a
query, I get
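(A sketch of an explicit commit carrying the waitFlush/waitSearcher flags discussed in this thread; note they only block the request on the server that receives it, which is why a replicated slave can still lag:)
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' \
  --data-binary '<commit waitFlush="true" waitSearcher="true"/>'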
It's actually the space, sorry.
But yes, my snapshots look huge, around 3GB every 20 minutes, so should I clean
them up more often, like every 4 hours?
sunnyfr wrote:
Hi,
Last night I got an error during the import and I don't get what
it means, and it even killed my
Hi
Yes I saw that afterward so I decreased it from 5000 to 4500
Sunny
Grant Ingersoll wrote:
It looks like you are running out of memory. What is your heap size?
On Feb 11, 2009, at 4:09 AM, sunnyfr wrote:
Hi
Have you any idea why, after a night with Solr running, but just
Batch committing is always a better option than committing for each
document. An optimize automatically commits. Note that you may not need to
optimize very frequently. For a lot of cases, optimizing once per day works
fine.
Yes, but a commit once per day won't show updated data until the next day,
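(A sketch of the batch pattern being recommended here: many adds, then one commit; URLs and field values are placeholders:)
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' \
  --data-binary '<add><doc><field name="id">1</field></doc><doc><field name="id">2</field></doc></add>'
curl 'http://localhost:8983/solr/update' -H 'Content-Type: text/xml' --data-binary '<commit/>'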
It looks like you are running out of memory. What is your heap size?
On Feb 11, 2009, at 4:09 AM, sunnyfr wrote:
Hi
Have you any idea why, after a night with Solr running with just a
commit every five minutes??
It looks like the process never shuts down???
root 29428 0.0 0.0 53988 2648 ?
12275 documents in 422 seconds = 29 docs/second. How fast do you want it to
complete?
How much time do the queries take to create a document? We don't know the
size of the documents.
On Thu, Feb 5, 2009 at 4:11 PM, sunnyfr johanna...@gmail.com wrote:
Hi,
Sorry but I don't know where is the
Yes, the average is 12 docs per second updated.
I have 8.5M documents and I try to update every 5 minutes, so I guess with 8GB
of RAM I have no choice but to run with almost no warmup and cache. My data
folder is about 5.8GB.
What would you reckon?
I actually reduced the warmup and cache; it works fine now, I will see
sunnyfr wrote:
Yes, the average is 12 docs per second updated.
In our case with indexing normal web-pages on a normal workstation we
have about 10 docs per second (updating + committing). This feels quite
long. But if this is normal... ok.
I actually reduced the warmup and cache; it works fine now, I
On Thu, Feb 5, 2009 at 7:07 PM, Gert Brinkmann g...@netcologne.de wrote:
sunnyfr wrote:
Yes, the average is 12 docs per second updated.
In our case with indexing normal web-pages on a normal workstation we
have about 10 docs per second (updating + committing). This feels quite
long. But if
Hi
I have now recreated the whole index with new index files and all is back to
normal again. I think something had happened to our old index files.
Many thanks to you who tried to help.
Uwe
On Mon, Oct 6, 2008 at 5:39 PM, Uwe Klosa [EMAIL PROTECTED] wrote:
I already had the chance to set up a
I already had the chance to set up a new server for testing. Before deploying
my application I checked my solrconfig against the solrconfig from 1.3 and
removed the deprecated parameters. I started updating the new index. I
ingest 100 documents at a time and then I do a commit(). With 2000
5 minutes for only one update is slow.
On Fri, Oct 3, 2008 at 8:13 PM, Fuad Efendi [EMAIL PROTECTED] wrote:
Hi Uwe,
5 minutes is not slow; commit can't be realtime... I do commit + optimize
once a day at 3:00AM. It takes 15-20 minutes, but I have several million
daily updates...
Is there
Thanks Mike
The use of fsync() might be the answer to my problem, because I have
installed Solr, for lack of other possibilities, in a zone on Solaris with ZFS,
which slows down when many fsync() calls are made. This will be fixed in an
upcoming release of Solaris, but I will move as soon as possible
Hmm OK that seems like a possible explanation then. Still it's spooky
that it's taking 5 minutes. How many files are in the index at the
time you call commit?
I wonder if you were to simply pause for say 30 seconds, before
issuing the commit, whether you'd then see the commit go
There are around 35,000 files in the index. When I started indexing 5 weeks
ago with only 2000 documents I did not see this issue. I saw it for the first
time with around 10,000 documents.
Before that I had been using the same instance on a Linux machine with up
to 17,000 documents and I haven't
Yikes! That's way too many files. Have you changed mergeFactor? Or
implemented a custom DeletionPolicy or MergePolicy?
Or... does anyone know of something else in Solr's configuration that
could lead to such an insane number of files?
Mike
Uwe Klosa wrote:
There are around 35,000
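(The mergeFactor Mike is asking about lives in solrconfig.xml; a sketch with the common default of 10, purely illustrative -- higher values leave more segment files on disk:)
<indexDefaults>
  <mergeFactor>10</mergeFactor>
</indexDefaults>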
Oh, you meant index files. I misunderstood your question. Sorry, now that I
read it again I see what you meant. There are only 136 index files. So no
problem there.
Uwe
On Sat, Oct 4, 2008 at 1:59 PM, Michael McCandless
[EMAIL PROTECTED] wrote:
Yikes! That's way too many files. Have you
Oh OK, phew. I misunderstood your answer too!
So it seems like fsync with ZFS can be very slow?
Mike
Uwe Klosa wrote:
Oh, you meant index files. I misunderstood your question. Sorry, now that I
read it again I see what you meant. There are only 136 index files. So no
problem there.
On Fri, Oct 3, 2008 at 2:28 PM, Michael McCandless
[EMAIL PROTECTED] wrote:
Yonik, when Solr commits what does it actually do?
Less than it used to (Solr now uses Lucene to handle deletes).
A solr-level commit closes the IndexWriter, calls some configured
callbacks, opens a new IndexSearcher,
On Sat, Oct 4, 2008 at 9:35 AM, Michael McCandless
[EMAIL PROTECTED] wrote:
So it seems like fsync with ZFS can be very slow?
The other user that appears to have a commit issue is on Win64.
http://www.nabble.com/*Very*-slow-Commit-after-upgrading-to-solr-1.3-td19720792.html#a19720792
-Yonik
An "Opening Server" log entry always appears directly after "start commit" with
no delay. But I can see many {commit=} entries with QTime around 280,000 ms
(4 and a half minutes).
One difference I could see compared to your logging is that I have waitFlush=true.
Could that have this impact?
Uwe
On Sat, Oct 4, 2008 at 4:36
On Sat, Oct 4, 2008 at 11:55 AM, Uwe Klosa [EMAIL PROTECTED] wrote:
An "Opening Server" log entry always appears directly after "start commit" with
no delay.
Ah, so it doesn't look like it's the close of the IndexWriter then!
When do you see the end_commit_flush?
Could you post everything in your log
Hi Sanraj,
It would be helpful if you put more effort into writing your emails. It is
very difficult to understand your question.
Solr does not perform commits internally. You have to explicitly call commit
once you are done adding/deleting your documents. A commit makes the changes
visible to
Similar report with no response yet:
http://www.nabble.com/*Very*-slow-Commit-after-upgrading-to-solr-1.3-td19720792.html#a19720792
Uwe Klosa wrote:
Hi
I have a big problem with one of my solr instances. A commit can take up to
5 minutes. This time does not depend on the number of documents
Hi Uwe,
5 minutes is not slow; commit can't be realtime... I do
commit + optimize once a day at 3:00AM. It takes 15-20 minutes, but I
have several million daily updates...
Is there a way to see why commits are slow? Has anyone had the same problem,
and what was the solution?
On Fri, Oct 3, 2008 at 1:56 PM, Uwe Klosa [EMAIL PROTECTED] wrote:
I have a big problem with one of my solr instances. A commit can take up to
5 minutes. This time does not depend on the number of documents which are
updated. The difference for 1 or 100 updated documents is only a few
seconds.
101 - 200 of 235 matches