Hi Shaveta,
simple, index a doc and search for this ;)
A soft commit stands for near-real-time search. It could take a couple
of seconds until you see this doc,
but it should be there.
Best regards
Vadim
2012/11/26 Shaveta_Chawla shaveta.cha...@knimbus.com:
I have migrated solr 3.6 to solr 4.0. I have
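A minimal sketch of how such a near-real-time commit could be requested on the update URL (host, port, and core are assumptions, not taken from the thread):

```python
from urllib.parse import urlencode

# Assumed Solr base URL; adjust host/port/core to your setup.
SOLR = "http://localhost:8983/solr"

def update_url(soft=True):
    """Build an update URL that triggers a commit.

    With softCommit=true (Solr 4.x), new docs become searchable
    without a full flush to disk, which is why they show up
    "a couple of seconds" after indexing."""
    params = {"commit": "true"}
    if soft:
        params["softCommit"] = "true"
    return SOLR + "/update?" + urlencode(params)

print(update_url())
# -> http://localhost:8983/solr/update?commit=true&softCommit=true
```

Fetching this URL (e.g. with urllib) against a running Solr performs the commit.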
Hi,
your JVM needs more RAM. My setup works well with 10 cores and 300 million
docs: Xmx8GB, Xms8GB, and 16GB for the OS.
But it's how Bernd mentioned, the memory consumption depends on the
number of fields and the fieldCache.
Best Regards
Vadim
2012/11/16 Bernd Fehling bernd.fehl...@uni-bielefeld.de:
I
Hi,
what does your update/add command look like?
Regards
Vadim
2012/10/18 rayvicky zongwei...@gmail.com:
i make it work on weblogic.
but when i add to or update the index, it errors:
2012-10-17 03:47:03 PM CST Error HTTP Session BEA-100060 An
unexpected error occurred while retrieving the session for
Hi,
these are JAVA_OPTS params, you can find and set this stuff in the
startManagedWeblogic script.
Best regards
Vadim
2012/10/16 rayvicky zongwei...@gmail.com:
who can help me ?
where to set -DzkRun -Dbootstrap_conf=true
-DzkHost=localhost:9080 -DnumShards=2
in weblogic
--
Hi Rogerio,
i can imagine what it is. Tomcat extracts the war files into
/var/lib/tomcatXX/webapps.
If you already ran an older Solr version on your server, the old
extracted Solr war could still be there (keyword: Tomcat cache).
Delete the /var/lib/tomcatXX/webapps/solr folder and restart Tomcat,
but it should work with
pol* tel*~5 types of queries.
Ahmet
--- On Thu, 9/27/12, Vadim Kisselmann v.kisselm...@gmail.com wrote:
From: Vadim Kisselmann v.kisselm...@gmail.com
Subject: Re: Proximity(tilde) combined with wildcard, AutomatonQuery ?
To: solr-user@lucene.apache.org
Date: Thursday
Hi Ahmet,
thanks for your reply:)
I see that it does not come with the 4.0 release, because the given
patches do not work with this version.
Right?
Best regards
Vadim
2012/9/26 Ahmet Arslan iori...@yahoo.com:
we assume i have a simple query like this with wildcard and
tilde:
japa*
Hi Roy,
jepp, it works with Tomcat 6 and an external Zookeeper.
I will publish a blogpost about it tomorrow on sentric.ch
My blogpost is ready, but i had no time to publish it in the last
couple of days:)
Best regards
Vadim
2012/9/27 Markus Jelsma markus.jel...@openindex.io:
Hi - on Debian
Hi guys,
we assume i have a simple query like this with wildcard and tilde:
japa* fukushima~10
instead of japan fukushima~10 OR japanese fukushima~10, etc.
Do we have a solution in Solr 4.0 to work with these kinds of queries?
Does the AutomatonQuery/Filter cover this case?
Best regards
Vadim
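At the time of this thread such queries needed the SOLR-1604 patches; later Solr releases ship a complexphrase parser that covers wildcards inside a proximity phrase. A hedged sketch of what the request could look like (host and core are assumptions):

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed host/core

def complexphrase_url(phrase, slop):
    """Build a select URL using the complexphrase query parser,
    which allows wildcards inside a sloppy phrase,
    e.g. "japa* fukushima"~10."""
    q = '{!complexphrase}"%s"~%d' % (phrase, slop)
    return SOLR + "/select?" + urlencode({"q": q})

print(complexphrase_url("japa* fukushima", 10))
```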
\solr (schema.xml,
solrconfig.xml ...)
-Original Message-
From: Vadim Kisselmann [mailto:v.kisselm...@gmail.com] Sent: Friday,
August 24, 2012 07:26
To: solr-user@lucene.apache.org
Subject: Re: Problem to start solr-4.0.0-BETA with tomcat-6.0.20
a presumption:
do
your docs are marked as deleted.
You should optimize after the commit; then they will really be deleted.
It's easier and faster to stop your Jetty/Tomcat, drop your index
directory and restart your servlet container...
When that's not possible, optimize.
regards
Vadim
2012/8/27 Jamel ESSOUSSI
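The advice above (deletes only flag docs; an optimize purges them) could look like this in practice. The URLs and the stream.body usage are a sketch, assuming a single-core setup:

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed base URL

def delete_by_query_url(query):
    """Deleting only flags matching docs as deleted in their segments."""
    body = "<delete><query>%s</query></delete>" % query
    return SOLR + "/update?" + urlencode({"stream.body": body,
                                          "commit": "true"})

def optimize_url():
    """An optimize (forced merge) rewrites the segments and
    physically drops the flagged docs."""
    return SOLR + "/update?" + urlencode({"optimize": "true"})

print(delete_by_query_url("id:123"))
print(optimize_url())
```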
a presumption:
do you use your old solrconfig.xml files from older installations?
If yes, compare the default config with yours.
2012/8/23 Claudio Ranieri claudio.rani...@estadao.com:
I made this instalation on a new tomcat.
With Solr 3.4.*, 3.5.*, 3.6.* works with jars into
Hi folks,
i have this case:
i want to update my Solr 4.0 from trunk to Solr 4.0 alpha. The index
structure has changed, so i can't replicate.
10 cores are in use, each with 30 million docs. We assume that all fields
are stored and indexed.
What is the best way to export the docs from all cores on one
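One way to sketch such an export (all fields stored, as assumed above) is plain paging over a *:* query per core and re-posting the docs to the new Solr. Host, core names, and the sort field are assumptions; for tens of millions of docs, deep start/rows paging gets slow, so this is only a sketch:

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983"                 # assumed host
CORES = ["core%d" % i for i in range(10)]      # the 10 cores mentioned above

def export_page_url(core, start, rows=1000):
    """One page of a full dump: all stored fields, sorted by the
    (assumed) id field so paging stays stable. Fetch each page,
    e.g. with urllib, and re-post the docs to the target Solr."""
    params = {"q": "*:*", "start": start, "rows": rows,
              "fl": "*", "sort": "id asc", "wt": "json"}
    return "%s/solr/%s/select?%s" % (SOLR, core, urlencode(params))

print(export_page_url(CORES[0], 0))
```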
same problem.
but Tomcat 6 needs read/write permissions on your index here.
regards
vadim
2012/7/14 Bruno Mannina bmann...@free.fr:
I found the problem, I think. It was a permission problem on the schema.xml.
schema.xml was only readable by the solr user.
Now I have the same problem with
regards
Vadim
2012/7/5 Stefan Matheis matheis.ste...@googlemail.com:
Great, thanks Vadim
On Thursday, July 5, 2012 at 9:34 AM, Vadim Kisselmann wrote:
Hi Stefan,
ok, i will test the latest version from trunk with Tomcat in the next
few days and open a new issue :)
regards
Vadim
2012/7/3
for your failure.
Can you raise the JVM memory and see if you still hit the spike and go
OOM? This is very unlikely an IndexWriter problem. I'd rather look at
your warmup queries, i.e. fieldCache / FieldValueCache usage. Are you
sorting / faceting on anything?
simon
On Tue, Jul 10, 2012 at 4:49 PM, Vadim
Hi folks,
my test server with Solr 4.0 from trunk (version 1292064 from late
February) throws this exception...
auto commit error...:java.lang.IllegalStateException: this writer hit
an OutOfMemoryError; cannot commit
at
Hi Robert,
Can you run Lucene's checkIndex tool on your index?
No, unfortunately not. This Solr should run without stoppage; a
Tomcat restart is ok, but not more :)
I tested newer trunk versions a couple of months ago, but they all
fail with Tomcat.
i will test 4.0-alpha in the next days with
Hi Stefan,
ok, i will test the latest version from trunk with Tomcat in the next
few days and open a new issue :)
regards
Vadim
2012/7/3 Stefan Matheis matheis.ste...@googlemail.com:
On Tuesday, July 3, 2012 at 8:10 PM, Vadim Kisselmann wrote:
sorry, i overlooked your latest comment with the new
same problem here:
https://mail.google.com/mail/u/0/?ui=2view=btopver=18zqbez0n5t35q=tomcat%20v.kisselmannqs=truesearch=queryth=13615cfb9a5064bdqt=kisselmann.1.tomcat.1.tomcat's.1.v.1cvid=3
) that.
Regards
Stefan
On Tuesday, July 3, 2012 at 4:00 PM, Vadim Kisselmann wrote:
same problem here:
https://mail.google.com/mail/u/0/?ui=2view=btopver=18zqbez0n5t35q=tomcat%20v.kisselmannqs=truesearch=queryth=13615cfb9a5064bdqt=kisselmann.1.tomcat.1.tomcat's.1.v.1cvid=3
https
in your schema.xml you can set the default query parser operator, in
your case <solrQueryParser defaultOperator="AND"/>, but it's
deprecated.
When you use edismax, read this: http://drupal.org/node/1559394 .
The mm param is the answer here.
Best regards
Vadim
2012/7/2 Steve Fatula
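A sketch of the mm-based equivalent of an AND default, assuming the standard edismax parameters:

```python
from urllib.parse import urlencode

def edismax_params(query, mm="100%"):
    """edismax does not honor the deprecated defaultOperator from
    schema.xml; the mm (minimum-should-match) parameter takes its
    place: mm=100% behaves like AND, mm=1 like OR."""
    return urlencode({"q": query, "defType": "edismax", "mm": mm})

print(edismax_params("nascar author:serg*"))
```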
Hi folks,
i have to look after an old live system with Solr 1.4.
When i optimize a bigger index of roughly 200GB (after optimize
and cut, 100GB) and my slaves
replicate the newest version after(!) the optimize, they all hang at
100% in replication, and at once they have index sizes of circa 300GB.
Forget to mention:
After a Tomcat restart, the slaves still have a 300GB index.
After a manual replication command in the UI, it's 100GB like the
master within a couple of seconds and all is ok.
2012/6/19 Vadim Kisselmann v.kisselm...@googlemail.com:
Hi folks,
i have to look for an old live system
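The "manual replication command" mentioned above can also be issued directly against the slave's replication handler; the slave host name below is an assumption:

```python
from urllib.parse import urlencode

SLAVE = "http://slave1:8983/solr"  # assumed slave base URL

def replication_url(command):
    """Solr 1.4-style replication handler commands: 'fetchindex'
    makes the slave pull the current index from its master,
    'details' reports replication state and index versions."""
    return SLAVE + "/replication?" + urlencode({"command": command})

print(replication_url("fetchindex"))
```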
Hi Otis,
done :) Till now we use Graphite, Ganglia and Zabbix. For our JVM
monitoring JStatsD.
Best regards
Vadim
2012/5/31 Otis Gospodnetic otis_gospodne...@yahoo.com:
Hi,
Super quick poll: What do you use for Solr performance monitoring?
Vote here:
. apr. 2012, at 11:21, Vadim Kisselmann wrote:
Hi folks,
i use solr 4.0 from trunk, and edismax as standard query handler.
In my schema i defined this: <solrQueryParser defaultOperator="AND"/>
I have this simple problem:
nascar +author:serg* (3500 matches)
+nascar +author:serg* (1 match
://www.lucidimagination.com/blog/2010/05/23/whats-a-dismax/
http://lucene.apache.org/solr/api/org/apache/solr/util/doc-files/min-should-match.html
Best regards
Vadim
2012/4/30 Vadim Kisselmann v.kisselm...@googlemail.com:
Hi Jan,
thanks for your response!
My qf parameter for edismax is: title. My
Hi folks,
i use solr 4.0 from trunk, and edismax as standard query handler.
In my schema i defined this: <solrQueryParser defaultOperator="AND"/>
I have this simple problem:
nascar +author:serg* (3500 matches)
+nascar +author:serg* (1 match)
nascar author:serg* (5200 matches)
nascar AND
hi,
when only the slaves are used for search, why not: more RAM for the OS.
I keep the default settings on my master because, when my slaves are
busy with client queries,
i can test a few things on the master.
best regards
vadim
2012/4/27 Jamel ESSOUSSI jamel.essou...@gmail.com:
Hi,
I use two
of the various files above should give
you a hint as to what's using the most space, but it'll be a bit
of a hunt for you to pinpoint what's actually up. TermVectors
and norms are often sources of using up space.
Best
Erick
On Wed, Mar 28, 2012 at 10:55 AM, Vadim Kisselmann
v.kisselm
and testing shows problems
Best
Erick
On Thu, Mar 29, 2012 at 9:32 AM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
Hi Erick,
thanks:)
The admin UI give me the counts, so i can identify fields with big
bulks of unique terms.
I known this wiki-page, but i read it one more time
Hello folks,
i work with Solr 4.0 r1292064 from trunk.
My index grows fast; with 10 million docs i get an index size of 150GB
(25% stored, 75% indexed).
I want to find out which fields (contents) are too large, to consider countermeasures.
How can i localize/discover the largest fields in my index?
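One place to start is the Luke request handler, which reports per-field term counts and top terms; the URL below is a sketch (core base URL assumed):

```python
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr"  # assumed core base URL

def luke_url(num_terms=10):
    """/admin/luke lists every field with its distinct-term count
    and (with numTerms > 0) the top terms, a first hint at which
    fields dominate the index size."""
    return SOLR + "/admin/luke?" + urlencode({"numTerms": num_terms,
                                              "wt": "json"})

print(luke_url())
```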
also view it at nabble using this link:
http://lucene.472066.n3.nabble.com/SolrCloud-new-td1528872.html
Best,
Jerry M.
On Wed, Mar 21, 2012 at 5:51 AM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
Hello folks,
i read the SolrCloud Wiki and Bruno Dumon's blog entry with his First
Hello folks,
i read the SolrCloud Wiki and Bruno Dumon's blog entry with his First
Exploration of SolrCloud.
Examples and a first setup with embedded Jetty and ZK WORKS without problems.
I tried to setup my own configuration with Tomcat and an external
Zookeeper(my Master-ZK), but it doesn't
you have to re-index your data.
best regards
vadim
2012/3/21 syed kather in.ab...@gmail.com:
Team
I have indexed my data with Solr 3.3. As I need to use the
hierarchical facets feature from Solr 4.0:
can I use the existing data with Solr 4.0, or do I need to
re-index the
Hi folks,
i commented on this issue: https://issues.apache.org/jira/browse/SOLR-3238 ,
but i want to ask here if anyone has the same problem.
I use Solr 4.0 from trunk(latest) with tomcat6.
I get an error in New Admin UI:
This interface requires that you activate the admin request handlers,
add
Hi Chris,
thanks for your response.Ok, we will wait :)
Best Regards
Vadim
2012/3/8 Chris Hostetter hossman_luc...@fucit.org
: where and when is the next Eurocon scheduled?
: I read something about denmark and autumn 2012(i don't know where *g*).
I do not know where, but sometime in the
Hi folks,
where and when is the next Eurocon scheduled?
I read something about denmark and autumn 2012(i don't know where *g*).
Best regards and thanks
Vadim
Set maxBooleanClauses in your solrconfig.xml higher; the default is 1024.
Your query blasts this limit.
Regards
Vadim
2012/2/22 Darren Govoni dar...@ontrenet.com
Hi,
I am suddenly getting a maxClauseCount exception for no reason. I am
using Solr 3.5. I have only 206 documents in my index.
Any
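The setting in question is a single element in solrconfig.xml; a sketch that just generates the line (the value 4096 is an arbitrary example above the 1024 default):

```python
import xml.etree.ElementTree as ET

# Build the <maxBooleanClauses> element as it would appear in
# solrconfig.xml (place it in the <query> section).
el = ET.Element("maxBooleanClauses")
el.text = "4096"
print(ET.tostring(el, encoding="unicode"))
# -> <maxBooleanClauses>4096</maxBooleanClauses>
```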
Hello folks,
I built a simple custom component for the "hl.q" query.
My case was to inject hl.q params on the fly, with filter params like
fields which were in my
standard query. These were highlighted, because Solr/Lucene has no way of
interpreting an extended q clause and saying this part is a
:
Vadim,
Would using xslt output help?
Otis
Performance Monitoring SaaS for Solr -
http://sematext.com/spm/solr-performance-monitoring/index.html
From: Vadim Kisselmann v.kisselm...@googlemail.com
To: solr-user@lucene.apache.org
Sent: Wednesday
Hello folks,
i want to reindex about 10 million docs from one Solr (1.4.1) to another
Solr (1.4.1).
I changed my schema.xml (field types sint to slong), so standard
replication would fail.
What is the fastest and smartest way to manage this?
This here sounds great (EntityProcessor):
Hi Ahmet,
thanks for the quick response :)
I've already thought the same...
And it will be a pain to export and import this huge doc set as CSV.
Do i have another solution?
Regards
Vadim
2012/2/8 Ahmet Arslan iori...@yahoo.com:
i want to reindex about 10Mio. Docs. from one Solr(1.4.1) to
another
Another problem appeared ;)
How can i export my docs in CSV format?
In Solr 3.1+ i can use the query param wt=csv, but in Solr 1.4.1?
Best Regards
Vadim
2012/2/8 Vadim Kisselmann v.kisselm...@googlemail.com:
Hi Ahmet,
thanks for quick response:)
I've already thought the same
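Since Solr 1.4.1 has no CSV response writer, one workaround is to fetch the docs with another writer (e.g. wt=json) and build the CSV client-side; a minimal sketch:

```python
import csv
import io

def docs_to_csv(docs, fields):
    """Convert a list of doc dicts (as parsed from a wt=json
    response) into CSV text; fields not listed are ignored."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields,
                            extrasaction="ignore")
    writer.writeheader()
    writer.writerows(docs)
    return buf.getvalue()

print(docs_to_csv([{"id": 1, "title": "a,b"}], ["id", "title"]))
```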
. Sounds like a plan? :)
Best Regards
Vadim
2012/2/1 Koji Sekiguchi k...@r.email.ne.jp:
(12/02/01 4:28), Vadim Kisselmann wrote:
Hmm, i don't know, but i can test it tomorrow at work.
i'm not sure about the right syntax with hl.q (?)
but i report :)
hl.q can accept same syntax of q
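Since hl.q accepts the same syntax as q, the highlight query can be sketched as a separate parameter (the parameter names are the standard highlighting ones; the field name is an assumption):

```python
from urllib.parse import urlencode

def highlight_params(user_query, hl_query, field="text"):
    """hl.q lets the terms used for highlighting differ from the
    terms used for matching, so filter-like clauses in q need not
    show up highlighted."""
    return urlencode({"q": user_query, "hl": "true",
                      "hl.fl": field, "hl.q": hl_query})

print(highlight_params("(roomba OR irobot) AND language:de",
                       "roomba irobot"))
```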
Hi,
i have problems with edismax, filter queries and highlighting.
First of all: can edismax deal with filter queries?
My case:
Edismax is my default requestHandler.
My query in SolrAdminGUI: (roomba OR irobot) AND language:de
You can see, that my q is roomba OR irobot and my fq is
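The usual resolution for this pattern is to keep only the user part in q and move the restricting clause into fq; a sketch of the split for the query above:

```python
from urllib.parse import urlencode

# Split the combined query into the user query (q) and a cached,
# non-scoring filter query (fq).
params = urlencode({"q": "roomba OR irobot",
                    "fq": "language:de",
                    "defType": "edismax"})
print(params)
```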
Hi Ahmet,
thanks for the quick response :)
I've also discovered this failure.
I wonder that the query itself works.
For example: query = language:de
I get results which only have language:de.
The fq also works, and i get only the de results in my field language.
I can't understand this behavior. It
with debugQuery=on would help.
No, fq does NOT get translated into q params, it's a
completely separate mechanism so I'm not quite sure
what you're seeing.
Best
Erick
On Tue, Jan 31, 2012 at 8:40 AM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
Hi Ahmet,
thanks for quick response
Hi Erick,
I didn't read your first post carefully enough, I was keying
on the words filter query. Your query does not have
any filter queries! I thought you were talking
about fq=language:de type clauses, which is what
I was responding to.
no problem, i understand:)
Solr/Lucene have no
Hmm, i don't know, but i can test it tomorrow at work.
i'm not sure about the right syntax with hl.q (?)
but i report :)
2012/1/31 Ahmet Arslan iori...@yahoo.com:
Try the fq option maybe?
I thought so, unfortunately.
fq will be the only option. I should rebuild my
application :)
Could
Hi Christopher,
when all needed jars are included, you can only have wrong paths in
your solrconfig.xml
Regards
Vadim
2012/1/26 Stanislaw Osinski stanislaw.osin...@carrotsearch.com:
Hi,
Can you paste the logs from the second run?
Thanks,
Staszek
On Wed, Jan 25, 2012 at 00:12,
Hello Folks,
i want to decrease the max number of terms for my fields to 500.
I thought that the maxFieldLength parameter in solrconfig.xml is
intended for this.
In my case it doesn't work.
Half of my text fields contain longer text (about 1 words).
With 100 docs in my index i had an
P.S.:
i use Solr 4.0 from trunk.
Is maxFieldLength deprecated in Solr 4.0 ?
If so, do i have an alternative to decrease the number of terms during indexing?
Regards
Vadim
2012/1/26 Vadim Kisselmann v.kisselm...@googlemail.com:
Hello Folks,
i want to decrease the max. number of terms for my
Sean, Ahmet,
thanks for response:)
I use Solr 4.0 from trunk.
In my solrconfig.xml is only one maxFieldLength param.
I think it is deprecated in Solr Versions 3.5+...
But LimitTokenCountFilterFactory works in my case :)
Thanks!
Regards
Vadim
2012/1/26 Ahmet Arslan iori...@yahoo.com:
i want
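The working replacement mentioned above is an analyzer filter in schema.xml; a sketch that only generates the filter element (the cap of 500 comes from the question):

```python
import xml.etree.ElementTree as ET

# The filter element to add to the field type's analyzer chain in
# schema.xml; it caps the tokens kept per field at indexing time.
f = ET.Element("filter", {
    "class": "solr.LimitTokenCountFilterFactory",
    "maxTokenCount": "500",
})
print(ET.tostring(f, encoding="unicode"))
```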
Hi,
it depends on your hardware.
Read this:
http://www.derivante.com/2009/05/05/solr-performance-benchmarks-single-vs-multi-core-index-shards/
Think about your cache-config (few updates, big caches) and a good
HW-infrastructure.
In my case i can handle a 250GB index with 100 million docs on an i7
.
Regards,
Dmitry
On Tue, Jan 24, 2012 at 10:30 AM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
Hi,
it depends on your hardware.
Read this:
http://www.derivante.com/2009/05/05/solr-performance-benchmarks-single-vs-multi-core-index-shards/
Think about your cache-config (few updates
Hello folks,
is it possible to find out the size (in KB) of specific fields of
one document? Perhaps with Luke or Lucid Gaze?
My case:
docs in my old index (Solr 1.4) have sizes of 3-4KB each.
In my new index (Solr 4.0 trunk) there are about 15KB per doc.
I changed only 2 things in my
Hi Stanislaw,
did you already have time to create a patch?
If not, can you tell me please which lines in which class in source code
are relevant?
Thanks and regards
Vadim Kisselmann
2011/11/29 Vadim Kisselmann v.kisselm...@googlemail.com
Hi,
the quick and dirty way sounds good :)
It would
Hi,
comment out the lines with the collapse component in your solrconfig.xml if
you don't need it.
Otherwise, you're missing the right jars for this component, or the paths to
these jars in your solrconfig.xml are wrong.
regards
vadim
2011/12/1 Pawan Darira pawan.dar...@gmail.com
Hi
I am migrating
the trick.
Cheers,
S.
On Thu, Dec 1, 2011 at 10:43, Vadim Kisselmann
v.kisselm...@googlemail.comwrote:
Hi Stanislaw,
did you already have time to create a patch?
If not, can you tell me please which lines in which class in source code
are relevant?
Thanks and regards
Vadim Kisselmann
Hi folks,
i've installed the clustering component in Solr 1.4.1 and it works, but not
really :)
You can see that the doc ids are corrupt.
<arr name="clusters"><lst>
<arr name="labels">
<str>Euro-Krise</str>
</arr><arr name="docs">
<str>½Íџ</str>
<str>¾ͽ</str>
<str>¿)ై</str>
<str></str>
</arr></lst>
my fields:
<field name="id" type="sint"
ids to
the output. I've just tried a similar configuration on Solr 3.5 and the
integer identifiers looked fine. Can you try the same configuration on Solr
3.5?
Thanks,
Staszek
On Tue, Nov 29, 2011 at 12:03, Vadim Kisselmann
v.kisselm...@googlemail.com
wrote:
Hi folks,
i've installed
Hi,
the quick and dirty way sounds good :)
It would be great if you could send me a patch for 1.4.1.
By the way, i tested Solr 3.5 with my 1.4.1 test index.
I can search and optimize, but clustering doesn't work (java.lang.Integer
cannot be cast to java.lang.String).
The uniqueKey for my docs is the
Hi,
yes, see http://wiki.apache.org/solr/DistributedSearch
Regards
Vadim
2011/11/2 Val Minyaylo vminya...@centraldesktop.com
Have you tried to query multiple cores at same time?
On 10/31/2011 8:30 AM, Vadim Kisselmann wrote:
it works.
it was one wrong placed backslash in my config
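Querying multiple cores at once works via the shards parameter from the DistributedSearch wiki page linked above; the hosts and core names below are assumptions:

```python
from urllib.parse import urlencode

ENTRY = "http://localhost:8983/solr/core0"  # core receiving the request

def distributed_url(shards, query="*:*"):
    """The receiving core fans the query out to every core listed
    in 'shards' and merges the results."""
    return ENTRY + "/select?" + urlencode({"q": query,
                                           "shards": ",".join(shards)})

print(distributed_url(["localhost:8983/solr/core0",
                       "localhost:8983/solr/core1"]))
```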
Hi Edwin, Chris
it's an old bug. I have big problems too with OffsetExceptions when i use
highlighting or Carrot.
It looks like a problem with HTMLStripCharFilter.
The patch doesn't work.
https://issues.apache.org/jira/browse/LUCENE-2208
Regards
Vadim
2011/11/11 Edwin Steiner
Hello folks,
i have questions about MLT and deduplication, and what would be the best
choice in my case.
Case:
I index 1000 docs; 5 of them are 95% the same (for example: copy-pasted
blog articles from different sources, with slight changes (author name,
etc.)).
But they have differences.
*Now
Hello folks,
i have a problem with shard indexing.
With a single core i use this update command:
http://localhost:8983/solr/update .
Now i have 2 shards; we can call them core0 / core1:
http://localhost:8983/solr/core0/update .
Can i adjust anything to index in the same way as
architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com
On 2. nov. 2011, at 10:00, Vadim Kisselmann wrote:
Hello folks,
i have a problem with shard indexing.
With a single core i use this update command:
http://localhost:8983/solr/update .
now i have 2 shards
?
Thanks and Regards
Vadim
2011/11/2 Yury Kats yuryk...@yahoo.com
There's a defaultCore parameter in solr.xml that lets you specify which
core should be used when none is specified in the URL. You can change that
every time you create a new core.
From: Vadim
, Vadim Kisselmann wrote:
Hello Jan,
thanks for your quick response.
It's quite difficult to explain:
We want to create new shards on the fly every month and switch the
default
shard to the newest one.
We always want to index to the newest shard with the same update query
like http
Hi folks,
i have a small blockade in the configuration of a multicore setup.
i use the latest Solr version (4.0) from trunk and the example (with Jetty).
A single core runs without problems.
We assume that i have this structure:
/solr-trunk/solr/example/multicore/
it works.
it was one wrongly placed backslash in my config ;)
Sharing the config/schema files is not a problem.
regards vadim
2011/10/31 Vadim Kisselmann v.kisselm...@googlemail.com
Hi folks,
i have a small blockade in the configuration of a multicore setup.
i use the latest Solr version (4.0
Internal Server Error
Error: org.apache.lucene.search.highlight.InvalidTokenOffsetsException:
Token the exceeds length of provided text sized 41
Best Regards
Vadim
2011/10/20 Vadim Kisselmann v.kisselm...@googlemail.com
Hello folks,
i have big problems with InvalidTokenOffsetExceptions
Hello folks,
i have big problems with InvalidTokenOffsetExceptions in highlighting.
It looks like a bug in HTMLStripCharFilter.
H.Wang added a patch in LUCENE-2208, but nobody has time to look at it.
Could one of the committers please take a look at this patch and commit
it, or is this
Hi,
a number of relevant questions has already been given.
i have another one:
which type of docs do you have? Do you add new docs every day? Or is it
a stable number of docs (500 million)?
What about replication?
Regards Vadim
2011/10/17 Otis Gospodnetic otis_gospodne...@yahoo.com
Hi Jesús,
Others
Hello folks,
i have a question about the MLT.
For example my query:
localhost:8983/solr/mlt/?q=gefechtseinsatz+AND+dna&mlt=true&mlt.fl=text&mlt.count=0&mlt.boost=true&mlt.mindf=5&mlt.mintf=5&mlt.minwl=4
*I get 1 query result and 13 MLT docs. The MLT result corresponds to
half of my index.*
In
Hi Fred,
analyze the queries which take longer.
We observe our queries and see q-time problems with queries which
are complex, with phrase queries, or with queries which contain numbers or
special characters.
if you don't know it:
?
Fred.
Am Mittwoch, 28. September 2011 um 13:18 schrieb Vadim Kisselmann:
Hi Fred,
analyze the queries which take longer.
We observe our queries and see the problems with q-time with queries
which
are complex, with phrase queries or queries which contains numbers or
special
why should the optimize reduce the number of files?
It only helps when you have indexed docs with the same unique key.
Do you have differences in numDocs and maxDocs after the optimize?
If yes:
what is your optimize command?
Regards
Vadim
2011/9/28 Manish Bafna manish.bafna...@gmail.com
Try to do
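The numDocs/maxDocs check suggested above boils down to simple arithmetic; only the difference can be reclaimed by an optimize:

```python
def reclaimable_docs(num_docs, max_docs):
    """maxDocs counts docs still present in segments, including
    those flagged as deleted; numDocs counts live docs. The
    difference is what an optimize would physically remove."""
    return max_docs - num_docs

print(reclaimable_docs(100, 120))  # 20 docs flagged as deleted
print(reclaimable_docs(50, 50))    # 0 -> an optimize only merges
```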
,
before optimization there are many files but after optimization i always
end
up with just 3 files in my index filder. Just want to find out if this was
ok.
Thanks
On Wed, Sep 28, 2011 at 1:23 PM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
why should the optimization reduce
)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:662)
Am Mittwoch, 28. September 2011 um 13:53 schrieb Frederik Kraus:
Am Mittwoch, 28. September 2011 um 13:41 schrieb Vadim Kisselmann:
Hi Fred,
ok, it's a strange behavior
files.
no.
during optimize you only delete docs which are flagged as deleted, no
matter how old they are.
if your numDocs and maxDocs have the same number of docs, you only rebuild
and merge your index, but you delete nothing.
Regards
On Wed, Sep 28, 2011 at 6:43 PM, Vadim Kisselmann
be still
open for reading)
2nd time optimize, other than the new index file, all else gets deleted.
This is happening specifically on Windows.
On Wed, Sep 28, 2011 at 8:23 PM, Vadim Kisselmann
v.kisselm...@googlemail.com wrote:
2011/9/28 Manish Bafna manish.bafna...@gmail.com
Tirthankar,
are you indexing 1. smaller docs or 2. books?
if 1: your caches are too big for your memory, as Erick already said.
Try to allocate 10GB for the JVM, leave 14GB for your HDD cache and make your
caches smaller.
if 2: read the blog posts on hathitrust.com.
Hi folks,
I'm writing here again (besides Jira: SOLR-2565); maybe anyone can help
here:
I tested nightly build #1595 with the new patch (2565), but NRT doesn't
work in my case.
I index 10 docs/sec; it takes 1-30 sec to see the results.
Same behavior when i update an existing document.
Hi Markus,
thanks for your answer.
I'm using Solr 4.0 and Jetty now and will observe the behavior and my error logs
next week.
Tomcat can be a reason; we will see, i'll report.
I'm indexing WITHOUT batches, one doc after another. But i will try out
batch indexing as well as
retry indexing
Hello folks,
i use solr 1.4.1 and every 2 to 6 hours i have indexing errors in my log
files.
on the client side:
2011-08-04 12:01:18,966 ERROR [Worker-242] IndexServiceImpl - Indexing
failed with SolrServerException.
Details: org.apache.commons.httpclient.ProtocolException: Unbuffered entity
Hello Shawn,
Primary assumption: You have a 64-bit OS and a 64-bit JVM.
Jepp, it's running 64-bit Linux with 64-bit JVM
It sounds to me like you're I/O bound, because your machine cannot
keep enough of your index in RAM. Relative to your 100GB index, you
only have a maximum of 14GB of RAM
On Mar 17, 2011, at 3:19 PM, Shawn Heisey wrote:
On 3/17/2011 3:43 AM, Vadim Kisselmann wrote:
Unfortunately, this doesn't seem to be the problem. The queries
themselves are running fine. The problem is that the replications is
crawling when there are many queries going
Hi Bill,
You could always rsync the index dir and reload (old scripts).
I used them previously but was getting problems with them. The
application querying the Solr doesn't cause enough load on it to
trigger the issue. Yet.
But this is still something we should investigate.
Indeed :-)
See
Hi everyone,
I have Solr running on one master and two slaves (load balanced) via
Solr 1.4.1 native replication.
If the load is low, both slaves replicate with around 100MB/s from master.
But when I use Solrmeter (100-400 queries/min) for load tests (over
the load balancer), the replication