It seems you are using a 64-bit JVM (a 32-bit JVM can only allocate about 1.5 GB).
You should enable pointer compression with -XX:+UseCompressedOops.
On Thu, Mar 15, 2012 at 1:58 PM, Husain, Yavar yhus...@firstam.com wrote:
Thanks for helping me out.
I have allocated Xms-2.0GB Xmx-2.0GB
However I see
Why should I enable pointer compression?
-- Original --
From: Li Li <fancye...@gmail.com>
Date: Thu, Mar 15, 2012 02:41 PM
To: Husain, Yavar <yhus...@firstam.com>
Cc: solr-user@lucene.apache.org
Subject: Re: Solr out of memory
It can reduce memory usage. For small-heap applications (less than 4 GB) it
may also speed things up.
But be careful: for large-heap applications it depends, so you should run
your own tests.
Our application's test result: it reduced memory usage but increased
response time. We use 25 GB of memory.
Hello.
Is it possible to switch master/slave on the fly without restarting the
server?
-
--- System
One server, 12 GB RAM, 2 Solr instances, 8 cores,
1 core with 45 million documents, the other cores ~200,000
- Solr1 for
would
http://www.lucidimagination.com/blog/2011/10/02/monitoring-apache-solr-and-lucidworks-with-zabbix/
work for your scenario?
Tommaso
2012/3/12 Alex Leonhardt aleonha...@venda.com
Hi All,
I was wondering if anyone knows of a free tool to use to monitor multiple
Solr hosts under one roof ?
Hi folks,
I commented on this issue: https://issues.apache.org/jira/browse/SOLR-3238,
but I want to ask here if anyone has the same problem.
I use Solr 4.0 from trunk (latest) with Tomcat 6.
I get an error in New Admin UI:
This interface requires that you activate the admin request handlers,
add
Hi Alp,
if you have not changed how SOLR logs in general, you should find the
log output in the regular server logfile. For Tomcat you can find this
in TOMCAT_HOME/catalina.out (or search for that name).
If there is a problem with your schema, SOLR should be complaining about
it during
Thanks a ton.
From: Li Li [fancye...@gmail.com]
Sent: Thursday, March 15, 2012 12:11 PM
To: Husain, Yavar
Cc: solr-user@lucene.apache.org
Subject: Re: Solr out of memory exception
it seems you are using 64bit jvm(32bit jvm can only allocate about 1.5GB).
Thanks guys! I'll try out OpenNMS / Zabbix :)
Alex
On 03/14/2012 12:07 AM, Jan Høydahl wrote:
And here is a page on how to wire Solr's JMX info into OpenNMS monitoring tool.
Have not tried it, but as soon as a collector config is defined once I'd guess
it could be re-used, maybe shipped with
Mike
Actually I'm not able to tell you what each value stands for, but what I can
tell you is where the information comes from.
The interface requests /admin/system which is using
FWIW it looks like this feature has been enabled by default since JDK 6 Update
23:
http://blog.juma.me.uk/2008/10/14/32-bit-or-64-bit-jvm-how-about-a-hybrid/
François
On Mar 15, 2012, at 6:39 AM, Husain, Yavar wrote:
Thanks a ton.
From: Li
Hi
I have a scenario where I store a field which is an ID.

ID field
--------
1
3
4

Description mapping
-------------------
1 = Options 1
2 = Options A
3 = Options 3
4 = Options 4a

Is there a way in Solr so that whenever I query this field it returns the
description instead of the ID? Any help
Hey there,
I've been working through the Solr Tutorial
(http://lucene.apache.org/solr/tutorial.html), using the example schema and
documents, just working through step by step trying everything out. Everything
worked out the way it should (just using the example queries and stuff), except
for the
Or just ignore it if you have the disk space. The files will be cleaned up
eventually. I believe they'll magically disappear if you simply bounce the
server (but I work on *nix so can't personally guarantee it). And replication
won't replicate the stale files, so that's not a problem either
Hello all,
I have installed Solr search on my Drupal 7 installation. Currently, it works
as an All search tool. I'd like to limit the scope of the search with an
available pull-down to set the topic for searching.
Attached is a screenshot to better visualize what I have to get implemented
Screenshots/attachments don't make it through to the Apache lists, sorry.
As for limiting search results: if you've indexed your topics into a string
field, then you limit results by adding fq=field_name:value to your Solr
requests. And you can get a list of topics using a
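A concrete request along those lines, assuming a hypothetical string field named `topic` (substitute your own field name and core URL):

```shell
# Full-text query restricted to one topic via a filter query.
# fq clauses are cached independently and do not influence scoring.
curl 'http://localhost:8983/solr/select?q=drupal&fq=topic:News&wt=json'
```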
Erik,
Thank you, attached is the screenshot I tried to submit. I'm VERY new to using
Solr, so I'm going to try and make the best sense of your response with my
limited experience.
Thank you,
AJ
-Original Message-
From: Erik Hatcher [mailto:erik.hatc...@gmail.com]
Sent: Thursday,
Hello,
I still have the same problem after installation.
Files are loaded:
~/appl/apache-solr-3.5.0/example $ java -Dsolr.solr.home=multicore/ -jar
start.jar 2>&1 | grep contrib
INFO: Adding
'file:/home/virus/appl/apache-solr-3.5.0/contrib/velocity/lib/velocity-tools-2.0.jar'
to classloader
INFO:
Here is the link to the screenshot I tried to send about my Topic/Scope issue.
http://imageupload.org/en/file/200809/solr-scope.png.html
-Original Message-
From: Valentin, AJ
Sent: Thursday, March 15, 2012 9:47 AM
To: 'solr-user@lucene.apache.org'
Subject: RE: Adding a Topics (Scope)
Hi all,
I'm facing problems regarding multiple Filter Queries in SOLR 1.4.1 -
I hope someone will be able to help.
Example 1 - works fine: {!tag=myfieldtag}(-(myfield:*))
Example 2 - works fine: {!tag=myfieldtag}((myfield:Bio | myfield:Alexa))
Please note that in Example 2, result sets of
Hi all,
#1. I'm on Windows.
When I run solr+jetty from where the start.jar lives, using this as provided
by the README:
java.exe -Dsolr.solr.home=multicore -jar start.jar
all works fine.
However, if I try this:
c:\HP\solr\example> C:\Program Files\Java\jre6\bin\java.exe
Say I have around 30-40 fields (SQL table columns) indexed using Solr from
the database. I concatenate those fields into one field using Solr's
copyField directive, and then make it the default search field, which I search.
If at the database level itself I perform concatenation of all those fields
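For what it's worth, the copyField setup described here looks roughly like this in schema.xml (the field names are made up for illustration):

```xml
<!-- Catch-all field receiving a copy of every searchable column -->
<field name="text" type="text_general" indexed="true" stored="false"
       multiValued="true"/>

<copyField source="title" dest="text"/>
<copyField source="description" dest="text"/>
<!-- ...one copyField line per indexed column... -->

<!-- Field searched when the query names no field (Solr 3.x schema) -->
<defaultSearchField>text</defaultSearchField>
```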
The problem is that when replicating, the double-size index gets replicated
to the slaves. I am now doing a dummy commit, always with the same document,
and it works fine. After the optimize and dummy-commit process I just end up
with numDocs = x and maxDocs = x+1. I don't get the nice green
I am using Apache Solr 3.5.0.
I have an index size of 4.2 GB on disk, with about 30 M records of small
string and numeric fields:
categ | name | age | sex | addr | balance | min balance | interest | tax |
customertype
I am running a solr server with
jdk1.6.0_21_64/bin/java -Xms512m -Xmx2048M
No, the deleted files do not get replicated. Instead, the slaves do the same
thing as the master, holding on to the deleted files after the new files are
copied over.
The optimize is obsoleting all of your index files, so maybe you should quit
doing that. Without an optimize, the deleted files
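If the optimize can't be dropped entirely, at least make sure replication isn't triggered by it; a sketch of the master-side handler in solrconfig.xml (your existing config may well differ):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <!-- Replicate after ordinary commits only; listing "optimize" here
         would ship the fully rewritten index to every slave -->
    <str name="replicateAfter">commit</str>
    <str name="confFiles">schema.xml,stopwords.txt</str>
  </lst>
</requestHandler>
```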
Is your balance field multi-valued by chance? I don't have much
experience with the stats component, but it may be very inefficient for
larger indexes. How is memory/performance if you turn stats off?
On Thu, Mar 15, 2012 at 11:58 AM, harisundhar hari@gmail.com wrote:
I am using apache solr
Hi all,
Just testing Solr 3.5.0 and I notice a different behavior in this new
version:
select?rows=10&q=sig%3a(54ba3e8fd3d5d8371f0e01c403085a0c)
this query returns no results on my indexes, but works for SOLR 1.4.0
and returns Java heap space java.lang.OutOfMemoryError: Java heap
See:
http://javahowto.blogspot.com/2006/06/6-common-errors-in-setting-java-heap.html
Your Xmx specification is wrong I think.
-Xmx2.0GB
-Xmx2.0G
-Xmx-2GB
-Xmx-2G
-Xmx-2.0GB
-Xmx-2.0G
-Xmx=2G
all fail immediately when I try them on raw Java from a command prompt.
Perhaps
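For the record, the forms the JVM does accept are a bare integer plus an optional k/m/g suffix: no '=', no sign, no decimal point, no trailing 'B':

```shell
# Accepted:
java -Xmx2g -version
java -Xmx2048m -version

# Rejected at startup with "Invalid maximum heap size":
#   java -Xmx2.0G -version
#   java -Xmx=2G  -version
#   java -Xmx-2G  -version
```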
Somehow you'd have to create a custom collector, probably queue off the docs
that made it to the collector and have some asynchronous thread consuming
those docs and sending them in bits...
But this is so antithetical to how Solr operates that I suspect my hand-waving
wouldn't really work out.
The best advice I can give is to spend some time on the admin/analysis page.
For instance, I believe that your first index analysis chain will do nothing:
KeywordTokenizerFactory does not break up the incoming text at all, and since
there is only a single token, the ShingleFilter isn't doing anything
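To illustrate: KeywordTokenizerFactory emits the whole input as one token, so a shingle filter has nothing to combine. A chain along these lines (a sketch, not the poster's actual schema) would actually produce shingles:

```xml
<fieldType name="text_shingles" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <!-- Split the text into word tokens first... -->
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- ...then combine adjacent tokens into 2-word shingles -->
    <filter class="solr.ShingleFilterFactory" maxShingleSize="2"
            outputUnigrams="true"/>
  </analyzer>
</fieldType>
```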
Thanks for the reply Erik,
Yep, the project is working with distributed Solr applications (i.e.
shards), but not with the Solr-supplied shard implementation; rather a
custom version (not very different from it, to be honest).
I understand that Solr has scoring at its heart, which is something we are
Hello Nicholas,
Looks like we are around the same point. Here is my branch
https://github.com/m-khl/solr-patches/tree/streaming there are only two
commits on top of it. And here is the test
Hi,
I was looking for something similar.
I tried this patch :
https://issues.apache.org/jira/browse/SOLR-2112
it's working quite well (I've back-ported the code in Solr 3.5.0...).
Is it really different from what you are trying to achieve ?
Ludovic.
-
Jouve
France.
--
View this message
Upgrading to jre7 made it go away!
Radek.
Radoslaw Zajkowski
Senior Developer
O°
proximity
CANADA
t: 416-972-1505 ext.7306
c: 647-281-2567
f: 416-944-7886
2011 ADCC Interactive Agency of the Year
2011 Strategy Magazine Digital Agency of the Year
http://www.proximityworld.com/
Join us on:
Run the program under jconsole (visualgc on some machines). This
connects to your tomcat and gives a running view of memory use and
garbage collection activity.
On Thu, Mar 15, 2012 at 10:28 AM, Erick Erickson
erickerick...@gmail.com wrote:
See:
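If a GUI is inconvenient, the JDK's command-line tools give much the same GC picture (the PID below is a placeholder):

```shell
# List running JVMs to find the Tomcat process id
jps -l

# Print heap-generation occupancy and GC counts every second
jstat -gcutil 12345 1000
```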
I've got a build system that uses SolrJ. The currently running version
uses CommonsHttpSolrServer for everything. In the name of performance,
I am writing a new version that uses CommonsHttpSolrServer for queries
and StreamingUpdateSolr for updates.
One of the things that my program does is
Hello.
I have an SQL database with documents having an ID, TITLE and SUMMARY.
I am using the DIH to index the data.
In the DIH dataConfig, for every document, I would like to do something
like:
<field column="TITLE" name="title" boost="2.0" />
In other words, a match on any document's title is
Ludovic,
I looked through. First of all, it seems to me you don't amend the regular
servlet Solr server, but only the embedded one.
Anyway, the difference is that you stream the DocList via callback, but that
means you've instantiated it in memory and keep it there until it is
completely
On 3/15/2012 5:53 PM, Shawn Heisey wrote:
Is there any way to get the threads within SUSS objects to
immediately exit without creating other issues? Alternatively, if
immediate isn't possible, the exit could take 1-2 seconds. I could
not find any kind of method in the API that closes