Hello,
I have a problem with Solr and multiple cores on SLES 11 SP 2.
I have 3 cores, each with more than 20 segments.
When I try to start Tomcat 6, it cannot start the CoreContainer.
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
The first thing I would check is the virtual memory limit (ulimit -v, check
this for the operating system user that runs Tomcat /Solr).
It should be set to unlimited, but as far as I remember that is not the
default setting on SLES 11.
Since 3.1, Solr maps the index files to virtual memory.
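The check above can be scripted; this is only a sketch (the Tomcat path in the comment is an example, not from the thread):

```shell
#!/bin/sh
# Print the virtual memory limit for the current user. Since MMapDirectory
# maps index files into virtual address space, this should be "unlimited"
# (or at least comfortably larger than the total index size).
ulimit -v

# To lift the limit just before starting Tomcat, e.g. in an init script:
# ulimit -v unlimited
# /opt/tomcat6/bin/startup.sh
```

Run it as the same OS user that starts Tomcat/Solr, since limits are per-user.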
Great. Thanks.
That solves my problem.
Greetings
Jochen
André Widhani wrote:
The first thing I would check is the virtual memory limit (ulimit -v, check
this for the operating system user that runs Tomcat /Solr).
It should be set to unlimited, but as far as I remember this is not the
Hello!
Is this what you are looking for
https://lucene.apache.org/core/old_versioned_docs/versions/3_0_0/queryparsersyntax.html#Fuzzy%20Searches
?
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
Hi,
I need to know how we can implement fuzzy
Thank you for the reply. I have done a bit of reading and it says I can also
use this one:
<filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="30"/>
This is what I will use, I think, as it weeds out words like "at" and "I" as a
bonus.
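For reference, a field type using that filter might be declared along these lines (a sketch only; the tokenizer choice and the name "text_ngram" are assumptions, not from this thread):

```xml
<!-- Tokens shorter than minGramSize (e.g. "at", "I") emit no grams at all,
     which is the "weeding out" effect mentioned above. -->
<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.NGramFilterFactory" minGramSize="3" maxGramSize="30"/>
  </analyzer>
</fieldType>
```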
--
View this message in context:
I've hit a bit of a wall and would appreciate some guidance. I want to index
a large block of text, like such:
I don't want to store this as it is in Solr; I want to instead have two
versions of it: one as a truncated form, and one as a keyword form.
*Truncated Form:*
*Keyword Form (using
I don't want to store this as it is in Solr, I want to
instead have two
versions of it. One as a truncated form, and one as a
keyword form.
*Truncated Form:*
If truncated form means first N characters then copyField can be used
http://wiki.apache.org/solr/SchemaXml#Copy_Fields
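In schema.xml that suggestion boils down to something like this (a sketch; the field names and types are examples, not from the thread — copyField's maxChars keeps only the first N characters of the source value):

```xml
<field name="description" type="text_general" indexed="true" stored="true"/>
<field name="description_trunc" type="string" indexed="false" stored="true"/>
<!-- Note: the cut is by raw character count, so it can land mid-word. -->
<copyField source="description" dest="description_trunc" maxChars="100"/>
```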
*Keyword
Thanks.
Is any extra configuration needed on the Solr side to make this work?
Any additional text files like synonyms.txt, any additional fields or any
changes in schema.xml or solrconfig.xml ?
On Mon, Sep 17, 2012 at 4:45 PM, Rafał Kuć r@solr.pl wrote:
Hello!
Is this what you are looking for
Hi Jack,
Thanks.
Even though I have set compound index to true in the indexConfig
section for version 3.6, it still seems to create normal index
files.
Attached is the solrconfig.xml
Please let me know if anything wrong
Regards
Sujatha
On Sat, Sep 15, 2012 at 9:43 PM, Jack
Hello!
There is no need to include any changes or additional component to
have fuzzy search working in Solr.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
Thanks.
Is any extra configuration needed on the Solr side to make this work?
Any
Got it.
Thanks Rafał !
On Mon, Sep 17, 2012 at 6:37 PM, Rafał Kuć r@solr.pl wrote:
Hello!
There is no need to include any changes or additional component to
have fuzzy search working in Solr.
--
Regards,
Rafał Kuć
Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch -
Hello everyone,
When I'm using the stats=true&stats.field=product_price parameters, it returns the
following structure:
<lst name="stats">
  <lst name="stats_fields">
    <lst name="produto_preco">
      <double name="min">1.0</double>
      <double name="max">1.0</double>
      <long name="count">7</long>
      <long name="missing">0</long>
      <double name="sum">7.0</double>
Purely for searching.
The truncated form is just to show to the user as a preview, and the keyword
form is for the keyword searching.
--
View this message in context:
http://lucene.472066.n3.nabble.com/Taking-a-full-text-then-truncate-and-duplicate-with-stopwords-tp4008269p4008295.html
Sent
Could you clue us in as to why this is important to you? I mean, any modern
programming language should be capable of dealing with parsing 1.0 if it
can deal with parsing 1.00.
-- Jack Krupansky
-Original Message-
From: Gustav
Sent: Monday, September 17, 2012 9:19 AM
To:
Add the fmap.content=your-stored-field to the URL.
Or if your schema doesn't already have a content field, add one that is
stored and it will automatically be used.
-- Jack Krupansky
-Original Message-
From: Alexander Troost
Sent: Monday, September 17, 2012 1:12 AM
To:
In an attempt to answer my own question, is this a good solution?
Before, I was thinking of importing my fulltext description once, then
sorting it into two separate fields in Solr, one truncated, one keyword.
How about instead actually importing my fulltext description twice? Then I
can import
That will match internal substrings in addition to prefix strings. EdgeNGram
does only prefix substrings, which is generally what people want. So,
NGramFilter would match "England" when the query is "land" or "gland",
"gla", etc.
Use the Solr Admin Analysis UI to enter text to see how the filter
That doc is out of date for 4.0. See the 4.0 Javadoc on FuzzyQuery for
updated info. The tilde right operand is now an integer edit distance
(number of times to insert a char, delete a char, change a char, or transpose
two adjacent chars to map an index term to a query term) that is limited to 2.
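In 4.0 query syntax that reads, for instance (the matched terms here are illustrative, not from the thread):

```
q=roam~1    finds terms within one edit of "roam" (e.g. "foam", "roams")
q=roam~2    the maximum edit distance now allowed
```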
Be
Thanks Jack.
We are using Solr 3.4.
On Mon, Sep 17, 2012 at 8:18 PM, Jack Krupansky j...@basetechnology.comwrote:
That doc is out of date for 4.0. See the 4.0 Javadoc on FuzzyQuery for
updated info. The tilde right operand is now an integer edit distance
(number of times to insert a char,
--- On Mon, 9/17/12, Spadez james_will...@hotmail.com wrote:
From: Spadez james_will...@hotmail.com
Subject: Re: Taking a full text, then truncate and duplicate with stopwords
To: solr-user@lucene.apache.org
Date: Monday, September 17, 2012, 5:32 PM
In an attempt to answer my own
Can I have some clarification about installing Tomcat as the user solr? See
http://wiki.apache.org/solr/SolrTomcat#Installing_Tomcat_6 second paragraph,
which states "Create the solr user. As solr, extract the Tomcat 6.0 download
into /opt/tomcat6."
Does this user need a home-dir? (I'm
Hi
I am planning to use Apache Solr for an Oracle DB based search (in future we
may use some other DB) for our project. It's going to be a customer-facing
product and we are using the Spring MVC framework. Could anybody help me with
how I can integrate Apache Solr with my project, or could anybody suggest
I probably wouldn't suggest running Tomcat as root because of the
principle of least privilege, but aside from that, it's sort of
immaterial what you call the account, particularly if you already have
a 'tomcat' daemon account set up.
Michael Della Bitta
Thank you for the reply.
The trouble is, I want the truncated description to still have the keywords.
If I pass it to keyword_description and remove words like "and", "i",
"then", "if", etc., then copy it across to truncated_description, my truncated
description will not be a sentence, it will only be
Ok. I can still define GramSize too?
*<filter class="solr.EdgeNGramFilterFactory" minGramSize="3"
maxGramSize="30"/>*
--
View this message in context:
http://lucene.472066.n3.nabble.com/Only-exact-match-searches-working-tp4008160p4008361.html
Sent from the Solr - User mailing list archive at
The trouble is, I want the truncated description to still
have the keywords.
copyField copies raw text; it has nothing to do with analysis.
Hi David, I see that you committed the work for SOLR-3304 to the 4.x tree,
which is great news, thanks. I'm not fully familiar with the process; does that
mean it's currently available in the nightly builds? Eric.
Date: Wed, 29 Aug 2012 08:44:14 -0700
From: dsmi...@mitre.org
To:
Maybe I don't understand, but if you are copying the keyword description field
and then truncating it, then the truncated form will only have keywords too.
That isn't what I want. I want the truncated form to have words like "a",
"the", "it", etc. that would have been removed when added to
keyword_description.
Ok. I can still define GramSize too?
*<filter class="solr.EdgeNGramFilterFactory"
minGramSize="3"
maxGramSize="30"/>*
Yes you can.
http://lucene.apache.org/solr/api-3_6_1/org/apache/solr/analysis/EdgeNGramFilterFactory.html
--- On Mon, 9/17/12, Spadez james_will...@hotmail.com wrote:
From: Spadez james_will...@hotmail.com
Subject: Re: Taking a full text, then truncate and duplicate with stopwords
To: solr-user@lucene.apache.org
Date: Monday, September 17, 2012, 7:10 PM
Maybe I dont understand, but if you
are
The only catch here is that copyField might truncate in the middle of a
word, yielding an improper term.
-- Jack Krupansky
-Original Message-
From: Ahmet Arslan
Sent: Monday, September 17, 2012 11:54 AM
To: solr-user@lucene.apache.org
Subject: Re: Taking a full text, then truncate
I'm really confused here. I have a document which is, say, 4000 words long. I
want to get this put into two fields in Solr without having to save the
original document in its entirety within Solr.
When I import my fulltext (4000-word) document to Solr I was going to put it
straight into
Then if I do a copy command to move it into truncate_document
then even though
I can reduce it down to say 100 words, it is lacking words
like "and", "it"
and "this" because it has been copied from the
keyword_document.
That's not true. The copy operation is performed before analysis (stopword removal,
You said it has been copied from the keyword_document [field], but the
reality is that Solr is not copying from the indexed value of the field, but
from the source value for the field. The idea is that multiple fields can be
based on the same source value even if they analyze and index the
Yes, absolutely. Since 4.0 hasn't been released, anything with a fix version of
4.0 basically implies trunk as well. Also notice my comment "Committed to
trunk, 4x", which is explicit.
~ David
On Sep 17, 2012, at 12:02 PM, Eric Khoury [via Lucene] wrote:
Hi David, I see that you committed the
Well, my client is asking if it is possible; I'm just providing the search
engine to him, not working directly with the application. I don't know exactly
what language he is programming in.
--
View this message in context:
Sorry for the late response. To be strict, here is what I want:
* I get documents all the time. Let's assume those are news items (it's
a rather similar thing).
* Every time I get a new batch of news I should add them to the Solr index
and get cluster information for that document. Store this information
in
Hi,
Solr doesn't have any built-in mechanism for document/field level security
- basically it's delegated to the container to provide security, but this
of course won't apply to specific documents and/or fields.
There are a lot of ways to skin this cat, some bits of which have been
covered by
Hi Nalini,
We had similar requirements and this is how we did it (using your example):
Record A:
Field1_All: something
Field1_Private: something
Field2_All: ''
Field2_Private: something private
Field3_All: ''
Field3_Private: something very private
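At query time the split can then be enforced by which fields each class of user is allowed to search, for example with edismax's qf parameter (a sketch; the query term and handler parameters are illustrative, only the field names come from the example above):

```
# Unprivileged user: search only the _All copies
q=something&defType=edismax&qf=Field1_All Field2_All Field3_All

# Privileged user: search the _Private copies as well
q=something&defType=edismax&qf=Field1_Private Field2_Private Field3_Private
```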
I've looked through documentation and postings and expect that a single
filter cache entry should be approx MaxDoc/8 bytes.
Our frequently updated index (replication every 3 minutes) has maxdoc ~= 23
Million.
So I'm figuring 3MB per entry. With CacheSize=512 I expect something like
1.5GB of
Hi,
I've got a set up as follows:
- 13 cores
- 2 servers
- running Solr 4.0 Beta with numShards=1 and an embedded zookeeper.
I'm trying to figure out why some complex queries are running so slowly in
this setup versus quickly in a standalone mode.
Given a query like: /select?q=(some complex
You can use an XSL response writer to transform your values to have a different
precision.
http://wiki.apache.org/solr/XsltResponseWriter
Would most likely be better for your client to just do it on his end though. He
is probably parsing the response anyway.
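The response-writer route comes down to a pair of request parameters (the stylesheet name here is an example; the file must exist under the core's conf/xslt/ directory):

```
/select?q=*:*&wt=xslt&tr=example.xsl
```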
-Original Message-
From:
On Mon, Sep 17, 2012 at 3:44 PM, Mike Schultz mike.schu...@gmail.com wrote:
So I'm figuring 3MB per entry. With CacheSize=512 I expect something like
1.5GB of RAM, but with the server in steady state after 1/2 hour, it is 7GB
larger than without the cache.
Heap size and memory use aren't
Ah, ok, this is news to me and makes a lot more sense. If I can just run this
back past you to make sure I understand.
If I move my fulltext document from my SQL database to keyword_document it
will contain the original fulltext in the source, but the index will have
the
You're getting the hang of it. No particular location for copyField, just
not within fields or types. Putting them after your fields makes sense.
See the Solr example schema.
-- Jack Krupansky
-Original Message-
From: Spadez
Sent: Monday, September 17, 2012 4:47 PM
To:
Ok, I'll try running as tomcat.
The wiki has a problem with the Tomcat startup script. It looks like it's
supposed to be a link which allows us to download a shell script, but when I
click it, I get the error message "You are not allowed to do AttachFile on
this page. Login and try again."
Hi group,
On this wiki page these two links below are broken as they are also on
lucidworks' version, can someone point me at the correct locations please? I
googled around and came up with possible good links.
Thanks
Robi
http://wiki.apache.org/solr/LanguageAnalysis#Other_Tips
Hi Robert,
Anyone can edit the wiki, you just need to create a user.
Regarding the URLs:
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/core/src/test-files/solr/collection1/conf/stemdict.txt
http://svn.apache.org/repos/asf/lucene/dev/trunk/solr/example/solr/collection1/conf/protwords.txt
--- On
Hello All,
I have a requirement, or a pre-requirement, for our search application.
Basically the engine will be on a website with plenty of users and more than
20 different fields, including location.
So basically, the question is this:
Is it possible to let users define their position in search
Hi,
I am using Solr 3.6.1. I created a new core, whatever3, dynamically, and I see
solr.xml updated
as:
<solr persistent="true">
<cores adminPath="/admin/cores" defaultCoreName="collection1">
...
<core name="whatever3"
instanceDir="C:/lucene/Solr_3.6/apache-solr-3.6.1/example/mysolr\
: But when I update data like
http://localhost:8080/solr/whatever3/update?commit=true;, the data
: did not go to the newly specified dataDir (I can see core whatever3 is
apparently used from log)?
:
: Only way to make it work is NOT to define dataDir in solrconfig.xml, is this
by design or I
: I can't reproduce the problem you are seeing -- can you please provide
: more details..
Correction: i can reproduce this.
This was in fact some odd behavior in the 1.x and 3.x lines that has been
changed for 4.x in SOLR-1897.
If you had no dataDir in your solrconfig.xml, or if you had a
Thanks very much for your quick guidance, which is very helpful!
Lisheng
-Original Message-
From: Chris Hostetter [mailto:hossman_luc...@fucit.org]
Sent: Monday, September 17, 2012 6:30 PM
To: solr-user@lucene.apache.org
Subject: Re: In multi-core, special dataDir is not used?
: I
There is another option: pairing multi-valued roles and fields. Multi-valued
fields support in-order return: the values are returned in the same order you
added them. This means that you can have two fields with matched pairs of
values.
Secure data is often a many-to-many relationship where any
I am using the following definitions and query, and want to highlight the
title and body elements of HTML documents.
FieldType definitions:
=
<fieldType name="text_ja" class="solr.TextField" positionIncrementGap="100"
autoGeneratePhraseQueries="true">
<analyzer type="index">
I have the same error. Can you guide me on how to solve this error? My id:
bhavesh.jogi...@gmail.com
--
View this message in context:
http://lucene.472066.n3.nabble.com/Logging-from-data-config-xml-tp3956009p4008540.html
Sent from the Solr - User mailing list archive at Nabble.com.