I am pretty sure it does not yet support distributed shards.
But the patch was written for 4.0, so there might be issues with running
it on 1.4.1.
On 5/26/11 11:08 PM, rajini maski rajinima...@gmail.com wrote:
The patch SOLR-2242 for getting the count of distinct facet terms doesn't
work
I have an XML string like this:
<?xml version="1.0" encoding="UTF-8"?><language><intl><![CDATA[hello]]></intl><loc><![CDATA[solr]]></loc></language>
By using HTMLStripTransformer, I expect to get 'hello,solr'.
But actually this transformer removes ALL THE TEXT INSIDE!
Did I do something silly, or is
No such issues. Successfully integrated with 1.4.1, and it works across a
single index.
With the f.2.facet.numFacetTerms=1 parameter it will give the distinct count
result;
with the f.2.facet.numFacetTerms=2 parameter it will give counts as well as
the results for the facets.
But this is working only across
Hi
When I do a facet query on my data, it shows me a list of all the words
present in my database with their counts. Is it possible to not get the
results of common words like a, an, the, http and so on, but only get
the counts of the terms we need, like microsoft, ipad, solr, etc.?
--
Thanx
Which analyzer do you use for indexing? You could exclude those stop words
during indexing:
http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters
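For illustration, a minimal sketch of an index-time analyzer that drops stop words (the field type name and the stopwords.txt file name are assumptions, not from the thread):

```xml
<fieldType name="text_stopped" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <!-- drops terms listed in stopwords.txt before they reach the index -->
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
  </analyzer>
</fieldType>
```

Terms removed at index time never appear in facet counts, which is the effect the poster is after.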
On Fri, May 27, 2011 at 1:36 PM, Jasneet Sabharwal
jasneet.sabhar...@ngicorporation.com wrote:
Hi
When I do a facet query on my data, it
mm ok. I configure 2 spellcheckers:
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">spell_what</str>
    <str name="field">spell_what</str>
    <str name="buildOnOptimize">true</str>
    <str
Are you talking about a facet query or a facet field?
If it's a facet query, I don't get what's going on.
If it's a facet field... well, if it's a fixed set of words you're interested
in, filter the query to only those words and you'll get counts only for them.
If you just need to filter out
<copyfield source="date" dest="text"/>
The letter f should be capital: copyfield => copyField.
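Spelled out, the corrected directive would read:

```xml
<!-- copyField (capital F) copies the date field into the catch-all text field -->
<copyField source="date" dest="text"/>
```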
That uber dictionary is not what I want. I also get suggestions from the
'where' in the 'what'. An example:
what               where
chelsea            London
Soccerclub Bondon  London
When I type 'soccerclub london' I want the suggestion from the 'what'
I would expect that it doesn't understand CDATA and thinks of
everything between < and > as a 'tag'.
Best Regards,
Bryan Rasmussen
On Fri, May 27, 2011 at 9:41 AM, Ellery Leung elleryle...@be-o.com wrote:
I have an XML string like this:
<?xml version="1.0"
Hello,
I have to perform range queries against a date field. It is a TrieDateField, and
I'm already using it for sorting. Hence, there will already be an entry in the
FieldCache for it.
According to:
http://www.lucidimagination.com/blog/2009/07/06/ranges-over-functions-in-solr-14/
frange
Hi all
here is what I have been trying and the problem
I am trying to see how many times a single word appears in a field.
Basically, I have a field called universal, and let's say the field is like
this:
car house road age sex school education education tree garden
and I am searching using
Got it. Actually I used solr.MappingCharFilterFactory to replace the <![CDATA[
and ]]> with empty strings first, and then HTMLStripCharFilterFactory to get
hello and solr.
For future reference, here is part of schema.xml
<fieldType name="textMaxWord" class="solr.TextField">
  <analyzer type="index">
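A fuller sketch of the analyzer chain the poster describes (the mapping file name and the tokenizer choice are assumptions; the mapping file would map the CDATA markers to empty strings):

```xml
<fieldType name="textMaxWord" class="solr.TextField">
  <analyzer type="index">
    <!-- mapping-cdata.txt is a hypothetical file mapping "<![CDATA[" and "]]>" to nothing -->
    <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-cdata.txt"/>
    <!-- with the CDATA markers gone, the HTML strip filter can remove the tags -->
    <charFilter class="solr.HTMLStripCharFilterFactory"/>
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
  </analyzer>
</fieldType>
```

The char filter order matters: the mapping filter must run before the HTML strip filter, otherwise the CDATA section shields its contents.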
Thanks for your answer James :)
For anyone who runs into this problem,
http://markmail.org/thread/xce4qyzs5367yplo also discusses it, and
reaches James' conclusion too.
On Thu, May 26, 2011 at 10:19 PM, Dyer, James james.d...@ingrambook.comwrote:
This is a limitation of Lucene/Solr
Hi, in my indexed document I do not want a uniqueKey field, but when I do not
give any uniqueKey in schema.xml then it shows an exception:
org.apache.solr.common.SolrException: QueryElevationComponent requires the
schema to have a uniqueKeyField.
It means QueryElevationComponent requires a
test
-
Thanks Regards
Romi
--
View this message in context:
http://lucene.472066.n3.nabble.com/test-tp2992199p2992199.html
Sent from the Solr - User mailing list archive at Nabble.com.
Remove the component configuration from your solrconfig.
Hi, in my indexed document i do not want a uniqueKey field, but when i do
not give any uniqueKey in schema.xml then it shows an exception
org.apache.solr.common.SolrException: QueryElevationComponent requires the
schema to have a
I removed

<searchComponent name="elevator"
    class="org.apache.solr.handler.component.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>

from solrconfig.xml, but it is showing the following exception:
I wanted to get a basic idea of setting these parameters in solrconfig.xml:
<writeLockTimeout></writeLockTimeout>
<commitLockTimeout></commitLockTimeout>
What do writeLockTimeout and commitLockTimeout actually indicate here?
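For context, these settings live in the indexDefaults section of solrconfig.xml in Solr 1.x/3.x; a sketch with illustrative values (the numbers are assumptions, in milliseconds):

```xml
<indexDefaults>
  <!-- how long a writer waits to acquire the index write lock before failing -->
  <writeLockTimeout>1000</writeLockTimeout>
  <!-- how long to wait for the commit lock before giving up -->
  <commitLockTimeout>10000</commitLockTimeout>
</indexDefaults>
```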
-
Thanks Regards
Romi
Is there any way to render html entities in DIH for a specific field?
Thanks
--
Anass
2011/5/27 Denis Kuzmenok forward...@ukr.net:
Hi.
I have an indexed database which is indexed a few times a day and
contains tinyint flags (like is_enabled, is_active, etc.); the content
isn't changed too often, but the flags are.
So if I index via post.jar only the flags, then the entire document
I'm using 3.1 now. Indexing lasts for a few hours, and the plain data
size is big. Getting all documents would be rather slow :(
Not with 1.4, but apparently there is a patch for trunk. Not
sure if it is in 3.1.
If you are on 1.4, you could first query Solr to get the data
for the document to
On Thu, May 26, 2011 at 6:52 PM, Rahul Warawdekar
rahul.warawde...@gmail.com wrote:
Hi All,
I am using Solr 3.1 for one of our search based applications.
We are using DIH to index our data and TikaEntityProcessor to index
attachments.
Currently we are running into an issue while extracting
Sorry, my question was not clear.
When I get data from the database, some fields contain HTML special chars,
and what I want to do is just convert them automatically.
On Fri, May 27, 2011 at 1:00 PM, Gora Mohanty g...@mimirtech.com wrote:
On Fri, May 27, 2011 at 3:50 PM, anass talby
Hi,
I was wondering if this issue had already been raised.
We currently have a use case where nested field collapsing would be really
helpful, i.e. collapse on field X, then collapse on field Y within the groups
returned by field X.
The current behavior of specifying multiple fields seems to be
I've found the same issue.
As far as I know, the only solution is to create a copy field which combines
both fields' values and facet on that field.
If one of the fields has a set of distinct values known in advance and its
cardinality c is not too big, it isn't a great problem: you can do with
Thanks I was looking exactly for this.
I needed to split tokens based on commas.
On Fri, Jun 18, 2010 at 10:12 PM, Joe Calderon calderon@gmail.comwrote:
set generateWordParts=1 on wordDelimiter or use
PatternTokenizerFactory to split on commas
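A minimal sketch of the second suggestion, a field type that tokenizes on commas via PatternTokenizerFactory (the field type name is an assumption):

```xml
<fieldType name="text_comma" class="solr.TextField">
  <analyzer>
    <!-- splits the input on commas, swallowing any trailing whitespace -->
    <tokenizer class="solr.PatternTokenizerFactory" pattern=",\s*"/>
  </analyzer>
</fieldType>
```

With this, "car, house, road" yields the three tokens "car", "house", and "road".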
Hello,
I am in an odd position. The application server I use has built-in
integration with SOLR. Unfortunately, its native capabilities are
fairly limited, specifically, it only supports a standard/pre-defined
set of fields which can be indexed. As a result, it has left me
kludging how I
The * endpoint for range terms wasn't implemented yet in 1.4.1. As a
workaround, we use very large and very small values.
-Mike
On 05/27/2011 12:55 AM, alucard001 wrote:
Hi all
I am using SOLR 1.4.1 (according to solr info), but no matter what date
field I use (date or tdate) defined in
Hi Bob,
Hmm... I don't think this approach will scale with bigger and more documents :(
Thanks for your help though; I think I should take a look at customizing
highlight component to achieve this...
Thanks,
Jeff
On May 27, 2011, at 12:24 PM, Bob Sandiford bob.sandif...@sirsidynix.com
You're up against a couple of real limitations with Solr's spell checking. The
first limitation is that you can only use 1 dictionary per query.
The second limitation is that if a word is in the dictionary it never tries to
correct it. This will happen even if you *don't* combine your two
Hi,
I was wondering if this issue had already been raised.
We currently have a use case where nested field collapsing would be really
helpful
I.e Collapse on field X then Collapse on Field Y within the groups returned
by field X
The current behavior of specifying multiple fields seem to be
Did you try pivot?
Bill Bell
Sent from mobile
On May 27, 2011, at 4:13 AM, Martijn Laarman mpdre...@gmail.com wrote:
Hi,
I was wondering if this issue had already been raised.
We currently have a use case where nested field collapsing would be really
helpful
I.e Collapse on field X
On 5/27/2011 6:48 AM, Romi wrote:
What is the benefit of setting autocommit in solrconfig.xml?
I read somewhere that these settings control how often pending updates will
be automatically pushed to the index.
Does it mean that if the Solr server is running then it automatically starts
the indexing process
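For reference, the autocommit settings Romi asks about live in the updateHandler section of solrconfig.xml; a sketch with illustrative values (the numbers are assumptions):

```xml
<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit once this many documents are pending -->
    <maxDocs>10000</maxDocs>
    <!-- or once this many milliseconds have passed since the first uncommitted update -->
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>
```

Autocommit does not start indexing on its own; it only controls when documents already sent to Solr become visible in the index.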
For this, I ended up just changing it to string and using abcdefg* to
match. That seems to work so far.
Thanks,
Brian Lamb
On Wed, May 25, 2011 at 4:53 PM, Brian Lamb
brian.l...@journalexperts.comwrote:
Hi all,
I'm running into some confusion with the way edgengram works. I have the
field
I'm still not having any luck with this. Has anyone actually gotten this to
work so far? I feel like I've followed the directions to the letter but it
just doesn't work.
Thanks,
Brian Lamb
On Wed, May 25, 2011 at 2:48 PM, Brian Lamb
brian.l...@journalexperts.comwrote:
I looked at the patch
Can you open a Lucene issue (against the new grouping module) for
this?
I think this is a compelling use case that we should try to support.
In theory, with the general two-pass grouping collector, this should
be possible, but will require three passes, and we also must
generalize the 2nd pass
Hello,
I am using the latest nightly build of Solr 4.0 and I would like to
use grouping/field collapsing while maintaining compatibility with my
current parser. I am using the regular webinterface to test it, the
same commands like in the wiki, just with the field names matching my
dataset.
Nobody?
Please, help
edua...@calandra.com.br
17/05/2011 16:13
Please respond to
solr-user@lucene.apache.org
To
solr-user@lucene.apache.org
cc
Subject
Pivot with Stats (or Stats with Pivot)
Hi All,
Is it possible to get stats (like Stats Component: min ,max, sum, count,
On May 27, 2011, at 1:04 AM, Ahmet Arslan wrote:
The letter f should be capital
Hah! Well-spotted! Thanks.
-==-
Jack Repenning
Technologist
Codesion Business Unit
CollabNet, Inc.
8000 Marina Boulevard, Suite 600
Brisbane, California 94005
office: +1 650.228.2562
twitter:
We verified with the fiddler proxy server that when we use the Java
CommonsHttpSolrServer to communicate with our Solr server we are not able to
get the client to post a <commit/> message back to Solr. The result is that we
can't force the tail end of a batch job to commit after it has run and
I managed to get a thread dump during a slow commit:
resin-tcp-connection-*:5062-129 Id=12721 in RUNNABLE total cpu
time=391530.ms user time=390620.ms
at java.lang.String.intern(Native Method)
at
org.apache.lucene.util.SimpleStringInterner.intern(SimpleStringInterner.java:74)
at
Thanks Mike,
I've opened https://issues.apache.org/jira/browse/SOLR-2553 for this.
It's exciting to hear a workable implementation might be possible!
On Fri, May 27, 2011 at 6:23 PM, Michael McCandless
luc...@mikemccandless.com wrote:
Can you open a Lucene issue (against the new grouping
I know this question has been asked before but I think my situation is a
little different. Basically I need to do custom scores that the traditional
function queries simply won't allow me to do. I actually need to hit
another server from Java (passing in a bunch of things mostly relying on how
are there any updates on this? any third party apps that can make this work
as expected?
On Wed, Feb 23, 2011 at 12:38 PM, Dyer, James james.d...@ingrambook.comwrote:
Tanner,
Currently Solr will only make suggestions for words that are not in the
dictionary, unless you specify
Where can one find the KStemmer source for 4.0?
On 5/12/11 11:28 PM, Bernd Fehling wrote:
I backported a Lucid KStemmer version from solr 4.0 which I found
somewhere.
Just changed from
import org.apache.lucene.analysis.util.CharArraySet; // solr4.0
to
import
Is LucidWorks source no longer available? In earlier versions their
source code was available but after the latest install I can not seem to
find it?
Thank you Mike.
So I understand that now. But what about the other items that have values
on both sides? They don't work at all.
-Original Message-
From: Mike Sokolov [mailto:soko...@ifactory.com]
Sent: May 27, 2011, 10:23 PM
To: solr-user@lucene.apache.org
Cc: alucard001
Subject: Re: