I can see your point, though I think edge cases would be one concern: if
someone *can* create a very large synonyms file, someone *will* create that
file. What would you set the ZooKeeper max data size to be? 50MB? 100MB?
Someone is going to do something bad if there's nothing to tell them not
Thanks.. I am caching in HTTP now..
./zahoor
On 08-May-2013, at 3:58 AM, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, May 7, 2013 at 12:48 PM, J Mohamed Zahoor zah...@indix.com wrote:
Hi
I am computing lots of stats as part of a query…
looks like the solr caching is not helping here…
David, have you seen the finite state automata the synonym lookup is built
on? The lookup is very efficient and fast. You have a point though, it is
going to fail for someone.
Roman
On 8 May 2013 03:11, David Parks davidpark...@yahoo.com wrote:
I can see your point, though I think edge cases
I will give it a go!
thank you
best
Silvio
On 05/08/2013 03:07 AM, Chris Hostetter wrote:
: I am about to index identifier fields containing blanks (shelfmarks) eg. G
: 23/60 12
: The field type is set to Solr.string. To get the exact matching hit (the doc
: with shelfmark mentioned above)
Hello!
Use a float field type in your schema.xml file, for example like this:
<fieldType name="float" class="solr.TrieFloatField" precisionStep="0"
positionIncrementGap="0"/>
Define a field using this type:
<field name="price" type="float" indexed="true" stored="true"/>
You'll be able to index data like this:
I will index for example:
<field name="price">19,95</field>
<field name="price">25,45</field>
I can only index floats when the numbers use dots.
Thanks
On Wednesday, 08.05.2013 at 10:52 +0200, Rafał Kuć
r@solr.pl wrote:
Hello!
Use a float field type in your schema.xml file, for example like
this:
Ok, I will do a fresh install in a VM and check that the error isn't
reproduced.
-
Best regards
--
View this message in context:
http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061512.html
Sent from the Solr - User mailing list archive at Nabble.com.
On 8 May 2013 14:48, be...@bkern.de be...@bkern.de wrote:
I will index for example:
<field name="price">19,95</field>
<field name="price">25,45</field>
I can only index floats when the numbers use dots.
I don't think that it is currently possible to change the decimal
separator. You should replace ','
If you need just the count of the results found, check the numFound.
If you would like to get all the results in one go, you could try
rows=-1. This may impact your server a lot, so be careful.
If you have a single non-sharded index, try pagination
(start=offset&rows=window_size).
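As a concrete sketch of the options above (standard select parameters; the
*:* query is just for illustration):

```
# count only: fetch no documents, read numFound from the response
q=*:*&rows=0

# page through results: fetch documents 100..149
q=*:*&start=100&rows=50
```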
Mohamed,
(out of curiosity) What kind of tool are you using for that?
On Wed, May 8, 2013 at 10:13 AM, J Mohamed Zahoor zah...@indix.com wrote:
Thanks.. I am caching in HTTP now..
./zahoor
On 08-May-2013, at 3:58 AM, Yonik Seeley yo...@lucidworks.com wrote:
On Tue, May 7, 2013 at
that worked like a charm, but what must I do if I want an additional field to
match, e.g.
{!term f=myFieldName}G 23/60 12 +location:bookshelf
Best,
Silvio
On 05/08/2013 03:07 AM, Chris Hostetter wrote:
: I am about to index identifier fields containing blanks (shelfmarks) eg. G
: 23/60 12
:
If you're using the latest Solr, then you should be able to do it the
other way around:
q=+location:bookshelf {!term f=myFieldName}G 23/60 12
You might also find the trick I mentioned before useful:
q=+location:bookshelf {!term f=myFieldName v=$productCode}&productCode=G
23/60 12
Upayavira
On
You could use a RegexReplaceProcessor in an update processor chain. From
the Javadoc:
<processor class="solr.RegexReplaceProcessorFactory">
  <str name="fieldName">content</str>
  <str name="fieldName">title</str>
  <str name="pattern">\s+</str>
  <str name="replacement"> </str>
</processor>
This could replace the
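The reply above is cut off in the archive. For completeness, a sketch of how
such a processor is typically wired into an update chain and attached to the
update handler; the chain name "strip-whitespace" is an assumption, not from
the thread:

```xml
<updateRequestProcessorChain name="strip-whitespace">
  <processor class="solr.RegexReplaceProcessorFactory">
    <str name="fieldName">content</str>
    <str name="fieldName">title</str>
    <str name="pattern">\s+</str>
    <str name="replacement"> </str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">strip-whitespace</str>
  </lst>
</requestHandler>
```

RunUpdateProcessorFactory must come last, or the document is never actually
indexed.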
I am using a simple LRU cache in my client where I store req and response for
now.
Later might move to something like varnish.
./zahoor
On 08-May-2013, at 3:26 PM, Dmitry Kan solrexp...@gmail.com wrote:
Mohamed,
(out of curiosity) What kind of tool are you using for that?
On Wed,
I found the error, the class of analysis field request handler was not set
properly.
-
Best regards
--
View this message in context:
http://lucene.472066.n3.nabble.com/Lazy-load-Error-on-UI-analysis-area-tp4061291p4061526.html
Sent from the Solr - User mailing list archive at Nabble.com.
Thanks Erick,
I had a look at de-duplication
(http://docs.lucidworks.com/display/solr/De-Duplication).
I added :
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
str
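The configuration above is truncated in the archive. A typical complete
dedupe chain, following the De-Duplication page linked above, looks roughly
like this (the field list and signature field name are assumptions for
illustration):

```xml
<updateRequestProcessorChain name="dedupe">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <str name="signatureField">signature</str>
    <bool name="overwriteDupes">false</bool>
    <str name="fields">name,features</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The chain must also be referenced from the update handler via an update.chain
parameter, and the signatureField has to exist in the schema.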
Hi all,
I need to analyse the query sent to Solr. I need to parse the query
through a pipeline made with UIMA.
Can anyone help me understand how I do this?
I have already created an Aggregate Analyzer in UIMA, and now I need to run a
Solr input query through it, to increase relevancy in
Hi all,
I just reported this issue: http://issues.apache.org/jira/browse/SOLR-4800
java.lang.IllegalArgumentException: No enum const class
org.apache.lucene.util.Version.LUCENE_43
solr-4.3.0/example/solr/collection1/conf/solrconfig.xml has
<luceneMatchVersion>LUCENE_43</luceneMatchVersion>
Which
OK, thanks.
On Wed, May 8, 2013 at 1:38 PM, J Mohamed Zahoor zah...@indix.com wrote:
I am using a simple LRU cache in my client where I store req and response
for now.
Later might move to something like varnish.
./zahoor
On 08-May-2013, at 3:26 PM, Dmitry Kan solrexp...@gmail.com
Please help me on this!!
meghana wrote
To ensure that all records exist in a single node, I queried on a specific
duration, so for the sharded core and the simple core the query results
should be similar.
As you suggested, I analyzed the debugQuery output for one specific search,
*text:worde~1*, and I
Thanks, Erick. The link you gave me is mostly about getting Suggester
working with Phrases, which I've already done with queryAnalyzerFieldType
and no custom code.
My main issue is that the query itself isn't getting returned *if* it
is an actual word/token in my index. So for example if a
Hi Roald,
On the ticket, you report the following version information:
solr-spec : 4.2.1.2013.03.26.08.26.55
solr-impl : 4.2.1 1461071 - mark - 2013-03-26 08:26:55
lucene-spec : 4.2.1
lucene-impl : 4.2.1 1461071 - mark - 2013-03-26 08:23:34
This shows that your servlet container is running
Hello,
I have a field with the type TIMESTAMP(6) in an oracle view.
When I want to import it directly to SOLR I get this error message:
WARNING: Error creating document : SolrInputDocument[oid=12,
last_action_timestamp=oracle.sql.TIMESTAMP@34907781, status=2, ...]
I thought it reported 4.2.1 because I set luceneMatchVersion to LUCENE_42.
I am using the 4.3.0 war. Very strange.
I will set up a new virtual machine to make sure there is no way that I am
accidentally using 4.2.1
On Wed, May 8, 2013 at 3:06 PM, Alan Woodward a...@flax.co.uk wrote:
Hi
Peter,
Looks like you can call timestampValue() on that object and get back a
java.sql.Timestamp, which is a subclass of java.util.Date:
http://docs.oracle.com/cd/E16338_01/appdev.112/e13995/oracle/sql/TIMESTAMP.html#timestampValue__
Hope that helps,
Michael Della Bitta
Within MySQL it is possible to get the Top N results while summing a
particular column in the database. For example:
SELECT ip_address, SUM(ip_count) AS count FROM table GROUP BY ip_address
ORDER BY count DESC LIMIT 5
This will return the top 5 ip_address based on the sum of ip_count.
Is there
Hi,
have a look at http://wiki.apache.org/solr/TermsComponent.
Regards,
Carlos.
2013/5/8 ld luzange...@gmail.com
Within MySQL it is possible to get the Top N results while summing a
particular column in the database. For example:
SELECT ip_address, SUM(ip_count) AS count FROM table GROUP
I opened a Jira issue in Oct of 2011 which is still outstanding. I've
boosted the priority to Critical as each time I've upgraded Solr, I've had
to manually patch and build the jars. There is a patch (for 3.6) attached
to the ticket. Is there someone with commit access who can take a look and
Any idea on this? I still cannot get the combination of transient cores and
transientCacheSize to work as I think it should: give me the ability to
create a large number of cores and automatically load and unload them for me
based on a limit that I set.
If anyone else is using this feature and it is
Hi all,
I upgraded my Solr cluster today from 4.2.1 to 4.3. On startup I can see
some errors like this:
2449515 [catalina-exec-51] ERROR org.apache.solr.core.SolrCore –
org.apache.solr.common.SolrException: incref on a closed log:
I find UpdateRequestProcessors (
http://wiki.apache.org/solr/UpdateRequestProcessor) a handy way to add and
remove NLP-related fields to a document as it is processed by Solr. this is
also how UIMA integrates with Solr (http://wiki.apache.org/solr/SolrUIMA).
you might want to take a look at UIMA
I solved it by setting up a new virtual machine. Apparently Tomcat was
still using 4.2.1 somehow.
Thanks!
On Wed, May 8, 2013 at 3:40 PM, Roald depja...@gmail.com wrote:
I thought it reported 4.2.1 because I set luceneMatchVersion to LUCENE_42.
I am using the 4.3.0 war. Very strange.
I
Any update on this?
Will this be addressed/fixed?
In our system, our UI will allow the user to paginate through search results.
As my in-depth testing found out, if rows=0, the result size is consistently
the total sum of the documents on all shards regardless of whether there are
any duplicates; if the rows
OK, when my head has cooled down, I remember this old-school issue... I
have been dealing with it myself,
so I do not expect this can be straightened out or fixed in any way.
Basically, when you have two sorted result sets you need to merge and
paginate through, it is never an easy job (if all
Unfortunately, terms do not help solve my issue.
To elaborate - say I have 5 entries:
uuid - ipaddress - ipcount
1 1.1.1.1 80
2 2.2.2.2 1
3 3.3.3.3 20
4 3.3.3.3 20
When I run a facet query on ipaddress, I get the following results:
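A plain terms facet on ipaddress counts documents rather than summing
ip_count, which is why faceting alone does not reproduce the MySQL query. In
later Solr releases (5.x and up, so after this thread) the JSON Facet API can
express the top-N sum directly; a sketch using the field names from the
thread:

```json
{
  "top_ips": {
    "type": "terms",
    "field": "ipaddress",
    "limit": 5,
    "sort": "total desc",
    "facet": { "total": "sum(ipcount)" }
  }
}
```

This would be passed as the json.facet request parameter (with rows=0 if only
the aggregation is needed).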
Ok, found the solution.. Like the SpellcheckComponent, the Elevate component
also requires the shards.qt param.. But I still don't know why both these
components don't work in the absence of shards.qt. Can anyone explain?
Thanks
Varun
On Mon, May 6, 2013 at 1:14 PM, varun srivastava
Ok, found the solution.. Like the SpellcheckComponent, the Elevate component
also requires the shards.qt param.. But I still don't know why both these
components don't work in the absence of shards.qt. Can anyone explain?
Thanks
On Sat, May 4, 2013 at 1:08 PM, varun srivastava varunmail...@gmail.com wrote:
On 5/8/2013 9:20 AM, Shane Perry wrote:
I opened a Jira issue in Oct of 2011 which is still outstanding. I've
boosted the priority to Critical as each time I've upgraded Solr, I've had
to manually patch and build the jars. There is a patch (for 3.6) attached
to the ticket. Is there someone
Hi,
I have gotten Solr 4.3 up and running on Tomcat 7/Windows 7. I have added the
two dataimport handler jars (found in the dist folder of my solr 4.3 download)
to the tomcat/lib folder (where I also placed the solr.war).
Then I added the following line to my solrconfig.xml:
requestHandler
Yeah, I realize my fix is more of a bandage. While it wouldn't be a good
long-term solution, how about going the path of ignoring unrecognized types
and logging a warning message so the handler doesn't crash? The Jira ticket
could then be left open (and hopefully assigned) to fix the actual problem.
Could be classloader issue. E.g. the jars in tomcat/lib not visible to
whatever is trying to load DIH. Have you tried putting those jars
somewhere else and using lib directive in solrconfig.xml instead to
point to them?
Regards,
Alex.
On Wed, May 8, 2013 at 2:07 PM, William Pierce
Thanks, Alex. I have tried placing the jars in a folder under solrhome/lib
or under the instanceDir/lib with appropriate declarations in the
solrconfig.xml. I can see the jars being loaded in the logs. But neither
configuration seems to work.
Bill
-Original Message-
From:
: i want to query documents which match a certain dynamic criteria.
: like, How do i get all documents, where sub(field1,field2) > 0 ?
:
: i tried _val_: sub(field1,field2) and used fq:[_val_:[0 TO *]
take a look at the frange QParser...
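As a sketch of the frange approach (l is the lower bound, and incl=false
makes it exclusive, giving strictly greater than zero):

```
fq={!frange l=0 incl=false}sub(field1,field2)
```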
I'd say it is still a CLASSPATH issue. A quick Google shows a long history
of complaints (all about Tomcat):
http://www.manning-sandbox.com/thread.jspa?threadID=51061
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature
: Questions:
: - what is the advantage of having indexed=true and docvalues=true?
indexed=true and docValues=true are orthogonal. It might make sense
to use both if you wanted to do term queries on the field but also
faceting -- because indexed terms are generally faster for queries, but
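The reply above is cut off in the archive. For illustration, a field declared
with both, assuming a hypothetical string field used for term queries and
faceting:

```xml
<field name="category" type="string" indexed="true" stored="false" docValues="true"/>
```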
: is it possible to store (text) payload to numeric fields (class
: solr.TrieDoubleField)? My goal is to store measure units to numeric
: features - e.g. '1.5 cm' - and to use faceted search with these fields.
: But the field type doesn't allow analyzers to add the payload data. I
: want to
I have created an index that contains pizza hut, and when I misspell it as
pizza hot the spellchecker doesn't return anything. The strange thing is
it does find pizza hut when it is misspelled as pizza hit.
What is the logic behind this behaviour? Any help
thank you
--
View this message in context:
Try to remove those in the configuration.
--
View this message in context:
http://lucene.472066.n3.nabble.com/spellcheck-tp506116p4061675.html
Sent from the Solr - User mailing list archive at Nabble.com.
Try setting spellcheck.alternativeTermCount to a nonzero value. See
http://wiki.apache.org/solr/SpellCheckComponent#spellcheck.alternativeTermCount
The issue may be that by default, the spellchecker will never try to offer
suggestions for a term that exists in the dictionary. So if some other
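A minimal sketch of a request using that parameter (the query and parameter
values are assumptions, following the wiki page above):

```
q=name:"pizza hot"&spellcheck=true&spellcheck.q=pizza hot&spellcheck.alternativeTermCount=5&spellcheck.maxResultsForSuggest=5
```

spellcheck.maxResultsForSuggest additionally allows suggestions to be
returned even when the query itself finds a few results.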
Hi,
Right, the bottleneck could be something else - memory or network, for
instance. What are you using to index? Make sure you're hitting Solr
with multiple threads if your CPU is multi-core. Use SPM for Solr or
anything else and share some Solr monitoring graphs if you think they
can help.
Hi,
I'm using Solr 4.0, and I'm using an atomic update to increment a tdouble 3
times with the same value (99.4). The third time it is incremented, the value
comes out to 298.25. Has anyone seen this error, or know how to fix it?
Maybe I should use the regular double instead of a
: I'm using solr 4.0 and I'm using an atomic update to increment a tdouble
: 3 times with the same value (99.4). The third time it is incremented the
: values comes out to 298.25. Has anyone seen this error or
: how to fix it? Maybe I should use the regular double instead of a
:
Why did you place solr.war in tomcat/lib?
Can you detail the specific errors you get when you place your DIH jars in
solr-home/lib or instanceDir/lib?
--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
On 8 May 2013 at 21:15, William Pierce evalsi...@hotmail.com wrote:
Hi
We have a huge index with more than 50 million documents. In the beginning
we disabled norms for some fields by setting omitNorms=true. Recently we
decided to add norms to a few other fields, and we removed omitNorms=true
from the schema.
I read in the Solr forum that if one of the documents in any
The reason I placed the solr.war in tomcat/lib was -- I guess -- because
that's the way I had always done it since the 1.3 days. Our Tomcat
instance(s) run nothing other than Solr - so that seemed as good a place as
any. The DIH jars that I placed in tomcat/lib are:
Hi,
I need help figuring why I keep getting the error below. I am running the
example store core using Solr 4.3.0 on Centos. When I use the solr web app
(http://localhost:8983/solr) to issue the following query against the
example docs:
In the q edit box:
*:*
In the fq edit box:
On 5/8/2013 8:12 AM, marotosg wrote:
Hi,
I have 4 different cores in same machine.
Person Core - 3 million docs - 20 GB size
Company Core - 1 million docs - 2 GB size
Documents Core - 5 million docs - 5 GB size
Emails Core - 50,000 docs - 200 MB size
While I am indexing data
: I have a field with the type TIMESTAMP(6) in an oracle view.
...
: What is the best way to import it?
...
: This way works but I do not know if this is the best practise:
...
: TO_CHAR(LAST_ACTION_TIMESTAMP, 'YYYY-MM-DD HH24:MI:SS') as LAT
instead of having your
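Hoss's reply is truncated here. One commonly suggested alternative (an
assumption on my part, not necessarily what he went on to say) is to cast in
the SQL so the JDBC driver hands DIH a java.sql type instead of
oracle.sql.TIMESTAMP:

```sql
-- MY_VIEW and the column names are the hypothetical ones from the thread
SELECT OID,
       STATUS,
       CAST(LAST_ACTION_TIMESTAMP AS DATE) AS LAT
FROM MY_VIEW
```

Note that this drops the fractional seconds of TIMESTAMP(6); if those matter,
the TO_CHAR approach quoted above preserves whatever the format mask includes.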
I presume you meant to substitute the pattern and replacement for this case:
<processor class="solr.RegexReplaceProcessorFactory">
  <str name="fieldName">content</str>
  <str name="fieldName">title</str>
  <str name="pattern">,</str>
  <str name="replacement">.</str>
</processor>
-- Jack Krupansky
-Original
Geez, at this point, why not just escape the space with a backslash instead
of all that extra cruft:
q=+location:bookshelf myFieldName:G\ 23/60\ 12
or
q=myFieldName:G\ 23/60\ 12 +location:bookshelf
-- Jack Krupansky
-Original Message-
From: Upayavira
Sent: Wednesday, May 08, 2013
Is it currently possible to have per-shard replication factor?
A bit of background on the use case...
If you are hashing content to shards by a known factor (let's say date
ranges, 12 shards, 1 per month) it might be the case that most of your
search traffic would be directed to one particular