Uhhh... UIMA... and parameter checking... NOT. You're probably missing
something, but there is so much stuff.
I have some examples in my e-book that show various errors you can get for
missing/incorrect parameters for UIMA:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early
For the sake of completeness, please post the parsed query that you get when
you add the debug=true parameter. IOW, how Solr/Lucene actually interprets
the query itself.
-- Jack Krupansky
-Original Message-
From: Shawn Heisey
Sent: Thursday, August 21, 2014 10:03 AM
To: solr-user
performance, as long
as the prefix isn't too short (e.g., cat*). See PrefixQuery:
http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/search/PrefixQuery.html
ngram filters can also be used, but... that can make the index rather large.
-- Jack Krupansky
-Original Message-
From: Umesh
to confirm whether you really need to use
string as opposed to text field.
-- Jack Krupansky
-Original Message-
From: Nishanth S
Sent: Tuesday, August 19, 2014 12:03 PM
To: solr-user@lucene.apache.org
Subject: Substring and Case In sensitive Search
Hi,
I am very new to Solr. How can I
CPU-bound or I/O-bound?
-- Jack Krupansky
-Original Message-
From: SolrUser1543
Sent: Tuesday, August 19, 2014 2:57 PM
To: solr-user@lucene.apache.org
Subject: Performance of Boolean query with hundreds of OR clauses.
I am using Solr to perform search for finding similar pictures
What release of Solr?
Do you have autoGeneratePhraseQueries=true on the field?
And when you said "But any of these does", did you mean "But NONE of these
does"?
-- Jack Krupansky
-Original Message-
From: heaven
Sent: Tuesday, August 19, 2014 2:34 PM
To: solr-user@lucene.apache.org
In any case, besides the raw code and the similarity Javadoc, Lucene does
have Javadoc for file formats:
http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/codecs/lucene49/package-summary.html
-- Jack Krupansky
-Original Message-
From: Aman Tandon
Sent: Sunday, August 17
query, and pivot query, with
QTime, and debug=true timing to show which search components are consuming
the time.
-- Jack Krupansky
-Original Message-
From: Oded Sofer
Sent: Thursday, August 14, 2014 6:29 AM
To: solr-user@lucene.apache.org
Subject: Question
Hello
We are implementing
patterns, which you will
have to test for yourself, you will probably need to use an application
layer to shard your 100s of billions to specific SolrCloud clusters.
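That application-layer sharding can be sketched as a simple hash router; a minimal sketch, assuming illustrative cluster names (the real routing policy would depend on your data):

```python
import hashlib

def route_to_cluster(doc_id: str, clusters: list[str]) -> str:
    """Pick a SolrCloud cluster for a document by hashing its id.

    A stable hash keeps the same document on the same cluster across
    restarts (unlike Python's built-in hash(), which is salted per run).
    """
    digest = hashlib.md5(doc_id.encode("utf-8")).hexdigest()
    return clusters[int(digest, 16) % len(clusters)]

clusters = ["cloud-a", "cloud-b", "cloud-c"]  # illustrative names
# The same id always routes to the same cluster:
assert route_to_cluster("doc-42", clusters) == route_to_cluster("doc-42", clusters)
```

The application would then send each indexing or query request to the cluster the router picks, and merge query results itself.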
-- Jack Krupansky
-Original Message-
From: Wilburn, Scott
Sent: Thursday, August 14, 2014 11:05 AM
To: solr-user
Why? The semantics are defined by the code and similarity matching
algorithm, not... files.
-- Jack Krupansky
-Original Message-
From: abhi Abhishek
Sent: Wednesday, August 13, 2014 2:40 AM
To: solr-user@lucene.apache.org
Subject: Re: explaination of query processing in SOLR
Thanks
Could you clarify what you mean with the term cloud, as in per cloud and
individual clouds? That's not a proper Solr or SolrCloud concept per se.
SolrCloud works with a single cluster of nodes. And there is no
interaction between separate SolrCloud clusters.
-- Jack Krupansky
-Original
with a rule of thumb of 100 million documents per node (and
that is million, not billion.) That could be a lot higher - or a lot lower -
based on your actual schema and data value distribution.
-- Jack Krupansky
-Original Message-
From: Wilburn, Scott
Sent: Wednesday, August 13, 2014
Use the parse date update request processor:
http://lucene.apache.org/solr/4_9_0/solr-core/org/apache/solr/update/processor/ParseDateFieldUpdateProcessorFactory.html
Additional examples are in my e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7
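A minimal update-chain sketch for that processor (the chain name and format patterns here are illustrative, not from the thread):

```xml
<updateRequestProcessorChain name="parse-dates">
  <processor class="solr.ParseDateFieldUpdateProcessorFactory">
    <arr name="format">
      <str>yyyy-MM-dd HH:mm:ss</str>
      <str>yyyy-MM-dd</str>
    </arr>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```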
The use of a wildcard suppresses analysis of the query term, so the special
characters remain, but... they were removed when the terms were indexed, so
no match. You must manually emulate the index term analysis in order to use
wildcards.
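Emulating the index-time analysis by hand might look like this; a toy sketch where the lowercase-and-strip steps are assumptions about a typical analyzer, not your actual chain:

```python
import re

def prepare_wildcard_term(raw: str) -> str:
    """Roughly emulate a common index analyzer (lowercase, strip
    punctuation) on a wildcard term, preserving * and ? operators
    so the wildcard still works against the indexed terms."""
    term = raw.lower()
    # Remove everything that is not a letter, digit, or wildcard
    # operator, mirroring what the index-time filters stripped out.
    return re.sub(r"[^a-z0-9*?]", "", term)

assert prepare_wildcard_term("Wi-Fi*") == "wifi*"
```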
-- Jack Krupansky
-Original Message-
From
Generally, large requests are an anti-pattern in modern distributed
systems. Better to have a number of smaller requests executing in parallel
and then merge the results in the application layer.
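The split-and-merge pattern can be sketched like this (the chunk size, worker count, and the stand-in fetch function are all illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_chunk(ids):
    """Stand-in for one small request; a real version would issue an
    HTTP query to Solr for just these ids."""
    return [{"id": i} for i in ids]

def fetch_all(all_ids, chunk_size=100):
    """Break one huge request into small ones, run them in parallel,
    and merge the results in the application layer."""
    chunks = [all_ids[i:i + chunk_size]
              for i in range(0, len(all_ids), chunk_size)]
    merged = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        for docs in pool.map(fetch_chunk, chunks):  # preserves chunk order
            merged.extend(docs)
    return merged

assert len(fetch_all(list(range(250)))) == 250
```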
-- Jack Krupansky
-Original Message-
From: Bruno Mannina
Sent: Saturday, August 9, 2014
that it is not a massive, blocking request.
-- Jack Krupansky
-Original Message-
From: Bruno Mannina
Sent: Sunday, August 10, 2014 6:04 PM
To: solr-user@lucene.apache.org
Subject: Re: How can I request a big list of values ?
Hi Anshum,
I can do it with 3.6 release no ?
my main problem, it's that I have
potential of a system, which in this case is
parallel execution of distributed components.
-- Jack Krupansky
-Original Message-
From: Bruno Mannina
Sent: Sunday, August 10, 2014 6:01 PM
To: solr-user@lucene.apache.org
Subject: Re: How can I request a big list of values ?
Hi Jack,
ok
(Search
Components), but none of it is down at that Lucene file level.
-- Jack Krupansky
-Original Message-
From: abhi Abhishek
Sent: Friday, August 8, 2014 7:59 AM
To: solr-user@lucene.apache.org
Subject: explaination of query processing in SOLR
Hello,
I am fairly new to SOLR, can
And the Solr Support list is where people register their available
consulting services:
http://wiki.apache.org/solr/Support
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Friday, August 8, 2014 9:12 AM
To: solr-user
Subject: Re: Help Required
We don't mediate
and use more powerful hardware.
Architect your application and model your data around the strengths of Solr
(and Lucene.) And also look at your queries first, to make sure they will
make sense.
-- Jack Krupansky
-Original Message-
From: Lisheng Zhang
Sent: Friday, August 8, 2014 5:25
The word delimiter filter is actually combining 100-001 into 11. You
have BOTH catenateNumbers AND catenateAll, so 100-R8989 should generate
THREE tokens: the concatenated numbers 100, the concatenated words R8989,
and both numbers and words concatenated, 100R8989 .
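The three catenate options can be modeled with a toy sketch (this is not the actual Lucene filter, just the splitting-and-joining idea):

```python
import re

def catenate_tokens(term: str):
    """Toy model of the word delimiter filter's catenate options:
    split on delimiters, then catenateNumbers joins the all-digit
    subwords, catenateWords joins the rest, catenateAll joins all."""
    parts = [p for p in re.split(r"[^A-Za-z0-9]+", term) if p]
    numbers = "".join(p for p in parts if p.isdigit())
    words = "".join(p for p in parts if not p.isdigit())
    return {t for t in (numbers, words, "".join(parts)) if t}

assert catenate_tokens("100-R8989") == {"100", "R8989", "100R8989"}
```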
-- Jack Krupansky
almost anything you want, but its
up to you to decide what you want to index. IOW, it is your obligation to
come up with a data model. And the data model should be driven in large part
by the query and access requirements mentioned above.
-- Jack Krupansky
-Original Message-
From
OR tractor.
-- Jack Krupansky
-Original Message-
From: Corey Gerhardt
Sent: Wednesday, August 6, 2014 1:14 PM
To: Solr User List
Subject: Suggestion for term searches
I have an interesting situation of searching Business Names where results
should be partially sorted by position.
Searching
a stream of
flat documents.
-- Jack Krupansky
-Original Message-
From: Ali Nazemian
Sent: Wednesday, August 6, 2014 9:35 AM
To: solr-user@lucene.apache.org
Subject: Re: indexing comments with Apache Solr
Dear Alexandre,
Hi,
Thank you very much. I think nested document is what I need. Do you
An update request processor could do the trick. You can use the stateless
script update processor to code a JavaScript snippet to do whatever logic
you want. Plenty of examples in my e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product
And neither project supports the Lucene faceting module, correct?
And the ES web site says: WARNING: Facets are deprecated and will be
removed in a future release. You are encouraged to migrate to aggregations
instead.
That makes it more of an apples/oranges comparison.
-- Jack Krupansky
, the query phrase
will generate an extra token which will participate in the matching and
might cause a mismatch.
-- Jack Krupansky
-Original Message-
From: Paul Rogers
Sent: Monday, August 4, 2014 5:55 PM
To: solr-user@lucene.apache.org
Subject: Re: How to search for phrase IAE_UPC_0001
for some clear and obvious benefit in terms of features, performance, and
scalability.
-- Jack Krupansky
-Original Message-
From: Salman Akram
Sent: Friday, August 1, 2014 1:35 AM
To: Solr Group
Subject: Re: Solr vs ElasticSearch
I did see that earlier. My main concern is search
performance
do need separate queries.
-- Jack Krupansky
-Original Message-
From: Smitha Rajiv
Sent: Thursday, July 31, 2014 1:31 AM
To: solr-user@lucene.apache.org
Subject: Querying from solr shards
Hi All,
Currently i am using solr legacy distributed configuration (not solr cloud,
single solr
To be clear, I wasn't suggesting that Accumulo was the cause of integration
complexity - EVERY NoSQL will have integration complexity of comparable
magnitude. The advantage of DataStax Enterprise or Sqrrl Enterprise is that
they have done the integration work for you.
-- Jack Krupansky
And I have a lot more explanation and examples for word delimiter filter in
my e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product-21203548.html
-- Jack Krupansky
-Original Message-
From: Erick Erickson
Sent: Thursday, July 31
A range query: published_date:[2012-09-26T00:00:00Z TO
2012-09-27T00:00:00Z}
With LucidWorks Search, you can simply say: published_date:2012-09-26 and it
will internally generate that full range query.
See:
http://docs.lucidworks.com/display/lweug/Date+Queries
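The full range query for one day can be built programmatically; a small sketch (the field name is just the one from the example above):

```python
from datetime import date, timedelta

def day_range_query(field: str, day: date) -> str:
    """Build the Solr range query covering one calendar day:
    [ makes the start inclusive, } makes the end exclusive."""
    start = day.strftime("%Y-%m-%dT00:00:00Z")
    end = (day + timedelta(days=1)).strftime("%Y-%m-%dT00:00:00Z")
    return f"{field}:[{start} TO {end}}}"

assert day_range_query("published_date", date(2012, 9, 26)) == \
    "published_date:[2012-09-26T00:00:00Z TO 2012-09-27T00:00:00Z}"
```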
-- Jack Krupansky
. The offset of the offending document should be reported so that the app
can locate the problem to resolve it.
Give us the actual server stack trace so we can verify whether this was
simply user error or some defect in Solr itself.
-- Jack Krupansky
-Original Message-
From: Liram
, if any.
-- Jack Krupansky
-Original Message-
From: EXTERNAL Taminidi Ravi (ETI, Automotive-Service-Solutions)
Sent: Tuesday, July 29, 2014 8:10 AM
To: solr-user@lucene.apache.org
Subject: fq bq
Hi , I am using the bq to boost a particular value of a field. But when I
try to add
caching to hold the entire Solr index.
Do you have Solr auto-commit enabled?
-- Jack Krupansky
-Original Message-
From: Ameya Aware
Sent: Tuesday, July 29, 2014 3:01 PM
To: solr-user@lucene.apache.org
Subject: Re: Scaling Issues
I am using Apache ManifoldCF framework which connects to my
Apply the boost to the specific query term you want boosted, like
Name:Car*^200.
-- Jack Krupansky
-Original Message-
From: EXTERNAL Taminidi Ravi (ETI, Automotive-Service-Solutions)
Sent: Tuesday, July 29, 2014 3:16 PM
To: Jack Krupansky
Cc: solr-user@lucene.apache.org
Subject: RE
if it
is missing, but you would have to specify an explicit value for the URP to
use.
-- Jack Krupansky
-Original Message-
From: Elran Dvir
Sent: Monday, July 28, 2014 4:12 AM
To: solr-user@lucene.apache.org
Subject: RE: copy EnumField to text field
Are you saying that default values
Correct - copy field copies the raw, original, source input value, before
the actual field type has had a chance to process it in any way.
-- Jack Krupansky
-Original Message-
From: Elran Dvir
Sent: Monday, July 28, 2014 8:08 AM
To: solr-user@lucene.apache.org
Subject: RE: copy
Or are you using ManifoldCF?
-- Jack Krupansky
-Original Message-
From: Rafał Kuć
Sent: Monday, July 28, 2014 11:00 AM
To: solr-user@lucene.apache.org
Subject: Re: Query about vacuum full
Hello!
Please refer to PostgreSQL mailing list with this question. This
question is purely
to sync mode and not mere ETL, which
people do all the time with batch scripts, Java extraction and ingestion
connectors, and cron jobs.
Give it a shot and let us know how it works out.
-- Jack Krupansky
-Original Message-
From: Ali Nazemian
Sent: Sunday, July 27, 2014 1:20 AM
immediate, high-value support and
guidance. A rich guy's version of this mailing list!
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Friday, July 25, 2014 9:17 PM
To: solr-user
Subject: Re: Any Solr consultants available??
On Fri, Jul 25, 2014 at 6:59 PM, Jack
are like that in
Solr.
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Friday, July 25, 2014 4:52 AM
To: solr-user
Subject: Re: Any Solr consultants available??
Well, if we do it in England, we could hire out a castle, I bet. :-) I
am flexible on my holiday locations
debug gives
you.
-- Jack Krupansky
-Original Message-
From: O. Olson
Sent: Thursday, July 24, 2014 6:45 PM
To: solr-user@lucene.apache.org
Subject: Understanding the Debug explanations for Query Result
Scoring/Ranking
Hi,
If you add debug=true to the Solr request (and wt=xml
-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product-21203548.html
You could at least use this feature to implement a "did you mean..." UI for
your search app - show the user actual results but also a proposed query
with the words broken apart
are
still a work in progress, but certainly headed in the right direction. And
it has Hadoop and Spark integration as well.
See:
http://www.datastax.com/what-we-offer/products-services/datastax-enterprise
-- Jack Krupansky
-Original Message-
From: Ali Nazemian
Sent: Thursday, July 24
.
-- Jack Krupansky
-Original Message-
From: Sven Schönfeldt
Sent: Thursday, July 24, 2014 8:35 AM
To: solr-user@lucene.apache.org
Subject: Re: Need a tipp, how to find documents where content is tel aviv
but user query is telaviv?
Thanks!
Thats my core problem, to let solr search
.
See:
http://sqrrl.com/product/search/
Out of curiosity, why are you not using that integrated Lucene support of
Sqrrl Enterprise?
-- Jack Krupansky
-Original Message-
From: Ali Nazemian
Sent: Thursday, July 24, 2014 3:07 PM
To: solr-user@lucene.apache.org
Subject: Re: integrating
defaultSearchField being
deprecated in favor of the df parameter.
-- Jack Krupansky
-Original Message-
From: shashi.rsb
Sent: Wednesday, July 23, 2014 5:51 AM
To: solr-user@lucene.apache.org
Subject: solr 3.6 to 4.7 upgrade has changed the query string
Hi,
Our backend application
... anyone. All the great Solr guys I know are quite busy.
Thanks.
-- Jack Krupansky
From: Jessica Feigin
Sent: Wednesday, July 23, 2014 3:36 PM
To: 'Jack Krupansky'
Subject: Thank you!
Hi Jack,
Thanks for your assistance, below is the Solr Consultant job description:
Our client
Yeah, I saw that, which is why I suggested not being too picky about specific
requirements. If you have at least two or three years of solid Solr experience,
that would make you at least worth looking at.
-- Jack Krupansky
From: Tri Cao
Sent: Wednesday, July 23, 2014 3:57 PM
To: solr-user
Deleted documents remain in the Lucene index until an optimize or segment
merge operation removes them. As a result they are still counted in document
frequency. An update is a combination of a delete and an add of a fresh
document.
-- Jack Krupansky
-Original Message-
From
bet is to get that RDBMS data moved to Cassandra or DSE ASAP. All
you have until then is a stopgap measure rather than a robust architecture.
-- Jack Krupansky
-Original Message-
From: Yavar Husain
Sent: Tuesday, July 22, 2014 2:22 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr
Or possibly use the synonym filter at query or index time for common
misspellings or misunderstandings about the spelling. That would be
automatic, without the user needing to add the explicit fuzzy query
operator.
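For example, a synonyms.txt along these lines (the entries are illustrative):

```
telaviv => tel aviv
recieve => receive
```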
-- Jack Krupansky
-Original Message-
From: Anshum Gupta
Sent
in that
Solr-enabled Cassandra data center just the same as with normal Solr.
-- Jack Krupansky
-Original Message-
From: Yavar Husain
Sent: Monday, July 21, 2014 8:37 AM
To: solr-user@lucene.apache.org
Subject: Solr Cassandra MySQL Best Practice Indexing
So my full text data lies on Cassandra
Set the field type for such a field to "ignored".
Or set it to "string" and then you can still examine or query the data even
if it is not properly formatted.
-- Jack Krupansky
-Original Message-
From: Ameya Aware
Sent: Monday, July 21, 2014 11:12 AM
To: solr-user@lucene.apache.org
put the entire query in quotes or escape the space with a backslash.
Or, just use the edismax query parser with the pf or pf2 parameters and
then Solr will boost exact phrase matches even if not quoted or escaped.
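The two manual options can be sketched as trivial helpers (toy code, just to show the two forms):

```python
def quote_phrase(phrase: str) -> str:
    """Wrap the whole query in quotes so it matches as a phrase."""
    return f'"{phrase}"'

def escape_spaces(phrase: str) -> str:
    """Escape each space with a backslash instead of quoting."""
    return phrase.replace(" ", r"\ ")

assert quote_phrase("tel aviv") == '"tel aviv"'
assert escape_spaces("tel aviv") == r"tel\ aviv"
```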
-- Jack Krupansky
-Original Message-
From: prashantc88
Sent: Monday
Based on your stated requirements, there is no obvious need to use the
keyword tokenizer. So fix that and then quoted phrases or escaped spaces
should work.
-- Jack Krupansky
-Original Message-
From: prashantc88
Sent: Monday, July 21, 2014 11:51 AM
To: solr-user@lucene.apache.org
index.
-- Jack Krupansky
-Original Message-
From: newBie88
Sent: Monday, July 21, 2014 1:13 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr schema.xml query analyser
My apologies Jack. But there was a mistake in my question.
I actually switched query and textExactMatch in my
query. For SolrCloud the sharding of queries takes place automatically
without any shards parameter. But you should use the CloudSolrServer for
load balancing of SolrCloud anyway - internally it does the load balancing
automatically based on discovery of the SolrCloud configuration.
-- Jack
You can specify an alternate query to use for highlighting purposes, with
the hl.q parameter. It doesn't affect the query results, but lets you
control which terms get highlighted.
See:
http://wiki.apache.org/solr/HighlightingParameters#hl.q
-- Jack Krupansky
-Original Message-
From
Further down in the stack trace you will find the cause of the exception.
Solr is calling the init method, but your code is throwing an exception.
Your jar is probably in the proper place, otherwise Solr wouldn't have been
able to load it and call the init method for it.
-- Jack Krupansky
500B - as in 500,000,000,000? Really?
-- Jack Krupansky
-Original Message-
From: tomasv
Sent: Friday, July 18, 2014 8:18 PM
To: solr-user@lucene.apache.org
Subject: shards as subset of All Shards
Hello, This is kind of weird, but here goes:
We are setting up a document repository
, if only better error reporting at a minimum.
-- Jack Krupansky
-Original Message-
From: Shalin Shekhar Mangar
Sent: Thursday, July 17, 2014 12:40 AM
To: solr-user@lucene.apache.org
Subject: Re: problem with replication/solrcloud - getting 'missing required
field' during update
is a
concern as well.
-- Jack Krupansky
-Original Message-
From: Andy Crossen
Sent: Wednesday, July 16, 2014 1:05 PM
To: solr-user@lucene.apache.org
Subject: Re: Using hundreds of dynamic fields
Thanks, Jack and Jared, for your input on this. I'm looking into whether
parent-child
Yeah, this is another one of those places where the behavior of Solr is
defined but way down in the Lucene Javadoc, where no Solr user should ever
have to go!
It's also the kind of detail documented in my Solr Deep Dive e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive
Oops... forgot the link to the stop filter factory Javadoc:
http://lucene.apache.org/core/4_9_0/analyzers-common/org/apache/lucene/analysis/core/StopFilterFactory.html
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Tuesday, July 15, 2014 7:42 AM
To: solr-user
It's a bug (file a Jira) that this Lucene (and Solr) feature is not
documented in the Lucene Javadoc for the stop filter factory.
But, I do have it fully documented, with examples, in my Solr Deep Dive
e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access
Or, if you happen to leave off the words attribute of the stop filter (or
misspell the attribute name), it will use the internal Lucene hardwired list
of stop words.
-- Jack Krupansky
-Original Message-
From: Anshum Gupta
Sent: Monday, July 14, 2014 4:03 PM
To: solr-user
-and-cassandra
If those approaches are not sufficient for your needs, maybe you could
elaborate on any special needs you have.
-- Jack Krupansky
-Original Message-
From: Shuai Zhang
Sent: Sunday, July 13, 2014 7:38 AM
To: solr-user@lucene.apache.org
Subject: Is there any data importer
.
But... none of that has anything to do with your subject question of data
importer, so... what is the real question here?
-- Jack Krupansky
-Original Message-
From: Shuai Zhang
Sent: Sunday, July 13, 2014 11:06 AM
To: solr-user@lucene.apache.org
Subject: Re: Is there any data importer
engine.
-- Jack Krupansky
-Original Message-
From: Lee Chunki
Sent: Thursday, July 10, 2014 1:24 AM
To: solr-user@lucene.apache.org
Subject: run multiple queries at the same time
Hi,
Is there any way to run multiple queries at the same time?
situation is
1. when query in
2. check
Ahmet is correct: the Porter stemmer assumes that your input is lower case,
so be sure to place the lowercase filter before stemming.
BTW, this is the kind of detail that I have in my e-book:
http://www.lulu.com/us/en/shop/jack-krupansky/solr-4x-deep-dive-early-access-release-7/ebook/product
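The ordering point can be demonstrated with a toy pipeline; the stemmer below is a stand-in that, like Porter, only understands lowercase suffixes (it is not the real implementation):

```python
def toy_stem(token: str) -> str:
    """Toy stand-in for a stemmer whose suffix rules assume
    lowercase input, like the Porter stemmer."""
    if not token.islower():
        return token  # uppercase input defeats the suffix rules
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def analyze(text: str, lowercase_first: bool):
    tokens = text.split()
    if lowercase_first:
        tokens = [t.lower() for t in tokens]
    return [toy_stem(t) for t in tokens]

# Lowercasing first lets the stemmer fire; without it, "Running"
# passes through unstemmed.
assert analyze("Running", lowercase_first=True) == ["runn"]
assert analyze("Running", lowercase_first=False) == ["Running"]
```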
parser doc in my e-book before 4.1 came out.
This was the point where the divorce between the Lucene and Solr query
parsers took place, because the feature needed to be added to the query
parser grammar, but the Lucene guys objected to this Solr feature.
-- Jack Krupansky
-Original Message
From the Solr 4.1 release notes:
* Solr QParsers may now be directly invoked in the lucene query syntax
via localParams and without the _query_ magic field hack.
Example: foo AND {!term f=myfield v=$qq}
-- Jack Krupansky
-Original Message-
From: Jack Krupansky
Sent: Thursday, July
The word delimiter filter has a types parameter where you specify a file
that can map hyphen to alpha or numeric.
There is an example in my e-book.
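A sketch of that setup (the file name is an assumption, and the escape syntax can vary by Solr version): add types="wdftypes.txt" to the word delimiter filter, with the file mapping hyphen to ALPHA so it is no longer treated as a delimiter:

```
- => ALPHA
```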
-- Jack Krupansky
-Original Message-
From: EXTERNAL Taminidi Ravi (ETI, Automotive-Service-Solutions)
Sent: Tuesday, July 8, 2014 2:18
you need to populate, the
analysis and processing details can be worked out.
-- Jack Krupansky
-Original Message-
From: Dan Bolser
Sent: Saturday, July 5, 2014 4:49 AM
To: solr-user
Subject: Re: Field for 'species' data?
I'm super noob... Why choose to write it as a custom update
So, the immediate question is whether the value in the Solr source document
has the full taxonomy path for the species, or just parts, and some external
taxonomy definition must be consulted to fill in the rest of the hierarchy
path for that species.
-- Jack Krupansky
-Original Message
if the time is spent in the query process or some
other stage of processing.
How is your JVM heap usage? Make sure you have enough heap but not too much.
Are a lot of GCs occurring?
Does your index fit entirely in OS system memory for file caching? If not,
you could be incurring tons of IO.
-- Jack
, with a separate query request handler for each language that
optimizes the settings for that language, such as the language-specific
fields to use for the qf parameter.
-- Jack Krupansky
-Original Message-
From: benjelloun
Sent: Friday, July 4, 2014 10:52 AM
To: solr-user@lucene.apache.org
the language-specific qf_xx parameter to the main
qf parameter based on the language that is detected.
-- Jack Krupansky
-Original Message-
From: Paul Libbrecht
Sent: Friday, July 4, 2014 11:36 AM
To: solr-user@lucene.apache.org
Subject: Re: multilingual search
To do just what Jack
/PathHierarchyTokenizerFactory.html
Or maybe a combination of the two approaches.
I think I have some examples of it in my e-book.
-- Jack Krupansky
-Original Message-
From: Dan Bolser
Sent: Friday, July 4, 2014 11:57 AM
To: solr-user
Subject: Re: Field for 'species' data?
The problem
s/dynamic_field/dynamicField/
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Thursday, July 3, 2014 5:45 AM
To: solr-user@lucene.apache.org
Subject: Re: Dynamic field doesnt work
I would say something is misspelt somewhere. Put a dynamic field
called
Unfortunately, not - the syntax is hard-wired into the grammar.
Feel free to file a Jira though. I would be in favor of having a query
parser config option to disable features like regex and leading wildcard as
well.
-- Jack Krupansky
-Original Message-
From: Markus Schuch
Sent
is still open, with disagreement over whether even giving an
exception for this scenario is reasonable. I'll add some comments there. I'm
in favor of BooleanQuery implicitly adding the *:* if only negative terms
are present.
-- Jack Krupansky
-Original Message-
From: Shawn Heisey
Sent
I think the white space after the comma is the culprit. No white space is
allowed in function queries that are embedded, such as in the sort
parameter.
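For example (field names are made up):

```
sort=sum(popularity, score)+desc   <- fails to parse: space after the comma
sort=sum(popularity,score)+desc    <- parses
```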
-- Jack Krupansky
-Original Message-
From: Ahmet Arslan
Sent: Wednesday, July 2, 2014 2:19 PM
To: solr-user@lucene.apache.org
Thanks for posting this.
-- Jack Krupansky
-Original Message-
From: wrdrvr
Sent: Wednesday, July 2, 2014 1:47 PM
To: solr-user@lucene.apache.org
Subject: Re: Migration from Autonomy IDOL to SOLR
I know that this is an old thread, but I wanted to pass on some additional
information
Take a look at the synonym filter as well. I mean, basically that's exactly
what you are doing - adding synonyms at each position.
-- Jack Krupansky
-Original Message-
From: Manuel Le Normand
Sent: Wednesday, July 2, 2014 12:57 PM
To: solr-user@lucene.apache.org
Subject: Re: OCR
,query($q,0))+desc&wt=json&indent=true
-- Jack Krupansky
-Original Message-
From: rachun
Sent: Wednesday, July 2, 2014 7:44 PM
To: solr-user@lucene.apache.org
Subject: Re: Customise score
Hi Jack,
I tried as you suggest
.../select?q=MacBook&sort=sum(base_score,score)+desc&wt=json&indent=true
Yeah, there's a known bug that a negative-only query within parentheses
doesn't match properly - you need to add a non-negative term, such as *:*.
For example:
text:(+happy) AND user:(*:* -123456789)
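The workaround can be automated with a tiny helper (a sketch, not part of Solr itself):

```python
def negative_safe(terms: list[str]) -> str:
    """Build a parenthesized clause, injecting *:* when every term is
    negative so the sub-query still matches something (working around
    the pure-negative-in-parentheses bug described above)."""
    if all(t.startswith("-") for t in terms):
        terms = ["*:*"] + terms
    return "(" + " ".join(terms) + ")"

assert negative_safe(["-123456789"]) == "(*:* -123456789)"
assert negative_safe(["+happy"]) == "(+happy)"
```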
-- Jack Krupansky
-Original Message-
From: Brett Hoerner
Sent: Tuesday, July 1
only handles pure negative queries at the
top-level query, but not within parenthesized sub-queries
https://issues.apache.org/jira/browse/SOLR-3744
And:
SOLR-3729 - ExtendedDismaxQParser (edismax) doesn't parse (*:*) properly
https://issues.apache.org/jira/browse/SOLR-3729
-- Jack Krupansky
My vague recollection is that at least at one time there was a limitation
somewhere in SolrCloud, but whether that is still true, I don't know.
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Tuesday, July 1, 2014 9:48 AM
To: solr-user@lucene.apache.org
Subject
anyway, so count of
characters versus UTF-8 bytes may be a non-problem.
-- Jack Krupansky
-Original Message-
From: Michael Ryan
Sent: Tuesday, July 1, 2014 9:49 AM
To: solr-user@lucene.apache.org
Subject: Best way to fix Document contains at least one immense term?
In LUCENE-5472, Lucene
parameters for each boost.
See:
https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser
-- Jack Krupansky
-Original Message-
From: Bhoomit Vasani
Sent: Monday, June 30, 2014 7:30 AM
To: solr-user@lucene.apache.org
Subject: How do I use multiple boost functions
formats
anyway, eliminating the need for the parse date update processor - a Solr
band-aid to cover the weakness of the Lucene feature.
-- Jack Krupansky
-Original Message-
From: Shalin Shekhar Mangar
Sent: Sunday, June 29, 2014 6:43 AM
To: solr-user@lucene.apache.org
Subject: Re: Any way
I think you wanted to remove letters, but your pattern removes NON-letters -
that's what the ^ does, negation.
So, try: pattern="[a-z]".
You can also get rid of the lower case filter and just use [a-zA-Z].
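The difference is easy to check directly in Python (toy strings, just to show what the negation does):

```python
import re

# pattern="[^a-z]" removes everything EXCEPT lowercase letters;
# pattern="[a-z]" removes the letters themselves, which is what
# was intended here.
assert re.sub(r"[^a-z]", "", "abc123") == "abc"   # negated: strips digits
assert re.sub(r"[a-z]", "", "abc123") == "123"    # intended: strips letters
# Without a preceding lowercase filter, cover both cases directly:
assert re.sub(r"[a-zA-Z]", "", "Abc123") == "123"
```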
-- Jack Krupansky
-Original Message-
From: rachun
Sent: Sunday, June 29, 2014 3:19 AM
One shard with one replica would be a single machine, so maybe you mean
either two shards each with one replica or one shard with two replicas.
-- Jack Krupansky
-Original Message-
From: vidit.asthana
Sent: Saturday, June 28, 2014 5:09 PM
To: solr-user@lucene.apache.org
Subject
previous metrics for that document.
-- Jack Krupansky
-Original Message-
From: Andy Crossen
Sent: Friday, June 27, 2014 11:10 AM
To: solr-user@lucene.apache.org
Subject: Using hundreds of dynamic fields
Hi folks,
My application requires tracking a daily performance metric for all
Erick, I agree, but... wouldn't it be SO COOL if it did work! Avoid all the
ridiculous complexity of cloud.
Have a temporary lock to permit and exclude updates.
-- Jack Krupansky
-Original Message-
From: Erick Erickson
Sent: Thursday, June 26, 2014 12:37 PM
To: solr-user
for this case??)
-- Jack Krupansky
-Original Message-
From: Erick Erickson
Sent: Tuesday, June 24, 2014 11:38 AM
To: solr-user@lucene.apache.org ; Ahmet Arslan
Subject: Re: No results for a wildcard query for text_general field in solr
4.1
Wildcards are a tough thing to get your head around
guide?
-- Jack Krupansky
-Original Message-
From: Erick Erickson
Sent: Tuesday, June 24, 2014 11:46 AM
To: solr-user@lucene.apache.org
Subject: Re: Does one need to perform an optimize soon after doing a batch
indexing using SolrJ ?
Your indexing process looks fine, there's no reason