Did you mean to use "||" for the OR operator? A single "|" is not treated as
an operator - it will be treated as a term and sent through normal term
analysis.
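A quick way to see the difference is to compare how the two queries travel over HTTP. A minimal Python sketch (the field name "title" is illustrative, not from the original thread):

```python
from urllib.parse import urlencode

# "||" is the Boolean OR operator in the Lucene/Solr query syntax;
# a lone "|" is not an operator and is sent through ordinary term analysis.
or_query = urlencode({"q": "title:solr || title:lucene"})
bad_query = urlencode({"q": "title:solr | title:lucene"})  # "|" becomes a term

print(or_query)
print(bad_query)
```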
-- Jack Krupansky
-Original Message-
From: Shamik Bandopadhyay
Sent: Wednesday, February 12, 2014 5
Is price a float/double field?
price:[99.5 TO 100.5] -- price near 100
price:[900 TO 1000]
or
price:[899.5 TO 1000.5]
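The bracketed range syntax above can be generated mechanically; a small sketch (the helper name "range_fq" is mine, not a Solr API):

```python
def range_fq(field, low, high, inclusive=True):
    """Build a Solr range query string, e.g. price:[99.5 TO 100.5].

    Square brackets make the endpoints inclusive; curly braces exclude them.
    """
    open_b, close_b = ("[", "]") if inclusive else ("{", "}")
    return f"{field}:{open_b}{low} TO {high}{close_b}"

print(range_fq("price", 99.5, 100.5))  # price:[99.5 TO 100.5]
print(range_fq("price", 900, 1000))    # price:[900 TO 1000]
```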
-- Jack Krupansky
-Original Message-
From: jay67
Sent: Wednesday, February 12, 2014 12:03 PM
To: solr-user@lucene.apache.org
Subject: Using numeric ranges in Solr
x+Handlers#UploadingDatawithIndexHandlers-UsingXSLTtoTransformXMLIndexUpdates
-- Jack Krupansky
-Original Message-
From: Eric_Peng
Sent: Wednesday, February 12, 2014 11:42 AM
To: solr-user@lucene.apache.org
Subject: Re: Question about how to upload XML by using SolrJ Client Java
Code
Tha
computing
environment, coupled with multi-core processors and parallel threads.
-- Jack Krupansky
-Original Message-
From: Pisarev, Vitaliy
Sent: Wednesday, February 12, 2014 10:28 AM
To: solr-user@lucene.apache.org
Subject: RE: Solr performance with commitWithin seems too good to be true. I
a fault-tolerant,
fully-distributed system. Your application can/should make its own decision
as to what it will do if an indexing operation cannot be serviced.
-- Jack Krupansky
-Original Message-
From: elmerfudd
Sent: Wednesday, February 12, 2014 7:54 AM
To: solr-user
)))+OR+(field2:value2).
-- Jack Krupansky
-Original Message-
From: Johannes Siegert
Sent: Tuesday, February 11, 2014 10:57 AM
To: solr-user@lucene.apache.org
Subject: solr-query with NOT and OR operator
Hi,
my solr-request contains the following filter-query:
fq=((-(field1:value1
Try the complex phrase query parser:
https://issues.apache.org/jira/browse/SOLR-1604
Or in LucidWorks Search you can say:
J* NEAR:5 K*
-- Jack Krupansky
-Original Message-
From: Kashish
Sent: Monday, February 10, 2014 6:12 PM
To: solr-user@lucene.apache.org
Subject: Re
"Eric solrUser"~102
would match.
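The slop number after "~" is the maximum number of position moves allowed between the phrase terms, which is why a value like 102 can reach across a positionIncrementGap of 100 between multivalued entries. A sketch of building such a query (helper name is mine):

```python
def sloppy_phrase(terms, slop):
    # A phrase query whose terms may be up to `slop` positions apart.
    return '"{}"~{}'.format(" ".join(terms), slop)

print(sloppy_phrase(["Eric", "solrUser"], 102))  # "Eric solrUser"~102
```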
-- Jack Krupansky
-Original Message-
From: Nirali Mehta
Sent: Monday, February 10, 2014 3:13 PM
To: solr-user@lucene.apache.org
Subject: Re: positionIncrementGap in schema.xml - Doesn't seem to work
Erick,
Here is the example.
The
.
That said, given Lucene/Solr's rich support for large tokenized fields, they
might be a better choice for representing large lists of entities - if
denormalization is not quite practical.
-- Jack Krupansky
-Original Message-
From: Luis Lebolo
Sent: Monday, February 10, 2014
ction would rarely need to be sharded.
You didn't speak at all about HA (High Availability) requirements or
replication.
Or about query latency requirements or query load - which can impact
replication requirements.
-- Jack Krupansky
-Original Message-
From: Pisarev, Vit
analyzer, etc.
More like type as merely inheriting attributes from another field/type.
-- Jack Krupansky
-Original Message-
From: Benson Margulies
Sent: Saturday, February 8, 2014 2:37 PM
To: solr-user@lucene.apache.org
Subject: A bit lost in the land of schemaless Solr
Say that I have 10
that more will definitely cause problems, but because you will
be beyond common usage and increasingly sensitive to amount of data and
Java/JVM performance capabilities.
-- Jack Krupansky
-Original Message-
From: Mike L.
Sent: Saturday, February 8, 2014 2:12 PM
To: solr-user@lucene
The UIMA component is not very error-friendly - NPE gets thrown for missing
or misspelled parameter names. Basically, you have to look at the source
code based on that stack trace to find out which parameter was missing.
-- Jack Krupansky
-Original Message-
From: rashi gandhi
Sent
I suspect that's a bug. The phrase boost code should have the logic to
exclude negated terms.
File a Jira.
Thanks for reporting this.
-- Jack Krupansky
-Original Message-
From: Geert Van Huychem
Sent: Friday, February 7, 2014 9:40 AM
To: solr-user@lucene.apache.org
Su
Use the pf parameter and then you won't have to modify the original query at
all! And you can add a boost for the phrase, which is a common practice.
pf=search-field^10.0
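For example, an edismax request that searches the field and boosts whole-phrase matches might be assembled like this (query text and boost value are illustrative):

```python
from urllib.parse import urlencode

params = {
    "defType": "edismax",
    "q": "red wool socks",       # the original query, unmodified
    "qf": "search-field",        # field(s) to search
    "pf": "search-field^10.0",   # boost docs matching the whole query as a phrase
}
print(urlencode(params))
```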
-- Jack Krupansky
-Original Message-
From: Srinivasa7
Sent: Thursday, February 6, 2014 11:21 AM
To: solr
work, at least for some simple cases.
-- Jack Krupansky
-Original Message-
From: Teague James
Sent: Thursday, February 6, 2014 11:11 AM
To: solr-user@lucene.apache.org
Subject: RE: Partial Word Search
Jack,
Thanks for responding! I had tried configuring this asymmetrically before
omewhat to query complexity,
albeit in the name of better relevancy.
-- Jack Krupansky
-Original Message-
From: Srinivasa7
Sent: Thursday, February 6, 2014 9:30 AM
To: solr-user@lucene.apache.org
Subject: Performance impact using edismax over dismax
Hi All,
I have a requirement to sear
It appears that at this moment the best approach would be to write a Java
program that reads from MongoDB and writes to Solr (Solr XML update
requests). Or, write a program that reads from MongoDB and outputs a CSV
format text file and then import that directly into Solr.
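The CSV route might look like the following Python sketch. The documents and field names are invented for illustration; in practice you would read them with a MongoDB client and POST the CSV to Solr's CSV update handler:

```python
import csv
import io

# Hypothetical documents as they might come back from a MongoDB query.
docs = [
    {"id": "1", "title": "First doc", "price": 99.5},
    {"id": "2", "title": "Second doc", "price": 100.5},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "title", "price"])
writer.writeheader()   # Solr's CSV loader reads field names from the header row
writer.writerows(docs)
print(buf.getvalue())
```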
-- Jack Krupansky
Yes. Look at the example solrconfig.xml for a section labeled "defaults" for
the "/select" request handler. You should see "df" as one parameter. Just
copy that and change "df" to "debug" and change the field name to "true".
-- Jack
Tom, I did make an effort to "sort out" both the old and newer solr.xml
features in my Solr 4.x Deep Dive e-book.
-- Jack Krupansky
-Original Message-
From: Tom Burton-West
Sent: Wednesday, February 5, 2014 5:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Default core f
ce it will query for all
the sub-terms, but AND will only work if all the sub-terms occur in the
document field.
-- Jack Krupansky
-Original Message-
From: Teague James
Sent: Wednesday, February 5, 2014 4:52 PM
To: solr-user@lucene.apache.org
Subject: Partial Word Search
I cannot
(Gulp!)
You could also set the debug parameter (temporarily) in the defaults section
of your query request handler. But you still need to dump the text of the
query response.
-- Jack Krupansky
-Original Message-
From: Mauro Gregorio Binetti
Sent: Wednesday, February 5, 2014 12:47
I’m not interested in the log (although maybe somebody else can spot something
there) – it’s the query response that is returned on your query HTTP request
(XML or JSON.) The specific parameter to add to your HTTP query request is
“&debug=true”.
-- Jack Krupansky
From: Mauro Gregorio Bin
Simply post to this mail list the timing section of the query response for a
test query that you feel is too slow, but be sure to add the debug=true
parameter (or debug=timing.)
-- Jack Krupansky
-Original Message-
From: Mauro Gregorio Binetti
Sent: Wednesday, February 5, 2014 6:44
ush" model, as opposed to the
Solr DIH "pull" model.
See:
http://blog.mongodb.org/post/29127828146/introducing-mongo-connector
-- Jack Krupansky
-Original Message-
From: rachun
Sent: Wednesday, February 5, 2014 6:25 AM
To: solr-user@lucene.apache.org
Subject: Re:
to a query.
-- Jack Krupansky
-Original Message-
From: Mauro Gregorio Binetti
Sent: Wednesday, February 5, 2014 5:17 AM
To: solr-user@lucene.apache.org
Subject: Disable searching on ddm tika metadata
Hi everybody,
I'm a newbie and I'm working on searching performance in
What will your queries be like? Will it be okay if they are relatively slow?
I mean, how many of those 100 fields will you need to use in a typical (95th
percentile) query?
-- Jack Krupansky
-Original Message-
From: Mike L.
Sent: Tuesday, February 4, 2014 10:00 PM
To: solr-user
nt of data processed can help. Any multivalued fields with lots of
values?
-- Jack Krupansky
-Original Message-
From: Joel Cohen
Sent: Tuesday, February 4, 2014 1:43 PM
To: solr-user@lucene.apache.org
Subject: Re: Lowering query time
1. We are faceting. I'm not a developer so I'
nd queried using the Cassandra
API(s) as well.
To be clear, DSE is a "database", not a "search platform". The idea is that
DSE can be the system of record, with data stored in Cassandra and can
easily be reindexed for Solr at any time from that Cassandra data.
-- Jack Kr
Maybe you need a larger Java heap.
-- Jack Krupansky
-Original Message-
From: Sathya
Sent: Tuesday, February 4, 2014 6:11 AM
To: solr-user@lucene.apache.org
Subject: Solr Searching Issue
Hi Friends,
I have been working with Solr 4.6.0 for the last 2 months. I have indexed the data in
Solr
I think he wants to do a bunch of separate queries and return separate result
sets for each.
Hmmm... maybe it would be nice to allow multiple "q" parameters in one query
request, each returning a separate set of results.
-- Jack Krupansky
-Original Message-
From: Eric
PDF files can be directly imported into Solr using Solr Cell (AKA
ExtractingRequestHandler).
See:
https://cwiki.apache.org/confluence/display/solr/Uploading+Data+with+Solr+Cell+using+Apache+Tika
Internally, Solr Cell uses Tika, which in turn uses PDFBox.
-- Jack Krupansky
-Original
If SDL Tridion can export to CSV format, Solr can then import from CSV
format.
Otherwise, you may have to write a custom script or even maybe Java code to
read from SDL Tridion and output a supported Solr format, such as Solr XML,
Solr JSON, or CSV.
-- Jack Krupansky
-Original Message
q=+term1 term2^0.6
Will require term1 but term2 is optional.
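A sketch of assembling such a clause list (the "clause" helper is mine, not part of any Solr client library):

```python
def clause(term, required=False, boost=None):
    # "+" marks a clause as mandatory; "^" attaches a boost to an optional one.
    out = ("+" if required else "") + term
    if boost is not None:
        out += f"^{boost}"
    return out

q = " ".join([clause("term1", required=True), clause("term2", boost=0.6)])
print(q)  # +term1 term2^0.6
```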
-- Jack Krupansky
-Original Message-
From: abhishek jain
Sent: Saturday, February 1, 2014 10:27 AM
To: solr-user@lucene.apache.org ; 'Ahmet Arslan'
Subject: RE: Special character search in Solr and boosting withou
What does your actual query look like? Is it two range queries and an AND?
Also, you have spaces in your field names, so that makes it more difficult
to write queries since they need to be escaped.
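Escaping can be done mechanically; this sketch backslash-escapes the characters that are special in the Lucene query syntax, plus the space (the function name is mine):

```python
import re

def escape_query_chars(s):
    # Backslash-escape Lucene query-syntax special characters, including spaces.
    return re.sub(r'([+\-&|!(){}\[\]^"~*?:\\/ ])', r'\\\1', s)

print(escape_query_chars("unit price"))  # unit\ price
print(escape_query_chars("a:b"))         # a\:b
```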
-- Jack Krupansky
-Original Message-
From: Avner Levy
Sent: Saturday, January 18
g what the original
handler name was.
-- Jack Krupansky
-Original Message-
From: StrW_dev
Sent: Friday, January 31, 2014 4:56 AM
To: solr-user@lucene.apache.org
Subject: Re: Realtimeget SolrCloud
That seemed to be the issue.
I had several other request handlers as I wasn't using the s
ot; remains for
now.
-- Jack Krupansky
-Original Message-
From: Aleksander Akerø
Sent: Thursday, January 30, 2014 9:31 AM
To: solr-user@lucene.apache.org
Subject: Re: KeywordTokenizerFactory - trouble with "exact" matches
Yes, I actually noted that about the filter vs. tokeni
even if these examples are not formally released, at least people can view
and copy them.
-- Jack Krupansky
-Original Message-
From: Alexandre Rafalovitch
Sent: Tuesday, January 21, 2014 8:00 AM
To: solr-user@lucene.apache.org
Subject: Solr middle-ware?
Hello,
All the Solr
Lucene's default scoring should give you much of what you want - ranking
hits of low-frequency terms higher - without any special query syntax - just
list out your terms and use "OR" as your default operator.
-- Jack Krupansky
-Original Message-
From: svante karlsson
,
it would treat "26KA" as "26" AND "KA" AND "26KA", which requires that
"26KA" (without the trailing dot) be in the index.
It seems counter-intuitive, but the attributes of the index and query word
delimiter filters need to be slightly asymm
!field f=myfield}Foo Bar
See:
http://wiki.apache.org/solr/QueryParser
You can also pre-configure the field query parser with the defType=field
parameter.
-- Jack Krupansky
-Original Message-
From: Srinivasa7
Sent: Thursday, January 30, 2014 6:37 AM
To: solr-user@lucene.ap
Thank you for taking a look.
2014-01-29 Jack Krupansky
What field type and analyzer/tokenizer are you using?
-- Jack Krupansky
-Original Me
What field type and analyzer/tokenizer are you using?
-- Jack Krupansky
-Original Message-
From: Thomas Michael Engelke
Sent: Wednesday, January 29, 2014 10:45 AM
To: solr-user@lucene.apache.org
Subject: Not finding part of fulltext field when word ends in dot
Hello everybody
not
match.
-- Jack Krupansky
-Original Message-
From: Aleksander Akerø
Sent: Wednesday, January 29, 2014 9:55 AM
To: solr-user@lucene.apache.org
Subject: Re: KeywordTokenizerFactory - trouble with "exact" matches
update:
Guessing that this has nothing to do with the tokenizer
ince
they are on different machines.)
-- Jack Krupansky
-Original Message-
From: Susheel Kumar
Sent: Sunday, January 26, 2014 10:54 AM
To: solr-user@lucene.apache.org
Subject: RE: Solr server requirements for 100+ million documents
Thank you Erick for your valuable inputs. Yes, we have t
for both scaling of query response
and availability if nodes go down.
-- Jack Krupansky
-Original Message-
From: rashmi maheshwari
Sent: Tuesday, January 28, 2014 11:36 AM
To: solr-user@lucene.apache.org
Subject: Solr & Nutch
Hi,
Question1 --> When Solr could parse html, documen
Or just use the internal document ID: fl=*,[docid]
Granted, the docID may change if a segment merge occurs and earlier
documents have been deleted, but it may be sufficient for your purposes.
-- Jack Krupansky
-Original Message-
From: Upayavira
Sent: Friday, January 03, 2014 5:58
The defType parameter applies only to the q parameter, not to fq, so you
will need to explicitly give the query parser for fq:
fq={!queryparsername}filterquery
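For example (the parser and field names here are illustrative):

```python
from urllib.parse import urlencode

params = {
    "defType": "edismax",                      # applies to the q parameter only
    "q": "main query terms",
    "fq": "{!edismax qf=title}filter terms",   # local-params prefix picks the fq parser
}
print(urlencode(params))
```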
-- Jack Krupansky
-Original Message-
From: suren
Sent: Thursday, January 02, 2014 7:32 PM
To: solr-user@lucene.apache.org
on those four fields with a boost,
although I'm not sure a boost will be of any value in the case of a dismax
which is providing an exact match anyway.
-- Jack Krupansky
-Original Message-
From: rashi gandhi
Sent: Tuesday, December 31, 2013 8:15 AM
To: solr-user@lucene.apach
Wouldn't it be better or at least easier to simply filter out the
unacceptable phrases before you send them to Solr for indexing?
-- Jack Krupansky
-Original Message-
From: Jorge Luis BetancourtGonzález
Sent: Saturday, December 21, 2013 3:05 AM
To: solr-user@lucene.apache.org
Su
to have multiple documents
with the same unique ID (which is now no longer unique.)
Tell us a little more about your data model and why you chose it.
-- Jack Krupansky
-Original Message-
From: neerajp
Sent: Friday, December 20, 2013 12:57 AM
To: solr-user@lucene.apache.org
Subject
That's a feature of the standard tokenizer. You'll have to use a field type
which uses the white space tokenizer to preserve special characters.
-- Jack Krupansky
-Original Message-
From: suren
Sent: Thursday, December 19, 2013 10:56 AM
To: solr-user@lucene.apache.org
Su
top-level query typically defaults to matching all documents, or
"*:*", so if you want that effect, use:
docKey:*:*
Two other possibilities:
docKey:* -- all documents with a value in the field
*:* -docKey:* -- all documents with no value in the field
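The three forms side by side, as plain query strings:

```python
field = "docKey"

all_docs  = "*:*"               # every document
has_value = f"{field}:*"        # documents with some value in the field
no_value  = f"*:* -{field}:*"   # documents with no value in the field

print(has_value, "|", no_value)
```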
-- Jack Krupansky
-Origin
Do a proof of concept implementation and see for yourself if you find the
performance acceptable.
I mean, performance should be reasonably decent.
-- Jack Krupansky
-Original Message-
From: Jayni
Sent: Friday, December 13, 2013 12:22 PM
To: solr-user@lucene.apache.org
Subject: Re
Just use the edismax query parser with bigrams and trigrams enabled and the
default operator set to OR. That will select all sentences even vaguely
similar and will more highly score sentences that have a greater number of
words and phrases that match.
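In edismax terms, "bigrams and trigrams" are the pf2 and pf3 parameters; a sketch of such a request (the field name and boosts are illustrative):

```python
params = {
    "defType": "edismax",
    "q.op": "OR",                # default operator OR: any matching term qualifies
    "q": "the quick brown fox jumps",
    "qf": "sentence",
    "pf2": "sentence^5",         # boost docs containing adjacent word pairs
    "pf3": "sentence^3",         # boost docs containing adjacent word triples
}
```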
-- Jack Krupansky
-Original Message
Note that the synonym filter accepts a comma-separated list of synonym
files, so you can split your huge synonym file into two or more smaller
files.
-- Jack Krupansky
-Original Message-
From: gf80
Sent: Wednesday, December 11, 2013 5:21 PM
To: solr-user@lucene.apache.org
Subject
Ah... although the lower case filtering does get applied properly in a
"multiterm" analysis scenario, stemming does not. What stemmer are you
using? I suspect that "swimming" normally becomes "swim". Compare the debug
output of the two queries.
-- Jack Kr
he classic query parser.
-- Jack Krupansky
-Original Message-
From: Karsten R.
Sent: Tuesday, December 03, 2013 1:24 AM
To: solr-user@lucene.apache.org
Subject: Using the flexible query parser in Solr instead of classic
Hi folks,
last year we built a 3.X Solr-QueryParse
The edismax (ExtendedDisMax) query parser is the best, overall. There are
other specialized query parsers with features that edismax does not have
(e.g., surround for span queries, and complex phrase for wildcards in
phrases.)
-- Jack Krupansky
-Original Message-
From: elmerfudd
Any chance that you had already indexed some data before your finalized
these configuration settings? That pre-existing data would need to be
manually reindexed for the update processor to be effective.
-- Jack Krupansky
-Original Message-
From: Dishanker Raj
Sent: Friday, November
Yeah, purely negative sub-queries have had problems, so rewrite:
fq = (access:Allow*) OR (-access:*)
as
fq = (access:Allow*) OR (*:* -access:*)
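The rewrite can be automated; this sketch (function name is mine) anchors a purely negative clause to the match-all query so the exclusion has something to subtract from:

```python
def fix_pure_negative(clause):
    # A clause consisting only of negations matches nothing on its own;
    # prepend *:* so the exclusion applies against all documents.
    stripped = clause.strip("() ")
    if stripped.startswith("-"):
        return f"(*:* {stripped})"
    return clause

print(fix_pure_negative("(-access:*)"))     # (*:* -access:*)
print(fix_pure_negative("(access:Allow*)")) # unchanged
```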
-- Jack Krupansky
-Original Message-
From: Thomas Kurz
Sent: Thursday, November 28, 2013 10:34 AM
To: solr-user@lucene.apache.org
Subject
If you have chosen to use improper field names, then in the fl parameter you
need to reference them using the "field" function:
fl=id,field(01text)
The basic concept is that Solr doesn't ban improper field names, but that
they don't work in all contexts.
-- Jack Krup
What do you really want to do/accomplish? I mean, for what purpose?
You can turn on the Lucene infostream for logging of index writing.
See:
https://cwiki.apache.org/confluence/display/solr/IndexConfig+in+SolrConfig
Set to "true".
There are some examples in my e-book.
-- Jack
To be honest, this kind of question comes up so often that it is probably
worth a Jira to have a more customized or parameterized "explain".
Function queries in the "fl" list give you a lot more control, but not at
the level of actual terms that matched.
-- Jack Krup
is used, it won't tell you which term matched -
although a tf value of 0 basically tells you that.
-- Jack Krupansky
-Original Message-
From: Jamie Johnson
Sent: Wednesday, November 27, 2013 11:38 AM
To: solr-user@lucene.apache.org
Subject: Re: Term Vector Component Question
Jack,
That information would be included in the debugQuery output as well.
-- Jack Krupansky
-Original Message-
From: Jamie Johnson
Sent: Wednesday, November 27, 2013 9:32 AM
To: solr-user@lucene.apache.org
Subject: Term Vector Component Question
I am interested in retrieving the tf
Just bite the bullet and do the query at your application level. I mean,
Solr/Lucene would have to do the same amount of work internally anyway. If
the perceived performance overhead is too great, get beefier hardware.
-- Jack Krupansky
-Original Message-
From: Thomas Scheffler
Sent
round it with an update processor that copied the field and
massaged the multiple values into what you really want the language
detection to see. You could even implement that processor as a JavaScript
script with the stateless script update processor.
-- Jack Krupansky
-Original Me
consumed by
an application.
-- Jack Krupansky
-Original Message-
From: kumar
Sent: Tuesday, November 26, 2013 3:31 AM
To: solr-user@lucene.apache.org
Subject: Re: Storing solr results in excel
If we specify wt=csv then the results appear in CSV format, but I need to store
them in separate
it is completely safe to use and save Lucene document IDs, but
only as long as no merging of segments is performed. Even one tiny merge and
all subsequent saved document IDs are invalidated. Be careful with your
merge policy - normally merges are happening in the background,
automatically.
-
t; the query in the manner that you're describing.
Or, simply settle for a "heuristic" approach that may give you 70% of what
you want using only existing Solr features on the server side.
-- Jack Krupansky
-Original Message-
From: Mirko
Sent: Thursday, November
"Would you store "a" as "A" ?"
No, not in any case.
-- Jack Krupansky
-Original Message-
From: Michael Sokolov
Sent: Thursday, November 21, 2013 8:56 AM
To: solr-user@lucene.apache.org
Subject: Re: How to index X™ as &#8482; (HTML decimal entity)
I have to a
e for basic display, and one for "detail"
display.
I'm more of a "platform" guy than an "app-specific" guy - give the app
developer tools that they can blend to meet their own requirements (or
interests or tastes.)
But Solr users should make no mistake, SGML e
and a better description.
-- Jack Krupansky
-Original Message-
From: Ahmet Arslan
Sent: Thursday, November 21, 2013 11:40 AM
To: solr-user@lucene.apache.org
Subject: Re: search with wildcard
Hi Andreas,
If you don't want to use wildcards at query time, alternative way is to use
N
But all of this comes back to the original question: "I need to store
the HTML Entity (decimal) equivalent value (i.e. &#8482;) in SOLR rather
than storing the original value."
Maybe the original poster could clarify the nature of their need.
-- Jack Krupansky
-Original Message-
AFAICT, it's not an "extremely bad idea" - using SGML/HTML as a format for
storing text to be rendered. If you disagree - try explaining yourself.
But maybe TM should be encoded as "&trade;". Ditto for other named SGML
entities.
-- Jack Krupansky
-Original Message
Any analysis filtering affects the indexed value only, but the stored value
would be unchanged from the original input value. An update processor lets
you modify the original input value that will be stored.
-- Jack Krupansky
-Original Message-
From: Uwe Reh
Sent: Wednesday
ly, you don't really need to AND with a
sub-query, so make it:
-endtime:"1970-01-01T01:00:00Z"
And then it is simply a clause of the Boolean query.
-- Jack Krupansky
-Original Message-
From: vishalgupta084
Sent: Wednesday, November 20, 2013 7:31 AM
To: solr-user@lucene.a
You could use an update processor to map non-ASCII codes to SGML entities.
You could code it as a JavaScript script and use the stateless script update
processor.
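The recommendation is a stateless script update processor written in JavaScript; the mapping logic itself is trivial, shown here as a Python sketch:

```python
def to_numeric_entities(text):
    # Replace every non-ASCII character with its decimal SGML/HTML entity.
    return "".join(ch if ord(ch) < 128 else f"&#{ord(ch)};" for ch in text)

print(to_numeric_entities("X™"))  # X&#8482;
```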
-- Jack Krupansky
-Original Message-
From: Developer
Sent: Tuesday, November 19, 2013 5:46 PM
To: solr-user
s/work/word/
"word delimiter filter"
-- Jack Krupansky
-Original Message-----
From: Jack Krupansky
Sent: Thursday, November 14, 2013 11:34 AM
To: solr-user@lucene.apache.org
Subject: Re: Query on multi valued field
I suppose you could define the field as tokenized text wit
the update processor as a JavaScript script. The
simplest approach to the query side would be to expand the special query
syntax in your application layer.
-- Jack Krupansky
-Original Message-
From: giridhar
Sent: Thursday, November 14, 2013 10:45 AM
To: solr-user@lucene.apache.org
S
is complaining that there is no matching .
-- Jack Krupansky
-Original Message-
From: Marcello Lorenzi
Sent: Thursday, November 14, 2013 9:26 AM
To: solr-user@lucene.apache.org
Subject: Solr xml img parsing exception
Hi,
I have installed a Solr 4.3 instance and we have configured ma
I believe it is the TZ column from this table:
http://en.wikipedia.org/wiki/List_of_tz_database_time_zones
Yeah, it's on my TODO list for my book.
I suspect that "tz" will not affect "NOW", which is probably UTC. I suspect
that "tz" only affects literal date
ments that
have a value in the default search field. In many cases this would give
identical results to a *:* query, but in some apps it might not.
Still it would be nice to know who originated this suggestion to use *\*
instead of *:* - or even simply *.
-- Jack Krupansky
-Original Me
see how the parameters are
being processed.
-- Jack Krupansky
-Original Message-
From: Abhijith Jain -X (abhijjai - DIGITAL-X INC at Cisco)
Sent: Tuesday, November 12, 2013 8:03 PM
To: solr-user@lucene.apache.org
Subject: RE: Modify the querySearch to q=*:*
Thanks for the quick reply. I
of config settings.
-- Jack Krupansky
From: Abhijith Jain -X (abhijjai - DIGITAL-X INC at Cisco)
Sent: Tuesday, November 12, 2013 7:22 PM
To: solr-user@lucene.apache.org
Subject: Modify the querySearch to q=*:*
Hello,
I upgraded Solr to 4.4.0(previous Solr version was 3.5). After the full
se is
already in Java, why do you need JSON? The real purpose of a JSON response
is usually simply to more easily map it to Java (or JavaScript.)
-- Jack Krupansky
-Original Message-
From: Dharmendra Jaiswal
Sent: Tuesday, November 12, 2013 4:54 AM
To: solr-user@lucene.apache.org
Subject: H
Any kind of cross-field processing is best done in an update processor.
There are a lot of built-in update processors as well as a JavaScript script
update processor.
-- Jack Krupansky
-Original Message-
From: Dileepa Jayakody
Sent: Tuesday, November 12, 2013 1:31 AM
To: solr-user
easier to master.
-- Jack Krupansky
-Original Message-
From: Ryan Cutter
Sent: Monday, November 11, 2013 10:18 AM
To: solr-user@lucene.apache.org
Subject: Re: Unit of dimension for solr field
I think Upayavira's suggestion of writing a filter factory fits what you're
Thanks for the plug Erick, but my deep dive doesn't go quite that deep
(yet.)
But I'm sure a 2,500-page book on how to develop all manner of custom Solr
plugins would indeed be valuable.
I do have plenty of examples of using the many built-in Solr analysis
filters.
Spoke too soon. Hacking rocks!
Finally landed on this heuristic, and it works:
resourceURL:"http://someotherserver.org/"
On Thu, Nov 7, 2013 at 9:52 AM, Jack Park wrote:
> Figuring out a google query to gain an answer seems difficult given
> the ambiguity;
>
> I have
"resourceURL:*" will find all of them, but there is this question:
What does the query look like to find that specific URL?
Of course, "resourceURL:http://someotherserver.org/" doesn't work
This one
resourceURL=http%3A%2F%2Fsomeotherserver.org%2F
fails as well.
What am I overlooking?
Many thanks in advance.
Jack
Is it possible that you added stored="true" later, after some of the
documents were already indexed? Then the older documents would not have the
stored values. If so, you need to reindex the older documents.
-- Jack Krupansky
-Original Message-
From: gohome190
Sent: Monday
everything work.
On Sun, Nov 3, 2013 at 12:04 PM, Jack Park wrote:
> I now have a single ZK running standalone on 2121. On the same CPU, I
> have three nodes.
>
> I used a curl to send over two documents, one each to two of the three
> nodes in the cloud. According to a web query, th
"document " + i);
doc.addField( "details", "This is document " + i);
server.add(doc);
The error is thrown at server.add(doc)
Many thanks in advance for any observations or suggestions.
Cheers
Jack
-cloud mode.
Thanks
Jack
On Fri, Nov 1, 2013 at 11:12 AM, Shawn Heisey wrote:
> On 11/1/2013 12:07 PM, Jack Park wrote:
>>
>> The top error message at my test harness is this:
>>
>> No live SolrServers available to handle this request:
>> [http://127.0.1.1:8983/solr
,
because those servers actually exist, to the test harness, at
10.1.10.178, and if I access any one of them from the browser,
/solr/collection1 does not work, but /solr/#/collection1 does work.
On Fri, Nov 1, 2013 at 10:34 AM, Jack Park wrote:
> /clusterstate.json seems to clearly state that al
ection reset by peer would suggest something in my code, but my
code is a clone of code supplied in a Solr training course. Must be
good. Right?
I also have no clue what is /127.0.0.1:39065 -- that's not one of my nodes.
The quest continues.
On Fri, Nov 1, 2013 at 9:21 AM, Jack Park wrote
you using?
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 1 Nov 2013, at 04:19, Jack Park wrote:
>
>> After digging deeper (slow for a *nix newbee), I uncovered issues with
>> the java installation. A step in installation of Oracle Java has it
>> that you -ins
the simple one-box
3-node cloud test, and used the test code from the Lucidworks course
to send over and read some documents. That failed with this:
Unknown document router '{name=compositeId}'
Lots more research.
Closer...
On Thu, Oct 31, 2013 at 5:44 PM, Jack Park wrote:
> Latest zo