MaryJo - I'm on vacation but can't resist... iirc there are some very
useful query modifications suggested in the readme on the github for the
plugin... can't access right now.
You may know about them already, but if it's been a while since you looked,
those may help...
On Jun 3, 2016 12:28 PM,
Yes, query parameters/modifications mentioned in the readme. Beyond those
I don't have useful advice at this point
On Jun 4, 2016 10:56 PM, "MaryJo Sminkey" <mjsmin...@gmail.com> wrote:
> On Sat, Jun 4, 2016 at 11:47 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
on Developer*
>
> *CF Webtools*
> You Dream It... We Build It. <https://www.cfwebtools.com/>
> 11204 Davenport Suite 100
> Omaha, Nebraska 68154
> O: 402.408.3733 x128
> E: maryjo.smin...@cfwebtools.com
> Skype: maryjos.cfwebtools
>
>
> On Mon, May 30, 2016 at 5:0
This may be no help at all, but my first thought is to wonder if anything
else is already running on port 80?
That might explain the somewhat silent "fail"...
Nicely said by the way - resisting the urge
On Tue, May 31, 2016 at 2:02 PM, Teague James
wrote:
>
ince he really doesn't know Solr well
> either.
>
> Mary Jo
>
>
>
>
> On Mon, May 30, 2016 at 7:49 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > Thanks for the comment Mary Jo...
> >
> > The error loading the class rings a
ry to update it soon. I've run
> the plugin on Solr 5 and 6, solrcloud and standalone. For running in
> SolrCloud make sure you follow
>
> https://cwiki.apache.org/confluence/display/solr/Adding+Custom+Plugins+in+SolrCloud+Mode
> On May 31, 2016 5:13 PM, "John Bickerstaff" <j...
t care about the classloader, I believe you can use whatever
> dir you want, with the appropriate bit of solrconfig.xml to load it.
> Something like:
>
>
>
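Something along these lines in solrconfig.xml (the directory and regex here are assumptions, not the exact snippet from the thread):

```xml
<!-- load plugin jars from a custom directory; path is an example -->
<lib dir="/opt/solr/custom-libs" regex=".*\.jar" />
```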
> On 5/31/16, 2:13 PM, "John Bickerstaff" <j...@johnbickerstaff.com> wrote:
>
> >All --
about the ValueSourceParser is a bit
confusing)
Thanks
On Tue, May 31, 2016 at 5:02 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Thanks Jeff,
>
> I believe I tried that, and it still refused to load.. But I'd sure love
> it to work since the other process i
of course... If anyone on the list has
experience in this area...
Thanks.
On Thu, May 26, 2016 at 10:25 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> Hi all,
>
> I'm creating a Solr Cloud that will index and search medical text.
> Multi-word synonyms are a pretty impor
Hi all,
I'm creating a Solr Cloud that will index and search medical text.
Multi-word synonyms are a pretty important factor.
I find that there are some challenges around multi-word synonyms and I also
found on the wiki that there is a recommended 3rd-party parser
(synonym_edismax parser)
erison, I'd be glad to hear your experience.
> >
> > I haven't used it, but I am aware of one other project in this vein that
> > you might be interested in looking at:
> > https://github.com/LucidWorks/auto-phrase-tokenfilter
> >
> >
> > On 5/26/1
usually seek out a specialist to help
me make sure the query isn't wasteful. It frequently was and I learned a
lot.
On Thu, May 26, 2016 at 12:31 PM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> It may or may not be helpful, but there's a similar class of problem that
> i
project in this vein that
> you might be interested in looking at:
> https://github.com/LucidWorks/auto-phrase-tokenfilter
>
>
> On 5/26/16, 9:29 AM, "John Bickerstaff" <j...@johnbickerstaff.com> wrote:
>
> >Ahh - for question #3 I may have spoken too soon.
It may or may not be helpful, but there's a similar class of problem that
is frequently solved either by stored procedures or by running the query on
a time-frame and storing the results... Doesn't matter if the end-point
for the data is Solr or somewhere else.
The problem is long running
fixing typo:
http://wiki.apache.org/solr/QueryParser (search the page for
synonym_edismax)
On Thu, May 26, 2016 at 11:50 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> Hey Jeff (or anyone interested in multi-word synonyms) here are some
> potentially interesting links
ser, and addressing
> problems with how queries are constructed from Lucene’s “sausagized” token
> stream.
>
> --
> Steve
> www.lucidworks.com
>
> > On May 26, 2016, at 2:21 PM, John Bickerstaff <j...@johnbickerstaff.com>
> wrote:
> >
> > Thanks Chris --
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
>
> at
> java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:814)
>
> at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
>
>             at java.lang.Class.forName0(Native Method)
- I'm being lazy, I know.
Thanks all!
On Tue, May 31, 2016 at 11:35 PM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 5/31/2016 3:13 PM, John Bickerstaff wrote:
> > The suggestion on the readme is that I can drop the
> > hon_lucene_synonyms jar file into the
> On Wed, Jun 1, 2016 at 12:20 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > Hi Mary Jo,
> >
> > I'll point you to Joe's earlier comment about needing to use the Blob
> Store
> > API... He put a link in his response.
e esp
> when I linked the latest 5.0.4 test config prior.
>
> You can get the older jars from the links off the readme.md.
> On Jun 1, 2016 6:14 PM, "Shawn Heisey" <apa...@elyograg.org> wrote:
>
> On 6/1/2016 1:10 PM, John Bickerstaff wrote:
> > @Joe:
> >
at 12:42 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> So - the instructions on using the Blob Store API say to use the
> -Denable.runtime.lib=true option when starting Solr.
>
> Thing is, I've installed per the "for production" instructions which gives
> me
ib - which, taking the default "for production"
install script on Ubuntu resolved to /var/solr/data/lib
Good luck!
On Wed, Jun 1, 2016 at 12:49 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> I tried this - it didn't fail. I don't know if it really star
not be the greatest example)
>
>
> On 5/31/16, 4:02 PM, "John Bickerstaff" <j...@johnbickerstaff.com> wrote:
>
> >Thanks Jeff,
> >
> >I believe I tried that, and it still refused to load.. But I'd sure love
> >it to work since the other proces
start.jar in /opt/solr/server as long as I
issue the "cloud mode" flag or does that no longer work in 5.x?
Do I instead have to modify that start script in /etc/init.d ?
On Wed, Jun 1, 2016 at 10:42 AM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Ahhh - gotcha.
>
> We
a 8 and is the first release that made
> it to maven central. It uses the namespace
> com.github.healthonnet.search.SynonymExpandingExtendedDismaxQParserPlugin
>
> The features are the same for all versions.
>
> Hope this clears things up.
>
> -Joe
> On Jun 1, 2016 8:11 PM
fic approach - could you validate
whether my thought process is correct and / or if I'm missing something?
Yes - I get that I can set it all up and try - but it's what I don't know I
don't know that bothers me...
On Fri, May 27, 2016 at 11:57 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrot
OK - Slapping forehead now... D'oh!
1.2 wrote:
> Hi all -
>
> I've successfully run the hon-lucene-synonyms plugin from the Admin
> console by adding the following to the Raw Query Parameters field...
>
>
> qf=text&defType=synonym_edismax&synonyms=true&synonyms.originalBoost=1.2&synonyms.synonymBoost=1.1
>
> I got those from the Read Me on the github
Hi all -
I've successfully run the hon-lucene-synonyms plugin from the Admin console
by adding the following to the Raw Query Parameters field...
qf=text&defType=synonym_edismax&synonyms=true&synonyms.originalBoost=1.2&synonyms.synonymBoost=1.1
I got those from the Read Me on the github account.
Now I'm trying to make this work via a requestHandler in
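A requestHandler carrying those same parameters as defaults might look like this (handler name and qf field are assumptions; the synonym_edismax query parser itself must already be registered in solrconfig.xml per the plugin README):

```xml
<!-- sketch: bake the Raw Query Parameters into a handler's defaults -->
<requestHandler name="/synsearch" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">synonym_edismax</str>
    <str name="qf">text</str>
    <str name="synonyms">true</str>
    <str name="synonyms.originalBoost">1.2</str>
    <str name="synonyms.synonymBoost">1.1</str>
  </lst>
</requestHandler>
```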
Congrats!
Now you can enjoy those huge royalty payments that I'm sure are coming
in...
Great book and it's been hugely helpful to me.
--JohnB
On Tue, Jun 21, 2016 at 12:12 PM, Doug Turnbull <
dturnb...@opensourceconnections.com> wrote:
> Not much more to add than my post here! This book
Hi all,
I have a question about whether sub-queries in Solr requestHandlers go
against the total index or against the results of the previous query.
Here's a simple example:
Query1: {!edismax qf=blah, blah}
Query2: {!edismax qf=blah, blah}
My question is:
What does Query2 run
he Rerank stuff.
>
> That said, if you do want to use the results of one query in another
> you can just put the whole thing into an fq clause (perhaps with
> {!cache=false}. At that point you'd get back the top N docs
> that made it through the fq clause in score order.
>
>
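As a concrete sketch of that suggestion (collection and field names here are made up), the "other" query moves into an uncached fq while the main q does the scoring; this only prints the request rather than assuming a live Solr:

```shell
# Build the request: fq gates which docs get scored; {!cache=false}
# keeps the one-off filter out of the filter cache.
BASE="http://localhost:8983/solr/mycollection/select"
Q='{!edismax qf=title_t}heart attack'
FQ='{!cache=false}body_t:"myocardial infarction"'
echo "GET $BASE?q=$Q&fq=$FQ"
```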
one query through another.
>
> Best,
> Erick
> On Jun 21, 2016 2:43 PM, "John Bickerstaff" <j...@johnbickerstaff.com>
> wrote:
>
> > Hi all,
> >
> > I have a question about whether sub-queries in Solr requestHandlers go
> >
uery,
> multiply their score by the results of the bf
> specified".
>
> Best,
> Erick
>
>
> On Wed, Jun 22, 2016 at 3:04 PM, John Bickerstaff
> <j...@johnbickerstaff.com> wrote:
> > Oh - gotcha... Thanks for taking the time to reply. My use of the
se-marcio.mart...@mines-paristech.fr> wrote:
>
> Hi John,
>
> On 06/23/2016 10:18 PM, John Bickerstaff wrote:
>
>> Jose,
>>
>> There is a setting in the solr.in.sh script that should make Solr start
>> in
>> "cloud" mode...
>>
I'll add my vote that reading the book will really expand your
understanding of search and "Relevance".
If you're working in the Search space, this book is really worth your time!
On Thu, Jun 23, 2016 at 2:59 PM, MaryJo Sminkey wrote:
> > For someone familiar with Solr,
t... Got the
> way !!!
>
> Regards
>
> José-Marcio
>
>
> On 06/23/2016 10:44 PM, John Bickerstaff wrote:
>
>> So, if you installed with the install script (warning: I used 5.4 but I
>> think everything is the same) and add this setting in your solr.in.sh
Jose,
There is a setting in the solr.in.sh script that should make Solr start in
"cloud" mode...
It's ZK_HOST
That's where you list the IP addresses (or hostnames) of your zookeeper
machines...
Is this set?
What version of Solr are you using?
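For example, in solr.in.sh (hostnames here are placeholders), listing the ensemble plus an optional chroot is enough to flip Solr into cloud mode on the next restart:

```shell
# solr.in.sh -- ZooKeeper ensemble; /solr is an optional chroot
ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/solr"
echo "$ZK_HOST"
```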
On Thu, Jun 23, 2016 at 2:13 PM, Jose-Marcio
From some docs I'm working on - this command (against one solr box) got me
the entire cluster's state...
Don't know if it'll work for you, but just in case... There may be an api
command that is similar - not sure. I'm mostly operating on the command
line right now.
(statdx is the name of my
Right... You can store that anywhere - but at least consider not storing
it in your existing SOLR collection just because it's there... It's not
really the same kind of data -- it's application meta-data and/or
user-specific data...
Getting it out later will be more difficult than if you store
If you can get to the IP addresses from your application, then there's
probably a way... Do you mean you're firewalled off or in some other way
unable to access the Solr box IP's from your Java application?
If you're looking to do "automated build of virtual machines" there are
some tools like
Therefore, this becomes possible:
http://stackoverflow.com/questions/525212/how-to-run-unix-shell-script-from-java-code
Hackish, but certainly doable... Given there's no API...
On Wed, Apr 6, 2016 at 3:44 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Yup - just tested - tha
Yup - just tested - that command runs fine with Solr NOT running...
On Wed, Apr 6, 2016 at 3:41 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> If you can get to the IP addresses from your application, then there's
> probably a way... Do you mean you're firewalled off or i
> Thanks
>
> Bosco
>
>
>
>
> On 4/6/16, 2:47 PM, "John Bickerstaff" <j...@johnbickerstaff.com> wrote:
>
> >Therefore, this becomes possible:
> >
> http://stackoverflow.com/questions/525212/how-to-run-unix-shell-script-from-java-code
> >
&
My own choices were driven mostly by the usage of the data - from a more
architectural perspective.
I have "appDocuments" and "appImages" for one of the applications I'm
supporting. Because they are so closely connected (an appDocuments can
have N number of appImages and appImages can belong to
In terms of #2, this might be of use...
https://wiki.apache.org/solr/HowToReindex
On Tue, Apr 5, 2016 at 3:08 PM, Anuj Lal wrote:
> I am new to solr. Need some advice from more experienced solr team
> members
>
> I am upgrading 4.4 solr cluster to 5.5
>
>
> One of the
Hello all,
I'm wondering if anyone can comment on arguments for and against putting
solr.xml into Zookeeper?
I assume one argument for doing so is that I would then have all
configuration in one place.
I also assume that if it doesn't get included as part of the upconfig
command, there is
A few thoughts...
From a black-box testing perspective, you might try changing that
softCommit time frame to something longer and see if it makes a difference.
The size of your documents will make a difference too - so the comparison
to 300 - 500 on other cloud setups may or may not be
I recently upgraded from 4.x to 5.5 -- it was a pain to figure it out, but
it turns out to be fairly straightforward...
Caveat: Because I run all my data into Kafka first, I was able to easily
re-create my collections by running a microservice that pulls from Kafka
and dumps into Solr.
I have a
ties as per 5.5
> changes
> 5. start zookeeper
> 5. upload config to zookeeper
> 6. Create collection using rest api
> 7. start cluster
> 8. copy collection data from 4.5 to solr 5.5 data directory
>
>
> If you can share upgrade step/process document, that will be great
>
https://cwiki.apache.org/confluence/display/solr/Upgrading+Solr
https://cwiki.apache.org/confluence/display/solr/Upgrading+a+Solr+4.x+Cluster+to+Solr+5.0
https://cwiki.apache.org/confluence/display/solr/Major+Changes+from+Solr+4+to+Solr+5
On Wed, Apr 6, 2016 at 8:58 AM, John Bickerstaff <j...@johnbickerstaff.com>
wrote
-
and then sort them by ID based on the data associated with the User (a list
of ID's, in order)
There is even a way to write a plugin that will go after external data to
help sort Solr documents, although I'm guessing you'd rather avoid that...
On Fri, Apr 1, 2016 at 11:59 AM,
http://stackoverflow.com/questions/3931827/solr-merging-results-of-2-cores-into-only-those-results-that-have-a-matching-fie
On Fri, Apr 1, 2016 at 12:40 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Tamas,
>
> I'm brainstorming here - not being careful, just throwing o
Specifically, what drives the position in the list? Is it arbitrary or is
it driven by some piece of data?
If data-driven - code could do the sorting based on that data... separate
from SOLR...
Alternatively, if the data point exists in SOLR, a "sub-query" might be
used to get the right sort
ecause Solr does the work of
> filtering and pagination. If sorting were done outside than I would have to
> read every document from Solr to sort them. It is not an option, I have to
> query only one page.
>
> I don't understand how to solve it using subqueries.
> 2016. ápr
ver your intent is for this search.
On Fri, Apr 1, 2016 at 11:15 AM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Just to be clear - I don't mean who requests the list (application or
> user) I mean what "rule" determines the ordering of the list?
>
> Or, is there even a
(status, amount, ..) from offset and 50
> rows then it would be perfect and fast. If ordering would be outside of
> solr then I have to retrieve almost every 1 documents from solr (a bit
> less if filtered) to order them and display the page of 50 products.
> 2016. ápr. 1. 19:15 ez
Sweet - that's a good point - I ran into that too - I had not run the
commit for the last "batch" (I was using SolrJ) and so numbers didn't match
until I did.
On Mon, Apr 4, 2016 at 9:50 PM, Binoy Dalal wrote:
> 1) Are you sure you don't have duplicates?
> 2) All of your
ID's are on documents that are actually unique.
On Mon, Apr 4, 2016 at 9:51 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> Sweet - that's a good point - I ran into that too - I had not run the
> commit for the last "batch" (I was using SolrJ) and so numbers didn
The first question is whether you have duplicate ID's in your data set.
I had the same kind of thing a few months back, freaked out, and spent a
few hours trying to figure it out by coding extra logging etc... to keep
track of every single count at every stage of the process.. All the
numbers
I believe I want to set up a search handler with a function query to avoid
needing to code it.
The function query does some weighting by checking the "title" field for
whatever the user entered as their search term (named myCurrentSearchTerm
below)
To test this out in the Admin UI, I have the
You can sort like this (I believe that _version_ is the internal id/index
number for the document, but you might want to verify)
In the Admin UI, enter the following in the sort field:
_version_ asc
You could also put an entry in the default searchHandler in solrconfig.xml
to do this to every
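That default could look like the following sketch (verify that _version_ really tracks insertion order in your setup before relying on it):

```xml
<!-- apply the _version_ sort to every request on this handler -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="sort">_version_ asc</str>
  </lst>
</requestHandler>
```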
I can display a sorted list
> via:
>
> fq=listid_s:378
> sort=listpos(listpos_s,378) asc
>
> Regards,
> Tamas
>
> On Fri, Apr 1, 2016 at 8:55 PM, John Bickerstaff <j...@johnbickerstaff.com
> >
> wrote:
>
> > Tamas,
> >
> > This feels a bit li
Does SOLR cloud push indexing across all nodes? I've been planning 4 SOLR
boxes with only 3 exposed via the load balancer, leaving the 4th available
internally for my microservices to hit with indexing work.
I was assuming that if I hit my "solr4" IP address, only "solr4" will do
the indexing...
Will the processes be Solr processes? Or do you mean multiple threads
hitting the same Solr server(s)?
There will be a natural bottleneck at one Solr server if you are hitting it
with a lot of threads - since that one server will have to do all the
indexing.
I don't know if this idea is
I guess errors like "fsync-ing the write ahead log in SyncThread:5 took
7268ms which will adversely effect operation latency."
and: "likely client has closed socket"
make me wonder if something went wrong in terms of running out of disk
space for logs (thus giving your OS no space for necessary
Which field do you try to atomically update? A or B or some other?
On Apr 21, 2016 8:29 PM, "Tirthankar Chatterjee"
wrote:
> Hi,
> Here is the scenario for SOLR5.5:
>
> FieldA type= stored=true indexed=true
>
> FieldB type= stored=false indexed=true docValue=true
>
Having run the optimize from the admin UI on one of my three cores in a
Solr Cloud collection, I find that when I got to try to run it on one of
the other cores, it is already "optimized"
I realize that's not the same thing as an API call, but thought it might
help.
On Tue, May 17, 2016 at 11:22
I think those zk server warning messages are expected. Until you have 3
running instances you don't have a "Quorum" and the Zookeeper instances
complain. Once the third one comes up they are "happy" and don't complain
any more. You'd get similar messages if one of the Zookeeper nodes ever
went
g4j.properties"
SOLR_LOGS_DIR="/var/solr/logs"
SOLR_PORT="8983"
On Wed, May 11, 2016 at 11:59 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> I may be answering the wrong question - but SolrCloud goes in by default
> on 8983, yes? Is yours currently on
inks
> they are replicas. So I’m looking to see if anyone knows what is the
> cleanest way to move from a Tomcat/8080 install to a Jetty/8983 one.
>
> Thanks
>
> > On May 11, 2016, at 1:59 PM, John Bickerstaff <j...@johnbickerstaff.com>
> wrote:
> >
I may be answering the wrong question - but SolrCloud goes in by default on
8983, yes? Is yours currently on 8080?
I don't recall where, but I think I saw a config file setting for the port
number (In Solr I mean)
Am I on the right track or are you asking something other than how to get
Solr on
ing to see if anyone knows what
is the cleanest way to move from a Tomcat/8080 install to a Jetty/8983 one.
>
> Thanks
>
>> On May 11, 2016, at 1:59 PM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
>>
>> I may be answering the wrong question - but SolrCloud
I'm not a dev, but I would assume the following if I were concerned with
speed and atomicity
A. A commit WILL be reflected in all appropriate shards / replicas in a
very short time.
I believe Solr Cloud guarantees this, although the time frame
will be dependent on "B"
B. Network,
In your original command, you listed the same port twice. That may have
been at least part of the difficulty.
It's probably fine to just use one zk node - as the zookeeper instances
should be aware of each other.
I also assume that if your solr.in.sh (or Windows equivalent) has the
properly
it's roundabout, but this might work -- ask for the healthcheck status
(from the solr box) and hit each zkNode separately.
I'm on Linux so you'll have to translate to Windows... using the solr.cmd
file I assume...
./solr healthcheck -z 192.168.56.5:2181/solr5_4 -c collectionName
./solr
I should clarify:
http://XXX.XXX.XX.XX:8983/solr/yourCoreName/select?q=*%3A*&rows=0&wt=json&indent=true&facet=true&facet.field=category
"yourCoreName" will get built in for you if you use the Solr Admin UI for
queries --
On Fri, May 13, 2016 at 9:36 AM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> I
In case it's helpful for a quick and dirty peek at your facets, the
following URL (in a browser or Curl) will get you basic facets for a field
named "category" -- assuming you change the IP address / hostname to match
yours.
http://XXX.XXX.XX.XX:8983/solr/statdx_shard1_replica3/select
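Spelled out with the parameter names (host and core are placeholders), the same request looks like this; the echo avoids assuming a live Solr:

```shell
# rows=0 suppresses documents so the response is just the facet counts
CORE_URL="http://localhost:8983/solr/statdx_shard1_replica3/select"
PARAMS="q=*%3A*&rows=0&wt=json&indent=true&facet=true&facet.field=category"
echo "$CORE_URL?$PARAMS"
```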
I've been working on a less-complex thing along the same lines - taking all
the data from our corporate database and pumping it into Kafka for
long-term storage -- and the ability to "play back" all the Kafka messages
any time we need to re-index.
That simpler scenario has worked like a charm. I
My default schema.xml does not have an entry for solr.StringField so I
can't tell you what that one does.
If you look for solr.StrField in the schema.xml file, you'll get some idea
of how it's defined. The default setting is for it not to be analyzed.
On Tue, May 3, 2016 at 10:16 AM, Steven
g and such it is missing
>> from the official Solr wiki's.
>>
>> Steve
>>
>> [1] https://wiki.apache.org/solr/SolrFacetingOverview,
>>
>> http://grokbase.com/t/lucene/solr-commits/06cw5038rk/solr-wiki-update-of-solrfacetingoverview-by-jjlarrea
>> ,
>>
>>
I think you should be able to change $SOLR_HOME to any valid path.
For example: /var/logs/solr_logs
On Tue, May 3, 2016 at 4:02 PM, Yunee Lee wrote:
> Hi, solr experts.
>
> I have a question for installing solr server.
> Using ' install_solr_service.sh' with option -d ,
Hoss - I'm guessing this is all in the install script that gets created
when you run that command (can't remember it) on the tar.gz file...
In other words, Yunee can edit that file, find those variables (like
SOLR_SERVICE) and change them from what they're set to by default to
whatever he
Max doc is the total amount of documents in the collection INCLUDING the
ones that have been deleted but not actually removed. Don't worry, deleted
docs are not used in search results.
Yes, you can change the number by "optimizing" (see the button) but this
does take time and bandwidth so use it
I'll just briefly add some thoughts...
#1 This can be done several ways - including keeping a totally separate
document that contains ONLY the data you're willing to expose for free --
but what you want to accomplish is not clear enough to me for me to start
making recommendations. I'll just say
g -- which is generally incompatible with a really
great user experience...
And, of course, I may have totally missed your meaning and you may have had
something totally different in mind...
On Thu, May 5, 2016 at 8:33 AM, John Bickerstaff <j...@johnbickerstaff.com>
wrote:
> I'll just briefl
ure if that's a typo, a real thing and such it is missing
> from the official Solr wiki's.
>
> Steve
>
> [1] https://wiki.apache.org/solr/SolrFacetingOverview,
>
> http://grokbase.com/t/lucene/solr-commits/06cw5038rk/solr-wiki-update-of-solrfacetingoverview-by-jjlarrea
>
Oh, and what, if any directories need to exist for the ADDREPLICA command
to work?
Hopefully nothing past the already existing /var/solr/data created by the
Solr install script?
On Fri, Apr 15, 2016 at 11:18 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> Oh, and wha
Oh, and what, if any directories need to exist for the ADDREPLICA
On Fri, Apr 15, 2016 at 11:09 AM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> Thanks again Eric - I'm going to be trying the ADDREPLICA again today or
> Monday. I much prefer that to hand-edit hackery...
>
es the `=...` actually work for you? When attempting similar with
> > Solr 5.3.1, despite what documentation said, I had to use
> > `node_name=...`.
> >
> >
> > Thanks,
> > Jarek
> >
> > On Fri, 15 Apr 2016, at 05:48, John Bickerstaff wrote:
> >
I have the following (essentially hard-coded) line in the Solr Admin Query
UI
=
bq: contentType:(searchTerm1 searchTerm2 searchTerm2)^1000
=
The "searchTerm" entries represent whatever the user typed into the search
box. This can be one or more words. Usually less than 5.
I want to
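One way to avoid hard-coding the terms is parameter dereferencing, so the boost reuses whatever arrives in q; this is a sketch, and the handler name and boost factor are assumptions:

```xml
<!-- sketch: boost docs whose contentType matches the user's query text -->
<requestHandler name="/search" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="defType">edismax</str>
    <str name="bq">{!edismax qf=contentType v=$q}^1000</str>
  </lst>
</requestHandler>
```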
.
Thanks.
On Thu, Apr 14, 2016 at 12:34 PM, John Bickerstaff <j...@johnbickerstaff.com
> wrote:
> I have the following (essentially hard-coded) line in the Solr Admin Query
> UI
>
> =
> bq: contentType:(searchTerm1 searchTerm2 searchTerm2)^1000
> =
>
> The
> boosting just influences the score it does _not_ explicitly order the
> results. So the docs with "figo" in the conentType field will tend to
> the top, but won't be absolutely guaranteed to be there.
>
>
>
> Best,
> Erick
>
> On Thu, Apr 14, 2016 at 12:18 PM
> Curious what command did you use?
>
> On Thu, Apr 14, 2016 at 3:48 PM, John Bickerstaff <
> j...@johnbickerstaff.com>
> wrote:
>
> > I had a hard time getting replicas made via the API, once I had created
> the
> > collection for the first time although t
I had a hard time getting replicas made via the API, once I had created the
collection for the first time although that may have been ignorance on
my part.
I was able to get it done fairly easily on the Linux command line. If
that's an option and you're interested, let me know - I have a
5.4
This problem drove me insane for about a month...
I'll send you the doc.
On Thu, Apr 14, 2016 at 5:02 PM, Jay Potharaju <jspothar...@gmail.com>
wrote:
> Thanks John, which version of solr are you using?
>
> On Thu, Apr 14, 2016 at 3:59 PM, John Bickerstaff <
> j..
> collection name that doesn't conflict, and create the thing, and smoke test
> with it. I know that standard practice is to bring up all new nodes, but
> I don't see why this is needed.
>
> -Original Message-
> From: John Bickerstaff [mailto:j...@johnbickerstaff.c
il.com> wrote:
> On Mon, Apr 18, 2016 at 3:52 PM, John Bickerstaff
> <j...@johnbickerstaff.com> wrote:
> > Thanks all - very helpful.
> >
> > @Shawn - your reply implies that even if I'm hitting the URL for a single
> > endpoint via HTTP - the "balancing"
, Apr 19, 2016 at 7:59 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 4/18/2016 11:22 AM, John Bickerstaff wrote:
> > So - my IT guy makes the case that we don't really need Zookeeper / Solr
> > Cloud...
>
> > I'm biased in terms of using the most recen
atabases.
If you're interested, ping me -- I'm happy to share what I've got...
On Tue, Apr 19, 2016 at 2:08 AM, Charlie Hull <char...@flax.co.uk> wrote:
> On 18/04/2016 18:22, John Bickerstaff wrote:
>
>> So - my IT guy makes the case that we don't really need Zookeeper / Solr
>
un on the Solr instance the load
balancer targeted - due to "a" above.
Corrections or refinements welcomed...
On Mon, Apr 18, 2016 at 7:21 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 4/17/2016 10:35 PM, John Bickerstaff wrote:
> > My prior use of SOLR in produ