Hi,
I'm new to SOLR, but I've got it up and running, indexing data via the DIH,
and properly returning results for queries. I'm trying to set up another
core to run the Suggester, in order to autocomplete geographical locations. We
have a web application that needs to take a city, state / region,
Hi,
I'm reposting my StackOverflow question to this thread as I'm not getting
much of a response there. Thank you for any assistance you can provide!
http://stackoverflow.com/questions/8705600/using-solr-autocomplete-for-addresses
I'm new to SOLR, but I've got it up and running, indexing data
setup, but I'm not quite comfortable enough with SOLR and the
SOLR architecture yet (honestly I've only been using it for about 2 weeks
now).
Thanks for the help!
Dave
On Tue, Jan 3, 2012 at 8:24 AM, Jan Høydahl jan@cominvent.com wrote:
Hi,
As you see, you've got an answer at StackOverflow
to the
suggester?
Thanks!
Dave
On Tue, Jan 3, 2012 at 9:41 AM, Dave dla...@gmail.com wrote:
Hi Jan,
Yes, I just saw the answer. I've implemented that, and it's working as
expected. I do have Suggest running on its own core, separate from my
standard search handler. I think, however
Hi Juan,
When I'm storing the content, the field has a LowerCaseFilterFactory
filter, so that when I'm searching it's not case sensitive. Is there a way
to re-filter the data when it's presented as a result to restore the case
or convert to Title Case?
Thanks,
Dave
On Thu, Jan 5, 2012 at 12:41
lower-case.
Thanks!
Dave
On Thu, Jan 5, 2012 at 2:01 PM, Juan Grande juan.gra...@gmail.com wrote:
Hi Dave,
Have you tried running a query and taking a look at the results?
The filters that you define in the fieldType don't affect the way the data
is *stored*, it affects the way the data
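Since the stored value comes back exactly as it went in (lowercased here), one option is to restore display case on the client. A minimal Python sketch, assuming "city, region" strings where the last token is a 2-letter region code — that convention is my assumption, not something from the thread:

```python
def restore_display_case(value: str) -> str:
    """Title-case a lowercased "city, region" string for display.

    Assumes the last comma-separated token is a 2-letter region code
    (hypothetical convention; adjust for real data).
    """
    parts = [p.strip() for p in value.split(",")]
    out = []
    for i, part in enumerate(parts):
        if i == len(parts) - 1 and len(part) == 2:
            out.append(part.upper())   # state/region code: NY, OR, ...
        else:
            out.append(part.title())   # city names: New York, St. Louis
    return ", ".join(out)
```

This keeps the index case-insensitive while the UI shows something presentable; real data (McLean, SW1A) would need a smarter rule.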
if necessary. However, I
can't understand why it would need so much memory. Could I have something
configured incorrectly? I've been over the configs several times, trying to
get them down to the bare minimum.
Thanks for any assistance!
Dave
I've tried up to -Xmx5g
On Mon, Jan 16, 2012 at 9:15 PM, qiu chi chiqiu@gmail.com wrote:
What is the largest -Xmx value you have tried?
Your index size seems not very big
Try -Xmx2048m , it should work
On Tue, Jan 17, 2012 at 9:31 AM, Dave dla...@gmail.com wrote:
I'm trying to figure
:
you may disable FST lookup and use the lucene index as the suggest method.
FST lookup loads all documents into memory; you can use the lucene
spell checker instead
On Tue, Jan 17, 2012 at 10:31 AM, Dave dla...@gmail.com wrote:
I've tried up to -Xmx5g
On Mon, Jan 16, 2012 at 9:15 PM
Thank you Robert, I'd appreciate that. Any idea how long it will take to
get a fix? Would I be better switching to trunk? Is trunk stable enough for
someone who's very much a SOLR novice?
Thanks,
Dave
On Mon, Jan 16, 2012 at 10:08 PM, Robert Muir rcm...@gmail.com wrote:
looks like https
you can try out branch_3x if you want.
you can either wait for a nightly build or compile from svn
(http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/).
On Tue, Jan 17, 2012 at 8:35 AM, Dave dla...@gmail.com wrote:
Thank you Robert, I'd appreciate that. Any idea how long
Robert, where can I pull down a nightly build from? Will it include the
apache-solr-core-3.3.0.jar and lucene-core-3.3-SNAPSHOT.jar jars? I need to
re-build with a custom SpellingQueryConverter.java.
Thanks,
Dave
On Tue, Jan 17, 2012 at 8:59 AM, Robert Muir rcm...@gmail.com wrote:
I committed
branch_3x if you want.
you can either wait for a nightly build or compile from svn
(http://svn.apache.org/repos/asf/lucene/dev/branches/branch_3x/).
On Tue, Jan 17, 2012 at 8:35 AM, Dave dla...@gmail.com wrote:
Thank you Robert, I'd appreciate that. Any idea how long it will take to
get
$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
On Wed, Jan 18, 2012 at 5:24 PM, Dave dla...@gmail.com wrote:
Unfortunately, that doesn't look like it solved my problem. I built
, 2012 at 11:27 AM, Dave dla...@gmail.com wrote:
I'm also seeing the error when I try to start up the SOLR instance:
SEVERE: java.lang.OutOfMemoryError: Java heap space
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:344)
at org.apache.lucene.util.ArrayUtil.grow(ArrayUtil.java:352
. Once they select a suggestion, that suggestion
needs to have certain information associated with it. It seems that the
Suggester component is not the right tool for this. Anyone have other ideas?
Thanks,
Dave
On Thu, Jan 19, 2012 at 6:09 PM, Dave dla...@gmail.com wrote:
That was how I originally
Thanks Jan, this is perfect! I'm going to work on implementing it this week
and let you know how it works for us. Thanks again!
Dave
On Wed, Jan 25, 2012 at 1:10 PM, Jan Høydahl jan@cominvent.com wrote:
Hi,
I don't think that the suggester can output multiple fields. You would
have
B is a better option long term. Solr is meant for retrieving flat data, fast,
not hierarchical. That's what a database is for and trust me you would rather
have a real database on the end point. Each tool has a purpose, solr can never
replace a relational database, and a relational database
; you could re-read and send to Solr.
>
> Best,
> Erick
>
>> On Tue, Feb 21, 2017 at 5:17 PM, Dave <hastings.recurs...@gmail.com> wrote:
>> B is a better option long term. Solr is meant for retrieving flat data,
>> fast, not hierarchical. That's what a database
Can't see what's color coded in the email.
> On Feb 13, 2017, at 5:35 PM, Anatharaman, Srinatha (Contractor)
> wrote:
>
> Hi,
>
> I am loading email files which are in RFC822 format into SolrCloud using Flume,
> but some metadata of the emails is not
That sounds pretty much like a hack. So if two imports happen at the same time
they have to wait for each other?
> On Feb 12, 2017, at 4:01 PM, Shawn Heisey wrote:
>
>> On 2/12/2017 10:30 AM, Minh wrote:
>> Hi everyone,
>> How can i run multithreads of DIH in a cluster for
Can you please elaborate?
> Sure, will try to move out to external zookeeper
>
>> On Sun, Feb 26, 2017 at 7:07 PM Dave <hastings.recurs...@gmail.com> wrote:
>>
>> You shouldn't use the embedded zookeeper with solr, it's just for
>> development not anywher
You shouldn't use the embedded zookeeper with solr, it's just for development
not anywhere near worthy of being out in production. Otherwise it looks like
you may have a port scanner running. In any case don't use the zk that comes
with solr
> On Feb 26, 2017, at 6:52 PM, Satya Marivada
That seems difficult if not impossible. The joins are just complex queries,
with the same data set.
> On Feb 28, 2017, at 11:37 PM, Nitin Kumar wrote:
>
> Hi,
>
> Can we use join query for more than 2 cores in solr. If yes, please provide
> reference or example.
>
Maybe commongrams could help this but it boils down to speed/quality/cheap.
Choose two. Thanks
> On Apr 1, 2017, at 10:28 AM, Shawn Heisey wrote:
>
>> On 3/31/2017 1:55 PM, David Hastings wrote:
>> So I un-commented out the line, to enable it to go against 6 important
>>
https://wiki.apache.org/solr/FieldCollapsing
> On Mar 13, 2017, at 9:59 PM, Dave <hastings.recurs...@gmail.com> wrote:
>
> Perhaps look into grouping on that field.
>
>> On Mar 13, 2017, at 9:08 PM, Scott Smith <ssm...@mainstreamdata.com> wrote:
>>
>
Perhaps look into grouping on that field.
> On Mar 13, 2017, at 9:08 PM, Scott Smith wrote:
>
> I'm trying to solve a search problem and wondering if facets (or something
> else) might solve the problem.
>
> Let's assume I have a bunch of documents (100 million+).
facet.mincount is what you're looking for to get non-zero facet counts
> On Apr 17, 2017, at 6:51 PM, Furkan KAMACI wrote:
>
> My query:
>
> /select?facet.field=research&facet=on&q=content:test
>
> Q1) Facet returns research values with 0 counts which has a research value
> that is not from
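For reference, a sketch of building that query with facet.mincount=1 (the faceting parameter names are standard Solr ones; the research field comes from the quoted query):

```python
from urllib.parse import urlencode

# Build the /select query string with facet.mincount=1 so zero-count
# facet values are dropped from the response.
params = {
    "q": "content:test",
    "facet": "on",
    "facet.field": "research",
    "facet.mincount": 1,
}
query_string = urlencode(params)
```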
There is no solid rule. Honestly, stand-alone solr can handle quite a bit; I
don't think there's a valid reason to go to cloud unless you are starting from
scratch and want to use the newest buzzword. Stand-alone can handle well over
half a terabyte index at sub-second speeds all day long.
>
To add to this, not sure if solr cloud uses it, but you're going to want to
destroy the write.lock file as well
> On Aug 1, 2017, at 9:31 PM, Shawn Heisey wrote:
>
>> On 8/1/2017 7:09 PM, Erick Erickson wrote:
>> WARNING: what I currently understand about the limitations
Why didn't you set it to be indexed? Sure it would be a small dent in an index
> On Aug 11, 2017, at 5:20 PM, Barbet Alain wrote:
>
> Re,
> I take a look on the source code where this msg happen
>
Personally I say use an rdbms for data storage; it's what it's for. Solr is for
search and retrieval, at the expense of possible loss of all data, in which case
you rebuild it.
> On Aug 12, 2017, at 11:26 AM, Muwonge Ronald wrote:
>
> Hi Solr can use mongodb for storage and
Rebuild your index. It's just the safest way.
On Aug 13, 2017, at 2:02 PM, SOLR4189 wrote:
>> If you are changing things like WordDelimiterFilterFactory to the graph
>> version, you'll definitely want to reindex
>
> What does it mean "*want to reindex*"? If I change
>
Eric you going to vegas next month?
> On Aug 10, 2017, at 7:38 PM, Erick Erickson wrote:
>
> Omer:
>
> Solr does not implement pure boolean logic, see:
> https://lucidworks.com/2011/12/28/why-not-and-or-and-not/.
>
> With appropriate parentheses it can give the same
Sorry that should have read have not tested in solr cloud.
> On Jul 6, 2017, at 6:37 PM, Dave <hastings.recurs...@gmail.com> wrote:
>
> I have tested that out in solr cloud, but for solr master slave replication
> the config sets will not go without a reload,
I have tested that out in solr cloud, but for solr master slave replication the
config sets will not go without a reload, even if specified in the in the slave
settings.
> On Jul 6, 2017, at 5:56 PM, Erick Erickson wrote:
>
> I'm not entirely sure what happens if the
One's a search engine and the other is a nosql db. They're nothing alike and are
completely different tools for completely different jobs.
> On Aug 4, 2017, at 7:16 PM, Francesco Viscomi wrote:
>
> Hi all,
> why i have to choose solr if mongoDb is easier to learn and to
Uhm. Dude, are you drinking?
1. Lucidworks would never say that.
2. Maria is not a JSON + MySQL. Maria is a fork of the last open source version
of MySQL before Oracle bought them.
3. Walter is 100% correct. Solr is search. The only complex data structure it
has is an array. Something like mongo
Also, id love to see an example of a many to many relationship in a nosql db
>> as you described, since that's a rdbms concept. If it exists in a nosql
>> environment I would like to learn how...
>>
>>> On Aug 4, 2017, at 10:56 PM, Dave <hastings.recurs...@gmail.com&
>>
>>>> wunder
>>>> Walter Underwood
>>>> wun...@wunderwood.org
>>>> http://observer.wunderwood.org/ (my blog)
>>>>
>>>>
>>>>> On Aug 4, 2017, at 8:13 PM, David Hastings <dhasti...@wshein.com>
to the mailing list that's supposed to serve as a source of help,
which, you asked for.
> On Aug 5, 2017, at 7:54 AM, Dave <hastings.recurs...@gmail.com> wrote:
>
> Also I wouldn't really recommend mongodb at all, it should only to be used as
> a fast front end to an acid compliant
You will want to have both solr and a sql/nosql data storage option. They serve
different purposes
> On May 8, 2017, at 10:43 PM, bharath.mvkumar
> wrote:
>
> Hi All,
>
> We have a use case where we have mysql database which stores documents and
> also some of
This could be useful in a space-expensive situation, although the reason I
wanted to try it is multiple solr instances in one server reading one index on
the ssd. This use case, where it's on the nfs, still leads to a single point of
failure on one of the most fragile parts of a server, the
I think it depends what you are backing up and restoring from. Hardware
failure? Accidental delete? For my use case my master indexer stores the index
on a SAN with daily snapshots for reliability, then my live searching master
is on a SAN as well; my live slave searchers are all on SSD
If you are not capable of even writing your own indexing code, let alone
crawler, I would prefer that you just stop now. No one is going to help you
with this request, at least I'd hope not.
> On Jun 1, 2017, at 5:31 PM, David Choi wrote:
>
> Hello,
>
> I was
And I mean that in the context of stealing content from sites that explicitly
declare they don't want to be crawled. Robots.txt is to be followed.
> On Jun 1, 2017, at 5:31 PM, David Choi wrote:
>
> Hello,
>
> I was wondering if anyone could guide me on how to crawl
I’d personally use your second option. Simple and straightforward if you can
afford the time for a reindex
> On Oct 3, 2017, at 6:23 PM, John Blythe wrote:
>
> hey all.
>
> was hoping to find a query function that would allow me to filter based on
> the length of an
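Solr has no built-in length function on plain stored strings, so one common workaround (my assumption, not confirmed in the thread) is to compute the length at index time and store it in a companion integer field, then filter with fq=name_len:[1 TO 20]. A sketch; the name/name_len field names are made up:

```python
def with_length_field(doc: dict, field: str = "name") -> dict:
    """Return a copy of the document with a companion integer field,
    so queries can later filter with fq=name_len:[1 TO 20].
    Field names here are illustrative, not from the thread."""
    enriched = dict(doc)
    enriched[f"{field}_len"] = len(doc.get(field, ""))
    return enriched

# At index time, run every document through this before posting it:
doc = with_length_field({"id": "1", "name": "some product name"})
```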
ppose any one knows where i may be able to find them, or point me in
>> a direction to get more information about this tool.
>>
>> Thanks - dave
>>
Get the raw logs from normal use, script out something to replay the
searches, and have it fork to as many cores as the solr server has; that's
what I'd do.
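A rough Python sketch of that approach: pull the q= parameter out of Solr request-log lines, then replay the queries across a worker pool. The params={...} log shape and the send callable are assumptions:

```python
import re
from concurrent.futures import ThreadPoolExecutor

def extract_queries(log_lines):
    """Pull the q= parameter out of Solr request-log lines.
    Assumes the typical params={...} shape of Solr's request logging."""
    queries = []
    for line in log_lines:
        match = re.search(r"params=\{([^}]*)\}", line)
        if not match:
            continue
        for pair in match.group(1).split("&"):
            if pair.startswith("q="):
                queries.append(pair[2:])
    return queries

def replay(queries, send, workers=4):
    """Fan queries out over a worker pool, roughly one worker per core.
    `send` is whatever actually issues the HTTP request to Solr."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send, queries))
```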
> On Sep 4, 2017, at 5:26 AM, Daniel Ortega wrote:
>
> I would recommend you Solrmeter cloud
>
> This fork
My other concern would be your p's and q's. If you start mixing in Boolean
logic and Solr's weak respect for it, it could be unpredictable
> On Sep 3, 2017, at 5:43 PM, Phil Scadden wrote:
>
> 5 seems a reasonable limit to me. After that revert to slow.
>
> -Original
Store it as an atom rather than an IP address.
> On Jun 18, 2018, at 12:14 PM, root23 wrote:
>
> Hi all,
> is there a built in data type which i can use for ip address which can
> provide me sorting ip address based on the class? if not then what is the
> best way to sort based on ip address
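If there's no dedicated type that sorts the way you want, one workaround is to index a zero-padded copy of the address in a plain string field so lexicographic sort matches numeric order. A sketch (IPv4 only, which is my own assumption about the data):

```python
def ip_sort_key(ip: str) -> str:
    """Zero-pad each octet so plain string sorting orders IPv4
    addresses numerically (e.g. for a string field used with sort=)."""
    return ".".join(f"{int(octet):03d}" for octet in ip.split("."))
```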
You may find that buying some more memory will be your best bang for the buck
in your set up. 32-64 gb isn’t expensive,
> On Dec 27, 2017, at 6:57 PM, Suresh Pendap wrote:
>
> What is the downside of configuring ramBufferSizeMB to be equal to 5GB ?
> Is it only that
Would a minor solr upgrade such as this require a reindexing in order to take
advantage of the skg functionality, or would it work regardless? A full
reindex is quite a large operation in my use case
On a side note, does adding docvalues to an already indexed field, and then
optimizing, prevent the need to reindex to take advantage of docvalues? I was
under the impression you had to reindex the content.
> On Nov 3, 2018, at 4:41 AM, Deepak Goel wrote:
>
> I would start by monitoring the
Do you mind if I ask why so many collections, rather than a field in one
collection with a filter query applied for each customer to restrict the
result set, assuming you’re the one controlling the middleware?
> On Jan 22, 2019, at 4:43 PM, Monica Skidmore
> wrote:
>
> We have been
But then I would lose the streaming expressions, right?
> On Nov 20, 2018, at 6:00 PM, Edward Ribeiro wrote:
>
> Hi David,
>
> Well, as a last resort you can resort to classic schema.xml if you are
> using standalone Solr and don't bother to give up schema API. Then you are
> back to manually
Wow. Ok dude, relax and take a nap. It sounds like you don’t even have a core
defined. Maybe you do and I’m reaching a bit, but start there: solr is super
simple and only gets complicated when you’re complicated.
> On Apr 6, 2019, at 8:59 AM, Dave Beckstrom wrote:
>
> Hi Everyone,
Use the mlt to get the queries to use for getting facets in a two search
approach
> On Feb 25, 2019, at 10:18 PM, Zheng Lin Edwin Yeo
> wrote:
>
> Hi Martin,
>
> I think there are some pictures which are not being sent through in the
> email.
>
> Do send your query that you are using, and
I’m more curious what you’d expect to see, and what possible benefit you could
get from it
> On Feb 28, 2019, at 8:48 PM, Zheng Lin Edwin Yeo wrote:
>
> Hi Martin,
>
> I have no idea on this, as the case has not been active for almost 2 years.
> Maybe I can try to follow up.
>
> Faceting by
Sounds like you need to use code and post-process your results, as it sounds too
specific to your use case. Just my opinion, unless you want to get into spatial
queries, which is a whole different animal and something I don’t think many have
experience with, including myself
> On Feb 16, 2019,
This will tell you pretty much everything you need to get started
https://lucene.apache.org/solr/guide/6_6/language-analysis.html
> On Feb 5, 2019, at 4:55 AM, akash jayaweera wrote:
>
> Hello All,
>
> Can i get details how to use English analyzer with stemming,
> lemmatizatiion, stopword removal
You *can* use solr as a database, in the same sense that you *can* use a chainsaw
to remodel your bathroom. Is it the right tool for the job? No. Can you make
it work? Yes. As for HA and clustering an rdbms, Galera Cluster works great for
MariaDB, and is acid compliant. I’m sure any other database
You’re going to want to start by having more than 3gb for memory in my opinion
but the rest of your set up is more complex than I’ve dealt with.
On Sep 3, 2019, at 1:10 PM, Andrew Kettmann
wrote:
>> How many zookeepers do you have? How many collections? What is there size?
>> How much CPU /
As a side note, if you use shingles with the mlt handler I believe you will get
better scores/relevant results. So “to be free” becomes indexed as “to_be”
“to_be_free” and “be_free” but also as each word. It makes the index
significantly larger but creates better “unique terms” in my opinion
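Roughly what that shingling produces, sketched in Python (mimicking a shingle filter with "_" as the token separator; the exact analyzer settings are an assumption):

```python
def shingles(text: str, max_size: int = 3) -> list:
    """Emit unigrams plus underscore-joined shingles up to max_size,
    approximating what a shingle filter with "_" as the separator
    would add to the index."""
    tokens = text.split()
    out = list(tokens)
    for n in range(2, max_size + 1):
        for i in range(len(tokens) - n + 1):
            out.append("_".join(tokens[i:i + n]))
    return out
```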
I know this has nothing to do with the issue at hand but if you have a public
facing solr instance you have much bigger issues.
> On Sep 19, 2019, at 10:16 PM, Tyrone Tse wrote:
>
> I finally got JWT Authentication working on Solr 8.1.1.
> This is my security.json file contents
> {
>
https://doc.sitecore.com/developers/90/platform-administration-and-architecture/en/using-solr-auto-suggest.html
If you need more references. Set all parameters yourself, don’t rely on
defaults.
> On Nov 21, 2019, at 3:41 PM, Dave wrote:
>
> https://lucidworks.com/post/solr-
https://lucidworks.com/post/solr-suggester/
You must set buildOnStartup to false; the default is true. Try it
> On Nov 21, 2019, at 3:21 PM, Koen De Groote
> wrote:
>
> Erick:
>
> No suggesters. There is 1 spellchecker for
>
> text_general
>
> But no buildOnCommit or buildOnStartup setting
It clarifies yes. You need new fields. In this case something like
Address_us
Address_uk
And index and search them accordingly with different stopword files used in
different field types, hence the copy field from “address” into as many new
fields as needed
> On Dec 2, 2019, at 7:33 PM,
.org If you can, please build on your explanation as It
> sounds relevant.
> -Original Message-
> From: Dave
> Sent: Monday, December 2, 2019 7:38 PM
> To: solr-user@lucene.apache.org
> Cc: jornfra...@gmail.com
> Subject: Re: Is it possible to have different
I would index the products a user purchased, as well as the number of times
purchased. Then I would take a user, search their bought products (boosted by
how many times purchased) against other users, have a facet for products, and
filter out the top bought products that are not on the users
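The boosting part of that idea can be sketched as building an OR query where each purchased product is boosted by its purchase count. The product_id field name is illustrative:

```python
def boosted_purchase_query(purchases: dict) -> str:
    """Turn {product_id: times_bought} into a boosted OR query,
    e.g. product_id:123^5 OR product_id:456^2. The field name is
    made up for illustration."""
    clauses = [f"product_id:{pid}^{count}"
               for pid, count in sorted(purchases.items())]
    return " OR ".join(clauses)
```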
Actually at about that time the replication finished and added about 20-30gb to
the index from the master. My current set up goes
Indexing master -> indexer slave/production master (only replicated on
command)-> three search slaves (replicate each 15 minutes)
We added about 2.3m docs, then I
the query
>>
>> On Fri, Oct 25, 2019 at 12:11 PM Audrey Lorberfeld -
>> audrey.lorberf...@ibm.com wrote:
>>
>>> So then you do run your POS tagger at query-time, Dave?
>>>
>>> --
>>> Audrey Lorberfeld
>>> Data Scientist, w3 Search
>
I’m young here I think, not even 40 and only been using solr since like 2008 or
so, so like 1.4 give or take. But I know a really good therapist if you want to
talk about it.
> On Nov 30, 2019, at 6:56 PM, Mark Miller wrote:
>
> Now I have sacrificed to give you a new chance. A little for my
I guess I don’t understand why one wouldn’t simply make a basic front end for
solr; it’s literally the easiest thing to throw together, and then you control
all authentication and filters per user. Even a basic one would be some w3
schools tutorials with php+json+whatever authentication mech you
#1 merry Xmas thing
#2 you initially said you were talking about 1k documents. That will not be a
large enough sample size to see the index size differences with this new field,
in any case the index size should never really matter. But if you go to a few
million you will notice the size has
Or just do it the lazy way and use a dynamic field. I’ve found little to no
drawbacks with them aside from a complete lack of documentation of the field in
the schema itself
> On Dec 8, 2019, at 8:07 AM, David Barnett wrote:
>
> Also - look at adding fields using Solr admin, this will these
But!
If we don’t have people throwing a new release into production and finding real
world problems, we can’t trust that the current release’s problems will be
exposed and then remedied, so it’s a double-edged sword. I personally agree with
staying a major version back, but that’s because it
If you’re not getting values, don’t ask for the facet. Facets are expensive as
hell; maybe you should think more about your queries than your infrastructure.
Solr cloud won’t help you at all, especially if you’re asking for things you
don’t need
> On Jan 18, 2020, at 1:25 PM, Rajdeep Sahoo
Agreed with the above. What’s your idea of “huge”? I have 600-ish gb in one
core, plus another 250x2 in two more on the same standalone solr instance, and
it runs more than fine
> On Jan 18, 2020, at 11:31 AM, Shawn Heisey wrote:
>
> On 1/18/2020 1:05 AM, Rajdeep Sahoo wrote:
>> Our Index size
It doesn’t need to be identical, just anything with a build-on-reload statement
> On Jan 17, 2020, at 12:17 PM, rhys J wrote:
>
> On Fri, Jan 17, 2020 at 12:10 PM David Hastings <
> hastings.recurs...@gmail.com> wrote:
>
>> something like this in your solr config:
>>
>> autosuggest >
There is no increase in speed, but features. Doc values add some but it’s hard
to quantify, and some people think solr cloud has speed increases but I don’t
think they exist when hardware cost is nonexistent and it adds too much
complexity to something that should be simple.
> On Dec 28,
You’re best off doing a full reindex to a single solr cloud 8.x node and then,
when done, start taking down 7.x nodes, upgrading them to 8.x and adding them
to the new cluster. Upgrading indexes has so many potential issues,
> On Mar 6, 2020, at 9:21 PM, lstusr 5u93n4 wrote:
>
> Hi Webster,
>
> When
#1. This is a HORRIBLE IDEA
#2 If I was going to do this I would destroy the update request handler as well
as the entire admin ui from the solr instance, set up a replication from a
secure solr instance on an interval. This way no one could send an update
/delete command, you could still
Agreed. Just a JavaScript check on the input box would work fine for 99% of
cases, unless something automatic is running them in which case just server
side redirect back to the form.
> On Oct 27, 2020, at 11:54 AM, Mark Robinson wrote:
>
> Hi Konstantinos ,
>
> Thanks for the reply.
> I
That’s a good place to start. The idea was to make sure titles that started
with a date would not always be at the forefront and the actual title of the
doc would be sorted.
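One way to get that behavior is a separate sort field with any leading date stripped at index time. A sketch, assuming ISO-style YYYY-MM-DD prefixes; the actual date format isn't stated in the thread:

```python
import re

def title_sort_key(title: str) -> str:
    """Strip a leading ISO-style date so sorting follows the real title.
    The date pattern is an assumption about how these titles look."""
    return re.sub(r"^\d{4}-\d{2}-\d{2}\s*", "", title).lower()
```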
> On Jul 15, 2020, at 4:58 PM, Erick Erickson wrote:
>
> Yeah, it’s always a question “how much is enough/too much”.
It sounds like you have suggester indexes being built on startup. Without them
they just come up in a second or so
> On Aug 7, 2020, at 6:03 PM, Schwartz, Tony wrote:
>
> I have many collections. When I start solr, it takes 30 - 45 minutes to
> start up and load all the collections. My
e.org
> Subject: RE: solr startup
>
> suggester? what do i need to look for in the configs?
>
> Tony
>
> Original message
> From: Dave mailto:hastings.recurs...@gmail
Seriously. Doug answered all of your questions.
> On Jul 3, 2020, at 6:12 AM, Atri Sharma wrote:
>
> Please do not cross post. I believe your questions were already answered?
>
>> On Fri, Jul 3, 2020 at 3:08 PM Gautam K wrote:
>>
>> Since it's a bit of an urgent request so if could please
Is it horrible that I’m already burnt out from just reading that?
I’m going to stick to the classic solr master slave set up for the foreseeable
future; at least that lets me focus more on the search theory rather than the
back end system non stop.
> On Jun 9, 2020, at 5:11 PM, Vincenzo
A simple Perl script would be able to cover this. I have a cron-job Perl script
that does a search with an expected result; if the result isn’t there, it fails
over to a backup search server, sends me an email, and I fix what’s wrong. The
backup search server is a direct clone of the live server
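The same canary-query failover logic, sketched in Python rather than Perl; primary_fetch stands in for the real HTTP call and everything here is illustrative:

```python
def pick_server(primary_fetch, backup, expected):
    """Run a canary query against the primary; fall back to the backup
    host when the expected document is missing or the query errors.
    `primary_fetch` stands in for the real HTTP call; `backup` is the
    failover base URL (both hypothetical)."""
    try:
        results = primary_fetch()
    except Exception:
        results = []
    return "primary" if expected in results else backup
```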
I’ll add that whenever I’ve had a solr instance shut down, for me it’s been a
hardware failure. Either the ram or the disk got a “glitch” and both of these
are relatively fragile and wear and tear type parts of the machine, and should
be expected to fail and be replaced from time to time. Solr
I’m going to go against the advice SLIGHTLY; it really depends on how you have
things set up as far as your solr server hosting goes. If you’re searching
off the same solr server you’re indexing to, yeah, don’t ever optimize; it will
take care of itself. People much smarter than us, like
Just rebuild the index. Pretty sure they’re gone if they aren’t in your vm
backup, and solr isn’t a document storage tool, it’s a place to index the data
from your document store, so it’s understood more or less that it can always be
rebuilt when needed
> On Nov 13, 2020, at 9:52 PM, Alex
Solr isn’t meant to be public facing. Not sure how anyone would send these
commands since it can’t be reached from the outside world
> On Nov 12, 2020, at 7:12 AM, Sheikh, Wasim A.
> wrote:
>
> Hi Team,
>
> Currently we are facing the below vulnerability for Apache Solr tool. So can
> you
Are you running off of a release? onlyMorePopular was only
implemented in the trunk a few days ago (in earlier versions, even if
you specified onlyMorePopular, it was ignored).
dave
On Oct 24, 2007, at 5:58 PM, Justin Knoll wrote:
I'm running the example Solr install with a custom
of the request.
$solrResults then has nice arrays for accessing the results, facets,
etc.
It is a string you're getting back -- but it's just the serialized
representation.
dave
What are the results of the two var_dumps?
dave
On Nov 5, 2007, at 10:06 PM, James liu wrote:
first: I'm sure I enabled php and phps in my solrconfig.xml
second: I can't get an answer.
phps:
<?php
$url = 'http://localhost:8080/solr1/select/?q=2&version=2.2&start=0&rows=10&indent=on&wt=phps';
This is exactly correct.
two var_dump result:
bool(false)
So, unserializing is failing. Are you running from the trunk or from
a nightly? There was a bug a couple of weeks ago that sent back
faulty serialized data. It's fixed now. It's possible this is your
issue.
dave
On Nov 6, 2007
use phps, you must unserialize the results. If you
use php, you must eval the results (including some sugar to get a
variable set to that value).
dave
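The thread is about PHP's wt=php/wt=phps formats; for comparison, a sketch of the same idea with wt=json, which parses directly with no eval and no unserialize (the sample payload is made up):

```python
import json

# A made-up wt=json style payload; a real one would come back from the
# /select handler when you request wt=json.
raw = '{"response": {"numFound": 2, "docs": [{"id": "1"}, {"id": "2"}]}}'
data = json.loads(raw)  # parses directly: no eval, no unserialize
num_found = data["response"]["numFound"]
doc_ids = [doc["id"] for doc in data["response"]["docs"]]
```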