While my client is all PHP, it does not use a Solr client library. I wanted to
stay with the latest Solr Cloud, and the PHP clients all seemed to have some
kind of issue with being unaware of newer Solr Cloud versions. The client makes
pure REST calls with curl. It is stateful through local storage. There is no
persistent connection. There are no cookies, and PHP work is not sticky, so it
is designed for round robin on the internal network.
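
For what it's worth, each call is roughly shaped like this minimal sketch (the
host, collection name and query are placeholders, not my real setup):

    <?php
    // Stateless Solr query over REST with curl; nothing persists between requests.
    $solrHost = 'solr-01.internal:8983';            // placeholder host
    $url = "http://$solrHost/solr/mycollection/select"
         . '?q=' . urlencode('*:*') . '&wt=json';

    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return body as a string
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 2);    // fail fast on a dead node
    $body   = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    $docs = ($status === 200) ? json_decode($body, true)['response']['docs'] : [];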

I'm thinking we have different ideas of "persistent". To me something like
MySQL can be persistent, i.e. a FIFO queue for requests. The stack can be
always on/connected on something like heap storage.

I never thought about the impact of a Solr node crashing with PHP on top.
Many thanks!

I was thinking of running a conga line (the Ricci & Luci projects) and shutting
down and replacing failed nodes. I've never done this with Solr. I don't see
any reason why it would not work.

** When you say an array of connections per host, it would still require
internal DNS because hosts files don't round robin. Perhaps this is handled in
the Python client??
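
If I follow you, the rotation could live in the client itself instead of DNS:
a hard-coded host list shuffled per request, something like this sketch (the
IPs and collection name are made up):

    <?php
    // Client-side round robin over a fixed list of Solr nodes; no internal
    // DNS round robin needed. IPs below are placeholders.
    $solrHosts = ['10.0.0.11:8983', '10.0.0.12:8983', '10.0.0.13:8983',
                  '10.0.0.14:8983', '10.0.0.15:8983'];
    shuffle($solrHosts);            // each PHP request gets its own ordering
    $host = $solrHosts[0];          // remaining entries can serve as fallbacks
    $url  = "http://$host/solr/mycollection/select?q=" . urlencode('*:*') . '&wt=json';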

You have given me some good clarification, I think, lol. I know I can spin up
WWW servers based on load. I'm not sure how things will fly when spinning up
additional Solr nodes. I'm not sure what happens if you spin up an empty Solr
node, or what will happen with replication, shards, and the load cost of
spinning up an instance. I'm facing some experimentation, methinks. This will
be a manual process at first, for sure....

I guess I could put the Solr connect requests in my clients into a try loop,
looking for a successful connection by name before taking any action.
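
Something like this minimal sketch, maybe (hosts and collection name are
placeholders, and the ping URL assumes Solr's default PingRequestHandler):

    <?php
    // "Try loop": walk the host list and use the first Solr node that answers
    // a ping before sending the real request.
    $solrHosts = ['10.0.0.11:8983', '10.0.0.12:8983', '10.0.0.13:8983'];
    shuffle($solrHosts);

    $live = null;
    foreach ($solrHosts as $host) {
        $ch = curl_init("http://$host/solr/mycollection/admin/ping?wt=json");
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 1);
        curl_exec($ch);
        $ok = (curl_getinfo($ch, CURLINFO_HTTP_CODE) === 200);
        curl_close($ch);
        if ($ok) { $live = $host; break; }   // first healthy node wins
    }

    if ($live === null) {
        // all nodes are down: fail the request or queue it for a retry
    }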

Many thanks,

GW




On 15 December 2016 at 04:46, Dorian Hoxha <dorian.ho...@gmail.com> wrote:

> See replies inline:
>
> On Wed, Dec 14, 2016 at 3:36 PM, GW <thegeofo...@gmail.com> wrote:
>
> > Thanks,
> >
> > I understand accessing solr directly. I'm doing REST calls to a single
> > machine.
> >
> > If I have a cluster of five servers and say three Apache servers, I can
> > round robin the REST calls to all five in the cluster?
> >
> I don't know about php, but it would be better to have "persistent
> connections" or something to the solr servers. In python for example this
> is done automatically. It would be better if each php-server has a
> different order of an array of [list of solr ips]. This way each box will
> contact a ~different solr instance, and will have better chance of not
> creating too many new connections (since the connection cache is
> per-url/ip).
>
> >
> > I guess I'm going to find out. :-)  If so I might be better off just
> > running Apache on all my solr instances.
> >
> I've done that before (though with es, but it's ~same). And just contacting
> the localhost solr. The problem with that is that if the solr on the
> current host fails, your php won't work. So best in this scenario is to
> have an array of hosts, but the first being the local solr.
>
> >
> >
> >
> >
> >
> > On 14 December 2016 at 07:08, Dorian Hoxha <dorian.ho...@gmail.com>
> > wrote:
> >
> > > See replies inline:
> > >
> > > On Wed, Dec 14, 2016 at 11:16 AM, GW <thegeofo...@gmail.com> wrote:
> > >
> > > > Hello folks,
> > > >
> > > > I'm about to set up a Web service I created with PHP/Apache <--> Solr
> > > > Cloud
> > > >
> > > > I'm hoping to index a bazillion documents.
> > > >
> > > ok , how many inserts/second ?
> > >
> > > >
> > > > I'm thinking about using Linode.com because the pricing looks great.
> > > > Any opinions??
> > > >
> > > Pricing is 'ok'. For a bazillion documents, I would skip VPS and go
> > > straight to dedicated. Check out ovh.com / online.net etc etc
> > >
> > > >
> > > > I envision using an Apache/PHP round robin in front of a solr cloud
> > > >
> > > > My thoughts are that I send my requests to the Solr instances on the
> > > > Zookeeper Ensemble. Am I missing something?
> > > >
> > > You contact Solr directly; you don't have to connect to ZooKeeper for
> > > load balancing.
> > >
> > > >
> > > > What can I say.. I'm software oriented and a little hardware
> > > > challenged.
> > > >
> > > > Thanks in advance,
> > > >
> > > > GW
> > > >
> > >
> >
>
