Re: Trying to apply patch for SOLR-7036

2015-11-06 Thread r b
Ah, thanks for that. The 4.10 branch was it. If I have time, I'll
study up on what this patch is doing and see if I can't port it to 5x.
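
For the record, this is roughly what worked for me once I switched
branches (the branch URL below assumes the usual branches/ layout in
the ASF svn repo):

svn checkout http://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_4_10@1658487 lucene_solr_4_10
cd lucene_solr_4_10/solr
curl https://issues.apache.org/jira/secure/attachment/12731517/SOLR-7036.patch | patch -p0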

On Fri, Nov 6, 2015 at 6:24 AM, Shawn Heisey <apa...@elyograg.org> wrote:
> On 11/5/2015 7:04 PM, r b wrote:
>> I just wanted to double check that my steps were not too off base.
>>
>> I am trying to apply the patch from 8/May/15 and it seems to be
>> slightly off. Inside the patch, the working revision is 1658487, so I
>> checked that revision out from svn. This is what I did.
>>
>> svn checkout http://svn.apache.org/repos/asf/lucene/dev/trunk@1658487 lucene_trunk
>> cd lucene_trunk/solr
>> curl https://issues.apache.org/jira/secure/attachment/12731517/SOLR-7036.patch | patch -p0
>>
>> But `patch` still fails on a few hunks. I figured this patch was made
>> with `svn diff` so it should apply smoothly to that same revision,
>> shouldn't it?
>
> Erick had the same problem with the patch back in July, and asked the
> submitter to update the patch to trunk.  I tried applying the patch to
> branch_5x at the specified revision and that failed too.
>
> When I pulled down that specific revision of the lucene_solr_4_10
> branch, then it would cleanly apply.  There are vast differences between
> all 4.x branches/tags and the newer branches, which is why you cannot
> get the patch applied.  A huge amount of work went into the code for
> version 5.0.0, and the work on trunk and branch_5x since that release
> has been enormous.
>
> Getting this patch into 5x or trunk is going to require a lot of manual
> work.  The original patch author is best qualified to do that work.  If
> you want to tackle the job, feel free.  If you do so, please upload a
> new patch to the issue.
>
> Thanks,
> Shawn
>


Trying to apply patch for SOLR-7036

2015-11-05 Thread r b
I just wanted to double check that my steps were not too off base.

I am trying to apply the patch from 8/May/15 and it seems to be
slightly off. Inside the patch, the working revision is 1658487, so I
checked that revision out from svn. This is what I did.

svn checkout http://svn.apache.org/repos/asf/lucene/dev/trunk@1658487 lucene_trunk
cd lucene_trunk/solr
curl https://issues.apache.org/jira/secure/attachment/12731517/SOLR-7036.patch | patch -p0

But `patch` still fails on a few hunks. I figured this patch was made
with `svn diff` so it should apply smoothly to that same revision,
shouldn't it?
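
For what it's worth, this is how I've been checking which hunks fail
without touching the working copy (GNU patch's --dry-run, same patch
file as above):

curl -O https://issues.apache.org/jira/secure/attachment/12731517/SOLR-7036.patch
patch -p0 --dry-run < SOLR-7036.patch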

-renning


are there any SolrCloud supervisors?

2015-10-02 Thread r b
I've been working on something that just monitors ZooKeeper to add and
remove nodes from collections. The use case: I put SolrCloud in an
autoscaling group on EC2, and as instances go up and down, I need them
added to or removed from the collection. It's something I've built for
work and could clean up to share on GitHub if there is enough interest.
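
To give an idea of the shape of it, the loop is roughly this (not the
actual tool; the host, collection, and shard names are placeholders):

while true; do
  # nodes ZooKeeper currently considers live, via the Collections API
  curl -s "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json" \
    | python -c 'import json,sys; print("\n".join(json.load(sys.stdin)["cluster"]["live_nodes"]))' \
    > live_nodes.txt
  # compare live_nodes.txt against the nodes already holding replicas, then for each new node:
  #   curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICA&collection=myCollection&shard=shard1&node=$NEW_NODE"
  # and DELETEREPLICA for nodes that have gone away
  sleep 30
done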

I asked in IRC about a SolrCloud supervisor utility but wanted to
extend that question to this list. Are there any more "full-featured"
supervisors out there?


-renning


solrcloud in an inconsistent state

2015-09-29 Thread r b
Lately, my workflow has been: 1) make some config changes, 2) upload
them to ZooKeeper, 3) use the Collections API to reload the config for
the collection. This has been working pretty well.
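
Concretely, the cycle is roughly this (the zkhost, port, paths, and
names here are just placeholders for my setup):

server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 -cmd upconfig \
  -confdir /path/to/myCollection/conf -confname myCollectionConf
curl "http://localhost:8983/solr/admin/collections?action=RELOAD&name=myCollection"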

Starting last week, I began using the AnalyzingInfixLookupFactory in
a SuggestComponent (up until then, it was just the
FuzzyLookupFactory). The infix lookup requires an indexPath where it
keeps an index on disk.
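
For reference, the suggester is configured along these lines in
solrconfig.xml (the field, analyzer, and indexPath values are just my
setup):

<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
    <str name="indexPath">myInfixLookup</str>
  </lst>
</searchComponent>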

The first couple of times I used this and went through my cycle, there
was no problem. But then I started getting some strange errors:

LockObtainFailedException: Lock obtain timed out:
NativeFSLock@/opt/solr-5.2.1/server/solr/myCollection_shard4_replica6/data/myInfixLookup/write.lock

Now when I try to update the config and reload, I do not get a
response back and the connection drops after a minute.

When I run other Collections API commands, I notice them queueing up
in the overseer collection work queue. After a seemingly long while,
they disappear. I assumed some SolrCloud nodes were just taking a
while, but when playing with the suggest component's handler, I notice
that not all of the nodes get the new config changes.
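
For what it's worth, this is how I've been watching that queue (plain
ZooKeeper client; I'm assuming the znode path is the standard one):

bin/zkCli.sh -server zk1:2181 ls /overseer/collection-queue-work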

Has anyone else seen this before? Maybe there is something wrong with
my workflow that caused this?

-renning