RE: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Uwe Schindler
Hi,

You must use the smoke tester from the 5.4 branch (not branch_5x, not trunk). 
You may have used the one from trunk that checks (of course) if the files were 
compiled with Java 8, which is wrong.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
> Sent: Tuesday, December 08, 2015 9:04 AM
> To: dev@lucene.apache.org
> Subject: Re: [VOTE] Release Lucene/Solr 5.4.0-RC1
> 
> Hi,
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> rev1718046 gives me the following exception:
> 
> RuntimeError: JAR file
> "/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-
> 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar" is missing "X-
> Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
> 
> Am I doing something wrong?
> 
> Thanks,
> Ahmet
> 
> 
> 
> On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com"
>  wrote:
> 
> 
> 
> +1 for release. (tested with Java 7)
> SUCCESS! [0:56:31.943245]
> 
> 
> On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:
> 
> +1
> >
> >Docs, javadocs, and changes look good.
> >
> >Smoke tester was happy with Java7 and Java8:
> >
> >SUCCESS! [1:53:58.550314]
> >
> >Steve
> >
> >> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
> >>
> >> Yes, Shalin, you are right. My fix was still required, but I clearly
> >> manually entered the SVN commit command wrong. Seeing as it does not
> >> impact upon the contents of the files, I have executed an SVN mv
> >> command, rerun the smoke test with the below, which worked:
> >>
> >> python3 -u dev-tools/scripts/smokeTestRelease.py
> >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> rev1718046
> >>
> >> Please, folks, use the above to run the smoke test for this release.
> >>
> >> Upayavira
> >>
> >> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
> >>> Hi Upayavira,
> >>>
> >>> The svn revision in the URL is wrong. It should be 1718046 but it is
> >>> 178046 which makes the smoke tester fail with the following message:
> >>>
> >>> RuntimeError: JAR file
> >>> "/tmp/smoke_lucene_5.4.0_178046_1/unpack/lucene-
> 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
> >>> is missing "Implementation-Version: 5.4.0 178046 " inside its
> >>> META-INF/MANIFEST.MF (wrong svn revision?)
> >>>
> >>> I think you may need to generate a new RC. But perhaps an svn move to
> >>> a path with the right revision number may also suffice?
> >>>
> >>> On Mon, Dec 7, 2015 at 9:12 AM, Shalin Shekhar Mangar
> >>>  wrote:
>  Thanks Upayavira. I guess Apache has started redirecting http traffic
>  to https recently on dist.apache.org which must have broken this
>  script. I am able to run smoke tester after applying your patch.
> 
>  On Mon, Dec 7, 2015 at 2:08 AM, Upayavira  wrote:
> The getHREFs() method is taking in an HTTPS URL, but failing to preserve
> > the protocol, resulting in an HTTP call that the server naturally
> bounces to HTTPS. Unfortunately, the next loop round also forgets the
> > HTTPS, and hence we're stuck in an endless loop. Below is a patch that
> > fixes this issue. I'd rather someone with more knowledge of this script
> > confirm my suspicion and apply the patch for us all to use, as I cannot
> > see how this ever worked.
> >
> > I personally ran the smoke test on my local copy, so did not hit this
> HTTP/HTTPS code. I'm running the HTTP version now, and will check on it
> > in the morning.
> >
> > Index: dev-tools/scripts/smokeTestRelease.py
> >
> ==
> =
> > --- dev-tools/scripts/smokeTestRelease.py   (revision 1718046)
> > +++ dev-tools/scripts/smokeTestRelease.py   (working copy)
> > @@ -84,7 +84,12 @@
> >   # Deref any redirects
> >   while True:
> > url = urllib.parse.urlparse(urlString)
> > -h = http.client.HTTPConnection(url.netloc)
> > +if url.scheme == "http":
> > +  h = http.client.HTTPConnection(url.netloc)
> > +elif url.scheme == "https":
> > +  h = http.client.HTTPSConnection(url.netloc)
> > +else:
> > +  raise RuntimeError("Unknown protocol: %s" % url.scheme)
> > h.request('GET', url.path)
> > r = h.getresponse()
> > newLoc = r.getheader('location')
> >
> > Upayavira
> >
> > On Sun, Dec 6, 2015, at 06:26 PM, Noble Paul wrote:
> >> Same here.
> >>
> >> On Sun, Dec 6, 2015 at 2:36 PM, Shalin Shekhar Mangar
> >>  wrote:
> >>> Is anyone able to run the smoke tester on this RC? It just hangs for a
> >>> long time on "loading release URL" 
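Upayavira's scheme-preserving fix above can be isolated as a small helper. The sketch below is illustrative, not the actual smokeTestRelease.py code; constructing a connection object does not open a socket, so it is safe to exercise offline:

```python
# Illustrative sketch of the scheme dispatch from Upayavira's patch above;
# not the actual smokeTestRelease.py code.
import http.client
import urllib.parse

def connection_for(url_string):
    """Return an HTTP or HTTPS connection matching the URL's scheme."""
    url = urllib.parse.urlparse(url_string)
    if url.scheme == "http":
        return http.client.HTTPConnection(url.netloc)
    elif url.scheme == "https":
        return http.client.HTTPSConnection(url.netloc)
    raise RuntimeError("Unknown protocol: %s" % url.scheme)
```

Calling such a helper on each redirect hop keeps an https:// Location header on TLS, instead of silently falling back to plain HTTP and looping on the server's 301.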

Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Adrien Grand
I hit a test failure when running the smoke tester that reproduces on the
5.4 source tree:

09:32 jpountz@zreptik4:~/src/lucene_solr_5_4/lucene/core $
JAVA_HOME=/home/jpountz/local/jdk1.7.0_55/ ant test
-Dtestcase=TestIndexWriterExceptions
-Dtests.method=testTooManyFileException -Dtests.seed=855A4CA4CD3CF9A3
-Dtests.slow=true -Dtests.locale=ar_SY -Dtests.timezone=Australia/ACT
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions
[junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterExceptions -Dtests.method=testTooManyFileException -Dtests.seed=855A4CA4CD3CF9A3 -Dtests.slow=true -Dtests.locale=ar_SY -Dtests.timezone=Australia/ACT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
[junit4] FAILURE 0.21s | TestIndexWriterExceptions.testTooManyFileException <<<
[junit4] > Throwable #1: java.lang.AssertionError: file _0.si does not exist; files=[segments_1, _0.cfs, _0.si, _0.cfe]
[junit4] > at __randomizedtesting.SeedInfo.seed([855A4CA4CD3CF9A3:DAC89DD242954BF]:0)
[junit4] > at org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4327)
[junit4] > at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:955)
[junit4] > at org.apache.lucene.index.TestIndexWriterExceptions.testTooManyFileException(TestIndexWriterExceptions.java:1852)
[junit4] > at java.lang.Thread.run(Thread.java:745)
[junit4] 2> NOTE: test params are: codec=Asserting(Lucene54): {foo=PostingsFormat(name=LuceneFixedGap)}, docValues:{}, sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {}, locale=ar_SY, timezone=Australia/ACT
[junit4] 2> NOTE: Linux 3.13.0-71-generic amd64/Oracle Corporation 1.7.0_55 (64-bit)/cpus=8,threads=1,free=187645328,total=251658240
[junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterExceptions]
[junit4] Completed [1/1] in 0.46s, 1 test, 1 failure <<< FAILURES!


Le mar. 8 déc. 2015 à 09:14, Uwe Schindler  a écrit :

> Hi,
>
> You must use the smoke tester from the 5.4 branch (not branch_5x, not
> trunk). You may have used the one from trunk that checks (of course) if the
> files were compiled with Java 8, which is wrong.
>
> Uwe
>
> -
> Uwe Schindler
> H.-H.-Meier-Allee 63, D-28213 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
> > Sent: Tuesday, December 08, 2015 9:04 AM
> > To: dev@lucene.apache.org
> > Subject: Re: [VOTE] Release Lucene/Solr 5.4.0-RC1
> >
> > Hi,
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py
> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> > rev1718046 gives me the following exception:
> >
> > RuntimeError: JAR file
> > "/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-
> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar" is missing "X-
> > Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
> >
> > Am I doing something wrong?
> >
> > Thanks,
> > Ahmet
> >
> >
> >
> > On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com"
> >  wrote:
> >
> >
> >
> > +1 for release. (tested with Java 7)
> > SUCCESS! [0:56:31.943245]
> >
> >
> > On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:
> >
> > +1
> > >
> > >Docs, javadocs, and changes look good.
> > >
> > >Smoke tester was happy with Java7 and Java8:
> > >
> > >SUCCESS! [1:53:58.550314]
> > >
> > >Steve
> > >
> > >> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
> > >>
> > >> Yes, Shalin, you are right. My fix was still required, but I clearly
> > >> manually entered the SVN commit command wrong. Seeing as it does not
> > >> impact upon the contents of the files, I have executed an SVN mv
> > >> command, rerun the smoke test with the below, which worked:
> > >>
> > >> python3 -u dev-tools/scripts/smokeTestRelease.py
> > >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> > rev1718046
> > >>
> > >> Please, folks, use the above to run the smoke test for this release.
> > >>
> > >> Upayavira
> > >>
> > >> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
> > >>> Hi Upayavira,
> > >>>
> > >>> The svn revision in the URL is wrong. It should be 1718046 but it is
> > >>> 178046 which makes the smoke tester fail with the following message:
> > >>>
> > >>> RuntimeError: JAR file
> > >>> "/tmp/smoke_lucene_5.4.0_178046_1/unpack/lucene-
> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
> > >>> is missing "Implementation-Version: 5.4.0 178046 " inside its
> > >>> META-INF/MANIFEST.MF (wrong svn revision?)
> > >>>
> > >>> I think you may need to generate a new RC. But perhaps an svn move to
> > >>> a path with the right revision number may also suffice?
> > >>>
> > >>> On Mon, Dec 7, 2015 at 9:12 AM, Shalin Shekhar Mangar
> > >>>  wrote:
>  Thanks Upayavira. I guess Apache has started redirecting http traffic
> >  to https recently on dist.apache.org 

[jira] [Commented] (SOLR-8386) The new admin UI doesn't understand that managed schemas are the default in 6.0

2015-12-08 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046597#comment-15046597
 ] 

Varun Thacker commented on SOLR-8386:
-

bq. The difference is that you do not need to specify anything special in 6x.

The difference is that, starting in 6.0, when no schemaFactory is specified in 
the solrconfig.xml file, ManagedSchemaFactory is used instead of 
ClassicSchemaFactory.
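For reference, an explicit declaration looks like this in solrconfig.xml (a sketch, not taken from this thread; the class and parameter names are the standard ones for these factories, and in 6.0 omitting the element entirely implies the managed factory):

```xml
<!-- solrconfig.xml sketch: the managed factory, spelled out explicitly.
     In 6.0 this is what you get when no schemaFactory is declared. -->
<schemaFactory class="ManagedIndexSchemaFactory">
  <bool name="mutable">true</bool>
  <str name="managedSchemaResourceName">managed-schema</str>
</schemaFactory>

<!-- Opting out, to keep a hand-edited schema.xml: -->
<!-- <schemaFactory class="ClassicIndexSchemaFactory"/> -->
```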

> The new admin UI doesn't understand that managed schemas are the default in 
> 6.0
> ---
>
> Key: SOLR-8386
> URL: https://issues.apache.org/jira/browse/SOLR-8386
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Erick Erickson
>
> SOLR-8131 makes managed schema the default in Solr 6.0. So the "add field" & 
> etc buttons aren't being shown in the schema link as they are in 5.x (5.4+).
> Note, _all_ the configsets in 5.5+ are managed, as they are in 6.x. The 
> difference is that you do not need to specify anything special in 6x. 
> So whatever the key that determines whether the admin UI knows the schema is 
> managed or not probably needs to be updated.
> [~upayavira] any hints here?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8373) KerberosPlugin: Using multiple nodes on same machine leads clients to fetch TGT for every request

2015-12-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-8373:
-
Attachment: SOLR-8373.patch

smaller patch eliminating the change to the {{init()}}

> KerberosPlugin: Using multiple nodes on same machine leads clients to fetch 
> TGT for every request
> -
>
> Key: SOLR-8373
> URL: https://issues.apache.org/jira/browse/SOLR-8373
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-8373.patch, SOLR-8373.patch, SOLR-8373.patch, 
> SOLR-8373.patch, SOLR-8373.patch
>
>
> Kerberized solr nodes accept negotiate/spnego/kerberos requests and process 
> them. It also passes back to the client a cookie called "hadoop.auth" (which 
> is currently unused, but will eventually be used for delegation tokens). 
> If two or more nodes are on the same machine, they all send out the cookie 
> which have the same domain (hostname) and same path, but different cookie 
> values.
> Upon receipt at the client, if a cookie is rejected (which in this case will 
> be), the client compulsorily gets a *new* TGT from the KDC instead of 
> reading the same ticket from the ticketcache. This is causing the heavy 
> traffic at the KDC, plus intermittent "Request is a replay" (which indicates 
> race condition at KDC while handing out the TGT for the same principal).
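The collision is easy to demonstrate in miniature with Python's cookie jar (illustrative only; Solr's client side is Java, and the host name below is a placeholder). A jar keys cookies by (domain, path, name), so two nodes on one host fight over a single "hadoop.auth" slot:

```python
# Illustrative only: two nodes on one host emit "hadoop.auth" cookies with
# the same domain and path, so a jar keyed by (domain, path, name) keeps
# just one of them -- the stale value then gets rejected by the other node.
from http.cookiejar import Cookie, CookieJar

def hadoop_auth_cookie(value):
    # Placeholder host; only the value differs between the two nodes.
    return Cookie(
        version=0, name="hadoop.auth", value=value, port=None,
        port_specified=False, domain="solr-host.example.com",
        domain_specified=True, domain_initial_dot=False, path="/",
        path_specified=True, secure=False, expires=None, discard=True,
        comment=None, comment_url=None, rest={})

jar = CookieJar()
jar.set_cookie(hadoop_auth_cookie("token-from-node-8983"))
jar.set_cookie(hadoop_auth_cookie("token-from-node-8984"))  # replaces the first
```

After both `set_cookie` calls the jar holds a single cookie, carrying only the second node's value.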






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Ahmet Arslan
Hi,

python3 -u dev-tools/scripts/smokeTestRelease.py 
https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev1718046 
gives me the following exception:

RuntimeError: JAR file 
"/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
 is missing "X-Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF

Am I doing something wrong?

Thanks,
Ahmet



On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com" 
 wrote:



+1 for release. (tested with Java 7)
SUCCESS! [0:56:31.943245]


On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:

+1
>
>Docs, javadocs, and changes look good.
>
>Smoke tester was happy with Java7 and Java8:
>
>SUCCESS! [1:53:58.550314]
>
>Steve
>
>> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
>>
>> Yes, Shalin, you are right. My fix was still required, but I clearly
>> manually entered the SVN commit command wrong. Seeing as it does not
>> impact upon the contents of the files, I have executed an SVN mv
>> command, rerun the smoke test with the below, which worked:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev1718046
>>
>> Please, folks, use the above to run the smoke test for this release.
>>
>> Upayavira
>>
>> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
>>> Hi Upayavira,
>>>
>>> The svn revision in the URL is wrong. It should be 1718046 but it is
>>> 178046 which makes the smoke tester fail with the following message:
>>>
>>> RuntimeError: JAR file
>>> "/tmp/smoke_lucene_5.4.0_178046_1/unpack/lucene-5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
>>> is missing "Implementation-Version: 5.4.0 178046 " inside its
>>> META-INF/MANIFEST.MF (wrong svn revision?)
>>>
>>> I think you may need to generate a new RC. But perhaps an svn move to
>>> a path with the right revision number may also suffice?
>>>
>>> On Mon, Dec 7, 2015 at 9:12 AM, Shalin Shekhar Mangar
>>>  wrote:
 Thanks Upayavira. I guess Apache has started redirecting http traffic
 to https recently on dist.apache.org which must have broken this
 script. I am able to run smoke tester after applying your patch.

 On Mon, Dec 7, 2015 at 2:08 AM, Upayavira  wrote:
> The getHREFs() method is taking in an HTTPS URL, but failing to preserve
> the protocol, resulting in an HTTP call that the server naturally
> bounces to HTTPS. Unfortunately, the next loop round also forgets the
> HTTPS, and hence we're stuck in an endless loop. Below is a patch that
> fixes this issue. I'd rather someone with more knowledge of this script
> confirm my suspicion and apply the patch for us all to use, as I cannot
> see how this ever worked.
>
> I personally ran the smoke test on my local copy, so did not hit this
> HTTP/HTTPS code. I'm running the HTTP version now, and will check on it
> in the morning.
>
> Index: dev-tools/scripts/smokeTestRelease.py
> ===
> --- dev-tools/scripts/smokeTestRelease.py   (revision 1718046)
> +++ dev-tools/scripts/smokeTestRelease.py   (working copy)
> @@ -84,7 +84,12 @@
>   # Deref any redirects
>   while True:
> url = urllib.parse.urlparse(urlString)
> -h = http.client.HTTPConnection(url.netloc)
> +if url.scheme == "http":
> +  h = http.client.HTTPConnection(url.netloc)
> +elif url.scheme == "https":
> +  h = http.client.HTTPSConnection(url.netloc)
> +else:
> +  raise RuntimeError("Unknown protocol: %s" % url.scheme)
> h.request('GET', url.path)
> r = h.getresponse()
> newLoc = r.getheader('location')
>
> Upayavira
>
> On Sun, Dec 6, 2015, at 06:26 PM, Noble Paul wrote:
>> Same here.
>>
>> On Sun, Dec 6, 2015 at 2:36 PM, Shalin Shekhar Mangar
>>  wrote:
>>> Is anyone able to run the smoke tester on this RC? It just hangs for a
>>> long time on "loading release URL" for me.
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py --tmp-dir
>>> ../smoke-5.4 --revision 178046 --version 5.4.0 --test-java8
>>> ~/programs/jdk8
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046/
>>> Java 1.7 JAVA_HOME=/home/shalin/programs/jdk7
>>> Java 1.8 JAVA_HOME=/home/shalin/programs/jdk8
>>> NOTE: output encoding is UTF-8
>>>
>>> Load release URL
>>> "https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046/"...
>>>
>>> I did a strace and found that the server is returning a HTTP 301 moved
>>> permanently response to the http request.
>>>
>>> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
 Please 

[jira] [Commented] (SOLR-8386) The new admin UI doesn't understand that managed schemas are the default in 6.0

2015-12-08 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046634#comment-15046634
 ] 

Upayavira commented on SOLR-8386:
-

This is what does it:
{code}
Config.get({core: $routeParams.core}, function(data) {
$scope.isSchemaUpdatable = data.config.schemaFactory.class == 
"ManagedIndexSchemaFactory";
});
{code}

i.e. it calls the config API, and looks to see what schema factory is 
configured. If it is the Managed one, then it switches on those buttons.

Config.get() calls /solr/<core>/config - what do you get from calling 
that URL, especially inside config.schemaFactory?
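The same check can be replicated outside the browser against a parsed config response; the JSON shape below is inferred from the JS snippet above, not verified against a live Solr:

```python
# Sketch of the admin UI's test applied to a parsed /solr/<core>/config
# response; the response shape here is an assumption inferred from the
# JS snippet above.
def is_schema_updatable(config_response):
    """True when the core reports ManagedIndexSchemaFactory."""
    factory = config_response.get("config", {}).get("schemaFactory", {})
    return factory.get("class") == "ManagedIndexSchemaFactory"

managed = {"config": {"schemaFactory": {"class": "ManagedIndexSchemaFactory"}}}
classic = {"config": {"schemaFactory": {"class": "ClassicIndexSchemaFactory"}}}
missing = {"config": {}}  # 6.0 default: nothing declared, yet schema is managed
```

The third case is the bug in a nutshell: with no schemaFactory in the response, the check returns False even though the 6.0 default is managed.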

> The new admin UI doesn't understand that managed schemas are the default in 
> 6.0
> ---
>
> Key: SOLR-8386
> URL: https://issues.apache.org/jira/browse/SOLR-8386
> Project: Solr
>  Issue Type: Bug
>  Components: UI
>Affects Versions: 6.0
>Reporter: Erick Erickson
>
> SOLR-8131 makes managed schema the default in Solr 6.0. So the "add field" & 
> etc buttons aren't being shown in the schema link as they are in 5.x (5.4+).
> Note, _all_ the configsets in 5.5+ are managed, as they are in 6.x The 
> difference is that you do not need to specify anything special in 6x. 
> So whatever the key that determines whether the admin UI knows the schema is 
> managed or not probably needs to be updated.
> [~upayavira] any hints here?






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Michael McCandless
I'll dig.

Mike McCandless

http://blog.mikemccandless.com


On Tue, Dec 8, 2015 at 3:37 AM, Adrien Grand  wrote:
> I hit a test failure when running the smoke tester that reproduces on the
> 5.4 source tree:
>
> 09:32 jpountz@zreptik4:~/src/lucene_solr_5_4/lucene/core $
> JAVA_HOME=/home/jpountz/local/jdk1.7.0_55/ ant test
> -Dtestcase=TestIndexWriterExceptions -Dtests.method=testTooManyFileException
> -Dtests.seed=855A4CA4CD3CF9A3 -Dtests.slow=true -Dtests.locale=ar_SY
> -Dtests.timezone=Australia/ACT -Dtests.asserts=true
> -Dtests.file.encoding=UTF-8
>
> [junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions
> [junit4] 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterExceptions -Dtests.method=testTooManyFileException -Dtests.seed=855A4CA4CD3CF9A3 -Dtests.slow=true -Dtests.locale=ar_SY -Dtests.timezone=Australia/ACT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> [junit4] FAILURE 0.21s | TestIndexWriterExceptions.testTooManyFileException <<<
> [junit4] > Throwable #1: java.lang.AssertionError: file _0.si does not exist; files=[segments_1, _0.cfs, _0.si, _0.cfe]
> [junit4] > at __randomizedtesting.SeedInfo.seed([855A4CA4CD3CF9A3:DAC89DD242954BF]:0)
> [junit4] > at org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4327)
> [junit4] > at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:955)
> [junit4] > at org.apache.lucene.index.TestIndexWriterExceptions.testTooManyFileException(TestIndexWriterExceptions.java:1852)
> [junit4] > at java.lang.Thread.run(Thread.java:745)
> [junit4] 2> NOTE: test params are: codec=Asserting(Lucene54): {foo=PostingsFormat(name=LuceneFixedGap)}, docValues:{}, sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {}, locale=ar_SY, timezone=Australia/ACT
> [junit4] 2> NOTE: Linux 3.13.0-71-generic amd64/Oracle Corporation 1.7.0_55 (64-bit)/cpus=8,threads=1,free=187645328,total=251658240
> [junit4] 2> NOTE: All tests run in this JVM: [TestIndexWriterExceptions]
> [junit4] Completed [1/1] in 0.46s, 1 test, 1 failure <<< FAILURES!
>
>
> Le mar. 8 déc. 2015 à 09:14, Uwe Schindler  a écrit :
>>
>> Hi,
>>
>> You must use the smoke tester from the 5.4 branch (not branch_5x, not
>> trunk). You may have used the one from trunk that checks (of course) if the
>> files were compiled with Java 8, which is wrong.
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>> > -Original Message-
>> > From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
>> > Sent: Tuesday, December 08, 2015 9:04 AM
>> > To: dev@lucene.apache.org
>> > Subject: Re: [VOTE] Release Lucene/Solr 5.4.0-RC1
>> >
>> > Hi,
>> >
>> > python3 -u dev-tools/scripts/smokeTestRelease.py
>> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
>> > rev1718046 gives me the following exception:
>> >
>> > RuntimeError: JAR file
>> > "/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-
>> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar" is missing "X-
>> > Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
>> >
>> > Am I doing something wrong?
>> >
>> > Thanks,
>> > Ahmet
>> >
>> >
>> >
>> > On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com"
>> >  wrote:
>> >
>> >
>> >
>> > +1 for release. (tested with Java 7)
>> > SUCCESS! [0:56:31.943245]
>> >
>> >
>> > On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:
>> >
>> > +1
>> > >
>> > >Docs, javadocs, and changes look good.
>> > >
>> > >Smoke tester was happy with Java7 and Java8:
>> > >
>> > >SUCCESS! [1:53:58.550314]
>> > >
>> > >Steve
>> > >
>> > >> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
>> > >>
>> > >> Yes, Shalin, you are right. My fix was still required, but I clearly
>> > >> manually entered the SVN commit command wrong. Seeing as it does not
>> > >> impact upon the contents of the files, I have executed an SVN mv
>> > >> command, rerun the smoke test with the below, which worked:
>> > >>
>> > >> python3 -u dev-tools/scripts/smokeTestRelease.py
>> > >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
>> > rev1718046
>> > >>
>> > >> Please, folks, use the above to run the smoke test for this release.
>> > >>
>> > >> Upayavira
>> > >>
>> > >> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
>> > >>> Hi Upayavira,
>> > >>>
>> > >>> The svn revision in the URL is wrong. It should be 1718046 but it is
>> > >>> 178046 which makes the smoke tester fail with the following message:
>> > >>>
>> > >>> RuntimeError: JAR file
>> > >>> "/tmp/smoke_lucene_5.4.0_178046_1/unpack/lucene-
>> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
>> > >>> is missing "Implementation-Version: 5.4.0 178046 " inside its
>> > >>> META-INF/MANIFEST.MF (wrong svn revision?)
>> > >>>
>> > >>> I think you may need to generate a new RC. But perhaps an svn move
>> > >>> to
>> > 

[jira] [Updated] (SOLR-8387) Solr example configs should ship with managed-schema instead of schema.xml

2015-12-08 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8387:

Fix Version/s: Trunk

> Solr example configs should ship with managed-schema instead of schema.xml
> --
>
> Key: SOLR-8387
> URL: https://issues.apache.org/jira/browse/SOLR-8387
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Varun Thacker
> Fix For: Trunk
>
>
> This is a followup of SOLR-8131 . In SOLR-8131 if a schema factory is not 
> specified explicitly managed schema will be used.
> Now since managed schema factory is the default, when a user goes to start 
> solr 6.0 their schema.xml file will get converted to managed-schema  . This 
> might seem trappy or confusing to a user. Hence why don't we directly ship 
> with a file called {{managed-schema}} instead of {{schema.xml}}? Just a 
> rename of the files in all the example configs that we ship. The data_driven 
> config does that already






[jira] [Updated] (SOLR-8387) Solr example configs should ship with managed-schema instead of schema.xml

2015-12-08 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8387:

Attachment: SOLR-8387.patch

Small patch. It renames schema.xml to managed-schema; currently only the 
example/files and data_driven config sets had this.

Should this be applied to 5.x as well, since all the configs there explicitly 
use ManagedSchemaFactory and the rename will happen after they have started 
solr for the first time?
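The mechanical part of the change is just a rename per configset. A sketch over a throwaway demo tree (the paths are placeholders, not the real repo layout, and the actual patch would use "svn mv" so history is preserved):

```python
# Illustrative rename sketch for SOLR-8387 over a demo tree; the paths
# are placeholders, not the real repo layout, and the actual patch would
# use "svn mv" to preserve history.
import pathlib

root = pathlib.Path("demo_configsets")
for cfg in ("basic_configs", "data_driven_schema_configs"):
    conf = root / cfg / "conf"
    conf.mkdir(parents=True, exist_ok=True)
    (conf / "schema.xml").write_text("<schema/>\n")

# The actual change: schema.xml -> managed-schema in each configset.
for schema in root.glob("*/conf/schema.xml"):
    schema.rename(schema.with_name("managed-schema"))
```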

> Solr example configs should ship with managed-schema instead of schema.xml
> --
>
> Key: SOLR-8387
> URL: https://issues.apache.org/jira/browse/SOLR-8387
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Varun Thacker
> Fix For: Trunk
>
> Attachments: SOLR-8387.patch
>
>
> This is a followup of SOLR-8131 . In SOLR-8131 if a schema factory is not 
> specified explicitly managed schema will be used.
> Now since managed schema factory is the default, when a user goes to start 
> solr 6.0 their schema.xml file will get converted to managed-schema  . This 
> might seem trappy or confusing to a user. Hence why don't we directly ship 
> with a file called {{managed-schema}} instead of {{schema.xml}}? Just a 
> rename of the files in all the example configs that we ship. The data_driven 
> config does that already






[jira] [Updated] (SOLR-8373) KerberosPlugin: Using multiple nodes on same machine leads clients to fetch TGT for every request

2015-12-08 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8373:
---
Description: 
Kerberized solr nodes accept negotiate/spnego/kerberos requests and process 
them. It also passes back to the client a cookie called "hadoop.auth" (which is 
currently unused, but will eventually be used for delegation tokens). 

If two or more nodes are on the same machine, they all send out the cookie 
which have the same domain (hostname) and same path, but different cookie 
values.

Upon receipt at the client, if a cookie is rejected (which in this case will 
be), the client gets a TGT from the KDC. This is causing the heavy traffic at 
the KDC, plus intermittent "Request is a replay" (which indicates race 
condition at KDC while handing out the TGT for the same principal). I think 
having a (well configured) ticket cache is a potential solution, but having 
cookies get rejected is bad enough.


  was:
Kerberized solr nodes accept negotiate/spnego/kerberos requests and processes 
them. It also passes back to the client a cookie called "hadoop.auth" (which is 
currently unused, but will eventually be used for delegation tokens). 

If two or more nodes are on the same machine, they all send out the cookie 
which have the same domain (hostname) and same path, but different cookie 
values.

Upon receipt at the client, if a cookie is rejected (which in this case will 
be), the client compulsorily gets a *new* TGT from the KDC instead of 
reading the same ticket from the ticketcache. This is causing the heavy traffic 
at the KDC, plus intermittent "Request is a replay" (which indicates race 
condition at KDC while handing out the TGT for the same principal).



> KerberosPlugin: Using multiple nodes on same machine leads clients to fetch 
> TGT for every request
> -
>
> Key: SOLR-8373
> URL: https://issues.apache.org/jira/browse/SOLR-8373
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-8373.patch, SOLR-8373.patch, SOLR-8373.patch, 
> SOLR-8373.patch, SOLR-8373.patch
>
>
> Kerberized solr nodes accept negotiate/spnego/kerberos requests and process 
> them. It also passes back to the client a cookie called "hadoop.auth" (which 
> is currently unused, but will eventually be used for delegation tokens). 
> If two or more nodes are on the same machine, they all send out the cookie 
> which have the same domain (hostname) and same path, but different cookie 
> values.
> Upon receipt at the client, if a cookie is rejected (which in this case will 
> be), the client gets a TGT from the KDC. This is causing the heavy traffic 
> at the KDC, plus intermittent "Request is a replay" (which indicates race 
> condition at KDC while handing out the TGT for the same principal). I think 
> having a (well configured) ticket cache is a potential solution, but having 
> cookies get rejected is bad enough.






[jira] [Updated] (LUCENE-6917) Move NumericField out of core to backwards-codecs

2015-12-08 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6917:
---
Attachment: LUCENE-6917-broken-javadocs.patch

Thanks [~thetaphi], I did exactly that; however, the javadocs still point 
to {{core}} instead of the module - I suspect because the same package name 
appears in both?

I'm attaching a tiny patch based on trunk showing the issue.  If you apply this 
patch, which just adds an @link to {{GeoHashUtils}} from {{queryparser}} (whose 
{{build.xml}} already has the javadocs link as you described) to {{sandbox}} 
then cd to {{lucene/queryparser}} and run {{ant javadocs}} and then open 
{{lucene/build/docs/queryparser/org/apache/lucene/queryparser/classic/QueryParserBase.html}}
 and look for the {{GeoHashUtils}} link, you should see it (incorrectly) 
pointing to core, not sandbox:

{noformat}
 Here is the nocommit link: GeoHashUtils.
{noformat}

Am I doing something wrong?

> Move NumericField out of core to backwards-codecs
> -
>
> Key: LUCENE-6917
> URL: https://issues.apache.org/jira/browse/LUCENE-6917
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.0
>
> Attachments: LUCENE-6917-broken-javadocs.patch, LUCENE-6917.patch, 
> LUCENE-6917.patch
>
>
> DimensionalValues seems to be better across the board (indexing time, 
> indexing size, search-speed, search-time heap required) than NumericField, at 
> least in my testing so far.
> I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, 
> {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename 
> with {{Legacy}} prefix?






[jira] [Comment Edited] (SOLR-8373) KerberosPlugin: Using multiple nodes on same machine leads clients to fetch TGT for every request

2015-12-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046568#comment-15046568
 ] 

Noble Paul edited comment on SOLR-8373 at 12/8/15 10:16 AM:


Smaller patch eliminating the backward-incompatible change to the {{init()}} method.


was (Author: noble.paul):
smaller patch eliminating the change to the {{init()}}

> KerberosPlugin: Using multiple nodes on same machine leads clients to fetch 
> TGT for every request
> -
>
> Key: SOLR-8373
> URL: https://issues.apache.org/jira/browse/SOLR-8373
> Project: Solr
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-8373.patch, SOLR-8373.patch, SOLR-8373.patch, 
> SOLR-8373.patch, SOLR-8373.patch
>
>
> Kerberized Solr nodes accept negotiate/SPNEGO/Kerberos requests and process 
> them. They also pass back to the client a cookie called "hadoop.auth" (which 
> is currently unused, but will eventually be used for delegation tokens).
> If two or more nodes are on the same machine, they all send out cookies that 
> have the same domain (hostname) and the same path, but different cookie 
> values.
> Upon receipt at the client, if a cookie is rejected (which in this case it 
> will be), the client gets a new TGT from the KDC. This causes heavy traffic 
> at the KDC, plus intermittent "Request is a replay" errors (which indicate a 
> race condition at the KDC while handing out TGTs for the same principal). I 
> think a (well-configured) ticket cache is a potential solution, but having 
> cookies rejected is bad enough.
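The collision described above comes down to cookie identity. A minimal sketch (plain Python, not Solr or Hadoop code; names are hypothetical) of a client jar that keys cookies by (name, domain, path), showing how two nodes on one host keep clobbering each other's "hadoop.auth" cookie and force a re-authentication on every request:

```python
# Minimal model of a client cookie jar: cookies are keyed by
# (name, domain, path), so two Solr nodes on the same host setting
# "hadoop.auth" with different values keep overwriting each other.
# Every request to the "other" node then presents a stale cookie,
# gets rejected, and forces a fresh Kerberos round trip to the KDC.

class CookieJar:
    def __init__(self):
        self.cookies = {}
        self.tgt_fetches = 0  # stand-in for "client went back to the KDC"

    def set_cookie(self, name, domain, path, value):
        self.cookies[(name, domain, path)] = value

    def request(self, domain, path, server_expected_value):
        sent = self.cookies.get(("hadoop.auth", domain, path))
        if sent != server_expected_value:       # server rejects the cookie
            self.tgt_fetches += 1               # client re-authenticates
        # server sets its own cookie on the response
        self.set_cookie("hadoop.auth", domain, path, server_expected_value)

jar = CookieJar()
# Two nodes on the same host, same cookie domain and path, different values:
for _ in range(5):
    jar.request("host1", "/", "node-a-token")   # node A
    jar.request("host1", "/", "node-b-token")   # node B
print(jar.tgt_fetches)  # -> 10: every single request re-authenticated
```

With one node per host (or distinct cookie paths per node), the same loop would re-authenticate only on the first request to each node.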






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15145 - Failure!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15145/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

14 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.admin.CoreAdminHandlerTest

Error Message:
14 threads leaked from SUITE scope at 
org.apache.solr.handler.admin.CoreAdminHandlerTest: 1) Thread[id=2171, 
name=qtp38807629-2171, state=RUNNABLE, group=TGRP-CoreAdminHandlerTest] 
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:232)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:180)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:162)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:101)
 at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:136)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=2173, 
name=qtp38807629-2173, state=RUNNABLE, group=TGRP-CoreAdminHandlerTest] 
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:232)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:180)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:162)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=2183, 
name=searcherExecutor-993-thread-1, state=WAITING, 
group=TGRP-CoreAdminHandlerTest] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=2185, 
name=Scheduler-1997338177, state=TIMED_WAITING, 
group=TGRP-CoreAdminHandlerTest] at sun.misc.Unsafe.park(Native Method) 
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=2168, 
name=qtp38807629-2168, state=TIMED_WAITING, group=TGRP-CoreAdminHandlerTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 

[jira] [Updated] (SOLR-8305) can (ExternalFileField|LatLonType).getValueSource's QParser use be replaced?

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8305:
--
Attachment: SOLR-8305.patch

updated/reduced patch to account for SOLR-8360

> can (ExternalFileField|LatLonType).getValueSource's QParser use be replaced?
> 
>
> Key: SOLR-8305
> URL: https://issues.apache.org/jira/browse/SOLR-8305
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8305.patch, SOLR-8305.patch
>
>
> Question, with a patch against trunk for illustration only at this point. 
> Context: elsewhere in these classes the {{IndexSchema}} member is used 
> instead of the {{QParser}} method parameter. Would the resulting data 
> directory or schema not be the same when using the member instead of the 
> parameter?






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Anshum Gupta
+1

Smoke tester is happy on both Java7 and Java8.

SUCCESS! [1:13:28.357352]

On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:

> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>
> I will let this vote run until midnight (GMT) on Wednesday 9 December.
>
> Please cast your votes! (and let me know, politely :-) if I missed
> anything)
>
> Upayavira
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


-- 
Anshum Gupta


[jira] [Created] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-08 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8388:
-

 Summary: TestSolrQueryResponse (factor out, then extend)
 Key: SOLR-8388
 URL: https://issues.apache.org/jira/browse/SOLR-8388
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
 Attachments: SOLR-8388-part1of2.patch

factor out 
{{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} from 
{{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} and then 
extend it






[jira] [Updated] (SOLR-8388) TestSolrQueryResponse (factor out, then extend)

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8388:
--
Attachment: SOLR-8388-part1of2.patch

> TestSolrQueryResponse (factor out, then extend)
> ---
>
> Key: SOLR-8388
> URL: https://issues.apache.org/jira/browse/SOLR-8388
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-8388-part1of2.patch
>
>
> factor out 
> {{solr/core/src/test/org/apache/solr/response/TestSolrQueryResponse.java}} 
> from {{solr/core/src/test/org/apache/solr/servlet/ResponseHeaderTest.java}} 
> and then extend it






[jira] [Updated] (SOLR-8305) can LatLonType.getValueSource's QParser use be replaced?

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8305:
--
Summary: can LatLonType.getValueSource's QParser use be replaced?  (was: 
can (ExternalFileField|LatLonType).getValueSource's QParser use be replaced?)

> can LatLonType.getValueSource's QParser use be replaced?
> 
>
> Key: SOLR-8305
> URL: https://issues.apache.org/jira/browse/SOLR-8305
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8305.patch, SOLR-8305.patch
>
>
> Question, with a patch against trunk for illustration only at this point. 
> Context: elsewhere in these classes the {{IndexSchema}} member is used 
> instead of the {{QParser}} method parameter. Would the resulting data 
> directory or schema not be the same when using the member instead of the 
> parameter?






[jira] [Updated] (SOLR-8305) can LatLonType.getValueSource's QParser use be replaced?

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8305:
--
Description: Question with patch against trunk. Context is that elsewhere 
in the classes the {{IndexSchema}} member is used instead of the {{QParser}} 
method parameter and would the resulting schema not be the same when using the 
member instead of the parameter? Motivation here is potential removal of the 
QParser argument from the {{getValueSource}} signature.  (was: Question with 
patch against trunk for illustration only at this point. Context is that 
elsewhere in the classes the {{IndexSchema}} member is used instead of the 
{{QParser}} method parameter and would the resulting data directory or schema 
not be the same when using the member instead of the parameter?)

> can LatLonType.getValueSource's QParser use be replaced?
> 
>
> Key: SOLR-8305
> URL: https://issues.apache.org/jira/browse/SOLR-8305
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8305.patch, SOLR-8305.patch
>
>
> Question, with a patch against trunk. Context: elsewhere in these classes 
> the {{IndexSchema}} member is used instead of the {{QParser}} method 
> parameter. Would the resulting schema not be the same when using the member 
> instead of the parameter? The motivation here is potential removal of the 
> QParser argument from the {{getValueSource}} signature.






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Michael McCandless
OK I committed a fix ... it was just a test bug ... I don't think it
requires a respin.

Mike McCandless

http://blog.mikemccandless.com


On Tue, Dec 8, 2015 at 3:37 AM, Adrien Grand  wrote:
> I hit a test failure when running the smoke tester that reproduces on the
> 5.4 source tree:
>
> 09:32 jpountz@zreptik4:~/src/lucene_solr_5_4/lucene/core $
> JAVA_HOME=/home/jpountz/local/jdk1.7.0_55/ ant test
> -Dtestcase=TestIndexWriterExceptions -Dtests.method=testTooManyFileException
> -Dtests.seed=855A4CA4CD3CF9A3 -Dtests.slow=true -Dtests.locale=ar_SY
> -Dtests.timezone=Australia/ACT -Dtests.asserts=true
> -Dtests.file.encoding=UTF-8
>
> [junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions [junit4]
> 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterExceptions
> -Dtests.method=testTooManyFileException -Dtests.seed=855A4CA4CD3CF9A3
> -Dtests.slow=true -Dtests.locale=ar_SY -Dtests.timezone=Australia/ACT
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.21s |
> TestIndexWriterExceptions.testTooManyFileException <<< [junit4] > Throwable
> #1: java.lang.AssertionError: file _0.si does not exist; files=[segments_1,
> _0.cfs, _0.si, _0.cfe] [junit4] > at
> __randomizedtesting.SeedInfo.seed([855A4CA4CD3CF9A3:DAC89DD242954BF]:0)
> [junit4] > at
> org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4327)
> [junit4] > at
> org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:955) [junit4] >
> at
> org.apache.lucene.index.TestIndexWriterExceptions.testTooManyFileException(TestIndexWriterExceptions.java:1852)
> [junit4] > at java.lang.Thread.run(Thread.java:745) [junit4] 2> NOTE: test
> params are: codec=Asserting(Lucene54):
> {foo=PostingsFormat(name=LuceneFixedGap)}, docValues:{},
> sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {}, locale=ar_SY,
> timezone=Australia/ACT [junit4] 2> NOTE: Linux 3.13.0-71-generic
> amd64/Oracle Corporation 1.7.0_55
> (64-bit)/cpus=8,threads=1,free=187645328,total=251658240 [junit4] 2> NOTE:
> All tests run in this JVM: [TestIndexWriterExceptions] [junit4] Completed
> [1/1] in 0.46s, 1 test, 1 failure <<< FAILURES!
>
>
> On Tue, Dec 8, 2015 at 09:14, Uwe Schindler wrote:
>>
>> Hi,
>>
>> You must use the smoke tester from the 5.4 branch (not branch_5x, not
>> trunk). You may have used the one from trunk that checks (of course) if the
>> files were compiled with Java 8, which is wrong.
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>> H.-H.-Meier-Allee 63, D-28213 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>> > -Original Message-
>> > From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
>> > Sent: Tuesday, December 08, 2015 9:04 AM
>> > To: dev@lucene.apache.org
>> > Subject: Re: [VOTE] Release Lucene/Solr 5.4.0-RC1
>> >
>> > Hi,
>> >
>> > python3 -u dev-tools/scripts/smokeTestRelease.py
>> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
>> > rev1718046 gives me following exception:
>> >
>> > RuntimeError: JAR file
>> > "/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-
>> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar" is missing "X-
>> > Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
>> >
> >> > Am I doing something wrong?
>> >
>> > Thanks,
>> > Ahmet
>> >
>> >
>> >
>> > On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com"
>> >  wrote:
>> >
>> >
>> >
>> > +1 for release. (tested with Java 7)
>> > SUCCESS! [0:56:31.943245]
>> >
>> >
>> > On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:
>> >
>> > +1
>> > >
>> > >Docs, javadocs, and changes look good.
>> > >
>> > >Smoke tester was happy with Java7 and Java8:
>> > >
>> > >SUCCESS! [1:53:58.550314]
>> > >
>> > >Steve
>> > >
>> > >> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
>> > >>
>> > >> Yes, Shalin, you are right. My fix was still required, but I clearly
>> > >> manually entered the SVN commit command wrong. Seeing as it does not
>> > >> impact upon the contents of the files, I have executed an SVN mv
>> > >> command, rerun the smoke test with the below, which worked:
>> > >>
>> > >> python3 -u dev-tools/scripts/smokeTestRelease.py
>> > >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
>> > rev1718046
>> > >>
>> > >> Please, folks, use the above to run the smoke test for this release.
>> > >>
>> > >> Upayavira
>> > >>
>> > >> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
>> > >>> Hi Upayavira,
>> > >>>
>> > >>> The svn revision in the URL is wrong. It should be 1718046 but it is
>> > >>> 178046 which makes the smoke tester fail with the following message:
>> > >>>
>> > >>> RuntimeError: JAR file
>> > >>> "/tmp/smoke_lucene_5.4.0_178046_1/unpack/lucene-
>> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar"
>> > >>> is missing "Implementation-Version: 5.4.0 178046 " inside its
>> > >>> META-INF/MANIFEST.MF (wrong svn revision?)
>> > >>>
>> > >>> I 

[jira] [Commented] (LUCENE-6917) Move NumericField out of core to backwards-codecs

2015-12-08 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046723#comment-15046723
 ] 

Uwe Schindler commented on LUCENE-6917:
---

I think you meant that links from the "core" module that point to a legacy 
class won't work. That is something different! I have no idea!

> Move NumericField out of core to backwards-codecs
> -
>
> Key: LUCENE-6917
> URL: https://issues.apache.org/jira/browse/LUCENE-6917
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.0
>
> Attachments: LUCENE-6917-broken-javadocs.patch, LUCENE-6917.patch, 
> LUCENE-6917.patch
>
>
> DimensionalValues seems to be better across the board (indexing time, 
> indexing size, search-speed, search-time heap required) than NumericField, at 
> least in my testing so far.
> I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, 
> {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename 
> with {{Legacy}} prefix?






[jira] [Commented] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-08 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046790#comment-15046790
 ] 

Markus Jelsma commented on SOLR-4280:
-

Hello [~jdyer] - this sounds very good. However, a useful addition to this 
feature would be the option to also choose which filter the fraction operates 
on. I have seen some strange results when drilling deeper using more and more 
restrictive filters.

maxResultsForSuggest was originally meant for use in a multi-tenant index. 
The current maxResultsForSuggest is not suitable when some clients have a 
very large index and others a small one.

How about an optional spellcheck.maxResultsForSuggest.fq=field:value? If the 
user specifies this, the patch won't need to find the most restrictive filter.

M.

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number, but ideally it should 
> be able to take a ratio and calculate the threshold against the number of 
> results the filter queries return.
> At least in our case this would certainly add a lot of value. More than 99% 
> of our end-users search within one or more filters, of which one is always 
> unique. The number of documents for each of those unique filters varies 
> significantly, ranging from 300 to 3,000,000 documents in which they search. 
> The maxResultsForSuggest is set to a reasonably low value, so it kind of 
> works, but it sometimes leads to undesired suggestions for a large 
> subcorpus that has more misspellings.
> Spun off from SOLR-4278.
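A hedged sketch of the ratio idea above (illustrative arithmetic only, not Solr's API; function and parameter names are hypothetical): the suggestion threshold scales with the filtered corpus size instead of being a single fixed number:

```python
# Hypothetical illustration of the proposal: instead of a fixed
# maxResultsForSuggest, derive the threshold as a ratio of the number of
# documents matched by the filter query, so a tenant with 300 docs and a
# tenant with 3,000,000 docs get proportionate spellcheck behavior.

def should_suggest(hits, filter_doc_count, ratio):
    """Offer corrections only when the query matched 'few' results,
    where 'few' scales with the tenant's filtered corpus size."""
    threshold = filter_doc_count * ratio
    return hits <= threshold

# Small tenant: 300 docs, ratio 1% -> threshold 3; 2 hits triggers suggestions.
print(should_suggest(hits=2, filter_doc_count=300, ratio=0.01))            # True
# Large tenant: 3,000,000 docs -> threshold 30,000; 40,000 hits does not.
print(should_suggest(hits=40000, filter_doc_count=3_000_000, ratio=0.01))  # False
```

With a single fixed threshold, one of these two tenants would inevitably be mis-served, which is the problem the quoted description raises.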






[jira] [Resolved] (SOLR-8360) ExternalFileField.getValueSource uses req.datadir but this.schema

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8360.
---
   Resolution: Fixed
Fix Version/s: 5.5
   Trunk

> ExternalFileField.getValueSource uses req.datadir but this.schema
> -
>
> Key: SOLR-8360
> URL: https://issues.apache.org/jira/browse/SOLR-8360
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8360-option1.patch, SOLR-8360-option2.patch
>
>
> {{ExternalFileField.getValueSource(SchemaField field, QParser parser)}} has 
> available:
> * datadir
> ** parser.getReq().getCore().getDataDir()
> ** this.schema.getResourceLoader().getDataDir()
> * schema
> ** parser.getReq().getSchema()
> ** this.schema
> {{ExternalFileField.getValueSource}} uses 
> {{parser.getReq().getCore().getDataDir()}} explicitly but {{this.schema}} 
> implicitly. Should it use {{parser.getReq().getSchema()}} instead (Option 1 
> patch)? Or, if in practice req.datadir and this.datadir are always the same, 
> could we stop using the parser argument (Option 2 patch, 1 line)?
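The member-vs-parameter question can be illustrated with a toy model (plain Python, not Solr classes; names are hypothetical): if the schema bound at construction time and the schema carried by the current request can ever differ, the two options return different things:

```python
# Toy model of the question above: the field type holds a schema reference
# from construction time (this.schema), while the QParser carries the schema
# of the current request. If the two can ever differ (e.g. after a schema
# reload), mixing them inside one method is inconsistent.

class FieldType:
    def __init__(self, schema):
        self.schema = schema                 # bound at construction time

    def get_value_source_option1(self, parser):
        return parser.req_schema             # always the request's schema

    def get_value_source_option2(self, parser):
        return self.schema                   # always the member

class Parser:
    def __init__(self, req_schema):
        self.req_schema = req_schema

old_schema, new_schema = "schema-v1", "schema-v2"
ft = FieldType(old_schema)                   # field type created against v1
p = Parser(new_schema)                       # request arrives after a reload
print(ft.get_value_source_option1(p))        # -> schema-v2
print(ft.get_value_source_option2(p))        # -> schema-v1
```

If the member and the request value are provably always the same, Option 2 (dropping the parser argument) is the simpler choice; otherwise Option 1 keeps the method consistent with the request.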






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Adrien Grand
Thanks Mike!

On Tue, Dec 8, 2015 at 11:00, Michael McCandless wrote:

> OK I committed a fix ... it was just a test bug ... I don't think it
> requires a respin.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Tue, Dec 8, 2015 at 3:37 AM, Adrien Grand  wrote:
> > I hit a test failure when running the smoke tester that reproduces on the
> > 5.4 source tree:
> >
> > 09:32 jpountz@zreptik4:~/src/lucene_solr_5_4/lucene/core $
> > JAVA_HOME=/home/jpountz/local/jdk1.7.0_55/ ant test
> > -Dtestcase=TestIndexWriterExceptions
> -Dtests.method=testTooManyFileException
> > -Dtests.seed=855A4CA4CD3CF9A3 -Dtests.slow=true -Dtests.locale=ar_SY
> > -Dtests.timezone=Australia/ACT -Dtests.asserts=true
> > -Dtests.file.encoding=UTF-8
> >
> > [junit4] Suite: org.apache.lucene.index.TestIndexWriterExceptions
> [junit4]
> > 2> NOTE: reproduce with: ant test -Dtestcase=TestIndexWriterExceptions
> > -Dtests.method=testTooManyFileException -Dtests.seed=855A4CA4CD3CF9A3
> > -Dtests.slow=true -Dtests.locale=ar_SY -Dtests.timezone=Australia/ACT
> > -Dtests.asserts=true -Dtests.file.encoding=UTF-8 [junit4] FAILURE 0.21s |
> > TestIndexWriterExceptions.testTooManyFileException <<< [junit4] >
> Throwable
> > #1: java.lang.AssertionError: file _0.si does not exist;
> files=[segments_1,
> > _0.cfs, _0.si, _0.cfe] [junit4] > at
> > __randomizedtesting.SeedInfo.seed([855A4CA4CD3CF9A3:DAC89DD242954BF]:0)
> > [junit4] > at
> > org.apache.lucene.index.IndexWriter.filesExist(IndexWriter.java:4327)
> > [junit4] > at
> > org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:955)
> [junit4] >
> > at
> >
> org.apache.lucene.index.TestIndexWriterExceptions.testTooManyFileException(TestIndexWriterExceptions.java:1852)
> > [junit4] > at java.lang.Thread.run(Thread.java:745) [junit4] 2> NOTE:
> test
> > params are: codec=Asserting(Lucene54):
> > {foo=PostingsFormat(name=LuceneFixedGap)}, docValues:{},
> > sim=RandomSimilarityProvider(queryNorm=false,coord=crazy): {},
> locale=ar_SY,
> > timezone=Australia/ACT [junit4] 2> NOTE: Linux 3.13.0-71-generic
> > amd64/Oracle Corporation 1.7.0_55
> > (64-bit)/cpus=8,threads=1,free=187645328,total=251658240 [junit4] 2>
> NOTE:
> > All tests run in this JVM: [TestIndexWriterExceptions] [junit4] Completed
> > [1/1] in 0.46s, 1 test, 1 failure <<< FAILURES!
> >
> >
> > On Tue, Dec 8, 2015 at 09:14, Uwe Schindler wrote:
> >>
> >> Hi,
> >>
> >> You must use the smoke tester from the 5.4 branch (not branch_5x, not
> >> trunk). You may have used the one from trunk that checks (of course) if
> the
> >> files were compiled with Java 8, which is wrong.
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> H.-H.-Meier-Allee 63, D-28213 Bremen
> >> http://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >> > -Original Message-
> >> > From: Ahmet Arslan [mailto:iori...@yahoo.com.INVALID]
> >> > Sent: Tuesday, December 08, 2015 9:04 AM
> >> > To: dev@lucene.apache.org
> >> > Subject: Re: [VOTE] Release Lucene/Solr 5.4.0-RC1
> >> >
> >> > Hi,
> >> >
> >> > python3 -u dev-tools/scripts/smokeTestRelease.py
> >> > https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> >> > rev1718046 gives me following exception:
> >> >
> >> > RuntimeError: JAR file
> >> > "/private/tmp/smoke_lucene_5.4.0_1718046_1/unpack/lucene-
> >> > 5.4.0/analysis/common/lucene-analyzers-common-5.4.0.jar" is missing
> "X-
> >> > Compile-Source-JDK: 1.8" inside its META-INF/MANIFEST.MF
> >> >
> >> > Am I doing something wrong?
> >> >
> >> > Thanks,
> >> > Ahmet
> >> >
> >> >
> >> >
> >> > On Tuesday, December 8, 2015 3:15 AM, "david.w.smi...@gmail.com"
> >> >  wrote:
> >> >
> >> >
> >> >
> >> > +1 for release. (tested with Java 7)
> >> > SUCCESS! [0:56:31.943245]
> >> >
> >> >
> >> > On Mon, Dec 7, 2015 at 8:05 PM Steve Rowe  wrote:
> >> >
> >> > +1
> >> > >
> >> > >Docs, javadocs, and changes look good.
> >> > >
> >> > >Smoke tester was happy with Java7 and Java8:
> >> > >
> >> > >SUCCESS! [1:53:58.550314]
> >> > >
> >> > >Steve
> >> > >
> >> > >> On Dec 7, 2015, at 5:31 AM, Upayavira  wrote:
> >> > >>
> >> > >> Yes, Shalin, you are right. My fix was still required, but I
> clearly
> >> > >> manually entered the SVN commit command wrong. Seeing as it does
> not
> >> > >> impact upon the contents of the files, I have executed an SVN mv
> >> > >> command, rerun the smoke test with the below, which worked:
> >> > >>
> >> > >> python3 -u dev-tools/scripts/smokeTestRelease.py
> >> > >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-
> >> > rev1718046
> >> > >>
> >> > >> Please, folks, use the above to run the smoke test for this
> release.
> >> > >>
> >> > >> Upayavira
> >> > >>
> >> > >> On Mon, Dec 7, 2015, at 04:00 AM, Shalin Shekhar Mangar wrote:
> >> > >>> Hi Upayavira,
> >> > >>>
> >> > >>> The svn revision in the URL is wrong. It should be 1718046 but it
> is
> >> > 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5455 - Failure!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5455/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20151208073351547, index.20151208073352619, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20151208073351547, index.20151208073352619, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([3503465226C6BC93:EEA8469423EED520]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:820)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:787)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-8360) ExternalFileField.getValueSource uses req.datadir but this.schema

2015-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046806#comment-15046806
 ] 

ASF subversion and git services commented on SOLR-8360:
---

Commit 1718580 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1718580 ]

SOLR-8360: simplify ExternalFileField.getValueSource implementation (merge in 
revision 1718562 from trunk)

> ExternalFileField.getValueSource uses req.datadir but this.schema
> -
>
> Key: SOLR-8360
> URL: https://issues.apache.org/jira/browse/SOLR-8360
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-8360-option1.patch, SOLR-8360-option2.patch
>
>
> {{ExternalFileField.getValueSource(SchemaField field, QParser parser)}} has 
> available:
> * datadir
> ** parser.getReq().getCore().getDataDir()
> ** this.schema.getResourceLoader().getDataDir()
> * schema
> ** parser.getReq().getSchema()
> ** this.schema
> {{ExternalFileField.getValueSource}} uses 
> {{parser.getReq().getCore().getDataDir()}} explicitly but implicitly 
> {{this.schema}} - should it use {{parser.getReq().getSchema()}} instead 
> (Option 1 patch)? Or if in practice actually req.datadir and this.datadir are 
> always the same could we stop using the parser argument (Option 2 patch (1 
> line))?
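To make the two options concrete, here is a minimal, self-contained sketch of the Option 1 direction: resolve both dataDir and schema from the parser's request so they always agree. The class names below are simplified stand-ins, not the real Solr types, and this is an illustration of the idea rather than either attached patch.

```java
// Minimal stand-ins for the real Solr types; these are NOT the actual
// IndexSchema/SolrCore/SolrQueryRequest/QParser classes, just enough
// structure to show where Option 1 reads its values from.
class Schema { }

class Core {
    String getDataDir() { return "/var/solr/data"; }
}

class Request {
    private final Core core = new Core();
    private final Schema schema = new Schema();
    Core getCore() { return core; }
    Schema getSchema() { return schema; }
}

class Parser {
    private final Request req = new Request();
    Request getReq() { return req; }
}

public class Option1Sketch {
    // Option 1: take both dataDir and schema from the parser's request,
    // instead of mixing parser.getReq()... for one value with the
    // field's captured this.schema for the other.
    static String describe(Parser parser) {
        String dataDir = parser.getReq().getCore().getDataDir();
        Schema schema = parser.getReq().getSchema(); // not this.schema
        return dataDir + " with " + schema.getClass().getSimpleName();
    }

    public static void main(String[] args) {
        System.out.println(describe(new Parser()));
    }
}
```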



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8383) SolrCore.java container initialCapacity tweaks

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8383:
--
Attachment: SOLR-8383.patch

> SolrCore.java container initialCapacity tweaks
> --
>
> Key: SOLR-8383
> URL: https://issues.apache.org/jira/browse/SOLR-8383
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8383.patch, SOLR-8383.patch
>
>
> patch against trunk to follow






[jira] [Updated] (SOLR-8383) SolrCore.java + QParserPlugin.java container initialCapacity tweaks

2015-12-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8383:
--
Summary: SolrCore.java + QParserPlugin.java container initialCapacity 
tweaks  (was: SolrCore.java container initialCapacity tweaks)

> SolrCore.java + QParserPlugin.java container initialCapacity tweaks
> ---
>
> Key: SOLR-8383
> URL: https://issues.apache.org/jira/browse/SOLR-8383
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8383.patch, SOLR-8383.patch
>
>
> patch against trunk to follow






[jira] [Commented] (SOLR-8383) SolrCore.java container initialCapacity tweaks

2015-12-08 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046685#comment-15046685
 ] 

Christine Poerschke commented on SOLR-8383:
---

Thanks for your input, Mike. Based on usage, I would opt for the two-argument 
constructor with loadFactor 1. Attaching a revised patch which also corrects a 
typo and makes a similar change in QParserPlugin.
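The two-argument constructor with loadFactor 1 can be sketched as follows; this is a minimal illustration of the HashMap sizing trade-off, not the actual SOLR-8383 patch (the class and method names are invented):

```java
import java.util.HashMap;
import java.util.Map;

public class CapacityDemo {
    // With the default load factor (0.75f), a HashMap built with
    // initialCapacity == expected size will still resize once it passes
    // capacity * 0.75 entries. Passing loadFactor 1.0f lets the map hold
    // at least 'expected' entries before any resize, at the cost of more
    // hash collisions as the table fills up.
    static Map<String, Integer> sized(int expected) {
        return new HashMap<>(expected, 1.0f);
    }

    public static void main(String[] args) {
        Map<String, Integer> m = sized(4);
        for (int i = 0; i < 4; i++) {
            m.put("k" + i, i);
        }
        System.out.println(m.size());
    }
}
```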

> SolrCore.java container initialCapacity tweaks
> --
>
> Key: SOLR-8383
> URL: https://issues.apache.org/jira/browse/SOLR-8383
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8383.patch, SOLR-8383.patch
>
>
> patch against trunk to follow






[jira] [Commented] (SOLR-8368) A SolrCore needs to replay its tlog before the leader election process.

2015-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046801#comment-15046801
 ] 

Mark Miller commented on SOLR-8368:
---

bq. By the way, this would really going to complicate / upset my shared index / 
tlog on a shared filesystem solution

Actually, hopefully I was overthinking that and it won't.

> A SolrCore needs to replay its tlog before the leader election process.
> 
>
> Key: SOLR-8368
> URL: https://issues.apache.org/jira/browse/SOLR-8368
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> If we replay it afterwards, as we do now, the correct leader may not be able 
> to become leader.






[jira] [Commented] (SOLR-8360) ExternalFileField.getValueSource uses req.datadir but this.schema

2015-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046753#comment-15046753
 ] 

ASF subversion and git services commented on SOLR-8360:
---

Commit 1718562 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1718562 ]

SOLR-8360: simplify ExternalFileField.getValueSource implementation

> ExternalFileField.getValueSource uses req.datadir but this.schema
> -
>
> Key: SOLR-8360
> URL: https://issues.apache.org/jira/browse/SOLR-8360
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
> Attachments: SOLR-8360-option1.patch, SOLR-8360-option2.patch
>
>
> {{ExternalFileField.getValueSource(SchemaField field, QParser parser)}} has 
> available:
> * datadir
> ** parser.getReq().getCore().getDataDir()
> ** this.schema.getResourceLoader().getDataDir()
> * schema
> ** parser.getReq().getSchema()
> ** this.schema
> {{ExternalFileField.getValueSource}} uses 
> {{parser.getReq().getCore().getDataDir()}} explicitly but implicitly 
> {{this.schema}} - should it use {{parser.getReq().getSchema()}} instead 
> (Option 1 patch)? Or if in practice actually req.datadir and this.datadir are 
> always the same could we stop using the parser argument (Option 2 patch (1 
> line))?






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Noble Paul
SUCCESS! [1:55:24.152211]

java 7

On Tue, Dec 8, 2015 at 4:20 PM, Anshum Gupta  wrote:
> +1
>
> Smoke tester is happy on both Java7 and Java8.
>
> SUCCESS! [1:13:28.357352]
>
> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
>>
>> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>
>> I will let this vote run until midnight (GMT) on Wednesday 9 December.
>>
>> Please cast your votes! (and let me know, politely :-) if I missed
>> anything)
>>
>> Upayavira
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Anshum Gupta



-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.4 - Build # 12 - Still Failing

2015-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.4/12/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=59605, name=collection1, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=59605, name=collection1, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56037/_pm: collection already exists: 
awholynewstresscollection_collection1_0
at __randomizedtesting.SeedInfo.seed([EB19EDDE5315287C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1574)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:881)




Build Log:
[...truncated 10815 lines...]
   [junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.4/solr/build/solr-core/test/J0/temp/solr.cloud.CollectionsAPIDistributedZkTest_EB19EDDE5315287C-001/init-core-data-001
   [junit4]   2> 1696265 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[EB19EDDE5315287C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 1696265 INFO  
(SUITE-CollectionsAPIDistributedZkTest-seed#[EB19EDDE5315287C]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /_pm/
   [junit4]   2> 1696271 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 1696271 INFO  (Thread-54234) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1696271 INFO  (Thread-54234) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 1696371 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.ZkTestServer start zk server on port:43566
   [junit4]   2> 1696371 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1696372 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1696377 INFO  (zkCallback-1072-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@78b9e56b 
name:ZooKeeperConnection Watcher:127.0.0.1:43566 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1696377 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 1696378 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 1696378 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 1696382 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 1696384 INFO  
(TEST-CollectionsAPIDistributedZkTest.test-seed#[EB19EDDE5315287C]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 1696385 INFO  (zkCallback-1073-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@637a799f 
name:ZooKeeperConnection Watcher:127.0.0.1:43566/solr got event WatchedEvent 
state:SyncConnected type:None path:null 

Re: VOTE: RC0 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Steve Rowe
There’s a table with a truncated right-hand column on page 221 (De-Duplication 
| Configuration Options | SignatureUpdateProcessorFactory properties). I think 
this is worthy of a respin.

I scanned through the rest of the ref guide and couldn’t find any more examples 
of right-truncated tables.

I’ll work on a fix.

Steve


> On Dec 7, 2015, at 2:38 PM, Cassandra Targett  wrote:
> 
> Sorry, bad link in my earlier mail:
> 
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC0/
> 
> On Mon, Dec 7, 2015 at 1:23 PM, Cassandra Targett  wrote:
> Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
> 
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-X.Y-RCZ
> 
> $ cat apache-solr-ref-guide-5.4-RC0/apache-solr-ref-guide-5.4.pdf.sha1 
> 
> 56413817546a1f7acf2493f24d96bab8071659ef  apache-solr-ref-guide-5.4.pdf
> 
> 
> 
> +1 from me.
> 
> Cassandra
> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Shalin Shekhar Mangar
+1

SUCCESS! [1:03:00.784525]

On Tue, Dec 8, 2015 at 4:20 PM, Anshum Gupta  wrote:
> +1
>
> Smoke tester is happy on both Java7 and Java8.
>
> SUCCESS! [1:13:28.357352]
>
> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
>>
>> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>
>> I will let this vote run until midnight (GMT) on Wednesday 9 December.
>>
>> Please cast your votes! (and let me know, politely :-) if I missed
>> anything)
>>
>> Upayavira
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Anshum Gupta



-- 
Regards,
Shalin Shekhar Mangar.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 5325 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5325/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:51481/awholynewcollection_0: non ok 
status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:51481/awholynewcollection_0: non ok status: 
500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([5F7925D876205D15:D72D1A02D8DC30ED]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:508)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1040 - Still Failing

2015-12-08 Thread david.w.smi...@gmail.com
Mikhail,
In the future, particularly when committing code that uses Java 8 features,
be sure to compile and run the tests on the 5x branch after back-porting but
before committing. From time to time compilation fails on things like the
easy-to-forget cases of not using ‘final’.
~ David

On Tue, Dec 8, 2015 at 2:21 AM Mikhail Khludnev 
wrote:

> It's mine. Looking into it.
>
> On Tue, Dec 8, 2015 at 9:35 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1040/
>>
>> All tests passed
>>
>> Build Log:
>> [...truncated 7450 lines...]
>> [javac] Compiling 6 source files to
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/join/classes/test
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestJoinUtil.java:954:
>> error: cannot find symbol
>> [javac] assert nextInt ==
>> Integer.parseUnsignedInt(uniqueRandomValue,16);
>> [javac]  ^
>> [javac]   symbol:   method parseUnsignedInt(String,int)
>> [javac]   location: class Integer
>> [javac]
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestJoinUtil.java:1206:
>> error: cannot find symbol
>> [javac] final int linkInt =
>> Integer.parseUnsignedInt(linkValue,16);
>> [javac]^
>> [javac]   symbol:   method parseUnsignedInt(String,int)
>> [javac]   location: class Integer
>> [javac] Note:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestBlockJoin.java
>> uses or overrides a deprecated API.
>> [javac] Note: Recompile with -Xlint:deprecation for details.
>> [javac] 2 errors
>>
>> BUILD FAILED
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:801:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:738:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:59:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build.xml:472:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:2252:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/module-build.xml:58:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/module-build.xml:55:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:818:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:832:
>> The following error occurred while executing this line:
>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1968:
>> Compile failed; see the compiler error output for details.
>>
>> Total time: 106 minutes 8 seconds
>> Build step 'Invoke Ant' marked build as failure
>> Archiving artifacts
>> Recording test results
>> Email was triggered for: Failure - Any
>> Sending email for trigger: Failure - Any
>>
>>
>>
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
> Principal Engineer,
> Grid Dynamics
>
> 
> 
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com
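The compile failure quoted above comes from Integer.parseUnsignedInt(String, int), which only exists on Java 8 and later, while the 5.x branch still builds with Java 7. As a hedged sketch (not the fix that was actually committed), a Java 7 compatible equivalent for hex values up to 0xFFFFFFFF can go through Long.parseLong and narrow:

```java
public class ParseUnsignedCompat {
    // Java 7 compatible replacement for Integer.parseUnsignedInt(s, 16):
    // parse as a long, range-check, then narrow. Values in the range
    // 0x80000000..0xFFFFFFFF narrow to negative ints, matching the
    // unsigned-int bit pattern that parseUnsignedInt would produce.
    static int parseUnsignedHex(String s) {
        long v = Long.parseLong(s, 16);
        if (v < 0 || v > 0xFFFFFFFFL) {
            throw new NumberFormatException("out of unsigned int range: " + s);
        }
        return (int) v;
    }

    public static void main(String[] args) {
        System.out.println(parseUnsignedHex("ffffffff")); // -1
    }
}
```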


Re: VOTE: RC0 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Cassandra Targett
OK, I agree - that's horrible. Let me know if you think you have it fixed,
or if you'd like me to also take a look.

I wonder why the line breaks in the PDF CSS aren't working :(

On Tue, Dec 8, 2015 at 7:52 AM, Steve Rowe  wrote:

> There’s a table with a truncated right-hand column on page 221
> (De-Duplication | Configuration Options | SignatureUpdateProcessorFactory
> properties).  I think this is worthy of respin.
>
> I scanned through the rest of the ref guide and couldn’t find any more
> examples of right-truncated tables.
>
> I’ll work on a fix.
>
> Steve
>
>
> > On Dec 7, 2015, at 2:38 PM, Cassandra Targett 
> wrote:
> >
> > Sorry, bad link in my earlier mail:
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC0/
> >
> > On Mon, Dec 7, 2015 at 1:23 PM, Cassandra Targett 
> wrote:
> > Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
> >
> >
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-X.Y-RCZ
> >
> > $ cat apache-solr-ref-guide-5.4-RC0/apache-solr-ref-guide-5.4.pdf.sha1
> >
> > 56413817546a1f7acf2493f24d96bab8071659ef  apache-solr-ref-guide-5.4.pdf
> >
> >
> >
> > +1 from me.
> >
> > Cassandra
> >
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: VOTE: RC0 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Cassandra Targett
Thanks Steve, I'll generate a new PDF to check it again and post the next
RC if it's all right.

On Tue, Dec 8, 2015 at 8:22 AM, Steve Rowe  wrote:

> I added chunking in a couple of places within the long terms in the affected
> table, and the single-page PDF export looks fixed now: no extra whitespace,
> wrapping is performed, all content is visible.
>
> Steve
>
> > On Dec 8, 2015, at 8:52 AM, Steve Rowe  wrote:
> >
> > There’s a table with a truncated right-hand column on page 221
> (De-Duplication | Configuration Options | SignatureUpdateProcessorFactory
> properties).  I think this is worthy of respin.
> >
> > I scanned through the rest of the ref guide and couldn’t find any more
> examples of right-truncated tables.
> >
> > I’ll work on a fix.
> >
> > Steve
> >
> >
> >> On Dec 7, 2015, at 2:38 PM, Cassandra Targett 
> wrote:
> >>
> >> Sorry, bad link in my earlier mail:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC0/
> >>
> >> On Mon, Dec 7, 2015 at 1:23 PM, Cassandra Targett 
> wrote:
> >> Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-X.Y-RCZ
> >>
> >> $ cat apache-solr-ref-guide-5.4-RC0/apache-solr-ref-guide-5.4.pdf.sha1
> >>
> >> 56413817546a1f7acf2493f24d96bab8071659ef  apache-solr-ref-guide-5.4.pdf
> >>
> >>
> >>
> >> +1 from me.
> >>
> >> Cassandra
> >>
> >>
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Created] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs

2015-12-08 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8389:
---

 Summary: Convert CDCR peer cluster and other configurations into 
collection properties modifiable via APIs
 Key: SOLR-8389
 URL: https://issues.apache.org/jira/browse/SOLR-8389
 Project: Solr
  Issue Type: Improvement
  Components: CDCR, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 6.0


CDCR configuration is kept inside solrconfig.xml which makes it difficult to 
add or change peer cluster configuration.

I propose to move all CDCR config to collection level properties in cluster 
state so that they can be modified using the existing modify collection API.






[jira] [Updated] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8367:
--
Attachment: SOLR-8367.patch

Patch with initial logic change.

We need to see if all the replicas are involved even if we are not the first 
replica in line for the leader election.

We also wait to request recoveries until we actually see we will be the leader.
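The two changes described above (run the participation check for any candidate, and defer recovery requests until leadership is certain) can be sketched roughly as follows. This is an illustrative control-flow outline only; the method and variable names are invented, not taken from the attached patch.

```java
import java.util.Arrays;
import java.util.List;

public class LirSketch {

    // Failsafe check: are all known replicas of the shard participating in
    // the election? This should run for every candidate, not only for the
    // first replica in line.
    static boolean allReplicasParticipating(List<String> participating,
                                            List<String> allReplicas) {
        return participating.containsAll(allReplicas);
    }

    static String tryBecomeLeader(List<String> participating,
                                  List<String> allReplicas,
                                  List<String> failedToSync) {
        if (!allReplicasParticipating(participating, allReplicas)) {
            return "wait"; // don't force leadership while replicas are missing
        }
        // Only now, once we know we will actually become leader, ask the
        // replicas that failed to sync with us to recover.
        for (String replica : failedToSync) {
            System.out.println("requesting recovery: " + replica);
        }
        return "leader";
    }

    public static void main(String[] args) {
        System.out.println(tryBecomeLeader(
                Arrays.asList("r1", "r2", "r3"),
                Arrays.asList("r1", "r2", "r3"),
                Arrays.asList("r3")));
    }
}
```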

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8367.patch
>
>
> For one, it currently only kicks in for the first attempted leader. If it is 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.






[jira] [Assigned] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-8367:
-

Assignee: Mark Miller

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8367.patch
>
>
> For one, it currently only kicks in for the first attempted leader. If it's 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.4-Linux (32bit/jdk-9-ea+95) - Build # 346 - Failure!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/346/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseG1GC -XX:-CompactStrings

3 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([934B7D8E934D08BD:CE977C1BB3B0860F]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at 
org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testIndexingWithTikaEntityProcessor(TestTikaEntityProcessor.java:112)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 

Re: VOTE: RC0 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Steve Rowe
I added chunking in a couple of places within the long terms in the 
affected table, and the single-page PDF export looks fixed now: no extra 
whitespace, wrapping is performed, and all content is visible.

Steve
 
> On Dec 8, 2015, at 8:52 AM, Steve Rowe  wrote:
> 
> There’s a table with a truncated right-hand column on page 221 
> (De-Duplication | Configuration Options | SignatureUpdateProcessorFactory 
> properties).  I think this is worthy of respin.
> 
> I scanned through the rest of the ref guide and couldn’t find any more 
> examples of right-truncated tables.
> 
> I’ll work on a fix.
> 
> Steve
> 
> 
>> On Dec 7, 2015, at 2:38 PM, Cassandra Targett  wrote:
>> 
>> Sorry, bad link in my earlier mail:
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC0/
>> 
>> On Mon, Dec 7, 2015 at 1:23 PM, Cassandra Targett  
>> wrote:
>> Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
>> 
>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-X.Y-RCZ
>> 
>> $ cat apache-solr-ref-guide-5.4-RC0/apache-solr-ref-guide-5.4.pdf.sha1 
>> 
>> 56413817546a1f7acf2493f24d96bab8071659ef  apache-solr-ref-guide-5.4.pdf
>> 
>> 
>> 
>> +1 from me.
>> 
>> Cassandra
>> 
>> 
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8391) Convert CdcrRequestHandler into an implicitly loaded request handler

2015-12-08 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8391:
---

 Summary: Convert CdcrRequestHandler into an implicitly loaded 
request handler
 Key: SOLR-8391
 URL: https://issues.apache.org/jira/browse/SOLR-8391
 Project: Solr
  Issue Type: Improvement
  Components: CDCR, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 6.0


Once SOLR-8389 is finished, we should be able to make CdcrRequestHandler an 
implicit request handler so that it is always loaded (without overhead if CDCR 
is not being used) and there is no need to add it explicitly to solrconfig.xml.
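The "always registered, no cost until used" behavior can be sketched with a lazy registry. This is illustrative only; Solr's actual implicit-handler plumbing differs:

```python
class LazyHandlerRegistry:
    """Map request paths to handler factories; a handler is only
    constructed the first time its path is actually hit."""
    def __init__(self):
        self._factories = {}
        self._instances = {}

    def register(self, path, factory):
        self._factories[path] = factory

    def handler(self, path):
        if path not in self._instances:
            self._instances[path] = self._factories[path]()
        return self._instances[path]

registry = LazyHandlerRegistry()
registry.register("/cdcr", lambda: {"name": "cdcr-handler"})
# Nothing is instantiated until the first request arrives for /cdcr.
```

Registering implicitly means every collection gets the path without touching solrconfig.xml, while collections that never use CDCR pay only for the registry entry.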



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8390) Capture transfer rate and latency in CDCR replication

2015-12-08 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-8390:
---

 Summary: Capture transfer rate and latency in CDCR replication
 Key: SOLR-8390
 URL: https://issues.apache.org/jira/browse/SOLR-8390
 Project: Solr
  Issue Type: Improvement
  Components: CDCR, SolrCloud
Reporter: Shalin Shekhar Mangar
 Fix For: Trunk, 6.0


We should capture transfer rate and latency in CDCR replication and make it 
available via an API and JMX.
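A minimal sketch of the kind of bookkeeping involved, assuming samples of (bytes sent, seconds taken) are recorded per replication batch. The names are illustrative, not Solr's API:

```python
def replication_stats(samples):
    """Aggregate (bytes, seconds) samples into an overall transfer
    rate (bytes/sec) and a mean per-batch latency (seconds)."""
    total_bytes = sum(b for b, _ in samples)
    total_secs = sum(s for _, s in samples)
    return {
        "rate_bytes_per_sec": total_bytes / total_secs,
        "mean_latency_sec": total_secs / len(samples),
    }

# 1.5 MB transferred over 3 seconds across two batches.
stats = replication_stats([(1_000_000, 2.0), (500_000, 1.0)])
```

Numbers like these could then be exposed both through a request-handler action and as JMX attributes, as the issue proposes.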



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4280) spellcheck.maxResultsForSuggest based on filter query results

2015-12-08 Thread James Dyer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046908#comment-15046908
 ] 

James Dyer commented on SOLR-4280:
--

[~markus17]  Are you able to take the updated patch and add the additional 
functionality you suggest?  I agree that the "most-restrictive" filter might 
not serve everyone's needs, but all-in-all this might be a nice feature for 
multi-tenant situations.

> spellcheck.maxResultsForSuggest based on filter query results
> -
>
> Key: SOLR-4280
> URL: https://issues.apache.org/jira/browse/SOLR-4280
> Project: Solr
>  Issue Type: Improvement
>  Components: spellchecker
>Reporter: Markus Jelsma
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-4280-trunk-1.patch, SOLR-4280-trunk.patch, 
> SOLR-4280-trunk.patch, SOLR-4280.patch
>
>
> spellcheck.maxResultsForSuggest takes a fixed number but ideally should be 
> able to take a ratio and calculate that against the maximum number of results 
> the filter queries return.
> At least in our case this would certainly add a lot of value. >99% of our 
> end-users search within one or more filters of which one is always unique. 
> The number of documents for each of those unique filters varies significantly 
> ranging from 300 to 3,000,000 documents in which they search. The 
> maxResultsForSuggest is set to a reasonable low value so it kind of works 
> fine but sometimes leads to undesired suggestions for a large subcorpus that 
> has more misspellings.
> Spun off from SOLR-4278.
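The ratio idea can be sketched as follows: treat a configured value below 1.0 as a fraction of the filtered document count rather than an absolute threshold. This is illustrative, not Solr's implementation:

```python
def effective_max_results(setting, filter_hit_count):
    """If `setting` < 1.0, interpret it as a ratio of the number of
    documents matching the filter queries; otherwise treat it as an
    absolute document count (the current behavior)."""
    if setting < 1.0:
        return int(setting * filter_hit_count)
    return int(setting)

# A 1% ratio scales with the subcorpus: 3 for a 300-doc tenant,
# 30000 for a 3,000,000-doc tenant, instead of one fixed value.
assert effective_max_results(0.01, 300) == 3
assert effective_max_results(0.01, 3_000_000) == 30000
assert effective_max_results(10, 3_000_000) == 10
```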



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6917) Move NumericField out of core to backwards-codecs

2015-12-08 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15046976#comment-15046976
 ] 

Michael McCandless commented on LUCENE-6917:


bq. I think you meant that links from "core" module that point to legacy class 
won't work.

It's not that: it's a link from the {{queryparser}} module to the {{sandbox}} 
module, and it fails because the javadocs incorrectly link the class from 
{{sandbox}} as if it were in core.  But after talking to [~rcmuir], he seems to 
know what's happening here, and basically we should not be putting the same 
package in multiple modules.

So ... one idea is to be less ambitious here and instead leave NumericField in 
{{core}}, but renamed to LegacyX and deprecated.  Then the links will be fine, 
since no classes moved; they were simply renamed.

> Move NumericField out of core to backwards-codecs
> -
>
> Key: LUCENE-6917
> URL: https://issues.apache.org/jira/browse/LUCENE-6917
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.0
>
> Attachments: LUCENE-6917-broken-javadocs.patch, LUCENE-6917.patch, 
> LUCENE-6917.patch
>
>
> DimensionalValues seems to be better across the board (indexing time, 
> indexing size, search-speed, search-time heap required) than NumericField, at 
> least in my testing so far.
> I think for 6.0 we should move {{IntField}}, {{LongField}}, {{FloatField}}, 
> {{DoubleField}} and {{NumericRangeQuery}} to {{backward-codecs}}, and rename 
> with {{Legacy}} prefix?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC1 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Erik Hatcher
+1, looks good, thanks Cassandra!




> On Dec 8, 2015, at 10:53 AM, Cassandra Targett  wrote:
> 
> Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
> 
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC1/
>  
> 
> 
> $ cat apache-solr-ref-guide-5.4.pdf.sha1 
> 
> 1912cd4682646e994e6f4188eb79bfdc3bccba63  apache-solr-ref-guide-5.4.pdf
> 
> 
> +1 from me.
> 
> Thanks,
> Cassandra



Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Adrien Grand
+1
SUCCESS! [0:59:54.998323]

Le mar. 8 déc. 2015 à 16:48, Jan Høydahl  a écrit :

> +1
> SUCCESS! [1:40:13.877370]
>
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com
>
> > 8. des. 2015 kl. 15.02 skrev Shalin Shekhar Mangar <
> shalinman...@gmail.com>:
> >
> > +1
> >
> > SUCCESS! [1:03:00.784525]
> >
> > On Tue, Dec 8, 2015 at 4:20 PM, Anshum Gupta 
> wrote:
> >> +1
> >>
> >> Smoke tester is happy on both Java7 and Java8.
> >>
> >> SUCCESS! [1:13:28.357352]
> >>
> >> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
> >>>
> >>> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
> >>>
> >>> The artifacts can be downloaded from:
> >>>
> >>>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
> >>>
> >>> You can run the smoke tester directly with this command:
> >>> python3 -u dev-tools/scripts/smokeTestRelease.py
> >>>
> >>>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
> >>>
> >>> I will let this vote run until midnight (GMT) on Wednesday 9 December.
> >>>
> >>> Please cast your votes! (and let me know, politely :-) if I missed
> >>> anything)
> >>>
> >>> Upayavira
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >>
> >>
> >>
> >> --
> >> Anshum Gupta
> >
> >
> >
> > --
> > Regards,
> > Shalin Shekhar Mangar.
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Jan Høydahl
+1
SUCCESS! [1:40:13.877370]

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 8. des. 2015 kl. 15.02 skrev Shalin Shekhar Mangar :
> 
> +1
> 
> SUCCESS! [1:03:00.784525]
> 
> On Tue, Dec 8, 2015 at 4:20 PM, Anshum Gupta  wrote:
>> +1
>> 
>> Smoke tester is happy on both Java7 and Java8.
>> 
>> SUCCESS! [1:13:28.357352]
>> 
>> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
>>> 
>>> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>>> 
>>> The artifacts can be downloaded from:
>>> 
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> 
>>> You can run the smoke tester directly with this command:
>>> python3 -u dev-tools/scripts/smokeTestRelease.py
>>> 
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> 
>>> I will let this vote run until midnight (GMT) on Wednesday 9 December.
>>> 
>>> Please cast your votes! (and let me know, politely :-) if I missed
>>> anything)
>>> 
>>> Upayavira
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
>> 
>> 
>> --
>> Anshum Gupta
> 
> 
> 
> -- 
> Regards,
> Shalin Shekhar Mangar.
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



VOTE: RC1 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Cassandra Targett
Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:

https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC1/

$ cat apache-solr-ref-guide-5.4.pdf.sha1

1912cd4682646e994e6f4188eb79bfdc3bccba63  apache-solr-ref-guide-5.4.pdf


+1 from me.

Thanks,
Cassandra


Re: VOTE: RC1 Release apache-solr-ref-guide-5.4.pdf

2015-12-08 Thread Erick Erickson
+1 from me.

On Tue, Dec 8, 2015 at 7:53 AM, Cassandra Targett  wrote:
> Please VOTE to release the Apache Solr Ref Guide for Solr 5.4:
>
> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.4-RC1/
>
> $ cat apache-solr-ref-guide-5.4.pdf.sha1
>
> 1912cd4682646e994e6f4188eb79bfdc3bccba63  apache-solr-ref-guide-5.4.pdf
>
>
> +1 from me.
>
> Thanks,
> Cassandra

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8387) Solr example configs should ship with managed-schema instead of schema.xml

2015-12-08 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047018#comment-15047018
 ] 

Erik Hatcher commented on SOLR-8387:


+1, thanks [~varunthacker]

> Solr example configs should ship with managed-schema instead of schema.xml
> --
>
> Key: SOLR-8387
> URL: https://issues.apache.org/jira/browse/SOLR-8387
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Varun Thacker
> Fix For: Trunk
>
> Attachments: SOLR-8387.patch
>
>
> This is a followup of SOLR-8131. In SOLR-8131, if a schema factory is not 
> specified explicitly, managed schema will be used.
> Now that the managed schema factory is the default, when a user starts 
> Solr 6.0 their schema.xml file will get converted to managed-schema. This 
> might seem trappy or confusing to a user. Hence, why don't we directly ship 
> with a file called {{managed-schema}} instead of {{schema.xml}}? Just a 
> rename of the files in all the example configs that we ship. The data_driven 
> config does that already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1040 - Still Failing

2015-12-08 Thread Mikhail Khludnev
Got it. I just didn't have JDK 7 after my last laptop crash. The 5x build on 
JDK 8 caught the syntax errors, which is why I relaxed after fixing the compile 
errors and running the tests. I didn't catch the rt.jar difference. Now I'm 
able to test on JDK 7. Awaiting CI.

On Tue, Dec 8, 2015 at 5:25 PM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> Mikhail,
> In the future, particularly when committing code that uses Java 8
> features, be sure to compile & run tests even on the 5x branch after
> back-porting but before committing.  From time to time this fails for
> stuff, like easy-to-forget cases of not using ‘final’.
> ~ David
>
> On Tue, Dec 8, 2015 at 2:21 AM Mikhail Khludnev <
> mkhlud...@griddynamics.com> wrote:
>
>> It's mine. Looking into.
>>
>> On Tue, Dec 8, 2015 at 9:35 AM, Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>>
>>> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1040/
>>>
>>> All tests passed
>>>
>>> Build Log:
>>> [...truncated 7450 lines...]
>>> [javac] Compiling 6 source files to
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/join/classes/test
>>> [javac]
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestJoinUtil.java:954:
>>> error: cannot find symbol
>>> [javac] assert nextInt ==
>>> Integer.parseUnsignedInt(uniqueRandomValue,16);
>>> [javac]  ^
>>> [javac]   symbol:   method parseUnsignedInt(String,int)
>>> [javac]   location: class Integer
>>> [javac]
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestJoinUtil.java:1206:
>>> error: cannot find symbol
>>> [javac] final int linkInt =
>>> Integer.parseUnsignedInt(linkValue,16);
>>> [javac]^
>>> [javac]   symbol:   method parseUnsignedInt(String,int)
>>> [javac]   location: class Integer
>>> [javac] Note:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/join/src/test/org/apache/lucene/search/join/TestBlockJoin.java
>>> uses or overrides a deprecated API.
>>> [javac] Note: Recompile with -Xlint:deprecation for details.
>>> [javac] 2 errors
>>>
>>> BUILD FAILED
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:801:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:738:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:59:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build.xml:472:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:2252:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/module-build.xml:58:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/module-build.xml:55:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:818:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:832:
>>> The following error occurred while executing this line:
>>> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1968:
>>> Compile failed; see the compiler error output for details.
>>>
>>> Total time: 106 minutes 8 seconds
>>> Build step 'Invoke Ant' marked build as failure
>>> Archiving artifacts
>>> Recording test results
>>> Email was triggered for: Failure - Any
>>> Sending email for trigger: Failure - Any
>>>
>>>
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>
>>
>>
>> --
>> Sincerely yours
>> Mikhail Khludnev
>> Principal Engineer,
>> Grid Dynamics
>>
>> 
>> 
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>



-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics





[jira] [Updated] (SOLR-8385) Narrow StreamFactory.withFunctionName clazz parameter to prevent misconfiguration.

2015-12-08 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-8385:
--
Attachment: SOLR-8385.patch

Patch.  Changes the type for functionName values from {{Class}} to 
{{Class<? extends Expressible>}}, both in method signatures and internally in 
{{StreamFactory}}.

All tests pass for me.

> Narrow StreamFactory.withFunctionName clazz parameter to prevent 
> misconfiguration.
> --
>
> Key: SOLR-8385
> URL: https://issues.apache.org/jira/browse/SOLR-8385
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Priority: Trivial
>  Labels: easyfix
> Fix For: Trunk
>
> Attachments: SOLR-8385.patch
>
>
> Currently, {{StreamFactory}} has several overloaded {{withFunctionName}} 
> methods.  One of these, takes two parameters: a {{String}} functionName, and 
> a {{Class}} implementation type.
> This second parameter is a little too generic, because it's really only ever 
> valid for a small, small subset of classes (i.e. 
> {{Class<? extends Expressible>}}).  Narrowing the type of this parameter is a 
> small step towards preventing accidental misconfiguration of a 
> {{StreamFactory}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047208#comment-15047208
 ] 

Mark Miller commented on SOLR-8367:
---

Or maybe my head is dopey and I need to be hard-coding a filesystem-based 
directory.

I'm just surprised that when you don't, tlogs don't survive a restart either. 
Perhaps that's because of the old index dir cleanup code.

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8367.patch
>
>
> For one, it currently only kicks in for the first attempted leader. If it's 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5868) JoinUtil support for NUMERIC docValues fields

2015-12-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047163#comment-15047163
 ] 

Mikhail Khludnev edited comment on LUCENE-5868 at 12/8/15 6:03 PM:
---

Oh. Yeah!! 5.x is fixed
https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3822/changes



was (Author: mkhludnev):
Oh. Yeah!! 5.x is fiexed
https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3822/changes


> JoinUtil support for NUMERIC docValues fields 
> --
>
> Key: LUCENE-5868
> URL: https://issues.apache.org/jira/browse/LUCENE-5868
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-5868-5x.patch, LUCENE-5868-5x.patch, 
> LUCENE-5868-lambdarefactoring.patch, LUCENE-5868-lambdarefactoring.patch, 
> LUCENE-5868.patch, LUCENE-5868.patch, LUCENE-5868.patch, LUCENE-5868.patch, 
> LUCENE-5868.patch, LUCENE-5868.patch, qtj.diff
>
>
> while polishing SOLR-6234 I found that JoinUtil can't join int dv fields at 
> least. 
> I plan to provide test/patch. It might be important, because Solr's join can 
> do that. Please vote if you care! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5868) JoinUtil support for NUMERIC docValues fields

2015-12-08 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047163#comment-15047163
 ] 

Mikhail Khludnev commented on LUCENE-5868:
--

Oh. Yeah!! 5.x is fixed
https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3822/changes


> JoinUtil support for NUMERIC docValues fields 
> --
>
> Key: LUCENE-5868
> URL: https://issues.apache.org/jira/browse/LUCENE-5868
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Fix For: 5.5
>
> Attachments: LUCENE-5868-5x.patch, LUCENE-5868-5x.patch, 
> LUCENE-5868-lambdarefactoring.patch, LUCENE-5868-lambdarefactoring.patch, 
> LUCENE-5868.patch, LUCENE-5868.patch, LUCENE-5868.patch, LUCENE-5868.patch, 
> LUCENE-5868.patch, LUCENE-5868.patch, qtj.diff
>
>
> while polishing SOLR-6234 I found that JoinUtil can't join int dv fields at 
> least. 
> I plan to provide test/patch. It might be important, because Solr's join can 
> do that. Please vote if you care! 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8392) SolrParam.get(String) returns String and shouldn't be used in other instanceof checks

2015-12-08 Thread Mike Drob (JIRA)
Mike Drob created SOLR-8392:
---

 Summary: SolrParam.get(String) returns String and shouldn't be 
used in other instanceof checks
 Key: SOLR-8392
 URL: https://issues.apache.org/jira/browse/SOLR-8392
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
 Fix For: Trunk


There are a couple of places where we declare the return type of 
solrParams.get() as an Object and then do instanceof checks for other types. 
Since we know it will be a String, we can simplify this logic in several places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8392) SolrParam.get(String) returns String and shouldn't be used in other instanceof checks

2015-12-08 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-8392:

Attachment: SOLR-8392.patch

{{patch -p0}} attached against trunk.

> SolrParam.get(String) returns String and shouldn't be used in other 
> instanceof checks
> -
>
> Key: SOLR-8392
> URL: https://issues.apache.org/jira/browse/SOLR-8392
> Project: Solr
>  Issue Type: Bug
>Reporter: Mike Drob
> Fix For: Trunk
>
> Attachments: SOLR-8392.patch
>
>
> There are a couple of places where we declare the return type of 
> solrParams.get() as an Object and then do instanceof checks for other types. 
> Since we know it will be a String, we can simplify this logic in several 
> places.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.4-Linux (32bit/jdk-9-ea+95) - Build # 347 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/347/
Java: 32bit/jdk-9-ea+95 -server -XX:+UseG1GC -XX:-CompactStrings

3 tests failed.
FAILED:  org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testSkip

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([725CC15AA12004D7:B43369C9916A3ED0]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at 
org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at 
org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at 
org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at 
org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testSkip(TestTikaEntityProcessor.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
   

[jira] [Commented] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047142#comment-15047142
 ] 

Mark Miller commented on SOLR-8367:
---

Working on a test, just hitting some issues. I think someone broke it so that 
when you restart a Jetty instance in a test, it's not getting the same data dir 
or something.

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8367.patch
>
>
> For one, it currently only kicks in the first attempted leader. If it's 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8385) Narrow StreamFactory.withFunctionName clazz parameter to prevent misconfiguration.

2015-12-08 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-8385:
--
Description: 
Currently, {{StreamFactory}} has several overloaded {{withFunctionName}} 
methods.  One of these, takes two parameters: a {{String}} functionName, and a 
{{Class}} implementation type.

This second parameter is a little too generic, because it's really only ever 
valid for a small, small subset of classes (i.e. {{Class<? extends 
Expressible>}}).  Narrowing the type of this parameter is a small step towards 
preventing accidental misconfiguration of a {{StreamFactory}}.

> Narrow StreamFactory.withFunctionName clazz parameter to prevent 
> misconfiguration.
> --
>
> Key: SOLR-8385
> URL: https://issues.apache.org/jira/browse/SOLR-8385
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Jason Gerlowski
>Priority: Trivial
>  Labels: easyfix
> Fix For: Trunk
>
>
> Currently, {{StreamFactory}} has several overloaded {{withFunctionName}} 
> methods.  One of these, takes two parameters: a {{String}} functionName, and 
> a {{Class}} implementation type.
> This second parameter is a little too generic, because it's really only ever 
> valid for a small, small subset of classes (i.e. {{Class<? extends 
> Expressible>}}).  Narrowing the type of this parameter is a small step 
> towards preventing accidental misconfiguration of a {{StreamFactory}}.
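The narrowing the issue proposes is an ordinary bounded type parameter; a minimal self-contained sketch (toy Expressible and factory types, not the real SolrJ classes):

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-ins for the SolrJ types discussed in the issue.
interface Expressible {}

class CountStream implements Expressible {}

class StreamFactorySketch {
    private final Map<String, Class<? extends Expressible>> functions = new HashMap<>();

    // Narrowed signature: only Expressible implementations are accepted,
    // so registering an unrelated class fails at compile time, not at runtime.
    StreamFactorySketch withFunctionName(String name, Class<? extends Expressible> clazz) {
        functions.put(name, clazz);
        return this;
    }

    Class<? extends Expressible> lookup(String name) {
        return functions.get(name);
    }
}

public class NarrowingDemo {
    public static void main(String[] args) {
        StreamFactorySketch factory = new StreamFactorySketch()
            .withFunctionName("count", CountStream.class);
        // factory.withFunctionName("bad", String.class);  // would not compile
        System.out.println(factory.lookup("count").getSimpleName());
    }
}
```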



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8380) CollectionsAPIDistributedZkTest frequent failures with non ok status: 500, message:Server Error

2015-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047247#comment-15047247
 ] 

ASF subversion and git services commented on SOLR-8380:
---

Commit 1718675 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1718675 ]

SOLR-8380: Add core name in assert message

> CollectionsAPIDistributedZkTest frequent failures with non ok status: 500, 
> message:Server Error
> ---
>
> Key: SOLR-8380
> URL: https://issues.apache.org/jira/browse/SOLR-8380
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
> Fix For: Trunk, 5.5
>
> Attachments: CollectionsAPIDistributedZkTest-fail.log
>
>
> Frequent failures of the form:
> {code}
> 3 tests failed.
> FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test
> Error Message:
> Error from server at http://127.0.0.1:51207/obnq/awholynewcollection_0: non 
> ok status: 500, message:Server Error
> Stack Trace:
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:51207/obnq/awholynewcollection_0: non ok 
> status: 500, message:Server Error
> at 
> __randomizedtesting.SeedInfo.seed([FD971CBA781DD16A:75C32360D6E1BC92]:0)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:508)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
> at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
> at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
> at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
> {code}
> This is easily reproducible (1 out of 3 runs).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Upayavira
Have we heard anything more from Infrastructure? It seems the thing to
do right now is to get more of a conversation going with them to
understand the issue at hand. Once the release is done, I'd be happy to
try and get that conversation going faster than it is.

Upayavira

On Tue, Dec 8, 2015, at 06:57 PM, Dennis Gove wrote:
> github will reject files larger than 100MB and will warn for files
> larger than 50MB
> (https://help.github.com/articles/working-with-large-files/). They
> have recently released Git Large File Storage to alleviate issues
> caused by these restrictions
> (https://github.com/blog/1986-announcing-git-large-file-storage-lfs)
> but there is a cost associated with using such a thing so I would
> imagine that path is a no-go. The limit is on a per-file basis and in
> other projects I've gotten around it by using split to split large
> files before adding to a github repo and then using cat to combine the
> pieces back before using the file. I'm not sure how feasible of a
> solution that would be for us but perhaps we could add hooks to do the
> split-ting and cat-ing automatically for users.
>
> I'm in favor of a full switch to git (and github).
>
> Doing so would require changes to the ant build scripts, as at least one
> command (package and related package commands) requires an svn
> checkout to add some information to the created package. We'd have to
> change that logic to instead look at git metadata.
>
> On Mon, Dec 7, 2015 at 2:48 AM, Dawid Weiss
>  wrote:
>> I tried it once (for storing large text files -- Polish dictionaries,
>> uncompressed -- on github), but it simply didn't work. More headaches
>> than benefits (to me).
>>
>> Dawid
>>
>> On Sun, Dec 6, 2015 at 10:04 PM, Doug Turnbull
>>  wrote:
>> > I had not heard of git-lfs looks promising
>> >
>> > https://git-lfs.github.com/?utm_source=github_site_medium=blog_campaign=gitlfs
>> >
>> >
>> > On Sunday, December 6, 2015, Jan Høydahl  wrote:
>> >>
>> >> If the size of historic jars is the problem here, would looking into
>> >> git-lfs for *.jar be one workaround? I might also be totally off here :-)
>> >>
>> >> --
>> >> Jan Høydahl, search solution architect
>> >> Cominvent AS - www.cominvent.com
>> >>
>> >> 6. des. 2015 kl. 00.46 skrev Scott Blum :
>> >>
>> >> If lucene was a new project being started today, is there any question
>> >> about whether it would be managed in svn or git?  If not, this might be a
>> >> good impetus for moving to a better world.
>> >>
>> >> On Sat, Dec 5, 2015 at 6:19 PM, Yonik Seeley  wrote:
>> >>>
>> >>> On Sat, Dec 5, 2015 at 5:53 PM, david.w.smi...@gmail.com
>> >>>  wrote:
>> >>> > I understand Gus; but we’d like to separate the question of wether we
>> >>> > should
>> >>> > move from svn to git from fixing the git mirror.
>> >>>
>> >>> Except moving to git is one path to fixing the issue, so it's not
>> >>> really separate.
>> >>> Give the multiple problems that the svn-git bridge seems to have (both
>> >>> memory leaks + history), perhaps the sooner we switch to git, the
>> >>> better.
>> >>>
>> >>> -Yonik
>> >>>
>> >>> -
>> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> >>>
>> >>
>> >>
>> >
>> >
>> > --
>> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections, LLC |
>> > 240.476.9983
>> > Author: Relevant Search
>> > This e-mail and all contents, including attachments, is considered to be
>> > Company Confidential unless explicitly stated otherwise, regardless of
>> > whether attachments are marked as such.
>> >
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>


[jira] [Commented] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-08 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047333#comment-15047333
 ] 

Paul Elschot commented on LUCENE-6919:
--

Better performance, and about 70 fewer lines of code, in spite of splitting off 
SpanScorer from Spans.

LGTM, except for one minor point: the javadocs of Scorer.docID() say that there 
is a default implementation, but the method is abstract there.
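The API move under discussion — from Scorer *being* a DocIdSetIterator to Scorer *exposing* one — can be sketched with toy types (hypothetical names, not the actual Lucene classes); the payoff is that a scorer can hand out its underlying iterator directly, so the scorer itself adds no frame per nextDoc() call:

```java
// Toy iterator contract, loosely modeled on DocIdSetIterator.
interface DocIterator {
    int nextDoc();  // returns Integer.MAX_VALUE when exhausted
}

// After the change: Scorer *has* an iterator instead of *being* one.
abstract class ScorerSketch {
    abstract DocIterator iterator();
}

class TermScorerSketch extends ScorerSketch {
    private final int[] docs;
    private int i = -1;

    TermScorerSketch(int... docs) { this.docs = docs; }

    @Override
    DocIterator iterator() {
        // The postings-style iterator is returned directly rather than
        // being wrapped, so iteration bypasses the scorer entirely.
        return () -> (++i < docs.length) ? docs[i] : Integer.MAX_VALUE;
    }
}

public class IteratorDemo {
    public static void main(String[] args) {
        DocIterator it = new TermScorerSketch(3, 7, 12).iterator();
        int doc, count = 0;
        while ((doc = it.nextDoc()) != Integer.MAX_VALUE) count++;
        System.out.println(count);  // 3 documents iterated
    }
}
```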



> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6919.patch, LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline  StdDev   QPS patch  StdDev  
>   Pct diff
>   Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
> -2.0% ( -29% -   35%)
>   AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
> -1.1% (  -9% -8%)
>   Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
> -0.7% ( -19% -   23%)
>  MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
> -0.4% (  -6% -6%)
>   HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
> -0.3% (  -5% -5%)
>  LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
> -0.1% (  -4% -4%)
>MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
> -0.1% (  -3% -3%)
>LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
> -0.1% (  -4% -4%)
> HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
> -0.1% ( -11% -   12%)
>  MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
> 0.1% (  -3% -4%)
>  Respell   57.35  (3.2%)   57.41  (3.4%)
> 0.1% (  -6% -6%)
> HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
> 0.1% (  -5% -5%)
> OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
> 0.4% (  -6% -8%)
>  LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
> 0.8% (  -4% -7%)
> OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
> 1.7% (  -3% -7%)
>   OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
> 3.4% ( -10% -   19%)
> HighTerm   92.53  (5.3%)   95.94  (5.8%)
> 3.7% (  -7% -   15%)
>OrHighMed   53.80  (7.7%)   55.79  (6.6%)
> 3.7% (  -9% -   19%)
>   AndHighMed  266.69  (1.7%)  277.15  (2.5%)
> 3.9% (   0% -8%)
>  Prefix3   44.68  (5.4%)   46.60  (7.0%)
> 4.3% (  -7% -   17%)
>  MedTerm  261.52  (4.9%)  273.52  (5.4%)
> 4.6% (  -5% -   15%)
> Wildcard   42.39  (6.1%)   44.35  (7.8%)
> 4.6% (  -8% -   19%)
>   IntNRQ   10.46  (7.0%)   10.99  (9.5%)
> 5.0% ( -10% -   23%)
>OrNotHighHigh   67.15  (4.6%)   70.65  (4.5%)
> 5.2% (  -3% -   15%)
>OrHighNotHigh   43.07  (5.1%)   45.36  (5.4%)
> 5.3% (  -4% -   16%)
>OrHighLow   64.19  (6.4%)   67.72  (5.5%)
> 5.5% (  -6% -   18%)
>  AndHighHigh   64.17  (2.3%)   67.87  

[jira] [Commented] (SOLR-8380) CollectionsAPIDistributedZkTest frequent failures with non ok status: 500, message:Server Error

2015-12-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047250#comment-15047250
 ] 

ASF subversion and git services commented on SOLR-8380:
---

Commit 1718676 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1718676 ]

SOLR-8380: Add core name in assert message

> CollectionsAPIDistributedZkTest frequent failures with non ok status: 500, 
> message:Server Error
> ---
>
> Key: SOLR-8380
> URL: https://issues.apache.org/jira/browse/SOLR-8380
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
> Fix For: Trunk, 5.5
>
> Attachments: CollectionsAPIDistributedZkTest-fail.log
>
>
> Frequent failures of the form:
> {code}
> 3 tests failed.
> FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test
> Error Message:
> Error from server at http://127.0.0.1:51207/obnq/awholynewcollection_0: non 
> ok status: 500, message:Server Error
> Stack Trace:
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:51207/obnq/awholynewcollection_0: non ok 
> status: 500, message:Server Error
> at 
> __randomizedtesting.SeedInfo.seed([FD971CBA781DD16A:75C32360D6E1BC92]:0)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:508)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
> at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
> at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
> at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
> at 
> org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
> at 
> org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
> {code}
> This is easily reproducible (1 out of 3 runs).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Dennis Gove
github will reject files larger than 100MB and will warn for files larger
than 50MB (https://help.github.com/articles/working-with-large-files/).
They have recently released Git Large File Storage to alleviate issues
caused by these restrictions (
https://github.com/blog/1986-announcing-git-large-file-storage-lfs) but
there is a cost associated with using such a thing so I would imagine that
path is a no-go. The limit is on a per-file basis and in other projects
I've gotten around it by using split to split large files before adding to
a github repo and then using cat to combine the pieces back before using
the file. I'm not sure how feasible of a solution that would be for us but
perhaps we could add hooks to do the split-ting and cat-ing automatically
for users.
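As a concrete illustration of the split/cat workaround described above (a sketch only — the file names and chunk size are illustrative, and real usage would pick a chunk size under GitHub's limits):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fake "large" file (1 MB here so the sketch runs quickly).
head -c 1048576 /dev/urandom > big.jar

# Before committing: split into fixed-size chunks.
split -b 512k big.jar big.jar.part-

# After checkout: concatenate the pieces back into the original file.
cat big.jar.part-* > big.jar.restored

cmp big.jar big.jar.restored && echo "round-trip OK"
```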

I'm in favor of a full switch to git (and github).

Doing so would require changes to the ant build scripts, as at least one
command (package and related package commands) requires an svn checkout to
add some information to the created package. We'd have to change that logic
to instead look at git metadata.

On Mon, Dec 7, 2015 at 2:48 AM, Dawid Weiss  wrote:

> I tried it once (for storing large text files -- Polish dictionaries,
> uncompressed -- on github), but it simply didn't work. More headaches
> than benefits (to me).
>
> Dawid
>
> On Sun, Dec 6, 2015 at 10:04 PM, Doug Turnbull
>  wrote:
> > I had not heard of git-lfs looks promising
> >
> >
> https://git-lfs.github.com/?utm_source=github_site_medium=blog_campaign=gitlfs
> >
> >
> > On Sunday, December 6, 2015, Jan Høydahl  wrote:
> >>
> >> If the size of historic jars is the problem here, would looking into
> >> git-lfs for *.jar be one workaround? I might also be totally off here
> :-)
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com
> >>
> >> 6. des. 2015 kl. 00.46 skrev Scott Blum :
> >>
> >> If lucene was a new project being started today, is there any question
> >> about whether it would be managed in svn or git?  If not, this might be
> a
> >> good impetus for moving to a better world.
> >>
> >> On Sat, Dec 5, 2015 at 6:19 PM, Yonik Seeley  wrote:
> >>>
> >>> On Sat, Dec 5, 2015 at 5:53 PM, david.w.smi...@gmail.com
> >>>  wrote:
> >>> > I understand Gus; but we’d like to separate the question of wether we
> >>> > should
> >>> > move from svn to git from fixing the git mirror.
> >>>
> >>> Except moving to git is one path to fixing the issue, so it's not
> >>> really separate.
> >>> Give the multiple problems that the svn-git bridge seems to have (both
> >>> memory leaks + history), perhaps the sooner we switch to git, the
> >>> better.
> >>>
> >>> -Yonik
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >>
> >>
> >
> >
> > --
> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections,
> LLC |
> > 240.476.9983
> > Author: Relevant Search
> > This e-mail and all contents, including attachments, is considered to be
> > Company Confidential unless explicitly stated otherwise, regardless of
> > whether attachments are marked as such.
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Michael McCandless
On Tue, Dec 8, 2015 at 2:05 PM, Upayavira  wrote:

> Have we heard anything more from Infrastructure?

Alas, no, unfortunately, at least from what I've seen ...

I would love to know if this memory leak in git-svn is a known issue
so we can be more informed (we've asked several times I think, but no
answer that I've seen, which could just be because it is NOT a known
issue).

Dawid also asked a few days ago for an export from svn so he could
play with git-svn himself (thank you for "volunteering" Dawid!), at
https://issues.apache.org/jira/browse/INFRA-10828 but no response so
far.  This seems the most promising lead to me so far, if only infra
could get the bits to Dawid soon...

Paul Elschot continues to improve his script (thank you!) to work
around delayed git mirroring:
https://issues.apache.org/jira/browse/LUCENE-6922 ... it seems this
may be the only option for git users come Dec 30, unless we decide
soon to do a full cutover from svn to git.

> Once the release is done, I'd be happy to try and get that conversation going 
> faster than it is.

That would be wonderful, thank you!

Mike McCandless

http://blog.mikemccandless.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6919) Change the Scorer API to expose an iterator instead of extending DocIdSetIterator

2015-12-08 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047403#comment-15047403
 ] 

Alan Woodward commented on LUCENE-6919:
---

I think now that scoring has moved back out of Spans into a special SpanScorer, 
we don't need the docScorer in there anymore?

> Change the Scorer API to expose an iterator instead of extending 
> DocIdSetIterator
> -
>
> Key: LUCENE-6919
> URL: https://issues.apache.org/jira/browse/LUCENE-6919
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6919.patch, LUCENE-6919.patch
>
>
> I was working on trying to address the performance regression on LUCENE-6815 
> but this is hard to do without introducing specialization of 
> DisjunctionScorer which I'd like to avoid at all costs.
> I think the performance regression would be easy to address without 
> specialization if Scorers were changed to return an iterator instead of 
> extending DocIdSetIterator. So conceptually the API would move from
> {code}
> class Scorer extends DocIdSetIterator {
> }
> {code}
> to
> {code}
> class Scorer {
>   DocIdSetIterator iterator();
> }
> {code}
> This would help me because then if none of the sub clauses support two-phase 
> iteration, DisjunctionScorer could directly return the approximation as an 
> iterator instead of having to check if twoPhase == null at every iteration.
> Such an approach could also help remove some method calls. For instance 
> TermScorer.nextDoc calls PostingsEnum.nextDoc but with this change 
> TermScorer.iterator() could return the PostingsEnum and TermScorer would not 
> even appear in stack traces when scoring. I hacked a patch to see how much 
> that would help and luceneutil seems to like the change:
> {noformat}
> TaskQPS baseline  StdDev   QPS patch  StdDev  
>   Pct diff
>   Fuzzy1   88.54 (15.7%)   86.73 (16.6%)   
> -2.0% ( -29% -   35%)
>   AndHighLow  698.98  (4.1%)  691.11  (5.1%)   
> -1.1% (  -9% -8%)
>   Fuzzy2   26.47 (11.2%)   26.28 (10.3%)   
> -0.7% ( -19% -   23%)
>  MedSpanNear  141.03  (3.3%)  140.51  (3.2%)   
> -0.4% (  -6% -6%)
>   HighPhrase   60.66  (2.6%)   60.48  (3.3%)   
> -0.3% (  -5% -5%)
>  LowSpanNear   29.25  (2.4%)   29.21  (2.1%)   
> -0.1% (  -4% -4%)
>MedPhrase   28.32  (1.9%)   28.28  (2.0%)   
> -0.1% (  -3% -3%)
>LowPhrase   17.31  (2.1%)   17.29  (2.6%)   
> -0.1% (  -4% -4%)
> HighSloppyPhrase   10.93  (6.0%)   10.92  (6.0%)   
> -0.1% ( -11% -   12%)
>  MedSloppyPhrase   72.21  (2.2%)   72.27  (1.8%)
> 0.1% (  -3% -4%)
>  Respell   57.35  (3.2%)   57.41  (3.4%)
> 0.1% (  -6% -6%)
> HighSpanNear   26.71  (3.0%)   26.75  (2.5%)
> 0.1% (  -5% -5%)
> OrNotHighLow  803.46  (3.4%)  807.03  (4.2%)
> 0.4% (  -6% -8%)
>  LowSloppyPhrase   88.02  (3.4%)   88.77  (2.5%)
> 0.8% (  -4% -7%)
> OrNotHighMed  200.45  (2.7%)  203.83  (2.5%)
> 1.7% (  -3% -7%)
>   OrHighHigh   38.98  (7.9%)   40.30  (6.6%)
> 3.4% ( -10% -   19%)
> HighTerm   92.53  (5.3%)   95.94  (5.8%)
> 3.7% (  -7% -   15%)
>OrHighMed   53.80  (7.7%)   55.79  (6.6%)
> 3.7% (  -9% -   19%)
>   AndHighMed  266.69  (1.7%)  277.15  (2.5%)
> 3.9% (   0% -8%)
>  Prefix3   44.68  (5.4%)   46.60  (7.0%)
> 4.3% (  -7% -   17%)
>  MedTerm  261.52  (4.9%)  273.52  (5.4%)
> 4.6% (  -5% -   15%)
> Wildcard   42.39  (6.1%)   44.35  (7.8%)
> 4.6% (  -8% -   19%)
>   IntNRQ   10.46  (7.0%)   10.99  (9.5%)
> 5.0% ( -10% -   23%)
>OrNotHighHigh   67.15  (4.6%)   70.65  (4.5%)
> 5.2% (  -3% -   15%)
>OrHighNotHigh   43.07  (5.1%)   45.36  (5.4%)
> 5.3% (  -4% -   16%)
>OrHighLow   64.19  (6.4%)   67.72  (5.5%)
> 5.5% (  -6% -   18%)
>  AndHighHigh   64.17  (2.3%)   67.87  (2.1%)
> 5.8% (   1% -   10%)
>  LowTerm  642.94 (10.9%)  681.48  (8.5%)
> 6.0% 

[jira] [Updated] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2015-12-08 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8394:
---
Attachment: SOLR-8394.patch

Simple patch to unwrap LeafReaders when they are FilterLeafReaders, then apply 
the same logic of exiting if the unwrapped reader is not a SegmentReader.

> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8394.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.
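The unwrapping the patch performs can be sketched with toy reader types (hypothetical classes, not the actual Lucene LeafReader hierarchy — the point is peeling off filter layers before the SegmentReader check):

```java
// Toy reader hierarchy mirroring LeafReader / FilterLeafReader / SegmentReader.
abstract class ReaderSketch {}

class SegmentSketch extends ReaderSketch {
    long ramBytesUsed() { return 1024; }
}

class FilterSketch extends ReaderSketch {
    final ReaderSketch in;
    FilterSketch(ReaderSketch in) { this.in = in; }
}

public class UnwrapDemo {
    // Unwrap any number of filter layers, then apply the existing
    // "only segment readers report RAM usage" logic.
    static long ramBytes(ReaderSketch r) {
        while (r instanceof FilterSketch) {
            r = ((FilterSketch) r).in;
        }
        return (r instanceof SegmentSketch) ? ((SegmentSketch) r).ramBytesUsed() : 0L;
    }

    public static void main(String[] args) {
        ReaderSketch wrapped = new FilterSketch(new FilterSketch(new SegmentSketch()));
        // Without unwrapping this would report 0; with it, the segment's usage.
        System.out.println(ramBytes(wrapped));
    }
}
```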



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning

2015-12-08 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8393:
---
Attachment: SOLR-8393.patch

Cleaned up version of the same patch.

> Component for Solr resource usage planning
> --
>
> Key: SOLR-8393
> URL: https://issues.apache.org/jira/browse/SOLR-8393
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8393.patch, SOLR-8393.patch
>
>
> One question that keeps coming back is how much disk and RAM do I need to run 
> Solr. The most common response is that it highly depends on your data. While 
> true, it makes for frustrated users trying to plan their deployments. 
> The idea I'm bringing is to create a new component that will attempt to 
> extrapolate resources needed in the future by looking at resources currently 
> used. By adding a parameter for the target number of documents, current 
> resources are adapted by a ratio relative to current number of documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Scott Blum
Dumb question, but searching around suggests that git-svn can be killed and
then resumed with `git svn fetch`.  Shouldn't that resolve any
process-level memory leak?

On Fri, Dec 4, 2015 at 3:57 PM, Michael McCandless <
luc...@mikemccandless.com> wrote:

> Hello devs,
>
> The infra team has notified us (Lucene/Solr) that in 26 days our
> git-svn mirror will be turned off, because running it consumes too
> many system resources, affecting other projects, apparently because of
> a memory leak in git-svn.
>
> Does anyone know of a link to this git-svn issue?  Is it a known
> issue?  If there's something simple we can do (remove old jars from
> our svn history, remove old branches), maybe we can sidestep the issue
> and infra will allow it to keep running?
>
> Or maybe someone in the Lucene/Solr dev community with prior
> experience with git-svn could volunteer to play with it to see if
> there's a viable solution, maybe with command-line options e.g. to
> only mirror specific branches (trunk, 5.x)?
>
> Or maybe it's time for us to switch to git, but there are problems
> there too, e.g. we are currently missing large parts of our svn
> history from the mirror now and it's not clear whether that would be
> fixed if we switched:
> https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
> used to add JAR files to svn, the "git clone" would likely take
> several GBs unless we remove those JARs from our history.
>
> Or if anyone has any other ideas, we should explore them, because
> otherwise in 26 days there will be no more updates to the git mirror
> of Lucene and Solr sources...
>
> Thanks,
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Created] (SOLR-8393) Component for Solr resource usage planning

2015-12-08 Thread Steve Molloy (JIRA)
Steve Molloy created SOLR-8393:
--

 Summary: Component for Solr resource usage planning
 Key: SOLR-8393
 URL: https://issues.apache.org/jira/browse/SOLR-8393
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Molloy


One question that keeps coming back is how much disk and RAM do I need to run 
Solr. The most common response is that it highly depends on your data. While 
true, it makes for frustrated users trying to plan their deployments. 

The idea I'm bringing is to create a new component that will attempt to 
extrapolate resources needed in the future by looking at resources currently 
used. By adding a parameter for the target number of documents, current 
resources are adapted by a ratio relative to current number of documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8393) Component for Solr resource usage planning

2015-12-08 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-8393:
---
Attachment: SOLR-8393.patch

Patch based on replication's disk size estimate and adapted Luke's index RAM 
estimates. Solr RAM estimates tentatively derived from excel sheet provided in 
dev-tools.

> Component for Solr resource usage planning
> --
>
> Key: SOLR-8393
> URL: https://issues.apache.org/jira/browse/SOLR-8393
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
> Attachments: SOLR-8393.patch
>
>
> One question that keeps coming back is how much disk and RAM do I need to run 
> Solr. The most common response is that it highly depends on your data. While 
> true, it makes for frustrated users trying to plan their deployments. 
> The idea I'm bringing is to create a new component that will attempt to 
> extrapolate resources needed in the future by looking at resources currently 
> used. By adding a parameter for the target number of documents, current 
> resources are adapted by a ratio relative to current number of documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 240 - Failure!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/240/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([C83E12344CD10C1F:2164A90CD2489CB7]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:749)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

00


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at 

Solr resource usage estimation

2015-12-08 Thread Steve Molloy
Just sharing a crude component to attempt disk and RAM estimation for Solr. I 
know I'm asked often and usually cannot come up with an answer that is 
useful... I know there was an excel sheet attempting this; I actually used 
some of its logic. But I think it was not very straightforward to use. 
So the component bases its estimates on current usage and adjusts by a ratio 
based on current number of documents vs target number of documents.

Details in SOLR-8393.
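The ratio adjustment described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not the actual component code; the function name and signature are made up:

```python
def estimate_resource(current_usage: float, current_docs: int, target_docs: int) -> float:
    """Linearly scale an observed resource figure (disk bytes, RAM bytes, ...)
    by the ratio of target documents to currently indexed documents."""
    if current_docs <= 0:
        raise ValueError("need at least one indexed document to extrapolate")
    return current_usage * (target_docs / current_docs)

# e.g. a 10 GB index holding 1M docs, planned to grow to 5M docs
print(estimate_resource(10e9, 1_000_000, 5_000_000))
```

A straight ratio of course ignores per-segment overhead and merge behaviour, which is why this can only aim at rough planning rather than precise sizing.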

Steve
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Geoffrey Corey
If you do that, then the changes do not sync to github, and there's a 99%
chance that the next time a change is seen by the mirroring process (or by
the hourly cron that updates all the svn->git mirrors) the same memory leak
would happen.

On Tue, Dec 8, 2015 at 12:40 PM, Scott Blum  wrote:

> Dumb question, but searching around suggests that git-svn can be killed
> and then resumed with `git svn fetch`.  Shouldn't that resolve any
> process-level memory leak?
>
> On Fri, Dec 4, 2015 at 3:57 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Hello devs,
>>
>> The infra team has notified us (Lucene/Solr) that in 26 days our
>> git-svn mirror will be turned off, because running it consumes too
>> many system resources, affecting other projects, apparently because of
>> a memory leak in git-svn.
>>
>> Does anyone know of a link to this git-svn issue?  Is it a known
>> issue?  If there's something simple we can do (remove old jars from
>> our svn history, remove old branches), maybe we can sidestep the issue
>> and infra will allow it to keep running?
>>
>> Or maybe someone in the Lucene/Solr dev community with prior
>> experience with git-svn could volunteer to play with it to see if
>> there's a viable solution, maybe with command-line options e.g. to
>> only mirror specific branches (trunk, 5.x)?
>>
>> Or maybe it's time for us to switch to git, but there are problems
>> there too, e.g. we are currently missing large parts of our svn
>> history from the mirror now and it's not clear whether that would be
>> fixed if we switched:
>> https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
>> used to add JAR files to svn, the "git clone" would likely take
>> several GBs unless we remove those JARs from our history.
>>
>> Or if anyone has any other ideas, we should explore them, because
>> otherwise in 26 days there will be no more updates to the git mirror
>> of Lucene and Solr sources...
>>
>> Thanks,
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>


Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Upayavira
Here's what I've just got on the Infra hipchat channel:

The ASF has a tool, svn2gitupdate[1], which I presume uses git-svn,
which fails periodically. When it does fail, it takes with it all other
ASF projects that make use of the same tool, until an admin can
intervene and restart things.

When it fails, it OOMs, and blocks all disk activity.

If someone wanted to reproduce this issue, you could:
* Create a 4Gb VM
* Install svn2gitupdate from [1]
* Clone the Lucene git repo from ASF git or github
* Run the tool repeatedly until it fails - it is the pull from SVN that
  fails, not the push to git, so we don't need a remote git server


The other option is just switching to Git. Now, given the issue is with
reading from SVN, not writing to Git, Infrastructure *would* be able to
give us a decent SVN->Git export - even if they had to rerun the process
a number of times, this would be acceptable as a one-off task.

So it seems we have two options:
1) Set up a VM and debug reading from SVN
2) Just migrate to Git and be done with it.

Thoughts? Volunteers?

Upayavira

[1] 
https://svn.apache.org/repos/infra/infrastructure/trunk/projects/git/svn2gitupdate/


On Tue, Dec 8, 2015, at 08:49 PM, Geoffrey Corey wrote:
> If you do that, then the changes do not sync to github, and there's a
> 99% chance that the next time a change is seen by the mirroring
> process (or by the hourly cron that updates all the svn->git mirrors)
> the same memory leak would happen.
>
> On Tue, Dec 8, 2015 at 12:40 PM, Scott Blum
>  wrote:
>> Dumb question, but searching around suggests that git-svn can be
>> killed and then resumed with `git svn fetch`.  Shouldn't that resolve
>> any process-level memory leak?
>>
>> On Fri, Dec 4, 2015 at 3:57 PM, Michael McCandless
>>  wrote:
>>> Hello devs,
>>>
>>> The infra team has notified us (Lucene/Solr) that in 26 days our
>>> git-svn mirror will be turned off, because running it consumes too
>>> many system resources, affecting other projects, apparently because of
>>> a memory leak in git-svn.
>>>
>>> Does anyone know of a link to this git-svn issue?  Is it a known
>>> issue?  If there's something simple we can do (remove old jars from
>>> our svn history, remove old branches), maybe we can sidestep the issue
>>> and infra will allow it to keep running?
>>>
>>> Or maybe someone in the Lucene/Solr dev community with prior
>>> experience with git-svn could volunteer to play with it to see if
>>> there's a viable solution, maybe with command-line options e.g. to
>>> only mirror specific branches (trunk, 5.x)?
>>>
>>> Or maybe it's time for us to switch to git, but there are problems
>>> there too, e.g. we are currently missing large parts of our svn
>>> history from the mirror now and it's not clear whether that would be
>>> fixed if we switched:
>>> https://issues.apache.org/jira/browse/INFRA-10828  Also, because we
>>> used to add JAR files to svn, the "git clone" would likely take
>>> several GBs unless we remove those JARs from our history.
>>>
>>> Or if anyone has any other ideas, we should explore them, because
>>> otherwise in 26 days there will be no more updates to the git mirror
>>> of Lucene and Solr sources...
>>>
>>> Thanks,
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org


Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Dawid Weiss
> Don’t know how much we have of historic jars in our history.

I actually do know. Or will know. In about ~10 hours. I wrote a script
that does the following:

1) git log all revisions touching https://svn.apache.org/repos/asf/lucene
2) grep revision numbers
3) use svnrdump to get every single commit (revision) above, in
incremental mode.

This will allow me to:

1) recreate only Lucene/ Solr SVN, locally.
2) measure the size of SVN repo.
3) measure the size of any conversion to git (even if it's one-by-one
checkout, then-sync with git).
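Step 2 (pulling the revision numbers out of the log output) is the only non-obvious part. A rough sketch, assuming svn-log-style lines such as the r1240618 entry quoted below (the sample data and function name here are illustrative):

```python
import re

SAMPLE_LOG = """\
r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1 line
r1240617 | dweiss | 2012-02-04 21:00:00 +0100 (Sat, 04 Feb 2012) | 1 line
"""

def revision_numbers(log_text: str) -> list[int]:
    """Grep the leading rNNNN markers and return revisions in ascending order."""
    return sorted(int(m) for m in re.findall(r"^r(\d+)\b", log_text, re.MULTILINE))

# Each revision would then be fed to `svnrdump dump --incremental -r <rev>`
# to rebuild the repository dump one commit at a time.
print(revision_numbers(SAMPLE_LOG))
```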

From what I see up until now, size should not be an issue at all. Even
with all binary blobs so far the SVN incremental dumps measure ~3.7G
(and I'm about 75% done). There is one interesting super-large commit,
this one:

svn log -r1240618 https://svn.apache.org/repos/asf/lucene

r1240618 | gsingers | 2012-02-04 22:45:17 +0100 (Sat, 04 Feb 2012) | 1 line

LUCENE-2748: bring in old Lucene docs

This commit diff weighs... wait for it... 1.3G! I didn't check what
it actually was.

Will keep you posted.

D.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2015-12-08 Thread Steve Molloy (JIRA)
Steve Molloy created SOLR-8394:
--

 Summary: Luke handler doesn't support FilterLeafReader
 Key: SOLR-8394
 URL: https://issues.apache.org/jira/browse/SOLR-8394
 Project: Solr
  Issue Type: Improvement
Reporter: Steve Molloy


When fetching index information, luke handler only looks at ramBytesUsed for 
SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no RAM 
usage is returned.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-08 Thread Jan Høydahl
The lfs cost at GitHub starts at >1Gb. Don’t know how much we have of historic 
jars in our history. Also, as far as I understand, Apache is free to install 
their own git-lfs server, so the repository would use an Apache-operated server 
for storing the large files instead of GitHub’s own storage service. Since we 
don’t check in jars anymore, this would be a one-time sync to populate lfs; 
git clients would then get small pointer files in place of the large files, and 
would need to install git-lfs and run "git lfs fetch" in order to replace these 
with proper binaries.

However, until we know more about why the svn-git mirroring breaks, we cannot 
say whether LFS would help at all.

If we migrate to git (which I’m totally in favor of), I guess LFS could be a 
way to migrate ALL history at a lower cost, so new users can clone the whole 
repo faster. It will be the “git lfs fetch” stage that takes time if the user 
chooses to fetch various large files. For normal usage of current branches it 
will not be necessary. Win win.
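As a rough illustration (not a concrete proposal; the exact server setup would be Infra's call), tracking historic jars with git-lfs amounts to the `.gitattributes` entry that `git lfs track "*.jar"` generates:

```
# .gitattributes (illustrative) - route *.jar blobs through git-lfs
*.jar filter=lfs diff=lfs merge=lfs -text
```

Clients without git-lfs installed then see only the small pointer files; `git lfs fetch` pulls down the actual binaries on demand.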

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 8. des. 2015 kl. 19.57 skrev Dennis Gove :
> 
> github will reject files larger than 100MB and will warn for files larger 
> than 50MB (https://help.github.com/articles/working-with-large-files/). They have 
> recently released Git Large File Storage to alleviate issues caused by these 
> restrictions 
> (https://github.com/blog/1986-announcing-git-large-file-storage-lfs) but 
> there is a cost associated with using such a thing so I would imagine that 
> path is a no-go. The limit is on a per-file basis and in other projects I've 
> gotten around it by using split to split large files before adding to a 
> github repo and then using cat to combine the pieces back before using the 
> file. I'm not sure how feasible of a solution that would be for us but 
> perhaps we could add hooks to do the split-ting and cat-ing automatically for 
> users. 
> 
> I'm in favor of a full switch to git (and github).
> 
> Doing so would require changes to the ant build scripts, as at least one command 
> (package and related package commands) requires an svn checkout to add some 
> information to the created package. We'd have to change that logic to instead 
> look at git metadata.
> 
> On Mon, Dec 7, 2015 at 2:48 AM, Dawid Weiss wrote:
> I tried it once (for storing large text files -- Polish dictionaries,
> uncompressed -- on github), but it simply didn't work. More headaches
> than benefits (to me).
> 
> Dawid
> 
> On Sun, Dec 6, 2015 at 10:04 PM, Doug Turnbull wrote:
> > I had not heard of git-lfs looks promising
> >
> > https://git-lfs.github.com/
> >
> >
> > On Sunday, December 6, 2015, Jan Høydahl wrote:
> >>
> >> If the size of historic jars is the problem here, would looking into
> >> git-lfs for *.jar be one workaround? I might also be totally off here :-)
> >>
> >> --
> >> Jan Høydahl, search solution architect
> >> Cominvent AS - www.cominvent.com 
> >>
> >> On 6 Dec 2015, at 00:46, Scott Blum wrote:
> >>
> >> If lucene was a new project being started today, is there any question
> >> about whether it would be managed in svn or git?  If not, this might be a
> >> good impetus for moving to a better world.
> >>
> >> On Sat, Dec 5, 2015 at 6:19 PM, Yonik Seeley wrote:
> >>>
> >>> On Sat, Dec 5, 2015 at 5:53 PM, david.w.smi...@gmail.com wrote:
> >>> > I understand Gus; but we’d like to separate the question of wether we
> >>> > should
> >>> > move from svn to git from fixing the git mirror.
> >>>
> >>> Except moving to git is one path to fixing the issue, so it's not
> >>> really separate.
> >>> Give the multiple problems that the svn-git bridge seems to have (both
> >>> memory leaks + history), perhaps the sooner we switch to git, the
> >>> better.
> >>>
> >>> -Yonik
> >>>
> >>> -
> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>>
> >>
> >>
> >
> >
> > --
> > Doug Turnbull | Search Relevance Consultant | OpenSource Connections, LLC |
> > 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5456 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5456/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([164771DD18399354]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.LeaderFailoverAfterPartitionTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([164771DD18399354]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.LeaderFailoverAfterPartitionTest

Error Message:
Captured an uncaught exception in thread: Thread[id=567, 
name=SocketProxy-Response-64035:49885, state=RUNNABLE, 
group=TGRP-LeaderFailoverAfterPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=567, name=SocketProxy-Response-64035:49885, 
state=RUNNABLE, group=TGRP-LeaderFailoverAfterPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([164771DD18399354]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:348)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:345)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.LeaderFailoverAfterPartitionTest

Error Message:
Captured an uncaught exception in thread: Thread[id=598, 
name=SocketProxy-Request-53097:64035, state=RUNNABLE, 
group=TGRP-LeaderFailoverAfterPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=598, name=SocketProxy-Request-53097:64035, 
state=RUNNABLE, group=TGRP-LeaderFailoverAfterPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([164771DD18399354]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:348)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:345)




Build Log:
[...truncated 10805 lines...]
   [junit4] Suite: org.apache.solr.cloud.LeaderFailoverAfterPartitionTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.LeaderFailoverAfterPartitionTest_164771DD18399354-001\init-core-data-001
   [junit4]   2> 91496 INFO  
(SUITE-LeaderFailoverAfterPartitionTest-seed#[164771DD18399354]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 91506 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 91506 INFO  (Thread-130) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 91506 INFO  (Thread-130) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 91607 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.ZkTestServer start zk server on port:63996
   [junit4]   2> 91608 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 91609 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 91612 INFO  (zkCallback-71-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@b8d8e0 name:ZooKeeperConnection 
Watcher:127.0.0.1:63996 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 91612 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 91612 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 91612 INFO  
(TEST-LeaderFailoverAfterPartitionTest.test-seed#[164771DD18399354]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 91616 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught end of stream exception
   [junit4]   2> EndOfStreamException: Unable to read additional data from 
client sessionid 0x1518363308b, likely client has closed socket
   [junit4]   2>

[jira] [Comment Edited] (SOLR-8227) Recovering replicas should be able to recover from any active replica

2015-12-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048224#comment-15048224
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8227 at 12/9/15 7:48 AM:
-

I think you misunderstand the difference between an update command in the 
update log / transaction log (which is passed on during peersync) and a 
document in the index. It is guaranteed that after an update is ack'ed by the 
leader to the client, the effect would be the same on the indexes across all 
replicas (barring bugs). However, it may be possible that the update logs / 
transaction logs are different across the replicas (as the commit commands are 
not versioned when distributed, so they could be in slightly different points 
in time). That would mean not exactly same set of update commands are in the 
update log / transaction log for every replica. Does that answer your question?


was (Author: ichattopadhyaya):
I think you misunderstand the difference between an update command in the 
update log / transaction log (which is passed on during peersync) and a 
document in the index. It is guaranteed that after an update is ack'ed by the 
leader to the client, the effect would be the same on the index across all 
replicas (barring bugs). However, it may be possible that the update logs / 
transaction logs are different across the replicas (as the commit commands are 
not versioned when distributed, so they could be in slightly different points 
in time). That would mean not exactly same set of update commands are in the 
update log / transaction log for every replica. Does that answer your question?

> Recovering replicas should be able to recover from any active replica
> -
>
> Key: SOLR-8227
> URL: https://issues.apache.org/jira/browse/SOLR-8227
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>
> Currently when a replica goes into recovery it uses the leader to recover. It 
> first tries to do a PeerSync. If that's not successful it does a 
> replication. Most of the time it ends up doing a full replication because 
> segment merging and autoCommits cause segments to be formed differently on the 
> replicas (we should explore improving that in another issue). 
> But when many replicas are recovering and hitting the leader, the leader can 
> become a bottleneck. Since Solr is a CP system, we should be able to recover 
> from any of the 'active' replicas instead of just the leader. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8387) Solr example configs should ship with managed-schema instead of schema.xml

2015-12-08 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-8387:

Attachment: SOLR-8387.patch

Patch which will apply against trunk and 5.x. I don't think this Jira requires 
an entry in the "Upgrading from Solr 5.4" section, since nothing changes for 
users in 5.x in terms of default behaviour.

The default behaviour changes in 6.0, which has already been documented in the 
"Upgrading from Solr 5.x" section as part of SOLR-8131.




> Solr example configs should ship with managed-schema instead of schema.xml
> --
>
> Key: SOLR-8387
> URL: https://issues.apache.org/jira/browse/SOLR-8387
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: Trunk
>Reporter: Varun Thacker
> Fix For: Trunk, 5.5
>
> Attachments: SOLR-8387.patch, SOLR-8387.patch
>
>
> This is a followup of SOLR-8131. In SOLR-8131, if a schema factory is not 
> specified explicitly, managed schema will be used.
> Now since managed schema factory is the default, when a user goes to start 
> solr 6.0 their schema.xml file will get converted to managed-schema. This 
> might seem trappy or confusing to a user. Hence why don't we directly ship 
> with a file called {{managed-schema}} instead of {{schema.xml}}? Just a 
> rename of the files in all the example configs that we ship. The data_driven 
> config does that already.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8227) Recovering replicas should be able to recover from any active replica

2015-12-08 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15048224#comment-15048224
 ] 

Ishan Chattopadhyaya commented on SOLR-8227:


I think you misunderstand the difference between an update command in the 
update log / transaction log (which is passed on during peersync) and a 
document in the index. It is guaranteed that after an update is ack'ed by the 
leader to the client, the effect would be the same on the index across all 
replicas (barring bugs). However, it may be possible that the update logs / 
transaction logs are different across the replicas (as the commit commands are 
not versioned when distributed, so they could be in slightly different points 
in time). That would mean not exactly same set of update commands are in the 
update log / transaction log for every replica. Does that answer your question?

> Recovering replicas should be able to recover from any active replica
> -
>
> Key: SOLR-8227
> URL: https://issues.apache.org/jira/browse/SOLR-8227
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>
> Currently when a replica goes into recovery it uses the leader to recover. It 
> first tries to do a PeerSync. If that's not successful it does a 
> replication. Most of the time it ends up doing a full replication because 
> segment merging and autoCommits cause segments to be formed differently on the 
> replicas (we should explore improving that in another issue). 
> But when many replicas are recovering and hitting the leader, the leader can 
> become a bottleneck. Since Solr is a CP system, we should be able to recover 
> from any of the 'active' replicas instead of just the leader. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-5766) Add a random full cluster restart to ChaosMonkey tests.

2015-12-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-5766:
--
Comment: was deleted

(was: This initial test has actually been committed already. I'm working on a 
much beefed-up version, though, with additional fault injection to test 
various cluster restart issues.)

> Add a random full cluster restart to ChaosMonkey tests.
> ---
>
> Key: SOLR-5766
> URL: https://issues.apache.org/jira/browse/SOLR-5766
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-5766.patch
>
>
> We can't yet easily keep sending updates during the restart until we have the 
> ability to say how many replicas the update must reach to be successful, but 
> we can easily stop updates, do a full cluster restart and then start updates 
> again.






[jira] [Commented] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047731#comment-15047731
 ] 

Mike Drob commented on SOLR-8367:
-

In SyncStrategy, the try-catch at line ~218 can go away.
Do we need to check {{isClosed}} again during {{requestRecoveries}}? It's 
possible that it's false while the recoveries are being set up but flips to 
true by the time we actually make the RPC.
An optimization might be to break out of the whole loop once closed, since it 
looks like not much will be happening anyway.
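The isClosed concern above can be sketched like this (a hypothetical Python model of the pattern, not the actual SyncStrategy/requestRecoveries code; all names are illustrative): re-check the flag immediately before each RPC, and abandon the rest of the loop once closed.

```python
# Hypothetical sketch of the suggested fix: a 'closed' flag may flip to True
# between setting up the recoveries and actually sending the RPCs, so it is
# re-checked before each send, and the whole loop is abandoned once closed.

class RecoveryRequester:
    def __init__(self, send_rpc):
        self.closed = False       # flipped by another thread on shutdown
        self.send_rpc = send_rpc  # the recovery RPC to each replica

    def request_recoveries(self, replicas):
        sent = []
        for replica in replicas:
            if self.closed:       # may have become true since setup
                break             # skip the remaining replicas entirely
            self.send_rpc(replica)
            sent.append(replica)
        return sent

calls = []
requester = RecoveryRequester(calls.append)
requester.closed = True           # simulate shutdown before the RPCs go out
assert requester.request_recoveries(["replica1", "replica2"]) == []
assert calls == []                # no RPCs were made after close
```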

In ZkController, can we log which retry we are on (in both places)? That will 
make parsing logs easier when this failure happens.

You have a couple of {{System.out.println}} calls in SolrCore that could 
probably be {{log.debug}} or even {{trace}}.

In the test, you could store {{((HttpSolrClient) clients.get(0)).getBaseURL()}} 
as a local variable instead of loading it twice.

If you're fixing shard count to 2 in the test, do we still want to 
{{createCollection(testCollectionName, 1, 3, 1);}} for three shards?

Architecture question -- What happens when you send an update FROMLEADER to the 
one that happens to be the leader? Also, why are we using decreasing versions?

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8367.patch, SOLR-8367.patch
>
>
> For one, it currently only kicks in the first attempted leader. If it's 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.






[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 357 - Still Failing

2015-12-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/357/

No tests ran.

Build Log:
[...truncated 52537 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 28.3 MB in 0.03 sec (835.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 65.4 MB in 0.08 sec (804.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 75.9 MB in 0.09 sec (821.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5953 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 5953 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 211 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (150.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.0-src.tgz...
   [smoker] 37.2 MB in 0.05 sec (769.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.tgz...
   [smoker] 132.1 MB in 0.17 sec (776.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.zip...
   [smoker] 140.4 MB in 0.18 sec (798.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]  

[jira] [Commented] (SOLR-8279) Add a new SolrCloud test that stops and starts the cluster while indexing data.

2015-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047625#comment-15047625
 ] 

Mark Miller commented on SOLR-8279:
---

This initial test has actually been committed already. I'm working on a much 
beefed-up version, though, with additional fault injection to test various 
cluster restart issues.

> Add a new SolrCloud test that stops and starts the cluster while indexing 
> data.
> ---
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch
>
>







[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk-9-ea+95) - Build # 14849 - Failure!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14849/
Java: 32bit/jdk-9-ea+95 -client -XX:+UseG1GC -XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=10265, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=10268, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=10269, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=10267, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=10266, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=10265, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=10268, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at 

[jira] [Commented] (SOLR-8378) Add upconfig and downconfig commands to the bin/solr script and managed-only schema to configsets

2015-12-08 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047835#comment-15047835
 ] 

Mike Drob commented on SOLR-8378:
-

I think what Greg is suggesting is that bin/solr remain the main 
entry point, but that you invoke it as {{solr zk upconfig}} and {{solr zk 
downconfig}}. Then maybe someday we add {{solr zk foo}} and everything is 
happily namespaced. This is the same pattern Hadoop used for a while, 
although they eventually split into multiple executables, IIRC.
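The namespacing pattern being suggested might look roughly like this (a hypothetical Python sketch; the {{upconfig}}/{{downconfig}} names come from the discussion, but the dispatch code is illustrative, not the real bin/solr implementation): one entry point routes {{solr zk <subcommand>}} through a nested command table, so new zk tools slot in without new executables.

```python
# Hypothetical sketch of a namespaced subcommand dispatcher. A single entry
# point recognizes the 'zk' namespace and looks the subcommand up in a table;
# adding 'solr zk foo' later means adding one table entry, nothing more.

ZK_COMMANDS = {
    "upconfig":   lambda args: f"upload configset {args}",
    "downconfig": lambda args: f"download configset {args}",
}

def dispatch(argv):
    """Route ['zk', subcommand, *rest] to the matching handler."""
    if argv and argv[0] == "zk":
        sub, rest = argv[1], argv[2:]
        return ZK_COMMANDS[sub](rest)
    raise SystemExit(f"unknown command: {argv[:1]}")

# e.g. 'solr zk upconfig -z localhost:2181' becomes:
result = dispatch(["zk", "upconfig", "-z", "localhost:2181"])
assert result.startswith("upload")
```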

> Add upconfig and downconfig commands to the bin/solr script and managed-only 
> schema to configsets
> -
>
> Key: SOLR-8378
> URL: https://issues.apache.org/jira/browse/SOLR-8378
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.4, Trunk
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-8378.patch, SOLR-8378.patch, SOLR-8378.patch, 
> SOLR-8378.patch, SOLR-8378.patch
>
>
> It would be convenient to be able to upload and download arbitrary configsets 
> to Zookeeper.
> This _might_ be the last thing we need before not requiring users be aware of 
> zkcli, which is awkward.






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk-9-ea+95) - Build # 15149 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15149/
Java: 32bit/jdk-9-ea+95 -client -XX:+UseSerialGC -XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=5430, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[719DA756EEE38794]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:178) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
2) Thread[id=5662, name=zkCallback-604-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=5431, 
name=zkCallback-604-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=5429, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[719DA756EEE38794]-SendThread(127.0.0.1:35530),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:994)5) 
Thread[id=5663, name=zkCallback-604-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=5430, 
name=TEST-CollectionsAPIDistributedZkTest.test-seed#[719DA756EEE38794]-EventThread,
 state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   2) Thread[id=5662, name=zkCallback-604-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at 

[jira] [Updated] (SOLR-8367) The new LIR 'all replicas participate' failsafe code needs to be improved.

2015-12-08 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8367:
--
Attachment: SOLR-8367.patch

Okay, here is basically what we need, with additional testing. Still need to 
give it another once-over.

> The new LIR 'all replicas participate' failsafe code needs to be improved.
> --
>
> Key: SOLR-8367
> URL: https://issues.apache.org/jira/browse/SOLR-8367
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8367.patch, SOLR-8367.patch
>
>
> For one, it currently only kicks in the first attempted leader. If it's 
> another replica that is stuck in LIR, it won't help.
> Second, when we attempt to be leader, knowing we might fail due to LIR, we 
> should not put other replicas into recovery if they fail to sync with us - 
> not until we know we will actually be leader.






[jira] [Commented] (SOLR-5766) Add a random full cluster restart to ChaosMonkey tests.

2015-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15047618#comment-15047618
 ] 

Mark Miller commented on SOLR-5766:
---

This initial test has actually been committed already. I'm working on a much 
beefed-up version, though, with additional fault injection to test various 
cluster restart issues.

> Add a random full cluster restart to ChaosMonkey tests.
> ---
>
> Key: SOLR-5766
> URL: https://issues.apache.org/jira/browse/SOLR-5766
> Project: Solr
>  Issue Type: Test
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-5766.patch
>
>
> We can't yet easily keep sending updates during the restart until we have the 
> ability to say how many replicas the update must reach to be successful, but 
> we can easily stop updates, do a full cluster restart and then start updates 
> again.






Re: [VOTE] Release Lucene/Solr 5.4.0-RC1

2015-12-08 Thread Tomás Fernández Löbbe
+1
SUCCESS! [1:22:46.915162]

On Tue, Dec 8, 2015 at 2:37 PM, Joel Bernstein  wrote:

> +1
>
> SUCCESS! [0:48:51.911862]
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Tue, Dec 8, 2015 at 10:56 AM, Adrien Grand  wrote:
>
>> +1
>> SUCCESS! [0:59:54.998323]
>>
>> Le mar. 8 déc. 2015 à 16:48, Jan Høydahl  a
>> écrit :
>>
>>> +1
>>> SUCCESS! [1:40:13.877370]
>>>
>>> --
>>> Jan Høydahl, search solution architect
>>> Cominvent AS - www.cominvent.com
>>>
>>> > 8. des. 2015 kl. 15.02 skrev Shalin Shekhar Mangar <
>>> shalinman...@gmail.com>:
>>> >
>>> > +1
>>> >
>>> > SUCCESS! [1:03:00.784525]
>>> >
>>> > On Tue, Dec 8, 2015 at 4:20 PM, Anshum Gupta 
>>> wrote:
>>> >> +1
>>> >>
>>> >> Smoke tester is happy on both Java7 and Java8.
>>> >>
>>> >> SUCCESS! [1:13:28.357352]
>>> >>
>>> >> On Sat, Dec 5, 2015 at 4:28 PM, Upayavira  wrote:
>>> >>>
>>> >>> Please vote for the RC1 release candidate for Lucene/Solr 5.4.0
>>> >>>
>>> >>> The artifacts can be downloaded from:
>>> >>>
>>> >>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> >>>
>>> >>> You can run the smoke tester directly with this command:
>>> >>> python3 -u dev-tools/scripts/smokeTestRelease.py
>>> >>>
>>> >>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.4.0-RC1-rev178046
>>> >>>
>>> >>> I will let this vote run until midnight (GMT) on Wednesday 9
>>> December.
>>> >>>
>>> >>> Please cast your votes! (and let me know, politely :-) if I missed
>>> >>> anything)
>>> >>>
>>> >>> Upayavira
>>> >>>
>>> >>> -
>>> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >>>
>>> >>
>>> >>
>>> >>
>>> >> --
>>> >> Anshum Gupta
>>> >
>>> >
>>> >
>>> > --
>>> > Regards,
>>> > Shalin Shekhar Mangar.
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>> >
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>


[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15148 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15148/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=869, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)2) Thread[id=873, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)3) Thread[id=870, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)4) Thread[id=872, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)5) Thread[id=871, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632) 
at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=869, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=873, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at 

[JENKINS-EA] Lucene-Solr-5.4-Linux (64bit/jdk-9-ea+95) - Build # 349 - Still Failing!

2015-12-08 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.4-Linux/349/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testSkip

Error Message:


Stack Trace:
java.lang.ExceptionInInitializerError
at 
__randomizedtesting.SeedInfo.seed([D1A0B4B7E1CD6704:17CF1C24D1875D03]:0)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:146)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120)
at org.apache.solr.handler.dataimport.TikaEntityProcessor.nextRow(TikaEntityProcessor.java:159)
at org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:476)
at org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:415)
at org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:330)
at org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:233)
at org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:417)
at org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:481)
at org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:186)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
at org.apache.solr.util.TestHarness.query(TestHarness.java:311)
at org.apache.solr.handler.dataimport.AbstractDataImportHandlerTestCase.runFullImport(AbstractDataImportHandlerTestCase.java:87)
at org.apache.solr.handler.dataimport.TestTikaEntityProcessor.testSkip(TestTikaEntityProcessor.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:520)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 

[jira] [Commented] (SOLR-8378) Add upconfig and downconfig commands to the bin/solr script and managed-only schema to configsets

2015-12-08 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15047649#comment-15047649
 ] 

Shawn Heisey commented on SOLR-8378:


Just the other day, I was silently cursing the fact that zkcli was buried in a 
deep directory under server, rather than living in the bin directory.

{quote}
zk upconfig
zk downconfig
{quote}

Assuming that I understand what you're proposing correctly, the command name 
you've described (zk) is very simple.  Perhaps more importantly, it is unlikely 
to be confused with the zkCli script that ships with ZooKeeper, which trips up 
some users trying to use Solr's zkcli.  I do wonder whether it should be named 
something like zksolr instead, so that its tie to Solr is more obvious.  The 
"zk" name is perfectly acceptable, unless Solr is packaged to LSB standards and 
the scripts end up somewhere like /usr/bin, in which case it would be 
confusing.
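
For illustration, the proposed subcommands might be invoked along these lines. 
The flag names here ({{-z}}, {{-n}}, {{-d}}) are purely hypothetical at this 
stage of the discussion, sketched by analogy with the existing zkcli options, 
and are not a committed interface:

{code}
# Hypothetical: push a local configset directory up to ZooKeeper
bin/solr zk upconfig -z localhost:2181 -n myconfig -d ./myconfig/conf

# Hypothetical: pull the named configset back down for inspection or backup
bin/solr zk downconfig -z localhost:2181 -n myconfig -d ./myconfig-backup
{code}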


> Add upconfig and downconfig commands to the bin/solr script and managed-only 
> schema to configsets
> -
>
> Key: SOLR-8378
> URL: https://issues.apache.org/jira/browse/SOLR-8378
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools
>Affects Versions: 5.4, Trunk
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-8378.patch, SOLR-8378.patch, SOLR-8378.patch, 
> SOLR-8378.patch, SOLR-8378.patch
>
>
> It would be convenient to be able to upload and download arbitrary configsets 
> to Zookeeper.
> This _might_ be the last thing we need before not requiring users be aware of 
> zkcli, which is awkward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


