[JENKINS] Lucene-trunk-Linux-java7-64-analyzers - Build # 1662 - Failure!

2014-04-05 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-java7-64-analyzers/1662/

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
startOffset must be non-negative, and endOffset must be >= startOffset, 
startOffset=9,endOffset=7

Stack Trace:
java.lang.IllegalArgumentException: startOffset must be non-negative, and 
endOffset must be >= startOffset, startOffset=9,endOffset=7
at 
__randomizedtesting.SeedInfo.seed([E08E59C10F9A374B:DD6F70A048882A8B]:0)
at 
org.apache.lucene.analysis.tokenattributes.OffsetAttributeImpl.setOffset(OffsetAttributeImpl.java:45)
at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:78)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:699)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:610)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:901)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
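For context, the IllegalArgumentException above is the offset invariant that
every token must satisfy: the start offset is non-negative and the end offset
never precedes the start. A minimal paraphrase of the check that fires in
OffsetAttributeImpl.setOffset (illustrative, not the verbatim Lucene source):

{code}
// Paraphrase of the invariant behind the exception above (the behavior of
// OffsetAttributeImpl.setOffset, not the verbatim Lucene source).
final class OffsetInvariant {
  static void check(int startOffset, int endOffset) {
    if (startOffset < 0 || endOffset < startOffset) {
      throw new IllegalArgumentException(
          "startOffset must be non-negative, and endOffset must be >= startOffset, "
              + "startOffset=" + startOffset + ",endOffset=" + endOffset);
    }
  }
}
{code}

The randomly composed chain made ShingleFilter emit a shingle whose end offset
(7) precedes its start offset (9), which trips this check.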

[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961012#comment-13961012
 ] 

Robert Muir commented on LUCENE-5569:
-

Yes, 5.0 is the time to fix this stuff. Tim is wrong, it's not a horrible burden 
to users. Users don't have to upgrade. And fixing the API makes it easier on 
newer users.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961013#comment-13961013
 ] 

Adrien Grand commented on LUCENE-5569:
--

+1 to the patch. Maybe we should leave a note about this renaming in the 
lucene/MIGRATE.txt?

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5577:


Description: 
This is a spinoff of the work that [~markrmil...@gmail.com], Uwe, and I have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests complete successfully. Failure to do so (on Windows) 
should be marked as an error.

* there should be a way to temporarily mark a class to circumvent the above 
check (for known offenders)

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs).

  was:
This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests completes successfully. failure to do so (on windows) 
should be marked as an error

* there should be a way to temporarily mark a class to circumvent the above 
check (for known offenders)

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs).


 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. failure to do so (on windows) 
 should be marked as an error
 * there should be a way to temporarily mark a class to circumvent the above 
 check (for known offenders)
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5577:


Description: 
This is a spinoff of the work that [~markrmil...@gmail.com], Uwe, and I have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests complete successfully. Failure to remove any files 
should be marked as an error (this will manifest itself on Windows with open 
file handles).

* there should be a way to temporarily mark a class to circumvent the above 
check (SuppressTempFileChecks annotation for known offenders; see the 
annotation sketch after this description). Temporary files of a class annotated 
with SuppressTempFileChecks will still be removed, but a failure to do so will 
not cause an error. Any files that couldn't be deleted will be printed to 
stdout with an appropriate message.

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs). The following system 
properties will work:
{code}
tests.leavetmpdir /* default */,
tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
tests.leavetemporary /* lowercase version of the above */,
solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
{code}
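To illustrate the opt-out described above for known offenders: a sketch of a
test class carrying the SuppressTempFileChecks annotation. Only the annotation
name comes from this description; its bugUrl element and its nesting inside
LuceneTestCase are assumptions, and the class below is hypothetical.

{code}
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.SuppressTempFileChecks;

// Hypothetical known offender: cleanup is still attempted, but any files that
// cannot be deleted are printed to stdout instead of failing the suite.
// The bugUrl value below is a placeholder, not a real issue.
@SuppressTempFileChecks(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-XXXX")
public class TestKnownOffender extends LuceneTestCase {
  public void testSomethingThatMayPinFiles() throws Exception {
    // test body that may leave an open file handle behind on Windows
  }
}
{code}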


  was:
This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests completes successfully. failure to do so (on windows) 
should be marked as an error

* there should be a way to temporarily mark a class to circumvent the above 
check (for known offenders)

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs).


 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
   

[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961017#comment-13961017
 ] 

Robert Muir commented on LUCENE-5569:
-

We should also follow up by completely nuking ReaderContext and 
AtomicReaderContext. This indirection hurts and complicates all core Lucene 
APIs, for all use cases, just to support bad practices and esoteric shit, like 
climbing up the reader tree and using slow wrappers.

It's OK if we are a little less flexible and simplify the API. For example, we 
could declare that readers are instances and have a docBase and parent. 
MultiReaders and other weird shit could wrap the readers to fix this up.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961018#comment-13961018
 ] 

Adrien Grand commented on LUCENE-5569:
--

+1 to do that in a follow-up issue

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961022#comment-13961022
 ] 

ASF subversion and git services commented on LUCENE-5577:
-

Commit 1585028 from dwe...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1585028 ]

LUCENE-5577: Temporary folder and file management (and cleanup facilities)

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5577:


Description: 
This is a spinoff of the work that [~markrmil...@gmail.com], Uwe, and I have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.
{code}
LTC#createTempDir()
LTC#createTempDir(String prefix)
LTC#createTempFile()
LTC#createTempFile(String prefix, String suffix)
{code}
Note the absence of a "create a temporary file/dir under a given directory" 
method. Such methods shouldn't be needed, since any temporary folder created by 
the above is guaranteed to be empty. If one still needs to create files inside 
a directory, use createTempDir() and then the relevant File class methods (see 
the usage sketch after this description).

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests complete successfully. Failure to remove any files 
should be marked as an error (this will manifest itself on Windows with open 
file handles).

* there should be a way to temporarily mark a class to circumvent the above 
check (SuppressTempFileChecks annotation for known offenders). Annotating a 
class with SuppressTempFileChecks will still attempt to remove temporary files, 
but will not cause an error if this fails. Any files that couldn't be deleted 
will be printed to stdout with an appropriate message.

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs). The following system 
properties will work:
{code}
tests.leavetmpdir /* default */,
tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
tests.leavetemporary /* lowercase version of the above */,
solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
{code}
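A minimal usage sketch of the helpers listed above. The method names come from
this description and the File-returning behaviour matches the 4.8-era API; the
test class and file names below are made up for illustration.

{code}
import java.io.File;

import org.apache.lucene.util.LuceneTestCase;

public class TestTempFileUsage extends LuceneTestCase {
  public void testScratchFiles() throws Exception {
    // Scratch directory under the suite's identifiable parent folder; removed
    // automatically at the end of the suite if all tests pass.
    File dir = createTempDir("scratch");

    // There is intentionally no createTempFile(parentDir, ...) variant: the
    // directory above is guaranteed to be empty, so plain File methods suffice.
    File nested = new File(dir, "nested.bin");
    assertTrue(nested.createNewFile());

    // Standalone temporary file, also tracked for end-of-suite cleanup.
    File tmp = createTempFile("prefix", ".tmp");
    assertTrue(tmp.exists());
  }
}
{code}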


  was:
This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests completes successfully. Failure to remove any files 
should be marked as an error (will manifest itself on windows with open file 
handles).

* there should be a way to temporarily mark a class to circumvent the above 
check (SuppressTempFileChecks annotation for known offenders). Annotating a 
class with SuppressTempFileChecks will still attempt to remove temporary files, 
but will not cause an error if this fails. Any files that couldn't be deleted 
will be printed to stdout with an appropriate message.

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs). The following system 
properties will work:
{code}
tests.leavetmpdir /* default */,
tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
tests.leavetemporary /* lowercase version of the above */,
solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
{code}



 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be 

[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961023#comment-13961023
 ] 

ASF subversion and git services commented on LUCENE-5577:
-

Commit 1585029 from dwe...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1585029 ]

LUCENE-5577: Added javadocs and minor renames.

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of create a temporary file/dir under a given directory 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation 
 use createTempDir() and then relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-5821) Search inconsistency on SolrCloud replicas

2014-04-05 Thread Maxim Novikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Novikov reopened SOLR-5821:
-


I have to reopen this issue as the inconsistency still exists regardless of the 
queries. I am adding details and screenshots in comments.

 Search inconsistency on SolrCloud replicas
 --

 Key: SOLR-5821
 URL: https://issues.apache.org/jira/browse/SOLR-5821
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
 Environment: SolrCloud:
 1 shard, 2 replicas
 Both instances/replicas have identical hardware/software:
 CPU(s): 4
 RAM: 8Gb
 HDD: 100Gb
 OS: CentOS 6.5
 ZooKeeper 3.4.5
 Tomcat 8.0.3
 Solr 4.6.1
 Servers are utilized to run Solr only.
Reporter: Maxim Novikov
Priority: Critical
  Labels: cloud, inconsistency, replica, search

 We use the following infrastructure:
 SolrCloud with 1 shard and 2 replicas. The index is built using 
 DataImportHandler (importing data from the database). The number of items in 
 the index can vary from 100 to 100,000,000.
 After indexing part of the data (not necessarily all the data, it is enough 
 to have a small number of items in the search index), we can observe that 
 Solr instances (replicas) return different results for the same search 
 queries. I believe it happens because some of the results have the same 
 scores, and Solr instances return those in a random order.
 PS This is a critical issue for us as we use a load balancer to scale Solr 
 through replicas, and as a result of this issue, we retrieve various results 
 for the same queries all the time. They are not necessarily completely 
 different, but even a couple of items that differ is a deal breaker.
 The expected behaviour would be to always get identical results for the same 
 search queries from all replicas. Otherwise, this cloud thing works just 
 unreliably.
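One standard mitigation for the score-tie ordering described above (a general
technique, not something proposed in this thread) is to add a deterministic
secondary sort on the collection's uniqueKey, so that equal-score documents
come back in the same order from every replica. A SolrJ sketch; the field name
"id" is an assumption about the schema:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrQuery.ORDER;

public class DeterministicQueries {
  // Sort by score first, then break ties on the uniqueKey ("id" is assumed)
  // so equal-score documents are returned in a stable order on all replicas.
  public static SolrQuery withTieBreaker(String queryString) {
    SolrQuery query = new SolrQuery(queryString);
    query.addSort("score", ORDER.desc);
    query.addSort("id", ORDER.asc);
    return query;
  }
}
{code}

Note that this only stabilizes ordering when the replicas hold identical data;
it does not address the document-count gap reported later in this thread.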



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[important] Temporary file handling.

2014-04-05 Thread Dawid Weiss
I've just committed:
https://issues.apache.org/jira/browse/LUCENE-5577

This moves temporary file/ directory creation methods from TestUtil to
LuceneTestCase and adds some safety checks that should fire on Windows
in case of unremovable temporary files. It should also help decrease
temporary disk space use.

If there are any problems with this patch (jenkins, etc.), feel free
to revert from trunk. I'll be offline tomorrow and won't be able to
take any actions.

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961025#comment-13961025
 ] 

Dawid Weiss commented on LUCENE-5577:
-

Mark, Uwe - do you know how to backport this to 4.x other than by manual labor? 
The patch is pretty extensive and I'm not really familiar with svn (looking for 
something like git's rebase).

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of create a temporary file/dir under a given directory 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation 
 use createTempDir() and then relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5821) Search inconsistency on SolrCloud replicas

2014-04-05 Thread Maxim Novikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Novikov updated SOLR-5821:


Affects Version/s: 4.7.1

 Search inconsistency on SolrCloud replicas
 --

 Key: SOLR-5821
 URL: https://issues.apache.org/jira/browse/SOLR-5821
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.7.1
 Environment: SolrCloud:
 1 shard, 2 replicas
 Both instances/replicas have identical hardware/software:
 CPU(s): 4
 RAM: 8Gb
 HDD: 100Gb
 OS: CentOS 6.5
 ZooKeeper 3.4.5
 Tomcat 8.0.3
 Solr 4.6.1
 Servers are utilized to run Solr only.
Reporter: Maxim Novikov
Priority: Critical
  Labels: cloud, inconsistency, replica, search
 Attachments: Screen Shot 2014-04-05 at 2.26.26 AM.png, Screen Shot 
 2014-04-05 at 2.26.41 AM.png


 We use the following infrastructure:
 SolrCloud with 1 shard and 2 replicas. The index is built using 
 DataImportHandler (importing data from the database). The number of items in 
 the index can vary from 100 to 100,000,000.
 After indexing part of the data (not necessarily all the data, it is enough 
 to have a small number of items in the search index), we can observe that 
 Solr instances (replicas) return different results for the same search 
 queries. I believe it happens because some of the results have the same 
 scores, and Solr instances return those in a random order.
 PS This is a critical issue for us as we use a load balancer to scale Solr 
 through replicas, and as a result of this issue, we retrieve various results 
 for the same queries all the time. They are not necessarily completely 
 different, but even a couple of items that differ is a deal breaker.
 The expected behaviour would be to always get identical results for the same 
 search queries from all replicas. Otherwise, this cloud thing works just 
 unreliably.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5821) Search inconsistency on SolrCloud replicas

2014-04-05 Thread Maxim Novikov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maxim Novikov updated SOLR-5821:


Attachment: Screen Shot 2014-04-05 at 2.26.41 AM.png
Screen Shot 2014-04-05 at 2.26.26 AM.png

SolrCloud infrastructure:

3 ZooKeeper nodes + 3 Solr replicas (1 shard) on Tomcat 7.

When importing the data from the database through one of the Solr instances 
(DataImportHandler), another Solr instance was down (it had to be restarted). 
You can see the result in the screenshots: the first machine reports 9,812,001 
items, while the one that was down for a couple of seconds reports 9,811,987.

PS The worst thing is that I can't see a way to synchronize them now. 
Replication requests via HTTP don't seem to work, since in SolrCloud all the 
nodes behave like masters, and an HTTP replication request (pulling data from 
the master to a slave) just fails. But even if it worked, it wouldn't really be 
appropriate: you would need to perform consistency checks all the time (as data 
keeps coming in) and do something on your own...

 Search inconsistency on SolrCloud replicas
 --

 Key: SOLR-5821
 URL: https://issues.apache.org/jira/browse/SOLR-5821
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.7.1
 Environment: SolrCloud:
 1 shard, 2 replicas
 Both instances/replicas have identical hardware/software:
 CPU(s): 4
 RAM: 8Gb
 HDD: 100Gb
 OS: CentOS 6.5
 ZooKeeper 3.4.5
 Tomcat 8.0.3
 Solr 4.6.1
 Servers are utilized to run Solr only.
Reporter: Maxim Novikov
Priority: Critical
  Labels: cloud, inconsistency, replica, search
 Attachments: Screen Shot 2014-04-05 at 2.26.26 AM.png, Screen Shot 
 2014-04-05 at 2.26.41 AM.png


 We use the following infrastructure:
 SolrCloud with 1 shard and 2 replicas. The index is built using 
 DataImportHandler (importing data from the database). The number of items in 
 the index can vary from 100 to 100,000,000.
 After indexing part of the data (not necessarily all the data, it is enough 
 to have a small number of items in the search index), we can observe that 
 Solr instances (replicas) return different results for the same search 
 queries. I believe it happens because some of the results have the same 
 scores, and Solr instances return those in a random order.
 PS This is a critical issue for us as we use a load balancer to scale Solr 
 through replicas, and as a result of this issue, we retrieve various results 
 for the same queries all the time. They are not necessarily completely 
 different, but even a couple of items that differ is a deal breaker.
 The expected behaviour would be to always get identical results for the same 
 search queries from all replicas. Otherwise, this cloud thing works just 
 unreliably.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961034#comment-13961034
 ] 

Dawid Weiss commented on LUCENE-5577:
-

Nevermind, got it.

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of create a temporary file/dir under a given directory 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation 
 use createTempDir() and then relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-5577.
-

Resolution: Fixed

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of create a temporary file/dir under a given directory 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation 
 use createTempDir() and then relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows with open file 
 handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-5577:


Description: 
This is a spinoff of the work that [~markrmil...@gmail.com], Uwe, and I have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.
{code}
LTC#createTempDir()
LTC#createTempDir(String prefix)
LTC#createTempFile()
LTC#createTempFile(String prefix, String suffix)
{code}
Note the absence of a "create a temporary file/dir under a given directory" 
method. Such methods shouldn't be needed, since any temporary folder created by 
the above is guaranteed to be empty. If one still needs to create files inside 
a directory, use createTempDir() and then the relevant File class methods.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests complete successfully. Failure to remove any files 
should be marked as an error (this will manifest itself on Windows if one 
leaves open file handles).

* there should be a way to temporarily mark a class to circumvent the above 
check (SuppressTempFileChecks annotation for known offenders). Annotating a 
class with SuppressTempFileChecks will still attempt to remove temporary files, 
but will not cause an error if this fails. Any files that couldn't be deleted 
will be printed to stdout with an appropriate message.

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs). The following system 
properties will work:
{code}
tests.leavetmpdir /* default */,
tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
tests.leavetemporary /* lowercase version of the above */,
solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
{code}


  was:
This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
initiated in SOLR-5914. 

The core concept is this:

* every test should create its temporary folders and files in a sub-folder of 
one, easily identifiable, parent.

* the parent folder is named after the test class, with [org.apache.] removed 
for brevity. the folder name includes the master seed and a generated sequence 
integer, if needed, to easily demarcate different runs (and identify which run 
produced which files).

* temporary folder/ file creation routines should be available in LTC, so that 
no additional imports/ fiddling is needed.
{code}
LTC#createTempDir()
LTC#createTempDir(String prefix)
LTC#createTempFile()
LTC#createTempFile(String prefix, String suffix)
{code}
Note the absence of create a temporary file/dir under a given directory 
method. These shouldn't be needed since any temporary folder created by the 
above is guaranteed to be empty. If one still needs temporary file creation use 
createTempDir() and then relevant File class methods.

* any temporary folders and files should be removable at the end of the suite 
(class), if all tests completes successfully. Failure to remove any files 
should be marked as an error (will manifest itself on windows with open file 
handles).

* there should be a way to temporarily mark a class to circumvent the above 
check (SuppressTempFileChecks annotation for known offenders). Annotating a 
class with SuppressTempFileChecks will still attempt to remove temporary files, 
but will not cause an error if this fails. Any files that couldn't be deleted 
will be printed to stdout with an appropriate message.

* there should be a way for developers to leave temporary files on disk for 
further inspection (even in case of successful runs). The following system 
properties will work:
{code}
tests.leavetmpdir /* default */,
tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
tests.leavetemporary /* lowercase version of the above */,
solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
{code}



 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every 

[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961037#comment-13961037
 ] 

ASF subversion and git services commented on LUCENE-5577:
-

Commit 1585035 from dwe...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1585035 ]

LUCENE-5577: Temporary folder and file management (and cleanup facilities)

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and me have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. the folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of create a temporary file/dir under a given directory 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation 
 use createTempDir() and then relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests completes successfully. Failure to remove any files 
 should be marked as an error (will manifest itself on windows if one leaves 
 open file handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT tasks's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */))
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-05 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961050#comment-13961050
 ] 

Uwe Schindler commented on LUCENE-2446:
---

But we should take care of it, maybe open an issue, so we add it once 4.8 is 
released?

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Fix For: 4.8, 5.0

 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].
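To make the idea concrete, here is a generic JDK sketch of checksumming a file
so that a copied index file can be verified against the original. This uses
plain java.util.zip and is only an illustration; the actual work in this issue
integrates checksums into the codec-level file formats.

{code}
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.CRC32;
import java.util.zip.CheckedInputStream;

public class FileChecksum {
  // CRC32 over a file's bytes; compare source and copy to spot corruption
  // introduced while shuffling index files between nodes.
  public static long crc32Of(String path) throws IOException {
    byte[] buffer = new byte[8192];
    try (CheckedInputStream in =
        new CheckedInputStream(new FileInputStream(path), new CRC32())) {
      while (in.read(buffer) != -1) {
        // reading the stream drives the checksum computation
      }
      return in.getChecksum().getValue();
    }
  }
}
{code}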



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-2446) Add checksums to Lucene segment files

2014-04-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961051#comment-13961051
 ] 

Robert Muir commented on LUCENE-2446:
-

It's part of the release workflow. Come on, this is nothing new and no 
different from any other format change. No need to single out this issue.

 Add checksums to Lucene segment files
 -

 Key: LUCENE-2446
 URL: https://issues.apache.org/jira/browse/LUCENE-2446
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/index
Reporter: Lance Norskog
  Labels: checksum
 Fix For: 4.8, 5.0

 Attachments: LUCENE-2446.patch


 It would be useful for the different files in a Lucene index to include 
 checksums. This would make it easy to spot corruption while copying index 
 files around; the various cloud efforts assume many more data-copying 
 operations than older single-index implementations.
 This feature might be much easier to implement if all index files are created 
 in a sequential fashion. This issue therefore depends on [LUCENE-2373].



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0) - Build # 3926 - Failure!

2014-04-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3926/
Java: 32bit/jdk1.8.0 -client -XX:+UseParallelGC

4 tests failed.
REGRESSION:  
org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.testMedium

Error Message:
The system cannot find the path specified

Stack Trace:
java.io.IOException: The system cannot find the path specified
at 
__randomizedtesting.SeedInfo.seed([C4796645DAA2B083:EB375FE691962B48]:0)
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at 
org.apache.lucene.util.LuceneTestCase.createTempFile(LuceneTestCase.java:2301)
at 
org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.randomOrdinalMap(TestAddTaxonomy.java:77)
at 
org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.dotest(TestAddTaxonomy.java:66)
at 
org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.testMedium(TestAddTaxonomy.java:163)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at java.lang.Thread.run(Thread.java:744)


REGRESSION:  
org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.testAddEmpty

Error Message:
The system cannot find the path specified


Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0) - Build # 3926 - Failure!

2014-04-05 Thread Dawid Weiss
I know what this is, fixing.

D.

On Sat, Apr 5, 2014 at 1:27 PM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3926/
 Java: 32bit/jdk1.8.0 -client -XX:+UseParallelGC

 4 tests failed.
 REGRESSION:  
 org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.testMedium

 Error Message:
 The system cannot find the path specified

 Stack Trace:
 java.io.IOException: The system cannot find the path specified
 at 
 __randomizedtesting.SeedInfo.seed([C4796645DAA2B083:EB375FE691962B48]:0)
 at java.io.WinNTFileSystem.createFileExclusively(Native Method)
 at java.io.File.createNewFile(File.java:1012)
 at 
 org.apache.lucene.util.LuceneTestCase.createTempFile(LuceneTestCase.java:2301)
 at 
 org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.randomOrdinalMap(TestAddTaxonomy.java:77)
 at 
 org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.dotest(TestAddTaxonomy.java:66)
 at 
 org.apache.lucene.facet.taxonomy.directory.TestAddTaxonomy.testMedium(TestAddTaxonomy.java:163)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:483)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961061#comment-13961061
 ] 

ASF subversion and git services commented on LUCENE-5577:
-

Commit 1585052 from dwe...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1585052 ]

LUCENE-5577: the cleanup code didn't cleanup after itself...

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and I have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. The folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of a "create a temporary file/dir under a given directory" 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation, 
 use createTempDir() and then the relevant File class methods (see the sketch 
 after this list).
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests complete successfully. Failure to remove any files 
 should be marked as an error (this will manifest itself on Windows if one leaves 
 open file handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT task's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */
 {code}
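
A minimal sketch of how a test could use these LTC helpers, assuming the 4.x 
behaviour where they return {{java.io.File}}; the class and file names are just 
illustrative:

{code}
import java.io.File;
import org.apache.lucene.util.LuceneTestCase;
import org.junit.Test;

public class TestTempFileUsage extends LuceneTestCase {
  @Test
  public void testWritesScratchFiles() throws Exception {
    File dir = createTempDir("scratch");              // empty folder under the per-class parent
    File data = new File(dir, "data.bin");            // plain File APIs from here on
    assertTrue(data.createNewFile());

    File single = createTempFile("snapshot", ".txt"); // standalone temporary file
    assertTrue(single.exists());
    // No manual cleanup: the suite removes everything and fails (unless
    // SuppressTempFileChecks is present) if removal is impossible.
  }
}
{code}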



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5960) Add Support for Basic Authentication to Post.jar

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961063#comment-13961063
 ] 

ASF subversion and git services commented on SOLR-5960:
---

Commit 1585057 from uschind...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1585057 ]

SOLR-5960: Add support for basic authentication in post.jar tool

 Add Support for Basic Authentication to Post.jar
 

 Key: SOLR-5960
 URL: https://issues.apache.org/jira/browse/SOLR-5960
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon
Assignee: Uwe Schindler
Priority: Minor
 Attachments: solr-5960-1.patch, solr-5960.patch


 Post.jar currently doesn't support Basic Authentication if Solr is configured 
 to use Basic Authentication.
 I've attached a patch that enables users to use post.jar if their Solr is 
 configured with Basic Authentication.
 Here's the example usage:
 java -Durl="http://username:password@hostname:8080/solr/update" -jar post.jar 
 sample.xml



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5577) Temporary folder and file management (and cleanup facilities)

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961062#comment-13961062
 ] 

ASF subversion and git services commented on LUCENE-5577:
-

Commit 1585053 from dwe...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1585053 ]

LUCENE-5577: the cleanup code didn't cleanup after itself...

 Temporary folder and file management (and cleanup facilities)
 -

 Key: LUCENE-5577
 URL: https://issues.apache.org/jira/browse/LUCENE-5577
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Minor
 Fix For: 4.8, 5.0


 This is a spinoff of the work [~markrmil...@gmail.com], Uwe and I have 
 initiated in SOLR-5914. 
 The core concept is this:
 * every test should create its temporary folders and files in a sub-folder of 
 one, easily identifiable, parent.
 * the parent folder is named after the test class, with [org.apache.] removed 
 for brevity. The folder name includes the master seed and a generated 
 sequence integer, if needed, to easily demarcate different runs (and identify 
 which run produced which files).
 * temporary folder/ file creation routines should be available in LTC, so 
 that no additional imports/ fiddling is needed.
 {code}
 LTC#createTempDir()
 LTC#createTempDir(String prefix)
 LTC#createTempFile()
 LTC#createTempFile(String prefix, String suffix)
 {code}
 Note the absence of a "create a temporary file/dir under a given directory" 
 method. These shouldn't be needed since any temporary folder created by the 
 above is guaranteed to be empty. If one still needs temporary file creation, 
 use createTempDir() and then the relevant File class methods.
 * any temporary folders and files should be removable at the end of the suite 
 (class), if all tests complete successfully. Failure to remove any files 
 should be marked as an error (this will manifest itself on Windows if one leaves 
 open file handles).
 * there should be a way to temporarily mark a class to circumvent the above 
 check (SuppressTempFileChecks annotation for known offenders). Annotating a 
 class with SuppressTempFileChecks will still attempt to remove temporary 
 files, but will not cause an error if this fails. Any files that couldn't be 
 deleted will be printed to stdout with an appropriate message.
 * there should be a way for developers to leave temporary files on disk for 
 further inspection (even in case of successful runs). The following system 
 properties will work:
 {code}
 tests.leavetmpdir /* default */,
 tests.leaveTemporary /* ANT task's (junit4) flag. */,
 tests.leavetemporary /* lowercase version of the above */,
 solr.test.leavetmpdir /* Solr's legacy property for backcompat */
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5960) Add Support for Basic Authentication to Post.jar

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961064#comment-13961064
 ] 

ASF subversion and git services commented on SOLR-5960:
---

Commit 1585062 from uschind...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1585062 ]

Merged revision(s) 1585057 from lucene/dev/trunk:
SOLR-5960: Add support for basic authentication in post.jar tool

 Add Support for Basic Authentication to Post.jar
 

 Key: SOLR-5960
 URL: https://issues.apache.org/jira/browse/SOLR-5960
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon
Assignee: Uwe Schindler
Priority: Minor
 Attachments: solr-5960-1.patch, solr-5960.patch


 Post.jar currently doesn't support Basic Authentication if Solr is configured 
 to use Basic Authentication.
 I've attached a patch that enables users to use post.jar if their Solr is 
 configured with Basic Authentication.
 Here's the example usage:
 java -Durl="http://username:password@hostname:8080/solr/update" -jar post.jar 
 sample.xml



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5960) Add Support for Basic Authentication to Post.jar

2014-04-05 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved SOLR-5960.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.8

I committed your patch; I just added a simple test for the URL parsing to 
SimplePostToolTest.

Unfortunately the whole tool has no real test case; it just uses a mock mode 
where it does nothing.

To set up a real test we would need to spawn a Jetty and configure it to use 
basic auth. I cannot do this at the moment, so I will leave this as untested 
code.
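
For reference, the URL-credential handling such a tool needs is small. This is 
only a generic sketch using standard JDK classes, not necessarily the code from 
the committed patch:

{code}
import java.net.HttpURLConnection;
import java.net.URL;
import javax.xml.bind.DatatypeConverter;

// If the -Durl value carries user:password, send it as a Basic Authorization
// header on the connection used for posting documents.
HttpURLConnection openWithBasicAuth(String urlString) throws Exception {
  URL url = new URL(urlString);
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  String userInfo = url.getUserInfo();  // "username:password" or null
  if (userInfo != null) {
    String encoded = DatatypeConverter.printBase64Binary(userInfo.getBytes("UTF-8"));
    conn.setRequestProperty("Authorization", "Basic " + encoded);
  }
  return conn;
}
{code}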

 Add Support for Basic Authentication to Post.jar
 

 Key: SOLR-5960
 URL: https://issues.apache.org/jira/browse/SOLR-5960
 Project: Solr
  Issue Type: Improvement
  Components: scripts and tools
Affects Versions: 4.7
Reporter: Sameer Maggon
Assignee: Uwe Schindler
Priority: Minor
 Fix For: 4.8, 5.0

 Attachments: solr-5960-1.patch, solr-5960.patch


 Post.jar currently doesn't support Basic Authentication if Solr is configured 
 to use Basic Authentication.
 I've attached a patch that enables users to use post.jar if their Solr is 
 configured with Basic Authentication.
 Here's the example usage:
 java -Durl="http://username:password@hostname:8080/solr/update" -jar post.jar 
 sample.xml



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961068#comment-13961068
 ] 

Yonik Seeley commented on LUCENE-5569:
--

FWIW, I agree with Tim.  There's not a right or wrong to it... it's a judgement 
call.  It's clear to me that the bar these days to renaming public APIs is far 
too low... but I appear to be outnumbered.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961076#comment-13961076
 ] 

Uwe Schindler commented on LUCENE-5569:
---

Hi,
I also tend to agree with Yonik and Tim. In general I would also prefer to 
rename it. Unfortunately I was the one who added that name in the big 
IndexReader refactoring. But at that time we all agreed on that name.
But we should have thought better about it. LeafReader is fine, especially 
because the other APIs already use "leaves", like {{ReaderContext#leaves()}}.
If we really want to rename it, we should do this shortly before the release 
of 5.0; otherwise merging from trunk to 4.x might be much harder. As this is 
just an Eclipse rename of 2 classes (AtomicReader, AtomicReaderContext), this 
is not much work, just an Eclipse automatism. The remaining classes are 
handled the same way: FilterAtomicReader and various test-only readers.
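
For context, the "leaves" terminology already shows up in the usual per-segment 
access pattern; a minimal 4.x-style sketch (after a rename this would read 
LeafReader/LeafReaderContext instead):

{code}
import org.apache.lucene.index.AtomicReader;
import org.apache.lucene.index.AtomicReaderContext;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.store.Directory;

void printLeafSizes(Directory dir) throws Exception {
  DirectoryReader reader = DirectoryReader.open(dir);
  try {
    for (AtomicReaderContext ctx : reader.leaves()) { // ReaderContext#leaves()
      AtomicReader leaf = ctx.reader();               // one segment ("leaf") reader
      System.out.println(leaf.maxDoc() + " docs in this leaf");
    }
  } finally {
    reader.close();
  }
}
{code}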

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-5569) Rename AtomicReader to LeafReader

2014-04-05 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961076#comment-13961076
 ] 

Uwe Schindler edited comment on LUCENE-5569 at 4/5/14 12:50 PM:


Hi,
I also tend to agree with Yonik and Tim. In general I would also prefer to 
rename it, but the work it requires from users is big, with no new features 
gained. A pure rename is useless. Unfortunately I was the one who added that 
name in the big IndexReader refactoring. But at that time we all agreed on 
that name.
But we should have thought better about it. LeafReader is fine, especially 
because the other APIs already use "leaves", like {{ReaderContext#leaves()}}.
If we really want to rename it, we should do this shortly before the release 
of 5.0; otherwise merging from trunk to 4.x might be much harder. As this is 
just an Eclipse rename of 2 classes (AtomicReader, AtomicReaderContext), this 
is not much work, just an Eclipse automatism. The remaining classes are 
handled the same way: FilterAtomicReader and various test-only readers.


was (Author: thetaphi):
Hi,
I also tend to agree with Yonik and Tim. In general I would also prefer to 
rename it. Unfortunately I was the one who added that name in the big 
IndexReader refactoring. But at that time we all agreed on that name.
But we should have thought better about it. LeafReader is fine, especially 
because the other APIs already use "leaves", like {{ReaderContext#leaves()}}.
If we really want to rename it, we should do this shortly before the release 
of 5.0; otherwise merging from trunk to 4.x might be much harder. As this is 
just an Eclipse rename of 2 classes (AtomicReader, AtomicReaderContext), this 
is not much work, just an Eclipse automatism. The remaining classes are 
handled the same way: FilterAtomicReader and various test-only readers.

 Rename AtomicReader to LeafReader
 -

 Key: LUCENE-5569
 URL: https://issues.apache.org/jira/browse/LUCENE-5569
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5569.patch


 See LUCENE-5527 for more context: several of us seem to prefer {{Leaf}} to 
 {{Atomic}}.
 Talking from my experience, I was a bit confused in the beginning that this 
 thing is named {{AtomicReader}}, since {{Atomic}} is otherwise used in Java 
 in the context of concurrency. So maybe renaming it to {{Leaf}} would help 
 remove this confusion and also carry the information that these readers are 
 used as leaves of top-level readers?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5056) Indexing non-point shapes close to the poles doesn't scale

2014-04-05 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961102#comment-13961102
 ] 

David Smiley commented on LUCENE-5056:
--

I was just thinking about this bug/shortcoming or whatever one might call it.  
There's an easy solution to this -- modify the algorithm that determines how 
many levels to go down so it's based on a Euclidean computation, not a 
geodesic one.  It means that the shape is going to be a lot more blocky 
(approximated) than the same shape on the equator.  But I feel that doesn't 
matter, or at least it won't matter soon once RecursivePrefixTree and 
SerializedDVStrategy get linked up such that indexed non-point shapes will be 
validated for precision against the actual vector geometry.  Once that happens, 
it will matter very little how few grid cells represent a shape, since you'll 
always have absolute precision as far as the shape geometry can calculate it.

 Indexing non-point shapes close to the poles doesn't scale
 --

 Key: LUCENE-5056
 URL: https://issues.apache.org/jira/browse/LUCENE-5056
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial
Affects Versions: 4.3
Reporter: Hal Deadman
Assignee: David Smiley
 Fix For: 4.8

 Attachments: indexed circle close to the pole.png


 From: [~hdeadman]
 We are seeing an issue where certain shapes are causing Solr to use up all 
 available heap space when a record with one of those shapes is indexed. We 
 were indexing polygons where we had the points going clockwise instead of 
 counter-clockwise and the shape would be so large that we would run out of 
 memory. We fixed those shapes but we are seeing this circle eat up about 
 700MB of memory before we get an OutOfMemory error (heap space) with a 1GB 
 JVM heap.
 Circle(3.0 90 d=0.0499542757922153)
 Google Earth can't plot that circle either, maybe it is invalid or too close 
 to the north pole due to the latitude of 90, but it would be nice if there 
 was a way for shapes to be validated before they cause an OOM error.
 The objects (4.5 million) are all GeohashPrefixTree$GhCell objects in an 
 ArrayList owned by PrefixTreeStrategy$CellTokenStream.
 Is there any way to have a max number of cells in a shape before it is 
 considered too large and not indexed? Is there a geo library that could 
 validate the shape as being reasonably sized and bounded before it is 
 processed?
 We are currently using Solr 4.1.
 <fieldType name="location_rpt" class="solr.SpatialRecursivePrefixTreeFieldType"
   spatialContextFactory="com.spatial4j.core.context.jts.JtsSpatialContextFactory"
   geo="true" distErrPct="0.025" maxDistErr="0.09" units="degrees" />
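
A minimal sketch of how one might reproduce the blow-up in isolation, assuming 
the 4.x spatial module classes named in the report; the field name and 
max-levels choice are placeholders:

{code}
import com.spatial4j.core.context.SpatialContext;
import com.spatial4j.core.shape.Shape;
import org.apache.lucene.spatial.prefix.RecursivePrefixTreeStrategy;
import org.apache.lucene.spatial.prefix.tree.GeohashPrefixTree;

// The reported circle sits on the pole (lat=90), so the grid decomposition
// explodes into millions of GhCell instances before anything is indexed.
void reproduce() {
  SpatialContext ctx = SpatialContext.GEO;
  Shape circle = ctx.makeCircle(3.0, 90, 0.0499542757922153); // lon, lat, radius (degrees)
  GeohashPrefixTree grid =
      new GeohashPrefixTree(ctx, GeohashPrefixTree.getMaxLevelsPossible());
  RecursivePrefixTreeStrategy strategy = new RecursivePrefixTreeStrategy(grid, "geo");
  strategy.setDistErrPct(0.025);          // same precision as the field type above
  strategy.createIndexableFields(circle); // this is where the heap gets exhausted
}
{code}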



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2412) Multipath hierarchical faceting

2014-04-05 Thread J.L. Hill (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13960777#comment-13960777
 ] 

J.L. Hill edited comment on SOLR-2412 at 4/5/14 4:38 PM:
-

After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Additionally, am I correct that the number of documents matching a facet field 
is to be in the count key rather than with the field name, as in standard 
faceting?
Thanks in advance.
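
For comparison, this is how the same restriction reads with stock Solr faceting 
via SolrJ; whether the patch's efacet.* parameters accept an equivalent prefix 
is exactly the open question above ("path_facet" and the URL are placeholders):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;

void facetOneBranch() throws Exception {
  HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
  SolrQuery q = new SolrQuery("*:*");
  q.setRows(0);
  q.setFacet(true);
  q.addFacetField("path_facet");
  q.setFacetPrefix("path_facet", "L0_T1/L1_T1"); // only facets below this branch
  QueryResponse rsp = server.query(q);
  System.out.println(rsp.getFacetField("path_facet").getValues());
}
{code}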


was (Author: hill):
After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Thanks in advance.

 Multipath hierarchical faceting
 ---

 Key: SOLR-2412
 URL: https://issues.apache.org/jira/browse/SOLR-2412
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: Fast IO when huge hierarchies are used
Reporter: Toke Eskildsen
  Labels: contrib, patch
 Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
 SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch


 Hierarchical faceting with slow startup, low memory overhead and fast 
 response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
   * Multiple paths per document
   * Query-time analysis of the facet-field; no special requirements for 
 indexing besides retaining separator characters in the terms used for faceting
   * Optional custom sorting of tag values
   * Recursive counting of references to tags at all levels of the output
 This is a shell around LUCENE-2369, making it work with the Solr API. The 
 underlying principle is to reference terms by their ordinals and create an 
 index wide documents to tags map, augmented with a compressed representation 
 of hierarchical levels.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2412) Multipath hierarchical faceting

2014-04-05 Thread J.L. Hill (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13960777#comment-13960777
 ] 

J.L. Hill edited comment on SOLR-2412 at 4/5/14 4:39 PM:
-

After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Additionally, am I correct that the number of documents matching a facet field 
is to be in the count key rather than with the facet field name, as in 
standard faceting?
Thanks in advance.


was (Author: hill):
After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Additionally, am I correct that the number of documents matching a facet field 
are to be in the count key rather than with the field name as in the standard 
faceting?
Thanks in advance.

 Multipath hierarchical faceting
 ---

 Key: SOLR-2412
 URL: https://issues.apache.org/jira/browse/SOLR-2412
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: Fast IO when huge hierarchies are used
Reporter: Toke Eskildsen
  Labels: contrib, patch
 Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
 SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch


 Hierarchical faceting with slow startup, low memory overhead and fast 
 response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
   * Multiple paths per document
   * Query-time analysis of the facet-field; no special requirements for 
 indexing besides retaining separator characters in the terms used for faceting
   * Optional custom sorting of tag values
   * Recursive counting of references to tags at all levels of the output
 This is a shell around LUCENE-2369, making it work with the Solr API. The 
 underlying principle is to reference terms by their ordinals and create an 
 index wide documents to tags map, augmented with a compressed representation 
 of hierarchical levels.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2412) Multipath hierarchical faceting

2014-04-05 Thread J.L. Hill (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13960777#comment-13960777
 ] 

J.L. Hill edited comment on SOLR-2412 at 4/5/14 4:41 PM:
-

After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Additionally, am I correct that the number of documents matching a facet field 
is to be in the count field/key (in standard faceting, it is with the facet 
field name)? The count seems not to match, but I am still testing.
Thanks in advance.


was (Author: hill):
After spending the past few days on this, I am a bit stuck on how to limit the 
facets returned to a sublevel of the facets. For example, from the example 
above, only returning the facets L1_T1 and those below it. From normal 
faceting, I think it would be done via facet.prefix=L0_T1/L1_T1
I tried facet.prefix and efacet.prefix. 
Additionally, am I correct that the number of documents matching a facet field 
are to be in the count key rather than with the facet field name as in the 
standard faceting?
Thanks in advance.

 Multipath hierarchical faceting
 ---

 Key: SOLR-2412
 URL: https://issues.apache.org/jira/browse/SOLR-2412
 Project: Solr
  Issue Type: New Feature
  Components: SearchComponents - other
Affects Versions: 4.0
 Environment: Fast IO when huge hierarchies are used
Reporter: Toke Eskildsen
  Labels: contrib, patch
 Attachments: SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, 
 SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch, SOLR-2412.patch


 Hierarchical faceting with slow startup, low memory overhead and fast 
 response. Distinguishing features as compared to SOLR-64 and SOLR-792 are
   * Multiple paths per document
   * Query-time analysis of the facet-field; no special requirements for 
 indexing besides retaining separator characters in the terms used for faceting
   * Optional custom sorting of tag values
   * Recursive counting of references to tags at all levels of the output
 This is a shell around LUCENE-2369, making it work with the Solr API. The 
 underlying principle is to reference terms by their ordinals and create an 
 index wide documents to tags map, augmented with a compressed representation 
 of hierarchical levels.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5654) Create a synonym filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961170#comment-13961170
 ] 

ASF subversion and git services commented on SOLR-5654:
---

Commit 1585147 from sar...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1585147 ]

SOLR-5654: Create a synonym filter factory that is (re)configurable, and 
capable of reporting its configuration, via REST API (merged trunk r1584211)

 Create a synonym filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 --

 Key: SOLR-5654
 URL: https://issues.apache.org/jira/browse/SOLR-5654
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
 Attachments: SOLR-5654.patch, SOLR-5654.patch, SOLR-5654.patch, 
 SOLR-5654.patch, SOLR-5654.patch


 A synonym filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its synonyms resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the synonyms resource file.
 It should be possible to add/remove/modify one or more entries in the 
 synonyms resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5654) Create a synonym filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-04-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961171#comment-13961171
 ] 

ASF subversion and git services commented on SOLR-5654:
---

Commit 1585148 from sar...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1585148 ]

SOLR-5654: Add CHANGES.txt entry

 Create a synonym filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 --

 Key: SOLR-5654
 URL: https://issues.apache.org/jira/browse/SOLR-5654
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
 Attachments: SOLR-5654.patch, SOLR-5654.patch, SOLR-5654.patch, 
 SOLR-5654.patch, SOLR-5654.patch


 A synonym filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its synonyms resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the synonyms resource file.
 It should be possible to add/remove/modify one or more entries in the 
 synonyms resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5653) Create a RESTManager to provide REST API endpoints for reconfigurable plugins

2014-04-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5653.
--

Resolution: Fixed
  Assignee: Steve Rowe

Committed to trunk and branch_4x.

Thanks Tim!

 Create a RESTManager to provide REST API endpoints for reconfigurable plugins
 -

 Key: SOLR-5653
 URL: https://issues.apache.org/jira/browse/SOLR-5653
 Project: Solr
  Issue Type: Sub-task
Reporter: Steve Rowe
Assignee: Steve Rowe
 Fix For: 4.8, 5.0

 Attachments: SOLR-5653.patch, SOLR-5653.patch, SOLR-5653.patch, 
 SOLR-5653.patch, SOLR-5653.patch, SOLR-5653.patch, SOLR-5653.patch


 It should be possible to reconfigure Solr plugins' resources and init params 
 without directly editing the serialized schema or {{solrconfig.xml}} (see 
 Hoss's arguments about this in the context of the schema, which also apply to 
 {{solrconfig.xml}}, in the description of SOLR-4658)
 The RESTManager should allow plugins declared in either the schema or in 
 {{solrconfig.xml}} to register one or more REST endpoints, one endpoint per 
 reconfigurable resource, including init params.  To allow for multiple plugin 
 instances, registering plugins will need to provide a handle of some form to 
 distinguish the instances.
 This RESTManager should also be able to create new instances of plugins that 
 it has been configured to allow.  The RESTManager will need its own 
 serialized configuration to remember these plugin declarations.
 Example endpoints:
 * SynonymFilterFactory
 ** init params: {{/solr/collection1/config/syns/myinstance/options}}
 ** synonyms resource: 
 {{/solr/collection1/config/syns/myinstance/synonyms-list}}
 * /select request handler
 ** init params: {{/solr/collection1/config/requestHandlers/select/options}}
 We should aim for full CRUD over init params and structured resources.  The 
 plugins will bear responsibility for handling resource modification requests, 
 though we should provide utility methods to make this easy.
 However, since we won't be directly modifying the serialized schema and 
 {{solrconfig.xml}}, anything configured in those two places can't be 
 invalidated by configuration serialized elsewhere.  As a result, it won't be 
 possible to remove plugins declared in the serialized schema or 
 {{solrconfig.xml}}.  Similarly, any init params declared in either place 
 won't be modifiable.  Instead, there should be some form of init param that 
 declares that the plugin is reconfigurable, maybe using something like a 
 {{managed}} element - note that request handlers already provide a handle - the 
 request handler name - and so don't need that to be separately specified:
 {code:xml}
 <requestHandler name="/select" class="solr.SearchHandler">
   <managed/>
 </requestHandler>
 {code}
 and in the serialized schema - a handle needs to be specified here:
 {code:xml}
 <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
 ...
   <analyzer type="query">
     <tokenizer class="solr.StandardTokenizerFactory"/>
     <filter class="solr.SynonymFilterFactory" managed="english-synonyms"/>
 ...
 {code}
 All of the above examples use the existing plugin factory class names, but 
 we'll have to create new RESTManager-aware classes to handle registration 
 with RESTManager.
 Core/collection reloading should not be performed automatically when a REST 
 API call is made to one of these RESTManager-mediated REST endpoints, since 
 for batched config modifications, that could take way too long.  But maybe 
 reloading could be a query parameter to these REST API calls. 
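
A sketch of what a call against one of the endpoint shapes listed above could 
look like; the URL, JSON body and accepted verb are assumptions drawn from this 
description, not a finished API:

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical: replace one managed synonym mapping via the per-instance endpoint.
void putSynonyms() throws Exception {
  URL url = new URL("http://localhost:8983/solr/collection1/config/syns/myinstance/synonyms-list");
  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
  conn.setRequestMethod("PUT");
  conn.setDoOutput(true);
  conn.setRequestProperty("Content-Type", "application/json");
  OutputStream out = conn.getOutputStream();
  out.write("{\"mad\":[\"angry\",\"upset\"]}".getBytes("UTF-8"));
  out.close();
  System.out.println("HTTP " + conn.getResponseCode());
}
{code}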



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5655) Create a stopword filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-04-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5655.
--

Resolution: Fixed
  Assignee: Steve Rowe

Committed to trunk and branch_4x.

Thanks Tim!

 Create a stopword filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 ---

 Key: SOLR-5655
 URL: https://issues.apache.org/jira/browse/SOLR-5655
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: SOLR-5655.patch, SOLR-5655.patch, SOLR-5655.patch, 
 SOLR-5655.patch, SOLR-5655.patch


 A stopword filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its stopwords resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the stopwords resource file.
 It should be possible to add/remove one or more entries in the stopwords 
 resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5654) Create a synonym filter factory that is (re)configurable, and capable of reporting its configuration, via REST API

2014-04-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5654.
--

Resolution: Fixed
  Assignee: Steve Rowe

Committed to trunk and branch_4x.

Thanks Tim!

 Create a synonym filter factory that is (re)configurable, and capable of 
 reporting its configuration, via REST API
 --

 Key: SOLR-5654
 URL: https://issues.apache.org/jira/browse/SOLR-5654
 Project: Solr
  Issue Type: Sub-task
  Components: Schema and Analysis
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: SOLR-5654.patch, SOLR-5654.patch, SOLR-5654.patch, 
 SOLR-5654.patch, SOLR-5654.patch


 A synonym filter factory could be (re)configurable via REST API by 
 registering with the RESTManager described in SOLR-5653, and then responding 
 to REST API calls to modify its init params and its synonyms resource file.
 Read-only (GET) REST API calls should also be provided, both for init params 
 and the synonyms resource file.
 It should be possible to add/remove/modify one or more entries in the 
 synonyms resource file.
 We should probably use JSON for the REST request body, as is done in the 
 Schema REST API methods.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5965) Solr Reference Guide: Investigate shortcut links to automatically version javadoc links

2014-04-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961195#comment-13961195
 ] 

Steve Rowe commented on SOLR-5965:
--

Actually, this applies to all versioned documentation, including Changes.html 
and the Solr tutorial, not just the javadocs.

 Solr Reference Guide: Investigate shortcut links to automatically version 
 javadoc links
 ---

 Key: SOLR-5965
 URL: https://issues.apache.org/jira/browse/SOLR-5965
 Project: Solr
  Issue Type: Improvement
  Components: documentation
Reporter: Steve Rowe
Priority: Minor

 Confluence has a feature called "Shortcut Links" that allows link prefixes 
 (or infixes or suffixes) to be specified in a short form.
 I think this feature could be used to version javadoc links so that we have 
 two shortcuts, one for the latest Solr javadocs and another for the latest 
 Lucene javadocs, and then all javadoc links are specified relative to these 
 shortcuts.
 See 
 [https://confluence.atlassian.com/display/CONF50/Configuring+Shortcut+Links].
 Two problems, both minor I think:
 # Creating and maintaining shortcut links requires Confluence Admin 
 privileges, but Apache Infrastructure has taken these away from just about 
 everybody.  When I last chatted on #asfinfra with gmcdonald (who took away 
 the privileges), though, he was okay with giving back Confluence Admin 
 privileges if we found that it was necessary.
 # There is an open bug with editing shortcut-form links - it doesn't work - 
 you have to remove them and then re-add them: 
 [https://jira.atlassian.com/browse/CONF-24812]



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5573) Deadlock during class loading/ initialization

2014-04-05 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5573:


Attachment: LUCENE-5573.patch

Here's my solution: I moved all this helper stuff to a separate class. So the 
patch adds one public class but removes 3 others, so overall I think it's a win 
and reduces the API...

It would be nice to figure out some kind of analysis we could do to detect these 
things (versus having tests hang if we get lucky).

Thank you Dawid for digging in...

 Deadlock during class loading/ initialization
 -

 Key: LUCENE-5573
 URL: https://issues.apache.org/jira/browse/LUCENE-5573
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: A.java, C.java, LUCENE-5573.patch, Main.java, X.java


 It's always worth looking into those randomized failures. 
 http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81259/console
 Log quote:
 {code}
 [junit4]   2  jstack at approximately timeout time 
[junit4]   2 Lucene Merge Thread #0 ID=25 RUNNABLE
[junit4]   2  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesProducer.getSortedSet(Lucene45DocValuesProducer.java:541)
[junit4]   2  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getSortedSet(PerFieldDocValuesFormat.java:285)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.getSortedSetDocValues(SegmentReader.java:500)
[junit4]   2  at 
 org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:204)
[junit4]   2  at 
 org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:119)
[junit4]   2  at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4068)
[junit4]   2  at 
 org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
[junit4]   2  at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
[junit4]   2  at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[junit4]   2 
[junit4]   2 
 TEST-TestLucene45DocValuesFormat.testSortedSetVariableLengthVsUninvertedField-seed#[7362FE36DE729D42]
  ID=23 RUNNABLE
[junit4]   2  at 
 org.apache.lucene.index.SortedSetDocValues.clinit(SortedSetDocValues.java:72)
[junit4]   2  at 
 org.apache.lucene.index.DocTermOrds.iterator(DocTermOrds.java:767)
[junit4]   2  at 
 org.apache.lucene.search.FieldCacheImpl.getDocTermOrds(FieldCacheImpl.java:1214)
[junit4]   2  at 
 org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsUninvertedField(BaseDocValuesFormatTestCase.java:2342)
[junit4]   2  at 
 org.apache.lucene.index.BaseDocValuesFormatTestCase.testSortedSetVariableLengthVsUninvertedField(BaseDocValuesFormatTestCase.java:2375)
[junit4]   2  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
 Method)
[junit4]   2  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit4]   2  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2  at java.lang.reflect.Method.invoke(Method.java:606)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
[junit4]   2  at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[junit4]   2  at 
 

[jira] [Commented] (LUCENE-5573) Deadlock during class loading/ initialization

2014-04-05 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13961225#comment-13961225
 ] 

Dawid Weiss commented on LUCENE-5573:
-

 Would be nice to figure out some kind of analysis we could do to detect these 
 things (versus having tests hang if we get lucky).

I think it should be doable with static bytecode analysis -- you'd need to 
parse {{clinit}} methods and extract a DAG of class-loading dependencies. If 
there exists any pair with reverse-order initialization then you have a 
potential deadlock. Again - this shouldn't be a frequent scenario... but who 
knows :)
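
A rough sketch of the extraction step, assuming ASM as the bytecode walker (any 
equivalent tool would do); cycle detection over the per-class results is then a 
standard DAG check:

{code}
import java.util.HashSet;
import java.util.Set;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

// Collects the classes referenced from a class's static initializer; if A's
// <clinit> references B and B's <clinit> references A, flag a potential deadlock.
Set<String> clinitDependencies(byte[] classBytes) {
  final Set<String> deps = new HashSet<String>();
  new ClassReader(classBytes).accept(new ClassVisitor(Opcodes.ASM4) {
    @Override
    public MethodVisitor visitMethod(int access, String name, String desc,
                                     String signature, String[] exceptions) {
      if (!"<clinit>".equals(name)) return null;    // only the static initializer
      return new MethodVisitor(Opcodes.ASM4) {
        @Override
        public void visitMethodInsn(int opcode, String owner, String name, String desc) {
          deps.add(owner);  // a static call can trigger the owner's initialization
        }
        @Override
        public void visitFieldInsn(int opcode, String owner, String name, String desc) {
          deps.add(owner);  // GETSTATIC/PUTSTATIC does too
        }
      };
    }
  }, ClassReader.SKIP_FRAMES);
  return deps;
}
{code}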

 Deadlock during class loading/ initialization
 -

 Key: LUCENE-5573
 URL: https://issues.apache.org/jira/browse/LUCENE-5573
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Robert Muir
 Fix For: 4.8, 5.0

 Attachments: A.java, C.java, LUCENE-5573.patch, Main.java, X.java


 It's always worth looking into those randomized failures. 
 http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/81259/console
 Log quote:
 {code}
 [junit4]   2  jstack at approximately timeout time 
[junit4]   2 Lucene Merge Thread #0 ID=25 RUNNABLE
[junit4]   2  at 
 org.apache.lucene.codecs.lucene45.Lucene45DocValuesProducer.getSortedSet(Lucene45DocValuesProducer.java:541)
[junit4]   2  at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getSortedSet(PerFieldDocValuesFormat.java:285)
[junit4]   2  at 
 org.apache.lucene.index.SegmentReader.getSortedSetDocValues(SegmentReader.java:500)
[junit4]   2  at 
 org.apache.lucene.index.SegmentMerger.mergeDocValues(SegmentMerger.java:204)
[junit4]   2  at 
 org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:119)
[junit4]   2  at 
 org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4068)
[junit4]   2  at 
 org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3664)
[junit4]   2  at 
 org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:405)
[junit4]   2  at 
 org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:482)
[junit4]   2 
[junit4]   2 
 TEST-TestLucene45DocValuesFormat.testSortedSetVariableLengthVsUninvertedField-seed#[7362FE36DE729D42]
  ID=23 RUNNABLE
[junit4]   2  at 
 org.apache.lucene.index.SortedSetDocValues.<clinit>(SortedSetDocValues.java:72)
[junit4]   2  at 
 org.apache.lucene.index.DocTermOrds.iterator(DocTermOrds.java:767)
[junit4]   2  at 
 org.apache.lucene.search.FieldCacheImpl.getDocTermOrds(FieldCacheImpl.java:1214)
[junit4]   2  at 
 org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsUninvertedField(BaseDocValuesFormatTestCase.java:2342)
[junit4]   2  at 
 org.apache.lucene.index.BaseDocValuesFormatTestCase.testSortedSetVariableLengthVsUninvertedField(BaseDocValuesFormatTestCase.java:2375)
[junit4]   2  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
 Method)
[junit4]   2  at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
[junit4]   2  at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[junit4]   2  at java.lang.reflect.Method.invoke(Method.java:606)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1617)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:826)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:862)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:876)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
[junit4]   2  at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
[junit4]   2  at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
[junit4]   2  at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
[junit4]   2  at 
 

[jira] [Commented] (LUCENE-5573) Deadlock during class loading/ initialization

2014-04-05 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13961226#comment-13961226
 ] 

Robert Muir commented on LUCENE-5573:
-

Classycle can list the possibilities, but it just lists cycles in general, not 
ones involving static initialization... or I don't know how to coerce it into 
doing that.

{noformat}
Cycle: org.apache.lucene.index.SortedSetDocValues et al. with 3 vertices. 
Layer: 8
abstract class org.apache.lucene.index.SortedSetDocValues (2621 bytes) 
sources: java: Used by 23 classes. Uses 4/2 internal/external classes
class org.apache.lucene.index.SortedSetDocValuesTermsEnum (3896 bytes) 
sources: java: Used by 1 classes. Uses 8/4 internal/external classes
abstract class org.apache.lucene.index.RandomAccessOrds (398 bytes) 
sources: java: Used by 3 classes. Uses 1/0 internal/external classes
{noformat}
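
For illustration, here is a minimal, self-contained sketch (hypothetical 
Alpha/Beta classes, not Lucene code) of the init-order deadlock such a cycle can 
produce when two threads trigger the two <clinit>s in opposite order:

{code}
// Each class's static initializer forces the other class to initialize.
class Alpha {
  static final Beta PEER = new Beta();   // Alpha's <clinit> needs Beta initialized
}

class Beta {
  static final Alpha PEER = new Alpha(); // Beta's <clinit> needs Alpha initialized
}

public class InitDeadlockDemo {
  public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread(() -> { new Alpha(); }); // takes Alpha's init lock first
    Thread t2 = new Thread(() -> { new Beta(); });  // takes Beta's init lock first
    t1.setDaemon(true);
    t2.setDaemon(true);
    t1.start();
    t2.start();
    t1.join(5000);
    t2.join(5000);
    // With unlucky timing each thread waits for the other's class to finish
    // initializing, and neither ever does.
    System.out.println("t1 alive=" + t1.isAlive() + ", t2 alive=" + t2.isAlive());
  }
}
{code}

Note that the deadlocked threads can still show up as RUNNABLE in jstack, as in 
the log quoted in the issue description above.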



Re: Solr Ref Guide vs. Wiki

2014-04-05 Thread Furkan KAMACI
| I think an interesting side-effect issue here is user perception. I
| feel that ElasticSearch (yes, them) get a lot of points not only
| because of features (not discussing that here) but because they
| actually have taken time to put a polish on the SEO, onboarding and
| other human perception aspects.

That is exactly true. Search Google for: solr search parameters and have a
look at the results. Then search for: elasticsearch search parameters. You
can tell that real SEO work has been done on the Elasticsearch side.

When I started to learn Solr I read everything, everything I could find:
the Solr wiki, books, websites about Solr, and every e-mail on the mailing
list. Even so, there were still pages on the wiki that I had never seen. I
clicked every link on the wiki, but there were some hidden(!) pages that I
never managed to reach. For example, everybody knows that Shawn has a page
about GC tuning for Solr. Do you know how to reach that page when you
start reading the wiki?

The Reference Guide is pretty good. It is like a book for Solr: you start
reading, and when you finish it you have a good knowledge of Solr. My
suggestion is this: we can enrich the Solr guide with links to external
pages where necessary. On top of that, we can design a nice web site that
explains Solr features, much like Elasticsearch does.

I personally separate the people who work with Solr into four main
categories:

First category: uses Solr and wants to improve search performance, tuning,
etc., but does not know the implementation details very well.
Second category: interested in the scalability, tuning, etc. of Solr.
Third category: interested in the linguistic/search part of Solr (Lucene).
Fourth category: interested in developing Solr.

So, a wiki should speak to the people who use it, who want to operate it,
who want to improve search quality, and who want to develop it. My personal
view is that the first category is very important too. The Elasticsearch
guide is simple and explains the main things (e.g. compare the analyzer
pages of Elasticsearch and Solr). People want to stand up a system and do
not want to do much more than that (I know it is impossible). We can help
address that kind of audience too (I know the Solr and Elasticsearch
audiences are not the same). In short, a web page that explains Solr much
like Elasticsearch's, plus a guide (with links to other resources) that
addresses all four categories, would be nice.

All in all, I would like to help Solr with this kind of documentation (I
can work collaboratively with Alexandre). It would be nice if we had
something like that.

Thanks;
Furkan KAMACI



2014-04-05 6:21 GMT+03:00 Alexandre Rafalovitch arafa...@gmail.com:

 +1 on consolidating to the Reference Guide and figuring out a way to
 make the wiki a lot less visible. But for a completely different set of
 reasons than those discussed already.

 [[rant-start]]

 I think an interesting side-effect issue here is user perception. I
 feel that ElasticSearch (yes, them) get a lot of points not only
 because of features (not discussing that here) but because they
 actually have taken time to put a polish on the SEO, onboarding and
 other human perception aspects.

 Solr's messaging is - like that of many Apache projects - deeply
 technical, self-referential, and on the main path puts Development before
 Use (literally, by the order of the wiki sections). Which is _no longer_
 representative of the users' needs.

 The Reference Guide is a large step in the right direction. Commercial
 distributions also do their best to get the messaging right, even if often
 at the expense of reducing Solr to an implementation detail (Cloudera!).

 But I think this is a case of a rising tide lifting all (Solr-based)
 boats. Somebody with UX skills could probably deconstruct and reconstruct
 the user experience, and the same information would have a lot more impact.

 This applies to technical issues as well. Elasticsearch has great success
 talking about schema-less design, while Solr relegates its equivalent to a
 small section deep in the Wiki/Guide. Same with real-time updates. That's
 because the site/documentation is organized from the implementation point
 of view rather than from the impact point of view.

 If somebody has resources to throw at this, I would start from the UX
 and user-onboarding part. Maybe even do that for both Lucene and Solr
 to emphasize common links. And I would be happy to work with someone
 on that too. Maybe, there is even a need for a separate
 super-duper-happy-solr-path mailing group to specialize on that.
 Something that commercial companies can temporarily throw other
 non-dev resources at, when required.

 [[rant-end]]

 Regards,
Alex.
 P.S. There is a LOT more to the rant, with specific suggestions. And I
 am walking my talk too (book, solr-start, my nascent mailing list, and a
 ToDo list to last me the next several years of fun projects).

 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_60-ea-b10) - Build # 9897 - Failure!

2014-04-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9897/
Java: 32bit/jdk1.7.0_60-ea-b10 -server -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
startOffset 107 expected:<587> but was:<588>

Stack Trace:
java.lang.AssertionError: startOffset 107 expected:<587> but was:<588>
at 
__randomizedtesting.SeedInfo.seed([19423CE8988D3E11:91CB3C563B896924]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:181)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
at 
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0) - Build # 9899 - Failure!

2014-04-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/9899/
Java: 32bit/jdk1.8.0 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
term 450 expected:?[?਍] but was:?[?਍ਹ]

Stack Trace:
org.junit.ComparisonFailure: term 450 expected:?[?਍] but was:?[?ਹ]
at 
__randomizedtesting.SeedInfo.seed([B4100D1094B8E1C3:3C990DAE37BCB6F6]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:179)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:294)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:298)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:857)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:612)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:511)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:435)
at 
org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:202)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:360)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:793)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:453)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at