Re: junit - previous test run data not cleared

2013-05-20 Thread Dawid Weiss
This is not a bug, see:

https://issues.apache.org/jira/browse/LUCENE-4654

What you see are averages from locally collected statistics. They were
moved out of temporary folders to allow a better approximation of
build times on CI servers (where an ant clean usually takes place
between builds).

If you want to remove previous stats (or make them volatile so that they
are removed after an ant clean), provide a user default for either of the
following properties:

  <property name="local.caches" location="${common.dir}/../.caches" />
  <property name="tests.cachedir" location="${local.caches}/test-stats" />

If you place that folder under a build directory, it will be removed by a
subsequent ant clean.
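
For example, a minimal local override (a sketch only -- which user properties
file your checkout actually reads is an assumption here, not something this
thread specifies):

  # hypothetical override: keep the test stats under the build output
  # so that "ant clean" wipes them
  tests.cachedir=${common.dir}/build/test-stats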

Dawid

On Mon, May 20, 2013 at 7:52 AM, Shawn Heisey s...@elyograg.org wrote:
 I noticed something a little odd -- certain test data is not cleared up
 by the clean target.  At some point in the past, I ran all the tests.
 Today, while running an individual test case, I see tophints data from
 other tests.  I was in solr/solrj and ran -Dtestcase=NamedListTest.
 This is the last part of the output:

 [junit4:junit4] Executing 1 suite with 1 JVM.
 [junit4:junit4]
 [junit4:junit4] Started J0 PID(6384@freya).
 [junit4:junit4] Suite: org.apache.solr.common.util.NamedListTest
 [junit4:junit4] OK  0.13s | NamedListTest.testRecursive
 [junit4:junit4] OK  0.01s | NamedListTest.testRemove
 [junit4:junit4] Completed in 0.79s, 2 tests
 [junit4:junit4]
 [junit4:junit4] JVM J0: 0.87 .. 2.25 = 1.38s
 [junit4:junit4] Execution time total: 2.26 sec.
 [junit4:junit4] Tests summary: 1 suite, 2 tests
  [echo] 5 slowest tests:
 [junit4:tophints]  40.21s |
 org.apache.solr.client.solrj.impl.CloudSolrServerTest
 [junit4:tophints]  38.67s |
 org.apache.solr.client.solrj.TestLBHttpSolrServer
 [junit4:tophints]  23.83s |
 org.apache.solr.client.solrj.embedded.SolrExampleStreamingTest
 [junit4:tophints]  15.20s |
 org.apache.solr.client.solrj.embedded.SolrExampleStreamingBinaryTest
 [junit4:tophints]   7.96s | org.apache.solr.client.solrj.TestBatchUpdate

 BUILD SUCCESSFUL
 Total time: 11 seconds

 As you can see, the tophints data was all from tests that I didn't run
 this time.  It happens on both Linux and Windows, and for tests under
 lucene as well as solr.

 Is this a known problem with no simple solution, or should I file an issue?

 Thanks,
 Shawn

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: junit - previous test run data not cleared

2013-05-20 Thread Shawn Heisey
On 5/19/2013 11:52 PM, Shawn Heisey wrote:
 As you can see, the tophints data was all from tests that I didn't run
 this time.  It happens on both Linux and Windows, and for tests under
 lucene as well as solr.

After I sent this, I noted the existence of a .caches directory with many
timehints.txt files.  This tickled a memory about test timing statistics
that survive multiple test runs on Jenkins.

I can think of two ways to approach fixing this cosmetic problem that
don't involve losing the statistics.  We could implement one or both of
them:

The first idea is to have the display of tophints data reflect only the
tests that were actually run, not the cached data about every test in
the group.

The second idea touches on another topic that interests me -- having a
build target that will return the tree to a completely pristine state,
just like when it is first checked out from svn.
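
A very rough sketch of the second idea (the target name is hypothetical, and
it only covers the stats cache discussed in this thread rather than every
unversioned file):

  <target name="clean-pristine" depends="clean"
          description="clean, plus remove the persistent test-stats cache">
    <delete dir="${common.dir}/../.caches"/>
  </target>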

I just saw Dawid's reply.  It confirmed my vague memory.  Thanks for the
info about tweaking my local install; I will look into that.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure

2013-05-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.3/4/

No tests ran.

Build Log:
[...truncated 32711 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease
 [copy] Copying 401 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease/lucene
 [copy] Copying 194 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease/solr
 [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lTraceback
 (most recent call last):
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scriptucene/build/fakeRelease/...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB
 [exec] 
 [exec] command gpg --homedir 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.gpg
 --import /usr/home/hudson/hudson-slave/ws/smokeTestRelease.py, line 1385, in 
module
 [exec] main()
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 1331, in main
 [exec] smokeTest(baseURL, version, tmpDir, isSigned)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 1364, in 
smokeTestorkspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.KEYS
 failed:
 [exec] 
 [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 365, in checkSigs
 [exec] '%s/%s.gpg.import.log 2>&1' % (tmpDir, project))
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 513, in run
 [exec] printFileContents(logFile)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 497, in printFileContents
 [exec] txt = codecs.open(fileName, 'r', 
encoding=sys.getdefaultencoding(), errors='replace').read()
 [exec]   File /usr/local/lib/python3.2/codecs.py, line 884, in open
 [exec] file = builtins.open(filename, mode, buffering)
 [exec] IOError: [Errno 2] No such file or directory: 
'/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.gpg.import.log
 2>&1'

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/build.xml:303:
 exec returned: 1

Total time: 17 minutes 27 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-4048) Add a findRecursive method to NamedList

2013-05-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661832#comment-13661832
 ] 

Shawn Heisey commented on SOLR-4048:


Trunk commit r1484386 to improve and clean up the new method.  All tests and 
precommit passed.  Will run tests and precommit on 4x before backporting 
tomorrow.


 Add a findRecursive method to NamedList
 -

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4048-cleanup.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
   /**
    * Recursively parse a NamedList and return the value at the last level,
    * assuming that the object found at each level is also a NamedList. For
    * example, if response is the NamedList response from the Solr4 mbean
    * handler, the following code makes sense:
    *
    * String coreName = (String) getRecursiveFromResponse(response, new
    * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
    *
    * @param namedList the NamedList to parse
    * @param args A list of values to recursively request
    * @return the object at the last level.
    * @throws SolrServerException
    */
   @SuppressWarnings("unchecked")
   private final Object getRecursiveFromResponse(
       NamedList<Object> namedList, String[] args)
       throws SolrServerException
   {
     NamedList<Object> list = null;
     Object value = null;
     try
     {
       for (String key : args)
       {
         if (list == null)
         {
           list = namedList;
         }
         else
         {
           list = (NamedList<Object>) value;
         }
         value = list.get(key);
       }
       return value;
     }
     catch (Exception e)
     {
       throw new SolrServerException(
           "Failed to recursively parse NamedList", e);
     }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 266 - Still Failing

2013-05-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/266/

4 tests failed.
REGRESSION:  org.apache.lucene.spatial.DistanceStrategyTest.testDistanceOrder 
{strategy=recursive_quad}

Error Message:
Not equal for doc 0 expected:<2.827493667602539> but was:<180.0>

Stack Trace:
java.lang.AssertionError: Not equal for doc 0 expected:<2.827493667602539> but 
was:<180.0>
at 
__randomizedtesting.SeedInfo.seed([F4A4BB00B127FF78:76CC5EC69BC981D0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at 
org.apache.lucene.spatial.StrategyTestCase.checkValueSource(StrategyTestCase.java:227)
at 
org.apache.lucene.spatial.DistanceStrategyTest.checkDistValueSource(DistanceStrategyTest.java:124)
at 
org.apache.lucene.spatial.DistanceStrategyTest.testDistanceOrder(DistanceStrategyTest.java:95)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)


REGRESSION:  org.apache.lucene.spatial.DistanceStrategyTest.testDistanceOrder 
{strategy=termquery_geohash}

Error Message:
Not equal for doc 0 expected:<2.827493667602539> but was:<180.0>


[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2013-05-20 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661866#comment-13661866
 ] 

Elran Dvir commented on SOLR-2894:
--

Hi,

I want to report a problem that we found in the patch of March 21st.
It seems that the problem Shahar reported is now solved, but there is another 
similar problem.
In short, the problem seems to be related to the per-field facet.limit 
definition, and the symptom is that a distributed pivot returns fewer terms 
than expected.
Here's a simple scenario:

If I run a distributed pivot such as:
http://myHost:8999/solr/core-B/select?shards=myHost:8999/solr/core-A&q=*:*&wt=xml&facet=true&facet.pivot=field_A&rows=0&facet.limit=-1&facet.sort=index

it returns exactly the number of terms for field_A that I expect.

On the other hand, if I use f.field_name.facet.limit=-1:
http://myHost:8999/solr/core-B/select?shards=myHost:8999/solr/core-A&q=*:*&wt=xml&facet=true&facet.pivot=field_A&rows=0&f.field_A.facet.limit=-1&facet.sort=index

then it returns at most 100 terms for field_A.

I'd appreciate your help with this.

Thanks.

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
 Fix For: 4.4

 Attachments: SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894-reworked.patch


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4048) Add a findRecursive method to NamedList

2013-05-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661883#comment-13661883
 ] 

Uwe Schindler commented on SOLR-4048:
-

All fine now!

 Add a findRecursive method to NamedList
 -

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4048-cleanup.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
   /**
    * Recursively parse a NamedList and return the value at the last level,
    * assuming that the object found at each level is also a NamedList. For
    * example, if response is the NamedList response from the Solr4 mbean
    * handler, the following code makes sense:
    *
    * String coreName = (String) getRecursiveFromResponse(response, new
    * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
    *
    * @param namedList the NamedList to parse
    * @param args A list of values to recursively request
    * @return the object at the last level.
    * @throws SolrServerException
    */
   @SuppressWarnings("unchecked")
   private final Object getRecursiveFromResponse(
       NamedList<Object> namedList, String[] args)
       throws SolrServerException
   {
     NamedList<Object> list = null;
     Object value = null;
     try
     {
       for (String key : args)
       {
         if (list == null)
         {
           list = namedList;
         }
         else
         {
           list = (NamedList<Object>) value;
         }
         value = list.get(key);
       }
       return value;
     }
     catch (Exception e)
     {
       throw new SolrServerException(
           "Failed to recursively parse NamedList", e);
     }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4841) DetectedLanguage constructor should be public

2013-05-20 Thread Maciej Lizewski (JIRA)
Maciej Lizewski created SOLR-4841:
-

 Summary: DetectedLanguage constructor should be public
 Key: SOLR-4841
 URL: https://issues.apache.org/jira/browse/SOLR-4841
 Project: Solr
  Issue Type: Bug
Reporter: Maciej Lizewski


The org.apache.solr.update.processor.DetectedLanguage constructor should be public. 
Without that it is impossible to create your own class extending 
LanguageIdentifierUpdateProcessor.

The LanguageIdentifierUpdateProcessor base class needs its detectLanguage(String 
content) method to return a list of DetectedLanguage objects, but you cannot create 
such objects because the constructor is accessible only within the same package 
(org.apache.solr.update.processor).
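
A minimal sketch of what fails today (the class and package names below,
other than the Solr ones, are hypothetical, and the exact DetectedLanguage
constructor signature is an assumption based on the description above):

{code}
package com.example.langid;  // any package outside org.apache.solr.update.processor

import java.util.Collections;
import java.util.List;

import org.apache.solr.update.processor.DetectedLanguage;

public class MyLanguageDetector {
  List<DetectedLanguage> detect(String content) {
    // Does not compile: the DetectedLanguage constructor has default
    // (package-private) visibility, so it can only be called from
    // org.apache.solr.update.processor.
    DetectedLanguage lang = new DetectedLanguage("en", 1.0d);
    return Collections.singletonList(lang);
  }
}
{code}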

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4841) DetectedLanguage constructor should be public

2013-05-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13661991#comment-13661991
 ] 

Uwe Schindler commented on SOLR-4841:
-

protected should be enough!

 DetectedLanguage constructor should be public
 -

 Key: SOLR-4841
 URL: https://issues.apache.org/jira/browse/SOLR-4841
 Project: Solr
  Issue Type: Bug
Reporter: Maciej Lizewski

 The org.apache.solr.update.processor.DetectedLanguage constructor should be 
 public. Without that it is impossible to create your own class extending 
 LanguageIdentifierUpdateProcessor.
 The LanguageIdentifierUpdateProcessor base class needs its detectLanguage(String 
 content) method to return a list of DetectedLanguage objects, but you cannot 
 create such objects because the constructor is accessible only within the same 
 package (org.apache.solr.update.processor).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4841) DetectedLanguage constructor should be public

2013-05-20 Thread Maciej Lizewski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662013#comment-13662013
 ] 

Maciej Lizewski commented on SOLR-4841:
---

I am OK with that. My point was that it is impossible to extend 
LanguageIdentifierUpdateProcessor with your own custom implementation, because 
the constructor does not have any visibility modifier, so it is accessible only 
from the same package.

 DetectedLanguage constructor should be public
 -

 Key: SOLR-4841
 URL: https://issues.apache.org/jira/browse/SOLR-4841
 Project: Solr
  Issue Type: Bug
Reporter: Maciej Lizewski

 The org.apache.solr.update.processor.DetectedLanguage constructor should be 
 public. Without that it is impossible to create your own class extending 
 LanguageIdentifierUpdateProcessor.
 The LanguageIdentifierUpdateProcessor base class needs its detectLanguage(String 
 content) method to return a list of DetectedLanguage objects, but you cannot 
 create such objects because the constructor is accessible only within the same 
 package (org.apache.solr.update.processor).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5010) Getting a PorterStemmer error in Solr

2013-05-20 Thread Mark Streitman (JIRA)
Mark Streitman created LUCENE-5010:
--

 Summary: Getting a PorterStemmer error in Solr
 Key: LUCENE-5010
 URL: https://issues.apache.org/jira/browse/LUCENE-5010
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis, modules/spellchecker
Affects Versions: 3.6.1
 Environment: Windows 7 64bit
Reporter: Mark Streitman


Java version 1.7.0
Java(TM) SE Runtime Environment (build 1.7.0-b147)
Java HotSpot(TM) 64-Bit Server VM (build 21.0-b17, mixed mode)
is dying from an error in PorterStemmer.


This is just like the error listed from 2011
https://issues.apache.org/jira/browse/LUCENE-3335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13070153#comment-13070153

Below is the log file that is generated


#
# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc005) at pc=0x02b08ce1, pid=3208, 
tid=4688
#
# JRE version: 7.0-b147
# Java VM: Java HotSpot(TM) 64-Bit Server VM (21.0-b17 mixed mode windows-amd64 
compressed oops)
# Problematic frame:
# J  org.apache.lucene.analysis.PorterStemmer.stem(I)Z
#
# Failed to write core dump. Minidumps are not enabled by default on client 
versions of Windows
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
#

---  T H R E A D  ---

Current thread (0x0f57f000):  JavaThread 
http-localhost-127.0.0.1-8080-5 daemon [_thread_in_Java, id=4688, 
stack(0x0b25,0x0b35)]

siginfo: ExceptionCode=0xc005, reading address 0x0002fd8be046

Registers:
RAX=0x0001, RBX=0x, RCX=0xfd8be038, 
RDX=0x0065
RSP=0x0b34e950, RBP=0x, RSI=0x, 
RDI=0x0003
R8 =0xfd8be010, R9 =0xfffe, R10=0x0065, 
R11=0x0032
R12=0x, R13=0x02b08c88, R14=0x0003, 
R15=0x0f57f000
RIP=0x02b08ce1, EFLAGS=0x00010286

Top of Stack: (sp=0x0b34e950)
0x0b34e950:   00650072 
0x0b34e960:   fd8be010 0001e053f560
0x0b34e970:    fd8bdbb0
0x0b34e980:   0b34ea08 026660d8
0x0b34e990:   fd8bdfe0 026660d8
0x0b34e9a0:   0b34ea08 026663d0
0x0b34e9b0:   026663d0 
0x0b34e9c0:   fd8be010 0b34e9c8
0x0b34e9d0:   d2e598d2 0b34ea30
0x0b34e9e0:   d2e5a290 d3fe7bc8
0x0b34e9f0:   d2e59918 0b34e9b8
0x0b34ea00:   0b34ea40 fd8be010
0x0b34ea10:   02a502c4 0004
0x0b34ea20:    fd8bdbb0
0x0b34ea30:   fd8be010 02a502c4
0x0b34ea40:   f5d70090 fd8bdf30 

Instructions: (pc=0x02b08ce1)
0x02b08cc1:   41 83 c1 fb 4c 63 f7 42 0f b7 5c 71 10 89 5c 24
0x02b08cd1:   04 8b c7 83 c0 fe 0f b7 54 41 10 8b df 83 c3 fc
0x02b08ce1:   40 0f b7 6c 59 10 83 c7 fd 44 0f b7 6c 79 10 41
0x02b08cf1:   83 fa 69 0f 84 07 03 00 00 49 b8 d8 31 4e e0 00 


Register to memory mapping:

RAX=0x0001 is an unknown value
RBX=0x is an unallocated location in the heap
RCX=0xfd8be038 is an oop
[C 
 - klass: {type array char}
 - length: 50
RDX=0x0065 is an unknown value
RSP=0x0b34e950 is pointing into the stack for thread: 0x0f57f000
RBP=0x is an unknown value
RSI=0x is an unknown value
RDI=0x0003 is an unknown value
R8 =0xfd8be010 is an oop
org.apache.lucene.analysis.PorterStemmer 
 - klass: 'org/apache/lucene/analysis/PorterStemmer'
R9 =0xfffe is an unallocated location in the heap
R10=0x0065 is an unknown value
R11=0x0032 is an unknown value
R12=0x is an unknown value
R13=0x02b07c10 [CodeBlob (0x02b07c10)]
Framesize: 12
R14=0x0003 is an unknown value
R15=0x0f57f000 is a thread


Stack: [0x0b25,0x0b35],  sp=0x0b34e950,  free 
space=1018k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
J  org.apache.lucene.analysis.PorterStemmer.stem(I)Z

[error occurred during error reporting (printing native stack), id 0xc005]


---  P R O C E S S  ---

Java Threads: ( = current thread )
  0x08570800 JavaThread http-localhost-127.0.0.1-8080-21 daemon 
[_thread_blocked, id=5652, stack(0x0c81,0x0c91)]
  0x05eb5800 JavaThread http-localhost-127.0.0.1-8080-20 daemon 

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.6.0) - Build # 472 - Failure!

2013-05-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/472/
Java: 64bit/jdk1.6.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection to http://localhost:51800 refused

Stack Trace:
org.apache.http.conn.HttpHostConnectException: Connection to 
http://localhost:51800 refused
at 
__randomizedtesting.SeedInfo.seed([57D348FF93F4CB24:FC2955EA4C284D0A]:0)
at 
org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:190)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:645)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at 
org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:178)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:51)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:196)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:402)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
  

ExternalDocValuesFilterReader progress?

2013-05-20 Thread Ryan McKinley
In March, there was some effort looking at cleaner ways to integrate
external data:

http://mail-archives.apache.org/mod_mbox/lucene-dev/201303.mbox/%3cc2e7cc37-52f2-4527-a919-a071d26f9...@flax.co.uk%3E

Any updates on this?

Thanks
Ryan


RE: have developer question about ClobTransformer and DIH

2013-05-20 Thread Dyer, James
I think you're confusing the hierarchy of your database's types with the 
hierarchy in Java.  In Java, a java.sql.Blob and a java.sql.Clob are 2 
different things.  They do not extend a common ancestor (except 
java.lang.Object).  To write code that deals with both means you need to have 
separate paths for each object type.  There is no way around this.  (Compare 
the situation with Integer, Float, BigDecimal, etc., which all extend 
Number...In that case, your JDBC code can just expect a Number back from the 
database regardless of what object a particular JDBC driver decided to return 
to you.)
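
A minimal sketch of the point above in plain JDBC (the column handling and
charset choice are placeholders, not DIH code):

  import java.sql.Blob;
  import java.sql.Clob;
  import java.sql.ResultSet;
  import java.sql.SQLException;

  public class LobReader {
    static String readLob(ResultSet rs, String column) throws SQLException {
      Object o = rs.getObject(column);
      if (o instanceof Clob) {
        // character data: can be read directly as a String
        Clob clob = (Clob) o;
        return clob.getSubString(1, (int) clob.length());
      } else if (o instanceof Blob) {
        // binary data: you must decide how (or whether) to decode it
        Blob blob = (Blob) o;
        return new String(blob.getBytes(1, (int) blob.length()));
      }
      // Numeric columns are easier: Integer, Float, BigDecimal all extend
      // java.lang.Number, so a single (Number) cast covers them.
      return o == null ? null : o.toString();
    }
  }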

James Dyer
Ingram Content Group
(615) 213-4311


-Original Message-
From: geeky2 [mailto:gee...@hotmail.com] 
Sent: Friday, May 17, 2013 9:01 PM
To: dev@lucene.apache.org
Subject: RE: have developer question about ClobTransformer and DIH


I still have a disconnect on this (see below).

I have been reading on the Informix site about BLOB, CLOB and TEXT types.

*I misstated earlier that a TEXT type is another type of Informix blob -
after reading the docs, this is not true.*


I think what it comes down to is that a Clob is-not-a Blob.  


The Informix docs indicate the opposite: CLOB and BLOB are sub-classes of
smart object types.

what is a smart object type (the super class for BLOB and CLOB):

  
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.sqlr.doc/sqlrmst136.htm

what is a BLOB type:

  
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.sqlr.doc/sqlrmst136.htm


what is a CLOB type:

  
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.sqlr.doc/sqlrmst136.htm

what is a TEXT type:

  
http://publib.boulder.ibm.com/infocenter/idshelp/v10/index.jsp?topic=/com.ibm.sqlr.doc/sqlrmst136.htm



After reading the above, my disconnect lies with the following:

If an Informix TEXT type is basically text, then why did Solr return the
two TEXT fields as binary addresses when I removed all references to
ClobTransformer and the clob=true switches from the fields in the
db-config.xml file?

If TEXT is just text, then there should be no need to leverage
ClobTransformer and to cast TEXT type fields as CLOBs.

See my earlier post on the solr-user list for the details:

http://lucene.472066.n3.nabble.com/having-trouble-storing-large-text-blob-fields-returns-binary-address-in-search-results-td4063979.html#a4064260


mark





--
View this message in context: 
http://lucene.472066.n3.nabble.com/have-developer-question-about-ClobTransformer-and-DIH-tp4064256p4064323.html
Sent from the Lucene - Java Developer mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Created] (SOLR-4827) fuzzy search problem

2013-05-20 Thread Erick Erickson
Note, I'm rapidly catching up so I just skimmed... but is it possible
that your imports are replacing older documents with new documents
that don't happen to match? Looking at numDocs/maxDoc in the admin GUI
should tell you this; the delta between those two numbers is the
number of replaced documents.
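
If you'd rather check those two numbers from a script, a quick sketch
(assumes the stock /admin/luke handler and the default example port and core
name; adjust for your setup):

  curl "http://localhost:8983/solr/collection1/admin/luke?numTerms=0&wt=json"
  # the response includes numDocs and maxDoc; maxDoc - numDocs is the
  # number of deleted (i.e. replaced) documents still in the index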

Best
Erick

On Thu, May 16, 2013 at 3:59 AM, vishal parekh (JIRA) j...@apache.org wrote:
 vishal parekh created SOLR-4827:
 ---

  Summary: fuzzy search problem
  Key: SOLR-4827
  URL: https://issues.apache.org/jira/browse/SOLR-4827
  Project: Solr
   Issue Type: Bug
   Components: search
 Affects Versions: 4.3, 4.2
  Environment: OS - ubuntu
 Server - Jboss 7
 Reporter: vishal parekh


 I periodically import/index records into the Solr server.

 (1) So, suppose I first import 40 records and commit.
 I then do a fuzzy search on them, and it works fine.

 (2) Import another 10 records, commit. Fuzzy search works fine.

 (3) Import another 5 records, commit. Now, when I do a fuzzy search not on 
 these new records but on the older records above, it gives me fewer records 
 than before.

 Say after the 1st import a fuzzy search gives me 3000 records (from 40); now, 
 on the same data, it returns only 1000 records (from 40) for the same 
 search.

 The above steps are just an example; it's not that the issue only appears 
 after the 3rd import.

 Not sure if the size of the index causes the problem, or if it's some other 
 issue.



 --
 This message is automatically generated by JIRA.
 If you think it was sent incorrectly, please contact your JIRA administrators
 For more information on JIRA, see: http://www.atlassian.com/software/jira

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5007) smokeTestRelease.py should be able to pass cmdline test args to 'ant test', e.g. -Dtests.jettyConnector=Socket; also, ant nightly-smoke should be able to pass these

2013-05-20 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-5007.


   Resolution: Fixed
Lucene Fields: New,Patch Available  (was: New)

Committed:

- trunk: r1484524
- branch_4x: r1484525 

 smokeTestRelease.py should be able to pass cmdline test args to 'ant test', 
 e.g. -Dtests.jettyConnector=Socket; also, ant nightly-smoke should be 
 able to pass these args to smokeTestRelease.py
 

 Key: LUCENE-5007
 URL: https://issues.apache.org/jira/browse/LUCENE-5007
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/test
Reporter: Steve Rowe
Assignee: Steve Rowe
Priority: Minor
 Fix For: 4.4

 Attachments: LUCENE-5007-branch_4x.patch, 
 LUCENE-5007-branch_4x.patch, LUCENE-5007-trunk.patch


 SOLR-4189 added sensitivity to the sysprop {{tests.jettyConnector}} to allow 
 the test-mode Jetty to use the Socket connector instead of the default 
 SelectChannel connector.
 New module lucene/replicator is running into the same problem, failing 100% 
 of the time when running under 'ant nightly-smoke' on ASF Jenkins on FreeBSD.
 At present there's no way from smokeTestRelease.py, or from ant 
 nightly-smoke, to pass through this sysprop (or any other).
 [~rcmuir] wrote on dev@l.o.a about one of the replicator module's failures on 
 FreeBSD:
 {quote}
 This is a jenkins setup/test harness issue.
 there needs to be a way for the jetty connector sysprop to be passed
 all the way thru to ant test running from the smoketester.
 {quote}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_21) - Build # 2842 - Still Failing!

2013-05-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/2842/
Java: 64bit/jdk1.7.0_21 -XX:+UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServer.testGetCoreContainer

Error Message:


Stack Trace:
org.apache.solr.common.SolrException: 
at 
__randomizedtesting.SeedInfo.seed([ED20914B7DE21255:2033A504CB137AEB]:0)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:262)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:219)
at org.apache.solr.core.CoreContainer.init(CoreContainer.java:149)
at 
org.apache.solr.client.solrj.embedded.AbstractEmbeddedSolrServerTestCase.setUp(AbstractEmbeddedSolrServerTestCase.java:64)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:771)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: The filename, directory name, or volume label 
syntax is incorrect
at java.io.WinNTFileSystem.canonicalize0(Native Method)
at java.io.Win32FileSystem.canonicalize(Win32FileSystem.java:414)
at java.io.File.getCanonicalPath(File.java:589)
at 

Re: ExternalDocValuesFilterReader progress?

2013-05-20 Thread Alan Woodward
I made a start on this, but in the end the client decided to do something 
different so it never got finished.  I did get 
https://issues.apache.org/jira/browse/LUCENE-4902 out of it, which would allow 
you to plug in custom FilterReaders to Solr via the IndexReaderFactory.  I'm 
still interested in working on it, just need a project that would use it, and 
some time...

Alan Woodward
www.flax.co.uk


On 20 May 2013, at 16:29, Ryan McKinley wrote:

 In March, there was some effort looking at cleaner ways to integrate 
 external data:
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201303.mbox/%3cc2e7cc37-52f2-4527-a919-a071d26f9...@flax.co.uk%3E
 
 Any updates on this?  
 
 Thanks
 Ryan
 
 



[jira] [Commented] (SOLR-4048) Add a findRecursive method to NamedList

2013-05-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662225#comment-13662225
 ] 

Shawn Heisey commented on SOLR-4048:


On 4x, two solr-core tests failed with 500 server errors during shard 
splitting.  I've seen this over and over in Jenkins, so I'm pretty sure it's 
not my changes.  All solr-solrj tests pass fine, and precommit passes.  The 
branch_4x commit is r1484548.

 Add a findRecursive method to NamedList
 -

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 4.4

 Attachments: SOLR-4048-cleanup.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
   /**
    * Recursively parse a NamedList and return the value at the last level,
    * assuming that the object found at each level is also a NamedList. For
    * example, if response is the NamedList response from the Solr4 mbean
    * handler, the following code makes sense:
    *
    * String coreName = (String) getRecursiveFromResponse(response, new
    * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
    *
    * @param namedList the NamedList to parse
    * @param args A list of values to recursively request
    * @return the object at the last level.
    * @throws SolrServerException
    */
   @SuppressWarnings("unchecked")
   private final Object getRecursiveFromResponse(
       NamedList<Object> namedList, String[] args)
       throws SolrServerException
   {
     NamedList<Object> list = null;
     Object value = null;
     try
     {
       for (String key : args)
       {
         if (list == null)
         {
           list = namedList;
         }
         else
         {
           list = (NamedList<Object>) value;
         }
         value = list.get(key);
       }
       return value;
     }
     catch (Exception e)
     {
       throw new SolrServerException(
           "Failed to recursively parse NamedList", e);
     }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-4048) Add a findRecursive method to NamedList

2013-05-20 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey resolved SOLR-4048.


   Resolution: Fixed
Fix Version/s: 5.0

 Add a findRecursive method to NamedList
 -

 Key: SOLR-4048
 URL: https://issues.apache.org/jira/browse/SOLR-4048
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 5.0, 4.4

 Attachments: SOLR-4048-cleanup.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, SOLR-4048.patch, 
 SOLR-4048.patch, SOLR-4048.patch


 Most of the time when accessing data from a NamedList, what you'll be doing 
 is using get() to retrieve another NamedList, and doing so over and over 
 until you reach the final level, where you'll actually retrieve the value you 
 want.
 I propose adding a method to NamedList which would do all that heavy lifting 
 for you.  I created the following method for my own code.  It could be 
 adapted fairly easily for inclusion into NamedList itself.  The only reason I 
 did not include it as a patch is because I figure you'll want to ensure it 
 meets all your particular coding guidelines, and that the JavaDoc is much 
 better than I have done here:
 {code}
   /**
    * Recursively parse a NamedList and return the value at the last level,
    * assuming that the object found at each level is also a NamedList. For
    * example, if response is the NamedList response from the Solr4 mbean
    * handler, the following code makes sense:
    *
    * String coreName = (String) getRecursiveFromResponse(response, new
    * String[] { "solr-mbeans", "CORE", "core", "stats", "coreName" });
    *
    * @param namedList the NamedList to parse
    * @param args A list of values to recursively request
    * @return the object at the last level.
    * @throws SolrServerException
    */
   @SuppressWarnings("unchecked")
   private final Object getRecursiveFromResponse(
       NamedList<Object> namedList, String[] args)
       throws SolrServerException
   {
     NamedList<Object> list = null;
     Object value = null;
     try
     {
       for (String key : args)
       {
         if (list == null)
         {
           list = namedList;
         }
         else
         {
           list = (NamedList<Object>) value;
         }
         value = list.get(key);
       }
       return value;
     }
     catch (Exception e)
     {
       throw new SolrServerException(
           "Failed to recursively parse NamedList", e);
     }
   }
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4788) Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time is empty

2013-05-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662253#comment-13662253
 ] 

Shawn Heisey commented on SOLR-4788:


A review of all Solr issues that mention last_index_time turns up SOLR-4051 
(via SOLR-1970) as a possible candidate for the commit that broke this 
functionality.  This assumes of course that it worked after the feature was 
added by SOLR-783, which is probably a safe assumption.

SOLR-4051 says that it patches functionality that was introduced in 3.6.  I 
think that was added by SOLR-2382, so it might have been SOLR-2382 that broke 
things.

If I get some time in the near future I will attempt to write a test that 
illustrates the bug, and see if I can run that test on 3.6 as well.  If anyone 
out there can try a manual test on 3.6, that would save some time.

Side note: the code uses two constants for last_index_time - LAST_INDEX_TIME 
and LAST_INDEX_KEY.  Those should probably be combined.


 Multiple Entities DIH delta import: dataimporter.[entityName].last_index_time 
 is empty
 --

 Key: SOLR-4788
 URL: https://issues.apache.org/jira/browse/SOLR-4788
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.2, 4.3
 Environment: solr-spec
 4.2.1.2013.03.26.08.26.55
 solr-impl
 4.2.1 1461071 - mark - 2013-03-26 08:26:55
 lucene-spec
 4.2.1
 lucene-impl
 4.2.1 1461071 - mark - 2013-03-26 08:23:34
 OR
 solr-spec
 4.3.0
 solr-impl
 4.3.0 1477023 - simonw - 2013-04-29 15:10:12
 lucene-spec
 4.3.0
 lucene-impl
 4.3.0 1477023 - simonw - 2013-04-29 14:55:14
Reporter: chakming wong
Assignee: Shalin Shekhar Mangar

 {code:title=conf/dataimport.properties|borderStyle=solid}
 entity1.last_index_time=2013-05-06 03\:02\:06
 last_index_time=2013-05-06 03\:05\:22
 entity2.last_index_time=2013-05-06 03\:03\:14
 entity3.last_index_time=2013-05-06 03\:05\:22
 {code}
 {code:title=conf/solrconfig.xml|borderStyle=solid}
 <?xml version="1.0" encoding="UTF-8" ?>
 ...
 <requestHandler name="/dataimport"
     class="org.apache.solr.handler.dataimport.DataImportHandler">
   <lst name="defaults">
     <str name="config">dihconfig.xml</str>
   </lst>
 </requestHandler>
 ...
 {code}
 {code:title=conf/dihconfig.xml|borderStyle=solid}
 <?xml version="1.0" encoding="UTF-8" ?>
 <dataConfig>
   <dataSource name="source1"
       type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
       url="jdbc:mysql://*:*/*"
       user="*" password="*"/>
   <document name="strings">
     <entity name="entity1" pk="id" dataSource="source1"
         query="SELECT * FROM table_a"
         deltaQuery="SELECT table_a_id FROM table_b WHERE
             last_modified &gt; '${dataimporter.entity1.last_index_time}'"
         deltaImportQuery="SELECT * FROM table_a WHERE id =
             '${dataimporter.entity1.id}'"
         transformer="TemplateTransformer">
       <field ... >
         ...
       </field>
     </entity>
     <entity name="entity2">
       ...
       ...
     </entity>
     <entity name="entity3">
       ...
       ...
     </entity>
   </document>
 </dataConfig>
 {code}
 In the above setup, *dataimporter.entity1.last_index_time* is an *empty string*, 
 which causes the SQL query to fail with an error.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4834) Surround QParser should enable query text analysis

2013-05-20 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13662263#comment-13662263
 ] 

Paul Elschot commented on SOLR-4834:


From the Lucene In Action book, 2nd edition:

{quote}
Unlike the standard QueryParser, the Surround parser doesn’t use an analyzer.
This means that the user will have to know precisely how terms are indexed. For
indexing texts to be queried by the Surround language, we recommend that you 
use a
lowercasing analyzer that removes only the most frequently occurring 
punctuations.
{quote}

Nevertheless, to use an analyzer for the queries, one can override some of the 
protected methods of the Surround QueryParser to use the analyzer.

 Surround QParser should enable query text analysis
 --

 Key: SOLR-4834
 URL: https://issues.apache.org/jira/browse/SOLR-4834
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 4.3
Reporter: Isaac Hebsh
  Labels: analysis, qparserplugin, surround
 Fix For: 5.0, 4.4


 When using surround query parser, the query terms are not being analyzed. The 
 basic example is lower case, of course. This is probably an intended 
 behaviour, not a bug.
 I suggest one more query parameter, which determines whether or not to do 
 analysis, something like this:
 {code}
 _query_:{!surround df=myfield analyze=true}SpinPoint 7n GB18030
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1484015 - in /lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext: ./ jcl-over-slf4j-1.6.6.jar jul-to-slf4j-1.6.6.jar log4j-1.2.16.jar slf4j-api-1.6.6.jar slf4j-log4j12-1.6.6.jar

2013-05-20 Thread Michael McCandless
James,

I'm assuming this was a mistake?  Can you revert it?  Thanks.

Mike McCandless

http://blog.mikemccandless.com


On Sat, May 18, 2013 at 4:41 AM, Uwe Schindler u...@thetaphi.de wrote:
 What happened here?:
 - We don't use 4.2 branch anymore
 - Please don't commit JAR files

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: jd...@apache.org [mailto:jd...@apache.org]
 Sent: Saturday, May 18, 2013 12:13 AM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1484015 - in
 /lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext: ./ jcl-over-
 slf4j-1.6.6.jar jul-to-slf4j-1.6.6.jar log4j-1.2.16.jar slf4j-api-1.6.6.jar 
 slf4j-
 log4j12-1.6.6.jar

 Author: jdyer
 Date: Fri May 17 22:13:05 2013
 New Revision: 1484015

 URL: http://svn.apache.org/r1484015
 Log:
 initial buy

 Added:
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jcl-over-
 slf4j-1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jul-to-slf4j-
 1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/log4j-
 1.2.16.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-api-
 1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-log4j12-
 1.6.6.jar   (with props)

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jcl-
 over-slf4j-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/jcl-over-slf4j-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jul-to-
 slf4j-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/jul-to-slf4j-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/log4j-
 1.2.16.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/log4j-1.2.16.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-
 api-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/slf4j-api-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-
 log4j12-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/slf4j-log4j12-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure

2013-05-20 Thread Uwe Schindler
Steven, is your fix LUCENE-5007 related to that?

I am just mentioning this, because 4.3 is the release branch, so it should pass 
smoke tester!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Monday, May 20, 2013 9:08 AM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure
 
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.3/4/
 
 No tests ran.
 
 Build Log:
 [...truncated 32711 lines...]
 prepare-release-no-sign:
 [mkdir] Created dir: /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-SmokeRelease-4.3/lucene/build/fakeRelease
  [copy] Copying 401 files to /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeRelease/lucene
  [copy] Copying 194 files to /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeRelease/solr
  [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
  [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
  [exec] NOTE: output encoding is US-ASCII
  [exec]
  [exec] Load release URL file:/usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-4.3/lTraceback (most recent
 call last):
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scriptucene/build/fakeRelease/...
  [exec]
  [exec] Test Lucene...
  [exec]   test basics...
  [exec]   get KEYS
  [exec] 0.1 MB
  [exec]
  [exec] command gpg --homedir /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.gpg --import
 /usr/home/hudson/hudson-slave/ws/smokeTestRelease.py, line 1385, in
 module
  [exec] main()
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1331, in
 main
  [exec] smokeTest(baseURL, version, tmpDir, isSigned)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1364, in
 smokeTestorkspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.KEYS failed:
  [exec]
  [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 365, in
 checkSigs
  [exec] '%s/%s.gpg.import.log 2>&1' % (tmpDir, project))
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 513, in run
  [exec] printFileContents(logFile)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 497, in
 printFileContents
  [exec] txt = codecs.open(fileName, 'r',
 encoding=sys.getdefaultencoding(), errors='replace').read()
  [exec]   File /usr/local/lib/python3.2/codecs.py, line 884, in open
  [exec] file = builtins.open(filename, mode, buffering)
  [exec] IOError: [Errno 2] No such file or directory:
 '/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.gpg.import.log 2>&1'
 
 BUILD FAILED
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/build.xml:303: exec returned: 1
 
 Total time: 17 minutes 27 seconds
 Build step 'Invoke Ant' marked build as failure Email was triggered for: 
 Failure
 Sending email for trigger: Failure
 



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure

2013-05-20 Thread Steve Rowe
Uwe,

No, I didn't commit LUCENE-5007 on the 4.3 branch.

I skimmed the errors and IIRC it looked like  2>&1 is part of a file name 
that Python can't find, which makes sense.  Not sure why this is only a problem 
on Jenkins, though.  Maybe the newer version of Python you upgraded to changed 
file handling somehow?

Steve

On May 20, 2013, at 4:03 PM, Uwe Schindler u...@thetaphi.de wrote:

 Steven, is your fix LUCENE-5007 related to that?
 
 I am just mentioning this, because 4.3 is the release branch, so it should 
 pass smoke tester!
 
 Uwe
 
 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de
 
 -Original Message-
 From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
 Sent: Monday, May 20, 2013 9:08 AM
 To: dev@lucene.apache.org
 Subject: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure
 
 Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.3/4/
 
 No tests ran.
 
 Build Log:
 [...truncated 32711 lines...]
 prepare-release-no-sign:
[mkdir] Created dir: /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-SmokeRelease-4.3/lucene/build/fakeRelease
 [copy] Copying 401 files to /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeRelease/lucene
 [copy] Copying 194 files to /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeRelease/solr
 [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec]
 [exec] Load release URL file:/usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-4.3/lTraceback (most recent
 call last):
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scriptucene/build/fakeRelease/...
 [exec]
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB
 [exec]
 [exec] command gpg --homedir /usr/home/hudson/hudson-
 slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.gpg --import
 /usr/home/hudson/hudson-slave/ws/smokeTestRelease.py, line 1385, in
 module
 [exec] main()
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1331, in
 main
 [exec] smokeTest(baseURL, version, tmpDir, isSigned)
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1364, in
 smokeTestorkspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.KEYS failed:
 [exec]
 [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 365, in
 checkSigs
 [exec] '%s/%s.gpg.import.log 2>&1' % (tmpDir, project))
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 513, in run
 [exec] printFileContents(logFile)
 [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 497, in
 printFileContents
 [exec] txt = codecs.open(fileName, 'r',
 encoding=sys.getdefaultencoding(), errors='replace').read()
 [exec]   File /usr/local/lib/python3.2/codecs.py, line 884, in open
 [exec] file = builtins.open(filename, mode, buffering)
 [exec] IOError: [Errno 2] No such file or directory:
 '/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/lucene/build/fakeReleaseTmp/lucene.gpg.import.log 2>&1'
 
 BUILD FAILED
 /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-
 4.3/build.xml:303: exec returned: 1
 
 Total time: 17 minutes 27 seconds
 Build step 'Invoke Ant' marked build as failure Email was triggered for: 
 Failure
 Sending email for trigger: Failure
 
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure

2013-05-20 Thread Uwe Schindler
It is still Python 3.2 and 2.7. Maybe some changes in FreeBSD 9.x

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: Steve Rowe [mailto:sar...@gmail.com]
 Sent: Monday, May 20, 2013 10:07 PM
 To: dev@lucene.apache.org
 Subject: Re: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure
 
 Uwe,
 
 No, I didn't commit LUCENE-5007 on the 4.3 branch.
 
 I skimmed the errors and IIRC it looked like  2>&1 is part of a file name 
 that
 Python can't find, which makes sense.  Not sure why this is only a problem on
 Jenkins, though.  Maybe the newer version of Python you upgraded to
 changed file handling somehow?
 
 Steve
 
 On May 20, 2013, at 4:03 PM, Uwe Schindler u...@thetaphi.de wrote:
 
  Steven, is your fix LUCENE-5007 related to that?
 
  I am just mentioning this, because 4.3 is the release branch, so it should
 pass smoke tester!
 
  Uwe
 
  -
  Uwe Schindler
  H.-H.-Meier-Allee 63, D-28213 Bremen
  http://www.thetaphi.de
  eMail: u...@thetaphi.de
 
  -Original Message-
  From: Apache Jenkins Server [mailto:jenk...@builds.apache.org]
  Sent: Monday, May 20, 2013 9:08 AM
  To: dev@lucene.apache.org
  Subject: [JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 4 - Failure
 
  Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.3/4/
 
  No tests ran.
 
  Build Log:
  [...truncated 32711 lines...]
  prepare-release-no-sign:
 [mkdir] Created dir:
  /usr/home/hudson/hudson-slave/workspace/Lucene-
  Solr-SmokeRelease-4.3/lucene/build/fakeRelease
  [copy] Copying 401 files to /usr/home/hudson/hudson-
  slave/workspace/Lucene-Solr-SmokeRelease-
  4.3/lucene/build/fakeRelease/lucene
  [copy] Copying 194 files to /usr/home/hudson/hudson-
  slave/workspace/Lucene-Solr-SmokeRelease-
  4.3/lucene/build/fakeRelease/solr
  [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
  [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
  [exec] NOTE: output encoding is US-ASCII
  [exec]
  [exec] Load release URL file:/usr/home/hudson/hudson-
  slave/workspace/Lucene-Solr-SmokeRelease-4.3/lTraceback (most recent
  call last):
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scriptucene/build/fakeRelease/...
  [exec]
  [exec] Test Lucene...
  [exec]   test basics...
  [exec]   get KEYS
  [exec] 0.1 MB
  [exec]
  [exec] command gpg --homedir /usr/home/hudson/hudson-
  slave/workspace/Lucene-Solr-SmokeRelease-
  4.3/lucene/build/fakeReleaseTmp/lucene.gpg --import
  /usr/home/hudson/hudson-slave/ws/smokeTestRelease.py, line 1385,
 in
  module
  [exec] main()
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1331,
  in main
  [exec] smokeTest(baseURL, version, tmpDir, isSigned)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 1364,
  in
  smokeTestorkspace/Lucene-Solr-SmokeRelease-
  4.3/lucene/build/fakeReleaseTmp/lucene.KEYS failed:
  [exec]
  [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 365, in
  checkSigs
  [exec] '%s/%s.gpg.import.log 2>&1' % (tmpDir, project))
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 513, in
 run
  [exec] printFileContents(logFile)
  [exec]   File /usr/home/hudson/hudson-slave/workspace/Lucene-
 Solr-
  SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py, line 497, in
  printFileContents
  [exec] txt = codecs.open(fileName, 'r',
  encoding=sys.getdefaultencoding(), errors='replace').read()
  [exec]   File /usr/local/lib/python3.2/codecs.py, line 884, in open
  [exec] file = builtins.open(filename, mode, buffering)
  [exec] IOError: [Errno 2] No such file or directory:
  '/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-
  4.3/lucene/build/fakeReleaseTmp/lucene.gpg.import.log 2>&1'
 
  BUILD FAILED
  /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-
 SmokeRelease-
  4.3/build.xml:303: exec returned: 1
 
  Total time: 17 minutes 27 seconds
  Build step 'Invoke Ant' marked build as failure Email was triggered
  for: Failure Sending email for trigger: Failure
 
 
 
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
  additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional
 commands, e-mail: dev-h...@lucene.apache.org



RE: svn commit: r1484015 - in /lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext: ./ jcl-over-slf4j-1.6.6.jar jul-to-slf4j-1.6.6.jar log4j-1.2.16.jar slf4j-api-1.6.6.jar slf4j-log4j12-1.6.6.jar

2013-05-20 Thread Dyer, James
My apologies.  I will revert now.

James Dyer
Ingram Content Group
(615) 213-4311


-Original Message-
From: Michael McCandless [mailto:luc...@mikemccandless.com] 
Sent: Monday, May 20, 2013 2:48 PM
To: Lucene/Solr dev
Subject: Re: svn commit: r1484015 - in 
/lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext: ./ 
jcl-over-slf4j-1.6.6.jar jul-to-slf4j-1.6.6.jar log4j-1.2.16.jar 
slf4j-api-1.6.6.jar slf4j-log4j12-1.6.6.jar

James,

I'm assuming this was a mistake?  Can you revert it?  Thanks.

Mike McCandless

http://blog.mikemccandless.com


On Sat, May 18, 2013 at 4:41 AM, Uwe Schindler u...@thetaphi.de wrote:
 What happened here?:
 - We don't use 4.2 branch anymore
 - Please don't commit JAR files

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: jd...@apache.org [mailto:jd...@apache.org]
 Sent: Saturday, May 18, 2013 12:13 AM
 To: comm...@lucene.apache.org
 Subject: svn commit: r1484015 - in
 /lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext: ./ jcl-over-
 slf4j-1.6.6.jar jul-to-slf4j-1.6.6.jar log4j-1.2.16.jar slf4j-api-1.6.6.jar 
 slf4j-
 log4j12-1.6.6.jar

 Author: jdyer
 Date: Fri May 17 22:13:05 2013
 New Revision: 1484015

 URL: http://svn.apache.org/r1484015
 Log:
 initial buy

 Added:
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jcl-over-
 slf4j-1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jul-to-slf4j-
 1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/log4j-
 1.2.16.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-api-
 1.6.6.jar   (with props)
 lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-log4j12-
 1.6.6.jar   (with props)

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jcl-
 over-slf4j-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/jcl-over-slf4j-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/jul-to-
 slf4j-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/jul-to-slf4j-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/log4j-
 1.2.16.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/log4j-1.2.16.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-
 api-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/slf4j-api-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.

 Added: lucene/dev/branches/lucene_solr_4_2/solr/example/lib/ext/slf4j-
 log4j12-1.6.6.jar
 URL:
 http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_4_2/solr/e
 xample/lib/ext/slf4j-log4j12-1.6.6.jar?rev=1484015view=auto
 ==
 
 Binary file - no diff available.



 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Solr core discovery wiki pages - rename from 4.3 to 4.4

2013-05-20 Thread Shawn Heisey
For anyone who doesn't know already, core discovery is fundamentally 
broken in 4.3.0, and the problems won't be fixed in 4.3.1.  The specific 
problem that a user found isn't a problem on branch_4x, but the code is 
quite a lot different.


See discussion on SOLR-4773 starting on May 6th.

I am planning to go through the wiki and rename all the pages for core 
discovery so they say 4.4, and modify the page content similarly as 
well.  The pages include a note saying that the change was introduced in 
4.3.0 but doesn't work right until 4.4.  I will put redirects on the old 
pages after I rename them.


It will be a bit of an undertaking to make sure all the changes happen 
somewhat seamlessly, so I will be waiting until I get home this evening 
before attempting it.


I also wanted to get some feedback on this change before it happens.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



SOLR-3076 and IndexWriter.addDocuments()

2013-05-20 Thread Tom Burton-West
My understanding of Lucene Block-Join indexing is that at some point
IndexWriter.addDocuments() or IndexWriter.updateDocuments() need to be
called to actually write a block of documents to disk.
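
For reference, the Lucene-level call in question looks roughly like the sketch below (a minimal illustration only; the field names and values are made up, and this is not code from SOLR-3076):

import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

class BlockAddSketch {
  static void addBlock(IndexWriter writer) throws Exception {
    // Children first, parent last: block-join queries rely on the parent
    // being the final document of the block.
    List<Document> block = new ArrayList<Document>();

    Document child = new Document();
    child.add(new StringField("type", "child", Store.NO));
    block.add(child);

    Document parent = new Document();
    parent.add(new StringField("id", "42", Store.YES));
    parent.add(new StringField("type", "parent", Store.NO));
    block.add(parent);

    writer.addDocuments(block);                              // add a new block
    // writer.updateDocuments(new Term("id", "42"), block);  // or replace an existing one
  }
}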

   I'm trying to understand how SOLR-3076 (Solr should support block
joins), works and haven't been able to trace out how or where it calls
IndexWriter.addDocuments() or IndexWriter.updateDocuments.

Can someone point me to the right place in the patch code?

(If I should be asking this in the JIRA instead of the dev list please let
me know)

Tom


[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

After spending more time looking at everything that an updateRequest can do, I 
realized that not all parts of a request are routable.

The latest patch handles this by first sending all the routable updates to the 
correct shard, then executing a final update request with non-routable update 
commands such as OPTIMIZE or deleteByQuery.

This latest patch has not been tested so is for review purposes only.
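
Not the patch code itself, but a rough sketch of the two-phase idea described above (shardForId and the leaders map are hypothetical placeholders; the actual patch presumably derives the target from the collection's DocRouter and the cluster state in ZooKeeper):

{code}
import java.util.*;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

class RoutedUpdateSketch {

  // Hypothetical placeholder: the real logic would hash the id with the
  // collection's DocRouter and look the leader up in the cluster state.
  static String shardForId(String id) {
    return "shard" + (Math.abs(id.hashCode()) % 2 + 1);
  }

  static void send(Map<String, SolrServer> leaders, SolrServer anyNode,
                   List<SolrInputDocument> docs, String deleteByQuery) throws Exception {
    // Phase 1: group the routable adds by target shard and send each group
    // directly to that shard's leader.
    Map<SolrServer, UpdateRequest> perLeader = new HashMap<SolrServer, UpdateRequest>();
    for (SolrInputDocument doc : docs) {
      SolrServer leader = leaders.get(shardForId((String) doc.getFieldValue("id")));
      UpdateRequest req = perLeader.get(leader);
      if (req == null) {
        req = new UpdateRequest();
        perLeader.put(leader, req);
      }
      req.add(doc);
    }
    for (Map.Entry<SolrServer, UpdateRequest> e : perLeader.entrySet()) {
      e.getValue().process(e.getKey());
    }

    // Phase 2: non-routable commands (deleteByQuery, optimize, etc.) go in a
    // single trailing request sent the ordinary, non-routed way.
    if (deleteByQuery != null) {
      UpdateRequest tail = new UpdateRequest();
      tail.deleteByQuery(deleteByQuery);
      tail.process(anyNode);
    }
  }
}
{code}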



 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-20 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662389#comment-13662389
 ] 

Joel Bernstein edited comment on SOLR-4816 at 5/20/13 10:00 PM:


After spending more time looking at everything that an updateRequest can do, I 
realized that not all parts of a request are routable.

The latest patch handles this by first sending all the routable updates to the 
correct shard, then executing a final update request with non-routable update 
commands such as OPTIMIZE or deleteByQuery.

This latest patch has not been tested so is for review purposes only.



  was (Author: joel.bernstein):
After spending more time looking at everything that an upateRequest can do 
I realized that not all parts of a request are routable.

The latest patch handles this by first sending all the routable updates to the 
correct shard. Then executing a final update request with non-routable update 
commands such OPTIMIZE or deleteByQuery.

This latest patch has not been tested so is for review purposes only.


  
 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4816) Change CloudSolrServer to send updates to the correct shard

2013-05-20 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-4816:
-

Attachment: SOLR-4816.patch

 Change CloudSolrServer to send updates to the correct shard
 ---

 Key: SOLR-4816
 URL: https://issues.apache.org/jira/browse/SOLR-4816
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.3
Reporter: Joel Bernstein
Priority: Minor
 Attachments: SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, 
 SOLR-4816.patch, SOLR-4816.patch, SOLR-4816.patch, SOLR-4816-sriesenberg.patch


 This issue changes CloudSolrServer so it routes update requests to the 
 correct shard. This would be a nice feature to have to eliminate the document 
 routing overhead on the Solr servers.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3385) Extended Dismax parser ignores all regular search terms when one search term is using + (dismax behaves differently)

2013-05-20 Thread Naomi Dushay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662402#comment-13662402
 ] 

Naomi Dushay commented on SOLR-3385:


I believe this is the same as SOLR-2649:

// For correct lucene queries, turn off mm processing if there
// were explicit operators (except for AND).
boolean doMinMatched = (numOR + numNOT + numPluses + numMinuses) == 0;
(lines 232-234 taken from 
tags/lucene_solr_3_3/solr/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java)

 Extended Dismax parser ignores all regular search terms when one search term 
 is using + (dismax behaves differently)
 

 Key: SOLR-3385
 URL: https://issues.apache.org/jira/browse/SOLR-3385
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 3.5
Reporter: Nils Kaiser
 Attachments: select_dev_PLUSsales_dismax_553results.xml, 
 select_dev_PLUSsales_edismax_9600results.xml, 
 select_dev_PLUSsales_miau_dismax_0results.xml, 
 select_dev_PLUSsales_miau_edismax_9600results.xml, 
 select_dev_sales_miau_edismax_0results.xml, 
 select_PLUSsales_dismax_9600results.xml, 
 select_PLUSsales_edismax_9600results.xml


 When using the extended dismax parser with at least one term using + or -, 
 all other search terms are ignored.
 Example:
 (the terms dev and sales are found in the index, the term miau is not part of 
 the index)
 dev sales miau, +dev +sales +miau, dev +sales +miau all give me 0 
 results (as expected)
 dev +sales miau, dev +sales or +sales return the same number of results 
 (dev and miau terms are ignored)
 The standard dismax parser always treats search terms as +, so dev sales 
 miau, +dev +sales miau, dev +sales miau return the same number of 
 results. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SOLR-3076 and IndexWriter.addDocuments()

2013-05-20 Thread Tom Burton-West
Found it.  In AddBlockUpdateTest.testSmallBlockDirect

 assertEquals(2, h.getCore().getUpdateHandler().addBlock(cmd));
and in the patched code DirectUpdateHandler2.addBlock()

Tom


On Mon, May 20, 2013 at 5:49 PM, Tom Burton-West tburt...@umich.edu wrote:

 My understanding of Lucene Block-Join indexing is that at some point
 IndexWriter.addDocuments() or IndexWriter.updateDocuments() need to be
 called to actually write a block of documents to disk.

I'm trying to understand how SOLR-3076 (Solr should support block
 joins), works and haven't been able to trace out how or where it calls
 IndexWriter.addDocuments() or IndexWriter.updateDocuments.

 Can someone point me to the right place in the patch code?

 (If I should be asking this in the JIRA instead of the dev list please let
 me know)

 Tom



[jira] [Commented] (SOLR-2649) MM ignored in edismax queries with operators

2013-05-20 Thread Naomi Dushay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662455#comment-13662455
 ] 

Naomi Dushay commented on SOLR-2649:


Our dismax mm setting is 6<-1 6<90%.

I would like our mm to be honored for the top-level SHOULD clauses.  Oh please, 
oh please?

EDISMAX

q=customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4)   4 hits
customer NOT driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE
customer -driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE
customer OR academic OR library NOT driven:
  +((custom)~0.01 (academ)~0.01 (librari)~0.01 -(driven)~0.01)  984300 
hits
customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3) 100 hits


DISMAX  (plausible results!):

customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4) ()4 hits
customer NOT driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   96 hits
customer -driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   96 hits
customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3)()   100 hits



 MM ignored in edismax queries with operators
 

 Key: SOLR-2649
 URL: https://issues.apache.org/jira/browse/SOLR-2649
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Reporter: Magnus Bergmark
Priority: Minor
 Fix For: 4.4


 Hypothetical scenario:
   1. User searches for stocks oil gold with MM set to 50%
   2. User adds -stockings to the query: stocks oil gold -stockings
   3. User gets no hits since MM was ignored and all terms were AND-ed 
 together
 The behavior seems to be intentional, although the reason why is never 
 explained:
   // For correct lucene queries, turn off mm processing if there
   // were explicit operators (except for AND).
   boolean doMinMatched = (numOR + numNOT + numPluses + numMinuses) == 0; 
 (lines 232-234 taken from 
 tags/lucene_solr_3_3/solr/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java)
 This makes edismax unsuitable as a replacement for dismax; mm is one of the 
 primary features of dismax.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: have developer question about ClobTransformer and DIH

2013-05-20 Thread Chris Hostetter

: I think you're confusing the hierarchy of your database's types with the 
: hierarchy in Java.  In Java, a java.sql.Blob and a java.sql.Clob are 2 
: different things.  They do not extend a common ancestor (excpt 

Exactly - regardless of what the informix docs may say about how column 
types are related (for the purposes of casting and conversion in SQL), what 
matters is how the JDBC driver you are using maps your database's types to 
java types.

some quick googling pulls up this result...

http://docs.oracle.com/cd/E17904_01/web./e13753/informix.htm#i1065747

Which lists specifically..

Informix BLOB -> JDBC BLOB
Informix CLOB -> JDBC CLOB
Informix TEXT -> JDBC LONGVARCHAR

...i suspect that there is not any real problem with the ClobTransformer 
-- it seems to be working perfectly, dealing with the CLOB fields returned 
appropriately -- but there may in fact be a problem with if/how DIH deals 
with JDBC values that are LONGVARCHARs.

Since i don't see Types.LONGVARCHAR mentioned anywhere in the DIH code 
base, i suspect it's falling back to some default behavior assuming 
String data which doesn't account for the way LONGVARCHAR data is 
probably returned as an Object that needs to be streamed similar to a 
Clob.

specifically, some quick googling for LONGVARCHAR in the JDBC APIs 
suggests that ResultSet.getUnicodeStream or ResultSet.getCharacterStream 
should be used for LONGVARCHAR columns -- but i don't see any usage of 
that method in the DIH code base.
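
for what it's worth, handling a LONGVARCHAR column via a character stream could 
look something like the sketch below -- this is only an illustration of the 
ResultSet.getCharacterStream() usage, not the actual DIH/JdbcDataSource code:

import java.io.IOException;
import java.io.Reader;
import java.sql.ResultSet;
import java.sql.SQLException;

class LongVarcharSketch {
  // Read a LONGVARCHAR column through a Reader instead of relying on the
  // driver's default Object/String mapping.
  static String readColumn(ResultSet rs, String column) throws SQLException, IOException {
    Reader reader = rs.getCharacterStream(column);
    if (reader == null) {
      return null; // SQL NULL
    }
    try {
      StringBuilder sb = new StringBuilder();
      char[] buf = new char[4096];
      int n;
      while ((n = reader.read(buf)) != -1) {
        sb.append(buf, 0, n);
      }
      return sb.toString();
    } finally {
      reader.close();
    }
  }
}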

geeky2: would you mind opening a bug to fix support for LONGVARCHAR in 
JdbcDataSource?



-Hoss

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-2649) MM ignored in edismax queries with operators

2013-05-20 Thread Naomi Dushay (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662455#comment-13662455
 ] 

Naomi Dushay edited comment on SOLR-2649 at 5/20/13 11:41 PM:
--

Our dismax mm setting is 6<-1 6<90%.

I would like our mm to be honored for the top-level SHOULD clauses.  Oh please, 
oh please?

EDISMAX

q=customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4)   4 hits

customer NOT driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE

customer -driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE

customer OR academic OR library NOT driven:
  +((custom)~0.01 (academ)~0.01 (librari)~0.01 -(driven)~0.01)  984300 
hits

customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3) 100 hits


DISMAX  (plausible results!):

customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4) ()
4 hits

customer NOT driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   
96 hits

customer -driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   
96 hits

customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3)()   
100 hits



  was (Author: ndushay):
Our dismax mm setting is 6<-1 6<90%.

I would like our mm to be honored for the top-level SHOULD clauses.  Oh please, 
oh please?

EDISMAX

q=customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4)   4 hits
customer NOT driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE
customer -driven academic library:
  +((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)  984300 
hits  = INSANE
customer OR academic OR library NOT driven:
  +((custom)~0.01 (academ)~0.01 (librari)~0.01 -(driven)~0.01)  984300 
hits
customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3) 100 hits


DISMAX  (plausible results!):

customer driven academic library:
  +(((custom)~0.01 (driven)~0.01 (academ)~0.01 (librari)~0.01)~4) ()4 hits
customer NOT driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   96 hits
customer -driven academic library:
  +(((custom)~0.01 -(driven)~0.01 (academ)~0.01 (librari)~0.01)~3) ()   96 hits
customer academic library:
  +(((custom)~0.01 (academ)~0.01 (librari)~0.01)~3)()   100 hits


  
 MM ignored in edismax queries with operators
 

 Key: SOLR-2649
 URL: https://issues.apache.org/jira/browse/SOLR-2649
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Reporter: Magnus Bergmark
Priority: Minor
 Fix For: 4.4


 Hypothetical scenario:
   1. User searches for stocks oil gold with MM set to 50%
   2. User adds -stockings to the query: stocks oil gold -stockings
   3. User gets no hits since MM was ignored and all terms were AND-ed 
 together
 The behavior seems to be intentional, although the reason why is never 
 explained:
   // For correct lucene queries, turn off mm processing if there
   // were explicit operators (except for AND).
   boolean doMinMatched = (numOR + numNOT + numPluses + numMinuses) == 0; 
 (lines 232-234 taken from 
 tags/lucene_solr_3_3/solr/src/java/org/apache/solr/search/ExtendedDismaxQParserPlugin.java)
 This makes edismax unsuitable as a replacement for dismax; mm is one of the 
 primary features of dismax.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Solr core discovery wiki pages - rename from 4.3 to 4.4

2013-05-20 Thread Erick Erickson
Go for it, it's probably best to discourage people from trying this
out at this point...

Erick

On Mon, May 20, 2013 at 4:35 PM, Shawn Heisey s...@elyograg.org wrote:
 For anyone who doesn't know already, core discovery is fundamentally broken
 in 4.3.0, and the problems won't be fixed in 4.3.1.  The specific problem
 that a user found isn't a problem on branch_4x, but the code is quite a lot
 different.

 See discussion on SOLR-4773 starting on May 6th.

 I am planning to go through the wiki and rename all the pages for core
 discovery so they say 4.4, and modify the page content similarly as well.
 The pages include a note saying that the change was introduced in 4.3.0 but
 doesn't work right until 4.4.  I will put redirects on the old pages after I
 rename them.

 It will be a bit of an undertaking to make sure all the changes happen
 somewhat seamlessly, so I will be waiting until I get home this evening
 before attempting it.

 I also wanted to get some feedback on this change before it happens.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-4842:
--

 Summary: Field faceting with local params affects successive field 
faceting parameters
 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical


SOLR-4717 introduced local param support for per-field faceting, allowing the 
same field to be faceted in various ways.  There's a problem such that one 
field's local param setting will override the defaults of the next field.  For 
example: facet.field={!key=foo facet.mincount=1 
facet.missing=false}field&facet.field={!key=bar facet.missing=true}field causes 
the bar faceting to use facet.mincount=1 when it should use the default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-4842:
---

Attachment: SOLR-4842.patch

Here's a patch with a failing test case that should pass, demonstrating the 
corruption of one facet.field's local params into another facet.field.

 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Attachments: SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar facet.missing=true}field 
 causes the bar faceting to use facet.mincount=1 when it should use the 
 default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-4842:
---

Fix Version/s: 4.3.1

 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Fix For: 4.3.1

 Attachments: SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar facet.missing=true}field 
 causes the bar faceting to use facet.mincount=1 when it should use the 
 default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-4842:
---

Fix Version/s: (was: 4.3.1)
  Description: SOLR-4717 introduced local param support for per-field 
faceting, allowing the same field to be faceted in various ways.  There's a 
problem such that one field's local param setting will override the defaults of 
the next field.  For example: {code}facet.field={!key=foo facet.mincount=1 
facet.missing=false}field&facet.field={!key=bar facet.missing=true}field{code} 
causes the bar faceting to use facet.mincount=1 when it should use the 
default of 0.  (was: SOLR-4717 introduced local param support for per-field 
faceting, allowing the same field to be faceted in various ways.  There's a 
problem such that one field's local param setting will override the defaults of 
the next field.  For example: facet.field={!key=foo facet.mincount=1 
facet.missing=false}field&facet.field={!key=bar facet.missing=true}field causes 
the bar faceting to use facet.mincount=1 when it should use the default of 0.)

 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Attachments: SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: {code}facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar 
 facet.missing=true}field{code} causes the bar faceting to use 
 facet.mincount=1 when it should use the default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662580#comment-13662580
 ] 

Erik Hatcher commented on SOLR-4842:


I'm not sure my patch actually shows the problem clearly yet.  facet.mincount 
and deprecated facet.zeros support still in there confuse things a bit.  I'm 
still working through a test case showing the issue clearly.

 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Attachments: SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: {code}facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar 
 facet.missing=true}field{code} causes the bar faceting to use 
 facet.mincount=1 when it should use the default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: ExternalDocValuesFilterReader progress?

2013-05-20 Thread Ryan McKinley
thanks

that's a great start...  ah so much to do!


On Mon, May 20, 2013 at 10:37 AM, Alan Woodward a...@flax.co.uk wrote:

 I made a start on this, but in the end the client decided to do something
 different so it never got finished.  I did get
 https://issues.apache.org/jira/browse/LUCENE-4902 out of it, which would
 allow you to plug in custom FilterReaders to Solr via the
 IndexReaderFactory.  I'm still interested in working on it, just need a
 project that would use it, and some time...

 Alan Woodward
 www.flax.co.uk


 On 20 May 2013, at 16:29, Ryan McKinley wrote:

 In March, there was some effort at looking at cleaner ways to integrate
 external data:


 http://mail-archives.apache.org/mod_mbox/lucene-dev/201303.mbox/%3cc2e7cc37-52f2-4527-a919-a071d26f9...@flax.co.uk%3E

 Any updates on this?

 Thanks
 Ryan






[jira] [Updated] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-4842:
---

Attachment: SOLR-4842__hoss_tests.patch


Erik: based on your followup comment, i ignored your patch and attempted to 
write a test to reproduce the general problem you described and could not do so 
-- see attached SOLR-4842__hoss_tests.patch.

if there is a bug, i suspect it must be something subtle in the way the 
defaults of a particular param are defined.  if you're having trouble writing a 
test patch to demonstrate the problem you are seeing, can you at least describe 
a specific example query where you observe a problem?  even if you can't share 
the docs needed to see the problem, knowing exactly what params may help 
narrow things down.



 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Attachments: SOLR-4842__hoss_tests.patch, SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: {code}facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar 
 facet.missing=true}field{code} causes the bar faceting to use 
 facet.mincount=1 when it should use the default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4842) Field faceting with local params affects successive field faceting parameters

2013-05-20 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4842?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13662646#comment-13662646
 ] 

Hoss Man commented on SOLR-4842:


Erik: ok, now looking at your test, i think it's just flawed.

Ignore for a minute the issue of faceting multiple ways, ignore the foo key 
in the assertQ your patch modifies, ignore everything about it, delete it from 
the test, and just consider a query using only the bar key like so...

{noformat}
  assertQ("ignore foo, look at bar",
          req("q", "id:[42 TO 47]"
              ,"facet", "true"
              ,"facet.zeros", "false"
              ,"fq", "id:[42 TO 45]"
              ,"facet.field", "{!key=bar " +
                              " facet.missing=true " +
                              "}" + fname
              )
          ,"*[count(//doc)=4]"
          ,"*[count(//lst[@name='bar']/int)=5]"
          ,"//lst[@name='bar']/int[not(@name)][.='1']"
          );
{noformat}

That test is still going to fail because facet.zeros=false but you are 
asserting that there should be 5 terms for bar.  the only way there could be 
5 terms is if you include the terms with a zero count.

I don't think the docs have ever really specified what happens if you mix and 
match facet.mincount with the deprecated facet.zeros (ie: 
facet.mincount=1&facet.zeros=true&facet.field=XXX), let alone in the case of 
per-field overrides (ie: 
facet.mincount=1&f.XXX.facet.zeros=true&facet.field=XXX) -- i think it's fair 
game to say all bets are off in the new situation of localparams.  but in this 
specific case, there's no way it makes sense to think that the bar key should 
have a mincount of 0.


 Field faceting with local params affects successive field faceting parameters
 -

 Key: SOLR-4842
 URL: https://issues.apache.org/jira/browse/SOLR-4842
 Project: Solr
  Issue Type: Bug
  Components: search, SearchComponents - other
Affects Versions: 4.3
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Critical
 Attachments: SOLR-4842__hoss_tests.patch, SOLR-4842.patch


 SOLR-4717 introduced local param support for per-field faceting, allowing the 
 same field to be faceted in various ways.  There's a problem such that one 
 fields local param setting will override the defaults of the next field.  For 
 example: {code}facet.field={!key=foo facet.mincount=1 
 facet.missing=false}fieldfacet.field={!key=bar 
 facet.missing=true}field{code} causes the bar faceting to use 
 facet.mincount=1 when it should use the default of 0.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-4.3 - Build # 5 - Still Failing

2013-05-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.3/5/

No tests ran.

Build Log:
[...truncated 32711 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease
 [copy] Copying 401 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease/lucene
 [copy] Copying 194 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease/solr
 [exec] JAVA6_HOME is /home/hudson/tools/java/latest1.6
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeRelease/...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB
 [exec] 
 [exec] command gpg --homedir 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.gpg
 --import /usr/home/hudson/hudson-slave/wTraceback (most recent call last):
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/script
 [exec] 
orkspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.KEYS 
failed:
 [exec] s/smokeTestRelease.py, line 1385, in module
 [exec] main()
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 1331, in main
 [exec] smokeTest(baseURL, version, tmpDir, isSigned)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 1364, in smokeTest
 [exec] checkSigs('lucene', lucenePath, version, tmpDir, isSigned)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 365, in checkSigs
 [exec] '%s/%s.gpg.import.log 2>&1' % (tmpDir, project))
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 513, in run
 [exec] printFileContents(logFile)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/dev-tools/scripts/smokeTestRelease.py,
 line 497, in printFileContents
 [exec] txt = codecs.open(fileName, 'r', 
encoding=sys.getdefaultencoding(), errors='replace').read()
 [exec]   File /usr/local/lib/python3.2/codecs.py, line 884, in open
 [exec] file = builtins.open(filename, mode, buffering)
 [exec] IOError: [Errno 2] No such file or directory: 
'/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/lucene/build/fakeReleaseTmp/lucene.gpg.import.log
 2>&1'

BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-4.3/build.xml:303:
 exec returned: 1

Total time: 17 minutes 23 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org