Re: Problem using generic types?

2013-12-15 Thread Petrus Hyvönen
Hi Andi,

I see your point and have now kept everything in the pure Python domain.

If I run my script from the shell with python script.py it does not crash. 
However, if I execute it line-by-line in the Python interpreter, it crashes (the same 
happens in other tools such as the IPython notebook).
All classes used are non-wrapped Java classes, but I get the same effect with 
classes made for Python subclassing.
I am getting this on both Mac OS X 64-bit Python and Windows 7 32-bit Python.


 elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
#
# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00010005da1a, pid=3318, tid=1287
#
# JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 1.7.0_45-b18)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode bsd-amd64 
compressed oops)
# Problematic frame:
# C  [libpython2.7.dylib+0x5aa1a]  PyObject_GetAttr+0x1a
#


From the stack it seems like something is happening in wrapType: 
Stack: [0x7fff5fb8,0x7fff5fc0],  sp=0x7fff5fbff470,  free 
space=509k
Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [libpython2.7.dylib+0x5aa1a]  PyObject_GetAttr+0x1a
C  [_orekit.so+0xa80878]  wrapType(_typeobject*, _jobject* const)+0x58
C  [_orekit.so+0x554400]  
org::orekit::propagation::events::t_AbstractReconfigurableDetector_withHandler(org::orekit::propagation::events::t_AbstractReconfigurableDetector*,
 _object*)+0x1c0

First, is the generic class assignment correct, i.e. equivalent to writing new 
ContinueOnEvent<ElevationDetector>() in Java? And is it OK to use regular Java 
objects/types?
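To make the question concrete, here is a pure-Python toy model of the of_() parameterization pattern, showing why passing a string instead of a wrapper class would produce the 'str' object has no attribute 'wrapfn_' error seen earlier. This is only a sketch of the pattern, not JCC's actual implementation; all class names below are hypothetical.

```python
class WrappedInstance:
    """Stands in for a generated wrapper around a Java object."""
    def __init__(self, type_name):
        self.type_name = type_name

class GenericWrapper:
    """Mimics a wrapper for a generic Java class such as ContinueOnEvent<T>."""
    def __init__(self):
        self.parameters_ = None

    def of_(self, *types):
        # Remember the parameter classes so later calls know how to wrap results.
        self.parameters_ = types
        return self

    def wrap_result(self, java_obj):
        # Each parameter is expected to be a wrapper class carrying a wrapfn_
        # attribute; passing a plain str here would fail later with
        # AttributeError: 'str' object has no attribute 'wrapfn_'.
        wrapfn = self.parameters_[0].wrapfn_
        return wrapfn(java_obj)

class ElevationDetectorWrapper:
    wrapfn_ = staticmethod(lambda obj: WrappedInstance("ElevationDetector"))

handler = GenericWrapper().of_(ElevationDetectorWrapper)
wrapped = handler.wrap_result(object())  # a WrappedInstance for ElevationDetector
```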

Any other comments on how to move forward are highly appreciated. Is it somehow 
possible to get more logging about what is going wrong?

With best regards
/Petrus



On 15 Dec 2013, at 2:40, Andi Vajda va...@apache.org wrote:

 
 On Dec 14, 2013, at 19:14, Petrus Hyvönen petrus.hyvo...@gmail.com wrote:
 
 Hi,
 
 I'm having a problem that I think might be related to generic types, but I'm not 
 sure at all.
 
 I'm wrapping an orbit calculation library, which has been working well, but the 
 latest version uses generic types and I'm getting some problems. The 
 script works when executed in plain Python, but fails in the IPython notebook on 
 this last line when executed as a couple of cells.
 
 What is an 'ipython notebook' ?
 
 Andi.,
 
 
 The section with the problem in my script is:
 elDetector = ElevationDetector(sta1Frame).withConstantElevation(math.radians(5.0))
 elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
 
 In Java it would typically look something like:
 
 ElevationDetector detector = new ElevationDetector(topo)
   .withConstantElevation(x)
   .withHandler(new ContinueOnEvent<ElevationDetector>());
 
 It produces correct results in plain Python, but crashes the kernel in 
 IPython if executed as cells, and when executing from Spyder I get an error 
 message:
 
  elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
 AttributeError: 'str' object has no attribute 'wrapfn_' 
 
 As I have been using this setup stably with lots of other functions, it 
 feels like there is something wrong with the generic type line, but I don't really 
 know how to get any further. I'm confused that pauses in the 
 execution seem to affect the result.
 
 Any comments highly appreciated...
 
 Best Regards
 /Petrus
 



Re: Problem using generic types?

2013-12-15 Thread Andi Vajda

 On Dec 15, 2013, at 5:43, Petrus Hyvönen petrus.hyvo...@gmail.com wrote:
 
 Hi Andi,
 
 I see your point and have now kept everything in the pure Python domain.
 
 If I run my script from the shell with python script.py it does not crash. 
 However, if I execute it line-by-line in the Python interpreter, it crashes (the same 
 happens in other tools such as the IPython notebook).
 All classes used are non-wrapped Java classes, but I get the same effect with 
 classes made for Python subclassing.
 I am getting this on both Mac OS X 64-bit Python and Windows 7 32-bit Python.
 
 
 elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x00010005da1a, pid=3318, tid=1287
 #
 # JRE version: Java(TM) SE Runtime Environment (7.0_45-b18) (build 
 1.7.0_45-b18)
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.45-b08 mixed mode bsd-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libpython2.7.dylib+0x5aa1a]  PyObject_GetAttr+0x1a
 #
 
 
 From the stack it seems like something is happening in wrapType: 
 Stack: [0x7fff5fb8,0x7fff5fc0],  sp=0x7fff5fbff470,  free 
 space=509k
 Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native 
 code)
 C  [libpython2.7.dylib+0x5aa1a]  PyObject_GetAttr+0x1a
 C  [_orekit.so+0xa80878]  wrapType(_typeobject*, _jobject* const)+0x58
 C  [_orekit.so+0x554400]  
 org::orekit::propagation::events::t_AbstractReconfigurableDetector_withHandler(org::orekit::propagation::events::t_AbstractReconfigurableDetector*,
  _object*)+0x1c0
 
 First, is the generic class assignment correct, i.e. equivalent to writing new 
 ContinueOnEvent<ElevationDetector>() in Java? And is it OK to use regular 
 Java objects/types?
 
 Any other comments on how to move forward are highly appreciated. Is it somehow 
 possible to get more logging about what is going wrong?

You could compile the whole thing for debugging by adding --debug after 
'build' in the jcc invocation, and run it with gdb.
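As a concrete sketch of that workflow (your exact jcc command line will differ; re-use your own jars and options):

```shell
# Rebuild the wrapper with debug symbols: re-run your existing jcc
# command line, adding --debug after 'build'.
# Then run the crashing script under gdb:
gdb --args python script.py
# at the (gdb) prompt:
#   run          # reproduce the SIGSEGV
#   bt           # print the native backtrace (should show wrapType)
```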

If you can isolate a reproducible crash into a small test case, I can also take 
a look at it.

Andi..

 
 With best regards
 /Petrus
 
 
 
 On 15 Dec 2013, at 2:40, Andi Vajda va...@apache.org wrote:
 
 
 On Dec 14, 2013, at 19:14, Petrus Hyvönen petrus.hyvo...@gmail.com wrote:
 
 Hi,
 
 I'm having a problem that I think might be related to generic types, but 
 I'm not sure at all.
 
 I'm wrapping an orbit calculation library, which has been working well, but 
 the latest version uses generic types and I'm getting some problems. The 
 script works when executed in plain Python, but fails in the IPython notebook 
 on this last line when executed as a couple of cells.
 
 What is an 'ipython notebook' ?
 
 Andi.,
 
 
 The section with the problem in my script is:
 elDetector = ElevationDetector(sta1Frame).withConstantElevation(math.radians(5.0))
 elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
 
 In Java it would typically look something like:
 
 ElevationDetector detector = new ElevationDetector(topo)
  .withConstantElevation(x)
  .withHandler(new ContinueOnEvent<ElevationDetector>());
 
 It produces correct results in plain Python, but crashes the kernel in 
 IPython if executed as cells, and when executing from Spyder I get an error 
 message:
 
  elDetector = elDetector.withHandler(ContinueOnEvent().of_(ElevationDetector))
 AttributeError: 'str' object has no attribute 'wrapfn_' 
 
 As I have been using this setup stably with lots of other functions, it 
 feels like there is something wrong with the generic type line, but I don't 
 really know how to get any further. I'm confused that pauses in the 
 execution seem to affect the result.
 
 Any comments highly appreciated...
 
 Best Regards
 /Petrus
 


[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1121 - Failure!

2013-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1121/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
REGRESSION:  org.apache.solr.schema.ModifyConfFileTest.testConfigWrite

Error Message:
should have detected an error early!

Stack Trace:
java.lang.AssertionError: should have detected an error early!
at 
__randomizedtesting.SeedInfo.seed([25171EC8E9855519:932C33C0D58C25DC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.schema.ModifyConfFileTest.testConfigWrite(ModifyConfFileTest.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:744)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.ModifyConfFileTest

Error Message:
ERROR: SolrIndexSearcher opens=1 

Tracking Your Issues In The JDK Bug System

2013-12-15 Thread Rory O'Donnell Oracle, Dublin Ireland

Hi Dawid,

I hope you will find Dalibor's blog on Tracking Your Issues In The JDK 
Bug System useful!

Another step in the right direction.

Rgds,Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 141 - Still Failing

2013-12-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/141/

No tests ran.

Build Log:
[...truncated 51994 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease
 [copy] Copying 431 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/lucene
 [copy] Copying 230 files to 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/solr
 [exec] JAVA7_HOME is /home/hudson/tools/java/latest1.7
 [exec] NOTE: output encoding is US-ASCII
 [exec] 
 [exec] Load release URL 
file:/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeRelease/...
 [exec] 
 [exec] Test Lucene...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.01 sec (10.2 MB/sec)
 [exec]   check changes HTML...
 [exec]   download lucene-5.0.0-src.tgz...
 [exec] 26.9 MB in 0.04 sec (645.8 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-5.0.0.tgz...
 [exec] 61.5 MB in 0.20 sec (313.4 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download lucene-5.0.0.zip...
 [exec] 71.2 MB in 0.10 sec (700.0 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack lucene-5.0.0.tgz...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] test demo with 1.7...
 [exec]   got 5692 hits for query lucene
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-5.0.0.zip...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] test demo with 1.7...
 [exec]   got 5692 hits for query lucene
 [exec] check Lucene's javadoc JAR
 [exec]   unpack lucene-5.0.0-src.tgz...
 [exec] make sure no JARs/WARs in src dist...
 [exec] run ant validate
 [exec] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
 -Dtests.disableHdfs=true'...
 [exec] test demo with 1.7...
 [exec]   got 226 hits for query lucene
 [exec] generate javadocs w/ Java 7...
 [exec] 
 [exec] Crawl/parse...
 [exec] 
 [exec] Verify...
 [exec] 
 [exec] Test Solr...
 [exec]   test basics...
 [exec]   get KEYS
 [exec] 0.1 MB in 0.00 sec (54.8 MB/sec)
 [exec]   check changes HTML...
 [exec]   download solr-5.0.0-src.tgz...
 [exec] 32.5 MB in 0.08 sec (384.3 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-5.0.0.tgz...
 [exec] 117.0 MB in 1.05 sec (111.7 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   download solr-5.0.0.zip...
 [exec] 122.6 MB in 0.20 sec (610.4 MB/sec)
 [exec] verify md5/sha1 digests
 [exec]   unpack solr-5.0.0.tgz...
 [exec] verify JAR metadata/identity/no javax.* or java.* classes...
 [exec] unpack lucene-5.0.0.tgz...
 [exec] Traceback (most recent call last):
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 1334, in module
 [exec] main()
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 1278, in main
 [exec] smokeTest(baseURL, svnRevision, version, tmpDir, isSigned, 
testArgs)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 1322, in smokeTest
 [exec] unpackAndVerify('solr', tmpDir, artifact, svnRevision, version, 
testArgs, baseURL)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 627, in unpackAndVerify
 [exec] verifyUnpacked(project, artifact, unpackPath, svnRevision, 
version, testArgs, tmpDir, baseURL)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 752, in verifyUnpacked
 [exec] checkAllJARs(os.getcwd(), project, svnRevision, version, 
tmpDir, baseURL)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 276, in checkAllJARs
 [exec] noJavaPackageClasses('JAR file %s' % fullPath, fullPath)
 [exec]   File 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/dev-tools/scripts/smokeTestRelease.py,
 line 169, in noJavaPackageClasses
 [exec] raise RuntimeError('%s contains sheisty class %s' %  (desc, 
name2))
 [exec] RuntimeError: JAR file 
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/fakeReleaseTmp/unpack/solr-5.0.0/contrib/map-reduce/lib/Saxon-HE-9.5.1-2.jar
 contains sheisty 

Re: Tracking Your Issues In The JDK Bug System

2013-12-15 Thread Dawid Weiss
Thanks. I stopped using RSS feeds after Google Reader died... for some reason I
just don't have the patience for Feedly or other substitutes.
Anyway, good to know, thanks.

The major breakthrough would be to add the ability to comment on
issues (or just register in the Jira). :)

Dawid

On Sun, Dec 15, 2013 at 6:58 PM, Rory O'Donnell Oracle, Dublin Ireland
rory.odonn...@oracle.com wrote:
 Hi Dawid,

 I hope you will find Dalibor's blog on Tracking Your Issues In The JDK Bug
 System useful!
 Another step in the right direction.

 Rgds,Rory

 --
 Rgds,Rory O'Donnell
 Quality Engineering Manager
 Oracle EMEA , Dublin, Ireland


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5552) Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover

2013-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848693#comment-13848693
 ] 

Mark Miller edited comment on SOLR-5552 at 12/15/13 9:50 PM:
-

I think what we want to do here is look at having the core actually accept http 
requests before it registers and enters leader election - any issues we find 
doing this should be issues anyway, as we already have this case on a ZooKeeper 
expiration and recovery.


was (Author: markrmil...@gmail.com):
I think we want to do here is look at having the core actually accept http 
requests before it registers and enters leader election - any issues we find 
there should be issues anyway, as we already have this case on a ZooKeeper 
expiration and recovery.

 Leader recovery process can select the wrong leader if all replicas for a 
 shard are down and trying to recover
 --

 Key: SOLR-5552
 URL: https://issues.apache.org/jira/browse/SOLR-5552
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
  Labels: leader, recovery
 Attachments: SOLR-5552.patch


 One particular issue that leads to out-of-sync shards, related to SOLR-4260
 Here's what I know so far, which admittedly isn't much:
 As cloud85 (replica before it crashed) is initializing, it enters the wait 
 process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is 
 expected and a good thing.
 Some short amount of time in the future, cloud84 (leader before it crashed) 
 begins initializing and gets to a point where it adds itself as a possible 
 leader for the shard (by creating a znode under 
 /collections/cloud/leaders_elect/shard1/election), which leads to cloud85 
 being able to return from waitForReplicasToComeUp and try to determine who 
 should be the leader.
 cloud85 then tries to run the SyncStrategy, which can never work because in 
 this scenario the Jetty HTTP listener is not active yet on either node, so 
 all replication work that uses HTTP requests fails on both nodes ... PeerSync 
 treats these failures as indicators that the other replicas in the shard are 
 unavailable (or whatever) and assumes success. Here's the log message:
 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN 
 solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1 
 url=http://cloud85:8985/solr couldn't connect to 
 http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
 The Jetty HTTP listener doesn't start accepting connections until long after 
 this process has completed and already selected the wrong leader.
 From what I can see, we seem to have a leader recovery process that is based 
 partly on HTTP requests to the other nodes, but the HTTP listener on those 
 nodes isn't active yet. We need a leader recovery process that doesn't rely 
 on HTTP requests. Perhaps, leader recovery for a shard w/o a current leader 
 may need to work differently than leader election in a shard that has 
 replicas that can respond to HTTP requests? All of what I'm seeing makes 
 perfect sense for leader election when there are active replicas and the 
 current leader fails.
 All this aside, I'm not asserting that this is the only cause for the 
 out-of-sync issues reported in this ticket, but it definitely seems like it 
 could happen in a real cluster.
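The counting-failure-as-success behavior described in the ticket can be illustrated with a toy model. This is a hypothetical simplification for illustration only, not Solr's actual PeerSync code; all names and version numbers are made up.

```python
def peer_sync(my_version, peers, live_versions):
    """Toy model of the flaw described above: a peer we cannot connect to
    is counted as success, so while no HTTP listeners are up yet, sync
    always 'succeeds' and the node can proceed to become leader."""
    for peer in peers:
        if peer not in live_versions:
            # "couldn't connect to ..., counting as success" (the bug)
            continue
        if live_versions[peer] > my_version:
            return False  # a reachable peer is ahead; sync fails
    return True

# Startup scenario: cloud84 is ahead, but its Jetty listener is not up yet,
# so cloud85's sync still "succeeds" and it can be elected leader.
stale_leader_elected = peer_sync(my_version=10, peers=["cloud84"],
                                 live_versions={})
# Once cloud84 is reachable, the same check would correctly fail.
correct_result = peer_sync(my_version=10, peers=["cloud84"],
                           live_versions={"cloud84": 12})
```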



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5552) Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover

2013-12-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848693#comment-13848693
 ] 

Mark Miller commented on SOLR-5552:
---

I think what we want to do here is look at having the core actually accept http 
requests before it registers and enters leader election - any issues we find 
there should be issues anyway, as we already have this case on a ZooKeeper 
expiration and recovery.

 Leader recovery process can select the wrong leader if all replicas for a 
 shard are down and trying to recover
 --

 Key: SOLR-5552
 URL: https://issues.apache.org/jira/browse/SOLR-5552
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Timothy Potter
  Labels: leader, recovery
 Attachments: SOLR-5552.patch


 One particular issue that leads to out-of-sync shards, related to SOLR-4260
 Here's what I know so far, which admittedly isn't much:
 As cloud85 (replica before it crashed) is initializing, it enters the wait 
 process in ShardLeaderElectionContext#waitForReplicasToComeUp; this is 
 expected and a good thing.
 Some short amount of time in the future, cloud84 (leader before it crashed) 
 begins initializing and gets to a point where it adds itself as a possible 
 leader for the shard (by creating a znode under 
 /collections/cloud/leaders_elect/shard1/election), which leads to cloud85 
 being able to return from waitForReplicasToComeUp and try to determine who 
 should be the leader.
 cloud85 then tries to run the SyncStrategy, which can never work because in 
 this scenario the Jetty HTTP listener is not active yet on either node, so 
 all replication work that uses HTTP requests fails on both nodes ... PeerSync 
 treats these failures as indicators that the other replicas in the shard are 
 unavailable (or whatever) and assumes success. Here's the log message:
 2013-12-11 11:43:25,936 [coreLoadExecutor-3-thread-1] WARN 
 solr.update.PeerSync - PeerSync: core=cloud_shard1_replica1 
 url=http://cloud85:8985/solr couldn't connect to 
 http://cloud84:8984/solr/cloud_shard1_replica2/, counting as success
 The Jetty HTTP listener doesn't start accepting connections until long after 
 this process has completed and already selected the wrong leader.
 From what I can see, we seem to have a leader recovery process that is based 
 partly on HTTP requests to the other nodes, but the HTTP listener on those 
 nodes isn't active yet. We need a leader recovery process that doesn't rely 
 on HTTP requests. Perhaps, leader recovery for a shard w/o a current leader 
 may need to work differently than leader election in a shard that has 
 replicas that can respond to HTTP requests? All of what I'm seeing makes 
 perfect sense for leader election when there are active replicas and the 
 current leader fails.
 All this aside, I'm not asserting that this is the only cause for the 
 out-of-sync issues reported in this ticket, but it definitely seems like it 
 could happen in a real cluster.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2013-12-15 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848097#comment-13848097
 ] 

wolfgang hoschek edited comment on SOLR-1301 at 12/16/13 2:27 AM:
--

Might be best to write a program that generates the list of files and then 
explicitly provide that file list to the MR job, e.g. via the --input-list 
option. For example you could use the HDFS version of the Linux file system 
'find' command for that (HdfsFindTool doc and code here: 
https://github.com/cloudera/search/tree/master_1.1.0/search-mr#hdfsfindtool)




was (Author: whoschek):
Might be best to write a program that generates the list of files and then 
explicitly provide that file list to the MR job, e.g. via the --input-list 
option. For example you could use the HDFS version of the Linux file system 
'find' command for that (HdfsFindTool doc and code here: 
https://github.com/cloudera/search/tree/master_1.1.0/search-mr)
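A minimal sketch of the generate-a-file-list approach, using the local filesystem as a stand-in for HDFS (the function name and defaults are hypothetical; a real HDFS walk would use an HDFS client or HdfsFindTool instead of os.walk):

```python
import os

def build_input_list(root, out_path, suffix=".avro"):
    """Walk a directory tree and write one matching file path per line,
    suitable for handing to an MR job via an --input-list style option.
    Returns the number of paths written."""
    count = 0
    with open(out_path, "w") as out:
        for dirpath, _dirs, names in os.walk(root):
            for name in sorted(names):
                if name.endswith(suffix):
                    out.write(os.path.join(dirpath, name) + "\n")
                    count += 1
    return count
```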



 Add a Solr contrib that allows for building Solr indexes via Hadoop's 
 Map-Reduce.
 -

 Key: SOLR-1301
 URL: https://issues.apache.org/jira/browse/SOLR-1301
 Project: Solr
  Issue Type: New Feature
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
 SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
 commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
 hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
 log4j-1.2.15.jar


 This patch contains a contrib module that provides distributed indexing 
 (using Hadoop) to Solr's EmbeddedSolrServer. The idea behind this module is 
 twofold:
 * provide an API that is familiar to Hadoop developers, i.e. that of 
 OutputFormat
 * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
 SolrOutputFormat consumes data produced by reduce tasks directly, without 
 storing it in intermediate files. Furthermore, by using an 
 EmbeddedSolrServer, the indexing task is split into as many parts as there 
 are reducers, and the data to be indexed is not sent over the network.
 Design
 --
 Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
 which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
 instantiates an EmbeddedSolrServer, and it also instantiates an 
 implementation of SolrDocumentConverter, which is responsible for turning 
 Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
 batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
 task completes and the OutputFormat is closed, SolrRecordWriter calls 
 commit() and optimize() on the EmbeddedSolrServer.
 The API provides facilities to specify an arbitrary existing solr.home 
 directory, from which the conf/ and lib/ files will be taken.
 This process results in the creation of as many partial Solr home directories 
 as there were reduce tasks. The output shards are placed in the output 
 directory on the default filesystem (e.g. HDFS). Such part-N directories 
 can be used to run N shard servers. Additionally, users can specify the 
 number of reduce tasks, in particular 1 reduce task, in which case the output 
 will consist of a single shard.
 An example application is provided that processes large CSV files and uses 
 this API. It uses a custom CSV processing to avoid (de)serialization overhead.
 This patch relies on hadoop-core-0.19.1.jar - I attached the jar to this 
 issue, you should put it in contrib/hadoop/lib.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1301) Add a Solr contrib that allows for building Solr indexes via Hadoop's Map-Reduce.

2013-12-15 Thread wolfgang hoschek (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848775#comment-13848775
 ] 

wolfgang hoschek commented on SOLR-1301:


bq. it would be convenient if we could ignore the underscore (_) hidden files 
in hdfs as well as the . hidden files when reading input files from hdfs.

+1
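The convention being endorsed here (skip names starting with an underscore or a dot) is easy to sketch. The helper below is purely illustrative, not Solr's eventual implementation:

```python
def is_hidden(name):
    """Return True for Hadoop-style hidden files: names starting with
    an underscore (e.g. _SUCCESS, _logs) or a dot (e.g. .part.crc)."""
    return name.startswith(("_", "."))

# Filtering a listing keeps only real data files:
visible = [n for n in ["_SUCCESS", ".part.crc", "part-00000"]
           if not is_hidden(n)]
```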

 Add a Solr contrib that allows for building Solr indexes via Hadoop's 
 Map-Reduce.
 -

 Key: SOLR-1301
 URL: https://issues.apache.org/jira/browse/SOLR-1301
 Project: Solr
  Issue Type: New Feature
Reporter: Andrzej Bialecki 
Assignee: Mark Miller
 Fix For: 5.0, 4.7

 Attachments: README.txt, SOLR-1301-hadoop-0-20.patch, 
 SOLR-1301-hadoop-0-20.patch, SOLR-1301-maven-intellij.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, SOLR-1301.patch, 
 SOLR-1301.patch, SolrRecordWriter.java, commons-logging-1.0.4.jar, 
 commons-logging-api-1.0.4.jar, hadoop-0.19.1-core.jar, 
 hadoop-0.20.1-core.jar, hadoop-core-0.20.2-cdh3u3.jar, hadoop.patch, 
 log4j-1.2.15.jar


 This patch contains a contrib module that provides distributed indexing 
 (using Hadoop) to Solr's EmbeddedSolrServer. The idea behind this module is 
 twofold:
 * provide an API that is familiar to Hadoop developers, i.e. that of 
 OutputFormat
 * avoid unnecessary export and (de)serialization of data maintained on HDFS. 
 SolrOutputFormat consumes data produced by reduce tasks directly, without 
 storing it in intermediate files. Furthermore, by using an 
 EmbeddedSolrServer, the indexing task is split into as many parts as there 
 are reducers, and the data to be indexed is not sent over the network.
 Design
 --
 Key/value pairs produced by reduce tasks are passed to SolrOutputFormat, 
 which in turn uses SolrRecordWriter to write this data. SolrRecordWriter 
 instantiates an EmbeddedSolrServer, and it also instantiates an 
 implementation of SolrDocumentConverter, which is responsible for turning 
 Hadoop (key, value) into a SolrInputDocument. This data is then added to a 
 batch, which is periodically submitted to EmbeddedSolrServer. When a reduce 
 task completes and the OutputFormat is closed, SolrRecordWriter calls 
 commit() and optimize() on the EmbeddedSolrServer.
 The API provides facilities to specify an arbitrary existing solr.home 
 directory, from which the conf/ and lib/ files will be taken.
 This process results in the creation of as many partial Solr home directories 
 as there were reduce tasks. The output shards are placed in the output 
 directory on the default filesystem (e.g. HDFS). Such part-N directories 
 can be used to run N shard servers. Additionally, users can specify the 
 number of reduce tasks, in particular 1 reduce task, in which case the output 
 will consist of a single shard.
 An example application is provided that processes large CSV files and uses 
 this API. It uses a custom CSV processing to avoid (de)serialization overhead.
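The convert/batch/commit flow described above can be sketched with a small self-contained simulation. This is not Solr or Hadoop code: `MockEmbeddedServer`, `RecordWriter`, and the batch size are hypothetical stand-ins for EmbeddedSolrServer, SolrRecordWriter, and its internal batching, used only to illustrate the described lifecycle (convert each (key, value), submit batches periodically, commit and optimize on close).

```python
class MockEmbeddedServer:
    """Hypothetical stand-in for EmbeddedSolrServer; collects submitted docs."""
    def __init__(self):
        self.docs = []
        self.committed = False
        self.optimized = False

    def add(self, batch):
        self.docs.extend(batch)

    def commit(self):
        self.committed = True

    def optimize(self):
        self.optimized = True


class RecordWriter:
    """Mirrors the described SolrRecordWriter flow: a converter turns each
    Hadoop (key, value) into a document, documents accumulate in a batch,
    batches are periodically submitted, and close() flushes, commits, and
    optimizes."""
    def __init__(self, server, convert, batch_size=3):
        self.server = server
        self.convert = convert        # plays the role of SolrDocumentConverter
        self.batch_size = batch_size
        self.batch = []

    def write(self, key, value):
        self.batch.append(self.convert(key, value))
        if len(self.batch) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.batch:
            self.server.add(self.batch)
            self.batch = []

    def close(self):
        # Matches the description: on OutputFormat close, submit the
        # remaining batch, then commit() and optimize().
        self.flush()
        self.server.commit()
        self.server.optimize()


server = MockEmbeddedServer()
writer = RecordWriter(server, lambda k, v: {"id": k, "text": v})
for i in range(7):
    writer.write(i, "row-%d" % i)
writer.close()
print(len(server.docs), server.committed)  # 7 True
```

With a batch size of 3 and 7 records, two full batches are submitted during writing and the final partial batch on close, so no data sits unflushed when the reduce task finishes.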
 This patch relies on hadoop-core-0.19.1.jar; I attached the jar to this 
 issue, and you should put it in contrib/hadoop/lib.
 Note: the development of this patch was sponsored by an anonymous contributor 
 and approved for release under Apache License.



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.8.0-ea-b119) - Build # 3495 - Still Failing!

2013-12-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/3495/
Java: 64bit/jdk1.8.0-ea-b119 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([E763DD0E7D8A9CA8:52E5BC89C24B2E5C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.core.TestNonNRTOpen.assertNotNRT(TestNonNRTOpen.java:133)
at 
org.apache.solr.core.TestNonNRTOpen.testReaderIsNotNRT(TestNonNRTOpen.java:94)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-5027) Field Collapsing PostFilter

2013-12-15 Thread Deepak Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848876#comment-13848876
 ] 

Deepak Mishra edited comment on SOLR-5027 at 12/16/13 7:06 AM:
---

Joel, check JIRA SOLR-5554 again. I have attached the details to reproduce 
the error and the error log in FINE mode.


was (Author: deepakmishra117):
Joel, check the JIRA again. I have attached the details to reproduce the error 
and the error log in FINE mode.

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high-performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups: 17 seconds.
 CollapsingQParserPlugin: 300 milliseconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:* The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.
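The collapse and null-policy semantics described above can be illustrated with a short self-contained Python sketch. This is not Solr code: the `collapse` function and the sample documents are hypothetical, modeling only the documented behavior (keep the highest-scoring document per distinct collapse-field value, with the three null policies).

```python
def collapse(docs, field, score="score", null_policy="ignore"):
    """Keep the highest-scoring doc per distinct value of `field`.

    Null policies, per the issue description:
      ignore   - drop docs whose collapse field is null (default)
      expand   - keep each null-field doc as its own group
      collapse - merge all null-field docs into one shared group
    """
    groups, expanded = {}, []
    for doc in docs:
        key = doc.get(field)
        if key is None:
            if null_policy == "ignore":
                continue
            if null_policy == "expand":
                expanded.append(doc)
                continue
            # null_policy == "collapse": all nulls share the key None
        best = groups.get(key)
        if best is None or doc[score] > best[score]:
            groups[key] = doc
    return list(groups.values()) + expanded


docs = [
    {"id": 1, "g": "a", "score": 2.0},
    {"id": 2, "g": "a", "score": 5.0},
    {"id": 3, "g": None, "score": 1.0},
    {"id": 4, "g": None, "score": 4.0},
]
print([d["id"] for d in collapse(docs, "g")])                         # [2]
print([d["id"] for d in collapse(docs, "g", null_policy="expand")])   # [2, 3, 4]
print([d["id"] for d in collapse(docs, "g", null_policy="collapse")]) # [2, 4]
```

Group "a" collapses to doc 2 (the highest score) in every case; only the handling of docs 3 and 4, whose collapse field is null, changes with the policy.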






[jira] [Commented] (SOLR-5027) Field Collapsing PostFilter

2013-12-15 Thread Deepak Mishra (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13848876#comment-13848876
 ] 

Deepak Mishra commented on SOLR-5027:
-

Joel, check the JIRA again. I have attached the details to reproduce the error 
and the error log in FINE mode.

 Field Collapsing PostFilter
 ---

 Key: SOLR-5027
 URL: https://issues.apache.org/jira/browse/SOLR-5027
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 5.0
Reporter: Joel Bernstein
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.6, 5.0

 Attachments: SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, SOLR-5027.patch, 
 SOLR-5027.patch, SOLR-5027.patch


 This ticket introduces the *CollapsingQParserPlugin* 
 The *CollapsingQParserPlugin* is a PostFilter that performs field collapsing. 
 This is a high-performance alternative to standard Solr field collapsing 
 (with *ngroups*) when the number of distinct groups in the result set is high.
 For example in one performance test, a search with 10 million full results 
 and 1 million collapsed groups:
 Standard grouping with ngroups: 17 seconds.
 CollapsingQParserPlugin: 300 milliseconds.
 Sample syntax:
 Collapse based on the highest scoring document:
 {code}
 fq={!collapse field=field_name}
 {code}
 Collapse based on the min value of a numeric field:
 {code}
 fq={!collapse field=field_name min=field_name}
 {code}
 Collapse based on the max value of a numeric field:
 {code}
 fq={!collapse field=field_name max=field_name}
 {code}
 Collapse with a null policy:
 {code}
 fq={!collapse field=field_name nullPolicy=null_policy}
 {code}
 There are three null policies:
 ignore : removes docs with a null value in the collapse field (default).
 expand : treats each doc with a null value in the collapse field as a 
 separate group.
 collapse : collapses all docs with a null value into a single group using 
 either highest score, or min/max.
 The CollapsingQParserPlugin also fully supports the QueryElevationComponent.
 *Note:* The July 16 patch also includes an ExpandComponent that expands the 
 collapsed groups for the current search result page. This functionality will 
 be moved to its own ticket.


