[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943734#comment-14943734
 ] 

Yonik Seeley commented on SOLR-5944:


bq. I need to do the due diligence and write some tests to verify that things 
will work with log replays and peer sync.

Yeah, things are tricky enough in this area (distributed updates / recovery in 
general) that one can't really validate correctness through tests alone.  Need 
to brainstorm and think through all of the different failure scenarios, then 
try to use tests to uncover scenarios you hadn't considered.

bq. This prevPointer is just used (in the patch) for RTGs. In the 
InPlaceUpdateDistribTest, I've introduced commits (with 1/3 probability) in 
between the re-ordered updates, and the RTG seems to work fine.

Ah, that makes sense now (for standalone / leader at least).

Off the top of my head, here's a possible issue:
 - replica buffers a partial update in memory (because it was reordered)
 - a commit comes in, and we roll over to a new tlog
 - node goes down and then comes back up; the in-memory update is gone, and 
the old tlog won't be replayed.  It will look as though we applied that 
update, even though we never did.

bq.  Is it okay to return success if it was written to (at least) the in-memory 
buffer (which holds these reordered updates)?

I don't think so... another scenario:
 - a client does an in-place update
 - the replicas receive the update reordered, and buffer in memory.
 - the client gets the response
 - the leader goes down (and stays down... hard drive crash)
 - one of the other replicas takes over as leader, but we've now lost data we 
confirmed as written (the in-place update was only ever applied on the leader), 
even though we only lost 1 server.

And then there is the even simpler scenario I think you were alluding to: if an 
update is ack'd, then an RTG on any active replica should see that update (or a 
later one). 
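
As a minimal sketch of that invariant (all names below are placeholders: a 
SolrCloud collection "test", an int field "price_i", and the in-place DV 
update approximated with an atomic "set"): once add() returns, i.e. the update 
is ack'd, /get on every active replica must return the new value, commit or no 
commit.

{code}
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrInputDocument;

public class RtgAfterAckSketch {
  public static void main(String[] args) throws Exception {
    // Atomic "set" of the docValues field, sent to the leader.
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    doc.addField("price_i", Collections.singletonMap("set", 42));

    try (HttpSolrClient leader = new HttpSolrClient("http://host1:8983/solr/test")) {
      leader.add(doc); // returning normally == the update was ack'd
    }

    // No commit issued: RTG must still see the ack'd update (or a later one).
    for (String url : new String[] {"http://host1:8983/solr/test",
                                    "http://host2:8983/solr/test"}) {
      try (HttpSolrClient replica = new HttpSolrClient(url)) {
        SolrDocument d = replica.getById("1"); // hits the /get handler
        assert d != null && ((Integer) d.getFieldValue("price_i")) >= 42;
      }
    }
  }
}
{code}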

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943754#comment-14943754
 ] 

Yonik Seeley commented on SOLR-5944:


bq. > Another approach would be to get rid of update reordering... i.e. ensure 
that updates are not reordered when sending from leader to replicas.
bq. Sounds interesting. How do you suggest this can be achieved?

Don't reorder updates between leader and replicas:
  - create a new ConcurrentUpdateSolrClient that uses a single channel and can 
return individual responses... perhaps this fits into HTTP/2 ?
  - have only a single SolrClient on the leader talk to each replica
  - order the updates in \_version\_ order when sending (a sketch follows this 
list)
  -- prob multiple ways to achieve this... reserve a slot when getting the 
version, or change versions so that they are contiguous so we know if we are 
missing one.
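
For the "contiguous versions" flavor, here's a minimal sketch (purely 
illustrative; today's \_version\_ values are not contiguous, so this assumes 
the scheme is changed as above).  The receiver buffers whatever arrives out of 
order and applies strictly in sequence, so a gap becomes detectable instead of 
a silent reorder:

{code}
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Consumer;

/** Applies updates strictly in version order, buffering any that arrive early. */
class InOrderApplier<U> {
  private final TreeMap<Long, U> pending = new TreeMap<>();
  private final Consumer<U> apply;
  private long nextExpected;

  InOrderApplier(long firstVersion, Consumer<U> apply) {
    this.nextExpected = firstVersion;
    this.apply = apply;
  }

  /** Called as each update arrives from the leader, possibly out of order. */
  synchronized void receive(long version, U update) {
    pending.put(version, update);
    // Drain every contiguous update starting at nextExpected; anything left
    // over means a version is still in flight (or lost => trigger recovery).
    Map.Entry<Long, U> head;
    while ((head = pending.firstEntry()) != null && head.getKey() == nextExpected) {
      apply.accept(pending.pollFirstEntry().getValue());
      nextExpected++;
    }
  }
}
{code}

E.g. receive(2, b) followed by receive(1, a) applies a then b; if version 1 
never arrives, the buffer itself tells us an update is missing, and recovery 
can kick in instead of a wrong ack.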

The only additional reason to use multiple threads when sending is to increase 
indexing performance.  We can also implement multi-threading for increased 
parallelism on the server side.  This should also simplify clients (no more 
batching, multiple threads, etc), as well as make our general recovery system 
more robust.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7995) Add a LIST command to ConfigSets API

2015-10-05 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943840#comment-14943840
 ] 

Gregory Chanan commented on SOLR-7995:
--

I will commit this soon if there are no objections.

> Add a LIST command to ConfigSets API
> 
>
> Key: SOLR-7995
> URL: https://issues.apache.org/jira/browse/SOLR-7995
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7995.patch, SOLR-7995.patch
>
>
> It would be useful to have a LIST command in the ConfigSets API so that 
> clients do not have to access zookeeper in order to get the ConfigSets to use 
> for the other operations (create, delete).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 14136 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14136/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestAuthenticationFramework

Error Message:
90 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestAuthenticationFramework: 1) Thread[id=1120, 
name=qtp1467139841-1120-selector-ServerConnectorManager@f2e682c/1, 
state=RUNNABLE, group=TGRP-TestAuthenticationFramework] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=1238, 
name=zkCallback-153-thread-3, state=TIMED_WAITING, 
group=TGRP-TestAuthenticationFramework] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=1125, 
name=qtp1467139841-1125-selector-ServerConnectorManager@f2e682c/2, 
state=RUNNABLE, group=TGRP-TestAuthenticationFramework] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)4) Thread[id=1131, 
name=qtp1467139841-1131-selector-ServerConnectorManager@f2e682c/3, 
state=RUNNABLE, group=TGRP-TestAuthenticationFramework] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.select(SelectorManager.java:600)
 at 
org.eclipse.jetty.io.SelectorManager$ManagedSelector.run(SelectorManager.java:549)
 at 
org.eclipse.jetty.util.thread.NonBlockingThread.run(NonBlockingThread.java:52)  
   at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) 
at java.lang.Thread.run(Thread.java:745)5) Thread[id=1235, 
name=zkCallback-152-thread-2, state=TIMED_WAITING, 
group=TGRP-TestAuthenticationFramework] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14421 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14421/
Java: 64bit/jdk1.9.0-ea-b82 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 10614 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J2-20151005_195751_974.sysout
   [junit4] >>> JVM J2: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fa46a749cce, pid=9684, 
tid=0x2622
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build 
1.9.0-ea-b82)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed 
mode, tiered, concurrent mark sweep gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x80ccce]  
PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J2/hs_err_pid9684.log
   [junit4] 
   [junit4] [error occurred during error reporting , id 0xb]
   [junit4] 
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J2: EOF 

[...truncated 56 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20151005_195751_974.sysout
   [junit4] >>> JVM J0: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7fc81505ecce, pid=9685, 
tid=0x261c
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build 
1.9.0-ea-b82)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed 
mode, tiered, concurrent mark sweep gc, linux-amd64)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0x80ccce]  
PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
   [junit4] #
   [junit4] # No core dump will be written. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/hs_err_pid9685.log
   [junit4] 
   [junit4] [error occurred during error reporting , id 0xb]
   [junit4] 
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J0: EOF 

[...truncated 341 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk1.9.0-ea-b82/bin/java -XX:-UseCompressedOops 
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=88400144FF3E4B9E -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.0.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=6.0.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0
 -Djunit4.childvm.id=0 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=UTF-8 -classpath 

[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-10-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944000#comment-14944000
 ] 

Anshum Gupta commented on SOLR-6736:


I plan on resuming work on this one later today. If you have something in place 
already for GET, go ahead with it, else I'll take it up and keep it in sync 
with all of the other Config set work that has recently been done.

Let's move all of the GET stuff from here and only use SOLR-8054 for it. 

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, newzkconf.zip, test_private.pem, test_pub.der, 
> zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7995) Add a LIST command to ConfigSets API

2015-10-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944012#comment-14944012
 ] 

ASF subversion and git services commented on SOLR-7995:
---

Commit 1706919 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1706919 ]

SOLR-7995: Add a LIST command to ConfigSets API

> Add a LIST command to ConfigSets API
> 
>
> Key: SOLR-7995
> URL: https://issues.apache.org/jira/browse/SOLR-7995
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7995.patch, SOLR-7995.patch
>
>
> It would be useful to have a LIST command in the ConfigSets API so that 
> clients do not have to access zookeeper in order to get the ConfigSets to use 
> for the other operations (create, delete).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8054) Add a GET command to ConfigSets API

2015-10-05 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943875#comment-14943875
 ] 

Gregory Chanan commented on SOLR-8054:
--

Thanks for the responses [~markrmil...@gmail.com] and [~ysee...@gmail.com].

bq. Thinking about a view config API like this, I think I'd want some way to 
get an individual file or all the files (as a zip, in one stream, whatever ends 
up making sense) depending on param.

I think both individual-file and all-files APIs make sense.  For the all-files 
API, I'm not sure whether a zip or everything in one stream makes more sense.  
Is there an existing API where we provide multiple files in one stream?  I'd 
rather follow that logic than make something up myself.
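
For concreteness, something like this is what I have in mind -- purely 
hypothetical: neither URL below exists yet, and the file param / zip response 
are stand-ins for whatever convention we pick:

{code}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ConfigSetGetSketch {
  public static void main(String[] args) throws Exception {
    // Individual file:
    fetch("http://localhost:8983/solr/admin/configs/myconf?file=solrconfig.xml",
          "solrconfig.xml");
    // All files in one stream (here: a zip):
    fetch("http://localhost:8983/solr/admin/configs/myconf?wt=zip", "myconf.zip");
  }

  static void fetch(String url, String dest) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
    try (InputStream in = conn.getInputStream()) {
      Files.copy(in, Paths.get(dest), StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}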

{quote}
Should uploading (cloning) configsets be taken into consideration here?
{quote}

Yes, but I've tried to stay away from exposing file-modification based APIs 
directly in Solr due to the security issues discussed in SOLR-5287 / SOLR-5539. 
 One approach to thinking about this is to break the operations into tasks a 
cluster-level administrator would do vs. tasks an individual user would do, and 
to see whether we can do the latter without file-based APIs.

bq. Downloading the config for the purposes of making a backup, with the 
ability to restore it later after trying some different things

This falls into the tasks an individual would do.  An individual could do this 
today by cloning their config via the ConfigSet API and "restoring" by 
deleting/copying the old one.  That's not super easy, but you could provide a 
nicer interface by, say, keeping track of the changes with version numbers and 
letting users restore from a version number.  So I don't think this strictly 
needs a file-based API.

bq. Essentially cloning a config in a different cluster (testing, 
troubleshooting, etc)

That's an interesting use case because it's sort of between what a user would 
do and what an administrator would do.  One possibility is to have some 
higher-level cross-cluster replication functionality that lets you replicate 
configs to another cluster.  You could imagine this happening on all configs or 
some subset.  Alternatively, if the user is an administrator of the backup 
cluster (which seems likely here), there is nothing stopping you from using the 
existing ZkCLI commands.  That's just not feasible if ZK security is on and the 
user doing the operations doesn't have permissions, but that doesn't seem that 
likely in this case.

{quote}Both seem useful (If I'm correctly understanding what you mean by 
File-based). APIs may manipulate the state, but dealing with the persisted 
state as a whole also seems useful. For instance, cloning a config via config 
APIs that deal with individual settings seems difficult.{quote}

Yes, that's a good point.  Hopefully the above makes sense -- provide Solr APIs 
for end users and have administrators use the ZkCLI.


> Add a GET command to ConfigSets API
> ---
>
> Key: SOLR-8054
> URL: https://issues.apache.org/jira/browse/SOLR-8054
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a command that allows you to view a ConfigSet via 
> the API rather than going to zookeeper directly.  Mainly for security 
> reasons, e.g.
> - solr may have different security requirements than the ZNodes e.g. only 
> solr can view znodes but any authenticated user can call ConfigSet API
> - it's nicer than pointing to the web UI and using the zookeeper viewer, 
> because of the same security concerns as above and that you don't have to 
> know the internal zookeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8054) Add a GET command to ConfigSets API

2015-10-05 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14944033#comment-14944033
 ] 

Gregory Chanan commented on SOLR-8054:
--

Ok, from discussing with [~anshumg] in SOLR-6736 it sounds like we are going to 
use that jira for modification and this JIRA for GET.

I do not currently have a patch for GET.  As I wrote above, I think there are 
two separate steps here (that could be separate jiras, but should have a 
sensible API between them):
1) Config/Schema API equivalents
2) File-based APIs

My plan is to start work on 1) first, someone can take up 2) or I can work on 
it after 1).  But let's figure out a sensible API before we commit anything.

> Add a GET command to ConfigSets API
> ---
>
> Key: SOLR-8054
> URL: https://issues.apache.org/jira/browse/SOLR-8054
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a command that allows you to view a ConfigSet via 
> the API rather than going to zookeeper directly.  Mainly for security 
> reasons, e.g.
> - solr may have different security requirements than the ZNodes e.g. only 
> solr can view znodes but any authenticated user can call ConfigSet API
> - it's nicer than pointing to the web UI and using the zookeeper viewer, 
> because of the same security concerns as above and that you don't have to 
> know the internal zookeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-05 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Attachment: SOLR-8117.patch

Ok, so something like this should be better:

I have modified the function _Rule.canMatch()_ to skip the additional 
verification for the '<' and '=' operators.

I've added another test for your cores>1 example.
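
In spirit, the change is this (a simplified sketch with hypothetical names, 
not the actual Rule code):

{code}
enum Operand { EQUAL, NOT_EQUAL, GREATER_THAN, LESS_THAN }

class RuleSketch {
  final Operand operand;
  final int expected;

  RuleSketch(Operand operand, int expected) {
    this.operand = operand;
    this.expected = expected;
  }

  /** @param verificationPhase true in the final pass, after the planned
   *  assignment has already incremented the node's core count. */
  boolean canMatch(int actualCores, boolean verificationPhase) {
    if (verificationPhase
        && (operand == Operand.LESS_THAN || operand == Operand.EQUAL)) {
      // cores:<1 matched a node with 0 cores; the planned core bumps the
      // count to 1, so re-checking '<' or '=' here would spuriously fail.
      return true;
    }
    switch (operand) {
      case EQUAL:        return actualCores == expected;
      case NOT_EQUAL:    return actualCores != expected;
      case GREATER_THAN: return actualCores > expected;
      case LESS_THAN:    return actualCores < expected;
      default:           return false;
    }
  }
}
{code}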

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch, SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the count in the 
> condition minus 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it seems 
> to fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8054) Add a GET command to ConfigSets API

2015-10-05 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943950#comment-14943950
 ] 

Anshum Gupta commented on SOLR-8054:


SOLR-6736 was heading towards just that. Let's combine the effort there.
It had been on my list for a while and I never got a chance to wrap it up, but 
I plan to spend some time on this now.

> Add a GET command to ConfigSets API
> ---
>
> Key: SOLR-8054
> URL: https://issues.apache.org/jira/browse/SOLR-8054
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a command that allows you to view a ConfigSet via 
> the API rather than going to zookeeper directly.  Mainly for security 
> reasons, e.g.
> - solr may have different security requirements than the ZNodes e.g. only 
> solr can view znodes but any authenticated user can call ConfigSet API
> - it's nicer than pointing to the web UI and using the zookeeper viewer, 
> because of the same security concerns as above and that you don't have to 
> know the internal zookeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-10-05 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943978#comment-14943978
 ] 

Gregory Chanan commented on SOLR-6736:
--

bq. Also, let's get this patch inline with SOLR-7789 so we share the end-point 
that has already been introduced to handle configs. I plan to review this patch 
in detail later today.

What's the status of this?

I'm also unclear what the exact scope of this JIRA is.  It sounds like we are 
just limiting this jira to allowing you to upload entire config sets, is that 
correct?  I'm adding a LIST command in SOLR-7995 and a GET command at 
SOLR-8054.  It sounds like neither of those efforts would conflict with this, 
except that the GET command should use the same parameter names for e.g. 
individual files / zipped vs streamed.  I'd rather not duplicate work, so let 
me know.

> A collections-like request handler to manage solr configurations on zookeeper
> -
>
> Key: SOLR-6736
> URL: https://issues.apache.org/jira/browse/SOLR-6736
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Varun Rajput
>Assignee: Anshum Gupta
> Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
> SOLR-6736.patch, newzkconf.zip, test_private.pem, test_pub.der, 
> zkconfighandler.zip, zkconfighandler.zip
>
>
> Managing Solr configuration files on zookeeper becomes cumbersome while using 
> solr in cloud mode, especially while trying out changes in the 
> configurations. 
> It will be great if there is a request handler that can provide an API to 
> manage the configurations similar to the collections handler that would allow 
> actions like uploading new configurations, linking them to a collection, 
> deleting configurations, etc.
> example : 
> {code}
> #use the following command to upload a new configset called mynewconf. This 
> will fail if there is already a conf called 'mynewconf'. The file could be a 
> jar, zip or a tar file which contains all the files for this conf.
> curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
> @testconf.zip 
> http://localhost:8983/solr/admin/configs/mynewconf?sig=
> {code}
> A GET to http://localhost:8983/solr/admin/configs will give a list of configs 
> available
> A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
> list of files in mynewconf



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7995) Add a LIST command to ConfigSets API

2015-10-05 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-7995.
--
       Resolution: Fixed
    Fix Version/s: 6.0
                   5.4

> Add a LIST command to ConfigSets API
> 
>
> Key: SOLR-7995
> URL: https://issues.apache.org/jira/browse/SOLR-7995
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Fix For: 5.4, 6.0
>
> Attachments: SOLR-7995.patch, SOLR-7995.patch
>
>
> It would be useful to have a LIST command in the ConfigSets API so that 
> clients do not have to access zookeeper in order to get the ConfigSets to use 
> for the other operations (create, delete).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2777 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2777/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  'X val changed' for path 'x' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{"wt":"json"},   
"context":{ "webapp":"/a_", "path":"/test1", "httpMethod":"GET"},   
"class":"org.apache.solr.core.BlobStoreTestRequestHandler",   "x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'X val changed' for 
path 'x' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{"wt":"json"},
  "context":{
"webapp":"/a_",
"path":"/test1",
"httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([204D5D95BF66CB57:F80070C248BB6EF7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:441)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7833) Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books and news.

2015-10-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14943000#comment-14943000
 ] 

Rafał Kuć commented on SOLR-7833:
-

Thanks a lot for the commit and the changes :) As for the books, I see all the 
others are there, so let's leave it as it is. 

> Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books 
> and news.
> --
>
> Key: SOLR-7833
> URL: https://issues.apache.org/jira/browse/SOLR-7833
> Project: Solr
>  Issue Type: Task
>Reporter: Zico Fernandes
>Assignee: Steve Rowe
> Attachments: SOLR-7833.patch, SOLR-7833_version_2.patch, Solr 
> Cookbook_Third Edition.jpg, book_solr_cookbook_3ed.jpg
>
>
> Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
> by Packt Publishing. This edition will specifically appeal to developers who 
> wish to quickly get to grips with the changes and new features of Apache Solr 
> 5. 
> Solr Cookbook - Third Edition has over 100 easy to follow recipes to solve 
> real-time problems related to Apache Solr 4.x and 5.0 effectively. Starting 
> with vital information on setting up Solr, the developer will quickly 
> progress to analyzing their text data through querying and performance 
> improvement. Finally, they will explore real-life situations, where Solr can 
> be used to simplify daily collection handling.
> With numerous practical chapters centered on important Solr techniques and 
> methods Solr Cookbook - Third Edition will guide intermediate Solr Developers 
> who are willing to learn and implement Pro-level practices, techniques, and 
> solutions.
> Click here to read more about the Solr Cookbook - Third Edition: 
> http://bit.ly/1Q2AGS8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1561: POMs out of sync

2015-10-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1561/

No tests ran.

Build Log:
[...truncated 24627 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:791: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:290: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/build.xml:409:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/lucene/common-build.xml:574:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error retrieving previous build number for artifact 
'org.apache.lucene:lucene-solr-grandparent:pom': repository metadata for: 
'snapshot org.apache.lucene:lucene-solr-grandparent:6.0.0-SNAPSHOT' could not 
be retrieved from repository: apache.snapshots.https due to an error: Error 
transferring file: Connection timed out

Total time: 10 minutes 30 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-10-05 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14942956#comment-14942956
 ] 

Modassar Ather commented on LUCENE-5205:


Hi [~talli...@mitre.org]

The patch for this feature fails to compile against Lucene/Solr 5.3.1.
I am trying to resolve it locally; the public void fillBytesRef() method has 
been removed from TermToBytesRefAttribute.java.
Kindly look into it.
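
For reference, the local fix amounts to this migration (a sketch of a typical 
consumer of the attribute; the newer API fills and returns the term in a 
single getBytesRef() call):

{code}
import java.io.IOException;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.TermToBytesRefAttribute;
import org.apache.lucene.util.BytesRef;

class FillBytesRefMigration {
  void consume(TokenStream ts) throws IOException {
    TermToBytesRefAttribute termAtt = ts.getAttribute(TermToBytesRefAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
      // Old API:
      //   termAtt.fillBytesRef();
      //   BytesRef term = termAtt.getBytesRef();
      // New API: getBytesRef() fills and returns the term directly.
      BytesRef term = BytesRef.deepCopyOf(termAtt.getBytesRef());
      // ... build the query term from `term` ...
    }
    ts.end();
    ts.close();
  }
}
{code}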

Thanks,
Modassar

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: (jakarta~1 (OSA) vs jakarta~>1(Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6825) Add multidimensional byte[] indexing support to Lucene

2015-10-05 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6825:
--

 Summary: Add multidimensional byte[] indexing support to Lucene
 Key: LUCENE-6825
 URL: https://issues.apache.org/jira/browse/LUCENE-6825
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk


I think we should graduate the low-level block KD-tree data structure
from sandbox into Lucene's core?

This can be used for very fast 1D range filtering for numerics,
removing the 8 byte (long/double) limit we have today, so e.g. we
could efficiently support BigInteger, BigDecimal, IPv6 addresses, etc.

It can also be used for > 1D use cases, like 2D (lat/lon) and 3D
(x/y/z with geo3d) geo shape intersection searches.

The idea here is to add a new part of the Codec API (DimensionalFormat
maybe?) that can do low-level N-dim point indexing and at runtime
exposes only an "intersect" method.

It should give sizable performance gains (smaller index, faster
searching) over what we have today, and even over what auto-prefix
with efficient numeric terms would do.

There are many steps here ... and I think adding this is analogous to
how we added FSTs, where we first added low level data structure
support and then gradually cutover the places that benefit from an
FST.

So for the first step, I'd like to just add the low-level block
KD-tree impl into oal.util.bkd, but make a couple improvements over
what we have now in sandbox:

  * Use byte[] as the value not int (@rjernst's good idea!)

  * Generalize it to arbitrary dimensions vs. specialized/forked 1D,
2D, 3D cases we have now

This is already hard enough :)  After that we can build the
DimensionalFormat on top, then cutover existing specialized block
KD-trees.  We also need to fix OfflineSorter to use Directory API so
we don't fill up /tmp when building a block KD-tree.

A block KD-tree is at heart an inverted data structure, like postings,
but is also similar to auto-prefix in that it "picks" proper
N-dimensional "terms" (leaf blocks) to index based on how the specific
data being indexed is distributed.  I think this is a big part of why
it's so fast, i.e. in contrast to today where we statically slice up
the space into the same terms regardless of the data (trie shifting,
morton codes, geohash, hilbert curves, etc.)

I'm marking this as trunk only for now... as we iterate we can see if
it could maybe go back to 5.x...
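
To make the byte[] idea concrete, a tiny sketch (not the proposed API, just 
the encoding trick): pack each dimension as fixed-width big-endian bytes with 
the sign bit flipped, and plain unsigned byte comparison matches signed 
numeric order, so the tree only ever compares raw bytes regardless of the 
original type:

{code}
class PackedPointSketch {
  /** Packs each long dimension big-endian with the sign bit flipped, so
   *  unsigned lexicographic byte order == signed numeric order per dim. */
  static byte[] pack(long... dims) {
    final int bytesPerDim = Long.BYTES;
    byte[] packed = new byte[dims.length * bytesPerDim];
    for (int d = 0; d < dims.length; d++) {
      long v = dims[d] ^ 0x8000000000000000L; // flip sign bit
      for (int b = 0; b < bytesPerDim; b++) {
        packed[d * bytesPerDim + b] = (byte) (v >>> (56 - 8 * b));
      }
    }
    return packed;
  }

  static int compareDim(byte[] x, byte[] y, int dim, int bytesPerDim) {
    for (int i = dim * bytesPerDim; i < (dim + 1) * bytesPerDim; i++) {
      int cmp = (x[i] & 0xFF) - (y[i] & 0xFF);
      if (cmp != 0) return cmp;
    }
    return 0;
  }

  public static void main(String[] args) {
    byte[] a = pack(-5L, 3L);
    byte[] b = pack(4L, 3L);
    System.out.println(compareDim(a, b, 0, Long.BYTES) < 0); // true: -5 < 4
  }
}
{code}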




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6825) Add multidimensional byte[] indexing support to Lucene

2015-10-05 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6825:
---
Attachment: LUCENE-6825.patch

Initial patch.

Many nocommits still, but the randomized multi-dimensional int[]
indexing test case seems to pass at least once.

There are many classes under oal.util.bkd, but all are package private
except for the BKDWriter/Reader classes.

One limitation here is that every dimension must be the same number of
bytes.  This is fine for all the KD-tree use cases today, but e.g. it
means you can't have one dim that's a long, and another dim that's an
int.  I think this is fine for starters (progress not perfection!),
and it is a big simplification of the code since it means all encoding
while building the tree is fixed byte width per document.

This is just the low-level data structure!  It's like FST.java.  Later 
(separate issues, separate commits) we need DimensionalFormat, queries that use 
the read-time API to execute, etc.

I'll open a separate issue to cutover OfflineSorter to Directory API;
I think it's a blocker for this one.


> Add multidimensional byte[] indexing support to Lucene
> --
>
> Key: LUCENE-6825
> URL: https://issues.apache.org/jira/browse/LUCENE-6825
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk
>
> Attachments: LUCENE-6825.patch
>
>
> I think we should graduate the low-level block KD-tree data structure
> from sandbox into Lucene's core?
> This can be used for very fast 1D range filtering for numerics,
> removing the 8 byte (long/double) limit we have today, so e.g. we
> could efficiently support BigInteger, BigDecimal, IPv6 addresses, etc.
> It can also be used for > 1D use cases, like 2D (lat/lon) and 3D
> (x/y/z with geo3d) geo shape intersection searches.
> The idea here is to add a new part of the Codec API (DimensionalFormat
> maybe?) that can do low-level N-dim point indexing and at runtime
> exposes only an "intersect" method.
> It should give sizable performance gains (smaller index, faster
> searching) over what we have today, and even over what auto-prefix
> with efficient numeric terms would do.
> There are many steps here ... and I think adding this is analogous to
> how we added FSTs, where we first added low level data structure
> support and then gradually cutover the places that benefit from an
> FST.
> So for the first step, I'd like to just add the low-level block
> KD-tree impl into oal.util.bkd, but make a couple improvements over
> what we have now in sandbox:
>   * Use byte[] as the value not int (@rjernst's good idea!)
>   * Generalize it to arbitrary dimensions vs. specialized/forked 1D,
> 2D, 3D cases we have now
> This is already hard enough :)  After that we can build the
> DimensionalFormat on top, then cutover existing specialized block
> KD-trees.  We also need to fix OfflineSorter to use Directory API so
> we don't fill up /tmp when building a block KD-tree.
> A block KD-tree is at heart an inverted data structure, like postings,
> but is also similar to auto-prefix in that it "picks" proper
> N-dimensional "terms" (leaf blocks) to index based on how the specific
> data being indexed is distributed.  I think this is a big part of why
> it's so fast, i.e. in contrast to today where we statically slice up
> the space into the same terms regardless of the data (trie shifting,
> morton codes, geohash, hilbert curves, etc.)
> I'm marking this as trunk only for now... as we iterate we can see if
> it could maybe go back to 5.x...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 100 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/100/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.lucene.index.TestExitableDirectoryReader.testExitableFilterIndexReader

Error Message:
The request took too long to iterate over terms. Timeout: timeoutAt: 
231829099369964 (System.nanoTime(): 231838700362338), 
TermsEnum=org.apache.lucene.codecs.memory.DirectPostingsFormat$DirectField$DirectTermsEnum@2cef5348

Stack Trace:
org.apache.lucene.index.ExitableDirectoryReader$ExitingReaderException: The 
request took too long to iterate over terms. Timeout: timeoutAt: 
231829099369964 (System.nanoTime(): 231838700362338), 
TermsEnum=org.apache.lucene.codecs.memory.DirectPostingsFormat$DirectField$DirectTermsEnum@2cef5348
at 
__randomizedtesting.SeedInfo.seed([E05BA3F17123657F:583E0EB01AF9EC86]:0)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.checkAndThrow(ExitableDirectoryReader.java:173)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTermsEnum.(ExitableDirectoryReader.java:163)
at 
org.apache.lucene.index.ExitableDirectoryReader$ExitableTerms.iterator(ExitableDirectoryReader.java:147)
at 
org.apache.lucene.index.FilterLeafReader$FilterTerms.iterator(FilterLeafReader.java:113)
at 
org.apache.lucene.index.TestExitableDirectoryReader$TestReader$TestTerms.iterator(TestExitableDirectoryReader.java:58)
at org.apache.lucene.index.Terms.intersect(Terms.java:75)
at 
org.apache.lucene.util.automaton.CompiledAutomaton.getTermsEnum(CompiledAutomaton.java:336)
at 
org.apache.lucene.search.AutomatonQuery.getTermsEnum(AutomatonQuery.java:107)
at 
org.apache.lucene.search.MultiTermQuery.getTermsEnum(MultiTermQuery.java:304)
at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.rewrite(MultiTermQueryConstantScoreWrapper.java:145)
at 
org.apache.lucene.search.MultiTermQueryConstantScoreWrapper$1.bulkScorer(MultiTermQueryConstantScoreWrapper.java:198)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:646)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:453)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:572)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:430)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:441)
at 
org.apache.lucene.index.TestExitableDirectoryReader.testExitableFilterIndexReader(TestExitableDirectoryReader.java:127)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Cross-node joins

2015-10-05 Thread Scott Blum
Updated SOLR-7090 with a not fully-working patch.

On Wed, Sep 30, 2015 at 5:45 PM, Scott Blum  wrote:

> Alright, I'll put something on SOLR-7090 in a bit.
>
> Meanwhile, I'm trying to get a basic test running, and running into a
> stupid problem...  I am trying to write a cloud and non-cloud code path for
> the facet query.  What I want to do is create a solrj HttpSolrClient either
> way, but I can't figure out how to create one to do a local query on a
> known core.  So I'm doing some convoluted stuff where I use a
> LocalSolrQueryRequest and SolrQueryResponse, and it seems pretty wonky.
>
> Any tips?
>
> On Wed, Sep 30, 2015 at 4:36 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> I think LUCENE-3759 is not a good place, since this is a Solr-specific
>> implementation.
>> Please feel free to use SOLR-7090. From my understanding of your
>> implementation, it wasn't clear to me whether the intention of the patch
>> is the same as what SOLR-7090 is trying to solve at a high level, as per
>> the description there. But if we ever feel the need, we can always split
>> the issue/impl later, or even resolve two different JIRA issues as
>> duplicates later. So please feel free to choose as you see fit.
>>
>> On Thu, Oct 1, 2015 at 2:02 AM, Scott Blum  wrote:
>>
>>> Hi Ishan,
>>>
>>> I definitely should write a test.  It's supposed to be a drop-in
>>> replacement for the existing Join query.  I wasn't sure if I should hijack
>>> SOLR-7090, or maybe LUCENE-3759, or just open a new JIRA.  Please advise!
>>>
>>> Or I'm happy to continue discussing high level on this thread.
>>>
>>> Best,
>>> Scott
>>>
>>> On Wed, Sep 30, 2015 at 4:28 PM, Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>>
 Hi Scott,
 I've replied to your comment on SOLR-7090.

 I just had a look at your fulljoin implementation, but I wasn't
 sure I followed it properly. Maybe a unit test would help?
 Also, do you plan to open a JIRA (or maybe use the SOLR-7090 JIRA itself,
 so as to keep all related efforts together in one issue) to discuss your
 full-join approach?
 Regards,
 Ishan

 On Thu, Oct 1, 2015 at 1:19 AM, Scott Blum 
 wrote:

> So I went down the route of creating a new QParser named "fulljoin",
> and I have it essentially working.
>
> https://github.com/fullstorydev/lucene-solr/commits/scottb/fulljoin
>
> Basically, I copied JoinQParserPlugin, ripped out the local-index
> "from" processing, and replaced it with a SolrCloud facet query.  I.e., you
> facet over the 'from' field and turn the facet result into the set of
> terms you care about.
>
> The part I need some help on is that I'm fairly sure the caching
> (equality) is wrong.  If the collection gets updated in such a way that 
> the
> results of the facet query would change, I don't think I'm properly
> invalidating the cache / failing an equality check.
>
> I assume this is what JoinQuery.fromCoreOpenTime does: handle equality
> correctly so that if the underlying core is updated, the cache gets
> invalidated?  I need to do something similar, such that if the facet query
> would return a different term list, the equality computation changes.
> Any advice?
>


>>>
>>
>
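On the caching/equality question above, the usual trick (a minimal sketch; the class and field names are made up, and how the version token is obtained is an assumption, not the actual patch) is to capture a token that changes whenever the 'from' side could produce a different term list, and fold it into equals()/hashCode() — the same role JoinQuery.fromCoreOpenTime plays for the from core:

{code:java}
import org.apache.lucene.search.Query;

// Illustrative only: fold a "version" of the remote term list into the
// query's identity so caches treat a changed term list as a different query.
// How fromVersion is obtained (index version, open time, ...) is assumed.
public class FullJoinQuery extends Query {
  private final String fromField;
  private final long fromVersion;

  public FullJoinQuery(String fromField, long fromVersion) {
    this.fromField = fromField;
    this.fromVersion = fromVersion;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof FullJoinQuery)) return false;
    FullJoinQuery other = (FullJoinQuery) o;
    return fromField.equals(other.fromField) && fromVersion == other.fromVersion;
  }

  @Override
  public int hashCode() {
    return 31 * fromField.hashCode() + Long.hashCode(fromVersion);
  }

  @Override
  public String toString(String field) {
    return "fulljoin(" + fromField + ", v=" + fromVersion + ")";
  }
}
{code}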


[jira] [Comment Edited] (SOLR-7090) Cross collection join

2015-10-05 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944094#comment-14944094
 ] 

Scott Blum edited comment on SOLR-7090 at 10/5/15 9:37 PM:
---

I have this basically working as a QParser.  Under the hood, it uses a 
distributed Facet query to collect the appropriate term list, which it then 
applies to the local core.

The main test works, but I'm having trouble with the random tests, and I'm not 
sure what I'm doing wrong.  I'm getting a different set of failures on trunk 
than I was getting on a similar patch against ~5.2.1.

On trunk, the final result set tends to have too few documents in it (e.g. 10 
!= 7), even though the fulljoin is actually recording that it found 10 docs.  
I've been digging on this but haven't figured it out yet.

On ~5.2.1, I was getting a different failure related to caching.  On index 
clear + commit, a fulljoin query result would get cached, and subsequent 
commits would not invalidate the result, so by the time a query was 
performed, it would miss all but the first few docs.

Any help would be much appreciated!


was (Author: dragonsinth):
I have this basically working as a QParser.  Under the hood, it uses a 
distributed Facet query to collect the appropriate term list, which it then 
applies to the local core.

I can't get all the random tests to work, though, and I'm not sure what I'm 
doing wrong.  I'm getting a different set of failures on trunk than I was 
getting on a similar patch against ~5.2.1.

On trunk, the final result set tends to have too few documents in it (e.g. 10 
!= 7), even though the fulljoin is actually recording that it found 10 docs.  
I've been digging on this but haven't figured it out yet.

On ~5.2.1, I was getting a different failure related to caching.  On index 
clear + commit, a fulljoin query result would get cached, and subsequent 
commits would not invalidate the result, so by the time a query was 
performed, it would miss all but the first few docs.

Any help would be much appreciated!

> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7090) Cross collection join

2015-10-05 Thread Scott Blum (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Scott Blum updated SOLR-7090:
-
Attachment: SOLR-7090-fulljoin.patch

I have this basically working as a QParser.  Under the hood, it uses a 
distributed Facet query to collect the appropriate term list, which it then 
applies to the local core.

I can't get all the random tests to work, though, and I'm not sure what I'm 
doing wrong.  I'm getting a different set of failures on trunk than I was 
getting on a similar patch against ~5.2.1.

On trunk, the final result set tends to have too few documents in it (e.g. 10 
!= 7), even though the fulljoin is actually recording that it found 10 docs.  
I've been digging on this but haven't figured it out yet.

On ~5.2.1, I was getting a different failure related to caching.  On index 
clear + commit, a fulljoin query result would get cached, and subsequent 
commits would not invalidate the result, so by the time a query was 
performed, it would miss all but the first few docs.

Any help would be much appreciated!

> Cross collection join
> -
>
> Key: SOLR-7090
> URL: https://issues.apache.org/jira/browse/SOLR-7090
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
> Fix For: 5.2, Trunk
>
> Attachments: SOLR-7090-fulljoin.patch, SOLR-7090.patch
>
>
> Although SOLR-4905 supports joins across collections in Cloud mode, there are 
> limitations, (i) the secondary collection must be replicated at each node 
> where the primary collection has a replica, (ii) the secondary collection 
> must be singly sharded.
> This issue explores ideas/possibilities of cross collection joins, even 
> across nodes. This will be helpful for users who wish to maintain boosts or 
> signals in a secondary, more frequently updated collection, and perform query 
> time join of these boosts/signals with results from the primary collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 977 - Still Failing

2015-10-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/977/

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=18289, name=collection3, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=18289, name=collection3, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:39969: collection already exists: 
awholynewstresscollection_collection3_1
at __randomizedtesting.SeedInfo.seed([F1C31F1CCECA8AE8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1574)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:888)


FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=7640, name=collection1, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7640, name=collection1, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
at 
__randomizedtesting.SeedInfo.seed([F1C31F1CCECA8AE8:799720C66036E710]:0)
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42023: Could not find collection : 
awholynewstresscollection_collection1_4
at __randomizedtesting.SeedInfo.seed([F1C31F1CCECA8AE8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)


FAILED:  
org.apache.solr.cloud.LeaderInitiatedRecoveryOnShardRestartTest.testRestartWithAllInLIR

Error Message:
Captured an uncaught exception in thread: Thread[id=116935, 
name=coreZkRegister-6107-thread-2, state=RUNNABLE, 
group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=116935, name=coreZkRegister-6107-thread-2, 
state=RUNNABLE, group=TGRP-LeaderInitiatedRecoveryOnShardRestartTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([F1C31F1CCECA8AE8]:0)
at 
org.apache.solr.cloud.ZkController.updateLeaderInitiatedRecoveryState(ZkController.java:2133)
at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:422)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:197)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:157)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:346)

[jira] [Created] (SOLR-8130) Solr's hdfs safe mode detection does not catch all cases of being in safe mode.

2015-10-05 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8130:
-

 Summary: Solr's hdfs safe mode detection does not catch all cases 
of being in safe mode.
 Key: SOLR-8130
 URL: https://issues.apache.org/jira/browse/SOLR-8130
 Project: Solr
  Issue Type: Bug
  Components: hdfs
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: Trunk, 5.4






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14423 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14423/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:47618

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:47618
at 
__randomizedtesting.SeedInfo.seed([DC1969AD2C65538B:544D567782993E73]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:587)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ShardSplitTest.splitShard(ShardSplitTest.java:490)
at 
org.apache.solr.cloud.ShardSplitTest.incompleteOrOverlappingCustomRangeTest(ShardSplitTest.java:111)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

[jira] [Commented] (SOLR-7995) Add a LIST command to ConfigSets API

2015-10-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944013#comment-14944013
 ] 

ASF subversion and git services commented on SOLR-7995:
---

Commit 1706920 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1706920 ]

SOLR-7995: Add a LIST command to ConfigSets API

> Add a LIST command to ConfigSets API
> 
>
> Key: SOLR-7995
> URL: https://issues.apache.org/jira/browse/SOLR-7995
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-7995.patch, SOLR-7995.patch
>
>
> It would be useful to have a LIST command in the ConfigSets API so that 
> clients do not have to access ZooKeeper in order to get the ConfigSets to use 
> for the other operations (create, delete).
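
For illustration, a sketch of what calling such a LIST action over plain HTTP might look like (the /admin/configs endpoint follows the existing ConfigSets API conventions; the response layout shown in the comment is an assumption):

{code:java}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ListConfigSets {
  public static void main(String[] args) throws Exception {
    // Hypothetical request shape, following the existing ConfigSets API
    // conventions (action=CREATE/DELETE at /admin/configs); the exact
    // response layout, e.g. {"configSets":["myConf"]}, is an assumption.
    URL url = new URL("http://localhost:8983/solr/admin/configs?action=LIST&wt=json");
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
{code}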



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7211) Off-Heap field cache

2015-10-05 Thread Bill Bell (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14944475#comment-14944475
 ] 

Bill Bell commented on SOLR-7211:
-

Are we going forward with this?

> Off-Heap field cache
> 
>
> Key: SOLR-7211
> URL: https://issues.apache.org/jira/browse/SOLR-7211
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>
> An off-heap field cache implementation will help with GC issues and enable 
> native-code performance optimizations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14425 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14425/
Java: 64bit/jdk1.9.0-ea-b82 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=6592, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)2) Thread[id=6591, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)3) Thread[id=6593, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=6590, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)5) Thread[id=6594, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=6592, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)

[jira] [Created] (LUCENE-6826) java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be cast to org.apache.lucene.index.MultiTermsEnum when adding indexes

2015-10-05 Thread Trejkaz (JIRA)
Trejkaz created LUCENE-6826:
---

 Summary: java.lang.ClassCastException: 
org.apache.lucene.index.TermsEnum$2 cannot be cast to 
org.apache.lucene.index.MultiTermsEnum when adding indexes
 Key: LUCENE-6826
 URL: https://issues.apache.org/jira/browse/LUCENE-6826
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/index
Affects Versions: 5.2.1
Reporter: Trejkaz


We are using addIndexes and FilterCodecReader tricks as part of index migration.

Whether FilterCodecReader tricks are required to reproduce this is uncertain, 
but in any case, when migrating a particular index, I saw this exception:

{noformat}
java.lang.ClassCastException: org.apache.lucene.index.TermsEnum$2 cannot be 
cast to org.apache.lucene.index.MultiTermsEnum
at 
org.apache.lucene.index.MappedMultiFields$MappedMultiTerms.iterator(MappedMultiFields.java:65)
at 
org.apache.lucene.codecs.blocktree.BlockTreeTermsWriter.write(BlockTreeTermsWriter.java:426)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198)
at 
org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105)
at 
org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:193)
at org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95)
at org.apache.lucene.index.IndexWriter.addIndexes(IndexWriter.java:2519)
{noformat}

TermsEnum$2 appears to be TermsEnum.EMPTY. The place where it is created is 
here:

MultiTermsEnum#reset:
{code}
if (queue.size() == 0) {
  return TermsEnum.EMPTY;   // <- this is not a MultiTermsEnum
} else {
  return this;
}
{code}

A quick hack would be for MappedMultiFields to check for TermsEnum.EMPTY 
specifically before casting, but there might be some way to avoid the cast 
entirely, which would obviously be a better idea.
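
For illustration, a minimal sketch of that check-before-cast hack, framed as the MappedMultiTerms.iterator() body (the field names and the wrapper call are assumptions, not the actual trunk code):

{code:java}
// Hypothetical guard inside MappedMultiFields$MappedMultiTerms.iterator():
// bail out before the cast when the merged enum is the EMPTY singleton.
@Override
public TermsEnum iterator() throws IOException {
  TermsEnum termsEnum = in.iterator();
  if (termsEnum == TermsEnum.EMPTY) {
    return TermsEnum.EMPTY; // nothing to map; EMPTY is not a MultiTermsEnum
  }
  // assumed wrapper call; the real constructor arguments may differ
  return new MappedMultiTermsEnum(mergeState, (MultiTermsEnum) termsEnum);
}
{code}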




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: SOLR-8050: Test case demonstrating the b...

2015-10-05 Thread LucVL
GitHub user LucVL opened a pull request:

https://github.com/apache/lucene-solr/pull/202

SOLR-8050: Test case demonstrating the bug

To run just this test case, use:
```sh
ant test -Dtests.class=org.apache.solr.update.processor.AtomicUpdatesTest 
-Dtests.method=testMultipleTDateValues
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/LucVL/lucene-solr SOLR-8050

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/202.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #202


commit bb7b239eb25a8826e9767edc52e970a8b2aab405
Author: Luc Vanlerberghe 
Date:   2015-10-05T09:58:56Z

Test case demonstrating the bug




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts

2015-10-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943302#comment-14943302
 ] 

ASF subversion and git services commented on LUCENE-6590:
-

Commit 1706827 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1706827 ]

LUCENE-6590: Fix toString() representations of queries to include the boost.

Some of them were lost in the merge, others were just missing.

> Explore different ways to apply boosts
> --
>
> Key: LUCENE-6590
> URL: https://issues.apache.org/jira/browse/LUCENE-6590
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 5.4
>
> Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, 
> LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch
>
>
> Follow-up from LUCENE-6570: the fact that all queries are mutable in order to 
> allow applying a boost raises issues, since it makes queries bad cache 
> keys: their hashcode can change at any time. We could just document that 
> queries should never be modified after they have gone through IndexSearcher, 
> but it would be even better if the API made queries impossible to mutate at 
> all.
> I think there are two main options:
>  - either replace "void setBoost(boost)" with something like "Query 
> withBoost(boost)", which would return a clone with a different boost
>  - or move boost handling outside of Query; for instance, we could have an 
> (immutable) query impl dedicated to applying boosts, which queries that need 
> to change boosts at rewrite time (such as BooleanQuery) would use as a 
> wrapper.
> The latter idea is from Robert, and I like it a lot given how often I have 
> either introduced or found a bug caused by the boost parameter being ignored. 
> Maybe there are other options, but I think this is worth exploring.
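
To make the second option concrete, a rough sketch of such an immutable wrapper (illustrative only; the class name and toString format are made up, and this is not a committed API):

{code:java}
import org.apache.lucene.search.Query;

// Immutable boost wrapper: the wrapped query is never mutated, so the
// hashcode is stable and the query stays a good cache key.
public final class BoostedQuery extends Query {
  private final Query in;
  private final float boost;

  public BoostedQuery(Query in, float boost) {
    this.in = in;
    this.boost = boost;
  }

  @Override
  public boolean equals(Object other) {
    return other instanceof BoostedQuery
        && in.equals(((BoostedQuery) other).in)
        && boost == ((BoostedQuery) other).boost;
  }

  @Override
  public int hashCode() {
    return 31 * in.hashCode() + Float.floatToIntBits(boost);
  }

  @Override
  public String toString(String field) {
    return in.toString(field) + "^" + boost;
  }
}
{code}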



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7686) New Admin UI Dashboard page doesn't show system info and args

2015-10-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943224#comment-14943224
 ] 

ASF subversion and git services commented on SOLR-7686:
---

Commit 1706796 from [~upayavira] in branch 'dev/trunk'
[ https://svn.apache.org/r1706796 ]

SOLR-7686 Fix dashboard on Windows - don't show load average

> New Admin UI Dashboard page doesn't show system info and args
> -
>
> Key: SOLR-7686
> URL: https://issues.apache.org/jira/browse/SOLR-7686
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1
>Reporter: Varun Thacker
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7686.patch, new-ui-dashboard.png, system.json
>
>
> When I'm on http://localhost:8983/solr/index.html#/ none of the system bar 
> graphs show anything, and the JVM args that the server started with don't 
> show up in 'Args'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails

2015-10-05 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943257#comment-14943257
 ] 

ASF GitHub Bot commented on SOLR-8050:
--

GitHub user LucVL opened a pull request:

https://github.com/apache/lucene-solr/pull/202

SOLR-8050: Test case demonstrating the bug

To run just this test case, use:
```sh
ant test -Dtests.class=org.apache.solr.update.processor.AtomicUpdatesTest 
-Dtests.method=testMultipleTDateValues
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/LucVL/lucene-solr SOLR-8050

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/202.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #202


commit bb7b239eb25a8826e9767edc52e970a8b2aab405
Author: Luc Vanlerberghe 
Date:   2015-10-05T09:58:56Z

Test case demonstrating the bug




> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an 
> exception like: org.apache.solr.common.SolrException: Invalid Date 
> String:'Mon Sep 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any date field.
> See the following code snippet to reproduce:
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)

[GitHub] lucene-solr pull request: o.a.l.queryparser.flexible.standard.Stan...

2015-10-05 Thread LucVL
Github user LucVL closed the pull request at:

https://github.com/apache/lucene-solr/pull/108


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7833) Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books and news.

2015-10-05 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-7833.
--
Resolution: Fixed

> Add new Solr book 'Solr Cookbook - Third Edition' to selection of Solr books 
> and news.
> --
>
> Key: SOLR-7833
> URL: https://issues.apache.org/jira/browse/SOLR-7833
> Project: Solr
>  Issue Type: Task
>Reporter: Zico Fernandes
>Assignee: Steve Rowe
> Attachments: SOLR-7833.patch, SOLR-7833_version_2.patch, Solr 
> Cookbook_Third Edition.jpg, book_solr_cookbook_3ed.jpg
>
>
> Rafał Kuć is proud to finally announce the book Solr Cookbook - Third Edition 
> by Packt Publishing. This edition will specifically appeal to developers who 
> wish to quickly get to grips with the changes and new features of Apache Solr 
> 5. 
> Solr Cookbook - Third Edition has over 100 easy-to-follow recipes for 
> effectively solving real-time problems related to Apache Solr 4.x and 5.0. 
> Starting with vital information on setting up Solr, the developer will 
> quickly progress to analyzing their text data through querying and 
> performance improvement. Finally, they will explore real-life situations 
> where Solr can be used to simplify daily collection handling.
> With numerous practical chapters centered on important Solr techniques and 
> methods, Solr Cookbook - Third Edition will guide intermediate Solr 
> developers who are willing to learn and implement pro-level practices, 
> techniques, and solutions.
> Click here to read more about the Solr Cookbook - Third Edition: 
> http://bit.ly/1Q2AGS8



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7543) Create GraphQuery that allows graph traversal as a query operator.

2015-10-05 Thread Kevin Watters (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Watters updated SOLR-7543:

Attachment: SOLR-7543.patch

Patch with GraphQuery / parsers / unit tests.

> Create GraphQuery that allows graph traversal as a query operator.
> --
>
> Key: SOLR-7543
> URL: https://issues.apache.org/jira/browse/SOLR-7543
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Reporter: Kevin Watters
>Priority: Minor
> Attachments: SOLR-7543.patch
>
>
> I have a GraphQuery that I implemented a long time back that allows a user to 
> specify a "startQuery" to identify which documents to start graph traversal 
> from.  It then gathers up the edge ids for those documents and optionally 
> applies an additional filter.  The query is then re-executed continually 
> until no new edge ids are identified.  I am currently hosting this code at 
> https://github.com/kwatters/solrgraph and I would like to work with the 
> community to get some feedback and ultimately get it committed back in as a 
> Lucene query.
> Here's a bit more of a description of the parameters for the query / graph 
> traversal:
> q - the initial start query that identifies the universe of documents to 
> start traversal from.
> fromField - the field name that contains the node id
> toField - the name of the field that contains the edge id(s).
> traversalFilter - this is an additional query that can be supplied to limit 
> the scope of graph traversal to just the edges that satisfy the 
> traversalFilter query.
> maxDepth - integer specifying how deep the breadth-first search should go.
> returnStartNodes - boolean to determine if the documents that matched the 
> original "q" should be returned as part of the graph.
> onlyLeafNodes - boolean that filters the graph query to only return 
> documents/nodes that have no edges.
> We identify a set of documents with "q" as any arbitrary Lucene query.  It 
> will collect the values in the fromField, create an OR query with those 
> values, optionally apply an additional constraint from the "traversalFilter", 
> and walk the result set until no new edges are detected.  Traversal can also 
> be stopped at N hops away, as defined by maxDepth.  This is a BFS 
> (breadth-first search) algorithm.  Cycle detection is done by not revisiting 
> the same document for edge extraction.
> This query operator does not keep track of how you arrived at the document, 
> but only that the traversal did arrive at the document.
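
As an illustration of how these parameters might be supplied from SolrJ (hypothetical: the parser name "graph" and the local-param spellings simply mirror the parameter names above; the patch defines the actual syntax):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class GraphQueryExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      // Hypothetical local-param syntax mirroring the parameters described
      // above; parser name and param spellings are assumptions.
      SolrQuery q = new SolrQuery(
          "{!graph fromField=node_id toField=edge_ids maxDepth=3"
          + " returnStartNodes=false onlyLeafNodes=true"
          + " traversalFilter='type:device'}id:startDoc");
      QueryResponse rsp = client.query(q);
      System.out.println("nodes found: " + rsp.getResults().getNumFound());
    }
  }
}
{code}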



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7938) MergeStream to support N streams

2015-10-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943354#comment-14943354
 ] 

Joel Bernstein commented on SOLR-7938:
--

I think the feature looks great. I'm just wrapping up SOLR-8086, which will 
pretty much clear my plate.

Let's create an umbrella ticket for Streaming and SQL issues so we don't lose 
track of them. We can link this ticket and other outstanding Streaming and SQL 
tickets to the umbrella ticket. The umbrella ticket can also be a place to 
discuss the road map.



> MergeStream to support N streams
> 
>
> Key: SOLR-7938
> URL: https://issues.apache.org/jira/browse/SOLR-7938
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
> Attachments: SOLR-7938.patch
>
>
> Enhances MergeStream to support merging N streams. This was previously 
> limited to merging just two streams, but with this enhancement it can now 
> accept any number of streams to merge.
> If, based on the comparator, more than one stream could provide the next 
> value, then the selected value will follow the order of the streams as they 
> appear in the expression or were added to the MergeStream object.
> {code}
> merge(
>   search(collection1, q="id:(0 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(1)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc"
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7686) New Admin UI Dashboard page doesn't show system info and args

2015-10-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943228#comment-14943228
 ] 

ASF subversion and git services commented on SOLR-7686:
---

Commit 1706799 from [~upayavira] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1706799 ]

SOLR-7686 Fix dashboard on Windows - don't show load average

> New Admin UI Dashboard page doesn't show system info and args
> -
>
> Key: SOLR-7686
> URL: https://issues.apache.org/jira/browse/SOLR-7686
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1
>Reporter: Varun Thacker
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7686.patch, new-ui-dashboard.png, system.json
>
>
> When I'm on http://localhost:8983/solr/index.html#/ none of the system bar 
> graphs show anything, and the JVM args that the server started with don't 
> show up in 'Args'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 14127 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14127/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ShardSplitTest.test

Error Message:
There are still nodes recoverying - waited for 15 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 15 
seconds
at 
__randomizedtesting.SeedInfo.seed([4D9DCAB1CFCE8DB0:C5C9F56B6132E048]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at org.apache.solr.cloud.ShardSplitTest.test(ShardSplitTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_60) - Build # 5308 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5308/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudPivotFacet

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.TestCloudPivotFacet:  
   1) Thread[id=4931, 
name=OverseerHdfsCoreFailoverThread-94637070258667531-127.0.0.1:55952_oj%2Fp-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestCloudPivotFacet: 
   1) Thread[id=4931, 
name=OverseerHdfsCoreFailoverThread-94637070258667531-127.0.0.1:55952_oj%2Fp-n_02,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([F8BF90845D99DA2A]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudPivotFacet

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4931, 
name=OverseerHdfsCoreFailoverThread-94637070258667531-127.0.0.1:55952_oj%2Fp-n_02,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Throwable.getStackTraceElement(Native Method) at 
java.lang.Throwable.getOurStackTrace(Throwable.java:827) at 
java.lang.Throwable.printStackTrace(Throwable.java:656) at 
java.lang.Throwable.printStackTrace(Throwable.java:721) at 
org.apache.solr.common.SolrException.toStr(SolrException.java:170) at 
org.apache.solr.common.SolrException.log(SolrException.java:144) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:133)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4931, 
name=OverseerHdfsCoreFailoverThread-94637070258667531-127.0.0.1:55952_oj%2Fp-n_02,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Throwable.getStackTraceElement(Native Method)
at java.lang.Throwable.getOurStackTrace(Throwable.java:827)
at java.lang.Throwable.printStackTrace(Throwable.java:656)
at java.lang.Throwable.printStackTrace(Throwable.java:721)
at org.apache.solr.common.SolrException.toStr(SolrException.java:170)
at org.apache.solr.common.SolrException.log(SolrException.java:144)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:133)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([F8BF90845D99DA2A]:0)


FAILED:  org.apache.solr.logging.TestLogWatcher.testLog4jWatcher

Error Message:
expected:<13> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<13> but was:<1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.logging.TestLogWatcher.testLog4jWatcher(TestLogWatcher.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)

[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails

2015-10-05 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943309#comment-14943309
 ] 

Luc Vanlerberghe commented on SOLR-8050:


A temporary workaround seems to be to include the data of the multi-valued 
tdate field in the update request, to prevent Solr from trying to decode the 
existing values...

In the patch I attached earlier, I now used
{code:java}
doc.setField("multiTDate_tdtdv", new String[]{"1986-01-01T00:00:00Z", 
"1988-01-01T00:00:00Z", "1980-01-01T00:00:00Z"});
{code}
to construct the original document and added
{code:java}
doc.setField("multiTDate_tdtdv", ImmutableMap.of("set", 
"1986-01-01T00:00:00Z")); 
doc.addField("multiTDate_tdtdv", ImmutableMap.of("set", 
"1988-01-01T00:00:00Z")); 
doc.addField("multiTDate_tdtdv", ImmutableMap.of("set", 
"1980-01-01T00:00:00Z")); 
{code}
to the update request, and the test passes.


> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an 
> exception like: org.apache.solr.common.SolrException: Invalid Date 
> String:'Mon Sep 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any date field.
> See the following code snippet to reproduce:
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> 

[GitHub] lucene-solr pull request: Test case demonstrating the bug

2015-10-05 Thread LucVL
Github user LucVL closed the pull request at:

https://github.com/apache/lucene-solr/pull/201


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-10-05 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943324#comment-14943324
 ] 

Tim Allison commented on LUCENE-5205:
-

Y, ha, I upgraded the code locally last week.  I'll push to a new lucene5.3on-0.1 
branch shortly.

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].
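
For readers skimming the syntax above, a minimal usage sketch. This is hedged: 
SpanQueryParser extends QueryParserBase, so parse(String) is inherited, but the 
package and constructor signature shown here are assumptions based on classic 
QueryParser conventions; check the javadoc in the lucene-addons repo for the 
exact API.

{code:java}
// Hedged usage sketch. The import and constructor are assumptions based on
// QueryParserBase conventions; parse(String) comes from the base class.
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.spans.SpanQueryParser; // package per lucene-addons (assumed)
import org.apache.lucene.search.Query;

public class SpanQueryParserDemo {
  public static void main(String[] args) throws Exception {
    SpanQueryParser parser = new SpanQueryParser("text", new StandardAnalyzer());
    // Fully recursive phrasal query from the examples above: "jakarta" within
    // three words of "apache", and that hit within four words before "lucene".
    Query q = parser.parse("[[jakarta apache]~3 lucene]~>4");
    System.out.println(q);
  }
}
{code}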



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 100 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/100/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C19E646ADF1FAB78:49CA5BB071E3C680]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.junit.Assert.assertNull(Assert.java:562)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testNoConfigSetExist(CollectionsAPIDistributedZkTest.java:519)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:166)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Commented] (LUCENE-5205) SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2015-10-05 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943327#comment-14943327
 ] 

Tim Allison commented on LUCENE-5205:
-

Let me know if there are any surprises with 
[lucene5.3on-0.1|https://github.com/tballison/lucene-addons/tree/lucene5.3on-0.1].

The solr-5410 (Solr parser wrapper for the SpanQueryParser) works, but I 
haven't yet upgraded solr-5411 (the Solr-level concordance wrapper).

> SpanQueryParser with recursion, analysis and syntax very similar to classic 
> QueryParser
> ---
>
> Key: LUCENE-5205
> URL: https://issues.apache.org/jira/browse/LUCENE-5205
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: Tim Allison
>  Labels: patch
> Attachments: LUCENE-5205-cleanup-tests.patch, 
> LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
> LUCENE-5205_dateTestReInitPkgPrvt.patch, 
> LUCENE-5205_improve_stop_word_handling.patch, 
> LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
> SpanQueryParser_v1.patch.gz, patch.txt
>
>
> This parser extends QueryParserBase and includes functionality from:
> * Classic QueryParser: most of its syntax
> * SurroundQueryParser: recursive parsing for "near" and "not" clauses.
> * ComplexPhraseQueryParser: can handle "near" queries that include multiterms 
> (wildcard, fuzzy, regex, prefix),
> * AnalyzingQueryParser: has an option to analyze multiterms.
> At a high level, there's a first pass BooleanQuery/field parser and then a 
> span query parser handles all terminal nodes and phrases.
> Same as classic syntax:
> * term: test 
> * fuzzy: roam~0.8, roam~2
> * wildcard: te?t, test*, t*st
> * regex: /\[mb\]oat/
> * phrase: "jakarta apache"
> * phrase with slop: "jakarta apache"~3
> * default "or" clause: jakarta apache
> * grouping "or" clause: (jakarta apache)
> * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
> * multiple fields: title:lucene author:hatcher
>  
> Main additions in SpanQueryParser syntax vs. classic syntax:
> * Can require "in order" for phrases with slop with the \~> operator: 
> "jakarta apache"\~>3
> * Can specify "not near": "fever bieber"!\~3,10 ::
> find "fever" but not if "bieber" appears within 3 words before or 10 
> words after it.
> * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
> apache\]~3 lucene\]\~>4 :: 
> find "jakarta" within 3 words of "apache", and that hit has to be within 
> four words before "lucene"
> * Can also use \[\] for single level phrasal queries instead of " as in: 
> \[jakarta apache\]
> * Can use "or grouping" clauses in phrasal queries: "apache (lucene solr)"\~3 
> :: find "apache" and then either "lucene" or "solr" within three words.
> * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
> * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
> /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like "jakarta" within two 
> words of "ap*che" and that hit has to be within ten words of something like 
> "solr" or that "lucene" regex.
> * Can require at least x number of hits at boolean level: "apache AND (lucene 
> solr tika)~2
> * Can use negative only query: -jakarta :: Find all docs that don't contain 
> "jakarta"
> * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
> potential performance issues!).
> Trivial additions:
> * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
> prefix =2)
> * Can specify Optimal String Alignment (OSA) vs Levenshtein for distance 
> <=2: jakarta~1 (OSA) vs jakarta~>1 (Levenshtein)
> This parser can be very useful for concordance tasks (see also LUCENE-5317 
> and LUCENE-5318) and for analytical search.  
> Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
> Most of the documentation is in the javadoc for SpanQueryParser.
> Any and all feedback is welcome.  Thank you.
> Until this is added to the Lucene project, I've added a standalone 
> lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
>  on [github|https://github.com/tballison/lucene-addons].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 98 - Still Failing!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/98/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([C60E3B425E9B57F0:7CDC543ADDB5B9E5]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:765)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:321)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was: q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:758)
... 40 more




Build Log:
[...truncated 10005 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
 

[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails

2015-10-05 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943265#comment-14943265
 ] 

Luc Vanlerberghe commented on SOLR-8050:


As I mentioned in the code comments, not only is TrieField.createField not able 
to make sense of the output of Date.toString() as opposed to a correctly formed 
UTC date/time string (like "1986-01-01T00:00:00Z"), but the value the Date 
object contains depends on the locale the test is run in, so there must be an 
error even earlier in the update logic while decoding the values in the Lucene 
Document...
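
A quick plain-JDK illustration of that mismatch (the epoch value is arbitrary): 
Date.toString() produces a locale- and timezone-dependent form of the kind the 
stack trace shows being rejected, whereas the canonical UTC form is accepted.

{code:java}
// Plain-JDK illustration of the mismatch described above. The epoch value is
// arbitrary; the point is the two renderings of the same Date.
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class DateRenderingDemo {
  public static void main(String[] args) {
    Date d = new Date(504921600000L); // 1986-01-01T00:00:00Z
    // Locale/zone dependent, e.g. "Wed Jan 01 01:00:00 CET 1986" -> rejected
    System.out.println(d.toString());
    SimpleDateFormat iso = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'", Locale.ROOT);
    iso.setTimeZone(TimeZone.getTimeZone("UTC"));
    // Canonical UTC form -> accepted: "1986-01-01T00:00:00Z"
    System.out.println(iso.format(d));
  }
}
{code}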

> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an exception
> like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any date field.
> See the following code snippet to reproduce:
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at 
> 

[jira] [Comment Edited] (SOLR-8050) Partial update on document with multivalued date field fails

2015-10-05 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943265#comment-14943265
 ] 

Luc Vanlerberghe edited comment on SOLR-8050 at 10/5/15 11:52 AM:
--

As I mentioned in the code comments, not only is TrieField.createField not able 
to make sense of the output of Date.toString() as opposed to a correctly formed 
UTC date/time string (like "1986-01-01T00:00:00Z"), but the value the Date 
object contains depends on the locale the test is run in, so there must be an 
error even earlier in the update logic while decoding the values in the Lucene 
Document...


was (Author: lvl):
As I mentioned in the code comments, not only does TrieField.createField cannot 
make sense of the output of Date.toString() as opposed to a correctly formed 
UTC date/time string (like "1986-01-01T00:00:00Z"), but the value the Date 
object contains depends on the locale the test is run in, so there must be an 
error even earlier in the update logic while decoding the values in the Lucene 
Document...

> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an exception
> like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any date field.
> See the following code snippet to reproduce:
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at 

[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_80) - Build # 5178 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5178/
Java: 64bit/jdk1.7.0_80 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.store.TestNativeFSLockFactory.testStressLocks

Error Message:
IndexWriter hit unexpected exceptions

Stack Trace:
java.lang.AssertionError: IndexWriter hit unexpected exceptions
at 
__randomizedtesting.SeedInfo.seed([91588621BD830FAD:CF69C8DCA12FC7CB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.store.BaseLockFactoryTestCase.testStressLocks(BaseLockFactoryTestCase.java:175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1050 lines...]
   [junit4] Suite: org.apache.lucene.store.TestNativeFSLockFactory
   [junit4]   1> Stress Test Index Writer: creation hit unexpected exception: 
java.io.FileNotFoundException: segments_8 in 
dir=MMapDirectory@C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestNativeFSLockFactory_91588621BD830FAD-001\tempDir-009
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@343dd4b6
   [junit4]   1> java.io.FileNotFoundException: segments_8 in 

[GitHub] lucene-solr pull request: Test case demonstrating the bug

2015-10-05 Thread LucVL
GitHub user LucVL opened a pull request:

https://github.com/apache/lucene-solr/pull/201

Test case demonstrating the bug

To run just this testcase, use:
```sh
ant test -Dtests.class=org.apache.solr.update.processor.AtomicUpdatesTest 
-Dtests.method=testMultipleTDateValues
```

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/LucVL/lucene-solr SOLR-8050

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/201.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #201


commit bb7b239eb25a8826e9767edc52e970a8b2aab405
Author: Luc Vanlerberghe 
Date:   2015-10-05T09:58:56Z

Test case demonstrating the bug




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b82) - Build # 14121 - Failure!

2015-10-05 Thread Uwe Schindler
Hi,

this could be caused by the changes from 
https://bugs.openjdk.java.net/browse/JDK-8130847. We are in discussion about it 
on the OpenJDK mailing list.

The fix for this could be (needs investigation):
- https://bugs.openjdk.java.net/browse/JDK-8134974 
- http://hg.openjdk.java.net/jdk9/hs-comp/hotspot/rev/bfb61f868681

This will appear in build 84. I will now try to reproduce the problem with 
Apache Directory tests directly.

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


> -Original Message-
> From: Policeman Jenkins Server [mailto:jenk...@thetaphi.de]
> Sent: Monday, October 05, 2015 1:33 AM
> To: dev@lucene.apache.org
> Subject: [JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b82) - Build #
> 14121 - Failure!
> Importance: Low
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14121/
> Java: 64bit/jdk1.9.0-ea-b82 -XX:-UseCompressedOops -XX:+UseSerialGC
> 
> All tests passed
> 
> Build Log:
> [...truncated 9953 lines...]
>[junit4] JVM J2: stdout was not empty, see:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-
> core/test/temp/junit4-J2-20151004_224124_933.sysout
>[junit4] >>> JVM J2: stdout (verbatim) 
>[junit4] #
>[junit4] # A fatal error has been detected by the Java Runtime
> Environment:
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7fc95282ecce, pid=7379,
> tid=0x1cfe
>[junit4] #
>[junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build
> 1.9.0-ea-b82)
>[junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed
> mode, tiered, serial gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0x80ccce]
> PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
>[junit4] #
>[junit4] # No core dump will be written. Core dumps have been disabled. To
> enable core dumping, try "ulimit -c unlimited" before starting Java again
>[junit4] #
>[junit4] # An error report file with more information is saved as:
>[junit4] # /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-
> core/test/J2/hs_err_pid7379.log
>[junit4]
>[junit4] [error occurred during error reporting , id 0xb]
>[junit4]
>[junit4] #
>[junit4] # If you would like to submit a bug report, please visit:
>[junit4] #   http://bugreport.java.com/bugreport/crash.jsp
>[junit4] #
>[junit4] <<< JVM J2: EOF 
> 
> [...truncated 302 lines...]
>[junit4] JVM J1: stdout was not empty, see:
> /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-
> core/test/temp/junit4-J1-20151004_224124_933.sysout
>[junit4] >>> JVM J1: stdout (verbatim) 
>[junit4] #
>[junit4] # A fatal error has been detected by the Java Runtime
> Environment:
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0x7f3eadb3ccce, pid=7377,
> tid=0x1cef
>[junit4] #
>[junit4] # JRE version: Java(TM) SE Runtime Environment (9.0-b82) (build
> 1.9.0-ea-b82)
>[junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (1.9.0-ea-b82, mixed
> mode, tiered, serial gc, linux-amd64)
>[junit4] # Problematic frame:
>[junit4] # V  [libjvm.so+0x80ccce]
> PhaseIdealLoop::build_loop_late_post(Node*)+0x13e
>[junit4] #
>[junit4] # No core dump will be written. Core dumps have been disabled. To
> enable core dumping, try "ulimit -c unlimited" before starting Java again
>[junit4] #
>[junit4] # An error report file with more information is saved as:
>[junit4] # /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-
> core/test/J1/hs_err_pid7377.log
>[junit4]
>[junit4] [error occurred during error reporting , id 0xb]
>[junit4]
>[junit4] #
>[junit4] # If you would like to submit a bug report, please visit:
>[junit4] #   http://bugreport.java.com/bugreport/crash.jsp
>[junit4] #
>[junit4] <<< JVM J1: EOF 
> 
> [...truncated 849 lines...]
>[junit4] ERROR: JVM J1 ended with an exception, command line:
> /home/jenkins/tools/java/64bit/jdk1.9.0-ea-b82/bin/java -XX:-
> UseCompressedOops -XX:+UseSerialGC -
> XX:+HeapDumpOnOutOfMemoryError -
> XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-5.x-
> Linux/heapdumps -ea -esa -Dtests.prefix=tests -
> Dtests.seed=32E4720D00D0D286 -Xmx512M -Dtests.iters= -
> Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random -
> Dtests.postingsformat=random -Dtests.docvaluesformat=random -
> Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random -
> Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.4.0 -
> Dtests.cleanthreads=perClass -
> Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-5.x-
> Linux/lucene/tools/junit4/logging.properties -Dtests.nightly=false -
> Dtests.weekly=false -Dtests.monster=false -Dtests.slow=true -
> Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp -
> Djava.io.tmpdir=./temp -
> 

[jira] [Commented] (SOLR-8050) Partial update on document with multivalued date field fails

2015-10-05 Thread Luc Vanlerberghe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943245#comment-14943245
 ] 

Luc Vanlerberghe commented on SOLR-8050:


I have the same problem in solr-5.1.0 and was able to create a simple test 
demonstrating the problem in trunk.

I'll upload a patch/pull request with the failing test case shortly.


> Partial update on document with multivalued date field fails
> 
>
> Key: SOLR-8050
> URL: https://issues.apache.org/jira/browse/SOLR-8050
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Affects Versions: 5.2.1
> Environment: embedded solr
> java 1.7
> win
>Reporter: Burkhard Buelte
> Attachments: screenshot-1.png
>
>
> When updating a document with a multivalued date field, Solr throws an exception
> like: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
> even if the update document doesn't contain any date field.
> See the following code snippet to reproduce:
> 1. create a doc with multivalued date field (here dynamic field _dts)
> SolrInputDocument doc = new SolrInputDocument();
> String id = Long.toString(System.currentTimeMillis());
> System.out.println("testUpdate: adding test document to solr ID=" + 
> id);
> doc.addField(CollectionSchema.id.name(), id);
> doc.addField(CollectionSchema.title.name(), "Lorem ipsum");
> doc.addField(CollectionSchema.host_s.name(), "yacy.net");
> doc.addField(CollectionSchema.text_t.name(), "Lorem ipsum dolor sit 
> amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut 
> labore et dolore magna aliqua.");
> doc.addField(CollectionSchema.dates_in_content_dts.name(), new 
> Date());
> solr.add(doc);
> solr.commit(true);
> 2. update any field on this doc via partial update
> SolrInputDocument sid = new SolrInputDocument();
> sid.addField(CollectionSchema.id.name(), 
> doc.getFieldValue(CollectionSchema.id.name()));
> sid.addField(CollectionSchema.host_s.name(), "yacy.yacy");
> solr.update(sid);
> solr.commit(true);
> Result
> Caused by: org.apache.solr.common.SolrException: Invalid Date String:'Mon Sep 
> 14 01:48:38 CEST 2015'
>   at org.apache.solr.util.DateFormatUtil.parseMath(DateFormatUtil.java:87)
>   at 
> org.apache.solr.schema.TrieField.readableToIndexed(TrieField.java:473)
>   at org.apache.solr.schema.TrieField.createFields(TrieField.java:715)
>   at 
> org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:48)
>   at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:123)
>   at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
>   at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
>   at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:955)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1110)
>   at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:706)
>   at 
> org.apache.solr.update.processor.LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:104)
>   at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
>   at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:207)
>   at 
> org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:250)
>   at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:177)
>   at 
> org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:98)
>   at 
> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at 
> org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:179)
>   at 
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
>   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
>   at 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 813 - Still Failing

2015-10-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/813/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest: 1) 
Thread[id=4961, name=StoppableIndexingThread, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)
2) Thread[id=4962, name=StoppableIndexingThread, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest: 
   1) Thread[id=4961, name=StoppableIndexingThread, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)
   2) Thread[id=4962, name=StoppableIndexingThread, state=TIMED_WAITING, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)
at __randomizedtesting.SeedInfo.seed([AD2575BC4A303AF3]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4961, name=StoppableIndexingThread, state=RUNNABLE, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)
2) Thread[id=4962, name=StoppableIndexingThread, state=RUNNABLE, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest] at 
java.lang.StringCoding$StringEncoder.encode(StringCoding.java:304) at 
java.lang.StringCoding.encode(StringCoding.java:344) at 
java.lang.String.getBytes(String.java:906) at 
org.apache.solr.common.util.ContentStreamBase$StringStream.getStream(ContentStreamBase.java:182)
 at 
org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream.getStream(RequestWriter.java:125)
 at 
org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:439)
 at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
 at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
 at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150) 
at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)   
  at 
org.apache.solr.cloud.StoppableIndexingThread.indexDocs(StoppableIndexingThread.java:177)
 at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:116)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=4961, name=StoppableIndexingThread, state=RUNNABLE, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:136)
   2) Thread[id=4962, name=StoppableIndexingThread, state=RUNNABLE, 
group=TGRP-HdfsChaosMonkeySafeLeaderTest]
at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:304)
at java.lang.StringCoding.encode(StringCoding.java:344)
at java.lang.String.getBytes(String.java:906)
at 
org.apache.solr.common.util.ContentStreamBase$StringStream.getStream(ContentStreamBase.java:182)
at 
org.apache.solr.client.solrj.request.RequestWriter$LazyContentStream.getStream(RequestWriter.java:125)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.createMethod(HttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:239)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.StoppableIndexingThread.indexDocs(StoppableIndexingThread.java:177)
at 
org.apache.solr.cloud.StoppableIndexingThread.run(StoppableIndexingThread.java:116)
at __randomizedtesting.SeedInfo.seed([AD2575BC4A303AF3]:0)


FAILED:  org.apache.solr.cloud.hdfs.HdfsChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 30 seconds and no jetties were stopped - this is worth 

[jira] [Created] (SOLR-8126) update- does not work if the component is present in solrconfig.xml

2015-10-05 Thread Noble Paul (JIRA)
Noble Paul created SOLR-8126:


 Summary: update- does not work if the component is 
present in solrconfig.xml
 Key: SOLR-8126
 URL: https://issues.apache.org/jira/browse/SOLR-8126
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Noble Paul


If a component was already added using the API, or if it is present in 
{{configoverlay.json}}, the update command works. It does not work if the 
component is present in {{solrconfig.xml}}.
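
For illustration, a minimal plain-JDK sketch of posting one update-* command to 
the Config API; "update-requesthandler" stands in for whichever component type 
the (truncated) summary refers to, and the host, collection, and payload are 
examples, not the reported failing case.

{code:java}
// Hedged illustration: POST one update-* command to the Config API over plain
// HTTP. "update-requesthandler" is one concrete command; host, collection and
// payload values are examples only.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ConfigApiUpdateDemo {
  public static void main(String[] args) throws Exception {
    String payload = "{\"update-requesthandler\":{\"name\":\"/dump\","
        + "\"class\":\"solr.DumpRequestHandler\",\"defaults\":{\"a\":\"B\"}}}";
    HttpURLConnection con = (HttpURLConnection)
        new URL("http://localhost:8983/solr/collection1/config").openConnection();
    con.setRequestMethod("POST");
    con.setRequestProperty("Content-Type", "application/json");
    con.setDoOutput(true);
    try (OutputStream os = con.getOutputStream()) {
      os.write(payload.getBytes(StandardCharsets.UTF_8));
    }
    // 200 means the overlay accepted the command; the bug is about components
    // that were defined in solrconfig.xml rather than via this API.
    System.out.println("HTTP " + con.getResponseCode());
  }
}
{code}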



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8054) Add a GET command to ConfigSets API

2015-10-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943487#comment-14943487
 ] 

Mark Miller commented on SOLR-8054:
---

Thinking about a view-config API like this, I think I'd want some way to get an 
individual file or all the files (as a zip, in one stream, whatever ends up 
making sense), depending on a param.

> Add a GET command to ConfigSets API
> ---
>
> Key: SOLR-8054
> URL: https://issues.apache.org/jira/browse/SOLR-8054
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a command that allows you to view a ConfigSet via 
> the API rather than going to zookeeper directly.  Mainly for security 
> reasons, e.g.
> - solr may have different security requirements than the ZNodes e.g. only 
> solr can view znodes but any authenticated user can call ConfigSet API
> - it's nicer than pointing to the web UI and using the zookeeper viewer, 
> because of the same security concerns as above and that you don't have to 
> know the internal zookeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7938) MergeStream to support N streams

2015-10-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943379#comment-14943379
 ] 

Joel Bernstein commented on SOLR-7938:
--

I just created the umbrella ticket and linked it. If you have other tickets out 
there feel free to link them to SOLR-8086.

> MergeStream to support N streams
> 
>
> Key: SOLR-7938
> URL: https://issues.apache.org/jira/browse/SOLR-7938
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
> Attachments: SOLR-7938.patch
>
>
> Enhances MergeStream to support merging N streams. This was previously 
> limited to merging just two streams but with this enhancement it can now 
> accept any number of streams to merge.
> Based on the comparator, if more than one stream could provide the next value 
> then the selected value will follow the order of the streams as they appear 
> in the expression or were added to the MergeStream object.
> {code}
> merge(
>   search(collection1, q="id:(0 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(1)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc"
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7938) MergeStream to support N streams

2015-10-05 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943379#comment-14943379
 ] 

Joel Bernstein edited comment on SOLR-7938 at 10/5/15 1:49 PM:
---

I just created the umbrella ticket and linked it. If you have other tickets out 
there feel free to link them to SOLR-8125.


was (Author: joel.bernstein):
I just created the umbrella ticket and linked it. If you have other tickets out 
there feel free to link them to SOLR-8086.

> MergeStream to support N streams
> 
>
> Key: SOLR-7938
> URL: https://issues.apache.org/jira/browse/SOLR-7938
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: Trunk
>Reporter: Dennis Gove
>Priority: Minor
>  Labels: streaming
> Attachments: SOLR-7938.patch
>
>
> Enhances MergeStream to support merging N streams. This was previously 
> limited to merging just two streams but with this enhancement it can now 
> accept any number of streams to merge.
> Based on the comparator, if more than one stream could provide the next value 
> then the selected value will follow the order of the streams as they appear 
> in the expression or were added to the MergeStream object.
> {code}
> merge(
>   search(collection1, q="id:(0 4)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(1)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   search(collection1, q="id:(2)", fl="id,a_s,a_i,a_f", sort="a_f asc, a_s 
> asc"),
>   on="a_f asc"
> )
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8125) Umbrella ticket for Streaming and SQL issues

2015-10-05 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8125:


 Summary: Umbrella ticket for Streaming and SQL issues
 Key: SOLR-8125
 URL: https://issues.apache.org/jira/browse/SOLR-8125
 Project: Solr
  Issue Type: New Feature
  Components: SolrJ
Reporter: Joel Bernstein


This is an umbrella ticket for tracking issues around the *Streaming API*, 
*Streaming Expressions* and *Parallel SQL*.

Issues can be linked to this ticket and discussions about the road map can also 
happen on this ticket.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8127) Luke does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Alex K (JIRA)
Alex K created SOLR-8127:


 Summary: Luke does not know about dynamic fields on other shards 
fast enough
 Key: SOLR-8127
 URL: https://issues.apache.org/jira/browse/SOLR-8127
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis, SolrCloud
Affects Versions: 4.10.2
 Environment: 3 shards
Reporter: Alex K


Add a document with a new (never seen before) dynamic field. It will not be 
visible through Luke requests on the other shards for quite a while, and there 
is no documentation regarding exactly how long it will take. The result is that 
a query to Luke must be made to every shard in the cluster if all dynamic 
fields are needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8127) Luke does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943555#comment-14943555
 ] 

Shawn Heisey commented on SOLR-8127:


The actual Luke application is a standalone Lucene-level program (separate from 
Solr) that can open a Lucene index.  The Luke support in Solr is similar -- I 
believe that it only functions on the specific Lucene index stored in the shard 
replica that handles the request.  It has absolutely no knowledge of what 
happens at the upper levels of Solr or SolrCloud.  As far as I'm aware, it will 
never show you a dynamic field that is not explicitly present in the local 
Lucene index (shard replica).

I'm assuming that when you say "Luke" you are referring to the Luke request 
handler in Solr, not the separate Luke application.

What *may* be happening here, especially if your requests are being sent to the 
collection rather than a specific replica/shard, is that SolrCloud is bouncing 
your request between different shards and replicas within the collection, so 
the request may be handled by replica 1 of shard 1 on one attempt, then replica 
3 of shard 7 on another attempt.  If this is what is happening, then that field 
will only be present in the results if the request ends up on a replica for the 
specific shard containing the document with that field.
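
Until something smarter exists, the every-shard workaround from the 
description can be scripted with SolrJ's LukeRequest (the class behind 
/admin/luke). A rough sketch with hypothetical core URLs -- in practice you 
would read them from cluster state:

{code}
// Hedged sketch: union the field names reported by one core per shard.
import java.util.HashSet;
import java.util.Set;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.LukeRequest;
import org.apache.solr.client.solrj.response.LukeResponse;

public class AllFieldsSketch {
  public static void main(String[] args) throws Exception {
    String[] coreUrls = {                       // hypothetical core URLs
        "http://host1:8983/solr/collection1_shard1_replica1",
        "http://host2:8983/solr/collection1_shard2_replica1",
        "http://host3:8983/solr/collection1_shard3_replica1" };
    Set<String> allFields = new HashSet<>();
    for (String url : coreUrls) {
      HttpSolrClient core = new HttpSolrClient(url);
      try {
        LukeRequest luke = new LukeRequest();
        luke.setNumTerms(0);                    // field list only, no term stats
        LukeResponse rsp = luke.process(core);
        allFields.addAll(rsp.getFieldInfo().keySet());
      } finally {
        core.close();
      }
    }
    System.out.println(allFields);              // union across all shards
  }
}
{code}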

> Luke does not know about dynamic fields on other shards fast enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8127) Luke request handler does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Alex K (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943572#comment-14943572
 ] 

Alex K commented on SOLR-8127:
--

Amazingly quick response :)

You're right; I've updated the ticket title to indicate this is about the Luke 
request handler.

I believe the scenario you described is exactly what's happening. I didn't 
realize that Luke, operating at the Lucene level, has no way of knowing this 
stuff. Thank you for giving a definitive answer.

> Luke request handler does not know about dynamic fields on other shards fast 
> enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943573#comment-14943573
 ] 

Yonik Seeley commented on SOLR-5944:


Whew... this seems really tricky.  I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?)  What are the implications 
there?

{quote}
If, upon receiving the update on a replica, the doc version on index/tlog is 
not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.
{quote}
It seems like we can't return success on an update until that update has 
actually been applied?
Also, what happens to this prevPointer you are writing to the tlog if there is 
a commit in-between?

Another approach would be to get rid of update reordering... i.e. ensure that 
updates are not reordered when sending from leader to replicas.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943610#comment-14943610
 ] 

Yonik Seeley commented on SOLR-5944:


Another "progress but not perfection" approach would be to get single-node 
working and committed and open a new issue for cloud mode support.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943615#comment-14943615
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 10/5/15 4:32 PM:
-

Thanks for looking at it.
bq. Whew... this seems really tricky. I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?) What are the implications there?
I need to do the due diligence and write some tests to verify that things will 
work with log replays and peer sync.

Actually, since the following comment, things changed a bit (maybe simpler?):
bq. If, upon receiving the update on a replica, the doc version on index/tlog 
is not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.

Changed the above to the following:
{noformat}
If, upon receiving the update on a replica, the doc version on index/tlog is
not the "old version" (that means we've missed an update to the doc in between,
because of reordering), then we write this in-place update to a temporary
in-memory buffer and not actually write it to the tlog/index until we receive
the update whose "old version" is what we are expecting for the buffered
updates. As buffered updates get written to the tlog/index, they are removed
from the in-memory buffer.
{noformat}

This ensures that the tlog entries are always exactly in the order in which the 
documents were written.

bq. It seems like we can't return success on an update until that update has 
actually been applied?
Good point, I hadn't thought about this. Is it okay to return success if it 
was written to (at least) the in-memory buffer (which holds these reordered 
updates)? Of course, that would entail the risk that queries to this replica 
return the document reflecting only the updates from before the reordering 
started.

bq.  Also, what happens to this prevPointer you are writing to the tlog if 
there is a commit in-between?
This prevPointer is just used (in the patch) for RTGs. In the 
{{InPlaceUpdateDistribTest}}, I've introduced commits (with 1/3 probability) in 
between the re-ordered updates, and the RTG seems to work fine.

bq. Another approach would be to get rid of update reordering... i.e. ensure 
that updates are not reordered when sending from leader to replicas.
Sounds interesting. How do you suggest this can be achieved?


was (Author: ichattopadhyaya):
Thanks for looking at it.
bq. Whew... this seems really tricky. I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?) What are the implications there?
I need to do the due diligence and write some tests to verify that things will 
work with log replays and peer sync.

Actually, since the following comment, things changed a bit (maybe simpler?):
bq. If, upon receiving the update on a replica, the doc version on index/tlog 
is not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.

Changed the above to the following:
{noformat}
If, upon receiving the update on a replica, the doc version on index/tlog is
not the "old version" (that means we've missed an update to the doc in between,
because of reordering), then we write this in-place update to a temporary
in-memory buffer and not actually write it to the tlog/index until we receive
the update whose "old version" is what we are expecting for the buffered
updates. As buffered updates get written to the tlog/index, they are removed
from the in-memory buffer.
{noformat}

This ensures that the tlog entries are always exactly in the order in which the 
documents were written.

bq. It seems like we can't return success on an update until that update has 
actually been applied?
Good point, I hadn't thought about this. Is it okay to return success if it 
was written to (at least) the in-memory buffer (which holds these reordered 
updates)? Of course, that would entail the risk that queries to this replica 
return the document reflecting only the updates from before the reordering 
started.

bq.  Also, what happens to this prevPointer you are writing to the tlog if 
there 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14419 - Failure!

2015-10-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14419/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:46256/m_dzx/en/c8n_1x3_lf_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:46256/m_dzx/en/c8n_1x3_lf_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([F1DA0D30583FC5E4:798E32EAF6C3A81C]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:648)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:981)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:611)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:597)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:172)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Updated] (SOLR-8127) Luke does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Alex K (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex K updated SOLR-8127:
-
Description: 
Add a document with a new (never seen before) dynamic field. It will not be 
visible through Luke requests on the other shards for quite a while, and there 
is no documentation regarding exactly how long it will take. The result is that 
a query to Luke must be made to every shard in the cluster if all dynamic 
fields are needed.

All shards should be aware of a new dynamic field within seconds, if not 
milliseconds.

  was:Add a document with a new (never seen before) dynamic field. It will not 
be visible through Luke requests on the other shards for quite a while, and 
there is no documentation regarding exactly how long it will take. The result 
is that a query to Luke must be made to every shard in the cluster if all 
dynamic fields are needed.


> Luke does not know about dynamic fields on other shards fast enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8054) Add a GET command to ConfigSets API

2015-10-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943533#comment-14943533
 ] 

Yonik Seeley commented on SOLR-8054:


Should uploading (cloning) configsets be taken into consideration here?

I'm thinking about use cases involving
 - Downloading the config for the purposes of making a backup, with the ability 
to restore it later after trying some different things
 - Essentially cloning a config in a different cluster (testing, 
troubleshooting, etc)

bq. (I hope) the future is API-based like the Config/Schema API, not File-based.

Both seem useful (if I'm correctly understanding what you mean by File-based).  
APIs may manipulate the state, but dealing with the persisted state as a whole 
also seems useful.  For instance, cloning a config via config APIs that deal 
with individual settings seems difficult.
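
As a concrete (if low-level) sketch of what dealing with the persisted state 
as a whole looks like today: cloning a configset across clusters by 
round-tripping the files through a local directory. This assumes the 
ZkConfigManager helper from the 5.x codebase; treat the exact method names as 
approximate.

{code}
// Hedged sketch: download a configset from one ZooKeeper, upload to another.
import java.nio.file.Path;
import java.nio.file.Paths;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkConfigManager;

public class CloneConfigSketch {
  public static void main(String[] args) throws Exception {
    Path local = Paths.get("/tmp/myconf-backup");     // illustrative path

    SolrZkClient src = new SolrZkClient("zk-src:2181", 30000);
    try {
      new ZkConfigManager(src).downloadConfigDir("myconf", local);
    } finally {
      src.close();
    }

    SolrZkClient dst = new SolrZkClient("zk-dst:2181", 30000);
    try {
      new ZkConfigManager(dst).uploadConfigDir(local, "myconf");
    } finally {
      dst.close();
    }
  }
}
{code}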


> Add a GET command to ConfigSets API
> ---
>
> Key: SOLR-8054
> URL: https://issues.apache.org/jira/browse/SOLR-8054
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
>
> It would be useful to have a command that allows you to view a ConfigSet via 
> the API rather than going to zookeeper directly.  Mainly for security 
> reasons, e.g.
> - solr may have different security requirements than the ZNodes e.g. only 
> solr can view znodes but any authenticated user can call ConfigSet API
> - it's nicer than pointing to the web UI and using the zookeeper viewer, 
> because of the same security concerns as above and that you don't have to 
> know the internal zookeeper paths.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8129) HdfsChaosMonkeyNothingIsSafeTest failures

2015-10-05 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-8129:
--

 Summary: HdfsChaosMonkeyNothingIsSafeTest failures
 Key: SOLR-8129
 URL: https://issues.apache.org/jira/browse/SOLR-8129
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


New HDFS chaos test in SOLR-8123 hits a number of types of failures, including 
shard inconsistency.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8127) Luke request handler does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Alex K (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex K updated SOLR-8127:
-
Summary: Luke request handler does not know about dynamic fields on other 
shards fast enough  (was: Luke does not know about dynamic fields on other 
shards fast enough)

> Luke request handler does not know about dynamic fields on other shards fast 
> enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943570#comment-14943570
 ] 

Erick Erickson commented on SOLR-5944:
--

bq. my initial thought is to not support in-place partial updates to 
multivalued docvalues fields

+1, "progress not perfection" and all that.

Seriously, there would be so many people thrilled with update-in-place for 
a non-multiValued field that I think waiting for that support isn't necessary. 
Of course people will clamor for multiValued support ;)

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5944:
---
Attachment: SOLR-5944.patch

Made some progress on this.
* Reordered updates working.
* Added test for reordered updates. (TODO: some more needed)
* Fixed some issues with RealTimeGet of updates from my previous patch.

Need to clean up the nocommits, add more tests. Also, need to deal with 
multivalued docvalues fields: my initial thought is to not support in-place 
partial updates to multivalued docvalues fields.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: LUCENE-3687: Allow similarity to encode norms other than a single byte

2015-10-05 Thread Ivan Provalov
Mikhail is right.  I was getting hung up on the new API in LUCENE-3687.  
Instead, one could use the existing API and encode up to four different 
document-length encodings as single bytes packed bitwise into one long.  Thank 
you, Robert Muir, for pointing this out to me!
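
For the record, a minimal sketch of that bit-packing idea, assuming the 
long-valued norm API of released Lucene 4.x/5.x (Similarity.computeNorm 
returns a long); the slot assignments are purely illustrative:

{code}
// Illustrative sketch (not an existing Lucene API): pack up to four
// single-byte document-length encodings into the single long norm that
// Similarity.computeNorm(FieldInvertState) can return.
final class PackedNorms {
  static long pack(byte b0, byte b1, byte b2, byte b3) {
    return (b0 & 0xFFL)
         | (b1 & 0xFFL) << 8
         | (b2 & 0xFFL) << 16
         | (b3 & 0xFFL) << 24;
  }

  // Each similarity reads back only the byte slot it was assigned.
  static byte unpack(long packed, int slot) {    // slot in [0, 3]
    return (byte) ((packed >>> (slot * 8)) & 0xFF);
  }
}
{code}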


 On Sunday, October 4, 2015 6:56 AM, Ivan Provalov  
wrote:
   

 Mikhail,

Thank you for your reply.

Even though a long is returned from this function, it is always encoded as a 
single-byte lossy representation.  In order to change that and add other norms 
(for using other similarity functions on the same indexed data), there should 
be support for multiple norms.  Imagine using two similarities side-by-side -- 
a default and an LMSimilarity with discountOverlaps set to false, or trying a 
different doc-length normalization where the length shouldn't be a reciprocal 
square root function like in the DefaultSimilarity.  The only way of doing it 
is to have the multiple norms stored.

Here is the LUCENE-3687 Description:
"This removes the long standing limitation that norms are a single byte. Yet, 
we still need to expose this functionality to Similarity to write / encode 
norms in a different format."

I am wondering if there is a plan to roll this into a release.

Thanks,


Ivan




On Saturday, October 3, 2015 11:04 PM, Mikhail Khludnev 
 wrote:



Hello,

Norms can be long, see 
org.apache.lucene.search.similarities.TFIDFSimilarity.encodeNormValue(float)
  /** Encodes a normalization factor for storage in an index. */
  public abstract long encodeNormValue(float f);



On Sun, Oct 4, 2015 at 6:39 AM, Ivan Provalov  
wrote:

When is this 4.0-ALPHA feature going to be included in a released version?  
>https://issues.apache.org/jira/browse/LUCENE-3687  
>It's the "Allow similarity to encode norms other than a single byte".  
>
>
>I thought that it would be in the released versions, but it looks like it's 
>only on 4.0-alpha.  I am using 4.6.1, but also looked in 5.3.1 source, none of 
>these include the changes.  
>
>
>With these changes, the new API in the sim class is accepting the norms, like 
>so: computeNorm(FieldInvertState state, Norm norm). 
>
>
>Thank you,
>
>
>Ivan Provalov


-- 

Sincerely yours
Mikhail Khludnev
Principal Engineer,

Grid Dynamics


  

[jira] [Commented] (SOLR-8127) Luke request handler does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943612#comment-14943612
 ] 

Upayavira commented on SOLR-8127:
-

The Luke Request Handler inquires of an index, not a collection. Therefore, 
adding a document with a not-yet-seen dynamic field to another shard will never 
show on the current shard. This functionality long precedes SolrCloud and has 
not been updated to take account of it. You must think of the 
LukeRequestHandler as requesting index information relating to the shard you 
are querying, and no others.

Not sure if there is something to be done with this ticket. Converting the 
LukeRequestHandler to handle distributed requests would be neat, but isn't what 
is described here.

> Luke request handler does not know about dynamic fields on other shards fast 
> enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8128) Current locale not set on LocaleConfig-based Velocity tools

2015-10-05 Thread Erik Hatcher (JIRA)
Erik Hatcher created SOLR-8128:
--

 Summary: Current locale not set on LocaleConfig-based Velocity 
tools
 Key: SOLR-8128
 URL: https://issues.apache.org/jira/browse/SOLR-8128
 Project: Solr
  Issue Type: Bug
Reporter: Erik Hatcher
Assignee: Erik Hatcher


The locale feature of VelocityResponseWriter is currently used to set the 
locale for the $resource tool.  However, there are some other tools that 
should leverage the locale setting but are falling back to the default system 
locale.

For example, $number.format should allow {{$number.format("integer",5)}} to 
render the number in the locale specified by v.locale, but it always uses the 
default system locale.  

A workaround for number formatting is to use the $resource.locale setting like 
this:  {{$number.format("integer",5,$resource.locale)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-8127) Luke request handler does not know about dynamic fields on other shards fast enough

2015-10-05 Thread Alex K (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex K closed SOLR-8127.

Resolution: Invalid

> Luke request handler does not know about dynamic fields on other shards fast 
> enough
> ---
>
> Key: SOLR-8127
> URL: https://issues.apache.org/jira/browse/SOLR-8127
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.2
> Environment: 3 shards
>Reporter: Alex K
>  Labels: dynamic, luke, replication, sharding
>
> Add a document with a new (never seen before) dynamic field. It will not be 
> visible through Luke requests on the other shards for quite a while, and 
> there is no documentation regarding exactly how long it will take. The result 
> is that a query to Luke must be made to every shard in the cluster if all 
> dynamic fields are needed.
> All shards should be aware of a new dynamic field within seconds, if not 
> milliseconds.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943615#comment-14943615
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


Thanks for looking at it.
bq. Whew... this seems really tricky. I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?) What are the implications there?
I need to do the due diligence and write some tests to verify that things will 
work with log replays and peer sync.

Actually, since the following comment, things changed a bit (maybe simpler?):
bq. If, upon receiving the update on a replica, the doc version on index/tlog 
is not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.

Changed the above to the following:
{noformat}
If, upon receiving the update on a replica, the doc version on index/tlog is
not the "old version" (that means we've missed an update to the doc in between,
because of reordering), then we write this in-place update to a temporary
in-memory buffer and not actually write it to the tlog/index until we receive
the update whose "old version" is what we are expecting for the buffered
updates. As buffered updates get written to the tlog/index, they are removed
from the in-memory buffer.
{noformat}

This ensures that the tlog entries are always exactly in the order in which the 
documents were written.

bq. It seems like we can't return success on an update until that update has 
actually been applied?
Good point, I hadn't thought about this. Is it okay to return success if it 
was written to (at least) the in-memory buffer (which holds these reordered 
updates)? Of course, that would entail the risk that queries to this replica 
return the document reflecting only the updates from before the reordering 
started.

bq.  Also, what happens to this prevPointer you are writing to the tlog if 
there is a commit in-between?
This prevPointer is just used (in the patch) for RTGs. In the 
{{InPlaceUpdateDistribTest}}, I've introduced commits (with 1/3 probability) in 
between the re-ordered updates, and the RTG seems to work fine.

bq. Another approach would be to get rid of update reordering... i.e. ensure 
that updates are not reordered when sending from leader to replicas.
Sounds interesting. How do you suggest this can be achieved?

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-5944) Support updates of numeric DocValues

2015-10-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943615#comment-14943615
 ] 

Ishan Chattopadhyaya edited comment on SOLR-5944 at 10/5/15 4:56 PM:
-

Thanks for looking at it.
bq. Whew... this seems really tricky. I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?) What are the implications there?
I need to do the due diligence and write some tests to verify that things will 
work with log replays and peer sync.

Actually, since the following comment, things changed a bit (maybe simpler?):
bq. If, upon receiving the update on a replica, the doc version on index/tlog 
is not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.

Changed the above to the following:
{noformat}
If, upon receiving the update on a replica, the last doc's __version__ in
index/tlog is not the "prevVersion" of the update (that means we've missed one
or more updates to the doc in between, because of reordering), then we write
this in-place update to a temporary in-memory buffer and not actually write it
to the tlog/index until we receive the update whose __version__ is what we are
expecting as the prevVersion for the buffered update. As buffered updates get
written to the tlog/index, they are removed from the in-memory buffer.
{noformat}

This ensures that the tlog entries are always exactly in the order in which the 
documents were written.
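
To make the buffering idea concrete, here is a purely illustrative sketch -- 
none of these class or field names come from the patch -- of version-gated 
buffering of reordered in-place updates:

{code}
import java.util.HashMap;
import java.util.Map;

// Invented names; not the patch's implementation. A reordered in-place update
// is parked until the update it depends on (prevVersion) has been applied.
class Update {
  String docId;
  long version;      // this update's __version__
  long prevVersion;  // the __version__ this update expects in index/tlog
}

class InPlaceUpdateBuffer {
  // docId -> (prevVersion -> parked update waiting for that version)
  private final Map<String, Map<Long, Update>> pending = new HashMap<>();

  synchronized void onUpdate(Update u, long versionInIndex) {
    if (u.prevVersion == versionInIndex) {
      apply(u);                   // in order: write to tlog, update index in place
      drain(u.docId, u.version);  // applying this may unblock parked updates
    } else {
      pending.computeIfAbsent(u.docId, k -> new HashMap<>())
             .put(u.prevVersion, u);  // park the reordered update
    }
  }

  private void drain(String docId, long appliedVersion) {
    Map<Long, Update> parked = pending.get(docId);
    if (parked == null) return;
    Update next = parked.remove(appliedVersion);
    while (next != null) {        // each applied update may unblock the next one
      apply(next);
      next = parked.remove(next.version);
    }
    if (parked.isEmpty()) pending.remove(docId);
  }

  private void apply(Update u) {
    // write to tlog and update the docvalue in place (elided)
  }
}
{code}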

bq. It seems like we can't return success on an update until that update has 
actually been applied?
Good point, I hadn't thought about this. Is it okay to return success if it 
was written to (at least) the in-memory buffer (which holds these reordered 
updates)? Of course, that would entail the risk that queries to this replica 
return the document reflecting only the updates from before the reordering 
started.

bq.  Also, what happens to this prevPointer you are writing to the tlog if 
there is a commit in-between?
This prevPointer is just used (in the patch) for RTGs. In the 
{{InPlaceUpdateDistribTest}}, I've introduced commits (with 1/3 probability) in 
between the re-ordered updates, and the RTG seems to work fine.

bq. Another approach would be to get rid of update reordering... i.e. ensure 
that updates are not reordered when sending from leader to replicas.
Sounds interesting. How do you suggest this can be achieved?


was (Author: ichattopadhyaya):
Thanks for looking at it.
bq. Whew... this seems really tricky. I've been diving into the Chaos* fails 
recently, and at first blush it seems like this would add more complexity to 
recovery as well (log replays, peer sync, etc?) What are the implications there?
I need to do the due diligence and write some tests to verify that things will 
work with log replays and peer sync.

Actually, since the following comment, things changed a bit (maybe simpler?):
bq. If, upon receiving the update on a replica, the doc version on index/tlog 
is not the "old version" (that means we've missed an update to the doc in 
between, because of reordering), then we can write this update to tlog (and 
mark it somehow as something we're waiting on) but not actually update the doc 
in the index until we receive the update whose "old version" is what we are 
expecting. After doing this (for all pending updates for the doc), we could 
unmark the documents.

Changed the above to the following:
{noformat}
If, upon receiving the update on a replica, the doc version on index/tlog is
not the "old version" (that means we've missed an update to the doc in between,
because of reordering), then we write this in-place update to a temporary
in-memory buffer and not actually write it to the tlog/index until we receive
the update whose "old version" is what we are expecting for the buffered
updates. As buffered updates get written to the tlog/index, they are removed
from the in-memory buffer.
{noformat}

This ensures that the tlog entries are always exactly in the order in which the 
documents were written.

bq. It seems like we can't return success on an update until that update has 
actually been applied?
Good point, I hadn't thought about this. Is it okay to return success if it 
was written to (at least) the in-memory buffer (which holds these reordered 
updates)? Of course, that would entail the risk that queries to this replica 
return the document reflecting only the updates from before the reordering 
started.

bq.  Also, what happens to this 

[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-05 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943664#comment-14943664
 ] 

Ludovic Boutros commented on SOLR-8117:
---

Thank you Paul, 

This example is good.
But do you agree that the given test in the patch should pass? (I mean, a 
condition cores:<1 should let a core be created on an empty node?)

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if the current core count is equal to the core count in the 
> condition minus 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of the code, an additional verification of all the conditions is 
> done with the incremented core count, and therefore it fails.
> I don't know why this additional verification is needed, and removing it seems 
> to fix the issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org