[jira] [Commented] (SOLR-6325) Expose per-collection and per-shard aggregate statistics

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088897#comment-14088897
 ] 

Shalin Shekhar Mangar commented on SOLR-6325:
-

Not all of these stats are necessary. I will create a whitelist of stats that 
should be copied over. I will also use the API added in SOLR-6332 to find the 
relevant handlers. There will be a whitelist of handlers for which stats will 
be collected; extra ones can be specified via request parameters to this API.
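The whitelist idea sketched above could look something like this. This is only an illustrative sketch; the class name, the stat names in the default whitelist, and the extras parameter are all invented for the example, not taken from the patch:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class StatsWhitelist {
    // Hypothetical default whitelist of stats worth aggregating per shard.
    static final Set<String> DEFAULT_WHITELIST =
        Set.of("requests", "errors", "avgTimePerRequest");

    // Copy over only whitelisted stats, plus any extras named in the request.
    static Map<String, Object> filter(Map<String, Object> allStats,
                                      Set<String> extras) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : allStats.entrySet()) {
            if (DEFAULT_WHITELIST.contains(e.getKey())
                    || extras.contains(e.getKey())) {
                out.put(e.getKey(), e.getValue());
            }
        }
        return out;
    }
}
```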

> Expose per-collection and per-shard aggregate statistics
> 
>
> Key: SOLR-6325
> URL: https://issues.apache.org/jira/browse/SOLR-6325
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-6325.patch
>
>
> SolrCloud doesn't provide any aggregate stats about the cluster or a 
> collection. Very common questions such as document counts per shard, index 
> sizes, request rates, etc. cannot be answered easily without figuring out the 
> cluster state, invoking multiple core admin APIs and aggregating the results 
> manually.
> I propose that we expose an API which returns each of the following on a 
> per-collection and per-shard basis:
> # Document counts
> # Index size on disk
> # Query request rate
> # Indexing request rate
> # Real time get request rate
> I am not yet sure if this should be a distributed search component or a 
> collection API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6315) Remove SimpleOrderedMap

2014-08-06 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088894#comment-14088894
 ] 

Shai Erera commented on SOLR-6315:
--

{quote}
One option for the stated purpose of this issue is to add a boolean flag within 
NamedList (possibly with a getter/setter) to use in JSONResponseWriter. Another 
is to bite the bullet and actually implement an extension of NamedList that 
behaves differently – in this case (based on what I see in JSONResponseWriter 
and the javadocs), preventing duplicates and being more efficient for key 
lookups.
{quote}

This is exactly what I proposed in my last comment -- to make SimpleOrderedMap 
a NamedList that is useful on its own, e.g. by forbidding duplicates and null 
values. I just have no idea which tests will fail, and whether there are places 
in the code that use SimpleOrderedMap for outputting in "map" form but rely on 
being able to add multiple values for the same key.

It looks weird to me that we allow that and still output as a map, because the 
JSON parser has to do something with these multi-valued keys. So I wrote this 
simple test to check what happens if you do that:

{code}
public static void main(String[] args) throws Exception {
  JSONResponseWriter writer = new JSONResponseWriter();
  StringWriter sw = new StringWriter();
  SolrQueryRequest req = new LocalSolrQueryRequest(null, new NamedList());
  SolrQueryResponse rsp = new SolrQueryResponse();
  rsp.add("foo", 1);
  rsp.add("foo", 2);
  writer.write(sw, req, rsp);
  String json = sw.toString();
  System.out.println(json);
  
  Map rspMap = (Map)ObjectBuilder.fromJSON(json);
  System.out.println(rspMap);
}
{code}

And it prints:

{noformat}
{"foo":1,"foo":2}

{foo=2}
{noformat}

This makes me believe that whoever uses SimpleOrderedMap does not in fact add 
multiple values to the same key, because at parse time only the last one 
prevails. But I guess Yonik would know the answer to that better.

If that is indeed the case, I think it's best if we just make SimpleOrderedMap 
simple: disallow null values and multi-valued keys. Then outputting it in "map" 
style would make sense.

> Remove SimpleOrderedMap
> ---
>
> Key: SOLR-6315
> URL: https://issues.apache.org/jira/browse/SOLR-6315
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: SOLR-6315.patch
>
>
> As I described on SOLR-912, SimpleOrderedMap is a redundant and generally 
> useless class with confusing jdocs. We should remove it. I'll attach a patch 
> shortly.






[jira] [Commented] (SOLR-6191) Self Describing SearchComponents, RequestHandlers, params. etc.

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088891#comment-14088891
 ] 

Shalin Shekhar Mangar commented on SOLR-6191:
-

I think we should split this issue into two APIs:
# A node capabilities API which returns all the paths and the request handlers 
supported by a node
# A method introspection API which returns all the parameters required for a 
method.

They might actually end up as one API endpoint, but this split will simplify 
development because #1 only needs the information in solrconfig.xml; no 
annotations or interfaces are necessary. #2 can be implemented after the 
implementation details are fleshed out. Another reason for asking for this 
separation is that I am blocked on #1 for SOLR-6235.
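The shape of the #1 response could be as simple as a path-to-handler-class mapping read out of solrconfig.xml. This is purely an illustrative sketch; the class name, method names, and the example handler registration are assumptions, not part of the proposed API:

```java
import java.util.Map;
import java.util.TreeMap;

// Minimal model of a "node capabilities" response: the registered
// request-handler paths and their implementing classes.
public class NodeCapabilities {
    private final Map<String, String> handlersByPath = new TreeMap<>();

    // Called once per handler found in solrconfig.xml.
    public void register(String path, String handlerClass) {
        handlersByPath.put(path, handlerClass);
    }

    // What the API would return; no annotations or interfaces required.
    public Map<String, String> describe() {
        return Map.copyOf(handlersByPath);
    }
}
```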

> Self Describing SearchComponents, RequestHandlers, params. etc.
> ---
>
> Key: SOLR-6191
> URL: https://issues.apache.org/jira/browse/SOLR-6191
> Project: Solr
>  Issue Type: Bug
>Reporter: Vitaliy Zhovtyuk
>Assignee: Noble Paul
>  Labels: features
> Attachments: SOLR-6191.patch, SOLR-6191.patch, SOLR-6191.patch
>
>
> We should have self describing parameters for search components, etc.
> I think we should support UNIX style short and long names and that you should 
> also be able to get a short description of what a parameter does if you ask 
> for INFO on it.
> For instance, &fl could also be &fieldList, etc.
> Also, we should put this into the base classes so that new components can add 
> to it.






[jira] [Updated] (SOLR-6332) Add a core admin API to describe registered APIs

2014-08-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6332:
-

Description: 
{{/admin/cores?action=methodinfo&core=}}

should list all the paths registered in that core and their classes 

  was:
/core/admin?action=methodinfo&core=

should list all the paths registered in that core and their classes 
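A client could build a request to the updated path along these lines. The base URL and core name below are placeholders for illustration only; the issue itself leaves the core name unspecified:

```java
// Hypothetical helper for constructing the proposed core admin request URL.
public class MethodInfoUrl {
    public static String build(String baseUrl, String core) {
        return baseUrl + "/admin/cores?action=methodinfo&core=" + core;
    }
}
```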


> Add a core admin API to describe registered APIs
> 
>
> Key: SOLR-6332
> URL: https://issues.apache.org/jira/browse/SOLR-6332
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> {{/admin/cores?action=methodinfo&core=}}
> should list all the paths registered in that core and their classes 






[jira] [Updated] (LUCENE-5871) Simplify or remove use of Version in IndexWriterConfig

2014-08-06 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5871:
---

Attachment: LUCENE-5871.patch

Thanks for all the ideas! This new patch removes {{shutdown()}} (as a public 
method) and makes {{close()}} act based on {{commitOnClose}}.  There are a 
couple of things still to fix:
* I'm unsure whether TestIndexWriter.testCloseWhileMergeIsRunning is really 
testing anything (nothing waits on mergeStarted, and the exception check of the 
old behavior was the only assertion?)
* TestIndexWriterMerging.testNoWaitClose hangs (I haven't looked at this at all 
yet)
* I haven't checked the latest changes against Solr yet.
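To make the {{commitOnClose}} semantics concrete, here is a toy model. This is not Lucene code -- the class and its counters are invented solely to illustrate the two close() behaviors under discussion:

```java
// With commitOnClose=true, close() commits pending changes first;
// with false, close() just releases resources and uncommitted changes
// are discarded.
public class ToyWriter implements AutoCloseable {
    private final boolean commitOnClose;
    private int pending;    // docs added but not yet committed
    private int committed;  // docs made durable

    public ToyWriter(boolean commitOnClose) {
        this.commitOnClose = commitOnClose;
    }

    public void addDocument() { pending++; }

    public void commit() {
        committed += pending;
        pending = 0;
    }

    @Override
    public void close() {
        if (commitOnClose) {
            commit();
        }
        pending = 0; // anything uncommitted is dropped
    }

    public int committedDocs() { return committed; }
}
```

The point of the option is that the caller chooses this behavior explicitly instead of it being inferred from a Version constant.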

> Simplify or remove use of Version in IndexWriterConfig
> --
>
> Key: LUCENE-5871
> URL: https://issues.apache.org/jira/browse/LUCENE-5871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5871.patch, LUCENE-5871.patch
>
>
> {{IndexWriter}} currently uses the Version from {{IndexWriterConfig}} to 
> determine the semantics of {{close()}}.  This is a trapdoor for users, as 
> they often default to just sending Version.LUCENE_CURRENT since they don't 
> understand what it will be used for.  Instead, we should make the semantics 
> of close a direct option in IWC.






[jira] [Created] (SOLR-6332) Add a core admin API to describe registered APIs

2014-08-06 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6332:


 Summary: Add a core admin API to describe registered APIs
 Key: SOLR-6332
 URL: https://issues.apache.org/jira/browse/SOLR-6332
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul


/core/admin?action=methodinfo&core=

should list all the paths registered in that core and their classes 






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_11) - Build # 10969 - Still Failing!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10969/
Java: 32bit/jdk1.8.0_11 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 13806 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp/junit4-J0-20140807_04_655.sysout
   [junit4] >>> JVM J0: stdout (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (malloc) failed to allocate 32756 bytes 
for ChunkPool::allocate
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/hs_err_pid2662.log
   [junit4] <<< JVM J0: EOF 

[...truncated 199 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
/var/lib/jenkins/tools/java/32bit/jdk1.8.0_11/jre/bin/java -server 
-XX:+UseSerialGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/heapdumps 
-Dtests.prefix=tests -Dtests.seed=6168BEFC9ED9767B -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=3 
-DtempDir=./temp -Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/temp
 
-Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/clover/db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/tools/junit4/tests.policy
 -Dlucene.version=5.0-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Dfile.encoding=US-ASCII -classpath 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/test:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/test-framework/lib/junit4-ant-2.1.6.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-solrj/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/common/lucene-analyzers-common-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/kuromoji/lucene-analyzers-kuromoji-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/analysis/phonetic/lucene-analyzers-phonetic-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/codecs/lucene-codecs-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/highlighter/lucene-highlighter-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/memory/lucene-memory-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/misc/lucene-misc-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/spatial/lucene-spatial-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/expressions/lucene-expressions-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/suggest/lucene-suggest-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/grouping/lucene-grouping-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queries/lucene-queries-5.0
-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/queryparser/lucene-queryparser-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build/join/lucene-join-5.0-SNAPSHOT.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/antlr-runtime-3.5.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/lib/asm-commons-4.1.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-trun

[jira] [Resolved] (SOLR-6285) StackOverflowError in SolrCloud test on jenkins

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6285.
-

Resolution: Fixed

> StackOverflowError in SolrCloud test on jenkins
> ---
>
> Key: SOLR-6285
> URL: https://issues.apache.org/jira/browse/SOLR-6285
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>
> https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2039/
> {code}
>   [junit4] JVM J1: stderr was not empty, see: 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build/solr-core/test/temp/junit4-J1-20140726_114347_582.syserr
>[junit4] >>> JVM J1: stderr (verbatim) 
>[junit4] WARN: Unhandled exception in event serialization. -> 
> java.lang.StackOverflowError
>[junit4]   at java.util.HashMap.hash(HashMap.java:362)
>[junit4]   at java.util.HashMap.getEntry(HashMap.java:462)
>[junit4]   at java.util.HashMap.get(HashMap.java:417)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.ParameterizedTypeHandlerMap.getHandlerFor(ParameterizedTypeHandlerMap.java:139)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.GsonToMiniGsonTypeAdapterFactory.create(GsonToMiniGsonTypeAdapterFactory.java:60)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.(ReflectiveTypeAdapterFactory.java:75)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.createBoundField(ReflectiveTypeAdapterFactory.java:74)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(ReflectiveTypeAdapterFactory.java:112)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(ReflectiveTypeAdapterFactory.java:65)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:504)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:87)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:410)
>[junit4]   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>[junit4]   at 
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>[junit4]   at java.io.PrintStream.flush(PrintStream.java:338)
>[junit4]   at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
>[junit4]   at java.io.PrintStream.write(PrintStream.java:482)
>[junit4]   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>[junit4]   at 
> sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>[junit4]   at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
>[junit4]   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>[junit4]   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>[junit4]   at 
> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
>[junit4]   at 
> org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
>[junit4]   at 
> org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>[junit4]   at 
> org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>[junit4]   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>[junit4]   at org.apache.log4j.Category.callAppenders(Category.java:206)
>[junit4]   at org.apache.log4j.Category.forcedLog(Category.java:391)
>[junit4]   at org.apache.log4j.Category.log(Category.java:856)
>[junit4]   at 
> org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
>[junit4]   at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:191)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>[junit4]   at 
> org.apache.solr.cloud.ShardLead

[jira] [Resolved] (SOLR-4385) Stop using SVN Keyword Substitution in Solr src code.

2014-08-06 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-4385.
--

   Resolution: Fixed
Fix Version/s: (was: 4.9)
   4.10

Committed to trunk and branch_4x.

> Stop using SVN Keyword Substitution in Solr src code.
> -
>
> Key: SOLR-4385
> URL: https://issues.apache.org/jira/browse/SOLR-4385
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Steve Rowe
> Fix For: 5.0, 4.10
>
> Attachments: detect-keywords-property.patch, detector.patch, 
> detector.patch, detector.patch
>
>







[jira] [Resolved] (SOLR-6285) StackOverflowError in SolrCloud test on jenkins

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6285.
-

Resolution: Duplicate

> StackOverflowError in SolrCloud test on jenkins
> ---
>
> Key: SOLR-6285
> URL: https://issues.apache.org/jira/browse/SOLR-6285
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>
> https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2039/
> {code}
>   [junit4] JVM J1: stderr was not empty, see: 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build/solr-core/test/temp/junit4-J1-20140726_114347_582.syserr
>[junit4] >>> JVM J1: stderr (verbatim) 
>[junit4] WARN: Unhandled exception in event serialization. -> 
> java.lang.StackOverflowError
>[junit4]   at java.util.HashMap.hash(HashMap.java:362)
>[junit4]   at java.util.HashMap.getEntry(HashMap.java:462)
>[junit4]   at java.util.HashMap.get(HashMap.java:417)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.ParameterizedTypeHandlerMap.getHandlerFor(ParameterizedTypeHandlerMap.java:139)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.GsonToMiniGsonTypeAdapterFactory.create(GsonToMiniGsonTypeAdapterFactory.java:60)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.(ReflectiveTypeAdapterFactory.java:75)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.createBoundField(ReflectiveTypeAdapterFactory.java:74)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(ReflectiveTypeAdapterFactory.java:112)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(ReflectiveTypeAdapterFactory.java:65)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:504)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:87)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:410)
>[junit4]   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>[junit4]   at 
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>[junit4]   at java.io.PrintStream.flush(PrintStream.java:338)
>[junit4]   at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
>[junit4]   at java.io.PrintStream.write(PrintStream.java:482)
>[junit4]   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>[junit4]   at 
> sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>[junit4]   at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
>[junit4]   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>[junit4]   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>[junit4]   at 
> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
>[junit4]   at 
> org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
>[junit4]   at 
> org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>[junit4]   at 
> org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>[junit4]   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>[junit4]   at org.apache.log4j.Category.callAppenders(Category.java:206)
>[junit4]   at org.apache.log4j.Category.forcedLog(Category.java:391)
>[junit4]   at org.apache.log4j.Category.log(Category.java:856)
>[junit4]   at 
> org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
>[junit4]   at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:191)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>[junit4]   at 
> org.apache.solr.cloud.Shard

[jira] [Commented] (SOLR-6213) StackOverflowException in Solr cloud's leader election

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088850#comment-14088850
 ] 

Shalin Shekhar Mangar commented on SOLR-6213:
-

More recent failures: 
# https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2039/ (from 
SOLR-6285)
# http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10846/

> StackOverflowException in Solr cloud's leader election
> --
>
> Key: SOLR-6213
> URL: https://issues.apache.org/jira/browse/SOLR-6213
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 4.10
>Reporter: Dawid Weiss
>Priority: Critical
>
> This is what's causing test hangs (at least on FreeBSD, LUCENE-5786), 
> possibly on other machines too. The problem is stack overflow from looped 
> calls in:
> {code}
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:212)
>   > 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>   > 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:313)
>   > org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:448)
>   > 
> org.apache.solr.cloud.ShardLeaderElectionContext

[jira] [Reopened] (SOLR-6285) StackOverflowError in SolrCloud test on jenkins

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reopened SOLR-6285:
-


> StackOverflowError in SolrCloud test on jenkins
> ---
>
> Key: SOLR-6285
> URL: https://issues.apache.org/jira/browse/SOLR-6285
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>
> https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2039/
> {code}
>   [junit4] JVM J1: stderr was not empty, see: 
> /usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Tests-4.x-Java7/solr/build/solr-core/test/temp/junit4-J1-20140726_114347_582.syserr
>[junit4] >>> JVM J1: stderr (verbatim) 
>[junit4] WARN: Unhandled exception in event serialization. -> 
> java.lang.StackOverflowError
>[junit4]   at java.util.HashMap.hash(HashMap.java:362)
>[junit4]   at java.util.HashMap.getEntry(HashMap.java:462)
>[junit4]   at java.util.HashMap.get(HashMap.java:417)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.ParameterizedTypeHandlerMap.getHandlerFor(ParameterizedTypeHandlerMap.java:139)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.GsonToMiniGsonTypeAdapterFactory.create(GsonToMiniGsonTypeAdapterFactory.java:60)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory$1.(ReflectiveTypeAdapterFactory.java:75)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.createBoundField(ReflectiveTypeAdapterFactory.java:74)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(ReflectiveTypeAdapterFactory.java:112)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(ReflectiveTypeAdapterFactory.java:65)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:504)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:87)
>[junit4]   at 
> com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:410)
>[junit4]   at 
> java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>[junit4]   at 
> java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
>[junit4]   at java.io.PrintStream.flush(PrintStream.java:338)
>[junit4]   at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
>[junit4]   at java.io.PrintStream.write(PrintStream.java:482)
>[junit4]   at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
>[junit4]   at 
> sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
>[junit4]   at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
>[junit4]   at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
>[junit4]   at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
>[junit4]   at 
> org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
>[junit4]   at 
> org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
>[junit4]   at 
> org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
>[junit4]   at 
> org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
>[junit4]   at 
> org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
>[junit4]   at org.apache.log4j.Category.callAppenders(Category.java:206)
>[junit4]   at org.apache.log4j.Category.forcedLog(Category.java:391)
>[junit4]   at org.apache.log4j.Category.log(Category.java:856)
>[junit4]   at 
> org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
>[junit4]   at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:191)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
>[junit4]   at 
> org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
>[junit4]   at 
> org.apache.solr.cloud.ShardLeaderElectionContext.rejo

[IMPORTANT] Please change your Subversion config [auto-props] to exclude svn:keywords

2014-08-06 Thread Steve Rowe
I just committed SOLR-4385 ("Stop using SVN Keyword Substitution in Solr src
code"), which puts in place two checks (in both Lucene and Solr):

1. If any *.java file contains any Subversion keyword -- $Id, $LastChangedDate, 
$Date, $LastChangedRevision, $LastChangedRev, $Revision, $Rev, $LastChangedBy, 
$Author, $HeadURL, $URL, or $Header -- ‘ant validate-source-patterns’ will fail.

2. If any file has the "svn:keywords" property (regardless of value), ‘ant 
check-svn-working-copy’ will fail. 

‘ant precommit’ invokes both the ‘validate-source-patterns’ and the 
‘check-svn-working-copy’ targets.
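The first check amounts to scanning source files for any of the listed keyword tokens. An illustrative standalone re-implementation of that scan (not the actual build-script code; the class and method names here are hypothetical) might look like:

```java
import java.util.regex.Pattern;

public class SvnKeywordCheck {
    // The SVN keywords that 'ant validate-source-patterns' rejects, per the
    // list above. Longer alternatives are listed before their prefixes
    // (e.g. LastChangedRevision before LastChangedRev) so the regex matches
    // the full keyword.
    private static final Pattern KEYWORDS = Pattern.compile(
        "\\$(Id|LastChangedDate|Date|LastChangedRevision|LastChangedRev|"
        + "Revision|Rev|LastChangedBy|Author|HeadURL|URL|Header)\\b");

    static boolean containsSvnKeyword(String source) {
        return KEYWORDS.matcher(source).find();
    }

    public static void main(String[] args) {
        System.out.println(containsSvnKeyword("// $Id$ placeholder"));  // true
        System.out.println(containsSvnKeyword("int id = 42;"));         // false
    }
}
```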

Please change the auto-props in your Subversion configuration (usually at 
~/.subversion/config in the [auto-props] section, though non-command-line 
clients may have different configuration mechanisms) so that none of the file 
globs' lists of properties includes “svn:keywords=whatever”.
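For concreteness, an affected entry in the [auto-props] section might change like this (the file glob and property values here are only a hypothetical example):

```
[auto-props]
# before -- the svn:keywords property would trip the new checks:
# *.java = svn:eol-style=native;svn:keywords=Date Author Id Revision HeadURL
# after -- svn:keywords removed:
*.java = svn:eol-style=native
```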

Thanks,
Steve


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4385) Stop using SVN Keyword Substitution in Solr src code.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088844#comment-14088844
 ] 

ASF subversion and git services commented on SOLR-4385:
---

Commit 1616403 from [~sar...@syr.edu] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1616403 ]

SOLR-4385: Stop using SVN Keyword Substitution in Solr src code (merged trunk 
r1616393)

> Stop using SVN Keyword Substitution in Solr src code.
> -
>
> Key: SOLR-4385
> URL: https://issues.apache.org/jira/browse/SOLR-4385
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Steve Rowe
> Fix For: 4.9, 5.0
>
> Attachments: detect-keywords-property.patch, detector.patch, 
> detector.patch, detector.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)




[jira] [Commented] (SOLR-6314) Multi-threaded facet counts differ when SolrCloud has >1 shard

2014-08-06 Thread Vamsee Yarlagadda (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088833#comment-14088833
 ] 

Vamsee Yarlagadda commented on SOLR-6314:
-

Thanks [~erickerickson] for looking into this.

Yes, you are right. It makes perfect sense to return a count for every unique 
facet request rather than repeating the facets over and over. The facet result 
returned in the multi-shard case (which goes through the aggregating code) may 
well be the right behavior. Perhaps we should fix the behavior for the 
single-shard case and update the unit tests to match.

I can't think of any particular reason why the initial implementation of 
multithreaded faceting created a test that checks for duplicate facet counts. 
It might be a test bug too?
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654

Thoughts?

> Multi-threaded facet counts differ when SolrCloud has >1 shard
> --
>
> Key: SOLR-6314
> URL: https://issues.apache.org/jira/browse/SOLR-6314
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other, SolrCloud
>Affects Versions: 5.0
>Reporter: Vamsee Yarlagadda
>Assignee: Erick Erickson
>
> I am trying to work with multi-threaded faceting on SolrCloud and in the 
> process i was hit by some issues.
> I am currently running the below upstream test on different SolrCloud 
> configurations and i am getting a different result set per configuration.
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
> Setup:
> - *Indexed 50 docs into SolrCloud.*
> - *If the SolrCloud has only 1 shard, the facet field query has the below 
> output (which matches with the expected upstream test output - # facet fields 
> ~ 50).*
> {code}
> $ curl  
> "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml";
> 
> 
> 
>   0
>   21
>   
> true
> id
> true
> id:*
> -1
> 1000
> 
>   f0_ws
>   f0_ws
>   f0_ws
>   f0_ws
>   f0_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f2_ws
>   f2_ws
>   f2_ws
>   f2_ws
>   f2_ws
>   f3_ws
>   f3_ws
>   f3_ws
>   f3_ws
>   f3_ws
>   f4_ws
>   f4_ws
>   f4_ws
>   f4_ws
>   f4_ws
>   f5_ws
>   f5_ws
>   f5_ws
>   f5_ws
>   f5_ws
>   f6_ws
>   f6_ws
>   f6_ws
>   f6_ws
>   f6_ws
>   f7_ws
>   f7_ws
>   f7_ws
>   f7_ws
>   f7_ws
>   f8_ws
>   f8_ws
>   f8_ws
>   f8_ws
>   f8_ws
>   f9_ws
>   f9_ws
>   f9_ws
>   f9_ws
>   f9_ws
> 
> xml
> 1
>   
> 
> 
>   
> 0.0
> 
> 
>   
>   
> 
>   25
>   25
> 
> 
>   25
>   25
> 
> 
>   25
>   25
> 
> 
>   25
>   25
> 
> 
>   25
>   25
> 
> 
>   33
>   17
> 
> 
>   33
>   17
> 
> 
>   33
>   17
> 
> 
>   33
>   17
> 
> 
>   33
>   17
> 
> 
>   37
>   13
> 
> 
>   37
>   13
> 
> 
>   37
>   13
> 
> 
>   37
>   13
> 
> 
>   37
>   13
> 
> 
>   40
>   10
> 
> 
>   40
>   10
> 
> 
>   40
>   10
> 
> 
>   40
>   10
> 
> 
>   40
>   10
> 
> 
>   41
>   9
> 
> 
>   41
>   9
> 
> 
>   41
>   9
> 
> 
>   41
>   9
> 
> 
>   41
>   9
> 
> 
>   42
>   8
> 
> 
>   42
>   8
> 
> 
>   42
>  

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1755 - Still Failing!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1755/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:53400/_/zl/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:53400/_/zl/collection1
at 
__randomizedtesting.SeedInfo.seed([D9102A635FA5AA4:8C778CBE42A53A98]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:561)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.GeneratedMethodAccessor70.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   

[jira] [Commented] (SOLR-6314) Multi-threaded facet counts differ when SolrCloud has >1 shard

2014-08-06 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088820#comment-14088820
 ] 

Erick Erickson commented on SOLR-6314:
--

OK, taking a closer look at this, I wonder what the right behavior is. The 
totals are correct; it's just that they are repeated in one case and not in 
the other.

It _looks_ like I can restate the problem like this:

When a facet field is requested more than once in a non-sharded cluster, the 
field is repeated in the result set.

When a facet field is requested more than once in a sharded cluster, the field 
is returned only once in the result set.

IOW, specifying the same facet.field twice (&facet.field=f1&facet.field=f1) 
results in two identical sections in the response in the non-sharded case and 
one in the sharded case.

I'll look at the code tomorrow to see where the difference happens; I suspect 
it's in the aggregating code in the distributed case, but that's just a guess.

So the question is what the right behavior really is. I can argue that 
specifying the exact same facet parameter (either query or field) more than 
once is a waste, and that the facet information should be cleaned up on the 
way in by removing duplicates. That would give the same response in both cases 
and return just one entry per unique facet criterion. This arguably avoids 
useless work (what's the value of specifying the same facet parameter twice?), 
but it would change the current behavior in the single-shard case.

If we tried to return multiple entries in the sharded case, it seems quite 
fragile to try to sum the facet sub-counts separately. By that I mean, say 
shard 1 returns
f1:33
f1:33

and shard 2 returns
f1:78
f1:78

You'd like the final result to be
f1:111
f1:111

On the surface, a rule like "add facets by position when the key is identical 
and return multiple counts" seems like it would work, but it also seems ripe 
for errors to creep in, with arguably no added value. What happens, for 
instance, if there are three values for "f1" from one shard and only two from 
another? I don't see how that would really happen, but

So, my question for you (and anyone who wants to chime in) is: "Do you agree 
that pruning multiple identical facet criteria is a Good Thing?" If not, what 
use case does returning multiple identical facet counts support, and is that 
use case worth the effort? My gut feeling is no.

Thanks for bringing this up; it's certainly something that's confusing. I 
suspect it's just something that hasn't been thought of in the past.
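If the duplicate-pruning approach wins out, its core could be sketched like this (an illustrative standalone example, not the actual FacetComponent code; the class and method names are hypothetical):

```java
import java.util.Arrays;
import java.util.LinkedHashSet;

public class FacetParamPruner {
    // Drop repeated facet.field values while preserving first-seen order, so
    // both the single-shard and distributed code paths would return exactly
    // one entry per unique facet criterion.
    static String[] pruneDuplicates(String[] facetFields) {
        return new LinkedHashSet<>(Arrays.asList(facetFields))
                .toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] requested = {"f1_ws", "f1_ws", "f2_ws", "f1_ws", "f2_ws"};
        // Prints [f1_ws, f2_ws]: duplicates removed, request order kept.
        System.out.println(Arrays.toString(pruneDuplicates(requested)));
    }
}
```

A LinkedHashSet keeps insertion order, which matters if clients rely on facets coming back in the order they were requested.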

> Multi-threaded facet counts differ when SolrCloud has >1 shard
> --
>
> Key: SOLR-6314
> URL: https://issues.apache.org/jira/browse/SOLR-6314
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other, SolrCloud
>Affects Versions: 5.0
>Reporter: Vamsee Yarlagadda
>Assignee: Erick Erickson
>
> I am trying to work with multi-threaded faceting on SolrCloud and in the 
> process i was hit by some issues.
> I am currently running the below upstream test on different SolrCloud 
> configurations and i am getting a different result set per configuration.
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654
> Setup:
> - *Indexed 50 docs into SolrCloud.*
> - *If the SolrCloud has only 1 shard, the facet field query has the below 
> output (which matches with the expected upstream test output - # facet fields 
> ~ 50).*
> {code}
> $ curl  
> "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml";
> 
> 
> 
>   0
>   21
>   
> true
> id
> true
> id:*
> -1
> 1000
> 
>   f0_ws
>   f0_ws
>   f0_ws
>   f0_ws
>   f0_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f1_ws
>   f2_ws
>  

[jira] [Commented] (SOLR-4385) Stop using SVN Keyword Substitution in Solr src code.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088755#comment-14088755
 ] 

ASF subversion and git services commented on SOLR-4385:
---

Commit 1616393 from [~sar...@syr.edu] in branch 'dev/trunk'
[ https://svn.apache.org/r1616393 ]

SOLR-4385: Stop using SVN Keyword Substitution in Solr src code

> Stop using SVN Keyword Substitution in Solr src code.
> -
>
> Key: SOLR-4385
> URL: https://issues.apache.org/jira/browse/SOLR-4385
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Steve Rowe
> Fix For: 4.9, 5.0
>
> Attachments: detect-keywords-property.patch, detector.patch, 
> detector.patch, detector.patch
>
>







[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1180: POMs out of sync

2014-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1180/

3 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Task 1002 did not complete, final state: submitted

Stack Trace:
java.lang.AssertionError: Task 1002 did not complete, final state: submitted
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:144)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=57751, 
name=qtp1712131373-57751, state=RUNNABLE, group=TGRP-MultiThreadedOCPTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=57751, name=qtp1712131373-57751, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at __randomizedtesting.SeedInfo.seed([75ABA17D293BB508]:0)
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1047)
at 
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1312)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1339)
at 
sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1323)
at 
org.eclipse.jetty.server.ssl.SslSocketConnector$SslConnectorEndPoint.run(SslSocketConnector.java:665)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=57753, 
name=OverseerThreadFactory-9084-thread-5, state=RUNNABLE, group=Overseer 
collection creation process.]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=57753, name=OverseerThreadFactory-9084-thread-5, 
state=RUNNABLE, group=Overseer collection creation process.]
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at __randomizedtesting.SeedInfo.seed([75ABA17D293BB508]:0)
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371)
at 
java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
at 
org.apache.solr.handler.component.HttpShardHandler.submit(HttpShardHandler.java:184)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.sendShardRequest(OverseerCollectionProcessor.java:2054)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.addReplica(OverseerCollectionProcessor.java:2370)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.splitShard(OverseerCollectionProcessor.java:1360)
at 
org.apache.solr.cloud.OverseerCollectionProcessor.processMessage(OverseerCollectionProcessor.java:563)
at 
org.apache.solr.cloud.OverseerCollectionProcessor$Runner.run(OverseerCollectionProcessor.java:2618)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 52767 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-trunk/extra-targets.xml:77:
 Java returned: 1

Total time: 261 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




Re: Can't assign jiras to myself

2014-08-06 Thread Tomás Fernández Löbbe
Thanks David, I can assign jiras properly now.

Tomás


On Wed, Aug 6, 2014 at 7:15 PM, david.w.smi...@gmail.com <
david.w.smi...@gmail.com> wrote:

> Tomás, I put you into the “Committers” role for Lucene & Solr in JIRA just
> now.
>
> ~ David Smiley
> Freelance Apache Lucene/Solr Search Consultant/Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Aug 6, 2014 at 9:51 PM, Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> May I be missing the "committer role" in Jira?
>>
>> https://wiki.apache.org/lucene-java/CommittersResources
>>
>> Tomás
>>
>
>


[jira] [Assigned] (SOLR-6283) Add support for Interval Faceting in SolrJ

2014-08-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-6283:
---

Assignee: Tomás Fernández Löbbe  (was: Erick Erickson)

> Add support for Interval Faceting in SolrJ
> --
>
> Key: SOLR-6283
> URL: https://issues.apache.org/jira/browse/SOLR-6283
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.10
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
> Attachments: SOLR-6283.patch, SOLR-6283.patch
>
>
> Interval Faceting was added in SOLR-6216. Add support for it in SolrJ






Re: Can't assign jiras to myself

2014-08-06 Thread david.w.smi...@gmail.com
Tomás, I put you into the “Committers” role for Lucene & Solr in JIRA just
now.

~ David Smiley
Freelance Apache Lucene/Solr Search Consultant/Developer
http://www.linkedin.com/in/davidwsmiley


On Wed, Aug 6, 2014 at 9:51 PM, Tomás Fernández Löbbe  wrote:

> May I be missing the "committer role" in Jira?
>
> https://wiki.apache.org/lucene-java/CommittersResources
>
> Tomás
>


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20-ea-b23) - Build # 10968 - Failure!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10968/
Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=16937, 
name=parallelCoreAdminExecutor-3778-thread-12, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=16937, 
name=parallelCoreAdminExecutor-3778-thread-12, state=RUNNABLE, 
group=TGRP-MultiThreadedOCPTest]
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at __randomizedtesting.SeedInfo.seed([F524480A13DE4FB4]:0)
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at 
java.util.concurrent.ThreadPoolExecutor.processWorkerExit(ThreadPoolExecutor.java:1018)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1160)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12016 lines...]
   [junit4] Suite: org.apache.solr.cloud.MultiThreadedOCPTest
   [junit4]   2> Creating dataDir: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J0/./temp/solr.cloud.MultiThreadedOCPTest-F524480A13DE4FB4-001/init-core-data-001
   [junit4]   2> 2129871 T7345 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 2129872 T7345 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /
   [junit4]   2> 2129875 T7345 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 2129875 T7345 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2129876 T7346 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 2129976 T7345 oasc.ZkTestServer.run start zk server on 
port:48236
   [junit4]   2> 2129977 T7345 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2129980 T7352 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@32a71a name:ZooKeeperConnection 
Watcher:127.0.0.1:48236 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 2129980 T7345 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2129981 T7345 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2129985 T7345 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2129986 T7354 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@a01aa4 name:ZooKeeperConnection 
Watcher:127.0.0.1:48236/solr got event WatchedEvent state:SyncConnected 
type:None path:null path:null type:None
   [junit4]   2> 2129986 T7345 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2129987 T7345 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 2129990 T7345 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 2129992 T7345 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 2129995 T7345 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 2129997 T7345 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2129998 T7345 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 2130002 T7345 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/schema.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2130002 T7345 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 2130004 T7345 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2130005 T7345 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2130007 T7345 oasc.AbstractZkTestCase.putConfig put 
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2130007 T7345 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [ju

Can't assign jiras to myself

2014-08-06 Thread Tomás Fernández Löbbe
May I be missing the "committer role" in Jira?

https://wiki.apache.org/lucene-java/CommittersResources

Tomás


[jira] [Commented] (SOLR-6236) Need an optional fallback mechanism for selecting a leader when all replicas are in leader-initiated recovery.

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088534#comment-14088534
 ] 

Mark Miller commented on SOLR-6236:
---

I've started looking at this more. Have to spend some more time with the patch. 
That all still sounds good to me.

I still think, ideally, if all replicas are participating in a shard, things 
would just repair by default. The best you can do in that case anyway is to 
restart the shard and have everyone participate in the election. Seems the 
system should just do that then without the restart or a manual step or a 
change in config. It's only when all replicas can't participate that it's 
dangerous. 

That would be a further improvement though, and I think perhaps a hard one to do.

I'm +1 on adding this functionality. I have some questions around how the 
forceLeaderFailedRecoveryThreshold works and when it's reset, but I'll spend 
some time looking at that patch for that first.

> Need an optional fallback mechanism for selecting a leader when all replicas 
> are in leader-initiated recovery.
> --
>
> Key: SOLR-6236
> URL: https://issues.apache.org/jira/browse/SOLR-6236
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>Assignee: Timothy Potter
> Attachments: SOLR-6236.patch
>
>
> Offshoot from discussion in SOLR-6235, key points are:
> Tim: In ElectionContext, when running shouldIBeLeader, the node will choose 
> to not be the leader if it is in LIR. However, this could lead to no leader. 
> My thinking there is the state is bad enough that we would need manual 
> intervention to clear one of the LIR znodes to allow a replica to get past 
> this point. But maybe we can do better here?
> Shalin: Good question. With careful use of minRf, the user can retry 
> operations and maintain consistency even if we arbitrarily elect a leader in 
> this case. But most people won't use minRf and don't care about consistency 
> as much as availability. For them there should be a way to get out of this 
> mess easily. We can have a collection property (boolean + timeout value) to 
> force elect a leader even if all shards were in LIR. What do you think?
> Mark: Indeed, it's a current limitation that you can have all nodes in a 
> shard thinking they cannot be leader, even when all of them are available. 
> This is not required by the distributed model we have at all, it's just a 
> consequence of being over restrictive on the initial implementation - if all 
> known replicas are participating, you should be able to get a leader. So I'm 
> not sure if this case should be optional. But iff not all known replicas are 
> participating and you still want to force a leader, that should be optional - 
> I think it should default to false though. I think the system should default 
> to reasonable data safety in these cases.
> How best to solve this, I'm not quite sure, but happy to look at a patch. How 
> do you plan on monitoring and taking action? Via the Overseer? It seems 
> tricky to do it from the replicas.
> Tim: We have a similar issue where a replica attempting to be the leader 
> needs to wait a while to see other replicas before declaring itself the 
> leader, see ElectionContext around line 200:
> int leaderVoteWait = cc.getZkController().getLeaderVoteWait();
> if (!weAreReplacement)
> { waitForReplicasToComeUp(weAreReplacement, leaderVoteWait); }
> So one quick idea might be to have the code that checks if it's in LIR see if 
> all replicas are in LIR and if so, wait out the leaderVoteWait period and 
> check again. If all are still in LIR, then move on with becoming the leader 
> (in the spirit of availability).
> {quote}
> But iff not all known replicas are participating and you still want to force 
> a leader, that should be optional - I think it should default to false 
> though. I think the system should default to reasonable data safety in these 
> cases.
> {quote}
> Shalin: That's the same case as the leaderVoteWait situation and we do go 
> ahead after that amount of time even if all replicas aren't participating. 
> Therefore, I think that we should handle it the same way. But to help people 
> who care about consistency over availability, there should be a configurable 
> property which bans this auto-promotion completely.
> In any case, we should switch to coreNodeName instead of coreName and open an 
> issue to improve the leader election part.
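The fallback sketched in this discussion (a replica in LIR checks whether every replica is in LIR, waits out leaderVoteWait, re-checks, and then optionally takes over for availability unless a config property bans it) could look roughly like the following. This is a minimal plain-Java sketch; the method and parameter names (shouldIBeLeader, allowForceLeader) are illustrative, not Solr's actual election code:

```java
import java.util.function.BooleanSupplier;

public class LirLeaderCheck {

    // Illustrative fallback: a replica in leader-initiated recovery (LIR)
    // normally refuses leadership; but if *all* replicas are in LIR, wait out
    // leaderVoteWait, re-check, and then (optionally) take over anyway in the
    // spirit of availability.
    static boolean shouldIBeLeader(boolean selfInLir,
                                   BooleanSupplier allReplicasInLir,
                                   boolean allowForceLeader,
                                   long leaderVoteWaitMs) throws InterruptedException {
        if (!selfInLir) {
            return true;                      // healthy replica: normal election
        }
        if (!allReplicasInLir.getAsBoolean()) {
            return false;                     // some other replica can lead safely
        }
        Thread.sleep(leaderVoteWaitMs);       // wait out the leaderVoteWait period
        if (allReplicasInLir.getAsBoolean()) {
            return allowForceLeader;          // still stuck: availability vs. consistency
        }
        return false;                         // state improved; defer to a healthy replica
    }
}
```

With allowForceLeader=false this degenerates to today's behavior (no leader while all replicas are in LIR), which is the consistency-over-availability option discussed above.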






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_11) - Build # 10846 - Failure!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10846/
Java: 32bit/jdk1.8.0_11 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 11801 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/solr/build/solr-core/test/temp/junit4-J1-20140806_232719_947.syserr
   [junit4] >>> JVM J1: stderr (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.StackOverflowError
   [junit4] at 
sun.security.provider.PolicyFile.implies(PolicyFile.java:1078)
   [junit4] at 
java.security.ProtectionDomain.implies(ProtectionDomain.java:272)
   [junit4] at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:435)
   [junit4] at 
java.security.AccessController.checkPermission(AccessController.java:884)
   [junit4] at 
java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
   [junit4] at 
java.lang.reflect.AccessibleObject.setAccessible(AccessibleObject.java:95)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.getBoundFields(ReflectiveTypeAdapterFactory.java:104)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.ReflectiveTypeAdapterFactory.create(ReflectiveTypeAdapterFactory.java:65)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.internal.bind.MiniGson.getAdapter(MiniGson.java:92)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.Gson.toJson(Gson.java:504)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:87)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$4.write(SlaveMain.java:410)
   [junit4] at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
   [junit4] at java.io.PrintStream.write(PrintStream.java:482)
   [junit4] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] at 
sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
   [junit4] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
   [junit4] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
   [junit4] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
   [junit4] at 
org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
   [junit4] at 
org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
   [junit4] at 
org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] at 
org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4] at org.apache.log4j.Category.callAppenders(Category.java:206)
   [junit4] at org.apache.log4j.Category.forcedLog(Category.java:391)
   [junit4] at org.apache.log4j.Category.log(Category.java:856)
   [junit4] at 
org.slf4j.impl.Log4jLoggerAdapter.info(Log4jLoggerAdapter.java:304)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:191)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:452)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:217)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.rejoinLeaderElection(ElectionContext.java:452)
   [junit4] at 
org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:217)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
   [junit4] at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderEl

[jira] [Updated] (SOLR-6283) Add support for Interval Faceting in SolrJ

2014-08-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6283:


Affects Version/s: 4.10

> Add support for Interval Faceting in SolrJ
> --
>
> Key: SOLR-6283
> URL: https://issues.apache.org/jira/browse/SOLR-6283
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.10
>Reporter: Tomás Fernández Löbbe
>Assignee: Erick Erickson
> Attachments: SOLR-6283.patch, SOLR-6283.patch
>
>
> Interval Faceting was added in SOLR-6216. Add support for it in SolrJ






[jira] [Updated] (SOLR-6283) Add support for Interval Faceting in SolrJ

2014-08-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-6283:


Attachment: SOLR-6283.patch

Here is a new patch with some more javadocs. I'm going to commit this 
soon.
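For reference, interval faceting (SOLR-6216) is driven by plain request parameters, so a SolrJ helper would presumably just assemble something like the following; the field name and interval values here are made up for illustration:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class IntervalFacetParams {

    // Builds the raw parameters interval faceting reads: facet.interval names
    // the field, and f.<field>.facet.interval.set lists the intervals.
    static Map<String, List<String>> intervalParams(String field, String... intervals) {
        Map<String, List<String>> params = new LinkedHashMap<>();
        params.put("facet", Arrays.asList("true"));
        params.put("facet.interval", Arrays.asList(field));
        params.put("f." + field + ".facet.interval.set", Arrays.asList(intervals));
        return params;
    }
}
```

For example, intervalParams("price", "[0,10]", "(10,100]") yields the parameters a request handler would need to facet the hypothetical "price" field over those two intervals.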

> Add support for Interval Faceting in SolrJ
> --
>
> Key: SOLR-6283
> URL: https://issues.apache.org/jira/browse/SOLR-6283
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Erick Erickson
> Attachments: SOLR-6283.patch, SOLR-6283.patch
>
>
> Interval Faceting was added in SOLR-6216. Add support for it in SolrJ






[jira] [Commented] (SOLR-4580) Support for protecting content in ZK

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088515#comment-14088515
 ] 

Mark Miller commented on SOLR-4580:
---

Oh yeah, I also worked the manual testing for this into the existing standard 
cloud-dev scripts by using the new JAVA_OPTS support instead of a new one-off 
script. This is also how you can make the cloud-dev scripts work with HDFS, and 
it makes things much easier to test and maintain.

> Support for protecting content in ZK
> 
>
> Key: SOLR-4580
> URL: https://issues.apache.org/jira/browse/SOLR-4580
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.2
>Reporter: Per Steffensen
>Assignee: Mark Miller
>  Labels: security, solr, zookeeper
> Attachments: SOLR-4580.patch, SOLR-4580.patch, 
> SOLR-4580_branch_4x_r1482255.patch
>
>
> We want to protect content in zookeeper. 
> In order to run a CloudSolrServer in "client-space" you will have to open for 
> access to zookeeper from client-space. 
> If you do not trust persons or systems in client-space you want to protect 
> zookeeper against evilness from client-space - e.g.
> * Changing configuration
> * Trying to mess up system by manipulating clusterstate
> * Add a delete-collection job to be carried out by the Overseer
> * etc
> Even if you do not open for zookeeper access to someone outside your "secure 
> zone" you might want to protect zookeeper content from being manipulated by 
> e.g.
> * Malware that found its way into secure zone
> * Other systems also using zookeeper
> * etc.






[jira] [Updated] (SOLR-4580) Support for protecting content in ZK

2014-08-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-4580:
--

Attachment: SOLR-4580.patch

Here is a patch updated to trunk.

If you try to use ZK security without a chroot, it fails to start.

I still have to test using custom ZK security impls and review the way 
classloading for custom impls is done.

> Support for protecting content in ZK
> 
>
> Key: SOLR-4580
> URL: https://issues.apache.org/jira/browse/SOLR-4580
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.2
>Reporter: Per Steffensen
>Assignee: Mark Miller
>  Labels: security, solr, zookeeper
> Attachments: SOLR-4580.patch, SOLR-4580.patch, 
> SOLR-4580_branch_4x_r1482255.patch
>
>
> We want to protect content in zookeeper. 
> In order to run a CloudSolrServer in "client-space" you will have to open for 
> access to zookeeper from client-space. 
> If you do not trust persons or systems in client-space you want to protect 
> zookeeper against evilness from client-space - e.g.
> * Changing configuration
> * Trying to mess up system by manipulating clusterstate
> * Add a delete-collection job to be carried out by the Overseer
> * etc
> Even if you do not open for zookeeper access to someone outside your "secure 
> zone" you might want to protect zookeeper content from being manipulated by 
> e.g.
> * Malware that found its way into secure zone
> * Other systems also using zookeeper
> * etc.






[jira] [Comment Edited] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088476#comment-14088476
 ] 

Mark Miller edited comment on SOLR-6261 at 8/6/14 11:46 PM:


Thanks Shalin, I'll take a look. Looks like the executor isn't working as 
efficiently as we need.


was (Author: markrmil...@gmail.com):
Thanks Shalin, I'll take a look. Looks like the executor working as efficiently 
as we need.

> Run ZK watch event callbacks in parallel to the event thread
> 
>
> Key: SOLR-6261
> URL: https://issues.apache.org/jira/browse/SOLR-6261
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.9
>Reporter: Ramkumar Aiyengar
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.10
>
>
> Currently checking for leadership (due to the leader's ephemeral node going 
> away) happens in ZK's event thread. If there are many cores and all of them 
> are due leadership, then they would have to serially go through the two-way 
> sync and leadership takeover.
> For tens of cores, this could mean 30-40s without leadership before the last 
> in the list even gets to start the leadership process. If the leadership 
> process happens in a separate thread, then the cores could all take over in 
> parallel.






[jira] [Commented] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088476#comment-14088476
 ] 

Mark Miller commented on SOLR-6261:
---

Thanks Shalin, I'll take a look. Looks like the executor working as efficiently 
as we need.

> Run ZK watch event callbacks in parallel to the event thread
> 
>
> Key: SOLR-6261
> URL: https://issues.apache.org/jira/browse/SOLR-6261
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.9
>Reporter: Ramkumar Aiyengar
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.10
>
>
> Currently checking for leadership (due to the leader's ephemeral node going 
> away) happens in ZK's event thread. If there are many cores and all of them 
> are due leadership, then they would have to serially go through the two-way 
> sync and leadership takeover.
> For tens of cores, this could mean 30-40s without leadership before the last 
> in the list even gets to start the leadership process. If the leadership 
> process happens in a separate thread, then the cores could all take over in 
> parallel.






[jira] [Reopened] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-6261:
---


> Run ZK watch event callbacks in parallel to the event thread
> 
>
> Key: SOLR-6261
> URL: https://issues.apache.org/jira/browse/SOLR-6261
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.9
>Reporter: Ramkumar Aiyengar
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.10
>
>
> Currently checking for leadership (due to the leader's ephemeral node going 
> away) happens in ZK's event thread. If there are many cores and all of them 
> are due leadership, then they would have to serially go through the two-way 
> sync and leadership takeover.
> For tens of cores, this could mean 30-40s without leadership before the last 
> in the list even gets to start the leadership process. If the leadership 
> process happens in a separate thread, then the cores could all take over in 
> parallel.






[jira] [Commented] (SOLR-6261) Run ZK watch event callbacks in parallel to the event thread

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088466#comment-14088466
 ] 

Shalin Shekhar Mangar commented on SOLR-6261:
-

Guys, I think there's something weird happening since this was committed. Many 
tests such as MultiThreadedOCPTest and ShardSplitTest have been failing with 
OutOfMemory trying to create new watcher threads. A typical fail has the 
following in logs:

{code}
 [junit4]   2> 1218223 T4785 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218223 T4789 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218223 T4791 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218223 T4795 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218223 T4797 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218222 T4803 oasc.DistributedQueue$LatchChildWatcher.process 
LatchChildWatcher fired on path: /overseer/collection-queue-work state: 
SyncConnected type NodeChildrenChanged
   [junit4]   2> 1218222 T3305 oaz.ClientCnxn$EventThread.processEvent ERROR 
Error while calling watcher  java.lang.OutOfMemoryError: unable to create new 
native thread
   [junit4]   2>at java.lang.Thread.start0(Native Method)
   [junit4]   2>at java.lang.Thread.start(Thread.java:714)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
   [junit4]   2>at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
   [junit4]   2>at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
   [junit4]   2>at 
org.apache.solr.common.cloud.SolrZkClient$3.process(SolrZkClient.java:201)
   [junit4]   2>at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
   [junit4]   2>at 
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
   [junit4]   2> 
{code}
I see hundreds of LatchChildWatcher.process events and then the node goes out 
of memory.

Here are some of the recent fails:
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4233/
https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/592/
https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2048/
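One way to run watch callbacks off the event thread without the unbounded thread growth seen above is a bounded pool with caller-runs backpressure. This is a generic sketch, not SolrZkClient's actual executor, and the pool sizes are arbitrary:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedWatchExecutor {

    // A bounded pool for watch callbacks: at most 8 threads, a 1000-deep
    // queue, and CallerRunsPolicy so a flood of events slows the submitter
    // down instead of failing with "unable to create new native thread".
    static ThreadPoolExecutor newWatchCallbackExecutor() {
        return new ThreadPoolExecutor(
                1, 8,
                30, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1000),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService exec = newWatchCallbackExecutor();
        CountDownLatch done = new CountDownLatch(500);
        for (int i = 0; i < 500; i++) {
            exec.submit(done::countDown);   // stand-in for watcher.process(event)
        }
        done.await();                       // every callback ran, no thread explosion
        exec.shutdown();
    }
}
```

The trade-off is that under extreme load the caller (here, ZK's event thread) does some of the work itself, which throttles event delivery rather than exhausting native threads.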

> Run ZK watch event callbacks in parallel to the event thread
> 
>
> Key: SOLR-6261
> URL: https://issues.apache.org/jira/browse/SOLR-6261
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 4.9
>Reporter: Ramkumar Aiyengar
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 5.0, 4.10
>
>
> Currently checking for leadership (due to the leader's ephemeral node going 
> away) happens in ZK's event thread. If there are many cores and all of them 
> are due leadership, then they would have to serially go through the two-way 
> sync and leadership takeover.
> For tens of cores, this could mean 30-40s without leadership before the last 
> in the list even gets to start the leadership process. If the leadership 
> process happens in a separate thread, then the cores could all take over in 
> parallel.






[jira] [Updated] (SOLR-6325) Expose per-collection and per-shard aggregate statistics

2014-08-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6325:


Attachment: SOLR-6325.patch

Here's a rough patch which returns per-shard statistics:

{code}
curl 
'http://localhost:8983/solr/admin/collections?action=clusterstatus&stats=true&name=collection1&shard=shard1&wt=json&indent=on'
{code}

{code}
{
  "responseHeader":{
"status":0,
"QTime":2023},
  "cluster":{
"collections":{
  "collection1":{
"shards":{"shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node1":{
"state":"active",
"base_url":"http://127.0.1.1:8983/solr";,
"core":"collection1",
"node_name":"127.0.1.1:8983_solr",
"leader":"true"}},
"/update":{
  "75thPcRequestTime":0.0,
  "15minRateReqsPerSecond":0.0,
  "999thPcRequestTime":0.0,
  "99thPcRequestTime":0.0,
  "95thPcRequestTime":0.0,
  "5minRateReqsPerSecond":0.0,
  "timeouts":0,
  "requests":0,
  "avgRequestsPerSecond":0.0,
  "errors":0,
  "avgTimePerRequest":0.0,
  "medianRequestTime":0.0,
  "handlerStart":1407367247133,
  "totalTime":0.0},
"/select":{
  "75thPcRequestTime":26.804607,
  "15minRateReqsPerSecond":0.19779007785878447,
  "999thPcRequestTime":26.804607,
  "99thPcRequestTime":26.804607,
  "95thPcRequestTime":26.804607,
  "5minRateReqsPerSecond":0.1934432200964012,
  "timeouts":0,
  "requests":1,
  "avgRequestsPerSecond":0.05091259561701815,
  "errors":0,
  "avgTimePerRequest":26.804607,
  "medianRequestTime":26.804607,
  "handlerStart":1407367247129,
  "totalTime":26.804607},
"/get":{
  "75thPcRequestTime":0.0,
  "15minRateReqsPerSecond":0.0,
  "999thPcRequestTime":0.0,
  "99thPcRequestTime":0.0,
  "95thPcRequestTime":0.0,
  "5minRateReqsPerSecond":0.0,
  "timeouts":0,
  "requests":0,
  "avgRequestsPerSecond":0.0,
  "errors":0,
  "avgTimePerRequest":0.0,
  "medianRequestTime":0.0,
  "handlerStart":1407367247131,
  "totalTime":0.0},
"/replication":{
  "15minRateReqsPerSecond":0.0,
  "75thPcRequestTime":0.0,
  "999thPcRequestTime":0.0,
  "isSlave":"false",
  "99thPcRequestTime":0.0,
  "95thPcRequestTime":0.0,
  "replicateAfter":["commit"],
  "5minRateReqsPerSecond":0.0,
  
"indexPath":"/home/shalin/work/oss/shalin-lusolr/solr/example1/solr/collection1/data/index/",
  "replicationEnabled":"true",
  "timeouts":0,
  "requests":0,
  "avgRequestsPerSecond":0.0,
  "errors":0,
  "avgTimePerRequest":0.0,
  "indexSize":"89 bytes",
  "indexVersion":0,
  "isMaster":"true",
  "medianRequestTime":0.0,
  "handlerStart":1407367247142,
  "generation":1,
  "totalTime":0.0}}},
"maxShardsPerNode":"1",
"router":{"name":"compositeId"},
"replicationFactor":"1",
"autoCreated":"true"}},
"live_nodes":["127.0.1.1:7574_solr",
  "127.0.1.1:8983_solr"]}}
{code}

The handler names are hard-coded right now but I'm hoping that the work being 
done in SOLR-6191 will help introspect the capabilities of a node and let us 
read the names of the interesting handlers.
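Until that introspection exists, a whitelist with request-parameter extras (as proposed for this issue) might be as simple as the sketch below. The default handler set mirrors the ones in the sample response above; the extras parameter and its handling are hypothetical:

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class HandlerStatsWhitelist {

    // Default whitelist of handlers whose stats get aggregated; mirrors the
    // handlers shown in the sample clusterstatus response.
    static final Set<String> DEFAULT_HANDLERS = new LinkedHashSet<>(
            Arrays.asList("/update", "/select", "/get", "/replication"));

    // Extra handlers could be requested via a comma-separated parameter
    // (hypothetical name, e.g. &statsHandlers=/admin/ping,/spell).
    static Set<String> handlersToCollect(String extraParam) {
        Set<String> handlers = new LinkedHashSet<>(DEFAULT_HANDLERS);
        if (extraParam != null && !extraParam.isEmpty()) {
            for (String h : extraParam.split(",")) {
                handlers.add(h.trim());
            }
        }
        return handlers;
    }
}
```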

> Expose per-collection and per-shard aggregate statistics
> 
>
> Key: SOLR-6325
> URL: https://issues.apache.org/jira/browse/SOLR-6325
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-6325.patch
>
>
> SolrCloud doesn't provide any aggregate stats about the cluster or a 
> collection. Very common questions such as document counts per shard, index 
> sizes, request rates etc cannot be answered easily without figuring out the 
> cluster state, invoking multiple core admin APIs and aggregating them 
> manually.
> I propose that we expose an API which returns each of the following on a 
> per-collection and per-shard basis:
> # Document counts
> # Index size on disk
> # Query request rate
> # Indexing request rate
> # Real tim

[jira] [Updated] (SOLR-2894) Implement distributed pivot faceting

2014-08-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-2894:
---

Attachment: SOLR-2894.patch

Fingers crossed, this is the final patch.

No functional changes, just resolving the previously mentioned nocommits by 
renaming variables/methods or replacing comments about Jiras for future 
improvements with the actual Jira numbers.

ant precommit passes.

> Implement distributed pivot faceting
> 
>
> Key: SOLR-2894
> URL: https://issues.apache.org/jira/browse/SOLR-2894
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erik Hatcher
>Assignee: Hoss Man
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-2894-mincount-minification.patch, 
> SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
> SOLR-2894_cloud_test.patch, dateToObject.patch, pivot_mincount_problem.sh
>
>
> Following up on SOLR-792, pivot faceting currently only supports 
> undistributed mode.  Distributed pivot faceting needs to be implemented.






[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_20-ea-b23) - Build # 4233 - Failure!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4233/
Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Task 3002 did not complete, final state: failed

Stack Trace:
java.lang.AssertionError: Task 3002 did not complete, final state: failed
at 
__randomizedtesting.SeedInfo.seed([B5C53E7A181EF81D:3423B0626F419821]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testDeduplicationOfSubmittedTasks(MultiThreadedOCPTest.java:163)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:72)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.rand

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088402#comment-14088402
 ] 

Gregory Chanan commented on SOLR-3619:
--

bq. Gregory Chanan, you did a lot with the managed schema mode recently. What 
do you think about it becoming the primary Solr 'mode' in its current state?

Interesting question.  I don't have a philosophical objection to it, like I 
would with schemaless.  My main concerns are:
- I think SOLR-6249 would definitely need to be addressed; it's too 
non-intuitive to use programmatically at this point
- Cassandra gave a good overview of the other limitations of the API.  Those 
are less serious than SOLR-6249, because instead of something breaking, you 
might just get stuck.  I'd have some concern that the usual workflow would be: 
try managed schema -> get stuck -> write the schema manually.  This is a worse 
experience IMO than just telling users to write the schema manually.

So I think I'd pass on making it the "default" for now.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6163) special chars and ManagedSynonymFilterFactory

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088383#comment-14088383
 ] 

ASF subversion and git services commented on SOLR-6163:
---

Commit 1616366 from [~thelabdude] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1616366 ]

SOLR-6163: Correctly decode special characters in managed stopwords and synonym 
endpoints.

> special chars and ManagedSynonymFilterFactory
> -
>
> Key: SOLR-6163
> URL: https://issues.apache.org/jira/browse/SOLR-6163
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8
>Reporter: Wim Kumpen
>Assignee: Timothy Potter
> Attachments: SOLR-6163-v2.patch, SOLR-6163-v3.patch, 
> SOLR-6163-v4.patch, SOLR-6163.patch
>
>
> Hey,
> I was playing with the ManagedSynonymFilterFactory to create a synonym list 
> with the API. But I have difficulties deleting keys that contain special 
> characters (or spaces)...
> I added a key ééé that matches with some other words. It's saved in the 
> synonym file as ééé.
> When I try to delete it, I do:
> curl -X DELETE 
> "http://localhost/solr/mycore/schema/analysis/synonyms/english/ééé";
> error message: %C3%A9%C3%A9%C3%A9%C2%B5 not found in 
> /schema/analysis/synonyms/english
> A wild guess from me is that %C3%A9 isn't decoded back to ééé. And that's why 
> it can't find the keyword?
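The guess above is right about the mechanics: {{%C3%A9}} is the percent-encoding of the UTF-8 bytes for {{é}}, so the endpoint has to decode the path segment before looking up the key. A minimal sketch of that decoding step (not the actual Solr fix; the class and method names here are invented):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class DecodeSketch {
    // Decode a percent-encoded path segment back to its UTF-8 form, as a
    // managed-resource endpoint would need to do before looking up a key.
    static String decodeKey(String rawPathSegment) {
        try {
            return URLDecoder.decode(rawPathSegment, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new AssertionError("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        // "%C3%A9" decodes to 'é', so the full key round-trips
        System.out.println(decodeKey("%C3%A9%C3%A9%C3%A9")); // prints: ééé
    }
}
```

Without this step the raw encoded bytes are compared against the stored key, which produces exactly the "not found" error shown above.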



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects "fq" (filter query)

2014-08-06 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088126#comment-14088126
 ] 

David Boychuck edited comment on SOLR-6066 at 8/6/14 10:15 PM:
---

Attaching patch on SVN trunk


was (Author: dboychuck):
Attaching patch on SVN 4x branch

> CollapsingQParserPlugin + Elevation does not respects "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when using 
> CollapsingQParserPlugin with QueryElevationComponent, additional "fq" has no 
> effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 592 - Still Failing

2014-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/592/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Captured an uncaught exception in thread: Thread[id=780113, 
name=RecoveryThread, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=780113, name=RecoveryThread, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: java.lang.OutOfMemoryError: unable to create new native thread
at __randomizedtesting.SeedInfo.seed([AB9A8892F2A68BB5]:0)
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
at 
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.httpUriRequest(HttpSolrServer.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.httpUriRequest(HttpSolrServer.java:230)
at 
org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:610)
at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:371)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:235)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
We have a failed SPLITSHARD task

Stack Trace:
java.lang.AssertionError: We have a failed SPLITSHARD task
at 
__randomizedtesting.SeedInfo.seed([AB9A8892F2A68BB5:2A7C068A85F9EB89]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:125)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule

[jira] [Commented] (SOLR-6331) possible memory optimization for distributed pivot faceting

2014-08-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088349#comment-14088349
 ] 

Hoss Man commented on SOLR-6331:


This spun out of an idea i originally floated in SOLR-2894, but didn't move 
forward on because it will involve quite a bit of refactoring...

{quote}
* the way refinement currently works in PivotFacetField, after we've refined 
our values, we mark that we no longer need refinement, and then on the next 
call we recursively refine the subpivots of each value – and in both cases we 
do the offset+limit calculations and hang on to all of the values (both below 
offset and above limit) as we keep iterating down the pivots – they don't get 
thrown away until the final trim() call just before building up the final 
result.
* i previously suggested folding the trim() logic into the NamedList response 
logic – but now i'm wondering if the trim() logic should instead be folded into 
refinement? so once we're sure a level is fully refined, we go ahead and trim 
that level before drilling down and refining its kids?
{quote}
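As a rough illustration of the idea (only a sketch of the windowing arithmetic, not PivotFacetField code; the names are invented): once a level is known to be fully refined, values outside the {{offset}}/{{limit}} window can be dropped immediately instead of being carried until the final trim().

```java
import java.util.List;

public class TrimWindowSketch {
    // Keep only the values in [offset, offset + limit) for one pivot level.
    // A negative limit is treated as "unlimited", mirroring facet.limit=-1.
    static <T> List<T> trimToWindow(List<T> values, int offset, int limit) {
        int from = Math.min(Math.max(offset, 0), values.size());
        int to = (limit < 0) ? values.size()
                             : Math.min(from + limit, values.size());
        return values.subList(from, to);
    }

    public static void main(String[] args) {
        List<String> level = List.of("a", "b", "c", "d", "e");
        // with offset=1 and limit=2, only "b" and "c" survive the trim
        System.out.println(trimToWindow(level, 1, 2)); // prints: [b, c]
    }
}
```

Applying this per level, as soon as that level is fully refined, frees the below-offset and above-limit values before recursing into the subpivots.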


> possible memory optimization for distributed pivot faceting
> ---
>
> Key: SOLR-6331
> URL: https://issues.apache.org/jira/browse/SOLR-6331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> As noted in a comment in {{PivotFacetField.trim()}}...
> {code}
> // we can probably optimize the memory usage by trimming each level of the 
> pivot once
> // we know we've fully refined the values at that level 
> // (ie: fold this logic into refineNextLevelOfFacets)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6331) possible memory optimization for distributed pivot faceting

2014-08-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6331:
--

 Summary: possible memory optimization for distributed pivot 
faceting
 Key: SOLR-6331
 URL: https://issues.apache.org/jira/browse/SOLR-6331
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man


As noted in a comment in {{PivotFacetField.trim()}}...

{code}
// we can probably optimize the memory usage by trimming each level of the 
pivot once
// we know we've fully refined the values at that level 
// (ie: fold this logic into refineNextLevelOfFacets)
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6330) distributed pivot faceting may not work well with some custom FieldTypes

2014-08-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088343#comment-14088343
 ] 

Hoss Man commented on SOLR-6330:


Spinning this issue off from SOLR-2894, where it was decided that this 
shouldn't block distributed support being added to pivot faceting and can be 
addressed later as needed.

the PivotFacetValue class (added in SOLR-2894) has a comment referring to this 
issue ("SOLR-6330") pointing at the likely starting point to address this 
problem in pivot facet refinement if/when we have the method(s) needed from the 
FieldType API.

> distributed pivot faceting may not work well with some custom FieldTypes
> 
>
> Key: SOLR-6330
> URL: https://issues.apache.org/jira/browse/SOLR-6330
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Minor
>
> A limitation of the distributed pivot faceting code is that it makes some 
> explicit assumptions about the datatypes of the pivot values for the purposes 
> of "serializing" the values in order to make refinement requests to the 
> individual shards for those values.
> This logic works fine for String based fields, dates, and primitive numerics 
> -- but any custom FieldType that has a {{toObject()}} method which does not 
> return one of those data types may have problems.  While pivot faceting uses 
> the typed objects returned by {{toObject()}} in its responses, there is no 
> general FieldType method for converting those objects back into Strings 
> suitable for the refinement requests.
> Until we have some abstract, FieldType-based, method for converting the 
> value Objects into Strings that can be included in the refinement requests 
> for use in {{FieldType.getFieldQuery()}} there isn't really a good solution 
> for this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6330) distributed pivot faceting may not work well with some custom FieldTypes

2014-08-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6330:
--

 Summary: distributed pivot faceting may not work well with some 
custom FieldTypes
 Key: SOLR-6330
 URL: https://issues.apache.org/jira/browse/SOLR-6330
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Priority: Minor


A limitation of the distributed pivot faceting code is that it makes some 
explicit assumptions about the datatypes of the pivot values for the purposes 
of "serializing" the values in order to make refinement requests to the 
individual shards for those values.

This logic works fine for String based fields, dates, and primitive numerics -- 
but any custom FieldType that has a {{toObject()}} method which does not return 
one of those data types may have problems.  While pivot faceting uses the typed 
objects returned by {{toObject()}} in its responses, there is no general 
FieldType method for converting those objects back into Strings suitable for 
the refinement requests.

Until we have some abstract, FieldType-based, method for converting the value 
Objects into Strings that can be included in the refinement requests for use in 
{{FieldType.getFieldQuery()}} there isn't really a good solution for this.
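To make the limitation concrete, here is a hypothetical sketch (invented names, not part of the FieldType API) of the conversion the refinement code effectively relies on -- fine for the built-in value types, undefined for anything else a custom {{toObject()}} might return:

```java
public class RefinementStringSketch {
    // Turn a pivot value (as returned by FieldType.toObject()) back into a
    // String for a per-shard refinement request.  Strings and primitive
    // numerics round-trip via toString(); dates would need ISO-8601
    // formatting; any other class has no general conversion hook in the
    // FieldType API, which is the gap described in this issue.
    static String toRefinementString(Object pivotValue) {
        if (pivotValue == null) return null;
        if (pivotValue instanceof String || pivotValue instanceof Number) {
            return pivotValue.toString();
        }
        throw new IllegalArgumentException(
            "no general String form for " + pivotValue.getClass().getName());
    }

    public static void main(String[] args) {
        System.out.println(toRefinementString(42));     // prints: 42
        System.out.println(toRefinementString("cat1")); // prints: cat1
    }
}
```

The String produced here must be parseable by {{FieldType.getFieldQuery()}} on the shard, which is why a plain {{toString()}} on an arbitrary custom object is not enough.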



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6163) special chars and ManagedSynonymFilterFactory

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088334#comment-14088334
 ] 

ASF subversion and git services commented on SOLR-6163:
---

Commit 1616361 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1616361 ]

SOLR-6163: Correctly decode special characters in managed stopwords and synonym 
endpoints.

> special chars and ManagedSynonymFilterFactory
> -
>
> Key: SOLR-6163
> URL: https://issues.apache.org/jira/browse/SOLR-6163
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.8
>Reporter: Wim Kumpen
>Assignee: Timothy Potter
> Attachments: SOLR-6163-v2.patch, SOLR-6163-v3.patch, 
> SOLR-6163-v4.patch, SOLR-6163.patch
>
>
> Hey,
> I was playing with the ManagedSynonymFilterFactory to create a synonym list 
> with the API. But I have difficulties deleting keys that contain special 
> characters (or spaces)...
> I added a key ééé that matches with some other words. It's saved in the 
> synonym file as ééé.
> When I try to delete it, I do:
> curl -X DELETE 
> "http://localhost/solr/mycore/schema/analysis/synonyms/english/ééé";
> error message: %C3%A9%C3%A9%C3%A9%C2%B5 not found in 
> /schema/analysis/synonyms/english
> A wild guess from me is that %C3%A9 isn't decoded back to ééé. And that's why 
> it can't find the keyword?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088287#comment-14088287
 ] 

Hoss Man commented on SOLR-3619:


FWIW: I've been deliberately avoiding reading/commenting on most of the issues 
like this one for the past few months, because i've come to realize I've got 
far too much vested history to have any real idea what choices for things like 
this best serve the "new user experience" with solr.

But since miller called me out explicitly and asked for an opinion, i'll give 
one -- but please don't take any of this as a vote for/against any other 
specific, concrete, ideas other people have proposed -- because i still haven't 
read most of the comments in this issue since whenever my last comment was. 

The main gist i get of the current discussion is about "default" behavior and 
schema management -- so here is the pie-in-the-sky opinion of what i personally 
think would make a lot of sense in the long run...

* You start solr up the first time, you have 0 collections.
* there are _many_ sample configsets that come with solr, ready to be specified 
by name when you create your first collection
** none of the configsets are named "default" or "collection1" - there is no 
such thing as a "default" config set, just like there is no such thing as a 
"default" collection
** each of the sample config sets has a README file explaining why it's 
interesting
** each of the sample config sets has a file/directory of sample data, or a DIH 
config file that knows how to index some external data
** the configsets should all be as small as they can possibly be, while still 
clearly showcasing the thing that they showcase

* a pared-down version of the current "collection1" configs would be called 
"sample_techproductdata_configs"
** anything in the current configs not directly related to the tech product 
example docs would be ripped out
** everything would be tweaked to showcase the best possible configs we could 
imagine for the specific sample data use case (ie: request handler defaults, 
spell check configs, velocity UI templates, etc...)
** the first thing in the tutorial would be creating a "tech_products" 
collection using this configset, and indexing its sample data.
** the tutorial would then use the "tech_products" collection to demo some of 
the features currently covered in the tutorial (basic search, schema concepts, 
faceting, highlighting, etc...)
* another config called "sample_bookdata_configs" would also be a much 
smaller subset of the current "collection1" configs
** it would be paired with books.json & books.csv, have nothing in it unrelated 
to books, etc...
** we probably wouldn't need to mention this config set in the tutorial, but 
having it available as another example of a purpose created set of configs for 
users to compare/contrast with the "sample_techproductdata_configs" would be 
useful.

* there would be a configset named "basic_configs"
** this would have the managed-schema enabled (so the REST API could be used to 
manipulate the schema) .. as more REST APIs are added moving forward, they 
would also be enabled in this config set.
** this would _not_ have the "schemaless" update processors
*** ? or maybe the update processors are there, but in a non-default chain ?
** the key goal here being the most basic configs someone could start with, 
and add to, w/o any confusion about what might be cruft they can delete
** the second major section of the tutorial would have the user create two 
collections using this configset, and then use those two collections to 
showcase the Schema REST APIs to create the same field with different 
properties, and then show how the two collections behave differently with the 
same sample data indexed into them.

* there would be a configset named "data_driven_schema_configs"
** this would have all of the "schemaless" bells and whistles enabled
** the tutorial would have the user create a collection using this config to 
show off these features, etc...

* there would be many other config sets available, to show off various features 
of solr, in isolated specific example configs where they are easier to digest 
than the current kitchen sink of pain that we have today.
** every configset would have some JUnit tests to verify that they work, and 
that they can load their sample data (even if we aren't testing "curl", we can 
test with contentStream and the file path, mock DIH datasources, etc...)

* the world would be full of sunshine and rainbows and free candy.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter:

[jira] [Updated] (SOLR-6297) Distributed spellcheck with WordBreakSpellchecker can lose suggestions

2014-08-06 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-6297:
-

Attachment: SOLR-6297.patch

This patch fixes word-break suggestions by ensuring that both 
WordBreakSolrSpellChecker and ConjunctionSolrSpellChecker always output every 
original term, even if the list of suggestions is empty.  This is consistent 
with the behavior of DirectSolrSpellChecker.

This approach is problematic for combined-word suggestions as the various 
shards cannot know which new terms were invented by others.  For this, 
SpellCheckComponent will need to loosen its requirement that all shards return 
a term in order for it to be in the final response.
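A toy sketch of the merge rule in question (invented names; not the actual SpellCheckComponent code): a term survives the merge only if every shard returned an entry for it, so a checker that omits no-suggestion terms causes the loss described above, while one that returns the term with an empty suggestion list does not.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SpellMergeSketch {
    // Merge per-shard spellcheck responses (term -> suggestions), keeping
    // only the terms that *every* shard reported.  A shard may report a term
    // with an empty suggestion list and still count as having returned it.
    static Map<String, List<String>> merge(List<Map<String, List<String>>> shards) {
        Map<String, List<String>> merged = new LinkedHashMap<>();
        for (Map<String, List<String>> shard : shards) {
            for (Map.Entry<String, List<String>> e : shard.entrySet()) {
                merged.computeIfAbsent(e.getKey(), k -> new ArrayList<>())
                      .addAll(e.getValue());
            }
        }
        // drop any term that some shard omitted entirely
        merged.keySet().removeIf(term ->
            shards.stream().anyMatch(shard -> !shard.containsKey(term)));
        return merged;
    }

    public static void main(String[] args) {
        Map<String, List<String>> shard1 = Map.of("solr", List.of("solar"));
        Map<String, List<String>> omits  = Map.of();                      // omitted the term
        Map<String, List<String>> empty  = Map.of("solr", List.of());    // empty list instead
        System.out.println(merge(List.of(shard1, omits)).keySet()); // prints: []
        System.out.println(merge(List.of(shard1, empty)).keySet()); // prints: [solr]
    }
}
```

Always emitting every original term, as the patch does, turns the first (lossy) case into the second one without loosening the merge rule itself.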

> Distributed spellcheck with WordBreakSpellchecker can lose suggestions
> --
>
> Key: SOLR-6297
> URL: https://issues.apache.org/jira/browse/SOLR-6297
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.9
>Reporter: Steve Molloy
> Attachments: SOLR-6297.patch, SOLR-6297.patch
>
>
> When performing a spellcheck request in distributed environment with the 
> WordBreakSpellChecker configured, the shard response merging logic can lose 
> some suggestions. Basically, the merging logic ensures that all shards marked 
> the query as not being correctly spelled, which is good, but also expects all 
> shards to return some suggestions, which isn't necessarily the case. So if 
> shard 1 returns 10 suggestions but shard 2 returns none, the final result 
> will contain no suggestions because the term has suggestions from only 1 of 2 
> shards.
> This isn't the case with the DirectSolrSpellChecker which works properly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-1153) deltaImportQuery should be honored on child entities as well

2014-08-06 Thread Archana Reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088263#comment-14088263
 ] 

Archana Reddy commented on SOLR-1153:
-

We are facing a similar issue, but our use case involves nested entities.

We have the below nested entities:

ParentEntity
  child1Entity
    child2Entity

child1Entity is a child of ParentEntity and the parent of child2Entity; 
child2Entity is a child of child1Entity.

When we add a deltaImportQuery on child2Entity, it is not executed during 
delta import. I found that the code below (#503) in the DocBuilder class 
prevents the deltaImportQuery from being executed at the nested child entity 
level:

for (EntityProcessorWrapper child : epw.getChildren()) {
  buildDocument(vr, doc,
  child.getEntity().isDocRoot() ? pk : null, child, false, ctx, 
entitiesToDestroy);
}


> deltaImportQuery should be honored on child entities as well
> 
>
> Key: SOLR-1153
> URL: https://issues.apache.org/jira/browse/SOLR-1153
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 1.4
>
> Attachments: SOLR-1153.patch
>
>
> currently , only the root-entity can have this attribute



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088264#comment-14088264
 ] 

Yonik Seeley commented on SOLR-3619:


Another point about "managed schema"... although it tells you not to, you *can* 
hand-edit it just fine, and the syntax is the same.
You just don't want to edit it while solr is running since the change won't 
take effect and your changes would also be overwritten by the next operation to 
go through the API.  Longer term, I think schema and managed schema should be 
synonymous, but I lost that argument in the short term ;-)

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088253#comment-14088253
 ] 

Mark Miller commented on SOLR-3619:
---

bq. I did it recently and didn't think it was bad at all. 

Thanks for the input Cassandra!

{quote}
The biggest plus for me was that I didn't need to bother with ZK tools while in 
SolrCloud mode to make some simple edits. How anyone uses those tools is really 
beyond me - I have a huge hazy gap in my brain whenever I try to use them and I 
fail miserably.
{quote}

That's still a huge issue with SolrCloud, but extends to solrconfig.xml as 
well. We have an extra script that helps deal with that for Cloudera Search, 
but it's still no fun.

bq. However, the Schema API is still a work-in-progress; 

Okay, so sounds like that is the largest limiting part of this right now?

[~gchanan], you did a lot with the managed schema mode recently. What do you 
think about it becoming the primary Solr 'mode' in its current state?

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088235#comment-14088235
 ] 

Cassandra Targett commented on SOLR-3619:
-

bq. Has anyone started in managed schema mode and used the rest api to lock 
down a real schema and compared that to building the schema in an xml file? 

I did it recently and didn't think it was bad at all. I was trying to index 
some web access_logs with logstash into a SolrCloud instance and wanted to 
avoid ZK tools to edit the files, so I put it in managed schema mode and used 
the REST API to make fields & copyFields. The hardest part was really figuring 
out the fields that logstash was outputting to the documents, which took some 
trial & error. The API calls themselves were easy (for me). 

The biggest plus for me was that I didn't need to bother with ZK tools while in 
SolrCloud mode to make some simple edits. How anyone uses those tools is really 
beyond me - I have a huge hazy gap in my brain whenever I try to use them and I 
fail miserably. 

However, the Schema API is still a work-in-progress; I could do the basics of 
what I needed for my little project, but I couldn't delete fields, create new 
fieldTypes or change the analysis for any fieldType. If I had started with 
schemaless and then wanted to lock down my fields later, I don't know that I 
would have had the tools with the REST API, and I'm not sure how I would edit 
it manually if we're not supposed to edit the schema by hand when it's in 
managed mode. And then I'd have to contend with ZK.

bq. Is it easy to bootstrap the managed schema stuff with a hand built 
schema.xml?
I didn't do it as part of the project I just described, but I have done it 
before - made edits to schema.xml and then switched to using Managed Schema. It 
was as expected - a new file named 'managed-schema' was created that had my 
changes already in it and then I could use the REST API.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088240#comment-14088240
 ] 

Mark Miller commented on SOLR-3619:
---

bq. I'm not trying to be obtuse or argumentative,

Argue all you want by the way. That's how this stuff generally gets hashed out 
- people make arguments, stuff falls out of it, others weigh in on ideas. At 
some point, more and more consensus generally falls out. The only problem with 
arguing is if no one ends up being willing to compromise or change their 
opinion on anything regardless of the arguments or efforts to compromise.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3881) frequent OOM in LanguageIdentifierUpdateProcessor

2014-08-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088223#comment-14088223
 ] 

Steve Rowe commented on SOLR-3881:
--

Vitaliy, thanks for the changes.

I see a few more issues in your latest patch:

# {{LangDetectLanguageIdentifierUpdateProcessor.detectLanguage()}} still uses 
{{concatFields()}}, but it shouldn't -- that was the whole point of moving 
it to {{TikaLanguageIdentifierUpdateProcessor}}; instead,  
{{LangDetectLanguageIdentifierUpdateProcessor.detectLanguage()}} should loop 
over {{inputFields}} and call {{detector.append()}} (similarly to what 
{{concatFields()}} does).
# {{concatFields()}} and {{getExpectedSize()}} should move to 
{{TikaLanguageIdentifierUpdateProcessor}}.
# {{LanguageIdentifierUpdateProcessor.getExpectedSize()}} still takes a 
{{maxAppendSize}}, which didn't get renamed, but that param could be removed 
entirely, since {{maxFieldValueChars}} is available as a data member.
# There are a bunch of whitespace changes in 
{{LanguageIdentifierUpdateProcessorFactoryTestCase.java}} - it makes reviewing 
patches significantly harder when they include changes like this.  Your IDE 
should have settings that make it stop doing this.
# There is still some import reordering in 
{{TikaLanguageIdentifierUpdateProcessor.java}}.

One last thing:

{quote}
bq. The total chars default should be its own setting; I was thinking we could 
make it double the per-value default?

\[VZ] added default value to maxTotalChars and changed both to 10K like in 
com.cybozu.labs.langdetect.Detector.maxLength
{quote}

Thanks for adding the total chars default, but you didn't make it double the 
field value chars default, as I suggested.  Not sure if that's better - if the 
user specifies multiple fields and the first one is the only one that's used to 
determine the language because it's larger than the total char default, is that 
an issue?  I was thinking that it would be better to visit at least one other 
field (hence the idea of total = 2 * per-field), but that wouldn't fully 
address the issue.  What do you think?
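For reference, the truncation behavior under discussion can be sketched like this (a self-contained sketch; a plain StringBuilder stands in for {{detector.append()}}, and the parameter names only mirror the patch discussion, so treat the details as illustrative):

```java
import java.util.List;

/**
 * Sketch of the suggested detectLanguage() loop: feed each input field
 * value to the detector, truncating each value to maxFieldValueChars
 * and stopping once maxTotalChars have been appended in total.
 */
public class LangInputSketch {
    // Stand-in for looping over inputFields and calling detector.append()
    static String collect(List<String> fieldValues, int maxFieldValueChars, int maxTotalChars) {
        StringBuilder detectorInput = new StringBuilder();
        for (String value : fieldValues) {
            if (detectorInput.length() >= maxTotalChars) break;      // total budget spent
            int perField = Math.min(value.length(), maxFieldValueChars);
            int remaining = maxTotalChars - detectorInput.length();
            detectorInput.append(value, 0, Math.min(perField, remaining));
        }
        return detectorInput.toString();
    }

    public static void main(String[] args) {
        // With total = 2 * per-field, a second field still gets visited
        // even when the first value is longer than the per-field cap.
        String out = collect(List.of("aaaaaaaaaa", "bbbb"), 5, 10);
        System.out.println(out);  // prints aaaaabbbb
    }
}
```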

> frequent OOM in LanguageIdentifierUpdateProcessor
> -
>
> Key: SOLR-3881
> URL: https://issues.apache.org/jira/browse/SOLR-3881
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 4.0
> Environment: CentOS 6.x, JDK 1.6, (java -server -Xms2G -Xmx2G 
> -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=)
>Reporter: Rob Tulloh
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3881.patch, SOLR-3881.patch, SOLR-3881.patch, 
> SOLR-3881.patch
>
>
> We are seeing frequent failures from Solr causing it to OOM. Here is the 
> stack trace we observe when this happens:
> {noformat}
> Caused by: java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2882)
> at 
> java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
> at 
> java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:390)
> at java.lang.StringBuffer.append(StringBuffer.java:224)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.concatFields(LanguageIdentifierUpdateProcessor.java:286)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.process(LanguageIdentifierUpdateProcessor.java:189)
> at 
> org.apache.solr.update.processor.LanguageIdentifierUpdateProcessor.processAdd(LanguageIdentifierUpdateProcessor.java:171)
> at 
> org.apache.solr.handler.BinaryUpdateRequestHandler$2.update(BinaryUpdateRequestHandler.java:90)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:140)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:120)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:221)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:105)
> at 
> org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:186)
> at 
> org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:112)
> at 
> org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:147)
> at 
> org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:100)
> at 
> org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:47)
> at 
> org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:58)
> at 
> org.apache.solr.handler.Con

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088195#comment-14088195
 ] 

Mark Miller commented on SOLR-3619:
---

Has anyone started in managed schema mode and used the rest api to lock down a 
real schema and compared that to building the schema in an xml file? Is it 
pretty nice or is it a fork in the eye? Is it easy to bootstrap the managed 
schema stuff with a hand built schema.xml?

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088184#comment-14088184
 ] 

Mark Miller edited comment on SOLR-3619 at 8/6/14 8:18 PM:
---

bq. how enabling an optional feature makes it hard to go to production?

Right now you basically start with how we recommend you go to production. From 
what I understand, your proposal means everything just works as "schemaless" 
and you would have to dig to figure out how to get out of this mode. Find some 
other config files to plug in, figure out what to edit in your config files, I 
don't know. By enabling this optional feature, you are making the non-optional 
part more difficult to get to and likely just out of mind entirely.

bq. For those who want to lock down their schema, it should be easily 
doable/changeable via an API. 

If I thought that was easier than it seems to me, I might be willing to go 
further down that thought path.

[~hossman_luc...@fucit.org], any chance I can ask for your opinion on this 
issue?




was (Author: markrmil...@gmail.com):
bq. how enabling an optional feature makes it hard to go to production?

Right now you basically start with how we recommend you go to production. From 
what I understand, your proposal means everything just works as "schemaless" 
and you would have to dig to figure out how to get out of this mode. Find some 
other config files to plug in, I don't know. By enabling this optional feature, 
you are making the non-optional part more difficult to get to and likely just 
out of mind entirely.

bq. For those who want to lock down their schema, it should be easily 
doable/changeable via an API. 

If I thought that was easier than it seems to me, I might be willing to go 
further down that thought path.

[~hossman_luc...@fucit.org], any chance I can ask for your opinion on this 
issue?



> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088184#comment-14088184
 ] 

Mark Miller commented on SOLR-3619:
---

bq. how enabling an optional feature makes it hard to go to production?

Right now you basically start with how we recommend you go to production. From 
what I understand, your proposal means everything just works as "schemaless" 
and you would have to dig to figure out how to get out of this mode. Find some 
other config files to plug in, I don't know. By enabling this optional feature, 
you are making the non-optional part more difficult to get to and likely just 
out of mind entirely.

bq. For those who want to lock down their schema, it should be easily 
doable/changeable via an API. 

If I thought that was easier than it seems to me, I might be willing to go 
further down that thought path.

[~hossman_luc...@fucit.org], any chance I can ask for your opinion on this 
issue?



> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-3619:
-

Attachment: solrconfig.xml
managed-schema

bq. non field-guessing default should probably just have dynamic fields as a 
start

When I look at the managed-schema from the schemaless example (which is what I 
based the default on), it has all the popular Solr field types and dynamic 
fields defined. I've attached it here for your review - let me know if you want 
to add/remove anything.

I'm not trying to be obtuse or argumentative, but can you elaborate on how 
enabling an optional feature makes it hard to go to production? If I don't want 
field guessing, then my incoming docs need to conform to the schema that is 
defined.

Disabling field guessing requires one line change (to comment out the 
add-unknown-fields-to-the-schema in the update chain) and I don't think we've 
ever advocated going to production without the user doing a thorough review of 
their solrconfig.xml.
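For context, that one-line change would look roughly like this in solrconfig.xml (an illustrative sketch; the handler and chain names follow the schemaless example's conventions, so verify against the actual config):

```xml
<!-- Field guessing is enabled by routing updates through this chain;
     commenting out the default disables it: -->
<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <!-- <str name="update.chain">add-unknown-fields-to-the-schema</str> -->
  </lst>
</requestHandler>
```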

My default solrconfig.xml is attached too, please review as I was pretty 
aggressive with my removals.

At this point, it sounds like we're in agreement that there should be a 
getting-started configset and another one that doesn't have field guessing 
enabled. Are there any other features you think are important to enable/disable? 
These configsets need names - thinking data-driven-schema (Hoss' term) and 
default?

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, managed-schema, 
> server-name-layout.png, solrconfig.xml
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5871) Simplify or remove use of Version in IndexWriterConfig

2014-08-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088169#comment-14088169
 ] 

Michael McCandless commented on LUCENE-5871:


bq. I guess what I'm trying to say is, if we make this IWC setter, maybe we 
could remove shutdown and stick w/ only close(). 

+1

> Simplify or remove use of Version in IndexWriterConfig
> --
>
> Key: LUCENE-5871
> URL: https://issues.apache.org/jira/browse/LUCENE-5871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5871.patch
>
>
> {{IndexWriter}} currently uses Version from {{IndexWriterConfig}} to 
> determine the semantics of {{close()}}.  This is a trapdoor for users, as 
> they often default to just sending Version.LUCENE_CURRENT since they don't 
> understand what it will be used for.  Instead, we should make the semantics 
> of close a direct option in IWC.
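One possible shape for such a direct option, sketched with stand-in classes (the {{setCommitOnClose}} name is a guess at what the setter might be called, not an existing Lucene API; this is a sketch of the idea, not Lucene code):

```java
/**
 * Sketch: a boolean config option controls close() semantics directly,
 * instead of inferring them from a Version constant.
 */
public class CloseSemanticsSketch {
    static class Config {
        boolean commitOnClose = true;                    // hypothetical IWC option
        Config setCommitOnClose(boolean v) { commitOnClose = v; return this; }
    }

    static class Writer implements AutoCloseable {
        final Config config;
        boolean committed, closed;
        Writer(Config c) { config = c; }
        @Override public void close() {
            if (config.commitOnClose) committed = true;  // graceful close commits
            closed = true;                               // otherwise just release resources
        }
    }

    public static void main(String[] args) {
        Writer w = new Writer(new Config().setCommitOnClose(false));
        w.close();
        System.out.println(w.committed + " " + w.closed);  // prints false true
    }
}
```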



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088159#comment-14088159
 ] 

Mark Miller commented on SOLR-3619:
---

bq.  production vs prototyping

The names would probably be more like 'default' and 'schemaless'. Documentation 
should warn heavily about schemaless in production.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SOLRJ Stopping Streaming

2014-08-06 Thread Shawn Heisey
On 8/6/2014 1:34 PM, Felipe Dantas de Souza Paiva wrote:
> in version 4.0 of SOLRJ a support for streaming response was added:
>
> https://issues.apache.org/jira/browse/SOLR-2112
>
> In my application, the output for the SOLR input stream is a response
> stream from a REST web service.
>
> It works fine, but if the client closes the connection with the REST
> server, the SOLR stream continues to work. As a result of that, CPU
> remains being used, although nothing is being delivered to the client.
>
> Is there a way to force the SOLR stream to be closed?
>
> I think I would have to modify the class
> StreamingBinaryResponseParser, by adding a new method that checks if
> the SOLR stream should be closed.
>
> Am I right? I am using the 4.1.0 version of the SOLRJ.

The solr-user list is more appropriate for this question.

The 4.1.0 version is getting very old - the release announcement was in
January 2013.  There have been a LOT of bugs fixed in versions up
through the most recent, which is 4.9.0.  Upgrading is advised. 
Upgrading the server is also advised.

I do not see a specific issue in CHANGES.txt mentioning anything like
what you have indicated here, but we'll need to know if it's still a
problem in the latest version before filing a bug.
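For illustration, the cancellation hook the question asks about could look like the following (a self-contained sketch; StreamingBinaryResponseParser has no such flag today, and the delivery loop below only models the idea):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Sketch: the streaming delivery loop consults a cancellation flag and
 * stops as soon as it is set, so no CPU is spent producing results for
 * a client that has already disconnected.
 */
public class StreamCancelSketch {
    static List<Integer> stream(Iterable<Integer> docs, AtomicBoolean cancelled, int stopAfter) {
        List<Integer> delivered = new ArrayList<>();
        for (Integer doc : docs) {
            if (cancelled.get()) break;   // client went away: stop streaming
            delivered.add(doc);
            if (delivered.size() == stopAfter) cancelled.set(true);  // simulate a disconnect
        }
        return delivered;
    }

    public static void main(String[] args) {
        List<Integer> out = stream(List.of(1, 2, 3, 4, 5), new AtomicBoolean(false), 2);
        System.out.println(out);  // prints [1, 2]
    }
}
```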

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects "fq" (filter query)

2014-08-06 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: SOLR-6066.patch

> CollapsingQParserPlugin + Elevation does not respects "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when 
> CollapsingQParserPlugin is used with QueryElevationComponent, an additional 
> "fq" has no effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088151#comment-14088151
 ] 

Mark Miller commented on SOLR-3619:
---

bq. Having modes (esp called production vs prototyping)

They are not actually modes, they are just different starting config. The 
complexity is already well beyond that. One can certainly be a "specify 
nothing" default. I'd still vote for the production 'config' for that. The other 
should be as easy as a param to choose.

bq. Schemaless can sometimes be desired in production

Yes, in some very specific cases, so it doesn't really affect this discussion. 
It's mostly not desired, especially with the misguided idea that it's 
"friendly", because its more 'insidious' in production than friendly unless you 
understand what you are getting into well.



> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects "fq" (filter query)

2014-08-06 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: (was: SOLR-6066.patch)

> CollapsingQParserPlugin + Elevation does not respects "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when 
> CollapsingQParserPlugin is used with QueryElevationComponent, an additional 
> "fq" has no effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088141#comment-14088141
 ] 

Yonik Seeley edited comment on SOLR-3619 at 8/6/14 7:57 PM:


bq. I guess I was optimizing for a quick and easy getting started experience. 
Solr doesn't have a getting to production problem,

+1

bq. I'm -1 on making a sensible default hard and a prototyping default easy. 
They both need to be just as easy,
[...] 
bq. That is why we need both modes and it need to be just as easy to choose 
either.

Having modes (esp called production vs prototyping) ups the perceived 
complexity again and makes the prototyping mode feel cheap somehow (i.e. it's 
just for show).

Schemaless can *sometimes* be desired in production (an internal cloud type 
scenario where new collections are being created often by new users).  For 
those who want to lock down their schema, it should be easily doable/changeable 
via an API.  Same goes for the managed schema.



was (Author: ysee...@gmail.com):
bq. I guess I was optimizing for a quick and easy getting started experience. 
Solr doesn't have a getting to production problem,

+1

bq. I'm -1 on making a sensible default hard and a prototyping default easy. 
They both need to be just as easy,
[...] bq. That is why we need both modes and it need to be just as easy to 
choose either.

Having modes (esp called production vs prototyping) ups the perceived 
complexity again and makes the prototyping mode feel cheap somehow (i.e. it's 
just for show).

Schemaless can *sometimes* be desired in production (an internal cloud type 
scenario where new collections are being created often by new users).  For 
those who want to lock down their schema, it should be easily doable/changeable 
via an API.  Same goes for the managed schema.


> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088141#comment-14088141
 ] 

Yonik Seeley commented on SOLR-3619:


bq. I guess I was optimizing for a quick and easy getting started experience. 
Solr doesn't have a getting to production problem,

+1

bq. I'm -1 on making a sensible default hard and a prototyping default easy. 
They both need to be just as easy,
[...] bq. That is why we need both modes and it need to be just as easy to 
choose either.

Having modes (esp called production vs prototyping) ups the perceived 
complexity again and makes the prototyping mode feel cheap somehow (i.e. it's 
just for show).

Schemaless can *sometimes* be desired in production (an internal cloud type 
scenario where new collections are being created often by new users).  For 
those who want to lock down their schema, it should be easily doable/changeable 
via an API.  Same goes for the managed schema.


> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respect "fq" (filter query)

2014-08-06 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: SOLR-6066.patch

Attaching a patch against the SVN 4.x branch

> CollapsingQParserPlugin + Elevation does not respect "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when using 
> CollapsingQParserPlugin with QueryElevationComponent, an additional "fq" has 
> no effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088119#comment-14088119
 ] 

Mark Miller commented on SOLR-3619:
---

bq. Solr doesn't have a getting to production problem

It will if you make the default field guessing and make a production default 
hard by leaving it out.

bq.  but if we lose the user in the first 5 minutes,

That is why we need both modes and it needs to be just as easy to choose either.

I'm -1 on making a sensible default hard and a prototyping default easy. They 
both need to be just as easy; a good production start can't be eclipsed.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



SOLRJ Stopping Streaming

2014-08-06 Thread Felipe Dantas de Souza Paiva
Hi Guys,

In version 4.0 of SolrJ, support for streaming responses was added:

https://issues.apache.org/jira/browse/SOLR-2112

In my application, the output of the Solr input stream is piped out as the 
response stream of a REST web service.

It works fine, but if the client closes the connection to the REST server, the 
Solr stream keeps running. As a result, CPU continues to be consumed even though 
nothing is being delivered to the client.

Is there a way to force the SOLR stream to be closed?

I think I would have to modify the class StreamingBinaryResponseParser, by 
adding a new method that checks if the SOLR stream should be closed.

Am I right? I am using SolrJ 4.1.0.

Thank you all.
Cheers,

Felipe Dantas de Souza Paiva
UOL - Analista de Sistemas
Av. Brig. Faria Lima, 1384, 3° andar . 01452-002 . São Paulo/SP
Telefone: 11 3092 6938





[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088106#comment-14088106
 ] 

Timothy Potter commented on SOLR-3619:
--

+1 on the named configsets approach (working on that now)

bq. I'm not sure I'm sold on that as the default ... without a sensible 
production default as well.

I guess I was optimizing for a quick and easy getting-started experience. Solr 
doesn't have a getting-to-production problem; it's the getting-started path 
that's way too complicated, esp. for a new user population that knows it needs 
search and just wants to kick the tires on Solr for a few minutes or hours.  
There's something very powerful about being able to: 1) start solr, 2) send in 
a few JSON docs, 3) query for your data.

Of course we all know that adding some data and firing off a few queries is a 
long way from production, but if we lose the user in the first 5 minutes, 
getting to production is no longer relevant. So I think we need a low-barrier 
to entry configset for this user population and if we want to have another more 
structured configset, that's great too.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6304) Add a way to flatten an input JSON to multiple docs

2014-08-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6304:
-

Attachment: SOLR-6304.patch

A streaming parser for JSON 

> Add a way to flatten an input JSON to multiple docs
> ---
>
> Key: SOLR-6304
> URL: https://issues.apache.org/jira/browse/SOLR-6304
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Attachments: SOLR-6304.patch
>
>
> example
> {noformat}
> curl 
> localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type
>  -d '
> {
>   "id": "0001",
>   "type": "donut",
>   "name": "Cake",
>   "ppu": 0.55,
>   "batters": {
>   "batter":
>   [
>   { "id": "1001", "type": 
> "Regular" },
>   { "id": "1002", "type": 
> "Chocolate" },
>   { "id": "1003", "type": 
> "Blueberry" },
>   { "id": "1004", "type": 
> "Devil's Food" }
>   ]
>   }
> }'
> {noformat}
> should produce the following output docs
> {noformat}
> { "recipeId":"0001", "recipeType":"donut", "id":"1001", "type":"Regular" }
> { "recipeId":"0001", "recipeType":"donut", "id":"1002", "type":"Chocolate" }
> { "recipeId":"0001", "recipeType":"donut", "id":"1003", "type":"Blueberry" }
> { "recipeId":"0001", "recipeType":"donut", "id":"1004", "type":"Devil's Food" }
> {noformat}
> the split param is the element in the tree where it should be split into 
> multiple docs. The 'f' are field name mappings
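The split/f semantics described above can be sketched in plain Python. This is a toy illustration over nested dicts, not Solr's actual streaming parser; `resolve` and `flatten` are invented helper names:

```python
# Toy sketch of the split/f semantics: `split` names the JSON path where the
# tree is broken into child docs, and each `f` entry maps an output field
# name to a source path (resolved inside the child when it lies under split).

def resolve(doc, path):
    """Walk a '/a/b' path through nested dicts; returns None if missing."""
    node = doc
    for part in path.strip("/").split("/"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def flatten(doc, split, mappings):
    """mappings: list of (out_field, source_path) pairs, like f=out:/path."""
    children = resolve(doc, split) or []
    out = []
    for child in children:
        row = {}
        for out_field, path in mappings:
            if path.startswith(split + "/"):
                # Path under the split point: resolve inside the child doc.
                row[out_field] = resolve(child, "/" + path[len(split) + 1:])
            else:
                # Otherwise resolve against the enclosing (parent) doc.
                row[out_field] = resolve(doc, path)
        out.append(row)
    return out

recipe = {
    "id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
    "batters": {"batter": [
        {"id": "1001", "type": "Regular"},
        {"id": "1002", "type": "Chocolate"},
        {"id": "1003", "type": "Blueberry"},
        {"id": "1004", "type": "Devil's Food"},
    ]},
}

docs = flatten(recipe, "/batters/batter", [
    ("recipeId", "/id"), ("recipeType", "/type"),
    ("id", "/batters/batter/id"), ("type", "/batters/batter/type"),
])
print(docs[0])
# → {'recipeId': '0001', 'recipeType': 'donut', 'id': '1001', 'type': 'Regular'}
```

Note the sketch carries the full parent id ("0001") into recipeId, since that is the value present in the input document.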



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088091#comment-14088091
 ] 

Mark Miller commented on SOLR-3619:
---

Yeah, it should be done with named config sets.

The key then, for both modes, is allowing the use of template/default config 
sets without uploading them or setting them up first.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088093#comment-14088093
 ] 

David Smiley commented on SOLR-3619:


bq. Tim, in standalone mode, a third option would be to use the named 
configsets functionality, right?

Definitely; right?  It's pretty useful.

bq. I think the non managed-schema and non field-guessing default should 
probably just have dynamic fields as a start. Field guessing has too many 
downsides to bring it to the front without a sensible production default as 
well.

+1 I can't stand field guessing; let's not encourage users

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088077#comment-14088077
 ] 

Yonik Seeley commented on SOLR-3619:


bq. Tim, in standalone mode, a third option would be to use the named 
configsets functionality, right?

+1, this feels like the right way.  This should also sort of mirror cloud-mode 
given that config sets were supposed to mirror cloud-mode, right?



> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088066#comment-14088066
 ] 

Mark Miller commented on SOLR-3619:
---

bq. I'm favoring the first because it's more like the cloud experience (tooling 
puts the config in the right place), 

I want to hide that as well. This is a negative currently. It should be 
optional IMO.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088062#comment-14088062
 ] 

Mark Miller commented on SOLR-3619:
---

bq. We can either

Shouldn't the core create command just create the instance dir and move the 
config into place when it sees the user did not specify any config?

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6315) Remove SimpleOrderedMap

2014-08-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088061#comment-14088061
 ] 

Shawn Heisey commented on SOLR-6315:


Looking over these notes, those on SOLR-912, and the actual implementation of 
SimpleOrderedMap ... it looks to me like SimpleOrderedMap has zero behavior 
difference from NamedList.  If I'm wrong about that, will someone please point 
me at the code that makes them different?

When I was poking through the code making a patch to remove SimpleOrderedMap, I 
do remember seeing one place where another class makes a decision based on 
whether the object is a SimpleOrderedMap or not, and it turns out that it was 
in JSONResponseWriter, which is mentioned above.

There are exactly three places in trunk where "instanceof SimpleOrderedMap" 
appears.  One of them is in code that decides how to encode javabin, which adds 
yet another possible complication to the entire notion of dropping 
SimpleOrderedMap.  The javabin codec has ORDERED_MAP and NAMED_LST as distinct 
object types.

A derivative class should only exist if its implementation is different from 
the parent's, or if it makes sense within the human mind to think of them as 
different things because the distinction is concrete and will be required 
frequently.

The three places I mentioned are cases where an entirely new class (different 
only in the name) is used instead of implementing a setting within the class 
that other code can use to make decisions.  Although the existing method uses 
slightly less memory, I think it's the wrong approach.  I know that I'm only 
one voice, and I may be overruled.

One option for the stated purpose of this issue is to add a boolean flag within 
NamedList (possibly with a getter/setter) to use in JSONResponseWriter.  
Another is to bite the bullet and actually implement an extension of NamedList 
that behaves differently -- in this case (based on what I see in 
JSONResponseWriter and the javadocs), preventing duplicates and being more 
efficient for key lookups.
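The first option (a boolean flag inside NamedList that the response writer consults, instead of an `instanceof` check on a subclass) could look roughly like this. A loose Python sketch of the idea only, not Solr's actual classes; `map_semantics` is an invented name:

```python
# Sketch of the "boolean flag instead of a subclass" idea: one NamedList-like
# class, and the writer branches on the flag rather than on the object's type.
import json

class NamedList:
    def __init__(self, pairs=None, map_semantics=False):
        self.pairs = list(pairs or [])      # ordered (name, value) pairs; dups allowed
        self.map_semantics = map_semantics  # hint: "render me like a map"

    def add(self, name, value):
        self.pairs.append((name, value))

def write_json(nl):
    if nl.map_semantics:
        # Map-style rendering: duplicates collapse (later value wins).
        return json.dumps(dict(nl.pairs))
    # List-style rendering preserves order and duplicates explicitly.
    return json.dumps([[k, v] for k, v in nl.pairs])

nl = NamedList([("a", 1), ("a", 2)])
print(write_json(nl))       # → [["a", 1], ["a", 2]]
nl.map_semantics = True
print(write_json(nl))       # → {"a": 2}
```

The second option Shawn describes (an actual subclass that prevents duplicates and speeds up key lookups) would instead change the behavior of `add`, not just the rendering.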


> Remove SimpleOrderedMap
> ---
>
> Key: SOLR-6315
> URL: https://issues.apache.org/jira/browse/SOLR-6315
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: SOLR-6315.patch
>
>
> As I described on SOLR-912, SimpleOrderedMap is redundant and generally 
> useless class, with confusing jdocs. We should remove it. I'll attach a patch 
> shortly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088054#comment-14088054
 ] 

Mark Miller commented on SOLR-3619:
---

bq. Next patch will include a new default_conf directory cooked up in 
server/solr/default_conf. It's a minimized solrconfig.xml with managed-schema 
and field-guessing enabled.

I'm not sure I'm sold on that as the default. Perhaps - with some big red 
documentation warnings. I think at a minimum though, it needs to be as easy as 
a command line switch to change between the two defaults. I think the non 
managed-schema and non field-guessing default should probably just have dynamic 
fields as a start. Field guessing has too many downsides to bring it to the 
front without a sensible production default as well.

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088038#comment-14088038
 ] 

Steve Rowe commented on SOLR-3619:
--

Tim, in standalone mode, a third option would be to use the named configsets 
functionality (SOLR-4478), right?

> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-08-06 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088029#comment-14088029
 ] 

Timothy Potter commented on SOLR-3619:
--

Just an update on where things sit with this one. I've committed bin/solr 
scripts (SOLR-3617) that will work with this new layout (and the legacy example 
layout). 

Next patch will include a new default_conf directory cooked up in 
server/solr/default_conf. It's a minimized solrconfig.xml with managed-schema 
and field-guessing enabled.

Having a default conf directory raises the question of how to use it during 
core creation. This is mainly for non-cloud mode since in cloud mode, you have 
to upload a config directory to ZooKeeper before creating cores and we have 
tools for that.

We can either
# have the bin/solr script implement a "new_core" command {{bin/solr new_core 
-n foo}} that creates the instance directory and cp -r's the default_conf to 
instanceDir/conf for the user and then just hits the Core API's CREATE 
endpoint, or
# add the ability to create a core using the default configuration to the 
CoreAdminHandler, i.e. the core creation logic uses some logic added to the 
SolrResourceLoader to find the default config when solrconfig.xml is not found

Of the two approaches, I'm favoring the first because it's more like the cloud 
experience (tooling puts the config in the right place), esp. since the Core 
Admin API isn't usable for creating new cores without doing some work upfront 
on the command-line (i.e. user has to go create the instanceDir first anyway).

The first approach also avoids the Solr code doing something subtle behind the 
scenes that the user is not aware of; for instance if the user fat-fingered the 
name of their conf directory (e.g. their instance dir contains cnf instead of 
conf) or something silly like that, building logic into the CoreAdminHandler to 
use default config would skip their config and use the default one vs. throwing 
an error about not finding their conf. However I have a way of talking myself 
into things that require less work :P
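Under the first approach the client-side tooling stays small: copy the default config into place, then hit the stock CoreAdmin CREATE endpoint. A rough Python sketch of that flow (illustrative only; the real bin/solr command is a shell script, and `new_core` here is an invented name):

```python
# Rough sketch of option 1: client-side tooling copies the default config
# into instanceDir/conf, then calls the ordinary CoreAdmin CREATE endpoint.
import os
import shutil
from urllib.parse import urlencode

def new_core(name, solr_home, default_conf, base_url="http://localhost:8983/solr"):
    """Create instanceDir/conf from default_conf; return the CREATE URL to hit."""
    instance_dir = os.path.join(solr_home, name)
    conf_dir = os.path.join(instance_dir, "conf")
    if not os.path.isdir(conf_dir):
        # The "cp -r default_conf instanceDir/conf" step (parents created too).
        shutil.copytree(default_conf, conf_dir)
    params = urlencode({"action": "CREATE", "name": name,
                        "instanceDir": instance_dir})
    # The script would then issue an HTTP GET to this URL; any CoreAdmin
    # error (e.g. missing solrconfig.xml) surfaces as usual.
    return f"{base_url}/admin/cores?{params}"
```

Because the config is copied before CREATE is called, a fat-fingered conf directory still fails loudly instead of being silently replaced by the default.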


> Rename 'example' dir to 'server' and pull examples into an 'examples' 
> directory
> ---
>
> Key: SOLR-3619
> URL: https://issues.apache.org/jira/browse/SOLR-3619
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Fix For: 4.9, 5.0
>
> Attachments: SOLR-3619.patch, SOLR-3619.patch, server-name-layout.png
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6329) facet.pivot.mincount=0 doesn't work well in distributed pivot faceting

2014-08-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087984#comment-14087984
 ] 

Hoss Man commented on SOLR-6329:


Notes from SOLR-2894 about the root of the issue...

{panel}

From what I can tell, the gist of the issue is that when dealing with 
sub-fields of the pivot, the coordination code doesn't know about some of the 
"0" values if no shard that has the value for the parent field even knows 
about the existence of the term.

The simplest example of this discrepancy (compared to single-node pivots) is to 
consider an index with only 2 docs...

{noformat}
[{"id":1,"top_s":"foo","sub_s":"bar"}
 {"id":2,"top_s":"xxx","sub_s":"yyy"}]
{noformat}

If those two docs exist in a single node index, and you pivot on 
{{top_s,sub_s}} using mincount=0 you get a response like this...

{noformat}
$ curl -sS 
'http://localhost:8881/solr/select?q=*:*&rows=0&facet=true&facet.pivot.mincount=0&facet.pivot=top_s,sub_s&omitHeader=true&wt=json&indent=true'
{
  "response":{"numFound":2,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_pivot":{
  "top_s,sub_s":[{
  "field":"top_s",
  "value":"foo",
  "count":1,
  "pivot":[{
  "field":"sub_s",
  "value":"bar",
  "count":1},
{
  "field":"sub_s",
  "value":"yyy",
  "count":0}]},
{
  "field":"top_s",
  "value":"xxx",
  "count":1,
  "pivot":[{
  "field":"sub_s",
  "value":"yyy",
  "count":1},
{
  "field":"sub_s",
  "value":"bar",
  "count":0}]}]}}}
{noformat}

If however you index each of those docs on a seperate shard, the response comes 
back like this...

{noformat}
$ curl -sS 
'http://localhost:8881/solr/select?q=*:*&rows=0&facet=true&facet.pivot.mincount=0&facet.pivot=top_s,sub_s&omitHeader=true&wt=json&indent=true&shards=localhost:8881/solr,localhost:8882/solr'
{
  "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_pivot":{
  "top_s,sub_s":[{
  "field":"top_s",
  "value":"foo",
  "count":1,
  "pivot":[{
  "field":"sub_s",
  "value":"bar",
  "count":1}]},
{
  "field":"top_s",
  "value":"xxx",
  "count":1,
  "pivot":[{
  "field":"sub_s",
  "value":"yyy",
  "count":1}]}]}}}
{noformat}

The only solution I can think of would be an extra (special to mincount=0) 
stage of logic, after each PivotFacetField is refined, that would:
* iterate over all the values of the current pivot
* build up a Set of all the known values for the child-pivots of those 
values
* iterate over all the values again, merging in a "0"-count child value for 
every value in the set

...ie: "At least one shard knows about value 'v_x' in field 'sub_field', so add 
a count of '0' for 'v_x' in every 'sub_field' collection nested under the 
'top_field' in our 'top_field,sub_field' pivot"

I haven't thought this idea through enough to be confident it would work, or 
that it's worth doing ... I'm certainly not convinced that mincount=0 makes 
enough sense in a facet.pivot use case to think getting this test working 
should hold up getting this committed -- probably something that should just be 
committed as is, with an open Jira noting it as a known bug.
{panel}

SOLR-2894 includes a commented out test case related to using mincount=0 in 
distributed pivot faceting in DistributedFacetPivotLargeTest (annotated with 
"SOLR-6329")
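The merge pass proposed in the panel above can be sketched against pivot entries shaped like the JSON responses shown earlier. A Python illustration of the idea only, not Solr's implementation; `backfill_zero_counts` is an invented name:

```python
# Sketch of the proposed mincount=0 post-refinement pass: for one level of a
# pivot, every parent value ends up listing every child value that any
# sibling parent value knows about, with count 0 where it was absent.

def backfill_zero_counts(values):
    # Steps 1+2: collect every (field, value) seen in any child pivot,
    # preserving first-seen order.
    known = []
    for v in values:
        for child in v.get("pivot", []):
            key = (child["field"], child["value"])
            if key not in known:
                known.append(key)
    # Step 3: merge a 0-count entry for any known child a parent is missing.
    for v in values:
        children = v.setdefault("pivot", [])
        have = {(c["field"], c["value"]) for c in children}
        for field, value in known:
            if (field, value) not in have:
                children.append({"field": field, "value": value, "count": 0})
    return values

# The two-shard example from above, before the pass:
pivots = [
    {"field": "top_s", "value": "foo", "count": 1,
     "pivot": [{"field": "sub_s", "value": "bar", "count": 1}]},
    {"field": "top_s", "value": "xxx", "count": 1,
     "pivot": [{"field": "sub_s", "value": "yyy", "count": 1}]},
]
backfill_zero_counts(pivots)
# "foo" now also carries a 0-count "yyy" child (and "xxx" a 0-count "bar"),
# matching the single-node response.
```

For deeper pivots the pass would have to be applied per level, under each distinct parent chain.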

> facet.pivot.mincount=0 doesn't work well in distributed pivot faceting
> --
>
> Key: SOLR-6329
> URL: https://issues.apache.org/jira/browse/SOLR-6329
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Minor
>
> Using facet.pivot.mincount=0 in conjunction with the distributed pivot 
> faceting support being added in SOLR-2894 doesn't work as folks would expect 
> if they are used to using facet.pivot.mincount=0 in a single-node setup.
> Filing this issue to track this as a known defect, because it may not have a 
> viable solution.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6329) facet.pivot.mincount=0 doesn't work well in distributed pivot faceting

2014-08-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6329:
--

 Summary: facet.pivot.mincount=0 doesn't work well in distributed 
pivot faceting
 Key: SOLR-6329
 URL: https://issues.apache.org/jira/browse/SOLR-6329
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Priority: Minor


Using facet.pivot.mincount=0 in conjunction with the distributed pivot faceting 
support being added in SOLR-2894 doesn't work as folks would expect if they are 
used to using facet.pivot.mincount=0 in a single-node setup.

Filing this issue to track this as a known defect, because it may not have a 
viable solution.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6328) facet.limit=0 returns no counts, even if facet.missing=true

2014-08-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087967#comment-14087967
 ] 

Hoss Man commented on SOLR-6328:



Examples of the problem...

* facet.field{noformat}
$ curl -sS 
'http://localhost:8983/solr/select?facet.field=inStock&facet.missing=true&facet.limit=1&facet=true&q=*:*&rows=0&omitHeader=true&wt=json&indent=true'
{
  "response":{"numFound":32,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "inStock":[
"true",17,
null,11]},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{}}}
$ curl -sS 
'http://localhost:8983/solr/select?facet.field=inStock&facet.missing=true&facet.limit=0&facet=true&q=*:*&rows=0&omitHeader=true&wt=json&indent=true'
{
  "response":{"numFound":32,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{
  "inStock":[]},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{}}}
{noformat}
* facet.pivot{noformat}
$ curl -sS 
'http://localhost:8983/solr/select?facet.pivot=manu_id_s,inStock&facet.missing=true&facet.limit=1&facet=true&q=*:*&rows=0&omitHeader=true&wt=json&indent=true'
{
  "response":{"numFound":32,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_pivot":{
  "manu_id_s,inStock":[{
  "field":"manu_id_s",
  "value":"corsair",
  "count":3,
  "pivot":[{
  "field":"inStock",
  "value":true,
  "count":3}]},
{
  "field":"manu_id_s",
  "value":null,
  "count":14,
  "pivot":[{
  "field":"inStock",
  "value":true,
  "count":3},
{
  "field":"inStock",
  "value":null,
  "count":11}]}]}}}
$ curl -sS 
'http://localhost:8983/solr/select?facet.pivot=manu_id_s,inStock&facet.missing=true&facet.limit=0&facet=true&q=*:*&rows=0&omitHeader=true&wt=json&indent=true'
{
  "response":{"numFound":32,"start":0,"docs":[]
  },
  "facet_counts":{
"facet_queries":{},
"facet_fields":{},
"facet_dates":{},
"facet_ranges":{},
"facet_intervals":{},
"facet_pivot":{
  "manu_id_s,inStock":[]}}}
{noformat}


I discovered this while working on SOLR-2894, where I initially thought it was a 
bug specific to (distributed) pivot faceting, but later realized facet.field 
also has the same problem.  There is a commented out distributed pivot test for 
this situation in DistributedFacetPivotLargeTest as part of SOLR-2894. 
(annotated with "SOLR-6328")


> facet.limit=0 returns no counts, even if facet.missing=true
> ---
>
> Key: SOLR-6328
> URL: https://issues.apache.org/jira/browse/SOLR-6328
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Minor
>
> facet.limit constrains the number of term values returned for a field when 
> using facet.field or facet.pivot, but that limit is supposed to be 
> independent of facet.missing, which adds an additional count beyond the 
> facet.limit for docs that are "missing" that field.
> This works fine for facet.limit >= 1, but if you use 
> {{facet.limit=0&facet.missing=true}} (i.e., you are only interested in the 
> missing count) you get no counts at all -- not even the missing count.
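The intended contract can be illustrated outside Solr: the limit caps the number of term buckets, while the missing bucket is an independent extra. A minimal Python sketch of that semantics (illustrative only, not Solr's implementation; the data mirrors the 32-doc inStock example above):

```python
from collections import Counter

def facet_counts(values, limit, missing=False):
    # At most `limit` term buckets, ordered by descending count.
    present = Counter(v for v in values if v is not None).most_common()
    buckets = present[:limit] if limit >= 0 else present
    if missing:
        # The missing bucket is appended independently of `limit`.
        buckets = buckets + [(None, sum(1 for v in values if v is None))]
    return buckets

# 32 docs: inStock=true on 17, missing on 11, false on 4
docs = ["true"] * 17 + [None] * 11 + ["false"] * 4
print(facet_counts(docs, limit=1, missing=True))  # [('true', 17), (None, 11)]
print(facet_counts(docs, limit=0, missing=True))  # expected: [(None, 11)]
```

With facet.limit=0 Solr currently returns an empty bucket list instead of the missing-only bucket, which is the bug being reported here.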



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6328) facet.limit=0 returns no counts, even if facet.missing=true

2014-08-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6328:
--

 Summary: facet.limit=0 returns no counts, even if 
facet.missing=true
 Key: SOLR-6328
 URL: https://issues.apache.org/jira/browse/SOLR-6328
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Priority: Minor


facet.limit constrains the number of term values returned for a field when 
using facet.field or facet.pivot, but that limit is supposed to be independent 
of facet.missing, which adds an additional count beyond the facet.limit for 
docs that are "missing" that field.

This works fine for facet.limit >= 1, but if you use 
{{facet.limit=0&facet.missing=true}} (i.e., you are only interested in the 
missing count) you get no counts at all -- not even the missing count.







[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2048 - Still Failing

2014-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2048/

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
We have a failed SPLITSHARD task

Stack Trace:
java.lang.AssertionError: We have a failed SPLITSHARD task
at 
__randomizedtesting.SeedInfo.seed([F68153FE96B8AAD7:7767DDE6E1E7CAEB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:125)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomize

[jira] [Updated] (SOLR-3617) Consider adding start scripts.

2014-08-06 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-3617:
-

Attachment: SOLR-3617.patch

I've committed this on trunk. Notable improvements from this last patch include:

* Interactive session to launch SolrCloud example for Windows
* bin/solr -i calls the SolrCLI class (SOLR-6233) to get basic information 
about a running server
* bin/solr healthcheck -collection foo runs some basic health checks against a 
collection (cloud mode only)
* Support for legacy (branch_4x layout) and soon-to-be server layout (SOLR-3619)
* Windows script works when Solr is installed in a directory containing a space
* Hardening of *nix and Windows scripts

I'm hoping to back-port these to branch_4x soon so they can be included in the 
4.10 release.

> Consider adding start scripts.
> --
>
> Key: SOLR-3617
> URL: https://issues.apache.org/jira/browse/SOLR-3617
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch, 
> SOLR-3617.patch
>
>
> I've always found that starting Solr with java -jar start.jar is a little odd 
> if you are not a Java guy, but I think shipping some start scripts has bigger 
> pros than just looking less odd.
> Not only do you get a cleaner start command:
> sh solr.sh or solr.bat or something
> But you also get a couple of other nice things:
> * it becomes fairly obvious to a new casual user how to start the 
> system without reading the docs.
> * you can make the working dir the location of the script - this lets you 
> call the start script from another dir and still have all the relative dir 
> setup work.
> * an out-of-the-box place to save startup params like -Xmx.
> * we could have multiple start scripts - say solr-dev.sh, which logs to the 
> console and defaults to the system default for RAM, and solr-prod, which is 
> fully configured for logging, pegs Xms and Xmx at some larger value (1GB?), 
> etc.
> You would still of course be able to run the java cmd directly - and that is 
> probably what you would do when it's time to run as a service - but these 
> could be good starter scripts to get people on the right track and improve 
> the initial user experience.






[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0_11) - Build # 10844 - Failure!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10844/
Java: 64bit/jdk1.8.0_11 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
REGRESSION:  
org.apache.solr.client.solrj.SolrExampleBinaryTest.testChildDoctransformer

Error Message:
Expected mime type application/octet-stream but got text/html.
Error 500 Server Error
HTTP ERROR: 500
Problem accessing /solr/collection1/select. Reason: Server Error
Powered by Jetty://
Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Expected 
mime type application/octet-stream but got text/html.
Error 500 Server Error
HTTP ERROR: 500
Problem accessing /solr/collection1/select. Reason: Server Error
Powered by Jetty://
at 
__randomizedtesting.SeedInfo.seed([6E2601170AB2735A:1DFC1E8D86AA045C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:513)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.client.solrj.SolrExampleTests.testChildDoctransformer(SolrExampleTests.java:1373)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Resolved] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-6313.
---

   Resolution: Fixed
Fix Version/s: 4.10
   5.0

Thanks for testing it out [~vamsee]!

> Improve SolrCloud cloud-dev scripts.
> 
>
> Key: SOLR-6313
> URL: https://issues.apache.org/jira/browse/SOLR-6313
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.10
>
> Attachments: SOLR-6313.patch
>
>
I've been improving the cloud-dev scripts to help with manual testing. I've 
been doing this mostly as part of SOLR-5656, but I'd like to spin it out into 
its own issue.






[jira] [Commented] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087890#comment-14087890
 ] 

ASF subversion and git services commented on SOLR-6313:
---

Commit 1616278 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1616278 ]

SOLR-6313: Improve SolrCloud cloud-dev scripts.

> Improve SolrCloud cloud-dev scripts.
> 
>
> Key: SOLR-6313
> URL: https://issues.apache.org/jira/browse/SOLR-6313
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 5.0, 4.10
>
> Attachments: SOLR-6313.patch
>
>
> I've been improving the cloud-dev scripts to help with manual testing. I've 
> been doing this mostly as part of SOLR-5656, but I'd like to spin it out into 
> its own issue.






[jira] [Commented] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087889#comment-14087889
 ] 

ASF subversion and git services commented on SOLR-6313:
---

Commit 1616275 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1616275 ]

SOLR-6313: Improve SolrCloud cloud-dev scripts.

> Improve SolrCloud cloud-dev scripts.
> 
>
> Key: SOLR-6313
> URL: https://issues.apache.org/jira/browse/SOLR-6313
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6313.patch
>
>
> I've been improving the cloud-dev scripts to help with manual testing. I've 
> been doing this mostly as part of SOLR-5656, but I'd like to spin it out into 
> its own issue.






[jira] [Commented] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087881#comment-14087881
 ] 

Mark Miller commented on SOLR-6313:
---

I'll commit this shortly. There are still other things I'd like to do, but at 
this point I'm copying my latest version of these scripts around between 
checkouts, and it will be a lot simpler to commit this and continue improving 
in other issues.

> Improve SolrCloud cloud-dev scripts.
> 
>
> Key: SOLR-6313
> URL: https://issues.apache.org/jira/browse/SOLR-6313
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-6313.patch
>
>
> I've been improving the cloud-dev scripts to help with manual testing. I've 
> been doing this mostly as part of SOLR-5656, but I'd like to spin it out into 
> its own issue.






[jira] [Commented] (SOLR-4580) Support for protecting content in ZK

2014-08-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087877#comment-14087877
 ] 

Mark Miller commented on SOLR-4580:
---

I've updated this to trunk. Patch coming after I do some testing again.

> Support for protecting content in ZK
> 
>
> Key: SOLR-4580
> URL: https://issues.apache.org/jira/browse/SOLR-4580
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Affects Versions: 4.2
>Reporter: Per Steffensen
>Assignee: Mark Miller
>  Labels: security, solr, zookeeper
> Attachments: SOLR-4580.patch, SOLR-4580_branch_4x_r1482255.patch
>
>
> We want to protect content in ZooKeeper.
> In order to run a CloudSolrServer in "client-space" you will have to allow 
> access to ZooKeeper from client-space.
> If you do not trust the persons or systems in client-space, you want to 
> protect ZooKeeper against evilness from client-space - e.g.
> * Changing configuration
> * Trying to mess up the system by manipulating clusterstate
> * Adding a delete-collection job to be carried out by the Overseer
> * etc.
> Even if you do not open ZooKeeper access to someone outside your "secure 
> zone", you might want to protect ZooKeeper content from being manipulated by, 
> e.g.,
> * Malware that found its way into the secure zone
> * Other systems also using ZooKeeper
> * etc.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1754 - Still Failing!

2014-08-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1754/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:50726/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:50726/collection1
at 
__randomizedtesting.SeedInfo.seed([ACB6DAD6A7E73D20:2D5054CED0B85D1C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:561)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apa

[jira] [Commented] (SOLR-3617) Consider adding start scripts.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087870#comment-14087870
 ] 

ASF subversion and git services commented on SOLR-3617:
---

Commit 1616271 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1616271 ]

SOLR-3617: start/stop script with support for running examples

> Consider adding start scripts.
> --
>
> Key: SOLR-3617
> URL: https://issues.apache.org/jira/browse/SOLR-3617
> Project: Solr
>  Issue Type: New Feature
>Reporter: Mark Miller
>Assignee: Timothy Potter
> Attachments: SOLR-3617.patch, SOLR-3617.patch, SOLR-3617.patch
>
>
> I've always found that starting Solr with java -jar start.jar is a little odd 
> if you are not a Java guy, but I think shipping some start scripts has bigger 
> pros than just looking less odd.
> Not only do you get a cleaner start command:
> sh solr.sh or solr.bat or something
> But you also get a couple of other nice things:
> * it becomes fairly obvious to a new casual user how to start the 
> system without reading the docs.
> * you can make the working dir the location of the script - this lets you 
> call the start script from another dir and still have all the relative dir 
> setup work.
> * an out-of-the-box place to save startup params like -Xmx.
> * we could have multiple start scripts - say solr-dev.sh, which logs to the 
> console and defaults to the system default for RAM, and solr-prod, which is 
> fully configured for logging, pegs Xms and Xmx at some larger value (1GB?), 
> etc.
> You would still of course be able to run the java cmd directly - and that is 
> probably what you would do when it's time to run as a service - but these 
> could be good starter scripts to get people on the right track and improve 
> the initial user experience.






[jira] [Commented] (SOLR-5244) Exporting Full Sorted Result Sets

2014-08-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087849#comment-14087849
 ] 

David Smiley commented on SOLR-5244:


+1 great idea Erik

> Exporting Full Sorted Result Sets
> -
>
> Key: SOLR-5244
> URL: https://issues.apache.org/jira/browse/SOLR-5244
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 5.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: 0001-SOLR_5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
> SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch
>
>
> This ticket allows Solr to export full sorted result sets. The proposed 
> syntax is:
> {code}
> q=*:*&rows=-1&wt=xsort&fl=a,b,c&sort=a desc,b desc
> {code}
> Under the covers, the rows=-1 parameter will signal Solr to use the 
> ExportQParserPlugin as a RankQuery, which will simply collect a BitSet of the 
> results. The SortingResponseWriter will sort the results based on the sort 
> criteria and stream the results out.
> This capability will open up Solr for a whole range of uses that were 
> typically done using aggregation engines like Hadoop. For example:
> *Large Distributed Joins*
> A client outside of Solr calls two different Solr collections and returns the 
> results sorted by a join key. The client iterates through both streams and 
> performs a merge join.
> *Fully Distributed Field Collapsing/Grouping*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and returns results sorted by the collapse key. The client 
> merge joins the sorted lists on the collapse key to perform the field 
> collapse.
> *High Cardinality Distributed Aggregation*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and sorts on a high cardinality field. The client then 
> merge joins the sorted lists to perform the high cardinality aggregation.
> *Large Scale Time Series Rollups*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on time dimensions. The client merge joins the sorted result sets 
> and rolls up the time dimensions as it iterates through the data.
> In these scenarios Solr is being used as a distributed sorting engine. 
> Developers can write clients that take advantage of this sorting capability 
> in any way they wish.
> *Session Analysis and Aggregation*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on the sessionID. The client merge joins the sorted results and 
> aggregates sessions as it iterates through the results.
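The distributed join, collapse, and rollup use cases above all reduce to a client-side merge join of streams that each arrive sorted on the join/collapse key. A minimal sketch of that client-side merge (illustrative Python, not SolrJ code; assumes at most one match per key on each side):

```python
def merge_join(left, right, key=lambda rec: rec[0]):
    """Join two record streams, each already sorted ascending on `key`.
    Yields (left_record, right_record) pairs whose keys match."""
    left, right = iter(left), iter(right)
    l, r = next(left, None), next(right, None)
    while l is not None and r is not None:
        if key(l) < key(r):
            l = next(left, None)       # advance the stream that is behind
        elif key(l) > key(r):
            r = next(right, None)
        else:
            yield l, r                 # keys match: emit the joined pair
            l, r = next(left, None), next(right, None)

# Two collections streamed back sorted by the join key:
coll_a = [(1, "a1"), (2, "a2"), (4, "a4")]
coll_b = [(2, "b2"), (3, "b3"), (4, "b4")]
print(list(merge_join(coll_a, coll_b)))
# [((2, 'a2'), (2, 'b2')), ((4, 'a4'), (4, 'b4'))]
```

Because both inputs arrive pre-sorted, the join runs in a single pass with constant memory, which is what makes the "Solr as a distributed sorting engine" pattern practical at scale.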






[jira] [Commented] (SOLR-5244) Exporting Full Sorted Result Sets

2014-08-06 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087835#comment-14087835
 ] 

Joel Bernstein commented on SOLR-5244:
--

I like this idea. The request would look something like this then:

/export?q=blah&fl=field1,field2&sort=field+desc

The defaults would specify the rq and wt parameters.
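Wiring that up would just mean a handler mapping whose invariants bake in the magic rq/wt incantations. A hypothetical solrconfig.xml sketch (the parameter values are assumptions; "xport" and "xsort" stand in for whatever names ExportQParserPlugin and SortingResponseWriter end up registered under):

```xml
<requestHandler name="/export" class="solr.SearchHandler">
  <lst name="invariants">
    <!-- hypothetical registration names for the export pieces -->
    <str name="rq">{!xport}</str>  <!-- ExportQParserPlugin as a RankQuery -->
    <str name="wt">xsort</str>     <!-- SortingResponseWriter -->
  </lst>
</requestHandler>
```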


> Exporting Full Sorted Result Sets
> -
>
> Key: SOLR-5244
> URL: https://issues.apache.org/jira/browse/SOLR-5244
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 5.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: 0001-SOLR_5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
> SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch
>
>
> This ticket allows Solr to export full sorted result sets. The proposed 
> syntax is:
> {code}
> q=*:*&rows=-1&wt=xsort&fl=a,b,c&sort=a desc,b desc
> {code}
> Under the covers, the rows=-1 parameter will signal Solr to use the 
> ExportQParserPlugin as a RankQuery, which will simply collect a BitSet of the 
> results. The SortingResponseWriter will sort the results based on the sort 
> criteria and stream the results out.
> This capability will open up Solr for a whole range of uses that were 
> typically done using aggregation engines like Hadoop. For example:
> *Large Distributed Joins*
> A client outside of Solr calls two different Solr collections and returns the 
> results sorted by a join key. The client iterates through both streams and 
> performs a merge join.
> *Fully Distributed Field Collapsing/Grouping*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and returns results sorted by the collapse key. The client 
> merge joins the sorted lists on the collapse key to perform the field 
> collapse.
> *High Cardinality Distributed Aggregation*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and sorts on a high cardinality field. The client then 
> merge joins the sorted lists to perform the high cardinality aggregation.
> *Large Scale Time Series Rollups*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on time dimensions. The client merge joins the sorted result sets 
> and rolls up the time dimensions as it iterates through the data.
> In these scenarios Solr is being used as a distributed sorting engine. 
> Developers can write clients that take advantage of this sorting capability 
> in any way they wish.
> *Session Analysis and Aggregation*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on the sessionID. The client merge joins the sorted results and 
> aggregates sessions as it iterates through the results.






[jira] [Commented] (SOLR-5244) Exporting Full Sorted Result Sets

2014-08-06 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5244?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087808#comment-14087808
 ] 

Erik Hatcher commented on SOLR-5244:


Just a thought at first glance: those are some scary/hairy implementation 
details with the quirky parameter requirements, so maybe this could start out 
as a request handler (one that can still be a SearchHandler subclass and thus 
support components) that gets mapped to /export and sets the magic 
incantations as defaults or invariants?

> Exporting Full Sorted Result Sets
> -
>
> Key: SOLR-5244
> URL: https://issues.apache.org/jira/browse/SOLR-5244
> Project: Solr
>  Issue Type: New Feature
>  Components: search
>Affects Versions: 5.0
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Fix For: 5.0, 4.10
>
> Attachments: 0001-SOLR_5244.patch, SOLR-5244.patch, SOLR-5244.patch, 
> SOLR-5244.patch, SOLR-5244.patch, SOLR-5244.patch
>
>
> This ticket allows Solr to export full sorted result sets. The proposed 
> syntax is:
> {code}
> q=*:*&rows=-1&wt=xsort&fl=a,b,c&sort=a desc,b desc
> {code}
> Under the covers, the rows=-1 parameter will signal Solr to use the 
> ExportQParserPlugin as a RankQuery, which will simply collect a BitSet of the 
> results. The SortingResponseWriter will sort the results based on the sort 
> criteria and stream the results out.
> This capability will open up Solr for a whole range of uses that were 
> typically done using aggregation engines like Hadoop. For example:
> *Large Distributed Joins*
> A client outside of Solr calls two different Solr collections and returns the 
> results sorted by a join key. The client iterates through both streams and 
> performs a merge join.
> *Fully Distributed Field Collapsing/Grouping*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and returns results sorted by the collapse key. The client 
> merge joins the sorted lists on the collapse key to perform the field 
> collapse.
> *High Cardinality Distributed Aggregation*
> A client outside of Solr makes individual calls to all the servers in a 
> single collection and sorts on a high cardinality field. The client then 
> merge joins the sorted lists to perform the high cardinality aggregation.
> *Large Scale Time Series Rollups*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on time dimensions. The client merge joins the sorted result sets 
> and rolls up the time dimensions as it iterates through the data.
> In these scenarios Solr is being used as a distributed sorting engine. 
> Developers can write clients that take advantage of this sorting capability 
> in any way they wish.
> *Session Analysis and Aggregation*
> A client outside Solr makes individual calls to all servers in a collection 
> and sorts on the sessionID. The client merge joins the sorted results and 
> aggregates sessions as it iterates through the results.
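All of the use cases above reduce to the same client-side pattern: merge-joining streams that Solr has already sorted on the join key. As a rough illustration of that pattern (this is a hypothetical client-side sketch, not Solr code; the types and method names are invented for the example, and real export streams would be iterated lazily rather than held in lists):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the client-side merge join described above: two
// streams, each already sorted on the join key (as a sorted export would
// return them), are joined in a single forward pass with no buffering of
// the full result sets.
public class MergeJoinSketch {

    /** Joins two key-sorted lists, emitting a pair for each matching key. */
    static List<String> mergeJoin(List<Integer> left, List<Integer> right) {
        List<String> joined = new ArrayList<>();
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            int cmp = left.get(i).compareTo(right.get(j));
            if (cmp < 0) {
                i++;            // left key too small, advance left stream
            } else if (cmp > 0) {
                j++;            // right key too small, advance right stream
            } else {
                joined.add(left.get(i) + ":" + right.get(j));
                i++;            // match: emit and advance both streams
                j++;
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        // Both inputs sorted on the join key, as a sorted export guarantees.
        List<String> out = mergeJoin(Arrays.asList(1, 3, 5, 7),
                                     Arrays.asList(3, 4, 5, 8));
        System.out.println(out);   // [3:3, 5:5]
    }
}
```

The same single-pass structure covers collapsing, rollups, and session aggregation: only the action taken on a key match changes.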






[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #671: POMs out of sync

2014-08-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/671/

3 tests failed.
REGRESSION:  
org.apache.solr.cloud.OverseerCollectionProcessorTest.testReplicationEqualNumberOfSlicesPerNodeSendNullCreateNodes

Error Message:
 Queue not empty within 1 ms1407333643031

Stack Trace:
java.lang.AssertionError:  Queue not empty within 1 ms1407333643031
at 
__randomizedtesting.SeedInfo.seed([CF97F782E18A5F9E:F9BF94258D78A44A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerCollectionProcessorTest.waitForEmptyQueue(OverseerCollectionProcessorTest.java:556)
at 
org.apache.solr.cloud.OverseerCollectionProcessorTest.testTemplate(OverseerCollectionProcessorTest.java:601)
at 
org.apache.solr.cloud.OverseerCollectionProcessorTest.testReplicationEqualNumberOfSlicesPerNodeSendNullCreateNodes(OverseerCollectionProcessorTest.java:683)


REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration 
node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because 
creating the ephemeral registration node in ZooKeeper failed
at org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at 
org.apache.solr.common.cloud.SolrZkClient$11.execute(SolrZkClient.java:457)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:73)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:454)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:411)
at 
org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:398)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase$1.execute(ElectionContext.java:136)
at 
org.apache.solr.common.util.RetryUtil.retryOnThrowable(RetryUtil.java:34)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:131)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
at 
org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:155)
at 
org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:660)


FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
We have a failed SPLITSHARD task

Stack Trace:
java.lang.AssertionError: We have a failed SPLITSHARD task
at 
__randomizedtesting.SeedInfo.seed([7A25AEB1FE01C372:FBC320A9895EA34E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:125)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)




Build Log:
[...truncated 53093 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:182: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/extra-targets.xml:77:
 Java returned: 1

Total time: 210 minutes 2 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-6233) Provide basic command line tools for checking Solr status and health.

2014-08-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087800#comment-14087800
 ] 

ASF subversion and git services commented on SOLR-6233:
---

Commit 1616256 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1616256 ]

SOLR-6233: Basic implementation of a command-line application for checking 
status of Solr and running a healthcheck for a collection; intended to be used 
with bin/solr script.

> Provide basic command line tools for checking Solr status and health.
> -
>
> Key: SOLR-6233
> URL: https://issues.apache.org/jira/browse/SOLR-6233
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Minor
>
> As part of the start script development work SOLR-3617, example restructuring 
> SOLR-3619, and the overall curb appeal work SOLR-4430, I'd like to have an 
> option on the SystemInfoHandler that gives a shorter, well formatted JSON 
> synopsis of essential information. I know "essential" is vague ;-) but right 
> now using curl to http://host:port/solr/admin/info/system?wt=json gives too 
> much information when I just want a synopsis of a Solr server. 
> Maybe something like &overview=true?
> Result would be:
> {noformat}
> {
> >   "address": "http://localhost:8983/solr",
>   "mode": "solrcloud",
>   "zookeeper": "localhost:2181/foo",
>   "uptime": "2 days, 3 hours, 4 minutes, 5 seconds",
>   "version": "5.0-SNAPSHOT",
>   "status": "healthy",
>   "memory": "4.2g of 6g"
> }
> {noformat}
> Now of course, one may argue all this information can be easily parsed from 
> the JSON but consider cross-platform command-line tools that don't have 
> immediate access to a JSON parser, such as the bin/solr start script.
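The flat one-level synopsis shown above can be read without a JSON parser at all, which is the point of the last paragraph. A minimal sketch of that idea (the class and field names here are invented for illustration; this assumes the simple `"key": "value"` layout of the example and is deliberately not a general JSON parser):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull a single string value out of the flat synopsis JSON using
// a plain regex, the way a start script or tool without a JSON library
// could. Only handles the one-level "key": "value" shape shown above.
public class SynopsisField {

    static String field(String json, String key) {
        Matcher m = Pattern
                .compile("\"" + Pattern.quote(key) + "\"\\s*:\\s*\"([^\"]*)\"")
                .matcher(json);
        return m.find() ? m.group(1) : null;   // null when the key is absent
    }

    public static void main(String[] args) {
        String synopsis = "{ \"mode\": \"solrcloud\", \"status\": \"healthy\" }";
        System.out.println(field(synopsis, "status"));   // healthy
    }
}
```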






[jira] [Commented] (LUCENE-5871) Simplify or remove use of Version in IndexWriterConfig

2014-08-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14087783#comment-14087783
 ] 

Robert Muir commented on LUCENE-5871:
-

{quote}
Actually, I rather like the change in semantics (from the current (on trunk) 
weird "throw exception or not in close telling you if you lost changes", to 
"commit on close").

And ... the default makes me nervous: can we go back to 4.x's default? Ie, by 
default close will wait for merges / commit, but you can disable this to make 
close == rollback by calling IWC.setCommitOnClose(false).

If we do this, I think we need to fix close to call shutdown(true) when 
commitOnClose is true, else rollback, and it no longer throws any exceptions 
about changes being lost
{quote}

+1

This gives the option for advanced users but prevents any scary mailing-list 
messages about people losing documents because they passed VERSION_CURRENT.

> Simplify or remove use of Version in IndexWriterConfig
> --
>
> Key: LUCENE-5871
> URL: https://issues.apache.org/jira/browse/LUCENE-5871
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ryan Ernst
> Attachments: LUCENE-5871.patch
>
>
> {{IndexWriter}} currently uses Version from {{IndexWriterConfig}} to 
> determine the semantics of {{close()}}.  This is a trapdoor for users, as 
> they often default to just sending Version.LUCENE_CURRENT since they don't 
> understand what it will be used for.  Instead, we should make the semantics 
> of close a direct option in IWC.
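The two close() semantics under discussion can be modeled with a toy sketch. This is not Lucene code: it only mimics the proposed behavior, where commitOnClose=true makes close() flush pending changes (the 4.x-style default Robert and Mike prefer) and commitOnClose=false makes close() behave like rollback():

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the proposed IWC.setCommitOnClose option: with true,
// close() commits pending changes; with false, close() == rollback and
// pending changes are silently dropped. Models the proposal only.
public class ToyWriter {
    private final boolean commitOnClose;
    private final List<String> committed = new ArrayList<>();
    private final List<String> pending = new ArrayList<>();

    ToyWriter(boolean commitOnClose) { this.commitOnClose = commitOnClose; }

    void addDocument(String doc) { pending.add(doc); }

    void commit() { committed.addAll(pending); pending.clear(); }

    void close() {
        if (commitOnClose) commit();   // proposed default: nothing is lost
        else pending.clear();          // opt-in: close behaves like rollback
    }

    List<String> committedDocs() { return committed; }

    public static void main(String[] args) {
        ToyWriter safe = new ToyWriter(true);
        safe.addDocument("doc1");
        safe.close();
        System.out.println(safe.committedDocs());   // [doc1]

        ToyWriter fast = new ToyWriter(false);
        fast.addDocument("doc1");
        fast.close();
        System.out.println(fast.committedDocs());   // []
    }
}
```

With the commit-on-close default, a user who never calls commit() and just closes the writer keeps their documents, which is exactly the trapdoor the issue wants to remove.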





