[jira] [Commented] (SOLR-7296) Reconcile facetting implementations
[ https://issues.apache.org/jira/browse/SOLR-7296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388122#comment-14388122 ] Toke Eskildsen commented on SOLR-7296: -- At least for plain String faceting, Solr distributed faceting is fairly simple and works exactly as you describe. Its interface for the second-phase fine-counting is basically "give me the counts for these exact terms". It is clever enough not to re-request counts already delivered in phase 1, but that is an implementation detail. The core classes would be FacetComponent for the logistics, then DocValuesFacets, SimpleFacets and UninvertedField for the three different-but-nearly-the-same versions of String faceting. Reconcile facetting implementations --- Key: SOLR-7296 URL: https://issues.apache.org/jira/browse/SOLR-7296 Project: Solr Issue Type: Task Components: faceting Reporter: Steve Molloy SOLR-7214 introduced a new way of controlling faceting, and the umbrella SOLR-6348 brings a lot of improvements in facet functionality, namely around pivots. Both make a lot of sense from a user perspective, but currently have completely different implementations. With the analytics component, this makes 3 implementations of the same logic, which are bound to behave differently as time goes by. We should reconcile all implementations to ease maintenance and offer consistent behaviour no matter how parameters are passed to the API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
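The merge-then-refine flow the comment describes can be sketched as follows. This is a simplified illustration with made-up class and method names, not the actual FacetComponent code: phase 1 sums the per-shard top terms, and phase 2 asks each shard only for the candidate terms it did not already report.

```java
import java.util.*;

public class TwoPhaseFacetMerge {
    // Phase 1: each shard returns its local top terms with counts;
    // the coordinator sums them into one candidate list.
    public static Map<String, Long> merge(List<Map<String, Long>> shardCounts) {
        Map<String, Long> merged = new HashMap<>();
        for (Map<String, Long> counts : shardCounts) {
            counts.forEach((term, c) -> merged.merge(term, c, Long::sum));
        }
        return merged;
    }

    // Phase 2: the terms a given shard must be re-asked about are exactly
    // the candidates it did not include in its phase-1 response
    // ("give me the counts for these exact terms").
    public static Set<String> refinementTerms(Set<String> candidates,
                                              Map<String, Long> shardPhase1) {
        Set<String> missing = new TreeSet<>(candidates);
        missing.removeAll(shardPhase1.keySet());
        return missing;
    }
}
```

The not-re-requesting optimization the comment mentions falls out naturally here: a shard that already reported a term in phase 1 never appears in its refinement set.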
[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems
[ https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388131#comment-14388131 ] Shawn Heisey commented on SOLR-7319: Does the bin/solr script offer a way to send an option directly to the java commandline? Should we have the ability to have a local user config script (similar to /etc/default/solr but contained within the solr download, with both shell and windows versions) to provide additional config? Workaround the Four Month Bug causing GC pause problems - Key: SOLR-7319 URL: https://issues.apache.org/jira/browse/SOLR-7319 Project: Solr Issue Type: Bug Components: scripts and tools Affects Versions: 5.0 Reporter: Shawn Heisey Assignee: Shawn Heisey Fix For: 5.1 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch A twitter engineer found a bug in the JVM that contributes to GC pause problems: http://www.evanjones.ca/jvm-mmap-pause.html Problem summary (in case the blog post disappears): The JVM calculates statistics on things like garbage collection and writes them to a file in the temp directory using MMAP. If there is a lot of other MMAP write activity, which is precisely how Lucene accomplishes indexing and merging, it can result in a GC pause because the mmap write to the temp file is delayed. We should implement the workaround in the solr start scripts (disable creation of the mmap statistics tempfile) and document the impact in CHANGES.txt.
[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems
[ https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388154#comment-14388154 ] Ferenczi Jim commented on SOLR-7319: Most of the java options in the solr.in.cmd should not be activated by default. The tenuring threshold, the number of threads for the GC, ...: they all depend on the type of deployment you have, the size of the heap and the machine hosting the Solr node. In my company we are using a custom script full of java options that we added over the years. Most of the options are there because somebody added them with the assertion that performance is better. Most of the time we don't know what the option is for, but nobody wants to remove it because the urban legend says it's useful. The solr startup script should be almost empty (at least for the java options), maybe one or two options to set up the garbage collector and that's it. Workaround the Four Month Bug causing GC pause problems - Key: SOLR-7319 URL: https://issues.apache.org/jira/browse/SOLR-7319 Project: Solr Issue Type: Bug Components: scripts and tools Affects Versions: 5.0 Reporter: Shawn Heisey Assignee: Shawn Heisey Fix For: 5.1 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch A twitter engineer found a bug in the JVM that contributes to GC pause problems: http://www.evanjones.ca/jvm-mmap-pause.html Problem summary (in case the blog post disappears): The JVM calculates statistics on things like garbage collection and writes them to a file in the temp directory using MMAP. If there is a lot of other MMAP write activity, which is precisely how Lucene accomplishes indexing and merging, it can result in a GC pause because the mmap write to the temp file is delayed. We should implement the workaround in the solr start scripts (disable creation of the mmap statistics tempfile) and document the impact in CHANGES.txt.
[jira] [Created] (SOLR-7329) Show core name in logging UI
Ryan McKinley created SOLR-7329: --- Summary: Show core name in logging UI Key: SOLR-7329 URL: https://issues.apache.org/jira/browse/SOLR-7329 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Assignee: Ryan McKinley Fix For: 5.2, Trunk Now that the logging events know the core name, we should show that in the UI also
Re: 5.1 branch created
On Tue, Mar 31, 2015 at 4:57 AM, Timothy Potter thelabd...@gmail.com wrote: FYI - We've already agreed that LUCENE-6303 should get committed to this branch when it is ready. You created the branch after I committed the patch on LUCENE-6303 so this looks good to me already! -- Adrien
[jira] [Updated] (SOLR-6924) TestSolrConfigHandlerCloud fails frequently.
[ https://issues.apache.org/jira/browse/SOLR-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6924: - Attachment: SOLR-6924.patch New fix that invokes a refresh command on each replica to ensure that they are all updated to the latest config version. This is the same strategy used by schema reloads. TestSolrConfigHandlerCloud fails frequently. Key: SOLR-6924 URL: https://issues.apache.org/jira/browse/SOLR-6924 Project: Solr Issue Type: Test Reporter: Mark Miller Assignee: Noble Paul Fix For: Trunk, 5.1 Attachments: SOLR-6924.patch, SOLR-6924.patch I see this fail all the time. Usually something like: java.lang.AssertionError: Could not get expected value P val for path [response, params, y, p] full output {
[jira] [Updated] (SOLR-6924) TestSolrConfigHandlerCloud fails frequently.
[ https://issues.apache.org/jira/browse/SOLR-6924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6924: - Fix Version/s: 5.1 Trunk TestSolrConfigHandlerCloud fails frequently. Key: SOLR-6924 URL: https://issues.apache.org/jira/browse/SOLR-6924 Project: Solr Issue Type: Test Reporter: Mark Miller Assignee: Noble Paul Fix For: Trunk, 5.1 Attachments: SOLR-6924.patch, SOLR-6924.patch I see this fail all the time. Usually something like: java.lang.AssertionError: Could not get expected value P val for path [response, params, y, p] full output {
[jira] [Assigned] (SOLR-7325) Change Slice state into enum
[ https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shai Erera reassigned SOLR-7325: Assignee: Shai Erera Change Slice state into enum Key: SOLR-7325 URL: https://issues.apache.org/jira/browse/SOLR-7325 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Shai Erera Assignee: Shai Erera Attachments: SOLR-7325.patch, SOLR-7325.patch, SOLR-7325.patch Slice state is currently interacted with as a string. It is IMO not trivial to understand which values it can be compared to, in part because the Replica and Slice states are located in different classes, some repeating the same constant names and values. Also, it's not very clear when a Slice gets into which state and what that means. I think if it's an enum, documented briefly in the code, it would be easier to interact with through code. I don't mind if we include more extensive documentation in the reference guide / wiki and refer people there for more details.
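A sketch of what such an enum might look like. The state names and the lowercase string form are illustrative assumptions here; the actual constants and serialization live in Solr's cluster-state classes, not in this snippet:

```java
import java.util.Locale;

// Hypothetical Slice state enum: one documented place for the legal
// values, instead of string constants scattered across classes.
public enum SliceState {
    ACTIVE, CONSTRUCTION, RECOVERY, INACTIVE;

    // Parse the lowercase string form stored in cluster state;
    // an unknown value now fails fast instead of silently mismatching.
    public static SliceState fromString(String s) {
        return valueOf(s.toUpperCase(Locale.ROOT));
    }

    @Override
    public String toString() {
        // Serialize back to the lowercase wire form.
        return name().toLowerCase(Locale.ROOT);
    }
}
```

Comparisons then become `state == SliceState.ACTIVE` rather than equality checks against string constants repeated in several classes.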
[jira] [Updated] (SOLR-7325) Change Slice state into enum
[ https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shai Erera updated SOLR-7325: - Attachment: SOLR-7325.patch Patch fixes more places that made some tests angry. I also replaced some of the strings I found with their CONSTANT reference. I think it's ready! Change Slice state into enum Key: SOLR-7325 URL: https://issues.apache.org/jira/browse/SOLR-7325 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Shai Erera Attachments: SOLR-7325.patch, SOLR-7325.patch, SOLR-7325.patch Slice state is currently interacted with as a string. It is IMO not trivial to understand which values it can be compared to, in part because the Replica and Slice states are located in different classes, some repeating the same constant names and values. Also, it's not very clear when a Slice gets into which state and what that means. I think if it's an enum, documented briefly in the code, it would be easier to interact with through code. I don't mind if we include more extensive documentation in the reference guide / wiki and refer people there for more details.
[jira] [Updated] (SOLR-7329) Show core name in logging UI
[ https://issues.apache.org/jira/browse/SOLR-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McKinley updated SOLR-7329: Attachment: SOLR-7329-core-in-logging-ui.patch here is a simple patch. Show core name in logging UI Key: SOLR-7329 URL: https://issues.apache.org/jira/browse/SOLR-7329 Project: Solr Issue Type: Improvement Reporter: Ryan McKinley Assignee: Ryan McKinley Fix For: Trunk, 5.2 Attachments: SOLR-7329-core-in-logging-ui.patch Now that the logging events know the core name, we should show that in the UI also
[jira] [Created] (LUCENE-6381) DocumentsWriterStallControl's .wait() should have a time limit
Michael McCandless created LUCENE-6381: -- Summary: DocumentsWriterStallControl's .wait() should have a time limit Key: LUCENE-6381 URL: https://issues.apache.org/jira/browse/LUCENE-6381 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 This build was hung: http://build-us-00.elastic.co/job/es_core_15_centos/230/testReport/junit/org.elasticsearch.index.engine/InternalEngineTests/testDeletesAloneCanTriggerRefresh/ Only one thread was stalled in DocumentsWriterStallControl, which means we have a bug somewhere, because that thread should have un-stalled once the other (too many) threads finished flushing their segments. I think we should make a simple defensive change here: instead of wait(), which waits forever for a .notify/All() to wake it up, we should wait for up to a time limit. This way when any concurrency bug like this strikes, we won't hang forever. I cannot reproduce that particular hang... what's unique about that test is it uses a positively minuscule (1 KB) IW buffer.
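The proposed defensive change boils down to replacing an unbounded wait() with a bounded one inside the usual condition loop, so a lost notification costs at most one timeout instead of hanging forever. A minimal sketch with illustrative names and an assumed one-second bound, not the actual DocumentsWriterStallControl code:

```java
// Toy stall-control: threads block while "stalled" is set, but re-check
// the flag periodically so a missed notifyAll() cannot hang them forever.
public class StallControlSketch {
    private boolean stalled;

    public synchronized void waitIfStalled() {
        while (stalled) {
            try {
                // Bounded wait: even if the wake-up notification is lost
                // to a concurrency bug, we re-check the condition every
                // second instead of waiting indefinitely.
                wait(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public synchronized void setStalled(boolean stalled) {
        this.stalled = stalled;
        if (!stalled) {
            notifyAll(); // normal wake-up path
        }
    }
}
```

The while-loop around wait() is also what makes the timeout safe: a spurious or timed-out wake-up simply re-tests the condition and goes back to sleep if still stalled.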
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388232#comment-14388232 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670272 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1670272 ] LUCENE-6308: cutover Spans to DISI, reuse ConjunctionDISI, use two-phased iteration Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)
[ https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388230#comment-14388230 ] Per Steffensen commented on SOLR-7236: -- Sorry [~janhoy] - will stop now. The discussion is related to security, but probably not enough to be discussed here. I guess this JIRA just deals with the fact that a Solr node will not necessarily be a web container in the future, and what to do about security then. Great initiative! Securing Solr (umbrella issue) -- Key: SOLR-7236 URL: https://issues.apache.org/jira/browse/SOLR-7236 Project: Solr Issue Type: New Feature Reporter: Jan Høydahl Labels: Security This is an umbrella issue for adding security to Solr. The discussion here should discuss real user needs and high-level strategy, before deciding on implementation details. All work will be done in sub tasks and linked issues. Solr has not traditionally concerned itself with security. And it has been a general view among the committers that it may be better to stay out of it to avoid blood on our hands in this mine-field. Still, Solr has lately seen SSL support, securing of ZK, and signing of jars, and discussions have begun about securing operations in Solr. Some of the topics to address are * User management (flat file, AD/LDAP etc) * Authentication (Admin UI, Admin and data/query operations. Tons of auth protocols: basic, digest, oauth, pki..) * Authorization (who can do what with what API, collection, doc) * Pluggability (no user's needs are equal) * And we could go on and on but this is what we've seen the most demand for
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388276#comment-14388276 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670281 from [~mikemccand] in branch 'dev/trunk' [ https://svn.apache.org/r1670281 ] LUCENE-6308: fix test bug Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Fix For: Trunk, 5.2 Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388253#comment-14388253 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670273 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670273 ] LUCENE-6308: cutover Spans to DISI, reuse ConjunctionDISI, use two-phased iteration Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388271#comment-14388271 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670279 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670279 ] LUCENE-6308: woops: revert Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Fix For: Trunk, 5.2 Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388274#comment-14388274 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670280 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670280 ] LUCENE-6308: fix test bug Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Fix For: Trunk, 5.2 Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
Re: 5.1 branch created
I would like to commit the patch I submitted to SOLR-6924 as well. This is to address the test failures. On Tue, Mar 31, 2015 at 1:01 PM, Adrien Grand jpou...@gmail.com wrote: On Tue, Mar 31, 2015 at 4:57 AM, Timothy Potter thelabd...@gmail.com wrote: FYI - We've already agreed that LUCENE-6303 should get committed to this branch when it is ready. You created the branch after I committed the patch on LUCENE-6303 so this looks good to me already! -- Adrien -- Noble Paul
[jira] [Updated] (SOLR-4061) CREATE action in Collections API should allow to upload a new configuration
[ https://issues.apache.org/jira/browse/SOLR-4061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shai Erera updated SOLR-4061: - Fix Version/s: (was: 4.9) (was: Trunk) CREATE action in Collections API should allow to upload a new configuration --- Key: SOLR-4061 URL: https://issues.apache.org/jira/browse/SOLR-4061 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Tomás Fernández Löbbe Assignee: Mark Miller Priority: Minor Attachments: SOLR-4061.patch When creating new collections with the Collection API, the only option is to point to an existing configuration in ZK. It would be nice to be able to upload a new configuration in the same command. For more details see http://lucene.472066.n3.nabble.com/Error-with-SolrCloud-td4019351.html
[jira] [Created] (LUCENE-6382) Don't allow position = Integer.MAX_VALUE going forward
Michael McCandless created LUCENE-6382: -- Summary: Don't allow position = Integer.MAX_VALUE going forward Key: LUCENE-6382 URL: https://issues.apache.org/jira/browse/LUCENE-6382 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.2 Spinoff from LUCENE-6308, where Integer.MAX_VALUE position is now used as a sentinel during position iteration to indicate that there are no more positions. Where IW now detects int overflow of position, it should now also detect == Integer.MAX_VALUE. And CI should note corruption if a segment's version is >= 5.2 and has an Integer.MAX_VALUE position.
Early Access builds for JDK 9 b55 and JDK 8u60 b08 are available on java.net
Hi Uwe, Dawid, The Early Access build for JDK 9 b55 (https://jdk9.java.net/download/) is available on java.net; a summary of changes is listed here: http://www.java.net/download/jdk9/changes/jdk9-b55.html The Early Access build for JDK 8u60 b08 (http://jdk8.java.net/download.html) is available on java.net; a summary of changes is listed here: http://www.java.net/download/jdk8u60/changes/jdk8u60-b08.html Rgds, Rory O'Donnell Quality Engineering Manager Oracle EMEA, Dublin, Ireland
[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)
[ https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388227#comment-14388227 ] Per Steffensen commented on SOLR-7236: -- I am ok with embedding Jetty, and you are right that there are probably lots of things that would be easier. Just make sure that you can still participate in configuring it from the outside - jetty.xml and web.xml. At least until an alternative solution gives the same flexibility. What I fear is that we remove all the flexibility of the web container we are using, including its ability to handle security. I checked out the 5.0.0 code, but as far as I can see the Solr node is still just Jetty at the top level, and Solr does not control anything before web.xml/SolrDispatchFilter. Can you please point me to some of the more important JIRAs around this hiding/removing-the-web-container initiative? Thanks! I just want to understand what has been done/achieved until now. Securing Solr (umbrella issue) -- Key: SOLR-7236 URL: https://issues.apache.org/jira/browse/SOLR-7236 Project: Solr Issue Type: New Feature Reporter: Jan Høydahl Labels: Security This is an umbrella issue for adding security to Solr. The discussion here should discuss real user needs and high-level strategy, before deciding on implementation details. All work will be done in sub tasks and linked issues. Solr has not traditionally concerned itself with security. And it has been a general view among the committers that it may be better to stay out of it to avoid blood on our hands in this mine-field. Still, Solr has lately seen SSL support, securing of ZK, and signing of jars, and discussions have begun about securing operations in Solr. Some of the topics to address are * User management (flat file, AD/LDAP etc) * Authentication (Admin UI, Admin and data/query operations. Tons of auth protocols: basic, digest, oauth, pki..)
* Authorization (who can do what with what API, collection, doc) * Pluggability (no user's needs are equal) * And we could go on and on but this is what we've seen the most demand for
[jira] [Updated] (LUCENE-6381) DocumentsWriterStallControl's .wait() should have a time limit
[ https://issues.apache.org/jira/browse/LUCENE-6381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6381: --- Attachment: LUCENE-6381.patch Simple patch, one line change. I'd like to backport to 5.1... outright hangs are bad. This is just a defensive step ... separately, we have some concurrency bug where a .notify/All() was not sent. DocumentsWriterStallControl's .wait() should have a time limit -- Key: LUCENE-6381 URL: https://issues.apache.org/jira/browse/LUCENE-6381 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 Attachments: LUCENE-6381.patch This build was hung: http://build-us-00.elastic.co/job/es_core_15_centos/230/testReport/junit/org.elasticsearch.index.engine/InternalEngineTests/testDeletesAloneCanTriggerRefresh/ Only one thread was stalled in DocumentsWriterStallControl, which means we have a bug somewhere, because that thread should have un-stalled once the other (too many) threads finished flushing their segments. I think we should make a simple defensive change here: instead of wait(), which waits forever for a .notify/All() to wake it up, we should wait for up to a time limit. This way when any concurrency bug like this strikes, we won't hang forever. I cannot reproduce that particular hang... what's unique about that test is it uses a positively minuscule (1 KB) IW buffer.
[jira] [Resolved] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-6308. Resolution: Fixed Fix Version/s: Trunk 5.2 Thanks [~paul.elsc...@xs4all.nl]. I'll open a follow-on issue for IW/CheckIndex to detect the now illegal position=Int.MAX_VALUE going forward... Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Fix For: 5.2, Trunk Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
[ https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388270#comment-14388270 ] ASF subversion and git services commented on LUCENE-6308: - Commit 1670278 from [~mikemccand] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670278 ] LUCENE-6308: fix test bug Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans - Key: LUCENE-6308 URL: https://issues.apache.org/jira/browse/LUCENE-6308 Project: Lucene - Core Issue Type: Bug Components: core/search Affects Versions: Trunk Reporter: Paul Elschot Priority: Minor Fix For: Trunk, 5.2 Attachments: LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch An alternative for Spans that looks more like PositionsEnum and adds two phase doc id iteration
[DISCUSS] Change Query API to make queries immutable in 6.0
Recent changes that added automatic filter caching to IndexSearcher uncovered some traps with our queries when it comes to using them as cache keys. The problem comes from the fact that some of our main queries are mutable, and modifying them while they are used as cache keys makes the entry that they are caching invisible (because the hash code changed too) yet still using memory. While I think most users would be unaffected, as it is rather uncommon to modify queries after having passed them to IndexSearcher, I would like to remove this trap by making queries immutable: everything should be set at construction time except the boost parameter, which could still be changed with the same clone()/setBoost() mechanism as today. First I would like to make sure that it sounds good to everyone, and then to discuss what the API should look like. Most of our queries happen to be immutable already (NumericRangeQuery, TermsQuery, SpanNearQuery, etc.) but some aren't, and the main exceptions are: - BooleanQuery, - DisjunctionMaxQuery, - PhraseQuery, - MultiPhraseQuery. We could take all parameters that are set via setters and move them to constructor arguments. For the above queries, this would mean (using varargs for ease of use): BooleanQuery(boolean disableCoord, int minShouldMatch, BooleanClause... clauses) DisjunctionMaxQuery(float tieBreakMul, Query... clauses) For PhraseQuery and MultiPhraseQuery, the closest to what we have today would require adding new classes to wrap terms and positions together, for instance: class TermAndPosition { public final BytesRef term; public final int position; } so that e.g. PhraseQuery would look like: PhraseQuery(int slop, String field, TermAndPosition... terms) MultiPhraseQuery would be the same with several terms at the same position. Comments/ideas/concerns are highly welcome. -- Adrien
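The caching trap and the proposed fix can be illustrated with a toy immutable query-like class: all state is final and fixed in the constructor, so hashCode/equals stay stable for as long as the object sits in a cache. The class and field names below are made up for illustration (loosely echoing the DisjunctionMaxQuery constructor sketched above), not Lucene's actual API:

```java
import java.util.*;

// Toy immutable "query": no setters, defensive unmodifiable clause list,
// so its hash code can never change underneath a cache.
public final class ImmutableDisjunction {
    private final float tieBreakerMultiplier;
    private final List<String> clauses; // stand-in for real Query clauses

    public ImmutableDisjunction(float tieBreakerMultiplier, String... clauses) {
        this.tieBreakerMultiplier = tieBreakerMultiplier;
        this.clauses = List.of(clauses); // unmodifiable copy of the varargs
    }

    @Override
    public int hashCode() {
        return Objects.hash(tieBreakerMultiplier, clauses);
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ImmutableDisjunction)) return false;
        ImmutableDisjunction other = (ImmutableDisjunction) o;
        return tieBreakerMultiplier == other.tieBreakerMultiplier
            && clauses.equals(other.clauses);
    }
}
```

With a mutable query, a post-hoc setter call would change the hash code and strand the cached entry under the old key; here there is simply no way to mutate the key after construction.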
[jira] [Updated] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause
[ https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated LUCENE-6378: -- Attachment: LUCENE-6378.patch Patch which throws the underlying exception. All tests pass and precommit is happy. Fix RuntimeExceptions that are thrown without the root cause Key: LUCENE-6378 URL: https://issues.apache.org/jira/browse/LUCENE-6378 Project: Lucene - Core Issue Type: Bug Reporter: Varun Thacker Fix For: Trunk, 5.1 Attachments: LUCENE-6378.patch In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown without wrapping the root cause. We should fix them to wrap the root cause before throwing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
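The fix Varun describes is the standard exception-chaining idiom: pass the original exception as the second constructor argument so its stack trace survives. A minimal illustration (the resource name and helper are made up for the demo):

```java
// Minimal illustration of wrapping the root cause: the caught exception
// is passed to RuntimeException's two-argument constructor, so it stays
// reachable via getCause() and shows up as "Caused by:" in logs.
import java.io.IOException;

class RootCauseDemo {
    static void load(String resource) {
        try {
            simulateIo(resource);
        } catch (IOException e) {
            // Before (loses the root cause):
            //   throw new RuntimeException("Could not load " + resource);
            // After (root cause preserved):
            throw new RuntimeException("Could not load " + resource, e);
        }
    }

    private static void simulateIo(String resource) throws IOException {
        throw new IOException("read failed: " + resource); // stand-in failure
    }
}
```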
[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)
[ https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388393#comment-14388393 ] Jan Høydahl commented on SOLR-7236: --- Yep, we need a container agnostic security API like Shiro. Or we could roll our own, but I'm not convinced that's necessary; I suspect it is the kind of thing that will grow out of hand, with constant requests for more bindings to framework X, and thus give too wide an attack surface in Solr-specific code. Securing Solr (umbrella issue) -- Key: SOLR-7236 URL: https://issues.apache.org/jira/browse/SOLR-7236 Project: Solr Issue Type: New Feature Reporter: Jan Høydahl Labels: Security This is an umbrella issue for adding security to Solr. The discussion here should discuss real user needs and high-level strategy, before deciding on implementation details. All work will be done in sub tasks and linked issues. Solr has not traditionally concerned itself with security. And it has been a general view among the committers that it may be better to stay out of it to avoid blood on our hands in this mine-field. Still, Solr has lately seen SSL support, securing of ZK, and signing of jars, and discussions have begun about securing operations in Solr. Some of the topics to address are * User management (flat file, AD/LDAP etc) * Authentication (Admin UI, Admin and data/query operations. Tons of auth protocols: basic, digest, oauth, pki..) * Authorization (who can do what with what API, collection, doc) * Pluggability (no user's needs are equal) * And we could go on and on but this is what we've seen the most demand for -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b54) - Build # 12001 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12001/ Java: 64bit/jdk1.9.0-ea-b54 -XX:+UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.handler.TestSolrConfigHandlerCloud.test Error Message: Could not get expected value 'P val' for path 'response/params/y/p' full output: { responseHeader:{ status:0, QTime:0}, response:{ znodeVersion:1, params:{ x:{ a:A val, b:B val, :{v:0}}, y:{ c:CY val, b:BY val, i:20, d:[ val 1, val 2], :{v:0} Stack Trace: java.lang.AssertionError: Could not get expected value 'P val' for path 'response/params/y/p' full output: { responseHeader:{ status:0, QTime:0}, response:{ znodeVersion:1, params:{ x:{ a:A val, b:B val, :{v:0}}, y:{ c:CY val, b:BY val, i:20, d:[ val 1, val 2], :{v:0} at __randomizedtesting.SeedInfo.seed([879A9FF0C72CFC68:FCEA02A69D09190]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:406) at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:245) at org.apache.solr.handler.TestSolrConfigHandlerCloud.test(TestSolrConfigHandlerCloud.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
Same with BooleanQuery. the go-to ctor should just take 'clauses' On Tue, Mar 31, 2015 at 5:18 AM, Michael McCandless luc...@mikemccandless.com wrote: +1 For PhraseQuery we could also have a common-case ctor that just takes the terms (and assumes sequential positions)? Mike McCandless http://blog.mikemccandless.com On Tue, Mar 31, 2015 at 5:10 AM, Adrien Grand jpou...@gmail.com wrote: Recent changes that added automatic filter caching to IndexSearcher uncovered some traps with our queries when it comes to using them as cache keys. The problem comes from the fact that some of our main queries are mutable, and modifying them while they are used as cache keys makes the entry that they are caching invisible (because the hash code changed too) yet still using memory. While I think most users would be unaffected as it is rather uncommon to modify queries after having passed them to IndexSearcher, I would like to remove this trap by making queries immutable: everything should be set at construction time except the boost parameter that could still be changed with the same clone()/setBoost() mechanism as today. First I would like to make sure that it sounds good to everyone and then to discuss what the API should look like. Most of our queries happen to be immutable already (NumericRangeQuery, TermsQuery, SpanNearQuery, etc.) but some aren't and the main exceptions are: - BooleanQuery, - DisjunctionMaxQuery, - PhraseQuery, - MultiPhraseQuery. We could take all parameters that are set as setters and move them to constructor arguments. For the above queries, this would mean (using varargs for ease of use): BooleanQuery(boolean disableCoord, int minShouldMatch, BooleanClause... clauses) DisjunctionMaxQuery(float tieBreakMul, Query... 
clauses) For PhraseQuery and MultiPhraseQuery, the closest to what we have today would require adding new classes to wrap terms and positions together, for instance: class TermAndPosition { public final BytesRef term; public final int position; } so that eg. PhraseQuery would look like: PhraseQuery(int slop, String field, TermAndPosition... terms) MultiPhraseQuery would be the same with several terms at the same position. Comments/ideas/concerns are highly welcome. -- Adrien - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section
[ https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker updated SOLR-6709: Attachment: SOLR-6709.patch Added javadocs to {QueryResult.getExpandedResults}. Is there anything else that needs to be addressed? ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section --- Key: SOLR-6709 URL: https://issues.apache.org/jira/browse/SOLR-6709 Project: Solr Issue Type: Bug Components: SolrJ Reporter: Simon Endele Assignee: Varun Thacker Fix For: Trunk, 5.1 Attachments: SOLR-6709.patch, SOLR-6709.patch, SOLR-6709.patch, test-response.xml Shouldn't the following code work on the attached input file? It matches the structure of a Solr response with wt=xml.

{code}
import java.io.InputStream;
import org.apache.solr.client.solrj.ResponseParser;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;
import org.junit.Test;

public class ParseXmlExpandedTest {
    @Test
    public void test() {
        ResponseParser responseParser = new XMLResponseParser();
        InputStream inStream = getClass()
                .getResourceAsStream("test-response.xml");
        NamedList<Object> response = responseParser
                .processResponse(inStream, "UTF-8");
        QueryResponse queryResponse = new QueryResponse(response, null);
    }
}
{code}

Unexpectedly (for me), it throws a java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap cannot be cast to java.util.Map at org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126) Am I missing something, is XMLResponseParser deprecated or something? We use a setup like this to mock a QueryResponse for unit tests in our service that post-processes the Solr response. Obviously, it works with the javabin format which SolrJ uses internally. But that is not an appropriate format for unit tests, where the response should be human readable.
I think there's some conversion missing in QueryResponse or XMLResponseParser. Note: The null value supplied as SolrServer argument to the constructor of QueryResponse shouldn't have an effect as the error occurs before the parameter is even used. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2864 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2864/ 3 tests failed. FAILED: org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test Error Message: IOException occured when talking to server at: http://127.0.0.1:64623/c8n_1x3_commits_shard1_replica3 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:64623/c8n_1x3_commits_shard1_replica3 at __randomizedtesting.SeedInfo.seed([9F43DFEF91BABBC7:1717E0353F46D63F]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464) at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130) at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
RE: 5.1 branch created
Hi, I enabled the Jenkins runs for the 5.1 release branch:
- Policeman Jenkins standard randomized test run
- ASF Jenkins Artifacts builds
- ASF Jenkins release smoker
Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Timothy Potter [mailto:thelabd...@gmail.com] Sent: Tuesday, March 31, 2015 4:58 AM To: lucene dev Subject: 5.1 branch created The 5.1 branch has been created - https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_1/ Here's a friendly reminder (from the wiki) on the agreed process for a minor release:
* No new features may be committed to the branch.
* Documentation patches, build patches and serious bug fixes may be committed to the branch. However, you should submit all patches you want to commit to Jira first to give others the chance to review and possibly vote against the patch. Keep in mind that it is our main intention to keep the branch as stable as possible.
* All patches that are intended for the branch should first be committed to trunk, merged into the minor release branch, and then into the current release branch.
* Normal trunk and minor release branch development may continue as usual. However, if you plan to commit a big change to the trunk while the branch feature freeze is in effect, think twice: can't the addition wait a couple more days? Merges of bug fixes into the branch may become more difficult.
* Only Jira issues with Fix version 5.1 and priority Blocker will delay a release candidate build.
FYI - We've already agreed that LUCENE-6303 should get committed to this branch when it is ready. On Mon, Mar 30, 2015 at 2:08 PM, Timothy Potter thelabd...@gmail.com wrote: I'd like to move ahead and create the 5.1 branch later today so that we can start locking down what's included in the release. I know this adds an extra merge step for you Adrien for LUCENE-6303, but I hope that's not too much trouble for you?
Cheers, Tim On Fri, Mar 27, 2015 at 5:24 PM, Adrien Grand jpou...@gmail.com wrote: Hi Timothy, We have an issue with auto caching in Lucene that uncovered some issues with using queries as cache keys since some of them are mutable (including major one like BooleanQuery and PhraseQuery). I reopened https://issues.apache.org/jira/browse/LUCENE-6303 and provided a patch to disable this feature so that we can release. I can hopefully commit it early next week. On Wed, Mar 25, 2015 at 6:17 PM, Timothy Potter thelabd...@gmail.com wrote: Hi, I'd like to create the 5.1 branch soon'ish, thinking maybe late tomorrow or early Friday. If I understand correctly, that implies that new features should not be added after that point without some agreement among the committers about whether it should be included? Let me know if this is too soon and when a more ideal date/time would be. Sincerely, Your friendly 5.1 release manager (aka thelabdude) -- Adrien - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict
[ https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388339#comment-14388339 ] Adrien Grand commented on LUCENE-5879: -- +1 to the patch bq. I don't think we should rush this into 5.1. +1 Add auto-prefix terms to block tree terms dict -- Key: LUCENE-5879 URL: https://issues.apache.org/jira/browse/LUCENE-5879 Project: Lucene - Core Issue Type: New Feature Components: core/codecs Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.0, Trunk Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch This cool idea to generalize numeric/trie fields came from Adrien: Today, when we index a numeric field (LongField, etc.) we pre-compute (via NumericTokenStream) outside of indexer/codec which prefix terms should be indexed. But this can be inefficient: you set a static precisionStep, and always add those prefix terms regardless of how the terms in the field are actually distributed. Yet typically in real world applications the terms have a non-random distribution. So, it should be better if instead the terms dict decides where it makes sense to insert prefix terms, based on how dense the terms are in each region of term space. This way we can speed up query time for both term (e.g. infix suggester) and numeric ranges, and it should let us use less index space and get faster range queries. This would also mean that min/maxTerm for a numeric field would now be correct, vs today where the externally computed prefix terms are placed after the full precision terms, causing hairy code like NumericUtils.getMaxInt/Long. So optos like LUCENE-5860 become feasible. The terms dict can also do tricks not possible if you must live on top of its APIs, e.g. 
to handle the adversary/over-constrained case when a given prefix has too many terms following it but finer prefixes have too few (what block tree calls floor term blocks). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
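The precisionStep scheme described above can be illustrated with a standalone sketch. This mimics the idea only, not Lucene's actual NumericUtils term encoding: each extra prefix term is the value with its low `shift` bits dropped, so a range query can match whole subtrees of term space instead of enumerating every value.

```java
// Illustrative sketch of fixed-precisionStep prefix terms (not Lucene's
// real encoding): for one 32-bit value we emit the value itself plus a
// coarser prefix every `precisionStep` bits. Each entry is tagged with
// its shift so prefixes from different levels cannot collide.
import java.util.ArrayList;
import java.util.List;

class PrefixTermsSketch {
    static List<String> prefixes(int value, int precisionStep) {
        List<String> out = new ArrayList<>();
        for (int shift = 0; shift < 32; shift += precisionStep) {
            out.add(shift + ":" + (value >>> shift));
        }
        return out;
    }
}
```

With precisionStep=8 every value is indexed four times (shifts 0, 8, 16, 24) regardless of how terms are actually distributed; the auto-prefix idea moves this decision into the terms dict, which inserts prefixes only where the term space is dense enough to pay off.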
[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12167 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12167/ Java: 64bit/jdk1.8.0_60-ea-b06 -XX:-UseCompressedOops -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.handler.TestSolrConfigHandlerCloud.test Error Message: Could not get expected value 'P val' for path 'response/params/y/p' full output: { responseHeader:{ status:0, QTime:0}, response:{ znodeVersion:2, params:{ x:{ a:A val, b:B val, :{v:0}}, y:{ c:CY val modified, b:BY val, i:20, d:[ val 1, val 2], e:EY val, :{v:0} Stack Trace: java.lang.AssertionError: Could not get expected value 'P val' for path 'response/params/y/p' full output: { responseHeader:{ status:0, QTime:0}, response:{ znodeVersion:2, params:{ x:{ a:A val, b:B val, :{v:0}}, y:{ c:CY val modified, b:BY val, i:20, d:[ val 1, val 2], e:EY val, :{v:0} at __randomizedtesting.SeedInfo.seed([C944B880CAEDAE47:4110875A6411C3BF]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:406) at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:245) at org.apache.solr.handler.TestSolrConfigHandlerCloud.test(TestSolrConfigHandlerCloud.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
+1 For PhraseQuery we could also have a common-case ctor that just takes the terms (and assumes sequential positions)? Mike McCandless http://blog.mikemccandless.com On Tue, Mar 31, 2015 at 5:10 AM, Adrien Grand jpou...@gmail.com wrote: Recent changes that added automatic filter caching to IndexSearcher uncovered some traps with our queries when it comes to using them as cache keys. The problem comes from the fact that some of our main queries are mutable, and modifying them while they are used as cache keys makes the entry that they are caching invisible (because the hash code changed too) yet still using memory. While I think most users would be unaffected as it is rather uncommon to modify queries after having passed them to IndexSearcher, I would like to remove this trap by making queries immutable: everything should be set at construction time except the boost parameter that could still be changed with the same clone()/setBoost() mechanism as today. First I would like to make sure that it sounds good to everyone and then to discuss what the API should look like. Most of our queries happen to be immutable already (NumericRangeQuery, TermsQuery, SpanNearQuery, etc.) but some aren't and the main exceptions are: - BooleanQuery, - DisjunctionMaxQuery, - PhraseQuery, - MultiPhraseQuery. We could take all parameters that are set as setters and move them to constructor arguments. For the above queries, this would mean (using varargs for ease of use): BooleanQuery(boolean disableCoord, int minShouldMatch, BooleanClause... clauses) DisjunctionMaxQuery(float tieBreakMul, Query... clauses) For PhraseQuery and MultiPhraseQuery, the closest to what we have today would require adding new classes to wrap terms and positions together, for instance: class TermAndPosition { public final BytesRef term; public final int position; } so that eg. PhraseQuery would look like: PhraseQuery(int slop, String field, TermAndPosition... 
terms) MultiPhraseQuery would be the same with several terms at the same position. Comments/ideas/concerns are highly welcome. -- Adrien - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6379) IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery
[ https://issues.apache.org/jira/browse/LUCENE-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6379: --- Attachment: LUCENE-6379.patch Patch w/ simple test showing that Lucene's schema is actually reset (omitNorms goes away for a field). IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery -- Key: LUCENE-6379 URL: https://issues.apache.org/jira/browse/LUCENE-6379 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 Attachments: LUCENE-6379.patch We can short-circuit this to just IW.deleteAll (Solr already does so I think). This also has the nice side effect of clearing Lucene's low-schema (FieldInfos). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
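The proposed short-circuit is a simple type dispatch; the sketch below uses stand-in types (hypothetical names, not Lucene's IndexWriter internals) to show the shape of it: a delete-by MatchAllDocsQuery is rewritten to deleteAll() instead of being buffered as an ordinary delete-by-query.

```java
// Stand-in types sketching the proposed specialization: deleting by
// MatchAllDocsQuery becomes deleteAll(), which drops everything at once
// and (per the issue) also lets the writer reset its low schema.
interface Query {}

final class MatchAllDocsQuery implements Query {}

class WriterSketch {
    boolean deletedAll = false;
    int bufferedQueryDeletes = 0;

    void deleteByQuery(Query q) {
        if (q instanceof MatchAllDocsQuery) {
            deleteAll();              // specialized path: no per-query bookkeeping
        } else {
            bufferedQueryDeletes++;   // normal path: buffer the delete-by-query
        }
    }

    void deleteAll() {
        deletedAll = true;
    }
}
```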
[jira] [Updated] (SOLR-6637) Solr should have a way to restore a core
[ https://issues.apache.org/jira/browse/SOLR-6637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6637: - Attachment: SOLR-6637.patch I have removed the reentrant lock and the synchronized is good enough. Please take a look. Solr should have a way to restore a core Key: SOLR-6637 URL: https://issues.apache.org/jira/browse/SOLR-6637 Project: Solr Issue Type: Improvement Reporter: Varun Thacker Attachments: SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch, SOLR-6637.patch We have a core backup command which backs up the index. We should have a restore command too. This would restore any named snapshots created by the replication handler's backup command. While working on this patch right now I realized that during backup we only back up the index. Should we back up the conf files also? Any thoughts? I could open a separate Jira for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40) - Build # 12168 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12168/ Java: 32bit/jdk1.8.0_40 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test Error Message: Didn't see all replicas for shard shard1 in collection1 come up within 3 ms! ClusterState: { collection1:{ replicationFactor:1, shards:{shard1:{ range:8000-7fff, state:active, replicas:{ core_node1:{ core:collection1, base_url:http://127.0.0.1:38830/_uni;, node_name:127.0.0.1:38830__uni, state:active, leader:true}, core_node2:{ core:collection1, base_url:http://127.0.0.1:40651/_uni;, node_name:127.0.0.1:40651__uni, state:recovering, router:{name:compositeId}, maxShardsPerNode:1, autoAddReplicas:false, autoCreated:true}, control_collection:{ replicationFactor:1, shards:{shard1:{ range:8000-7fff, state:active, replicas:{core_node1:{ core:collection1, base_url:http://127.0.0.1:53107/_uni;, node_name:127.0.0.1:53107__uni, state:active, leader:true, router:{name:compositeId}, maxShardsPerNode:1, autoAddReplicas:false, autoCreated:true}} Stack Trace: java.lang.AssertionError: Didn't see all replicas for shard shard1 in collection1 come up within 3 ms! 
ClusterState: { collection1:{ replicationFactor:1, shards:{shard1:{ range:8000-7fff, state:active, replicas:{ core_node1:{ core:collection1, base_url:http://127.0.0.1:38830/_uni;, node_name:127.0.0.1:38830__uni, state:active, leader:true}, core_node2:{ core:collection1, base_url:http://127.0.0.1:40651/_uni;, node_name:127.0.0.1:40651__uni, state:recovering, router:{name:compositeId}, maxShardsPerNode:1, autoAddReplicas:false, autoCreated:true}, control_collection:{ replicationFactor:1, shards:{shard1:{ range:8000-7fff, state:active, replicas:{core_node1:{ core:collection1, base_url:http://127.0.0.1:53107/_uni;, node_name:127.0.0.1:53107__uni, state:active, leader:true, router:{name:compositeId}, maxShardsPerNode:1, autoAddReplicas:false, autoCreated:true}} at __randomizedtesting.SeedInfo.seed([1BE8F81E122A67D:89EAB05B4FDECB85]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1925) at org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:103) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at
[jira] [Commented] (LUCENE-6379) IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery
[ https://issues.apache.org/jira/browse/LUCENE-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388481#comment-14388481 ] Adrien Grand commented on LUCENE-6379: -- +1 IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery -- Key: LUCENE-6379 URL: https://issues.apache.org/jira/browse/LUCENE-6379 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 Attachments: LUCENE-6379.patch We can short-circuit this to just IW.deleteAll (Solr already does so I think). This also has the nice side effect of clearing Lucene's low-schema (FieldInfos). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7312) REST API is not REST
[ https://issues.apache.org/jira/browse/SOLR-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388484#comment-14388484 ] Noble Paul commented on SOLR-7312: -- As I mentioned earlier, I'm +1 for making all write operations use POST/PUT, especially the collection API. We can make the default allow GET and use a flag to disable GET. {noformat} curl -X POST http://localhost:8983/solr/collections/create?name=mycollection&... #and curl -X POST http://localhost:8983/solr/collections/delete?name=mycollection {noformat} This can easily be implemented now in a backward-compatible way. REST API is not REST -- Key: SOLR-7312 URL: https://issues.apache.org/jira/browse/SOLR-7312 Project: Solr Issue Type: Improvement Components: Server Affects Versions: 5.0 Reporter: Mark Haase Assignee: Noble Paul The documentation refers to a REST API over and over, and yet I don't see a REST API. I see an HTTP API but not a REST API. Here are a few things the HTTP API does that are not RESTful: * Offers RPC verbs instead of resources/nouns. (E.g. schema API has commands like add-field, add-copy-field, etc.) * Tunnels non-idempotent requests (like creating a core) through an idempotent HTTP verb (GET). * Tunnels deletes through HTTP GET. * PUT/POST confusion, POST used to update a named resource, such as the Blob API. * Returns `200 OK` HTTP code even when the command fails. (Try adding a field to your schema that already exists. You get `200 OK` and an error message hidden in the payload. Try calling a collections API when you're using non-cloud mode: `200 OK` and an error message in the payload. Gah.) * Does not provide link relations. * HTTP status line contains a JSON payload (!) and no 'Content-Type' header for some failed commands, like `curl -X DELETE http://solr:8983/solr/admin/cores/foo` * Content negotiation is done via query parameter (`wt=json`), instead of `Accept` header.
[jira] [Commented] (LUCENE-6379) IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery
[ https://issues.apache.org/jira/browse/LUCENE-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388486#comment-14388486 ] Robert Muir commented on LUCENE-6379: - +1 IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery -- Key: LUCENE-6379 URL: https://issues.apache.org/jira/browse/LUCENE-6379 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 Attachments: LUCENE-6379.patch We can short-circuit this to just IW.deleteAll (Solr already does so I think). This also has the nice side effect of clearing Lucene's low-schema (FieldInfos). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6379) IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery
[ https://issues.apache.org/jira/browse/LUCENE-6379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388497#comment-14388497 ] Adrien Grand commented on LUCENE-6379: -- One minor paranoid nitpick: I just noticed MatchAllDocsQuery is not final and the code checks instanceof. So maybe we should make MatchAllDocsQuery final or check that query.getClass() == MatchAllDocsQuery.class (or both). IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery -- Key: LUCENE-6379 URL: https://issues.apache.org/jira/browse/LUCENE-6379 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless Assignee: Michael McCandless Fix For: Trunk, 5.1 Attachments: LUCENE-6379.patch We can short-circuit this to just IW.deleteAll (Solr already does so I think). This also has the nice side effect of clearing Lucene's low-schema (FieldInfos). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
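The nitpick can be illustrated with a minimal sketch (the subclass here is hypothetical): an instanceof check also accepts a subclass that may not actually match all documents, while the exact-class check does not.

```java
// Toy classes illustrating instanceof vs. getClass() for the short-circuit check.
class MatchAllDocsQuery {}
// Hypothetical subclass: despite its parent, it might not match all docs.
class AlmostAllDocsQuery extends MatchAllDocsQuery {}

public class ExactClassCheck {
    public static boolean byInstanceof(Object q) {
        return q instanceof MatchAllDocsQuery;          // accepts subclasses too
    }

    public static boolean byExactClass(Object q) {
        return q.getClass() == MatchAllDocsQuery.class; // rejects subclasses
    }

    public static void main(String[] args) {
        Object sub = new AlmostAllDocsQuery();
        System.out.println(byInstanceof(sub));  // true: unsafe to short-circuit
        System.out.println(byExactClass(sub));  // false: safe
    }
}
```

Making MatchAllDocsQuery final would rule the subclass out at compile time; the getClass() check defends at runtime. Doing both, as suggested, costs nothing.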
[jira] [Commented] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause
[ https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388499#comment-14388499 ] Adrien Grand commented on LUCENE-6378: -- +1 Fix RuntimeExceptions that are thrown without the root cause Key: LUCENE-6378 URL: https://issues.apache.org/jira/browse/LUCENE-6378 Project: Lucene - Core Issue Type: Bug Reporter: Varun Thacker Fix For: Trunk, 5.1 Attachments: LUCENE-6378.patch In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown without wrapping the root cause. We should fix them to wrap the root cause before throwing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
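The fix being proposed is the standard cause-chaining pattern; a small self-contained illustration (the parse methods are made up for the example, not from the Lucene/Solr codebase):

```java
// Illustrates why a RuntimeException should wrap its root cause:
// without it, the original stack trace and exception type are lost.
public class WrapCause {
    public static void parseOrThrowBad(String s) {
        try {
            Integer.parseInt(s);
        } catch (NumberFormatException e) {
            throw new RuntimeException("bad input: " + s);    // root cause lost
        }
    }

    public static void parseOrThrowGood(String s) {
        try {
            Integer.parseInt(s);
        } catch (NumberFormatException e) {
            throw new RuntimeException("bad input: " + s, e); // root cause preserved
        }
    }

    public static void main(String[] args) {
        try {
            parseOrThrowGood("x");
        } catch (RuntimeException e) {
            System.out.println(e.getCause() instanceof NumberFormatException); // true
        }
    }
}
```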
[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict
[ https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388500#comment-14388500 ] Robert Muir commented on LUCENE-5879: - We should be able to add a trivial test to lucene/codecs that extends BasePostingsFormatTestCase for this new PF ? What about putting it into rotation in RandomCodec? These things would give us a lot of testing. Add auto-prefix terms to block tree terms dict -- Key: LUCENE-5879 URL: https://issues.apache.org/jira/browse/LUCENE-5879 Project: Lucene - Core Issue Type: New Feature Components: core/codecs Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.0, Trunk Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch This cool idea to generalize numeric/trie fields came from Adrien: Today, when we index a numeric field (LongField, etc.) we pre-compute (via NumericTokenStream) outside of indexer/codec which prefix terms should be indexed. But this can be inefficient: you set a static precisionStep, and always add those prefix terms regardless of how the terms in the field are actually distributed. Yet typically in real world applications the terms have a non-random distribution. So, it should be better if instead the terms dict decides where it makes sense to insert prefix terms, based on how dense the terms are in each region of term space. This way we can speed up query time for both term (e.g. infix suggester) and numeric ranges, and it should let us use less index space and get faster range queries. This would also mean that min/maxTerm for a numeric field would now be correct, vs today where the externally computed prefix terms are placed after the full precision terms, causing hairy code like NumericUtils.getMaxInt/Long. 
So optimizations like LUCENE-5860 become feasible. The terms dict can also do tricks not possible if you must live on top of its APIs, e.g. to handle the adversary/over-constrained case where a given prefix has too many terms following it but finer prefixes have too few (what block tree calls floor term blocks).
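The win from prefix terms can be shown with a toy decimal example. This is only an illustration of the idea (cover a range with prefix terms where a whole prefix bucket fits, individual terms otherwise), not block tree's actual auto-prefix algorithm:

```java
import java.util.ArrayList;
import java.util.List;

// Toy range cover: use a one-digit-shorter prefix term ("23*") whenever a
// whole decade fits inside the range, and individual terms at the edges.
public class PrefixCover {
    public static List<String> cover(int lo, int hi) {
        List<String> out = new ArrayList<>();
        int i = lo;
        while (i <= hi) {
            if (i % 10 == 0 && i + 9 <= hi) {   // whole decade fits: emit a prefix term
                out.add((i / 10) + "*");
                i += 10;
            } else {                            // edge: emit the term itself
                out.add(Integer.toString(i));
                i++;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Range [230, 379]: 15 prefix terms instead of 150 individual terms.
        System.out.println(cover(230, 379).size()); // 15
    }
}
```

The point of the issue is that the terms dict can place such prefixes adaptively, where the terms are actually dense, instead of a static precisionStep chosen up front.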
[JENKINS] Lucene-Solr-5.1-Linux (32bit/jdk1.8.0_40) - Build # 176 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/176/ Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test Error Message: Error from server at http://127.0.0.1:59885/mft/i/compositeid_collection_with_routerfield_shard1_replica1: no servers hosting shard: Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:59885/mft/i/compositeid_collection_with_routerfield_shard1_replica1: no servers hosting shard: at __randomizedtesting.SeedInfo.seed([35B6F2C864E61B65:BDE2CD12CA1A769D]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958) at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:357) at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[jira] [Commented] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section
[ https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388403#comment-14388403 ] Joel Bernstein commented on SOLR-6709: -- Looking at the patch, I don't see any issues. As long as the tests are passing I think we are fine. ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section --- Key: SOLR-6709 URL: https://issues.apache.org/jira/browse/SOLR-6709 Project: Solr Issue Type: Bug Components: SolrJ Reporter: Simon Endele Assignee: Varun Thacker Fix For: Trunk, 5.1 Attachments: SOLR-6709.patch, SOLR-6709.patch, SOLR-6709.patch, test-response.xml Shouldn't the following code work on the attached input file? It matches the structure of a Solr response with wt=xml. {code}import java.io.InputStream; import org.apache.solr.client.solrj.ResponseParser; import org.apache.solr.client.solrj.impl.XMLResponseParser; import org.apache.solr.client.solrj.response.QueryResponse; import org.apache.solr.common.util.NamedList; import org.junit.Test; public class ParseXmlExpandedTest { @Test public void test() { ResponseParser responseParser = new XMLResponseParser(); InputStream inStream = getClass().getResourceAsStream("test-response.xml"); NamedList<Object> response = responseParser.processResponse(inStream, "UTF-8"); QueryResponse queryResponse = new QueryResponse(response, null); } }{code} Unexpectedly (for me), it throws a java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap cannot be cast to java.util.Map at org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126) Am I missing something? Is XMLResponseParser deprecated or something? We use a setup like this to mock a QueryResponse for unit tests in our service that post-processes the Solr response. Obviously, it works with the javabin format which SolrJ uses internally.
But that is not an appropriate format for unit tests, where the response should be human-readable. I think there's some conversion missing in QueryResponse or XMLResponseParser. Note: the null value supplied as the SolrServer argument to the QueryResponse constructor shouldn't have an effect, as the error occurs before the parameter is even used.
[jira] [Commented] (SOLR-7312) REST API is not REST
[ https://issues.apache.org/jira/browse/SOLR-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388463#comment-14388463 ] Shai Erera commented on SOLR-7312: -- I find REST a little bit like AGILE - you take what suits you most. You definitely shouldn't bend the code (or yourself) to be truly agile and/or truly REST. Therefore I care less about people coming and saying this isn't pure REST. REST is a standard and there are many debates about it, e.g. the difference between a PUT and POST. We should do whatever we feel is right. One thing I would love to see is not being able to modify *any* state of Solr via GET calls. That alone would be great progress (just my opinion though). Also, I definitely don't think we should change all the API at once. We can do that incrementally, taking care of one segment/area at a time. E.g. the Collections API can certainly look like: {noformat} curl -X POST http://localhost:8983/solr/collections/create?name=mycollection&... curl -X DELETE http://localhost:8983/solr/collections/mycollection ... {noformat} With that we take out the 'action' parameter and fold it either into the HTTP method or, if we want to use a single method (e.g. POST) for multiple actions, we put the command as part of the URL (/create). I also don't advocate that we become fanatic about it. If DELETE is not convenient because we want to send additional parameters, we can make it a POST with a /delete path, although I think we shouldn't have problems w/ DELETE specifically. REST API is not REST -- Key: SOLR-7312 URL: https://issues.apache.org/jira/browse/SOLR-7312 Project: Solr Issue Type: Improvement Components: Server Affects Versions: 5.0 Reporter: Mark Haase Assignee: Noble Paul The documentation refers to a REST API over and over, and yet I don't see a REST API. I see an HTTP API but not a REST API. Here are a few things the HTTP API does that are not RESTful: * Offers RPC verbs instead of resources/nouns. (E.g.
schema API has commands like add-field, add-copy-field, etc.) * Tunnels non-idempotent requests (like creating a core) through an idempotent HTTP verb (GET). * Tunnels deletes through HTTP GET. * PUT/POST confusion, POST used to update a named resource, such as the Blob API. * Returns `200 OK` HTTP code even when the command fails. (Try adding a field to your schema that already exists. You get `200 OK` and an error message hidden in the payload. Try calling a collections API when you're using non-cloud mode: `200 OK` and an error message in the payload. Gah.) * Does not provide link relations. * HTTP status line contains a JSON payload (!) and no 'Content-Type' header for some failed commands, like `curl -X DELETE http://solr:8983/solr/admin/cores/foo` * Content negotiation is done via query parameter (`wt=json`), instead of `Accept` header.
[jira] [Commented] (LUCENE-6271) PostingsEnum should have consistent flags behavior
[ https://issues.apache.org/jira/browse/LUCENE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389811#comment-14389811 ] ASF subversion and git services commented on LUCENE-6271: - Commit 1670533 from [~rcmuir] in branch 'dev/branches/lucene6271' [ https://svn.apache.org/r1670533 ] LUCENE-6271: merge trunk changes up to r1670529 PostingsEnum should have consistent flags behavior -- Key: LUCENE-6271 URL: https://issues.apache.org/jira/browse/LUCENE-6271 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Attachments: LUCENE-6271.patch When asking for flags like OFFSETS or PAYLOADS with DocsAndPositionsEnum, the behavior was to always return an enum, even if offsets or payloads were not indexed. They would just not be available from the enum if they were not present. This behavior was carried over to PostingsEnum, which is good. However, the new POSITIONS flag has different behavior. If positions are not available, null is returned, instead of a PostingsEnum that just gives access to freqs. This behavior is confusing, as it means you have to special case asking for positions (only ask if you know they were indexed) which sort of defeats the purpose of the unified PostingsEnum. We should make POSITIONS have the same behavior as other flags. The trickiest part will be maintaining backcompat for DocsAndPositionsEnum in 5.x, but I think it can be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
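The consistent-flags behavior the issue argues for can be sketched with toy types (names and the FLAG constant are illustrative, not Lucene's actual PostingsEnum API): the factory always returns an enum, and an unindexed feature is simply unavailable from it, so callers need no null special case for POSITIONS.

```java
// Toy sketch: requesting a feature that was not indexed still yields an enum;
// the feature is just unavailable from it (here signalled by -1).
class ToyPostingsEnum {
    final boolean hasPositions;
    ToyPostingsEnum(boolean hasPositions) { this.hasPositions = hasPositions; }
    int nextPosition() { return hasPositions ? 0 : -1; } // -1: positions unavailable
}

public class ConsistentFlags {
    public static final int FLAG_POSITIONS = 1;

    // Never return null: mirror the OFFSETS/PAYLOADS behavior for POSITIONS.
    public static ToyPostingsEnum postings(boolean positionsIndexed, int flags) {
        boolean wantAndHave = positionsIndexed && (flags & FLAG_POSITIONS) != 0;
        return new ToyPostingsEnum(wantAndHave);
    }

    public static void main(String[] args) {
        ToyPostingsEnum pe = postings(false, FLAG_POSITIONS);
        System.out.println(pe != null);        // true: no null special-casing needed
        System.out.println(pe.nextPosition()); // -1
    }
}
```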
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2868 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2868/ 3 tests failed. FAILED: org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test Error Message: IOException occured when talking to server at: http://127.0.0.1:48214/c8n_1x3_commits_shard1_replica1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: http://127.0.0.1:48214/c8n_1x3_commits_shard1_replica1 at __randomizedtesting.SeedInfo.seed([22900A47AEDD8F59:AAC4359D0021E2A1]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483) at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464) at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130) at org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 786 - Still Failing
The problem is that with NIGHTLY we generate many more documents; if we get unlucky and something like MemoryPostings is also indexing stuff like payloads, we get huge FSTs (the outputs have big postings lists). I don't want to lose coverage for Nightly/Direct, so I removed the NIGHTLY conditional logic in TestDuelingCodecs and spun off a separate @Nightly subclass that just excludes the memory-hungry ones. On Tue, Mar 31, 2015 at 10:15 PM, Robert Muir rcm...@gmail.com wrote: I can reproduce it with current 5.x, it hits OOME after about 15 minutes. On Sat, Mar 14, 2015 at 11:01 AM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/786/ 1 tests failed. REGRESSION: org.apache.lucene.index.TestDuelingCodecs.testEquals Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space at __randomizedtesting.SeedInfo.seed([8353178BBB8D1BE0:873C441075B44BE2]:0) at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307) at org.apache.lucene.util.fst.FST.addNode(FST.java:801) at org.apache.lucene.util.fst.NodeHash.add(NodeHash.java:126) at org.apache.lucene.util.fst.Builder.compileNode(Builder.java:189) at org.apache.lucene.util.fst.Builder.freezeTail(Builder.java:277) at org.apache.lucene.util.fst.Builder.add(Builder.java:381) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.finishTerm(MemoryPostingsFormat.java:257) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$TermsWriter.access$500(MemoryPostingsFormat.java:112) at org.apache.lucene.codecs.memory.MemoryPostingsFormat$MemoryFieldsConsumer.write(MemoryPostingsFormat.java:399) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsWriter.write(PerFieldPostingsFormat.java:198) at org.apache.lucene.codecs.FieldsConsumer.merge(FieldsConsumer.java:105) at org.apache.lucene.index.SegmentMerger.mergeTerms(SegmentMerger.java:186) at 
org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:95) at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:3933) at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3514) at org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40) at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:1803) at org.apache.lucene.index.IndexWriter.doAfterSegmentFlushed(IndexWriter.java:4572) at org.apache.lucene.index.DocumentsWriter$MergePendingEvent.process(DocumentsWriter.java:704) at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4598) at org.apache.lucene.index.IndexWriter.processEvents(IndexWriter.java:4589) at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1351) at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1138) at org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:152) at org.apache.lucene.index.TestDuelingCodecs.createRandomIndex(TestDuelingCodecs.java:137) at org.apache.lucene.index.TestDuelingCodecs.testEquals(TestDuelingCodecs.java:149) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) Build Log: [...truncated 1706 lines...] [junit4] Suite: org.apache.lucene.index.TestDuelingCodecs [junit4] 2 NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory. 
[junit4] 2 NOTE: reproduce with: ant test -Dtestcase=TestDuelingCodecs -Dtests.method=testEquals -Dtests.seed=8353178BBB8D1BE0 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt -Dtests.locale=fr_BE -Dtests.timezone=Africa/Timbuktu -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1 [junit4] ERROR 1667s J2 | TestDuelingCodecs.testEquals [junit4] Throwable #1: java.lang.OutOfMemoryError: Java heap space [junit4]at __randomizedtesting.SeedInfo.seed([8353178BBB8D1BE0:873C441075B44BE2]:0) [junit4]at org.apache.lucene.util.fst.BytesStore.skipBytes(BytesStore.java:307) [junit4]at org.apache.lucene.util.fst.FST.addNode(FST.java:801) [junit4]at
[jira] [Commented] (LUCENE-6271) PostingsEnum should have consistent flags behavior
[ https://issues.apache.org/jira/browse/LUCENE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389800#comment-14389800 ] ASF subversion and git services commented on LUCENE-6271: - Commit 1670530 from [~rcmuir] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670530 ] merge unrelated nightly test bugfixes from LUCENE-6271 branch PostingsEnum should have consistent flags behavior -- Key: LUCENE-6271 URL: https://issues.apache.org/jira/browse/LUCENE-6271 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Attachments: LUCENE-6271.patch When asking for flags like OFFSETS or PAYLOADS with DocsAndPositionsEnum, the behavior was to always return an enum, even if offsets or payloads were not indexed. They would just not be available from the enum if they were not present. This behavior was carried over to PostingsEnum, which is good. However, the new POSITIONS flag has different behavior. If positions are not available, null is returned, instead of a PostingsEnum that just gives access to freqs. This behavior is confusing, as it means you have to special case asking for positions (only ask if you know they were indexed) which sort of defeats the purpose of the unified PostingsEnum. We should make POSITIONS have the same behavior as other flags. The trickiest part will be maintaining backcompat for DocsAndPositionsEnum in 5.x, but I think it can be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
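The behavior LUCENE-6271 argues for can be illustrated with a toy sketch (this is not Lucene's real API; the class and method names below are made up for illustration): asking for a flag that was not indexed should still return a usable enum, with the unavailable feature simply reporting "absent", rather than returning null and forcing callers to special-case POSITIONS.

```java
// Toy sketch of "consistent flags behavior": the lookup never returns null,
// regardless of which flags the caller requests. All names are hypothetical.
import java.util.EnumSet;

public class FlagsDemo {
    enum Flag { FREQS, POSITIONS, OFFSETS, PAYLOADS }

    // Stand-in for PostingsEnum: exposes whatever was actually indexed.
    static class Postings {
        private final EnumSet<Flag> indexed;
        Postings(EnumSet<Flag> indexed) { this.indexed = indexed; }
        // Freqs are always available in this toy model.
        int freq() { return 1; }
        // Positions report a sentinel when they were not indexed.
        int nextPosition() { return indexed.contains(Flag.POSITIONS) ? 0 : -1; }
    }

    // Proposed behavior: always hand back an enum, never null, no matter
    // which flags were requested versus which features were indexed.
    static Postings postings(EnumSet<Flag> indexed, EnumSet<Flag> requested) {
        return new Postings(indexed);
    }

    public static void main(String[] args) {
        // Field indexed with freqs only; caller asks for positions anyway.
        Postings p = postings(EnumSet.of(Flag.FREQS), EnumSet.of(Flag.POSITIONS));
        // No null check needed: freqs still work, positions report "absent".
        System.out.println(p.freq());          // 1
        System.out.println(p.nextPosition());  // -1
    }
}
```

This mirrors how OFFSETS and PAYLOADS already behaved with DocsAndPositionsEnum, and shows why making POSITIONS match removes the caller-side special case.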
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389918#comment-14389918 ] Noble Paul commented on SOLR-7274: -- bq. I would think this one time editing would be performed by a security conscious system administrator, not a user per se. We don't expose web.xml anymore, so it is not even something we would like to document. OTOH, a user hacking Solr to add a servlet filter could be an option for some esoteric custom authentication plugin. But it cannot be an option that we document or recommend Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389933#comment-14389933 ] Anshum Gupta commented on SOLR-7274: +1 on what Noble says. Can we use what Cloudera does? [~gchanan], you might have something to say here. Ideally, we should not expose web.xml or any other jetty specific implementation detail to the end users. Perhaps configure this via environment variables? Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389945#comment-14389945 ] Noble Paul commented on SOLR-7274: -- Let me give a thread dump of my thought process:
* We give an interface for an authentication plugin. The users can choose to use it or not use it (our default impl must use it). All it does is return an instance of java.security.Principal. Solr would just set it via {{request.setAttribute(java.security.Principal, principalObj)}}.
* Solr would provide an interface the user can implement, and we also give a mechanism to configure that.
* If somebody wishes to implement this using a filter, they can still do the same without our plugin interface, because it just uses the servlet API. In that case they would NOT have an authentication plugin and we don't care; we only care about the request attribute.
* The authorization module would be passed the {{Principal}} and it can decide how to authorize the {{Principal}} for the given request.
* Solr would give an API and a mechanism to configure the authorization plugin. We will give a default impl for the same.
Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
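The plugin shape Noble describes can be sketched in a few lines. This is a hedged illustration only: `AuthenticationPlugin`, `HeaderAuthPlugin`, the `X-User` header, and the `dispatch` helper are all hypothetical names, not Solr's eventual API. The only piece taken from the comment is the contract itself: the plugin resolves a request to a `java.security.Principal`, and Solr attaches it as the `java.security.Principal` request attribute for the authorization layer to consume.

```java
// Hedged sketch of the proposed plugin contract; all names are hypothetical.
import java.security.Principal;
import java.util.HashMap;
import java.util.Map;

public class AuthSketch {
    // The pluggable piece: resolve a request to a Principal, or null to reject.
    interface AuthenticationPlugin {
        Principal authenticate(Map<String, String> requestHeaders);
    }

    // Minimal Principal carrying just a user name.
    static final class SimplePrincipal implements Principal {
        private final String name;
        SimplePrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // Trivial example impl: trust a (hypothetical) header carrying the user.
    static class HeaderAuthPlugin implements AuthenticationPlugin {
        public Principal authenticate(Map<String, String> headers) {
            String user = headers.get("X-User");
            return user == null ? null : new SimplePrincipal(user);
        }
    }

    // What the dispatch path would do after the plugin runs: attach the
    // Principal to the request so downstream authorization (SOLR-7275)
    // can read it, without caring how authentication happened.
    static Map<String, Object> dispatch(AuthenticationPlugin plugin,
                                        Map<String, String> headers) {
        Map<String, Object> requestAttributes = new HashMap<>();
        Principal principal = plugin.authenticate(headers);
        if (principal != null) {
            requestAttributes.put("java.security.Principal", principal);
        }
        return requestAttributes;
    }

    public static void main(String[] args) {
        Map<String, String> headers = new HashMap<>();
        headers.put("X-User", "alice");
        Map<String, Object> attrs = dispatch(new HeaderAuthPlugin(), headers);
        System.out.println(((Principal) attrs.get("java.security.Principal")).getName());
    }
}
```

Note how a hand-rolled servlet filter could set the same attribute without implementing the interface, which is why the design only needs to care about the request attribute, not how it got there.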
[jira] [Commented] (LUCENE-6383) MemoryPostings fst encoding can be surprisingly inefficient (especially in tests, with payloads)
[ https://issues.apache.org/jira/browse/LUCENE-6383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389982#comment-14389982 ] Robert Muir commented on LUCENE-6383: - We should also look into why [~jpountz]'s test for things getting bigger on merge (BaseIndexFileFormatTestCase.testMergeStability) doesn't find this. Maybe the problems are specific to payloads? BasePostingsFormat.addRandomFields iterates through all the possible index options, but never adds any payloads. Could be a tricky thing to do in tests in general, because of optimizations when payloads have the same length. There is a TODO in TestMemoryPostingsFormat to randomize pack=true/false. Maybe it's related too: TestMemoryPF never tests that, but RandomCodec randomizes the option. MemoryPostings fst encoding can be surprisingly inefficient (especially in tests, with payloads) Key: LUCENE-6383 URL: https://issues.apache.org/jira/browse/LUCENE-6383 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir I just worked around this in 2 nightly OOM fails. One was TestDuelingCodecs, the other was TestIndexWriterForceMerge's space usage test. In general the trend is the same, it seems the more documents you merge, you just get bigger and bigger FST outputs and the size of this PF in ram and on disk grows in a way you don't expect. E.g. merging 300KB of segments resulted in 450KB single segment, and memory usage gets absurdly high. The issue seems especially aggravated in tests, when MockAnalyzer adds lots of payloads. Maybe it should encode the postings data in a more efficient way? Can it just use a Long output pointing into a RAMFile or something? Or maybe there is just a crazy bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 786 - Still Failing
I can reproduce it with current 5.x, it hits OOME after about 15 minutes. On Sat, Mar 14, 2015 at 11:01 AM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/786/ 1 tests failed. REGRESSION: org.apache.lucene.index.TestDuelingCodecs.testEquals Error Message: Java heap space Stack Trace: java.lang.OutOfMemoryError: Java heap space [...truncated; identical to the stack trace quoted above...]
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389888#comment-14389888 ] Noble Paul commented on SOLR-7274: -- Users editing web.xml is not an option Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389904#comment-14389904 ] Ishan Chattopadhyaya commented on SOLR-7274: I would think this one time editing would be performed by a security conscious system administrator, not a user per se. However, if even that is not a good idea, then, in such a case, the configuration properties can be fetched from ZK. Though, doing that would mean we wouldn't be able to support SolrCloud. Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6271) PostingsEnum should have consistent flags behavior
[ https://issues.apache.org/jira/browse/LUCENE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389915#comment-14389915 ] ASF subversion and git services commented on LUCENE-6271: - Commit 1670547 from [~rcmuir] in branch 'dev/branches/lucene6271' [ https://svn.apache.org/r1670547 ] LUCENE-6271: merge trunk changes up to r1670546 PostingsEnum should have consistent flags behavior -- Key: LUCENE-6271 URL: https://issues.apache.org/jira/browse/LUCENE-6271 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Attachments: LUCENE-6271.patch When asking for flags like OFFSETS or PAYLOADS with DocsAndPositionsEnum, the behavior was to always return an enum, even if offsets or payloads were not indexed. They would just not be available from the enum if they were not present. This behavior was carried over to PostingsEnum, which is good. However, the new POSITIONS flag has different behavior. If positions are not available, null is returned, instead of a PostingsEnum that just gives access to freqs. This behavior is confusing, as it means you have to special case asking for positions (only ask if you know they were indexed) which sort of defeats the purpose of the unified PostingsEnum. We should make POSITIONS have the same behavior as other flags. The trickiest part will be maintaining backcompat for DocsAndPositionsEnum in 5.x, but I think it can be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7334) Admin UI does not show Num Docs and Deleted Docs, and Heap Memory Usage is -1
[ https://issues.apache.org/jira/browse/SOLR-7334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389939#comment-14389939 ] Erick Erickson commented on SOLR-7334: -- Hmm, somehow I can't assign it to Upayavira, so I'll ping him this way: [~upayavira], here's the ticket. Admin UI does not show Num Docs and Deleted Docs, and Heap Memory Usage is -1 --- Key: SOLR-7334 URL: https://issues.apache.org/jira/browse/SOLR-7334 Project: Solr Issue Type: Bug Components: UI Affects Versions: 5.0, Trunk Reporter: Erick Erickson Priority: Blocker I'm calling this a blocker, but I won't argue the point too much. Mostly I'm making sure we make a conscious decision here. Steps to reproduce: bin/solr start -e techproducts Just go to the admin UI and select the core. On a chat, Upayavira volunteered, so I'm assigning it to him. I'm sure if anyone wants to jump on it he wouldn't mind. [~thelabdude] What's your opinion? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6383) MemoryPostings fst encoding can be surprisingly inefficient (especially in tests, with payloads)
Robert Muir created LUCENE-6383: --- Summary: MemoryPostings fst encoding can be surprisingly inefficient (especially in tests, with payloads) Key: LUCENE-6383 URL: https://issues.apache.org/jira/browse/LUCENE-6383 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir I just worked around this in 2 nightly OOM fails. One was TestDuelingCodecs, the other was TestIndexWriterForceMerge's space usage test. In general the trend is the same, it seems the more documents you merge, you just get bigger and bigger FST outputs and the size of this PF in ram and on disk grows in a way you don't expect. E.g. merging 300KB of segments resulted in 450KB single segment, and memory usage gets absurdly high. The issue seems especially aggravated in tests, when MockAnalyzer adds lots of payloads. Maybe it should encode the postings data in a more efficient way? Can it just use a Long output pointing into a RAMFile or something? Or maybe there is just a crazy bug? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
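The trade-off behind Robert's "Long output pointing into a RAMFile" question can be made concrete with back-of-the-envelope arithmetic. This is an illustrative sketch only, not a measurement of MemoryPostingsFormat: when each term's FST output is its serialized postings blob, the bytes held in the FST grow with document count (and payloads inflate every blob); with an 8-byte file pointer as the output, the FST's output footprint stays fixed per term.

```java
// Illustrative size arithmetic for the two FST output encodings discussed
// above. The term count and average postings size are made-up numbers.
public class OutputSizeSketch {
    // Bytes held inside the FST when each term's output is its postings blob.
    static long inlineOutputBytes(int terms, int avgPostingsBytes) {
        return (long) terms * avgPostingsBytes;
    }

    // Bytes held inside the FST when each output is an 8-byte pointer; the
    // postings themselves would live once, outside the FST (e.g. a RAMFile).
    static long pointerOutputBytes(int terms) {
        return (long) terms * 8L;
    }

    public static void main(String[] args) {
        int terms = 1_000_000;
        int avgPostingsBytes = 300;  // grows with merged doc count and payloads
        System.out.println(inlineOutputBytes(terms, avgPostingsBytes)); // 300000000
        System.out.println(pointerOutputBytes(terms));                  // 8000000
    }
}
```

The sketch also hints at why merging aggravates the problem: merging grows `avgPostingsBytes` for every surviving term, so the inline encoding's footprint scales with total postings data rather than with term count.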
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 803 - Still Failing
I'm looking into this one next. On Tue, Mar 31, 2015 at 1:06 PM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/803/ 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest Error Message: file handle leaks: [FileChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/replicator/test/J2/temp/lucene.replicator.IndexAndTaxonomyReplicationClientTest 10CCB411FDA8B966-001/index-MMapDirectory-001/_1j_MockRandom_0.tio)] Stack Trace: java.lang.RuntimeException: file handle leaks: [FileChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/replicator/test/J2/temp/lucene.replicator.IndexAndTaxonomyReplicationClientTest 10CCB411FDA8B966-001/index-MMapDirectory-001/_1j_MockRandom_0.tio)] at __randomizedtesting.SeedInfo.seed([10CCB411FDA8B966]:0) at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:212) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.Exception at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:47) at org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:84) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:191) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:163) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:163) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at java.nio.channels.FileChannel.open(FileChannel.java:287) at java.nio.channels.FileChannel.open(FileChannel.java:334) at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:213) at org.apache.lucene.codecs.blocktreeords.OrdsBlockTreeTermsReader.init(OrdsBlockTreeTermsReader.java:71) at org.apache.lucene.codecs.mockrandom.MockRandomPostingsFormat.fieldsProducer(MockRandomPostingsFormat.java:383) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.init(PerFieldPostingsFormat.java:258) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:336) at org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:104) at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:65) at 
org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58) at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50) at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:660) at org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50) at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63) at org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest.assertHandlerRevision(IndexAndTaxonomyReplicationClientTest.java:158) at
[jira] [Commented] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389935#comment-14389935 ] Ishan Chattopadhyaya commented on SOLR-7274: Sure, sounds good. We could do the configuration via environment variables. Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
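The env-var-driven configuration the thread converges on could look something like the sketch below. Everything here is hypothetical: the variable name `SOLR_AUTH_PLUGIN` and the class names are invented for illustration; the only idea taken from the discussion is selecting the authentication plugin from the environment instead of web.xml.

```java
// Minimal sketch of env-var-driven plugin selection. SOLR_AUTH_PLUGIN and
// both class names are hypothetical, not real Solr configuration keys.
import java.util.HashMap;
import java.util.Map;

public class AuthConfig {
    static String pluginClass(Map<String, String> env) {
        // Fall back to a no-op plugin when nothing is configured.
        return env.getOrDefault("SOLR_AUTH_PLUGIN", "org.example.NoOpAuthPlugin");
    }

    public static void main(String[] args) {
        // Real code would pass System.getenv(); a plain map keeps this testable.
        Map<String, String> env = new HashMap<>();
        System.out.println(pluginClass(env));  // falls back to the default
        env.put("SOLR_AUTH_PLUGIN", "org.example.KerberosAuthPlugin");
        System.out.println(pluginClass(env));  // uses the configured value
    }
}
```

Taking the environment as a plain map rather than calling `System.getenv()` directly is what makes a scheme like this easy to unit-test and to override per deployment.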
[jira] [Comment Edited] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389336#comment-14389336 ] Ishan Chattopadhyaya edited comment on SOLR-7274 at 4/1/15 3:36 AM: I am working on implementing pluggable authentication support, initially supporting Kerberos and Basic Auth mechanisms. Here's a high level design that I'm working towards: An authentication layer, consisting of servlet filters for each of the supported mechanisms, needs to be written and configured (via environment variables) to be invoked before the requests hit the SolrDispatchFilter. (In case of us moving away from the servlets paradigm, this can later be folded into the SolrDispatchFilter.) This authentication layer should ensure that the request, which leaves this layer and gets propagated down the chain, must, at least, have a java.security.Principal object associated with the request. This user principal could be used, for example, by any downstream authorization layer (SOLR-7275) to perform fine grained access control based on requests, resources etc. As for inter-node requests, the interfaces should support both (a) inter-node requests authenticating using the original user principal (where possible); as well as (b) inter-node requests authenticating using a node's own service principal. (SOLR-4470 has some context for this with respect to basic auth.) was (Author: ichattopadhyaya): I am working on implementing pluggable authentication support, initially supporting Kerberos and Basic Auth mechanisms. Here's a high level design that I'm working towards: An authentication layer, consisting of servlet filters for each of the supported mechanisms, needs to be written and configured (via web.xml) to be invoked before the requests hit the SolrDispatchFilter. (In case of us moving away from the servlets paradigm, this can later be folded into the SolrDispatchFilter.) 
This authentication layer should ensure that the request, which leaves this layer and gets propagated down the chain, must, at least, have a java.security.Principal object associated with the request. This user principal could be used, for example, by any downstream authorization layer (SOLR-7275) to perform fine grained access control based on requests, resources etc. As for inter-node requests, the interfaces should support both (a) inter-node requests authenticating using the original user principal (where possible); as well as (b) inter-node requests authenticating using a node's own service principal. (SOLR-4470 has some context for this with respect to basic auth.) Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14389945#comment-14389945 ] Noble Paul edited comment on SOLR-7274 at 4/1/15 3:55 AM: -- Let me give a thread dump of my thought process * We give an interface for an authentication plugin. The users can choose to use it or not use it (our basic impl must use it). All it does is return an instance of java.security.Principal. Solr would just set it via {{request.setAttribute(java.security.Principal, principalObj)}}. * Solr would provide an interface the user can implement, and we also give a mechanism to configure that. * If somebody wishes to implement this using a filter, they can still do the same without our plugin interface, because it just uses the servlet API. In that case they would NOT have an authentication plugin and Solr doesn't care. It only cares about the request attribute. We will NOT have to support or document the filter mode. In the future, if we move away from a web container, all the plugins implemented using our API will be back compatible, and we DO NOT have to offer any such guarantees to the filter based implementations * The authorization module would be passed the {{Principal}} and it can decide how to authorize the {{Principal}} for the given request * Solr would give an API and a mechanism to configure the authorization plugin, and Solr will give a basic impl for the same. was (Author: noble.paul): Let me give a thread dump of my thought process * We give an interface for an authentication plugin. The users can choose to use it or not use it (our default impl must use it) . All it does is , return an instance of java.security.Principal. Solr would just set it to {{request.setAttribute(java.security.Principal, principalObj)}}. * Solr would provide an interface the user can implement and we also give a mechanism to configure that. 
* If somebody wishes to implement this using a filter , they can still do the same without our plugin interface . Because, it just uses the servlet API. And, in that case they would NOT have an authentication plugin and we don't care . We only care about the request attribute * The authorization module would be passed the {{Principal}} and it can decide on how to authorize the {{Principal}} for the given request * Solr would give an API and a mechanism to configure the authorization plugin and . We will give a default impl for the same . Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4622 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4622/ Java: 32bit/jdk1.8.0_40 -client -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler Error Message: Could not remove the following files (in the order of attempts): C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf\params.json: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf\params.json: The process cannot access the file because it is being used by another process. C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001 Stack Trace: java.io.IOException: Could not remove the following files (in the order of attempts): C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf\params.json: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf\params.json: The process cannot access the file because it is being used by another process. C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1\conf C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010\collection1 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001\tempDir-010 C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 7F75871214028009-001 at __randomizedtesting.SeedInfo.seed([7F75871214028009]:0) at org.apache.lucene.util.IOUtils.rm(IOUtils.java:286) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:200) at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause
[ https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389977#comment-14389977 ] ASF subversion and git services commented on LUCENE-6378: - Commit 1670564 from [~varunthacker] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1670564 ] LUCENE-6378: Fix all RuntimeExceptions to throw the underlying root cause (merged from trunk r1670453) Fix RuntimeExceptions that are thrown without the root cause Key: LUCENE-6378 URL: https://issues.apache.org/jira/browse/LUCENE-6378 Project: Lucene - Core Issue Type: Bug Reporter: Varun Thacker Fix For: Trunk, 5.1 Attachments: LUCENE-6378.patch In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown without wrapping the root cause. We should fix them to wrap the root cause before throwing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause
[ https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Thacker resolved LUCENE-6378. --- Resolution: Fixed Fix Version/s: (was: 5.1) 5.2 Thanks Mike and Adrien! Fix RuntimeExceptions that are thrown without the root cause Key: LUCENE-6378 URL: https://issues.apache.org/jira/browse/LUCENE-6378 Project: Lucene - Core Issue Type: Bug Reporter: Varun Thacker Fix For: Trunk, 5.2 Attachments: LUCENE-6378.patch In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown without wrapping the root cause. We should fix them to wrap the root cause before throwing it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
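The pattern this issue fixes is easy to illustrate. The following is a hypothetical, self-contained example (none of these names come from the LUCENE-6378 patch): passing the caught exception as the second constructor argument keeps the root cause, and its stack trace, attached to the RuntimeException.

```java
// Hypothetical illustration (not code from the LUCENE-6378 patch).
// Passing the caught exception to the RuntimeException constructor
// preserves the root cause and its stack trace for debugging.
public class WrapCause {
    static int parse(String s) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // Bad:  throw new RuntimeException("could not parse " + s);
            // Good: wrap the root cause before throwing.
            throw new RuntimeException("could not parse " + s, e);
        }
    }

    public static void main(String[] args) {
        try {
            parse("not-a-number");
        } catch (RuntimeException e) {
            // The original NumberFormatException is still reachable.
            System.out.println(e.getCause().getClass().getSimpleName());
        }
    }
}
```

Without the wrapped cause, e.getCause() would return null and the origin of the failure would be lost from the stack trace.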
Re: backport of nightly test fixes to 5.1 branch.
Sounds good Mr. Muir (since we're being all formal and such ;-), thanks for the heads up. Cheers, Tim On Tue, Mar 31, 2015 at 6:02 PM, Robert Muir rcm...@gmail.com wrote: Hi Timothy, As a start I'd like to merge http://svn.apache.org/viewvc?view=revision&revision=1670530 to the 5.1 branch. These are the most frequent failures during nightly tests in lucene. Changes are test-only. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6271) PostingsEnum should have consistent flags behavior
[ https://issues.apache.org/jira/browse/LUCENE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389921#comment-14389921 ] ASF subversion and git services commented on LUCENE-6271: - Commit 1670548 from [~rcmuir] in branch 'dev/branches/lucene_solr_5_1' [ https://svn.apache.org/r1670548 ] merge unrelated nightly test bugfixes from LUCENE-6271 branch PostingsEnum should have consistent flags behavior -- Key: LUCENE-6271 URL: https://issues.apache.org/jira/browse/LUCENE-6271 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Attachments: LUCENE-6271.patch When asking for flags like OFFSETS or PAYLOADS with DocsAndPositionsEnum, the behavior was to always return an enum, even if offsets or payloads were not indexed. They would just not be available from the enum if they were not present. This behavior was carried over to PostingsEnum, which is good. However, the new POSITIONS flag has different behavior. If positions are not available, null is returned, instead of a PostingsEnum that just gives access to freqs. This behavior is confusing, as it means you have to special case asking for positions (only ask if you know they were indexed) which sort of defeats the purpose of the unified PostingsEnum. We should make POSITIONS have the same behavior as other flags. The trickiest part will be maintaining backcompat for DocsAndPositionsEnum in 5.x, but I think it can be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6271) PostingsEnum should have consistent flags behavior
[ https://issues.apache.org/jira/browse/LUCENE-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389795#comment-14389795 ] ASF subversion and git services commented on LUCENE-6271: - Commit 1670529 from [~rcmuir] in branch 'dev/trunk' [ https://svn.apache.org/r1670529 ] merge unrelated nightly test bugfixes from LUCENE-6271 branch PostingsEnum should have consistent flags behavior -- Key: LUCENE-6271 URL: https://issues.apache.org/jira/browse/LUCENE-6271 Project: Lucene - Core Issue Type: Bug Reporter: Ryan Ernst Attachments: LUCENE-6271.patch When asking for flags like OFFSETS or PAYLOADS with DocsAndPositionsEnum, the behavior was to always return an enum, even if offsets or payloads were not indexed. They would just not be available from the enum if they were not present. This behavior was carried over to PostingsEnum, which is good. However, the new POSITIONS flag has different behavior. If positions are not available, null is returned, instead of a PostingsEnum that just gives access to freqs. This behavior is confusing, as it means you have to special case asking for positions (only ask if you know they were indexed) which sort of defeats the purpose of the unified PostingsEnum. We should make POSITIONS have the same behavior as other flags. The trickiest part will be maintaining backcompat for DocsAndPositionsEnum in 5.x, but I think it can be done. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
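The two contracts the issue contrasts can be sketched with toy classes (this is not the actual Lucene API): under the old POSITIONS behavior callers must special-case a null return, while the proposed behavior always hands back an enum whose freqs remain usable.

```java
// Toy model (not the Lucene API) of the two flag contracts described above.
public class FlagsBehavior {
    static final int FREQS = 0, POSITIONS = 1;

    /** Stand-in for PostingsEnum: freq() always works; positions may not. */
    static class Postings {
        final boolean positionsIndexed;
        Postings(boolean positionsIndexed) { this.positionsIndexed = positionsIndexed; }
        int freq() { return 3; }                                 // always available
        int nextPosition() { return positionsIndexed ? 7 : -1; } // -1 = not indexed
    }

    /** Old POSITIONS contract: null when positions were not indexed. */
    static Postings postingsOld(int flags, boolean positionsIndexed) {
        if (flags >= POSITIONS && !positionsIndexed) {
            return null; // caller must special-case this
        }
        return new Postings(positionsIndexed);
    }

    /** Proposed contract: always return an enum, like OFFSETS/PAYLOADS. */
    static Postings postingsNew(int flags, boolean positionsIndexed) {
        return new Postings(positionsIndexed);
    }

    public static void main(String[] args) {
        // Old contract: the caller must re-ask with fewer flags on null.
        Postings p = postingsOld(POSITIONS, false);
        int freq = (p == null) ? postingsOld(FREQS, false).freq() : p.freq();
        // New contract: uniform calling code, freqs still accessible.
        int freq2 = postingsNew(POSITIONS, false).freq();
        System.out.println(freq + " " + freq2); // prints "3 3"
    }
}
```

The null-returning variant forces every call site to know in advance whether positions were indexed, which is exactly the special-casing the issue wants to remove.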
[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12174 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12174/ Java: 32bit/jdk1.8.0_60-ea-b06 -client -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test Error Message: There were too many update fails (25 > 20) - we expect it can happen, but shouldn't easily Stack Trace: java.lang.AssertionError: There were too many update fails (25 > 20) - we expect it can happen, but shouldn't easily at __randomizedtesting.SeedInfo.seed([4F77470707C5572A:C72378DDA9393AD2]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 774 - Still Failing
I committed a fix. But I think memoryPF can get really wasteful here. This is ultimately the same problem as the TestDuelingCodecs OOM. On Tue, Mar 31, 2015 at 11:08 PM, Robert Muir rcm...@gmail.com wrote: This reproduces. I'm digging. On Mon, Mar 2, 2015 at 9:52 AM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/774/ 1 tests failed. REGRESSION: org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage Error Message: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) Stack Trace: java.lang.AssertionError: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) at __randomizedtesting.SeedInfo.seed([AD6008DD6F02F612:B7A2CB2E011215D5]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage(TestIndexWriterForceMerge.java:181) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
backport of nightly test fixes to 5.1 branch.
Hi Timothy, As a start I'd like to merge http://svn.apache.org/viewvc?view=revision&revision=1670530 to the 5.1 branch. These are the most frequent failures during nightly tests in lucene. Changes are test-only. - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2121 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2121/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: commitWithin did not work on node: http://127.0.0.1:51154/collection1 expected:<68> but was:<67> Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:51154/collection1 expected:<68> but was:<67> at __randomizedtesting.SeedInfo.seed([F73AAC6C2D198304:7F6E93B683E5EEFC]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:343) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems
[ https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389924#comment-14389924 ] Shawn Heisey commented on SOLR-7319:

Devolving into a general discussion about garbage collection tuning: [~jim.ferenczi], I've had really good luck with these GC tuning options, although I have now moved on to G1GC: https://wiki.apache.org/solr/ShawnHeisey#CMS_.28ConcurrentMarkSweep.29_Collector

I tried really hard to make these options completely generic and not dependent on the number of CPUs, the size of the heap, the amount of system memory, or anything else that's site specific, but users with particularly small or large setups might need to adjust them.

Here are the GC tuning options I ended up with when I updated and compiled branch_5x and started the server with bin/solr:

{noformat}
-XX:NewRatio=3
-XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90
-XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:ConcGCThreads=4
-XX:ParallelGCThreads=4
-XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m
-XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50
-XX:CMSMaxAbortablePrecleanTime=6000
-XX:+CMSParallelRemarkEnabled
-XX:+ParallelRefProcEnabled
{noformat}

These are largely the same as what I came up with for my system. Both sets have options that the other set doesn't. I know from experience and my discussions on the hotspot-gc-use mailing list that ParallelRefProcEnabled is *critical* for good GC performance with Solr. Solr apparently creates a LOT of references, so processing them in parallel is a real help. PretenureSizeThreshold is probably very important, to make sure that objects will not automatically end up in the old generation unless they're REALLY big - similar to the G1HeapRegionSize option for G1 that can control which objects are classified as humongous allocations. The other options are a concerted effort to avoid full GCs.
I don't like the fact that the number of GC threads is hard-coded. For someone who's got 8 or more CPU cores (which I do), these are probably good options, but if you've got a low end system with one or two cores, it's too many threads. I have to wonder whether the 512MB default heap size is a problem. It would be for me, but for a small-scale proof-of-concept, it is probably plenty. Would it be easily possible to detect the total amount of system memory and set the max heap to a percentage? Workaround the Four Month Bug causing GC pause problems - Key: SOLR-7319 URL: https://issues.apache.org/jira/browse/SOLR-7319 Project: Solr Issue Type: Bug Components: scripts and tools Affects Versions: 5.0 Reporter: Shawn Heisey Assignee: Shawn Heisey Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch A twitter engineer found a bug in the JVM that contributes to GC pause problems: http://www.evanjones.ca/jvm-mmap-pause.html Problem summary (in case the blog post disappears): The JVM calculates statistics on things like garbage collection and writes them to a file in the temp directory using MMAP. If there is a lot of other MMAP write activity, which is precisely how Lucene accomplishes indexing and merging, it can result in a GC pause because the mmap write to the temp file is delayed. We should implement the workaround in the solr start scripts (disable creation of the mmap statistics tempfile) and document the impact in CHANGES.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
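On the question of deriving a max heap from total system memory, here is a hedged sketch of the idea in Java; nothing in it is existing Solr code. The com.sun.management cast is a HotSpot-specific extension, the 25% figure is purely illustrative, and a start script would more likely read /proc/meminfo or sysctl instead.

```java
import java.lang.management.ManagementFactory;

// Hedged sketch (not existing Solr code) of sizing the max heap as a
// percentage of physical memory. The com.sun.management cast is a
// HotSpot-specific extension; the 25% figure is illustrative only.
public class HeapFromPhysicalMemory {
    /** Suggest a max heap as a percentage of total physical memory. */
    static long suggestedMaxHeapBytes(long physicalBytes, int percent) {
        return physicalBytes * percent / 100L;
    }

    public static void main(String[] args) {
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize();
        long heap = suggestedMaxHeapBytes(physical, 25);
        // Emit a flag a start script could splice into the java command line.
        System.out.println("-Xmx" + (heap >> 20) + "m");
    }
}
```

For example, with 8 GiB of physical memory and 25%, this suggests a 2 GiB max heap.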
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 774 - Still Failing
I opened https://issues.apache.org/jira/browse/LUCENE-6383 as a followup. The fix should work for this test at least, i beasted the test over 100 times and it seems ok now. On Wed, Apr 1, 2015 at 12:02 AM, Robert Muir rcm...@gmail.com wrote: I committed a fix. But I think memoryPF can get really wasteful here. This is ultimately the same problem as the TestDuelingCodecs OOM. On Tue, Mar 31, 2015 at 11:08 PM, Robert Muir rcm...@gmail.com wrote: This reproduces. I'm digging. On Mon, Mar 2, 2015 at 9:52 AM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/774/ 1 tests failed. REGRESSION: org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage Error Message: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) Stack Trace: java.lang.AssertionError: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) at __randomizedtesting.SeedInfo.seed([AD6008DD6F02F612:B7A2CB2E011215D5]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage(TestIndexWriterForceMerge.java:181) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 803 - Still Failing
I picked same JVM version and args and revision number, and tried beasting the seed and master seed. I cannot make this one fail. I don't see a leak in OrdsBlockTreeTermsReader at a glance. The test itself is provoking random low level exceptions and assertHandlerRevision is masking some of them, which worries me a little that there is a race somewhere and it's a real problem. I will improve the exception on leaked files to not just show the first one but give more debugging information in case it happens again. On Tue, Mar 31, 2015 at 1:06 PM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/803/ 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.lucene.replicator.IndexAndTaxonomyReplicationClientTest Error Message: file handle leaks: [FileChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/replicator/test/J2/temp/lucene.replicator.IndexAndTaxonomyReplicationClientTest 10CCB411FDA8B966-001/index-MMapDirectory-001/_1j_MockRandom_0.tio)] Stack Trace: java.lang.RuntimeException: file handle leaks: [FileChannel(/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/replicator/test/J2/temp/lucene.replicator.IndexAndTaxonomyReplicationClientTest 10CCB411FDA8B966-001/index-MMapDirectory-001/_1j_MockRandom_0.tio)] at __randomizedtesting.SeedInfo.seed([10CCB411FDA8B966]:0) at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:64) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:79) at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:212) at
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.Exception at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:47) at org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:84) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:191) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:163) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:163) at org.apache.lucene.mockfile.FilterFileSystemProvider.newFileChannel(FilterFileSystemProvider.java:204) at java.nio.channels.FileChannel.open(FileChannel.java:287) at java.nio.channels.FileChannel.open(FileChannel.java:334) at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:213) at org.apache.lucene.codecs.blocktreeords.OrdsBlockTreeTermsReader.init(OrdsBlockTreeTermsReader.java:71) at 
org.apache.lucene.codecs.mockrandom.MockRandomPostingsFormat.fieldsProducer(MockRandomPostingsFormat.java:383) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.init(PerFieldPostingsFormat.java:258) at org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:336) at org.apache.lucene.index.SegmentCoreReaders.init(SegmentCoreReaders.java:104) at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:65) at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58) at org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50) at
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_40) - Build # 12008 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12008/ Java: 32bit/jdk1.8.0_40 -server -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.DistributedIntervalFacetingTest.test Error Message: Timeout occured while waiting response from server at: https://127.0.0.1:47256/_bf/v/collection1 Stack Trace: org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:47256/_bf/v/collection1 at __randomizedtesting.SeedInfo.seed([B2B88A2DEAD1437D:3AECB5F7442D2E85]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:568) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:139) at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:153) at org.apache.solr.BaseDistributedSearchTestCase.indexDoc(BaseDistributedSearchTestCase.java:484) at org.apache.solr.BaseDistributedSearchTestCase.indexr(BaseDistributedSearchTestCase.java:466) at org.apache.solr.DistributedIntervalFacetingTest.testRandom(DistributedIntervalFacetingTest.java:145) at org.apache.solr.DistributedIntervalFacetingTest.test(DistributedIntervalFacetingTest.java:45) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:982) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at
Re: [JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 774 - Still Failing
This reproduces. I'm digging. On Mon, Mar 2, 2015 at 9:52 AM, Apache Jenkins Server jenk...@builds.apache.org wrote: Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/774/ 1 tests failed. REGRESSION: org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage Error Message: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) Stack Trace: java.lang.AssertionError: forceMerge used too much temporary space: starting usage was 379542 bytes; final usage was 442916 bytes; max temp usage was 1669519 but should have been 1328748 (= 3X starting usage) at __randomizedtesting.SeedInfo.seed([AD6008DD6F02F612:B7A2CB2E011215D5]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.lucene.index.TestIndexWriterForceMerge.testForceMergeTempSpaceUsage(TestIndexWriterForceMerge.java:181) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at
[jira] [Comment Edited] (SOLR-7274) Pluggable authentication module in Solr
[ https://issues.apache.org/jira/browse/SOLR-7274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14389935#comment-14389935 ] Ishan Chattopadhyaya edited comment on SOLR-7274 at 4/1/15 3:37 AM: Sure, sounds good. We could do the configuration via environment variables. I've modified my initial comment to reflect this. was (Author: ichattopadhyaya): Sure, sounds good. We could do the configuration via environment variables. Pluggable authentication module in Solr --- Key: SOLR-7274 URL: https://issues.apache.org/jira/browse/SOLR-7274 Project: Solr Issue Type: Sub-task Reporter: Anshum Gupta It would be good to have Solr support different authentication protocols. To begin with, it'd be good to have support for Kerberos and basic auth. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7334) Admin UI does not show Num Docs and Deleted Docs, and Heap Memory Usage is -1
Erick Erickson created SOLR-7334: Summary: Admin UI does not show Num Docs and Deleted Docs, and Heap Memory Usage is -1 Key: SOLR-7334 URL: https://issues.apache.org/jira/browse/SOLR-7334 Project: Solr Issue Type: Bug Components: UI Affects Versions: 5.0, Trunk Reporter: Erick Erickson Priority: Blocker I'm calling this a blocker, but I won't argue the point too much. Mostly I'm making sure we make a conscious decision here. Steps to reproduce: bin/solr start -e techproducts Then go to the admin UI and select the core. On a chat, Upayavira volunteered, so I'm assigning it to him. I'm sure if anyone wants to jump on it he wouldn't mind. [~thelabdude] What's your opinion? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper
[ https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388548#comment-14388548 ] Shai Erera commented on SOLR-6736: -- bq. I'm tempted to just restrict this to support just upload of a configset because we have not yet assessed the security implications of these and the implications of changing config of a running collection [~noble.paul], I've read the discussion on SOLR-5287 and was wondering if you can explain why limiting this API to only uploading a configset addresses any of the security vulnerabilities? The configset may include anything that I want, including XSLT files which may be able to hamper the system, correct? Is it only because we cannot associate a configset with a collection when we issue a {{/collections?action=CREATE}} command that you consider it safe? I.e. the configset will exist in ZK, but not really be used? If so, why enable this at all? A collections-like request handler to manage solr configurations on zookeeper - Key: SOLR-6736 URL: https://issues.apache.org/jira/browse/SOLR-6736 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Varun Rajput Assignee: Anshum Gupta Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, zkconfighandler.zip Managing Solr configuration files on zookeeper becomes cumbersome while using solr in cloud mode, especially while trying out changes in the configurations. It will be great if there is a request handler that can provide an API to manage the configurations similar to the collections handler that would allow actions like uploading new configurations, linking them to a collection, deleting configurations, etc. example : {code} #use the following command to upload a new configset called mynewconf. This will fail if there is already a conf called 'mynewconf'. 
The file could be a jar, zip or tar file which contains all the files for this conf. curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf?sig=the-signature {code} A GET to http://localhost:8983/solr/admin/configs will give a list of configs available. A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the list of files in mynewconf. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40) - Build # 4502 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4502/ Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC 2 tests failed. FAILED: org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test Error Message: There were too many update fails (28 > 20) - we expect it can happen, but shouldn't easily Stack Trace: java.lang.AssertionError: There were too many update fails (28 > 20) - we expect it can happen, but shouldn't easily at __randomizedtesting.SeedInfo.seed([B3BF5B4900596339:3BEB6493AEA50EC1]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertFalse(Assert.java:68) at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7324) No need to call isIndexStale if full copy is already needed
[ https://issues.apache.org/jira/browse/SOLR-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388673#comment-14388673 ] ASF subversion and git services commented on SOLR-7324: --- Commit 1670359 from [~varunthacker] in branch 'dev/trunk' [ https://svn.apache.org/r1670359 ] SOLR-7324: IndexFetcher does not need to call isIndexStale if full copy is already needed No need to call isIndexStale if full copy is already needed --- Key: SOLR-7324 URL: https://issues.apache.org/jira/browse/SOLR-7324 Project: Solr Issue Type: Improvement Components: replication (java) Affects Versions: 4.10.4 Reporter: Stephan Lagraulet Assignee: Varun Thacker Attachments: SOLR-7324.patch During replication, we had a message File _3ww7_Lucene41_0.tim expected to be 2027667 while it is 1861076 when in fact there was already a match on commit.getGeneration() = latestGeneration So this extra operation is not needed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
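The core idea of the SOLR-7324 change can be sketched as follows. This is an illustrative sketch only; the class, method, and variable names below are assumptions, not the actual IndexFetcher code: once a full index copy has already been decided, the per-file staleness check is redundant, so it should be short-circuited.

```java
// Hypothetical sketch of the SOLR-7324 guard; names are illustrative,
// not the real org.apache.solr.handler.IndexFetcher implementation.
public class FetchDecisionDemo {

    // Stand-in for the real (comparatively expensive) per-file staleness
    // check. On the full-copy path it must never run, so this stub makes
    // any accidental call visible by throwing.
    static boolean isIndexStale() {
        throw new IllegalStateException(
            "isIndexStale should be skipped when a full copy is needed");
    }

    // The '&&' short-circuit guarantees isIndexStale() is consulted only
    // when a full copy has NOT already been decided.
    static boolean needsPerFileCheck(boolean isFullCopyNeeded) {
        return !isFullCopyNeeded && isIndexStale();
    }

    public static void main(String[] args) {
        // Full copy already required: the per-file check is skipped entirely,
        // so no exception is thrown and the result is false.
        System.out.println(needsPerFileCheck(true));
    }
}
```

Running the demo prints `false` without ever invoking the stub, which is exactly the redundant work the patch removes.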
[JENKINS] Lucene-Solr-5.1-Linux (64bit/jdk1.8.0_40) - Build # 177 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.1-Linux/177/ Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test Error Message: Error from server at http://127.0.0.1:60892/compositeid_collection_with_routerfield_shard1_replica1: no servers hosting shard: Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:60892/compositeid_collection_with_routerfield_shard1_replica1: no servers hosting shard: at __randomizedtesting.SeedInfo.seed([CFAB63BFDE7309FC:47FF5C65708F6404]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:556) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958) at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:357) at org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
Hi Charles, On Tue, Mar 31, 2015 at 4:12 PM, Reitzel, Charles charles.reit...@tiaa-cref.org wrote: Am I missing something? Across the project, I’m seeing over 1,000 references to BooleanQuery.add(). Already, this seems like a pretty major refactoring. And I haven’t checked the other types of queries: DisjunctionMax, Phrase, and MultiPhrase. At that scale, bugs will be introduced. I’m not disagreeing with the concept. At all. It’s part of the Collections contract that anything used in hashCode() and equals() be kept immutable. Just wondering if the cost is worth the principle this time? The majority of call sites are in test folders. This does not make the change easier, but it decreases the chances of introducing an actual bug. Also, the queries that we need to modify are those that are best tested, so I'm quite confident that this change will not be a bug nest. However, I totally agree that the change is huge, which is why I asked for opinions on the list before actually doing it. -- Adrien - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
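The direction discussed above (move all mutation onto a builder so the query object itself can honor the hashCode()/equals() contract that collections and caches rely on) can be sketched with a toy class. This is an illustrative pattern only, not Lucene's actual BooleanQuery API; all class names here are hypothetical.

```java
// Hypothetical sketch of the immutable-query-plus-builder pattern;
// not Lucene's real classes.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

final class TermClause {
    final String field, text;
    TermClause(String field, String text) { this.field = field; this.text = text; }
    @Override public boolean equals(Object o) {
        return o instanceof TermClause
            && ((TermClause) o).field.equals(field)
            && ((TermClause) o).text.equals(text);
    }
    @Override public int hashCode() { return Objects.hash(field, text); }
}

final class ImmutableBooleanQuery {
    private final List<TermClause> clauses;
    private ImmutableBooleanQuery(List<TermClause> clauses) {
        // Defensive, unmodifiable copy: state can never change after
        // construction, so hashCode()/equals() stay stable forever.
        this.clauses = Collections.unmodifiableList(new ArrayList<>(clauses));
    }
    @Override public boolean equals(Object o) {
        return o instanceof ImmutableBooleanQuery
            && ((ImmutableBooleanQuery) o).clauses.equals(clauses);
    }
    @Override public int hashCode() { return clauses.hashCode(); }

    // All mutation lives on the builder; build() freezes the state.
    static final class Builder {
        private final List<TermClause> clauses = new ArrayList<>();
        Builder add(TermClause c) { clauses.add(c); return this; }
        ImmutableBooleanQuery build() { return new ImmutableBooleanQuery(clauses); }
    }
}

public class BuilderDemo {
    public static void main(String[] args) {
        ImmutableBooleanQuery q1 = new ImmutableBooleanQuery.Builder()
            .add(new TermClause("title", "lucene")).build();
        ImmutableBooleanQuery q2 = new ImmutableBooleanQuery.Builder()
            .add(new TermClause("title", "lucene")).build();
        // Equal state implies equal hash codes, so the query is safe to
        // use as a key in a cache or HashSet.
        System.out.println(q1.equals(q2) && q1.hashCode() == q2.hashCode());
    }
}
```

The point of the pattern is that every call site that previously mutated a query after construction must instead go through the builder, which is exactly why the refactoring touches so many (mostly test) call sites.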
[jira] [Commented] (SOLR-7312) REST API is not REST
[ https://issues.apache.org/jira/browse/SOLR-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388669#comment-14388669 ] Mark Haase commented on SOLR-7312: -- REST API is not REST -- Key: SOLR-7312 URL: https://issues.apache.org/jira/browse/SOLR-7312 Project: Solr Issue Type: Improvement Components: Server Affects Versions: 5.0 Reporter: Mark Haase Assignee: Noble Paul The documentation refers to a REST API over and over, and yet I don't see a REST API. I see an HTTP API but not a REST API. Here are a few things the HTTP API does that are not RESTful: * Offers RPC verbs instead of resources/nouns. (E.g. schema API has commands like add-field, add-copy-field, etc.) * Tunnels non-idempotent requests (like creating a core) through idempotent HTTP verb (GET). * Tunnels deletes through HTTP GET. * PUT/POST confusion, POST used to update a named resource, such as the Blob API. * Returns `200 OK` HTTP code even when the command fails. (Try adding a field to your schema that already exists. You get `200 OK` and an error message hidden in the payload. Try calling a collections API when you're using non-cloud mode: `200 OK` and an error message in the payload. Gah.) * Does not provide link relations. * HTTP status line contains a JSON payload (!) and no 'Content-Type' header for some failed commands, like `curl -X DELETE http://solr:8983/solr/admin/cores/foo` * Content negotiation is done via query parameter (`wt=json`), instead of `Accept` header. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2119 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2119/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED: org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability

Error Message:
No live SolrServers available to handle this request

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request
    at __randomizedtesting.SeedInfo.seed([44DBD7564AE74501:85130A10EB8194A8]:0)
    at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:564)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
    at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
    at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
    at org.apache.solr.client.solrj.TestLBHttpSolrClient.testReliability(TestLBHttpSolrClient.java:219)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at
[jira] [Commented] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper
[ https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388727#comment-14388727 ] Anshum Gupta commented on SOLR-6736: I think it would make sense to hold this until SOLR-7274 and SOLR-7275 (more importantly) get committed. I'm working on security in Solr and I think I should be done with it in time for 5.2.

A collections-like request handler to manage solr configurations on zookeeper - Key: SOLR-6736 URL: https://issues.apache.org/jira/browse/SOLR-6736 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Varun Rajput Assignee: Anshum Gupta Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, zkconfighandler.zip

Managing Solr configuration files on ZooKeeper becomes cumbersome when using Solr in cloud mode, especially while trying out changes to the configurations. It would be great to have a request handler that provides an API to manage the configurations, similar to the collections handler, allowing actions like uploading new configurations, linking them to a collection, deleting configurations, etc. Example:
{code}
# Upload a new configset called mynewconf. This will fail if there is
# already a configset called 'mynewconf'. The file can be a jar, zip or
# tar file containing all the files for this configset.
curl -X POST -H 'Content-Type: application/octet-stream' --data-binary @testconf.zip http://localhost:8983/solr/admin/configs/mynewconf?sig=the-signature
{code}
A GET to http://localhost:8983/solr/admin/configs will give a list of the available configs.
A GET to http://localhost:8983/solr/admin/configs/mynewconf will give the list of files in mynewconf.
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
On Tue, Mar 31, 2015 at 4:32 PM, Terry Smith sheb...@gmail.com wrote: Thanks for the explanation. It seems a pity to make queries just nearly immutable. Do you have any interest in adding a boost parameter to clone() so they really could be immutable? We could have a single method, but if we do it I would rather do it in a separate change, since it would affect all queries as opposed to only a handful of them. Also, there is some benefit in keeping clone() and setBoost(), in that cloning and setters are familiar to everyone; if we replace them with a new method, we would need to specify its semantics. (Not a blocker, just wanted to mention what the pros/cons are in my opinion.) -- Adrien
Re: 5.1 branch created
Tim, I think it is really important to get https://issues.apache.org/jira/browse/LUCENE-6271 in for 5.1. The postings refactorings that happened for 5.1 have left the semantics of getting postings information in an odd state. There is a branch there to fix all the uses within lucene/solr/tests, which I am working on now (sorry, I've been MIA for a while). The danger of not releasing with this fixed is that we are stuck with the crazy semantics indefinitely (even changing the semantics of just the return value in a major upgrade seems bad IMO, since it is surprising, vs. a compile break). Thanks Ryan On Tue, Mar 31, 2015 at 3:05 AM, Uwe Schindler u...@thetaphi.de wrote: Hi, I enabled the Jenkins runs for the 5.1 release branch: - Policeman Jenkins standard randomized test run - ASF Jenkins artifacts builds - ASF Jenkins release smoker Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Timothy Potter [mailto:thelabd...@gmail.com] Sent: Tuesday, March 31, 2015 4:58 AM To: lucene dev Subject: 5.1 branch created The 5.1 branch has been created - https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_1/ Here's a friendly reminder (from the wiki) on the agreed process for a minor release: * No new features may be committed to the branch. * Documentation patches, build patches and serious bug fixes may be committed to the branch. However, you should submit all patches you want to commit to Jira first to give others the chance to review and possibly vote against the patch. Keep in mind that it is our main intention to keep the branch as stable as possible. * All patches that are intended for the branch should first be committed to trunk, merged into the minor release branch, and then into the current release branch. * Normal trunk and minor release branch development may continue as usual. 
However, if you plan to commit a big change to the trunk while the branch feature freeze is in effect, think twice: can't the addition wait a couple more days? Merges of bug fixes into the branch may become more difficult. * Only Jira issues with Fix version 5.1 and priority Blocker will delay a release candidate build. FYI - We've already agreed that LUCENE-6303 should get committed to this branch when it is ready. On Mon, Mar 30, 2015 at 2:08 PM, Timothy Potter thelabd...@gmail.com wrote: I'd like to move ahead and create the 5.1 branch later today so that we can start locking down what's included in the release. I know this adds an extra merge step for you, Adrien, for LUCENE-6303, but I hope that's not too much trouble for you? Cheers, Tim On Fri, Mar 27, 2015 at 5:24 PM, Adrien Grand jpou...@gmail.com wrote: Hi Timothy, We have an issue with automatic caching in Lucene that uncovered some issues with using queries as cache keys, since some of them are mutable (including major ones like BooleanQuery and PhraseQuery). I reopened https://issues.apache.org/jira/browse/LUCENE-6303 and provided a patch to disable this feature so that we can release. I can hopefully commit it early next week. On Wed, Mar 25, 2015 at 6:17 PM, Timothy Potter thelabd...@gmail.com wrote: Hi, I'd like to create the 5.1 branch soonish, thinking maybe late tomorrow or early Friday. If I understand correctly, that implies that new features should not be added after that point without some agreement among the committers about whether they should be included? Let me know if this is too soon and when a more ideal date/time would be. 
Sincerely, Your friendly 5.1 release manager (aka thelabdude) -- Adrien
Re: [jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems
Yes and yes. The -a option for the bin/solr command passes stuff through, e.g. bin/solr start -c -z localhost:2181 -p 8981 -s example/cloud/node1/solr -a -Xmx4G -Xms4G and the like. It'd also be useful, I think, to be able to specify a local file of options. Erick On Mon, Mar 30, 2015 at 11:49 PM, Shawn Heisey (JIRA) j...@apache.org wrote: [ https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14388131#comment-14388131 ] Shawn Heisey commented on SOLR-7319: Does the bin/solr script offer a way to send an option directly to the java commandline? Should we have the ability to have a local user config script (similar to /etc/default/solr but contained within the solr download, with both shell and windows versions) to provide additional config?
RE: [DISCUSS] Change Query API to make queries immutable in 6.0
Am I missing something? Across the project, I'm seeing over 1,000 references to BooleanQuery.add(). Already, this seems like a pretty major refactoring. And I haven't checked the other types of queries: DisjunctionMax, Phrase, and MultiPhrase. At that scale, bugs will be introduced. I'm not disagreeing with the concept. At all. It's part of the Collections contract that anything used in hashCode() and equals() be kept immutable. I'm just wondering whether the cost is worth the principle this time. In the spirit of discussion, an alternate approach might be to: a. Locate the places in the code where a query is taken from the cache and modified after the fact. b. Remove the query object before modifying it and placing it back in the cache. Easier said than done, I realize. Note, changing the constructors and removing modifiers would force all of these changes anyway; it's just that they would be lost in a forest of other minor modifications. So, even if folks are OK with the larger-scale changes, it might make sense to start with the problematic places first and then move on to the bulk of the syntax changes. Please ignore this if I am missing something here. From: Terry Smith [mailto:sheb...@gmail.com] Sent: Tuesday, March 31, 2015 9:38 AM To: dev@lucene.apache.org Subject: Re: [DISCUSS] Change Query API to make queries immutable in 6.0 Adrien, I missed the reason that boost is going to stay mutable. Is this to support query rewriting? --Terry On Tue, Mar 31, 2015 at 7:21 AM, Robert Muir rcm...@gmail.com wrote: Same with BooleanQuery. The go-to ctor should just take 'clauses'. On Tue, Mar 31, 2015 at 5:18 AM, Michael McCandless luc...@mikemccandless.com wrote: +1 For PhraseQuery we could also have a common-case ctor that just takes the terms (and assumes sequential positions)? 
Mike McCandless http://blog.mikemccandless.com On Tue, Mar 31, 2015 at 5:10 AM, Adrien Grand jpou...@gmail.com wrote: Recent changes that added automatic filter caching to IndexSearcher uncovered some traps with our queries when it comes to using them as cache keys. The problem comes from the fact that some of our main queries are mutable, and modifying them while they are used as cache keys makes the entry that they are caching invisible (because the hash code changed too) yet still using memory. While I think most users would be unaffected as it is rather uncommon to modify queries after having passed them to IndexSearcher, I would like to remove this trap by making queries immutable: everything should be set at construction time except the boost parameter that could still be changed with the same clone()/setBoost() mechanism as today. First I would like to make sure that it sounds good to everyone and then to discuss what the API should look like. Most of our queries happen to be immutable already (NumericRangeQuery, TermsQuery, SpanNearQuery, etc.) but some aren't and the main exceptions are: - BooleanQuery, - DisjunctionMaxQuery, - PhraseQuery, - MultiPhraseQuery. We could take all parameters that are set as setters and move them to constructor arguments. For the above queries, this would mean (using varargs for ease of use): BooleanQuery(boolean disableCoord, int minShouldMatch, BooleanClause... clauses) DisjunctionMaxQuery(float tieBreakMul, Query... clauses) For PhraseQuery and MultiPhraseQuery, the closest to what we have today would require adding new classes to wrap terms and positions together, for instance: class TermAndPosition { public final BytesRef term; public final int position; } so that eg. PhraseQuery would look like: PhraseQuery(int slop, String field, TermAndPosition... terms) MultiPhraseQuery would be the same with several terms at the same position. Comments/ideas/concerns are highly welcome. 
-- Adrien
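The pattern under discussion — everything fixed at construction except a boost that rewriting adjusts via clone()/setBoost() — can be sketched with a toy class (illustrative names only, not the actual Lucene API):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Toy stand-in for the proposed style of query: everything except boost
// is final and supplied to the constructor (varargs for ease of use).
final class ToyQuery implements Cloneable {
    private final List<String> clauses; // fixed at construction time
    private float boost = 1.0f;         // the one remaining mutable field

    ToyQuery(String... clauses) {
        this.clauses = Collections.unmodifiableList(Arrays.asList(clauses.clone()));
    }

    List<String> getClauses() { return clauses; }
    float getBoost() { return boost; }
    void setBoost(float boost) { this.boost = boost; }

    // clone() + setBoost() is the mechanism rewriting uses to change a
    // boost without modifying the (possibly cached) original instance.
    @Override
    public ToyQuery clone() {
        try {
            return (ToyQuery) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen, we implement Cloneable
        }
    }
}
```

In this sketch, rewriting boosts a clone, so a caller who handed the original to IndexSearcher never observes a change.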
About Apache Solr Contribution
Dev Team, I am currently working with Apache Solr. While using the Apache Solr suggestion module, an idea came to my mind that I want to contribute to the Solr project. While using Solr we pass requests and receive responses over REST on HTTP; why don't we use WebSockets? Rather than sending a request each time the user types, we could create a WebSocket connection. In short, I would like to provide a WebSocket interface to the suggestion module. I am not a contributor to any other Apache project and am unfamiliar with development in the Apache ecosystem, so I would welcome your suggestions regarding this. Regards, Luqman Ul Khair
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
Hi Terry, Indeed this is for query rewriting. For instance, if you have a boolean query with a boost of 5 that wraps a single MUST clause with a term query, then we rewrite this to the inner term query and update its boost using clone() and setBoost(), in order not to modify in place a user-supplied query. On Tue, Mar 31, 2015 at 3:37 PM, Terry Smith sheb...@gmail.com wrote: Adrien, I missed the reason that boost is going to stay mutable. Is this to support query rewriting? --Terry -- Adrien
Re: [DISCUSS] Change Query API to make queries immutable in 6.0
Adrien, Thanks for the explanation. It seems a pity to make queries just nearly immutable. Do you have any interest in adding a boost parameter to clone() so they really could be immutable? --Terry On Tue, Mar 31, 2015 at 9:44 AM, Adrien Grand jpou...@gmail.com wrote: Hi Terry, Indeed this is for query rewriting. For instance, if you have a boolean query with a boost of 5 that wraps a single MUST clause with a term query, then we rewrite this to the inner term query and update its boost using clone() and setBoost(), in order not to modify in place a user-supplied query. -- Adrien
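The trap motivating the whole change can be reproduced with any mutable hash key. This toy class (not real Lucene code) shows how mutating a key after insertion strands the HashMap entry — invisible to lookups, yet still using memory:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal mutable "query" used as a cache key; its hashCode depends on
// the mutable boost, just as the mutable Lucene queries' hashCodes do.
final class MutableToyQuery {
    float boost = 1.0f;

    @Override public int hashCode() { return Float.hashCode(boost); }

    @Override public boolean equals(Object o) {
        return o instanceof MutableToyQuery
            && ((MutableToyQuery) o).boost == boost;
    }
}
```

Mutating the key after `put` changes its hash code, so `get` can no longer find the entry, but the map still holds it — the "invisible yet still using memory" cache entry from the proposal.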
Re: [jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems
We should actually fix the script to pass any args that begin with -X straight through to the JVM so we don't need the -a part, i.e. bin/solr start -XX:+PerfDisableSharedMem vs. bin/solr start -a -XX:+PerfDisableSharedMem On Tue, Mar 31, 2015 at 6:43 AM, Erick Erickson erickerick...@gmail.com wrote: Yes and yes. The -a option for the bin/solr command passes stuff through, e.g. bin/solr start -c -z localhost:2181 -p 8981 -s example/cloud/node1/solr -a -Xmx4G -Xms4G and the like. It'd also be useful, I think, to be able to specify a local file of options. Erick
[jira] [Commented] (SOLR-7325) Change Slice state into enum
[ https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14390016#comment-14390016 ] ASF subversion and git services commented on SOLR-7325: --- Commit 1670566 from [~shaie] in branch 'dev/trunk' [ https://svn.apache.org/r1670566 ] SOLR-7325: Change Slice state into enum

Change Slice state into enum Key: SOLR-7325 URL: https://issues.apache.org/jira/browse/SOLR-7325 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Shai Erera Assignee: Shai Erera Attachments: SOLR-7325.patch, SOLR-7325.patch, SOLR-7325.patch

Slice state is currently interacted with as a string. It is, IMO, not trivial to understand which values it can be compared to, partly because the Replica and Slice states are located in different classes, some repeating the same constant names and values. Also, it's not very clear when a Slice gets into which state and what that means. I think that if it's an enum, documented briefly in the code, it would be easier to interact with through code. I don't mind if we include more extensive documentation in the reference guide / wiki and refer people there for more details.
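As a rough illustration of the direction (the constant names below are assumptions for the sketch, not necessarily what the committed patch uses), a string-backed state enum might look like:

```java
import java.util.Locale;

// Illustrative sketch only: replacing string state constants with an
// enum gives typo-proof comparisons while still round-tripping through
// the lower-case strings stored in cluster state.
enum SliceState {
    ACTIVE, INACTIVE, CONSTRUCTION, RECOVERY;

    // Parse the lower-case form stored in cluster state.
    static SliceState fromString(String s) {
        return valueOf(s.toUpperCase(Locale.ROOT));
    }

    // Serialize back to the lower-case form.
    @Override
    public String toString() {
        return name().toLowerCase(Locale.ROOT);
    }
}
```

Comparisons then become `state == SliceState.ACTIVE` instead of fragile string equality, and the compiler flags any unknown state name.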