[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Anshum Gupta updated SOLR-7692:
Attachment: SOLR-7692.patch

BasicAuthPlugin no longer extends {{javax.servlet.Filter}}.

Implement BasicAuth based impl for the new Authentication/Authorization APIs
Key: SOLR-7692
URL: https://issues.apache.org/jira/browse/SOLR-7692
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch

This involves various components.

h2. Authentication

A basic-auth-based authentication filter. It should retrieve the user credentials from ZK. The user name and the SHA-1 hash of the password should be stored in ZK.

Sample authentication json:

{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98buygujkjnlk",
      "david": "f678njfgfjnklnoiuy9865ty",
      "pete": "87ykjnklndfhjh898uyiy98"
    }
  }
}
{code}

h2. Authorization plugin

This would store the roles of the various users and their privileges in ZK.

Sample authorization.json:

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collection-edit": {"role": "admin"},
      "coreadmin": {"role": "admin"},
      "config-edit": {"role": "admin", "method": "POST"},   // all collections
      "schema-edit": {"role": "admin", "method": "POST"},
      "update": {"role": "dev"},                            // all collections
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["somebody"]
      }
    }
  }
}
{code}

We will also need to provide APIs to create users and assign them roles.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
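The issue text says the user name and the SHA-1 hash of the password are stored in ZK. Producing such a hash is a one-liner; here is a minimal sketch (the hex-digest encoding and the absence of a salt are assumptions for illustration, the issue does not specify either):

```python
import hashlib

def sha1_hex(password: str) -> str:
    """Return the SHA-1 hex digest of a UTF-8 encoded password."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

# A stored users map would then pair each name with such a digest, e.g.:
users = {"john": sha1_hex("johns-password")}  # password here is hypothetical
```

Worth noting that unsalted SHA-1 is weak for password storage; the BasicAuthPlugin that eventually shipped moved to a salted SHA-256 scheme, but the sketch follows the issue text as written.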
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: SOLR-7227-part2.patch

One more obsolete property. I also added extractWAR=false to the context descriptor: this prevents Jetty from creating the temporary folder that left the checkout dirty. This helps with read-only filesystems, too. Seems ready to commit now.

I noticed one thing: the inner folder solr-webapp/webapp is somehow obsolete; we could move the whole thing up one level (remove the inner webapp folder). This should maybe be a separate issue/commit, because it affects script files, too.

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch

Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, thus taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on the fly upon first startup.
[jira] [Created] (LUCENE-6699) Integrate lat/lon BKD and spatial3d
Michael McCandless created LUCENE-6699:
Summary: Integrate lat/lon BKD and spatial3d
Key: LUCENE-6699
URL: https://issues.apache.org/jira/browse/LUCENE-6699
Project: Lucene - Core
Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless

I'm opening this for discussion, because I'm not yet sure how to do this integration, because of my ignorance about spatial in general and spatial3d in particular :)

Our BKD tree impl is very fast at doing lat/lon shape intersection (bbox, polygon, soon distance: LUCENE-6698) against previously indexed points.

I think to integrate with spatial3d, we would first need to record lat/lon/z into doc values. Somewhere I saw discussion about how we could stuff all 3 into a single long value with acceptable precision loss? Or, we could use BinaryDocValues? We need all 3 dims available to do the fast per-hit query-time filtering.

But, second: what do we index into the BKD tree? Can we just index earth-surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape? Or ... must we index all 3 dimensions into the BKD tree (which seems like it could be somewhat wasteful)?
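The "stuff all 3 into a single long value with acceptable precision loss" idea amounts to quantizing each dimension and bit-packing. A toy sketch, assuming unit-sphere coordinates in [-1, 1] and 21 bits per dimension (the bit widths and ranges are illustrative assumptions, not what Lucene ended up doing):

```python
# Pack three coordinates into one 64-bit long: 21 bits each, 63 bits total.
BITS = 21
SCALE = (1 << BITS) - 1  # 2097151 quantization steps per dimension

def quantize(v: float) -> int:
    """Map v in [-1, 1] to an integer in [0, 2^21 - 1] (lossy)."""
    return int(round((v + 1.0) / 2.0 * SCALE))

def dequantize(q: int) -> float:
    return q / SCALE * 2.0 - 1.0

def pack(x: float, y: float, z: float) -> int:
    return (quantize(x) << (2 * BITS)) | (quantize(y) << BITS) | quantize(z)

def unpack(packed: int) -> tuple:
    x = dequantize((packed >> (2 * BITS)) & SCALE)
    y = dequantize((packed >> BITS) & SCALE)
    z = dequantize(packed & SCALE)
    return x, y, z
```

The round-trip error per dimension is bounded by half a quantization step, about 5e-7 here, which is the "acceptable precision loss" being traded for a single-long doc value.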
[jira] [Comment Edited] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14643965#comment-14643965 ]

Uwe Schindler edited comment on SOLR-7227 at 7/28/15 7:26 AM:
Patch removing webapp WAR build completely. I also renamed some targets.

was (Author: thetaphi): Path removing webapp build completely. I also renamed some targets.

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: SOLR-7227-part2.patch

Removal of more useless stuff.

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch
[jira] [Comment Edited] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14643994#comment-14643994 ]

Ishan Chattopadhyaya edited comment on SOLR-7692 at 7/28/15 7:38 AM:
The SOLR-7757 patch still had compilation errors, since it depended on getValueMap() of RuleBasedAuthorizationPlugin, which is present in SOLR-7692. Fixed that, and added patches based on the latest patches by Anshum. The SOLR-7757 patch compiles fine now, in isolation.

was (Author: ichattopadhyaya): The SOLR-7757 patch still had compilation errors, since it depended on getValueMap() of RuleBasedAuthorizationPlugin, which is present in SOLR-7692. Fixed that, and added patches based on the latest patches by Anshum. SOLR-7757 patch applies in isolation.

Implement BasicAuth based impl for the new Authentication/Authorization APIs
Key: SOLR-7692
URL: https://issues.apache.org/jira/browse/SOLR-7692
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: SOLR-7227-part2.patch

I accidentally removed an additional webapp folder; reverted.

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch
[jira] [Commented] (LUCENE-6697) Use 1D KD tree for alternative to postings based numeric range filters
[ https://issues.apache.org/jira/browse/LUCENE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644102#comment-14644102 ]

Michael McCandless commented on LUCENE-6697:

{quote}
It should also easily generalize beyond 64 bits to arbitrary byte[], e.g. for LUCENE-5596, but I haven't explored that here.
{quote}

I started to explore this, since it's an easy fork for the 1D numeric case, but I was stopped dead in my tracks when I tried to add the doc values integration ... I'm trying to wrap SortedSetDocValues, and unfortunately the iterables passed to addSortedSetField don't let me quickly look up the byte[] for each doc ... it's like I somehow need to pull a DocValuesProducer at write time ...

Use 1D KD tree for alternative to postings based numeric range filters
Key: LUCENE-6697
URL: https://issues.apache.org/jira/browse/LUCENE-6697
Project: Lucene - Core
Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
Attachments: LUCENE-6697.patch

Today Lucene uses postings to index a numeric value at multiple precision levels for fast range searching. It's somewhat costly: each numeric value is indexed with multiple terms (4 terms by default) ... I think a dedicated 1D BKD tree should be more compact and perform better. It should also easily generalize beyond 64 bits to arbitrary byte[], e.g. for LUCENE-5596, but I haven't explored that here.

A 1D BKD tree just sorts all values, and then indexes adjacent leaf blocks of size 512-1024 (by default) values per block, and their docIDs, into a fully balanced binary tree. Building the range filter is then just a recursive walk through this tree. It's the same structure we use for the 2D lat/lon BKD tree, just with 1D instead. I implemented it as a DocValuesFormat that also writes the numeric tree on the side.
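The 1D structure described above ("sort all values, store adjacent leaf blocks of values plus docIDs, walk only the blocks that overlap the range") can be sketched in a few lines. This is a toy in-memory analogue for illustration, not Lucene's actual on-disk implementation, and the tiny leaf size is an assumption:

```python
import bisect

class Simple1DTree:
    """Toy analogue of a 1D BKD tree: values are sorted once, stored in
    fixed-size leaf blocks, and a range query visits only the blocks whose
    minimum value could overlap the query range."""

    def __init__(self, value_doc_pairs, leaf_size=4):
        pairs = sorted(value_doc_pairs)  # sort by value, as BKD does
        self.leaves = [pairs[i:i + leaf_size]
                       for i in range(0, len(pairs), leaf_size)]
        self.leaf_mins = [leaf[0][0] for leaf in self.leaves]

    def range_query(self, lo, hi):
        """Return docIDs whose value v satisfies lo <= v <= hi."""
        docs = []
        # locate the first leaf block that could contain lo
        start = max(0, bisect.bisect_right(self.leaf_mins, lo) - 1)
        for leaf in self.leaves[start:]:
            if leaf[0][0] > hi:
                break  # all later leaves are entirely out of range
            for value, doc in leaf:
                if lo <= value <= hi:
                    docs.append(doc)
        return docs
```

The real tree replaces the linear `leaf_mins` scan with a fully balanced binary tree over the block boundaries and streams the blocks from disk, but the recursion bottoms out in the same "collect docIDs from overlapping leaves" step.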
[jira] [Commented] (LUCENE-6697) Use 1D KD tree for alternative to postings based numeric range filters
[ https://issues.apache.org/jira/browse/LUCENE-6697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644103#comment-14644103 ]

Michael McCandless commented on LUCENE-6697:

bq. I was stopped dead in my tracks

Hmm, actually, I think I should simply use the ords, not the byte[] values, and then it will work well!

Use 1D KD tree for alternative to postings based numeric range filters
Key: LUCENE-6697
URL: https://issues.apache.org/jira/browse/LUCENE-6697
Project: Lucene - Core
Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
Attachments: LUCENE-6697.patch
[jira] [Commented] (LUCENE-6590) Explore different ways to apply boosts
[ https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644145#comment-14644145 ]

Adrien Grand commented on LUCENE-6590:
Sorry Paul, I don't understand your last comment?

Explore different ways to apply boosts
Key: LUCENE-6590
URL: https://issues.apache.org/jira/browse/LUCENE-6590
Project: Lucene - Core
Issue Type: Wish
Reporter: Adrien Grand
Priority: Minor
Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch

Follow-up from LUCENE-6570: the fact that all queries are mutable in order to allow for applying a boost raises issues, since it makes queries bad cache keys: their hashcode can change anytime. We could just document that queries should never be modified after they have gone through IndexSearcher, but it would be even better if the API made queries impossible to mutate at all. I think there are two main options:
- either replace void setBoost(boost) with something like Query withBoost(boost), which would return a clone that has a different boost
- or move boost handling outside of Query; for instance, we could have an (immutable) query impl dedicated to applying boosts, which queries that need to change boosts at rewrite time (such as BooleanQuery) would use as a wrapper.

The latter idea is from Robert and I like it a lot, given how often I either introduced or found a bug that was due to the boost parameter being ignored. Maybe there are other options, but I think this is worth exploring.
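The first option above, replacing in-place mutation with a method that returns a modified copy, is the standard immutable-value pattern. A minimal sketch (the class and field names are illustrative, not Lucene's actual Query API):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)  # frozen: instances are immutable and hashable
class TermQuery:
    field: str
    term: str
    boost: float = 1.0

    def with_boost(self, boost: float) -> "TermQuery":
        """Return a copy with a different boost; self is left untouched."""
        return replace(self, boost=boost)

q = TermQuery("body", "lucene")
boosted = q.with_boost(2.0)
# q is unchanged, so its hashcode is stable and it stays a safe cache key
```

Because `with_boost` never mutates `q`, a cache keyed on `q` can never be corrupted by later boost changes, which is exactly the property the issue is after.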
[jira] [Commented] (LUCENE-6531) Make PhraseQuery immutable
[ https://issues.apache.org/jira/browse/LUCENE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644164#comment-14644164 ]

ASF subversion and git services commented on LUCENE-6531:
Commit 1693059 from [~jpountz] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1693059 ]
LUCENE-6531: Make PhraseQuery immutable.

Make PhraseQuery immutable
Key: LUCENE-6531
URL: https://issues.apache.org/jira/browse/LUCENE-6531
Project: Lucene - Core
Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
Fix For: 6.0
Attachments: LUCENE-6531.patch, LUCENE-6531.patch

Mutable queries are an issue for automatic filter caching, since modifying a query after it has been put into the cache will corrupt the cache. We should make all queries immutable (up to the boost) to avoid this issue.
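The cache-corruption failure mode described in the issue is easy to demonstrate with any hash-based cache: mutate a key after insertion and lookups silently miss. A generic illustration (not Solr/Lucene code):

```python
class MutableQuery:
    """A deliberately mutable 'query' whose hash depends on its state."""
    def __init__(self, term):
        self.term = term
    def __hash__(self):
        return hash(self.term)
    def __eq__(self, other):
        return isinstance(other, MutableQuery) and self.term == other.term

cache = {}
q = MutableQuery("lucene")
cache[q] = "cached result"

q.term = "solr"    # mutate the key after it was inserted ...
print(q in cache)  # prints False: the entry is stranded in the old hash bucket
```

The entry is still in the dict (its memory is not reclaimed), but neither the mutated key nor a fresh `MutableQuery("lucene")` can retrieve it, which is why immutable keys are the only safe option for a query cache.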
[jira] [Updated] (LUCENE-6531) Make PhraseQuery immutable
[ https://issues.apache.org/jira/browse/LUCENE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand updated LUCENE-6531:
Fix Version/s: 5.3

Make PhraseQuery immutable
Key: LUCENE-6531
URL: https://issues.apache.org/jira/browse/LUCENE-6531
Project: Lucene - Core
Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
Fix For: 5.3, 6.0
Attachments: LUCENE-6531.patch, LUCENE-6531.patch
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: (was: SOLR-7227-part2.patch)

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: SOLR-7227-part2.patch

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14643938#comment-14643938 ]

Uwe Schindler commented on SOLR-7227:
Hi, just one question: why do we create the WAR file at all in the build.xml of the webapps module? Currently we create it and delete it in the same ANT target, right after extracting! I would simply replace the {{war/}} task with a {{copy/}} task. Some attributes, like the path to the web.xml, need to be handled differently (the copy task does not have them), but otherwise the ANT war task does nothing special. The manifest is useless.

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227.patch, SOLR-7227.patch
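Replacing the {{war/}} task with a plain {{copy/}} as suggested could look roughly like this. The target name, property names, and paths below are assumptions for illustration, not Solr's actual build.xml:

```xml
<!-- Sketch: copy webapp content directly instead of building and
     extracting a WAR. ${webapp.dir} and ${server.dir} are hypothetical. -->
<target name="dist-webapp">
  <copy todir="${server.dir}/solr-webapp/webapp">
    <fileset dir="${webapp.dir}/web"/>
  </copy>
  <!-- the war task's webxml attribute has no copy equivalent, so the
       deployment descriptor is copied into WEB-INF explicitly -->
  <copy file="${webapp.dir}/web.xml"
        todir="${server.dir}/solr-webapp/webapp/WEB-INF"/>
</target>
```

Unlike `<war>`, `<copy>` writes no MANIFEST.MF, which is fine here since, as noted above, the manifest is useless for the extracted layout.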
[jira] [Commented] (LUCENE-6531) Make PhraseQuery immutable
[ https://issues.apache.org/jira/browse/LUCENE-6531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644173#comment-14644173 ]

ASF subversion and git services commented on LUCENE-6531:
Commit 1693060 from [~jpountz] in branch 'dev/trunk' [ https://svn.apache.org/r1693060 ]
LUCENE-6531: backported to 5.3.

Make PhraseQuery immutable
Key: LUCENE-6531
URL: https://issues.apache.org/jira/browse/LUCENE-6531
Project: Lucene - Core
Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
Fix For: 5.3, 6.0
Attachments: LUCENE-6531.patch, LUCENE-6531.patch
[jira] [Commented] (SOLR-7757) Create a framework to edit/reload security params
[ https://issues.apache.org/jira/browse/SOLR-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14643997#comment-14643997 ]

Ishan Chattopadhyaya commented on SOLR-7757:
The latest patch for this is attached to SOLR-7692, here: https://issues.apache.org/jira/secure/attachment/12747513/SOLR-7757.patch

Create a framework to edit/reload security params
Key: SOLR-7757
URL: https://issues.apache.org/jira/browse/SOLR-7757
Project: Solr
Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

We should have a standard mechanism which security plugins can use to edit and reload their configuration. This will involve Solr watching {{/security.json}} and giving callbacks to the plugins. It will also create standard endpoints for REST-like APIs for each plugin. Each plugin will be able to define the payload, verify it, modify the config, etc.
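The watch-and-callback mechanism described above (Solr watching {{/security.json}} and notifying each plugin of its own section) can be sketched generically. The class and method names below are hypothetical, not the actual SOLR-7757 API, and the ZooKeeper watch is replaced by a plain method call:

```python
import json

class SecurityConfigWatcher:
    """Toy sketch: hold the current security config and notify registered
    plugin callbacks whenever a new version arrives. In Solr this would be
    driven by a ZooKeeper watch on /security.json."""

    def __init__(self):
        self.config = {}
        self.listeners = []  # (section name, callable taking that section)

    def register(self, section, callback):
        """A plugin subscribes to its own top-level section of the config."""
        self.listeners.append((section, callback))

    def on_change(self, raw_json):
        """Called when /security.json changes; fan out per-section configs."""
        self.config = json.loads(raw_json)
        for section, callback in self.listeners:
            if section in self.config:
                callback(self.config[section])

watcher = SecurityConfigWatcher()
seen = []
watcher.register("authentication", seen.append)
watcher.on_change('{"authentication": {"class": "solr.BasicAuthPlugin"}}')
```

Each plugin only ever sees its own section, which matches the issue's intent that every plugin defines, verifies, and applies its own payload.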
[JENKINS-EA] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13453 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13453/
Java: 32bit/jdk1.9.0-ea-b60 -server -XX:+UseG1GC -Djava.locale.providers=JRE,SPI

1 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]
	at __randomizedtesting.SeedInfo.seed([F1A39F115C983E6D]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.junit.Assert.assertNull(Assert.java:551)
	at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:502)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
	at java.lang.Thread.run(Thread.java:745)

Build Log:
[...truncated 10190 lines...]
[junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
[junit4]   2> Creating dataDir: /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_F1A39F115C983E6D-001/init-core-data-001
[junit4]   2> 9308 INFO (SUITE-HttpPartitionTest-seed#[F1A39F115C983E6D]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
[junit4]   2> 9316 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4]   2> 9317 INFO (Thread-43) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4]   2> 9318 INFO (Thread-43) [] o.a.s.c.ZkTestServer Starting server
[junit4]   2> 9417 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.ZkTestServer start zk server on port:52919
[junit4]   2> 9429 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
[junit4]   2> 9462 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4]   2> 9496 INFO (zkCallback-9-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@10a8ec5 name:ZooKeeperConnection Watcher:127.0.0.1:52919 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
[junit4]   2> 9497 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4]   2> 9497 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
[junit4]   2> 9498 INFO (TEST-HttpPartitionTest.test-seed#[F1A39F115C983E6D]) [
[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-7692: --- Attachment: SOLR-7757.patch SOLR-7692.patch

The SOLR-7757 patch still had compilation errors, since it depended on getValueMap() of RuleBasedAuthorizationPlugin, which is present in SOLR-7692. Fixed that, and added patches based on the latest patches by Anshum. The SOLR-7757 patch applies in isolation.

Implement BasicAuth based impl for the new Authentication/Authorization APIs
Key: SOLR-7692
URL: https://issues.apache.org/jira/browse/SOLR-7692
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch

This involves various components:

h2. Authentication

A basic-auth-based authentication filter. It should retrieve the user credentials from ZK; the user name and the SHA-1 hash of the password should be stored in ZK.

Sample authentication JSON:

{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98buygujkjnlk",
      "david": "f678njfgfjnklnoiuy9865ty",
      "pete": "87ykjnklndfhjh898uyiy98"
    }
  }
}
{code}

h2. Authorization plugin

This would store the roles of the various users and their privileges in ZK.

Sample authorization.json:

{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collection-edit": {"role": "admin"},
      "coreadmin": {"role": "admin"},
      "config-edit": {"role": "admin", "method": "POST"},  // all collections
      "schema-edit": {"role": "admin", "method": "POST"},
      "update": {"role": "dev"},                           // all collections
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["somebody"]
      }
    }
  }
}
{code}

We will also need to provide APIs to create users and assign them roles.
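The description above stores only a hash of each password in ZK, never the plaintext. A minimal sketch of building such a users entry (the `credential_entry` helper is hypothetical and unsalted, not Solr's actual scheme):

```python
import hashlib

def credential_entry(username, password):
    # Hypothetical helper: map the user name to the SHA-1 hex digest
    # of the password, so the plaintext never reaches ZooKeeper.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
    return {username: digest}

entry = credential_entry("john", "s3cret")
```

Verifying a login then reduces to hashing the presented password and comparing digests.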
[jira] [Created] (SOLR-7837) Implement BasicAuth Authentication Plugin
Noble Paul created SOLR-7837: Summary: Implement BasicAuth Authentication Plugin Key: SOLR-7837 URL: https://issues.apache.org/jira/browse/SOLR-7837 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul
[jira] [Updated] (SOLR-7757) Create a framework to edit/reload security params
[ https://issues.apache.org/jira/browse/SOLR-7757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anshum Gupta updated SOLR-7757: --- Attachment: SOLR-7757.patch Moving the last patch from SOLR-7692 that was meant for this issue here. Create a framework to edit/reload security params - Key: SOLR-7757 URL: https://issues.apache.org/jira/browse/SOLR-7757 Project: Solr Issue Type: Sub-task Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-7757.patch We should have a standard mechanism which security plugins can use to edit/reload their config. This will involve Solr watching {{/security.json}} and giving callbacks to the plugins. It will also create standard endpoints for REST-like APIs for each plugin. Each plugin will be able to define the payload, verify it, modify the config, etc.
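The watch-and-callback mechanism described above can be sketched roughly as follows (class and method names are assumptions for illustration, not Solr's actual API): a watcher parses the changed {{/security.json}} and hands each registered plugin its own section of the config.

```python
import json

class SecurityConfigWatcher:
    """Minimal sketch: dispatch /security.json changes to registered plugins."""
    def __init__(self):
        self.plugins = {}            # config section name -> plugin object

    def register(self, section, plugin):
        self.plugins[section] = plugin

    def on_change(self, raw_json):
        # Invoked whenever the watched /security.json node changes.
        config = json.loads(raw_json)
        for section, plugin in self.plugins.items():
            if section in config:
                plugin.reload(config[section])

class BasicAuthPlugin:
    def __init__(self):
        self.users = {}
    def reload(self, section):       # callback with this plugin's section
        self.users = section.get("credentials", {})

watcher = SecurityConfigWatcher()
auth = BasicAuthPlugin()
watcher.register("authentication", auth)
watcher.on_change('{"authentication": {"credentials": {"john": "abc123"}}}')
```

Each plugin only ever sees its own section, which keeps the reload contract per-plugin as the issue proposes.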
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-7227: Attachment: SOLR-7227-part2.patch Patch removing the webapp build completely. I also renamed some targets. Solr distribution archive should have the WAR file extracted already Key: SOLR-7227 URL: https://issues.apache.org/jira/browse/SOLR-7227 Project: Solr Issue Type: Improvement Affects Versions: 5.0 Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.3, Trunk Attachments: SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on the fly upon first startup.
[jira] [Updated] (LUCENE-6664) Replace SynonymFilter with SynonymGraphFilter
[ https://issues.apache.org/jira/browse/LUCENE-6664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless updated LUCENE-6664: --- Attachment: LUCENE-6664.patch New patch with Rob's idea: I made the new SynonymGraphFilter and SausageFilter package private, and replaced the old SynonymFilter with these two filters. But TestSynonymMapFilter (the existing unit test) fails, because there are some changes in behavior with the new filter: * Syn output order is different: with the new syn filter, the syn comes out before the original token. This is necessary to ensure offsets never go backwards... * When there are more output tokens for a syn than input tokens, then new syn filter makes new positions for the extra tokens, but the old one didn't. * The new syn filter does more captureState() calls I think we need to keep the old behavior available, maybe using a Version constant or a separate class (SynFilterPre53, LegacySynFilter) or something? Replace SynonymFilter with SynonymGraphFilter - Key: LUCENE-6664 URL: https://issues.apache.org/jira/browse/LUCENE-6664 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless Fix For: 5.3, Trunk Attachments: LUCENE-6664.patch, LUCENE-6664.patch, LUCENE-6664.patch, usa.png, usa_flat.png Spinoff from LUCENE-6582. I created a new SynonymGraphFilter (to replace the current buggy SynonymFilter), that produces correct graphs (does no graph flattening itself). I think this makes it simpler. This means you must add the FlattenGraphFilter yourself, if you are applying synonyms during indexing. Index-time syn expansion is a necessarily lossy graph transformation when multi-token (input or output) synonyms are applied, because the index does not store {{posLength}}, so there will always be phrase queries that should match but do not, and then phrase queries that should not match but do. 
http://blog.mikemccandless.com/2012/04/lucenes-tokenstreams-are-actually.html goes into detail about this. However, with this new SynonymGraphFilter, if instead you do synonym expansion at query time (and don't do the flattening), and you use TermAutomatonQuery (future: somehow integrated into a query parser), or maybe just enumerate all paths and make a union of PhraseQuery, you should get 100% correct matches (not sure about proper scoring though...). This new syn filter still cannot consume an arbitrary graph.
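The "enumerate all paths and make a union of PhraseQuery" option mentioned above can be pictured with a toy token lattice (this is a simplified model, not Lucene's TokenStream/posLength machinery): each edge is a token, multi-token synonyms add alternative edges, and every start-to-end path becomes one phrase.

```python
def enumerate_paths(graph, start, end):
    """graph: {node: [(token, next_node), ...]} — a token lattice where a
    synonym contributes an alternative edge. Returns every token sequence
    from start to end; each would become one PhraseQuery in the union."""
    if start == end:
        return [[]]
    paths = []
    for token, nxt in graph.get(start, []):
        for rest in enumerate_paths(graph, nxt, end):
            paths.append([token] + rest)
    return paths

# "fast wi fi" with the synonym wifi -> "wi fi": two edges leave node 1.
graph = {
    0: [("fast", 1)],
    1: [("wifi", 3), ("wi", 2)],
    2: [("fi", 3)],
}
paths = enumerate_paths(graph, 0, 3)
# → [['fast', 'wifi'], ['fast', 'wi', 'fi']]
```

Because both paths survive (no flattening), a phrase query against either variant can match exactly, which is the correctness win described for query-time expansion.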
[jira] [Created] (SOLR-7838) Implement a RuleBasedAuthorizationPlugin
Noble Paul created SOLR-7838: Summary: Implement a RuleBasedAuthorizationPlugin Key: SOLR-7838 URL: https://issues.apache.org/jira/browse/SOLR-7838 Project: Solr Issue Type: Sub-task Reporter: Noble Paul
[jira] [Updated] (LUCENE-6590) Explore different ways to apply boosts
[ https://issues.apache.org/jira/browse/LUCENE-6590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-6590: - Attachment: LUCENE-6590.patch Iterated on the patch to make our queries return `super.rewrite` instead of `this` when they have exhausted their rewrite rules, to prepare for the backport. This way we could nicely handle backward compatibility by fixing the base Query.rewrite to return a BoostQuery around itself when the boost is not 1. Explore different ways to apply boosts -- Key: LUCENE-6590 URL: https://issues.apache.org/jira/browse/LUCENE-6590 Project: Lucene - Core Issue Type: Wish Reporter: Adrien Grand Priority: Minor Attachments: LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch, LUCENE-6590.patch Follow-up from LUCENE-6570: the fact that all queries are mutable in order to allow for applying a boost raises issues, since it makes queries bad cache keys: their hashcode can change at any time. We could just document that queries should never be modified after they have gone through IndexSearcher, but it would be even better if the API made queries impossible to mutate at all. I think there are two main options: either replace void setBoost(boost) with something like Query withBoost(boost), which would return a clone that has a different boost; or move boost handling outside of Query, for instance with an (immutable) query impl dedicated to applying boosts, which queries that need to change boosts at rewrite time (such as BooleanQuery) would use as a wrapper. The latter idea is from Robert and I like it a lot, given how often I either introduced or found a bug which was due to the boost parameter being ignored. Maybe there are other options, but I think this is worth exploring.
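Both options in the issue amount to making boost changes produce new immutable objects rather than mutating a query in place. A minimal sketch of the two shapes (class and method names are illustrative, not Lucene's API):

```python
from dataclasses import dataclass, replace

# Option 1: boost lives inside an immutable query; "changing" it returns
# a copy. frozen=True keeps the hash stable, so the query remains a safe
# cache key — the exact concern raised in the issue.
@dataclass(frozen=True)
class TermQuery:
    field: str
    term: str
    boost: float = 1.0

    def with_boost(self, boost: float) -> "TermQuery":
        return replace(self, boost=boost)

# Option 2: boost handling moved out of Query into a dedicated
# immutable wrapper around an unboosted query.
@dataclass(frozen=True)
class BoostQuery:
    query: TermQuery
    boost: float

q = TermQuery("title", "lucene")
boosted = q.with_boost(2.0)    # q itself is untouched
wrapped = BoostQuery(q, 2.0)   # q reused, boost kept outside
```

Option 2 has the advantage the issue credits to Robert: a boost can never be silently ignored by a query implementation, because it is no longer a field every Query must remember to honor.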
[jira] [Resolved] (SOLR-7845) sum should treat NULL as 0
[ https://issues.apache.org/jira/browse/SOLR-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man resolved SOLR-7845. Resolution: Not A Problem This is working as designed. The change in behavior is due to a bug that was fixed in the underlying math ValueSources in 5.0, and explicitly called out in the Solr upgrading instructions for 5.0... bq. Bugs fixed in several ValueSource functions may result in different behavior in situations where some documents do not have values for fields wrapped in other value sources. Users who want to preserve the previous behavior may need to wrap fields in the def() function. Example: changing fl=sum(fieldA,fieldB) to fl=sum(def(fieldA,0.0),def(fieldB,0.0)). See LUCENE-5961 for more details. Using the techproducts example data, the various options are easy to compare: {noformat} http://localhost:8983/solr/techproducts/select?x=id:USD&q=cat:currency&fl=id,query($x),sum(1,query($x)),sum(1,def(query($x),0)) {noformat} sum should treat NULL as 0 -- Key: SOLR-7845 URL: https://issues.apache.org/jira/browse/SOLR-7845 Project: Solr Issue Type: Bug Reporter: Bill Bell sum(0,query()) used to treat the NULL values in query as 0. It stopped working in Solr 5. Do we want to fix this? {noformat} http://localhost:8983/solr/select?hqval1=pwid:2&q=*:*&fl=pwid,$y&y=sum(0,query({!lucene%20v=$hqval1})) {noformat}
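The semantics in the upgrade note above can be modeled in a few lines (`def_` and `fsum` are toy stand-ins for Solr's def() and sum() function sources, not its implementation; a missing field value is modeled as None):

```python
def def_(value, fallback):
    """Toy model of Solr's def(): substitute a fallback when the wrapped
    source has no value for the document (modeled here as None)."""
    return fallback if value is None else value

def fsum(*values):
    # Post-5.0 behavior as described: a missing operand leaves the whole
    # sum undefined, unless the caller wraps it in def() explicitly.
    return None if any(v is None for v in values) else sum(values)

missing = None                        # e.g. query($x) matched nothing here
bare = fsum(1, missing)               # undefined — the reported "no value"
fixed = fsum(1, def_(missing, 0.0))   # the documented def() workaround
```

This is exactly the fl=sum(def(fieldA,0.0),...) rewrite the upgrade instructions recommend.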
[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 288 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/288/ No tests ran. Build Log: [...truncated 53021 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.7 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7 [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.02 sec (8.1 MB/sec) [smoker] check changes HTML... [smoker] download lucene-5.3.0-src.tgz... [smoker] 28.4 MB in 0.04 sec (668.5 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.3.0.tgz... [smoker] 65.6 MB in 0.09 sec (704.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.3.0.zip... [smoker] 75.8 MB in 0.13 sec (578.9 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-5.3.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.7... [smoker] test demo with 1.8... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.3.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.7... [smoker] test demo with 1.8... 
[smoker] got 6041 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.3.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run ant validate [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.7... [smoker] got 213 hits for query lucene [smoker] checkindex with 1.7... [smoker] generate javadocs w/ Java 7... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... [smoker] got 213 hits for query lucene [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.00 sec (30.4 MB/sec) [smoker] check changes HTML... [smoker] download solr-5.3.0-src.tgz... [smoker] 36.9 MB in 0.78 sec (47.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.3.0.tgz... [smoker] 128.3 MB in 1.27 sec (101.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.3.0.zip... [smoker] 135.9 MB in 1.31 sec (103.4 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-5.3.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-5.3.0.tgz... 
[smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 7 ... [smoker] test solr example w/ Java 7... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0-java7/solr-example.log)... [smoker] No process found for Solr node running on port 8983 [smoker] starting Solr on port 8983 from
AnalyzingInfixSuggester lookup to take Query instead of BooleanQuery?
Looking at the lookup method at https://github.com/apache/lucene-solr/blob/5767764a2b621fce76c0b0529ddde550fdc00307/lucene/suggest/src/java/org/apache/lucene/search/suggest/analyzing/AnalyzingInfixSuggester.java#L493 — the BooleanQuery contextQuery parameter could just as well be a TermQuery. Is it worth generalizing that method to take the base Query type as a parameter instead? Thanks.
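The generalization being asked about is a simple parameter-type widening: accepting the base type keeps every existing caller working while admitting new ones. A language-neutral sketch (all names here are illustrative, not the suggester's actual signature):

```python
from typing import Optional

# A minimal query class hierarchy standing in for Lucene's.
class Query: ...
class TermQuery(Query): ...
class BooleanQuery(Query): ...

def lookup(key: str, context_query: Optional[Query] = None):
    # Widened parameter: any Query now works as the filter context,
    # including a bare TermQuery — the case raised in the question.
    return (key, type(context_query).__name__ if context_query else None)

result = lookup("luc", TermQuery())
```

Existing callers passing a BooleanQuery are unaffected, since BooleanQuery is still a Query.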
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13641 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13641/ Java: 64bit/jdk1.9.0-ea-b60 -XX:-UseCompressedOops -XX:+UseSerialGC -Djava.locale.providers=JRE,SPI 1 tests failed. FAILED: org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTests Error Message: Timeout while trying to assert update logs @ collection=source_collection Stack Trace: java.lang.AssertionError: Timeout while trying to assert update logs @ collection=source_collection at __randomizedtesting.SeedInfo.seed([18EBDB3D08C1B2C7:108BAE1107CF9ACC]:0) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.assertNumberOfTlogFiles(CdcrReplicationDistributedZkTest.java:644) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTestUpdateLogSynchronisation(CdcrReplicationDistributedZkTest.java:384) at org.apache.solr.cloud.CdcrReplicationDistributedZkTest.doTests(CdcrReplicationDistributedZkTest.java:50) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at
[jira] [Created] (SOLR-7847) Implement the logic that runs examples in Java instead of in OS specific scripts (bin/solr and bin/solr.cmd)
Timothy Potter created SOLR-7847: Summary: Implement the logic that runs examples in Java instead of in OS specific scripts (bin/solr and bin/solr.cmd) Key: SOLR-7847 URL: https://issues.apache.org/jira/browse/SOLR-7847 Project: Solr Issue Type: Improvement Components: scripts and tools Reporter: Timothy Potter Assignee: Timothy Potter Off-shoot from SOLR-7043 to tackle the specific task of moving the logic that runs the examples (cloud, techproducts, etc) to Java code instead of complex OS-specific scripts. This is only one small step along the way to get SOLR-7043 resolved.
[jira] [Commented] (SOLR-7845) sum should treat NULL as 0
[ https://issues.apache.org/jira/browse/SOLR-7845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645182#comment-14645182 ] Bill Bell commented on SOLR-7845: - OK, it might have to do with query($qq, DEFAULT): http://localhost:8983/solr/select?q=*:*&fl=pwid,sum(0,query($qq,5.0))&qq={!lucene}pwid:2 This returns no default value: {noformat} <doc><str name="pwid">2KSTV</str></doc> <doc><str name="pwid">X9F6L</str></doc> <doc><str name="pwid">2N8LQ</str></doc> {noformat}
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645179#comment-14645179 ] Timothy Potter commented on SOLR-5606: -- [~smolloy] what's your proposal for how to get a list of collections? If I'm reading this statement correctly: bq. Not only is it cleaner to have the resource you want to interact with part of the URL used to access it, it would also make it much easier to integrate with standard client libraries for REST APIs. Sounds like you want the API to get a list of collections to be at endpoint {{/solr/collections}}, and when addressing a specific collection, {{/solr/collections/collection/[query|update|admin|whatever]}}. Noble proposed hanging the list of collections off of {{/solr}} to avoid this issue, which I'm fine with, and also POST'ing to {{/solr}} to create a collection (with the correct JSON params in the request body of course). Without the type parameter, we'd have to say {{/solr}} always deals with collections only and any other types have their own unique path, such as {{/solr/cores}}. If we want {{/solr/collections/}} in every path that addresses anything collection related, then I'm not going to argue ... just want to be clear. REST based Collections API -- Key: SOLR-5606 URL: https://issues.apache.org/jira/browse/SOLR-5606 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jan Høydahl Priority: Minor Fix For: Trunk For consistency reasons, the collections API (and other admin APIs) should be REST based. Spinoff from SOLR-1523
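The resource-in-the-path style being debated can be made concrete with a toy route table (the paths and handler names are illustrative, not an agreed Solr design):

```python
# Resource-oriented routing: the collection is part of the URL, and the
# HTTP method distinguishes list/create/query/update — no type parameter.
routes = {
    ("GET",  "/solr/collections"):               "list_collections",
    ("POST", "/solr/collections"):               "create_collection",
    ("GET",  "/solr/collections/{name}/query"):  "query_collection",
    ("POST", "/solr/collections/{name}/update"): "update_collection",
}

def dispatch(method, path_template):
    # A real router would match concrete paths against the templates;
    # template keys are enough to show the addressing scheme.
    return routes.get((method, path_template), "404")
```

Under the alternative ({{/solr}} plus a type parameter), all four operations would share one endpoint and be disambiguated by parameters, which is the "core admin API" style Mark objects to below.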
[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 244 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/244/ No tests ran. Build Log: [...truncated 52546 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.02 sec (8.9 MB/sec) [smoker] check changes HTML... [smoker] download lucene-6.0.0-src.tgz... [smoker] 28.1 MB in 0.04 sec (724.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.0.0.tgz... [smoker] 64.7 MB in 0.09 sec (685.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-6.0.0.zip... [smoker] 75.0 MB in 0.11 sec (704.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-6.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 5823 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.0.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.8... [smoker] got 5823 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-6.0.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run ant validate [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... 
[smoker] test demo with 1.8... [smoker] got 213 hits for query lucene [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.01 sec (25.9 MB/sec) [smoker] check changes HTML... [smoker] download solr-6.0.0-src.tgz... [smoker] 36.6 MB in 0.51 sec (71.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.0.0.tgz... [smoker] 130.3 MB in 2.01 sec (64.7 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-6.0.0.zip... [smoker] 138.3 MB in 1.51 sec (91.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-6.0.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-6.0.0.tgz... [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 8 ... [smoker] test solr example w/ Java 8... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/solr-example.log)... 
[smoker] No process found for Solr node running on port 8983 [smoker] starting Solr on port 8983 from /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8 [smoker] startup done [smoker] [smoker] Setup new core instance directory: [smoker] /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java8/server/solr/techproducts [smoker] [smoker] Creating new core 'techproducts' using command: [smoker] http://localhost:8983/solr/admin/cores?action=CREATE&name=techproducts&instanceDir=techproducts [smoker] [smoker] { [smoker] responseHeader:{ [smoker] status:0, [smoker] QTime:1433},
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645374#comment-14645374 ] Mark Miller commented on SOLR-5606: --- Honestly, having collections-related things hang off /solr/collections seems much better to me. Having everything off /solr and using a type param reminds me of the ugly core admin API. I wish I had been around when that API was added, to veto it :)
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645380#comment-14645380 ] Anshum Gupta commented on SOLR-5606: I didn't +1 a /solr endpoint. I love the bulk API + sensible REST endpoints. I'm not too concerned about PUT vs. POST, but I do want us to have clean (REST-like) endpoints.
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_51) - Build # 5081 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5081/
Java: 64bit/jdk1.8.0_51 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
Suite timeout exceeded (= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (= 720 msec).
   at __randomizedtesting.SeedInfo.seed([4D90963E3BD1F08]:0)

FAILED: org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:
Mutual exclusion failed. Found more than one task running for the same collection

Stack Trace:
java.lang.AssertionError: Mutual exclusion failed. Found more than one task running for the same collection
   at __randomizedtesting.SeedInfo.seed([4D90963E3BD1F08:8C8D36B94D4172F0]:0)
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:136)
   at org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:58)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:497)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
   at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
   at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at
[jira] [Commented] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644236#comment-14644236 ]

Ishan Chattopadhyaya commented on SOLR-7692:

bq. Let's separate out the authentication and authorization patches into different issues. As they are orthogonal, we should commit them separately.

+1. However, if we separate the authc and authz parts into different issues, the current integration test (BasicAuthIntegrationTest) will need to be rewritten, as it exercises both plugins together.
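Since BasicAuthIntegrationTest exercises the basic-auth path, it may help to recall how a client actually transmits credentials: a `Basic` scheme `Authorization` header carrying Base64 of `user:password` (RFC 7617), which is what a plugin like BasicAuthPlugin decodes and checks. A minimal sketch, not taken from the Solr patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Builds the value of the HTTP "Authorization" header for Basic auth:
    // the literal "Basic " followed by Base64("user:password").
    public static String header(String user, String password) {
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // RFC 7617's canonical example credentials:
        System.out.println(header("Aladdin", "open sesame"));
        // Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==
    }
}
```

A client would set this string on each request it sends to a secured node; the server-side plugin reverses the Base64 step and compares the credentials against what is stored in ZK.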
[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 749 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/749/

1 tests failed.

FAILED: org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=6560, name=collection5, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught exception in thread: Thread[id=6560, name=collection5, state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at https://127.0.0.1:39297/yc_suz/mw: Could not find collection : awholynewstresscollection_collection5_1
   at __randomizedtesting.SeedInfo.seed([F6FA78EB9F8891F]:0)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
   at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
   at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
   at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1086)
   at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
   at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
   at org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:894)

Build Log:
[...truncated 10488 lines...]
[junit4] Suite: org.apache.solr.cloud.CollectionsAPIDistributedZkTest
[junit4]   2> Creating dataDir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-trunk/solr/build/solr-core/test/J1/temp/solr.cloud.CollectionsAPIDistributedZkTest_F6FA78EB9F8891F-001/init-core-data-001
[junit4]   2> 1279437 INFO  (SUITE-CollectionsAPIDistributedZkTest-seed#[F6FA78EB9F8891F]-worker) [] o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (false)
[junit4]   2> 1279439 INFO  (SUITE-CollectionsAPIDistributedZkTest-seed#[F6FA78EB9F8891F]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /yc_suz/mw
[junit4]   2> 1279441 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
[junit4]   2> 1279442 INFO  (Thread-5036) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0
[junit4]   2> 1279442 INFO  (Thread-5036) [] o.a.s.c.ZkTestServer Starting server
[junit4]   2> 1279542 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.ZkTestServer start zk server on port:43713
[junit4]   2> 1279542 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
[junit4]   2> 1279543 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4]   2> 1279547 INFO  (zkCallback-193-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@448643b5 name:ZooKeeperConnection Watcher:127.0.0.1:43713 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
[junit4]   2> 1279547 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4]   2> 1279548 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider
[junit4]   2> 1279548 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.SolrZkClient makePath: /solr
[junit4]   2> 1279551 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
[junit4]   2> 1279552 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
[junit4]   2> 1279557 INFO  (zkCallback-194-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@637ab6fc name:ZooKeeperConnection Watcher:127.0.0.1:43713/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
[junit4]   2> 1279559 INFO  (TEST-CollectionsAPIDistributedZkTest.test-seed#[F6FA78EB9F8891F]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
[junit4]   2> 1279559 INFO
[jira] [Comment Edited] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644389#comment-14644389 ]

Uwe Schindler edited comment on SOLR-7227 at 7/28/15 1:55 PM:
--
New patch without the WAR special case in the smoke tester (no longer needed) -- currently untested; I have to boot Linux again.

was (Author: thetaphi): New patch without WAR special case in smoke tester (no longer needed)

Solr distribution archive should have the WAR file extracted already
Key: SOLR-7227
URL: https://issues.apache.org/jira/browse/SOLR-7227
Project: Solr
Issue Type: Improvement
Affects Versions: 5.0
Reporter: Timothy Potter
Assignee: Timothy Potter
Fix For: 5.3, Trunk
Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch

Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on-the-fly upon first startup.
[jira] [Comment Edited] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644404#comment-14644404 ]

Ishan Chattopadhyaya edited comment on SOLR-7692 at 7/28/15 2:12 PM:
-
1. In Sha256AuthenticationProvider, line 106:
{noformat}
try {
  digest = MessageDigest.getInstance("SHA-256");
} catch (NoSuchAlgorithmException e) {
  BasicAuthPlugin.log.error(e.getMessage(), e);
  return null; //should not happen
}
{noformat}
Shouldn't this throw an exception instead?

2. In the sha256() method (same place as above):
{noformat}
public static String sha256(String password, String saltKey) {
  MessageDigest digest;
  try {
    digest = MessageDigest.getInstance("SHA-256");
  } catch (NoSuchAlgorithmException e) {
    BasicAuthPlugin.log.error(e.getMessage(), e);
    return null; //should not happen
  }
  if (saltKey != null) {
    digest.reset();
    digest.update(Base64.decodeBase64(saltKey));
  }
  byte[] btPass = digest.digest(password.getBytes(StandardCharsets.UTF_8));
  digest.reset();
  btPass = digest.digest(btPass);
  return Base64.encodeBase64String(btPass);
}
{noformat}
I think we should reuse a digest instance instead of creating one via the factory method for every request, as there is significant overhead in creating a new digest algorithm instance. Reference: https://books.google.co.in/books?id=42etT_9-_9MC&pg=PT254&lpg=PT254

3. For SolrJ support, I've added SOLR-7839.

4. For internode communication, I think (please correct me if I'm wrong) the ThreadLocal approach won't work when the internode request is made from a thread pool, where the headers in the original request thread's ThreadLocal won't be accessible. I think we need something like SOLR-6625, where the request object can store the user principal / headers etc. and pass them along to the request interceptor as a context.

5. As per our discussion offline, internode requests that originate from a Solr node (not as a subrequest of a main user request) cannot be secured this way. Either each node uses its own principal/credentials to send internode requests in such cases, or there is another secure mechanism for internode requests internal to Solr (e.g. an asymmetric cryptographic mechanism such as PKI), irrespective of the authc plugins used for user requests.

was (Author: ichattopadhyaya):
1. In Sha256AuthenticationProvider, line 106
{noformat}
NoSuchAlgorithmException e) { BasicAuthPlugin.log.error(e.getMessage(), e); return null; //should not happen }
{noformat}
Shouldn't this be an exception thrown?
2. In the sha256() method, I think we should reuse a digest instance, since there are significant overheads to creating a new digest algorithm instance. Reference: https://books.google.co.in/books?id=42etT_9-_9MC&pg=PT254&lpg=PT254
3. For SolrJ support, I've added SOLR-7839.
4. For internode communication, I think (please correct me if I'm wrong) the ThreadLocal approach won't work for cases when the internode request is made from a threadpool, from where the headers of the original request thread's ThreadLocal won't be accessible. I think we need something like SOLR-6625, where the request object can store the user principal / headers etc. and pass it along to the request interceptor as a context.
5. As per our discussion offline, the internode requests which are originated from a Solr node (not a subrequest of a main user request) cannot be secured this way. Either each node uses its own principal/credentials to send internode requests in such cases, or there's another secure mechanism of internode requests internal to Solr (e.g. asymmetric cryptographic mechanism, e.g. PKI), irrespective of the authc plugins used for user requests.
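The digest-reuse suggestion in point 2 and the salted double-hash scheme in the quoted snippet can be sketched together as below. This is an illustration based on the snippet quoted in the comment, not the committed Sha256AuthenticationProvider code; it swaps commons-codec's Base64 for java.util.Base64 and uses a ThreadLocal, one of several possible reuse strategies.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Sha256Hasher {
    // MessageDigest is not thread-safe, so a shared static instance won't do;
    // a ThreadLocal gives each request thread its own reusable instance and
    // avoids calling getInstance() on every request (the overhead point 2
    // of the comment is about).
    private static final ThreadLocal<MessageDigest> DIGEST =
        ThreadLocal.withInitial(() -> {
            try {
                return MessageDigest.getInstance("SHA-256");
            } catch (NoSuchAlgorithmException e) {
                // Every conforming JRE must ship SHA-256, so failing fast here
                // is arguably better than returning null (point 1).
                throw new AssertionError(e);
            }
        });

    public static String sha256(String password, String saltKey) {
        MessageDigest digest = DIGEST.get();
        digest.reset(); // clear any state left by a previous request
        if (saltKey != null) {
            digest.update(Base64.getDecoder().decode(saltKey));
        }
        byte[] btPass = digest.digest(password.getBytes(StandardCharsets.UTF_8));
        digest.reset();
        btPass = digest.digest(btPass); // hash the hash, as in the quoted code
        return Base64.getEncoder().encodeToString(btPass);
    }
}
```

The reset-before-use call matters with reuse: without it, leftover state from an aborted earlier computation would leak into the next hash.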
[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.
[ https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644420#comment-14644420 ]

Mark Miller commented on SOLR-7836:
---
That sounds right - it's been a while since I've looked at the code, but the idea is: you get a writer, use it briefly, then release it in a finally. There should not be code that gets a writer, then gets another writer, and then tries to release both of them afterwards.

Possible deadlock when closing refcounted index writers.
Key: SOLR-7836
URL: https://issues.apache.org/jira/browse/SOLR-7836
Project: Solr
Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
Attachments: SOLR-7836.patch

Preliminary patch for what looks like a possible race condition between writerFree and pauseWriter in DefaultSolrCoreState. Looking for comments and/or why I'm completely missing the boat.
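The usage contract described in the comment - acquire, use briefly, release in a finally, and never hold two acquisitions at once - looks roughly like the sketch below. The `RefCounted` class here is a self-contained, hypothetical stand-in written in the spirit of Solr's refcounted writer handle, not the actual Solr class; `StringBuilder` stands in for the index writer.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal refcounted handle: callers get() the resource and must decref()
// exactly once when done, so the last holder triggers cleanup.
class RefCounted<T> {
    private final T resource;
    private final AtomicInteger refs = new AtomicInteger(1);

    RefCounted(T resource) { this.resource = resource; }

    T get() { return resource; }

    void decref() {
        if (refs.decrementAndGet() == 0) {
            // Last holder released: the underlying resource may now be
            // closed safely (omitted in this sketch).
        }
    }
}

public class WriterUsage {
    // Correct pattern per the comment: acquire, use briefly, release in a
    // finally. The deadlock-prone anti-pattern is acquiring a second writer
    // while still holding this one and releasing both only afterwards.
    static String useWriter(RefCounted<StringBuilder> iw) {
        try {
            return iw.get().append("doc").toString();
        } finally {
            iw.decref(); // released even if the work above throws
        }
    }
}
```

Keeping the acquire/release pair inside one try/finally makes it impossible for an exception path to leave the refcount elevated, which is the kind of leak that can block a close waiting on pauseWriter/writerFree.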
[jira] [Created] (SOLR-7840) ZkStateReader.updateClusterState fetches watched collections twice from ZK
Shalin Shekhar Mangar created SOLR-7840: --- Summary: ZkStateReader.updateClusterState fetches watched collections twice from ZK Key: SOLR-7840 URL: https://issues.apache.org/jira/browse/SOLR-7840 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 5.2.1, 4.10.4 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 5.3, Trunk ZkStateReader.updateClusterState fetches watched collections once during constructState and then again after re-acquiring the update lock. Fetching the watched collections live from ZK once is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644366#comment-14644366 ]

ASF subversion and git services commented on SOLR-6234:
---
Commit 1693092 from m...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1693092 ] SOLR-6234: Scoring for query time join

Scoring modes for query time join
--
Key: SOLR-6234
URL: https://issues.apache.org/jira/browse/SOLR-6234
Project: Solr
Issue Type: New Feature
Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
Labels: features, patch, test
Fix For: 5.3
Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch

It adds the ability to call Lucene's JoinUtil by specifying a local param, i.e. \{!join score=...}. It supports:
- {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
- -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814.
- -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise.
- there is test coverage for the cross-core join case.
- so far it joins string and multivalue string fields (Sorted, SortedSet, Binary), but not Numeric DVs. follow-up LUCENE-5868
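The {{score=...}} local param described above is used inside the join query's local-params block. A small sketch of building such a query string; the field names (`child_id`, `id`) and the child filter are made up for illustration, and only the `{!join ...}` syntax and the `score=none|avg|max|total` values come from the issue:

```java
public class JoinQueryExample {
    // Builds a query-time join query string: documents whose "to" field
    // matches the "from" field of children selected by childQuery, with
    // the chosen ScoreMode (none, avg, max, or total) applied by JoinUtil.
    public static String joinQuery(String from, String to,
                                   String scoreMode, String childQuery) {
        return "{!join from=" + from + " to=" + to
                + " score=" + scoreMode + "}" + childQuery;
    }

    public static void main(String[] args) {
        System.out.println(joinQuery("child_id", "id", "max", "color:red"));
        // {!join from=child_id to=id score=max}color:red
    }
}
```

With `score=max`, each parent is scored by its best-matching child; `score=none` falls back to constant-score joining as before.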
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644377#comment-14644377 ]

Timothy Potter commented on SOLR-7227:
--
I'm pretty sure the smoke tester checks things in the manifest. [~thetaphi] Did you run the smoke tester with your patch?
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644381#comment-14644381 ]

Uwe Schindler commented on SOLR-7227:
-
Yes, it passes. Manifests are only required inside JAR or WAR files (they carry metadata about the archive itself) - and the WAR file is gone. The JAR files of our application all have valid META-INF.
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated SOLR-7227:
Attachment: SOLR-7227-part2.patch

New patch without the WAR special case in the smoke tester (no longer needed)
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644418#comment-14644418 ]

Timothy Potter commented on SOLR-7227:
--
Thanks - I'm running it now too on my Mac.
[jira] [Comment Edited] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644423#comment-14644423 ]

Uwe Schindler edited comment on SOLR-7227 at 7/28/15 2:24 PM:
--
Otherwise, do you have an opinion about the extra inner folder solr-webapp/webapp? We should remove the inner webapp folder; I just did not do this because I have no idea which scripts are affected by it. I wanted to look into that later or in a separate commit. I just wanted to get rid of the WAR completely.

was (Author: thetaphi): Otherwise do you have an opinion about the extra folder solr-webapp/webapp. We should remove the inner webapp folder, I just did not do this, because I have no idea which scripts are affected by this. I wanted to look into that later or in a separate commit. I just wanted to get rid of the WAR completely.
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644423#comment-14644423 ]

Uwe Schindler commented on SOLR-7227:
-
Otherwise do you have an opinion about the extra folder solr-webapp/webapp. We should remove the inner webapp folder, I just did not do this, because I have no idea which scripts are affected by this. I wanted to look into that later or in a separate commit. I just wanted to get rid of the WAR completely.
[jira] [Commented] (SOLR-5022) PermGen exhausted test failures on Jenkins.
[ https://issues.apache.org/jira/browse/SOLR-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644455#comment-14644455 ]

Uwe Schindler commented on SOLR-5022:
-
This build failed on Windows: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4951/

This generally happens when the following is true:
- Java 7 on branch_5x
- Some Solr test fails (and does not clean up after itself completely)
- Another expensive test starts afterwards in the same JVM. This one gets a hidden permgen error and then hangs.

I would like to raise permgen in the 5.x branch, if the JRE version is exactly JDK 7 (Oracle/OpenJDK), as a temporary workaround. We should do this before 5.3, as the smoke tester also sometimes hangs!

PermGen exhausted test failures on Jenkins.
---
Key: SOLR-5022
URL: https://issues.apache.org/jira/browse/SOLR-5022
Project: Solr
Issue Type: Test
Components: Tests
Reporter: Mark Miller
Assignee: Uwe Schindler
Priority: Critical
Fix For: 5.3
Attachments: SOLR-5022-permgen.patch, SOLR-5022-permgen.patch, SOLR-5022.patch, intern-count-win.txt
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b24) - Build # 13633 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13633/
Java: 64bit/jdk1.8.0_60-ea-b24 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.

FAILED: junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not released!!! [TransactionLog]
   at __randomizedtesting.SeedInfo.seed([BFDA157B7CAA1A46]:0)
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at org.junit.Assert.assertNull(Assert.java:551)
   at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
   at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:497)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
   at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
   at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
   at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
   at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
   at java.lang.Thread.run(Thread.java:745)

FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:50821/collection1: Exception writing document id 57 to the index; possible analysis error.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:50821/collection1: Exception writing document id 57 to the index; possible analysis error.
   at __randomizedtesting.SeedInfo.seed([BFDA157B7CAA1A46:378E2AA1D25677BE]:0)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
   at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
   at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:152)
   at org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:511)
   at org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:639)
   at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:350)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:497)
   at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
   at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
   at
[jira] [Created] (LUCENE-6698) Add BKDDistanceQuery
Michael McCandless created LUCENE-6698: -- Summary: Add BKDDistanceQuery Key: LUCENE-6698 URL: https://issues.apache.org/jira/browse/LUCENE-6698 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Our BKD tree impl should be very fast at doing distance from lat/lon center point X query. I haven't started this ... [~nknize] expressed interest in working on it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
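The query described above ultimately reduces to point-to-center distance tests. As an illustration only (the issue itself is about the BKD tree structure, not the distance metric), a minimal haversine great-circle distance in plain Java:

```java
public class Haversine {
  private static final double EARTH_RADIUS_KM = 6371.0088; // mean Earth radius

  // Great-circle distance between two lat/lon points, in kilometers.
  public static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
    double dLat = Math.toRadians(lat2 - lat1);
    double dLon = Math.toRadians(lon2 - lon1);
    double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
        + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
          * Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
  }
}
```

A BKD-backed query would presumably prune whole tree cells against the query radius rather than evaluating a formula like this per document, which is where the speed would come from.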
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644383#comment-14644383 ] Uwe Schindler commented on SOLR-7227: - In any case we can remove the special cases for WAR files from the smoke tester; this is why you left the issue open. Solr distribution archive should have the WAR file extracted already Key: SOLR-7227 URL: https://issues.apache.org/jira/browse/SOLR-7227 Project: Solr Issue Type: Improvement Affects Versions: 5.0 Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.3, Trunk Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, thus taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on-the-fly upon first startup.
[jira] [Comment Edited] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644404#comment-14644404 ] Ishan Chattopadhyaya edited comment on SOLR-7692 at 7/28/15 2:12 PM: - 1. In Sha256AuthenticationProvider, line 106
{noformat}
try {
  digest = MessageDigest.getInstance("SHA-256");
} catch (NoSuchAlgorithmException e) {
  BasicAuthPlugin.log.error(e.getMessage(), e);
  return null; // should not happen
}
{noformat}
Shouldn't this be an exception, e.g. SolrException, thrown? 2. In the sha256() method (same place as above),
{noformat}
public static String sha256(String password, String saltKey) {
  MessageDigest digest;
  try {
    digest = MessageDigest.getInstance("SHA-256");
  } catch (NoSuchAlgorithmException e) {
    BasicAuthPlugin.log.error(e.getMessage(), e);
    return null; // should not happen
  }
  if (saltKey != null) {
    digest.reset();
    digest.update(Base64.decodeBase64(saltKey));
  }
  byte[] btPass = digest.digest(password.getBytes(StandardCharsets.UTF_8));
  digest.reset();
  btPass = digest.digest(btPass);
  return Base64.encodeBase64String(btPass);
}
{noformat}
I think we should reuse a digest instance, instead of creating one via the factory method for every request, as there is significant overhead to creating a new digest algorithm instance. Reference: https://books.google.co.in/books?id=42etT_9-_9MC&pg=PT254&lpg=PT254 3. For SolrJ support, I've added SOLR-7839. 4. For internode communication, I think (please correct me if I'm wrong) the ThreadLocal approach won't work for cases when the internode request is made from a threadpool, from where the headers in the original request thread's ThreadLocal won't be accessible. I think we need something like SOLR-6625, where the request object can store the user principal / headers etc. and pass them along to the request interceptor as a context. 5. As per our discussion offline, internode requests which originate from a Solr node (not as a subrequest of a main user request) cannot be secured this way. Either each node uses its own principal/credentials to send internode requests in such cases, or there is another secure mechanism for internode requests internal to Solr (e.g. an asymmetric cryptographic mechanism such as PKI), irrespective of the authc plugins used for user requests. Implement BasicAuth based impl for the new Authentication/Authorization APIs Key: SOLR-7692 URL: https://issues.apache.org/jira/browse/SOLR-7692 Project: Solr Issue
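Following up on point 2 above, here is a minimal sketch of what a reused-digest variant of the quoted sha256() could look like. This is illustrative only, not the patch's code: it uses java.util.Base64 instead of the commons-codec Base64 that the quoted snippet uses, reuses the MessageDigest via a ThreadLocal (MessageDigest is stateful and not thread-safe, so a single shared instance would not work), and surfaces a missing algorithm as an exception instead of a null return, per point 1.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

public class Sha256Hasher {

  // One digest per thread: avoids per-request getInstance() overhead while
  // staying safe under concurrent requests.
  private static final ThreadLocal<MessageDigest> DIGEST = ThreadLocal.withInitial(() -> {
    try {
      return MessageDigest.getInstance("SHA-256");
    } catch (NoSuchAlgorithmException e) {
      // Every compliant JRE must provide SHA-256, so this means a broken runtime.
      throw new IllegalStateException("SHA-256 unavailable", e);
    }
  });

  // Same salted double-hash scheme as the quoted code: optionally mix in the
  // Base64-decoded salt, hash the password, then hash the hash.
  public static String sha256(String password, String saltKey) {
    MessageDigest digest = DIGEST.get();
    digest.reset();
    if (saltKey != null) {
      digest.update(Base64.getDecoder().decode(saltKey));
    }
    byte[] btPass = digest.digest(password.getBytes(StandardCharsets.UTF_8));
    digest.reset();
    btPass = digest.digest(btPass);
    return Base64.getEncoder().encodeToString(btPass);
  }
}
```

The ThreadLocal is one way to get reuse; a pooled or per-provider-instance digest would address the same overhead.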
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644422#comment-14644422 ] Uwe Schindler commented on SOLR-7227: - Do you use the latest patch? The old one may not yet have the WAR file parts removed. The old smoker worked on cygwin, but you never know :-)
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644452#comment-14644452 ] Timothy Potter commented on SOLR-7227: -- yes, using the latest patch you posted at 14:52 ... We might as well address the extra folder now too ... the scripts affected are minimal (smoketester, bin/solr, bin/solr.cmd, zkcli.sh/cmd, and a few in cloud-dev).
[jira] [Updated] (LUCENE-6647) Add GeoHash String Utilities to core GeoUtils
[ https://issues.apache.org/jira/browse/LUCENE-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nicholas Knize updated LUCENE-6647: --- Attachment: LUCENE-6647.patch Updated GeoHash patch with unit tests. Add GeoHash String Utilities to core GeoUtils - Key: LUCENE-6647 URL: https://issues.apache.org/jira/browse/LUCENE-6647 Project: Lucene - Core Issue Type: New Feature Reporter: Nicholas Knize Attachments: LUCENE-6647.patch, LUCENE-6647.patch GeoPointField uses morton encoding to efficiently pack lat/lon values into a single long. GeoHashing effectively does the same thing but uses base 32 encoding to represent this long value as a human readable string. Many user applications already use the string representation of the hash. This issue simply adds the base32 string representation of the already computed morton code. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
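For context on the base-32 representation mentioned above, classic geohashing interleaves longitude and latitude bits (longitude first) and emits the result five bits at a time through a 32-character alphabet that omits a, i, l, and o. A self-contained sketch of that textbook algorithm (not the patch's morton-code-based implementation):

```java
public class GeoHash {
  // Standard geohash base-32 alphabet (no a, i, l, o).
  private static final String BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz";

  public static String encode(double lat, double lon, int precision) {
    double latMin = -90, latMax = 90, lonMin = -180, lonMax = 180;
    StringBuilder sb = new StringBuilder(precision);
    boolean evenBit = true; // even bit positions refine longitude, odd refine latitude
    int bit = 0, idx = 0;
    while (sb.length() < precision) {
      if (evenBit) {
        double mid = (lonMin + lonMax) / 2;
        if (lon >= mid) { idx = idx * 2 + 1; lonMin = mid; }
        else            { idx = idx * 2;     lonMax = mid; }
      } else {
        double mid = (latMin + latMax) / 2;
        if (lat >= mid) { idx = idx * 2 + 1; latMin = mid; }
        else            { idx = idx * 2;     latMax = mid; }
      }
      evenBit = !evenBit;
      if (++bit == 5) { // five bits per base-32 character
        sb.append(BASE32.charAt(idx));
        bit = 0;
        idx = 0;
      }
    }
    return sb.toString();
  }
}
```

Because each character only narrows the bounding box, hashes are prefix-stable: truncating a hash yields the hash of a containing cell, which is what makes the string form convenient for prefix queries.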
[jira] [Commented] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644404#comment-14644404 ] Ishan Chattopadhyaya commented on SOLR-7692: 1. In Sha256AuthenticationProvider, line 106
{noformat}
} catch (NoSuchAlgorithmException e) {
  BasicAuthPlugin.log.error(e.getMessage(), e);
  return null; // should not happen
}
{noformat}
Shouldn't this be an exception thrown? 2. In the sha256() method, I think we should reuse a digest instance, since there is significant overhead to creating a new digest algorithm instance. Reference: https://books.google.co.in/books?id=42etT_9-_9MC&pg=PT254&lpg=PT254 3. For SolrJ support, I've added SOLR-7839. 4. For internode communication, I think (please correct me if I'm wrong) the ThreadLocal approach won't work for cases when the internode request is made from a threadpool, from where the headers in the original request thread's ThreadLocal won't be accessible. I think we need something like SOLR-6625, where the request object can store the user principal / headers etc. and pass them along to the request interceptor as a context. 5. As per our discussion offline, internode requests which originate from a Solr node (not as a subrequest of a main user request) cannot be secured this way. Either each node uses its own principal/credentials to send internode requests in such cases, or there is another secure mechanism for internode requests internal to Solr (e.g. an asymmetric cryptographic mechanism such as PKI), irrespective of the authc plugins used for user requests.
Implement BasicAuth based impl for the new Authentication/Authorization APIs Key: SOLR-7692 URL: https://issues.apache.org/jira/browse/SOLR-7692 Project: Solr Issue Type: New Feature Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch This involves various components h2. Authentication A basic auth based authentication filter. This should retrieve the user credentials from ZK. The user name and sha1 hash of the password should be stored in ZK. Sample authentication json:
{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98 buygujkjnlk",
      "david": "f678njfgfjnklno iuy9865ty",
      "pete": "87ykjnklndfhjh8 98uyiy98"
    }
  }
}
{code}
h2. authorization plugin This would store the roles of various users and their privileges in ZK. Sample authorization.json:
{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collection-edit": {"role": "admin"},
      "coreadmin": {"role": "admin"},
      "config-edit": {"role": "admin", "method": "POST"},  // all collections
      "schema-edit": {"role": "admin", "method": "POST"},
      "update": {"role": "dev"},                           // all collections
      "mycoll_update": {"collection": "mycoll", "path": ["/update/*"], "role": ["somebody"]}
    }
  }
}
{code}
We will also need to provide APIs to create users and assign them roles
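On the wire, HTTP Basic auth — which the proposed BasicAuthPlugin would consume before checking the stored hash — is simply "user:password" Base64-encoded into the Authorization header. A minimal stdlib-only sketch of what a client sends (the user names in the sample JSON above would be the first half; any password here is made up):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
  // Builds the RFC 2617 "Basic" credentials value: Base64("user:password").
  public static String header(String user, String password) {
    String token = Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
    return "Basic " + token;
  }
}
```

The server-side plugin would decode this token, split on the first colon, hash the password, and compare against the users map stored in ZK.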
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2510 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2510/ Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseSerialGC 1 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest Error Message: ObjectTracker found 1 object(s) that were not released!!! [TransactionLog] Stack Trace: java.lang.AssertionError: ObjectTracker found 1 object(s) that were not released!!! [TransactionLog] at __randomizedtesting.SeedInfo.seed([9468AF203D12AB88]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertNull(Assert.java:551) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236) at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) Build Log: [...truncated 11228 lines...] [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest [junit4] 2 Creating dataDir: /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/test/J0/temp/solr.cloud.HttpPartitionTest_9468AF203D12AB88-001/init-core-data-001 [junit4] 2 1896621 INFO (SUITE-HttpPartitionTest-seed#[9468AF203D12AB88]-worker) [] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: / [junit4] 2 1896624 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER [junit4] 2 1896625 INFO (Thread-4347) [] o.a.s.c.ZkTestServer client port:0.0.0.0/0.0.0.0:0 [junit4] 2 1896625 INFO (Thread-4347) [] o.a.s.c.ZkTestServer Starting server [junit4] 2 1896726 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.ZkTestServer start zk server on port:62373 [junit4] 2 1896726 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider [junit4] 2 1896727 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper [junit4] 2 1896734 INFO 
(zkCallback-1195-thread-1) [] o.a.s.c.c.ConnectionManager Watcher org.apache.solr.common.cloud.ConnectionManager@47a83711 name:ZooKeeperConnection Watcher:127.0.0.1:62373 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None [junit4] 2 1896734 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper [junit4] 2 1896735 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.c.SolrZkClient Using default ZkACLProvider [junit4] 2 1896735 INFO (TEST-HttpPartitionTest.test-seed#[9468AF203D12AB88]) [] o.a.s.c.c.SolrZkClient makePath: /solr [junit4] 2
[jira] [Assigned] (LUCENE-6580) Allow defined-width gaps in SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward reassigned LUCENE-6580: - Assignee: Alan Woodward Allow defined-width gaps in SpanNearQuery - Key: LUCENE-6580 URL: https://issues.apache.org/jira/browse/LUCENE-6580 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Assignee: Alan Woodward Attachments: LUCENE-6580.patch, LUCENE-6580.patch, LUCENE-6580.patch SpanNearQuery is not quite an exact Spans replacement for PhraseQuery at the moment, because while you can ask for an overall slop in an ordered match, you can't specify exactly where the gaps should appear. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644493#comment-14644493 ] Shawn Heisey commented on SOLR-7227: bq. Otherwise do you have an opinion about the extra inner folder solr-webapp/webapp? I'm not sure we should move the artifacts out of the inner webapp directory, at least not until the next major version. My concern is not our own scripts. We can change those easily enough. The potential problem is homegrown scripts written by users. If we move the extracted artifacts, even just one directory level, we risk problems with highly customized user setups. Is it enough to assume someone who builds their own scripts will be able to use a note in the "upgrading from" section of CHANGES.txt to figure out how to fix their setup when they upgrade? It might be. User confusion is always a worry for me, because Solr already has plenty to offer in that department. I can't imagine, based on anything you've said, that I would want to vote -1. I offer my thoughts only for consideration. Semi-related: I need to find out what we've got for documentation on upgrading a Solr 5.x install to the next release. I have some ideas about how I would do it, but I'd like to know what (if anything) we are saying officially.
Re: [CI] Lucene 5x Linux 64 Test Only - Build # 57764 - Failure!
Hmm, seed doesn't repro. But this is clearly a test bug in its exception handling that is masking the root cause of the failure. So I committed a fix for test bug #2, but bug #1 still lurks ... Mike McCandless http://blog.mikemccandless.com On Tue, Jul 28, 2015 at 10:54 AM, bu...@elastic.co wrote: *BUILD FAILURE* Build URL http://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/57764/ Project: lucene_linux_java8_64_test_only Randomization: JDK8,local,heap[512m],-server +UseParallelGC -UseCompressedOops +AggressiveOpts,sec manager on Date of build: Tue, 28 Jul 2015 16:51:16 +0200 Build duration: 2 min 48 sec *CHANGES* Revision by *mkhl:* *(SOLR-6234: Scoring for query time join)* (listing of edited files under checkout/lucene and checkout/solr elided)
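The "masking" Mike mentions typically happens when a test's cleanup code throws and that secondary exception replaces the original failure. A generic sketch of the pattern that preserves the root cause via Throwable.addSuppressed (illustrative only, not the committed fix):

```java
public class NoMasking {
  // Runs a failing operation, then cleanup that also fails; the cleanup
  // failure is attached as suppressed so the root cause stays visible.
  public static void run() {
    RuntimeException primary = null;
    try {
      throw new IllegalStateException("root cause"); // the real failure
    } catch (RuntimeException e) {
      primary = e;
    } finally {
      try {
        cleanupThatFails();                          // secondary failure
      } catch (RuntimeException cleanup) {
        if (primary != null) primary.addSuppressed(cleanup);
        else primary = cleanup;
      }
    }
    throw primary;
  }

  private static void cleanupThatFails() {
    throw new RuntimeException("cleanup failed");
  }
}
```

With this shape, the stack trace reports "root cause" with "cleanup failed" listed as suppressed, instead of the cleanup failure alone.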
[jira] [Commented] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14644466#comment-14644466 ] ASF subversion and git services commented on SOLR-6234: --- Commit 1693101 from m...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1693101 ] SOLR-6234: Scoring for query time join Scoring modes for query time join -- Key: SOLR-6234 URL: https://issues.apache.org/jira/browse/SOLR-6234 Project: Solr Issue Type: New Feature Components: query parsers Affects Versions: 5.3 Reporter: Mikhail Khludnev Assignee: Timothy Potter Labels: features, patch, test Fix For: 5.3 Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch It adds the ability to call Lucene's JoinUtil by specifying a local param, i.e. \{!join score=...} It supports: - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil) - -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814. - -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise. - there is test coverage for the cross-core join case. - so far it joins string and multivalued string fields (Sorted, SortedSet, Binary), but not numeric DVs. follow-up: LUCENE-5868
[jira] [Updated] (LUCENE-6580) Allow defined-width gaps in SpanNearQuery
[ https://issues.apache.org/jira/browse/LUCENE-6580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-6580: -- Attachment: LUCENE-6580.patch Patch updated to trunk. I'd like to get this in for 5.3 if nobody objects. Allow defined-width gaps in SpanNearQuery - Key: LUCENE-6580 URL: https://issues.apache.org/jira/browse/LUCENE-6580 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Attachments: LUCENE-6580.patch, LUCENE-6580.patch, LUCENE-6580.patch SpanNearQuery is not quite an exact Spans replacement for PhraseQuery at the moment, because while you can ask for an overall slop in an ordered match, you can't specify exactly where the gaps should appear. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
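Based on the patch under discussion (names could differ in the committed version), the SpanNearQuery builder gains an explicit gap clause, so a phrase-with-hole such as "alpha _ gamma" can be stated exactly rather than approximated with overall slop. A sketch, assuming lucene-core on the classpath:

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class GapQueryExample {
  // Ordered near query: "alpha", exactly one unconstrained position, "gamma".
  public static SpanNearQuery alphaGapGamma() {
    return SpanNearQuery.newOrderedNearQuery("body")
        .addClause(new SpanTermQuery(new Term("body", "alpha")))
        .addGap(1)
        .addClause(new SpanTermQuery(new Term("body", "gamma")))
        .build();
  }
}
```

This pins the gap to a specific position between the two terms, which plain slop cannot express (slop only bounds the total amount of reordering/spread across the whole match).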
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5080 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5080/ Java: 32bit/jdk1.8.0_51 -server -XX:+UseConcMarkSweepGC No tests ran. Build Log: [...truncated 2776 lines...] ERROR: Connection was broken: java.io.IOException: Unexpected termination of the channel at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50) Caused by: java.io.EOFException at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325) at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794) at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801) at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299) at hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40) at hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34) at hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48) Build step 'Invoke Ant' marked build as failure ERROR: Publisher 'Archive the artifacts' failed: no workspace for Lucene-Solr-trunk-Windows #5080 ERROR: Publisher 'Publish JUnit test result report' failed: no workspace for Lucene-Solr-trunk-Windows #5080 Email was triggered for: Failure - Any Sending email for trigger: Failure - Any - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644506#comment-14644506 ] Uwe Schindler commented on SOLR-7227: - Smoke tester passes for me on Linux. There is an unrelated bug in smoker when it tries to execute post.jar (it does not set up PATH correctly as it does for the scripts), so it fails if you have no java in your path, or it executes the wrong Java (maybe an older version). I will open a separate issue; it's really unrelated. It just cost me half an hour :( Shawn: I have no strong opinion, we can leave it as it is. But custom scripts may already break because there is no WAR anymore. In previous versions, Jetty extracted the WAR to a temporary folder, so such scripts will for sure no longer work. Solr distribution archive should have the WAR file extracted already Key: SOLR-7227 URL: https://issues.apache.org/jira/browse/SOLR-7227 Project: Solr Issue Type: Improvement Affects Versions: 5.0 Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.3, Trunk Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, thus taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on-the-fly upon first startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6700) FSDirectory can't open indexes that are symlinks, due to a deficiency in Files.createDirectories
Stephen Green created LUCENE-6700: - Summary: FSDirectory can't open indexes that are symlinks, due to a deficiency in Files.createDirectories Key: LUCENE-6700 URL: https://issues.apache.org/jira/browse/LUCENE-6700 Project: Lucene - Core Issue Type: Bug Components: core/index Affects Versions: 5.0 Environment: java version 1.8.0_45 Solaris or Linux Symlinked directory Reporter: Stephen Green Priority: Minor Lucene, using FSDirectory (via NIOFSDirectory), cannot open an index from a Path that is a symbolic link to an actual index directory. Trying to do so generates an exception stack like:
Exception in thread "main" java.nio.file.FileAlreadyExistsException: maildex.idx
  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:88)
  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
  at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
  at sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
  at java.nio.file.Files.createDirectory(Files.java:674)
  at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
  at java.nio.file.Files.createDirectories(Files.java:727)
  at org.apache.lucene.store.FSDirectory.<init>(FSDirectory.java:128)
  at org.apache.lucene.store.NIOFSDirectory.<init>(NIOFSDirectory.java:64)
  at org.apache.lucene.store.NIOFSDirectory.<init>(NIOFSDirectory.java:74)
This problem occurs on both Linux and Solaris (which probably use the same SPI for Unix file systems at the bottom of the java.nio.file stack). This problem has been noted in the OpenJDK issue tracker at https://bugs.openjdk.java.net/browse/JDK-8130464 and closed as "Not an Issue" because Files.createDirectories is meant to operate on directories, and a symlink is not a directory. This doesn't strike me as particularly helpful, but I guess it sort of makes sense in a broken-by-design way. 
The work-around is simply to move or copy the index to the place where I want it, but this makes concurrent read-only development on the index difficult when the index is large. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
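The failure and the resolve-first workaround can be reproduced with plain java.nio (assumes a POSIX filesystem where the process may create symlinks):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

// Reproduces the report: Files.createDirectories rejects a symlink that points
// at an existing directory, because its exists-check uses NOFOLLOW_LINKS.
// Also shows one possible workaround: resolve the link with toRealPath() first.
public class SymlinkRepro {
    static boolean createDirectoriesFailsOnSymlink() {
        try {
            Path real = Files.createTempDirectory("index-real");
            Path link = real.resolveSibling(real.getFileName() + "-link");
            Files.createSymbolicLink(link, real);
            try {
                Files.createDirectories(link); // throws: link "is not a directory"
                return false;
            } catch (FileAlreadyExistsException expected) {
                // Workaround: resolve the symlink before creating/opening.
                Files.createDirectories(link.toRealPath()); // succeeds
                return true;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("fails on symlink: " + createDirectoriesFailsOnSymlink());
    }
}
```

createDirectories catches the FileAlreadyExistsException internally and re-checks with isDirectory(dir, NOFOLLOW_LINKS); a symlink fails that check, so the exception is rethrown even though the link's target is a perfectly good directory.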
[jira] [Commented] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644577#comment-14644577 ] Mikhail Khludnev commented on SOLR-6234: the latest patch adds a tag into solr/common-build.xml, which fixes it. I did run precommit, for sure, plenty of times. FWIW, the build passed: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/245/ Scoring modes for query time join -- Key: SOLR-6234 URL: https://issues.apache.org/jira/browse/SOLR-6234 Project: Solr Issue Type: New Feature Components: query parsers Affects Versions: 5.3 Reporter: Mikhail Khludnev Assignee: Timothy Potter Labels: features, patch, test Fix For: 5.3 Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch it adds ability to call Lucene's JoinUtil by specifying local param, ie \{!join score=...} It supports: - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil) - -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814. - -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise. - there is a test coverage for cross core join case. - so far it joins string and multivalue string fields (Sorted, SortedSet, Binary), but not Numerics DVs. follow-up LUCENE-5868 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated SOLR-7227: Attachment: SOLR-7227-part2.patch I reverted the Smoker changes from the previous commit and removed the WAR stuff completely. This is now much cleaner, Solr distribution archive should have the WAR file extracted already Key: SOLR-7227 URL: https://issues.apache.org/jira/browse/SOLR-7227 Project: Solr Issue Type: Improvement Affects Versions: 5.0 Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.3, Trunk Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, thus taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on-the-fly upon first startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[CI] Lucene 5x Linux 64 Test Only - Build # 57764 - Failure!
BUILD FAILURE Build URLhttp://build-eu-00.elastic.co/job/lucene_linux_java8_64_test_only/57764/ Project:lucene_linux_java8_64_test_only Randomization: JDK8,local,heap[512m],-server +UseParallelGC -UseCompressedOops +AggressiveOpts,sec manager on Date of build:Tue, 28 Jul 2015 16:51:16 +0200 Build duration:2 min 48 sec CHANGES Revision by mkhl: (SOLR-6234: Scoring for query time join) edit checkout edit checkout/dev-tools edit checkout/lucene edit checkout/lucene/BUILD.txt edit checkout/lucene/CHANGES.txt edit checkout/lucene/JRE_VERSION_MIGRATION.txt edit checkout/lucene/LICENSE.txt edit checkout/lucene/MIGRATE.txt edit checkout/lucene/NOTICE.txt edit checkout/lucene/README.txt edit checkout/lucene/SYSTEM_REQUIREMENTS.txt edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/miscellaneous/Lucene47WordDelimiterFilter.java edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/ASCIITLD.jflex-macro edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/SUPPLEMENTARY.jflex-macro edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/StandardTokenizerImpl40.java edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/StandardTokenizerImpl40.jflex edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/UAX29URLEmailTokenizerImpl40.java edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/UAX29URLEmailTokenizerImpl40.jflex edit checkout/lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/std40/package.html edit checkout/lucene/analysis/common/src/test/org/apache/lucene/analysis/miscellaneous/TestLucene47WordDelimiterFilter.java edit checkout/lucene/backward-codecs edit checkout/lucene/benchmark edit checkout/lucene/build.xml edit checkout/lucene/classification edit checkout/lucene/classification/build.xml edit 
checkout/lucene/classification/ivy.xml edit checkout/lucene/classification/src edit checkout/lucene/codecs edit checkout/lucene/common-build.xml edit checkout/lucene/core edit checkout/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterExceptions2.java edit checkout/lucene/core/src/test/org/apache/lucene/search/TestSort.java edit checkout/lucene/core/src/test/org/apache/lucene/search/TestSortRandom.java edit checkout/lucene/core/src/test/org/apache/lucene/search/TestTopFieldCollector.java edit checkout/lucene/core/src/test/org/apache/lucene/search/TestTotalHitCountCollector.java edit checkout/lucene/demo edit checkout/lucene/expressions edit checkout/lucene/facet edit checkout/lucene/grouping edit checkout/lucene/highlighter edit checkout/lucene/ivy-ignore-conflicts.properties edit checkout/lucene/ivy-settings.xml edit checkout/lucene/ivy-versions.properties edit checkout/lucene/join edit checkout/lucene/licenses edit checkout/lucene/memory edit checkout/lucene/misc edit checkout/lucene/module-build.xml edit
[jira] [Updated] (LUCENE-6513) Allow limits on SpanMultiTermQueryWrapper expansion
[ https://issues.apache.org/jira/browse/LUCENE-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Woodward updated LUCENE-6513: -- Attachment: LUCENE-6513.patch Updated to trunk. I don't think there's any easier way to solve this (the other SpanRewriteMethods end up building SpanOrQueries as well) Allow limits on SpanMultiTermQueryWrapper expansion --- Key: LUCENE-6513 URL: https://issues.apache.org/jira/browse/LUCENE-6513 Project: Lucene - Core Issue Type: Improvement Reporter: Alan Woodward Priority: Minor Attachments: LUCENE-6513.patch, LUCENE-6513.patch SpanMultiTermQueryWrapper currently rewrites to a SpanOrQuery with as many clauses as there are matching terms. It would be nice to be able to limit this in a slightly nicer way than using TopTerms, which for most queries just translates to a lexicographical ordering. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7842) ZK connection loss or session expiry events should not fire config directory listeners
Shalin Shekhar Mangar created SOLR-7842: --- Summary: ZK connection loss or session expiry events should not fire config directory listeners Key: SOLR-7842 URL: https://issues.apache.org/jira/browse/SOLR-7842 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 5.2.1 Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 5.3, Trunk The watcher on the config directory has the following in the process method:
{code}
Stat stat = null;
try {
  stat = zkClient.exists(zkDir, null, true);
} catch (KeeperException e) {
  // ignore, it is not a big deal
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
}
boolean resetWatcher = false;
try {
  resetWatcher = fireEventListeners(zkDir);
} finally {
  if (Event.EventType.None.equals(event.getType())) {
    log.info("A node got unwatched for {}", zkDir);
  } else {
    if (resetWatcher) setConfWatcher(zkDir, this, stat);
    else log.info("A node got unwatched for {}", zkDir);
  }
}
{code}
Even if the watcher is fired because of session expiry or connection loss, the fireEventListeners() method is executed and all subsequent listener invocations fail due to the loss of connection/session. All this is logged as well. 
{code}
466879 WARN (Thread-78) [ ] o.a.s.c.ZkController listener throws error
org.apache.solr.common.SolrException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /configs/jepsen/params.json
  at org.apache.solr.core.RequestParams.getFreshRequestParams(RequestParams.java:158)
  at org.apache.solr.core.SolrConfig.refreshRequestParams(SolrConfig.java:909)
  at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2585)
  at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2385)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /configs/jepsen/params.json
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:302)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:299)
  at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
  at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:299)
  at org.apache.solr.core.RequestParams.getFreshRequestParams(RequestParams.java:148)
  ... 3 more
{code}
We should check the keeper state in addition to the event type and ignore such events. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
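The proposed fix amounts to inspecting the event's keeper state before firing the config listeners. A minimal sketch of that guard, using placeholder enums that mirror org.apache.zookeeper.Watcher.Event (illustrative only; not the actual Solr patch):

```java
// Placeholder enums mirroring org.apache.zookeeper.Watcher.Event; real code
// would use ZooKeeper's types. Illustrates only the proposed guard condition.
public class WatcherGuard {
    enum KeeperState { SyncConnected, Disconnected, Expired }
    enum EventType { None, NodeCreated, NodeDeleted, NodeDataChanged, NodeChildrenChanged }

    // Fire config listeners only for real node events on a live session;
    // connection-loss/expiry watches arrive with a non-connected state, and
    // firing listeners then just produces ConnectionLossException noise.
    static boolean shouldFireListeners(KeeperState state, EventType type) {
        if (state != KeeperState.SyncConnected) {
            return false; // session lost or expired: listeners would only fail
        }
        return type != EventType.None;
    }

    public static void main(String[] args) {
        System.out.println(shouldFireListeners(KeeperState.Disconnected, EventType.None));            // false
        System.out.println(shouldFireListeners(KeeperState.SyncConnected, EventType.NodeDataChanged)); // true
    }
}
```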
[jira] [Updated] (SOLR-6625) HttpClient callback in HttpSolrServer
[ https://issues.apache.org/jira/browse/SOLR-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-6625: --- Attachment: SOLR-6625_interceptor.patch Updated the patch to trunk. I think this is good to go, even without using a ThreadLocal. This should be useful for SOLR-7692. HttpClient callback in HttpSolrServer - Key: SOLR-6625 URL: https://issues.apache.org/jira/browse/SOLR-6625 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Attachments: SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_r1654079.patch, SOLR-6625_r1654079.patch Some of our setups use Solr in a SPNego/kerberos setup (we've done this by adding our own filters to the web.xml). We have an issue in that SPNego requires a negotiation step, but some HttpSolrServer requests are not repeatable, notably the PUT/POST requests. So, what happens is, HttpSolrServer sends the requests, the server responds with a negotiation request, and the request fails because the request is not repeatable. We've modified our code to send a repeatable request beforehand in these cases. It would be nicer if HttpSolrServer provided a pre/post callback when it was making an httpclient request. This would allow administrators to make changes to the request for authentication purposes, and would allow users to make per-request changes to the httpclient calls (i.e. modify httpclient requestconfig to modify the timeout on a per-request basis). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
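The pre/post callback being requested can be sketched with a tiny stdlib-only wrapper. All names here are illustrative, and the request is modeled as a plain Supplier; the actual patch hooks Apache HttpClient's interceptor mechanism instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Toy sketch of the pre/post callback idea: a client wrapper invokes
// user-supplied hooks around every request, e.g. to attach auth headers
// (the SPNego case) or tweak per-request timeouts.
public class CallbackClient {
    interface RequestCallback {
        void before(String request); // e.g. attach authentication headers
        void after(String request);  // e.g. log, collect metrics
    }

    private final List<RequestCallback> callbacks = new ArrayList<>();

    void addCallback(RequestCallback cb) { callbacks.add(cb); }

    // Executes a "request" (modeled as a Supplier) between the hooks.
    <T> T execute(String request, Supplier<T> send) {
        for (RequestCallback cb : callbacks) cb.before(request);
        try {
            return send.get();
        } finally {
            for (RequestCallback cb : callbacks) cb.after(request);
        }
    }

    public static void main(String[] args) {
        CallbackClient client = new CallbackClient();
        List<String> log = new ArrayList<>();
        client.addCallback(new RequestCallback() {
            public void before(String r) { log.add("before " + r); }
            public void after(String r)  { log.add("after " + r); }
        });
        String body = client.execute("/select?q=*:*", () -> "200 OK");
        System.out.println(log);
        System.out.println(body); // 200 OK
    }
}
```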
[jira] [Created] (SOLR-7841) solradmin's query browser incorrectly renders docs with large ID numbers
Erick Tryzelaar created SOLR-7841: - Summary: solradmin's query browser incorrectly renders docs with large ID numbers Key: SOLR-7841 URL: https://issues.apache.org/jira/browse/SOLR-7841 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.4 Reporter: Erick Tryzelaar JavaScript JSON engines parse integers as floating point numbers. This causes problems for documents with a large id, such as {{38585496994725888}}, which gets cast into the floating point number {{38585496994725890}}. This means that one cannot reliably copy an id from a {{*:*}} query and search for it with {{id:...}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
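The collision is easy to demonstrate in plain Java, since JavaScript's Number is the same IEEE-754 64-bit double: above 2^53 not every integer is representable, and near 3.9e16 consecutive doubles are spaced 8 apart, so distinct 17-digit ids collapse to the same value:

```java
// Demonstrates why 17-digit ids cannot survive a round-trip through a
// 64-bit float: above 2^53, not every long is exactly representable.
public class LargeIdPrecision {
    public static void main(String[] args) {
        long id = 38585496994725888L; // the id from this issue
        long neighbor = id + 1;       // a distinct id

        // Both longs collapse to the same double, so a UI that routes ids
        // through JavaScript's Number cannot distinguish them.
        System.out.println((double) id == (double) neighbor); // true

        // 2^53 is the last point where every integer is exactly representable.
        long limit = 1L << 53; // 9007199254740992
        System.out.println((double) limit == (double) (limit + 1)); // true
        System.out.println((double) (limit - 1) == (double) limit); // false
    }
}
```

The usual fix is to keep such ids as strings on the wire (or parse the raw JSON text) rather than letting them pass through a double.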
[jira] [Commented] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644515#comment-14644515 ] Timothy Potter commented on SOLR-6234: -- Looks like some problems with the Javadoc still ... I'm getting this from running the smoke tester. [~mkhludnev] did you run precommit? {code} [smoker] file:///Users/timpotter/dev/lw/projects/solr_trunk_co/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/solr/build/docs/solr-core/org/apache/solr/search/join/ScoreJoinQParserPlugin.html [smoker] BROKEN LINK: file:///Users/timpotter/dev/lw/projects/solr_trunk_co/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/lucene/build/docs/core/org/apache/lucene/search/join.ScoreMode.html [smoker] Traceback (most recent call last): [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 1463, in module [smoker] main() [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 1408, in main [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, c.is_signed, ' '.join(c.test_args)) [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 1453, in smokeTest [smoker] unpackAndVerify(java, 'solr', tmpDir, 'solr-%s-src.tgz' % version, svnRevision, version, testArgs, baseURL) [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 592, in unpackAndVerify [smoker] verifyUnpacked(java, project, artifact, unpackPath, svnRevision, version, testArgs, tmpDir, baseURL) [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 701, in verifyUnpacked [smoker] checkJavadocpathFull('%s/solr/build/docs' % unpackPath, False) [smoker] File /Users/timpotter/dev/lw/projects/solr_trunk_co/dev-tools/scripts/smokeTestRelease.py, line 893, in checkJavadocpathFull [smoker] raise RuntimeError('broken 
javadocs links found!') [smoker] RuntimeError: broken javadocs links found! {code} Scoring modes for query time join -- Key: SOLR-6234 URL: https://issues.apache.org/jira/browse/SOLR-6234 Project: Solr Issue Type: New Feature Components: query parsers Affects Versions: 5.3 Reporter: Mikhail Khludnev Assignee: Timothy Potter Labels: features, patch, test Fix For: 5.3 Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch it adds ability to call Lucene's JoinUtil by specifying local param, ie \{!join score=...} It supports: - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil) - -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814. - -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise. - there is a test coverage for cross core join case. - so far it joins string and multivalue string fields (Sorted, SortedSet, Binary), but not Numerics DVs. follow-up LUCENE-5868 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7227) Solr distribution archive should have the WAR file extracted already
[ https://issues.apache.org/jira/browse/SOLR-7227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644514#comment-14644514 ] Shawn Heisey commented on SOLR-7227: bq. But custom scripts may already break because there is no WAR anymore. Very true, and that is something that I had not considered. I also have no strong opinion, and it sounds like this entire change is destined to lead to user confusion, so let's jump in all the way! Solr distribution archive should have the WAR file extracted already Key: SOLR-7227 URL: https://issues.apache.org/jira/browse/SOLR-7227 Project: Solr Issue Type: Improvement Affects Versions: 5.0 Reporter: Timothy Potter Assignee: Timothy Potter Fix For: 5.3, Trunk Attachments: SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227-part2.patch, SOLR-7227.patch, SOLR-7227.patch Currently, there is still the solr.war file in the server/webapps directory, which gets extracted upon first startup of Solr. It would be better to ship Solr with the WAR already extracted, thus taking us one step closer to truly not shipping a WAR file. Moreover, some users have reported not being able to make /opt/solr truly read-only because of the need to extract the WAR on-the-fly upon first startup. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7841) solradmin's query browser incorrectly renders docs with large ID numbers
[ https://issues.apache.org/jira/browse/SOLR-7841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Tryzelaar updated SOLR-7841: -- Description: JavaScript JSON engines parse integers as a floating point number. This causes problems for documents a large id, such as {{38585496994725888}}, which gets cast into the floating point number {{38585496994725890}}. This means that one cannot reliably copy an id from a {{\*:\*}} query and search for it with {{id:...}}. (was: JavaScript JSON engines parse integers as a floating point number. This causes problems for documents a large id, such as {{38585496994725888}}, which gets cast into the floating point number {{38585496994725890}}. This means that one cannot reliably copy an id from a {{*:*}} query and search for it with {{id:...}}.) solradmin's query browser incorrectly renders docs with large ID large numbers -- Key: SOLR-7841 URL: https://issues.apache.org/jira/browse/SOLR-7841 Project: Solr Issue Type: Bug Components: web gui Affects Versions: 4.10.4 Reporter: Erick Tryzelaar JavaScript JSON engines parse integers as a floating point number. This causes problems for documents a large id, such as {{38585496994725888}}, which gets cast into the floating point number {{38585496994725890}}. This means that one cannot reliably copy an id from a {{\*:\*}} query and search for it with {{id:...}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_51) - Build # 4952 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4952/ Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC 1 tests failed. FAILED: org.apache.solr.handler.TestReplicationHandler.doTestStressReplication Error Message: [index.20150728233218532, index.20150728233219567, index.properties, replication.properties] expected:1 but was:2 Stack Trace: java.lang.AssertionError: [index.20150728233218532, index.20150728233219567, index.properties, replication.properties] expected:1 but was:2 at __randomizedtesting.SeedInfo.seed([4EB89D84FF4AD00A:95139D42FA62B9B9]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:818) at org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:785) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Created] (LUCENE-6701) Move queries to the lucene/queries module
Adrien Grand created LUCENE-6701: Summary: Move queries to the lucene/queries module Key: LUCENE-6701 URL: https://issues.apache.org/jira/browse/LUCENE-6701 Project: Lucene - Core Issue Type: Task Reporter: Adrien Grand Assignee: Adrien Grand Priority: Minor We should try to move our lucene/core query impls to the lucene/queries module. For most queries, this should be easy (spans, Wildcard, etc.). However, for more core queries, moving them to lucene/queries might require adding more dependencies between modules, so we might want to keep a couple of them in core, like TermQuery (I have not really dug yet). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (LUCENE-6699) Integrate lat/lon BKD and spatial3d
[ https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644248#comment-14644248 ]

Karl Wright edited comment on LUCENE-6699 at 7/28/15 11:23 AM:
---

bq. Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape?

Ok, so now you have me confused a bit as to what your requirements are for BKD. If you want to split the globe up into lat/lon rectangles, and use BKD that way to descend, then obviously you'd need points to be stored in lat/lon. But that would make less sense for geospatial3d, because what you're really trying to do is assess membership in a shape, or distance also in regards to a shape, both of which require (x,y,z) not lat/lon. Yes, you can convert to (x,y,z) from lat/lon, but the conversion is relatively expensive.

Instead, I could imagine just staying natively in (x,y,z), and doing your splits in that space, e.g. split in x, then in y, then in z. So you'd have a GeoPoint3D which would pack (x,y,z) in a format you could rapidly extract, and a splitting algorithm that would use the known ranges for these values. Does that make sense to you? Would that work with BKD?

Integrate lat/lon BKD and spatial3d
---
Key: LUCENE-6699
URL: https://issues.apache.org/jira/browse/LUCENE-6699
Project: Lucene - Core
Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless

I'm opening this for discussion, because I'm not yet sure how to do this integration, because of my ignorance about spatial in general and spatial3d in particular :)

Our BKD tree impl is very fast at doing lat/lon shape intersection (bbox, polygon, soon distance: LUCENE-6698) against previously indexed points.

I think to integrate with spatial3d, we would first need to record lat/lon/z into doc values. Somewhere I saw discussion about how we could stuff all 3 into a single long value with acceptable precision loss? Or, we could use BinaryDocValues? We need all 3 dims available to do the fast per-hit query time filtering.

But, second: what do we index into the BKD tree? Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape? Or ... must we index all 3 dimensions into the BKD tree (seems like this could be somewhat wasteful)?

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
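The lat/lon-to-(x,y,z) conversion Karl calls "relatively expensive" can be sketched as follows. This is an illustrative, unit-sphere-only version (the actual spatial3d GeoPoint also accounts for the WGS84 ellipsoid, whose values slightly exceed 1.0 in absolute value); the class and method names here are hypothetical:

```java
// Minimal sketch: convert latitude/longitude (in radians) to a point
// (x,y,z) on the unit sphere. Assumes a perfect sphere; spatial3d's
// real conversion additionally handles the WGS84 ellipsoid.
public class LatLonToXYZ {
  public static double[] toXYZ(double latRad, double lonRad) {
    double cosLat = Math.cos(latRad);
    return new double[] {
      cosLat * Math.cos(lonRad),  // x
      cosLat * Math.sin(lonRad),  // y
      Math.sin(latRad)            // z
    };
  }
}
```

The two trig calls per axis are why staying natively in (x,y,z), as Karl suggests, avoids per-point conversion cost at query time.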
[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d
[ https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644248#comment-14644248 ]

Karl Wright commented on LUCENE-6699:
-

bq. Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape?

Ok, so now you have me confused a bit as to what your requirements are for BKD. If you want to split the globe up into lat/lon rectangles, and use BKD that way to descend, then obviously you'd need points to be stored in lat/lon. But that would make less sense for geospatial3d, because what you're really trying to do is assess membership in a shape, or distance also in regards to a shape, both of which require (x,y,z) not lat/lon. Yes, you can convert to (x,y,z) from lat/lon, but the conversion is relatively expensive.

Instead, I could imagine just staying natively in (x,y,z), and doing your splits in that space, e.g. split in x, then in y, then in z. So you'd have a GeoPoint3D which would pack (x,y,z) in a format you could rapidly extract, and a splitting algorithm that would use the known ranges for these values. Does that make sense to you? Would that work with BKD?
[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d
[ https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644204#comment-14644204 ]

Karl Wright commented on LUCENE-6699:
-

Hi Mike,

The coordinates you'd need to store for geospatial3d are: (x,y,z), rather than (lat,lon,z). The values are floating-point numbers that range between -1.0 and 1.0 for a sphere, and slightly more than that in abs value for a WGS84 ellipsoid.

There was a ticket where I attached code for a packing scheme that would ram all three values into a 64-bit long; I'll see if I can find it. That packing scheme basically gave you resolution of about a meter. However, as you have pointed out, it's not strictly necessary to stick to 64-bit longs either, so you're free to propose anything that makes sense. ;-)
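One possible shape for the packing Karl describes (this is an illustrative sketch, not the actual scheme attached to that ticket): quantize each of x, y, z from [-1.0, 1.0] to 21 bits and pack the three fields into one long. 21 bits per axis gives a step of 2.0/2^21 of the earth radius, i.e. resolution of a few meters; a real scheme would also widen the range slightly to cover the WGS84 ellipsoid.

```java
// Illustrative 3-into-1 packing: x, y, z each in [-1.0, 1.0] are
// quantized to 21 bits and packed into a single 64-bit long.
// Not the actual patch from the ticket referenced above.
public class XYZPacking {
  private static final int BITS = 21;
  private static final long MASK = (1L << BITS) - 1;
  private static final double SCALE = ((1L << BITS) - 1) / 2.0;

  public static long pack(double x, double y, double z) {
    return (encode(x) << (2 * BITS)) | (encode(y) << BITS) | encode(z);
  }

  public static double[] unpack(long packed) {
    return new double[] {
      decode((packed >>> (2 * BITS)) & MASK),
      decode((packed >>> BITS) & MASK),
      decode(packed & MASK)
    };
  }

  private static long encode(double v) {    // [-1,1] -> [0, 2^21 - 1]
    return Math.round((v + 1.0) * SCALE);
  }

  private static double decode(long bits) { // inverse of encode
    return bits / SCALE - 1.0;
  }
}
```

Round-trip error is bounded by half a quantization step (about 5e-7 of the radius, roughly 3 meters on the earth's surface), which matches the "about a meter"-scale resolution discussed.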
[jira] [Updated] (SOLR-7692) Implement BasicAuth based impl for the new Authentication/Authorization APIs
[ https://issues.apache.org/jira/browse/SOLR-7692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishan Chattopadhyaya updated SOLR-7692:
---
Attachment: SOLR-7692.patch

Updated the patch, fixing the test failure for BasicAuthIntegrationTest.

Implement BasicAuth based impl for the new Authentication/Authorization APIs
Key: SOLR-7692
URL: https://issues.apache.org/jira/browse/SOLR-7692
Project: Solr
Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul
Attachments: SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7692.patch, SOLR-7757.patch, SOLR-7757.patch, SOLR-7757.patch

This involves various components.

h2. Authentication

A basic auth based authentication filter. This should retrieve the user credentials from ZK. The user name and SHA-1 hash of the password should be stored in ZK.

Sample authentication json:
{code:javascript}
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "users": {
      "john": "09fljnklnoiuy98buygujkjnlk",
      "david": "f678njfgfjnklnoiuy9865ty",
      "pete": "87ykjnklndfhjh898uyiy98"
    }
  }
}
{code}

h2. Authorization plugin

This would store the roles of various users and their privileges in ZK.

Sample authorization.json:
{code:javascript}
{
  "authorization": {
    "class": "solr.ZKAuthorization",
    "roles": {
      "admin": ["john"],
      "guest": ["john", "david", "pete"]
    },
    "permissions": {
      "collection-edit": { "role": "admin" },
      "coreadmin": { "role": "admin" },
      "config-edit": { // all collections
        "role": "admin", "method": "POST"
      },
      "schema-edit": { "role": "admin", "method": "POST" },
      "update": { // all collections
        "role": "dev"
      },
      "mycoll_update": {
        "collection": "mycoll",
        "path": ["/update/*"],
        "role": ["somebody"]
      }
    }
  }
}
{code}

We will also need to provide APIs to create users and assign them roles.
Ref guide updates for security page
Hi Anshum / someone who has some time, Can you please incorporate the following changes for the https://cwiki.apache.org/confluence/display/solr/Security page, in time for the 5.3 ref guide release? Changes are here: https://docs.google.com/document/d/1wG_CQpA7_JVAVcous6fX-c-farcDIQoWHxOmQqiIUro/edit# Thanks, Ishan
[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5079 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5079/ Java: 32bit/jdk1.8.0_51 -server -XX:+UseSerialGC 4 tests failed. FAILED: org.apache.solr.cloud.CdcrRequestHandlerTest.doTest Error Message: expected:[dis]abled but was:[en]abled Stack Trace: org.junit.ComparisonFailure: expected:[dis]abled but was:[en]abled at __randomizedtesting.SeedInfo.seed([B13C768832EBB8DC:1678CE2C5F50AB65]:0) at org.junit.Assert.assertEquals(Assert.java:125) at org.junit.Assert.assertEquals(Assert.java:147) at org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:289) at org.apache.solr.cloud.CdcrRequestHandlerTest.doTestBufferActions(CdcrRequestHandlerTest.java:138) at org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:40) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
[jira] [Created] (SOLR-7839) SolrJ support for BasicAuth
Ishan Chattopadhyaya created SOLR-7839:
--
Summary: SolrJ support for BasicAuth
Key: SOLR-7839
URL: https://issues.apache.org/jira/browse/SOLR-7839
Project: Solr
Issue Type: New Feature
Reporter: Ishan Chattopadhyaya

Given the plugin for server side support for BasicAuth in SOLR-7692, an HttpClientConfigurer (similar to Krb5HttpClientConfigurer) could be written up for users to use in their SolrJ applications that interact with such a secure Solr cluster.
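The core of what such a configurer would do on the client side is attach a preemptive Basic {{Authorization}} header to every request. A minimal, stdlib-only sketch (the real implementation would subclass HttpClientConfigurer and register with HttpClient; the class name and credentials below are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative sketch of the client-side half of Basic auth: building
// the Authorization header value a SolrJ configurer would attach to
// each request. Not the actual SolrJ API.
public class BasicAuthHeader {
  public static String authorization(String user, String password) {
    String credentials = user + ":" + password;
    return "Basic " + Base64.getEncoder()
        .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
  }
}
```

For example, {{authorization("solr", "SolrRocks")}} yields {{Basic c29scjpTb2xyUm9ja3M=}}, the header the server-side BasicAuthPlugin would then decode and verify.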
[jira] [Updated] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Khludnev updated SOLR-6234:
---
Attachment: SOLR-6234.patch

Added link to the Lucene join javadocs.

Scoring modes for query time join
--
Key: SOLR-6234
URL: https://issues.apache.org/jira/browse/SOLR-6234
Project: Solr
Issue Type: New Feature
Components: query parsers
Affects Versions: 5.3
Reporter: Mikhail Khludnev
Assignee: Timothy Potter
Labels: features, patch, test
Fix For: 5.3
Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch

It adds the ability to call Lucene's JoinUtil by specifying a local param, i.e. \{!join score=...}. It supports:
- {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil)
- -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814.
- -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise.
- there is test coverage for the cross-core join case.
- so far it joins string and multivalued string fields (Sorted, SortedSet, Binary), but not numeric DVs. Follow-up: LUCENE-5868
[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk1.9.0-ea-b60) - Build # 13635 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13635/ Java: 64bit/jdk1.9.0-ea-b60 -XX:+UseCompressedOops -XX:+UseSerialGC -Djava.locale.providers=JRE,SPI 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.handler.component.TestTrackingShardHandlerFactory Error Message: ERROR: SolrIndexSearcher opens=7 closes=6 Stack Trace: java.lang.AssertionError: ERROR: SolrIndexSearcher opens=7 closes=6 at __randomizedtesting.SeedInfo.seed([FD93AB75F62B7A16]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467) at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233) at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:502) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at java.lang.Thread.run(Thread.java:745) FAILED: junit.framework.TestSuite.org.apache.solr.handler.component.TestTrackingShardHandlerFactory Error Message: 2 threads leaked from SUITE scope at org.apache.solr.handler.component.TestTrackingShardHandlerFactory: 1) Thread[id=13836, name=qtp1624940342-13836, state=RUNNABLE, group=TGRP-TestTrackingShardHandlerFactory] at java.util.WeakHashMap.get(WeakHashMap.java:403) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.calcEtag(HttpCacheHeaderUtil.java:101) at org.apache.solr.servlet.cache.HttpCacheHeaderUtil.doCacheHeaderValidation(HttpCacheHeaderUtil.java:219) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:445) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:191) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:106) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83) at 
org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at
[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.
[ https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644312#comment-14644312 ]

Erick Erickson commented on SOLR-7836:
--

OK, the patch doesn't address the real issue. I still think it's bad to leave those dangling pauseWriters around, but the _real_ issue appears to be over in DirectUpdateHandler2. There's a ref-counted index writer obtained in addDoc0. But then addDoc0 calls addAndDelete, which tries to get a ref-counted index writer again. If another thread sets pauseWriter in between, then it's deadlocked.

I think the solution is to just pass the IndexWriter down to addAndDelete, but I won't have time to really look until this evening.

Possible deadlock when closing refcounted index writers.
Key: SOLR-7836
URL: https://issues.apache.org/jira/browse/SOLR-7836
Project: Solr
Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
Attachments: SOLR-7836.patch

Preliminary patch for what looks like a possible race condition between writerFree and pauseWriter in DefaultSolrCoreState. Looking for comments and/or why I'm completely missing the boat.
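The fix Erick sketches — acquire once at the top and pass the writer down instead of re-acquiring — looks roughly like this. The names mirror DirectUpdateHandler2 but this is a simplification (a plain lock stands in for the ref-counted writer and pauseWriter machinery):

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch of the proposed fix: addDoc0() acquires the
// writer once and passes it to addAndDelete(), so there is no second
// acquisition for another thread's pauseWriter to slip in front of.
// A StringBuilder stands in for the real IndexWriter.
public class PassWriterDown {
  private final ReentrantLock writerGuard = new ReentrantLock();
  private final StringBuilder writer = new StringBuilder();

  public void addDoc0(String doc, boolean alsoDelete) {
    writerGuard.lock();            // single acquisition, at the top
    try {
      if (alsoDelete) {
        addAndDelete(writer, doc); // pass the held writer down
      } else {
        writer.append(doc);
      }
    } finally {
      writerGuard.unlock();
    }
  }

  // Takes the writer as an argument instead of re-acquiring it.
  private void addAndDelete(StringBuilder w, String doc) {
    w.append(doc).append("!del");
  }

  public String contents() { return writer.toString(); }
}
```

The key property is that the lock (standing in for the ref-counted writer) is taken exactly once per top-level operation, so no window exists between two acquisitions.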
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14645443#comment-14645443 ] Noble Paul commented on SOLR-5606: -- It is fine to have {{/solr/collections}} end point for collection related APIs. Just keep in mind that we are taking up a name in the namespace (this could collide with a collection name) with every path we add. What I mean to say is let us not add too many top level paths {{/solr/collections}}, {{/solr/cores}} , {{/solr/configs}}, {{solr/cluster}} Let's have a list of end points we need and make a call on what should be the paths. It is OK to have a prefix of {{/solr/admin/*}} because we already have that namespace taken up. So the only name collision is {{admin}} which is already taken up REST based Collections API -- Key: SOLR-5606 URL: https://issues.apache.org/jira/browse/SOLR-5606 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jan Høydahl Priority: Minor Fix For: Trunk For consistency reasons, the collections API (and other admin APIs) should be REST based. Spinoff from SOLR-1523 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 289 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/289/ No tests ran. Build Log: [...truncated 53018 lines...] prepare-release-no-sign: [mkdir] Created dir: /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist [copy] Copying 461 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene [copy] Copying 245 files to /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr [smoker] Java 1.7 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7 [smoker] Java 1.8 JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8 [smoker] NOTE: output encoding is UTF-8 [smoker] [smoker] Load release URL file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/... [smoker] [smoker] Test Lucene... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.01 sec (11.7 MB/sec) [smoker] check changes HTML... [smoker] download lucene-5.3.0-src.tgz... [smoker] 28.4 MB in 0.04 sec (674.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.3.0.tgz... [smoker] 65.6 MB in 0.10 sec (638.3 MB/sec) [smoker] verify md5/sha1 digests [smoker] download lucene-5.3.0.zip... [smoker] 75.8 MB in 0.12 sec (650.2 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack lucene-5.3.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.7... [smoker] test demo with 1.8... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.3.0.zip... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] test demo with 1.7... [smoker] got 6041 hits for query lucene [smoker] checkindex with 1.7... [smoker] test demo with 1.8... 
[smoker] got 6041 hits for query lucene [smoker] checkindex with 1.8... [smoker] check Lucene's javadoc JAR [smoker] unpack lucene-5.3.0-src.tgz... [smoker] make sure no JARs/WARs in src dist... [smoker] run ant validate [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.7... [smoker] got 213 hits for query lucene [smoker] checkindex with 1.7... [smoker] generate javadocs w/ Java 7... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'... [smoker] test demo with 1.8... [smoker] got 213 hits for query lucene [smoker] checkindex with 1.8... [smoker] generate javadocs w/ Java 8... [smoker] [smoker] Crawl/parse... [smoker] [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] success! [smoker] [smoker] Test Solr... [smoker] test basics... [smoker] get KEYS [smoker] 0.1 MB in 0.00 sec (32.0 MB/sec) [smoker] check changes HTML... [smoker] download solr-5.3.0-src.tgz... [smoker] 36.9 MB in 0.45 sec (81.6 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.3.0.tgz... [smoker] 128.3 MB in 1.40 sec (92.0 MB/sec) [smoker] verify md5/sha1 digests [smoker] download solr-5.3.0.zip... [smoker] 135.9 MB in 1.58 sec (85.8 MB/sec) [smoker] verify md5/sha1 digests [smoker] unpack solr-5.3.0.tgz... [smoker] verify JAR metadata/identity/no javax.* or java.* classes... [smoker] unpack lucene-5.3.0.tgz... 
[smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes [smoker] **WARNING**: skipping check of /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes [smoker] copying unpacked distribution for Java 7 ... [smoker] test solr example w/ Java 7... [smoker] start Solr instance (log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.3.0-java7/solr-example.log)... [smoker] No process found for Solr node running on port 8983 [smoker] starting Solr on port 8983 from
[jira] [Updated] (SOLR-7842) ZK connection loss or session expiry events should not fire config directory listeners
[ https://issues.apache.org/jira/browse/SOLR-7842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Shalin Shekhar Mangar updated SOLR-7842:
Attachment: SOLR-7842.patch

Trivial patch which returns from the watcher method if the event state is disconnected or expired.

ZK connection loss or session expiry events should not fire config directory listeners
--
Key: SOLR-7842
URL: https://issues.apache.org/jira/browse/SOLR-7842
Project: Solr
Issue Type: Bug
Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Shalin Shekhar Mangar
Priority: Minor
Labels: difficulty-easy, impact-low
Fix For: 5.3, Trunk
Attachments: SOLR-7842.patch

The watcher on the config directory has the following in the process method:
{code}
Stat stat = null;
try {
  stat = zkClient.exists(zkDir, null, true);
} catch (KeeperException e) {
  // ignore, it is not a big deal
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
}
boolean resetWatcher = false;
try {
  resetWatcher = fireEventListeners(zkDir);
} finally {
  if (Event.EventType.None.equals(event.getType())) {
    log.info("A node got unwatched for {}", zkDir);
  } else {
    if (resetWatcher) setConfWatcher(zkDir, this, stat);
    else log.info("A node got unwatched for {}", zkDir);
  }
}
{code}

Even if the watcher is fired because of session expiry or connection loss, the fireEventListeners() method is executed, and all subsequent listener invocations fail due to the loss of connection/session. All this is logged as well.
{code}
466879 WARN (Thread-78) [ ] o.a.s.c.ZkController listener throws error
org.apache.solr.common.SolrException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /configs/jepsen/params.json
  at org.apache.solr.core.RequestParams.getFreshRequestParams(RequestParams.java:158)
  at org.apache.solr.core.SolrConfig.refreshRequestParams(SolrConfig.java:909)
  at org.apache.solr.core.SolrCore$11.run(SolrCore.java:2585)
  at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2385)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /configs/jepsen/params.json
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
  at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
  at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:302)
  at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:299)
  at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
  at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:299)
  at org.apache.solr.core.RequestParams.getFreshRequestParams(RequestParams.java:148)
  ... 3 more
{code}

We should check the keeper state in addition to the event type and ignore such events.
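The check the patch adds amounts to bailing out of the watcher before firing listeners when the event only reports a connection-state change. A self-contained sketch of that guard ({{KeeperState}} here is a local stand-in for {{org.apache.zookeeper.Watcher.Event.KeeperState}}; the method name is illustrative):

```java
// Illustrative guard for the config-directory watcher: do not fire
// config listeners for connection-loss or session-expiry events, since
// listener work would only fail with ConnectionLossException and ZK
// will deliver a fresh event once the session is re-established.
public class ConfWatcherGuard {
  enum KeeperState { SyncConnected, Disconnected, Expired } // stand-in enum

  /** Returns true if the config listeners should fire for this event. */
  public static boolean shouldFireListeners(KeeperState state) {
    return state != KeeperState.Disconnected && state != KeeperState.Expired;
  }
}
```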
[jira] [Comment Edited] (LUCENE-6699) Integrate lat/lon BKD and spatial3d
[ https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644248#comment-14644248 ] Karl Wright edited comment on LUCENE-6699 at 7/29/15 5:49 AM: -- bq. Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape? [~mikemccand] Ok, so now you have me confused a bit as to what your requirements are for BKD. If you want to split the globe up into lat/lon rectangles, and use BKD that way to descend, then obviously you'd need points to be stored in lat/lon. But that would make less sense for geospatial3d, because what you're really trying to do is assess membership in a shape, or distance also in regards to a shape, both of which require (x,y,z) not lat/lon. Yes, you can convert to (x,y,z) from lat/lon, but the conversion is relatively expensive. Instead, I could imagine just staying natively in (x,y,z), and doing your splits in that space, e.g. split in x, then in y, then in z. So you'd have a GeoPoint3D which would pack (x,y,z) in a format you could rapidly extract, and a splitting algorithm that would use the known ranges for these values. Does that make sense to you? Would that work with BKD? was (Author: kwri...@metacarta.com): bq. Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape? Ok, so now you have me confused a bit as to what your requirements are for BKD. If you want to split the globe up into lat/lon rectangles, and use BKD that way to descend, then obviously you'd need points to be stored in lat/lon. But that would make less sense for geospatial3d, because what you're really trying to do is assess membership in a shape, or distance also in regards to a shape, both of which require (x,y,z) not lat/lon. Yes, you can convert to (x,y,z) from lat/lon, but the conversion is relatively expensive. 
Instead, I could imagine just staying natively in (x,y,z), and doing your splits in that space, e.g. split in x, then in y, then in z. So you'd have a GeoPoint3D which would pack (x,y,z) in a format you could rapidly extract, and a splitting algorithm that would use the known ranges for these values. Does that make sense to you? Would that work with BKD? Integrate lat/lon BKD and spatial3d --- Key: LUCENE-6699 URL: https://issues.apache.org/jira/browse/LUCENE-6699 Project: Lucene - Core Issue Type: New Feature Reporter: Michael McCandless Assignee: Michael McCandless I'm opening this for discussion, because I'm not yet sure how to do this integration, because of my ignorance about spatial in general and spatial3d in particular :) Our BKD tree impl is very fast at doing lat/lon shape intersection (bbox, polygon, soon distance: LUCENE-6698) against previously indexed points. I think to integrate with spatial3d, we would first need to record lat/lon/z into doc values. Somewhere I saw discussion about how we could stuff all 3 into a single long value with acceptable precision loss? Or, we could use BinaryDocValues? We need all 3 dims available to do the fast per-hit query time filtering. But, second: what do we index into the BKD tree? Can we just index earth surface lat/lon, and then at query time is spatial3d able to give me an enclosing surface lat/lon bbox for a 3d shape? Or ... must we index all 3 dimensions into the BKD tree (seems like this could be somewhat wasteful)? -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
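Karl's point above about the lat/lon to (x,y,z) conversion being "relatively expensive" can be made concrete. A minimal sketch, assuming a simple unit-sphere model (the actual spatial3d code uses its own planet model, e.g. a WGS84 ellipsoid, and differs from this):

```java
// Illustrative sketch only: converting surface lat/lon (degrees) to
// unit-sphere (x, y, z). Not the actual spatial3d GeoPoint math, which
// accounts for the planet model; this just shows why the conversion is
// non-trivial at query time.
public final class LatLonToXYZ {
    public static double[] toXYZ(double latDeg, double lonDeg) {
        double lat = Math.toRadians(latDeg);
        double lon = Math.toRadians(lonDeg);
        double cosLat = Math.cos(lat);
        // Three trig evaluations plus multiplies per point: cheap once,
        // costly when repeated per hit during filtering.
        return new double[] {
            cosLat * Math.cos(lon),  // x
            cosLat * Math.sin(lon),  // y
            Math.sin(lat)            // z
        };
    }
}
```

Storing packed (x,y,z) as Karl suggests would pay this cost once at index time instead of per hit at query time.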
[jira] [Updated] (SOLR-7843) Importing deltas creates a memory leak
[ https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Pablo Lozano updated SOLR-7843: --- Description: The org.apache.solr.handler.dataimport.SolrWriter is not correctly cleaning up after itself after finishing importing deltas, as the Set<Object> deltaKeys is not being cleared after the process has finished. When using a custom importer or DataSource (as in my case), I need to add additional parameters to the delta keys. When the data import finishes, deltaKeys is not set back to null, and the DataImporter, DocBuilder and SolrWriter are maintained as live objects because they are referenced by the infoRegistry of the SolrCore, which seems to be used for JMX information. It appears that starting a second delta import does not free the memory, which may in the long run cause an OutOfMemoryError; I have not checked whether starting a full import would break the references and free the memory. An easy fix is possible: add deltaKeys = null; to the SolrWriter close() method, or nullify the writer in DocBuilder after it is used in the execute() method. was: The org.apache.solr.handler.dataimport.SolrWriter is not correctly cleaning up after itself after finishing importing deltas, as the Set<Object> deltaKeys is not being cleared after the process has finished. When using a custom importer or DataSource (as in my case), I need to add additional parameters to the delta keys. When the data import finishes, deltaKeys is not set back to null, and the DataImporter, DocBuilder and SolrWriter are maintained as live objects because they are referenced by the infoRegistry of the SolrCore, which seems to be used for JMX information. It appears that starting a second delta import does not free the memory, which may in the long run cause an OutOfMemoryError; I have not checked whether starting a full import would break the references and free the memory. 
An easy fix is possible: add deltaKeys = null; to the SolrWriter close() method. Importing deltas creates a memory leak - Key: SOLR-7843 URL: https://issues.apache.org/jira/browse/SOLR-7843 Project: Solr Issue Type: Bug Components: contrib - DataImportHandler Affects Versions: 5.2.1 Reporter: Pablo Lozano Labels: memory-leak The org.apache.solr.handler.dataimport.SolrWriter is not correctly cleaning up after itself after finishing importing deltas, as the Set<Object> deltaKeys is not being cleared after the process has finished. When using a custom importer or DataSource (as in my case), I need to add additional parameters to the delta keys. When the data import finishes, deltaKeys is not set back to null, and the DataImporter, DocBuilder and SolrWriter are maintained as live objects because they are referenced by the infoRegistry of the SolrCore, which seems to be used for JMX information. It appears that starting a second delta import does not free the memory, which may in the long run cause an OutOfMemoryError; I have not checked whether starting a full import would break the references and free the memory. An easy fix is possible: add deltaKeys = null; to the SolrWriter close() method, or nullify the writer in DocBuilder after it is used in the execute() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
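The suggested fix amounts to dropping the collection reference when the writer is closed, so the keys become collectable even though the registry still pins the writer itself. A minimal, self-contained sketch of the pattern (the DeltaWriter class below is a hypothetical stand-in, not the actual SolrWriter code):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for SolrWriter. In Solr, the SolrCore infoRegistry
// (used for JMX) keeps the writer alive after the import; any large state
// the writer holds must therefore be released explicitly on close().
public final class DeltaWriter {
    private Set<Object> deltaKeys = new HashSet<>();

    public void addDeltaKey(Object key) { deltaKeys.add(key); }
    public Set<Object> getDeltaKeys() { return deltaKeys; }

    public void close() {
        // The fix described in the issue: null out deltaKeys so the keys
        // can be garbage-collected even while the writer stays referenced.
        deltaKeys = null;
    }
}
```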
[jira] [Updated] (SOLR-6625) HttpClient callback in HttpSolrServer
[ https://issues.apache.org/jira/browse/SOLR-6625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ishan Chattopadhyaya updated SOLR-6625: --- Attachment: SOLR-6625_interceptor.patch Minor fix, was causing a test failure. HttpClient callback in HttpSolrServer - Key: SOLR-6625 URL: https://issues.apache.org/jira/browse/SOLR-6625 Project: Solr Issue Type: Improvement Components: SolrJ Reporter: Gregory Chanan Assignee: Gregory Chanan Priority: Minor Attachments: SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_interceptor.patch, SOLR-6625_r1654079.patch, SOLR-6625_r1654079.patch Some of our setups use Solr in a SPNEGO/Kerberos setup (we've done this by adding our own filters to the web.xml). We have an issue in that SPNEGO requires a negotiation step, but some HttpSolrServer requests are not repeatable, notably the PUT/POST requests. So what happens is: HttpSolrServer sends the request, the server responds with a negotiation request, and the request fails because it is not repeatable. We've modified our code to send a repeatable request beforehand in these cases. It would be nicer if HttpSolrServer provided a pre/post callback when it was making an HttpClient request. This would allow administrators to make changes to the request for authentication purposes, and would allow users to make per-request changes to the HttpClient calls (e.g. modify the HttpClient RequestConfig to change the timeout on a per-request basis). -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
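The pre/post callback idea requested above can be sketched generically. This is an illustration of the hook pattern only, with hypothetical names; it is not the HttpSolrServer API or the interceptor mechanism in the attached patch:

```java
import java.util.Map;

// Illustrative sketch of a pre/post request callback: the client invokes
// hooks around each HTTP call, so a caller can mutate headers (e.g. add a
// SPNEGO Authorization header) or per-request settings without subclassing
// the client. Names here are hypothetical, not Solr's.
public final class CallbackClient {
    public interface RequestCallback {
        void beforeRequest(Map<String, String> headers);
        void afterResponse(int statusCode);
    }

    private final RequestCallback callback;

    public CallbackClient(RequestCallback callback) { this.callback = callback; }

    public int execute(Map<String, String> headers) {
        callback.beforeRequest(headers);  // pre-hook: adjust auth headers, timeouts
        int status = 200;                 // stand-in for the real HTTP round trip
        callback.afterResponse(status);   // post-hook: inspect the outcome
        return status;
    }
}
```

In Apache HttpClient terms, the same effect is what request interceptors provide, which matches the `_interceptor` patch naming on this issue.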
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644627#comment-14644627 ] Noble Paul commented on SOLR-5606: -- Yes. By default it will just give a list of all available collections; use extra params to filter. We need to push collections front and center. The type parameter could default to collection, and I should be able to request cores as well. REST based Collections API -- Key: SOLR-5606 URL: https://issues.apache.org/jira/browse/SOLR-5606 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jan Høydahl Priority: Minor Fix For: Trunk For consistency reasons, the collections API (and other admin APIs) should be REST based. Spinoff from SOLR-1523 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-5606) REST based Collections API
[ https://issues.apache.org/jira/browse/SOLR-5606?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644662#comment-14644662 ] Mark Miller commented on SOLR-5606: --- {quote}Just to be clear, are you talking about the pure REST APIs for Schema that Steve Rowe did or the bulk-style Schema API that Noble did? I'm personally a fan of the bulk operation style vs. addressing every object inside of schema (or config) as a specific resource.{quote} Whichever makes sense for the situation. I have no problem with the bulk approach when it's more user friendly. I just mean the newer more restlike approaches (rather than everything is a GET and handled by ACTION=, etc). REST based Collections API -- Key: SOLR-5606 URL: https://issues.apache.org/jira/browse/SOLR-5606 Project: Solr Issue Type: Improvement Components: SolrCloud Reporter: Jan Høydahl Priority: Minor Fix For: Trunk For consistency reasons, the collections API (and other admin APIs) should be REST based. Spinoff from SOLR-1523 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6357) Using query time Join in deleteByQuery throws ClassCastException
[ https://issues.apache.org/jira/browse/SOLR-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14644680#comment-14644680 ] Timothy Potter commented on SOLR-6357: -- Adding a unit test that uses the score join solution to fix this issue. Using query time Join in deleteByQuery throws ClassCastException Key: SOLR-6357 URL: https://issues.apache.org/jira/browse/SOLR-6357 Project: Solr Issue Type: Bug Components: query parsers Affects Versions: 4.9 Reporter: Arcadius Ahouansou Assignee: Timothy Potter Consider the following input documents, where we have: - 1 Samsung mobile phone and - 2 manufacturers: Apple and Samsung. {code} [ { "id":"galaxy note ii", "cat":"product", "manu_s":"samsung" }, { "id":"samsung", "cat":"manufacturer", "name":"Samsung Electronics" }, { "id":"apple", "cat":"manufacturer", "name":"Apple Inc" } ] {code} My objective is to delete from the default index all manufacturers not having any product in the index. After indexing ( curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: text/json' --data-binary @delete-by-join-query.json ) I went to {code}http://localhost:8983/solr/select?q=cat:manufacturer -{!join from=manu_s to=id}cat:product {code} and I could see only Apple, the only manufacturer not having any product in the index. 
However, when I use that same query for deletion: {code} http://localhost:8983/solr/update?commit=true&stream.body=<delete><query>cat:manufacturer -{!join from=manu_s to=id}cat:product</query></delete> {code} I get {code} java.lang.ClassCastException: org.apache.lucene.search.IndexSearcher cannot be cast to org.apache.solr.search.SolrIndexSearcher at org.apache.solr.search.JoinQuery.createWeight(JoinQParserPlugin.java:143) at org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185) at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526) at org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185) at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526) at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:684) at org.apache.lucene.search.QueryWrapperFilter.getDocIdSet(QueryWrapperFilter.java:55) at org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:552) at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:287) at {code} This seems to be a bug. Looking at the source code, the exception is happening in {code} @Override public Weight createWeight(IndexSearcher searcher) throws IOException { return new JoinQueryWeight((SolrIndexSearcher)searcher); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
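The failure mode in createWeight is a blind downcast: delete-by-query runs under a plain Lucene IndexSearcher, not Solr's subclass. A self-contained sketch of the problem and of a defensive alternative (toy classes, not the actual Lucene/Solr types or the eventual fix):

```java
// Minimal stand-alone reproduction of the failure mode: a blind downcast
// from a base type throws ClassCastException when the caller (here playing
// the role of Lucene's delete-by-query path) passes the base type.
public final class DowncastDemo {
    static class IndexSearcher { }
    static class SolrIndexSearcher extends IndexSearcher { }

    // Mirrors the shape of JoinQuery.createWeight: assumes every searcher
    // it ever sees is Solr's subclass.
    static SolrIndexSearcher blindCast(IndexSearcher searcher) {
        return (SolrIndexSearcher) searcher;  // throws for a plain IndexSearcher
    }

    // A defensive alternative: check before casting and fail with a
    // descriptive message instead of a raw ClassCastException.
    static SolrIndexSearcher checkedCast(IndexSearcher searcher) {
        if (!(searcher instanceof SolrIndexSearcher)) {
            throw new IllegalStateException(
                "Join query requires a SolrIndexSearcher, got " + searcher.getClass());
        }
        return (SolrIndexSearcher) searcher;
    }
}
```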
[jira] [Assigned] (SOLR-6357) Using query time Join in deleteByQuery throws ClassCastException
[ https://issues.apache.org/jira/browse/SOLR-6357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Potter reassigned SOLR-6357: Assignee: Timothy Potter Using query time Join in deleteByQuery throws ClassCastException Key: SOLR-6357 URL: https://issues.apache.org/jira/browse/SOLR-6357 Project: Solr Issue Type: Bug Components: query parsers Affects Versions: 4.9 Reporter: Arcadius Ahouansou Assignee: Timothy Potter Consider the following input documents, where we have: - 1 Samsung mobile phone and - 2 manufacturers: Apple and Samsung. {code} [ { "id":"galaxy note ii", "cat":"product", "manu_s":"samsung" }, { "id":"samsung", "cat":"manufacturer", "name":"Samsung Electronics" }, { "id":"apple", "cat":"manufacturer", "name":"Apple Inc" } ] {code} My objective is to delete from the default index all manufacturers not having any product in the index. After indexing ( curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: text/json' --data-binary @delete-by-join-query.json ) I went to {code}http://localhost:8983/solr/select?q=cat:manufacturer -{!join from=manu_s to=id}cat:product {code} and I could see only Apple, the only manufacturer not having any product in the index. 
However, when I use that same query for deletion: {code} http://localhost:8983/solr/update?commit=true&stream.body=<delete><query>cat:manufacturer -{!join from=manu_s to=id}cat:product</query></delete> {code} I get {code} java.lang.ClassCastException: org.apache.lucene.search.IndexSearcher cannot be cast to org.apache.solr.search.SolrIndexSearcher at org.apache.solr.search.JoinQuery.createWeight(JoinQParserPlugin.java:143) at org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185) at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526) at org.apache.lucene.search.BooleanQuery$BooleanWeight.<init>(BooleanQuery.java:185) at org.apache.lucene.search.BooleanQuery.createWeight(BooleanQuery.java:526) at org.apache.lucene.search.IndexSearcher.createNormalizedWeight(IndexSearcher.java:684) at org.apache.lucene.search.QueryWrapperFilter.getDocIdSet(QueryWrapperFilter.java:55) at org.apache.lucene.index.BufferedUpdatesStream.applyQueryDeletes(BufferedUpdatesStream.java:552) at org.apache.lucene.index.BufferedUpdatesStream.applyDeletesAndUpdates(BufferedUpdatesStream.java:287) at {code} This seems to be a bug. Looking at the source code, the exception is happening in {code} @Override public Weight createWeight(IndexSearcher searcher) throws IOException { return new JoinQueryWeight((SolrIndexSearcher)searcher); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-6234) Scoring modes for query time join
[ https://issues.apache.org/jira/browse/SOLR-6234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Timothy Potter reassigned SOLR-6234: Assignee: Mikhail Khludnev (was: Timothy Potter) Scoring modes for query time join -- Key: SOLR-6234 URL: https://issues.apache.org/jira/browse/SOLR-6234 Project: Solr Issue Type: New Feature Components: query parsers Affects Versions: 5.3 Reporter: Mikhail Khludnev Assignee: Mikhail Khludnev Labels: features, patch, test Fix For: 5.3 Attachments: SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, SOLR-6234.patch, otherHandler.patch It adds the ability to call Lucene's JoinUtil by specifying a local param, i.e. \{!join score=...}. It supports: - {{score=none|avg|max|total}} local param (passed as ScoreMode to JoinUtil) - -supports {{b=100}} param to pass {{Query.setBoost()}}- postponed till SOLR-7814. - -{{multiVals=true|false}} is introduced- YAGNI, let me know otherwise. - there is test coverage for the cross-core join case. - so far it joins string and multivalued string fields (Sorted, SortedSet, Binary), but not numeric DVs. follow-up LUCENE-5868 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
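As a rough illustration of what the score=none|avg|max|total modes mean when several joined documents contribute to one result: this sketch mirrors the aggregation semantics of Lucene's ScoreMode only, not the JoinUtil implementation, and models "none" as simply ignoring the joined scores.

```java
// Illustrative aggregation of joined-document scores under the four score
// modes exposed by the {!join score=...} local param. Semantics only; the
// real JoinUtil computes these incrementally during collection.
public final class JoinScore {
    public static double aggregate(String mode, double[] joinedScores) {
        // "none" means joined scores are not used (modeled here as 0.0).
        if (joinedScores.length == 0 || "none".equals(mode)) return 0.0;
        double total = 0.0;
        double max = Double.NEGATIVE_INFINITY;
        for (double s : joinedScores) {
            total += s;
            max = Math.max(max, s);
        }
        switch (mode) {
            case "total": return total;                        // sum of scores
            case "max":   return max;                          // best match wins
            case "avg":   return total / joinedScores.length;  // mean score
            default: throw new IllegalArgumentException("unknown mode: " + mode);
        }
    }
}
```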