[jira] [Commented] (SOLR-3428) SolrCmdDistributor flushAdds/flushDeletes problems

2012-05-03 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13267490#comment-13267490
 ] 

Mark Miller commented on SOLR-3428:
---

Any chance you could break out a test case and fix for this from that? I'd like 
to see this fixed sooner rather than later, and that patch is fairly large with 
a lot of unrelated changes in the other issue.

> SolrCmdDistributor flushAdds/flushDeletes problems
> --
>
> Key: SOLR-3428
> URL: https://issues.apache.org/jira/browse/SOLR-3428
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java), SolrCloud, update
>Affects Versions: 4.0
>Reporter: Per Steffensen
>Assignee: Per Steffensen
>  Labels: add, delete, replica, solrcloud, update
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A few problems with SolrCmdDistributor.flushAdds/flushDeletes
> * If the number of AddRequests/DeleteRequests in alist/dlist is below the 
> limit for a specific node, the method returns immediately and doesn't flush 
> for subsequent nodes
> * When returning immediately because there are fewer requests than the limit 
> for a given node, previous nodes that have already been flushed/submitted are 
> not removed from the adds/deletes maps (causing them to be flushed/submitted 
> again the next time flushAdds/flushDeletes is executed)
> * The idea about just combining params does not work for SEEN_LEADER params 
> (and probably others as well). Since SEEN_LEADER cannot be expressed (unlike 
> commitWithin and overwrite) for individual operations in the request, you 
> need to send two separate submits. One containing requests with 
> SEEN_LEADER=true and one with SEEN_LEADER=false.
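The three fixes above can be sketched in a single flush loop. This is illustrative JDK-only code, not the actual SolrCmdDistributor: `flush`, `LIMIT`, and the `"L:"`/`"F:"` prefix (standing in for the SEEN_LEADER flag) are all hypothetical names.

```java
import java.util.*;

// Sketch of the per-node flush pattern the issue calls for: skip (rather than
// return on) nodes below the limit, drop flushed entries from the map, and
// submit requests carrying different leader flags separately, since that flag
// cannot be expressed per individual operation in one request.
public class FlushSketch {
  static final int LIMIT = 2; // hypothetical flush threshold per node

  /** Flushes queued ops per node; returns the ops actually submitted, in order. */
  static List<String> flush(Map<String, List<String>> opsByNode) {
    List<String> submitted = new ArrayList<>();
    Iterator<Map.Entry<String, List<String>>> it = opsByNode.entrySet().iterator();
    while (it.hasNext()) {
      Map.Entry<String, List<String>> e = it.next();
      if (e.getValue().size() < LIMIT) {
        continue; // bug #1 fix: don't return early; later nodes still need flushing
      }
      // bug #3 fix: group by the leader flag (encoded here as an "L:"/"F:"
      // prefix) and submit each group on its own.
      Map<String, List<String>> byFlag = new LinkedHashMap<>();
      for (String op : e.getValue()) {
        byFlag.computeIfAbsent(op.substring(0, 2), k -> new ArrayList<>()).add(op);
      }
      for (List<String> group : byFlag.values()) {
        submitted.addAll(group); // stand-in for submitting the group to the node
      }
      it.remove(); // bug #2 fix: flushed nodes must leave the map
    }
    return submitted;
  }
}
```

Below-limit nodes stay queued in the map; only flushed nodes are removed, so a later flush cannot resubmit them.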

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-3437) Recovery issues a spurious commit to the cluster.

2012-05-05 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3437:
-

 Summary: Recovery issues a spurious commit to the cluster.
 Key: SOLR-3437
 URL: https://issues.apache.org/jira/browse/SOLR-3437
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.0


as reported by Trym R. Møller on the mailing list.


[jira] [Updated] (SOLR-3437) Recovery issues a spurious commit to the cluster.

2012-05-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3437:
--

Component/s: SolrCloud

> Recovery issues a spurious commit to the cluster.
> -
>
> Key: SOLR-3437
> URL: https://issues.apache.org/jira/browse/SOLR-3437
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
>
> as reported by Trym R. Møller on the mailing list.


[jira] [Resolved] (SOLR-3437) Recovery issues a spurious commit to the cluster.

2012-05-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3437.
---

Resolution: Fixed

Thanks Trym!

> Recovery issues a spurious commit to the cluster.
> -
>
> Key: SOLR-3437
> URL: https://issues.apache.org/jira/browse/SOLR-3437
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
>
> as reported by Trym R. Møller on the mailing list.


[jira] [Commented] (SOLR-3221) Make Shard handler threadpool configurable

2012-05-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13270498#comment-13270498
 ] 

Mark Miller commented on SOLR-3221:
---

bq. I am loathe to submit a patch for changing the CHANGES.txt

Usually the committer handles this - no rules about it though. 

I'd go with something a bit shorter - no need to get into the gritty details - 
that's why the JIRA issue number is there. I'd stick to something closer to 
"Make shard handler threadpool configurable." or "Added the ability to directly 
configure aspects of the concurrency and thread-pooling used within distributed 
search in Solr."

> Make Shard handler threadpool configurable
> --
>
> Key: SOLR-3221
> URL: https://issues.apache.org/jira/browse/SOLR-3221
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 3.6, 4.0
>Reporter: Greg Bowyer
>Assignee: Erick Erickson
>  Labels: distributed, http, shard
> Fix For: 3.6, 4.0
>
> Attachments: SOLR-3221-3x_branch.patch, SOLR-3221-3x_branch.patch, 
> SOLR-3221-3x_branch.patch, SOLR-3221-3x_branch.patch, 
> SOLR-3221-3x_branch.patch, SOLR-3221-trunk.patch, SOLR-3221-trunk.patch, 
> SOLR-3221-trunk.patch, SOLR-3221-trunk.patch, SOLR-3221-trunk.patch
>
>
> From profiling of monitor contention, as well as observations of the
> 95th and 99th response times for nodes that perform distributed search
> (or "aggregator" nodes), it would appear that the HttpShardHandler code
> currently does a suboptimal job of managing outgoing shard level
> requests.
> Presently, the code contained within Lucene 3.5's SearchHandler and
> Lucene trunk / 3x's ShardHandlerFactory creates arbitrary threads in
> order to service distributed search requests. This is done to limit
> the size of the threadpool such that it does not consume resources in
> deployment configurations that do not use distributed search.
> This unfortunately has two impacts on the response time if the node
> coordinating the distribution is under high load.
> The usage of the MaxConnectionsPerHost configuration option results in
> aggressive activity on semaphores within HttpCommons; it has been
> observed that the aggregator can have a response time far greater than
> that of the searchers. The above monitor contention would appear to
> suggest that in some cases it is possible for liveness issues to occur
> and for simple queries to be starved of resources simply due to a lack
> of attention from the viewpoint of context switching, with, as
> mentioned above, the HttpCommons connection being hotly contended.
> The fair, queue-based configuration eliminates this, at the cost of
> throughput.
> This patch aims to make the threadpool largely configurable, allowing
> those using Solr to choose the throughput vs. latency balance they
> desire.
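That latency-vs-throughput choice can be illustrated with a plain JDK executor. This is a generic sketch, not Solr's actual HttpShardHandlerFactory; the method name and the queue bound of 1000 are assumptions.

```java
import java.util.concurrent.*;

// A SynchronousQueue favors latency: each submitted request is handed directly
// to a thread (spawning new ones up to maxPoolSize). A bounded
// LinkedBlockingQueue is the "fair, queue based" option: the thread count stays
// capped and excess work queues up, trading latency for bounded resource use.
public class ShardPoolSketch {
  static ThreadPoolExecutor build(int core, int max, int keepAliveSecs, boolean fairQueue) {
    BlockingQueue<Runnable> q = fairQueue
        ? new LinkedBlockingQueue<>(1000)   // queue work; thread count stays near core
        : new SynchronousQueue<>();         // direct handoff; grow threads up to max
    return new ThreadPoolExecutor(core, max, keepAliveSecs, TimeUnit.SECONDS, q);
  }
}
```

Exposing these constructor parameters in configuration is essentially what "make the threadpool configurable" amounts to.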


[jira] [Commented] (SOLR-3174) Visualize Cluster State

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276768#comment-13276768
 ] 

Mark Miller commented on SOLR-3174:
---

This is fantastic Stefan - I can't thank you enough for the work you have put 
in to the new admin UI. It is light years ahead of what we had.

Just popped back to this issue because I've been doing some testing, and it 
looks like even though I have nodes that are trying to recover, everything 
looks green and happy. I'll try and do some more debugging when I get a moment.

> Visualize Cluster State
> ---
>
> Key: SOLR-3174
> URL: https://issues.apache.org/jira/browse/SOLR-3174
> Project: Solr
>  Issue Type: New Feature
>  Components: web gui
>Reporter: Ryan McKinley
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3174-graph.png, SOLR-3174-graph.png, 
> SOLR-3174-graph.png, SOLR-3174-rgraph.png, SOLR-3174-rgraph.png, 
> SOLR-3174-rgraph.png, SOLR-3174.patch, SOLR-3174.patch, SOLR-3174.patch, 
> SOLR-3174.patch, SOLR-3174.patch
>
>
> It would be great to visualize the cluster state in the new UI. 
> See Mark's wish:
> https://issues.apache.org/jira/browse/SOLR-3162?focusedCommentId=13218272&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13218272


[jira] [Commented] (SOLR-3174) Visualize Cluster State

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276770#comment-13276770
 ] 

Mark Miller commented on SOLR-3174:
---

Whoops - I think I just missed the real issue - I'm working with 2 collections 
(collection1 and collection2), but only one of them (collection1) is showing 
up. collection2 has the recoveries occurring. I'll file a new JIRA.

> Visualize Cluster State
> ---
>
> Key: SOLR-3174
> URL: https://issues.apache.org/jira/browse/SOLR-3174
> Project: Solr
>  Issue Type: New Feature
>  Components: web gui
>Reporter: Ryan McKinley
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3174-graph.png, SOLR-3174-graph.png, 
> SOLR-3174-graph.png, SOLR-3174-rgraph.png, SOLR-3174-rgraph.png, 
> SOLR-3174-rgraph.png, SOLR-3174.patch, SOLR-3174.patch, SOLR-3174.patch, 
> SOLR-3174.patch, SOLR-3174.patch
>
>
> It would be great to visualize the cluster state in the new UI. 
> See Mark's wish:
> https://issues.apache.org/jira/browse/SOLR-3162?focusedCommentId=13218272&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13218272


[jira] [Created] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-16 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3459:
-

 Summary: I started up in SolrCloud mode with 2 collections, but 
the cluster visualization page only displayed the first collection.
 Key: SOLR-3459
 URL: https://issues.apache.org/jira/browse/SOLR-3459
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Mark Miller
 Fix For: 4.0





[jira] [Commented] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276771#comment-13276771
 ] 

Mark Miller commented on SOLR-3459:
---

Give me a bit and I'll try and offer a simple cmd to get two collections up.

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
> Fix For: 4.0
>
>



[jira] [Created] (SOLR-3460) Improve cmd line config bootstrap tool.

2012-05-16 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3460:
-

 Summary: Improve cmd line config bootstrap tool.
 Key: SOLR-3460
 URL: https://issues.apache.org/jira/browse/SOLR-3460
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.0


Improve cmd line tool for bootstrapping config sets. Rather than take a config 
set name and directory, make it work like -Dbootstrap_conf=true and read 
solr.xml to find config sets. Config sets will be named after the collection 
and auto linked to the identically named collection.
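The solr.xml-reading step might look roughly like the following. This is an assumed sketch for illustration, not the committed SOLR-3460 code; `BootstrapSketch` and `coreNames` are hypothetical names.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.*;

// Pull the core names out of solr.xml so that each core's conf directory can be
// uploaded as a config set named after, and linked to, its collection.
public class BootstrapSketch {
  static List<String> coreNames(String solrXml) {
    try {
      Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
          .parse(new ByteArrayInputStream(solrXml.getBytes(StandardCharsets.UTF_8)));
      List<String> names = new ArrayList<>();
      NodeList cores = doc.getElementsByTagName("core");
      for (int i = 0; i < cores.getLength(); i++) {
        names.add(((Element) cores.item(i)).getAttribute("name"));
      }
      return names;
    } catch (Exception e) {
      throw new RuntimeException("failed to parse solr.xml", e);
    }
  }
}
```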


[jira] [Commented] (SOLR-3174) Visualize Cluster State

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276796#comment-13276796
 ] 

Mark Miller commented on SOLR-3174:
---

Another thing we should probably do is add a key for the meaning of the colors.

> Visualize Cluster State
> ---
>
> Key: SOLR-3174
> URL: https://issues.apache.org/jira/browse/SOLR-3174
> Project: Solr
>  Issue Type: New Feature
>  Components: web gui
>Reporter: Ryan McKinley
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3174-graph.png, SOLR-3174-graph.png, 
> SOLR-3174-graph.png, SOLR-3174-rgraph.png, SOLR-3174-rgraph.png, 
> SOLR-3174-rgraph.png, SOLR-3174.patch, SOLR-3174.patch, SOLR-3174.patch, 
> SOLR-3174.patch, SOLR-3174.patch
>
>
> It would be great to visualize the cluster state in the new UI. 
> See Mark's wish:
> https://issues.apache.org/jira/browse/SOLR-3162?focusedCommentId=13218272&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13218272


[jira] [Created] (SOLR-3462) When I start Solr with one collection, the CoreAdmin tab does not show up. If I start with 2 collections, it does.

2012-05-16 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3462:
-

 Summary: When I start Solr with one collection, the CoreAdmin tab 
does not show up. If I start with 2 collections, it does.
 Key: SOLR-3462
 URL: https://issues.apache.org/jira/browse/SOLR-3462
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.0
Reporter: Mark Miller
 Fix For: 4.0


Perhaps this was done for compatibility with the old single core mode? If you 
are doing multi core and only have a single solrcore, you still want access to 
CoreAdmin though.


[jira] [Resolved] (SOLR-3462) When I start Solr with one collection, the CoreAdmin tab does not show up. If I start with 2 collections, it does.

2012-05-16 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3462.
---

Resolution: Duplicate

yup

> When I start Solr with one collection, the CoreAdmin tab does not show up. If 
> I start with 2 collections, it does.
> --
>
> Key: SOLR-3462
> URL: https://issues.apache.org/jira/browse/SOLR-3462
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
> Fix For: 4.0
>
>
> Perhaps this was done for compatibility with the old single core mode? If you 
> are doing multi core and only have a single solrcore, you still want access 
> to CoreAdmin though.


[jira] [Closed] (SOLR-3462) When I start Solr with one collection, the CoreAdmin tab does not show up. If I start with 2 collections, it does.

2012-05-16 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller closed SOLR-3462.
-


> When I start Solr with one collection, the CoreAdmin tab does not show up. If 
> I start with 2 collections, it does.
> --
>
> Key: SOLR-3462
> URL: https://issues.apache.org/jira/browse/SOLR-3462
> Project: Solr
>  Issue Type: Bug
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
> Fix For: 4.0
>
>
> Perhaps this was done for compatibility with the old single core mode? If you 
> are doing multi core and only have a single solrcore, you still want access 
> to CoreAdmin though.


[jira] [Commented] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276863#comment-13276863
 ] 

Mark Miller commented on SOLR-3459:
---

Yeah, that is it exactly. It would be nice if the view for each collection was 
simply stacked on top of each other - that is, the first would be as it is now, 
and the next below it and so on. Then you can just scroll down and see the view 
for each collection?

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
>



[jira] [Commented] (SOLR-3401) Solr Core Admin view is not visible unless multiple cores already exist

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276884#comment-13276884
 ] 

Mark Miller commented on SOLR-3401:
---

+1 - works for me with a single collection in solrcloud now.

> Solr Core Admin view is not visible unless multiple cores already exist
> ---
>
> Key: SOLR-3401
> URL: https://issues.apache.org/jira/browse/SOLR-3401
> Project: Solr
>  Issue Type: Improvement
>  Components: web gui
>Affects Versions: 4.0
>Reporter: Jamie Johnson
>Assignee: Stefan Matheis (steffkes)
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3401.patch
>
>
> The new web gui does not show the Core Admin view unless there are already 
> multiples cores defined.  We should show the view in instances when we're 
> running in single core mode as well.


[jira] [Commented] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13276945#comment-13276945
 ] 

Mark Miller commented on SOLR-3459:
---

shard1 can be anything though - it's just a shard(n) name by default. So just as 
often, there would be no overlap in shard name for the different collections. I 
think logically they should be presented as separate trees.

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
>



[jira] [Updated] (SOLR-3460) Improve cmd line config bootstrap tool.

2012-05-16 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3460:
--

Attachment: SOLR-3460.patch

first pass

> Improve cmd line config bootstrap tool.
> ---
>
> Key: SOLR-3460
> URL: https://issues.apache.org/jira/browse/SOLR-3460
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3460.patch
>
>
> Improve cmd line tool for bootstrapping config sets. Rather than take a 
> config set name and directory, make it work like -Dbootstrap_conf=true and 
> read solr.xml to find config sets. Config sets will be named after the 
> collection and auto linked to the identically named collection.


[jira] [Commented] (SOLR-1726) Deep Paging and Large Results Improvements

2012-05-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277266#comment-13277266
 ] 

Mark Miller commented on SOLR-1726:
---

bq. Remove the CHANGES entry until this gets straightened out?

+1 - looks like Mike has made this work with non score sort as well, for when 
we put it back in.

> Deep Paging and Large Results Improvements
> --
>
> Key: SOLR-1726
> URL: https://issues.apache.org/jira/browse/SOLR-1726
> Project: Solr
>  Issue Type: Improvement
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
>Priority: Minor
> Fix For: 4.0
>
> Attachments: CommonParams.java, QParser.java, QueryComponent.java, 
> ResponseBuilder.java, SOLR-1726.patch, SOLR-1726.patch, 
> SolrIndexSearcher.java, TopDocsCollector.java, TopScoreDocCollector.java
>
>
> There are possibly ways to improve collections of "deep paging" by passing 
> Solr/Lucene more information about the last page of results seen, thereby 
> saving priority queue operations.   See LUCENE-2215.
> There may also be better options for retrieving large numbers of rows at a 
> time that are worth exploring.  LUCENE-2127.
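The priority-queue saving can be shown with a toy "search after" collector: given the last hit of the previous page, later pages ignore anything that sorts at or before it, so the queue only ever holds one page of results. This is a deliberate simplification of the LUCENE-2215 idea, not Lucene's actual API; all names here are hypothetical.

```java
import java.util.*;

// Toy deep-paging collector: score-descending order with docid as tiebreak,
// and a priority queue bounded at one page.
public class SearchAfterSketch {
  record Hit(float score, int doc) {}

  // Best-first order: higher score first, ties broken by lower doc id.
  static final Comparator<Hit> ORDER =
      Comparator.comparingDouble((Hit h) -> -h.score).thenComparingInt(h -> h.doc);

  static List<Hit> nextPage(List<Hit> allHits, Hit after, int pageSize) {
    PriorityQueue<Hit> pq = new PriorityQueue<>(ORDER.reversed()); // worst kept hit on top
    for (Hit h : allHits) {
      if (after != null && ORDER.compare(h, after) <= 0) {
        continue; // sorts at or before the last page's final hit: already returned
      }
      pq.add(h);
      if (pq.size() > pageSize) {
        pq.poll(); // evict the worst; the queue never exceeds one page
      }
    }
    List<Hit> page = new ArrayList<>(pq);
    page.sort(ORDER);
    return page;
  }
}
```

Without the "after" cursor, page N requires a queue of size N * pageSize; with it, the queue stays at pageSize regardless of how deep the paging goes.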


[jira] [Commented] (SOLR-3460) Improve cmd line config bootstrap tool.

2012-05-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277870#comment-13277870
 ] 

Mark Miller commented on SOLR-3460:
---

Okay, I'll commit this shortly.

> Improve cmd line config bootstrap tool.
> ---
>
> Key: SOLR-3460
> URL: https://issues.apache.org/jira/browse/SOLR-3460
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3460.patch
>
>
> Improve cmd line tool for bootstrapping config sets. Rather than take a 
> config set name and directory, make it work like -Dbootstrap_conf=true and 
> read solr.xml to find config sets. Config sets will be named after the 
> collection and auto linked to the identically named collection.


[jira] [Commented] (SOLR-3460) Improve cmd line config bootstrap tool.

2012-05-17 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13277979#comment-13277979
 ] 

Mark Miller commented on SOLR-3460:
---

Committed - I'll add some doc to the wiki.

> Improve cmd line config bootstrap tool.
> ---
>
> Key: SOLR-3460
> URL: https://issues.apache.org/jira/browse/SOLR-3460
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3460.patch
>
>
> Improve cmd line tool for bootstrapping config sets. Rather than take a 
> config set name and directory, make it work like -Dbootstrap_conf=true and 
> read solr.xml to find config sets. Config sets will be named after the 
> collection and auto linked to the identically named collection.


[jira] [Commented] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279863#comment-13279863
 ] 

Mark Miller commented on SOLR-3459:
---

I just tried the patch using cloud-dev/solrcloud-multi-start.sh...

Screens attached and cluster.json:
{noformat}
{
  "core1":{
"shard1":{
  "halfmetal:8983_solr_core1":{
"shard":"shard1",
"leader":"true",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:8983_solr",
"base_url":"http://halfmetal:8983/solr"},
  "halfmetal:7575_solr_core1":{
"shard":"shard1",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:7575_solr",
"base_url":"http://halfmetal:7575/solr"},
  "halfmetal:7574_solr_core1":{
"shard":"shard1",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:7574_solr",
"base_url":"http://halfmetal:7574/solr"}},
"shard2":{
  "halfmetal:7578_solr_core1":{
"shard":"shard2",
"leader":"true",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:7578_solr",
"base_url":"http://halfmetal:7578/solr"},
  "halfmetal:7577_solr_core1":{
"shard":"shard2",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:7577_solr",
"base_url":"http://halfmetal:7577/solr"},
  "halfmetal:7576_solr_core1":{
"shard":"shard2",
"state":"active",
"core":"core1",
"collection":"core1",
"node_name":"halfmetal:7576_solr",
"base_url":"http://halfmetal:7576/solr"}}},
  "core0":{
"shard1":{
  "halfmetal:8983_solr_core0":{
"shard":"shard1",
"leader":"true",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:8983_solr",
"base_url":"http://halfmetal:8983/solr"},
  "halfmetal:7578_solr_core0":{
"shard":"shard1",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:7578_solr",
"base_url":"http://halfmetal:7578/solr"},
  "halfmetal:7575_solr_core0":{
"shard":"shard1",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:7575_solr",
"base_url":"http://halfmetal:7575/solr"}},
"shard2":{
  "halfmetal:7576_solr_core0":{
"shard":"shard2",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:7576_solr",
"base_url":"http://halfmetal:7576/solr"},
  "halfmetal:7574_solr_core0":{
"shard":"shard2",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:7574_solr",
"base_url":"http://halfmetal:7574/solr"},
  "halfmetal:7577_solr_core0":{
"shard":"shard2",
"leader":"true",
"state":"active",
"core":"core0",
"collection":"core0",
"node_name":"halfmetal:7577_solr",
"base_url":"http://halfmetal:7577/solr"}}}}
{noformat}

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3459.patch
>
>





[jira] [Updated] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-20 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3459:
--

Attachment: screen1.png
screen2.png

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3459.patch, screen1.png, screen2.png
>
>





[jira] [Commented] (SOLR-3459) I started up in SolrCloud mode with 2 collections, but the cluster visualization page only displayed the first collection.

2012-05-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13279864#comment-13279864
 ] 

Mark Miller commented on SOLR-3459:
---

Thanks a lot Stefan - looking good!

> I started up in SolrCloud mode with 2 collections, but the cluster 
> visualization page only displayed the first collection.
> --
>
> Key: SOLR-3459
> URL: https://issues.apache.org/jira/browse/SOLR-3459
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, web gui
>Affects Versions: 4.0
>Reporter: Mark Miller
>Assignee: Stefan Matheis (steffkes)
> Fix For: 4.0
>
> Attachments: SOLR-3459.patch, screen1.png, screen2.png
>
>





[jira] [Created] (SOLR-3472) ping request handler should force distrib=false default

2012-05-20 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3472:
-

 Summary: ping request handler should force distrib=false default
 Key: SOLR-3472
 URL: https://issues.apache.org/jira/browse/SOLR-3472
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.0







[jira] [Updated] (SOLR-3472) ping request handler should force distrib=false default

2012-05-20 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3472:
--

Attachment: SOLR-3472.patch

> ping request handler should force distrib=false default
> ---
>
> Key: SOLR-3472
> URL: https://issues.apache.org/jira/browse/SOLR-3472
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3472.patch
>
>





[jira] [Resolved] (SOLR-3472) ping request handler should force distrib=false default

2012-05-20 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3472.
---

Resolution: Fixed

committed

> ping request handler should force distrib=false default
> ---
>
> Key: SOLR-3472
> URL: https://issues.apache.org/jira/browse/SOLR-3472
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
> Attachments: SOLR-3472.patch
>
>





[jira] [Commented] (SOLR-3473) Distributed deduplication broken

2012-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280256#comment-13280256
 ] 

Mark Miller commented on SOLR-3473:
---

My next response on the ML:
{quote}
I take that back - I think that may be the only way to make this work well. We 
need that document clone, which will let you put the dedupe proc after the 
distrib proc. I think in general, the dedupe proc will only work if your 
signature field is the id field though - otherwise, hash sharding that happens 
on the id field is going to cause a problem.{quote}
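To make the sharding concern concrete, here is a minimal, self-contained Java sketch (illustrative names only, not Solr's actual routing code) of why hashing on the id field defeats per-shard deduplication: duplicates with different ids may be routed to different shards, while hashing on the signature always co-locates them.

```java
public class ShardRoutingSketch {
    // Illustrative stand-in for hash-based document routing on a field value.
    static int shardFor(String routeKey, int numShards) {
        // floorMod keeps the shard index non-negative for negative hash codes
        return Math.floorMod(routeKey.hashCode(), numShards);
    }

    public static void main(String[] args) {
        int numShards = 4;
        // Two crawled pages: different URLs (ids) but identical content signature.
        String idA = "http://example.com/page?a";
        String idB = "http://example.com/page?b";
        String signature = "d41d8cd98f00b204";

        // Routing on the id may separate the duplicates across shards, so a
        // per-shard signature processor never sees both copies together...
        System.out.println("by id:  " + shardFor(idA, numShards) + " vs " + shardFor(idB, numShards));
        // ...while routing on the signature always co-locates them, letting
        // overwrite-by-signature deduplicate on a single shard.
        System.out.println("by sig: " + shardFor(signature, numShards) + " vs " + shardFor(signature, numShards));
    }
}
```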

> Distributed deduplication broken
> 
>
> Key: SOLR-3473
> URL: https://issues.apache.org/jira/browse/SOLR-3473
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, update
>Affects Versions: 4.0
>Reporter: Markus Jelsma
> Fix For: 4.0
>
>
> Solr's deduplication via the SignatureUpdateProcessor is broken for 
> distributed updates on SolrCloud.
> Mark Miller:
> {quote}
> Looking again at the SignatureUpdateProcessor code, I think that indeed this 
> won't currently work with distrib updates. Could you file a JIRA issue for 
> that? The problem is that we convert update commands into solr documents - 
> and that can cause a loss of info if an update proc modifies the update 
> command.
> I think the reason that you see a multiple values error when you try the 
> other order is because of the lack of a document clone (the other issue I 
> mentioned a few emails back). Addressing that won't solve your issue though - 
> we have to come up with a way to propagate the currently lost info on the 
> update command.
> {quote}
> Please see the ML thread for the full discussion: 
> http://lucene.472066.n3.nabble.com/SolrCloud-deduplication-td3984657.html




[jira] [Commented] (SOLR-2822) don't run update processors twice

2012-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280273#comment-13280273
 ] 

Mark Miller commented on SOLR-2822:
---

I was under the impression this might solve SOLR-3215, but now I'm not sure?

> don't run update processors twice
> -
>
> Key: SOLR-2822
> URL: https://issues.apache.org/jira/browse/SOLR-2822
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud, update
>Reporter: Yonik Seeley
> Fix For: 4.0
>
> Attachments: SOLR-2822.patch, SOLR-2822.patch
>
>
> An update will first go through processors until it gets to the point where 
> it is forwarded to the leader (or forwarded to replicas if already on the 
> leader).
> We need a way to skip over the processors that were already run (perhaps by 
> using a processor chain dedicated to sub-updates?).




[jira] [Commented] (SOLR-3473) Distributed deduplication broken

2012-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280280#comment-13280280
 ] 

Mark Miller commented on SOLR-3473:
---

bq. To work around the problem of having the digest field as ID, could it not 
simply issue a deleteByQuery for the digest prior to adding it? Would that 
cause significant overhead for very large systems with many updates?

Yeah, that might be an option - I don't know that it will be great perf wise, 
or race airtight wise, but it may be a viable option.

bq. We would, from Nutch' point of view, certainly want to avoid changing the 
ID from URL to digest.

Ah, interesting. If you are enforcing uniqueness by digest though, is this 
really a problem? It would only have to be in the Solr world that the id was 
the digest - and you could even call it something else and have an id:url field 
as well. Just thinking out loud.

Or, perhaps we could make it so you could pick the hash field? Then hash on 
digest. If you are using overwrite=true, this should work right?

Or perhaps someone else has some ideas...
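As a sketch of why the delete-then-add workaround is not race airtight, the following self-contained Java example (a hypothetical in-memory index, not Solr's API) interleaves two clients' delete and add steps and ends up with two documents sharing one signature:

```java
import java.util.HashMap;
import java.util.Map;

public class DeleteThenAddSketch {
    // Hypothetical in-memory "index": internal id -> content signature.
    static final Map<String, String> index = new HashMap<>();

    // The proposed workaround: purge any doc with this signature before adding.
    static void deleteBySignature(String sig) {
        index.values().removeIf(sig::equals);
    }

    public static void main(String[] args) {
        // Client 1 runs its delete...
        deleteBySignature("sigX");
        // ...but before it adds, client 2 completes a full delete+add cycle:
        deleteBySignature("sigX");
        index.put("idB", "sigX");
        // Client 1 now finishes its add: two docs share one signature.
        index.put("idA", "sigX");
        long dupes = index.values().stream().filter("sigX"::equals).count();
        System.out.println("docs with sigX = " + dupes); // prints 2
    }
}
```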

> Distributed deduplication broken
> 
>
> Key: SOLR-3473
> URL: https://issues.apache.org/jira/browse/SOLR-3473
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, update
>Affects Versions: 4.0
>Reporter: Markus Jelsma
> Fix For: 4.0
>
>
> Solr's deduplication via the SignatureUpdateProcessor is broken for 
> distributed updates on SolrCloud.
> Mark Miller:
> {quote}
> Looking again at the SignatureUpdateProcessor code, I think that indeed this 
> won't currently work with distrib updates. Could you file a JIRA issue for 
> that? The problem is that we convert update commands into solr documents - 
> and that can cause a loss of info if an update proc modifies the update 
> command.
> I think the reason that you see a multiple values error when you try the 
> other order is because of the lack of a document clone (the other issue I 
> mentioned a few emails back). Addressing that won't solve your issue though - 
> we have to come up with a way to propagate the currently lost info on the 
> update command.
> {quote}
> Please see the ML thread for the full discussion: 
> http://lucene.472066.n3.nabble.com/SolrCloud-deduplication-td3984657.html




[jira] [Created] (SOLR-3474) It would be great if the SolrCloud cluster viz views would auto refresh.

2012-05-21 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3474:
-

 Summary: It would be great if the SolrCloud cluster viz views 
would auto refresh.
 Key: SOLR-3474
 URL: https://issues.apache.org/jira/browse/SOLR-3474
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor


If you are sitting on that screen and knock down a server, would be nice for 
that to show up without requiring a refresh.




[jira] [Commented] (SOLR-3215) We should clone the SolrInputDocument before adding locally and then send that clone to replicas.

2012-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280405#comment-13280405
 ] 

Mark Miller commented on SOLR-3215:
---

bq. Are there use cases for this?

What about the sig update proc or others that do something like modify the 
updateTerm on the update command - this type of thing does not get distributed 
as we turn update commands into update requests, and the mapping doesn't cover 
updateTerm.

> We should clone the SolrInputDocument before adding locally and then send 
> that clone to replicas.
> -
>
> Key: SOLR-3215
> URL: https://issues.apache.org/jira/browse/SOLR-3215
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-3215.patch
>
>
> If we don't do this, the behavior is a little unexpected. You cannot avoid 
> having other processors always hit documents twice unless we support using 
> multiple update chains. We have another issue open that should make this 
> better, but I'd like to do this sooner than that. We are going to have to end 
> up cloning anyway when we want to offer the ability to not wait for the local 
> add before sending to replicas.
> Cloning with the current SolrInputDocument, SolrInputField apis is a little 
> scary - there is an Object to contend with - but it seems we can pretty much 
> count on that being a primitive that we don't have to clone?
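The cloning concern above can be sketched like this: a hypothetical field map in the shape of SolrInputDocument/SolrInputField (not the real classes), where copying the map and the per-field value lists is enough only if the values themselves are immutable (String, Integer, and friends):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DocCloneSketch {
    // Hypothetical document shape: field name -> list of field values.
    static Map<String, List<Object>> cloneDoc(Map<String, List<Object>> doc) {
        Map<String, List<Object>> copy = new LinkedHashMap<>();
        for (Map.Entry<String, List<Object>> e : doc.entrySet()) {
            // Copy the per-field value list; the values themselves are shared,
            // which is safe only if they are immutable (String, Integer, ...).
            copy.put(e.getKey(), new ArrayList<>(e.getValue()));
        }
        return copy;
    }

    public static void main(String[] args) {
        Map<String, List<Object>> doc = new LinkedHashMap<>();
        doc.put("id", new ArrayList<>(List.of("doc1")));
        Map<String, List<Object>> copy = cloneDoc(doc);
        copy.get("id").add("extra");          // mutate only the clone
        System.out.println(doc.get("id"));    // prints [doc1]
        System.out.println(copy.get("id"));   // prints [doc1, extra]
    }
}
```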




[jira] [Commented] (SOLR-3215) We should clone the SolrInputDocument before adding locally and then send that clone to replicas.

2012-05-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280455#comment-13280455
 ] 

Mark Miller commented on SOLR-3215:
---

bq. one way to avoid this would be to stop treating the "local" replica as 
special

Just to point out explicitly: currently we have to offer this (local is 
special) as well though - it's a requirement for our current 'super safe' mode 
that we add locally first. So unless we also address some other issues, we'd 
have to allow things to happen both ways.



> We should clone the SolrInputDocument before adding locally and then send 
> that clone to replicas.
> -
>
> Key: SOLR-3215
> URL: https://issues.apache.org/jira/browse/SOLR-3215
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-3215.patch
>
>
> If we don't do this, the behavior is a little unexpected. You cannot avoid 
> having other processors always hit documents twice unless we support using 
> multiple update chains. We have another issue open that should make this 
> better, but I'd like to do this sooner than that. We are going to have to end 
> up cloning anyway when we want to offer the ability to not wait for the local 
> add before sending to replicas.
> Cloning with the current SolrInputDocument, SolrInputField apis is a little 
> scary - there is an Object to contend with - but it seems we can pretty much 
> count on that being a primitive that we don't have to clone?




[jira] [Commented] (SOLR-3474) It would be great if the SolrCloud cluster viz views would auto refresh.

2012-05-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13280934#comment-13280934
 ] 

Mark Miller commented on SOLR-3474:
---

I think live_nodes and clusterstate.json auto update would be nice - but most 
important to me is the cluster state visualization - I guess that means we need 
to get both cluster.json and live_nodes anyway, because you need all that to 
create the cluster viz. My worry is that you put your browser on that page to 
make sure everything is happy, and oh a node goes down and everything still 
looks green an hour later - until you hit refresh...

Every 10 seconds sounds reasonable to me.
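A poll-every-N-seconds loop of this kind can be sketched in Java (the admin UI itself would do this in JavaScript; the names and the 10-second period here are illustrative, not the actual implementation):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ClusterVizPollerSketch {
    static final AtomicInteger renders = new AtomicInteger();

    // Fire fetchAndRender immediately and then every periodMillis thereafter.
    static ScheduledExecutorService start(Runnable fetchAndRender, long periodMillis) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(fetchAndRender, 0, periodMillis, TimeUnit.MILLISECONDS);
        return timer;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for fetching clusterstate.json + live_nodes and redrawing;
        // a real period would be 10_000 ms.
        ScheduledExecutorService timer =
                start(() -> System.out.println("render #" + renders.incrementAndGet()), 10_000);
        Thread.sleep(200);  // demo only: let the immediate first tick fire
        timer.shutdownNow();
    }
}
```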

> It would be great if the SolrCloud cluster viz views would auto refresh.
> 
>
> Key: SOLR-3474
> URL: https://issues.apache.org/jira/browse/SOLR-3474
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>
> If you are sitting on that screen and knock down a server, would be nice for 
> that to show up without requiring a refresh.




[jira] [Created] (SOLR-3488) Create a Collections API for SolrCloud

2012-05-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3488:
-

 Summary: Create a Collections API for SolrCloud
 Key: SOLR-3488
 URL: https://issues.apache.org/jira/browse/SOLR-3488
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller







[jira] [Commented] (SOLR-2822) don't run update processors twice

2012-05-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283361#comment-13283361
 ] 

Mark Miller commented on SOLR-2822:
---

+1 - approach seems as elegant as we could shoot for right now. I much prefer 
it to juggling multiple chains.

I still worry about the 'clone doc' issue and update procs between distrib and 
run - if we do decide to not let procs live there, we should probably hard fail 
on it.

Latest patch looks good to me - let's commit and iterate on trunk.

> don't run update processors twice
> -
>
> Key: SOLR-2822
> URL: https://issues.apache.org/jira/browse/SOLR-2822
> Project: Solr
>  Issue Type: Sub-task
>  Components: SolrCloud, update
>Reporter: Yonik Seeley
> Fix For: 4.0
>
> Attachments: SOLR-2822.patch, SOLR-2822.patch, SOLR-2822.patch
>
>
> An update will first go through processors until it gets to the point where 
> it is forwarded to the leader (or forwarded to replicas if already on the 
> leader).
> We need a way to skip over the processors that were already run (perhaps by 
> using a processor chain dedicated to sub-updates?).




[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-05-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283382#comment-13283382
 ] 

Mark Miller commented on SOLR-3488:
---

I'll post an initial patch just for create soon. It's just a start though. I've 
added a bunch of comments for TODOs or things to consider for the future. I'd 
like to start simple just to get 'something' in though.

So initially, you can create a new collection and pass an existing collection 
name to determine which shards it's created on. Would also be nice to be able 
to explicitly pass the shard urls to use, as well as simply offer X shards, Y 
replicas. In that case, perhaps the leader could handle ensuring that. You 
might also want to be able to simply say, create it on all known shards.

Further things to look at:

* other commands, like remove/delete.
* what to do when some create calls fail? should we instead add a create node 
to a queue in zookeeper? Make the overseer responsible for checking for any 
jobs there, completing them (if needed) and then removing the job from the 
queue? Other ideas.
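The queue-in-ZooKeeper idea can be sketched with an in-memory stand-in (this is not the ZooKeeper API; the point is the read/complete/remove ordering - a job is only deleted after it succeeds, so the overseer can retry partial failures):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class OverseerQueueSketch {
    // In-memory stand-in for a ZooKeeper work-queue node.
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    static final List<String> completed = Collections.synchronizedList(new ArrayList<>());

    public static void main(String[] args) throws Exception {
        // A client enqueues the job rather than calling every shard itself.
        queue.put("create:collection1");

        // Overseer loop: read the job, complete it, and only then remove it,
        // so a crash mid-create leaves the job queued for a retry.
        String job = queue.peek();
        completed.add(job);       // stub for "issue create calls to each shard"
        queue.remove(job);
        System.out.println("completed=" + completed + " queueEmpty=" + queue.isEmpty());
    }
}
```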

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>





[jira] [Assigned] (SOLR-2923) IllegalArgumentException when using useFilterForSortedQuery on an empty index

2012-05-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-2923:
-

Assignee: Mark Miller

> IllegalArgumentException when using useFilterForSortedQuery on an empty index
> -
>
> Key: SOLR-2923
> URL: https://issues.apache.org/jira/browse/SOLR-2923
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.6, 4.0
>Reporter: Adrien Grand
>Assignee: Mark Miller
>Priority: Trivial
> Attachments: SOLR-2923.patch
>
>
> An IllegalArgumentException can occur under the following circumstances:
>  - the index is empty,
>  - {{useFilterForSortedQuery}} is enabled,
>  - {{queryResultsCache}} is disabled.
> Here are what the exception and its stack trace look like (Solr trunk):
> {quote}
> numHits must be > 0; please use TotalHitCountCollector if you just need the 
> total hit count
> java.lang.IllegalArgumentException: numHits must be > 0; please use 
> TotalHitCountCollector if you just need the total hit count
>   at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:917)
>   at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1741)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1211)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:353)
>   ...
> {quote}
> To reproduce this error from a fresh copy of Solr trunk, edit 
> {{example/solr/conf/solrconfig.xml}} to disable {{queryResultCache}} and 
> enable {{useFilterForSortedQuery}}. Then run {{ant run-example}} and issue a 
> query which sorts against any field 
> ({{http://localhost:8983/solr/select?q=*:*&sort=manu+desc}} for example).
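The guard such a fix needs can be sketched as follows (a hypothetical helper mimicking the collector precondition from the stack trace, not the actual SOLR-2923 patch): on an empty index there is nothing to sort, so the code skips the top-N collector instead of asking it for zero hits.

```java
public class SortedQueryGuardSketch {
    // Mimics the collector precondition seen in the stack trace above.
    static void createCollector(int numHits) {
        if (numHits <= 0) {
            throw new IllegalArgumentException("numHits must be > 0");
        }
    }

    // Guarded sort: on an empty doc set there is nothing to sort, so skip
    // the collector entirely instead of asking it for zero hits.
    static String sortDocSet(int docSetSize) {
        if (docSetSize == 0) {
            return "empty result";
        }
        createCollector(docSetSize);
        return "sorted " + docSetSize + " docs";
    }

    public static void main(String[] args) {
        System.out.println(sortDocSet(0));  // prints "empty result", no exception
        System.out.println(sortDocSet(3));  // prints "sorted 3 docs"
    }
}
```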




[jira] [Commented] (SOLR-2923) IllegalArgumentException when using useFilterForSortedQuery on an empty index

2012-05-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283493#comment-13283493
 ] 

Mark Miller commented on SOLR-2923:
---

patch looks good to me

> IllegalArgumentException when using useFilterForSortedQuery on an empty index
> -
>
> Key: SOLR-2923
> URL: https://issues.apache.org/jira/browse/SOLR-2923
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.6, 4.0
>Reporter: Adrien Grand
>Assignee: Mark Miller
>Priority: Trivial
> Attachments: SOLR-2923.patch
>
>
> An IllegalArgumentException can occur under the following circumstances:
>  - the index is empty,
>  - {{useFilterForSortedQuery}} is enabled,
>  - {{queryResultsCache}} is disabled.
> Here are what the exception and its stack trace look like (Solr trunk):
> {quote}
> numHits must be > 0; please use TotalHitCountCollector if you just need the 
> total hit count
> java.lang.IllegalArgumentException: numHits must be > 0; please use 
> TotalHitCountCollector if you just need the total hit count
>   at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:917)
>   at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1741)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1211)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:353)
>   ...
> {quote}
> To reproduce this error from a fresh copy of Solr trunk, edit 
> {{example/solr/conf/solrconfig.xml}} to disable {{queryResultCache}} and 
> enable {{useFilterForSortedQuery}}. Then run {{ant run-example}} and issue a 
> query which sorts against any field 
> ({{http://localhost:8983/solr/select?q=*:*&sort=manu+desc}} for example).




[jira] [Comment Edited] (SOLR-3488) Create a Collections API for SolrCloud

2012-05-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13283382#comment-13283382
 ] 

Mark Miller edited comment on SOLR-3488 at 5/25/12 3:12 PM:


I'll post an initial patch just for create soon. It's just a start though. I've 
added a bunch of comments for TODOs or things to consider for the future. I'd 
like to start simple just to get 'something' in though.

So initially, you can create a new collection and pass an existing collection 
name to determine which shards it's created on. Would also be nice to be able 
to explicitly pass the shard urls to use, as well as simply offer X shards, Y 
replicas. In that case, perhaps the -leader- overseer could handle ensuring 
that. You might also want to be able to simply say, create it on all known 
shards.

Further things to look at:

* other commands, like remove/delete.
* what to do when some create calls fail? should we instead add a create node 
to a queue in zookeeper? Make the overseer responsible for checking for any 
jobs there, completing them (if needed) and then removing the job from the 
queue? Other ideas.

  was (Author: markrmil...@gmail.com):
I'll post an initial patch just for create soon. It's just a start though. 
I've added a bunch of comments for TODOs or things to consider for the future. 
I'd like to start simple just to get 'something' in though.

So initially, you can create a new collection and pass an existing collection 
name to determine which shards it's created on. Would also be nice to be able 
to explicitly pass the shard urls to use, as well as simply offer X shards, Y 
replicas. In that case, perhaps the leader could handle ensuring that. You 
might also want to be able to simply say, create it on all known shards.

Further things to look at:

* other commands, like remove/delete.
* what to do when some create calls fail? should we instead add a create node 
to a queue in zookeeper? Make the overseer responsible for checking for any 
jobs there, completing them (if needed) and then removing the job from the 
queue? Other ideas.
  
> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>





[jira] [Resolved] (SOLR-2923) IllegalArgumentException when using useFilterForSortedQuery on an empty index

2012-05-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-2923.
---

   Resolution: Fixed
Fix Version/s: 4.0

Thanks Adrien!

> IllegalArgumentException when using useFilterForSortedQuery on an empty index
> -
>
> Key: SOLR-2923
> URL: https://issues.apache.org/jira/browse/SOLR-2923
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 3.6, 4.0
>Reporter: Adrien Grand
>Assignee: Mark Miller
>Priority: Trivial
> Fix For: 4.0
>
> Attachments: SOLR-2923.patch
>
>
> An IllegalArgumentException can occur under the following circumstances:
>  - the index is empty,
>  - {{useFilterForSortedQuery}} is enabled,
>  - {{queryResultsCache}} is disabled.
> Here are what the exception and its stack trace look like (Solr trunk):
> {quote}
> numHits must be > 0; please use TotalHitCountCollector if you just need the 
> total hit count
> java.lang.IllegalArgumentException: numHits must be > 0; please use 
> TotalHitCountCollector if you just need the total hit count
>   at 
> org.apache.lucene.search.TopFieldCollector.create(TopFieldCollector.java:917)
>   at 
> org.apache.solr.search.SolrIndexSearcher.sortDocSet(SolrIndexSearcher.java:1741)
>   at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1211)
>   at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:353)
>   ...
> {quote}
> To reproduce this error from a fresh copy of Solr trunk, edit 
> {{example/solr/conf/solrconfig.xml}} to disable {{queryResultCache}} and 
> enable {{useFilterForSortedQuery}}. Then run {{ant run-example}} and issue a 
> query which sorts against any field 
> ({{http://localhost:8983/solr/select?q=*:*&sort=manu+desc}} for example).




[jira] [Updated] (SOLR-3488) Create a Collections API for SolrCloud

2012-05-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3488:
--

Attachment: SOLR-3488.patch

I'm going on vacation for a week, so here is my early work on just getting 
something basic going. It does not involve any overseer stuff yet.

Someone feel free to take it - commit it and iterate, or iterate in patch form 
- whatever makes sense. I'll help when I get back if there is more to do, and 
if no one makes any progress, I'll continue on it when I get back.

Currently, I've copied the core admin handler pattern and made a collections 
handler. There is one simple test and currently the only way to choose which 
nodes the collection is put on is to give an existing template collection.

The test asserts nothing at the moment - all very early work. But I imagine we 
will be changing direction a fair amount, so that's good I think.
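The handler pattern described above can be sketched roughly as follows. This is purely illustrative (the class name CollectionsHandlerSketch and the parameter names are assumptions, not the actual Solr API): an admin handler dispatches on an "action" parameter, mirroring the core admin handler, and in this early patch node placement comes from an existing template collection.

```java
import java.util.Map;

// Hypothetical sketch of the collections-handler pattern: dispatch on an
// "action" request parameter, as the core admin handler does. Names are
// illustrative, not the actual Solr API.
public class CollectionsHandlerSketch {

    public static String handle(Map<String, String> params) {
        String action = params.getOrDefault("action", "");
        switch (action) {
            case "CREATE":
                // Early patch behavior: node placement is derived from an
                // existing "template" collection, not numShards/replicas.
                return "create " + params.get("name")
                        + " on nodes of " + params.get("template");
            default:
                return "unknown action: " + action;
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(Map.of(
                "action", "CREATE", "name", "coll2", "template", "collection1")));
    }
}
```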



> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch
>
>





[jira] [Commented] (SOLR-3511) Refactor overseer to use a distributed "work"queue

2012-06-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13289380#comment-13289380
 ] 

Mark Miller commented on SOLR-3511:
---

I have not had a chance to look at this yet - but should we leverage this for 
collection creation as well or will that be a separate work queue?

> Refactor overseer to use a distributed "work"queue
> --
>
> Key: SOLR-3511
> URL: https://issues.apache.org/jira/browse/SOLR-3511
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Sami Siren
>Assignee: Sami Siren
> Attachments: SOLR-3511.patch
>
>
> By using a queue, the overseer becomes watch free, a lot simpler, and 
> probably less buggy too.




[jira] [Commented] (LUCENE-4115) JAR resolution/ cleanup should be done automatically for ant clean/ eclipse/ resolve.

2012-06-07 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13290997#comment-13290997
 ] 

Mark Miller commented on LUCENE-4115:
-

Looks like Windows does not like this one.

BUILD FAILED
C:\Jenkins\workspace\Lucene-Solr-4.x-Windows-Java7-64\build.xml:29: The 
following error occurred while executing this line:
C:\Jenkins\workspace\Lucene-Solr-4.x-Windows-Java7-64\lucene\build.xml:448: The 
following error occurred while executing this line:
C:\Jenkins\workspace\Lucene-Solr-4.x-Windows-Java7-64\lucene\common-build.xml:618:
 The following error occurred while executing this line:
C:\Jenkins\workspace\Lucene-Solr-4.x-Windows-Java7-64\lucene\common-build.xml:286:
 Unable to delete file 
C:\Jenkins\workspace\Lucene-Solr-4.x-Windows-Java7-64\lucene\test-framework\lib\junit4-ant-1.5.0.jar

> JAR resolution/ cleanup should be done automatically for ant clean/ eclipse/ 
> resolve.
> -
>
> Key: LUCENE-4115
> URL: https://issues.apache.org/jira/browse/LUCENE-4115
> Project: Lucene - Java
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 4.0, 5.0
>
> Attachments: LUCENE-4111.patch
>
>
> I think we should add the following target deps:
> ant clean [depends on] clean-jars
> ant resolve [depends on] clean-jars
> ant eclipse [depends on] resolve, clean-jars
> ant idea [depends on] resolve, clean-jars
> This eliminates the need to remember about cleaning up stale jars which users 
> complain about (and I think they're right about it). The overhead will be 
> minimal since resolve is only going to copy jars from cache. Eclipse won't 
> have a problem with updated JARs if they end up at the same location.
> If there are no objections I will fix this in a few hours.




[jira] [Commented] (LUCENE-4115) JAR resolution/ cleanup should be done automatically for ant clean/ eclipse/ resolve.

2012-06-07 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291269#comment-13291269
 ] 

Mark Miller commented on LUCENE-4115:
-

bq. what target did you issue 

I'm getting this from Uwe's jenkins emails to the devs list rather than my own 
machine. I don't think there is any IDE involved there.

A quick test in my vm shows it happening at the end of running ant test though.

> JAR resolution/ cleanup should be done automatically for ant clean/ eclipse/ 
> resolve.
> -
>
> Key: LUCENE-4115
> URL: https://issues.apache.org/jira/browse/LUCENE-4115
> Project: Lucene - Java
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 4.0, 5.0
>
> Attachments: LUCENE-4111.patch
>
>
> I think we should add the following target deps:
> ant clean [depends on] clean-jars
> ant resolve [depends on] clean-jars
> ant eclipse [depends on] resolve, clean-jars
> ant idea [depends on] resolve, clean-jars
> This eliminates the need to remember about cleaning up stale jars which users 
> complain about (and I think they're right about it). The overhead will be 
> minimal since resolve is only going to copy jars from cache. Eclipse won't 
> have a problem with updated JARs if they end up at the same location.
> If there are no objections I will fix this in a few hours.




[jira] [Commented] (SOLR-3527) Optimize ignores maxSegments in distributed environment

2012-06-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13291890#comment-13291890
 ] 

Mark Miller commented on SOLR-3527:
---

Sounds right Andy - thanks for the report.

> Optimize ignores maxSegments in distributed environment
> ---
>
> Key: SOLR-3527
> URL: https://issues.apache.org/jira/browse/SOLR-3527
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 4.0
>Reporter: Andy Laird
>
> Send the following command to a Solr server with many segments in a 
> multi-shard, multi-server environment:
> curl 
> "http://localhost:8080/solr/update?optimize=true&waitFlush=true&maxSegments=6&distrib=false";
> The local server will end up with the number of segments at 6, as requested, 
> but all other shards in the index will be optimized with maxSegments=1, which 
> takes far longer to complete.  All shards should be optimized to the 
> requested value of 6.




[jira] [Commented] (SOLR-3511) Refactor overseer to use a distributed "work"queue

2012-06-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292417#comment-13292417
 ] 

Mark Miller commented on SOLR-3511:
---

This is nice Sami - great work. I've been going over it and working on 
integrating a first pass at collection creation as well.

> Refactor overseer to use a distributed "work"queue
> --
>
> Key: SOLR-3511
> URL: https://issues.apache.org/jira/browse/SOLR-3511
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Sami Siren
>Assignee: Sami Siren
> Fix For: 4.0
>
> Attachments: SOLR-3511.patch, SOLR-3511.patch
>
>
> By using a queue, the overseer becomes watch free, a lot simpler, and 
> probably less buggy too.




[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-09 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13292418#comment-13292418
 ] 

Mark Miller commented on SOLR-3488:
---

Thanks Tommaso!

bq. Regarding the template based creation I think it should use a different 
parameter name for the collection template (e.g. "template") and use the 
"collection" parameter for the new collection name.

I'm actually hoping that perhaps that stuff is temporary, and I just did it to 
have something that works now. I think though, that we should really change how 
things work - so that you just pass the number of shards and the number of 
replicas, and the overseer just ensures the collection is on the right number 
of nodes. Then we don't have to have this 'template' collection to figure out 
what nodes to create on - or explicitly pass the nodes.

Sami has a distributed work queue for the overseer setup now, and I'm working 
on integrating this with that.

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488_2.patch
>
>





[jira] [Assigned] (SOLR-3531) NRTCachingDirectoryFactory should be configurable via solrconfig.xml

2012-06-09 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3531:
-

Assignee: Mark Miller

> NRTCachingDirectoryFactory should be configurable via solrconfig.xml
> 
>
> Key: SOLR-3531
> URL: https://issues.apache.org/jira/browse/SOLR-3531
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 4.0
>Reporter: Andy Laird
>Assignee: Mark Miller
>Priority: Minor
> Attachments: Configure_NRTCachingDirectory_via_solrconfig.patch
>
>
> {{NRTCachingDirectoryFactory}} currently hard-codes the values for 
> {{maxMergeSizeMB}} and {{maxCachedMB}} it uses when creating a new 
> {{NRTCachingDirectory}} instance.  These values should be configurable in the 
> usual way via {{solrconfig.xml}}.




[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13295689#comment-13295689
 ] 

Mark Miller commented on SOLR-3488:
---

I've got my first patch ready - still some things to address, but it currently 
does queue based collection creation.

One thing I recently realized when I put some last minute pieces together is 
that I cannot share the same Overseer queue that already exists - it will cause 
a deadlock as we wait for states to be registered. Processing the queue with 
multiple threads still seemed scary if you were to create a lot of collections 
at once - so it seems just safer to use a different queue.

I'm still somewhat unsure about handling failures though - for the moment I'm 
simply adding the job back onto the queue - this gets complicated quickly 
though. Especially if you add in delete collection and it can fail. If you 
start putting commands back on the queue you could have weird create/delete 
command retry reordering?
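The retry-reordering worry can be made concrete with a small simulation (illustrative only; RetryReorderSketch is not Solr code): if a failed "create" is naively pushed back onto the tail of the queue, a later "delete" for the same collection can overtake it, so the collection ends up existing even though the user issued create-then-delete.

```java
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;

// Illustrative simulation of the retry-reordering hazard described above.
public class RetryReorderSketch {

    // Drains the queue of {action, collection} commands and returns the set
    // of collections that exist afterwards. 'failFirstCreate' makes the very
    // first create attempt fail once, triggering a naive tail-of-queue retry.
    public static Set<String> run(Queue<String[]> queue, boolean failFirstCreate) {
        Set<String> existing = new HashSet<>();
        boolean failedOnce = false;
        while (!queue.isEmpty()) {
            String[] cmd = queue.remove();
            if (cmd[0].equals("create")) {
                if (failFirstCreate && !failedOnce) {
                    failedOnce = true;
                    queue.add(cmd);              // naive retry: back on the tail
                    continue;
                }
                existing.add(cmd[1]);
            } else {                             // "delete"
                existing.remove(cmd[1]);
            }
        }
        return existing;
    }

    public static void main(String[] args) {
        Queue<String[]> q = new ArrayDeque<>();
        q.add(new String[] {"create", "coll1"});
        q.add(new String[] {"delete", "coll1"});
        // With the retry, the delete overtakes the retried create: coll1 ends
        // up existing even though the user asked for create-then-delete.
        System.out.println(run(q, true));
    }
}
```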

I also have not switched to requiring or respecting a replication factor - I 
was thinking perhaps specifying nothing or -1 would give you what you have now? 
An infinite rep factor? And we would enforce a lower rep factor if requested? 
For now I still require that you pass a collection template and new nodes are 
created on the nodes that host the collection template.

I'm not sure how replication factor would be enforced though? The Overseer 
just periodically prunes and adds given what it sees and what the rep factor 
is? Is that how failures should be handled? Don't re-add to the queue, just let 
the periodic job attempt to fix things later? 

What if someone starts a new node with new replicas preconfigured in 
solr.xml? The Overseer periodic job would simply remove them shortly thereafter 
if the rep factor was not high enough?

One issue with pruning at the moment is that unloading a core will not remove 
its data dir. We probably want to fix that for collection removal.

If we go too far down this path, it seems rebalancing starts to become 
important as well.

I've got some other thoughts and ideas to get down, but that is a start so I 
can gather some feedback.

I have not yet integrated Tommaso's work, but will if we don't end up changing 
things much from now.

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488_2.patch
>
>





[jira] [Updated] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-15 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3488:
--

Attachment: SOLR-3488.patch

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488_2.patch
>
>





[jira] [Commented] (LUCENE-4150) Change version properties in branch_4x to "4.0-ALPHA-SNAPSHOT"

2012-06-19 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13396762#comment-13396762
 ] 

Mark Miller commented on LUCENE-4150:
-

bq. and will not be convinced otherwise!

I'm glad you're open to discussion :)

Personally, I'm not at all concerned about handling these releases as we handle 
a typical release.

IMHO, a snapshot with attempted index back-compat promises is the perfect 
commitment level. Call it a release, call it a snapshot, I'll vote for either 
one, but I don't think they should be full fledged releases at all.

Let's dump this sucker out, and if anyone else wants to pour around some gravy 
after, so be it.

> Change version properties in branch_4x to "4.0-ALPHA-SNAPSHOT" 
> ---
>
> Key: LUCENE-4150
> URL: https://issues.apache.org/jira/browse/LUCENE-4150
> Project: Lucene - Java
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 4.0
>Reporter: Steven Rowe
>Priority: Minor
>
> The next release off branch_4x will be named "4.0-ALPHA", so the current 
> version string should be "4.0-ALPHA-SNAPSHOT".
> (Similarly, after 4.0-ALPHA is released, the version string should be changed 
> to "4.0-BETA-SNAPSHOT".)




[jira] [Updated] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-19 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3488:
--

Attachment: SOLR-3488.patch

Updated patch - some refactoring, and started adding remove collection code - 
though currently we do not remove all collection info from ZK even when you 
unload every shard - something we should probably start doing?

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>





[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13397510#comment-13397510
 ] 

Mark Miller commented on SOLR-3488:
---

bq. It seems that in the latest patch even in case of failure the job is 
removed from queue.

Right - I was putting it back on the queue, but once I added deletes, I removed 
that because I was worried about reorderings. I figure we may need a different 
strategy in general. I'll expand on that in a new comment.


bq. report error to user and do not try to create the collection

Yeah, that is one option - then we have to remove the collection on the other 
nodes though. For instance, what happens if one of the create-core calls fails 
due to an intermittent connection error. Do we fail then? We would need to 
clean up first. Then what if one of those nodes fails before we could remove 
it. And then comes back with that core later. I agree that simple might be the 
best bet to start, but in failure scenarios it gets a little muddy quickly. 
Which may be fine to start as you suggest.

bq. I have one question about the patch specifically in the 
OverseerCollectionProcessor where you create the collection: why do you need 
the collection param? 

Mostly just simplicity to start - getting the nodes based on a template 
collection was easy. Tommaso did some work on extracting a strategy class, but 
I have not yet integrated it. Certainly we need more options at a minimum, or 
perhaps just a different strategy. Simplest might be a way to go, but it also 
might be a back compat problem if we choose to do something else. I'll try and 
elaborate in a new comment a bit later today.

bq. and improve things on SVN from now

Okay, that sounds fine to me. I'll try and polish the patch a smidgen and 
commit it as a start soon.

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>





[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13397695#comment-13397695
 ] 

Mark Miller commented on SOLR-3488:
---

Perhaps it's a little too ambitious, but the reason I brought up the idea of the 
overseer handling collection management every n seconds is:

Let's say you have 4 nodes with 2 collections on them. You want each collection 
to use as many nodes as are available. Now you want to add a new node. To get 
it to participate in the existing collections, you have to configure them, or 
create new compatible cores over http on the new node. Wouldn't it be nice if 
the Overseer just saw the new node, that the collections had repFactor=MAX_INT 
and created the cores for you?

Also, consider failure scenarios:

If you remove a collection, what happens when a node that was down comes back 
and had a piece of that collection? Your collection will be back as a 
single node. An Overseer process could prune this off shortly after.

So numShards/repFactor + Overseeer smarts seems simple and good to me. But 
sometimes you may want to be precise in picking shards/replicas. Perhaps simply 
doing some kind of 'rack awareness' type feature down the road is the best way 
to control this though. You could create connections and weight costs using 
token markers for each node or something.

So I think maybe we would need a new zk node where solr instances register 
rather than cores? Then we know what is available to place replicas on - even 
if that Solr instance has no cores?

Then the Overseer would have a process that ran every n (1 min?) and looked at 
each collection and its repFactor and numShards, and add or prune given the 
current state.

This would also account for failures on collection creation or deletion. If a 
node was down and missed the operation, when it came back, within N seconds, 
the Overseer would add or prune with the restored node.

It handles a lot of failure scenarios (with some lag) and makes the interface 
to the user a lot simpler. Adding nodes can eventually mean just starting up a 
new node rather than requiring any config. It's also easy to deal with changing 
the replication factor. Just update it in zk, and when the Overseer process 
runs next, it will add and prune to match the latest value (given the number of 
nodes available).
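The core of that periodic pass is a simple reconciliation calculation. The sketch below is illustrative only (ReconcileSketch and the -1 "unbounded" convention are assumptions, not Solr code): compare each collection's current replica count against its rep factor and the number of live nodes, and decide how many replicas to add or prune.

```java
// Minimal sketch of the periodic overseer pass described above: compare a
// collection's current replica count against its rep factor and the live
// nodes, and decide how many replicas to add or prune. Purely illustrative.
public class ReconcileSketch {

    // Positive result: replicas to add; negative: replicas to prune.
    // A repFactor of -1 stands in for "unbounded" (use every live node).
    public static int delta(int repFactor, int currentReplicas, int liveNodes) {
        int target = (repFactor < 0) ? liveNodes : Math.min(repFactor, liveNodes);
        return target - currentReplicas;
    }

    public static void main(String[] args) {
        // A new node joins a 4-node cluster running an "unbounded" collection:
        System.out.println(delta(-1, 4, 5));   // add 1 replica
        // Rep factor lowered to 2 while 4 replicas exist:
        System.out.println(delta(2, 4, 5));    // prune 2 replicas
    }
}
```

Running this every n seconds (or on node join/leave) is what gives the eventual convergence described above, regardless of the cluster state at creation time.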




> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>





[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-21 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13398942#comment-13398942
 ] 

Mark Miller commented on SOLR-3488:
---

bq. so basically, if I understood correctly, is that the overseer has the 
capability of doing periodic checks without an explicit action / request from a 
client which can help on cleaning states / check for failures / etc.

Yeah - basically, either every n seconds, or when the overseer sees a new node 
come or go, it looks at each collection, checks its replication factor, and 
either adds or removes nodes to match it given the nodes that are currently up. 
So with some lag, whatever you set for the replication will eventually be 
matched no matter the failures or random state of the cluster when the 
collection is created or its replication factor changed.

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>





[jira] [Updated] (SOLR-3561) Error during deletion of shard/core

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3561:
--

Fix Version/s: 4.0

> Error during deletion of shard/core
> ---
>
> Key: SOLR-3561
> URL: https://issues.apache.org/jira/browse/SOLR-3561
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, replication (java), SolrCloud
>Affects Versions: 4.0
> Environment: Solr trunk (4.0-SNAPSHOT) from 29/2-2012
>Reporter: Per Steffensen
> Fix For: 4.0
>
>
> Running several Solr servers in Cloud-cluster (zkHost set on the Solr 
> servers).
> Several collections with several slices and one replica for each slice (each 
> slice has two shards)
> Basically we want let our system delete an entire collection. We do this by 
> trying to delete each and every shard under the collection. Each shard is 
> deleted one by one, by doing CoreAdmin-UNLOAD-requests against the relevant 
> Solr
> {code}
> CoreAdminRequest request = new CoreAdminRequest();
> request.setAction(CoreAdminAction.UNLOAD);
> request.setCoreName(shardName);
> CoreAdminResponse resp = request.process(new CommonsHttpSolrServer(solrUrl));
> {code}
> The delete/unload succeeds, but in about 10% of the cases we get errors on 
> the involved Solr servers, right around the time the shards/cores are deleted, 
> and we end up in a situation where ZK claims (forever) that the deleted 
> shard is still present and active.
> From here the issue is more easily explained with a concrete example:
> - 7 Solr servers involved
> - Several collections, among them one called "collection_2012_04", consisting 
> of 28 slices, 56 shards (remember 1 replica for each slice) named 
> "collection_2012_04_sliceX_shardY" for all pairs in {X:1..28}x{Y:1,2}
> - Each Solr server running 8 shards, e.g. Solr server #1 is running shard 
> "collection_2012_04_slice1_shard1" and Solr server #7 is running shard 
> "collection_2012_04_slice1_shard2" belonging to the same slice "slice1".
> When we decide to delete the collection "collection_2012_04" we go through 
> all 56 shards and delete/unload them one-by-one - including 
> "collection_2012_04_slice1_shard1" and "collection_2012_04_slice1_shard2". At 
> some point during or shortly after all this deletion we see the following 
> exceptions in solr.log on Solr server #7
> {code}
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Error while trying to recover:org.apache.solr.common.SolrException: 
> core not found:collection_2012_04_slice1_shard1
> request: 
> http://solr_server_1:8983/solr/admin/cores?action=PREPRECOVERY&core=collection_2012_04_slice1_shard1&nodeName=solr_server_7%3A8983_solr&coreNodeName=solr_server_7%3A8983_solr_collection_2012_04_slice1_shard2&state=recovering&checkLive=true&pauseFor=6000&wt=javabin&version=2
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
> at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
> at 
> org.apache.solr.common.SolrExceptionPropagationHelper.decodeFromMsg(SolrExceptionPropagationHelper.java:29)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:445)
> at 
> org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:264)
> at 
> org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:188)
> at 
> org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:285)
> at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:206)
> Aug 1, 2012 12:02:50 AM org.apache.solr.common.SolrException log
> SEVERE: Recovery failed - trying again...
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> WARNING:
> java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
> at java.util.ArrayList.RangeCheck(ArrayList.java:547)
> at java.util.ArrayList.get(ArrayList.java:322)
> at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:96)
> at org.apache.solr.cloud.LeaderElector.access$000(LeaderElector.java:57)
> at org.apache.solr.cloud.LeaderElector$1.process(LeaderElector.java:121)
> at 
> org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:531)
> at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:507)
> Aug 1, 2012 12:02:51 AM org.apache.solr.cloud.LeaderElector$1 process
> {code}
> I'm not sure exactly how to interpret this, but it seems to me that some 
> recovery job tries to recover collection_2012_04_slice1_shard2 on Solr server 
> #7 from collection_2012_04_slice1_shard1 on Solr server #1, but fails because 
> Solr server 

[jira] [Updated] (SOLR-3563) Collection in ZK not deleted when all shards have been unloaded

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3563:
--

Fix Version/s: 4.0

> Collection in ZK not deleted when all shards have been unloaded
> --
>
> Key: SOLR-3563
> URL: https://issues.apache.org/jira/browse/SOLR-3563
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Priority: Minor
> Fix For: 4.0
>
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when I have done CoreAdmin/UNLOAD for all shards under a 
> collection, the collection and all its slices are still present in ZK 
> under /collections. It might be ok since the operation is called UNLOAD, but I 
> basically want to delete an entire collection and all data related to it 
> (including information about it in ZK).
> A delete-collection operation that also deletes info about the collection 
> under /collections in ZK would be very nice! Or a delete-shard/core 
> operation plus some logic that detects when all shards belonging to 
> a collection have been deleted, and then deletes info about 
> the collection under /collections in ZK.


[jira] [Updated] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-3488:
--

Fix Version/s: 4.0

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>




[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399290#comment-13399290
 ] 

Mark Miller commented on SOLR-3488:
---

To get something incrementally committable, I'm changing from using a collection 
template to a simple numReplicas. I have hit an annoying stall: it is 
difficult to get all of the node host URLs. The live_nodes list is translated 
from URL to a path-safe form, which is not reversible if _ is in the original 
URL. You can put the URL in the data at each node, but then you have to read 
each node individually rather than make a single getChildren call. You can also 
try to find every node by running through the whole JSON cluster state file - but 
that wouldn't give you any nodes that had no cores on them at the moment (say, 
after a collection delete).
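The lossiness described above can be sketched in a few lines of plain Java (this is an illustrative stand-in, not Solr's actual encoding code; the `toPathSafe` helper and the URLs are assumptions for the example):

```java
// Illustrative sketch of why a url -> path-safe translation that maps '/'
// to '_' is not reversible when '_' already appears in the original URL.
public class PathSafeSketch {
    // Hypothetical helper: replace path separators with '_' to build a
    // legal ZK node name (assumption, not Solr's real implementation).
    static String toPathSafe(String url) {
        return url.replace('/', '_');
    }

    public static void main(String[] args) {
        String a = toPathSafe("host:8983/my_solr"); // -> host:8983_my_solr
        String b = toPathSafe("host:8983/my/solr"); // -> host:8983_my_solr
        // Two different URLs collapse to the same node name, so the
        // original URL cannot be recovered from the path-safe form.
        System.out.println(a.equals(b)); // prints "true"
    }
}
```

Since the encoding collides, reading the node name back from live_nodes cannot tell the two source URLs apart, which is exactly why the URL would have to be stored as node data instead.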

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>




[jira] [Created] (SOLR-3571) You should have the option of removing the data dir when unloading a core.

2012-06-22 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3571:
-

 Summary: You should have the option of removing the data dir when 
unloading a core.
 Key: SOLR-3571
 URL: https://issues.apache.org/jira/browse/SOLR-3571
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Priority: Minor






[jira] [Resolved] (SOLR-3571) You should have the option of removing the data dir when unloading a core.

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-3571.
---

Resolution: Duplicate

> You should have the option of removing the data dir when unloading a core.
> --
>
> Key: SOLR-3571
> URL: https://issues.apache.org/jira/browse/SOLR-3571
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Priority: Minor
>




[jira] [Commented] (SOLR-3563) Collection in ZK not deleted when all shards have been unloaded

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399507#comment-13399507
 ] 

Mark Miller commented on SOLR-3563:
---

I've added this to the work I did on the collections API - it's needed there for 
collection removal. I should be committing a first iteration of that soon.

> Collection in ZK not deleted when all shards have been unloaded
> --
>
> Key: SOLR-3563
> URL: https://issues.apache.org/jira/browse/SOLR-3563
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Priority: Minor
> Fix For: 4.0
>
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when I have done CoreAdmin/UNLOAD for all shards under a 
> collection, the collection and all its slices are still present in ZK 
> under /collections. It might be ok since the operation is called UNLOAD, but I 
> basically want to delete an entire collection and all data related to it 
> (including information about it in ZK).
> A delete-collection operation that also deletes info about the collection 
> under /collections in ZK would be very nice! Or a delete-shard/core 
> operation plus some logic that detects when all shards belonging to 
> a collection have been deleted, and then deletes info about 
> the collection under /collections in ZK.



[jira] [Commented] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399508#comment-13399508
 ] 

Mark Miller commented on SOLR-3562:
---

I've added this to the work I did on the collections API - it's needed there for 
collection removal. I should be committing a first iteration of that soon.

> Data folder not deleted during unload
> -
>
> Key: SOLR-3562
> URL: https://issues.apache.org/jira/browse/SOLR-3562
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Priority: Minor
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when doing CoreAdmin/UNLOAD, the data folder on disk 
> belonging to the shard/core that has been unloaded is not deleted. It might be 
> ok since the operation is called UNLOAD, but I basically want to delete a 
> shard/core and all data related to it (including its data folder).
> Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
> have to manually delete the data folder myself after having unloaded?
> A delete-shard/core or even a delete-collection operation would be very nice!



[jira] [Assigned] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3562:
-

Assignee: Mark Miller

> Data folder not deleted during unload
> -
>
> Key: SOLR-3562
> URL: https://issues.apache.org/jira/browse/SOLR-3562
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Assignee: Mark Miller
>Priority: Minor
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when doing CoreAdmin/UNLOAD, the data folder on disk 
> belonging to the shard/core that has been unloaded is not deleted. It might be 
> ok since the operation is called UNLOAD, but I basically want to delete a 
> shard/core and all data related to it (including its data folder).
> Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
> have to manually delete the data folder myself after having unloaded?
> A delete-shard/core or even a delete-collection operation would be very nice!



[jira] [Assigned] (SOLR-3563) Collection in ZK not deleted when all shards have been unloaded

2012-06-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-3563:
-

Assignee: Mark Miller

> Collection in ZK not deleted when all shards have been unloaded
> --
>
> Key: SOLR-3563
> URL: https://issues.apache.org/jira/browse/SOLR-3563
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when I have done CoreAdmin/UNLOAD for all shards under a 
> collection, the collection and all its slices are still present in ZK 
> under /collections. It might be ok since the operation is called UNLOAD, but I 
> basically want to delete an entire collection and all data related to it 
> (including information about it in ZK).
> A delete-collection operation that also deletes info about the collection 
> under /collections in ZK would be very nice! Or a delete-shard/core 
> operation plus some logic that detects when all shards belonging to 
> a collection have been deleted, and then deletes info about 
> the collection under /collections in ZK.



[jira] [Commented] (SOLR-3488) Create a Collections API for SolrCloud

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399518#comment-13399518
 ] 

Mark Miller commented on SOLR-3488:
---

Re the above: I'm tempted to add another data node that just holds the list of 
nodes. I think it would be good to have an efficient way to get that list. It's a 
pain with clusterstate.json, and that approach loses nodes with no cores on them.

Something I just remembered I have to look into: the default location of the 
data dir for cores that are created on the fly is probably not great.
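A small self-contained sketch (plain Java collections standing in for the ZK data; all names are illustrative assumptions) of why deriving the node list by walking the cluster state misses nodes with no cores:

```java
import java.util.*;

// Plain-Java stand-in for the two lookup strategies; no ZooKeeper involved.
public class NodeListSketch {
    public static void main(String[] args) {
        // Cluster state: collection -> (shard -> node hosting it).
        Map<String, Map<String, String>> clusterState = new HashMap<>();
        clusterState.put("collection1",
                Map.of("shard1", "node_a", "shard2", "node_b"));

        // A hypothetical dedicated node list would also contain node_c,
        // which hosts no cores (e.g. right after a collection delete).
        Set<String> liveNodes = Set.of("node_a", "node_b", "node_c");

        // Deriving the node set by walking the cluster state:
        Set<String> derived = new HashSet<>();
        for (Map<String, String> shards : clusterState.values()) {
            derived.addAll(shards.values());
        }

        // node_c never appears in the cluster state, so it is lost.
        for (String n : liveNodes) {
            if (!derived.contains(n)) {
                System.out.println("missed: " + n); // prints "missed: node_c"
            }
        }
    }
}
```

This is the trade-off described above: a single getChildren call on a dedicated list would be both cheap and complete, while the cluster-state walk is neither.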

> Create a Collections API for SolrCloud
> --
>
> Key: SOLR-3488
> URL: https://issues.apache.org/jira/browse/SOLR-3488
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: SOLR-3488.patch, SOLR-3488.patch, SOLR-3488.patch, 
> SOLR-3488_2.patch
>
>




[jira] [Commented] (SOLR-3562) Data folder not deleted during unload

2012-06-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399545#comment-13399545
 ] 

Mark Miller commented on SOLR-3562:
---

I'll add an option for both the instanceDir and the dataDir.
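A hedged sketch of what such an UNLOAD call might look like once the options exist. The deleteDataDir and deleteInstanceDir parameter names are assumptions for illustration, not a committed API:

```java
// Hypothetical shape of a CoreAdmin UNLOAD request with cleanup flags
// (parameter names are assumptions, not a committed API).
public class UnloadUrlSketch {
    public static void main(String[] args) {
        String unloadUrl = "http://solr_server_1:8983/solr/admin/cores"
                + "?action=UNLOAD"
                + "&core=collection_2012_04_slice1_shard1"
                + "&deleteDataDir=true"       // also remove the data folder
                + "&deleteInstanceDir=true";  // also remove the instance dir
        System.out.println(unloadUrl.contains("action=UNLOAD")); // prints "true"
    }
}
```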

> Data folder not deleted during unload
> -
>
> Key: SOLR-3562
> URL: https://issues.apache.org/jira/browse/SOLR-3562
> Project: Solr
>  Issue Type: Bug
>  Components: multicore, SolrCloud
>Affects Versions: 4.0
> Environment: Same as SOLR-3561
>Reporter: Per Steffensen
>Assignee: Mark Miller
>Priority: Minor
>
> Same scenario as SOLR-3561 - deleting shards/cores using the CoreAdmin UNLOAD 
> command.
> I have noticed that when doing CoreAdmin/UNLOAD, the data folder on disk 
> belonging to the shard/core that has been unloaded is not deleted. It might be 
> ok since the operation is called UNLOAD, but I basically want to delete a 
> shard/core and all data related to it (including its data folder).
> Don't we have a delete-shard/core operation? Or what do I need to do? Do I 
> have to manually delete the data folder myself after having unloaded?
> A delete-shard/core or even a delete-collection operation would be very nice!



[jira] [Commented] (SOLR-1770) move default example core config/data into a collection1 folder

2012-06-25 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13400515#comment-13400515
 ] 

Mark Miller commented on SOLR-1770:
---

Not sure what happened to the last attempt; I must have gotten sidetracked. We 
really need this in 4.0 though - otherwise creating new cores/collections is 
really ugly.

> move default example core config/data into a collection1 folder
> ---
>
> Key: SOLR-1770
> URL: https://issues.apache.org/jira/browse/SOLR-1770
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.4
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0, 5.0
>
> Attachments: SOLR-1770.patch
>
>
> This is a better starting point for adding more cores - perhaps we can also 
> get rid of multi-core example



[jira] [Updated] (SOLR-1770) move default example core config/data into a collection1 folder

2012-06-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1770:
--

 Priority: Critical  (was: Major)
Fix Version/s: 5.0

> move default example core config/data into a collection1 folder
> ---
>
> Key: SOLR-1770
> URL: https://issues.apache.org/jira/browse/SOLR-1770
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 1.4
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: 4.0, 5.0
>
> Attachments: SOLR-1770.patch
>
>
> This is a better starting point for adding more cores - perhaps we can also 
> get rid of multi-core example



[jira] [Created] (SOLR-3575) solr.xml should default to persist=true

2012-06-25 Thread Mark Miller (JIRA)
Mark Miller created SOLR-3575:
-

 Summary: solr.xml should default to persist=true
 Key: SOLR-3575
 URL: https://issues.apache.org/jira/browse/SOLR-3575
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
 Fix For: 4.0, 5.0


The default of false is kind of silly IMO.



[jira] Assigned: (SOLR-2127) When using the defaultCoreName attribute, after performing a swap, solr.xml no longer contains the defaultCoreName attribute, and the core which was default is now renamed

2010-10-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-2127:
-

Assignee: Mark Miller

> When using the defaultCoreName attribute, after performing a swap, solr.xml 
> no longer contains the defaultCoreName attribute, and the core which was 
> default is now renamed to ""
> 
>
> Key: SOLR-2127
> URL: https://issues.apache.org/jira/browse/SOLR-2127
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 1.4.2, 1.5
>Reporter: Ephraim Ofir
>Assignee: Mark Miller
>Priority: Minor
>
> Tried using the defaultCoreName attribute on a 2-core setup. After performing 
> a swap, my solr.xml no longer contains the defaultCoreName attribute, and the 
> core which was default is now renamed to "", so after a restart of the process 
> I can't access it by its former name and can't perform other operations on it 
> such as rename, reload or swap... 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] Updated: (SOLR-2127) When using the defaultCoreName attribute, after performing a swap, solr.xml no longer contains the defaultCoreName attribute, and the core which was default is now renamed t

2010-10-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2127:
--

Attachment: SOLR-2127.patch

Probably more to come here - but this patch adds writing out the 
defaultCoreName to the persist method, and adds a test for this.

> When using the defaultCoreName attribute, after performing a swap, solr.xml 
> no longer contains the defaultCoreName attribute, and the core which was 
> default is now renamed to ""
> 
>
> Key: SOLR-2127
> URL: https://issues.apache.org/jira/browse/SOLR-2127
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 1.4.2, 1.5
>Reporter: Ephraim Ofir
>Assignee: Mark Miller
>Priority: Minor
> Attachments: SOLR-2127.patch
>
>
> Tried using the defaultCoreName attribute on a 2-core setup. After performing 
> a swap, my solr.xml no longer contains the defaultCoreName attribute, and the 
> core which was default is now renamed to "", so after a restart of the process 
> I can't access it by its former name and can't perform other operations on it 
> such as rename, reload or swap... 


[jira] Updated: (SOLR-1873) Commit Solr Cloud to trunk

2010-10-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1873:
--

Description: 
See http://wiki.apache.org/solr/SolrCloud

This is a real hassle - I didn't merge up to trunk before all the svn 
scrambling, so integrating cloud is now a bit difficult. I'm running through 
and just preparing a commit by hand though (applying changes/handling conflicts 
a file at a time).

  was:This is a real hassle - I didn't merge up to trunk before all the svn 
scrambling, so integrating cloud is now a bit difficult. I'm running through 
and just preparing a commit by hand though (applying changes/handling conflicts 
a file at a time).


> Commit Solr Cloud to trunk
> --
>
> Key: SOLR-1873
> URL: https://issues.apache.org/jira/browse/SOLR-1873
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 1.4
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: Next
>
> Attachments: log4j-over-slf4j-1.5.5.jar, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, TEST-org.apache.solr.cloud.ZkSolrClientTest.txt, 
> zookeeper-3.2.2.jar, zookeeper-3.3.1.jar
>
>
> See http://wiki.apache.org/solr/SolrCloud
> This is a real hassle - I didn't merge up to trunk before all the svn 
> scrambling, so integrating cloud is now a bit difficult. I'm running through 
> and just preparing a commit by hand though (applying changes/handling 
> conflicts a file at a time).



[jira] Resolved: (SOLR-1873) Commit Solr Cloud to trunk

2010-10-13 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-1873.
---

   Resolution: Fixed
Fix Version/s: (was: Next)
   4.0

committed r1022188

> Commit Solr Cloud to trunk
> --
>
> Key: SOLR-1873
> URL: https://issues.apache.org/jira/browse/SOLR-1873
> Project: Solr
>  Issue Type: New Feature
>Affects Versions: 1.4
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 4.0
>
> Attachments: log4j-over-slf4j-1.5.5.jar, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, SOLR-1873.patch, 
> SOLR-1873.patch, TEST-org.apache.solr.cloud.ZkSolrClientTest.txt, 
> zookeeper-3.2.2.jar, zookeeper-3.3.1.jar
>
>
> See http://wiki.apache.org/solr/SolrCloud
> This is a real hassle - I didn't merge up to trunk before all the svn 
> scrambling, so integrating cloud is now a bit difficult. I'm running through 
> and just preparing a commit by hand though (applying changes/handling 
> conflicts a file at a time).



[jira] Created: (SOLR-2172) ZkController should update its live node set after registering itself

2010-10-18 Thread Mark Miller (JIRA)
ZkController should update its live node set after registering itself
--

 Key: SOLR-2172
 URL: https://issues.apache.org/jira/browse/SOLR-2172
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 4.0


to be sure its own entry is in its current cloud state



[jira] Updated: (SOLR-2172) ZkController should update its live node set after registering itself

2010-10-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2172:
--

Component/s: SolrCloud

> ZkController should update its live node set after registering itself
> --
>
> Key: SOLR-2172
> URL: https://issues.apache.org/jira/browse/SOLR-2172
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
>
> to be sure its own entry is in its current cloud state



[jira] Commented: (SOLR-2172) ZkController should update its live node set after registering itself

2010-10-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12922132#action_12922132
 ] 

Mark Miller commented on SOLR-2172:
---

{noformat}
Index: solr/src/java/org/apache/solr/cloud/ZkController.java
===
--- solr/src/java/org/apache/solr/cloud/ZkController.java   (revision 1023871)
+++ solr/src/java/org/apache/solr/cloud/ZkController.java   (working copy)
@@ -377,6 +377,13 @@
   }
 }
 zkClient.getChildren(ZkStateReader.LIVE_NODES_ZKNODE, liveNodeWatcher);
+try {
+  zkStateReader.updateLiveNodes();
+} catch (IOException e) {
+  log.error("", e);
+  throw new ZooKeeperException(SolrException.ErrorCode.SERVER_ERROR,
+  "", e);
+}
   }
   
   public String getNodeName() {
{noformat}

> ZkController should update its live node set after registering itself
> --
>
> Key: SOLR-2172
> URL: https://issues.apache.org/jira/browse/SOLR-2172
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
>
> to be sure its own entry is in its current cloud state




[jira] Commented: (SOLR-2170) BasicZkTest instantiates extra CoreContainer

2010-10-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12922137#action_12922137
 ] 

Mark Miller commented on SOLR-2170:
---

Again, this was left over from a rushed conversion to the JUnit 4 style tests.

I had to rework some things to fix this without the hack.

> BasicZkTest instantiates extra CoreContainer
> 
>
> Key: SOLR-2170
> URL: https://issues.apache.org/jira/browse/SOLR-2170
> Project: Solr
>  Issue Type: Test
>Reporter: Yonik Seeley
>
> BasicZkTest has a beforeClass that calls initCore, but then AbstractZkTestCase
> also instantiates its own TestHarness.




[jira] Resolved: (SOLR-2170) BasicZkTest instantiates extra CoreContainer

2010-10-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-2170.
---

   Resolution: Fixed
Fix Version/s: 4.0
 Assignee: Mark Miller

> BasicZkTest instantiates extra CoreContainer
> 
>
> Key: SOLR-2170
> URL: https://issues.apache.org/jira/browse/SOLR-2170
> Project: Solr
>  Issue Type: Test
>Reporter: Yonik Seeley
>Assignee: Mark Miller
> Fix For: 4.0
>
>
> BasicZkTest has a beforeClass that calls initCore, but then AbstractZkTestCase
> also instantiates its own TestHarness.




[jira] Resolved: (SOLR-2172) ZkController should update its live node set after registering itself

2010-10-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-2172.
---

Resolution: Fixed

> ZkController should update its live node set after registering itself
> --
>
> Key: SOLR-2172
> URL: https://issues.apache.org/jira/browse/SOLR-2172
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 4.0
>
>
> to be sure its own entry is in its current cloud state




[jira] Commented: (SOLR-2159) CloudStateUpdateTest.testCoreRegistration test failure

2010-10-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12922146#action_12922146
 ] 

Mark Miller commented on SOLR-2159:
---

I tried bumping up the number of retries on this - it was only 4*50ms = 200ms. 
I've brought that up to 500ms for now - we'll see if this pops up again.
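The bumped-up wait described above is essentially a bounded poll: keep checking the cloud state at a fixed interval until it matches or a deadline passes. A minimal sketch of that pattern (hypothetical helper names, not the actual test code):

```java
// Sketch of a bounded poll: check a condition every intervalMs,
// give up after timeoutMs total. Names here are hypothetical.
public class RetryUntil {
    public interface Condition { boolean check(); }

    public static boolean waitFor(Condition c, long intervalMs, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (c.check()) return true;   // cloud state caught up
            Thread.sleep(intervalMs);     // give the watchers time to fire
        }
        return c.check();                 // one last look at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        final int[] attempts = {0};
        // Succeeds on the third check; 4 * 50ms was often too tight for this.
        boolean ok = waitFor(() -> ++attempts[0] >= 3, 50, 500);
        System.out.println(ok ? "registered" : "timed out");
    }
}
```

Lengthening the timeout (here 500ms) only widens the window; a flaky assertion like this usually wants an event-driven check rather than a bigger sleep.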

> CloudStateUpdateTest.testCoreRegistration test failure
> --
>
> Key: SOLR-2159
> URL: https://issues.apache.org/jira/browse/SOLR-2159
> Project: Solr
>  Issue Type: Bug
>  Components: Build
>Affects Versions: 4.0
> Environment: Hudson
>Reporter: Robert Muir
> Fix For: 4.0
>
>
> CloudStateUpdateTest.testCoreRegistration failed in Hudson, with:
> expected:<2> but was:<3>
> Here is the stacktrace:
> {noformat}
> [junit] Testsuite: org.apache.solr.cloud.CloudStateUpdateTest
> [junit] Testcase: 
> testCoreRegistration(org.apache.solr.cloud.CloudStateUpdateTest):   FAILED
> [junit] expected:<2> but was:<3>
> [junit] junit.framework.AssertionFailedError: expected:<2> but was:<3>
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:795)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:768)
> [junit]   at 
> org.apache.solr.cloud.CloudStateUpdateTest.testCoreRegistration(CloudStateUpdateTest.java:203)
> [junit] 
> [junit] 
> [junit] Tests run: 1, Failures: 1, Errors: 0, Time elapsed: 18.254 sec
> [junit] 
> [junit] - Standard Output ---
> [junit] NOTE: reproduce with: ant test -Dtestcase=CloudStateUpdateTest 
> -Dtestmethod=testCoreRegistration 
> -Dtests.seed=3315086210462004965:6080191299009105620
> [junit] NOTE: test params are: codec=Standard, locale=bg_BG, timezone=CNT
> [junit] -  ---
> [junit] - Standard Error -
> [junit] 2010-10-15 2:01:28 org.apache.solr.core.CoreContainer register
> [junit] SEVERE: 
> [junit] org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for 
> /collections/collection1/shards/127.0.0.1:1662_solr_
> [junit]   at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:90)
> [junit]   at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:42)
> [junit]   at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:637)
> [junit]   at 
> org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:348)
> [junit]   at 
> org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:309)
> [junit]   at 
> org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:371)
> [junit]   at 
> org.apache.solr.cloud.ZkController.addZkShardsNode(ZkController.java:155)
> [junit]   at 
> org.apache.solr.cloud.ZkController.register(ZkController.java:474)
> [junit]   at 
> org.apache.solr.core.CoreContainer.register(CoreContainer.java:515)
> [junit]   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:408)
> [junit]   at 
> org.apache.solr.core.CoreContainer.load(CoreContainer.java:289)
> [junit]   at 
> org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:213)
> [junit]   at 
> org.apache.solr.cloud.CloudStateUpdateTest.setUp(CloudStateUpdateTest.java:124)
> [junit]   at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)
> [junit]   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> [junit]   at java.lang.reflect.Method.invoke(Method.java:616)
> [junit]   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
> [junit]   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
> [junit]   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
> [junit]   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27)
> [junit]   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
> [junit]   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:795)
> [junit]   at 
> org.apache.lucene.util.LuceneTestCase$LuceneTestCaseRunner.runChild(LuceneTestCase.java:768)
> [junit]   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
> [junit]   at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
> [junit]   at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
> [junit]   at 
> org.

[jira] Closed: (SOLR-2174) commit during backup of more than 10 seconds causes snapshoot to fail?

2010-10-18 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller closed SOLR-2174.
-

Resolution: Duplicate

Looks like a dupe of SOLR-2100, which is already resolved.

> commit during backup of more than 10 seconds causes snapshoot to fail?
> 
>
> Key: SOLR-2174
> URL: https://issues.apache.org/jira/browse/SOLR-2174
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4.1
>Reporter: Hoss Man
>
> Comment from Peter Sturge in email...
> http://lucene.472066.n3.nabble.com/commitReserveDuration-backups-and-saveCommitPoint-td1407399.html
> {quote}
> In Solr 1.4 and 1.4.1, the SOLR-1475 patch is certainly there, but I don't 
> believe it truly addresses the problem.
> Here's why:
> When a 'backup' command is received by the ReplicationHandler, it creates a 
> SnapShooter instance and asynchronously does a full file snapshot of the 
> current commit point.
> The current commit version to which this refers, however, is set to be 
> cleared on the next commit by the value of 'commitReserveDuration', which, by 
> default, is set to 10secs. (see cleanReserves() in 
> IndexDeletionPolicyWrapper.java).
> If you perform a backup and no commits occur during this time, it's fine, 
> because cleanReserves() is not called. If you do get a commit during the 
> backup process, and the backup takes longer than 10secs,
> the whole snapshot operation fails (because delete() doesn't see the commit 
> point in savedCommits - see below).
> {quote}
> Peter's email mentions two patches that he believes will fix this problem.
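The reserve/clean race Peter describes can be modeled in a few lines. This is a hypothetical sketch of the mechanism, not the actual IndexDeletionPolicyWrapper code: a backup reserves a commit point for a fixed duration, and each subsequent commit cleans expired reserves, so a backup that outlives the reserve loses its commit point.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical model of commitReserveDuration: version -> expiry millis.
public class CommitReserves {
    private final Map<Long, Long> reserves = new ConcurrentHashMap<>();

    public void reserve(long version, long reserveMs, long now) {
        reserves.put(version, now + reserveMs);
    }

    // Called on each commit (cf. cleanReserves in IndexDeletionPolicyWrapper).
    public void cleanReserves(long now) {
        reserves.entrySet().removeIf(e -> e.getValue() < now);
    }

    public boolean isReserved(long version) {
        return reserves.containsKey(version);
    }

    public static void main(String[] args) {
        CommitReserves r = new CommitReserves();
        r.reserve(42L, 10_000, 0);   // default commitReserveDuration ~10s
        r.cleanReserves(5_000);      // commit at 5s: backup still safe
        System.out.println(r.isReserved(42L));
        r.cleanReserves(15_000);     // commit at 15s: a >10s backup loses it
        System.out.println(r.isReserved(42L));
    }
}
```

Under this model, a fix needs either to re-reserve periodically while the backup runs or to pin the commit point for the backup's whole lifetime rather than a fixed duration.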




[jira] Commented: (LUCENE-2562) Make Luke a Lucene/Solr Module

2010-10-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12922407#action_12922407
 ] 

Mark Miller commented on LUCENE-2562:
-

Still going - it's just kind of a weekends-and-nights project, and I've been short 
on those recently. Last weekend was a 3-day hiking/camping trip, and the 2 weekends 
before that (and all the days in between) were spent in Boston. It's looking a 
little wavy going forward too, but I'm still heavily invested.

If Grant is not worried about getting a code grant, then neither am I. IMO, 
since the contributors are committers with CLAs, there is not much to worry 
about. So unless someone says differently, I'm inclined to just move this 
(it's already in svn, so moving it under trunk is not much different).

If someone has a concern, we can hit legal-discuss though.

> Make Luke a Lucene/Solr Module
> --
>
> Key: LUCENE-2562
> URL: https://issues.apache.org/jira/browse/LUCENE-2562
> Project: Lucene - Java
>  Issue Type: Task
>Reporter: Mark Miller
> Attachments: luke1.jpg, luke2.jpg, luke3.jpg
>
>
> see
> http://search.lucidimagination.com/search/document/ee0e048c6b56ee2/luke_in_need_of_maintainer
> http://search.lucidimagination.com/search/document/5e53136b7dcb609b/web_based_luke
> I think it would be great if there was a version of Luke that always worked 
> with trunk - and it would also be great if it was easier to match Luke jars 
> with Lucene versions.
> While I'd like to get GWT Luke into the mix as well, I think the easiest 
> starting point is to straight port Luke to another UI toolkit before 
> abstracting out DTO objects that both GWT Luke and Pivot Luke could share.
> I've started slowly converting Luke's use of Thinlet to Apache Pivot. I 
> haven't had / don't have a lot of time for this at the moment, but I've plugged 
> away here and there over the past week or two. There is still a *lot* to do.




[jira] Updated: (SOLR-2127) When using the defaultCoreName attribute, after performing a swap, solr.xml no longer contains the defaultCoreName attribute, and the core which was default is now renamed t

2010-10-20 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2127:
--

Fix Version/s: 4.0
   3.1

> When using the defaultCoreName attribute, after performing a swap, solr.xml 
> no longer contains the defaultCoreName attribute, and the core which was 
> default is now renamed to ""
> 
>
> Key: SOLR-2127
> URL: https://issues.apache.org/jira/browse/SOLR-2127
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 1.4.2, 1.5
>Reporter: Ephraim Ofir
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: SOLR-2127.patch
>
>
> Tried using the defaultCoreName attribute on a 2 core setup. After performing 
> a swap, my solr.xml no longer contains the defaultCoreName attribute, and the 
> core which was default is now renamed to "", so after restart of the process 
> can't access it by its former name and can't perform other operations on it 
> such as rename, reload or swap... 




[jira] Commented: (SOLR-2127) When using the defaultCoreName attribute, after performing a swap, solr.xml no longer contains the defaultCoreName attribute, and the core which was default is now renamed

2010-10-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12922951#action_12922951
 ] 

Mark Miller commented on SOLR-2127:
---

I'm still looking to see if there are more problems here, but I have committed the 
fix covered in the current patch - that is/was a clear and simple-to-address bug. I 
just want to make sure there is not another part to this before resolving.

> When using the defaultCoreName attribute, after performing a swap, solr.xml 
> no longer contains the defaultCoreName attribute, and the core which was 
> default is now renamed to ""
> 
>
> Key: SOLR-2127
> URL: https://issues.apache.org/jira/browse/SOLR-2127
> Project: Solr
>  Issue Type: Bug
>  Components: multicore
>Affects Versions: 1.4.2, 1.5
>Reporter: Ephraim Ofir
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: SOLR-2127.patch
>
>
> Tried using the defaultCoreName attribute on a 2 core setup. After performing 
> a swap, my solr.xml no longer contains the defaultCoreName attribute, and the 
> core which was default is now renamed to "", so after restart of the process 
> can't access it by its former name and can't perform other operations on it 
> such as rename, reload or swap... 




[jira] Created: (SOLR-2191) Change SolrException cstrs that take Throwable to default to alreadyLogged=false

2010-10-24 Thread Mark Miller (JIRA)
Change SolrException cstrs that take Throwable to default to alreadyLogged=false


 Key: SOLR-2191
 URL: https://issues.apache.org/jira/browse/SOLR-2191
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
 Fix For: Next


Because of misuse, many exceptions are now not logged at all - which can be painful 
when doing dev. I think we should flip this setting and work at removing any 
double logging - losing logging is worse (and it almost looks like we lose more 
logging than we would gain from double logging) - and bad SolrException/logging 
patterns are proliferating.
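A minimal sketch of why the default matters (hypothetical class, not the real SolrException): if the wrapping constructor defaults to alreadyLogged=true, the cause is silently dropped unless every caller remembered to log it first; defaulting to false puts the logging decision at the catch site.

```java
// Hypothetical stand-in for an exception carrying an alreadyLogged flag.
public class LoggedException extends RuntimeException {
    public final boolean alreadyLogged;

    public LoggedException(String msg, Throwable cause, boolean alreadyLogged) {
        super(msg, cause);
        this.alreadyLogged = alreadyLogged;
    }

    // Proposed default: not yet logged, so the catch site logs it exactly once.
    public LoggedException(String msg, Throwable cause) {
        this(msg, cause, false);
    }

    public static void main(String[] args) {
        try {
            throw new LoggedException("update failed", new java.io.IOException("disk"));
        } catch (LoggedException e) {
            if (!e.alreadyLogged) {
                System.out.println("LOG: " + e.getMessage());
            }
        }
    }
}
```

With alreadyLogged=false as the default, the worst case is double logging at a catch site that also logged before throwing; with true as the default, the worst case is silence.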




[jira] Updated: (SOLR-2191) Change SolrException cstrs that take Throwable to default to alreadyLogged=false

2010-10-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2191:
--

Attachment: SOLR-2191.patch

Patch that enables logging for a whole slew of exceptions.

> Change SolrException cstrs that take Throwable to default to 
> alreadyLogged=false
> 
>
> Key: SOLR-2191
> URL: https://issues.apache.org/jira/browse/SOLR-2191
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Fix For: Next
>
> Attachments: SOLR-2191.patch
>
>
> Because of misuse, many exceptions are now not logged at all - which can be 
> painful when doing dev. I think we should flip this setting and work at removing 
> any double logging - losing logging is worse (and it almost looks like we lose 
> more logging than we would gain from double logging) - and bad 
> SolrException/logging patterns are proliferating.




[jira] Created: (SOLR-2193) Re-architect Update Handler

2010-10-24 Thread Mark Miller (JIRA)
Re-architect Update Handler
---

 Key: SOLR-2193
 URL: https://issues.apache.org/jira/browse/SOLR-2193
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
 Fix For: Next


The update handler needs an overhaul.

A few goals I think we might want to look at:

1. Cleanup - drop DirectUpdateHandler(2) line - move to something like 
UpdateHandler, DefaultUpdateHandler
2. Expose the SolrIndexWriter in the API or add the proper abstractions to get 
done what we now do with special casing:
if (directupdatehandler2)
  success
 else
  failish
3. Stop closing the IndexWriter and start using commit.
4. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
5. Keep NRT support in mind.
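One way to read goal 2: replace the instanceof-style special casing with a capability declared on the abstraction itself, so callers never have to know which concrete handler they got. A rough sketch under that assumption (names hypothetical, not the actual Solr API):

```java
// Hypothetical sketch: instead of "if (directupdatehandler2) success else failish",
// callers ask the UpdateHandler abstraction what it supports.
public class UpdateHandlerSketch {
    interface UpdateHandler {
        default boolean supportsLiveCommit() { return false; }
        String commit();
    }

    static class DefaultUpdateHandler implements UpdateHandler {
        @Override public boolean supportsLiveCommit() { return true; }
        // Goal 3: commit instead of closing/reopening the IndexWriter.
        @Override public String commit() { return "committed (IndexWriter kept open)"; }
    }

    public static void main(String[] args) {
        UpdateHandler h = new DefaultUpdateHandler();
        // No instanceof check on the concrete class anywhere:
        System.out.println(h.supportsLiveCommit() ? h.commit() : "unsupported");
    }
}
```

The concrete class name and the capability method are illustrative; the point is only that the special case moves behind the interface.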




[jira] Commented: (SOLR-2193) Re-architect Update Handler

2010-10-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924427#action_12924427
 ] 

Mark Miller commented on SOLR-2193:
---

I've been playing with a patch that keeps the IndexWriter open always (shares 
it across core reloads) and drops our internal update locks - so far, all 
tests pass, but there are still issues to deal with.

I'll post the patch once I work a few more things out. It won't cover everything - 
it's just a start to explore different ideas.

> Re-architect Update Handler
> ---
>
> Key: SOLR-2193
> URL: https://issues.apache.org/jira/browse/SOLR-2193
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
> Fix For: Next
>
>
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Cleanup - drop DirectUpdateHandler(2) line - move to something like 
> UpdateHandler, DefaultUpdateHandler
> 2. Expose the SolrIndexWriter in the API or add the proper abstractions to 
> get done what we now do with special casing:
> if (directupdatehandler2)
>   success
>  else
>   failish
> 3. Stop closing the IndexWriter and start using commit.
> 4. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 5. Keep NRT support in mind.




[jira] Updated: (SOLR-2193) Re-architect Update Handler

2010-10-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-2193:
--

Description: 
The update handler needs an overhaul.

A few goals I think we might want to look at:

1. Cleanup - drop DirectUpdateHandler(2) line - move to something like 
UpdateHandler, DefaultUpdateHandler
2. Expose the SolrIndexWriter in the API or add the proper abstractions to get 
done what we now do with special casing:
if (directupdatehandler2)
  success
 else
  failish
3. Stop closing the IndexWriter and start using commit (still lazy IW init 
though).
4. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
5. Keep NRT support in mind.

  was:
The update handler needs an overhaul.

A few goals I think we might want to look at:

1. Cleanup - drop DirectUpdateHandler(2) line - move to something like 
UpdateHandler, DefaultUpdateHandler
2. Expose the SolrIndexWriter in the api or add the proper abstractions to get 
done what we know do with special casing:
if (directupdatehandler2)
  success
 else
  failish
3. Stop closing the IndexWriter and start using commit.
4. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
5. Keep NRT support in mind.


> Re-architect Update Handler
> ---
>
> Key: SOLR-2193
> URL: https://issues.apache.org/jira/browse/SOLR-2193
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
> Fix For: Next
>
>
> The update handler needs an overhaul.
> A few goals I think we might want to look at:
> 1. Cleanup - drop DirectUpdateHandler(2) line - move to something like 
> UpdateHandler, DefaultUpdateHandler
> 2. Expose the SolrIndexWriter in the API or add the proper abstractions to 
> get done what we now do with special casing:
> if (directupdatehandler2)
>   success
>  else
>   failish
> 3. Stop closing the IndexWriter and start using commit (still lazy IW init 
> though).
> 4. Drop iwAccess, iwCommit locks and sync mostly at the Lucene level.
> 5. Keep NRT support in mind.




[jira] Commented: (SOLR-1897) The data dir from the core descriptor should override the data dir from the solrconfig.xml rather than the other way round

2010-10-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924428#action_12924428
 ] 

Mark Miller commented on SOLR-1897:
---

This went in with cloud - I thought I had resolved this. I'll make a CHANGES 
entry and resolve soon.

> The data dir from the core descriptor should override the data dir from the 
> solrconfig.xml rather than the other way round
> --
>
> Key: SOLR-1897
> URL: https://issues.apache.org/jira/browse/SOLR-1897
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: Next
>
> Attachments: SOLR-1897.patch
>
>





[jira] Commented: (SOLR-1962) Index directory disagreement SolrCore#initIndex

2010-10-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924429#action_12924429
 ] 

Mark Miller commented on SOLR-1962:
---

I'm going to commit this tomorrow. 

> Index directory disagreement SolrCore#initIndex
> ---
>
> Key: SOLR-1962
> URL: https://issues.apache.org/jira/browse/SOLR-1962
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: Next
>
> Attachments: SOLR-1962.patch
>
>
> getNewIndexDir is widely used in this method - but then when a new index is 
> created, getIndexDir is used:
> {code}
>   // Create the index if it doesn't exist.
>   if(!indexExists) {
> log.warn(logid+"Solr index directory '" + new File(getNewIndexDir()) 
> + "' doesn't exist."
> + " Creating new index...");
> SolrIndexWriter writer = new SolrIndexWriter("SolrCore.initIndex", 
> getIndexDir(), getDirectoryFactory(), true, schema, 
> solrConfig.mainIndexConfig, solrDelPolicy);
> writer.close();
>   }
> {code}
> also this piece uses getIndexDir():
> {code}
>   if (indexExists && firstTime && removeLocks) {
> // to remove locks, the directory must already exist... so we create 
> it
> // if it didn't exist already...
> Directory dir = SolrIndexWriter.getDirectory(getIndexDir(), 
> getDirectoryFactory(), solrConfig.mainIndexConfig);
> if (dir != null)  {
>   if (IndexWriter.isLocked(dir)) {
> log.warn(logid+"WARNING: Solr index directory '" + getIndexDir() 
> + "' is locked.  Unlocking...");
> IndexWriter.unlock(dir);
>   }
>   dir.close();
> }
>   }
> {code}




[jira] Commented: (SOLR-1674) improve analysis tests, cut over to new API

2010-10-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12924431#action_12924431
 ] 

Mark Miller commented on SOLR-1674:
---

Going to close this if no one objects...

> improve analysis tests, cut over to new API
> ---
>
> Key: SOLR-1674
> URL: https://issues.apache.org/jira/browse/SOLR-1674
> Project: Solr
>  Issue Type: Test
>  Components: Schema and Analysis
>Reporter: Robert Muir
>Assignee: Mark Miller
> Fix For: 1.5, 3.1, 4.0
>
> Attachments: SOLR-1674.patch, SOLR-1674.patch, SOLR-1674_speedup.patch
>
>
> This patch
> * converts all analysis tests to use the new tokenstream api
> * converts most tests to use the more stringent assertion mechanisms from 
> lucene
> * adds new tests to improve coverage
> Most bugs found by more stringent testing have been fixed, with the exception 
> of SynonymFilter.
> The problems with this filter are more serious; the previous tests were 
> essentially a no-op.
> The new tests for SynonymFilter test the current behavior, but have FIXMEs 
> with what I think the old test wanted to expect in the comments.




[jira] Updated: (SOLR-1962) Index directory disagreement SolrCore#initIndex

2010-10-25 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-1962:
--

Affects Version/s: 1.4
   1.4.1
Fix Version/s: (was: Next)
   4.0
   3.1

> Index directory disagreement SolrCore#initIndex
> ---
>
> Key: SOLR-1962
> URL: https://issues.apache.org/jira/browse/SOLR-1962
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 1.4, 1.4.1
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 3.1, 4.0
>
> Attachments: SOLR-1962.patch
>
>
> getNewIndexDir is widely used in this method - but then when a new index is 
> created, getIndexDir is used:
> {code}
>   // Create the index if it doesn't exist.
>   if(!indexExists) {
> log.warn(logid+"Solr index directory '" + new File(getNewIndexDir()) 
> + "' doesn't exist."
> + " Creating new index...");
> SolrIndexWriter writer = new SolrIndexWriter("SolrCore.initIndex", 
> getIndexDir(), getDirectoryFactory(), true, schema, 
> solrConfig.mainIndexConfig, solrDelPolicy);
> writer.close();
>   }
> {code}
> also this piece uses getIndexDir():
> {code}
>   if (indexExists && firstTime && removeLocks) {
> // to remove locks, the directory must already exist... so we create 
> it
> // if it didn't exist already...
> Directory dir = SolrIndexWriter.getDirectory(getIndexDir(), 
> getDirectoryFactory(), solrConfig.mainIndexConfig);
> if (dir != null)  {
>   if (IndexWriter.isLocked(dir)) {
> log.warn(logid+"WARNING: Solr index directory '" + getIndexDir() 
> + "' is locked.  Unlocking...");
> IndexWriter.unlock(dir);
>   }
>   dir.close();
> }
>   }
> {code}




[jira] Created: (SOLR-2225) CoreContainer#register should use checkDefault to normalize the core name

2010-11-09 Thread Mark Miller (JIRA)
CoreContainer#register should use checkDefault to normalize the core name
-

 Key: SOLR-2225
 URL: https://issues.apache.org/jira/browse/SOLR-2225
 Project: Solr
  Issue Type: Bug
  Components: multicore
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 3.1, 4.0


fail case:

start with default collection set to collection1
remove core collection1
default collection on CoreContainer is still set to collection1
add core collection1
it doesn't act like the default core

We might do as the summary suggests, or, when the default core is removed, 
reset to no default core until one is again explicitly set.
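The normalization the summary suggests could look roughly like this (hypothetical sketch, not the actual CoreContainer code): map the empty or null name to the configured default core, so register/remove/re-add round-trips keep resolving the same name.

```java
// Hypothetical sketch of checkDefault-style name normalization.
public class CoreNames {
    static String checkDefault(String name, String defaultCoreName) {
        // The default core is addressed by the empty name; normalize it
        // back to the real core name so lookups and registration agree.
        return (name == null || name.isEmpty()) ? defaultCoreName : name;
    }

    public static void main(String[] args) {
        System.out.println(checkDefault("", "collection1"));
        System.out.println(checkDefault("collection2", "collection1"));
    }
}
```

If register() normalizes through the same function as lookup, re-adding "collection1" after removing it lands back on the default slot instead of a fresh non-default entry.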




[jira] Commented: (LUCENE-2766) ParallelReader should support getSequentialSubReaders if possible

2010-11-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12932326#action_12932326
 ] 

Mark Miller commented on LUCENE-2766:
-

If you forget about detection to start with, and put it on the user to declare that 
they will keep segments in sync, then it's pretty simple, isn't it? Something like:

{code}
  public IndexReader[] getSequentialSubReaders() {
    if (!synchedSubReaders) {
      // The user has not declared that the parallel indexes keep their
      // segments in sync, so we cannot offer per-segment sub readers.
      return null;
    } else {
      int numReaders = readers.size();
      IndexReader firstReader = readers.get(0);
      IndexReader[] firstReaderSubReaders = firstReader
          .getSequentialSubReaders();
      IndexReader[] seqSubReaders;
      if (firstReaderSubReaders != null) {
        int segCnt = firstReaderSubReaders.length;
        seqSubReaders = new IndexReader[segCnt];
        try {
          // For each segment, build a ParallelReader over the j-th
          // sub reader of every parallel index.
          for (int j = 0; j < segCnt; j++) {
            ParallelReader pr = new ParallelReader();
            seqSubReaders[j] = pr;
            for (int i = 0; i < numReaders; i++) {
              IndexReader reader = readers.get(i);
              IndexReader[] subs = reader.getSequentialSubReaders();
              if (subs == null || subs.length != segCnt) {
                // This reader cannot be split into matching segments.
                return null;
              }
              pr.add(subs[j]);
            }
          }
        } catch (IOException e) {
          throw new RuntimeException(e);
        }
        return seqSubReaders;
      }
      return null;
    }
  }
{code}

> ParallelReader should support getSequentialSubReaders if possible
> -
>
> Key: LUCENE-2766
> URL: https://issues.apache.org/jira/browse/LUCENE-2766
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Reporter: Andrzej Bialecki 
>
> Applications that need to use ParallelReader can't currently use per-segment 
> optimizations because getSequentialSubReaders returns null.
> Considering the strict requirements on input indexes that ParallelReader 
> already enforces it's usually the case that the additional indexes are built 
> with the knowledge of the primary index, in order to keep the docId-s 
> synchronized. If that's the case then it's conceivable that these indexes 
> could be created with the same number of segments, which in turn would mean 
> that their docId-s are synchronized on a per-segment level. ParallelReader 
> should detect such cases, and in getSequentialSubReader it should return an 
> array of ParallelReader-s created from corresponding segments of input 
> indexes.


[jira] Commented: (LUCENE-2766) ParallelReader should support getSequentialSubReaders if possible

2010-11-15 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12932336#action_12932336
 ] 

Mark Miller commented on LUCENE-2766:
-

And that assumes you don't need to descend into a deep/non-standard reader 
graph - but one step at a time.

> ParallelReader should support getSequentialSubReaders if possible
> -
>
> Key: LUCENE-2766
> URL: https://issues.apache.org/jira/browse/LUCENE-2766
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Reporter: Andrzej Bialecki 
>
> Applications that need to use ParallelReader can't currently use per-segment 
> optimizations because getSequentialSubReaders returns null.
> Considering the strict requirements on input indexes that ParallelReader 
> already enforces it's usually the case that the additional indexes are built 
> with the knowledge of the primary index, in order to keep the docId-s 
> synchronized. If that's the case then it's conceivable that these indexes 
> could be created with the same number of segments, which in turn would mean 
> that their docId-s are synchronized on a per-segment level. ParallelReader 
> should detect such cases, and in getSequentialSubReader it should return an 
> array of ParallelReader-s created from corresponding segments of input 
> indexes.


[jira] Commented: (LUCENE-2766) ParallelReader should support getSequentialSubReaders if possible

2010-11-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-2766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12932471#action_12932471
 ] 

Mark Miller commented on LUCENE-2766:
-

That's the other side of the coin though (the harder part, it would seem). 
It doesn't seem too difficult to add getSequentialSubReaders support to 
ParallelReader for the right cases - the hard part is keeping the segments in 
your indexes synced up. But this issue seemed to treat that part as a separate 
assumption.

> ParallelReader should support getSequentialSubReaders if possible
> -
>
> Key: LUCENE-2766
> URL: https://issues.apache.org/jira/browse/LUCENE-2766
> Project: Lucene - Java
>  Issue Type: Improvement
>  Components: Index
>Reporter: Andrzej Bialecki 
>
> Applications that need to use ParallelReader can't currently use per-segment 
> optimizations because getSequentialSubReaders returns null.
> Considering the strict requirements on input indexes that ParallelReader 
> already enforces it's usually the case that the additional indexes are built 
> with the knowledge of the primary index, in order to keep the docId-s 
> synchronized. If that's the case then it's conceivable that these indexes 
> could be created with the same number of segments, which in turn would mean 
> that their docId-s are synchronized on a per-segment level. ParallelReader 
> should detect such cases, and in getSequentialSubReader it should return an 
> array of ParallelReader-s created from corresponding segments of input 
> indexes.


[jira] Commented: (SOLR-1775) Replication of 300MB stops indexing for 5 seconds when syncing

2010-12-08 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12969446#action_12969446
 ] 

Mark Miller commented on SOLR-1775:
---

Backup copies, but if I remember right, the Java replication handler will 
attempt a rename - and if that fails (e.g. going across drives/partitions), it 
falls back to a copy?
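That rename-then-copy fallback pattern can be sketched like this. This is a hypothetical illustration of the general technique only, not the actual replication-handler code; the class and method names are made up for the example:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.Writer;

// Hypothetical sketch of rename-then-copy: renameTo is cheap within one
// filesystem but can fail across drives/partitions, in which case we fall
// back to a byte copy followed by deleting the source.
public class MoveWithFallback {
    static void moveFile(File src, File dest) throws IOException {
        if (src.renameTo(dest)) {
            return; // cheap path: same filesystem, no data copied
        }
        InputStream in = new FileInputStream(src);
        try {
            OutputStream out = new FileOutputStream(dest);
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            } finally {
                out.close();
            }
        } finally {
            in.close();
        }
        if (!src.delete()) {
            throw new IOException("could not delete " + src);
        }
    }

    public static void main(String[] args) throws IOException {
        File src = File.createTempFile("seg", ".tmp");
        Writer w = new FileWriter(src);
        w.write("index data");
        w.close();
        File dest = new File(src.getParentFile(), src.getName() + ".moved");
        moveFile(src, dest);
        System.out.println(dest.exists() && !src.exists());
        dest.delete();
    }
}
```

The copy path is where the 5-10 second stall described in the issue would come from: a 300MB copy does real I/O, while a rename is just a directory update.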

> Replication of 300MB stops indexing for 5 seconds when syncing
> --
>
> Key: SOLR-1775
> URL: https://issues.apache.org/jira/browse/SOLR-1775
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 1.4
> Environment: Centos 5.3
>Reporter: Bill Bell
>
> When using Java replication in v1.4 and doing a sync from master to slave, 
> the slave delays for about 5-10 seconds. When using rsync this does not occur.
> Is there a way to thread better or lower the priority to not impact queries 
> when it is bringing over the index files from the master? Maybe a separate 
> process?

