[jira] [Commented] (SOLR-13568) Expand component should not cache group queries in the filter cache

2019-08-26 Thread Ludovic Boutros (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16915871#comment-16915871
 ] 

Ludovic Boutros commented on SOLR-13568:


Any news on this? [~tomasflobbe], do you think you could take a look at this 
tiny patch?

> Expand component should not cache group queries in the filter cache
> ---
>
> Key: SOLR-13568
> URL: https://issues.apache.org/jira/browse/SOLR-13568
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.7.2, 8.1.1
>Reporter: Ludovic Boutros
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the expand component is creating queries (bit sets) from the 
> current page document ids.
> These queries are sadly put in the filter cache.
> This behavior floods the filter cache and it becomes inefficient.
> Therefore, the group query should be wrapped in a query with its cache flag 
> disabled.
> This is a tiny little thing to do. The PR will follow very soon.
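The flooding described above, and the proposed fix of marking the group query non-cacheable, can be sketched with a toy LRU cache. This is a Python simulation, not Solr's actual filterCache API; the cache keys and the `cache=` flag are illustrative stand-ins for the wrapped query's cache flag:

```python
from collections import OrderedDict

class FilterCache:
    """Tiny LRU cache standing in for Solr's filterCache (illustration only)."""
    def __init__(self, max_size):
        self.max_size = max_size
        self.data = OrderedDict()

    def put(self, key, value, cache=True):
        # A query whose cache flag is off is never stored; this is the
        # behavior the patch proposes for the expand component's group queries.
        if not cache:
            return
        self.data[key] = value
        self.data.move_to_end(key)
        if len(self.data) > self.max_size:
            self.data.popitem(last=False)  # evict the least recently used entry

    def __contains__(self, key):
        return key in self.data

# Without the fix: every page of results generates a unique group query
# that lands in the filter cache and evicts reusable filter entries.
cache = FilterCache(max_size=4)
cache.put("fq:category:books", "<bitset>")   # a reusable filter query
for page in range(10):
    cache.put(f"expand-group-page-{page}", "<bitset>")
flooded = "fq:category:books" not in cache   # the reusable filter was evicted

# With the fix: group queries are marked non-cacheable and leave the cache alone.
cache2 = FilterCache(max_size=4)
cache2.put("fq:category:books", "<bitset>")
for page in range(10):
    cache2.put(f"expand-group-page-{page}", "<bitset>", cache=False)
preserved = "fq:category:books" in cache2
```

Since the per-page group queries are effectively never reused, caching them costs evictions without ever producing a hit, which is why disabling their cache flag is the right call.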



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13568) Expand component should not cache group queries in the filter cache

2019-07-04 Thread Ludovic Boutros (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16878867#comment-16878867
 ] 

Ludovic Boutros commented on SOLR-13568:


Great, thank you [~joel.bernstein]. Tell me if I can help.

> Expand component should not cache group queries in the filter cache
> ---
>
> Key: SOLR-13568
> URL: https://issues.apache.org/jira/browse/SOLR-13568
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.2, 8.1.1
>Reporter: Ludovic Boutros
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the expand component is creating queries (bit sets) from the 
> current page document ids.
> These queries are sadly put in the filter cache.
> This behavior floods the filter cache and it becomes inefficient.
> Therefore, the group query should be wrapped in a query with its cache flag 
> disabled.
> This is a tiny little thing to do. The PR will follow very soon.






[jira] [Commented] (SOLR-13568) Expand component should not cache group queries in the filter cache

2019-06-25 Thread Ludovic Boutros (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16872060#comment-16872060
 ] 

Ludovic Boutros commented on SOLR-13568:


The PR is done on the 8x branch. Is that the right one, or would you prefer it 
on master?
[~joel.bernstein], I think you are the master of the collapse/expand features. :)
Would you like to take a look at this, please?

Thank you !

> Expand component should not cache group queries in the filter cache
> ---
>
> Key: SOLR-13568
> URL: https://issues.apache.org/jira/browse/SOLR-13568
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.7.2, 8.1.1
>Reporter: Ludovic Boutros
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the expand component is creating queries (bit sets) from the 
> current page document ids.
> These queries are sadly put in the filter cache.
> This behavior floods the filter cache and it becomes inefficient.
> Therefore, the group query should be wrapped in a query with its cache flag 
> disabled.
> This is a tiny little thing to do. The PR will follow very soon.






[jira] [Created] (SOLR-13568) Expand component should not cache group queries in the filter cache

2019-06-20 Thread Ludovic Boutros (JIRA)
Ludovic Boutros created SOLR-13568:
--

 Summary: Expand component should not cache group queries in the 
filter cache
 Key: SOLR-13568
 URL: https://issues.apache.org/jira/browse/SOLR-13568
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.1.1, 7.7.2
Reporter: Ludovic Boutros


Currently the expand component creates queries (bit sets) from the current 
page's document ids.
These queries are unfortunately put in the filter cache.
This behavior floods the filter cache, which then becomes inefficient.

Therefore, the group query should be wrapped in a query with its cache flag 
disabled.

This is a small change. The PR will follow very soon.






[jira] [Commented] (SOLR-6086) Replica active during Warming

2017-07-31 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16107189#comment-16107189
 ] 

Ludovic Boutros commented on SOLR-6086:
---

Thanks [~shalinmangar].

> Replica active during Warming
> -
>
> Key: SOLR-6086
> URL: https://issues.apache.org/jira/browse/SOLR-6086
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.6.1, 4.8.1
>Reporter: Ludovic Boutros
>Assignee: Shalin Shekhar Mangar
>  Labels: difficulty-medium, impact-medium
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-6086.patch, SOLR-6086.patch, SOLR-6086.patch, 
> SOLR-6086.patch, SOLR-6086-temp.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> At least with Solr 4.6.1, replicas are considered active during the warming 
> process.
> This means that if you restart a replica or create a new one, queries will 
> be sent to this replica, and they will hang until the end of the warming 
> process (if cold searchers are not used).
> You cannot add or restart a node silently anymore.
> I think that the fact that the replica is active is not a bad thing.
> But the HttpShardHandler and the CloudSolrServer class should take the 
> warming process into account.
> Currently, I have developed a new, very simple component which checks that a 
> searcher is registered.
> I am also developing custom HttpShardHandler and CloudSolrServer classes 
> which will check the warming process in addition to the ACTIVE status in the 
> cluster state.
> This seems to be more a workaround than a solution, but that's all I can do in 
> this version.
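The workaround described in the quoted issue, routing queries only to replicas that both report ACTIVE and have a registered searcher, can be sketched like this (a Python simulation; the replica dictionaries and field names are illustrative and do not match SolrCloud's actual cluster-state schema):

```python
def queryable(replica):
    """A replica should receive queries only once warming has finished,
    i.e. a searcher is registered; the ACTIVE state alone is not enough."""
    return replica["state"] == "ACTIVE" and replica["searcher_registered"]

replicas = [
    {"name": "r1", "state": "ACTIVE",     "searcher_registered": True},
    {"name": "r2", "state": "ACTIVE",     "searcher_registered": False},  # still warming
    {"name": "r3", "state": "RECOVERING", "searcher_registered": False},
]

# A warming-aware shard handler would only route queries to r1 here;
# a state-only check would also pick r2 and the query would hang on it.
targets = [r["name"] for r in replicas if queryable(r)]
```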






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates

2017-01-05 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801285#comment-15801285
 ] 

Ludovic Boutros commented on SOLR-8030:
---

Thank you [~hossman] for the clarification. And you are right about the request 
parameters as well; I did not check the abstract class.

Sorry [~dsmiley] for the misunderstanding ;)

> Transaction log does not store the update chain (or req params?) used for 
> updates
> -
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain, or any other details from 
> the original update request such as the request params, used during updates.
> Therefore tLog uses the default update chain, and a synthetic request, during 
> log replay.
> If we implement custom update logic with multiple distinct update chains that 
> use custom processors after DistributedUpdateProcessor, or if the default 
> chain uses processors whose behavior depends on other request params, then 
> log replay may be incorrect.
> Potentially problematic scenarios (need test cases):
> * DBQ where the main query string uses local param variables that refer to 
> other request params
> * custom Update chain set as {{default="true"}} using something like 
> StatelessScriptUpdateProcessorFactory after DUP where the script depends on 
> request params.
> * multiple named update chains with diff processors configured after DUP and 
> specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom 
> formats configured after DUP in some special chains, but not in the default 
> chain
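The fix proposed in the quoted issue, recording the update chain (and request params) in the transaction log so replay can reuse them, can be sketched as a toy model. This is purely illustrative Python; Solr's UpdateLog stores binary tlog entries, not tuples, and the chain names here are made up:

```python
class UpdateLog:
    """Toy transaction log that remembers which chain processed each update."""
    def __init__(self):
        self.entries = []

    def log(self, doc, chain="default", params=None):
        # The proposal: persist the chain name and request params alongside
        # the document instead of discarding them at log-write time.
        self.entries.append((doc, chain, params or {}))

    def replay(self, chains):
        out = []
        for doc, chain, params in self.entries:
            # Replay each update through the chain recorded at update time,
            # not unconditionally through the default chain.
            out.append(chains[chain](doc, params))
        return out

chains = {
    "default":   lambda doc, params: doc,
    "uppercase": lambda doc, params: {k: v.upper() for k, v in doc.items()},
}

tlog = UpdateLog()
tlog.log({"id": "a"})                       # went through the default chain
tlog.log({"id": "b"}, chain="uppercase")    # went through a custom chain
replayed = tlog.replay(chains)
```

Without the recorded chain name, replay would push both documents through the default chain and the second document would come back unmodified, which is exactly the asymmetry the issue describes.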






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain (or req params?) used for updates

2017-01-04 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15797577#comment-15797577
 ] 

Ludovic Boutros commented on SOLR-8030:
---

Thank you [~hossman].

The point is that for the main operations, the Document Update Processors do not 
have access to the Solr request.
The parameters are stored in the commands (add, delete, commit).
I don't know if the parameters could also be stored in the command for merge 
and rollback operations.

This way we would not have to worry about request parameters.

I agree with [~dsmiley] that the log replay is too complicated.
But I do not agree with [~dsmiley] that this should not be fixed because of 
very specific use cases.

I think the log replay must be symmetric by default.
This is the natural behavior of every database system that I can think of.
If you need something else, you can always check the REPLAY flag in your 
processor.

Currently, your index can easily be corrupted because your update processor 
logic is not applied during log replay.
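The REPLAY-flag escape hatch mentioned above can be sketched as follows. This is a Python simulation: Solr does define an UpdateCommand.REPLAY flag, but the processor shape, the fixed clock, and the field name below are hypothetical:

```python
REPLAY = 0x1  # stand-in for Solr's UpdateCommand.REPLAY flag

def timestamp_processor(doc, flags, clock=lambda: "2017-01-04T00:00:00Z"):
    """With symmetric replay, processors run again during log replay by
    default; a processor whose logic must not re-run opts out itself by
    checking the REPLAY flag."""
    if flags & REPLAY:
        return doc  # explicit opt-out: keep the value recorded at index time
    doc = dict(doc)
    doc["indexed_at"] = clock()
    return doc

live = timestamp_processor({"id": "1"}, 0)        # normal update: stamp it
replayed = timestamp_processor(live, REPLAY)      # replay: leave it untouched
```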


> Transaction log does not store the update chain (or req params?) used for 
> updates
> -
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain, or any other details from 
> the original update request such as the request params, used during updates.
> Therefore tLog uses the default update chain, and a synthetic request, during 
> log replay.
> If we implement custom update logic with multiple distinct update chains that 
> use custom processors after DistributedUpdateProcessor, or if the default 
> chain uses processors whose behavior depends on other request params, then 
> log replay may be incorrect.
> Potentially problematic scenarios (need test cases):
> * DBQ where the main query string uses local param variables that refer to 
> other request params
> * custom Update chain set as {{default="true"}} using something like 
> StatelessScriptUpdateProcessorFactory after DUP where the script depends on 
> request params.
> * multiple named update chains with diff processors configured after DUP and 
> specific requests sent to diff chains -- ex: ParseDateProcessor w/ custom 
> formats configured after DUP in some special chains, but not in the default 
> chain






[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain used for updates

2016-04-01 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221309#comment-15221309
 ] 

Ludovic Boutros edited comment on SOLR-8030 at 4/1/16 7:36 AM:
---

[~dsmiley], if I remember the process correctly (I checked this a long time 
ago now ;)), there are two different cases:
- the first one is the "normal" case: the URP chain is used and the 
DirectUpdateHandler adds the documents to the UpdateLog.
- the second one is used during recovery, when updates must be buffered and the 
URP chain is not used before buffering documents to the UpdateLog. The 
buffering is done by the DistributedURP.

That's why you will find two different places where documents are added to the 
UpdateLog, and that's why two different URP chains should be put in the UpdateLog 
(see my old comment on that subject).


was (Author: lboutros):
[~dsmiley], if I remember the process correctly, if I remember the process 
correctly (I checked this a long time ago now ;)), there are two different 
cases :
- the first one is is the "normal" case, the URP chain is used and the 
DirectUpdateHandler adds the documents to the UpdateLog.
- the second one is used during recovery when updates must be buffered and the 
URP chain is not used before buffering documents to the UpdateLog. And the 
buffering is done by the DistributedURP.

That's why you will find two different places where documents are added to the 
UpdateLog and that's why two different URP chain should be put in the UpdateLog 
(see my old comment on that subject).

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain used for updates

2016-04-01 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221309#comment-15221309
 ] 

Ludovic Boutros edited comment on SOLR-8030 at 4/1/16 7:35 AM:
---

[~dsmiley], if I remember the process correctly, if I remember the process 
correctly (I checked this a long time ago now ;)), there are two different 
cases :
- the first one is is the "normal" case, the URP chain is used and the 
DirectUpdateHandler adds the documents to the UpdateLog.
- the second one is used during recovery when updates must be buffered and the 
URP chain is not used before buffering documents to the UpdateLog. And the 
buffering is done by the DistributedURP.

That's why you will find two different places where documents are added to the 
UpdateLog and that's why two different URP chain should be put in the UpdateLog 
(see my old comment on that subject).


was (Author: lboutros):
[~dsmiley], if I remember the process correctly[~dsmiley], if I remember the 
process correctly (I checked this a long time ago now ;)), there are two 
different cases :
- the first one is is the "normal" case, the URP chain is used and the 
DirectUpdateHandler adds the documents to the UpdateLog.
- the second one is used during recovery when updates must be buffered and the 
URP chain is not used before buffering documents to the UpdateLog. And the 
buffering is done by the DistributedURP.

That's why you will find two different places where documents are added to the 
UpdateLog and that's why two different URP chain should be put in the UpdateLog 
(see my old comment on that subject).

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2016-04-01 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15221309#comment-15221309
 ] 

Ludovic Boutros commented on SOLR-8030:
---

[~dsmiley], if I remember the process correctly (I checked this a long time 
ago now ;)), there are two different cases:
- the first one is the "normal" case: the URP chain is used and the 
DirectUpdateHandler adds the documents to the UpdateLog.
- the second one is used during recovery, when updates must be buffered and the 
URP chain is not used before buffering documents to the UpdateLog. The 
buffering is done by the DistributedURP.

That's why you will find two different places where documents are added to the 
UpdateLog, and that's why two different URP chains should be put in the UpdateLog 
(see my old comment on that subject).

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2016-03-24 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15211031#comment-15211031
 ] 

Ludovic Boutros commented on SOLR-8030:
---

I learned it the hard way too [~dsmiley]. I'll try to take some time on this 
next weekend. Thanks.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-14 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14957527#comment-14957527
 ] 

Ludovic Boutros commented on SOLR-8117:
---

Paul, I managed to reproduce the issue in a test case.
And you are right: depending on the positions of the empty nodes, the test fails.

I will try to attach the test case, and perhaps a fix, soon.


> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.
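The off-by-one described in the quoted issue can be reproduced in a toy model (Python, illustrative only; the function names are made up and do not mirror Solr's actual Rule/ReplicaAssigner code):

```python
def matches(core_count, op, limit):
    """Evaluate a 'cores' condition such as cores:<1 against a node."""
    return core_count < limit if op == "<" else core_count == limit

def try_assign(node_cores, op, limit):
    """Mimics the reported flow: the rule is checked, the node's core count
    is incremented for the planned core, and then all conditions are
    verified again against the incremented count."""
    if not matches(node_cores, op, limit):
        return False, node_cores
    node_cores += 1  # tentatively place the new core on this node
    # The additional verification re-applies 'cores:<1' to the *new* count,
    # so an empty node (0 cores) fails even though placing one core there
    # satisfies the rule as written.
    ok = matches(node_cores, op, limit)
    return ok, node_cores

ok, count = try_assign(0, "<", 1)   # empty node, rule cores:<1
```

Here `ok` comes back `False`: the empty node is rejected, which is the reported bug.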






[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Attachment: (was: SOLR-8117.patch)

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-06 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14945144#comment-14945144
 ] 

Ludovic Boutros commented on SOLR-8117:
---

hmm, I see: the rules should be considered a mandatory state both before and 
after the collection creation.
This type of condition (<1) should then be considered invalid. I misunderstood 
the rule configuration.

Thank you Paul.

I will try to reproduce the other behavior: sometimes a collection creation is 
allowed and sometimes not, with the same cluster and the same rules.

I use these two rules:

rule=shard:*,host:*,replica:<2
rule=shard:*,cores:<2

The last time, I had to retry 3 times to finally create a collection (7 shards, 
2 replicas per shard).

The demo cluster contains 4 hosts, 16 nodes (4 per host), 14 of them empty.

With your explanation, it should never be allowed to create this collection, 
because all nodes would contain 2 cores after the collection creation.
Or perhaps the two rules are not applied the way I think.

In any case, the behavior should always be the same.


> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-05 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Attachment: SOLR-8117.patch

OK, so something like this should be better:

I have modified the _Rule.canMatch()_ function to prevent the additional 
verification for the operators '<' and '='.

I've also added another test for your cores>1 example.

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch, SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-05 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14943664#comment-14943664
 ] 

Ludovic Boutros commented on SOLR-8117:
---

Thank you Paul,

This example is good.
But do you agree that the test given in the patch should pass? (I mean, a 
condition cores:<1 should let a core be created on an empty node?)

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-03 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14942332#comment-14942332
 ] 

Ludovic Boutros commented on SOLR-8117:
---

Hi Paul, thank you for your answer.

It was just at first glance, and all the related tests were OK.

That said, could you please give me an example so that I can add a test which 
should fail without this second validation?

Does this mean that the increment of the core count should be done in this 
second validation?
Or do you mean that the condition '<1' should always be true at the end? That 
would seem weird to me.

I also have a stability issue in collection creation, but I have not managed to 
reproduce it in a unit test so far (only in production and in Docker).
The collection creation fails two or three times and then succeeds, on the 
fourth try for instance. I will check again after the fix of this issue.


> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
>Assignee: Noble Paul
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-03 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Description: 
The rule-based placement fails on an empty node (core count = 0) with condition 
'cores:<1'.

It also fails if current core number is equal to the core number in the 
condition - 1. 

During the placement strategy process, the core counts for a node are 
incremented when all the rules match.

At the end of  the code, an additional verification of all the conditions is 
done with incremented core count and therefore it fails.

I don't know why this additional verification is needed and removing it seems 
to fix the issue.

  was:
The rule-based placement fails on an empty node (core count = 0) with condition 
'cores:<1'.

It also fails if current core number is equal to the core number in the 
condition - 1. 

During the placement strategy process, the core counts for a node are 
incremented when all the rules match.

At the end of  the code, an additional verification of all the conditions is 
done (with incremented core count) and it fails.

I don't know why this condition is needed and removing it seems to fix the 
issue.


> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done with incremented core count and therefore it fails.
> I don't know why this additional verification is needed and removing it seems 
> to fix the issue.






[jira] [Updated] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-02 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8117:
--
Attachment: SOLR-8117.patch

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done (with incremented core count) and it fails.
> I don't know why this condition is needed and removing it seems to fix the 
> issue.






[jira] [Created] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-02 Thread Ludovic Boutros (JIRA)
Ludovic Boutros created SOLR-8117:
-

 Summary: Rule-based placement issue with 'cores' tag 
 Key: SOLR-8117
 URL: https://issues.apache.org/jira/browse/SOLR-8117
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3, 5.3.1
Reporter: Ludovic Boutros


The rule-based placement fails on an empty node (core count = 0) with condition 
'cores:<1'.

It also fails if current core number is equal to the core number in the 
condition - 1. 

During the placement strategy process, the core counts for a node are 
incremented when all the rules match.

At the end of  the code, an additional verification of all the conditions is 
done (with incremented core count) and it fails.

I don't know why this condition is needed and removing it seems to fix the 
issue.






[jira] [Commented] (SOLR-8117) Rule-based placement issue with 'cores' tag

2015-10-02 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14941733#comment-14941733
 ] 

Ludovic Boutros commented on SOLR-8117:
---

A patch with a test and the additional verification removed.

The rule test and rule engine test pass.
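For illustration, the re-verification problem described in the issue can be reduced to a minimal standalone sketch (the names below are hypothetical, not the actual rule engine code): the condition 'cores:<1' passes for the empty node, the core count is then incremented for the tentative assignment, and the final verification re-runs the same condition against the incremented count.

```java
// Standalone sketch of the off-by-one, NOT the real Solr rule engine:
// a node starts with 0 cores and the rule is cores:<1.
public class PlacementSketch {
    // Returns true if coreCount satisfies the condition "cores:<max".
    static boolean matchesLessThan(int coreCount, int max) {
        return coreCount < max;
    }

    public static void main(String[] args) {
        int coreCount = 0;          // empty node
        int max = 1;                // rule: cores:<1

        // The initial rule check passes, so the core is tentatively assigned...
        boolean firstCheck = matchesLessThan(coreCount, max);
        coreCount++;                // ...and the node's core count is incremented.

        // The additional final verification re-runs the same condition
        // against the incremented count and fails, rejecting a valid placement.
        boolean finalCheck = matchesLessThan(coreCount, max);

        System.out.println(firstCheck + " " + finalCheck); // true false
    }
}
```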

> Rule-based placement issue with 'cores' tag 
> 
>
> Key: SOLR-8117
> URL: https://issues.apache.org/jira/browse/SOLR-8117
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3, 5.3.1
>Reporter: Ludovic Boutros
> Attachments: SOLR-8117.patch
>
>
> The rule-based placement fails on an empty node (core count = 0) with 
> condition 'cores:<1'.
> It also fails if current core number is equal to the core number in the 
> condition - 1. 
> During the placement strategy process, the core counts for a node are 
> incremented when all the rules match.
> At the end of  the code, an additional verification of all the conditions is 
> done (with incremented core count) and it fails.
> I don't know why this condition is needed and removing it seems to fix the 
> issue.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14933642#comment-14933642
 ] 

Ludovic Boutros commented on SOLR-8030:
---

I'm trying to fix this issue, but it seems to be even more complicated.

I have added the update chain name to the transaction log for the following 
operations: add, delete, deleteByQuery.

The 'finish' call can be done on each update chain used during the log replay.

But commits are actually ignored and a final commit is fired at the end.
So if the update logic of the chain does something during the commit, it will 
be ignored, and it seems tricky to improve this.

The (de)serialization of the commands is done with element positions,
so it's not really easy to add new elements to the transaction log: these 
positions must be updated in multiple places. Perhaps it should at least use 
some constant values...

Another thing: it seems that PeerSync uses the same (de)serialization and is 
affected by the same issue. The bad part is that the code is duplicated. It 
will have to take the update chain into account too.
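To illustrate the position-based (de)serialization problem, here is a minimal standalone sketch (the flat-list layout, the index constants, and the appended chain field are hypothetical stand-ins, not the real LogCodec): every reader addresses entries by position, so appending a field such as the chain name only works if the hard-coded indices stay in sync, which is what named constants would centralize.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch, NOT the real TransactionLog codec: tlog entries are flat
// lists read back by position, so adding a field means updating every
// hard-coded index. Named constants at least keep them in one place.
public class TlogEntrySketch {
    static final int FLAGS_IDX = 0;
    static final int VERSION_IDX = 1;
    static final int QUERY_IDX = 2;
    // A new field, e.g. the update chain name, appended at the next position:
    static final int CHAIN_IDX = 3;

    static List<Object> writeDeleteByQuery(int flags, long version, String query, String chain) {
        List<Object> entry = new ArrayList<>();
        entry.add(flags);    // FLAGS_IDX
        entry.add(version);  // VERSION_IDX
        entry.add(query);    // QUERY_IDX
        entry.add(chain);    // CHAIN_IDX, the newly appended element
        return entry;
    }

    static String readChain(List<Object> entry) {
        // A reader still using a literal index would silently break if another
        // field were inserted before it; the constant keeps writer and reader in sync.
        return (String) entry.get(CHAIN_IDX);
    }

    public static void main(String[] args) {
        List<Object> e = writeDeleteByQuery(0, 42L, "*:*", "my-chain");
        System.out.println(readChain(e)); // my-chain
    }
}
```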

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-28 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030.patch

The update chain to use in case of log replay or peer sync should be: 

DistributedUpdateProcessor => RunUpdateProcessor

The DistributedUpdateProcessor is needed in case of a version update.
The update chain should be kept in order to be able to call the finish() method 
at the end of log replay or peer sync.

This patch contains updated tests (TestRecovery and PeerSyncTest) which check 
the (bad) usage of the default chain; it cannot be called anymore.  

It contains a fix as well.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030-test.patch

Ok, I have found my problem with the test. It needs an FS directory.
This patch is a simplified test for this issue.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch, SOLR-8030-test.patch, 
> SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-25 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: (was: SOLR-8030-test.patch)

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-24 Thread Ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030-test.patch

Update of the test.

I have a problem with this test: it creates inconsistent replicas (on my Mac).

This prevents the actual testing of the update chain.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch, SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Comment Edited] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-24 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906706#comment-14906706
 ] 

Ludovic Boutros edited comment on SOLR-8030 at 9/24/15 5:38 PM:


Update of the test.

I have a problem with this test, it creates inconsistent replicas (On my mac).

This prevents the real update chain testing.



was (Author: lboutros):
Update of the test.

I have a problem with this test, it creates incoherent replicas (On my mac).

This prevents the real update chain testing.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch, SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-24 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14906713#comment-14906713
 ] 

Ludovic Boutros commented on SOLR-8030:
---

Is this related to SOLR-8085?

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch, SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-21 Thread Ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14900359#comment-14900359
 ] 

Ludovic Boutros commented on SOLR-8030:
---

I will update the test in order to check pre-distributed processors (buffered 
updates during recovery).
I will also rename the processor which locks the replay with a more explicit 
name.

I propose to add one or two flags to the command status to be able to detect 
already-processed update commands (before and after distribution).
They should be set in the RunUpdateProcessor and in the 
DistributedUpdateProcessor.

For buffered updates, the replay should start processing updates from the 
DistributedUpdateProcessor.
For the other updates, the replay should only use the RunUpdateProcessor.

For the update chain, I think the name could be stored in the tlog and reused 
during replay (with perhaps the last chain cached to avoid the lookup in the 
update chain map...).
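The proposed status flags might look something like this minimal standalone sketch (the constant names and values are hypothetical, not existing Solr code): two bits record whether a command was already processed before and/or after distribution, so replay can decide at which processor to resume.

```java
// Standalone sketch of the proposed command-status flags, NOT existing Solr
// constants: one bit per processing stage already completed.
public class ReplayFlagsSketch {
    static final int PROCESSED_PRE_DISTRIB  = 1 << 0; // pre-distributed processors ran
    static final int PROCESSED_POST_DISTRIB = 1 << 1; // post-distributed processors ran

    static boolean needsPreDistrib(int flags)  { return (flags & PROCESSED_PRE_DISTRIB) == 0; }
    static boolean needsPostDistrib(int flags) { return (flags & PROCESSED_POST_DISTRIB) == 0; }

    public static void main(String[] args) {
        // A buffered update already went through the pre-distrib processors,
        // so replay should resume at the DistributedUpdateProcessor.
        int buffered = PROCESSED_PRE_DISTRIB;
        System.out.println(needsPreDistrib(buffered));   // false
        System.out.println(needsPostDistrib(buffered));  // true
    }
}
```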



> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-19 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14877185#comment-14877185
 ] 

ludovic Boutros commented on SOLR-8030:
---

I managed to reproduce the two different problems in a test.

- Processors can be applied multiple times because of log replay
- The default update chain is the only chain used during log replay

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Updated] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-19 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-8030:
--
Attachment: SOLR-8030-test.patch

I don't know if there is an easier way to test this.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
> Attachments: SOLR-8030-test.patch
>
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-16 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14747282#comment-14747282
 ] 

ludovic Boutros commented on SOLR-8030:
---

I think [~hossman_luc...@fucit.org] is right.
It seems that log entries are added at the end of the update processor chain 
and replayed from the beginning of the default processor chain.
The replayed commands are local commands, so I think the pre-distributed 
processors are bypassed, but the other processors seem to be used. 

Why doesn't the log replay use the _RunUpdateProcessor_ directly, instead of 
the full update chain? 

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-16 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14790722#comment-14790722
 ] 

ludovic Boutros commented on SOLR-8030:
---

True, this bug initially pointed out that the default chain was always used 
for the replay.
But it seems to be more complicated, as you said.

The DistributedUpdateProcessor seems to buffer updates in the tLog while in 
the inactive state.
So you're right, only replaying to the RunUpdateProcessor would be a mistake.

But the DirectUpdateHandler2 seems to add updates to the tLog while in the 
active state.
And replaying from the beginning of the default update chain seems to be a 
mistake as well.



> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Created] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-8030:
-

 Summary: Transaction log does not store the update chain used for 
updates
 Key: SOLR-8030
 URL: https://issues.apache.org/jira/browse/SOLR-8030
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.3
Reporter: ludovic Boutros


Transaction Log does not store the update chain used during updates.

Therefore tLog uses the default update chain during log replay.

If we implement custom update logic with multiple update chains, the log replay 
could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738819#comment-14738819
 ] 

ludovic Boutros commented on SOLR-8030:
---

Seems to be here:

{code:title=TransactionLog.java|borderStyle=solid}
  public long writeDeleteByQuery(DeleteUpdateCommand cmd, int flags) {
    LogCodec codec = new LogCodec(resolver);
    try {
      checkWriteHeader(codec, null);

      MemOutputStream out = new MemOutputStream(new byte[20 + (cmd.query.length())]);
      codec.init(out);
      codec.writeTag(JavaBinCodec.ARR, 3);
      codec.writeInt(UpdateLog.DELETE_BY_QUERY | flags);  // should just take one byte
      codec.writeLong(cmd.getVersion());
      codec.writeStr(cmd.query);

      synchronized (this) {
        long pos = fos.size();   // if we had flushed, this should be equal to channel.position()
        out.writeAll(fos);
        endRecord(pos);
        // fos.flushBuffer();  // flush later
        return pos;
      }
    } catch (IOException e) {
      throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
    }
  }
{code}

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738802#comment-14738802
 ] 

ludovic Boutros commented on SOLR-8030:
---

Not for delete by query for instance, it seems.

> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-8030) Transaction log does not store the update chain used for updates

2015-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14738859#comment-14738859
 ] 

ludovic Boutros commented on SOLR-8030:
---

[~ichattopadhyaya],

you could create an update processor which forbids deleteByQuery updates,
then put it in the default update chain.
You can create another update chain without this processor.
Add some documents and delete them with queries using the update chain that 
allows this operation.
Next, play with the famous Monkey ;)

Perhaps there are easier ways to reproduce?

I can try to reproduce this, I like the Monkey :p.
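The reproduction idea can be sketched with simplified stand-ins (this is NOT the real Solr UpdateRequestProcessor API; the interface and the two chains below are hypothetical): a delete-by-query succeeds through the permissive chain at index time, but a log replay that falls back to the default, forbidding chain would reject the replayed delete.

```java
// Standalone sketch of the reproduction idea with stand-in types, NOT the
// real Solr update processor API.
public class ForbidDbqSketch {
    interface UpdateProcessor {
        void processDeleteByQuery(String query);
    }

    // Default chain: contains the processor that forbids deleteByQuery.
    static final UpdateProcessor FORBID_DBQ = query -> {
        throw new UnsupportedOperationException("deleteByQuery is forbidden in this chain");
    };

    // Custom chain: same logic minus the forbidding processor.
    static final UpdateProcessor ALLOW_DBQ = query ->
        System.out.println("deleted by query: " + query);

    // Returns true if the replayed command made it through the chain.
    static boolean replay(UpdateProcessor chain, String query) {
        try {
            chain.processDeleteByQuery(query);
            return true;
        } catch (UnsupportedOperationException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Originally applied through the permissive chain: succeeds.
        System.out.println(replay(ALLOW_DBQ, "*:*"));
        // Replayed through the default chain: rejected.
        System.out.println(replay(FORBID_DBQ, "*:*"));
    }
}
```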



> Transaction log does not store the update chain used for updates
> 
>
> Key: SOLR-8030
> URL: https://issues.apache.org/jira/browse/SOLR-8030
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> Transaction Log does not store the update chain used during updates.
> Therefore tLog uses the default update chain during log replay.
> If we implement custom update logic with multiple update chains, the log 
> replay could break this logic.






[jira] [Commented] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-09-01 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14725496#comment-14725496
 ] 

ludovic Boutros commented on SOLR-7988:
---

Thank you!

> LukeRequest on default path is broken with CloudSolrClient
> --
>
> Key: SOLR-7988
> URL: https://issues.apache.org/jira/browse/SOLR-7988
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
> with _CloudSolrClient_.
> See the following commit :
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556=1694555=1694556
> The name of the collection is not added to the request URL and therefore we 
> get a 404 error in the response.
> Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
> workaround but it's quite annoying. 






[jira] [Commented] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-08-31 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723545#comment-14723545
 ] 

ludovic Boutros commented on SOLR-7988:
---

Ok, I will create a patch with a test and a fix.

What are the current default admin handlers which need the collection name?

> LukeRequest on default path is broken with CloudSolrClient
> --
>
> Key: SOLR-7988
> URL: https://issues.apache.org/jira/browse/SOLR-7988
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
> with _CloudSolrClient_.
> See the following commit :
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556=1694555=1694556
> The name of the collection is not added to the request URL and therefore we 
> get a 404 error in the response.
> Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
> workaround but it's quite annoying. 






[jira] [Commented] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-08-31 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723600#comment-14723600
 ] 

ludovic Boutros commented on SOLR-7988:
---

I did not test this code but something like this should work:

{code:title=DefaultHandlerTest.java|borderStyle=solid}
private void defaultHandlerTest() throws Exception {
  String collectionName = "defaultHandlerCollection";
  createCollection(collectionName, controlClientCloud, 2, 2);
  waitForRecoveriesToFinish(collectionName, false);
  try (CloudSolrClient cloudClient = createCloudClient(collectionName)) {

    LukeRequest lukeRequest = new LukeRequest();

    try {
      // The response variable was undeclared in the original snippet.
      LukeResponse lukeResponse = lukeRequest.process(cloudClient);
    } catch (Exception e) {
      fail("Cannot find default luke request handler");
    }
  }
}
{code}

> LukeRequest on default path is broken with CloudSolrClient
> --
>
> Key: SOLR-7988
> URL: https://issues.apache.org/jira/browse/SOLR-7988
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
> with _CloudSolrClient_.
> See the following commit :
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556&r2=1694555&pathrev=1694556
> The name of the collection is not added to the request URL and therefore we 
> get a 404 error in the response.
> Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
> workaround but it's quite annoying. 






[jira] [Commented] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-08-31 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723208#comment-14723208
 ] 

ludovic Boutros commented on SOLR-7988:
---

Well, quite true :)

The question I would have here is:

Do we want to keep this handler, and perhaps others that need the collection 
name, on the admin path?

If yes, I can imagine an exclusion list in the _CloudSolrClient_;
if no, we just need to choose another path and register the handler(s) with 
this new path (but that would break a sort of compatibility...)
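For illustration, such an exclusion could be sketched like this (a hypothetical, self-contained sketch: `AdminPathRouting`, `ADMIN_PATHS` and `buildRequestUrl` are invented names, not the actual CloudSolrClient code):

```java
import java.util.Set;

public class AdminPathRouting {
    // Hypothetical list of admin paths that are actually served per collection.
    static final Set<String> ADMIN_PATHS = Set.of("/admin/luke", "/admin/mbeans");

    /**
     * Builds the request URL. Admin paths in the exclusion list still get the
     * collection name prepended, so handlers like /admin/luke keep working.
     */
    static String buildRequestUrl(String baseUrl, String collection, String path) {
        boolean isAdmin = path.startsWith("/admin/");
        if (isAdmin && !ADMIN_PATHS.contains(path)) {
            // Cluster-level admin handler: no collection in the URL.
            return baseUrl + path;
        }
        // Collection-level handler (searches, excluded admin paths, ...).
        return baseUrl + "/" + collection + path;
    }

    public static void main(String[] args) {
        System.out.println(buildRequestUrl("http://host:8983/solr", "col1", "/admin/luke"));
        System.out.println(buildRequestUrl("http://host:8983/solr", "col1", "/admin/collections"));
    }
}
```

This keeps cluster-level handlers collection-free while routing per-collection admin handlers through the collection URL.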

> LukeRequest on default path is broken with CloudSolrClient
> --
>
> Key: SOLR-7988
> URL: https://issues.apache.org/jira/browse/SOLR-7988
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.3
>Reporter: ludovic Boutros
>
> SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
> with _CloudSolrClient_.
> See the following commit :
> https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556&r2=1694555&pathrev=1694556
> The name of the collection is not added to the request URL and therefore we 
> get a 404 error in the response.
> Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
> workaround but it's quite annoying. 






[jira] [Created] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-08-28 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-7988:
-

 Summary: LukeRequest on default path is broken with CloudSolrClient
 Key: SOLR-7988
 URL: https://issues.apache.org/jira/browse/SOLR-7988
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.3
Reporter: ludovic Boutros


SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
with _CloudSolrClient_.

See the following commit :

https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556&r2=1694555&pathrev=1694556

The name of the collection is not added to the request URL and therefore we get 
a 404 error in the response.

Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
workaround but it's quite annoying. 






[jira] [Commented] (SOLR-7988) LukeRequest on default path is broken with CloudSolrClient

2015-08-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14720224#comment-14720224
 ] 

ludovic Boutros commented on SOLR-7988:
---

So far I have only tested this with SolrCloud, in an integration test.

 LukeRequest on default path is broken with CloudSolrClient
 --

 Key: SOLR-7988
 URL: https://issues.apache.org/jira/browse/SOLR-7988
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.3
Reporter: ludovic Boutros

 SOLR-7757 breaks the default access on the _LukeRequestHandler_ (/admin/luke) 
 with _CloudSolrClient_.
 See the following commit :
 https://svn.apache.org/viewvc/lucene/dev/branches/branch_5x/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java?r1=1694556&r2=1694555&pathrev=1694556
 The name of the collection is not added to the request URL and therefore we 
 get a 404 error in the response.
 Defining the _LukeRequestHandler_ with another path in the _solrconfig_ is a 
 workaround but it's quite annoying. 






[jira] [Commented] (LUCENE-3229) SpanNearQuery: ordered spans should not overlap

2014-11-14 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-3229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14212492#comment-14212492
 ] 

ludovic Boutros commented on LUCENE-3229:
-

Thank you Erik.

 SpanNearQuery: ordered spans should not overlap
 ---

 Key: LUCENE-3229
 URL: https://issues.apache.org/jira/browse/LUCENE-3229
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: 3.1
 Environment: Windows XP, Java 1.6
Reporter: ludovic Boutros
Assignee: Erik Hatcher
 Fix For: 4.10.3, 5.0, Trunk

 Attachments: LUCENE-3229.patch, LUCENE-3229.patch, LUCENE-3229.patch, 
 LUCENE-3229.patch, LUCENE-3229.patch, SpanOverlap.diff, SpanOverlap2.diff, 
 SpanOverlapTestUnit.diff


 While using Span queries I think I've found a little bug.
 With a document like this (from the TestNearSpansOrdered unit test) :
 w1 w2 w3 w4 w5
 If I try to search for this span query :
 spanNear([spanNear([field:w3, field:w5], 1, true), field:w4], 0, true)
 the above document is returned and I think it should not because 'w4' is not 
 after 'w5'.
 The 2 spans are not ordered, because there is an overlap.
 I will add a test patch in the TestNearSpansOrdered unit test.
 I will add a patch to solve this issue too.
 Basically, it modifies the two docSpansOrdered functions to make sure that the 
 spans do not overlap.






[jira] [Updated] (SOLR-6086) Replica active during Warming

2014-05-22 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-6086:
--

Attachment: SOLR-6086.patch

Second patch, which also solves the second case.

A second test has been added too.

These two tests are quite slow (more than 2 minutes on my machine). Is that an 
issue?




 Replica active during Warming
 -

 Key: SOLR-6086
 URL: https://issues.apache.org/jira/browse/SOLR-6086
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.8.1
Reporter: ludovic Boutros
 Attachments: SOLR-6086.patch, SOLR-6086.patch


 At least with Solr 4.6.1, replicas are considered active during the warming 
 process.
 This means that if you restart a replica or create a new one, queries will 
 be sent to this replica and will hang until the end of the warming 
 process (if cold searchers are not used).
 You cannot add or restart a node silently anymore.
 I think that the fact that the replica is active is not a bad thing in itself.
 But the HttpShardHandler and the CloudSolrServer class should take the 
 warming process into account.
 Currently, I have developed a new, very simple component which checks that a 
 searcher is registered.
 I am also developing custom HttpShardHandler and CloudSolrServer classes 
 which will check the warming process in addition to the ACTIVE status in the 
 cluster state.
 This seems to be more a workaround than a solution, but that's all I can do in 
 this version.






[jira] [Updated] (SOLR-6086) Replica active during Warming

2014-05-21 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-6086:
--

Attachment: SOLR-6086.patch

I checked the differences in the logs and in the code.

The problem occurs when:
- a node is restarted
- Peer Sync failed (no /get handler, for instance; should it become mandatory?)
- the node is already synced (nothing to replicate)

or:

- a node is restarted and it is the leader (I do not know if this only happens 
with a lonely leader...)
- the node is already synced (nothing to replicate)

For the first case,

I think this is a side effect of the modification in SOLR-4965. 

If Peer Sync is successful, an explicit commit is called in the code, with a 
comment that says:

{code:title=RecoveryStrategy.java|borderStyle=solid}
// force open a new searcher
core.getUpdateHandler().commit(new CommitUpdateCommand(req, false));
{code}

This is not the case if Peer Sync failed.
Just adding this line is enough to correct the issue.

Here is a patch with a test which reproduces the problem, and the correction (to 
be applied to branch 4x).

I am working on the second case.
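The control flow of the fix can be sketched like this (a simplified, self-contained mock: `RecoverySketch` and its booleans are invented stand-ins for RecoveryStrategy, PeerSync and the update handler, not the actual Solr code):

```java
public class RecoverySketch {
    boolean searcherOpened = false;

    // Stand-in for the real peer-sync attempt: fails e.g. when no /get handler exists.
    boolean peerSync(boolean getHandlerAvailable) {
        return getHandlerAvailable;
    }

    // Stand-in for core.getUpdateHandler().commit(...): force open a new searcher.
    void commitAndOpenSearcher() {
        searcherOpened = true;
    }

    /**
     * Simplified recovery flow: whether peer sync succeeds, or fails but the
     * node is already in sync (nothing to replicate), a new searcher must be
     * opened before the replica can usefully serve queries.
     */
    void recover(boolean getHandlerAvailable, boolean alreadyInSync) {
        if (peerSync(getHandlerAvailable)) {
            commitAndOpenSearcher(); // existing behavior after a successful sync
        } else if (alreadyInSync) {
            // The fix: nothing to replicate, but still open a searcher;
            // otherwise the node keeps serving against a stale (or no) searcher.
            commitAndOpenSearcher();
        }
    }
}
```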

 Replica active during Warming
 -

 Key: SOLR-6086
 URL: https://issues.apache.org/jira/browse/SOLR-6086
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.8.1
Reporter: ludovic Boutros
 Attachments: SOLR-6086.patch


 At least with Solr 4.6.1, replicas are considered active during the warming 
 process.
 This means that if you restart a replica or create a new one, queries will 
 be sent to this replica and will hang until the end of the warming 
 process (if cold searchers are not used).
 You cannot add or restart a node silently anymore.
 I think that the fact that the replica is active is not a bad thing in itself.
 But the HttpShardHandler and the CloudSolrServer class should take the 
 warming process into account.
 Currently, I have developed a new, very simple component which checks that a 
 searcher is registered.
 I am also developing custom HttpShardHandler and CloudSolrServer classes 
 which will check the warming process in addition to the ACTIVE status in the 
 cluster state.
 This seems to be more a workaround than a solution, but that's all I can do in 
 this version.






[jira] [Updated] (SOLR-6086) Replica active during Warming

2014-05-21 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-6086:
--

Affects Version/s: 4.8.1

 Replica active during Warming
 -

 Key: SOLR-6086
 URL: https://issues.apache.org/jira/browse/SOLR-6086
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.8.1
Reporter: ludovic Boutros
 Attachments: SOLR-6086.patch


 At least with Solr 4.6.1, replicas are considered active during the warming 
 process.
 This means that if you restart a replica or create a new one, queries will 
 be sent to this replica and will hang until the end of the warming 
 process (if cold searchers are not used).
 You cannot add or restart a node silently anymore.
 I think that the fact that the replica is active is not a bad thing in itself.
 But the HttpShardHandler and the CloudSolrServer class should take the 
 warming process into account.
 Currently, I have developed a new, very simple component which checks that a 
 searcher is registered.
 I am also developing custom HttpShardHandler and CloudSolrServer classes 
 which will check the warming process in addition to the ACTIVE status in the 
 cluster state.
 This seems to be more a workaround than a solution, but that's all I can do in 
 this version.






[jira] [Comment Edited] (SOLR-6086) Replica active during Warming

2014-05-21 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14004856#comment-14004856
 ] 

ludovic Boutros edited comment on SOLR-6086 at 5/21/14 4:23 PM:


I checked the differences in the logs and in the code.

The problem occurs when:
- a node is restarted
- Peer Sync failed (no /get handler, for instance; should it become mandatory?)
- the node is already synced (nothing to replicate)

or:

- a node is restarted and it is the leader (I do not know if this only happens 
with a lonely leader...)
- the node is already synced (nothing to replicate)

For the first case,

I think this is a side effect of the modification in SOLR-4965. 

If Peer Sync is successful, an explicit commit is called in the code, with a 
comment that says:

{code:title=RecoveryStrategy.java|borderStyle=solid}
// force open a new searcher
core.getUpdateHandler().commit(new CommitUpdateCommand(req, false));
{code}

This is not the case if Peer Sync failed.
Just adding this line is enough to correct the issue.

Here is a patch with a test which reproduces the problem, and the correction (to 
be applied to branch 4x).

I am working on the second case.


was (Author: lboutros):
I checked the differences in the logs and in the code.

The problem occures when:
- a node is restarted 
- Peer Sync failed (no /get handler for instance, should it become mandatory 
?)
- the node is already synced (nothing to replicate)

or :

- a node is restarted and this is the leader (I do not know if it only appends 
with a lonely leader...)
- the node is already synced (nothing to replicate)

For the first case,

I think this is a side effect of the modification in SOLR-4965. 

If Peer Sync is succesfull, in the code an explicit commit is called. And 
there's a comment which says:

{code:title=RecoveryStrategy.java|borderStyle=solid}
// force open a new searcher
core.getUpdateHandler().commit(new CommitUpdateCommand(req, false));
{code}

This is not the case if Peer Sync failed.
Just adding this line is enough to correct this issue.

Here is a patch with a test which reproduce the problem and the correction (to 
be applied to the branch 4x).

I am working on the second case.

 Replica active during Warming
 -

 Key: SOLR-6086
 URL: https://issues.apache.org/jira/browse/SOLR-6086
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1, 4.8.1
Reporter: ludovic Boutros
 Attachments: SOLR-6086.patch


 At least with Solr 4.6.1, replicas are considered active during the warming 
 process.
 This means that if you restart a replica or create a new one, queries will 
 be sent to this replica and will hang until the end of the warming 
 process (if cold searchers are not used).
 You cannot add or restart a node silently anymore.
 I think that the fact that the replica is active is not a bad thing in itself.
 But the HttpShardHandler and the CloudSolrServer class should take the 
 warming process into account.
 Currently, I have developed a new, very simple component which checks that a 
 searcher is registered.
 I am also developing custom HttpShardHandler and CloudSolrServer classes 
 which will check the warming process in addition to the ACTIVE status in the 
 cluster state.
 This seems to be more a workaround than a solution, but that's all I can do in 
 this version.






[jira] [Created] (SOLR-6086) Replica active during Warming

2014-05-16 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-6086:
-

 Summary: Replica active during Warming
 Key: SOLR-6086
 URL: https://issues.apache.org/jira/browse/SOLR-6086
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.6.1
Reporter: ludovic Boutros


At least with Solr 4.6.1, replicas are considered active during the warming 
process.

This means that if you restart a replica or create a new one, queries will 
be sent to this replica and will hang until the end of the warming 
process (if cold searchers are not used).

You cannot add or restart a node silently anymore.

I think that the fact that the replica is active is not a bad thing in itself.
But the HttpShardHandler and the CloudSolrServer class should take the warming 
process into account.

Currently, I have developed a new, very simple component which checks that a 
searcher is registered.
I am also developing custom HttpShardHandler and CloudSolrServer classes which 
will check the warming process in addition to the ACTIVE status in the cluster 
state.

This seems to be more a workaround than a solution, but that's all I can do in 
this version.
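The "very simple component" idea could look roughly like this (a hypothetical sketch: `WarmupGate` and its methods are invented names; the real check would consult SolrCore's registered-searcher state):

```java
public class WarmupGate {
    private volatile boolean searcherRegistered = false;

    // Called once the first searcher is registered, i.e. warming is finished.
    void onSearcherRegistered() {
        searcherRegistered = true;
    }

    /**
     * A ping-style check: report 503 while the node is still warming so that
     * shard handlers / load balancers skip it, and 200 once a searcher is live.
     */
    int healthCheckStatus() {
        return searcherRegistered ? 200 : 503;
    }
}
```

A custom shard handler could then combine this check with the ACTIVE status from the cluster state before routing queries to the replica.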






[jira] [Commented] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-14 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13968300#comment-13968300
 ] 

ludovic Boutros commented on SOLR-5941:
---

Hi Shalin,

How can I help you fix this issue?

 CommitTracker should use the default UpdateProcessingChain for autocommit
 -

 Key: SOLR-5941
 URL: https://issues.apache.org/jira/browse/SOLR-5941
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.6, 4.7
Reporter: ludovic Boutros
Assignee: Shalin Shekhar Mangar
 Fix For: 4.8, 5.0

 Attachments: SOLR-5941.patch


 Currently, the CommitTracker class uses the UpdateHandler directly for 
 autocommit.
 If a custom update processor is configured with a commit action, nothing is 
 done until an explicit commit is issued by the client.
 This can produce incoherent behaviors.






[jira] [Updated] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-02 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5943:
--

Attachment: SOLR-5943.patch

A first small patch with a unit test, on branch lucene_solr_4_7.


 SolrCmdDistributor does not distribute the openSearcher parameter
 -

 Key: SOLR-5943
 URL: https://issues.apache.org/jira/browse/SOLR-5943
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.6.1, 4.7
Reporter: ludovic Boutros
Assignee: Shalin Shekhar Mangar
 Fix For: 4.8, 5.0

 Attachments: SOLR-5943.patch


 The openSearcher parameter in a commit command is totally ignored by the 
 SolrCmdDistributor:
 {code:title=SolrCmdDistributor.java|borderStyle=solid}
  void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
    if (cmd == null) return;
    ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
        : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
        cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
  }{code}
 I think the SolrJ API should take this parameter into account as well.






[jira] [Commented] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-02 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957877#comment-13957877
 ] 

ludovic Boutros commented on SOLR-5943:
---

Excellent, thank you Shalin.

 SolrCmdDistributor does not distribute the openSearcher parameter
 -

 Key: SOLR-5943
 URL: https://issues.apache.org/jira/browse/SOLR-5943
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.6.1, 4.7
Reporter: ludovic Boutros
Assignee: Shalin Shekhar Mangar
 Fix For: 4.8, 5.0

 Attachments: SOLR-5943.patch, SOLR-5943.patch


 The openSearcher parameter in a commit command is totally ignored by the 
 SolrCmdDistributor:
 {code:title=SolrCmdDistributor.java|borderStyle=solid}
  void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
    if (cmd == null) return;
    ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
        : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
        cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
  }{code}
 I think the SolrJ API should take this parameter into account as well.






[jira] [Created] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5941:
-

 Summary: CommitTracker should use the default 
UpdateProcessingChain for autocommit
 Key: SOLR-5941
 URL: https://issues.apache.org/jira/browse/SOLR-5941
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.7, 4.6
Reporter: ludovic Boutros


Currently, the CommitTracker class uses the UpdateHandler directly for 
autocommit.

If a custom update processor is configured with a commit action, nothing is 
done until an explicit commit is issued by the client.

This can produce incoherent behaviors.
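The difference between the two code paths can be sketched like this (a self-contained mock: `AutoCommitSketch` and its log are invented stand-ins for CommitTracker, the UpdateHandler and a custom processor in the default chain):

```java
public class AutoCommitSketch {
    static final StringBuilder LOG = new StringBuilder();

    // Stand-in for UpdateHandler.commit(): the low-level commit.
    static void updateHandlerCommit() {
        LOG.append("handler-commit;");
    }

    // Stand-in for a custom processor in the default chain reacting to commits.
    static void customProcessorCommit() {
        LOG.append("custom-processor;");
        updateHandlerCommit();
    }

    /**
     * What the issue asks for: autocommit should go through the default
     * update processor chain (so custom commit logic runs), rather than
     * calling the UpdateHandler directly as CommitTracker does today.
     */
    static void autoCommit(boolean throughChain) {
        if (throughChain) {
            customProcessorCommit();
        } else {
            updateHandlerCommit(); // current CommitTracker behavior
        }
    }
}
```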






[jira] [Updated] (SOLR-5941) CommitTracker should use the default UpdateProcessingChain for autocommit

2014-04-01 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5941:
--

Attachment: SOLR-5941.patch

A small starting patch.

 CommitTracker should use the default UpdateProcessingChain for autocommit
 -

 Key: SOLR-5941
 URL: https://issues.apache.org/jira/browse/SOLR-5941
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.6, 4.7
Reporter: ludovic Boutros
 Attachments: SOLR-5941.patch


 Currently, the CommitTracker class is using the UpdateHandler directly for 
 autocommit.
 If a custom update processor is configured with a commit action, nothing is 
 done until an explicit commit is done by the client.
 This can produce incoherant behaviors.






[jira] [Created] (SOLR-5943) SolrCmdDistributor does not distribute the openSearcher parameter

2014-04-01 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5943:
-

 Summary: SolrCmdDistributor does not distribute the openSearcher 
parameter
 Key: SOLR-5943
 URL: https://issues.apache.org/jira/browse/SOLR-5943
 Project: Solr
  Issue Type: Bug
  Components: update
Affects Versions: 4.7, 4.6.1
Reporter: ludovic Boutros


The openSearcher parameter in a commit command is totally ignored by the 
SolrCmdDistributor:

{code:title=SolrCmdDistributor.java|borderStyle=solid}
void addCommit(UpdateRequest ureq, CommitUpdateCommand cmd) {
  if (cmd == null) return;
  ureq.setAction(cmd.optimize ? AbstractUpdateRequest.ACTION.OPTIMIZE
      : AbstractUpdateRequest.ACTION.COMMIT, false, cmd.waitSearcher,
      cmd.maxOptimizeSegments, cmd.softCommit, cmd.expungeDeletes);
}{code}

I think the SolrJ API should take this parameter into account as well.
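One possible direction, sketched with a plain map standing in for the SolrJ request parameters (the `commitParams` helper is invented for illustration; a real fix would extend the setAction overloads to carry the flag):

```java
import java.util.HashMap;
import java.util.Map;

public class CommitParamsSketch {
    /**
     * Builds the parameters a distributed commit request would carry.
     * The point of the fix: openSearcher must be forwarded explicitly;
     * it is not implied by commit=true and is currently dropped.
     */
    static Map<String, String> commitParams(boolean softCommit,
                                            boolean waitSearcher,
                                            boolean openSearcher) {
        Map<String, String> params = new HashMap<>();
        params.put(softCommit ? "softCommit" : "commit", "true");
        params.put("waitSearcher", Boolean.toString(waitSearcher));
        // Previously dropped on the floor by addCommit():
        params.put("openSearcher", Boolean.toString(openSearcher));
        return params;
    }
}
```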








[jira] [Commented] (SOLR-5235) Update Log replay does not use the processor chain for commit

2013-09-11 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764381#comment-13764381
 ] 

ludovic Boutros commented on SOLR-5235:
---

The portion of UpdateLog.java code, for reference:

{code:title=UpdateLog.java|borderStyle=solid}
CommitUpdateCommand cmd = new CommitUpdateCommand(req, false);
cmd.setVersion(commitVersion);
cmd.softCommit = false;
cmd.waitSearcher = true;
cmd.setFlags(UpdateCommand.REPLAY);
try {
  if (debug) log.debug("commit " + cmd);
  // this should cause a commit to be added to the incomplete log
  // and avoid it being replayed again after a restart.
  uhandler.commit(cmd);
} catch (IOException ex) {
  recoveryInfo.errors++;
  loglog.error("Replay exception: final commit.", ex);
}
{code}

 Update Log replay does not use the processor chain for commit
 -

 Key: SOLR-5235
 URL: https://issues.apache.org/jira/browse/SOLR-5235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros

 The update log replay processes commit commands directly with the Update 
 Handler.
 The update processor chain is not used. I may be wrong, but I think this is to 
 prevent logging this commit command again in the LogUpdateProcessor.
 But this commit command is flagged with UpdateCommand.REPLAY. I 
 think this flag should be checked in the LogUpdateProcessor in order to adapt 
 its behavior.
 Currently, commit actions in custom Update Processors are not applied in case 
 of a crash without an explicit commit.
 A workaround can be done with the finish function, but this is not ideal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5235) Update Log replay does not use the processor chain for commit

2013-09-11 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5235:
--

Summary: Update Log replay does not use the processor chain for commit  
(was: Update Log replay does use the processor chain for commit)

 Update Log replay does not use the processor chain for commit
 -

 Key: SOLR-5235
 URL: https://issues.apache.org/jira/browse/SOLR-5235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros

 The update log replay processes commit commands directly with the Update 
 Handler.
 The update processor chain is not used. I may be wrong, but I think this is to 
 prevent logging this commit command again in the LogUpdateProcessor.
 But this commit command is flagged with UpdateCommand.REPLAY. I 
 think this flag should be checked in the LogUpdateProcessor in order to adapt 
 its behavior.
 Currently, commit actions in custom Update Processors are not applied in case 
 of a crash without an explicit commit.
 A workaround can be done with the finish function, but this is not ideal.




[jira] [Created] (SOLR-5235) Update Log replay does use the processor chain for commit

2013-09-11 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5235:
-

 Summary: Update Log replay does use the processor chain for commit
 Key: SOLR-5235
 URL: https://issues.apache.org/jira/browse/SOLR-5235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4, 4.3.1
Reporter: ludovic Boutros


The update log replay processes commit commands directly with the Update Handler.
The update processor chain is not used. I may be wrong, but I think this is to 
prevent logging this commit command again in the LogUpdateProcessor.

But this commit command is flagged with UpdateCommand.REPLAY. I think 
this flag should be checked in the LogUpdateProcessor in order to adapt its 
behavior.

Currently, commit actions in custom Update Processors are not applied in case 
of a crash without an explicit commit.
A workaround can be done with the finish function, but this is not ideal.
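The suggested flag check can be sketched in a self-contained way (the `REPLAY` constant value here is illustrative; only the bitmask pattern, mirroring UpdateCommand's flags, matters):

```java
public class ReplayFlagSketch {
    // Illustrative flag constant, mirroring UpdateCommand.REPLAY's bitmask style.
    static final int REPLAY = 0x01;

    /** True when a commit command originates from update-log replay. */
    static boolean isReplay(int commandFlags) {
        return (commandFlags & REPLAY) != 0;
    }

    /**
     * Sketch of the decision LogUpdateProcessor could make: skip re-logging a
     * replayed commit (so it is not written to the log again), but still pass
     * it down the processor chain so custom processors see it.
     */
    static String handleCommit(int commandFlags) {
        if (isReplay(commandFlags)) {
            return "forward-without-logging";
        }
        return "log-and-forward";
    }
}
```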




[jira] [Comment Edited] (SOLR-5235) Update Log replay does not use the processor chain for commit

2013-09-11 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13764381#comment-13764381
 ] 

ludovic Boutros edited comment on SOLR-5235 at 9/11/13 3:45 PM:


The code of UpdateLog.java, for reference:

{code:title=UpdateLog.java|borderStyle=solid}
CommitUpdateCommand cmd = new CommitUpdateCommand(req, false);
cmd.setVersion(commitVersion);
cmd.softCommit = false;
cmd.waitSearcher = true;
cmd.setFlags(UpdateCommand.REPLAY);
try {
  if (debug) log.debug("commit " + cmd);
  // this should cause a commit to be added to the incomplete log
  // and avoid it being replayed again after a restart.
  uhandler.commit(cmd);
} catch (IOException ex) {
  recoveryInfo.errors++;
  loglog.error("Replay exception: final commit.", ex);
}
{code}

  was (Author: lboutros):
The portion code of UpdateLog.java for reference:

{code:title=UpdateLog.java|borderStyle=solid}
CommitUpdateCommand cmd = new CommitUpdateCommand(req, false);
cmd.setVersion(commitVersion);
cmd.softCommit = false;
cmd.waitSearcher = true;
cmd.setFlags(UpdateCommand.REPLAY);
try {
  if (debug) log.debug("commit " + cmd);
  // this should cause a commit to be added to the incomplete log
  // and avoid it being replayed again after a restart.
  uhandler.commit(cmd);
} catch (IOException ex) {
  recoveryInfo.errors++;
  loglog.error("Replay exception: final commit.", ex);
}
{code}
  
 Update Log replay does not use the processor chain for commit
 -

 Key: SOLR-5235
 URL: https://issues.apache.org/jira/browse/SOLR-5235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros

 The update log replay processes commit commands directly with the Update 
 Handler.
 The update processor chain is not used. I may be wrong, but I think this is to 
 prevent logging this commit command again in the LogUpdateProcessor.
 But this commit command is flagged with UpdateCommand.REPLAY. I 
 think this flag should be checked in the LogUpdateProcessor in order to adapt 
 its behavior.
 Currently, commit actions in custom Update Processors are not applied in case 
 of a crash without an explicit commit.
 A workaround can be done with the finish function, but this is not ideal.
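The flag check suggested above is a simple bit test. Here is a minimal, self-contained sketch in plain Java (the flag constant's value is hypothetical; this is not Solr's actual UpdateCommand class) of how a processor could detect a replayed commit:

```java
// Illustrative sketch only: UpdateCommand.REPLAY is a bit flag, so a custom
// processor can test it on the command's flags. The concrete value of the
// REPLAY constant here is an assumption for the demo.
public class ReplayFlagCheck {
    static final int REPLAY = 1 << 1; // hypothetical bit value

    // Returns true when the command flags carry the REPLAY bit.
    static boolean isReplay(int flags) {
        return (flags & REPLAY) != 0;
    }

    public static void main(String[] args) {
        System.out.println(isReplay(REPLAY)); // a replayed command
        System.out.println(isReplay(0));      // a normal command
    }
}
```

A processor using such a check could skip re-logging commits that originate from replay while still handling normal commits.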

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5235) Update Log replay does not use the processor chain for commit

2013-09-11 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5235:
--

Description: 
The update log replay sends commit commands directly to the Update Handler.
The update processor chain is not used. I may be wrong, but I think this is to 
prevent logging this commit command again in the LogUpdateProcessor.

But this commit command is flagged with UpdateCommand.REPLAY. I think 
this flag should be checked in the LogUpdateProcessor in order to adapt its 
behavior.

Currently, commit actions in custom Update Processors are not applied in case 
of a crash without an explicit commit.
A workaround can be done with the finish function, but this is not ideal.

  was:
The update log replay process commit commands directly with the Update Handler.
The update processor chain is not used. I may be wrong but I think this is to 
prevent to log this commit command again in the LogUpdateProcessor.

But this commit command is flagged with the flag UpdateCommand.REPLAY. I think 
this flag should be checked in the LogUpdateProcessor in order to adapt its 
behavior.

Currently, commit actions in custom Update Processors are not applied in case 
of a crash without an explicit commit.
A workaround can be done with the finish function but this is not ideal.


 Update Log replay does not use the processor chain for commit
 -

 Key: SOLR-5235
 URL: https://issues.apache.org/jira/browse/SOLR-5235
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros

 The update log replay sends commit commands directly to the Update Handler.
 The update processor chain is not used. I may be wrong, but I think this is to 
 prevent logging this commit command again in the LogUpdateProcessor.
 But this commit command is flagged with UpdateCommand.REPLAY. I 
 think this flag should be checked in the LogUpdateProcessor in order to adapt 
 its behavior.
 Currently, commit actions in custom Update Processors are not applied in case 
 of a crash without an explicit commit.
 A workaround can be done with the finish function, but this is not ideal.




[jira] [Commented] (SOLR-5224) SolrCmdDistributor flush functions should combine original request params

2013-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763018#comment-13763018
 ] 

ludovic Boutros commented on SOLR-5224:
---

This little patch is working.

I checked the current unit test in order to add some tests, but it's not 
trivial (at least for me :)).

 SolrCmdDistributor flush functions should combine original request params
 -

 Key: SOLR-5224
 URL: https://issues.apache.org/jira/browse/SOLR-5224
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros
Assignee: Mark Miller
 Fix For: 4.5, 5.0

 Attachments: SOLR-5224.patch


 The flush commands in the class SolrCmdDistributor do not combine original 
 request params into external update requests.
 The actual code is:
 {code:title=SolrCmdDistributor.java|borderStyle=solid}
   UpdateRequestExt ureq = new UpdateRequestExt();
   
   ModifiableSolrParams combinedParams = new ModifiableSolrParams();
   
   for (AddRequest aReq : alist) {
 AddUpdateCommand cmd = aReq.cmd;
 combinedParams.add(aReq.params);

 ureq.add(cmd.solrDoc, cmd.commitWithin, cmd.overwrite);
   }
   
   if (ureq.getParams() == null) ureq.setParams(new 
 ModifiableSolrParams());
   ureq.getParams().add(combinedParams);
 {code} 
 but the params from the original request (cmd.getReq().getParams()) should be 
 combined as well, so that they can be retrieved in custom update processors, 
 for instance.
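The combination described above is essentially a multi-valued merge. As a rough plain-Java sketch (a hypothetical helper, not Solr's actual ModifiableSolrParams API), appending the original request params alongside the per-request params would look like this:

```java
import java.util.*;

// Illustrative sketch: combine parameter maps in the spirit of
// ModifiableSolrParams.add(), appending values rather than overwriting them.
public class CombineParamsDemo {
    static Map<String, List<String>> add(Map<String, List<String>> combined,
                                         Map<String, List<String>> extra) {
        // Append every value of 'extra' to 'combined' without overwriting.
        for (Map.Entry<String, List<String>> e : extra.entrySet()) {
            combined.computeIfAbsent(e.getKey(), k -> new ArrayList<>())
                    .addAll(e.getValue());
        }
        return combined;
    }

    public static void main(String[] args) {
        Map<String, List<String>> combined = new LinkedHashMap<>();
        // Params attached to the individual add request...
        add(combined, Map.of("update.chain", List.of("customChain")));
        // ...plus params from the original request (the missing piece).
        add(combined, Map.of("commitWithin", List.of("1000")));
        System.out.println(combined);
    }
}
```

With both sources merged, a custom update processor on the receiving node would see the original request's parameters as well.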




[jira] [Updated] (SOLR-5224) SolrCmdDistributor flush functions should combine original request params

2013-09-10 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-5224:
--

Attachment: SOLR-5224.patch

 SolrCmdDistributor flush functions should combine original request params
 -

 Key: SOLR-5224
 URL: https://issues.apache.org/jira/browse/SOLR-5224
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros
Assignee: Mark Miller
 Fix For: 4.5, 5.0

 Attachments: SOLR-5224.patch


 The flush commands in the class SolrCmdDistributor do not combine original 
 request params into external update requests.
 The actual code is:
 {code:title=SolrCmdDistributor.java|borderStyle=solid}
   UpdateRequestExt ureq = new UpdateRequestExt();
   
   ModifiableSolrParams combinedParams = new ModifiableSolrParams();
   
   for (AddRequest aReq : alist) {
 AddUpdateCommand cmd = aReq.cmd;
 combinedParams.add(aReq.params);

 ureq.add(cmd.solrDoc, cmd.commitWithin, cmd.overwrite);
   }
   
   if (ureq.getParams() == null) ureq.setParams(new 
 ModifiableSolrParams());
   ureq.getParams().add(combinedParams);
 {code} 
 but the params from the original request (cmd.getReq().getParams()) should be 
 combined as well, so that they can be retrieved in custom update processors, 
 for instance.




[jira] [Comment Edited] (SOLR-5224) SolrCmdDistributor flush functions should combine original request params

2013-09-10 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763018#comment-13763018
 ] 

ludovic Boutros edited comment on SOLR-5224 at 9/10/13 1:18 PM:


This little patch for 4.4 works.

I checked the current unit test in order to add some tests, but it's not 
trivial (at least for me :)).

  was (Author: lboutros):
This little patch is working.

I checked the current unit test in order to add some tests, but it's not 
trivial (at least for me :)).
  
 SolrCmdDistributor flush functions should combine original request params
 -

 Key: SOLR-5224
 URL: https://issues.apache.org/jira/browse/SOLR-5224
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.3.1, 4.4
Reporter: ludovic Boutros
Assignee: Mark Miller
 Fix For: 4.5, 5.0

 Attachments: SOLR-5224.patch


 The flush commands in the class SolrCmdDistributor do not combine original 
 request params into external update requests.
 The actual code is:
 {code:title=SolrCmdDistributor.java|borderStyle=solid}
   UpdateRequestExt ureq = new UpdateRequestExt();
   
   ModifiableSolrParams combinedParams = new ModifiableSolrParams();
   
   for (AddRequest aReq : alist) {
 AddUpdateCommand cmd = aReq.cmd;
 combinedParams.add(aReq.params);

 ureq.add(cmd.solrDoc, cmd.commitWithin, cmd.overwrite);
   }
   
   if (ureq.getParams() == null) ureq.setParams(new 
 ModifiableSolrParams());
   ureq.getParams().add(combinedParams);
 {code} 
 but the params from the original request (cmd.getReq().getParams()) should be 
 combined as well, so that they can be retrieved in custom update processors, 
 for instance.




[jira] [Created] (SOLR-5224) SolrCmdDistributor flush functions should combine original request params

2013-09-09 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-5224:
-

 Summary: SolrCmdDistributor flush functions should combine 
original request params
 Key: SOLR-5224
 URL: https://issues.apache.org/jira/browse/SOLR-5224
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.4, 4.3.1
Reporter: ludovic Boutros


The flush commands in the class SolrCmdDistributor do not combine original 
request params into external update requests.

The actual code is:

{code:title=SolrCmdDistributor.java|borderStyle=solid}
  UpdateRequestExt ureq = new UpdateRequestExt();
  
  ModifiableSolrParams combinedParams = new ModifiableSolrParams();
  
  for (AddRequest aReq : alist) {
AddUpdateCommand cmd = aReq.cmd;
combinedParams.add(aReq.params);
   
ureq.add(cmd.solrDoc, cmd.commitWithin, cmd.overwrite);
  }
  
  if (ureq.getParams() == null) ureq.setParams(new ModifiableSolrParams());
  ureq.getParams().add(combinedParams);
{code} 

but the params from the original request (cmd.getReq().getParams()) should be 
combined as well, so that they can be retrieved in custom update processors, 
for instance.







[jira] [Commented] (SOLR-4394) Add SSL tests and example configs

2013-03-22 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13610991#comment-13610991
 ] 

ludovic Boutros commented on SOLR-4394:
---

Hi Hoss Man,

Since the commit in branch 4x, I have some trouble running my unit tests, 
which extend SolrJettyTestBase.
The TEST_KEYSTORE variable initialization crashes with an NPE in the loop:

{code:title=ExternalPaths.java|borderStyle=solid}
  static String determineSourceHome() {
    // ugly, ugly hack to determine the example home without depending on the CWD
    // this is needed for example/multicore tests which reside outside the classpath
    File file;
    try {
      file = new File("solr/conf");
      if (!file.exists()) {
        file = new File(Thread.currentThread().getContextClassLoader().getResource("solr/conf").toURI());
      }
    } catch (Exception e) {
      // If there is no solr/conf in the classpath, fall back to searching from the current directory.
      file = new File(".");
    }
    File base = file.getAbsoluteFile();
    while (!new File(base, "solr/CHANGES.txt").exists()) {
      base = base.getParentFile();
    }
    return new File(base, "solr/").getAbsolutePath();
  }
{code}

Could you please create a public function getKeyStore that I could override, 
like the getSolrHome function?






 Add SSL tests and example configs
 -

 Key: SOLR-4394
 URL: https://issues.apache.org/jira/browse/SOLR-4394
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: 4.2, 5.0

 Attachments: SOLR-4394.patch, SOLR-4394.patch, SOLR-4394.patch, 
 SOLR-4394__phase2.patch


 We should provide some examples of running Solr+Jetty with SSL enabled, and 
 have some basic tests using jetty over SSL




[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-20 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13608060#comment-13608060
 ] 

ludovic Boutros commented on SOLR-4608:
---

You're right Yonik, it works now. Thanks.
Do you think the patch could be committed to the different branches?
If I can help, just ask.


 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
 loglog.warn("Starting log replay " + translog + " active=" + activeLog
 + " starting pos=" + recoveryInfo.positionOfStart);
 tlogReader = translog.getReader(recoveryInfo.positionOfStart);
 // NOTE: we don't currently handle a core reload during recovery.  
 This would cause the core
 // to change underneath us.
 // TODO: use the standard request factory?  We won't get any custom 
 configuration instantiating this way.
 RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
 DistributedUpdateProcessorFactory magicFac = new 
 DistributedUpdateProcessorFactory();
 runFac.init(new NamedList());
 magicFac.init(new NamedList());
 UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, 
 runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because a lot of people will only discover it 
 when a node crashes, in the best case... and I think it's too late.
 It means to me that processor chains are not usable with Solr Cloud currently.
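The problem described above can be sketched in plain Java (hypothetical names, not Solr's actual processor API): update processors form a chain of wrappers, and because the replay code builds a hard-coded two-processor chain, any custom processor configured in the real chain is silently skipped.

```java
// Illustrative sketch of a processor chain as nested wrappers. The names
// "custom", "distributed" and "run" mirror the roles in the description above.
public class ChainDemo {
    interface Processor { String process(String doc); }

    // Each processor appends its name and delegates to the next in the chain.
    static Processor wrap(String name, Processor next) {
        return doc -> {
            String out = doc + " -> " + name;
            return next == null ? out : next.process(out);
        };
    }

    public static void main(String[] args) {
        Processor run = wrap("run", null);
        Processor distributed = wrap("distributed", run);
        Processor custom = wrap("custom", distributed);

        // Normal updates go through the full configured chain...
        System.out.println(custom.process("doc"));
        // ...but replay builds only distributed -> run, skipping "custom".
        System.out.println(distributed.process("doc"));
    }
}
```

The second line of output shows exactly what the bug report complains about: whatever the custom processor was supposed to do never happens during replay.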
  




[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606201#comment-13606201
 ] 

ludovic Boutros commented on SOLR-4608:
---

Thanks Mark and Yonik.

Yonik, could you please post the code of this change? 
I could try to patch the 4.1/4.2 branches and then test it.

 

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
 loglog.warn("Starting log replay " + translog + " active=" + activeLog
 + " starting pos=" + recoveryInfo.positionOfStart);
 tlogReader = translog.getReader(recoveryInfo.positionOfStart);
 // NOTE: we don't currently handle a core reload during recovery.  
 This would cause the core
 // to change underneath us.
 // TODO: use the standard request factory?  We won't get any custom 
 configuration instantiating this way.
 RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
 DistributedUpdateProcessorFactory magicFac = new 
 DistributedUpdateProcessorFactory();
 runFac.init(new NamedList());
 magicFac.init(new NamedList());
 UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, 
 runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because a lot of people will only discover it 
 when a node crashes, in the best case... and I think it's too late.
 It means to me that processor chains are not usable with Solr Cloud currently.
  




[jira] [Commented] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-19 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13606510#comment-13606510
 ] 

ludovic Boutros commented on SOLR-4608:
---

Anything after the DistributedUpdateProcessor will not be applied, right?

Do I need to create one default processor chain with my custom processor before 
the DistributedUpdateProcessor, and the real one used by the update handler 
with my custom processor after the DistributedUpdateProcessor?

 Update Log replay should use the default processor chain
 

 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.1, 4.2
Reporter: ludovic Boutros
Assignee: Yonik Seeley
 Fix For: 4.3, 5.0, 4.2.1

 Attachments: SOLR-4608.patch


 If a processor chain is used with custom processors, 
 they are not used in case of node failure during log replay.
 Here is the code:
 {code:title=UpdateLog.java|borderStyle=solid}
 public void doReplay(TransactionLog translog) {
   try {
 loglog.warn("Starting log replay " + translog + " active=" + activeLog
 + " starting pos=" + recoveryInfo.positionOfStart);
 tlogReader = translog.getReader(recoveryInfo.positionOfStart);
 // NOTE: we don't currently handle a core reload during recovery.  
 This would cause the core
 // to change underneath us.
 // TODO: use the standard request factory?  We won't get any custom 
 configuration instantiating this way.
 RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
 DistributedUpdateProcessorFactory magicFac = new 
 DistributedUpdateProcessorFactory();
 runFac.init(new NamedList());
 magicFac.init(new NamedList());
 UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, 
 runFac.getInstance(req, rsp, null));
 {code} 
 I think this is a big issue, because a lot of people will only discover it 
 when a node crashes, in the best case... and I think it's too late.
 It means to me that processor chains are not usable with Solr Cloud currently.
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-4608) Update Log replay should use the default processor chain

2013-03-18 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-4608:
-

 Summary: Update Log replay should use the default processor chain
 Key: SOLR-4608
 URL: https://issues.apache.org/jira/browse/SOLR-4608
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.2, 4.1
Reporter: ludovic Boutros


If a processor chain is used with custom processors, 
they are not used in case of a node failure during log replay.

Here is the code:

{code:title=UpdateLog.java|borderStyle=solid}
public void doReplay(TransactionLog translog) {
  try {
loglog.warn("Starting log replay " + translog + " active=" + activeLog +
" starting pos=" + recoveryInfo.positionOfStart);

tlogReader = translog.getReader(recoveryInfo.positionOfStart);

// NOTE: we don't currently handle a core reload during recovery.  This 
would cause the core
// to change underneath us.

// TODO: use the standard request factory?  We won't get any custom 
configuration instantiating this way.
RunUpdateProcessorFactory runFac = new RunUpdateProcessorFactory();
DistributedUpdateProcessorFactory magicFac = new 
DistributedUpdateProcessorFactory();
runFac.init(new NamedList());
magicFac.init(new NamedList());

UpdateRequestProcessor proc = magicFac.getInstance(req, rsp, 
runFac.getInstance(req, rsp, null));
{code} 

I think this is a big issue, because a lot of people will only discover it when 
a node crashes, in the best case... and I think it's too late.

It means to me that processor chains are not usable with Solr Cloud currently.
 




[jira] [Commented] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-30 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13285460#comment-13285460
 ] 

ludovic Boutros commented on LUCENE-4079:
-

Thanks, Chris, for taking this patch into account so fast!

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1, 5.0

 Attachments: LUCENE-4079-3.6.x.patch, LUCENE-4079-trunk.patch


 OpenOffice dictionaries are often compressed via some aliases at the 
 beginning of the affix file. The French one, for instance.
 Currently the hunspell filter does not read the aliases.




[jira] [Commented] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-29 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284650#comment-13284650
 ] 

ludovic Boutros commented on LUCENE-4079:
-

No problem, I will try to make the patch against trunk today.

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via some aliases at the 
 beginning of the affix file. The French one, for instance.
 Currently the hunspell filter does not read the aliases.




[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-29 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: LUCENE-4079-3.6.x.patch
LUCENE-4079-trunk.patch

Ok, I have merged some typo corrections from trunk to the 3.6 branch.
I have applied the patch to trunk and run the tests.

Do you need something else? :)

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079-3.6.x.patch, LUCENE-4079-trunk.patch, 
 LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via some aliases at the 
 beginning of the affix file. The French one, for instance.
 Currently the hunspell filter does not read the aliases.




[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-29 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: (was: LUCENE-4079.patch)

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079-3.6.x.patch, LUCENE-4079-trunk.patch


 OpenOffice dictionaries are often compressed via some aliases at the 
 beginning of the affix file. The French one, for instance.
 Currently the hunspell filter does not read the aliases.




[jira] [Commented] (SOLR-2934) Problem with Solr Hunspell with French Dictionary

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13284322#comment-13284322
 ] 

ludovic Boutros commented on SOLR-2934:
---

For the French dictionary, for instance, if I understand the mechanism well, 
it seems that there are some aliases, i.e. AF ..., AM 
These dictionaries are somehow compressed.

And in the C++ code there is this part:

{code}
dash = strchr(piece, '/');
if (dash) {
...
if (pHMgr->is_aliasf()) {
  int index = atoi(dash + 1);
  nptr->contclasslen = pHMgr->get_aliasf(index, &(nptr->contclass));
} else {
  nptr->contclasslen = pHMgr->decode_flags(&(nptr->contclass), dash + 1);
  flag_qsort(nptr->contclass, 0, nptr->contclasslen);
}
}
{code}

But I did not find anything similar in the Java class; the aliases are not 
loaded, I think.
Correct me if I'm wrong, but it does not seem possible to load compressed affix 
dictionaries currently.

Hope this can help.
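As an illustrative sketch of the alias mechanism described above (hypothetical parsing code, not Hunspell's actual implementation): an AF table gives each flag string a number, and dictionary entries then reference flags by that 1-based index instead of spelling them out.

```java
import java.util.*;

// Illustrative sketch only: resolving a compressed dictionary entry against
// an AF alias table. The table contents and entry format here are assumed
// for the demo, mirroring lines like "AF 2", "AF abc", "AF de" in an .aff file.
public class AliasFlagDemo {
    public static void main(String[] args) {
        // Alias table as it would be parsed from the AF lines above.
        List<String> aliasTable = List.of("abc", "de");

        // "word/2" means: use the flags of alias #2 (1-based index).
        String entry = "word/2";
        int slash = entry.indexOf('/');
        int index = Integer.parseInt(entry.substring(slash + 1));
        String flags = aliasTable.get(index - 1);

        System.out.println(entry.substring(0, slash) + " -> flags " + flags);
    }
}
```

A loader that skips this indirection would try to interpret the digits themselves as flags, which matches the StringIndexOutOfBoundsException reported in this issue.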


 Problem with Solr Hunspell with French Dictionary
 -

 Key: SOLR-2934
 URL: https://issues.apache.org/jira/browse/SOLR-2934
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 3.5
 Environment: Windows 7
Reporter: Nathan Castelein
Assignee: Chris Male
 Fix For: 4.0

 Attachments: en_GB.aff, en_GB.dic


 I'm trying to add the HunspellStemFilterFactory to my Solr project. 
 I'm trying this on a fresh download of Solr 3.5.
 I downloaded the French dictionary here (found it from here): 
 http://www.dicollecte.org/download/fr/hunspell-fr-moderne-v4.3.zip
 But when I start Solr and go to the Solr Analysis page, an error occurs in Solr.
 Here is the trace: 
 java.lang.RuntimeException: Unable to load hunspell data! 
 [dictionary=en_GB.dic,affix=fr-moderne.aff]
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:82)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:546)
   at org.apache.solr.schema.IndexSchema.init(IndexSchema.java:126)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:461)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
   at 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
   at java.lang.reflect.Method.invoke(Unknown Source)
   at org.mortbay.start.Main.invokeMain(Main.java:194)
   at org.mortbay.start.Main.start(Main.java:534)
   at org.mortbay.start.Main.start(Main.java:441)
   at org.mortbay.start.Main.main(Main.java:119)
 Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
 range: 3
   at java.lang.String.charAt(Unknown Source)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary$DoubleASCIIFlagParsingStrategy.parseFlags(HunspellDictionary.java:382)
   at 
 

[jira] [Commented] (SOLR-2934) Problem with Solr Hunspell with French Dictionary

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284334#comment-13284334
 ] 

ludovic Boutros commented on SOLR-2934:
---

And just for information, the Ubuntu French Hunspell dictionary is not compressed and works perfectly.

 Problem with Solr Hunspell with French Dictionary
 -

 Key: SOLR-2934
 URL: https://issues.apache.org/jira/browse/SOLR-2934
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 3.5
 Environment: Windows 7
Reporter: Nathan Castelein
Assignee: Chris Male
 Fix For: 4.0

 Attachments: en_GB.aff, en_GB.dic


 I'm trying to add the HunspellStemFilterFactory to my Solr project.
 I'm trying this on a fresh download of Solr 3.5.
 I downloaded a French dictionary here (found it from here):
 http://www.dicollecte.org/download/fr/hunspell-fr-moderne-v4.3.zip
 But when I start Solr and go to the Solr analysis page, an error occurs.
 Here is the trace:
 java.lang.RuntimeException: Unable to load hunspell data! 
 [dictionary=en_GB.dic,affix=fr-moderne.aff]
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:82)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:546)
  at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:126)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:461)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
   at 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
   at java.lang.reflect.Method.invoke(Unknown Source)
   at org.mortbay.start.Main.invokeMain(Main.java:194)
   at org.mortbay.start.Main.start(Main.java:534)
   at org.mortbay.start.Main.start(Main.java:441)
   at org.mortbay.start.Main.main(Main.java:119)
 Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
 range: 3
   at java.lang.String.charAt(Unknown Source)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary$DoubleASCIIFlagParsingStrategy.parseFlags(HunspellDictionary.java:382)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.parseAffix(HunspellDictionary.java:165)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.readAffixFile(HunspellDictionary.java:121)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.<init>(HunspellDictionary.java:64)
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:46)
 I can't find where the problem is. It seems like my dictionary isn't well formed for Hunspell, but I tried two different dictionaries and had the same problem.
 I also tried with an English dictionary, and... it works!
 So I think that my French dictionary is wrong for Hunspell, but I don't know why...
 Can you help me?

--
This message is 

[jira] [Created] (SOLR-3494) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-3494:
-

 Summary: The hunspell filter should support compressed Hunspell 
dictionaries
 Key: SOLR-3494
 URL: https://issues.apache.org/jira/browse/SOLR-3494
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.5, 3.6
Reporter: ludovic Boutros


OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
Currently, the Hunspell filter does not read these aliases.
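For illustration, the alias mechanism can be sketched in a few lines. This is a simplified, hypothetical parser, not the actual Lucene HunspellDictionary code: an AF table near the top of the .aff file lists each flag vector once, and .dic entries then reference a vector by its 1-based index instead of repeating the literal flags.

```python
# Simplified sketch of Hunspell "AF" alias compression (hypothetical helper,
# not the actual Lucene HunspellDictionary implementation).

def parse_af_aliases(aff_lines):
    """Collect the AF alias table declared at the top of an affix file."""
    aliases = []
    expected = None
    for line in aff_lines:
        parts = line.split()
        if len(parts) < 2 or parts[0] != "AF":
            continue
        if expected is None:
            expected = int(parts[1])      # header line: number of entries
        else:
            aliases.append(parts[1])      # one literal flag vector per line
    return aliases

def resolve_flags(flag_field, aliases):
    """Map a .dic flag field to literal flags via the alias table if numeric."""
    if flag_field.isdigit() and aliases:
        return aliases[int(flag_field) - 1]   # alias indices are 1-based
    return flag_field

aff = ["SET UTF-8", "AF 2", "AF S.", "AF F."]
table = parse_af_aliases(aff)
print(resolve_flags("2", table))    # a .dic entry like "chat/2" means flags "F."
print(resolve_flags("S.", table))   # non-numeric fields pass through unchanged
```

A filter that ignores this table sees the bare index ("2") where it expects flag characters, which is consistent with the parse errors reported on SOLR-2934.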

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2934) Problem with Solr Hunspell with French Dictionary

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284337#comment-13284337
 ] 

ludovic Boutros commented on SOLR-2934:
---

Done: SOLR-3494.

 Problem with Solr Hunspell with French Dictionary
 -

 Key: SOLR-2934
 URL: https://issues.apache.org/jira/browse/SOLR-2934
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 3.5
 Environment: Windows 7
Reporter: Nathan Castelein
Assignee: Chris Male
 Fix For: 4.0

 Attachments: en_GB.aff, en_GB.dic


 I'm trying to add the HunspellStemFilterFactory to my Solr project.
 I'm trying this on a fresh download of Solr 3.5.
 I downloaded a French dictionary here (found it from here):
 http://www.dicollecte.org/download/fr/hunspell-fr-moderne-v4.3.zip
 But when I start Solr and go to the Solr analysis page, an error occurs.
 Here is the trace:
 java.lang.RuntimeException: Unable to load hunspell data! 
 [dictionary=en_GB.dic,affix=fr-moderne.aff]
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:82)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:546)
  at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:126)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:461)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
   at 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
   at java.lang.reflect.Method.invoke(Unknown Source)
   at org.mortbay.start.Main.invokeMain(Main.java:194)
   at org.mortbay.start.Main.start(Main.java:534)
   at org.mortbay.start.Main.start(Main.java:441)
   at org.mortbay.start.Main.main(Main.java:119)
 Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
 range: 3
   at java.lang.String.charAt(Unknown Source)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary$DoubleASCIIFlagParsingStrategy.parseFlags(HunspellDictionary.java:382)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.parseAffix(HunspellDictionary.java:165)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.readAffixFile(HunspellDictionary.java:121)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.<init>(HunspellDictionary.java:64)
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:46)
 I can't find where the problem is. It seems like my dictionary isn't well formed for Hunspell, but I tried two different dictionaries and had the same problem.
 I also tried with an English dictionary, and... it works!
 So I think that my French dictionary is wrong for Hunspell, but I don't know why...
 Can you help me?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your 

[jira] [Updated] (SOLR-3494) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3494:
--

Attachment: SOLR-3494.patch

I managed to load compressed French dictionaries with this little patch.

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: SOLR-3494
 URL: https://issues.apache.org/jira/browse/SOLR-3494
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.5, 3.6
Reporter: ludovic Boutros
 Attachments: SOLR-3494.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3494) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3494:
--

Attachment: (was: SOLR-3494.patch)

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: SOLR-3494
 URL: https://issues.apache.org/jira/browse/SOLR-3494
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.5, 3.6
Reporter: ludovic Boutros
 Attachments: SOLR-3494.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3494) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3494:
--

Attachment: SOLR-3494.patch

Small changes.

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: SOLR-3494
 URL: https://issues.apache.org/jira/browse/SOLR-3494
 Project: Solr
  Issue Type: Improvement
  Components: Schema and Analysis
Affects Versions: 3.5, 3.6
Reporter: ludovic Boutros
 Attachments: SOLR-3494.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2934) Problem with Solr Hunspell with French Dictionary

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284361#comment-13284361
 ] 

ludovic Boutros commented on SOLR-2934:
---

I've attached a little patch to the other issue which allows me to load the latest OpenOffice French dictionaries.

 Problem with Solr Hunspell with French Dictionary
 -

 Key: SOLR-2934
 URL: https://issues.apache.org/jira/browse/SOLR-2934
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Affects Versions: 3.5
 Environment: Windows 7
Reporter: Nathan Castelein
Assignee: Chris Male
 Fix For: 4.0

 Attachments: en_GB.aff, en_GB.dic


 I'm trying to add the HunspellStemFilterFactory to my Solr project.
 I'm trying this on a fresh download of Solr 3.5.
 I downloaded a French dictionary here (found it from here):
 http://www.dicollecte.org/download/fr/hunspell-fr-moderne-v4.3.zip
 But when I start Solr and go to the Solr analysis page, an error occurs.
 Here is the trace:
 java.lang.RuntimeException: Unable to load hunspell data! 
 [dictionary=en_GB.dic,affix=fr-moderne.aff]
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:82)
   at 
 org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:546)
  at org.apache.solr.schema.IndexSchema.<init>(IndexSchema.java:126)
   at org.apache.solr.core.CoreContainer.create(CoreContainer.java:461)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:316)
   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:207)
   at 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:130)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:94)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
   at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
   at java.lang.reflect.Method.invoke(Unknown Source)
   at org.mortbay.start.Main.invokeMain(Main.java:194)
   at org.mortbay.start.Main.start(Main.java:534)
   at org.mortbay.start.Main.start(Main.java:441)
   at org.mortbay.start.Main.main(Main.java:119)
 Caused by: java.lang.StringIndexOutOfBoundsException: String index out of 
 range: 3
   at java.lang.String.charAt(Unknown Source)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary$DoubleASCIIFlagParsingStrategy.parseFlags(HunspellDictionary.java:382)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.parseAffix(HunspellDictionary.java:165)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.readAffixFile(HunspellDictionary.java:121)
   at 
 org.apache.lucene.analysis.hunspell.HunspellDictionary.<init>(HunspellDictionary.java:64)
   at 
 org.apache.solr.analysis.HunspellStemFilterFactory.inform(HunspellStemFilterFactory.java:46)
 I can't find where the problem is. It seems like my dictionary isn't well formed for Hunspell, but I tried two different dictionaries and had the same problem.
 I also tried with an English dictionary, and... it works!
 So I think that my French dictionary is wrong for Hunspell, but I don't know why...
 Can you help me?

--
This 

[jira] [Commented] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284385#comment-13284385
 ] 

ludovic Boutros commented on LUCENE-4079:
-

Oops, yes, you are right, thank you Simon.


 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
 Fix For: 4.0, 3.6.1

 Attachments: SOLR-3494.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: LUCENE-4079.patch

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: (was: SOLR-3494.patch)

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13284386#comment-13284386
 ] 

ludovic Boutros commented on LUCENE-4079:
-

Patch renamed.

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: LUCENE-4079.patch

The patch with a unit test.

I don't know whether a compressed dictionary could use both naming types (alias and direct rule name).
In the C++ code it seems that this is not possible, so I did not test it in the Java code.

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-4079) The hunspell filter should support compressed Hunspell dictionaries

2012-05-28 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated LUCENE-4079:


Attachment: (was: LUCENE-4079.patch)

 The hunspell filter should support compressed Hunspell dictionaries
 ---

 Key: LUCENE-4079
 URL: https://issues.apache.org/jira/browse/LUCENE-4079
 Project: Lucene - Java
  Issue Type: Improvement
  Components: modules/analysis
Affects Versions: 3.5, 3.6, 4.0
Reporter: ludovic Boutros
Assignee: Chris Male
 Fix For: 4.0, 3.6.1

 Attachments: LUCENE-4079.patch


 OpenOffice dictionaries are often compressed via aliases at the beginning of the affix file, the French one for instance.
 Currently, the Hunspell filter does not read these aliases.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3476) Create a Solr Core with a given commit point

2012-05-23 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13281484#comment-13281484
 ] 

ludovic Boutros commented on SOLR-3476:
---

Some usage examples:

- Create a new core with a given commit point generation:

bq. http://localhost:8084/solr/admin/cores?action=CREATE&name=core4&commitPointGeneration=4&instanceDir=test

- Get the status of this core:

bq. http://localhost:8084/solr/admin/cores?action=STATUS&core=core4

{code:xml}
<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">2692</int>
  </lst>
  <lst name="status">
    <lst name="core4">
      <str name="name">core4</str>
      <str name="instanceDir">D:\temp\bases\testCores\test\</str>
      <str name="dataDir">D:\temp\bases\testCores\test\data\</str>
      <date name="startTime">2012-05-23T09:31:50.483Z</date>
      <long name="uptime">149054</long>
      <long name="indexCommitGeneration">4</long>
      <lst name="indexCommitList">
        <long name="generation">1</long>
        <long name="generation">2</long>
        <long name="generation">3</long>
        <long name="generation">4</long>
        <long name="generation">5</long>
        <long name="generation">6</long>
        <long name="generation">7</long>
      </lst>
      <lst name="index">
        <int name="numDocs">3</int>
        <int name="maxDoc">3</int>
        <long name="version">1337759534761</long>
        <int name="segmentCount">3</int>
        <bool name="current">false</bool>
        <bool name="hasDeletions">false</bool>
        <str name="directory">org.apache.lucene.store.SimpleFSDirectory:org.apache.lucene.store.SimpleFSDirectory@D:\temp\bases\testCores\test\data\index lockFactory=org.apache.lucene.store.NativeFSLockFactory@1c24b45</str>
        <date name="lastModified">2012-05-23T09:22:10.713Z</date>
      </lst>
    </lst>
  </lst>
</response>
{code}

We can see the current commit point generation and the list of available commit points.

- Now the solr.xml file:

{code:xml}
<solr sharedLib="lib" persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core4" instanceDir="test\" commitPointGeneration="4"/>
  </cores>
</solr>
{code}



 Create a Solr Core with a given commit point
 

 Key: SOLR-3476
 URL: https://issues.apache.org/jira/browse/SOLR-3476
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Affects Versions: 3.6
Reporter: ludovic Boutros
 Attachments: commitPoint.patch


 In some configurations, we need to open new cores with a given commit point.
 For instance, when the publication of new documents must be controlled (legal obligations) in a master-slave configuration, there are two cores on the same instanceDir and dataDir using two versions of the index.
 The switch between the two cores is done manually.
 The problem is that when the replication is done one day before the switch, if any problem occurs and we need to restart Tomcat, the new documents are published.
 With this functionality, we could ensure that the index generation used by the querying core is always the correct one.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-3476) Create a Solr Core with a given commit point

2012-05-23 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3476:
--

Issue Type: Improvement  (was: New Feature)

 Create a Solr Core with a given commit point
 

 Key: SOLR-3476
 URL: https://issues.apache.org/jira/browse/SOLR-3476
 Project: Solr
  Issue Type: Improvement
  Components: multicore
Affects Versions: 3.6
Reporter: ludovic Boutros
 Attachments: commitPoint.patch


 In some configurations, we need to open new cores with a given commit point.
 For instance, when the publication of new documents must be controlled (legal obligations) in a master-slave configuration, there are two cores on the same instanceDir and dataDir using two versions of the index.
 The switch between the two cores is done manually.
 The problem is that when the replication is done one day before the switch, if any problem occurs and we need to restart Tomcat, the new documents are published.
 With this functionality, we could ensure that the index generation used by the querying core is always the correct one.




[jira] [Updated] (SOLR-3476) Create a Solr Core with a given commit point

2012-05-22 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3476:
--

Attachment: commitPoint.patch

This patch adds the following functionality:

The STATUS command in the CoreAdminHandler now returns the generation of a core 
and the list of generations currently available in the index.
Core creation now takes an additional, optional parameter: 
commitPointGeneration.
It is the generation of the desired commit point.

I will add some more examples tomorrow.

If someone could check that everything is OK with this patch, that would be 
great!
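As a rough illustration of how the patched API might be driven (a hypothetical sketch: the commitPointGeneration parameter name is taken from this comment, the endpoint layout is standard CoreAdmin, and the core/directory names are made up; details may differ in the final patch):

```java
public class CoreAdminSketch {
    // Build a CoreAdmin STATUS request; with the patch, the response would
    // also list the index generations currently available for the core.
    static String statusUrl(String solrBase, String core) {
        return solrBase + "/admin/cores?action=STATUS&core=" + core;
    }

    // Build a CoreAdmin CREATE request using the optional
    // commitPointGeneration parameter described above (hypothetical usage).
    static String createUrl(String solrBase, String core,
                            String instanceDir, long generation) {
        return solrBase + "/admin/cores?action=CREATE&name=" + core
                + "&instanceDir=" + instanceDir
                + "&commitPointGeneration=" + generation;
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr"; // assumed local Solr
        System.out.println(statusUrl(base, "collection1"));
        System.out.println(createUrl(base, "collection1_snapshot", "collection1", 42L));
    }
}
```

Reading the available generations via STATUS first, then passing one of them to CREATE, would give the controlled-switch workflow described in the issue.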



 Create a Solr Core with a given commit point
 

 Key: SOLR-3476
 URL: https://issues.apache.org/jira/browse/SOLR-3476
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Affects Versions: 3.6
Reporter: ludovic Boutros
 Attachments: commitPoint.patch






[jira] [Updated] (SOLR-3476) Create a Solr Core with a given commit point

2012-05-22 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3476:
--

Attachment: (was: commitPoint.patch)

 Create a Solr Core with a given commit point
 

 Key: SOLR-3476
 URL: https://issues.apache.org/jira/browse/SOLR-3476
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Affects Versions: 3.6
Reporter: ludovic Boutros
 Attachments: commitPoint.patch






[jira] [Updated] (SOLR-3476) Create a Solr Core with a given commit point

2012-05-22 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3476:
--

Attachment: commitPoint.patch

Removed some weird things at the beginning of the patch and converted it to 
Unix format.

 Create a Solr Core with a given commit point
 

 Key: SOLR-3476
 URL: https://issues.apache.org/jira/browse/SOLR-3476
 Project: Solr
  Issue Type: New Feature
  Components: multicore
Affects Versions: 3.6
Reporter: ludovic Boutros
 Attachments: commitPoint.patch






[jira] [Commented] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-17 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13277679#comment-13277679
 ] 

ludovic Boutros commented on SOLR-3454:
---

Hi Martijn,

I'm not at my office today (on vacation), so I tried to reproduce at home and 
you are right: 
the test does not fail on my home computer...

That's strange, I will try to compare the two environments to understand this 
behavior.

thx,

Ludovic.

 ArrayIndexOutOfBoundsException while grouping via Solrj
 ---

 Key: SOLR-3454
 URL: https://issues.apache.org/jira/browse/SOLR-3454
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 3.5, 3.6
 Environment: Windows 7, Java 6
Reporter: ludovic Boutros
 Attachments: SOLR-3454.diff


 When we try to use the grouping function at the end of a result via solrj 
 with the parameter group.main=true, 
 an ArrayIndexOutOfBoundsException is raised.
 For instance, on a result containing 3 groups, if the start and rows parameters 
 are equal to 2 and 5 respectively.
 I will attach a patch.
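The arithmetic behind the failure can be sketched outside of Solr (a simplified stand-in, not the code from the patch): with group.main=true the top group hits are flattened into a single main result list, and a window of start=2, rows=5 over only 3 hits must be clamped before indexing.

```java
import java.util.ArrayList;
import java.util.List;

public class GroupPagingSketch {
    // Hypothetical flattened paging: take the page [start, start + rows) from
    // the merged group hits, clamping both bounds to the list size. Without
    // the clamping, start=2 with rows=5 over 3 hits would touch positions
    // 2..6 and fail with an out-of-bounds error, as in the reported bug.
    static List<String> page(List<String> flattenedHits, int start, int rows) {
        int from = Math.min(start, flattenedHits.size());
        int to = Math.min(start + rows, flattenedHits.size());
        return new ArrayList<>(flattenedHits.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> hits = List.of("groupA-top", "groupB-top", "groupC-top");
        System.out.println(page(hits, 2, 5)); // prints [groupC-top]
    }
}
```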




[jira] [Updated] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-17 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3454:
--

Attachment: SOLR-3454.diff

 ArrayIndexOutOfBoundsException while grouping via Solrj
 ---

 Key: SOLR-3454
 URL: https://issues.apache.org/jira/browse/SOLR-3454
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 3.5, 3.6
 Environment: Windows 7, Java 6
Reporter: ludovic Boutros
 Attachments: SOLR-3454.diff, SOLR-3454.diff






[jira] [Commented] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-17 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13277692#comment-13277692
 ] 

ludovic Boutros commented on SOLR-3454:
---

I finally managed to reproduce it with the start value equal to 4.

And here is the stack trace:

{quote}
testGroupingSimpleFormatArrayIndexOutOfBoundsException(org.apache.solr.TestGroupingJavabin)  Time elapsed: 88.651 sec   ERROR!
org.apache.solr.client.solrj.SolrServerException: Error executing query
	at org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:95)
	at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:311)
	at org.apache.solr.TestGroupingJavabin.testGroupingSimpleFormatArrayIndexOutOfBoundsException(TestGroupingJavabin.java:74)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.apache.lucene.util.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:61)
	at org.apache.lucene.util.LuceneTestCase$SubclassSetupTeardownRule$1.evaluate(LuceneTestCase.java:630)
	at org.apache.lucene.util.LuceneTestCase$InternalSetupTeardownRule$1.evaluate(LuceneTestCase.java:536)
	at org.apache.lucene.util.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:67)
	at org.apache.lucene.util.LuceneTestCase$TestResultInterceptorRule$1.evaluate(LuceneTestCase.java:457)
	at org.apache.lucene.util.UncaughtExceptionsRule$1.evaluate(UncaughtExceptionsRule.java:74)
	at org.apache.lucene.util.LuceneTestCase$SaveThreadAndTestNameRule$1.evaluate(LuceneTestCase.java:508)
	at org.junit.rules.RunRules.evaluate(RunRules.java:18)
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
	at org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:146)
	at org.apache.lucene.util.LuceneTestCaseRunner.runChild(LuceneTestCaseRunner.java:50)
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:30)
	at org.apache.lucene.util.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:61)
	at org.apache.lucene.util.UncaughtExceptionsRule$1.evaluate(UncaughtExceptionsRule.java:74)
	at org.apache.lucene.util.StoreClassNameRule$1.evaluate(StoreClassNameRule.java:36)
	at org.apache.lucene.util.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:67)
	at org.junit.rules.RunRules.evaluate(RunRules.java:18)
	at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
	at org.apache.maven.surefire.junit4.JUnit4TestSet.execute(JUnit4TestSet.java:53)
	at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:123)
	at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:104)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
	at java.lang.reflect.Method.invoke(Method.java:597)
	at org.apache.maven.surefire.util.ReflectionUtils.invokeMethodWithArray(ReflectionUtils.java:164)
	at org.apache.maven.surefire.booter.ProviderFactory$ProviderProxy.invoke(ProviderFactory.java:110)
	at org.apache.maven.surefire.booter.SurefireStarter.invokeProvider(SurefireStarter.java:175)
	at 

[jira] [Updated] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-17 Thread ludovic Boutros (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ludovic Boutros updated SOLR-3454:
--

Attachment: (was: SOLR-3454.diff)

 ArrayIndexOutOfBoundsException while grouping via Solrj
 ---

 Key: SOLR-3454
 URL: https://issues.apache.org/jira/browse/SOLR-3454
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 3.5, 3.6
 Environment: Windows 7, Java 6
Reporter: ludovic Boutros
 Attachments: SOLR-3454.diff






[jira] [Commented] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-17 Thread ludovic Boutros (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13278080#comment-13278080
 ] 

ludovic Boutros commented on SOLR-3454:
---

Ok, thanks Martijn !

 ArrayIndexOutOfBoundsException while grouping via Solrj
 ---

 Key: SOLR-3454
 URL: https://issues.apache.org/jira/browse/SOLR-3454
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 3.5, 3.6, 4.0
 Environment: Windows 7, Java 6
Reporter: ludovic Boutros
 Fix For: 4.0, 3.6.1

 Attachments: SOLR-3454.diff, SOLR-3454.patch






[jira] [Created] (SOLR-3454) ArrayIndexOutOfBoundsException while grouping via Solrj

2012-05-14 Thread ludovic Boutros (JIRA)
ludovic Boutros created SOLR-3454:
-

 Summary: ArrayIndexOutOfBoundsException while grouping via Solrj
 Key: SOLR-3454
 URL: https://issues.apache.org/jira/browse/SOLR-3454
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 3.5, 3.6
 Environment: Windows 7, Java 6
Reporter: ludovic Boutros


When we try to use the grouping function at the end of a result via solrj with 
the parameter group.main=true, 
an ArrayIndexOutOfBoundsException is raised.

For instance, on a result containing 3 groups, if the start and rows parameters 
are equal to 2 and 5 respectively.

I will attach a patch.



