[jira] [Commented] (OPENNLP-1096) Optimize n-gram creation loop for CPU cache usage

2017-06-26 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/OPENNLP-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16063016#comment-16063016 ]

ASF GitHub Bot commented on OPENNLP-1096:
-----------------------------------------

Github user asfgit closed the pull request at:

https://github.com/apache/opennlp/pull/235


> Optimize n-gram creation loop for CPU cache usage
> -------------------------------------------------
>
> Key: OPENNLP-1096
> URL: https://issues.apache.org/jira/browse/OPENNLP-1096
> Project: OpenNLP
>  Issue Type: Improvement
>Reporter: Joern Kottmann
>Assignee: Joern Kottmann
>Priority: Trivial
> Fix For: 1.8.1
>
>
> There are two nested for loops that read the string and compute the n-grams; 
> the loops should be swapped to make the traversal more cache friendly.
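The loop swap described in the issue can be illustrated with a small sketch. This is hypothetical code, not the actual OpenNLP patch; the class and method names are invented for illustration. The point is that with the n-gram size as the outer loop, the whole string is traversed once per size, whereas with the position as the outer loop, each region of the string is read for all sizes while it is still hot in the CPU cache.

```java
import java.util.ArrayList;
import java.util.List;

public class NGramSketch {

    // Cache-unfriendly order: the outer loop is over n-gram sizes,
    // so the character data is traversed once per size.
    static List<String> ngramsSizeOuter(String text, int minN, int maxN) {
        List<String> grams = new ArrayList<>();
        for (int n = minN; n <= maxN; n++) {
            for (int i = 0; i + n <= text.length(); i++) {
                grams.add(text.substring(i, i + n));
            }
        }
        return grams;
    }

    // Swapped order: the outer loop is over positions, so all n-gram
    // sizes at a position are read while that part of the string is
    // still in cache.
    static List<String> ngramsPositionOuter(String text, int minN, int maxN) {
        List<String> grams = new ArrayList<>();
        for (int i = 0; i < text.length(); i++) {
            for (int n = minN; n <= maxN && i + n <= text.length(); n++) {
                grams.add(text.substring(i, i + n));
            }
        }
        return grams;
    }

    public static void main(String[] args) {
        // Both orders yield the same multiset of n-grams;
        // only the iteration order (and cache behavior) differs.
        System.out.println(ngramsSizeOuter("cache", 2, 3));
        System.out.println(ngramsPositionOuter("cache", 2, 3));
    }
}
```

For the short strings typical of n-gram feature extraction the win is modest, but the swapped form never does worse, which is why the issue is filed as a trivial improvement.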



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (OPENNLP-1096) Optimize n-gram creation loop for CPU cache usage

2017-06-22 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/OPENNLP-1096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16059285#comment-16059285 ]

ASF GitHub Bot commented on OPENNLP-1096:
-----------------------------------------

GitHub user kottmann opened a pull request:

https://github.com/apache/opennlp/pull/235

OPENNLP-1096: Swap for loops in ngram generation to be cache friendly

Thank you for contributing to Apache OpenNLP.

In order to streamline the review of the contribution we ask you
to ensure the following steps have been taken:

### For all changes:
- [ ] Is there a JIRA ticket associated with this PR? Is it referenced 
 in the commit message?

- [ ] Does your PR title start with OPENNLP-XXXX where XXXX is the JIRA 
number you are trying to resolve? Pay particular attention to the hyphen "-" 
character.

- [ ] Has your PR been rebased against the latest commit within the target 
branch (typically master)?

- [ ] Is your initial contribution a single, squashed commit?

### For code changes:
- [ ] Have you ensured that the full suite of tests is executed via mvn 
clean install at the root opennlp folder?
- [ ] Have you written or updated unit tests to verify your changes?
- [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)? 
- [ ] If applicable, have you updated the LICENSE file, including the main 
LICENSE file in opennlp folder?
- [ ] If applicable, have you updated the NOTICE file, including the main 
NOTICE file found in opennlp folder?

### For documentation related changes:
- [ ] Have you ensured that format looks appropriate for the output in 
which it is rendered?

### Note:
Please ensure that once the PR is submitted, you check travis-ci for build 
issues and submit an update to your PR as soon as possible.


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/kottmann/opennlp opennlp-1096

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/opennlp/pull/235.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #235


commit 140695f4cd97080d48e9915e597db0fbf65d6320
Author: Jörn Kottmann 
Date:   2017-06-22T12:41:56Z

OPENNLP-1096: Swap for loops in ngram generation to be cache friendly



