Re: [VOTE] release Apache OpenWhisk Package Alarm, Package Cloudant, and Package Kafka 2.0.0 (incubating)

2019-04-01 Thread James W Dubee
Hey Dave,

I believe Jason Peterson did an emergency patch this weekend for the alarms 
provider. We should use hash 92b0ac8b0b006ef6c9395271fd17afa6c83c14b1 instead 
for alarms for the release. Other than that +1.

Regards,
James Dubee

-"David P Grove"  wrote: -
To: Incubator 
From: "David P Grove" 
Date: 03/26/2019 08:48AM
Cc: "OpenWhisk Dev" 
Subject: [VOTE] release Apache OpenWhisk Package Alarm, Package Cloudant, and 
Package Kafka 2.0.0 (incubating)


The Apache OpenWhisk Community has voted to make the first Apache release
of the OpenWhisk "Package Alarm", "Package Cloudant" and "Package Kafka"
components.

The OpenWhisk dev list voting thread is here:
https://lists.apache.org/thread.html/e0fb73d8ca0d0c584650f4f5b8b02054f5c99d64895f3e5ace352234@%3Cdev.openwhisk.apache.org%3E

We request that IPMC Members please review and vote on this incubator
release as described below.

These three Apache OpenWhisk packages provide basic event-based programming
capabilities to an OpenWhisk deployment.

This release comprises a source code distribution only. There are three
components (git repos) included in this release of the OpenWhisk Packages
Group. All release artifacts were built by PR #251 in the openwhisk-release
repo from the following Git commit IDs:
* openwhisk-package-alarms: dd6ab4d81c8893436e7242170d57951c089d5185
* openwhisk-package-cloudant: ce5dac9cb0204fb2f78195d9e3f9364236613a31
* openwhisk-package-kafka: 49820dd24170f24a37c02fae6ba7ec06e190423f

openwhisk-package-alarms (C158D76 B5D9A53A AC10AC42 4CD5060C 7A6467D5
E9624519 117E9C8C AB171590 CCC3505D 5C7E630D C41769B4 44F85F03 BBA40A6A
E04D2264 79FFA78B B3C5BFB8)
src.tar.gz:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-alarms-2.0.0-incubating-sources.tar.gz
sha512 checksum:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-alarms-2.0.0-incubating-sources.tar.gz.sha512
signature:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-alarms-2.0.0-incubating-sources.tar.gz.asc

openwhisk-package-cloudant (380E1BA8 EC3C72A8 09B10DCF 03BC1D2D 4BFA7E4F
892995EE 39D26BBE 758AFD5D FA680E81 54A9C726 A0D386C5 B447C947 70D26385
2A8D07BF 5848318A 42A3BBDD)
src.tar.gz:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-cloudant-2.0.0-incubating-sources.tar.gz
sha512 checksum:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-cloudant-2.0.0-incubating-sources.tar.gz.sha512
signature:
https://dist.apache.org/repos/dist/dev/incubator/openwhisk/apache-openwhisk-2.0.0-incubating-rc2/openwhisk-package-cloudant-2.0.0-incubating-sources.tar.gz.asc

openwhisk-package-kafka (303B8286 AA9B945D 1A740FCB 0DE908D5 A287FAF5
C20A9DF9 4BBD2F16 C488DF4B 3E82A75B 9D071B1A E1CCDB54 7B623A36 5AC88E5D
9994539B 00ECEA5D A066D156)
src.tar.gz:

Re: updated CLI go vendors

2019-02-19 Thread James W Dubee



+1

-Rodric Rabbah  wrote: -
To: dev@openwhisk.apache.org
From: Rodric Rabbah 
Date: 02/16/2019 12:33PM
Subject: updated CLI go vendors

The following PR is the result of a suggestion from Homebrew:

https://github.com/apache/incubator-openwhisk-cli/pull/411

It updates the go vendors with missing packages.

-r



Re: Logstore usage during `activation get`

2018-10-04 Thread James W Dubee

Hey guys,

For scalability concerns, I think it is preferable to separate logs from
activation records entirely. Doing so would require users to use the logs
API, or whatever tools are provided by the underlying logging service. We
should focus on serverless technology and delegate log handling to services
that specialize in such areas.

From experimenting with storing logs in a data store other than the one
where activations are stored, I've seen that there may be intermittent test
failures for tests that rely on data being present in an activation
record's log field. This may result when the activation record is retrieved
before all the logs are stored in the separate log store.
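The intermittent failure mode described above can be sketched as a small timing model (hypothetical class and names; the delay stands in for an asynchronous flush to a separate log store):

```python
import time

class SlowLogStore:
    """Hypothetical log store whose writes become visible after a short delay,
    modeling logs flushed asynchronously to a store separate from activations."""

    def __init__(self, flush_delay):
        self.flush_delay = flush_delay
        self.logs = {}
        self._ready_at = {}

    def write(self, activation_id, lines):
        # Record the logs, but only make them readable once the simulated
        # flush window has elapsed.
        self.logs[activation_id] = lines
        self._ready_at[activation_id] = time.time() + self.flush_delay

    def read(self, activation_id):
        # Reads racing the flush see nothing, just like a test that fetches
        # the activation record before the logs have landed.
        if time.time() < self._ready_at.get(activation_id, float("inf")):
            return None
        return self.logs[activation_id]

store = SlowLogStore(flush_delay=0.05)
store.write("act-1", ["hello"])

early = store.read("act-1")   # immediate read: logs not yet visible
time.sleep(0.1)
late = store.read("act-1")    # read after the flush window: logs present
```

A test asserting on the `early` read fails intermittently in exactly the way described; retrying after the flush succeeds.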

On a similar note, I don't think we should be storing activation responses
at all.

Regards,
James Dubee





From:   Rodric Rabbah 
To: dev@openwhisk.apache.org
Date:   10/02/2018 07:30 PM
Subject:Re: Logstore usage during `activation get`



> By "break this" do you mean at some point we should remove the logs from
> the GET?

Yes.

@dubee thoughts? Since you've worked on the elastic plugin.

-r




Re: Proposal to Remove Artifact Store Polling for Blocking Invocations

2018-09-24 Thread James W Dubee


Hey Rodric,

Sure, I split up the two changes into different PRs. The defect fix is now
located here: https://github.com/apache/incubator-openwhisk/pull/4040. I'll
use the PR from my original email for removal of DB polling.

Regards,
James Dubee




From:   Rodric Rabbah 
To: dev@openwhisk.apache.org
Date:   09/21/2018 02:34 PM
Subject:Re: Proposal to Remove Artifact Store Polling for Blocking
Invocations



Thanks James for the explanation and patches. It sounds like there should
be two separate PRs, one to address the bug and the other to remove
polling. What do you think?

-r

> On Sep 21, 2018, at 1:09 PM, James W Dubee  wrote:
>
>
>
>
>
> Hello OpenWhisk developers,
>
> When a blocking action is invoked, the controller waits for that action's
> response from the invoker and also polls the artifact store for the same
> response. Usually blocking invocation responses are obtained from the
> invoker. However, there are instances when the invocation response is
> retrieved from the artifact store instead. From observation, the most
> likely scenario for a blocking activation to be retrieved from the
> artifact store is when an action generates a response that exceeds the
> maximum allowed Kafka message size for the "completed" topic. However,
> this situation should not occur as large action responses are meant to
> be truncated by the invoker to the allowed maximum Kafka message size
> for the corresponding topic.
>
> Currently artifact store polling for activation records is masking a bug
> involving large action responses. While OpenWhisk provides a
> configuration value, whisk.activation.payload.max, which one would
> assume allows the maximum activation record size to be adjusted, this
> configuration value only adjusts the Kafka topic that is used to
> schedule actions for invocation. Instead, the Kafka topic used to
> communicate the completion of an action always uses the default value
> for KAFKA_MESSAGE_MAX_BYTES, which is ~1MB. Additionally, the invoker
> truncates action responses to the whisk.activation.payload.max value
> even though whisk.activation.payload.max is not being applied properly
> to the "completed" Kafka topic. Moreover, this truncation does not
> account for data added to the action response by the Kafka producer
> during serialization, so an action response may fail to be sent to the
> "completed" topic even if its actual action response size adheres to the
> topic's size limitations. As a result, any action response plus the size
> of serialization done by the Kafka producer that exceeds ~1MB will be
> retrieved via artifact store polling.
>
> Performance degradation appears to occur when an activation record is
> retrieved via artifact store polling. Artifact store polling occurs
> every 15 seconds for a blocking invocation. Since the response of an
> action that generates a payload greater than ~1MB cannot be sent through
> the "completed" Kafka topic, that action's activation record must be
> retrieved via polling. Even though such an action may complete in
> milliseconds, the end user will not get back the activation response for
> at least 15 seconds due to the polling logic in the controller.
>
> I have submitted a pull request to remove the polling mechanism and also
> fix the large action response bug. The pull request can be found here:
> https://github.com/apache/incubator-openwhisk/pull/4033.
>
> Regards,
> James Dubee





Proposal to Remove Artifact Store Polling for Blocking Invocations

2018-09-21 Thread James W Dubee




Hello OpenWhisk developers,

When a blocking action is invoked, the controller waits for that action's
response from the invoker and also polls the artifact store for the same
response. Usually blocking invocation responses are obtained from the
invoker. However, there are instances when the invocation response is
retrieved from the artifact store instead. From observation, the most
likely scenario for a blocking activation to be retrieved from the artifact
store is when an action generates a response that exceeds the maximum
allowed Kafka message size for the "completed" topic. However, this
situation should not occur as large action responses are meant to be
truncated by the invoker to the allowed maximum Kafka message size for the
corresponding topic.

Currently artifact store polling for activation records is masking a bug
involving large action responses. While OpenWhisk provides a configuration
value, whisk.activation.payload.max, which one would assume allows the
maximum activation record size to be adjusted, this
configuration value only adjusts the Kafka topic that is used to schedule
actions for invocation. Instead the Kafka topic used to communicate the
completion of an action always uses the default value for
KAFKA_MESSAGE_MAX_BYTES, which is ~1MB. Additionally, the invoker truncates
action responses to the whisk.activation.payload.max value even though
whisk.activation.payload.max is not being applied properly to the
"completed" Kafka topic. Moreover, this truncation does not account for
data added to the action response by the Kafka producer during
serialization, so an action response may fail to be sent to the "completed"
topic even if its actual action response size adheres to the topic's size
limitations. As a result, any action response plus the size of
serialization done by the Kafka producer that exceeds ~1MB will be
retrieved via artifact store polling.
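The truncation bug can be modeled in a few lines (hypothetical function and constant names; a sketch, not the actual invoker code): truncating to the raw payload limit without budgeting for the producer's serialization envelope leaves messages that overflow the topic once the envelope is added.

```python
MAX_KAFKA_MESSAGE_BYTES = 1_048_576  # default broker limit, ~1MB

def truncate_response(response: bytes, envelope_overhead: int,
                      limit: int = MAX_KAFKA_MESSAGE_BYTES) -> bytes:
    """Truncate an action response so the *serialized* message fits the topic
    limit. envelope_overhead models the bytes the Kafka producer adds during
    serialization; the bug described above is equivalent to calling this
    with envelope_overhead=0."""
    budget = limit - envelope_overhead
    return response if len(response) <= budget else response[:budget]

payload = b"x" * MAX_KAFKA_MESSAGE_BYTES       # response exactly at the limit

naive = truncate_response(payload, envelope_overhead=0)    # ignores overhead
safe = truncate_response(payload, envelope_overhead=512)   # reserves headroom

# The naive message exceeds the limit once 512 bytes of envelope are added,
# so the send fails and the record falls back to artifact store polling;
# the safe variant always fits.
```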

Performance degradation appears to occur when an activation record is
retrieved via artifact store polling. Artifact store polling occurs every
15 seconds for a blocking invocation. Since the response of an action that
generates a payload greater than ~1MB cannot be sent through the
"completed" Kafka topic, that action's activation record must be retrieved
via polling. Even though such an action may complete in milliseconds, the
end user will not get back the activation response for at least 15 seconds
due to the polling logic in the controller.
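The latency cliff this creates can be illustrated with a simplified model (hypothetical function name; real controller behavior also includes the action's own duration and scheduling jitter):

```python
import math

POLL_INTERVAL_S = 15  # controller polls the artifact store every 15 seconds

def observed_latency(action_duration_s: float, via_polling: bool) -> float:
    """Latency the end user sees for a blocking invocation. Over the
    "completed" topic the wait is just the action's duration; via polling,
    the wait is rounded up to the next poll tick."""
    if not via_polling:
        return action_duration_s
    return math.ceil(action_duration_s / POLL_INTERVAL_S) * POLL_INTERVAL_S

fast_path = observed_latency(0.05, via_polling=False)  # 50 ms action, Kafka path
slow_path = observed_latency(0.05, via_polling=True)   # same action via polling
```

Under this model a 50 ms action still costs the user a full 15-second poll interval once its response overflows the topic, which is the degradation described above.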

I have submitted a pull request to remove the polling mechanism and also
fix the large action response bug. The pull request can be found here:
https://github.com/apache/incubator-openwhisk/pull/4033.

Regards,
James Dubee


[VOTE] Release Apache OpenWhisk 0.9.0-incubating rc2: main OpenWhisk module

2018-07-09 Thread James W Dubee
Hello all,

I vote +1 as well to release the OpenWhisk 0.9.0-incubating rc2 module.

Checklist for reference:
[ X ] Download links are valid. (Please disregard the md5 link, since we do
not need it.)
[ X ] Checksums and PGP signatures are valid.
[ X ] Source code artifacts have correct names matching the current
release.
[ X ] LICENSE and NOTICE files are correct for each OpenWhisk repo.
[ X ] All files have license headers if necessary.
[ X ] No compiled archives bundled in source archive.
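The checksum item in the checklist above can be verified mechanically; a minimal sketch (hypothetical function name; PGP signatures would additionally be checked with `gpg --verify`):

```python
import hashlib

def verify_sha512(artifact: bytes, published_digest: str) -> bool:
    """Compare a downloaded release artifact against its published .sha512
    digest (the hex string distributed alongside the tarball)."""
    return hashlib.sha512(artifact).hexdigest() == published_digest.strip().lower()

# Stand-in artifact; in practice this is the downloaded sources.tar.gz bytes
# and the digest comes from the corresponding .sha512 file.
artifact = b"example release tarball contents"
digest = hashlib.sha512(artifact).hexdigest()

ok = verify_sha512(artifact, digest)            # untampered artifact
tampered = verify_sha512(artifact + b"!", digest)  # any modification fails
```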

Regards,
James Dubee


Activation Store Service Provider Interface

2018-05-16 Thread James W Dubee


Fellow developers,

Currently the activation store is tightly coupled with the artifact store
utilized by an OpenWhisk deployment. This means the controller must use the
same data store, CouchDB for example, utilized by an OpenWhisk instance to
store and retrieve activation details. However, an invoker can use a
customized log store implementation to store user logs and activation
records in a data store that is independent of the primary database. Such a
difference in functionality between the controller and invoker with
regards to activation storage and retrieval is problematic.


Activation records are special as they hold meaningful details about the
execution behavior of actions, triggers, rules, and sequences. Ideally
users should be able to run customized queries on these activation records
in order to find out the execution time of a group of entities, if a group
of entities were executed successfully, etc. By default, CouchDB does not
allow such customized queries to be performed. To provide user defined
queries on activations, we have already provided a log store that can be
customized per-deployment via a Service Provider Interface (SPI). Currently
we have log store implementations for Elasticsearch and Splunk. Both of
these services allow user queries.


Unfortunately these log store implementations apply mostly to the invoker.
For instance, the controller can only fetch logs from the log store using
the activation logs API, while the rest of the activation APIs must utilize
the artifact store. Activation records must be in the artifact store in
order for most of the controller activation APIs to return activation
information, even though those same activation records may have also been
saved in a different backing store by the log store. Consequently,
activation records and logs may be duplicated in the log store and artifact
store. Another problem revolves around the controller writing user logs.
User logs generated by the controller for triggers and sequences are only
written to the artifact store even if a non-default log store is being
used. Therefore, users cannot run customized queries on user logs generated
by the controller.

To eliminate the duplication of activations and the inaccessibility of
controller-generated user logs to the log store, an activation store SPI
can be provided. When configured appropriately, the activation store would
be able to utilize the same data store used by the log store. This would
eliminate duplication of stored activation information in separate
databases, and allow the log store access to user logs generated by the
controller.
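The shape of such an SPI can be sketched as a pluggable interface (hypothetical class names; the real SPI is Scala and bound via OpenWhisk's configuration, not shown here):

```python
from abc import ABC, abstractmethod

class ActivationStore(ABC):
    """Hypothetical SPI: a deployment binds exactly one implementation,
    shared by controller and invoker."""

    @abstractmethod
    def store(self, activation_id: str, record: dict) -> None: ...

    @abstractmethod
    def get(self, activation_id: str) -> dict: ...

class InMemoryActivationStore(ActivationStore):
    """Stand-in for a CouchDB- or Elasticsearch-backed implementation."""

    def __init__(self):
        self._records = {}

    def store(self, activation_id, record):
        self._records[activation_id] = record

    def get(self, activation_id):
        return self._records[activation_id]

# Because both controller and invoker resolve the same binding, activations
# land in a single backing store, with no duplication across databases.
store = InMemoryActivationStore()
store.store("act-1", {"duration": 12, "response": {"status": "success"}})
record = store.get("act-1")
```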


Work on providing an activation store that is compatible with the
artifact store has already started. The related PR can be found here:
https://github.com/apache/incubator-openwhisk/pull/3619. Providing an
activation store for Elasticsearch will follow.


Regards,
James Dubee


Re: Please comment on README update to release repo.

2018-04-05 Thread James W Dubee

Hey Matt,

Having all the repository statuses for Travis in one place is really
nice for monitoring purposes! I think we could add the provider repos
(alarms, Cloudant, Kafka) eventually.

Regards,
James Dubee





From:   "Matt Rutkowski" 
To: dev@openwhisk.apache.org
Date:   04/05/2018 05:01 PM
Subject:Please comment on README update to release repo.



Whiskers,

Started work towards getting the incubator-openwhisk-release repo docs in
"GA" shape with focus on the role of a "Release Manager"...

Added the "repo status" table to the README so Rel. Mgrs. can get a
snapshot of project health (migrated and updated from our CWIKI).
https://github.com/apache/incubator-openwhisk-release


Also, plan to update scancode to the new "strict" configuration and submit
PRs to affected repos as well so we can mark the final column with all
"yes" values.

Much more work to do in the docs that will promote the actual release
process steps to main README, clearly indicate which steps are manual and
provide links to supporting docs from each step. Will appreciate more
comments as I submit more PRs over the next few days.

-matt





Re: Controller Memory Pressure

2017-11-18 Thread James W Dubee

Here's the related PR containing the action migration script:
https://github.com/apache/incubator-openwhisk/pull/2938.




From:   "James W Dubee" <jwdu...@us.ibm.com>
To: dev@openwhisk.apache.org
Date:   11/17/2017 03:21 PM
Subject:Controller Memory Pressure







Hello all,

In order to reduce the amount of memory used within the controller, we plan
to store action code as attachments in the associated CouchDB documents
instead of inlining the code directly in action documents. Doing so will
allow the controller to avoid fetching action code from the database when an
action is invoked. Code associated with actions can be fairly large or
there may be several thousands of actions being invoked at any given time
in a system. Given this situation, not fetching action code or storing it
in the cache when it is not needed will help prevent memory pressure on the
controller.

Currently only the code associated with Java actions is stored as document
attachments. For all of the other runtime kinds, action code is inlined in
the action document. Allowing all action code to be stored as attachments
will normalize how action code is saved amongst different runtime kinds.

The cost of fully supporting all action code as attachments is that
existing databases will have to be updated in order to migrate existing
actions to the new schema. We will provide a script that will assist in
the action schema migration process. While it is preferable from a
performance perspective to migrate existing databases, the controller will
still function as it does today with the old schema. In the event that an
existing database is not migrated, new or updated actions will still use
the new action schema.

Regards,
James Dubee




Controller Memory Pressure

2017-11-17 Thread James W Dubee




Hello all,

In order to reduce the amount of memory used within the controller, we plan
to store action code as attachments in the associated CouchDB documents
instead of inlining the code directly in action documents. Doing so will
allow the controller to avoid fetching action code from the database when an
action is invoked. Code associated with actions can be fairly large or
there may be several thousands of actions being invoked at any given time
in a system. Given this situation, not fetching action code or storing it
in the cache when it is not needed will help prevent memory pressure on the
controller.

Currently only the code associated with Java actions is stored as document
attachments. For all of the other runtime kinds, action code is inlined in
the action document. Allowing all action code to be stored as attachments
will normalize how action code is saved amongst different runtime kinds.
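The memory-pressure argument can be sketched by comparing the two document shapes (hypothetical field names modeled loosely on CouchDB documents; a sketch, not the actual OpenWhisk schema):

```python
import json

LARGE_CODE = "function main() { return {}; }\n" * 200  # stand-in for bundled code

# Inline schema: the code lives inside the action document, so any cache of
# action metadata also holds every action's (possibly large) code.
inline_doc = {
    "_id": "ns/hello",
    "exec": {"kind": "nodejs:10", "code": LARGE_CODE},
}

# Attachment schema: the document keeps only a reference; the code is
# fetched separately, and only when a container actually needs it.
attachment_doc = {
    "_id": "ns/hello",
    "exec": {"kind": "nodejs:10", "code": {"attachmentName": "codefile"}},
}
attachments = {("ns/hello", "codefile"): LARGE_CODE}

def cached_bytes(doc) -> int:
    """Approximate controller cache cost for one action's metadata."""
    return len(json.dumps(doc))

# Per-action saving in the controller's cache; multiplied across thousands
# of concurrently invoked actions, this is the memory pressure at issue.
savings = cached_bytes(inline_doc) - cached_bytes(attachment_doc)
```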

The cost of fully supporting all action code as attachments is that
existing databases will have to be updated in order to migrate existing
actions to the new schema. We will provide a script that will assist in
the action schema migration process. While it is preferable from a
performance perspective to migrate existing databases, the controller will
still function as it does today with the old schema. In the event that an
existing database is not migrated, new or updated actions will still use
the new action schema.

Regards,
James Dubee


Re: Feed Provider API v2?

2017-10-23 Thread James W Dubee


Hello Carlos,

The CLI changes to invoke a trigger feed with a "READ" request when
getting a trigger should be doable. Have you thought about what the trigger
get output should look like when the associated feed supports a "READ"
request?

Regards,
James Dubee




From:   Carlos Santana 
To: dev@openwhisk.apache.org
Date:   10/20/2017 06:52 PM
Subject:Re: Feed Provider API v2?



Hi James

Adnan [1] from my team and I started to look into this last week.

He implemented the lifecycle "READ" in the Kafka feed action.
The feed action will be invoked in a similar manner as "CREATE" and
"DELETE", but only requiring the `auth` to be passed and of course the
`lifecycleEvent: READ` and the triggerName.
You can see it here:
https://github.com/apache/incubator-openwhisk-package-kafka/blob/master/action/messageHubFeed.js#L23

The PR that introduced this is here:
https://github.com/apache/incubator-openwhisk-package-kafka/pull/217/files


If you have the Kafka setup locally, you can test it by invoking the
action directly.
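The lifecycle dispatch Carlos describes can be sketched as follows (a hypothetical Python model of the pattern; the real handler is the Node.js messageHubFeed.js linked above, and the field names here are illustrative):

```python
def message_hub_feed(params: dict) -> dict:
    """Hypothetical feed-action dispatcher keyed on lifecycleEvent,
    mirroring the CREATE/DELETE/READ pattern described above."""
    if "authKey" not in params:
        return {"error": "authKey is required"}
    event = params.get("lifecycleEvent")
    trigger = params["triggerName"]
    if event == "CREATE":
        return {"status": "created", "trigger": trigger}
    if event == "DELETE":
        return {"status": "deleted", "trigger": trigger}
    if event == "READ":
        # Return the provider's stored configuration for this trigger.
        return {"status": "active",
                "config": {"trigger": trigger, "topic": "mytopic"}}
    return {"error": f"unsupported lifecycleEvent: {event}"}

# Direct invocation of the READ lifecycle, as one would do with
# `wsk action invoke` against a local deployment.
resp = message_hub_feed({
    "lifecycleEvent": "READ",
    "triggerName": "/ns/mykafkatrigger",
    "authKey": "user:pass",
})
```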

But we want to provide this in the CLI also, so when you do a "wsk trigger
get mykafkatrigger" it will detect the annotation and do an extra request
to get the trigger info from the provider.

He was going to do the same for the alarms and Cloudant triggers to
implement the "READ" lifecycle, and then look for someone who knows the Go
CLI code to add the extra request to "trigger get".

Probably we need to document this lifecycle in feed.md.

For update, the thinking is to use the same design: a lifecycle "UPDATE"
event to the feed action to change the configuration.

Also, when an API key is revoked, there is a need to tell the feed
provider to update all the triggers with the old key to use the new key.

[1]:
https://github.com/abaruni




On Thu, Oct 19, 2017 at 10:39 AM James Thomas  wrote:

> Users are running into limitations of the current feed provider API.
> Could we collect requirements, potential solutions, and work on an
> update to the API to support some of the new features?
>
> Looking back through the Github issues, I've tried to capture high-level
> requirements and solutions that have already been identified, included
> below.
>
> *Requirements*
>
> ** Support retrieving state about triggers registered with provider*
>
> This is the most common requirement. Feed providers only support
> "CREATE/DELETE" operations. Users want to be able to retrieve the
> current state of triggers from the provider. For example, this would
> allow people to see the number of triggers left to fire when maxTriggers
> is defined.
>
> Related issues:
>
https://github.com/apache/incubator-openwhisk/issues/1925

>
https://github.com/apache/incubator-openwhisk/issues/1398

>
>
https://github.com/apache/incubator-openwhisk-package-alarms/issues/86#issuecomment-325204467

>
https://github.com/apache/incubator-openwhisk/issues/471

>
> ** Add "delete trigger after max" option to enable auto-removing triggers
> that won't be fired anymore. *
>
> Numerous feeds now support a "maxTriggers" to limit the number of trigger
> activations. What happens once this limit has been reached? There's been
> discussion about removing "dangling" triggers versus disabling and how to
> surface this to the user. Should this option be baked into the API?
>
> This is relevant for an update to the "alarm" package to support
> "one-off" triggers.
> Related issues:
>