2020-02-11 02:54:24 UTC - Rodric Rabbah: This is more relevant. 
<https://github.com/apache/openwhisk/pull/4724>
+1 : seonghyun, Bilal
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581389664214800?thread_ts=1581389664.214800&cid=C3TPCAQG1
----
2020-02-11 02:55:00 UTC - Rodric Rabbah: You can also enable statsd/kamon
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581389700215600
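In the openwhisk-deploy-kube Helm chart these are toggled under the `metrics` block of the deployment values. A minimal sketch, assuming the key names described in the chart's configurationChoices.md (worth double-checking against the values.yaml of your chart version):

```
# mycluster.yaml (excerpt) - sketch only; key names assumed from
# openwhisk-deploy-kube's configurationChoices.md
metrics:
  prometheusEnabled: true    # expose Prometheus-format metrics and deploy the bundled Prometheus server
  kamonEnabled: true         # emit controller/invoker metrics via Kamon (statsd)
  kamonTags: false           # optional richer Kamon tagging
  userMetricsEnabled: true   # per-namespace/action metrics via user-events + Grafana
```

The values are then applied with the usual `helm upgrade ... -f mycluster.yaml` so the pods restart and pick them up.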
----
2020-02-11 15:07:36 UTC - Bilal: That seems like exactly what I'm interested 
in. I am deploying onto Kubernetes currently, which relies on Docker images. So 
my understanding is that I can either wait until this PR gets accepted and 
eventually pushed to the OW Docker images, or build my own Docker images from 
the main OW repo plus this PR.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581433656216000?thread_ts=1581389664.214800&cid=C3TPCAQG1
----
2020-02-11 15:28:54 UTC - Bilal: Are the metrics from statsd/Kamon different 
from the ones in Prometheus? I enabled both on my kube deployment but can only 
access the Prometheus server that spun up. Looking here 
<https://github.com/apache/openwhisk/blob/master/docs/metrics.md> I can see all 
those metrics in Prometheus, so I guess the choice is whether to send the 
metrics to Prometheus or to statsd?
I was attempting to follow the instructions here: 
<https://github.com/apache/openwhisk-deploy-kube/blob/master/docs/configurationChoices.md#metrics-and-prometheus-support>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581434934217900?thread_ts=1581434934.217900&cid=C3TPCAQG1
----
2020-02-11 22:33:48 UTC - Bilal: Roughly speaking, how much storage does OW 
need? The defaults in the openwhisk-deploy-kube repo seem quite small, with 1 
gigabyte or so for each component (zookeeper, kafka, db, redis, prometheus).  
I'm at the small-scale stage 
<https://github.com/dgrove-oss/openwhisk-deploy-kube/blob/helm3/docs/k8s-custom-build-cluster-scaleup.md#small-scale>
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581460428220700
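For reference, those per-component sizes live under each component's `persistence` block in the chart's values. A sketch of raising them, with sizes that are purely illustrative and key names assumed from the chart's values.yaml:

```
# mycluster.yaml (excerpt) - illustrative sizes, not a recommendation;
# key layout assumed to follow the chart's values.yaml (<component>.persistence.size)
zookeeper:
  persistence:
    size: 2Gi
kafka:
  persistence:
    size: 8Gi        # Kafka tends to be the first volume to fill up under sustained load
db:
  persistence:
    size: 16Gi       # CouchDB grows with activation records; size it for your retention window
redis:
  persistence:
    size: 2Gi
prometheus:
  persistence:
    size: 4Gi
```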
----
2020-02-11 22:41:02 UTC - Rodric Rabbah: it depends on your topology - we use 
2GB on average for the various nodes
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581460862221800
----
2020-02-11 22:41:15 UTC - Rodric Rabbah: yes - that PR should land imminently
+1 : Bilal
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581460875221900?thread_ts=1581389664.214800&cid=C3TPCAQG1
----
2020-02-11 22:41:39 UTC - Rodric Rabbah: should be the same
+1 : Bilal
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581460899222100?thread_ts=1581434934.217900&cid=C3TPCAQG1
----
2020-02-11 22:58:38 UTC - Bilal: just 2GB of storage? I assume CouchDB 
needs more if it's going to store activation records for like a year or 
something
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581461918222900?thread_ts=1581461918.222900&cid=C3TPCAQG1
----
2020-02-11 23:04:21 UTC - Bilal: Set to 2GB of storage for Kafka

```[2020-02-11 23:00:10,764] INFO Recovering unflushed segment 0 in log 
__consumer_offsets-17. (kafka.log.Log) 
[2020-02-11 23:00:10,764] INFO Loading producer state from offset 0 for 
partition __consumer_offsets-17 with message format version 2 (kafka.log.Log) 
[2020-02-11 23:00:10,765] INFO Completed load of log __consumer_offsets-17 with 
1 log segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log) 
[2020-02-11 23:00:10,766] WARN Found a corrupted index file due to requirement 
failed: Corrupt index found, index file 
(/kafka/kafka-logs-paste-kafka-0/__consumer_offsets-39/00000000000000000000.index)
 has non-zero size but the last offset is 0 which is no larger than the base 
offset 0.}. deleting 
/kafka/kafka-logs-paste-kafka-0/__consumer_offsets-39/00000000000000000000.timeindex,
 
/kafka/kafka-logs-paste-kafka-0/__consumer_offsets-39/00000000000000000000.index,
 and 
/kafka/kafka-logs-paste-kafka-0/__consumer_offsets-39/00000000000000000000.txnindex
 and rebuilding index... (kafka.log.Log) 
waiting for kafka to be ready 
[2020-02-11 23:00:19,832] ERROR There was an error in one of the threads during 
logs loading: java.io.IOException: No space left on device 
(kafka.log.LogManager) 
[2020-02-11 23:00:19,835] FATAL [Kafka Server 0], Fatal error during 
KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer) 
java.io.IOException: No space left on device ```
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581462261223600
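The `No space left on device` error above is the 2GB Kafka volume filling up. The straightforward fix is to give the Kafka PVC more room and redeploy; many storage classes won't resize an existing claim in place, so the old PVC may need to be deleted first. Reducing the retention on OpenWhisk's Kafka topics is the other lever. Same assumed key as in the sketch earlier:

```
# mycluster.yaml (excerpt) - assumption: kafka.persistence.size backs the
# Kafka PVC; pick a size that covers your expected topic retention
kafka:
  persistence:
    size: 8Gi
```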
----
2020-02-11 23:07:35 UTC - Rodric Rabbah: Couch is special. Also if you’re in an 
HA environment you need two for continuous replication. 

It also depends on your retention policy for activations if those are stored on 
couch. 
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581462455225200?thread_ts=1581461918.222900&cid=C3TPCAQG1
----
2020-02-11 23:08:03 UTC - Rodric Rabbah: Because Couch is also doing view 
computation it needs a beefy CPU. 
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581462483225900?thread_ts=1581461918.222900&cid=C3TPCAQG1
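If view computation is making CouchDB CPU-bound, giving its pod explicit resources helps. Whether the chart exposes this directly depends on the chart version, but the Kubernetes stanza itself looks like the sketch below (numbers purely illustrative):

```
# resources stanza for the CouchDB pod - where to set it depends on the
# chart or your own overlay; values shown are only an example
resources:
  requests:
    cpu: "2"
    memory: 4Gi
  limits:
    cpu: "4"
    memory: 8Gi
```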
----
2020-02-11 23:16:00 UTC - Bilal: So far we are still in the early stages; I've 
moved beyond testing using KIND locally. Have everything running on an 
on-prem 3-node kube cluster currently. I don't recall the exact specs, but from 
what I can see on the dashboard we've got 16 cores and 64GB of memory per node.

I do see some occasional spikes to 80% CPU utilization now that you mention it.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581462960226500?thread_ts=1581461918.222900&cid=C3TPCAQG1
----
2020-02-11 23:16:40 UTC - Bilal: Reading through past threads on Slack has been 
quite useful for scaling up; it's nice to see the conversations that led to 
some of the scaling-up documentation.
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581463000226700?thread_ts=1581461918.222900&cid=C3TPCAQG1
----
2020-02-11 23:17:19 UTC - Bilal: So far I've tested 100ish concurrent actions 
sustained and things seemed fine.

Well, before I blew up Kafka today :rolling_on_the_floor_laughing:
https://openwhisk-team.slack.com/archives/C3TPCAQG1/p1581463039226900?thread_ts=1581461918.222900&cid=C3TPCAQG1
----
