2020-10-06 09:22:39 UTC - Emmanuel Marchand (eXenSa): hi there, I can't find the 
javadoc for pulsar 2.6.x, am I missing something?
----
2020-10-06 09:45:33 UTC - Marcio Martins: Hey guys, my bookies are running out 
of ledger disk space, and I think it's because s3 offload stopped working... In 
the last few hours this is spammed in the logs:
```java.util.concurrent.CompletionException: 
org.apache.bookkeeper.mledger.ManagedLedgerException$BadVersionException: 
org.apache.zookeeper.KeeperException$BadVersionException: KeeperErrorCode = 
BadVersion```

----
2020-10-06 09:45:42 UTC - Marcio Martins: Any ideas of how to fix this?
----
2020-10-06 09:58:46 UTC - Vil: comparison and benchmark between Pravega and 
Pulsar: 
<https://blog.pravega.io/2020/10/01/when-speeding-makes-sense-fast-consistent-durable-and-scalable-streaming-data-with-pravega/>
----
2020-10-06 10:01:26 UTC - Vil: i am surprised. why is there so much performance 
difference between the two, even though both are based on BookKeeper?
----
2020-10-06 10:01:37 UTC - Vil: i have not heard of pravega before
----
2020-10-06 12:21:13 UTC - Rattanjot Singh: if we upgrade the bookies, do we need 
to restart the brokers and proxy?
----
2020-10-06 13:30:33 UTC - Alan Broddle: We have tried this and it gives an 
authentication error and generates a dump of the web page.  Does not set the 
password to admin/apachepulsar
Extract:
```<table class="contentTable">
  <tr>
    <td class="contentData">
      You must be authenticated to access this URL.
    </td>
  </tr>
</table>```
----
2020-10-06 14:14:48 UTC - Addison Higham: what version of pulsar? and do you 
have a full stack trace for that?

To immediately fix your problem you can try unloading a namespace or, if that 
doesn't work, restarting your brokers
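If it helps, the unload can be done with `pulsar-admin`; the tenant/namespace and bundle range below are placeholders, not values from this thread:

```shell
# Unload every topic in a namespace so brokers re-acquire them
# (substitute your own tenant/namespace)
bin/pulsar-admin namespaces unload my-tenant/my-namespace

# Or unload a single bundle if you know which one is affected
bin/pulsar-admin namespaces unload my-tenant/my-namespace \
  --bundle 0x00000000_0xffffffff
```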
----
2020-10-06 14:14:59 UTC - Marcio Martins: 2.5.1
----
2020-10-06 14:15:05 UTC - Marcio Martins: No, I lost it now :[
----
2020-10-06 14:15:31 UTC - Marcio Martins: I bumped the space on all bookies and 
it is working fine for now... But will surely happen again soon...
----
2020-10-06 14:15:48 UTC - Addison Higham: it may be an already-resolved issue. 
Yes, I would try unloading the namespace or restarting brokers
----
2020-10-06 14:17:10 UTC - Addison Higham: In general no, you shouldn't need to 
restart the broker or proxy.

However, did your bookie change IPs? If so, it may currently be needed, due to 
some issues with how connections are opened to bookkeeper and TCP sockets not 
timing out gracefully.
----
2020-10-06 15:03:25 UTC - Enrico: Where can I check where I lost messages? I 
lost 40% of my messages using async send with a non-persistent topic
----
2020-10-06 15:28:20 UTC - Addison Higham: you are trying to debug where in the 
pipeline it happened? You could look at your broker logs. One thing to look for 
is whether a rebalance happened. Topics are rebalanced across brokers; in the 
case of persistent topics no messages are lost, but in the case of 
non-persistent topics this can result in message loss
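One way to spot a rebalance, sketched with placeholder names (topic and log path are not from this thread): check which broker currently owns the topic, and grep the broker logs for unload events.

```shell
# Which broker owns this topic right now? If the answer changes between
# runs, the topic was rebalanced (substitute your own topic name)
bin/pulsar-admin topics lookup non-persistent://my-tenant/my-namespace/my-topic

# Broker logs record ownership changes; the log path is deployment-specific
grep -i unload /path/to/broker.log
```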
----
2020-10-06 15:28:55 UTC - Addison Higham: Are you also ensuring you don't have 
any issues on the client?
----
2020-10-06 18:41:48 UTC - Evan Furman: Anyone heard anything regarding an 
official Datadog integration for Pulsar? The grafana/prom stuff is great, just 
curious.
----
2020-10-06 19:07:08 UTC - Addison Higham: here are the snapshot docs: 
<http://pulsar.apache.org/api/client/2.6.0-SNAPSHOT/>, they should be correct, 
but I thought we published point releases too.. will ask around
----
2020-10-06 19:09:23 UTC - Addison Higham: I am not aware of anything. I know 
that datadog is moving more towards openmetrics even for "official" 
integrations. Going that route should be pretty straightforward, but using DD 
APIs directly would also be an option.

It is something I think would be good to pursue in the community, as official 
integrations also don't have to pay for custom metrics (up to a point, I think 
it is only a few hundred metrics)
+1 : Evan Furman
----
2020-10-06 19:15:11 UTC - Joshua Decosta: Sorry to bother again, are there any 
metrics from proxy that aren’t available from the brokers? @Addison Higham 
----
2020-10-06 19:15:45 UTC - Joshua Decosta: Basically I’m trying to access 
metrics and I’m concerned that if I don’t get metrics from the proxy I will be 
missing some. 
----
2020-10-06 19:16:03 UTC - Addison Higham: no worries :slightly_smiling_face: 
proxy and broker metrics are entirely different sets of metrics, but support 
was just merged for the proxy to fetch metrics from the broker backends
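As a quick sketch (8080 is the default web service port; the hostnames are placeholders), the two metric sets can be scraped independently from each component's own admin endpoint:

```shell
# Broker metrics and proxy metrics are each served on that component's
# admin/web port under /metrics, in Prometheus text format
curl -s http://broker-host:8080/metrics | head
curl -s http://proxy-host:8080/metrics | head
```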
----
2020-10-06 19:17:04 UTC - Joshua Decosta: My problem is I’m using SignalFx to 
grab the metrics and there doesn’t seem to be a way to configure it with an 
auth token. This was also why I was asking about changing the ports for the 
proxy metrics endpoint 
----
2020-10-06 19:18:07 UTC - Joshua Decosta: I either have to leave the metrics 
auth disabled, change the port only for the metrics endpoint, or something else 
that I haven’t determined yet
----
2020-10-06 19:20:33 UTC - Joshua Decosta: I guess I could just change the port 
for the admin services 
----
2020-10-06 19:21:03 UTC - Joshua Decosta: You were saying that the admin 
service is what exposes the metrics endpoint even on the pulsar proxy?
----
2020-10-06 19:22:02 UTC - Addison Higham: to be clear, disabling auth for 
metrics doesn't disable auth for any other endpoints. What we have seen in the 
past, though, is that if you wanted to use something like IP filtering, you 
could run a small proxy or something purely for that single admin endpoint
----
2020-10-06 19:22:13 UTC - Addison Higham: curious why you want to change the 
port for metrics?
----
2020-10-06 19:22:23 UTC - Joshua Decosta: Network security rules 
----
2020-10-06 19:22:46 UTC - Joshua Decosta: We have most other ports blocked by 
default and essentially that would block metrics automatically 
----
2020-10-06 19:22:53 UTC - Joshua Decosta: If i could just change the port 
----
2020-10-06 19:23:12 UTC - Joshua Decosta: Which would keep the admin service on 
a different port 
----
2020-10-06 19:27:03 UTC - Addison Higham: gotcha, yes, so the metrics servlet, 
both in the proxy and the broker, is part of the admin server.

If you wanted to have it on a different port, I would suggest a small proxy 
that could simply handle that single route. If you are using kubernetes, you 
may be able to do this just with an ingress controller pointed at the broker 
and proxy backends as needed
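A minimal sketch of that idea with nginx, assuming the broker's admin port is 8080 (all hostnames and ports here are placeholders):

```
# Expose only the /metrics route on a dedicated port, forwarding it to
# the broker's admin endpoint; every other path is rejected
server {
    listen 9090;
    location = /metrics {
        proxy_pass http://broker-host:8080/metrics;
    }
    location / {
        return 404;
    }
}
```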
----
2020-10-06 19:27:08 UTC - Addison Higham: we use such a technique for a few 
customers
+1 : Joshua Decosta
----
2020-10-06 19:39:34 UTC - Adam Rachman: @Adam Rachman has joined the channel
----
2020-10-06 23:16:01 UTC - Robert Stolz: @Robert Stolz has joined the channel
----
2020-10-07 00:28:27 UTC - Vincent Wong: Hi, I find that this ledgers directory 
is filling up the disk:
`/data/bookkeeper/ledgers/current`
There are many *.log files in this directory; how can I safely delete/rotate 
them?
```-rw-r--r-- 1 root root 1073778769 Sep 24 13:08 1ad.log
-rw-r--r-- 1 root root 1073758726 Sep 24 14:09 1ae.log
-rw-r--r-- 1 root root 1073754678 Sep 24 14:57 1af.log```

----
2020-10-07 01:13:05 UTC - Addison Higham: there isn't a safe way to delete them 
on the filesystem. Multiple ledgers are interleaved into entry log files, which 
are your actual messages.

What you need to do is delete some topics/change your retention settings, which 
will cause pulsar to delete some ledgers, and then have BK major compaction 
run. There isn't a way to force major compaction; instead, you can change your 
bookie's `majorCompactionInterval` setting to something like 10 minutes, but 
then make sure to change it back
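A sketch of the relevant `conf/bookkeeper.conf` knobs (the values are illustrative, not recommendations):

```
# Run major compaction more often than the 1-day default (86400 seconds)
# so reclaimable entry logs are rewritten sooner; revert this afterwards
majorCompactionInterval=600

# Entry logs whose live-data ratio is below this threshold are eligible
# for major compaction (0.8 is the default)
majorCompactionThreshold=0.8
```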
----
2020-10-07 03:59:04 UTC - Horatio: @Horatio has joined the channel
----
2020-10-07 04:59:33 UTC - Taylor: @Taylor has joined the channel
----
2020-10-07 06:14:22 UTC - Emmanuel Marchand (eXenSa): thanks for the link, FYI 
documentation from this link (<https://javadoc.io/doc/org.apache.pulsar>) are 
up to date only for `pulsar-broker` and `pulsar-functions-api`
----
2020-10-07 07:39:42 UTC - Alan Hoffmeister: Is there a way to configure topic 
compaction to actually delete older keys? I would like to save space by not 
storing duplicate messages with the same key
----
