2020-01-24 10:20:41 UTC - Stefan Lundgren: @Stefan Lundgren has joined the
channel
----
2020-01-24 13:45:05 UTC - Ryan: Ping
----
2020-01-24 16:42:15 UTC - Poul Henriksen: @Poul Henriksen has joined the channel
----
2020-01-24 18:19:52 UTC - Roman Popenov: Does anyone know what this might be
related to:
```18:04:02.914 [pulsar-external-web-4-5] WARN
org.eclipse.jetty.server.HttpChannel - /admin/v2/clusters
java.lang.IllegalArgumentException: Negative initial size: -1
at
java.io.ByteArrayOutputStream.<init>(ByteArrayOutputStream.java:74)
~[?:1.8.0_232]
at
org.apache.pulsar.proxy.server.AdminProxyHandler$ReplayableProxyContentProvider.<init>(AdminProxyHandler.java:150)
~[org.apache.pulsar-pulsar-proxy-2.4.2.jar:2.4.2]```
----
2020-01-24 19:05:58 UTC - Chandra: @Chandra has joined the channel
----
2020-01-24 19:09:02 UTC - Sijie Guo: I saw a similar error before. Can't
remember exactly what it was.
----
2020-01-24 19:16:18 UTC - Roman Popenov: And the manager is just spinning in a
very unresponsive state
----
2020-01-24 21:48:27 UTC - Addison Higham: okay, so I ran out of disk last night
on one cluster. My cluster only reports about 15 GB of storage size across all
the namespaces, but with a 3x replication factor and 3 bookies each with 250 GB,
that doesn't quite make sense. I'm also confused that only one bookie filled up
completely; the other two are at 80% storage used.
We do use S3 offloading, so I am thinking that the reported storage size does
not include S3-offloaded storage, OR offloaded storage that is still within the
deletion lag policy. I got the cluster back up and running by adding 3 more
bookies, but now I want to figure out why disk isn't being reclaimed.
I would have thought that by now the deletion lag has passed and some disk would
have been cleaned up, but it doesn't appear to be getting triggered. Is there
any tooling to help pulsar and BK "re-sync" on ledgers? Looking at the CLI and
code, I didn't notice anything
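For anyone poking at the same thing, here is a minimal sketch of inspecting which ledgers the broker still considers offloaded and checking the namespace-level deletion lag via the Java admin client. The admin URL, topic, and the deletion-lag getter/setter names are assumptions about a typical setup, not taken from this thread:
```import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistentTopicInternalStats;

public class OffloadInspect {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")            // assumed admin endpoint
                .build()) {

            String topic = "persistent://public/default/my-topic";  // illustrative topic

            // Internal stats list every ledger backing the topic; ledgers marked
            // offloaded should become reclaimable on the bookies once the
            // offload deletion lag has elapsed.
            PersistentTopicInternalStats stats = admin.topics().getInternalStats(topic);
            stats.ledgers.forEach(l -> System.out.printf(
                    "ledger=%d size=%d offloaded=%s%n", l.ledgerId, l.size, l.offloaded));

            // The deletion lag is a namespace-level policy (method names assumed).
            System.out.println(admin.namespaces().getOffloadDeleteLag("public/default"));
            admin.namespaces().setOffloadDeleteLag("public/default", 1, TimeUnit.DAYS);
        }
    }
}```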
----
2020-01-24 21:56:46 UTC - Eugen: So pulsar can easily scale to millions of
topics, but what if some clients subscribe to, say, 10000 topics via a regex
subscription? Will this be efficient, or should it be avoided? And similarly on
the producer side: since a `pulsarClient.newProducer().topic("x").create()` is
needed for every topic, it just doesn't feel right to have to do that for tens
of thousands of topics in a single producer application.
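For context, the regex case maps to a single pattern consumer in the Java client rather than one subscription object per topic; a minimal sketch, with the service URL and topic pattern purely illustrative:
```import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;

public class PatternSubscribe {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // illustrative service URL
                .build();

        // A single consumer object covers every topic matching the pattern;
        // the client periodically discovers newly created matching topics,
        // so application code does not hold one subscription per topic.
        Consumer<byte[]> consumer = client.newConsumer()
                .topicsPattern(Pattern.compile("persistent://public/default/sensor-.*"))
                .subscriptionName("all-sensors")
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        consumer.acknowledge(msg);
        consumer.close();
        client.close();
    }
}```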
----
2020-01-24 22:14:42 UTC - Roman Popenov: still no love in minikube
```Caused by:
org.apache.pulsar.broker.service.BrokerServiceException$PersistenceException:
org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty
bookies available
... 14 more
Caused by: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough
non-faulty bookies available
22:11:07.633 [pulsar-web-32-6] INFO org.eclipse.jetty.server.RequestLog -
172.17.0.6 - - [24/Jan/2020:22:11:07 +0000] "PUT
/admin/v2/persistent/public/functions/assignments HTTP/1.1" 500 2528 "-"
"Pulsar-Java-v2.5.0" 280```
and then later this:
```22:11:07.660 [main] ERROR org.apache.pulsar.PulsarBrokerStarter - Failed to
start pulsar service.
org.apache.pulsar.broker.PulsarServerException: java.lang.RuntimeException:
org.apache.pulsar.client.admin.PulsarAdminException$ServerSideErrorException:
HTTP 500 Internal Server Error
at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:518)
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
at
org.apache.pulsar.PulsarBrokerStarter$BrokerStarter.start(PulsarBrokerStarter.java:264)
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
at
org.apache.pulsar.PulsarBrokerStarter.main(PulsarBrokerStarter.java:329)
[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
Caused by: java.lang.RuntimeException:
org.apache.pulsar.client.admin.PulsarAdminException$ServerSideErrorException:
HTTP 500 Internal Server Error
at
org.apache.pulsar.functions.worker.WorkerService.start(WorkerService.java:206)
~[org.apache.pulsar-pulsar-functions-worker-2.5.0.jar:2.5.0]
at
org.apache.pulsar.broker.PulsarService.startWorkerService(PulsarService.java:1108)
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:505)
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]```
----
2020-01-24 22:19:57 UTC - Vladimir Shchur: Here we are :wink:
<https://github.com/apache/pulsar/issues/6127>
heart : Roman Popenov
party-parrot : Roman Popenov
----
2020-01-24 22:20:55 UTC - Roman Popenov: :thanks:
----
2020-01-24 23:53:28 UTC - Ming: Hi, is there a plan for tiered storage to
support Azure blob storage? Or maybe I'm missing the obvious and it's supported
already.
----
2020-01-25 00:23:10 UTC - Addison Higham: @Ming AFAIK, Azure isn't yet
supported. However, as the docs mention, under the hood pulsar uses the Apache
jclouds blobstore API (see
<https://jclouds.apache.org/reference/providers/#blobstore-providers>), so it
*theoretically* should be very easy. I know there was some work being done to
refactor the tiered storage support, and there was an implementation of Azure
that depended on that refactor (see
<https://github.com/apache/pulsar/pull/2865> and
<https://github.com/apache/pulsar/pull/2615>) that looks like it got stalled,
but if you wanted to follow up there, that might be worthwhile
----
2020-01-25 00:39:28 UTC - Ming: @Addison Higham thanks! The PRs could have been
split up, with the refactoring and the blob support handled separately.
----
2020-01-25 03:53:38 UTC - Sijie Guo: I think this is related to what we
encountered in pulsarctl before:
<https://github.com/streamnative/pulsarctl/pull/157>
----
2020-01-25 03:54:44 UTC - Sijie Guo: So if you are seeing this when using
pulsar manager, we might need to fix the same thing in pulsar manager. Do you
mind creating an issue in the pulsar-manager repo?
----
2020-01-25 03:55:56 UTC - Sijie Guo: you might want to avoid this, since you
would need a lot of memory on the client side to hold all those consumer
instances.
----
2020-01-25 03:59:25 UTC - Sijie Guo: sorry, “rerun java8 tests” was a phrase
used to trigger CI. you can ignore that comment
----
2020-01-25 05:09:24 UTC - Eric Simon: Hi fellow Pulsars! I am running a
standalone cluster and, when trying to publish a function (jar), I get the
following exception:
```Exception in thread "main"
org.apache.pulsar.client.admin.PulsarAdminException:
java.util.concurrent.ExecutionException:
org.apache.pulsar.shade.io.netty.channel.socket.ChannelOutputShutdownException:
Channel output shutdown
at
org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:228)
at
org.apache.pulsar.client.admin.internal.FunctionsImpl.createFunction(FunctionsImpl.java:178)
at
com.current.control.DeployCluster$.createFunction(DeployCluster.scala:71)
at com.current.control.DeployCluster$.main(DeployCluster.scala:22)
at com.current.control.DeployCluster.main(DeployCluster.scala)
Caused by: java.util.concurrent.ExecutionException:
org.apache.pulsar.shade.io.netty.channel.socket.ChannelOutputShutdownException:
Channel output shutdown
at
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at
org.apache.pulsar.shade.org.asynchttpclient.netty.NettyResponseFuture.get(NettyResponseFuture.java:201)
at
org.apache.pulsar.client.admin.internal.FunctionsImpl.createFunction(FunctionsImpl.java:171)
... 3 more```
----
2020-01-25 05:10:13 UTC - Eric Simon: My guess is that it has to do with the
Jetty buffer size on the standalone. Has anyone seen this before, and can you
provide any info on how to fix it?
----
2020-01-25 05:11:21 UTC - Sijie Guo: how large is your function jar size?
----
2020-01-25 05:18:51 UTC - Eric Simon: 41 MB. I am using assembly to package the
jar.
----
2020-01-25 05:20:55 UTC - Sijie Guo: That should be fine though. Did you try to
create a function using the API example (examples/api-examples.jar)?
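For reference, a minimal sketch of creating that example function programmatically through the admin client, roughly mirroring what a deploy tool like the one in the stack trace would do; the admin URL, tenant/namespace, and input/output topic names are assumptions:
```import java.util.Collections;

import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.functions.FunctionConfig;

public class CreateExampleFunction {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")  // assumed standalone admin endpoint
                .build()) {

            FunctionConfig config = new FunctionConfig();
            config.setTenant("public");
            config.setNamespace("default");
            config.setName("exclamation");
            config.setClassName("org.apache.pulsar.functions.api.examples.ExclamationFunction");
            config.setInputs(Collections.singleton("persistent://public/default/exclamation-input"));
            config.setOutput("persistent://public/default/exclamation-output");

            // Uploads the jar and registers the function. If this small example jar
            // deploys fine but the 41 MB assembly does not, the problem is likely in
            // the upload/transport path rather than in the function config itself.
            admin.functions().createFunction(config, "examples/api-examples.jar");
        }
    }
}```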
----
2020-01-25 05:22:43 UTC - Eugen: I see - thanks! So I guess in my case, where I
have both consumers that want to consume everything and consumers that are
interested in only specific, key-able parts of the data, I should have a
layered design: ingestion into a single high-throughput topic _a_ for the
consume-everything consumers, then one consumer takes from _a_ and splits the
data into many smaller topics for consumption by those finer-grained consumers.
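A minimal sketch of what that splitting consumer could look like with the Java client, caching one producer per derived topic; the topic naming and keying scheme are illustrative assumptions, and the same logic could equally run as a Pulsar Function:
```import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;

public class FanOutSplitter {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")        // illustrative URL
                .build();

        // The single high-throughput ingestion topic "a" from the discussion.
        Consumer<byte[]> ingest = client.newConsumer()
                .topic("persistent://public/default/a")
                .subscriptionName("splitter")
                .subscribe();

        // Lazily create and cache one producer per derived topic, so producer
        // creation cost is only paid for keys that actually show up.
        Map<String, Producer<byte[]>> producers = new ConcurrentHashMap<>();

        while (true) {
            Message<byte[]> msg = ingest.receive();
            String key = msg.getKey() == null ? "unknown" : msg.getKey();
            Producer<byte[]> out = producers.computeIfAbsent(key, k -> {
                try {
                    return client.newProducer()
                            .topic("persistent://public/default/a-" + k)
                            .create();
                } catch (PulsarClientException e) {
                    throw new RuntimeException(e);
                }
            });
            out.sendAsync(msg.getValue());
            ingest.acknowledge(msg);
        }
    }
}```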
----
2020-01-25 05:29:26 UTC - Sijie Guo: that sounds reasonable
thanks : Eugen
----