2018-07-04 14:56:15 UTC - Tomer Lev: Hi, I installed the Pulsar dashboard using 
Docker following the instructions in the documentation 
<http://pulsar.apache.org/docs/latest/admin/Dashboard/> and I can't see 
anything there - can someone help me with setting this up?
----
2018-07-04 15:07:45 UTC - Matteo Merli: By nothing, you mean the dashboard 
shows up but with no data?
----
2018-07-04 15:10:28 UTC - Tomer Lev: yes
----
2018-07-04 15:10:42 UTC - Tomer Lev: I can see the dashboard but can't see even 
the name of my cluster
----
2018-07-04 15:10:54 UTC - Matteo Merli: did you check the SERVICE_URL env var? 
Is it pointing to the correct address?
----
2018-07-04 15:11:12 UTC - Tomer Lev: yes (it's localhost) I tried with and 
without it ...
----
2018-07-04 15:11:30 UTC - Tomer Lev: BTW the Docker container didn't come up 
out of the box - I had to change the version of postgres
----
2018-07-04 15:11:54 UTC - Matteo Merli: ok, the problem is that since the 
dashboard runs inside Docker, localhost means inside the container
----
2018-07-04 15:12:05 UTC - Matteo Merli: you would have to set it to the host 
machine IP
----
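A sketch of the fix Matteo describes, assuming the standard dashboard image
from the docs and the default broker HTTP port 8080 (the IP below is a
placeholder for your host machine's address):

```shell
# Run the Pulsar dashboard, pointing SERVICE_URL at the host machine's
# IP instead of localhost (which, inside the container, would resolve
# to the container itself). 192.168.1.10 is a placeholder host IP.
docker run -p 80:80 \
  -e SERVICE_URL=http://192.168.1.10:8080 \
  apachepulsar/pulsar-dashboard
```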
2018-07-04 15:12:16 UTC - Tomer Lev: oh ok i'll try that
----
2018-07-04 15:14:26 UTC - Tomer Lev: ok I can see the cluster name right now
----
2018-07-04 15:14:29 UTC - Tomer Lev: but nothing else ...
----
2018-07-04 15:15:05 UTC - Tomer Lev: No Topics... No Brokers...
----
2018-07-04 15:17:40 UTC - Matteo Merli: Uhm, I see. I think it’s still a 
problem with “localhost”. The standalone broker advertises itself on 
“localhost”. When the dashboard fetches the stats, it gets the list of brokers 
(and their addresses) and then connects to them. If standalone advertises 
localhost, it would not be reachable from inside the dashboard container. 

You can add `--advertised-address $HOST_IP` when running pulsar standalone
----
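For a standalone broker, that suggestion would look something like this (the
IP is a placeholder for the host machine's address):

```shell
# Advertise the host machine's IP instead of localhost so that the
# dashboard container (and any other remote client) can reach the broker.
bin/pulsar standalone --advertised-address 192.168.1.10
```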
2018-07-04 15:20:20 UTC - Tomer Lev: I changed the advertised address in 
broker.conf to 0.0.0.0
----
2018-07-04 15:20:34 UTC - Matteo Merli: no, it cannot be 0.0.0.0
----
2018-07-04 15:20:51 UTC - Matteo Merli: it’s not the “bind” address
----
2018-07-04 15:21:16 UTC - Matteo Merli: it’s the address that the broker tells 
others to use to reach it
----
2018-07-04 15:23:12 UTC - Tomer Lev: You're right, my bad... I changed it to 
the machine's IP address but still no results
----
2018-07-04 15:28:48 UTC - Tomer Lev: any idea?
----
2018-07-04 15:29:22 UTC - Tomer Lev: can I start the dashboard with a verbose 
option?
----
2018-07-04 15:29:58 UTC - Matteo Merli: there are logs inside the container. 
all components are started with supervisord
----
2018-07-04 15:30:14 UTC - Tomer Lev: ok I'll check them
----
2018-07-04 15:30:59 UTC - Matteo Merli: /var/log/supervisor
----
2018-07-04 15:31:19 UTC - Matteo Merli: In particular 
`collector-stderr---supervisor-*.log`
----
2018-07-04 15:31:41 UTC - Matteo Merli: that is the process that collects data 
from brokers
----
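One way to inspect that log from the host, assuming the dashboard container is
named `pulsar-dashboard` (a placeholder - use your actual container name or ID):

```shell
# Tail the stats-collector log inside the dashboard container; the
# wildcard needs a shell inside the container to expand.
docker exec pulsar-dashboard \
  sh -c 'tail -n 100 /var/log/supervisor/collector-stderr---supervisor-*.log'
```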
2018-07-04 15:33:13 UTC - Tomer Lev: I can see some errors here
----
2018-07-04 15:33:17 UTC - Tomer Lev: (Caused by 
NewConnectionError('<urllib3.connection.HTTPConnection object at 
0x7f73e1d49650>: Failed to establish a new connection: [Errno -2] Name or 
service not known',))
----
2018-07-04 15:33:43 UTC - Matteo Merli: seems a problem with SERVICE_URL 
variable
----
2018-07-04 15:36:17 UTC - Tomer Lev: looks like it's trying to connect to a 
host whose DNS name it cannot resolve
----
2018-07-04 15:36:57 UTC - Tomer Lev: but where does it get this host from?
----
2018-07-04 15:37:22 UTC - Matteo Merli: that has to be the “advertisedAddress” 
from the standalone broker
----
2018-07-04 15:38:11 UTC - Tomer Lev: I use 3 nodes, all of them with the 
default advertisedAddress
----
2018-07-04 15:38:29 UTC - Tomer Lev: so it should expose its hostname, right?
----
2018-07-04 15:39:00 UTC - Matteo Merli: Oh I see, and the hostnames are not 
reachable from inside docker
----
2018-07-04 15:39:58 UTC - Tomer Lev: actually I have hostnames of 
pulsar1, pulsar2, pulsar3
----
2018-07-04 15:40:15 UTC - Tomer Lev: and somehow it's trying to connect to 
pulsar.mydomain.com
----
2018-07-04 15:40:28 UTC - Matteo Merli: either you can make these hostnames 
resolvable from inside Docker, or you can set advertisedAddress in each broker 
to advertise the IP rather than the hostname
----
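In each node's broker.conf, that second option would be a line like the
following (the IP is a placeholder for that node's reachable address):

```properties
# broker.conf on each node: advertise an address that is reachable from
# the dashboard container. It must NOT be 0.0.0.0 - that is a bind
# address, not an address others can use to reach the broker.
advertisedAddress=10.0.0.11
```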
2018-07-04 15:41:33 UTC - Tomer Lev: 
pulsar.mydomain.com is an A record that points to all 3 nodes, but it cannot 
be resolved from inside the Docker container
----
2018-07-04 15:41:47 UTC - Tomer Lev: just trying to understand where it got 
that address from ...
----
2018-07-04 15:43:26 UTC - Matteo Merli: Is that what was configured as the 
cluster URL?
----
2018-07-04 15:43:48 UTC - Tomer Lev: I guess - where can this configuration be 
found?
----
2018-07-04 15:44:11 UTC - Matteo Merli: `pulsar-admin clusters get my-cluster`
----
2018-07-04 15:45:02 UTC - Tomer Lev: yes it's there
----
2018-07-04 15:45:27 UTC - Matteo Merli: Ok, now I remember how that worked. The 
collector gets the list of clusters; for each cluster it then gets the 
serviceUrl so it can fetch the list of brokers, and then it collects stats for 
each broker
----
2018-07-04 15:46:00 UTC - Tomer Lev: I see, so I have to let the Docker 
container resolve that address, right?
----
2018-07-04 15:47:08 UTC - Matteo Merli: Yes, as a workaround you can update the 
cluster metadata to point to a single IP: 

`bin/pulsar-admin clusters update … `
----
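Spelled out, that workaround might look like this, assuming the cluster is
named `my-cluster` and `10.0.0.11` is a placeholder for one broker's reachable
IP (default ports 8080 for HTTP and 6650 for the binary protocol):

```shell
# Point the cluster's service URLs at a single reachable IP so the
# dashboard collector (running inside Docker) no longer has to resolve
# the unreachable DNS name.
bin/pulsar-admin clusters update my-cluster \
  --url http://10.0.0.11:8080 \
  --broker-url pulsar://10.0.0.11:6650
```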
2018-07-04 15:48:24 UTC - Tomer Lev: and will it be able to fetch all brokers' 
data if I do so?
----
2018-07-04 15:48:39 UTC - Tomer Lev: or just the single one ?
----
2018-07-04 15:49:07 UTC - Matteo Merli: all of them, unless that IP is not 
reachable..
----
2018-07-04 15:50:29 UTC - Tomer Lev: sure. ok thanks i'll try that
----
2018-07-04 16:04:09 UTC - Tomer Lev: Thank you it works now 
:slightly_smiling_face:
+1 : Matteo Merli
----
2018-07-04 16:46:51 UTC - Daniel Ferreira Jorge: Hi, I have a consumer (java) 
with an exclusive subscription consuming, let's say 100 messages, from a 
non-partitioned topic through a message listener. So, the messages are 
guaranteed to be processed in order, from 1 to 100. What is the behavior of the 
consumer if by any reason there is a failure (no ack is sent within the 
ackTimeout) while consuming message 50? Will it just stall? Will it keep 
retrying?
----
2018-07-04 20:38:58 UTC - Justin Case: @Justin Case has joined the channel
----
2018-07-04 20:49:19 UTC - Justin Case: Hello everyone! Pulsar noobie here. I 
set up a small cluster in k8s using the docs, and for testing I thought of 
porting an app from Kafka using the provided wrapper. But I wasn't able to: it 
turned out one component has a class that `implements Consumer`, a public class 
that isn't part of the Kafka wrapper but comes from Kafka and Pulsar 
themselves, and I get compilation errors because of some overrides of methods 
that don't exist in Pulsar, and also a couple of virtual methods that are 
specific to the Pulsar consumer class but are missing from my implementation. 
My question is: is there any chance I can quickly port this app over for 
testing using the Kafka wrapper? Or do I have no choice but to do an actual 
port from Kafka to Pulsar (which would actually mean finding some other app 
for testing and benchmarking)?
----
2018-07-04 20:55:38 UTC - Daniel Ferreira Jorge: Hi, how can I guarantee that 
the next message will not be sent to a consumer unless the previous message was 
acked? (exclusive subscription)
----
2018-07-04 20:57:29 UTC - Sijie Guo: > But wasn’t able to, as it turned out 
one component has a class that `implements Consumer`

the pulsar kafka wrapper should implement Consumer already. 
that’s a bit strange. 

> because of some overrides of methods that don’t exist in Pulsar and also 
a couple of virtual methods that are specific to the Pulsar consumer class

the pulsar kafka wrapper is based on kafka 0.10.2.1. if your application is 
using a newer kafka version, it's possible that some methods introduced in 
newer kafka haven't been implemented in the pulsar/kafka wrapper yet.
----
2018-07-04 20:59:40 UTC - Sijie Guo: @Daniel Ferreira Jorge: 

> how can I guarantee that the next message will not be sent to a consumer 
unless the previous message was acked

it is simple: when you are using receiveAsync, you can chain the callbacks of 
receiveAsync and ack. this should achieve what you need.

if you are using a MessageListener, you can set receiverQueueSize on your 
consumer to 1; this would simulate the same behavior
----
2018-07-04 21:02:46 UTC - Daniel Ferreira Jorge: I will try that @Sijie Guo 
thanks
----
2018-07-04 22:07:11 UTC - Beast in Black: Hi all (and hello @Sijie Guo :smile: 
) For the namespace retention policy REST API, how do I specify the parameters? 
Is it sufficient to just provide a JSON data object to the `POST 
/admin/persistent/{tenant}/{namespace}/persistence` URL, like 
`{"retentionTimeInMinutes": 10, "retentionSizeInMB": 0}` ?
----
2018-07-05 02:46:48 UTC - Matteo Merli: @Daniel Ferreira Jorge to completely 
disable prefetching in the consumer you should set the receiver queue size to 
0. That will make the consumer ask the broker for the next message only when 
the app is ready to process it
----
2018-07-05 02:49:35 UTC - Matteo Merli: @Beast in Black yes, though the path 
should be /admin/v2/namespaces/{tenant}/{namespace}/.../
----
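Putting Matteo's path correction together with the original question, the call
might look like the sketch below. Note this assumes the retention policy lives
at the `/retention` endpoint (the original message pointed at `/persistence`,
which is a different policy); host, tenant, and namespace are placeholders:

```shell
# Set the namespace retention policy via the v2 admin REST API.
curl -X POST \
  http://localhost:8080/admin/v2/namespaces/my-tenant/my-ns/retention \
  -H 'Content-Type: application/json' \
  -d '{"retentionTimeInMinutes": 10, "retentionSizeInMB": 0}'
```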
2018-07-05 07:58:54 UTC - Justin Case: sorry I dropped off yesterday
> the pulsar kafka wrapper should implement Consumer already
yes, and so does the app I'm trying to port, unfortunately; I'm hoping the two 
Consumer classes (provided by Pulsar and Kafka, respectively) have similar 
enough interfaces that if I iron out the compilation errors, the app won't 
crash and burn functionality-wise
----
