2018-12-07 09:22:55 UTC - Maarten Tielemans: @Maarten Tielemans has joined the 
channel
----
2018-12-07 09:55:28 UTC - Maarten Tielemans: Hi All, somewhat similar question 
to above.

Running Apache Pulsar (currently) in standalone mode on an i3.large node.
I have formatted (as xfs) and mounted the NVMe disk. I do not have separate 
disks for journal and ledger (which one should be the NVMe?)

At times I see a lot of variance and huge spikes in tail latency. What would be the best way to debug this? I need lower overall and tail latency.
Furthermore, if I run Kafka standalone on the same instance/disk, my average latency is around 1 ms. As far as I understand, Pulsar standalone does not have the fsync requirement. Are there settings I should fine-tune?

Some extracts from the pulsar-perf log:
----
2018-12-07 09:55:37 UTC - Maarten Tielemans: 
----
2018-12-07 10:10:07 UTC - Sijie Guo: @Maarten Tielemans: since your disk is 
mounted in /mnt/data, you need to point the standalone to use this directory 
`bin/pulsar standalone --bookkeeper-dir /mnt/data/bookkeeper`. also I would 
suggest not putting the binary distribution under /mnt/data
----
2018-12-07 10:13:09 UTC - Sijie Guo: (actually, I think it might already be using the disk, since you put the binary in /mnt/data)
----
2018-12-07 10:19:52 UTC - Maarten Tielemans: @Sijie Guo made the changes you suggested, but they don't seem to have a noticeable impact; still averaging around 3.x ms, the 99.9th percentile around 10 ms, and spikes in the 99.99th.
----
2018-12-07 10:30:51 UTC - Sijie Guo: @Maarten Tielemans: I see. The bookie journal group commit is set to 1 ms; since you are sending 60 msg/second, almost every message waits about 1 ms at the storage layer to commit, and there is a 1 ms batching window on the client side as well.

I would suggest you test with 10,000 messages/s and 100,000 messages/s (basically changing your rate)
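For example, with pulsar-perf that would be roughly the following (flags from memory; the topic name and message size are placeholders):
```
bin/pulsar-perf produce -r 10000 -s 1024 my-test-topic
bin/pulsar-perf produce -r 100000 -s 1024 my-test-topic
```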
----
2018-12-07 10:32:12 UTC - Maarten Tielemans: @Sijie Guo is there a way to 
change that behaviour? The message rates and sizes I use are closer to my real 
use case
----
2018-12-07 10:53:26 UTC - Sijie Guo: can you add  
`journalFlushWhenQueueEmpty=true` to `conf/standalone.conf`?

and add `--batch-time-window 0` to the command you used for running pulsar-perf
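i.e. roughly this (the first line goes into the standalone config; the rate, size, and topic are placeholders):
```
# conf/standalone.conf
journalFlushWhenQueueEmpty=true

# pulsar-perf run with the client-side batching window disabled
bin/pulsar-perf produce -r 60 -s 1024 --batch-time-window 0 my-test-topic
```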
----
2018-12-07 10:53:32 UTC - Sijie Guo: @Maarten Tielemans ^
----
2018-12-07 10:57:50 UTC - David Tinker: Does anyone have any idea why the ProducerImpl from the Java client (2.2.0) would get stuck in some kind of loop after creation but before any messages have been published? These two debug lines are repeated endlessly, and calls to producer.send(..) block indefinitely.
----
2018-12-07 10:57:59 UTC - David Tinker: 2018-12-07 12:52:16,776 [,] 
[pulsar-timer-4-1] DEBUG impl.ProducerImpl - 
[<persistent://public/feed-mentions/TWITTER_SAMPLE>] [sprout@porker] Batching 
the messages from the batch container from timer thread
2018-12-07 12:52:16,776 [,] [pulsar-timer-4-1] DEBUG impl.ProducerImpl - 
[<persistent://public/feed-mentions/TWITTER_SAMPLE>] [sprout@porker] Batching 
the messages from the batch container with 0 messages
----
2018-12-07 10:58:59 UTC - David Tinker: Exactly the same code is running fine 
in some other services.
----
2018-12-07 11:01:52 UTC - Sijie Guo: @David Tinker I haven't heard of anyone seeing this issue. One possibility (just guessing here): a dependency conflict with one of your other dependencies is causing a RuntimeException.
----
2018-12-07 11:38:48 UTC - David Tinker: Looks like it's normal behaviour for that code to run every batchingMaxPublishDelayMicros.
----
2018-12-07 12:19:00 UTC - David Tinker: I had a clash between 
`net.jpountz.lz4:lz4:1.2.0` and `org.lz4:lz4-java:1.5.0`. Maybe ProducerImpl 
should log failures in the `batchMessageAndSend catch (Throwable) { .. }` as 
errors instead of warnings? I would have found this quicker 
:slightly_smiling_face:
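(For anyone hitting this later: assuming a Maven build, something like the following shows where each lz4 artifact comes from, so you can exclude the copy you don't want.)
```
mvn dependency:tree -Dincludes=net.jpountz.lz4,org.lz4
```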
----
2018-12-07 12:21:08 UTC - Sijie Guo: yes that’s probably a good idea to catch 
throwable
----
2018-12-07 12:22:30 UTC - David Tinker: It does catch Throwable but only logs 
it as a warning so I missed it. When I turned on debugging it got drowned out. 
I should have had at least warnings on originally I suppose.
----
2018-12-07 12:23:10 UTC - Sijie Guo: oh I see I misread your comment 
:slightly_smiling_face:
----
2018-12-07 12:23:15 UTC - Sijie Guo: yes error is probably better
----
2018-12-07 12:39:46 UTC - Maarten Tielemans: @Sijie Guo the changes you mention bring the mean/median latency down to around 1.5-2 ms, so that is a noticeable improvement. Tail/max latency still spikes at times.
----
2018-12-07 12:40:20 UTC - Maarten Tielemans: I assume that if I were to run two 
bookkeepers, and I use ack = 1, that would help my tail latency?
----
2018-12-07 12:41:10 UTC - Sijie Guo: yes
----
2018-12-07 12:48:57 UTC - Maarten Tielemans: is there value in splitting 
journal and ledger to two separate disks? can I do that with the standalone?
----
2018-12-07 13:01:51 UTC - jia zhai: Here is the related config, it should work 
with standalone
```
ledgerDirectories=data/bookkeeper/ledgers
journalDirectory=data/bookkeeper/journal
```
----
2018-12-07 13:41:57 UTC - Maarten Tielemans: I am not sure if I am setting the above config correctly. Can I add it to `conf/standalone.conf`? Asking because it does not seem to have any effect
----
2018-12-07 13:47:09 UTC - Sijie Guo: 1) I don't think those settings take effect in standalone mode. Standalone mode takes the directory setting from `--bookkeeper-dir`.

2) multiple disks only change throughput; they don't change latency.

3) for tail or max latency, I would suggest setting up Prometheus and Grafana, so you can get metrics to understand where the latency comes from.
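A minimal Prometheus scrape config would look roughly like this (the host/port are assumptions for a standalone on the default web service port; Pulsar exposes Prometheus-format metrics under /metrics):
```
scrape_configs:
  - job_name: pulsar
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]
```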

----
2018-12-07 13:52:15 UTC - Maarten Tielemans: @Sijie Guo thanks for the feedback
----
2018-12-07 16:49:33 UTC - Melvin: @Melvin has joined the channel
----
2018-12-07 17:07:45 UTC - Melvin: Hi I am using the pulsar python client and 
have trouble subscribing. I am seeing these errors. Do any of you know what's 
going on?
```2018-12-07 12:05:45.866 ERROR ClientConnection:709 | [sourceip:57139 -> destip:6651] Failed partition-metadata lookup req_id: 1 error: 4
2018-12-07 12:05:45.866 ERROR ClientImpl:374 | Error Checking/Getting Partition Metadata while Subscribing- 5```
----
2018-12-07 17:26:47 UTC - Sanjeev Kulkarni: what does `pulsar-admin topics stats <topicname>` tell you?
----
2018-12-07 19:46:17 UTC - Beast in Black: @Melvin I have this exact problem 
when using the pulsar CPP client as well. If you check the broker logs you will 
see something like the following (even if you are not using partitioned topics):
```
20:39:38.028 [pulsar-io-21-2] ERROR org.apache.pulsar.broker.web.PulsarWebResource - Policies not found for <YOUR_NAMESPACE> namespace
20:39:38.033 [pulsar-io-21-2] WARN  org.apache.pulsar.broker.service.ServerCnx - Failed to get Partitioned Metadata [/100.96.1.12:50564] <YOUR_NAMESPACE/YOUR_TOPIC>: Policies not found for <YOUR_NAMESPACE> namespace
org.apache.pulsar.broker.web.RestException: Policies not found for <YOUR_NAMESPACE> namespace
<JAVA_EXCEPTION_FOLLOWS>
```

So far I have not received any responses here, nor been able to figure this out, especially since I am not using partitioned topics or explicit partitioning.

I did see the following in the Pulsar CPP client test source code and implemented it in my own code, but it does not seem to solve the subscriber failure:
```
    // handling dangling subscriptions
    consumer.unsubscribe();
    client.subscribe(topicName, subName, consumer);
```

One thing I have noticed is that it consistently reports these errors/failures 
when a namespace is created and you immediately try to subscribe to it. 
However, I have seen these errors occur sporadically even after an initial 
subscribe succeeds (I have the subscription in a retry thread).
----
2018-12-07 20:38:50 UTC - David Kjerrumgaard: @Melvin @Beast in Black What does `pulsar-admin namespaces permissions <YOUR-TENANT>/<YOUR-NAMESPACE>` return?
----
2018-12-07 20:44:18 UTC - David Kjerrumgaard: what does `pulsar-admin topics stats <topicname>` tell you?
----
2018-12-07 20:45:23 UTC - David Kjerrumgaard: what does `pulsar-admin topics stats-internal <topicname>` return?
----
2018-12-07 20:49:16 UTC - Beast in Black: @David Kjerrumgaard I get an empty 
map for each of my namespaces: `{ }`
----
2018-12-07 20:51:52 UTC - Beast in Black: @David Kjerrumgaard stats and 
stats-internal both return the same thing:
```
Policies not found for <MYTENANT/MYNAMESPACE> namespace

Reason: Policies not found for <MYTENANT/MYNAMESPACE> namespace
command terminated with exit code 1
```
----
2018-12-07 20:55:27 UTC - David Kjerrumgaard: and the host at 100.96.1.12:50564 
is up and reachable by your Pulsar broker?
----
2018-12-07 21:01:09 UTC - Beast in Black: @David Kjerrumgaard yep
----
2018-12-07 21:02:10 UTC - Beast in Black: I see that it does work, but every so 
often I get this problem and then my subscribers have conniptions
----
2018-12-07 21:08:09 UTC - Beast in Black: @David Kjerrumgaard and I checked 
that my tenants have the admin role set. here's what `tenants get` shows me:
```
{
  "adminRoles" : [ "admin" ],
  "allowedClusters" : [&lt;MY_LIST_OF_REPLICATION_CLUSTERS&gt; ]
}
```
----
2018-12-07 21:13:48 UTC - Beast in Black: @David Kjerrumgaard do I need to 
explicitly set permissions on the namespace for the `"admin"` user set above in 
the `tenants get` call result?
----
2018-12-07 21:17:07 UTC - David Kjerrumgaard: You should, but I don't think it 
will solve the issue you are experiencing.
----
2018-12-07 21:32:47 UTC - Beast in Black: @David Kjerrumgaard OK, I'll change 
my code to explicitly set permissions on the namespaces. I'll have to do this 
via a REST call to the admin interface from my C++ app since the pulsar CPP 
client sources do not seem to have any way to do this. I checked the 
documentation at the pulsar REST API docs but it does not specify what the 
contents of the `POST` payload for the call to 
`/admin/v2/namespaces/:tenant/:namespace/permissions/:role` ought to be - could 
you point me to where I could find that info or an example?

Also, I am willing to experiment with explicitly setting a reasonable default 
policy on the namespace - are there any examples/docs showing how to do this?
----
2018-12-07 21:37:01 UTC - David Kjerrumgaard: @Beast in Black Can we sync in 30 
minutes?
----
2018-12-07 21:39:22 UTC - Beast in Black: @David Kjerrumgaard sure, no problem, 
and thank you
----
2018-12-07 21:57:21 UTC - Mike Card: @Matteo Merli well, I went in and added the following to my userdata:

```
          # Set up nvme disks
          sudo mkdir /mnt/bookie-journal
          sudo mkdir /mnt/bookie-ledger
          sudo mkfs.ext4 -E nodiscard /dev/nvme0n1
          sudo mount -o discard /dev/nvme0n1 /mnt/bookie-journal
          sudo mkfs.ext4 -E nodiscard /dev/nvme1n1
          sudo mount -o discard /dev/nvme1n1 /mnt/bookie-ledger
```

<SNIP>

```
          # configure Bookkeeper
          cat <<EOF >> conf/bookkeeper.conf
          
zkServers=zookeeper-1.${ZookeeperEnsemble}:2181,zookeeper-2.${ZookeeperEnsemble}:2181,zookeeper-3.${ZookeeperEnsemble}:2181
          journalDirectory=/mnt/bookie-journal
          ledgerDirectory=/mnt/bookie-ledger
          EOF
```
----
2018-12-07 21:58:17 UTC - Mike Card: Now, on one of the Pulsar instances, when I examine the contents of bookkeeper.conf I see this at the end:

```
# Other RocksDB specific tunables
dbStorage_rocksDB_writeBufferSizeMB=64
dbStorage_rocksDB_sstSizeInMB=64
dbStorage_rocksDB_blockSize=65536
dbStorage_rocksDB_bloomFilterBitsPerKey=10
dbStorage_rocksDB_numLevels=-1
dbStorage_rocksDB_numFilesInLevel0=4
dbStorage_rocksDB_maxSizeInLevel1MB=256
zkServers=zookeeper-1.emo3:2181,zookeeper-2.emo3:2181,zookeeper-3.emo3:2181
journalDirectory=/mnt/bookie-journal
ledgerDirectory=/mnt/bookie-ledger
```
----
2018-12-07 21:59:02 UTC - Mike Card: And if I list the contents of these dirs I 
see:

```
[mcard@pulsar-0d712b2f6d8e3259e conf]$ ls -la /mnt/bookie*
/mnt/bookie-journal:
total 20
drwxr-xr-x 3 root root  4096 Dec  7 21:37 .
drwxr-xr-x 5 root root    63 Dec  7 21:37 ..
drwx------ 2 root root 16384 Dec  7 21:37 lost+found

/mnt/bookie-ledger:
total 20
drwxr-xr-x 3 root root  4096 Dec  7 21:37 .
drwxr-xr-x 5 root root    63 Dec  7 21:37 ..
drwx------ 2 root root 16384 Dec  7 21:37 lost+found
```
----
2018-12-07 21:59:49 UTC - Mike Card: All of this looks OK, but the behavior is identical to yesterday. There is no I/O to the NVMe disks; it is all going to the local disk, so latency is terrible, etc.
----
2018-12-07 21:59:58 UTC - David Kjerrumgaard: @Beast in Black  
<http://pulsar.apache.org/en/admin-rest-api/#operation/grantPermissionOnNamespace>
----
2018-12-07 22:01:18 UTC - Mike Card: Oh I just saw I should've used 
ledgerDirectories!
----
2018-12-07 22:01:38 UTC - Mike Card: Saw that in the original conf; will fix that and see if it fixes this
----
2018-12-07 22:03:38 UTC - Matteo Merli: Yes
----
2018-12-07 22:04:22 UTC - Matteo Merli: Also, side note: xfs is generally preferred to ext4 (in terms of latency); the difference is not huge but noticeable, especially under load
----
2018-12-07 22:07:48 UTC - Mike Card: oh OK will switch that too
----
2018-12-07 22:08:04 UTC - Mike Card: Otherwise do the userdata contents look 
right?
----
2018-12-07 22:11:59 UTC - Beast in Black: @David Kjerrumgaard thank you. Would 
the contents of the `POST` payload be something like `"admin": [ "produce",  
"consume"]` ? Is there a list of operations available? I basically want to be 
able to grant permissions for all operations, something like `"admin": [ "*" ]`
----
2018-12-07 22:15:26 UTC - David Kjerrumgaard: The REST API doesn't have the 
ns-isolation-policies listed as an option. So you may have to use the admin CLI 
instead. I am guessing that your goal is to automate this entire process, and 
that using the CLI would not be a viable solution for this. However, we should 
try setting this via the CLI just to confirm that it is indeed the fix we are 
looking for.  Once we confirm that, we can work on automating it.
----
2018-12-07 22:15:27 UTC - David Kjerrumgaard: 
<http://pulsar.apache.org/docs/en/pulsar-admin/#ns-isolation-policy>
----
2018-12-07 22:21:22 UTC - Beast in Black: @David Kjerrumgaard yes, the goal is 
indeed to be able to dynamically create tenants/namespaces/topics and perform 
pub/sub from the C++ app. I don't have a K8s cluster right now to test the CLI 
(we use K8s on AWS and our app is a C++ app running in a pod, with the pulsar 
admin pod being separate) but the next time I deploy (later tonight) I will 
issue a CLI command to explicitly set the NS policy and see if it makes the 
partition metadata/policy errors go away.

I am also wondering if there is some way to specify a default policy - perhaps 
in the broker config - to be applied for each namespace that gets created by 
the app pod...that way, we can avoid having to muck about in the app itself
----
2018-12-07 22:27:19 UTC - David Kjerrumgaard: @Beast in Black <http://pulsar.apache.org/en/admin-rest-api/#operation/grantPermissionOnNamespace> is the API for the permissions, and you would want to specify: tenant=, namespace=, role="admin", actions=produce
----
2018-12-07 22:27:28 UTC - David Kjerrumgaard: tenant=, namespace=, 
role="admin", actions = consume
----
2018-12-07 22:27:47 UTC - David Kjerrumgaard: ^^^ Just make two calls with 
different actions to achieve the same result...
----
2018-12-07 22:28:08 UTC - David Kjerrumgaard: since I am not 100% sure how it 
would handle an array of values
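For reference, the call shape would be roughly this (the broker host, tenant, and namespace are placeholders, and whether one array with both actions works, or two single-action calls are needed, is exactly the open question above):
```
curl -X POST \
  http://BROKER_HOST:8080/admin/v2/namespaces/MY_TENANT/MY_NAMESPACE/permissions/admin \
  -H "Content-Type: application/json" \
  -d '["produce", "consume"]'
```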
----
2018-12-07 22:28:51 UTC - David Kjerrumgaard: @Beast in Black I will get back to you on the default policy settings, as I don't have an answer at the moment.
----
2018-12-07 22:34:21 UTC - Beast in Black: @David Kjerrumgaard no problem, and 
thank you so much for your help and patience. I will try the CLI option for the 
policy settings and report back here on my findings (I'll ping you so you can 
find the message). Thanks again!
----
2018-12-07 23:11:07 UTC - Matteo Merli: I mean, the 2 directories are empty 
:slightly_smiling_face:
----
2018-12-07 23:11:21 UTC - Matteo Merli: so it’s definitely not writing there
----
2018-12-07 23:14:05 UTC - Mike Card: Well, I made the fixes; now I see this:
```
[mcard@pulsar-096813e6006a55e68 conf]$ ls -la /mnt/bookie*
/mnt/bookie-journal:
total 0
drwxr-xr-x 2 root root  6 Dec  7 22:55 .
drwxr-xr-x 5 root root 63 Dec  7 22:55 ..

/mnt/bookie-ledger:
total 0
drwxr-xr-x 3 root root  21 Dec  7 23:08 .
drwxr-xr-x 5 root root  63 Dec  7 22:55 ..
drwxr-xr-x 4 root root 109 Dec  7 23:08 current
```
----
2018-12-07 23:14:24 UTC - Mike Card: ```
[mcard@pulsar-096813e6006a55e68 conf]$ ls -la /mnt/bookie-ledger/current/
total 16
drwxr-xr-x 4 root root  109 Dec  7 23:08 .
drwxr-xr-x 3 root root   21 Dec  7 23:08 ..
-rw-r--r-- 1 root root 1024 Dec  7 23:08 0.log
-rw-r--r-- 1 root root    0 Dec  7 23:08 1.log
-rw-r--r-- 1 root root    2 Dec  7 23:08 lastId
-rw-r--r-- 1 root root   16 Dec  7 23:08 lastMark
drwxr-xr-x 2 root root  123 Dec  7 23:08 ledgers
drwxr-xr-x 2 root root  123 Dec  7 23:08 locations
-rw-r--r-- 1 root root  185 Dec  7 23:08 VERSION
```
----
2018-12-07 23:14:36 UTC - Mike Card: so it looks like the ledger is being written to the expected place
----
2018-12-07 23:14:42 UTC - Matteo Merli: looks better now
----
2018-12-07 23:14:49 UTC - Matteo Merli: did the latency change?
----
2018-12-07 23:14:51 UTC - Mike Card: however performance is still really bad
----
2018-12-07 23:14:55 UTC - Mike Card: no
----
2018-12-07 23:15:21 UTC - Mike Card: looks like all I/O is still going to the local disk, but that can't be right given the directory contents
----
2018-12-07 23:15:41 UTC - Matteo Merli: uhm, there’s something else then. In 
this setup it should be ~2 millis at 99pct
----
2018-12-07 23:21:14 UTC - Mike Card: 
----
2018-12-07 23:21:25 UTC - Mike Card: That is bookkeeper.conf
----
2018-12-07 23:22:25 UTC - Mike Card: 
----
2018-12-07 23:22:34 UTC - Mike Card: and that is broker.conf
----
2018-12-07 23:22:48 UTC - Mike Card: Those are the configs running up in AWS on 
i3.4xlarge instances
----
2018-12-07 23:26:42 UTC - Matteo Merli: journalDirectory is defined at the 
beginning of the file as well
----
2018-12-07 23:27:05 UTC - Matteo Merli: same for `ledgerDirectories` at line 351
----
2018-12-07 23:27:36 UTC - Mike Card: Yeah, I was appending to the default configs; doesn't the second entry override the first?
----
2018-12-07 23:27:57 UTC - Matteo Merli: I don’t think so :slightly_smiling_face:
----
2018-12-07 23:29:01 UTC - Mike Card: Hmm thought it did, esp. since it seems to 
be writing the ledger in the right place. I'll remove the other entries and see 
if that matters
----
2018-12-07 23:55:16 UTC - Mike Card: Now would it be bad to have an ELB as the 
single URL for the pulsar cluster in the call to set up the ZK metadata, to wit:

pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --zookeeper zookeeper-1.emo3:2181,zookeeper-2.emo3:2181,zookeeper-3.emo3:2181 \
  --configuration-store zookeeper-1.emo3:2181,zookeeper-2.emo3:2181,zookeeper-3.emo3:2181 \
  --web-service-url http://emodb-pulsar-databus-pulsar-cluster-1.qa.us-east-1.nexus.bazaarvoice.com:8080 \
  --web-service-url-tls https://emodb-pulsar-databus-pulsar-cluster-1.qa.us-east-1.nexus.bazaarvoice.com:8443 \
  --broker-service-url pulsar://emodb-pulsar-databus-pulsar-cluster-1.qa.us-east-1.nexus.bazaarvoice.com:6650 \
  --broker-service-url-tls pulsar+ssl://emodb-pulsar-databus-pulsar-cluster-1.qa.us-east-1.nexus.bazaarvoice.com:6651

where emodb-pulsar-databus-pulsar-cluster-1.qa.us-east-1.nexus.bazaarvoice.com points to a load balancer? Would it be better to make these parameters a comma-separated list of the pulsar instances, e.g.

pulsar initialize-cluster-metadata \
  --cluster pulsar-cluster-1 \
  --zookeeper zookeeper-1.emo3:2181,zookeeper-2.emo3:2181,zookeeper-3.emo3:2181 \
  --configuration-store zookeeper-1.emo3:2181,zookeeper-2.emo3:2181,zookeeper-3.emo3:2181 \
  --web-service-url http://pulsar1:8080,http://pulsar2:8080,http://pulsar3:8080 \
  --web-service-url-tls https://pulsar1:8443,https://pulsar2:8443,https://pulsar3:8443 \
  --broker-service-url pulsar://pulsar1:6650,pulsar://pulsar2:6650,pulsar://pulsar3:6650 \
  --broker-service-url-tls pulsar+ssl://pulsar1:6651,pulsar+ssl://pulsar2:6651,pulsar+ssl://pulsar3:6651
----
2018-12-08 00:12:01 UTC - Mike Card: Well, results are better this time, still not great though
----
2018-12-08 00:13:00 UTC - Mike Card: 
----
2018-12-08 00:44:28 UTC - Beast in Black: Does anyone have an example of 
setting `ns-isolation-policy` via the apache pulsar command line? I looked at 
the docs and the java admin source code/tests but wasn't able to find anything 
that I could use...
----
2018-12-08 00:46:24 UTC - Beast in Black: To elaborate, any help with what the 
`--auto-failover-policy-params` and `--auto-failover-policy-type` params 
accept/expect would be greatly appreciated
----
2018-12-08 02:32:09 UTC - Mike Card: OK @Matteo Merli I went through those confs and found that zkServers was defined twice too, so I commented out the first one. I think that made the difference; latency is now running steady at around 4 ms
----
2018-12-08 03:33:06 UTC - Samuel Sun: Hi, is there any command to stop a bookie (running in the background) gracefully?
----
2018-12-08 04:28:32 UTC - Matteo Merli: @Samuel Sun if you started with 
`pulsar-daemon start bookie` you can do with `pulsar-daemon stop bookie`. 

In any case, sending SIGTERM will trigger the graceful shutdown 
----
2018-12-08 04:29:34 UTC - Matteo Merli: :+1: yes, these numbers make more sense! :)
----
2018-12-08 04:56:57 UTC - Samuel Sun: got it, thanks
----
2018-12-08 09:05:28 UTC - Christophe Bornet: @Beast in Black I'm also testing 
namespace isolation. To set a policy I used `bin/pulsar-admin --admin-url 
<http://pulsar1-eu:8080> ns-isolation-policy set test ns-is-policy1 
--auto-failover-policy-params min_limit=0,usage_threshold=100 
--auto-failover-policy-type min_available --namespaces public/default 
--secondary pulsar*-us --primary pulsar*-eu`
----
2018-12-08 09:06:26 UTC - Christophe Bornet: Don't hesitate to share the results of your tests. I'll share mine.
----
2018-12-08 09:07:55 UTC - Christophe Bornet: Note that min_available is currently the only accepted policy type
----
2018-12-08 09:09:19 UTC - Christophe Bornet: And providing min_limit and 
usage_threshold is mandatory for this policy
----
