2019-09-18 11:04:07 UTC - Retardust: Hi, how does the netty source guarantee ordering 
for a topic?
E.g. I have one topic and multiple brokers with the netty HTTP source.
If I send messages synchronously to any of them (behind an HTTP proxy), do I lose 
ordering guarantees or not?
----
2019-09-18 13:21:01 UTC - Cory Davenport: Understood on the offloading. Makes 
sense, I was mainly just curious.
----
2019-09-18 13:22:27 UTC - Cory Davenport: Would the TTL override the whole 
"only deleted once consumed by all subscriptions" behavior?
----
2019-09-18 13:47:55 UTC - Tarek Shaar: @Ali Ahmed thanks for sharing the 
presentation.
----
2019-09-18 14:22:25 UTC - Sijie Guo: I don’t think there is any ordering 
guarantee in this case.
----
2019-09-18 14:23:46 UTC - Retardust: Even if there is only one producer, 
right?
----
2019-09-18 14:24:01 UTC - Retardust: So how could I use the netty connector to 
preserve ordering guarantees?
----
2019-09-18 14:27:20 UTC - Sijie Guo: > Even if there is only one producer, right?

Correct, because messages sent to different netty source instances can end 
up in different partitions.

You can achieve a certain ordering guarantee when there is only one partition 
and one client sending requests to the netty source synchronously.

--

We could potentially improve the ordering guarantee by adding keys to the netty 
source, so that requests with the same key are produced to the same partition.
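
For illustration only (this is not something the netty source does today), here is a minimal sketch with the Pulsar Python client showing how a partition key keeps same-key messages on one partition; the service URL and topic name are placeholders:
```
import pulsar

# Sketch: with a plain Pulsar producer, messages sharing a partition_key are
# routed to the same partition, so their relative order is preserved. A
# key-aware netty source could do the same thing.
client = pulsar.Client('pulsar://localhost:6650')
producer = client.create_producer('persistent://public/default/my-partitioned-topic')

for i in range(10):
    # All of these share the key 'device-42', so they land on one partition.
    producer.send(('event-%d' % i).encode('utf-8'), partition_key='device-42')

client.close()
```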
----
2019-09-18 14:34:28 UTC - Retardust: so different netty instances have 
different producer IDs and will produce messages concurrently, right?
----
2019-09-18 14:44:41 UTC - Sijie Guo: yes
----
2019-09-18 16:39:50 UTC - Florentin Dubois: Hello, has anyone already run into 
this issue <https://github.com/apache/pulsar/issues/5216>?
----
2019-09-18 16:54:13 UTC - Sijie Guo: I don’t think we support creating a 
subscription for a non-persistent topic, because all subscriptions for 
non-persistent topics are non-durable.
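
For context, a consumer can still receive from a non-persistent topic by subscribing directly; the subscription only exists while the consumer is connected. A minimal sketch with the Python client, where the topic and subscription names are placeholders:
```
import pulsar

# Sketch: subscribing directly creates a non-durable subscription that exists
# only while this consumer is connected, which is why pre-creating one via the
# admin API does not apply.
client = pulsar.Client('pulsar://localhost:6650')
consumer = client.subscribe('non-persistent://public/default/np-topic', 'my-sub')

# receive() raises an exception if nothing arrives within the timeout.
msg = consumer.receive(timeout_millis=5000)
consumer.acknowledge(msg)
client.close()
```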
----
2019-09-18 16:54:23 UTC - Sijie Guo: The error message should probably be 
improved.
----
2019-09-18 17:12:57 UTC - Nicolas Ha: Just wanted to say thank you for making 
the pulsar.admin available in Java :smile: it is very convenient
+1 : David Kjerrumgaard, Sijie Guo
----
2019-09-18 17:16:14 UTC - Florentin Dubois: Thanks Sijie, I will make a pull 
request to improve it :wink:
----
2019-09-18 17:53:56 UTC - Axel Barfod: Question: do Pulsar subscriptions expire 
after some time without receiving messages?
----
2019-09-18 18:10:13 UTC - aliafsar: @aliafsar has joined the channel
----
2019-09-18 18:12:20 UTC - Ali Afsar: @Ali Afsar has joined the channel
----
2019-09-18 19:03:41 UTC - Tilden: Hi, while running Apache Pulsar in an OpenShift 
environment, I am getting the error below. I think the Pulsar image is built to run as 
the root user, but OpenShift blocks running as root by default. Can anyone please 
suggest how to run it, or is there a specific Pulsar image for OpenShift?

September 18th 2019, 21:55:28.774       zookeeper       [conf/pulsar_env.sh] 
Applying config PULSAR_MEM = "-Xms2g -Xmx2g -Dcom.sun.management.jmxremote 
-Djute.maxbuffer=10485760 -XX:+ParallelRefProcEnabled 
-XX:+UnlockExperimentalVMOptions -XX:+AggressiveOpts -XX:+DoEscapeAnalysis 
-XX:+DisableExplicitGC -XX:+PerfDisableSharedMem -Dzookeeper.forceSync=no"
September 18th 2019, 21:55:28.774       zookeeper         File 
"bin/apply-config-from-env.py", line 73, in &lt;module&gt;
September 18th 2019, 21:55:28.774       zookeeper           f = 
open(conf_filename, 'w')
September 18th 2019, 21:55:28.774       zookeeper       IOError: [Errno 13] 
Permission denied: 'conf/pulsar_env.sh'
September 18th 2019, 21:55:28.773       zookeeper       [conf/pulsar_env.sh] 
Applying config PULSAR_GC = "-XX:+UseG1GC -XX:MaxGCPauseMillis=10"
September 18th 2019, 21:55:28.773       zookeeper       Traceback (most recent 
call last):
September 18th 2019, 21:50:15.791       zookeeper       Traceback (most recent 
call last):
September 18th 2019, 21:50:15.791       zookeeper         File 
"bin/apply-config-from-env.py", line 73, in &lt;module&gt;
September 18th 2019, 21:50:15.791       zookeeper           f = 
open(conf_filename, 'w')
September 18th 2019, 21:50:15.791       zookeeper       IOError: [Errno 13] 
Permission denied: 'conf/pulsar_env.sh'
----
2019-09-18 19:10:14 UTC - Poule: Anyone using macOS Catalina?
----
2019-09-18 19:11:03 UTC - Ali Ahmed: catalina has not been released yet
----
2019-09-18 19:12:41 UTC - Poule: well anyone using Catalina Beta 8?
----
2019-09-18 19:15:07 UTC - Poule: I have segfaults on it
```
~ python -c 'from pulsar.schema import Record'
[1]    28851 segmentation fault  python -c 'from pulsar.schema import Record'
```
----
2019-09-18 19:15:29 UTC - Ali Ahmed: what version is this ?
----
2019-09-18 19:15:49 UTC - Poule: 2.4.0
----
2019-09-18 19:16:17 UTC - Ali Ahmed: can you open an issue I will take a look 
at it
----
2019-09-18 19:16:28 UTC - Poule: ok
----
2019-09-18 19:25:49 UTC - Poule: I posted the issue
----
2019-09-18 19:48:09 UTC - Matteo Merli: did you compile the wheel file there, 
or did you take the one for 10.14?
----
2019-09-18 19:58:27 UTC - Addison Higham: general BK question: curious how 
people manage BK restarts/upgrades in an automated fashion. What I assume would 
be ideal is to restart/replace one bookie at a time, wait until there are no 
under-replicated partitions, then move on to the next bookie. Perhaps that might 
be naive?
----
2019-09-18 20:10:13 UTC - Poule: @Matteo Merli I took the 10.14
----
2019-09-18 20:39:02 UTC - Matteo Merli: Since it’s a controlled rollover, 
there’s no need to wait for under-replicated ledgers.

Typically, we’d use a 10min delay before marking the ledgers as 
under-replicated, so that it doesn’t get triggered on rolling restarts (or 
short-lasting failures).

Another alternative is to disable auto-recovery during the upgrade (there’s a 
command that sets a flag for that) and re-enable it after the upgrade is done.

For each bookie, you’d want to wait until it is working correctly before moving 
to the next one.
----
2019-09-18 22:29:19 UTC - Poule: @Matteo Merli I just manually built the wheel 
2.4.1 on Catalina and it seems to work OK, so I guess it's just a matter of 
building 10.15-specific wheels
----
2019-09-18 22:29:35 UTC - Poule: and pypi-ify them
----
2019-09-18 22:31:59 UTC - Matteo Merli: Yes, in general we have to build the 
wheel for each macOS version (and Python version too…)
+1 : Poule
----
2019-09-19 01:13:25 UTC - Poule: of the 8 schema compat check strategies, 2.4.1 
has only 3, right?
----
2019-09-19 01:13:56 UTC - Poule: or 8?
----
2019-09-19 01:35:36 UTC - Poule: the docs say 8, pulsar-admin says 3
----
2019-09-19 04:33:15 UTC - Poule: with the `backward` strategy, I am supposed to be 
able to delete a field from the schema definition, right?
----
2019-09-19 04:49:23 UTC - Poule: if I delete the current schema, the next 
schema definition I upload can be anything regardless of the strategy, right?
----
2019-09-19 06:24:49 UTC - Sijie Guo: Yes
hugging_face : Poule
----
2019-09-19 06:25:01 UTC - Sijie Guo: You can also check out this documentation 
: <http://pulsar.apache.org/docs/en/schema-evolution-compatibility/>
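
As a rough sketch of backward-compatible evolution with the Python client (the record classes, topic name, and service URL below are made up for illustration), dropping a field is the kind of change the `BACKWARD` strategy allows:
```
import pulsar
from pulsar.schema import Record, String, Integer, AvroSchema

# Hypothetical records: UserV2 drops the 'age' field that UserV1 had. Deleting
# a field is allowed under the BACKWARD strategy, because a consumer on the
# new schema can still read data written with the old one.
class UserV1(Record):
    name = String()
    age = Integer()

class UserV2(Record):
    name = String()

client = pulsar.Client('pulsar://localhost:6650')
# If the topic already carries AvroSchema(UserV1), creating a producer with
# AvroSchema(UserV2) uploads the new schema and should pass the BACKWARD check.
producer = client.create_producer('persistent://public/default/users',
                                  schema=AvroSchema(UserV2))
producer.send(UserV2(name='alice'))
client.close()
```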
----
2019-09-19 06:25:12 UTC - Sijie Guo: Yes 
hugging_face : Poule
----
2019-09-19 06:25:29 UTC - Sijie Guo: It should be 8
----
2019-09-19 06:25:48 UTC - Sijie Guo: I guess the pulsar-admin description has 
not been updated
----
2019-09-19 07:34:31 UTC - Poule: it actually accepts only 3
----
2019-09-19 07:37:53 UTC - Poule: maybe the 2.4.1 bin package has an old 
pulsar-admin, I don't know
----
2019-09-19 07:46:28 UTC - Poule: filing a gh issue
----
