2019-10-10 09:11:22 UTC - Retardust: my entity is
```
@NoArgsConstructor
@EqualsAndHashCode
@Getter
@Setter
public class JournalBatch {

    private String[] entriesIds;
    private ActionType[] actions;
    private String[] tableNames;
    private int count = 0;
    private int skipped = 0;
    private byte[][] entryDatas;
    private boolean[][] nullValues;
    private int bytesSize;
    private boolean[] archived;
}
```

and function is Function<byte[], JournalBatch>
----
2019-10-10 09:41:48 UTC - Ranganath S: @Ranganath S has joined the channel
----
2019-10-10 09:47:00 UTC - Sijie Guo: okay I will check
----
2019-10-10 09:50:08 UTC - Sijie Guo: was the topic created before using a 
different schema?
----
2019-10-10 09:50:26 UTC - Sijie Guo: you can use `pulsar-admin schemas` to 
check what schema is used for the topics
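the same check also works from the Java admin client; a minimal sketch (the service URL and topic name are placeholders):
```
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.schema.SchemaInfo;

public class CheckSchema {
    public static void main(String[] args) throws Exception {
        // point serviceHttpUrl at your broker or proxy admin endpoint
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            // prints the schema type and definition currently registered for the topic
            SchemaInfo info = admin.schemas().getSchemaInfo("persistent://public/default/my-topic");
            System.out.println(info);
        }
    }
}
```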
----
2019-10-10 09:51:22 UTC - Retardust: do I need to set a schema on the topic explicitly?
----
2019-10-10 09:53:40 UTC - Ranganath S: I am getting an UnknownHostException when I just start Apache Pulsar with `bin/pulsar standalone`. Can someone help please?
----
2019-10-10 10:03:16 UTC - Sijie Guo: if the topic doesn't exist when you run the function, the topic will be created with a schema inferred from the function type.
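For example, a minimal sketch of a typed function whose output schema can be inferred (not your actual code; the deserialization is a placeholder):
```
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Because the declared output type is JournalBatch, Pulsar can infer a schema
// for the output topic from that type when it auto-creates the topic.
public class JournalBatchFunction implements Function<byte[], JournalBatch> {
    @Override
    public JournalBatch process(byte[] input, Context context) throws Exception {
        return deserialize(input);
    }

    private JournalBatch deserialize(byte[] input) {
        // placeholder: application-specific decoding of the raw bytes
        return new JournalBatch();
    }
}
```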
----
2019-10-10 10:03:48 UTC - Sijie Guo: however, if the topic exists before you submit the function, you have to make sure the topic has a compatible schema.
----
2019-10-10 10:08:06 UTC - test: @test has joined the channel
wave : Poule
----
2019-10-10 10:22:58 UTC - Retardust: thanks
----
2019-10-10 10:41:24 UTC - Retardust: will the function consume all topic content after deployment?
I have a "buffer" topic_1 and I want to start the function after a special event.
The function will bridge messages from topic_1 to topic_2.
I don't see an option for the subscription offset type, such as latest/earliest
----
2019-10-10 10:46:11 UTC - Sijie Guo: you can use `pulsar-admin` to reset the cursor for the (function) subscription on the input topic. does that work for you?
----
2019-10-10 10:46:18 UTC - Retardust: Also, is there any option to dump binary message content from pulsar?)
p.s. could I even save that to a file from a function?)
up: done it with a function and stdout, but that seems not to be a very clean solution)
----
2019-10-10 10:47:07 UTC - Retardust: Maybe.
Could I deploy a function without starting it?
----
2019-10-10 10:49:07 UTC - Sijie Guo: I don't think we offer that option yet. a workaround is to stop the function, reset the cursor, and start the function
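A minimal sketch of that workaround using the Java admin client (assuming 2.4.x APIs; the service URL, tenant/namespace/function and topic names are placeholders):
```
import org.apache.pulsar.client.admin.PulsarAdmin;

public class RewindFunction {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            admin.functions().stopFunction("public", "default", "bridge-fn");
            // rewind the function's subscription on the input topic to one hour ago;
            // "bridge-fn" is a placeholder here - the actual subscription name can be
            // found via the input topic's stats
            admin.topics().resetCursor("persistent://public/default/topic_1",
                    "bridge-fn", System.currentTimeMillis() - 3600_000L);
            admin.functions().startFunction("public", "default", "bridge-fn");
        }
    }
}
```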
----
2019-10-10 10:49:17 UTC - Sijie Guo: feel free to create an issue for it.
----
2019-10-10 10:49:48 UTC - Sijie Guo: A github issue with a description and the error message would help us understand the problem and help you resolve it.
----
2019-10-10 10:53:12 UTC - Ranganath S: <https://github.com/apache/pulsar/issues/4510> - the error message is `java.net.UnknownHostException: failed to resolve '<laptop host name>' after 6 queries`. I just ran the command bin/pulsar standalone on my MacBook.
----
2019-10-10 10:53:56 UTC - Retardust: ok, thanks!
----
2019-10-10 11:44:28 UTC - xiaolong.ran: @Retardust @Sijie Guo <https://github.com/apache/pulsar/pull/5357> The reason for this problem is that the `functionClassLoader` is set at the wrong time.
+1 : Retardust
----
2019-10-10 14:06:28 UTC - Nicolas Ha: ok thanks - that would be useful for me 
:slightly_smiling_face: I can’t use the cumulative ack until then (not a big 
deal though)
----
2019-10-10 14:20:12 UTC - Filcho: @Filcho has joined the channel
----
2019-10-10 14:24:21 UTC - Nicolas Ha: I am getting confused - I see `onAcknowledge`, `onAckTimeoutSend`, `onNegativeAcksSend` in the code, but the last two are not in the docs?
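For reference, here is roughly the interceptor interface as I see it in the 2.4.x client (a minimal sketch; the method bodies are just placeholder logging), with the two undocumented callbacks at the end:
```
import java.util.Set;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.ConsumerInterceptor;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;

public class LoggingInterceptor<T> implements ConsumerInterceptor<T> {
    @Override
    public void close() {}

    @Override
    public Message<T> beforeConsume(Consumer<T> consumer, Message<T> message) {
        return message; // pass the message through unchanged
    }

    @Override
    public void onAcknowledge(Consumer<T> consumer, MessageId messageId, Throwable exception) {
        System.out.println("acked " + messageId);
    }

    @Override
    public void onAcknowledgeCumulative(Consumer<T> consumer, MessageId messageId, Throwable exception) {
        System.out.println("cumulatively acked up to " + messageId);
    }

    @Override
    public void onAckTimeoutSend(Consumer<T> consumer, Set<MessageId> messageIds) {
        // called when ack-timeout triggers redelivery of these message IDs
        System.out.println("ack timeout redelivery requested for " + messageIds);
    }

    @Override
    public void onNegativeAcksSend(Consumer<T> consumer, Set<MessageId> messageIds) {
        // called when negatively acked message IDs are sent back for redelivery
        System.out.println("negative-ack redelivery requested for " + messageIds);
    }
}
```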
----
2019-10-10 14:27:03 UTC - Endre Karlson: @Endre Karlson has joined the channel
----
2019-10-10 14:31:59 UTC - Endre Karlson: hey guys, has anyone looked at an operator for k8s?
----
2019-10-10 14:47:07 UTC - Sijie Guo: We are working on a K8S operator. Will 
release it soon.
heart_eyes : Poule
+1 : Vladimir Shchur, Luke Lu, Chris Bartholomew, Poule
----
2019-10-10 14:57:24 UTC - Addison Higham: oh interesting, for all pulsar 
components? I imagine most of the smarts need to be in BK and ZK
----
2019-10-10 15:26:13 UTC - Britt Bolen: I've got an athenz + namespace permissions question. When using the admin cli to set permissions on a namespace (`$ pulsar-admin namespaces grant-permission test-tenant/ns1 --actions produce,consume --role role_token`), what value do I use for `role_token` when using athenz for authentication? do I use `tenant_domain.tenant_service`? thanks
----
2019-10-10 15:29:31 UTC - Matteo Merli: Yes, that should be the correct one
----
2019-10-10 15:31:10 UTC - Matteo Merli: do you get any error?
----
2019-10-10 15:32:26 UTC - Britt Bolen: I hadn't tried it yet; I'm just reading all the docs and coming up to speed. There was a line in the docs saying "In other words, Pulsar uses the Athenz role token only for authentication, not for authorization," so I wasn't sure how one uses athenz with authorization in pulsar
----
2019-10-10 15:32:44 UTC - Britt Bolen: thanks
----
2019-10-10 15:33:03 UTC - Matteo Merli: That’s correct, the authorization is 
handled in Pulsar itself via the `grant-permission` commands.
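For completeness, a sketch of the Java admin equivalent of that CLI command (the service URL and names are placeholders):
```
import java.util.EnumSet;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.AuthAction;

public class GrantPermission {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")
                .build()) {
            // the role string is whatever principal the client authenticates as,
            // e.g. tenant_domain.tenant_service when using Athenz
            admin.namespaces().grantPermissionOnNamespace("test-tenant/ns1",
                    "tenant_domain.tenant_service",
                    EnumSet.of(AuthAction.produce, AuthAction.consume));
        }
    }
}
```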
----
2019-10-10 15:33:50 UTC - Matteo Merli: Athenz has its own way to define 
resources and access to them, though we’re just using it to identify the client
----
2019-10-10 15:36:25 UTC - Britt Bolen: gotcha, makes sense then since 
`tenant_domain.tenant_service` is the name that the client presents for 
authentication
----
2019-10-10 16:39:57 UTC - Luke Lu: If the operator can handle zk/bk 
upgrade/rollback seamlessly, that’d be awesome.
----
2019-10-10 17:14:49 UTC - Addison Higham: just running into 
<https://github.com/apache/pulsar/issues/3702>
----
2019-10-10 17:15:27 UTC - Addison Higham: but it is happening very consistently 
for me ATM
----
2019-10-10 17:15:54 UTC - Matteo Merli: can you get tcpdumps of the HTTP traffic? preferably from the broker's perspective
----
2019-10-10 17:18:10 UTC - Addison Higham: it is https, but it is going via a 
proxy... trying to remember how I have this set up
----
2019-10-10 17:27:05 UTC - Addison Higham: swapping the proxy to hit http, will 
see if I can still repro after that
----
2019-10-10 17:28:23 UTC - Britt Bolen: One more authorization question: are 'role tokens' global? So a single PulsarClient could access topics in multiple different tenants and namespaces, assuming the same role token value is used when setting permissions on the various namespaces?
----
2019-10-10 17:30:07 UTC - Matteo Merli: Yes, for the authZ provider, the "role" or "principal" is just a string that identifies a client. a single principal is not restricted to using resources from one tenant.
----
2019-10-10 17:31:28 UTC - Matteo Merli: one common scenario: give “consume” 
permission on my namespace to some other users
----
2019-10-10 17:32:28 UTC - Britt Bolen: ok, thanks just wanted to be sure I got 
it!
----
2019-10-10 18:10:16 UTC - Addison Higham: okay @Matteo Merli I think I captured 
it... what exactly are you looking for? Here is the 307 back:
```
PUT /admin/v2/persistent/code/gerrit-events/get_smart%3Aref-updated/subscription/flink-pulsar-d19a9dfb-2741-4280-9392-2ff1c221aa82 HTTP/1.1
Authorization: Bearer ...
User-Agent: Pulsar-Java-v2.4.1
Host: pulsar-pdx.bus-beta.insk8s.net:8443
Accept: application/json
Content-Type: application/json
Via: http/1.1 pulsar-beta-proxy-784f6c7c55-c6btf
X-Forwarded-For: 10.11.41.30
X-Forwarded-Proto: https
X-Forwarded-Host: pulsar-pdx.bus-beta.insk8s.net:8443
X-Forwarded-Server: 10.11.46.152
X-Original-Principal: code-admin
Content-Length: 82

{"ledgerId":9223372036854775807,"entryId":9223372036854775807,"partitionIndex":-1}

HTTP/1.1 307 Temporary Redirect
Date: Thu, 10 Oct 2019 17:49:40 GMT
Location: http://10.11.59.132:8080/admin/v2/persistent/code/gerrit-events/get_smart%3Aref-updated/subscription/flink-pulsar-d19a9dfb-2741-4280-9392-2ff1c221aa82?authoritative=false
broker-address: 10.11.58.246
Content-Length: 0
Server: Jetty(9.4.12.v20180830)
```
----
2019-10-10 18:17:08 UTC - Addison Higham: oh I might be confused
----
2019-10-10 18:50:38 UTC - Addison Higham: okay, now I think I understand more, here is the TCP request (that I think kicks things off?)
```
PUT /admin/v2/persistent/code/gerrit-events/get_smart%3Achange-merged/subscription/flink-pulsar-9c2c9da6-af13-406f-a04d-2087c6603e76 HTTP/1.1
Authorization: Bearer ...
User-Agent: Pulsar-Java-v2.4.1
Host: pulsar-pdx.bus-beta.insk8s.net:8443
Accept: application/json
Content-Type: application/json
Via: http/1.1 pulsar-beta-proxy-784f6c7c55-c6btf
X-Forwarded-For: 10.11.45.87
X-Forwarded-Proto: https
X-Forwarded-Host: pulsar-pdx.bus-beta.insk8s.net:8443
X-Forwarded-Server: 10.11.46.152
X-Original-Principal: code-admin
Content-Length: 82

{"ledgerId":9223372036854775807,"entryId":9223372036854775807,"partitionIndex":-1}

HTTP/1.1 307 Temporary Redirect
Date: Thu, 10 Oct 2019 17:49:38 GMT
Location: http://10.11.59.132:8080/admin/v2/persistent/code/gerrit-events/get_smart%3Achange-merged/subscription/flink-pulsar-9c2c9da6-af13-406f-a04d-2087c6603e76?authoritative=false
broker-address: 10.11.58.246
Content-Length: 0
Server: Jetty(9.4.12.v20180830)
```
----
2019-10-10 19:33:26 UTC - Addison Higham: okay, so I am fairly confident about what is happening:

The proxy gets the request and sends it to a broker; the broker responds with a 307, and the proxy follows the redirect, but for whatever reason it changes the request such that the next broker that gets it can't complete it.

Based on the logs I am seeing, where the second broker just times out, I am thinking the proxy isn't forwarding the request body, so the broker is just waiting for the request bytes
----
2019-10-10 19:35:21 UTC - Addison Higham: that seems to be the case also in 
this capture:
```
PUT /admin/v2/persistent/code/gerrit-events/get_smart%3Aref-updated/subscription/flink-pulsar-d19a9dfb-2741-4280-9392-2ff1c221aa82?authoritative=false HTTP/1.1
User-Agent: Jetty/9.4.12.v20180830
User-Agent: Pulsar-Java-v2.4.1
Accept: application/json
Content-Type: application/json
Via: http/1.1 pulsar-beta-proxy-784f6c7c55-c6btf
X-Forwarded-For: 10.11.41.30
X-Forwarded-Proto: https
X-Forwarded-Host: pulsar-pdx.bus-beta.insk8s.net:8443
X-Forwarded-Server: 10.11.46.152
X-Original-Principal: code-admin
Authorization: Bearer ...
Host: 10.11.59.132:8080
Content-Length: 82

HTTP/1.1 500 Request failed.
Date: Thu, 10 Oct 2019 17:49:40 GMT
Content-Length: 0
Server: Jetty(9.4.12.v20180830)
```
----
2019-10-10 19:35:25 UTC - Addison Higham: notice that there are no body bytes
----
2019-10-10 19:35:43 UTC - Addison Higham: (but it is for a different topic, I 
couldn't find the redirect for above)
----
2019-10-10 19:36:52 UTC - Addison Higham: @Matteo Merli ^^ does that sound 
plausible?
----
2019-10-10 19:38:12 UTC - Matteo Merli: Yes, the question is why the 2nd broker gives a 500 error
----
2019-10-10 19:38:27 UTC - Matteo Merli: there should (hopefully) be a stack trace in that broker's logs
----
2019-10-10 19:38:28 UTC - Addison Higham: I think because it never gets the 
bytes of the original request body
----
2019-10-10 19:38:50 UTC - Matteo Merli: uhm, that’s right
----
2019-10-10 19:39:23 UTC - Addison Higham: ```
17:50:08.143 [pulsar-web-33-3] WARN  org.apache.pulsar.broker.web.AuthenticationFilter - [10.11.46.152] Failed to authenticate HTTP request: org.glassfish.jersey.server.ContainerException: java.io.IOException: java.util.concurrent.TimeoutException: Idle timeout expired: 30000/30000 ms
17:50:08.144 [pulsar-web-33-3] INFO  org.eclipse.jetty.server.RequestLog - 10.11.46.152 - - [10/Oct/2019:17:49:38 +0000] "PUT /admin/v2/persistent/code/gerrit-events/get_smart%3Achange-merged/subscription/flink-pulsar-9c2c9da6-af13-406f-a04d-2087c6603e76?authoritative=false HTTP/1.1" 500 0 "-" "Jetty/9.4.12.v20180830" 30008
```
----
2019-10-10 19:40:02 UTC - Addison Higham: relevant log lines: it is just timing out, waiting for a body it never gets. Note that in the tcp capture above with the 500, the broker never receives a body
----
2019-10-10 19:41:12 UTC - Matteo Merli: that's strange, could be a bug in the Jetty HTTP proxy, or something related to the original request being on HTTPS
----
2019-10-10 19:41:33 UTC - Matteo Merli: could be worth checking with a newer version of Jetty
----
2019-10-10 19:44:41 UTC - Addison Higham: looks like in <https://github.com/apache/pulsar/blob/master/pulsar-proxy/src/main/java/org/apache/pulsar/proxy/server/AdminProxyHandler.java#L151> the auth headers are explicitly copied over; perhaps on redirects it doesn't send the request body
----
2019-10-10 19:57:09 UTC - Addison Higham: @Matteo Merli okay yeah, so in that code we are overriding the copyRequest method, which *does* copy the content, but since the super version isn't being called, it is the responsibility of that method to do it
----
2019-10-10 19:57:43 UTC - Addison Higham: ```
diff --git a/pulsar-proxy/src/main/java/org/apache/pulsar/proxy/server/AdminProxyHandler.java b/pulsar-proxy/src/main/java/org/apache/pulsar/proxy/server/AdminProxyHandler.java
index fdf687dfc9..a725f9b17e 100644
--- a/pulsar-proxy/src/main/java/org/apache/pulsar/proxy/server/AdminProxyHandler.java
+++ b/pulsar-proxy/src/main/java/org/apache/pulsar/proxy/server/AdminProxyHandler.java
@@ -42,6 +42,7 @@ import org.eclipse.jetty.client.HttpClient;
 import org.eclipse.jetty.client.HttpRequest;
 import org.eclipse.jetty.client.ProtocolHandlers;
 import org.eclipse.jetty.client.RedirectProtocolHandler;
+import org.eclipse.jetty.client.api.ContentProvider;
 import org.eclipse.jetty.client.api.Request;
 import org.eclipse.jetty.http.HttpHeader;
 import org.eclipse.jetty.proxy.ProxyServlet;
@@ -149,11 +150,20 @@ class AdminProxyHandler extends ProxyServlet {
         @Override
         protected Request copyRequest(HttpRequest oldRequest, URI newURI) {
             String authorization = oldRequest.getHeaders().get(HttpHeader.AUTHORIZATION);
+            ContentProvider body = oldRequest.getContent();
             Request newRequest = super.copyRequest(oldRequest, newURI);
             if (authorization != null) {
                 newRequest.header(HttpHeader.AUTHORIZATION, authorization);
             }
-
+            if (body != null && body.getLength() > 0) {
+                if (body.isReproducible()) {
+                    newRequest.content(body);
+                } else {
+                    // throw an exception instead? This is likely only if the client is uploading
+                    // a chunked body or similar
+                    LOG.error("Redirected request from {} to {} had non-reproducible body", oldRequest.getURI(), newRequest.getURI());
+                }
+            }
             return newRequest;
         }
```
----
2019-10-10 19:58:02 UTC - Addison Higham: going to try that against my cluster
----
2019-10-10 20:13:06 UTC - Matteo Merli: :+1:
----
2019-10-10 20:36:29 UTC - Naby: Thanks. It worked, but with the help of other web pages as well.
<https://pulsar.apache.org/docs/en/develop-cpp/>
<https://stackoverflow.com/questions/58272830/python-crashing-on-macos-10-15-beta-19a582a-with-usr-lib-libcrypto-dylib>
And by setting the default python to 3.7: alias python="python3.7"

Some of the content on the apache pulsar website is outdated and some is inconsistent.
----
2019-10-10 21:27:53 UTC - Sandeep Kotagiri: @David Kjerrumgaard @Sijie Guo I have been trying to offload some data into S3 since yesterday. Following the lead from @David Kjerrumgaard I configured standalone.conf. This certainly made a difference; however, I wasn't able to successfully offload anything yet. Some things I noticed: initially I did not configure s3ManagedLedgerOffloadServiceEndpoint. During this time, when I manually invoked an offload, I got a 500 error; however, in the background the status was that "offload is in progress". After some time, I configured s3ManagedLedgerOffloadServiceEndpoint, and I was then able to successfully submit an offload request. However, the offload started failing with a null pointer exception.
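For reference, the tiered-storage keys I set in standalone.conf look roughly like this (a sketch for 2.4.x; all values are placeholders):
```
managedLedgerOffloadDriver=aws-s3
s3ManagedLedgerOffloadBucket=my-offload-bucket
s3ManagedLedgerOffloadRegion=us-west-2
# needed when the S3 endpoint is not the AWS default (e.g. behind a proxy or an S3-compatible store)
s3ManagedLedgerOffloadServiceEndpoint=http://my-s3-endpoint:9000
```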
----
2019-10-10 21:31:03 UTC - Sandeep Kotagiri: 
----
2019-10-10 21:33:23 UTC - Sandeep Kotagiri: I think I am getting something wrong, but I'm not sure what it is. I am also behind a corporate proxy. I was expecting that the lack of proxy settings to reach the internet might be causing some issues, but I did not find any issues in the logs regarding the lack of proxy settings.
----
2019-10-10 21:35:20 UTC - David Kjerrumgaard: @Sandeep Kotagiri "However, in the background the status was that "offload is in progress"." --- Please file a bug report for this inaccurate status being reported.
----
2019-10-10 21:38:20 UTC - Sandeep Kotagiri: @David Kjerrumgaard will do. Any 
other pointers that might be helpful in this pursuit?
----
2019-10-10 21:38:38 UTC - David Kjerrumgaard: @Sandeep Kotagiri I actually think the proxy is the source of the issue, but the stack trace doesn't spell that out very clearly. From what I see, the key part in the stack trace is as follows:
`Caused by: java.lang.NullPointerException: Null id
    at org.jclouds.blobstore.domain.AutoValue_MultipartUpload.<init>(AutoValue_MultipartUpload.java:32) ~[?:?]
    at org.jclouds.blobstore.domain.MultipartUpload.create(MultipartUpload.java:35) ~[?:?]
    at org.jclouds.s3.blobstore.S3BlobStore.initiateMultipartUpload(S3BlobStore.java:373) ~`
This indicates that you were unable to get a valid S3 object ID to use for storing the offloaded ledgers.
----
2019-10-10 21:40:21 UTC - Sandeep Kotagiri: Ok, got it. Yesterday I tried setting http.proxyHost and http.proxyPort within JAVA_OPTS in the environment configuration, but this caused some zookeeper communication to fail. I did not capture the exact problem. Let me test this a bit more before I open any bugs.
----
2019-10-10 21:40:55 UTC - David Kjerrumgaard: @Sandeep Kotagiri Basically, the offloader connects to S3, creates a target S3 object, and uses that id to initiate a multi-part upload of the ledger data in 5 MB chunks. This allows Pulsar to send smaller (5MB) pieces of the data over the network and tell S3 that they are all associated with one another and should be stored as one logical object on S3.
----
2019-10-10 21:41:49 UTC - David Kjerrumgaard: @Sandeep Kotagiri It could also 
be related to AWS permissions and you might not be authorized to create S3 
objects.
----
2019-10-10 21:43:08 UTC - Sandeep Kotagiri: Yes, this is one more thing I will 
need to try. I used the default permission settings. By default everything is 
blocked. On the other hand, in the permissions tab, I could clearly see that 
the account has read/write permissions. I will relax the permissions to see if 
this works.
----
2019-10-10 21:45:51 UTC - Addison Higham: wait... I totally read that wrong... 
it is calling the super method, so it should be copying the body, well... that 
makes me more confused... going to double check my tcp capture
----
2019-10-10 21:49:38 UTC - David Kjerrumgaard: @Sandeep Kotagiri No worries, just thought I would point out other things that might cause the issue. Do you see ANY objects in the S3 bucket you configured for offloading the ledgers to?
----
2019-10-10 21:59:53 UTC - Matteo Merli: do you have `-s 0` option in tcpdump?
----
2019-10-10 22:08:23 UTC - Addison Higham: I had `-s 65535` for my first capture; now I am doing it using this nifty tool: <https://github.com/eldadru/ksniff>, which made it significantly easier. It appears to be using a pretty new version of tcpdump, so it should be capturing the default of `262144` bytes
----
2019-10-10 22:10:02 UTC - Matteo Merli: -s 0 should not be capping the packet 
size
----
2019-10-10 22:10:31 UTC - Addison Higham: from the man page:
```
snapshot-length=snaplen
        Snarf snaplen bytes of data from each packet rather than the default of
        262144 bytes. Packets truncated because of a limited snapshot are
        indicated in the output with ``[|proto]'', where proto is the name of
        the protocol level at which the truncation has occurred. Note that
        taking larger snapshots both increases the amount of time it takes to
        process packets and, effectively, decreases the amount of packet
        buffering. This may cause packets to be lost. You should limit snaplen
        to the smallest number that will capture the protocol information
        you're interested in. Setting snaplen to 0 sets it to the default of
        262144, for backwards compatibility with recent older versions of
        tcpdump.
```
----
2019-10-10 22:29:16 UTC - Addison Higham: okay, looking at packet lengths, I 
don't think anything is getting truncated
----
2019-10-10 23:56:00 UTC - Addison Higham: welp... I am at a dead end on this. I enabled debug logging on my cluster and can certainly see the issue on the broker side, but after looking through the netty code and searching for the debug logs I would expect, I don't see anything
----
2019-10-11 02:40:01 UTC - Penghui Li: Will try to add a method to get un-acked messageIds
----
2019-10-11 02:42:40 UTC - Penghui Li: Seems the Java doc has not been updated
----
2019-10-11 02:59:04 UTC - Addison Higham: okay, an update on this:
- a new version of jetty appears to fix the issue
- however, I had to set `tlsEnabledWithBroker` to false, so there is something about talking TLS that it doesn't like
- logging in the proxy seems really weird; I didn't get any logs for code paths that I am pretty sure it must be in, even with debug logging enabled for the root logger. With the above exceptions, I wasn't seeing anything either
----
2019-10-11 03:05:15 UTC - Addison Higham: dangit... forget all that, I just got lucky a few times
----
2019-10-11 03:05:46 UTC - Matteo Merli: :confused:
----
2019-10-11 03:06:10 UTC - Matteo Merli: can you open an issue and put all your findings in it, along with steps to repro?
----
2019-10-11 03:20:25 UTC - Sandeep Kotagiri: @David Kjerrumgaard, no. I tried relaxing the permissions on the S3 bucket without any luck. I will try to set this up where there is no corporate proxy between AWS and Pulsar. I am hoping this will do the trick. I will try it out first thing in the morning.
----
2019-10-11 03:57:17 UTC - Addison Higham: 
<https://github.com/apache/pulsar/issues/5360>
----
2019-10-11 06:39:33 UTC - Endre Karlson: @Sijie Guo any idea on publishing it?
----
2019-10-11 07:33:11 UTC - Jianfeng Qiao: Hello, I have a question about Shared mode. The documentation says "In shared or round robin mode, multiple consumers can attach to the same subscription. Messages are delivered in a round robin distribution across consumers, and any given message is delivered to only one consumer. When a consumer disconnects, all the messages that were sent to it and not acknowledged will be rescheduled for sending to the remaining consumers." Does this mean a message can only be delivered to one consumer even if that consumer does not ack it?
----
2019-10-11 07:35:01 UTC - Jianfeng Qiao: Based on our recent tests of Shared mode, it does not work this way.
----
2019-10-11 07:40:41 UTC - Jianfeng Qiao: The broker does not save the relationship of which message was delivered to which consumer; a message might be redelivered to another consumer even if the original consumer is still alive but has not acked it.
----
2019-10-11 08:20:50 UTC - dba: Hi. Just a quick question to see if this is the expected behavior or a bug. I have enabled batching (using Pulsar 2.4.1) and sent 3 messages that are then bundled. I consume (using the java client version 2.4.1) and get 3 messages (the ledgerId, entryId and partition are the same, but batch_index is 1 to 3). If I ack messages 1 and 2 and then stop consuming, the next time I connect I get 1 and 2 again, meaning that the cursor is only moved forward if I ack all the messages in the batch. Is this the expected behavior?
Oh, also another question. Since batched messages are delivered to the consumer in a single CommandMessage and you can set ordering_key and partition_key per SingleMessageMetadata, how does that work? Won't that mean that messages for a certain ordering_key are delivered to the wrong consumer, since we can have a mix of ordering_keys per CommandMessage?
----
