2019-07-01 13:16:19 UTC - Sijie Guo: yes please
----
2019-07-01 13:16:21 UTC - Sijie Guo: thanks
----
2019-07-01 13:18:05 UTC - Sijie Guo: There are a couple of options:

- You can seek a subscription back to old messages, so your consumers will 
receive the old messages.
- If you are in favor of using the reader API, you can read directly from the 
partitions.
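A minimal Java sketch of the first option, assuming a broker reachable at 
`pulsar://localhost:6650`; the topic and subscription names are placeholders:
```java
import org.apache.pulsar.client.api.*;

public class SeekExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumption: local broker
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic") // example topic
                .subscriptionName("my-sub")                    // example subscription
                .subscribe();

        // Rewind the subscription to the earliest available message;
        // the consumer will then re-receive the old messages.
        consumer.seek(MessageId.earliest);

        Message<byte[]> msg = consumer.receive();
        consumer.acknowledge(msg);
        client.close();
    }
}
```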
----
2019-07-01 13:42:10 UTC - Gilberto Muñoz Hernández: Hi @David Kjerrumgaard, 
here is the new feature request about the hdfs sink connector 
(<https://github.com/apache/pulsar/issues/4651>)
----
2019-07-01 14:06:15 UTC - dipali: I am working with pulsar cluster in aws
----
2019-07-01 14:06:31 UTC - dipali: able to run but its showing empty dashboard
----
2019-07-01 14:07:25 UTC - dipali: if anyone has faced a similar issue, please 
guide me
----
2019-07-01 14:07:49 UTC - Sijie Guo: the current dashboard relies on stats for 
rendering. so you might have to send traffic to your cluster in order to make 
the topics show up in the dashboard.
----
2019-07-01 14:11:24 UTC - dipali: how do i do that?
----
2019-07-01 14:12:22 UTC - Sijie Guo: produce and consume messages to and from 
your cluster
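For example, with the `pulsar-client` CLI shipped in the Pulsar distribution 
(topic and subscription names here are just examples):
```shell
# produce a few test messages so the dashboard has stats to render
bin/pulsar-client produce my-topic --messages "hello-1,hello-2,hello-3"

# consume them with a named subscription (-n 3 exits after 3 messages)
bin/pulsar-client consume my-topic -s my-sub -n 3
```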
----
2019-07-01 14:13:19 UTC - dipali: ok.. i did produce to a topic using the 
example provided
----
2019-07-01 14:13:31 UTC - dipali: did not consume though
----
2019-07-01 14:14:19 UTC - dipali: is there any documentation i can follow?
----
2019-07-01 14:14:37 UTC - dipali: i created 9 node cluster
----
2019-07-01 14:14:40 UTC - dipali: in aws
----
2019-07-01 14:16:24 UTC - Sijie Guo: how did you run the dashboard?
----
2019-07-01 14:17:11 UTC - dipali: $ docker build -t 
apachepulsar/pulsar-dashboard dashboard
----
2019-07-01 14:17:24 UTC - dipali: $ 
SERVICE_URL=<http://broker.example.com:8080/>
$ docker run -p 80:80 \
  -e SERVICE_URL=$SERVICE_URL \
  apachepulsar/pulsar-dashboard
----
2019-07-01 14:17:42 UTC - dipali: here i provided one of the proxy ip
----
2019-07-01 14:18:22 UTC - dipali: sorry one of the broker ip
----
2019-07-01 14:19:07 UTC - dipali: service URL = http://&lt;broker1 ip&gt;:8080
----
2019-07-01 14:24:35 UTC - Sijie Guo: ok, where do you run the pulsar-dashboard 
docker container? Is the docker instance able to connect to the broker1 ip?
----
2019-07-01 14:26:15 UTC - dipali: docker is running ...not throwing any error
----
2019-07-01 14:26:28 UTC - dipali: but its not showing any ip..
----
2019-07-01 14:27:37 UTC - dipali: Starting Pulsar dasboard
+ /pulsar/init-postgres.sh
+ rm -rf '/data/*'
+ chown -R postgres: /data
+ chmod 700 /data
+ sudo -u postgres /usr/lib/postgresql/9.6/bin/initdb /data/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "C.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

Success. You can now start the database server using:

    /usr/lib/postgresql/9.6/bin/pg_ctl -D /data/ -l logfile start


WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
+ sudo -u postgres /etc/init.d/postgresql start
Starting PostgreSQL 9.6 database server: main.
+ sudo -u postgres psql --command 'CREATE USER docker WITH PASSWORD 
'\''docker'\'';'
CREATE ROLE
+ sudo -u postgres createdb -O docker pulsar_dashboard
+ cd /pulsar/django
+ ./manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions, stats
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying sessions.0001_initial... OK
  Applying stats.0001_initial... OK
  Applying stats.0002_support_deleted_objects... OK
+ supervisord -n
/usr/lib/python2.7/dist-packages/supervisor/options.py:298: UserWarning: 
Supervisord is running as root and it is searching for its configuration file 
in default locations (including its current working directory); you probably 
want to specify a "-c" argument specifying an absolute path to a configuration 
file for improved security.
  'Supervisord is running as root and it is searching '
2019-07-01 13:35:09,355 CRIT Supervisor running as root (no user in config file)
2019-07-01 13:35:09,355 INFO Included extra file 
"/etc/supervisor/conf.d/supervisor-app.conf" during parsing
2019-07-01 13:35:09,362 INFO RPC interface 'supervisor' initialized
2019-07-01 13:35:09,363 CRIT Server 'unix_http_server' running without any HTTP 
authentication checking
2019-07-01 13:35:09,363 INFO supervisord started with pid 72
2019-07-01 13:35:10,365 INFO spawned: 'nginx' with pid 75
2019-07-01 13:35:10,367 INFO spawned: 'collector' with pid 76
2019-07-01 13:35:10,369 INFO spawned: 'uwsgi' with pid 77
2019-07-01 13:35:15,973 INFO success: nginx entered RUNNING state, process has 
stayed up for > than 5 seconds (startsecs)
2019-07-01 13:35:15,973 INFO success: collector entered RUNNING state, process 
has stayed up for > than 5 seconds (startsecs)
2019-07-01 13:35:15,973 INFO success: uwsgi entered RUNNING state, process has 
stayed up for > than 5 seconds (startsecs)
----
2019-07-01 14:29:12 UTC - Sijie Guo: can you get into the docker instance and 
try to telnet the broker ip, to see whether it can connect to the broker or not?
----
2019-07-01 14:30:13 UTC - dipali: ok..
----
2019-07-01 14:50:59 UTC - Szymon Zberaz: @Szymon Zberaz has joined the channel
----
2019-07-01 14:53:38 UTC - Szymon Zberaz: Hello, I have a problem with Apache 
Pulsar on my Kubernetes cluster. I do not know what can be wrong, because I 
installed the Pulsar stack from the helm chart. The proxy pod is starting, but 
in the logs I see an error message like this:
```
14:38:12.880 [main] INFO  org.eclipse.jetty.util.thread.ThreadPoolBudget - 
SelectorManager@ServerConnector@2eadc9f6{HTTP/1.1,[http/1.1]}{0.0.0.0:8080} 
requires 1 threads from 
WebExecutorThreadPool[etp244872973]@e98770d{STARTED,4<=4<=4,i=3,q=0,ReservedThreadExecutor@5633dafd{s=0/1,p=0}}
2019-07-01 02:38:12,881 [sun.misc.Launcher$AppClassLoader@42110406] error 
Uncaught exception in thread main: Failed to start HTTP server on ports [8080]

```
I would be happy if someone could help me with this issue
----
2019-07-01 15:06:14 UTC - Chris Bartholomew: @Szymon Zberaz I have seen similar 
issues when running on smaller worker nodes. As a workaround, you can try 
setting httpNumThreads to 8 in your proxy.conf file.
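That is, a one-line change to the proxy configuration (the value 8 here is just 
the suggested starting point for small nodes):
```properties
# proxy.conf — cap the web server thread pool on small worker nodes
httpNumThreads=8
```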
----
2019-07-01 15:06:59 UTC - Szymon Zberaz: ok, thanks, is it possible to do this 
with the helm chart?
----
2019-07-01 15:09:54 UTC - Chris Bartholomew: You should be able to add it to 
the data section of proxy-configmap.yaml like this: ```httpNumThreads: "8"``` 
This should get translated into the proxy.conf in your pod.
----
2019-07-01 15:12:23 UTC - Szymon Zberaz: but how can I keep the defaults?
----
2019-07-01 15:17:05 UTC - Chris Bartholomew: Actually, looking more closely at 
the helm chart, you can add the line to your values file in 
.Values.proxy.configData. Any value present there gets added to the 
proxy-configmap.yaml data section.
heart : Szymon Zberaz
----
2019-07-01 15:31:59 UTC - Gilberto Muñoz Hernández: @David Kjerrumgaard does 
the cassandra sink connector have this same limitation? Only string messages?
----
2019-07-01 15:32:26 UTC - Szymon Zberaz: thanks, it is working 
:slightly_smiling_face:

Solution :
```
proxy:
  component: proxy
  replicaCount: 1
  # nodeSelector:
    # <http://cloud.google.com/gke-nodepool|cloud.google.com/gke-nodepool>: 
default-pool
  annotations:
    <http://prometheus.io/scrape|prometheus.io/scrape>: "true"
    <http://prometheus.io/port|prometheus.io/port>: "8080"
  tolerations: []
  gracePeriod: 0
  resources:
    requests:
      memory: 64Mi
      cpu: 0.1
  ## Proxy configmap
  ## templates/proxy-configmap.yaml
  ##
  configData:
    PULSAR_MEM: "\"-Xms64m -Xmx128m -XX:MaxDirectMemorySize=64m\""
    httpNumThreads: "8"
```
+1 : Chris Bartholomew
----
2019-07-01 15:35:12 UTC - David Kjerrumgaard: Yes, it can only handle Strings 
at the moment.
----
2019-07-01 15:36:54 UTC - David Kjerrumgaard: But how would the connector 
handle an incoming byte array?
----
2019-07-01 17:20:45 UTC - Vineeth Thumma: @Vineeth Thumma has joined the channel
----
2019-07-01 17:53:04 UTC - Devin G. Bost: @Jerry Peng I'm resuming my work on 
the end-to-end test that I previously spoke with you about. When I attempt to 
connect to the mocked PulsarAdmin, I'm getting:
`Connection refused: localhost/0:0:0:0:0:0:0:1:8080`

Do you know if it's possible for me to run methods like GetTenants() on the 
mocked PulsarAdmin object?
----
2019-07-01 17:59:59 UTC - Jerry Peng: @Devin G. Bost you should be able to use 
mockito to mock PulsarAdmin class
----
2019-07-01 18:00:30 UTC - Devin G. Bost: Thanks for the info. I'll keep 
debugging my code.
----
2019-07-01 18:01:01 UTC - Devin G. Bost: Do you know what might cause the 
connection to be refused?
----
2019-07-01 18:03:04 UTC - Jerry Peng: Did you mock it like the following?
```
Tenants tenants = mock(Tenants.class);
PulsarAdmin admin = mock(PulsarAdmin.class);
when(admin.tenants()).thenReturn(tenants);
```
----
2019-07-01 18:03:18 UTC - Jerry Peng: then the actual code should never be 
called
----
2019-07-01 18:06:50 UTC - Devin G. Bost: Oh, I see what you mean.
----
2019-07-01 18:08:38 UTC - Devin G. Bost: I can mock some of the parts like 
that, but what if I want to actually perform an integration test to determine 
if the functions I create are created successfully?
----
2019-07-01 18:14:12 UTC - Devin G. Bost: e.g. 
`/pulsar/io/PulsarFunctionE2ETest.java`
----
2019-07-01 18:16:28 UTC - Jerry Peng: Oh, in those tests the ports used for the 
broker/worker are randomly assigned. I am not sure if you are doing that as 
well, but if you are, then you need to set the correct port for the admin to 
connect to.
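A sketch of what that looks like, assuming the test exposes the randomly 
assigned web service port in a variable (`brokerWebServicePort` is a 
hypothetical name, not something from PulsarFunctionE2ETest itself):
```java
import org.apache.pulsar.client.admin.PulsarAdmin;

public class AdminPortExample {
    public static void main(String[] args) throws Exception {
        // brokerWebServicePort: whatever port the test harness actually
        // assigned (placeholder value here), not the default 8080
        int brokerWebServicePort = 55123;
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:" + brokerWebServicePort)
                .build();
        admin.close();
    }
}
```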
----
2019-07-01 18:16:51 UTC - Jerry Peng: Currently it is trying to use 8080, which 
is the default port
----
2019-07-01 18:17:27 UTC - Devin G. Bost: So, you think I'm just not passing the 
correct port?
----
2019-07-01 18:17:48 UTC - Devin G. Bost: I'll check on that.
----
2019-07-01 18:33:23 UTC - Devin G. Bost: Yeah, it looks like the wrong port is 
getting passed. Thanks.
----
2019-07-01 18:46:13 UTC - Santiago Del Campo: @Santiago Del Campo has joined 
the channel
----
2019-07-01 19:39:18 UTC - Santiago Del Campo: Good day!

I have a couple of questions about troubleshooting Pulsar components when they 
crash and in which contexts they crash.

First of all, our Pulsar cluster is deployed inside Kubernetes / AWS.... we use 
the generic YAMLs located in the source tarball.

1) When the bookie pods are redeployed after a while, due to changes in 
configuration or server maintenance, in most cases an error appears in the 
logs about a mismatch of cluster instance IDs... the one that the new bookie 
pods have is not the same, so the bookie server never starts.... My workaround 
for this is to literally delete the instance IDs inside the bookie pods 
manually and replace them with the one that the cluster is actually using (I 
change it at /data/bookkeeper/journal&ledgers/current/VERSION). Is there any 
way to automate this to make the cluster more stable? It seems like bookies 
create new instance IDs by themselves.

2) Again, if bookie pods are redeployed like in point 1), in most cases I see 
this kind of error: ```  Not all the new directories are empty. New 
directories that are not empty are: [data/bookkeeper/ledgers/current]  ```... 
And this leads me to ask how a bookie handles the persistence of several 
configuration parameters.... are they renewed on a redeploy? And if not, is 
there a way to automate the cleanup?

Not sure if I am being clear enough with these questions, we're kinda new to 
Pulsar.. but if you could help us with this we'd appreciate it a lot!

My feeling about all these errors is that the different components (bookie, 
broker, etc...) in a Kubernetes environment do not recover as expected, making 
the natural mutability of a Kubernetes cluster kinda difficult :thinking_face:
----
2019-07-01 20:41:07 UTC - Devin G. Bost: @Jerry Peng Is there a way for me to 
increase the timeouts? I resolved the port issue, but I'm getting timeout 
exceptions that are causing my connection to close.
----
2019-07-01 20:47:48 UTC - Devin G. Bost: Nvm. I found some places where it can 
be set.
----
2019-07-01 21:02:00 UTC - Devin G. Bost: @Jerry Peng Even after fixing the 
timeout issue, I'm still getting 
`org.apache.pulsar.client.admin.PulsarAdminException: 
java.lang.IllegalStateException: Client instance has been closed.`
----
2019-07-01 21:04:20 UTC - Devin G. Bost: 
----
2019-07-01 21:22:21 UTC - Jerry Peng: that means the close() was already called 
on the client admin and then you tried to use it
----
2019-07-01 21:29:31 UTC - Devin G. Bost: You are right. I refactored my 
lifecycle to use an IoC container and forgot that I was still managing the 
connection in several places. I didn't think they were getting executed, but I 
was wrong.
----
2019-07-01 21:44:54 UTC - Devin G. Bost: @Jerry Peng Now I'm getting 
`org.apache.pulsar.client.admin.PulsarAdminException$NotAuthorizedException: 
Cluster [use] is not in the list of allowed clusters list for tenant [osp]`

Where in the E2E method do I configure the allowed clusters list for the 
tenant? I tried a few different things, but I was just getting different errors.
----
2019-07-01 22:24:48 UTC - Jerry Peng: ```
TenantInfo propAdmin = new TenantInfo();
propAdmin.getAdminRoles().add("superUser");
propAdmin.setAllowedClusters(Sets.newHashSet(Lists.newArrayList("use")));
admin.tenants().updateTenant(tenant, propAdmin);
```
----
2019-07-01 22:25:58 UTC - Jerry Peng: I think there are some places where you 
have replaced “use” with “osp”
----
2019-07-01 22:34:15 UTC - Devin G. Bost: Thanks. It looks like `use` is the 
cluster name. I replaced the tenant name at the top with `osp`, and I got this 
error:

```
16:32:07.604 [bookkeeper-io-54-15] ERROR o.a.b.proto.PerChannelBookieClient - 
Could not connect to bookie: [id: 0x711fb128]/10.15.34.218:15991, current state 
CONNECTING : 
io.netty.channel.ConnectTimeoutException: connection timed out: 
<http://ocpc-lm31977.overstock.com/10.15.34.218:15991|ocpc-lm31977.overstock.com/10.15.34.218:15991>
        at 
io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:267)
        at 
io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at 
io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:127)
        at 
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
        at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:474)
        at 
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
        at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
```
----
2019-07-01 22:34:50 UTC - Devin G. Bost: The error occurs repeatedly until I 
get `org.apache.pulsar.broker.PulsarServerException: 
java.lang.RuntimeException: 
org.apache.pulsar.client.api.PulsarClientException$TimeoutException: 8 lookup 
request timedout after ms 30000`
----
2019-07-01 22:37:16 UTC - Jerry Peng: I am not sure about this error, but I 
suggest running PulsarFunctionE2ETest to make sure it runs in your setup. 
After, start replacing the code piece by piece.
----
2019-07-01 22:37:28 UTC - Devin G. Bost: Okay.
----
2019-07-01 23:14:34 UTC - Ping-Min Lin: Hi, I'm trying to use the reader 
interface to avoid subscriptions being created. However, looking into the 
reader implementation source, it seems that the reader interface is still using 
consumers. Not sure if my observation is correct or if I'm missing something. 
Thanks!
----
2019-07-01 23:59:52 UTC - Matteo Merli: @Ping-Min Lin Yes, from the wire 
protocol perspective, a reader works in a very similar way to a consumer, 
though there are a few differences:
 * The subscription is marked as “volatile” and its state is not stored, so it 
disappears when the reader disconnects
 * Internally, the reader positions itself to a particular message id after 
every reconnection, to ensure there are no dups when a connection is 
re-established
----
2019-07-02 00:05:54 UTC - Ping-Min Lin: Thanks for the clarification @Matteo 
Merli!
cc: @Ambud Sharma
----
2019-07-02 00:35:06 UTC - Ping-Min Lin: @Matteo Merli I see in the docs: `The 
reader interface for Pulsar cannot currently be used with partitioned topics.` 
does this mean that I cannot even use the reader interface to read from an 
internal partition topic `mytopic-partition-X` ?
----
2019-07-02 00:35:36 UTC - Matteo Merli: Yes, you can read directly from the 
specified partitions
----
2019-07-02 00:36:32 UTC - Matteo Merli: It just doesn’t work at the 
“partitioned topic” level because it takes only 1 message id to position the 
reader
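A sketch of reading a single internal partition directly, assuming a local 
broker; the topic name follows the `mytopic-partition-X` convention mentioned 
above and is just an example:
```java
import org.apache.pulsar.client.api.*;

public class PartitionReaderExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // assumption: local broker
                .build();

        // Address one internal partition of the partitioned topic directly;
        // a single message id is enough to position the reader here.
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/mytopic-partition-0")
                .startMessageId(MessageId.earliest)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            System.out.println(new String(msg.getData()));
        }
        client.close();
    }
}
```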
----
2019-07-02 00:36:50 UTC - Ping-Min Lin: I see, thank you!
----
2019-07-02 01:04:28 UTC - Devin G. Bost: Does anyone know where I can find the 
dependencies for `com.yahoo.athenz`? My work needs to set up a mirror to allow 
us to pull the dependencies (in order to build Pulsar 2.3.2 from source code), 
and we couldn't find them in our upstream mirrors (Sonatype Central and Bintray).
When I try to build, I'm getting:
```
Failed to execute goal on project pulsar-broker-auth-athenz: Could not resolve 
dependencies for project org.apache.pulsar:pulsar-broker-auth-athenz:jar:2.3.2: 
Failed to collect dependencies at 
com.yahoo.athenz:athenz-zpe-java-client:jar:1.8.17: Failed to read artifact 
descriptor for com.yahoo.athenz:athenz-zpe-java-client:jar:1.8.17: Failure to 
find com.yahoo.athenz:athenz:pom:1.8.17 
```
and I was told to find out "where to get it if we are to mirror it".
----
2019-07-02 01:07:24 UTC - Matteo Merli: They get pulled from 
<https://yahoo.bintray.com/maven>
+1 : Devin G. Bost
----
2019-07-02 04:22:38 UTC - Devin G. Bost: I noticed that I only get the bookie 
connection error when I'm connected to the VPN to my office. :thinking_face:
----
2019-07-02 05:02:23 UTC - Jerry Peng: might cause your ip address to resolve 
differently or assign you a different hostname
----
2019-07-02 05:06:27 UTC - Devin G. Bost: It does look like I'm getting a 
different IP address.
----
2019-07-02 05:06:41 UTC - Devin G. Bost: It also looks like it gives me a 
different hostname.
----
2019-07-02 05:09:22 UTC - Devin G. Bost: I'll ignore this issue for now and get 
back to it after I get it working when I'm not on the VPN.

Now, when I try to create functions, sinks, and sources, I'm getting this error:
```
org.apache.pulsar.client.admin.PulsarAdminException: Source Package is not 
provided
        at 
org.apache.pulsar.client.admin.internal.BaseResource.getApiException(BaseResource.java:180)
        at 
org.apache.pulsar.client.admin.internal.SourceImpl.createSource(SourceImpl.java:133)
        at 
com.overstock.dataeng.pulsar.deployment.manifest.Source.create(Source.java:143)
```
I also get the error for Kafka sources and sinks. Is it expecting me to provide 
a connector Nar file somewhere for those?
----
2019-07-02 05:17:28 UTC - Devin G. Bost: For the ones that don't use built-in 
connectors, I'm getting bookie operation timeouts, though it does appear that 
communication is occurring with the bookie. The timeouts look like this:
----
2019-07-02 05:17:50 UTC - Devin G. Bost: 
----
2019-07-02 05:20:30 UTC - Devin G. Bost: The timeouts continue for a while 
before reaching a pattern of creating a new ensemble for the ledger, checking 
the stats, flushing the write cache, and then timing out again, like this:
```
23:11:10.764 [BookKeeperClientWorker-OrderedExecutor-0-0] INFO  
o.a.bookkeeper.client.LedgerHandle - New Ensemble: [192.168.1.140:16301] for 
ledger: 4
23:11:11.031 [pulsar-web-69-15] INFO  org.eclipse.jetty.server.RequestLog - 
127.0.0.1 - - [01/Jul/2019:23:11:11 -0600] "GET 
/admin/persistent/osp/use/pulsar-function-admin/coordinate/stats HTTP/1.1" 200 
770 "-" "Pulsar-Java-v2.3.2" 2
23:11:11.039 [pulsar-web-69-8] INFO  org.eclipse.jetty.server.RequestLog - 
127.0.0.1 - - [01/Jul/2019:23:11:11 -0600] "GET 
/admin/persistent/osp/use/pulsar-function-admin/coordinate/stats HTTP/1.1" 200 
770 "-" "Pulsar-Java-v2.3.2" 1
. . . 
23:11:17.194 [BookieWriteThreadPool-OrderedExecutor-0-0] INFO  
o.a.b.b.storage.ldb.DbLedgerStorage - Write cache is full, triggering flush
23:11:17.232 [pulsar-web-69-7] INFO  org.eclipse.jetty.server.RequestLog - 
127.0.0.1 - - [01/Jul/2019:23:11:17 -0600] "GET 
/admin/persistent/osp/use/pulsar-function-admin/coordinate/stats HTTP/1.1" 200 
770 "-" "Pulsar-Java-v2.3.2" 1
23:11:17.334 [pulsar-web-69-5] INFO  org.eclipse.jetty.server.RequestLog - 
127.0.0.1 - - [01/Jul/2019:23:11:17 -0600] "GET 
/admin/persistent/osp/use/pulsar-function-admin/coordinate/stats HTTP/1.1" 200 
770 "-" "Pulsar-Java-v2.3.2" 1
. . . 
23:11:17.750 [BookKeeperClientWorker-OrderedExecutor-0-0] WARN  
o.a.bookkeeper.client.PendingAddOp - Failed to write entry (6, 71): Bookie 
operation timeout
23:11:17.750 [BookKeeperClientScheduler-OrderedScheduler-0-0] INFO  
o.a.b.proto.PerChannelBookieClient - Timed-out 117 operations to channel [id: 
0x32fa9196, L:/192.168.1.140:64280 - R:ocpc-lm31977/192.168.1.140:16300] for 
192.168.1.140:16300
23:11:17.751 [BookKeeperClientWorker-OrderedExecutor-0-0] WARN  
o.a.bookkeeper.client.PendingAddOp - Failed to write entry (6, 100): Bookie 
operation timeout
23:11:17.751 [BookKeeperClientWorker-OrderedExecutor-0-0] WARN  
o.a.bookkeeper.client.PendingAddOp - Failed to write entry (6, 70): Bookie 
operation timeout
```
----
2019-07-02 05:41:05 UTC - Mahesh: Hi, I have a few questions regarding topic 
deletion in pulsar. Someone please clarify:
1) Is topic deletion atomic?
2) Will a topic get deleted if it has unacknowledged messages?
3) What happens to messages that a producer sends while deletion is in progress?
4) Will pulsar acquire a lock on the topic before deleting it?
----
2019-07-02 06:01:07 UTC - Penghui Li: @Mahesh
<https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentTopic.java#L727>
please take a look at this method, I think it can help you understand some 
topic deletion details.
----
2019-07-02 06:04:38 UTC - Penghui Li: And pulsar enables auto-deletion of 
inactive topics; some details are at 
<https://github.com/apache/pulsar/blob/master/pulsar-broker/src/main/java/org/apache/pulsar/broker/service/persistent/PersistentTopic.java#L1480>
----
2019-07-02 06:05:37 UTC - Penghui Li: You can disable it by setting 
brokerDeleteInactiveTopicsEnabled=false in broker.conf
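In broker.conf that looks like the fragment below; the frequency setting named 
in the comment is the related knob controlling how often the check runs:
```properties
# broker.conf — keep inactive topics instead of garbage-collecting them
brokerDeleteInactiveTopicsEnabled=false
# if left enabled, the check interval is controlled by
# brokerDeleteInactiveTopicsFrequencySeconds
```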
----
2019-07-02 06:06:35 UTC - Jiayi Liao: @Jiayi Liao has joined the channel
----
2019-07-02 07:15:55 UTC - dipali: sorry i had to go out
----
2019-07-02 07:19:08 UTC - dipali: i am able to telnet to the broker from the 
docker instance
----
2019-07-02 08:06:30 UTC - dipali: also i am seeing the proxy is connecting to 
the private ip of the broker... not the public ip.. can this be the issue?
----
