#general
@mohammedgalalen056: @mohammedgalalen056 has joined the channel
@nguyenhoanglam1990: @nguyenhoanglam1990 has joined the channel
@nguyenhoanglam1990: hi everyone, I'm hitting an error when adding a realtime table: error 500 (ClassNotFoundException: org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory) INFO [AddTableCommand] [main] {"code": 500, "error": "org.apache.pinot.plugin.stream.kafka20.KafkaConsumerFactory"}
@fx19880617: did you override the `JAVA_OPTS` env variable when you start pinot?
@fx19880617: basically this error means that the kafka-2.0 plugin is not loaded
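For context on the fix being suggested: Pinot discovers plugins (including the kafka-2.0 consumer) from the directory given by `-Dplugins.dir`, and the stock Docker image passes that flag via `JAVA_OPTS`. If `JAVA_OPTS` is overridden without it, the plugin never loads and the `ClassNotFoundException` above appears. A sketch of a safe override, assuming the default image layout (`/opt/pinot/plugins` is the standard path in the official image, but verify it for your build):

```shell
# Keep -Dplugins.dir when overriding JAVA_OPTS, otherwise no plugins load.
# Heap sizes and the zkAddress value here are placeholders.
docker run -e JAVA_OPTS="-Xms1G -Xmx4G -Dplugins.dir=/opt/pinot/plugins" \
  apachepinot/pinot:latest StartServer -zkAddress zookeeper:2181
```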
@snlee: @noahprince8 @g.kishore Are you guys planning to work on
@snlee: If not, we want to take this particular item `time pruning on the broker side`
@noahprince8: Totally cool with someone else doing it. It’s just something my company would need if we’re going to start using Pinot, and would implement if we decided to move forward with it.
@snlee: Thank you for the quick response. Time based pruning on the broker side is the optimization that would help for most of the time series use cases. We will work on this one.
@noahprince8: Thanks for implementing it! I’ll keep an eye on that issue.
@snlee: @jiatao ^^
#random
@mohammedgalalen056: @mohammedgalalen056 has joined the channel
@nguyenhoanglam1990: @nguyenhoanglam1990 has joined the channel
#troubleshooting
@elon.azoulay: Hi, we are migrating our Pinot installation to a different region in GKE. Has anyone done this before? Do you recommend shutting Pinot down, creating snapshots, and then redeploying in the target region? Also, the IPs of the Kafka brokers and schema registry will change; is it possible to modify the table config to reflect that?
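On the last part of the question: the Kafka broker list and schema-registry URL live in the realtime table's `streamConfigs`, so they can be changed by updating the table config through the controller API. A sketch, with the controller host, table name, and endpoints as placeholders (the two `stream.kafka.*` keys shown are the standard Pinot stream-config keys; the full table config body must be sent, not just the changed fields):

```shell
# Fetch the current config, edit the streamConfigs endpoints, then PUT it back.
curl "http://<controller>:9000/tables/myTable" > tableConfig.json
# ...edit in tableConfig.json:
#   "stream.kafka.broker.list": "new-broker-1:9092,new-broker-2:9092"
#   "stream.kafka.decoder.prop.schema.registry.rest.url": "http://new-registry:8081"
curl -X PUT "http://<controller>:9000/tables/myTable" \
  -H 'Content-Type: application/json' -d @tableConfig.json
```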
@mayanks: One way would be to bring up the other region first (with data ingestion pipeline setup as well). And then gradually move traffic out from one into another.
@mayanks: Once all traffic is moved to the new one, the original one can be decomm'd.
@mayanks: This is if you want zero down-time.
@elon.azoulay: Nice, the only issue is that we will be using the same namespace and k8s cluster in gke. Would we be able to add nodes in the new region and then remove the old nodes?
@mayanks: Yes
@g.kishore: Does the new region have access to deep store?
@elon.azoulay: Yes
@g.kishore: What you suggested works
@g.kishore: As long as the instance IDs are the same
@g.kishore: If you want to be smart, you just need to copy zookeeper directory
@g.kishore: And start same number of Pinot servers in the new cluster
@g.kishore: And it will download the segments from gcs
@elon.azoulay: Would we need downtime to copy zookeeper disks?
@elon.azoulay: Or is it possible to add nodes to the zookeeper cluster in the new region and then remove nodes in the old region?
@elon.azoulay: @mayanks was saying we can add new pinot servers in the new region and remove old pinot servers once the new ones are replicated to. Does that involve tagging?
@mayanks: Yes, this ^^ will get you zero downtime, if that is important for you
@elon.azoulay: Sounds good! Is that as simple as going to the zk explorer page and adding the tags?
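For the tagging step being asked about: rather than editing tags through the ZK explorer, the controller REST API can retag instances, after which a table rebalance moves replicas onto the newly tagged servers. A sketch, with instance name, tenant tags, and table name as placeholders (`updateTags` and `rebalance` are standard controller endpoints):

```shell
# Tag the new-region server into the tenant, then rebalance the table
# so segments are assigned to (and downloaded by) the new servers.
curl -X PUT "http://<controller>:9000/instances/Server_new-node_8098/updateTags?tags=DefaultTenant_OFFLINE,DefaultTenant_REALTIME"
curl -X POST "http://<controller>:9000/tables/myTable/rebalance?type=REALTIME"
```

Once the new servers are serving, the old instances can be untagged the same way and then dropped.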
#pinot-dev
@mohammedgalalen056: @mohammedgalalen056 has joined the channel
#announcements
@mohammedgalalen056: @mohammedgalalen056 has joined the channel
