#general


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@ricardo.bernardino: Hi all! For anyone unfamiliar with Zookeeper operations (as we were): you will see that Zookeeper keeps increasing its disk usage. We found this odd since it only allows 1MB of data per znode. Looking at our data directory, we saw the disk usage was mainly in the logs folder and the snapshots. After searching a bit, we found two configurations that will automatically purge these files: `autopurge.snapRetainCount` and `autopurge.purgeInterval`. The logs are transaction logs, not application-level logs; together with the snapshots they let Zookeeper recover from a failure by replaying the transaction logs on top of the latest snapshot. `purgeInterval` is 0 by default, so nothing is purged; `snapRetainCount` defaults to 3, but with purging disabled it has no effect. Depending on the docker image and helm chart you are using, you may already have an env var to change the `purgeInterval`:
• `ZOO_AUTOPURGE_PURGEINTERVAL` for the official docker image
• `ZK_PURGE_INTERVAL` for the zookeeper incubator helm chart
Hope this helps!
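To make this concrete, here is a minimal sketch of the two settings. The values and the file path are illustrative placeholders, not recommendations; the `ZOO_AUTOPURGE_SNAPRETAINCOUNT` env var is assumed from the official image's naming convention, so verify it against your image's docs.

```shell
# Illustrative zoo.cfg additions (placeholder path; example values):
cat > /tmp/zoo.cfg <<'EOF'
# keep the 24 most recent snapshots (default is 3, which is also the minimum)
autopurge.snapRetainCount=24
# run the purge task every 1 hour (default 0 = purging disabled)
autopurge.purgeInterval=1
EOF

# Roughly equivalent via the official Docker image's env vars (hedged; check the image docs):
# docker run -e ZOO_AUTOPURGE_PURGEINTERVAL=1 -e ZOO_AUTOPURGE_SNAPRETAINCOUNT=24 zookeeper
grep -c '^autopurge' /tmp/zoo.cfg
```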
  @fx19880617: Many thanks for pointing this out! We’ve also seen this issue and the auto purge does help!
  @g.kishore: can we add this to docs?
  @laxman: I added this auto purge feature to ZooKeeper long ago, so I can explain why it is disabled by default. More write ops to ZooKeeper result in more snapshots. ZooKeeper snapshots can be backed up and used to restore ZooKeeper state. Depending on how frequently snapshots are rolled, users may want to control the number of snapshots to retain. For example, if snapshots are rolled every hour (due to heavy writes), you may want to retain 24 of them to be able to restore one-day-old state.
  @laxman: Some background here
@kautsshukla: Hi all, after adding one new server (going from 2 to 3 nodes), I expected all the finished segments to be rebalanced among all 3 Pinot servers instead of staying on just the consuming server.
  @mayanks: Only new consuming segments will be balanced automatically. Existing flushed segments will require running the rebalance command
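For reference, a hedged sketch of triggering the rebalance through the controller REST API. The controller address and table name are placeholders, and the exact query parameters vary by Pinot version, so verify them in your controller's Swagger UI before running.

```shell
CONTROLLER="localhost:9000"   # placeholder controller address
TABLE="myTable"               # placeholder table name

# A dry run previews the target segment assignment without moving any data.
REBALANCE_URL="http://${CONTROLLER}/tables/${TABLE}/rebalance?type=REALTIME&dryRun=true"
echo "curl -X POST \"${REBALANCE_URL}\""
```

Once the dry-run output looks right, re-issue the call with `dryRun=false` to actually move segments.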
  @kautsshukla: I tried, but the existing online segments didn’t rebalance to the third, newly added node
  @g.kishore: Are the nodes tagged properly?
  @kautsshukla: @g.kishore All nodes are tagged to defaultTenant
  @mayanks: IIRC you have 3500 segments per node of several hundred GB? If so, it might take some time. Can you check if it did anything at all?
@matt: @matt has joined the channel
@s.aditya.1811: @s.aditya.1811 has joined the channel
@s.aditya.1811: Hi Everyone. I have created a Udemy course on Apache Pinot. Feel free to check the course out

#random


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@fitzystrikesagain: @fitzystrikesagain has left the channel
@matt: @matt has joined the channel
@s.aditya.1811: @s.aditya.1811 has joined the channel

#troubleshooting


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@syedakram93: Hi,
@syedakram93: ```controller.segment.fetcher.auth.token="Basic YWRtaW46dmVyeXNlY3JldA=="
controller.admin.access.control.factory.class=org.apache.pinot.controller.api.access.BasicAuthAccessControlFactory
controller.admin.access.control.principals=admin,user
controller.admin.access.control.principals.admin.password=verysecret
controller.admin.access.control.principals.user.password=secret
controller.admin.access.control.principals.user.tables=baseballStats
controller.admin.access.control.principals.user.permissions=READ
controller.port=9000
controller.host=localhost
controller.zk.str=localhost:2191
controller.data.dir=/home/sas/temp/rawdata
controller.helix.cluster.name=QuickStartCluster```
@syedakram93: Getting the above exception (shown in the screenshot) when I set up a cluster with authorization
@syedakram93: can someone help?
@syedakram93: I am able to log in, but not able to access anything
@syedakram93: it seems no permission is available for user/admin; 403 error code
@syedakram93: Got Exception to upload Pinot Schema: baseballStats org.apache.pinot.common.exception.HttpErrorStatusException: Got error status code: 403 (Forbidden) with reason: "Permission is denied for access type 'READ' to the endpoint
@syedakram93: When trying to upload the schema via the command line in that auth mode, I got the above exception too
@syedakram93: @mayanks
@patidar.rahul8392: @syedakram93 how did you enable this authentication in Pinot? I am also trying to do that. Did you create a new controller.properties file and, while starting the controller, pass it with -configFileName?
@mayanks: How are you passing the credentials when making the call?
@syedakram93: yes
@syedakram93: bin/pinot-admin.sh StartController -configFileName bin/controller.properties &
@syedakram93: i am not passing any credentials
@mayanks: I guess that’s the issue?
@syedakram93: but even the UI is not able to access anything
@syedakram93: you can check the above screenshot
@mayanks: Yeah because controller was configured to allow only the Admin user, so in requests it is expecting credentials for admin.
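To illustrate: the token in the controller config above is just the base64 encoding of `admin:verysecret`, so any request (curl, browser, or client) must send it in an `Authorization` header. The host and endpoint in the commented call are placeholders.

```shell
# The auth token is base64("user:password") for the configured principal:
TOKEN=$(printf 'admin:verysecret' | base64)
echo "Basic ${TOKEN}"

# Hypothetical authenticated call to the controller (host/endpoint are placeholders):
# curl -H "Authorization: Basic ${TOKEN}" http://localhost:9000/schemas
```

Without that header, the controller rejects the request with the 403 seen above.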
@mayanks: @slack1 could we add docs on using/configuring ACLs in Pinot, or get a pointer if they already exist?
  @slack1: @mayanks
  @slack1: the UI supports both admin and limited-access users
  @mayanks: Thanks much @slack1.
@santosh.reddy: Hi, I am trying to create a tenant in the Pinot cluster and I am seeing the below error. Please help me resolve it. I am using the latest version of Pinot (0.7.1), running in cluster mode on VMs.
@santosh.reddy: ```bin/pinot-admin.sh AddTenant -name XXXXX -role BROKER -instanceCount 2 -exec
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/pinot/lib/pinot-all-0.7.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/pinot/plugins/pinot-file-system/pinot-s3/pinot-s3-0.7.1-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.pinot.spi.plugin.PluginClassLoader (file:/opt/pinot/lib/pinot-all-0.7.1-jar-with-dependencies.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.pinot.spi.plugin.PluginClassLoader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Error: Option "-offlineInstanceCount" is required```
@santosh.reddy: Provided the required values and tried again, but it didn’t work for me
@santosh.reddy: ```bin/pinot-admin.sh AddTenant -name Liquidation -role SERVER -instanceCount 2 -offlineInstanceCount 1 -realTimeInstanceCount 1 -exec
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/pinot/lib/pinot-all-0.7.1-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/pinot/plugins/pinot-file-system/pinot-s3/pinot-s3-0.7.1-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.pinot.spi.plugin.PluginClassLoader (file:/opt/pinot/lib/pinot-all-0.7.1-jar-with-dependencies.jar) to method java.net.URLClassLoader.addURL(java.net.URL)
WARNING: Please consider reporting this to the maintainers of org.apache.pinot.spi.plugin.PluginClassLoader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Executing command: AddTenant -controllerProtocol http -controllerHost 10.222.86.148 -controllerPort 9000 -name Liquidation -role SERVER -instanceCount 2 -offlineInstanceCount 1 -realTimeInstanceCount 1 -exec
{"code":500,"error":"Failed to create tenant"}
{"code":500,"error":"Failed to create tenant"}```
  @mayanks: What do you see in the controller log?
  @santosh.reddy: no error logs showed up in the controller logs
  @mayanks: Does it show that it received this request though?
  @santosh.reddy: yes, i think so
  @mayanks: Can you share that log
  @santosh.reddy: ```2021/05/20 23:18:57.913 INFO [ControllerResponseFilter] [grizzly-http-server-3] Handled request from xxxxxxx GET , content-type null status code 200 OK
2021/05/20 23:19:02.654 INFO [ControllerResponseFilter] [grizzly-http-server-2] Handled request from xxxxxxx GET , content-type null status code 200 OK
2021/05/20 23:19:02.927 INFO [ControllerResponseFilter] [grizzly-http-server-1] Handled request from xxxxxxx GET , content-type null status code 200 OK
2021/05/20 23:19:07.672 INFO [ControllerResponseFilter] [grizzly-http-server-0] Handled request from xxxxxxx GET , content-type null status code 200 OK```
  @santosh.reddy: this is the response i am seeing in controller logs
  @santosh.reddy: nothing about the 500
  @mayanks: Can you tail controller log and issue the command again and see if it gets anything.
  @santosh.reddy: above is the last tailed logs
  @mayanks: Hmm, 200 cannot become 500 right?
  @santosh.reddy: after running the tenant command
  @mayanks: how did you verify it is after running the AddTenant command?
  @santosh.reddy: I tried to grep the logs for 500 errors; I don’t see any in the output
  @santosh.reddy: I got the command from the official docs; you can see the output of that command in the first message
  @mayanks: Can you check if the existing instances are untagged?
  @mayanks: @santosh.reddy ^^
  @mayanks: From the code, it seems that it will fail if it cannot find available untagged instances to complete the request.
  @santosh.reddy: what is the process to untag the instances?
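A hedged sketch of untagging via the controller REST API: the controller address and instance name are placeholders, and the `updateTags` endpoint name is from memory, so confirm it in the controller's Swagger UI before running.

```shell
CONTROLLER="localhost:9000"       # placeholder controller address
INSTANCE="Server_10.0.0.1_8098"   # placeholder instance name (Server_<host>_<port>)

# Setting an empty tags list untags the instance, making it available
# for a subsequent AddTenant call (endpoint name assumed; verify in Swagger UI):
UNTAG_URL="http://${CONTROLLER}/instances/${INSTANCE}/updateTags?tags="
echo "curl -X PUT \"${UNTAG_URL}\""
```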
@santosh.reddy: followed the command from the below docs
@santosh.reddy:
@surendra: Hi, we are testing segment partitioning for REALTIME tables (Kafka as the source), but we are unable to find the configuration on the documentation page beyond `When emitting an event to kafka, a user need to feed partitioning key and partition function for Kafka producer API`. Can someone give insight into how it works internally? How do we configure the schema registry for Kafka record keys?
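For context, a hedged sketch of the table-config side of partitioning. The column name, partition count, and file path are placeholders, and the exact keys (`segmentPartitionConfig`, `segmentPrunerTypes`) should be verified against the docs for your Pinot version.

```shell
# Hypothetical fragment of a table config enabling partition-aware pruning
# (column name and numPartitions are examples; path is a placeholder):
cat > /tmp/partition-config.json <<'EOF'
{
  "tableIndexConfig": {
    "segmentPartitionConfig": {
      "columnPartitionMap": {
        "networkId": { "functionName": "Murmur", "numPartitions": 2 }
      }
    }
  },
  "routing": { "segmentPrunerTypes": ["partition"] }
}
EOF
grep -c Murmur /tmp/partition-config.json
```

The partition function and count here must match how the Kafka producer partitions records by key, or the broker will prune the wrong segments.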
  @npawar: does this help:
  @surendra: Thanks Neha, will check and update you.
  @surendra: @npawar Ingested the data with the config mentioned in the links. I see the below info in the metadata.properties file; is there any other way to validate that partitioning is working as expected? ```column.networkId.partitionFunction = Murmur
column.networkId.numPartitions = 2
column.networkId.partitionValues = 0,1```
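One hedged way to validate pruning end-to-end: run a query filtered to a single partition key and compare the segment counts in the broker's response stats (`numSegmentsQueried` vs. `numSegmentsProcessed`); with pruning working, far fewer segments should be processed than exist. The broker address, table, and column below are placeholders.

```shell
BROKER="localhost:8099"   # placeholder broker address (default broker port)

# Filter on one value of the partition column; the broker response's
# segment-count stats should show pruning if partitioning is effective.
QUERY='select count(*) from myTable where networkId = 0'
echo "curl -H 'Content-Type: application/json' -X POST \
  -d '{\"sql\": \"${QUERY}\"}' http://${BROKER}/query/sql"
```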
@matt: @matt has joined the channel
@s.aditya.1811: @s.aditya.1811 has joined the channel

#onboarding


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@fitzystrikesagain: @fitzystrikesagain has left the channel

#pinot-dev


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@ssubrama: @tingchen we need urgent review on this.

#pinot-docs


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@fitzystrikesagain: @fitzystrikesagain has left the channel

#getting-started


@fitzystrikesagain: @fitzystrikesagain has joined the channel
@matt: @matt has joined the channel

#releases


@fitzystrikesagain: @fitzystrikesagain has joined the channel