<h3><u>#general</u></h3><br><strong>@joey: </strong>:wave: Is there an easy way to pass the `instanceId` variable to Pinot without hardcoding it into a `conf` file? For example, passing it in as an env variable or CLI parameter. Context: the `conf` file is managed by Puppet and the ID will be obtained by a launch script.
Being able to plumb it through while calling `pinot-admin.sh` would be convenient, but if not, it's probably just as easy to copy the configuration, edit it, and load the edited version.<br><strong>@fx19880617: </strong>Right now `instanceId` can only be set through `conf` files. One way to work around this is to have a wrapper start script that updates the `conf` file and then starts Pinot.<br>
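A minimal sketch of the wrapper-script workaround described above, assuming a broker started via `pinot-admin.sh StartBroker -configFileName` and a conf keyed by `instanceId` as in this thread; the paths, port, and template name are hypothetical placeholders:
```
#!/usr/bin/env bash
# Hypothetical launch wrapper: copy the Puppet-managed conf, append the instance ID
# computed at launch time, then start the broker from the edited copy.
# PINOT_HOME, the conf paths, and the port are assumed placeholders.
set -euo pipefail

PINOT_HOME=/opt/pinot
TEMPLATE_CONF="${PINOT_HOME}/conf/pinot-broker.conf"   # Puppet-managed template
RUNTIME_CONF=/tmp/pinot-broker-runtime.conf
BROKER_PORT=8099

cp "${TEMPLATE_CONF}" "${RUNTIME_CONF}"
# Helix-style IDs follow <Type>_<host>_<port>; see the follow-up below about
# what happens when a free-form ID is used instead.
echo "instanceId=Broker_$(hostname -f)_${BROKER_PORT}" >> "${RUNTIME_CONF}"

exec "${PINOT_HOME}/bin/pinot-admin.sh" StartBroker -configFileName "${RUNTIME_CONF}"
```
As the zk state pasted further down suggests, a free-form ID still registers, but fields such as `HELIX_PORT` end up empty, so sticking to the `Broker_<host>_<port>` form is the safer choice.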
<strong>@g.kishore: </strong>@joey <https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMSfW2QiSG4bkQpnpkSL7FiK3MHb8libOHmhAW89nP5XKvd1xlGru5RyM-2Fuil934eiw-3D-3D7W5v_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTwHQFi8URuVcy6zrl9z7dhvDKRaZilnvofsr6u-2F3X-2FLSpUmoHPXxXHcKFaYolGpUx6X-2F2kBvKL74IPoT8Dtj-2FZzMiiNOejyweh1giLG9akybsHirFVuQdqTS49y9FXM6JoSGkl6bsN91h9ruwe-2FaAvI-2BaPItOoycA1p0ysyjtLdPmyFDk9PhupS5mhw5TLVI6k-3D><br><strong>@g.kishore: </strong>> A future commit will enable properties loaded from PINOT_X environment variables.<br><strong>@joey: </strong>:thumbsup:. Follow-up: I set `instanceId=pinot-broker-1` and that looks like it did some "fun" things! The server's instance zk state is ```{ "id": "pinot-broker-1", "simpleFields": { "HELIX_ENABLED": "true", "HELIX_ENABLED_TIMESTAMP": "1600146975179", "HELIX_HOST": "pinot-broker-1", "HELIX_PORT": "" }, "mapFields": {}, "listFields": { "TAG_LIST": [ "DefaultTenant_BROKER" ] } }``` Based on a bit of spelunking, it looks like the `instanceId` has to conform to a strict form of `<type>_<hostname>_<port>` for internals to work?<br><strong>@joey: </strong>:P<br><strong>@aizydorczyk: </strong>@aizydorczyk has joined the channel<br><strong>@usha: </strong>@usha has joined the channel<br><strong>@usha: </strong>Hi, I am trying to ingest data into Pinot. The CSV file is about 30G. It has been running for about 5 hours and has not completed yet. Could someone let me know where I can find the logs for this process? I have the Pinot cluster running in Docker containers, set up similarly to the one described here: <https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMdTeAXadp8BL3QinSdRtJdqF7hckgVpJ77N6aIHLFxaXyamB0nyAfRyC-2Bxc6TicbDYHgNMrz1D0SKTbAY9CR-2BpS89cNgc-2Fdm6wJiI8n8sAkjHyR3_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTwHQFi8URuVcy6zrl9z7dhvMCeW3U08HkM70lU6rbIiJsRZKjX4BD695fySGS5tFsiDsSUc0uGh29cys4ZiwFvcQaQkb4FT5boaOkkGUBzszF5hA34GAjVhy-2BvNgnqBigSLryrKtwKMJdEzSSxDni-2FVKiH8reFZoEISbnCgjrS-2FN48T2c1zd6MXneV3KlhAOt8-3D>.<br><strong>@npawar: </strong>Hi all, here’s the 0.5.0 release blog by @tingchen with all the details of this release. Nicely done, Ting! <https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMa1aAdGoOdRoyIvGAevnwx4MmluqK5BbqVUbYmCA2ouJpoz0C42-2FPlZVB0dQEHozhLbOLSUUBAp0fuAKEL-2BazbmhXKk428Qk7O5pD9o4nRjVqT5KqyDq0RfzL-2B9oaQmJ7A-3D-3D2UD5_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTwHQFi8URuVcy6zrl9z7dhvHjfhF-2BnuCT3yN3UYlRw-2Frxv8QcW0mtWxzuoCjplJXMOzNebt9tEY-2FdA23SmJPxK-2BpsfEnahW5TV0ENcApCg5X-2FAoz487SnPQd8ecO6qt-2BbCEDJITewvMbZvKJ72lJhokUGWB3hX6xJAOCS8kt03sVrQ7YPJYfIpaQf5O-2F252-2FKQ-3D><br><strong>@ledzepu2: </strong>@ledzepu2 has joined the channel<br><h3><u>#random</u></h3><br><strong>@aizydorczyk: </strong>@aizydorczyk has joined the channel<br><strong>@usha: </strong>@usha has joined the channel<br><strong>@ledzepu2: </strong>@ledzepu2 has joined the channel<br><h3><u>#troubleshooting</u></h3><br><strong>@yash.agarwal: </strong>I am getting the following error when trying to drop a test broker.
```{ "code": 409, "error": "Failed to drop instance Broker_172.17.0.2_8099 - Instance Broker_172.17.0.2_8099 exists in ideal state for brokerResource" }``` How do I remove the instance from the ideal state and also drop it?<br>
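One possible sequence for clearing the 409 above, sketched against the controller REST API; the exact endpoint paths (especially the broker-resource rebuild), the untagged-broker tag, and the table name are assumptions here, so verify them against your controller's Swagger UI before running:
```
# Assumed controller address and names; adjust to your cluster.
CONTROLLER=http://localhost:9000
BROKER=Broker_172.17.0.2_8099

# 1. Retag the broker out of its tenant (broker_untagged is the conventional "no tenant" tag).
curl -X PUT "${CONTROLLER}/instances/${BROKER}/updateTags?tags=broker_untagged"

# 2. Rebuild the broker resource for each table served by that tenant, which should
#    remove the broker from the brokerResource ideal state.
curl -X POST "${CONTROLLER}/tables/myTable_OFFLINE/rebuildBrokerResourceFromHelixTags"

# 3. Retry the drop once the broker is out of the ideal state.
curl -X DELETE "${CONTROLLER}/instances/${BROKER}"
```
The idea is to get the broker out of the brokerResource ideal state first (untag, then rebuild), after which the drop should no longer conflict.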
<strong>@shen.wan: </strong>Pinot logging has a bug <https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMSfW2QiSG4bkQpnpkSL7FiK3MHb8libOHmhAW89nP5XKRBLaWHSPG7X4eavDsZpgqEgoW-2FycSVbOK20-2FkXzFYKn0VUl2MNYyht2Nkr1Q5g2ALN7-2FP6-2BJO5TMNm8xTy5DH8QVpRflNI5md-2FAmRQ3dGdEBf9P6l1ZKlK0-2BC4bMLrHc9exGnBoHUOQ-2F3xZ5LrMVhVT8k52iLizCckOiLuOioBxtnjnmLncUxzEXqkGHzH3Qus-2BLmqnEaM6w7nr-2F3jO2cX2Xagk-2FiM-2F8sZxNG1k9-2F-2B4-3DSsJs_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTwHQFi8URuVcy6zrl9z7dhvPWHcOiomGWpbrST5ixclo-2BkmD4-2F1s3qs83uTDEKdQYl950JeHjiSXehw784jthpcVX5coj8hgaz2-2FEFwQ8afokmczOTFMtAAZu4-2F-2B82aLcMyvEDeZRmplUxi-2BLM-2FVJ3jNGVi09c1jC2SrXQwWzkTAY-2FNQxWVSZz9RmYMXb0H1ls-3D> (a mismatched number of arguments) that truncates the error details.<br><strong>@pradeepgv42: </strong>QQ: what is the best way to drop a realtime server and move its segments to a different server? Would disabling the server and rebalancing do the trick?<br>
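A hedged sketch of the disable-and-rebalance route asked about above, using the controller's instance-tag and rebalance endpoints; the parameter names and the table/server names are assumptions to double-check against the Swagger UI for your Pinot version:
```
# Assumed controller address and names; adjust to your cluster.
CONTROLLER=http://localhost:9000
SERVER=Server_pinot-server-3_8098

# 1. Move the server out of the tenant's server tag so rebalance stops assigning it segments.
curl -X PUT "${CONTROLLER}/instances/${SERVER}/updateTags?tags=server_untagged"

# 2. Rebalance the realtime table, reassigning instances and including consuming segments.
#    dryRun=true only reports the proposed assignment; rerun with dryRun=false to apply it.
curl -X POST "${CONTROLLER}/tables/myTable/rebalance?type=realtime&reassignInstances=true&includeConsuming=true&downtime=false&dryRun=true"
```
Untagging first keeps the rebalance from assigning segments back to the server; once the server is emptied it can be dropped.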
<h3><u>#jdbc-connector</u></h3><br><strong>@kharekartik: </strong><https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMSfW2QiSG4bkQpnpkSL7FiK3MHb8libOHmhAW89nP5XKZs9DE1TP8d9GzkdlynoUHQ-3D-3D33Iw_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTwHQFi8URuVcy6zrl9z7dhvs1cY2EdGTkQ8gBJwmPtoH1Z80XUGhyV-2Ftc1gcJzCjpMqn-2B9DZaJ3LvrR-2FaO8PiDn3UqCRK7q5I-2BRcUPhqGdn2JdAzyj2kho6iWPPAqkZtTJzEwaYrQOs2mROKk3YfvXzBAzGKFvt1Mr6I3pD3S5cwn4mT7MJS2ZCok8mkTxgF98-3D> Does this work, or should I cache the final tenant-to-broker map instead of just instance configs?<br><h3><u>#lp-pinot-poc</u></h3><br><strong>@g.kishore: </strong>@fx19880617 do you have the graphs from before we made the change to the GC params?<br><strong>@fx19880617: </strong><br><strong>@fx19880617: </strong>this graph is before and after the GC changes<br><strong>@fx19880617: </strong>Below is the GC change plus setting `pinot.server.instance.realtime.alloc.offheap=true` in the server conf<br><strong>@g.kishore: </strong>can you capture the right legends for the latency<br><strong>@g.kishore: </strong>in the before graph<br><strong>@fx19880617: </strong>blue line is p999, yellow is p99, green is p50<br><strong>@fx19880617: </strong>I already deleted the cluster; if needed I can bring it back again<br><strong>@g.kishore: </strong>ok<br>