#general
@ajaythompson: @ajaythompson has joined the channel
#random
@ajaythompson: @ajaythompson has joined the channel
#feat-presto-connector
@aiyer: @aiyer has joined the channel
#troubleshooting
@elon.azoulay: When I create a table with a Boolean column in the schema and then retrieve the schema, the type is string. Is there any way to determine whether the column was originally meant to be Boolean?
@jackie.jxt: We don’t have native boolean support yet; Boolean is currently treated as string
@jackie.jxt:
@elon.azoulay: Oh, that's great!
@jackie.jxt: After that, you can store Boolean natively
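For reference, once native boolean support lands, a minimal sketch of a schema declaring a BOOLEAN dimension might look like this (schema and column names here are illustrative, not from the thread):
```
{
  "schemaName": "myTable",
  "dimensionFieldSpecs": [
    { "name": "isActive", "dataType": "BOOLEAN" }
  ]
}
```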
@mohamed.sultan: Hi team, I'm facing this issue when I run a query.
@dlavoie: Your pinot broker is probably not healthy. Can you check the running status of your pods?
@mohamed.sultan: I just tried deleting the broker and restarting it
@mohamed.sultan: and pinot stopped consuming data from kafka as well
@mohamed.sultan: for all tables
@dlavoie: If your cluster is not healthy nothing will work as expected.
@mohamed.sultan: Is there any other way to sort this out?
@dlavoie: What is the running status of your pods?
@mohamed.sultan: it's in running state
@dlavoie: Any errors in the logs of each pod?
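For anyone following along, the pod status and log checks being asked for here would typically be done with something like the following (namespace and pod names depend on the Helm release, so treat these as placeholders):
```
kubectl get pods -n pinot
kubectl logs pinot-broker-0 -n pinot --tail=200
kubectl logs pinot-controller-0 -n pinot --tail=200
kubectl logs pinot-server-0 -n pinot --tail=200
```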
@mohamed.sultan: This is for broker-0
@dlavoie: what about the others?
@mohamed.sultan: This is controller-0
@mohamed.sultan: server-0
@mohamed.sultan: Kindly do me a favor and take a look.
@dlavoie: Seems like you have a schema error regarding a date format. Can you access the Cluster Manager UI and check if the broker is live? Finally, which k8s provider are you running on?
@g.kishore: Looking at the exception in the log, the timestamp format in the Kafka message does not match what you have in the table schema
@dlavoie: Schema error is a different problem.
@mohamed.sultan: GKE
@dlavoie: I would suggest fixing your schema, then trying the query again. The error you got might be caused by a broker being restarted
@mohamed.sultan: brokers are in alive status
@mohamed.sultan: OK, that's fine. But some tables were consuming data and now they have stopped.
@mohamed.sultan: Can you point me to which schema has the error?
@mayanks: The latest image you pasted has an error that indicates a bad date, Feb 29 2021 (Feb only has 28 days this year)
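For context, the timestamp format the server expects comes from the table schema's `dateTimeFieldSpecs`, and its pattern has to match the values arriving in the Kafka messages. A sketch of such a spec (field name and pattern are illustrative, not taken from the affected table):
```
{
  "dateTimeFieldSpecs": [
    {
      "name": "eventTime",
      "dataType": "STRING",
      "format": "1:SECONDS:SIMPLE_DATE_FORMAT:yyyy-MM-dd HH:mm:ss",
      "granularity": "1:SECONDS"
    }
  ]
}
```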
@elon.azoulay: If you tail the server logs do you see `java.lang.OutOfMemoryError: Direct buffer memory` ? The servers may be running but the direct memory may be hitting the limit.
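If direct memory does turn out to be the limit, one common adjustment is raising the server JVM's direct memory ceiling. A sketch assuming the Helm chart exposes the server JVM options as `server.jvmOpts` (the sizes are illustrative and need to fit the node's memory):
```
server:
  jvmOpts: "-Xms4G -Xmx8G -XX:MaxDirectMemorySize=10G"
```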
@ajaythompson: @ajaythompson has joined the channel
@ashish: Running into an issue using the latest pinot-presto image against pinot latest.
presto:default> select * from baseballstats;
Query 20210421_202103_00011_isaaq, FAILED, 1 node
Splits: 17 total, 0 done (0.00%)
1:00 [0 rows, 0B] [0 rows/s, 0B/s]
Query 20210421_202103_00011_isaaq failed: null value in entry: Server_172.19.0.2_7000=null
@ashish: 2021-04-21T20:36:07.566Z ERROR nioEventLoopGroup-7-1 org.apache.pinot.$
@ashish: Was a backward-incompatible change introduced recently? Do I need to recreate the pinot-presto image with the latest pinot client library?
@jackie.jxt: @fx19880617 I think we need to pick up the latest pinot code in the connector
@jackie.jxt: We recently upgraded the data table version:
@fx19880617: i see
@fx19880617: i need to make this change then
@fx19880617: hmm
@fx19880617: is it possible to specify the version in the server request?
@fx19880617: I think this change is not yet released, so from the presto side I have no reference to it?
@fx19880617: or it’s already in 0.7.1?
@jackie.jxt: No, it's not released yet. There is a server config `pinot.server.instance.currentDataTableVersion` to change the data table version
@jackie.jxt: If set to 2, it should remain the same behavior
@fx19880617: I see. @ashish can you give it a try by adding `pinot.server.instance.currentDataTableVersion=2` to pinot server config
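For clarity, the workaround is a single line in the Pinot server configuration file that is passed at server startup:
```
# Pin the data table version so the 0.7.1 presto connector can still read responses
pinot.server.instance.currentDataTableVersion=2
```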
@fx19880617: hmm
@jackie.jxt: @fx19880617 We should probably pair the pinot-presto image with the pinot image, e.g. for each pinot release have a pinot-presto release?
@fx19880617: it’s too much overhead to manage :stuck_out_tongue: I will try
@fx19880617: if this feature is not in the 0.7.1 release, then we should wait until it’s in 0.8.0
@fx19880617: then remove it
@jackie.jxt: True. The issue is caused by running pinot-presto image for `0.7.1` with the latest pinot master
@fx19880617: yeah
@fx19880617: Maybe we can add this to default helm pinot server config
@fx19880617: so new users won’t experience failure
@jackie.jxt: Yeah, that works. We can remove that after the next release
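A sketch of what that default could look like in the Helm values, assuming the chart exposes the server config block as `server.extra.configs` (the key layout and the existing `pinot.set.instance.id.to.hostname` line are assumptions about the chart defaults):
```
server:
  extra:
    configs: |-
      pinot.set.instance.id.to.hostname=true
      pinot.server.instance.currentDataTableVersion=2
```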
@fx19880617: cool
@fx19880617:
@ashish: Trying out the workaround now
#getting-started
@nikhil.sulegaon: @nikhil.sulegaon has joined the channel
#minion-improvements
@laxman: @jackie.jxt /@fx19880617: What's the right way to convert a REALTIME table to a HYBRID table? Is a single table configuration sufficient, or do we need to create REALTIME and OFFLINE tables separately?
@laxman: We already have a REALTIME table with 90 days retention. We want to convert that to a REALTIME table with 7 days retention and an OFFLINE table with the remaining 83 days retention, and also move the existing old data (segments) from the REALTIME table to the OFFLINE table.
@g.kishore: Can you please document this?
@laxman: Sure, will do. Once I try this successfully in our clusters, I will document it. @fx19880617 /@jackie.jxt: let me know your thoughts. Please reply even if you are not sure; I can verify it immediately on my test cluster. I have scope and time for trial and error.
@jackie.jxt: @laxman For your use case, you need to create a separate OFFLINE table, and add the `RealtimeToOfflineSegmentsTask` to the REALTIME table
@jackie.jxt: Minion should be able to pick up the completed real-time segments and push them to the offline table
@jackie.jxt: Don't change the REALTIME table retention before the segments are pushed to the offline side
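A sketch of the task section that would go in the REALTIME table config for this (the period values are illustrative; the OFFLINE table is created separately with the same schema):
```
"task": {
  "taskTypeConfigsMap": {
    "RealtimeToOfflineSegmentsTask": {
      "bucketTimePeriod": "1d",
      "bufferTimePeriod": "2d"
    }
  }
}
```
Depending on the version, the controller's periodic task scheduler may also need to be enabled for Minion to pick the task up.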
@laxman: Okay. Both OFFLINE and REALTIME with 90 days retention till all segments are moved to OFFLINE?
@npawar: yes, and btw the documentation for this: