<h3><u>#general</u></h3><br><strong>@azmicibi: </strong>@azmicibi has joined 
the channel<br><strong>@mchristensen: </strong>@mchristensen has joined the 
channel<br><strong>@hara.acharya: </strong>@hara.acharya has joined the 
channel<br><strong>@afilipchik: </strong>hi! what is the best way to make Pinot 
multiregional? Is it possible to do without consuming from Kafka (relying on DFS replication of uploaded segments) to save $$$?<br><strong>@g.kishore: 
</strong>yes, you can push segments from one region to other by using the 
upload call<br><strong>@g.kishore: </strong>but this is only applicable for the 
flushed segments<br><strong>@mayanks: </strong>We only have upload capability 
for offline table (part of hybrid table) though, right?<br><strong>@mayanks: 
</strong>@afilipchik Do you have a hybrid or realtime-only 
table?<br><strong>@afilipchik: </strong>mostly hypothetical question. 
Observability team had a thought experiment - can they use Pinot instead of ES 
to store logs. But they want geo redundancy<br><strong>@afilipchik: 
</strong>so, tables will be whatever it needs to be 
:slightly_smiling_face:<br><strong>@mayanks: </strong>Ok, the way we do this at 
LinkedIn is via Kafka mirror-maker.<br><strong>@afilipchik: </strong>that makes 
sense. so, just replicate into one or two locations, and have some way to switch 
read load?<br><strong>@mayanks: </strong>Once you have replicated, you can 
switch load, or have sticky routing (based on partition), 
etc<br><h3><u>#random</u></h3><br><strong>@azmicibi: </strong>@azmicibi has 
joined the channel<br><strong>@mchristensen: </strong>@mchristensen has joined 
the channel<br><strong>@hara.acharya: </strong>@hara.acharya has joined the 
channel<br><h3><u>#troubleshooting</u></h3><br><strong>@yash.agarwal: 
</strong>Hey, how can I pass auth during a segment pull? It's over HTTP, so I'm adding it as user info in the segment URI prefix, but on redirect the user info is not retained and the request fails with a 401. Any suggestions?<br><strong>@yash.agarwal: </strong>I believe this can be fixed by something 
like 
<https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMSfW2QiSG4bkQpnpkSL7FiK3MHb8libOHmhAW89nP5XKsBteHJu6eYbcEKuBC-2FrlymsnyPhGybk7etbEI9OIkKM-3Dt_KG_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTzJ9IKd9kcNEktHRwPRJywb9c0et-2FG0-2FdIy87AndQbqYxgDo9wrk0VydGMogxVy0iz28JDGrefq27U1hx83OO6EYI4cltyuKADjVGyofe5G5a7Kz-2BcIWe46LfRqtOsvgulecdSUAusxnwAtJ5nZ8WX1J0oYp9TyPutV7bTF8fzyb1hjzu4ETAiSeA-2FEKPsmMBs-3D><br><strong>@dbarber:
 </strong>@dbarber has joined the channel<br><strong>@sosyalmedya.oguzhan: 
</strong>Hi,

I'm trying to receive data using the Pinot scatter-gather API (`pinot.core.transport.{AsyncQueryResponse, QueryRouter, ServerInstance}`) from Pinot 0.5.0-SNAPSHOT.

I'm running Pinot 0.4.0 locally (when I bring Pinot up from the master branch it cannot load the data, so there may be a problem there). Could this be a backward-compatibility issue?

Error message:

```ERROR org.apache.pinot.core.transport.DataTableHandler - Caught exception while handling response from server: 192.168.2.154_O
java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
        at org.apache.pinot.core.common.datatable.DataTableImplV2.&lt;init&gt;(DataTableImplV2.java:122) ~[classes/:?]
        at org.apache.pinot.core.common.datatable.DataTableFactory.getDataTable(DataTableFactory.java:35) ~[classes/:?]
        at org.apache.pinot.core.transport.DataTableHandler.channelRead0(DataTableHandler.java:67) ~[classes/:?]
        at org.apache.pinot.core.transport.DataTableHandler.channelRead0(DataTableHandler.java:36) ~[classes/:?]
        at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105) ~[netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:328) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:302) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1044) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.42.Final.jar:4.1.42.Final]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_221]```<br><strong>@mayanks: </strong>&lt;rant&gt; Is it just me, or has IntelliJ become a lot more confused loading Pinot (especially when moving between commits)? It eats up a lot of my time. &lt;/rant&gt;<br>
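The `java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;` in the stack trace above is most likely a JDK compile/runtime mismatch rather than a Pinot compatibility issue: on JDK 9+, `ByteBuffer.position(int)` gained a covariant `ByteBuffer` return type, so bytecode compiled there references a method that does not exist on the Java 8 runtime shown in the trace (`1.8.0_221`). Building with `javac --release 8`, or running on a newer JRE, usually resolves it. A minimal sketch of the common source-level workaround (the class and method names are illustrative, not Pinot code):

```java
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class ByteBufferCompat {
    // On JDK 9+, ByteBuffer overrides position(int) with a covariant return
    // type (ByteBuffer instead of Buffer). Code compiled there links against
    // ByteBuffer.position(I)Ljava/nio/ByteBuffer;, which is absent on a Java 8
    // runtime and throws NoSuchMethodError. Up-casting to Buffer forces the
    // call to link against Buffer.position(int), present on both versions.
    static ByteBuffer positionCompat(ByteBuffer buf, int newPosition) {
        ((Buffer) buf).position(newPosition);
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(16);
        System.out.println(positionCompat(buf, 4).position()); // prints 4
    }
}
```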

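On the earlier question about credentials being lost during segment pull: user info embedded in a URI is generally dropped by HTTP clients when they follow a redirect, so one workaround is to follow redirects manually and re-attach an explicit `Authorization` header on every hop. A hedged sketch assuming Basic auth over `HttpURLConnection` (the class and helper names are hypothetical, not a Pinot API):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class AuthRedirectFetch {
    // Hypothetical helper: build a Basic auth header instead of embedding
    // user info in the URI, so it can be re-sent on every hop.
    static String basicAuthHeader(String user, String pass) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + pass).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    // Follow redirects by hand, re-attaching the Authorization header each
    // time (HttpURLConnection does not carry credentials across a redirect).
    static HttpURLConnection openWithAuth(String url, String user, String pass,
                                          int maxRedirects) throws IOException {
        String auth = basicAuthHeader(user, pass);
        for (int i = 0; i <= maxRedirects; i++) {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setInstanceFollowRedirects(false); // handle redirects ourselves
            conn.setRequestProperty("Authorization", auth);
            int code = conn.getResponseCode();
            String location = conn.getHeaderField("Location");
            if (code / 100 != 3 || location == null) {
                return conn; // final response; caller checks 200 vs 401 etc.
            }
            conn.disconnect();
            url = new URL(new URL(url), location).toString(); // resolve relative Location
        }
        throw new IOException("too many redirects for " + url);
    }
}
```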