<h3><u>#general</u></h3><br><strong>@mailtobuchi: </strong>Hey everyone, what 
does this usually indicate? Did the query fail in the broker itself, or did it 
fail in the servers? It’s not quite clear.
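For readers with the same question: the broker forwards per-server processing exceptions back inside its response, so an entry with errorCode 200 (QueryExecutionError, as in the trace below) originated on a server even though the client receives it from the broker. A minimal sketch of pulling such entries apart, assuming the response carries a list of `{"errorCode": ..., "message": ...}` objects as the 0.3.x Java client surfaces them (the surrounding field names are illustrative):

```python
import json

# Sample broker response fragment, modeled on the exception in this thread.
# Assumption: server-side errors arrive as a list of
# {"errorCode": ..., "message": ...} objects; only these two fields are
# taken from the real payload.
broker_response = json.loads("""
{"exceptions": [
  {"errorCode": 200,
   "message": "QueryExecutionError:\\njava.lang.RuntimeException: Caught exception while running CombinePlanNode."}
]}
""")

def summarize_exceptions(response):
    """Return (errorCode, first line of message) for each processing exception."""
    return [
        (exc["errorCode"], exc["message"].splitlines()[0])
        for exc in response.get("exceptions", [])
    ]

# errorCode 200 is the code attached to QueryExecutionError in the trace,
# i.e. the failure happened while a server executed the query plan.
print(summarize_exceptions(broker_response))
```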

<br><strong>@mailtobuchi: 
</strong>```org.apache.pinot.client.PinotClientException: Query had processing exceptions:
[{"errorCode":200,"message":"QueryExecutionError:\njava.lang.RuntimeException: Caught exception while running CombinePlanNode.\n\tat org.apache.pinot.core.plan.CombinePlanNode.run(CombinePlanNode.java:148)\n\tat org.apache.pinot.core.plan.InstanceResponsePlanNode.run(InstanceResponsePlanNode.java:38)\n\tat org.apache.pinot.core.plan.GlobalPlanImplV0.execute(GlobalPlanImplV0.java:45)\n\tat org.apache.pinot.core.query.executor.ServerQueryExecutorV1Impl.processQuery(ServerQueryExecutorV1Impl.java:220)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.processQueryAndSerialize(QueryScheduler.java:152)\n\tat org.apache.pinot.core.query.scheduler.QueryScheduler.lambda$createQueryFutureTask$0(QueryScheduler.java:136)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:111)\n\tat shaded.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:58)\n\tat shaded.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:75)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat java.lang.Thread.run(Thread.java:748)"}]
        at org.apache.pinot.client.Connection.execute(Connection.java:129) ~[pinot-java-client-0.3.0.jar:0.3.0-9b2dc20c07dec6cf33df08c4444d996e8202c3ba]
        at org.apache.pinot.client.Connection.execute(Connection.java:96) ~[pinot-java-client-0.3.0.jar:0.3.0-9b2dc20c07dec6cf33df08c4444d996e8202c3ba]
        at org.apache.pinot.client.PreparedStatement.execute(PreparedStatement.java:72) ~[pinot-java-client-0.3.0.jar:0.3.0-9b2dc20c07dec6cf33df08c4444d996e8202c3ba]
        at org.hypertrace.core.query.service.pinot.PinotClientFactory$PinotClient.executeQuery(PinotClientFactory.java:82) ~[query-service-impl-0.1.1.jar:?]
        at org.hypertrace.core.query.service.pinot.PinotBasedRequestHandler.handleRequest(PinotBasedRequestHandler.java:113) ~[query-service-impl-0.1.1.jar:?]
        at org.hypertrace.core.query.service.QueryServiceImpl.execute(QueryServiceImpl.java:99) [query-service-impl-0.1.1.jar:?]
        at org.hypertrace.core.query.service.api.QueryServiceGrpc$MethodHandlers.invoke(QueryServiceGrpc.java:210) [query-service-api-0.1.1.jar:?]
        at io.grpc.stub.ServerCalls$UnaryServerCallHandler$UnaryServerCallListener.onHalfClose(ServerCalls.java:172) [grpc-stub-1.30.2.jar:1.30.2]
        at io.grpc.PartialForwardingServerCallListener.onHalfClose(PartialForwardingServerCallListener.java:35) [grpc-api-1.30.2.jar:1.30.2]
        at io.grpc.ForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:23) [grpc-api-1.30.2.jar:1.30.2]
        at io.grpc.ForwardingServerCallListener$SimpleForwardingServerCallListener.onHalfClose(ForwardingServerCallListener.java:40) [grpc-api-1.30.2.jar:1.30.2]
        at io.grpc.Contexts$ContextualizedServerCallListener.onHalfClose(Contexts.java:86) [grpc-api-1.30.2.jar:1.30.2]
        at io.grpc.internal.ServerCallImpl$ServerStreamListenerImpl.halfClosed(ServerCallImpl.java:331) [grpc-core-1.30.2.jar:1.30.2]
        at io.grpc.internal.ServerImpl$JumpToApplicationThreadServerStreamListener$1HalfClosed.runInContext(ServerImpl.java:820) [grpc-core-1.30.2.jar:1.30.2]
        at io.grpc.internal.ContextRunnable.run(ContextRunnable.java:37) [grpc-core-1.30.2.jar:1.30.2]
        at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) [grpc-core-1.30.2.jar:1.30.2]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
        at java.lang.Thread.run(Thread.java:834) [?:?]```<br><strong>@mayanks: 
</strong>The exception trace is from the server<br><strong>@mailtobuchi: 
</strong>What could be the cause of this?<br><strong>@mayanks: </strong>It 
seems something went wrong while executing the plan (building the operators). 
Unfortunately, it doesn't say much more. What version are you 
running?<br><strong>@pallavi.kumari: </strong>@pallavi.kumari has joined the 
channel<br><strong>@g.kishore: </strong>Coming soon - New cluster manager UI 
for Pinot! 
<https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMSfW2QiSG4bkQpnpkSL7FiK3MHb8libOHmhAW89nP5XKsO-2FKtTW98YBR0iHUSyO2hA-3D-3Dklia_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTy-2FiD6RhgorXOZrWtElsmlx6fJ1-2FEQuTtonRlVYDJ7MNqxbqYzrWTXOhcv3SXpRpLuK-2F8AhPDZFpbq98szWkQ1ql-2BSEwz28pZNSXpjkQvZrRjc2U5t8mUVeYgUpVcDB8y6lFuHIH2dTgpJyBStrZLKezxHH3AhCmcT7Fr-2FGe4era7OR2imn1vS1oM82nfYken4-3D>
 Feedback welcome!  Anyone with UX/UI experience who can help us make it 
better, please ping us :slightly_smiling_face:. The first version is read-only. 
This was one of the most requested features in recent polls 
<https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMRmWRpSwTQ1R-2Fu3Wa3xNjOCPY1ytDhgp-2BzNaYnys36gFAVlk_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTy-2FiD6RhgorXOZrWtElsmlxSx3YPRnbZI2Fct6UXbG9DDXyGmXreTt4skPk1ERSlEX-2BHKcSNkFknzk6PocrNB-2FjugY3iJRbx1UfcbFhNHeumzQkz0VSczGqDTnEvR2Nu-2B8uyhypSNxKWxlnnS4QzKKVvQfe-2FxJ-2FgfNf7WJ4VW7VRpW-2FFOfUpvRGbjW2f-2F1opK0-3D><br><strong>@damianoporta:
 </strong>Hello everybody! I would like to test Presto; I read it supports 
window functions. So, can I query Pinot with window functions using 
Presto?<br><strong>@snlee: </strong>Hi all, we are currently developing a 
segment merge service for Pinot, and the design document is available for 
general review. Please take a look and give us your feedback :slightly_smiling_face:
<https://u17000708.ct.sendgrid.net/ls/click?upn=1BiFF0-2FtVRazUn1cLzaiMc9VK8AZw4xfCWnhVjqO8F0bNvfRpSIoJOhuhAbSzAytnEzCBl5ZaR7eIR3XglrQKdKyAeCTAfLJtPToobyQxBXNRXaskvbsBHtj50efwVbV0jC6no0G-2FAxriyWJWt3z4g-3D-3DRFwb_vGLQYiKGfBLXsUt3KGBrxeq6BCTMpPOLROqAvDqBeTy-2FiD6RhgorXOZrWtElsmlxovXn3u-2Fn-2F69iNLRSHVkdSyfONmcDzSDE42tNRgUEuhujGXeOBXqQpZSAWScfLFmYExDFNTQYjX5UHPZwd-2BHK9-2B06gFUqf6dQaUrh-2Fqzv0BeT-2FXHlhDauKocFM-2FHfcPL8Kjfn6PAJQkEHdjAGrMWsDfxML5psYZi6GJ3HCbYZ8sU-3D><br><h3><u>#random</u></h3><br><strong>@pallavi.kumari:
 </strong>@pallavi.kumari has joined the 
channel<br><h3><u>#troubleshooting</u></h3><br><strong>@mailtobuchi: 
</strong>@mailtobuchi has joined the channel<br><strong>@mayanks: 
</strong>@mailtobuchi are there specific types of queries that are failing 
while others are passing?<br><strong>@mailtobuchi: </strong>We’re seeing 
slowness on some random queries with Pinot. So far, here are our observations:
• We didn’t tune the segment sizes, so we have smaller segments; some are 
around ~100MB, though in one table they reached 770MB per segment.
• On one of the tables, we noticed 10K segments. Queries to this table are 
sometimes failing with the exception that I posted in the <#CDRCA57FC|general> 
channel. 
• If we try the queries from the Pinot console, the response times are always 
better than what our service, which uses the Java Pinot client, 
sees.<br><strong>@mayanks: </strong>So failing queries can pass when run on 
console?<br><strong>@mayanks: </strong>Do you have a way to observe the metrics 
Pinot emits?<br><strong>@mayanks: </strong>It is possible that those queries 
are waiting on the server side long enough that they hit the timeout when they 
start executing.<br><strong>@yash.agarwal: </strong>Hey, I am getting the following 
error.
```Caused by: java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1.f$14 of type org.apache.spark.api.java.function.VoidFunction in instance of org.apache.spark.api.java.JavaRDDLike$$anonfun$foreach$1```
I am using
```PINOT_VERSION=0.4.0```
With the following overridden version configs to match the environment:
```&lt;scala.version&gt;2.11.8&lt;/scala.version&gt;
&lt;spark.version&gt;2.3.1.tgt.17&lt;/spark.version&gt;```
(the `2.3.1.tgt.17` Spark version is specific to target)
Env:
```Spark version 2.3.1.tgt.17
Using Scala version 2.11.8, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_73```
Run Command:
```spark-submit \
  --class org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand \
  --master yarn \
  --deploy-mode client \
  --conf "spark.driver.extraJavaOptions=-Dplugins.dir=${PINOT_DISTRIBUTION_DIR}/plugins -Dlog4j2.configurationFile=${PINOT_DISTRIBUTION_DIR}/conf/pinot-ingestion-job-log4j2.xml" \
  --conf "spark.driver.extraClassPath=${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar" \
  local://${PINOT_DISTRIBUTION_DIR}/lib/pinot-all-${PINOT_VERSION}-jar-with-dependencies.jar \
  -jobSpecFile /home_dir/z00290g/guestSdrGstDataSgl_sparkIngestionJobSpec.yaml```<br><strong>@g.kishore:
 </strong>@yash.agarwal were you able to rebuild the 
jar?<br><strong>@yash.agarwal: </strong>Yes. I built the jar with the updated 
Spark and Scala versions.<br><strong>@g.kishore: </strong>I 
see<br><strong>@g.kishore: </strong>@fx19880617 you had another command to 
build the Spark job jar directly?<br><strong>@fx19880617: </strong>The pinot-spark 
jar?<br><strong>@fx19880617: </strong>I’m also using that pinot-all jar<br>
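A note on the `ClassCastException` in the Spark thread above: a cast failure on `java.lang.invoke.SerializedLambda` usually means a serialized lambda was deserialized on an executor whose classloader sees a different copy of the defining class, which commonly happens when the fat jar is on the driver classpath but not shipped to the executors. A hedged sketch of the spark-submit from the thread with `spark.executor.extraClassPath` (a standard Spark property) added to mirror the driver setting; that the jar exists at the same path on the executor hosts is an assumption (otherwise `--jars` would be the alternative), and all paths are placeholders:

```python
import shlex

# Build the spark-submit argument list from the thread, with one addition:
# spark.executor.extraClassPath, so executors resolve the same pinot-all jar
# as the driver. Assumption: the jar is present at this path on executor
# hosts; dist_dir/version/job_spec are placeholders.
def build_spark_submit(dist_dir, version, job_spec):
    jar = f"{dist_dir}/lib/pinot-all-{version}-jar-with-dependencies.jar"
    return [
        "spark-submit",
        "--class", "org.apache.pinot.tools.admin.command.LaunchDataIngestionJobCommand",
        "--master", "yarn",
        "--deploy-mode", "client",
        "--conf", f"spark.driver.extraClassPath={jar}",
        # Ship/resolve the same jar on the executors so lambdas deserialize
        # against identical classes there.
        "--conf", f"spark.executor.extraClassPath={jar}",
        f"local://{jar}",
        "-jobSpecFile", job_spec,
    ]

cmd = build_spark_submit("/opt/pinot", "0.4.0", "/tmp/jobSpec.yaml")
print(" ".join(shlex.quote(c) for c in cmd))
```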
