#general


@ryantle1028: Hi, I'm trying to use HDFS with Pinot following the documentation, but it isn't working. My config is below.
Controller config:
```
pinot.service.role=CONTROLLER
pinot.cluster.name=pinot-uat
controller.host=pinot-uat01
controller.data.dir=
controller.local.temp.dir=/tmp/pinot/data/controller
controller.zk.str=172.19.131.116:2181,172.19.131.117:2181,172.19.131.118:2181
controller.enable.split.commit=true
controller.access.protocols.http.port=9000
controller.helix.cluster.name=pinot-uat
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/etc/hadoop/conf
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.principle=hdpt...@true.care
pinot.controller.segment.fetcher.hdfs.hadoop.kerberos.keytab=/data/apache-pinot/keytab/hdptest.keytab
controller.vip.host=pinotuat.true.care
controller.vip.port=9000
controller.port=9000
.hostname=true
pinot.server.grpc.enable=true
```
Startup script:
```
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_VERSION=2.6.0-cdh5.16.2
export HADOOP_GUAVA_VERSION=11.0.2
export HADOOP_GSON_VERSION=2.2.4
export GC_LOG_LOCATION=/data/apache-pinot/logs/
export PINOT_VERSION=0.8.0
export PINOT_DISTRIBUTION_DIR=/data/apache-pinot
export SERVER_CONF_DIR=/data/apache-pinot/conf
export ZOOKEEPER_ADDRESS=172.19.131.116:2181,172.19.131.117:2181,172.19.131.118:2181
export CLASSPATH_PREFIX="${HADOOP_HOME}/client/hadoop-hdfs-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-annotations-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-auth-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-common-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/guava-${HADOOP_GUAVA_VERSION}.jar:${HADOOP_HOME}/client/gson-${HADOOP_GSON_VERSION}.jar"
export JAVA_OPTS="-Xms8G -Xmx12G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xloggc:${GC_LOG_LOCATION}/gc-pinot-controller.log"
${PINOT_DISTRIBUTION_DIR}/bin/start-controller.sh -configFileName ${SERVER_CONF_DIR}/pinot-controller.conf
```
Error log:
```
2022/03/10 12:06:41.771 INFO [StartControllerCommand] [main] Executing command: StartController -configFileName /data/apache-pinot/conf/pinot-controller.conf
2022/03/10 12:06:41.843 INFO [StartServiceManagerCommand] [main] Executing command: StartServiceManager -clusterName pinot-uat -zkAddress 172.19.131.116:2181,172.19.131.117:2181,172.19.131.118:2181 -port -1 -bootstrapServices []
2022/03/10 12:06:41.843 INFO [StartServiceManagerCommand] [main] Starting a Pinot [SERVICE_MANAGER] at 0.012s since launch
2022/03/10 12:06:41.847 INFO [StartServiceManagerCommand] [main] Started Pinot [SERVICE_MANAGER] instance [ServiceManager_poc-pinot01_-1] at 0.016s since launch
2022/03/10 12:06:41.848 INFO [StartServiceManagerCommand] [main] Starting a Pinot [CONTROLLER] at 0.016s since launch
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/lib/hadoop/hadoop-auth-2.6.0-cdh5.16.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/core/Tracer$Builder
    at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2803)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:186)
    at org.apache.pinot.plugin.filesystem.HadoopPinotFS.init(HadoopPinotFS.java:65)
    at org.apache.pinot.spi.filesystem.PinotFSFactory.register(PinotFSFactory.java:52)
    at org.apache.pinot.spi.filesystem.PinotFSFactory.init(PinotFSFactory.java:72)
    at org.apache.pinot.controller.BaseControllerStarter.initPinotFSFactory(BaseControllerStarter.java:518)
    at org.apache.pinot.controller.BaseControllerStarter.setUpPinotController(BaseControllerStarter.java:358)
    at org.apache.pinot.controller.BaseControllerStarter.start(BaseControllerStarter.java:308)
    at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:123)
    at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:93)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:233)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:285)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:232)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:182)
    at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:149)
    at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:166)
    at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:186)
    at org.apache.pinot.tools.admin.PinotController.main(PinotController.java:35)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.Tracer$Builder
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
```
  @mayanks: What version of Hadoop and Java? Cc: @xiangfu0
  @ryantle1028: export HADOOP_VERSION=2.6.0-cdh5.16.2; Java is java-11-openjdk.x86_64 (since we use Pinot 0.8.0 we need a newer Java; the first time I tested with 0.9.3 it errored too).
  @ryantle1028: Do you have a recommendation for which versions work with HDFS? (Right now I'm testing against HDFS on CDH 5.16.2.)
  @ryantle1028: For the Hadoop client I'm using the Cloudera client jars.
  @ryantle1028: please help.
  @xiangfu0: Can you try putting the pinot-hdfs shaded jar on your classpath?
  @ryantle1028: You mean `export CLASSPATH_PREFIX=/data/apache-pinot/plugins/pinot-file-system/pinot-hdfs/pinot-hdfs-0.8.0-shaded.jar`?
  @ryantle1028:
```
[root@poc-pinot01 pinot-hdfs]# export CLASSPATH_PREFIX="${HADOOP_HOME}/client/hadoop-hdfs-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-annotations-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-auth-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/hadoop-common-${HADOOP_VERSION}.jar:${HADOOP_HOME}/client/guava-${HADOOP_GUAVA_VERSION}.jar:${HADOOP_HOME}/client/gson-${HADOOP_GSON_VERSION}.jar:/data/apache-pinot/plugins/pinot-file-system/pinot-hdfs/pinot-hdfs-0.8.0-shaded.jar"
[root@poc-pinot01 pinot-hdfs]# ${PINOT_DISTRIBUTION_DIR}/bin/start-controller.sh -configFileName ${SERVER_CONF_DIR}/pinot-controller.conf
[0.001s][warning][gc] -Xloggc is deprecated. Will use -Xlog:gc:/data/apache-pinot/logs//gc-pinot-controller.log instead.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/apache-pinot/lib/pinot-all-0.8.0-jar-with-dependencies.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/apache-pinot/plugins/pinot-environment/pinot-azure/pinot-azure-0.8.0-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/apache-pinot/plugins/pinot-file-system/pinot-s3/pinot-s3-0.8.0-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/apache-pinot/plugins/pinot-input-format/pinot-parquet/pinot-parquet-0.8.0-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/apache-pinot/plugins/pinot-metrics/pinot-yammer/pinot-yammer-0.8.0-shaded.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022/03/10 15:37:50.007 INFO [StartControllerCommand] [main] Executing command: StartController -configFileName /data/apache-pinot/conf/pinot-controller.conf
2022/03/10 15:37:50.086 INFO [StartServiceManagerCommand] [main] Executing command: StartServiceManager -clusterName pinot-uat -zkAddress 172.19.131.116:2181,172.19.131.117:2181,172.19.131.118:2181 -port -1 -bootstrapServices []
2022/03/10 15:37:50.087 INFO [StartServiceManagerCommand] [main] Starting a Pinot [SERVICE_MANAGER] at 0.013s since launch
2022/03/10 15:37:50.091 INFO [StartServiceManagerCommand] [main] Started Pinot [SERVICE_MANAGER] instance [ServiceManager_poc-pinot01_-1] at 0.017s since launch
2022/03/10 15:37:50.091 INFO [StartServiceManagerCommand] [main] Starting a Pinot [CONTROLLER] at 0.018s since launch
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.hadoop.security.authentication.util.KerberosUtil (file:/usr/lib/hadoop/hadoop-auth-2.6.0-cdh5.16.2.jar) to method sun.security.krb5.Config.getInstance()
WARNING: Please consider reporting this to the maintainers of org.apache.hadoop.security.authentication.util.KerberosUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/htrace/core/Tracer$Builder
    at org.apache.hadoop.fs.FsTracer.get(FsTracer.java:42)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2803)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:98)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2853)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2835)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:387)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:186)
    at org.apache.pinot.plugin.filesystem.HadoopPinotFS.init(HadoopPinotFS.java:65)
    at org.apache.pinot.spi.filesystem.PinotFSFactory.register(PinotFSFactory.java:52)
    at org.apache.pinot.spi.filesystem.PinotFSFactory.init(PinotFSFactory.java:72)
    at org.apache.pinot.controller.BaseControllerStarter.initPinotFSFactory(BaseControllerStarter.java:518)
    at org.apache.pinot.controller.BaseControllerStarter.setUpPinotController(BaseControllerStarter.java:358)
    at org.apache.pinot.controller.BaseControllerStarter.start(BaseControllerStarter.java:308)
    at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:123)
    at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:93)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:233)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:285)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:232)
    at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:182)
    at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:149)
    at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:166)
    at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:186)
    at org.apache.pinot.tools.admin.PinotController.main(PinotController.java:35)
Caused by: java.lang.ClassNotFoundException: org.apache.htrace.core.Tracer$Builder
    at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
    at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
    at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
    ... 23 more
```
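  (The `NoClassDefFoundError` above is for `org.apache.htrace.core.Tracer$Builder`, a class that ships in the htrace-core4 jar, which is not among the jars listed in `CLASSPATH_PREFIX`. A minimal sketch of appending it, assuming the Hadoop client directory contains an htrace-core4 jar; the exact jar name and location are assumptions and vary by distribution:)
```
# Hypothetical jar name/path -- check what your Hadoop distribution actually ships, e.g.:
#   ls ${HADOOP_HOME}/client/htrace-core4*.jar ${HADOOP_HOME}/lib/htrace-core4*.jar
export CLASSPATH_PREFIX="${CLASSPATH_PREFIX}:${HADOOP_HOME}/client/htrace-core4-4.0.1-incubating.jar"
```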
  @ryantle1028:
```
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: java.lang.RuntimeException: Could not initialize HadoopPinotFS
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.plugin.filesystem.HadoopPinotFS.init(HadoopPinotFS.java:69) ~[pinot-hdfs-0.8.0-shaded.jar:0.8.0-9a0f41bc24243ff74315723b0153b534c2596e30]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.spi.filesystem.PinotFSFactory.register(PinotFSFactory.java:52) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.spi.filesystem.PinotFSFactory.init(PinotFSFactory.java:72) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.controller.BaseControllerStarter.initPinotFSFactory(BaseControllerStarter.java:518) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.controller.BaseControllerStarter.setUpPinotController(BaseControllerStarter.java:358) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.controller.BaseControllerStarter.start(BaseControllerStarter.java:308) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.service.PinotServiceManager.startController(PinotServiceManager.java:123) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.service.PinotServiceManager.startRole(PinotServiceManager.java:93) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.lambda$startBootstrapServices$0(StartServiceManagerCommand.java:233) ~[pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startPinotService(StartServiceManagerCommand.java:285) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.startBootstrapServices(StartServiceManagerCommand.java:232) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.command.StartServiceManagerCommand.execute(StartServiceManagerCommand.java:182) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.command.StartControllerCommand.execute(StartControllerCommand.java:149) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.PinotAdministrator.execute(PinotAdministrator.java:166) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.pinot.tools.admin.PinotAdministrator.main(PinotAdministrator.java:186) [pinot-all-0.8.0-jar-with-dependencies.jar:0.8.0-c4ceff06d21fc1c1b88469a8dbae742a4b609808]
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: Caused by: java.io.IOException: No FileSystem for scheme: hdfs
Mar 10 16:36:34 poc-pinot01 pinot-admin.sh: at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2644) ~[pinot-orc-0.8.0-shaded.jar:0.8.0-9a0f41bc24243ff74315723b0153b534c2596e30]
```
  @ryantle1028: The error is `ERROR [PinotFSFactory] [main] Could not instantiate file system for class org.apache.pinot.plugin.filesystem.HadoopPinotFS with scheme hdfs`
  @ken: What does your config look like for the hdfs scheme?
  @ken: Normally it needs to look something like:
```
pinotFSSpecs:
  - scheme: hdfs
    className: org.apache.pinot.plugin.filesystem.HadoopPinotFS
    configs:
      hadoop.conf.path: '/mnt/hadoop/etc/hadoop/'
```
Where `hadoop.conf.path` is the appropriate path to the directory where the Hadoop configuration files live (which is needed for Pinot to talk to the HDFS cluster).
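  (Ken's snippet looks like the ingestion job-spec YAML format. For a controller started from a `.conf` properties file, as in the config pasted earlier in this thread, the corresponding HDFS settings are the property keys already shown there; a minimal sketch:)
```
pinot.controller.storage.factory.class.hdfs=org.apache.pinot.plugin.filesystem.HadoopPinotFS
pinot.controller.storage.factory.hdfs.hadoop.conf.path=/etc/hadoop/conf
pinot.controller.segment.fetcher.protocols=file,http,hdfs
pinot.controller.segment.fetcher.hdfs.class=org.apache.pinot.common.utils.fetcher.PinotFSSegmentFetcher
```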
@osskalluri: @osskalluri has joined the channel
@mehtashailee21: @mehtashailee21 has joined the channel
@erwin.sapuay1: @erwin.sapuay1 has joined the channel
@sanat.pattanaik: @sanat.pattanaik has joined the channel
@sunhee.bigdata: Hello! I'm working on a Pinot PoC. Does Pinot have query audit logging? I can find the query log in the controller/broker logs, but it contains only the query, not the user, and I can't find any config or docs about query audit logging in Pinot.

#random


@ashwinviswanath: @ashwinviswanath has left the channel
@osskalluri: @osskalluri has joined the channel
@mehtashailee21: @mehtashailee21 has joined the channel
@jkinzel: @jkinzel has left the channel
@erwin.sapuay1: @erwin.sapuay1 has joined the channel
@sanat.pattanaik: @sanat.pattanaik has joined the channel

#troubleshooting


@dadelcas: Hello, I'm upgrading my cluster from 0.8.0 to 0.9.3. I have hybrid tables with upsert that are incompatible with 0.9.3: there is a constraint that a realtime table cannot have a RealtimeToOfflineSegmentsTask configured. What are the options for migrating? Realtime tables can't be backfilled easily/quickly.
  @dadelcas: I've seen the check has been relaxed; is 0.9 going to be patched, or do I have to wait until 0.10.0 is available? I think the release has been cut, but the website doesn't have the release notes yet.
  @dadelcas: Can't find the tag on GitHub either.
  @mayanks: 0.10 is being built right now, should be available real soon.
  @dadelcas: Cool!
@osskalluri: @osskalluri has joined the channel
@mehtashailee21: @mehtashailee21 has joined the channel
@mehtashailee21: Hello there, I am setting up Apache Pinot for the first time. I have set up ZooKeeper, a broker, a controller and a server. Now when I try to run the AddTable container process using docker, it returns a null pointer exception. Can someone help me debug this issue?
Command:
```
docker run \
  --network=pinot-demo_default \
  --name pinot-batch-table-creation \
  -v /home/shailee/projects/DS/data/lineorder_offline.json:/lineorder_offline.json \
  -v /home/shailee/projects/DS/data/lineorder.json:/lineorder.json \
  apachepinot/pinot:latest AddTable \
  -schemaFile /lineorder_offline.json \
  -tableConfigFile /lineorder.json \
  -controllerHost manual-pinot-controller \
  -controllerPort 9000 \
  -exec
```
Error in controller:
```
ERROR [WebApplicationExceptionMapper] [grizzly-http-server-4] Server error:
java.lang.NullPointerException: null
    at java.util.Objects.requireNonNull(Objects.java:221) ~[?:?]
    at java.util.Optional.<init>(Optional.java:107) ~[?:?]
    at java.util.Optional.of(Optional.java:120) ~[?:?]
    at org.apache.pinot.controller.api.access.AccessControlUtils.validatePermission(AccessControlUtils.java:48) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.apache.pinot.controller.api.resources.PinotSchemaRestletResource.addSchema(PinotSchemaRestletResource.java:194) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:?]
    at java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
    at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory.lambda$static$0(ResourceMethodInvocationHandlerFactory.java:52) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:124) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:167) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:219) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:79) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:469) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:391) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:80) ~[pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:253) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:248) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.internal.Errors$1.call(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:292) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:274) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.internal.Errors.process(Errors.java:244) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:265) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:232) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:679) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpContainer.service(GrizzlyHttpContainer.java:353) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.grizzly.http.server.HttpHandler$1.run(HttpHandler.java:200) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.doWork(AbstractThreadPool.java:569) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at org.glassfish.grizzly.threadpool.AbstractThreadPool$Worker.run(AbstractThreadPool.java:549) [pinot-all-0.10.0-SNAPSHOT-jar-with-dependencies.jar:0.10.0-SNAPSHOT-b7c181a77289fccb10cea139a097efb5d82f634a]
    at java.lang.Thread.run(Thread.java:829) [?:?]
```
  @npawar: Mind sharing the table config? Have you added the schema first?
  @mark.needham: I think your schema name might be null
  @mark.needham: in the schema file, `schemaName`, e.g.:
```
{
  "schemaName": "movies",
  "dimensionFieldSpecs": [
    { "name": "id", "dataType": "LONG" },
    { "name": "title", "dataType": "STRING" },
    { "name": "genre", "dataType": "STRING" },
    { "name": "year", "dataType": "INT" }
  ]
}
```
  @mehtashailee21: How do I go about debugging issues in the schema? Is schema validation logged anywhere?
  @mehtashailee21: Thanks @npawar, I had missed the schemaName.
  @npawar: We have a schema validate method, which does get called before adding the schema. But it looks like `AccessControlUtils.validatePermission` happens before `validateSchema`, and validatePermission breaks on the missing schema name. Let me submit a fix to move schema validation before the permission check, so that the error message is clearer.
  @npawar: If you'd called the `POST /schemas/validate` endpoint, you'd have seen the validation message as expected.
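  (A minimal sketch of calling that endpoint with curl, assuming the controller is reachable on localhost:9000 and using the schema file from the docker command above; the host, port and file name are placeholders for your own setup:)
```
# Validate a schema file against the controller before adding it.
# localhost:9000 and lineorder_offline.json are assumptions -- adjust to your environment.
curl -X POST -H "Content-Type: application/json" \
  -d @lineorder_offline.json \
  "http://localhost:9000/schemas/validate"
```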
  @mehtashailee21: thank you for that :smile:
@jkinzel: @jkinzel has left the channel
@luisfernandez: Hey friends, we are starting to issue queries in our production cluster and we are seeing the following: `Failed to find time boundary info for hybrid table`. Does anyone know if this is bad? Is there more info about it?
  @mayanks: Yes, this is not good.
  @mayanks: A hybrid table internally maintains a time boundary and uses it to route a query to the offline table for data older than the time boundary, and to the realtime table for data newer than the time boundary.
  @mayanks: Is your table hybrid, and does it have valid ideal-state/external-view for the OFFLINE and REALTIME counterparts?
  @mayanks: If your OFFLINE table is empty, then the error won’t cause a functional issue. But the setup won’t make sense in that case.
  @luisfernandez: The OFFLINE table is empty right now because the task that moves segments to offline hasn't kicked in yet
  @mayanks: Ah ok, you can skip the error then
  @luisfernandez: Does that mean that once the OFFLINE table starts getting data we shouldn't see the warning anymore?
@erwin.sapuay1: @erwin.sapuay1 has joined the channel
@sanat.pattanaik: @sanat.pattanaik has joined the channel
@tony: I have a table showing BAD because a handful of segments are only on one of two servers. I am trying to "rebalance servers" to fix it, and I see `"status": "IN_PROGRESS"` but nothing in the controller logs other than ```INFO [CustomRebalancer] [HelixController-pipeline-default-pinot-(3cd60663_DEFAULT)] Computing BestPossibleMapping for node_reboot_events_REALTIME``` and ```WARN [SegmentStatusChecker] [pool-10-thread-4] Table node_reboot_events_REALTIME has 1 replicas, below replication threshold :2``` What status should I expect to see?
  @mayanks: Do you know why the segments were BAD? If it was due to a server going down, it might be better to bring the server back up?
  @tony: The servers are all up. I am not sure how it happened. I know which segments, it is 7 of ~600
  @mayanks: Can you check external view in ZK (to eliminate UI bug)?
  @mayanks: For such a scenario, rebalance is not the way to solve it.
  @tony: Checking ZK now. But for rebalance status, how can I tell if anything is happening?
  @mayanks: Not at the moment, we need to add it (looking for volunteers).
  @tony: Oh well. For status, where should I look in ZK? `pinot/INSTANCES/<server>/CURRENTSTATE/xxx/<tablename>` does not show the segment
  @tony: And what is the way to solve this scenario?
  @tony: (and thanks for your help)
  @tony: ZK shows the segment as error on the server that shows an error in the UI:
```
"node_reboot_events__1__631__20220124T1158Z": {
  "CURRENT_STATE": "ERROR"
},
```

#pinot-dev


@osskalluri: @osskalluri has joined the channel

#pinot-trino


@aaron.weiss: @aaron.weiss has joined the channel