Hi Luoc,
Any pointers on the validation error mentioned in my last email? Thanks in
advance.


Regards
Prabhakar

On Tue, Oct 5, 2021 at 4:38 PM Prabhakar Bhosaale <bhosale....@gmail.com>
wrote:

> Hi Luoc,
> I gave full permissions to the UDF folder; that error is gone and the
> Drillbit started successfully. But when I query the JSON file, I get the
> following error message.
>
> 2021-10-05 16:33:18,428 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman]
> INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id
> 1ea3cf09-19d4-55fe-3b84-6235e7f1eae6 issued by anonymous: select * from
> archival_hdfs.`mytable`
> 2021-10-05 16:33:18,435 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman]
> INFO o.a.d.e.p.s.conversion.SqlConverter - User Error Occurred: Invalid
> range: [0..-1) (Invalid range: [0..-1))
> org.apache.drill.common.exceptions.UserException: VALIDATION ERROR:
> Invalid range: [0..-1)
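>
> (The "Invalid range" error itself is opaque, but it may be worth
> double-checking that the `archival_hdfs` plugin definition also carries a
> full HDFS URI. A dfs-style storage plugin normally looks roughly like the
> sketch below; the host, port, and paths are placeholders, not values taken
> from this thread.)
>
> ```
> {
>   "type": "file",
>   "enabled": true,
>   "connection": "hdfs://<namenode-host>:8020",
>   "workspaces": {
>     "root": { "location": "/data", "writable": false, "defaultInputFormat": "json" }
>   },
>   "formats": { "json": { "type": "json" } }
> }
> ```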
>
> Regards
> Prabhakar
>
> On Tue, Oct 5, 2021 at 2:28 PM luoc <l...@apache.org> wrote:
>
>> Hi Prabhakar,
>>   If you remove the configured UDF block, can all the Drillbits start
>> properly?
>> I mean, if the host is configured but not working, we need to locate the
>> faulty configuration block first.
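>>
>> For isolation, a minimal drill-override.conf without the udf block could
>> look like the sketch below (values copied from the file you posted; this is
>> only to test startup, not a recommended config):
>>
>> ```
>> drill.exec: {
>>   cluster-id: "drillcluster",
>>   zk: {
>>     connect: "10.81.68.6:2181,10.81.68.110:2181,10.81.70.139:2181",
>>     root: "user/pstore"
>>   }
>> }
>> ```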
>>
>> > On Oct 4, 2021, at 1:13 PM, Prabhakar Bhosaale <bhosale....@gmail.com> wrote:
>> >
>> > hi Luoc,
>> >
>> > I already tried giving the HDFS host name in the UDF section, and even
>> > then it gives the same error.
>> > My understanding of that element is that it just tells Drill what kind of
>> > file system to use. I looked for documentation explaining every element
>> > of that configuration file but could not find any. If you could give me
>> > some pointers to the documentation, that would be great. Thanks.
>> >
>> > Regards
>> > Prabhakar
>> >
>> >
>> >
>> > On Sat, Oct 2, 2021 at 1:08 PM luoc <l...@apache.org> wrote:
>> >
>> >> Hi Prabhakar,
>> >>
>> >>  Which configuration block contains the host name you mentioned? I see
>> >> it in the UDF block:
>> >>
>> >> ```
>> >>    # instead of default taken from Hadoop configuration
>> >>    fs: "hdfs:///",
>> >> ```
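>> >>
>> >> If that block is the problem, the value presumably needs the NameNode
>> >> host as well, along these lines (host and port are placeholders; the
>> >> HDFS RPC port is commonly 8020 or 9000, depending on the cluster):
>> >>
>> >> ```
>> >>    fs: "hdfs://<namenode-host>:8020/",
>> >> ```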
>> >>
>> >>> On Oct 2, 2021, at 3:11 PM, Prabhakar Bhosaale <bhosale....@gmail.com> wrote:
>> >>>
>> >>> Hi Luoc,
>> >>> Could you please help with it?  thanks
>> >>>
>> >>> Regards
>> >>> Prabhakar
>> >>>
>> >>>> On Fri, Oct 1, 2021 at 4:55 PM Prabhakar Bhosaale <bhosale....@gmail.com>
>> >>>> wrote:
>> >>>> Hi Luoc,
>> >>>> I have already given the host name. Below is the complete file for
>> >>>> your reference. I am not sure where else to give the host name.
>> >>>> # Licensed to the Apache Software Foundation (ASF) under one
>> >>>> # or more contributor license agreements.  See the NOTICE file
>> >>>> # distributed with this work for additional information
>> >>>> # regarding copyright ownership.  The ASF licenses this file
>> >>>> # to you under the Apache License, Version 2.0 (the
>> >>>> # "License"); you may not use this file except in compliance
>> >>>> # with the License.  You may obtain a copy of the License at
>> >>>> #
>> >>>> # http://www.apache.org/licenses/LICENSE-2.0
>> >>>> #
>> >>>> # Unless required by applicable law or agreed to in writing, software
>> >>>> # distributed under the License is distributed on an "AS IS" BASIS,
>> >>>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
>> >> implied.
>> >>>> # See the License for the specific language governing permissions and
>> >>>> # limitations under the License.
>> >>>> #
>> >>>> #  This file tells Drill to consider this module when class path
>> >> scanning.
>> >>>> #  This file can also include any supplementary configuration
>> >> information.
>> >>>> #  This file is in HOCON format, see
>> >>>> https://github.com/typesafehub/config/blob/master/HOCON.md for more
>> >>>> information.
>> >>>> drill.logical.function.packages += "org.apache.drill.exec.expr.fn.impl"
>> >>>> drill.exec: {
>> >>>> cluster-id: "drillcluster"
>> >>>> rpc: {
>> >>>> user: {
>> >>>>   server: {
>> >>>>     port: 31010
>> >>>>     threads: 1
>> >>>>   }
>> >>>>   client: {
>> >>>>     threads: 1
>> >>>>   }
>> >>>> },
>> >>>> bit: {
>> >>>>   server: {
>> >>>>     port : 31011,
>> >>>>     retry:{
>> >>>>       count: 7200,
>> >>>>       delay: 500
>> >>>>     },
>> >>>>     threads: 1
>> >>>>   }
>> >>>> },
>> >>>> use.ip : false
>> >>>> },
>> >>>> operator: {
>> >>>> packages += "org.apache.drill.exec.physical.config"
>> >>>> },
>> >>>> optimizer: {
>> >>>> implementation: "org.apache.drill.exec.opt.IdentityOptimizer"
>> >>>> },
>> >>>> functions: ["org.apache.drill.expr.fn.impl"],
>> >>>> storage: {
>> >>>> packages += "org.apache.drill.exec.store",
>> >>>> file: {
>> >>>>   text: {
>> >>>>     buffer.size: 262144,
>> >>>>     batch.size: 4000
>> >>>>   },
>> >>>>   partition.column.label: "dir"
>> >>>> },
>> >>>> # The action on the storage-plugins-override.conf after its use.
>> >>>> # Possible values are "none" (default), "rename", "remove"
>> >>>> action_on_plugins_override_file: "none"
>> >>>> },
>> >>>> zk: {
>> >>>> connect: "10.81.68.6:2181,10.81.68.110:2181,10.81.70.139:2181",
>> >>>> root: "user/pstore",
>> >>>> refresh: 500,
>> >>>> timeout: 5000,
>> >>>> retry: {
>> >>>>   count: 7200,
>> >>>>   delay: 500
>> >>>> }
>> >>>> # This option controls whether Drill specifies ACLs when it creates
>> >>>> # znodes.
>> >>>> # If this is 'false', then anyone has all privileges for all Drill
>> >>>> # znodes.
>> >>>> # This corresponds to ZOO_OPEN_ACL_UNSAFE.
>> >>>> # Setting this flag to 'true' enables the provider specified in
>> >>>> # "acl_provider"
>> >>>> apply_secure_acl: false,
>> >>>> # This option specifies the ACL provider to be used by Drill.
>> >>>> # Custom ACL providers can be provided in the Drillbit classpath and
>> >>>> # Drill can be made to pick them
>> >>>> # by changing this option.
>> >>>> # Note: This option has no effect if "apply_secure_acl" is 'false'
>> >>>> #
>> >>>> # The default "creator-all" will setup ACLs such that
>> >>>> #    - Only the Drillbit user will have all privileges (create, delete,
>> >>>> #      read, write, admin). Same as ZOO_CREATOR_ALL_ACL
>> >>>> #    - Other users will only be able to read the cluster-discovery
>> >>>> #      (list of Drillbits in the cluster) znodes.
>> >>>> #
>> >>>> acl_provider: "creator-all"
>> >>>> },
>> >>>> http: {
>> >>>> enabled: true,
>> >>>> ssl_enabled: false,
>> >>>> port: 8047
>> >>>> session_max_idle_secs: 3600, # Default value 1hr
>> >>>> cors: {
>> >>>>   enabled: false,
>> >>>>   allowedOrigins: ["null"],
>> >>>>   allowedMethods: ["GET", "POST", "HEAD", "OPTIONS"],
>> >>>>   allowedHeaders: ["X-Requested-With", "Content-Type", "Accept",
>> >>>> "Origin"],
>> >>>>   credentials: true
>> >>>> },
>> >>>> auth: {
>> >>>>     # Http Auth mechanisms to configure. If not provided but user.auth
>> >>>>     # is enabled then default value is ["FORM"].
>> >>>>     mechanisms: ["BASIC", "FORM", "SPNEGO"],
>> >>>>     # Spnego principal to be used by WebServer when Spnego
>> >>>>     # authentication is enabled.
>> >>>>     spnego.principal: "HTTP://<localhost>"
>> >>>>     # Location to keytab file for above spnego principal
>> >>>>     spnego.keytab: "<keytab_file_location>";
>> >>>> },
>> >>>> jetty: {
>> >>>>   server: {
>> >>>>     # development option which allows to log Jetty server state
>> >>>>     # after start
>> >>>>     dumpAfterStart: false,
>> >>>>     # Optional params to set on Jetty's
>> >>>>     # org.eclipse.jetty.util.ssl.SslContextFactory when
>> >>>>     # drill.exec.http.ssl_enabled
>> >>>>     sslContextFactory: {
>> >>>>       # allows to specify cert to use when multiple non-SNI
>> >>>>       # certificates are available.
>> >>>>       certAlias: "certAlias",
>> >>>>       # path to file that contains Certificate Revocation List
>> >>>>       crlPath: "/etc/file.crl",
>> >>>>       # enable Certificate Revocation List Distribution Points Support
>> >>>>       enableCRLDP: false,
>> >>>>       # enable On-Line Certificate Status Protocol support
>> >>>>       enableOCSP: false,
>> >>>>       # when set to "HTTPS" hostname verification will be enabled
>> >>>>       endpointIdentificationAlgorithm: "HTTPS",
>> >>>>       # accepts exact cipher suite names and/or regular expressions.
>> >>>>       excludeCipherSuites: ["SSL_DHE_DSS_WITH_DES_CBC_SHA"],
>> >>>>       # list of TLS/SSL protocols to exclude
>> >>>>       excludeProtocols: ["TLSv1.1"],
>> >>>>       # accepts exact cipher suite names and/or regular expressions.
>> >>>>       includeCipherSuites: ["SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA",
>> >>>> "SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA"],
>> >>>>       # list of TLS/SSL protocols to include
>> >>>>       includeProtocols: ["TLSv1.2", "TLSv1.3"],
>> >>>>       # the algorithm name (default "SunX509") used by the
>> >>>>       # javax.net.ssl.KeyManagerFactory
>> >>>>       keyManagerFactoryAlgorithm: "SunX509",
>> >>>>       # classname of custom java.security.Provider implementation
>> >>>>       keyStoreProvider: null,
>> >>>>       # type of key store (default "JKS")
>> >>>>       keyStoreType: "JKS",
>> >>>>       # max number of intermediate certificates in the certificate chain
>> >>>>       maxCertPathLength: -1,
>> >>>>       # set true if ssl needs client authentication
>> >>>>       needClientAuth: false,
>> >>>>       # location of the OCSP Responder
>> >>>>       ocspResponderURL: "",
>> >>>>       # javax.net.ssl.SSLContext provider
>> >>>>       provider: null,
>> >>>>       # whether TLS renegotiation is allowed
>> >>>>       renegotiationAllowed: false,
>> >>>>       # number of renegotiations allowed for this connection
>> >>>>       # (-1 for unlimited, default 5).
>> >>>>       renegotiationLimit: 5,
>> >>>>       # algorithm name for java.security.SecureRandom instances.
>> >>>>       # https://docs.oracle.com/javase/8/docs/technotes/guides/security/StandardNames.html#SecureRandom
>> >>>>       secureRandomAlgorithm: "NativePRNG",
>> >>>>       # set the flag to enable SSL Session caching
>> >>>>       sessionCachingEnabled: false,
>> >>>>       # set if you want to bound session cache size
>> >>>>       sslSessionCacheSize: -1,
>> >>>>       # session timeout in seconds.
>> >>>>       sslSessionTimeout: -1,
>> >>>>       # the algorithm name (default "SunX509") used by the
>> >>>>       # javax.net.ssl.TrustManagerFactory
>> >>>>       trustManagerFactoryAlgorithm: "SunX509",
>> >>>>       # provider of the trust store
>> >>>>       trustStoreProvider: null,
>> >>>>       # type of the trust store (default "JKS")
>> >>>>       trustStoreType: "JKS",
>> >>>>       # sets whether the local cipher suites preference should be
>> >>>>       # honored.
>> >>>>       useCipherSuiteOrder: false,
>> >>>>       # true if SSL certificates have to be validated
>> >>>>       validateCerts: false,
>> >>>>       # true if SSL certificates of the peer have to be validated
>> >>>>       validatePeerCerts: false,
>> >>>>       # true if SSL wants client authentication.
>> >>>>       wantClientAuth: false
>> >>>>     },
>> >>>>     response: {
>> >>>>       # any response headers with constant values may be configured
>> >>>>       # like this
>> >>>>       headers: {
>> >>>>         "X-XSS-Protection": "1; mode=block",
>> >>>>         "X-Content-Type-Options": "nosniff",
>> >>>>         "Strict-Transport-Security":
>> >>>> "max-age=31536000;includeSubDomains",
>> >>>>         # NOTE: 'unsafe-inline' is required until DRILL-7642 is
>> >>>> resolved
>> >>>>         "Content-Security-Policy": "default-src https:; script-src
>> >>>> 'unsafe-inline' https:; style-src 'unsafe-inline' https:; font-src
>> data:
>> >>>> https:; img-src data: https:"
>> >>>>       }
>> >>>>     }
>> >>>>   }
>> >>>> }
>> >>>> },
>> >>>> # Below SSL parameters need to be set for custom transport layer settings.
>> >>>> ssl: {
>> >>>> # If not provided then the default value is java system property
>> >>>> # javax.net.ssl.keyStore value
>> >>>> keyStorePath: "/keystore.file",
>> >>>> # If not provided then the default value is java system property
>> >>>> # javax.net.ssl.keyStorePassword value
>> >>>> keyStorePassword: "ks_passwd",
>> >>>> # If not provided then the default value is java system property
>> >>>> # javax.net.ssl.trustStore value
>> >>>> trustStorePath: "/truststore.file",
>> >>>> # If not provided then the default value is java system property
>> >>>> # javax.net.ssl.trustStorePassword value
>> >>>> trustStorePassword: "ts_passwd"
>> >>>> },
>> >>>> functions: ["org.apache.drill.expr.fn.impl"],
>> >>>> network: {
>> >>>> start: 35000
>> >>>> },
>> >>>> work: {
>> >>>> max.width.per.endpoint: 5,
>> >>>> global.max.width: 100,
>> >>>> affinity.factor: 1.2,
>> >>>> executor.threads: 4
>> >>>> },
>> >>>> sys.store.provider: {
>> >>>> class: "org.apache.drill.exec.store.sys.store.provider.ZookeeperPersistentStoreProvider",
>> >>>> # The following section is used by ZkPStoreProvider
>> >>>> zk: {
>> >>>>   blobroot: "file:///var/log/drill"
>> >>>> },
>> >>>> # The following section is only required by LocalPStoreProvider
>> >>>> local: {
>> >>>>   path: "/tmp/drill",
>> >>>>   write: true
>> >>>> }
>> >>>> },
>> >>>> impersonation: {
>> >>>> enabled: false,
>> >>>> max_chained_user_hops: 3
>> >>>> },
>> >>>> security.user.auth {
>> >>>> enabled: false,
>> >>>> packages += "org.apache.drill.exec.rpc.user.security",
>> >>>> # There are 2 implementations available out of the box with the
>> >>>> # annotation UserAuthenticatorTemplate
>> >>>> # Annotation type "pam" provides an implementation using JPAM
>> >>>> # Annotation type "pam4j" provides an implementation using libpam4j
>> >>>> # Based on the annotation type configured below, the corresponding
>> >>>> # authenticator is used.
>> >>>> impl: "pam",
>> >>>> pam_profiles: [ "sudo", "login" ]
>> >>>> },
>> >>>> trace: {
>> >>>> directory: "/tmp/drill-trace",
>> >>>> filesystem: "file:///"
>> >>>> },
>> >>>> tmp: {
>> >>>> directories: ["/tmp/drill"],
>> >>>> filesystem: "drill-local:///"
>> >>>> },
>> >>>> buffer:{
>> >>>> impl: "org.apache.drill.exec.work.batch.UnlimitedRawBatchBuffer",
>> >>>> size: "100",
>> >>>> spooling: {
>> >>>>   delete: false,
>> >>>>   size: 100000000
>> >>>> }
>> >>>> },
>> >>>> cache.hazel.subnets: ["*.*.*.*"],
>> >>>> spill: {
>> >>>>  # These options are common to all spilling operators.
>> >>>>  # They can be overridden, per operator (but this is just for
>> >>>>  # backward compatibility, and may be deprecated in the future)
>> >>>>  directories : [ "/tmp/drill/spill" ],
>> >>>>  fs : "file:///"
>> >>>> }
>> >>>> sort: {
>> >>>> purge.threshold : 100,
>> >>>> external: {
>> >>>>   batch.size : 4000,
>> >>>>   spill: {
>> >>>>     batch.size : 4000,
>> >>>>     group.size : 100,
>> >>>>     threshold : 200,
>> >>>>     # The 2 options below override the common ones
>> >>>>     # they should be deprecated in the future
>> >>>>     directories : [ "/tmp/drill/spill" ],
>> >>>>     fs : "file:///"
>> >>>>   }
>> >>>> }
>> >>>> },
>> >>>> hashagg: {
>> >>>> # The partitions divide the work inside the hashagg, to ease
>> >>>> # handling spilling. This initial figure is tuned down when
>> >>>> # memory is limited.
>> >>>> #  Setting this option to 1 disables spilling!
>> >>>> num_partitions: 32,
>> >>>> spill: {
>> >>>>     # The 2 options below override the common ones
>> >>>>     # they should be deprecated in the future
>> >>>>     directories : [ "/tmp/drill/spill" ],
>> >>>>     fs : "file:///"
>> >>>> }
>> >>>> },
>> >>>> memory: {
>> >>>> top.max: 1000000000000,
>> >>>> operator: {
>> >>>>   max: 20000000000,
>> >>>>   initial: 10000000
>> >>>> },
>> >>>> fragment: {
>> >>>>   max: 20000000000,
>> >>>>   initial: 20000000
>> >>>> }
>> >>>> },
>> >>>> scan: {
>> >>>> threadpool_size: 8,
>> >>>> decode_threadpool_size: 1
>> >>>> },
>> >>>> debug.error_on_leak: true,
>> >>>> # Settings for Dynamic UDFs (see
>> >>>> # https://issues.apache.org/jira/browse/DRILL-4726 for details).
>> >>>> udf: {
>> >>>> # number of retry attempts to update remote function registry
>> >>>> # if registry version was changed during update
>> >>>> retry-attempts: 10,
>> >>>> directory: {
>> >>>>   # Override this property if custom file system should be used to
>> >>>>   # create remote directories
>> >>>>   # instead of default taken from Hadoop configuration
>> >>>>   fs: "hdfs:///",
>> >>>>   # Set this property if custom absolute root should be used for
>> >>>>   # remote directories
>> >>>>   root: "user/udf"
>> >>>> }
>> >>>> },
>> >>>> # Settings for Temporary Tables (see
>> >>>> # https://issues.apache.org/jira/browse/DRILL-4956 for details).
>> >>>> # Temporary table can be created ONLY in default temporary workspace.
>> >>>> # Full workspace name should be indicated (including schema and
>> >>>> # workspace separated by dot).
>> >>>> # Workspace MUST be file-based and writable. Workspace name is
>> >>>> # case-sensitive.
>> >>>> default_temporary_workspace: "dfs.tmp"
>> >>>>
>> >>>> # Enable and provide additional parameters for Client-Server
>> >>>> # communication over SSL
>> >>>> # see also the javax.net.ssl parameters below
>> >>>> security.user.encryption.ssl: {
>> >>>> # Set this to true to enable all client-server communication to occur
>> >>>> # over SSL.
>> >>>> enabled: false,
>> >>>> # key password is optional if it is the same as the keystore password
>> >>>> keyPassword: "key_passwd",
>> >>>> # Optional handshakeTimeout in milliseconds. Default is 10000 ms
>> >>>> # (10 seconds)
>> >>>> handshakeTimeout: 10000,
>> >>>> # protocol is optional. Drill will default to TLSv1.2. Valid values
>> >>>> # depend on protocol versions enabled for the underlying security
>> >>>> # provider. For JSSE these are: SSL, SSLV2, SSLV3,
>> >>>> # TLS, TLSV1, TLSv1.1, TLSv1.2
>> >>>> protocol: "TLSv1.2",
>> >>>> # ssl provider. May be "JDK" or "OPENSSL". Default is "JDK"
>> >>>> provider: "JDK"
>> >>>> }
>> >>>>
>> >>>> # HTTP client proxy configuration
>> >>>> net_proxy: {
>> >>>>
>> >>>> # HTTP URL. Omit if from a Linux env var
>> >>>> # See
>> >>>> # https://www.shellhacks.com/linux-proxy-server-settings-set-proxy-command-line/
>> >>>> http_url: "",
>> >>>>
>> >>>> # Explicit HTTP setup, used if URL is not set
>> >>>> http: {
>> >>>>   type: "none", # none, http, socks. Blank same as none.
>> >>>>   host: "",
>> >>>>   port: 80,
>> >>>>   user_name: "",
>> >>>>   password: ""
>> >>>> },
>> >>>>
>> >>>> # HTTPS URL. Omit if from a Linux env var
>> >>>> https_url: "",
>> >>>>
>> >>>> # Explicit HTTPS setup, used if URL is not set
>> >>>> https: {
>> >>>>   type: "none", # none, http, socks. Blank same as none.
>> >>>>   host: "",
>> >>>>   port: 80,
>> >>>>   user_name: "",
>> >>>>   password: ""
>> >>>> }
>> >>>> }
>> >>>> },
>> >>>>
>> >>>> drill.metrics : {
>> >>>> context: "drillbit",
>> >>>> jmx: {
>> >>>> enabled : true
>> >>>> },
>> >>>> log: {
>> >>>> enabled : false,
>> >>>> interval : 60
>> >>>> }
>> >>>> }
>> >>>>
>> >>>>> On Fri, Oct 1, 2021 at 3:12 PM luoc <l...@apache.org> wrote:
>> >>>>>
>> >>>>> Hi Prabhakar,
>> >>>>> Check your config. The following error message shows that a valid
>> >>>>> host name is missing:
>> >>>>>
>> >>>>> "Caused by: java.io.IOException: Incomplete HDFS URI, no host:
>> >>>>> hdfs:///"
>> >>>>>
>> >>>>>> On Oct 1, 2021, at 5:26 PM, Prabhakar Bhosaale <bhosale....@gmail.com> wrote:
>> >>>>>>
>> >>>>>> Hi Team,
>> >>>>>> I have installed Drill in distributed mode on a 3-node Hadoop
>> >>>>>> cluster.
>> >>>>>>
>> >>>>>> I get the following error in the drillbit.out file when I try to
>> >>>>>> start the Drillbit.
>> >>>>>>
>> >>>>>> ----- ERROR -----
>> >>>>>> 12:34:46.984 [main-EventThread] ERROR o.a.c.framework.imps.EnsembleTracker
>> >>>>>> - Invalid config event received:
>> >>>>>> {server.1=machinename1:2888:3888:participant, version=0,
>> >>>>>> server.3=machinename2:2888:3888:participant,
>> >>>>>> server.2=machinename3:2888:3888:participant}
>> >>>>>> Exception in thread "main"
>> >>>>>> org.apache.drill.exec.exception.DrillbitStartupException: Failure
>> >>>>>> during initial startup of Drillbit.
>> >>>>>> at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:588)
>> >>>>>> at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
>> >>>>>> at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
>> >>>>>> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException:
>> >>>>>> Error during file system hdfs:/// setup
>> >>>>>> at org.apache.drill.common.exceptions.DrillRuntimeException.create(DrillRuntimeException.java:48)
>> >>>>>> at org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.prepareAreas(RemoteFunctionRegistry.java:231)
>> >>>>>> at org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.init(RemoteFunctionRegistry.java:109)
>> >>>>>> at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:233)
>> >>>>>> at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
>> >>>>>> ... 2 more
>> >>>>>> Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs:///
>> >>>>>> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:170)
>> >>>>>> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375)
>> >>>>>> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125)
>> >>>>>> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424)
>> >>>>>> at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392)
>> >>>>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485)
>> >>>>>> at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:233)
>> >>>>>> at org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.prepareAreas(RemoteFunctionRegistry.java:229)
>> >>>>>> ... 5 more
>> >>>>>> ----- ERROR END -----
>> >>>>>>
>> >>>>>> The Drill version is 1.19
>> >>>>>> The Hadoop version is 3.3.1
>> >>>>>> The ZooKeeper version is 3.71
>> >>>>>>
>> >>>>>> I have the following settings configured:
>> >>>>>>
>> >>>>>> zoo.cfg file
>> >>>>>> server.1=machine1:2888:3888
>> >>>>>> server.2= machine2:2888:3888
>> >>>>>> server.3= machine3:2888:3888
>> >>>>>>
>> >>>>>> drill-override.conf
>> >>>>>> zk: {
>> >>>>>> connect: "machine1:2181, machine2:2181, machine3:2181",
>> >>>>>> root: "user/pstore",
>> >>>>>> refresh: 500,
>> >>>>>> timeout: 5000,
>> >>>>>> retry: {
>> >>>>>>  count: 7200,
>> >>>>>>  delay: 500
>> >>>>>> }
>> >>>>>>
>> >>>>>> udf: {
>> >>>>>> # number of retry attempts to update remote function registry
>> >>>>>> # if registry version was changed during update
>> >>>>>> retry-attempts: 10,
>> >>>>>> directory: {
>> >>>>>>  # Override this property if custom file system should be used to
>> >>>>>>  # create remote directories
>> >>>>>>  # instead of default taken from Hadoop configuration
>> >>>>>>  fs: "hdfs:///",
>> >>>>>>  # Set this property if custom absolute root should be used for
>> >>>>>>  # remote directories
>> >>>>>>  root: "user/udf"
>> >>>>>> }
>> >>>>>>
>> >>>>>> Any help and pointers are appreciated. Thanks.
>> >>>>>>
>> >>>>>> Regards
>> >>>>>> Prabhakar
>> >>>>>
>> >>>>>
>> >>>
>> >>
>> >>
>> >
>>
>>
