Re: New blog post, "The reports of my death have been greatly exaggerated"

2021-11-02 Thread Prabhakar Bhosaale
Hi James/team,
Let's use this as an opportunity to market Apache Drill. Those of us who are
using Apache Drill, let's spread the word; it is important for our own
existence. Thanks


Regards
Prabhakar

On Tue, Nov 2, 2021 at 11:37 AM James Turton  wrote:

> Good day all
>
> https://drill.apache.org/blog/2021/10/30/reports-of-my-death/
>
> We'd like to share this new Drill blog post which addresses some
> misconceptions about the Drill project that are contained in a
> highly-ranked post in another blog, entitled "The Death of Apache Drill".
>
> Note: this is by no means a beginning of any sort of flame war. We've
> published only to correct the statements made about the Drill project
> that are untrue or misleading and we have no intention of saying more.
> The more exposure our post gets, the better the misinformation is
> counteracted so please share it freely.
>
> Happy Drilling
> James Turton
>
>
>
>
>


Re: Drill UI on cluster deployment

2021-10-26 Thread Prabhakar Bhosaale
hi James,
No, I am not running anything extra other than Drill on the server. My
understanding is that Drill internally has an embedded HTTP server which
serves the UI.  Thanks


Regards
Prabhakar

On Tue, Oct 26, 2021 at 12:09 PM James Turton
 wrote:

> Hi Prabhakar
>
> Are you running an HTTP server like Apache or Nginx in front of the
> Drill web UI?  If so, are you able to configure it so that it does not
> apply a *|Content-Security-Policy |*HTTP header?
>
> James
>
> On 2021/10/26 07:55, Prabhakar Bhosaale wrote:
> > Hi All,
> > I have deployed Drill 1.19 on a Hadoop cluster. It is working fine, but
> > when I open the Drill UI from the Chrome or Edge browser it is not displayed
> > correctly. It seems unable to download the CSS and JS files.
> >
> > When I check the errors in the browser developer tools, I see
> > Refused to load the stylesheet '
> > http://x:8047/static/css/bootstrap.min.css' because it violates the
> > following Content Security Policy directive: "style-src 'unsafe-inline'
> > https:". Note that 'style-src-elem' was not explicitly set, so
> 'style-src'
> > is used as a fallback.
> >
> > But if I open the Drill UI in an old version of IE, it displays
> > correctly.
> >
> > Is there any setting that needs to be changed to make this UI work correctly
> > in the latest browsers?
> >
> > Regards
> > Prabhakar
> >
>
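
If a reverse proxy such as nginx or Apache httpd sits in front of the Drill web
UI, the Content-Security-Policy header seen by the browser can come from the
proxy rather than from Drill itself. A minimal nginx sketch (host names and
ports are placeholders, not taken from this thread) of where such a header
could be added, or how to stop forwarding one:

```
# Hypothetical reverse-proxy config for the Drill web UI.
server {
    listen 80;
    server_name drill.example.com;          # placeholder host name

    location / {
        proxy_pass http://drill-node:8047;  # Drill web UI port

        # A directive like the following would impose its own policy and can
        # block the UI's CSS/JS exactly as in the error quoted above:
        # add_header Content-Security-Policy "style-src 'unsafe-inline' https:";

        # Stop forwarding a Content-Security-Policy header set upstream:
        proxy_hide_header Content-Security-Policy;
    }
}
```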


Drill UI on cluster deployment

2021-10-25 Thread Prabhakar Bhosaale
Hi All,
I have deployed Drill 1.19 on a Hadoop cluster. It is working fine, but
when I open the Drill UI from the Chrome or Edge browser it is not displayed
correctly. It seems unable to download the CSS and JS files.

When I check the errors in the browser developer tools, I see
Refused to load the stylesheet '
http://x:8047/static/css/bootstrap.min.css' because it violates the
following Content Security Policy directive: "style-src 'unsafe-inline'
https:". Note that 'style-src-elem' was not explicitly set, so 'style-src'
is used as a fallback.

But if I open the Drill UI in an old version of IE, it displays
correctly.

Is there any setting that needs to be changed to make this UI work correctly
in the latest browsers?

Regards
Prabhakar


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-14 Thread Prabhakar Bhosaale
Hi James and Luoc,

Thanks for the approval and merge.  I am also looking for documentation
covering every section and element of drill-override-example.conf but
could not find it. If this documentation is not available then I am happy
to create it, but I need the details. Thanks

Regards
Prabhakar

On Thu, Oct 14, 2021 at 8:15 PM luoc  wrote:

> Hello Prabhakar,
>   Thank you so much. Your pull request has been merged into master within
> 3 minutes (we welcome the new contributor may be faster than three
> seconds). Thank you, James.
>
> > On Oct 14, 2021, at 10:23 PM, Prabhakar Bhosaale wrote:
> >
> > Hi Luoc,
> > I have updated the document with a few details and created the pull request
> > as you suggested. Please review. Thanks
> >
> > Regards
> > Prabhakar
> >
> > On Wed, Oct 13, 2021 at 4:07 PM luoc  wrote:
> >
> >> Hello Prabhakar,
> >>  For example, you can find an icon in the upper right corner of each
> >> docs, please click that, and remember the address of the page, then :
> >>  1.  git clone https://github.com/apache/drill-site.git
> >>  2.  locate the address of page
> >>  3.  update the markdown file
> >>  4.  git commit & create a pull request
> >>
> >>
> >>> On Oct 12, 2021, at 11:03 PM, Prabhakar Bhosaale wrote:
> >>>
> >>> hi Luoc,
> >>>
> >>> Thanks for the kind words. There is certainly scope to update the document,
> >>> and I am happy to contribute. Let me know how I can contribute. Thanks
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>> On Tue, Oct 12, 2021 at 7:16 PM luoc  wrote:
> >>>
> >>>> Hello Prabhakar,
> >>>> You can be proud of your work on DRILL. In addition, Is the DRILL
> >>>> document (for the JSON files) not detailed enough ? If possible, we
> >> always
> >>>> welcome you to contribute the docs (or update the specified docs).
> >>>>
> >>>>> On Oct 12, 2021, at 8:36 PM, Prabhakar Bhosaale wrote:
> >>>>>
> >>>>> Hi Luoc,
> >>>>>
> >>>>> The problem is resolved. The root cause was the port number given in the
> >>>>> storage plugin for HDFS: the HDFS WebUI port was specified instead of the
> >>>>> service port, and hence it was giving problems. Thanks for all your support.
> >>>>>
> >>>>> Regards
> >>>>> Prabhakar
> >>>>>
> >>>>> On Tue, Oct 12, 2021 at 10:56 AM Prabhakar Bhosaale <
> >>>> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> >>>>> Hi Luoc,
> >>>>>
> >>>>> Please ignore the screenshot of the show files command in my earlier email.
> >>>>> The show files command using archival_hdfs gives the same error message.
> >>>>>
> >>>>> "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >>>>>
> >>>>> java.io.IOException: Failed on local exception:
> >>>> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data
> >> length
> >>>>>
> >>>>> But if I use default dfs schema then show files gives correct output.
> >>>>>
> >>>>> Thanks and regards
> >>>>> Prabhakar
> >>>>>
> >>>>> On Tue, Oct 12, 2021 at 10:34 AM Prabhakar Bhosaale <
> >>>> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> >>>>> Hi Luoc,
> >>>>> I had attached the files to my earlier email, I am attaching it again
> >>>> here.
> >>>>>
> >>>>> Answers to your questions
> >>>>> 1. `show databases` or `show schemas` in the console. Will the
> >>>> `archival_hdfs` be displayed ?
> >>>>> Prabhakar: archival_hdfs is not displayed by show databases or show
> >>>> schemas. But it is displayed on drill UI under "Default schema" drop
> >> down.
> >>>> Below is screenshot
> >>>>>
> >>>>>
> >>>>> 2. `show files in xxx` in the console. What do you see ?
> >>>>> Prabhakar: It gives the error. Pls see screenshot below
> >>>>>
> >>>>>
> >>>>>
> >>>>> Regards
> >>>>> Prabhakar
> >>>>>
> >>>>> On Mon, Oct 11, 2021 at 3:34 PM luoc <l...@apache.org> wrote:
> >>>>> Hello Prabhakar,
> >>>>> I did not see the log attachment in the email. In addition, you can
> >>>> also check the schema by following steps :
> >>>>> 1. `show databases` or `show schemas` in the console. Will the
> >>>> `archival_hdfs` be displayed ?
> >>>>> 2. `show files in xxx` in the console. What do you see ?
> >>>>>
> >>>>>
> >>>>>> On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale <bhosale@gmail.com> wrote:
> >>>>>>
> >>>>>> Hi Team,
> >>>>>>
> >>>>>> I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
> >>>>>> start Drill on every node, one of the nodes gives the following error,
> >>>>>> but Drill is up on all 3 nodes.
> >>>>>>
> >>>>>> "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
> >>>> received"
> >>>>>>
> >>>>>>
> >>>>>> After Drill startup, when I try to query the JSON file on the cluster
> >>>>>> I get the following error.
> >>>>>>
> >>>>>> "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >>>>>>
> >>>>>> java.io.IOException: Failed on local exception:
> >>>> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data
> >> length
> >>>>>>
> >>>>>> Any help or pointer would be appreciated. Attaching the log files.
> >>>>>>
> >>>>>> Thanks
> >>
> >>
>
>


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-14 Thread Prabhakar Bhosaale
Hi Luoc,
I have updated the document with a few details and created the pull request
as you suggested. Please review. Thanks

Regards
Prabhakar

On Wed, Oct 13, 2021 at 4:07 PM luoc  wrote:

> Hello Prabhakar,
>   For example, you can find an icon in the upper right corner of each
> docs, please click that, and remember the address of the page, then :
>   1.  git clone https://github.com/apache/drill-site.git
>   2.  locate the address of page
>   3.  update the markdown file
>   4.  git commit & create a pull request
>
>
> > On Oct 12, 2021, at 11:03 PM, Prabhakar Bhosaale wrote:
> >
> > hi Luoc,
> >
> > Thanks for the kind words. There is certainly scope to update the document,
> > and I am happy to contribute. Let me know how I can contribute. Thanks
> >
> > Regards
> > Prabhakar
> >
> > On Tue, Oct 12, 2021 at 7:16 PM luoc  wrote:
> >
> >> Hello Prabhakar,
> >>  You can be proud of your work on DRILL. In addition, Is the DRILL
> >> document (for the JSON files) not detailed enough ? If possible, we
> always
> >> welcome you to contribute the docs (or update the specified docs).
> >>
> >>> On Oct 12, 2021, at 8:36 PM, Prabhakar Bhosaale wrote:
> >>>
> >>> Hi Luoc,
> >>>
> >>> The problem is resolved. The root cause was the port number given in the
> >>> storage plugin for HDFS: the HDFS WebUI port was specified instead of the
> >>> service port, and hence it was giving problems. Thanks for all your support.
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>> On Tue, Oct 12, 2021 at 10:56 AM Prabhakar Bhosaale <
> >> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> >>> Hi Luoc,
> >>>
> >>> Please ignore the screenshot of the show files command in my earlier email.
> >>> The show files command using archival_hdfs gives the same error message.
> >>>
> >>> "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >>>
> >>> java.io.IOException: Failed on local exception:
> >> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data
> length
> >>>
> >>> But if I use default dfs schema then show files gives correct output.
> >>>
> >>> Thanks and regards
> >>> Prabhakar
> >>>
> >>> On Tue, Oct 12, 2021 at 10:34 AM Prabhakar Bhosaale <
> >> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> >>> Hi Luoc,
> >>> I had attached the files to my earlier email, I am attaching it again
> >> here.
> >>>
> >>> Answers to your questions
> >>>  1. `show databases` or `show schemas` in the console. Will the
> >> `archival_hdfs` be displayed ?
> >>> Prabhakar: archival_hdfs is not displayed by show databases or show
> >> schemas. But it is displayed on drill UI under "Default schema" drop
> down.
> >> Below is screenshot
> >>>
> >>>
> >>>  2. `show files in xxx` in the console. What do you see ?
> >>> Prabhakar: It gives the error. Pls see screenshot below
> >>>
> >>>
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>> On Mon, Oct 11, 2021 at 3:34 PM luoc <l...@apache.org> wrote:
> >>> Hello Prabhakar,
> >>>  I did not see the log attachment in the email. In addition, you can
> >> also check the schema by following steps :
> >>>  1. `show databases` or `show schemas` in the console. Will the
> >> `archival_hdfs` be displayed ?
> >>>  2. `show files in xxx` in the console. What do you see ?
> >>>
> >>>
> >>>> On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale <bhosale@gmail.com> wrote:
> >>>>
> >>>> Hi Team,
> >>>>
> >>>> I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
> >>>> start Drill on every node, one of the nodes gives the following error,
> >>>> but Drill is up on all 3 nodes.
> >>>>
> >>>> "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
> >> received"
> >>>>
> >>>>
> >>>> After Drill startup, when I try to query the JSON file on the cluster
> >>>> I get the following error.
> >>>>
> >>>> "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >>>>
> >>>> java.io.IOException: Failed on local exception:
> >> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data
> length
> >>>>
> >>>> Any help or pointer would be appreciated. Attaching the log files.
> >>>>
> >>>> Thanks
>
>


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-12 Thread Prabhakar Bhosaale
hi Luoc,

Thanks for the kind words. There is certainly scope to update the document,
and I am happy to contribute. Let me know how I can contribute. Thanks

Regards
Prabhakar

On Tue, Oct 12, 2021 at 7:16 PM luoc  wrote:

> Hello Prabhakar,
>   You can be proud of your work on DRILL. In addition, Is the DRILL
> document (for the JSON files) not detailed enough ? If possible, we always
> welcome you to contribute the docs (or update the specified docs).
>
> > On Oct 12, 2021, at 8:36 PM, Prabhakar Bhosaale wrote:
> >
> > Hi Luoc,
> >
> > The problem is resolved. The root cause was the port number given in the
> > storage plugin for HDFS: the HDFS WebUI port was specified instead of the
> > service port, and hence it was giving problems. Thanks for all your support.
> >
> > Regards
> > Prabhakar
> >
> > On Tue, Oct 12, 2021 at 10:56 AM Prabhakar Bhosaale <
> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> > Hi Luoc,
> >
> > Please ignore the screenshot of the show files command in my earlier email.
> > The show files command using archival_hdfs gives the same error message.
> >
> > "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >
> > java.io.IOException: Failed on local exception:
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
> >
> > But if I use default dfs schema then show files gives correct output.
> >
> > Thanks and regards
> > Prabhakar
> >
> > On Tue, Oct 12, 2021 at 10:34 AM Prabhakar Bhosaale <
> bhosale@gmail.com <mailto:bhosale@gmail.com>> wrote:
> > Hi Luoc,
> > I had attached the files to my earlier email, I am attaching it again
> here.
> >
> > Answers to your questions
> >   1. `show databases` or `show schemas` in the console. Will the
> `archival_hdfs` be displayed ?
> > Prabhakar: archival_hdfs is not displayed by show databases or show
> schemas. But it is displayed on drill UI under "Default schema" drop down.
> Below is screenshot
> >
> >
> >   2. `show files in xxx` in the console. What do you see ?
> > Prabhakar: It gives the error. Pls see screenshot below
> >
> >
> >
> > Regards
> > Prabhakar
> >
> > On Mon, Oct 11, 2021 at 3:34 PM luoc <l...@apache.org> wrote:
> > Hello Prabhakar,
> >   I did not see the log attachment in the email. In addition, you can
> also check the schema by following steps :
> >   1. `show databases` or `show schemas` in the console. Will the
> `archival_hdfs` be displayed ?
> >   2. `show files in xxx` in the console. What do you see ?
> >
> >
> > > On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale <bhosale@gmail.com> wrote:
> > >
> > > Hi Team,
> > >
> > > I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
> > > start Drill on every node, one of the nodes gives the following error,
> > > but Drill is up on all 3 nodes.
> > >
> > > "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
> received"
> > >
> > >
> > > After Drill startup, when I try to query the JSON file on the cluster
> > > I get the following error.
> > >
> > > "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> > >
> > > java.io.IOException: Failed on local exception:
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
> > >
> > > Any help or pointer would be appreciated. Attaching the log files.
> > >
> > > Thanks
> > >
> > >
> >
>
>


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-12 Thread Prabhakar Bhosaale
Hi Luoc,

The problem is resolved. The root cause was the port number given in the
storage plugin for HDFS: the HDFS WebUI port was specified instead of the
service port, and hence it was giving problems. Thanks for all your support.

Regards
Prabhakar

On Tue, Oct 12, 2021 at 10:56 AM Prabhakar Bhosaale 
wrote:

> Hi Luoc,
>
> Please ignore the screenshot of the show files command in my earlier email.
> The show files command using archival_hdfs gives the same error message.
>
> "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
>
> java.io.IOException: Failed on local exception:
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
>
> But if I use default dfs schema then show files gives correct output.
>
> Thanks and regards
> Prabhakar
>
> On Tue, Oct 12, 2021 at 10:34 AM Prabhakar Bhosaale 
> wrote:
>
>> Hi Luoc,
>> I had attached the files to my earlier email, I am attaching it again
>> here.
>>
>> Answers to your questions
>>   1. `show databases` or `show schemas` in the console. Will the
>> `archival_hdfs` be displayed ?
>> Prabhakar: archival_hdfs is not displayed by show databases or show
>> schemas. But it is displayed on drill UI under "Default schema" drop down.
>> Below is screenshot
>> [image: image.png]
>>
>>   2. `show files in xxx` in the console. What do you see ?
>> Prabhakar: It gives the error. Pls see screenshot below
>>
>> [image: image.png]
>>
>> Regards
>> Prabhakar
>>
>> On Mon, Oct 11, 2021 at 3:34 PM luoc  wrote:
>>
>>> Hello Prabhakar,
>>>   I did not see the log attachment in the email. In addition, you can
>>> also check the schema by following steps :
>>>   1. `show databases` or `show schemas` in the console. Will the
>>> `archival_hdfs` be displayed ?
>>>   2. `show files in xxx` in the console. What do you see ?
>>>
>>>
>>> > On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale wrote:
>>> >
>>> > Hi Team,
>>> >
>>> > I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
>>> > start Drill on every node, one of the nodes gives the following error,
>>> > but Drill is up on all 3 nodes.
>>> >
>>> > "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
>>> received"
>>> >
>>> >
>>> > After Drill startup, when I try to query the JSON file on the cluster
>>> > I get the following error.
>>> >
>>> > "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
>>> >
>>> > java.io.IOException: Failed on local exception:
>>> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
>>> >
>>> > Any help or pointer would be appreciated. Attaching the log files.
>>> >
>>> > Thanks
>>> >
>>> >
>>>
>>>
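
For reference, a sketch of the kind of HDFS storage plugin definition this fix
implies. The `connection` URI must use the HDFS NameNode service (RPC) port,
8020 here as a placeholder, rather than the NameNode WebUI port (9870 on
Hadoop 3); pointing the plugin at the HTTP port is what typically produces the
"RPC response exceeds maximum data length" error quoted above. The host name
and workspace layout below are illustrative, not the configuration from this
thread.

```
{
  "type": "file",
  "connection": "hdfs://namenode.example.com:8020",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json",
      "extensions": ["json"]
    }
  },
  "enabled": true
}
```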


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-11 Thread Prabhakar Bhosaale
Hi Luoc,

Please ignore the screenshot of the show files command in my earlier email. The
show files command using archival_hdfs gives the same error message.

"RESOURCE ERROR: Failed to load schema for "archival_hdfs"!

java.io.IOException: Failed on local exception:
org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

But if I use default dfs schema then show files gives correct output.

Thanks and regards
Prabhakar

On Tue, Oct 12, 2021 at 10:34 AM Prabhakar Bhosaale 
wrote:

> Hi Luoc,
> I had attached the files to my earlier email, I am attaching it again here.
>
> Answers to your questions
>   1. `show databases` or `show schemas` in the console. Will the
> `archival_hdfs` be displayed ?
> Prabhakar: archival_hdfs is not displayed by show databases or show
> schemas. But it is displayed on drill UI under "Default schema" drop down.
> Below is screenshot
> [image: image.png]
>
>   2. `show files in xxx` in the console. What do you see ?
> Prabhakar: It gives the error. Pls see screenshot below
>
> [image: image.png]
>
> Regards
> Prabhakar
>
> On Mon, Oct 11, 2021 at 3:34 PM luoc  wrote:
>
>> Hello Prabhakar,
>>   I did not see the log attachment in the email. In addition, you can
>> also check the schema by following steps :
>>   1. `show databases` or `show schemas` in the console. Will the
>> `archival_hdfs` be displayed ?
>>   2. `show files in xxx` in the console. What do you see ?
>>
>>
>> > On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale wrote:
>> >
>> > Hi Team,
>> >
>> > I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
>> > start Drill on every node, one of the nodes gives the following error,
>> > but Drill is up on all 3 nodes.
>> >
>> > "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
>> received"
>> >
>> >
>> > After Drill startup, when I try to query the JSON file on the cluster I
>> > get the following error.
>> >
>> > "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
>> >
>> > java.io.IOException: Failed on local exception:
>> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
>> >
>> > Any help or pointer would be appreciated. Attaching the log files.
>> >
>> > Thanks
>> >
>> >
>>
>>


Re: Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-11 Thread Prabhakar Bhosaale
Hi Luoc,
I had attached the files to my earlier email; I am attaching them again here.

Answers to your questions
  1. `show databases` or `show schemas` in the console. Will the
`archival_hdfs` be displayed ?
Prabhakar: archival_hdfs is not displayed by show databases or show
schemas, but it is displayed in the Drill UI under the "Default schema" drop-down.
Below is a screenshot.
[image: image.png]

  2. `show files in xxx` in the console. What do you see ?
Prabhakar: It gives the error. Please see the screenshot below.

[image: image.png]

Regards
Prabhakar

On Mon, Oct 11, 2021 at 3:34 PM luoc  wrote:

> Hello Prabhakar,
>   I did not see the log attachment in the email. In addition, you can also
> check the schema by following steps :
>   1. `show databases` or `show schemas` in the console. Will the
> `archival_hdfs` be displayed ?
>   2. `show files in xxx` in the console. What do you see ?
>
>
> > > On Oct 11, 2021, at 3:39 PM, Prabhakar Bhosaale wrote:
> >
> > Hi Team,
> >
> > I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
> > start Drill on every node, one of the nodes gives the following error,
> > but Drill is up on all 3 nodes.
> >
> > "ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event
> received"
> >
> >
> > After Drill startup, when I try to query the JSON file on the cluster I
> > get the following error.
> >
> > "RESOURCE ERROR: Failed to load schema for "archival_hdfs"!
> >
> > java.io.IOException: Failed on local exception:
> org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length
> >
> > Any help or pointer would be appreciated. Attaching the log files.
> >
> > Thanks
> >
> >
>
>


Drillbit error: ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received

2021-10-11 Thread Prabhakar Bhosaale
Hi Team,

I have deployed Drill in cluster mode on a 3-node Hadoop cluster. When I
start Drill on every node, one of the nodes gives the following error,
but Drill is up on all 3 nodes.

"ERROR o.a.c.framework.imps.EnsembleTracker - Invalid config event received"


After Drill startup, when I try to query the JSON file on the cluster I get
the following error.

"RESOURCE ERROR: Failed to load schema for "archival_hdfs"!

java.io.IOException: Failed on local exception:
org.apache.hadoop.ipc.RpcException: RPC response exceeds maximum data length

Any help or pointer would be appreciated. Attaching the log files.

Thanks


Error: Drill query json file on hadoop cluster

2021-10-06 Thread Prabhakar Bhosaale
Hi Team,

I have installed Drill in distributed mode on a 3-node Hadoop cluster.

I get the following error when I try to query a JSON file.

2021-10-05 16:33:18,428 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman] INFO
o.a.drill.exec.work.foreman.Foreman - Query text for query with id
1ea3cf09-19d4-55fe-3b84-6235e7f1eae6 issued by anonymous: select * from
archival_hdfs.`mytable`
2021-10-05 16:33:18,435 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman] INFO
o.a.d.e.p.s.conversion.SqlConverter - User Error Occurred: Invalid range:
[0..-1) (Invalid range: [0..-1))
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: Invalid
range: [0..-1)

Following are the version details:

The Drill version is - 1.19
Hadoop version is - 3.3.1
ZooKeeper version is - 3.7.1

Regards
Prabhakar


Re: Drill startup error: Distributed mode

2021-10-05 Thread Prabhakar Bhosaale
Hi Luoc,
Any pointers on the validation error mentioned in my last email?  thanks in
advance.


Regards
Prabhakar

On Tue, Oct 5, 2021 at 4:38 PM Prabhakar Bhosaale 
wrote:

> hi Luoc,
> I gave full permissions to the UDF folder; that error is gone and the
> drillbit started successfully. But when I query the JSON file I get the
> following error message.
>
> 2021-10-05 16:33:18,428 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman]
> INFO o.a.drill.exec.work.foreman.Foreman - Query text for query with id
> 1ea3cf09-19d4-55fe-3b84-6235e7f1eae6 issued by anonymous: select * from
> archival_hdfs.`mytable`
> 2021-10-05 16:33:18,435 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman]
> INFO o.a.d.e.p.s.conversion.SqlConverter - User Error Occurred: Invalid
> range: [0..-1) (Invalid range: [0..-1))
> org.apache.drill.common.exceptions.UserException: VALIDATION ERROR:
> Invalid range: [0..-1)
>
> Regards
> Prabhakar
>
> On Tue, Oct 5, 2021 at 2:28 PM luoc  wrote:
>
>> Hi Prabhakar,
>>   If you removed the configured UDF block, can all the Drillbit start
>> properly?
>> I mean, if the host is configured but not working, we need to locate the
>> wrong configuration block first.
>>
>> > On Oct 4, 2021, at 1:13 PM, Prabhakar Bhosaale wrote:
>> >
>> > hi Luoc,
>> >
>> > I already tried giving the HDFS host name in the UDF section, and even then
>> > it gives the same error.
>> > My understanding of that element is that it just tells what kind of file
>> > system it needs to use.  I checked for documentation explaining every
>> > element of that configuration file but could not find it. If you
>> > could give me some pointers to the documentation, that would be great. Thanks
>> >
>> > Regards
>> > Prabhakar
>> >
>> >
>> >
>> > On Sat, Oct 2, 2021 at 1:08 PM luoc  wrote:
>> >
>> >> Hi Prabhakar,
>> >>
>> >>  Which configuration block is the host name you mentioned? I see that
>> the
>> >> UDF block.
>> >>
>> >> ```
>> >># instead of default taken from Hadoop configuration
>> >>fs: "hdfs:///",
>> >> ```
>> >>
>> >>> On Oct 2, 2021, at 15:11, Prabhakar Bhosaale wrote:
>> >>>
>> >>> Hi Luoc,
>> >>> Could you please help with it?  thanks
>> >>>
>> >>> Regards
>> >>> Prabhakar
>> >>>
>> >>>> On Fri, Oct 1, 2021 at 4:55 PM Prabhakar Bhosaale <
>> >> bhosale@gmail.com>
>> >>>> wrote:
>> >>>> Hi Luoc,
>> >>>> I have already given the host name. Below is the complete file for
>> your
>> >>>> reference. Not sure where to give the hostname.
>> >>>> # Licensed to the Apache Software Foundation (ASF) under one
>> >>>> # or more contributor license agreements.  See the NOTICE file
>> >>>> # distributed with this work for additional information
>> >>>> # regarding copyright ownership.  The ASF licenses this file
>> >>>> # to you under the Apache License, Version 2.0 (the
>> >>>> # "License"); you may not use this file except in compliance
>> >>>> # with the License.  You may obtain a copy of the License at
>> >>>> #
>> >>>> # http://www.apache.org/licenses/LICENSE-2.0
>> >>>> #
>> >>>> # Unless required by applicable law or agreed to in writing, software
>> >>>> # distributed under the License is distributed on an "AS IS" BASIS,
>> >>>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
>> >> implied.
>> >>>> # See the License for the specific language governing permissions and
>> >>>> # limitations under the License.
>> >>>> #
>> >>>> #  This file tells Drill to consider this module when class path
>> >> scanning.
>> >>>> #  This file can also include any supplementary configuration
>> >> information.
>> >>>> #  This file is in HOCON format, see
>> >>>> https://github.com/typesafehub/config/blob/master/HOCON.md for more
>> >>>> information.
>> >>>> drill.logical.function.packages +=
>> "org.apache.drill.exec.expr.fn.impl"
>> >>>> drill.exec: {
>> >>>> cluster-id: "drillcluster"
>

Re: Drill startup error: Distributed mode

2021-10-05 Thread Prabhakar Bhosaale
hi Luoc,
I gave full permissions to the UDF folder; that error is gone and the
drillbit started successfully. But when I query the JSON file I get the
following error message.

2021-10-05 16:33:18,428 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman] INFO
o.a.drill.exec.work.foreman.Foreman - Query text for query with id
1ea3cf09-19d4-55fe-3b84-6235e7f1eae6 issued by anonymous: select * from
archival_hdfs.`mytable`
2021-10-05 16:33:18,435 [1ea3cf09-19d4-55fe-3b84-6235e7f1eae6:foreman] INFO
o.a.d.e.p.s.conversion.SqlConverter - User Error Occurred: Invalid range:
[0..-1) (Invalid range: [0..-1))
org.apache.drill.common.exceptions.UserException: VALIDATION ERROR: Invalid
range: [0..-1)

Regards
Prabhakar

On Tue, Oct 5, 2021 at 2:28 PM luoc  wrote:

> Hi Prabhakar,
>   If you removed the configured UDF block, can all the Drillbit start
> properly?
> I mean, if the host is configured but not working, we need to locate the
> wrong configuration block first.
>
> > On Oct 4, 2021, at 1:13 PM, Prabhakar Bhosaale wrote:
> >
> > hi Luoc,
> >
> > I already tried giving the HDFS host name in the UDF section, and even then
> > it gives the same error.
> > My understanding of that element is that it just tells what kind of file
> > system it needs to use.  I checked for documentation explaining every
> > element of that configuration file but could not find it. If you
> > could give me some pointers to the documentation, that would be great. Thanks
> >
> > Regards
> > Prabhakar
> >
> >
> >
> > On Sat, Oct 2, 2021 at 1:08 PM luoc  wrote:
> >
> >> Hi Prabhakar,
> >>
> >>  Which configuration block is the host name you mentioned? I see that
> the
> >> UDF block.
> >>
> >> ```
> >># instead of default taken from Hadoop configuration
> >>fs: "hdfs:///",
> >> ```
> >>
> >>> On Oct 2, 2021, at 15:11, Prabhakar Bhosaale wrote:
> >>>
> >>> Hi Luoc,
> >>> Could you please help with it?  thanks
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>>> On Fri, Oct 1, 2021 at 4:55 PM Prabhakar Bhosaale <
> >> bhosale@gmail.com>
> >>>> wrote:
> >>>> Hi Luoc,
> >>>> I have already given the host name. Below is the complete file for
> your
> >>>> reference. Not sure where to give the hostname.
> >>>> # Licensed to the Apache Software Foundation (ASF) under one
> >>>> # or more contributor license agreements.  See the NOTICE file
> >>>> # distributed with this work for additional information
> >>>> # regarding copyright ownership.  The ASF licenses this file
> >>>> # to you under the Apache License, Version 2.0 (the
> >>>> # "License"); you may not use this file except in compliance
> >>>> # with the License.  You may obtain a copy of the License at
> >>>> #
> >>>> # http://www.apache.org/licenses/LICENSE-2.0
> >>>> #
> >>>> # Unless required by applicable law or agreed to in writing, software
> >>>> # distributed under the License is distributed on an "AS IS" BASIS,
> >>>> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> >> implied.
> >>>> # See the License for the specific language governing permissions and
> >>>> # limitations under the License.
> >>>> #
> >>>> #  This file tells Drill to consider this module when class path
> >> scanning.
> >>>> #  This file can also include any supplementary configuration
> >> information.
> >>>> #  This file is in HOCON format, see
> >>>> https://github.com/typesafehub/config/blob/master/HOCON.md for more
> >>>> information.
> >>>> drill.logical.function.packages +=
> "org.apache.drill.exec.expr.fn.impl"
> >>>> drill.exec: {
> >>>> cluster-id: "drillcluster"
> >>>> rpc: {
> >>>> user: {
> >>>>   server: {
> >>>> port: 31010
> >>>> threads: 1
> >>>>   }
> >>>>   client: {
> >>>> threads: 1
> >>>>   }
> >>>> },
> >>>> bit: {
> >>>>   server: {
> >>>> port : 31011,
> >>>> retry:{
> >>>>   count: 7200,
> >>>>   delay: 500
> >>>> },
> >>>> threads: 1
> >

Re: Drill startup error: Distributed mode

2021-10-03 Thread Prabhakar Bhosaale
hi Luoc,

I already tried giving the HDFS host name in the UDF section, and even then it
gives the same error.
My understanding of that element is that it just tells what kind of file
system it needs to use.  I checked for documentation explaining every
element of that configuration file but could not find it. If you
could give me some pointers to the documentation, that would be great. Thanks

Regards
Prabhakar



On Sat, Oct 2, 2021 at 1:08 PM luoc  wrote:

> Hi Prabhakar,
>
>   Which configuration block is the host name you mentioned? I see that the
> UDF block.
>
> ```
> # instead of default taken from Hadoop configuration
> fs: "hdfs:///",
> ```
>
> > On Oct 2, 2021, at 15:11, Prabhakar Bhosaale wrote:
> >
> > Hi Luoc,
> > Could you please help with it?  thanks
> >
> > Regards
> > Prabhakar
> >
> >> On Fri, Oct 1, 2021 at 4:55 PM Prabhakar Bhosaale <
> bhosale@gmail.com>
> >> wrote:
> >> Hi Luoc,
> >> I have already given the host name. Below is the complete file for your
> >> reference. Not sure where to give the hostname.
> >> # Licensed to the Apache Software Foundation (ASF) under one
> >> # or more contributor license agreements.  See the NOTICE file
> >> # distributed with this work for additional information
> >> # regarding copyright ownership.  The ASF licenses this file
> >> # to you under the Apache License, Version 2.0 (the
> >> # "License"); you may not use this file except in compliance
> >> # with the License.  You may obtain a copy of the License at
> >> #
> >> # http://www.apache.org/licenses/LICENSE-2.0
> >> #
> >> # Unless required by applicable law or agreed to in writing, software
> >> # distributed under the License is distributed on an "AS IS" BASIS,
> >> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
> implied.
> >> # See the License for the specific language governing permissions and
> >> # limitations under the License.
> >> #
> >> #  This file tells Drill to consider this module when class path
> scanning.
> >> #  This file can also include any supplementary configuration
> information.
> >> #  This file is in HOCON format, see
> >> https://github.com/typesafehub/config/blob/master/HOCON.md for more
> >> information.
> >> drill.logical.function.packages += "org.apache.drill.exec.expr.fn.impl"
> >> drill.exec: {
> >> cluster-id: "drillcluster"
> >> rpc: {
> >>  user: {
> >>server: {
> >>  port: 31010
> >>  threads: 1
> >>}
> >>client: {
> >>  threads: 1
> >>}
> >>  },
> >>  bit: {
> >>server: {
> >>  port : 31011,
> >>  retry:{
> >>count: 7200,
> >>delay: 500
> >>  },
> >>  threads: 1
> >>}
> >>  },
> >>  use.ip : false
> >> },
> >> operator: {
> >>  packages += "org.apache.drill.exec.physical.config"
> >> },
> >> optimizer: {
> >>  implementation: "org.apache.drill.exec.opt.IdentityOptimizer"
> >> },
> >> functions: ["org.apache.drill.expr.fn.impl"],
> >> storage: {
> >>  packages += "org.apache.drill.exec.store",
> >>  file: {
> >>text: {
> >>  buffer.size: 262144,
> >>  batch.size: 4000
> >>},
> >>partition.column.label: "dir"
> >>  },
> >>  # The action on the storage-plugins-override.conf after it's use.
> >>  # Possible values are "none" (default), "rename", "remove"
> >>  action_on_plugins_override_file: "none"
> >> },
> >> zk: {
> >>  connect: "10.81.68.6:2181,10.81.68.110:2181,10.81.70.139:2181",
> >>  root: "user/pstore",
> >>  refresh: 500,
> >>  timeout: 5000,
> >>  retry: {
> >>count: 7200,
> >>delay: 500
> >>  }
> >>  # This option controls whether Drill specifies ACLs when it creates
> >> znodes.
> >>  # If this is 'false', then anyone has all privileges for all Drill
> >> znodes.
> >>  # This corresponds to ZOO_OPEN_ACL_UNSAFE.
> >>  # Setting this flag to 'true' enables the provider specified in
> >> "acl_provider"
> >>  apply_secure_acl: false,
> >>  # This option specified the ACL provider to

Re: Drill startup error: Distributed mode

2021-10-02 Thread Prabhakar Bhosaale
Hi Luoc,
Could you please help with it?  thanks

Regards
Prabhakar

On Fri, Oct 1, 2021 at 4:55 PM Prabhakar Bhosaale 
wrote:

> Hi Luoc,
> I have already given the host name. Below is the complete file for your
> reference. Not sure where to give the hostname.
>
>
> # Licensed to the Apache Software Foundation (ASF) under one
> # or more contributor license agreements.  See the NOTICE file
> # distributed with this work for additional information
> # regarding copyright ownership.  The ASF licenses this file
> # to you under the Apache License, Version 2.0 (the
> # "License"); you may not use this file except in compliance
> # with the License.  You may obtain a copy of the License at
> #
> # http://www.apache.org/licenses/LICENSE-2.0
> #
> # Unless required by applicable law or agreed to in writing, software
> # distributed under the License is distributed on an "AS IS" BASIS,
> # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> # See the License for the specific language governing permissions and
> # limitations under the License.
> #
>
> #  This file tells Drill to consider this module when class path scanning.
> #  This file can also include any supplementary configuration information.
> #  This file is in HOCON format, see
> https://github.com/typesafehub/config/blob/master/HOCON.md for more
> information.
>
> drill.logical.function.packages += "org.apache.drill.exec.expr.fn.impl"
>
> drill.exec: {
>   cluster-id: "drillcluster"
>   rpc: {
> user: {
>   server: {
> port: 31010
> threads: 1
>   }
>   client: {
> threads: 1
>   }
> },
> bit: {
>   server: {
> port : 31011,
> retry:{
>   count: 7200,
>   delay: 500
> },
> threads: 1
>   }
> },
> use.ip : false
>   },
>   operator: {
> packages += "org.apache.drill.exec.physical.config"
>   },
>   optimizer: {
> implementation: "org.apache.drill.exec.opt.IdentityOptimizer"
>   },
>   functions: ["org.apache.drill.expr.fn.impl"],
>   storage: {
> packages += "org.apache.drill.exec.store",
> file: {
>   text: {
> buffer.size: 262144,
> batch.size: 4000
>   },
>   partition.column.label: "dir"
> },
> # The action on the storage-plugins-override.conf after it's use.
> # Possible values are "none" (default), "rename", "remove"
> action_on_plugins_override_file: "none"
>   },
>   zk: {
> connect: "10.81.68.6:2181,10.81.68.110:2181,10.81.70.139:2181",
> root: "user/pstore",
> refresh: 500,
> timeout: 5000,
> retry: {
>   count: 7200,
>   delay: 500
> }
> # This option controls whether Drill specifies ACLs when it creates
> znodes.
> # If this is 'false', then anyone has all privileges for all Drill
> znodes.
> # This corresponds to ZOO_OPEN_ACL_UNSAFE.
> # Setting this flag to 'true' enables the provider specified in
> "acl_provider"
> apply_secure_acl: false,
>
> # This option specified the ACL provider to be used by Drill.
> # Custom ACL providers can be provided in the Drillbit classpath and
> Drill can be made to pick them
> # by changing this option.
> # Note: This option has no effect if "apply_secure_acl" is 'false'
> #
> # The default "creator-all" will setup ACLs such that
> #- Only the Drillbit user will have all privileges(create, delete,
> read, write, admin). Same as ZOO_CREATOR_ALL_ACL
> #- Other users will only be able to read the
> cluster-discovery(list of Drillbits in the cluster) znodes.
> #
> acl_provider: "creator-all"
>   },
>   http: {
> enabled: true,
> ssl_enabled: false,
> port: 8047
> session_max_idle_secs: 3600, # Default value 1hr
> cors: {
>   enabled: false,
>   allowedOrigins: ["null"],
>   allowedMethods: ["GET", "POST", "HEAD", "OPTIONS"],
>   allowedHeaders: ["X-Requested-With", "Content-Type", "Accept",
> "Origin"],
>   credentials: true
> },
> auth: {
> # Http Auth mechanisms to configure. If not provided but user.auth
> is enabled
> # then default value is ["FORM"].
> mechanisms: ["BASIC", "FORM", "SPNEGO"],
> # Spnego principal to be used by WebServer when Spnego

Re: Drill startup error: Distributed mode

2021-10-01 Thread Prabhakar Bhosaale
h: "/tmp/drill",
  write: true
}
  },
  impersonation: {
enabled: false,
max_chained_user_hops: 3
  },
  security.user.auth {
enabled: false,
packages += "org.apache.drill.exec.rpc.user.security",
# There are 2 implementations available out of the box with annotation
UserAuthenticatorTemplate
# Annotation type "pam" is providing implementation using JPAM
# Annotation type "pam4j" is providing implementation using libpam4j
# Based on annotation type configured below corresponding authenticator
is used.
impl: "pam",
pam_profiles: [ "sudo", "login" ]
  },
  trace: {
directory: "/tmp/drill-trace",
filesystem: "file:///"
  },
  tmp: {
directories: ["/tmp/drill"],
filesystem: "drill-local:///"
  },
  buffer:{
impl: "org.apache.drill.exec.work.batch.UnlimitedRawBatchBuffer",
size: "100",
spooling: {
  delete: false,
  size: 1
}
  },
  cache.hazel.subnets: ["*.*.*.*"],
  spill: {
 # These options are common to all spilling operators.
 # They can be overriden, per operator (but this is just for
 # backward compatibility, and may be deprecated in the future)
 directories : [ "/tmp/drill/spill" ],
 fs : "file:///"
  }
  sort: {
purge.threshold : 100,
external: {
  batch.size : 4000,
  spill: {
batch.size : 4000,
group.size : 100,
threshold : 200,
# The 2 options below override the common ones
# they should be deprecated in the future
directories : [ "/tmp/drill/spill" ],
fs : "file:///"
  }
}
  },
  hashagg: {
# The partitions divide the work inside the hashagg, to ease
# handling spilling. This initial figure is tuned down when
# memory is limited.
#  Setting this option to 1 disables spilling !
num_partitions: 32,
spill: {
# The 2 options below override the common ones
# they should be deprecated in the future
directories : [ "/tmp/drill/spill" ],
fs : "file:///"
}
  },
  memory: {
top.max: 1,
operator: {
  max: 200,
  initial: 1000
},
fragment: {
  max: 200,
  initial: 2000
}
  },
  scan: {
threadpool_size: 8,
decode_threadpool_size: 1
  },
  debug.error_on_leak: true,
  # Settings for Dynamic UDFs (see
https://issues.apache.org/jira/browse/DRILL-4726 for details).
  udf: {
# number of retry attempts to update remote function registry
# if registry version was changed during update
retry-attempts: 10,
directory: {
  # Override this property if custom file system should be used to
create remote directories
  # instead of default taken from Hadoop configuration
  fs: "hdfs:///",
  # Set this property if custom absolute root should be used for remote
directories
  root: "user/udf"
}
  },
  # Settings for Temporary Tables (see
https://issues.apache.org/jira/browse/DRILL-4956 for details).
  # Temporary table can be created ONLY in default temporary workspace.
  # Full workspace name should be indicated (including schema and workspace
separated by dot).
  # Workspace MUST be file-based and writable. Workspace name is
case-sensitive.
  default_temporary_workspace: "dfs.tmp"

  # Enable and provide additional parameters for Client-Server
communication over SSL
  # see also the javax.net.ssl parameters below
  security.user.encryption.ssl: {
#Set this to true to enable all client server communication to occur
over SSL.
enabled: false,
#key password is optional if it is the same as the keystore password
keyPassword: "key_passwd",
#Optional handshakeTimeout in milliseconds. Default is 10000 ms (10
seconds)
handshakeTimeout: 1,
#protocol is optional. Drill will default to TLSv1.2. Valid values
depend on protocol versions
# enabled for the underlying security provider. For JSSE these are :
SSL, SSLV2, SSLV3,
# TLS, TLSV1, TLSv1.1, TLSv1.2
protocol: "TLSv1.2",
#ssl provider. May be "JDK" or "OPENSSL". Default is "JDK"
provider: "JDK"
  }

  # HTTP client proxy configuration
  net_proxy: {

# HTTP URL. Omit if from a Linux env var
    # See
https://www.shellhacks.com/linux-proxy-server-settings-set-proxy-command-line/
http_url: "",

# Explicit HTTP setup, used if URL is not set
http: {
  type: "none", # none, http, socks. Blank same as none.
  host: "",
  port: 80,
  user_name: "",
  password: ""
},

# HTTPS URL. Omit if from a Linux env var
https_url: "",

# Explicit HTTPS setup, used if URL is not set
https: 

Drill startup error: Distributed mode

2021-10-01 Thread Prabhakar Bhosaale
Hi Team,
I have installed drill in distributed mode on hadoop 3 node cluster.

I get following error in the drillbit.out file when try to start the
drillbit

- ERROR
12:34:46.984 [main-EventThread] ERROR o.a.c.framework.imps.EnsembleTracker
- Invalid config event received:
{server.1=machinename1:2888:3888:participant, version=0, server.3=
machinename2:2888:3888:participant, server.2=
machinename3:2888:3888:participant}
Exception in thread "main"
org.apache.drill.exec.exception.DrillbitStartupException: Failure during
initial startup of Drillbit.
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:588)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:554)
at org.apache.drill.exec.server.Drillbit.main(Drillbit.java:550)
Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Error
during file system hdfs:/// setup
at
org.apache.drill.common.exceptions.DrillRuntimeException.create(DrillRuntimeException.java:48)
at
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.prepareAreas(RemoteFunctionRegistry.java:231)
at
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.init(RemoteFunctionRegistry.java:109)
at org.apache.drill.exec.server.Drillbit.run(Drillbit.java:233)
at org.apache.drill.exec.server.Drillbit.start(Drillbit.java:584)
... 2 more
Caused by: java.io.IOException: Incomplete HDFS URI, no host: hdfs:///
at
org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:170)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3375)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:125)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3424)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3392)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:485)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:233)
at
org.apache.drill.exec.expr.fn.registry.RemoteFunctionRegistry.prepareAreas(RemoteFunctionRegistry.java:229)
... 5 more
ERROR END

The Drill version is - 1.19
Hadoop version is - 3.3.1
ZooKeeper version is - 3.7.1

I have the following settings:

zoo.cfg file:
server.1=machine1:2888:3888
server.2= machine2:2888:3888
server.3= machine3:2888:3888

drill-override.conf
 zk: {
connect: "machine1:2181, machine2:2181, machine3:2181",
root: "user/pstore",
refresh: 500,
timeout: 5000,
retry: {
  count: 7200,
  delay: 500
}

 udf: {
# number of retry attempts to update remote function registry
# if registry version was changed during update
retry-attempts: 10,
directory: {
  # Override this property if custom file system should be used to
create remote directories
  # instead of default taken from Hadoop configuration
  fs: "hdfs:///",
  # Set this property if custom absolute root should be used for remote
directories
  root: "user/udf"
}

Any help and pointers are appreciated. Thanks

Regards
Prabhakar
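
The stack trace above ends in "Incomplete HDFS URI, no host: hdfs:///", which
points at the `fs: "hdfs:///"` value in the udf block. A sketch of that block
with a fully qualified URI, where the NameNode host and port are placeholders;
alternatively the fs line can be left out so the file system is taken from the
Hadoop configuration, as the comment in the file says:

```
  udf: {
    retry-attempts: 10,
    directory: {
      # A fully qualified HDFS URI (host and port are placeholders);
      # "hdfs:///" has no host and triggers "Incomplete HDFS URI, no host".
      fs: "hdfs://namenode.example.com:8020",
      root: "user/udf"
    }
  }
```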


Re: Json Schema change error

2021-09-26 Thread Prabhakar Bhosaale
Thanks Luoc, I will check on it.


Regards
Prabhakar

On Sun, Sep 26, 2021 at 6:36 PM luoc  wrote:

>
> Hello Prabhakar,
>   There may not be such a detailed docs. But you can review the docs of
> Drill internals and architecture.
>   https://github.com/paul-rogers/drill/wiki#internals-topics
>
>
> > 在 2021年9月25日,18:51,luoc  写道:
> >
> > Hello Prabhakar,
>


Re: Json Schema change error

2021-09-26 Thread Prabhakar Bhosaale
Hi Luoc,
Thanks for the inputs, but unfortunately I am not a Java developer and
understanding the logic by looking at the code would be difficult for me. So I
just want to know if there is any documentation available on this topic.
Thanks in advance

Regards
Prabhakar

On Sat, Sep 25, 2021 at 4:21 PM luoc  wrote:

> Hello Prabhakar,
>
>   Good questions. I think you want to understand the internal logic of
> the JSON loader.
> Actually, Drill includes two kinds of JSON loader, an old version and a new
> revision.
> I suggest you debug based on the following code base:
>
> 1. Test Unit with the new JSON loader :
>
> /drill-java-exec/src/test/java/org/apache/drill/exec/store/easy/json/loader
>
> 2. New JSON loader in HTTP storage :
>
> https://github.com/apache/drill/blob/bf2b0d79e43bf65448557510a7b39f17c428df78/contrib/storage-http/src/main/java/org/apache/drill/exec/store/http/HttpBatchReader.java#L103
> <
> https://github.com/apache/drill/blob/bf2b0d79e43bf65448557510a7b39f17c428df78/contrib/storage-http/src/main/java/org/apache/drill/exec/store/http/HttpBatchReader.java#L103
> >
>
> 3. JSON Record Reader :
> org.apache.drill.exec.store.json.TestJsonRecordReader
>
>
> > On Sep 24, 2021, at 7:36 PM, Prabhakar Bhosaale wrote:
> >
> > Hi Team,
> > I am getting following error while querying JSON file
> >
> > "(java.lang.Exception) UNSUPPORTED_OPERATION ERROR: Schema changes not
> > supported in External Sort."
> >
> > I have identified the root cause as one of the column has NULL value for
> > certain rows and STRING value for certain rows
> >
> > I am trying to find out How drill decides the datatype of columns and
> > identify the schema changes.
> >
> > I tried following changes in data and got different results.
> >
> > Case 1  - If I remove order by clause in the query then I don't the
> error.
> > Point to note, this specific column is not part of order by clause. But
> it
> > is part of select list
> >
> > Case 2 - If I keep only two rows in file, one with NULL data and other
> with
> > STRING data for given column then no error. Query returns the data
> > successfully
> >
> > Case 3 - I change the value of given column in first to from NULL to
> empty
> > string that is two double quotes then no error
> >
> > Previously somewhere I has read that drill reads initial certain rows of
> > JSON and decides the datatype but not able to find the same now in the
> > documentation.
> >
> > Thanks and Regards
> > Prabhakar
>
>


Json Schema change error

2021-09-24 Thread Prabhakar Bhosaale
Hi Team,
I am getting the following error while querying a JSON file:

"(java.lang.Exception) UNSUPPORTED_OPERATION ERROR: Schema changes not
supported in External Sort."

I have identified the root cause: one of the columns has a NULL value for
certain rows and a STRING value for other rows.

I am trying to find out how Drill decides the datatypes of columns and
identifies schema changes.

I tried the following changes in the data and got different results.

Case 1 - If I remove the ORDER BY clause from the query then I don't get the
error. Point to note: this specific column is not part of the ORDER BY clause,
but it is part of the SELECT list.

Case 2 - If I keep only two rows in the file, one with NULL data and the other
with STRING data for the given column, then there is no error. The query
returns the data successfully.

Case 3 - If I change the value of the given column in the first row from NULL
to an empty string (that is, two double quotes), then there is no error.

Previously I had read somewhere that Drill reads a certain number of initial
JSON rows and decides the datatype, but I am not able to find that in the
documentation now.

Thanks and Regards
Prabhakar
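
One possible workaround sketch for the mixed NULL/STRING column (not taken
from this thread): reading all JSON scalars as text keeps the column at a
single type, so the External Sort no longer sees a schema change, at the cost
of explicit casts afterwards. The file path and column names below are
placeholders.

```
ALTER SESSION SET `store.json.all_text_mode` = true;

SELECT t.id,
       t.comment               -- NULL in some rows, a string in others
FROM dfs.`/data/sample.json` t
ORDER BY t.id;
```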


Re: Starting drill in production

2021-08-23 Thread Prabhakar Bhosaale
Thanks Sanel, I will try it and let you know the result.

Regards
Prabhakar

On Mon, Aug 23, 2021 at 2:21 PM Sanel Zukan  wrote:

> Try with --silent=true or "-log drill.log" arguments. Or redirect nohup
> to /dev/null, with "nohup drill-embedded > /dev/null 2>&1".
>
> Best,
> Sanel
>
> Prabhakar Bhosaale  writes:
> > Hi All,
> > We are deploying drill in standalone mode on production and starting it
> > with nohup. But with nohup it is generating a huge log file. So
> > following are the questions.
> >
> > 1. Is it the correct way to start drill with nohup? If not then what is
> the
> > alternative to keep it running?
> > 2. Are there any settings to reduce the logging in nohup. or change the
> > location of the log file? Currently the log file is getting created under
> > the bin folder.
> >
> > Thanks and regards
> > Prabhakar
>
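
For a single long-running Drillbit (as opposed to the interactive
drill-embedded shell), the bundled service script already daemonizes the
process and writes its logs under the configured log directory, so nohup is
not needed. A sketch, with the log path as a placeholder:

```
# e.g. in conf/drill-env.sh (path is a placeholder)
export DRILL_LOG_DIR=/var/log/drill

# start / check / stop the daemonized Drillbit
bin/drillbit.sh start
bin/drillbit.sh status
bin/drillbit.sh stop
```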


Starting drill in production

2021-08-22 Thread Prabhakar Bhosaale
Hi All,
We are deploying Drill in standalone mode in production and starting it
with nohup, but with nohup it generates a huge log file. The questions
are as follows:

1. Is starting Drill with nohup the correct way? If not, what is the
alternative to keep it running?
2. Are there any settings to reduce the logging under nohup, or to change the
location of the log file? Currently the log file is created under the
bin folder.

Thanks and regards
Prabhakar


Re: Drill on AIX

2021-07-06 Thread Prabhakar Bhosaale
Thanks Charles for your quick response.

Regards
Prabhakar

On Tue, Jul 6, 2021 at 5:24 PM Charles Givre  wrote:

> Hi Prabhakar,
> To the best of my knowledge, Drill is not currently certified on IBM AIX.
> I'm sure the community would welcome the opportunity to get it certified.
> -- C
>
>
> > On Jul 6, 2021, at 6:20 AM, Prabhakar Bhosaale 
> wrote:
> >
> > Hi Team,
> > Is Drill version 1.16 and onwards tested and certified on IBM AIX? Thanks
> >
> > Regards
> > Prabhakar
>
>


Drill on AIX

2021-07-06 Thread Prabhakar Bhosaale
Hi Team,
Is Drill version 1.16 and onwards tested and certified on IBM AIX? Thanks

Regards
Prabhakar


Re: [VOTE] Release Apache Drill 1.19.0 - RC0

2021-06-01 Thread Prabhakar Bhosaale
Vote +1

On Wed, Jun 2, 2021 at 3:12 AM Laurent Goujon  wrote:

> Hi all,
>
> I'd like to propose the first release candidate (RC0) of Apache Drill,
> version 1.19.0.
> The release candidate covers a total of 105 resolved JIRAs [1]. Thanks
> to everyone who contributed to this release.
> The tarball artifacts are hosted at [2] and the maven artifacts are
> hosted at [3].
> This release candidate is based on commit
> ad3f344ac21e0462aa82f51f648a21a0554cf368 located at [4].
> Please download and try out the release.
>
> The vote ends at 5 PM UTC (9 AM PDT, 7 PM EET, 10:30 PM IST), June 4, 2021.
>
> [ ] +1
> [ ] +0
> [ ] -1
> Here's my vote: +1
> Laurent
>
> [1]
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12313820&version=12348331
> [2] https://home.apache.org/~laurent/drill/releases/1.19.0/rc0/
> [3]
> https://repository.apache.org/content/repositories/orgapachedrill-1083/
> [4] https://github.com/laurentgo/drill/commits/drill-1.19.0
>


Re: drill cluster - storage plugin not replicated to one of node

2021-04-21 Thread Prabhakar Bhosaale
Hi James,
Thanks for the pointer. The ZooKeeper nodes were working in
standalone mode. I configured them in replicated mode and it is working
now.

Thanks once again for your help.

Regards
Prabhakar

On Sat, Apr 17, 2021 at 10:17 AM James Turton
 wrote:

> Independently of Drill, you can debug whether ZooKeeper's replication to
> the problematic node is working by using zkCli to create and retrieve
> test zNodes.
>
> On 2021/04/16 16:50, Prabhakar Bhosaale wrote:
> > Hi Team,
> > We have a hadoop cluster of 3 nodes on which drill is deployed. one name
> > node and three data nodes, When we create the storage plugin using the
> > drill ui of name node, the plugin is visible (synced) on 2nd node of
> > cluster but not on 3rd node.
> > Could not find any error in drillbit.log or zookeeper.out files.
> > Any pointers on what could be possible root cause and solution for it?
> >
> > Thanks and regards
> > Prabhakar
> >
>
>
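
The check James describes can be done with ZooKeeper's own CLI; a sketch, with
the ensemble host names as placeholders: create a test znode through one
server and read it back through the server used by the Drillbit that does not
see the plugin. If the second command cannot find the znode, the ensemble is
not replicating, for example because the servers are running standalone rather
than in replicated mode, as turned out to be the case here.

```
# create a test znode via the first ZooKeeper server
bin/zkCli.sh -server node1.example.com:2181 create /drill-replication-test "hello"

# read it back via the ZooKeeper server of the problematic node
bin/zkCli.sh -server node3.example.com:2181 get /drill-replication-test
```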


Re: Drill Storage plugin disappears

2021-04-19 Thread Prabhakar Bhosaale
Hi James,
Thanks, but this problem also persists in standalone mode on Linux.

Regards
Prabhakar

On Mon, Apr 19, 2021 at 1:24 PM James Turton 
wrote:

> Firstly, I'm afraid I misread your message and didn't see that you are
> using distributed mode on Linux, making this a ZooKeeper question.  I
> would start by checking the value of `dataDir` in ZooKeeper's conf/zoo.cfg.
>
> On 2021/04/19 09:30, Prabhakar Bhosaale wrote:
> >Hi James,
> > Thanks for your quick response. What could be the possible solution, or
> > how do I check and get it resolved?  Thanks
> >
> > Regards
> > Prabhakar
> >
> > On Mon, Apr 19, 2021 at 12:43 PM James Turton
> >  wrote:
> >
> >> I've never experienced this myself but if I had to hazard a guess it
> >> would be that, on Linux, your embedded Drill is using an ephemeral
> >> directory to store its config.  Perhaps something under /var/run?
> >>
> >> On 2021/04/19 08:56, Prabhakar Bhosaale wrote:
> >>> Hi Team,
> >>>
> >>> We are finding weird behaviour in apache drill on linux.  The storage
> >>> plugin created on  drill deployed on a linux machine gets disappeared
> >>> whenever we restart the drill.
> >>> But this storage plugin created on drill on windows is always there. We
> >>> tried with drill 1.14 and 1.16 both versions. But the issue is the
> same.
> >>>
> >>> Deployed drill 1.14 in standalone mode on windows and linux
> >>> deployed drill 1.16 in distributed mode on linux.
> >>>
> >>> Please advise. thx
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>
>
>


Re: Drill Storage plugin disappears

2021-04-19 Thread Prabhakar Bhosaale
  Hi James,
Thanks for your quick response. What could be the possible solution? Or how
do I check and get it resolved?  thx

REgards
Prabhakar

On Mon, Apr 19, 2021 at 12:43 PM James Turton
 wrote:

> I've never experienced this myself but if I had to hazard a guess it
> would be that, on Linux, your embedded Drill is using an ephemeral
> directory to store its config.  Perhaps something under /var/run?
>
> On 2021/04/19 08:56, Prabhakar Bhosaale wrote:
> > Hi Team,
> >
> > We are finding weird behaviour in apache drill on linux.  The storage
> > plugin created on  drill deployed on a linux machine gets disappeared
> > whenever we restart the drill.
> > But this storage plugin created on drill on windows is always there. We
> > tried with drill 1.14 and 1.16 both versions. But the issue is the same.
> >
> > Deployed drill 1.14 in standalone mode on windows and linux
> > deployed drill 1.16 in distributed mode on linux.
> >
> > Please advise. thx
> >
> > Regards
> > Prabhakar
> >
>
>


Drill Storage plugin disappears

2021-04-19 Thread Prabhakar Bhosaale
Hi Team,

We are seeing strange behaviour in Apache Drill on Linux.  A storage
plugin created on Drill deployed on a Linux machine disappears
whenever we restart Drill.
A storage plugin created on Drill on Windows, however, is always there. We
tried both Drill 1.14 and 1.16, and the issue is the same.

We deployed Drill 1.14 in standalone mode on Windows and Linux, and
Drill 1.16 in distributed mode on Linux.

Please advise. thx

Regards
Prabhakar


drill cluster - storage plugin not replicated to one of node

2021-04-16 Thread Prabhakar Bhosaale
Hi Team,
We have a Hadoop cluster of 3 nodes on which Drill is deployed: one name
node and three data nodes. When we create a storage plugin using the
Drill UI on the name node, the plugin is visible (synced) on the 2nd node of the
cluster but not on the 3rd node.
We could not find any errors in the drillbit.log or zookeeper.out files.
Any pointers on the possible root cause and a solution for it?

Thanks and regards
Prabhakar


Re: [DISCUSSION] One of the most impressive features

2021-04-03 Thread Prabhakar Bhosaale
Hey Luoc,
Nice to hear about the updates in 1.19. I will see how I can fit it into one of
our real use cases.


Regards
Prabhakar

On Sat, Apr 3, 2021 at 5:55 PM luoc  wrote:

> Hi Prabhakar,
>   Great. Drill can combine data from multiple data sources on the fly in a
> single query, federated query & analysis is one of the features of apache
> drill. That's exactly what I love about drill.
> In release 1.19, drill supported the Cassandra/Scylla, ElasticSearch,
> Splunk, XML and more. Then, they are based on the EVF framework, more
> stability and more powerful than previous version.
>
> > 2021年4月3日 下午8:19,Markenson França  写道:
> >
> > Hi Luoc and Prabhakar,
> >
> > We use Drill for data merging in Brazillian Federal Court at Rio de
> Janeiro.
> >
> > We developed two stages: extraction and consolidation.
> >
> > Extractor get data from several databases (Oracle, MySql, Postgres,
> > SqlServer, Ingres, Http and MUMPS) put them in a standard plain text
> > format.
> >
> > Consolidator is a piece of Python code using Dril for getting all data
> > pieces of plain text and combine them in same standard format.
> >
> > The result are data blocks available by tematic area (HR, Aquisition
> > sector, Law data, etc ) used directely by users (Excel importing via
> > network paths) or available through Metabase*.
> >
> > Using Drill at consolidation stage we are  avoiding production servers
> > overload and joining unthinkable databases like  MUMPS+Oracle+SQL Server.
> > Drill consolidation works at speed of the light (thanks for Drill
> > performance). Querying plain data with SQL is amazing.
> >
> > Regards,
> > Markenson
> >
> > *I have been used a csv driver Metabase we developed to publish Drill
> data
> > for users (https://github.com/Markenson/csv-metabase-driver). I'm
> trying to
> > developed a driver for Drill via jdbc.
> >
> >
> >
> > Em sáb, 3 de abr de 2021 08:18, Prabhakar Bhosaale <
> bhosale@gmail.com>
> > escreveu:
> >
> >> Hi Luoc,
> >> the impressive feature for me is to query the data from files
> >> (json,csv,parquette etc.) using sql syntax. This makes life very easy.
> >> Also i am not sure as i have not tried it but i guess i can query two
> >> different storage (json file and oracle database) and combine the data.
> >> thx
> >>
> >> Regards
> >> Prabhakar
> >>
> >> On Fri, Apr 2, 2021 at 6:53 PM luoc  wrote:
> >>
> >>> Hi all,
> >>>  I'm from drill team, there will be many new features in release 1.19,
> >>> However, I’m also looking forward to getting your reply about using
> >> drill.
> >>>  At ApacheCon 2021 ( + ApacheCon 2021 Asia), there is a topic about
> >> track
> >>> drill talk, So I hope for a positive response that what is one of the
> >> most
> >>> impressive features of drill in your projects?
> >>>  That’s for all the developers and drill users, Thanks for your time.
> >>>
> >>> Kind regards
> >>> luoc
> >>
>
>
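
On the question of combining two different storages: a single Drill query can
join them directly. A minimal JDBC sketch follows; the plugin names (oracle,
dfs), the file path, the table and column names, and the ZooKeeper connect
string are all placeholders for whatever is configured on your Drillbits.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FederatedJoinExample {
  public static void main(String[] args) throws Exception {
    // Connects through ZooKeeper; "jdbc:drill:drillbit=host:31010" also works
    // for a direct connection to a single Drillbit.
    String url = "jdbc:drill:zk=localhost:2181";
    String sql =
        "SELECT e.emp_id, e.emp_name, t.total "
      + "FROM oracle.HR.EMPLOYEES e "              // table exposed through an RDBMS storage plugin
      + "JOIN dfs.`/data/totals.json` t "          // JSON file exposed through a file-system plugin
      + "ON e.emp_id = t.emp_id";

    try (Connection conn = DriverManager.getConnection(url);
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql)) {
      while (rs.next()) {
        System.out.println(rs.getString("emp_name") + " -> " + rs.getString("total"));
      }
    }
  }
}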


Re: [DISCUSSION] One of the most impressive features

2021-04-03 Thread Prabhakar Bhosaale
Hi Markenson,
That is a really nice use case. I guess with this kind of use of data people
will be able to save more on their servers.

REgards
Prabhakar

On Sat, Apr 3, 2021 at 5:50 PM Markenson França 
wrote:

> Hi Luoc and Prabhakar,
>
> We use Drill for data merging in Brazillian Federal Court at Rio de
> Janeiro.
>
> We developed two stages: extraction and consolidation.
>
> Extractor get data from several databases (Oracle, MySql, Postgres,
> SqlServer, Ingres, Http and MUMPS) put them in a standard plain text
> format.
>
> Consolidator is a piece of Python code using Dril for getting all data
> pieces of plain text and combine them in same standard format.
>
> The result are data blocks available by tematic area (HR, Aquisition
> sector, Law data, etc ) used directely by users (Excel importing via
> network paths) or available through Metabase*.
>
> Using Drill at consolidation stage we are  avoiding production servers
> overload and joining unthinkable databases like  MUMPS+Oracle+SQL Server.
> Drill consolidation works at speed of the light (thanks for Drill
> performance). Querying plain data with SQL is amazing.
>
> Regards,
> Markenson
>
> *I have been used a csv driver Metabase we developed to publish Drill data
> for users (https://github.com/Markenson/csv-metabase-driver). I'm trying
> to
> developed a driver for Drill via jdbc.
>
>
>
> Em sáb, 3 de abr de 2021 08:18, Prabhakar Bhosaale 
> escreveu:
>
> > Hi Luoc,
> > the impressive feature for me is to query the data from files
> > (json,csv,parquette etc.) using sql syntax. This makes life very easy.
> > Also i am not sure as i have not tried it but i guess i can query two
> > different storage (json file and oracle database) and combine the data.
> > thx
> >
> > Regards
> > Prabhakar
> >
> > On Fri, Apr 2, 2021 at 6:53 PM luoc  wrote:
> >
> > > Hi all,
> > >   I'm from drill team, there will be many new features in release 1.19,
> > > However, I’m also looking forward to getting your reply about using
> > drill.
> > >   At ApacheCon 2021 ( + ApacheCon 2021 Asia), there is a topic about
> > track
> > > drill talk, So I hope for a positive response that what is one of the
> > most
> > > impressive features of drill in your projects?
> > >   That’s for all the developers and drill users, Thanks for your time.
> > >
> > > Kind regards
> > > luoc
> >
>


Re: [DISCUSSION] One of the most impressive features

2021-04-03 Thread Prabhakar Bhosaale
Hi Luoc,
The impressive feature for me is being able to query data from files
(JSON, CSV, Parquet, etc.) using SQL syntax. This makes life very easy.
Also, I am not sure as I have not tried it, but I guess I can query two
different storages (a JSON file and an Oracle database) and combine the data.  thx

Regards
Prabhakar

On Fri, Apr 2, 2021 at 6:53 PM luoc  wrote:

> Hi all,
>   I'm from drill team, there will be many new features in release 1.19,
> However, I’m also looking forward to getting your reply about using drill.
>   At ApacheCon 2021 ( + ApacheCon 2021 Asia), there is a topic about track
> drill talk, So I hope for a positive response that what is one of the most
> impressive features of drill in your projects?
>   That’s for all the developers and drill users, Thanks for your time.
>
> Kind regards
> luoc


Re: Drill-On-yarn. Not able to start the drill

2021-03-30 Thread Prabhakar Bhosaale
Sure Luoc, will check on it and will update you. Thx

Regards
Prabhakar

On Tue, Mar 30, 2021 at 12:58 PM luoc  wrote:

> Hello,
>   There are instructions in the Drill-on-YARN documentation that covers
> this case, See also DRILL-7180 <
> https://issues.apache.org/jira/browse/DRILL-7180>
>
> > 2021年3月29日 下午1:51,Prabhakar Bhosaale  写道:
> >
> > Hi Luoc,
> > I have checked the troubleshooting steps already and ensured those are
> > taken care. But as of now luck. I will try a few more changes next week.
> > Meanwhile if you have any suggestions or tips it would be appreciated.
> thx
> >
> > Regards
> > Prabhakar
> >
> > On Fri, Mar 26, 2021 at 9:08 PM luoc  wrote:
> >
> >> Hello,
> >>  How is it going? I guess this problem should be caused by a
> >> configuration file errors. So, Did you follow the steps in the docs?
> >>
> >> http://drill.apache.org/docs/appendix-c-troubleshooting
> >>
> >>> 在 2021年3月23日,13:54,Prabhakar Bhosaale  写道:
> >>>
> >>> Dear Luoc,
> >>> Dill did not even start. and log was having only one line as below
> >>>
> >>> /bin/bash:
> >>
> /tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
> >>> No such file or directory
> >>>
> >>>
> >>> Regards
> >>>
> >>> Prabhakar
> >>>
> >>>
> >>>> On Sat, Mar 20, 2021 at 1:34 PM luoc  wrote:
> >>>>
> >>>> Hi,
> >>>> Is glad to see that you are trying to run drill on yarn. But I’m
> >> curious
> >>>> is that drill got an error after started or drill does not to run? And
> >>>> please post the logs file as email attachments.
> >>>>
> >>>>> 2021年3月19日 下午8:36,Prabhakar Bhosaale  写道:
> >>>>>
> >>>>> Hi Team,
> >>>>>
> >>>>> We are trying to start drill using Drill-on-Yarn. We followed the
> >>>>> documentation but got the following error.
> >>>>>
> >>>>>
> >>>>> "Diagnostics: Exception from container-launch.
> >>>>> Container id: container_1616151030436_0008_01_01
> >>>>> Exit code: 127
> >>>>> Stack trace: ExitCodeException exitCode=127:
> >>>>> at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
> >>>>> at org.apache.hadoop.util.Shell.run(Shell.java:456)
> >>>>> at
> >>>>
> >>
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
> >>>>> at
> >>>>>
> >>>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
> >>>>> at
> >>>>>
> >>>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> >>>>> at
> >>>>>
> >>>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> >>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >>>>> at
> >>>>>
> >>>>
> >>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >>>>> at
> >>>>>
> >>>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >>>>> at java.lang.Thread.run(Thread.java:748)"
> >>>>>
> >>>>> when we checked the respective log file it is giving error
> >>>>>
> >>>>> "/bin/bash:
> >>>>
> >>
> /tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
> >>>>> No such file or directory"
> >>>>>
> >>>>> The drill works fine when started in distributed mode.  But gives the
> >>>>> above error when starting using "Drill-On-Yarn".
> >>>>>
> >>>>> Following are the versions of the software that we are using
> >>>>> Drill - 1.18
> >>>>> Hadoop - 2.7
> >>>>> Zookeeper-3.4.5, as per documentation we replaced the zooker file in
> >> the
> >>>>> jars folder.
> >>>>>
> >>>>>
> >>>>> Pls help to resolve it.  Thanks
> >>>>>
> >>>>> Regards
> >>>>> Prabhakar
> >>>>
> >>>>
> >>
>
>


Re: Drill-On-yarn. Not able to start the drill

2021-03-28 Thread Prabhakar Bhosaale
Hi Luoc,
I have already checked the troubleshooting steps and ensured those are
taken care of, but no luck as of now. I will try a few more changes next week.
Meanwhile, if you have any suggestions or tips they would be appreciated. thx

Regards
Prabhakar

On Fri, Mar 26, 2021 at 9:08 PM luoc  wrote:

> Hello,
>   How is it going? I guess this problem should be caused by a
> configuration file errors. So, Did you follow the steps in the docs?
>
> http://drill.apache.org/docs/appendix-c-troubleshooting
>
> > 在 2021年3月23日,13:54,Prabhakar Bhosaale  写道:
> >
> > Dear Luoc,
> > Dill did not even start. and log was having only one line as below
> >
> > /bin/bash:
> /tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
> > No such file or directory
> >
> >
> > Regards
> >
> > Prabhakar
> >
> >
> >> On Sat, Mar 20, 2021 at 1:34 PM luoc  wrote:
> >>
> >> Hi,
> >>  Is glad to see that you are trying to run drill on yarn. But I’m
> curious
> >> is that drill got an error after started or drill does not to run? And
> >> please post the logs file as email attachments.
> >>
> >>> 2021年3月19日 下午8:36,Prabhakar Bhosaale  写道:
> >>>
> >>> Hi Team,
> >>>
> >>> We are trying to start drill using Drill-on-Yarn. We followed the
> >>> documentation but got the following error.
> >>>
> >>>
> >>> "Diagnostics: Exception from container-launch.
> >>> Container id: container_1616151030436_0008_01_01
> >>> Exit code: 127
> >>> Stack trace: ExitCodeException exitCode=127:
> >>> at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
> >>> at org.apache.hadoop.util.Shell.run(Shell.java:456)
> >>> at
> >>
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
> >>> at
> >>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
> >>> at
> >>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> >>> at
> >>>
> >>
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> >>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> >>> at
> >>>
> >>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> >>> at
> >>>
> >>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> >>> at java.lang.Thread.run(Thread.java:748)"
> >>>
> >>> when we checked the respective log file it is giving error
> >>>
> >>> "/bin/bash:
> >>
> /tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
> >>> No such file or directory"
> >>>
> >>> The drill works fine when started in distributed mode.  But gives the
> >>> above error when starting using "Drill-On-Yarn".
> >>>
> >>> Following are the versions of the software that we are using
> >>> Drill - 1.18
> >>> Hadoop - 2.7
> >>> Zookeeper-3.4.5, as per documentation we replaced the zooker file in
> the
> >>> jars folder.
> >>>
> >>>
> >>> Pls help to resolve it.  Thanks
> >>>
> >>> Regards
> >>> Prabhakar
> >>
> >>
>


Re: Drill-On-yarn. Not able to start the drill

2021-03-22 Thread Prabhakar Bhosaale
Dear Luoc,
Drill did not even start, and the log had only one line, as below:

/bin/bash: 
/tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
No such file or directory


Regards

Prabhakar


On Sat, Mar 20, 2021 at 1:34 PM luoc  wrote:

> Hi,
>   Is glad to see that you are trying to run drill on yarn. But I’m curious
> is that drill got an error after started or drill does not to run? And
> please post the logs file as email attachments.
>
> > 2021年3月19日 下午8:36,Prabhakar Bhosaale  写道:
> >
> > Hi Team,
> >
> > We are trying to start drill using Drill-on-Yarn. We followed the
> > documentation but got the following error.
> >
> >
> > "Diagnostics: Exception from container-launch.
> > Container id: container_1616151030436_0008_01_01
> > Exit code: 127
> > Stack trace: ExitCodeException exitCode=127:
> > at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
> > at org.apache.hadoop.util.Shell.run(Shell.java:456)
> > at
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> > at java.lang.Thread.run(Thread.java:748)"
> >
> > when we checked the respective log file it is giving error
> >
> > "/bin/bash:
> /tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
> > No such file or directory"
> >
> > The drill works fine when started in distributed mode.  But gives the
> > above error when starting using "Drill-On-Yarn".
> >
> > Following are the versions of the software that we are using
> > Drill - 1.18
> > Hadoop - 2.7
> > Zookeeper-3.4.5, as per documentation we replaced the zooker file in the
> > jars folder.
> >
> >
> > Pls help to resolve it.  Thanks
> >
> > Regards
> > Prabhakar
>
>


Drill-On-yarn. Not able to start the drill

2021-03-19 Thread Prabhakar Bhosaale
Hi Team,

We are trying to start drill using Drill-on-Yarn. We followed the
documentation but got the following error.


"Diagnostics: Exception from container-launch.
Container id: container_1616151030436_0008_01_01
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)"

when we checked the respective log file it is giving error

"/bin/bash: 
/tmp/hadoop-admin/nm-local-dir/usercache/admin/appcache/application_1616151030436_0008/container_1616151030436_0008_01_01/drill/apache-drill-1.18.0/bin/drill-am.sh:
No such file or directory"

Drill works fine when started in distributed mode, but gives the
above error when started using "Drill-On-Yarn".

Following are the versions of the software that we are using:
Drill - 1.18
Hadoop - 2.7
ZooKeeper - 3.4.5; as per the documentation we replaced the ZooKeeper jar in the
jars folder.


Pls help to resolve it.  Thanks

Regards
Prabhakar


Re: Prepared statment in Drill

2021-02-25 Thread Prabhakar Bhosaale
Hi Luoc,
As suggested by you, I have created the JIRA. The link is
https://issues.apache.org/jira/browse/DRILL-7870

Regards
Prabhakar

On Thu, Feb 25, 2021 at 4:40 PM luoc  wrote:

> Hi,
>   It is not currently supported (See also
> `org.apache.drill.jdbc.PreparedStatementTest#testSqlQueryWithParamNotSupported()`).
>   However, Drill is a user-centric software. So, Would you please create a
> JIRA on https://issues.apache.org <https://issues.apache.org/>? then we
> will evaluate whether to develop. thanks
>
> > 2021年2月25日 下午5:37,Зиновьев Олег  写道:
> >
> > Hi.
> >
> > Drill does not support dynamic parameters
> >
> > Kind regards,
> > Oleg.
> >
> > 25 февр. 2021 г., в 11:27, James Turton  .INVALID<mailto:ja...@somecomputer.xyz.INVALID>> написал(а):
> >
> > If I remember right, prepared statements will cause the JDBC driver to
> run a
> >
> > SELECT * from (your query) LIMIT 0
> >
> > for schema discovery before running your query.  As you may deduce from
> the syntax I've shown, this works but only for when your query is itself a
> SELECT.  One conclusion one could draw is: only use prepared statements for
> SELECTs.
> >
> > On 2021/02/25 09:25, Prabhakar Bhosaale wrote:
> > Hi Team,
> > I am integrating pentaho with Drill to create the reports on drill file
> > storage. I have created the view on files which are accessible in pentaho
> > as views.
> > I am trying to send the prepared statement to query these views. But it
> > seems Drill JDBC driver does not support prepared statements.
> > Any further updates on this? is there any updated JDBC driver
> > available which supports prepared statement?  thanks
> >
> >
> > Regards
> > Prabhakar
> >
> >
> >
>
>


Re: Prepared statment in Drill

2021-02-25 Thread Prabhakar Bhosaale
Hi James,
Thanks for your reply, but it is giving the error

*java.sql.SQLException*: [MapR][DrillJDBCDriver](500980) Encountered error
while creating prepared statement. Details: PLAN ERROR: Cannot convert
RexNode to equivalent Drill expression. RexNode Class:
org.apache.calcite.rex.RexDynamicParam, RexNode Digest: ?0

After googling I understood that it is not supported, so I just wanted to
ask whether there are any further updates on it. thanks

Regards
Prabhakar

On Thu, Feb 25, 2021 at 1:57 PM James Turton 
wrote:

> If I remember right, prepared statements will cause the JDBC driver to
> run a
>
> SELECT * from (your query) LIMIT 0
>
> for schema discovery before running your query.  As you may deduce from
> the syntax I've shown, this works but only for when your query is itself
> a SELECT.  One conclusion one could draw is: only use prepared
> statements for SELECTs.
>
> On 2021/02/25 09:25, Prabhakar Bhosaale wrote:
> > Hi Team,
> > I am integrating pentaho with Drill to create the reports on drill file
> > storage. I have created the view on files which are accessible in pentaho
> > as views.
> > I am trying to send the prepared statement to query these views. But it
> > seems Drill JDBC driver does not support prepared statements.
> > Any further updates on this? is there any updated JDBC driver
> > available which supports prepared statement?  thanks
> >
> >
> > Regards
> > Prabhakar
> >
>
>
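
Until dynamic parameters are supported (see DRILL-7870 above), one client-side
workaround is to build the statement text yourself and run it through a plain
java.sql.Statement. This is a sketch only, with the view name, column and
connect string as placeholders; note that you then own the job of validating
and escaping anything spliced into the SQL.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NoPreparedStatementWorkaround {

  // Escapes single quotes so a user-supplied value can be embedded as a SQL
  // string literal. This is a stop-gap for the missing '?' parameters, not a
  // general injection defence, so validate inputs (whitelists, type checks)
  // before they reach this point.
  static String literal(String value) {
    return "'" + value.replace("'", "''") + "'";
  }

  public static void main(String[] args) throws Exception {
    String customerId = "C-1001";   // the value that would have been bound to a '?' parameter
    String sql = "SELECT * FROM dfs.tmp.`customer_view` WHERE customer_id = " + literal(customerId);

    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=localhost:2181");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql)) {
      while (rs.next()) {
        System.out.println(rs.getString(1));
      }
    }
  }
}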


Prepared statment in Drill

2021-02-24 Thread Prabhakar Bhosaale
Hi Team,
I am integrating Pentaho with Drill to create reports on Drill file
storage. I have created views on the files, which are accessible in Pentaho
as views.
I am trying to send a prepared statement to query these views, but it
seems the Drill JDBC driver does not support prepared statements.
Are there any further updates on this? Is there an updated JDBC driver
available which supports prepared statements?  thanks


Regards
Prabhakar


Re: Question about multiple queries through REST API

2021-02-24 Thread Prabhakar Bhosaale
Hi,
Another way is to qualify the table name with the full schema name, as follows:

select t1.code, t2.name
   from  fsfile.default.`dataFile1.psv` t1
   join  fsfile.default.`dataFile2.psv` t2
   on t1.code=t2.code;

Regards
Prabhakar

On Thu, Feb 25, 2021 at 2:41 AM Charles Givre  wrote:

> HI there,
> I don't think the Drill REST API accepts multiple queries in one request.
> However there is an alternative which doesn't appear to be in the
> documentation. When you submit your query there is a parameter called
> defaultSchema (I think) that you can set that has the effect of executing a
> USE statement prior to the query.   That will accomplish what you're trying
> to do there.
>
> Best,
> -- C
>
> > On Feb 24, 2021, at 3:42 PM, KimJohn Quinn  wrote:
> >
> > Hello everyone.
> > We just started integrating Drill for a POC (i.e. embedded).  Through a
> > REST API a user can submit one or more queries which in turn are sent to
> > Drill via its own REST API (/query.json).
> >
> > How can I submit multiple queries where the current query may be
> > dependent on the previous one?
> >
> > As an example:
> >
> >   1. use fsfile.`default`;
> >
> >   2. select t1.code, t2.name
> >   from `dataFile1.psv` t1
> >   join `dataFile2.psv` t2
> >   on t1.code=t2.code;
> >
> > Can I do this through the REST API or should I be using the JDBC API
> > directly instead to allow for multiple statements to be executed?
> >
> > Thanks.
>
>
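
A rough sketch of the single-request approach using only the JDK's
HttpURLConnection. The defaultSchema field name is taken from Charles's note
above and may vary by Drill version, so treat it as an assumption and check
the REST docs for your release; host, port, schema and file names are
placeholders.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class DrillRestQuery {
  public static void main(String[] args) throws Exception {
    // Setting defaultSchema should have the same effect as running
    // "USE fsfile.default" before the query.
    String body = "{"
        + "\"queryType\":\"SQL\","
        + "\"defaultSchema\":\"fsfile.default\","
        + "\"query\":\"SELECT t1.code, t2.name FROM `dataFile1.psv` t1 "
        +             "JOIN `dataFile2.psv` t2 ON t1.code = t2.code\""
        + "}";

    HttpURLConnection conn =
        (HttpURLConnection) new URL("http://localhost:8047/query.json").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream os = conn.getOutputStream()) {
      os.write(body.getBytes(StandardCharsets.UTF_8));
    }

    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);   // JSON response with the result rows
      }
    }
  }
}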


Re: Squirrel not showing JSON table

2021-02-24 Thread Prabhakar Bhosaale
Hi Luoc,
This is exactly what I tried and it worked. This solves the problem. thanks

Regards
Prabhakar

On Wed, Feb 24, 2021 at 4:12 PM luoc  wrote:

> Hi,
>   I think `CREATE VIEW` should be the answer you want. please view the
> attachments of email to understand Drill views.
>
> 1. `select *` to view all columns.
> 2. `create view` to create view schema.
> 3. `show tables`,`describe table_name` to verify the schema.
>
> See also http://drill.apache.org/docs/create-view
>
>
> 2021年2月23日 下午9:35,Prabhakar Bhosaale  写道:
>
> hi Luco,
> Thanks for quick response. The "Show files" or querying INFORMATION_SCHEMA
> is not an option for me as I need to give an option to business users to
> generate reports from BI tools.
> So I am trying to integrate with Pentaho. I am able to create the
> datasource and query the data by directly writing the query. But it is not
> listing the Json files or folders. Is there any way to achieve that? thanks
>
> Regards
> Prabhakar
>
> On Tue, Feb 23, 2021 at 12:44 PM luoc  wrote:
>
> Hello,
>  This is the default behavior. you could use the `show files` command to
> list files. See also http://drill.apache.org/docs/show-files <
> http://drill.apache.org/docs/show-files>. And then please don’t paste the
> image for apache email, this is not supported. as an attachment file is a
> simple way.
>
>  Also welcome to join our Slack Channel: https://bit.ly/3t4rozO <
> https://bit.ly/3t4rozO>
>
> 2021年2月23日 下午12:05,Prabhakar Bhosaale  写道:
>
> Hi Team,
> I am using apache dril 1.16.0 with MapR JDBC driver from URL
>
>
> https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/MapRDrill_jdbc_v1.6.6.1009/
> <
>
> https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/MapRDrill_jdbc_v1.6.6.1009/
>
>
>
> I used this driver to connect to drill datasource from squirrel. I have
>
> two storage plug-in one for JSON files and another for oracle
>
>
> When I open connection through squirrel, I do not see table listing for
>
> JSON storage but I see table listing for oracle DB plug-in.  Below is the
> screenshot.
>
>
> Am I missing something in configuration? Please help
>
> Thanks and Regards
> Prabhakar
>
>
>
>
>
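
For completeness, a sketch of the view-based approach over JDBC: create a view
on top of the JSON file once, and BI tools that build their table lists from
standard JDBC metadata should then be able to show it. The workspace (dfs.tmp)
and file path are placeholders; whichever workspace holds the view must be
writable.

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ViewForBiTools {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=localhost:2181");
         Statement stmt = conn.createStatement()) {

      // Views need a writable workspace; dfs.tmp is writable in the default dfs plugin config.
      stmt.execute("CREATE OR REPLACE VIEW dfs.tmp.`orders_v` AS "
          + "SELECT CAST(order_id AS INT) AS order_id, customer, amount "
          + "FROM dfs.`/data/orders.json`");

      // Roughly what Squirrel/Pentaho do when they populate their table lists.
      DatabaseMetaData md = conn.getMetaData();
      try (ResultSet tables = md.getTables(null, "dfs.tmp", "%", null)) {
        while (tables.next()) {
          System.out.println(tables.getString("TABLE_SCHEM") + "." + tables.getString("TABLE_NAME"));
        }
      }
    }
  }
}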


Re: Squirrel not showing JSON table

2021-02-23 Thread Prabhakar Bhosaale
Hi Luoc,
Thanks for the quick response. "SHOW FILES" or querying INFORMATION_SCHEMA
is not an option for me, as I need to give business users a way to
generate reports from BI tools.
So I am trying to integrate with Pentaho. I am able to create the
datasource and query the data by writing the query directly, but it is not
listing the JSON files or folders. Is there any way to achieve that? thanks

Regards
Prabhakar

On Tue, Feb 23, 2021 at 12:44 PM luoc  wrote:

> Hello,
>   This is the default behavior. you could use the `show files` command to
> list files. See also http://drill.apache.org/docs/show-files <
> http://drill.apache.org/docs/show-files>. And then please don’t paste the
> image for apache email, this is not supported. as an attachment file is a
> simple way.
>
>   Also welcome to join our Slack Channel: https://bit.ly/3t4rozO <
> https://bit.ly/3t4rozO>
>
> > 2021年2月23日 下午12:05,Prabhakar Bhosaale  写道:
> >
> > Hi Team,
> > I am using apache dril 1.16.0 with MapR JDBC driver from URL
> >
> https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/MapRDrill_jdbc_v1.6.6.1009/
> <
> https://package.mapr.com/tools/MapR-JDBC/MapR_Drill/MapRDrill_jdbc_v1.6.6.1009/
> >
> >
> > I used this driver to connect to drill datasource from squirrel. I have
> two storage plug-in one for JSON files and another for oracle
> >
> > When I open connection through squirrel, I do not see table listing for
> JSON storage but I see table listing for oracle DB plug-in.  Below is the
> screenshot.
> >
> > Am I missing something in configuration? Please help
> >
> > Thanks and Regards
> > Prabhakar
> >
>
>


Re: ODBC driver status?

2020-10-22 Thread Prabhakar Bhosaale
Hi Charles,

I had asked about this issue on this forum on 15th May 2020 but did not get a
response. Below is the error that I had reported. If you advise, I will raise the
issue again with the details below. thx

"Hi Team,
I am trying to use Apache drill driver 1.16.0 in java servlet on
tomcat 7.70.  When i run the web application and try to use
Class.forName("org.apache.drill.jdbc.Driver");

It gave the error java.lang.ClassNotFoundException:
org.apache.drill.jdbc.Driver

The driver file is copied to WebContent\WEB-INF\lib folder.

But if I try to use the same code from java standalone application then it
works perfectly.  So any suggestions on how to resolve this issue?"

On Thu, Oct 22, 2020 at 5:19 PM Charles Givre  wrote:

> Hi Prabhakar,
> Thanks for sending this.  Would you mind documenting these issues with the
> JDBC driver in JIRA (issues.apache.org <http://issues.apache.org/>)?
> Thanks!
> -- C
>
> > On Oct 22, 2020, at 2:16 AM, Prabhakar Bhosaale 
> wrote:
> >
> > Hi,
> >
> > Just some additional information on JDBC driver based on my experience.
> >
> > If you are planning to use JDBC driver with tomcat server then the driver
> > which comes with the drill binary will not work as it has not implemented
> > certain classes and packages as required by tomcat. So I ended up using
> > "Simba drill JDBC driver". I guess it is an extension of MapR driver
> though
> > I am not sure.
> >
> > Thanks and Regards
> > Prabhakar
> >
> >
> > On Thu, Oct 22, 2020 at 3:09 AM Rafael Jaimes III 
> > wrote:
> >
> >> The ODBC driver hasn't seen an update for Drill since version 1.15.
> >> It was working fine with Drill 1.16 but since Drill 1.17, some minor
> errors
> >> have arisen.
> >>
> >> I would switch to JDBC driver where possible, not the MapR one, but the
> one
> >> that comes updated with the Drill binary.
> >>
> >> You can also use the REST API which is getting better recently which
> >> requires no driver at all, but JDBC is going to offer the best
> performance.
> >>
> >> Best,
> >> Rafael
> >>
> >>
> >> On Wed, Oct 21, 2020 at 5:10 PM Gareth Western <
> gar...@garethwestern.com>
> >> wrote:
> >>
> >>> Hi, Is the ODBC driver still maintained / usable? The download location
> >>> documented on the website[1] looks like it hasn’t been updated since
> >> 2018.
> >>>
> >>>
> >>>
> >>>  1.  http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/
> >>>
> >>
>
>


Re: ODBC driver status?

2020-10-22 Thread Prabhakar Bhosaale
Hi,

Just some additional information on the JDBC driver based on my experience.

If you are planning to use the JDBC driver with a Tomcat server, then the driver
which comes with the Drill binary will not work, as it does not implement
certain classes and packages required by Tomcat. So I ended up using the
"Simba Drill JDBC driver". I guess it is an extension of the MapR driver, though
I am not sure.

Thanks and Regards
Prabhakar


On Thu, Oct 22, 2020 at 3:09 AM Rafael Jaimes III 
wrote:

> The ODBC driver hasn't seen an update for Drill since version 1.15.
> It was working fine with Drill 1.16 but since Drill 1.17, some minor errors
> have arisen.
>
> I would switch to JDBC driver where possible, not the MapR one, but the one
> that comes updated with the Drill binary.
>
> You can also use the REST API which is getting better recently which
> requires no driver at all, but JDBC is going to offer the best performance.
>
> Best,
> Rafael
>
>
> On Wed, Oct 21, 2020 at 5:10 PM Gareth Western 
> wrote:
>
> > Hi, Is the ODBC driver still maintained / usable? The download location
> > documented on the website[1] looks like it hasn’t been updated since
> 2018.
> >
> >
> >
> >   1.  http://package.mapr.com/tools/MapR-ODBC/MapR_Drill/
> >
>


Re: Drill JDBC driver not working with Apache tomcat

2020-05-15 Thread Prabhakar Bhosaale
Hi Team,

Please help or give any pointers on the issue below. Your help is greatly
appreciated. Thx

Regards
Prabhakar


On Fri, May 15, 2020 at 1:35 PM Prabhakar Bhosaale 
wrote:

> Hi Team,
> I am trying to use Apache drill driver 1.16.0 in java servlet on
> tomcat 7.70.  When i run the web application and try to use
> Class.forName("org.apache.drill.jdbc.Driver");
>
> It give error java.lang.ClassNotFoundException:
> org.apache.drill.jdbc.Driver
>
> The driver file is copied to WebContent\WE-INF\lib folder.
>
> But if try to use same code from java standalone application then it works
> perfectly.  So any suggestions on how to resolve this issue?
>
> Thanks in advance..
>
> Regards
> Prabhakar
>
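
For web-app deployments in general, the jar that needs to land in WEB-INF/lib
is the self-contained JDBC driver shipped with the Drill distribution
(typically named along the lines of drill-jdbc-all-<version>.jar under
jars/jdbc-driver/), rather than the slim drill-jdbc jar, since the shaded jar
carries the driver's dependencies. A minimal servlet-side sketch, with the
connect string as a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DrillLookup {
  // With a JDBC 4 driver jar in WEB-INF/lib, DriverManager discovers the driver
  // through META-INF/services, so an explicit Class.forName(...) is usually
  // unnecessary.
  public static String firstEmployeeName() throws Exception {
    try (Connection conn = DriverManager.getConnection("jdbc:drill:zk=zkhost:2181");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT full_name FROM cp.`employee.json` LIMIT 1")) {
      return rs.next() ? rs.getString("full_name") : null;
    }
  }
}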


Re: Drill's Future

2020-05-15 Thread Prabhakar Bhosaale
Hi Charles,
I have never contributed to any open source community before this. Also, I
am not an expert in Java, but I am willing to contribute for sure. Please let me
know in what capacity I can be helpful here.

Also, I would like to know whether there are any efforts going on to identify a
primary supporter for Apache Drill, and what its future is in case there is no
primary supporter. My organization is planning to build some tools using
Drill and this news will definitely impact that. thx

Regards
Prabhakar

On Fri, May 15, 2020 at 8:18 PM Charles Givre  wrote:

> Dear Drill Users,
> I hope everyone weathering the COVID storm and staying safe.
>
> As you may or may not be aware, I found out this week that MapR's
> successor, HPE has for all intents and purposes ended its support for
> Drill.  Whilst they will be contributing minimal bug fixes, they will not
> be adding any new functionality or new features.  This leaves the open
> source Drill without a primary backer.  Under the Apache rules, to commit
> code to the code base it must be reviewed and voted upon. While we have
> committers, we do not currently have enough code reviewers to review new
> commits.
>
> My question to you as a Drill user is whether or not Drill is offering
> value to you and your organization.  If it is, and you would like to see
> Drill development continue, then we need additional volunteers to step up
> and review pull requests.
>
> If you're interested in reviewing code for Drill, please let me know.
>
> Thank you for your continued support and stay safe!
> -- Charles


Drill JDBC driver not working with Apache tomcat

2020-05-15 Thread Prabhakar Bhosaale
Hi Team,
I am trying to use the Apache Drill JDBC driver 1.16.0 in a Java servlet on
Tomcat 7.70.  When I run the web application and try to use
Class.forName("org.apache.drill.jdbc.Driver");

it gives the error java.lang.ClassNotFoundException:
org.apache.drill.jdbc.Driver

The driver file is copied to the WebContent\WEB-INF\lib folder.

But if I try to use the same code from a standalone Java application, it works
perfectly.  So any suggestions on how to resolve this issue?

Thanks in advance..

Regards
Prabhakar


Re: Querying encrypted JSON file

2020-04-12 Thread Prabhakar Bhosaale
Hi Paul,
Thanks for the details. As of now I have not finalized any encryption
technique, as I first wanted to understand Drill's capabilities around encryption
and decryption.
To give you more details on my requirement: I will be archiving data in
JSON format from a database, and that archived data will be accessed using
Drill for reporting purposes. I am already zipping up the JSON files using gzip,
but for security reasons I need to encrypt the files as well. Thx

Regards
Prabhakar



On Sun, Apr 12, 2020, 11:38 Paul Rogers  wrote:

> Hi Prabhakar,
>
> Depending on how you perform encryption, you may be able to treat it
> similar to compression. Drill handles compression (zip, gzip, etc.) via an
> extra layer of functionality on top of any format plugin. That means,
> rather than writing a new JSON file reader, you write a new compression
> plugin (which will actually do decryption). I have not added one of these,
> but I'll poke around to see if I can find some pointers.
>
> On the other hand, if encryption is part of the access protocol (such as
> S3), then you can configure it via the S3 client.
>
> Can you describe a bit more how you encrypt your files and what is needed
> to decrypt?
>
>
> Thanks,
> - Paul
>
>
>
>     On Saturday, April 11, 2020, 10:39:15 PM PDT, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi Ted,
> Thanks for your reply. Could you please give some more details on how to
> write to create file format, how to use it. Any pointers will be
> appreciated. Thx
>
> Regards
> Prabhakar
>
> On Sun, Apr 12, 2020, 00:19 Ted Dunning  wrote:
>
> > Yes.
> >
> > You need to write a special file format for that, though.
> >
> >
> > On Sat, Apr 11, 2020 at 6:58 AM Prabhakar Bhosaale <
> bhosale@gmail.com>
> > wrote:
> >
> > > Hi All,
> > > I have a  encrypted JSON file. is there any way in drill to query the
> > > encrypted JSON file? Thanks
> > >
> > > Regards
> > > Prabhakar
> > >
> >
>
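
While the in-Drill route is the custom format/codec plugin Ted and Paul
describe, a pragmatic interim approach is to decrypt the archives outside Drill
into a directory that a dfs workspace points at, and let Drill's normal gzip
handling take over from there. The sketch below assumes AES/GCM with a 12-byte
IV prepended to each file; the key handling, IV layout and paths are
placeholders for whatever your archiver actually does.

import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.DataInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ArchiveDecryptor {

  // Decrypts one encrypted archive (e.g. orders.json.gz.enc) into a plain
  // .json.gz file inside a directory that a dfs workspace points at, so
  // Drill's built-in gzip handling can take over from there.
  public static void decrypt(Path encrypted, Path target, byte[] aesKey) throws Exception {
    try (InputStream in = Files.newInputStream(encrypted);
         OutputStream out = Files.newOutputStream(target)) {
      byte[] iv = new byte[12];
      new DataInputStream(in).readFully(iv);          // assumption: 12-byte IV stored at the front
      Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
      cipher.init(Cipher.DECRYPT_MODE, new SecretKeySpec(aesKey, "AES"),
                  new GCMParameterSpec(128, iv));
      try (InputStream cin = new CipherInputStream(in, cipher)) {
        byte[] buf = new byte[8192];
        int n;
        while ((n = cin.read(buf)) != -1) {
          out.write(buf, 0, n);
        }
      }
    }
  }

  public static void main(String[] args) throws Exception {
    byte[] key = Files.readAllBytes(Paths.get("/secure/archive.key"));      // placeholder key source
    decrypt(Paths.get("/archive/2020-04.json.gz.enc"),
            Paths.get("/drill-staging/2020-04.json.gz"), key);
  }
}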


Re: Querying encrypted JSON file

2020-04-11 Thread Prabhakar Bhosaale
Hi Ted,
Thanks for your reply. Could you please give some more details on how to
write such a file format and how to use it? Any pointers will be
appreciated. Thx

Regards
Prabhakar

On Sun, Apr 12, 2020, 00:19 Ted Dunning  wrote:

> Yes.
>
> You need to write a special file format for that, though.
>
>
> On Sat, Apr 11, 2020 at 6:58 AM Prabhakar Bhosaale 
> wrote:
>
> > Hi All,
> > I have a  encrypted JSON file. is there any way in drill to query the
> > encrypted JSON file? Thanks
> >
> > Regards
> > Prabhakar
> >
>


Querying encrypted JSON file

2020-04-11 Thread Prabhakar Bhosaale
Hi All,
I have an encrypted JSON file. Is there any way in Drill to query the
encrypted JSON file? Thanks

Regards
Prabhakar


Re: Drill embedded mode on Linux

2020-04-10 Thread Prabhakar Bhosaale
Hi Jaimes,
I installed the OpenJDK devel package and it worked. Thanks for your help.

Regards
Prabhakar

On Thu, Apr 9, 2020, 19:57 Prabhakar Bhosaale  wrote:

> Thanks Jaims, This helps. I have only openJDK. I will get the devel and
> will update you.
>
> Regards
> Prabhakar
>
> On Thu, Apr 9, 2020 at 7:53 PM Rafael Jaimes III 
> wrote:
>
>> Prab,
>>
>> I don't think screenshots work on the list. What distro are you using?
>>
>> On Red Hat, OpenJDK is a JRE but OpenJDK-devel has the JDK. It may be
>> confusing.
>>
>> On Thu, Apr 9, 2020, 10:17 AM Prabhakar Bhosaale 
>> wrote:
>>
>> > Hi All,
>> >
>> > Just to give you some additional information. I came across information
>> on
>> >
>> http://www.openkb.info/2017/05/drill-errors-with-jdk-java-compiler-not.html
>> >
>> >
>> > As per this article, my output of step 2 is not as expected.  But this
>> > article does not mention what to do in this case.  thx
>> >
>> > Regards
>> > Prabhakar
>> >
>> > On Thu, Apr 9, 2020 at 7:37 PM Prabhakar Bhosaale <
>> bhosale@gmail.com>
>> > wrote:
>> >
>> >> Hi James,
>> >> thanks for quick reply.
>> >> Below is Java version screenshot. As per documentation this is correct.
>> >> [image: image.png]
>> >>
>> >> Below is screenshot of java path. this is also correct. But still same
>> >> error
>> >> [image: image.png]
>> >>
>> >> Regards
>> >> Prabhakar
>> >>
>> >> On Thu, Apr 9, 2020 at 7:08 PM Jaimes, Rafael - 0993 - MITLL <
>> >> rafael.jai...@ll.mit.edu> wrote:
>> >>
>> >>> The error tells you that it's not finding a Java 1.8 JDK. You can use
>> >>> OpenJDK
>> >>> 1.8 for the job.
>> >>> I would check:
>> >>> 1) your java version (both version # and whether it is a JDK, not a
>> JRE)
>> >>> 2) your java path env vars
>> >>>
>> >>> -Original Message-
>> >>> From: Prabhakar Bhosaale 
>> >>> Sent: Thursday, April 9, 2020 9:29 AM
>> >>> To: user@drill.apache.org
>> >>> Subject: Drill embedded mode on Linux
>> >>>
>> >>> Hi All,
>> >>> I am using drill 1.16 and trying to start the drill in embedded mode
>> on
>> >>> linux
>> >>> machine. Following the documentation from drill website.
>> >>>
>> >>> I am using  bin/drill-embedded command but it is giving following
>> error.
>> >>> Checked the java version and it is correct.  Please help urgently. thx
>> >>>
>> >>> Regards
>> >>> Prabhakar
>> >>>
>> >>> Error: Failure in starting embedded Drillbit:
>> >>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
>> >>> compiler
>> >>> not available. Ensure Drill is running with the java executable from a
>> >>> JDK and
>> >>> not a JRE (state=,code=0)
>> >>> java.sql.SQLException: Failure in starting embedded Drillbit:
>> >>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
>> >>> compiler
>> >>> not available. Ensure Drill is running with the java executable from a
>> >>> JDK and
>> >>> not a JRE
>> >>> at
>> >>>
>> >>>
>> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:143)
>> >>> at
>> >>>
>> >>>
>> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
>> >>> at
>> >>>
>> >>>
>> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
>> >>> at
>> >>>
>> >>>
>> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>> >>> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
>> >>> at
>> >>> sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
>> >>> at
>> >>> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
>> >>> at sqlline.Commands.connect(Commands.java:1278)
>> >>> at sqlline.Commands.connect

Re: Drill embedded mode on Linux

2020-04-09 Thread Prabhakar Bhosaale
Thanks Jaimes, this helps. I have only the plain OpenJDK package (the JRE); I will
get the devel one and will update you.

Regards
Prabhakar

On Thu, Apr 9, 2020 at 7:53 PM Rafael Jaimes III 
wrote:

> Prab,
>
> I don't think screenshots work on the list. What distro are you using?
>
> On Red Hat, OpenJDK is a JRE but OpenJDK-devel has the JDK. It may be
> confusing.
>
> On Thu, Apr 9, 2020, 10:17 AM Prabhakar Bhosaale 
> wrote:
>
> > Hi All,
> >
> > Just to give you some additional information. I came across information
> on
> >
> http://www.openkb.info/2017/05/drill-errors-with-jdk-java-compiler-not.html
> >
> >
> > As per this article, my output of step 2 is not as expected.  But this
> > article does not mention what to do in this case.  thx
> >
> > Regards
> > Prabhakar
> >
> > On Thu, Apr 9, 2020 at 7:37 PM Prabhakar Bhosaale  >
> > wrote:
> >
> >> Hi James,
> >> thanks for quick reply.
> >> Below is Java version screenshot. As per documentation this is correct.
> >> [image: image.png]
> >>
> >> Below is screenshot of java path. this is also correct. But still same
> >> error
> >> [image: image.png]
> >>
> >> Regards
> >> Prabhakar
> >>
> >> On Thu, Apr 9, 2020 at 7:08 PM Jaimes, Rafael - 0993 - MITLL <
> >> rafael.jai...@ll.mit.edu> wrote:
> >>
> >>> The error tells you that it's not finding a Java 1.8 JDK. You can use
> >>> OpenJDK
> >>> 1.8 for the job.
> >>> I would check:
> >>> 1) your java version (both version # and whether it is a JDK, not a
> JRE)
> >>> 2) your java path env vars
> >>>
> >>> -Original Message-
> >>> From: Prabhakar Bhosaale 
> >>> Sent: Thursday, April 9, 2020 9:29 AM
> >>> To: user@drill.apache.org
> >>> Subject: Drill embedded mode on Linux
> >>>
> >>> Hi All,
> >>> I am using drill 1.16 and trying to start the drill in embedded mode on
> >>> linux
> >>> machine. Following the documentation from drill website.
> >>>
> >>> I am using  bin/drill-embedded command but it is giving following
> error.
> >>> Checked the java version and it is correct.  Please help urgently. thx
> >>>
> >>> Regards
> >>> Prabhakar
> >>>
> >>> Error: Failure in starting embedded Drillbit:
> >>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
> >>> compiler
> >>> not available. Ensure Drill is running with the java executable from a
> >>> JDK and
> >>> not a JRE (state=,code=0)
> >>> java.sql.SQLException: Failure in starting embedded Drillbit:
> >>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
> >>> compiler
> >>> not available. Ensure Drill is running with the java executable from a
> >>> JDK and
> >>> not a JRE
> >>> at
> >>>
> >>>
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:143)
> >>> at
> >>>
> >>>
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
> >>> at
> >>>
> >>>
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
> >>> at
> >>>
> >>>
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
> >>> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> >>> at
> >>> sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
> >>> at
> >>> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
> >>> at sqlline.Commands.connect(Commands.java:1278)
> >>> at sqlline.Commands.connect(Commands.java:1172)
> >>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>> at
> >>>
> >>>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >>> at
> >>>
> >>>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>> at java.lang.reflect.Method.invoke(Method.java:498)
> >>> at
> >>>
> >>>
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> >>> at sqlline.SqlLine.dispatch(SqlLine.java:736)
> >>> at sqlline.SqlLine.initArgs(SqlLine.java:428)
> >>> at sqlline.SqlLine.begin(SqlLine.java:531)
> >>> at sqlline.SqlLine.start(SqlLine.java:270)
> >>> at sqlline.SqlLine.main(SqlLine.java:201)
> >>> Caused by: org.apache.drill.exec.exception.DrillbitStartupException:
> JDK
> >>> Java
> >>> compiler not available. Ensure Drill is running with the java
> executable
> >>> from
> >>> a JDK and not a JRE
> >>> at
> >>> org.apache.drill.exec.server.Drillbit.(Drillbit.java:152)
> >>> at
> >>> org.apache.drill.exec.server.Drillbit.(Drillbit.java:125)
> >>> at
> >>>
> >>>
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:133)
> >>> ... 18 more
> >>>
> >>
>


Re: Drill embedded mode on Linux

2020-04-09 Thread Prabhakar Bhosaale
Hi All,

Just to give you some additional information: I came across the article at
http://www.openkb.info/2017/05/drill-errors-with-jdk-java-compiler-not.html

As per this article, my output of step 2 is not as expected, but the
article does not mention what to do in this case.  thx

Regards
Prabhakar

On Thu, Apr 9, 2020 at 7:37 PM Prabhakar Bhosaale 
wrote:

> Hi James,
> thanks for quick reply.
> Below is Java version screenshot. As per documentation this is correct.
> [image: image.png]
>
> Below is screenshot of java path. this is also correct. But still same
> error
> [image: image.png]
>
> Regards
> Prabhakar
>
> On Thu, Apr 9, 2020 at 7:08 PM Jaimes, Rafael - 0993 - MITLL <
> rafael.jai...@ll.mit.edu> wrote:
>
>> The error tells you that it's not finding a Java 1.8 JDK. You can use
>> OpenJDK
>> 1.8 for the job.
>> I would check:
>> 1) your java version (both version # and whether it is a JDK, not a JRE)
>> 2) your java path env vars
>>
>> -Original Message-
>> From: Prabhakar Bhosaale 
>> Sent: Thursday, April 9, 2020 9:29 AM
>> To: user@drill.apache.org
>> Subject: Drill embedded mode on Linux
>>
>> Hi All,
>> I am using drill 1.16 and trying to start the drill in embedded mode on
>> linux
>> machine. Following the documentation from drill website.
>>
>> I am using  bin/drill-embedded command but it is giving following error.
>> Checked the java version and it is correct.  Please help urgently. thx
>>
>> Regards
>> Prabhakar
>>
>> Error: Failure in starting embedded Drillbit:
>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
>> compiler
>> not available. Ensure Drill is running with the java executable from a
>> JDK and
>> not a JRE (state=,code=0)
>> java.sql.SQLException: Failure in starting embedded Drillbit:
>> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
>> compiler
>> not available. Ensure Drill is running with the java executable from a
>> JDK and
>> not a JRE
>> at
>>
>> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:143)
>> at
>>
>> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
>> at
>>
>> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
>> at
>>
>> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
>> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
>> at
>> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
>> at sqlline.Commands.connect(Commands.java:1278)
>> at sqlline.Commands.connect(Commands.java:1172)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at
>>
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at
>>
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at
>> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>> at sqlline.SqlLine.dispatch(SqlLine.java:736)
>> at sqlline.SqlLine.initArgs(SqlLine.java:428)
>> at sqlline.SqlLine.begin(SqlLine.java:531)
>> at sqlline.SqlLine.start(SqlLine.java:270)
>> at sqlline.SqlLine.main(SqlLine.java:201)
>> Caused by: org.apache.drill.exec.exception.DrillbitStartupException: JDK
>> Java
>> compiler not available. Ensure Drill is running with the java executable
>> from
>> a JDK and not a JRE
>> at org.apache.drill.exec.server.Drillbit.(Drillbit.java:152)
>> at org.apache.drill.exec.server.Drillbit.(Drillbit.java:125)
>> at
>>
>> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:133)
>> ... 18 more
>>
>


Re: Drill embedded mode on Linux

2020-04-09 Thread Prabhakar Bhosaale
Hi James,
Thanks for the quick reply.
Below is the Java version screenshot. As per the documentation this is correct.
[image: image.png]

Below is a screenshot of the Java path. This is also correct. But still the same error.
[image: image.png]

Regards
Prabhakar

On Thu, Apr 9, 2020 at 7:08 PM Jaimes, Rafael - 0993 - MITLL <
rafael.jai...@ll.mit.edu> wrote:

> The error tells you that it's not finding a Java 1.8 JDK. You can use
> OpenJDK
> 1.8 for the job.
> I would check:
> 1) your java version (both version # and whether it is a JDK, not a JRE)
> 2) your java path env vars
>
> -Original Message-----
> From: Prabhakar Bhosaale 
> Sent: Thursday, April 9, 2020 9:29 AM
> To: user@drill.apache.org
> Subject: Drill embedded mode on Linux
>
> Hi All,
> I am using drill 1.16 and trying to start the drill in embedded mode on
> linux
> machine. Following the documentation from drill website.
>
> I am using  bin/drill-embedded command but it is giving following error.
> Checked the java version and it is correct.  Please help urgently. thx
>
> Regards
> Prabhakar
>
> Error: Failure in starting embedded Drillbit:
> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
> compiler
> not available. Ensure Drill is running with the java executable from a JDK
> and
> not a JRE (state=,code=0)
> java.sql.SQLException: Failure in starting embedded Drillbit:
> org.apache.drill.exec.exception.DrillbitStartupException: JDK Java
> compiler
> not available. Ensure Drill is running with the java executable from a JDK
> and
> not a JRE
> at
>
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:143)
> at
>
> org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
> at
> org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
> at
>
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
> at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
> at
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
> at sqlline.Commands.connect(Commands.java:1278)
> at sqlline.Commands.connect(Commands.java:1172)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
>
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
>
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
> at sqlline.SqlLine.dispatch(SqlLine.java:736)
> at sqlline.SqlLine.initArgs(SqlLine.java:428)
> at sqlline.SqlLine.begin(SqlLine.java:531)
> at sqlline.SqlLine.start(SqlLine.java:270)
> at sqlline.SqlLine.main(SqlLine.java:201)
> Caused by: org.apache.drill.exec.exception.DrillbitStartupException: JDK
> Java
> compiler not available. Ensure Drill is running with the java executable
> from
> a JDK and not a JRE
> at org.apache.drill.exec.server.Drillbit.(Drillbit.java:152)
> at org.apache.drill.exec.server.Drillbit.(Drillbit.java:125)
> at
>
> org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:133)
> ... 18 more
>


Drill embedded mode on Linux

2020-04-09 Thread Prabhakar Bhosaale
Hi All,
I am using Drill 1.16 and trying to start Drill in embedded mode on a
Linux machine, following the documentation from the Drill website.

I am using the bin/drill-embedded command but it is giving the following error.
I checked the Java version and it is correct.  Please help urgently. thx

Regards
Prabhakar

Error: Failure in starting embedded Drillbit:
org.apache.drill.exec.exception.DrillbitStartupException: JDK Java compiler
not available. Ensure Drill is running with the java executable from a JDK
and not a JRE (state=,code=0)
java.sql.SQLException: Failure in starting embedded Drillbit:
org.apache.drill.exec.exception.DrillbitStartupException: JDK Java compiler
not available. Ensure Drill is running with the java executable from a JDK
and not a JRE
at
org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:143)
at
org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:67)
at
org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:67)
at
org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:130)
at
sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:179)
at sqlline.Commands.connect(Commands.java:1278)
at sqlline.Commands.connect(Commands.java:1172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:736)
at sqlline.SqlLine.initArgs(SqlLine.java:428)
at sqlline.SqlLine.begin(SqlLine.java:531)
at sqlline.SqlLine.start(SqlLine.java:270)
at sqlline.SqlLine.main(SqlLine.java:201)
Caused by: org.apache.drill.exec.exception.DrillbitStartupException: JDK
Java compiler not available. Ensure Drill is running with the java
executable from a JDK and not a JRE
at org.apache.drill.exec.server.Drillbit.(Drillbit.java:152)
at org.apache.drill.exec.server.Drillbit.(Drillbit.java:125)
at
org.apache.drill.jdbc.impl.DrillConnectionImpl.(DrillConnectionImpl.java:133)
... 18 more
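
For reference, the check behind this error is Drill asking the JVM for its
system Java compiler, which only a JDK ships. A minimal sketch (using only
the standard javax.tools API; this is not Drill's own startup code) to
confirm which kind of Java launches bin/drill-embedded:

import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;

// Prints whether the running JVM provides the system Java compiler.
// A JRE returns null here, which is the condition behind Drill's
// "JDK Java compiler not available" startup failure.
public class JdkCheck {
    public static void main(String[] args) {
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        System.out.println(compiler != null
                ? "Running on a JDK - embedded Drill should start"
                : "Running on a JRE - point JAVA_HOME at a JDK install");
    }
}

Compile and run it with the same java that JAVA_HOME (or the PATH) resolves
to for the user starting Drill; if it reports a JRE, pointing JAVA_HOME at a
JDK installation is the usual fix.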


Re: Linux versions supported for Apache drill

2020-04-03 Thread Prabhakar Bhosaale
Thanks Paul, we will target the latest RHEL.

Regards
Prabhakar

On Fri, Apr 3, 2020, 23:11 Paul Rogers  wrote:

> Hi Prabhakar,
>
> Drill is written in Java and should support just about any Linux version;
> certainly all the major versions. It's been run on MacOS, Ubuntu, CentOS,
> RedHat and probably many more. Might struggle a bit on a RaspberryPi, but I
> think someone even did that several years back.
>
> The main limitation is Windows, simply because no one has ever written the
> wrapper scripts/batch files/PowerShell scripts to launch Drill.
>
>
> Is there a specific version of interest?
>
> Thanks,
> - Paul
>
>
>
>     On Thursday, April 2, 2020, 8:59:23 PM PDT, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi All,,
> Can any one help us with the versions of  linux supported by Apache dril? I
> could not find this information on drill website. Thanks in advance.
>
> Regards
> Prabhakar
>


Linux versions supported for Apache drill

2020-04-02 Thread Prabhakar Bhosaale
Hi All,
Can anyone help us with the versions of Linux supported by Apache Drill? I
could not find this information on the Drill website. Thanks in advance.

Regards
Prabhakar


Re: JDBC datasource on Websphere server 8.5.5.9

2020-03-30 Thread Prabhakar Bhosaale
Hi Paul,
Thanks for your detailed information. Of the three approaches mentioned above,
we have already tried the third one and it still gives the same error. I will
keep looking into the other two options and will let you know of any updates.
Thanks.

Regards
Prabhakar

On Tue, Mar 31, 2020 at 3:48 AM Paul Rogers 
wrote:

> Hi Prabhakar,
>
> Not being much of a JDBC expert, I did some poking around. It seems that
> Drill's open-source JDBC driver is based on Apache Calcite's Avatica
> framework. Avatica does not appear include JDBC DataSource support, it is
> just a simple, basic JDBC driver.
>
> My attempts to Google how to use such a basic Driver with Websphere did
> not produce many results. I found the click-this, type-that instructions
> (from 2007!) but did not see anything about how to handle a basic driver.
>
> So, seems that there several approaches:
>
> 1. Extend the Drill JDBC driver to include DataSource support.
> 2. Find a Websphere or third-party solution to wrap "plain" JDBC drivers.
> 3. Try MapR's commercial JDBC driver created by Simba. [1] Looks like this
> driver works on Windows. The documentation [2] does not list DataSource
> support, however.
>
>
> Contributions are welcome to solve item 1. You are probably more of a WS
> expert than I, so perhaps you can research item 2. You can also check
> whether the MapR Driver give you what you need.
>
> Also, if any others out there have more JDBC experience, it would be great
> if someone could add a bit more context. For example, how is this issue
> handled for other JDBC drivers? What would it take for Drill to add
> DataSource support?
>
>
> Thanks,
> - Paul
> [1] https://mapr.com/docs/61/Drill/drill_odbc_connector.html
> [2]
> https://mapr.com/docs/61/attachments/JDBC_ODBC_drivers/DrillODBCInstallandConfigurationGuide.pdf
>
>
>
> On Sunday, March 29, 2020, 9:26:52 PM PDT, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi Paul,
>
> Any further inputs on JDBC driver for drill?  thx
>
> Regards
> Prabhakar
>
> On Thu, Mar 26, 2020 at 1:25 PM Prabhakar Bhosaale 
> wrote:
>
> > Hi Paul,
> > Please see my answers inline below
> >
> > Drill is supported on Windows only in embedded mode; we have no scripts
> to
> > run a server. Were you able to create your own solution?
> > Prabhakar: We are using drill on windows only in embedded mode
> >
> > The exception appears to indicate that the Drill JDBC connection is being
> > used inside a transaction, perhaps with other data sources, so a
> two-phase
> > commit is needed. However, Drill does not support transactions as
> > transactions don't make sense for data sources such as HDFS or S3.
> >
> >
> > Is there a way to configure WAS to use Drill just for read-only access
> > without transactions? See this link: [1]. To quote:
> >
> > Non-transactional data source
> > Specifies that the application server does not enlist the connections
> from
> > this data source in global or local transactions. Applications must
> > explicitly call setAutoCommit(false) on the connection if they want to
> > start a local transaction on the connection, and they must commit or roll
> > back the transaction that they started.
> >
> > Prabhakar: I tried making the datasource as non-transactional data
> source.
> > But still it gave same error
> >
> > Can you run a test? Will SQLLine connect to your Drill server? If so,
> then
> > you know that you have the host name correct, that the ports are open,
> and
> > that Drill runs well enough on Windows for your needs.
> >
> > Prabhakar: I tried connecting drill using squirrel and it connected
> > successfully to drill. Even we tried simple java code using this driver
> > class and it successfully retrieved the data. So drill with its port and
> > host is working fine.
> >
> >
> > Our understanding is that webphere is expecting any JDBC driver to
> > implement the javax.sql.ConnectionPoolDataSource class, But in drill
> driver
> > we are not sure whether this is implemented.
> >
> > Please refer
> >
> https://www.ibm.com/mysupport/s/question/0D50z62kMU2CAM/classcastexception-comibmoptimconnectjdbcnvdriver-incompatible-with-javaxsqlconnectionpooldatasource?language=en_US
> >
> > Any help in this regard is highly appreciated. thx
> >
> > REgards
> > Prabhakar
> >
> > On Thu, Mar 26, 2020 at 10:52 AM Paul Rogers 
> > wrote:
> >
> >> Hi Prabhakar,
> >>
> >> Drill is supported on Windows only in embedded mode; we have no scripts
> >> to run a server. W

Re: JDBC datasource on Websphere server 8.5.5.9

2020-03-29 Thread Prabhakar Bhosaale
Hi Paul,

Any further inputs on the JDBC driver for Drill? Thanks

Regards
Prabhakar

On Thu, Mar 26, 2020 at 1:25 PM Prabhakar Bhosaale 
wrote:

> Hi Paul,
> Please see my answers inline below
>
> Drill is supported on Windows only in embedded mode; we have no scripts to
> run a server. Were you able to create your own solution?
> Prabhakar: We are using drill on windows only in embedded mode
>
> The exception appears to indicate that the Drill JDBC connection is being
> used inside a transaction, perhaps with other data sources, so a two-phase
> commit is needed. However, Drill does not support transactions as
> transactions don't make sense for data sources such as HDFS or S3.
>
>
> Is there a way to configure WAS to use Drill just for read-only access
> without transactions? See this link: [1]. To quote:
>
> Non-transactional data source
> Specifies that the application server does not enlist the connections from
> this data source in global or local transactions. Applications must
> explicitly call setAutoCommit(false) on the connection if they want to
> start a local transaction on the connection, and they must commit or roll
> back the transaction that they started.
>
> Prabhakar: I tried making the datasource as non-transactional data source.
> But still it gave same error
>
> Can you run a test? Will SQLLine connect to your Drill server? If so, then
> you know that you have the host name correct, that the ports are open, and
> that Drill runs well enough on Windows for your needs.
>
> Prabhakar: I tried connecting drill using squirrel and it connected
> successfully to drill. Even we tried simple java code using this driver
> class and it successfully retrieved the data. So drill with its port and
> host is working fine.
>
>
> Our understanding is that webphere is expecting any JDBC driver to
> implement the javax.sql.ConnectionPoolDataSource class, But in drill driver
> we are not sure whether this is implemented.
>
> Please refer
> https://www.ibm.com/mysupport/s/question/0D50z62kMU2CAM/classcastexception-comibmoptimconnectjdbcnvdriver-incompatible-with-javaxsqlconnectionpooldatasource?language=en_US
>
> Any help in this regard is highly appreciated. thx
>
> REgards
> Prabhakar
>
> On Thu, Mar 26, 2020 at 10:52 AM Paul Rogers 
> wrote:
>
>> Hi Prabhakar,
>>
>> Drill is supported on Windows only in embedded mode; we have no scripts
>> to run a server. Were you able to create your own solution?
>>
>> The exception appears to indicate that the Drill JDBC connection is being
>> used inside a transaction, perhaps with other data sources, so a two-phase
>> commit is needed. However, Drill does not support transactions as
>> transactions don't make sense for data sources such as HDFS or S3.
>>
>>
>> Is there a way to configure WAS to use Drill just for read-only access
>> without transactions? See this link: [1]. To quote:
>>
>> Non-transactional data source
>> Specifies that the application server does not enlist the connections
>> from this data source in global or local transactions. Applications must
>> explicitly call setAutoCommit(false) on the connection if they want to
>> start a local transaction on the connection, and they must commit or roll
>> back the transaction that they started.
>>
>> Can you run a test? Will SQLLine connect to your Drill server? If so,
>> then you know that you have the host name correct, that the ports are open,
>> and that Drill runs well enough on Windows for your needs.
>>
>> By the way, the Apache mail agent does not support attachments. Can you
>> post the log somewhere else? Or, just past into an e-mail the lines around
>> the failure.
>>
>> Thanks,
>> - Paul
>>
>>
>> [1]
>> https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/udat_jdbcdatasorprops.html
>>
>>
>>
>>
>> On Wednesday, March 25, 2020, 8:53:31 PM PDT, Prabhakar Bhosaale <
>> bhosale@gmail.com> wrote:
>>
>>  Hi Charles,
>>
>> Thanks for the reply.  The dril version is 1.16 and JDBC version is also
>> same. The drill is installed on windows in standalone mode.
>>
>> The challenge here is that, when we created the data provider and data
>> source on WAS, we have not given any hostname or port details of drill
>> server, so when test connection happens on WAS server, it is actually not
>> connecting to drill.
>>
>> Please let me know if you need any additional information. Once again
>> thanks for your help
>>
>> Regards
>> Prabhakar
>>
>> On 

Re: JDBC datasource on Websphere server 8.5.5.9

2020-03-26 Thread Prabhakar Bhosaale
Hi Igor,
We tried the link given, but we could not find the enable2Phase property for a
custom database. From another link it appears that this property is specific
to Oracle DB.

https://www.ibm.com/support/knowledgecenter/en/SSZJPZ_11.7.0/com.ibm.swg.im.iis.productization.iisinfsv.install.doc/topics/wsisinst_install_connectclusterconfiguration_iadb.html


Regards
Prabhakar

On Tue, Mar 24, 2020 at 6:58 PM Igor Guzenko 
wrote:

> Hello Prabhakar,
>
> Seems like there is a similar question on
>
> https://stackoverflow.com/questions/1677722/websphere-application-server-data-source
> and
> easiest path is to uncheck enable2Phase somewhere in WebSphere driver
> settings.
> Or harder way is that you can try to make a small project just to wrap
> Drill driver with some of JDBC connection pools like HikariCP.
>
> Thanks,
> Igor
>
> On Tue, Mar 24, 2020 at 2:49 PM Charles Givre  wrote:
>
> > HI Prabhakar,
> > Thanks for your interest in Drill.  Can you share your config info as
> well
> > as the versions of Drill and JDBC Driver that you are using?
> > Thanks,
> > -- C
> >
> >
> > > On Mar 24, 2020, at 7:07 AM, Prabhakar Bhosaale  >
> > wrote:
> > >
> > > Hi Team,
> > >
> > > we are trying to connect to apache drill from websphere 8.5.5.9.  We
> > created the the Data provider and data source as per standard process of
> > WAS.  But when we try to test the connection, it gives following error.
> > >
> > > "Test connection operation failed for data source retrievalds on server
> > ARCHIVE_SERVER at node ARCHIVALPROFILENode1 with the following exception:
> > java.lang.Exception: DSRA8101E: DataSource class cannot be used as
> > one-phase: ClassCastException: org.apache.drill.jdbc.Driver incompatible
> > with javax.sql.ConnectionPoolDataSource   "
> > >
> > > We are using SDK version 1.8
> > > Attaching the JVM log also for your reference. thx
> > >
> > > Any pointers or any documentation in this regards would be appreciated.
> > Please help. thx
> > >
> > > Regards
> > > Prabhakar
> > > 
> >
> >
>
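
A rough sketch of Igor's pooling suggestion above, assuming HikariCP is on
the classpath and a Drillbit reachable at drill-host:31010 (both
illustrative); whether WAS will accept a plain javax.sql.DataSource in place
of a ConnectionPoolDataSource still has to be verified:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Wraps the plain Drill JDBC driver in HikariCP so that code expecting a
// javax.sql.DataSource gets one, with pooling handled outside the driver.
public class DrillPooledDataSource {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setDriverClassName("org.apache.drill.jdbc.Driver");
        config.setJdbcUrl("jdbc:drill:drillbit=drill-host:31010"); // example host:port
        config.setAutoCommit(true);        // Drill has no transactions
        config.setMaximumPoolSize(5);

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1 FROM (VALUES(1))")) {
            while (rs.next()) {
                System.out.println("Pooled Drill connection works: " + rs.getInt(1));
            }
        }
    }
}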


Re: JDBC datasource on Websphere server 8.5.5.9

2020-03-26 Thread Prabhakar Bhosaale
Hi Paul,
Please see my answers inline below

Drill is supported on Windows only in embedded mode; we have no scripts to
run a server. Were you able to create your own solution?
Prabhakar: We are using Drill on Windows only in embedded mode

The exception appears to indicate that the Drill JDBC connection is being
used inside a transaction, perhaps with other data sources, so a two-phase
commit is needed. However, Drill does not support transactions as
transactions don't make sense for data sources such as HDFS or S3.


Is there a way to configure WAS to use Drill just for read-only access
without transactions? See this link: [1]. To quote:

Non-transactional data source
Specifies that the application server does not enlist the connections from
this data source in global or local transactions. Applications must
explicitly call setAutoCommit(false) on the connection if they want to
start a local transaction on the connection, and they must commit or roll
back the transaction that they started.

Prabhakar: I tried making the data source a non-transactional data source,
but it still gave the same error.

Can you run a test? Will SQLLine connect to your Drill server? If so, then
you know that you have the host name correct, that the ports are open, and
that Drill runs well enough on Windows for your needs.

Prabhakar: I tried connecting to Drill using SQuirreL and it connected
successfully. We also tried simple Java code using this driver class and it
successfully retrieved the data. So Drill, with its host and port, is working
fine.


Our understanding is that WebSphere expects the JDBC driver to implement the
javax.sql.ConnectionPoolDataSource interface, but we are not sure whether the
Drill driver implements it.

Please refer
https://www.ibm.com/mysupport/s/question/0D50z62kMU2CAM/classcastexception-comibmoptimconnectjdbcnvdriver-incompatible-with-javaxsqlconnectionpooldatasource?language=en_US

Any help in this regard is highly appreciated. thx

REgards
Prabhakar

On Thu, Mar 26, 2020 at 10:52 AM Paul Rogers 
wrote:

> Hi Prabhakar,
>
> Drill is supported on Windows only in embedded mode; we have no scripts to
> run a server. Were you able to create your own solution?
>
> The exception appears to indicate that the Drill JDBC connection is being
> used inside a transaction, perhaps with other data sources, so a two-phase
> commit is needed. However, Drill does not support transactions as
> transactions don't make sense for data sources such as HDFS or S3.
>
>
> Is there a way to configure WAS to use Drill just for read-only access
> without transactions? See this link: [1]. To quote:
>
> Non-transactional data source
> Specifies that the application server does not enlist the connections from
> this data source in global or local transactions. Applications must
> explicitly call setAutoCommit(false) on the connection if they want to
> start a local transaction on the connection, and they must commit or roll
> back the transaction that they started.
>
> Can you run a test? Will SQLLine connect to your Drill server? If so, then
> you know that you have the host name correct, that the ports are open, and
> that Drill runs well enough on Windows for your needs.
>
> By the way, the Apache mail agent does not support attachments. Can you
> post the log somewhere else? Or, just past into an e-mail the lines around
> the failure.
>
> Thanks,
> - Paul
>
>
> [1]
> https://www.ibm.com/support/knowledgecenter/en/SSAW57_8.5.5/com.ibm.websphere.nd.multiplatform.doc/ae/udat_jdbcdatasorprops.html
>
>
>
>
> On Wednesday, March 25, 2020, 8:53:31 PM PDT, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi Charles,
>
> Thanks for the reply.  The dril version is 1.16 and JDBC version is also
> same. The drill is installed on windows in standalone mode.
>
> The challenge here is that, when we created the data provider and data
> source on WAS, we have not given any hostname or port details of drill
> server, so when test connection happens on WAS server, it is actually not
> connecting to drill.
>
> Please let me know if you need any additional information. Once again
> thanks for your help
>
> Regards
> Prabhakar
>
> On Tue, Mar 24, 2020 at 6:19 PM Charles Givre  wrote:
>
> > HI Prabhakar,
> > Thanks for your interest in Drill.  Can you share your config info as
> well
> > as the versions of Drill and JDBC Driver that you are using?
> > Thanks,
> > -- C
> >
> >
> > > On Mar 24, 2020, at 7:07 AM, Prabhakar Bhosaale  >
> > wrote:
> > >
> > > Hi Team,
> > >
> > > we are trying to connect to apache drill from websphere 8.5.5.9.  We
> > created the the Data provider and data source as per standard process of
> > WAS.
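
For completeness, the kind of standalone test described above ("simple Java
code using this driver class") looks roughly like this; host, port and query
are placeholders for whatever the Drillbit under test exposes:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Bare-driver smoke test, no app server and no DataSource involved.
public class DrillJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.drill.jdbc.Driver");   // force driver registration
        String url = "jdbc:drill:drillbit=localhost:31010";  // example Drillbit address
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT version FROM sys.version")) {
            while (rs.next()) {
                System.out.println("Connected to Drill " + rs.getString(1));
            }
        }
    }
}

If a test like this succeeds while the WAS data source does not, the gap is
in how WAS wants the driver packaged (DataSource versus plain Driver), not in
connectivity.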

Re: JDBC datasource on Websphere server 8.5.5.9

2020-03-25 Thread Prabhakar Bhosaale
Hi Charles,

Thanks for the reply. The Drill version is 1.16 and the JDBC driver version is
the same. Drill is installed on Windows in standalone mode.

The challenge here is that when we created the data provider and data source
on WAS, we did not give any hostname or port details of the Drill server, so
when the test connection happens on the WAS server, it is not actually
connecting to Drill.

Please let me know if you need any additional information. Once again
thanks for your help

Regards
Prabhakar

On Tue, Mar 24, 2020 at 6:19 PM Charles Givre  wrote:

> HI Prabhakar,
> Thanks for your interest in Drill.  Can you share your config info as well
> as the versions of Drill and JDBC Driver that you are using?
> Thanks,
> -- C
>
>
> > On Mar 24, 2020, at 7:07 AM, Prabhakar Bhosaale 
> wrote:
> >
> > Hi Team,
> >
> > we are trying to connect to apache drill from websphere 8.5.5.9.  We
> created the the Data provider and data source as per standard process of
> WAS.  But when we try to test the connection, it gives following error.
> >
> > "Test connection operation failed for data source retrievalds on server
> ARCHIVE_SERVER at node ARCHIVALPROFILENode1 with the following exception:
> java.lang.Exception: DSRA8101E: DataSource class cannot be used as
> one-phase: ClassCastException: org.apache.drill.jdbc.Driver incompatible
> with javax.sql.ConnectionPoolDataSource   "
> >
> > We are using SDK version 1.8
> > Attaching the JVM log also for your reference. thx
> >
> > Any pointers or any documentation in this regards would be appreciated.
> Please help. thx
> >
> > Regards
> > Prabhakar
> > 
>
>


JDBC datasource on Websphere server 8.5.5.9

2020-03-24 Thread Prabhakar Bhosaale
Hi Team,

We are trying to connect to Apache Drill from WebSphere 8.5.5.9. We created
the data provider and data source as per the standard WAS process. But when
we try to test the connection, it gives the following error.

"Test connection operation failed for data source retrievalds on server
ARCHIVE_SERVER at node ARCHIVALPROFILENode1 with the following exception:
java.lang.Exception: DSRA8101E: DataSource class cannot be used as
one-phase: ClassCastException: org.apache.drill.jdbc.Driver incompatible
with javax.sql.ConnectionPoolDataSource   "

We are using SDK version 1.8.
Attaching the JVM log for your reference. Thanks.

Any pointers or documentation in this regard would be appreciated.
Please help. Thanks.

Regards
Prabhakar
 Start Display Current Environment 
Log file started at: [3/24/20 15:40:07:559 GMT+05:30]
* End Display Current Environment *
[3/24/20 15:40:07:517 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.ActionServlet.process(Unknown Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.ActionServlet.doPost(Unknown Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:595)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1232)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:781)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:480)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:178)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.invokeTarget(WebAppFilterChain.java:136)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.filter.WebAppFilterChain.doFilter(WebAppFilterChain.java:79)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.filter.WebAppFilterManager.doFilter(WebAppFilterManager.java:967)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1107)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.dispatch(WebAppRequestDispatcher.java:1385)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.webapp.WebAppRequestDispatcher.forward(WebAppRequestDispatcher.java:194)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.RequestProcessor.doForward(Unknown Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.tiles.TilesRequestProcessor.doForward(Unknown Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.RequestProcessor.processForwardConfig(Unknown Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.tiles.TilesRequestProcessor.processForwardConfig(Unknown 
Source)
[3/24/20 15:40:07:560 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.RequestProcessor.process(Unknown Source)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.ActionServlet.process(Unknown Source)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
org.apache.struts.action.ActionServlet.doPost(Unknown Source)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:595)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
javax.servlet.http.HttpServlet.service(HttpServlet.java:668)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1232)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:781)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:480)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 
com.ibm.ws.webcontainer.servlet.ServletWrapperImpl.handleRequest(ServletWrapperImpl.java:178)
[3/24/20 15:40:07:561 GMT+05:30] 007f SystemErr R   at 

Re: JDBC driver for Java 7 version

2020-02-25 Thread Prabhakar Bhosaale
Thanks Paul, that helps a lot.

Regards
Prabhakar

On Tue, Feb 25, 2020 at 1:30 PM Paul Rogers 
wrote:

> Hi Prabhakar,
>
> As it turns out, Drill is built for Java 8-13, but we've not built for
> Java 7 in quite some time. (Java 7 reached end of life several years back.)
>
> That said, you can try to clone the project sources and do a build.
> Unfortunately, the JDBC driver tends to use quite a bit of Drill's
> internals and so has a rather large footprint, some of which is likely to
> depend on Java 8.
>
> Further, Drill depends on a large number of libraries, all of which have
> likely been upgraded to Java 8. You'd have to find old Java 7 versions, and
> then figure out how to change Drill code to work with those old versions.
>
> One might well ask, is it possible for you to upgrade to a supported Java
> version? Drill must not be the only library where you have the Java 8
> dependency problem.
>
>
> Thanks,
> - Paul
>
>
>
> On Monday, February 24, 2020, 11:31:15 PM PST, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi All,
> We are using drill 1.16.0 and we are trying to create JDBC datasource  on
> WAS8.5 with java7. we are getting following error.
> "exception: java.sql.SQLException: java.lang.UnsupportedClassVersionError:
> JVMCFRE003 bad major version; class=org/apache/drill/jdbc/Driver, offset=6"
>
> So where can i get the JDBC driver compiled for java7?  thx
>
> Regards
> Prabhakar
>
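
The "bad major version" message can be confirmed straight from the jar: a
class-file major version of 52 means compiled for Java 8, which a Java 7 JVM
(major version 51) refuses to load. A small sketch, with the jar path passed
as an argument (the entry name assumes the standard driver class):

import java.io.DataInputStream;
import java.io.InputStream;
import java.util.jar.JarFile;

// Reads the class-file version of org.apache.drill.jdbc.Driver from the
// driver jar; major 52 = Java 8, major 51 = Java 7.
public class ClassVersionCheck {
    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile(args[0]);   // e.g. drill-jdbc-all-1.16.0.jar
             InputStream in = jar.getInputStream(
                     jar.getEntry("org/apache/drill/jdbc/Driver.class"));
             DataInputStream data = new DataInputStream(in)) {
            data.readInt();                          // magic number 0xCAFEBABE
            int minor = data.readUnsignedShort();
            int major = data.readUnsignedShort();
            System.out.println("major=" + major + ", minor=" + minor);
        }
    }
}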


JDBC driver for Java 7 version

2020-02-24 Thread Prabhakar Bhosaale
Hi All,
We are using Drill 1.16.0 and we are trying to create a JDBC data source on
WAS 8.5 with Java 7. We are getting the following error:
"exception: java.sql.SQLException: java.lang.UnsupportedClassVersionError:
JVMCFRE003 bad major version; class=org/apache/drill/jdbc/Driver, offset=6"

So where can I get the JDBC driver compiled for Java 7? Thanks

Regards
Prabhakar


Re: Websphere JDBC data source - Class not found exception

2020-02-24 Thread Prabhakar Bhosaale
Hi Paul,

Thanks for your reply. We resolved this error, but as mentioned in my other
email, we have Java 7 on this server while the Drill JDBC driver needs Java 8.
So we are getting the error "exception: java.sql.SQLException:
java.lang.UnsupportedClassVersionError: JVMCFRE003 bad major version;
class=org/apache/drill/jdbc/Driver, offset=6". I guess it is because of the
Java version. So my question is: where can I get a JDBC driver compiled for
Java 7?
We are using Drill 1.16.0. Thanks.

Regards
Prabhakar

On Tue, Feb 25, 2020 at 12:33 PM Paul Rogers 
wrote:

> Hi Prabhakar,
>
> While it is a bit difficult to debug class path issues via e-mail, here
> are some suggestions.
>
> First, verify that the Drill JDBC driver is indeed on your class path.
> Given that you are using an app server, it is important that the jar be
> visible to the class loader that is calling it.
>
> Second, app servers tend to formalize things like the JDBC registry. Might
> there be some config needed to ensure that the Drill driver is registered?
> Check what you did for, say, MySQL and try that.
>
> Third, some apps need to force the Drill driver class to be loaded and
> visible. See the last section of [1].
>
> Thanks,
> - Paul
>
> [1]
> http://drill.apache.org/docs/using-the-jdbc-driver/#example-of-connecting-to-drill-programmatically
>
>
>
>
> On Monday, February 24, 2020, 7:24:41 PM PST, Prabhakar Bhosaale <
> bhosale@gmail.com> wrote:
>
>  Hi Team,
>
> Please help with any pointers on below issue mentioned. Thanks in advance.
>
>
> Regards
> Prabhakar
>
>
> On Mon, Feb 24, 2020 at 10:01 AM Prabhakar Bhosaale  >
> wrote:
>
> > Hi All,
> > we have apache drill version 1.16 and we are trying to create JDBC data
> > source on websphere application server. But it gives error "Class not
> found
> > org.apache.drill.jdbc.Driver" when i try to test the connection.
> >
> > I followed all the instructions that are available on different sites and
> > on Drill site but with no success.
> >
> > The only pre-requisite which is not matching is JDK version. As per
> listed
> > prerequisites it needs JDK 8 where as the websphere is running on JDK7.
> >
> > is this JDK version causing the error? Any pointers to resolve this will
> > help. Thanks
> >
> > Regards
> > Prabhakar
> >
>


Re: Websphere JDBC data source - Class not found exception

2020-02-24 Thread Prabhakar Bhosaale
Hi Team,

Please help with any pointers on the issue mentioned below. Thanks in advance.


Regards
Prabhakar


On Mon, Feb 24, 2020 at 10:01 AM Prabhakar Bhosaale 
wrote:

> Hi All,
> we have apache drill version 1.16 and we are trying to create JDBC data
> source on websphere application server. But it gives error "Class not found
> org.apache.drill.jdbc.Driver" when i try to test the connection.
>
> I followed all the instructions that are available on different sites and
> on Drill site but with no success.
>
> The only pre-requisite which is not matching is JDK version. As per listed
> prerequisites it needs JDK 8 where as the websphere is running on JDK7.
>
> is this JDK version causing the error? Any pointers to resolve this will
> help. Thanks
>
> Regards
> Prabhakar
>


Websphere JDBC data source - Class not found exception

2020-02-23 Thread Prabhakar Bhosaale
Hi All,
We have Apache Drill version 1.16 and we are trying to create a JDBC data
source on WebSphere Application Server. But it gives the error "Class not
found org.apache.drill.jdbc.Driver" when I try to test the connection.

I followed all the instructions available on different sites and on the Drill
site, but with no success.

The only prerequisite that does not match is the JDK version. The listed
prerequisites require JDK 8, whereas WebSphere is running on JDK 7.

Is this JDK version causing the error? Any pointers to resolve this would
help. Thanks

Regards
Prabhakar


Re: WebUI not saving changes to storage plug-ins configuration

2020-02-14 Thread Prabhakar Bhosaale
Hi,

I faced this problem when I started Drill with drill-embedded.bat, and it
was resolved by starting Drill using sqlline.bat -u "jdbc:drill:zk=local".
Give it a try.

Regards
Prabhakar

On Fri, Feb 14, 2020, 03:09 Leyne, Sean  wrote:

>
> I created a new storage plug-in (copying the system defined 'cp'
> definition), and I am now trying to changes the configuration.
>
> First, enable/disable plug-in action is not having any effect.
>
> Second, when I shutdown the embedded engine, and make manual changes to
> the .sys.drill file in my C:\tmp\drill\sys.storage_plugins folder, the
> changes are only partially accepted.
>
> This is definition in the .sys.drill file:
>
> {
>   "type" : "file",
>   "connection" : "file:///",
>   "config" : null,
>   "workspaces" : {
>     "CSVFiles": {
>       "location": "d:/drill/CSVFiles/",
>       "writeable": false
>     },
>     "ParquetFiles": {
>       "location": "d:/drill/ParquetFiles/",
>       "writeable": true
>     }
>   },
>   "formats" : {...
>
> This is what the WebUI is displaying
>
> {
>   "type": "file",
>   "connection": "file:///",
>   "config": null,
>   "workspaces": {
> "csvfiles": {
>   "location": "d:/drill/CSVFiles/",
>   "writable": false,
>   "defaultInputFormat": null,
>   "allowAccessOutsideWorkspace": false
> },
> "parquetfiles": {
>   "location": "d:/drill/ParquetFiles/",
>   "writable": false,
>   "defaultInputFormat": null,
>   "allowAccessOutsideWorkspace": false
> }
>   },
>   "formats": {
>
> Note: the  "parquetfiles" | "writable" are different.
>
> Finally, trying to edit the "parquetfiles" | "writable" via the WebUI
> plugin configuration editor but the changes are not being saved/don't have
> any effect.
>
> What am I doing wrong?
>
>
> Sean
>
>


Re: Querying json files from multiple subdirectories

2020-02-07 Thread Prabhakar Bhosaale
Hi Charles,
Another option which I found for querying sub-directories is using the
directory tree notation, i.e. dir0 and dir1. So I will be using:

Select * from transactions where dir0=2012  or
select * from transactions where dir0 in (2012, 2014, 2016) or
select * from transactions where dir0 between 2012 and 2016

I hope this helps others also.  Thanks for your replies.

Regards
Prabhakar

On Sun, Jan 19, 2020 at 12:21 PM Prabhakar Bhosaale 
wrote:

> Thanks charles. Will try few options and get back to you.
>
> Regards
> Prabhakar
>
> On Sun, Jan 19, 2020, 04:45 Charles Givre  wrote:
>
>> Hi Prabhakar,
>> You'll need to find some common identifier for the files you want to
>> query.
>> It could be something like:
>>
>> SELECT
>> FROM dfs.`/Year*/`
>>
>> Alternatively, you could have multiple SELECT queries and join them
>> together via a UNION statement.  IE:
>>
>> SELECT * FROM dfs.`Year2013/trans.json`
>> UNION
>> SELECT * FROM dfs.`Year2014/trans.json`
>>
>>
>>
>> -- C
>>
>> > On Jan 17, 2020, at 11:07 PM, Prabhakar Bhosaale 
>> wrote:
>> >
>> > Hi Charls,
>> > Thanks for your suggestion. Actually the transactions folder will have
>> more
>> > yearwise folder. But i want to query only few folders at a time. The
>> >
>> > Regards
>> > Prabhakar
>> >
>> > On Fri, Jan 17, 2020, 20:01 Charles Givre  wrote:
>> >
>> >> Hi there,
>> >> If you have that directory structure, the following query should work:
>> >>
>> >> SELECT *
>> >> FROM dfs..`transactions/` as t1
>> >>
>> >> Obviously replacing  with your workspace.  You can then join
>> >> that with anything that Drill can query.
>> >> Best,
>> >> -- C
>> >>
>> >>
>> >>
>> >>> On Jan 17, 2020, at 1:27 AM, Prabhakar Bhosaale <
>> bhosale@gmail.com>
>> >> wrote:
>> >>>
>> >>> Hi All,
>> >>>
>> >>> I am new to apache drill and trying to retrieve data from json files
>> by
>> >>> querying the directories.
>> >>>
>> >>> The directory structure is
>> >>>
>> >>>   |-->Year2012--->trans.json
>> >>>   |
>> >>>   |
>> >>> transactions-->|
>> >>>   |
>> >>>   |-->Year2013--->trans.json
>> >>>
>> >>> I would like to query trans.json from both the sub-directories as one
>> >> table
>> >>> and then join the resultant table with another table in a single
>> query.
>> >>> Please help with possible options. thx
>> >>>
>> >>> Regards
>> >>
>> >>
>>
>>
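
A sketch of how the dir0/dir1 approach above might be driven from client
code, assuming a dfs workspace whose root contains the transactions directory
with year sub-directories named as in this thread (host, workspace and
directory names are all illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Counts rows of trans.json per selected year sub-directory by filtering on
// the implicit dir0 column.
public class DirectoryFilterQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:drill:drillbit=localhost:31010";  // example Drillbit address
        String sql = "SELECT dir0, COUNT(*) AS trans_count "
                   + "FROM dfs.root.`transactions` "
                   + "WHERE dir0 IN ('Year2012', 'Year2013') "
                   + "GROUP BY dir0";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getString("dir0") + ": "
                        + rs.getLong("trans_count") + " rows");
            }
        }
    }
}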


Re: Querying json files from multiple subdirectories

2020-01-18 Thread Prabhakar Bhosaale
Thanks, Charles. Will try a few options and get back to you.

Regards
Prabhakar

On Sun, Jan 19, 2020, 04:45 Charles Givre  wrote:

> Hi Prabhakar,
> You'll need to find some common identifier for the files you want to
> query.
> It could be something like:
>
> SELECT
> FROM dfs.`/Year*/`
>
> Alternatively, you could have multiple SELECT queries and join them
> together via a UNION statement.  IE:
>
> SELECT * FROM dfs.`Year2013/trans.json`
> UNION
> SELECT * FROM dfs.`Year2014/trans.json`
>
>
>
> -- C
>
> > On Jan 17, 2020, at 11:07 PM, Prabhakar Bhosaale 
> wrote:
> >
> > Hi Charls,
> > Thanks for your suggestion. Actually the transactions folder will have
> more
> > yearwise folder. But i want to query only few folders at a time. The
> >
> > Regards
> > Prabhakar
> >
> > On Fri, Jan 17, 2020, 20:01 Charles Givre  wrote:
> >
> >> Hi there,
> >> If you have that directory structure, the following query should work:
> >>
> >> SELECT *
> >> FROM dfs..`transactions/` as t1
> >>
> >> Obviously replacing  with your workspace.  You can then join
> >> that with anything that Drill can query.
> >> Best,
> >> -- C
> >>
> >>
> >>
> >>> On Jan 17, 2020, at 1:27 AM, Prabhakar Bhosaale  >
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I am new to apache drill and trying to retrieve data from json files by
> >>> querying the directories.
> >>>
> >>> The directory structure is
> >>>
> >>>   |-->Year2012--->trans.json
> >>>   |
> >>>   |
> >>> transactions-->|
> >>>   |
> >>>   |-->Year2013--->trans.json
> >>>
> >>> I would like to query trans.json from both the sub-directories as one
> >> table
> >>> and then join the resultant table with another table in a single query.
> >>> Please help with possible options. thx
> >>>
> >>> Regards
> >>
> >>
>
>


Re: Querying json files from multiple subdirectories

2020-01-17 Thread Prabhakar Bhosaale
Hi Charles,
Thanks for your suggestion. Actually, the transactions folder will have more
year-wise folders, but I want to query only a few folders at a time. The

Regards
Prabhakar

On Fri, Jan 17, 2020, 20:01 Charles Givre  wrote:

> Hi there,
> If you have that directory structure, the following query should work:
>
> SELECT *
> FROM dfs..`transactions/` as t1
>
> Obviously replacing  with your workspace.  You can then join
> that with anything that Drill can query.
> Best,
> -- C
>
>
>
> > On Jan 17, 2020, at 1:27 AM, Prabhakar Bhosaale 
> wrote:
> >
> > Hi All,
> >
> > I am new to apache drill and trying to retrieve data from json files by
> > querying the directories.
> >
> > The directory structure is
> >
> >|-->Year2012--->trans.json
> >|
> >|
> > transactions-->|
> >|
> >|-->Year2013--->trans.json
> >
> > I would like to query trans.json from both the sub-directories as one
> table
> > and then join the resultant table with another table in a single query.
> > Please help with possible options. thx
> >
> > Regards
>
>


Querying json files from multiple subdirectories

2020-01-16 Thread Prabhakar Bhosaale
Hi All,

I am new to Apache Drill and am trying to retrieve data from JSON files by
querying the directories.

The directory structure is

               |-->Year2012--->trans.json
               |
               |
transactions-->|
               |
               |-->Year2013--->trans.json

I would like to query trans.json from both the sub-directories as one table
and then join the resultant table with another table in a single query.
Please help with possible options. thx

Regards


querying json from multiple subdirectories

2020-01-14 Thread Prabhakar Bhosaale
Hi All,

I am new to Apache Drill and am trying to retrieve data from JSON files by
querying the directories.

The directory structure is

               |-->Year2012--->trans.json
               |
               |
transactions-->|
               |
               |-->Year2013--->trans.json

I would like to query trans.json from both the sub-directories as one table
and then join the resultant table with another table in a single query.
Please help with possible options. thx

Regards
Prabhakar