I have used Hive 1.2 and I have found that the stats in parquet files are
populated for some data types. Integer, bigint, float, double, date work.
String does not seem to work.
You can then use Drill to query the Hive table and get predicate pushdown
for simple compare filters. This has the effect of comparing the Hive
plugin and the DFS plugin for the parquet format. But from my general
experience it appears as though the DFS plugin is faster.
Also do not forget the 3rd option in my first response (Hive Plugin +
Drill native reader).
Let me clarify my comment regarding the Hive plugin. We plan to support
parquet filter pushdown with the Hive plugin when using our native reader
(there is an option you can set). This will be at some point in the future.
Thanks.
--Robert
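The option referred to above is, if memory serves, `store.hive.optimize_scan_with_native_readers` in Drill 1.x (name from memory; verify against `sys.options` on your build):

```sql
-- Hypothetical session setting; check the exact option name in sys.options.
ALTER SESSION SET `store.hive.optimize_scan_with_native_readers` = true;
```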
On Thu, Nov 17, 2016 at 3:46 PM, Robert Hou wrote
This will work for JDBC clients.
If you are using ODBC, then there are other steps.
Thanks.
--Robert
From: Padma Penumarthy
Sent: Saturday, August 12, 2017 7:19 AM
To: user@drill.apache.org
Subject: Re: Drill on secure HDFS
Did you look at this ?
https://dr
I asked a couple of Drill developers. We don't have much experience with PCAP
yet. Takeo, can you file a Jira for this, and include the information below?
The error message mentions a bad magic number, which Drill sometimes uses to
help determine the file format.
Also, it appears that you have hit a bug in the way that we wrote the
format plugin, in that it isn't telling Drill not to split the file.
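For reference (an illustration, not Drill's reader), the classic pcap global header starts with the magic number 0xa1b2c3d4, or 0xd4c3b2a1 when byte-swapped; a "bad magic number" error means the first four bytes of the data handed to the reader were not one of the expected values:

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # classic pcap magic number (0xD4C3B2A1 if byte-swapped)

# Build a minimal 24-byte pcap global header: magic, version 2.4, zeroed
# timezone/sigfigs, snaplen 65535, linktype 1 (Ethernet).
header = struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1)

# Check the magic the way a reader would; a mid-file split would land here
# on packet bytes instead of the header, producing a "bad magic" error.
(magic,) = struct.unpack_from("<I", header)
print(hex(magic))
```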
On Wed, Sep 13, 2017 at 12:52 AM, Robert Hou wrote:
> I asked a couple of Drill developers. We don't have much experience with
> PCAP yet. Takeo, can you file a Jira for this,
From: Ted Dunning
Sent: Tuesday, September 12, 2017 4:30 PM
To: user
Cc: j...@apache.org
Subject: Re: ***UNCHECKED*** Re: Query Error on PCAP over MapR FS
PCAP is a binary format that cannot easily be split.
On Wed, Sep 13, 2017 at 1:15 AM, Robert Hou wrote:
> Hi Ted,
>
perhaps you can include the Drill profile, and we can look at how many
> fragments are used by the query. The profiles can be found via a web URL
I attached query profile to the issue.
Thank you.
> On 2017/09/13 8:50, Robert Hou wrote:
>
> So then there should be one fragment reading the file.
Try:
select t.v11._ from dfs.`` t where t.v11._ = '0070';
This works for json. Try it with MongoDB.
Thanks.
--Robert
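To see what that predicate does, here is the same nested-field check in plain Python over JSON records shaped like the query (the field names `v11` and `_` come from the query above; the record values are made up):

```python
import json

# Two sample records shaped like the query's t.v11._ path; values invented.
rows = [json.loads(s) for s in (
    '{"v11": {"_": "0070"}}',
    '{"v11": {"_": "0071"}}',
)]

# Equivalent of: where t.v11._ = '0070'
matches = [r for r in rows if r["v11"]["_"] == "0070"]
print(len(matches))
```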
From: gus
Sent: Wednesday, September 13, 2017 11:26 AM
To: user@drill.apache.org
Subject: How to query values in list
Hi, in this exa
!,
gus
On Thu, Sep 14, 2017 at 12:47:31AM +, Robert Hou wrote:
> Try:
>
>     select t.v11._ from dfs.`` t where t.v11._ = '0070';
>
> This works for json. Try it with MongoDB.
>
> Thanks.
> --Robert
You wrote:
I’m running drill as user “drill”.
How are you invoking sqlline? Are you specifying a user "drill"?
You should be able to query the file with two steps:
1) use mfs;
   this invokes the plugin
2) select * from `x.pcap`;
   Since x.pcap is in the root directory, you don't need to specify a path.
This might work:
"pcap": {
"type": "pcap"
}
Thanks.
--Robert
From: Arjun kr
Sent: Wednesday, September 13, 2017 10:22 PM
To: user@drill.apache.org
Subject: Re: Query Error on PCAP over MapR FS
I have not used pcap storage format before. Doesn't it re
s.
drill.exec: {
cluster-id: "cluster3-drillbits",
zk.connect: "node21:5181,node22:5181,node23:5181"
}
Storage plugin has config for PCAP.
"pcap": {
"type": "pcap"
},
Is it better to access via NFS to MapR FS?
I can access file:///mapr/cluster
There is some information in the docs:
https://drill.apache.org/docs/partition-pruning-introduction/
https://drill.apache.org/docs/parquet-filter-pushdown/
I don't believe we push predicates past a join.
Thanks.
--Robert
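The core of parquet filter pushdown described in those docs is min/max pruning over row-group statistics, which can be sketched in a few lines (the stats values below are hypothetical, for illustration only):

```python
# Hypothetical per-row-group min/max stats for a zipcode column.
row_groups = [
    {"id": 0, "min": 10001, "max": 60601},
    {"id": 1, "min": 90001, "max": 94200},
]

def groups_to_scan(value, groups):
    # An equality filter can skip any row group whose [min, max] range
    # cannot contain the value; only the remainder is actually read.
    return [g["id"] for g in groups if g["min"] <= value <= g["max"]]

print(groups_to_scan(94105, row_groups))
```

This is why the stats discussed earlier matter: with no min/max recorded for a column, no row group can ever be skipped for filters on it.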
From: Chetan Kothari
Sent: Tuesda
This appears to be a bug. I am able to reproduce this on my system. Can you
file a Jira please?
Thanks.
--Robert
From: Steven Peh
Sent: Tuesday, January 9, 2018 4:19 PM
To: user@drill.apache.org
Subject: Squirrel Schema list
Hi, I just configured Squirrel
I think BI tools should have a way to export the data to csv. Tableau does.
Thanks.
--Robert
From: Saurabh Mahapatra
Sent: Tuesday, May 15, 2018 6:56 PM
To: Divya Gehlot
Cc: user@drill.apache.org
Subject: Re: is there any way to download the data through Dril
MapR Driver Version 1.3.0 seems quite old. It may not work with recent
versions of Drill. What version of Drill are you running?
Thanks.
--Robert
From: Andries Engelbrecht
Sent: Wednesday, May 16, 2018 7:27 AM
To: user@drill.apache.org
Subject: Re: Apache D
Did anything change on your system that might indicate why you cannot connect
lately?
Thanks.
--Robert
From: Robert Hou
Sent: Wednesday, May 16, 2018 10:25 AM
To: user@drill.apache.org
Subject: Re: Apache Drill and Tableau connectivity issue
MapR Driver
The current MapR ODBC and JDBC drivers are not compatible with the new
Apache Drill 1.14 release. (Apache Drill ships with a JDBC driver that
will work). There will be new MapR ODBC and JDBC drivers coming out with
the new MapR Drill release to address this.
Thanks.
--Robert
On Sun, Aug 5, 20
I'm wondering if you can export the json from Postgres to a json document.
And then write it to parquet using Drill. This link may have some ideas:
https://hashrocket.com/blog/posts/create-quick-json-data-dumps-from-postgresql
Thanks.
--Robert
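The second step (JSON dump to parquet) is a one-statement CTAS in Drill; a sketch, assuming a hypothetical dump file at /tmp/dump.json and a writable dfs.tmp workspace:

```sql
-- Output format for CREATE TABLE AS; parquet is the default in recent Drill.
ALTER SESSION SET `store.format` = 'parquet';

-- Hypothetical paths: /tmp/dump.json is the Postgres JSON export.
CREATE TABLE dfs.tmp.`pg_json_parquet` AS
SELECT * FROM dfs.`/tmp/dump.json`;
```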
On Wed, Aug 15, 2018 at 10:16 AM, Reid Thompson
Congratulations, Hanu. Thanks for contributing to Drill.
--Robert
On Thu, Nov 1, 2018 at 4:06 PM Jyothsna Reddy
wrote:
> Congrats Hanu!! Well deserved :D
>
> Thank you,
> Jyothsna
>
> On Thu, Nov 1, 2018 at 2:15 PM Sorabh Hamirwasia
> wrote:
>
> > Congratulations Hanu!
> >
> > Thanks,
> > Sorabh
I would suggest connecting with drillbit first, and if that works, I would
use zookeeper.
sqlline -u "jdbc:drill:drillbit="
Please note the quotation marks.
If that works, then try:
sqlline -u "jdbc:drill:zk=:port//"
zk.root is "drill" by default. If you have specified zk.root in
drill-override.conf, use that value instead.
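The two connection URL shapes above can be sketched as string builders (plain Python; the host names come from the drill-override.conf shown earlier in this thread, and the default ports and cluster-id are assumptions):

```python
# Hypothetical helpers that build the two sqlline connection URLs.

def drillbit_url(host, port=31010):
    # Direct-to-drillbit form; 31010 is Drill's default user port.
    return f"jdbc:drill:drillbit={host}:{port}"

def zk_url(hosts, zk_root="drill", cluster_id="drillbits1"):
    # ZooKeeper form: jdbc:drill:zk=<hosts>/<zk.root>/<cluster-id>
    return f"jdbc:drill:zk={','.join(hosts)}/{zk_root}/{cluster_id}"

# Using the zookeeper quorum and cluster-id from drill-override.conf above:
print(zk_url(["node21:5181", "node22:5181", "node23:5181"],
             cluster_id="cluster3-drillbits"))
```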
> On 1/9/19, 11:34 AM, "Robert Hou" wrote:
>
> I would suggest
On Wed, Jan 9, 2019 at 1:03 AM Robert Hou wrote:
> Can you try "drill" as your zk.root?
>
> What do you see when you log into zookeeper and run "ls /"?
>
> Thanks.
>
> --Robert
>
> On Wed, Jan 9, 2019 at 12:45 AM Tushar Pathare wrote:
I added zk.root to my drill-override.conf and it showed up in zookeeper,
and I can connect to the new zk.root. Since you don't have zk.root, then
you should use "drill" in your connection URL.
Thanks.
--Robert
On Wed, Jan 9, 2019 at 1:08 AM Robert Hou wrote:
> When
I had to restart drill after I changed drill-override.conf.
--Robert
On Wed, Jan 9, 2019 at 1:19 AM Robert Hou wrote:
> I added zk.root to my drill-override.conf and it showed up in zookeeper,
> and I can connect to the new zk.root. Since you don't have zk.root, then
> you should use "drill" in your connection URL.
If you know the schema ahead of time, you can try creating a view. Using
your xyz.json example:
create view newxyz as select cast(address as varchar) address, cast(zipcode
as int) zipcode from `xyz.json`;
select * from newxyz;
+------------------+----------+
| address          | zipcode  |
+------------------+----------+
Congratulations, Bohdan. Thanks for contributing to Drill!
--Robert
On Tue, Jul 16, 2019 at 11:50 AM hanu mapr wrote:
> Congratulations Bohdan!
>
> On Tue, Jul 16, 2019 at 9:30 AM Gautam Parai wrote:
>
> > Congratulations Bohdan!
> >
> > Gautam
> >
> > On Mon, Jul 15, 2019 at 11:53 PM Bohdan
Congratulations Charles, and thanks for your contributions to Drill!
Thank you Arina for all you have done as PMC Chair this past year.
--Robert
On Fri, Aug 23, 2019 at 4:16 PM Khurram Faraaz wrote:
> Congratulations Charles, and thank you Arina.
>
> Regards,
> Khurram
>
> On Fri, Aug 23, 2019