You'll want to configure the Linux boxes on which Drill is running to use
LDAP for user authentication. That way, when Drill checks credentials
against PAM, the local operating system translates the check into LDAP
calls.
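For reference, PAM-based authentication is switched on in drill-override.conf on each Drillbit; a minimal sketch (assumes the JPam native library is installed on every node, and the pam_profiles values depend on how your OS maps PAM to LDAP):

```
drill.exec: {
  security.user.auth: {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
```

With this in place, Drill hands the username/password to PAM, and PAM (via the OS's LDAP configuration, e.g. sssd or pam_ldap) does the directory lookup.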
On Thu, Apr 28, 2016 at 11:59 AM, Alfaro, Tony <sha...@yahoo-inc.com.invalid> wrote:
> Sorry, my bad. I wanted to ask whether there is a way to add a REST
> endpoint as a storage plugin for Drill?
>
>
> ~
> Chandrashekhar
>
> On Tuesday, 26 April 2016 11:45 AM, Tomer Shiran <tshi...@dremio.com>
> wrote:
>
>
these
> server. Now users will run different queries against this table. I assume
> that the data will be in memory for every query once it is loaded the first
> time, or will Drill read the HDFS file every time?
>
> Thanks
>
> On Wed, Mar 23, 2016 at 4:49 AM, Tomer Shiran <t
ver is installed. Will it give a boost? I doubt
> that this will be useful as ultimately it will rely on MS SQL Server for
> performance.
>
> I just want to know what the different options are.
>
> Thanks
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
Currently there isn't a storage plugin that connects directly to
Salesforce. You could of course run an export (Salesforce can do a bulk
export to JSON) and then use Drill to analyze the exported files.
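Once exported, the JSON files can be queried in place; a sketch (the path and field names here are hypothetical, not Salesforce-specific):

```sql
SELECT t.Name, t.AnnualRevenue
FROM dfs.`/exports/salesforce/accounts.json` t
LIMIT 10;
```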
On Tue, Mar 15, 2016 at 11:07 PM, Marat Kalibekov wrote:
> Hello, what is
Yes, that's possible. What kind of system (HDFS, S3, NAS, ...)?
For example, you can create another storage plugin of type 'dfs' (start by
copying the JSON config from an existing one). This can be done through the
Drill UI.
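As a starting point, a 'dfs'-style plugin config looks roughly like this (the connection URI and workspace path are placeholders for your environment; note the JSON "type" is "file" for dfs-style plugins):

```json
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://namenode:8020/",
  "workspaces": {
    "root": { "location": "/data", "writable": false, "defaultInputFormat": null }
  },
  "formats": {
    "json": { "type": "json" },
    "parquet": { "type": "parquet" }
  }
}
```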
> On Mar 15, 2016, at 7:34 AM, Sanjiv Kumar
rt data into existing
> tables. Using CTAS is not an option with the amount of data I have to
> process. If there is no workaround, I have to abandon Drill and go back to
> Spark and Hive.
>
> Thanks,
>
> Ian.
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
Try using this URL instead:
http://www.apache.org/dyn/closer.lua?filename=drill/drill-1.5.0/apache-drill-1.5.0.tar.gz&action=download
The docs need to be updated.
> On Feb 23, 2016, at 5:02 PM,
> wrote:
>
> Hi
>
> Does anyone help me?
imit
> 10;
> > > ++
> > > | columns |
> > > ++
> > > | ["1",&q
asionally I use RStudio and
> Shiny.
>
> I recently got a licence for Tableau, so I run that in VirtualBox on
> Windows. Lots to learn there, so little time. ;)
>
> P.
>
>
> On Mon, Jan 4, 2016 at 12:50 PM, Tomer Shiran <tshi...@dremio.com> wrote:
>
Peder,
What BI tool are you running on Debian?
Thanks,
Tomer
On Thu, Dec 31, 2015 at 8:40 AM, Peder Jakobsen | gmail wrote:
> Hi Norris,
>
> Just discovered that the ODBC driver is only available for CentOS / Redhat
> and SuSE, but not for Debian. It would be nice to
No. Drill does not depend on Hadoop
Can you try these instructions and see if they work:
http://www.dremio.com/blog/installing-apache-drill-on-microsoft-windows/
> On Dec 31, 2015, at 11:32 AM, Peder Jakobsen | gmail
> wrote:
>
> Hi, does hadoop have to be installed in
+1
Having a Python client would be super valuable
> On Dec 28, 2015, at 9:45 AM, Peder Jakobsen | gmail
> wrote:
>
> Two thumbs up for this project. An immediate benefit is the ability to
> take advantage of the enhanced interactive features of the iPython shell.
>
>
>
> Best,
> Nirav
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
On Tue, Oct 27, 2015 at 1:55 AM, Assaf Lowenstein wrote:
> Hi,
>
> I'm trying to set up a Drill server running in embedded mode so users will
> be able to access a centralized location, and I need assistance/confirmation
> on the following:
>
>1. As it's a server, I need
This is something we've talked about adding. A few questions that will help
us plan:
How do you call this API? Is it a GET request which includes the necessary
authentication tokens?
Is the response a single map as shown below with all the records in the
"data" array, regardless of the number of
ng
> > > > > storage permissions without having to manage centralized security
> > > > > permissions at Drill layer through user impersonation. Users can
> use
> > > > Drill
> > > > > views if they need more granular access to the data.
> > > > >
> > > > > I would be interested in learning more about your use case to
> secure
> > > the
> > > > > storage plugin/connections.
> > > > >
> > > > > thanks
> > > > >
> > > > > On Mon, Oct 26, 2015 at 6:33 AM, John Omernik <j...@omernik.com>
> > > wrote:
> > > > >
> > > > > > Hey all -
> > > > > >
> > > > > > On file system based storage plugins, security is straight
> forward
> > > with
> > > > > > filesystem permissions etc. How do we secure storage plugins?
> It
> > > > would
> > > > > > seem we would want a situation where people could not access
> > certain
> > > > > > storage plugins especially since authentication to the source
> > system
> > > is
> > > > > > going to be "wide open" (i.e. there is no pass through auth to a
> > > > backend
> > > > > > Mongo server, JDBC system, or Hbase setup) thus how do we wrap
> > > > security
> > > > > > around these?
> > > > > >
> > > > > > Even basic security... i.e. only X user can use them, so we can
> use
> > > > them
> > > > > to
> > > > > > load parquet tables or something where we can apply security.
> > > > > >
> > > > > > Thoughts?
> > > > > >
> > > > > > John
> > > > > >
> > > > >
> > > >
> > >
> >
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
o data, and I am not advocating it; it would just be a nice tool to use
> for those orgs where there is a possibility of accessing data and using
> it in a better way, but then using Drill to help clean things up from a
> security perspective.
>
>
>
> On Mon, Oct 26,
Congrats! Nice work by the Drill community!!
> On Oct 17, 2015, at 6:34 AM, Abdel Hakim Deneche
> wrote:
>
> It is my pleasure to announce the release of Apache Drill 1.2.0.
>
> This release of Drill fixes many issues and introduces a number of
> enhancements,
> >
> >
> >
> > --
> > Kamesh.
> >
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
Here are a few slides that show how to use the REST API:
http://www.slideshare.net/dremio/drill-rest-api
Note:
- The method should be: POST
- Add an HTTP header: "Content-Type: application/json"
- Format the payload in JSON
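Putting those three points together, a minimal client sketch in Python (the host/port assume the default Drill web UI on localhost:8047; the HTTP call itself is left commented out since it needs a running Drillbit):

```python
import json

DRILL_URL = "http://localhost:8047/query.json"  # default Drill web UI port

def build_drill_request(sql):
    """Build the headers and JSON payload Drill's REST query endpoint expects."""
    headers = {"Content-Type": "application/json"}
    payload = json.dumps({"queryType": "SQL", "query": sql})
    return headers, payload

headers, payload = build_drill_request("SELECT * FROM sys.version")
# To actually send it (requires the 'requests' package and a running Drillbit):
#   import requests
#   resp = requests.post(DRILL_URL, data=payload, headers=headers)
#   result = resp.json()
```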
On Tue, Oct 13, 2015 at 9:16 PM, Amandeep Singh
--
Tomer Shiran
CEO and Co-Founder, Dremio
y profile page, I saw
> >>the
> >> >> slow queries have a long time for "First Start" column, see below
> >> >>sample.
> >> >> Any hints what's the possible reason?
> >> >>
> >> >> I tried both 0.8.0 and 1.1.0, and got the same result.
> >> >>
> >> >> Major Fragment | Minor Fragments Reporting | First Start | Last Start | First End | Last End | t min | t avg | t max | mem max
> >> >> 00-xx-xx | 1 / 1 | 53.668 | 53.668 | 53.877 | 53.877 | 0.209 | 0.209 | 0.209 | 27MB
> >> >>
> >> >> Thanks,
> >> >> Wen
> >> >>
> >>
> >>
>
>
--
Tomer Shiran
CEO and Co-Founder, Dremio
--
Tomer Shiran
CEO and Co-Founder, Dremio
You can create Parquet-formatted tables in any directory using CREATE TABLE AS
statements. When you're done, delete the directory through the file system (rm,
or hadoop fs -rm on HDFS). Will that work for your use case?
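A sketch of that workflow (the workspace and paths are hypothetical; the target workspace must be writable):

```sql
ALTER SESSION SET `store.format` = 'parquet';
CREATE TABLE dfs.tmp.`events_pq` AS SELECT * FROM dfs.`/data/events.json`;
-- when finished, remove the directory from the shell:
--   hadoop fs -rm -r /tmp/events_pq
```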
On Aug 14, 2015, at 3:56 PM, Alexey Sorokin sorok...@gmail.com wrote:
We are
Yes, there has been interest in Elasticsearch support from the community.
Are you interested in using that?
On Wed, Jul 1, 2015 at 10:57 AM, Ramana I N inram...@gmail.com wrote:
Hey Drillers,
Is there anything in the pipeline for using ElasticSearch as a storage
plugin? Is there any ask for
No. This is referring to the distributed file system. 'hadoop fs -rm
/path/to/file' should do it
On May 30, 2015, at 7:50 PM, George Lu luwenbin...@gmail.com wrote:
The files created are spread across all the nodes in Drill.
Do I need to go through each of them and delete them one by one?
On Sun,
Not yet. We've discussed this and it's certainly something we would like to
add.
On Thu, May 21, 2015 at 8:39 AM, Virinchi Garimella virin...@sriyamsoft.com
wrote:
Hi,
Does Drill support ElasticSearch / Logstash?
Virin
--
Sent from myMail app for Android
Check out the FLATTEN function:
https://cwiki.apache.org/confluence/display/DRILL/FLATTEN+Function
On Wed, Apr 29, 2015 at 11:44 PM, Mohit Kaushik mohit.kaus...@orkash.com
wrote:
I have a JSON file in HDFS named autom.json containing:
{
  "company": [
    {
      "modelName": {
        "name": "abc"
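FLATTEN produces one output row per element of an array; a sketch against a file shaped like the one above (the path is hypothetical):

```sql
SELECT f.comp.modelName.name AS model
FROM (SELECT FLATTEN(t.company) AS comp
      FROM dfs.`/data/autom.json` t) f;
```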
You may want to check the presentation here:
http://www.meetup.com/Big-Data-Security-and-Data-Governance-Meetup/events/219285891/
On Wed, Mar 11, 2015 at 10:41 AM, Tomer Shiran tshi...@gmail.com wrote:
Not currently. Note that Drill provides an elegant access control solution
by leveraging the DFS ACLs. With Drill views, which are implemented as
simple files in the DFS, it's then easy to do column and row-level access
control, and even selective data masking, using standard DFS ACLs.
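As a sketch of the view approach (all names here are hypothetical): define a view that exposes only the allowed columns and rows, then use DFS ACLs so users can read the view file but not the underlying data:

```sql
CREATE VIEW dfs.views.`customers_eu` AS
SELECT name, city
FROM dfs.secure.`customers`
WHERE region = 'EU';
```

Queries through the view then see only the projected columns and the filtered rows.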
Note that
Yes, Drill is currently focused on querying data as opposed to inserting or
updating. While most of the systems you listed take a traditional approach
to SQL in which a DBA must create and manage schemas, Drill is designed for
Hadoop and NoSQL databases. In these systems, most of the data is
Thanks for pointing that out. We need to update the links. It looks like
the binaries were moved out of their previous location and into the new
location.
On Sun, Dec 7, 2014 at 8:39 AM, Ajay ajay.ga...@gmail.com wrote:
Hi,
I got into similar issues. It may be because the project recently
code
On Thu, Dec 4, 2014 at 10:30 PM, Ajay ajay.ga...@gmail.com wrote:
Hello,
I am Ajay Garga; I work for an ecommerce company. Currently we are looking at
building a Datawarehouse platform as described below:
DW as a Service
|
REST API
|
SQL On No SQL (Drill/Pig/Hive)
|
No SQL