There are other logs that might be reporting the error.
Look at the other logs in the Drill UI ... one that can carry more
information is drillbit.out
-Original Message-
From: Akshay Joshi [mailto:joshiakshay0...@gmail.com]
Sent: Tuesday, October 31, 2017 11:06 AM
To:
I second Ted's suggestion!
Since we haven't seen your profile's operator overview, we can't say for
sure why the performance isn't good.
Off the top of my head, these are the most likely things happening that make
your performance so bad:
1. All the CSV files are being read and rows rejected
>>
>> Can you try with CURL as well ?
>>
>> curl -v -X POST -H "Content-Type: application/json" -d
>> '{"name":"oracle1", "config": {"type": "jdbc", "enabled": true,"driver":
>> "oracle
That doesn't look too big. Are the queries failing during planning or execution
phase?
Also, you mentioned that you are running this on a machine with 16GB RAM. How
much memory have you given to Drill? A typical minimum config is about 8GB for
Drill alone, and with 2-3GB for the OS, not a whole lot
I've filed https://issues.apache.org/jira/browse/DRILL-6423 and opened a PR for
this. (https://github.com/apache/drill/pull/1266)
This allows you to export the visible table in the results page as a CSV. That
means, you can actually sort or filter the results in the UI and then choose
the
Can you provide more details for a repro? Multiple issues can potentially cause
or (incorrectly) report a memory leak.
If you notice, the 'Actual' value is zero, so there isn't really a leak, but
the resources were not closed properly. What does the stack trace say?
On 6/7/2018 11:05:32 PM,
Can you share the stack trace as well?
On 6/11/2018 3:12:07 AM, Divya Gehlot wrote:
Hi ,
I am trying to convert a complex JSON format to Parquet and I am getting the
error below:
SYSTEM ERROR: UnsupportedOperationException: Unsupported type LIST
Fragment 0:0
JSON File format as below :
{
I'm wondering if this JIRA is related to the issue you are facing:
https://issues.apache.org/jira/browse/DRILL-2241
On 6/11/2018 9:44:32 AM, Kunal Khatua wrote:
Can you share the stack trace as well?
On 6/11/2018 3:12:07 AM, Divya Gehlot wrote:
Hi ,
I am trying to convert the complex json format
This is very odd. Out of the box, using Drill in embedded mode (i.e. zk=local)
works for me.
I'm testing with Apache Drill 1.13.0 on Windows 10. Nothing fancy in my env var
PATH.
Everything points to JDK8.
You're seeing this with a clean setup as well? (for an existing machine, you
should
Hi
Could you upload the screenshots to a public hosting site and post the links?
The Apache mailing list blocks attachments for security reasons.
~ Kunal
On 7/1/2018 10:22:00 PM, dony.natra...@ramyamlab.com
wrote:
Hi there,
Hope you’re doing good.
I need suggestion on performance issues
tools like DBeaver (which has a nice feature of
automatically downloading the latest JDBC drivers) or Squirrel.
You can then use the WebUI to monitor the queries in flight, etc.
On 6/30/2018 12:18:23 AM, Kunal Khatua wrote:
Are you running the query through the WebUI? What are the memory
Are you running the query through the WebUI? What are the memory settings of
your Drillbit?
On 6/26/2018 7:45:30 AM, Dave Challis wrote:
Are there any recommended Drill settings to configure in order to ensure
that the web console (running on 8047) remains responsive even under heavy
load?
Hi Daniel
I might not be the best person to answer this, but I'd think that for
supporting encryption to data sources, both Drill and the data source would
need to agree to a common implementation of such a protocol. I don't know if
there is any defacto standard for data sources in general.
What is the entire stack trace? I'm wondering if there is an internal timeout
that is occurring for long-running CTAS in a secure setup.
On 6/29/2018 3:42:17 AM, Divya Gehlot wrote:
Hi,
At times I am getting an error while CTAS and it doesn't happen all the time
like next run for 18 hours it will
What is the system memory and what are the allocations for heap and direct? The
memory crash might be occurring due to insufficient heap. The limits parameter
applies to the direct memory and not Heap.
Can you share details in the logs from the crash?
-Original Message-
From: Timothy
Hi Lalit
Your profile hints that it is stuck in the Major Fragment 06-xx-xx, which is
fed data from 16-xx-xx via 11-Exchange.
Looking at the operators’ overview and the similarity with other major
fragments, only this one seems to be stuck at completing the sort.
Could you provide the JStack
For a network file, you want to use something like an NFS mount point and query
the file via that.
-Original Message-
From: Thiago Hernandes de souza [mailto:thiago.so...@cedrotech.com]
Sent: Thursday, January 25, 2018 10:59 AM
To: user@drill.apache.org
Subject: [Drill] Connect on
e view is simple: SELECT * FROM s3://myparquet.parquet (14GB)
planner.memory.max_query_memory_per_node = 10479720202
Drillbit.log attached (I think I have the correct selection included).
Thanks
On Fri, Jan 26, 2018 at 2:41 PM, Kunal Khatua
<kkha...@mapr.com<mailto:kkha...@mapr.com>
That basically indicates that the embedded Drillbit failed to start up, but the
SQLLine instance did start (with no current connection to a running embedded
Drillbit). Check the logs to see why the Drillbit JVM isn't coming up.
From: Bo Qiang [mailto:bo.qi...@nielsen.com]
Sent: Tuesday,
e the data.
>> Without writing a plugin for each storage system I'd like to leverage
>> Apache Drill as a broker towards all of them.
>>
>> Right now I've tried to put the phoenix-core.jar into the
>> jar/3rdparty folder but when I try to create the Phoenix storage I
anning speed, metadata operations, etc. Also a good check to
see if the data is healthy.
You may consider looking at some pointers here .
https://community.mapr.com/community/exchange/blog/2017/01/25/drill-best-practices-for-bi-and-analytical-tools
--Andries
On 1/28/18, 6:18 PM, "Ku
er_node = 10479720202
Drillbit.log attached (I think I have the correct selection included).
Thanks
On Fri, Jan 26, 2018 at 2:41 PM, Kunal Khatua
<kkha...@mapr.com<mailto:kkha...@mapr.com>> wrote:
What is the system memory and what are the allocations for heap and direct
The JDBC storage plugin allows Drill to leverage any SQL system that has JDBC
drivers, so it should work.
That said, the JDBC storage plugin is a community developed storage plugin, so
it might not be fully tested.
If you are looking to simply have the Phoenix JDBC driver bundled into the
It might be to do with the way you've installed Drill.
If you've built and deployed Drill, odds are that the client will be different.
With the RPM installation, however, the installer has symlinks that make the
mapr-client libraries required by Drill point to the libraries available
in
ng a CDH5 profile for Drill without any problem.
What do you think? Is there any possibility to have Drill and Drillix merged
soon?
Best,
Flavio
On Mon, Feb 5, 2018 at 9:10 PM, Kunal Khatua <kkha...@mapr.com> wrote:
> Hi Flavio
>
> I'm wondering whether you tried modifying the pom.x
https://issues.apache.org/jira/browse/PHOENIX-4523
On Fri, Feb 2, 2018 at 7:21 PM, Kunal Khatua <kkha...@mapr.com> wrote:
> That's great,
This made it into Drill 1.12, but there is some additional work to be done. We
are hoping to have it completed in time for 1.13 release, along with
documentation.
-Original Message-
From: John Omernik [mailto:j...@omernik.com]
Sent: Monday, February 12, 2018 12:44 PM
To: user
Can you share what the error is? Without that, it is anybody's guess on what
the issue is.
-Original Message-
From: Anup Tiwari [mailto:anup.tiw...@games24x7.com]
Sent: Tuesday, February 13, 2018 6:19 AM
To: user@drill.apache.org
Subject: Reading drill(1.10.0) created parquet table in
eb 13, 2018 10:59 PM, Kunal Khatua kkha...@mapr.com wrote:
Can you share what the error is? Without that, it is anybody's guess on what
the issue is.
-Original Message-
From: Anup Tiwari [mailto:anup.tiw...@games24x7.com]
Sent: Tuesday, February 13, 2018 6:19 AM
To: user@drill
Numerous fixes have gone into Drill since then. Can you try with Drill 1.12 ?
-Original Message-
From: Siva Gudavalli [mailto:gudavalli.s...@yahoo.com.INVALID]
Sent: Wednesday, February 14, 2018 6:23 PM
To: user@drill.apache.org
Subject: Drill 1.10 Memory Leak with CTAS + partition by
| 1| 12 | 123 |
+--+--+--+
1 row selected (0.587 seconds)
0: jdbc:drill:schema=dfs>
Thanks,
Arjun
____
From: Kunal Khatua <kkha...@mapr.com>
Sent: Wednesday, February 21, 2018 12:37 AM
To: user@drill.apache.org
Subject: RE: Fixe
Looks like DRILL-5547: Linking config options with system option manager (
https://github.com/apache/drill/commit/a51c98b8bf210bbe9d3f4018361d937252d1226d
) introduced a change in the computation, which is based on the number of cores.
>>
>>
>>
>>
>> On Monday, February 19, 2018, 12:52:42 PM PST, Kunal Khatua <
>> kkha...@mapr.com&
Hi everyone
We're working on simplifying the Drill memory configuration process, whereby
users no longer need to get into the specifics of Heap and Direct memory
allocations.
Here is the original JIRA https://issues.apache.org/jira/browse/DRILL-5741
The idea is to simply provide
As long as you have delimiters, you should be able to import it as a regular
CSV file. Using views that define the fixed-width nature should help operators
downstream work more efficiently.
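As a sketch of that suggestion (the file path, column positions, and names below are hypothetical), a view can cast and trim the delimited columns so downstream operators see typed data:

```sql
-- Drill exposes delimited text rows as the columns[] array;
-- a view over it gives downstream operators typed columns.
CREATE VIEW dfs.tmp.`typed_export` AS
SELECT CAST(TRIM(columns[0]) AS INT)  AS id,
       TRIM(columns[1])               AS name,
       CAST(TRIM(columns[2]) AS DATE) AS created
FROM dfs.`/data/export.csv`;
```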
-Original Message-
From: Flavio Pompermaier [mailto:pomperma...@okkam.it]
Sent: Monday,
MEP 4.1 will be carrying Drill-1.12 and is expected in early February 2018
-Original Message-
From: John Omernik [mailto:j...@omernik.com]
Sent: Wednesday, January 03, 2018 7:06 AM
To: user
Subject: Timeframe on Apache Drill 1.12 in MapR Package?
Hey all, just
I actually have a PR that would work in conjunction with listing for the
WebUI's auto-complete feature. I had based it off an original PR meant for
providing descriptions to System options, but since that has got stuck in
review and rework, I've not made new progress on this.
We'll definitely
Hi Peter
From the logs you shared, the Parquet-related messages are a warning and have
nothing to do with the unresponsiveness and subsequent crash of the Drillbit.
The Zookeeper timeout indicates that either the ZK or your Drillbit was possibly
overloaded. Since the Drillbit is reporting the
I think apart from a lot of the improvements proposed, like metadata store
(+1), having support for a materialized view would be especially useful.
The reason I am proposing this is that there are some storage plugins, for
which Drill cannot pushdown filters, etc. Materialized views would allow
Congratulations, Boaz!!
On 8/17/2018 10:11:32 AM, Paul Rogers wrote:
Congratulations Boaz!
- Paul
On Friday, August 17, 2018, 2:56:27 AM PDT, Vitalii Diravka wrote:
Congrats Boaz!
Kind regards
Vitalii
On Fri, Aug 17, 2018 at 12:51 PM Arina Ielchiieva wrote:
> I am pleased to announce that
Is there a stack trace? Does the Drillbit crash, because that might have
generated a log as well (hs_error_ ) ?
On 8/28/2018 12:47:57 AM, Divya Gehlot wrote:
Hi,
I could see a Java pointer error in the Drill logs as below
*** Error in
L-6693.
It is introduced somewhere in Drill 1.13.0.
Kunal, could you please take a look?
Thanks!
On Wed, Jul 18, 2018 at 9:07 AM Kunal Khatua wrote:
> Could you share the query details and/or the profile of the query you are
> running? There wasn't anything specific in the UI that changed.
>
You could try the reverse. Monitor in the initial window, while submitting the
query in another window.
That said, the reason your console is getting stuck is by design. The browser
tab from which you submit the query is the window where you'll receive the
results of the query. Hence, the
of same drillbit as in our cluster we have only one web-console
that we can access. Thanks for your suggestion regarding the other tools which
is one option that I could try.
Please let me know your response.
Best regards,
_
Tilak
-Original Message-
From: Kunal Khatua
Hi Alex
Do you have a lot of files in your 65GB of CSV dump, and are the rows very
wide? Is the error instantaneous or does it take a while?
The error for your example would typically occur if the Drillbit is very busy
doing something, resulting in a timeout or lack of a heartbeat from the
Hi Carlos
It looks similar to an issue reported previously:
https://lists.apache.org/thread.html/1f3d4c427690c06f1992bc5070f355689ccc5b1ed8cc3678ad8e9106@
Could you try setting the JVM's file encoding to UTF-8 and retry? If it does
not work, please file a JIRA in https://issues.apache.org
Saw this on Twitter... looked interesting. We've been thinking of revamping the
site a bit to help around folks with leveraging Drill in different
applications... this could sit nicely in a Cyber-related domain.
As for the Ctrl+Enter / Meta-Enter ... let's keep that as a separate PR.
Always
Vitalii
I think Pedro is referring to this project:
https://github.com/bizreach/drill-excel-plugin
It would be worth considering adding this if it is mature enough.
Pedro
Like Vitalii suggested, please create a JIRA. This helps gain visibility within
the community and someone (including
ere is the drillbit.log output -> https://pastebin.com/X44guqiM
Thanks,
Divya
On Wed, 29 Aug 2018 at 07:05, Kunal Khatua wrote:
> Is there a stack trace? Does the Drillbit crash, because that might have
> generated a log as well (hs_error_ ) ?
>
> On 8/28/2018 12:47:57 AM,
Scott
I think I can explain why you are getting the OutOfMemory.
Drill essentially has 2 pools of memory... the standard JVM Heap and the
Netty-managed Direct memory. When you are reading a JSON document, it needs to
be deserialized into Java heap objects because of the JSON parser libraries
Hi Divya
Did you manage to resolve this? Since there isn't a stack trace, I'll take
a guess. It looks like this might have been occurring during the planning
phase, when the Foreman is trying to allocate work on a Drillbit. Is the same
host being reported? It might be that the
Drill Explorer is available with the ODBC driver for Linux, Mac and Windows
platform. You should be able to download from links on the Apache Drill Website:
https://drill.apache.org/docs/installing-the-odbc-driver/
On 7/13/2018 10:27:50 AM, Kayode Thomas
wrote:
Hello,
I am interested in
Not until now :)
Can you file a JIRA so that we can track it?
On Tue, Mar 6, 2018 at 11:40 AM, John Omernik wrote:
> Perfect. That works for me because I have a limited number of values, I
> could see that getting out of hand if the values were unknown. Has there
> been any
You should be able to connect to a Hive cluster via JDBC. However, the benefit
of using Drill co-located on the same cluster is that Drill can directly access
the data based on locality information from Hive and process across the
distributed FS cluster.
With JDBC, any filters you have, will
1. Drill connects to JDBC sources using the JDBC Storage plugin, and you
should probably try that. That said, the JDBC storage plugin, AFAIK, does
not push down filters to the database. But you could try this to
confirm.
2. What you need to look at first is the query profile for
Here is the background of your issue:
https://drill.apache.org/docs/sort-based-and-hash-based-memory-constrained-operators/#spill-to-disk
HashAgg introduced a Spill-to-disk capability in 1.11.0 that allows for
Drill to run a query's HashAgg in a memory constrained environment. The
memory required
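As a sketch of how that memory budget is raised (the value is illustrative; the option is the same one mentioned elsewhere in this thread):

```sql
-- Give each query up to ~4GB of direct memory per node for buffered
-- operators such as HashAgg; the value is in bytes.
ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;
```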
in
> the drill-override.conf file. After searching in google I found that this
> issue occurs when there is a version mismatch or if one class is present in
> two or more JAR files. I have not very much idea in Java, so can you please
> let me know any particular JAR which has to be remov
>
>
> On Mon, Mar 12, 2018 1:59 PM, Anup Tiwari anup.tiw...@games24x7.com
> wrote:
> Hi Kunal,
> Thanks for info and i went with option 1 and increased
> planner.memory.max_query_memory_per_node and now queries are working
> fine. Will let you know in case of any issues.
>
>
Hi Jiang
Thanks for bringing this to our attention.
However, we're not actively developing the Mongo storage plugin at the
moment, so it might be a while before this is updated.
Do you think you could help with submitting a pull-request against a JIRA
filed for this? You already appear to have
The trouble with your approach is that making it a background process
pauses the process in an interactive mode, until you bring it to the
foreground.
You cannot use an Embedded Drillbit with Zookeeper. You should start it off
as a regular drillbit instance.
Zookeeper only serves the purpose of
>
> > wrote:
> > > >
> > > > > Hi Asim,
> > > > >
> > > > >
> > > > > You may try using hive uber jar in case you have not tried it. See
> if
> > > > > below link helps.
> > > > >
>
This looks incorrect:
drill.exec.sys.store.provider.local.path = "mysql-connector-java-5.1.45-bin.jar"
Refer to https://drill.apache.org/docs/storage-plugin-registration/#storage-plugin-configuration-persistence
and provide a path that will allow you to persist the storage plugins'
info.
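For illustration, the corrected entry in drill-override.conf would point at a writable directory instead of a jar (the path below is a placeholder, not a recommendation):

```conf
drill.exec: {
  # Directory where Drill persists storage plugin configuration.
  sys.store.provider.local.path: "/var/tmp/drill/storage"
}
```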
There could be multiple reasons for why the ChannelClosedException is
thrown. What kind of a load are you running on your Drillbits?
There might be resources in the mailing list archives that have touched
upon solutions for this, so you could look up there as well.
On Wed, Mar 14, 2018 at 11:09
Hi Anup
It helps if you can share the profile (*.sys.drill / *.json files) to help
explain. I don't think the user mailing list allows attachments, so you
could use an online document sharing service (e.g. Google Drive, etc) to do
the same.
Coming back to your description, it seems like you are
Hi Anup
Can you share this as a file ? There seems to be some truncation of the
contents.
Share it using some online service like Google Drive or Dropbox, since the
mailing list might not allow for attachments.
Thanks
~ Kunal
On Tue, Mar 13, 2018 at 11:44 PM, Anup Tiwari
I might be wrong, but I think it's partly because of the need for Zookeeper on
Windows, which is not commonly done.
Other more obvious factors, IMO, are the overhead in creating and maintaining
*nix shell scripts in Batch files. Linux (Bash) scripts are much more powerful
with a lot of
ittling down queries you need to
(unless you need to whittle down on the base64 encoded column) then yank the
data into R, Python, Go, whatever and do the base 64 decoding there.
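As a sketch of that last step in Python (the rows and column name here are made up; in practice they would come from a Drill query via ODBC/JDBC):

```python
import base64

# Hypothetical result set pulled out of Drill, with one base64-encoded column.
rows = [
    {"id": 1, "payload_b64": "aGVsbG8="},
    {"id": 2, "payload_b64": "d29ybGQ="},
]

# Decode the column client-side rather than inside the SQL query.
decoded = [base64.b64decode(r["payload_b64"]).decode("utf-8") for r in rows]
print(decoded)  # ['hello', 'world']
```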
> On Apr 20, 2018, at 1:47 PM, Kunal Khatua wrote:
>
> First, you're posting on the wrong mailing list.
First, you're posting on the wrong mailing list. This should be posted in the
user mailing list, because this is meant for discussion of development features
about Drill. As a result, I'll cc to the dev list (so that you don't miss the
reply), but you should carry on the conversation in the
Yes, that is correct.
On 3/28/2018 3:45:13 AM, Rahul Raj wrote:
Is the Drill fork of Calcite maintained at
https://github.com/mapr/incubator-calcite/?
I assume that the required Calcite branch for Drill 1.13.0 is
DrillCalcite1.15.0. I would like to test a newer patch
For security reasons, the user mailing lists don't allow attachments. You
could upload the image to some online file sharing services like
GoogleDrive/Dropbox and share the link in the mailing list, along with a
description of the setup, etc.
On Fri, Mar 16, 2018 at 3:10 AM, Sonu Kumawat
Nice. We should have some polls conducted for what Drill is being used for
by the community as well.
On Thu, Mar 15, 2018 at 9:37 PM, Saurabh Mahapatra <
saurabhmahapatr...@gmail.com> wrote:
> Participate in the Apache Drill Poll and have your voice heard through ONE
> vote:
a higher level".
> Are you saying to increase this "planner.memory.max_query_memory_per_node"
> from
> 2GB? If yes then just wanted to mention that i have already set
> planner.memory.max_query_memory_per_node = 4G(mentioned in trail mail).
> Let me know if i have misinterpreted anything.
>
>
t; but
can't find anything definitive.
On Mon, Mar 19, 2018 at 6:16 PM, Kunal Khatua wrote:
> This error looks familiar and might be because of the Python library
> wrapping a select * around the original query.
>
> Using the JDBC driver directly doesn’t seem to show t
debug it ?
Thanks,
Divya
On 15 March 2018 at 15:00, Kunal Khatua wrote:
> There could be multiple reasons for why the ChannelClosedException is
> thrown. What kind of a load are you running on your Drillbits?
>
> There might be resources in the mailing list archives that have touche
I think Ted's question is 2 fold, with the former being more important.
1. Can we push filters past a union.
2. Will Drill push filters down to the source.
For the latter, it depends on the source.
For the former, it depends primarily on whether Calcite supports this. I
haven't tried it, so I
What is the data format?
If you have a JDBC driver for that, you should be able to query it.
On 2/24/2018 9:01:43 PM, Bremner-Stokes, Glen
wrote:
Hello all,
Not sure if I am posting this correctly so please redirect me if necessary.
I have a GraphQL server that
:19 PM, Kunal Khatua <kkha...@mapr.com> wrote:
This error looks familiar and might be because of the Python library wrapping a
select * around the original query.
Using the JDBC driver directly doesn’t seem to show this problem. Drill 1.13.0
is out now. Could you give a try with that and c
This error looks familiar and might be because of the Python library wrapping a
select * around the original query.
Using the JDBC driver directly doesn’t seem to show this problem. Drill 1.13.0
is out now. Could you give a try with that and confirm if the behavior is the
same?
-Original
That could be a possibility. See if you can skip that value in the column...
try using a filter condition with a pattern such as
NOT LIKE '%'
On 6/27/2018 11:25:10 PM, Nitin Pawar wrote:
Could this cause an issue if one of the field in concat function has large
text ?
On Thu, Jun 28, 2018 at 11:10
There is another possible explanation of why this is happening.
The `createFullRootSchema` seems to be called as part of the initial schema
creation required for a query, and that means initialization of the backend
storage plugins for the new nodes.
You might need to disable some of the
find on collection is expected, as it has to capture
schema information. But, I see count and find being run 43 times each for one
single query in drill.
On Sat, 20 Oct 2018 at 00:06, Kunal Khatua <ku...@apache.org> wrote:
Hi Bala
Can you share details of the profiles itself? It mi
Hi Bala
Can you share details of the profiles itself? It might be that the MongoDB
storage plugin is translating the query into 100 mongo queries because of some
(100?) specific filter criteria in the Drill query?
JVM Heap usage fluctuation would indicate frequent object creation and garbage
Congratulations, Hanu!
On 11/1/2018 11:04:58 AM, Abhishek Girish wrote:
Congratulations, Hanu!
On Thu, Nov 1, 2018 at 10:56 AM Khurram Faraaz wrote:
> Congratulations Hanu!
>
> On Thu, Nov 1, 2018 at 10:14 AM Gautam Parai wrote:
>
> > Congratulations Hanumath! Well deserved :)
> >
> > Gautam
>
What is the Drillbit's JStack? You can try seeing the /threads webpage.
On 10/25/2018 8:16:06 PM, Divya Gehlot wrote:
Hi,
I am using Drill 1.10 and I have a query running for days, and when I click on
cancel the query through the Drill Web UI, it hangs in the CANCELLATION_REQUESTED
state.
Is there any way
Yes, a link of the recording should be available after the meetup.
On 11/4/2018 4:38:51 PM, Divya Gehlot wrote:
Hi ,
Can we have the link of recorded video for the audience who cannot attend
in person ?
Thanks,
Divya
On Mon, 5 Nov 2018 at 06:46, Pritesh Maker wrote:
> Hello, Drillers!
>
> We
Hi Prisdha
What do the logs say? Can you share the stack trace from the logs?
Kunal
On 11/6/2018 1:58:42 PM, Prisdha Dharma wrote:
Hello,
The latest Apache Drill works fine with JDBC, JSON, CSV, and simple parquet
files. However it fails to read parquet files with nested columns, such as
Hi Herman
I'm assuming that you're doing analytics on your data. If that's the case, the Parquet
format is the way to go.
That said, could you provide some details about the parquet data you've
created, like the schema, parquet version and the tool used to generate.
Usually, the schema (and meta)
Hi Niels
There is currently no one I know who is working on stored procedures.
Development is primarily driven by the need for features that give the most
benefit to the community of users, and there doesn't seem to be that much
demand for this feature. That said, if you think it is a real
Hi Matthias
The waiting time for a PARQUET_ROW_GROUP_SCAN operator is the total time that
all the fragments took to read the parquet data into memory as Drill's Value
Vectors. So, 80 seconds would indicate that the bulk of the time is spent in
just getting the data.
If you scroll down to the
Very good point on the 'Added in Drill Version 1.XX' , and I think Bridget's
already on it !
However, for sys.functions, it doesn't make sense to carry a min version. The
Drill server basically scans all the available functions and exposes them in
the sys.functions table. If a function has
Hi Charles
Can you provide steps of where the JDBC driver needs to be located ? I managed
to start up the server but I hit a signin message:
Cannot GET /sqlpad/signin
I see the following in the server output:
[root@kk127 sqlpad]# npm start > sqlpad@2.8.0 start /root/drill/cgivre/sqlpad >
There must be a stack trace in the logs. Could you share that?
On Tue, Jan 8, 2019, 11:10 PM Tushar Pathare wrote:
> Hello Team,
>
> We have installed drill 1.12.0 and trying to connect
> using a client to the drill cluster
>
> Our config for connection is
>
>
>
> Drill class is :
Hi James
The problem you're describing could be due to multiple factors. Typically,
browsers don't specify a timeout for a request it sends to the server (Amazon,
in this case), but it can be safe to assume that it is reasonably long.
You said that things work great for small scale data or
are not opened. Drill
also opens a lot of ports during execution. I am not sure what ports
they are
On Mon, Mar 25, 2019, 22:46 Kunal Khatua wrote:
> Hi Praveen
>
> The mailing lists don't allow attachments to be sent through. What ports
> are you trying to change?
>
> ~ Kunal
> On
Congratulations! Very well deserved!
On 4/5/2019 11:02:58 AM, Boaz Ben-Zvi wrote:
Congratulation Sorabh - welcome to the Project Management Committee !!
On Fri, Apr 5, 2019 at 10:58 AM Abhishek Ravi wrote:
> Congratulations Sorabh! Well deserved!
>
> On Fri, Apr 5, 2019 at 10:49 AM hanu mapr
Hi Praveen
The mailing lists don't allow attachments to be sent through. What ports are you
trying to change?
~ Kunal
On 3/25/2019 9:59:29 AM, PRAVEEN DEVERACHETTY wrote:
I have a question regarding the ports used in embedded Drill. Is there any way to
disable these ports in embedded Drill
Hi everyone
It gives me great pleasure in announcing the launch of the "Powered By Drill"
page on the official Apache Drill website : https://drill.apache.org/poweredBy
As a start, the page currently has a handful of Drill users that shared a short
blurb about their usage in production, and one
Hi Denis
You seem to be trying to read a Parquet 2.0 format file with a Parquet 1.10
reader that comes with Drill. Is there a specific reason you are using version
2.0 ?
~ Kunal
On 3/11/2019 10:13:39 AM, Denis Dudinski wrote:
Hello,
I have a parquet 2.0 file which contains serialised avro
Executing an "alter system set param=value" usually persists the value. Not
sure if that works for an embedded mode.
Could you try and let us know if that works?
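For example, a hedged sketch of what to try from SQLLine (the option shown is arbitrary; any system option should behave the same way):

```sql
-- Persisted across restarts when the local persistent store is writable.
ALTER SYSTEM SET `store.format` = 'parquet';
```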
On 3/11/2019 11:19:13 AM, PRAVEEN DEVERACHETTY wrote:
I am using Apache Drill on the Windows platform. My requirement is to update
the
Hi Rob
This is very weird ! Is that the entire JSON? I'm wondering if the JSON
document by any chance has a different end-of-line character that *might* be
causing this. Windows uses CarriageReturn+LineFeed, while Linux (HDFS) has only
LineFeed, and MacOS uses CarriageReturn.
Linux has a