Apache Drill shows a lot of CLOSE_WAIT states when we access https://<ip address>:8047

2018-06-26 Thread Ken Qi (Guangquan)
Hi Team,

Hope all is good.

We need your help.

Here is the Apache Drill process installed on our server:

drill    19220     1 17 16:48 ?        00:15:32 /usr/java/jdk/bin/java
-Xms8G -Xmx8G -XX:MaxDirectMemorySize=96G -XX:ReservedCodeCacheSize=1024m
-Ddrill.exec.enable-epoll=false -XX:+CMSClassUnloadingEnabled -XX:+UseG1GC
-Dlog.path=/var/log/drill/drillbit.log
-Dlog.query.path=/var/log/drill/drillbit_queries.json -cp
/usr/local/apache-drill-1.13.1/conf:/usr/local/apache-drill-1.13.1/jars/*:/usr/local/apache-drill-1.13.1/jars/ext/*:/usr/local/apache-drill-1.13.1/jars/3rdparty/*:/usr/local/apache-drill-1.13.1/jars/classb/*:/usr/local/apache-drill-1.13.1/jars/3rdparty/linux/*
org.apache.drill.exec.server.Drillbit
root     23651 23227  0 18:16 pts/1    00:00:00 grep --color=auto java

Question 1:

There are a lot of CLOSE_WAIT states when we access Apache Drill at
https://<ip address>:8047. We changed our server IP (masked in the output
below) for security reasons, and since then we cannot access Apache Drill
at https://<ip address>:8047, so we can't check which SQL statements
failed.

tcp6       0      0 :::8047               :::*                    LISTEN       19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54132   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.100.222:52986   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:53009   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54131   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:61202     CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54366   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54129   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:58627   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:58486   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54134   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:53008   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:56226     CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:52991   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:51172     CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:36136     CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54133   CLOSE_WAIT   19220/java
tcp6      24      0 192.168.*.*:8047      192.168.100.131:57474   ESTABLISHED  19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54069   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54130   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:53001   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:52985   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:52990   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54212   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.100.131:58628   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.100.131:53955   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:57391   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:41219     CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54307   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:53000   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.222:52984   CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54308   CLOSE_WAIT   19220/java
tcp6       1      0 192.168.*.*:8047      192.168.3.119:46189     CLOSE_WAIT   19220/java
tcp6     518      0 192.168.*.*:8047      192.168.100.131:54211   CLOSE_WAIT   19220/java



Question 2:

Our Apache Drill goes down frequently, and it seems to be due to a memory
leak. However, we have configured 96 GB of direct memory for Apache Drill,
so can you please advise how we can identify which SQL statements take a
lot of memory, and how we can improve our performance?
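
One thing we have tried to do ourselves: the drillbit is started with
-Dlog.query.path=/var/log/drill/drillbit_queries.json (see the process
listing above), so perhaps we can inspect recent queries and per-drillbit
memory with SQL like the following. This is only a sketch; we are not sure
the field names or the dfs path resolution are right:

    SELECT * FROM sys.memory;

    -- Read the query log as plain JSON (this assumes the dfs plugin points
    -- at the local filesystem of the node we connect to).
    SELECT * FROM dfs.`/var/log/drill/drillbit_queries.json` LIMIT 20;

For reference, here is one of the errors we see: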


[Error Id: 40d789a6-91ee-4e0b-bfc9-a26358a43df3 on
theremin.root.digitalalchemy:31010]
org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
IllegalStateException: Memory was leaked by query. Memory leaked: (67043328)
Allocator(op:14:0:0:HashPartitionSender)
100/67043328/101535744/100 (res/actual/peak/limit)


Fragment 14:0

[Error Id: 40d789a6-91ee-4e0b-bfc9-a26358a43df3 on
theremin.root.digitalalchemy:31010]
   at

Re: Drill error

2018-06-26 Thread Vitalii Diravka
Hi Nitin,

This happens during reallocation of buffer sizes in memory.
It isn't a user exception, so it looks like a bug if you hit it in an
existing plugin.
But to tell you exactly, please describe your case: what kind of query did
you run, any UDFs, and which data source?
Logs can also help.

Thanks.

Kind regards
Vitalii


On Tue, Jun 26, 2018 at 1:27 PM Nitin Pawar  wrote:

> Hi,
>
> Can someone help me understand the error below, and how can I prevent it
> from happening?
>
> SYSTEM ERROR: IllegalStateException: Tried to remove unmanaged buffer.
>
> Fragment 0:0
>
> [Error Id: bcd510f6-75ee-49a7-b723-7b35d8575623 on
> ip-10-0-103-63.ec2.internal:31010]
> Caused By: SYSTEM ERROR: IllegalStateException: Tried to remove unmanaged
> buffer.
>
> Fragment 0:0
>
> [Error Id: bcd510f6-75ee-49a7-b723-7b35d8575623 on
> ip-10-0-103-63.ec2.internal:31010]
>
> --
> Nitin Pawar
>


Web console responsiveness under heavy load

2018-06-26 Thread Dave Challis
Are there any recommended Drill settings to configure in order to ensure
that the web console (running on 8047) remains responsive even under heavy
load?

Currently, if I execute a large or complex query (one that takes, say, 5
minutes to complete), all requests to 8047 just block until the query
completes.

I'd like to use the console to keep an eye on the query via the profiles
page.


Re: Drill Hangout tomorrow 06/26

2018-06-26 Thread Aman Sinha
Hangout attendees on 06/26:
Padma, Hanumath, Boaz, Aman, Jyothsna, Sorabh, Arina, Bohdan, Vitalii,
Volodymyr, Abhishek, Robert

Two topics were discussed:
1. Vitalii brought up the Travis timeout issue, for which he has sent out
an email in this thread. Actually, Vitalii, can you send it in a separate
email with an explicit subject? Otherwise people may miss it.
2. Padma went over the batch sizing work and current status. Padma, please
add a link to your document. Summarizing some of the discussion:

   - Does batch sizing affect output batches only, or internal batches as
   well? For certain operators such as HashAgg it does affect the internal
   batches held in the hash table, since these batches are transferred as-is
   to the output container.
   - The 16 MB limit on batch size is a best effort, but in some cases it
   can be slightly exceeded. The number of rows per output batch is estimated
   as the nearest lower power of 2. For example, if, based on the input batch
   size, the number of output rows is 600, it will be rounded down to 512
   (see the sketch after this list).
   - An optimization could be done in the future to have the upstream
   operator provide the batch size information in metadata, instead of the
   downstream operator computing it for each incoming batch.
   - There was discussion on estimating the size of complex type columns,
   especially ones with nesting levels. It would be good to add details in
   the document.
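
As a concrete handle on the 16 MB point above, the output batch size can be
tuned as a system option; this is a sketch assuming the option name from
the batch sizing work (please correct me if it differs):

   -- 16 MB is the default; lower it if operators overshoot their budgets
   ALTER SYSTEM SET `drill.exec.memory.operator.output_batch_size` = 16777216;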


-Aman

On Tue, Jun 26, 2018 at 10:48 AM Vitalii Diravka 
wrote:

> Lately the Drill Travis build fails more often because the Travis job time
> expires.
> The right way is to accelerate Drill execution :)
>
> Nevertheless, I believe we should consider excluding some more tests from
> the Travis build.
> We can add all TPCH tests (
> TestTpchLimit0, TestTpchExplain, TestTpchPlanning, TestTpchExplain) to the
> SlowTest category.
>
> Is there another solution for this issue? Which other tests execute very
> slowly?
>
> Kind regards
> Vitalii
>
>
> On Tue, Jun 26, 2018 at 3:34 AM Aman Sinha  wrote:
>
> > We'll have the Drill hangout tomorrow, Jun 26th, 2018, at 10:00 PDT.
> >
> > If you have any topics to discuss, send a reply to this post or just join
> > the hangout.
> >
> > ( Drill hangout link
> >  )
> >
>


Re: Drill Hangout tomorrow 06/26

2018-06-26 Thread Vitalii Diravka
Lately the Drill Travis build fails more often because the Travis job time
expires.
The right way is to accelerate Drill execution :)

Nevertheless, I believe we should consider excluding some more tests from
the Travis build.
We can add all TPCH tests (
TestTpchLimit0, TestTpchExplain, TestTpchPlanning, TestTpchExplain) to the
SlowTest category.

Is there another solution for this issue? Which other tests execute very
slowly?

Kind regards
Vitalii


On Tue, Jun 26, 2018 at 3:34 AM Aman Sinha  wrote:

> We'll have the Drill hangout tomorrow, Jun 26th, 2018, at 10:00 PDT.
>
> If you have any topics to discuss, send a reply to this post or just join
> the hangout.
>
> ( Drill hangout link
>  )
>


Re: Drill 1.12 query hive transactional orc table

2018-06-26 Thread Vitalii Diravka
Hi,

Thanks for your question.
Drill supports queries on Hive ACID tables starting from version 1.13.0 [1].
Please upgrade to the latest Drill version; then you will be able to query
Hive transactional tables.
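
After upgrading, queries against the table from your example should work
as-is, e.g.:

    SELECT create_time, log_id, log_type
    FROM hive.db_test.t_test_log
    LIMIT 10;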

[1] https://drill.apache.org/docs/hive-storage-plugin/

Kind regards
Vitalii


On Tue, Jun 26, 2018 at 7:04 AM qi...@tsingning.com 
wrote:

> Hi:
>  I am sorry that my English is poor.
>  I have a problem and need your help.
>  My Drill version is 1.12, and my Hive version is 1.2.1.
>  Things work fine when I use Drill to query a normal Hive table.
>
> Now consider this Hive table:
>  create table db_test.t_test_log(
>   create_time string,
>   log_id string,
>   log_type string)
> clustered by (log_id) into 2 buckets
> ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' LINES TERMINATED BY '\n'
> stored as orc
> tblproperties ('transactional'='true');
> The data stream is Flume --> Hive, with quasi-real-time insertion.
> Querying this table works fine with Hive SQL, but when I use Drill to
> query it, it does not work. Exception info:
>
>
> ==
> 2018-06-25 16:28:25,650 [24cf5855-cf24-48e7-92c7-be27fbae9370:foreman]
> INFO  o.a.drill.exec.work.foreman.Foreman - Query text for query id
> 24cf5855-cf24-48e7-92c7-be27fbae9370: select count(*) cnt  from
> hive.db_test.t_test_log
> 2018-06-25 16:28:25,969 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0]
> INFO  o.a.d.e.w.fragment.FragmentExecutor -
> 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State change requested
> AWAITING_ALLOCATION --> RUNNING
> 2018-06-25 16:28:25,969 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0]
> INFO  o.a.d.e.w.f.FragmentStatusReporter -
> 24cf5855-cf24-48e7-92c7-be27fbae9370:0:0: State to report: RUNNING
> 2018-06-25 16:28:27,251 [24cf5855-cf24-48e7-92c7-be27fbae9370:frag:0:0]
> ERROR o.a.d.exec.physical.impl.ScanBatch - SYSTEM ERROR: IOException:
> Cannot obtain block length for
> LocatedBlock{BP-2057246263-10.30.208.135-1515072017012:blk_1074371083_630359;
> getBlockSize()=904; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[
> 10.30.208.135:50010,DS-8fc25c0e-3c81-49d5-b6d9-d229129b5525,DISK],
> DatanodeInfoWithStorage[10.31.0.7:50010,DS-e91fa806-0e81-48ca-864f-e9019001822c,DISK],
> DatanodeInfoWithStorage[10.31.76.49:50010
> ,DS-edfb09a8-dc1f-4e8e-b99f-c72a89cd2b1e,DISK]]}
>
> Setup failed for HiveOrcReader
>
> [Error Id: d7a136a7-c880-4356-947f-90e68238a4f0 ]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR:
> IOException: Cannot obtain block length for
> LocatedBlock{BP-2057246263-10.30.208.135-1515072017012:blk_1074371083_630359;
> getBlockSize()=904; corrupt=false; offset=0; locs=[DatanodeInfoWithStorage[
> 10.30.208.135:50010,DS-8fc25c0e-3c81-49d5-b6d9-d229129b5525,DISK],
> DatanodeInfoWithStorage[10.31.0.7:50010,DS-e91fa806-0e81-48ca-864f-e9019001822c,DISK],
> DatanodeInfoWithStorage[10.31.76.49:50010
> ,DS-edfb09a8-dc1f-4e8e-b99f-c72a89cd2b1e,DISK]]}
>
> Setup failed for HiveOrcReader
>
> [Error Id: d7a136a7-c880-4356-947f-90e68238a4f0 ]
> at
> org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:586)
> ~[drill-common-1.12.0.jar:1.12.0]
> at org.apache.drill.exec.physical.impl.ScanBatch.next(ScanBatch.java:213)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractSingleRecordBatch.innerNext(AbstractSingleRecordBatch.java:51)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.physical.impl.project.ProjectRecordBatch.innerNext(ProjectRecordBatch.java:134)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.test.generated.StreamingAggregatorGen1.doWork(StreamingAggTemplate.java:187)
> [na:na]
> at
> org.apache.drill.exec.physical.impl.aggregate.StreamingAggBatch.innerNext(StreamingAggBatch.java:181)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:164)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:119)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:109)
> [drill-java-exec-1.12.0.jar:1.12.0]
> at
> 

Re: Drill Hangout tomorrow 06/26

2018-06-26 Thread Padma Penumarthy
Here is the link to the document. Any feedback/comments welcome.

https://docs.google.com/document/d/1Z-67Y_KNcbA2YYWCHEwf2PUEmXRPWSXsw-CHnXW_98Q/edit?usp=sharing

Thanks
Padma


On Jun 26, 2018, at 12:12 PM, Aman Sinha  wrote:

Hangout attendees on 06/26:
Padma, Hanumath, Boaz, Aman, Jyothsna, Sorabh, Arina, Bohdan, Vitalii,
Volodymyr, Abhishek, Robert

Two topics were discussed:
1. Vitalii brought up the Travis timeout issue, for which he has sent out
an email in this thread. Actually, Vitalii, can you send it in a separate
email with an explicit subject? Otherwise people may miss it.
2. Padma went over the batch sizing work and current status. Padma, please
add a link to your document. Summarizing some of the discussion:

  - Does batch sizing affect output batches only, or internal batches as
  well? For certain operators such as HashAgg it does affect the internal
  batches held in the hash table, since these batches are transferred as-is
  to the output container.
  - The 16 MB limit on batch size is a best effort, but in some cases it
  can be slightly exceeded. The number of rows per output batch is estimated
  as the nearest lower power of 2. For example, if, based on the input batch
  size, the number of output rows is 600, it will be rounded down to 512.
  - An optimization could be done in the future to have the upstream
  operator provide the batch size information in metadata, instead of the
  downstream operator computing it for each incoming batch.
  - There was discussion on estimating the size of complex type columns,
  especially ones with nesting levels. It would be good to add details in
  the document.


-Aman

On Tue, Jun 26, 2018 at 10:48 AM Vitalii Diravka  wrote:

Lately the Drill Travis build fails more often because the Travis job time
expires.
The right way is to accelerate Drill execution :)

Nevertheless, I believe we should consider excluding some more tests from
the Travis build.
We can add all TPCH tests (
TestTpchLimit0, TestTpchExplain, TestTpchPlanning, TestTpchExplain) to the
SlowTest category.

Is there another solution for this issue? Which other tests execute very
slowly?

Kind regards
Vitalii


On Tue, Jun 26, 2018 at 3:34 AM Aman Sinha  wrote:

We'll have the Drill hangout tomorrow, Jun 26th, 2018, at 10:00 PDT.

If you have any topics to discuss, send a reply to this post or just join
the hangout.

( Drill hangout link

 )



Re: Help with Apache Drill - S3 compatible storage connectivity

2018-06-26 Thread Paul Rogers
Hi Dummy ID,

Have you tried getting Drill to work with the real S3? There were a few
confusing bits in the S3 docs that Bridget graciously cleaned up. By trying to
access the real S3, you'll ensure that you've gotten the configuration right.

The key missing bit of information was that you have to provide the
fs.s3a.endpoint property. The HDFS s3a library is the current one to use (at
least for S3), rather than the older s3n and similar libraries.

You can also use HDFS to verify that you have proper connectivity to your S3 
clone. Configure HDFS to work with your S3 clone, then check if you can access 
your files using the "hadoop fs" commands. If not, then you've probably got a 
more fundamental issue than Drill.

If "hadoop fs" works, but Drill does not, then you probably have a Drill config 
issue since Drill uses the Hadoop libraries for S3 access. Review the docs for 
what should be in core-site.xml and what should be in your S3 storage plugin 
config.
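
For what it's worth, here is a minimal core-site.xml sketch for an
S3-compatible endpoint. The property names are the standard Hadoop s3a ones;
the endpoint and credentials are placeholders, and whether
fs.s3a.path.style.access is honored depends on the Hadoop version bundled
with your Drill:

    <configuration>
      <property>
        <!-- placeholder: your object store's host and port -->
        <name>fs.s3a.endpoint</name>
        <value>object-store.example.com:9020</value>
      </property>
      <property>
        <name>fs.s3a.path.style.access</name>
        <value>true</value>
      </property>
      <property>
        <!-- placeholder credentials -->
        <name>fs.s3a.access.key</name>
        <value>YOUR_ACCESS_KEY</value>
      </property>
      <property>
        <name>fs.s3a.secret.key</name>
        <value>YOUR_SECRET_KEY</value>
      </property>
    </configuration>

Then verify connectivity outside Drill first, e.g. with
"hadoop fs -ls s3a://your-bucket/".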

Thanks,
- Paul

 

On Tuesday, June 26, 2018, 2:10:11 PM PDT, dummy id  
wrote:  
 
Can I get an update on this, please?

On Fri, Jun 15, 2018 at 11:36 AM, dummy id  wrote:

> Team,
>
> I am not sure who can help me out with this, so I am adding both of the
> help communities. I have followed your documentation on setting up Drill,
> and I am able to query files locally (meaning classpath and dfs), but not
> with S3. I am not using Amazon S3; instead, I am using S3-compatible
> storage from Dell EMC. I need your help setting up the storage plugin file
> so that it uses path-style addressing instead of the normal URL method.
> Could you kindly share an example core-site.xml file, as well as an example
> storage plugin file, that use the path-style addressing method to connect
> to S3?
>
> I have tried using fs.s3a.path.style.access with the value true in both the
> core-site and storage plugin files, but the path-style addressing is still
> not picked up by Drill, and it again tries to connect using the URL method
> it uses with Amazon S3. Kindly help.
>
> Just an FYI, I have followed the steps from the "Drill in 10 minutes"
> documentation to install Drill and connect to my S3-compatible storage.
>
> Awaiting your reply.
  

Re: Help with Apache Drill - S3 compatible storage connectivity

2018-06-26 Thread Parth Chandra
Drill uses HDFS to access S3, so if you have configured the EMC system to
be usable by Hadoop, it will be usable by Drill.
Here's the documentation for an S3-compatible EMC system (
https://www.emc.com/collateral/TechnicalDocument/docu86295.pdf); chapters
6-11 are relevant. I'm not sure if this is the same system you have, but
your system should have similar documentation.
You will probably have to use a different protocol identifier in the URL to
access the storage system.
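
In case it helps, here is a rough sketch of the storage plugin side. The
bucket and endpoint are placeholders, and I'm assuming the file plugin's
per-plugin "config" map for passing Hadoop properties; adjust to your system:

    {
      "type": "file",
      "enabled": true,
      "connection": "s3a://your-bucket",
      "config": {
        "fs.s3a.endpoint": "object-store.example.com:9020",
        "fs.s3a.path.style.access": "true"
      },
      "workspaces": {
        "root": { "location": "/", "writable": false, "defaultInputFormat": null }
      },
      "formats": {
        "csv": { "type": "text", "extensions": ["csv"], "delimiter": "," }
      }
    }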




On Tue, Jun 26, 2018 at 11:30 AM, dummy id  wrote:

> Can I get an update on this, please?
>
> On Fri, Jun 15, 2018 at 11:36 AM, dummy id  wrote:
>
> > Team,
> >
> > I am not sure who can help me out with this, so I am adding both of the
> > help communities. I have followed your documentation on setting up Drill,
> > and I am able to query files locally (meaning classpath and dfs), but not
> > with S3. I am not using Amazon S3; instead, I am using S3-compatible
> > storage from Dell EMC. I need your help setting up the storage plugin
> > file so that it uses path-style addressing instead of the normal URL
> > method. Could you kindly share an example core-site.xml file, as well as
> > an example storage plugin file, that use the path-style addressing method
> > to connect to S3?
> >
> > I have tried using fs.s3a.path.style.access with the value true in both
> > the core-site and storage plugin files, but the path-style addressing is
> > still not picked up by Drill, and it again tries to connect using the URL
> > method it uses with Amazon S3. Kindly help.
> >
> > Just an FYI, I have followed the steps from the "Drill in 10 minutes"
> > documentation to install Drill and connect to my S3-compatible storage.
> >
> > Awaiting your reply.
>


Re: Help with Apache Drill - S3 compatible storage connectivity

2018-06-26 Thread dummy id
Can I get an update on this, please?

On Fri, Jun 15, 2018 at 11:36 AM, dummy id  wrote:

> Team,
>
> I am not sure who can help me out with this, so I am adding both of the
> help communities. I have followed your documentation on setting up Drill,
> and I am able to query files locally (meaning classpath and dfs), but not
> with S3. I am not using Amazon S3; instead, I am using S3-compatible
> storage from Dell EMC. I need your help setting up the storage plugin file
> so that it uses path-style addressing instead of the normal URL method.
> Could you kindly share an example core-site.xml file, as well as an example
> storage plugin file, that use the path-style addressing method to connect
> to S3?
>
> I have tried using fs.s3a.path.style.access with the value true in both the
> core-site and storage plugin files, but the path-style addressing is still
> not picked up by Drill, and it again tries to connect using the URL method
> it uses with Amazon S3. Kindly help.
>
> Just an FYI, I have followed the steps from the "Drill in 10 minutes"
> documentation to install Drill and connect to my S3-compatible storage.
>
> Awaiting your reply.


Re: Help with Apache Drill - S3 compatible storage connectivity

2018-06-26 Thread Saurabh Mahapatra
I mean seriously. Dummy id? 

Sent from my iPhone



> On Jun 26, 2018, at 11:30 AM, dummy id  wrote:
> 
> Can I get an update on this, please?
> 
>> On Fri, Jun 15, 2018 at 11:36 AM, dummy id  wrote:
>> 
>> Team,
>>
>> I am not sure who can help me out with this, so I am adding both of the
>> help communities. I have followed your documentation on setting up Drill,
>> and I am able to query files locally (meaning classpath and dfs), but not
>> with S3. I am not using Amazon S3; instead, I am using S3-compatible
>> storage from Dell EMC. I need your help setting up the storage plugin file
>> so that it uses path-style addressing instead of the normal URL method.
>> Could you kindly share an example core-site.xml file, as well as an
>> example storage plugin file, that use the path-style addressing method to
>> connect to S3?
>>
>> I have tried using fs.s3a.path.style.access with the value true in both
>> the core-site and storage plugin files, but the path-style addressing is
>> still not picked up by Drill, and it again tries to connect using the URL
>> method it uses with Amazon S3. Kindly help.
>>
>> Just an FYI, I have followed the steps from the "Drill in 10 minutes"
>> documentation to install Drill and connect to my S3-compatible storage.
>>
>> Awaiting your reply.


Drill error

2018-06-26 Thread Nitin Pawar
Hi,

Can someone help me understand the error below, and how can I prevent it
from happening?

SYSTEM ERROR: IllegalStateException: Tried to remove unmanaged buffer.

Fragment 0:0

[Error Id: bcd510f6-75ee-49a7-b723-7b35d8575623 on
ip-10-0-103-63.ec2.internal:31010]
Caused By: SYSTEM ERROR: IllegalStateException: Tried to remove unmanaged
buffer.

Fragment 0:0

[Error Id: bcd510f6-75ee-49a7-b723-7b35d8575623 on
ip-10-0-103-63.ec2.internal:31010]

-- 
Nitin Pawar