Re: need understanding of drill affinity configurations

2018-07-03 Thread Padma Penumarthy
max.width.per.endpoint - how many minor fragments you can have per major fragment on a single node
global.max.width - max number of minor fragments per major fragment you can have across all nodes
affinity.factor - When deciding how many minor fragments to schedule on each node, this is the
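For orientation, this is a rough sketch of how those options sit together in a drill-override.conf (HOCON syntax), using the values quoted in the question below; the comments paraphrase the explanation above and are not official documentation:

```
drill.exec: {
  work: {
    # minor fragments per major fragment allowed on a single node
    max.width.per.endpoint: 5,
    # cap on minor fragments per major fragment across all nodes
    global.max.width: 100,
    # weight given to data locality when assigning fragments to nodes
    affinity.factor: 1.2,
    executor.threads: 4
  }
}
```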

need understanding of drill affinity configurations

2018-07-03 Thread Divya Gehlot
Hi, I would like to understand the below configurations: work: { max.width.per.endpoint: 5, global.max.width: 100, affinity.factor: 1.2, executor.threads: 4 }, as I am at times getting the below error in some of the Drillbits: 2018-07-03 04:08:45,725 [pool-245-thread-13] INFO

CTAS throwing NPE

2018-07-03 Thread Ranjit Reddy
I have a query that we are using to create a table. The select query runs fine, but when a table is created from the query, it fails with an NPE. I’ve included the query below, the Drillbit log, and the exception thrown in the web UI. Web UI error: Query Failed: An Error Occurred

Re: difference between cast as timestamp and to_timestamp

2018-07-03 Thread Arjun kr
The TO_TIMESTAMP function accepts an epoch timestamp in seconds, whereas casting to Timestamp appears to expect a value in milliseconds. 0: jdbc:drill:> select TO_TIMESTAMP(1530601200049/1000) from (values(1)); ++ | EXPR$0 | ++ |
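The seconds-versus-milliseconds distinction can be checked outside Drill. This is a minimal Python sketch, not Drill code; the epoch value is the one quoted in the thread:

```python
from datetime import datetime, timezone

epoch_ms = 1530601200049  # raw value from the thread

# TO_TIMESTAMP expects seconds, so the thread divides by 1000;
# interpreting the raw value as milliseconds gives a sane 2018 date.
as_utc = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
print(as_utc)  # 2018-07-03 07:00:00.049000+00:00

# Feeding the raw number in directly as seconds lands tens of
# thousands of years out, matching Drill printing year 50472.
```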

Re: How to dynamically add months in Apache drill

2018-07-03 Thread Arjun kr
One way to do this is given below. It requires an interval expression to be passed to the DATE_ADD function. 0: jdbc:drill:> select DATE_ADD(date '2015-05-15', interval '1' month) with_literal, DATE_ADD(date '2015-05-15',cast(concat('P',val,'M') as interval month)) with_column_value from
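For comparison, the same dynamic month arithmetic can be sketched in plain Python. The add_months helper below is hypothetical and not part of Drill; it clamps the day of month, which is a common convention in SQL engines:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Add a whole-month interval to a date, clamping the day
    (e.g. Jan 31 + 1 month -> Feb 28)."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

print(add_months(date(2015, 5, 15), 1))  # 2015-06-15
```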

How to dynamically add months in Apache drill

2018-07-03 Thread Bharani Manickam
Hello, the DATE_ADD function doesn't support a column as the interval argument in Drill queries. We have a requirement to pass a column as an Interval Month to derive a forecasted date. Do you have any workaround for this, please? The requirement is something like this - Query that works select

using Parquet file

2018-07-03 Thread dony.natrajan
Hi there, I need a small suggestion on Apache Drill. I've created a table using the CTAS command in Drill and used it to store a large result set from complex queries to improve performance. However, how do I refresh this table, which was created in Drill, to get the real-time data to any

difference between cast as timestamp and to_timestamp

2018-07-03 Thread Divya Gehlot
Hi, the queries below give me different values:
Query 1: select CAST(1530601200049 AS TimeStamp) from (values(1)); EXPR$0 2018-07-03T07:00:00.049-05:00
Query 2: select TO_TIMESTAMP(1530601200049) from (values(1)); Apache Drill 50472-10-26T11:00:49.000-05:00
Query 3 : select

RE: unable to read timestamp when create CSV file wheras parquet can read

2018-07-03 Thread Lee, David
I think you need to use to_timestamp() instead. Your conversion includes a timezone (-05:00); from_unixtime() looks like it is producing a non-UTC timestamp. -Original Message- From: Divya Gehlot [mailto:divya.htco...@gmail.com] Sent: Tuesday, July 03, 2018 2:06 AM To:

Failure finding Drillbit on host

2018-07-03 Thread Divya Gehlot
Hi, can anybody help me with the cause of the below error:
2018-07-03 04:08:47,783 [pool-245-thread-4] INFO o.a.d.e.s.schedule.BlockMapBuilder - Failure finding Drillbit running on host . Skipping affinity to that host.
2018-07-03 04:08:47,783 [pool-245-thread-13] INFO

Re: unable to read timestamp when create CSV file wheras parquet can read

2018-07-03 Thread Divya Gehlot
Has anybody seen this issue between CSV and parquet files? Really appreciate the help! On Mon, 2 Jul 2018 at 17:51, Divya Gehlot wrote: > Hi, > I am getting an error when I create the CSV files. > The part of the query throwing the error is below; the input data comes in > unix time format >

Re: CTAS AccessControlException

2018-07-03 Thread Divya Gehlot
Thanks Abhishek! Will check and update on it On Mon, 2 Jul 2018 at 23:57, Abhishek Girish wrote: > Hey Divya, > > Here is one way to check if all nodes have the same UID/GID: > > clush -a 'cat /etc/passwd | grep -i user1' > Node1: user1:x:5000:5000::/home/user1:/bin/bash > Node2:

Re: Web console responsiveness under heavy load

2018-07-03 Thread Dave Challis
Many thanks, that's good to know; will move to doing that. On 3 July 2018 at 05:29, Kunal Khatua wrote: > Yes. The reason you see the sluggishness is that the embedded webserver > within the Drillbit also acts as a proxy client, which receives the entire > result set before converting it