Please go ahead and create a JIRA [1] with a description of your environment,
settings, and the CTAS and query which don't work.
Thanks
[1] https://issues.apache.org/jira/projects/DRILL/
Kind regards
Vitalii
On Sat, Mar 24, 2018 at 12:50 PM, Anup Tiwari <anup.tiw...@games24x7.com>
I have not upgraded the Hive version, but I installed Hive 2.3.2 on a server and
tried to read data, and it is working. Is there any workaround to run Drill 1.13
with Hive 2.1, or is upgrading the only option?
On Sat, Mar 24, 2018 3:52 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Sorry
298
Kind regards
Vitalii
On Tue, Mar 20, 2018 at 1:54 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Please find below information :-
> Apache Hadoop 2.7.3
> Apache Hive 2.1.1
> @Vitalii, for testing I can set up an upgraded Hive, but upgrading Hive will
> take time
cess any Hive table. What's your Hive
server version?
On Tue, Mar 20, 2018 at 3:39 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi,
> Please find my reply :-
> Can you do a 'use hive;` followed by 'show tables;' and see if table
> 'cad' is li
) or if this is specific to a certain table /
database in Hive?
-Abhishek
On Tue, Mar 20, 2018 at 2:37 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
Note: Using SHOW DATABASES, I can see the Hive schemas.
On Tue, Mar 20, 2018 2:36 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi,
I am not able to read my Hive tables in Drill 1.13.0, while with the same plugin conf
it was working in Drill 1.12.0 and 1.10.0. Please look into it ASAP and let me
lete before
shutting down2018-03-20 14:25:27,375
[254f337f-9ac3-b66f-ed17-1de459da3283:foreman] INFO
o.apache.drill.exec.work.WorkManager - Waiting for 0 running fragments to
complete before shutting down
Regards,
Anup Tiwari
can try out 1.13.0 and let us all know.
Thanks
Parth
On Sat, Mar 17, 2018 at 11:43 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Thanks Parth for Info. I am really looking forward to it.
> But can you tell me if the second part(about hack) was right or not?
gets killed on
few/all nodes and I am not getting any logs in drillbit.out/drillbit.log.
On Fri, Mar 16, 2018 11:07 PM, Parth Chandra par...@apache.org wrote:
On Fri, Mar 16, 2018 at 8:10 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
Hi All,
I was just going through thi
et rid of the hack when an official fix is available.
To cover the remaining 5% of errors (any other type of error), we advise users
to try again. We also have a built-in retry strategy implemented in our
hourly Python scripts that aggregate data.
Hope it helps
Francois
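The retry strategy Francois describes could be sketched as follows (a minimal illustration; the function and parameter names are not from the original scripts):

```python
import time

def run_with_retry(job, attempts=3, delay=1.0):
    """Run `job` up to `attempts` times, doubling the wait between tries."""
    for i in range(attempts):
        try:
            return job()
        except Exception:
            if i == attempts - 1:
                raise  # out of retries; surface the last error
            time.sleep(delay)
            delay *= 2
```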
onfiguring-drill-memory/
~ Kunal
On Tue, Mar 13, 2018 at 11:41 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi Kunal,
> Please find below cluster/platform details :-
> Number of Nodes : 5
RAM/Node : 32GB
Core/Node : 8
DRILL_MAX_DIRECT_MEMORY="20G
or not.
On Fri, Mar 16, 2018 1:03 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi All,
We are getting a lot of different types of issues/errors after upgrading from Drill
1.10.0 to 1.12.0, which I am asking about on the forum as well, so I just wanted to know
whether downgrading to Drill 1.11.0 will help
(ThreadPoolExecutor.java:1142)
[na:1.8.0_72]at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
[na:1.8.0_72]at java.lang.Thread.run(Thread.java:745) [na:1.8.0_72]
Let me know what to do here.
Regards,
Anup Tiwari
you are doing a partition.
Is the data highly skewed on such a column?
On Wed, Mar 14, 2018 at 1:16 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Also i have observed one thing, the query which is taking time is creating
> ~30-40 fragments and 99.
this as a file ? There seems to be some truncation of the
contents.
Share it using some online service like Google Drive or Dropbox, since the
mailing list might not allow for attachments.
Thanks
~ Kunal
On Tue, Mar 13, 2018 at 11:44 PM, Anup Tiwari <anup.tiw...@games24x7.com>
Also I have observed one thing: the query which is taking time is creating
~30-40 fragments, and 99.9% of the records are getting written into only one
fragment.
On Wed, Mar 14, 2018 1:37 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi Padma,
Please find my highlighted answer w.r.t
nks
Padma
On Mar 12, 2018, at 1:27 AM, Anup Tiwari <anup.tiw...@games24x7.com> wrote:
Hi All,
For the last couple of days I have been stuck on a problem. I have a query which left
joins 3 Drill tables (parquet); it used to take around 15-20 minutes every day,
but
for the last couple of days it is
JSON Profile when Succeeded :-
{"id":{"part1":2690693429455769721,"part2":6509382378722762087},"type":1,"start":1521007764471,"end":1521007906770,"query":"create
table a_games_log_visit_utm as\nselect\ndistinct\nglv.sessionid,\ncase when
(UFG('utms=', glv.url, '&') <> 'null') then UFG('utms=',
Hi Kunal,
Please find below cluster/platform details :-
Number of Nodes : 5
RAM/Node : 32GB
Core/Node : 8
DRILL_MAX_DIRECT_MEMORY="20G"
DRILL_HEAP="8G"
DRILL VERSION = 1.12.0
HADOOP VERSION = 2.7.3
ZOOKEEPER VERSION = 3.4.8 (Installed in Distributed Mode on 3
Hi All,
We are getting the "IllegalReferenceCountException" issue again for a few queries
over the last 2 days, and currently we are on Drill 1.12.0. Can anybody help me
understand the exact reason behind this?
On Thu, Dec 14, 2017 4:52 PM, Anup Tiwari anup.tiw...@gam
Versions(<1.11.0)?
On Mon, Mar 12, 2018 1:59 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi Kunal,
Thanks for the info. I went with option 1 and increased
planner.memory.max_query_memory_per_node, and now queries are working fine. Will
let you know in case of any issues.
On Mon, Mar 12, 2018 2:30 AM, Kunal Khatua
with #1. Going with #2 will risk instability
which is worse than a query failing IMHO.
On Sun, Mar 11, 2018 at 11:56 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
Hi All,
I recently upgraded from 1.10.0 to 1.12.0 and in my one of query I got
below
error :-
,
Anup Tiwari
. Either enable fallback config
drill.exec.hashagg.fallback.enabled using Alter session/system command or
increase memory limit for Drillbit
Can anybody explain the working of the "drill.exec.hashagg.fallback.enabled" variable?
Should we always set it to true, given it is false by default?
Regards,
Anup Tiwari
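For reference, the command the error message above suggests uses Drill's standard option syntax; a sketch (whether enabling the fallback is preferable to raising the memory limit depends on your workload, as discussed in this thread):

```sql
-- Enable the hash-agg fallback for the current session only
ALTER SESSION SET `drill.exec.hashagg.fallback.enabled` = true;
-- Or cluster-wide, persisting until reset
ALTER SYSTEM SET `drill.exec.hashagg.fallback.enabled` = true;
```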
Hi Khurram/Arjun,
Anyone got time to look into it?
On Fri, Feb 16, 2018 4:53 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi Arjun,
After posting this reply I found the same answer on the net; I set that
parameter to 30, and then the query worked, but it took a bit more time than expected
s (without formatting) here. And the
error that you see on Drill 1.11.0, so we can try and repro it.
Thanks,
Khurram
From: Anup Tiwari <anup.tiw...@games24x7.com>
Sent: Wednesday, February 14, 2018 3:14:01 AM
To: user@drill.apache
lumns[248],'') as `ICD_PRCDR_VRSN_CD25`, CASE WHEN columns[249] =
> > '' THEN NULL ELSE TO_DATE(columns[249], 'MMdd') END as `PRCDR_DT25`,
> > CASE WHEN columns[250] = '' THEN NULL ELSE CAST(columns[250] as DOUBLE) END
> > as `DOB_DT`, NULLIF(columns[251],'') as `GNDR_CD`, NULLIF(columns[252],'')
> > as `RACE_CD`, NULLIF(columns[253],'') as `CNTY_CD`, NULLIF(columns[254],'')
> > as `STATE_CD`,
> > NULLIF(columns[255],'') as `CWF_BENE_MDCR_STUS_CD`
> > FROM cms.`blair`.`ALL_IP_OS.csv`
> > WHERE columns[58] = '70583' OR columns[62] = '70583' OR columns[66] = '70583' ;
> >
> >
> > On Thu, Mar 24, 2016 at 9:22 AM, Jacques Nadeau <jacq...@dremio.com>
> > wrote:
> >
> > > It would also be good to get the full stack trace. Do you have the JDK or
> > > only the JRE on these machines?
> > > On Mar 24, 2016 5:27 AM, "Edmon Begoli" <ebeg...@gmail.com> wrote:
> > >
> > > > Does anyone know what might be causing this exception:
> > > >
> > > > *Error: SYSTEM ERROR: CompileException: File
> > > >
> >
'org.apache.drill.exec.compile.DrillJavaFileObject[ProjectorGen10.java]',
> > > > Line 7275, Column 17: ProjectorGen10.java:7275: error: code too
> large*
> > > >
> > > > * public void doEval(int inIndex, int outIndex)*
> > > >
> > > > * ^ (compiler.err.limit.code)*
> > > >
> > > >
> > > > *Fragment 0:0*
> > > >
> > > >
> > > > *[Error Id: 687009ec-4d55-443a-9066-218fb3ac8adb on
localhost:31010]
> > > > (state=,code=0)*
> > > >
> > >
> >
>
Regards,
Anup Tiwari
:
Can you share what the error is? Without that, it is anybody's guess on what the
issue is.
-Original Message-
From: Anup Tiwari [mailto:anup.tiw...@games24x7.com]
Sent: Tuesday, February 13, 2018 6:19 AM
To: user@drill.apache.org
Subject: Reading drill(1.10.0) created parquet table
,
Arjun
____
From: Anup Tiwari <anup.tiw...@games24x7.com>
Sent: Wednesday, February 14, 2018 11:49 AM
To: user@drill.apache.org
Subject: Re: S3 Connection Issues
Hi Arjun,
I tried what you said but it's not working and queries are going into ENQUEUED
state. Please find below l
On Feb 13, 2018, at 1:46 AM, Anup Tiwari
<anup.tiw...@games24x7.com<mailto:anup.tiw...@games24x7.com>> wrote:
Hi Padma,
As you have mentioned "Last time I tried, using Hadoop 2.8.1 worked for me", so
have you built Drill with Hadoop 2.8.1? If yes, then can you provide
Hi Team,
I am trying to read a Drill (1.10.0) created parquet table in Hive (2.1.1) using an
external table, and I am getting an error which seems not related to Drill. Just
asking: has anyone tried this? If yes, do we have any best practices/links
for this?
Regards,
Anup Tiwari
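The setup described above generally involves Hive DDL of the following shape (a hedged sketch: the table name, columns, and path are placeholders, and the column types must match what Drill actually wrote into the parquet files; type mismatches are a common source of such errors):

```sql
CREATE EXTERNAL TABLE drill_parquet_tbl (
  sessionid STRING,
  event_ts  BIGINT
)
STORED AS PARQUET
LOCATION 'hdfs://namenode:9000/path/to/drill/table';
```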
From: Anup Tiwari <anup.tiw...@games24x7.com>
Sent: Tuesday, February 13, 2018 12:01 PM
To: user@drill.apache.org
Subject: Re: Unable to setup hive plugin in Drill 1.11.0
Also, I forgot to mention that we are using Drill 1.10 with Hive 2.1 on one of our
cl
keys are correct but using the AWS CLI and downloaded
some of the files, but I’m kind of at a loss as to how to debug. Any
suggestions?
> Thanks in advance,
> — C
Regards,
Anup Tiwari
Sent with Mixmax
: 2.1.1
Apache Hadoop : 2.7.3
So does this mean the issue is with the Hadoop version? As I can see hadoop 2.7.1
related jars in the 3rdparty jars of Drill.
On Tue, Feb 13, 2018 11:33 AM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi Sorabh,
Thanks for reply. We are using below combination :-
Apache Drill
there.
[1]: https://github.com/apache/drill/blob/1.11.0/pom.xml#L51
[2]: https://github.com/apache/drill/pull/
Thanks,
Sorabh
From: Anup Tiwari <anup.tiw...@games24x7.com>
Sent: Monday, February 12, 2018 9:21 AM
To: user@drill.apache.org
S
]
at
org.apache.drill.exec.store.StoragePluginRegistryImpl.create(StoragePluginRegistryImpl.java:345)
~[drill-java-exec-1.11.0.jar:1.11.0]... 45 common frames omitted
On Mon, Feb 12, 2018 9:23 PM, Anup Tiwari anup.tiw...@games24x7.com wrote:
Hi All,
Please find below information :-
Apache Drill Version : 1.11.0
MySQL
ops" : {
  "hive.metastore.uris" : "thrift://prod-hadoop-xxx:9083",
  "hive.metastore.sasl.enabled" : "false",
  "fs.default.name" : "hdfs://prod-hadoop-xxx:9000"
} }
Error :
"result" : "error (unable to create/ update storage)"
Regards,
Anup Tiwari
Sent with Mixmax
nd to have a stack trace for such errors, so it
> helps if you can share that too.
>
> ~Kunal
>
> -Original Message-
> From: Anup Tiwari [mailto:anup.tiw...@games24x7.com]
> Sent: Friday, December 08, 2017 12:35 AM
> To: user@drill.apache.org
> Subject: Re: [1.9.0] :
,
*Anup Tiwari*
On Thu, Dec 7, 2017 at 10:33 PM, Kunal Khatua <kkha...@mapr.com> wrote:
> What is it that you were trying to do when you encountered this?
>
> This is a system error and the message appears to hint that Drill shutdown
> a prematurely and is unable to account
.
Regards,
*Anup Tiwari*
On Mon, Dec 12, 2016 at 5:07 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi Aman,
>
> Sorry for delayed response, since we are executing this query on our
> ~150GB logs and as i have mentioned in trail mail, by executing "removed
> condition
Any updates on this?
Since we migrated to AWS Mumbai, we are not able to connect S3 and
Drill.
On 04-Apr-2017 11:02 PM, "Shankar Mane" wrote:
> Quick question here:
>
> Does s3 plugin support S3 signature version 4 ?
>
> FYI: s3 plugin works in case when region
with every row of the second table, hence do a Cartesian product?
OR
If we just don't specify a join condition, like:
select a.*, b.* from tt1 as a, tt2 b; then will it internally treat this
query as a Cartesian join?
Regards,
*Anup Tiwari*
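Under standard SQL semantics, a comma-separated FROM list with no join predicate does produce a Cartesian product; a quick sketch with SQLite (table names from the question, toy data) illustrates the row count. Whether Drill's planner accepts such a query is a separate question.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE tt1 (x INTEGER)")
cur.execute("CREATE TABLE tt2 (y INTEGER)")
cur.executemany("INSERT INTO tt1 VALUES (?)", [(1,), (2,)])
cur.executemany("INSERT INTO tt2 VALUES (?)", [(10,), (20,), (30,)])
# No join condition: every row of tt1 pairs with every row of tt2
rows = cur.execute("SELECT a.x, b.y FROM tt1 a, tt2 b").fetchall()
print(len(rows))  # 2 rows x 3 rows = 6 combinations
```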
On Mon, May 8, 2017 at 10:00 PM, Zelaine Fong <zf...@mapr.
Thanks Padma, it worked.
Regards,
*Anup Tiwari*
On Wed, Apr 19, 2017 at 1:13 AM, Kunal Khatua <kkha...@mapr.com> wrote:
> Could you also share the profiles for the failed queries as well?
>
>
> Thanks
>
> Kunal
>
>
> From: Padma
0.jar:1.10.0]
at
org.apache.drill.exec.work.fragment.FragmentExecutor.cleanup(FragmentExecutor.java:155)
[drill-java-exec-1.10.0.jar:1.10.0]
... 5 common frames omitted
Regards,
*Anup Tiwari*
exec-1.9.0.jar:1.9.0]
at
org.apache.drill.exec.record.AbstractRecordBatch.next(AbstractRecordBatch.java:162)
~[drill-java-exec-1.9.0.jar:1.9.0]
Regards,
*Anup Tiwari*
On Mon, Mar 6, 2017 at 7:30 PM, John Omernik <j...@omernik.com> wrote:
> Have you tried disabling hash joins or hash
Hi John,
I have tried the above config as well but am still getting this issue.
Please note that we were using similar configuration params for Drill
1.6, where this issue was not occurring.
Anything else I can try?
Regards,
*Anup Tiwari*
On Fri, Mar 3, 2017 at 11:01 PM, Abhishek Girish <a
e frequently, kindly suggest us what is best configuration
for our environment.
Regards,
*Anup Tiwari*
On Thu, Mar 2, 2017 at 1:26 AM, John Omernik <j...@omernik.com> wrote:
> Another thing to consider is ensure you have a Spill Location setup, and
> then disable hashagg/hashjoin fo
Hi,
Can someone look into it? We are now getting this more frequently in
ad-hoc queries as well.
For automation jobs, we are moving to Hive, as in Drill we are getting
this more frequently.
Regards,
*Anup Tiwari*
On Sat, Dec 31, 2016 at 12:11 PM, Anup Tiwari <anup.tiw...@games24x7.
S_ACCURATE'='{\"BASIC_STATS\":\"true\"}',
'numFiles'='3',
'numRows'='1993254',
'rawDataSize'='1143386232',
'totalSize'='69876827',
'transient_lastDdlTime'='1486640969');
Regards,
*Anup Tiwari*
On Sat, Jan 21, 2017 at 1:04 PM, Chunhui Shi <c...@mapr.com> wrote:
> I gues
Can you point me to any specific line or sentence in that link?
Also, please correct me if I am misinterpreting, but as written in the first
line, "*Drill 1.1 and later supports Hive 1.0*", does that mean Drill 1.1 and later
doesn't support, or only partially supports, Hive 2.x?
Regards,
*Anup Tiwar
ing in metastore?
Regards,
*Anup Tiwari*
On Sat, Jan 21, 2017 at 4:56 AM, Zelaine Fong <zf...@mapr.com> wrote:
> The stack trace shows the following:
>
> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException:
> java.io.IOException: Failed to get numRows from HiveTable
~[drill-hive-exec-shaded-1.9.0.jar:1.9.0]
at
org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$FileGenerator.call(OrcInputFormat.java:620)
~[drill-hive-exec-shaded-1.9.0.jar:1.9.0]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
~[na:1.8.0_72]
... 3 common frames omitted
Regards,
*Anup Tiwari*
On Thu, Jan
org/docs/apache-drill-contribution-ideas/> which says
that we have to create custom storage plugin to read ORC format tables. So
can you tell me how to create custom storage plugin in this case?
Regards,
*Anup Tiwari*
On Thu, Jan 19, 2017 at 1:55 PM, Nitin Pawar <nitinpawar...@gmail.com>
w
+Dev
Can someone help me in this?
Regards,
*Anup Tiwari*
On Sun, Jan 15, 2017 at 2:21 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi Team,
>
> Can someone tell me how to configure custom storage plugin in Drill for
> accessing hive ORC tables?
>
> Thanks
Hi Team,
Can someone tell me how to configure custom storage plugin in Drill for
accessing hive ORC tables?
Thanks in advance!!
Regards,
*Anup Tiwari*
Hi,
We are getting this issue a bit more frequently. Can someone please look into
it and tell us why it is happening, since, as mentioned in an earlier mail,
when this query gets executed no other query is running at that time.
Thanks in advance.
Regards,
*Anup Tiwari*
On Sat, Dec 24, 2016 at 10:20
ay be interested in metrics. More info here:
http://drill.apache.org/docs/monitoring-metrics/ <
http://drill.apache.org/docs/monitoring-metrics/>
Thank you,
Sudheesh
> On Dec 21, 2016, at 4:31 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
>
> @sudheesh, yes drill bit i
@sudheesh, yes, the drillbit is running on datanodeN/10.*.*.5:31010.
Can you tell me how this will impact the query, and do I have to set this at
session level or system level?
Regards,
*Anup Tiwari*
On Tue, Dec 20, 2016 at 11:59 PM, Chun Chang <cch...@maprtech.com> wrote:
> I am pr
ed(NioEventLoop.java:468)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.ja
ou tell me in which scenarios we throw
"IllegalReferenceCountException" and how to handle this in different
scenarios?
Regards,
*Anup Tiwari*
On Thu, Dec 8, 2016 at 10:55 PM, Aman Sinha <amansi...@apache.org> wrote:
> Hi Anup,
> since your original query was working on 1.6 and
ked
for me, but when I execute those removed conditions alone in a CTAS, it got
executed successfully.
Regards,
*Anup Tiwari*
On Wed, Dec 7, 2016 at 12:22 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi Team,
>
> I am getting below 2 error in my one of the query which was working f
roupInformation.doAs(UserGroupInformation.java:1657)
~[hadoop-common-2.7.1.jar:na]
at
org.apache.drill.exec.work.fragment.FragmentExecutor.run(FragmentExecutor.java:226)
[drill-java-exec-1.9.0.jar:1.9.0]
... 4 common frames omitted
Regards,
*Anup Tiwari*
error-cannot-open-channel-to-x-at-election-address
Regards,
*Anup Tiwari*
Loop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
Can anyone suggest any workaround or fix in these scenarios?
Regards,
*Anup Tiwari*
On Mon, Oct 17, 2016 at 10:42 PM, Abhishek Girish <abhishek.gir...@gmail.com
>
NioEventLoop.java:468)
at
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
Regards,
*Anup Tiwari*
-Oct-2016 11:35 AM, "Nitin Pawar" <nitinpawar...@gmail.com> wrote:
is there an option where you can upgrade to 1.8 and test it?
On Sat, Oct 15, 2016 at 10:23 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> No.. on a parquet table..
>
> Regards,
> *Anup Tiw
No.. on a parquet table..
Regards,
*Anup Tiwari*
On Fri, Oct 14, 2016 at 6:23 PM, Nitin Pawar <nitinpawar...@gmail.com>
wrote:
> are you querying on csv files?
>
> On Fri, Oct 14, 2016 at 1:31 PM, Anup Tiwari <anup.tiw...@games24x7.com>
> wrote:
>
> > Hi
.java:250)
at
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
at java.lang.Thread.run(Thread.java:745)
Also, when I try to exclude empty strings, i.e. *col_name <> ''*, it
is excluding null values as well.
Regards,
*Anup Tiwari*
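The behavior described above is standard SQL three-valued logic: `col_name <> ''` evaluates to NULL (not true) for NULL values, so those rows are filtered out too. A small SQLite sketch (illustrative column name) shows the effect and one common workaround:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE t (col_name TEXT)")
cur.executemany("INSERT INTO t VALUES (?)", [("a",), ("",), (None,)])

# NULL <> '' is NULL, which is not true, so the NULL row disappears
only_a = cur.execute("SELECT col_name FROM t WHERE col_name <> ''").fetchall()

# Keep NULLs while dropping empty strings by testing NULL explicitly
with_nulls = cur.execute(
    "SELECT col_name FROM t WHERE col_name <> '' OR col_name IS NULL"
).fetchall()
```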
Also, please note that I have tried the below in every node's drill-env.sh but it's
not working.
export DRILL_LOG_DIR="hdfs://namenode:9000/tmp/drilllogs/"
Regards,
*Anup Tiwari*
On Fri, Aug 26, 2016 at 4:06 PM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi All,
>
> We
Hi All,
We are trying to move drill logs directory from local file system to HDFS
so that we can refer only one location rather than each node's log
directory.
Can anyone help me on this?
Regards,
*Anup Tiwari*
Thanks for the link, but until then is there any other way, or a way to read
ZooKeeper logs in Drill, as we show on the profile UI?
Regards,
*Anup Tiwari*
Software Engineer(BI-Team),PlayGames24x7 Pvt Ltd
On Fri, Aug 19, 2016 at 6:40 PM, Khurram Faraaz <kfar...@maprtech.com>
e to get the whole picture of your
cluster. Even that, you will not see running queries.
Hope this helps.
On Thu, Aug 18, 2016 at 12:34 AM, Anup Tiwari <anup.tiw...@games24x7.com>
wrote:
> Hi All,
>
> We want to see all types of queries which ran on drill cluster or
currently
> ru
Hi All,
We have a column in a table in which the date-time comes in the below format:
Thu Jun 09 2016 17:00:25 GMT+0530 (IST)
We want to extract the date-time in "yyyy-MM-dd hh:mm:ss" (2016-06-09
17:00:25) format.
As far as I know, there is no built-in function to achieve this.
Kindly let me
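One workaround, assuming the strings always look exactly like the example above, is to do the conversion outside Drill (or in a UDF); a Python sketch:

```python
from datetime import datetime

def to_standard(ts):
    """Convert 'Thu Jun 09 2016 17:00:25 GMT+0530 (IST)' to 'yyyy-MM-dd HH:mm:ss'."""
    core = ts.split(" GMT")[0]  # drop the timezone tail
    dt = datetime.strptime(core, "%a %b %d %Y %H:%M:%S")
    return dt.strftime("%Y-%m-%d %H:%M:%S")

print(to_standard("Thu Jun 09 2016 17:00:25 GMT+0530 (IST)"))  # 2016-06-09 17:00:25
```

Inside Drill itself, a combination of SUBSTR and TO_TIMESTAMP with a Joda-style pattern such as 'EEE MMM dd yyyy HH:mm:ss' should achieve the same, but treat that pattern as an assumption to verify against your Drill version.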
Hi All,
Sometimes I am getting the below error while creating a table in Drill using a
hive table:
*"*java.lang.OutOfMemoryError: Java heap space*"* which in turn kills the
drillbit on one of the nodes where I executed the respective query.
*Query Type :-*
create table glv_abc as select sessionid,
OK team, so it's a bug; please find the JIRA link below:
https://issues.apache.org/jira/browse/DRILL-4474
On 04-Mar-2016 11:23 PM, "Anup Tiwari" <anup.tiw...@games24x7.com> wrote:
> Hi Team,
>
> I am getting different output for same condition in drill.. In 1st query
Can anyone help me on this?
On 21-Jan-2016 11:29 pm, "Anup Tiwari" <anup.tiw...@games24x7.com> wrote:
> @jim I have already followed steps given in that post but its not working.
> On 21-Jan-2016 8:45 pm, "Devender Yadav" <dev@gmail.com> wrote:
>
onfusedcoders.com/bigdata/apache-drill/sql-on-cassandra-querying-cassandra-via-apache-drill
> >
> > On Thu, Jan 21, 2016 at 6:07 AM, Anup Tiwari <anupsdtiw...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am using Drill 1.2 and want to query Cass