Re: Need write access to hive wiki

2019-02-11 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Nita!

*Hive developers:*  Please take notice of requests like this, and make sure
someone replies to them within a few days.  (I'm not monitoring the Hive
mailing lists on a regular basis anymore.)

-- Lefty


On Mon, Jan 28, 2019 at 11:30 AM Nita Dembla  wrote:

> Please grant me write access. I have to update the wiki for a bug. My
> Confluence id is “ndembla”
>
> Thanks,
> Nita.


Re: Wiki Write Access

2019-02-11 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, David!

-- Lefty


On Sun, Feb 10, 2019 at 8:50 PM David M  wrote:

> I realized I mistyped my username. My Confluence username is mcginnisda.
> Please give me write access to the Hive Confluence wiki, or tell me where I
> need to request it.
>
>
>
> Thanks!
>
>
>
> *From:* David M 
> *Sent:* Thursday, February 7, 2019 10:38 AM
> *To:* user@hive.apache.org
> *Subject:* Wiki Write Access
>
>
>
> All,
>
>
>
> I’d like to get wiki write access for the Apache Hive wiki, so I can
> update some documentation based on a recent patch. My confluence name is
> mcginnda.
>
>
>
> Thanks!
>
>
>
> David McGinnis
>
>
>
>
>


Re: Access to hive wiki

2018-12-24 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Johndee!

-- Lefty


On Fri, Dec 21, 2018 at 7:25 AM Johndee Burks  wrote:

> I would like to request access to hive wiki. My username is
> john...@cloudera.com.
>
> --
> - JRB
>


Re: access to the Hive wiki

2018-11-21 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Miklos!

-- Lefty


On Tue, Nov 13, 2018 at 11:48 AM Miklos Gergely 
wrote:

> Hi,
>
> I’d like to request access to the Hive wiki.
> My confluence username is mgergely.
>
> Thanks
>
> *Miklos Gergely*
>
> *Staff Software Engineer, Budapest*
>
> Wesselenyi street 16/B (6. Floor)
>
> 1077 Budapest
>
>
>
> This email and any files transmitted with it are confidential and intended
> solely for the use of the individual or entity to whom they are addressed.
> If you have received this email in error please notify the system manager.
> This message contains confidential information and is intended only for the
> individual named. If you are not the named addressee you should not
> disseminate, distribute or copy this e-mail. Please notify the sender
> immediately by e-mail if you have received this e-mail by mistake and
> delete this e-mail from your system. If you are not the intended recipient
> you are notified that disclosing, copying, distributing or taking any
> action in reliance on the contents of this information is strictly
> prohibited.
>
>


How to unsubscribe from the Hive user list

2018-09-22 Thread Lefty Leverenz
People often send "unsubscribe" messages to this list (user@hive.apache.org)
but that won't get them off the list.

To unsubscribe, you have to send a message to *user-unsubscribe*@hive.apache.org
as described here:  Mailing Lists.

Thanks and sorry to see you go.

-- Lefty


Re: Hive wiki write access

2018-09-01 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Max!

-- Lefty


On Thu, Aug 23, 2018 at 5:20 AM Max Efremov  wrote:

> Hello.
> I started working on https://issues.apache.org/jira/browse/HIVE-20447 and I
> need write access to the Hive wiki page
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients
> to add a new output format.
> My  Confluence username is tmin10.
>


Re: Error Starting hive thrift server, hive 3 on Hadoop 3.1

2018-07-05 Thread Lefty Leverenz
Mich, here's how you can get write access to the Hive wiki:  About This
Wiki -- How to get permission to edit.

-- Lefty


On Wed, Jul 4, 2018 at 2:59 AM Mich Talebzadeh 
wrote:

> Sure will do
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn:
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>
>
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
> On Tue, 3 Jul 2018 at 23:35, Jörn Franke  wrote:
>
>> It would be good if you can document this on the Hive wiki so that other
>> users know it.
>>
>> On the other hand there is Apache Bigtop which tests integration of
>> various Big Data components  - but it is complicated. Behind a big data
>> distribution there is a lot of effort.
>>
>> On 3. Jul 2018, at 23:08, Mich Talebzadeh 
>> wrote:
>>
>> Resolved this by getting rid of the HADOOP_CLASSPATH entry that I had added
>> to make HBase 2 work with Hadoop 3.1. It did not help, and I had to revert
>> to HBase 1.2.6, but that classpath was left in the env file.
>>
>> This is becoming a retrofitting exercise, where making one artefact work
>> with Hadoop impacts other artefacts and results in an unnecessary waste of
>> time.
>> HTH
>>
>> Dr Mich Talebzadeh
>>
>> On Tue, 3 Jul 2018 at 17:48, Mich Talebzadeh 
>> wrote:
>>
>>> This is hive 3 on Hadoop 3.1
>>>
>>> I am getting this error in a loop
>>>
>>> 2018-07-03 17:43:44,929 INFO  [main] SessionState: Hive Session ID =
>>> 5f38c8a3-f269-42e0-99d8-9ddff676f009
>>> 2018-07-03 17:43:44,929 INFO  [main] server.HiveServer2: Shutting down
>>> HiveServer2
>>> 2018-07-03 17:43:44,929 INFO  [main] server.HiveServer2:
>>> Stopping/Disconnecting tez sessions.
>>> 2018-07-03 17:43:44,930 WARN  [main] server.HiveServer2: Error starting
>>> HiveServer2 on attempt 5, will retry in 6ms
>>> java.lang.NoSuchMethodError:
>>> org.apache.hadoop.tracing.TraceUtils.wrapHadoopConf(Ljava/lang/String;Lorg/apache/hadoop/conf/Configuration;)Lorg/apache/htrace/HTraceConfiguration;
>>>
>>> Any ideas?
>>>
>>>
>>> Dr Mich Talebzadeh
>>>
>>


Re: Request Edit Permission to Apache Hive Confluence Page

2018-06-21 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Alice!

-- Lefty


On Tue, Jun 19, 2018 at 6:32 PM Yuneng Fan  wrote:

> Hello there,
> I would like to request Edit Permission to Apache Hive Confluence Page.
> Confluence ID = afan
>
> Thanks,
> Alice
>


How to unsubscribe

2018-06-13 Thread Lefty Leverenz
Many people try to unsubscribe from this mailing list by sending a message
directly to the list.  That doesn't work.

To unsubscribe, you have to send a message (any message) to the automated
unsubscribe address:

user-unsubscr...@hive.apache.org


as described here:  Hive Mailing Lists.


Since it's automated, the message must be sent using the email account that
is subscribed to the Hive user mailing list.

-- Lefty


Re: Unsubscribe

2018-06-09 Thread Lefty Leverenz
Xiaobin She, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.  Thanks.


-- Lefty


On Sat, Jun 9, 2018 at 11:07 AM Xiaobin She  wrote:

> Unsubscribe
>


Re: Write access to wiki

2018-06-05 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Luis!

-- Lefty


On Tue, Jun 5, 2018 at 9:39 AM Luis Figueroa  wrote:

> Hi, may I be granted write access to the wiki? I'd like to help keeping up
> with updates.
>
> My confluence user name is luisefigueroa
>
> Cheers,
> Luis
>
>


Re: Re: Edit wiki of hive permissions

2018-06-05 Thread Lefty Leverenz
You now have edit permissions on the wiki.  Welcome to the Hive wiki
team, Chuikai You!

-- Lefty


On Tue, Jun 5, 2018 at 2:24 AM yo...@jpush.cn  wrote:

> I'm sorry, the correct account is *youchuikai*.
> Thank you.
> --
> yo...@jpush.cn
>
>
> *From:* Lefty Leverenz 
> *Date:* 2018-06-05 10:46
> *To:* youck 
> *CC:* user 
> *Subject:* Re: Edit wiki of hive permissions
> The permissions page doesn't recognize youchui...@163.com as a Confluence
> user.  Do you have a Confluence account?
>
> See About This Wiki -- How to get permission to edit
> <https://cwiki.apache.org/confluence/display/Hive/AboutThisWiki#AboutThisWiki-Howtogetpermissiontoedit>.
>
> -- Lefty
>
>
> On Sun, Jun 3, 2018 at 10:20 PM yo...@jpush.cn  wrote:
>
>> Lefty:
>> can you please provide edit permissions to youchui...@163.com for
>> the Hive wiki? I want to do something for the Hive community.
>> Thank you.
>> --
>> yo...@jpush.cn
>>
>


Re: Edit wiki of hive permissions

2018-06-04 Thread Lefty Leverenz
The permissions page doesn't recognize youchui...@163.com as a Confluence
user.  Do you have a Confluence account?

See About This Wiki -- How to get permission to edit.

-- Lefty


On Sun, Jun 3, 2018 at 10:20 PM yo...@jpush.cn  wrote:

> Lefty:
> can you please provide edit permissions to youchui...@163.com for the
> Hive wiki? I want to do something for the Hive community.
> Thank you.
> --
> yo...@jpush.cn
>


Re: Edit Permissions

2018-05-19 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Jörn.


-- Lefty


On Thu, May 17, 2018 at 9:35 AM Jörn Franke  wrote:

> Hi,
>
> can you please provide edit permissions to jornfranke for the Hive wiki.
>
> I want to add some more documentation about Hive SerDe.
>
> thank you.
>
> best regards
>


Re: Unsubscribe

2018-05-13 Thread Lefty Leverenz
Rahul, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.

Thanks.

-- Lefty


On Mon, May 7, 2018 at 4:22 PM Rahul Channe  wrote:

>


Re: Unsubscribe

2018-05-13 Thread Lefty Leverenz
Beth, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.

Thanks.

-- Lefty


On Mon, May 7, 2018 at 4:52 PM Beth Lee  wrote:

>
>


Re: Unsubscribe

2018-05-13 Thread Lefty Leverenz
Roger, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.

Thanks.

-- Lefty


On Tue, May 8, 2018 at 12:49 AM Roger Baatjes  wrote:

>
>


Re: Unsubscribe

2018-05-13 Thread Lefty Leverenz
Dheena, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.

Thanks.

-- Lefty


On Wed, May 9, 2018 at 2:19 AM Dheena Dhayalan  wrote:

>
>


Re: What does the ORC SERDE do

2018-05-13 Thread Lefty Leverenz
Jörn, please do update the wiki, we really need better SerDe documentation.

Getting write access is easy:

About This Wiki -- How to get permission to edit.



-- Lefty


On Sun, May 13, 2018 at 10:18 AM Jörn Franke  wrote:

> AbstractSerDe has a method to return very basic stats related to your file
> format (mostly the size of the data, number of rows, etc.):
>
>
> https://github.com/apache/hive/blob/master/serde/src/java/org/apache/hadoop/hive/serde2/SerDeStats.java
>
> In the initialize method of your SerDe you can retrieve properties related to
> partitions and include this information in your file format, if needed (you
> don't need to create folders etc. for partitions; this is done by Hive).
>
>
> On 13. May 2018, at 19:09, Elliot West  wrote:
>
> Hi Jörn,
>
> I’m curious to know how the SerDe framework provides the means to deal
> with partitions, table properties, and statistics? I was under the
> impression that these were in the domain of the metastore and I’ve not
> found anything in the SerDe interface related to these. I would appreciate
> if you could point me in the direction of anything I’ve missed.
>
> Thanks,
>
> Elliot.
>
> On Sun, 13 May 2018 at 15:42, Jörn Franke  wrote:
>
>> For the details you can check the source code, but in short a SerDe needs
>> to translate an object to a Hive object and vice versa. Usually this is
>> very simple (simply passing the object through, or creating a HiveDecimal,
>> etc.). It also provides an ObjectInspector that describes an object in more
>> detail, e.g. so it can be processed by a UDF; for example, it can tell you
>> the precision and scale of an object. In the case of ORC it also describes
>> how a bunch of objects (vectorized) can be mapped to Hive objects and the
>> other way around. Furthermore, it provides statistics and means to deal
>> with partitions as well as table properties (!= input/output format
>> properties). Although it sounds complex, Hive provides most of the
>> functionality, so implementing a SerDe is usually easy.
>>
>> > On 13. May 2018, at 16:34, 侯宗田  wrote:
>> >
>> > Hello,everyone
>> >   I know the JSON SerDe turns the fields in a row into JSON format, and
>> > the CSV SerDe turns them into CSV format, via their SerDe properties. But
>> > I wonder what the ORC SerDe does when I choose the ORC storage format,
>> > and why there are still escape and separator characters in the ORC SerDe
>> > properties. The same applies to RC and Parquet. I think these formats are
>> > just about how data is stored and compressed by their input and output
>> > formats respectively, but I don't know what their SerDes do. Can anyone
>> > give some hints?
>>
>


Re: Unsubscribe

2018-04-09 Thread Lefty Leverenz
Sajitha, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.

Thanks.

-- Lefty


On Fri, Apr 6, 2018 at 2:55 PM Sritharan, Sajitha (S.) 
wrote:

>
>
>
>
> Thanks & Regards,
>
> Sajitha Sritharan
>
>
>


Re: Hive Druid SQL

2018-04-09 Thread Lefty Leverenz
>  Does it mean, I cannot use SQLserver as Druid metastore for Hive to work
with Druid?

Apparently so.

   - In Hive 2.2.0 *hive.druid.metadata.db.type* was introduced with the
   values "mysql" and "postgres" (HIVE-15277).
   - In Hive 2.3.0 the value "postgres" was changed to "postgresql"
   (HIVE-15809).
   - In Hive 3.0.0 (upcoming release) the value "derby" is added (HIVE-18196).
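
For reference, here is a sketch of what a working session setting could look
like in Beeline on Hive 2.3.x with a PostgreSQL-backed Druid metadata store
(the URI, host, and credentials are hypothetical examples, and the companion
property names come from the same hive.druid.metadata.* family):

```sql
-- "postgresql" is a valid value on Hive 2.3.0+; "sqlserver" is not accepted.
SET hive.druid.metadata.db.type=postgresql;
SET hive.druid.metadata.uri=jdbc:postgresql://metadata-host:5432/druid;
SET hive.druid.metadata.username=druid;   -- hypothetical credentials
SET hive.druid.metadata.password=druidpw;
```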

-- Lefty


On Fri, Apr 6, 2018 at 10:09 AM Amit  wrote:

> Hive Druid Integration:
> I have Hive and Druid working independently.
> But having trouble connecting the two together.
> I don't have Hortonworks.
>
> I have Druid using sqlserver as metadata store database.
>
> When I try setting this property in Beeline,
>
> set hive.druid.metadata.db.type=sqlserver;
>
>  I get a message:
> Error: Error while processing statement: 'SET
> hive.druid.metadata.db.type=sqlserver' FAILED in
> validation : Invalid value.. expects one of patterns [mysql, postgres].
> (state=42000,code=1)
>
> Does it mean, I cannot use SQLserver as Druid metastore for Hive to work
> with Druid?
>
>
>
>


Re: Write Acess to Hive Wiki

2018-02-24 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Madhudeep!

-- Lefty


On Tue, Feb 20, 2018 at 9:24 PM, Madhudeep petwal <
madhudeep11pet...@gmail.com> wrote:

> Hi,
>
> My Jira has been merged. Now I need to edit the website for
> documentation.
> Please provide write access.
>
> Confluence UserName - madhudeep.petwal
>
> Thanks
> Madhudeep Petwal
>


Re:

2017-12-07 Thread Lefty Leverenz
To unsubscribe, please send a message to user-unsubscr...@hive.apache.org as
described here:  Mailing Lists.

Thanks.

-- Lefty


On Wed, Nov 29, 2017 at 5:17 AM, Zhenyi Zhao  wrote:

> Unsubscribe
>


Re: write access to the Hive wiki

2017-11-25 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Sourygna!

-- Lefty


On Thu, Nov 23, 2017 at 5:23 AM, Luangsay Sourygna 
wrote:

> Hi,
>
> Could you please give access to my user (sourygna) ?
>
> Thanks,
>
> Sourygna
>


Re: Wiki access request

2017-10-27 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Akira!

-- Lefty



On Thu, Oct 26, 2017 at 10:15 PM, Akira Ajisaka  wrote:

> Hi,
>
> Would you grant me edit permissions for Hive confluence wiki?
> My account id is "aajisaka".
>
> Thanks,
> Akira
>


Re: can i have write privilege to Hive Wiki

2017-10-19 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Slim!

-- Lefty



On Wed, Oct 18, 2017 at 11:04 AM, Slim Bouguerra 
wrote:

> User name is bslim and address is bs...@apache.org
> thanks
>
>


Re: Wiki Access Request

2017-10-16 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Jay!

-- Lefty


On Mon, Oct 16, 2017 at 4:42 AM, Jay Green-Stevens <
t-jgreenstev...@hotels.com> wrote:

> I would like to request write access for the Hive wiki; my user name is
> jgreenstevens.
>
>
>
> Thanks,
>
>
>
> Jay Green-Stevens
>
>
>


Re: Wiki Access Request

2017-10-13 Thread Lefty Leverenz
Jay, first you need a Confluence username as described here:  About This
Wiki -- How to get permission to edit

.

-- Lefty


On Fri, Oct 13, 2017 at 11:33 AM, Jay Green-Stevens <
t-jgreenstev...@hotels.com> wrote:

> Hi,
>
>
>
> I would like to request write access to the Hive wiki page.
>
>
>
> Thanks,
>
> Jay Green-Stevens
>


Re: Hive query starts own session for LLAP

2017-09-26 Thread Lefty Leverenz
Thanks for the explanations of "all" and "only", Sergey.  I've added them to
the wiki, with minor edits:  hive.llap.execution.mode.

Now we need an explanation of "map" -- can you supply it?
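
For reference, the mode can be inspected and changed per session in Beeline
(a sketch; the valid values of this property are auto, none, all, map, and
only):

```sql
SET hive.llap.execution.mode;       -- show the current value
SET hive.llap.execution.mode=only;  -- fail queries that cannot run in LLAP
```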

-- Lefty


On Mon, Sep 25, 2017 at 11:33 AM, Sergey Shelukhin 
wrote:

> Hello.
> Hive would create a new Tez AM to coordinate the query (or use an existing
> one if HS2 session pool is used). However, the YARN app for Tez should
> only have a single container. Is this not the case?
> If it’s running additional containers, what is hive.llap.execution.mode
> set to? It should be set to all or only by default (“all” means run
> everything in LLAP if at all possible; “only” is the same with fallback to
> containers disabled - so the query would fail if it cannot run in LLAP).
>
> From:  Rajesh Narayanan  on behalf of
> Rajesh Narayanan 
> Reply-To:  "user@hive.apache.org" 
> Date:  Friday, September 22, 2017 at 11:59
> To:  "user@hive.apache.org" 
> Subject:  Hive query starts own session for LLAP
>
>
> HI All,
> When I execute the Hive query, it starts its own session and creates
> new YARN jobs rather than using the LLAP-enabled ones.
> Can you please provide some suggestions?
>
> Thanks
> Rajesh
>
>


Re: unsubscribe

2017-09-22 Thread Lefty Leverenz
Sajitha, to unsubscribe please send messages to
user-unsubscr...@hive.apache.org and dev-unsubscr...@hive.apache.org as
described here:  Mailing Lists.
Thanks.

-- Lefty


On Fri, Sep 22, 2017 at 8:41 AM, Sritharan, Sajitha (S.) 
wrote:

> Please unsubscribe my id from these groups
>
>
>
> Thanks & Regards,
>
> Sajitha Sritharan
>
>
>


Re: Wiki Edit Privileges

2017-08-11 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Karen!

-- Lefty


On Thu, Aug 10, 2017 at 10:26 AM, Ren Coppage 
wrote:

> Hi,
>
> I would like to request write access to the Hive Wiki. My Confluence
> username is klcopp.
>
> Thanks very much!
> Karen Coppage
>


Re: request for wiki write permission

2017-07-19 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Andrew!

-- Lefty


On Wed, Jul 19, 2017 at 11:22 AM, Andrew Sherman 
wrote:

> For user asherman
>
> Thanks
>
> -Andrew
>


Re: wiki access permission

2017-07-15 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Anubhav!

-- Lefty


On Sat, Jul 15, 2017 at 2:44 AM, Anubhav Tarar 
wrote:

> Hi,i need wiki access permission
>
>  Confluence username:anubhav tarar
>
> --
> Thanks and Regards
>
> *   Anubhav Tarar *
>
>
> * Software Consultant*
>   *Knoldus Software LLP    *
>LinkedIn  Twitter
> fb 
>   mob : 8588915184
>


Re: Controlling Number of small files while inserting into Hive table

2017-06-26 Thread Lefty Leverenz
Saquib Khan, to unsubscribe you need to send a message to
user-unsubscr...@hive.apache.org as described here:  Mailing Lists.


Thanks.

-- Lefty

On Sun, Jun 25, 2017 at 7:14 PM, saquib khan  wrote:

> Please remove me from the user list.
>
> On Sun, Jun 25, 2017 at 5:10 PM Db-Blog  wrote:
>
>> Hi Arpan,
>> Include the partition column in the DISTRIBUTE BY clause of the DML; it
>> will generate only one file per day. Hope this will resolve the issue.
>>
>> "insert into 'target_table' select a,b,c from x where ... distribute by
>> (date)"
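>>
>> As a sketch (table and column names are hypothetical; DISTRIBUTE BY routes
>> all rows with the same date value to one reducer, so each partition gets
>> one output file):
>>
>> ```sql
>> INSERT INTO target_table PARTITION (load_date)
>> SELECT a, b, c, load_date
>> FROM x
>> WHERE ...
>> DISTRIBUTE BY load_date;
>> ```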
>>
>> PS: Backdated processing will generate additional file(s). One file per
>> load.
>>
>> Thanks,
>> Saurabh
>>
>> Sent from my iPhone, please avoid typos.
>>
>> On 22-Jun-2017, at 11:30 AM, Arpan Rajani 
>> wrote:
>>
>> Hello everyone,
>>
>>
>> I am sure many of you might have faced similar issue.
>>
>> We do "insert into 'target_table' select a,b,c from x where .." kind of
>> queries for a nightly load. This insert goes in a new partition of the
>> target_table.
>>
>> Now the concern is: *this insert loads hardly any data* (I would say
>> less than 128 MB per day) *but the data is fragmented into 1200 files*,
>> each file a few kilobytes. This is slowing down performance. How can we
>> make sure this load does not generate lots of small files?
>>
>> I have already set *hive.merge.mapfiles* and *hive.merge.mapredfiles* to
>> true in the custom/advanced hive-site.xml. But the load job still writes
>> the data in 1200 small files.
>>
>> I know where 1200 comes from: it is the maximum number of
>> reducers/containers configured in one of the hive-site files. (I do not
>> think it is a good idea to change this number with a cluster-wide setting,
>> as it can affect other jobs that use the cluster when it has free
>> containers.)
>>
>> *What other ways or settings would keep the Hive insert from taking
>> 1200 slots and generating lots of small files?*
>>
>> I also have another question which is partly contrary to the above (this is
>> relatively less important):
>>
>> When I reload this table by creating a new table from a select on the
>> target table, the newly created table does not contain many small files:
>> its number of files drops from 1200 to about 50. What could be the reason?
>>
>> PS: I did go through
>> http://www.openkb.info/2014/12/how-to-control-file-numbers-of-hive.html
>>
>>
>> Regards,
>> Arpan
>>
>> The contents of this e-mail are confidential and for the exclusive use of
>> the intended recipient. If you receive this e-mail in error please delete
>> it from your system immediately and notify us either by e-mail or
>> telephone. You should not copy, forward or otherwise disclose the content
>> of the e-mail. The views expressed in this communication may not
>> necessarily be the view held by WHISHWORKS.
>>
>>


Re: Remove subscription form list

2017-06-02 Thread Lefty Leverenz
Maybe you should just try again.  A typo could have blocked your message.

-- Lefty


On Fri, Jun 2, 2017 at 8:18 AM, John Tench <j...@tinca.ca> wrote:

> I didn’t receive any confirmation message. Maybe the server is rejecting
> my message as spam?
>
>
>
> I can’t see how I could be subscribed under two email accounts.
>
>
>
> John
>
> .
>
>
>
> *From:* Lefty Leverenz [mailto:leftylever...@gmail.com]
> *Sent:* June 2, 2017 01:59
> *To:* user@hive.apache.org
> *Subject:* Re: Remove subscription form list
>
>
>
> Did you receive a confirmation message when you sent the unsubscribe
> email?  Could you be subscribed on two different email accounts?
>
>
> -- Lefty
>
>
>
>
>
> On Wed, May 31, 2017 at 9:16 AM, John Tench <j...@tinca.ca> wrote:
>
> Could someone unsubscribe me from this list? I’ve followed the
> instructions for removing my subscription (that is, send a message to
>
> <user-unsubscr...@hive.apache.org>), but this isn’t working.
>
>
>
> Much appreciated.
>
>
>
> John.
>
>
>


Re: Remove subscription form list

2017-06-01 Thread Lefty Leverenz
Did you receive a confirmation message when you sent the unsubscribe
email?  Could you be subscribed on two different email accounts?

-- Lefty


On Wed, May 31, 2017 at 9:16 AM, John Tench  wrote:

> Could someone unsubscribe me from this list? I’ve followed the
> instructions for removing my subscription (that is, send a message to:
>
> but this isn’t working.
>
>
>
> Much appreciated.
>
>
>
> John.
>


Re: Welcome Rui Li to Hive PMC

2017-05-24 Thread Lefty Leverenz
Congratulations!

-- Lefty

On Thu, May 25, 2017 at 12:40 AM, Chao Sun  wrote:

> Congratulations Rui!!
>
> On Wed, May 24, 2017 at 9:19 PM, Xuefu Zhang  wrote:
>
> > Hi all,
> >
> > It's an honor to announce that the Apache Hive PMC has recently voted to
> invite
> > Rui Li as a new Hive PMC member. Rui is a long time Hive contributor and
> > committer, and has made significant contribution in Hive especially in
> Hive
> > on Spark. Please join me in congratulating him and looking forward to a
> > bigger role that he will play in Apache Hive project.
> >
> > Thanks,
> > Xuefu
> >
>


Re: Jimmy Xiang now a Hive PMC member

2017-05-24 Thread Lefty Leverenz
Congratulations!

-- Lefty

On Thu, May 25, 2017 at 12:41 AM, Chao Sun  wrote:

> Congratulations Jimmy!!
>
> On Wed, May 24, 2017 at 9:16 PM, Xuefu Zhang  wrote:
>
>> Hi all,
>>
>> It's an honor to announce that the Apache Hive PMC has recently voted to
>> invite
>> Jimmy Xiang as a new Hive PMC member. Please join me in congratulating him
>> and looking forward to a bigger role that he will play in Apache Hive
>> project.
>>
>> Thanks,
>> Xuefu
>>
>
>


Re: Request write access to Hive wiki

2017-04-04 Thread Lefty Leverenz
You have write access now, Sankar.  Welcome to the Hive wiki team!

-- Lefty


On Mon, Apr 3, 2017 at 11:15 PM, Sankar Hariappan <
shariap...@hortonworks.com> wrote:

> Hi,
>
> I’m currently working on Hive Replication feature and need access to
> update some wiki pages.
> Confluence ID: sankarh
>
> Best regards
> Sankar
>


Re: Edit permissions for hive wiki

2017-03-20 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Sunitha!

-- Lefty


On Mon, Mar 20, 2017 at 8:00 PM, Sunitha Beeram 
wrote:

> Hi,
>
> Could you please grant me edit permissions for Hive Wiki? My confluence
> user name is sbeeram.
>
> Thanks,
> Sunitha
>
>


Re: Request write access to the Hive wiki

2017-03-01 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Vineet!

-- Lefty


On Wed, Mar 1, 2017 at 5:32 PM, Vineet Garg  wrote:

> Hello,
>
> I would like to get permissions to modify Hive wiki.
>
> Username: *vgarg*
> email: *vg...@hortonworks.com *
>
> Thanks,
> Vineet Garg
>


Re: Request for write access to the Hive wiki

2017-02-08 Thread Lefty Leverenz
You've got it.  Welcome to the Hive wiki team, Nanda!

-- Lefty


On Wed, Feb 8, 2017 at 8:31 PM, Nandakumar Vadivelu <
nvadiv...@hortonworks.com> wrote:

> Hi All,
>
>
>
> As part of HIVE-15792,
> the Hive wiki has to be updated. Requesting write access to update the wiki.
>
>
>
> Confluence username: nandakumar131
>
>
>
>
>
> Thanks,
>
> Nanda
>
>
>


Re: edit wiki permissions

2017-02-03 Thread Lefty Leverenz
Okay, done.

-- Lefty

On Fri, Feb 3, 2017 at 8:15 PM, Anishek Agarwal <anis...@gmail.com> wrote:

> Hey Lefty
>
> Thanks for giving me the permissions. Please remove access for  "anishek
> (anagarwal)".
>
> Regards,
> Anishek
>
> On Sat, Feb 4, 2017 at 6:45 AM Lefty Leverenz <leftylever...@gmail.com>
> wrote:
>
>> (I misspelled "anishek" two messages ago but got it right on the
>> permissions page.)
>>
>> -- Lefty
>>
>> On Fri, Feb 3, 2017 at 5:09 PM, Lefty Leverenz <leftylever...@gmail.com>
>> wrote:
>>
>> You should see the Edit icon now, after refreshing or opening a new wiki
>> page.
>>
>> -- Lefty
>>
>>
>> On Fri, Feb 3, 2017 at 5:08 PM, Lefty Leverenz <leftylever...@gmail.com>
>> wrote:
>>
>> Hm, the icon should be next to the Watch icon.  And yet you seem to be
>> logged in.
>>
>> Well, when I enter your Confluence username on the permissions page there
>> are two matches:  "anishek (anagarwal)" and "anishek (anishek)".  The user
>> who has edit permissions is "anishek (anagarwal)".  Are you logged in as
>> "anishek (anishek)"?
>>
>> Ooooh ... now I get it, the word in parentheses is the username, so in
>> November you created an account for anargawal but now you're logged in as
>> anashek.  I'll give that one permissions.
>>
>> Should I remove permissions for anagarwal or do you want to keep both
>> accounts open?
>>
>> -- Lefty
>>
>>
>> On Fri, Feb 3, 2017 at 12:30 AM, Anishek Agarwal <anis...@gmail.com>
>> wrote:
>>
>> Hey Lefty,
>>
>> I dont have the the edit icon, can you please check it,  I am attaching
>> the screen shot after login on wiki.
>> [image: Screen Shot 2017-02-03 at 1.58.52 PM.png]
>>
>>
>> Thanks
>> anishek
>>
>> On Fri, Feb 3, 2017 at 11:30 AM Lefty Leverenz <leftylever...@gmail.com>
>> wrote:
>>
>> You got edit permissions back in November, Anishek.  There should be an
>> Edit icon in the upper right corner of each wiki page, as long as you're 
>> logged
>> in <https://cwiki.apache.org/confluence/login.action>.
>>
>> -- Lefty
>>
>>
>> On Thu, Feb 2, 2017 at 9:33 PM, Anishek Agarwal <anis...@gmail.com>
>> wrote:
>>
>> Hello,
>>
>> Can i please get edit permissions on the wiki, the username is "anishek"
>>
>> thanks
>> anishek
>>
>>
>>
>>
>>
>>


Re: edit wiki permissions

2017-02-03 Thread Lefty Leverenz
(I misspelled "anishek" two messages ago but got it right on the
permissions page.)

-- Lefty

On Fri, Feb 3, 2017 at 5:09 PM, Lefty Leverenz <leftylever...@gmail.com>
wrote:

> You should see the Edit icon now, after refreshing or opening a new wiki
> page.
>
> -- Lefty
>
>
> On Fri, Feb 3, 2017 at 5:08 PM, Lefty Leverenz <leftylever...@gmail.com>
> wrote:
>
>> Hm, the icon should be next to the Watch icon.  And yet you seem to be
>> logged in.
>>
>> Well, when I enter your Confluence username on the permissions page there
>> are two matches:  "anishek (anagarwal)" and "anishek (anishek)".  The user
>> who has edit permissions is "anishek (anagarwal)".  Are you logged in as
>> "anishek (anishek)"?
>>
>> Ooooh ... now I get it, the word in parentheses is the username, so in
>> November you created an account for anargawal but now you're logged in as
>> anashek.  I'll give that one permissions.
>>
>> Should I remove permissions for anagarwal or do you want to keep both
>> accounts open?
>>
>> -- Lefty
>>
>>
>> On Fri, Feb 3, 2017 at 12:30 AM, Anishek Agarwal <anis...@gmail.com>
>> wrote:
>>
>>> Hey Lefty,
>>>
>>> I dont have the the edit icon, can you please check it,  I am attaching
>>> the screen shot after login on wiki.
>>> [image: Screen Shot 2017-02-03 at 1.58.52 PM.png]
>>>
>>>
>>> Thanks
>>> anishek
>>>
>>> On Fri, Feb 3, 2017 at 11:30 AM Lefty Leverenz <leftylever...@gmail.com>
>>> wrote:
>>>
>>>> You got edit permissions back in November, Anishek.  There should be an
>>>> Edit icon in the upper right corner of each wiki page, as long as you're 
>>>> logged
>>>> in <https://cwiki.apache.org/confluence/login.action>.
>>>>
>>>> -- Lefty
>>>>
>>>>
>>>> On Thu, Feb 2, 2017 at 9:33 PM, Anishek Agarwal <anis...@gmail.com>
>>>> wrote:
>>>>
>>>> Hello,
>>>>
>>>> Can i please get edit permissions on the wiki, the username is "anishek"
>>>>
>>>> thanks
>>>> anishek
>>>>
>>>>
>>>>
>>
>


Re: edit wiki permissions

2017-02-03 Thread Lefty Leverenz
You should see the Edit icon now, after refreshing or opening a new wiki
page.

-- Lefty


On Fri, Feb 3, 2017 at 5:08 PM, Lefty Leverenz <leftylever...@gmail.com>
wrote:

> Hm, the icon should be next to the Watch icon.  And yet you seem to be
> logged in.
>
> Well, when I enter your Confluence username on the permissions page there
> are two matches:  "anishek (anagarwal)" and "anishek (anishek)".  The user
> who has edit permissions is "anishek (anagarwal)".  Are you logged in as
> "anishek (anishek)"?
>
> Ooooh ... now I get it, the word in parentheses is the username, so in
> November you created an account for anagarwal but now you're logged in as
> anishek.  I'll give that one permissions.
>
> Should I remove permissions for anagarwal or do you want to keep both
> accounts open?
>
> -- Lefty
>
>
> On Fri, Feb 3, 2017 at 12:30 AM, Anishek Agarwal <anis...@gmail.com>
> wrote:
>
>> Hey Lefty,
>>
>> I don't have the edit icon, can you please check it? I am attaching
>> the screen shot after login on the wiki.
>> [image: Screen Shot 2017-02-03 at 1.58.52 PM.png]
>>
>>
>> Thanks
>> anishek
>>
>> On Fri, Feb 3, 2017 at 11:30 AM Lefty Leverenz <leftylever...@gmail.com>
>> wrote:
>>
>>> You got edit permissions back in November, Anishek.  There should be an
>>> Edit icon in the upper right corner of each wiki page, as long as you're 
>>> logged
>>> in <https://cwiki.apache.org/confluence/login.action>.
>>>
>>> -- Lefty
>>>
>>>
>>> On Thu, Feb 2, 2017 at 9:33 PM, Anishek Agarwal <anis...@gmail.com>
>>> wrote:
>>>
>>> Hello,
>>>
>>> Can i please get edit permissions on the wiki, the username is "anishek"
>>>
>>> thanks
>>> anishek
>>>
>>>
>>>
>


Re: edit wiki permissions

2017-02-03 Thread Lefty Leverenz
Hm, the icon should be next to the Watch icon.  And yet you seem to be
logged in.

Well, when I enter your Confluence username on the permissions page there
are two matches:  "anishek (anagarwal)" and "anishek (anishek)".  The user
who has edit permissions is "anishek (anagarwal)".  Are you logged in as
"anishek (anishek)"?

Ooooh ... now I get it, the word in parentheses is the username, so in
November you created an account for anagarwal but now you're logged in as
anishek.  I'll give that one permissions.

Should I remove permissions for anagarwal or do you want to keep both
accounts open?

-- Lefty


On Fri, Feb 3, 2017 at 12:30 AM, Anishek Agarwal <anis...@gmail.com> wrote:

> Hey Lefty,
>
> I don't have the edit icon, can you please check it? I am attaching
> the screen shot after login on the wiki.
> [image: Screen Shot 2017-02-03 at 1.58.52 PM.png]
>
>
> Thanks
> anishek
>
> On Fri, Feb 3, 2017 at 11:30 AM Lefty Leverenz <leftylever...@gmail.com>
> wrote:
>
>> You got edit permissions back in November, Anishek.  There should be an
>> Edit icon in the upper right corner of each wiki page, as long as you're 
>> logged
>> in <https://cwiki.apache.org/confluence/login.action>.
>>
>> -- Lefty
>>
>>
>> On Thu, Feb 2, 2017 at 9:33 PM, Anishek Agarwal <anis...@gmail.com>
>> wrote:
>>
>> Hello,
>>
>> Can i please get edit permissions on the wiki, the username is "anishek"
>>
>> thanks
>> anishek
>>
>>
>>


Re: edit wiki permissions

2017-02-02 Thread Lefty Leverenz
You got edit permissions back in November, Anishek.  There should be an
Edit icon in the upper right corner of each wiki page, as long as you're
logged in.

-- Lefty


On Thu, Feb 2, 2017 at 9:33 PM, Anishek Agarwal  wrote:

> Hello,
>
> Can i please get edit permissions on the wiki, the username is "anishek"
>
> thanks
> anishek
>


Re: Table not found in the definition of view

2017-01-28 Thread Lefty Leverenz
Manish, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here: Mailing Lists.

Thanks.  -- Lefty


On Sat, Jan 28, 2017 at 1:44 PM, Manish Sharma  wrote:

> unsubscribe
>
> On Thu, Dec 1, 2016 at 3:36 AM, Furcy Pin  wrote:
>
>> Hi,
>>
>> you should replace
>>
>> WITH table AS (subquery)
>> SELECT ...
>> FROM table
>>
>> with
>>
>> SELECT ...
>> FROM(
>>   subquery
>> ) table
>>
>> Regards.
>>
>> On Thu, Dec 1, 2016 at 12:32 PM, Priyanka Raghuvanshi <
>> priyan...@winjit.com> wrote:
>>
>>> Hi All
>>>
>>>
>>> Getting error 'Table not found in the definition of view ' if a view is
>>> created using 'WITH' clause to use the  result of one query in another.
>>>
>>>
>>> This issue has been resolved for Hive 1.3.0 and 2.0.0 but mine is 0.13
>>>
>>> Regards
>>>
>>> Priyanka Raghuvanshi
>>>
>>
>>
>
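For readers hitting the same limitation, Furcy's rewrite can be illustrated end to end. The view, table, and column names below are hypothetical, and CTEs inside view definitions only work natively from Hive 1.3.0 / 2.0.0 onward:

```sql
-- Hive 0.13 rejects a CTE inside a view definition:
--   CREATE VIEW recent_totals AS
--   WITH recent AS (SELECT id, amount FROM sales WHERE dt = '2016-12-01')
--   SELECT id, SUM(amount) AS total FROM recent GROUP BY id;

-- Equivalent form that works on 0.13, with the CTE inlined
-- as a derived table in the FROM clause:
CREATE VIEW recent_totals AS
SELECT id, SUM(amount) AS total
FROM (
  SELECT id, amount FROM sales WHERE dt = '2016-12-01'
) recent
GROUP BY id;
```

The derived table behaves exactly like the named CTE; only the syntax changes.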


Re: Unsubscribe

2017-01-22 Thread Lefty Leverenz
To unsubscribe please send a message to user-unsubscr...@hive.apache.org as
described here: Mailing Lists.
Thanks.

-- Lefty


On Sat, Jan 21, 2017 at 6:37 AM, Mhd Wrk  wrote:

> Unsubscribe
>


Re: Adding a Hive Statement of SQL Conformance to the docs

2017-01-14 Thread Lefty Leverenz
+1

Good idea.  We'll add cross-references from various places in the wiki, such
as Hive Versions and Branches and the Language Manual umbrella page.  (Also
DDL, DML, and Select.)

Thanks, Carter.

-- Lefty


On Fri, Jan 13, 2017 at 9:48 AM, Alan Gates  wrote:

> +1.  I think this will be great for existing and potential Hive users.
>
> Alan.
>
> > On Jan 13, 2017, at 9:09 AM, Carter Shanklin 
> wrote:
> >
> > I get asked from time to time what Hive's level of SQL conformance is,
> and it's difficult to provide a clean answer. Most SQL systems have some
> detailed statement of SQL conformance to help answer this question.
> >
> > For a year or so I've maintained a spreadsheet that tracks Hive's SQL
> conformance, inspired by the Postgres SQL Conformance page. I've copied
> this spreadsheet into a publicly viewable Google Spreadsheet here:
> https://docs.google.com/spreadsheets/d/1VaAqkPXtjhT_
> oYniUW1I2xMmAFlxqp2UFkemL9-U14Q/edit#gid=0
> >
> > I propose to add a static version of this document to the Hive Wiki, and
> to version it with one static SQL Conformance page per Hive major release,
> starting with Hive 2.1 and moving forward. So for example there would be
> one page for Hive 2.1, one for Hive 2.2 when it is released, and so on.
> >
> > At this point I don't guarantee the spreadsheet's complete accuracy.
> Getting it into the public wiki with multiple editors should quickly
> eliminate any errors.
> >
> > Does anyone have comments, suggestions or objections?
> >
> > Thanks,
> >
>
>


Re: send this mail to subscribe

2017-01-12 Thread Lefty Leverenz
To subscribe, you need to send a message to "user-subscr...@hive.apache.org"
(rather than the mailing list itself) as described here: Mailing Lists.

-- Lefty


On Thu, Jan 12, 2017 at 9:57 PM, Rajendra Bhat  wrote:

>
>
> --
> Thanks and
> Regards
>
> Rajendra Bhat
>


Re: please give me the permission to update the wiki of hive on spark

2017-01-03 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Kelly, and happy new year!

-- Lefty


On Mon, Jan 2, 2017 at 5:40 PM, Zhang, Liyun  wrote:

> Hi
>
>   I want to update the Hive on Spark: Getting Started wiki page because of
> HIVE-8373. My Confluence username is kellyzly; please provide me the
> privilege to update the wiki.
>
>
>
>
>
> Best Regards
>
> Kelly Zhang/Zhang,Liyun
>
>
>


Re: Requesting write access to Hive wiki

2016-12-18 Thread Lefty Leverenz
You've got it.  Welcome to the Hive wiki team, Michael!

-- Lefty


On Sun, Dec 18, 2016 at 1:25 PM, mikey d  wrote:

> Requesting write access to Hive wiki
>
> Request already sent to user-subscr...@hive.apache.org
>
> If there is anything further needed, please let me know.
>
>
> User: mdeguzis
>
> On Sun, Dec 18, 2016 at 4:24 PM, mikey d  wrote:
>
>> Requesting write access to Hive wiki
>>
>> Request already sent to user-subscr...@hive.apache.org
>>
>> If there is anything further needed, please let me know.
>>
>>
>> User: mdeguzis
>>
>> --
>> Michael DeGuzis
>> Email: mdegu...@gmail.com
>> Website: http://www.libregeek.org
>> Linked In Resume/Profile
>> 
>> Projects: GitHub Projects 
>>
>
>
>
> --
> Michael DeGuzis
> Email: mdegu...@gmail.com
> Website: http://www.libregeek.org
> Linked In Resume/Profile
> 
> Projects: GitHub Projects 
>


Re: Naveen Gangam has become a Hive Committer

2016-12-17 Thread Lefty Leverenz
Congratulations Naveen!

-- Lefty


On Fri, Dec 16, 2016 at 10:10 AM, Xuefu Zhang  wrote:

> Bcc: dev/user
>
> Hi all,
>
> It's my honor to announce that Apache Hive PMC has voted on and approved
> Naveen's committership. Please join me in congratulate him on his
> contributions and achievements.
>
> Regards,
> Xuefu
>


Re: [ANNOUNCE] New Hive Committer - Rajesh Balamohan

2016-12-14 Thread Lefty Leverenz
Congratulations Rajesh!

-- Lefty


On Tue, Dec 13, 2016 at 11:58 PM, Rajesh Balamohan 
wrote:

> Thanks a lot for providing this opportunity and to all for their messages.
> :)
>
> ~Rajesh.B
>
> On Wed, Dec 14, 2016 at 11:33 AM, Dharmesh Kakadia 
> wrote:
>
> > Congrats Rajesh !
> >
> > Thanks,
> > Dharmesh
> >
> > On Tue, Dec 13, 2016 at 7:37 PM, Vikram Dixit K 
> > wrote:
> >
> >> Congrats Rajesh! :)
> >>
> >> On Tue, Dec 13, 2016 at 9:36 PM, Pengcheng Xiong 
> >> wrote:
> >>
> >>> Congrats Rajesh! :)
> >>>
> >>> On Tue, Dec 13, 2016 at 6:51 PM, Prasanth Jayachandran <
> >>> prasan...@apache.org
> >>> > wrote:
> >>>
> >>> > The Apache Hive PMC has voted to make Rajesh Balamohan a committer on
> >>> the
> >>> > Apache Hive Project. Please join me in congratulating Rajesh.
> >>> >
> >>> > Congratulations Rajesh!
> >>> >
> >>> > Thanks
> >>> > Prasanth
> >>>
> >>
> >>
> >>
> >> --
> >> Nothing better than when appreciated for hard work.
> >> -Mark
> >>
> >
> >
>


Re: Requesting write access to the Hive wiki

2016-12-02 Thread Lefty Leverenz
You've got it.  Welcome to the Hive wiki team, Grant!

-- Lefty


On Fri, Dec 2, 2016 at 1:38 PM, grant sohn <grantlm...@gmail.com> wrote:

> https://cwiki.apache.org/confluence/display/~gsohn
>
> It's gsohn.
>
> Thanks,
>
> Grant
>
> Sent from my iPhone
>
> On Dec 2, 2016, at 1:24 PM, Lefty Leverenz <leftylever...@gmail.com>
> wrote:
>
> Grant, what is your Confluence username?
>
> About This Wiki – How to get permission to edit
> <https://cwiki.apache.org/confluence/display/Hive/AboutThisWiki#AboutThisWiki-Howtogetpermissiontoedit>
>
> -- Lefty
>
>
> On Fri, Dec 2, 2016 at 11:18 AM, grant sohn <grantlm...@gmail.com> wrote:
>
>> Hi,
>>
>> I would like to make some minor fixes to the wiki to maintain consistency
>> with the code.  For details see HIVE-15276.
>>
>> Thanks,
>>
>> Grant
>>
>> Sent from my iPhone
>
>
>


Re: Requesting write access to the Hive wiki

2016-12-02 Thread Lefty Leverenz
Grant, what is your Confluence username?

About This Wiki – How to get permission to edit


-- Lefty


On Fri, Dec 2, 2016 at 11:18 AM, grant sohn  wrote:

> Hi,
>
> I would like to make some minor fixes to the wiki to maintain consistency
> with the code.  For details see HIVE-15276.
>
> Thanks,
>
> Grant
>
> Sent from my iPhone


Re: Edit permissions for "anagarwal"

2016-11-22 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Anishek!

-- Lefty


On Tue, Nov 22, 2016 at 1:05 AM, Anishek Agarwal 
wrote:

> Hello,
>
> Please provide edit permissions to the hive wiki pages for the confluence
> user name  *anagarwal*
>
> Regards,
> anishek
>


Re: Happy Diwali to those forum members who celebrate this great festival

2016-10-30 Thread Lefty Leverenz
+1

-- Lefty


On Sun, Oct 30, 2016 at 1:01 PM, Ashok Kumar  wrote:

> You are very kind Sir
>
>
> On Sunday, 30 October 2016, 16:42, Devopam Mittra 
> wrote:
>
>
> +1
> Thanks and regards
> Devopam
>
> On 30 Oct 2016 9:37 pm, "Mich Talebzadeh" 
> wrote:
>
> Enjoy the festive season.
>
> Regards,
>
> Dr Mich Talebzadeh
>
> LinkedIn *
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
> http://talebzadehmich.wordpress.com
>
>
> *Disclaimer:* Use it at your own risk. Any and all responsibility for any
> loss, damage or destruction of data or any other property which may arise
> from relying on this email's technical content is explicitly disclaimed.
> The author will in no case be liable for any monetary damages arising from
> such loss, damage or destruction.
>
>
>
>
>


Re: requesting write access to the Hive wiki

2016-10-08 Thread Lefty Leverenz
Or do you mean the wiki page that tells you how to get write access?  That
is
https://cwiki.apache.org/confluence/display/Hive/AboutThisWiki#AboutThisWiki-Howtogetpermissiontoedit
.

-- Lefty


On Sat, Oct 8, 2016 at 10:24 PM, Tian, Jianguo <jianguo.t...@intel.com>
wrote:

> Do you mean the wiki link which I want to update? Here it is:
> https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#
> HiveServer2Clients-JDBC
>
>
>
> *From:* mohan.25feb86 [mailto:mohan.25fe...@gmail.com]
> *Sent:* Sunday, October 9, 2016 3:46 AM
> *To:* user@hive.apache.org
> *Subject:* Re: requesting write access to the Hive wiki
>
>
>
> Could somebody share this wiki link
>
>
>
>
>
> Sent from Samsung Galaxy Note
>
>
>
>
>  Original message 
> From: Lefty Leverenz <leftylever...@gmail.com>
> Date: 07/10/2016 10:30 PM (GMT-07:00)
> To: user@hive.apache.org
> Subject: Re: requesting write access to the Hive wiki
>
> Done.  Welcome to the Hive wiki team, Jianguo Tian!
>
>
> -- Lefty
>
>
>
>
>
> On Fri, Oct 7, 2016 at 9:44 PM, Tian, Jianguo <jianguo.t...@intel.com>
> wrote:
>
> Hi,
>
>   I’m a software engineer from Intel and I’m eager to be a contributor. I
> have committed one JIRA and I want to update it on the wiki. I hope that I
> can take part in this big community and contribute something.
>
>
>
> Thanks and Regards,
>
> Jianguo Tian
>
>
>
>
>


Re: requesting write access to the Hive wiki

2016-10-07 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Jianguo Tian!

-- Lefty


On Fri, Oct 7, 2016 at 9:44 PM, Tian, Jianguo 
wrote:

> Hi,
>
>   I’m a software engineer from Intel and I’m eager to be a contributor. I
> have committed one JIRA and I want to update it on the wiki. I hope that I
> can take part in this big community and contribute something.
>
>
>
> Thanks and Regards,
>
> Jianguo Tian
>
>
>


Re: Write access to Hive wiki please

2016-09-22 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Ian!

-- Lefty


On Wed, Sep 21, 2016 at 10:03 AM, Ian Cook  wrote:

> Hello,
>
> Could I please have write access to the Hive wiki so that I can help with
> fixes? My Confluence username is *icook*.
>
> Thanks,
> Ian Cook
> Cloudera
>


Re: Need help with query

2016-09-12 Thread Lefty Leverenz
Here's a list of the wikidocs about dynamic partitions.

-- Lefty


On Mon, Sep 12, 2016 at 3:25 PM, Devopam Mittra  wrote:

> Kindly learn dynamic partition from cwiki. That will be the perfect
> solution to your requirement in my opinion.
> Regards
> Dev
>
> On 13 Sep 2016 12:49 am, "Igor Kravzov"  wrote:
>
>> Hi,
>>
>> I have a query like this one
>>
>> alter table my_table
>>   add if not exists partition (mmdd=20160912) location
>> '/mylocation/20160912';
>>
>> Is it possible to make so I don't have to change date every day?
>> Something with  CURRENT_DATE;?
>>
>> Thanks in advance.
>>
>
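One way to avoid editing the date by hand — HiveQL does not accept an expression such as CURRENT_DATE inside a partition spec for ALTER TABLE — is to pass the date in from the calling shell as a Hive variable. A sketch under assumptions: the table layout is borrowed from the thread, the partition column is assumed to be yyyymmdd, and the variable name dt and script name are arbitrary choices:

```sql
-- Invoked as, for example:
--   hive --hivevar dt=$(date +%Y%m%d) -f add_partition.hql
-- so ${hivevar:dt} is substituted with today's date before execution.
ALTER TABLE my_table
  ADD IF NOT EXISTS PARTITION (yyyymmdd=${hivevar:dt})
  LOCATION '/mylocation/${hivevar:dt}';
```

Scheduling that one-line hive invocation (e.g. from cron) removes the daily manual edit.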


Re: unsubscribe

2016-09-07 Thread Lefty Leverenz
To unsubscribe please send a message to user-unsubscr...@hive.apache.org as
described here: Mailing Lists.
Thanks.

-- Lefty


On Wed, Sep 7, 2016 at 11:00 AM, roshan joe  wrote:

>
>>
>>
>>
>>
>>
>


Re: Write access to wiki

2016-09-06 Thread Lefty Leverenz
You've got it.  Welcome to the Hive wiki team, Zsombor!

-- Lefty


On Tue, Sep 6, 2016 at 9:59 AM, Zsombor Klara 
wrote:

> Hi,
>
> I would like to have write access to the hive wiki. My username is
> zsombor.klara.
>
> Thanks,
> Zsombor
>


Re: Write access to the Hive wiki

2016-09-01 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Marta!

-- Lefty


On Thu, Sep 1, 2016 at 8:13 AM, Marta Kuczora  wrote:

> Sorry, forgot to write my loginId: kuczoram
>
> Regards,
> Marta
>
> On Thu, Sep 1, 2016 at 2:12 PM, Marta Kuczora 
> wrote:
>
>> Hi,
>>
>> could somebody please give me right to modify the Hive wiki?
>> I would be interested in taking the HIVE-14632 Jira. I worked on the
>> output format part recently, so I could improve its documentation.
>>
>> Thanks and regards,
>> Marta
>>
>
>


Re: Request for write access to hive wiki

2016-08-23 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Kit!

-- Lefty


On Tue, Aug 23, 2016 at 12:44 PM, Kit Menke  wrote:

> I am requesting write access to the hive wiki. My confluence user name is
> kit.menke.


Re: Unsubscribe

2016-08-16 Thread Lefty Leverenz
Sarath, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here: Mailing Lists.  Thanks.

-- Lefty


On Tue, Aug 16, 2016 at 1:31 AM, Sarath Chandra <
sarathchandra.jos...@algofusiontech.com> wrote:

>


Re: unsubscribe

2016-08-03 Thread Lefty Leverenz
To unsubscribe please send a message to user-unsubscr...@hive.apache.org as
described here: Mailing Lists.
Thanks.

-- Lefty


On Mon, Aug 1, 2016 at 11:13 PM, zhang jp  wrote:

> unsubscribe
>


Re: Permission to edit wiki

2016-07-21 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Vihang!

-- Lefty

On Wed, Jul 20, 2016 at 11:02 AM, Vihang Karajgaonkar 
wrote:

> Hi,
>
> I submitted a patch for HIVE-14135 recently which needs an update to the
> wiki. I have created my Confluence account and its username is vihangk1.
> Can someone please give me the permission to edit the Hive wiki pages?
>
> Thanks,
> Vihang


Re: [ANNOUNCE] New PMC Member : Pengcheng

2016-07-17 Thread Lefty Leverenz
Congratulations Pengcheng!

-- Lefty

On Sun, Jul 17, 2016 at 1:03 PM, Ashutosh Chauhan 
wrote:

> >
> > Hello Hive community,
> >
> > I'm pleased to announce that Pengcheng Xiong has accepted the Apache Hive
> > PMC's
> > invitation, and is now our newest PMC member. Many thanks to Pengcheng
> for
> > all of his hard work.
> >
> > Please join me congratulating Pengcheng!
> >
> > Best,
> > Ashutosh
> > (On behalf of the Apache Hive PMC)
> >
>


Re: [ANNOUNCE] New PMC Member : Jesus

2016-07-17 Thread Lefty Leverenz
Congratulations Jesus!

-- Lefty

On Sun, Jul 17, 2016 at 1:01 PM, Ashutosh Chauhan 
wrote:

> Hello Hive community,
>
> I'm pleased to announce that Jesus Camacho Rodriguez has accepted the
> Apache Hive PMC's
> invitation, and is now our newest PMC member. Many thanks to Jesus for all
> of
> his hard work.
>
> Please join me congratulating Jesus!
>
> Best,
> Ashutosh
> (On behalf of the Apache Hive PMC)
>


Re: [Announce] New Hive Committer - Mohit Sabharwal

2016-07-03 Thread Lefty Leverenz
Congratulations Mohit!

-- Lefty

On Sun, Jul 3, 2016 at 11:50 PM, Alpesh Patel 
wrote:

> Congrats
> On Jul 1, 2016 9:57 AM, "Szehon Ho"  wrote:
>
>> On behalf of the Apache Hive PMC, I'm pleased to announce that Mohit
>> Sabharwal has been voted a committer on the Apache Hive project.
>>
>> Please join me in congratulating Mohit !
>>
>> Thanks,
>> Szehon
>>
>


Re: hive wiki edit permission

2016-06-01 Thread Lefty Leverenz
Zoltan, you now have write access to the wiki.  Welcome to the Hive docs
team!

-- Lefty

On Wed, Jun 1, 2016 at 6:39 PM, Zoltan Haindrich  wrote:

> Hi,
>
> I would like to update the documentation and add the new alternate way
> to use IDEs with Hive (HIVE-13490).
> my wiki login is: kirk
>
> regards,
> Zoltan
>
>


Re: unsubscribe

2016-05-23 Thread Lefty Leverenz
Martin, to unsubscribe please send a message to
user-unsubscr...@hive.apache.org as described here: Mailing Lists.  Thanks.

-- Lefty


On Mon, May 23, 2016 at 3:00 AM, Martin Kudlej  wrote:

>
> --
> Best Regards,
> Martin Kudlej.
> RHSC/USM Senior Quality Assurance Engineer
> Red Hat Czech s.r.o.
>
> Phone: +420 532 294 155
> E-mail:mkudlej at redhat.com
> IRC:   mkudlej at #brno, #gluster, #storage-qa, #rhsc-qe, #rhs, #distcomp
>


Re: Would like to be a user

2016-05-17 Thread Lefty Leverenz
Suresh, you now have write privileges for the Hive wiki.  Welcome to the
docs team!

-- Lefty



On Tue, May 17, 2016 at 3:36 PM, Markovitz, Dudu <dmarkov...@paypal.com>
wrote:

> Thanks J
>
>
>
> *From:* Lefty Leverenz [mailto:leftylever...@gmail.com]
> *Sent:* Tuesday, May 17, 2016 10:06 PM
>
> *To:* user@hive.apache.org
> *Subject:* Re: Would like to be a user
>
>
>
> Done.  Welcome to the Hive wiki team, Dudu!
>
>
> -- Lefty
>
>
>
>
>
> On Tue, May 17, 2016 at 3:04 AM, Markovitz, Dudu <dmarkov...@paypal.com>
> wrote:
>
> Hi Lefty
>
>
>
> Can you add me as well?
>
> My user is dudu.markovitz
>
>
>
> Thanks
>
>
>
> Dudu
>
>
>
> *From:* Lefty Leverenz [mailto:leftylever...@gmail.com]
> *Sent:* Monday, May 16, 2016 11:44 PM
> *To:* user@hive.apache.org
> *Subject:* Re: Would like to be a user
>
>
>
> Welcome Gail, I've given you write access to the Hive wiki.  Thanks in
> advance for your contributions!
>
>
> -- Lefty
>
>
>
> On Mon, May 16, 2016 at 4:39 PM, gail haspert <gkhasp...@gmail.com> wrote:
>
> Hi
>
> I would like to start working on (write to) apache hive. My confluence
> user name is ghaspert.
>
>
>
> thanks
>
> -- Forwarded message --
> From: *gail haspert* <gkhasp...@gmail.com>
> Date: Mon, May 16, 2016 at 12:40 PM
> Subject: Would like to be a user
> To: user-subscr...@hive.apache.org
>
> I just signed up as ghaspert for a confluence account.
>
> Thanks,
>
> Gail
>
>
>
>
>
>
>


Re: Would like to be a user

2016-05-17 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Dudu!

-- Lefty


On Tue, May 17, 2016 at 3:04 AM, Markovitz, Dudu <dmarkov...@paypal.com>
wrote:

> Hi Lefty
>
>
>
> Can you add me as well?
>
> My user is dudu.markovitz
>
>
>
> Thanks
>
>
>
> Dudu
>
>
>
> *From:* Lefty Leverenz [mailto:leftylever...@gmail.com]
> *Sent:* Monday, May 16, 2016 11:44 PM
> *To:* user@hive.apache.org
> *Subject:* Re: Would like to be a user
>
>
>
> Welcome Gail, I've given you write access to the Hive wiki.  Thanks in
> advance for your contributions!
>
>
> -- Lefty
>
>
>
> On Mon, May 16, 2016 at 4:39 PM, gail haspert <gkhasp...@gmail.com> wrote:
>
> Hi
>
> I would like to start working on (write to) apache hive. My confluence
> user name is ghaspert.
>
>
>
> thanks
>
> -- Forwarded message --
> From: *gail haspert* <gkhasp...@gmail.com>
> Date: Mon, May 16, 2016 at 12:40 PM
> Subject: Would like to be a user
> To: user-subscr...@hive.apache.org
>
> I just signed up as ghaspert for a confluence account.
>
> Thanks,
>
> Gail
>
>
>
>
>


Re: Would like to be a user

2016-05-16 Thread Lefty Leverenz
Welcome Gail, I've given you write access to the Hive wiki.  Thanks in
advance for your contributions!

-- Lefty

On Mon, May 16, 2016 at 4:39 PM, gail haspert  wrote:

> Hi
> I would like to start working on (write to) apache hive. My confluence
> user name is ghaspert.
>
> thanks
> -- Forwarded message --
> From: gail haspert 
> Date: Mon, May 16, 2016 at 12:40 PM
> Subject: Would like to be a user
> To: user-subscr...@hive.apache.org
>
>
> I just signed up as ghaspert for a confluence account.
> Thanks,
> Gail
>
>


Re: Hive configuration parameter hive.enforce.bucketing does not exist in Hive 2

2016-04-29 Thread Lefty Leverenz
FYI, the removal of hive.enforce.bucketing is documented in the wiki
(hive.enforce.bucketing) and the JIRA issue that removed it is HIVE-12331.

-- Lefty

On Fri, Apr 29, 2016 at 12:07 PM, Mich Talebzadeh  wrote:

> Unfortunately that needs to be done or better  the whole line removed in
> every hql code where it is set as true .
>
> Thanks
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> On 29 April 2016 at 19:35, Sergey Shelukhin 
> wrote:
>
>> You can set hive.conf.validation to false to disable this :)
>>
>> From: Mich Talebzadeh 
>> Reply-To: "user@hive.apache.org" 
>> Date: Friday, April 29, 2016 at 11:16
>> To: user 
>> Subject: Re: Hive configuration parameter hive.enforce.bucketing does
>> not exist in Hive 2
>>
>> Well having it in the old code causes the query to crash as well!
>>
>>
>> On 29 April 2016 at 18:33, Sergey Shelukhin 
>> wrote:
>>
>>> This parameter has indeed been removed; it is treated as always true
>>> now, because setting it to false just produced incorrect tables.
>>>
>>> From: Mich Talebzadeh 
>>> Reply-To: "user@hive.apache.org" 
>>> Date: Friday, April 29, 2016 at 02:51
>>> To: user 
>>> Subject: Hive configuration parameter hive.enforce.bucketing does not
>>> exist in Hive 2
>>>
>>> Is the parameter
>>>
>>> --set hive.enforce.bucketing = true;
>>>
>>> depreciated in Hive 2 as it causes hql code not to work?
>>>
>>> hive> set hive.enforce.bucketing = true;
>>> Query returned non-zero code: 1, cause: hive configuration
>>> hive.enforce.bucketing does not exists.
>>>
>>>
>>>
>>>
>>
>>
>


Re:

2016-04-16 Thread Lefty Leverenz
To unsubscribe, please send a message to user-unsubscr...@hive.apache.org as
described here: Mailing Lists.
Thanks.

-- Lefty


On Sat, Apr 16, 2016 at 12:12 AM, 469564481 <469564...@qq.com> wrote:

>
> I do not want to receive email. Thanks!
>
> -- Original --
> *From:* "Jörn Franke"
> *Date:* Sat, Apr 16, 2016 03:10 PM
> *To:* "user"
> *Subject: * Re: Mappers spawning Hive queries
>
> Just out of curiosity, what is the use case behind this?
>
> How do you call the shell script?
>
> > On 16 Apr 2016, at 00:24, Shirish Tatikonda 
> wrote:
> >
> > Hello,
> >
> > I am trying to run multiple hive queries in parallel by submitting them
> through a map-reduce job.
> > More specifically, I have a map-only hadoop streaming job where each
> mapper runs a shell script that does two things -- 1) parses input lines
> obtained via streaming; and 2) submits a very simple hive query (via hive
> -e ...) with parameters computed from step-1.
> >
> > Now, when I run the streaming job, the mappers seem to be stuck and I
> don't know what is going on. When I looked on resource manager web UI, I
> don't see any new MR Jobs (triggered from the hive query). I am trying to
> understand this behavior.
> >
> > This may be a bad idea to begin with, and there may be better ways to
> accomplish the same task. However, I would like to understand the behavior
> of such a MR job.
> >
> > Any thoughts?
> >
> > Thank you,
> > Shirish
> >
>


Re: [VOTE] Bylaws change to allow some commits without review

2016-04-15 Thread Lefty Leverenz
+1

Navis, you've just reactivated your PMC membership.  ;-)

A PMC member is considered emeritus by their own declaration or by not
> contributing in any form to the project for over six months.
>

Actually your old patch for HIVE-9499 was committed in March and you added a
comment to HIVE-11752 in February, so you *have* been active recently.  And
now you can let it slide until October.

-- Lefty

On Thu, Apr 14, 2016 at 5:57 PM, Sushanth Sowmyan 
wrote:

> +1
> On Apr 13, 2016 17:20, "Prasanth Jayachandran" <
> pjayachand...@hortonworks.com> wrote:
>
>> +1
>>
>> Thanks
>> Prasanth
>>
>>
>>
>>
>> On Wed, Apr 13, 2016 at 5:14 PM -0700, "Navis Ryu" 
>> wrote:
>>
>> not sure I'm active PMC member but +1, anyway.
>>
>> On Thursday, April 14, 2016, Lars Francke wrote:
>>
>>> Hi everyone,
>>>
>>> we had a discussion on the dev@ list about allowing some forms of
>>> contributions to be committed without a review.
>>>
>>> The exact sentence I propose to add is: "Minor issues (e.g. typos, code
>>> style issues, JavaDoc changes. At committer's discretion) can be committed
>>> after soliciting feedback/review on the mailing list and not receiving
>>> feedback within 2 days."
>>>
>>> The proposed bylaws can also be seen here <
>>> https://cwiki.apache.org/confluence/display/Hive/Proposed+Changes+to+Hive+Project+Bylaws+-+April+2016
>>> >
>>>
>>> This vote requires a 2/3 majority of all Active PMC members so I'd love
>>> to get as many votes as possible. The vote will run for at least six days.
>>>
>>> Thanks,
>>> Lars
>>>
>>


Re: Dynamic Partitioning Table Properties

2016-04-05 Thread Lefty Leverenz
These are configuration properties, not table properties.  Here are their
descriptions in the Hive wiki:

   - hive.exec.dynamic.partition.mode
   - hive.exec.max.dynamic.partitions
   - hive.exec.max.dynamic.partitions.pernode

And there's more information in the Tutorial: Dynamic-Partition Insert.

-- Lefty


On Mon, Apr 4, 2016 at 5:54 AM, bhanu prasad 
wrote:

> Hi,
>
> Can anyone explain me the below dynamic partitioning table properties in
> detail.
>
>
>
> 1.set hive.exec.dynamic.partition.mode=nonstrict;
> 2.set hive.exec.max.dynamic.partitions=1;
> 3.set hive.exec.max.dynamic.partitions.pernode=500;
>
>
> Thanks in advance
>
>
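Taken together, the three settings gate a dynamic-partition insert like the sketch below. The table and column names are hypothetical, and the limits shown are illustrative values rather than recommendations:

```sql
-- nonstrict: every partition column may be chosen dynamically
-- (strict mode requires at least one static partition column).
SET hive.exec.dynamic.partition.mode=nonstrict;
-- Fail the query if it would create more than 10000 partitions in total ...
SET hive.exec.max.dynamic.partitions=10000;
-- ... or more than 500 in any single mapper/reducer task.
SET hive.exec.max.dynamic.partitions.pernode=500;

-- Each row's event_date value selects the partition it lands in;
-- partitions are created on the fly as new dates appear.
INSERT OVERWRITE TABLE events PARTITION (event_date)
SELECT user_id, payload, event_date
FROM staging_events;
```

The two max settings exist as safety valves: a bad join or an unexpected partition column can otherwise fan out into thousands of tiny partitions.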


Re: Hive Macros roadmap

2016-04-04 Thread Lefty Leverenz
Shannon Ladymon has documented Hive macros in the wiki
<https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-Create/DropMacro>,
using examples from HIVE-2655
<https://issues.apache.org/jira/browse/HIVE-2655> and HIVE-13372
<https://issues.apache.org/jira/browse/HIVE-13372>.  Review comments,
corrections, and additions are welcome.

Thanks, Shannon!

-- Lefty

On Sun, Sep 13, 2015 at 10:53 PM, Lefty Leverenz <leftylever...@gmail.com>
wrote:

> HIVE-2655 <https://issues.apache.org/jira/browse/HIVE-2655> added macros
> in release 0.12.0, so I've added a TODOC12 label and doc note to it.
>
> Thank you Elliot for drawing this to our attention.
>
> -- Lefty
>
>
> On Fri, Sep 11, 2015 at 3:41 PM, Edward Capriolo <edlinuxg...@gmail.com>
> wrote:
>
>> Macro's are in and tested. No one will remove them. The unit tests ensure
>> they keep working.
>>
>> On Fri, Sep 11, 2015 at 3:38 PM, Elliot West <tea...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I noticed some time ago the Hive Macro feature. To me at least this
>>> seemed like an excellent addition to HQL, allowing the user to encapsulate
>>> complex column logic as an independent HQL, reusable macro while avoiding
>>> the complexities of Java UDFs. However, few people seem to be aware of them
>>> or use them. If you are unfamiliar with macros they look like this:
>>>
>>> hive> create temporary macro MYSIGMOID(x DOUBLE)
>>> > 2.0 / (1.0 + exp(-x));
>>> OK
>>>
>>> hive> select MYSIGMOID(1.0) from dual;
>>> OK
>>>
>>> 1.4621171572600098
>>>
>>>
>>> As far as I can tell, they are no longer documented on the Hive wiki.
>>> There is a tiny reference to them in the O'Reilly 'Programming Hive' book
>>> (page 185). Can anyone advise me on the following:
>>>
>>>- Are there are plans to keep or remove this functionality?
>>>- Are there are plans to document this functionality?
>>>- Aside from limitations of HQL are there compelling reasons not to
>>>use macros?
>>>
>>> Thanks - Elliot.
>>>
>>>
>>
>


Re: Reopen https://issues.apache.org/jira/browse/YARN-2624

2016-03-29 Thread Lefty Leverenz
Have you asked this question on one of the YARN mailing lists
(yarn-...@hadoop.apache.org or yarn-iss...@hadoop.apache.org)?

-- Lefty


On Tue, Mar 29, 2016 at 3:56 PM, mahender bigdata <
mahender.bigd...@outlook.com> wrote:

> Ping..
>
>
> On 3/26/2016 7:23 AM, mahender bigdata wrote:
>
> Can we reopen this JIRA? This issue can still be reproduced, even though
> the Apache site says it is resolved.
>
> -Mahender
>
> On 3/25/2016 2:06 AM, Mahender Sarangam wrote:
>
> any update on this..
>
> > Subject: Re: Hadoop 2.6 version
> 
> https://issues.apache.org/jira/browse/YARN-2624
> > To: user@hive.apache.org
> > From: mahender.bigd...@outlook.com
> > Date: Thu, 24 Mar 2016 12:20:57 -0700
> >
> >
> > Is there any other way to do NM Node Cache directory, I'm using Windows
> > Cluster Hortan Works HDP System.
> >
> > /mahender
> > On 3/24/2016 11:27 AM, mahender bigdata wrote:
> > > Hi,
> > >
> > > Has any one is holding work around for this bug, Looks like this
> > > problem still persists in hadoop 2.6. Templeton Job get failed as soon
> > > as job is submitted. Please let us know as early as possible
> > >
> > > Application application_1458842675930_0002 failed 2 times due to AM
> > > Container for appattempt_1458842675930_0002_02 exited with
> > > exitCode: -1000
> > > For more detailed output, check application tracking
> > > page:
> http://headnodehost:9014/cluster/app/application_1458842675930_0002 Then,
> > > click on links to logs of each attempt.
> > > Diagnostics: Rename cannot overwrite non empty destination directory
> > > c:/apps/temp/hdfs/nm-local-dir/usercache/XXX/filecache/18
> > > java.io.IOException: Rename cannot overwrite non empty destination
> > > directory c:/apps/temp/hdfs/nm-local-dir/usercache//filecache/18
> > > at
> > >
> org.apache.hadoop.fs.AbstractFileSystem.renameInternal(AbstractFileSystem.java:735)
>
> > >
> > > at org.apache.hadoop.fs.FilterFs.renameInternal(FilterFs.java:236)
> > > at
> > >
> org.apache.hadoop.fs.AbstractFileSystem.rename(AbstractFileSystem.java:678)
> > > at org.apache.hadoop.fs.FileContext.rename(FileContext.java:958)
> > > at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:366)
> > > at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:62)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > at
> > > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> > > at
> > >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>
> > >
> > > at
> > >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>
> > >
> > > at java.lang.Thread.run(Thread.java:745)
> > > Failing this attempt. Failing the application.
> > >
> > >
> > > /Mahender
> > >
> > >
> > >
> > >
> >
>
>
>
>
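A workaround often reported for this YARN-2624 symptom ("Rename cannot overwrite non empty destination directory .../filecache/...") — not an official fix, so verify against your distribution's guidance — is to stop the NodeManager, delete the stale filecache entry under `yarn.nodemanager.local-dirs`, and restart. The sketch below only simulates the delete step against a scratch directory, with made-up paths:

```shell
# Simulate clearing a stale NodeManager filecache entry (YARN-2624 symptom).
# On a real node: stop the NodeManager first, and the path would be
# <yarn.nodemanager.local-dirs>/usercache/<user>/filecache.
NM_LOCAL_DIR="${TMPDIR:-/tmp}/nm-local-dir-demo"
STALE="$NM_LOCAL_DIR/usercache/someuser/filecache/18"

mkdir -p "$STALE"
touch "$STALE/leftover.jar"      # the non-empty rename destination

# Remove the stale entry so localization can recreate it cleanly.
rm -rf "$NM_LOCAL_DIR/usercache/someuser/filecache"

[ -d "$STALE" ] || echo "stale filecache cleared"
```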


Re: Can I send my issue with the HIVE to this mail group?

2016-03-23 Thread Lefty Leverenz
Yes, this mailing list is the right place to discuss any problems you have
using Apache Hive.

I suggest using a new message, so that the subject line can be relevant to
your issue.

-- Lefty

On Wed, Mar 23, 2016 at 5:47 PM, Sanka, Himabindu  wrote:

>
>
>
>
> *Regards,*
>
> *Hima*
>
>
>
>
>
>
>
>
> This e-mail, including attachments, may include confidential and/or
> proprietary information, and may be used only by the person or entity
> to which it is addressed. If the reader of this e-mail is not the intended
> recipient or his or her authorized agent, the reader is hereby notified
> that any dissemination, distribution or copying of this e-mail is
> prohibited. If you have received this e-mail in error, please notify the
> sender by replying to this message and delete this e-mail immediately.
>


Re: [ANNOUNCE] New Hive Committer - Wei Zheng

2016-03-09 Thread Lefty Leverenz
Congratulations!

-- Lefty

On Wed, Mar 9, 2016 at 10:30 PM, Dmitry Tolpeko  wrote:

> Congratulations, Wei!
>
> On Thu, Mar 10, 2016 at 5:48 AM, Chao Sun  wrote:
>
>> Congratulations!
>>
>> On Wed, Mar 9, 2016 at 6:44 PM, Prasanth Jayachandran <
>> pjayachand...@hortonworks.com> wrote:
>>
>>> Congratulations Wei!
>>>
>>> On Mar 9, 2016, at 8:43 PM, Sergey Shelukhin >> > wrote:
>>>
>>> Congrats!
>>>
>>> From: Szehon Ho >
>>> Reply-To: "user@hive.apache.org" <
>>> user@hive.apache.org>
>>> Date: Wednesday, March 9, 2016 at 17:40
>>> To: "user@hive.apache.org" <
>>> user@hive.apache.org>
>>> Cc: "d...@hive.apache.org" <
>>> d...@hive.apache.org>, "w...@apache.org
>>> " >
>>> Subject: Re: [ANNOUNCE] New Hive Committer - Wei Zheng
>>>
>>> Congratulations Wei!
>>>
>>> On Wed, Mar 9, 2016 at 5:26 PM, Vikram Dixit K >> > wrote:
>>> The Apache Hive PMC has voted to make Wei Zheng a committer on the
>>> Apache Hive Project. Please join me in congratulating Wei.
>>>
>>> Thanks
>>> Vikram.
>>>
>>>
>>>
>>
>


Re: The index for query in hive 1.2.1 does not work.

2016-02-16 Thread Lefty Leverenz
For more details, see the user@hive discussion January 5 - February 8:  "Is
Hive Index officially not recommended?"

-- Lefty


On Mon, Feb 15, 2016 at 11:52 PM, Mich Talebzadeh <
mich.talebza...@cloudtechnologypartners.co.uk> wrote:

> Hi,
>
>
>
> "Traditional" Indexes are not currently used in Hive. You can create them
> but they are not used by the optimizer.
>
> You can create storage indexes in Hive using ORC file format that provides
> three levels of granularity
>
>1. ORC File itself
>2. Multiple stripes within the ORC file
>3. Multiple row groups (row batches) within each stripe
>
> Effectively:
>
>- Chunks of data making up ORC file stored as storage index. *Storage
>index* is the term used for the combined Index and statistics.
>- Each Storage Index has statistics of min, max, count, and sum for
>each column in the grouping of rows in batches of 10,000 called *row
>group*. Row group both *has row data* and *index data*
>- Crucially, it needs the location of the start of each row group, so
>that the query could jump straight to the beginning of the row group so
>narrowing down the search path.
>- The query should perform a SARG pushdown that limits which rows are
>required for the query and can avoid reading an entire file, or at least
>sections of the file which is by and large what a conventional RDBMS B-tree
>index does.
>- Support for new ACID features in Hive (insert, update and delete).
>
>
>
> HTH.
>
>
>
> Mich
>
>
>
> On 16/02/2016 03:17, 万修远 wrote:
>
> Hello,
>
> *When I use index in hive 1.2.1, I find the index does not work.  The
> details are as follows:*
>
> 1. After using the index, the query speed does not improve. If I use the
> index manually, the query speed improves obviously, but when I switch to
> automatic use of indexes, the speed makes no difference relative to not
> using the index.
>
> 2. After rebuilding the index, I add a new text file which includes one
> record matching my query filter to the table directory. Then, the query
> results will show the record included in the new text file. (Appending a
> new record to the same file but in a different block behaves the same way.)
>
> 3. When debugging the Hive source code I find that the function
> generateIndexQuery of class CompactIndexHandler isn't called. Finally I
> find that the function compile in class TaskCompiler returns early at the
> following statements:
>
> if (pCtx.getFetchTask() != null) {
>   return;
> }
>
> This will result in the index not working for the query, but I don't know
> why the FetchTask is set because I know little about Hive.
>
> 
>
> *So, my questions are:*
> 1. Does Hive 1.2.1 support indexes normally? If it supports indexes
> completely, what's my issue?
> 2. I want to know how indexes are used to optimize queries; where can I
> find some references?
>
> 
>
> *Appendix: How do I use index in hive 1.2.1*
>
> 1.create table and load data:
>
> create table table01( id int, name string)
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t';
> load data local inpath '/home/hadoop/data/dual.txt' overwrite into table 
> table01;
>
> 2.create and rebuild index:
>
> create index table01_index on table table01(id) as 
> 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' with deferred 
> rebuild;
> alter index table01_index on table01 rebuild;
>
> 3.set properties:
>
> set hive.optimize.index.filter.compact.minsize=0;
> set hive.optimize.index.filter.compact.maxsize=-1;
> set hive.index.compact.query.max.size=-1;
> set hive.index.compact.query.max.entries=-1;
> set Hive.optimize.index.groupby=false;
> set hive.optimize.index.filter=true;
> set hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat;
>
> 4.execute query statement:
>
> select * from table01 where id =50;
>
> Thanks!
> --
> Jason
>
>
>
>
> --
>
> Dr Mich Talebzadeh
>
> LinkedIn  
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> http://talebzadehmich.wordpress.com
>
> NOTE: The information in this email is proprietary and confidential. This 
> message is for the designated recipient only, if you are not the intended 
> recipient, you should destroy it immediately. Any information in this message 
> shall not be understood as given or endorsed by Cloud Technology Partners 
> Ltd, its subsidiaries or their employees, unless expressly so stated. It is 
> the responsibility of the recipient to ensure that this email is virus free, 
> therefore neither Cloud Technology partners Ltd, its subsidiaries nor their 
> employees accept any responsibility.
>
>
>
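Mich's description of ORC row-group statistics can be made concrete with a toy model — plain Python, not ORC's actual reader code: keep min/max per fixed-size row group (ORC uses 10,000 rows) and skip any group whose range cannot satisfy the predicate. All names below are illustrative.

```python
# Toy model of ORC-style row-group pruning: keep per-group min/max
# statistics and scan only groups whose range can satisfy "value == target".

def build_row_groups(values, group_size):
    """Split values into row groups, each carrying min/max statistics."""
    groups = []
    for start in range(0, len(values), group_size):
        rows = values[start:start + group_size]
        groups.append({"rows": rows, "min": min(rows), "max": max(rows)})
    return groups

def query_equals(groups, target):
    """Scan only the row groups whose [min, max] range can contain target."""
    scanned, hits = 0, []
    for g in groups:
        if g["min"] <= target <= g["max"]:   # SARG check against statistics
            scanned += 1
            hits.extend(r for r in g["rows"] if r == target)
    return hits, scanned

groups = build_row_groups(list(range(100_000)), group_size=10_000)
hits, scanned = query_equals(groups, 54_321)
print(hits, scanned)   # [54321] 1 -- only 1 of 10 row groups is read
```

The pruning only helps when data is clustered so that min/max ranges are narrow; on randomly ordered data every group's range may cover the target and nothing is skipped.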


Re: Hive optimizer

2016-02-04 Thread Lefty Leverenz
You can find Hive CBO information in Cost Based Optimizer in Hive

.

-- Lefty


On Wed, Feb 3, 2016 at 11:48 AM, John Pullokkaran <
jpullokka...@hortonworks.com> wrote:

> Its both.
> Some of the optimizations are rule based and some are cost based.
>
> John
>
> From: Ashok Kumar 
> Reply-To: "user@hive.apache.org" , Ashok Kumar <
> ashok34...@yahoo.com>
> Date: Wednesday, February 3, 2016 at 11:45 AM
> To: User 
> Subject: Hive optimizer
>
>   Hi,
>
> Is Hive optimizer a cost based Optimizer (CBO) or a rule based optimizer
> (CBO) or none of them.
>
> thanks
>


Re: ORC format

2016-02-02 Thread Lefty Leverenz
Can't resist teasing Mich about this:  "Indeed one often demoralises data
taking advantages of massive parallel processing in Hive."

Surely he meant denormalizes .
Nobody would want to demoralise their data -- performance would suffer.  ;)

-- Lefty


On Mon, Feb 1, 2016 at 10:00 AM, Mich Talebzadeh 
wrote:

> Thanks Alan for this explanation. Interesting to see Primary Key in Hive.
>
>
>
>
>
> Sometimes comparison is made between Hive Storage Index concept in Orc and
> Oracle Exadata  storage index that also uses the same terminology!
>
>
>
> It is a bit of a misnomer to call Oracle Exadata indexes a “storage
> index”, since it appears that Exadata stores data block from tables in the
> storage index, usually when they are accessed via a full-table scan.  In
> this context Exadata storage index is not a “real” index in the sense that
> the storage index exists only in RAM, and it must be re-created from
> scratch when the Exadata server is bounced.
>
>
>
> Oracle Exadata  and SAP HANA as far as I know force serial scans into
> Hardware - with HANA, it is by pushing the bitmaps into the L2 cache on the
> chip - Oracle has special processors on SPARC T5 called D 
> that offloads the column bit scan off the CPU and onto separate specialized
> HW.  As a result, both rely on massive parallelization..
>
>
>
>
>
> Orc storage index is neat and different from both Exadata and SAP HANA,
> The way I see ORC storage indexes
>
>
>
> · They are combined Index and statistics.
>
> · Each index has statistics of min, max, count, and sum for each
> column in the row group of 10,000 rows.
>
> · Crucially, it has the location of the start of each row group,
> so that the query can jump straight to the beginning of the row group.
>
> · The query can do  a SARG pushdown that limits which rows are
> required for the query and can avoid reading an entire file, or at least
> sections of the file which is by and large what a conventional RDBMS B-tree
> index does.
>
>
>
>
>
> Cheers,
>
>
>
> Dr Mich Talebzadeh
>
>
>
> LinkedIn * 
> https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
> *
>
>
>
> *Sybase ASE 15 Gold Medal Award 2008*
>
> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>
>
> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>
> Author of the books* "A Practitioner’s Guide to Upgrading to Sybase ASE
> 15", ISBN 978-0-9563693-0-7*.
>
> co-author *"Sybase Transact SQL Guidelines Best Practices", ISBN
> 978-0-9759693-0-4*
>
> *Publications due shortly:*
>
> *Complex Event Processing in Heterogeneous Environments*, ISBN:
> 978-0-9563693-3-8
>
> *Oracle and Sybase, Concepts and Contrasts*, ISBN: 978-0-9563693-1-4, volume
> one out shortly
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> NOTE: The information in this email is proprietary and confidential. This
> message is for the designated recipient only, if you are not the intended
> recipient, you should destroy it immediately. Any information in this
> message shall not be understood as given or endorsed by Peridale Technology
> Ltd, its subsidiaries or their employees, unless expressly so stated. It is
> the responsibility of the recipient to ensure that this email is virus
> free, therefore neither Peridale Technology Ltd, its subsidiaries nor their
> employees accept any responsibility.
>
>
>
> *From:* Alan Gates [mailto:alanfga...@gmail.com]
> *Sent:* 01 February 2016 17:07
> *To:* user@hive.apache.org
> *Subject:* Re: ORC format
>
>
>
> ORC does not currently expose a primary key to the user, though we have
> talked of having it do that.  As Mich says the indexing on ORC is oriented
> towards statistics that help the optimizer plan the query.  This can be
> very important in split generation (determining which parts of the input
> will be read by which tasks) as well as on the fly input pruning (deciding
> not to read a section of the file because the stats show that no rows in
> that section will match a predicate).  Either of these can help joins.  But
> as there is not a user visible primary key there's no ability to rewrite
> the join as an index based join, which I think is what you were asking
> about in your original email.
>
> Alan.
>
>
> *Philip Lee* 
>
> February 1, 2016 at 7:27
>
> Also,
>
> when making ORC from CSV,
>
> for indexing, is a key made on each column, or is a primary key made on
> the table?
>
>
>
> If keys are made on each column in a table, accessing any column in some
> functions like filtering should be faster.
>
>
>
>
>
>
> --
>
> ==
>
> *Hae Joon Lee*
>
>
>
> Now, in Germany,
>
> M.S. Candidate, Interested in Distributed System, Iterative Processing
>
> Dept. of Computer Science, Informatik 

Re: Hive Buckets and Select queries

2015-12-31 Thread Lefty Leverenz
cc:  user@hive.apache.org

-- Lefty



On Mon, Dec 28, 2015 at 11:00 PM, Varadharajan Mukundan <
srinath...@gmail.com> wrote:

> Hi All,
>
> Say i have a table with below schema:
>
> CREATE TABLE foo (id INT) CLUSTERED BY (id) INTO 8 BUCKETS STORED AS ORC;
>
> and when we issue the following query, it's doing a full table scan:
>
> SELECT * FROM foo WHERE id=
>
> After doing some searching on the net, I found that "table sample" seems to
> be one of the ways to resolve this and the query would be written in this
> manner:
>
> SELECT * FROM foo TABLE SAMPLE(BUCKET  of
> 8) where id=
>
> I was expecting the above query to read only the 2nd bucket but to my
> surprise it did full table scan again. I understand that partitioning in
> hive is the way to go for such queries by i have two points on why we need
> such filtering techniques for buckets as well.
>
> 1. Partitioning may not be suitable / preferred when there are lots of
> partitions (high cardinality columns)
> 2. Buckets just work out of the box, without specifying things like
> partitioning keys etc.. in the queries like "insert clauses".
>
> I was wondering if there are any technical constraints on why we are not
> able to restrict the scan to only that bucket for such pointed queries?
>
> --
> Thanks,
> M. Varadharajan
>
> 
>
> "Experience is what you get when you didn't get what you wanted"
>-By Prof. Randy Pausch in "The Last Lecture"
>
> My Journal :- http://varadharajan.in
>
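For reference, the documented sampling clause is spelled `TABLESAMPLE(BUCKET x OUT OF y [ON col])`, and the caller has to pick the bucket number as `hash(key) % y + 1` (buckets are 1-based; for an integer key the hash is the value itself). A sketch against the table above — the literal id is made up:

```sql
-- foo is CLUSTERED BY (id) INTO 8 BUCKETS.
-- id = 12345 lands in bucket (12345 % 8) + 1 = 2.
SELECT *
FROM foo TABLESAMPLE(BUCKET 2 OUT OF 8 ON id)
WHERE id = 12345;
```

Whether this actually prunes the files read depends on the Hive version; as the thread observes, pointed equality queries are not automatically rewritten this way.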


Re: wiki access

2015-12-31 Thread Lefty Leverenz
Done.  Welcome to the Hive wiki team, Andrew!

-- Lefty


On Wed, Dec 30, 2015 at 8:35 AM, Sears, Andrew <
andrew.se...@analyticsdream.com> wrote:

>
> Good morning,
>
> I am hoping to assist as a contributor for Apache Hive, can I start by
> getting
> access to edit and export the wiki?
>
> My confluence user id is asears.
>
> Thank you,
> Andrew Sears
> andrew.se...@analyticsdream.com
>


Re: Managed to make Hive run on Spark engine

2015-12-06 Thread Lefty Leverenz
Congratulations!

-- Lefty

On Sun, Dec 6, 2015 at 3:32 PM, Mich Talebzadeh  wrote:

> Thanks all, especially to Xuefu, for the contributions. Finally it works,
> which means don’t give up until it works :)
>
>
>
> hduser@rhes564::/usr/lib/hive/lib> hive
>
> Logging initialized using configuration in
> jar:file:/usr/lib/hive/lib/hive-common-1.2.1.jar!/hive-log4j.properties
>
> *hive> set spark.home= /usr/lib/spark-1.3.1-bin-hadoop2.6;*
>
> *hive> set hive.execution.engine=spark;*
>
> *hive> set spark.master=spark://50.140.197.217:7077
> ;*
>
> *hive> set spark.eventLog.enabled=true;*
>
> *hive> set spark.eventLog.dir= /usr/lib/spark-1.3.1-bin-hadoop2.6/logs;*
>
> *hive> set spark.executor.memory=512m;*
>
> *hive> set spark.serializer=org.apache.spark.serializer.KryoSerializer;*
>
> *hive> set hive.spark.client.server.connect.timeout=22ms;*
>
> *hive> set
> spark.io.compression.codec=org.apache.spark.io.LZFCompressionCodec;*
>
> hive> use asehadoop;
>
> OK
>
> Time taken: 0.638 seconds
>
> hive> *select count(1) from t;*
>
> Query ID = hduser_20151206200528_4b85889f-e4ca-41d2-9bd2-1082104be42b
>
> Total jobs = 1
>
> Launching Job 1 out of 1
>
> In order to change the average load for a reducer (in bytes):
>
>   set hive.exec.reducers.bytes.per.reducer=
>
> In order to limit the maximum number of reducers:
>
>   set hive.exec.reducers.max=
>
> In order to set a constant number of reducers:
>
>   set mapreduce.job.reduces=
>
> Starting Spark Job = c8fee86c-0286-4276-aaa1-2a5eb4e4958a
>
>
>
> Query Hive on Spark job[0] stages:
>
> 0
>
> 1
>
>
>
> Status: Running (Hive on Spark job[0])
>
> Job Progress Format
>
> CurrentTime StageId_StageAttemptId:
> SucceededTasksCount(+RunningTasksCount-FailedTasksCount)/TotalTasksCount
> [StageCost]
>
> 2015-12-06 20:05:36,299 Stage-0_0: 0(+1)/1  Stage-1_0: 0/1
>
> 2015-12-06 20:05:39,344 Stage-0_0: 1/1 Finished Stage-1_0: 0(+1)/1
>
> 2015-12-06 20:05:40,350 Stage-0_0: 1/1 Finished Stage-1_0: 1/1 Finished
>
> Status: Finished successfully in 8.10 seconds
>
> OK
>
>
>
> The versions used for this project
>
>
>
>
>
> OS version Linux version 2.6.18-92.el5xen (
> brewbuil...@ls20-bc2-13.build.redhat.com) (gcc version 4.1.2 20071124
> (Red Hat 4.1.2-41)) #1 SMP Tue Apr 29 13:31:30 EDT 2008
>
>
>
> Hadoop 2.6.0
>
> Hive 1.2.1
>
> spark-1.3.1-bin-hadoop2.6 (downloaded from prebuild 
> spark-1.3.1-bin-hadoop2.6.gz
> for starting spark standalone cluster)
>
> The Jar file used in $HIVE_HOME/lib to link Hive to spark was à
> spark-assembly-1.3.1-hadoop2.4.0.jar
>
>(built from the source downloaded as zipped file spark-1.3.1.gz and
> built with command line make-distribution.sh --name
> "hadoop2-without-hive" --tgz
> "-Pyarn,hadoop-provided,hadoop-2.4,parquet-provided"
>
>
>
> Pretty picky on parameters, CLASSPATH, IP addresses or hostname etc to
> make it work
>
>
>
> I will create a full guide on how to build and make Hive to run with Spark
> as its engine (as opposed to MR).
>
>
>
> HTH
>
>
>
> Mich Talebzadeh
>
>
>
> *Sybase ASE 15 Gold Medal Award 2008*
>
> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>
>
> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>
> Author of the books* "A Practitioner’s Guide to Upgrading to Sybase ASE
> 15", ISBN 978-0-9563693-0-7*.
>
> co-author *"Sybase Transact SQL Guidelines Best Practices", ISBN
> 978-0-9759693-0-4*
>
> *Publications due shortly:*
>
> *Complex Event Processing in Heterogeneous Environments*, ISBN:
> 978-0-9563693-3-8
>
> *Oracle and Sybase, Concepts and Contrasts*, ISBN: 978-0-9563693-1-4, volume
> one out shortly
>
>
>
> http://talebzadehmich.wordpress.com
>
>
>
> NOTE: The information in this email is proprietary and confidential. This
> message is for the designated recipient only, if you are not the intended
> recipient, you should destroy it immediately. Any information in this
> message shall not be understood as given or endorsed by Peridale Technology
> Ltd, its subsidiaries or their employees, unless expressly so stated. It is
> the responsibility of the recipient to ensure that this email is virus
> free, therefore neither Peridale Ltd, its subsidiaries nor their employees
> accept any responsibility.
>
>
>


Re: how to search the archive

2015-12-06 Thread Lefty Leverenz
I've been hoping someone else would answer the question about searching the
archives, but here's what I know:

   1. The Apache archives linked from Hive's mailing lists page don't seem
      to be searchable.
   2. Links to searchable MarkMail archives are broken on the mailing lists
   page.
   3. Find the MarkMail archives here:
   http://apache.markmail.org/search/?q=hive.
  - Clicking on a list puts it in the upper-right search field.
  - Add your search string in quotes.
  - Example:
  
http://apache.markmail.org/search/?q=hive+list%3Aorg.apache.hadoop.hive-user+%22search+the+archive%22
  .
  - The ? button gives more search functions.

If anyone knows more, please tell us.

-- Lefty

On Sun, Dec 6, 2015 at 6:08 AM, Awhan Patnaik  wrote:

>
>
> On Fri, Dec 4, 2015 at 8:41 PM, Takahiko Saito  wrote:
>
>> Could a table be an external table?
>>
>>
>>
> Yes.
> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL#LanguageManualDDL-ExternalTables
>


Re: Quick Question

2015-12-04 Thread Lefty Leverenz
Here's the documentation:  Parquet
.

-- Lefty

On Fri, Dec 4, 2015 at 5:03 PM, Xuefu Zhang  wrote:

> Create a table with the file and query the table. Parquet is fully
> supported in Hive.
>
> --Xuefu
>
> On Fri, Dec 4, 2015 at 10:58 AM, Siva Kanth Sattiraju (ssattira) <
> ssatt...@cisco.com> wrote:
>
>> Hi All,
>>
>> Is there a way to read “parquet” file through Hive?
>>
>> Regards,
>> Siva
>>
>>
>
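Concretely, Xuefu's suggestion looks like the DDL below — a sketch assuming the Parquet files already sit in one directory (the path and column list are made up for illustration; `STORED AS PARQUET` is native from Hive 0.13 onward):

```sql
-- Expose existing Parquet files as an external table, then query them.
CREATE EXTERNAL TABLE parquet_events (
  id   BIGINT,
  name STRING
)
STORED AS PARQUET
LOCATION '/user/hive/warehouse/parquet_events';

SELECT count(*) FROM parquet_events;
```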


Re: Upgrading from Hive 0.14.0 to Hive 1.2.1

2015-11-24 Thread Lefty Leverenz
Here's the documentation for Hive's schema tool:
https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool.

-- Lefty

On Tue, Nov 24, 2015 at 2:12 PM, Xuefu Zhang  wrote:

> This upgrade should be no different from any other upgrade. You can use Hive's
> schema tool to upgrade your existing metadata.
>
> Thanks,
> Xuefu
>
> On Tue, Nov 24, 2015 at 10:05 AM, Mich Talebzadeh 
> wrote:
>
>> Hi,
>>
>>
>>
>> I would like to upgrade to Hive 1.2.1 as I understand one cannot deploy
>> Spark execution engine on 0.14
>>
>>
>>
>> *Chooses execution engine. Options are: **mr** (Map reduce, default), *
>> *tez** (Tez
>>  execution,
>> for Hadoop 2 only), or **spark** (Spark
>>  execution,
>> for Hive 1.1.0 onward).*
>>
>>
>>
>> Is there any upgrade path (I don’t want to lose my existing databases in
>> Hive) or I have to start from new including generating new metatsore etc?
>>
>>
>>
>> Thanks,
>>
>>
>>
>>
>>
>> Mich Talebzadeh
>>
>>
>>
>> *Sybase ASE 15 Gold Medal Award 2008*
>>
>> A Winning Strategy: Running the most Critical Financial Data on ASE 15
>>
>>
>> http://login.sybase.com/files/Product_Overviews/ASE-Winning-Strategy-091908.pdf
>>
>> Author of the books* "A Practitioner’s Guide to Upgrading to Sybase ASE
>> 15", ISBN 978-0-9563693-0-7*.
>>
>> co-author *"Sybase Transact SQL Guidelines Best Practices", ISBN
>> 978-0-9759693-0-4*
>>
>> *Publications due shortly:*
>>
>> *Complex Event Processing in Heterogeneous Environments*, ISBN:
>> 978-0-9563693-3-8
>>
>> *Oracle and Sybase, Concepts and Contrasts*, ISBN: 978-0-9563693-1-4, volume
>> one out shortly
>>
>>
>>
>> http://talebzadehmich.wordpress.com
>>
>>
>>
>> NOTE: The information in this email is proprietary and confidential. This
>> message is for the designated recipient only, if you are not the intended
>> recipient, you should destroy it immediately. Any information in this
>> message shall not be understood as given or endorsed by Peridale Technology
>> Ltd, its subsidiaries or their employees, unless expressly so stated. It is
>> the responsibility of the recipient to ensure that this email is virus
>> free, therefore neither Peridale Ltd, its subsidiaries nor their employees
>> accept any responsibility.
>>
>>
>>
>
>
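The schema-tool invocation behind that upgrade looks roughly like this. It is shown for illustration only — it needs a live metastore database; the `mysql` dbType is an assumption (substitute your own), and back up the metastore first:

```shell
# Report the current metastore schema version
$HIVE_HOME/bin/schematool -dbType mysql -info

# Upgrade the metastore schema in place for the new Hive version
$HIVE_HOME/bin/schematool -dbType mysql -upgradeSchema
```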


Re: [ANNOUNCE] New PMC Member : John Pullokkaran

2015-11-24 Thread Lefty Leverenz
Congratulations John!

-- Lefty

On Tue, Nov 24, 2015 at 6:31 PM, Navis Ryu  wrote:

> Congratulations!
>
> On Wednesday, November 25, 2015, Hari Sivarama Subramaniyan <
> hsubramani...@hortonworks.com> wrote:
>
> Congrats, John!
>> --
>> *From:* Eugene Koifman 
>> *Sent:* Tuesday, November 24, 2015 3:14 PM
>> *To:* user@hive.apache.org; d...@hive.apache.org
>> *Subject:* Re: [ANNOUNCE] New PMC Member : John Pullokkaran
>>
>> Congrats!
>>
>> From: Sergey Shelukhin 
>> Reply-To: "user@hive.apache.org" 
>> Date: Tuesday, November 24, 2015 at 3:13 PM
>> To: "user@hive.apache.org" , "d...@hive.apache.org"
>> 
>> Subject: Re: [ANNOUNCE] New PMC Member : John Pullokkaran
>>
>> Congrats!
>>
>> From: Jimmy Xiang 
>> Reply-To: "user@hive.apache.org" 
>> Date: Tuesday, November 24, 2015 at 15:07
>> To: "d...@hive.apache.org" 
>> Cc: "user@hive.apache.org" 
>> Subject: Re: [ANNOUNCE] New PMC Member : John Pullokkaran
>>
>> Congrats!!
>>
>> On Tue, Nov 24, 2015 at 3:04 PM, Szehon Ho  wrote:
>>
>>> Congratulations!
>>>
>>> On Tue, Nov 24, 2015 at 3:02 PM, Xuefu Zhang 
>>> wrote:
>>>
>>> > Congratulations, John!
>>> >
>>> > --Xuefu
>>> >
>>> > On Tue, Nov 24, 2015 at 3:01 PM, Prasanth J 
>>> > wrote:
>>> >
>>> >> Congratulations and Welcome John!
>>> >>
>>> >> Thanks
>>> >> Prasanth
>>> >>
>>> >> On Nov 24, 2015, at 4:59 PM, Ashutosh Chauhan 
>>> >> wrote:
>>> >>
>>> >> On behalf of the Hive PMC I am delighted to announce John Pullokkaran
>>> is
>>> >> joining Hive PMC.
>>> >> John is a long time contributor in Hive and is focusing on compiler
>>> and
>>> >> optimizer areas these days.
>>> >> Please give John a warm welcome to the project!
>>> >>
>>> >> Ashutosh
>>> >>
>>> >>
>>> >>
>>> >
>>>
>>
>>


Re: Building Spark to use for Hive on Spark

2015-11-22 Thread Lefty Leverenz
Gopal, can you confirm the doc change that Jone Zhang suggests?  The second
sentence confuses me:  "You can choose Spark1.5.0+ which  build include the
Hive jars."

Thanks.

-- Lefty


On Thu, Nov 19, 2015 at 8:33 PM, Jone Zhang  wrote:

> I should add that Spark1.5.0+ is used hive1.2.1 default when you use -Phive
>
> So this page
> 
>  shoule
> write like below
> “Note that you must have a version of Spark which does *not* include the
> Hive jars if you use Spark1.4.1 and before, You can choose Spark1.5.0+
> which  build include the Hive jars ”
>
>
> 2015-11-19 5:12 GMT+08:00 Gopal Vijayaraghavan :
>
>>
>>
>> > I wanted to know  why is it necessary to remove the Hive jars from the
>> >Spark build as mentioned on this
>>
>> Because SparkSQL was originally based on Hive & still uses Hive AST to
>> parse SQL.
>>
>> The org.apache.spark.sql.hive package contains the parser which has
>> hard-references to the hive's internal AST, which is unfortunately
>> auto-generated code (HiveParser.TOK_TABNAME etc).
>>
>> Everytime Hive makes a release, those constants change in value and that
>> is private API because of the lack of backwards-compat, which is violated
>> by SparkSQL.
>>
>> So Hive-on-Spark forces mismatched versions of Hive classes, because it's
>> a circular dependency of Hive(v1) -> Spark -> Hive(v2) due to the basic
>> laws of causality.
>>
>> Spark cannot depend on a version of Hive that is unreleased and
>> Hive-on-Spark release cannot depend on a version of Spark that is
>> unreleased.
>>
>> Cheers,
>> Gopal
>>
>>
>>
>
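For reference, the earlier "Managed to make Hive run on Spark engine" message in this digest shows the build command that produces such a Hive-free assembly (Spark 1.3.1; the profile list varies by Hadoop version, so treat this as an example, not a recipe):

```shell
# Build a Spark distribution whose assembly does not bundle Hive classes
./make-distribution.sh --name "hadoop2-without-hive" --tgz \
    "-Pyarn,hadoop-provided,hadoop-2.4,parquet-provided"
```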


Re: Building Spark to use for Hive on Spark

2015-11-22 Thread Lefty Leverenz
Thanks Xuefu!

-- Lefty

On Mon, Nov 23, 2015 at 1:09 AM, Xuefu Zhang <xzh...@cloudera.com> wrote:

> Hive on Spark is supposed to work with any version of Hive (1.1+) and a
> version of Spark built w/o Hive. Thus, to make HoS work reliably and also
> to simplify matters, I think it still makes sense to require that the
> spark-assembly jar shouldn't contain Hive jars. Otherwise, you have to make
> sure that your Hive version matches the "other" Hive version that's
> included in Spark.
>
> In CDH 5.x, Spark version is 1.5, and we still build Spark jar w/o Hive.
>
> Therefore, I don't see a need to update the doc.
>
> --Xuefu
>
> On Sun, Nov 22, 2015 at 9:23 PM, Lefty Leverenz <leftylever...@gmail.com>
> wrote:
>
>> Gopal, can you confirm the doc change that Jone Zhang suggests?  The
>> second sentence confuses me:  "You can choose Spark1.5.0+ which  build
>> include the Hive jars."
>>
>> Thanks.
>>
>> -- Lefty
>>
>>
>> On Thu, Nov 19, 2015 at 8:33 PM, Jone Zhang <joyoungzh...@gmail.com>
>> wrote:
>>
>>> I should add that Spark1.5.0+ is used hive1.2.1 default when you use
>>> -Phive
>>>
>>> So this page
>>> <https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started>
>>>  shoule
>>> write like below
>>> “Note that you must have a version of Spark which does *not* include
>>> the Hive jars if you use Spark1.4.1 and before, You can choose
>>> Spark1.5.0+ which  build include the Hive jars ”
>>>
>>>
>>> 2015-11-19 5:12 GMT+08:00 Gopal Vijayaraghavan <gop...@apache.org>:
>>>
>>>>
>>>>
>>>> > I wanted to know  why is it necessary to remove the Hive jars from the
>>>> >Spark build as mentioned on this
>>>>
>>>> Because SparkSQL was originally based on Hive & still uses Hive AST to
>>>> parse SQL.
>>>>
>>>> The org.apache.spark.sql.hive package contains the parser which has
>>>> hard-references to the hive's internal AST, which is unfortunately
>>>> auto-generated code (HiveParser.TOK_TABNAME etc).
>>>>
>>>> Everytime Hive makes a release, those constants change in value and that
>>>> is private API because of the lack of backwards-compat, which is
>>>> violated
>>>> by SparkSQL.
>>>>
>>>> So Hive-on-Spark forces mismatched versions of Hive classes, because
>>>> it's
>>>> a circular dependency of Hive(v1) -> Spark -> Hive(v2) due to the basic
>>>> laws of causality.
>>>>
>>>> Spark cannot depend on a version of Hive that is unreleased and
>>>> Hive-on-Spark release cannot depend on a version of Spark that is
>>>> unreleased.
>>>>
>>>> Cheers,
>>>> Gopal
>>>>
>>>>
>>>>
>>>
>>
>


Re: override log4j level

2015-11-17 Thread Lefty Leverenz
This is documented in the wiki:  Hive Logging

.

-- Lefty

On Mon, Nov 16, 2015 at 11:49 PM, Amey Barve  wrote:

> Hi Patcharee,
>
> Use
> *--hiveconf hive.root.logger=ERROR,console*
>
> Regards,
> Amey
>
> On Mon, Nov 16, 2015 at 10:57 PM, pth001 
> wrote:
>
>> Hi,
>>
>> How can I override log4j level by using --hiveconf? I want to use ERROR
>> level for some tasks.
>>
>> Thanks,
>> Patcharee
>>
>
>


Re: [VOTE] Hive 2.0 release plan

2015-11-15 Thread Lefty Leverenz
+1

(Re-adding user@hive.apache.org.)

-- Lefty

On Sun, Nov 15, 2015 at 8:09 PM, Prasanth Jayachandran <
pjayachand...@hortonworks.com> wrote:

> +1
>
> Thanks
> Prasanth
> > On Nov 13, 2015, at 4:26 PM, Vaibhav Gumashta 
> wrote:
> >
> > +1
> >
> > Thanks,
> > --Vaibhav
> >
> >
> >
> >
> >
> > On Fri, Nov 13, 2015 at 2:24 PM -0800, "Tristram de Lyones" <
> delyo...@gmail.com> wrote:
> >
> > +1
> >
> > On Fri, Nov 13, 2015 at 1:38 PM, Sergey Shelukhin <
> ser...@hortonworks.com>
> > wrote:
> >
> >> Hi.
> >> With no strong objections on DISCUSS thread, some issues raised and
> >> addressed, and a reminder from Carl about the bylaws for the release
> >> process, I propose we release the first version of Hive 2 (2.0), and
> >> nominate myself as release manager.
> >> The goal is to have the first release of Hive with aggressive set of new
> >> features, some of which are ready to use and some are at experimental
> >> stage and will be developed in future Hive 2 releases, in line with the
> >> Hive-1-Hive-2 split discussion.
> >> If the vote passes, the timeline to create a branch should be around the
> >> end of next week (to minimize merging in the wake of the release), and
> the
> >> timeline to release would be around the end of November, depending on
> the
> >> issues found during the RC cutting process, as usual.
> >>
> >> Please vote:
> >> +1 proceed with the release plan
> >> +-0 don’t care
> >> -1 don’t proceed with the release plan, for such and such reasons
> >>
> >> The vote will run for 3 days.
> >>
> >>
>
>
>
> Thanks
> -- Prasanth
>
>

