Hi Grant,

Thanks for the insight.

You mentioned, and I quote:

"Acid tables have been a real pain for us. We don't believe they are
production ready."

Can you please elaborate on this?

Thanks

Mich Talebzadeh

http://talebzadehmich.wordpress.com

Author of the books "A Practitioner’s Guide to Upgrading to Sybase ASE 15",
ISBN 978-0-9563693-0-7. 
co-author "Sybase Transact SQL Guidelines Best Practices", ISBN
978-0-9759693-0-4
Publications due shortly:
Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and
Coherence Cache
Oracle and Sybase, Concepts and Contrasts, ISBN: 978-0-9563693-1-4, volume
one out shortly

NOTE: The information in this email is proprietary and confidential. This
message is for the designated recipient only, if you are not the intended
recipient, you should destroy it immediately. Any information in this
message shall not be understood as given or endorsed by Peridale Ltd, its
subsidiaries or their employees, unless expressly so stated. It is the
responsibility of the recipient to ensure that this email is virus free,
therefore neither Peridale Ltd, its subsidiaries nor their employees accept
any responsibility.


-----Original Message-----
From: Grant Overby (groverby) [mailto:grove...@cisco.com] 
Sent: 14 April 2015 22:02
To: Gopal Vijayaraghavan; user@hive.apache.org
Subject: Re: External Table with unclosed orc files.

Thanks for the link to the hive streaming bolt. We rolled our own bolt many
moons ago to utilize hive streaming. We’ve tried it against 0.13 and
0.14. Acid tables have been a real pain for us. We don’t believe they are
production ready. At least in our use cases, Tez crashes for assorted
reasons or only assigns 1 mapper to the partition. Having delta files and no
base files borks mapper assignments. Files containing "flush" in their name
are left scattered about, borking queries. Latency is higher with streaming
than writing to an orc file in hdfs, forcing obscene quantities of buckets
and orc files smaller than any reasonable orc stripe / hdfs block size. The
compactor hangs seemingly at random for no reason we’ve been able to
discern.



An orc file without a footer is junk data (or, at least, the last stripe is
junk data). I suppose my question should have been 'what will the hive query
do when it encounters this? Skip the stripe / file? Error out the query?
Something else?'




Grant Overby
Software Engineer
Cisco.com <http://www.cisco.com/>
grove...@cisco.com
Mobile: 865 724 4910




 Think before you print. This email may contain confidential and privileged
material for the sole use of the intended recipient. Any review, use,
distribution or disclosure by others is strictly prohibited. If you are not
the intended recipient (or authorized to receive for the recipient), please
contact the sender by reply email and delete all copies of this message.
Please click here
<http://www.cisco.com/web/about/doing_business/legal/cri/index.html> for
Company Registration Information.







On 4/14/15, 4:23 PM, "Gopal Vijayaraghavan" <gop...@apache.org> wrote:

>
>> What will Hive do if querying an external table containing orc files
>>that are still being written to?
>
>Doing that directly won't work at all, because ORC files are only readable
>after the Footer is written out, which won't be the case for any open files.
>
>> I won't be able to test these scenarios till tomorrow and would like to
>>have some idea of what to expect this afternoon.
>
>If I remember correctly, your previous question was about writing ORC from
>Storm.
>
>If you're on a recent version of Storm, I'd advise you to look at
>storm-hive/ 
>
>https://github.com/apache/storm/tree/master/external/storm-hive
>
>
>Or alternatively, there's a "Hortonworks trucking demo" which does a
>partition insert instead.
>
>Cheers,
>Gopal
>
>

