Thanks Vihang!

> On Apr 13, 2018, at 12:36 PM, Vihang Karajgaonkar <vih...@cloudera.com> wrote:
> 
> Hi Vineet,
> 
> I created a profile on ptest-server so that tests can be run on branch-3.
> It works the same way as for branch-2 patches: you will need to include
> branch-3 in the patch name, e.g. HIVE-1234.01-branch-3.patch.
> 
> -Vihang
> 
> 
> 
> On Mon, Apr 9, 2018 at 4:35 PM, Vineet Garg <vg...@hortonworks.com> wrote:
> 
>> I have created an umbrella JIRA to investigate and fix test failures for
>> Hive 3.0.0: https://issues.apache.org/jira/browse/HIVE-19142.
>> Please link any other existing JIRAs related to test failures to this
>> umbrella JIRA.
>> 
>> Also, how do we run tests on branch-3? Is there some setup to be done?
>> 
>> -Vineet
>> 
>> On Apr 9, 2018, at 4:26 AM, Zoltan Haindrich <zhaindr...@hortonworks.com> wrote:
>> 
>> Hello
>> 
>> A few weeks ago I tried to hunt down this problem. To the best of my
>> knowledge, the cause seems to be the following:
>> 
>> * in some cases the "cleanup" after a failed query may somehow leave some
>> threads behind
>> * these threads hold a reference to the "customized" session classloader,
>> which makes them much more memory hungry
>> * after a while these threads/classloaders eat up the heap (see the sketch
>> below)
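>> 
>> As an illustration only (a minimal, hypothetical Java sketch of the pattern
>> above, not actual Hive code; all names are made up): a thread that survives
>> the failed query keeps the session classloader strongly reachable through
>> its context classloader, so the loader and every class it loaded can never
>> be collected.
>> 
>>   import java.net.URL;
>>   import java.net.URLClassLoader;
>> 
>>   public class LeakSketch {
>>     public static void main(String[] args) {
>>       // stands in for the per-session "customized" classloader
>>       ClassLoader sessionLoader =
>>           new URLClassLoader(new URL[0], LeakSketch.class.getClassLoader());
>> 
>>       // a worker thread that the failed query's "cleanup" never stops
>>       Thread leftover = new Thread(() -> {
>>         while (true) {
>>           try { Thread.sleep(60_000); } catch (InterruptedException e) { return; }
>>         }
>>       });
>>       // the live thread pins the classloader via its context classloader
>>       leftover.setContextClassLoader(sessionLoader);
>>       leftover.start();
>> 
>>       // the "session" ends here, but the loader (and everything it loaded)
>>       // stays strongly reachable through the leftover thread
>>       sessionLoader = null;
>>     }
>>   }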
>> 
>> I've opened HIVE-18522 for this thread issue.
>> 
>> I don't think this problem is new; it may well have been present earlier.
>> The only thing that changed is that a few new features have added new
>> UDFs etc., which made the memory cost of a session heavier.
>> As a side note: I'm not convinced that this issue will arise in a proper
>> HS2 setup, since it may well be tied to the fact that these tests use the
>> CLI driver to execute the queries.
>> 
>> 
>> cheers,
>> Zoltan
>> 
>> On 7 Apr 2018 7:15 p.m., Ashutosh Chauhan <hashut...@apache.org> wrote:
>> We need to investigate and find the root cause of these failures. If it's
>> determined that a failure is a corner case and the fix is non-trivial, then
>> we may release-note it under known issues. But ideally we should fix these
>> failures.
>> Cutting a branch should make it easier, since the branch is expected to
>> receive far fewer commits than master, so it should be faster to stabilize
>> the branch.
>> 
>> On Fri, Apr 6, 2018 at 10:49 AM, Eugene Koifman <ekoif...@hortonworks.com> wrote:
>> 
>> Cutting the branch before the tests are stabilized would mean we have to
>> fix them in 2 places.
>> 
>> On 4/6/18, 10:05 AM, "Thejas Nair" <thejas.n...@gmail.com> wrote:
>> 
>>   That needs to be cleaned up. There are far too many right now; it's
>>   not just a handful of flaky tests.
>> 
>> 
>>   On Fri, Apr 6, 2018 at 2:48 AM, Peter Vary <pv...@cloudera.com> wrote:
>> Hi Team,
>> 
>> I am new to the Hive release process and it is not clear to me how
>> failing tests are handled. Do we plan to fix the failing tests before the
>> release? Or is it acceptable to cut a new major release with known test
>> issues?
>> 
>> Thanks,
>> Peter
>> 
>> On Apr 5, 2018, at 8:25 PM, Vineet Garg <vg...@hortonworks.com> wrote:
>> 
>> Hello,
>> 
>> I plan to cut the branch for Hive 3.0.0 on Monday (9 April) since a
>> bunch of folks have big patches pending.
>> 
>> Regards,
>> Vineet G
>> 
>> On Apr 2, 2018, at 3:14 PM, Vineet Garg <vg...@hortonworks.com> wrote:
>> 
>> Hello,
>> 
>> We have enough votes to prepare a release candidate for Hive
>> 3.0.0. I am going to cut a branch in a day or two. I'll send an email as
>> soon as I have the branch ready.
>> Meanwhile, there are approximately 69 JIRAs currently open with fix
>> version 3.0.0. I'd appreciate it if their respective owners would update
>> a JIRA if it is a blocker. Otherwise I'll update them to defer the fix
>> version to the next release.
>> 
>> Regards,
>> Vineet G
>> 
