Ah, great Evans. Disk-wise: we should still be able to use the old account's EBS
volume, or do as Nate said. Great, looks like the release is coming along then!
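
Roughly, something along these lines should do for adding a dedicated EBS
volume to the build slave (size, AZ, IDs and mount point below are just
placeholders, not the actual account values):

  # create and attach an extra volume in the instance's availability zone
  aws ec2 create-volume --size 500 --volume-type gp2 --availability-zone us-west-2a
  aws ec2 attach-volume --volume-id vol-12345678 --instance-id i-12345678 --device /dev/sdf

  # on the instance (the device usually shows up as /dev/xvdf on Linux):
  # format and mount it for the release artifacts
  sudo mkfs -t ext4 /dev/xvdf
  sudo mkdir -p /mnt/bigtop-artifacts
  sudo mount /dev/xvdf /mnt/bigtop-artifacts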

Don't forget to vote on it ;)

On August 1, 2015 12:25:13 PM PDT, [email protected] wrote:
>We can either add an EBS volume or increase the existing one, or both
>
>-----Original Message-----
>From: Evans Ye [mailto:[email protected]] 
>Sent: Saturday, August 1, 2015 11:25 AM
>To: [email protected]
>Subject: Re: Gearing up for 1.0.0 (cut-off of the branch)
>
>We do have release builds set up on CI:
>
>http://bigtop01.cloudera.org:8080/view/Releases/job/Bigtop-1.0.0-deb/
>http://bigtop01.cloudera.org:8080/view/Releases/job/Bigtop-1.0.0-rpm/
>
>These builds leverage the new AWS account resources under the hood.
>Once the RC passes the vote, we can simply click the build button and
>(supposedly) produce the whole set of packages.
>
>I haven't shared the information about these CI builds because there's
>still an issue:
>The fixed type of EC2 instance donated by the EMR team does not have a
>big enough local disk to store all the release artifacts, hence I split
>them into two jobs which run on two machines separately (this is why we
>have two release jobs). I don't think it's a big problem; we just need
>to discuss it with Tom. It should be possible to exchange the m3.xlarge
>instances for a storage-optimized instance dedicated to release builds,
>or possibly we can add some EBS storage.
>Anyhow, I'll reach out to Tom. Once we're able to spin up an instance
>with enough storage, I'll merge those two jobs into one build.
>
>
>2015-08-01 15:26 GMT+08:00 Konstantin Boudnik <[email protected]>:
>
>> Either way. I think the old account isn't fully functional, is it?
>> Basically, if we can do "official" packages somehow - it'd be great.
>>
>> On July 31, 2015 11:24:45 PM PDT, [email protected] wrote:
>> >By "our CI" setup do you mean the last build on the current machine,
>> >or doing 1.0 release builds on the new AWS account?
>> >
>> >Think @evans has the building of containers and general bigtop assets
>> >going. I believe there is still some work to set up the final pass on
>> >the new AWS account.
>> >
>> >After we can build the assets and packages, we can also set up some
>> >repos too.
>> >
>> >
>> >-----Original Message-----
>> >From: Konstantin Boudnik [mailto:[email protected]]
>> >Sent: Friday, July 31, 2015 2:28 PM
>> >To: [email protected]
>> >Subject: Re: Gearing up for 1.0.0 (cut-off of the branch)
>> >
>> >I have rolled out RC1 and started the [VOTE].
>> >
>> >Questions to Evans and Nate: do you guys think we can produce the 
>> >binary packages once the release is accepted? It'd be nice to have 
>> >them made on our CI setup, but I am not sure if this is possible at 
>> >the moment.
>> >
>> >Thanks!
>> >  Cos
>> >
>> >On Fri, Jul 31, 2015 at 01:46AM, Konstantin Boudnik wrote:
>> >> Started, finally, preparing RC1 and hit a few issues. Some of them,
>> >> like BIGTOP-1947, were easy to fix, but there's one in particular
>> >> that I have no idea what to do with: BIGTOP-1949.
>> >>
>> >> It seems that Sqoop's artifacts for sqoop-core and sqoop-client from
>> >> 1.4.5 (and 1.4.6) have been pulled off the net, hence the smoke test
>> >> artifact cannot be built. If anyone has any insight into what's going
>> >> on on that front - please share. I am also cross-posting this to
>> >> dev@sqoop so hopefully they will be able to comment on the issue.
>> >> Thanks for the help!
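>> >>
>> >> A quick way to double-check whether those coordinates are resolvable
>> >> from a clean local repo (the org.apache.sqoop group id below is my
>> >> assumption of what the smoke tests reference):
>> >>
>> >>   mvn dependency:get -Dartifact=org.apache.sqoop:sqoop-core:1.4.5
>> >>   mvn dependency:get -Dartifact=org.apache.sqoop:sqoop-client:1.4.5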
>> >>
>> >> Regards,
>> >>   Cos
>> >>
>> >> On Sat, May 30, 2015 at 10:14AM, Konstantin Boudnik wrote:
>> >> > Ok, master is unlocked for 1.1.0-SNAPSHOT development. I have also
>> >> > pushed branch-1.0, which has all the bits for the 1.0 RC, but I
>> >> > don't have time to finish the RC publishing right now - I will try
>> >> > to do it from the train, but who knows if I will have any
>> >> > connection there.
>> >> >
>> >> > If anyone can pick up where I left off - it'd be great: I am really
>> >> > trying to get on my damn vacation ;) If not - I will try to find a
>> >> > bit of time next week for this.
>> >> >
>> >> > Thanks all for your help,
>> >> >   Cos
>>
