Re: [CONFERENCE VIDEOS] - CloudStack Collaboration Conference 2019, Las Vegas

2019-09-23 Thread Voloshanenko Igor
Thank you, Andrija!
Great videos!

Mon, 23 Sep 2019 at 8:14, Andrija Panic :

> Hi guys,
>
> I'm pleased to share the videos of talks/presentations from this year's
> CloudStack Collaboration Conference in Las Vegas (CCC19NA), published on
> the ASF channel (thanks to Rich Bowen).
>
> https://www.youtube.com/playlist?list=PLU2OcwpQkYCyONTMOg3tAcphtgCVOHcW1
>
> ///
> Now, I have invested a lot of hours into this, so I kindly ask you to at
> least promote the video(s) on your LinkedIn, Twitter and other channels -
> specifically the *keynote* video, where we have 4 special guests from
> *Apple, BT, Leaseweb and Ticketmaster* - spread the word about these
> high-profile guests, please, for the sake of the project's marketing.
> ///
>
> This is 21 videos in total, instead of 24, because:
> - 1 presentation was not held (Kubernetes on Bare Metal)
> - presentations from Marcus (CloudStack API development 101) and Rafael
> (OpenStack from a CloudStack perspective) were not recorded successfully (I
> assume the topics were too complex for the camera to digest...) - joke
> aside, my apologies for that.
>
> I've done my best to filter out sound noise and enhance what could be
> enhanced, and I think the results are very good (considering everything).
>
> If there are any errors in speaker names, titles or companies, kindly let me
> know; I'll keep the Adobe Premiere projects for a week or two.
>
> Finally, if there is any speaker who is not happy with their
> presentation being online, let me know so I can ask Rich Bowen to take it
> down.
>
> /// some tech details
> - Over 300 GB of raw material
> - Over 30 hours spent on video production (editing, encoding, etc.)
> - Adobe Premiere Pro for video processing
> - Audacity for sound processing (noise filtering, etc.)
> - Final videos with AAC audio at 256 kb/s and H.264 video at 1080p, 1-pass
> VBR, 10 Mb/s, HW-accelerated encoding on i7-8550U integrated graphics
> (ultrabook, yes...)
>
>
> Regards,
> Andrija
>
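For anyone reproducing a similar export without Premiere, the settings above
map roughly onto a single ffmpeg invocation. This is only a sketch: the
bitrates and codecs come from the mail, while the file names and the use of
Intel Quick Sync (h264_qsv) to drive the i7-8550U's hardware encoder are
assumptions.

# Rough ffmpeg equivalent of the export settings described above (sketch,
# not the actual Premiere configuration; file names are illustrative).
# h264_qsv = H.264 via Intel Quick Sync; swap in libx264 if unavailable.
ffmpeg -i talk_raw.mov \
    -c:v h264_qsv -b:v 10M \
    -vf scale=-2:1080 \
    -c:a aac -b:a 256k \
    talk_final.mp4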


Re: [ANNOUNCE] Andrija Panic has joined the PMC

2019-07-13 Thread Voloshanenko Igor
Congrats, Andrija!

Sat, 13 Jul 2019 at 11:03, Paul Angus :

> Fellow CloudStackers,
>
>
>
> It gives me great pleasure to say that Andrija has been invited to join the
> PMC and has graciously accepted.
>
>
> Please join me in congratulating Andrija!
>
>
>
>
> Kind regards,
>
>
>
> Paul Angus
>
> CloudStack PMC
>


Re: [ANNOUNCE] New committer: Andrija Panić

2018-11-18 Thread Voloshanenko Igor
Congrats! Big deal )

Sun, 18 Nov 2018 at 23:27, Tutkowski, Mike :

> Hi everyone,
>
> The Project Management Committee (PMC) for Apache CloudStack
> has invited Andrija Panić to become a committer and I am pleased
> to announce that he has accepted.
>
> Please join me in congratulating Andrija on this accomplishment.
>
> Thanks!
> Mike
>


Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Voloshanenko Igor
You're faster than me again )))

2018-01-08 12:21 GMT+02:00 Voloshanenko Igor :

> :D tnx )
> Updated by my colleague already
>
> 2018-01-08 12:06 GMT+02:00 Daan Hoogland :
>
>> yeah, way ahead of you Igor ;) I asked a question about it
>>
>> On Mon, Jan 8, 2018 at 11:05 AM, Voloshanenko Igor <
>> igor.voloshane...@gmail.com> wrote:
>>
>> > Updates posted to https://github.com/apache/cloudstack/pull/2389
>> > Can you please review?
>> >
>> > 2018-01-08 11:57 GMT+02:00 Voloshanenko Igor <
>> igor.voloshane...@gmail.com
>> > >:
>> >
>> > > Sure. Got it.
>> > >
>> > > Will post update soon
>> > >
>> > > 2018-01-08 11:38 GMT+02:00 Daan Hoogland :
>> > >
>> > >> Igor, I remember your PR and think it is fine. It can also be argued
>> > that
>> > >> it needs to go in as a security feature. For an RM it is unthinkably
>> > late,
>> > >> but fortunately it is very small. I will however -1 it if it leads
>> to a
>> > >> plethora of last minute PRs to include.
>> > >>
>> > >> On Mon, Jan 8, 2018 at 10:33 AM, Voloshanenko Igor <
>> > >> igor.voloshane...@gmail.com> wrote:
>> > >>
>> > >> > Guys, can we please include https://github.com/apache/cloudstack/pull/2389
>> > >> > into 4.11?
>> > >> > The PR is very small and updates will be published in the next few hours.
>> > >> >
>> > >> > We have had this in production on the 4.8 branch for a while.
>> > >> >
>> > >> > 2018-01-08 11:15 GMT+02:00 Boris Stoyanov <
>> > boris.stoya...@shapeblue.com
>> > >> >:
>> > >> >
>> > >> > > +1 Daan
>> > >> > >
>> > >> > >
>> > >> > > boris.stoya...@shapeblue.com
>> > >> > > www.shapeblue.com
>> > >> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > >> > > @shapeblue
>> > >> > >
>> > >> > >
>> > >> > >
>> > >> > > > On 8 Jan 2018, at 10:47, Daan Hoogland <
>> daan.hoogl...@gmail.com>
>> > >> > wrote:
>> > >> > > >
>> > >> > > > Rohit, Ivan,
>> > >> > > >
>> > >> > > > I think we can argue that the five open PRs on the milestone
>> can
>> > >> still
>> > >> > go
>> > >> > > > in as long as active work on them continues. I have not looked
>> at
>> > >> > Ivan's
>> > >> > > > PRs yet but can see they were entered in December and he is
>> > actively
>> > >> > > > working on it so why not include those in the milestone. A
>> bigger
>> > >> > concern
>> > >> > > > is that some of the remaining PRs in that milestone are
>> > potentially
>> > >> > > > conflicting. So we feature freeze now and work only to get the
>> set
>> > >> list
>> > >> > > in
>> > >> > > > (and blockers).
>> > >> > > >
>> > >> > > >
>> > >> > > > On Mon, Jan 8, 2018 at 9:39 AM, Ivan Kudryavtsev <
>> > >> > > kudryavtsev...@bw-sw.com>
>> > >> > > > wrote:
>> > >> > > >
>> > >> > > >> Rohit, Devs,
>> > >> > > >>
>> > >> > > >> just consider adding:
>> > >> > > >>
>> > >> > > >> CLOUDSTACK-10188 / https://github.com/apache/cloudstack/pull/2362
>> > >> > > >> [resource accounting blocker bug]
>> > >> > > >> CLOUDSTACK-10170 / https://github.com/apache/cloudstack/pull/2350
>> > >> > > >> [security fix, enhancement]
>> > >> > > >>
>> > >> > > >> They have been ready (we think) for some time, but with *no final
>> > >> > > >> review* yet.
>> > >> > > >>
>> > >> > > >>
>> > >> > > >> 2018-01-08 14:47 GMT+07:00 Ro

Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Voloshanenko Igor
:D tnx )
Updated by my colleague already

2018-01-08 12:06 GMT+02:00 Daan Hoogland :

> yeah, way ahead of you Igor ;) I asked a question about it
>
> On Mon, Jan 8, 2018 at 11:05 AM, Voloshanenko Igor <
> igor.voloshane...@gmail.com> wrote:
>
> > Updates posted to https://github.com/apache/cloudstack/pull/2389
> > Can you please review?
> >
> > 2018-01-08 11:57 GMT+02:00 Voloshanenko Igor <
> igor.voloshane...@gmail.com
> > >:
> >
> > > Sure. Got it.
> > >
> > > Will post update soon
> > >
> > > 2018-01-08 11:38 GMT+02:00 Daan Hoogland :
> > >
> > >> Igor, I remember your PR and think it is fine. It can also be argued
> > that
> > >> it needs to go in as a security feature. For an RM it is unthinkably
> > late,
> > >> but fortunately it is very small. I will however -1 it if it leads to
> a
> > >> plethora of last minute PRs to include.
> > >>
> > >> On Mon, Jan 8, 2018 at 10:33 AM, Voloshanenko Igor <
> > >> igor.voloshane...@gmail.com> wrote:
> > >>
> > >> > Guys, can we please include https://github.com/apache/cloudstack/pull/2389
> > >> > into 4.11?
> > >> > The PR is very small and updates will be published in the next few hours.
> > >> >
> > >> > We have had this in production on the 4.8 branch for a while.
> > >> >
> > >> > 2018-01-08 11:15 GMT+02:00 Boris Stoyanov <
> > boris.stoya...@shapeblue.com
> > >> >:
> > >> >
> > >> > > +1 Daan
> > >> > >
> > >> > >
> > >> > > boris.stoya...@shapeblue.com
> > >> > > www.shapeblue.com
> > >> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > >> > > @shapeblue
> > >> > >
> > >> > >
> > >> > >
> > >> > > > On 8 Jan 2018, at 10:47, Daan Hoogland  >
> > >> > wrote:
> > >> > > >
> > >> > > > Rohit, Ivan,
> > >> > > >
> > >> > > > I think we can argue that the five open PRs on the milestone can
> > >> still
> > >> > go
> > >> > > > in as long as active work on them continues. I have not looked
> at
> > >> > Ivan's
> > >> > > > PRs yet but can see they were entered in December and he is
> > actively
> > >> > > > working on it so why not include those in the milestone. A
> bigger
> > >> > concern
> > >> > > > is that some of the remaining PRs in that milestone are
> > potentially
> > >> > > > conflicting. So we feature freeze now and work only to get the
> set
> > >> list
> > >> > > in
> > >> > > > (and blockers).
> > >> > > >
> > >> > > >
> > >> > > > On Mon, Jan 8, 2018 at 9:39 AM, Ivan Kudryavtsev <
> > >> > > kudryavtsev...@bw-sw.com>
> > >> > > > wrote:
> > >> > > >
> > >> > > >> Rohit, Devs,
> > >> > > >>
> > >> > > >> just consider adding:
> > >> > > >>
> > >> > > >> CLOUDSTACK-10188 / https://github.com/apache/cloudstack/pull/2362
> > >> > > >> [resource accounting blocker bug]
> > >> > > >> CLOUDSTACK-10170 / https://github.com/apache/cloudstack/pull/2350
> > >> > > >> [security fix, enhancement]
> > >> > > >>
> > >> > > >> They have been ready (we think) for some time, but with *no final
> > >> > > >> review* yet.
> > >> > > >>
> > >> > > >>
> > >> > > >> 2018-01-08 14:47 GMT+07:00 Rohit Yadav <
> > rohit.ya...@shapeblue.com
> > >> >:
> > >> > > >>
> > >> > > >>> All,
> > >> > > >>>
> > >> > > >>>
> > >> > > >>> As per the previously shared schedule [1], by EOD today (8 Jan
> > >> 2018)
> > >> > we
> > >> > > >>> would freeze master after which we'll only accept
> > critical/blocker
>

Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Voloshanenko Igor
Updates posted to https://github.com/apache/cloudstack/pull/2389
Can you please review?

2018-01-08 11:57 GMT+02:00 Voloshanenko Igor :

> Sure. Got it.
>
> Will post update soon
>
> 2018-01-08 11:38 GMT+02:00 Daan Hoogland :
>
>> Igor, I remember your PR and think it is fine. It can also be argued that
>> it needs to go in as a security feature. For an RM it is unthinkably late,
>> but fortunately it is very small. I will however -1 it if it leads to a
>> plethora of last minute PRs to include.
>>
>> On Mon, Jan 8, 2018 at 10:33 AM, Voloshanenko Igor <
>> igor.voloshane...@gmail.com> wrote:
>>
>> > Guys, can we please include https://github.com/apache/cloudstack/pull/2389
>> > into 4.11?
>> > The PR is very small and updates will be published in the next few hours.
>> >
>> > We have had this in production on the 4.8 branch for a while.
>> >
>> > 2018-01-08 11:15 GMT+02:00 Boris Stoyanov > >:
>> >
>> > > +1 Daan
>> > >
>> > >
>> > > boris.stoya...@shapeblue.com
>> > > www.shapeblue.com
>> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > > @shapeblue
>> > >
>> > >
>> > >
>> > > > On 8 Jan 2018, at 10:47, Daan Hoogland 
>> > wrote:
>> > > >
>> > > > Rohit, Ivan,
>> > > >
>> > > > I think we can argue that the five open PRs on the milestone can
>> still
>> > go
>> > > > in as long as active work on them continues. I have not looked at
>> > Ivan's
>> > > > PRs yet but can see they were entered in December and he is actively
>> > > > working on it so why not include those in the milestone. A bigger
>> > concern
>> > > > is that some of the remaining PRs in that milestone are potentially
>> > > > conflicting. So we feature freeze now and work only to get the set
>> list
>> > > in
>> > > > (and blockers).
>> > > >
>> > > >
>> > > > On Mon, Jan 8, 2018 at 9:39 AM, Ivan Kudryavtsev <
>> > > kudryavtsev...@bw-sw.com>
>> > > > wrote:
>> > > >
>> > > >> Rohit, Devs,
>> > > >>
>> > > >> just consider adding:
>> > > >>
>> > > >> CLOUDSTACK-10188 / https://github.com/apache/cloudstack/pull/2362
>> > > >> [resource accounting blocker bug]
>> > > >> CLOUDSTACK-10170 / https://github.com/apache/cloudstack/pull/2350
>> > > >> [security fix, enhancement]
>> > > >>
>> > > >> They have been ready (we think) for some time, but with *no final
>> > > >> review* yet.
>> > > >>
>> > > >>
>> > > >> 2018-01-08 14:47 GMT+07:00 Rohit Yadav > >:
>> > > >>
>> > > >>> All,
>> > > >>>
>> > > >>>
>> > > >>> As per the previously shared schedule [1], by EOD today (8 Jan
>> 2018)
>> > we
>> > > >>> would freeze master after which we'll only accept critical/blocker
>> > > fixes,
>> > > >>> stabilize master and start a voting thread on 4.11 RC1 (est. by 15
>> > Jan
>> > > >>> 2018).
>> > > >>>
>> > > >>>
>> > > >>> I wanted to gather consensus if this is acceptable to everyone as
>> > there
>> > > >>> are still few outstanding feature PRs that I understand authors
>> have
>> > > >> worked
>> > > >>> hard to get them in.
>> > > >>>
>> > > >>>
>> > > >>> Please share your thoughts, comments.  If you're the author of
>> such
>> > an
>> > > >>> existing PR, please work with us, address outstanding issues.
>> > > >>>
>> > > >>>
>> > > >>> In case of no objections, it will be assumed that the plan is
>> > > acceptable
>> > > >>> to everyone. Thanks.
>> > > >>>
>> > > >>>
>> > > >>> [1] http://markmail.org/message/mszlluye35acvn2j
>> > > >>>
>> > > >>>
>> > > >>> - Rohit
>> > > >>>
>> > > >>> <https://cloudstack.apache.org>
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>> rohit.ya...@shapeblue.com
>> > > >>> www.shapeblue.com
>> > > >>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
>> > > >>> @shapeblue
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>>
>> > > >>
>> > > >>
>> > > >> --
>> > > >> With best regards, Ivan Kudryavtsev
>> > > >> Bitworks Software, Ltd.
>> > > >> Cell: +7-923-414-1515
>> > > >> WWW: http://bitworks.software/ <http://bw-sw.com/>
>> > > >>
>> > > >
>> > > >
>> > > >
>> > > > --
>> > > > Daan
>> > >
>> > >
>> >
>>
>>
>>
>> --
>> Daan
>>
>
>


Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Voloshanenko Igor
Sure. Got it.

Will post update soon

2018-01-08 11:38 GMT+02:00 Daan Hoogland :

> Igor, I remember your PR and think it is fine. It can also be argued that
> it needs to go in as a security feature. For an RM it is unthinkably late,
> but fortunately it is very small. I will however -1 it if it leads to a
> plethora of last minute PRs to include.
>
> On Mon, Jan 8, 2018 at 10:33 AM, Voloshanenko Igor <
> igor.voloshane...@gmail.com> wrote:
>
> > Guys, can we please include https://github.com/apache/cloudstack/pull/2389
> > into 4.11?
> > The PR is very small and updates will be published in the next few hours.
> >
> > We have had this in production on the 4.8 branch for a while.
> >
> > 2018-01-08 11:15 GMT+02:00 Boris Stoyanov 
> :
> >
> > > +1 Daan
> > >
> > >
> > > boris.stoya...@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > >
> > >
> > >
> > > > On 8 Jan 2018, at 10:47, Daan Hoogland 
> > wrote:
> > > >
> > > > Rohit, Ivan,
> > > >
> > > > I think we can argue that the five open PRs on the milestone can
> still
> > go
> > > > in as long as active work on them continues. I have not looked at
> > Ivan's
> > > > PRs yet but can see they were entered in December and he is actively
> > > > working on it so why not include those in the milestone. A bigger
> > concern
> > > > is that some of the remaining PRs in that milestone are potentially
> > > > conflicting. So we feature freeze now and work only to get the set
> list
> > > in
> > > > (and blockers).
> > > >
> > > >
> > > > On Mon, Jan 8, 2018 at 9:39 AM, Ivan Kudryavtsev <
> > > kudryavtsev...@bw-sw.com>
> > > > wrote:
> > > >
> > > >> Rohit, Devs,
> > > >>
> > > >> just consider adding:
> > > >>
> > > >> CLOUDSTACK-10188 / https://github.com/apache/cloudstack/pull/2362
> > > >> [resource accounting blocker bug]
> > > >> CLOUDSTACK-10170 / https://github.com/apache/cloudstack/pull/2350
> > > >> [security fix, enhancement]
> > > >>
> > > >> They have been ready (we think) for some time, but with *no final
> > > >> review* yet.
> > > >>
> > > >>
> > > >> 2018-01-08 14:47 GMT+07:00 Rohit Yadav :
> > > >>
> > > >>> All,
> > > >>>
> > > >>>
> > > >>> As per the previously shared schedule [1], by EOD today (8 Jan
> 2018)
> > we
> > > >>> would freeze master after which we'll only accept critical/blocker
> > > fixes,
> > > >>> stabilize master and start a voting thread on 4.11 RC1 (est. by 15
> > Jan
> > > >>> 2018).
> > > >>>
> > > >>>
> > > >>> I wanted to gather consensus if this is acceptable to everyone as
> > there
> > > >>> are still few outstanding feature PRs that I understand authors
> have
> > > >> worked
> > > >>> hard to get them in.
> > > >>>
> > > >>>
> > > >>> Please share your thoughts, comments.  If you're the author of such
> > an
> > > >>> existing PR, please work with us, address outstanding issues.
> > > >>>
> > > >>>
> > > >>> In case of no objections, it will be assumed that the plan is
> > > acceptable
> > > >>> to everyone. Thanks.
> > > >>>
> > > >>>
> > > >>> [1] http://markmail.org/message/mszlluye35acvn2j
> > > >>>
> > > >>>
> > > >>> - Rohit
> > > >>>
> > > >>> <https://cloudstack.apache.org>
> > > >>>
> > > >>>
> > > >>>
> > > >>> rohit.ya...@shapeblue.com
> > > >>> www.shapeblue.com
> > > >>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > >>> @shapeblue
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>
> > > >>
> > > >> --
> > > >> With best regards, Ivan Kudryavtsev
> > > >> Bitworks Software, Ltd.
> > > >> Cell: +7-923-414-1515
> > > >> WWW: http://bitworks.software/ <http://bw-sw.com/>
> > > >>
> > > >
> > > >
> > > >
> > > > --
> > > > Daan
> > >
> > >
> >
>
>
>
> --
> Daan
>


Re: [DISCUSS] Freezing master for 4.11

2018-01-08 Thread Voloshanenko Igor
Guys, can we please include https://github.com/apache/cloudstack/pull/2389
into 4.11?
The PR is very small and updates will be published in the next few hours.

We have had this in production on the 4.8 branch for a while.

2018-01-08 11:15 GMT+02:00 Boris Stoyanov :

> +1 Daan
>
>
> boris.stoya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> > On 8 Jan 2018, at 10:47, Daan Hoogland  wrote:
> >
> > Rohit, Ivan,
> >
> > I think we can argue that the five open PRs on the milestone can still go
> > in as long as active work on them continues. I have not looked at Ivan's
> > PRs yet but can see they were entered in December and he is actively
> > working on it so why not include those in the milestone. A bigger concern
> > is that some of the remaining PRs in that milestone are potentially
> > conflicting. So we feature freeze now and work only to get the set list
> in
> > (and blockers).
> >
> >
> > On Mon, Jan 8, 2018 at 9:39 AM, Ivan Kudryavtsev <
> kudryavtsev...@bw-sw.com>
> > wrote:
> >
> >> Rohit, Devs,
> >>
> >> just consider adding:
> >>
> >> CLOUDSTACK-10188 / https://github.com/apache/cloudstack/pull/2362
> >> [resource accounting blocker bug]
> >> CLOUDSTACK-10170 / https://github.com/apache/cloudstack/pull/2350
> >> [security fix, enhancement]
> >>
> >> They have been ready (we think) for some time, but with *no final
> >> review* yet.
> >>
> >>
> >> 2018-01-08 14:47 GMT+07:00 Rohit Yadav :
> >>
> >>> All,
> >>>
> >>>
> >>> As per the previously shared schedule [1], by EOD today (8 Jan 2018) we
> >>> would freeze master after which we'll only accept critical/blocker
> fixes,
> >>> stabilize master and start a voting thread on 4.11 RC1 (est. by 15 Jan
> >>> 2018).
> >>>
> >>>
> >>> I wanted to gather consensus if this is acceptable to everyone as there
> >>> are still few outstanding feature PRs that I understand authors have
> >> worked
> >>> hard to get them in.
> >>>
> >>>
> >>> Please share your thoughts, comments.  If you're the author of such an
> >>> existing PR, please work with us, address outstanding issues.
> >>>
> >>>
> >>> In case of no objections, it will be assumed that the plan is
> acceptable
> >>> to everyone. Thanks.
> >>>
> >>>
> >>> [1] http://markmail.org/message/mszlluye35acvn2j
> >>>
> >>>
> >>> - Rohit
> >>>
> >>> 
> >>>
> >>>
> >>>
> >>> rohit.ya...@shapeblue.com
> >>> www.shapeblue.com
> >>> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> >>> @shapeblue
> >>>
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> With best regards, Ivan Kudryavtsev
> >> Bitworks Software, Ltd.
> >> Cell: +7-923-414-1515
> >> WWW: http://bitworks.software/ 
> >>
> >
> >
> >
> > --
> > Daan
>
>


Re: PrivateGateway ACL rules blocker

2017-12-20 Thread Voloshanenko Igor
Tests passed.
So I will wait until you guys have time to review this one-liner and
merge it )

2017-12-20 14:16 GMT+02:00 Rohit Yadav :

> Sure, I've kicked off some tests. I will merge when the tests pass and we
> have some review/feedback from others.
>
>
> -Rohit
>
> ____
> From: Voloshanenko Igor 
> Sent: Wednesday, December 20, 2017 5:33:10 PM
> To: dev@cloudstack.apache.org
> Subject: Re: PrivateGateway ACL rules blocker
>
> Tnx a lot Rohit!
>
> As we only handle a special case, I think all tests will pass without any
> issues, as I can't imagine that somebody wrote a test to check that the
> buggy condition passes :D
>
> 2017-12-20 13:02 GMT+02:00 Rohit Yadav :
>
> > Hi Voloshanenko,
> >
> >
> > Thanks for reporting and sharing, I'll kick some tests.
> >
> >
> > - Rohit
> >
> > 
> > From: Voloshanenko Igor 
> > Sent: Wednesday, December 20, 2017 6:29:46 AM
> > To: dev@cloudstack.apache.org
> > Subject: PrivateGateway ACL rules blocker
> >
> > Hi all!
> >
> > Guys, can I please kindly ask you to review this issue and PR:
> > https://issues.apache.org/jira/browse/CLOUDSTACK-10200
> >
> > We have a lot of clients with PG and want to include this in 4.11 LTS, as
> > without this small one-liner patch the PrivateGateway component is useless.
> >
> > Tnx in advance!
> >
> > rohit.ya...@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> >
> >
> >
> >
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


Re: PrivateGateway ACL rules blocker

2017-12-20 Thread Voloshanenko Igor
Tnx a lot Rohit!

As we only handle a special case, I think all tests will pass without any
issues, as I can't imagine that somebody wrote a test to check that the
buggy condition passes :D

2017-12-20 13:02 GMT+02:00 Rohit Yadav :

> Hi Voloshanenko,
>
>
> Thanks for reporting and sharing, I'll kick some tests.
>
>
> - Rohit
>
> ____
> From: Voloshanenko Igor 
> Sent: Wednesday, December 20, 2017 6:29:46 AM
> To: dev@cloudstack.apache.org
> Subject: PrivateGateway ACL rules blocker
>
> Hi all!
>
> Guys, can I please kindly ask you to review this issue and PR:
> https://issues.apache.org/jira/browse/CLOUDSTACK-10200
>
> We have a lot of clients with PG and want to include this in 4.11 LTS, as
> without this small one-liner patch the PrivateGateway component is useless.
>
> Tnx in advance!
>
> rohit.ya...@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
>


PrivateGateway ACL rules blocker

2017-12-19 Thread Voloshanenko Igor
Hi all!

Guys, can I please kindly ask you to review this issue and PR:
https://issues.apache.org/jira/browse/CLOUDSTACK-10200

We have a lot of clients with PG and want to include this in 4.11 LTS, as
without this small one-liner patch the PrivateGateway component is useless.

Tnx in advance!


Re: Advise on multiple PODs network design

2017-10-05 Thread Voloshanenko Igor
Tnx Remi!

Brilliant advice!

Thu, 5 Oct 2017 at 15:38, Remi Bergsma :

> Hi,
>
> We solved this problem by splitting the network into an underlay and an
> overlay network. The underlay is the physical network, including the
> management traffic and storage from hypervisors and such. The simpler, the
> better. In the overlay there's your services layer, for example the guest
> networks for your clients. Since you're already using VXLAN, that shouldn't
> be hard. In our setup, each POD is a rack with ToR (top-of-rack) routing,
> which means an L2 domain stays within a rack. If that goes wrong, only one
> rack aka POD has issues and not the rest. We have many PODs in several
> zones. The overlay makes tunnels (we use Nicira, but VXLAN, OVN or Nuage can
> do the same) and those can be created over L3-interconnected PODs.
> Effectively, this gives you L2 (overlay) over L3 (underlay). Storage-wise we
> have both cluster-wide (in the POD) and zone-wide (basically a POD with just
> storage).
>
> VMs in a given guest network can run in any of the PODs and still have an
> L2 connection between them, even though the actual physical network is L3.
> This is one of the great benefits of SDN (Software defined networking).
> It’s pretty much the best of both worlds. We scaled this quite a bit and
> it’s rock solid.
>
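To make the L2-over-L3 idea concrete: the same effect can be sketched on a
plain Linux/KVM host with iproute2's native VXLAN support. This is a minimal
illustration only, not the Nicira/OVN setup described above; the interface
names, VNI and addresses are made up.

# One guest broadcast domain (VNI 4242) tunneled between two hypervisors
# whose underlay is plain routed L3.
ip link add vxlan4242 type vxlan id 4242 dev eth0 dstport 4789 \
    local 10.1.1.10 remote 10.2.1.10
ip link set vxlan4242 up
ip link add name guestbr0 type bridge
ip link set guestbr0 up
ip link set vxlan4242 master guestbr0   # guest vNICs join this bridge

Guests bridged into guestbr0 on both hosts then share one L2 segment even
though the racks are only L3-connected.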
> Regards,
> Remi
>
>
> On 05/10/2017, 13:57, "Rafael Weingärtner" 
> wrote:
>
> Exactly; the management IPs are defined per POD already; the public ones you
> could work out by dedicating domains per POD, and then dedicating a
> pool of IPs to each domain. The guest networking problem is solved if
> you force users from, let's say, the same domain to stay in the same POD.
>
> The other approach as you said would be a zone per POD.
>
> Please keep us posted on your tests; your findings may be valuable for
> spotting improvements in ACS design and helping others with more complex
> deployments.
>
>
> On 10/5/2017 6:51 AM, Andrija Panic wrote:
> > Thanks Rafael,
> >
> > yes that is my expectation also (same broadcast domain for Guest
> network),
> > so it doesn't really solve my problem (identical thing is expected
> for
> > Public Network, at least, if not other networks also)
> > Other options seem to be zones per each X racks...
> >
> > Will see.
> >
> > Thanks
> >
> > On 4 October 2017 at 22:25, Rafael Weingärtner <
> raf...@autonomiccs.com.br>
> > wrote:
> >
> >> I think this can cause problems, if not properly managed. Unless you
> >> concentrate Domains/Users in Pods. Otherwise, you might end up with
> some
> >> VMs of the same user/domain/project in different pods, and if they
> are all
> >> in the same VPC for instance, we would expect them to be in the same
> >> broadcast domain.
> >>
> >> I think to apply what you want, it may require some designing and
> testing,
> >> but it feels feasible with ACS.
> >>
> >>
> >> On 10/4/2017 5:19 PM, Andrija Panic wrote:
> >>
> >>> Anyone?  I know I'm trying to squeeze some free paid consulting
> here :),
> >>> but trying to understand if PODs makes sense in this situation
> >>>
> >>> Thx
> >>>
> >>> On 2 October 2017 at 10:21, Andrija Panic  >
> >>> wrote:
> >>>
> >>> Hi guys,
>  Sorry for long post below...
> 
>  I was wondering if someone could bring some light for me for
> multiple
>  PODs
>  networking design (L2 vs L3) - idea is to make smaller L2
> broadcast
>  domains
>  (any other reason?)
> 
>  We might decide to transition from current single pod, single
> cluster
>  (single zone) to multiple PODs design (or not...) - we will
> eventually
>  grow
>  to over 50 racks' worth of KVM hosts (1000+ hosts) so I'm trying to
>  understand best options to avoid having insanely huge L2 broadcast
>  domains...
> 
>  Mgmt network is routed between pods, that is clear.
> 
>  We have dedicated primary storage network and Secondary Storage
> networks
>  (vlan interfaces configured locally on all KVM hosts, providing
> direct L2
>  connection obviously, not shared with mgmt.network), and same for
> Public
>  and Guest networks... (Advanced networking in zone, Vxlan used as
>  isolation)
> 
>  Now with multiple PODs, since Public Network and Guest network is
> defined
>  per Zone level (not POD level), and currently same zone-wide
> setup for
>  Primary Storage... what would be the best way to make this
> traffic stay
>  inside PODs as much as possible and is this possible at all?
> Perhaps I
>  would need to look into multiple zones, not PODs.
> 
>  My humble conclusion, based on having all dedicated networks, is
> that I
>  need to stretch (L2 attach as vlan interface) primary

Re: IAM Plugin

2017-09-14 Thread Voloshanenko Igor
In this case, how does it work? I mean, the presentation described a
server-client (plugin) model.

I tried to make this work on 4.8... and to be honest I don't understand how
to run it (((

Thu, 14 Sep 2017 at 16:12, Rafael Weingärtner <
rafaelweingart...@gmail.com>:

> I believe this feature is native in the latest ACS versions.
> It is basically the role-based access control model, where you link
> "permissions" (API methods) to roles and then roles to users.
>
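For reference, this is the dynamic-roles feature that became native around
ACS 4.9, which would explain why it cannot be found on 4.8. A sketch of the
API shape driven through cloudmonkey; the role name, rules and placeholder
id are illustrative:

# Link API-method permissions to a role, then bind accounts to the role
# (ACS 4.9+ dynamic roles; values below are illustrative).
cloudmonkey create role name=ReadOnlyOps type=User description="list-only role"
cloudmonkey create rolepermission roleid=<role-uuid> rule='list*' permission=allow
cloudmonkey create rolepermission roleid=<role-uuid> rule='*' permission=deny
# Accounts are then bound to the role via createAccount's roleid parameter.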
> On Thu, Sep 14, 2017 at 9:35 AM, Voloshanenko Igor <
> igor.voloshane...@gmail.com> wrote:
>
> > Hi, folks!
> >
> > Can I kindly ask you for help with the IAM plugin?
> >
> > I'm trying to test it - and don't see any relevant instructions on how to
> > install it (both the plugin and server sides) or any API examples...
> >
> > I found only 2 presentations and one Jira ticket, which look outdated
> > (((
> >
> > tnx in advance for your help!
> >
>
>
>
> --
> Rafael Weingärtner
>


IAM Plugin

2017-09-14 Thread Voloshanenko Igor
Hi, folks!

Can I kindly ask you for help with the IAM plugin?

I'm trying to test it - and don't see any relevant instructions on how to
install it (both the plugin and server sides) or any API examples...

I found only 2 presentations and one Jira ticket, which look outdated
(((

tnx in advance for your help!


Re: Introduction

2017-07-31 Thread Voloshanenko Igor
Welcome, Nicolas!
Nice to meet you!

Mon, 31 Jul 2017 at 18:07, Nicolas Vazquez :

> Hi all,
>
>
> My name is Nicolas Vazquez, today is my first day at @ShapeBlue as a
> Software Engineer. I am based in Montevideo, Uruguay and I've been working
> with CloudStack since mid-2015. Looking forward to working with you!
>
>
> Thanks,
>
> Nicolas
>
> nicolas.vazq...@shapeblue.com
> www.shapeblue.com
> ,
> @shapeblue
>
>
>
>


Re: Attaching more than 14 data volumes to an instance

2017-02-15 Thread Voloshanenko Igor
On a VM we try to emulate real hardware )))
So every device honors its specification -
in this case PCI :)

To be honest, we can increase the limits by adding multifunction devices or
migrating to virtio-scsi (see the sketch below)

But as for me, 14 disks is more than enough for now


About 3 for the CD-ROM - I will check. I think the CD-ROM is emulated as an
IDE device, not via virtio-blk.

For 0 - the root volume - interesting; in this case we can easily add 1 more
DATA disk :)
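To illustrate the virtio-scsi option mentioned above: every virtio-blk disk
burns a whole PCI slot, while a single virtio-scsi controller occupies one
slot and multiplexes many LUNs behind it. A sketch with virsh, assuming the
VM already has a virtio-scsi controller defined; the domain name and image
path are illustrative:

# Attach a disk behind the SCSI bus instead of virtio-blk, so it does not
# consume an extra PCI slot.
virsh attach-disk test-virtio-blk /var/lib/libvirt/images/data01.qcow2 sdb \
    --targetbus scsi --subdriver qcow2 --persistent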

Wed, 15 Feb 2017 at 19:24, Rafael Weingärtner <
rafaelweingart...@gmail.com>:

> I thought that on a VM we would not be bound by PCI limitations.
> Interesting explanations, thanks.
>
>
> On Wed, Feb 15, 2017 at 12:19 PM, Voloshanenko Igor <
> igor.voloshane...@gmail.com> wrote:
>
> > I think the explanation is simple.
> > A PCI bus itself can handle up to 32 devices.
> >
> > If you run lspci inside an empty (freshly created) VM, you will see that 8
> > slots are already occupied:
> > [root@test-virtio-blk ~]# lspci
> > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
> > 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
> > 00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device
> >
> > [root@test-virtio-blk ~]# lspci | wc -l
> > 8
> >
> > So, 7 system devices + 1 ROOT disk
> >
> > In the current implementation we use virtio-blk, which handles only one
> > disk per PCI device.
> >
> > So, we have 32-8 == 24 free slots...
> >
> > As CloudStack supports more than one Ethernet card, 8 of them are reserved
> > for network cards and 16 are available for virtio-blk.
> >
> > So the practical limit equals 16 devices (for DATA disks).
> >
> > Why 2 device ids (0 and 3) are excluded is an interesting question... I
> > will try to research it and post an explanation.
> >
> > 2017-02-15 18:27 GMT+02:00 Rafael Weingärtner <
> rafaelweingart...@gmail.com
> > >:
> >
> > > I hate to say this, but probably no one knows why.
> > > I looked at the history and this method has always been like this.
> > >
> > > The device ID 3 seems to be something reserved, probably for Xen tools
> > (big
> > > guess here)?
> > >
> > > Also, regarding the limit; I could speculate two explanations for the
> > > limit. A developer did not get the full specs and decided to do
> whatever
> > > he/she wanted. Or, maybe, at the time of coding (long, long time ago)
> > there
> > > was a hypervisor that limited (maybe still limits) the number of
> devices
> > > that could be plugged to a VM and the first developers decided to level
> > > everything by that spec.
> > >
> > > It may be worth checking with KVM, XenServer, Hyper-V, and VMware if
> they
> > > have such limitation on disks that can be attached to a VM. If they do
> > not
> > > have, we could remove that, or at least externalize the limit in a
> > > parameter.
> > >
> > > On Wed, Feb 15, 2017 at 5:54 AM, Friðvin Logi Oddbjörnsson <
> > > frid...@greenqloud.com> wrote:
> > >
> > > > CloudStack is currently limiting the number of data volumes, that can
> > be
> > > > attached to an instance, to 14.
> > > > More specifically, this limitation relates to the device ids that
> > > > CloudStack considers valid for data volumes.
> > > > In method VolumeApiServiceImpl.getDeviceId(long, Long), only device
> > ids
> > > 1,
> > > > 2, and 4-15 are considered valid.
> > > > What I would like to know is: is there a reason for this limitation?
> > (of
> > > > not going higher than device id 15)
> > > >
> > > > Note that the current number of attached data volumes is already
> being
> > > > checked against the maximum number of data volumes per instance, as
> > > > specified by the relevant hypervisor’s capabilities.
> > > > E.g. if the relevant hypervisor’s capabilities specify that it only
> > > > supports 6 data volumes per instance, CloudStack rejects attaching a
> > > > seventh data volume.
> > > >
> > > >
> > > > Friðvin Logi Oddbjörnsson
> > > >
> > > > Senior Developer
> > > >
> > > > Tel: (+354) 415 0200 | frid...@greenqloud.com <
> jaros...@greenqloud.com
> > >
> > > >
> > > > Mobile: (+354) 696 6528 | PGP Key: 57CA1B00
> > > > <https://sks-keyservers.net/pks/lookup?op=vindex&search=
> > > > frid...@greenqloud.com>
> > > >
> > > > Twitter: @greenqloud <https://twitter.com/greenqloud> | @qstackcloud
> > > > <https://twitter.com/qstackcloud>
> > > >
> > > > www.greenqloud.com | www.qstack.com
> > > >
> > > >
> > >
> > >
> > >
> > > --
> > > Rafael Weingärtner
> > >
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: Attaching more than 14 data volumes to an instance

2017-02-15 Thread Voloshanenko Igor
I think the explanation is simple.
A PCI bus itself can handle up to 32 devices.

If you run lspci inside an empty (freshly created) VM, you will see that 8
slots are already occupied:
[root@test-virtio-blk ~]# lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
00:02.0 VGA compatible controller: Cirrus Logic GD 5446
00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
00:04.0 SCSI storage controller: Red Hat, Inc Virtio block device

[root@test-virtio-blk ~]# lspci | wc -l
8

So, 7 system devices + 1 ROOT disk

In the current implementation we use virtio-blk, which handles only one
disk per PCI device.

So, we have 32-8 == 24 free slots...

As CloudStack supports more than one Ethernet card, 8 of them are reserved
for network cards and 16 are available for virtio-blk.

So the practical limit equals 16 devices (for DATA disks).

Why 2 device ids (0 and 3) are excluded is an interesting question... I will
try to research it and post an explanation.

2017-02-15 18:27 GMT+02:00 Rafael Weingärtner :

> I hate to say this, but probably no one knows why.
> I looked at the history and this method has always been like this.
>
> The device ID 3 seems to be something reserved, probably for Xen tools (big
> guess here)?
>
> Also, regarding the limit; I could speculate two explanations for the
> limit. A developer did not get the full specs and decided to do whatever
> he/she wanted. Or, maybe, at the time of coding (long, long time ago) there
> was a hypervisor that limited (maybe still limits) the number of devices
> that could be plugged to a VM and the first developers decided to level
> everything by that spec.
>
> It may be worth checking with KVM, XenServer, Hyper-V, and VMware if they
> have such limitation on disks that can be attached to a VM. If they do not
> have, we could remove that, or at least externalize the limit in a
> parameter.
>
> On Wed, Feb 15, 2017 at 5:54 AM, Friðvin Logi Oddbjörnsson <
> frid...@greenqloud.com> wrote:
>
> > CloudStack is currently limiting the number of data volumes, that can be
> > attached to an instance, to 14.
> > More specifically, this limitation relates to the device ids that
> > CloudStack considers valid for data volumes.
> > In method VolumeApiServiceImpl.getDeviceId(long, Long), only device ids
> 1,
> > 2, and 4-15 are considered valid.
> > What I would like to know is: is there a reason for this limitation? (of
> > not going higher than device id 15)
> >
> > Note that the current number of attached data volumes is already being
> > checked against the maximum number of data volumes per instance, as
> > specified by the relevant hypervisor’s capabilities.
> > E.g. if the relevant hypervisor’s capabilities specify that it only
> > supports 6 data volumes per instance, CloudStack rejects attaching a
> > seventh data volume.
> >
> >
> > Friðvin Logi Oddbjörnsson
> >
> > Senior Developer
> >
> > Tel: (+354) 415 0200 | frid...@greenqloud.com 
> >
> > Mobile: (+354) 696 6528 | PGP Key: 57CA1B00
> >  > frid...@greenqloud.com>
> >
> > Twitter: @greenqloud  | @qstackcloud
> > 
> >
> > www.greenqloud.com | www.qstack.com
> >
> >
>
>
>
> --
> Rafael Weingärtner
>


Re: Advice needed - switching IP for SS

2016-02-03 Thread Voloshanenko Igor
Andrija, as for me, the best solution (server-side sketch below) is:

1. Shut down the data interfaces (or bond) on the switch for the new server after rsync
2. Configure the same IP as on the old server
3. Via the switch CLI, shut down the data ports for the old server
4. Via the switch CLI, start up the ports for the new server
5. Reconfigure the IP of the data bond on the old server, via the mgmt interface, to a new IP
6. Via the switch CLI, start up the data ports for the old server

All of this can be done via Ansible. Downtime will occur only between steps
3 and 4, and it will be around 0.1-0.5 seconds.
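The server-side half of those steps might look like this with iproute2 - a
sketch only, with illustrative addresses and bond name; the switch-port
shutdown/startup in steps 1, 3, 4 and 6 is vendor-specific switch CLI and is
omitted:

# Step 2 - on the NEW server, pre-configure the old server's storage IP:
ip addr add 10.0.0.5/24 dev bond0
# ... the switch flips the ports (steps 3 and 4) ...
# Step 5 - on the OLD server, reached via the mgmt interface, move to a new IP:
ip addr del 10.0.0.5/24 dev bond0
ip addr add 10.0.0.99/24 dev bond0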

From the CloudStack mgmt server's point of view it will be transparent;
possibly 1-2 TCP retransmits will happen.

2016-02-03 22:49 GMT+02:00 Andrija Panic :

> Hi guys,
>
> I need to do maintenance on 1 Secondary Storage NFS server for a few days,
> so I thought to temporarily rsync the data to another NFS box and switch IP
> addresses, so the new NFS box has the original IP for a few days... (need to
> test if KVM nodes will gracefully remount the NFS server during the IP
> switchover...)
>
> Is this the preferred way, or should I hack the DB to point the existing NFS
> server (that is defined in ACS) to the new IP of the second NFS box, and
> perhaps restart mgmt and agents across all nodes?
>
> Any recommendations?
>
> Best,
>
> Andrija Panić
>