[openstack-dev] ThreadStack Project: an innovative open-source software for multi-thread computing

2018-05-25 Thread Erkam Murat Bozkurt
I have developed a new piece of open source software as the result of a
scientific research project, and I want to share my study with scientists
and/or software developers.

ThreadStack is an innovative piece of software that produces a class library
for C++ multi-thread programming; the output of ThreadStack acts as an
autonomous management system for thread synchronization tasks.
ThreadStack has a nice and useful graphical user interface and includes a
short tutorial and code examples. ThreadStack offers a new approach to
multi-thread computing: it uses a meta program to produce an
application-specific thread synchronization library. Therefore, the
programmer must read the tutorial to be able to use the software. The
tutorial covers the main design of the program.

The study has been submitted to an academic journal, and a scientific
introduction to the project should be readable there soon.

ThreadStack can be downloaded from SourceForge at the link given below.


https://sourceforge.net/projects/threadstack/

threadstack.h...@gmail.com

I look forward to your comments.

Thanks and best regards.

Erkam Murat Bozkurt,

M.Sc., Control Systems Engineering.

Istanbul / Turkey


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-25 Thread Dean Troyer
On Thu, May 24, 2018 at 11:23 PM, Tim Bell  wrote:
> I'd like to understand the phrase "StarlingX is an OpenStack Foundation Edge 
> focus area project".
>
> My understanding of the current situation is that "StarlingX would like to be 
> OpenStack Foundation Edge focus area project".
>
> I have not been able to keep up with all of the discussions so I'd be happy 
> for further URLs to help me understand the current situation and the 
> processes (formal/informal) to arrive at this conclusion.

Agreed, Tim; my apologies for jumping to conclusions there.
Even after some discussions yesterday, the exact right phrasing is not
clear to me. I understand that the intention is to become an incubated
edge project; I do not know exactly where StarlingX or Airship are in
that process today.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-25 Thread Tristan Cacqueray

Hello Bogdan,

Perhaps this has something to do with job evaluation order; it may be
worth trying to add the dependencies list in the project-templates, like
it is done here, for example:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799
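
For illustration, a minimal sketch of such a project-template entry, reusing
the job names from this thread (the template name and layout here are
assumptions for the example, not the actual project-config content):

  # Hypothetical sketch: the dependency is declared on the job entry
  # inside the template's check pipeline, so zuul builds a job graph.
  - project-template:
      name: tripleo-multinode-check
      check:
        jobs:
          - tripleo-ci-centos-7-containers-multinode
          - tripleo-ci-centos-7-3nodes-multinode:
              dependencies:
                - tripleo-ci-centos-7-containers-multinode

With such an entry, 3nodes-multinode should only start once
containers-multinode has succeeded.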

It is also easier to read dependencies from the pipeline definitions, imo.

-Tristan

On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote:
Job dependencies seem to be ignored by zuul: see jobs [0], [1], [2] started 
simultaneously, while I expected them to run one by one. According to 
patch 568536 [3], [1] is a dependency for [2] and [3].


The same can be observed for the remaining patches in the topic [4].
Is that a bug, or did I misunderstand what zuul job dependencies actually do?

[0] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/
[1] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/
[2] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/
[3] https://review.openstack.org/#/c/568536/
[4] https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged)


... (skipped) ...

Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal

2018-05-25 Thread Bogdan Dobrelya
Job dependencies seem to be ignored by zuul: see jobs [0], [1], [2] started 
simultaneously, while I expected them to run one by one. According to 
patch 568536 [3], [1] is a dependency for [2] and [3].


The same can be observed for the remaining patches in the topic [4].
Is that a bug, or did I misunderstand what zuul job dependencies actually do?

[0] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/
[1] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/
[2] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/
[3] https://review.openstack.org/#/c/568536/
[4] https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged)


On 5/15/18 11:39 AM, Bogdan Dobrelya wrote:

Added a few more patches [0], [1] based on the discussion results. PTAL, folks.
Wrt the remaining patches in the topic, I'd propose to give them a try and
revert them if they prove to do more harm than good.

Thank you for the feedback!

The next step could be reusing artifacts, like the DLRN repos and containers
built for patches and the hosted undercloud, in the subsequent pipelined
jobs. But I'm not sure how to even approach that.


[0] https://review.openstack.org/#/c/568536/
[1] https://review.openstack.org/#/c/568543/

On 5/15/18 10:54 AM, Bogdan Dobrelya wrote:

On 5/14/18 10:06 PM, Alex Schultz wrote:
On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya  wrote:

An update for your review, please, folks


Bogdan Dobrelya  writes:


Hello.
As the Zuul documentation [0] explains, the names "check", "gate", and
"post" may be altered for more advanced pipelines. Is it doable to
introduce, for particular openstack projects, multiple check
stages/steps as check-1, check-2 and so on? And is it possible to make
the subsequent steps reuse the environments that the previous steps
finished with?

Narrowing down to the tripleo CI scope, the problem I'd want us to solve
with this "virtual RFE", using such multi-staged check pipelines, is
reducing (ideally, de-duplicating) some of the common steps of existing
CI jobs.



What you're describing sounds more like a job graph within a pipeline.
See:
https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
for how to configure a job to run only after another job has completed.

There is also a facility to pass data between such jobs.
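
As a minimal sketch of both pieces (the job names and the returned
variable are made up for this example): a child job names its parent in
the dependencies attribute, and a playbook of the parent can hand data
down with the zuul_return Ansible module:

  - job:
      name: example-parent

  - job:
      name: example-child
      dependencies:
        - example-parent

  # In one of example-parent's playbooks; the returned values should
  # then be available to example-child as Ansible variables.
  - hosts: localhost
    tasks:
      - zuul_return:
          data:
            artifact_url: "https://logs.example.com/some/artifact"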

... (skipped) ...

Creating a job graph to have one job use the results of the previous job
can make sense in a lot of cases. It doesn't always save *time*, however.

It's worth noting that in OpenStack's Zuul, we have made an explicit
choice not to have long-running integration jobs depend on shorter pep8
or tox jobs, and that's because we value developer time more than CPU
time. We would rather run all of the tests and return all of the results
so a developer can fix all of the errors as quickly as possible, rather
than forcing an iterative workflow where they have to fix all the
whitespace issues before the CI system will tell them which actual tests
broke.

-Jim



I proposed a few zuul dependencies [0], [1] to the tripleo CI pipelines for
undercloud deployment vs upgrade testing (and some more). Given that those
undercloud jobs do not have such high failure rates, though, I think
Emilien is right in his comments and those would buy us nothing.

On the other hand, what do you think, folks, of making
tripleo-ci-centos-7-3nodes-multinode depend on
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
failure-prone and long-running, and is non-voting. It deploys (see the
featureset configs [3]*) 3 nodes in HA fashion, and it almost never passes
when containers-multinode fails - see the CI stats page [4]. I've found
only 2 cases there of the opposite situation, where containers-multinode
fails but 3nodes-multinode passes. So cutting off those future failures
via the added dependency *would* buy us something and allow other jobs to
wait less to commence, at the reasonable price of a somewhat extended main
zuul pipeline time. I think it makes sense, and that the extended CI time
will not add so much overhead to the RDO CI execution times as to become a
problem. WDYT?



I'm not sure it makes sense to add a dependency on other deployment
tests. It's going to add additional time to the CI run because the
upgrade won't start until well over an hour after the rest of the


Things are not so simple. There is also a significant time-in-queue delay
before jobs start, which probably takes even longer than executing the
jobs themselves. That delay is a function of available HW resources and
the zuul queue length, and the proposed change affects those parameters
as well, assuming jobs with failed dependencies won't run at all. So we
could expect longer execution times compensated by shorter wait times!
I'm not sure how to estimate that, though. You folks have all the numbers
and knowledge - let's use that.
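
To make that trade-off concrete, here is a back-of-envelope sketch in
Python; every number in it is an assumed placeholder, not a measurement:

  # Expected time-to-results for the child job, with and without the
  # dependency, under purely assumed numbers.
  queue_wait = 60        # minutes waiting for nodes (assumed)
  parent = 90            # containers-multinode runtime (assumed)
  child = 150            # 3nodes-multinode runtime (assumed)
  p_parent_fail = 0.3    # parent job failure rate (assumed)

  # Today: both jobs queue once and run in parallel.
  independent = queue_wait + max(parent, child)

  # With the dependency: the child queues again after the parent
  # passes, and is skipped entirely when the parent fails.
  dependent = queue_wait + parent + (1 - p_parent_fail) * (queue_wait + child)

  print(independent, round(dependent))  # 210 vs 297 with these numbers

A static model like this only shows the added latency; the hoped-for gain
is that skipped children free up HW and shrink queue_wait itself, which is
exactly the feedback that is hard to estimate without the real numbers.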

[openstack-dev] [octavia] Multiple availability zone and network region support

2018-05-25 Thread mihaela.balas
Hello,

Is there any way to set up Octavia so that we are able to launch amphorae
in different AZs, connected to a different network in each AZ?

Thank you,
Mihaela Balas

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.



Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

2018-05-25 Thread Chen CH Ji
We are continuing to evaluate ways to remove the restrictions in the
future. One question on the following comments:

>>>Why don't you support the metadata service? That's a pretty fundamental
mechanism for nova and openstack. It's the only way you can get a live
copy of metadata, and it's the only way you can get access to device
tags when you hot-attach something. Personally, I think that it's
something that needs to work.

As Matt mentioned in https://review.openstack.org/#/c/562154/ PS#4, as far
as I know the metadata service is not a basic feature; it's optional, and
some deployments don't run it because of possible security concerns. So
that seems to be a different suggestion... And as for the following
statement:

It's the only way you can get a live copy of metadata, and it's the only
way you can get access to device tags when you hot-attach something

Can I know a use case for this 'live copy of metadata' or the 'only way to
access device tags when hot-attaching'? My thought is that this is a
one-time thing on the cloud-init side, either through the metadata service
or the config drive, and won't be used later - so why would I need a live
copy? And since nova does the hot attach, why is it the only way to access
the tags? What executable in the deployed VM will access the device -
cloud-init or something else?

Thanks a lot for your help

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM@IBMCN   Internet: jiche...@cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC



From:   Dan Smith 
To: "Chen CH Ji" 
Cc: "OpenStack Development Mailing List \(not for usage questions
\)" 
Date:   04/13/2018 09:46 PM
Subject:Re: [openstack-dev] [Nova] z/VM introducing a new config
driveformat



> for the run_validation=False issue, you are right: because the z/VM driver
> only supports the config drive and doesn't support the metadata service,
> we made a bad assumption and took the wrong action of disabling the whole
> ssh check. Actually, according to [1], we should only disable
> CONF.compute_feature_enabled.metadata_service but keep both
> self.run_ssh and CONF.compute_feature_enabled.config_drive as True in
> order to make the config drive test validation take effect; our CI will
> handle that

Why don't you support the metadata service? That's a pretty fundamental
mechanism for nova and openstack. It's the only way you can get a live
copy of metadata, and it's the only way you can get access to device
tags when you hot-attach something. Personally, I think that it's
something that needs to work.
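
To illustrate the 'live copy' point, a minimal sketch of what a guest can
query at any time while running; the endpoint and the devices key come
from nova's standard metadata API, the rest is illustrative:

  import json
  import urllib.request

  # Nova's metadata service answers at any point in the instance's
  # life, so devices tagged at hot-attach time show up without a
  # reboot; a config drive, by contrast, is frozen at boot.
  URL = "http://169.254.169.254/openstack/latest/meta_data.json"

  with urllib.request.urlopen(URL, timeout=5) as resp:
      meta = json.load(resp)

  for dev in meta.get("devices", []):  # device role tags, if any
      print(dev.get("bus"), dev.get("address"), dev.get("tags"))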

> For the tgz/iso9660 question below, this is because we got wrong info
> from the low-layer component folks back in 2012. After discussing with
> some experts again: we actually can create iso9660 in the driver layer
> and pass it down to the spawned virtual machine, and during the startup
> process the VM itself will mount the iso file and consume it. From the
> linux perspective, either tgz or iso9660 doesn't matter; we only need
> some files in order to transfer the information from the openstack
> compute node to the spawned VM. So our action is to change the format
> from tgz to iso9660 and keep consistent with the other drivers.

The "iso file" will not be inside the guest, but rather passed to the
guest as a block device, right?
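
For context, a minimal sketch of the usual guest-side consumption path;
the label and paths follow the common "config-2" convention and nothing
here is z/VM-specific:

  import json
  import subprocess
  import tempfile

  # The config drive arrives as a block device whose filesystem is
  # labeled "config-2"; the guest (normally cloud-init) mounts it
  # read-only and reads plain files from it.
  dev = "/dev/disk/by-label/config-2"
  mnt = tempfile.mkdtemp()
  subprocess.run(["mount", "-o", "ro", dev, mnt], check=True)
  try:
      with open(f"{mnt}/openstack/latest/meta_data.json") as f:
          meta = json.load(f)
      print(meta.get("uuid"), meta.get("name"))
  finally:
      subprocess.run(["umount", mnt], check=True)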

> For the config drive working mechanism question: according to [2], z/VM
> is a Type 1 hypervisor while Qemu/KVM are most likely Type 2 hypervisors.
> There is no file system in the z/VM hypervisor (I omit too much detail
> here), so we can't keep a file such as a qcow2 image in the host
> operating system the way a linux operating system can,

I'm not sure what the type-1-ness has to do with this. The hypervisor
doesn't need to support any specific filesystem for this to work. Many
drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm)
and you can argue that KVM is type-1-ish. They support configdrive.

> what we do is use a special file pool to store the config drive, and
> during the VM init process we read that file from the special device and
> attach it to the VM in iso9660 format; then cloud-init will handle the
> follow-up. The cloud-init handling process is identical to other platforms

This and the previous mention of this sort of behavior have me
concerned. Are you describing some sort of process that runs when the
instance is starting to initialize its environment, or something that
runs *inside* the instance, and thus functionality that has to exist in
the *image* to work?

--Dan


