[OpenStack-Infra] https on eavesdrop.o.o broken

2018-11-13 Thread Andreas Jaeger
Reviewing a patch that changed http://eavesdrop.o.o to https, I checked
and noticed that the URL gives a misconfiguration error. Should we stop
running Apache on port 443 for eavesdrop - or configure the server
properly with https?
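
If the latter, the vhost would only need the usual mod_ssl bits, roughly like
this (a sketch only; certificate paths and DocumentRoot are placeholders, not
the actual infra config):

    <VirtualHost *:443>
        ServerName eavesdrop.openstack.org
        DocumentRoot /srv/static/eavesdrop
        SSLEngine on
        SSLCertificateFile    /etc/letsencrypt/live/eavesdrop.openstack.org/fullchain.pem
        SSLCertificateKeyFile /etc/letsencrypt/live/eavesdrop.openstack.org/privkey.pem
    </VirtualHost>

Otherwise, dropping the :443 listener/vhost from the eavesdrop Apache config
would at least stop serving the broken endpoint.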

Sending it here so that somebody can clean it up later and we don't
forget it...

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra

Re: [Openstack] [openstack][nova|cinder] niscsiadm: Could not log into all portals || VolumeNotFound exception

2018-11-13 Thread Bernd Bausch
You launch a volume-backed instance. The volume can't be attached, so 
the instance can't be launched.


The volume can't be attached because iSCSI authentication fails. Either 
it's not set up correctly in cinder.conf on the controller, or you hit a 
bug. When you google for "iscsi authentication" and "Cinder" or "Nova",
you get plenty of hits, such as 
https://ask.openstack.org/en/question/92482/unable-to-attach-cinder-volume-iscsi-require-authentication/.


This is the command that fails on the controller. The IP address or 
target might be incorrect, or the credentials, which are set by earlier 
iscsiadm commands.

iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login
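
A rough way to check this by hand from the controller (a sketch only; the CHAP
username/password placeholders stand for whatever Cinder generated for the
volume):

    # can we even reach the target portal?
    iscsiadm -m discovery -t sendtargets -p 172.23.29.118:3260

    # set the CHAP credentials on the node record, then retry the login
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e \
      -p 172.23.29.118:3260 --op update -n node.session.auth.authmethod -v CHAP
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e \
      -p 172.23.29.118:3260 --op update -n node.session.auth.username -v <chap-user>
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e \
      -p 172.23.29.118:3260 --op update -n node.session.auth.password -v <chap-password>
    iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e \
      -p 172.23.29.118:3260 --login

If the discovery step already times out, the problem is connectivity (firewall,
or the target IP configured on the storage node) rather than credentials.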

Bernd


smime.p7s
Description: S/MIME Cryptographic Signature
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Slawomir Kaplonski
Hi,

I think it was published, see 
http://lists.openstack.org/pipermail/openstack/2018-November/047172.html

> Wiadomość napisana przez Jeremy Freudberg  w dniu 
> 14.11.2018, o godz. 06:12:
> 
> Hey Tony,
> 
> What's the reason for the results of the poll not being public?
> 
> Thanks,
> Jeremy
> On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>> 
>> 
>> Hi everybody!
>> 
>> As the subject reads, the "T" release of OpenStack is officially
>> "Train".  Unlike recent choices Train was the popular choice so
>> congrats!
>> 
>> Thanks to everybody who participated and help with the naming process.
>> 
>> Lets make OpenStack Train the release so awesome that people can't help
>> but choo-choo-choose to run it[1]!
>> 
>> 
>> Yours Tony.
>> [1] Too soon? Too much?
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Slawomir Kaplonski
Hi,

I think it was published, see 
http://lists.openstack.org/pipermail/openstack/2018-November/047172.html

> Wiadomość napisana przez Jeremy Freudberg  w dniu 
> 14.11.2018, o godz. 06:12:
> 
> Hey Tony,
> 
> What's the reason for the results of the poll not being public?
> 
> Thanks,
> Jeremy
> On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>> 
>> 
>> Hi everybody!
>> 
>> As the subject reads, the "T" release of OpenStack is officially
>> "Train".  Unlike recent choices Train was the popular choice so
>> congrats!
>> 
>> Thanks to everybody who participated and help with the naming process.
>> 
>> Lets make OpenStack Train the release so awesome that people can't help
>> but choo-choo-choose to run it[1]!
>> 
>> 
>> Yours Tony.
>> [1] Too soon? Too much?
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Slawomir Kaplonski
Hi,

I think it was published, see 
http://lists.openstack.org/pipermail/openstack/2018-November/047172.html

> Wiadomość napisana przez Jeremy Freudberg  w dniu 
> 14.11.2018, o godz. 06:12:
> 
> Hey Tony,
> 
> What's the reason for the results of the poll not being public?
> 
> Thanks,
> Jeremy
> On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>> 
>> 
>> Hi everybody!
>> 
>> As the subject reads, the "T" release of OpenStack is officially
>> "Train".  Unlike recent choices Train was the popular choice so
>> congrats!
>> 
>> Thanks to everybody who participated and help with the naming process.
>> 
>> Lets make OpenStack Train the release so awesome that people can't help
>> but choo-choo-choose to run it[1]!
>> 
>> 
>> Yours Tony.
>> [1] Too soon? Too much?
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Slawek Kaplonski
Senior software engineer
Red Hat


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [openstack][nova|cinder] niscsiadm: Could not log into all portals || VolumeNotFound exception

2018-11-13 Thread Tushar Tyagi
Hi,

I have created a development setup with Devstack on 2 machines, where
one is a controller + compute node (IP: 172.23.29.96) and the other is a
storage node (IP: 172.23.29.118). I am running all the services on the
controller, except the c-vol service, which runs on the storage node.

Whenever I try to create a new instance, the volume gets created but
the instance is stuck in "Spawning" status and after some time errors
out. The volume is then not attached to the VM. These volumes are
LVM-based, iSCSI-backed volumes.

During this time, I can see the following two errors of interest in the
stack traces/logs. It would be really helpful if anyone here could take
a look and point me in the right direction.

I've also attached the stack traces in case the formatting gets messed up. 

== START OF STACK TRACE 1 =

DEBUG oslo.privsep.daemon [-] privsep: Exception during request[140390878014992]: Unexpected error while running command.
Command: iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login
Exit code: 8
Stdout: u'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260] (multiple)\n'
Stderr: u'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260].\niscsiadm: initiator reported error (8 - connection timed out)\niscsiadm: Could not log into all portals\n' {{(pid=14097) loop /usr/lib/python2.7/site-packages/oslo_privsep/daemon.py:449}}
ERROR oslo.privsep.daemon Traceback (most recent call last):
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 445, in loop
ERROR oslo.privsep.daemon     reply = self._process_cmd(*msg)
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/oslo_privsep/daemon.py", line 428, in _process_cmd
ERROR oslo.privsep.daemon     ret = func(*f_args, **f_kwargs)
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 209, in _wrap
ERROR oslo.privsep.daemon     return func(*args, **kwargs)
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 194, in execute_root
ERROR oslo.privsep.daemon     return custom_execute(*cmd, shell=False, run_as_root=False, **kwargs)
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/os_brick/privileged/rootwrap.py", line 143, in custom_execute
ERROR oslo.privsep.daemon     on_completion=on_completion, *cmd, **kwargs)
ERROR oslo.privsep.daemon   File "/usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py", line 424, in execute
ERROR oslo.privsep.daemon     cmd=sanitized_cmd)
ERROR oslo.privsep.daemon ProcessExecutionError: Unexpected error while running command.
ERROR oslo.privsep.daemon Command: iscsiadm -m node -T iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e -p 172.23.29.118:3260 --login
ERROR oslo.privsep.daemon Exit code: 8
ERROR oslo.privsep.daemon Stdout: u'Logging in to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260] (multiple)\n'
ERROR oslo.privsep.daemon Stderr: u'iscsiadm: Could not login to [iface: default, target: iqn.2010-10.org.openstack:volume-9ae59d83-ec09-4fd5-aa2c-5049a29bed5e, portal: 172.23.29.118,3260].\niscsiadm: initiator reported error (8 - connection timed out)\niscsiadm: Could not log into all portals\n'
ERROR oslo.privsep.daemon

== END OF STACK TRACE 1   ===


== START OF STACK TRACE 2 ===


[instance: d2a44cc2-c367-4d6e-b572-0d174e44d817] Instance failed to spawn: VolumeDeviceNotFound: Volume device not found at .
Error:  Traceback (most recent call last):
Error:    File "/opt/stack/nova/nova/compute/manager.py", line 2357, in _build_resources
Error:      yield resources
Error:    File "/opt/stack/nova/nova/compute/manager.py", line 2121, in _build_and_run_instance
Error:      block_device_info=block_device_info)
Error:    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3075, in spawn
Error:      mdevs=mdevs)
Error:    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5430, in _get_guest_xml
Error:      context, mdevs)
Error:    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 5216, in _get_guest_config
Error:      flavor, guest.os_type)
Error:    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 4030, in _get_guest_storage_config
Error:      self._connect_volume(context, connection_info, instance)
Error:    File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 1222, in _connect_volume
Error:      vol_driver.connect_volume(connection_info, instance)
Error:    File "/opt/stack/nova/nova/virt/libvirt/volume/iscsi.py", line 64, in connect_volume
Error:      device_info = 

Re: [openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Jeremy Freudberg
Hey Tony,

What's the reason for the results of the poll not being public?

Thanks,
Jeremy
On Tue, Nov 13, 2018 at 11:52 PM Tony Breeds  wrote:
>
>
> Hi everybody!
>
> As the subject reads, the "T" release of OpenStack is officially
> "Train".  Unlike recent choices Train was the popular choice so
> congrats!
>
> Thanks to everybody who participated and help with the naming process.
>
> Lets make OpenStack Train the release so awesome that people can't help
> but choo-choo-choose to run it[1]!
>
>
> Yours Tony.
> [1] Too soon? Too much?
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Tony Breeds

Hi everybody!

As the subject reads, the "T" release of OpenStack is officially
"Train".  Unlike recent choices Train was the popular choice so
congrats!

Thanks to everybody who participated and helped with the naming process.

Let's make OpenStack Train the release so awesome that people can't help
but choo-choo-choose to run it[1]!


Yours Tony.
[1] Too soon? Too much?


signature.asc
Description: PGP signature
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Tony Breeds

Hi everybody!

As the subject reads, the "T" release of OpenStack is officially
"Train".  Unlike recent choices Train was the popular choice so
congrats!

Thanks to everybody who participated and helped with the naming process.

Let's make OpenStack Train the release so awesome that people can't help
but choo-choo-choose to run it[1]!


Yours Tony.
[1] Too soon? Too much?


signature.asc
Description: PGP signature
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [all] All Hail our Newest Release Name - OpenStack Train

2018-11-13 Thread Tony Breeds

Hi everybody!

As the subject reads, the "T" release of OpenStack is officially
"Train".  Unlike recent choices Train was the popular choice so
congrats!

Thanks to everybody who participated and helped with the naming process.

Let's make OpenStack Train the release so awesome that people can't help
but choo-choo-choose to run it[1]!


Yours Tony.
[1] Too soon? Too much?


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [Openstack-sigs] new SIGs to cover use cases

2018-11-13 Thread Yih Leong, Sun.
Wondering if we should *up level* to Multi-Cloud, where Hybrid Cloud can be
a subset of Multi-Cloud.
I think Scientific SIG can still focus on Scientific and HPC, whereas
Multi/Hybrid Cloud will support broader use cases.


On Tue, Nov 13, 2018 at 8:22 AM Stig Telfer  wrote:

> You are right to make the connection - this is a subject that regularly
> comes up in the discussions of the Scientific SIG, though it’s just one of
> many use cases for hybrid cloud.  If a new SIG was created around hybrid
> cloud, it would be useful to have it closely connected with the Scientific
> SIG.
>
> Cheers,
> Stig
>
>
> > On 13 Nov 2018, at 09:01,  <
> arkady.kanev...@dell.com> wrote:
> >
> > Good point.
> > Adding SIG list.
> >
> > -Original Message-
> > From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> > Sent: Monday, November 12, 2018 4:46 PM
> > To: openstack-operators@lists.openstack.org
> > Subject: Re: [Openstack-operators] new SIGs to cover use cases
> >
> >
> > [EXTERNAL EMAIL]
> > Please report any suspicious attachments, links, or requests for
> sensitive information.
> >
> >
> > On 2018-11-12 15:46:38 + (+), arkady.kanev...@dell.com wrote:
> > [...]
> >>  1.  Do we have or want to create a user community around Hybrid cloud.
> > [...]
> >>  2.  As we target AI/ML as 2019 target application domain do we
> >>  want to create a SIG for it? Or do we extend scientific
> >>  community SIG to cover it?
> > [...]
> >
> > It may also be worthwhile to ask this on the openstack-sigs mailing
> > list.
> > --
> > Jeremy Stanley
> >
> > ___
> > OpenStack-operators mailing list
> > OpenStack-operators@lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>
> ___
> openstack-sigs mailing list
> openstack-s...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [tripleo] no recheck / no workflow until gate is stable

2018-11-13 Thread Emilien Macchi
We have serious issues with the gate at this time; we believe it is a mix
of mirror errors (infra) and tempest timeouts (see
https://review.openstack.org/617845).

Until the situation is resolved, do not recheck or approve any patch for
now.
Thanks for your understanding,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] Enabling py37 unit tests

2018-11-13 Thread Corey Bryant
On Wed, Nov 7, 2018 at 11:12 AM Clark Boylan  wrote:

> On Wed, Nov 7, 2018, at 4:47 AM, Mohammed Naser wrote:
> > On Wed, Nov 7, 2018 at 1:37 PM Doug Hellmann 
> wrote:
> > >
> > > Corey Bryant  writes:
> > >
> > > > On Wed, Oct 10, 2018 at 8:45 AM Corey Bryant <
> corey.bry...@canonical.com>
> > > > wrote:
> > > >
> > > > I'd like to start moving forward with enabling py37 unit tests for a
> subset
> > > > of projects. Rather than putting too much load on infra by enabling
> 3 x py3
> > > > unit tests for every project, this would just focus on enablement of
> py37
> > > > unit tests for a subset of projects in the Stein cycle. And just to
> be
> > > > clear, I would not be disabling any unit tests (such as py35). I'd
> just be
> > > > enabling py37 unit tests.
> > > >
> > > > As some background, this ML thread originally led to updating the
> > > > python3-first governance goal (
> https://review.openstack.org/#/c/610708/)
> > > > but has now led back to this ML thread for a +1 rather than updating
> the
> > > > governance goal.
> > > >
> > > > I'd like to get an official +1 here on the ML from parties such as
> the TC
> > > > and infra in particular but anyone else's input would be welcomed
> too.
> > > > Obviously individual projects would have the right to reject proposed
> > > > changes that enable py37 unit tests. Hopefully they wouldn't, of
> course,
> > > > but they could individually vote that way.
> > > >
> > > > Thanks,
> > > > Corey
> > >
> > > This seems like a good way to start. It lets us make incremental
> > > progress while we take the time to think about the python version
> > > management question more broadly. We can come back to the other
> projects
> > > to add 3.7 jobs and remove 3.5 jobs when we have that plan worked out.
> >
> > What's the impact on the number of consumption in upstream CI node usage?
> >
>
> For period from 2018-10-25 15:16:32,079 to 2018-11-07 15:59:04,994,
> openstack-tox-py35 jobs in aggregate represent 0.73% of our total capacity
> usage.
>
> I don't expect py37 to significantly deviate from that. Again the major
> resource consumption is dominated by a small number of projects/repos/jobs.
> Generally testing outside of that bubble doesn't represent a significant
> resource cost.
>
> I see no problem with adding python 3.7 unit testing from an
> infrastructure perspective.
>
> Clark
>
>
>
Thanks all for the input on this. It seems like we have no objections to
moving forward so I'll plan on getting started soon.
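
For projects that want to opt in early, the change is typically just a Zuul
job addition plus a tox environment, roughly like this (a sketch only; the
job name is assumed to match what openstack-zuul-jobs provides):

    # .zuul.yaml
    - project:
        check:
          jobs:
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py37

    # tox.ini (inherits deps/commands from [testenv])
    [testenv:py37]
    basepython = python3.7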

Thanks,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Satish Patel
Sean,

Thank you for the detailed explanation. I really hope we can backport
it to queens, as it would be harder for me to upgrade the cluster!

On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney  wrote:
>
> On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> > Mike,
> >
> > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
> actully this is a releated but different bug based in the description below.
> thanks for highlighting this to me.
> >
> > Cc'ing: Sean
> >
> > Sent from my iPhone
> >
> > On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> >
> > > Mike,
> > >
> > > I had same issue month ago when I roll out sriov in my cloud and this is 
> > > what I did to solve this issue. Set
> > > following in flavor
> > >
> > > hw:numa_nodes=2
> > >
> > > It will spread out instance vcpu across numa, yes there will be little 
> > > penalty but if you tune your application
> > > according they you are good
> > >
> > > Yes this is bug I have already open ticket and I believe folks are 
> > > working on it but its not simple fix. They may
> > > release new feature in coming oprnstack release.
> > >
> > > Sent from my iPhone
> > >
> > > On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> > >
> > > > Hi folks,
> > > >
> > > > It appears that the numa_policy attribute of a PCI alias is ignored for 
> > > > flavors referencing that alias if the
> > > > flavor also has hw:cpu_policy=dedicated set.  The alias config is:
> > > >
> > > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> > > > "product_id": "1004", "numa_policy":
> > > > "preferred" }
> > > >
> > > > And the flavor config is:
> > > >
> > > > {
> > > >   "OS-FLV-DISABLED:disabled": false,
> > > >   "OS-FLV-EXT-DATA:ephemeral": 0,
> > > >   "access_project_ids": null,
> > > >   "disk": 10,
> > > >   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
> > > >   "name": "vm-2",
> > > >   "os-flavor-access:is_public": true,
> > > >   "properties": "hw:cpu_policy='dedicated', 
> > > > pci_passthrough:alias='mlx:1'",
> > > >   "ram": 8192,
> > > >   "rxtx_factor": 1.0,
> > > >   "swap": "",
> > > >   "vcpus": 2
> > > > }
> Satish in your case you were trying to use neutrons sriov vnic types such 
> that the VF would be connected to a neutron
> network. In this case the mellanox connectx 3 virtual funcitons are being 
> passed to the guest using the pci alias via
> the flavor which means they cannot be used to connect to neutron networks but 
> they should be able to use affinity
> poileices.
> > > >
> > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) 
> > > > with 16 VFs configured.  We wish to expose
> > > > these VFs to VMs that schedule on the host.  However, the NIC is in 
> > > > NUMA region 0 which means that only half of
> > > > the compute node's CPU cores would be usable if we required VM affinity 
> > > > to the NIC's NUMA region.  But we don't
> > > > need that, since we are okay with cross-region access to the PCI device.
> > > >
> > > > However, we do need CPU pinning to work, in order to have efficient 
> > > > cache hits on our VM processes.  Therefore, we
> > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a 
> > > > NUMA region opposite of the NIC.  The spec
> > > > for numa_policy seem to indicate that this is exactly the intent of the 
> > > > option:
> > > >
> > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> > > >
> > > > But, with the above config, we still get PCI affinity scheduling errors:
> > > >
> > > > 'Insufficient compute resources: Requested instance NUMA topology 
> > > > together with requested PCI devices cannot fit
> > > > the given host NUMA topology.'
> > > >
> > > > This strikes me as a bug, but perhaps I am missing something here?
> yes this does infact seam like a new bug.
> can you add myself and stephen to the bug once you file it.
> in the bug please include the version of opentack you were deploying.
>
> in the interim setting hw:numa_nodes=2 will allow you to pin the guest 
> without the error
> however the flavor and alias you have provided should have been enough.
>
> im hoping that we can fix both the alisa and neutorn based case this cycle 
> but to do so we
> will need to reporpose original queens spec for stein and disucss if we can 
> backport any of the
> fixes or if this would be only completed in stein+ i would hope we coudl 
> backport fixes for the flavor
> based use case but the neutron based sue case would likely be stein+
>
> regards
> sean
> > > >
> > > > Thanks,
> > > > MJ
> > > > ___
> > > > Mailing list: 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > > Post to : openstack@lists.openstack.org
> > > > Unsubscribe : 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: 

Re: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Satish Patel
Sean,

Thank you for the detailed explanation. I really hope we can backport
it to queens, as it would be harder for me to upgrade the cluster!

On Tue, Nov 13, 2018 at 8:42 AM Sean Mooney  wrote:
>
> On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> > Mike,
> >
> > Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
> actully this is a releated but different bug based in the description below.
> thanks for highlighting this to me.
> >
> > Cc'ing: Sean
> >
> > Sent from my iPhone
> >
> > On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> >
> > > Mike,
> > >
> > > I had same issue month ago when I roll out sriov in my cloud and this is 
> > > what I did to solve this issue. Set
> > > following in flavor
> > >
> > > hw:numa_nodes=2
> > >
> > > It will spread out instance vcpu across numa, yes there will be little 
> > > penalty but if you tune your application
> > > according they you are good
> > >
> > > Yes this is bug I have already open ticket and I believe folks are 
> > > working on it but its not simple fix. They may
> > > release new feature in coming oprnstack release.
> > >
> > > Sent from my iPhone
> > >
> > > On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> > >
> > > > Hi folks,
> > > >
> > > > It appears that the numa_policy attribute of a PCI alias is ignored for 
> > > > flavors referencing that alias if the
> > > > flavor also has hw:cpu_policy=dedicated set.  The alias config is:
> > > >
> > > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> > > > "product_id": "1004", "numa_policy":
> > > > "preferred" }
> > > >
> > > > And the flavor config is:
> > > >
> > > > {
> > > >   "OS-FLV-DISABLED:disabled": false,
> > > >   "OS-FLV-EXT-DATA:ephemeral": 0,
> > > >   "access_project_ids": null,
> > > >   "disk": 10,
> > > >   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
> > > >   "name": "vm-2",
> > > >   "os-flavor-access:is_public": true,
> > > >   "properties": "hw:cpu_policy='dedicated', 
> > > > pci_passthrough:alias='mlx:1'",
> > > >   "ram": 8192,
> > > >   "rxtx_factor": 1.0,
> > > >   "swap": "",
> > > >   "vcpus": 2
> > > > }
> Satish in your case you were trying to use neutrons sriov vnic types such 
> that the VF would be connected to a neutron
> network. In this case the mellanox connectx 3 virtual funcitons are being 
> passed to the guest using the pci alias via
> the flavor which means they cannot be used to connect to neutron networks but 
> they should be able to use affinity
> poileices.
> > > >
> > > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) 
> > > > with 16 VFs configured.  We wish to expose
> > > > these VFs to VMs that schedule on the host.  However, the NIC is in 
> > > > NUMA region 0 which means that only half of
> > > > the compute node's CPU cores would be usable if we required VM affinity 
> > > > to the NIC's NUMA region.  But we don't
> > > > need that, since we are okay with cross-region access to the PCI device.
> > > >
> > > > However, we do need CPU pinning to work, in order to have efficient 
> > > > cache hits on our VM processes.  Therefore, we
> > > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a 
> > > > NUMA region opposite of the NIC.  The spec
> > > > for numa_policy seem to indicate that this is exactly the intent of the 
> > > > option:
> > > >
> > > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> > > >
> > > > But, with the above config, we still get PCI affinity scheduling errors:
> > > >
> > > > 'Insufficient compute resources: Requested instance NUMA topology 
> > > > together with requested PCI devices cannot fit
> > > > the given host NUMA topology.'
> > > >
> > > > This strikes me as a bug, but perhaps I am missing something here?
> yes this does infact seam like a new bug.
> can you add myself and stephen to the bug once you file it.
> in the bug please include the version of opentack you were deploying.
>
> in the interim setting hw:numa_nodes=2 will allow you to pin the guest 
> without the error
> however the flavor and alias you have provided should have been enough.
>
> im hoping that we can fix both the alisa and neutorn based case this cycle 
> but to do so we
> will need to reporpose original queens spec for stein and disucss if we can 
> backport any of the
> fixes or if this would be only completed in stein+ i would hope we coudl 
> backport fixes for the flavor
> based use case but the neutron based sue case would likely be stein+
>
> regards
> sean
> > > >
> > > > Thanks,
> > > > MJ
> > > > ___
> > > > Mailing list: 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > > Post to : openst...@lists.openstack.org
> > > > Unsubscribe : 
> > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack 

Re: [Openstack-operators] new SIGs to cover use cases

2018-11-13 Thread Stig Telfer
You are right to make the connection - this is a subject that regularly comes 
up in the discussions of the Scientific SIG, though it’s just one of many use 
cases for hybrid cloud.  If a new SIG was created around hybrid cloud, it would 
be useful to have it closely connected with the Scientific SIG.

Cheers,
Stig


> On 13 Nov 2018, at 09:01,  
>  wrote:
> 
> Good point.
> Adding SIG list.
> 
> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
> Sent: Monday, November 12, 2018 4:46 PM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] new SIGs to cover use cases
> 
> 
> [EXTERNAL EMAIL] 
> Please report any suspicious attachments, links, or requests for sensitive 
> information.
> 
> 
> On 2018-11-12 15:46:38 + (+), arkady.kanev...@dell.com wrote:
> [...]
>>  1.  Do we have or want to create a user community around Hybrid cloud.
> [...]
>>  2.  As we target AI/ML as 2019 target application domain do we
>>  want to create a SIG for it? Or do we extend scientific
>>  community SIG to cover it?
> [...]
> 
> It may also be worthwhile to ask this on the openstack-sigs mailing
> list.
> -- 
> Jeremy Stanley
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] operators get-together today at the Berlin Summit

2018-11-13 Thread Jonathan D. Proulx
On Tue, Nov 13, 2018 at 12:57:02PM +0100, Chris Morgan wrote:
:   We never did come up with a good plan for a separate event for
:   operators this evening, so I think maybe we should just meet up at the
:   marketplace mixer, so may I propose meet at the front at 6pm?
:   Chris
:   --
:   Chris Morgan <[1]mihali...@gmail.com>

Sure sounds good to me.

-Jon

:References
:
:   1. mailto:mihali...@gmail.com

:___
:OpenStack-operators mailing list
:OpenStack-operators@lists.openstack.org
:http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack] How to detach Image from instance

2018-11-13 Thread Soheil Pourbafrani
Hi,

I launched an instance using a CentOS ISO image and installed it on the
ephemeral disk (no volume created). After finishing the installation, I
rebooted the instance, but it booted the CentOS image again. Under
/var/lib/nova on the compute node I can see that a disk for the instance
was created. So I guess removing the image from the instance would force
it to boot the installed OS. Is there any way to do that?
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Sean Mooney
On Tue, 2018-11-13 at 07:52 -0500, Satish Patel wrote:
> Mike,
> 
> Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920
actually this is a related but different bug, based on the description below.
Thanks for highlighting this to me.
> 
> Cc'ing: Sean 
> 
> Sent from my iPhone
> 
> On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> 
> > Mike,
> > 
> > I had same issue month ago when I roll out sriov in my cloud and this is 
> > what I did to solve this issue. Set
> > following in flavor 
> > 
> > hw:numa_nodes=2
> > 
> > It will spread out instance vcpu across numa, yes there will be little 
> > penalty but if you tune your application
> > according they you are good 
> > 
> > Yes this is bug I have already open ticket and I believe folks are working 
> > on it but its not simple fix. They may
> > release new feature in coming oprnstack release. 
> > 
> > Sent from my iPhone
> > 
> > On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
> > 
> > > Hi folks,
> > > 
> > > It appears that the numa_policy attribute of a PCI alias is ignored for 
> > > flavors referencing that alias if the
> > > flavor also has hw:cpu_policy=dedicated set.  The alias config is:
> > > 
> > > alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
> > > "product_id": "1004", "numa_policy":
> > > "preferred" }
> > > 
> > > And the flavor config is:
> > > 
> > > {
> > >   "OS-FLV-DISABLED:disabled": false,
> > >   "OS-FLV-EXT-DATA:ephemeral": 0,
> > >   "access_project_ids": null,
> > >   "disk": 10,
> > >   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
> > >   "name": "vm-2",
> > >   "os-flavor-access:is_public": true,
> > >   "properties": "hw:cpu_policy='dedicated', 
> > > pci_passthrough:alias='mlx:1'",
> > >   "ram": 8192,
> > >   "rxtx_factor": 1.0,
> > >   "swap": "",
> > >   "vcpus": 2
> > > }
Satish, in your case you were trying to use neutron's sriov vnic types such that 
the VF would be connected to a neutron
network. In this case the mellanox connectx 3 virtual functions are being 
passed to the guest using the pci alias via
the flavor, which means they cannot be used to connect to neutron networks, but 
they should be able to use affinity
policies. 
> > > 
> > > In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 
> > > 16 VFs configured.  We wish to expose
> > > these VFs to VMs that schedule on the host.  However, the NIC is in NUMA 
> > > region 0 which means that only half of
> > > the compute node's CPU cores would be usable if we required VM affinity 
> > > to the NIC's NUMA region.  But we don't
> > > need that, since we are okay with cross-region access to the PCI device.
> > > 
> > > However, we do need CPU pinning to work, in order to have efficient cache 
> > > hits on our VM processes.  Therefore, we
> > > still want to pin our vCPUs to pCPUs, even if the pins end up on on a 
> > > NUMA region opposite of the NIC.  The spec
> > > for numa_policy seem to indicate that this is exactly the intent of the 
> > > option:
> > > 
> > > https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
> > > 
> > > But, with the above config, we still get PCI affinity scheduling errors:
> > > 
> > > 'Insufficient compute resources: Requested instance NUMA topology 
> > > together with requested PCI devices cannot fit
> > > the given host NUMA topology.'
> > > 
> > > This strikes me as a bug, but perhaps I am missing something here?
Yes, this does in fact seem like a new bug.
Can you add myself and stephen to the bug once you file it?
In the bug please include the version of openstack you were deploying.

In the interim, setting hw:numa_nodes=2 will allow you to pin the guest without 
the error;
however, the flavor and alias you have provided should have been enough.
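
As a concrete illustration of that interim workaround (a sketch only; the
flavor name is taken from the flavor shown above):

    openstack flavor set vm-2 --property hw:numa_nodes=2

That should let the guest schedule while the alias-based affinity policy is
being fixed.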

I'm hoping that we can fix both the alias and neutron based cases this cycle, but 
to do so we
will need to repropose the original queens spec for stein and discuss if we can 
backport any of the
fixes, or if this would only be completed in stein+. I would hope we could 
backport fixes for the flavor
based use case, but the neutron based use case would likely be stein+.

regards
sean
> > > 
> > > Thanks,
> > > MJ
> > > ___
> > > Mailing list: 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> > > Post to : openst...@lists.openstack.org
> > > Unsubscribe : 
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [glance] task in pending state, image in uploading state

2018-11-13 Thread Brian Rosmaita
On 11/12/18 7:29 PM, Bernd Bausch wrote:
> Thanks Brian. It's great to get an email from Mr. Glance.
> 
> I managed to patch Devstack, and a first test was successful. Perfect!

Glad it worked!

> A bit late, I then found numerous warnings in release notes and other
> documents that UWSGI should not be used when deploying Glance. My
> earlier web searches flew by these documents without noticing them.

We haven't made it easy for you in devstack, though.  As you can see
from the patch, it requires coordination across a few different teams to
make the appropriate changes to get all tests passing and the patch
merged.  When everyone's back from the summit, I'll see if I can get a
coordinated push across teams to get this done for Stein milestone 2.

This won't solve the larger problem of Glance not running in uWSGI,
though.  I'd refer people interested in having that happen to my
statement about this issue in the Queens release notes [0]; the
situation described there still stands.

[0]
https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues
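
For anyone who wants a quick way to exercise the import flow once the devstack
patch is applied, a rough sketch with the v2 client (image name, file and URL
are just placeholders):

    # web-download method
    glance image-create --name test-import --disk-format qcow2 --container-format bare
    glance image-import --import-method web-download \
        --uri http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img <image-id>

    # glance-direct method
    glance image-stage --file ./cirros-0.4.0-x86_64-disk.img <image-id>
    glance image-import --import-method glance-direct <image-id>

The image should end up 'active' and the associated task should move out of
'pending' instead of getting stuck.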


> Bernd
> 
> On 11/12/2018 11:27 PM, Brian Rosmaita wrote:
>> On 11/12/18 5:07 AM, Bernd Bausch wrote:
>>> Trying Glance's new import process, my images are all stuck in status
>>> uploading (both methods glance-direct and web-download).
>>>
>>> I can see that there are tasks for those images; they are pending. The
>>> Glance API log doesn't contain anything that clues me in (debug logging
>>> is enabled).
>>>
>>> The source code is too involved for my feeble Python and OpenStack
>>> Internals skills.
>>>
>>> *How can I find out what blocks the tasks? *
>>>
>>> This is a stable Rocky Devstack without any customization of the Glance
>>> config.
>>>
>> The tasks engine Glance uses to facilitate the "new" (experimental in
>> Pike, current in Queens) image import process does not work when Glance
>> is deployed as a WSGI application using uWSGI [0]; as you observed, the
>> tasks remain stuck in 'pending'.  You can apply this patch [1] to your
>> devstack Glance and restart devstack@g-api and image import should work
>> without additional glance api-changes (the patch applied cleanly last
>> time I checked, which was a Stein-1 milestone devstack; it should apply
>> cleanly to your stable Rocky devstack).  You may also want to take a
>> look at the Glance admin guide [2] to see what configuration options are
>> available.
>>
>> [0]
>> https://docs.openstack.org/releasenotes/glance/queens.html#relnotes-16-0-0-stable-queens-known-issues
>>
>> [1] https://review.openstack.org/#/c/545483/
>> [2]
>> https://docs.openstack.org/glance/latest/admin/interoperable-image-import.html
>>
>>
>>> ___
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to : openstack@lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>> ___
>> Mailing list:
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe :
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> 


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] [PackStack][cinder, nova] How to create ephemeral instance from volume

2018-11-13 Thread Soheil Pourbafrani
Hi,

I have some volumes with a snapshot of each. I was wondering whether it is
possible to create a new instance with only an ephemeral disk (not a root
disk) from them. Actually, I don't want to create a volume for new instances.

Thanks
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [nova] PCI alias attribute numa_policy ignored when flavor has hw:cpu_policy=dedicated set

2018-11-13 Thread Satish Patel
Mike,

Here is the bug which I reported https://bugs.launchpad.net/bugs/1795920

Cc'ing: Sean 

Sent from my iPhone

> On Nov 12, 2018, at 8:27 AM, Satish Patel  wrote:
> 
> Mike,
> 
> I had same issue month ago when I roll out sriov in my cloud and this is what 
> I did to solve this issue. Set following in flavor 
> 
> hw:numa_nodes=2
> 
> It will spread out instance vcpu across numa, yes there will be little 
> penalty but if you tune your application according they you are good 
> 
> Yes this is bug I have already open ticket and I believe folks are working on 
> it but its not simple fix. They may release new feature in coming oprnstack 
> release. 
> 
> Sent from my iPhone
> 
>> On Nov 11, 2018, at 9:25 PM, Mike Joseph  wrote:
>> 
>> Hi folks,
>> 
>> It appears that the numa_policy attribute of a PCI alias is ignored for 
>> flavors referencing that alias if the flavor also has 
>> hw:cpu_policy=dedicated set.  The alias config is:
>> 
>> alias = { "name": "mlx", "device_type": "type-VF", "vendor_id": "15b3", 
>> "product_id": "1004", "numa_policy": "preferred" }
>> 
>> And the flavor config is:
>> 
>> {
>>   "OS-FLV-DISABLED:disabled": false,
>>   "OS-FLV-EXT-DATA:ephemeral": 0,
>>   "access_project_ids": null,
>>   "disk": 10,
>>   "id": "221e1bcd-2dde-48e6-bd09-820012198908",
>>   "name": "vm-2",
>>   "os-flavor-access:is_public": true,
>>   "properties": "hw:cpu_policy='dedicated', pci_passthrough:alias='mlx:1'",
>>   "ram": 8192,
>>   "rxtx_factor": 1.0,
>>   "swap": "",
>>   "vcpus": 2
>> }
>> 
>> In short, our compute nodes have an SR-IOV Mellanox NIC (ConnectX-3) with 16 
>> VFs configured.  We wish to expose these VFs to VMs that schedule on the 
>> host.  However, the NIC is in NUMA region 0 which means that only half of 
>> the compute node's CPU cores would be usable if we required VM affinity to 
>> the NIC's NUMA region.  But we don't need that, since we are okay with 
>> cross-region access to the PCI device.
>> 
>> However, we do need CPU pinning to work, in order to have efficient cache 
>> hits on our VM processes.  Therefore, we still want to pin our vCPUs to 
>> pCPUs, even if the pins end up on on a NUMA region opposite of the NIC.  The 
>> spec for numa_policy seem to indicate that this is exactly the intent of the 
>> option:
>> 
>> https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/share-pci-between-numa-nodes.html
>> 
>> But, with the above config, we still get PCI affinity scheduling errors:
>> 
>> 'Insufficient compute resources: Requested instance NUMA topology together 
>> with requested PCI devices cannot fit the given host NUMA topology.'
>> 
>> This strikes me as a bug, but perhaps I am missing something here?
>> 
>> Thanks,
>> MJ
>> ___
>> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>> Post to : openstack@lists.openstack.org
>> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-13 Thread Sean Mooney
On Tue, 2018-11-13 at 12:27 +0100, Matt Riedemann wrote:
> On 11/13/2018 4:45 AM, Chen CH Ji wrote:
> > Got it, this is what I am looking for .. thank you
> 
> Regarding that you can do with server create, I believe it's:
> 
> 1. don't specify anything for networking, you get a port on the network 
> available to you; if there are multiple networks, it's a failure and the 
> user has to specify one.
> 
> 2. specify a network, nova creates a port on that network
In this case I believe neutron allocates one IPv4 address and one IPv6 address, 
assuming the network has
a subnet for each type.
> 
> 3. specify a port, nova uses that port and doesn't create anything in 
> neutron
In this case nova just reads the IPs that neutron has already allocated to the 
port and lists those for the instance.
> 
> 4. specify a network and fixed IP, nova creates a port on that network 
> using that fixed IP.
And in this case nova will create the port in neutron using the fixed IP you 
supplied, which will cause neutron to
attach the port to the correct subnet.
> 
> It sounds like you want #3 or #4.
> 

I think what is actually wanted is "openstack server create --nic net-id=,v4-fixed-ip="

We do not have a subnet-id option for --nic, so if you want to select the subnet 
as part of the boot you have to supply
the IP. Similarly, if you want neutron to select the IP you have to pre-create the 
port and use the --port option when
creating the vm. So, as Matt said, #3 or #4 are the best solutions for your 
request.
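
To make #3 and #4 concrete, roughly (flavor, image, network, subnet and IP
values here are only placeholders):

    # 4: nova creates the port; the fixed IP picks the subnet for you
    openstack server create --flavor m1.small --image cirros \
        --nic net-id=<net-uuid>,v4-fixed-ip=192.0.2.50 myvm

    # 3: pre-create the port on the subnet you want and hand it to nova
    openstack port create --network <net-uuid> --fixed-ip subnet=<subnet-uuid> myport
    openstack server create --flavor m1.small --image cirros --port myport myvm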



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][manila] PLEASE READ: Change of location for dinner ...

2018-11-13 Thread Jay S Bryant

Team,

The dinner has had to change locations.  Dicke Wirtin didn't get my 
online reservation and they are full.


NEW LOCATION: Joe's Restaurant and Wirsthaus -- Theodor-Heuss-Platz 10, 
14052 Berlin


The time is still 8 pm.

Please pass the word on!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] operators get-together today at the Berlin Summit

2018-11-13 Thread Chris Morgan
We never did come up with a good plan for a separate event for operators
this evening, so I think maybe we should just meet up at the marketplace
mixer. May I propose we meet at the front at 6pm?

Chris

-- 
Chris Morgan 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[openstack-dev] [vitrage] No IRC meeting this week

2018-11-13 Thread Ifat Afek
Hi,

We will not hold the Vitrage IRC meeting tomorrow, since some of our
contributors are in Berlin.
Our next meeting will be next Wednesday, November 21st.

Thanks,
Ifat.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] boot server with more than one subnet selection question

2018-11-13 Thread Matt Riedemann

On 11/13/2018 4:45 AM, Chen CH Ji wrote:

Got it, this is what I am looking for .. thank you


Regarding that you can do with server create, I believe it's:

1. don't specify anything for networking, you get a port on the network 
available to you; if there are multiple networks, it's a failure and the 
user has to specify one.


2. specify a network, nova creates a port on that network

3. specify a port, nova uses that port and doesn't create anything in 
neutron


4. specify a network and fixed IP, nova creates a port on that network 
using that fixed IP.


It sounds like you want #3 or #4.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] upgrade pike to queens get `xxx DBConnectionError SELECT 1`

2018-11-13 Thread Ignazio Cassano
Hi all,
upgrading from pike to queens I got a lot of errors in nova-api.log:

upgrade pike to queens get `xxx DBConnectionError SELECT 1`

My issue is the same reported at:

https://bugs.launchpad.net/oslo.db/+bug/1774544

The difference is that I am using CentOS 7.
I am also using haproxy to load-balance the galera cluster.
Modifying nova.conf to bypass haproxy and point at a single
galera cluster member makes the problem disappear.
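
For reference, one mitigation often suggested for galera-behind-haproxy setups
(worth checking against that bug report) is to keep the haproxy idle timeouts
longer than the services' connection recycle time, so pooled connections are
not silently dropped by the proxy. A sketch only; the values are examples, not
recommendations:

    # haproxy.cfg, galera listener
    listen galera
        bind <vip>:3306
        timeout client  90m
        timeout server  90m

    # nova.conf (and the other services going through the VIP)
    [database]
    # called idle_timeout on older releases
    connection_recycle_time = 280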

Please, any help ?

Regards
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] [PackStack][Cinder] On which node OpenStack store data of each instance

2018-11-13 Thread Bernd Bausch

Soheil,

I took the liberty to add the openstack distribution list back in.

Your description is a bit vague. Do you have dedicated nodes for 
storage, or do you run instances on the same nodes where storage is 
configured? Do you want to use volumes for instance storage, or 
ephemeral disks?


Volumes are normally located on remote servers or disk arrays, so that 
the answer is yes in this case. You can even pool storage of several 
nodes together using DRBD or (up to Newton) GlusterFS, but I have no 
experience in this area and can't tell you what would work and what 
would not.


To configure volume backends, see 
https://docs.openstack.org/cinder/rocky/configuration/block-storage/volume-drivers.html.
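
For a PackStack-style setup where cinder-volume uses local disks, a minimal LVM
backend looks roughly like this (a sketch only; the volume group name is
whatever you created on that node):

    # cinder.conf on the node running cinder-volume
    [DEFAULT]
    enabled_backends = lvm

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    iscsi_protocol = iscsi
    iscsi_helper = lioadm

Instances on other compute nodes then reach these volumes over iSCSI, so the
"local" disk of one node can serve volumes to the whole cloud.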


Ephemeral storage is normally local storage on the compute node where 
the instance runs. You can also use NFS-mounted remote filesystem for 
ephemeral storage.


Bernd.

On 11/13/2018 5:37 PM, Soheil Pourbafrani wrote:

Thanks all,

Suppose we use HDD disks of local machines and there are no shared 
storages like SAN storage. So in such an environment is it possible to 
use remote disks on other machines for compute nodes? (I think it's 
impossible with HDD local disks and for such a scenario we should have 
SAN storage).


So the question is is it possible to have volumes in local disk of 
compute nodes? or we should let OpenStack go!


On Mon, Nov 12, 2018 at 6:31 PM Bernd Bausch > wrote:


OpenStack stores volumes wherever you configure it to store them.
On a
disk array, an NFS server, a Ceph cluster, a dedicated storage
node, a
controller or even a compute node. And more.

My guess: Volumes on controllers or compute nodes are not a good
solution for production systems.

By default, Packstack implements Cinder volumes as LVM volumes on the
controller. It's probably possible to put the LVM volumes on other
nodes, and it is definitely possible to configure a different backend
than LVM, for example Netapp, in which case the volumes would be on a
Netapp appliance.

On 11/12/2018 9:34 PM, Soheil Pourbafrani wrote:
> My question is does OpenStack store volumes somewhere other than
> the compute node?
> For example in PackStack on two nodes, one for controller and
network
> and the other for compute node, the instance's volumes will be
stored
> on the controller or on compute?



smime.p7s
Description: S/MIME Cryptographic Signature
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack] Openstack maintenance updates - how to?

2018-11-13 Thread Volodymyr Litovka

Hi colleagues,

we're using OpenStack from the Ubuntu repositories. Everything is perfect 
except for cases when I manually apply patches before the supplier (e.g. 
Canonical) issues updated versions. The problem is that this happens 
neither immediately nor with the next update, so all patches I applied 
manually are overwritten during the next update.


How do you, friends, deal with this? Is the "manual" way (as described 
above) the safest and most reliable? Or maybe there is a "stable" branch of 
OpenStack components which can be used for maintenance? Or is the 
"master" branch a good and safe source for updating OpenStack 
components in such a way?


Any thoughts on this?

Thanks!

--
Volodymyr Litovka
  "Vision without Execution is Hallucination." -- Thomas Edison


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] [horizon] No meeting this week

2018-11-13 Thread Ivan Kolodyazhny
Hi team,

Let's skip the meeting tomorrow due to the OpenStack Summit.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] queens: vnc password console does not work anymore

2018-11-13 Thread Ignazio Cassano
Hi All,
before upgrading to queens, we used the vnc_password parameter in qemu.conf.
When using the dashboard, the console asked me for the password.
After upgrading to queens it does not work anymore ("unable to negotiate
security with server").
Removing vnc_password from qemu.conf and restarting libvirt and the virtual
machine, it works again.
What is changed in queens ?
Thanks
Ignazio
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] new SIGs to cover use cases

2018-11-13 Thread Arkady.Kanevsky
Good point.
Adding SIG list.

-Original Message-
From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
Sent: Monday, November 12, 2018 4:46 PM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] new SIGs to cover use cases


[EXTERNAL EMAIL] 
Please report any suspicious attachments, links, or requests for sensitive 
information.


On 2018-11-12 15:46:38 + (+), arkady.kanev...@dell.com wrote:
[...]
>   1.  Do we have or want to create a user community around Hybrid cloud.
[...]
>   2.  As we target AI/ML as 2019 target application domain do we
>   want to create a SIG for it? Or do we extend scientific
>   community SIG to cover it?
[...]

It may also be worthwhile to ask this on the openstack-sigs mailing
list.
-- 
Jeremy Stanley

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators