Re: [Openstack-operators] IPV6 help liberty

2016-07-12 Thread Jens Rosenboom
2016-07-12 20:55 GMT+02:00 suresh kumar :
> Hi All,
>
> I have created an IPv6 vlan in neutron with the DHCPv6 stateful option. When I
> create instances on this IPv6 vlan, DHCP fails to assign an IP to the
> instances and only assigns a link-local address.
>
> I am able to ping the GW with the link-local address, but not the other
> instances on the same vlan.
>
> Is there any configuration that needs to be done in neutron to make this
> work? My IPv6 vlan is routable, so I didn't attach it to any router interface
> inside neutron.

Cirros does not yet support DHCPv6, see
https://bugs.launchpad.net/cirros/+bug/1487041.

It also looks like other images will only do SLAAC by default, so you
would have to explicitly set up a DHCPv6 client in your guest, e.g.
for ubuntu-xenial run "sudo dhclient -6 ens3".
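
For a persistent setup, a minimal sketch for an ubuntu-xenial guest (the
interface name ens3 is taken from the example above and may differ on
your image):

  # one-off, inside the guest:
  sudo dhclient -6 ens3

  # persistent across reboots, e.g. in /etc/network/interfaces.d/ens3.cfg:
  auto ens3
  iface ens3 inet6 dhcp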

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Simon Williams
Blair,

I am sure Allen Samuels would like to contribute to the discussion.

Simon

Simon Williams
Regional Director - SanDisk Enterprise Storage Solutions

SanDisk | a Western Digital brand

M: +61 432 975 857
E: simon.willi...@sandisk.com
O: +61 3 9507 2013
W: https://www.sandisk.com/business/datacenter/products
InfiniFlash: https://www.youtube.com/watch?v=9Z-0dMWB9ZQ

LinkedIn: Simon Williams | Twitter: @simwilli | Skype: swilliams.skype

From: Blair Bethwaite
Date: Wednesday, 13 July 2016 12:03 am
To: user-committee, "openstack-oper."
Cc: Stephen Telfer
Subject: [Openstack-operators] [scientific-wg] high-performance/parallel 
file-systems panel at Barcelona

Hi all,

Just pondering summit talk submissions and wondering if anyone else
out there is interested in participating in an HPFS panel session...?

Assuming we have at least one person already who can cover direct
mounting of Lustre into OpenStack guests then it'd be nice to find
folks who have experience integrating with other filesystems and/or
approaches, e.g., GPFS, Manila, Object Storage translation gateways.

--
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



Re: [Openstack-operators] Updating flavor quotas (e.g. disk_iops) on existing instances.

2016-07-12 Thread Sławek Kapłoński
Hello,

If you are using libvirt/KVM, you can apply such limits manually for
each instance; check the blkiotune command in virsh.
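
A minimal sketch (the domain name instance-00000001 and the device name
vda are placeholders): blkiotune covers blkio cgroup weights, while
per-device IOPS caps (the libvirt analogue of the flavor's
quota:disk_total_iops_sec extra spec) are set with blkdeviotune:

  # show the current blkio tuning for a domain
  virsh blkiotune instance-00000001

  # cap a disk at 500 IOPS on the running domain
  virsh blkdeviotune instance-00000001 vda --total-iops-sec 500 --live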

-- 
Best regards / Pozdrawiam
Sławek Kapłoński
sla...@kaplonski.pl

On Tue, 12 Jul 2016, Van Leeuwen, Robert wrote:

> Hi,
> 
> Is there an easy way to update the quotas for flavors and apply it to 
> existing instances?
> It looks like these settings are tracked in the “instance_extra” table and
> not re-read from the flavor when (hard) rebooting the instances.
> 
> Since the flavor record in instance_extra is a big JSON blob, it is a pain
> to apply changes there.
> Anybody found an easy way to do this?
> 
> Thx,
> Robert van Leeuwen

> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [telecom-nfv] Ops WG meeting tomorrow

2016-07-12 Thread Curtis
Hi All,

Just a reminder that we have a meeting on Weds [1], 15:00 UTC. Hope to
see you there. :)

Thanks,
Curtis.

[1]: http://eavesdrop.openstack.org/#OpenStack_Operators_Telco_and_NFV_Working_Group

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] IPV6 help liberty

2016-07-12 Thread suresh kumar
Hi All,

I have created an IPv6 vlan in neutron with the DHCPv6 stateful option. When I
create instances on this IPv6 vlan, DHCP fails to assign an IP to the
instances and only assigns a link-local address.

I am able to ping the GW with the link-local address, but not the
other instances on the same vlan.

Is there any configuration that needs to be done in neutron to make this
work? My IPv6 vlan is routable, so I didn't attach it to any router interface
inside neutron.
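
For reference, a stateful DHCPv6 subnet on an existing vlan network is
typically created along these lines (a sketch; the network name and
prefix are placeholders):

  neutron subnet-create --ip-version 6 \
    --ipv6-ra-mode dhcpv6-stateful \
    --ipv6-address-mode dhcpv6-stateful \
    my-ipv6-vlan 2001:db8:1234::/64

Console log from one of the instances: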



Starting network...
udhcpc (v1.20.1) started
Sending discover...
Sending discover...
Sending discover...
Usage: /sbin/cirros-dhcpc 
No lease, failing
WARN: /etc/rc3.d/S40-network failed
cirros-ds 'net' up at 181.00
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 181.01. request failed
failed 2/20: up 183.14. request failed
failed 3/20: up 185.15. request failed
failed 4/20: up 187.15. request failed
failed 5/20: up 189.16. request failed
failed 6/20: up 191.16. request failed
failed 7/20: up 193.17. request failed
failed 8/20: up 195.18. request failed
failed 9/20: up 197.18. request failed
failed 10/20: up 199.19. request failed
failed 11/20: up 201.19. request failed
failed 12/20: up 203.20. request failed
failed 13/20: up 205.20. request failed
failed 14/20: up 207.21. request failed
failed 15/20: up 209.21. request failed
failed 16/20: up 211.22. request failed
failed 17/20: up 213.23. request failed
failed 18/20: up 215.24. request failed
failed 19/20: up 217.25. request failed
failed 20/20: up 219.25. request failed
failed to read iid from metadata. tried 20
no results found for mode=net. up 221.26. searched: nocloud configdrive ec2
failed to get instance-id of datasource
Starting dropbear sshd: generating rsa key... generating dsa key... OK
=== system information ===
Platform: Fedora Project OpenStack Nova
Container: none
Arch: x86_64
CPU(s): 1 @ 2666.760 MHz
Cores/Sockets/Threads: 1/1/1
Virt-type:
RAM Size: 491MB
Disks:
NAME MAJ:MIN   SIZE LABEL MOUNTPOINT
vda  253:0   1073741824
vda1 253:1   1061061120 cirros-rootfs /
=== sshd host keys ===
-BEGIN SSH HOST KEY KEYS-
ssh-rsa 
B3NzaC1yc2EDAQABgwCRhQrIzrhy2Ce2ZsStVSan1fpu+nA33goWd8qY2yQQUM2OXtkg33DUvcrvezcSeSvn7B+A8UBA5XZIzygf/EZxIaTaB4LLOAf+Li1yD1IYu3ortFGIWGpif6YldWMYoHkKR7/q53fGLluvIczkLZ40zGBSaVyzuuXTG9b2kZKBq98D
root@cirros
ssh-dss 
B3NzaC1kc3MAAACBAIrrUsP7hcIk8qug15bGpJXW6jxGqX6PzUm7/a5iBusu28Y6o6Z3h8bs7YzyREL2mHatoW/vZdN7g6hKFwQR3nGOw5aeqrrA0ZdQFHJA+RF9iG+h0raJ20QYQn+XgMw5vSPf2LmqRt6kyQ5J9sqRNz30PpW5Ah7o1QdEAY6n+B4dFQCcijPlVQccaSshqjf/3qmaXyHzFwAAAIBBtf/S3aYm6FEft/qcNkGebVtv5GfcDdkmh7VJn9ChW7YMZOEd+vi72titWWNoylyv7C8tKQDpEvJjFMMeNs4RzU2TJ8rOs7EHSys1glK10IgUgDPhCSUNEnn+/KGxATB6BvrYjNFHqg6J21o2mL3iaMCBnfyLXQwM5Bn9gw/VMIBvmOZ5Is1BBcBzdI3Z0KRkexkDvQcrYKAVI8ywk3TVFZPBLrUjdMEg6d3usW3b6TapEinICAMnfyYFNvwuDaZ6Bzq8oVhDTYE06ZdlKFfzIynMgY58rU791k0vimfOEvpVQcEyj7X3OEl8NmXsPMns7+yAh3R4Q0HDic7KZ7VUnA==
root@cirros
-END SSH HOST KEY KEYS-
=== network info ===
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,,8,fe80::f816:3eff:fe63:9b93
=== datasource: None None ===
=== cirros: current=0.3.4 uptime=221.49 ===
route: fscanf
=== pinging gateway failed, debugging connection ===
 debug start ##
### /etc/init.d/sshd start
Starting dropbear sshd: OK
route: fscanf
### ifconfig -a
eth0  Link encap:Ethernet  HWaddr FA:16:3E:63:9B:93
  inet6 addr: fe80::f816:3eff:fe63:9b93/64 Scope:Link
  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
  RX packets:31 errors:0 dropped:0 overruns:0 frame:0
  TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:1000
  RX bytes:2462 (2.4 KiB)  TX bytes:1320 (1.2 KiB)

loLink encap:Local Loopback
  inet addr:127.0.0.1  Mask:255.0.0.0
  inet6 addr: ::1/128 Scope:Host
  UP LOOPBACK RUNNING  MTU:16436  Metric:1
  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
  collisions:0 txqueuelen:0
  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

### route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric RefUse Iface
route: fscanf



Thanks,
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Edgar Magana
Hello Blairo,

We are running very good performance tests here at Workday. I am even planning
to write a speaking proposal about it. I would like to participate in the panel.

Cheers,

Edgar Magana
Cloud Operations Architect
Workday, Inc.

On 7/12/16, 7:24 AM, "Stig Telfer"  wrote:

Hi Blair - 

Great idea, thanks for sharing it.

We (Cambridge University) would love to be included if we can add value in the 
Lustre corner.

Best wishes,
Stig


> On 12 Jul 2016, at 15:03, Blair Bethwaite  wrote:
> 
> Hi all,
> 
> Just pondering summit talk submissions and wondering if anyone else
> out there is interested in participating in an HPFS panel session...?
> 
> Assuming we have at least one person already who can cover direct
> mounting of Lustre into OpenStack guests then it'd be nice to find
> folks who have experience integrating with other filesystems and/or
> approaches, e.g., GPFS, Manila, Object Storage translation gateways.
> 
> -- 
> Cheers,
> ~Blairo
> 
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Stig Telfer
Hi Blair - 

Great idea, thanks for sharing it.

We (Cambridge University) would love to be included if we can add value in the 
Lustre corner.

Best wishes,
Stig


> On 12 Jul 2016, at 15:03, Blair Bethwaite  wrote:
> 
> Hi all,
> 
> Just pondering summit talk submissions and wondering if anyone else
> out there is interested in participating in an HPFS panel session...?
> 
> Assuming we have at least one person already who can cover direct
> mounting of Lustre into OpenStack guests then it'd be nice to find
> folks who have experience integrating with other filesystems and/or
> approaches, e.g., GPFS, Manila, Object Storage translation gateways.
> 
> -- 
> Cheers,
> ~Blairo
> 
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific-wg] Meeting reminder: in ~7 hours @ 2100 UTC in #openstack-meeting

2016-07-12 Thread Blair Bethwaite
Hi all,

Scientific-WG regular meeting is on soon, draft agenda below and at
https://wiki.openstack.org/wiki/Scientific_working_group.


2016-07-12 2100 UTC in channel #openstack-meeting

# Review of Activity Areas and opportunities for progress
## Bare metal
### Networking requirements/considerations
## Parallel filesystems
## Accounting and scheduling
# OpenStack & HPC white paper
# Other business
## Contributions sought to the Hypervisor Tuning Guide
## HPC / Research speaker track at Barcelona Summit

Regards,
Blair & Stig

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Mike Lowe
I don’t think we’ve figured out how to do this, but I’d certainly like to
attend. We have promised integration with Wrangler’s 10PB of Lustre that sits
across the hot aisle from us, so I probably need to figure out how to do this
between now and Barcelona.

> On Jul 12, 2016, at 10:03 AM, Blair Bethwaite  
> wrote:
> 
> Hi all,
> 
> Just pondering summit talk submissions and wondering if anyone else
> out there is interested in participating in an HPFS panel session...?
> 
> Assuming we have at least one person already who can cover direct
> mounting of Lustre into OpenStack guests then it'd be nice to find
> folks who have experience integrating with other filesystems and/or
> approaches, e.g., GPFS, Manila, Object Storage translation gateways.
> 
> -- 
> Cheers,
> ~Blairo
> 
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Blair Bethwaite
Hi all,

Just pondering summit talk submissions and wondering if anyone else
out there is interested in participating in an HPFS panel session...?

Assuming we have at least one person already who can cover direct
mounting of Lustre into OpenStack guests then it'd be nice to find
folks who have experience integrating with other filesystems and/or
approaches, e.g., GPFS, Manila, Object Storage translation gateways.

-- 
Cheers,
~Blairo

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Updating flavor quotas (e.g. disk_iops) on existing instances.

2016-07-12 Thread Van Leeuwen, Robert
Hi,

Is there an easy way to update the quotas for flavors and apply it to existing 
instances?
It looks like these settings are tracked in the “instance_extra” table and not
re-read from the flavor when (hard) rebooting the instances.

Since the flavor record in instance_extra is a big JSON blob, it is a pain to
apply changes there.
Anybody found an easy way to do this?
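
A minimal sketch for the flavor side (flavor names and values are
placeholders); note that, as above, existing instances keep the flavor
data embedded in instance_extra, so resizing to a flavor that carries
the desired quotas is one way to get the specs re-read:

  # set an IO quota on the flavor (picked up by instances booted afterwards)
  nova flavor-key m1.small set quota:disk_total_iops_sec=500

  # move an existing instance onto a flavor with the new quotas
  nova resize --poll <instance-uuid> m1.small.iops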

Thx,
Robert van Leeuwen
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators