Re: [Openstack] [Nova] CPU Scaling, Quota for Disk I/O

2013-07-09 Thread Bruno Oliveira ~lychinus
Hello, any thoughts on this?

Thank you
--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com





On Mon, Jul 1, 2013 at 4:22 PM, Bruno Oliveira ~lychinus
brunnop.olive...@gmail.com wrote:
 Hello Stackers,

 This morning I saw an interesting question regarding CPU scaling
 on the list, which prompted me to ask the following:

 Do we currently have (or is there on the roadmap) any feature in Nova
 (regardless of the underlying hypervisor) to set the maximum disk I/O
 throughput a VM can have?

 I mean, let's say we have hundreds of VMs on the same host in
 production, and for some reason performance suffers because one (or a
 few) of them is too hungry/greedy for disk reads/writes.

 Question 1) Is there a way we can set quotas for disk I/O for a (group of)
 instances? Like: for this one (or this group), don't exceed a threshold
 of 50 MB/s.

 Question 2) Also, do we have anything like vertical scaling?
 I mean, defining CPU and memory balloons as extra resources
 that a set of VMs can make use of (temporarily) if they demand them?

 Note: I've seen some of the Heat videos about horizontally increasing
 the number of instances behind a load balancer to handle a growing
 number of user requests, for instance...

 Thank you so much.

 --

 Bruno Oliveira
 Developer, Software Engineer
 irc: lychinus | skype: brunnop.oliveira
 brunnop.olive...@gmail.com



[Openstack] [Nova] CPU Scaling, Quota for Disk I/O

2013-07-01 Thread Bruno Oliveira ~lychinus
Hello Stackers,

This morning I saw an interesting question regarding CPU scaling
on the list, which prompted me to ask the following:

Do we currently have (or is there on the roadmap) any feature in Nova
(regardless of the underlying hypervisor) to set the maximum disk I/O
throughput a VM can have?

I mean, let's say we have hundreds of VMs on the same host in
production, and for some reason performance suffers because one (or a
few) of them is too hungry/greedy for disk reads/writes.

Question 1) Is there a way we can set quotas for disk I/O for a (group of)
instances? Like: for this one (or this group), don't exceed a threshold
of 50 MB/s.
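
For illustration, here's roughly what I'm imagining, sketched with the
libvirt driver's instance resource quota flavor extra specs (assuming those
keys are available in the release in use; the flavor name below is just an
example):

-----
# hedged sketch: cap disk throughput at about 50 MB/s (52428800 bytes/s)
# for every instance booted from an example flavor "m1.throttled"
nova flavor-key m1.throttled set quota:disk_read_bytes_sec=52428800
nova flavor-key m1.throttled set quota:disk_write_bytes_sec=52428800
# IOPS-based keys (quota:disk_read_iops_sec, quota:disk_write_iops_sec)
# belong to the same family of extra specs
-----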

Question 2) Also, do we have anything like vertical scaling?
I mean, defining CPU and memory balloons as extra resources
that a set of VMs can make use of (temporarily) if they demand them?
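
To make Question 2 more concrete, this is the kind of knob I mean at the
hypervisor level; a minimal sketch with libvirt/KVM memory ballooning done
outside of Nova (the domain name below is hypothetical, and Nova would not
be aware of the change):

-----
# shrink a running guest's balloon target to 2 GiB (virsh sizes are in KiB)
virsh setmem instance-0000002a 2097152 --live
# grow it back toward the guest's maximum
virsh setmem instance-0000002a 4194304 --live
-----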

Note: I've seen some of the Heat videos about horizontally increasing
the number of instances behind a load balancer to handle a growing
number of user requests, for instance...

Thank you so much.

--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com



[Openstack] [Cinder][HyperV][KVM] iSCSI / NFS / FCoE for Block Storage Implementation

2013-06-26 Thread Bruno Oliveira ~lychinus
Dear stackers, salute o/

As I'm moving forward to finally deploy OpenStack to production,
I'd like to hear your thoughts on Cinder backends: block storage
(iSCSI) vs. filesystem (NFS).

As far as I've read, iSCSI is extremely resilient and more reliable than NFS,
since it already addresses issues like network faults by using multiple
channels as individual paths to make sure the data reaches its targets.
On the other hand, NFS would require the infrastructure itself to guarantee
the network connectivity (not that this would be a major issue, though).
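
For what it's worth, the multipath side of that is just a compute-node
setting when the libvirt driver is used; a sketch only, assuming a working
multipathd, and the option name varies by release:

-----
# /etc/nova/nova.conf (older releases; newer ones use [libvirt] iscsi_use_multipath)
[DEFAULT]
libvirt_iscsi_use_multipath = True
-----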

That is very important for production use indeed, but I cannot forget
about the performance difference between the two (I've also learned that NFS
might have superior read I/O due to its read cache, but it falls behind when
it comes to writing -- unless I have some sort of write cache, like the one
used by the ZFS filesystem).

Note: we currently have a Sun/Oracle storage appliance using the ZFS
filesystem. I've read a lot on the internet but, as you know, I'm not sure
how practical those articles are or whether they only compare the options
theoretically.

**Question 1**
Performance-wise, in a very high-throughput network
(say 10G), would iSCSI perform better than the other alternatives?

**Question 2**
Would you have any thoughts on FCoE as opposed to iSCSI? According to this
blueprint, it's already implemented and available for libvirt to use:

https://blueprints.launchpad.net/cinder/+spec/fibre-channel-block-storage

**Question 3**
If NFS turns out to be the best option of the three, I'm not sure how to deal
with it for Hyper-V hosts. I mean, would it be technically possible at all?
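
For reference, this is roughly how Cinder's NFS backend would be wired up
(a sketch only; the driver path may differ per release, and the export below
is a placeholder, not our actual appliance):

-----
# /etc/cinder/cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares

# /etc/cinder/nfs_shares -- one export per line
zfs-storage.example.com:/export/cinder-volumes
-----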

**Question 4** [Cloudbase-question]
Since the Cinder volume driver for Windows Storage Server 2012
doesn't use the libvirt driver, may I ask whether the current build already
supports FCoE (Fibre Channel over Ethernet)? Thank you very much.


Thank you a lot, Stackers.

Best regards.

--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com



Re: [Openstack] [Ceilometer][Healthnmon] Dealing with Performance/Monitoring Metrics

2013-06-24 Thread Bruno Oliveira ~lychinus
@Julien

Right. Working on it, Julien. I'll let you guys know if I manage to
get any significant progress.
Thank you.


@Claudio

 Do we need to install Healthnmon in order to get them? (because ceilometer

Yeah, until we manage to find a way to retrieve them with Ceilometer, or
until the two projects merge.

--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com





On Mon, Jun 24, 2013 at 7:08 AM, claudio marques clau...@onesource.pt wrote:
 Hi all

 I think that we are all (people working with Ceilometer) trying to figure
 out how to get the actual system metrics from the VMs.
 Do we need to install Healthnmon in order to get them? (because Ceilometer,
 as Julien already said, can't do it yet)

 Cheers

 Cláudio Marques

 -
 clau...@onesource.pt
 http://www.onesource.pt/


 From: jul...@danjou.info
 To: brunnop.olive...@gmail.com
 Date: Mon, 24 Jun 2013 11:41:13 +0200
 CC: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [Ceilometer][Healthnmon] Dealing with
 Performance/Monitoring Metrics


 On Sat, Jun 22 2013, Bruno Oliveira ~lychinus wrote:

 So, long story short, is it technically possible for me to create metrics
 so that the API shows system_usage_metrics instead of simply
 system_allocated_metrics? (I guess I misunderstood, right?)

 Could you please enlighten me on the path to take?

 If you know how to retrieve the actual memory or disk used (rather than
 allocated) by a VM by measuring from outside the VM, please enlighten
 us. We (Ceilometer) don't know how to do that; that's why we don't do
 it currently.

 --
 Julien Danjou
 ;; Free Software hacker ; freelance consultant
 ;; http://julien.danjou.info




[Openstack] [Ceilometer][Healthnmon] Dealing with Performance/Monitoring Metrics

2013-06-21 Thread Bruno Oliveira ~lychinus
Dear stackers, please enlighten me on something regarding Ceilometer/Healthnmon.

As far as I've read and set up, Ceilometer is intended to gather metrics
for billing usage at fairly large polling intervals. But so far, I've not
seen reports of memory utilization, disk space utilization or other system
utilization metrics. They're mostly memory allocated, number of
vCPUs, etc.

And checking out the Healthnmon project, I can see that at the
very low level the mechanism is pretty similar. I even spotted the
following:


(...)But in another usage scenario, what if the users want to get the
relationship between different measurements? i.e. get all the
available measurements related to a specific VM instance, e.g. CPU
usage, disk IO usage, network information, storage volumes, …. In
Ceilometer it would require extra post-processing of the
'resource_metadata' to find all the measurements. While in Healthnmon,
the relationship is already in the data model so it's much easier.

The difference between the two data models may come from the fact that
Ceilometer was originally designed for metering, while Healthnmon is
designed for monitoring(...)

https://wiki.openstack.org/wiki/Ceilometer/CeilometerAndHealthnmon


So, long story short, is it technically possible for me to create metrics
so that the API shows system_usage_metrics instead of simply
system_allocated_metrics? (I guess I misunderstood, right?)

Could you please enlighten me on the path to take?

I'm not sure if I got it right... I have seen the blueprints about the
projects possibly unifying in the Havana release. But I'm looking for
something I can do (for usage metrics) right now (Grizzly).
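
One workaround I'm considering is pushing usage-style samples into Ceilometer
myself (e.g. from an agent inside the guest). A sketch only, assuming the
python-ceilometerclient/API in use supports posting samples (that may require
something newer than Grizzly); the resource id and value below are made up:

-----
ceilometer sample-create -r 1b9f6c10-ffaa-4d35-8c2e-000000000000 \
    -m memory.usage --meter-type gauge --meter-unit MB --sample-volume 512
-----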

Thank you in advance.

--

Bruno Oliveira
Developer, Software Engineer
irc: lychinus | skype: brunnop.oliveira
brunnop.olive...@gmail.com



Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-06-11 Thread Bruno Oliveira ~lychinus
Guys,

Any thoughts on this?

Like I said, we're not sure whether we're doing too many unnecessary
things (or just working around the problem), or whether there's a right (and
hopefully simpler) way to do it.

Any comments or suggestions on this will be very welcome.

Thank you.

--


On Fri, Jun 7, 2013 at 5:51 PM, Bruno Oliveira ~lychinus 
brunnop.olive...@gmail.com wrote:

 (...)Do you have your vSwitch properly configured on your hyper-v
 host?(...)

  I can't say for sure, Peter, but I think so...

 From the troubleshooting we did (and are still doing), I can tell that,
 regardless of the network model we're using (FLAT or VLAN network),
 the instance provisioned on Hyper-V (for some reason) can't
 reach the quantum-l3-agent by default.
 (I say "by default" because we only managed to do it after long, hard
 and boring troubleshooting, and we're still not sure that's how it
 should be done.)

 Since it's not something quick to explain, I'll present the scenario:
 (I'm not sure if it might be a candidate for a fix in quantum-l3-agent,
  so quantum-devs might be interested too)


 Here's how our network interfaces turn out on our network controller:

 ==
 External bridge network
 ==

 Bridge br-eth1
 Port br-eth1
 Interface br-eth1
 type: internal
 Port eth1.11
 Interface eth1.11
 Port phy-br-eth1
 Interface phy-br-eth1

 ==
 Internal network
 ==

Bridge br-int
 Port int-br-eth1
 Interface int-br-eth1
 Port br-int
 Interface br-int
 type: internal
 Port tapb610a695-46
 tag: 1
 Interface tapb610a695-46
 type: internal
 Port qr-ef10bef4-fa
 tag: 1
 Interface qr-ef10bef4-fa
 type: internal

 ==

 There's another iface named br-ex that we're using for floating_ips,
 but it has nothing to do with what we're doing right now, so I'm skipping
 it...


  So, for the hands-on 

 I know it may be a little bit hard to understand, but I'll do my best
 trying to explain:

 1) The running instance in Hyper-V, which is linked to the Hyper-V vSwitch,
 is actually communicating with bridge br-eth1 (which lives on the network
 controller).

 NOTE: That's where the DHCP REQUEST (from the instance) lands


 2) The interface MAC address of that running instance on Hyper-V is
 fa:16:3e:95:95:e4 (we're going to use it in later steps).
 Since DHCP is not fully working yet, we had to manually set an IP for
 that instance: 10.5.5.3


 3) From that instance interface, the dhcp_broadcast should be forwarded
 FROM interface eth1.12 TO phy-br-eth1,
 and FROM interface phy-br-eth1 TO the bridge br-int   *** THIS
 IS WHERE THE PACKETS ARE DROPPED ***.

 Note the actions:drop in the output below:

 -
 root@osnetwork:~# ovs-dpctl dump-flows br-int  |grep 10.5.5.3


 in_port(4),eth(src=fa:16:3e:f0:ac:8e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=10.5.5.3,tip=10.5.5.1,op=1,sha=fa:16:3e:f0:ac:8e,tha=00:00:00:00:00:00),
 packets:20, bytes:1120, used:0.412s, actions:drop

 -

 4) Finally, when the packet reaches the bridge br-int, the
 DHCP_REQUEST should be forwarded to the
 dhcp_interface, that is: tapb610a695-46   *** WHICH IS NOT
 HAPPENING EITHER ***


 5) How to fix :: bridge br-eth1

 ---
 5.1. Getting to know the ifaces of 'br-eth1'
 ---
 root@osnetwork:~# ovs-ofctl show br-eth1

 OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e0db554e164b
 n_tables:255, n_buffers:256 features: capabilities:0xc7, actions:0xfff

 1(eth1.11): addr:e0:db:55:4e:16:4b
  config: 0
  state:  0
  current:10GB-FD AUTO_NEG
  advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
  supported:  1GB-FD 10GB-FD FIBER AUTO_NEG

 3(phy-br-eth1): addr:26:9b:97:93:b9:70
  config: 0
  state:  0
  current:10GB-FD COPPER

 LOCAL(br-eth1): addr:e0:db:55:4e:16:4b
  config: 0
  state:  0

 OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


 ---
 5.2. Adding flow rules to enable passing (instead of dropping)
 ---

 # the source mac_address (dl_src) is from the interface of the
 # running instance on Hyper-V. This fixes the DROP (only)

 root@osnetwork:~# ovs-ofctl add-flow br-eth1 priority=10,in_port=3,dl_src=fa:16:3e:95:95:e4,actions=normal



 6) How to fix :: bridge br-int

 ---
 6.1. Getting to know the ifaces of 'br-int

Re: [Openstack] [HyperV][Ceilometer] Performance statistics from Hyper-V with Ceilometer and libvirt

2013-06-11 Thread Bruno Oliveira ~lychinus
Alessandro, sure will :)

What do I need to get started?


--

Bruno Oliveira
Developer, Software Engineer
+55 11 9-6193-3987
skype: brunnop.oliveira
brunnop.olive...@gmail.com

http://br.linkedin.com/in/brunnopoliveira | http://www.twitter.com/lychinus
http://www.facebook.com/lychinus | http://gplus.to/lychinus



On Tue, Jun 11, 2013 at 2:28 PM, Alessandro Pilotti 
apilo...@cloudbasesolutions.com wrote:

 Hi Bruno,

  We just started implementing the Ceilometer Hyper-V inspector for the
 compute agent (see hyper-v-agent blueprint).
  Let me know if you'd like to help in testing it. :-)

  Thanks,

  Alessandro





  On Jun 6, 2013, at 00:40 , Bruno Oliveira brunnop.olive...@gmail.com
 wrote:

 Dear Stackers,

 I'd like to ask for your expertise on Ceilometer to try
 an approach for monitoring a Nova compute node running Hyper-V.

 I got a running devstack environment with KVM on a single-node
 machine, fully monitored by Ceilometer. I'm also able to
 access its API to see the collected data.

 On the other hand, I'm using Cloudbase's (cloudbase.it) OpenStack Compute
 Hyper-V installer (driver), which greatly helps me deploy VMs from the
 devstack node to the Hyper-V server. All is working smoothly.

 So far, in the IRC channel #openstack-ceilometer, I learned that
 Ceilometer, just like collectd, uses libvirt to query the hypervisors
 for data (thanks dhellmann!). BUT if you check the libvirt.org site,
 it says there's support for Hyper-V: http://libvirt.org/drvhyperv.html
 (given a URI to connect to it).
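
 Just to illustrate the kind of connection URI that page describes (host
 name and user below are placeholders, and it assumes libvirt was built with
 the Hyper-V driver and WinRM is enabled on the host):

 -----
 virsh -c 'hyperv://administrator@hyperv-host.example.com/?transport=http' list
 -----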

 1. So I'm wondering if all I need to get Ceilometer working for Windows
 would be compiling it under my Windows environment with the MinGW
 toolchain, for example?

 2. Has anyone ever tried this, or found any other way to make data
 collection from Hyper-V work?

 3. Do you have any other approach that you would suggest (installing
 SNMP on each of the cloud servers/instances is not an option)?

 Please, share your thoughts. I'd greatly appreciate it.

 Thank you very very much.

 --

 Bruno Oliveira
 Developer, Software Engineer



Re: [Openstack] [HyperV][Quantum] Quantum dhcp agent not working for Hyper-V

2013-06-07 Thread Bruno Oliveira ~lychinus
(...)Do you have your vSwitch properly configured on your hyper-v host?(...)

 I can't say for sure, Peter, but I think so...

From the troubleshooting we did (and are still doing), I can tell that,
regardless of the network model we're using (FLAT or VLAN network),
the instance provisioned on Hyper-V (for some reason) can't
reach the quantum-l3-agent by default.
(I say "by default" because we only managed to do it after long, hard
and boring troubleshooting, and we're still not sure that's how it
should be done.)

Since it's not something quick to explain, I'll present the scenario:
(I'm not sure if it might be a candidate for a fix in quantum-l3-agent,
 so quantum-devs might be interested too)


Here's how our network interfaces turn out on our network controller:

==
External bridge network
==

Bridge br-eth1
    Port br-eth1
        Interface br-eth1
            type: internal
    Port eth1.11
        Interface eth1.11
    Port phy-br-eth1
        Interface phy-br-eth1

==
Internal network
==

Bridge br-int
    Port int-br-eth1
        Interface int-br-eth1
    Port br-int
        Interface br-int
            type: internal
    Port tapb610a695-46
        tag: 1
        Interface tapb610a695-46
            type: internal
    Port qr-ef10bef4-fa
        tag: 1
        Interface qr-ef10bef4-fa
            type: internal

==

There's another iface named br-ex that we're using for floating_ips,
but it has nothing to do with what we're doing right now, so I'm skipping it...


 So, for the hands-on 

I know it may be a little bit hard to understand, but I'll do my best
trying to explain:

1) The running instance in Hyper-V, which is linked to the Hyper-V vSwitch,
is actually communicating with bridge br-eth1 (which lives on the network
controller).

NOTE: That's where the DHCP REQUEST (from the instance) lands


2) The interface MAC address of that running instance on Hyper-V is
fa:16:3e:95:95:e4 (we're going to use it in later steps).
Since DHCP is not fully working yet, we had to manually set an IP for
that instance: 10.5.5.3


3) From that instance interface, the dhcp_broadcast should be forwarded
   FROM interface eth1.12 TO phy-br-eth1,
   and FROM interface phy-br-eth1 TO the bridge br-int   *** THIS
IS WHERE THE PACKETS ARE DROPPED ***.

Note the actions:drop in the output below:
-
root@osnetwork:~# ovs-dpctl dump-flows br-int  |grep 10.5.5.3

in_port(4),eth(src=fa:16:3e:f0:ac:8e,dst=ff:ff:ff:ff:ff:ff),eth_type(0x0806),arp(sip=10.5.5.3,tip=10.5.5.1,op=1,sha=fa:16:3e:f0:ac:8e,tha=00:00:00:00:00:00),
packets:20, bytes:1120, used:0.412s, actions:drop
-

4) Finally, when the packet reaches the bridge br-int, the
DHCP_REQUEST should be forwarded to the
   dhcp_interface, that is: tapb610a695-46   *** WHICH IS NOT
HAPPENING EITHER ***


5) How to fix :: bridge br-eth1

---
5.1. Getting to know the ifaces of 'br-eth1'
---
root@osnetwork:~# ovs-ofctl show br-eth1

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:e0db554e164b
n_tables:255, n_buffers:256 features: capabilities:0xc7, actions:0xfff

1(eth1.11): addr:e0:db:55:4e:16:4b
 config: 0
 state:  0
 current:10GB-FD AUTO_NEG
 advertised: 1GB-FD 10GB-FD FIBER AUTO_NEG
 supported:  1GB-FD 10GB-FD FIBER AUTO_NEG

3(phy-br-eth1): addr:26:9b:97:93:b9:70
 config: 0
 state:  0
 current:10GB-FD COPPER

LOCAL(br-eth1): addr:e0:db:55:4e:16:4b
 config: 0
 state:  0

OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0


---
5.2. Adding flow rules to enable passing (instead of dropping)
---

# the source mac_address (dl_src) is from the interface of the
# running instance on Hyper-V. This fixes the DROP (only)

root@osnetwork:~# ovs-ofctl add-flow br-eth1 priority=10,in_port=3,dl_src=fa:16:3e:95:95:e4,actions=normal



6) How to fix :: bridge br-int

---
6.1. Getting to know the ifaces of 'br-int'
---

root@osnetwork:~# ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:92976d64274d

n_tables:255, n_buffers:256  features: capabilities:0xc7, actions:0xfff

1(tapb610a695-46): addr:19:01:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN

4(int-br-eth1): addr:5a:56:e1:53:e9:90
 config: 0
 state:  0
 current:10GB-FD COPPER

5(qr-ef10bef4-fa): addr:19:01:00:00:00:00
 config: