Re: [openstack-dev] nova compute error

2014-04-26 Thread Hai Bo
Maybe this link will help you.
http://superuser.com/questions/232807/iproute2-not-functioning-rtnetlink-answers-operation-not-supported
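
As a quick check (this is only a guess that the cause is the same as in that
link, i.e. the kernel lacking veth support), something like this could confirm
it:

  # does the running kernel have the veth driver built in or as a module?
  grep CONFIG_VETH /boot/config-$(uname -r) 2>/dev/null
  zcat /proc/config.gz 2>/dev/null | grep CONFIG_VETH
  # try loading the module and creating a throwaway veth pair
  sudo modprobe veth
  sudo ip link add test0 type veth peer name test1 && sudo ip link del test0

If CONFIG_VETH is not set and the modprobe fails, the kernel would need to be
rebuilt with veth support before nova-compute/neutron can wire up the VM.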

Best regards to you.
Ricky


On Sat, Apr 26, 2014 at 6:29 PM, abhishek jain ashujain9...@gmail.com wrote:

 Hi Ricky

 Thanks. You are right.
 I'm getting the following error after running the command ...

  sudo ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae

  RTNETLINK answers: Operation not supported

  Below is a description of my system:

 uname -a
 Linux t4240-ubuntu1310 3.8.13-rt9-QorIQ-SDK-V1.4 #3 SMP Wed Apr 23
 12:11:58 CDT 2014 ppc64 ppc64 ppc64 GNU/Linux

 Please help regarding this.


 On Sat, Apr 26, 2014 at 2:13 PM, Bohai (ricky) bo...@huawei.com wrote:

  It seems that the command “ip link add qvbb2fc7c52-ae type veth peer
 name qvob2fc7c52-ae” failed.

 Maybe you can try it manually and confirm whether that's the reason.



 Best regards to you.

 Ricky



 *From:* abhishek jain [mailto:ashujain9...@gmail.com]
 *Sent:* Saturday, April 26, 2014 3:25 PM
 *To:* OpenStack Development Mailing List (not for usage questions)
 *Subject:* [openstack-dev] nova compute error



 Hi all..

 I'm getting the following nova-compute error and thus am not able to boot
 VMs on my compute node...


 The nova-compute service stopped and started giving the following error...
 2014-04-25 15:32:00.112 6501 TRACE
 nova.openstack.common.threadgroup cmd=' '.join(cmd))
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 ProcessExecutionError: Unexpected error while running command.
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Command: ip link add qvbb2fc7c52-ae type veth peer name qvob2fc7c52-ae
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup Exit
 code: 2
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stdout: ''
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup
 Stderr: 'RTNETLINK answers: Operation not supported\n'
 2014-04-25 15:32:00.112 6501 TRACE nova.openstack.common.threadgroup

 I'm able to run other services such as nova and q-agt on the compute node,
 and the compute node is reflected on the controller node and vice versa.

 Please help me regarding this.

 Thanks

 Abhishek Jain



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Proposal: remove the server groups feature

2014-04-26 Thread Hai Bo
On Sat, Apr 26, 2014 at 5:15 AM, Jay Pipes jaypi...@gmail.com wrote:

 Hi Stackers,

 When recently digging in to the new server group v3 API extension
 introduced in Icehouse, I was struck with a bit of cognitive dissonance
 that I can't seem to shake. While I understand and support the idea
 behind the feature (affinity and anti-affinity scheduling hints), I
 can't help but feel the implementation is half-baked and results in a
 very awkward user experience.

 The use case here is very simple:

 Alice wants to launch an instance and make sure that the instance does
 not land on a compute host that contains other instances of that type.

 The current user experience is that the user creates a server group
 like so:

 nova server-group-create $GROUP_NAME --policy=anti-affinity

 and then, when the user wishes to launch an instance and make sure it
 doesn't land on a host with another of that instance type, the user does
 the following:

 nova boot --group $GROUP_UUID ...
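
 (For illustration, a minimal end-to-end sketch of that current workflow; it
 assumes the Icehouse os-server-groups extension is enabled, and that the
 group UUID is looked up from nova server-group-list, since boot wants the
 UUID rather than the name; the exact column layout of that listing may
 differ:)

  nova server-group-create ha-group --policy=anti-affinity
  # pull the UUID of the group we just created out of the listing
  GROUP_UUID=$(nova server-group-list | awk '/ ha-group / {print $2}')
  nova boot --group $GROUP_UUID --flavor m1.small --image $IMAGE_ID vm-1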

 There are myriad problems with the above user experience and
 implementation. Let me explain them.

 1. The user isn't creating a server group when they issue a nova
 server-group-create call. They are creating a policy and calling it a
 group. Cognitive dissonance results from this mismatch.

 2. There's no way to add an existing server to this group. What this
 means is that the user needs to effectively have pre-considered their
 environment and policy before ever launching a VM. To realize why this
 is a problem, consider the following:

  - User creates three VMs that consume high I/O utilization
  - User then wants to launch three more VMs of the same kind and make
 sure they don't end up on the same hosts as the others

 No can do, since the first three VMs weren't started using a --group
 scheduler hint.

 3. There's no way to remove members from the group

 4. There's no way to manually add members to the server group

 5. The act of telling the scheduler to place instances near or away from
 some other instances has been hidden behind the server group API, which
 means that users doing a nova help boot will see a --group option that
 doesn't make much sense, as it doesn't describe the scheduling policy
 activity.

 Proposal
 

 I propose to scrap the server groups API entirely and replace it with a
 simpler way to accomplish the same basic thing.

 Create two new options to nova boot:

  --near-tag TAG
 and
  --not-near-tag TAG


Hi Jay,

I have a small question: will this support multiple tags for a server?
We may want a server to be near some servers and not near other servers at
the same time.

Best regards to you.
Ricky



 The first would tell the scheduler to place the new VM near other VMs
 having a particular tag. The latter would tell the scheduler to place
 the new VM *not* near other VMs with a particular tag.

 What is a tag? Well, currently, since the Compute API doesn't have a
 concept of a single string tag, the tag could be a key=value pair that
 would be matched against the server extra properties.

 Once a real user-controlled simple string tags system is added to the
 Compute API, a tag would be just that, a simple string that may be
 attached or detached from some object (in this case, a server object).
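
 (To make that concrete, a hypothetical sketch of the proposed flow, using
 instance metadata as the key=value tag stand-in; note that --not-near-tag is
 only the flag proposed above and does not exist today, and nova meta is just
 one possible way to attach the key=value pair:)

  # tag three existing high-I/O instances after the fact
  for vm in vm-1 vm-2 vm-3; do nova meta $vm set workload=high-io; done
  # proposed flag: boot a new instance away from anything carrying that tag
  nova boot --not-near-tag workload=high-io --flavor m1.large --image $IMAGE_ID vm-4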

 How does this solve all the issues highlighted above? In order, it
 solves the issues like so:

 1. There's no need to have any server group object any more. Servers
 have a set of tags (key/value pairs in v2/v3 API) that may be used to
 identify a type of server. The activity of launching an instance would
 now have options for the user to indicate their affinity preference,
 which removes the cognitive dissonance that happens due to the user
 needing to know what a server group is (a policy, not a group).

 2. Since there is no more need to maintain a separate server group
 object, if a user launched 3 instances and then wanted to make sure that
 3 new instances don't end up on the same hosts, all the user needs to do
 is tag the existing instances with a tag, and issue a call to:

  nova boot --not-near-tag $TAG ...

 and the affinity policy is applied properly.

 3. Removal of members of the server group is no longer an issue.
 Simply untag a server to remove it from the set of servers you wish to
 use in applying some affinity policy

 4. Likewise, since there's no server group object, in order to relate an
 existing server to another is to simply place a tag on the server.

 5. The act of applying affinity policies is now directly related to the
 act of launching instances, which is where it should be.

 I'll type up a real blueprint spec for this, but wanted to throw the
 idea out there, since it's something that struck me recently when I
 tried to explain the new server groups feature.

 Thoughts and feedback welcome,
 -jay

