[Bug 483634] [NEW] master can't receive info from node

2009-11-16 Thread paul guermonprez
Public bug reported:

Hello,

Using Ubuntu 9.10 Server with a very basic/simple installation,
the compute node is not seen in the cluster. I tried restarting
services and rebooting both the master and the node: nothing.
See the attached logs and config.

euca-describe-availability-zones verbose
AVAILABILITYZONE  intelone  192.168.1.90
AVAILABILITYZONE  |- vm types  free / max   cpu   ram  disk
AVAILABILITYZONE  |- m1.small   /    1128 2
...

The node is up and running and was correctly added to the NODES list:

grep NODE /etc/eucalyptus/eucalyptus.conf
NODES= 192.168.2.101
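One detail worth double-checking (the space may just be an artifact of how the mail was wrapped): eucalyptus.conf is parsed as shell-style KEY="value" lines, so `NODES= 192.168.2.101` with a space after the `=` would leave NODES empty. A minimal demonstration of the difference:

```shell
# With a space after '=', the assignment is empty and the rest of the line
# is treated as a command word; here 'true' stands in for the IP so the
# demonstration still runs.
NODES= true
echo "with a space:    [${NODES}]"     # prints []

# Intended form: NODES actually holds the node's IP.
NODES="192.168.2.101"
echo "without a space: [${NODES}]"     # prints [192.168.2.101]
```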

In the logs I see FATAL and ERROR lines:


09:52:26  INFO ClusterUtil   | 
---
09:52:26  INFO ClusterUtil   | - [ intelone ] Cluster certificate 
valid=true
09:52:26  INFO ClusterUtil   | - [ intelone ] Node certificate 
valid=true
09:52:26  INFO ClusterUtil   | 
---
09:52:26 FATAL Binding   | org.jibx.runtime.JiBXException: 
Expected "{http://eucalyptus.ucsb.edu/}mode" start tag, found 
"{http://eucalyptus.ucsb.edu/}DescribeNetworksResponse" end tag (line -1, col 
-1, in SOAP-message)
09:52:26 FATAL BindingHandler| FAILED TO PARSE:
 | <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <wsa:Action wsu:Id="SigID-a7a79cf6-d2bf-1de1-281f">EucalyptusCC#DescribeNetworks</wsa:Action>
      <wsa:From wsu:Id="SigID-a7a79d28-d2bf-1de1-2820">
        <wsa:Address>http://192.168.1.90:8774/axis2/services/EucalyptusCC</wsa:Address>
      </wsa:From>
      <wsa:MessageID wsu:Id="SigID-a7a79d46-d2bf-1de1-2821">urn:uuid:a7a761dc-d2bf-1de1-281d-001b24efef1c</wsa:MessageID>
      <wsa:RelatesTo wsa:RelationshipType="http://www.w3.org/2005/08/addressing/reply"
          wsu:Id="SigID-a7a79d64-d2bf-1de1-2822">urn:uuid:FDBD129B9FEE45B5B520C67A48EE1DC8</wsa:RelatesTo>
      <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
          soapenv:mustUnderstand="1">
        <wsse:BinarySecurityToken wsu:Id="CertID-a7a796de-d2bf-1de1-281e"
            EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
            ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3">
          MIIDdTCCAl2gAwIBAgIGAST9Xy+h... [base64 X.509 certificate elided] ...pdVLULVUj2nLaQ==
        </wsse:BinarySecurityToken>
        <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="SigID-a7a79f30-d2bf-1de1-2823">
          <ds:SignedInfo>
            <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
            <ds:Reference URI="#SigID-a7a79cf6-d2bf-1de1-281f">
              <ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms>
              <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
              <ds:DigestValue>NmCTxmn7qSy54bu0a574Gy7ZdvI=</ds:DigestValue>
            </ds:Reference>
            <ds:Reference URI="#SigID-a7a79d28-d2bf-1de1-2820">
              <ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/></ds:Transforms>
  [remainder of the message truncated in the log; the repeated
   xmlns:wsu="...oasis-200401-wss-wssecurity-utility-1.0.xsd" declarations
   are elided above for readability]

[Bug 483634] Re: master can't receive info from node

2009-11-16 Thread paul guermonprez

** Attachment added: master node conf and logs
   http://launchpadlibrarian.net/35750451/master_logs-conf.tar.bz2

-- 
master can't receive info from node
https://bugs.launchpad.net/bugs/483634
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to eucalyptus in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 483634] Re: master can't receive info from node

2009-11-16 Thread paul guermonprez
It may be because the server config was missing the VNET_DNS parameter:
[Mon Nov 16 11:23:10 2009][012652][EUCAWARN  ] VNET_DNS is not defined in config
[Mon Nov 16 11:23:10 2009][012652][EUCAFATAL ] bad network parameters, must fix 
before system will work
[Mon Nov 16 11:23:10 2009][012652][EUCAERROR ] cannot initialize from 
configuration file

This entry should be defined automatically by the installation wizard ...
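
If that hypothesis is right, the workaround is to set VNET_DNS in eucalyptus.conf and restart the cluster controller. A hedged sketch (shown on a scratch copy of the file; on a real master edit /etc/eucalyptus/eucalyptus.conf, and 192.168.1.1 below stands in for the site's actual DNS server):

```shell
# Work on a sample file here; on the master this is /etc/eucalyptus/eucalyptus.conf.
CONF=eucalyptus.conf.sample
printf 'VNET_MODE="MANAGED-NOVLAN"\nNODES="192.168.2.101"\n' > "$CONF"

# Rewrite VNET_DNS if present, append it if absent.
if grep -q '^VNET_DNS=' "$CONF"; then
  sed -i 's/^VNET_DNS=.*/VNET_DNS="192.168.1.1"/' "$CONF"
else
  echo 'VNET_DNS="192.168.1.1"' >> "$CONF"
fi
grep '^VNET_DNS=' "$CONF"
# Afterwards, restart the cluster controller (e.g. sudo restart eucalyptus-cc;
# the exact service name depends on the package version).
```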


[Bug 483634] Re: master can't receive info from node

2009-11-16 Thread paul guermonprez

** Attachment added: node logs and conf
   http://launchpadlibrarian.net/35750463/node_logs-conf.tar.bz2


[Bug 472785] [NEW] can't register SC

2009-11-03 Thread paul guermonprez
Public bug reported:

Hello,

My master is .1 and I'd like to move the storage to .2.
Host .2 was installed with regular Ubuntu 9.10 Server,
plus the SC and Walrus packages.
I have a temporary password for the eucalyptus user on .2
(following https://help.ubuntu.com/community/UEC/PackageInstall
and then https://help.ubuntu.com/community/UEC/StorageController).

From the master:

ubu...@master:~$ sudo euca_conf --list-scs
registered storage controllers:
   intelone 192.168.2.1
ubu...@master:~$ sudo euca_conf --register-sc intelone 192.168.2.2
Adding SC 192.168.2.2 to cluster intelone
Trying rsync to sync keys with 192.168.2.2...eucalyp...@192.168.2.2's 
password: 
done.
SUCCESS: new SC for cluster 'intelone' on host '192.168.2.2' successfully 
registered.
ubu...@master:~$ sudo euca_conf --list-scs
registered storage controllers:
   intelone 192.168.2.1


I can also try deregistering first:
ubu...@master:~$ sudo euca_conf --deregister-sc 192.168.2.1
SUCCESS: Storage controller for cluster '192.168.2.1' successfully deregistered.
ubu...@master:~$ sudo euca_conf --list-scs
registered storage controllers:
   intelone 192.168.2.1
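
One hypothesis (not verified against euca_conf's documentation): the deregister message above echoes '192.168.2.1' where a cluster name belongs, so --deregister-sc may expect the cluster name rather than the SC's IP. A dry-run sketch of that variant, with 'intelone' taken from the listing above:

```shell
# Build the command sequence as text first so it can be reviewed;
# on a real front end, run each line with sudo instead of printing it.
CLUSTER=intelone
NEW_SC=192.168.2.2
PLAN="euca_conf --deregister-sc $CLUSTER
euca_conf --register-sc $CLUSTER $NEW_SC
euca_conf --list-scs"
echo "$PLAN"
```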


Or restarting:
sudo restart eucalyptus

Nothing helps.

A question, just to make sure: which services are supposed
to run on the master side (not the SC) and which on the SC side?

thanks, paul

** Affects: eucalyptus (Ubuntu)
 Importance: Undecided
 Status: New

-- 
can't register SC
https://bugs.launchpad.net/bugs/472785


[Bug 473062] [NEW] new node has eucalyptus-nc down (apache config ?)

2009-11-03 Thread paul guermonprez
Public bug reported:

Hello,

I just installed a node with the final Ubuntu 9.10 release.
During installation, node mode is selected by default.

When I try to discover the node from the master,
I can't find it: the NC service is not running by default ...

ubu...@node2:~$ netstat -a|grep LIST
tcp0  0 node2.local:domain  *:* LISTEN 
tcp0  0 *:ssh   *:* LISTEN 
tcp6   0  0 [::]:ssh[::]:*  LISTEN 
tcp6   0  0 [::]:www[::]:*  LISTEN 

ubu...@node2:~$ sudo /etc/init.d/eucalyptus-nc status
[sudo] password for ubuntu: 
 * eucalyptus-nc is not running
ubu...@node2:~$ sudo /etc/init.d/eucalyptus-nc start
 * Starting Eucalyptus Node Service 
 [ OK ] 
ubu...@node2:~$ sudo /etc/init.d/eucalyptus-nc status
 * eucalyptus-nc is running
ubu...@node2:~$ netstat -a|grep LIST
tcp0  0 node2.local:domain  *:* LISTEN 
tcp0  0 *:ssh   *:* LISTEN 
tcp6   0  0 [::]:ssh[::]:*  LISTEN 
tcp6   0  0 [::]:8775   [::]:*  LISTEN 
tcp6   0  0 [::]:www[::]:*  LISTEN 

After that, the node can be discovered and added.

In fact, the HTTP config may also be a problem,
as httpd-nc.conf has MaxClients set to 1,
but that does not explain why the service was down.
regards, paul

** Affects: eucalyptus (Ubuntu)
 Importance: Undecided
 Status: New

-- 
new node has eucalyptus-nc down (apache config ?)
https://bugs.launchpad.net/bugs/473062


[Bug 473062] Re: new node has eucalyptus-nc down (apache config ?)

2009-11-03 Thread paul guermonprez

** Attachment added: system and eucalyptus logs
   http://launchpadlibrarian.net/35041750/logs.tar


[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez
New tests: v27.1, rebooted cluster, creating security groups:
VNET_ADDRSPERNET=512
VNET_PUBLICIPS=192.168.3.1-192.168.5.254

Launching VMs:

export EMI=emi-4158125D
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group2
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group3
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group4
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group5
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group6
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group7
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group8

The VMs launch fine, but only 32 end up running; the others are
terminated. When I had the ADDRSPERNET problem, the VMs were not able
to launch at all.

For the same cluster setup, yesterday it was 64 VMs; now it's 32 ...
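
As a back-of-the-envelope check (my assumptions, not from this report: a /16 VNET_SUBNET, and that in managed mode each security group consumes one network of VNET_ADDRSPERNET addresses):

```shell
# VNET_ADDRSPERNET trades the number of possible security groups against
# the addresses (and hence VMs) available inside each group.
subnet_bits=16                                # assumed /16 VNET_SUBNET
addrs_per_net=512                             # VNET_ADDRSPERNET in this test
subnet_size=$(( 1 << (32 - subnet_bits) ))    # 65536 addresses total
networks=$(( subnet_size / addrs_per_net ))   # distinct groups possible at once
echo "$networks networks of $addrs_per_net addresses each"
```

With 512 addresses per group, 50-VM groups should fit comfortably, so the 32-instance ceiling seen here does not look like simple per-group address exhaustion.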

Result: some entire groups are being terminated??? (groups 2, 5, 6, 7, ...)
euca-describe-instances
RESERVATION  r-4792084D  admin  group8
INSTANCE  i-206F04DF  emi-4158125D  172.19.28.6   172.19.28.6   running  mykey  4   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-2C2305CC  emi-4158125D  172.19.28.5   172.19.28.5   running  mykey  3   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-36F706F4  emi-4158125D  172.19.28.9   172.19.28.9   running  mykey  7   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-392C06E4  emi-4158125D  172.19.28.10  172.19.28.10  running  mykey  8   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-39C70871  emi-4158125D  172.19.28.8   172.19.28.8   running  mykey  6   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3C330773  emi-4158125D  172.19.28.17  172.19.28.17  running  mykey  15  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-408106F7  emi-4158125D  172.19.28.21  172.19.28.21  running  mykey  19  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-4206079D  emi-4158125D  172.19.28.12  172.19.28.12  running  mykey  10  c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-44DF08E1  emi-4158125D  172.19.28.18  172.19.28.18  running  mykey  16  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-450D08F8  emi-4158125D  172.19.28.16  172.19.28.16  running  mykey  14  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-455E0828  emi-4158125D  172.19.28.13  172.19.28.13  running  mykey  11  c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-47360859  emi-4158125D  172.19.28.19  172.19.28.19  running  mykey  17  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-50240846  emi-4158125D  172.19.28.15  172.19.28.15  running  mykey  13  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-51910966  emi-4158125D  172.19.28.7   172.19.28.7   running  mykey  5   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-51990946  emi-4158125D  172.19.28.3   172.19.28.3   running  mykey  1   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-546D0996  emi-4158125D  172.19.28.2   172.19.28.2   running  mykey  0   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-5581095F  emi-4158125D  172.19.28.20  172.19.28.20  running  mykey  18  c1.medium  2009-10-28T11:24:28.108Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-594D0A55  emi-4158125D  172.19.28.4   172.19.28.4   running  mykey  2   c1.medium  2009-10-28T11:24:28.107Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-5A3709C8  emi-4158125D  172.19.28.14  172.19.28.14  running  mykey  12

[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez

** Attachment added: symptom : instances being terminated by security groups
   http://launchpadlibrarian.net/34539244/instances.weird

-- 
maximum 61 running instances, others shutting down
https://bugs.launchpad.net/bugs/462140


[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez
VNET_ADDRSPERNET=32 (see logs_3.tar.bz2 ...)

Result: euca-describe-instances | grep running | wc -l = 75

export EMI=emi-4158125D
euca-add-group -d Group 10 group10
euca-add-group -d Group 11 group11
euca-add-group -d Group 12 group12
euca-add-group -d Group 13 group13
euca-add-group -d Group 14 group14
euca-add-group -d Group 15 group15
euca-add-group -d Group 16 group16
euca-add-group -d Group 17 group17
euca-add-group -d Group 18 group18
euca-add-group -d Group 19 group19
euca-add-group -d Group 20 group20
euca-add-group -d Group 21 group21
euca-add-group -d Group 22 group22
euca-add-group -d Group 23 group23
euca-add-group -d Group 24 group24
euca-add-group -d Group 25 group25
euca-add-group -d Group 26 group26
euca-add-group -d Group 27 group27
euca-add-group -d Group 28 group28
euca-add-group -d Group 29 group29
euca-add-group -d Group 30 group30
euca-add-group -d Group 31 group31
euca-add-group -d Group 32 group32
euca-add-group -d Group 33 group33
euca-add-group -d Group 34 group34
euca-add-group -d Group 35 group35
euca-add-group -d Group 36 group36
euca-add-group -d Group 37 group37
euca-add-group -d Group 38 group38
euca-add-group -d Group 39 group39
euca-add-group -d Group 40 group40
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group10
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group11
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group12
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group13
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group14
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group15
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group16
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group17
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group18
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group19
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group20
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group21
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group22
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group23
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group24
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group25
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group26
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group27
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group28
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group29
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group30
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group31
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group32
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group33
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group34
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group35
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group36
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group37
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group38
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group39
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group40
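
Aside: the thirty-one command pairs above can be generated with a loop instead of being typed out. A sketch that prints the commands as a dry run (it also quotes the -d description, which the originals arguably should):

```shell
# Generate the add-group / run-instances pairs for groups 10..40.
# Printed as a dry run; pipe to sh on a real front end to execute.
EMI=emi-4158125D
CMDS=$(for i in $(seq 10 40); do
  printf 'euca-add-group -d "Group %s" group%s\n' "$i" "$i"
  printf 'euca-run-instances %s -n 20 -k mykey -t c1.medium --addressing private --group group%s\n' "$EMI" "$i"
done)
echo "$CMDS"
```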

...
RESERVATION  r-4741087B  admin  group15
INSTANCE  i-326005BF  emi-4158125D  172.19.1.177  172.19.1.177  shutting-down  mykey  15  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-397E06B7  emi-4158125D  172.19.1.173  172.19.1.173  pending  mykey  11  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3C0C0753  emi-4158125D  172.19.1.174  172.19.1.174  pending  mykey  12  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3E48063F  emi-4158125D  172.19.1.170  172.19.1.170  pending  mykey  8   c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3EEF082D  emi-4158125D  172.19.1.168  172.19.1.168  pending  mykey  6

[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez
new tests : v27.1 rebooted cluster, creating security groups, 
VNET_ADDRSPERNET=512
VNET_PUBLICIPS=192.168.3.1-192.168.5.254

launching vms :

export EMI=emi-4158125D
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group2
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group3
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group4
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group5
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group6
euca-run-instances $EMI -n 50 -k mykey -t c1.medium --addressing private 
--group group7
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private 
--group group8

the vms are launching fine, but only 32 end up running, the others are 
terminated.
when i had the ADDRSPERNET problem the vm was not able to launch.

for the same cluster setup, yesterday it was 64 vms, now it's 32 ...

result : some entire groups are being terminated ??? (groups 2,5,6,7 ...)
euca-describe-instances
RESERVATION r-4792084D  admin   group8
INSTANCEi-206F04DF  emi-4158125D172.19.28.6 172.19.28.6 
running mykey   4   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-2C2305CC  emi-4158125D172.19.28.5 172.19.28.5 
running mykey   3   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-36F706F4  emi-4158125D172.19.28.9 172.19.28.9 
running mykey   7   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-392C06E4  emi-4158125D172.19.28.10172.19.28.10
running mykey   8   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-39C70871  emi-4158125D172.19.28.8 172.19.28.8 
running mykey   6   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-3C330773  emi-4158125D172.19.28.17172.19.28.17
running mykey   15  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-408106F7  emi-4158125D172.19.28.21172.19.28.21
running mykey   19  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-4206079D  emi-4158125D172.19.28.12172.19.28.12
running mykey   10  c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-44DF08E1  emi-4158125D172.19.28.18172.19.28.18
running mykey   16  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-450D08F8  emi-4158125D172.19.28.16172.19.28.16
running mykey   14  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-455E0828  emi-4158125D172.19.28.13172.19.28.13
running mykey   11  c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-47360859  emi-4158125D172.19.28.19172.19.28.19
running mykey   17  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-50240846  emi-4158125D172.19.28.15172.19.28.15
running mykey   13  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-51910966  emi-4158125D172.19.28.7 172.19.28.7 
running mykey   5   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-51990946  emi-4158125D172.19.28.3 172.19.28.3 
running mykey   1   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-546D0996  emi-4158125D172.19.28.2 172.19.28.2 
running mykey   0   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-5581095F  emi-4158125D172.19.28.20172.19.28.20
running mykey   18  c1.medium   2009-10-28T11:24:28.108Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-594D0A55  emi-4158125D172.19.28.4 172.19.28.4 
running mykey   2   c1.medium   2009-10-28T11:24:28.107Z
inteloneeki-65F8175Beri-48E316E3
INSTANCEi-5A3709C8  emi-4158125D172.19.28.14172.19.28.14
running mykey   12  

[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez

** Attachment added: symptom : instances being terminated by security groups
   http://launchpadlibrarian.net/34539244/instances.weird

-- 
maximum 61 running instances, others shutting down
https://bugs.launchpad.net/bugs/462140
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs


[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-28 Thread paul guermonprez
VNET_ADDRSPERNET=32
see logs_3.tar.bz2 ...

result : euca-describe-instances | grep running | wc -l = 75

export EMI=emi-4158125D
euca-add-group -d "Group 10" group10
euca-add-group -d "Group 11" group11
euca-add-group -d "Group 12" group12
euca-add-group -d "Group 13" group13
euca-add-group -d "Group 14" group14
euca-add-group -d "Group 15" group15
euca-add-group -d "Group 16" group16
euca-add-group -d "Group 17" group17
euca-add-group -d "Group 18" group18
euca-add-group -d "Group 19" group19
euca-add-group -d "Group 20" group20
euca-add-group -d "Group 21" group21
euca-add-group -d "Group 22" group22
euca-add-group -d "Group 23" group23
euca-add-group -d "Group 24" group24
euca-add-group -d "Group 25" group25
euca-add-group -d "Group 26" group26
euca-add-group -d "Group 27" group27
euca-add-group -d "Group 28" group28
euca-add-group -d "Group 29" group29
euca-add-group -d "Group 30" group30
euca-add-group -d "Group 31" group31
euca-add-group -d "Group 32" group32
euca-add-group -d "Group 33" group33
euca-add-group -d "Group 34" group34
euca-add-group -d "Group 35" group35
euca-add-group -d "Group 36" group36
euca-add-group -d "Group 37" group37
euca-add-group -d "Group 38" group38
euca-add-group -d "Group 39" group39
euca-add-group -d "Group 40" group40
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group10
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group11
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group12
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group13
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group14
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group15
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group16
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group17
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group18
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group19
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group20
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group21
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group22
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group23
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group24
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group25
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group26
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group27
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group28
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group29
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group30
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group31
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group32
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group33
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group34
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group35
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group36
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group37
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group38
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group39
euca-run-instances $EMI -n 20 -k mykey -t c1.medium --addressing private --group group40
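
For reference, the 62 commands above can be generated with a short loop (same EMI, key name, and group names as in the report); piping the output to sh would actually run them:

```shell
# Emit the same euca-add-group / euca-run-instances commands as listed above.
# Pipe the output to sh to execute them.
for i in $(seq 10 40); do
  echo "euca-add-group -d \"Group $i\" group$i"
  echo "euca-run-instances emi-4158125D -n 20 -k mykey -t c1.medium --addressing private --group group$i"
done
```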

...
RESERVATION  r-4741087B  admin  group15
INSTANCE  i-326005BF  emi-4158125D  172.19.1.177  172.19.1.177  shutting-down  mykey  15  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-397E06B7  emi-4158125D  172.19.1.173  172.19.1.173  pending  mykey  11  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3C0C0753  emi-4158125D  172.19.1.174  172.19.1.174  pending  mykey  12  c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3E48063F  emi-4158125D  172.19.1.170  172.19.1.170  pending  mykey  8   c1.medium  2009-10-28T13:32:08.723Z  intelone  eki-65F8175B  eri-48E316E3
INSTANCE  i-3EEF082D  emi-4158125D  172.19.1.168  172.19.1.168  pending  mykey  6

[Bug 462140] [NEW] maximum 61 running instances, others shutting down

2009-10-27 Thread paul guermonprez
Public bug reported:

hello

I have a cluster of 6 machines plus a master, 384 cores total,
seen correctly in the AVAILABILITYZONE output:
AVAILABILITYZONE  |- vm types  free / max  cpu  ram  disk
AVAILABILITYZONE  |- m1.small  0090 / 0384  1  128  2

But when I launch VMs, only 61 end up running (and stay running); the
others just go to shutting-down (as seen by describe-instances).


The config is pretty standard, except for the number of VMs per subnet, changed 
to a higher value.
I am launching the instances with --addressing private to avoid networking 
limitations.

Attached are the logs and config files.

regards

** Affects: eucalyptus (Ubuntu)
 Importance: Undecided
 Status: New

-- 
maximum 61 running instances, others shutting down
https://bugs.launchpad.net/bugs/462140
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to eucalyptus in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-27 Thread paul guermonprez

** Attachment added: logs and config files (together)
   http://launchpadlibrarian.net/34495573/log.tar.bz2



[Bug 461092] Re: can't add more than 29 instances

2009-10-27 Thread paul guermonprez
thanks, changed VNET_ADDRSPERNET to 512
should have RTFM, sorry.

now the problem is 61 instances, see
https://bugs.launchpad.net/ubuntu/+source/eucalyptus/+bug/462140
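
A plausible piece of arithmetic behind the earlier 29-instance ceiling (the reserved-address count below is an assumption, not something confirmed by the logs): in MANAGED networking each security group gets a block of VNET_ADDRSPERNET addresses, and Eucalyptus reserves a few of them per subnet.

```shell
# Back-of-the-envelope check: with the default VNET_ADDRSPERNET=32 and an
# assumed 3 reserved addresses per subnet (e.g. network, gateway, broadcast),
# 29 usable addresses remain -- matching the observed 29-instance limit.
ADDRS_PER_NET=32   # the Ubuntu 9.10 default
RESERVED=3         # assumption; not confirmed by the logs
echo $(( ADDRS_PER_NET - RESERVED ))   # 29
```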

-- 
can't add more than 29 instances
https://bugs.launchpad.net/bugs/461092
You received this bug notification because you are a member of Ubuntu
Server Team, which is subscribed to eucalyptus in ubuntu.

-- 
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: 
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs


[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-27 Thread paul guermonprez
Neil, we are talking about 384 cores as seen by the sum of all the node
OSes (64 per node OS). No special trick to overprovision the nodes.

The new Nehalem-class Intel machines have a technology called
Hyper-Threading (remember the Pentium 4? same thing): there are 192
physical cores, but the OS sees 384.

That's the way it is supposed to be deployed in production, VT included.
But I can always try to disable the feature in the BIOS if needed.
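
The core counts quoted across these bugs are mutually consistent; a quick check (node count and per-node figures taken from the reports):

```shell
# 6 compute nodes, each with 32 physical cores; Hyper-Threading doubles the
# logical count, giving the 192 physical / 384 logical figures in this thread.
nodes=6
physical_per_node=32   # from bug 461092: "32 physical cores" per machine
ht_factor=2
physical_total=$(( nodes * physical_per_node ))
logical_total=$(( physical_total * ht_factor ))
echo "$physical_total $logical_total"   # 192 384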

thanks



[Bug 462140] Re: maximum 61 running instances, others shutting down

2009-10-27 Thread paul guermonprez
hello

The conf is in the attached tar file, but I changed the value from 32
(the default) to 512 from the beginning of the install (I ran into this
VNET_ADDRSPERNET problem yesterday). 64 was never entered.

I've just rebooted the entire cluster and ran more tests. I can have more
than 61 instances running, but some are shutting down.
I am launching with --addressing private to avoid public IP limitations
(just in case).

So 61 does not seem to be a hard threshold, and the problem does not
seem to be VNET_ADDRSPERNET related.

Will run more tests tomorrow morning CET.

thanks paul



[Bug 461092] [NEW] can't add more than 29 instances

2009-10-26 Thread paul guermonprez
Public bug reported:

I launch the euca-run-instances command 29 times and it works fine.
The 30th time, I get "finishedVerify: Internal Error".

But I have plenty of free VMs in euca-describe-availability-zones verbose:
cores and memory are available.
My machine has 32 physical cores, and 64 seen by the OS (Hyper-Threading).

Using Ubuntu 9.10 RC.

I can't tell if the 29/30 threshold is reproducible or random.

** Affects: eucalyptus (Ubuntu)
 Importance: Undecided
 Status: New



[Bug 461092] Re: can't add more than 29 instances

2009-10-26 Thread paul guermonprez

** Attachment added: master node eucalyptus logs
   
http://launchpadlibrarian.net/34412759/C%3A%5CDocuments%20and%20Settings%5Cpguermon%5CDesktop%5Clogs.tar.bz2


