[jira] [Updated] (CLOUDSTACK-5999) Virtual Router does not start if Guest VM is rebooted from CloudStack
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Saksham Srivastava updated CLOUDSTACK-5999: --- Status: Reviewable (was: In Progress) Virtual Router does not start if Guest VM is rebooted from CloudStack - Key: CLOUDSTACK-5999 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5999 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Affects Versions: 4.2.0 Reporter: Saksham Srivastava Assignee: Saksham Srivastava Fix For: 4.4.0 When a guest VM is rebooted from CloudStack and the virtual router managing that guest's network is down, CloudStack will not start the virtual router. However, the router is started if the guest VM is stopped and then started. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891934#comment-13891934 ] Joris van Lieshout commented on CLOUDSTACK-6023: Hi Harikrishna, We came to this conclusion by using tcpdump to capture the POST that was returned with an HTTP 500 error from the pool master. This POST, which contained the stats for each of the 32 vcpus of every VM (even though the instances were using just 1 vcpu), exceeded the 300K limit of the xapi RPC. We are encountering this issue on a host running just 59 instances (including 36 router VMs that use just 1 vcpu but have a vcpu-max of 32). My suggestion to resolve this issue would be to make vcpu-max a configurable variable of a service/compute offering, with a default of vcpusmax=vcpus unless otherwise configured in the offering. In addition, I wonder why there is a discrepancy between the XenServer Configuration Limits documentation and the documents you are referring to. In the end, we are actively experiencing this issue. I've attached a screen print of xentop on one of our XenServer 6.0.2 hosts with this issue. Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits --- Key: CLOUDSTACK-6023 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6023 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.)
Components: XenServer Affects Versions: Future, 4.2.1, 4.3.0 Reporter: Joris van Lieshout Priority: Blocker CitrixResourceBase.java contains a hardcoded value for vcpusmax for non-Windows instances: if (guestOsTypeName.toLowerCase().contains("windows")) { vmr.VCPUsMax = (long) vmSpec.getCpus(); } else { vmr.VCPUsMax = 32L; } For all currently available versions of XenServer the limit is 16 vcpus: http://support.citrix.com/servlet/KbServlet/download/28909-102-664115/XenServer-6.0-Configuration-Limits.pdf http://support.citrix.com/servlet/KbServlet/download/32312-102-704653/CTX134789%20-%20XenServer%206.1.0_Configuration%20Limits.pdf http://support.citrix.com/servlet/KbServlet/download/34966-102-706122/CTX137837_XenServer%206_2_0_Configuration%20Limits.pdf In addition, there seems to be a limit on the total number of assigned vcpus on a XenServer. The impact of this bug is that xapi becomes unstable and keeps losing its master_connection because the POST to /remote_db_access is bigger than its limit of 200K. This basically renders a pool slave unmanageable.
If you look at the running instances using xentop, you will see guests reporting 32 vcpus. Below is the relevant portion of xensource.log that shows the effect of the bug: [20140204T13:52:17.264Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: Using commandline: /usr/sbin/stunnel -fd f3b8bb12-4e03-b47a-0dc5-85ad5aef79e6 [20140204T13:52:17.269Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: stunnel has pidty: (FEFork (43,30540)) [20140204T13:52:17.269Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: stunnel start [20140204T13:52:17.269Z| info|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel connected pid=30540 fd=40 [20140204T13:52:17.346Z|error|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] Received HTTP error 500 ({ method = POST; uri = /remote_db_access; query = [ ]; content_length = [ 315932 ]; transfer encoding = ; version = 1.1; cookie = [ pool_secret=386bbf39-8710-4d2d-f452-9725d79c2393/aa7bcda9-8ebb-0cef-bb77-c6b496c5d859/1f928d82-7a20-9117-dd30-f96c7349b16e ]; task = ; subtask_of = ; content-type = ; user_agent = xapi/1.9 }) from master. This suggests our master address is wrong. Sleeping for 60s and then restarting. [20140204T13:53:18.620Z|error|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Caught Master_connection.Goto_handler [20140204T13:53:18.620Z|debug|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message). [20140204T13:53:18.620Z|error|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message).
[20140204T13:53:18.620Z|debug|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Sleeping
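The reporter's suggestion above (default vcpusmax=vcpus, overridable per service/compute offering) can be sketched as follows. This is an illustrative, self-contained sketch, not the actual CitrixResourceBase code; the maxCpus parameter stands in for a hypothetical per-offering setting.

```java
// Minimal sketch of the proposed fix: derive VCPUs-max from the service
// offering instead of hardcoding 32 for non-Windows guests.
public class VcpuMaxSketch {

    /** Returns the VCPUs-max value to set on the XenServer VM record.
     *  maxCpus <= 0 means "not configured in the offering". */
    static long vcpusMax(String guestOsTypeName, int cpus, int maxCpus) {
        if (guestOsTypeName.toLowerCase().contains("windows")) {
            return cpus; // Windows guests already use the offering's vcpu count
        }
        // Default vcpusmax = vcpus unless the offering explicitly configures a cap
        return maxCpus > 0 ? maxCpus : cpus;
    }

    public static void main(String[] args) {
        // A 1-vcpu router VM no longer gets vcpu-max 32
        System.out.println(vcpusMax("Debian GNU/Linux 7.0 (64-bit)", 1, 0));
        System.out.println(vcpusMax("Windows Server 2012 (64-bit)", 4, 0));
    }
}
```

With this default, the per-VM stats payload in the xapi POST shrinks to the actual vcpu count, which is the direct cause of the oversized /remote_db_access request described above.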
[jira] [Updated] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joris van Lieshout updated CLOUDSTACK-6023: --- Attachment: xentop.png
[jira] [Comment Edited] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891934#comment-13891934 ] Joris van Lieshout edited comment on CLOUDSTACK-6023 at 2/5/14 9:03 AM: Hi Harikrishna, We came to this conclusion by using tcpdump to capture the POST that was returned with an HTTP 500 error from the pool master. This POST, which contained the stats for each of the 32 vcpus of every VM (even though the instances were using just 1 vcpu), exceeded the 300K limit of the xapi RPC. We are encountering this issue on a host running just 59 instances (including 36 router VMs that use just 1 vcpu but have a vcpu-max of 32). My suggestion to resolve this issue would be to make vcpu-max a configurable variable of a service/compute offering, with a default of vcpusmax=vcpus unless otherwise configured in the offering. In addition, I wonder why there is a discrepancy between the XenServer Configuration Limits documentation and the documents you are referring to. In the end, we are actively experiencing this issue. I've attached a screen print of xentop on one of our XenServer 6.0.2 hosts with this issue. If it helps, I can attach the packet capture with the POST.
[jira] [Created] (CLOUDSTACK-6029) [Doc] Comments to be put in the Technical documentation as well as in the FS
Damodar Reddy T created CLOUDSTACK-6029: --- Summary: [Doc] Comments to be put in the Technical documentation as well as in the FS Key: CLOUDSTACK-6029 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6029 Project: CloudStack Issue Type: Task Security Level: Public (Anyone can view this level - this is the default.) Components: Doc Affects Versions: 4.3.0 Reporter: Damodar Reddy T Assignee: Damodar Reddy T Priority: Critical Fix For: 4.3.0 The following comments need to be put in the FS as well as in the technical documentation: 1. Explain the additional configuration required to handle split-brain in the ticket and the FS. 2. Find the standard terminology used (for Active/Active, Master/Master) and document it, making sure the correct terminology is used in all documentation. 3. Find the ping timeout and document it. 4. Find out which DB versions are certified with CS, which versions of RHEL/CentOS are supported, and which DBs they run. 5. Put some pointers in the FS to documentation on best practices (for Async and Semi-Sync). 6. Document the network topology (same zone, multiple zones) and latency requirements for setting up DB HA. 7. Backup and restore mechanism best practices. 8. Latency in updating the slave in case of Async replication. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891944#comment-13891944 ] Harikrishna Patnala commented on CLOUDSTACK-6023: - Instead of making it configurable (since we already have other parameters where we define max limits), we can set the max value to 16 only when dynamic scaling is enabled; otherwise we'll keep the service offering value.
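The alternative proposed in the comment above (cap vcpu-max at 16 only when dynamic CPU scaling is enabled, otherwise keep the offering's vcpu count) can be sketched like this. The class and method names are illustrative, and the sketch assumes offerings never request more vcpus than the XenServer limit.

```java
// Sketch of the "cap only when dynamic scaling is enabled" proposal.
public class DynamicScalingVcpuMax {
    // Documented per-VM vcpu maximum for XenServer 6.x
    static final long XENSERVER_VCPU_LIMIT = 16L;

    /** Returns vcpu-max: headroom up to the hypervisor limit only when the VM
     *  can be dynamically scaled; otherwise exactly the offering's vcpus. */
    static long vcpusMax(int offeringCpus, boolean dynamicScalingEnabled) {
        if (dynamicScalingEnabled) {
            return Math.max(offeringCpus, XENSERVER_VCPU_LIMIT);
        }
        return offeringCpus; // no scaling: vcpu-max equals the offering's vcpus
    }

    public static void main(String[] args) {
        System.out.println(vcpusMax(1, false)); // router VM: stays at 1
        System.out.println(vcpusMax(1, true));  // scalable VM: 16, not 32
    }
}
```

This keeps the stats payload small for the common case (router VMs and other non-scalable instances) while still leaving scale-up headroom where it is actually needed.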
[jira] [Commented] (CLOUDSTACK-5932) SystemVM scripts: the iso download urls need updating as current ones are obsolete
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891952#comment-13891952 ] ASF subversion and git services commented on CLOUDSTACK-5932: - Commit 18191ce79aa7c37c0cf4f0dc072809a44e878709 in branch refs/heads/master from [~aprateek] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=18191ce ] CLOUDSTACK-5932: updated script with the valid iso download urls SystemVM scripts: the iso download urls need updating as current ones are obsolete -- Key: CLOUDSTACK-5932 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5932 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: SystemVM Affects Versions: 4.3.0 Reporter: Abhinandan Prateek Assignee: Abhinandan Prateek Fix For: Future -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (CLOUDSTACK-6030) smb usern password is stored in clear text in the db
Devdeep Singh created CLOUDSTACK-6030: - Summary: smb usern password is stored in clear text in the db Key: CLOUDSTACK-6030 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6030 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.3.0 Reporter: Devdeep Singh Assignee: Devdeep Singh Priority: Critical When an SMB primary or secondary storage is added to a setup, the user password gets stored in clear text in the db. It should be encrypted before being stored. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6030) smb user password is stored in clear text in the db
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devdeep Singh updated CLOUDSTACK-6030: -- Summary: smb user password is stored in clear text in the db (was: smb usern password is stored in clear text in the db) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-5677) When split brain occurs DB replication fails in case of HA Enabled.
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891954#comment-13891954 ] Damodar Reddy T commented on CLOUDSTACK-5677: - auto_increment_increment = 10 tells the MySQL node to increment auto-increment values by 10 instead of the default of 1. For example: in a table X the last auto-incremented value is 10 and the same is replicated to the slave, so the offset on both MySQL nodes is 10. Now split brain occurs, MS1 is talking to DB1 and MS2 is talking to DB2, and both get a request to add an entry to table X. In DB1 the auto-increment will return 11 and in DB2 it will return 20; both will get synced with each other and the new offset will be set to 20 on both DB1 and DB2. auto_increment_offset = 2 tells the MySQL node the starting point for auto-increment column values. The second property is relevant only when split brain occurs on a fresh setup. When split brain occurs DB replication fails in case of HA Enabled. --- Key: CLOUDSTACK-5677 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5677 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.3.0 Reporter: Kiran Koneti Assignee: Damodar Reddy T Priority: Critical Fix For: 4.3.0 Setup and issue details: 1) Created a CS setup with multiple management servers, say MS1 and MS2, with DB HA enabled. 2) Created a DB replication setup with 2 DB servers, say DB1 and DB2. 3) Initially both MS1 and MS2 were talking to DB1, and DB2 was acting as a slave. 4) Then blocked the communication from MS2 to DB1 using iptables. 5) Now the split-brain scenario is created: MS1 talks to DB1 and MS2 talks to DB2. 6) Tried to create an affinity group in both management servers at the same time, say Aff1 through MS1 and Aff2 through MS2. 7) The affinity groups are created in both: Aff1 is created in MS1 and written to DB1, Aff2 is created in MS2 and written to DB2. Replication breaks between the two DB servers, and the slave status on both shows the errors below. Error messages: Could not execute Write_rows event on table cloud.affinity_group; Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.65, end_log_pos 2785854 in DB2. Could not execute Write_rows event on table cloud.affinity_group; Duplicate entry '1' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.30, end_log_pos 392 in DB1. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
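The two MySQL settings discussed in the comment above are normally set per node in my.cnf. A sketch for a two-node master-master pair (the specific offsets are illustrative; the point is that each node must use a distinct offset with a common increment so concurrently generated keys can never collide):

```ini
# Node 1 (my.cnf)
[mysqld]
auto_increment_increment = 10   # step auto-increment values by 10 on every node
auto_increment_offset    = 1    # node 1 generates 1, 11, 21, ...

# Node 2 (my.cnf)
[mysqld]
auto_increment_increment = 10
auto_increment_offset    = 2    # node 2 generates 2, 12, 22, ...
```

With disjoint value sequences, a split brain no longer produces the Duplicate entry '1' for key 'PRIMARY' (HA_ERR_FOUND_DUPP_KEY) errors shown in the slave status above, because the two masters can never hand out the same primary key.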
[jira] [Commented] (CLOUDSTACK-5932) SystemVM scripts: the iso download urls need updating as current ones are obsolete
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891960#comment-13891960 ] ASF subversion and git services commented on CLOUDSTACK-5932: - Commit 0ce488849ddad65ec0f37f5bbc03b619bb959d58 in branch refs/heads/master from [~htrippaers] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=0ce4888 ] CLOUDSTACK-5932: update the definitions with the new debian version SystemVM scripts: the iso download urls need updating as current ones are obsolete -- Key: CLOUDSTACK-5932 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5932 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: SystemVM Affects Versions: 4.3.0 Reporter: Abhinandan Prateek Assignee: Abhinandan Prateek Fix For: Future -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6030) smb user password is stored in clear text in the db
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891970#comment-13891970 ] ASF subversion and git services commented on CLOUDSTACK-6030: - Commit a24263fe81dc2a173bd06e8cec6bbe43c625e9e6 in branch refs/heads/master from [~devdeep] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=a24263f ] CLOUDSTACK-6030: Encrypt the primary and secondary smb storage password when it is stored in the db. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
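The commit above encrypts the SMB password before persisting it rather than storing plaintext. A stand-alone sketch of that round trip, using JDK AES purely for illustration (CloudStack's actual code goes through its own db-encryption utility and key management; the class, method names, and ECB mode here are demo simplifications, not the real implementation):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.util.Base64;

// Illustrative only: encrypt the SMB password on the way into the db column,
// decrypt on the way out, so the table never holds cleartext credentials.
public class SmbCredentialSketch {
    private final SecretKeySpec key;

    public SmbCredentialSketch(byte[] dbSecretKey) {
        this.key = new SecretKeySpec(dbSecretKey, "AES"); // 16/24/32-byte key
    }

    /** Ciphertext form that would be written to the storage-details table. */
    public String toStoredForm(String smbPassword) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding"); // demo mode only
            c.init(Cipher.ENCRYPT_MODE, key);
            byte[] ct = c.doFinal(smbPassword.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(ct);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Recovers the password when the management server needs to mount the share. */
    public String fromStoredForm(String stored) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
            c.init(Cipher.DECRYPT_MODE, key);
            byte[] pt = c.doFinal(Base64.getDecoder().decode(stored));
            return new String(pt, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

In production one would use an authenticated mode (e.g. AES/GCM with a random IV) and a key loaded from outside the database; ECB is used above only to keep the round trip short.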
[jira] [Resolved] (CLOUDSTACK-6030) smb user password is stored in clear text in the db
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Devdeep Singh resolved CLOUDSTACK-6030. --- Resolution: Fixed -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891982#comment-13891982 ] Joris van Lieshout commented on CLOUDSTACK-6023: That is a good idea. Nice solution.
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13891985#comment-13891985 ] Harikrishna Patnala commented on CLOUDSTACK-6023: - But will setting the value to 16 solve the problem? Have you tried setting it to 16?
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892005#comment-13892005 ] Joris van Lieshout commented on CLOUDSTACK-6023: We will be installing on our test environment today a custom build of 4.2.1 that has a max of 16. I should be able to answer your question in a couple of days. Theoretically, however, looking at the current size of the POST and the number of instances with vcpumax=32, setting it to 16 will make a big difference.
[jira] [Commented] (CLOUDSTACK-5967) Virtual router in OVS network fails to start with error from XenServer: VM_REQUIRES_NETWORK
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892015#comment-13892015 ] ASF subversion and git services commented on CLOUDSTACK-5967: - Commit 2e004878b1da0f7fb5ec12e77babbb626e96c1ef in branch refs/heads/4.3-forward from [~murali.reddy] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=2e00487 ] CLOUDSTACK-5967: GRE tunnel creation is failing. After the network orchestrator refactor, only network elements providing services as defined by the network offering are involved in the network design and implement phases. So the OVS network element needs to be enabled as a 'Connectivity' service provider to make GRE tunnels work. This fix introduces the 'Ovs' provider as a Connectivity service provider. Virtual router in OVS network fails to start with error from XenServer: VM_REQUIRES_NETWORK --- Key: CLOUDSTACK-5967 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5967 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Network Controller Affects Versions: 4.3.0 Environment: CloudStack 4.3 with XenServer 6.2, GRE tunnel based advanced network Reporter: Paul Angus Assignee: Murali Reddy Priority: Blocker Fix For: 4.3.0 Virtual Router start fails with error VM_REQUIRES_NETWORK when using GRE tunnel encapsulation. The tunnel is created on XenServer (OVSTunnel194); the virtual router appears briefly, then disappears, and an insufficient capacity error is returned by CloudStack. 2014-01-28 16:17:09,829 DEBUG [c.c.n.r.VirtualNetworkApplianceManagerImpl] (Job-Executor-12:ctx-3a256248 ctx-58300764) Creating monitoring services on VM[DomainRouter|r-12-VM] start... 
2014-01-28 16:17:09,842 DEBUG [c.c.a.t.Request] (Job-Executor-12:ctx-3a256248 ctx-58300764) Seq 2-232063002: Sending { Cmd , MgmtId: 345049362040, via: 2(localhost.localdomain), Ver: v1, Flags: 100011, [{com.cloud.agent.api.StartCommand:{vm:{id:12,name:r-12-VM,bootloader:PyGrub,type:DomainRouter,cpus:1,minSpeed:125,maxSpeed:500,minRam:134217728,maxRam:134217728,arch:x86_64,os:Debian GNU/Linux 7(32-bit),bootArgs: template=domP name=r-12-VM eth2ip=192.168.1.53 eth2mask=255.255.255.0 gateway=192.168.1.254 eth0ip=10.1.1.1 eth0mask=255.255.255.0 domain=cs2cloud.internal dhcprange=10.1.1.1 eth1ip=169.254.0.255 eth1mask=255.255.0.0 type=router disable_rp_filter=true
[jira] [Commented] (CLOUDSTACK-5967) Virtual router in OVS network fails to start with error from XenServer: VM_REQUIRES_NETWORK
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892034#comment-13892034 ] ASF subversion and git services commented on CLOUDSTACK-5967: - Commit 9a3adc97ca11c8044296f65ac00ff7f581e4fde6 in branch refs/heads/4.2 from [~murali.reddy] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=9a3adc9 ] CLOUDSTACK-5967: GRE tunnel creation is failing. After the network orchestrator refactor, only network elements providing services as defined by the network offering are involved in the network design and implement phases. So the OVS network element needs to be enabled as a 'Connectivity' service provider to make GRE tunnels work. This fix introduces the 'Ovs' provider as a Connectivity service provider. Conflicts: server/src/com/cloud/network/NetworkServiceImpl.java
2014-01-28 16:17:09,842 DEBUG [c.c.a.t.Request] (Job-Executor-12:ctx-3a256248 ctx-58300764) Seq 2-232063002: Sending { Cmd , MgmtId: 345049362040, via: 2(localhost.localdomain), Ver: v1, Flags: 100011, [{com.cloud.agent.api.StartCommand:{vm:{id:12,name:r-12-VM,bootloader:PyGrub,type:DomainRouter,cpus:1,minSpeed:125,maxSpeed:500,minRam:134217728,maxRam:134217728,arch:x86_64,os:Debian GNU/Linux 7(32-bit),bootArgs: template=domP name=r-12-VM eth2ip=192.168.1.53 eth2mask=255.255.255.0 gateway=192.168.1.254 eth0ip=10.1.1.1 eth0mask=255.255.255.0 domain=cs2cloud.internal dhcprange=10.1.1.1 eth1ip=169.254.0.255 eth1mask=255.255.0.0 type=router disable_rp_filter=true
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892073#comment-13892073 ] Daan Hoogland commented on CLOUDSTACK-6023: --- The 4.2.1 patch that we are installing will have the limit of 2*vcpu, capped to 16. I agree that we should see a big difference, but this does not test the extreme case of hard-coding it to 16. Also, we have more than ten VMs per host, so we will go over the 160 vCPUs per host limit if we follow Harikrishna's suggestion. -- Daan
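The two proposals discussed in this thread (hard-coding the documented XenServer limit of 16, versus Daan's patch allowing scaling to twice the initial vCPU count, capped at 16) can be sketched as below. The names vmSpec.getCpus() and VCPUsMax follow the CitrixResourceBase.java snippet quoted in the issue; the VcpusMaxCap class and method names are illustrative only, not the committed fix.

```java
// Sketch of the vcpus-max cap discussed in CLOUDSTACK-6023 (assumed names,
// not the actual committed patch).
public class VcpusMaxCap {
    // Per-VM vCPU limit from the Citrix XenServer 6.x configuration limits docs.
    static final long XENSERVER_VCPUS_MAX = 16L;

    // Daan's variant: allow a later scale-up to twice the initial vCPU count,
    // but never exceed the documented XenServer per-VM limit. Windows guests
    // already received the exact count in the original code.
    static long vcpusMax(int cpus, boolean isWindows) {
        if (isWindows) {
            return cpus;
        }
        return Math.min(2L * cpus, XENSERVER_VCPUS_MAX);
    }

    public static void main(String[] args) {
        System.out.println(vcpusMax(1, false));  // a 1-vCPU router VM gets 2, not 32
        System.out.println(vcpusMax(12, false)); // capped at 16
        System.out.println(vcpusMax(4, true));   // Windows guest: unchanged
    }
}
```

With 36 router VMs per host at VCPUsMax=2 instead of 32, the per-vCPU stats in the xapi POST shrink accordingly, which is why the reporters expect the /remote_db_access payload to drop well under the limit.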
[jira] [Created] (CLOUDSTACK-6032) [VmScaleup]service offering id is not getting changed in usage_vm_instance table under usage_type 1
Harikrishna Patnala created CLOUDSTACK-6032: --- Summary: [VmScaleup] service offering id is not getting changed in usage_vm_instance table under usage_type 1 Key: CLOUDSTACK-6032 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6032 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server, Usage Affects Versions: 4.3.0 Reporter: Harikrishna Patnala Assignee: Harikrishna Patnala Fix For: 4.3.0, 4.4.0
Steps to reproduce
===
1. Deploy a VM with the co small instance offering {cpu:512,ram:512}
2. Scale up the VM to the co medium instance offering {cpu:1GHz,ram:1024}
3. Check the usage_vm_instance table in the usage db
Expected
===
For usage_type 1 and 2, service_offering_id should get updated to the new service offering.
Actual
===
service_offering_id is getting updated for usage_type 2 only.
Detail
===
mysql> select * from usage_vm_instance where vm_instance_id=3 and end_date is NULL;
+------------+---------+------------+----------------+---------+---------------------+-------------+-----------------+---------------------+----------+-----------+-----------+--------+
| usage_type | zone_id | account_id | vm_instance_id | vm_name | service_offering_id | template_id | hypervisor_type | start_date          | end_date | cpu_speed | cpu_cores | memory |
+------------+---------+------------+----------------+---------+---------------------+-------------+-----------------+---------------------+----------+-----------+-----------+--------+
|          1 |       1 |          2 |              3 | one     |                  15 |           7 | VMware          | 2014-01-29 14:45:12 | NULL     | NULL      | NULL      | NULL   |
|          2 |       1 |          2 |              3 | one     |                   2 |           7 | VMware          | 2014-01-29 14:46:26 | NULL     | NULL      | NULL      | NULL   |
+------------+---------+------------+----------------+---------+---------------------+-------------+-----------------+---------------------+----------+-----------+-----------+--------+
-- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoff Higgibottom updated CLOUDSTACK-6031: -- Priority: Critical (was: Blocker) Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.3.0 Environment: CloudStack 4.0.3 RC3 running on CentOS 6.5, Advanced Zone, XenServer 6.2 Reporter: Geoff Higgibottom Priority: Critical Labels: 4.0.3, UI The Virtual Router count is not displayed in the Infrastructure screen; it always shows zero, even with multiple Virtual Routers running. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (CLOUDSTACK-6033) User changing OS type from the UI returns Only ROOT admins are allowed to modify this attribute
Francois Gaudreault created CLOUDSTACK-6033: --- Summary: User changing OS type from the UI returns Only ROOT admins are allowed to modify this attribute Key: CLOUDSTACK-6033 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6033 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: ISO, UI Reporter: Francois Gaudreault Priority: Minor When a user uploads an ISO and then tries to change the OS type, the UI outputs an error: Only ROOT admins are allowed to modify this attribute. The parameter appears to be saved even though this error occurs. I would expect that a user template/ISO can be edited by the user who owns it. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6033) User changing OS type from the UI returns Only ROOT admins are allowed to modify this attribute
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Francois Gaudreault updated CLOUDSTACK-6033: Affects Version/s: 4.2.1 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Closed] (CLOUDSTACK-5526) LibvirtDomainXMLParser
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wido den Hollander closed CLOUDSTACK-5526. -- LibvirtDomainXMLParser --- Key: CLOUDSTACK-5526 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5526 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: KVM Affects Versions: 4.4.0 Environment: CentOS 6.4 with KVM 0.1002 Libvirt 0.5.1 Reporter: howie yu Assignee: Wido den Hollander Priority: Trivial When using LibvirtDomainXMLParser to parse XML from a Domain, the diskCacheMode attribute does not always have a value and can be an empty string: String diskCacheMode = getAttrValue(driver, cache, disk); When the code reaches this valueOf call: } else if (type.equalsIgnoreCase(block)) { def.defBlockBasedDisk(diskDev, diskLabel, DiskDef.diskBus.valueOf(bus.toUpperCase())); def.setCacheMode(DiskDef.diskCacheMode.valueOf(diskCacheMode)); } an IllegalArgumentException is thrown at java.lang.Enum.valueOf(Enum.java:196). I suggest checking whether diskCacheMode is an empty string, for example: if (diskCacheMode == null || diskCacheMode.isEmpty()) { def.setCacheMode(DiskDef.diskCacheMode.NONE); } else { def.setCacheMode(DiskDef.diskCacheMode.valueOf(diskCacheMode)); } -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (CLOUDSTACK-5526) LibvirtDomainXMLParser
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wido den Hollander resolved CLOUDSTACK-5526. Resolution: Fixed This has been resolved. The problem was incorrect XML generation: the cache mode should always say none, writeback OR writethrough, and should never be empty. I'm writing a test for the DomainXMLParser anyway :) -- This message was sent by Atlassian JIRA (v6.1.5#6160)
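The defensive handling suggested in the report can be sketched as a self-contained example. The enum values mirror the cache modes named in Wido's resolution comment (none, writeback, writethrough); the DiskCacheModeParse class is a stripped-down stand-in for illustration, not the real com.cloud.agent LibvirtDomainXMLParser types.

```java
// Sketch of the diskCacheMode fallback from CLOUDSTACK-5526 (illustrative
// class and method names; the enum mirrors libvirt's disk cache modes).
public class DiskCacheModeParse {
    enum DiskCacheMode { NONE, WRITEBACK, WRITETHROUGH }

    // An empty or missing cache attribute falls back to NONE instead of
    // letting Enum.valueOf("") throw IllegalArgumentException.
    static DiskCacheMode parseCacheMode(String diskCacheMode) {
        if (diskCacheMode == null || diskCacheMode.isEmpty()) {
            return DiskCacheMode.NONE;
        }
        return DiskCacheMode.valueOf(diskCacheMode.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(parseCacheMode(""));          // NONE
        System.out.println(parseCacheMode("writeback")); // WRITEBACK
    }
}
```

Since the root cause turned out to be the XML generator emitting an empty attribute, the fix went into the generator; a guard like this on the parsing side would merely have masked the malformed XML.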
[jira] [Created] (CLOUDSTACK-6034) Deleting events by user
Tomasz Zieba created CLOUDSTACK-6034: Summary: Deleting events by user Key: CLOUDSTACK-6034 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6034 Project: CloudStack Issue Type: Improvement Security Level: Public (Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.2.1 Environment: ACS 4.2.1 Reporter: Tomasz Zieba In version 4.2.1 it is possible to remove events through the UI. Events can be removed both by an administrator and by a normal user. I think the removal of user events should be implemented through a hidden flag rather than real removal of the events: real removal of events complicates diagnosis in case of problems. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (CLOUDSTACK-6035) error when displaying firewall rules
Tomasz Zieba created CLOUDSTACK-6035: Summary: error when displaying firewall rules Key: CLOUDSTACK-6035 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6035 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.2.1 Environment: ACS 4.2.1 after upgrade from 3.0.2 Reporter: Tomasz Zieba After the upgrade from 3.0.2 to 4.2.1 there is an error when displaying firewall rules: if the number of rules is too high, the UI displays an error (screenshot attached). In the illustrated case it is 26 rules. This bug was not present in version 3.0.2. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (CLOUDSTACK-6036) CloudStack stops the machine for no reason
Tomasz Zieba created CLOUDSTACK-6036: Summary: CloudStack stops the machine for no reason Key: CLOUDSTACK-6036 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6036 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.2.1 Environment: ACS 4.2.1 after upgrade from 3.0.2, XCP 1.1 Reporter: Tomasz Zieba Priority: Critical After the upgrade from version 3.0.2 to 4.2.1, a very strange error appeared: the machine stops itself after its offering is changed. Steps to reproduce: 1. Run an instance of the machine 2. Stop it from within the operating system 3. Change the machine's offering 4. Start the machine 5. Wait 10 minutes 6. CloudStack stops the machine for no reason 7. Restart the machine In the logs you can find:
2014-02-05 06:27:00,974 DEBUG [xen.resource.CitrixResourceBase] (DirectAgent-316:null) 11. The VM i-41-824-VM is in Running state
2014-02-05 06:27:00,974 DEBUG [agent.transport.Request] (DirectAgent-316:null) Seq 50-1756626952: Processing: { Ans: , MgmtId: 160544475005497, via: 50, Ver: v1, Flags: 10, [{com.cloud.agent.api.ClusterSyncAnswer:{_clusterId:2,_newStates:{i-41-824-VM:{t:f32a4fee-b64e-4868-9c52-2a27e32d4c0e,u:Running,v:viridian:true;acpi:true;apic:true;pae:true;nx:false;timeoffset:0;cores-per-socket:4}},_isExecuted:false,result:true,wait:0}}] }
2014-02-05 06:27:00,981 DEBUG [cloud.vm.VirtualMachineManagerImpl] (DirectAgent-316:null) VM i-41-824-VM: cs state = Running and realState = Running
2014-02-05 06:27:00,981 DEBUG [cloud.vm.VirtualMachineManagerImpl] (DirectAgent-316:null) VM i-41-824-VM: cs state = Running and realState = Running
2014-02-05 06:36:01,240 DEBUG [agent.transport.Request] (HA-Worker-1:work-1511) Seq 51-1374970375: Sending { Cmd , MgmtId: 160544475005497, via: 51, Ver: v1, Flags: 100111, [{com.cloud.agent.api.StopCommand:{isProxy:false,executeInSequence:true,vmName:i-41-824-VM,wait:0}}] }
2014-02-05 06:36:01,240 DEBUG [agent.transport.Request] (HA-Worker-1:work-1511) Seq 51-1374970375: Executing: { Cmd , MgmtId: 160544475005497, via: 51, Ver: v1, Flags: 100111, [{com.cloud.agent.api.StopCommand:{isProxy:false,executeInSequence:true,vmName:i-41-824-VM,wait:0}}] }
2014-02-05 06:36:01,383 DEBUG [xen.resource.CitrixResourceBase] (DirectAgent-150:null) 9. The VM i-41-824-VM is in Stopping state
2014-02-05 06:36:27,625 DEBUG [xen.resource.CitrixResourceBase] (DirectAgent-150:null) 10. The VM i-41-824-VM is in Stopped state
Notice that the stop of the machine is issued by the HA-Worker component. This behaviour after the upgrade complicates users' work. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6035) error when displaying firewall rules
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomasz Zieba updated CLOUDSTACK-6035: - Attachment: cloudstack_ui_firewall_error.JPG error when displaying firewall rules Key: CLOUDSTACK-6035 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6035 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.2.1 Environment: ACS 4.2.1 after upgrade from 3.0.2 Reporter: Tomasz Zieba Attachments: cloudstack_ui_firewall_error.JPG After the upgrade from 3.0.2 to 4.2.1, an error appears when displaying firewall rules: if the number of rules is too high, the UI displays an error (screenshot attached). In the illustrated case it is 26 rules. This bug was not present in version 3.0.2. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892297#comment-13892297 ] Daan Hoogland commented on CLOUDSTACK-6023: --- Nithin, you are right about it being a hypothesis. It is being tested in a live cloud at Schuberg Philis as we 'speak'; it runs a patched 4.2.1 version. I committed my change to 4.3-forward. We decided to let you scale to twice your initial number of CPUs, no matter how many there were to begin with. If you need to go beyond that you can restart the VM with a new offering. We can also implement both behaviours based on a setting if need be. I think this hasn't been caught because of the two rather specific setups Schuberg Philis is using. That's why we often come up with blocker bugs in older versions during RC votes as well. And this is also why we want a stable master tree from which we can take snapshots to run in our production environments. Not there yet, but still full of aspiration, Daan -- Daan Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits --- Key: CLOUDSTACK-6023 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6023 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) 
Components: XenServer Affects Versions: Future, 4.2.1, 4.3.0 Reporter: Joris van Lieshout Priority: Blocker Attachments: xentop.png CitrixResourceBase.java contains a hardcoded value for VCPUsMax for non-Windows instances:

if (guestOsTypeName.toLowerCase().contains("windows")) {
    vmr.VCPUsMax = (long) vmSpec.getCpus();
} else {
    vmr.VCPUsMax = 32L;
}

For all currently available versions of XenServer the limit is 16 vCPUs: http://support.citrix.com/servlet/KbServlet/download/28909-102-664115/XenServer-6.0-Configuration-Limits.pdf http://support.citrix.com/servlet/KbServlet/download/32312-102-704653/CTX134789%20-%20XenServer%206.1.0_Configuration%20Limits.pdf http://support.citrix.com/servlet/KbServlet/download/34966-102-706122/CTX137837_XenServer%206_2_0_Configuration%20Limits.pdf In addition there seems to be a limit on the total number of assigned vCPUs on a XenServer. The impact of this bug is that xapi becomes unstable and keeps losing its master_connection, because the POST to /remote_db_access is larger than its limit of 200K. This basically renders a pool slave unmanageable. 
If you look at the running instances using xentop you will see the instances reporting 32 vCPUs. Below is the relevant portion of the xensource.log that shows the effect of the bug: [20140204T13:52:17.264Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: Using commandline: /usr/sbin/stunnel -fd f3b8bb12-4e03-b47a-0dc5-85ad5aef79e6 [20140204T13:52:17.269Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: stunnel has pidty: (FEFork (43,30540)) [20140204T13:52:17.269Z|debug|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel: stunnel start [20140204T13:52:17.269Z| info|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] stunnel connected pid=30540 fd=40 [20140204T13:52:17.346Z|error|xenserverhost1|144 inet-RPC|host.call_plugin R:e58e985539ab|master_connection] Received HTTP error 500 ({ method = POST; uri = /remote_db_access; query = [ ]; content_length = [ 315932 ]; transfer encoding = ; version = 1.1; cookie = [ pool_secret=386bbf39-8710-4d2d-f452-9725d79c2393/aa7bcda9-8ebb-0cef-bb77-c6b496c5d859/1f928d82-7a20-9117-dd30-f96c7349b16e ]; task = ; subtask_of = ; content-type = ; user_agent = xapi/1.9 }) from master. This suggests our master address is wrong. Sleeping for 60s and then restarting. [20140204T13:53:18.620Z|error|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Caught Master_connection.Goto_handler [20140204T13:53:18.620Z|debug|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message). [20140204T13:53:18.620Z|error|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Connection to master died. I will continue to retry indefinitely (supressing future logging of this message). 
[20140204T13:53:18.620Z|debug|xenserverhost1|10|dom0 networking update D:5c5376f0da6c|master_connection] Sleeping 2.00 seconds before retrying master connection... [20140204T13:53:20.627Z|debug|xenserverhost1|10|dom0 networking update
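Taken together, the remedies discussed in this thread — capping VCPUsMax at XenServer's documented 16-vCPU limit, allowing scaling up to twice the initially requested CPU count, and applying headroom only when dynamic scaling is in play — could look roughly like the sketch below. The class and method names here are assumptions for illustration, not the actual CitrixResourceBase code.

```java
// Sketch of a bounded VCPUsMax calculation. MAX_XENSERVER_VCPUS reflects the
// 16-vCPU configuration limit cited in the report; the doubling rule follows
// the 4.3-forward change described in the comments. Names are assumptions.
public class VcpuLimitSketch {
    static final long MAX_XENSERVER_VCPUS = 16L;

    static long vcpusMax(String guestOsTypeName, int requestedCpus, boolean dynamicScalingEnabled) {
        if (guestOsTypeName.toLowerCase().contains("windows") || !dynamicScalingEnabled) {
            // No headroom needed: pin the maximum to what was requested,
            // as the existing code already does for Windows guests.
            return requestedCpus;
        }
        // Leave headroom for dynamic scaling (up to 2x the initial count),
        // but never exceed the hypervisor's supported limit.
        return Math.min(2L * requestedCpus, MAX_XENSERVER_VCPUS);
    }
}
```

With this shape, a 1-vCPU router VM would get VCPUsMax of 2 instead of 32, which keeps the per-VM stats payload well under the xapi RPC size limit described above.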
[jira] [Commented] (CLOUDSTACK-6023) Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892320#comment-13892320 ] Harikrishna Patnala commented on CLOUDSTACK-6023: - Hi Nitin, I have submitted a review request for setting to 16 only when dynamic scaling is enabled. Please review and commit https://reviews.apache.org/r/17747/ Non windows instances are created on XenServer with a vcpu-max above supported xenserver limits --- Key: CLOUDSTACK-6023 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6023 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) 
[jira] [Created] (CLOUDSTACK-6037) NPE: createZone is failing
Srikanteswararao Talluri created CLOUDSTACK-6037: Summary: NPE: createZone is failing Key: CLOUDSTACK-6037 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6037 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.4.0 Environment: simulator Reporter: Srikanteswararao Talluri Priority: Blocker Fix For: 4.4.0 createZone leads to a NullPointerException: ===START=== 0:0:0:0:0:0:0:1 -- GET apiKey=KFzBif3dP05fiswvBQinoXhTuCIJ1486joeuFF00K4t4NRfASfuO3NAu91V0_RaCDa-2-iRO-1RS1yTQY7-OVAname=Sandbox-simulatorguestcidraddress=10.1.1.0%2F24dns1=10.147.28.6response=jsoncommand=createZonesignature=r1GVuOmky%2Fe%2BD2PrYuU2DGtmgtI%3Dnetworktype=Advancedinternaldns1=10.147.28.6 2014-02-06 01:26:24,561 WARN [c.c.a.d.ParamGenericValidationWorker] (1098327041@qtp-873745902-11:ctx-e309a5b3 ctx-288e38c3 ctx-8e4c188b) Received unkown parameters for command createzoneresponse. Unknown parameters : signature apikey 2014-02-06 01:26:24,595 DEBUG [o.a.c.e.o.NetworkOrchestrator] (1098327041@qtp-873745902-11:ctx-e309a5b3 ctx-288e38c3 ctx-8e4c188b) Releasing lock for Acct[25788f1c-8e9d-11e3-8e98-062a0033-system] 2014-02-06 01:26:24,624 DEBUG [c.c.u.d.T.Transaction] (1098327041@qtp-873745902-11:ctx-e309a5b3 ctx-288e38c3 ctx-8e4c188b) Rolling back the transaction: Time = 50 Name = 1098327041@qtp-873745902-11; called by -TransactionLegacy.rollback:903-TransactionLegacy.removeUpTo:846-TransactionLegacy.close:670-Transaction.execute:41-Transaction.execute:46-ConfigurationManagerImpl.createZone:1782-ConfigurationManagerImpl.createZone:1922-NativeMethodAccessorImpl.invoke0:-2-NativeMethodAccessorImpl.invoke:57-DelegatingMethodAccessorImpl.invoke:43-Method.invoke:601-AopUtils.invokeJoinpointUsingReflection:317 2014-02-06 01:26:24,643 ERROR [c.c.a.ApiServer] (1098327041@qtp-873745902-11:ctx-e309a5b3 ctx-288e38c3 ctx-8e4c188b) unhandled exception executing api command: createZone 
java.lang.NullPointerException at org.apache.cloudstack.network.contrail.management.ContrailGuru.canHandle(ContrailGuru.java:99) at org.apache.cloudstack.network.contrail.management.ContrailGuru.design(ContrailGuru.java:119) at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.setupNetwork(NetworkOrchestrator.java:636) at com.cloud.configuration.ConfigurationManagerImpl.createDefaultSystemNetworks(ConfigurationManagerImpl.java:1865) at com.cloud.configuration.ConfigurationManagerImpl$6.doInTransaction(ConfigurationManagerImpl.java:1795) at com.cloud.configuration.ConfigurationManagerImpl$6.doInTransaction(ConfigurationManagerImpl.java:1782) at com.cloud.utils.db.Transaction$2.doInTransaction(Transaction.java:49) at com.cloud.utils.db.Transaction.execute(Transaction.java:37) at com.cloud.utils.db.Transaction.execute(Transaction.java:46) at com.cloud.configuration.ConfigurationManagerImpl.createZone(ConfigurationManagerImpl.java:1782) at com.cloud.configuration.ConfigurationManagerImpl.createZone(ConfigurationManagerImpl.java:1922) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:183) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150) at com.cloud.event.ActionEventInterceptor.invoke(ActionEventInterceptor.java:51) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:161) at org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:91) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy112.createZone(Unknown Source) at org.apache.cloudstack.api.command.admin.zone.CreateZoneCmd.execute(CreateZoneCmd.java:170) at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:109) at com.cloud.api.ApiServer.queueCommand(ApiServer.java:528) at com.cloud.api.ApiServer.handleRequest(ApiServer.java:372) at com.cloud.api.ApiServlet.processRequestInContext(ApiServlet.java:329) at com.cloud.api.ApiServlet.access$000(ApiServlet.java:54) at com.cloud.api.ApiServlet$1.run(ApiServlet.java:118) at
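The trace shows the NPE originating in ContrailGuru.canHandle(), from which NetworkOrchestrator.setupNetwork() never recovers, so the whole createZone call aborts. A common defensive pattern for this class of failure is for a guru to return false on missing data so the orchestrator can fall through to the next NetworkGuru. The sketch below is purely hypothetical — the actual ContrailGuru logic is not quoted in this report, and the traffic-type and isolation values are invented stand-ins.

```java
// Hypothetical null-safe guard for a NetworkGuru-style canHandle().
// The parameter names and the "Guest"/"L3VPN" values are assumptions
// for illustration only, not the real ContrailGuru code.
public class NullSafeGuruSketch {
    static boolean canHandle(String trafficType, String isolationMethod) {
        if (trafficType == null || isolationMethod == null) {
            // Returning false instead of dereferencing null lets the
            // orchestrator try the next guru rather than abort the zone setup.
            return false;
        }
        return "Guest".equals(trafficType) && "L3VPN".equals(isolationMethod);
    }
}
```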
[jira] [Updated] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chiradeep Vittal updated CLOUDSTACK-6031: - Labels: 4.3.0 UI (was: 4.0.3 UI) Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.3.0 Environment: CloudStack 4.0.3 RC3 running on CentOS 6.5, Advanced Zone, XenServer 6.2 Reporter: Geoff Higgibottom Priority: Critical Labels: 4.3.0, UI The Virtual Router count is not displayed on the Infrastructure screen; it is always ZERO, even with multiple Virtual Routers running. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chiradeep Vittal updated CLOUDSTACK-6031: - Environment: CloudStack 4.3.0 RC3 running on CentOS 6.5, Advanced Zone, XenServer 6.2 (was: CloudStack 4.0.3 RC3 running on CentOS 6.5, Advanced Zone, XenServer 6.2) Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Animesh Chaturvedi updated CLOUDSTACK-6031: --- Fix Version/s: 4.3.0 Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Animesh Chaturvedi updated CLOUDSTACK-6031: --- Assignee: Jessica Wang Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892528#comment-13892528 ] Animesh Chaturvedi commented on CLOUDSTACK-6031: Jessica, can you check on this issue? Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892632#comment-13892632 ] ASF subversion and git services commented on CLOUDSTACK-6031: - Commit e0dfe0ab15ee71c2cdf8fe7cce46b73258120a8e in branch refs/heads/4.3-forward from [~jessicawang] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=e0dfe0a ] CLOUDSTACK-6031: UI infrastructure count pass listAll=true to all listXXX API for counting resource. Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (CLOUDSTACK-6031) Virtual Router count not displaying in UI Infrastructure Screen
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jessica Wang resolved CLOUDSTACK-6031. -- Resolution: Fixed Virtual Router count not displaying in UI Infrastructure Screen --- Key: CLOUDSTACK-6031 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6031 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (CLOUDSTACK-5996) UI - Duplicate entries for router VMs in the UI under project view when logged in as root admin
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jessica Wang resolved CLOUDSTACK-5996. -- Resolution: Fixed UI - Duplicate entries for router VMs in the UI under project view when logged in as root admin --- Key: CLOUDSTACK-5996 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5996 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: UI Reporter: Jessica Wang Assignee: Jessica Wang When logged in as root admin, when we view UI-Infrastructure-Virtual Routers in the project view, we see duplicate rows for some routers. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-5996) UI - Duplicate entries for router VMs in the UI under project view when logged in as root admin
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892698#comment-13892698 ] Jessica Wang commented on CLOUDSTACK-5996: -- http://bugs-ccp.citrix.com/browse/CS-18578 UI - Duplicate entries for router VMs in the UI under project view when logged in as root admin --- Key: CLOUDSTACK-5996 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5996 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Resolved] (CLOUDSTACK-5098) [UI] Zone view is showing Add VMware Datacenter button even though zone is already associated with a data center
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jessica Wang resolved CLOUDSTACK-5098. -- Resolution: Fixed commit 299b05691228a6be2d6c912a12ab46886e892f05 Author: Jessica Wang jessicaw...@apache.org Date: Wed Feb 5 14:59:02 2014 -0800 CLOUDSTACK-5098: UI VMware during zone creation, after addVmwareDc succeeds, if addClsuter fails (e.g. because of wrong input value), zone detail page will show wrong button (Add Vmware Datacenter) since listVmwareDcs is only c To resolve this specific use case, change UI to use listApis instead of listClusters to determine whether to call listVmwareDcs. branch: 4.2.1 [UI] Zone view is showing Add VMware Datacenter button even though zone is already associated with a data center -- Key: CLOUDSTACK-5098 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5098 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: UI Affects Versions: 4.2.1 Reporter: Sailaja Mada Assignee: Jessica Wang Priority: Critical Fix For: 4.3.0 Attachments: UI_changed_to_use_listClusters_instead_of_listApis_to_determine_whether_to_call_listVmwareDcs.PNG, apilog.log, management-server.log Steps: 1. Enable the Nexus flag at the global level 2. Try to configure the Adv zone with the VMware hypervisor without specifying Nexus vSwitch details while adding the cluster. With this, cluster addition failed, so I cancelled the zone configuration. Then I tried the steps below: 1) Removed the pod and then tried to remove the zone. It failed, saying The zone is not deletable because there are VMware datacenters associated with this zone. Remove VMware DC from this zone. 
2) There is no Remove DC option when I go to the zone details in the UI. Observations: 1) Entries in vmware_data_center and vmware_data_center_zone_map are not cleaned up when there is a failure to add the cluster 2) Because of this, when I try to add the cluster again it fails, saying This data center is already part of other Cloudstack Zone So when adding a cluster fails for any reason, we should remove the addDC association in the DB as well. After I deleted the rows from vmware_data_center and vmware_data_center_zone_map, I was able to reuse the datacenter. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
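The reporter's suggestion — remove the DC association when cluster addition fails — amounts to a compensating cleanup around the failing step. A minimal sketch of that pattern follows; the class, method names, and in-memory map standing in for the vmware_data_center_zone_map table are all assumptions for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical compensating-cleanup sketch for the zone wizard: if adding
// the cluster throws after the VMware DC was associated with the zone,
// undo the association so the DC can be reused. Names are assumptions.
public class ZoneWizardSketch {
    // Stand-in for the vmware_data_center_zone_map table: zoneId -> DC name.
    final Map<Long, String> dcZoneMap = new HashMap<>();

    void addVmwareDc(long zoneId, String dcName) {
        dcZoneMap.put(zoneId, dcName);
    }

    boolean addCluster(long zoneId, boolean inputValid) {
        if (!inputValid) {
            throw new IllegalArgumentException("invalid cluster input");
        }
        return true;
    }

    boolean setupZone(long zoneId, String dcName, boolean clusterInputValid) {
        addVmwareDc(zoneId, dcName);
        try {
            return addCluster(zoneId, clusterInputValid);
        } catch (RuntimeException e) {
            // Compensate: drop the DC association so the zone (and the DC)
            // are left in a reusable state instead of a half-configured one.
            dcZoneMap.remove(zoneId);
            return false;
        }
    }
}
```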
[jira] [Commented] (CLOUDSTACK-5987) listTemplates: when call is made with filter=community/featured and by domaindAdmin/regularUser, public templates from some domains are not being returned
[ https://issues.apache.org/jira/browse/CLOUDSTACK-5987?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892835#comment-13892835 ] ASF subversion and git services commented on CLOUDSTACK-5987: - Commit c440d9046627966f6c81f58f4069526e76baef86 in branch refs/heads/rbac from [~minchen07] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=c440d90 ] Merge fix for CLOUDSTACK-5987 in master to RBAC. listTemplates: when call is made with filter=community/featured and by domaindAdmin/regularUser, public templates from some domains are not being returned -- Key: CLOUDSTACK-5987 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-5987 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Affects Versions: 4.3.0 Reporter: Alena Prokharchyk Priority: Critical Fix For: 4.3.0 Steps to reproduce: 1) Create domains: Root/domain1 Root/domain2 2) Add accounts to those domains 3) Register public templates in domain1 and domain2. 4) Login as a user of domain1. You are not able to see the template registered in domain2. Only template of domain1, its parent domain(s), child domain(s) are being returned. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-2932) Allow deleting of snapshots that have errored out.
[ https://issues.apache.org/jira/browse/CLOUDSTACK-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nitin Mehta updated CLOUDSTACK-2932: Fix Version/s: 4.4.0 Allow deleting of snapshots that have errored out. -- Key: CLOUDSTACK-2932 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2932 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Affects Versions: 4.2.0, 4.4.0 Reporter: Nitin Mehta Assignee: Nitin Mehta Fix For: 4.2.0, 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-2932) Allow deleting of snapshots that have errored out.
[ https://issues.apache.org/jira/browse/CLOUDSTACK-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nitin Mehta updated CLOUDSTACK-2932: Affects Version/s: 4.4.0 Allow deleting of snapshots that have errored out. -- Key: CLOUDSTACK-2932 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2932 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Affects Versions: 4.2.0, 4.4.0 Reporter: Nitin Mehta Assignee: Nitin Mehta Fix For: 4.2.0, 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-2932) Allow deleting of snapshots that have errored out.
[ https://issues.apache.org/jira/browse/CLOUDSTACK-2932?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13892892#comment-13892892 ] ASF subversion and git services commented on CLOUDSTACK-2932: - Commit 86cada3b3c1871c45bd206c07f785bf499865d61 in branch refs/heads/master from [~nitinme] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=86cada3 ] CLOUDSTACK-2932: Allow deleting of snapshots that have errored out. Simply mark the removed column as there is no physical clean up required. It can land into error state only from allocated/Creating state which are states before creation on primary storage works Allow deleting of snapshots that have errored out. -- Key: CLOUDSTACK-2932 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-2932 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Affects Versions: 4.2.0, 4.4.0 Reporter: Nitin Mehta Assignee: Nitin Mehta Fix For: 4.2.0, 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
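The commit message describes a soft delete: an errored-out snapshot only has its removed column marked, because the Error state is reachable only from Allocated/Creating, i.e. before anything exists on primary storage. A sketch of that state check, with assumed enum and field names:

```java
import java.time.Instant;

// Sketch of the soft-delete rule from the commit message: deleting an
// errored-out snapshot just flips the 'removed' marker, since nothing was
// ever written to primary storage. State and field names are assumptions.
public class SnapshotSoftDeleteSketch {
    enum State { Allocated, Creating, CreatedOnPrimary, BackedUp, Error }

    static class SnapshotRow {
        State state;
        Instant removed; // null means the row is still live
        SnapshotRow(State s) { state = s; }
    }

    static boolean delete(SnapshotRow snap) {
        if (snap.state == State.Error) {
            // Error is only entered from Allocated/Creating, so there is
            // no physical artifact on primary storage to clean up.
            snap.removed = Instant.now();
            return true;
        }
        // Other states would require real cleanup on primary/secondary
        // storage, which this sketch does not model.
        return false;
    }
}
```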
[jira] [Created] (CLOUDSTACK-6038) systemvm template for HyperV with jre7
Rajesh Battala created CLOUDSTACK-6038: -- Summary: systemvm template for HyperV with jre7 Key: CLOUDSTACK-6038 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6038 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: SystemVM Affects Versions: 4.4.0 Reporter: Rajesh Battala Assignee: Rajesh Battala Fix For: 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Created] (CLOUDSTACK-6039) systemvm template for VMWare with jre7
Abhinandan Prateek created CLOUDSTACK-6039: -- Summary: systemvm template for VMWare with jre7 Key: CLOUDSTACK-6039 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6039 Project: CloudStack Issue Type: Bug Security Level: Public (Anyone can view this level - this is the default.) Components: SystemVM Affects Versions: 4.4.0 Reporter: Rajesh Battala Assignee: Rajesh Battala Fix For: 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Updated] (CLOUDSTACK-6039) systemvm template for VMWare with jre7
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Abhinandan Prateek updated CLOUDSTACK-6039: --- Assignee: Sateesh Chodapuneedi (was: Rajesh Battala) systemvm template for VMWare with jre7 -- Key: CLOUDSTACK-6039 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6039 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: SystemVM Affects Versions: 4.4.0 Reporter: Rajesh Battala Assignee: Sateesh Chodapuneedi Labels: hyperv Fix For: 4.4.0 -- This message was sent by Atlassian JIRA (v6.1.5#6160)
[jira] [Commented] (CLOUDSTACK-6030) smb user password is stored in clear text in the db
[ https://issues.apache.org/jira/browse/CLOUDSTACK-6030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13893123#comment-13893123 ] ASF subversion and git services commented on CLOUDSTACK-6030: - Commit 96d8e3c945b24af4d27429c4bd62b459e807e2a1 in branch refs/heads/4.3-forward from [~devdeep] [ https://git-wip-us.apache.org/repos/asf?p=cloudstack.git;h=96d8e3c ] CLOUDSTACK-6030: Encrypt the primary and secondary smb storage password when it is stored in the db. smb user password is stored in clear text in the db --- Key: CLOUDSTACK-6030 URL: https://issues.apache.org/jira/browse/CLOUDSTACK-6030 Project: CloudStack Issue Type: Bug Security Level: Public(Anyone can view this level - this is the default.) Components: Management Server Affects Versions: 4.3.0 Reporter: Devdeep Singh Assignee: Devdeep Singh Priority: Critical When a smb primary or secondary storage is added to a setup, the user password gets stored in clear text in the db. It should be encrypted and stored. -- This message was sent by Atlassian JIRA (v6.1.5#6160)
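The fix referenced in the commit encrypts the SMB password before it is persisted, using CloudStack's own DB encryption utilities. As a standalone illustration of the same "encrypt before you persist" idea using only the JDK, one could do something like the following; this is a sketch, not the actual CloudStack code, and the algorithm choice and key-derivation parameters are assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Standalone sketch: derive an AES key from a server-side passphrase and
// store base64(iv || ciphertext) instead of the clear-text SMB password.
public class SmbSecretSketch {
    static SecretKeySpec deriveKey(char[] passphrase, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 10_000, 128);
        byte[] raw = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
        return new SecretKeySpec(raw, "AES");
    }

    // Returns base64(iv || ciphertext); persist this value in the db.
    static String encrypt(String plain, SecretKeySpec key) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv); // fresh IV per stored secret
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);
    }

    static String decrypt(String stored, SecretKeySpec key) throws Exception {
        byte[] in = Base64.getDecoder().decode(stored);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
                new GCMParameterSpec(128, Arrays.copyOfRange(in, 0, 12)));
        return new String(c.doFinal(in, 12, in.length - 12), StandardCharsets.UTF_8);
    }
}
```

The point of the sketch is only the shape of the fix: the value written to the storage-pool row is the encrypted form, and the plaintext exists transiently in memory when the pool is mounted.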