[
https://issues.apache.org/jira/browse/CLOUDSTACK-9224?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Mike Tutkowski updated CLOUDSTACK-9224:
---------------------------------------
Description:
From an e-mail I sent to dev@ on January 8, 2016:
Hi,
Has anyone noticed the following issue?
I create a zone where I use local storage to house my system VMs.
After the system VMs are all started, I sometimes (but rarely) notice that the
local SR that was used to retrieve the system template remains (as opposed to
going away once the system VMs have been kicked off).
It doesn't seem to be a big deal until I happen to restart my management
server.
At that point, CloudStack sees this local storage (which should have gone away,
but didn't) as new local primary storage and adds it as such.
If I go to Infrastructure and Primary Storage, I can see I now have a second
local primary storage for one of my XenServer hosts.
Has anyone seen this issue before?
What I've done at this point is simply remove the new primary storage from the
DB, but I can't (easily) seem to get rid of the extraneous SR.
Thanks!
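For anyone hitting the same thing, here is a rough manual cleanup sketch (not a
verified fix: the UUIDs are placeholders, and the database step assumes the
default "cloud" database with its storage_pool table, so verify against your
own deployment before deleting anything):
# On the affected XenServer host: find the extraneous local SR and its PBD(s)
xe sr-list params=uuid,name-label,type
xe pbd-list sr-uuid=<UUID of extraneous SR>
# Unplug each PBD, then forget the SR so the host no longer exposes it
xe pbd-unplug uuid=<UUID of PBD>
xe sr-forget uuid=<UUID of extraneous SR>
# On the management server: look for the duplicate local pool row CloudStack added
mysql -u cloud -p cloud -e "SELECT id, name, uuid, pool_type, host_address, path, removed FROM storage_pool;"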
Update (on April 18, 2016):
Adrian Sender <[email protected]>
Sat 4/16/2016 4:43 AM
To:
[email protected];
Hi Mike,
I have observed this behavior mostly on CCP 4.3.x with XenServer 6.5, and less
so in 4.5.1. I use Fibre Channel LVMoHBA as the primary storage.
Seems like the same issue.
Disk attached to dom0 after a snapshot or a copy from secondary to primary storage:
In this example we have a disk attached to dom0; we cannot delete the disk
until we detach it.
admin.rc.precise 0 | Created by template provisioner | 42 GB | Control domain on
host cpms1-1.nsp.testlabs.com.au
[root@cpms1-1 ~]# xe vdi-list name-label="admin.rc.precise 0"
uuid ( RO) : 3d79722b-294d-4358-bc57-af92b9e9dda7
name-label ( RW): admin.rc.precise 0
name-description ( RW): Created by template provisioner
sr-uuid ( RO): dce1ec02-cce0-347d-0679-f39c9ea64da1
virtual-size ( RO): 45097156608
sharable ( RO): false
read-only ( RO): false
You will want to list out the VBD (connector object between VM and VDI) based
on the VDI UUID. Here is an example:
[root@cpms1-1 ~]# xe vbd-list vdi-uuid=3d79722b-294d-4358-bc57-af92b9e9dda7
uuid ( RO) : d9e2d89e-a82f-9e6e-c97a-afe0af47468e
vm-uuid ( RO): 0f4cb186-0167-47d6-afb5-89b00102250b
vm-name-label ( RO): Control domain on host: cpms1-1.nsp.nectar.org.au
vdi-uuid ( RO): 3d79722b-294d-4358-bc57-af92b9e9dda7
empty ( RO): false
device ( RO):
Once done, you want to first try to make the VBD inactive (it may already be
inactive, in which case the command reports "The device is not currently attached"):
xe vbd-unplug uuid=d9e2d89e-a82f-9e6e-c97a-afe0af47468e
Once that is done, you can break the connection:
xe vbd-destroy uuid=<UUID of VBD>
Now you can delete the disk from XenCenter.
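The same steps can also be strung together from the command line instead of
finishing in XenCenter. A rough sketch, assuming exactly one VDI matches the
name-label (the name shown is just the example from above):
# Detach and destroy a leftover VDI that is attached to dom0, by name-label
VDI=$(xe vdi-list name-label="admin.rc.precise 0" --minimal)
for VBD in $(xe vbd-list vdi-uuid="$VDI" --minimal | tr ',' ' '); do
    xe vbd-unplug uuid="$VBD" || true   # may report "The device is not currently attached"
    xe vbd-destroy uuid="$VBD"
done
# Equivalent of deleting the disk in XenCenter
xe vdi-destroy uuid="$VDI"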
Regards,
Adrian Sender
Tutkowski, Mike
Wed 4/13/2016 5:10 PM
To:
[email protected];
Hi,
Has anyone recently observed the following behavior:
http://imgur.com/8ALJmWb
As you can see in the image, I have three 6.5 XenServer hosts in a resource
pool.
I just used them to create a basic zone and the system VMs were deployed
just fine. However, there are still SRs pointing to secondary storage on my
XenServer-6.5-1 and XenServer-6.5-3 hosts (there used to be one on my
XenServer-6.5-2 host, but it went away once the system VMs started up on that
host).
Thoughts?
Thanks,
Mike
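A quick way to see which SRs are the leftover secondary-storage ones from the
CLI (a sketch; it assumes they show up as plain NFS SRs, and the UUID is a
placeholder):
# List NFS SRs, then check which hosts still have a given SR plugged and what it points at
xe sr-list type=nfs params=uuid,name-label
xe pbd-list sr-uuid=<UUID of leftover SR> params=uuid,host-uuid,device-config,currently-attached
# If device-config points at the secondary storage server, the same pbd-unplug /
# sr-forget sequence sketched earlier removes the SR from the affected hosts.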
> XenServer local storage added multiple times
> --------------------------------------------
>
> Key: CLOUDSTACK-9224
> URL: https://issues.apache.org/jira/browse/CLOUDSTACK-9224
> Project: CloudStack
> Issue Type: Bug
> Security Level: Public (Anyone can view this level - this is the default.)
> Components: XenServer
> Affects Versions: 4.6.0, 4.6.1, 4.6.2, 4.7.0, 4.7.1, 4.7.2, 4.8.0
> Environment: XenServer 6.5
> Ubuntu 12.04
> Reporter: Mike Tutkowski
> Labels: xenserver
> Fix For: Future
>
>
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)