I'm trying to attach my diff to https://reviews.apache.org/r/13865/, but I
don't see the necessary buttons.

I wonder if I need to get edit access back again? We had trouble with the
Wiki. Was this also impacted?


On Wed, Oct 23, 2013 at 10:47 PM, Mike Tutkowski <mike.tutkow...@solidfire.com> wrote:

> Sure, I can create a diff file and attach it to Review Board.
>
>
> On Wed, Oct 23, 2013 at 10:40 PM, Marcus Sorensen <shadow...@gmail.com> wrote:
>
>> Sure. The majority of it only affects people who are on your storage
>> anyway. Perhaps you can post a patch and I can run it through the
>> simulator to verify that the minor change to the existing code hasn't
>> broken the standard storages. I don't think it has, since I've
>> thoroughly tested the code I posted, but I know there were some
>> additional changes.
>>
>> On Wed, Oct 23, 2013 at 10:35 PM, Mike Tutkowski
>> <mike.tutkow...@solidfire.com> wrote:
>> > OK, Marcus, I made the change to detect my volumes and it seems to
>> > work just fine.
>> >
>> > Perhaps another day of testing and we can check this code in. What do
>> > you think?
>> >
>> >
>> > On Wed, Oct 23, 2013 at 9:14 PM, Mike Tutkowski
>> > <mike.tutkow...@solidfire.com> wrote:
>> >>
>> >> Thanks, Marcus...I hadn't read that note, but that makes sense.
>> >>
>> >> Yes, that must be the root disk for the VM. I can put in code, as you
>> >> recommend, to handle only my volumes.
>> >>
>> >>
>> >> On Wed, Oct 23, 2013 at 5:37 PM, Marcus Sorensen <shadow...@gmail.com>
>> >> wrote:
>> >>>
>> >>> It should be sending the path info for each disk per the XML of the
>> >>> VM... so it will send all disks regardless of whether or not your
>> >>> adaptor manages that disk, and it's up to your adaptor to ignore any
>> >>> that aren't managed by it. There should be notes to that effect in the
>> >>> code near the disconnectPhysicalDisk interface in StorageAdaptor:
>> >>>
>> >>>     // given local path to file/device (per Libvirt XML),
>> >>>     // 1) check that device is handled by your adaptor, return false if not.
>> >>>     // 2) clean up device, return true
>> >>>     public boolean disconnectPhysicalDiskByPath(String localPath);
>> >>>
>> >>> Since we only have XML disk definitions when we stop or migrate a VM,
>> >>> we have to try all adaptors against all defined disks. So in your
>> >>> disconnectPhysicalDisk you might do something like check that the path
>> >>> starts with '/dev/disk/by-path' and contains 'iscsi-iqn' (maybe
>> >>> there's some way that's more robust, like checking the full path
>> >>> against a LUN listing or something). If it doesn't match, then your
>> >>> disconnectPhysicalDisk just does nothing.
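A minimal sketch of the check Marcus describes (the class name, prefix, and substring are illustrative assumptions drawn from the example paths in this thread, not the actual SolidFire adaptor code):

```java
// Hypothetical sketch: only claim disks whose path looks like an iSCSI
// by-path device; everything else is left for other adaptors to handle.
public class IscsiPathCheck {

    // True only when the path matches the iSCSI by-path pattern this
    // adaptor manages (values assumed from the examples in the thread).
    public static boolean isManagedPath(String localPath) {
        return localPath != null
                && localPath.startsWith("/dev/disk/by-path")
                && localPath.contains("iscsi-iqn");
    }

    // Sketch of disconnectPhysicalDiskByPath: return false for paths we
    // don't manage, otherwise clean up the device and return true.
    public static boolean disconnectPhysicalDiskByPath(String localPath) {
        if (!isManagedPath(localPath)) {
            return false; // not ours; another adaptor may handle it
        }
        // ... perform iSCSI logout / device cleanup here ...
        return true;
    }
}
```

With this shape, a local-file path such as /var/lib/libvirt/images/... is simply ignored, while the /dev/disk/by-path iSCSI path is claimed.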
>> >>>
>> >>> I assume this is a root disk or some other local storage disk. If it's
>> >>> not, then your VM XML is messed up somehow.
>> >>>
>> >>> On Wed, Oct 23, 2013 at 5:01 PM, Mike Tutkowski
>> >>> <mike.tutkow...@solidfire.com> wrote:
>> >>> > I found the problem.
>> >>> >
>> >>> > disconnectPhysicalDiskByPath is being passed in (in my situation)
>> >>> > the following:
>> >>> >
>> >>> > /var/lib/libvirt/images/9887d511-8dc7-4cb4-96f9-01230fe4bbb6
>> >>> >
>> >>> > Due to the name of the method, my code was expecting data such as
>> >>> > the following:
>> >>> >
>> >>> > /dev/disk/by-path/ip-192.168.233.10:3260-iscsi-iqn.2012-03.com.solidfire:volume1-lun-0
>> >>> >
>> >>> > Was it intentional to send the data into this method in the current
>> >>> > way?
>> >>> >
>> >>> >
>> >>> > On Wed, Oct 23, 2013 at 1:57 PM, Mike Tutkowski
>> >>> > <mike.tutkow...@solidfire.com> wrote:
>> >>> >>
>> >>> >> You know, I forgot we were supposed to be doing that! :)
>> >>> >> Multi-tasking too much today, I guess.
>> >>> >>
>> >>> >> Anyways, it must not be working because I still had a hypervisor
>> >>> >> connection after I shut down the VM.
>> >>> >>
>> >>> >> Let me investigate.
>> >>> >>
>> >>> >>
>> >>> >> On Wed, Oct 23, 2013 at 1:48 PM, Marcus Sorensen
>> >>> >> <shadow...@gmail.com> wrote:
>> >>> >>>
>> >>> >>> Are we not disconnecting when we stop the VM? There's a method
>> >>> >>> for it (disconnectPhysicalDiskViaVmSpec), so we should be.
>> >>> >>>
>> >>> >>> On Oct 23, 2013 1:28 PM, "Mike Tutkowski"
>> >>> >>> <mike.tutkow...@solidfire.com>
>> >>> >>> wrote:
>> >>> >>>>
>> >>> >>>> I see one problem for us now, Marcus.
>> >>> >>>>
>> >>> >>>> * You have a running VM that you attach a volume to.
>> >>> >>>> * You stop the VM.
>> >>> >>>> * You detach the volume.
>> >>> >>>> * You start up the VM.
>> >>> >>>>
>> >>> >>>> The VM will not be connected to the volume (which is good), but
>> >>> >>>> the hypervisor will still be connected to the volume.
>> >>> >>>>
>> >>> >>>> It would be great if we actually sent a command to the last host
>> >>> >>>> ID of the stopped VM when detaching a volume (to have the
>> >>> >>>> hypervisor disconnect from the volume).
>> >>> >>>>
>> >>> >>>> What do you think about that?
>> >>> >>>>
>> >>> >>>>
>> >>> >>>> On Wed, Oct 23, 2013 at 1:15 PM, Mike Tutkowski
>> >>> >>>> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>
>> >>> >>>>> OK, whatever way you prefer then, Marcus (createVbd first or
>> >>> >>>>> second).
>> >>> >>>>>
>> >>> >>>>> If I leave createVbd first and return 0, it does seem to work.
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>> On Wed, Oct 23, 2013 at 11:13 AM, Marcus Sorensen
>> >>> >>>>> <shadow...@gmail.com>
>> >>> >>>>> wrote:
>> >>> >>>>>>
>> >>> >>>>>> I think we could flip-flop these two lines if necessary:
>> >>> >>>>>>
>> >>> >>>>>>             createVbd(conn, vmSpec, vmName, vm);
>> >>> >>>>>>             _storagePoolMgr.connectPhysicalDisksViaVmSpec(vmSpec);
>> >>> >>>>>>
>> >>> >>>>>> I haven't actually tried it, though. But in general I don't see
>> >>> >>>>>> the Libvirt DiskDef using size at all, which is what createVbd
>> >>> >>>>>> does (creates XML definitions for disks to attach to the VM
>> >>> >>>>>> definition). It just takes the device at its native advertised
>> >>> >>>>>> size when it actually goes to use it.
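The suggested reordering can be sketched with stand-in methods; the real calls live in the KVM agent's VM start path, and these stubs only record call order to illustrate the flipped sequence (connect the physical disks before building the disk XML):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stubs only: the real createVbd and
// connectPhysicalDisksViaVmSpec do actual work; here they just record
// the order they were invoked in.
public class StartOrderSketch {
    static final List<String> calls = new ArrayList<>();

    static void connectPhysicalDisksViaVmSpec() { calls.add("connect"); }
    static void createVbd() { calls.add("createVbd"); }

    // Proposed order: connect first, so getPhysicalDisk sees a real
    // device path (and size) by the time createVbd builds the disk XML.
    public static List<String> startVm() {
        calls.clear();
        connectPhysicalDisksViaVmSpec();
        createVbd();
        return calls;
    }
}
```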
>> >>> >>>>>>
>> >>> >>>>>> On Wed, Oct 23, 2013 at 10:52 AM, Mike Tutkowski
>> >>> >>>>>> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> > Little problem that I wanted to get your take on, Marcus.
>> >>> >>>>>> >
>> >>> >>>>>> > When a VM is being started, we call createVbd before calling
>> >>> >>>>>> > connectPhysicalDisksViaVmSpec.
>> >>> >>>>>> >
>> >>> >>>>>> > The problem is that createVbd calls getPhysicalDisk, and my
>> >>> >>>>>> > volume has not yet been connected because
>> >>> >>>>>> > connectPhysicalDisksViaVmSpec has not yet been called.
>> >>> >>>>>> >
>> >>> >>>>>> > When I try to read up the size of the disk to populate a
>> >>> >>>>>> > PhysicalDisk, I get an error, of course, because the path
>> >>> >>>>>> > does not yet exist.
>> >>> >>>>>> >
>> >>> >>>>>> > I could populate a 0 for the size of the physical disk, and
>> >>> >>>>>> > then the next time getPhysicalDisk is called, it should be
>> >>> >>>>>> > filled in with a proper size.
>> >>> >>>>>> >
>> >>> >>>>>> > Do you see a problem with that approach?
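The placeholder-size idea could look roughly like this; the class and method names are hypothetical, and only the 0-when-absent behavior is the point:

```java
import java.io.File;

// Hypothetical sketch: report size 0 for a disk whose device path does
// not exist yet (i.e. before connectPhysicalDisksViaVmSpec has run),
// and the real size once the device is present.
public class PlaceholderSize {
    public static long getDiskSize(String path) {
        File device = new File(path);
        if (!device.exists()) {
            // Not connected yet; createVbd only needs the XML definition,
            // which (per Marcus) does not use the size, so 0 is safe here.
            return 0L;
        }
        return device.length();
    }
}
```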
>> >>> >>>>>> >
>> >>> >>>>>> >
>> >>> >>>>>> > On Tue, Oct 22, 2013 at 6:40 PM, Marcus Sorensen
>> >>> >>>>>> > <shadow...@gmail.com>
>> >>> >>>>>> > wrote:
>> >>> >>>>>> >>
>> >>> >>>>>> >> That's right. All should be well.
>> >>> >>>>>> >>
>> >>> >>>>>> >> On Oct 22, 2013 6:03 PM, "Mike Tutkowski"
>> >>> >>>>>> >> <mike.tutkow...@solidfire.com>
>> >>> >>>>>> >> wrote:
>> >>> >>>>>> >>>
>> >>> >>>>>> >>> Looks like we disconnect physical disks when the VM is
>> >>> >>>>>> >>> stopped.
>> >>> >>>>>> >>>
>> >>> >>>>>> >>> I didn't see that before.
>> >>> >>>>>> >>>
>> >>> >>>>>> >>> I suppose that means the disks are physically disconnected
>> >>> >>>>>> >>> when the VM is stopped, but the CloudStack DB still has the
>> >>> >>>>>> >>> VM associated with the disks for the next time the VM may
>> >>> >>>>>> >>> be started up (unless someone does a disconnect while the
>> >>> >>>>>> >>> VM is in the Stopped State).
>> >>> >>>>>> >>>
>> >>> >>>>>> >>>
>> >>> >>>>>> >>> On Tue, Oct 22, 2013 at 4:19 PM, Mike Tutkowski
>> >>> >>>>>> >>> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> Hey Marcus,
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> Quick question for you related to attaching/detaching
>> >>> >>>>>> >>>> volumes when the VM is in the Stopped State.
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> If I detach a volume from a VM that is in the Stopped
>> >>> >>>>>> >>>> State, the DB seems to get updated, but I don't see a
>> >>> >>>>>> >>>> command going to the KVM hypervisor that leads to the
>> >>> >>>>>> >>>> removal of the iSCSI target.
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> It seems the iSCSI target is only removed the next time
>> >>> >>>>>> >>>> the VM is started.
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> Do you know if this is true?
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> If it is, I'm concerned that the volume could be attached
>> >>> >>>>>> >>>> to another VM before the stopped VM is restarted, and that
>> >>> >>>>>> >>>> when the stopped VM is restarted, it would disconnect the
>> >>> >>>>>> >>>> iSCSI volume out from under the VM that now has the volume
>> >>> >>>>>> >>>> attached.
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> I still want to perform some tests on this, but am first
>> >>> >>>>>> >>>> trying to get a VM to start after I've attached a volume
>> >>> >>>>>> >>>> to it while it was in the Stopped State.
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> Thanks,
>> >>> >>>>>> >>>> Mike
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>> On Mon, Oct 21, 2013 at 10:57 PM, Mike Tutkowski
>> >>> >>>>>> >>>> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> Thanks for that info, Marcus.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> By the way, I wanted to see if I could attach my volume
>> >>> >>>>>> >>>>> to a VM in the Stopped State.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> The attach logic didn't trigger any exceptions; however,
>> >>> >>>>>> >>>>> when I started the VM, I received an Insufficient
>> >>> >>>>>> >>>>> Capacity exception.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> If I detach the volume and then start the VM, the VM
>> >>> >>>>>> >>>>> starts just fine.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> I noticed a problem here (in StoragePoolHostDaoImpl):
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>     @Override
>> >>> >>>>>> >>>>>     public StoragePoolHostVO findByPoolHost(long poolId, long hostId) {
>> >>> >>>>>> >>>>>         SearchCriteria<StoragePoolHostVO> sc = PoolHostSearch.create();
>> >>> >>>>>> >>>>>         sc.setParameters("pool_id", poolId);
>> >>> >>>>>> >>>>>         sc.setParameters("host_id", hostId);
>> >>> >>>>>> >>>>>         return findOneIncludingRemovedBy(sc);
>> >>> >>>>>> >>>>>     }
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> The findOneIncludingRemovedBy method returns null (the
>> >>> >>>>>> >>>>> poolId is my storage pool's ID and the hostId is the
>> >>> >>>>>> >>>>> expected host ID).
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> I'm not sure what this method is trying to do.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> I looked in the storage_pool_host_ref table (is that the
>> >>> >>>>>> >>>>> correct table?), and it only has one row, which maps the
>> >>> >>>>>> >>>>> local storage pool of the KVM host to the KVM host (which
>> >>> >>>>>> >>>>> explains why no match is found in my situation).
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> Do you understand what this logic is trying to do?
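To illustrate why the lookup comes back null, here is a toy stand-in for the DAO; the real code queries the storage_pool_host_ref table, and the single entry and both IDs below are made up to mirror the one row described above:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for StoragePoolHostDaoImpl.findByPoolHost: a map keyed by
// "poolId:hostId" plays the role of storage_pool_host_ref. The single
// entry mirrors the one row described in the thread (the KVM host's
// local storage pool mapped to the KVM host); the IDs are invented.
public class PoolHostLookup {
    static final Map<String, String> storagePoolHostRef = new HashMap<>();
    static {
        storagePoolHostRef.put("1:1", "local-storage-on-kvm-host");
    }

    // Returns null when no row maps the given pool to the given host,
    // which is the situation described for the SolidFire pool: no
    // storage_pool_host_ref row exists for it, so the lookup finds nothing.
    public static String findByPoolHost(long poolId, long hostId) {
        return storagePoolHostRef.get(poolId + ":" + hostId);
    }
}
```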
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> Thanks!
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> On Mon, Oct 21, 2013 at 8:08 PM, Marcus Sorensen
>> >>> >>>>>> >>>>> <shadow...@gmail.com>
>> >>> >>>>>> >>>>> wrote:
>> >>> >>>>>> >>>>>>
>> >>> >>>>>> >>>>>> Do you have the capability to clone the root disk?
>> >>> >>>>>> >>>>>> Normally the template is installed to primary storage,
>> >>> >>>>>> >>>>>> and then cloned for each root disk. In some cases (such
>> >>> >>>>>> >>>>>> as CLVM), this isn't efficient, and so the template is
>> >>> >>>>>> >>>>>> copied fresh to populate each root disk.
>> >>> >>>>>> >>>>>>
>> >>> >>>>>> >>>>>> I'm actually not 100% sure how this works in the new
>> >>> >>>>>> >>>>>> code. It used to be handled by copyPhysicalDisk in the
>> >>> >>>>>> >>>>>> storage adaptor, called by copyTemplateToPrimaryStorage,
>> >>> >>>>>> >>>>>> which runs on the agent. It would pass template/secondary
>> >>> >>>>>> >>>>>> storage info and the destination volume/primary storage
>> >>> >>>>>> >>>>>> info, and copyPhysicalDisk would do the work of
>> >>> >>>>>> >>>>>> installing the image to the destination. Then subsequent
>> >>> >>>>>> >>>>>> root disks would be cloned in CreateCommand by calling
>> >>> >>>>>> >>>>>> createDiskFromTemplate.
>> >>> >>>>>> >>>>>>
>> >>> >>>>>> >>>>>> In master it looks like this was moved to
>> >>> >>>>>> >>>>>> KVMStorageProcessor's cloneVolumeFromBaseTemplate,
>> >>> >>>>>> >>>>>> although I think this just takes over as the default,
>> >>> >>>>>> >>>>>> and there's something in your storage driver that should
>> >>> >>>>>> >>>>>> be capable of cloning templates on the mgmt server side.
>> >>> >>>>>> >>>>>> I'm less sure about how the template gets to primary
>> >>> >>>>>> >>>>>> storage in the first place; I assume
>> >>> >>>>>> >>>>>> copyTemplateToPrimaryStorage in KVMStorageProcessor
>> >>> >>>>>> >>>>>> calls copyPhysicalDisk in your adaptor. It's a bit tough
>> >>> >>>>>> >>>>>> for me to tell, since our earlier storage adaptor did
>> >>> >>>>>> >>>>>> everything on the host, so it mostly just worked with
>> >>> >>>>>> >>>>>> the default stuff.
>> >>> >>>>>> >>>>>>
>> >>> >>>>>> >>>>>> On Mon, Oct 21, 2013 at 4:49 PM, Mike Tutkowski
>> >>> >>>>>> >>>>>> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> >>>>>> > Hey Marcus,
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> > So...now that this works well for data disks, I was
>> >>> >>>>>> >>>>>> > wondering what might be involved in getting this
>> >>> >>>>>> >>>>>> > process to work for root disks.
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> > Can you point me in the right direction as far as what
>> >>> >>>>>> >>>>>> > gets invoked when a VM is being created on KVM (so that
>> >>> >>>>>> >>>>>> > its root disk can be created and the necessary template
>> >>> >>>>>> >>>>>> > laid down or ISO installed)?
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> > Thanks!
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> > On Mon, Oct 21, 2013 at 1:14 PM, Mike Tutkowski
>> >>> >>>>>> >>>>>> > <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >> Hey Marcus,
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >> Just wanted to let you know the branch of mine that has
>> >>> >>>>>> >> your code and mine appears to work well with regard to
>> >>> >>>>>> >> attaching a data disk to a running VM:
>> >>> >>>>>> >>
>> >>> >>>>>> >> fdisk -l from the hypervisor:
>> >>> >>>>>> >>
>> >>> >>>>>> >> http://i.imgur.com/NkP5fo0.png
>> >>> >>>>>> >>
>> >>> >>>>>> >> fdisk -l from within the VM:
>> >>> >>>>>> >>
>> >>> >>>>>> >> http://i.imgur.com/8YwiiC7.png
>> >>> >>>>>> >>
>> >>> >>>>>> >> I plan to do more testing on this over the coming days.
>> >>> >>>>>> >>
>> >>> >>>>>> >> If all goes well, perhaps we can check this code in by the
>> >>> >>>>>> >> end of the week?
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >> Talk to you later,
>> >>> >>>>>> >>>>>> >> Mike
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >> On Sun, Oct 20, 2013 at 10:23 PM, Mike Tutkowski
>> >>> >>>>>> >>>>>> >> <mike.tutkow...@solidfire.com> wrote:
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>> Don't ask me, but it works now (I've been having this
>> >>> >>>>>> >>> trouble for quite a while today).
>> >>> >>>>>> >>>
>> >>> >>>>>> >>> I guess the trick is to send you an e-mail. :)
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>> On Sun, Oct 20, 2013 at 10:05 PM, Marcus Sorensen
>> >>> >>>>>> >>>>>> >>> <shadow...@gmail.com>
>> >>> >>>>>> >>>>>> >>> wrote:
>> >>> >>>>>> >>>>>> >>>>
>> >>> >>>>>> >>>> Did you create a service offering that uses local
>> >>> >>>>>> >>>> storage, or add shared primary storage? By default there
>> >>> >>>>>> >>>> is no storage that matches the built-in offerings.
>> >>> >>>>>> >>>>>> >>>>
>> >>> >>>>>> >>>>>> >>>> On Oct 20, 2013 9:39 PM, "Mike Tutkowski"
>> >>> >>>>>> >>>>>> >>>> <mike.tutkow...@solidfire.com>
>> >>> >>>>>> >>>>>> >>>> wrote:
>> >>> >>>>>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>> >>>>> Hey Marcus,
>> >>> >>>>>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> So, I went back to the branch of mine that has your code
>> >>> >>>>>> >>>>> and mine and was able to create a new CloudStack install
>> >>> >>>>>> >>>>> from scratch with it (once again, after manually deleting
>> >>> >>>>>> >>>>> what was in /var/lib/libvirt/images to get the system VMs
>> >>> >>>>>> >>>>> to start).
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> Anyways, my system VMs are running now, and I tried to
>> >>> >>>>>> >>>>> kick off a VM using the CentOS 6.3 image you provided me
>> >>> >>>>>> >>>>> a while back.
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> The virtual router has a Status of Running; however, my
>> >>> >>>>>> >>>>> VM fails to start (with the generic message of
>> >>> >>>>>> >>>>> Insufficient Capacity).
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> I've not seen this exception before (related to the VR).
>> >>> >>>>>> >>>>> Do you have any insight into this?:
>> >>> >>>>>> >>>>>> >>>>>
>> >>> >>>>>> >>>>> com.cloud.exception.ResourceUnavailableException: Resource [Pod:1] is
>> >>> >>>>>> >>>>> unreachable: Unable to apply userdata and password entry on router
>> >>> >>>>>> >>>>> at com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyRules(VirtualNetworkApplianceManagerImpl.java:3793)
>> >>> >>>>>> >>>>> at com.cloud.network.router.VirtualNetworkApplianceManagerImpl.applyUserData(VirtualNetworkApplianceManagerImpl.java:3017)
>> >>> >>>>>> >>>>> at com.cloud.network.element.VirtualRouterElement.addPasswordAndUserdata(VirtualRouterElement.java:933)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepareElement(NetworkOrchestrator.java:1172)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepareNic(NetworkOrchestrator.java:1288)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.engine.orchestration.NetworkOrchestrator.prepare(NetworkOrchestrator.java:1224)
>> >>> >>>>>> >>>>> at com.cloud.vm.VirtualMachineManagerImpl.advanceStart(VirtualMachineManagerImpl.java:826)
>> >>> >>>>>> >>>>> at com.cloud.vm.VirtualMachineManagerImpl.start(VirtualMachineManagerImpl.java:508)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.engine.cloud.entity.api.VMEntityManagerImpl.deployVirtualMachine(VMEntityManagerImpl.java:227)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.engine.cloud.entity.api.VirtualMachineEntityImpl.deploy(VirtualMachineEntityImpl.java:209)
>> >>> >>>>>> >>>>> at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:3338)
>> >>> >>>>>> >>>>> at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2919)
>> >>> >>>>>> >>>>> at com.cloud.vm.UserVmManagerImpl.startVirtualMachine(UserVmManagerImpl.java:2905)
>> >>> >>>>>> >>>>> at com.cloud.utils.component.ComponentInstantiationPostProcessor$InterceptorDispatcher.intercept(ComponentInstantiationPostProcessor.java:125)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.api.command.user.vm.DeployVMCmd.execute(DeployVMCmd.java:421)
>> >>> >>>>>> >>>>> at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:161)
>> >>> >>>>>> >>>>> at com.cloud.api.ApiAsyncJobDispatcher.runJobInContext(ApiAsyncJobDispatcher.java:109)
>> >>> >>>>>> >>>>> at com.cloud.api.ApiAsyncJobDispatcher$1.run(ApiAsyncJobDispatcher.java:66)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>> >>> >>>>>> >>>>> at com.cloud.api.ApiAsyncJobDispatcher.runJob(ApiAsyncJobDispatcher.java:63)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.framework.jobs.impl.AsyncJobManagerImpl$1.runInContext(AsyncJobManagerImpl.java:532)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.ManagedContextRunnable$1.run(ManagedContextRunnable.java:49)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext$1.call(DefaultManagedContext.java:56)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.callWithContext(DefaultManagedContext.java:103)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.impl.DefaultManagedContext.runWithContext(DefaultManagedContext.java:53)
>> >>> >>>>>> >>>>> at org.apache.cloudstack.managed.context.ManagedContextRunnable.run(ManagedContextRunnable.java:46)
>> >>> >>>>>> >>>>> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> >>> >>>>>> >>>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>> >>> >>>>>> >>>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>> >>> >>>>>> >>>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>> >>> >>>>>> >>>>> at java.lang.Thread.run(Thread.java:724)
>> >>> >>>>>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>> >>>>> Thanks!
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>>
>> >>> >>>>>> >>>>>> >>> --
>> >>> >>>>>> >>>>>> >>> Mike Tutkowski
>> >>> >>>>>> >>>>>> >>> Senior CloudStack Developer, SolidFire Inc.
>> >>> >>>>>> >>>>>> >>> e: mike.tutkow...@solidfire.com
>> >>> >>>>>> >>>>>> >>> o: 303.746.7302
>> >>> >>>>>> >>>>>> >>> Advancing the way the world uses the cloud™
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >>
>> >>> >>>>>> >>>>>> >> --
>> >>> >>>>>> >>>>>> >> Mike Tutkowski
>> >>> >>>>>> >>>>>> >> Senior CloudStack Developer, SolidFire Inc.
>> >>> >>>>>> >>>>>> >> e: mike.tutkow...@solidfire.com
>> >>> >>>>>> >>>>>> >> o: 303.746.7302
>> >>> >>>>>> >>>>>> >> Advancing the way the world uses the cloud™
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> >
>> >>> >>>>>> >>>>>> > --
>> >>> >>>>>> >>>>>> > Mike Tutkowski
>> >>> >>>>>> >>>>>> > Senior CloudStack Developer, SolidFire Inc.
>> >>> >>>>>> >>>>>> > e: mike.tutkow...@solidfire.com
>> >>> >>>>>> >>>>>> > o: 303.746.7302
>> >>> >>>>>> >>>>>> > Advancing the way the world uses the cloud™
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>>
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>>
>> >>> >>>>>> >>>
>> >>> >>>>>> >>>
>> >>> >>>>>> >>>
>> >>> >>>>>> >>>
>> >>> >>>>>> >
>> >>> >>>>>> >
>> >>> >>>>>> >
>> >>> >>>>>> >
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>>
>> >>> >>>>
>> >>> >>>>
>> >>> >>>>
>> >>> >>>>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >>
>> >>> >
>> >>> >
>> >>> >
>> >>> >
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> >
>> >
>>
>
>
>
>



