[Users] unsupported configuration: spice secure channels set in XML configuration, but TLS port is not provided.

2013-11-17 Thread Blaster
Hello,

I'm using oVirt 3.3 on Fedora 19.

I had quite a bit of trouble getting everything up and running (All In One).
My biggest problem was with vdsm: it crashed during the interface
configuration, so I followed the instructions at
http://www.ovirt.org/Installing_VDSM_from_rpm, which had me disable TLS. None
of that ever worked, so I ended up creating the bridge myself and running
engine-cleanup followed by engine-setup again.

Now when I run my VMs I get the following error:
unsupported configuration: spice secure channels set in XML configuration, but 
TLS port is not provided.

So something got messed up somewhere.

I can’t figure out where the XML files for each VM are stored.  

How can I resolve this error?  Google searches haven’t turned up anyone having 
this problem.
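
From what I can tell, the XML is not kept as a file at all; vdsm seems to
generate the libvirt domain XML when the VM starts. One way to see what
libvirt actually receives is a read-only connection from the host; a rough
sketch, assuming the libvirt Python bindings are installed there, checking
whether a tlsPort is being requested for the spice graphics device:

# Rough sketch: open a read-only libvirt connection on the host and print
# the <graphics> element of each domain, to see whether spice is being
# asked for a tlsPort. Assumes the libvirt Python bindings (libvirt-python)
# are installed on the host.
import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.openReadOnly('qemu:///system')
for dom in conn.listAllDomains(0):
    root = ET.fromstring(dom.XMLDesc(0))
    for gfx in root.findall('./devices/graphics'):
        print('%s: type=%s port=%s tlsPort=%s' % (
            dom.name(), gfx.get('type'), gfx.get('port'), gfx.get('tlsPort')))
conn.close()

The generated XML should also show up in /var/log/vdsm/vdsm.log when the VM
is started, so grepping that log for "graphics" is another option.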

Thanks for any help

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Ovirt 3.3 removing disk failure

2013-11-17 Thread Sergey Gotliv
Saša,

Please check whether this path exists on your host:
/rhev/data-center/mnt/*/2799e01b-6e6e-4f3b-8cfe-779928ae9941
If it does not, please try to access your GlusterFS volume over SSH. Does the
directory exist that way?
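
If it helps, here is a rough Python sketch of essentially the same check
that vdsm performs (the UUID is taken from your error; the mount root is
assumed to be the default /rhev/data-center/mnt):

# Rough sketch of the same check vdsm's getDomPath() does: glob for the
# storage domain UUID under the mount root. An empty result means the
# GlusterFS mount (or the domain directory inside it) is missing on this
# host.
import glob
import os

SD_UUID = '2799e01b-6e6e-4f3b-8cfe-779928ae9941'   # from the error message
MNT_ROOT = '/rhev/data-center/mnt'                  # assumed default location

paths = glob.glob(os.path.join(MNT_ROOT, '*', SD_UUID))
if not paths:
    print('no mount under %s contains %s' % (MNT_ROOT, SD_UUID))
for p in paths:
    print('%s (mount point present: %s)' %
          (p, os.path.ismount(os.path.dirname(p))))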

- Original Message -
> From: "Saša Friedrich" 
> To: users@ovirt.org
> Sent: Wednesday, November 13, 2013 12:10:58 PM
> Subject: Re: [Users] Ovirt 3.3 removing disk failure
> 
> Just for a test I enabled the "ovirt-updates-testing" and "ovirt-nightly" repos.
> I did a yum update and the error is still there.
> 
> I created a new virtual disk and then tried to delete it... same error!
> 
> Is there a way to remove the disk manually? I can mount the gluster volume and
> delete the disk. But what about the db in the engine? Which records should I
> remove (by hand)?
> 
> 
> tnx
> 
> 
> 
> On 12. 11. 2013 21:19, Saša Friedrich wrote:
> 
> 
> What I found so far...
> 
> Function returning error is getDomPath in
> "/usr/share/vdsm/storage/fileSD.py":
> 
> def getDomPath(sdUUID):
>     pattern = os.path.join(sd.StorageDomain.storage_repository,
>                            sd.DOMAIN_MNT_POINT, '*', sdUUID)
>     # Warning! You need a global proc pool big as the number of NFS domains.
>     domPaths = getProcPool().glob.glob(pattern)
>     if len(domPaths) == 0:
>         raise se.StorageDomainDoesNotExist(sdUUID)
>     elif len(domPaths) > 1:
>         raise se.StorageDomainLayoutError(sdUUID)
>     else:
>         return domPaths[0]
> 
> 
> When I click "remove disk" in the engine, the variable "pattern" is set to
> "/rhev/data-center/mnt/*/2799e01b-6e6e-4f3b-8cfe-779928ae9941", and
> "domPaths" is empty.
> 
> 
> 
> 
> On 12. 11. 2013 20:18, Saša Friedrich wrote:
> 
> 
> After I changed the log level of vdsm I found the error:
> 
> Thread-5180::ERROR::2013-11-12 19:44:21,433::task::850::TaskManager.Task::(_setError) Task=`42a933fa-97f1-4260-8bc5-86c057dc8184`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 857, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 1529, in deleteImage
>     dom.deleteImage(sdUUID, imgUUID, volsByImg)
>   File "/usr/share/vdsm/storage/fileSD.py", line 342, in deleteImage
>     currImgDir = getImagePath(sdUUID, imgUUID)
>   File "/usr/share/vdsm/storage/fileSD.py", line 97, in getImagePath
>     return os.path.join(getDomPath(sdUUID), 'images', imgUUID)
>   File "/usr/share/vdsm/storage/fileSD.py", line 89, in getDomPath
>     raise se.StorageDomainDoesNotExist(sdUUID)
> StorageDomainDoesNotExist: Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',)
> Thread-5180::ERROR::2013-11-12 19:44:21,435::dispatcher::67::Storage.Dispatcher.Protect::(run) {'status': {'message': "Storage domain does not exist: ('2799e01b-6e6e-4f3b-8cfe-779928ae9941',)", 'code': 358}}
> 
> 
> And this happens only when I try to delete a virtual disk. The VM itself
> works fine.
> 
> Any clue?
> 
> 
> tnx
> 
> 
> On 12. 11. 2013 15:54, Saša Friedrich wrote:
> 
> 
> When I try to remove a virtual disk from the oVirt engine I get the error "User
> admin@internal finished removing disk test_vm with storage failure in domain
> DATA_DOMAIN."
> 
> VM itself was running fine with no errors.
> 
> DATA_DOMAIN is GlusterFS replicated volume (on ovirt host).
> 
> ovirt engine comp (fc19)
> ovirt-engine.noarch 3.3.0.1-1.fc19
> 
> ovirt host (fc19)
> vdsm.x86_64 4.12.1-4.fc19
> vdsm-gluster.noarch 4.12.1-4.fc19
> glusterfs-server.x86_64 3.4.1-1.fc19
> 
> 
> tnx for help
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Info on snapshot removal operations and final disk format

2013-11-17 Thread Bob Doolittle

On 11/16/2013 7:46 AM, Gianluca Cecchi wrote:

Hello,
I'm on oVirt 3.2.3-1 on a Fedora 18 all-in-one test server.
I have a Windows XP VM that has a disk in qcow2 format and has a snapshot on it.
When I delete the snapshot I see, from the intercepted commands, that
the final effect is to have a raw disk (aka preallocated). Is this
correct and always true?


raw != preallocated. It can still be thin-provisioned. qcow2 is only
required for holding snapshots, since it can effectively represent a
delta from some base image, but raw has better performance, so it is the
best format to use when no snapshots are present.
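
To make the "delta" point concrete, here is a rough sketch (made-up file
names, and not the exact commands oVirt itself runs) of layering a qcow2
overlay on top of a raw base with qemu-img:

# Hypothetical illustration only: create a qcow2 overlay that records
# deltas against a raw base image. Assumes qemu-img is installed; the
# file names are made up.
import subprocess

base, overlay = 'base.raw', 'overlay.qcow2'

subprocess.check_call(['qemu-img', 'create', '-f', 'raw', base, '10G'])
subprocess.check_call(['qemu-img', 'create', '-f', 'qcow2',
                       '-o', 'backing_file=%s,backing_fmt=raw' % base,
                       overlay])
# "qemu-img info overlay.qcow2" now reports base.raw as its backing file;
# writes go to the overlay while the base stays untouched.
subprocess.check_call(['qemu-img', 'info', overlay])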

I believe the trick for thin-provisioning raw images is that Linux file 
systems (or the ones I'm aware of, anyway) support "sparse" files, which 
means a block containing all zeros does not require backing storage, so 
in a sparse file it consumes no disk blocks.

See http://libguestfs.org/virt-sparsify.1.html, and
http://rwmj.wordpress.com/2010/10/19/tip-making-a-disk-image-sparse/

Note you cannot trust the output of ls -l for a sparse file.
Add the -s option. For example:

% truncate -s 10M /tmp/zeros
% ls -l /tmp/zeros
-rw-rw-r--. 1 rad rad 10485760 Nov 17 15:20 /tmp/zeros
% ls -ls /tmp/zeros
0 -rw-rw-r--. 1 rad rad 10485760 Nov 17 15:20 /tmp/zeros


The -s option shows us that no data blocks are in use, since it's a
completely sparse file.
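
The same check can be done programmatically if that is more convenient; a
small sketch using the file from the example above:

# Sketch: compare apparent size with allocated blocks. st_blocks is in
# 512-byte units on Linux, so allocated < st_size means the file is at
# least partly sparse.
import os

path = '/tmp/zeros'   # the file created with truncate above
st = os.stat(path)
allocated = st.st_blocks * 512
print('apparent size: %d bytes, allocated: %d bytes' % (st.st_size, allocated))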

If you've played with libvirt via Virtual Machine Manager you'll find
that you can select a "raw" disk format (the default), and then select thin
provisioning with both a max allocation and a pre-allocation. This last
parameter unfortunately seems unavailable via the oVirt interface. It's
nice to say how much you'd like pre-allocated, so that you get better
initial performance when you know roughly how much you're going to need to
begin with. I'd love to see that in a future release.
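
In the meantime something similar can be done by hand on the storage side;
a rough sketch of what I mean (Python 3.3+ for os.posix_fallocate, made-up
path and sizes):

# Rough sketch: keep the full apparent size of a sparse raw image but back
# the first 2 GiB with real blocks so early writes don't pay the
# allocation cost. Path and sizes are made up for the example.
import os

path = 'disk.raw'
virtual_size = 20 * 1024 ** 3   # 20 GiB apparent size
preallocated = 2 * 1024 ** 3    # first 2 GiB actually allocated

fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o660)
try:
    os.ftruncate(fd, virtual_size)            # sparse file of the full size
    os.posix_fallocate(fd, 0, preallocated)   # allocate the first 2 GiB
finally:
    os.close(fd)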

-Bob


Does this mean that even if I create a VM with thin provisioned disks,
as soon as I take at least one snapshot and then delete it, I only
have raw disks?
Or am I missing something?

This is what I observed:
as soon as I launch the delete snapshot operation:

raw format of new disk
vdsm 30805  1732  6 13:24 ?00:00:00 /usr/bin/dd
if=/dev/zero 
of=/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE
bs=1048576 seek=0 skip=0 conv=notrunc count=11264 oflag=direct

after about 5 minutes:
convert from qcow2 format to raw format
vdsm 31287  1732  7 13:29 ?00:00:08 /usr/bin/qemu-img
convert -t none -f qcow2
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326
-O raw 
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE

at the end there is probably a rename of the disk file, and
qemu-img info 
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326

image: 
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/d4fa7785-8a89-4d13-9082-52556ab0b326
file format: raw
virtual size: 11G (11811160064 bytes)
disk size: 9.5G


# ll 
/rhev/data-center/65c9777e-23f1-4f04-8cea-e7c8871dc88b/0a8035e6-e41d-40ff-a154-e0a374f264b2/images/75c54716-5222-4ad6-91f2-8b312eacc4b4/
total 9995476
-rw-rw. 1 vdsm kvm 1048576 Nov 16 12:09
6ac73ee2-6419-43a4-91e7-7d4ef2026943_MERGE.lease
-rw-rw. 1 vdsm kvm 11811160064 Nov 16 13:32
d4fa7785-8a89-4d13-9082-52556ab0b326
-rw-rw. 1 vdsm kvm 1048576 Mar 23  2013
d4fa7785-8a89-4d13-9082-52556ab0b326.lease
-rw-rw. 1 vdsm kvm 1048576 Nov 16 13:29
d4fa7785-8a89-4d13-9082-52556ab0b326_MERGE.lease
-rw-r--r--. 1 vdsm kvm 274 Nov 16 13:29
d4fa7785-8a89-4d13-9082-52556ab0b326.meta

Thanks
Gianluca
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Users Digest, Vol 26, Issue 72

2013-11-17 Thread Ryan Barry
Without knowing how the disks are split among the controllers, I don't want
to make any assumptions about how shared it actually is, since it may be
half and half with no multipathing.

While a multi-controller DAS array *may* be shared storage, it may not be.
Moreover, I have no idea whether VDSM looks at by-path, by-bus, dm-*, or
otherwise, and there is no guarantee that a SAS disk will present like an
FC LUN (by-path/pci...-fc-$wwn...), whereas OCFS used as POSIXFS storage is
assured to work, albeit with a more complex setup and another intermediary
layer.
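
A quick way to check how shared it really is would be to compare the
WWN-based names under /dev/disk/by-id on each attached host; a rough sketch
(standard udev naming, nothing VDSM-specific):

# Sketch: list the WWN-based device links udev creates. If the same WWNs
# show up on every host attached to the array, the LUNs really are shared.
# Run it on each host and compare the output.
import os

byid = '/dev/disk/by-id'
for name in sorted(os.listdir(byid)):
    if name.startswith('wwn-') and 'part' not in name:
        target = os.path.realpath(os.path.join(byid, name))
        print('%-40s -> %s' % (name, target))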
On Nov 17, 2013 10:00 AM,  wrote:

> Date: Sat, 16 Nov 2013 21:39:35 -0500
> From: Jeff Bailey 
> To: users@ovirt.org
> Subject: Re: [Users] oVirt and SAS shared storage??
>
>
> On 11/16/2013 9:22 AM, Ryan Barry wrote:
> >
> > Unfortunately, I didn't get a reply to my question. So let's try
> > again.
> >
> > Does oVirt support SAS shared storage (e.g. an MSA2000sa) as a
> > storage domain?
> > If yes, what kind of storage domain do I have to choose at setup time?
> >
> > SAS is a bus which implements the SCSI protocol in a point-to-point
> > fashion. The array you have is the effective equivalent of attaching
> > additional hard drives directly to your computer.
> >
> > It is not necessarily faster than iSCSI or Fiber Channel; almost any
> > nearline storage these days will be SAS, almost all the SANs in
> > production, and most of the tiered storage as well (because SAS
> > supports SATA drives). I'm not even sure if NetApp uses FC-AL drives
> > in their arrays anymore. I think they're all SAS, but don't quote me
> > on that.
> >
> > What differentiates a SAN (iSCSI or Fiber Channel) from a NAS is that
> > a SAN presents raw devices over a fabric or switched medium rather
> > than point-to-point (point-to-point Fiber Channel still happens, but
> > it's easier to assume that it doesn't for the sake of argument). A NAS
> > presents network file systems (CIFS, GlusterFS, Lustre, NFS, Ceph,
> > whatever), though this also gets complicated when you start talking
> > about distributed clustered network file systems.
> >
> > Anyway, what you have is neither of these. It's directly-attached
> > storage. It may work, but it's an unsupported configuration, and is
> > only shared storage in the sense that it has multiple controllers. If
> > I were going to configure it for oVirt, I would:
> >
>
> It's shared storage in every sense of the word.  I would simply use an
> FC domain and choose the LUNs as usual.
>
> > Attach it to a 3rd server and export iSCSI LUNs from it
> > Attach it to a 3rd server and export NFS from it
> > Attach it to multiple CentOS/Fedora servers, configure clustering (so
> > you get fencing, a DLM, and the other requisites of a clustered
> > filesystem), and use raw cLVM block devices or GFS2/OCFS filesystems
> > as POSIXFS storage for oVirt.
> >
>
> These would be terrible choices for both performance and reliability.
> It's exactly the same as fronting an FC LUN with all of that crud when
> you could simply access the LUN directly.  If the array port count is a
> problem then just toss a SAS switch in between and you have an all-SAS
> equivalent of a Fibre Channel SAN.  This is exactly what we do in
> production vSphere environments and there is no technical reason it
> shouldn't work fine with oVirt.
>
> > Thank you for your help
> >
> > Hans-Joachim
> >
> >
> > Hans
> >
> > --
> > while (!asleep) { sheep++; }
> >
> >
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users