On 02/04/2014 03:10 PM, Elad Ben Aharon wrote:
Actually, the best way to do it would be to create another storage domain and
migrate all the VMs' disks to it (you can do this while the VMs are running),
so you won't suffer any downtime.
Note that this will create a snapshot for your VMs.
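[For reference, a live disk migration like the one described above can also be driven through the oVirt 3.x REST API by POSTing a "move" action to the disk. A minimal sketch that just builds the action body — the endpoint path and the target domain name are assumptions for illustration, not taken from this thread:]

```python
# Sketch: build the XML body for a disk "move" action,
# POSTed to something like /api/vms/{vm_id}/disks/{disk_id}/move
# (path and domain name are illustrative assumptions).
import xml.etree.ElementTree as ET

def build_move_action(target_domain_name):
    """Return the <action> payload selecting the target storage domain."""
    action = ET.Element("action")
    sd = ET.SubElement(action, "storage_domain")
    ET.SubElement(sd, "name").text = target_domain_name
    return ET.tostring(action, encoding="unicode")

payload = build_move_action("new_data_domain")
print(payload)
```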
It's actually step 5 :)
On 05.02.2014 07:50, Itamar Heim wrote:
(Step 6 explains where the user/password for libvirt/virsh are.)
--
Mit freundlichen Grüßen / Regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F:
Can we install ovirt engine and vdsm on same node ?
If so any down side ?
--
*Hans Emmanuel*
*NOthing to FEAR but something to FEEL..*
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
On 05/02/2014 09:18, Hans Emmanuel wrote:
Can we install ovirt engine and vdsm on same node ?
Sure, you can use the all-in-one plugin to have both running on the same node.
In 3.4.0 you'll also be able to run ovirt engine inside a VM using the
hosted-engine feature.
If so any down side ?
Well, I just tested this.
and I can't connect to virsh with this information.
I guess the mentioned user vdsm@rhevh might not be the actual one
used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)
or can't libvirt manage multiple authenticated users?
Because I registered my own
- Original Message -
From: Sandro Bonazzola sbona...@redhat.com
To: Hans Emmanuel hansemman...@gmail.com, users@ovirt.org
Sent: Wednesday, February 5, 2014 10:22:28 AM
Subject: Re: [Users] ovirt engine and vdsm on same node
On 05/02/2014 09:18, Hans Emmanuel wrote:
Can we
Thanks for your inputs .
On Wed, Feb 5, 2014 at 2:04 PM, Yedidyah Bar David d...@redhat.com wrote:
- Original Message -
From: Sandro Bonazzola sbona...@redhat.com
To: Hans Emmanuel hansemman...@gmail.com, users@ovirt.org
Sent: Wednesday, February 5, 2014 10:22:28 AM
Subject:
Hi Maurice,
With quota you can set a hard limit on the number of CPUs consumed by the user,
and thereby limit the number of VMs; the other quota parameters (memory and
storage) can be unlimited.
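[In other words, a hard vCPU quota indirectly caps how many VMs a user can create. A toy calculation, with made-up numbers, just to illustrate the arithmetic:]

```python
def max_vms(vcpu_quota, vcpus_per_vm):
    """How many VMs fit under a hard vCPU quota, assuming a fixed VM size."""
    return vcpu_quota // vcpus_per_vm

# e.g. a hard limit of 20 vCPUs with 2-vCPU VMs allows at most 10 VMs
print(max_vms(20, 2))
```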
Although these videos are for version 3.1 they are still pretty useful,
since there weren't any drastic changes
On 5-2-2014 9:27, Sven Kieske wrote:
Well, I just tested this.
and I can't connect to virsh with this information.
I guess the mentioned user vdsm@rhevh might not be the actual one
used in 3.3.2 anymore? (mail is from 2012 and mentions rhev, so..)
vdsm@ovirt seems to work :-)
Joop
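[For reference, vdsm keeps the SASL password in /etc/pki/vdsm/keys/libvirt_password, and a client supplies the vdsm@ovirt credentials through libvirt's openAuth callback. A minimal sketch of that callback logic — the two VIR_CRED_* constants are hard-coded with libvirt's values so the snippet stays self-contained; in real code you would import them from the libvirt module and pass the callback to libvirt.openAuth:]

```python
# Credential callback of the kind libvirt.openAuth() expects.
# Constants copied from libvirt (hard-coded so this runs without the bindings).
VIR_CRED_AUTHNAME = 2
VIR_CRED_PASSPHRASE = 5

def make_auth_callback(user, password):
    """Build a callback that fills in SASL username and passphrase."""
    def request_credentials(credentials, user_data):
        # Each credential is [type, prompt, challenge, default, result];
        # the callback writes the answer into slot 4 and returns 0 on success.
        for cred in credentials:
            if cred[0] == VIR_CRED_AUTHNAME:
                cred[4] = user          # e.g. "vdsm@ovirt"
            elif cred[0] == VIR_CRED_PASSPHRASE:
                cred[4] = password      # contents of libvirt_password
        return 0
    return request_credentials

# With the real bindings this would be used roughly as:
#   conn = libvirt.openAuth("qemu:///system",
#       [[VIR_CRED_AUTHNAME, VIR_CRED_PASSPHRASE],
#        make_auth_callback("vdsm@ovirt", secret), None], 0)
cb = make_auth_callback("vdsm@ovirt", "secret")
creds = [[VIR_CRED_AUTHNAME, "", None, None, None],
         [VIR_CRED_PASSPHRASE, "", None, None, None]]
cb(creds, None)
print(creds[0][4], creds[1][4])
```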
- Original Message -
From: Maurice James midnightst...@msn.com
To: users@ovirt.org
Sent: Wednesday, February 5, 2014 3:47:51 AM
Subject: [Users] Quotas
How do I limit the number of VMs that a user can create using quotas?
Hi Maurice.
Quota supports the resource level
Hello,
I'm reviving this thread with a subject more in line with the real problem.
Previous thread subject was
unable to start vm in 3.3 and f19 with gluster
and began here on ovirt users mailing list:
http://lists.ovirt.org/pipermail/users/2013-September/016628.html
Now I updated all to final
I can confirm that vdsm@ovirt does work.
However, I have the strong feeling that
the password in /etc/pki/vdsm/keys/libvirt_password
is static for all installations.
And gerrit proves me right:
Hello List,
my aim is to host multiple VMs which are redundant and highly available.
It should also scale well.
I think usually people just buy a fat iSCSI storage box and attach that. In my
case it should scale well from very small nodes to big ones.
Therefore an iSCSI target will bring a lot of
Yes, after the download with yum, engine-setup works without failing. I
upgraded 3.4.0 to 3.5.0.
I can send you log files from when engine-setup fails and when it works. I
have just one ovirt-engine so I can't reproduce this.
Kev.
2014-02-05 Yedidyah Bar David d...@redhat.com:
That's actually bad news, not
see inline
On 02/05/2014 10:45 AM, ml ml wrote:
Hello List,
my aim is to host multiple VMs which are redundant and highly
available. It should also scale well.
I'm assuming you are talking about an HA cluster, since redundant VMs and HA
VMs are a contradiction :)
I think usually people
On Thu, Jan 16, 2014 at 11:51:25PM +, Peter Styk wrote:
Greetings,
I'm writing here to share some of my findings about hosting with
Hetzner. All-in-one setups on a single remote host can be tricky. The provider
added an extra /29 subnet to the main host but none of it is routed by default
and
Could you explain further why the host needs to do any routing?
Assaf Muller, Cloud Networking Engineer
Red Hat
- Original Message -
From: Dan Kenigsberg dan...@redhat.com
To: Peter Styk polf...@gmail.com, amul...@redhat.com
Cc: users@ovirt.org
Sent: Wednesday, February 5, 2014
On 02/05/2014 04:48 PM, Gianluca Cecchi wrote:
Hello,
in a test infra I would like to temporarily put a host that is part of a
Gluster DC and has active gluster volumes into an iSCSI DC.
I put the Gluster DC in maintenance, stopped the gluster volume, and put both
hosts in maintenance.
Then I tried to edit the host and
On Wed, Feb 05, 2014 at 09:50:04AM +, Sven Kieske wrote:
I can confirm that vdsm@ovirt does work.
However, I have the strong feeling that
the password in /etc/pki/vdsm/keys/libvirt_password
is static for all installations.
And gerrit proves me right:
On Tue, Feb 04, 2014 at 08:20:26PM +, martin.tru...@cspq.gouv.qc.ca wrote:
Hi,
I want to configure NAT in oVirt with this procedure:
http://lists.ovirt.org/pipermail/users/2012-April/001751.html
But I don't have the login password for virsh or for the PostgreSQL database.
I used the
Hello Ron,
thanks for your reply.
This post is also not meant to be an iscsi discussion.
Since oVirt does not support DRBD out of the box, I came up with my own
concept:
check out posix storage domain.
If it supports gluster you might be able to use it for DRBD.
Sorry, I don't quite
On Wed, Feb 05, 2014 at 06:36:32AM -0500, Assaf Muller wrote:
Could you explain further why the host needs to do any routing?
From what I gather, the hosting service (Hetzner) allows only IP traffic
out of the box, but Peter may correct me.
From: ml ml mliebher...@googlemail.com
To: Users@ovirt.org
Sent: Wednesday, February 5, 2014 12:45:55 PM
Subject: [Users] Will this two node concept scale and work?
Hello List,
my aim is to host multiple VMs which are redundant and highly available. It
should also scale well.
I think
On Wed, 2014-02-05 at 08:49 -0500, Yedidyah Bar David wrote:
From: ml ml mliebher...@googlemail.com
To: Users@ovirt.org
Sent: Wednesday, February 5, 2014 12:45:55 PM
Subject: [Users] Will this two node concept scale and work?
Thank you, I'll update the wiki in a few days to include this info (and
I'll use it myself).
Regards,
On 05/02/14 04:43, Sahina Bose wrote:
On 02/03/2014 07:18 PM, Juan Pablo Lorier wrote:
Hi,
I've created my first wiki page and I'd like someone to review it and
tell me if there's
Are the volumes hosted on the host?
On Feb 5, 2014 12:18 PM, Gianluca Cecchi gianluca.cec...@gmail.com
wrote:
Hello,
in a test infra I would like to temporarily put a host that is part of a
Gluster DC and has active gluster volumes into an iSCSI DC.
I put the Gluster DC in maintenance, stop
On Wed, Feb 5, 2014 at 4:10 PM, Itamar Heim wrote:
Are the volumes hosted on the host?
Yes, but I have brought down all resources related to it
Well, I didn't know the exact background for this code,
and I can understand it from a management perspective, but from
a sysadmin perspective it is useless (it does not prevent anything
against an informed attacker) and may even lead to false security
assumptions (nobody can mess with libvirt,
Hello,
I have a Fedora 19 engine based on 3.3 final.
I installed a Fedora 19 host (configured as an infrastructure server) and
then installed it from the web GUI. Then a yum update of the host updated vdsm
(see the separate thread I'm going to open for this).
I created an iSCSI DC and added that host to it.
When I try
On 02/05/2014 06:40 PM, Gianluca Cecchi wrote:
Hello,
fedora 19 engine 3.3.3 with ovirt-host-deploy-1.1.3-1.fc19.noarch
when I deploy a fedora 19 host from web admin gui I get on it
vdsm-4.13.0-11.fc19.x86_64
(it seems to have been updated in November 2013)
If after that I explicitly enable ovirt 3.3.3 yum
- Original Message -
From: Itamar Heim ih...@redhat.com
To: Gianluca Cecchi gianluca.cec...@gmail.com, users users@ovirt.org
Sent: Wednesday, February 5, 2014 8:17:38 PM
Subject: Re: [Users] ovirt 3.3.3 host deploy push an old vdsm
On 02/05/2014 06:40 PM, Gianluca Cecchi wrote:
On Wed, Feb 5, 2014 at 7:19 PM, Alon Bar-Lev wrote:
- Original Message -
From: Itamar Heim
engine will deploy the latest vdsm the host sees based on the repos
configured on the host?
indeed.
I thought there was some sort of archive of RPM packages embedded in and
delivered from the engine itself...
- Original Message -
From: Gianluca Cecchi gianluca.cec...@gmail.com
To: Alon Bar-Lev alo...@redhat.com
Cc: Itamar Heim ih...@redhat.com, users users@ovirt.org
Sent: Wednesday, February 5, 2014 8:29:49 PM
Subject: Re: [Users] ovirt 3.3.3 host deploy push an old vdsm
On Wed, Feb
I am having some issues wrapping my head around this, but what I am trying
to set up is an A/B testing environment with a 3-node cluster. Each node has
2 NICs, 1 for ovirtmgmt and 1 for the VLANed A/B network. I guess what I am
trying to understand is whether ovirt is tagging the VLANs I set up and is
I attempted to set ovirtmgmt to untagged and to make it a VM network,
but that would not save.
The configuration:
ovirtmgmt:
- Display Network
- Migration Network
- VM Network
- NO VLAN
ipmi:
- VM Network
- VLAN 2
When I go into a host's Network Interfaces and attempt to sync the
removal
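[For comparison, on an EL/Fedora host a tagged VM network like the "ipmi / VLAN 2" entry above typically corresponds to initscripts files along these lines. The NIC name eth1 is a placeholder, and vdsm normally writes these files itself when you attach the network through the GUI:]

```ini
# /etc/sysconfig/network-scripts/ifcfg-eth1.2 -- VLAN 2 sub-interface
DEVICE=eth1.2
VLAN=yes
BRIDGE=ipmi
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ipmi -- the VM network bridge
DEVICE=ipmi
TYPE=Bridge
ONBOOT=yes
```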
I currently have a new setup running ovirt 3.3.3. I have a Gluster storage
domain with roughly 2.5TB of usable space. Gluster is installed on the same
systems as the ovirt hosts. The host breakdown is as follows:
Ovirt DC:
4 hosts in the cluster. Each host has 4 physical disks in a RAID 5.
Thanks for the input.
-Original Message-
From: Gilad Chaplik [mailto:gchap...@redhat.com]
Sent: Wednesday, February 05, 2014 3:48 AM
To: Maurice James
Cc: users@ovirt.org
Subject: Re: [Users] Quotas
Hi Maurice,
With quota you can set a hard limit for the number of CPU consumed by the
Ok. Thanks. Sounds do-able
From: Ted Miller [mailto:tmil...@hcjb.org]
Sent: Wednesday, February 05, 2014 3:44 PM
To: Maurice James; users@ovirt.org
Subject: RE: [Users] Import VMware
From: Maurice James midnightst...@msn.com mailto:midnightst...@msn.com
Do you know of the best way to
There was another recent post about this but a sum up was:
You must have power fencing to support VM HA, otherwise there'll be an issue
with the engine not knowing whether the VM is still running, and it won't
bring it up on a new host, to avoid data corruption. Also make sure you have
your quorum setup
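[The quorum point can be illustrated with a toy check: with server-side quorum, a replica set only stays writable while strictly more than half of its bricks are reachable. A sketch of the rule, not gluster's actual implementation:]

```python
def has_quorum(bricks_up, brick_count):
    """True if strictly more than half of the bricks are reachable."""
    return bricks_up * 2 > brick_count

# replica 3: writable with 2 bricks up, but not with only 1
print(has_quorum(2, 3), has_quorum(1, 3))
```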
Hmm. So in that case would I be able to drop the Gluster setup, use NFS on
each host, and make sure power fencing is enabled? Will that still achieve
fault tolerance, or is replicated gluster still required?
From: Andrew Lau [mailto:and...@andrewklau.com]
Sent: Wednesday, February 05, 2014
I'm not sure what you mean by NFS on each host, but you'll need some way to at
least ensure the data is available, be that replicated gluster or a
centralized SAN etc.
On Thu, Feb 6, 2014 at 1:21 PM, Maurice James midnightst...@msn.com wrote:
Hmm. So in that case would I be able to drop the
OK, I think I got it now. What I meant by NFS on each host was: in ovirt you
can set up an NFS storage domain on each host and have it available to all
hosts in the cluster.
From: Andrew Lau [mailto:and...@andrewklau.com]
Sent: Wednesday, February 05, 2014 9:25 PM
To: Maurice James
Cc: