Re: [Users] FISL14 Conference Report - oVirt and Cloud EcoSystem

2013-07-10 Thread Doron Fediuck

- Original Message -
| From: Douglas Schilling Landgraf dougsl...@redhat.com
| To: users@ovirt.org
| Sent: Wednesday, July 10, 2013 2:32:44 AM
| Subject: [Users] FISL14 Conference Report - oVirt and Cloud EcoSystem
| 
| http://dougsland.livejournal.com/122744.html
| 
| See you there next year!
| 
| --
| Cheers
| Douglas

Thanks, Douglas.

Looks like you had fun there ;)

Any feedback so far?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] which file system for shared disk?

2013-07-10 Thread Piotr Szubiakowski

Hi,
we are developing an application where it would be great if multiple hosts 
could have access to the same disk. I think we can use features like a 
shared disk or a direct LUN to attach the same storage to multiple VMs. 
However, to provide concurrent access to the resource, a cluster file 
system should be used. The most popular open source cluster file systems 
are GFS2 and OCFS2. So my questions are:

1) Has anyone shared a disk between VMs in oVirt? What filesystem did you use?
2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone 
run a fencing mechanism with oVirt/libvirt?


Many thanks,
Piotr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Chris Smith
Why not use gluster with xfs on the storage bricks?

http://www.gluster.org/

On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski
piotr.szubiakow...@nask.pl wrote:
 Hi,
 we are developing an application where would be great if multiple host could
 have access to the same disk. I think that we can use features like shared
 disk or direct LUN to attach the same storage to multiple VM's. However to
 provide concurrent access to the resource, there should be a cluster file
 system used. The most popular open source cluster file systems are GFS2 and
 OCFS2. So my questions are:

 1) Does anyone have share disk between VM's in oVirt? What fs did You used?
 2) Is it possible to use GFS2 on VM's that are running on oVirt? Does anyone
 have run fencing mechanism with ovirt/libvirt?

 Many thanks,
 Piotr
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Eduardo Ramos

Hi Piotr!

I've used OCFS2 outside of oVirt, so I can't speak specifically about a VM 
environment, but I suggest you use OCFS2 in place of GFS2. It is simpler to 
implement, with fewer components to configure, and it takes care of fencing 
for you.
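
For reference, a rough sketch of a minimal two-node OCFS2 setup on the VMs 
sharing the disk; the cluster name, node names, IPs and device path below are 
placeholders, and how the o2cb stack is started varies by distribution:

  # /etc/ocfs2/cluster.conf, identical on both VMs (placeholder values)
  cluster:
          node_count = 2
          name = vmcluster

  node:
          ip_port = 7777
          ip_address = 192.168.1.11
          number = 0
          name = vm1
          cluster = vmcluster

  node:
          ip_port = 7777
          ip_address = 192.168.1.12
          number = 1
          name = vm2
          cluster = vmcluster

  # format once, with enough node slots for every VM that will mount it
  mkfs.ocfs2 -N 4 -L shared-data /dev/vdb
  # then, with the o2cb cluster stack running on each VM, mount it everywhere
  mount -t ocfs2 /dev/vdb /srv/shared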


On 07/10/2013 08:15 AM, Piotr Szubiakowski wrote:

Hi,
we are developing an application where it would be great if multiple 
hosts could have access to the same disk. I think we can use features 
like a shared disk or a direct LUN to attach the same storage to 
multiple VMs. However, to provide concurrent access to the resource, 
a cluster file system should be used. The most popular open source 
cluster file systems are GFS2 and OCFS2. So my questions are:

1) Has anyone shared a disk between VMs in oVirt? What filesystem did 
you use?
2) Is it possible to use GFS2 on VMs that are running on oVirt? Has 
anyone run a fencing mechanism with oVirt/libvirt?


Many thanks,
Piotr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Piotr Szubiakowski

Hi,
Gluster is good in a scenario where we have many hosts, each with its own 
storage, and we aggregate those pieces into one shared storage pool. In 
that situation data is transferred via Ethernet or InfiniBand. In our 
scenario we have centralized storage accessed via Fibre Channel, so it 
would be more efficient if the data were transferred over FC.


Thanks,
Piotr

On 10.07.2013 at 13:23, Chris Smith wrote:

Why not use gluster with xfs on the storage bricks?

http://www.gluster.org/

On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski
piotr.szubiakow...@nask.pl  wrote:

Hi,
we are developing an application where it would be great if multiple hosts could
have access to the same disk. I think we can use features like a shared
disk or a direct LUN to attach the same storage to multiple VMs. However, to
provide concurrent access to the resource, a cluster file
system should be used. The most popular open source cluster file systems are GFS2 and
OCFS2. So my questions are:

1) Has anyone shared a disk between VMs in oVirt? What filesystem did you use?
2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone
run a fencing mechanism with oVirt/libvirt?

Many thanks,
Piotr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Piotr Szubiakowski

Hi Eduardo,
Yes, the fencing method used in OCFS2 is probably better for virtualized 
environments. Thanks for the advice!


Many thanks,
Piotr

On 10.07.2013 at 13:32, Eduardo Ramos wrote:

Hi Piotr!

I've used OCFS2 outside of oVirt, so I can't speak specifically about 
a VM environment, but I suggest you use OCFS2 in place of GFS2. It is 
simpler to implement, with fewer components to configure, and it takes 
care of fencing for you.


On 07/10/2013 08:15 AM, Piotr Szubiakowski wrote:

Hi,
we are developing an application where it would be great if multiple 
hosts could have access to the same disk. I think we can use features 
like a shared disk or a direct LUN to attach the same storage to 
multiple VMs. However, to provide concurrent access to the resource, 
a cluster file system should be used. The most popular open source 
cluster file systems are GFS2 and OCFS2. So my questions are:

1) Has anyone shared a disk between VMs in oVirt? What filesystem did 
you use?
2) Is it possible to use GFS2 on VMs that are running on oVirt? Has 
anyone run a fencing mechanism with oVirt/libvirt?


Many thanks,
Piotr
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Subhendu Ghosh
On 07/10/2013 07:53 AM, Piotr Szubiakowski wrote:
 Hi,
 Gluster is good in a scenario where we have many hosts, each with its own storage, and we
 aggregate those pieces into one shared storage pool. In that situation data is
 transferred via Ethernet or InfiniBand. In our scenario we have centralized
 storage accessed via Fibre Channel, so it would be more efficient if the data
 were transferred over FC.

For an FC disk presented to all nodes in the cluster, why use a filesystem?
Use LVM instead:

http://www.ovirt.org/Vdsm_Block_Storage_Domains
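
For illustration, this is roughly what an oVirt block storage domain looks 
like from a host; the VG carrying the virtual disks is named after the 
storage domain UUID (shown as a placeholder here):

  # the FC LUN shows up as a PV inside a VG named after the storage domain
  pvs -o pv_name,vg_name
  # each virtual disk image is carved out as a plain logical volume in that VG
  lvs -o lv_name,lv_size,lv_attr <storage-domain-uuid>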

 
 Thanks,
 Piotr
 
 On 10.07.2013 at 13:23, Chris Smith wrote:
 Why not use gluster with xfs on the storage bricks?

 http://www.gluster.org/

 On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski
 piotr.szubiakow...@nask.pl  wrote:
 Hi,
 we are developing an application where it would be great if multiple hosts could
 have access to the same disk. I think we can use features like a shared
 disk or a direct LUN to attach the same storage to multiple VMs. However, to
 provide concurrent access to the resource, a cluster file
 system should be used. The most popular open source cluster file systems are GFS2 and
 OCFS2. So my questions are:

 1) Has anyone shared a disk between VMs in oVirt? What filesystem did you use?
 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone
 run a fencing mechanism with oVirt/libvirt?

 Many thanks,
 Piotr
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users
 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Piotr Szubiakowski



On 07/10/2013 07:53 AM, Piotr Szubiakowski wrote:

Hi,
Gluster is good in a scenario where we have many hosts, each with its own storage, and we
aggregate those pieces into one shared storage pool. In that situation data is
transferred via Ethernet or InfiniBand. In our scenario we have centralized
storage accessed via Fibre Channel, so it would be more efficient if the data
were transferred over FC.

For an FC disk presented to all nodes in the cluster, why use a filesystem?
Use LVM instead:

http://www.ovirt.org/Vdsm_Block_Storage_Domains



The way that oVirt manages storage domains accessed via FC is very smart. There is 
a separate logical volume for each virtual disk. But I think a logical volume can 
be touched by only one host at a time. Is it possible for two hosts to access the 
same logical volume read/write without data corruption?

Thanks,
Piotr


___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt Weekly Meeting Minutes -- 2013-07-10

2013-07-10 Thread Mike Burns
Minutes: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.html
Minutes (text): 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.txt
Log: 
http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.log.html



#ovirt: oVirt Weekly Meeting



Meeting started by mgoldboi at 14:00:55 UTC. The full logs are available
at http://ovirt.org/meetings/ovirt/2013/ovirt.2013-07-10-14.00.log.html
.



Meeting summary
---
* 3.3 Status Update  (mgoldboi, 14:03:04)
  * LINK:
http://www.ovirt.org/index.php?title=OVirt_3.3_release-management
(mgoldboi, 14:03:34)
  * Feature Freeze 2013-07-15 (Next Monday)  (mgoldboi, 14:04:02)
  * infra 3.3: ExternalTasks feature development in progress will make
feature freeze  (mgoldboi, 14:11:49)
  * virt: RAM Snapshots feature in progress, REST is at risk, but the
rest seems to make it by 15th  (mgoldboi, 14:15:01)
  * virt: test pages will be completed by end of week (best effort)
(mgoldboi, 14:17:09)
  * virt: all features are green except  RAM Snapshots  (mgoldboi,
14:17:50)
  * virt: Cloud-Init Integration currently on Red - patches are on their
way, still at risk for feature freeze  (mgoldboi, 14:20:58)
  * networking Quantum Integration in progress, will make it to feature
freeze  (mgoldboi, 14:23:23)
  * networking:NetworkReloaded in progress, will make it to feature
freeze  (mgoldboi, 14:23:43)
  * networking: test pages will be completed by end of week (best
effort)  (mgoldboi, 14:24:34)
  * networking: Multiple Gateways in progress - will make it to feature
freeze  (mgoldboi, 14:28:31)
  * networking: NetworkReloaded will probably not make it in, keeping it
on wiki hoping for a quick turnaround  (mgoldboi, 14:29:50)
  * node: all features are green  (mgoldboi, 14:30:10)
  * storage:  Edit Connections is at a risk  (mgoldboi, 14:31:56)
  * ACTION: amureini update storage features on release table
(mgoldboi, 14:33:22)
  * SLA: Network QoS - patches are in (on review) - at risk for feature
freeze  (mgoldboi, 14:36:41)
  * SLA: Scheduling API and Scheduler - patches are in (on review) - at
risk for feature freeze  (mgoldboi, 14:37:03)
  * SLA: QoS should be merged soon  (mgoldboi, 14:37:22)
  * SLA: Watchdog - most parts merged. still working on REST API (on
review)  (mgoldboi, 14:38:22)
  * SLA: test pages will be completed soon (best effort)  (mgoldboi,
14:40:07)
  * gluster: gluster hooks - rest api patch resubmitted after comments.
if approved, can make it to feature freeze  (mgoldboi, 14:42:28)
  * gluster: gluster services - patches are in review  (mgoldboi,
14:43:14)
  * gluster: test pages will be completed early next week (best effort)
(mgoldboi, 14:44:00)
  * integration: Otopi Infra Migration - base flows are working - fixing
several bugs - will make feature freeze  (mgoldboi, 14:44:50)
  * integration: Self Hosted Engine - at risk for feature freeze - will
probably not make it  (mgoldboi, 14:45:23)
  * integration: test pages will be completed next week  (mgoldboi,
14:45:45)
  * UX:   FrontendRefactor and GWT Platform Upgrade will not make it
into feature freeze  (mgoldboi, 14:46:51)
  * Beta builds should be prepared and delivered to mburns by Tuesday
2013-07-16 (Require:  F18, EL6, would like F19 as well)  (mgoldboi,
14:48:24)
  * ACTION: mburns to post packages on Tuesday  (mgoldboi, 14:48:52)

* Conference and Workshop  (mgoldboi, 14:49:58)
  * CFP for KVM Forum, CloudOpen EU and LinuxCon EU closes on July 21
(mgoldboi, 14:52:04)

* Infra update  (mgoldboi, 14:55:38)
  * A couple of bug fixes were made to the oVirt website, with more coming -
specifically, we're going to have a fix soon for context
highlighting of code in wiki pages  (mgoldboi, 14:56:46)
  * infra: Foreman instance is installed  (mgoldboi, 15:00:14)
  * infra: rackspace servers were installed with local storage DCs
(mgoldboi, 15:00:29)
  * infra: Fedora 18/19 and CentOS 6.4 OS will be distributed over 20-25
slaves  (mgoldboi, 15:01:53)
  * infra: jenkins version was upgraded, lots of job maintenance was
done  (mgoldboi, 15:02:39)
  * infra: backup infra for jenkins.ovirt.org was improved  (mgoldboi,
15:03:10)
  * infra: new infra layout was proposed and approved for the
infrastructure  (mgoldboi, 15:05:24)
  * infra: servers and slaves were added to puppet/Foreman, puppet
classes additions will be started soon  (mgoldboi, 15:06:42)

* Other Topics  (mgoldboi, 15:07:09)

Meeting ended at 15:08:52 UTC.




Action Items

* amureini update storage features on release table
* mburns to post packages on Tuesday




Action Items, by person
---
* amureini
  * amureini update storage features on release table
* mburns
  * mburns to post packages on Tuesday
* **UNASSIGNED**
  * (none)




People Present (lines said)
---
* 

Re: [Users] Fedora upgrading from 3.1 to 3.2

2013-07-10 Thread Alex Lourie
Hi Karli

'Restore certificates' basically means taking a backup of 
/etc/pki/ovirt-engine/certs and /etc/pki/ovirt-engine/keys and restoring them 
into 3.2 after installation.

--dont-drop-database will do exactly that - leave the DB intact; that can be to 
your benefit in some cases.
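
A rough sketch of what that backup/restore amounts to, assuming a local 
PostgreSQL and the default 'engine' database, with the ovirt-engine service 
stopped (paths from this thread; exact steps can differ between versions):

  # before the upgrade
  tar czf /root/ovirt-pki.tar.gz /etc/pki/ovirt-engine/certs /etc/pki/ovirt-engine/keys
  su - postgres -c "pg_dump engine" > /root/engine.sql

  # after installing 3.2 (restoring over the DB that engine-setup created)
  tar xzf /root/ovirt-pki.tar.gz -C /
  su - postgres -c "psql engine" < /root/engine.sql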

I'll be happy to hear about your progress.

-- 

Best regards,

Alex Lourie
Software Developer in Integration
Red Hat


- Original Message -
 From: Karli Sjöberg karli.sjob...@slu.se
 To: Alex Lourie alou...@redhat.com
 Cc: users@ovirt.org
 Sent: Friday, July 5, 2013 8:34:22 PM
 Subject: SV: [Users] Fedora upgrading from 3.1 to 3.2
 
 Hi Alex,
 
 crappy MS webmail that can't figure out indents while on vacation, just FYI.
 Yes, progress is always good :) I would like to have some pointers about no.
 4: restore certificates. Then I can ask one of my co-workers to test the
 procedure and report back. So, restoring certs: how?
 
 Or! I saw in another thread there was an engine-cleanup --dont-drop-dbase
 something or other... Is there an equivalent for engine-setup, like
 --dont-touch-dbase? Or engine-cleanup --dbase-something and then
 engine-setup again, and it'll just play nice with the dbase that's still
 there perhaps?
 
 /Karli
 
 
 From: Alex Lourie [alou...@redhat.com]
 Sent: 30 June 2013 17:29
 To: Karli Sjöberg
 Cc: users@ovirt.org
 Subject: Re: [Users] Fedora upgrading from 3.1 to 3.2
 
 Hi Karli
 
 On Wed, Jun 26, 2013 at 5:28 PM, Karli Sjöberg karli.sjob...@slu.se
 wrote:
  Update!
 
  I have actually made some headway: I managed to get it past the
  database upgrade! As I was looking at the log, I drilled down into
  the engine-upgrade script and looked at what it was trying to do,
  which was a pythonic way of doing:
  # cd /usr/share/ovirt-engine/dbscripts
  # ./upgrade.sh -s localhost -p 5432 -u engine -d engine
 
 
 Nice progress!
 
 
  So I first reinstalled everything (I've stopped counting) and an
  hour later I was back at just before doing engine-upgrade again. What
  I did then was to upgrade ovirt-engine-dbscripts first and ran the
  upgrade.sh script manually (which went by smoothly, output also
  attached), downgraded all of the ovirt packages back to 3.1.0-4
  (because engine-upgrade didn't think there was any updating to act
  upon otherwise), then updated ovirt-engine-setup to 3.2.2, lastly ran
  engine-upgrade. That made it pass the database upgrade, yeay! But!
  Now it stopped at doing the CA instead... Log is attached.
 
  What I find strange is the last line of output from the upgrade.sh:
  ...
  Refreshing materialized views...
  Done.
  /usr/share/ovirt-engine/dbscripts
 
  Where the last lines of upgrade.sh are like:
  ...
  run_upgrade_files
 
  ret=$?
  printf "Done.\n"
  exit $ret
 
  And I looked through run_upgrade_files and couldn't figure out why
  that function would exit with /usr/share/ovirt-engine/dbscripts? No,
  that doesn't quite add up. Something just doesn't smell right, but I
  haven't figured out what it is yet.
 
 
 I think it is the output from another script that runs the upgrade.sh
 script. There's a pushd/popd action used in shell scripts to change the
 working folder and then get back to the original one. I think that is
 what you see here.
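
A minimal illustration of that pushd/popd behaviour, reusing the paths from
this thread:

  pushd /usr/share/ovirt-engine/dbscripts   # prints the new directory stack
  ./upgrade.sh -s localhost -p 5432 -u engine -d engine
  popd                                      # prints the stack again after returning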
 
 
  About the second error, with handling the CA, it seems like it's
  having trouble connecting to the database after its upgrade, but
  since the upgrade itself went by OK, I did engine-cleanup,
  engine-setup, stopped the engine, restored the database and started
  the engine again, and it worked. That points more to a configuration
  being missed, or improperly handled during the upgrade, which makes
  engine-config fail to connect.
 
 Could be; but the fact that reimport of the DB worked is a very good
 sign.
 
  Maybe some old configuration lying around that shouldn't be there
  hinders it, I don't know. Is there anything handling configuration
  changes, like rpmconf -a, in the upgrade process after updating the
  rpms?
 
 
 No, we do not have rpmconf in ovirt.
 
 Now, if you have a working engine, that's good. Remember, though, that
 you started with a clean DB. When the DB has entries, the upgrade.sh run
 may not go as smoothly as it did before. But if it works, I guess you
 could use the same strategy for upgrading the complete working environment:
 
 1. Backup DB and certificates
 2. Upgrade DB in a stand-alone mode and save it aside.
 3. Install 3.2, restore the engine DB from upgraded backup.
 4. Restore certificates.
 5. Start the engine.
 6. Enjoy?
 
 Let me know what your status is.
 
 
 
  /Karli
 
  Wed 2013-06-26 at 13:33 +, Karli Sjöberg wrote:
  Tue 2013-06-25 at 16:02 +0003, Alex Lourie wrote: 
  On Tue, Jun 25, 2013 at 5:03 PM, Karli Sjöberg
  karli.sjob...@slu.se
  wrote:
   Tue 2013-06-25 at 15:35 +0200, Gianluca Cecchi wrote: On
  Tue,
   Jun 25, 2013 at 2:58 PM, Karli Sjöberg wrote:

  
  
Yes, I had much better success following that article, 

[Users] exception occured when adding NFS storage

2013-07-10 Thread Zhang, Hongyi
Hi,

I started to learn oVirt by following the oVirt Quick Start Guide.

I used one server as the ovirt.engine, and another server as the ovirt.host. 
Both servers run Fedora 18.

After executing the command engine-setup on ovirt.engine, I can get into 
the admin portal to start the configuration of hosts, storage, etc. Here is 
what I did:


1.   In the Default data center, I clicked the Guide Me icon. The first 
thing is to configure the host. After waiting 4-5 minutes, I saw the host 
ovirt.host displayed under the Hosts tab with the Up status.

2.   On the ovirt.engine server, I did the following before creating the 
storage:

mkdir -p /mnt/data

chown -R 36:36 /mnt/data

Add the following line to /etc/sysconfig/nfs:

NFS4_SUPPORT=no

Add the following line to /etc/exports:

   /nmt/data  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

systemctl restart nfs-server.service

3.   Back to the Default data center, clicked the Guide Me icon to start 
the configuration of new storage.

4.   On the New Storage dialog, set Name field to be NFS-Share, and set the 
Export Path field to be ovirt.engine:/mnt/data. Then click OK

After a couple of minutes, an error window popped up: Error: Cannot add 
Storage. Internal error, Storage Connection doesn't exist.



From the /var/log/ ovirt-engine/engine.log, I can see:

2013-07-10 11:37:51,300 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
(ajp--127.0.0.1-8702-1) [41c282da] FINISH, ConnectStorageServerVDSCommand, 
return: {----=100}, log id: 1bca7819

2013-07-10 11:37:51,303 ERROR 
[org.ovirt.engine.core.bll.storage.NFSStorageHelper] (ajp--127.0.0.1-8702-1) 
[41c282da] The connection with details 128.224.147.229:/mnt/data failed because 
of error code 100 and error message is: generalexception

2013-07-10 11:37:51,382 WARN  
[org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand] 
(ajp--127.0.0.1-8702-6) [2612bf14] CanDoAction of action AddNFSStorageDomain 
failed. 
Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_STORAGE_CONNECTION_NOT_EXIST

2013-07-10 11:37:51,483 INFO  
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] 
(ajp--127.0.0.1-8702-1) [2bd59836] Running command: 
RemoveStorageServerConnectionCommand internal: false. Entities affected :  ID: 
aaa0----123456789aaa Type: System

2013-07-10 11:37:51,489 INFO  
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] 
(ajp--127.0.0.1-8702-1) [2bd59836] START, 
DisconnectStorageServerVDSCommand(HostName = ovirt.node, HostId = 
8e47a267-98a9-4184-a92e-26587adae6c4, storagePoolId = 
----, storageType = NFS, connectionList = [{ 
id: null, connection: 128.224.147.229:/mnt/data, iqn: null, vfsType: null, 
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), 
log id: 2f9800d

Please let me know what I was doing wrong.

Thanks,
Hongyi

P.S. If I create the NFS domain on the ovirt.host side (i.e., Export Path is 
ovirt.host:/mnt/data), then everything is okay. Not sure why it failed on the 
ovirt.engine side.
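
A quick sanity check of the export from the host side might look like the 
following (the mount options and test path are only examples; it is also worth 
double-checking that the path in /etc/exports matches the directory that was 
actually created and exported):

  # run on ovirt.host
  showmount -e ovirt.engine                  # is /mnt/data listed at all?
  mkdir -p /tmp/nfstest
  mount -t nfs -o vers=3 ovirt.engine:/mnt/data /tmp/nfstest
  sudo -u vdsm touch /tmp/nfstest/probe      # vdsm (uid 36) must be able to write
  umount /tmp/nfstest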
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] which file system for shared disk?

2013-07-10 Thread Itamar Heim

On 07/10/2013 05:33 PM, Piotr Szubiakowski wrote:

The way that oVirt manages storage domains accessed via FC is very smart.
There is a separate logical volume for each virtual disk. But I think a
logical volume can be touched by only one host at a time. Is it possible
for two hosts to access the same logical volume read/write without
data corruption?


Hence a shared disk over block storage using LVM must be preallocated, 
so no LV changes (lvextend) would be needed.

(Also, it cannot have snapshots, since the image would become qcow.)
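
A rough way to confirm that from a host, with the LV active (the VG/LV names 
are placeholders for the oVirt-generated UUID names):

  lvs -o lv_name,lv_size,lv_attr <storage-domain-vg>
  qemu-img info /dev/<storage-domain-vg>/<disk-image-lv>   # expect "file format: raw"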
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] exception occured when adding NFS storage

2013-07-10 Thread Itamar Heim

On 07/10/2013 07:19 PM, Zhang, Hongyi wrote:

Hi,

I started to learn oVirt by following the oVirt Quick Start Guide.

I used one server as the ovirt.engine, and another server as the
ovirt.host. Both servers run Fedora 18.

After executing the command “engine-setup” on ovirt.engine, I can
get into the admin portal to start the configuration of hosts, storage,
etc. Here is what I did:

1.In the Default data center, I clicked the Guide Me icon. The first
thing is to configure the host. After waiting 4-5 minutes, I saw the
host ovirt.host displayed under the Hosts tab with the Up status.

2.On the ovirt.engine server, I did the following before creating the
storage:

mkdir –p /mnt/data

chown –R 36:36 /mnt/data

Add the following line to /etc/sysconfig/nfs:

 NFS4_SUPPORT=”no”

Add the following line to /etc/exports:

/nmt/data  *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

systemctl restart nfs-server.service

3.Back to the Default data center, clicked the Guide Me icon to start
the configuration of new storage.

4.On the New Storage dialog, set Name field to be NFS-Share, and set the
Export Path field to be ovirt.engine:/mnt/data. Then click OK

After a couple of minutes, an error window popped up: Error: Cannot
add Storage. Internal error, Storage Connection doesn't exist.

 From the /var/log/ ovirt-engine/engine.log, I can see:

2013-07-10 11:37:51,300 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(ajp--127.0.0.1-8702-1) [41c282da] FINISH,
ConnectStorageServerVDSCommand, return:
{----=100}, log id: 1bca7819

2013-07-10 11:37:51,303 ERROR
[org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(ajp--127.0.0.1-8702-1) [41c282da] The connection with details
128.224.147.229:/mnt/data failed because of error code 100 and error
message is: generalexception

2013-07-10 11:37:51,382 WARN
[org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand]
(ajp--127.0.0.1-8702-6) [2612bf14] CanDoAction of action
AddNFSStorageDomain failed.
Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED_STORAGE_CONNECTION_NOT_EXIST

2013-07-10 11:37:51,483 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(ajp--127.0.0.1-8702-1) [2bd59836] Running command:
RemoveStorageServerConnectionCommand internal: false. Entities affected
: ID: aaa0----123456789aaa Type: System

2013-07-10 11:37:51,489 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(ajp--127.0.0.1-8702-1) [2bd59836] START,
DisconnectStorageServerVDSCommand(HostName = ovirt.node, HostId =
8e47a267-98a9-4184-a92e-26587adae6c4, storagePoolId =
----, storageType = NFS, connectionList
= [{ id: null, connection: 128.224.147.229:/mnt/data, iqn: null,
vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null,
nfsTimeo: null };]), log id: 2f9800d

Please let me know what I was doing wrong.

Thanks,

Hongyi

P.S. If I create the NFS domain on the ovirt.host side (i.e., Export
Path is ovirt.host:/mnt/data), then everything is okay. Not sure why it
failed on the ovirt.engine side.



___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



vdsm log from the failure to create the storage domain?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] exception occured when adding NFS storage

2013-07-10 Thread Zhang, Hongyi
Hi Itamar,

No trace was logged in /var/log/vdsm/vdsm.log during the creation of the 
storage on the ovirt.engine side, but vdsm is running:

# systemctl status vdsmd.service
vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled)
   Active: active (running) since Tue 2013-07-09 12:11:13 EDT; 1 day 1h ago
 Main PID: 1871 (respawn)
   CGroup: name=systemd:/system/vdsmd.service
   ├─1871 /bin/bash -e /usr/share/vdsm/respawn --minlifetime 
10 --daemon --masterpid /var/run/vdsm/respawn.pid /usr/share/vdsm/...
   ├─1874 /usr/bin/python /usr/share/vdsm/vdsm
   ├─1894 /usr/bin/sudo -n /usr/bin/python 
/usr/share/vdsm/supervdsmServer.py 9e01f3d6-f6ac-4f19-b863-8f7c613ce899 1874 
/var/ru...
   └─1898 /usr/bin/python /usr/share/vdsm/supervdsmServer.py 
9e01f3d6-f6ac-4f19-b863-8f7c613ce899 1874 /var/run/vdsm/svdsm.pid ...

Thanks,
Hongyi

-Original Message-
From: Itamar Heim [mailto:ih...@redhat.com] 
Sent: Wednesday, July 10, 2013 1:45 PM
To: Zhang, Hongyi
Cc: users@ovirt.org
Subject: Re: [Users] exception occured when adding NFS storage

On 07/10/2013 07:19 PM, Zhang, Hongyi wrote:
 Hi,

 I started to learn oVirt by following the oVirt Quick Start Guide.

 I used one server as the ovirt.engine, and another server as the 
 ovirt.host. Both servers run Fedora 18.

 After executing the command engine-setup on ovirt.engine, I can 
 get into the admin portal to start the configuration of hosts, storage, 
 etc. Here is what I did:

 1.In the Default data center, I clicked the Guide Me icon. The first 
 thing is to configure the host. After waiting 4-5 minutes, I saw 
 the host ovirt.host displayed under the Hosts tab with the Up status.

 2.On the ovirt.engine server, I did the following before creating the
 storage:

 mkdir -p /mnt/data

 chown -R 36:36 /mnt/data

 Add the following line to /etc/sysconfig/nfs:

  NFS4_SUPPORT=no

 Add the following line to /etc/exports:

 /nmt/data  
 *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

 systemctl restart nfs-server.service

 3.Back to the Default data center, clicked the Guide Me icon to start 
 the configuration of new storage.

 4.On the New Storage dialog, set Name field to be NFS-Share, and set 
 the Export Path field to be ovirt.engine:/mnt/data. Then click OK

 After a couple of minutes, an error window popped up: Error: 
 Cannot add Storage. Internal error, Storage Connection doesn't exist.

  From the /var/log/ ovirt-engine/engine.log, I can see:

 2013-07-10 11:37:51,300 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSComm
 and]
 (ajp--127.0.0.1-8702-1) [41c282da] FINISH, 
 ConnectStorageServerVDSCommand, return:
 {----=100}, log id: 1bca7819

 2013-07-10 11:37:51,303 ERROR
 [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
 (ajp--127.0.0.1-8702-1) [41c282da] The connection with details 
 128.224.147.229:/mnt/data failed because of error code 100 and error 
 message is: generalexception

 2013-07-10 11:37:51,382 WARN
 [org.ovirt.engine.core.bll.storage.AddNFSStorageDomainCommand]
 (ajp--127.0.0.1-8702-6) [2612bf14] CanDoAction of action 
 AddNFSStorageDomain failed.
 Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__ADD,ACTION_TYPE_FAILED
 _STORAGE_CONNECTION_NOT_EXIST

 2013-07-10 11:37:51,483 INFO
 [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionComman
 d]
 (ajp--127.0.0.1-8702-1) [2bd59836] Running command:
 RemoveStorageServerConnectionCommand internal: false. Entities 
 affected
 : ID: aaa0----123456789aaa Type: System

 2013-07-10 11:37:51,489 INFO
 [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSC
 ommand]
 (ajp--127.0.0.1-8702-1) [2bd59836] START, 
 DisconnectStorageServerVDSCommand(HostName = ovirt.node, HostId = 
 8e47a267-98a9-4184-a92e-26587adae6c4, storagePoolId = 
 ----, storageType = NFS, 
 connectionList = [{ id: null, connection: 128.224.147.229:/mnt/data, 
 iqn: null,
 vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null,
 nfsTimeo: null };]), log id: 2f9800d

 Please let me know what I was doing wrong.

 Thanks,

 Hongyi

 P.S. If I create the NFS domain on the ovirt.host side (i.e., Export 
 Path is ovirt.host:/mnt/data), then everything is okay. Not sure why it 
 failed on the ovirt.engine side.



 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users


vdsm log from the failure to create the storage domain?
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[Users] oVirt 3.2 - vdsm-4.10.3-18

2013-07-10 Thread Douglas Schilling Landgraf

New VDSM package is available for testing.

Changes
==
- service: make iscsid a systemd dependency (BZ#981906)
- vdsm.spec: update python-pthreading

Where
===
- oVirt testing update REPO (thanks mburns!)
- For koji users:
  f19 - http://koji.fedoraproject.org/koji/taskinfo?taskID=5592587
  f18 - http://koji.fedoraproject.org/koji/taskinfo?taskID=5592780
  el6 - http://koji.fedoraproject.org/koji/taskinfo?taskID=5592800

Feedback is appreciated.

Thanks!

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] FISL14 Conference Report - oVirt and Cloud EcoSystem

2013-07-10 Thread Douglas Schilling Landgraf

On 07/10/2013 04:26 AM, Doron Fediuck wrote:


- Original Message -
| From: Douglas Schilling Landgraf dougsl...@redhat.com
| To: users@ovirt.org
| Sent: Wednesday, July 10, 2013 2:32:44 AM
| Subject: [Users] FISL14 Conference Report - oVirt and Cloud EcoSystem
|
| http://dougsland.livejournal.com/122744.html
|
| See you there next year!
|
| --
| Cheers
| Douglas

Thanks, Douglas.

Looks like you had fun there ;)


:)


Any feedback so far?



Well, I think we have increased the project's visibility.
Last year (our first participation in FISL) we were a small group at a 
booth distributing stickers, without any talks in the main conference 
program because they were not approved. This year, besides an improved 
booth, we had a bigger group (around 20 people, Red Hat + community) and 
many keynotes in the main FISL program, including on making oVirt work with 
other open source projects such as Spacewalk and Gluster.

Besides the growth mentioned above, it's also important to mention
that we heard from some visitors we met last year that oVirt is
already running at their companies, and at least 2 colleges shared that 
they will implement oVirt as a solution.

For me, that's the biggest feedback.

--
Cheers
Douglas
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users