[ovirt-users] planned datacenter maintenance 21.12.2018 20:30 UTC

2018-12-20 Thread Evgheni Dereveanchin
Hi everyone,

The network switch stack used by the oVirt PHX datacenter needs a reboot,
which is scheduled for tomorrow. This is expected to be a quick task, yet it
may cut all networking, including shared storage access, for all of our
hypervisors for a couple of minutes.

For this reason I'll shut down non-critical services beforehand and pause
CI to minimize I/O activity and protect against potential VM disk
corruption.

Maintenance window: 21.12.2018  20:30 UTC - 21:30 UTC

Services that may be unreachable for short periods of time during this
outage are:
* Package repositories
* Glance image repository
* Jenkins CI

Other services such as the website, gerrit and mailing lists are not
affected and will be operating as usual.

-- 
Regards,
Evgheni Dereveanchin
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3YCWGTYK7W52EVXYJGEHXOIC5HB27UHO/


[ovirt-users] Re: Backup & Restore

2018-12-20 Thread Torsten Stolpmann

On 20.12.2018 07:41, Yedidyah Bar David wrote:

On Wed, Dec 19, 2018 at 1:35 PM Torsten Stolpmann
 wrote:


On 19.12.2018 11:54, Yedidyah Bar David wrote:

On Wed, Dec 19, 2018 at 12:34 PM Torsten Stolpmann
 wrote:


On 19.12.2018 08:01, Yedidyah Bar David wrote:

On Tue, Dec 18, 2018 at 5:20 PM Torsten Stolpmann
 wrote:


Hi Yedidyah,

please find the logs at the following URL:
http://www.klaros-testmanagement.com/files/ovirt/ovirt-restore-logs.tar.gz

Let me know once you received them safely so I can remove them again.


Done.



Thanks, removed.



I also added the restore.log containing the actual error which occurred
during the restore.

Since this was a clean system I restored on, the setup has been executed
after the database restore, so the setup logs probably contain nothing
of interest. I added them anyway.


Sorry I wasn't clear enough. I meant the setup logs on the machine used
to create the backup. So that I can try to see why your backup contained
this function. Do you still have these by any chance?

Indeed, I can't see anything wrong in the logs above.


Sorry for misunderstanding, this totally makes sense.

I added the setup logs I found at:
http://www.klaros-testmanagement.com/files/ovirt/ovirt-setup-logs.tar.gz

Again, please let me know once I can remove them.


Done.


Thanks, removed.





Please let me know if there is anything I can add to this.


If you do not have setup logs from the original machine, at least try
to think about its history and tell us notable points - including entire
version history, setup/upgrade (or similar) problems you had there (and
perhaps worked around), etc.



We started with one of the first 4.0 releases and updated the system on
a regular basis since then. We almost never skipped a release.


The last log there is ovirt-engine-setup-20171231170651-lb3q89.log ,
meaning it was last upgraded almost a year ago. Were there indeed no
more upgrades since?



20171231170651 is most probably the date when we installed the last
major release (4.2.0). We did a lot more updates since and I now have a
suspicion what went wrong here.

We naively assumed that a yum update is sufficient for a minor upgrade
of the engine installation.


:-(


Rereading the documentation I think we were
missing explicit engine-setup calls after each minor upgrade.


Indeed



It may well be intentional that the extra call to engine-setup is not
part of the yum update.


engine-setup might need to ask the user questions, so we do not run it
inside yum update.

As long as it is clear to users that this is required, this is totally OK 
for me.



I think in this case it would be a good
idea to warn users that this step has not yet been taken and the update
is not completed yet.


Do you think you would have noticed? It's not that hard to add such a
message during yum update, but I'm not sure it would be that helpful.


Yes, I would probably have noticed that. Sooner or later ... :)


Perhaps we can also make the engine check e.g. if the setup package is
newer than itself, and warn the user in the admin ui.

I think both are good ideas. Some applications even go so far as to 
lock users out of the GUI until a necessary database migration has 
been executed by an administrator. Your mileage may vary here, but this 
is the solution I would favor.




Could this be the cause for the missing logs and behavior discrepancies?

If yes, would another call to engine-setup in the current state fix this
and allow us to produce correct backups in the future?


If you still have the source machine, then yes, it should be enough.
Run engine-setup, then engine-backup again to backup, then restore
on the new machine.
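The sequence suggested here (engine-setup on the source machine, a fresh backup, then restore on the new machine) can be sketched as a small shell script. The file names are illustrative, not fixed oVirt locations, and the commands only run where the oVirt engine tools are actually installed:

```shell
#!/bin/sh
# Sketch of the suggested sequence; BACKUP_FILE and LOG_FILE are
# illustrative paths, not fixed oVirt locations.
BACKUP_FILE="/root/engine-backup-$(date +%Y%m%d).tar.gz"
LOG_FILE="/root/engine-backup.log"

if command -v engine-backup >/dev/null 2>&1; then
    # 1) On the source machine, bring the setup up to date first:
    #      engine-setup
    # 2) Then take a fresh backup (database plus configuration files):
    engine-backup --mode=backup --scope=all \
        --file="$BACKUP_FILE" --log="$LOG_FILE"
    # 3) Copy the file to the new machine and restore there:
    #      engine-backup --mode=restore --file=... --log=... \
    #          --provision-db --restore-permissions
    #    followed by engine-setup on the new machine.
    result="ran"
else
    result="skipped (engine-backup not installed on this machine)"
fi
echo "$result"
```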


Thanks, will do.




The log indicates the machine was upgraded to ovirt-engine-4.2.0.2 ,
which didn't remove the uuid functions. The bug I linked at before,
1515635, was fixed in 4.2.1.

Based on this, I think the best solution for now would be the patch
you suggested. Would you like to open a bug for this and push a patch
to gerrit? I can do this as well if you prefer. Bug summary line
should probably be something like:

engine-setup fails after restoring a backup taken with 4.2.0


I currently fear that would only cure the symptom.


Another solution would be to try using the same engine version during
restore (4.2.0.2), and only upgrade later. This is a bit hard, because
we do not have separate repos for each version, although they do
include all released versions - so you can try e.g.:

yum install ovirt-engine-4.2.0.2-1.el7

(meaning, after you remove existing 4.2.7, or you can try yum downgrade).

I didn't try this myself, not sure how well it would work. There might
be older dependencies to handle, and/or it might break due to too-new
stuff.
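To avoid restoring a backup into a newer engine in the first place, one could compare the version recorded with the backup against the installed package before running the restore. This is only a sketch: the backup version is hard-coded from this thread, whereas in practice it would be read from the backup archive's metadata, and the fallback version is purely illustrative:

```shell
#!/bin/sh
# Sketch: compare a backup's engine version with the installed one.
# BACKUP_VERSION is hard-coded from this thread for illustration.
BACKUP_VERSION="4.2.0.2"

if command -v rpm >/dev/null 2>&1 && rpm -q ovirt-engine >/dev/null 2>&1; then
    INSTALLED_VERSION="$(rpm -q --qf '%{VERSION}' ovirt-engine)"
else
    # Illustrative fallback for machines without ovirt-engine installed.
    INSTALLED_VERSION="4.2.7.5"
fi

if [ "$BACKUP_VERSION" = "$INSTALLED_VERSION" ]; then
    verdict="match"
else
    verdict="mismatch: backup=$BACKUP_VERSION installed=$INSTALLED_VERSION"
fi
echo "$verdict"
```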


I don't think this is necessary.


Obviously, the best solution is to upgrade more often and backup more
often, and have a smaller difference between backup version and restore
version, ideally no difference. But I realize this does not always

[ovirt-users] Re: Acquire an XML dump of a VM oVirt?

2018-12-20 Thread Jacob Green
What if you cannot run the VM, so it's not running on any specific host,
but you still want the XML to identify the information?



Thank you.


On 12/20/2018 09:10 AM, Benny Zlotnik wrote:

You can run `virsh -r dumpxml  ` on the relevant host
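A minimal sketch of that command, with a placeholder VM name (the archive stripped the placeholder from the quoted command) and a guard so it is safe to run on a machine without libvirt:

```shell
#!/bin/sh
# Sketch: dump the libvirt XML of a running VM from the host it runs on.
# VM_NAME is a placeholder; the read-only flag (-r) avoids the
# authentication a writable virsh connection would need on oVirt hosts.
VM_NAME="myvm"
if command -v virsh >/dev/null 2>&1; then
    if virsh -r dumpxml "$VM_NAME" > "/tmp/${VM_NAME}.xml" 2>/dev/null; then
        outcome="dumped to /tmp/${VM_NAME}.xml"
    else
        outcome="virsh present, but no running VM named $VM_NAME"
    fi
else
    outcome="virsh not installed on this machine"
fi
echo "$outcome"
```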

On Thu, Dec 20, 2018, 16:17 Jacob Green  wrote:


 How does one get an XML dump of a VM from ovirt? I have seen
ovirt
do it in the engine.log, but not sure how to force it to generate one
when I need it.


Thank you.

-- 
Jacob Green


Systems Admin

American Alloy Steel

713-300-5690





--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690



[ovirt-users] Re: Acquire an XML dump of a VM oVirt?

2018-12-20 Thread Benny Zlotnik
You can run `virsh -r dumpxml  `  on the relevant host

On Thu, Dec 20, 2018, 16:17 Jacob Green wrote:
> How does one get an XML dump of a VM from ovirt? I have seen ovirt
> do it in the engine.log, but not sure how to force it to generate one
> when I need it.
>
>
> Thank you.
>
> --
> Jacob Green
>
> Systems Admin
>
> American Alloy Steel
>
> 713-300-5690


[ovirt-users] Acquire an XML dump of a VM oVirt?

2018-12-20 Thread Jacob Green
How does one get an XML dump of a VM from ovirt? I have seen ovirt 
do it in the engine.log, but not sure how to force it to generate one 
when I need it.



Thank you.

--
Jacob Green

Systems Admin

American Alloy Steel

713-300-5690


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

On 2018-12-20 07:53, Stefan Wolf wrote:

I mounted it during the hosted-engine --deploy process.
I selected glusterfs
and entered server:/engine.
I didn't enter any mount options.
Yes, it is enabled for both. I didn't get errors for the second one, but
maybe it doesn't check after the first failure.


try server:/engine -o direct-io-mode=enable
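Spelled out as a mount command; server:/engine and the mountpoint are placeholders for your environment, and the command only runs if the GlusterFS FUSE client is installed:

```shell
#!/bin/sh
# Sketch of the suggested mount with direct-io-mode enabled.
# SERVER and MOUNTPOINT are placeholders for your environment.
SERVER="server:/engine"
MOUNTPOINT="/mnt/engine"
if command -v mount.glusterfs >/dev/null 2>&1; then
    mkdir -p "$MOUNTPOINT"
    if mount -t glusterfs -o direct-io-mode=enable "$SERVER" "$MOUNTPOINT"; then
        mounted="mounted"
    else
        mounted="mount failed (is $SERVER reachable?)"
    fi
else
    mounted="glusterfs FUSE client not installed on this machine"
fi
echo "$mounted"
```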


[ovirt-users] Re: GlusterFS Storage - Size limited at 2 GB

2018-12-20 Thread v . levet
Could someone help me?


[ovirt-users] Re: Hyperconvergend Setup stuck

2018-12-20 Thread Stefan Wolf
It is gdeploy 2.0.8:

rpm -qa |grep gdeploy
gdeploy-2.0.8-1.el7.noarch




[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
I mounted it during the hosted-engine --deploy process.
I selected glusterfs
and entered server:/engine.
I didn't enter any mount options.
Yes, it is enabled for both. I didn't get errors for the second one, but maybe it
doesn't check after the first failure.


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

On 2018-12-20 07:14, Stefan Wolf wrote:

Yes, I think this too, but as you can see at the top:

[root@kvm380 ~]# gluster volume info
...
performance.strict-o-direct: on

...
it was already set

I did a one-node cluster setup with oVirt, and it produced this configuration:

Volume Name: engine
Type: Distribute
Volume ID: a40e848b-a8f1-4990-9d32-133b46db6f1d
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: kvm360.durchhalten.intern:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

Could there be another reason?


are you mounting via the gluster GUI? I'm not sure how it handles 
mounting of manual gluster volumes, but the direct-io-mode=enable mount 
option comes to mind. I assume direct-io is also enabled on the other 
volume? It needs to be on all of them.
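Applying the option to every volume can be scripted; the volume names below are placeholders for your environment. Note that this thread shows working setups with network.remote-dio both on and off, so only the strict-o-direct setting is applied here:

```shell
#!/bin/sh
# Sketch: enable strict O_DIRECT on every oVirt-related gluster volume.
# VOLUMES is a placeholder list for your environment.
VOLUMES="engine data"
if command -v gluster >/dev/null 2>&1; then
    for vol in $VOLUMES; do
        gluster volume set "$vol" performance.strict-o-direct on
    done
    applied="attempted on: $VOLUMES"
else
    applied="gluster CLI not installed on this machine"
fi
echo "$applied"
```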



[ovirt-users] Re: Disk full

2018-12-20 Thread suporte
We moved the VM disk to the second gluster. On the ovirt-engine I cannot see 
the old disk, only the disk attached to the VM on the second gluster. 
We keep getting errors about the disk being full. 
Using CLI I can see the image on the first gluster volume. So ovirt-engine was 
able to move the disk to the second volume but did not delete it from the first 
volume. 

# gluster volume info gv0 

Volume Name: gv0 
Type: Distribute 
Volume ID: 4aaffd24-553b-4a85-8c9b-386b02b30b6f 
Status: Started 
Snapshot Count: 0 
Number of Bricks: 1 
Transport-type: tcp 
Bricks: 
Brick1: gfs1.growtrade.pt:/home/brick1 
Options Reconfigured: 
features.shard-block-size: 512MB 
network.ping-timeout: 30 
storage.owner-gid: 36 
storage.owner-uid: 36 
user.cifs: off 
features.shard: off 
cluster.shd-wait-qlength: 1 
cluster.shd-max-threads: 8 
cluster.locking-scheme: granular 
cluster.data-self-heal-algorithm: full 
cluster.server-quorum-type: server 
cluster.quorum-type: auto 
cluster.eager-lock: enable 
network.remote-dio: enable 
performance.low-prio-threads: 32 
performance.stat-prefetch: off 
performance.io-cache: off 
performance.read-ahead: off 
performance.quick-read: off 
transport.address-family: inet 
performance.readdir-ahead: on 
nfs.disable: on 


Thanks 


From: "Sahina Bose"  
To: supo...@logicworks.pt, "Krutika Dhananjay"  
Cc: "users"  
Sent: Thursday, December 20, 2018 11:53:39 AM 
Subject: Re: [ovirt-users] Disk full 

Is it possible for you to delete the old disks from storage domain 
(you can use the ovirt-engine UI). Do you continue to see space used 
despite doing that? 
I see that you are on a much older version of gluster. Have you 
considered updating to 3.12? 

Please also provide output of "gluster volume info " 

On Thu, Dec 20, 2018 at 3:56 PM  wrote: 
> 
> Yes, I can see the image on the volume. 
> Gluster version: 
> glusterfs-client-xlators-3.8.12-1.el7.x86_64 
> glusterfs-cli-3.8.12-1.el7.x86_64 
> glusterfs-api-3.8.12-1.el7.x86_64 
> glusterfs-fuse-3.8.12-1.el7.x86_64 
> glusterfs-server-3.8.12-1.el7.x86_64 
> glusterfs-libs-3.8.12-1.el7.x86_64 
> glusterfs-3.8.12-1.el7.x86_64 
> 
> 
> Thanks 
> 
> José 
> 
>  
> From: "Sahina Bose"  
> To: supo...@logicworks.pt 
> Cc: "users"  
> Sent: Wednesday, December 19, 2018 4:13:16 PM 
> Subject: Re: [ovirt-users] Disk full 
> 
> Do you see the image on the gluster volume mount? Can you provide the gluster 
> volume options and version of gluster? 
> 
> On Wed, 19 Dec 2018 at 4:04 PM,  wrote: 
>> 
>> Hi, 
>> 
>> I have an all-in-one installation with 2 gluster volumes. 
>> The disk of one VM filled up the brick, which is a partition. That partition 
>> has 0% free disk space. 
>> I moved the disk of that VM to the other gluster volume, the VM is working 
>> with the disk on the other gluster volume. 
>> When I move the disk, it didn't delete it from the brick, the engine keeps 
>> complaining that there is no more disk space on that volume. 
>> What can I do? 
>> Is there a way to prevent this in the future? 
>> 
>> Many thanks 
>> 
>> José 
>> 
>> 
>> 
>> -- 
>>  
>> Jose Ferradeira 
>> http://www.logicworks.pt 


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Yes, I think this too, but as you can see at the top:
>[root@kvm380 ~]# gluster volume info
>...
> performance.strict-o-direct: on
...
it was already set

I did a one-node cluster setup with oVirt, and it produced this configuration:

Volume Name: engine
Type: Distribute
Volume ID: a40e848b-a8f1-4990-9d32-133b46db6f1d
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: kvm360.durchhalten.intern:/gluster_bricks/engine/engine
Options Reconfigured:
cluster.eager-lock: enable
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
user.cifs: off
network.ping-timeout: 30
network.remote-dio: off
performance.strict-o-direct: on
performance.low-prio-threads: 32
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
transport.address-family: inet
nfs.disable: on

Could there be another reason?


[ovirt-users] Re: Upload via GUI to VMSTORE possible but not ISO Domain

2018-12-20 Thread Alex McWhirter
I've always just used engine-iso-uploader on the engine host to upload
images to the ISO domain, and never really noticed that it doesn't "appear"
in the GUI. I very rarely need to upload ISOs, so I guess it has
just never really been an issue. I know the disk upload GUI options are
for VM HDD disk images, not ISOs, which is why I imagine it
doesn't show up.
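For reference, the engine-iso-uploader workflow mentioned above looks roughly like this; the ISO path and domain name are placeholders, and the commands only run where the tool is installed:

```shell
#!/bin/sh
# Sketch of the engine-iso-uploader workflow; ISO_FILE and ISO_DOMAIN
# are placeholders for your environment.
ISO_FILE="/tmp/example.iso"
ISO_DOMAIN="ISO_DOMAIN"
if command -v engine-iso-uploader >/dev/null 2>&1; then
    # List the ISO storage domains the engine knows about, then upload.
    engine-iso-uploader list
    engine-iso-uploader --iso-domain="$ISO_DOMAIN" upload "$ISO_FILE"
    uploader="ran"
else
    uploader="engine-iso-uploader not installed on this machine"
fi
echo "$uploader"
```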


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Alex McWhirter

You need to set strict direct I/O on the volumes:

performance.strict-o-direct: on


[ovirt-users] Re: Disk full

2018-12-20 Thread Sahina Bose
Is it possible for you to delete the old disks from storage domain
(you can use the ovirt-engine UI). Do you continue to see space used
despite doing that?
I see that you are on a much older version of gluster. Have you
considered updating to 3.12?

Please also provide output of "gluster volume info "

On Thu, Dec 20, 2018 at 3:56 PM  wrote:
>
> Yes, I can see the image on the volume.
> Gluster version:
> glusterfs-client-xlators-3.8.12-1.el7.x86_64
> glusterfs-cli-3.8.12-1.el7.x86_64
> glusterfs-api-3.8.12-1.el7.x86_64
> glusterfs-fuse-3.8.12-1.el7.x86_64
> glusterfs-server-3.8.12-1.el7.x86_64
> glusterfs-libs-3.8.12-1.el7.x86_64
> glusterfs-3.8.12-1.el7.x86_64
>
>
> Thanks
>
> José
>
> 
> From: "Sahina Bose" 
> To: supo...@logicworks.pt
> Cc: "users" 
> Sent: Wednesday, December 19, 2018 4:13:16 PM
> Subject: Re: [ovirt-users] Disk full
>
> Do you see the image on the gluster volume mount? Can you provide the gluster 
> volume options and version of gluster?
>
> On Wed, 19 Dec 2018 at 4:04 PM,  wrote:
>>
>> Hi,
>>
>> I have an all-in-one installation with 2 gluster volumes.
>> The disk of one VM filled up the brick, which is a partition. That partition 
>> has 0% free disk space.
>> I moved the disk of that VM to the other gluster volume, the VM is working 
>> with the disk on the other gluster volume.
>> When I move the disk, it didn't delete it from the brick, the engine keeps 
>> complaining that there is no more disk space on that volume.
>> What can I do?
>> Is there a way to prevent this in the future?
>>
>> Many thanks
>>
>> José
>>
>>
>>
>> --
>> 
>> Jose Ferradeira
>> http://www.logicworks.pt


[ovirt-users] Re: Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Here is what I found in the logs of the hosts:

2018-12-20 12:34:04,824+0100 INFO  (periodic/0) [vdsm.api] START 
repoStats(domains=()) from=internal, 
task_id=09235382-a5b5-48da-853d-f94cae092684 (api:46)
2018-12-20 12:34:04,825+0100 INFO  (periodic/0) [vdsm.api] FINISH repoStats 
return={u'20651d3d-08d7-482a-ae4e-7cd0e33cc907': {'code': 399, 'actual': True, 
'version': -1, 'acquired': False, 'delay': '0', 'lastCheck': '6.1', 'valid': 
False}, u'ae9e4cbd-3946-481d-b01a-e8a38bf00efb': {'code': 0, 'actual': True, 
'version': 4, 'acquired': True, 'delay': '0.0013974', 'lastCheck': '1.1', 
'valid': True}} from=internal, task_id=09235382-a5b5-48da-853d-f94cae092684 
(api:52)
2018-12-20 12:34:04,826+0100 INFO  (periodic/0) [vdsm.api] START 
multipath_health() from=internal, task_id=8f6166cb-aa41-4f46-823d-d38e4e85f02a 
(api:46)
2018-12-20 12:34:04,826+0100 INFO  (periodic/0) [vdsm.api] FINISH 
multipath_health return={} from=internal, 
task_id=8f6166cb-aa41-4f46-823d-d38e4e85f02a (api:52)
2018-12-20 12:34:04,832+0100 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'20651d3d-08d7-482a-ae4e-7cd0e33cc907', 
spUUID=u'----', 
imgUUID=u'c1ebc7ad-dfb6-4cc1-8e24-40f0be3f4afe', 
leafUUID=u'e7e76dd8-d166-46a0-9761-fa6391aa047b', allowIllegal=False) 
from=::1,55348, task_id=996206fc-65eb-4056-b3b9-2ac0e1780c2c (api:46)
2018-12-20 12:34:04,836+0100 ERROR (periodic/0) [root] failed to retrieve 
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted 
Engine setup finished? (api:196)
2018-12-20 12:34:04,847+0100 ERROR (jsonrpc/4) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:108)
2018-12-20 12:34:04,847+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Storage Domain target is unsupported: () from=::1,55348, 
task_id=996206fc-65eb-4056-b3b9-2ac0e1780c2c (api:50)
2018-12-20 12:34:04,847+0100 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='996206fc-65eb-4056-b3b9-2ac0e1780c2c') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
return fn(*args, **kargs)
  File "", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method
ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3173, in 
prepareImage
dom = sdCache.produce(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 110, in 
produce
domain.getRealDomain()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 51, in 
getRealDomain
return self._cache._realProduce(self._sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 134, in 
_realProduce
domain = self._findDomain(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sdc.py", line 151, in 
_findDomain
return findMethod(sdUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterSD.py", line 55, 
in findDomain
return GlusterStorageDomain(GlusterStorageDomain.findDomainPath(sdUUID))
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 400, in 
__init__
validateFileSystemFeatures(manifest.sdUUID, manifest.mountpoint)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileSD.py", line 110, in 
validateFileSystemFeatures
raise se.StorageDomainTargetUnsupported()
StorageDomainTargetUnsupported: Storage Domain target is unsupported: ()
2018-12-20 12:34:04,847+0100 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='996206fc-65eb-4056-b3b9-2ac0e1780c2c') aborting: Task is aborted: 
'Storage Domain target is unsupported: ()' - code 399 (task:1181)
2018-12-20 12:34:04,848+0100 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Storage Domain target is unsupported: () (dispatcher:82)
2018-12-20 12:34:04,848+0100 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
Image.prepare failed (error 399) in 0.02 seconds (__init__:573)

especially this part 

2018-12-20 12:34:04,847+0100 ERROR (jsonrpc/4) [storage.fileSD] Underlying file 
system doesn't supportdirect IO (fileSD:108)
2018-12-20 12:34:04,847+0100 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Storage Domain target is unsupported: () from=::1,55348, 
task_id=996206fc-65eb-4056-b3b9-2ac0e1780c2c (api:50)

and I am not sure why it is asking this:
2018-12-20 12:34:04,836+0100 ERROR (periodic/0) [root] failed to retrieve 
Hosted Engine HA score '[Errno 2] No such file or directory'Is the Hosted 
Engine setup finished? (api:196)
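The "Underlying file system doesn't support direct IO" error comes from vdsm's direct-I/O check of the mountpoint, which can be reproduced by hand with dd. The mountpoint below is a placeholder (point it at the gluster mount vdsm complains about); /tmp is used here only so the sketch runs anywhere, and tmpfs legitimately fails this test:

```shell
#!/bin/sh
# Sketch: reproduce vdsm's direct-I/O probe on a mountpoint.
# MOUNTPOINT is a placeholder; on tmpfs it is expected to report
# "NOT supported", just like vdsm would.
MOUNTPOINT="${TMPDIR:-/tmp}"
TESTFILE="$MOUNTPOINT/__directio_probe_$$"
if dd if=/dev/zero of="$TESTFILE" bs=4096 count=1 oflag=direct 2>/dev/null; then
    directio="supported"
else
    directio="NOT supported (vdsm raises StorageDomainTargetUnsupported)"
fi
rm -f "$TESTFILE"
echo "direct I/O on $MOUNTPOINT: $directio"
```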


[ovirt-users] Active Storage Domains as Problematic

2018-12-20 Thread Stefan Wolf
Hello,

 

I've set up a test lab with 3 nodes installed with CentOS 7.
I configured GlusterFS manually. GlusterFS is up and running.
I configured manualy gluster fs. Glusterfs is up and running

 

[root@kvm380 ~]# gluster peer status

Number of Peers: 2

 

Hostname: kvm320.durchhalten.intern

Uuid: dac066db-55f7-4770-900d-4830c740ffbf

State: Peer in Cluster (Connected)

 

Hostname: kvm360.durchhalten.intern

Uuid: 4291be40-f77f-4f41-98f6-dc48fd993842

State: Peer in Cluster (Connected)

[root@kvm380 ~]# gluster volume info

 

Volume Name: data

Type: Replicate

Volume ID: 3586de82-e504-4c62-972b-448abead13d3

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: kvm380.durchhalten.intern:/gluster/data

Brick2: kvm360.durchhalten.intern:/gluster/data

Brick3: kvm320.durchhalten.intern:/gluster/data

Options Reconfigured:

storage.owner-uid: 36

storage.owner-gid: 36

features.shard: on

performance.low-prio-threads: 32

performance.strict-o-direct: on

network.ping-timeout: 30

user.cifs: off

network.remote-dio: off

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

cluster.eager-lock: enable

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off

 

Volume Name: engine

Type: Replicate

Volume ID: dcfbd322-5dd0-4bfe-a775-99ecc79e1416

Status: Started

Snapshot Count: 0

Number of Bricks: 1 x 3 = 3

Transport-type: tcp

Bricks:

Brick1: kvm380.durchhalten.intern:/gluster/engine

Brick2: kvm360.durchhalten.intern:/gluster/engine

Brick3: kvm320.durchhalten.intern:/gluster/engine

Options Reconfigured:

storage.owner-uid: 36

storage.owner-gid: 36

features.shard: on

performance.low-prio-threads: 32

performance.strict-o-direct: on

network.remote-dio: off

network.ping-timeout: 30

user.cifs: off

performance.quick-read: off

performance.read-ahead: off

performance.io-cache: off

cluster.eager-lock: enable

transport.address-family: inet

nfs.disable: on

performance.client-io-threads: off

 

 

After that I deployed a self-hosted engine
and added the two other hosts. At the beginning it looked good, but without
changing anything I got the following errors on two hosts:

 


! 20.12.2018 11:35:05  Failed to connect Host kvm320.durchhalten.intern to Storage Pool Default

! 20.12.2018 11:35:05  Host kvm320.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.

X 20.12.2018 11:35:05  Host kvm320.durchhalten.intern reports about one of the Active Storage Domains as Problematic.

! 20.12.2018 11:35:05  Kdump integration is enabled for host kvm320.durchhalten.intern, but kdump is not configured properly on host.

! 20.12.2018 11:35:04  Failed to connect Host kvm360.durchhalten.intern to Storage Pool Default

! 20.12.2018 11:35:04  Host kvm360.durchhalten.intern cannot access the Storage Domain(s) hosted_storage attached to the Data Center Default. Setting Host state to Non-Operational.

X 20.12.2018 11:35:04  Host kvm360.durchhalten.intern reports about one of the Active Storage Domains as Problematic.



 

Before GlusterFS I had a setup with NFS on a 4th server.

 

Where is the problem?

 

thx



[ovirt-users] Upload via GUI to VMSTORE possible but not ISO Domain

2018-12-20 Thread Ralf Schenk
Hello,

I can successfully upload disks to my data domain ("VMSTORE"), which is
NFS. I can also upload .iso files there (no problems with SSL or
imageio-proxy). Why is the ISO domain not available for upload via the GUI?
Does a separate ISO domain still make sense? The ISO domain is up and
running. And is it possible to filter out the hosted_storage domain, where the
engine lives, from the upload targets?



-- 


*Ralf Schenk*
fon +49 (0) 24 05 / 40 83 70
fax +49 (0) 24 05 / 40 83 759
mail *r...@databay.de* 
    
*Databay AG*
Jens-Otto-Krag-Straße 11
D-52146 Würselen
*www.databay.de* 

Sitz/Amtsgericht Aachen • HRB:8437 • USt-IdNr.: DE 210844202
Vorstand: Ralf Schenk, Dipl.-Ing. Jens Conze, Aresch Yavari, Dipl.-Kfm.
Philipp Hermanns
Aufsichtsratsvorsitzender: Wilhelm Dohmen




[ovirt-users] Re: Disk full

2018-12-20 Thread suporte
Yes, I can see the image on the volume. 
Gluster version: 
glusterfs-client-xlators-3.8.12-1.el7.x86_64 
glusterfs-cli-3.8.12-1.el7.x86_64 
glusterfs-api-3.8.12-1.el7.x86_64 
glusterfs-fuse-3.8.12-1.el7.x86_64 
glusterfs-server-3.8.12-1.el7.x86_64 
glusterfs-libs-3.8.12-1.el7.x86_64 
glusterfs-3.8.12-1.el7.x86_64 


Thanks 

José 


From: "Sahina Bose"  
To: supo...@logicworks.pt 
Cc: "users"  
Sent: Wednesday, December 19, 2018 4:13:16 PM 
Subject: Re: [ovirt-users] Disk full 

Do you see the image on the gluster volume mount? Can you provide the gluster 
volume options and version of gluster? 

On Wed, 19 Dec 2018 at 4:04 PM, < supo...@logicworks.pt > wrote: 



Hi, 

I have an all-in-one installation with 2 gluster volumes. 
The disk of one VM filled up the brick, which is a partition. That partition 
has 0% free disk space. 
I moved the disk of that VM to the other gluster volume, the VM is working with 
the disk on the other gluster volume. 
When I move the disk, it didn't delete it from the brick, the engine keeps 
complaining that there is no more disk space on that volume. 
What can I do? 
Is there a way to prevent this in the future? 

Many thanks 

José 



-- 

Jose Ferradeira 
http://www.logicworks.pt 



