Hi Dimitry,
Sorry for not being clearer, I missed the part where the ls was from the
underlying brick. Then I clearly have a different issue.
Best Olaf
This is due to sharding.
I joined oVirt 6 months ago and this option was still there.
Best Regards,
Strahil Nikolov

On May 9, 2019 18:04, Dmitry Filonov wrote:
>
> The data chunks are under the .glusterfs folder on bricks now, not a single
> huge file you can easily access from a brick.
> Not sure when that change was introduced, though.
Sorry, it wasn't clear from your post, and earlier in the thread
tau...@sanren.ac.za was clearly listing data on the brick, not on a mounted
glusterfs volume.
--
Dmitry Filonov
Linux Administrator
SBGrid Core | Harvard Medical School
250 Longwood Ave, SGM-114
Boston, MA 02115
On Thu, May 9, 2019
This listing is from a gluster mount, not from the underlying brick; the mount
should combine all parts from the underlying .glusterfs folder. I believe when
you make use of features.shard the files should be broken up into pieces
according to the shard size.
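(For reference, a quick way to check whether sharding is in play and what the
chunk size is; "myvol" below is just a placeholder volume name.)

  # check whether sharding is enabled and what the chunk size is
  gluster volume get myvol features.shard
  gluster volume get myvol features.shard-block-size
  # enabling it only affects files created after the change
  gluster volume set myvol features.shard on
  gluster volume set myvol features.shard-block-size 64MB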
Olaf
The data chunks are under the .glusterfs folder on bricks now, not a single
huge file you can easily access from a brick.
Not sure when that change was introduced, though.
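If it helps: on the sharded volumes I have seen, the first chunk stays at the
file's normal path on the brick and the remaining chunks are named <GFID>.<n>
under the brick's hidden .shard directory, while .glusterfs holds hardlinks by
GFID. A rough sketch for locating the chunks of one image (all paths below are
placeholders):

  # read the file's GFID from its brick copy
  getfattr -n trusted.gfid -e hex /gluster/brick1/vmstore/<image-uuid>
  # list the remaining chunks named after that GFID
  ls -lh /gluster/brick1/vmstore/.shard/ | grep <gfid>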
Fil
On Thu, May 9, 2019, 10:43 AM wrote:
> It looks like I've got the exact same issue:
> drwxr-xr-x. 2 vdsm kvm 4.0K Mar 29
It looks like I've got the exact same issue:
drwxr-xr-x. 2 vdsm kvm 4.0K Mar 29 16:01 .
drwxr-xr-x. 22 vdsm kvm 4.0K Mar 29 18:34 ..
-rw-rw. 1 vdsm kvm 64M Feb 4 01:32 44781cef-173a-4d84-88c5-18f7310037b4
-rw-rw. 1 vdsm kvm 1.0M Oct 16 2018
44781cef-173a-4d84-88c5-18f7310037b4.lease
On Fri, Apr 12, 2019, 12:16 wrote:
> Adding to what my colleague and I shared
>
> I am able to locate the disk images of the VMs. I copied some of them and
> tried to boot them from another standalone KVM host, however booting the
> disk images wasn't successful as it landed in rescue mode.
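One thing worth checking before trying to boot a copied image is whether the
copy is complete at all: if it was taken straight from a brick of a sharded
volume, it only contains the first chunk and will not boot. A quick sanity
check (the file name is taken from the listing earlier in the thread; adjust
it to your copy):

  # report the format and virtual vs. actual size of the copied image
  qemu-img info 44781cef-173a-4d84-88c5-18f7310037b4
  # consistency check (only meaningful for qcow2 images)
  qemu-img check 44781cef-173a-4d84-88c5-18f7310037b4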
On Fri, Apr 12, 2019 at 11:16 AM wrote:
> Adding to what my colleague and I shared
>
> I am able to locate the disk images of the VMs. I copied some of them and
> tried to boot them from another standalone KVM host, however booting the
> disk images wasn't successful as it landed in rescue
Adding to what my colleague and I shared
I am able to locate the disk images of the VMs. I copied some of them and tried
to boot them from another standalone KVM host, however booting the disk images
wasn't successful as it landed in rescue mode. The strange part is that the VM
disk images
On Thu 11 Apr 2019, 19:12 Sakhi Hadebe wrote:
> What happened is the engine's root filesystem had filled up. My colleague
> tried to resize the root lvm. The engine then did not come back. In trying
> to resolve that he cleaned up the engine and tried to re-install it, no
> luck in doing that.
The real image is defined within the XML stanza in the vdsm.log from when the
VM was last started.
So if you remember when the HostedEngine was last rebooted, you can check the
vdsm.log on the host.
From there, check the cluster for the file.
If it's missing, deploy the HostedEngine again.
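A rough sketch of what that log check could look like (the log path is the
oVirt default; the attribute quoting in the grep pattern is an assumption about
how the XML appears in the log):

  # pull disk paths out of the libvirt XML that vdsm logged when the VM started
  # (rotated logs under /var/log/vdsm/ may need zgrep instead)
  grep -o "<source file='[^']*'" /var/log/vdsm/vdsm.log | sort -u
  # or jump to the HostedEngine start-up entry and read the XML around it
  grep -n 'HostedEngine' /var/log/vdsm/vdsm.log | head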
What happened is the engine's root filesystem had filled up. My colleague
tried to resize the root lvm. The engine then did not come back. In trying
to resolve that he cleaned up the engine and tried to re-install it, no
luck in doing that.
That brought down all the VMs. All VMs are down. we
On Thu, Apr 11, 2019 at 9:46 AM Sakhi Hadebe wrote:
> Hi,
>
> We have a situation where the HostedEngine was cleaned up and the VMs are
> no longer running. Looking at the logs we can see the drive files as:
>
Do you have any guess on what really happened?
Are you sure that the disks really