> I appreciate the quick replies. Yes very basic. Single host/node with
> NFS and local storage.
> I will try to reinstall the host and import. I haven't had to do this yet!
> On Sat, Sep 18, 2021, 5:26 PM Jayme wrote:
>> It sounds like your setup is fair
Wesley Stewart wrote:
> I believe I should have the dump from the initial engine-setup. Can that
> be used?
> Also, I'm only running about 8 vms and need to start over for 4.4. Would
> it be easier to just reinstall and import the disks?
> On Sat, Sep 18,
Do you have a backup of the hosted engine that you can restore? If your vms
are on nfs mounts you should be able to readd the storage domain and import
On Sat, Sep 18, 2021 at 4:26 PM Wesley Stewart wrote:
> Luckily this is for a home lab and nothing critical is lost. However I
Shouldn’t that be admin@internal or was that a typo?
On Wed, Sep 15, 2021 at 4:40 AM wrote:
> i've put all in a rest client to check the syntax and how the request
> looks, now i got this response:
> access_denied: Cannot authenticate user 'admin@intern': No valid profile
> found in
I use the nagios check_rhv plugin, it has support for monitoring GlusterFS
as well: https://github.com/rk-it-at/check_rhv
On Tue, Sep 7, 2021 at 8:39 AM Jiří Sléžka wrote:
> On 9/7/21 1:05 PM, si...@justconnect.ie wrote:
> > Hi All,
> > Does anyone have recommendations for GlusterFS
You could use a single server with VMs on local storage, or connected to
remote storage such as NFS.
There are drawbacks of course. You couldn't keep VMs running if the host is
down or for upgrades etc.
For any kind of high availability you'd want at least two servers with
remote storage, but then
Just a thought but depending on resources you might be able to use your 4th
server as nfs storage and live migrate vm disks to it and off of your
gluster volumes. I’ve done this in the past when doing major maintenance on
gluster volumes to err on the side of caution.
On Sat, Jul 10, 2021 at 7:22
I have observed this behaviour recently and in the past on 4.3 and 4.4, and
in my case it’s almost always following an ovirt upgrade. After upgrade
(especially upgrades involving glusterfs) I’d have bricks randomly go down
like you're describing for about a week or so after upgrade and I'd have to
Check if there’s another lvm.conf file in the dir like lvm.conf.rpmsave and
swap them out. I recall having to do something similar to solve a
deployment issue much like yours
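The swap described above can be sketched in a couple of shell commands. Hedged: the directory below defaults to a throwaway temp dir so the sketch is safe to try; on a real host it would be /etc/lvm.

```shell
# Sketch of swapping lvm.conf with an .rpmsave copy left behind by an upgrade.
# DIR is a temp dir here for safety; on a real host it would be /etc/lvm.
DIR=$(mktemp -d)
echo "current config" > "$DIR/lvm.conf"
echo "saved config"   > "$DIR/lvm.conf.rpmsave"

if [ -f "$DIR/lvm.conf.rpmsave" ]; then
    mv "$DIR/lvm.conf" "$DIR/lvm.conf.bak"       # keep the current file around
    mv "$DIR/lvm.conf.rpmsave" "$DIR/lvm.conf"   # promote the rpmsave copy
fi
cat "$DIR/lvm.conf"
```

Keeping the .bak copy means the change is easy to undo if the deployment still fails.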
On Wed, Jun 30, 2021 at 6:39 PM wrote:
> Been beating my head against this for a while, but I'm having issues
A while ago I wrote an ansible playbook to backup ovirt vms via ova export
to storage attached to one of the hosts (nfs in my case). You can check it
I’ve been using this for a while and it has been working well for me on
I'm not sure if the hosted engine is on stream yet. I'm also on 4.4.6 and
while my nodes are CentOS 8 stream my hosted engine is also still 8.3
On Mon, May 31, 2021 at 3:45 AM mail--- via Users wrote:
> I upgraded. The upgrade seems to have been successful.
> However, the distribution OS of the
Removing the ovirt-node-ng-image-update package and re-installing it
manually seems to have done the trick. Thanks for pointing me in the right
On Thu, May 27, 2021 at 9:57 PM Jayme wrote:
> # rpm -qa | grep ovirt-node
> PS: remember to use tmux if executing via ssh.
> Le jeu. 27 mai 2021 à 22:21, Jayme a écrit :
>> The good host:
>> default: ovirt-node-ng-188.8.131.52-0.20210518.0 (4.18.0-301.1.el8.x86_64)
On Thu, May 27, 2021 at 6:18 PM Jayme wrote:
> It shows 4.4.5 image on
> Le je
I updated my three server HCI cluster from 4.4.5 to 4.4.6. All hosts
updated successfully and rebooted and are active. I notice that only one
host out of the three is actually running oVirt node 4.4.6 and the other
two are running 4.4.5. If I check for upgrade in admin it shows no upgrades
The problem appears to be MTU related, I may have a network configuration
problem. Setting back to 1500 mtu seems to have solved it for now
On Thu, May 27, 2021 at 2:26 PM Jayme wrote:
> I've gotten a bit further. I have a separate 10Gbe network for GlusterFS
> traffic which was al
fine on GlusterFS migration network in the past.
On Thu, May 27, 2021 at 2:11 PM Jayme wrote:
> I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
> is mix of GlusterFS and NFS. Everything has been running smoothly, but the
> other day I noticed many VMs ha
I have a three node oVirt 4.4.5 cluster running oVirt node hosts. Storage
is mix of GlusterFS and NFS. Everything has been running smoothly, but the
other day I noticed many VMs had invalid snapshots. I run a script to
export OVA for VMs for backup purposes, exports seemed to have been fine
What do you mean Gluster being announced as EOL? Where did you find this
On Mon, Apr 26, 2021 at 9:34 AM penguin pages
> I have been building out HCI stack with KVM/RHEV + oVirt with the HCI
> deployment process. This is very nice for small / remote site use cases,
Vprotect would be worth looking into
On Sun, Apr 18, 2021 at 3:23 AM wrote:
> Hi there,
> I want forever incremental backup for over 150+ virtual machines inside
> oVirt to save more backup space, then restore in case some problem occurs,
> any good advice?
David, I’m curious what the use case is. Do you plan on using the disk with
three VMs at the same time? This isn’t really what shareable disks are
meant for, AFAIK. If you want to share storage with multiple VMs I’d
probably just set up an NFS share on one of the VMs
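The NFS-share-on-a-VM approach only needs an export line on the serving guest. A minimal sketch; the export path and client subnet below are assumptions, not from the thread:

```
# /etc/exports on the VM acting as the NFS server (hypothetical path/subnet)
/srv/share  10.0.0.0/24(rw,sync,no_root_squash)
```

After editing, `exportfs -ra` reloads the export table, and the other VMs mount it like any NFS path.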
On Thu, Apr 15, 2021 at 7:37
If it's a smaller setup one option might be to use RHEL. A developer
account with Red Hat will allow for 16 licensed servers for free.
On Mon, Apr 12, 2021 at 4:07 AM dhanaraj.ramesh--- via Users <
> We had done successful POC of ovirt node & HE with 4.4.5 version and now
> that EL8 host.
> Once it's successful, the rest of the hosts should be available and you
> will be able to remove one of the other nodes, reduce gluster, reinstall ,
> get gluster running and add the host again in oVirt.
> Best Regards,
> Strahil Nikolov
I have a fairly stock three node HCI setup running oVirt 4.3.9. The hosts
are oVirt node. I'm using GlusterFS storage for the self hosted engine and
for some VMs. I also have some other VMs running from an external NFS
Is it possible for me to upgrade this environment to 4.4 while
The hosts run the vms. The engine just basically coordinates everything.
On Sun, Mar 21, 2021 at 8:50 PM jenia mtl wrote:
> Hi Edward.
> "Therein" meaning inside the engine? The virtualization hosts run inside
> the engine not inside the hypervision/Ovirt-node? And just to make sure,
If you deployed with the wizard the hosted engine should already be HA and can
run on any host. If you look at the GUI you will see a crown beside each host
that is capable of running the hosted engine.
On Sat, Mar 20, 2021 at 5:14 PM David White via Users
> I just finished deploying oVirt
Are you trying to set this up as an HCI deployment? If so it might be
failing if the Raspberry Pi CPU is not supported by oVirt
On Wed, Feb 24, 2021 at 3:08 AM wrote:
> Hey there,
> I tried using oVirt for some time now and like to convert my main Proxmox
> Cluster (2 Nodes + RPI for Quorum)
I believe you'd need to add in multiples of 3 hosts to expand gluster
On Mon, Feb 22, 2021 at 3:56 AM wrote:
> Ok, thanks for your answer.
> If I understand well, I can't expand my gluster storage ?
> My goal was add a node when I can to growing up my gluster and my compute
Take a look at configuring affinity rules
On Wed, Jan 20, 2021 at 7:49 PM Shantur Rathore wrote:
> Hi all,
> I am trying to figure if there is a way to force oVirt to schedule VMs on
> different hosts.
> So if I am cloning 6 VMs from a template, I want oVirt to schedule them on
Correct me if I'm wrong but according to the docs, there might be a more
elegant way of doing something similar with gluster cli ex: gluster volume
heal split-brain latest-mtime -- although I have never
tried it myself.
On Mon, Jan 11, 2021 at 1:50 PM Strahil Nikolov via Users
> > Is
It takes a very small amount of effort to do it one time using ssh-copy-id
but I suppose you could easily do it with Ansible too.
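The Ansible route could look roughly like this playbook. The ansible.posix.authorized_key module is real, but the ovirt_hosts group name and the key path are assumptions:

```yaml
# Hypothetical playbook: push the engine's SSH public key to every host.
- name: Distribute SSH key to oVirt hosts
  hosts: ovirt_hosts          # assumed inventory group
  become: true
  tasks:
    - name: Install the public key for root
      ansible.posix.authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '/root/.ssh/id_rsa.pub') }}"
```

That gets the same result as ssh-copy-id in a loop, but stays idempotent when you add more hosts later.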
On Thu, Jan 7, 2021 at 11:42 AM marcel d'heureuse
> we have setup a ovirt system with 9 hosts.
> we will now add three more nodes and we have to
It looks like a few forks are popping up already. A new project called
RockyLinux and now CloudLinux announced an RHEL fork today which sounds
On Thu, Dec 10, 2020 at 5:42 AM Jorick Astrego
Ok yeah that is fairly similar to my setup, except I only have two drives
in each host.
In my case I created completely separate data volumes, one per drive. You
could do the same, e.g. three data volumes for storage at 7TB each.
On one of the volumes you'll need to split off a 100GB volume
I have two ssds in each host for storage. I ended up using the wizard but
in the wizard I simply added two data volumes
storage1 = /dev/sda
storage2 = /dev/sdb
You can create as many storage volumes as you want. You don’t need to have
just one single large volume. You could have just one
Personally I also found this confusing when I set up my cluster a while
back. I ended up creating multiple data volumes, one for each drive. You
could probably software raid the drives first and present it to the
deployment wizard as one block device. I’m not sure if deployment wizard
IMO this is best handled at hardware level with UPS and battery/flash
backed controllers. Can you share more details about your oVirt setup? How
many servers are you working with and are you using replica 3 or replica 3 arbiter
On Thu, Oct 8, 2020 at 9:15 AM Jarosław Prokopowski
On Tue, Oct 6, 2020 at 7:28 PM Strahil Nikolov via Users
> Hello All,
> can someone send me the full link (not the short one) as my proxy is
> blocking it :)
would support it. You might have to roll
out your own GlusterFS storage solution. Someone with more Gluster/HCI
knowledge might know better.
On Mon, Sep 28, 2020 at 1:26 PM C Williams wrote:
> Thank for getting back with me !
> If I wanted to be wasteful with storage
You can only do HCI in multiples of 3. You could do a 3 server HCI setup
and add the other two servers as compute nodes or you could add a 6th
server and expand HCI across all 6
On Mon, Sep 28, 2020 at 12:28 PM C Williams wrote:
> We recently received 5 servers. All have about 3 TB
Assuming you don't care about data on the drive you may just need to use
wipefs on the device i.e. wipefs -a /dev/sdb
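A cautious way to try this: wipefs also accepts regular files, so the command can be rehearsed on a throwaway image first (the filename below is hypothetical). On the real host the target is the device itself, e.g. /dev/sdb, and it destroys any signatures found there.

```shell
# Rehearse wipefs on a throwaway image file before touching a real device.
img=$(mktemp /tmp/wipefs-demo.XXXXXX)
truncate -s 64M "$img"     # sparse 64 MiB "disk"
wipefs --all "$img"        # removes any filesystem/RAID signatures found
wipefs "$img"              # prints nothing once no signatures remain
rm -f "$img"
```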
On Fri, Sep 25, 2020 at 12:53 PM Staniforth, Paul <
> how do you manage a gluster host when upgrading a node?
Interested to hear how upgrading 4.3 HCI to 4.4 goes. I've been considering
it in my environment but was thinking about moving all VMs off to NFS
storage then rebuilding oVirt on 4.4 and importing.
On Thu, Sep 24, 2020 at 1:45 PM wrote:
> I am hoping for a miracle like that, too.
> In the
You could try setting host to maintenance and check stop gluster option,
then re-activate host or try restarting glusterd service on the host
On Mon, Sep 21, 2020 at 2:52 PM Jeremey Wise wrote:
> oVirt engine shows one of the gluster servers having an issue. I did a
> graceful shutdown of
I believe if you go into the storage domain in GUI there should be a tab
for vms which should list the vms then you can click the : menu and choose
On Wed, Sep 2, 2020 at 9:24 AM Darin Schmidt wrote:
> I am running this as an all in one system for a test bed at home. The
> system crashed
Thanks for letting me know, I suspected that might be the case. I’ll make a
note to fix that in the playbook
On Mon, Aug 31, 2020 at 3:57 AM Stefan Wolf wrote:
> I think, I found the problem.
> It is case sensitive. For the export it is NOT case sensitive but for the
> step "wait for
Interesting I’ve not hit that issue myself. I’d think it must somehow be
related to getting the event status. Is it happening to the same vms every
time? Is there anything different about the vm names or anything that would
set them apart from the others that work?
On Sun, Aug 30, 2020 at 11:56
Also if you look at the blog post linked on github page it has info about
increasing the ansible timeout on ovirt engine machine. This will be
necessary when dealing with large vms that take over 2 hours to export
On Sun, Aug 30, 2020 at 8:52 AM Jayme wrote:
> You should be able to
You should be able to fix by increasing the timeout variable in main.yml. I
think the default is pretty low, around 600 seconds (10 minutes). I have
mine set for a few hours since I’m dealing with large vms. I’d also
increase poll interval as well so it’s not checking for completion every 10
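As a rough sketch, the change might look like this in the playbook's main.yml. The exact variable names are assumptions based on the description above, not a verified copy of the playbook:

```yaml
# Hypothetical values; the playbook's default timeout is reportedly ~600s.
timeout: 7200         # seconds to wait for an OVA export (2 hours)
poll_interval: 60     # seconds between status checks, instead of every 10
```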
Probably the easiest way is to export the VM as OVA. The OVA format is a
single file which includes the entire VM image along with the config. You
can import it back into oVirt easily as well. You can do this from the GUI
on a running VM and export to OVA without bringing the VM down. The export
Vprotect can do some form of incremental backup of ovirt vms. At least on
4.3 I’m not sure where they’re at for 4.4 support. Worth checking out, free
for 10 vms
On Wed, Aug 19, 2020 at 7:03 AM Kevin Doyle
> I am looking at ways to backup VM's, ideally that support incremental
I think you are perhaps overthinking a tad. Glusterfs is a fine solution
but it has had a rocky road. It would not be my first suggestion if you are
seeking high level write performance although that has been improving and
can be fine tuned. Instability at least in the past was mostly centered
Check engine.log in /var/log/ovirt-engine on the engine server/VM
On Tue, Jul 28, 2020 at 7:16 PM Philip Brown wrote:
> I just tried to import an OVA file.
> The GUI status mentions that things seem to go along fairly happily..
> it mentions that it creates a disk for it
> but then
ade available to oVirt as a whole. And then from there, I
> > > would of course setup a number of virtual disks that I would attach
> > > back to that customer's VM.
> > > So to recap, if I were to have a 5-node Gluster Hyperconverged
> > > environment, I'm ho
Your other hosts that aren’t participating in gluster storage would just
mount the gluster storage domains.
On Wed, Jul 15, 2020 at 6:44 PM Philip Brown wrote:
> Are you then saying, that YES, all host nodes need to be able to talk to
> the glusterfs filesystem?
> on a related
Personally I find the rhev documentation much more complete:
On Mon, Jul 13, 2020 at 6:17 PM Philip Brown wrote:
> I find it odd that the ovirt website allows to see older version RELEASE
> but doesnt seem to
You should still be able to get it to work using a driver update disk
during install. See: https://forums.centos.org/viewtopic.php?t=71862
Either way, this is good to know ahead of time as to limit surprises!
On Tue, Jul 7, 2020 at 10:22 AM shadow emy wrote:
> i found the prob
I’ve tried various methods to improve gluster performance on similar
hardware and never had much luck. Small file workloads were particularly
troublesome. I ended up switching high performance vms to nfs storage and
performance with nfs improved greatly in my use case.
On Sun, Jun 28, 2020 at
Yes this is the point of hyperconverged. You only need three hosts to setup
a proper hci cluster. I would recommend ssds for gluster storage. You could
get away with non raid to save money since you can do replica three with
gluster meaning your data is fully replicated across all three hosts.
This is of course not recommended but there have been times when I have
lost network access to storage or the storage server while VMs were running.
They paused and came back up when storage was available again without
causing any problems. This doesn’t mean it’s 100% safe but from my
experience it has
I wrote a simple un-official ansible playbook to backup full VMs here:
https://github.com/silverorange/ovirt_ansible_backup -- it works great for
my use case, but it is more geared toward smaller environments.
For commercial software I'd take a look at vProtect (it's free for up to 10
Also, I can't think of the limit off the top of my head. I believe it's
either 75 or 100GB. If the engine volume is set any lower the installation
will fail. There is a minimum size requirement.
On Fri, May 29, 2020 at 12:09 PM Jayme wrote:
> Regarding Gluster question. The volumes wo
Regarding Gluster question. The volumes would be provisioned with LVM on
the same block device. I believe 100GB is recommended for the engine
volume. The other volumes such as data would be created on another logical
volume and you can use up the rest of the available space there. E.g. 100GB
Here is the bug report:
On Thu, May 28, 2020 at 8:23 AM Jayme wrote:
> If it’s the issue I’m thinking of it’s because Apple Mojave started
> rejecting certs that have a validity period longer than a certain period of
> time which ovir
If it’s the issue I’m thinking of it’s because Apple Mojave started
rejecting certs that have a validity period longer than a certain limit,
which the oVirt CA does not follow. I posted another message on this group
about it a little while ago and I think a bug report was made.
The only way I
This is likely due to CentOS 8, not the node image in particular. CentOS 8
dropped support for many LSI RAID controllers including older PERC controllers.
Has the drive been used before, it might have existing partition/filesystem
on it? If you are sure it's fine to overwrite try running wipefs -a
/dev/sdb on all hosts. Also make sure there aren't any filters setup in
lvm.conf (there shouldn't be on fresh install, but worth checking).
On Tue, Apr
Oh and also gluster interface should not be set as default route either.
On Tue, Apr 28, 2020 at 7:19 PM Jayme wrote:
> On gluster interface try setting gateway to 10.0.1.1
> If that doesn’t work let us know where the process is failing currently
> and with what errors etc.
H keys seems to take over a minute just to
>> prompt for a password. Something smells here.
>> On Tue, Apr 28, 2020 at 7:32 PM Jayme wrote:
>>> You should be using a different subnet for each. I.e. 10.0.0.30 and
>>> 10.0.1.30 for example
You should be using a different subnet for each. I.e. 10.0.0.30 and
10.0.1.30 for example
On Tue, Apr 28, 2020 at 2:49 PM Shareef Jalloq wrote:
> I'm in the process of trying to set up an HCI 3 node cluster in my homelab
> to better understand the Gluster setup and have failed at the
What is the vm optimizer you speak of?
Have you tried the high performance vm profile? When set it will prompt you
to make additional manual changes such as configuring numa and hugepages
On Tue, Apr 21, 2020 at 8:52 AM wrote:
> On oVirt 4.3. i installed w10_64 with q35 cpu.
> i've used
Do you have the guest agent installed on the VMs?
On Thu, Apr 16, 2020 at 2:55 PM wrote:
> Are you getting any errors in the engine log or
> I have Windows 10 and haven't experienced that. You can't shut it down in
> the UI? Even after you try to shut it down
In oVirt admin go to Storage > Domains. Click your storage domain. Click
"Virtual Machines" tab. You should see a list of VMs on that storage
domain. Click one or highlight multiple then click import.
On Thu, Apr 16, 2020 at 2:34 PM wrote:
> If you click on the 3 dots in the vm portal, there is
The error suggests a problem with ansible. What packages are you using?
On Tue, Apr 14, 2020 at 1:51 AM Gabriel Bueno wrote:
> Does anyone have any clue that it may be happening?
> Users mailing list -- email@example.com
> To unsubscribe send an
I recently set up a new oVirt environment using the latest 4.3.9 installer. I
can't seem to get the noVNC console to work for the life of me in Safari or
Chrome on macOS Catalina.
I have downloaded the CA from the login page and imported it into keychain
and made sure it was fully trusted. In both
Was wondering if there are any guides or if anyone could share their
storage configuration details for NFS. If using LVM is it safe to snapshot
volumes with running VM images for backup purposes?
I've been following along with interest, as I've also been trying
everything I can to improve gluster performance in my HCI cluster. My issue
is mostly latency related and my workloads are typically small file
operations which have been especially challenging.
Couple of things
I strongly believe that FUSE mount is the real reason for poor performance
in HCI and these minor gluster and other tweaks won't satisfy most seeking
i/o performance. Enabling libgfapi is probably the best option. Redhat has
recently closed bug reports related to libgfapi citing won't fix and one
Do you have more specific details or guidelines in regards to the graphics
you are looking for?
On Tue, Mar 24, 2020 at 1:27 PM Sandro Bonazzola
> in preparation of oVirt 4.4 GA it would be nice to have some graphics we
> can use for launching oVirt 4.4 GA on
I too struggle with speed issues in hci. Latency is a big problem with
writes for me especially when dealing with small file workloads. How are
you testing exactly?
Look into enabling libgfapi and try some comparisons with that. People have
been saying it’s much faster, but it’s not a default
9/03/2020 11:18, Jayme wrote:
> > At the very least you should make sure to apply the gluster virt profile
> > to vm volumes. This can also be done using optimize for virt store in
> > the ovirt GUI
> with kind regards,
At the very least you should make sure to apply the gluster virt profile to
vm volumes. This can also be done using optimize for virt store in the
On Thu, Mar 19, 2020 at 6:54 AM Christian Reiss
> Hey folks,
> quick question. For running Gluster / oVirt I found several
What if any steps do I need to take prior to adding an additional gluster
volume to my HCI cluster using new storage devices via the oVirt gui? Will
the gui prepare the devices (xfs/lvm etc) or do I need to do that prior?
zna 2020 21:13:13 CET Jayme wrote:
> > >> I noticed Gluster 4k support mentioned in recent oVirt release notes.
> > >
> > >Can
> > >
> > >> anyone explain what this is about?
> > >
> > >before we supported only disks with block size
This is all that should be needed, I've done so on my engine and it works
fine to set the timeout much higher. My guess is that you did not restart
the engine after changing the config.
On Sun, Mar 15, 2020 at 10:44 AM Barrett Richardson
> Version 184.108.40.206-1.0.9.el7
> Per the info near
I noticed Gluster 4k support mentioned in recent oVirt release notes. Can
anyone explain what this is about?
inode64,noatime,nodiratime 0 0
On Sun, Mar 8, 2020 at 12:23 PM Jayme wrote:
> I'm starting to think that my problem could be related to the use of perc
> H310 mini raid controllers in my oVirt hosts. The os/boot SSDs are raid
> mirror but gluster storage is SSDs in pass
7, 2020 at 6:24 PM Jayme wrote:
> No worries at all about the length of the email, the details are highly
> appreciated. You've given me lots to look into and consider.
> On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov
>> On March 7, 2020 1:12
No worries at all about the length of the email, the details are highly
appreciated. You've given me lots to look into and consider.
On Sat, Mar 7, 2020 at 10:02 AM Strahil Nikolov
> On March 7, 2020 1:12:58 PM GMT+02:00, Jayme wrote:
> >Thanks again for the info. You’re
be with replica
3 vs replica 3 arbiter. I’d assume arbiter setup would be faster but
perhaps not by a considerable difference.
I will check into c states as well
On Sat, Mar 7, 2020 at 2:52 AM Strahil Nikolov
> On March 7, 2020 1:09:37 AM GMT+02:00, Jayme wrote:
i, Mar 6, 2020 at 5:06 PM Strahil Nikolov
> On March 6, 2020 6:02:03 PM GMT+02:00, Jayme wrote:
> >I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD
> >Small file performance inner-vm is pretty terrible compared to a
I have 3 server HCI with Gluster replica 3 storage (10GBe and SSD disks).
Small file performance inner-vm is pretty terrible compared to a similar
spec'ed VM using NFS mount (10GBe network, SSD disk)
VM with gluster storage:
# dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
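For reference, in the test quoted above each 512-byte write is followed by an O_DSYNC flush, so the run measures sync-write latency far more than throughput. This sketch just reproduces it against a local file:

```shell
# 1000 x 512-byte synchronous writes; dd reports elapsed time and rate.
# On gluster FUSE mounts this pattern is typically much slower than on NFS.
dd if=/dev/zero of=test2.img bs=512 count=1000 oflag=dsync
ls -l test2.img            # 512000 bytes
rm -f test2.img
```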
Leo and Jayme,
> This thread is getting more and more useful, great.
> Atm, I have 15 nodes cluster with shared Storage from Netapp. The storage
> network is (NFS4.1) on 20GB LACP, separated from control.
> Performance is generally great, except in several tes
I currently have a three host hci in rep 3 (no arbiter). 10gbe network and
ssds making up the bricks. I’ve wondered what the result of adding three
more nodes to expand hci would be. Is there an overall storage performance
increase when gluster is expanded like this?
On Sat, Feb 29, 2020 at 4:26
From my understanding, you can have more than 3 hosts in a HCI cluster but
to expand HCI you need to add hosts in multiples of three. I.e. go from 3
hosts to 6 or 9 etc.
You can still add hosts into the cluster as compute only hosts though. So
you could have 3 hosts with gluster and a
If the problem is with the upload process specifically it’s likely that you
do not have the ovirt engine certificate installed in your browser.
On Thu, Feb 27, 2020 at 11:34 PM Juan Pablo Lorier
> I'm running 220.127.116.11-1.el7 (just updated engine to see if it helps) and I
On Thu, Feb 27, 2020 at 11:33 AM Gianluca Cecchi
> sometimes I have environments (typically with Oracle RDBMS on virtual
> machines) where there is one boot disk and one (often big, such as 500Gb or
> more) data disk.
> The data disk has already its app
Echoing what others have said. Ansible is your best option here.
On Thu, Feb 27, 2020 at 7:22 AM Nathanaël Blanchet wrote:
> Le 27/02/2020 à 11:00, Yedidyah Bar David a écrit :
> On Thu, Feb 27, 2020 at 11:53 AM Eugène Ngontang
> Yes Ansible ovirt_vms module is useful, I use
Big thanks to Martin for helping out. Very much appreciated!
hear if you have any thoughts or opinions on ways
to improve backup retention policy to make it more versatile.
Thanks again for your feedback!
On Tue, Feb 18, 2020 at 8:15 AM Gianluca Cecchi
> On Mon, Feb 10, 2020 at 5:01 PM Jayme wrote:
>> I've been part of this maili