…aster between the hosts and the NAS?
Mike
Sent from my Galaxy
Hi,
On 28/04/2023 at 10:45, Pierre Le Fevre wrote:
Hi all,
We're working on upgrading our storage solution to a proper network
attached storage.
Before this, we had an NFS share on some mounted disks on the management
server. Our new setup is a NAS running TrueNAS (ZFS) with 64 GB RAM and
Just following the subject, I'm curious what disk performance we can reach
when using NFS primary storage. We have two infrastructures
with different purposes: one to instantiate normal VMs and another used
by VMs involved in scientific research running simulations, AI
training,
Big thanks for all the suggestions :)
We've ordered some NVMe SSDs for write caching; this is something we missed
when setting up the machine originally.
Other than that, it sounds like the setup should work.
Best,
Pierre
kthcloud
On Mon, 1 May 2023 at 17:55, wrote:
> We use a flat network and
We use a flat network and jumbo frames too.
On 29 Apr 2023, 15:47 -0300, S.Fuller wrote:
> Anything else different about the setup? Interface speeds? Routed vs flat
> network? MTU size being used by the network interfaces perhaps?
>
> - Steve
>
> On Fri, Apr 28, 2023 at 3:47 AM Pierre Le
Anything else different about the setup? Interface speeds? Routed vs flat
network? MTU size being used by the network interfaces perhaps?
- Steve
On Fri, Apr 28, 2023 at 3:47 AM Pierre Le Fevre wrote:
> Hi all,
> We're working on upgrading our storage solution to a proper network
> attached
Hi Pierre,
In the past we tried a solution similar to yours to increase the
reliability of our storage using ZFS (in our case on Ubuntu). ZFS was born
to be safe: when you write a block, the system doesn't write the next
block until this one is confirmed OK, so it is slow.
In our case, we can
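To make that concrete: ZFS honours NFS sync semantics, so every sync write waits on the ZIL, and on plain 7200 rpm disks that is slow. A common mitigation is a fast dedicated SLOG device. A rough sketch of the relevant commands (the pool name `tank` and device `nvme0n1` are placeholders, not from this thread):

```shell
# Sketch only -- pool/device names are placeholders.
# Add a fast NVMe device as a separate log (SLOG) to absorb sync writes:
zpool add tank log /dev/nvme0n1
# Check how the pool handles sync writes:
zfs get sync tank
# sync=disabled would be fast but can lose the last seconds of writes on
# power failure -- generally not advisable for VM primary storage.
```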
Hi all,
We're working on upgrading our storage solution to a proper network
attached storage.
Before this, we had an NFS share on some mounted disks on the management
server. Our new setup is a NAS running TrueNAS (ZFS) with 64 GB RAM and
8x 8 TB, 7200 rpm hard disks, mounted to CloudStack over NFS.
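For comparing numbers across setups like this, a minimal sync-write probe in the style of the dd test quoted elsewhere in this thread can help; `TARGET` is a placeholder for the NFS mount under test (it defaults to /tmp only so the snippet runs anywhere):

```shell
#!/bin/sh
# Minimal sync-write throughput probe. Point TARGET at the NFS mount
# backing primary storage; defaults to /tmp so the sketch is runnable.
TARGET="${TARGET:-/tmp}"
# conv=fdatasync makes dd flush to stable storage before reporting, so
# the printed rate reflects what the backend committed, not the page cache.
dd if=/dev/zero of="$TARGET/sync-test.img" bs=1M count=64 conv=fdatasync
```

Running it once against a local disk and once against the NFS mount gives a quick sense of what the sync path costs.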
I see in the code that setCacheMode for KVM is there; I'm not certain how to
invoke it to use writeback.
https://git-wip-us.apache.org/repos/asf?p=cloudstack.git=search=HEAD=grep=setCacheMode
On 6/6/16 7:26 AM, Vladimir Melnik wrote:
> Dear colleagues,
>
> I've found why guest's storage pe
Dear colleagues,
I've found why the guest's storage performance was much lower than the host's
performance (the mistake was too silly to mention, really).
But I'd like to ask one more question, if you don't mind. :) I played with
various KVM options (cache, io and so on...) and now I can say
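For anyone else poking at the same knobs: the cache mode a guest actually got can be read back from libvirt. A sketch, assuming a KVM host; the instance name `i-2-42-VM` is a placeholder:

```shell
# Show the disk driver lines of a running guest; the cache= attribute
# (none / writethrough / writeback) is what KVM is really using.
virsh dumpxml i-2-42-VM | grep -A1 "driver name='qemu'"
# A writeback disk would look like:
#   <driver name='qemu' type='qcow2' cache='writeback'/>
```

Writeback trades safety for speed: a host crash can lose writes the guest believed were committed, so it is usually only advisable with battery- or flash-backed storage.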
Hi Vladimir,
What hypervisor are you using?
-Original Message-
From: Vladimir Melnik [mailto:v.mel...@uplink.ua]
Sent: Sunday, 05 June 2016 6:06 PM
To: users@cloudstack.apache.org
Subject: Storage Performance
Hello,
I have an ACS-driven environment with a storage subsystem which
Hello,
I have an ACS-driven environment with a storage subsystem which is built on
Gluster over InfiniBand. The storage shows pretty good performance: when I mount
a volume on a host and run a simple test ("dd if=/dev/zero of=/mnt/tmp/test.1G
bs=1G count=1 conv=fdatasync"), it shows about 400
Dear Sebastián,
thanks for the hint. However, as we are operating an object store in our data
center and not relying on AWS, network traffic is not an issue.
I'm more worried about the amount of data the SSVM has to push and pull out of
the object store via the S3 interface in our case and if
Not an answer to any of your questions, but one piece of advice:
If you are talking about using AWS S3 as backend storage, it is highly
recommended that you consider the storage network traffic cost. I have
seen some cases where the price of outbound network traffic
(from AWS to the Internet) is
Hi,
we plan to switch from a per-zone NFS secondary storage to a (region-wide) S3
secondary storage. Currently we have some concerns wrt the performance of this
setup, though.
The throughput we achieve for e.g. a single file download (wget
http://our.s3.storage/some-file) is approx. 100 Mbps.
in BSOD.
I'm doing a fresh install now.
Thanks again.
-Original Message-
From: Gerolamo Valcamonica [mailto:cloudst...@overweb.it]
Sent: Wednesday, 17 June 2015 17:15
To: users@cloudstack.apache.org
Subject: Re: AW: Storage Performance Windows VM
1- Install virtio drivers inside
How do I change that for existing systems?
Thanks,
Ingo
-Original Message-
From: Andrija Panic [mailto:andrija.pa...@gmail.com]
Sent: Wednesday, 17 June 2015 16:34
To: users@cloudstack.apache.org
Subject: Re: Storage Performance Windows VM
Choose Windows PV as the vm type/OS
Hi all,
I've got a few Windows machines which had block devices on an NFS share.
After migration to Ceph the disk performance is really bad.
The disk shows up as IDE in Windows (before and after). No virtio.
Any idea why it's that bad? Any drivers I need to change?
Thanks for your help.
Choose Windows PV as the VM type/OS inside ACS. Then you will have virtio
hardware inside the VM. Make sure you install the drivers (Google "fedora virtio
drivers") inside Windows before you switch to virtio hardware...
On Jun 17, 2015 4:24 PM, Jochim, Ingo ingo.joc...@bautzen-it.de wrote:
Hi all,
I've
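For the archives, a sketch of that switch at the libvirt level; the instance name is a placeholder, and in CloudStack the supported route is changing the OS type as described above. Installing the virtio-win drivers inside Windows *before* changing the bus is the critical step; otherwise the guest blue-screens on boot:

```shell
# Placeholder instance name; do this while the virtio drivers are already
# installed in the Windows guest but the disk is still on IDE.
virsh shutdown i-2-17-VM
# Change the disk target from IDE to virtio, e.g.:
#   <target dev='hda' bus='ide'/>   ->   <target dev='vda' bus='virtio'/>
virsh edit i-2-17-VM
virsh start i-2-17-VM
```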
Subject: Re: AW: Storage Performance Windows VM
Edit the template from which the VMs were created. Or edit the VM instance
directly - maybe that is possible, can't remember right now...
On Jun 17, 2015 4:36 PM, Jochim, Ingo ingo.joc...@bautzen-it.de wrote:
How do I change that for existing systems?
Thanks
quadrant using Borg technology!
Nux!
www.nux.ro
- Original Message -
From: Jochim, Ingo ingo.joc...@bautzen-it.de
To: users@cloudstack.apache.org users@cloudstack.apache.org
Sent: Wednesday, 17 June, 2015 15:36:33
Subject: AW: Storage Performance Windows VM
How do I change