to be. I sincerely hope
we see more focus on these practical challenges.
Regards,
Antoine
Antoine Boucher
antoi...@haltondc.com
On Sep 19, 2025, at 07:46, David Sekne wrote:
Hello,
I had the same problem not long ago. I added a new primary storage (NFS)
volume which did not have the
Hello,
I had the same problem not long ago. I added a new primary storage
(NFS) volume which did not have the proper write permissions (already
had multiple attached from the same storage and working fine). When ACS
failed to create the KVMHA folder on it, it rebooted all my hosts which
GitHub user DaanHoogland added a comment to the discussion: KVM cluster with
NFS primary storage – VM HA not working when host is powered down
It would be nice if we could define how these should work nicely together.
This has never been a focus of anybody’s attention.
GitHub link:
https
GitHub user DaanHoogland added a comment to the discussion: KVM cluster with
NFS primary storage – VM HA not working when host is powered down
> VM-HA functions properly, but only when HOST-HA is disabled. When HOST-HA is
> also enabled on the hosts, the log contains the entries men
GitHub user akoskuczi-bw created a discussion: KVM cluster with NFS primary
storage – VM HA not working when host is powered down
### problem
In a KVM cluster with NFS primary storage, VM HA does not work when a host is
powered down.
- The host status transitions to Down, HA state shows
GitHub user akoskuczi-bw added a comment to the discussion: KVM cluster with
NFS primary storage – VM HA not working when host is powered down
VM-HA functions properly, but only when HOST-HA is disabled. When HOST-HA is
also enabled on the hosts, the log contains the entries mentioned above
GitHub user boring-cyborg[bot] added a comment to the discussion: KVM cluster
with NFS primary storage – VM HA not working when host is powered down
Thanks for opening your first issue here! Be sure to follow the issue template!
GitHub link:
https://github.com/apache/cloudstack/discussions
GitHub user kiranchavala added a comment to the discussion: KVM cluster with
NFS primary storage – VM HA not working when host is powered down
@akoskuczi-bw
Could you please try the steps mentioned in this link
https://github.com/apache/cloudstack/issues/10477#issuecomment-2753247589
cc
GitHub user GerorgeEG created a discussion: KVM hosts got rebooted while adding
NFS primary storage
### problem
Multiple hosts got rebooted while adding NFS primary storage
ACS version: 4.19.1.2
KVM: RHEL 8.10
NFS: v3 with nolock option
Below is the error from one of the hosts
**message log
GitHub user GerorgeEG added a comment to the discussion: KVM hosts got rebooted
while adding NFS primary storage
Thanks, will check and validate this in our environment.
GitHub link:
https://github.com/apache/cloudstack/discussions/11657#discussioncomment-14427473
GitHub user rohityadavcloud added a comment to the discussion: KVM hosts got
rebooted while adding NFS primary storage
I hit the same issue a few weeks ago and can confirm the agent.properties
change certainly helped.
GitHub link:
https://github.com/apache/cloudstack/discussions/11657
GitHub user DaanHoogland added a comment to the discussion: KVM hosts got
rebooted while adding NFS primary storage
@GerorgeEG, did you try mounting the new storage "by hand"?
side note: "/mnt/a288c84e-d100-334b-9bc3-0d79ffe9a610/KVMHA//hb-" does not
look like a valid heartbeat f
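For anyone following along, a "by hand" check of the kind Daan suggests might look roughly like this (server name, export path and mount point are placeholders, not taken from this thread):

mount -t nfs -o vers=3,nolock nfs-server:/export/primary /mnt/nfstest   # same NFSv3/nolock options as in the report
mkdir -p /mnt/nfstest/KVMHA && touch /mnt/nfstest/KVMHA/hb-test         # can we create the KVMHA dir and write a heartbeat-style file?
rm -rf /mnt/nfstest/KVMHA && umount /mnt/nfstest                        # clean up

If the mount hangs or the touch fails, that points at the NFS export or its permissions rather than at CloudStack.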
GitHub user weizhouapache added a comment to the discussion: KVM hosts got
rebooted while adding NFS primary storage
@GerorgeEG
If you want the host not to be rebooted when the heartbeat write fails, please
add/change the value in agent.properties
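For illustration, the heartbeat-related agent.properties entry being referred to is most likely the one below (property name assumed, so verify it against your ACS version before relying on it):

# /etc/cloudstack/agent/agent.properties
# assumed property: when set to false the agent raises an alert instead of
# rebooting the host if the NFS heartbeat write times out
reboot.host.and.alert.management.on.heartbeat.timeout=false

Restart the cloudstack-agent service afterwards for the change to take effect.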
GitHub user GerorgeEG added a comment to the discussion: KVM hosts got rebooted
while adding NFS primary storage
Hi @DaanHoogland, thanks for picking this up. We found that it was an issue
with the NFS share and that is why mounting got stuck and hosts got rebooted,
but we want to avoid the reboot
may be
better?
The ask is this:
Currently Ceph as primary storage uses one MON IP. I would like to move
this to an FQDN covering multiple other Ceph MONs as well.
Just checking if the above article is still relevant?
ACS version: 4.20.0
Thank you!
Alex
GitHub user SviridoffA closed a discussion: Clarification Needed: Changing
scope of Primary Storage
Hi everyone! I hope someone can explain this to me. Just want to be absolutely
sure. I need to change scope of my primary storage from cluster to zone. I’m
using Linstor. The goal is to add a
GitHub user SviridoffA added a comment to the discussion: Clarification Needed:
Changing scope of Primary Storage
Hi @DaanHoogland! Thank you so much for sharing your experience. This
information is important to me. Closing this discussion.
GitHub link:
https://github.com/apache
GitHub user DaanHoogland added a comment to the discussion: Clarification
Needed: Changing scope of Primary Storage
About the different servers in a cluster: it is not ideal, but as long as they
have the same OS it should work. Tagging should work and the deployment planner
would avoid to
GitHub user SviridoffA added a comment to the discussion: Clarification Needed:
Changing scope of Primary Storage
> Currently, I’m aware of two ways to do this: first is to disable the primary
> storage, and then change the scope from the UI (I’m using ACS 4.19.3). The
> second met
GitHub user SviridoffA created a discussion: Clarification Needed: Changing
scope of Primary Storage
Hi everyone! I hope someone can explain this to me. Just want to be absolutely
sure. I need to change scope of my primary storage from cluster to zone. I’m
using Linstor. The goal is to add
GitHub user tatay188 closed the discussion with a comment: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Resolved: thank you @wido @DaanHoogland @weizhouapache
GitHub link:
https://github.com/apache
GitHub user wido closed the discussion with a comment: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Just one comment to add here: Avoid using IPv6 addresses (or IPv4) for Monitor
hosts for Ceph, use DNS. You
GitHub user tatay188 closed a discussion: CloudStack 4.20 IPv6 Primary Storage
CEPH Initial VMs on KVM not created - Not Converting to RBD - IPv6 addresses
show truncated
### problem
I am having the following problem: libvirt is unable to convert from
Secondary Storage to Ceph RBD
GitHub user DaanHoogland closed the discussion with a comment: CloudStack 4.20
IPv6 Primary Storage CEPH Initial VMs on KVM not created - Not Converting to
RBD - IPv6 addresses show truncated
@tatay188, assuming this is resolved; please comment if you think it isn't.
GitHub link:
https
GitHub user bullblock closed a discussion: setup a cloudsatck on aws vpc (LEVEL
2 network issue and Primary Storage issue)
https://github.com/user-attachments/assets/186d86ce-84fe-4c5d-8739-4c6c35c944d3
Recently, I built a CloudStack env on AWS VPC. We all know that AWS VPC could
Hi!
Has anyone found a solution to the problem related to the glusterfs driver
for qemu-kvm (via glusterfs://)?
libvirt: QEMU Driver error : internal error: process exited while
connecting to monitor: 2025-06-21T08:08:26.861166Z qemu-system-x86_64:
-blockdev
{"driver":"gluster","volume":"vol1",
GitHub user DaanHoogland closed the discussion with a comment: About snapshots
on primary storage
Closing this for lack of activity. Please re-open or open a new one if it
becomes relevant again.
GitHub link:
https://github.com/apache/cloudstack/discussions/9954#discussioncomment-13061225
GitHub user tdtmusic2 closed a discussion: About snapshots on primary storage
Hi all. Do any of you use the primary storage for storing snapshots? I mean
the **snapshot.backup.to.secondary** set to false. I tried switching to primary
(kvm + ceph pool as primary storage) but after that all
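For context, **snapshot.backup.to.secondary** is a global setting, so besides the UI it can also be flipped via CloudMonkey along these lines (a sketch of the updateConfiguration call; check the syntax for your version):

# keep snapshots on primary storage instead of backing them up to secondary
cmk update configuration name=snapshot.backup.to.secondary value=false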
GitHub user DaanHoogland added a comment to the discussion: CloudStack 4.20
IPv6 Primary Storage CEPH Initial VMs on KVM not created - Not Converting to
RBD - IPv6 addresses show truncated
@tatay188, is it the last command that fails?
For the setup databases I see no option `-i
GitHub user jack99trade closed a discussion: Primary Storage Deletion , after
Zone is deleted.
Hi all, I have deleted a Zone and now Primary storage is in a disabled state.
How do I perform the delete of this Primary storage? I can't find the delete option.
GitHub link: https://github.com/a
GitHub user DaanHoogland added a comment to the discussion: About snapshots on
primary storage
@tdtmusic2 , do you have a resolution to this query yet?
GitHub link:
https://github.com/apache/cloudstack/discussions/9954#discussioncomment-12994592
GitHub user tatay188 added a comment to the discussion: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Additionally, I am unable to remove the Primary storage; I disabled it, but
still there is no delete option
GitHub user tatay188 added a comment to the discussion: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Hello Wei.
A single management server works! How should I add the additional Management
server to work
GitHub user tatay188 added a comment to the discussion: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Every time I reinstall, something stops working and something starts working
properly.
GitHub link:
https
GitHub user weizhouapache added a comment to the discussion: CloudStack 4.20
IPv6 Primary Storage CEPH Initial VMs on KVM not created - Not Converting to
RBD - IPv6 addresses show truncated
> Every time I reinstall, something stops working and something starts working
> properly.
@ta
GitHub user weizhouapache added a comment to the discussion: CloudStack 4.20
IPv6 Primary Storage CEPH Initial VMs on KVM not created - Not Converting to
RBD - IPv6 addresses show truncated
@tatay188
Can you surround each IPv6 address with "[" and "]"?
GitHub link:
https
GitHub user tatay188 added a comment to the discussion: CloudStack 4.20 IPv6
Primary Storage CEPH Initial VMs on KVM not created - Not Converting to RBD -
IPv6 addresses show truncated
Thank you. I reinstalled both management servers only, to clean the DB,
which seemed to be corrupted
On 01-04-2025 at 17:00, Chi vediamo wrote:
I added 3 Monitors:
20XX:::::OO:24, 20XX:::::OO26,
20XX:::::OO:28
Have you tried using a DNS entry instead? Just to see if that works. I
have never tried manual IPv6 addresses, I always use a Round Robin DNS
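For what it's worth, the round-robin approach Wido describes is just multiple AAAA records under one name; a sketch in zone-file style (documentation-prefix addresses as placeholders for the redacted MON addresses above):

; round-robin name for the three Ceph monitors
ceph-mon    IN  AAAA  2001:db8::24
ceph-mon    IN  AAAA  2001:db8::26
ceph-mon    IN  AAAA  2001:db8::28

The single name (e.g. ceph-mon.example.com) is then used as the RADOS monitor when adding the RBD primary storage, instead of listing literal IPv6 addresses.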
I added 3 Monitors:
20XX:::::OO:24, 20XX:::::OO26,
20XX:::::OO:28
and you can see in the logs that the 20XX is truncated in some cases and in
other cases the last hex digits.
2025-04-01 10:52:24,628 WARN [utils.script.Script] (agentRequest-Handler-4:[])
(logid
Hi,
RBD driver installed?
On Tue, Apr 1, 2025, 13:05 Chi vediamo wrote:
> Hello team,
>
> I am having the following problem: libvirt is unable to convert from
> Secondary Storage to Ceph RBD.
>
> Ceph is purely IPv6. The RBD was created; there are no errors on the Ceph
> side.
>
> The Initia
Hello team,
I am having the following problem: libvirt is unable to convert from
Secondary Storage to Ceph RBD.
Ceph is purely IPv6. The RBD was created; there are no errors on the Ceph side.
The initial VM creation starts and then stops; they never become enabled,
the initial VMs are delet
x.html
>
> -Wei
>
> On Sat, Mar 8, 2025 at 11:08 AM TechnologyRss <
> technologyrss.m...@gmail.com>
> wrote:
>
> > Hello,
> >
> > I am try ACS version 4.20 add using ceph primary storage but not create
> > storage volume but 4.19 is successfully w
Please read the cloudstack documentation
https://docs.cloudstack.apache.org/en/latest/index.html
-Wei
On Sat, Mar 8, 2025 at 11:08 AM TechnologyRss
wrote:
> Hello,
>
> I am try ACS version 4.20 add using ceph primary storage but not create
> storage volume but 4.19 is successf
Hello,
I am trying ACS version 4.20, adding Ceph primary storage, but it does not
create the storage volume; 4.19 is working successfully. What is the issue? Is
there any bug in this version?
1. How can I use ceph for the 4.20 version of ACS?
2. I am creating an Adv zone without SG enabled, but I want to
Hello,
I am using Ceph as primary and NFS as secondary storage. I see the SSVM &
CPVM being auto-deleted; both VMs are not stable...
+++ Error from KVM host +++
696863ac13 from libvirt
2025-02-25 15:59:08,131 WARN [utils.script.Script]
(agentRequest-Handler-2:[]) (logid:08686e8
GitHub user shaerul closed a discussion: How CloudStack handles multiple NFS
type Primary Storage
I have added a second primary storage to my CloudStack which has augmented the
storage space in the Dashboard. I have gone through the CloudStack documents and
browsed through the Internet but
GitHub user rajujith added a comment to the discussion: setup a cloudsatck on
aws vpc (LEVEL 2 network issue and Primary Storage issue)
@bullblock sharing these articles:
https://aws.amazon.com/blogs/compute/building-a-cloud-in-the-cloud-running-apache-cloudstack-on-amazon-ec2-part-1/
https
GitHub user alexandremattioli added a comment to the discussion: setup a
cloudsatck on aws vpc (LEVEL 2 network issue and Primary Storage issue)
NFS for primary storage should work just like the secondary storage. How are
you presenting it to CloudStack?
GitHub link:
https://github.com
GitHub user bullblock edited a discussion: setup a cloudsatck on aws vpc (LEVEL
2 network issue and Primary Storage issue)
https://github.com/user-attachments/assets/186d86ce-84fe-4c5d-8739-4c6c35c944d3
Recently, I built a CloudStack env on AWS VPC. We all know that AWS VPC could
GitHub user bullblock edited a discussion: setup a cloudsatck on aws vpc (LEVEL
2 network issue and Primary Storage issue)
Recently, I built a CloudStack env on AWS VPC. We all know that AWS VPC could
be more friendly to Level 2 networks since it filters all Level 2 traffic. This
means that
> > > On 2024/09/18 08:34:47 Wai Ho Levin Ng wrote:
> > > > Hi Mohd,
> > > >
> > > > You can take a look at OCFS2 over iSCSI, then provision the primary
> > storage using shared mount point configuration; this is probably for you.
> > > >
> > > > Levin
> > >
> >
Simon Žekar wrote:
> >
> > I guess GFS2 is another option. Is anyone running it in production?
> >
> > Simon
> >
> >> On 2024/09/18 08:34:47 Wai Ho Levin Ng wrote:
> >> Hi Mohd,
> >>
> >> You can take a look ocfs2 over iscsi, then provi
Mohd,
>>
>> You can take a look at OCFS2 over iSCSI, then provision the primary storage
>> using shared mount point configuration; this is probably for you.
>>
>> Levin
>
I guess GFS2 is another option. Is anyone running it in production?
Simon
On 2024/09/18 08:34:47 Wai Ho Levin Ng wrote:
> Hi Mohd,
>
> You can take a look at OCFS2 over iSCSI, then provision the primary storage
> using shared mount point configuration; this is probably for you.
>
> Levin
Hi Mohd,
You can take a look at OCFS2 over iSCSI, then provision the primary storage
using shared mount point configuration; this is probably for you.
Levin
>
> On 18 Sep 2024, at 07:46, Muhammad Hanis Irfan Mohd Zaid
> wrote:
>
> Hi community, I would like to get some idea i
Hi community, I would like to get some idea whether someone uses iSCSI
with a clustered file system for their production cluster that can be
shared with us. I know that for CloudStack, NFS and Ceph are much preferred,
but we currently might not have enough budget to buy another server just
for eith
Hi Bryan,
Thanks for your input.
I will double check.
BR,
Wilken
-----Original Message-----
From: Bryan Tiang
Sent: Thursday, 15 August 2024 13:28
To: users@cloudstack.apache.org
Subject: Re: ACS 4.19.1.1 - primary storage usage
Hey Wilken,
We are using the same setup as yours
GitHub user jack99trade added a comment to the discussion: Primary Storage
Deletion , after Zone is deleted.
It's solved now. Thanks. I first enabled the storage, then the option for
maintenance mode became visible, followed by delete.
Thanks @DaanHoogland
GitHub link:
https
GitHub user jack99trade added a comment to the discussion: Primary Storage
Deletion , after Zone is deleted.
Hi @DaanHoogland, it was an NFS-mounted Primary Share and there are only 3
options

No
GitHub user DaanHoogland added a comment to the discussion: Primary Storage
Deletion , after Zone is deleted.
@jack99trade I don't think you should be allowed to delete a zone when it still
contains any resources, so I wonder how the primary storage could still be
there, but for dlet
; running ACS 4.19.1.1 with linstor as primary storage.
> We are seeing more used primary storage in ACS than volumes/instance
> snapshots created.
>
> For example:
>
> <>
>
> But in total only 1649 GiB storage used for all volumes/instance snapshots.
>
> Is the
Hi all,
running ACS 4.19.1.1 with linstor as primary storage.
We are seeing more used primary storage in ACS than volumes/instance snapshots
created.
For example:
But in total only 1649 GiB storage is used for all volumes/instance snapshots.
Is there any way to investigate this discrepancy
state": "Up",
> > "type": "Filesystem",
> > "zoneid": "07d64765-3123-4fc2-b947-25d2c36f5bb4",
> > "zonename": "Zone-A"
> > }
> > }
>
>
> Via UI - probably you're righ
quot;,
> "type": "Filesystem",
> "zoneid": "07d64765-3123-4fc2-b947-25d2c36f5bb4",
> "zonename": "Zone-A"
> }
> }
Via UI - probably you're right that there is a bug. When the data is
fetched it shows only th
Hello everyone,
The docs claim that this can be done from the UI and by manually modifying
the agent on the KVM host, but I wonder if it can be done from the CLI? Here is
my experience so far (CS 4.19.1.0, one pod with mixed XCP and KVM clusters).
- Manually modifying the agent: It works
- From Primary
> >>> at com.cloud.utils.nio.Task.call(Task.java:29)
> >>> at
> >>> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> >>> at
> >>>
> >>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.j
To: Muhammad Hanis Irfan Mohd Zaid
Cc: users@cloudstack.apache.org
Subject: Re: Unable to add Ceph RBD for primary storage (No such file
or directory)
Hi,
I just tested adding a ceph pool on alma9 (it has the same
package/version installed), it worked.
What's the ceph version?
-Wei
do
>
> > https://imgur.com/a/eN45YWa
> >
> > Thanks.
> >
> > On Wed, 7 Aug 2024 at 20:39, Rohit Yadav
> wrote:
> >
> >> Based on the logs, the error is due to some kind of rbd pool
> >> configuration. Can you try to add rbd pool on the kvm directly, see if
---
From: Wei ZHOU
Sent: Wednesday, August 7, 2024 5:45:34 PM
To: Muhammad Hanis Irfan Mohd Zaid
Cc: users@cloudstack.apache.org
Subject: Re: Unable to add Ceph RBD for primary storage (No such file
or directory)
Hi,
I just tested adding ceph pool on alma9 (it has the same
package/ve
To: users@cloudstack.apache.org
Cc: ustcweiz...@gmail.com ; rohit.ya...@shapeblue.com
Subject: Re: Unable to add Ceph RBD for primary storage (No such file or
directory)
I'm running Ceph 18.2.4 reef (stable).
Can you kindly share any reference on directly adding the pool to KVM? I'm
your
> ceph nodes/mons?
>
> Regards.
>
> Regards.
>
>
>
>
>
> --
> From: Wei ZHOU
> Sent: Wednesday, August 7, 2024 5:45:34 PM
> To: Muhammad Hanis Irfan Mohd Zaid
> Cc: users@cloudstack.apache.org
> Subject: Re: Unable to add Ceph RBD for primary stor
Sent: Wednesday, August 7, 2024 5:45:34 PM
To: Muhammad Hanis Irfan Mohd Zaid
Cc: users@cloudstack.apache.org
Subject: Re: Unable to add Ceph RBD for primary storage (No such file or
directory)
Hi,
I just tested adding a ceph pool on alma9 (it has the same
package/version installed), it worked.
or directory
>>
>> Have you installed the package "libvirt-daemon-driver-storage-rbd" on
>> the kvm host ?
>>
>> -Wei
>>
>> On Wed, Aug 7, 2024 at 11:27 AM Muhammad Hanis Irfan Mohd Zaid
>> wrote:
>> >
>> > I'm t
ge "libvirt-daemon-driver-storage-rbd" on
> the kvm host ?
>
> -Wei
>
> On Wed, Aug 7, 2024 at 11:27 AM Muhammad Hanis Irfan Mohd Zaid
> wrote:
> >
> > I'm trying to add a Ceph RBD pool for primary storage use. I've 5 Ceph
> MONs
> > in
ot; on
the kvm host ?
-Wei
On Wed, Aug 7, 2024 at 11:27 AM Muhammad Hanis Irfan Mohd Zaid
wrote:
>
> I'm trying to add a Ceph RBD pool for primary storage use. I've 5 Ceph MONs
> in my POC lab. Ping and telnet to all the Ceph MONs with port 6789 works.
>
> I'm
I'm trying to add a Ceph RBD pool for primary storage use. I've 5 Ceph MONs
in my POC lab. Ping and telnet to all the Ceph MONs with port 6789 works.
I'm following the steps from this:
- https://docs.ceph.com/en/reef/rbd/rbd-cloudstack/
- https://rohityadav.cloud/blog/ceph/
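As the replies in this thread converge on the libvirt RBD storage backend, a quick check on an EL-family KVM host (alma9 is what Wei tested on; commands assume rpm/dnf) could be:

# is the libvirt RBD storage driver present on the KVM host?
rpm -q libvirt-daemon-driver-storage-rbd
# if it is missing, install it and restart the relevant services
dnf install -y libvirt-daemon-driver-storage-rbd
systemctl restart libvirtd cloudstack-agent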
GitHub user rohityadavcloud added a comment to the discussion: How CloudStack
handles multiple NFS type Primary Storage
@shaerul It follows a lazy deployment strategy, i.e. it copies the templates
(images) to a primary storage only when it needs to create a VM's root disk on
a pool. Th
GitHub user shaerul added a comment to the discussion: How CloudStack handles
multiple NFS type Primary Storage
@rohityadavcloud
Thank you very much for all the documentation you've provided on CloudStack
installation and troubleshooting. It's been incredibly helpful and has made
GitHub user rohityadavcloud added a comment to the discussion: How CloudStack
handles multiple NFS type Primary Storage
@shaerul for all NFS primary storages, CloudStack copies the template/ISO
(image) to the primary storage from the secondary storage, which is called
preparing the template on
GitHub user rohityadavcloud added a comment to the discussion: How CloudStack
handles multiple NFS type Primary Storage
Since this is connected to users@ ML, I don’t see why users can’t ask here.
Anybody replying on users@ wouldn’t be reflected here though, but they can
follow the link to
GitHub user shaerul added a comment to the discussion: How CloudStack handles
multiple NFS type Primary Storage
Got it, thanks. Can you please delete this from here?
GitHub link:
https://github.com/apache/cloudstack/discussions/9073#discussioncomment-9389331
GitHub user andrijapanicsb added a comment to the discussion: How CloudStack
handles multiple NFS type Primary Storage
Please post your questions on the mailing list, GitHub is not the right place
for it (it creates clutter) and you will get a much quicker reply on the
mailing list (users
Hi,
Latest version 4.19.01
Once I calmed down, I believe the Primary storage was a red herring, as both
hosts are down and the cloudstack agent is not starting, which I believe is
related to a traefik load balancer that I also had to recover. Let me do some
more troubleshooting and I will come
king about here?
Thanks,
Jayanth
From: Niclas Lindblom
Sent: Saturday, April 6, 2024 2:11:14 pm
To: users@cloudstack.apache.org
Subject: Primary storage recovery
Hello,
I have had a disk failure on my NFS server which hosts primary and secondary
storage. I ha
Hello,
I have had a disk failure on my NFS server which hosts primary and secondary
storage. I have managed to restore a backup and the file structure is back on
the primary storage on the NFS server. However, it seems CloudStack has lost
the reference to it; Primary storage is showing as "up
Hey Antoine,
Sounds like a bug to me. Can you click on
github.com/apache/cloudstack/issues/new ;) ?
On Fri, Feb 2, 2024 at 10:48 PM Antoine Boucher wrote:
>
> In ACS version 4.18.1, when conducting a VM Primary Storage Migration, should
> the list of potential destinations exclude an
In ACS version 4.18.1, when conducting a VM Primary Storage Migration, should
the list of potential destinations exclude any Primary Storages that are
disabled?
It currently shows them all.
Regards,
Antoine
Hi Jeremy,
I don't think CloudStack migrates all the volumes on the primary storage when
you put it into maintenance.
-Jithin
From: Jeremy Hansen
Date: Saturday, 20 January 2024 at 4:21 PM
To: users@cloudstack.apache.org
Subject: Re: Issues migrating primary storage
I’m trying to put my NFS
I'm trying to put my NFS primary storage into maintenance mode, which I
believe is supposed to migrate all of its storage, correct? The problem is that
I don't know how to get a status on this job. I can't really tell if it's
working. Management server doesn’t really have anything in the logs…. I
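On the "how do I get a status" question, one way that may help is to watch it from CloudMonkey (a sketch; adjust filters for your setup):

# list async jobs known to the management server; volume migrations normally appear here
cmk list asyncjobs
# list volumes; a volume that is being moved off the pool reports state Migrating
cmk list volumes listall=true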
t;
> When making snapshots, the snapshot is first stored on primary storage and
> then transported to secondary storage; afterwards it is deleted from primary,
> leaving the copy on secondary storage. We currently use the local KVM server
> as the primary storage for our VM instances. Th
Jimmy,
did you try this
https://cloudstack.apache.org/api/apidocs-4.18/apis/deleteStoragePool.html
(with force=true)?
On Sat, Jan 13, 2024 at 9:34 AM Jeremy Hansen wrote:
>
> Is there a way I can delete a primary storage configuration if the storage no
> longer exists? This is a tes
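The equivalent CloudMonkey call for the API Daan links above would be roughly as follows (pool id is a placeholder; per that API page the boolean parameter is spelled "forced"):

# force-remove a storage pool whose backing storage no longer exists
cmk delete storagepool id=<pool-uuid> forced=true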
only work with primary storage and must have iSCSI or NFS storage.
> In our current setup, the primary storage is Linstor raw block storage.
> To enable HA for hosts and HA-enabled KVM VM features, is it possible that
> we create another primary storage NFS with a capacity of only 200 MB
t; > mailto:jithin.r...@shapeblue.com)> wrote:
> > > Hi Jeremy,
> > >
> > > Have you checked the ‘wait’ parameter? Used as wait * 2 timeout.
> > >
> > > -Jithin
> > >
> > > From: Jeremy Hansen
> > > Date: Wednesday, 17 January
ut.
> >
> > -Jithin
> >
> > From: Jeremy Hansen
> > Date: Wednesday, 17 January 2024 at 12:14 PM
> > To: users@cloudstack.apache.org
> > Subject: Re: Issues migrating primary storage
> > Unfortunately the upgrade didn’t help:
> >
> > Resource
Good Day
When making snapshots, the snapshot is first stored on primary
storage and then transported to secondary storage; afterwards it is
deleted from primary, leaving the copy on secondary storage. We
currently use the local KVM server as the primary storage for our VM
instances. This
;
> Have you checked the ‘wait’ parameter? Used as wait * 2 timeout.
>
> -Jithin
>
> From: Jeremy Hansen
> Date: Wednesday, 17 January 2024 at 12:14 PM
> To: users@cloudstack.apache.org
> Subject: Re: Issues migrating primary storage
> Unfortunately the upgrade didn’
Hi Jeremy,
Have you checked the ‘wait’ parameter? Used as wait * 2 timeout.
-Jithin
From: Jeremy Hansen
Date: Wednesday, 17 January 2024 at 12:14 PM
To: users@cloudstack.apache.org
Subject: Re: Issues migrating primary storage
Unfortunately the upgrade didn’t help:
Resource [StoragePool:3
Unfortunately the upgrade didn’t help:
Resource [StoragePool:3] is unreachable: Volume
[{"name”:”bigdisk","uuid":"8f24b8a6-229a-4311-9ddc-d6c6acb89aca"}] migration
failed due to [com.cloud.utils.exception.CloudRuntimeException: Failed to copy
/mnt/11cd19d0-f207-3d01-880f-8d01d4b15020/8f24b8a6-2
Upgraded to 4.18.1.0 and trying again…
-jeremy
> On Tuesday, Jan 16, 2024 at 7:08 PM, Jeremy Hansen (mailto:jer...@skidrow.la)> wrote:
> Unfortunately, this didn’t seem to have an impact. Volume migration still
> eventually fails. Should I move to 4.18.1.0?
>
> Thanks
> -jeremy
>
>
>
> > On Tue
Unfortunately, this didn’t seem to have an impact. Volume migration still
eventually fails. Should I move to 4.18.1.0?
Thanks
-jeremy
> On Tuesday, Jan 16, 2024 at 7:06 AM, Suresh Kumar Anaparti
> mailto:sureshkumar.anapa...@gmail.com)>
> wrote:
> Hi Jeremy,
>
> Can you extend with the config