The VMs finally stopped and restarted. This is what I’m seeing in dmesg on the secondary storage VM:

root@s-60-VM:~# dmesg | grep -i error
[ 3.861852] blk_update_request: I/O error, dev vda, sector 6787872 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 3.865833] blk_update_request: I/O error, dev vda, sector 6787872 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 3.869553] systemd[1]: Failed to read configured hostname: Input/output error
[ 4.560419] EXT4-fs (vda6): re-mounted. Opts: errors=remount-ro
[ 4.646460] blk_update_request: I/O error, dev vda, sector 6787160 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4.650710] blk_update_request: I/O error, dev vda, sector 6787160 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 4.975915] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 4.980318] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.018828] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 5.022976] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.026750] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 5.460315] blk_update_request: I/O error, dev vda, sector 6787856 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 10.415215] print_req_error: 16 callbacks suppressed
[ 10.415219] blk_update_request: I/O error, dev vda, sector 6787864 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.362595] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.388990] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 13.787276] blk_update_request: I/O error, dev vda, sector 6399408 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
[ 13.791575] blk_update_request: I/O error, dev vda, sector 6399408 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 14.632299] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
[ 14.658283] blk_update_request: I/O error, dev vda, sector 6787136 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
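
Since vda is the SSVM’s virtual disk, I’m reading this as the disk image on storage (or the NFS path to it) going bad rather than anything inside the guest. Next I’ll check the backing image from the KVM host; roughly, with the pool path and volume name as placeholders for whatever virsh reports:

root@kvm-node:~# virsh domblklist s-60-VM
root@kvm-node:~# qemu-img check /mnt/<pool-uuid>/<volume-uuid>.qcow2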

-jeremy

> On Tuesday, Feb 21, 2023 at 8:57 PM, Me <jer...@skidrow.la> wrote:
> The node that CloudStack claims the system VMs are starting on shows no signs of any VMs running. virsh list is blank.
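>
> For completeness I’ll also check the variant that includes defined-but-stopped domains, in case the domain exists but never powered on:
>
> virsh list --all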
>
> Thanks
> -jeremy
>
>
>
> > On Tuesday, Feb 21, 2023 at 8:23 PM, Me <jer...@skidrow.la> wrote:
> > Also, just to note, I’m not sure how much made it into the logs. The system VMs are stuck in the Starting state, and trying to kill them through the interface doesn’t seem to do anything.
> >
> > -jeremy
> >
> >
> >
> >
> > > On Tuesday, Feb 21, 2023 at 8:20 PM, Me <jer...@skidrow.la> wrote:
> > > Is there something else I can use to submit logs? Too much for pastebin.
> > >
> > > Thanks
> > > -jeremy
> > >
> > >
> > >
> > > > On Tuesday, Feb 21, 2023 at 7:07 PM, Simon Weller <siwelle...@gmail.com> wrote:
> > > > Can you pull some management server logs and also put the CloudStack KVM
> > > > agent into debug mode before destroying the ssvm and share the logs?
> > > >
> > > > https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=30147350#content/view/30147350
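> > > >
> > > > For the KVM agent, debug mode is usually just flipping the log level in the agent's log4j config and restarting it. Roughly, on a stock 4.17 install (paths may differ on your setup):
> > > >
> > > > sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
> > > > systemctl restart cloudstack-agent
> > > > tail -f /var/log/cloudstack/agent/agent.log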
> > > >
> > > > On Tue, Feb 21, 2023, 8:33 PM Jeremy Hansen <jer...@skidrow.la.invalid> wrote:
> > > >
> > > > > Yes. It’s just a different partition on the same NFS server.
> > > > >
> > > > >
> > > > >
> > > > > On Tuesday, Feb 21, 2023 at 6:02 PM, Simon Weller <siwelle...@gmail.com> wrote:
> > > > > The new and old primary storage is in the same zone, correct?
> > > > > Did you also change out the secondary storage?
> > > > >
> > > > > On Tue, Feb 21, 2023, 7:59 PM Jeremy Hansen <jer...@skidrow.la.invalid> wrote:
> > > > >
> > > > > Yes, on KVM. I’ve been trying to destroy them from the interface and it just keeps churning. I did a destroy with virsh, but no status changed in the interface. Also, the newly created ones don’t seem to bring up their agent and never fully start.
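> > > > >
> > > > > If the UI keeps spinning I’ll try the destroy through the API next; something like this with CloudMonkey (ids are placeholders):
> > > > >
> > > > > cmk list systemvms
> > > > > cmk destroy systemvm id=<systemvm-uuid>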
> > > > >
> > > > > Thanks
> > > > >
> > > > >
> > > > >
> > > > > On Tuesday, Feb 21, 2023 at 4:37 PM, Simon Weller <siwelle...@gmail.com> wrote:
> > > > > Just destroy the old system VMs and they will be recreated on available storage.
> > > > >
> > > > > Are you on KVM?
> > > > >
> > > > >
> > > > >
> > > > > On Tue, Feb 21, 2023, 6:14 PM Jeremy Hansen <jer...@skidrow.la.invalid> wrote:
> > > > >
> > > > > How do I completely recreate the system VMs?
> > > > >
> > > > > I was able to get the old storage into full maintenance and deleted it, so maybe the system VMs are still using the old storage? Is there a way to tell the system VMs to use the new storage? A DB change?
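> > > > >
> > > > > (If it does come down to the DB, I’d start read-only and check which pool the system VM volumes still reference. Assuming the stock schema and the usual “cloud” DB user, something like:
> > > > >
> > > > > mysql -u cloud -p cloud -e "SELECT id, name, pool_id, state FROM volumes WHERE instance_id IN (SELECT id FROM vm_instance WHERE type IN ('SecondaryStorageVm','ConsoleProxy'));"
> > > > > )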
> > > > >
> > > > > Thanks!
> > > > >
> > > > >
> > > > >
> > > > > On Tuesday, Feb 21, 2023 at 1:36 PM, Simon Weller <siwelle...@gmail.com> wrote:
> > > > > Hey Jeremy,
> > > > >
> > > > > Is there anything in the management logs that indicates why it's not completing the maintenance action? Usually, this state is triggered by some stuck VMs that haven't migrated off of the primary storage.
> > > > >
> > > > > You mentioned the system VMs. Are they still on the old storage? Could this be due to some storage tags?
> > > > >
> > > > > -Si
> > > > >
> > > > > On Tue, Feb 21, 2023 at 2:35 PM Jeremy Hansen <jer...@skidrow.la.invalid> wrote:
> > > > >
> > > > > Any ideas on this? I’m completely stuck. I can’t bring up my system VMs and I can’t remove the old primary storage.
> > > > >
> > > > > -jeremy
> > > > >
> > > > >
> > > > >
> > > > > On Tuesday, Feb 21, 2023 at 2:35 AM, Me <jer...@skidrow.la> wrote:
> > > > > I tried to put one of my primary storage definitions into maintenance mode. Now it’s stuck in preparing for maintenance and I’m not sure how to remedy this situation:
> > > > >
> > > > > Cancel maintenance mode
> > > > > (NFS Primary) Resource [StoragePool:1] is unreachable: Primary storage with id 1 is not ready to complete migration, as the status is:PrepareForMaintenance
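> > > > >
> > > > > (Is there a way to force the cancel through the API instead of the UI? With CloudMonkey, pool id as a placeholder, I’d expect something like:
> > > > >
> > > > > cmk cancel storagemaintenance id=<pool-uuid>
> > > > > )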
> > > > >
> > > > > Restarted manager, agents, libvirtd. My secondary storage VM can’t start…
> > > > >
> > > > > 4.17.2.0. Using NFS for primary and secondary storage. I was attempting to migrate to a new volume. All volumes were moved to the new storage. I was simply trying to delete the old storage definition.
> > > > >
> > > > > Thanks
> > > > > -jeremy
