Re: [libvirt-users] Reg: content of disk is not reflecting in host.

2019-08-07 Thread bharath paulraj
Thank you for the clarification.

Regards,
Bharath

On Wed, Aug 7, 2019 at 10:16 PM Daniel P. Berrangé wrote:

> On Wed, Aug 07, 2019 at 09:57:02PM +0530, bharath paulraj wrote:
> > Hi Team,
> >
> > I am doing a small test and I don't know if my expectation is correct
> > or not. Pardon me if I am ignorant.
> > I created a VM and the VM is running. On the hypervisor I created a
> > ".img" file and attached it to the VM.
> > My expectation is that, if the VM writes files to the attached disk,
> > they should show up in the .img file created on the hypervisor. But it
> > is not working as I expected.
> > Please correct me if my expectation is wrong.
> >
> > Steps:
> > 1. Created disk.img on the hypervisor using the command: dd if=/dev/zero
> > of=disk.img bs=1M count=50; mkfs ext3 -F disk.img
> > 2. Attached the disk to the running VM using the command: virsh
> > attach-disk  --source disk.img  --target vdb --live
> > 3. In the VM, I mounted the disk and created a few files.
> > 4. On the hypervisor, I mounted disk.img to check if the files created
> > in the VM exist in the .img file.
> >    >> I am not able to see those files.
>
> Do *NOT* do step 4 - it is incredibly dangerous in general and may well
> result in filesystem corruption & serious data loss.
>
> Most filesystems are only designed to be used by a single OS at any time.
>
> By mounting it on the host, you have two separate OSes both accessing
> the same filesystem.  Even if you mount the filesystem with the
> read-only flag, there can still be writes to it, because the second OS
> to mount it will see it as "dirty" and try to replay the journal that
> reflects what the first OS was doing.
>
> Assuming you didn't actually corrupt your FS image, the likely reason
> you don't see the file from the host is that the guest OS probably hasn't
> flushed it out to disk yet - it'll still be cached in memory in the guest
> unless something explicitly ran 'sync'.
>
> Regards,
> Daniel
> --
> |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o- https://fstop138.berrange.com :|
> |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
>


-- 
Regards,
Bharath

Re: [libvirt-users] Reg: content of disk is not reflecting in host.

2019-08-07 Thread Daniel P . Berrangé
On Wed, Aug 07, 2019 at 09:57:02PM +0530, bharath paulraj wrote:
> Hi Team,
> 
> I am doing a small test and I don't know if my expectation is correct or
> not. Pardon me if I am ignorant.
> I created a VM and the VM is running. On the hypervisor I created a
> ".img" file and attached it to the VM.
> My expectation is that, if the VM writes files to the attached disk, they
> should show up in the .img file created on the hypervisor. But it is not
> working as I expected.
> Please correct me if my expectation is wrong.
> 
> Steps:
> 1. Created disk.img on the hypervisor using the command: dd if=/dev/zero
> of=disk.img bs=1M count=50; mkfs ext3 -F disk.img
> 2. Attached the disk to the running VM using the command: virsh attach-disk
>  --source disk.img  --target vdb --live
> 3. In the VM, I mounted the disk and created a few files.
> 4. On the hypervisor, I mounted disk.img to check if the files created
> in the VM exist in the .img file.
>    >> I am not able to see those files.

Do *NOT* do step 4 - it is incredibly dangerous in general and may well
result in filesystem corruption & serious data loss.

Most filesystems are only designed to be used by a single OS at any time.

By mounting it on the host, you have two separate OSes both accessing
the same filesystem.  Even if you mount the filesystem with the
read-only flag, there can still be writes to it, because the second OS
to mount it will see it as "dirty" and try to replay the journal that
reflects what the first OS was doing.

Assuming you didn't actually corrupt your FS image, the likely reason
you don't see the file from the host is that the guest OS probably hasn't
flushed it out to disk yet - it'll still be cached in memory in the guest
unless something explicitly ran 'sync'. 
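
For reference, a safer way to check from the host is to force a flush
inside the guest and then inspect the image read-only with libguestfs,
without mounting it on the host at all. A rough sketch, assuming the
guestfish tool is installed and that the filesystem sits directly on
disk.img with no partition table (as created above):

    # inside the guest: flush dirty pages out to the attached disk
    sync

    # on the host: list the files read-only via libguestfs (no host mount)
    guestfish --ro -a disk.img -m /dev/sda ls /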

Regards,
Daniel
-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|



[libvirt-users] Reg: content of disk is not reflecting in host.

2019-08-07 Thread bharath paulraj
Hi Team,

I am doing a small test and I don't know if my expectation is correct or
not. Pardon me if I am ignorant.
I created a VM and the VM is running. On the hypervisor I created a
".img" file and attached it to the VM.
My expectation is that, if the VM writes files to the attached disk, they
should show up in the .img file created on the hypervisor. But it is not
working as I expected.
Please correct me if my expectation is wrong.

Steps:
1. Created disk.img on the hypervisor using the command: dd if=/dev/zero
of=disk.img bs=1M count=50; mkfs ext3 -F disk.img
2. Attached the disk to the running VM using the command: virsh attach-disk
 --source disk.img  --target vdb --live
3. In the VM, I mounted the disk and created a few files.
4. On the hypervisor, I mounted disk.img to check if the files created
in the VM exist in the .img file.
   >> I am not able to see those files.

Regards,
Bharath

Re: [libvirt-users] Vm in state "in shutdown"

2019-08-07 Thread Michal Prívozník
On 8/5/19 1:25 PM, 马昊骢 Ianmalcolm Ma wrote:
> Description of problem:
> libvirt 3.9 on CentOS Linux release 7.4.1708 (kernel 
> 3.10.0-693.21.1.el7.x86_64) on Qemu version 2.10.0

I vaguely recall a bug like this, but I don't know any more details.
Then again, libvirt-3.9.0 is 2 years old, so I suggest trying to upgrade
libvirt and seeing if it helps. Alternatively, you may try running git
bisect to find the commit that fixed the problem.
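
If you go the bisect route, a minimal sketch (assuming a git checkout of
the libvirt tree, git >= 2.7 for the --term-* options, and that some newer
release, v5.6.0 is assumed here, no longer shows the problem) would be:

    cd libvirt                   # a checkout of the libvirt git tree
    git bisect start --term-old=broken --term-new=fixed
    git bisect broken v3.9.0     # release where the VM gets stuck "in shutdown"
    git bisect fixed v5.6.0      # assumed: a newer release that behaves correctly
    # at each step: build, install, re-run the shutdown/dommemstat race,
    # then mark the result with "git bisect broken" or "git bisect fixed"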

Michal


[libvirt-users] Vm in state "in shutdown"

2019-08-07 Thread 马昊骢 Ianmalcolm Ma
Description of problem:
libvirt 3.9 on CentOS Linux release 7.4.1708 (kernel 
3.10.0-693.21.1.el7.x86_64) on Qemu version 2.10.0

I’m currently facing a strange situation. Sometimes my VM is shown by ‘virsh
list’ as in state “in shutdown”, but there is no qemu-kvm process linked to it.

The libvirt log when the “in shutdown” state occurs is as follows.
“d470c3b284425b9bacb34d3b5f3845fe” is the VM’s name; the
remoteDispatchDomainMemoryStats API is called by ‘collectd’, which is used to
collect some VM running states and host information once every 30 seconds.

2019-07-25 14:23:58.706+: 15818: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: POWERDOWN, vm: 
d470c3b284425b9bacb34d3b5f3845fe, cost 1.413 secs
2019-07-25 14:23:59.601+: 15818: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: 
d470c3b284425b9bacb34d3b5f3845fe, cost 1.202 secs
2019-07-25 14:23:59.601+: 15818: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: STOP, vm: 
d470c3b284425b9bacb34d3b5f3845fe, cost 1.203 secs
2019-07-25 14:23:59.601+: 15818: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: 
d470c3b284425b9bacb34d3b5f3845fe, cost 1.203 secs
2019-07-25 14:23:59.629+: 15818: error : qemuMonitorIORead:597 : Unable to 
read from monitor: Connection reset by peer
2019-07-25 14:23:59.629+: 121081: warning : qemuProcessEventHandler:4840 : 
vm: d470c3b284425b9bacb34d3b5f3845fe, event: 6 locked
2019-07-25 14:23:59.629+: 15822: error : qemuMonitorJSONCommandWithFd:364 : 
internal error: Missing monitor reply object
2019-07-25 14:24:29.483+: 15821: warning : qemuGetProcessInfo:1468 : cannot 
parse process status data
2019-07-25 14:24:29.829+: 15823: warning : 
qemuDomainObjBeginJobInternal:4391 : Cannot start job (modify, none) for domain 
d470c3b284425b9bacb34d3b5f3845fe; current job is (query, none) owned by (15822 
remoteDispatchDomainMemoryStats, 0 ) for (30s, 0s)
2019-07-25 14:24:29.829+: 15823: error : qemuDomainObjBeginJobInternal:4403 
: Timed out during operation: cannot acquire state change lock (held by 
remoteDispatchDomainMemoryStats)
2019-07-25 14:24:29.829+: 121081: warning : 
qemuDomainObjBeginJobInternal:4391 : Cannot start job (destroy, none) for 
domain d470c3b284425b9bacb34d3b5f3845fe; current job is (query, none) owned by 
(15822 remoteDispatchDomainMemoryStats, 0 ) for (30s, 0s)
2019-07-25 14:24:29.829+: 121081: error : 
qemuDomainObjBeginJobInternal:4403 : Timed out during operation: cannot acquire 
state change lock (held by remoteDispatchDomainMemoryStats)
2019-07-25 14:24:29.829+: 121081: warning : qemuProcessEventHandler:4875 : 
vm: d470c3b284425b9bacb34d3b5f3845fe, event: 6, cost 31.459 secs

I’ve tried to find out how this problem happened. I analyzed the execution
process of the job and speculate that the problem occurs as follows:
step one: libvirt sends the 'system_powerdown' command to qemu.
step two: libvirt receives the qemu monitor close event and then handles the
EOF event.
step three: a remoteDispatchDomainMemoryStats job starts on the same VM.
step four: the worker thread handling the stop job waits on job.cond, with a
timeout of 30s.

It seems that the remoteDispatchDomainMemoryStats job is slow enough that the
stop job's wait times out.

Then I tried to reproduce this process. The steps are as follows:
First step:  add a sleep in ‘qemuProcessEventHandler’ by using
pthread_cond_timedwait, so that the 'virsh dommemstat active’ command can be
executed during this interval.
Second step: start a VM to test.
Third step: execute 'virsh shutdown active’ to shut down the VM.
Fourth step: execute 'virsh dommemstat active’ while the stop job is
sleeping (see the sketch after these steps).
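
A rough sketch of steps two to four, assuming the patched libvirtd (with the
20 s delay in qemuProcessEventHandler) is running and the test domain is
named "active":

    virsh start active          # second step: start the test VM
    virsh shutdown active       # third step: triggers the stop/EOF handling path
    virsh dommemstat active     # fourth step: issued while the handler is still sleeping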

Then it works: the test VM's state became 'in shutdown’, and the libvirt log
is as follows.
“active” is my test VM’s name.

2019-08-05 08:39:57.001+: 25889: warning : 
qemuDomainObjBeginJobInternal:4308 : Starting job: modify (vm=0x7f7bbc145fe0 
name=active, current job=none async=none)
2019-08-05 08:39:57.003+: 25889: warning : qemuDomainObjEndJob:4522 : 
Stopping job: modify (async=none vm=0x7f7bbc145fe0 name=active)
2019-08-05 08:39:57.003+: 25881: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: POWERDOWN, vm: active, cost 0.008 secs
2019-08-05 08:39:57.854+: 25881: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: active, cost 1.709 secs
2019-08-05 08:39:57.875+: 25881: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: STOP, vm: active, cost 1.751 secs
2019-08-05 08:39:57.875+: 25881: warning : 
qemuMonitorJSONIOProcessEvent:235 : type: SHUTDOWN, vm: active, cost 1.751 secs
2019-08-05 08:39:57.915+: 25881: warning : qemuMonitorIO:756 : Error on 
monitor 
2019-08-05 08:39:57.915+: 25881: warning : qemuMonitorIO:777 : Triggering 
EOF callback
2019-08-05 08:39:57.915+: 26915: warning : qemuProcessEventHandler:4822 : 
usleep 20s
2019-08-05 08:40:01.004+: 25886: warning : 
qemuDomainObjBeginJobInternal:4308 : Starting job: query (vm=0x7f7bbc145fe0 
name=active, current job=none