Hi QEMU developers,
I'm trying to inject some operations during the emulated device teardown
phase.
For an emulated PCIe device such as NVMe or IVSHMEM, I notice that QEMU
registers PCIDeviceClass pc->init and pc->exit functions for the device.
->init() (e.g. nvme_init() or ivshmem_init()) are
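For concreteness, here is a minimal sketch of the pattern as I understand it
(QEMU 2.x-era class init, as nvme.c and ivshmem.c do; the my_dev_* names are
mine, not from the tree):

#include "qemu/osdep.h"
#include "hw/pci/pci.h"

/* Hypothetical device following the nvme.c/ivshmem.c pattern. */
static int my_dev_init(PCIDevice *pci_dev)
{
    /* set up BARs, queues, IRQs, ... */
    return 0;
}

static void my_dev_exit(PCIDevice *pci_dev)
{
    /* Teardown work injected here runs on device_del/hot-unplug,
     * before the generic PCI cleanup frees the device. */
}

static void my_dev_class_init(ObjectClass *oc, void *data)
{
    PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);

    pc->init = my_dev_init;
    pc->exit = my_dev_exit;
}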
Hi all,
I'm trying to map a host MMIO region (a host PCIe device BAR) into guest
physical address space. The goal is to enable direct control of that host
MMIO region from the guest OS by accessing a certain GPA.
I know the address of the host MMIO region (one page). First I map the page
into the QEMU process address space
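Here is roughly what I have in mind, as a sketch (the sysfs path and the GPA
are placeholders; error handling elided):

#include <fcntl.h>
#include <sys/mman.h>
#include "qemu/osdep.h"
#include "exec/memory.h"
#include "exec/address-spaces.h"

/* Map one page of a host BAR and expose it at a fixed GPA. */
static void map_host_bar_page(Object *owner, hwaddr gpa)
{
    static MemoryRegion bar_mr;
    int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
    void *hva = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* Back a MemoryRegion with the mmap'ed host pointer... */
    memory_region_init_ram_ptr(&bar_mr, owner, "host-bar-page", 4096, hva);
    /* ...and splice it into the guest physical address space. */
    memory_region_add_subregion(get_system_memory(), gpa, &bar_mr);
}

(If plain RAM semantics are wrong for MMIO, memory_region_init_ram_device_ptr(),
which VFIO uses for exactly this, may be the better fit.)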
Hi Peter,
Just a follow-up on my previous question: I have figured it out by
experimenting with QEMU.
I'm writing to thank you again for your help! I really appreciate it.
Thank you!
Best,
Huaicheng
On Fri, Jun 1, 2018 at 1:00 AM Huaicheng Li
wrote:
> Hi Peter,
>
> Thank you
This way, the guest OS
won't be able to access my buffer and use it like other regular RAM.
Thanks!
Best,
Huaicheng
On Thu, May 31, 2018 at 3:11 AM Peter Maydell
wrote:
> On 30 May 2018 at 01:24, Huaicheng Li wrote:
> > Dear QEMU/KVM developers,
> >
> > I was tr
Dear QEMU/KVM developers,
I was trying to map a buffer in the host QEMU process to a guest user space
application. I tried to achieve this by allocating a buffer in the guest
application first, then mapping this buffer into the QEMU process address
space via GVA -> GPA -> HVA (GPA to HVA is done via cpu_physical_memory_map)
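The GPA -> HVA step I used looks like this minimal sketch (one page, write
access assumed):

#include "qemu/osdep.h"
#include "exec/cpu-common.h"

/* Map one guest-physical page into the QEMU process (GPA -> HVA). */
static void *map_guest_page(hwaddr gpa, hwaddr *len)
{
    *len = 4096;
    return cpu_physical_memory_map(gpa, len, 1 /* is_write */);
}

/* The mapping must later be released with
 * cpu_physical_memory_unmap(hva, len, 1, len); */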
[test]
numjobs=1
Signed-off-by: Huaicheng Li
---
hw/block/nvme.c | 97 +---
hw/block/nvme.h | 7
include/block/nvme.h | 2 ++
3 files changed, 102 insertions(+), 4 deletions(-)
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 85d
> > On 27/02/2018 10:05, Huaicheng Li wrote:
> > > Great to know that you'd like to mentor the project! If so, can we
> > > make it an official project idea and put it on the QEMU GSoC page?
> >
> > Submissions need not come from the QEMU GSoC page. You are free to
Sounds great. Thanks!
On Tue, Feb 27, 2018 at 5:04 AM, Paolo Bonzini wrote:
> On 27/02/2018 10:05, Huaicheng Li wrote:
> > Including a RAM disk backend in QEMU would be nice too, and it may
> > interest you as it would reduce the delta between upstream QEMU and
is something that's worth
putting effort into.
Best,
Huaicheng
On Mon, Feb 26, 2018 at 2:45 AM, Paolo Bonzini wrote:
> On 25/02/2018 23:52, Huaicheng Li wrote:
> > I remember there were some discussions back in 2015 about this, but I
> > don't see that it was ever finished.
Hi all,
The project would be about utilizing the shadow doorbell buffer feature in
NVMe 1.3 to enable QEMU-side polling for the virtualized NVMe device, thus
achieving performance comparable to virtio-dataplane.
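As a rough illustration of the polling side (all structures and names below
are hypothetical; the real layout is negotiated through the NVMe 1.3 Doorbell
Buffer Config command):

#include <stdint.h>

#define SQ_DEPTH 64                /* hypothetical queue depth */

typedef struct {
    volatile uint32_t *shadow_db;  /* guest-written tail, in guest RAM */
    uint32_t last_tail;            /* tail QEMU has already consumed */
} ShadowSQ;

/* Poll the shadow doorbell instead of waiting for an MMIO doorbell exit. */
static void poll_sq(ShadowSQ *sq)
{
    uint32_t tail = *sq->shadow_db;
    while (sq->last_tail != tail) {
        /* fetch and execute one submission queue entry here */
        sq->last_tail = (sq->last_tail + 1) % SQ_DEPTH;
    }
}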
**Why not virtio?**
The reason is that many industrial/academic researchers use QEMU NVMe as
Hi all,
I'm writing to ask if it's possible to use the irqfd mechanism in QEMU's NVMe
virtual controller implementation. My search results show that back in
2015 there was a discussion about improving QEMU NVMe performance by utilizing
eventfd for guest-to-host notification, so I guess irqfd should also
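For context, the shape I have in mind is roughly this sketch (the
kvm_irqchip_* signatures have changed across QEMU versions, so treat this as
an assumption-laden outline, not the exact API):

#include "qemu/osdep.h"
#include "qemu/event_notifier.h"
#include "hw/pci/pci.h"
#include "sysemu/kvm.h"

/* Route an MSI vector through KVM and attach an irqfd to it, so signalling
 * the eventfd injects the interrupt without a userspace exit. */
static int attach_irqfd(EventNotifier *n, int vector, PCIDevice *pci_dev)
{
    int virq = kvm_irqchip_add_msi_route(kvm_state, vector, pci_dev);
    if (virq < 0) {
        return virq;
    }
    return kvm_irqchip_add_irqfd_notifier_gsi(kvm_state, n, NULL, virq);
}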
> On May 16, 2016, at 11:33 AM, Stefan Hajnoczi wrote:
>
> The way it's done in the "null" block driver is:
>
> static coroutine_fn int null_co_common(BlockDriverState *bs)
> {
>     BDRVNullState *s = bs->opaque;
>
>     if (s->latency_ns) {
>         co_aio_sleep_ns(bdrv_get_aio_context(bs), QEMU_CLOCK_REALTIME,
>                         s->latency_ns);
>     }
>     return 0;
> }
Hi all,
My goal is to add latency to each I/O without blocking the submission path.
A model already tells me how long each I/O should wait before it’s submitted
to the AIO queue. The question now is how to make each I/O wait that long
before it’s finally handled by the worker threads.
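One shape this could take, as a sketch (DelayedReq and submit_now are
hypothetical; only the timer calls are real QEMU API):

#include "qemu/osdep.h"
#include "qemu/timer.h"

typedef struct {
    QEMUTimer *timer;
    void (*submit_now)(void *req);  /* hypothetical real submission hook */
    void *req;
} DelayedReq;

static void delay_cb(void *opaque)
{
    DelayedReq *d = opaque;
    d->submit_now(d->req);          /* fires after the modelled latency */
    timer_free(d->timer);
    g_free(d);
}

/* Arm a one-shot timer instead of blocking the submission path. */
static void submit_after(DelayedReq *d, int64_t delay_ns)
{
    d->timer = timer_new_ns(QEMU_CLOCK_REALTIME, delay_cb, d);
    timer_mod(d->timer, qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + delay_ns);
}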
> On Apr 13, 2016, at 1:07 PM, John Snow wrote:
>
> Why do you want to use IDE? If you are looking for performance, why not
> a virtio device?
I’m just trying to understand how IDE emulation works and see where the
overhead comes in. Thank you for the detailed explanation. I really appreciate
it.
> On Mar 14, 2016, at 10:09 PM, Huaicheng Li wrote:
>
>
>> On Mar 13, 2016, at 8:42 PM, Fam Zheng wrote:
>>
>> On Sun, 03/13 14:37, Huaicheng Li (coperd) wrote:
>>> Hi all,
>>>
>>> What I’m confused about is this:
>>>
>>
> On Mar 13, 2016, at 8:42 PM, Fam Zheng wrote:
>
> On Sun, 03/13 14:37, Huaicheng Li (coperd) wrote:
>> Hi all,
>>
>> What I’m confused about is this:
>>
>> If one I/O is too large and may need several rounds (say 2) of DMA transfers,
>> it seems
Hi all,
I’ve run into some trouble understanding IDE emulation:
(1) IDE I/O down path (in the VCPU thread):
Upon KVM_EXIT_IO, the corresponding disk ioport write function writes the I/O
info into IDEState; the IDE read callback will then eventually split it into
**several DMA transfers** and
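To make my current understanding concrete (the function names are from the
hw/ide/ sources I read and may differ across versions):

/*
 * VCPU thread, on KVM_EXIT_IO for the IDE ports:
 *   ide_ioport_write()          -- latch LBA/count/command into IDEState
 *     -> ide_exec_cmd()         -- e.g. a READ DMA command
 *       -> ide_sector_start_dma()
 *
 * The bus-master DMA engine (bmdma register writes) then drives:
 *   ide_dma_cb()                -- transfers as much as the current PRD
 *                                  entries allow, then re-arms itself
 *                                  until nsector reaches 0, i.e. the
 *                                  **several DMA transfers** above
 */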
> On Mar 5, 2016, at 8:42 PM, Huaicheng Li (coperd) wrote:
>
>
>> On Mar 1, 2016, at 3:01 PM, Paolo Bonzini wrote:
>>
>> This is done
>> because the worker threads only care about the queued request list, not
>> about active or completed requests.
> On Mar 1, 2016, at 3:01 PM, Paolo Bonzini wrote:
>
> This is done
> because the worker threads only care about the queued request list, not
> about active or completed requests.
Do you think it would be useful to add an API for inserting a request back
onto the queued list? For example, in c
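Something like the following is what I mean; this is purely hypothetical code
written as if it lived inside util/thread-pool.c (no such function exists
today):

/* Hypothetical: push an element back onto the request list so a worker
 * picks it up again later. */
static void thread_pool_requeue(ThreadPool *pool, ThreadPoolElement *elem)
{
    qemu_mutex_lock(&pool->lock);
    elem->state = THREAD_QUEUED;
    QTAILQ_INSERT_TAIL(&pool->request_list, elem, reqs);
    qemu_mutex_unlock(&pool->lock);
    qemu_sem_post(&pool->sem);  /* wake one worker */
}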
> On Mar 1, 2016, at 3:34 PM, Stefan Hajnoczi wrote:
>
> Have you seen Linux Documentation/device-mapper/delay.txt?
>
> You could set up a loopback block device and put the device-mapper delay
> target on top to simulate latency.
I’m working on an idea to emulate the latency of SSD read/write
Hi all,
I’m trying to conditionally add some latency to I/O requests (qemu_paiocb, from
**IDE** disk emulation, **raw** image file).
My idea is to add the following to the worker thread (sketched below):
* First, set a timer for each incoming qemu_paiocb structure (e.g. 2 ms)
* When the worker thread handles this I/O
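Sketch of the worker-thread part (deadline_ns would be a new field I add to
qemu_paiocb, holding an absolute CLOCK_MONOTONIC time):

#include <errno.h>
#include <stdint.h>
#include <time.h>

/* Park the worker until the request's deadline passes. */
static void wait_until_deadline(int64_t deadline_ns)
{
    struct timespec ts = {
        .tv_sec  = deadline_ns / 1000000000LL,
        .tv_nsec = deadline_ns % 1000000000LL,
    };
    /* Absolute sleep; clock_nanosleep returns EINTR on signals. */
    while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &ts, NULL) == EINTR) {
        /* interrupted by a signal; sleep again until the deadline */
    }
}

(One downside: this ties up one worker thread for the whole delay.)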
elp. I will look into the code first.
> On Dec 9, 2015, at 3:20 AM, Dr. David Alan Gilbert
> wrote:
>
> * Huaicheng Li (lhc...@gmail.com) wrote:
>> Hi all,
>>
>> Please correct me if I’m wrong.
>>
>> I made some changes to IDE emulation (add some e
ld be appreciated. Thanks.
Huaicheng Li
ction? (I googled a lot and it seemed there were no
additional steps)
* Since the IA32_FEATURE_CONTROL MSR value should be set by the BIOS and is
kept unchanged at runtime, is there any modified BIOS that QEMU can
use to enable the setting? Currently my QEMU uses the default one.
--
Best Regards
Huaicheng Li