On Mon, May 16, 2016 at 02:20:33PM -0600, Alex Williamson wrote:
> On Mon, 16 May 2016 11:10:05 +1000
> Alexey Kardashevskiy wrote:
>
> > On 05/14/2016 08:25 AM, Alex Williamson wrote:
> > > On Wed, 4 May 2016 16:52:26 +1000
> > > Alexey Kardashevskiy wrote:
> >
Hi,
I'm not sure where the problem lies, hence the CC to both lists. Please copy me
on the reply.
I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a
Celeron 2961Y CPU. (libvirt detects it as a Nehalem with a bunch of extra
features.) QEMU reports version 2.2.0.
On Wed, May 04, 2016 at 04:52:21PM +1000, Alexey Kardashevskiy wrote:
> 6a81dd17 "spapr_iommu: Rename vfio_accel parameter" renamed the vfio_accel
> flag everywhere, but one spot was missed.
>
> Signed-off-by: Alexey Kardashevskiy
> Reviewed-by: David Gibson
On Wed, May 04, 2016 at 04:52:19PM +1000, Alexey Kardashevskiy wrote:
> At the moment the presence of vfio-pci devices on a bus affects the way
> the guest view table is allocated. If there is no vfio-pci on a PHB
> and the host kernel supports KVM acceleration of H_PUT_TCE, a table
> is allocated in
On Wed, May 04, 2016 at 04:52:20PM +1000, Alexey Kardashevskiy wrote:
> Currently TCE tables are created once at start and their sizes never
> change. We are going to change that by introducing Dynamic DMA windows
> support, where the DMA configuration may change during guest execution.
>
> This
On Wed, May 04, 2016 at 04:52:18PM +1000, Alexey Kardashevskiy wrote:
> The user could have picked LIOBN via the CLI but the device tree
> rendering code would still use the value derived from the PHB index
> (which is the default fallback if LIOBN is not set in the CLI).
>
> This replaces
On Wed, May 04, 2016 at 04:52:22PM +1000, Alexey Kardashevskiy wrote:
> The source guest could have reallocated the default TCE table and
> migrated a bigger/smaller table. This adds reallocation in post_load()
> if the default table size is different on source and destination.
>
> This adds
This series improves write_zeroes for qcow2
Since the work conflicts with my proposed patches to switch
write_zeroes to a byte-base interface, I figured I'd fix the
bugs and get this part nailed first, then rebase my other
work on top, rather than making Denis have to do the dirty work.
Changes
From: "Denis V. Lunev"
This patch follows the guidelines of all other tracepoints in qcow2, like the
ones in qcow2_co_writev. I think that they should dump values in the same
quantities or be changed all together.
Signed-off-by: Denis V. Lunev
CC: Eric Blake
is_zero_cluster() and is_zero_cluster_top_locked() are used only
by qcow2_co_write_zeroes(). The former is too broad (we don't
care if the sectors we are about to overwrite are non-zero, only
that all other sectors in the cluster are zero), so it needs to
be called up to twice but with smaller
From: "Denis V. Lunev"
We should split requests even if they are less than write_zeroes_alignment.
For example we can have the following request:
offset 62k
size 4k
write_zeroes_alignment 64k
The original code sent 1 request covering 2 qcow2 clusters, and resulted
in
From: "Denis V. Lunev"
Unaligned requests will occupy only one cluster. This is true since the
previous commit. Simplify the code taking this consideration into
account.
In other words, the caller is now buggy if it ever passes us an unaligned
request that crosses cluster
Add another test to 154, showing that we currently allocate a
data cluster in the top layer if any sector of the backing file
was allocated. The next patch will optimize this case.
Signed-off-by: Eric Blake
---
tests/qemu-iotests/154 | 40
From: Zhang Chen
This function is from net/socket.c; move it to net.c and net.h.
Add SocketReadState so that others can reuse net_fill_rstate().
Suggestion from Jason.
v4:
- move 'rs->finalize = finalize' to rs_init()
v3:
- remove SocketReadState init callback
-
From: Dmitry Fleytman
Code that will be shared is moved to separate files.
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
To make this device and the network packet
abstractions ready for IOMMU.
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
This patch extends the TX/RX packet abstractions with features that will
be used by the e1000e device implementation.
Changes are:
1. Support iovec lists for RX buffers
2. Deeper RX packets parsing
3. Loopback option for TX packets
4. Extended VLAN headers handling
5. RSS processing
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Eduardo Habkost
All handling of defaults (default_* variables) is inside vl.c;
move default_net there too, so we can more easily refactor that
code later.
Reviewed-by: Paolo Bonzini
Signed-off-by: Eduardo Habkost
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
This patch drops the "vmx" prefix from the packet abstraction names
to emphasize the fact that they are generic and not tied to any
specific network device.
These abstractions will be reused by e1000e emulation implementation
introduced by following
From: Dmitry Fleytman
These macros will be used by future commits introducing
e1000e device emulation and by vmxnet3 tracing code.
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
Added support for PCIe CAP v1, while reusing some of the existing v2
infrastructure.
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
This function will be used by e1000e device code.
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Dmitry Fleytman
Reviewed-by: Michael S. Tsirkin
Signed-off-by: Dmitry Fleytman
Signed-off-by: Leonid Bloch
Signed-off-by: Jason Wang
---
From: Zhou Jie
net_init_tap() has a huge stack usage of approximately 8192 bytes.
Move large arrays to the heap to reduce stack usage.
Signed-off-by: Zhou Jie
Signed-off-by: Jason Wang
---
net/tap.c | 6 --
1 file changed,
The following changes since commit 287db79df8af8e31f18e262feb5e05103a09e4d4:
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into
staging (2016-05-24 13:06:33 +0100)
are available in the git repository at:
https://github.com/jasowang/qemu.git tags/net-pull-request
From: Prasad J Pandit
When receiving packets over the MIPSnet network device, it uses a
receive buffer of size 1514 bytes. In case the controller
accepts large (MTU) packets, it could lead to memory corruption.
Add a check to avoid it.
Reported-by: Oleksandr Bazhaniuk
On Wed, May 04, 2016 at 04:52:17PM +1000, Alexey Kardashevskiy wrote:
> At the moment the IOMMU MR only translates to system memory.
> However if some new code changes this, we will need a clear indication why
> it is not working, so here is the check.
>
> Signed-off-by: Alexey Kardashevskiy
On Wed, May 04, 2016 at 04:52:15PM +1000, Alexey Kardashevskiy wrote:
> Since a788f227 "memory: Allow replay of IOMMU mapping notifications"
> when new VFIO listener is added, all existing IOMMU mappings are
> replayed. However there is a problem that the base address of
> an IOMMU memory region
On Thu, May 05, 2016 at 04:45:04PM -0600, Alex Williamson wrote:
> On Wed, 4 May 2016 16:52:14 +1000
> Alexey Kardashevskiy wrote:
>
> > When a new memory listener is registered, listener_add_address_space()
> > is called and which in turn calls region_add() callbacks of memory
On Wed, May 25, 2016 at 07:59:26AM -0600, Alex Williamson wrote:
> On Wed, 25 May 2016 16:34:37 +1000
> David Gibson wrote:
>
> > On Fri, May 13, 2016 at 04:24:53PM -0600, Alex Williamson wrote:
> > > On Fri, 13 May 2016 17:16:48 +1000
> > > Alexey Kardashevskiy
On Wed, 05/25 17:45, Vladimir Sementsov-Ogievskiy wrote:
> Hi!
>
> Are you going to update the series in the near future?
Yes, probably in a couple days.
Fam
>
> On 08.03.2016 07:44, Fam Zheng wrote:
> > v4: Rebase.
> > Add rev-by from John in patches 1-5, 7, 8.
> > Remove
On Wed, 25 May 2016 01:28:15 +0530
Kirti Wankhede wrote:
> Design for Mediated Device Driver:
> Main purpose of this driver is to provide a common interface for mediated
> device management that can be used by different drivers of different
> devices.
>
> This module
Hello Peter,
This is my QOM (devices) patch queue. Please pull.
I've needed to build-fix it twice by now, so if I fixed the #includes wrongly
please pick it up as a patch and tweak it or apply a cleanup on top.
Thanks,
Andreas
P.S. I don't seem to have a MAINTAINERS patch to go with it yet, but
Move the bus type and related APIs to a separate file, bus.c.
This is a first step in breaking up qdev.c into more manageable chunks.
Reviewed-by: Peter Maydell
[AF: Rebased onto osdep.h]
Signed-off-by: Andreas Färber
---
hw/core/Makefile.objs | 1 +
Most Zynq UltraScale+ users will be targeting and using the ZCU102
board instead of the development-focused EP108. To make our QEMU machine
names clearer, add a ZCU102 machine model.
Signed-off-by: Alistair Francis
---
There are differences between the two boards,
On 05/25/2016 12:22 PM, Paolo Bonzini wrote:
>> 1 QTAILQ should only be accessed using the interfaces defined in
>> queue.h. Its structs should not be directly used. So I created
>> interfaces in queue.h to query about its layout. If the implementation
>> is changed, these interfaces should be
Hi Richard,
Thank you for the helpful comments.
On Wed, May 25, 2016 at 1:35 PM, Richard Henderson wrote:
> On 05/24/2016 10:18 AM, Pranith Kumar wrote:
>> diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h
>> index 92be341..93ea42e 100644
>> ---
On 25/05/16 22:59, Pranith Kumar wrote:
> On Wed, May 25, 2016 at 3:43 PM, Sergey Fedorov wrote:
>> I think it would be better not to defer native support for the operation.
>> It should be a relatively simple instruction. Otherwise we could wind up
>> deferring this
On Wed, May 25, 2016 at 3:43 PM, Sergey Fedorov wrote:
>
> I think it would be better not to defer native support for the operation.
> It should be a relatively simple instruction. Otherwise we could wind up
> deferring this indefinitely.
>
Agreed. I will go with the native
On Wed, May 25, 2016 at 3:25 PM, Alex Bennée wrote:
> Should we make the emitting of the function call/TCGop conditional on
> MTTCG being enabled? If we are running in round-robin mode there is no
> need to issue any fence operations.
>
Also, we should check if SMP (> 1
On 05/25/2016 12:25 PM, Alex Bennée wrote:
That would solve the problem of converting the various backends
piecemeal - although obviously we should move to all backends having
"native" support ASAP. However, by introducing expensive substitute
functions we will slow down the translations as each
On 25/05/16 22:25, Alex Bennée wrote:
> Richard Henderson writes:
>> On 05/24/2016 10:18 AM, Pranith Kumar wrote:
>>> Signed-off-by: Pranith Kumar
>>> ---
>>> tcg/i386/tcg-target.h | 1 +
>>> tcg/i386/tcg-target.inc.c | 9 +
>>> tcg/tcg-opc.h
Richard Henderson writes:
> On 05/24/2016 10:18 AM, Pranith Kumar wrote:
>> Signed-off-by: Pranith Kumar
>> ---
>> tcg/i386/tcg-target.h | 1 +
>> tcg/i386/tcg-target.inc.c | 9 +
>> tcg/tcg-opc.h | 2 +-
>> tcg/tcg.c
> 1 QTAILQ should only be accessed using the interfaces defined in
> queue.h. Its structs should not be directly used. So I created
> interfaces in queue.h to query about its layout. If the implementation
> is changed, these interfaces should be changed accordingly. Code using
> these interfaces
Add a generic loader to QEMU which can be used to load images or set
memory values.
Signed-off-by: Alistair Francis
---
V7:
- Rebase
V6:
- Add error checking
V5:
- Rebase
V4:
- Allow the loader to work with every architecture
- Move the file to hw/core
-
This work is based on the original work by Li Guang with extra
features added by Peter C and myself.
The idea of this loader is to allow the user to load multiple images
or values into QEMU at startup.
Memory values can be loaded like this: -device
Signed-off-by: Alistair Francis
---
V6:
- Fixup documentation
V4:
- Re-write to be more comprehensive
docs/generic-loader.txt | 54 +
1 file changed, 54 insertions(+)
create mode 100644 docs/generic-loader.txt
diff
If the caller didn't specify an architecture for the ELF machine,
the load_elf() function will auto-detect it based on the ELF file.
Signed-off-by: Alistair Francis
---
V7:
- Fix typo
hw/core/loader.c | 10 ++
1 file changed, 10 insertions(+)
diff --git
On 05/17/2016 10:34 AM, Kevin Wolf wrote:
> Am 17.05.2016 um 11:15 hat Denis V. Lunev geschrieben:
>> We should split requests even if they are less than write_zeroes_alignment.
>> For example we can have the following request:
>> offset 62k
>> size 4k
>> write_zeroes_alignment 64k
>> The
I will try to explain my design rationale in detail here.
1. QTAILQ should only be accessed using the interfaces defined in
queue.h. Its structs should not be used directly. So I created
interfaces in queue.h to query its layout. If the implementation
is changed, these interfaces should be
On 25/05/16 21:03, Paolo Bonzini wrote:
>> The page table seems to be protected by 'mmap_lock' in user mode
>> emulation but by 'tb_lock' in system mode emulation. It may turn out to be
>> possible to read it safely even with no lock held.
> Yes, it is possible to at least follow the radix tree safely
> The page table seems to be protected by 'mmap_lock' in user mode
> emulation but by 'tb_lock' in system mode emulation. It may turn out to be
> possible to read it safely even with no lock held.
Yes, it is possible to at least follow the radix tree safely with no
lock held. The fields in the
> >> +/*
> >> + * Following 3 fields are for VMStateField which needs customized handling,
> >> + * such as QTAILQ in qemu/queue.h, lists, and tree.
> >> + */
> >> +const void *meta_data;
> >> +int (*extend_get)(QEMUFile *f, const void *metadata, void *opaque);
> >> +
On Tue, May 24, 2016 at 3:08 PM, Cleber Rosa wrote:
>
> On 05/13/2016 05:37 PM, Alistair Francis wrote:
>>
>>
>> +    if (elf_machine < 1) {
>> +        /* The caller didn't specify and ARCH, we can figure it out */
>
>
> Spotted a comment typo: s/and/an/
Thanks, sending a
There is a single remaining user in qemu-img, and another one in a test
case, both of which can be trivially converted to using BlockJob.blk
instead.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
Reviewed-by: Eric Blake
---
blockjob.c
- Original Message -
> From: "Anthony PERARD"
> To: "Paul Durrant"
> Cc: qemu-devel@nongnu.org, xen-de...@lists.xenproject.org, "Stefano
> Stabellini" , "Paolo
> Bonzini"
> Sent:
This changes the backup block job to use the job's BlockBackend for
performing its I/O. job->bs isn't used by the backup code any more
afterwards.
Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
Reviewed-by: Max Reitz
---
This changes the mirror block job to use the job's BlockBackend for
performing its I/O. job->bs isn't used by the mirroring code any more
afterwards.
Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
Reviewed-by: Max Reitz
---
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
---
block/backup.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/block/backup.c b/block/backup.c
index fec45e8..a990cf1 100644
---
This changes the commit block job to use the job's BlockBackend for
performing its I/O. job->bs isn't used by the commit code any more
afterwards.
Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
Reviewed-by: Max Reitz
---
From: Eric Blake
Commit 983a1600 changed the semantics of blk_write_zeroes() to
be byte-based rather than sector-based, but did not change the
name, which is an open invitation for other code to misuse the
function. Renaming to pwrite_zeroes() makes it more in line
with other
This changes the streaming block job to use the job's BlockBackend for
performing the COR reads. job->bs isn't used by the streaming code any
more afterwards.
Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
Reviewed-by: Alberto Garcia
Now that we pass the job to the function, bs is implied by that.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
Reviewed-by: Alberto Garcia
---
block/backup.c | 13 ++---
1 file changed, 6 insertions(+), 7 deletions(-)
diff
Also add trace points now that the function can be directly called.
Signed-off-by: Kevin Wolf
Reviewed-by: Max Reitz
Reviewed-by: Eric Blake
Reviewed-by: Alberto Garcia
---
block/block-backend.c | 21
So far, bdrv_close_all() first removed all root BlockDriverStates of
BlockBackends and monitor owned BDSes, and then assumed that the
remaining BDSes must be related to jobs and cancelled these jobs.
This order doesn't work that well any more when block jobs use
BlockBackends internally because
This adds a new BlockBackend field to the BlockJob struct, which
coexists with the BlockDriverState while converting the individual jobs.
When creating a block job, a new BlockBackend is created on top of the
given BlockDriverState, and it is destroyed when the BlockJob ends. The
reference to the
When draining intermediate nodes (i.e. nodes that aren't the root node
for at least one of their parents; with node references, the user can
always configure the graph to create this situation), we need to
propagate the .drained_begin/end callbacks all the way up to the root
for the drain to be
From: John Snow
Instead of relying on peeking at bs->job, we want to explicitly get
a reference to the job that was involved in this notifier callback.
Pack the Notifier inside of the BackupBlockJob so we can use
container_of to get a reference back to the BackupBlockJob
From: Alberto Garcia
The current way to obtain the list of existing block jobs is to
iterate over all root nodes and check which ones own a job.
Since we want to be able to support block jobs in other nodes as well,
this patch keeps a list of jobs that is updated every time
From: Paolo Bonzini
Signed-off-by: Paolo Bonzini
Signed-off-by: Kevin Wolf
---
dma-helpers.c   | 14 +++---
hw/block/nvme.c |  6 +++---
hw/ide/ahci.c   |  6 --
hw/ide/core.c   |  8 +---
From: Max Reitz
Now that throttling has been moved to the BlockBackend level, we do not
need to create a BDS along with the BB in the I/O throttling test.
Signed-off-by: Max Reitz
Reviewed-by: Kevin Wolf
Signed-off-by: Kevin Wolf
We had to forbid mirroring to a target BDS that already had a BB
attached because the node swapping at job completion would add a second
BB and we didn't support multiple BBs on a single BDS at the time. Now
we do, so we can lift the restriction.
As we allow additional BlockBackends for the
From: Max Reitz
There are no callers to bdrv_open() or bdrv_open_inherit() left that
pass a pointer to a non-NULL BDS pointer as the first argument of these
functions, so we can finally drop that parameter and just make them
return the new BDS.
Generally, the following
Until now, bdrv_drained_begin() used bdrv_drain() internally to drain
the queue. This is kind of backwards and caused quiescing code to be
duplicated because bdrv_drained_begin() had to ensure that no new
requests come in even after bdrv_drain() returns, whereas bdrv_drain()
had to have them
From: Max Reitz
The only caller of bdrv_close() left is bdrv_delete(). We may as well
assert that, in a way (there are some things in bdrv_close() that make
more sense under that assumption, such as the call to
bdrv_release_all_dirty_bitmaps() which in turn assumes that no
From: Paolo Bonzini
Callers of dma_blk_io have no way to pass extra data to the DMAIOFunc,
because the original callback and opaque are gone by the time DMAIOFunc
is called. On the other hand, the BlockBackend is usually derived
from those extra data that you could pass to
From: Max Reitz
blk_new() cannot fail so its Error ** parameter has become superfluous.
Signed-off-by: Max Reitz
Signed-off-by: Kevin Wolf
---
block/block-backend.c | 9 ++---
blockdev.c | 6 +-
When changing the BlockDriverState that a BdrvChild points to while the
node is currently drained, we must call the .drained_end() parent
callback. Conversely, when this means attaching a new node that is
already drained, we need to call .drained_begin().
bdrv_root_attach_child() now takes an
From: Max Reitz
It is unused now, so we may just as well drop it.
Signed-off-by: Max Reitz
Reviewed-by: Alberto Garcia
Reviewed-by: Kevin Wolf
Signed-off-by: Kevin Wolf
---
block.c | 5
The existing users of the function are:
1. blk_new_open(), which already enabled the write cache
2. Some test cases that don't care about the setting
3. blockdev_init() for empty drives, where the cache mode is overridden
with the value from the options when a medium is inserted
Therefore,
From: Max Reitz
Its only caller is blk_new_open(), so we can just inline it there.
The bdrv_new_root() call is dropped in the process because we can just
let bdrv_open() create the BDS.
Signed-off-by: Max Reitz
Signed-off-by: Kevin Wolf
This adds a common function that is called when attaching a new child to
a parent, removing a child from a parent and when reconfiguring the
graph so that an existing child points to a different node now.
Signed-off-by: Kevin Wolf
Reviewed-by: Eric Blake
The bdrv_next() users all leaked the BdrvNextIterator after completing
the iteration. Simply changing bdrv_next() to free the iterator before
returning NULL at the end of list doesn't work because some callers exit
the loop before looking at all BDSes.
This patch moves the BdrvNextIterator from
From: Max Reitz
bdrv_close() now asserts that the BDS's refcount is 0, therefore it
cannot have any parents and the bdrv_parent_cb_change_media() call is a
no-op.
Signed-off-by: Max Reitz
Reviewed-by: Kevin Wolf
Signed-off-by: Kevin Wolf
From: Max Reitz
If bdrv_open_inherit() creates a snapshot BDS and *pbs is NULL, that
snapshot BDS should be returned instead of the BDS under it.
This has worked so far because (nearly) all users of BDRV_O_SNAPSHOT use
blk_new_open() to create the BDS tree. bdrv_append()
From: Max Reitz
bdrv_append_temp_snapshot() uses bdrv_new() to create an empty BDS
before invoking bdrv_open() on that BDS. This is probably a relic from
when it used to do some modifications on that empty BDS, but now that is
unnecessary, so we can just set bs_snapshot to
The following changes since commit 287db79df8af8e31f18e262feb5e05103a09e4d4:
Merge remote-tracking branch 'remotes/ehabkost/tags/x86-pull-request' into
staging (2016-05-24 13:06:33 +0100)
are available in the git repository at:
git://repo.or.cz/qemu/kevin.git tags/for-upstream
for you to
On Wed, May 25, 2016 at 3:52 AM, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias"
>
> Use the in kernel GIC model when running with KVM enabled.
>
> Reviewed-by: Peter Maydell
> Signed-off-by: Edgar E. Iglesias
On 05/24/2016 10:18 AM, Pranith Kumar wrote:
Signed-off-by: Pranith Kumar
---
tcg/i386/tcg-target.h | 1 +
tcg/i386/tcg-target.inc.c | 9 +
tcg/tcg-opc.h | 2 +-
tcg/tcg.c | 1 +
4 files changed, 12 insertions(+), 1 deletion(-)
On 05/24/2016 10:18 AM, Pranith Kumar wrote:
-    /* We don't emulate caches so these are a no-op. */
+    if (TCG_TARGET_HAS_fence) {
+        tcg_gen_fence();
+    }
This should then be unconditional.
r~
On Wed, May 25, 2016 at 3:52 AM, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias"
>
> Delay the realization of the GIC until after CPUs are
> realized. This is needed for KVM as the in-kernel GIC
> model will fail if it is realized with no
On Wed, May 25, 2016 at 3:52 AM, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias"
>
> The way we currently model the RPU subsystem is of quite
> limited use. In addition to that, it causes problems for
> KVM and for GDB debugging.
>
> Make
On Wed, May 25, 2016 at 3:52 AM, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias"
>
> Add a secure prop to en/disable ARM Security Extensions.
> This is particularly useful for KVM runs.
>
> Default to disabled to match the behavior of