We can't emulate stwu since that may corrupt the current exception stack,
so we will have to do the real store operation in the exception return code.
Firstly we'll allocate a trampoline exception frame below the kprobed
function stack and copy the current exception frame to the trampoline.
Then we can
We need to add a new thread flag, TIF_KPROBE/_TIF_DELAYED_KPROBE,
for handling the kprobe operation while exiting an exception.
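The stwu semantics this series works around can be sketched in user-space C. This is only an illustration of the instruction's store-with-update behaviour (and why naively emulating it would move r1 on top of the exception frame); `regs_model` and the flat `memory` array are stand-ins, not kernel code.

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the PowerPC 'stwu rS, d(r1)' instruction: store the
 * 32-bit value of rS at r1+d, then update r1 = r1+d. gpr[1] plays the
 * role of the stack pointer r1. */
struct regs_model {
    uint32_t gpr[32];
};

static void stwu_r1(struct regs_model *regs, uint32_t src_reg,
                    int16_t d, uint32_t *memory, uint32_t mem_base)
{
    uint32_t ea = regs->gpr[1] + (int32_t)d;    /* effective address */
    memory[(ea - mem_base) / 4] = regs->gpr[src_reg];
    regs->gpr[1] = ea;                          /* the "update" part */
}

/* Demo: push a 16-byte frame; r1 moves down and the word lands there. */
static int stwu_demo_ok(void)
{
    uint32_t mem[32] = {0};
    struct regs_model regs = {{0}};
    regs.gpr[1] = 64;            /* r1 points 64 bytes into 'mem' */
    regs.gpr[3] = 0xdeadbeefu;
    stwu_r1(&regs, 3, -16, mem, 0);
    return regs.gpr[1] == 48 && mem[12] == 0xdeadbeefu;
}
```

The update of r1 is exactly what makes in-place emulation unsafe here: the new frame can overlap the exception frame the kernel is still using.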
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/include/asm/thread_info.h |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git
We need a copy mechanism to migrate the exception stack. But it looks like
copy_page() already implements this well, so we can implement
copy_exc_stack() directly on top of it.
Signed-off-by: Tiejun Chen tiejun.c...@windriver.com
---
arch/powerpc/include/asm/page_32.h |1 +
arch/powerpc/kernel/misc_32.S
ppc32/kprobe: Fix a bug for kprobe stwu r1
These patches fix the known kprobe bug:
[BUG?]3.0-rc4+ftrace+kprobe: set kprobe at instruction 'stwu' lead to system
crash/freeze
https://lkml.org/lkml/2011/7/3/156
We withdraw the original approach of providing a dedicated exception stack.
Now we don't do the real store operation when kprobing 'stwu Rx,(y)R1',
since that may corrupt the exception frame; instead we will do this
operation safely in the exception return code, after migrating the current
exception frame below the kprobed function's stack.
So we only update gpr[1] here and trigger a thread
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test SIGPENDING only if PR was set
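The decision described above can be sketched in C rather than the entry_64.S assembly. The flag values below are illustrative placeholders, not the real kernel definitions, and only the simple non-preempt path is modelled.

```c
#include <assert.h>

/* Illustrative TIF flag bits (not the kernel's actual values). */
#define TIF_SIGPENDING        (1u << 1)
#define TIF_NEED_RESCHED      (1u << 2)
#define _TIF_USER_WORK_MASK   (TIF_SIGPENDING | TIF_NEED_RESCHED)

/* Nonzero when the exception return path must divert to do_work.
 * 'returning_to_user' corresponds to the MSR_PR test; in the
 * !preempt case no user work is done when returning to the kernel. */
static int needs_do_work(unsigned int ti_flags, int returning_to_user)
{
    if (!returning_to_user)
        return 0;
    return (ti_flags & _TIF_USER_WORK_MASK) != 0;
}
```

The preempt case differs, as the mail notes: there the code also has to consider rescheduling on return to the kernel, which is where the convoluted SIGPENDING trick comes in.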
There was a bug in FMR initialization which meant FMR was always 0x100
in fsl_elbc_chip_init(), causing an FCM command timeout before
fsl_elbc_chip_init_tail() was called. Now we initialize CWTO to the maximum
timeout value rather than relying on the bootloader's setting.
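The fix described above amounts to programming the CWTO field to its maximum instead of inheriting whatever the bootloader left in FMR. In this sketch the field position and width (a 4-bit field at `FMR_CWTO_SHIFT`) are placeholders, not taken from the real fsl_lbc register layout.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical CWTO field layout within FMR (illustrative values). */
#define FMR_CWTO_SHIFT  12
#define FMR_CWTO_MASK   (0xfu << FMR_CWTO_SHIFT)

/* Return FMR with the command wait timeout forced to its maximum,
 * discarding whatever value the bootloader programmed. */
static uint32_t fmr_with_max_cwto(uint32_t fmr)
{
    fmr &= ~FMR_CWTO_MASK;
    fmr |= 0xfu << FMR_CWTO_SHIFT;
    return fmr;
}
```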
Signed-off-by: Shengzhou Liu
- fix NAND_CMD_READID command for ONFI detect.
- add NAND_CMD_PARAM command to read the ONFI parameter page.
Signed-off-by: Shengzhou Liu shengzhou@freescale.com
---
v3: unify the bytes of fbcr to 256.
v2: no changes
drivers/mtd/nand/fsl_elbc_nand.c | 18 ++
1 files
On 2011-12-12 05:57, Benjamin Herrenschmidt wrote:
The old PowerMac swim3 driver has some interesting locking issues,
using a private lock and failing to lock the queue before completing
requests, which triggered WARN_ONs among others.
This rips out the private lock, makes everything operate
On Mon, 12 Dec 2011, Benjamin Herrenschmidt wrote:
Any chance you can test this patch ? I would not be surprised if it
broke m68k since I had to do some of the changes in there blind, so
let me know... with this, I can again suspend/resume properly on a Pismo
while using the internal
So on a CONSOLE_PORT_ADD message, we would take the
(existing) ports_device::ports_lock, and for other control messages we
would just take the (new) port::port_lock? You are concerned that just
taking the ports_lock for all control messages could be too
restrictive? I wouldn't have expected these
On (Mon) 12 Dec 2011 [11:11:55], Miche Baker-Harvey wrote:
So on a CONSOLE_PORT_ADD message, we would take the
(existing) ports_device::ports_lock, and for other control messages we
would just take the (new) port::port_lock? You are concerned that just
taking the ports_lock for all control
On Tue, 2011-12-13 at 00:34 +1100, Finn Thain wrote:
On Mon, 12 Dec 2011, Benjamin Herrenschmidt wrote:
Any chance you can test this patch ? I would not be surprised if it
broke m68k since I had to do some of the changes in there blind, so
let me know... with this, I can again
I originally posted this as part of the DT clock bindings. I'm reposting
now since I've fixed up some bugs and I'm planning to put them into
linux-next.
The DT clock binding patches will be posted separately.
Cheers,
g.
arch/arm/boot/dts/testcases/tests-phandle.dtsi | 37 ++
A large chunk of qe_pin_request() is unnecessarily cut-and-paste
directly from of_get_named_gpio_flags(). This patch cuts out the
duplicate code and replaces it with a call to of_get_gpio().
v2: fixed compile error due to missing gpio_to_chip()
Signed-off-by: Grant Likely
of_parse_phandle_with_args() needs to return quite a bit of data. Rather
than making each datum a separate **out_ argument, this patch creates
struct of_phandle_args to contain all the returned data and reworks the
user of the function. This patch also enables of_parse_phandle_with_args()
to
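The struct-return idea in the patch above can be sketched as follows. Field names follow the patch description; `MAX_PHANDLE_ARGS`, the opaque `device_node`, and `demo_fill()` are stand-ins for illustration, not the real kernel definitions.

```c
#include <assert.h>

#define MAX_PHANDLE_ARGS 8          /* illustrative limit */

struct device_node;                 /* opaque stand-in */

/* One struct bundling everything of_parse_phandle_with_args() returns,
 * instead of a long list of **out_ parameters. */
struct of_phandle_args {
    struct device_node *np;         /* node the phandle resolved to */
    int args_count;                 /* number of argument cells */
    unsigned int args[MAX_PHANDLE_ARGS];
};

/* A hypothetical caller now receives all the data in one place. */
static int demo_fill(struct of_phandle_args *out)
{
    out->np = 0;
    out->args_count = 2;
    out->args[0] = 1;               /* e.g. an index cell */
    out->args[1] = 0;               /* e.g. a flags cell */
    return 0;
}

static int demo_ok(void)
{
    struct of_phandle_args a;
    demo_fill(&a);
    return a.args_count == 2 && a.args[0] == 1 && a.args[1] == 0;
}
```

Packing the outputs into one struct also makes it easy to extend the function later without touching every caller's signature.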
of_reset_gpio_handle() is largely a cut-and-paste copy of
of_get_named_gpio_flags(). There really isn't any reason for the
split, so this patch deletes the duplicate function
Signed-off-by: Grant Likely grant.lik...@secretlab.ca
Cc: Michal Simek mon...@monstr.eu
---
On Fri, 2011-12-09 at 17:42 +0800, shuo@freescale.com wrote:
From: Liu Shuo b35...@freescale.com
If we use a NAND flash chip whose number of pages in a block is greater
than 64 (for large page), we must treat the low bit of FBAR as being the
high bit of the page address due to the
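The addressing problem described above can be sketched with a 6-bit page field: the FCM splits a page address into FBAR (block) and FPAR (page within block) assuming 64 pages per block, so for a 128-pages-per-block chip the seventh page bit spills into FBAR's low bit. Field widths here come from the description above, not a datasheet.

```c
#include <assert.h>
#include <stdint.h>

#define FPAR_PAGE_BITS 6    /* hardware assumes 64 pages per block */

/* FBAR as the controller sees it: for a 128-pages-per-block chip this
 * includes the "extra" page-address bit in its low bit. */
static uint32_t fbar_for(uint32_t page_addr)
{
    return page_addr >> FPAR_PAGE_BITS;
}

static uint32_t fpar_for(uint32_t page_addr)
{
    return page_addr & ((1u << FPAR_PAGE_BITS) - 1);
}
```

For example, page 130 of a 128-pages-per-block chip (block 1, page 2) yields FBAR = 2 and FPAR = 2; the actual block number is FBAR >> 1.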
On Tue, 2011-12-06 at 18:09 -0600, Scott Wood wrote:
On 12/03/2011 10:31 PM, shuo@freescale.com wrote:
From: Liu Shuo shuo@freescale.com
Freescale FCM controller has a 2K size limitation of buffer RAM. In order
to support the NAND flash chip whose page size is larger than 2K
On 12/12/2011 03:09 PM, Artem Bityutskiy wrote:
On Tue, 2011-12-06 at 18:09 -0600, Scott Wood wrote:
On 12/03/2011 10:31 PM, shuo@freescale.com wrote:
From: Liu Shuo shuo@freescale.com
Freescale FCM controller has a 2K size limitation of buffer RAM. In order
to support the NAND flash
On Mon, 2011-12-12 at 15:15 -0600, Scott Wood wrote:
NAND chips come from the factory with bad blocks marked at a certain
offset into each page. This offset is normally in the OOB area, but
since we change the layout from 4k data, 128 byte oob to 2k data, 64
byte oob, 2k data, 64 byte oob the
On 12/12/2011 03:19 PM, Artem Bityutskiy wrote:
On Mon, 2011-12-12 at 15:15 -0600, Scott Wood wrote:
NAND chips come from the factory with bad blocks marked at a certain
offset into each page. This offset is normally in the OOB area, but
since we change the layout from 4k data, 128 byte oob
When using the compat APIs, architectures will generally want to
be able to make direct syscalls to msgsnd(), shmctl(), etc., and
in the kernel we would want them to be handled directly by
compat_sys_xxx() functions, as is true for other compat syscalls.
However, for historical reasons, several
This expands the reverse mapping array to contain two links for each
HPTE which are used to link together HPTEs that correspond to the
same guest logical page. Each circular list of HPTEs is pointed to
by the rmap array entry for the guest logical page, pointed to by
the relevant memslot. Links
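The two-link structure described above can be sketched with index-based circular-list links, one `revmap_entry` per HPTE. The encoding is a stand-in for illustration; the real implementation packs the links differently.

```c
#include <assert.h>

#define NR_HPTES 8

/* One entry per HPTE: forward/back links threading together all HPTEs
 * that map the same guest logical page into a circular list. */
struct revmap_entry {
    unsigned int forw, back;    /* HPTE indices */
};

static struct revmap_entry rmap[NR_HPTES];

static void rmap_init_single(unsigned int n)
{
    rmap[n].forw = rmap[n].back = n;    /* a list of one */
}

/* Insert HPTE 'n' into the circular list, after HPTE 'head'. */
static void rmap_add(unsigned int head, unsigned int n)
{
    rmap[n].forw = rmap[head].forw;
    rmap[n].back = head;
    rmap[rmap[head].forw].back = n;
    rmap[head].forw = n;
}

static int rmap_demo_ok(void)
{
    rmap_init_single(0);
    rmap_add(0, 3);             /* HPTEs 0 and 3 map the same page */
    return rmap[0].forw == 3 && rmap[3].forw == 0 &&
           rmap[3].back == 0 && rmap[0].back == 3;
}
```

Walking such a list from the rmap array entry lets the host find every HPTE for a guest page, e.g. when invalidating it.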
This moves the get/set_one_reg implementation down from powerpc.c into
booke.c, book3s_pr.c and book3s_hv.c. This avoids #ifdefs in C code,
but more importantly, it fixes a bug on Book3s HV where we were
accessing beyond the end of the kvm_vcpu struct (via the to_book3s()
macro) and corrupting
This relaxes the requirement that the guest memory be provided as
16MB huge pages, allowing it to be provided as normal memory, i.e.
in pages of PAGE_SIZE bytes (4k or 64k). To allow this, we index
the kvm->arch.slot_phys[] arrays with a small page index, even if
huge pages are being used, and use
When commit f43fdc15fa (KVM: PPC: booke: Improve timer register
emulation) factored out some code in arch/powerpc/kvm/powerpc.c
into a new helper function, kvm_vcpu_kick(), an error crept in
which causes Book3s HV guest vcpus to stall. This fixes it.
On POWER7 machines, guest vcpus are grouped
This adds the infrastructure to enable us to page out pages underneath
a Book3S HV guest, on processors that support virtualized partition
memory, that is, POWER7. Instead of pinning all the guest's pages,
we now look in the host userspace Linux page tables to find the
mapping for a given guest
This adds an array that parallels the guest hashed page table (HPT),
that is, it has one entry per HPTE, used to store the guest's view
of the second doubleword of the corresponding HPTE. The first
doubleword in the HPTE is the same as the guest's idea of it, so we
don't need to store a copy, but
This series of patches updates the Book3S-HV KVM code that manages the
guest hashed page table (HPT) to enable several things:
* MMIO emulation and MMIO pass-through
* Use of small pages (4kB or 64kB, depending on config) to back the
guest memory
* Pageable guest memory - i.e. backing pages
This provides for the case where userspace maps an I/O device into the
address range of a memory slot using a VM_PFNMAP mapping. In that
case, we work out the pfn from vma->vm_pgoff, and record the cache
enable bits from vma->vm_page_prot in two low-order bits in the
slot_phys array entries. Then,
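The encoding described above can be sketched as bit-packing: a slot_phys[] entry holds a page's physical address with spare low-order bits reused for cache attributes. The bit positions and names below are illustrative, not the real kvm-hv encoding.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical low-order attribute bits in a slot_phys[] entry. */
#define SLOTPHYS_I          0x1ul   /* cache-inhibited */
#define SLOTPHYS_G          0x2ul   /* guarded */
#define SLOTPHYS_ATTR_MASK  (SLOTPHYS_I | SLOTPHYS_G)
#define PAGE_SHIFT_DEMO     12

static unsigned long slot_phys_encode(unsigned long pfn, unsigned long attrs)
{
    /* The physical address is page-aligned, so the low PAGE_SHIFT bits
     * are free to carry the attributes from vma->vm_page_prot. */
    return (pfn << PAGE_SHIFT_DEMO) | (attrs & SLOTPHYS_ATTR_MASK);
}

static unsigned long slot_phys_pfn(unsigned long entry)
{
    return entry >> PAGE_SHIFT_DEMO;
}

static unsigned long slot_phys_attrs(unsigned long entry)
{
    return entry & SLOTPHYS_ATTR_MASK;
}
```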
This adds an smp_wmb in kvm_mmu_notifier_invalidate_range_end() and an
smp_rmb in mmu_notifier_retry() so that mmu_notifier_retry() will give
the correct answer when called without kvm->mmu_lock being held.
PowerPC Book3S HV KVM wants to use a bitlock per guest page rather than
a single global
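The retry protocol described above can be sketched single-threaded: the invalidate path bumps a sequence count (with smp_wmb() in the real code), and mmu_notifier_retry() re-checks it (after smp_rmb()) so a racing fault path can tell its lookup went stale. Barriers are elided in this demo precisely because it is single-threaded; field names are stand-ins.

```c
#include <assert.h>

struct kvm_model {
    unsigned long mmu_notifier_seq;
    int invalidate_in_progress;
};

/* Fault path: snapshot the sequence count before the page lookup. */
static unsigned long fault_begin(struct kvm_model *kvm)
{
    return kvm->mmu_notifier_seq;   /* smp_rmb() would follow here */
}

/* Retry if an invalidation is running or completed since the snapshot. */
static int mmu_notifier_retry(struct kvm_model *kvm, unsigned long seq)
{
    return kvm->invalidate_in_progress || kvm->mmu_notifier_seq != seq;
}

static void invalidate_range_end(struct kvm_model *kvm)
{
    kvm->mmu_notifier_seq++;        /* smp_wmb() would precede this */
}

static int retry_demo_ok(void)
{
    struct kvm_model kvm = {0, 0};
    unsigned long seq = fault_begin(&kvm);
    int before = mmu_notifier_retry(&kvm, seq);   /* no race: 0 */
    invalidate_range_end(&kvm);
    int after = mmu_notifier_retry(&kvm, seq);    /* stale: 1 */
    return !before && after;
}
```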
This adds two new functions, kvmppc_pin_guest_page() and
kvmppc_unpin_guest_page(), and uses them to pin the guest pages where
the guest has registered areas of memory for the hypervisor to update,
(i.e. the per-cpu virtual processor areas, SLB shadow buffers and
dispatch trace logs) and then
This removes the code from kvmppc_core_prepare_memory_region() that
looked up the VMA for the region being added and called hva_to_page
to get the pfns for the memory. We have no guarantee that there will
be anything mapped there at the time of the KVM_SET_USER_MEMORY_REGION
ioctl call; userspace
This provides the low-level support for MMIO emulation in Book3S HV
guests. When the guest tries to map a page which is not covered by
any memslot, that page is taken to be an MMIO emulation page. Instead
of inserting a valid HPTE, we insert an HPTE that has the valid bit
clear but another
This allocates an array for each memory slot that is added to store
the physical addresses of the pages in the slot. This array is
vmalloc'd and accessed in kvmppc_h_enter using real_vmalloc_addr().
This allows us to remove the ram_pginfo field from the kvm_arch
struct, and removes the 64GB guest
With this, if a guest does an H_ENTER with a read/write HPTE on a page
which is currently read-only, we make the actual HPTE inserted be a
read-only version of the HPTE. We now intercept protection faults as
well as HPTE not found faults, and for a protection fault we work out
whether it should
At present, our implementation of H_ENTER only makes one try at locking
each slot that it looks at, and doesn't even retry the ldarx/stdcx.
atomic update sequence that it uses to attempt to lock the slot. Thus
it can return the H_PTEG_FULL error unnecessarily, particularly when
the H_EXACT flag
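The locking change described above can be sketched with a compare-and-swap loop standing in for the ldarx/stdcx. sequence: keep retrying the atomic update until the lock bit is won, giving up only when the slot is genuinely held or in use. The bit values here are illustrative, not the real HPTE layout.

```c
#include <assert.h>

/* Hypothetical bits in the first HPTE doubleword. */
#define HPTE_V_HVLOCK 0x1ul
#define HPTE_V_VALID  0x2ul

/* Try to lock an empty HPTE slot, retrying the atomic update rather
 * than failing on a spurious stdcx.-style loss. Returns 1 on success. */
static int try_lock_hpte(unsigned long *hpte)
{
    unsigned long v;
    do {
        v = *hpte;
        if (v & (HPTE_V_HVLOCK | HPTE_V_VALID))
            return 0;       /* genuinely locked or occupied */
    } while (!__sync_bool_compare_and_swap(hpte, v, v | HPTE_V_HVLOCK));
    return 1;
}

static int lock_demo_ok(void)
{
    unsigned long hpte = 0;
    if (!try_lock_hpte(&hpte))      /* empty slot: must succeed */
        return 0;
    if (try_lock_hpte(&hpte))       /* now locked: must fail */
        return 0;
    return hpte == HPTE_V_HVLOCK;
}
```

Retrying here is what avoids returning H_PTEG_FULL merely because a CAS attempt lost a race, which matters most when H_EXACT pins the guest to one specific slot.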
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We need to add a new thread flag, TIF_KPROBE/_TIF_DELAYED_KPROBE,
for handling kprobe operation while exiting exception.
The basic idea is sane; however, the instruction emulation isn't per se
kprobe specific. It could be used by xmon too for
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We need a copy mechanism to migrate the exception stack. But it looks like
copy_page() already implements this well, so we can implement
copy_exc_stack() directly on top of it.
I'd rather you don't hijack copy_page which is quite sensitive. The
emulation
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We can't emulate stwu since that may corrupt the current exception stack,
so we will have to do the real store operation in the exception return code.
Firstly we'll allocate a trampoline exception frame below the kprobed
function stack and copy
Commit 7c4b2f09 (powerpc: Update mpc85xx/corenet 32-bit defconfigs) accidentally
disabled the ePAPR byte channel driver in the defconfig for Freescale CoreNet
platforms.
Signed-off-by: Timur Tabi ti...@freescale.com
---
arch/powerpc/configs/corenet32_smp_defconfig |1 +
1 files changed, 1
Add support for MSIs under the Freescale hypervisor. This involves updating
the fsl_msi driver to support vmpic-msi nodes, and updating the fsl_pci
driver to create an ATMU for the rerouted MSIIR register.
Signed-off-by: Timur Tabi ti...@freescale.com
---
arch/powerpc/sysdev/fsl_msi.c | 68
On 12/12/2011 05:37 PM, Timur Tabi wrote:
@@ -205,6 +207,29 @@ static void __init setup_pci_atmu(struct pci_controller
*hose,
/* Setup inbound mem window */
mem = memblock_end_of_DRAM();
+
+ /*
+ * The msi-address-64 property, if it exists, indicates the physical
+
Scott Wood wrote:
Technically, it's up to the hv config file where MSIIR gets mapped.
After main memory is just a common way of configuring it, but won't work
if we're limiting the partition's memory to end at an unusual address.
I'll change the comment to reflect this.
Why can't we have the
On 12/12/2011 06:27 PM, Tabi Timur-B04825 wrote:
Scott Wood wrote:
Technically, it's up to the hv config file where MSIIR gets mapped.
After main memory is just a common way of configuring it, but won't work
if we're limiting the partition's memory to end at an unusual address.
I'll change
Scott Wood wrote:
How's the hypervisor even going to know if the mem= kernel command line
argument is used to change the end of main memory (assuming that's been
taken into account by this point in the boot sequence)?
What if the user put a shared memory region immediately after the main
On Tue, 13 Dec 2011, Benjamin Herrenschmidt wrote:
On Tue, 2011-12-13 at 00:34 +1100, Finn Thain wrote:
On Mon, 12 Dec 2011, Benjamin Herrenschmidt wrote:
Any chance you can test this patch ? I would not be surprised if it
broke m68k since I had to do some of the changes in there
On 12/11/11 01:32, Segher Boessenkool wrote:
Hi Suzuki,
Looks quite good, a few comments...
+get_type:
+ /* r4 holds the relocation type */
+ extrwi r4, r4, 8, 24 /* r4 = ((char*)r4)[3] */
This comment is confusing (only makes sense together with the
lwz a long way up).
Agree, will fix
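The instruction being discussed can be restated in C. With PowerPC's big-endian bit numbering (bit 0 = MSB), `extrwi r4, r4, 8, 24` extracts the 8 bits starting at bit 24, i.e. the least-significant byte of the word, which is byte 3 of a big-endian word in memory, hence the `((char*)r4)[3]` comment.

```c
#include <assert.h>
#include <stdint.h>

/* C equivalent of 'extrwi r4, r4, 8, 24': extract the 8 bits at
 * big-endian bit position 24..31, i.e. the low byte, which here
 * holds the relocation type. */
static uint32_t extrwi_8_24(uint32_t r4)
{
    return r4 & 0xff;
}
```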
On 2011-12-13 05:30, Scott Wood wrote:
On 12/12/2011 03:19 PM, Artem Bityutskiy wrote:
On Mon, 2011-12-12 at 15:15 -0600, Scott Wood wrote:
NAND chips come from the factory with bad blocks marked at a certain
offset into each page. This offset is normally in the OOB area, but
since we change the
On Dec 7, 2011, at 11:46 PM, Kumar Gala wrote:
On Dec 7, 2011, at 9:23 PM, Benjamin Herrenschmidt wrote:
On Wed, 2011-12-07 at 11:19 -0600, Kumar Gala wrote:
struct dma_map_ops swiotlb_dma_ops = {
+#ifdef CONFIG_PPC64
+ .alloc_coherent = swiotlb_alloc_coherent,
+ .free_coherent =
On Mon, 2011-12-12 at 21:55 -0600, Becky Bruce wrote:
1) dma_direct_alloc_coherent strips GFP_HIGHMEM out of the flags field
when calling the actual allocator and the iotlb version does not. I
don't know how much this matters - I did a quick grep and I don't see
any users that specify that,
Benjamin Herrenschmidt wrote:
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We can't emulate stwu since that may corrupt the current exception stack,
so we will have to do the real store operation in the exception return code.
Firstly we'll allocate a trampoline exception frame below the
Benjamin Herrenschmidt wrote:
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We need to add a new thread flag, TIF_KPROBE/_TIF_DELAYED_KPROBE,
for handling the kprobe operation while exiting an exception.
The basic idea is sane; however, the instruction emulation isn't per se
kprobe
Benjamin Herrenschmidt wrote:
On Mon, 2011-12-12 at 16:50 +0800, Tiejun Chen wrote:
We need a copy mechanism to migrate the exception stack. But it looks like
copy_page() already implements this well, so we can implement
copy_exc_stack() directly on top of it.
I'd rather you don't hijack copy_page which
Tiejun Chen wrote:
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However, in the preempt case, we do a convoluted trick to
test
On Tue, 2011-12-13 at 13:01 +0800, tiejun.chen wrote:
Tiejun Chen wrote:
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to do_work
or not. However,
Benjamin Herrenschmidt wrote:
On Tue, 2011-12-13 at 13:01 +0800, tiejun.chen wrote:
Tiejun Chen wrote:
In entry_64.S version of ret_from_except_lite, you'll notice that
in the !preempt case, after we've checked MSR_PR we test for any
TIF flag in _TIF_USER_WORK_MASK to decide whether to go to
We support 16TB of user address space and half a million contexts,
so update the comment to reflect this.
Signed-off-by: Anton Blanchard an...@samba.org
---
Index: linux-powerpc/arch/powerpc/include/asm/mmu-hash64.h
===
---
This patch gives the possibility to work around bug ENGcm09152
on i.MX25 when the hardware workaround is also implemented on
the board.
It covers the workaround described on page 42 of the following errata:
http://cache.freescale.com/files/dsp/doc/errata/IMX25CE.pdf
Signed-off-by: Eric Bénard
Do we have a Linux port available for the Freescale P5010 processor (with
a single E5500 core)?
(I found arch/powerpc/platforms/pseries, and some details in
kernel/cputable.c.)
Is there any reference board which uses this processor? Any reference in a
DTS file would also be helpful.
Thanks
Vineeth