[PATCH v12 4/8] powerpc: add pmd_[dirty|mkclean] for THP

2014-07-09 Thread Minchan Kim
MADV_FREE needs pmd_dirty and pmd_mkclean to detect recent overwrites
of the contents, since the MADV_FREE syscall can be called on a THP
page.

This patch adds pmd_dirty and pmd_mkclean for THP page MADV_FREE
support.
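
For context, a rough sketch of how the THP MADV_FREE path is expected to
use these helpers (illustrative only, not part of this patch;
discard_thp() is a hypothetical stand-in for the reclaim-side free):

    /* At MADV_FREE time: mark the huge pmd clean so that a later write
     * is detectable via the hardware dirty bit. */
    if (pmd_trans_huge(*pmd))
        set_pmd_at(mm, addr, pmd, pmd_mkclean(*pmd));

    /* At reclaim time: a pmd that is still clean means the THP was not
     * written after MADV_FREE and can be dropped without swap-out. */
    if (!pmd_dirty(*pmd))
        discard_thp(page);  /* hypothetical helper */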

Cc: Benjamin Herrenschmidt b...@kernel.crashing.org
Cc: Paul Mackerras pau...@samba.org
Cc: Aneesh Kumar K.V aneesh.ku...@linux.vnet.ibm.com
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Minchan Kim minc...@kernel.org
---
 arch/powerpc/include/asm/pgtable-ppc64.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h 
b/arch/powerpc/include/asm/pgtable-ppc64.h
index eb9261024f51..c9a4bbe8e179 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -468,9 +468,11 @@ static inline pte_t *pmdp_ptep(pmd_t *pmd)
 
 #define pmd_pfn(pmd)   pte_pfn(pmd_pte(pmd))
 #define pmd_young(pmd) pte_young(pmd_pte(pmd))
+#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
 #define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd)))
 #define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd)))
 #define pmd_mkdirty(pmd)   pte_pmd(pte_mkdirty(pmd_pte(pmd)))
+#define pmd_mkclean(pmd)   pte_pmd(pte_mkclean(pmd_pte(pmd)))
 #define pmd_mkyoung(pmd)   pte_pmd(pte_mkyoung(pmd_pte(pmd)))
 #define pmd_mkwrite(pmd)   pte_pmd(pte_mkwrite(pmd_pte(pmd)))
 
-- 
2.0.0


Re: [dm-devel] Regression in 3.15 on POWER8 with multipath SCSI

2014-07-09 Thread Junichi Nomura
On 07/09/14 12:55, Alexey Kardashevskiy wrote:
 On 07/08/2014 08:28 PM, Junichi Nomura wrote:
 It seems Bart's issue has gone with the attached patch:
 http://www.redhat.com/archives/dm-devel/2014-July/msg00035.html
 Could you try if it makes any difference on your issue?
..
 This patch fixes IPR SCSI for my POWER8 box, e8099177 was the problem.

Thank you for the testing.

Mike Snitzer has picked up this patch for his tree:
https://git.kernel.org/cgit/linux/kernel/git/device-mapper/linux-dm.git/commit/?h=for-next&id=75c76c45b76e53b7c2f025d30e7e308bfe331004

-- 
Jun'ichi Nomura, NEC Corporation

Re: [PATCH tty-next 14/22] tty: Remove tty_wait_until_sent_from_close()

2014-07-09 Thread Peter Hurley

On 06/17/2014 07:32 AM, Peter Hurley wrote:

On 06/17/2014 07:03 AM, David Laight wrote:

From: Peter Hurley
...

I don't understand the second half of the changelog, it doesn't seem
to fit here: the deadlock that we are trying to avoid here happens
when the *same* tty needs the lock to complete the function that
sends the pending data. I don't think we still do that any more,
but it doesn't seem related to the tty lock being system-wide or not.


The tty lock is not used in the i/o path; its purpose is to
mutually exclude state changes in open(), close() and hangup().

The commit that added this [1] comments that _other_ ttys may wait
for this tty to complete, and comments in the code note that this
function should be removed when the system-wide tty mutex was removed
(which happened with the commit noted in the changelog).


What happens if another process tries to do a non-blocking open
while you are sleeping in close waiting for output to drain?

Hopefully this returns before that data has drained.


Good point.

tty_open() should be trylocking both mutexes anyway in O_NONBLOCK.


Further, the tty lock should not be nested within the tty_mutex lock
in a reopen, regardless of O_NONBLOCK.

AFAICT, the tty_mutex in the reopen scenario is only protecting the
tty count bump of the linked tty (if the tty is a pty).

I think with some refactoring and returning with a tty reference held
from both tty_open_current_tty() and tty_driver_lookup_tty(), the tty
lock in tty_open() can be attempted without nesting in the tty_mutex.
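
A rough sketch of the trylock idea for the O_NONBLOCK case (illustrative
only, not the actual tty_open() code; legacy_mutex is the per-tty mutex
behind tty_lock()):

    if (filp->f_flags & O_NONBLOCK) {
        /* never sleep on either lock for a non-blocking open */
        if (!mutex_trylock(&tty_mutex))
            return -EAGAIN;
        if (!mutex_trylock(&tty->legacy_mutex)) {
            mutex_unlock(&tty_mutex);
            return -EAGAIN;
        }
    } else {
        mutex_lock(&tty_mutex);
        tty_lock(tty);
    }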

Regardless, I'll be splitting this series and I'll be sure to cc
you all when I resubmit these changes (after testing).

Regards,
Peter Hurley





Re: [PATCH 1/3] PCI/MSI: Add pci_enable_msi_partial()

2014-07-09 Thread Bjorn Helgaas
On Tue, Jul 8, 2014 at 6:26 AM, Alexander Gordeev agord...@redhat.com wrote:
 On Mon, Jul 07, 2014 at 01:40:48PM -0600, Bjorn Helgaas wrote:
  Can you quantify the benefit of this?  Can't a device already use
  MSI-X to request exactly the number of vectors it can use?  (I know
 
  An Intel AHCI chipset requires 16 vectors written to MME while it advertises
  (via AHCI registers) and uses only 6. Even an attempt to init 8 vectors
  results in the device's fallback to 1 (!).

 Is the fact that it uses only 6 vectors documented in the public spec?

 Yes, it is documented in ICH specs.

Out of curiosity, do you have a pointer to this?  It looks like it
uses one vector per port, and I'm wondering if the reason it requests
16 is because there's some possibility of a part with more than 8
ports.

 Is this a chipset erratum?  Are there newer versions of the chipset
 that fix this, e.g., by requesting 8 vectors and using 6, or by also
 supporting MSI-X?

 No, this is not an erratum. The value of 8 vectors is reserved and could
 cause undefined results if used.

As I read the spec (PCI 3.0, sec 6.8.1.3), if MMC contains 0b100
(requesting 16 vectors), the OS is allowed to allocate 1, 2, 4, 8, or
16 vectors.  If allocating 8 vectors and writing 0b011 to MME causes
undefined results, I'd say that's a chipset defect.
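
For reference, MMC and MME both encode the vector count as a power of
two; a rough sketch of that arithmetic (illustrative only, with 8
standing in for the hypothetical grant; field masks as in pci_regs.h):

    u16 msgctl;
    int requested, granted = 8;

    pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);
    /* MMC (bits 3:1): how many vectors the device asks for */
    requested = 1 << ((msgctl & PCI_MSI_FLAGS_QMASK) >> 1);
    /* MME (bits 6:4): how many vectors the OS actually grants */
    msgctl &= ~PCI_MSI_FLAGS_QSIZE;
    msgctl |= order_base_2(granted) << 4;
    pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &msgctl);

With MMC = 0b100 that gives requested = 16; writing MME = 0b011 grants
8, which is the combination that reportedly makes the chipset fall back
to a single vector.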

 I know this conserves vector numbers.  What does that mean in real
 user-visible terms?  Are there systems that won't boot because of this
 issue, and this patch fixes them?  Does it enable bigger
 configurations, e.g., more I/O devices, than before?

 Visibly, it ceases logging messages ('ahci :00:1f.2: irq 107 for
 MSI/MSI-X') for IRQs that are not shown in /proc/interrupts later.

 No, it does not enable/fix any existing hardware issue I am aware of.
 It just saves a couple of interrupt vectors, as Michael put it (10/16
 to be precise). However, interrupt vectors space is pretty much scarce
 resource on x86 and a risk of exhausting the vectors (and introducing
 quota i.e) has already been raised AFAIR.

I'm not too concerned about the logging issue.  If necessary, we could
tweak that message somehow.

Interrupt vector space is the issue I would worry about, but I think
I'm going to put this on the back burner until it actually becomes a
problem.

 Do you know how Windows handles this?  Does it have a similar interface?

 Have no clue, TBH. Can try to investigate if you see it helpful.

No, don't worry about investigating.  I was just curious because if
Windows *did* support something like this, that would be an indication
that there's a significant problem here and we might need to solve it,
too.  But it sounds like we can safely ignore it for now.

Bjorn

Re: [PATCH] powernv: Add OPAL tracepoints

2014-07-09 Thread Paul E. McKenney
On Thu, Jul 03, 2014 at 05:20:50PM +1000, Anton Blanchard wrote:
 Knowing how long we spend in firmware calls is an important part of
 minimising OS jitter.
 
 This patch adds tracepoints to each OPAL call. If tracepoints are
 enabled we branch out to a common routine that calls an entry and exit
 tracepoint.
 
 This allows us to write tools that monitor the frequency and duration
 of OPAL calls, eg:
 
 name                   count  total(ms)  min(ms)  max(ms)  avg(ms)  period(ms)
 OPAL_HANDLE_INTERRUPT      5      0.199    0.037    0.042    0.040   12547.545
 OPAL_POLL_EVENTS         204      2.590    0.012    0.036    0.013    2264.899
 OPAL_PCI_MSI_EOI        2830      3.066    0.001    0.005    0.001      81.166
 
 We use jump labels if configured, which means we only add a single
 nop instruction to every OPAL call when the tracepoints are disabled.
 
 Signed-off-by: Anton Blanchard an...@samba.org

That is what I call invoking tracepoints the hard way -- from assembly!
Just one question -- can these tracepoints be invoked from the idle
loop?  If so, you need to use the _rcuidle suffix, for example, as
in trace_opal_entry_rcuidle().  If not:

Acked-by: Paul E. McKenney paul...@linux.vnet.ibm.com
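
For reference, the _rcuidle variant is generated automatically alongside
the plain tracepoint by the TRACE_EVENT machinery; roughly (illustrative
only):

    trace_opal_entry(opcode, args);          /* normal context, RCU watching */
    trace_opal_entry_rcuidle(opcode, args);  /* safe from the idle loop */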

 ---
 
 Index: b/arch/powerpc/include/asm/trace.h
 ===
 --- a/arch/powerpc/include/asm/trace.h
 +++ b/arch/powerpc/include/asm/trace.h
 @@ -99,6 +99,51 @@ TRACE_EVENT_FN(hcall_exit,
  );
  #endif
 
 +#ifdef CONFIG_PPC_POWERNV
 +extern void opal_tracepoint_regfunc(void);
 +extern void opal_tracepoint_unregfunc(void);
 +
 +TRACE_EVENT_FN(opal_entry,
 +
 + TP_PROTO(unsigned long opcode, unsigned long *args),
 +
 + TP_ARGS(opcode, args),
 +
 + TP_STRUCT__entry(
 + __field(unsigned long, opcode)
 + ),
 +
 + TP_fast_assign(
 + __entry->opcode = opcode;
 + ),
 +
 + TP_printk("opcode=%lu", __entry->opcode),
 +
 + opal_tracepoint_regfunc, opal_tracepoint_unregfunc
 +);
 +
 +TRACE_EVENT_FN(opal_exit,
 +
 + TP_PROTO(unsigned long opcode, unsigned long retval),
 +
 + TP_ARGS(opcode, retval),
 +
 + TP_STRUCT__entry(
 + __field(unsigned long, opcode)
 + __field(unsigned long, retval)
 + ),
 +
 + TP_fast_assign(
 + __entry->opcode = opcode;
 + __entry->retval = retval;
 + ),
 +
 + TP_printk("opcode=%lu retval=%lu", __entry->opcode, __entry->retval),
 +
 + opal_tracepoint_regfunc, opal_tracepoint_unregfunc
 +);
 +#endif
 +
  #endif /* _TRACE_POWERPC_H */
 
  #undef TRACE_INCLUDE_PATH
 Index: b/arch/powerpc/platforms/powernv/Makefile
 ===
 --- a/arch/powerpc/platforms/powernv/Makefile
 +++ b/arch/powerpc/platforms/powernv/Makefile
 @@ -8,3 +8,4 @@ obj-$(CONFIG_PCI) += pci.o pci-p5ioc2.o
  obj-$(CONFIG_EEH)+= eeh-ioda.o eeh-powernv.o
  obj-$(CONFIG_PPC_SCOM)   += opal-xscom.o
  obj-$(CONFIG_MEMORY_FAILURE) += opal-memory-errors.o
 +obj-$(CONFIG_TRACEPOINTS)+= opal-tracepoints.o
 Index: b/arch/powerpc/platforms/powernv/opal-wrappers.S
 ===
 --- a/arch/powerpc/platforms/powernv/opal-wrappers.S
 +++ b/arch/powerpc/platforms/powernv/opal-wrappers.S
 @@ -13,30 +13,69 @@
  #include <asm/hvcall.h>
  #include <asm/asm-offsets.h>
  #include <asm/opal.h>
 +#include <asm/jump_label.h>
 +
 + .section ".text"
 +
 +#ifdef CONFIG_TRACEPOINTS
 +#ifdef CONFIG_JUMP_LABEL
 +#define OPAL_BRANCH(LABEL)   \
 + ARCH_STATIC_BRANCH(LABEL, opal_tracepoint_key)
 +#else
 +
 + .section ".toc","aw"
 +
 + .globl opal_tracepoint_refcount
 +opal_tracepoint_refcount:
 + .llong  0
 +
 + .section ".text"
 +
 +/*
 + * We branch around this in early init by using an unconditional cpu
 + * feature.
 + */
 +#define OPAL_BRANCH(LABEL)   \
 +BEGIN_FTR_SECTION;   \
 + b   1f; \
 +END_FTR_SECTION(0, 1);   \
 + ld  r12,opal_tracepoint_refcount@toc(r2);   \
 + std r12,32(r1); \
 + cmpdi   r12,0;  \
 + bne-LABEL;  \
 +1:
 +
 +#endif
 +
 +#else
 +#define OPAL_BRANCH(LABEL)
 +#endif
 
  /* TODO:
   *
   * - Trace irqs in/off (needs saving/restoring all args, argh...)
   * - Get r11 feed up by Dave so I can have better register usage
   */
 +
  #define OPAL_CALL(name, token)   \
   _GLOBAL(name);  \
   mflr    r0; \
 - mfcr    r12;\
   std r0,16(r1);  \
 + li  r0,token;   \
 + OPAL_BRANCH(opal_tracepoint_entry) \
 + mfcr    r12; 

[PATCH] powerpc: Fail remap_4k_pfn() if PFN doesn't fit inside PTE

2014-07-09 Thread Madhusudanan Kandasamy

remap_4k_pfn() silently truncates the upper bits of the input 4K PFN if
it cannot be contained in the PTE. This leads to an invalid memory mapping
and could result in a system crash when the memory is accessed.
This patch makes remap_4k_pfn() fail and return -EINVAL if the input
4K PFN cannot be contained in the PTE.
Used a helper inline function in the failure case so that the
remap_4k_pfn() macro can still be used in expression contexts.

Signed-off-by: Madhusudanan Kandasamy kma...@linux.vnet.ibm.com
---
 arch/powerpc/include/asm/pte-hash64-64k.h | 11 +--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-hash64-64k.h 
b/arch/powerpc/include/asm/pte-hash64-64k.h
index d836d94..10af7f1 100644
--- a/arch/powerpc/include/asm/pte-hash64-64k.h
+++ b/arch/powerpc/include/asm/pte-hash64-64k.h
@@ -74,8 +74,15 @@
 #define pte_pagesize_index(mm, addr, pte)  \
(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)

+static inline int bad_4k_pfn(void)
+{
+   WARN_ON(1);
+   return -EINVAL;
+}
+
 #define remap_4k_pfn(vma, addr, pfn, prot) \
-   remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
-   __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
+   ((pfn >= (1UL << (64 - PTE_RPN_SHIFT))) ? bad_4k_pfn() :\
+   remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
+   __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))

 #endif /* __ASSEMBLY__ */
-- 
2.0.1


Re: [PATCH] powerpc: Fail remap_4k_pfn() if PFN doesn't fit inside PTE

2014-07-09 Thread Stephen Rothwell
Hi Madhusudanan,

On Wed, 09 Jul 2014 21:38:31 +0530 Madhusudanan Kandasamy 
kma...@linux.vnet.ibm.com wrote:

 diff --git a/arch/powerpc/include/asm/pte-hash64-64k.h 
 b/arch/powerpc/include/asm/pte-hash64-64k.h
 index d836d94..10af7f1 100644
 --- a/arch/powerpc/include/asm/pte-hash64-64k.h
 +++ b/arch/powerpc/include/asm/pte-hash64-64k.h
 @@ -74,8 +74,15 @@
  #define pte_pagesize_index(mm, addr, pte)\
   (((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
 
 +static inline int bad_4k_pfn(void)
 +{
 + WARN_ON(1);
 + return -EINVAL;
 +}
 +
  #define remap_4k_pfn(vma, addr, pfn, prot)   \
 - remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
 - __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
 + ((pfn >= (1UL << (64 - PTE_RPN_SHIFT))) ? bad_4k_pfn() :\
 + remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
 + __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
 
  #endif   /* __ASSEMBLY__ */

WARN_ON() returns the value it is passed, so no helper is needed:

 #define remap_4k_pfn(vma, addr, pfn, prot) \
-   remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
-   __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
+   (WARN_ON((pfn) >= (1UL << (64 - PTE_RPN_SHIFT))) ? -EINVAL :\
+   remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,\
+   __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
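
That return value is also what makes the usual open-coded form work
(illustrative):

    if (WARN_ON(pfn >= (1UL << (64 - PTE_RPN_SHIFT))))
        return -EINVAL; /* warn with a backtrace, then fail the mapping */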

-- 
Cheers,
Stephen Rothwells...@canb.auug.org.au



Re: [PATCH 2/9] drivers: base: support cpu cache information interface to userspace via sysfs

2014-07-09 Thread Greg Kroah-Hartman
On Wed, Jun 25, 2014 at 06:30:37PM +0100, Sudeep Holla wrote:
 +static const struct device_attribute *cache_optional_attrs[] = {
 + dev_attr_coherency_line_size,
 + dev_attr_ways_of_associativity,
 + dev_attr_number_of_sets,
 + dev_attr_size,
 + dev_attr_attributes,
 + dev_attr_physical_line_partition,
 + NULL
 +};
 +
 +static int device_add_attrs(struct device *dev,
 + const struct device_attribute **dev_attrs)
 +{
 + int i, error = 0;
 + struct device_attribute *dev_attr;
 + char *buf;
 +
 + if (!dev_attrs)
 + return 0;
 +
 + buf = kmalloc(PAGE_SIZE, GFP_KERNEL);
 + if (!buf)
 + return -ENOMEM;
 +
 + for (i = 0; dev_attrs[i]; i++) {
 + dev_attr = (struct device_attribute *)dev_attrs[i];
 +
 + /* create attributes that provides meaningful value */
 + if (dev_attr->show(dev, dev_attr, buf) < 0)
 + continue;
 +
 + error = device_create_file(dev, dev_attrs[i]);
 + if (error) {
 + while (--i >= 0)
 + device_remove_file(dev, dev_attrs[i]);
 + break;
 + }
 + }
 +
 + kfree(buf);
 + return error;
 +}

Ick, why create your own function for this when the driver core has this
functionality built into it?  Look at the is_visible() callback, and how
it is used for an attribute group, please.
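
A rough sketch of that pattern (names are illustrative, not from this
patch; cache_attr_has_value() is hypothetical):

    static umode_t cache_attr_is_visible(struct kobject *kobj,
                                         struct attribute *attr, int idx)
    {
        struct device *dev = kobj_to_dev(kobj);

        /* returning 0 hides an attribute with no meaningful value */
        return cache_attr_has_value(dev, attr) ? attr->mode : 0;
    }

    static const struct attribute_group cache_optional_group = {
        .attrs      = cache_optional_attrs,   /* array of struct attribute * */
        .is_visible = cache_attr_is_visible,
    };

The whole group can then be created and removed in one call
(sysfs_create_group()/sysfs_remove_group(), or passed to
device_create_with_groups()), which also covers the removal comment
below.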

 +static void device_remove_attrs(struct device *dev,
 + const struct device_attribute **dev_attrs)
 +{
 + int i;
 +
 + if (!dev_attrs)
 + return;
 +
 + for (i = 0; dev_attrs[i]; dev_attrs++, i++)
 + device_remove_file(dev, dev_attrs[i]);
 +}

You should just remove a whole group at once, not individually.

 +
 +const struct device_attribute **
 +__weak cache_get_priv_attr(struct device *cache_idx_dev)
 +{
 + return NULL;
 +}
 +
 +/* Add/Remove cache interface for CPU device */
 +static void cpu_cache_sysfs_exit(unsigned int cpu)
 +{
 + int i;
 + struct device *tmp_dev;
 + const struct device_attribute **ci_priv_attr;
 +
 + if (per_cpu_index_dev(cpu)) {
 + for (i = 0; i < cache_leaves(cpu); i++) {
 + tmp_dev = per_cache_index_dev(cpu, i);
 + if (!tmp_dev)
 + continue;
 + ci_priv_attr = cache_get_priv_attr(tmp_dev);
 + device_remove_attrs(tmp_dev, ci_priv_attr);
 + device_remove_attrs(tmp_dev, cache_optional_attrs);
 + device_unregister(tmp_dev);
 + }
 + kfree(per_cpu_index_dev(cpu));
 + per_cpu_index_dev(cpu) = NULL;
 + }
 + device_unregister(per_cpu_cache_dev(cpu));
 + per_cpu_cache_dev(cpu) = NULL;
 +}
 +
 +static int cpu_cache_sysfs_init(unsigned int cpu)
 +{
 + struct device *dev = get_cpu_device(cpu);
 +
 + if (per_cpu_cacheinfo(cpu) == NULL)
 + return -ENOENT;
 +
 + per_cpu_cache_dev(cpu) = device_create(dev->class, dev, cpu,
 +NULL, "cache");
 + if (IS_ERR_OR_NULL(per_cpu_cache_dev(cpu)))
 + return PTR_ERR(per_cpu_cache_dev(cpu));
 +
 + /* Allocate all required memory */
 + per_cpu_index_dev(cpu) = kzalloc(sizeof(struct device *) *
 +  cache_leaves(cpu), GFP_KERNEL);
 + if (unlikely(per_cpu_index_dev(cpu) == NULL))
 + goto err_out;
 +
 + return 0;
 +
 +err_out:
 + cpu_cache_sysfs_exit(cpu);
 + return -ENOMEM;
 +}
 +
 +static int cache_add_dev(unsigned int cpu)
 +{
 + unsigned short i;
 + int rc;
 + struct device *tmp_dev, *parent;
 + struct cacheinfo *this_leaf;
 + const struct device_attribute **ci_priv_attr;
 + struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
 +
 + rc = cpu_cache_sysfs_init(cpu);
 + if (unlikely(rc < 0))
 + return rc;
 +
 + parent = per_cpu_cache_dev(cpu);
 + for (i = 0; i < cache_leaves(cpu); i++) {
 + this_leaf = this_cpu_ci->info_list + i;
 + if (this_leaf->disable_sysfs)
 + continue;
 + tmp_dev = device_create_with_groups(parent->class, parent, i,
 + this_leaf,
 + cache_default_groups,
 + "index%1u", i);
 + if (IS_ERR_OR_NULL(tmp_dev)) {
 + rc = PTR_ERR(tmp_dev);
 + goto err;
 + }
 +
 + rc = device_add_attrs(tmp_dev, cache_optional_attrs);
 + if (unlikely(rc))
 + goto err;
 +
 + ci_priv_attr = cache_get_priv_attr(tmp_dev);
 + rc = device_add_attrs(tmp_dev, ci_priv_attr);
 + if (unlikely(rc))
 + 

[PATCH] powerpc/pseries: dynamically added OF nodes need to call of_node_init

2014-07-09 Thread Tyrel Datwyler
Commit 75b57ecf9 refactored device tree nodes to use kobjects such that they
can be exposed via /sysfs. A secondary commit 0829f6d1f furthered this rework
by moving the kobject initialization logic out of of_node_add into its own
of_node_init function. The initial commit removed the existing kref_init calls
in the pseries dlpar code with the assumption kobject initialization would
occur in of_node_add. The second commit had the side effect of triggering a
BUG_ON as a result of dynamically added nodes being uninitialized.

This patch fixes this by adding of_node_init calls in place of the previously
removed kref_init calls.

Signed-off-by: Tyrel Datwyler tyr...@linux.vnet.ibm.com
---
 arch/powerpc/platforms/pseries/dlpar.c| 1 +
 arch/powerpc/platforms/pseries/reconfig.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/arch/powerpc/platforms/pseries/dlpar.c 
b/arch/powerpc/platforms/pseries/dlpar.c
index 022b38e..2d0b4d6 100644
--- a/arch/powerpc/platforms/pseries/dlpar.c
+++ b/arch/powerpc/platforms/pseries/dlpar.c
@@ -86,6 +86,7 @@ static struct device_node *dlpar_parse_cc_node(struct 
cc_workarea *ccwa,
}
 
of_node_set_flag(dn, OF_DYNAMIC);
+   of_node_init(dn);
 
return dn;
 }
diff --git a/arch/powerpc/platforms/pseries/reconfig.c 
b/arch/powerpc/platforms/pseries/reconfig.c
index 0435bb6..1c0a60d 100644
--- a/arch/powerpc/platforms/pseries/reconfig.c
+++ b/arch/powerpc/platforms/pseries/reconfig.c
@@ -69,6 +69,7 @@ static int pSeries_reconfig_add_node(const char *path, struct 
property *proplist
 
np-properties = proplist;
of_node_set_flag(np, OF_DYNAMIC);
+   of_node_init(np);
 
np-parent = derive_parent(path);
if (IS_ERR(np-parent)) {
-- 
1.7.12.4


[PATCH 1/9] powerpc: Drop support for pre-POWER4 cpus

2014-07-09 Thread Michael Ellerman
We inadvertently broke power3 support back in 3.4 with commit
f5339277eb8d "powerpc: Remove FW_FEATURE ISERIES from arch code".
No one noticed until at least 3.9.

By then we'd also broken it with the optimised memcpy, copy_to/from_user
and clear_user routines. We don't want to add any more complexity to
those just to support ancient cpus, so it seems like it's a good time to
drop support for power3 and earlier.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/cputable.h | 18 +++-
 arch/powerpc/kernel/cputable.c  | 90 -
 2 files changed, 6 insertions(+), 102 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index bc2347774f0a..2721946780df 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -400,11 +400,6 @@ extern const char *powerpc_base_platform;
 #define CPU_FTRS_GENERIC_32(CPU_FTR_COMMON | CPU_FTR_NODSISRALIGN)
 
 /* 64-bit CPUs */
-#define CPU_FTRS_POWER3(CPU_FTR_USE_TB | \
-   CPU_FTR_IABR | CPU_FTR_PPC_LE)
-#define CPU_FTRS_RS64  (CPU_FTR_USE_TB | \
-   CPU_FTR_IABR | \
-   CPU_FTR_MMCRA | CPU_FTR_CTRL)
 #define CPU_FTRS_POWER4(CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \
CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \
CPU_FTR_MMCRA | CPU_FTR_CP_USE_DCBTZ | \
@@ -466,10 +461,9 @@ extern const char *powerpc_base_platform;
 #define CPU_FTRS_POSSIBLE  (CPU_FTRS_E6500 | CPU_FTRS_E5500 | CPU_FTRS_A2)
 #else
 #define CPU_FTRS_POSSIBLE  \
-   (CPU_FTRS_POWER3 | CPU_FTRS_RS64 | CPU_FTRS_POWER4 |\
-   CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | CPU_FTRS_POWER6 |   \
-   CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | CPU_FTRS_POWER8 |  \
-   CPU_FTRS_CELL | CPU_FTRS_PA6T | CPU_FTR_VSX)
+   (CPU_FTRS_POWER4 | CPU_FTRS_PPC970 | CPU_FTRS_POWER5 | \
+CPU_FTRS_POWER6 | CPU_FTRS_POWER7 | CPU_FTRS_POWER8E | \
+CPU_FTRS_POWER8 | CPU_FTRS_CELL | CPU_FTRS_PA6T | CPU_FTR_VSX)
 #endif
 #else
 enum {
@@ -517,9 +511,9 @@ enum {
 #define CPU_FTRS_ALWAYS(CPU_FTRS_E6500 & CPU_FTRS_E5500 & 
CPU_FTRS_A2)
 #else
 #define CPU_FTRS_ALWAYS\
-   (CPU_FTRS_POWER3 & CPU_FTRS_RS64 & CPU_FTRS_POWER4 &\
-   CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & CPU_FTRS_POWER6 &\
-   CPU_FTRS_POWER7 & CPU_FTRS_CELL & CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE)
+   (CPU_FTRS_POWER4 & CPU_FTRS_PPC970 & CPU_FTRS_POWER5 & \
+CPU_FTRS_POWER6 & CPU_FTRS_POWER7 & CPU_FTRS_CELL & \
+CPU_FTRS_PA6T & CPU_FTRS_POSSIBLE)
 #endif
 #else
 enum {
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 965291b4c2fa..4728c0885d83 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -123,96 +123,6 @@ extern void __restore_cpu_e6500(void);
 
 static struct cpu_spec __initdata cpu_specs[] = {
 #ifdef CONFIG_PPC_BOOK3S_64
-   {   /* Power3 */
-   .pvr_mask   = 0x,
-   .pvr_value  = 0x0040,
-   .cpu_name   = "POWER3 (630)",
-   .cpu_features   = CPU_FTRS_POWER3,
-   .cpu_user_features  = COMMON_USER_PPC64|PPC_FEATURE_PPC_LE,
-   .mmu_features   = MMU_FTR_HPTE_TABLE,
-   .icache_bsize   = 128,
-   .dcache_bsize   = 128,
-   .num_pmcs   = 8,
-   .pmc_type   = PPC_PMC_IBM,
-   .oprofile_cpu_type  = "ppc64/power3",
-   .oprofile_type  = PPC_OPROFILE_RS64,
-   .platform   = "power3",
-   },
-   {   /* Power3+ */
-   .pvr_mask   = 0x,
-   .pvr_value  = 0x0041,
-   .cpu_name   = "POWER3 (630+)",
-   .cpu_features   = CPU_FTRS_POWER3,
-   .cpu_user_features  = COMMON_USER_PPC64|PPC_FEATURE_PPC_LE,
-   .mmu_features   = MMU_FTR_HPTE_TABLE,
-   .icache_bsize   = 128,
-   .dcache_bsize   = 128,
-   .num_pmcs   = 8,
-   .pmc_type   = PPC_PMC_IBM,
-   .oprofile_cpu_type  = "ppc64/power3",
-   .oprofile_type  = PPC_OPROFILE_RS64,
-   .platform   = "power3",
-   },
-   {   /* Northstar */
-   .pvr_mask   = 0x,
-   .pvr_value  = 0x0033,
-   .cpu_name   = "RS64-II (northstar)",
-   .cpu_features   = CPU_FTRS_RS64,
-   .cpu_user_features  = COMMON_USER_PPC64,
-   .mmu_features   = MMU_FTR_HPTE_TABLE,
-   .icache_bsize   = 128,
- 

[PATCH 2/9] powerpc: Remove STAB code

2014-07-09 Thread Michael Ellerman
Old cpus didn't have a Segment Lookaside Buffer (SLB), instead they had
a Segment Table (STAB). Now that we've dropped support for those cpus,
we can remove the STAB support entirely.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/mmu-hash64.h  |  22 ---
 arch/powerpc/include/asm/mmu_context.h |   3 -
 arch/powerpc/include/asm/paca.h|   4 -
 arch/powerpc/include/asm/reg.h |   2 +-
 arch/powerpc/kernel/asm-offsets.c  |   2 -
 arch/powerpc/kernel/exceptions-64s.S   | 155 --
 arch/powerpc/kernel/head_64.S  |   8 +-
 arch/powerpc/kernel/setup_64.c |   3 -
 arch/powerpc/mm/Makefile   |   4 +-
 arch/powerpc/mm/hash_utils_64.c|  18 +--
 arch/powerpc/mm/stab.c | 286 -
 arch/powerpc/xmon/xmon.c   |  26 ---
 12 files changed, 11 insertions(+), 522 deletions(-)
 delete mode 100644 arch/powerpc/mm/stab.c

diff --git a/arch/powerpc/include/asm/mmu-hash64.h 
b/arch/powerpc/include/asm/mmu-hash64.h
index 807014dde821..78fc19496e54 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -24,26 +24,6 @@
 #include <asm/bug.h>
 
 /*
- * Segment table
- */
-
-#define STE_ESID_V 0x80
-#define STE_ESID_KS0x20
-#define STE_ESID_KP0x10
-#define STE_ESID_N 0x08
-
-#define STE_VSID_SHIFT 12
-
-/* Location of cpu0's segment table */
-#define STAB0_PAGE 0x8
-#define STAB0_OFFSET   (STAB0_PAGE << 12)
-#define STAB0_PHYS_ADDR(STAB0_OFFSET + PHYSICAL_START)
-
-#ifndef __ASSEMBLY__
-extern char initial_stab[];
-#endif /* ! __ASSEMBLY */
-
-/*
  * SLB
  */
 
@@ -369,10 +349,8 @@ extern void hpte_init_lpar(void);
 extern void hpte_init_beat(void);
 extern void hpte_init_beat_v3(void);
 
-extern void stabs_alloc(void);
 extern void slb_initialize(void);
 extern void slb_flush_and_rebolt(void);
-extern void stab_initialize(unsigned long stab);
 
 extern void slb_vmalloc_update(void);
 extern void slb_set_size(u16 size);
diff --git a/arch/powerpc/include/asm/mmu_context.h 
b/arch/powerpc/include/asm/mmu_context.h
index b467530e2485..f5690e2689c7 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -18,7 +18,6 @@ extern int init_new_context(struct task_struct *tsk, struct 
mm_struct *mm);
 extern void destroy_context(struct mm_struct *mm);
 
 extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
-extern void switch_stab(struct task_struct *tsk, struct mm_struct *mm);
 extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
 extern void set_context(unsigned long id, pgd_t *pgd);
 
@@ -79,8 +78,6 @@ static inline void switch_mm(struct mm_struct *prev, struct 
mm_struct *next,
 #ifdef CONFIG_PPC_STD_MMU_64
if (mmu_has_feature(MMU_FTR_SLB))
switch_slb(tsk, next);
-   else
-   switch_stab(tsk, next);
 #else
/* Out of line for now */
switch_mmu_context(prev, next);
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index bb0bd25f20d0..5abde4e223bb 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -78,10 +78,6 @@ struct paca_struct {
u64 kernel_toc; /* Kernel TOC address */
u64 kernelbase; /* Base address of kernel */
u64 kernel_msr; /* MSR while running in kernel */
-#ifdef CONFIG_PPC_STD_MMU_64
-   u64 stab_real;  /* Absolute address of segment table */
-   u64 stab_addr;  /* Virtual address of segment table */
-#endif /* CONFIG_PPC_STD_MMU_64 */
void *emergency_sp; /* pointer to emergency stack */
u64 data_offset;/* per cpu data offset */
s16 hw_cpu_id;  /* Physical processor number */
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index bffd89d27301..f7b97b895708 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -254,7 +254,7 @@
 #define   DSISR_PROTFAULT  0x0800  /* protection fault */
 #define   DSISR_ISSTORE0x0200  /* access was a store */
 #define   DSISR_DABRMATCH  0x0040  /* hit data breakpoint */
-#define   DSISR_NOSEGMENT  0x0020  /* STAB/SLB miss */
+#define   DSISR_NOSEGMENT  0x0020  /* SLB miss */
 #define   DSISR_KEYFAULT   0x0020  /* Key fault */
 #define SPRN_TBRL  0x10C   /* Time Base Read Lower Register (user, R/O) */
 #define SPRN_TBRU  0x10D   /* Time Base Read Upper Register (user, R/O) */
diff --git a/arch/powerpc/kernel/asm-offsets.c 
b/arch/powerpc/kernel/asm-offsets.c
index f5995a912213..e35054054c32 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -216,8 +216,6 @@ int main(void)
 #endif /* CONFIG_PPC_BOOK3E */
 
 

[PATCH 3/9] powerpc: Remove MMU_FTR_SLB

2014-07-09 Thread Michael Ellerman
We now only support cpus that use an SLB, so we don't need an MMU
feature to indicate that.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/cputable.h| 3 +--
 arch/powerpc/include/asm/mmu.h | 8 ++--
 arch/powerpc/include/asm/mmu_context.h | 3 +--
 arch/powerpc/kernel/entry_64.S | 8 ++--
 arch/powerpc/kernel/process.c  | 2 +-
 arch/powerpc/kernel/prom.c | 1 -
 arch/powerpc/mm/hash_utils_64.c| 6 ++
 arch/powerpc/xmon/xmon.c   | 8 +---
 8 files changed, 10 insertions(+), 29 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index 2721946780df..ed17bc75b0a6 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -195,8 +195,7 @@ extern const char *powerpc_base_platform;
 
 #define CPU_FTR_PPCAS_ARCH_V2  (CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN)
 
-#define MMU_FTR_PPCAS_ARCH_V2  (MMU_FTR_SLB | MMU_FTR_TLBIEL | \
-MMU_FTR_16M_PAGE)
+#define MMU_FTR_PPCAS_ARCH_V2  (MMU_FTR_TLBIEL | MMU_FTR_16M_PAGE)
 
 /* We only set the altivec features if the kernel was compiled with altivec
  * support
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index f8d1d6dcf7db..c42945b3dbc3 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -65,9 +65,9 @@
  */
 #define MMU_FTR_USE_PAIRED_MAS ASM_CONST(0x0100)
 
-/* MMU is SLB-based
+/* Doesn't support the B bit (1T segment) in SLBIE
  */
-#define MMU_FTR_SLBASM_CONST(0x0200)
+#define MMU_FTR_NO_SLBIE_B ASM_CONST(0x0200)
 
 /* Support 16M large pages
  */
@@ -89,10 +89,6 @@
  */
 #define MMU_FTR_1T_SEGMENT ASM_CONST(0x4000)
 
-/* Doesn't support the B bit (1T segment) in SLBIE
- */
-#define MMU_FTR_NO_SLBIE_B ASM_CONST(0x8000)
-
 /* MMU feature bit sets for various CPUs */
 #define MMU_FTRS_DEFAULT_HPTE_ARCH_V2  \
MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
diff --git a/arch/powerpc/include/asm/mmu_context.h 
b/arch/powerpc/include/asm/mmu_context.h
index f5690e2689c7..73382eba02dc 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -76,8 +76,7 @@ static inline void switch_mm(struct mm_struct *prev, struct 
mm_struct *next,
 * sub architectures.
 */
 #ifdef CONFIG_PPC_STD_MMU_64
-   if (mmu_has_feature(MMU_FTR_SLB))
-   switch_slb(tsk, next);
+   switch_slb(tsk, next);
 #else
/* Out of line for now */
switch_mmu_context(prev, next);
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 6528c5e2cc44..d6b22e8c8ee1 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -482,16 +482,12 @@ END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
ld  r8,KSP(r4)  /* new stack pointer */
 #ifdef CONFIG_PPC_BOOK3S
 BEGIN_FTR_SECTION
-  BEGIN_FTR_SECTION_NESTED(95)
clrrdi  r6,r8,28/* get its ESID */
clrrdi  r9,r1,28/* get current sp ESID */
-  FTR_SECTION_ELSE_NESTED(95)
+FTR_SECTION_ELSE
clrrdi  r6,r8,40/* get its 1T ESID */
clrrdi  r9,r1,40/* get current sp 1T ESID */
-  ALT_MMU_FTR_SECTION_END_NESTED_IFCLR(MMU_FTR_1T_SEGMENT, 95)
-FTR_SECTION_ELSE
-   b   2f
-ALT_MMU_FTR_SECTION_END_IFSET(MMU_FTR_SLB)
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_1T_SEGMENT)
clrldi. r0,r6,2 /* is new ESID c? */
cmpdcr1,r6,r9   /* or is new ESID the same as current ESID? */
croreq,4*cr1+eq,eq
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index be99774d3f44..e39f388fc25c 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1175,7 +1175,7 @@ int copy_thread(unsigned long clone_flags, unsigned long 
usp,
 #endif
 
 #ifdef CONFIG_PPC_STD_MMU_64
-   if (mmu_has_feature(MMU_FTR_SLB)) {
+   {
unsigned long sp_vsid;
unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
 
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index b694b0730971..1914791dd329 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -155,7 +155,6 @@ static struct ibm_pa_feature {
 } ibm_pa_features[] __initdata = {
{0, 0, PPC_FEATURE_HAS_MMU, 0, 0, 0},
{0, 0, PPC_FEATURE_HAS_FPU, 0, 1, 0},
-   {0, MMU_FTR_SLB, 0, 0, 2, 0},
{CPU_FTR_CTRL, 0, 0,0, 3, 0},
{CPU_FTR_NOEXECUTE, 0, 0,   0, 6, 0},
{CPU_FTR_NODSISRALIGN, 0, 0,1, 1, 1},
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index fb8bea71327d..6b7c1c824cf9 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -828,8 +828,7 @@ void __init 

[PATCH 4/9] powerpc: Pull out ksp_vsid logic into a helper

2014-07-09 Thread Michael Ellerman
The previous patch left a bit of a wart in copy_process(). Clean it up a
bit by moving the logic out into a helper.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/kernel/process.c | 32 ++--
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index e39f388fc25c..9c34327e38ca 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1095,6 +1095,23 @@ int arch_dup_task_struct(struct task_struct *dst, struct 
task_struct *src)
return 0;
 }
 
+static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
+{
+#ifdef CONFIG_PPC_STD_MMU_64
+   unsigned long sp_vsid;
+   unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
+
+   if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
+   sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T)
+   << SLB_VSID_SHIFT_1T;
+   else
+   sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_256M)
+   << SLB_VSID_SHIFT;
+   sp_vsid |= SLB_VSID_KERNEL | llp;
+   p-thread.ksp_vsid = sp_vsid;
+#endif
+}
+
 /*
  * Copy a thread..
  */
@@ -1174,21 +1191,8 @@ int copy_thread(unsigned long clone_flags, unsigned long 
usp,
p-thread.vr_save_area = NULL;
 #endif
 
-#ifdef CONFIG_PPC_STD_MMU_64
-   {
-   unsigned long sp_vsid;
-   unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
+   setup_ksp_vsid(p, sp);
 
-   if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
-   sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T)
-   << SLB_VSID_SHIFT_1T;
-   else
-   sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_256M)
-   << SLB_VSID_SHIFT;
-   sp_vsid |= SLB_VSID_KERNEL | llp;
-   p-thread.ksp_vsid = sp_vsid;
-   }
-#endif /* CONFIG_PPC_STD_MMU_64 */
 #ifdef CONFIG_PPC64 
if (cpu_has_feature(CPU_FTR_DSCR)) {
p-thread.dscr_inherit = current-thread.dscr_inherit;
-- 
1.9.1


[PATCH 5/9] powerpc: Remove CONFIG_POWER3

2014-07-09 Thread Michael Ellerman
Now that we have dropped power3 support we can remove CONFIG_POWER3. The
usage in pgtable_32.c was already dead code as CONFIG_POWER3 was not
selectable on PPC32.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/cputable.h| 3 +--
 arch/powerpc/mm/pgtable_32.c   | 2 +-
 arch/powerpc/platforms/Kconfig.cputype | 4 
 3 files changed, 2 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index ed17bc75b0a6..bff747eea06b 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -268,8 +268,7 @@ extern const char *powerpc_base_platform;
 #endif
 
 #define CLASSIC_PPC (!defined(CONFIG_8xx) && !defined(CONFIG_4xx) && \
-!defined(CONFIG_POWER3) && !defined(CONFIG_POWER4) && \
-!defined(CONFIG_BOOKE))
+!defined(CONFIG_POWER4) && !defined(CONFIG_BOOKE))
 
 #define CPU_FTRS_PPC601(CPU_FTR_COMMON | CPU_FTR_601 | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE)
diff --git a/arch/powerpc/mm/pgtable_32.c b/arch/powerpc/mm/pgtable_32.c
index 343a87fa78b5..cf11342bf519 100644
--- a/arch/powerpc/mm/pgtable_32.c
+++ b/arch/powerpc/mm/pgtable_32.c
@@ -41,7 +41,7 @@ unsigned long ioremap_base;
 unsigned long ioremap_bot;
 EXPORT_SYMBOL(ioremap_bot);/* aka VMALLOC_END */
 
-#if defined(CONFIG_6xx) || defined(CONFIG_POWER3)
+#ifdef CONFIG_6xx
 #define HAVE_BATS  1
 #endif
 
diff --git a/arch/powerpc/platforms/Kconfig.cputype 
b/arch/powerpc/platforms/Kconfig.cputype
index a41bd023647a..798e6add1cae 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -140,10 +140,6 @@ config 6xx
depends on PPC32 && PPC_BOOK3S
select PPC_HAVE_PMU_SUPPORT
 
-config POWER3
-   depends on PPC64 && PPC_BOOK3S
-   def_bool y
-
 config POWER4
depends on PPC64 && PPC_BOOK3S
def_bool y
-- 
1.9.1


[PATCH 6/9] powerpc: Remove oprofile RS64 support

2014-07-09 Thread Michael Ellerman
We no longer support these cpus, so we don't need oprofile support for
them either.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/oprofile_impl.h |   1 -
 arch/powerpc/oprofile/Makefile   |   2 +-
 arch/powerpc/oprofile/common.c   |   3 -
 arch/powerpc/oprofile/op_model_rs64.c| 222 ---
 4 files changed, 1 insertion(+), 227 deletions(-)
 delete mode 100644 arch/powerpc/oprofile/op_model_rs64.c

diff --git a/arch/powerpc/include/asm/oprofile_impl.h 
b/arch/powerpc/include/asm/oprofile_impl.h
index d697b08994c9..61fe5d6f18e1 100644
--- a/arch/powerpc/include/asm/oprofile_impl.h
+++ b/arch/powerpc/include/asm/oprofile_impl.h
@@ -61,7 +61,6 @@ struct op_powerpc_model {
 };
 
 extern struct op_powerpc_model op_model_fsl_emb;
-extern struct op_powerpc_model op_model_rs64;
 extern struct op_powerpc_model op_model_power4;
 extern struct op_powerpc_model op_model_7450;
 extern struct op_powerpc_model op_model_cell;
diff --git a/arch/powerpc/oprofile/Makefile b/arch/powerpc/oprofile/Makefile
index 751ec7bd5018..cedbbeced632 100644
--- a/arch/powerpc/oprofile/Makefile
+++ b/arch/powerpc/oprofile/Makefile
@@ -14,6 +14,6 @@ oprofile-y := $(DRIVER_OBJS) common.o backtrace.o
 oprofile-$(CONFIG_OPROFILE_CELL) += op_model_cell.o \
cell/spu_profiler.o cell/vma_map.o \
cell/spu_task_sync.o
-oprofile-$(CONFIG_PPC_BOOK3S_64) += op_model_rs64.o op_model_power4.o 
op_model_pa6t.o
+oprofile-$(CONFIG_PPC_BOOK3S_64) += op_model_power4.o op_model_pa6t.o
 oprofile-$(CONFIG_FSL_EMB_PERFMON) += op_model_fsl_emb.o
 oprofile-$(CONFIG_6xx) += op_model_7450.o
diff --git a/arch/powerpc/oprofile/common.c b/arch/powerpc/oprofile/common.c
index c77348c5d463..bf094c5a4bd9 100644
--- a/arch/powerpc/oprofile/common.c
+++ b/arch/powerpc/oprofile/common.c
@@ -205,9 +205,6 @@ int __init oprofile_arch_init(struct oprofile_operations 
*ops)
ops-sync_stop = model-sync_stop;
break;
 #endif
-   case PPC_OPROFILE_RS64:
-   model = op_model_rs64;
-   break;
case PPC_OPROFILE_POWER4:
model = op_model_power4;
break;
diff --git a/arch/powerpc/oprofile/op_model_rs64.c 
b/arch/powerpc/oprofile/op_model_rs64.c
deleted file mode 100644
index 7e5b8ed3a1b7..
--- a/arch/powerpc/oprofile/op_model_rs64.c
+++ /dev/null
@@ -1,222 +0,0 @@
-/*
- * Copyright (C) 2004 Anton Blanchard an...@au.ibm.com, IBM
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version
- * 2 of the License, or (at your option) any later version.
- */
-
-#include <linux/oprofile.h>
-#include <linux/smp.h>
-#include <asm/ptrace.h>
-#include <asm/processor.h>
-#include <asm/cputable.h>
-#include <asm/oprofile_impl.h>
-
-#define dbg(args...)
-
-static void ctrl_write(unsigned int i, unsigned int val)
-{
-   unsigned int tmp = 0;
-   unsigned long shift = 0, mask = 0;
-
-   dbg("ctrl_write %d %x\n", i, val);
-
-   switch(i) {
-   case 0:
-   tmp = mfspr(SPRN_MMCR0);
-   shift = 6;
-   mask = 0x7F;
-   break;
-   case 1:
-   tmp = mfspr(SPRN_MMCR0);
-   shift = 0;
-   mask = 0x3F;
-   break;
-   case 2:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 4;
-   mask = 0x1F;
-   break;
-   case 3:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 9;
-   mask = 0x1F;
-   break;
-   case 4:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 14;
-   mask = 0x1F;
-   break;
-   case 5:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 19;
-   mask = 0x1F;
-   break;
-   case 6:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 24;
-   mask = 0x1F;
-   break;
-   case 7:
-   tmp = mfspr(SPRN_MMCR1);
-   shift = 31 - 28;
-   mask = 0xF;
-   break;
-   }
-
-   tmp = tmp & ~(mask << shift);
-   tmp |= val << shift;
-
-   switch(i) {
-   case 0:
-   case 1:
-   mtspr(SPRN_MMCR0, tmp);
-   break;
-   default:
-   mtspr(SPRN_MMCR1, tmp);
-   }
-
-   dbg("ctrl_write mmcr0 %lx mmcr1 %lx\n", mfspr(SPRN_MMCR0),
-  mfspr(SPRN_MMCR1));
-}
-
-static unsigned long reset_value[OP_MAX_COUNTER];
-
-static int num_counters;
-
-static int rs64_reg_setup(struct op_counter_config *ctr,
-  struct op_system_config *sys,
-  int num_ctrs)
-{
-   

[PATCH 7/9] powerpc: Remove power3 from comments

2014-07-09 Thread Michael Ellerman
There are still a few occurrences where it remains, because it helps to
explain something that persists.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/lib/copyuser_64.S   | 3 +--
 arch/powerpc/mm/mmu_context_hash32.c | 2 +-
 arch/powerpc/mm/ppc_mmu_32.c | 2 +-
 3 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/lib/copyuser_64.S b/arch/powerpc/lib/copyuser_64.S
index 0860ee46013c..f09899e35991 100644
--- a/arch/powerpc/lib/copyuser_64.S
+++ b/arch/powerpc/lib/copyuser_64.S
@@ -461,8 +461,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_UNALIGNED_LD_STD)
 /*
  * Routine to copy a whole page of data, optimized for POWER4.
  * On POWER4 it is more than 50% faster than the simple loop
- * above (following the .Ldst_aligned label) but it runs slightly
- * slower on POWER3.
+ * above (following the .Ldst_aligned label).
  */
 .Lcopy_page_4K:
std r31,-32(1)
diff --git a/arch/powerpc/mm/mmu_context_hash32.c 
b/arch/powerpc/mm/mmu_context_hash32.c
index 78fef6726e10..aa5a7fd89461 100644
--- a/arch/powerpc/mm/mmu_context_hash32.c
+++ b/arch/powerpc/mm/mmu_context_hash32.c
@@ -2,7 +2,7 @@
  * This file contains the routines for handling the MMU on those
  * PowerPC implementations where the MMU substantially follows the
  * architecture specification.  This includes the 6xx, 7xx, 7xxx,
- * 8260, and POWER3 implementations but excludes the 8xx and 4xx.
+ * and 8260 implementations but excludes the 8xx and 4xx.
  *  -- paulus
  *
  *  Derived from arch/ppc/mm/init.c:
diff --git a/arch/powerpc/mm/ppc_mmu_32.c b/arch/powerpc/mm/ppc_mmu_32.c
index 11571e118831..5029dc19b517 100644
--- a/arch/powerpc/mm/ppc_mmu_32.c
+++ b/arch/powerpc/mm/ppc_mmu_32.c
@@ -2,7 +2,7 @@
  * This file contains the routines for handling the MMU on those
  * PowerPC implementations where the MMU substantially follows the
  * architecture specification.  This includes the 6xx, 7xx, 7xxx,
- * 8260, and POWER3 implementations but excludes the 8xx and 4xx.
+ * and 8260 implementations but excludes the 8xx and 4xx.
  *  -- paulus
  *
  *  Derived from arch/ppc/mm/init.c:
-- 
1.9.1


[PATCH 8/9] powerpc: Remove CONFIG_POWER4

2014-07-09 Thread Michael Ellerman
Although the name CONFIG_POWER4 suggests that it controls support for
power4 cpus, this symbol is actually misnamed.

It is a historical wart from the powermac code, which used to support
building a 32-bit kernel for power3. CONFIG_POWER4 was used in that
context to guard code that was 64-bit only.

In the powermac code we can just use CONFIG_PPC64 instead, and in other
places it is a synonym for CONFIG_PPC_BOOK3S_64.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/cputable.h   |  2 +-
 arch/powerpc/platforms/Kconfig.cputype| 12 +++--
 arch/powerpc/platforms/powermac/Kconfig   |  2 +-
 arch/powerpc/platforms/powermac/feature.c | 42 +++
 4 files changed, 27 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index bff747eea06b..f1027481da0f 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -268,7 +268,7 @@ extern const char *powerpc_base_platform;
 #endif
 
 #define CLASSIC_PPC (!defined(CONFIG_8xx) && !defined(CONFIG_4xx) && \
-!defined(CONFIG_POWER4) && !defined(CONFIG_BOOKE))
+!defined(CONFIG_BOOKE))
 
 #define CPU_FTRS_PPC601(CPU_FTR_COMMON | CPU_FTR_601 | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE)
diff --git a/arch/powerpc/platforms/Kconfig.cputype 
b/arch/powerpc/platforms/Kconfig.cputype
index 798e6add1cae..f03e7d0d76f8 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -140,10 +140,6 @@ config 6xx
depends on PPC32 && PPC_BOOK3S
select PPC_HAVE_PMU_SUPPORT
 
-config POWER4
-   depends on PPC64 && PPC_BOOK3S
-   def_bool y
-
 config TUNE_CELL
bool "Optimize for Cell Broadband Engine"
depends on PPC64 && PPC_BOOK3S
@@ -240,7 +236,7 @@ config PHYS_64BIT
 
 config ALTIVEC
bool "AltiVec Support"
-   depends on 6xx || POWER4 || (PPC_E500MC && PPC64)
+   depends on 6xx || PPC_BOOK3S_64 || (PPC_E500MC && PPC64)
---help---
  This option enables kernel support for the Altivec extensions to the
  PowerPC processor. The kernel currently supports saving and restoring
@@ -256,7 +252,7 @@ config ALTIVEC
 
 config VSX
bool "VSX Support"
-   depends on POWER4 && ALTIVEC && PPC_FPU
+   depends on PPC_BOOK3S_64 && ALTIVEC && PPC_FPU
---help---
 
  This option enables kernel support for the Vector Scaler extensions
@@ -272,7 +268,7 @@ config VSX
 
 config PPC_ICSWX
bool "Support for PowerPC icswx coprocessor instruction"
-   depends on POWER4
+   depends on PPC_BOOK3S_64
default n
---help---
 
@@ -290,7 +286,7 @@ config PPC_ICSWX
 
 config PPC_ICSWX_PID
bool "icswx requires direct PID management"
-   depends on PPC_ICSWX && POWER4
+   depends on PPC_ICSWX
default y
---help---
  The PID register in server is used explicitly for ICSWX.  In
diff --git a/arch/powerpc/platforms/powermac/Kconfig 
b/arch/powerpc/platforms/powermac/Kconfig
index 1afd10f67858..607124bae2e7 100644
--- a/arch/powerpc/platforms/powermac/Kconfig
+++ b/arch/powerpc/platforms/powermac/Kconfig
@@ -10,7 +10,7 @@ config PPC_PMAC
 
 config PPC_PMAC64
bool
-   depends on PPC_PMAC && POWER4
+   depends on PPC_PMAC && PPC64
select MPIC
select U3_DART
select MPIC_U3_HT_IRQS
diff --git a/arch/powerpc/platforms/powermac/feature.c 
b/arch/powerpc/platforms/powermac/feature.c
index 63d82bbc05e9..1413e72bc2e1 100644
--- a/arch/powerpc/platforms/powermac/feature.c
+++ b/arch/powerpc/platforms/powermac/feature.c
@@ -158,7 +158,7 @@ static inline int simple_feature_tweak(struct device_node 
*node, int type,
return 0;
 }
 
-#ifndef CONFIG_POWER4
+#ifndef CONFIG_PPC64
 
 static long ohare_htw_scc_enable(struct device_node *node, long param,
 long value)
@@ -1318,7 +1318,7 @@ intrepid_aack_delay_enable(struct device_node *node, long 
param, long value)
 }
 
 
-#endif /* CONFIG_POWER4 */
+#endif /* CONFIG_PPC64 */
 
 static long
 core99_read_gpio(struct device_node *node, long param, long value)
@@ -1338,7 +1338,7 @@ core99_write_gpio(struct device_node *node, long param, 
long value)
return 0;
 }
 
-#ifdef CONFIG_POWER4
+#ifdef CONFIG_PPC64
 static long g5_gmac_enable(struct device_node *node, long param, long value)
 {
struct macio_chip *macio = macio_chips[0];
@@ -1550,9 +1550,9 @@ void g5_phy_disable_cpu1(void)
if (uninorth_maj == 3)
UN_OUT(U3_API_PHY_CONFIG_1, 0);
 }
-#endif /* CONFIG_POWER4 */
+#endif /* CONFIG_PPC64 */
 
-#ifndef CONFIG_POWER4
+#ifndef CONFIG_PPC64
 
 
 #ifdef CONFIG_PM
@@ -1864,7 +1864,7 @@ core99_sleep_state(struct device_node *node, long param, 
long value)
return 0;
 }
 
-#endif /* CONFIG_POWER4 */
+#endif /* CONFIG_PPC64 */
 
 static long
 generic_dev_can_wake(struct 

[PATCH 9/9] powerpc: Move CLASSIC_PPC into Kconfig and rename

2014-07-09 Thread Michael Ellerman
We have a strange #define in cputable.h called CLASSIC_PPC. It is true
when no other more modern platform is defined, and indicates that we're
building for a classic platform.

Although it is defined for 32 & 64bit, it's only used for 32bit. So for
starters, rename it to indicate that. There's also no reason for it not
to be in Kconfig, so move it there.

Signed-off-by: Michael Ellerman m...@ellerman.id.au
---
 arch/powerpc/include/asm/cputable.h| 7 ++-
 arch/powerpc/kernel/cputable.c | 4 ++--
 arch/powerpc/platforms/Kconfig.cputype | 6 +-
 3 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/cputable.h 
b/arch/powerpc/include/asm/cputable.h
index f1027481da0f..af5598688daf 100644
--- a/arch/powerpc/include/asm/cputable.h
+++ b/arch/powerpc/include/asm/cputable.h
@@ -267,9 +267,6 @@ extern const char *powerpc_base_platform;
 #define CPU_FTR_MAYBE_CAN_NAP  0
 #endif
 
-#define CLASSIC_PPC (!defined(CONFIG_8xx) && !defined(CONFIG_4xx) && \
-!defined(CONFIG_BOOKE))
-
 #define CPU_FTRS_PPC601(CPU_FTR_COMMON | CPU_FTR_601 | \
CPU_FTR_COHERENT_ICACHE | CPU_FTR_UNIFIED_ID_CACHE)
 #define CPU_FTRS_603   (CPU_FTR_COMMON | \
@@ -466,7 +463,7 @@ extern const char *powerpc_base_platform;
 #else
 enum {
CPU_FTRS_POSSIBLE =
-#if CLASSIC_PPC
+#ifdef CONFIG_CLASSIC_PPC32
CPU_FTRS_PPC601 | CPU_FTRS_603 | CPU_FTRS_604 | CPU_FTRS_740_NOTAU |
CPU_FTRS_740 | CPU_FTRS_750 | CPU_FTRS_750FX1 |
CPU_FTRS_750FX2 | CPU_FTRS_750FX | CPU_FTRS_750GX |
@@ -516,7 +513,7 @@ enum {
 #else
 enum {
CPU_FTRS_ALWAYS =
-#if CLASSIC_PPC
+#ifdef CONFIG_CLASSIC_PPC32
CPU_FTRS_PPC601 & CPU_FTRS_603 & CPU_FTRS_604 & CPU_FTRS_740_NOTAU &
CPU_FTRS_740 & CPU_FTRS_750 & CPU_FTRS_750FX1 &
CPU_FTRS_750FX2 & CPU_FTRS_750FX & CPU_FTRS_750GX &
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 4728c0885d83..e7b2ea906838 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -507,7 +507,7 @@ static struct cpu_spec __initdata cpu_specs[] = {
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 #ifdef CONFIG_PPC32
-#if CLASSIC_PPC
+#ifdef CONFIG_CLASSIC_PPC32
{   /* 601 */
.pvr_mask   = 0x,
.pvr_value  = 0x0001,
@@ -1147,7 +1147,7 @@ static struct cpu_spec __initdata cpu_specs[] = {
.machine_check  = machine_check_generic,
.platform   = "ppc603",
},
-#endif /* CLASSIC_PPC */
+#endif /* CONFIG_CLASSIC_PPC32 */
 #ifdef CONFIG_8xx
{   /* 8xx */
.pvr_mask   = 0x,
diff --git a/arch/powerpc/platforms/Kconfig.cputype 
b/arch/powerpc/platforms/Kconfig.cputype
index f03e7d0d76f8..5fe116bf9883 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -61,7 +61,7 @@ choice
help
  There are two families of 64 bit PowerPC chips supported.
  The most common ones are the desktop and server CPUs
- (POWER3, RS64, POWER4, POWER5, POWER5+, POWER6, ...)
+ (POWER4, POWER5, 970, POWER5+, POWER6, POWER7, POWER8 ...)
 
  The other are the embedded processors compliant with the
  Book 3E variant of the architecture
@@ -140,6 +140,10 @@ config 6xx
depends on PPC32 && PPC_BOOK3S
select PPC_HAVE_PMU_SUPPORT
 
+config CLASSIC_PPC32
+   depends on PPC32 && !8xx && !4xx && !BOOKE
+   def_bool y
+
 config TUNE_CELL
bool "Optimize for Cell Broadband Engine"
depends on PPC64 && PPC_BOOK3S
-- 
1.9.1


Re: [PATCH] powerpc/pseries: dynamically added OF nodes need to call of_node_init

2014-07-09 Thread Michael Ellerman
On Wed, 2014-07-09 at 21:20 -0400, Tyrel Datwyler wrote:
 Commit 75b57ecf9 refactored device tree nodes to use kobjects such that they
 can be exposed via /sysfs. A secondary commit 0829f6d1f furthered this rework
 by moving the kobject initialization logic out of of_node_add into its own
 of_node_init function. The initial commit removed the existing kref_init calls
 in the pseries dlpar code with the assumption kobject initialization would
 occur in of_node_add. The second commit had the side effect of triggering a
 BUG_ON as a result of dynamically added nodes being uninitialized.

So does this mean DLPAR is broken since 0829f6d1f (3.15-rc1)?

If so, this should have a Cc: sta...@kernel.org, shouldn't it?

And the latest trend is to also add:

Fixes: 0829f6d1f69e ("of: device_node kobject lifecycle fixes")

cheers

