Hi Julian,
There are a couple of minor formatting nits.
+static int
+arm_movmemqi_unaligned (rtx *operands)
+  /* Inlined memcpy using ldr/str/ldrh/strh can be quite big: try to limit
+     size of code if optimizing for size.  We'll use ldm/stm if src_aligned
+     or
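For context, the size trade-off this comment describes can be sketched in plain C. This is only an illustration of the expansion strategy (word copies, then a halfword, then a byte), not the GCC implementation; the helper name `copy_unaligned` is hypothetical:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: copy N bytes in word (ldr/str), halfword
   (ldrh/strh), then byte (ldrb/strb) chunks.  With unaligned-access
   support enabled, the word copies need not be aligned, which is
   what the patch exploits for block moves.  Each chunk costs a
   load/store pair, so the inlined sequence grows with N -- hence
   the comment about limiting it when optimizing for size.  */
static void
copy_unaligned (unsigned char *dst, const unsigned char *src, size_t n)
{
  while (n >= 4)                /* word copies (ldr/str)  */
    {
      uint32_t w;
      memcpy (&w, src, 4);      /* a single unaligned load on ARM  */
      memcpy (dst, &w, 4);
      src += 4; dst += 4; n -= 4;
    }
  if (n >= 2)                   /* halfword tail (ldrh/strh)  */
    {
      uint16_t h;
      memcpy (&h, src, 2);
      memcpy (dst, &h, 2);
      src += 2; dst += 2; n -= 2;
    }
  if (n)                        /* byte tail (ldrb/strb)  */
    *dst = *src;
}
```

When both pointers happen to be word-aligned, an ldm/stm pair moves several words per instruction, which is why the patch prefers it in that case.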
On Fri, Oct 14, 2011 at 6:53 AM, Julian Brown jul...@codesourcery.com wrote:
On Wed, 28 Sep 2011 14:33:17 +0100
Ramana Radhakrishnan ramana.radhakrish...@linaro.org wrote:
On 6 May 2011 14:13, Julian Brown jul...@codesourcery.com wrote:
Hi,
This is the second of two patches to add unaligned-access support to
the ARM backend. It builds on the first patch to provide support for
unaligned accesses when expanding block moves (i.e. for builtin memcpy
operations). It makes some effort to use load/store multiple
instructions where