On Tuesday, 25 April 2017 at 16:16:43 UTC, Patrick Schluter wrote:
It's already the case. Intel and AMD (especially on Ryzen) have
strongly discouraged the use of prefetch instructions since at
least Core2 and Athlon64. The icache cost rarely pays off, and
the instructions very often break the auto-prefetcher
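As a hypothetical sketch (C with the GCC/Clang builtin, not code from the thread): this is roughly what an explicit software prefetch looks like. On the cores mentioned above, the hardware prefetcher already detects this linear access pattern, so the extra instructions tend to cost i-cache bytes for little or no gain.

```c
#include <stddef.h>

/* Sum an array with an explicit software prefetch a few cache lines
 * ahead.  The distance (64 elements here) is an arbitrary illustrative
 * choice; on modern Intel/AMD cores the hardware prefetcher usually
 * handles this pattern on its own, making the hint redundant. */
long sum_with_prefetch(const long *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++) {
#ifdef __GNUC__
        if (i + 64 < n)
            __builtin_prefetch(&a[i + 64], 0 /* read */, 3 /* high locality */);
#endif
        s += a[i];
    }
    return s;
}
```

In practice, profiling with and without the hint is the only way to tell whether it helps on a given microarchitecture.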
On Tuesday, 25 April 2017 at 09:09:00 UTC, Ola Fosheim Grøstad
wrote:
On Monday, 24 April 2017 at 17:48:50 UTC, Stefan Koch wrote:
[...]
Oh, ok. AFAIK the decoding of indexing modes into micro-ops
(the real instructions used inside the CPU, not the actual
op-codes) has no effect on the
On Monday, 24 April 2017 at 17:48:50 UTC, Stefan Koch wrote:
On Monday, 24 April 2017 at 11:29:01 UTC, Ola Fosheim Grøstad
wrote:
What are scaled loads?
x86 has addressing modes which allow you to multiply an index by
a certain set of scalars and add it as an offset to the pointer
you want to load.
Thereby making memory access patterns more
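As a hedged illustration (C, not from the thread): plain array indexing is exactly what these addressing modes encode, so the compiler can fold the scale and the base pointer into a single memory operand instead of emitting a separate multiply and add.

```c
#include <stddef.h>
#include <stdint.h>

/* base[idx] on a 4-byte element type maps directly onto an x86 scaled
 * addressing mode: typically a single instruction such as
 *     mov eax, DWORD PTR [rdi + rsi*4]
 * (scale 4, valid scales are 1, 2, 4, 8), rather than an explicit
 * shift/add into a temporary register. */
int32_t load_scaled(const int32_t *base, size_t idx)
{
    return base[idx];
}
```

This is why exposing scaled loads in a codegen interface avoids materialising a temporary for every base-plus-offset computation.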
On Monday, 24 April 2017 at 14:41:44 UTC, jmh530 wrote:
On Monday, 24 April 2017 at 12:59:55 UTC, Jonathan Marler wrote:
Have you considered using the LLVM JIT compiler for CTFE? We
already have an LLVM front end. This would mean that CTFE would
depend on LLVM, which is a large dependency, but it would
create very fast, optimized code for CTFE on
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
Hi Guys,
I have just begun work on the x86 jit backend.
Because right now I am at a stage where further design
decisions need to be made and those decisions need to be
informed by how a _fast_ jit-compatible x86-codegen is
On Saturday, 22 April 2017 at 14:29:22 UTC, Stefan Koch wrote:
And for that reason I am looking to extend the interface to
support, for example, scaled loads and the like.
Otherwise you end up with 1000 temporaries that add offsets to
pointers.
What are scaled loads?
Also and perhaps more
On Sunday, 23 April 2017 at 02:45:09 UTC, evilrat wrote:
On Saturday, 22 April 2017 at 10:38:45 UTC, Stefan Koch wrote:
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
[...]
If you could share the code, it would be appreciated.
If you cannot share it publicly, come on IRC sometime.
On Saturday, 22 April 2017 at 10:38:45 UTC, Stefan Koch wrote:
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
Does this apply to templates too? I recently tried some code,
and the templated version with about 10 instantiations for 4-5
types increased compile time from about 1 sec up to
On Saturday, 22 April 2017 at 14:22:18 UTC, John Colvin wrote:
On Thursday, 20 April 2017 at 12:56:11 UTC, Stefan Koch wrote:
Hi Guys,
I have just begun work on the x86 jit backend.
Because right now I am at a stage where further design
decisions need to be made and those decisions need to be
On Saturday, 22 April 2017 at 03:03:32 UTC, evilrat wrote:
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
Could you explain where it can be helpful?
It's helpful for newCTFE's development. :)
I estimate the jit
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
On Thursday, 20 April 2017 at 14:35:27 UTC, Suliman wrote:
Could you explain where it can be helpful?
It's helpful for newCTFE's development. :)
I estimate the jit will easily be 10 times faster than my
bytecode interpreter.
On Thursday, 20 April 2017 at 14:54:20 UTC, Stefan Koch wrote:
It's helpful for newCTFE's development. :)
I estimate the jit will easily be 10 times faster than my
bytecode interpreter,
which will make it about 100-1000x faster than the current CTFE.
Wow.