Re: bit-slice and microcode discussion list
On 08/23/2019 12:47 PM, Noel Chiappa via cctalk wrote:
>> From: Jon Elson
>>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
>>> On a possible related note, I am looking for information on converting
>>> CISC instructions to VLIW RISC.
>> I think it might end up looking a bit like the optimizers that were
>> used on drum memory computers back in the dark ages.
> I dunno; those were all about picking _addresses_ for instructions, such
> that the next instruction was coming up to the heads as the last one
> completed.

Right, but the idea is to schedule memory reads way in advance of when the
datum is required for a calculation. So, the load from memory to register is
moved way up in the program, and the use of the register comes much later,
to allow for the memory latency. Yes, it is not exactly like drum memory
computers, but you are still scheduling things for when they can be done
without causing a stall.

Jon
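The load hoisting Jon describes can be sketched in a few lines. This is only a toy model: the instruction tuples, register names, and the single straight-line block are all invented for illustration, and real schedulers also model latencies and anti-dependences.

```python
# Toy sketch of load hoisting: given a straight-line block of
# (op, dest, sources) tuples, move each memory load as early as its own
# dependences allow, so its latency overlaps with independent work.
# The instruction set and register names are made up for illustration.

def hoist_loads(block):
    """Move 'load' instructions up to just after the last definition
    of one of their source operands (e.g. the address register)."""
    scheduled = []
    for instr in block:
        op, dest, srcs = instr
        if op != "load":
            scheduled.append(instr)
            continue
        # Earliest legal slot: right after the last instruction that
        # defines a value this load reads.
        pos = 0
        for i, (_, d, _) in enumerate(scheduled):
            if d in srcs:
                pos = i + 1
        scheduled.insert(pos, instr)
    return scheduled

block = [
    ("add",  "r1", ["r2", "r3"]),   # computes the load address
    ("mul",  "r4", ["r5", "r6"]),   # independent work
    ("mul",  "r7", ["r4", "r6"]),   # more independent work
    ("load", "r8", ["r1"]),         # could start right after the add
    ("add",  "r9", ["r8", "r7"]),   # use of the loaded value
]

for instr in hoist_loads(block):
    print(instr)
```

Run on the block above, the load moves up to slot two, right after the address computation, while its use stays at the end, which is exactly the gap Jon is describing.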
Re: bit-slice and microcode discussion list
The concepts of bitslice coding and optimizing it have always interested me.
I'm not sure about the correlation to "CISC to VLIW RISC".

Dwight

From: cctalk on behalf of Al Kossow via cctalk
Sent: Friday, August 23, 2019 2:37 PM
To: cctalk@classiccmp.org
Subject: Re: bit-slice and microcode discussion list

> On a possible related note, I am looking for information on converting
> CISC instructions to VLIW RISC.

I'm impressed, cctalk went completely off the rails on the first reply to
the list announcement, and has stayed there. At least the list itself is
staying on topic.
Re: bit-slice and microcode discussion list
> On a possible related note, I am looking for information on converting
> CISC instructions to VLIW RISC.

I'm impressed, cctalk went completely off the rails on the first reply to
the list announcement, and has stayed there. At least the list itself is
staying on topic.
Re: bit-slice and microcode discussion list
On 8/23/2019 12:00 PM, Paul Koning via cctalk wrote:
>> On Aug 23, 2019, at 1:47 PM, Noel Chiappa via cctalk wrote:
>>> From: Jon Elson
>>>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
>>>> On a possible related note, I am looking for information on converting
>>>> CISC instructions to VLIW RISC.
>>> I think it might end up looking a bit like the optimizers that were
>>> used on drum memory computers back in the dark ages.
>> I dunno; those were all about picking _addresses_ for instructions, such
>> that the next instruction was coming up to the heads as the last one
>> completed. The _order_ of execution wasn't changed, there was no issue of
>> contention for computing elements, etc - i.e. all the things one thinks
>> of a CISC->VLIW translation as doing.
> Instruction ordering (instruction scheduling) is as old as the CDC 6600,
> though then it was often done by the programmer. An early example of that
> conversion is the work done at DEC for "just in time" conversion of VAX
> instructions to MIPS, and later to Alpha. I wonder if their compiler
> technology was involved in that. It wouldn't surprise me. The Alpha
> "assembler" was actually the compiler back end, and as a result you could
> ask it to optimize your assembly programs. That was an interesting way to
> get a feel for what transformations of the program would be useful given
> the parallelism in that architecture.
>
> paul

Why bother, is my view. The problem is threefold:

a) The hardware people keep changing the internal details.
b) A good compiler can see the original program structure and optimize for
   that.
c) The flat memory model, as from FORTRAN or LISP, where variables are
   spread at random over the entire memory space, scrambles your cache.

With that said, if you could define the optimization in some sort of MACRO
format, changing parameters would be simple and the changes effectively
unseen. Kind of like the early Compiler Compilers. I see RISC as emulation
of the HARVARD memory model.

A Harvard model would not take much change in programming, other than not
having a "SMALL" mode. Two 32-bit-wide buses (data and program) could be
faster, since external memory is more drum-like, with filling of caches
rather than random access, than one large data path doing everything. I
still favor the CLASSIC instruction set model: OP:AC:IX:OFFSET. Core memory
made the machines slow with the memory restore cycle, giving rise to CISC,
like the PDP-11, to make better use of that dead cycle. RISC is only fast
because of the PAGE cycle of the dynamic memory of the time. Too bad
everything is all 8/16/32/64+ computing, or say a 36-bit classic-style CPU
design could run quite effectively at a few GHz.

Ben.
Re: bit-slice and microcode discussion list
> On Aug 23, 2019, at 1:47 PM, Noel Chiappa via cctalk wrote:
>
>> From: Jon Elson
>
>>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
>>> On a possible related note, I am looking for information on converting
>>> CISC instructions to VLIW RISC.
>
>> I think it might end up looking a bit like the optimizers that were
>> used on drum memory computers back in the dark ages.
>
> I dunno; those were all about picking _addresses_ for instructions, such
> that the next instruction was coming up to the heads as the last one
> completed.
>
> The _order_ of execution wasn't changed, there was no issue of contention
> for computing elements, etc - i.e. all the things one thinks of a
> CISC->VLIW translation as doing.

Instruction ordering (instruction scheduling) is as old as the CDC 6600,
though then it was often done by the programmer. An early example of that
conversion is the work done at DEC for "just in time" conversion of VAX
instructions to MIPS, and later to Alpha. I wonder if their compiler
technology was involved in that. It wouldn't surprise me. The Alpha
"assembler" was actually the compiler back end, and as a result you could
ask it to optimize your assembly programs. That was an interesting way to
get a feel for what transformations of the program would be useful given
the parallelism in that architecture.

	paul
Re: bit-slice and microcode discussion list
> From: Jon Elson

>> On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
>> On a possible related note, I am looking for information on converting
>> CISC instructions to VLIW RISC.

> I think it might end up looking a bit like the optimizers that were
> used on drum memory computers back in the dark ages.

I dunno; those were all about picking _addresses_ for instructions, such
that the next instruction was coming up to the heads as the last one
completed.

The _order_ of execution wasn't changed, there was no issue of contention
for computing elements, etc - i.e. all the things one thinks of a
CISC->VLIW translation as doing.

	Noel
Re: bit-slice and microcode discussion list
On 08/22/2019 12:47 PM, Tom Uban via cctalk wrote:
> On 8/22/19 12:16 PM, Eric Smith via cctalk wrote:
>> On another mailing list, someone asked if there was any list specifically
>> about bit-slice design and microcoding. I don't know of one, so I've
>> created a new mailing list specifically for those topics:
>>
>> http://lists.brouhaha.com/mailman/listinfo/bit-slicers
>>
>> The intent is for the list to cover technical discussion of bit-slice
>> hardware design and/or microcoding. In other words, discussion of
>> microcoding that doesn't use bit-slice hardware is fine.
>
> On a possible related note, I am looking for information on converting
> CISC instructions to VLIW RISC.

Wow, I think that ends up looking like a compiler, or at least the
optimizing back end part of a compiler. I worked a bit with a Trace
Multiflow, and their optimizing back end was VERY slow, which I assume means
it was a complex task to reorder all the atomic operations and pack them
into the long instruction words for best throughput.

I think it might end up looking a bit like the optimizers that were used on
drum memory computers back in the dark ages.

Jon
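The packing Jon mentions can be sketched greedily. Everything here is invented for illustration: the 3-slot word width, the operation tuples, the unit-latency dependence rule. A real VLIW compiler such as the Multiflow's trace scheduler does far more (it also tracks anti- and output dependences, which this toy ignores).

```python
# Toy sketch of VLIW packing: greedily bundle independent operations
# into fixed-width "long instruction words". An op may join the current
# word only if it does not read a value produced earlier in that same
# word; values from earlier words are assumed ready (unit latency).

def pack_vliw(ops, width=3):
    """ops: list of (name, dest, sources) tuples. Returns a list of
    bundles, each bundle being one long instruction word."""
    bundles = []
    remaining = list(ops)
    while remaining:
        bundle, defined, deferred = [], set(), []
        for op in remaining:
            name, dest, srcs = op
            if len(bundle) < width and not (set(srcs) & defined):
                bundle.append(op)
                defined.add(dest)
            else:
                deferred.append(op)   # wait for the next word
        bundles.append(bundle)
        remaining = deferred
    return bundles

ops = [
    ("load", "r1", ["r0"]),
    ("add",  "r2", ["r1", "r3"]),   # depends on the load
    ("mul",  "r4", ["r5", "r6"]),   # independent
    ("sub",  "r7", ["r2", "r4"]),   # depends on both of the above
]
for i, word in enumerate(pack_vliw(ops)):
    print(i, [name for name, _, _ in word])
```

On this input the load and the independent mul share the first word, and the two dependent ops each get their own word, so four sequential operations become three words. Doing this well over large traces is plausibly why the Multiflow back end was so slow.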
Re: bit-slice and microcode discussion list
On Thu, Aug 22, 2019 at 12:47:28PM -0500, Tom Uban via cctalk wrote:
[...]
> On a possible related note, I am looking for information on converting CISC
> instructions to VLIW RISC.

Do you mean the theoretical basis, or implementing it? And is this
ahead-of-time ("I want to run *this* binary"), or just-in-time ("I want to
run *any* binary, including self-modifying code")?

It's basically a compiler pipeline: deserialise the input code into an AST,
then serialise it into output code. It's just that the input code is actual
machine code rather than human-entered text.

Various real-world implementations exist. QEMU, for example. VMware also
does it for ring-0 code if the host lacks VT-x. UAE definitely does it, and
possibly so does MAME. As you can see, it's basically a solved problem as
far as computer science is concerned.

If you have a copy of the Dragon Book to hand, you may as well give it a
gander. The general concepts are timeless, but the actual nitty-gritty is
only useful if you are still living in the 1970s, so don't spend too much
time on the details of the algorithms, because modern machines are so
different that many of the book's design assumptions are now invalidated. (I
base this opinion on my 1986 edition, although the TOC I've seen for the
2006 edition suggests that it's been dragged kicking and screaming into the
1990s.)

There are *loads* of academic papers that you will have to wade through to
advance from the Dragon Book's description of a kinder era to modern
compiler design. Some of it remains an unsolved problem. You can see why the
Dragon Book handwaves over the hard bits.

To actually implement something that performs well, and will actually be
finished before your new VLIW RISC hardware is obsolete, I recommend you
look at reusing existing compilers rather than implementing your own. The
daddy of backends is LLVM. Unless your VLIW RISC is already supported, you
get to learn how to implement an LLVM backend. It seems to be a common
undergraduate assignment to implement an LLVM backend for an arbitrary RISC
CPU (often MIPS), so you should be able to find myriad terrible
implementations on GitHub to draw inspiration from.

Another possibility is QEMU's TCG. I wasn't really aware of it until I did a
quick search when composing this response, but I like what I see and now
want to look much closer at it.

Once you've done that, you need to decompile your CISC code into your chosen
backend's IR. This involves a lot of tedious gruntwork, but is otherwise not
that difficult.

Have fun!
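The decode-to-IR-to-emit pipeline described above can be sketched end to end. Both instruction sets here are entirely made up (a three-byte "CISC" encoding and a three-operand "RISC" output), and a real translator such as QEMU's TCG or an LLVM backend also has to handle condition codes, memory operands, branches, and block chaining.

```python
# Toy sketch of binary translation as a compiler pipeline:
# deserialise "CISC" machine-code bytes into a tiny IR, then
# serialise the IR into invented "RISC" assembly text.

# Invented CISC encoding: one opcode byte, then two register numbers.
CISC_OPS = {0x01: "add", 0x02: "sub", 0x03: "mov"}

def decode(code):
    """Deserialise machine code into an IR (a list of op dicts)."""
    ir, pc = [], 0
    while pc < len(code):
        op = CISC_OPS[code[pc]]
        ir.append({"op": op, "dst": code[pc + 1], "src": code[pc + 2]})
        pc += 3
    return ir

def emit(ir):
    """Serialise the IR into invented three-operand 'RISC' assembly."""
    out = []
    for node in ir:
        if node["op"] == "mov":
            # No mov in the invented RISC: synthesise it with OR-with-r0.
            out.append(f"or r{node['dst']}, r{node['src']}, r0")
        else:
            out.append(f"{node['op']} r{node['dst']}, r{node['dst']}, "
                       f"r{node['src']}")
    return out

code = bytes([0x03, 0x01, 0x02,   # mov r1, r2
              0x01, 0x01, 0x03])  # add r1, r3
print("\n".join(emit(decode(code))))
```

The tedious gruntwork mentioned above lives almost entirely in the decode step: a real CISC has variable-length encodings, addressing modes, and flags, but the shape of the pipeline stays the same.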
RE: bit-slice and microcode discussion list
Not precisely CISC to VLIW RISC, but in my opinion very cool and somewhat
related:

https://gamozolabs.github.io/fuzzing/2018/10/14/vectorized_emulation.html

From: cctalk on behalf of Tom Uban via cctalk
Sent: Friday, August 23, 2019 12:47:28 AM
To: Eric Smith; General Discussion: On-Topic and Off-Topic Posts
Subject: Re: bit-slice and microcode discussion list

On 8/22/19 12:16 PM, Eric Smith via cctalk wrote:
> On another mailing list, someone asked if there was any list specifically
> about bit-slice design and microcoding. I don't know of one, so I've
> created a new mailing list specifically for those topics:
>
> http://lists.brouhaha.com/mailman/listinfo/bit-slicers
>
> The intent is for the list to cover technical discussion of bit-slice
> hardware design and/or microcoding. In other words, discussion of
> microcoding that doesn't use bit-slice hardware is fine.

On a possible related note, I am looking for information on converting CISC
instructions to VLIW RISC.

--tnx
--tom
Re: bit-slice and microcode discussion list
On 8/22/19 12:16 PM, Eric Smith via cctalk wrote:
> On another mailing list, someone asked if there was any list specifically
> about bit-slice design and microcoding. I don't know of one, so I've
> created a new mailing list specifically for those topics:
>
> http://lists.brouhaha.com/mailman/listinfo/bit-slicers
>
> The intent is for the list to cover technical discussion of bit-slice
> hardware design and/or microcoding. In other words, discussion of
> microcoding that doesn't use bit-slice hardware is fine.

On a possible related note, I am looking for information on converting CISC
instructions to VLIW RISC.

--tnx
--tom
bit-slice and microcode discussion list
On another mailing list, someone asked if there was any list specifically
about bit-slice design and microcoding. I don't know of one, so I've
created a new mailing list specifically for those topics:

http://lists.brouhaha.com/mailman/listinfo/bit-slicers

The intent is for the list to cover technical discussion of bit-slice
hardware design and/or microcoding. In other words, discussion of
microcoding that doesn't use bit-slice hardware is fine.