Andrew,

On Thu, Aug 13, 2015 at 1:17 PM, J. Andrew Rogers <[email protected]> wrote:
> > On Aug 13, 2015, at 12:04 PM, Juan Carlos Kuri Pinto <[email protected]> wrote:
> >
> > Regarding massive parallelism, Haskell is naturally parallelizable
> > because it is 100% pure.

More interestingly, how pipelineable is it? The whole idea behind FPGAs is
that you use data-chaining to do entire loop iterations per clock cycle,
rather than just individual operations per clock cycle, which in addition
to improving speed also greatly improves latency - which is often
sacrificed in GPU implementations.

> > That is, Haskell doesn't have mutating state. You never modify memory
> > locations. You only create new constants and let the garbage collector
> > remove them when they are out of scope.

Remember, there is no "garbage collector" in FPGAs, though the equivalent
might be what happens to its state (which disappears into the "bit
bucket") when you reprogram a piece of one.

> This is an incorrect conception of massive parallelism. The above model
> creates a vast amount of shared, mutable state that is being ignored.

Unfortunately, "massive" has been co-opted by the GPU/array-processing
folks, who don't (yet) understand just HOW massive things can get inside
of future FPGAs.

> It is the reason languages like Haskell are poor for massive parallelism
> in practice.

Once a usable language for FPGAs has been found, I expect EVERYTHING to
change, because FPGAs can be made fault-tolerant, which means that chips
can be made arbitrarily LARGE, limited only by the step-and-repeat
equipment at the silicon foundries, which will then also be enlarged to
accommodate the new FPGA technology. This should eliminate most of the
need for "parallelism" as it is now commonly understood, because there
will then be a single "CPU" that can outrun anything else now in
existence.

> What you describe above is focused on data access parallelism.
> Unfortunately, topological parallelism dominates scalability in massively
> parallel systems.
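Going back to Juan's Haskell point above: the claim that purity makes a
program naturally parallelizable can be sketched with GHC's built-in `par`
and `pseq` primitives (exported from GHC.Conc in `base`). This is a minimal
single-machine illustration, not FPGA code, and the `fib` workload is
purely illustrative:

```haskell
import GHC.Conc (par, pseq)

-- An arbitrary pure function used only as a workload.
fib :: Int -> Int
fib n
  | n < 2     = n
  | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main =
  let a = fib 24
      b = fib 23
  -- Because a and b are pure, the runtime may evaluate a in parallel
  -- (par sparks it) while pseq forces b on the current thread; no locks
  -- or shared mutable memory are involved.
  in a `par` (b `pseq` print (a + b))
```

Compiled with `-threaded` the spark can actually run on another core, but
the result is the same either way, which is exactly the purity argument.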
> Computing models that rely on immutable memory are generally gaining that
> data access parallelism by reducing the efficiency of topological
> parallelism.

This appears (to me) to be irrelevant for FPGAs.

> Memory locations are not the only mutable state in real computing
> systems. The wires that move data *between* memory locations are also a
> dynamic shared writable resource, and a relatively scarce one at that.

THIS is what FPGAs are all about, especially coarse-grained architectures.

> Computing models that effect mutability by copying state greatly increase
> the contention for this resource and exhibit strong sublinearity as the
> parallelism increases.

Yes.

> The most efficiently parallelizable computing models use non-shared
> mutable state. You basically move functions (which are naturally
> immutable and compact) to the process that owns the state you want to
> mutate. This allows very efficient topological parallelism, and since the
> state is almost never shared between processes, most of the problems
> immutable memory is designed to solve do not apply. It is actually an
> elegant if somewhat unintuitive model in implementation.

Sounds good. However, we may have different definitions of some of the
terms used in the above paragraph.

> Even on single machines, this model typically offers an integer factor
> throughput improvement over lock-free multithreading models of
> parallelism and concurrency (or functional models).

As I mentioned in my posting(s), I suspect that a good language for FPGAs
would also work well in many robotics applications, where it is necessary
to continuously recompute the states of every mechanical component in
order for a robotic device to be able to simultaneously do multiple things
in a highly coordinated way. Of course you don't need a new language for
this, but looking at things as continuous processes seems far more
intuitive in this domain than does procedural programming.

Thanks for your comments.
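P.S. The "move functions to the process that owns the state" model Andrew
describes can be sketched on a single machine with plain `base`-library
channels. This is my own minimal illustration of the idea (all names and
the counter workload are made up, not his implementation): one thread owns
a private counter, and "clients" mutate it only by shipping closures to
the owner, so the state itself is never shared.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (Chan, newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_)

main :: IO ()
main = do
  -- The owner's inbox carries functions (naturally immutable and compact),
  -- not data to be copied back and forth.
  inbox <- newChan :: IO (Chan (Int -> Int))
  done  <- newEmptyMVar
  -- The owning process: applies each received function to its private
  -- state, which no other thread can touch.
  _ <- forkIO $ do
         let loop s 0 = putMVar done s
             loop s k = do
               f <- readChan inbox
               loop (f s) (k - 1 :: Int)
         loop 0 100
  -- Clients mutate the state only by sending functions to the owner.
  forM_ [1 .. 100] $ \i -> writeChan inbox (+ i)
  final <- takeMVar done
  print final
```

Since only the owner ever reads or writes the counter, there are no locks
around the state and no copies of it in flight, which is the point of the
model.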
Steve

============

--
Full employment can be had with the stroke of a pen. Simply institute a
six-hour workday. That will easily create enough new jobs to bring back
full employment.

-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
