Andrew,
I pointed that paper out because most people have no idea what dynamic
space decomposition means, not because it is directly related to the
special case of dynamic space decomposition that you have in mind. I can
now understand how dynamic decompositions (or dynamic compositions) that
could be easily distributed might be used efficiently.

You said:
"The above curves can be described solely as algebraic expressions. Because
the partitioning of the data model is dynamically constructed per some
function based on the properties of the data, the algebraic expression that
defines a subspace is being dynamically written and is composable with
other expressions. This distributes extremely well. There is no
space-filling curve per se, just an algebraic expression that
coincidentally defines one."

What are the algebraic expressions that can be constructed per some
function based on the properties of the data? You are not suggesting that
any "tuple (e.g. a record or constraint) [which] is also a subspace but
with arbitrary extents" can be described with a compact description, are you?

Jim Bromer

On Tue, Aug 25, 2015 at 12:07 PM, J. Andrew Rogers <[email protected]>
wrote:

> That paper is unrelated. As I said, no academic literature exists.
>
> A space maps 1:1 with the set of infinite bit strings in each dimension.
> As a consequence, spaces of different and arbitrary dimensionality have the
> same cardinality. Every computable data model can be effectively
> represented this way. You can partition a subspace any way you wish by
> selecting a bit from the infinitely long bit strings; the selected bit
> defines a partition subspace. A tuple (e.g. a record or constraint) is also a
> subspace but with arbitrary extents; the volume of this subspace is
> infinite but there are compact descriptions of the bit strings required to
> describe it.
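A minimal sketch of the bit-string view described above, in Python. All names here are hypothetical illustrations, not from any real system: a subspace is a compact set of constraints on the (conceptually infinite) per-dimension bit strings, and selecting one more bit splits it into two child subspaces.

```python
# Illustrative sketch only: a subspace described by bit constraints on
# per-dimension binary expansions of coordinates in [0, 1).

def bits(value, width=32):
    """Finite prefix of the binary expansion of a coordinate in [0, 1)."""
    out = []
    for _ in range(width):
        value *= 2
        bit = int(value)
        out.append(bit)
        value -= bit
    return out

class Subspace:
    """A subspace as constraints: (dimension, bit_index) -> required bit."""
    def __init__(self, constraints=None):
        self.constraints = dict(constraints or {})

    def contains(self, point, width=32):
        expansions = [bits(c, width) for c in point]
        return all(expansions[d][i] == b
                   for (d, i), b in self.constraints.items())

    def partition(self, dim, bit_index):
        """Split into two child subspaces by selecting one bit."""
        lo = Subspace({**self.constraints, (dim, bit_index): 0})
        hi = Subspace({**self.constraints, (dim, bit_index): 1})
        return lo, hi

space = Subspace()                    # the whole space: no constraints
left, right = space.partition(0, 0)   # split on the first bit of dimension 0
print(left.contains((0.25, 0.9)))     # 0.25 has first bit 0 -> True
print(right.contains((0.25, 0.9)))    # -> False
```

Note the descriptor stays compact (a handful of bit constraints) even though the volume of the subspace it describes is infinite.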
>
> Any space decomposition constructed this way is describable with a
> space-filling curve (SFC). Everything most computer scientists know about
> SFCs is so naive as to be effectively wrong. The above curves can be
> described solely as algebraic expressions. A fixed, global curve — Hilbert,
> Morton, etc — is irrelevant and useless. Because the partitioning of the
> data model is dynamically constructed per some function based on the
> properties of the data, the algebraic expression that defines a subspace is
> being dynamically written and is composable with other expressions. This
> distributes extremely well. There is no space-filling curve per se, just an
> algebraic expression that coincidentally defines one.
>
> Parallelization of computation within and across spaces is done on these
> subspace expressions. This, incidentally, is why dimensionality doesn’t
> really matter; the expressions are dimensionality oblivious.
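One way to picture the "algebraic expression that coincidentally defines a curve" claim: if each subspace expression is plain data, it can be shipped to another node and evaluated there, composed with other expressions, and never materialized as a global curve. A hedged toy sketch (the encoding is invented for illustration):

```python
# Hypothetical sketch: subspace expressions as nested tuples (data, so
# they are trivially serializable/distributable) over bit predicates.

AND, OR, BIT = "and", "or", "bit"

def bit_is(dim, index, value):
    return (BIT, dim, index, value)

def conj(a, b):
    return (AND, a, b)

def disj(a, b):
    return (OR, a, b)

def nth_bit(coord, index):
    # index-th bit of the binary expansion of coord in [0, 1)
    return int(coord * (1 << (index + 1))) & 1

def evaluate(expr, point):
    op = expr[0]
    if op == BIT:
        _, dim, index, value = expr
        return nth_bit(point[dim], index) == value
    _, a, b = expr
    if op == AND:
        return evaluate(a, point) and evaluate(b, point)
    return evaluate(a, point) or evaluate(b, point)

# A dynamically written expression: lower half of dim 0 AND upper half
# of dim 1. Composition is just wrapping in another node.
expr = conj(bit_is(0, 0, 0), bit_is(1, 0, 1))
print(evaluate(expr, (0.2, 0.8)))  # True
print(evaluate(expr, (0.7, 0.8)))  # False
```

Dimensionality obliviousness shows up here too: `evaluate` never asks how many dimensions the point has, only which bits the expression constrains.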
>
>
>
> On Aug 25, 2015, at 6:02 AM, Jim Bromer <[email protected]> wrote:
>
> I did not know what "dynamic space decomposition algebras" meant, so I
> looked it up. The Wikipedia page I found was in the language of abstract
> algebra and I could not understand what they were getting at. However, I
> found this paper, which the authors call "a tertiary-tree-based dynamic
> space decomposition approach," and found that it made a lot of sense to
> me.
>
> An interesting point is that since the containers come in too many
> different variations to make an efficient algorithm, the authors started
> with (what I have called in the past) a conveniently finite set of
> container sizes. If I understood them correctly, they then pre-computed
> how spaces smaller than most cargo holds could be filled with variations
> of these simplified containers, and how different combinations of these
> smaller spaces could be fit together. So it looks like the typical
> dynamic space decomposition algebra would have to be a heuristic composed
> of simplified sets of objects. But this in turn would typically lead to
> systems which do not conform to equi-incremental systems, meaning that
> many useful functions could not be superimposed on the system.
> (Dimensional functions are based on continuous incremental measures, so
> they don't lead to skips in the potential volumes. I am pointing out that
> even the dynamic decomposition algebras, which are more efficient than
> piecemeal trial and error, will not typically lead to further refinements
> in efficiency because they have to use non-continuous, non
> equi-incremental objects.)
>
> A heuristic for the container loading problem: A tertiary-tree-based
> dynamic space decomposition approach
>
> http://scholar.uwindsor.ca/cgi/viewcontent.cgi?article=1067&context=odettepub&sei-redir=1&referer=http%3A%2F%2Fwww.bing.com%2Fsearch%3Fq%3Ddynamic%2Bspace%2Bdecomposition%2Balgebras%26qs%3Dn%26pq%3Ddynamic%2Bspace%2Bdecomposition%2Balgebras%26sc%3D0-0%26sp%3D-1%26sk%3D%26cvid%3D95655503125b48648b7ca4468fde5719%26first%3D9%26FORM%3DPORE#search=%22dynamic%20space%20decomposition%20algebras%22
>
>
> Jim Bromer
>
> On Tue, Aug 18, 2015 at 2:22 PM, J. Andrew Rogers <[email protected]>
> wrote:
>
>>
>> On Aug 18, 2015, at 10:12 AM, Logan Streondj <[email protected]> wrote:
>>
>>>
>>>
>>> It generally involves three design characteristics: (1) using data
>>> structures that never require reorganization of your data, i.e. no hash
>>> tables or range partitioning,
>>>
>>
>> Some examples would be nice,
>> but I guess you mean you prefer simple data structures, like strings
>> and arrays.
>>
>>
>>
>> No, distributed structures based on dynamic space decomposition algebras.
>> It is like having a lot of secondary indexes on your data without having
>> any secondary indexes, which is good because secondary indexes do not
>> scale. They are currently the most expressive and scalable type of data
>> structure that exists. There is no academic literature on them, and the
>> first versions were
>> only invented about a decade ago, modern versions only in the last five
>> years. They are being used in a handful of big systems already.
>>
>> You can look at a strict, binary quad-tree as an extremely narrow and
>> special case of these structures.
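One way to see why a strict quad-tree is only a narrow special case: a quad-tree cell at depth d fixes the first d bits of *every* dimension in lockstep, whereas a general dynamic decomposition may fix any bits of any dimensions independently, per the data. A hedged sketch with invented names:

```python
# Illustrative only: a quad-tree cell expressed as the bit constraints
# it implies, showing how rigid its choice of bits is.

def quadtree_cell(path):
    """path: list of quadrant choices (0..3) from the root.
    Returns the constraints {(dim, bit_index): bit} the cell implies."""
    constraints = {}
    for depth, quadrant in enumerate(path):
        constraints[(0, depth)] = quadrant & 1         # x bit at this depth
        constraints[(1, depth)] = (quadrant >> 1) & 1  # y bit at this depth
    return constraints

# Two levels deep: the cell constrains exactly bits 0 and 1 of BOTH axes,
# never bit 5 of x alone or any data-dependent selection.
print(quadtree_cell([2, 1]))
# {(0, 0): 0, (1, 0): 1, (0, 1): 1, (1, 1): 0}
```

A general decomposition algebra could instead emit any subset of (dimension, bit) constraints, which is where the extra expressiveness comes from.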
>>
>>
>> (2) logically generating uniform “work units” of compute that exceed the
>>> number of compute elements by at least a couple orders of magnitude, and
>>>
>> So you mean each processing core should have a queue of 100 or more
>> similar operations. Makes sense, to justify the cost of core set-up.
>>
>>
>>
>> The operations do not need to be similar, just related to data that is
>> wholly owned by the core. You move operations, not data. The point of
>> having thousands of independent work units is that a core can continuously
>> and dynamically reorder their execution to maximize throughput and to allow
>> easy load balancing.
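A minimal sketch of "move operations, not data," with hypothetical class and method names: each core wholly owns a shard, receives many small work units, and is free to reorder them (here, grouping by key so owned data stays hot) before executing.

```python
# Illustrative sketch: operations travel to the core that owns the data;
# the core dynamically reorders its large backlog for throughput.

from collections import defaultdict

class Core:
    def __init__(self, shard):
        self.shard = shard          # data wholly owned by this core
        self.queue = []             # pending work units: (op, key)

    def submit(self, op, key):
        self.queue.append((op, key))

    def run(self):
        # Dynamic reordering: group units by key so each owned record is
        # touched in one burst; any order is legal because only this core
        # ever writes the shard.
        by_key = defaultdict(list)
        for op, key in self.queue:
            by_key[key].append(op)
        for key, ops in by_key.items():
            for op in ops:
                self.shard[key] = op(self.shard.get(key, 0))
        self.queue.clear()

core = Core({"a": 1})
core.submit(lambda v: v + 1, "a")   # operations move, the data does not
core.submit(lambda v: v * 10, "a")
core.run()
print(core.shard["a"])              # 20
```

With thousands of such units per core, the scheduler always has something profitable to run, which is the load-balancing point made above.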
>>
>>
>>
>>
>>> (3) have every compute element independently and dynamically schedule
>>> its own work sequence using game theory to orchestrate instead of explicit
>>> coordination.
>>>
>>
>> Okay, I understand a core scheduling its own work from a queue, but I'm
>> not sure what you mean by using "game theory", which is a rather broad concept.
>>
>>
>>
>>
>> This is something that only works if you schedule your own I/O. Operating
>> system I/O schedulers are oblivious to what is going on around them and
>> routinely make pathological I/O scheduling decisions from a global point of
>> view.
>>
>> The concept came from routing theory and has been around for ages; I have
>> no idea who invented it. Given that a core explicitly schedules and orders
>> all of its communication and execution *and* that the core can assume all
>> other cores are running a similar scheduler, it is possible to design a
>> scheduling protocol that ensures approximately optimal aggregate throughput
>> and minimal conflict for the entire system. Essentially, each core
>> schedules its activity based on a protocol that models the state of other
>> cores it interacts with to minimize the probability of conflict or
>> bottleneck with minimal explicit information exchange.
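A deliberately tiny sketch of the implicit-coordination idea described above, under the assumption that every core runs the same deterministic schedule function. Because each core can compute what every other core will pick this round, conflicts are avoided by construction rather than detected with locks or messages. The rotating assignment here is invented for illustration; real schedulers model much richer state.

```python
# Hedged sketch: each core predicts the others' choices from a shared
# model and picks a resource nobody else is predicted to want.

def schedule(core_id, num_cores, round_no):
    # Shared deterministic model: a rotating resource assignment.
    # Every core can evaluate this for every other core, so each knows
    # which resources are "taken" this round without any communication.
    return (core_id + round_no) % num_cores

num_cores = 4
for round_no in range(3):
    picks = [schedule(c, num_cores, round_no) for c in range(num_cores)]
    assert len(set(picks)) == num_cores  # zero conflicts, zero messages
    print(round_no, picks)
```

Real systems still have to detect the occasional conflict (the model is probabilistic, not exact), but as the thread notes, contention is far lower than under explicit coordination.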
>>
>> I have seen variants of this for both single multithreaded processors and
>> big distributed systems. I love the cleverness of it. Each thread makes a
>> decision on what to do next on a very fine-grained basis based on a model
>> of what all the other threads are deciding in their individual contexts in
>> the same shared environment. You still have to detect contention but the
>> amount of contention that occurs is *much* lower than in explicitly
>> coordinated systems because the threads are actively avoiding each other
>> without any detriment to their individual throughput.
>>
>> To be honest, I know the idioms for this kind of thing just well enough
>> to apply them. The mathematics underlying it is somewhat beyond me.
>>
>> *AGI* | Archives <https://www.listbox.com/member/archive/303/=now>
>> <https://www.listbox.com/member/archive/rss/303/24379807-653794b5> |
>> Modify <https://www.listbox.com/member/?&;> Your Subscription
>> <http://www.listbox.com/>
>>
>


