On 3 August 2011 21:04, BGB <[email protected]> wrote:
> sorry, just trying to clarify a few points...
>
>
> On 8/3/2011 9:57 AM, BGB wrote:
>>
>>
>> in my own language, there is the "async" modifier which can
>> (theoretically) be used for a lot of this:
>> "async function foo(x, y) { ... }"
>> where calls to foo implicitly create their own thread.
>>
>> "async bar(x, 3);"
>> would create a new thread for the call (IIRC, the initial version of the
>> language had also allowed the "bar!(x, 3);" syntax as well).
>>
>> "async { ... }"
>> would execute the code block in a new thread.
>>
>> granted, a "thread" modifier could make sense here instead?... (maybe
>> aliased to "async", or maybe replacing it).
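The "async"-call semantics described above (each call implicitly runs on its own thread) can be sketched in Python as a stand-in, since BGBScript itself is not available. The `async_call` helper here is hypothetical, not part of any real API:

```python
import threading

def async_call(fn, *args):
    """Hypothetical sketch of 'async bar(x, 3);' semantics:
    run the call on a freshly spawned thread and return a handle."""
    t = threading.Thread(target=fn, args=args)
    t.start()
    return t

def bar(x, y):
    print(x + y)

handle = async_call(bar, 1, 3)   # like "async bar(x, 3);"
handle.join()                    # prints 4
```

An "async { ... }" block would desugar the same way, with the block body wrapped in a nullary function and passed to the same helper.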
>>
>
> note, care should be taken not to confuse my "async" modifier with the C#
> "async" modifier, as they essentially do different things (much like my
> "delegate" keyword and the C# "delegate" keyword do different things, ...).
>
> still uncertain if a thread modifier would make more sense (in general,
> "thread" does make more sense, say, for TLS variables, as "async var mytls;"
> would look silly...).
>
> things are allowed to be a bit more fluid here, since there is not exactly a
> whole lot of code around depending on mostly not-yet-implemented features.
>
>
>> sadly, the "async" modifier was used in the first incarnation of BGBScript
>> (2004-2006), but was never fully reimplemented when the language was later
>> re-implemented (it has been on a long-term to-do list, but there were many
>> more pressing features to get implemented, and alternative, albeit less
>> convenient, mechanisms exist...).
>>
>> granted, the issue is not so much whether one can type, say:
>> for(i=0; i<100; i++)
>>    fun(i) { async printf("%d\n", i); } (i);
>> but whether or not it will "own" their computer in the process...
>> (note: ugly closure hack needed to give each thread a proper unique value
>> for 'i', again probably another weak point of the existing model).
>>
>
> except that, as written, the above would not "own" one's computer, due to a
> few details:
> the threads are too short-lived, so a momentary 100% CPU spike on all cores
> would not likely be noticed too badly;
> using a "soft" threading model, likely only a single worker OS thread would
> spawn which would service all of the threads, very possibly sequentially
> (the threads are likely too short-lived to trigger additional workers to
> spawn);
> with soft threading, the memory use is likely to be fairly trivial (nowhere
> near crash-risk levels);
> a larger number of OS threads would be needed to threaten the program
> (500-700 would likely crash a 32-bit process on Windows, though the OS
> would probably survive; 200-400 would likely crash a 32-bit Linux
> process; in both cases likely due to running out of address space).
>
> so, I guess the point is better taken "in principle": naively spawning a
> large number of OS threads could crash stuff...
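The address-space figures above can be sanity-checked with back-of-envelope arithmetic. The stack sizes and user address-space splits below are typical defaults (not stated in the original message), so treat this as a rough sketch:

```python
# Assumptions (common defaults, labeled as such):
#   - a 32-bit Linux process gets roughly 3 GiB of user address space,
#     and pthreads reserves an 8 MiB stack per thread by default;
#   - a 32-bit Windows process gets roughly 2 GiB, with a 1 MiB
#     default stack reservation per thread.

GIB = 1 << 30
MIB = 1 << 20

linux_max = (3 * GIB) // (8 * MIB)     # ~384: stack reservations alone
print("linux ceiling:", linux_max)     # exhaust the address space

windows_max = (2 * GIB) // (1 * MIB)   # ~2048 in theory; heap, DLLs, and
print("windows ceiling:", windows_max) # per-thread overhead lower this a lot
```

The Linux figure lands inside the 200-400 range quoted above; the Windows theoretical ceiling is higher, which suggests per-thread overhead beyond the stack reservation drags the practical limit down to the 500-700 range.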
>

yes, especially taking into account that you cannot run more threads
in parallel than there are cores. It is pointless to spawn more threads
than the hardware can run, because the extra threads will just wait
their turn to claim a free core and meanwhile consume the resources
the system allocates for each thread: a stack and thread state.

It is especially pointless when the per-thread workload is comparable to
the scheduling overhead. Running the loop sequentially could even
produce results faster than scheduling 500-700 tiny parallel jobs.

IMO, high-granularity parallelism is a road to nowhere.
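The claim that a sequential loop can beat thread-per-job for tiny workloads is easy to demonstrate. This Python sketch (a stand-in for the hypothetical BGBScript loop) times one OS thread per tiny job against a plain loop; exact timings vary by machine, so no particular numbers are asserted:

```python
import threading
import time

def tiny_job(i, out):
    out[i] = i * i   # workload far smaller than thread-creation cost

N = 500

# one OS thread per tiny job
out_threaded = [0] * N
t0 = time.perf_counter()
threads = [threading.Thread(target=tiny_job, args=(i, out_threaded))
           for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_s = time.perf_counter() - t0

# plain sequential loop, same work
out_seq = [0] * N
t0 = time.perf_counter()
for i in range(N):
    tiny_job(i, out_seq)
seq_s = time.perf_counter() - t0

assert out_threaded == out_seq   # identical results either way
print(f"threads: {threaded_s:.4f}s  sequential: {seq_s:.4f}s")
```

On typical hardware the sequential version wins by a wide margin, since each `Thread` carries creation, scheduling, and teardown costs that dwarf a single multiplication.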

>
> slight (possible/eventual) syntax sugar could be something like:
> for(i=0; i<100; i++)
>    async(i) printf("%d\n", i);
>
> with the semantics that "i" is captured at the point the statement is seen,
> essentially turning the operation into a lambda-block plus an async call
> operation. this would also save one code block vs the prior form (both async
> and closures use blocks, so using both requires 2 blocks, but a combined
> form would only need a single block).
>
> note that otherwise the variable itself would be captured, rather than its
> value, hence the "printf" call would see whatever value the variable held
> last. it is therefore necessary to capture the value at the point the
> statement is reached, so that the value at that moment is used.
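The capture-by-variable vs capture-by-value distinction described above shows up in most closure-based languages; here is the classic Python illustration (the default-argument trick plays the role of the proposed `async(i)` value capture):

```python
# capture by variable: every closure shares the loop variable,
# so each one sees its final value
by_var = [lambda: i for i in range(3)]
print([f() for f in by_var])    # [2, 2, 2]

# capture by value: the default argument binds i's value at the
# moment each lambda is created
by_val = [lambda i=i: i for i in range(3)]
print([f() for f in by_val])    # [0, 1, 2]
```

An async call that closes over the loop variable has exactly the first behavior, which is why the original message needed the "ugly closure hack" to give each thread its own value of `i`.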
>
>
>>
>> note, the effective opposite of "async" (or "thread", if added) would be
>> "synchronized", as in:
>> "synchronized { ... }" which would use a mutex (or some other means) to
>> synchronize execution within a given block.
>>
>> it can also be (theoretically) applied to methods and classes (sadly also
>> not presently implemented).
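A "synchronized { ... }" block as described above desugars to acquiring a mutex on entry and releasing it on exit. A minimal Python sketch of that semantics (the lock itself is the implicit per-block mutex, an assumption about how the feature would be implemented):

```python
import threading

counter = 0
lock = threading.Lock()          # plays the role of the implicit mutex

def increment(n):
    global counter
    for _ in range(n):
        with lock:               # "synchronized { ... }": one thread at a time
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000 -- no lost updates
```

Without the lock, the four threads' read-modify-write sequences can interleave and drop increments; with it, the final count is deterministic.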
>>
>> now whether any of this could make threading easier to use... I really
>> have little idea...
>>
>
> my good old friend: implementation holes.
> this is because I can sometimes spec out more features than I have
> implemented, or older features can "fall off a ledge" somewhere (say, when
> doing a mostly ground-up reimplementation of one's VM).
>
>
>> maybe there is some fundamentally better way to approach multi-threaded
>> code?...
>>
>
> still a mystery...
>
> maybe, in a more ideal world, everything could just be converted to CPS
> and executed wherever a worker was available (essentially blurring the line
> between single- and multi-threaded code). however, how best to express and
> work with concurrent code is still an open issue.
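The "soft threading" / worker-pool idea from earlier in the thread, where a fixed set of workers services arbitrarily many logical tasks, is roughly what a thread-pool executor provides. A Python sketch of that shape (not the CPS transform itself, just the worker-pool execution model it would feed):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def task(i):
    # a logical "async" task; thousands of these can be queued
    # without spawning thousands of OS threads
    return i * i

# a small fixed pool of workers, sized to the hardware
workers = min(32, os.cpu_count() or 1)
with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(task, range(100)))

print(results[:5])   # [0, 1, 4, 9, 16]
```

Each logical task becomes a queued unit of work; the pool decides whether they run sequentially on one worker or spread across several, which is the line-blurring the paragraph above is after.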
>
>
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
>



-- 
Best regards,
Igor Stasenko AKA sig.
