Hello Joseph.

Thanks for your explanations.
I now have a small helper function par-map (parallel map) that does not impose
any thread limit, while par-map/cores limits it to the number of physical
cores (overridable).
(If you are interested, I can send you the code for reading or inclusion.)
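
If it helps to picture it, the unlimited variant is essentially the following
sketch, using only the future and future-get forms of bigloo-concurrent (this
is not the actual code; the par-map/cores variant additionally limits the
parallelism):

    ;; sketch of an unlimited par-map: one future (and hence one thread)
    ;; per list element; results are collected in the original order
    (define (par-map f lst)
       (map future-get
            (map (lambda (x) (future (f x))) lst)))

    ;; example: (par-map (lambda (n) (* n n)) '(1 2 3 4)) => (1 4 9 16)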

I hope that Manuel will soon accept your PR that bigloo-concurrent needs.

Ciao
Sven

You wrote, 2020-09-27 00:57:
> Hello, Sven,
>
> I am glad that it is working well for you. Most of the credit lies with 
> Takashi Kato and Manuel Serrano for their work on the scheme-concurrent and 
> pthread libraries, respectively.
>
> As for how the number of available threads is determined when the code
> contains only calls to future and future-get from bigloo-concurrent: like
> scheme-concurrent, it launches a separate thread for every future. A default
> threadpool would be better in a lot of cases, and this is something we can
> experiment with. I also think it would be useful to add features similar to
> those provided by CompletableFuture in Java, which supports a number of
> useful methods for composing futures.
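>
> For example, a composition helper in the spirit of CompletableFuture's
> thenApply could be a thin layer over the existing forms. This is only a
> sketch of the idea; future-then is a hypothetical name, not something
> bigloo-concurrent provides today:
>
>     ;; hypothetical sketch, not part of bigloo-concurrent: returns a new
>     ;; future that applies f to the result of fut once fut completes
>     (define (future-then f fut)
>        (future (f (future-get fut))))
>
>     ;; example: (future-get (future-then (lambda (x) (* 2 x)) (future (+ 1 2))))
>     ;; => 6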
>
> Best Regards,
> Joe
>
> On Friday, September 25, 2020, 7:54:18 AM PDT, Sven Hartrumpf 
> <hartru...@gmx.net> wrote:
>
> Hi again.
>
> Some more feedback.
>
> SH, 2020-09-21 19:35:
>> Hello Joseph.
>>
>> You wrote, 2020-09-20 18:43:
>>> Hello, Sven,
>>>
>>> I got a chance to look at the jvm issues this weekend and have made an
>>> update that results in a mostly working implementation; there still remains
>>> a race condition when shutting down an executor that at times results in
>>> futures being in a finished state instead of the expected terminated state.
>>> I will continue to look into this but don't see it as super critical. As
>>> before, changes were required to the bigloo pthread library: I refactored
>>> the bglpmutex class to use ReentrantLock. This gives the jvm version of the
>>> pthread library proper recursive mutexes. The changes can be found at
>>> https://github.com/donaldsonjw/bigloo-1/tree/pthread_modifications.
>>>
>>> If you get a chance to try it out, let me know what you think.
>>
>> I am only using the C backend, so I cannot talk about your improvements for
>> the Java backend.
>> For the C backend, I have tried some smaller examples.
>> Your library seems to be solid and correct!
>> The performance is a little bit disappointing so far, but maybe this will
>> change when I look at some more advanced/real-life code segments.
>
> It changed and improved :-)
> I have some other examples that show impressive performance.
> On a CPU with 16 physical cores (32 virtual cores, AMD Ryzen 9 3950X),
> the parallel version with 128 similar compute tasks is 20 times faster than
> the serial version.
> With 16 similar compute tasks, the speed-up is 15, which is almost the
> theoretical optimum of 16.
> So, the overhead of the library seems to be small.
>
>> How does your library choose the number of available threads (if the code
>> contains only calls to future and future-get from bigloo-concurrent)?
>> Number of physical cores, number of virtual cores, ...?
>
> Greetings
> Sven
>
>>> Best Regards,
>>> Joseph Donaldson
>>>
>>> On Sunday, September 6, 2020, 12:25:14 PM PDT, Joseph Donaldson 
>>> <donaldso...@yahoo.com> wrote:
>>>
>>> Hello, Sven,
>>>
>>> I took a look at porting scheme-concurrent to Bigloo. The results can be 
>>> found at https://github.com/donaldsonjw/bigloo-concurrent. The native 
>>> back-end passes all the tests but there are still some issues with the jvm 
>>> back-end; I will continue to work on it. While doing the port, I made a few 
>>> changes to Bigloo.
>>> These are linked from the bigloo-concurrent README. I have made a pull 
>>> request to have the changes included in the Bigloo mainline.
>>>
>>> Let me know what you think.
>>>
>>> Best Regards,
>>> Joseph Donaldson
>>>
>>>  On Thu, Aug 20, 2020 at 8:34 AM, Sven Hartrumpf
>>>  <hartru...@gmx.net> wrote:
>>>  Hello Joseph.
>>>
>>>  Thanks for your message and the link to the hop code.
>>>  I must admit that this looks a little bit scary to me.
>>>  Maybe it is easier to port https://github.com/ktakashi/scheme-concurrent
>>>  (which already runs on several Schemes) to Bigloo?
>>>  It was presented at the Scheme Workshop 2016:
>>>  http://schemeworkshop.org/2016/ (it's the second talk)
>>>
>>>  Greetings
>>>  Sven
>>>
>>>  Joseph wrote, 2020-08-06 17:30:
>>>  > Hello, Sven,
>>>  >
>>>  > Given Bigloo's support for threads (pthreads or java threads),
>>>  > efficiently implementing Guile's futures and parallel forms is definitely
>>>  > possible. As mentioned in the Guile documentation, the most obvious
>>>  > implementation would be to introduce a threadpool abstraction and build
>>>  > the futures and parallel forms on top of that. Manuel created a similar
>>>  > abstraction for his hop executors. See
>>>  > https://github.com/manuel-serrano/hop/blob/master/src/queue_scheduler.scm.
>>>  > Taking a look at Java's ThreadPoolExecutor abstraction may also be helpful.
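>>>  >
>>>  > To make the idea concrete, a threadpool can be as small as a shared task
>>>  > queue plus a fixed set of worker threads built on Bigloo's pthread
>>>  > library. The following is only a rough sketch of that shape; it is not
>>>  > Hop's queue_scheduler and not bigloo-concurrent's executor:
>>>  >
>>>  >     (module threadpool-sketch
>>>  >        (library pthread)
>>>  >        (main main))
>>>  >
>>>  >     ;; shared state: a FIFO list of pending thunks, guarded by a mutex
>>>  >     (define lock (make-mutex))
>>>  >     (define cv (make-condition-variable))
>>>  >     (define tasks '())
>>>  >
>>>  >     (define (submit! thunk)
>>>  >        (mutex-lock! lock)
>>>  >        ;; append keeps FIFO order; O(n) but fine for a sketch
>>>  >        (set! tasks (append tasks (list thunk)))
>>>  >        (condition-variable-signal! cv)
>>>  >        (mutex-unlock! lock))
>>>  >
>>>  >     ;; each worker pops thunks until it sees the 'stop marker
>>>  >     (define (worker)
>>>  >        (mutex-lock! lock)
>>>  >        (let loop ()
>>>  >           (cond ((null? tasks)
>>>  >                  (condition-variable-wait! cv lock)
>>>  >                  (loop))
>>>  >                 ((eq? (car tasks) 'stop)
>>>  >                  (set! tasks (cdr tasks))
>>>  >                  (mutex-unlock! lock))
>>>  >                 (else
>>>  >                  (let ((task (car tasks)))
>>>  >                     (set! tasks (cdr tasks))
>>>  >                     (mutex-unlock! lock)
>>>  >                     (task)
>>>  >                     (mutex-lock! lock)
>>>  >                     (loop))))))
>>>  >
>>>  >     (define (main argv)
>>>  >        (let ((workers (map (lambda (i) (make-thread worker)) '(1 2 3 4))))
>>>  >           (for-each thread-start! workers)
>>>  >           (for-each (lambda (i) (submit! (lambda () (print "task " i))))
>>>  >                     '(1 2 3 4 5 6 7 8))
>>>  >           ;; one 'stop per worker, then wait for them to drain the queue
>>>  >           (for-each (lambda (w) (submit! 'stop)) workers)
>>>  >           (for-each thread-join! workers)))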
>>>  >
>>>  > I think this would be an interesting project and would be willing to
>>>  > contribute to it.
>>>  >
>>>  > Best Regards,
>>>  > Joseph Donaldson.
>>>  >
>>>  > On Wednesday, August 5, 2020, 3:04:46 AM PDT, Sven Hartrumpf
>>>  > <hartru...@gmx.net> wrote:
>>>  >
>>>  > Dear Bigloo users.
>>>  >
>>>  > I read about futures in Guile
>>>  > (see https://www.gnu.org/software/guile/manual/html_node/Futures.html)
>>>  > and the parallel forms built on top, e.g. parallel, par-map and
>>>  > par-for-each
>>>  > (see https://www.gnu.org/software/guile/manual/html_node/Parallel-Forms.html).
>>>  >
>>>  > They look very promising,
>>>  > especially because 8, 16 or more CPU cores are available on more and more
>>>  > standard CPUs.
>>>  >
>>>  > Can these constructs be efficiently implemented in Bigloo, too?
>>>  >
>>>  > Greetings
>>>  > Sven
