Re: [Factor-talk] Questions of a newcomer

2016-11-08 Thread Chris Double
On Wed, Nov 9, 2016 at 4:02 AM,   wrote:
>
> There are my last 2 attempts. The first, commented-out version finishes
> without waiting for the threads to finish (even with the ugly hack of
> reading the state>> of the thread) while in the second the receiving
> thread doesn't read the messages as they arrive; rather its mailbox gets
> filled up and only when everything finishes does it get time to work on
> the messages. What am I doing wrong?

Nothing seems to be wrong here, but there's no obvious thread join
operation to wait for completion, so Factor exits immediately even if
threads are still running.

> On the topic of factor's cooperative threads - do they run multi- or
> single-core?

They are single-core cooperative threads. If you run an FFI function
that blocks, then all threads block.

> If you have some other tips on the code I'll be glad, I feel like I'm
> doing more shuffling then I might need.

I tried to duplicate the basics of your code with the following:

self '[ "bash -c \"sleep 10\"" run-process drop 1 _ send ] "1" spawn
self '[ "bash -c \"sleep 5\"" run-process drop 2 _ send ] "2" spawn
receive

This will spawn two threads that run a bash 'sleep'. The 'receive'
blocks until the shortest sleep finishes and if you run another
'receive' then it blocks until the longer one completes. This should
be what your code does too. Are you sure the git commands aren't all
completing at the same time?

Note the use of the 'fry' quotation, which avoids having to curry and
swap later to pass 'self' around.
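As an editorial aside, the same spawn-and-receive shape can be sketched in Python (not Factor, and not the poster's actual code): `queue.Queue` stands in for the thread's mailbox, and `time.sleep` for the `run-process` call. The delays are invented for the demo.

```python
import queue
import threading
import time

# Shared queue playing the role of the Factor thread's mailbox.
box = queue.Queue()

def worker(tag, delay):
    time.sleep(delay)  # stands in for run-process (the bash 'sleep')
    box.put(tag)       # corresponds to 'send'-ing to the main thread

threading.Thread(target=worker, args=(1, 0.10)).start()
threading.Thread(target=worker, args=(2, 0.05)).start()

first = box.get()   # blocks until the shorter "sleep" finishes
second = box.get()  # blocks until the longer one completes
print(first, second)  # 2 1
```

As in the Factor snippet, each `get` blocks until one more worker reports in, so the main thread naturally waits for both.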

-- 
http://bluishcoder.co.nz

--
Developer Access Program for Intel Xeon Phi Processors
Access to Intel Xeon Phi processor-based developer platforms.
With one year of Intel Parallel Studio XE.
Training and support from Colfax.
Order your platform today. http://sdm.link/xeonphi
___
Factor-talk mailing list
Factor-talk@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/factor-talk


Re: [Factor-talk] Questions of a newcomer

2016-11-08 Thread John Benediktsson
>
> I thought of parallel-map first but I wanted to do a bit more than that,
> otherwise I could just write a script that handles one and feed it to
> GNU parallel. I wanted to achieve more than what I can with GNU parallel
> (which is able to parallelize on multiple cores and writes correctly to
> stdout by not mixing the outputs of several tasks). Since I know the
> count ahead I wanted to suppress the git output and instead show a
> progress bar with some results, e.g. if the pull/fetch was successful or
> not. The only way I can imagine that is by message passing. But as I
> explained the threads don't seem to work as I expect, the recieving
> thread doesn't get to "speak up" until it's too long. Is there any way
> one can make a particular thread higher priority or come up right when
> it gets a message? Also, when does factor end? I have a bunch of threads
> that are waiting for I/O or network response and it finishes anyway. My
> problem with threads is that I don't understand what's happening behind
> the scenes, who goes when, how many of them at once, etc.


Here is a simple example of ``n`` worker threads that each do some amount
of work (in this case sleeping and then doing "1" unit of work) and then
push that work into a mailbox.  The main thread does a mailbox-get, adds
that work to its total, and updates a progress bar, finishing when all the
expected work is done:

:: start-thread-example1 ( n -- )

    <mailbox> :> box

    n iota [
        '[
            _ 100 * milliseconds sleep
            1 box mailbox-put
        ] "Worker" spawn drop
    ] each

    0 [
        box mailbox-get +
        dup n / 70 make-progress-bar print
        dup n <
    ] loop drop ;


In the ``progress-bars.model`` vocabulary we have a visual model that can
be used in a GUI, or you can just print it to standard-out.
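As an editorial aside, the same mailbox pattern can be sketched in Python, with `queue.Queue` playing the mailbox and a crude text bar standing in for `make-progress-bar`; the helper name `thread_example1` and the timings are made up for the demo.

```python
import queue
import threading
import time

def thread_example1(n):
    box = queue.Queue()  # the "mailbox"

    # n workers: each sleeps briefly, then deposits one unit of work.
    for i in range(n):
        threading.Thread(
            target=lambda i=i: (time.sleep(i * 0.01), box.put(1)),
            daemon=True,
        ).start()

    # Main thread: drain the mailbox, updating a progress bar each time,
    # until all n units of work have arrived.
    done = 0
    while done < n:
        done += box.get()  # blocks until a worker reports in
        print(f"[{'=' * done}{' ' * (n - done)}]")  # crude progress bar
    return done

thread_example1(5)  # prints a growing bar, returns 5
```

The key point, as in the Factor version, is that the main thread consumes messages as they arrive rather than after all workers finish.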

If you wanted to see how the count-downs code might look (without
progress-bars), maybe something like this:

:: start-thread-example2 ( n -- )

    n <count-down> :> coord

    n iota [
        '[
            _ 100 * milliseconds sleep
            coord count-down
        ] "Worker" spawn drop
    ] each

    coord await ;
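Python's standard library has no count-down latch, so this editorial sketch builds a minimal one on `threading.Condition` to mirror `concurrency.count-downs`; the class and helper names are invented for the demo.

```python
import threading
import time

class CountDown:
    """Minimal count-down latch: await_ blocks until n reaches zero."""
    def __init__(self, n):
        self.n = n
        self.cond = threading.Condition()

    def count_down(self):
        with self.cond:
            self.n -= 1
            if self.n == 0:
                self.cond.notify_all()

    def await_(self):
        with self.cond:
            self.cond.wait_for(lambda: self.n == 0)

def thread_example2(n):
    coord = CountDown(n)
    # n workers: each sleeps briefly, then counts the latch down once.
    for i in range(n):
        threading.Thread(
            target=lambda i=i: (time.sleep(i * 0.01), coord.count_down()),
        ).start()
    coord.await_()  # returns only once every worker has counted down
    return coord.n

thread_example2(5)  # returns 0
```

Since the number of repositories is known up front, this is arguably the simplest way to wait for "everything finished" without polling thread state.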

There are a lot of ways to solve the problem, but without knowing more
about what you're looking for, I'll just leave these here.

Best,
John.


Re: [Factor-talk] Questions of a newcomer

2016-11-08 Thread petern
Hi John,

On 2016-11-08 16:52, John Benediktsson wrote:
>> 
>> In that case I guess the mailing list makes more sense. Unless there's
>> people reading the IRC logs and not part of the mailing list.
>> 
> 
> The mailing list can be a fine place, or GitHub issues if you run into some
> problems with Factor.  If you are worried about higher volume of
> conversation, you could reach out to me and we could have an offline
> discussion, pulling a couple of the other developers into it as well.

I'm OK with the mailing list, but let me know if you want to take the 
conversation off of it.

> 
> 
>> Thanks, I guess your explanation makes sense, `[ y ] dip` looked a bit
>> weird to me on the first look but I understand the meaning now.
>> 
> 
> There should not be a performance difference in a simple example of `[ y ]
> dip` versus `y swap`; it's based on a sense of which one looks cleaner or
> more closely expresses the purpose of the code.  You can find other similar
> examples, like `[ foo ] keep` versus `[ foo ] [ ] bi`, where developers are
> experimenting with and thinking about expressivity in concatenative
> languages.

OK, guess I'll experiment as well, thanks.

> 
> 
>> > I've seen discussions on this mailing list about the extra/cursors
>> > vocabulary about that. I've never used it though. For example Joe
>> > talked about it in 2009:
>> > https://sourceforge.net/p/factor/mailman/message/23878123
>> >
>> >
>> 
>> I see, so nothing that would be alive.
>> 
> 
> Well, possibly that's one way to look at it.  Another is that it is/was an
> experiment in thinking about generalized iterators and has tests and works
> fine for what it is.  You can see the code in extra/cursors or use it
> (`USE: cursors`).  We do need to move towards lazy/infinite
> generators/iterators and hopefully to more easily generalize
> iteration/operations on those as well as sequences (`each`) and assocs
> (`assoc-each`) and dlists (`dlist-each`) and lists (`leach`) and heaps,
> trees, etc.  Doug Coleman did some work on this, but we haven't
> finished it yet.

I looked into the cursors vocab, sadly it doesn't have any 
documentation. But I'll dig into it if the need arises.

> 
>> In the meantime I finished a small script where I was trying out factor
>> - it goes into my git folders and fetches/pulls updates from origin. I
>> got the first version working pretty quickly, then I wanted to tweak it
>> so that the updates run in parallel and output updates on the console
>> live. In a final version I wanted to print some progress bar while
>> suppressing git output. For that reason I thought I need message
>> passing. This isn't working as I expect it to. Here is the gist:
>> 
>> https://gist.github.com/xificurC/f4de1993b3218a50dd8936dfc0ec16f2
>> 
>> There are my last 2 attempts. The first, commented-out version finishes
>> without waiting for the threads to finish (even with the ugly hack of
>> reading the state>> of the thread) while in the second the receiving
>> thread doesn't read the messages as they arrive; rather its mailbox gets
>> filled up and only when everything finishes does it get time to work on
>> the messages. What am I doing wrong?
>> 
> 
> I would think it would be easier to use some form of ``parallel-map`` from
> `USE: concurrency.combinators`.  But if you want to build it up from other
> concurrency libraries, you might think of a count-down latch (`USE:
> concurrency.count-downs`) and waiting for it to reach zero, since you know
> how many items to wait for.

I thought of parallel-map first but I wanted to do a bit more than that, 
otherwise I could just write a script that handles one and feed it to 
GNU parallel. I wanted to achieve more than what I can with GNU parallel 
(which is able to parallelize on multiple cores and writes correctly to 
stdout by not mixing the outputs of several tasks). Since I know the 
count ahead I wanted to suppress the git output and instead show a 
progress bar with some results, e.g. if the pull/fetch was successful or 
not. The only way I can imagine that is by message passing. But as I 
explained, the threads don't seem to work as I expect; the receiving 
thread doesn't get to "speak up" until it's too late. Is there any way 
one can make a particular thread higher priority or come up right when 
it gets a message? Also, when does factor end? I have a bunch of threads 
that are waiting for I/O or network response and it finishes anyway. My 
problem with threads is that I don't understand what's happening behind 
the scenes, who goes when, how many of them at once, etc.

>  You might also want to limit the number of
> threads running by using a semaphore (`USE: concurrency.semaphores`) if
> not using parallel-map on groups of items.

Would that help to keep the receiver alive? Like if there's a total of 4 
threads running, I put a semaphore on 3 and one always gets through? The 
problem with this example concretely is that I'm waiting on network

Re: [Factor-talk] Questions of a newcomer

2016-11-08 Thread John Benediktsson
>
> In that case I guess the mailing list makes more sense. Unless there's
> people reading the IRC logs and not part of the mailing list.
>

The mailing list can be a fine place, or GitHub issues if you run into some
problems with Factor.  If you are worried about higher volume of
conversation, you could reach out to me and we could have an offline
discussion, pulling a couple of the other developers into it as well.


> Thanks, I guess your explanation makes sense, `[ y ] dip` looked a bit
> weird to me on the first look but I understand the meaning now.
>

There should not be a performance difference in a simple example of `[ y ]
dip` versus `y swap`; it's based on a sense of which one looks cleaner or
more closely expresses the purpose of the code.  You can find other similar
examples, like `[ foo ] keep` versus `[ foo ] [ ] bi`, where developers are
experimenting with and thinking about expressivity in concatenative
languages.


> > I've seen discussions on this mailing list about the extra/cursors
> > vocabulary about that. I've never used it though. For example Joe
> > talked
> > about it in 2009:
> > https://sourceforge.net/p/factor/mailman/message/23878123
> >
>
> I see, so nothing that would be alive.
>

Well, possibly that's one way to look at it.  Another is that it is/was an
experiment in thinking about generalized iterators and has tests and works
fine for what it is.  You can see the code in extra/cursors or use it
(`USE: cursors`).  We do need to move towards lazy/infinite
generators/iterators and hopefully to more easily generalize
iteration/operations on those as well as sequences (`each`) and assocs
(`assoc-each`) and dlists (`dlist-each`) and lists (`leach`) and heaps,
trees, etc.  Doug Coleman did some work on this, but we haven't
finished it yet.

> In the meantime I finished a small script where I was trying out factor
> - it goes into my git folders and fetches/pulls updates from origin. I
> got the first version working pretty quickly, then I wanted to tweak it
> so that the updates run in parallel and output updates on the console
> live. In a final version I wanted to print some progress bar while
> suppressing git output. For that reason I thought I need message
> passing. This isn't working as I expect it to. Here is the gist:
>
> https://gist.github.com/xificurC/f4de1993b3218a50dd8936dfc0ec16f2
>
> There are my last 2 attempts. The first, commented-out version finishes
> without waiting for the threads to finish (even with the ugly hack of
> reading the state>> of the thread) while in the second the receiving
> thread doesn't read the messages as they arrive; rather its mailbox gets
> filled up and only when everything finishes does it get time to work on
> the messages. What am I doing wrong?
>

I would think it would be easier to use some form of ``parallel-map`` from
`USE: concurrency.combinators`.  But if you want to build it up from other
concurrency libraries, you might think of a count-down latch (`USE:
concurrency.count-downs`) and waiting for it to reach zero, since you know
how many items to wait for.  You might also want to limit the number of
threads running by using a semaphore (`USE: concurrency.semaphores`) if not
using parallel-map on groups of items.
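As an editorial sketch of the semaphore suggestion, in Python rather than Factor: `threading.Semaphore` caps how many workers run their (simulated) git command at once. The function name, worker count, and limit are invented for the demo.

```python
import threading
import time

def run_limited(n_workers, limit):
    sem = threading.Semaphore(limit)  # at most `limit` workers at once
    lock = threading.Lock()
    state = {"active": 0, "peak": 0}

    def worker():
        with sem:  # blocks while `limit` workers are already busy
            with lock:
                state["active"] += 1
                state["peak"] = max(state["peak"], state["active"])
            time.sleep(0.05)  # stands in for the git subprocess
            with lock:
                state["active"] -= 1

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["peak"]  # highest number of simultaneously active workers

print(run_limited(6, 2))  # never more than 2
```

The peak count demonstrates the cap: six workers are spawned, but no more than two ever hold the semaphore at the same time.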


> On the topic of factor's cooperative threads - do they run multi- or
> single-core?
>

Currently single-core.  We have some work completed to allow multiple
cooperative "Factor VMs" in a single process, but it's not finished.


> If you have some other tips on the code I'll be glad, I feel like I'm
> doing more shuffling than I might need.
>

Often there is a combinator that can make your operation simpler, or a
different order to the stack that flows through your operations more
simply.  But feel free to use local variables when you are learning and
having a hard time with stack shuffling.  They don't come with a
performance cost (except mutable variables where you are changing the value
pointed to by the local name).  And keep asking questions!

Best,
John.


Re: [Factor-talk] Questions of a newcomer

2016-11-08 Thread petern
Hi Jon,

On 2016-11-07 22:34, Jon Harper wrote:
> Hi Peter,
> 
> On Mon, Nov 7, 2016 at 3:07 PM,  wrote:
> 
>> Hello,
>> 
>> I am tinkering with factor and was wondering if it is OK to pick your
>> brains here? As I play around with the language questions come up that
>> are probably easy for you to answer. I don't see much action on the
>> #concatenative IRC channel so I thought the mailing list might be a
>> better place?
>> 
> Questions are always welcome. The mailing list or #concatenative on IRC
> are both good places to ask. Several people read the logs of #concatenative
> and may answer your questions some time after you sent it, so don't give up.

In that case I guess the mailing list makes more sense. Unless there's 
people reading the IRC logs and not part of the mailing list.

> 
> As a starter:
>> 
>> - I see a common pattern in definitions of using `dip` instead of
>> `swap`. Is there some special reason for that? Is it more performant? I
>> know the words aren't interchangeable but e.g. `with-directory-files`
>> has `[ [ "" directory-files ] ] dip` which as far as I can tell is
>> equivalent to `[ "" directory-files ] swap`. I saw this pattern in more
>> definitions.
>> 
>> 
> I guess it's a matter of personal style. I would argue that the 'meaning'
> of swap ( x y -- y x ) is that there's the same importance on pushing y
> down the stack as on pulling x up the stack. Whereas [ y ] dip would focus
> more on putting y down the stack.
> 
> Regarding the performance, you can often see for yourself using the
> optimized. word of compiler.tree.debugger. You could even install libudis
> and use the disassemble word of tools.disassembler.
> I would be surprised if the performance of swap vs dip mattered in a real
> application.
> 

Thanks, I guess your explanation makes sense, `[ y ] dip` looked a bit 
weird to me at first glance but I understand the meaning now.

> 
>> 
>> - is there any sequence generator/iterator vocabulary? Something that
>> gives or computes values on demand. One can find it in many languages
>> with a bit different flavor, e.g. Scheme, Rust, Python. I saw that there
>> is lists.lazy which can serve a similar purpose but is a bit more
>> heavyweight in some cases. Maybe this isn't needed in factor at all and
>> you use a different pattern to solve a similar problem?
>> 
>> 
> I've seen discussions on this mailing list about the extra/cursors
> vocabulary about that. I've never used it though. For example Joe talked
> about it in 2009:
> https://sourceforge.net/p/factor/mailman/message/23878123
> 

I see, so nothing that would be alive.



In the meantime I finished a small script where I was trying out factor 
- it goes into my git folders and fetches/pulls updates from origin. I 
got the first version working pretty quickly, then I wanted to tweak it 
so that the updates run in parallel and output updates on the console 
live. In a final version I wanted to print some progress bar while 
suppressing git output. For that reason I thought I need message 
passing. This isn't working as I expect it to. Here is the gist:

https://gist.github.com/xificurC/f4de1993b3218a50dd8936dfc0ec16f2

There are my last 2 attempts. The first, commented-out version finishes 
without waiting for the threads to finish (even with the ugly hack of 
reading the state>> of the thread) while in the second the receiving 
thread doesn't read the messages as they arrive; rather its mailbox gets 
filled up and only when everything finishes does it get time to work on 
the messages. What am I doing wrong?

On the topic of factor's cooperative threads - do they run multi- or 
single-core?

If you have some other tips on the code I'll be glad, I feel like I'm 
doing more shuffling than I might need.

Thank you for your answer.

> Cheers,
> Jon
> 

-- 

   Peter Nagy

