Hi John,

On 2016-11-08 16:52, John Benediktsson wrote:
>> 
>> In that case I guess the mailing list makes more sense, unless there
>> are people reading the IRC logs who are not on the mailing list.
>> 
> 
> The mailing list can be a fine place, or GitHub issues if you run into
> some problems with Factor.  If you are worried about higher volume of
> conversation, you could reach out to me and we could have an offline
> discussion, pulling a couple of the other developers into it as well.

I'm OK with the mailing list, but let me know if you want to take the 
conversation off of it.

> 
> 
>> Thanks, I guess your explanation makes sense. `[ y ] dip` looked a bit
>> weird to me at first glance, but I understand the meaning now.
>> 
> 
> There should not be a performance difference; in a simple example of
> `[ y ] dip` versus `y swap`, it's based on a sense of which one looks
> cleaner or more closely expresses the purpose of the code.  You can
> find other similar examples, like `[ foo ] keep` versus `[ foo ] [ ] bi`,
> where developers are experimenting with and thinking about expressivity
> in concatenative languages.

OK, guess I'll experiment as well, thanks.
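For my own notes, here is the equivalence on a concrete stack as I
understand it (worked out on paper, not in a listener):

```factor
! Both leave the same stack: 1 10 2
1 2 [ 10 ] dip   ! dip removes the 2, runs [ 10 ], then restores the 2
1 2 10 swap      ! push 10, then swap it under the 2
```

So they really are interchangeable here, and it's just a question of which
reads better.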

> 
> 
>> > I've seen discussions on this mailing list about the extra/cursors
>> > vocabulary about that. I've never used it though. For example Joe
>> > talked
>> > about it in 2009:
>> > https://sourceforge.net/p/factor/mailman/message/23878123
>> >
>> 
>> I see, so nothing that's actively maintained, then.
>> 
> 
> Well, possibly that's one way to look at it.  Another is that it is/was
> an experiment in thinking about generalized iterators; it has tests and
> works fine for what it is.  You can see the code in extra/cursors or use
> it (`USE: cursors`).  We do need to move towards lazy/infinite
> generators/iterators and hopefully to more easily generalize
> iteration/operations on those as well as on sequences (`each`), assocs
> (`assoc-each`), dlists (`dlist-each`), lists (`leach`), heaps, trees,
> etc.  Doug Coleman did some work on this, but we haven't finished it
> yet.

I looked into the cursors vocab, sadly it doesn't have any 
documentation. But I'll dig into it if the need arises.

> 
>> In the meantime I finished a small script where I was trying out Factor
>> - it goes into my git folders and fetches/pulls updates from origin. I
>> got the first version working pretty quickly, then I wanted to tweak it
>> so that the updates run in parallel and print updates to the console
>> live. In a final version I wanted to print a progress bar while
>> suppressing the git output. For that reason I thought I needed message
>> passing. This isn't working as I expect it to. Here is the gist:
>> 
>> https://gist.github.com/xificurC/f4de1993b3218a50dd8936dfc0ec16f2
>> 
>> These are my last 2 attempts. The first, commented-out version finishes
>> without waiting for the threads to finish (even with the ugly hack of
>> reading the state>> of the thread), while in the second the receiving
>> thread doesn't read the messages as they arrive; rather, its mailbox
>> gets filled up and it only gets time to work on the messages once
>> everyone finishes. What am I doing wrong?
>> 
> 
> I would think it would be easier to use some form of `parallel-map` from
> `USE: concurrency.combinators`.  But if you want to build it up from
> other concurrency libraries, you might think of a count-down latch
> (`USE: concurrency.count-downs`) and waiting for it to reach zero, since
> you know how many items to wait for.

I thought of parallel-map first, but I wanted to do a bit more than that; 
otherwise I could just write a script that handles one repository and 
feed it to GNU parallel. I wanted to achieve more than what I can with 
GNU parallel (which is able to parallelize on multiple cores and writes 
correctly to stdout by not mixing the outputs of several tasks). Since I 
know the count ahead of time I wanted to suppress the git output and 
instead show a progress bar with some results, e.g. whether the 
pull/fetch was successful or not. The only way I can imagine doing that 
is by message passing. But as I explained, the threads don't seem to work 
as I expect: the receiving thread doesn't get to "speak up" until 
everything else is done. Is there any way to give a particular thread 
higher priority, or have it wake up right when it gets a message? Also, 
when does Factor end? I have a bunch of threads that are waiting for I/O 
or a network response and it finishes anyway. My problem with threads is 
that I don't understand what's happening behind the scenes: who runs 
when, how many of them at once, etc.

>  You might also want to limit the number of threads running by using a
> semaphore (`USE: concurrency.semaphores`) if not using parallel-map on
> groups of items.

Would that help keep the receiver alive? Like, if there's a total of 4 
threads running, I put a semaphore on 3 and one always gets through? The 
concrete problem with this example is that I'm waiting on the network 
almost all the time, so it makes sense to launch all of them at once and 
just collect the results. In a bash-y way I would launch the processes 
with GNU parallel, have each write its answer into a named pipe, and read 
from that. Is there something similar I can achieve in Factor?
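For completeness, the mailbox shape I was going for looks roughly like
this (again an untested sketch; `pull-and-report` stands in for the worker
body and is assumed to return a status string):

```factor
USING: concurrency.mailboxes io kernel locals math sequences threads ;

:: pull-all-with-progress ( repos -- )
    <mailbox> :> results
    repos [| repo |
        ! each worker posts its status string when done
        [ repo pull-and-report results mailbox-put ] "puller" spawn drop
    ] each
    ! read exactly as many results as there are repos;
    ! mailbox-get blocks (and yields) until a message arrives
    repos length [ results mailbox-get print flush ] times ;
```

The named pipe in my bash analogy would be the mailbox here, if I
understand the vocabulary correctly.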

> 
> 
>> On the topic of factor's cooperative threads - do they run multi- or
>> single-core?
>> 
> 
> Currently single-core.  We have some work completed to allow multiple
> cooperative "Factor VMs" in a single process, but it's not finished.

Cool, glad to know there's still a lot of work going on.

> 
> 
>> If you have some other tips on the code I'll be glad; I feel like I'm
>> doing more shuffling than I might need.
>> 
> 
> Often there is a combinator that can make your operation simpler, or a
> different order to the stack that flows through your operations more
> simply.  But feel free to use local variables when you are learning and
> having a hard time with stack shuffling.  They don't come with a
> performance cost (except mutable variables, where you are changing the
> value pointed to by the local name).  And keep asking questions!

I'm actually trying not to use locals, to get more practice working with 
the stack. I can get things to work, I just feel like I'm doing it the 
long way. But I'm sure it's just a question of practice.
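One small pattern I've already found that cuts down on shuffling is the
cleave combinators, e.g. applying two quotations to the same value:

```factor
! instead of the dup/swap dance:  { 10 20 30 } dup length swap first
{ 10 20 30 } [ length ] [ first ] bi   ! leaves: 3 10
```

So I'll keep an eye out for combinators like that before reaching for
`dup`/`swap`/`over` chains.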

Thanks for your response!

> 
> Best,
> John.
> 
> ------------------------------------------------------------------------------
> Developer Access Program for Intel Xeon Phi Processors
> Access to Intel Xeon Phi processor-based developer platforms.
> With one year of Intel Parallel Studio XE.
> Training and support from Colfax.
> Order your platform today. http://sdm.link/xeonphi
> _______________________________________________
> Factor-talk mailing list
> Factor-talk@lists.sourceforge.net
> https://lists.sourceforge.net/lists/listinfo/factor-talk

-- 
------------
   Peter Nagy
------------
