>
> In that case I guess the mailing list makes more sense. Unless there are
> people reading the IRC logs who aren't on the mailing list.
>

The mailing list can be a fine place, or GitHub issues if you run into some
problems with Factor.  If you are worried about a higher volume of
conversation, you could reach out to me and we could have an offline
discussion, pulling in a couple of the other developers as well.


> Thanks, I guess your explanation makes sense, `[ y ] dip` looked a bit
> weird to me on the first look but I understand the meaning now.
>

There should not be a performance difference: in a simple example like `[ y ]
dip` versus `y swap`, it's based on a sense of which one looks cleaner or
more closely expresses the purpose of the code.  You can find other similar
examples, like `[ foo ] keep` versus `[ foo ] [ ] bi`, where developers are
experimenting with and thinking about expressivity in concatenative
languages.
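
For instance, each pair below leaves the stack in the same state, so the
choice is purely stylistic:

```factor
10 [ 5 ] dip        ! stack: 5 10
10 5 swap           ! stack: 5 10

7 [ 1 + ] keep      ! stack: 8 7
7 [ 1 + ] [ ] bi    ! stack: 8 7
```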


> > I've seen discussions on this mailing list about the extra/cursors
> > vocabulary about that. I've never used it though. For example Joe
> > talked
> > about it in 2009:
> > https://sourceforge.net/p/factor/mailman/message/23878123
> >
>
> I see, so nothing that would be alive.
>

Well, that's one way to look at it.  Another is that it is/was an
experiment in thinking about generalized iterators; it has tests and works
fine for what it is.  You can see the code in extra/cursors or use it
(`USE: cursors`).  We do need to move towards lazy/infinite
generators/iterators, and hopefully to more easily generalize
iteration/operations over those as well as over sequences (`each`), assocs
(`assoc-each`), dlists (`dlist-each`), lists (`leach`), heaps, trees, etc.
Doug Coleman did some work on this, but we haven't finished it yet.
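
To illustrate the current situation: each container type has its own
traversal word, rather than one generic protocol.  For example:

```factor
USING: arrays assocs prettyprint sequences ;

! Sequences have their own traversal word...
{ 1 2 3 } [ . ] each

! ...and assocs a different one, whose quotation takes key and value.
H{ { "a" 1 } { "b" 2 } } [ 2array . ] assoc-each
```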

> In the meantime I finished a small script where I was trying out Factor
> - it goes into my git folders and fetches/pulls updates from origin. I
> got the first version working pretty quickly, then I wanted to tweak it
> so that the updates run in parallel and output updates on the console
> live. In a final version I wanted to print a progress bar while
> suppressing git output. For that reason I thought I needed message
> passing. This isn't working as I expect it to. Here is the gist:
>
> https://gist.github.com/xificurC/f4de1993b3218a50dd8936dfc0ec16f2
>
> These are my last 2 attempts. The first, commented-out version finishes
> without waiting for the threads to finish (even with the ugly hack of
> reading the state>> of the thread), while in the second the receiving
> thread doesn't read the messages as they arrive; rather, its mailbox gets
> filled up and only when everyone finishes does it get time to work on the
> messages. What am I doing wrong?
>

I would think it would be easier to use some form of `parallel-map` from
`USE: concurrency.combinators`.  But if you want to build it up from other
concurrency libraries, you might use a count-down latch (`USE:
concurrency.count-downs`) and wait for it to reach zero, since you know
how many items to wait for.  You might also want to limit the number of
running threads with a semaphore (`USE: concurrency.semaphores`) if you
are not using parallel-map on groups of items.
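
A rough sketch of both approaches; `do-job` here is a hypothetical
stand-in for your per-repository git pull, and the thread/semaphore
counts are arbitrary:

```factor
USING: concurrency.combinators concurrency.count-downs
concurrency.semaphores kernel locals sequences threads ;

! Hypothetical per-item work word; replace with your git logic.
: do-job ( item -- ) drop ;

! Simplest: parallel-each spawns the threads and waits for all of them.
: run-jobs ( items -- ) [ do-job ] parallel-each ;

! Hand-rolled: a count-down latch joins the spawned threads,
! and a semaphore caps how many run at once.
:: run-jobs* ( items -- )
    items length <count-down> :> latch
    2 <semaphore> :> sem
    items [| item |
        [ sem [ item do-job ] with-semaphore latch count-down ]
        "job" spawn drop
    ] each
    latch await ;
```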


> On the topic of factor's cooperative threads - do they run multi- or
> single-core?
>

Currently single-core.  We have some work completed to allow multiple
cooperative "Factor VMs" in a single process, but it's not finished.


> If you have some other tips on the code I'll be glad; I feel like I'm
> doing more shuffling than I might need.
>

Often there is a combinator that can make your operation simpler, or a
different stack order that flows through your operations more cleanly.
But feel free to use local variables when you are learning and having a
hard time with stack shuffling.  They don't come with a performance cost
(except mutable locals, where you change the value pointed to by the
local name).  And keep asking questions!
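
As a small illustration, here is the same word (linear interpolation,
picked just as an example) written with stack shuffling and again with
named locals; both compile to equivalent code:

```factor
USING: kernel locals math ;

! With shuffles:
: lerp ( a b t -- x ) [ over - ] dip * + ;

! With named inputs; no performance penalty.
:: lerp2 ( a b t -- x ) b a - t * a + ;
```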

Best,
John.
_______________________________________________
Factor-talk mailing list
Factor-talk@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/factor-talk