Re: core.async and performance

2013-11-29 Thread Ben Mabey

On Fri Nov 29 22:30:45 2013, kandre wrote:

Maybe I'll just use my simpy models for now and wait for clj-sim ;)
Any chance of sharing?
Cheers
Andreas

On Saturday, 30 November 2013 15:40:10 UTC+10:30, Ben Mabey wrote:

On 11/29/13, 9:16 PM, Cedric Greevey wrote:

On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey wrote:

On 11/29/13, 8:33 PM, Cedric Greevey wrote:

Have you checked for other sources of performance hits?
Boxing, var lookups, and especially reflection.

As I said, I haven't done any optimization yet. :)  I did
check for reflection though and didn't see any.



I'd expect a reasonably optimized Clojure version to
outperform a Python version by a very large factor -- 10x
just for being JITted JVM bytecode instead of interpreted
Python, times another however-many-cores-you-have for
core.async keeping all your processors warm vs. Python and
its GIL limiting the Python version to single-threaded
performance.

This task does not benefit from the multiplexing that
core.async provides, at least not in the case of a single
simulation which has no clear logical partition that can be
run in parallel.  The primary benefit that core.async is
providing in this case is to escape from call-back hell.


Hmm. Then you're still looking for a 25-fold slowdown somewhere.
It's hard to get Clojure to run that slow *without* reflection,
unless you're hitting one of those cases where parallelizing
actually makes things worse. Hmm; core.async will be trying to
multithread your code, even while the nature of the task is
limiting it to effectively serial performance anyway due to
blocking. Perhaps you're getting some of the slowdown from
context switches that aren't buying you anything for what they
cost? The GIL-afflicted Python code wouldn't be impacted by the
cost of context switches, by contrast.



I had expected context switching to incur a cost, but I had never
measured how large it was.  I just did, and I got a 1.62x speed
improvement[1], which means the Clojure version is only 1.2x slower than
the simpy version. :)

Right now the thread pool in core.async is hardcoded.  So for this
experiment I hacked in a fixed thread pool of size one.  I asked at the
conj, during the unsession, about making core.async's thread pool
swappable/parameterized, and the idea was not well received.  For most
use cases I think the current thread pool is fine, but for this
particular one it appears it is not...

-Ben

1. Full benchmark... compare to times here:
https://gist.github.com/bmabey/7714431

WARNING: Final GC required 5.486725933787122 % of runtime
WARNING: Final GC required 12.905903007134539 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
 Execution time mean : 392.457499 ms
Execution time std-deviation : 8.225849 ms
   Execution time lower quantile : 384.192999 ms ( 2.5%)
   Execution time upper quantile : 401.027249 ms (97.5%)
   Overhead used : 1.847987 ns
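The pool-size-one experiment above is Clojure-specific, but the effect it exposes is general: when each step blocks on the previous one, extra threads add handoff cost and no parallelism. A rough Python analogue (pool sizes, step count, and the toy task are all illustrative, not Ben's benchmark):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_chain(pool_size, steps=2_000):
    """Run a chain of tiny dependent tasks: each submit waits on the
    previous result, so the workload is inherently serial and a larger
    pool can only add scheduling/handoff overhead."""
    with ThreadPoolExecutor(max_workers=pool_size) as pool:
        start = time.perf_counter()
        value = 0
        for _ in range(steps):
            # .result() blocks, mimicking a go block parked on a channel
            value = pool.submit(lambda v: v + 1, value).result()
        elapsed = time.perf_counter() - start
    return value, elapsed

serial = run_chain(1)
pooled = run_chain(8)
print(serial[0], pooled[0])  # both chains compute the same answer
```

On a multi-core machine the 8-thread run is typically no faster, and often slower, than the single-thread run, which is consistent with the 1.62x improvement Ben saw from pinning the pool to one thread.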

--
--
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To post to this group, send email to clojure@googlegroups.com
Note that posts from new members are moderated - please be patient
with your first post.
To unsubscribe from this group, send email to
clojure+unsubscr...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/clojure?hl=en
---
You received this message because you are subscribed to the Google
Groups "Clojure" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to clojure+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


I'm planning on open sourcing clj-sim but it is not quite ready.  It 
should be soon but you shouldn't put your project on hold for it. :)


I should point out that while clj-sim provides process-based simulation
similar to Simula and simpy, it does not give processes the ability to
look at each other's state.  This is of course very limiting, since at
certain decision points a process needs to know the state of the others
(e.g. how many workers are available).  I've thought of ways of making
the state of the bound variables in a process accessible to others, but
I don't think that is a good idea (and it is very un-clojurey).
Instead, I've been using the processes to generate events that feed into
my production system, which builds the state back up (generally in a
single atom, or perhaps Datomic) and makes decisions on it, which in
turn affect the simulated processes.  Viewing the simulation processes
as one big generator of stochastic events for my production system
provides a nice separation of concerns.
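Since clj-sim isn't public, here is a hypothetical sketch of that architecture (in Python, simpy's language; all names invented): the simulation emits plain events, and a separate "production system" folds the stream into a single state value (the analogue of keeping state in one atom) and makes decisions from that state alone.

```python
def production_system(events):
    """Fold a stream of simulation events into one state value and make
    decisions from that state -- never from the processes' internals."""
    state = {"available_workers": 3, "queued": 0}
    decisions = []
    for event in events:
        if event == "job-arrived":
            if state["available_workers"] > 0:
                state["available_workers"] -= 1
                decisions.append("dispatch")
            else:
                state["queued"] += 1
                decisions.append("enqueue")
        elif event == "job-finished":
            if state["queued"] > 0:
                state["queued"] -= 1  # hand the freed worker a queued job
                decisions.append("dispatch")
            else:
                state["available_workers"] += 1
                decisions.append("idle")
    return state, decisions

# The simulation side is just a generator of stochastic events;
# a fixed list stands in for it here.
state, decisions = production_system(
    ["job-arrived", "job-arrived", "job-arrived", "job-arrived", "job-finished"])
```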

Re: core.async and performance

2013-11-29 Thread kandre
Maybe I'll just use my simpy models for now and wait for clj-sim ;)
Any chance of sharing?
Cheers
Andreas

On Saturday, 30 November 2013 15:40:10 UTC+10:30, Ben Mabey wrote:
>
>  On 11/29/13, 9:16 PM, Cedric Greevey wrote:
>  
>  On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey 
> > wrote:
>
>>  On 11/29/13, 8:33 PM, Cedric Greevey wrote:
>>  
>>  Have you checked for other sources of performance hits? Boxing, var 
>> lookups, and especially reflection.
>>   
>>  As I said, I haven't done any optimization yet. :)  I did check for 
>> reflection though and didn't see any. 
>>
>>   
>>  I'd expect a reasonably optimized Clojure version to outperform a Python 
>> version by a very large factor -- 10x just for being JITted JVM bytecode 
>> instead of interpreted Python, times another however-many-cores-you-have 
>> for core.async keeping all your processors warm vs. Python and its GIL 
>> limiting the Python version to single-threaded performance.
>>  
>>  This task does not benefit from the multiplexing that core.async 
>> provides, at least not in the case of a single simulation which has no 
>> clear logical partition that can be run in parallel.  The primary benefit 
>> that core.async is providing in this case is to escape from call-back hell.
>>  
>  
>  Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's 
> hard to get Clojure to run that slow *without* reflection, unless you're 
> hitting one of those cases where parallelizing actually makes things worse. 
> Hmm; core.async will be trying to multithread your code, even while the 
> nature of the task is limiting it to effectively serial performance anyway 
> due to blocking. Perhaps you're getting some of the slowdown from context 
> switches that aren't buying you anything for what they cost? The 
> GIL-afflicted Python code wouldn't be impacted by the cost of context 
> switches, by contrast.
>  
>  
> I had expected context switching to incur a cost, but I had never measured 
> how large it was.  I just did, and I got a 1.62x speed improvement[1], which 
> means the Clojure version is only 1.2x slower than the simpy version. :) 
>
> Right now the thread pool in core.async is hardcoded.  So for this 
> experiment I hacked in a fixed thread pool of size one.  I asked at the conj, 
> during the unsession, about making core.async's thread pool 
> swappable/parameterized, and the idea was not well received.  For most use 
> cases I think the current thread pool is fine, but for this particular one 
> it appears it is not...
>
> -Ben
>
> 1. Full benchmark... compare to times here: 
> https://gist.github.com/bmabey/7714431
> WARNING: Final GC required 5.486725933787122 % of runtime
> WARNING: Final GC required 12.905903007134539 % of runtime
> Evaluation count : 6 in 6 samples of 1 calls.
>  Execution time mean : 392.457499 ms
> Execution time std-deviation : 8.225849 ms
>Execution time lower quantile : 384.192999 ms ( 2.5%)
>Execution time upper quantile : 401.027249 ms (97.5%)
>Overhead used : 1.847987 ns
>
>  



Re: core.async and performance

2013-11-29 Thread Ben Mabey

On 11/29/13, 9:16 PM, Cedric Greevey wrote:
On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey wrote:


On 11/29/13, 8:33 PM, Cedric Greevey wrote:

Have you checked for other sources of performance hits? Boxing,
var lookups, and especially reflection.

As I said, I haven't done any optimization yet. :)  I did check
for reflection though and didn't see any.



I'd expect a reasonably optimized Clojure version to outperform a
Python version by a very large factor -- 10x just for being
JITted JVM bytecode instead of interpreted Python, times another
however-many-cores-you-have for core.async keeping all your
processors warm vs. Python and its GIL limiting the Python
version to single-threaded performance.

This task does not benefit from the multiplexing that core.async
provides, at least not in the case of a single simulation which
has no clear logical partition that can be run in parallel.  The
primary benefit that core.async is providing in this case is to
escape from call-back hell.


Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's 
hard to get Clojure to run that slow *without* reflection, unless 
you're hitting one of those cases where parallelizing actually makes 
things worse. Hmm; core.async will be trying to multithread your code, 
even while the nature of the task is limiting it to effectively serial 
performance anyway due to blocking. Perhaps you're getting some of the 
slowdown from context switches that aren't buying you anything for 
what they cost? The GIL-afflicted Python code wouldn't be impacted by 
the cost of context switches, by contrast.




I had expected context switching to incur a cost, but I had never tried 
to measure how large it was.  I just did, and I got a 1.62x speed 
improvement[1], which means the Clojure version is only 1.2x slower than 
the simpy version. :)


Right now the thread pool in core.async is hardcoded.  So for this 
experiment I hacked in a fixed thread pool of size one.  I asked at the 
conj, during the unsession, about making core.async's thread pool 
swappable/parameterized, and the idea was not well received.  For most 
use cases I think the current thread pool is fine, but for this 
particular one it appears it is not...


-Ben

1. Full benchmark... compare to times here: 
https://gist.github.com/bmabey/7714431

WARNING: Final GC required 5.486725933787122 % of runtime
WARNING: Final GC required 12.905903007134539 % of runtime
Evaluation count : 6 in 6 samples of 1 calls.
 Execution time mean : 392.457499 ms
Execution time std-deviation : 8.225849 ms
   Execution time lower quantile : 384.192999 ms ( 2.5%)
   Execution time upper quantile : 401.027249 ms (97.5%)
   Overhead used : 1.847987 ns



Re: core.async and performance

2013-11-29 Thread kandre
I am simulating a network of roads, sources and sinks of materials, and 
trucks hauling between the sinks and sources. There is not much of a 
computational workload; the complexity arises from having hundreds of trucks 
going through their states and queuing at the sources/sinks. So the bulk of 
the simulation consists of putting events on a priority queue. 
Maybe channels are simply not the right tool for that, or maybe I should 
just be happy with being able to simulate 10^5 trucks in a few seconds ;)
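That priority-queue core is small enough to sketch directly. A hedged, self-contained Python version (not kandre's actual code; the delays and state names are invented): each truck is a generator yielding (delay, next-state) pairs, and the whole simulation is one loop over a heap of (time, tiebreak, truck) entries.

```python
import heapq
import itertools

def truck(load_time, haul_time):
    """A truck cycling through its states; yields (delay, new-state)."""
    while True:
        yield load_time, "loaded"
        yield haul_time, "dumped"

def simulate(n_trucks, n_events):
    counter = itertools.count()  # tie-break so the heap never compares generators
    heap = []
    for _ in range(n_trucks):
        heapq.heappush(heap, (0.0, next(counter), truck(1.0, 3.0)))
    now, handled = 0.0, 0
    while handled < n_events:
        now, _, t = heapq.heappop(heap)    # next event in time order
        delay, _state = next(t)            # advance the truck one state
        heapq.heappush(heap, (now + delay, next(counter), t))
        handled += 1
    return now

print(simulate(100, 10**5))  # final simulated time after 10^5 events
```

Even in interpreted Python this handles 10^5 events quickly, which supports the suspicion in the thread that channel/thread overhead, not the event bookkeeping itself, dominates the core.async version.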


On Saturday, 30 November 2013 14:46:24 UTC+10:30, Cedric Greevey wrote:
>
> On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey 
> > wrote:
>
>>  On 11/29/13, 8:33 PM, Cedric Greevey wrote:
>>  
>>  Have you checked for other sources of performance hits? Boxing, var 
>> lookups, and especially reflection.
>>   
>> As I said, I haven't done any optimization yet. :)  I did check for 
>> reflection though and didn't see any.
>>
>>   
>>  I'd expect a reasonably optimized Clojure version to outperform a Python 
>> version by a very large factor -- 10x just for being JITted JVM bytecode 
>> instead of interpreted Python, times another however-many-cores-you-have 
>> for core.async keeping all your processors warm vs. Python and its GIL 
>> limiting the Python version to single-threaded performance.
>>  
>> This task does not benefit from the multiplexing that core.async 
>> provides, at least not in the case of a single simulation which has no 
>> clear logical partition that can be run in parallel.  The primary benefit 
>> that core.async is providing in this case is to escape from call-back hell.
>>
>
> Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's hard 
> to get Clojure to run that slow *without* reflection, unless you're hitting 
> one of those cases where parallelizing actually makes things worse. Hmm; 
> core.async will be trying to multithread your code, even while the nature 
> of the task is limiting it to effectively serial performance anyway due to 
> blocking. Perhaps you're getting some of the slowdown from context switches 
> that aren't buying you anything for what they cost? The GIL-afflicted 
> Python code wouldn't be impacted by the cost of context switches, by 
> contrast.
>  



Re: core.async and performance

2013-11-29 Thread Cedric Greevey
On Fri, Nov 29, 2013 at 11:03 PM, Ben Mabey  wrote:

>  On 11/29/13, 8:33 PM, Cedric Greevey wrote:
>
>  Have you checked for other sources of performance hits? Boxing, var
> lookups, and especially reflection.
>
> As I said, I haven't done any optimization yet. :)  I did check for
> reflection though and didn't see any.
>
>
>  I'd expect a reasonably optimized Clojure version to outperform a Python
> version by a very large factor -- 10x just for being JITted JVM bytecode
> instead of interpreted Python, times another however-many-cores-you-have
> for core.async keeping all your processors warm vs. Python and its GIL
> limiting the Python version to single-threaded performance.
>
> This task does not benefit from the multiplexing that core.async provides,
> at least not in the case of a single simulation which has no clear logical
> partition that can be run in parallel.  The primary benefit that core.async
> is providing in this case is to escape from call-back hell.
>

Hmm. Then you're still looking for a 25-fold slowdown somewhere. It's hard
to get Clojure to run that slow *without* reflection, unless you're hitting
one of those cases where parallelizing actually makes things worse. Hmm;
core.async will be trying to multithread your code, even while the nature
of the task is limiting it to effectively serial performance anyway due to
blocking. Perhaps you're getting some of the slowdown from context switches
that aren't buying you anything for what they cost? The GIL-afflicted
Python code wouldn't be impacted by the cost of context switches, by
contrast.



[ANN] Slamhound 1.5.0 + screencast + Vim plugin

2013-11-29 Thread guns
Hello,

I am happy to announce version 1.5.0 of Slamhound, technomancy's amazing
ns rewriting tool.

;; ~/.lein/profiles.clj
{:user {:dependencies [[slamhound "1.5.0"]]}}

This is a *major* bugfix release. If you've tried Slamhound in the past
and felt frustrated, now is a great time to give it another try.

If you're unfamiliar with Slamhound, I've posted a short screencast
here:

https://vimeo.com/80650659

Many thanks to Phil Hagelberg for allowing me to take the reins for this
release.

Enhancements since the last version include:

- Greatly improved detection and disambiguation of missing ns
  references. Slamhound is now much better at DWIM.

- References in the existing ns form are always preferred over other
  candidates on the classpath.

- Mass-referred namespaces (via :use or :refer :all) are preserved
  as (:require [my.ns :refer :all]). Simply remove it from the ns
  form to get a vector of explicit refers.

- File comment headers, ns metadata maps, docstrings, and :require
  flags (:reload et al), are correctly preserved.

- Multiple options per require libspec are emitted correctly.
  e.g. (:require [clojure.test :as t :refer [deftest]])

- Classes created via defrecord/deftype etc are correctly found.

- Capitalized vars that shadow class names are no longer ignored.

A full changelog is available here:

https://github.com/technomancy/slamhound/blob/master/CHANGES

Finally, for Vim users there is a new plugin for Slamhound integration:

https://github.com/guns/vim-slamhound

It was always easy to use Slamhound from fireplace.vim, but now it's
just a Pathogen infect away.

Cheers,

guns




Re: core.async and performance

2013-11-29 Thread Ben Mabey

On 11/29/13, 8:33 PM, Cedric Greevey wrote:
Have you checked for other sources of performance hits? Boxing, var 
lookups, and especially reflection.
As I said, I haven't done any optimization yet. :)  I did check for 
reflection though and didn't see any.


I'd expect a reasonably optimized Clojure version to outperform a 
Python version by a very large factor -- 10x just for being JITted JVM 
bytecode instead of interpreted Python, times another 
however-many-cores-you-have for core.async keeping all your processors 
warm vs. Python and its GIL limiting the Python version to 
single-threaded performance.
This task does not benefit from the multiplexing that core.async 
provides, at least not in the case of a single simulation which has no 
clear logical partition that can be run in parallel.  The primary 
benefit that core.async is providing in this case is to escape from 
call-back hell.


If your Clojure version is 2.5x *slower* then it's probably capable of 
a *hundredfold* speedup somewhere, which suggests reflection 
(typically a 10x penalty if happening heavily in inner loops) *and* 
another sizable performance degrader* are combining here. Unless, 
again, you're measuring mostly overhead and not real workload on the 
Clojure side, but not on the Python side. Put a significant load into 
each goroutine in both versions and compare them then, see if that 
helps the Clojure version much more than the Python one for some reason.


Yeah, I think a real life simulation may have different results than 
this micro-benchmark.




* The other degrader would need to multiply with, not just add to, the 
reflection, too. That suggests either blocking (reflection making that 
worse by reflection in one thread/go holding up progress systemwide 
for 10x as long as without reflection) or else excess/discarded work 
(10x penalty for reflection, times 10x as many calls as needed to get 
the job done due to transaction retries, poor algo, or something, 
would get you a 100-fold slowdown -- but retries of swap! or dosync 
shouldn't be a factor if you're eschewing those in favor of go blocks 
for coordination...)




On Fri, Nov 29, 2013 at 10:13 PM, Ben Mabey wrote:


On Fri Nov 29 17:04:59 2013, kandre wrote:

Here is the gist: https://gist.github.com/anonymous/7713596
Please note that there's no ordering of time for this simple
example
and there's only one event (timeout). This is not what I
intend to use
but it shows the problem.
Simulating 10^5 steps this way takes ~1.5s

Cheers
Andreas

On Saturday, 30 November 2013 09:31:08 UTC+10:30, kandre wrote:

I think I can provide you with a little code snippet.
I am talking about the very basic car example
(driving->parking->driving). Running the sim using core.async
takes about 1s for 10^5 steps whereas the simpy version
takes less
than 1s for 10^6 iterations on my vm.
Cheers
Andreas

On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben
Mabey wrote:

On Fri Nov 29 14:13:16 2013, kandre wrote:
> Thanks for all the replies. I accidentally left out the
close! when I contrived the example. I am using core.async
for a discrete event simulation system. There are hundreds
of go blocks, all doing little but putting a sequence of
events onto a channel, and one go block taking these events
and advancing the time, similar to simpy.readthedocs.org/



>
> The basic one car example under the previous link executes
about 10 times faster than the same example using
core.async.
>

Hi Andreas,
I've been using core.async for DES as well since I
think the
process-based approach is useful.  I could try doing
the same
simulation you're attempting to see how my approach
compares
speed-wise.  Are you talking about the car wash or the gas
station
simulation?  Posting a gist of what you have will be
helpful
so I can
use the same parameters.

-Ben
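For reference, the "basic one car example" discussed above can be written with a plain Python generator and a toy clock. This is a hedged sketch in simpy's process style (the 5/2 park/drive durations are the ones simpy's tutorial uses), not either of the actual implementations being benchmarked:

```python
def car():
    """The car process: park for 5 time units, drive for 2, repeat."""
    while True:
        yield 5, "parking"
        yield 2, "driving"

def run(process, until):
    """Advance a single process, recording (time, state) transitions."""
    now, trace = 0, []
    for duration, state in process:
        if now >= until:
            break
        trace.append((now, state))
        now += duration
    return trace

trace = run(car(), until=15)
print(trace)
```

The core.async translation replaces each `yield` with a take on a timeout channel inside a go block, which is where the per-step channel overhead discussed in this thread comes in.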





Re: quick macro review

2013-11-29 Thread Guru Devanla
I like the way you use (partial list 'var).


On Thu, Nov 28, 2013 at 5:22 PM, juan.facorro wrote:

> Hi Curtis,
>
> The *apply* is unnecessary if you use *unquote-splice* (*~@*), also
> instead of the *into* and *for* usage you could just *map* over the list
> of symbols.
>
> Here's how I would do it:
>
> (defmacro migrate [& syms]
>   `(migrate* ~@(map (partial list 'var) syms)))
>
> (macroexpand-1 '(migrate a b c))
>
> ;= (user/migrate* (var a) (var b) (var c))
>
>
> Hope it helps,
>
> Juan
>
> On Friday, November 29, 2013 5:26:14 AM UTC+8, Curtis Gagliardi wrote:
>>
>> I wrote a macro last night and got the feeling I did what I did in a
>> suboptimal way.
>>
>> I have a migration function that I stole from technomancy that takes
>> in the vars of migration functions:
>> (migrate #'create-db #'add-users-table #'etc)
>>
>> It uses the name of the var from the metadata to record which migrations
>> have been run.  I wanted to try to make it so you didn't have to explicitly
>> pass in the vars, and just have the migrate function call var for you.
>> I've since decided this is a bad idea but I wrote the macro anyway just for
>> fun.  My first question is: could this be done without a macro?  I didn't
>> see how since if you write it as a function, all you receive are the actual
>> functions and not the vars, but I thought I'd ask to be sure.  Assuming you
>> did have to write a macro, does this implementation seem reasonable?  I
>> felt strange about using (into [] ...).
>>
>> https://www.refheap.com/21335
>>
>> Basically I'm trying to get from (migrate f g h) to (migrate* (var f)
>> (var g) (var h)), I'm not sure I'm doing it right.
>>
>> Thanks,
>> Curtis.
>>



Re: core.async and performance

2013-11-29 Thread Cedric Greevey
Hmm. Another possibility, though remote, is that the Clojure version
manages to trigger pathological worst-case behavior in the JIT and/or
hardware (frequent cache misses, usually-wrong branch prediction, etc.) and
the Python version doesn't (no JIT and maybe the interpreter is simply too
slow to make processor caches and branch prediction count for much, other
than that the interpreter itself would be slower than it already is without
these).


On Fri, Nov 29, 2013 at 10:33 PM, Cedric Greevey  wrote:

> Have you checked for other sources of performance hits? Boxing, var
> lookups, and especially reflection.
>
> I'd expect a reasonably optimized Clojure version to outperform a Python
> version by a very large factor -- 10x just for being JITted JVM bytecode
> instead of interpreted Python, times another however-many-cores-you-have
> for core.async keeping all your processors warm vs. Python and its GIL
> limiting the Python version to single-threaded performance. If your Clojure
> version is 2.5x *slower* then it's probably capable of a *hundredfold*
> speedup somewhere, which suggests reflection (typically a 10x penalty if
> happening heavily in inner loops) *and* another sizable performance
> degrader* are combining here. Unless, again, you're measuring mostly
> overhead and not real workload on the Clojure side, but not on the Python
> side. Put a significant load into each goroutine in both versions and
> compare them then, see if that helps the Clojure version much more than the
> Python one for some reason.
>
> * The other degrader would need to multiply with, not just add to, the
> reflection, too. That suggests either blocking (reflection making that
> worse by reflection in one thread/go holding up progress systemwide for 10x
> as long as without reflection) or else excess/discarded work (10x penalty
> for reflection, times 10x as many calls as needed to get the job done due
> to transaction retries, poor algo, or something, would get you a 100-fold
> slowdown -- but retries of swap! or dosync shouldn't be a factor if you're
> eschewing those in favor of go blocks for coordination...)
>
>
>
> On Fri, Nov 29, 2013 at 10:13 PM, Ben Mabey  wrote:
>
>> On Fri Nov 29 17:04:59 2013, kandre wrote:
>>
>>> Here is the gist: https://gist.github.com/anonymous/7713596
>>> Please note that there's no ordering of time for this simple example
>>> and there's only one event (timeout). This is not what I intend to use
>>> but it shows the problem.
>>> Simulating 10^5 steps this way takes ~1.5s
>>>
>>> Cheers
>>> Andreas
>>>
>>> On Saturday, 30 November 2013 09:31:08 UTC+10:30, kandre wrote:
>>>
>>> I think I can provide you with a little code snippet.
>>> I am talking about the very basic car example
>>> (driving->parking->driving). Running the sim using core.async
>>> takes about 1s for 10^5 steps whereas the simpy version takes less
>>> than 1s for 10^6 iterations on my vm.
>>> Cheers
>>> Andreas
>>>
>>> On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben Mabey wrote:
>>>
>>> On Fri Nov 29 14:13:16 2013, kandre wrote:
>>> > Thanks for all the replies. I accidentally left out the
>>> close! when I contrived the example. I am using core.async for
>>> a discrete event simulation system. There are hundreds of go
>>> blocks, all doing little but putting a sequence of events onto
>>> a channel, and one go block taking these events and advancing
>>> the time, similar to simpy.readthedocs.org/
>>> 
>>>
>>> >
>>> > The basic one car example under the previous link executes
>>> about 10 times faster than the same example using core.async.
>>> >
>>>
>>> Hi Andreas,
>>> I've been using core.async for DES as well since I think the
>>> process-based approach is useful.  I could try doing the same
>>> simulation you're attempting to see how my approach compares
>>> speed-wise.  Are you talking about the car wash or the gas
>>> station
>>> simulation?  Posting a gist of what you have will be helpful
>>> so I can
>>> use the same parameters.
>>>
>>> -Ben
>>>
>>>
>>>
>>>

Re: core.async and performance

2013-11-29 Thread Cedric Greevey
Have you checked for other sources of performance hits? Boxing, var
lookups, and especially reflection.

I'd expect a reasonably optimized Clojure version to outperform a Python
version by a very large factor -- 10x just for being JITted JVM bytecode
instead of interpreted Python, times another however-many-cores-you-have
for core.async keeping all your processors warm vs. Python and its GIL
limiting the Python version to single-threaded performance. If your Clojure
version is 2.5x *slower* then it's probably capable of a *hundredfold*
speedup somewhere, which suggests reflection (typically a 10x penalty if
happening heavily in inner loops) *and* another sizable performance
degrader* are combining here. Unless, again, you're measuring mostly
overhead and not real workload on the Clojure side, but not on the Python
side. Put a significant load into each goroutine in both versions and
compare them then, see if that helps the Clojure version much more than the
Python one for some reason.

* The other degrader would need to multiply with, not just add to, the
reflection, too. That suggests either blocking (reflection making that
worse by reflection in one thread/go holding up progress systemwide for 10x
as long as without reflection) or else excess/discarded work (10x penalty
for reflection, times 10x as many calls as needed to get the job done due
to transaction retries, poor algo, or something, would get you a 100-fold
slowdown -- but retries of swap! or dosync shouldn't be a factor if you're
eschewing those in favor of go blocks for coordination...)
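As a concrete first step for checking the reflection suspect above, Clojure's built-in warning flag is enough (standard facility; the `slow-len`/`fast-len` names are just for illustration):

```clojure
;; Enable at the REPL (or via :global-vars in project.clj):
(set! *warn-on-reflection* true)

;; Untyped interop like this is resolved by reflection at runtime,
;; and now triggers a compile-time warning:
(defn slow-len [x] (.length x))

;; A type hint resolves the call statically -- no warning, and no
;; reflection penalty in inner loops:
(defn fast-len [^String x] (.length x))
```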



On Fri, Nov 29, 2013 at 10:13 PM, Ben Mabey  wrote:

> On Fri Nov 29 17:04:59 2013, kandre wrote:
>
>> Here is the gist: https://gist.github.com/anonymous/7713596
>> Please note that there's no ordering of time for this simple example
>> and there's only one event (timeout). This is not what I intend to use
>> but it shows the problem.
>> Simulating 10^5 steps this way takes ~1.5s
>>
>> Cheers
>> Andreas
>>
>> On Saturday, 30 November 2013 09:31:08 UTC+10:30, kandre wrote:
>>
>> I think I can provide you with a little code snippet.
>> I am talking about the very basic car example
>> (driving->parking->driving). Running the sim using core.async
>> takes about 1s for 10^5 steps whereas the simpy version takes less
>> than 1s for 10^6 iterations on my vm.
>> Cheers
>> Andreas
>>
>> On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben Mabey wrote:
>>
>> On Fri Nov 29 14:13:16 2013, kandre wrote:
>> > Thanks for all the replies. I accidentally left out the
>> close! when I contrived the example. I am using core.async for
>> a discrete event simulation system. There are hundreds of go
>> blocks all doing little but putting a sequence of events onto
>>
>> a channel and one go block taking these events and
>> advancing the time similar to simpy.readthedocs.org/
>> 
>>
>> >
>> > The basic one car example under the previous link executes
>> about 10 times faster than the same example using core.async.
>> >
>>
>> Hi Andreas,
>> I've been using core.async for DES as well since I think the
>> process-based approach is useful.  I could try doing the same
>> simulation you're attempting to see how my approach compares
>> speed-wise.  Are you talking about the car wash or the gas
>> station
>> simulation?  Posting a gist of what you have will be helpful
>> so I can
>> use the same parameters.
>>
>> -Ben
>>
>>
>>
>>
>>
>
> I've verified your results and compared them with an implementation using my
> library.  My version runs 1.25x faster than yours and that is with an
> actual priority queue behind the scheduling for correct simulation/time
> semantics.  However, mine is still 2x slower than the simpy version.  Gist
> with benchmarks:
>
> https://gist.github.com/bmabey/7714431
>
> simpy is a mature library with lots of performance tweaking and I have
> done no optimizations so far.  My library is a thin wrapping around
> core.async with a few hooks into the internals and so I would expect that
> mos

Re: core.async and performance

2013-11-29 Thread Ben Mabey

On Fri Nov 29 17:04:59 2013, kandre wrote:

Here is the gist: https://gist.github.com/anonymous/7713596
Please note that there's no ordering of time for this simple example
and there's only one event (timeout). This is not what I intend to use
but it shows the problem.
Simulating 10^5 steps this way takes ~1.5s

Cheers
Andreas

On Saturday, 30 November 2013 09:31:08 UTC+10:30, kandre wrote:

I think I can provide you with a little code snippet.
I am talking about the very basic car example
(driving->parking->driving). Running the sim using core.async
takes about 1s for 10^5 steps whereas the simpy version takes less
than 1s for 10^6 iterations on my vm.
Cheers
Andreas

On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben Mabey wrote:

On Fri Nov 29 14:13:16 2013, kandre wrote:
> Thanks for all the replies. I accidentally left out the
close! when I contrived the example. I am using core.async for
a discrete event simulation system. There are hundreds of go
blocks all doing little but putting a sequence of events onto

a channel and one go block taking these events and
advancing the time similar to simpy.readthedocs.org/

>
> The basic one car example under the previous link executes
about 10 times faster than the same example using core.async.
>

Hi Andreas,
I've been using core.async for DES as well since I think the
process-based approach is useful.  I could try doing the same
simulation you're attempting to see how my approach compares
speed-wise.  Are you talking about the car wash or the gas
station
simulation?  Posting a gist of what you have will be helpful
so I can
use the same parameters.

-Ben






I've verified your results and compared them with an implementation using 
my library.  My version runs 1.25x faster than yours and that is with 
an actual priority queue behind the scheduling for correct 
simulation/time semantics.  However, mine is still 2x slower than the 
simpy version.  Gist with benchmarks:


https://gist.github.com/bmabey/7714431

simpy is a mature library with lots of performance tweaking and I have 
done no optimizations so far.  My library is a thin wrapping around 
core.async with a few hooks into the internals and so I would expect 
that most of the time is being spent in core.async (again, I have done 
zero profiling to actually verify this).  So, it may be that core.async 
is slower than python generators for this particular use case.  I 
should say that this use case is odd in that our task is a serial one 
and so we don't get any benefit from having a threadpool to multiplex 
across (in fact the context switching may be harmful).


In my case the current slower speeds are vastly outweighed by the 
benefits:

* can run multiple simulations in parallel for sensitivity analysis
* I plan on eventually targeting Clojurescript for visualization 
(right now an event stream from JVM is used)

* ability to leverage CEP libraries for advanced stats
* being integrated into my production systems via channels, which do 
all the real decision making in the sims.
This means I can do sensitivity analysis on different policies 
using actual production code.  A nice side benefit of this is that I 
get a free integration test. :)


Having said all that I am still exploring the use of core.async for DES 
and have not yet replaced my event-based simulator.  I most likely will 
replace at least parts of my simulations that have a lot of nested 
call-backs that make things hard to reason about.
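For readers without the gists handy, the shape of the benchmark under discussion is roughly this (a sketch of the pattern using only core.async's public API; not the code from either gist):

```clojure
(require '[clojure.core.async :as async :refer [go chan >! <!!]])

;; One "car" process puts a stream of events on a shared channel.
(defn car [events n]
  (go (dotimes [t n]
        (>! events [:car t]))
      (async/close! events)))

;; A single consumer plays the clock, draining events serially --
;; so the threadpool's multiplexing buys nothing for this workload.
(defn run-sim [n]
  (let [events (chan)]
    (car events n)
    (loop [clock 0]
      (if-let [[_ t] (<!! events)]
        (recur (max clock t))
        clock))))

;; (run-sim 100000) ;; ~10^5 steps, the size benchmarked above
```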


-Ben


Re: tails function?

2013-11-29 Thread Paul Butcher
On 30 Nov 2013, at 01:33, Mark Engelberg  wrote:

> (take-while seq (iterate rest [1 2 3 4]))

D'oh! Told you I would kick myself. Thanks Mark.

--
paul.butcher->msgCount++

Silverstone, Brands Hatch, Donington Park...
Who says I have a one track mind?

http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
Skype: paulrabutcher



Re: tails function?

2013-11-29 Thread Mark Engelberg
(take-while seq (iterate rest [1 2 3 4]))
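One caveat: this one-liner stops at the last non-empty tail, whereas Scala's `tails` (quoted below) also yields a final empty list. If that parity matters, a sketch:

```clojure
(take-while seq (iterate rest [1 2 3 4]))
;; => ((1 2 3 4) (2 3 4) (3 4) (4))

;; Appending the empty tail for parity with Scala's tails:
(defn tails [coll]
  (concat (take-while seq (iterate rest (seq coll))) '(())))

;; (tails [1 2 3 4]) => ((1 2 3 4) (2 3 4) (3 4) (4) ())
```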


On Fri, Nov 29, 2013 at 5:17 PM, Paul Butcher  wrote:

> I fear that I'm missing something obvious here, so I'm getting ready to
> kick myself. I'm looking for an equivalent of Scala's "tails" method:
>
> scala> List(1, 2, 3, 4).tails.toList
> res0: List[List[Int]] = List(List(1, 2, 3, 4), List(2, 3, 4), List(3, 4),
> List(4), List())
>
>
> But I'm damned if I can find anything in the standard library. Clearly I
> can define it myself:
>
> user=> (defn tails [coll]
>   #_=>   (lazy-seq
>   #_=> (when-let [s (seq coll)]
>   #_=>   (cons s (tails (rest s))))))
> #'user/tails
> user=> (tails [1 2 3 4])
> ((1 2 3 4) (2 3 4) (3 4) (4))
>
>
> But I can't believe that an equivalent isn't already in there somewhere?
> Thanks in advance to anyone who points me in the right direction.
>
> --
> paul.butcher->msgCount++
>
> Silverstone, Brands Hatch, Donington Park...
> Who says I have a one track mind?
>
> http://www.paulbutcher.com/
> LinkedIn: http://www.linkedin.com/in/paulbutcher
> Skype: paulrabutcher
>
>
>
>
>



tails function?

2013-11-29 Thread Paul Butcher
I fear that I'm missing something obvious here, so I'm getting ready to kick 
myself. I'm looking for an equivalent of Scala's "tails" method:

scala> List(1, 2, 3, 4).tails.toList
res0: List[List[Int]] = List(List(1, 2, 3, 4), List(2, 3, 4), List(3, 4), 
List(4), List())

But I'm damned if I can find anything in the standard library. Clearly I can 
define it myself:

user=> (defn tails [coll]
  #_=>   (lazy-seq
  #_=> (when-let [s (seq coll)]
  #_=>   (cons s (tails (rest s))))))
#'user/tails
user=> (tails [1 2 3 4])
((1 2 3 4) (2 3 4) (3 4) (4))

But I can't believe that an equivalent isn't already in there somewhere? Thanks 
in advance to anyone who points me in the right direction.

--
paul.butcher->msgCount++

Silverstone, Brands Hatch, Donington Park...
Who says I have a one track mind?

http://www.paulbutcher.com/
LinkedIn: http://www.linkedin.com/in/paulbutcher
Skype: paulrabutcher






Re: core.async and performance

2013-11-29 Thread kandre
Here is the gist: https://gist.github.com/anonymous/7713596
Please note that there's no ordering of time for this simple example and 
there's only one event (timeout). This is not what I intend to use but it 
shows the problem.
Simulating 10^5 steps this way takes ~1.5s

Cheers
Andreas

On Saturday, 30 November 2013 09:31:08 UTC+10:30, kandre wrote:
>
> I think I can provide you with a little code snippet. 
> I am talking about the very basic car example (driving->parking->driving). 
> Running the sim using core.async takes about 1s for 10^5 steps whereas the 
> simpy version takes less than 1s for 10^6 iterations on my vm.
> Cheers
> Andreas
>
> On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben Mabey wrote:
>>
>> On Fri Nov 29 14:13:16 2013, kandre wrote: 
>> > Thanks for all the replies. I accidentally left out the close! when I 
>> contrived the example. I am using core.async for a discrete event 
>> simulation system. There are hundreds of go blocks all doing little but 
>> putting a sequence of events onto 
>
>  

> a channel and one go block taking these events and advancing the 
>> time similar to simpy.readthedocs.org/ 
>> > 
>> > The basic one car example under the previous link executes about 10 
>> times faster than the same example using core.async. 
>> > 
>>
>> Hi Andreas, 
>> I've been using core.async for DES as well since I think the 
>> process-based approach is useful.  I could try doing the same 
>> simulation you're attempting to see how my approach compares 
>> speed-wise.  Are you talking about the car wash or the gas station 
>> simulation?  Posting a gist of what you have will be helpful so I can 
>> use the same parameters. 
>>
>> -Ben 
>>
>>
>>
>>
>>



Re: Reactive Patterns with Atoms - am I using too much state?

2013-11-29 Thread Brian Marick

On Nov 29, 2013, at 11:58 AM, Sam Ritchie  wrote:

> 2) a defstatefn macro that defines the "mark" and "mark!" versions at the 
> same time.

I do that with agents:

(def-action text :send [state number message])

… creates a `text*` and a `text>!`, where `text>!` is

(defn text>! [number message] (send self text* number message))

A nice bit about this is that `text*` is pure, so easily testable. Since 
`text>!` is constructed, I'm happy not testing it. Since I use Midje 
prerequisites to test that the `>!`-style functions are called when 
appropriate, I get nicely decoupled unit tests that don't have to know they're 
working with agents (except that the `>!` functions always return irrelevant 
values.)

I haven't been using this style for long, but it feels right so far.
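A sketch of what the described `def-action` expansion might produce, reconstructed from the snippet above (`self` and the state shape are assumptions for illustration, not Brian's actual code):

```clojure
;; `self` stands in for the agent that holds the state map.
(def self (agent {:results []}))

;; The pure implementation -- easy to unit test in isolation:
(defn text* [state number message]
  (update state :results conj [number message]))

;; The constructed sender -- thin enough to leave untested:
(defn text>! [number message]
  (send self text* number message))
```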


Latest book: /Functional Programming for the Object-Oriented Programmer/
https://leanpub.com/fp-oo



Re: core.async and performance

2013-11-29 Thread kandre
I think I can provide you with a little code snippet. 
I am talking about the very basic car example (driving->parking->driving). 
Running the sim using core.async takes about 1s for 10^5 steps whereas the 
simpy version takes less than 1s for 10^6 iterations on my vm.
Cheers
Andreas

On Saturday, 30 November 2013 09:22:22 UTC+10:30, Ben Mabey wrote:
>
> On Fri Nov 29 14:13:16 2013, kandre wrote: 
> > Thanks for all the replies. I accidentally left out the close! when I 
> contrived the example. I am using core.async for a discrete event 
> simulation system. There are hundreds of go blocks all doing little but 
> putting a sequence of events onto a channel and one go block taking 
> these events and advancing the time similar to 
> simpy.readthedocs.org/ 
> > 
> > The basic one car example under the previous link executes about 10 
> times faster than the same example using core.async. 
> > 
>
> Hi Andreas, 
> I've been using core.async for DES as well since I think the 
> process-based approach is useful.  I could try doing the same 
> simulation you're attempting to see how my approach compares 
> speed-wise.  Are you talking about the car wash or the gas station 
> simulation?  Posting a gist of what you have will be helpful so I can 
> use the same parameters. 
>
> -Ben 
>
>
>
>
>



Re: core.async and performance

2013-11-29 Thread Ben Mabey

On Fri Nov 29 14:13:16 2013, kandre wrote:

Thanks for all the replies. I accidentally left out the close! when I contrived 
the example. I am using core.async for a discrete event simulation system. 
There are hundreds of go blocks all doing little but putting a sequence of 
events onto a channel and one go block taking these events and 
advancing the time similar to simpy.readthedocs.org/

The basic one car example under the previous link executes about 10 times 
faster than the same example using core.async.



Hi Andreas,
I've been using core.async for DES as well since I think the 
process-based approach is useful.  I could try doing the same 
simulation you're attempting to see how my approach compares 
speed-wise.  Are you talking about the car wash or the gas station 
simulation?  Posting a gist of what you have will be helpful so I can 
use the same parameters.


-Ben






Re: [ANN] Buffy The ByteBuffer Slayer, Clojure library for working with binary data

2013-11-29 Thread Andrey Antukh
Good work! Thanks!


2013/11/29 Alex P 

> Buffy [1] is a Clojure library to work with binary data: write complete
> binary protocol implementations
> in Clojure, store your complex data structures in an off-heap cache, read
> binary files and do
> everything you would usually do with `ByteBuffer`.
>
> Main features & motivation to write it
>
>   * partial deserialization (read and deserialise parts of a byte buffer)
>   * named access (access parts of your buffer by names)
>   * composing/decomposing from key/value pairs
>   * pretty hexdump
>   * many useful default types that you can combine and extend easily
>
> Data types include:
>
>   * primitives, such as `int32`, `boolean`, `byte`, `short`, `medium`,
> `float`, `long`
>   * arbitrary-length `string`
>   * byte arrays
>   * composite types (combine any of primitives together)
>   * repeated type (repeat any primitive arbitrary amount of times in
> payload)
>   * enum type (for mapping between human-readable and binary
> representation of constants)
>
> Buffy has been serving us well for some time now, and no major issues were
> revealed. However, until
> it reaches GA, we can't guarantee 100% backward compatibility, although
> we've thought it through
> very well and used our best knowledge to make it right.
>
> Buffy is a ClojureWerkz project, same as Monger, Elastisch, Cassaforte,
> Neocons, Meltdown and
> many others.
>
> [1] https://github.com/clojurewerkz/buffy
> [2] http://clojurewerkz.org
>
> --
>
> Alex P
>
> http://clojurewerkz.org
>
> http://twitter.com/ifesdjeen
>
>



-- 
Andrey Antukh - Андрей Антух - 
http://www.niwi.be/about.html
http://www.kaleidos.net/A5694F/

"Linux is for people who hate Windows, BSD is for people who love UNIX"
"Social Engineer -> Because there is no patch for human stupidity"



[ANN] Buffy The ByteBuffer Slayer, Clojure library for working with binary data

2013-11-29 Thread Alex P
Buffy [1] is a Clojure library to work with binary data: write complete 
binary protocol implementations
in Clojure, store your complex data structures in an off-heap cache, read 
binary files and do
everything you would usually do with `ByteBuffer`.
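For contrast, the kind of raw `java.nio.ByteBuffer` interop that named access is meant to replace (plain JVM API, nothing Buffy-specific):

```clojure
(import 'java.nio.ByteBuffer)

;; Manual offset bookkeeping -- the pain point named access avoids:
(let [buf (ByteBuffer/allocate 8)]
  (.putInt buf 0 42)   ;; "first field" lives at offset 0
  (.putInt buf 4 7)    ;; "second field" at offset 4
  [(.getInt buf 0) (.getInt buf 4)])
;; => [42 7]
```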

Main features & motivation to write it

  * partial deserialization (read and deserialise parts of a byte buffer)
  * named access (access parts of your buffer by names)
  * composing/decomposing from key/value pairs
  * pretty hexdump
  * many useful default types that you can combine and extend easily

Data types include:

  * primitives, such as `int32`, `boolean`, `byte`, `short`, `medium`, 
`float`, `long`
  * arbitrary-length `string`
  * byte arrays
  * composite types (combine any of primitives together)
  * repeated type (repeat any primitive arbitrary amount of times in 
payload)
  * enum type (for mapping between human-readable and binary representation 
of constants)

Buffy has been serving us well for some time now, and no major issues were 
revealed. However, until 
it reaches GA, we can't guarantee 100% backward compatibility, although 
we've thought it through
very well and used our best knowledge to make it right.

Buffy is a ClojureWerkz project, same as Monger, Elastisch, Cassaforte, 
Neocons, Meltdown and
many others. 

[1] https://github.com/clojurewerkz/buffy
[2] http://clojurewerkz.org

-- 

Alex P

http://clojurewerkz.org

http://twitter.com/ifesdjeen



Re: core.async and performance

2013-11-29 Thread kandre
Thanks for all the replies. I accidentally left out the close! when I contrived 
the example. I am using core.async for a discrete event simulation system. 
There are hundreds of go blocks all doing little but putting a sequence of 
events onto a channel and one go block taking these events and 
advancing the time similar to simpy.readthedocs.org/

The basic one car example under the previous link executes about 10 times 
faster than the same example using core.async.



Re: quick macro review

2013-11-29 Thread Curtis Gagliardi
Beautiful, thanks guys.
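
For posterity, the two approaches from the replies quoted below, condensed side by side (the names `migrate-a`/`migrate-b` are just labels for this sketch):

```clojure
;; Juan's map-based version:
(defmacro migrate-a [& syms]
  `(migrate* ~@(map (partial list 'var) syms)))

;; Alex's for-based version:
(defmacro migrate-b [& migration-syms]
  (let [migration-vars (for [sym migration-syms]
                         `(var ~sym))]
    `(migrate* ~@migration-vars)))

;; Both expand the same way:
;; (macroexpand-1 '(migrate-a a b c))
;; => (user/migrate* (var a) (var b) (var c))
```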

On Thursday, November 28, 2013 10:26:05 PM UTC-8, Alex Baranosky wrote:
>
> This also works, I believe:
>
> (defmacro migrate [& migration-syms]
>   (let [migration-vars (for [sym migration-syms]
>`(var ~sym))]
> `(migrate* ~@migration-vars)))
>
>
> On Thu, Nov 28, 2013 at 5:22 PM, juan.facorro wrote:
>
>> Hi Curtis,
>>
>> The *apply* is unnecessary if you use *unquote-splice* (*~@*); also, 
>> instead of the *into* and *for* usage you could just *map* over the list 
>> of symbols.
>>
>> Here's how I would do it:
>>
>> (defmacro migrate [& syms]
>>   `(migrate* ~@(map (partial list 'var) syms)))
>>
>> (macroexpand-1 '(migrate a b c)) 
>>
>> ;= (user/migrate* (var a) (var b) (var c))
>>
>>
>> Hope it helps,
>>
>> Juan
>>
>> On Friday, November 29, 2013 5:26:14 AM UTC+8, Curtis Gagliardi wrote:
>>>
>>> I wrote a macro last night and got the feeling I did what I did in a 
>>> suboptimal way.
>>>
>>> I have a migration function that I stole from technomancy that 
>>> takes in the vars of migration functions: 
>>> (migrate #'create-db #'add-users-table #'etc)
>>>
>>> It uses the name of the var from the metadata to record which migrations 
>>> have been run.  I wanted to try to make it so you didn't have to explicitly 
>>> pass in the vars, and just have the migrate function call var for you.  
>>> I've since decided this is a bad idea but I wrote the macro anyway just for 
>>> fun.  My first question is: could this be done without a macro?  I didn't 
>>> see how since if you write it as a function, all you recieve are the actual 
>>> functions and not the vars, but I thought I'd ask to be sure.  Assuming you 
>>> did have to write a macro, does this implementation seem reasonable?  I 
>>> felt strange about using (into [] ...).  
>>>
>>> https://www.refheap.com/21335
>>>
>>> Basically I'm trying to get from (migrate f g h) to (migrate* (var f) 
>>> (var g) (var h)), I'm not sure I'm doing it right.
>>>
>>> Thanks,
>>> Curtis.
>>>
>>
>
>



Reactive Patterns with Atoms - am I using too much state?

2013-11-29 Thread Sam Ritchie

Hey guys,

As I start to work on more Clojurescript UI code, I've been running into 
a pattern with atoms and watches that I THINK I can abstract away... but 
the solution feels wrong, and I'd love to hear a better way.


Say I'm building a stopwatch for timing runners in a race. I've got a 
clojure record like this:


(defrecord StopwatchState [
  active? ;; timer running?
  timestamp ;; UTC time of this particular state
  stopwatch-time ;; the time on the stopwatch as of "timestamp"
  results ;; pairs of [athlete number, time]])

and a bunch of functions that toggle the timer, mark an athlete crossing 
the line, update the timer, etc.


My view holds the current state in an atom:

(def state (atom ,,some-stopwatch-state))

and I update my views by adding watches to the atom and refreshing the 
various UI components to match the new state.


Now! Here's the annoying pattern. In the cljs UI world, to poke these 
atoms, I end up wrapping all of my model functions like this:


(defn toggle! []
  (swap! state toggle))
(defn mark! [athlete-number]
  (swap! state mark athlete-number))

I can think of two ways to break the pattern.

1) A deep-code-walking macro that transforms code like (mark state 
athlete-number) into (swap! state mark athlete-number) if the first 
argument is an atom;
2) a defstatefn macro that defines the "mark" and "mark!" versions at 
the same time.


Both feel wrong. My goal is to program in a more declarative way. Is 
there a better way to structure UI code?
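For concreteness, option 2 can be sketched in a few lines. This is a hypothetical `defstatefn`; the macro name, `toggle`, `mark`, and the atom shape are all illustrative, not from the original post:

```clojure
;; Hypothetical sketch of option 2: one macro that defines both the pure
;; state function and a bang version that swap!s a given atom.
(defmacro defstatefn
  "Defines `fname` as a pure function of [state & args], plus `fname!`
  which applies it to the value inside the atom `state-ref` via swap!."
  [fname state-ref args & body]
  `(do
     (defn ~fname ~args ~@body)
     (defn ~(symbol (str fname "!")) ~(vec (rest args))
       (swap! ~state-ref (fn [s#] (~fname s# ~@(rest args)))))))

;; Illustrative usage:
(def state (atom {:active? false :results []}))

(defstatefn toggle state [s]
  (update s :active? not))

(defstatefn mark state [s athlete-number]
  (update s :results conj athlete-number))

(toggle!)   ;; @state now has :active? true
(mark! 42)  ;; :results now [42]
```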


--
Sam Ritchie (@sritchie)
Paddleguru Co-Founder
703.863.8561
www.paddleguru.com 
Twitter // Facebook 





Re: core.async and performance

2013-11-29 Thread Timothy Baldridge
This is all good advice. Also notice that these examples don't really match
real life use cases of core.async. Here you only have two threads, where
the execution time is dominated by message passing. In most situations
you'll have dozens (or hundreds) of gos, with actual work being done in
each logical thread. In these cases, I highly doubt the performance of
core.async will be the bottleneck.
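A minimal sketch of that shape, with channel overhead amortized over real work per message (the worker count, job count, and the "work" function are made up for illustration):

```clojure
(require '[clojure.core.async :as a :refer [chan go <! >! <!!]])

;; Many go blocks each doing actual work; message passing is then a
;; small fraction of total time. Numbers here are arbitrary.
(def results
  (let [in  (chan 100)
        out (chan 100)]
    ;; 8 workers pull jobs from `in`, do some CPU work, push to `out`.
    (dotimes [_ 8]
      (go (loop []
            (when-some [x (<! in)]
              (>! out (reduce + (range x)))   ; the "actual work"
              (recur)))))
    ;; Producer: 100 jobs, then close so the workers can finish.
    (go (dotimes [i 100] (>! in i))
        (a/close! in))
    ;; Collect exactly 100 results (order is nondeterministic).
    (<!! (a/into [] (a/take 100 out)))))
```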

But give it a try on a real project and tell us how it goes.

Timothy


On Fri, Nov 29, 2013 at 10:04 AM, Thomas Heller  wrote:

> Ah forgot that the core.async folks mentioned that if you want performance
> its best to use real threads.
>
> (time
>  (let [c (chan 100)]
>    (thread
>      (dotimes [i 10]
>        (>!! c i))
>      (close! c))
>    (prn (loop [i nil]
>           (if-let [x (<!! c)]
>             (recur x)
>             i)))))
>
>
> finishes in about 100ms which seems reasonable, just can't have too many
> of those threads open.
>
>
> On Fri, Nov 29, 2013 at 5:40 PM, Sean Corfield wrote:
>
>> On Fri, Nov 29, 2013 at 1:09 AM, Thomas Heller 
>> wrote:
>> > I'm actually surprised you get to "stop" at all. You send a couple items
>> > onto the channel but don't close it, therefore the 2nd go block will
>> > potentially block forever waiting for more.
>>
>> Indeed. When I tried Andreas' code in the REPL, it didn't terminate.
>>
>> > I'm far from an expert in core.async but I think the solution would be
>> to
>> > close! the channel and as a suggestion: (go) blocks themselves return
>> > channels that "block" until they are completed. You could write this as:
>>
>> Your code completes in around 330ms for me.
>> --
>> Sean A Corfield -- (904) 302-SEAN
>> An Architect's View -- http://corfield.org/
>> World Singles, LLC. -- http://worldsingles.com/
>>
>> "Perfection is the enemy of the good."
>> -- Gustave Flaubert, French realist novelist (1821-1880)
>>



-- 
“One of the main causes of the fall of the Roman Empire was that–lacking
zero–they had no way to indicate successful termination of their C
programs.”
(Robert Firth)



Re: core.async and performance

2013-11-29 Thread Thomas Heller
Ah, forgot that the core.async folks mentioned that if you want performance
it's best to use real threads.

(time
 (let [c (chan 100)]
   (thread
     (dotimes [i 10]
       (>!! c i))
     (close! c))
   (prn (loop [i nil]
          (if-let [x (<!! c)]
            (recur x)
            i)))))

finishes in about 100ms which seems reasonable, just can't have too many
of those threads open.

On Fri, Nov 29, 2013 at 5:40 PM, Sean Corfield wrote:

> On Fri, Nov 29, 2013 at 1:09 AM, Thomas Heller 
> wrote:
> > I'm actually surprised you get to "stop" at all. You send a couple items
> > onto the channel but don't close it, therefore the 2nd go block will
> > potentially block forever waiting for more.
>
> Indeed. When I tried Andreas' code in the REPL, it didn't terminate.
>
> > I'm far from an expert in core.async but I think the solution would be to
> > close! the channel and as a suggestion: (go) blocks themselves return
> > channels that "block" until they are completed. You could write this as:
>
> Your code completes in around 330ms for me.
> --
> Sean A Corfield -- (904) 302-SEAN
> An Architect's View -- http://corfield.org/
> World Singles, LLC. -- http://worldsingles.com/
>
> "Perfection is the enemy of the good."
> -- Gustave Flaubert, French realist novelist (1821-1880)
>



Re: [ANN]: clj.jdbc 0.1-beta1 - Alternative implementation of jdbc wrapper for clojure.

2013-11-29 Thread Sean Corfield
On Fri, Nov 29, 2013 at 3:38 AM, Lars Rune Nøstdal
 wrote:
> Looks interesting. It even has the issue tracker on github still enabled.

The reason contrib libraries (and Clojure itself) have the issue
tracker disabled on GitHub is that they use JIRA for tracking
issues, and pull requests are not accepted for contrib libraries (or
Clojure itself); instead you need to submit a patch to a ticket in
JIRA (and you need a signed Contributor's Agreement on file). Please
don't revisit that discussion tho' - search the archives for several
of the (often heated) discussions about patches vs pull requests, and
just accept that's the way things are done for Clojure and its contrib
libraries...
-- 
Sean A Corfield -- (904) 302-SEAN
An Architect's View -- http://corfield.org/
World Singles, LLC. -- http://worldsingles.com/

"Perfection is the enemy of the good."
-- Gustave Flaubert, French realist novelist (1821-1880)



Re: core.async and performance

2013-11-29 Thread Sean Corfield
On Fri, Nov 29, 2013 at 1:09 AM, Thomas Heller  wrote:
> I'm actually surprised you get to "stop" at all. You send a couple items
> onto the channel but don't close it, therefore the 2nd go block will
> potentially block forever waiting for more.

Indeed. When I tried Andreas' code in the REPL, it didn't terminate.

> I'm far from an expert in core.async but I think the solution would be to
> close! the channel and as a suggestion: (go) blocks themselves return
> channels that "block" until they are completed. You could write this as:

Your code completes in around 330ms for me.
-- 
Sean A Corfield -- (904) 302-SEAN
An Architect's View -- http://corfield.org/
World Singles, LLC. -- http://worldsingles.com/

"Perfection is the enemy of the good."
-- Gustave Flaubert, French realist novelist (1821-1880)



Re: [ANN] Clara 0.3.0: Rete in ClojureScript

2013-11-29 Thread Mimmo Cosenza
Cool! It reminds me of when I was developing expert systems 30 years ago… the
eternal recurrence ;-)
Sooner or later more devs will appreciate this wonderful unification language…

mimmo

On Nov 29, 2013, at 4:49 PM, Ryan Brush  wrote:

> Clara 0.3.0, a forward-chaining rules engine in pure Clojure, has been 
> released.  The headliner is ClojureScript support, although a handful of 
> fixes and optimizations were included as well. 
> 
> Some discussion of the ClojureScript port is in that group:
> https://groups.google.com/forum/#!topic/clojurescript/MMwjpcFUPqE
> 
> The github project:
> https://github.com/rbrush/clara-rules
> 
> Example usage from ClojureScript:
> https://github.com/rbrush/clara-examples/blob/master/src/main/clojurescript/clara/examples/shopping.cljs
> 
> Feel free to ping me (@ryanbrush) with any questions or suggestions, or log 
> issues via github.
> 
> -Ryan
> 



Re: [ANN] Clara 0.3.0: Rete in ClojureScript

2013-11-29 Thread Ambrose Bonnaire-Sergeant
Congrats!

Ambrose


On Fri, Nov 29, 2013 at 11:49 PM, Ryan Brush  wrote:

> Clara 0.3.0, a forward-chaining rules engine in pure Clojure, has been
> released.  The headliner is ClojureScript support, although a handful of
> fixes and optimizations were included as well.
>
> Some discussion of the ClojureScript port is in that group:
> https://groups.google.com/forum/#!topic/clojurescript/MMwjpcFUPqE
>
> The github project:
> https://github.com/rbrush/clara-rules
>
> Example usage from ClojureScript:
>
> https://github.com/rbrush/clara-examples/blob/master/src/main/clojurescript/clara/examples/shopping.cljs
>
> Feel free to ping me (@ryanbrush) with any questions or suggestions, or
> log issues via github.
>
> -Ryan
>



[ANN] Clara 0.3.0: Rete in ClojureScript

2013-11-29 Thread Ryan Brush
Clara 0.3.0, a forward-chaining rules engine in pure Clojure, has been 
released.  The headliner is ClojureScript support, although a handful of 
fixes and optimizations were included as well. 

Some discussion of the ClojureScript port is in that group:
https://groups.google.com/forum/#!topic/clojurescript/MMwjpcFUPqE

The github project:
https://github.com/rbrush/clara-rules

Example usage from ClojureScript:
https://github.com/rbrush/clara-examples/blob/master/src/main/clojurescript/clara/examples/shopping.cljs

Feel free to ping me (@ryanbrush) with any questions or suggestions, or log 
issues via github.

-Ryan



Re: [ANN]: clj.jdbc 0.1-beta1 - Alternative implementation of jdbc wrapper for clojure.

2013-11-29 Thread Lars Rune Nøstdal
Looks interesting. It even has the issue tracker on github still enabled.



Re: core.async and performance

2013-11-29 Thread Thomas Heller
Hey,

I'm actually surprised you get to "stop" at all. You send a couple items 
onto the channel but don't close it, therefore the 2nd go block will 
potentially block forever waiting for more.

I'm far from an expert in core.async but I think the solution would be to 
close! the channel and as a suggestion: (go) blocks themselves return 
channels that "block" until they are completed. You could write this as:

(time
 (let [c (chan 100)]
   (go
     (dotimes [i 10]
       (>! c i))
     (close! c))
   ;; go and wait for its result
   (<!! (go (while (<! c))))))

On Friday, November 29, 2013, kandre wrote:
> Hi there,
> I've started playing with core.async but I am not sure if I'm using it the 
> way it was intended to.
> Running a simple benchmark  with two go-blocks (one writing an event to a 
> channel, the other one reading it out) seems quite slow:
>
> (time (let [c (chan 100) stop (chan)]
>         (go
>           (dotimes [i 10]
>             (>! c i)))
>         (go
>           (while (<! c))
>           (>! stop true))
>         (<!! stop)))
> "Elapsed time: 1226.072003ms"
>
> I presume the way I am using core.async is fundamentally flawed so I'd be 
> grateful if someone would point it out to me.
>
> Cheers
> Andreas
>



Re: [ANN] projars.com

2013-11-29 Thread Stanislav Yurin
Hi Joshua,
Your answer is very much appreciated.

My hypothesis right now is that highly successful open source projects,
profitable or not, can usually take care of themselves, and hardly any
established software company needs a broker. But the examples on everyone's
lips are a very small part of the real ecosystem. Something like 0.1%, and
even that could be optimistic.

What particularly interests me is the other 99.9%, first of all because I
myself have belonged to that part for a long time and have seen hundreds of
active and abandoned projects of varying quality, completeness, and success.
Generally speaking, one does not need an incredibly large user base to make
a living, nor to be a GitHub, blogging, and tutorial blockbuster.
Take a walk down any street and count the small businesses around the
corner. Probably half a dozen.
You have probably never seen or heard of them before, but they are still
there, as is another pack around the next corner.

Currently I am looking at services such as CodeCanyon and Binpress as
examples of what could be done, or, more precisely, as evidence that
something can be done.

Which license types can work for Clojure and similar communities is a good
subject for experimentation. I am no prophet, and, as you can see, the best
I can do right now is ask questions and make assumptions.

Thanks again for your attention.
Stanislav.

On Thursday, November 28, 2013 11:40:57 PM UTC+2, Joshua Ballanco wrote:
>
> On Thursday, November 28, 2013 at 12:10, Stanislav Yurin wrote: 
> > Hello, Clojure community. 
> >   
> > I have been following the Clojure path for nearly two years now, and 
> have really great pleasure 
> > using it in my personal and job projects, watching the community 
> delivering a lot of great things, 
> > most of that I have yet to taste. 
> >   
> > For some time I was incubating an idea of introducing the infrastructure 
> which may help regular developers like 
> > myself and businesses make some income from what we are creating on 
> daily basis, and improve the   
> > creations further. 
> >   
> > In short, on top of every open greatness, it is good to have options. 
> >   
> > The last thing I am willing to do is to build something no one needs, so 
> I have decided to evaluate an idea.   
> > The idea is simple: introducing the commercial option to the great 
> ecosystem we already have. 
> > Proposed http://projars.com concept is similar to well-organised 
> clojars/leiningen/maven content delivery system but with 
> > commercial products in mind. 
> >   
> > I have put the small introduction on the site, please feel free to 
> subscribe on site if you are interested, discuss, throw the stones   
> > in my direction etc. 
> >   
> > Again, the link is http://projars.com 
> >   
> > Any feedback will help a lot. 
>
> Hi Stanislav, 
>
> It’s an interesting idea to be sure. I think that, as open source and 
> software in general “eat the world”, there will definitely be room for 
> interesting new ways for people to be able to contribute to the community 
> while still putting a roof over their heads and food on their tables. 
> Soliciting donations/tips is one model. Crowd funding is another. However, 
> in both cases I think there is an outlier effect at play where a few people 
> will do very well, but most will never reach sustainability. On the other 
> hand, there are some models that I’ve seen work very well for different 
> people: 
>
> * Premium features: a project where a large chunk of the functionality is 
> available as open source, but some critical piece (usually related to 
> scale) is only available to paying customers. Successful projects I’ve seen 
> work this model include Phusion Passenger, Riak, Sidekiq, and Datomic. The 
> quite obvious difficulty with this model is that you need to have a 
> pre-existing product, probably a fairly sizable one, before people are 
> willing to pay for premium features. 
>
> * Feature bounties: an open source project where financial backers may pay 
> some sum to have their pet features prioritized over others. LuaJIT, 
> famously, has been completely financed via this model. The difficulty with 
> this model is that you probably need to have a fairly well established 
> reputation and project before just anyone is willing to pay you for a 
> feature (also known as: we can’t all be Mike Pall). 
>
> * Commercial dual licensing: if you release an open source project under 
> the GPL, many commercial organizations won’t use it. However, as the author 
> of an open source project, you are free to sell these commercial 
> organizations a copy of the software under different licensing terms. This 
> way the open source community can benefit, and the corporate lawyers can be 
> kept happy at the same time. This is probably best recognized as MySQL’s 
> model, but I know of others (including Glencoe Software, my current 
> employer) who ha

core.async and performance

2013-11-29 Thread kandre
Hi there,
I've started playing with core.async but I am not sure if I'm using it the 
way it was intended.
Running a simple benchmark with two go-blocks (one writing events to a 
channel, the other reading them out) seems quite slow:

(time (let [c (chan 100) stop (chan)]
        (go
          (dotimes [i 10]
            (>! c i)))
        (go
          (while (<! c))
          (>! stop true))
        (<!! stop)))
"Elapsed time: 1226.072003ms"

I presume the way I am using core.async is fundamentally flawed so I'd be
grateful if someone would point it out to me.

Cheers
Andreas


Re: [ANN] projars.com

2013-11-29 Thread John Wiseman
On Thu, Nov 28, 2013 at 2:53 AM, Bastien  wrote:

>
> I'm working on a website where people will be able to ask donations
> more easily for their FLOSS achievements and future projects, I'd love
> to see both directions (more commercial options and more crowdfunded
> FLOSS libraries) encouraged at the same time.
>

On this topic, I recently ran across https://www.suprmasv.com/
