Not quite. I'm more interested in systems for collecting metrics (reporting 
server or application health, for example) than in storage with random reads 
and writes. It is possible to use a database for this, but it would be 
pretty inefficient. For example: 

1. If we take a traditional RDBMS like MySQL or PostgreSQL, each insert 
results in a disk write (slow!) plus index updates and other maintenance 
work. These systems simply cannot handle the load and quickly become a 
bottleneck. 
2. Some NoSQL databases like Couchbase or Aerospike offer pretty good write 
performance, but since they are key-value stores, scanning them for results 
can be terribly slow. 
3. Persistent queues like Apache Kafka provide very fast sequential writes 
and high-performance reads. But given its awkward API and constant breaking 
changes even in minor versions, I don't even want to try connecting to it 
from Julia. Well, not for the purpose of this little project, at least. 
4. In-memory queues with many producers and a single consumer / collector 
come pretty close. Here we arrive at the fast RabbitMQ and the even faster 
ZMQ. Still, they require some effort to build metric collection on top of 
them. 
5. There is also StatsD, which is close in spirit but hard to integrate 
with Julia. It also drags in too many unrelated dependencies (e.g. 
Node.js), especially compared to ZMQ, which embeds into the process. 
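To make the shape in item 4 concrete: the "many producers, single collector" architecture can be sketched with nothing but a thread-safe in-memory queue. The sketch below is in Python's stdlib purely for illustration (all names are mine, not from any library discussed above); with ZMQ, a PUSH socket per producer and one PULL socket on the collector play the same roles across processes or machines.

```python
import queue
import threading

metrics = queue.Queue()            # producers push samples, collector pulls
N_PRODUCERS, N_SAMPLES = 4, 100    # illustrative sizes

def producer(worker_id):
    # each producer emits (source, metric_name, value) samples
    for i in range(N_SAMPLES):
        metrics.put((worker_id, "cpu_load", i * 0.01))

def collector(expected):
    # single consumer: aggregation needs no locking of shared state,
    # because only this thread ever touches `totals`
    totals = {}
    for _ in range(expected):
        worker_id, name, value = metrics.get()
        key = (worker_id, name)
        totals[key] = totals.get(key, 0.0) + value
    return totals

threads = [threading.Thread(target=producer, args=(w,))
           for w in range(N_PRODUCERS)]
for t in threads:
    t.start()
totals = collector(N_PRODUCERS * N_SAMPLES)
for t in threads:
    t.join()
print(len(totals))  # one running aggregate per (worker, metric) pair
```

The single-consumer side is the whole point: all aggregation happens in one thread, so there is no contention, and the queue (or ZMQ socket) absorbs bursts from the producers.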


I don't expect a ready-to-use library or even a well-defined approach, but 
maybe someone in the Julia community has already run into the problem of 
metric collection? 


On Tuesday, May 19, 2015 at 5:37:52 PM UTC+3, Steven G. Johnson wrote:
>
> You want fast concurrent read/write access to a persistent store by large 
> numbers of users.   Isn't this what databases are designed for?
>
