I did find this:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html
"Your performance can also be impacted if your application isn’t sending
enough I/O requests. This can be monitored by looking at your volume’s
queue length and I/O size. The queue length is the number of
Konstantinov, yes, perhaps, though I can't think where. This is a small
app, about 950 lines of code. There are a limited number of places where I
could make such a mistake. I do aggregate a lot of data into an atom, and I'm
sure there is a lot of contention around the atom, but the Socket timeout
Holding onto the head of a lazy sequence somewhere, perhaps?
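
A minimal sketch (not from the original app; all names here are hypothetical) of why heavy aggregation into a single atom causes contention: swap! re-runs the whole update function whenever another thread commits first, so hot atoms serialize writers.

(def stats (atom {}))

(defn record! [doc-id result]
  ;; Under many threads, each swap! may execute several times before it wins,
  ;; so the update function must be pure and cheap.
  (swap! stats update doc-id (fnil conj []) result))

;; One way to reduce contention is to batch per worker and merge once per
;; batch instead of once per document:
(defn merge-batch! [batch]
  (swap! stats (fn [m] (merge-with into m batch))))

Note that atom contention burns CPU on retried swap! calls, but it would not by itself produce a socket timeout against Elasticsearch.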
On Thursday, 5 October 2017 19:32:11 UTC+3, lawrence...@gmail.com wrote:
>
> One last thing, I should mention, if I go through and add " LIMIT 100 " to
> all the SQL queries, everything works great. That is, when dealing with a
> few hundred documents, the
Obviously I'm brain-dead, since I forgot to retry the write on failure. So
I fixed this now:

(defn advance
  [message db]
  {:pre [(= (type message) durable_queue.Task)]}
  (let [
        ;; 2017-10-05 -- if this is successful, then the return will look like this:
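
Since the snippet above is truncated, here is a hedged sketch of the retry-on-failure idea being described; the names (write-with-retry, write-fn) and the backoff policy are assumptions, not the poster's actual code:

(defn write-with-retry
  "Calls (write-fn doc), retrying up to max-attempts times on exception."
  [write-fn doc max-attempts]
  (loop [attempt 1]
    (let [result (try
                   {:ok (write-fn doc)}
                   (catch Exception e
                     {:error e}))]
      (cond
        (contains? result :ok) (:ok result)
        (< attempt max-attempts) (do
                                   ;; simple linear backoff between attempts
                                   (Thread/sleep (* 1000 attempt))
                                   (recur (inc attempt)))
        :else (throw (:error result))))))

With durable-queue specifically, the usual pattern is to call retry! on the task when the write fails and complete! only after it succeeds, so the queue itself redelivers the work.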
One last thing I should mention: if I go through and add " LIMIT 100 " to
all the SQL queries, everything works great. That is, when dealing with a
few hundred documents, the app seems to work perfectly, and there are no
errors. It's only when I try to work with a few million documents that
This problem has become much stranger. Really, I thought I was developing
some intuitions about how to write Clojure code, but everything about this
seems counter-intuitive.
When the app starts, it seems to be broken, doing one write per second to
ElasticSearch (on AWS). Then I get an error,
This is probably a stupid question, but is there an obvious way to get an
error message out of Elastisch? I had an app that was working with MongoDB,
and I was then told I had to use ElasticSearch instead (something about
only using AWS for everything), so now I'm trying to get an error message,