Seems the docs' example with auth creds is invalid... or I'm still doing something wrong...
curl -XPOST http://localhost:8080/riak-cs/user \
-H 'Content-Type: application/json' \
-d '{"email":"ad...@admin.com", "name":"admin"}'
returns
curl -H 'Content-Type: application/json' -X POST \
http://localhost:8080/riak-cs
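A side-effect-free sketch of the same user-creation request, with placeholder email/name values (whether it succeeds depends on the Riak CS auth setup, e.g. whether anonymous user creation is enabled; otherwise the request must be signed with the admin credentials):

```shell
# Placeholder payload; real email/name values are deployment-specific.
PAYLOAD='{"email":"newuser@example.com","name":"newuser"}'
echo "$PAYLOAD"
# The request itself, commented out so the snippet has no side effects:
# curl -X POST http://localhost:8080/riak-cs/user \
#   -H 'Content-Type: application/json' \
#   -d "$PAYLOAD"
```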
Thank you Uenishi-san,
I did not fully understand what "system resources" meant.
If "system resources" are those you mentioned (OS-level resources), it's
not a problem.
Thank you for the detailed explanation.
--
View this message in context:
http://riak-users.197444.n3.nabble.com/Can-I-share-Riak-KV
On Tue, Sep 15, 2015 at 12:47 PM, Kota Uenishi wrote:
> The message says your Riak is not properly configured. Please read [1]
> and set default bucket properties as buckets.default.allow_mult=true.
>
> [1]
> http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/#Allowing-for-Sibling-Creation
The message says your Riak is not properly configured. Please read [1]
and set default bucket properties as buckets.default.allow_mult=true.
[1]
http://docs.basho.com/riakcs/latest/cookbooks/configuration/Configuring-Riak/#Allowing-for-Sibling-Creation.
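For reference, the setting Kota describes, written as a riak.conf fragment (Riak 2.x cuttlefish syntax; older app.config-based setups set {allow_mult, true} in the default_bucket_props list instead):

```
## riak.conf
buckets.default.allow_mult = true
```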
On Tue, Sep 15, 2015 at 10:48 AM, Outback
Let's align our definitions of system resources and namespaces. I
don't think anything is a problem here.
By namespaces, I meant you shouldn't overwrite Riak S2 data with an
application of yours that writes directly into the underlying KV.
This is not Linux kernel namespaces or anything else. Jus
riak-cs chkconfig
config is OK
-config /usr/local/riak-cs/generated.configs/app.2015.09.15.01.41.39.config
-args_file /usr/local/riak-cs/generated.configs/vm.2015.09.15.01.41.39.args
-vm_args /usr/local/riak-cs/generated.configs/vm.2015.09.15.01.41.39.args
root@vmbsd:/usr/local/etc/riak # riak-cs s
The max_concurrency error on handoff occurs because, by default, Riak
allows only 2 handoffs at a time, and additional handoff requests are
rejected. You can change this setting to increase the number of
simultaneous transfers, at the expense of some cluster performance (as
handoffs consume extra resources).
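For reference, the knob in question, sketched as a riak.conf fragment; it can also be raised at runtime with riak-admin transfer-limit (runtime changes are not persisted across restarts):

```
## riak.conf — default is 2
transfer_limit = 4
```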
I added a 6th node to a 5-node cluster, hoping to rebalance the cluster,
since I was approaching maximum disk usage on the original 5 nodes. It
looks like the rebalancing is not taking place, and I see a whole bunch
of these in the console logs:
688728495783936 was terminated for reason: {shutdown,ma
Yes, there are plenty of errors there, like
Committed before 500 {msg=GC overhead limit
exceeded,trace=java.lang.OutOfMemoryError: GC overhead limit exceeded
null:org.eclipse.jetty.io.EofException
and so on; this is the reason I tried to restart the node.
My concerns are:
* search on this node come to u
Check the solr logs to see why it failed to shut down. If necessary, find the
pid bound to port 8985 and kill it.
-Fred
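Fred's port-based lookup can be sketched as below (assuming lsof is available; 8985 is the default Solr port used by Yokozuna). The kill command is built as a string and only echoed, so the snippet itself kills nothing:

```shell
PORT=8985   # default Yokozuna/Solr port
# lsof -t prints bare PIDs; -i tcp:$PORT matches sockets on that port.
CMD="kill \$(lsof -ti tcp:${PORT})"
echo "$CMD"
```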
> On Sep 14, 2015, at 5:28 AM, Alexander Popov wrote:
>
> i'm doing riak restart
> and got 'ok' in answer, but node appears in shutdown state.
>
> in proccess list i found
I'm doing riak restart
and got 'ok' as the answer, but the node appears in a shutdown state.
In the process list I found that Solr is still running, and in the logs found:
2015-09-14 09:21:03.939 [info] <0.579.0>@yz_solr_proc:init:96 Starting
solr: "/usr/bin/java"
["-Djava.awt.headless=true","-Djetty.home=/usr/
On 14 Sep 2015, at 05:26, mtakahashi-ivi wrote:
> Thanks all,
>
> The problem is not only namespaces but also competing system resources,
> right?
> Does it mean competing system resources occur even if I separate
> namespaces?
Yes.