Hi Burkhard,
Thanks a lot!
On 2017.05.17. 13:22, Burkhard Linke wrote:
Hi,
On 05/16/2017 04:25 PM, Mārtiņš Jakubovičs wrote:
Hello all,
Hello all,
Just entered the object storage world and set up a working cluster for
RadosGW with authentication via OpenStack Keystone. The Swift API works
great, but how do I test the S3 API? I mean, I found a way to test with Python
boto, but it looks like I am missing the aws_access_key_id; how do I get it? Or
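For what it's worth, one common way to get an S3 key pair is to create a RadosGW user locally: `radosgw-admin user create` prints a JSON document that includes the generated credentials (the uid and display name below are illustrative, not from the poster's setup; with pure Keystone auth the keys would instead come from Keystone EC2 credentials):

```shell
# Create a RadosGW user; the command prints JSON containing the
# generated S3 key pair
radosgw-admin user create --uid=s3test --display-name="S3 test user"

# The "keys" array in the printed JSON holds the credentials, e.g.:
#   "keys": [{ "user": "s3test",
#              "access_key": "...",
#              "secret_key": "..." }]
```

Those two values can then be passed to boto as aws_access_key_id and aws_secret_access_key, pointing the connection's host at the RadosGW endpoint.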
Hello Cephers,
I'm planning the hardware environment; I want to use Ceph for VMs which will
be managed by OpenStack. So far, in my virtualized DEV cloud, everything looks
great, and OpenStack works well with KVM + Ceph, but I want to increase
performance with SSDs. A full-SSD Ceph cluster will be too
Hello,
How can I check a Ceph client session on the client side? For example, when
you mount iSCSI or NFS you can check it (NFS: just mount; iSCSI: iscsiadm -m
session), but how can I do that with Ceph? And is there more detailed
documentation about OpenStack and Ceph than
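A few client-side checks do exist, depending on how Ceph is being consumed; a sketch, assuming a kernel RBD or CephFS client and admin access (the mon id below is illustrative):

```shell
# Kernel RBD client: list which RBD images are currently mapped
# on this host
rbd showmapped

# Kernel CephFS client: the mount shows up like any other filesystem
mount | grep ceph

# On a monitor host with access to the admin socket: list the client
# sessions that monitor currently has open
ceph daemon mon.$(hostname -s) sessions
```

For librbd clients (e.g. QEMU VMs), there is no mount to inspect on the client; the monitor-side session list is the closest equivalent.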
Hello,
I follow this guide
http://ceph.com/docs/master/start/quick-ceph-deploy/#create-a-cluster
and stuck in item 4.
Add the initial monitor(s) and gather the keys (new in ceph-deploy v1.1.3).
ceph-deploy mon create-initial
For example:
ceph-deploy mon create-initial
If I
Hello,
Thanks for such a fast response.
The warning still persists:
http://pastebin.com/QnciHG6v
I didn't mention it, but the admin and monitoring nodes are Ubuntu 14.04
x64, with ceph-deploy 1.4 and ceph 0.79.
On 2014.05.22. 12:50, Wido den Hollander wrote:
On 05/22/2014 11:46 AM, Mārtiņš Jakubovičs
Thanks,
I will try upgrading to 0.80.
On 2014.05.22. 13:00, Wido den Hollander wrote:
On 05/22/2014 11:54 AM, Mārtiņš Jakubovičs wrote:
Hello,
Thanks for such a fast response.
The warning still persists:
http://pastebin.com/QnciHG6v
Hmm, that's weird.
I didn't mention it, but the admin and monitoring nodes are Ubuntu 14.04
x64, with ceph-deploy 1.4 and ceph 0.79.
Why aren't you trying with Ceph
the data, do
`ceph-deploy purge ceph-node1`
`ceph-deploy purgedata ceph-node1`
`ceph-deploy install ceph-node1`
and then try create-initial again.
create-initial is the best way to do this, and it already gives very
good/useful output!
On Thu, May 22, 2014 at 6:56 AM, Mārtiņš Jakubovičs mart...@vertigs.lv