___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Kurt Bauer
Vienna University Computer Center - ACOnet - VIX
Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
Jan Fajerski <jfajer...@suse.com>, 7 May 2018 at 14:21
On Mon, May 07, 2018 at 02:05:59PM +0200, Kurt Bauer wrote:
Hi Jan,
first of all thanks for this dashboard.
A few comments:
-) 'vonage-status-panel' is needed, which isn't mentioned in the README
Yes, my bad. Will
Hi Jan,
first of all thanks for this dashboard.
A few comments:
-) 'vonage-status-panel' is needed, which isn't mentioned in the README
-) Using Ceph 12.2.4, the mon metric for me is apparently called
'ceph_mon_quorum_count', not 'ceph_mon_quorum_status'
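To check which name a cluster actually exposes, one can grep the exporter's output directly; the host and port below are assumptions for the ceph-mgr prometheus module's default endpoint, not values from this thread:

```shell
# List the quorum-related metric names the Prometheus exporter exposes;
# host/port are placeholders for your ceph-mgr prometheus endpoint.
curl -s http://ceph-mgr-host:9283/metrics | grep -i ceph_mon_quorum
```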
And a question:
Is there a way to get
Hi Jogi,
you can specify any repository you like with 'ceph-deploy install
--repo-url ', given you have the repo keys installed.
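As a hedged sketch (repository URL, key URL, release name, and host name are all placeholders, not values from this thread):

```shell
# Import the repository key first, then point ceph-deploy at the mirror.
wget -q -O- 'https://example.org/ceph/release.asc' | sudo apt-key add -
ceph-deploy install --repo-url https://example.org/ceph/debian-hammer node1
```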
Best regards,
Kurt
Jogi Hofmüller wrote:
> Hi,
>
> Am 2015-09-25 um 22:23 schrieb Udo Lembke:
>
>> you can use this sources-list
>>
>> cat
Hi,
looking at the outputs below the following puzzles me:
You have two nodes but replication size 3 for your test-data pool. With the
default CRUSH map this won't work, as it tries to place each replica on a
different node, and with only two nodes the third replica can never be placed.
So either change to replication size 2, or add another node ;-)
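The first option can be done on the fly; a minimal sketch (the pool name is taken from the thread, the min_size choice is an assumption):

```shell
# Drop the test-data pool to two replicas so both can land on the two nodes.
ceph osd pool set test-data size 2
# Optionally allow I/O with a single surviving replica.
ceph osd pool set test-data min_size 1
```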
best regards,
Kurt
Jogi
To see where your OSDs and mon are listening, you have various commands in
Linux, e.g.:
'lsof -ni | grep ceph' - you should see one LISTEN line for the monitor,
two LISTEN lines for the OSDs, and a lot of ESTABLISHED lines, which
indicate communication between OSDs, and between OSDs and clients
'netstat -atn |
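A sketch of the checks above (the truncated netstat line presumably continued with a filter; the port number is the Ceph default):

```shell
# One LISTEN line for the monitor, more for the OSDs, plus ESTABLISHED peers.
lsof -ni | grep ceph
# Monitors listen on 6789 by default; OSDs bind ports from 6800 upwards.
netstat -atn | grep ':6789'
```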
Hi,
that's by design. The monitor always listens on the public network only, if
a public network is defined. If you want everything on the cluster
network, just don't specify a separate public/cluster network. But
that's all documented in great detail at
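A minimal ceph.conf sketch of the two variants (the subnets are documentation placeholders): defining both networks makes the monitors bind on the public one, while leaving both options out collapses everything onto a single network.

```ini
[global]
# Monitors bind here; OSD replication traffic uses the cluster network.
public network = 192.0.2.0/24
cluster network = 198.51.100.0/24
# For a single-network setup, simply omit both options.
```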
Hi,
Vikrant Verma, 12 February 2014, 19:03
yes, I want to use multiple hard drives with a single OSD. Is it possible to
have it?
It's perfectly possible, but at the expense of redundancy, resilience
and/or speed. You can use some RAID, then losing one hard drive (or
more,
regards,
Kurt
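The RAID approach mentioned above could look like this (device names are placeholders; note that RAID0 sacrifices exactly the redundancy the reply warns about):

```shell
# Stripe two drives into one block device and give that single device to an OSD.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs /dev/md0
```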
Kurt Bauer wrote:
Hi,
I have to open our Ceph cluster for some clients that only support
kernel rbd. In general that's no problem and works just fine (verified
in our test cluster ;-) ). I then tried to map images from our
production cluster and failed: rbd: add failed: (95
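Error 95 is EOPNOTSUPP; with the kernel client this typically means the cluster uses CRUSH tunables or image features newer than the client kernel understands. A hypothetical first check (pool and image names are placeholders):

```shell
# Inspect the image and the cluster's CRUSH tunables profile;
# compare both against what the clients' kernel version supports.
rbd info mypool/myimage
ceph osd crush show-tunables
```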
of
the potential vulnerability. It requires someone to hijack the TCP
session in order to take advantage of it (e.g., MITM), though, so there
are many environments where it is not a big concern.
sage
On Mon, 10 Feb 2014, Kurt Bauer wrote:
Hi,
I found two possibly related bugs in the tracker; maybe that should be mentioned
somewhere at http://ceph.com/docs/master/rbd/rbd-ko/
--
Kurt Bauer kurt.ba...@univie.ac.at
Vienna University Computer Center - ACOnet - VIX
Universitaetsstrasse 7, A-1010 Vienna, Austria, Europe
Tel: ++43 1 4277 - 14070 (Fax: - 814070) KB1970-RIPE
to restart any ceph services?
Kind regards
Alessandro Brega
Hi,
maybe you want to have a look at the following thread:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-October/005368.html
Could be that you suffer from the same problems.
best regards,
Kurt
Rzk wrote:
Hi all,
I have the same problem, just curious.
could it be caused by poor
Hi,
we had a setup like this and ran into trouble, so I would strongly
discourage you from setting it up like this. Under normal circumstances
there's no problem, but when the cluster is under heavy load, for
example when it has a lot of pgs backfilling, for whatever reason
(increasing num of
Hi,
neither do you need a filesystem on the partition, nor does it have to
be mounted. You can link the journal against the raw partition.
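A sketch of what that looks like on disk (the OSD id and partition are placeholders):

```shell
# Point the OSD's journal at the raw partition and (re)initialize it.
ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-0/journal
ceph-osd -i 0 --mkjournal
```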
Best regards,
Kurt
Snider, Tim wrote:
Sage -
Does the journal device need a file system created, and does that device
need to be mounted?
Tim
not recommended? And
if not recommended, what the issues would/could be?
Thanks,
br,
Kurt
On Mon, Sep 23, 2013 at 7:42 AM, Kurt Bauer kurt.ba...@univie.ac.at wrote:
Hi,
just a short question to which I couldn't find an answer in the
documentation:
When I run a cluster with public and cluster network
the cluster, as 2 nodes are still working.
But is there anything special we should consider or do beforehand,
besides praying that nothing bad will happen with the remaining two nodes?
Thanks for your help,
best regards,
Kurt
Da Chun wrote:
Thanks for sharing! Kurt.
Yes. I have read the article you mentioned. But I also read another
one:
http://www.hastexo.com/resources/hints-and-kinks/turning-ceph-rbd-images-san-storage-devices.
It uses LIO, which is the current standard Linux kernel SCSI target.
That
Hi,
the missing package is 'python-setuptools', but the dependencies in the
deb package should be fixed, as at least for Ubuntu Precise it doesn't work.
Manual install fixes the problem.
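The manual install is a one-liner on Debian/Ubuntu:

```shell
# Pull in the dependency that the deb package fails to declare.
sudo apt-get install -y python-setuptools
```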
Best regards,
Kurt
Peter Wienemann wrote:
Hi,
are there plans to provide (working) ceph-deploy Debian