Adam,
What David said before about SSD drives is very important. Let me put it
another way: use enterprise-grade SSD drives, not consumer-grade ones.
Also, pay attention to endurance.
The only drive suitable for Ceph that I see in your tests is the
SSDSC2BB150G7, and it probably isn't even the most
Greetings, Santu Roy!
On that day you wrote...
> I am very new to Ceph. I have been studying it for a few days for a deployment of a Ceph cluster.
> I am going to deploy Ceph in a small data center where power failure is a big
> problem. We have a single power supply, a single UPS and a standby generator. So
> what
Hello ceph-users.
Short description: during snapshot removal, OSD utilisation goes up to 100%,
which leads to slow requests and VM failures due to IOPS stalls.
We're using OpenStack Cinder with a Ceph cluster as the volume backend. The
Ceph version is 10.2.6.
We're also using cinder-backup to create backups
https://github.com/twonote/radosgw-admin4j
I ran into trouble when working with the radosgw admin APIs, especially
since the docs are a bit confusing and inconsistent with the code base.
There are two things I can do: 1. Correct the section
On 04/26/17 14:54, Vladimir Prokofev wrote:
> Hello ceph-users.
>
> Short description: during snapshot removal, OSD utilisation goes up to
> 100%, which leads to slow requests and VM failures due to IOPS stalls.
>
> We're using OpenStack Cinder with a Ceph cluster as the volume backend.
> The Ceph version
Hi Henrik,
So, I assume that if the MDS goes down there will be no data loss and we
can install another MDS server. What about the old data, will it still work properly?
Regards
Prabu GJ
On Tue, 25 Apr 2017 16:31:44 +0530 Henrik Korkuc li...@kirneh.eu wrote:
On 17-04-25
On a Ceph Monitor/OSD server, can I just run:
*yum update -y*
to upgrade the system and packages, or would this mess up Ceph?
On Wed, Apr 26, 2017 at 10:53 AM, Adam Carheden wrote:
> What I'm trying to get from the list is /why/ the "enterprise" drives
> are important. Performance? Reliability? Something else?
Generically, enterprise drives
1) have higher endurance ratings
2) are significantly more
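For example, the rated endurance and current wear can usually be read
from SMART data with smartmontools (attribute names vary by vendor;
Intel drives expose Media_Wearout_Indicator):

  # full SMART dump; look for wear/endurance attributes
  smartctl -a /dev/sda
  # or just the attribute table, filtered
  smartctl -A /dev/sda | grep -i -e wear -e written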
Adam,
Before we deployed our cluster, we did extensive testing on all kinds of
SSDs, from consumer-grade TLC SATA all the way to enterprise PCIe NVMe
drives. We ended up going with a ratio of 1x Intel P3608 PCI-E 1.6 TB
to 12x HGST 10TB SAS3 HDDs. It provided the best
Hi Adam,
How did you settle on the P3608 versus, say, the P3600 or P3700 for journals?
And also the 1.6 TB size? It seems like overkill, unless it's pulling double
duty beyond OSD journals.
The only improvement over the P3x00 is the move from x4 to x8 lanes on the
PCIe bus, but the P3600/P3700 offer much
What I'm trying to get from the list is /why/ the "enterprise" drives
are important. Performance? Reliability? Something else?
The Intel was the only one I was seriously considering. The others were
just ones I had for other purposes, so I thought I'd see how they fared
in benchmarks.
The Intel
> On 25 April 2017 at 20:07, Ronny Aasen wrote:
>
>
> Hello
>
> I am trying to install Ceph on Debian stretch from
>
> http://eu.ceph.com/debian-jewel/dists/
>
> but there is no stretch repo there.
>
> Now that stretch is frozen, it is a good time to be
Hi Massimiliano,
I think you'd best follow the upgrade process from the Ceph site; take a look
at it, since you need to do it in a specific order:
1. the MONs
2. the OSDs
3. the MDS
4. the Object gateways
http://docs.ceph.com/docs/master/install/upgrading-ceph/
it's better to do it like that and
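As a rough sketch of that order (assuming systemd-managed daemons; unit
names may differ on your release), after installing the new packages on
each node:

  # 1. on each monitor node
  systemctl restart ceph-mon.target
  # 2. on each OSD node
  systemctl restart ceph-osd.target
  # 3. on the MDS nodes
  systemctl restart ceph-mds.target
  # 4. on the object gateway nodes
  systemctl restart ceph-radosgw.target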
A quick update just to close out this thread:
After investigating with netstat I found one ceph-osd process had three TCP
connections in established state but with no connection state on the peer
system (the client node that previously had been using the RBD image). The
qemu process on the
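For anyone wanting to run a similar check, a minimal sketch (filtering
by process name):

  # established TCP connections held by ceph-osd processes
  netstat -tnp | grep ceph-osd
  # or the ss equivalent
  ss -tnp | grep ceph-osd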
Oh, sorry, my bad; I thought he wanted to upgrade the Ceph cluster, not the
OS packages.
best,
*German*
2017-04-26 14:29 GMT-03:00 David Turner :
> He's asking how NOT to upgrade Ceph, but to update the rest of the
> packages on his system. In Ubuntu, you have to type
Hey all,
Resurrecting this thread because I just wanted to let you know that
Sam's initial work in master has been backported to Jewel and will be
in the next (10.2.8, I think?) release:
https://github.com/ceph/ceph/pull/14492/
Once upgraded, it will be safe to use the "osd snap trim sleep"
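Once you're on a release with the backport, a minimal sketch of using it
(the value is in seconds; tune for your workload):

  # inject at runtime on all OSDs
  ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

  # or persist it in ceph.conf
  [osd]
  osd snap trim sleep = 0.1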
Thanks everyone for the replies.
I will be avoiding TLC drives, it was just something easy to benchmark
with existing equipment. I hadn't thought of unscrupulous data durability
lies or performance suddenly tanking in unpredictable ways. I guess it
all comes down to trusting the vendor, since it
He's asking how NOT to upgrade Ceph, but to update the rest of the packages
on his system. In Ubuntu, you have to type `apt-get dist-upgrade` instead
of just `apt-get upgrade` when you want to upgrade ceph. That becomes a
problem when trying to update the kernel, but it's not too bad. I think in
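A sketch of updating everything except Ceph on both package families
(assuming the yum versionlock plugin is available):

  # CentOS/RHEL: pin the installed ceph packages, then update the rest
  yum install yum-plugin-versionlock
  yum versionlock 'ceph*'
  yum update -y

  # Ubuntu/Debian equivalent
  apt-mark hold ceph ceph-common
  apt-get update && apt-get upgrade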
You can try the Proxmox stretch repository if you want:
http://download.proxmox.com/debian/ceph-luminous/dists/stretch/
- Original Message -
From: "Wido den Hollander"
To: "ceph-users" , "Ronny Aasen"
Sent: Wednesday 26
It's probably fine, depending on the Ceph version. The upgrade notes on the
Ceph website typically tell you the steps for each version.
As of Kraken, the notes say: "You may upgrade OSDs, Monitors, and MDSs in
any order. RGW daemons should be upgraded last"
Previously it was always recommended
Hi Greg,
Thanks a lot for your work on this one. It really helps us right now.
Would it be easy to add the snaptrim speed to the ceph -s output, like
"snaptrim io 144 MB/s, 721 objects/s" (or just objects/s if sizes are unknown)?
It would help to see how the snaptrim speed changes along with snap
On Fri, Apr 21, 2017 at 12:07 PM, Fabian wrote:
> Hi Everyone,
>
> I'm playing around a bit with Ceph on a test cluster with 3 servers (each is
> a MON and an OSD at the same time).
> I use some self-written Ansible rules to deploy the config and create
> the OSDs with ceph-disk. Because
> On 24 April 2017 at 19:52, Florian Haas wrote:
>
>
> Hi everyone,
>
> so this will be a long email — it's a summary of several off-list
> conversations I've had over the last couple of weeks, but the TL;DR
> version is this question:
>
> How can a Ceph cluster
At a meeting with Intel folks a while back, they discussed the idea that
future large devices, which we're now starting to see, would achieve greater
*effective* durability via a lower cost/GB that encourages the use of larger
than needed devices. Which is a sort of overprovisioning, just
I want to use Ceph RBD as the storage backend of the SPDK iSCSI target,
but starting iscsi_tgt failed!
I don't know how to specify the backend storage type and some parameters
for Ceph RBD.
Please show me an example iscsi.conf for using Ceph RBD if you know one.
Thanks!
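(For reference, with a recent SPDK the RBD backend can also be attached
through the JSON-RPC interface instead of iscsi.conf; a minimal sketch,
where the pool/image names and binary path are placeholders:)

  # start the target; the binary path depends on your SPDK build
  ./build/bin/iscsi_tgt &
  # register an RBD image as a block device (pool, image, block size)
  ./scripts/rpc.py bdev_rbd_create rbd myimage 4096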
Thanks for the logs, Ben.
It looks like two completely different authenticators have failed:
the local, RADOS-backed auth (admin.txt) and the Keystone-based
one as well. In the second case I'm pretty sure that Keystone has
refused [1][2] to authenticate the provided signature/StringToSign.
RGW tried to
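If the Keystone side turns out to be misconfigured, the relevant
ceph.conf keys look roughly like this (a sketch for Jewel; the section
name and values are placeholders):

  [client.rgw.gateway]
  rgw keystone url = http://keystone:35357
  rgw keystone admin token = ADMIN_TOKEN
  rgw keystone accepted roles = admin,Member
  rgw s3 auth use keystone = true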
Hey cephers,
Just a reminder that the Ceph Tech Talk for April (scheduled for
tomorrow) has been cancelled. Keep in mind that these talks only
happen with a community speaker who volunteers, so if you'd like to
jump in and participate for May please drop me a line. Thanks!