Hi Team,
The Ceph OSD disk allocation procedure suggests: "We recommend using
a dedicated drive for the operating system and software, and one drive for each
Ceph OSD Daemon you run on the host." We only have SSD disks in these hosts; is it
advisable to run the OS and an OSD on the same disk?
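For example, would a layout along these lines be reasonable (device and host names
below are just placeholders), keeping the OS on its own partitions and giving the
OSD a dedicated partition on the same SSD?

sgdisk --largest-new=4 /dev/sda            # hypothetical: remaining SSD space becomes partition 4
ceph-deploy osd prepare node1:/dev/sda4    # prepare only that partition for the OSD

We understand the OS and the OSD would still share the same device's IOPS and
write endurance.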
I'd suggest running your Ceph network on the "public" interface /24 you
have assigned to that rack. Assign loopback IPs from another (not
otherwise routed) network to your MONs.
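In ceph.conf terms that would look roughly like this (addresses are only
placeholders):

[global]
public network = 192.0.2.0/24        # the routed /24 of that rack
[mon.a]
mon addr = 198.51.100.1:6789         # /32 loopback address from a separate, otherwise unused range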
I have no experience with the RBD client, but with RGW you can assign the
same IP to all of them and let your ToR
Most ceph clusters are set up once and then maintained. Even if new OSD
nodes are added, it's not a frequent enough operation to warrant automation.
Yes, ceph does provide hooks for automatically updating the CRUSH map
(crush location hook), but it's up to you to properly write, debug, and
I get digests, so please forgive me if this has been covered already.
> Assuming production level, we would keep a pretty close 1:2 SSD:HDD ratio,
1:4-5 is common, but it depends on your needs and the devices in question, i.e.
assuming LFF drives and that you aren’t using crummy journals.
> First
Hi,
I wanted to share a PHP client for the RGW Admin Ops API [0] which has been
developed at my company.
There is a proper Python [1] client for the API, but we were unable to find one
for PHP, so we wrote it: https://github.com/PCextreme/rgw-admin-php
The client works with PHP 7 and is a
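Whichever client you use, the RGW user it authenticates as needs admin
capabilities first; the setup is something like this (the uid and the exact caps
below are just an example, grant only what your application needs):

radosgw-admin user create --uid=admin --display-name="Admin Ops user"
radosgw-admin caps add --uid=admin --caps="users=*;buckets=*;metadata=*;usage=*"

The access and secret key printed by the first command are what you feed to the
client.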
On 04/18/17 11:44, Jogi Hofmüller wrote:
> Hi,
>
> On Tuesday, 18.04.2017, at 13:02 +0200, mj wrote:
>> On 04/18/2017 11:24 AM, Jogi Hofmüller wrote:
>>> This might have been true for hammer and older versions of ceph.
>>> From what I can tell now, every snapshot taken reduces performance
On 04/18/17 16:31, pascal.pu...@pci-conseil.net wrote:
>
> Hello,
>
> Just looking for advice: next time, I will extend my Jewel ceph cluster with a
> fourth node.
>
> Currently, we have 3 nodes of 12 OSDs each, with 4TB drives (36 x 4TB drives).
>
> I will add a new node with 12 x 8TB drives (adding 12 new OSDs => 48
Hello,
Just looking for advice: next time, I will extend my Jewel ceph cluster with a
fourth node.
Currently, we have 3 nodes of 12 OSDs each, with 4TB drives (36 x 4TB drives).
I will add a new node with 12 x 8TB drives (adding 12 new OSDs => 48 OSDs).
So, how do I rebalance this simply?
How can I just unplug 3 drives
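Would the right approach be something like the following: bring the new 8TB OSDs
in at a low CRUSH weight and raise it in steps (the weights below are only an
illustration)?

ceph osd crush reweight osd.36 1.0   # start well below the roughly 7.3 an 8TB drive would normally get
# ... wait for backfill to finish and HEALTH_OK, then raise again ...
ceph osd crush reweight osd.36 3.0

(Setting "osd crush initial weight = 0" in ceph.conf before activating the new
OSDs would avoid the initial rush, if I understand it correctly.)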
Hi,
If you're using ceph-deploy, just run this command:
ceph-deploy osd prepare --overwrite-conf {your_host}:/dev/sdaa:/dev/sdaf2
It's not working in the VM. I checked on an individual node, and once I zap the
disk it works.
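(By "zap the disk" I mean something along these lines; the device name is just an
example:)

ceph-deploy disk zap cphosd1:/dev/sdb
ceph-deploy osd prepare cphosd1:/dev/sdb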
On Tue, 18 Apr 2017 15:41:53 +0530 xu xu gor...@gmail.com wrote
Is it resolved?
I found the following in the "ceph-deploy osd activate cphosd1:/home/osd1 cphosd2:/home/osd2
cphosd3:/home/osd3" log output
Ceph has the ability to use a script to figure out where in the
crushmap this disk should go (on osd start):
http://docs.ceph.com/docs/master/rados/operations/crush-map/#ceph-crush-location-hook
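A minimal hook can be a tiny shell script; point to it with
"crush location hook = /usr/local/bin/crush-location.sh" in ceph.conf (the path
and the hard-coded rack below are just examples, you would replace the rack with
a real lookup):

#!/bin/sh
# Invoked by the OSD on start with arguments such as --cluster, --id and --type;
# it must print the desired CRUSH location as key=value pairs on one line.
echo "host=$(hostname -s) rack=rack1 root=default"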
--
Adam
On Tue, Apr 18, 2017 at 7:53 AM, Matthew Vernon wrote:
> On 17/04/17
Hi, Chris.
Thank you for your help. I found some useful stuff in these files and managed to
complete the task.
If someone is interested in how, below is a little summary:
1. Create new osd
ceph-disk prepare /dev/sdaa /dev/sdaf
Now we have 4 partitions on sdaf, but we want to use the second partition
On 17/04/17 21:16, Richard Hesse wrote:
> I'm just spitballing here, but what if you set osd crush update on start
> = false? Ansible would activate the OSDs but not place them in any
> particular rack, working around the ceph.conf problem you mentioned.
> Then you could place them in your CRUSH
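If I follow the suggestion, that would amount to something like this (the weight
and bucket names below are only placeholders):

[osd]
osd crush update on start = false

and then, once Ansible has activated the OSDs, placing each one explicitly:

ceph osd crush create-or-move osd.36 1.0 root=default rack=rack3 host=node4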
Hi,
On Tuesday, 18.04.2017, at 13:02 +0200, mj wrote:
>
> On 04/18/2017 11:24 AM, Jogi Hofmüller wrote:
> > This might have been true for hammer and older versions of ceph.
> > From what I can tell now, every snapshot taken reduces performance of
> > the
> > entire cluster :(
>
> Really?
Hello all,
A while ago I came across a Ceph cluster with an RBD volume missing the
header object describing the characteristics of the volume, making it
impossible to attach or perform any operations on said volume.
As a courtesy to anyone else encountering the same situation I would like
to
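In short, the tell-tale sign is that the image still appears in the pool listing
while its header object is gone; a rough way to confirm it (assuming a format 2
image called "myvolume" in pool "rbd"; the id lookup is only illustrative):

rbd ls rbd                                 # the image name is still listed
rados -p rbd get rbd_id.myvolume /tmp/id   # the id object is still there
rados -p rbd stat rbd_header.<image id>    # ...but stat-ing the header object fails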
On 18/04/2017 at 11:24, Jogi Hofmüller wrote:
> Hi,
>
> thanks for all your comments so far.
>
> On Thursday, 13.04.2017, at 16:53 +0200, Lionel Bouton wrote:
>> Hi,
>>
>> On 13/04/2017 at 10:51, Peter Maloney wrote:
>>> Ceph snapshots really slow things down.
> I can confirm that now :(
On 04/18/2017 11:24 AM, Jogi Hofmüller wrote:
> This might have been true for hammer and older versions of ceph. From
> what I can tell now, every snapshot taken reduces performance of the
> entire cluster :(
Really? Can others confirm this? Is this a 'well-known fact'?
(unknown only to us,
Is it resolved?
I found the following in the "ceph-deploy osd activate cphosd1:/home/osd1 cphosd2:/home/osd2
cphosd3:/home/osd3" log output:
...
[cphosd1][WARNIN] activate_dir: Creating symlink /var/lib/ceph/osd/ceph-0
-> /home/osd1
..
and "journalctl -xe" logout
..
Mar 4 09:26:33 localhost
On 17.04.17 at 22:12, Richard Hesse wrote:
> A couple of questions:
>
> 1) What is your rack topology? Are all ceph nodes in the same rack
> communicating with the same top of rack switch?
The cluster is planned for one rack with two ToR / cluster-internal
switches. The cluster will be accessed
Hi,
thanks for all your comments so far.
On Thursday, 13.04.2017, at 16:53 +0200, Lionel Bouton wrote:
> Hi,
>
> > On 13/04/2017 at 10:51, Peter Maloney wrote:
> > Ceph snapshots really slow things down.
I can confirm that now :(
> We use rbd snapshots on Firefly (and Hammer now) and I
On Mon, Apr 17, 2017 at 1:42 PM, Alex Gorbachev wrote:
> On Thu, Apr 13, 2017 at 4:24 AM, Ilya Dryomov wrote:
>> On Thu, Apr 13, 2017 at 5:39 AM, Alex Gorbachev wrote:
>>> On Wed, Apr 12, 2017 at 10:51 AM, Ilya Dryomov
Félix Barbeira writes:
> We are implementing an IPv6 native ceph cluster using SLAAC. We have
> some legacy machines that are not capable of using IPv6, only IPv4 due
> to some reasons (yeah, I know). I'm wondering what could happen if I
> use an additional IPv4 on the radosgw in addition to the
Hi Song,
Next time, please post the question in ceph-devel or ceph-users mailing list.
I'm also sending the answer to your question to the ceph-users mailing list.
The purpose of "--delay SEC" is to protect the image from being accidentally
removed from trash before SEC seconds have expired, and to allow
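For example (pool and image names below are made up, and I am assuming the image
was moved to the trash with "rbd trash mv"):

rbd trash mv rbd/myimage --delay 86400   # cannot be purged from the trash for 24 hours
rbd trash ls rbd                         # shows the deferred image together with its id
rbd trash restore -p rbd <image id>      # bring it back if the removal was a mistake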