Re: [ceph-users] best practices for EC pools

2019-02-07 Thread Alan Johnson
Just to add that a more general formula is that the number of nodes should be greater than or equal to k + m + m, i.e. N >= k + m + m, to allow full recovery.
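As an illustrative sketch (the profile name, k/m values and PG count below are placeholders, not taken from the thread): with k=4 and m=2, the rule of thumb above calls for at least 4 + 2 + 2 = 8 nodes.

  # hypothetical EC profile: k=4 data chunks, m=2 coding chunks, one chunk per host
  ceph osd erasure-code-profile set ec42 k=4 m=2 crush-failure-domain=host
  ceph osd pool create ecpool 128 128 erasure ec42
  # by the N >= k+m+m rule of thumb, this profile wants at least 8 OSD hosts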

Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-02 Thread Alan Johnson
If this is Skylake, the 6-channel memory architecture lends itself better to configurations such as 192GB (6 x 32GB). So yes, even though 128GB is most likely sufficient, using 6 x 16GB (96GB) might be too small.
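A quick way to verify how the channels are actually populated (a general Linux approach, not something from the thread):

  # list installed DIMMs and their slots to confirm all six channels per socket are filled
  sudo dmidecode -t memory | grep -E 'Size|Locator'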

Re: [ceph-users] RBD default pool

2019-02-01 Thread Alan Johnson
Confirming that no pools are created by default with Mimic. (Replying to solarflow99: "I thought a new cluster would have the 'rbd' pool already…")
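Since the pool is no longer created for you, a minimal sketch of creating the classic 'rbd' pool by hand (the PG count of 128 is an assumption, size it for your cluster):

  # create and initialize the default RBD pool
  ceph osd pool create rbd 128
  rbd pool init rbd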

Re: [ceph-users] ceph 12.2.5 - atop DB/WAL SSD usage 0%

2018-04-27 Thread Alan Johnson
Could we infer from this that, if the usage model is large object sizes rather than small I/Os, the benefit of offloading the WAL/DB is questionable, given that failure of the SSD (assuming it is shared amongst several HDDs) could take down a number of OSDs, and that in this case the best practice would be to collocate?

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread Alan Johnson
If using the defaults, try chmod +r /etc/ceph/ceph.client.admin.keyring.

[ceph-users] question regarding filestore on Luminous

2017-09-25 Thread Alan Johnson
I am trying to compare FileStore performance against BlueStore. With Luminous 12.2.0, BlueStore is working fine, but if I try to create a FileStore volume with a separate journal using the Jewel-like syntax "ceph-deploy osd create :sdb:nvme0n1", device nvme0n1 is ignored and it sets up two…
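For comparison, later ceph-deploy releases (2.x) take explicit flags for a FileStore OSD with a separate journal; a sketch under that assumption (the hostname is a placeholder, and this is not the syntax used in the thread):

  # explicit filestore flags instead of the old host:data:journal form
  ceph-deploy osd create --filestore --data /dev/sdb --journal /dev/nvme0n1 osd-node1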

Re: [ceph-users] Squeezing Performance of CEPH

2017-06-23 Thread Alan Johnson
We have found that we can place 18 journals on the Intel P3700 PCIe devices comfortably. We also tried it with fio, adding more jobs to ensure that performance did not drop off (via Sébastien Han's tests, described at…
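The fio test referred to is commonly run along these lines (a sketch of the widely used SSD journal test, reconstructed here rather than quoted from the thread; the device path and job count are placeholders):

  # destructive: writes O_DSYNC 4k sequential I/O directly to the named device
  fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k \
      --numjobs=18 --iodepth=1 --runtime=60 --time_based \
      --group_reporting --name=journal-test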

Re: [ceph-users] Intel P3700 SSD for journals

2016-11-18 Thread Alan Johnson
We use the 800GB version as journal devices with up to a 1:18 ratio and have had good experiences, with no bottleneck on the journal side. These also feature good endurance characteristics. I would think that higher capacities are hard to justify as journals.

Re: [ceph-users] Ceph consultants?

2016-10-05 Thread Alan Johnson
I did have some similar issues and resolved them by installing parted 3.2 (I can't say if this was definitive, but it worked for me). I also only used create (after disk zap) rather than prepare/activate.
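For reference, the zap-then-create flow mentioned above in the old ceph-deploy host:disk form (host and device names are placeholders):

  # wipe the disk, then let 'osd create' do prepare + activate in one step
  ceph-deploy disk zap node1:sdb
  ceph-deploy osd create node1:sdb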

Re: [ceph-users] Ceph performance expectations

2016-04-07 Thread Alan Johnson
…a number of good discussions relating to endurance and suitability as a journal device.

Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)

2016-02-12 Thread Alan Johnson
Can you check the value of kernel.pid_max? This may have to be increased for larger OSD counts; it may have some bearing.
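A minimal sketch of checking and raising the limit (the value shown is the usual 64-bit kernel maximum, picked as an example rather than taken from the thread):

  # check the current limit
  sysctl kernel.pid_max
  # raise it at runtime and persist it across reboots
  sudo sysctl -w kernel.pid_max=4194303
  echo 'kernel.pid_max = 4194303' | sudo tee -a /etc/sysctl.conf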

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-22 Thread Alan Johnson
I would also add that the journal activity is write-intensive, so a small part of the drive would get excessive writes if the journal and data are co-located on an SSD. This would also be the case where an SSD has multiple journals associated with many HDDs.

Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
Or separate the journals, as this will bring the workload on the spinners down to 3X rather than 6X.
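Spelling out the multipliers (an editorial note, assuming the default 3-way replication and FileStore's write-ahead journal): each client write goes to 3 replicas, and with co-located journals every replica is written twice, once to the journal and once to the data partition, so the spinners see 3 x 2 = 6X the client traffic; with the journals moved to SSDs, only the 3 data writes remain, i.e. 3X.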

Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
…much better may point to a bottleneck elsewhere – network, perhaps?

Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
Are the journals on the same device? It might be better to use the SSDs for journaling, since you are not getting better performance with the SSDs.

Re: [ceph-users] 2-Node Cluster - possible scenario?

2015-10-25 Thread Alan Johnson
Quorum can be achieved with one monitor node (for testing purposes this would be OK, but of course it is a single point of failure). However, the default for the OSD nodes is three-way replication (this can be changed), so it is easier to set up three OSD nodes and one monitor node to start with. For your…
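If you do want to test with fewer OSD nodes, the replication factor can be lowered per pool; a sketch (the pool name is a placeholder, and size 2 / min_size 1 is only sensible for throwaway test data):

  # shrink the replica count on a test pool to fit a two-node cluster
  ceph osd pool set testpool size 2
  ceph osd pool set testpool min_size 1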

Re: [ceph-users] Debian repo down?

2015-09-26 Thread Alan Johnson
Yes, I am also getting this error. Thx, Alan. (Replying to Iban Cabrillo's report: "Hi cephers, I am getting a download error from…")

Re: [ceph-users] runtime Error for creating ceph MON via ceph-deploy

2015-06-30 Thread Alan Johnson
I use sudo visudo and then add a line under "Defaults requiretty": Defaults:user !requiretty, where user is the username. Hope this helps. Alan
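Spelled out, the sudoers change looks like this (the username cephdeploy is just a placeholder):

  # existing line in the file opened by 'sudo visudo'
  Defaults requiretty
  # added line: exempt the deploy user from the tty requirement
  Defaults:cephdeploy !requiretty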

Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin node

2015-06-18 Thread Alan Johnson
And also, this needs the correct permissions set, as otherwise it will give this error.

Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin node

2015-06-18 Thread Alan Johnson
For the permissions, use sudo chmod +r /etc/ceph/ceph.client.admin.keyring.

Re: [ceph-users] Ceph-deploy issues

2015-02-25 Thread Alan Johnson
Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below.

Re: [ceph-users] Ceph-deploy issues

2015-02-25 Thread Alan Johnson
(Quoting the reply from Garg, Pankaj: "Hi Alan, Thanks. Worked like magic. Why did this happen though? I have…")

Re: [ceph-users] 答复: Re: can not add osd

2015-02-10 Thread Alan Johnson
Just wondering if this was ever resolved; I am seeing the exact same issue since moving from CentOS 6.5 (Firefly) to CentOS 7 on the Giant release. Using "ceph-deploy osd prepare . . ." the script fails to umount and then posts a "device is busy" message. Details are below in yang bin18's posting.