Re: [ceph-users] best practices for EC pools

2019-02-07 Thread Alan Johnson
Just to add that a more general formula is that the number of nodes should be 
greater than or equal to k+m+m, i.e. N >= k+m+m, to allow full recovery after a host failure.
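As a rough illustration (using the k=4, m=2 profile discussed below), that rule of thumb 
would call for N >= 4+2+2 = 8 hosts. Assuming the profile name ec42 from later in this 
thread, the profile and the host count can be checked with something like:

   ceph osd erasure-code-profile get ec42   # shows k, m and crush-failure-domain
   ceph osd tree | grep -c host             # rough count of host buckets in the CRUSH map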

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Eugen 
Block
Sent: Thursday, February 7, 2019 8:47 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] best practices for EC pools

Hi Francois,

> Is that correct that recovery will be forbidden by the crush rule if a 
> node is down?

Yes, that is correct: failure-domain=host means no two chunks of the same PG 
can be on the same host. So if your PG is divided into 6 chunks, they are all on 
different hosts, and with one of your 6 hosts down there is no spare host to 
recover onto, so no recovery is possible at this point (for the EC pool).
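If you want to confirm that this is what is blocking recovery, something along these 
lines (using the pool and rule names from your mail) should show it:

   ceph pg dump_stuck undersized                    # PGs missing a chunk with nowhere to place it
   ceph osd pool get ewos1-prod_cinder_ec min_size  # how many chunks must be up for I/O
   ceph osd crush rule dump ewos1-prod_cinder_ec    # confirms chooseleaf ... type host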

> After rebooting all nodes we noticed that the recovery was slow, maybe 
> half an hour, but all pools are currently empty (new install).
> This is odd...

If the pools are empty I wouldn't expect that either. Is restarting one OSD also 
that slow, or does it only happen when you reboot the whole cluster?

> Which k values are preferred on 6 nodes?

It depends on the failures you expect and how many concurrent failures you need 
to cover.
I think I would keep failure-domain=host (with only 4 OSDs per host).  
As for the k and m values, 3+2 would make sense, I guess. That profile would 
leave one host free for recovery, and two OSDs of a PG's acting set could still fail 
without data loss, so it is as resilient as the 4+2 profile. This is just one approach, so 
please don't read it as *the* solution for your environment.
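If you want to try that, a 3+2 profile could be created along the same lines as the 
ec42 profile from your mail (names here are only examples; note that an existing EC 
pool cannot simply be switched to a new profile, so data would have to be migrated 
to a new pool):

   ceph osd erasure-code-profile set ec32 k=3 m=2 crush-root=default crush-failure-domain=host crush-device-class=nvme
   ceph osd pool create ewos1-prod_cinder_ec32 256 256 erasure ec32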

Regards,
Eugen


Zitat von Scheurer François :

> Dear All
>
>
> We created an erasure coded pool with k=4 m=2 with failure-domain=host 
> but have only 6 osd nodes.
> Is that correct that recovery will be forbidden by the crush rule if a 
> node is down?
>
> After rebooting all nodes we noticed that the recovery was slow, maybe 
> half an hour, but all pools are currently empty (new install).
> This is odd...
>
> Can it be related to k+m being equal to the number of nodes (4+2=6)?
> step set_choose_tries 100 was already in the EC crush rule.
>
> rule ewos1-prod_cinder_ec {
>   id 2
>   type erasure
>   min_size 3
>   max_size 6
>   step set_chooseleaf_tries 5
>   step set_choose_tries 100
>   step take default class nvme
>   step chooseleaf indep 0 type host
>   step emit
> }
>
> ceph osd erasure-code-profile set ec42 k=4 m=2 crush-root=default crush-failure-domain=host crush-device-class=nvme
> ceph osd pool create ewos1-prod_cinder_ec 256 256 erasure ec42
>
> ceph version 12.2.10-543-gfc6f0c7299
> (fc6f0c7299e3442e8a0ab83260849a6249ce7b5f) luminous (stable)
>
>   cluster:
> id: b5e30221-a214-353c-b66b-8c37b4349123
> health: HEALTH_WARN
> noout flag(s) set
> Reduced data availability: 125 pgs inactive, 32 pgs 
> peering
>
>   services:
> mon: 3 daemons, quorum ewos1-osd1-prod,ewos1-osd3-prod,ewos1-osd5-prod
> mgr: ewos1-osd5-prod(active), standbys: ewos1-osd3-prod, ewos1-osd1-prod
> osd: 24 osds: 24 up, 24 in
>  flags noout
>
>   data:
> pools:   4 pools, 1600 pgs
> objects: 0 objects, 0B
> usage:   24.3GiB used, 43.6TiB / 43.7TiB avail
> pgs: 7.812% pgs not active
>  1475 active+clean
>  93   activating
>  32   peering
>
>
> Which k values are preferred on 6 nodes?
> BTW, we plan to use this EC pool as a second rbd pool in Openstack, 
> with the main first rbd pool being replicated size=3; it is nvme ssd 
> only.
>
>
> Thanks for your help!
>
>
>
> Best Regards
> Francois Scheurer



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Bluestore HDD Cluster Advice

2019-02-02 Thread Alan Johnson
If this is Skylake, the 6-channel memory architecture lends itself better to 
configs such as 192GB (6 x 32GB); so even though 128GB is most likely 
sufficient, populating the channels evenly with 6 x 16GB (96GB) might be too small.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Martin 
Verges
Sent: Saturday, February 2, 2019 2:19 AM
To: John Petrini 
Cc: ceph-users 
Subject: Re: [ceph-users] Bluestore HDD Cluster Advice

Hello John,

you don't need such a big CPU; save yourself some money with a 12c/24t part and 
invest it in better / more disks. The same goes for memory: 128G would be enough. 
Why install 4x 25G NICs when the hard disks won't be able to use that bandwidth?

In addition, you can use the 2 OS disks for OSDs instead if you choose croit 
for system management, meaning 10 more OSDs in your small cluster for better 
performance, and it is a lot easier to manage. The best part is that this feature comes 
with our completely free version, so it is just a gain on your side! Try it out.

Please make sure to buy the right disks: there is a huge performance gap 
between 512e and 4Kn drives but next to no price difference. Bluestore does 
perform better than filestore in most environments, but as always it depends on 
your specific workload. I would not recommend even considering a filestore 
OSD anymore; instead buy the correct hardware for your use case and configure 
the cluster accordingly.

--
Martin Verges
Managing director

Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263

Web: https://croit.io
YouTube: https://goo.gl/PGE1Bx


On Fri, Feb 1, 2019 at 18:26, John Petrini <jpetr...@coredial.com> wrote:
Hello,

We'll soon be building out four new luminous clusters with Bluestore.
Our current clusters are running filestore so we're not very familiar
with Bluestore yet and I'd like to have an idea of what to expect.

Here are the OSD hardware specs (5x per cluster):
2x 3.0GHz 18c/36t
22x 1.8TB 10K SAS (RAID1 OS + 20 OSD's)
5x 480GB Intel S4610 SSD's (WAL and DB)
192 GB RAM
4X Mellanox 25GB NIC
PERC H730p

With filestore we've found that we can achieve sub-millisecond write
latency by running very fast journals (currently Intel S4610's). My
main concern is that Bluestore doesn't use journals and instead writes
directly to the higher latency HDD; in theory resulting in slower acks
and higher write latency. How does Bluestore handle this? Can we
expect similar or better performance than our current filestore
clusters?

I've heard it repeated that Bluestore performs better than Filestore,
but I've also heard some people claim this is not always the case
with HDDs. Is there any truth to that, and if so, is there a
configuration we can use to achieve the same type of performance with
Bluestore?

Thanks all.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD default pool

2019-02-01 Thread Alan Johnson
I can confirm that no pools are created by default with Mimic.
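If you need the pool, it can be created manually; roughly along these lines (the pg 
count here is just an example):

   ceph osd pool create rbd 128 128 replicated
   rbd pool init rbd    # tags the pool for RBD use on Luminous/Mimic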

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
solarflow99
Sent: Friday, February 1, 2019 2:28 PM
To: Ceph Users 
Subject: [ceph-users] RBD default pool

I thought a new cluster would have the 'rbd' pool already created; has this 
changed?  I'm using Mimic.


# rbd ls
rbd: error opening default pool 'rbd'
Ensure that the default pool has been created or specify an alternate pool name.
rbd: list: (2) No such file or directory


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph 12.2.5 - atop DB/WAL SSD usage 0%

2018-04-27 Thread Alan Johnson
Could we infer from this that if the usage model is large object sizes rather than 
small I/Os, the benefit of offloading the WAL/DB is questionable, given that the 
failure of the SSD (assuming it is shared amongst HDDs) could take down a number of 
OSDs? In that case would collocating be a best practice?

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Serkan 
Çoban
Sent: Friday, April 27, 2018 10:05 AM
To: Steven Vacaroaia 
Cc: ceph-users 
Subject: Re: [ceph-users] ceph 12.2.5 - atop DB/WAL SSD usage 0%

rados bench uses a 4MB block size for I/O by default. Try with an I/O size of 4KB and 
you will see the SSD being used for write operations.
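For example, something like this (based on the command from your mail, just with a 
4KB block size) should generate small writes that show up on the WAL/DB partitions:

   rados bench -p rbd 50 write -t 32 -b 4096 --no-cleanup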

On Fri, Apr 27, 2018 at 4:54 PM, Steven Vacaroaia  wrote:
> Hi
>
> During rados bench tests, I noticed that HDD usage goes to 100% but
> SSD usage stays at (or very close to) 0.
>
> Since I created the OSDs with block.wal/block.db on SSD, shouldn't I see some
> activity on the SSD?
>
> How can I be sure Ceph is actually using the SSD for the WAL/DB?
>
>
> Note
> I only have 2 HDD and one SSD per server for now
>
>
> Comands used
>
> rados bench -p rbd 50 write -t 32 --no-cleanup && rados bench -p rbd -t 32 50 rand
>
>
> /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sdc --block.wal /dev/disk/by-partuuid/32ffde6f-7249-40b9-9bc5-2b70f0c3f7ad --block.db /dev/disk/by-partuuid/2d9ab913-7553-46fc-8f96-5ffee028098a
>
> ( partitions are on SSD ...see below)
>
>  sgdisk -p /dev/sda
> Disk /dev/sda: 780140544 sectors, 372.0 GiB
> Logical sector size: 512 bytes
> Disk identifier (GUID): 5FE0EA74-7E65-45B8-A356-62240333491E
> Partition table holds up to 128 entries
> First usable sector is 34, last usable sector is 780140510
> Partitions will be aligned on 2048-sector boundaries
> Total free space is 520093629 sectors (248.0 GiB)
>
> Number  Start (sector)End (sector)  Size   Code  Name
>1   251660288   253757439   1024.0 MiB    ceph WAL
>2204862916607   30.0 GiB  ceph DB
>3   253757440   255854591   1024.0 MiB    ceph WAL
>462916608   125831167   30.0 GiB  ceph DB
>5   255854592   257951743   1024.0 MiB    ceph WAL
>6   125831168   188745727   30.0 GiB  ceph DB
>7   257951744   260048895   1024.0 MiB    ceph WAL
>8   188745728   251660287   30.0 GiB  ceph DB
> [root@osd04 ~]# ls -al /dev/disk/by-partuuid/
> total 0
> drwxr-xr-x 2 root root 200 Apr 26 15:39 .
> drwxr-xr-x 8 root root 160 Apr 27 08:45 ..
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 0baf986d-f786-4c1a-8962-834743b33e3a
> -> ../../sda8
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 2d9ab913-7553-46fc-8f96-5ffee028098a
> -> ../../sda2
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 32ffde6f-7249-40b9-9bc5-2b70f0c3f7ad
> -> ../../sda3
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 3f4e2d47-d553-4809-9d4e-06ba37b4c384
> -> ../../sda6
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 3fc98512-a92e-4e3b-9de7-556b8e206786
> -> ../../sda1
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 64b8ae66-cf37-4676-bf9f-9c4894788a7f
> -> ../../sda7
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> 96254af9-7fe4-4ce0-886e-2e25356eff81
> -> ../../sda5
> lrwxrwxrwx 1 root root  10 Apr 27 09:38 
> ae616b82-35ab-4f7f-9e6f-3c65326d76a8
> -> ../../sda4
>
>
>
>
>
>
> LVM | dm-0 | busy 90% | read 2516 | write 0 | KiB/r 512 | KiB/w  0 | MBr/s 125.8 | MBw/s 0.0 | avq 10.65 | avio 3.57 ms |
> LVM | dm-1 | busy 80% | read 2406 | write 0 | KiB/r 512 | KiB/w  0 | MBr/s 120.3 | MBw/s 0.0 | avq 12.59 | avio 3.30 ms |
> DSK | sdc  | busy 90% | read 5044 | write 0 | KiB/r 256 | KiB/w  0 | MBr/s 126.1 | MBw/s 0.0 | avq 19.53 | avio 1.78 ms |
> DSK | sdd  | busy 80% | read 4805 | write 0 | KiB/r 256 | KiB/w  0 | MBr/s 120.1 | MBw/s 0.0 | avq 23.97 | avio 1.65 ms |
> DSK | sda  | busy  0% | read    0 | write 7 | KiB/r   0 | KiB/w 10 | MBr/s   0.0 | MBw/s 0.0 | avq  0.00 | avio 0.00 ms |
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

Re: [ceph-users] Install Ceph on Fedora 26

2017-10-26 Thread Alan Johnson
If using defaults try 
 chmod +r /etc/ceph/ceph.client.admin.keyring

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
GiangCoi Mr
Sent: Thursday, October 26, 2017 11:09 AM
To: ceph-us...@ceph.com
Subject: [ceph-users] Install Ceph on Fedora 26

Hi all
I am installing Ceph Luminous on Fedora 26. The install succeeded, but when I install 
the ceph mon it errors out: it doesn't find client.admin.keyring. 
How can I fix it? Thanks so much.

Regard, 
GiangLT

Sent from my iPhone
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] question regarding filestore on Luminous

2017-09-25 Thread Alan Johnson
I am trying to compare FileStore performance against BlueStore. With Luminous 
12.2.0, BlueStore is working fine, but if I try to create a FileStore volume 
with a separate journal using Jewel-like syntax - "ceph-deploy osd create 
:sdb:nvme0n1" - the device nvme0n1 is ignored and it sets up two partitions 
(similar to BlueStore) as shown below:
Number  Start   End SizeFile system  NameFlags
1  1049kB  106MB   105MB   xfs  ceph data
2  106MB   6001GB  6001GB   ceph block

Is this expected behavior, or is FileStore no longer supported with Luminous?

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Squeezing Performance of CEPH

2017-06-23 Thread Alan Johnson
We have found that we can place 18 journals on the Intel 3700 PCIe devices 
comfortably. We also tried it with fio, adding more jobs to ensure that 
performance did not drop off (via Sebastien Han's tests described at 
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/),
 so 12 should be no problem – only gotcha is if the NVMe dies . . .
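For reference, the test from that article is roughly along these lines (the device name 
is just a placeholder; note it writes directly to the raw device, so only use a spare one, 
and bump --numjobs to simulate more journals):

   fio --filename=/dev/nvme0n1 --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test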


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Massimiliano Cuttini
Sent: Friday, June 23, 2017 9:35 AM
To: Ashley Merrick
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Squeezing Performance of CEPH


Hi Ashley,
You could move your Journal to another SSD; this would remove the double write.
If I move the journal to another SSD, I will lose an available OSD, so the likely 2x 
improvement would then be halved again ...
this should not improve performance in any case on an all-SSD system.


Ideally you’d want one or two PCIe NVME in the servers for the Journal.
This seems like a really good idea, but imagine that I have only 2 PCIe slots and 
12 SSD disks.
I imagine it will not be possible to place 12 journals on 2 PCIe NVMe devices without 
losing performance, or is it?


Or if you can hold off a bit, then bluestore, which removes the double write; 
however it is still handy to move some of the services to a separate disk.
I hear that bluestore will remove the double write to the journal (I have not yet 
investigated), but I guess Luminous will not be fully tested before the end of 
the year.
As for today's system, I really don't know if moving the journals to separate disks 
would have much impact, considering that this is an all-SSD system.

Even if I add 2 PCIe NVMe devices, why should I not use them as OSDs instead of 
journals only?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Intel P3700 SSD for journals

2016-11-18 Thread Alan Johnson
We use the 800GB version as journal devices with up to a 1:18 ratio and have 
had good experiences, with no bottleneck on the journal side. These also feature good 
endurance characteristics. I would think that higher capacities are hard to 
justify as journals.

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Heath 
Albritton
Sent: Friday, November 18, 2016 9:19 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Intel P3700 SSD for journals

I've used the 400GB unit extensively for almost 18 months, one per six drives.  
They've performed flawlessly.

In practice, journals will typically be quite small relative to the total 
capacity of the SSD.  As such, there will be plenty of room for wear leveling.  
If there was some concern, one could over-provision them even more than they 
already are.

-H

> On Nov 18, 2016, at 05:43, William Josefsson  
> wrote:
> 
> Hi list, I wonder if there is anyone who has experience with Intel
> P3700 SSD drives as journals and can share their experience?
> 
> I was thinking of using the P3700 SSD 400GB as a journal in my Ceph 
> deployment. It is benchmarked on Sebastien Han's SSD page as well.
> However a vendor I spoke to didn't qualify the small sizes of this 
> model as "enterprise grade/warranty". They suggested the 1.8TB or 2TB.
> I have asked for clarification on why this is the case.
> 
> Has anyone experienced any issues with the smaller-size P3700 SSDs? 
> I'm not sure how a smaller drive could affect the quality of the 
> product. Maybe someone can shed light on whether the size of the SSD drive 
> matters? thx will
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph consultants?

2016-10-05 Thread Alan Johnson
I did have some similar issues and resolved them by installing parted 3.2 (I 
can't say if this was definitive), but it worked for me. I also only used 'osd create' 
(after 'disk zap') rather than prepare/activate.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Steve 
Taylor
Sent: Wednesday, October 05, 2016 2:29 PM
To: Tracy Reed; Peter Maloney
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph consultants?

Try using 'ceph-deploy osd create' instead of 'ceph-deploy osd prepare' and 
'ceph-deploy osd activate' when using an entire disk for an OSD. That will 
create a journal partition and co-locate your journal on the same disk with the 
OSD, but that's fine for an initial dev setup.
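In other words, roughly (reusing the hosts/devices from your activate command):

   ceph-deploy osd create ceph02:/dev/sdc ceph03:/dev/sdc

That single step prepares and activates each OSD with the journal co-located on the same disk.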


Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799 |


If you are not the intended recipient of this message or received it 
erroneously, please notify the sender and delete it, together with any 
attachments, and be advised that any dissemination or copying of this message 
is prohibited.


-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Tracy 
Reed
Sent: Wednesday, October 5, 2016 3:12 PM
To: Peter Maloney 
>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph consultants?

On Wed, Oct 05, 2016 at 01:17:52PM PDT, Peter Maloney spake thusly:
> What do you need help with specifically? Setting up ceph isn't very
> complicated... just fixing it when things go wrong should be. What
> type of scale are you working with, and do you already have hardware?
> Or is the problem more to do with integrating it with clients?

Hi Peter,

I agree, setting up Ceph isn't very complicated. I posted to the list on
10/03/16 with the initial problem I have run into under the subject "Can't 
activate OSD". Please refer to that thread as it has logs, details of my setup, 
etc.

I started working on this about a month ago then spent several days on it and a 
few hours with a couple different people on IRC. Nobody has been able to figure 
out how to get my OSD activated. I took a couple weeks off and now I'm back at 
it as I really need to get this going soon.

Basically, I'm following the quickstart guide at 
http://docs.ceph.com/docs/jewel/start/quick-ceph-deploy/ and when I run the 
command to activate the OSDs like so:

ceph-deploy osd activate ceph02:/dev/sdc ceph03:/dev/sdc

I get this in the ceph-deploy log:

[2016-10-03 15:16:10,193][ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 
7.2.1511 Core
[2016-10-03 15:16:10,193][ceph_deploy.osd][DEBUG ] activating host ceph03 disk 
/dev/sdc
[2016-10-03 15:16:10,193][ceph_deploy.osd][DEBUG ] will use init type: systemd
[2016-10-03 15:16:10,194][ceph03][DEBUG ] find the location of an executable
[2016-10-03 15:16:10,200][ceph03][INFO  ] Running command: sudo 
/usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdc
[2016-10-03 15:16:10,377][ceph03][WARNING] main_activate: path = /dev/sdc
[2016-10-03 15:21:10,380][ceph03][WARNING] No data was received after 300 
seconds, disconnecting...
[2016-10-03 15:21:15,387][ceph03][INFO  ] checking OSD status...
[2016-10-03 15:21:15,401][ceph03][DEBUG ] find the location of an executable
[2016-10-03 15:21:15,472][ceph03][INFO  ] Running command: sudo /bin/ceph 
--cluster=ceph osd stat --format=json
[2016-10-03 15:21:15,698][ceph03][INFO  ] Running command: sudo systemctl 
enable ceph.target

More details in other thread.

Where am I going wrong here?

Thanks!

--
Tracy Reed
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph performance expectations

2016-04-07 Thread Alan Johnson
Hi Sergio, yes I think you have also answered most of your own points –

The main thing is to try to avoid excessive seeks on the HDDs. It would help 
to separate the journal and data, but since HDDs are heavily penalized by seek 
and rotational latency, it would not help to place multiple journals on a single 
HDD. Of course with SSDs random access is not an issue, and we have found 
that a SATA SSD as a journal can support around five HDDs, while with PCIe NVMe 
devices such as the Intel 3700 we can sustain much higher ratios.

I would also take a look at iostat to see just how busy the disks are; I would 
estimate that utilization is high. In our testing here we try to use co-located 
journals and data only on high-density servers (72-bay) where there are enough 
devices to share the workload within an OSD server. In that case we do see idle 
time on the disks themselves, but from our observations co-locating on the 
lower-density servers does cause extremely high utilization.

Even if you could only borrow some SSDs for a short duration, at least you would know 
for sure how much the gain would be. The type of SSD is of course also very 
important, and this forum has had a number of good discussions relating to 
endurance and suitability as a journal device.
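If you can later get even a single SSD or NVMe device per node, pointing the journals 
at it is done at OSD-creation time; a rough sketch with ceph-deploy (device names are 
placeholders, and the journal size comes from ceph.conf, e.g. osd journal size = 5120):

   ceph-deploy osd create osdhost:/dev/sdb:/dev/nvme0n1   # data on sdb, journal partition on nvme0n1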

From: Sergio A. de Carvalho Jr. [mailto:scarvalh...@gmail.com]
Sent: Thursday, April 07, 2016 11:18 AM
To: Alan Johnson
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph performance expectations

Thanks, Alan.

Unfortunately, we currently don't have much flexibility in terms of the 
hardware we can get, so adding SSDs might not be possible in the near future. 
What is the best practice here: allocating, for each OSD, one disk just for 
data and one disk just for the journal? Since the journals are rather small (in 
our setup a 5GB partition is created on every disk), wouldn't this be a bit of a 
waste of disk space?

I was wondering if it would make sense to give each OSD one full 4TB disk and 
use one of the 900 GB disks for all journals (12 journals in this case). Would 
that cause even more contention, since different OSDs would then be trying 
to write their journals to the same disk?

On Thu, Apr 7, 2016 at 4:13 PM, Alan Johnson <al...@supermicro.com> wrote:
I would strongly consider your journaling setup, (you do mention that you will 
revisit this) but we have found that co-locating journals does impact 
performance and usually separating them on flash is a good idea. Also not sure 
of your networking setup which can also have significant impact.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Sergio A. de Carvalho Jr.
Sent: Thursday, April 07, 2016 5:01 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Ceph performance expectations

Hi all,

I've setup a testing/development Ceph cluster consisting of 5 Dell PowerEdge 
R720xd servers (256GB RAM, 2x 8-core Xeon E5-2650 @ 2.60 GHz, dual-port 10Gb 
Ethernet, 2x 900GB + 12x 4TB disks) running CentOS 6.5 and Ceph Hammer 0.94.6. 
All servers use one 900GB disk for the root partition and the other 13 disks 
are assigned to OSDs, so we have 5 x 13 = 65 OSDs in total. We also run 1 
monitor on every host. Journals are 5GB partitions on each disk (this is 
something we obviously will need to revisit later). The purpose of this cluster 
will be to serve as a backend storage for Cinder volumes and Glance images in 
an OpenStack cloud.

With this setup, I'm getting what I consider "okay" performance:

# rados -p images bench 5 write
 Maintaining 16 concurrent writes of 4194304 bytes for up to 5 seconds or 0 
objects

Total writes made:  394
Write size: 4194304
Bandwidth (MB/sec): 299.968

Stddev Bandwidth:   127.334
Max bandwidth (MB/sec): 348
Min bandwidth (MB/sec): 0
Average Latency:0.212524
Stddev Latency: 0.13317
Max latency:0.828946
Min latency:0.0707341

Does that look acceptable? How much more can I expect to achieve by 
fine-tuning and perhaps using a more efficient setup?

I do understand the bandwidth above is a product of running 16 concurrent 
writes and rather small object sizes (4MB). Bandwidth drops significantly 
with 64MB objects and 1 thread:

# rados -p images bench 5 write -b 67108864 -t 1
 Maintaining 1 concurrent writes of 67108864 bytes for up to 5 seconds or 0 
objects

Total writes made:  7
Write size: 67108864
Bandwidth (MB/sec): 71.520

Stddev Bandwidth:   24.1897
Max bandwidth (MB/sec): 64
Min bandwidth (MB/sec): 0
Average Latency:0.894792
Stddev Latency: 0.0547502
Max latency:0.99311
Min latency:0.832765

Is such a drop expected?

Now, what I'm really concerned about is upload times. Uploading a 
randomly-generated 1GB file takes a bit too long:

# time r

Re: [ceph-users] ceph-disk activate fails (after 33 osd drives)

2016-02-12 Thread Alan Johnson
Can you check the value of kernel.pid_max? This may have to be increased for 
larger OSD counts, so it may have some bearing.
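As a quick check (the value below is only an example, 4194303 being the usual 64-bit maximum):

   sysctl kernel.pid_max                                   # current limit
   sysctl -w kernel.pid_max=4194303                        # raise it at runtime
   echo "kernel.pid_max = 4194303" >> /etc/sysctl.conf     # make it persistent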


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of John 
Hogenmiller (yt)
Sent: Friday, February 12, 2016 8:52 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] ceph-disk activate fails (after 33 osd drives)



I have 7 servers, each containing 60 x 6TB drives in jbod mode. When I first 
started, I only activated a couple drives on 3 nodes as Ceph OSDs. Yesterday, I 
went to expand to the remaining nodes as well as prepare and activate all the 
drives.

ceph-disk prepare worked just fine. However, ceph-disk activate-all managed to 
only activate 33 drives and failed on the rest. This is consistent across all 7 nodes 
(existing and newly installed). At the end of the day, I have 33 Ceph OSDs 
activated per server and can't activate any more. I did have to bump up the 
pg_num and pgp_num on the pool in order to accommodate the drives that did 
activate. I don't know if having a low pg number during the mass influx of OSDs 
caused an issue or not within the pool. I don't think so because I can only set 
the pg_num to a maximum value determined by the number of known OSDs. But maybe 
you have to expand slowly, increase pg's, expand osds, increase pgs in a slow 
fashion.  I certainly have not seen anything to suggest a magic "33/node 
limit", and I've seen references to servers with up to 72 Ceph OSDs on them.

I then attempted to activate individual ceph osd's and got the same set of 
errors. I even wiped a drive, re-ran `ceph-disk prepare` and `ceph-disk 
activate` to have it fail in the same way.

status:
```
root@ljb01:/home/ceph/rain-cluster# ceph status
cluster 4ebe7995-6a33-42be-bd4d-20f51d02ae45
 health HEALTH_OK
 monmap e5: 5 mons at 
{hail02-r01-06=172.29.4.153:6789/0,hail02-r01-08=172.29.4.155:6789/0,rain02-r01-01=172.29.4.148:6789/0,rain02-r01-03=172.29.4.150:6789/0,rain02-r01-04=172.29.4.151:6789/0}
election epoch 12, quorum 0,1,2,3,4 
rain02-r01-01,rain02-r01-03,rain02-r01-04,hail02-r01-06,hail02-r01-08
 osdmap e1116: 420 osds: 232 up, 232 in
flags sortbitwise
  pgmap v397198: 10872 pgs, 14 pools, 101 MB data, 8456 objects
38666 MB used, 1264 TB / 1264 TB avail
   10872 active+clean
```



Here is what I get when I run ceph-disk prepare on a blank drive:

```
root@rain02-r01-01:/etc/ceph# ceph-disk  prepare  /dev/sdbh1
The operation has completed successfully.
The operation has completed successfully.
meta-data=/dev/sdbh1 isize=2048   agcount=6, agsize=268435455 blks
 =   sectsz=512   attr=2, projid32bit=0
data =   bsize=4096   blocks=1463819665, imaxpct=5
 =   sunit=0  swidth=0 blks
naming   =version 2  bsize=4096   ascii-ci=0
log  =internal log   bsize=4096   blocks=521728, version=2
 =   sectsz=512   sunit=0 blks, lazy-count=1
realtime =none   extsz=4096   blocks=0, rtextents=0
The operation has completed successfully.

root@rain02-r01-01:/etc/ceph# parted /dev/sdh print
Model: ATA HUS726060ALA640 (scsi)
Disk /dev/sdh: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End SizeFile system  Name  Flags
 2  1049kB  5369MB  5368MB   ceph journal
 1  5370MB  6001GB  5996GB  xfs  ceph data
```

And finally the errors from attempting to activate the drive.

```
root@rain02-r01-01:/etc/ceph# ceph-disk activate /dev/sdbh1
got monmap epoch 5
2016-02-12 12:53:43.340526 7f149bc71940 -1 journal FileJournal::_open: unable 
to setup io_context (0) Success
2016-02-12 12:53:43.340748 7f1493f83700 -1 journal io_submit to 0~4096 got (22) 
Invalid argument
2016-02-12 12:53:43.341186 7f149bc71940 -1 
filestore(/var/lib/ceph/tmp/mnt.KRphD_) could not find 
-1/23c2fcde/osd_superblock/0 in index: (2) No such file or directory
os/FileJournal.cc: In function 'int FileJournal::write_aio_bl(off64_t&, 
ceph::bufferlist&, uint64_t)' thread 7f1493f83700 time 2016-02-12 
12:53:43.341355
os/FileJournal.cc: 1469: FAILED assert(0 == "io_submit got unexpected error")
 ceph version 9.2.0 (bb2ecea240f3a1d525bcb35670cb07bd1f0ca299)
 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x8b) 
[0x7f149b767f2b]
 2: (FileJournal::write_aio_bl(long&, ceph::buffer::list&, unsigned 
long)+0x5ad) [0x7f149b5fe27d]
 3: (FileJournal::do_aio_write(ceph::buffer::list&)+0x263) [0x7f149b602e63]
 4: (FileJournal::write_thread_entry()+0x4e4) [0x7f149b607394]
 5: (FileJournal::Writer::entry()+0xd) [0x7f149b44bddd]
 6: (()+0x8182) [0x7f1499d87182]
 7: (clone()+0x6d) [0x7f14980ce47d]
 NOTE: a copy of the executable, or 

Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

2015-12-22 Thread Alan Johnson
I would also add that the journal activity is write intensive so a small part 
of the drive would get excessive writes if the journal and data are co-located 
on an SSD. This would also be the case where an SSD has multiple journals 
associated with many HDDs.

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Wido 
den Hollander
Sent: Tuesday, December 22, 2015 11:46 AM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB fio results

On 12/22/2015 05:36 PM, Tyler Bishop wrote:
> Write endurance is kinda bullshit.
> 
> We have crucial 960gb drives storing data and we've only managed to take 2% 
> off the drives life in the period of a year and hundreds of tb written weekly.
> 
> 
> Stuff is way more durable than anyone gives it credit.
> 
> 

No, that is absolutely not true. I've seen multiple SSDs fail in Ceph clusters. 
Small Samsung 850 Pro SSDs wore out within 4 months in heavily write-intensive 
Ceph clusters.

> - Original Message -
> From: "Lionel Bouton" 
> To: "Andrei Mikhailovsky" , "ceph-users" 
> 
> Sent: Tuesday, December 22, 2015 11:04:26 AM
> Subject: Re: [ceph-users] Intel S3710 400GB and Samsung PM863 480GB 
> fio results
> 
> Le 22/12/2015 13:43, Andrei Mikhailovsky a écrit :
>> Hello guys,
>>
>> Was wondering if anyone has done testing on Samsung PM863 120 GB version to 
>> see how it performs? IMHO the 480GB version seems like a waste for the 
>> journal as you only need to have a small disk size to fit 3-4 osd journals. 
>> Unless you get a far greater durability.
> 
> The problem is endurance. If we use the 480GB for 3 OSDs each on the 
> cluster we might build we expect 3 years (with some margin for error 
> but not including any write amplification at the SSD level) before the 
> SSDs will fail.
> In our context a 120GB model might not even last a year (endurance is 
> 1/4th of the 480GB model). This is why SM863 models will probably be 
> more suitable if you have access to them: you can use smaller ones 
> which cost less and get more endurance (you'll have to check the 
> performance though, usually smaller models have lower IOPS and bandwidth).
> 
>> I am planning to replace my current journal ssds over the next month or so 
>> and would like to find out if there is an a good alternative to the Intel's 
>> 3700/3500 series. 
> 
> 3700 are a safe bet (the 100GB model is rated for ~1.8PBW). 3500 
> models probably don't have enough endurance for many Ceph clusters to 
> be cost effective. The 120GB model is only rated for 70TBW and you 
> have to consider both client writes and rebalance events.
> I'm uneasy with SSDs expected to fail within the life of the system 
> they are in: you can have a cascade effect where an SSD failure brings 
> down several OSDs triggering a rebalance which might make SSDs 
> installed at the same time fail too. In this case in the best scenario 
> you will reach your min_size (>=2) and block any writes which would 
> prevent more SSD failures until you move journals to fresh SSDs. If 
> min_size = 1 you might actually lose data.
> 
> If you expect to replace your current journal SSDs if I were you I 
> would make a staggered deployment over several months/a year to avoid 
> them failing at the same time in case of an unforeseen problem. In 
> addition this would allow to evaluate the performance and behavior of 
> a new SSD model with your hardware (there have been reports of 
> performance problems with some combinations of RAID controllers and 
> SSD models/firmware versions) without impacting your cluster's overall 
> performance too much.
> 
> When using SSDs for journals you have to monitor both :
> * the SSD wear leveling or something equivalent (SMART data may not be 
> available if you use a RAID controller but usually you can get the 
> total amount data written) of each SSD,
> * the client writes on the whole cluster.
> And check periodically what the expected lifespan left there is for 
> each of your SSD based on their current state, average write speed, 
> estimated write amplification (both due to pool's size parameter and 
> the SSD model's inherent write amplification) and the amount of data 
> moved by rebalance events you expect to happen.
> Ideally you should make this computation before choosing the SSD 
> models, but several variables are not always easy to predict and 
> probably will change during the life of your cluster.
> 
> Lionel
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


--
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: 

Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
Or separate the journals, as this will bring the workload down on the spinners 
to 3X rather than 6X.
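As a rough back-of-the-envelope (assuming size=3 replication, as Nick assumes further 
down, and the ~50 MB/s SAS rados bench figure quoted below):

   ~50 MB/s client writes x 3 replicas x 2 (journal + data) ≈ 300 MB/s total hitting the spinners,
   i.e. roughly 40-45 MB/s of largely random I/O per 10K SAS drive in a 7-OSD pool,
   which is already a substantial load with co-located journals.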

From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
Sent: Tuesday, November 24, 2015 1:24 PM
To: Nick Fisk
Cc: Alan Johnson; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

Crad I think you are 100% correct:

rrqm/s  wrqm/s    r/s      w/s   rkB/s      wkB/s  avgrq-sz  avgqu-sz  await  r_await  w_await  svctm  %util
  0.00  369.00  33.00  1405.00  132.00  135656.00    188.86      5.61   4.02    21.94     3.60   0.70 100.00

I was kinda wondering if this might be the case, which is why I was wondering whether 
I should be doing too much in terms of troubleshooting.

So basically what you are saying is that I need to wait for the new version?


Thank you very much everybody!


On Tue, Nov 24, 2015 at 9:35 AM, Nick Fisk <n...@fisk.me.uk> wrote:
You haven’t stated what size replication you are running. Keep in mind that 
with a replication factor of 3, you will be writing 6x the amount of data down 
to disks than what the benchmark says (3x replication x2 for data+journal 
write).

You might actually be near the hardware maximums. What does iostat looks like 
whilst you are running rados bench, are the disks getting maxed out?

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda
Sent: 24 November 2015 16:27
To: Alan Johnson <al...@supermicro.com>
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

7 total servers, 20 GIG pipe between servers, both reads and writes.  The 
network itself has plenty of pipe left, it is averaging 40Mbits/s

Rados Bench SAS 30 writes
 Total time run: 30.591927
Total writes made:  386
Write size: 4194304
Bandwidth (MB/sec): 50.471

Stddev Bandwidth:   48.1052
Max bandwidth (MB/sec): 160
Min bandwidth (MB/sec): 0
Average Latency:1.25908
Stddev Latency: 2.62018
Max latency:21.2809
Min latency:0.029227

Rados Bench SSD writes
 Total time run: 20.425192
Total writes made:  1405
Write size: 4194304
Bandwidth (MB/sec): 275.150

Stddev Bandwidth:   122.565
Max bandwidth (MB/sec): 576
Min bandwidth (MB/sec): 0
Average Latency:0.231803
Stddev Latency: 0.190978
Max latency:0.981022
Min latency:0.0265421


As you can see the SSD pool is better, but not as much better as I would expect SSDs to be.



On Tue, Nov 24, 2015 at 9:10 AM, Alan Johnson <al...@supermicro.com> wrote:
Hard to know without more config details such as the number of servers and the network – GigE 
or 10 GigE; also not sure how you are measuring (reads or writes). You could 
try RADOS bench as a baseline. I would expect more performance with 7 x 10K 
spinners journaled to SSDs. The fact that the SSDs did not perform much better may 
point to a bottleneck elsewhere – the network perhaps?
From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
Sent: Tuesday, November 24, 2015 10:37 AM
To: Alan Johnson
Cc: Haomai Wang; ceph-users@lists.ceph.com

Subject: Re: [ceph-users] Performance question

Yeah they are; that is one thing I was planning on changing. What I am really 
interested in at the moment is a rough idea of expected performance.  I mean, is 100MB/s 
around normal, very low, or "could be better"?

On Tue, Nov 24, 2015 at 8:02 AM, Alan Johnson <al...@supermicro.com> wrote:
Are the journals on the same device – it might be better to use the SSDs for 
journaling since you are not getting better performance with SSDs?

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda
Sent: Monday, November 23, 2015 10:24 PM
To: Haomai Wang
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

 Sorry I should have specified SAS is the 100 MB :) , but to be honest SSD 
isn't much faster.

On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang <haomaiw...@gmail.com> wrote:
On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda <mdoho...@altitudedigital.com> wrote:
> No SSD and SAS are in two separate pools.
>
> On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang 
> <haomaiw...@gmail.com<mailto:haomaiw...@gmail.com>> wrote:
>>
>> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
>> <mdoho...@altitudedigital.com<mailto:mdoho...@altitudedigital.com>> wrote:
>> > I have a Hammer Ceph cluster on 7 nodes with total 14 OSDs.  7 of which
>> > are
>> > SSD a

Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
Hard to know without more config details such as the number of servers and the network – GigE 
or 10 GigE; also not sure how you are measuring (reads or writes). You could 
try RADOS bench as a baseline. I would expect more performance with 7 x 10K 
spinners journaled to SSDs. The fact that the SSDs did not perform much better may 
point to a bottleneck elsewhere – the network perhaps?
From: Marek Dohojda [mailto:mdoho...@altitudedigital.com]
Sent: Tuesday, November 24, 2015 10:37 AM
To: Alan Johnson
Cc: Haomai Wang; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

Yeah they are; that is one thing I was planning on changing. What I am really 
interested in at the moment is a rough idea of expected performance.  I mean, is 100MB/s 
around normal, very low, or "could be better"?

On Tue, Nov 24, 2015 at 8:02 AM, Alan Johnson <al...@supermicro.com> wrote:
Are the journals on the same device – it might be better to use the SSDs for 
journaling since you are not getting better performance with SSDs?

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek Dohojda
Sent: Monday, November 23, 2015 10:24 PM
To: Haomai Wang
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

 Sorry I should have specified SAS is the 100 MB :) , but to be honest SSD 
isn't much faster.

On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang 
<haomaiw...@gmail.com<mailto:haomaiw...@gmail.com>> wrote:
On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda
<mdoho...@altitudedigital.com<mailto:mdoho...@altitudedigital.com>> wrote:
> No SSD and SAS are in two separate pools.
>
> On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang 
> <haomaiw...@gmail.com<mailto:haomaiw...@gmail.com>> wrote:
>>
>> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
>> <mdoho...@altitudedigital.com<mailto:mdoho...@altitudedigital.com>> wrote:
>> > I have a Hammer Ceph cluster on 7 nodes with total 14 OSDs.  7 of which
>> > are
>> > SSD and 7 of which are SAS 10K drives.  I get typically about 100MB IO
>> > rates
>> > on this cluster.

So which pool you get with 100 MB?

>>
>> You mixed up sas and ssd in one pool?
>>
>> >
>> > I have a simple question.  Is 100MB within my configuration what I
>> > should
>> > expect, or should it be higher? I am not sure if I should be looking for
>> > issues, or just accept what I have.
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com<mailto:ceph-users@lists.ceph.com>
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>

--
Best Regards,

Wheat


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Performance question

2015-11-24 Thread Alan Johnson
Are the journals on the same device? It might be better to use the SSDs for 
journaling, since you are not getting better performance with the SSDs.

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Marek 
Dohojda
Sent: Monday, November 23, 2015 10:24 PM
To: Haomai Wang
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Performance question

 Sorry, I should have specified that the SAS pool is the one doing 100 MB/s :), but to 
be honest the SSD pool isn't much faster.

On Mon, Nov 23, 2015 at 7:38 PM, Haomai Wang 
> wrote:
On Tue, Nov 24, 2015 at 10:35 AM, Marek Dohojda
> wrote:
> No SSD and SAS are in two separate pools.
>
> On Mon, Nov 23, 2015 at 7:30 PM, Haomai Wang 
> > wrote:
>>
>> On Tue, Nov 24, 2015 at 10:23 AM, Marek Dohojda
>> > wrote:
>> > I have a Hammer Ceph cluster on 7 nodes with total 14 OSDs.  7 of which
>> > are
>> > SSD and 7 of which are SAS 10K drives.  I get typically about 100MB IO
>> > rates
>> > on this cluster.

So which pool you get with 100 MB?

>>
>> You mixed up sas and ssd in one pool?
>>
>> >
>> > I have a simple question.  Is 100MB within my configuration what I
>> > should
>> > expect, or should it be higher? I am not sure if I should be looking for
>> > issues, or just accept what I have.
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Best Regards,
>>
>> Wheat
>
>


--
Best Regards,

Wheat

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] 2-Node Cluster - possible scenario?

2015-10-25 Thread Alan Johnson
Quorum can be achieved with one monitor node (for testing purposes this would 
be OK, but of course it is a single point of failure). However, the default for 
pools is three-way replication (which can be changed), so it is easier to set up 
three OSD nodes to start with and one monitor node. In your case the monitor 
node would not need to be very powerful, and a lower-spec system could be used, 
allowing your previously suggested mon node to be used instead as a third OSD 
node.
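If you do want to run with only two OSD nodes for testing, the replication defaults can 
be lowered; only a sketch, and not something to rely on for production data:

   # in ceph.conf before creating pools:
   #   osd pool default size = 2
   #   osd pool default min size = 1
   # or per existing pool:
   ceph osd pool set rbd size 2
   ceph osd pool set rbd min_size 1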

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
Hermann Himmelbauer
Sent: Monday, October 26, 2015 12:17 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] 2-Node Cluster - possible scenario?

Hi,
In a little project of mine I plan to start ceph storage with a small setup and 
to be able to scale it up later. Perhaps someone can give me any advice if the 
following (two nodes with OSDs, third node with Monitor only):

- 2 Nodes (enough RAM + CPU), 6*3TB Harddisk for OSDs -> 9TB usable space in 
case of 3* redundancy, 1 Monitor on each of the nodes
- 1 extra node that has no OSDs but runs a third monitor.
- 10GBit Ethernet as storage backbone

Later I may add more nodes + OSDs to expand the cluster in case more storage / 
performance is needed.

Would this work / be stable? Or do I need to spread my OSDs across 3 Ceph nodes 
(e.g. in order to achieve quorum)? In case one of the two OSD nodes fails, would 
the storage still be accessible?

The setup should be used for RBD/QEMU only, no cephfs or the like.

Any hints are appreciated!

Best Regards,
Hermann

--
herm...@qwer.tk
PGP/GPG: 299893C7 (on keyservers)
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Debian repo down?

2015-09-26 Thread Alan Johnson
Yes, I am also getting this error.

Thx

Alan

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Iban 
Cabrillo
Sent: Saturday, September 26, 2015 6:58 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Debian repo down?

Hi cephers,
  I am getting a download error from the Debian repos (I checked it with firefly and 
hammer):

  W: Failed to fetch http://ceph.com/debian-hammer/dists/trusty/InRelease

W: Failed to fetch http://ceph.com/debian-hammer/dists/trusty/Release.gpg  
Cannot initiate the connection to 
download.ceph.com:80 
(2607:f298:6050:51f3:f816:3eff:fe50:5ec). - connect (101: Network is 
unreachable) [IP: 2607:f298:6050:51f3:f816:3eff:fe50:5ec 80]

URL is not available from browser either (http://ceph.com/debian-).

Saludos
--

Iban Cabrillo Bartolome
Instituto de Fisica de Cantabria (IFCA)
Santander, Spain
Tel: +34942200969
PGP PUBLIC KEY: http://pgp.mit.edu/pks/lookup?op=get=0xD9DF0B3D6C8C08AC

Bertrand Russell:
"El problema con el mundo es que los estúpidos están seguros de todo y los 
inteligentes están llenos de dudas"
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] runtime Error for creating ceph MON via ceph-deploy

2015-06-30 Thread Alan Johnson
I use sudo visudo and then, under the existing line
Defaults requiretty
add:
Defaults:user !requiretty

Where user is the username.

Hope this helps?

Alan

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Vida 
Ahmadi
Sent: Monday, June 22, 2015 6:31 AM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] runtime Error for creating ceph MON via ceph-deploy

Hi all,
I am a new user who wants to deploy a simple Ceph cluster.
I started to create the Ceph monitor node via ceph-deploy and got this error:
[ceph_deploy][ERROR ] RuntimeError: remote connection got closed, ensure 
``requiretty`` is disabled for node1
I commented out requiretty and I have password-less access to node1.
Are there any other issues that could cause this error?
Any kind of help will be appreciated.
Note: I am using CentOS 7 and ceph-deploy version 1.5.25.
--
Best regards,
Vida
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin node

2015-06-18 Thread Alan Johnson
And the keyring also needs the correct permissions set, as otherwise it will give this 
error.


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of B, 
Naga Venkata
Sent: Thursday, June 18, 2015 10:07 AM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco); 
ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH 
admin node

Do you have admin keyring in /etc/ceph directory?

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus 
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:35 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin 
node
Importance: High

Hello Everyone,

I have set up a new cluster with the Ceph Hammer version (0.94.2). The install went 
through fine without any issues, but from the admin node I am not able to 
execute any of the Ceph commands.

Error:
root@ceph-main:/cephcluster# ceph auth export
2015-06-18 12:43:28.922367 7f54d286b700 -1 monclient(hunting): ERROR: missing 
keyring, cannot use cephx for authentication
2015-06-18 12:43:28.922375 7f54d286b700  0 librados: client.admin 
initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound

I googled for this and only found one article relevant, but it did not solve my 
problem.
http://t75390.file-systems-ceph-user.file-systemstalk.us/newbie-error-connecting-to-cluster-permissionerror-t75390.html

Is there any other workaround or fix for this ??

Regards
Teclus Dsouza
Technical Architect
Tech Mahindra


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin node

2015-06-18 Thread Alan Johnson
For the permissions use  sudo chmod +r /etc/ceph/ceph.client.admin.keyring


From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus 
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:21 AM
To: B, Naga Venkata; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH 
admin node

Hello Naga,

The keyring file is present under a folder I created for ceph.   Are you saying 
the same needs to be copied to the /etc/ceph folder ?

Regards
Teclus

From: B, Naga Venkata [mailto:nag...@hp.com]
Sent: Thursday, June 18, 2015 10:37 PM
To: Teclus Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco); 
ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH 
admin node

Do you have admin keyring in /etc/ceph directory?

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Teclus 
Dsouza -X (teclus - TECH MAHINDRA LIM at Cisco)
Sent: Thursday, June 18, 2015 10:35 PM
To: ceph-users@lists.ceph.com
Subject: [ceph-users] Hammer 0.94.2: Error when running commands on CEPH admin 
node
Importance: High

Hello Everyone,

I have set up a new cluster with the Ceph Hammer version (0.94.2). The install went 
through fine without any issues, but from the admin node I am not able to 
execute any of the Ceph commands.

Error:
root@ceph-main:/cephcluster# ceph auth export
2015-06-18 12:43:28.922367 7f54d286b700 -1 monclient(hunting): ERROR: missing 
keyring, cannot use cephx for authentication
2015-06-18 12:43:28.922375 7f54d286b700  0 librados: client.admin 
initialization error (2) No such file or directory
Error connecting to cluster: ObjectNotFound

I googled for this and only found one article relevant, but it did not solve my 
problem.
http://t75390.file-systems-ceph-user.file-systemstalk.us/newbie-error-connecting-to-cluster-permissionerror-t75390.html

Is there any other workaround or fix for this ??

Regards
Teclus Dsouza
Technical Architect
Tech Mahindra


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph-deploy issues

2015-02-25 Thread Alan Johnson
Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below?

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, 
Pankaj
Sent: Wednesday, February 25, 2015 4:04 PM
To: Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph-deploy issues

I figured it out, at least the first hurdle.
I have 2 networks, 10.18.240.x and 192.168.240.xx.
I was specifying different public and cluster addresses. Somehow it doesn't 
like that.
Maybe the issue really is that ceph-deploy is old. I am on ARM64 and this is the 
latest I have for Ubuntu.

After I got past the first hurdle, now I get this message :

2015-02-26 00:03:31.642166 3ff94c7f1f0 -1 monclient(hunting): ERROR: missing 
keyring, cannot use cephx for authentication
2015-02-26 00:03:31.642390 3ff94c7f1f0  0 librados: client.admin initialization 
error (2) No such file or directory Error connecting to cluster: ObjectNotFound


Thanks
Pankaj

-Original Message-
From: Travis Rhoden [mailto:trho...@gmail.com]
Sent: Wednesday, February 25, 2015 3:55 PM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph-deploy issues

Hi Pankaj,

I can't say that it will fix the issue, but the first thing I would encourage 
is to use the latest ceph-deploy.

you are using 1.4.0, which is quite old.  The latest is 1.5.21.

 - Travis

On Wed, Feb 25, 2015 at 3:38 PM, Garg, Pankaj pankaj.g...@caviumnetworks.com 
wrote:
 Hi,

 I had a successful ceph cluster that I am rebuilding. I have 
 completely uninstalled ceph and any remnants and directories and config files.

 While setting up the new cluster, I follow the Ceph-deploy 
 documentation as described before. I seem to get an error now (tried many 
 times) :



 The ceph-deploy mon create-initial command fails in the gather keys step. This 
 never happened before, and I'm not sure why it's failing now.







 cephuser@ceph1:~/my-cluster$ ceph-deploy mon create-initial

 [ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy mon 
 create-initial

 [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph1

 [ceph_deploy.mon][DEBUG ] detecting platform for host ceph1 ...

 [ceph1][DEBUG ] connected to host: ceph1

 [ceph1][DEBUG ] detect platform information from remote host

 [ceph1][DEBUG ] detect machine type

 [ceph_deploy.mon][INFO  ] distro info: Ubuntu 14.04 trusty

 [ceph1][DEBUG ] determining if provided host has same hostname in 
 remote

 [ceph1][DEBUG ] get remote short hostname

 [ceph1][DEBUG ] deploying mon to ceph1

 [ceph1][DEBUG ] get remote short hostname

 [ceph1][DEBUG ] remote hostname: ceph1

 [ceph1][DEBUG ] write cluster configuration to 
 /etc/ceph/{cluster}.conf

 [ceph1][DEBUG ] create the mon path if it does not exist

 [ceph1][DEBUG ] checking for done path: 
 /var/lib/ceph/mon/ceph-ceph1/done

 [ceph1][DEBUG ] done path does not exist: 
 /var/lib/ceph/mon/ceph-ceph1/done

 [ceph1][INFO  ] creating keyring file:
 /var/lib/ceph/tmp/ceph-ceph1.mon.keyring

 [ceph1][DEBUG ] create the monitor keyring file

 [ceph1][INFO  ] Running command: sudo ceph-mon --cluster ceph --mkfs 
 -i
 ceph1 --keyring /var/lib/ceph/tmp/ceph-ceph1.mon.keyring

 [ceph1][DEBUG ] ceph-mon: set fsid to
 099013d5-126d-45b4-a98e-5f0c386805a4

 [ceph1][DEBUG ] ceph-mon: created monfs at
 /var/lib/ceph/mon/ceph-ceph1 for
 mon.ceph1

 [ceph1][INFO  ] unlinking keyring file 
 /var/lib/ceph/tmp/ceph-ceph1.mon.keyring

 [ceph1][DEBUG ] create a done file to avoid re-doing the mon 
 deployment

 [ceph1][DEBUG ] create the init path if it does not exist

 [ceph1][DEBUG ] locating the `service` executable...

 [ceph1][INFO  ] Running command: sudo initctl emit ceph-mon 
 cluster=ceph
 id=ceph1

 [ceph1][INFO  ] Running command: sudo ceph --cluster=ceph 
 --admin-daemon /var/run/ceph/ceph-mon.ceph1.asok mon_status

 [ceph1][DEBUG ]
 **
 **

 [ceph1][DEBUG ] status for monitor: mon.ceph1

 [ceph1][DEBUG ] {

 [ceph1][DEBUG ]   election_epoch: 2,

 [ceph1][DEBUG ]   extra_probe_peers: [

 [ceph1][DEBUG ] 192.168.240.101:6789/0

 [ceph1][DEBUG ]   ],

 [ceph1][DEBUG ]   monmap: {

 [ceph1][DEBUG ] created: 0.00,

 [ceph1][DEBUG ] epoch: 1,

 [ceph1][DEBUG ] fsid: 099013d5-126d-45b4-a98e-5f0c386805a4,

 [ceph1][DEBUG ] modified: 0.00,

 [ceph1][DEBUG ] mons: [

 [ceph1][DEBUG ]   {

 [ceph1][DEBUG ] addr: 10.18.240.101:6789/0,

 [ceph1][DEBUG ] name: ceph1,

 [ceph1][DEBUG ] rank: 0

 [ceph1][DEBUG ]   }

 [ceph1][DEBUG ] ]

 [ceph1][DEBUG ]   },

 [ceph1][DEBUG ]   name: ceph1,

 [ceph1][DEBUG ]   outside_quorum: [],

 [ceph1][DEBUG ]   quorum: [

 [ceph1][DEBUG ] 0

 [ceph1][DEBUG ]   ],

 [ceph1][DEBUG ]   rank: 0,

 [ceph1][DEBUG ]   state: leader,

 [ceph1][DEBUG ]   sync_provider: []

 [ceph1][DEBUG ] }

 [ceph1][DEBUG ]
 

Re: [ceph-users] Ceph-deploy issues

2015-02-25 Thread Alan Johnson
Not sure, Pankaj. I always do this after deploying Ceph, since I spent a long 
time on it earlier. I think it is mentioned in the docs, but I may have 
overlooked it at the time, so now it is imprinted heavily. I have been using 
multiple networks as well, but had to do it in both cases.
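
Put differently, it is just a read-permission issue on the admin keyring, so 
the alternative to the one-off chmod is simply to run client commands through 
sudo. A sketch of the two equivalent options:

    # make the admin keyring readable by the non-root user (what worked here)
    sudo chmod +r /etc/ceph/ceph.client.admin.keyring
    # or leave the permissions alone and run client commands as root
    sudo ceph -s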

Thx

-Original Message-
From: Garg, Pankaj [mailto:pankaj.g...@caviumnetworks.com] 
Sent: Wednesday, February 25, 2015 4:26 PM
To: Alan Johnson; Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Ceph-deploy issues

Hi Alan,
Thanks. Worked like magic.
Why did this happen, though? I have deployed on the same machine using the same 
ceph-deploy and it was fine.
Not sure if anything is different this time, except my network, which shouldn't 
affect this.

Thanks
Pankaj

-Original Message-
From: Alan Johnson [mailto:al...@supermicro.com]
Sent: Wednesday, February 25, 2015 4:24 PM
To: Garg, Pankaj; Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] Ceph-deploy issues

Try sudo chmod +r /etc/ceph/ceph.client.admin.keyring for the error below?

-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Garg, 
Pankaj
Sent: Wednesday, February 25, 2015 4:04 PM
To: Travis Rhoden
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph-deploy issues

I figured it out... at least the first hurdle.
I have two networks, 10.18.240.x and 192.168.240.x.
I was specifying different public and cluster addresses, and somehow it doesn't 
like that.
Maybe the issue really is that my ceph-deploy is old. I am on ARM64, and this is 
the latest version I have for Ubuntu.

After I got past the first hurdle, I now get this message:

2015-02-26 00:03:31.642166 3ff94c7f1f0 -1 monclient(hunting): ERROR: missing 
keyring, cannot use cephx for authentication
2015-02-26 00:03:31.642390 3ff94c7f1f0  0 librados: client.admin initialization 
error (2) No such file or directory
Error connecting to cluster: ObjectNotFound


Thanks
Pankaj

-Original Message-
From: Travis Rhoden [mailto:trho...@gmail.com]
Sent: Wednesday, February 25, 2015 3:55 PM
To: Garg, Pankaj
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph-deploy issues

Hi Pankaj,

I can't say that it will fix the issue, but the first thing I would encourage 
is to use the latest ceph-deploy.

You are using 1.4.0, which is quite old. The latest is 1.5.21.

 - Travis

On Wed, Feb 25, 2015 at 3:38 PM, Garg, Pankaj pankaj.g...@caviumnetworks.com 
wrote:
 Hi,

 I had a working Ceph cluster that I am rebuilding. I have completely 
 uninstalled Ceph and removed any remnants, directories, and config files.

 While setting up the new cluster, I followed the ceph-deploy 
 documentation as before. I now get an error (I have tried many 
 times):



 The ceph-deploy mon create-initial command fails in the gather-keys step. This 
 never happened before, and I'm not sure why it's failing now.








Re: [ceph-users] Reply: Re: can not add osd

2015-02-10 Thread Alan Johnson
Just wondering if this was ever resolved. I am seeing the exact same issue 
since moving from CentOS 6.5 on Firefly to CentOS 7 on the Giant release: when 
running "ceph-deploy osd prepare . . . " the script fails to umount and then 
posts a "device is busy" message. Details are in yang bin18's posting below. 
Ubuntu Trusty with Giant seems OK. I have redeployed the cluster and also tried 
deploying on virtual machines as well as physical ones. The setup is a minimal 
3 x OSD nodes plus one monitor node.



From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
yang.bi...@zte.com.cn
Sent: Monday, December 22, 2014 2:58 AM
To: Karan Singh
Cc: ceph-users
Subject: [ceph-users] Reply: Re: can not add osd

Hi

I have deployed the Ceph OSD according to the official Ceph docs, and the same 
error came out again.




From: Karan Singh karan.si...@csc.fi
To: yang.bi...@zte.com.cn
Cc: ceph-users ceph-users@lists.ceph.com
Date: 2014/12/16 22:51
Subject: Re: [ceph-users] can not add osd




Hi

Your logs do not provide much information. If you are following any other 
documentation for Ceph, I would recommend that you follow the official Ceph 
docs:

http://ceph.com/docs/master/start/quick-start-preflight/




Karan Singh
Systems Specialist , Storage Platforms
CSC - IT Center for Science,
Keilaranta 14, P. O. Box 405, FIN-02101 Espoo, Finland
mobile: +358 503 812758
tel. +358 9 4572001
fax +358 9 4572302
http://www.csc.fi/


On 16 Dec 2014, at 09:55, yang.bi...@zte.com.cn wrote:

Hi

When I execute ceph-deploy osd prepare node3:/dev/sdb, an error like this 
always comes out:

[node3][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- 
/var/lib/ceph/tmp/mnt.u2KXW3
[node3][WARNIN] umount: /var/lib/ceph/tmp/mnt.u2KXW3: target is busy.

Then when I execute /bin/umount -- /var/lib/ceph/tmp/mnt.u2KXW3 by hand, the 
result is OK.
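
If it helps, when the umount fails it is usually worth checking what is still 
holding the temporary mount before retrying. A sketch using the path from the 
log above:

    # show which processes keep the mount point busy
    sudo fuser -vm /var/lib/ceph/tmp/mnt.u2KXW3
    sudo lsof +D /var/lib/ceph/tmp/mnt.u2KXW3
    # as a last resort, detach the mount lazily
    sudo umount -l /var/lib/ceph/tmp/mnt.u2KXW3

On CentOS 7 this kind of failure is often a race with udev re-probing the 
partitions while ceph-disk is still working, so retrying after a short pause 
sometimes succeeds where the scripted umount did not.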





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com









___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com