[ceph-users] Does marking OSD "down" trigger "AdvMap" event in other OSD?

2016-10-16 Thread xxhdx1985126
Hi, everyone. If one OSD's state transitions from up to down, for example because its process is killed, will an "AdvMap" event be triggered on the other related OSDs?
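(Not an answer to the internals question, but for anyone who wants to observe the effect from the outside: a minimal sketch, assuming a surviving osd.4 whose admin socket is reachable on the local host; the OSD ids are placeholders. It only shows that peers pick up a newer OSD map epoch when another OSD is marked down.)

    ceph daemon osd.4 status     # note "newest_map" on a surviving OSD (run on its host)
    ceph osd down 3              # mark osd.3 down (or stop its process)
    sleep 5
    ceph daemon osd.4 status     # "newest_map" should have advanced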

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-16 Thread Christian Balzer
Hello, On Sun, 16 Oct 2016 19:07:17 +0800 William Josefsson wrote: > OK, thanks for sharing. Yes, my journals are Intel S3610 200GB drives, which I split into 4 partitions of ~45GB each. When I run ceph-deploy I declare these as the journals of the OSDs. The size (45GB) of these journals is only going
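(For reference, the filestore journal sizing rule of thumb from the Ceph docs, assuming the default filestore_max_sync_interval of 5 s and roughly 200 MB/s per SAS spindle; the numbers below are illustrative, not measurements.)

    # osd journal size >= 2 * (expected throughput * filestore max sync interval)
    # e.g. 2 * 200 MB/s * 5 s = ~2 GB per journal,
    # so a 45 GB journal partition is far larger than filestore will ever fill.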

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-16 Thread Oliver Dzombic
Hi, it's using LIO, which means it will have the same compatibility issues with VMware. So I am wondering why they call it an ideal solution. -- Mit freundlichen Gruessen / Best regards Oliver Dzombic IP-Interactive mailto:i...@ip-interactive.de

Re: [ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-16 Thread Gandalf Corvotempesta
Really interesting project. On 16 Oct 2016 at 18:57, "Maged Mokhtar" wrote: > Hello, I am happy to announce PetaSAN, an open source scale-out SAN that uses Ceph storage and the LIO iSCSI target. Visit us at: www.petasan.org. Your feedback will be much

[ceph-users] new Open Source Ceph based iSCSI SAN project

2016-10-16 Thread Maged Mokhtar
Hello, I am happy to announce PetaSAN, an open source scale-out SAN that uses Ceph storage and the LIO iSCSI target. Visit us at: www.petasan.org. Your feedback will be much appreciated. Maged Mokhtar
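(For anyone curious about the underlying approach rather than PetaSAN's own tooling: a rough sketch of exporting an RBD image over LIO with targetcli, using a kernel-mapped device. Pool, image and IQN names are made up, and portal/ACL setup is omitted.)

    rbd create rbd/iscsi-lun0 --size 102400     # hypothetical 100 GB image
    rbd map rbd/iscsi-lun0                      # appears as e.g. /dev/rbd0
    targetcli /backstores/block create name=lun0 dev=/dev/rbd0
    targetcli /iscsi create iqn.2016-10.org.example:target1
    targetcli /iscsi/iqn.2016-10.org.example:target1/tpg1/luns create /backstores/block/lun0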

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-16 Thread William Josefsson
OK, thanks for sharing. Yes, my journals are Intel S3610 200GB drives, which I split into 4 partitions of ~45GB each. When I run ceph-deploy I declare these as the journals of the OSDs. I was trying to understand the blocking, and how much my SAS OSDs affect my performance. I have a total of 9 hosts, 158
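(A sketch of how that layout is typically declared, assuming the old ceph-deploy {host}:{data-device}:{journal-partition} syntax; host and device names are made up, so adjust to your hardware.)

    # carve four ~45 GB journal partitions on the S3610 (one way to do it)
    for n in 1 2 3 4; do sgdisk --new=${n}:0:+45G /dev/sda; done
    # one data disk : one journal partition per OSD
    ceph-deploy osd prepare ceph-osd1:/dev/sdb:/dev/sda1
    ceph-deploy osd prepare ceph-osd1:/dev/sdc:/dev/sda2
    ceph-deploy osd prepare ceph-osd1:/dev/sdd:/dev/sda3
    ceph-deploy osd prepare ceph-osd1:/dev/sde:/dev/sda4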

Re: [ceph-users] cephfs slow delete

2016-10-16 Thread John Spray
On Sat, Oct 15, 2016 at 1:36 AM, Heller, Chris wrote: > Just a thought, but since a directory tree is a first-class item in CephFS, could the wire protocol be extended with a "recursive delete" operation, specifically for cases like this? In principle yes, but the
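(Until something like that exists, a client-side workaround is to fan the unlinks out over several workers instead of one serial rm -rf; the mount path below is hypothetical, and the actual data purge still happens asynchronously in the MDS afterwards.)

    find /mnt/cephfs/big-tree -type f -print0 | xargs -0 -P 8 -n 64 rm -f
    find /mnt/cephfs/big-tree -depth -type d -empty -delete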

[ceph-users] Ubuntu repo's broken

2016-10-16 Thread Jon Morby (FidoNet)
Morning. It’s been a few days now since the outage; however, we’re still unable to install new nodes. It seems the repos are broken … and have been for at least 2 days now (so not just a brief momentary issue caused by an update). [osd04][WARNIN] E: Package 'ceph-osd' has no installation
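(Not a fix for the broken mirror, but a quick way to check what apt can actually see; the jewel release and xenial codename below are assumptions.)

    echo "deb https://download.ceph.com/debian-jewel/ xenial main" | sudo tee /etc/apt/sources.list.d/ceph.list
    wget -qO- https://download.ceph.com/keys/release.asc | sudo apt-key add -
    sudo apt-get update && apt-cache policy ceph-osd    # shows whether an installation candidate exists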

Re: [ceph-users] RBD with SSD journals and SAS OSDs

2016-10-16 Thread Christian Balzer
Hello, On Sun, 16 Oct 2016 15:03:24 +0800 William Josefsson wrote: > Hi list, while I know that writes in the RADOS backend are sync(), can anyone please explain when the cluster will return on a write call for RBD from VMs? Will data be considered synced once written to the journal or all
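(One way to see the answer empirically: measure single-threaded sync write latency against an RBD image with fio's rbd engine; the pool and image names are placeholders. If writes are acknowledged once they reach the SSD journals on the acting set, latencies should track the SSDs plus network round trips rather than the SAS spindles.)

    fio --name=rbd-sync-lat --ioengine=rbd --clientname=admin --pool=rbd --rbdname=testimg \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --direct=1 --runtime=60 --time_based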

[ceph-users] RBD with SSD journals and SAS OSDs

2016-10-16 Thread William Josefsson
Hi list, while I know that writes in the RADOS backend are sync(), can anyone please explain when the cluster will return on a write call for RBD from VMs? Will data be considered synced once written to the journal or all the way to the OSD drive? Each host in my cluster has 5x Intel S3610, and