Re: [ceph-users] Antw: Re: SSS Caching

2016-10-27 Thread Ashley Merrick
Hello, That is what I was thinking of doing as a plan B; it just requires more overhead in configuring each VM, whereas a cache tier does not require any per-VM creation work. I guess, as Christian said, it depends on the workload as to which is the best option. ,Ashley -Original Message- From:
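For reference, a cache tier is attached per pool rather than per VM; a minimal sketch with placeholder pool names (not taken from this thread):

  ceph osd tier add rbd cache-pool
  ceph osd tier cache-mode cache-pool writeback
  ceph osd tier set-overlay rbd cache-pool
  # basic sizing the cache tier needs before use (values are examples only)
  ceph osd pool set cache-pool hit_set_type bloom
  ceph osd pool set cache-pool target_max_bytes 100000000000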

Re: [ceph-users] Dead pool recovery - Nightmare

2016-10-27 Thread Wido den Hollander
> On 27 October 2016 at 11:23, Ralf Zerres wrote: > > Hello community, hello Ceph developers, > > My name is Ralf, working as an IT consultant. In this particular case I support a German customer running a 2-node Ceph cluster. > > This customer is

[ceph-users] Hammer OSD memory increase when add new machine

2016-10-27 Thread Dong Wu
Hi all, We have a Ceph cluster used only for RBD. The cluster contains several groups of machines; each group contains several machines, and each machine has 12 SSDs, each SSD acting as an OSD (journal and data together). E.g.: group1: machine1~machine12, group2: machine13~machine24, ... Each group is separated
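If the growth turns out to be memory held by tcmalloc during backfill rather than a leak, the heap commands available in Hammer can show and return it; a sketch only (the OSD id is a placeholder), and whether this applies here is not established in the thread:

  ceph tell osd.12 heap stats
  ceph tell osd.12 heap release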

Re: [ceph-users] CephFS: ceph-fuse and "remount" option

2016-10-27 Thread Yan, Zheng
On Thu, Oct 27, 2016 at 6:30 PM, Florent B wrote: > Hi everyone, > > I would like to know why the "remount" option is not handled by ceph-fuse? > Is it a technical restriction, or is it just not implemented yet? > The remount handler of the FUSE kernel module does not do anything. If
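For context, the remount in question is the ordinary mount(8) remount of an existing mount point; the path below is just an example:

  mount -o remount /mnt/cephfs
  # per the reply above, for a ceph-fuse mount the FUSE kernel module's remount handler is effectively a no-op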

[ceph-users] Dead pool recovery - Nightmare

2016-10-27 Thread Ralf Zerres
Hello community, hello Ceph developers, My name is Ralf, working as an IT consultant. In this particular case I support a German customer running a 2-node Ceph cluster. This customer is struggling with a disastrous situation, where a full pool of RBD data (about 12 TB of valid production data) is

[ceph-users] RGW: Delete orphan period for non-existent realm

2016-10-27 Thread Richard Chan
Hi Cephers, In my period list I am seeing an orphan period { "periods": [ "24dca961-5761-4bd1-972b-685a57e2fcf7:staging", "a5632c6c4001615e57e587c129c1ad93:staging", "fac3496d-156f-4c09-9654-179ad44091b9" ] } The realm a5632c6c4001615e57e587c129c1ad93 no longer
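For reference, periods can be listed and removed by id with radosgw-admin; a sketch only, with a placeholder id (do not delete the active period), and whether this works for a :staging entry whose realm is gone is exactly what the rest of the thread discusses:

  radosgw-admin period list
  radosgw-admin period delete --period=<period-id>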

Re: [ceph-users] RGW: Delete orphan period for non-existent realm

2016-10-27 Thread Orit Wasserman
On Thu, Oct 27, 2016 at 12:30 PM, Richard Chan wrote: > Hi Cephers, > > In my period list I am seeing an orphan period > > { > "periods": [ > "24dca961-5761-4bd1-972b-685a57e2fcf7:staging", > "a5632c6c4001615e57e587c129c1ad93:staging", >

Re: [ceph-users] Dead pool recovery - Nightmare

2016-10-27 Thread Wido den Hollander
> On 27 October 2016 at 11:46, Ralf Zerres wrote: > > Here we go ... > > > Wido den Hollander wrote on 27 October 2016 at 11:35: > > > > On 27 October 2016 at 11:23, Ralf Zerres wrote: > > > >

Re: [ceph-users] Dead pool recovery - Nightmare

2016-10-27 Thread Ralf Zerres
> Wido den Hollander wrote on 27 October 2016 at 12:37: > > Bringing back to the list > > On 27 October 2016 at 12:08, Ralf Zerres wrote: > > > > Wido den Hollander wrote on 27 October 2016 at 11:51: > > >

[ceph-users] Antw: Re: SSS Caching

2016-10-27 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07: Hi, > Hello, > On Wed, 26 Oct 2016 15:40:00 + Ashley Merrick wrote: >> Hello All, >> Currently running a Ceph cluster connected to KVM via KRBD and used only for this purpose. >> Is

Re: [ceph-users] Dead pool recovery - Nightmare

2016-10-27 Thread Wido den Hollander
Bringing back to the list > On 27 October 2016 at 12:08, Ralf Zerres wrote: > > > Wido den Hollander wrote on 27 October 2016 at 11:51: > > > > On 27 October 2016 at 11:46, Ralf Zerres wrote: > >

Re: [ceph-users] Qcow2 and RBD Import

2016-10-27 Thread Eugen Block
Hi, from my limited experience (it's really not much) you should not store the images as qcow2, but I'm not sure if it depends on your purpose. I have an OpenStack environment, and when I tried to get it running with Ceph I uploaded my images as qcow2. But then I had problems getting VMs
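As an illustration of uploading an image as raw instead of qcow2 (a sketch; the file and image names are placeholders, and converting first with qemu-img is assumed):

  qemu-img convert -f qcow2 -O raw cirros.qcow2 cirros.raw
  openstack image create --disk-format raw --container-format bare --file cirros.raw cirros-raw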

[ceph-users] ceph df show 8E pool

2016-10-27 Thread Dan van der Ster
Hi all, One of our 10.2.3 clusters has a pool with bogus statistics. The pool is empty, but it shows 8E of data used and 2^63-1 objects.
POOLS:
    NAME  ID  USED  %USED  MAX AVAIL  OBJECTS
    test  19  8E    0      1362T      9223372036854775807
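A quick check shows the object count is exactly 2^63 - 1 (INT64_MAX), which suggests a wrapped or clamped counter rather than real objects; that interpretation is a guess, the arithmetic is not:

  python -c 'print(2**63 - 1)'
  # 9223372036854775807
  rados df   # cross-check against the per-pool numbers ceph df reports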

Re: [ceph-users] RGW: Delete orphan period for non-existent realm

2016-10-27 Thread George Mihaiescu
Are these problems fixed in the latest version of the Debian packages? I'm a fairly large user with a lot of existing data stored in .rgw.buckets pool, and I'm running Hammer. I just hope that upgrading to Jewel so long after its release will not cause loss of access to this data for my users,

[ceph-users] [kvm] Fio direct i/o read faster than buffered i/o

2016-10-27 Thread Piotr Kopec
Hi all, I am getting very strange results while testing Ceph performance using fio from a KVM guest VM. My environment is as follows: - Ceph version 0.94.5 - 4x storage nodes, 36 SATA disks each, journals on SSD, 10Gbps links - KVM as hypervisor (qemu-kvm 2.0.0+dfsg-2ubuntu1.22) on
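For comparison, the two cases typically look like this inside the guest; a sketch with placeholder device and job values, not the poster's exact job file:

  # buffered sequential read
  fio --name=buffered --filename=/dev/vdb --rw=read --bs=4M --ioengine=libaio --iodepth=16 --runtime=60 --direct=0
  # O_DIRECT sequential read
  fio --name=direct --filename=/dev/vdb --rw=read --bs=4M --ioengine=libaio --iodepth=16 --runtime=60 --direct=1

One commonly cited factor, worth verifying rather than assuming, is that libaio submission is only truly asynchronous with O_DIRECT, so the buffered run may not actually drive the queue depth it asks for.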

[ceph-users] CephFS in existing pool namespace

2016-10-27 Thread Reed Dier
Looking to add CephFS into our Ceph cluster (10.2.3), and trying to plan for that addition. Currently only using RADOS on a single replicated, non-EC, pool, no RBD or RGW, and segmenting logically in namespaces. No auth scoping at this time, but likely something we will be moving to in the
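For reference, CephFS on 10.2.3 is created against dedicated data and metadata pools; a minimal sketch with placeholder pool names and PG counts (whether the data could instead share the existing pool via a namespace is the open question here, not something this sketch answers):

  ceph osd pool create cephfs_metadata 64
  ceph osd pool create cephfs_data 256
  ceph fs new cephfs cephfs_metadata cephfs_data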

Re: [ceph-users] ceph df show 8E pool

2016-10-27 Thread Goncalo Borges
Hi Dan... Have you tried 'rados df' to see if it agrees with 'ceph df' ? Cheers G. From: ceph-users [ceph-users-boun...@lists.ceph.com] on behalf of Dan van der Ster [d...@vanderster.com] Sent: 28 October 2016 03:01 To: ceph-users Subject: [ceph-users]

[ceph-users] Ceph mount problem "can't read superblock"

2016-10-27 Thread Владимир Спирин
Hello! I have 5 nodes: 3 with OSDs, one of which also runs a mon and an MDS, and 2 with mon and MDS only. After migrating from Ubuntu 14.04 LTS to 16.04 LTS I rebuilt my cluster and got a "can't read superblock" error when mounting with the kernel driver. I tried to mount with FUSE, but that did not solve this
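For reference, the two mount paths being compared look roughly like this; monitor address, mount point and credentials are placeholders:

  mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secret=<key>
  ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs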

Re: [ceph-users] rgw / s3website, MethodNotAllowed on Jewel 10.2.3

2016-10-27 Thread Robin H. Johnson
On Wed, Oct 26, 2016 at 11:43:15AM +0200, Trygve Vea wrote: > Hi! > > I'm trying to get s3website working on one of our Rados Gateway > installations, and I'm having some problems finding out what needs to > be done for this to work. It looks like this is a halfway secret > feature, as I can
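A sketch of the gateway configuration usually associated with s3website support; the section name and hostnames are examples, not taken from the thread:

  [client.rgw.gateway]
  rgw enable static website = true
  rgw dns name = s3.example.com
  rgw dns s3website name = s3-website.example.com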

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Ahmed Mostafa
So I couldn't actually wait till the morning. I set rbd cache to false and tried to create the same number of instances, but the same issue happened again. I want to note that if I rebooted any of the virtual machines that had this issue, it worked without any problem afterwards. Does this mean

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Keynes_Lee
We didn't enable rbd caching. The OS file systems on which we hit this issue include: ext3, ext4, xfs, NTFS (Windows). Keynes Lee 李俊賢 Direct: +886-2-6612-1025 Mobile: +886-9-1882-3787 Fax: +886-2-6612-1991 E-Mail:

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Ahmed Mostafa
Well, for me, now I know my issue. It is indeed an over-utilization issue, but not related to Ceph. The cluster is connected via 1G interfaces, which basically get saturated by all the bandwidth generated from these instances trying to read their root filesystems and mount them, which is stored in

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Jason Dillaman
On Thu, Oct 27, 2016 at 6:29 PM, Ahmed Mostafa wrote: > I set rbd cache to false and tried to create the same number of instances, > but the same issue happened again > Did you disable it both in your hypervisor node's ceph.conf and also disable the cache via the QEMU
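A sketch of disabling the cache in both places being asked about; the libvirt fragment is an example of one common way QEMU's cache mode is set, not necessarily how this deployment configures it:

  # on the hypervisor, in ceph.conf
  [client]
  rbd cache = false

  # and in the libvirt domain XML for the rbd disk
  <driver name='qemu' type='raw' cache='none'/>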

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Keynes_Lee
I think this issue may not be related to your poor hardware. Our cluster has 3 Ceph monitors and 4 OSD nodes. Each server has 2 CPUs (Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz) and 32 GB memory. OSD nodes have 2 SSDs for journal disks and 8 SATA disks (6TB / 7200 rpm). All of them were connected to each

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Erick Perez - Quadrian Enterprises
On Thu, Oct 27, 2016 at 9:59 AM, Erick Perez - Quadrian Enterprises < epe...@quadrianweb.com> wrote: > > > On Thu, Oct 27, 2016 at 9:19 AM, Richard Chan < > rich...@treeboxsolutions.com> wrote: > >> Dell N40xx is cheap if you buy the RJ-45 versions. It will save a lot on >> optics. >> Our NICs

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Erik McCormick
On Oct 27, 2016 3:16 PM, "Oliver Dzombic" wrote: > > Hi, > > I can recommend > > X710-DA2 > We also use this NIC for everything. > Our 10G switching goes through our blade infrastructure, so I can't recommend > something for you there. > > I assume that the usual

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Jason Dillaman
The only effect I could see out of a highly overloaded system would be that the OSDs might appear to become unresponsive to the VMs. Are any of you using cache tiering or librbd cache? For the latter, there was one issue [1] that can result in read corruption that affects hammer and prior

[ceph-users] Qcow2 and RBD Import

2016-10-27 Thread Ahmed Mostafa
Hello, Going through the documentation I am aware that I should be using raw images instead of qcow2 when storing my images in Ceph. I have carried out a small test to understand how this goes. [root@ ~]# qemu-img create -f qcow2 test.qcow2 100G Formatting 'test.qcow2', fmt=qcow2
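For reference, the usual route from a qcow2 file into RBD is to convert to raw first; a sketch with placeholder pool/image names:

  qemu-img convert -f qcow2 -O raw test.qcow2 test.raw
  rbd import test.raw rbd/test --image-format 2
  # or, if qemu is built with rbd support, convert straight into the cluster
  qemu-img convert -f qcow2 -O raw test.qcow2 rbd:rbd/test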

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Richard Chan
Dell N40xx is cheap if you buy the RJ-45 versions. It will save a lot on optics. Our NICs are all RJ-45. We use CAT7 and the switch hasn't given any problems. We have one with an SFP+ uplink, and used a Dell-branded DAC cable that has also worked flawlessly. (Not to be confused with Dell

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Oliver Dzombic
Hi, I can recommend the X710-DA2. Our 10G switching goes through our blade infrastructure, so I can't recommend something for you there. I assume that the usual Juniper/Cisco will do a good job. I think, for Ceph, the switch is not the major point of a setup. -- Mit freundlichen Gruessen / Best regards

Re: [ceph-users] RGW: Delete orphan period for non-existent realm

2016-10-27 Thread Richard Chan
How do you clean up the :staging object? The orphan_uuid belongs to a realm that was accidentally created here. It was later moved to another rgw realm root pool. This remnant has no periods, only the staging object. radosgw-admin --cluster flash period list { "periods": [
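If radosgw-admin offers no way to drop it, one approach sometimes used is to remove the underlying object from the realm root pool directly; this is a sketch only, and the periods.<realm_id>:staging object name is an assumption about how RGW stores staging periods, so list and verify before removing anything:

  rados -p .rgw.root ls | grep a5632c6c4001615e57e587c129c1ad93
  # only after confirming the exact object name:
  rados -p .rgw.root rm periods.a5632c6c4001615e57e587c129c1ad93:staging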

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Robert Sanders
Arista has been good to us. Netgear or eBay if you have to scrape bottom. Rob Sent from my iPhone > On Oct 27, 2016, at 9:16 AM, Oliver Dzombic wrote: > > Hi, > > i can recommand > > X710-DA2 > > 10G Switch is going over our bladeinfrastructure, so i can't

[ceph-users] Antw: Re: SSS Caching

2016-10-27 Thread Steffen Weißgerber
>>> Christian Balzer wrote on Thursday, 27 October 2016 at 13:55: Hi Christian, > Hello, > On Thu, 27 Oct 2016 11:30:29 +0200 Steffen Weißgerber wrote: >> >>> Christian Balzer wrote on Thursday, 27 October 2016 at 04:07: >>

[ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Jelle de Jong
Hello everybody, I want to upgrade my small Ceph cluster to 10Gbit networking and would like some recommendations from your experience. What is your recommended budget 10Gbit switch suitable for Ceph? I would like to use Intel X550-T1 adapters in my nodes. Or is fibre recommended? X520-DA2

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Nick Fisk
> -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Oliver Dzombic > Sent: 27 October 2016 14:16 > To: ceph-users@lists.ceph.com > Subject: Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade > > Hi, > > i can recommand > >

Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread mj
Hi Jelle, On 10/27/2016 03:04 PM, Jelle de Jong wrote: Hello everybody, I want to upgrade my small Ceph cluster to 10Gbit networking and would like some recommendations from your experience. What is your recommended budget 10Gbit switch suitable for Ceph? We are running a 3-node cluster, with

Re: [ceph-users] Instance filesystem corrupt

2016-10-27 Thread Brian ::
What is the issue exactly? On Fri, Oct 28, 2016 at 2:47 AM, wrote: > I think this issue may not related to your poor hardware. > > > > Our cluster has 3 Ceph monitor and 4 OSD. > > > > Each server has > > 2 cpu ( Intel(R) Xeon(R) CPU E5-2683 v3 @ 2.00GHz ) , 32 GB

Re: [ceph-users] Some query about using "bcache" as backend of Ceph

2016-10-27 Thread Christian Balzer
Hello, On Fri, 28 Oct 2016 12:42:40 +0800 (CST) james wrote: > Hi, > > Is there anyone in the community who has experience using "bcache" as a backend > for Ceph? If you search the ML archives (via Google) you should find some past experiences, more recent ones not showing particularly great

[ceph-users] Some query about using "bcache" as backend of Ceph

2016-10-27 Thread james
Hi, Is there anyone in the community who has experience using "bcache" as a backend for Ceph? Nowadays most Ceph solutions are based on full-SSD or full-HDD backend data disks, so in order to balance cost against performance/capacity we are trying a hybrid solution with "bcache". It
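A minimal sketch of how such a hybrid device is usually assembled, with placeholder device names; whether ceph-disk handles the resulting bcache device cleanly is an assumption to test, not something this thread establishes:

  # register the SSD partition as cache and the HDD as backing device in one go
  make-bcache -C /dev/nvme0n1p1 -B /dev/sdb
  # the kernel exposes /dev/bcache0, which can then be prepared as an OSD
  ceph-disk prepare /dev/bcache0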

[ceph-users] Use cases for realms, and separate rgw_realm_root_pools

2016-10-27 Thread Richard Chan
Hi Cephers, I was wondering whether you could share your: 1. Use cases for multiple realms. 2. Do your realms use the same rgw_realm_root_pool, or do you use a separate rgw_realm_root_pool per realm? -- Richard Chan
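For context, a sketch of what a per-realm root pool looks like in configuration; the section name, pool name and realm name are examples only:

  # ceph.conf for the gateway (or admin commands) that should operate on this realm
  [client.rgw.zone2]
  rgw realm root pool = .rgw.root.realm2

  radosgw-admin realm create --rgw-realm=realm2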

Re: [ceph-users] rgw / s3website, MethodNotAllowed on Jewel 10.2.3

2016-10-27 Thread Trygve Vea
- On 27 Oct 2016 at 23:13, Robin H. Johnson robb...@gentoo.org wrote: > On Wed, Oct 26, 2016 at 11:43:15AM +0200, Trygve Vea wrote: >> Hi! >> >> I'm trying to get s3website working on one of our Rados Gateway >> installations, and I'm having some problems finding out what needs to >> be done for