[ceph-users] Upgrade from Jewel to Luminous. REQUIRE_JEWEL OSDMap

2017-11-27 Thread Cary
"release": "luminous", "num": 8 } }, "client": { "group": { "features": "0x1ffddff8eea4fffb", "release": "luminous", "num": 3 Is there any way I can get these OSDs to join the cluster now, or recover my data? Cary ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] PG active+clean+remapped status

2017-12-16 Thread Cary
Karun, Could you paste in the output from "ceph health detail"? Which OSD was just added? Cary -Dynamic On Sun, Dec 17, 2017 at 4:59 AM, Karun Josy <karunjo...@gmail.com> wrote: > Any help would be appreciated! > > Karun Josy > > On Sat, Dec 16, 2017 a
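
A minimal set of commands to gather what Cary is asking for here (nothing cluster-specific assumed):

  ceph health detail          # lists the PGs behind each health warning
  ceph osd tree               # shows the newly added OSD and its place in the CRUSH tree
  ceph pg dump_stuck unclean  # any PGs stuck in a non-clean state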

Re: [ceph-users] PG active+clean+remapped status

2017-12-16 Thread Cary
Karun, Did you attempt a "ceph pg repair <pgid>"? Replace <pgid> with the ID of the PG that needs repair, 3.4. Cary -D123 On Sat, Dec 16, 2017 at 8:24 AM, Karun Josy <karunjo...@gmail.com> wrote: > Hello, > > I added 1 disk to the cluster and after rebalancing, it shows 1 PG is in
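
For reference, the repair Cary describes would look like this; 3.4 is the PG ID from this thread, substitute your own:

  ceph pg repair 3.4       # ask the primary OSD to repair inconsistencies in PG 3.4
  ceph pg 3.4 query        # then inspect the PG's state, acting set and peering info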

Re: [ceph-users] PG active+clean+remapped status

2017-12-16 Thread Cary
or recovering. If possible, wait until the cluster is in a healthy state first. Cary -Dynamic On Sat, Dec 16, 2017 at 2:05 PM, Karun Josy <karunjo...@gmail.com> wrote: > Hi Cary, > > No, I didnt try to repair it. > I am comparatively new in ceph. Is it okay to try to repair it ? >
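
A simple pre-check before attempting a repair, per the advice above (sketch only):

  ceph -s        # make sure no backfill or recovery is still in progress
  ceph health    # ideally HEALTH_OK before running a repair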

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Cary
to start the drive. You can watch the data move to the drive with a ceph -w. Once data has migrated to the drive, start the next. Cary -Dynamic On Thu, Dec 14, 2017 at 5:34 PM, James Okken <james.ok...@dialogic.com> wrote: > Hi all, > > Please let me know if I am missing steps or
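
Cary's workflow, sketched as commands (OSD numbering and timing depend on your cluster):

  ceph -w        # watch backfill as data migrates onto the new drive
  ceph osd df    # confirm the new OSD is taking data and PGs
  # wait for HEALTH_OK, then add the next drive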

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-14 Thread Cary
; changed to a lower %? Cary -Dynamic On Thu, Dec 14, 2017 at 10:52 PM, James Okken <james.ok...@dialogic.com> wrote: > Thanks Cary! > > Your directions worked on my first sever. (once I found the missing carriage > return in your list of commands, the email musta messed

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-18 Thread Cary
James, If your replication factor is 3, for every 1GB added, your GB avail will decrease by 3GB. Cary -Dynamic On Mon, Dec 18, 2017 at 6:18 PM, James Okken <james.ok...@dialogic.com> wrote: > Thanks David. > Thanks again Cary. > > If I have > 682 GB used, 12998 GB / 136
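
A worked version of that arithmetic, using round numbers rather than James's exact figures:

  usable space ~= raw space / replication factor
  with size=3: storing 1 GB of data consumes ~3 GB of raw capacity,
  so the "GB avail" reported by ceph df drops by roughly 3 GB per 1 GB written.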

Re: [ceph-users] Migrating to new pools (RBD, CephFS)

2017-12-18 Thread Cary
A possible option. They do not recommend using cppool. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-July/011460.html **COMPLETELY UNTESTED AND DANGEROUS**
stop all MDS daemons
delete your filesystem (but leave the pools)
use "rados export" and "rados import" to do a full copy of the
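
Spelled out as commands, the same completely untested procedure might look roughly like this; <fsname>, <oldpool> and <newpool> are placeholders, and this can lose data if anything goes wrong:

  # stop all MDS daemons first, then drop the filesystem but keep its pools
  ceph fs rm <fsname> --yes-i-really-mean-it
  # full object copy from the old pool into the new one
  rados -p <oldpool> export pool.dump
  rados -p <newpool> import pool.dump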

Re: [ceph-users] POOL_NEARFULL

2017-12-19 Thread Cary
n my 4.6TB is the same for all of them, they have different %USE. So I could lower the weight of the OSDs with more data, and Ceph will balance the cluster. I am not too sure why this happens. http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-March/008623.html Cary -Dynamic On Tue,
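
The reweighting Cary mentions can be done per OSD or automatically; the OSD id and weight below are only examples:

  ceph osd reweight 27 0.95          # nudge an overfull OSD down so PGs move off it
  ceph osd reweight-by-utilization   # or let Ceph pick OSDs above average utilization
  ceph osd df                        # check %USE again afterwards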

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Cary
James, Those errors are normal. Ceph creates the missing files. You can check "/var/lib/ceph/osd/ceph-6", before and after you run those commands to see what files are added there. Make sure you get the replication factor set. Cary -Dynamic On Fri, Dec 15, 2017 at 6:11 PM, J
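
Setting the replication factor Cary refers to is a per-pool setting; <pool> is a placeholder:

  ceph osd pool set <pool> size 3       # keep 3 copies of every object
  ceph osd pool set <pool> min_size 2   # still serve I/O with 2 copies during failures
  ceph osd pool get <pool> size         # verify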

Re: [ceph-users] add hard drives to 3 CEPH servers (3 server cluster)

2017-12-15 Thread Cary
ailable, 3 total. ie. usage: 19465 GB used, 60113 GB / 79578 GB avail We choose to use Openstack with Ceph in this decade and do the other things, not because they are easy, but because they are hard...;-p Cary -Dynamic On Fri, Dec 15, 2017 at 10:12 PM, David Turner <drakonst...@gmail.com&g

Re: [ceph-users] ceph-volume does not support upstart

2017-12-29 Thread Cary
cd /etc/init.d/
ln -s ceph ceph-osd.12
/etc/init.d/ceph-osd.12 start
rc-update add ceph-osd.12 default
Cary On Fri, Dec 29, 2017 at 8:47 AM, 赵赵贺东 <zhaohed...@gmail.com> wrote: > Hello Cary! > It’s really big surprise for me to receive your reply! > Sincere thanks t

[ceph-users] Signature check failures.

2018-01-24 Thread Cary
DISPATCH pgs=26018 cs=1 l=1).process Signature check failed Does anyone know what could cause this, and what I can do to fix it? Thank you, Cary -Dynamic

[ceph-users] Signature check failures.

2018-01-26 Thread Cary
21.32.2:6807/153106 conn(0x7fc8bc020870 :-1 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=26018 cs=1 l=1).process Signature check failed Does anyone know what could cause this, and what I can do to fix it? Thank you, Cary -Dynamic

Re: [ceph-users] Signature check failures.

2018-02-01 Thread Cary
: CENSORED caps: [mon] allow * I believe this is causing the virtual machines we have running to crash. Any advice would be appreciated. Please let me know if I need to provide any other details. Thank you, Cary -Dynamic On Mon, Jan 29, 2018 at 7:53 PM, Gregory Farnum <gfar...@redh

Re: [ceph-users] Signature check failures.

2018-02-19 Thread Cary
Gregory, I greatly appreciate your assistance. I recompiled Ceph with -ssl and the nss USE flags set, which is the opposite of what I was using. I am now able to export from our pools without signature check failures. Thank you for pointing me in the right direction. Cary -Dynamic On Fri, Feb 16
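
For other Gentoo users hitting this, the rebuild Cary describes would presumably look like the following; the USE flag names come from his message, and the package.use path is just the usual Gentoo convention:

  echo "sys-cluster/ceph -ssl nss" >> /etc/portage/package.use/ceph
  emerge --ask --oneshot sys-cluster/ceph   # rebuild with the flags Cary reports fixed the signature failures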

Re: [ceph-users] Signature check failures.

2018-02-15 Thread Cary
Ceph Luminous, as we were not having these problems with Jewel. Cary -Dynamic On Thu, Feb 1, 2018 at 7:04 PM, Cary <dynamic.c...@gmail.com> wrote: > Hello, > > I did not do anything special that I know of. I was just exporting an > image from Openstack. We have recently upg

Re: [ceph-users] Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
Could you post the output of “ceph osd df”? On Dec 25, 2017, at 19:46, 周 威 wrote: Hi all: Ceph version: ceph version 10.2.9 (2ee413f77150c0f375ff6f10edd6c8f9c7d060d0) Ceph df:
GLOBAL: SIZE AVAIL RAW USED %RAW USED
46635G 12500G 34135G

Re: [ceph-users] 答复: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
> 24 1.62650 1.0 1665G 1146G 518G 68.86 0.94 325
> 25 1.62650 1.0 1665G 1033G 632G 62.02 0.85 309
> 26 1.62650 1.0 1665G 1234G 431G 74.11 1.01 334
> 27 1.62650 1.0 1665G 1342G 322G 80.62 1.10 352
> TOTAL 46635G 34135G 12

Re: [ceph-users] 答复: Can't delete file in cephfs with "No space left on device"

2017-12-25 Thread Cary
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-October/013646.html On Tue, Dec 26, 2017 at 6:07 AM, Cary <dynamic.c...@gmail.com> wrote: > Are you using hardlinks in cephfs? > > > On Tue, Dec 26, 2017 at 3:42 AM, 周 威 <cho...@msn.cn> wrote: >> The out put

Re: [ceph-users] ceph-volume does not support upstart

2017-12-28 Thread Cary
You could add a file named /usr/sbin/systemctl and add: exit 0 to it. Cary On Dec 28, 2017, at 18:45, 赵赵贺东 <zhaohed...@gmail.com> wrote: Hello ceph-users! I am a ceph user from china. Our company deploy ceph on arm ubuntu 14.04. Ceph Version is luminous 12.2.2. When I try to activa
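
As a literal sketch of that workaround (clearly a hack: it stubs out systemctl on a non-systemd box so that ceph-volume's calls to it succeed silently):

  printf '#!/bin/sh\nexit 0\n' > /usr/sbin/systemctl   # stub that always reports success
  chmod +x /usr/sbin/systemctl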