- Original Message -
From: Butkeev Stas staer...@ya.ru
To: ceph-us...@ceph.com, ceph-commun...@lists.ceph.com, supp...@ceph.com
Sent: Friday, 31 July, 2015 9:10:40 PM
Subject: [ceph-users] problem with RGW
Hello everybody
We have a Ceph cluster that consists of 8 hosts with 12 OSDs each
I encountered a similar problem. Incoming firewall ports were blocked
on one host. So the other OSDs kept marking that OSD as down. But, it
could talk out, so it kept saying 'hey, I'm up, mark me up', and then
the other OSDs started trying to send it data again, causing backed-up
requests. Which
On Fri, 31 Jul 2015, Alexandre DERUMIER wrote:
As I still haven't heard or seen about any upstream distros for Debian
Jessie (see also [1]),
Gitbuilder is already done for jessie
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
@Sage: Don't know if something is blocking to
Thanks for your quick action!!
- Shinobu
On Fri, Jul 31, 2015 at 11:01 PM, Ilya Dryomov idryo...@gmail.com wrote:
On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy pixelfa...@gmail.com wrote:
according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
you have two choices,
format
Jan,
this is very handy to know! Thanks for sharing with us!
People, do you think it would be nice to have a place where we can
gather good practices, problem resolutions, and tips from the
community? We could have a voting system, and those with the most votes
(or above a
That's good to hear. Thanks for the heads up. We're going to be
getting another pile of hardware in the next couple of weeks and I'd
prefer to not have to start with Wheezy just to have to move to Jessie a
little bit later on. As someone said earlier, OS rollouts take some care
to do in
On Fri, Jul 31, 2015 at 2:21 PM, pixelfairy pixelfa...@gmail.com wrote:
according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
you have two choices,
format 1: you can mount with rbd kernel module
format 2: you can clone
just mapped and mounted this image,
rbd image
dear Ceph experts;
I am pretty new to the Ceph project and we are working on a management
infrastructure and using Ceph / Calamari as our storage resource.
I have some basic questions:
1) what is the purpose of installing and configuring salt-master and
salt-minion in Ceph environment?
is this
On Fri, Jul 31, 2015 at 5:47 PM, Jan Schermer j...@schermer.cz wrote:
I know a few other people here were battling with the occasional issue of OSDs
being extremely slow when starting.
I personally run OSDs mixed with KVM guests on the same nodes, and was
baffled by this issue occurring
Hi,
I was trying rados bench, and first wrote 250 objects from 14 hosts with
--no-cleanup. Then I ran the read tests from the same 14 hosts and ran
into this:
[root@osd007 test]# /usr/bin/rados -p ectest bench 100 seq
2015-07-31 17:52:51.027872 7f6c40de17c0 -1 WARNING: the following
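For context, the bench sequence being run here is roughly the following. This is a dry-run sketch that only prints the commands (the pool name and duration are taken from the message; drop the `echo`s to run against a real cluster):

```shell
# Print the rados bench sequence: write objects and keep them with
# --no-cleanup so the sequential-read test has data, then clean up.
bench_plan() {
    local pool="$1" secs="$2"
    echo "rados -p ${pool} bench ${secs} write --no-cleanup"
    echo "rados -p ${pool} bench ${secs} seq"
    echo "rados -p ${pool} cleanup"
}
bench_plan ectest 100
```

The seq test only works if the preceding write run left its objects in place, which is why `--no-cleanup` matters here.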
On 31 Jul 2015, at 17:28, Haomai Wang haomaiw...@gmail.com wrote:
On Fri, Jul 31, 2015 at 5:47 PM, Jan Schermer j...@schermer.cz wrote:
I know a few other people here were battling with the occasional issue of
OSDs being extremely slow when starting.
I personally run OSDs mixed with KVM
I remember reading that ScaleIO (I think?) does something like this by
regularly sending reports to a multicast group, thus any node with issues (or
just overload) is reweighted or avoided automatically on the client. OSD map is
the Ceph equivalent I guess. It makes sense to gather metrics and
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
Even just a ping at max MTU set with nodefrag could tell a lot about
connectivity issues and latency without a lot of traffic. Using Ceph
messenger would be even better to check firewall ports. I like the
idea of incorporating simple network checks
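A concrete form of that check, as a sketch (assumes Linux iputils `ping`; the peer hostname is hypothetical): size the payload so a don't-fragment probe exactly fills the MTU.

```shell
# Payload for a don't-fragment ICMP probe = MTU - 20 (IP header) - 8 (ICMP header).
mtu_probe_size() { echo $(( $1 - 28 )); }

# Against a hypothetical peer on a 9000-byte jumbo-frame link:
#   ping -M do -s "$(mtu_probe_size 9000)" -c 3 osd-peer-host
mtu_probe_size 1500    # prints 1472 for a standard Ethernet MTU
```

If the probe fails while smaller pings succeed, you have an MTU mismatch somewhere on the path, which is a classic cause of OSD heartbeat weirdness.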
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
I usually do the crush rm step second to last. I don't know if your
modifying the OSD after removing it from CRUSH is putting it back
in.
1. Stop OSD process
2. ceph osd rm
3. ceph osd crush rm osd.
4. ceph auth del osd.
Can you try the crush
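The order quoted above can be sketched as a dry-run script (OSD id 20 is a hypothetical example; the exact stop command depends on your init system, and dropping the `echo`s runs the commands for real):

```shell
# Print the removal sequence for one OSD, in the order given above.
remove_osd_plan() {
    local id="$1"
    echo "systemctl stop ceph-osd@${id}"    # 1. stop the OSD process
    echo "ceph osd rm ${id}"                # 2. remove the OSD id from the cluster
    echo "ceph osd crush rm osd.${id}"      # 3. remove it from the CRUSH map
    echo "ceph auth del osd.${id}"          # 4. delete its auth key
}
remove_osd_plan 20
```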
Most folks have probably either already left or are on their way out the
door late on a Friday, but I just wanted to say Happy SysAdmin Day to
all of the excellent System Administrators out there running Ceph
clusters. :)
Mark
Thanks Mark you too
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 7/31/15, 3:02 PM, ceph-users on behalf of Mark Nelson
ceph-users-boun...@lists.ceph.com on behalf of mnel...@redhat.com wrote:
Most folks have either probably already left or are on their
Hi,
I had 27 OSDs in my cluster. I removed two of them: osd.20 from
host-3 and osd.22 from host-6.
user@host-1:~$ sudo ceph osd tree
ID  WEIGHT     TYPE NAME          UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1  184.67990  root default
-7   82.07996      chassis chassis2
-4   41.03998          host
As I still haven't heard or seen about any upstream distros for Debian
Jessie (see also [1]),
Gitbuilder is already done for jessie
http://gitbuilder.ceph.com/ceph-deb-jessie-x86_64-basic/
@Sage: Don't know if something is blocking the official package release?
- Original Mail -
May your bytes stay with you :)
Happy bofhday!
Jan
On 01 Aug 2015, at 00:10, Michael Kuriger mk7...@yp.com wrote:
Thanks Mark you too
Michael Kuriger
Sr. Unix Systems Engineer
mk7...@yp.com | 818-649-7235
On 7/31/15, 3:02 PM, ceph-users on behalf of Mark Nelson
For a moment the removed OSDs are de-listed, and after some time they
come up again in the ceph osd tree listing.
On Fri, Jul 31, 2015 at 12:45 PM, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
Hi,
I had 27 OSDs in my cluster. I removed two of them: osd.20 from
host-3 and osd.22 from host-6.
On Thu, 30 Jul 2015 06:54:13 -0700 (PDT), Sage Weil sw...@redhat.com
wrote:
So... given that, I'd like to gauge user interest in these old distros.
Specifically,
CentOS6 / RHEL6
Ubuntu precise 12.04
Debian wheezy
Would anyone miss them?
Well, CentOS 6 will be supported until 2020,
Hi,
I had the same problem. Apparently civetweb can talk HTTPS when run standalone, but I didn't find out how to pass the necessary options to civetweb through Ceph. So I put haproxy in front of civetweb; haproxy terminates the HTTPS connection and forwards the requests in plain text to
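For reference, the TLS-termination setup described above looks roughly like this in haproxy (a sketch with hypothetical names and certificate path, using civetweb's default port 7480):

```
frontend rgw_https
    bind *:443 ssl crt /etc/haproxy/rgw.pem   # terminate HTTPS here
    default_backend rgw_http

backend rgw_http
    server civetweb1 127.0.0.1:7480           # plain HTTP on to civetweb
```

haproxy handles the certificate, so civetweb itself never needs to speak TLS.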
I know a few other people here were battling with the occasional issue of OSDs
being extremely slow when starting.
I personally run OSDs mixed with KVM guests on the same nodes, and was baffled
by this issue occurring mostly on the most idle (empty) machines.
Thought it was some kind of race
also, you probably want to reclaim unused space when you delete files.
http://ceph.com/docs/master/rbd/qemu-rbd/#enabling-discard-trim
On Fri, Jul 31, 2015 at 3:54 AM pixelfairy pixelfa...@gmail.com wrote:
rbd is already thin provisioned. When you set its size, you're setting the
maximum size.
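As a sketch of what enabling discard can look like for an RBD-backed guest disk (libvirt XML; the pool/image names are hypothetical, the guest needs a virtio-scsi controller for unmap to work, and it must still issue `fstrim` or mount with the `discard` option):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source protocol='rbd' name='rbd/vm-disk-1'/>
  <target dev='sda' bus='scsi'/>
</disk>
```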
Hello everybody
We have a Ceph cluster that consists of 8 hosts with 12 OSDs per host. They
are 2 TB SATA disks.
[13:23]:[root@se087 ~]# ceph osd tree
ID  WEIGHT     TYPE NAME    UP/DOWN  REWEIGHT  PRIMARY-AFFINITY
-1  182.99203  root default
according to http://ceph.com/docs/master/rbd/rbd-snapshot/#layering,
you have two choices,
format 1: you can mount with rbd kernel module
format 2: you can clone
just mapped and mounted this image,
rbd image 'vm-101-disk-2': size 5120 MB in 1280 objects, order 22 (4096 kB objects)
On 31/07/15 06:27, Stijn De Weirdt wrote:
wouldn't it be nice if Ceph did something like this in the background
(some sort of network scrub)? Debugging the network like this is not
that easy (we can't expect admins to install e.g. perfsonar on all
nodes and/or clients)
something like: every X min,
On 31/07/15 09:47, Mallikarjun Biradar wrote:
For a moment the removed OSDs are de-listed, and after some time they
come up again in the ceph osd tree listing.
Is the OSD service itself definitely stopped? Are you using any
orchestration systems (puppet, chef) that might be re-creating its auth
key
Yeah. OSD service stopped.
Nope, I am not using any orchestration system.
user@host-1:~$ ps -ef | grep ceph
root  2305  1  7 Jul27 ?  06:52:36 /usr/bin/ceph-osd --cluster=ceph -i 3 -f
root  2522  1  6 Jul27 ?  06:19:42 /usr/bin/ceph-osd --cluster=ceph -i 0 -f
root
On 07/31/2015 05:21 AM, John Spray wrote:
On 31/07/15 06:27, Stijn De Weirdt wrote:
wouldn't it be nice if Ceph did something like this in the background
(some sort of network scrub)? Debugging the network like this is not
that easy (we can't expect admins to install e.g. perfsonar on all
nodes and/or
I am using hammer 0.94
On Fri, Jul 31, 2015 at 4:01 PM, Mallikarjun Biradar
mallikarjuna.bira...@gmail.com wrote:
Yeah. OSD service stopped.
Nope, I am not using any orchestration system.
user@host-1:~$ ps -ef | grep ceph
root  2305  1  7 Jul27 ?  06:52:36 /usr/bin/ceph-osd
rbd is already thin provisioned. When you set its size, you're setting
the maximum size. It's explained here:
http://ceph.com/docs/master/rbd/rados-rbd-cmds/
On Thu, Jul 30, 2015 at 12:04 PM Robert LeBlanc rob...@leblancnet.us
wrote:
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
I'll take a
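One gotcha with the thin-provisioned sizing discussed in that thread: in this era of rbd, `--size` takes megabytes, not gigabytes. A small helper makes that explicit (the pool and image names in the comment are hypothetical):

```shell
# rbd's --size argument is in megabytes: compute it from a gigabyte count.
gb_to_mb() { echo $(( $1 * 1024 )); }

# e.g. create a thin-provisioned 100 GB image (cluster command, sketch only):
#   rbd create rbd/thin-demo --size "$(gb_to_mb 100)"
gb_to_mb 100    # prints 102400
```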