following the failure of the first disk (assuming the AFR
(https://en.wikipedia.org/wiki/Annualized_failure_rate) of every disk is
8%, dividing by the number of hours in a year gives 0.08 / 8760 ~=
1/100,000 per hour)
* A given disk does not participate in more than 100 PGs
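(Worked example, using an assumed rather than quoted recovery window: if re-replicating the failed disk's data takes one hour, each of the up to 100 disks sharing PGs with it has a ~1/100,000 chance of failing within that hour, so the chance of a concurrent second failure is roughly 100 x 1/100,000 = 0.1% per incident.)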
answered canonically by Inktank or Sage; if not, then perhaps
I'll see how far I get sticking this diatribe into the ICE support
portal...
Hello,
On Fri, 29 Aug 2014 02:32:39 -0400 J David wrote:
On Thu, Aug 28, 2014 at 10:47 PM, Christian Balzer ch...@gol.com wrote:
There are 1328 PGs in the pool, so about 110 per OSD.
And just to be pedantic, the PGP_NUM is the same?
Ah, ceph status reports 1328 pgs. But:
$ sudo
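(The quoted command is truncated in the archive; a hedged way to check both values — the pool name "rbd" and the outputs are illustrative:
$ ceph osd pool get rbd pg_num
pg_num: 1328
$ ceph osd pool get rbd pgp_num
pgp_num: 1200
)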
Hello,
On Sat, 30 Aug 2014 18:27:22 -0400 J David wrote:
On Fri, Aug 29, 2014 at 2:53 AM, Christian Balzer ch...@gol.com wrote:
Now, 1200 is not a power of two, but it makes sense. (12 x 100).
Should have been 600 and then upped to 1024.
At the time, there was a reason why doing [...]
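(For reference, the rule of thumb behind those numbers: pg_num ~ (OSDs x 100) / replica count, rounded up to the next power of two, so (12 x 100) / 2 = 600, rounded up to 1024. A sketch of applying it, pool name illustrative:
$ ceph osd pool set rbd pg_num 1024
$ ceph osd pool set rbd pgp_num 1024
Note that pg_num can only ever be increased, never decreased.)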
e13901) v4 currently waiting for subops from [12,29]
Kind Regards,
David
to be done (retention of hot objects, etc.) and there have been stability concerns raised here.
Regards,
Christian
Hello,
On Fri, 5 Sep 2014 12:09:11 +0800 Ding Dinghua wrote:
Please see my comment below:
2014-09-04 21:33 GMT+08:00 Christian Balzer ch...@gol.com:
Hello,
On Thu, 4 Sep 2014 20:56:31 +0800 Ding Dinghua wrote:
Aside from what Loic wrote, why not replace the network
not have this problem; I'm
pretty sure you're dealing with a rogue/faulty osd/node somewhere.
Cheers,
Martin
On Fri, Sep 5, 2014 at 2:28 AM, Christian Balzer ch...@gol.com wrote:
On Thu, 4 Sep 2014 12:02:13 +0200 David wrote:
Hi,
We’re running a ceph cluster with version
128GB of RAM on the OSDs and decided to beef them up to 256GB which
helped. They’re running different workloads (shared hosting) but
we’ve never encountered the issue we had yesterday even during our
testing/benchmarking.
Kind Regards,
David
On 5 Sep 2014, at 09:05, Christian Balzer wrote:
http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/10021
And I'm not going to use BTRFS for mainly RBD backed VM images
(fragmentation city), never mind the other stability issues that crop up
here ever so often.
September 6 2014 1:27 PM, Christian Balzer ch...@gol.com wrote:
would of course never bring the cluster down.
However, taking an OSD out and/or adding a new one will cause data movement that might impact your cluster's performance.
Regards,
Christian
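(A minimal sketch of removing an OSD gracefully, using standard commands of that era; the OSD id 12 is illustrative:
$ ceph osd out 12                # triggers data migration off the OSD
$ ceph -w                        # watch until the cluster is healthy again
$ sudo service ceph stop osd.12  # init system dependent
$ ceph osd crush remove osd.12
$ ceph auth del osd.12
$ ceph osd rm 12
)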
Hello,
On Sat, 6 Sep 2014 17:41:02 +0200 Josef Johansson wrote:
We managed to get through the restore, but the performance degradation [...]
[...] a decent HW RAID controller.
Christian
Hello,
On Sat, 06 Sep 2014 10:28:19 -0700 JIten Shah wrote:
Thanks Christian. Replies inline.
On Sep 6, 2014, at 8:04 AM, Christian Balzer ch...@gol.com wrote:
Hello,
On Fri, 05 Sep 2014 15:31:01 -0700 JIten Shah wrote:
Hello Cephers,
We created a ceph cluster with 100
you'd easily get into
a real near full or full situation.
Regards,
Christian
[...] (this will result in data movement, so pick an appropriate time) would be to [...]
Hello,
On Mon, 08 Sep 2014 09:53:58 -0700 JIten Shah wrote:
On Sep 6, 2014, at 8:22 PM, Christian Balzer ch...@gol.com wrote:
and push that change to all the MONs, MDSs and
OSDs?
Thanks.
—Jiten
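(For runtime changes there is injectargs, though injected values do not survive a daemon restart, so persistent changes still have to go into ceph.conf on every node. The option and value here are illustrative:
$ ceph tell osd.* injectargs '--osd_max_backfills 1'
)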
Hello,
On Mon, 08 Sep 2014 11:42:59 -0400 JR wrote:
Greetings all,
I have a small ceph cluster (4 nodes, 2 OSDs per node) which recently
started showing:
root@ocd45:~# ceph health
HEALTH_WARN 1 near full osd(s)
admin@node4:~$ for i in 2 3 [...]
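(The usual way out of a near full warning, sketched with an illustrative OSD id and weight, is to lower the reweight of the overloaded OSD so data moves off it:
$ ceph osd reweight 7 0.85
)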
not an
option at the moment to upgrade to firefly (can't make a big change
before sending it out the door).
) warning; should have checked more
carefully.
I now have the expected data movement.
Thanks a lot!
JR
On 9/8/2014 10:04 PM, Christian Balzer wrote:
Hello,
On Mon, 08 Sep 2014 18:30:07 -0400 JR wrote:
Hi Christian, all,
Having researched this a bit more, it seemed
has plenty of reserves, I would go with the 1024 PGs for
big pools and 128 or 256 for the small ones.
Christian
[...]-correcting code (http://en.wikipedia.org/wiki/Error-correcting_code, ECC). So if
the data is not correct, the disk can recover it or return an I/O error.
Can anyone explain this?
http://en.wikipedia.org/wiki/Data_corruption#Silent_data_corruption
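(This is what Ceph's deep scrub addresses: it reads the object data on all replicas and compares them, catching exactly this kind of silent corruption. A manual trigger, with an illustrative PG id:
$ ceph pg deep-scrub 2.1f
)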
On Tue, 9 Sep 2014 10:57:26 -0700 Craig Lewis wrote:
On Sat, Sep 6, 2014 at 9:27 AM, Christian Balzer ch...@gol.com wrote:
On Sat, 06 Sep 2014 16:06:56 + Scott Laird wrote:
Backing up slightly, have you considered RAID 5 over your SSDs?
Practically speaking, there's
a test cluster.
So finding bugs is hardly surprising, especially since you're using an
experimental backend on top of that.
Also, due to it being a development version, you might get more feedback on
the ceph-devel mailing list.
Christian
On Thu, Sep 18, 2014 at 6:36 AM, Christian Balzer ch...@gol.com wrote:
Hello,
On Thu, 18 Sep 2014 13:07:35 +0200 Christoph Adomeit wrote:
Presently we use Solaris ZFS Boxes as NFS Storage for VMs.
That sounds slower than I would expect Ceph RBD to be in nearly all
and the mtimes, as I don't use that functionality.
Christian
Hello,
On Sun, 21 Sep 2014 21:00:48 +0200 Udo Lembke wrote:
Hi Christian,
On 21.09.2014 07:18, Christian Balzer wrote:
...
Personally I found ext4 to be faster than XFS in nearly all use cases
and the lack of full, real kernel integration of ZFS is something that
doesn't appeal
if not all the traffic generated by that card will have to be transferred to the other CPU anyway.
Christian
Hello,
On Mon, 22 Sep 2014 08:55:48 -0500 Mark Nelson wrote:
On 09/22/2014 01:55 AM, Christian Balzer wrote:
Hello,
not really specific to Ceph, but since one of the default questions by
the Ceph team when people are facing performance problems seems to be
"Have you tried turning [...]"
on you unexpectedly.
A 200 GB DC S3700 has a TBW of 1825, more than 10 times that of your
Samsungs, and would allow you to write 1 TB each day for 5 years.
Christian
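(Checking that arithmetic: 1825 TBW / (5 years x 365 days) = 1 TB per day for 5 years, as stated.)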
On Tue, 30 Sep 2014 15:26:31 +0100 Kingsley Tart wrote:
On Tue, 2014-09-30 at 00:30 +0900, Christian Balzer wrote:
On Mon, 29 Sep 2014 11:15:21 +0200 Emmanuel Lacour wrote:
On Mon, Sep 29, 2014 at 05:57:12PM +0900, Christian Balzer wrote:
Given your SSDs, are they failing after
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
Do we have other ways to significantly improve Ceph storage performance?
Any feedback and comments are welcome!
Thank you!
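(A hedged way to apply and persist such settings:
$ sysctl -w net.ipv4.tcp_window_scaling=1
or add the lines to /etc/sysctl.conf and load them with:
$ sysctl -p
)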
optimisation code will be added to the
next maintenance release of firefly?
Andrei
on the poor NANDs as Emmanuel's environment.
Christian
Cheers,
Martin
On Wed, Oct 1, 2014 at 10:18 AM, Christian Balzer ch...@gol.com wrote:
[...] second human opinion.
Thank you for any hint you'll give!
On Wed, 01 Oct 2014 20:12:03 +0200 Massimiliano Cuttini wrote:
Hello Christian,
On 01/10/2014 19:20, Christian Balzer wrote:
Hello,
On Wed, 01 Oct 2014 18:26:53 +0200 Massimiliano Cuttini wrote:
Dear all,
I need a few tips about the best Ceph solution for drive controllers and
network nodes.
Monitor your nodes with atop and see where the bottlenecks are (I still
bet disks).
Re-read my mail below.
Christian
Thanks
On Wed, Oct 1, 2014 at 7:24 PM, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 1 Oct 2014 14:43:49 -0700 Jakes John wrote:
Hi Ceph
for
a cache pool and 6 classic Ceph storage nodes for permanent storage
might be good enough for your use case with a future version of Ceph.
But unfortunately, cache pools currently aren't quite there yet.
Christian
(doubtful, but verify), more journals.
Other than that, maybe create a 1TB (usable space) SSD pool for guests
with special speed requirements...
Christian
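(A rough sketch of carving out such an SSD pool with a dedicated CRUSH root; the bucket, rule and pool names are hypothetical, and the SSD OSDs would first have to be moved under the new root:
$ ceph osd crush add-bucket ssd root
$ ceph osd crush rule create-simple ssd-rule ssd host
$ ceph osd pool create ssdpool 256 256
$ ceph osd pool set ssdpool crush_ruleset 1  # rule id from 'ceph osd crush rule dump'
)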
Subject: Re: [ceph-users] Ceph SSD array with Intel DC S3500's
Hello,
On Thu, 2 Oct 2014 13:48:27 -0400 (EDT) Adam Boyhan wrote:
Hey everyone, loving Ceph so far
On Fri, 3 Oct 2014 11:24:38 +0100 (BST) Andrei Mikhailovsky wrote:
Subject: Re: [ceph-users] ceph, ssds, hdds, journals and caching
On Thu, 2 Oct 2014 21:54:54 +0100 (BST
# set noop scheduler for non-rotating (SSD) disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
# set cfq scheduler for rotating disks
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
Is there anything else that I am missing?
On Mon, 6 Oct 2014 09:17:03 + Carl-Johan Schenström wrote:
Christian Balzer ch...@gol.com wrote:
Any decent switch with LACP will do, really.
And with that I mean Cisco, Brocade, etc.
But that won't give you redundancy if a switch fails, see below.
TRILL ( http
On Tue, 07 Oct 2014 20:40:31 + Scott Laird wrote:
I've done this two ways in the past. Either I'll give each machine
an Infiniband network link and a 1000baseT link and use the
Infiniband one as the private network for Ceph, or I'll throw
with 0.80.1 and 0.80.6)?
Regards,
Christian
That I'm also aware of, but for the time being having everything in
[global] resolves the problem and, more importantly, makes it reboot-proof.
Christian
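(A minimal sketch of that workaround, with illustrative values; given the firefly section-parsing bug mentioned further down, the settings go into [global] rather than [osd]:
[global]
osd pool default size = 2
osd pool default pg num = 1024
osd pool default pgp num = 1024
)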
On Thu, Oct 16, 2014 at 6:54 PM, Christian Balzer ch...@gol.com wrote:
Hello,
Consider this rather basic configuration file
or
recollection of seeing this before.
In general you will want to monitor all your cluster nodes with something
like atop in a situation like this to spot potential problems like slow
disks, CPU or network starvation, etc.
Christian
Please help me.
Best regards!
Hello,
On Wed, 22 Oct 2014 17:41:45 -0300 Ricardo J. Barberis wrote:
On Tuesday 21/10/2014, Christian Balzer wrote:
Hello,
I'm trying to change the value of mon_osd_down_out_subtree_limit from
rack to something, anything else with ceph 0.80.(6|7).
Using injectargs it tells me
and seeing if that makes any difference. If you feel
like it, it might be worth also running similar tests with something
like fio just to verify that the same behaviour is present.
Thanks!
Mark
, but nobody cares about those in journals. ^o^
Obvious things that come to mind in this context would be the ability to
disable journals (difficult, I know, not touching BTRFS, thank you) and
probably K/V store in the future.
Regards,
Christian
, at 07:58, Christian Balzer ch...@gol.com wrote:
Hello,
as others have reported in the past, and having now tested things here
myself, there really is no point in having journals for SSD-backed
OSDs on other SSDs.
It is a zero sum game, because:
a) using that journal SSD
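(The list is truncated here, but the zero-sum arithmetic presumably runs like this: assume N identical SSDs, each sustaining W MB/s of writes. With journal and data co-located, every write lands on the same device twice, so N OSDs sustain N x W/2 of client writes; with dedicated journal SSDs you get N/2 OSDs at the full W each, which is again N x W/2.)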
Is it too big, or a normal use case for Ceph?
Not too big, but definitely needs a completely different design and lots of
forethought, planning and testing.
Christian
the max to 10 seconds, and since Ceph also starts flushing the
journal when it becomes half full, there's the above goal of having 20
seconds' worth of space.
That all said, I'd be very happy about some journal perf counters in Ceph
that show how effective and utilized it is.
Christian
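(For reference, this is where the documented journal sizing guideline comes from: osd journal size = 2 x (expected throughput x filestore max sync interval), i.e. the 20 seconds' worth of writes mentioned above with the interval at 10 s. At 100 MB/s that is 2 x 100 x 10 = 2000 MB, so roughly a 2 GB journal.)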
note, have you done any tests using the ZFS compression?
I'm wondering what the performance impact and efficiency are.
Christian
for a single hot 4KB of
data is hardly efficient.
Regards,
Christian
Subject: Re: [ceph-users] use ZFS for OSDs
On Wed, 29 Oct 2014 15
[...] 4.0G 21% /var/lib/ceph/osd/ceph-1
My Linux OS
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04 LTS
Release: 14.04
Codename: trusty
Regards
Shiv
On Sun, 2 Nov 2014 14:07:23 -0800 (PST) Sage Weil wrote:
On Mon, 3 Nov 2014, Christian Balzer wrote:
c) But wait, you specified a pool size of 2 in your OSD section! Tough
luck, because since Firefly there is a bug that at the very least
prevents OSD and RGW parameters from being parsed
the information
you're after.
If you're using libvirt, using virt-top should make really busy VMs stand
out pretty quickly.
I'm using ganeti, so I'd _really_ love to have what you're asking for
implemented in Ceph.
Regards,
Christian
best regards
Danny
reading all the data in
the PGs.
Why? And what is it comparing that data with, the cosmic background
radiation?
Christian
On Tue, 11 Nov 2014 10:21:49 -0800 Gregory Farnum wrote:
On Mon, Nov 10, 2014 at 10:58 PM, Christian Balzer ch...@gol.com wrote:
Hello,
One of my clusters has become busy enough (I'm looking at you, evil
Windows VMs that I shall banish elsewhere soon) to experience
client-noticeable [...]
Christoph
[...] (not) use in their systems?
Any magical way to "blink" a drive in Linux?
Thanks and regards
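(Two common answers, my assumptions rather than quotes from the thread: generate I/O so the activity LED blinks,
$ dd if=/dev/sdX of=/dev/null bs=1M
or, on enclosures with LED support, use ledctl from the ledmon package:
$ ledctl locate=/dev/sdX
)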
Subject: Re: [ceph-users] Kernel memory allocation oops Centos 7
On Thu, 20 Nov 2014 22:10:02 + Bond, Darryl wrote:
Brief outline:
6 Node production
, at least it's good to know that.
Guess I'll keep cargo-culting that little setting for some time to come.
Darryl
ceph tell osd.* injectargs '--filestore_max_sync_interval 30'
Christian
Is it really that easy to trash your OSDs?
In case a storage node crashes, am I to expect most if not all OSDs, or
at least their journals, to require manual loving?
Regards,
Christian