Even the cheapest stuff nowadays has a more or less decent wear-leveling
algorithm built into its controller, so this won't be a problem. Wear-leveling
algorithms cycle the blocks internally, so wear evens out across the whole
disk.
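(A toy sketch of the idea, purely illustrative and not how any real controller
works: the firmware keeps remapping logical writes onto the least-worn physical
block, so erase counts stay roughly even.)

# Toy wear-leveling sketch -- illustrative only; real controllers are
# far more sophisticated (garbage collection, static wear leveling, ...).
erase_counts = [0] * 8                 # erase count per physical block

def write_logical_block(data):
    # remap the write onto the least-worn physical block
    victim = erase_counts.index(min(erase_counts))
    erase_counts[victim] += 1          # the erase before rewrite is what wears flash
    return victim

for _ in range(100):                   # hammer the "same" logical block 100 times
    write_logical_block(b"hot data")

print(erase_counts)                    # -> [13, 13, 13, 13, 12, 12, 12, 12], wear spread evenly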
-K.
On 12/22/2015 06:57 PM, Alan Johnson wrote:
> I would also
On Tue, Dec 22, 2015 at 9:29 PM, Francois Lafont wrote:
> Hello,
>
> On 21/12/2015 04:47, Yan, Zheng wrote:
>
>> fio tests AIO performance in this case. cephfs does not handle AIO
>> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
>> this case.
>
> Ah ok,
Hi,
I think the ratio is based on SSD max throughput / HDD max throughput.
For example: one 400 MB/s SSD could serve as the journal for four 100 MB/s SAS drives.
That's my thinking; I'm also building Ceph storage for OpenStack.
Could you share your experiences?
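(A quick back-of-the-envelope version of that ratio; the throughput figures are
just the assumed numbers from the example above, not measurements.)

# Journal-SSD-to-HDD ratio by raw write throughput (assumed numbers).
ssd_write_mb_s = 400      # sequential write speed of the journal SSD
hdd_write_mb_s = 100      # sustained write speed of one SAS HDD

hdds_per_ssd = ssd_write_mb_s // hdd_write_mb_s
print(hdds_per_ssd)       # -> 4: one such SSD can absorb the journal writes of ~4 HDDs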
On Dec 23, 2015 03:04, "Pshem Kowalczyk"
Hi,
We'll be building our first production-grade Ceph cluster to back an
OpenStack setup (a few hundred VMs). Initially we'll need only about
20-30 TB of storage, but that's likely to grow. I'm unsure about the required
IOPS (there are multiple, very different classes of workload to consider).
OK, you've given me the answer, thanks a lot.
But I don't know the answer to your questions.
Maybe someone else can answer.
------ Original ------
From: "Loris Cuoghi"
Date: Tue, Dec 22, 2015 07:31 PM
To: "ceph-users"
Hello,
On Wed, 23 Dec 2015 11:46:58 +0800 yuyang wrote:
> OK, you've given me the answer, thanks a lot.
>
Assume that a journal SSD failure means the loss of all associated OSDs.
So in your case a single SSD failure will cause the loss of a whole
node's data.
If you have 15 or more of those nodes,
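(Rough back-of-the-envelope for the blast radius, assuming the 9-OSDs-per-node
layout described above; the per-OSD size and fill level below are made-up
figures for illustration.)

# Data to re-replicate after one journal SSD (and its 9 OSDs) dies.
osds_per_journal_ssd = 9       # from the layout described above
osd_size_tb = 4.0              # assumed drive size, not from the thread
avg_fill = 0.6                 # assumed average OSD utilisation

lost_tb = osds_per_journal_ssd * osd_size_tb * avg_fill
print(lost_tb)                 # -> 21.6 TB that the rest of the cluster must backfill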
On 21 December 2015 at 22:07, Yan, Zheng wrote:
>
> > OK, so i changed fio engine to 'sync' for the comparison of a single
> > underlying osd vs the cephfs.
> >
> > the cephfs w/ sync is ~115 IOPS / ~500 KB/s.
>
> This is normal because you were doing single thread sync IO. If
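(Rough arithmetic behind that: single-threaded sync IO is latency-bound, so
IOPS is just the reciprocal of the per-write round-trip time. The ~8.7 ms
figure below is what the reported numbers imply, not a measurement.)

# Single-threaded sync IO: each 4 KB write must fully complete
# (client -> OSD journal -> ack) before the next one starts.
write_latency_s = 0.0087       # ~8.7 ms per write, implied by the numbers above
block_kb = 4

iops = 1 / write_latency_s
print(round(iops), round(iops * block_kb), "KB/s")   # -> ~115 IOPS, ~460 KB/s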
On Tue, Dec 22, 2015 at 9:00 AM, Simon Hallam wrote:
> Thank you both, cleared up a lot.
>
> Is there a performance metric in perf dump on the MDS' that I can see the
> active number of inodes/dentries? I'm guessing the mds_mem ino and dn metrics
> are the relevant ones?
>
Hello all,
I have a simple 10-OSD cluster that is running out of space on several OSDs.
Notice the output below shows a big difference between .rgw.buckets used at 10% and
raw used at 73%.
Is there something I need to do to purge the space?
I did have a 10TB rbd block image in the data pool that I've
Hello, everyone,
I have a Ceph cluster with several nodes; every node has 1 SSD and 9 SATA disks.
Every SATA disk is used as an OSD; in order to improve IO performance, the SSD
is used as the journal disk.
That is, there are 9 journal files on every SSD.
If the SSD fails or goes down, can the OSD
Thank you both, cleared up a lot.
Is there a performance metric in perf dump on the MDSs from which I can see the
active number of inodes/dentries? I'm guessing the mds_mem ino and dn metrics
are the relevant ones?
http://paste.fedoraproject.org/303932/77466614/
Cheers,
Simon
> -Original
Hello guys,
Was wondering if anyone has done testing on the Samsung PM863 120 GB version to see
how it performs? IMHO the 480 GB version seems like a waste for the journal, as
you only need a small disk to fit 3-4 OSD journals, unless you get
far greater durability.
I am planning to
Hello,
On 21/12/2015 04:47, Yan, Zheng wrote:
> fio tests AIO performance in this case. cephfs does not handle AIO
> properly, AIO is actually SYNC IO. that's why cephfs is so slow in
> this case.
Ah OK, thanks for this very interesting information.
So the question I'm asking myself is:
Hello guys,
I was planning to upgrade our Ceph cluster over the holiday period and was
wondering when you are planning to release the next point release of
Infernalis? Should I wait for it or just roll out 9.2.0 for the time being?
thanks
Andrei
On 22-12-15 13:43, Andrei Mikhailovsky wrote:
> Hello guys,
>
> Was wondering if anyone has done testing on Samsung PM863 120 GB version to
> see how it performs? IMHO the 480GB version seems like a waste for the
> journal as you only need to have a small disk size to fit 3-4 osd journals.
>
On 22/12/2015 09:42, yuyang wrote:
Hello, everyone,
[snip snap]
Hi
> If the SSD fails or goes down, can the OSD work?
> Is the OSD down, or can it only be read?
If you don't have a journal anymore, the OSD has already quit, as it
can't continue writing, nor can it assure data consistency, since
On Tue, Dec 22, 2015 at 12:58 PM, Florent B wrote:
> Hi,
>
> Today I had another MDS crash, but this time it was an active MDS crash.
>
> Log is here : http://paste.ubuntu.com/14136900/
>
> Infernalis on Debian Jessie (packaged version).
>
> Does anyone know something about
Would this behavior go away if I add more OSDs or PGs, or can I do anything
else besides changing the FS on the OSDs? Is this a known performance issue?
Thanks
--
Dan
On December 22, 2015 4:53:24 PM Wade Holler wrote:
The hanging kernel tasks under -327 for XFS resulted
On 22/12/2015 13:43, Andrei Mikhailovsky wrote:
> Hello guys,
>
> Was wondering if anyone has done testing on Samsung PM863 120 GB version to
> see how it performs? IMHO the 480GB version seems like a waste for the
> journal as you only need to have a small disk size to fit 3-4 osd journals.
Write endurance is kinda bullshit.
We have Crucial 960 GB drives storing data, and we've only managed to take 2% off
the drives' life over a year, with hundreds of TB written weekly.
Stuff is way more durable than anyone gives it credit for.
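(Taking the 2%-per-year figure at face value, the implied lifetime at that
write rate is the point:)

# If one year of this workload consumes ~2% of the drive's rated endurance,
# the drive would take decades to wear out at the same rate.
wear_per_year = 0.02
print(1 / wear_per_year)   # -> 50.0 years of this workload before the rating is exhausted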
- Original Message -
From: "Lionel
On 12/22/2015 05:36 PM, Tyler Bishop wrote:
> Write endurance is kinda bullshit.
>
> We have crucial 960gb drives storing data and we've only managed to take 2%
> off the drives life in the period of a year and hundreds of tb written weekly.
>
>
> Stuff is way more durable than anyone gives it
I would also add that journal activity is write-intensive, so a small part
of the drive would get excessive writes if the journal and data are co-located
on an SSD. This would also be the case where an SSD holds multiple journals
associated with many HDDs.
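(On the capacity point: if I remember right, the filestore-era rule of thumb
from the Ceph docs is journal size = 2 * expected throughput * filestore max
sync interval. With assumed but typical numbers, even a 120 GB SSD is far more
than 3-4 journals need, so durability, not capacity, is the real question.)

# Per-journal capacity rule of thumb (filestore era); inputs are assumed values.
hdd_throughput_mb_s = 100            # sustained write speed of one backing HDD
max_sync_interval_s = 5              # filestore max sync interval (old default)
journals_per_ssd = 4

journal_mb = 2 * hdd_throughput_mb_s * max_sync_interval_s
print(journal_mb, journal_mb * journals_per_ssd)   # -> 1000 MB each, 4000 MB total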
-Original Message-
From:
On Tue, Dec 22, 2015 at 7:18 PM, Don Waterloo wrote:
> On 21 December 2015 at 22:07, Yan, Zheng wrote:
>>
>>
>> > OK, so i changed fio engine to 'sync' for the comparison of a single
>> > underlying osd vs the cephfs.
>> >
>> > the cephfs w/ sync is ~
I had major host stability problems under load with -327. Repeatable test
cases under high load with XFS or BTRFS would result in hung kernel tasks
and of course the sympathetic behavior you mention.
"requests are blocked" means that the op tracker in Ceph hasn't received a
timely response from
That is strange; maybe there is a sysctl option to tweak on the OSDs? This will be
nasty if it shows up in our production environment!
--
Dan
From: Wade Holler [mailto:wade.hol...@gmail.com]
Sent: Tuesday, December 22, 2015 4:36 PM
To: Dan Nica ; ceph-users@lists.ceph.com
Subject:
The hanging kernel tasks under -327 for XFS resulted in LOG verification
failures and completely locked the hosts.
We could get around the BTRFS task timeouts by setting
kernel.hung_task_timeout_secs = 960.
The host would eventually become responsive again; however, that doesn't really
matter, since the
Hi
I'm trying to run a bench test on an RBD image, and from time to time I get the
following in ceph status:
    cluster 046b0180-dc3f-4846-924f-41d9729d48c8
     health HEALTH_WARN
            2 requests are blocked > 32 sec
     monmap e1: 3 mons at
Hi,
Did you try the cleanup and dispose steps of COSBench?
Best regards
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On behalf of
Somnath Roy
Sent: Tuesday, 24 November 2015 20:49
To: ceph-users@lists.ceph.com
Subject: [ceph-users] RGW pool contents
Hi Yehuda/RGW experts,
I have
Hi all,
When I run ./install-deps.sh, I get some errors:
--> Already installed : junit-4.11-8.el7.noarch
No uninstalled build requires
Running virtualenv with interpreter /usr/bin/python2.7
New python executable in
Thanks for responding back; unfortunately, the COSBench setup is not there anymore.
Good to know that there are cleanup steps for COSBench data.
Regards
Somnath
From: ghislain.cheval...@orange.com [mailto:ghislain.cheval...@orange.com]
Sent: Tuesday, December 22, 2015 11:28 PM
To: Somnath Roy;