Hi Everyone,
I'm looking for some SSDs for our cluster and came across these Samsung DC
SV843 SSDs, and noticed in the mailing list archives from a while back that
some people were talking about them.
Just wondering if anyone ended up using them and how they are going?
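If anyone does have them in use, the test people on this list usually seem to
run on a prospective journal SSD is a small sync-write fio job, something along
these lines (destructive to the target device, and /dev/sdX is only a
placeholder):

fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

Sustained 4k sync-write IOPS is roughly the access pattern the filestore
journal cares about.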
Thanks in advance,
Regards,
:6830/43939 shutdown
complete.
Full OSD log below
https://drive.google.com/file/d/0B578d6cBmDPYQ1lCMUR2Y0tLNTA/view?usp=sharing
Regards,
Quenten Grasso
and leverage some nice SSDs, maybe a 400GB P3700, for the ZIL/L2ARC with
compression, and go back to 2x replicas, which could give us some pretty
fast/safe/efficient storage.
Now to find that money tree.
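In case it helps anyone thinking along the same lines, the ZFS side of that
would be roughly the following (pool name and partitions are made up for the
example):

zpool add tank log /dev/nvme0n1p1     # SLOG/ZIL on the P3700
zpool add tank cache /dev/nvme0n1p2   # L2ARC
zfs set compression=lz4 tank

The 2x vs 3x replica part is on the Ceph side and independent of the above.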
Regards,
Quenten Grasso
From: Nick Fisk [mailto:n...@fisk.me.uk]
Sent: Saturday, 24 January 2015 7:33 PM
To: Quenten Grasso; ceph-users@lists.ceph.com
Subject: RE: Consumer Grade SSD Clusters
Hi Quenten,
There is no real answer to your question. It really depends on how busy your
storage
53 1 osd.53 up 1
54 1 osd.54 up 1
Regards,
Quenten Grasso
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Tuesday, 27 January 2015 11:33 AM
To: ceph-users@lists.ceph.com
Cc
procedure everything was OK and osd.0 had been
emptied and seemingly rebalanced.
Any ideas why it's rebalancing again?
We're using Ubuntu 12.04 w/ Ceph 0.80.8, kernel 3.13.0-43-generic
#72~precise1-Ubuntu SMP Tue Dec 9 12:14:18 UTC 2014 x86_64 x86_64 x86_64
GNU/Linux
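For context, the usual removal procedure from the docs looks roughly like this
(osd.0 as the example):

ceph osd out 0
(wait for the cluster to return to HEALTH_OK)
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd rm 0

If I've understood the archives correctly, 'ceph osd out' only drops the
reweight to 0 while the OSD keeps its CRUSH weight, whereas 'ceph osd crush
remove' changes the host's total weight in the CRUSH map, so a second round of
data movement after that step is apparently expected even though the OSD was
already empty.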
Regards,
Quenten Grasso
-Original Message-
From: Christian Balzer [mailto:ch...@gol.com]
Sent: Tuesday, 27 January 2015 11:53 AM
To: ceph-users@lists.ceph.com
Cc: Quenten Grasso
Subject: Re: [ceph-users] OSD removal rebalancing again
On Tue, 27 Jan 2015 01:37:52 + Quenten Grasso wrote:
Hi Christian
x replication it's a pretty good
cost saving; with 3x replication, not so much.
Cheers,
Quenten Grasso
rados bench -p benchmark1 180 write -b 4096 --no-cleanup --concurrent-ios=32
rados bench -p benchmark1 180 seq -b 4096
As you may know, increasing the concurrent IOs will increase CPU/disk load.
Total PGs = OSDs * 100 / Replicas
i.e. a 50 OSD system with 3 replicas would be around 1600.
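To put numbers on that: 50 * 100 / 3 is roughly 1666, which the docs suggest
rounding up to the next power of two, i.e. 2048, e.g. (pool name is only an
example):

ceph osd pool create benchmark1 2048 2048

Bear in mind pg_num can be raised later with 'ceph osd pool set <pool> pg_num
<n>' but not lowered, so it pays not to massively overshoot.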
Hope this helps a little,
Cheers,
Quenten
of these. While not really HA, it seems to mostly work, though FreeNAS iSCSI
can get a bit cranky at times.
We are moving towards another KVM hypervisor such as Proxmox for these VMs,
which don't quite fit into our OpenStack environment, instead of having to use
RBD proxies.
Regards,
Quenten Grasso
-Original Message-
spindles?
If this is the case, is it possible to calculate how many IOPS a journal would
absorb and how that would translate to X IOPS on the spindle disks?
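The back-of-envelope version that tends to get quoted on this list (treat it
as a rough rule of thumb only): with filestore and the journal co-located on
the spinner, every client write hits the disk twice, so

client write IOPS ~= (number of disks * IOPS per disk) / (replicas * 2)

e.g. 12 spinners at ~75 IOPS each with 2x replication gives 12 * 75 / (2 * 2) =
225 client write IOPS for the node. Moving the journal onto an SSD removes the
*2 double-write penalty from the spindles, leaving just the replica divisor.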
Regards,
Quenten Grasso
-Original Message-
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Christian Balzer
Sent
Hi Sage, Andrija, list,
I have seen the tunables issue on our cluster when I upgraded to Firefly.
I ended up going back to the legacy settings after about an hour, as my cluster
is 55 x 3TB OSDs over 5 nodes and it decided it needed to move around 32% of our
data, which after an hour all of our
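(For anyone who hits the same thing later: reverting is a one-liner,

ceph osd crush tunables legacy

though of course whatever data had already moved then starts moving back the
other way.)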
Hi All,
Just a quick question for the list: has anyone seen a significant increase in
RAM usage since Firefly? I upgraded from 0.72.2 to 0.80.3 and now all of my
Ceph servers are using about double the RAM they used to.
The only other significant change to our setup was an upgrade to the kernel
000 S0 0.0 0:00.08 kworker/0:0
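For comparing before and after, nothing scientific, but something like the
following gives a quick per-OSD resident memory figure, plus heap stats if the
OSDs were built with tcmalloc:

ps -eo rss,comm | grep ceph-osd
ceph tell osd.0 heap stats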
Regards,
Quenten Grasso
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Quenten Grasso
Sent: Tuesday, 18 March 2014 10:19 PM
To: 'ceph-users@lists.ceph.com'
Subject: [ceph-users] OSD Restarts cause
Hi All,
I'm trying to troubleshoot a strange issue with my Ceph cluster.
We're running Ceph version 0.72.2.
All nodes are Dell R515's w/ 6C AMD CPU w/ 32GB RAM, 12 x 3TB NearlineSAS
drives and 2 x 100GB Intel DC S3700 SSDs for journals.
All pools have a replica of 2 or better, i.e. metadata
Hi All,
Does radosgw support a public URL for static content?
I wish to share a file publicly but not give out usernames/passwords etc.
I noticed that http://ceph.com/docs/master/radosgw/swift/ says static websites
aren't supported, which I assume is talking about this feature.
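If radosgw's Swift ACL support covers it (I haven't verified this on our
version), making the container world-readable should be enough to hand out a
plain URL, e.g. with the swift client (container and object names below are
just examples):

swift post -r '.r:*' public-files
swift upload public-files somefile.iso

after which the object should be fetchable at its container URL without
credentials.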
Hey Guys,
Looks like 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc'
is down.
Regards,
Quenten Grasso
controller bios/firmware:
Megarec -cleanflash 0
4) Reboot
5) Flash the new firmware:
Megarec -m0flash 0 mr2108fw.rom
6) Reboot. Done.
Also, if the command errors out halfway through flashing/erasing, run it again.
Regards,
Quenten Grasso
-Original Message-
From: ceph-devel-ow
Hi All,
I'm finding my write performance is less than I would have expected. After
spending a considerable amount of time testing several different
configurations, I can never seem to break ~360MB/s writes, even when using
tmpfs for journaling.
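For anyone wanting to reproduce the tmpfs journal test, it's roughly this in
ceph.conf (path is illustrative, and the journal is lost on reboot, so only for
throwaway testing):

[osd]
osd journal = /dev/shm/osd.$id.journal
osd journal size = 1024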
So I've purchased 3x Dell R515's with 1 x