Hi,
Try looking for a file named locate in a folder named Slot X, where X is the
number of the slot; echoing 1 into the locate file will make the
LED blink:
# find /sys -name locate | grep Slot
Sorry, I forgot to say that in Slot X/device/block you can find the
device name, like sdc.
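For example, a minimal end-to-end sketch, assuming slot 4 and a placeholder
enclosure ID (both depend on your hardware):
# find /sys -name locate | grep 'Slot 04'
# ls '/sys/class/enclosure/ENCLOSURE_ID/Slot 04/device/block'   # shows e.g. sdc
# echo 1 > '/sys/class/enclosure/ENCLOSURE_ID/Slot 04/locate'   # LED on
# echo 0 > '/sys/class/enclosure/ENCLOSURE_ID/Slot 04/locate'   # LED off when done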
Cheers
On 18/11/2014 00:15, Cedric Lemarchand wrote:
Hi,
Try looking for a file named locate in a folder named Slot X, where X is
the number of the slot; echoing 1 into the locate file will make
keeping in mind.
--
Warren Wang
Comcast Cloud (OpenStack)
From: Cedric Lemarchand ced...@yipikai.org
Date: Wednesday, September 3, 2014 at 5:14 PM
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] [Single OSD performance on SSD] Can't go over 3.2K
On 03/09/2014 22:11, Sebastien Han wrote:
Hi Warren,
What do you mean exactly by secure erase? At the firmware level, with the
manufacturer's software?
The SSDs were pretty new, so I don't think we hit that sort of thing. I believe
that only aged SSDs have this behaviour, but I might be wrong.
I think it's
On 06/06/2014 14:05, Koleos Fuskus wrote:
BTW, I know it is not a good idea to run all mons and all OSDs on the same
machine, on the same disk. But on the other hand, it facilitates testing with
small resources. It would be great to deploy such a small environment easily.
Loic wrote a nice
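For a throwaway single-machine test, two ceph.conf settings are usually the
key; a minimal sketch (values assume a toy cluster with a couple of OSDs):
# cat >> /etc/ceph/ceph.conf << EOF
[global]
# place replicas across OSDs instead of across hosts
osd crush chooseleaf type = 0
# do not require 3 copies on a tiny test cluster
osd pool default size = 2
EOF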
On 07/06/2014 11:07, Cedric Lemarchand wrote:
On 06/06/2014 14:05, Koleos Fuskus wrote:
BTW, I know it is not a good idea to run all mons and all OSDs on the
same machine, on the same disk. But on the other hand, it facilitates
testing with small resources. It would be great to deploy such a small
On 05/06/2014 18:27, Sven Budde wrote:
Hi Alexandre,
thanks for the reply. As said, my switches are not stackable, so using LACP
seems not to be my best option.
I'm seeking an explanation of how Ceph utilizes two (or more)
independent links on both the public and the cluster
the only way to get working multipathing for Ceph.
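For what it's worth, without stackable switches a switch-independent bonding
mode like balance-alb is a common way to use both links; a minimal sketch for
Debian/Ubuntu ifupdown (interface names and address are assumptions):
# cat >> /etc/network/interfaces << EOF
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    # balance-alb needs no switch-side configuration, unlike 802.3ad/LACP
    bond-mode balance-alb
    bond-miimon 100
EOF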
On Thu, Jun 5, 2014 at 10:50 AM, Cedric Lemarchand ced...@yipikai.org wrote:
On 05/06/2014 18:27, Sven Budde wrote:
Hi Alexandre,
thanks for the reply. As said, my switches are not stackable, so
On 04/06/2014 03:23, Christian Balzer wrote:
On Tue, 03 Jun 2014 18:52:00 +0200 Cedric Lemarchand wrote:
On 03/06/2014 12:14, Christian Balzer wrote:
A simple way to make 1) and 2) cheaper is to use AMD CPUs; they will do
just fine at half the price with these loads.
If you're
Hello,
On 03/06/2014 12:14, Christian Balzer wrote:
A simple way to make 1) and 2) cheaper is to use AMD CPUs; they will do
just fine at half the price with these loads.
If you're that tight on budget, 64GB RAM will do fine, too.
I am interested in this specific thought; could you
On 28/05/2014 16:15, Stefan Priebe - Profihost AG wrote:
On 28.05.2014 16:13, Wido den Hollander wrote:
On 05/28/2014 04:11 PM, VELARTIS Philipp Dürhammer wrote:
Is anyone using btrfs in production?
I know people say it's still not stable. But do we use that many features
with Ceph? And
What first comes to my mind is GlusterFS; just my 2 cents.
Cheers
On 23/05/2014 20:41, Listas@Adminlinux wrote:
Hi!
I have failover clusters for some applications, generally with 2
members configured with Ubuntu + DRBD + ext4. For example, my IMAP
cluster works fine with ~50k email
On 12/05/2014 19:14, Loic Dachary wrote:
On 12/05/2014 11:00, Alexandre DERUMIER wrote:
I'll be there!
Will be there too!
(Do you know if it'll be possible to buy some Ceph t-shirts?)
As a side note, will there be some mugs? ;-)
Cheers!
Absolutely! If you want large quantities, it can also
Hi Dan,
On 13/05/2014 13:42, Dan van der Ster wrote:
Hi,
I think you're not getting many replies simply because those are
rather large servers and not many have such hardware in prod.
Good point.
We run with 24x3TB drives, 64GB ram, one 10Gbit NIC. Memory-wise there
are no problems.
Hello,
This build is only intended for archiving purposes; what matters here is
lowering the $/TB/W ratio.
Access to the storage would be via radosgw, installed on each node. I
need each node to sustain an average 1Gb/s write rate, which I
think would not be a problem. Erasure coding
I am surprised that CephFS isn't proposed as an option, given the way it
removes the non-negligible block storage layer from the picture. I
always feel uncomfortable stacking storage technologies or file systems
(here NFS over XFS over iSCSI over RBD over RADOS) and try to stay as
simple as possible on the
, Cedric Lemarchand wrote:
Hello,
This build is only intended for archiving purposes; what matters here is
lowering the $/TB/W ratio.
Access to the storage would be via radosgw, installed on each node. I
need each node to sustain an average 1Gb/s write rate, which
I think would
On 06/05/2014 17:07, Xabier Elkano wrote:
the goal is performance over capacity.
I am sure you already considered the full-SSD option, didn't you?
--
Cédric
Hi Indra,
On 04/05/2014 06:11, Indra Pramana wrote:
I'd like to share what I tried yesterday; this doesn't work:
- ceph osd set noout
- sudo stop ceph-osd id=12
- Replace the drive, and once done:
- sudo start ceph-osd id=12
You said a few lines afterwards that a new OSD number is
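For context, a minimal sketch of the usual full-replacement flow (assuming the
failed OSD is id 12; noout only avoids rebalancing during the swap, the old
OSD still has to be removed and a new one created):
# ceph osd set noout
# stop ceph-osd id=12
# ceph osd crush remove osd.12
# ceph auth del osd.12
# ceph osd rm 12
# ceph-deploy osd create node1:sdc   # after swapping the drive; host/disk are placeholders
# ceph osd unset noout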
What huge news! Big congrats to the Ceph team, without forgetting
all the volunteers who helped.
Keep up the amazing work; Ceph is going to be a revolution for
storage, and that's great.
Responses follow inline.
On 01/05/2014 08:26, Wido den Hollander wrote:
On 04/30/2014
On 29/04/2014 20:39, Craig Lewis wrote:
On 4/29/14 08:29, Drew Weaver wrote:
I am getting to the Ceph party a little late, but I am trying to find
out if any work has already been done on automating the
provisioning lifecycle of users, etc. in radosgw.
I started writing some Chef
Hi Punit,
On 28 Apr 2014, at 11:55, Punit Dambiwal hypu...@gmail.com wrote:
Hi Yehuda,
I am using the same method as above to call the API, following the approach
described in
http://ceph.com/docs/master/radosgw/s3/authentication/#access-control-lists-acls
for
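For illustration, a minimal sketch of the v2 signature that page describes,
using openssl; the keys, bucket and host are made-up placeholders:
DATE=$(date -u +"%a, %d %b %Y %H:%M:%S GMT")
# StringToSign = VERB \n Content-MD5 \n Content-Type \n Date \n Resource
SIG=$(printf "GET\n\n\n%s\n/mybucket/" "$DATE" \
  | openssl dgst -sha1 -hmac "$SECRET_KEY" -binary | base64)
curl -H "Date: $DATE" -H "Authorization: AWS $ACCESS_KEY:$SIG" \
  http://rgw.example.com/mybucket/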
Hi rAn,
On 27/04/2014 13:13, rAn rAnn wrote:
Thanks all,
I'm trying to deploy from node1 (the admin node) to the new node via the
command ceph-deploy install node2.
I have copied the two main repositories (noarch and x86_64) to my
secure site and I have encountered the following warnings
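If the nodes cannot reach ceph.com, ceph-deploy can be pointed at the mirrored
repository instead; a minimal sketch with placeholder URLs:
# ceph-deploy install --repo-url http://mirror.local/ceph \
    --gpg-url http://mirror.local/ceph/release.asc node2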
AM, Cedric Lemarchand ced...@yipikai.org wrote:
By digging a bit more I found part of the answer:
http://wiki.ceph.com/Planning/Blueprints/Firefly/rgw%3A_object_versioning
Are there any future plans for Swift?
Thanks
--
Cédric
On 23/04/2014 21:55, Cedric Lemarchand wrote:
Hi
Hello,
On 24/04/2014 12:39, Sakhi Hadebe wrote:
Hi,
I am new to the storage cluster concept. I have been tasked to test Ceph.
I have two Dell PE R515 machines running Ubuntu 12.04 LTS. I
understand that I need to make one the admin node, where I will be able
to run all the commands
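For orientation, a minimal ceph-deploy bootstrap sketch run from the admin
node (hostnames and the data disk are assumptions):
# ceph-deploy new node1
# ceph-deploy install node1 node2
# ceph-deploy mon create-initial
# ceph-deploy osd create node2:sdb
# ceph-deploy admin node1 node2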
it
with the rados command.
Cheers!
JC
On Apr 18, 2014, at 03:51, Cedric Lemarchand ced...@yipikai.org wrote:
Hi,
I am facing a strange behaviour where a pool is stuck; I have no idea how
this pool appeared in the cluster, given that I have not played with pool
creation, *yet*.
# root
Hi Cephers,
I would like to know if the Swift object versioning feature [1] is
(or will be) on the roadmap.
Because... it would be great ;-)
Thx,
Cédric
[1]
http://docs.openstack.org/api/openstack-object-storage/1.0/content/set-object-versions.html
--
Cédric
Hi,
I am facing a strange behaviour where a pool is stuck; I have no idea
how this pool appeared in the cluster, given that I have not played with
pool creation, *yet*.
root@node1:~# ceph -s
    cluster 1b147882-722c-43d8-8dfb-38b78d9fbec3
     health HEALTH_WARN 333 pgs degraded; 333 pgs
2    1    osd.2    up    1
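A few commands that usually help narrow down where an unexpected pool and its
degraded PGs come from:
# ceph osd dump | grep ^pool      # list pools with their ids and parameters
# ceph health detail              # show which PGs are degraded and why
# ceph pg dump_stuck unclean      # list stuck PGs with their acting OSDs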
2014-04-18 14:51 GMT+04:00 Cedric Lemarchand ced...@yipikai.org:
Hi,
I am facing a strange behaviour where a pool is stuck; I have no
idea how this pool appeared in the cluster, given that I have not
played
Hello,
If I remember correctly, I think I encountered the same issues with 12.04 and
a VirtualBox VM, and *it seems* that the VBox tools do some very weird things.
Have you tried cleaning up all packages that aren't needed by your setup?
On 04/04/2014 14:39, Brian Candler wrote:
On 04/04/2014 12:59,
On 07/03/2014 18:05, Stijn De Weirdt wrote:
We tried this with a Dell H200 (also LSI2008 based).
However, running some basic benchmarks, we saw no immediate difference
between IT and IR firmware.
So I'd like to know: what kind of performance improvement do you get,
and how did you
On 05/02/2014 14:10, Sebastien Han wrote:
We have a meet up in Paris.
Please see: http://www.meetup.com/Ceph-in-Paris/events/158942372/
This is great news! Thanks for the input.
Cheers
Cédric
On 16/01/2014 10:16, NEVEU Stephane wrote:
Thank you all for the comments.
So to sum up a bit, it's a reasonable compromise to buy:
2x R720 with 2x Intel E5-2660v2 (2.2GHz, 25M cache), 48GB RAM, 2x 146GB SAS
6Gbps 2.5-in 15K RPM hot-plug drives in the flex bay for the OS, and 24x 1.2TB
SAS
On 15/01/2014 14:15, Stefan Priebe wrote:
The DC S3700 isn't as good sequentially, but the 520 or 525 series has the
problem that it doesn't have a capacitor. We've used Intel SSDs since
the 160 series, but for Ceph we now go for the Crucial m500 (it has a capacitor).
The Crucial m500 has a capacitor; could
Hello guys,
What about ARM hardware? Has someone already used something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Cheers
On 15/01/2014 16:16, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
We just got in a test chassis of the 4 node in 4U
On 15/01/2014 16:25, Mark Nelson wrote:
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Has someone already used something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
AFAIK, the Boston solution was based on Calxeda gear
On 15/01/2014 17:34, Alexandre DERUMIER wrote:
Hi Derek,
thanks for the information about the R720xd.
It seems that a 24-drive chassis is also available.
What is the advantage of using the flex bay for SSDs? Bypassing the backplane?
From what I understand, the flex bays are inside the box, typically
useful
On 10/01/2014 17:16, Bradley Kite wrote:
This might explain why the performance is not so good - on each
connection it can only do 1 transaction at a time:
1) Submit write
2) Wait...
3) Receive ACK
Then repeat...
But if the OSD protocol supports multiple transactions it could do
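To make the effect concrete with assumed numbers: at a 0.5 ms round trip per
write, a strictly serial submit/wait/ACK loop caps a single connection at
about 2,000 writes per second regardless of disk speed, whereas keeping, say,
16 transactions in flight on the same link would raise that ceiling to
roughly 32,000.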
Cédric
On 28/12/2013 15:53, Wido den Hollander wrote:
On 12/28/2013 02:40 PM, Cedric Lemarchand wrote:
On 28/12/2013 14:35, Wido den Hollander wrote:
On 12/28/2013 02:07 PM, Cedric Lemarchand wrote:
Hello Cephers,
As my needs are to lower the $/TB, I would like to know
Hello Cephers,
As my needs are to lower the $/TB, I would like to know if the
replication ratio can only be an integer or can be set to 1.5 or 1.25.
In other words, can Ceph compute data parity, or does it only make
multiple copies of data?
Sorry in advance if this is a basic question!
-- Cédric
On 28/12/2013 14:35, Wido den Hollander wrote:
On 12/28/2013 02:07 PM, Cedric Lemarchand wrote:
Hello Cephers,
As my needs are to lower the $/TB, I would like to know if the
replication ratio can only be an integer or can be set to 1.5 or
1.25.
In other words, can Ceph compute data
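For what it's worth, erasure-coded pools (which landed in Firefly) give
exactly this kind of fractional overhead: a k=4, m=1 profile stores
(k+m)/k = 1.25x the raw data while tolerating the loss of one OSD. A minimal
sketch, with made-up profile and pool names:
# ceph osd erasure-code-profile set archive k=4 m=1
# ceph osd pool create ecpool 128 128 erasure archive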