Looks like it's just following the warnings from libvirt:
https://bugzilla.redhat.com/show_bug.cgi?id=751631 heh, but I found one
of the Inktank guys confirming that RBD was safe to add to the whitelist
last year:
http://www.redhat.com/archives/libvir-list/2012-July/msg00021.html which
is good f
Hello,
With AWS it is possible to do user browser-based uploads using POST [1].
Is it possible to do this with RadosGW? Is the feature supported?
Cheers,
Valery
[1] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html
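For reference, on the AWS side a browser-based POST upload works by having the server sign a policy document that the browser then submits as hidden form fields alongside the file input [1]. Below is a minimal sketch of that signing step (Signature V2 style, as used in this era; field names follow the AWS docs, and whether radosgw accepts such forms is exactly the open question here):

```python
# Sketch of the S3 browser-based POST upload signing step.
# The bucket name, key prefix, and secret key below are made-up examples.
import base64
import hashlib
import hmac
import json

def sign_post_policy(secret_key: str, policy: dict) -> tuple[str, str]:
    """Return (base64 policy, base64 HMAC-SHA1 signature) for a POST form."""
    policy_b64 = base64.b64encode(json.dumps(policy).encode()).decode()
    signature = base64.b64encode(
        hmac.new(secret_key.encode(), policy_b64.encode(), hashlib.sha1).digest()
    ).decode()
    return policy_b64, signature

policy = {
    "expiration": "2014-01-16T12:00:00.000Z",
    "conditions": [
        {"bucket": "uploads"},
        ["starts-with", "$key", "user/"],
        {"acl": "private"},
        ["content-length-range", 0, 10485760],  # max 10 MiB
    ],
}
policy_b64, signature = sign_post_policy("my-secret-key", policy)
# The HTML form would carry AWSAccessKeyId, policy, and signature as
# hidden fields; the service recomputes the signature on receipt.
```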
--
SWITCH
--
Valery Tschopp, Software Engi
Hi,
I now did an upgrade to dumpling (ceph version 0.67.5
(a60ac9194718083a4b6a225fc17cad6096c69bd1)), but the osd still fails at startup
with a trace.
Here's the trace:
http://paste.ubuntu.com/6755307/
If you need any more info, I will provide it. Can someone please help?
Thanks
From: ceph
Hi there,
We have a production Ceph cluster with 12 OSDs spread over 6 hosts
running version 0.72.2.
From time to time, we're seeing some nasty multi-second latencies
(typically 1-3 second, sometimes as high as 5 seconds) inside QEMU VMs
for both read and write loads.
The VMs are still res
Hello List,
I'm going to build an RBD cluster this year, with 5 nodes.
I would like to have this kind of configuration for each node:
- 2U
- 2.5 inch drives
- OS: 2 SAS drives
- journal: 2 x SSD Intel DC S3700 100GB
- OSD: 10 or 12 x SAS Seagate Savvio 10K.6 900GB
I see on the mailing
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS 10K gives you around 140MB/sec for sequential writes.
So if y
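Sebastien's ratio argument can be sanity-checked with quick arithmetic, using only figures quoted in this thread (~140 MB/s sequential per SAS 10K disk; ~200 MB/s sustained write for the DC S3700 100G, reported later in the thread). A rough sketch, assuming filestore's behavior of writing everything to the journal SSD first:

```python
# Back-of-the-envelope journal-ratio check. With filestore, every write
# hits the journal SSD before the data disk, so the SSD's write
# bandwidth is shared by all OSDs behind it.
SSD_WRITE_MBPS = 200.0  # Intel DC S3700 100GB, as reported in this thread
OSD_WRITE_MBPS = 140.0  # Seagate Savvio 10K.6, sequential

def journal_cap_per_osd(ratio: int) -> float:
    """Sequential MB/s each OSD gets when `ratio` OSDs share one journal SSD."""
    return min(SSD_WRITE_MBPS / ratio, OSD_WRITE_MBPS)

for ratio in (4, 5, 6):
    print(f"1:{ratio} -> {journal_cap_per_osd(ratio):.0f} MB/s per OSD "
          f"(disk alone could do {OSD_WRITE_MBPS:.0f})")
```

At 1:6 the journal caps each disk at roughly a quarter of its native sequential rate, which is the motivation for preferring 1:5 or 1:4.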
Hi Sebastian,
On 15.01.2014 13:55, Sebastien Han wrote:
Hi Alexandre,
Are you going with a 10Gb network? It’s not an issue for IOPS but more for the
bandwidth. If so read the following:
I personally won’t go with a ratio of 1:6 for the journal. I guess 1:5 (or even
1:4) is preferable.
SAS
Hello Sebastien,
thanks for your reply.
>>Are you going with a 10Gb network? It’s not an issue for IOPS but more for
>>the bandwidth. If so read the following:
Currently it's planned to use a 1Gb network for the public network (vm --> rbd
cluster).
Maybe 10GbE for the cluster network replication is p
On 15/01/2014 14:15, Stefan Priebe wrote:
The DC S3700 isn't good at sequential, but the 520 or 525 series has the
problem that it doesn't have a capacitor. We've used Intel SSDs since
the 160 series but for ceph we now go for Crucial m500 (has capacitor).
"Crucial m500 has capacitor", coul
> I would like to have this kind of configuration for each node:
>
> - 2U
> - 2,5inch drives
> os : 2 disk sas drive
> journal : 2 x ssd intel dc s3700 100GB
> osd : 10 or 12 x sas Seagate Savvio 10K.6 900GB
> Another option could be to use supermicro server,
> they have some 2U - 16 disks chass
On 15.01.2014 14:33, Cedric Lemarchand wrote:
On 15/01/2014 14:15, Stefan Priebe wrote:
The DC S3700 isn't good at sequential, but the 520 or 525 series has the
problem that it doesn't have a capacitor. We've used Intel SSDs since
the 160 series but for ceph we now go for Crucial m500 (has
Hi all,
I have to build a new Ceph storage architecture replicated between two
datacenters (for a Disaster Recovery Plan), so basically 2x30 terabits (2x3.75
terabytes).
I can only buy Dell servers.
I planned to use 2x1gb (LACP) for the replication network and also 2x1gb (LACP)
for production n
Seems that the S3700 has a supercapacitor too:
http://www.thessdreview.com/our-reviews/s3700/
"The S3700 has power loss protection to keep a sudden outage from corrupting
data, but if the system detects a fault in the two capacitors powering the
system, it will voluntarily disable the volatile cache s
> Power-Loss Protection: In the rare event that power fails while the
> drive is operating, power-loss protection helps ensure that data isn’t
> corrupted.
Seems that not all power-protected SSDs are created equal:
http://lkcl.net/reports/ssd_analysis.html
The m500 is not tested but the m4 is.
>>We are using Supermicro 2uTwin nodes.
>>These have 2 nodes in 2u with each 12 disks.
>>
>>We use X9DRT-HF+ mainboards, 2x Intel DC S3500 SSD and 10x 2.5" 1TB 7.2k HDD
>>Seagate Constellation.2
>>They have SAS2008 controllers on board which can be flashed to be a JBOD
>>controller.
Thanks R
On 15.01.2014 15:03, Robert van Leeuwen wrote:
Power-Loss Protection: In the rare event that power fails while the
drive is operating, power-loss protection helps ensure that data isn’t
corrupted.
Seems that not all power protected SSDs are created equal:
http://lkcl.net/reports/ssd_analysis
On 01/15/2014 07:52 AM, NEVEU Stephane wrote:
Hi all,
I have to build a new Ceph storage architecture replicated between
two datacenters (for a Disaster Recovery Plan), so basically 2x30 terabits
(2x3.75 terabytes).
I can only buy Dell servers.
I planned to use 2x1gb (LACP) for the replicatio
It's also good to note that the m500 has built in RAIN protection
(basically, diagonal parity at the nand level). Should be very good for
journal consistency.
Sent from my mobile device. Please excuse brevity and typographical errors.
On Jan 15, 2014 9:07 AM, "Stefan Priebe" wrote:
> Am 15.01
On 01/15/2014 08:03 AM, Robert van Leeuwen wrote:
Power-Loss Protection: In the rare event that power fails while the
drive is operating, power-loss protection helps ensure that data isn’t
corrupted.
Seems that not all power protected SSDs are created equal:
http://lkcl.net/reports/ssd_analysi
On 1/15/14, 9:20 AM, Mark Nelson wrote:
> I guess I'd probably look at the R520 in an 8 bay configuration with an
> E5-2407 and 4 1TB data disks per chassis (along with whatever OS disk
> setup you want). That gives you 4 PCIE slots for the extra network
> cards, the option for a hardware raid con
Hmm, the Crucial m500 is pretty slow. The biggest one doesn't even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
Actually, I don't know the price difference between the Crucial and the Intel,
but the Intel looks more suitable to me. Especially after Mark's comment.
Séba
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get these values from? I have some 960GB ones and they all have
>450MB/s write speed. Also in tests like here
On 01/15/2014 08:29 AM, Derek Yarnell wrote:
On 1/15/14, 9:20 AM, Mark Nelson wrote:
I guess I'd probably look at the R520 in an 8 bay configuration with an
E5-2407 and 4 1TB data disks per chassis (along with whatever OS disk
setup you want). That gives you 4 PCIE slots for the extra network
c
I use the H700 on Dell R815, 4 nodes. No performance problems.
Configuration:
1 SSD Intel 530 - OS and journal.
5 OSD HDD 600G: DELL certified - WD/HITACHI/SEAGATE.
Replication size=2. IOPS ~4k, no VM.
On Jan 15, 2014 at 15:47, "Alexandre DERUMIER"
wrote:
>
> Hello List,
>
> I'm going to bu
On 01/15/2014 08:04 AM, Alexandre DERUMIER wrote:
We are using Supermicro 2uTwin nodes.
These have 2 nodes in 2u with each 12 disks.
We use X9DRT-HF+ mainboards, 2x Intel DC S3500 SSD and 10x 2.5" 1TB 7.2k HDD
Seagate Constellation.2
They have SAS2008 controllers on board which can be flashed t
On 15.01.2014 15:44, Mark Nelson wrote:
On 01/15/2014 08:39 AM, Stefan Priebe wrote:
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even
reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get this value
On 01/15/2014 08:39 AM, Stefan Priebe wrote:
On 15.01.2014 15:34, Sebastien Han wrote:
Hum the Crucial m500 is pretty slow. The biggest one doesn’t even
reach 300MB/s.
Intel DC S3700 100G showed around 200MB/sec for us.
where did you get these values from? I have some 960GB ones and they all have >
Sorry I was only looking at the 4K aligned results.
Sébastien Han
Cloud Engineer
"Always give 100%. Unless you're giving blood.”
Phone: +33 (0)1 49 70 99 72
Mail: sebastien@enovance.com
Address : 10, rue de la Victoire - 75009 Paris
Web : www.enovance.com - Twitter : @enovance
Kernel Patch for Intel S3700, Intel 530...
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
--- a/drivers/scsi/sd.c 2013-09-14 12:53:21.0 +0400
+++ b/drivers/scsi/sd.c 2013-12-19 21:43:29.0 +0400
@@ -137,6 +137,7 @@
 char *buffer_data;
 struct scsi_mode_
The S3700 does not need stuff like this. It internally ignores flushes.
Also there is an upstream one:
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=39c60a0948cc06139e2fbfe084f83cb7e7deae3b
Stefan
On 15.01.2014 15:47, Ирек Фасихов wrote:
Kernel Patch for In
On 15.01.2014 15:50, Sebastien Han wrote:
However you have to get > 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
that's correct, we just use them instead of SATA or SAS disks in ceph ;-) so
960GB makes sense.
actually, they're very inexpensive as far as SSDs go. The 960GB m500 can
be had on Amazon for $499 US on prime (as of yesterday anyway).
On Jan 15, 2014 9:50 AM, "Sebastien Han" wrote:
> However you have to get > 480G
On 01/15/2014 08:50 AM, Sebastien Han wrote:
However you have to get > 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
Looks like the M500 in 480GB capacity is around $300 on Amazon right now
vs about $300 for a 200GB DC S3700. The M500 has more capacit
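The cost argument reduces to price per gigabyte. A quick comparison from the figures quoted in this thread (assuming the Amazon US prices mentioned, January 2014):

```python
# Rough $/GB comparison from the prices quoted in this thread.
drives = {
    "Crucial M500 480GB": (300, 480),   # (price USD, capacity GB)
    "Crucial M500 960GB": (499, 960),
    "Intel DC S3700 200GB": (300, 200),
}
for name, (price, gb) in drives.items():
    print(f"{name}: ${price / gb:.2f}/GB")
```

The M500s come in well under $1/GB while the DC S3700 is around $1.50/GB, which is the capacity-vs-endurance trade-off being debated here.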
However you have to get > 480GB, which is ridiculously large for a journal. I
believe they are pretty expensive too.
>>We just got in a test chassis of the 4 node in 4U fattwin setup with 10
>>spinning disks, 2x DC S3700s, 1 system disk, and dual E5 CPUs per node.
>>The guys in our data center said the thing weighs about 260lbs and
>>hangs out the back of the rack. :D
Thanks Mark.
what CPU frequency/number
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
We just got in a test chassis of the 4 node in 4U fattwin setup with 10
spinning disks, 2x DC S3700s, 1 system disk, and dual E5 CPUs per node.
The guys in our data center said the thing weighs about 260lbs and
hangs out the back of the rack. :D
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Afaik, the boston solution was based on Calxeda gear...
http://www.boston.co.uk/press/2013/10/boston-li
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Cheers
On 15/01/2014 16:16, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
We just got in a test chassis of the 4 node in 4U fa
On 15/01/2014 16:25, Mark Nelson wrote:
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Afaik, the boston solution was based on Calxeda gear...
ht
On 01/15/2014 09:35 AM, Cedric Lemarchand wrote:
On 15/01/2014 16:25, Mark Nelson wrote:
On 01/15/2014 09:22 AM, Cedric Lemarchand wrote:
Hello guys,
What about ARM hardware? Did someone already use something like Viridis?
http://www.boston.co.uk/solutions/viridis/viridis-4u.aspx
Afaik, th
On 1/15/2014 9:16 AM, Mark Nelson wrote:
On 01/15/2014 09:14 AM, Alexandre DERUMIER wrote:
For the system disk, do you use some kind of internal flash memory disk?
We probably should have, but ended up with I think just a 500GB 7200rpm
disk, whatever was cheapest. :)
If your system has to
Hi Derek,
thanks for the information about the r720xd.
Seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
----- Original Message -----
From: "Derek Yarnell"
To: ceph-users@lists.ceph.com
Sent: Wednesday, January 15, 2014 15:29:07
Subj
Le 15/01/2014 17:34, Alexandre DERUMIER a écrit :
Hi Derek,
thanks for the information about the r720xd.
Seems that a 24-drive chassis is also available.
What is the advantage of using the flexbay for SSDs? Bypassing the backplane?
From what I understand the "flexbay" are inside the box, typically
useful
>>From what I understand the "flexbay" are inside the box, typically
>>useful for OS (SSD) drives, then it lets you use all the front hotplug
>>slots with larger platter drives.
Yes, it's inside the box.
I asked the question because of Derek's message:
"
They currently give me a hard time abo
Hrm, at first glance that looks like the on-disk state got corrupted
somehow. If it's only one OSD which has this issue, I'd turn it off
and mark it out. Then if the cluster recovers properly, wipe it and
put it back in as a new OSD.
-Greg
On Wed, Jan 15, 2014 at 1:49 AM, Rottmann, Jonas (centron
On 01/15/2014 10:53 AM, Alexandre DERUMIER wrote:
>> >From what I understand the "flexbay" are inside the box, typically
>>> useful for OS (SSD) drives, then it lets you use all the front hotplug
>>> slots with larger platter drives.
>
> Yes, it's inside the box.
>
> I ask the question because
On 1/15/14, 1:35 PM, Dimitri Maziuk wrote:
> On 01/15/2014 10:53 AM, Alexandre DERUMIER wrote:
From what I understand the "flexbay" are inside the box,
typically useful for OS (SSD) drives, then it lets you use
all the front hotplug slots with larger platter drives.
>>
>> Yes, it's
On 01/15/2014 12:42 PM, Derek Yarnell wrote:
...
> I think this is just a configuration Dell has been
> unwilling to sell.
Ah.
Every once in a while they make their BIOS complain when it finds a
non-"Dell approved" disk. Once enough customers start screaming they
release a "BIOS update" t
Hi,
perhaps the disk has a problem?
Have you looked with smartctl?
(apt-get install smartmontools; smartctl -A /dev/sdX )
Udo
On 15.01.2014 10:49, Rottmann, Jonas (centron GmbH) wrote:
>
> Hi,
>
>
>
> I now did an upgrade to dumpling (ceph version 0.67.5
> (a60ac9194718083a4b6a225fc17cad6096c69
Randy,
Use librados. If you want to test out my latest doc and provide some
feedback, I'd appreciate it:
http://ceph.com/docs/wip-doc-librados-intro/rados/api/librados-intro/
On Mon, Jan 13, 2014 at 11:40 PM, Randy Breunling wrote:
> New to CEPH...so I'm on the learning-curve here.
> Have been
Jeff,
First, if you've specified the public and cluster networks in [global], you
don't need to specify it anywhere else. If you do, they get overridden.
That's not the issue here. It appears from your ceph.conf file that you've
specified an address on the cluster network. Specifically, you specif
If I understand correctly then, I should either not specify mon addr or
set it to an external IP?
Thanks for the clarification,
Jeff
On 01/15/2014 03:58 PM, John Wilkins wrote:
Jeff,
First, if you've specified the public and cluster networks in
[global], you don't need to specify it anywher
Monitors use the public network, not the cluster network. Only OSDs use the
cluster network. The purpose of the cluster network is that OSDs do a lot
of heartbeat checks, data replication, recovery, and rebalancing. So the
cluster network will see more traffic than the front end public network.
See
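A minimal ceph.conf sketch of the layout John describes: both networks declared once in [global], and the monitor address on the public network. All hostnames and subnets below are placeholder assumptions, not values from this thread:

```ini
[global]
public network = 192.168.1.0/24   ; clients, monitors, OSD front side
cluster network = 10.0.0.0/24     ; OSD replication/heartbeat traffic only

[mon.a]
host = mon01
mon addr = 192.168.1.10:6789      ; must be a public-network address
```

Since monitors never use the cluster network, giving `mon addr` a cluster-network IP (as in Jeff's config) leaves clients unable to reach the monitor.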
I am facing a problem when requesting Ceph radosgw using the Swift API. The
connection is getting closed after reading 512 bytes from the stream. This
problem only occurs if I send a GET object request with a Range header.
Here is the request and response:
Request--->
GET http://rgw.mydomain.com/swift/v1/
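For comparison, a compliant ranged GET should return 206 Partial Content with a Content-Range header and exactly the requested byte count, rather than truncating at 512 bytes. A self-contained sketch of that expected behavior against a local stand-in server (not radosgw):

```python
# Demonstrates what a correct HTTP Range response looks like: status 206,
# a Content-Range header, and exactly the requested bytes. The local
# server here is a stand-in for illustration, not radosgw.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

BODY = bytes(range(256)) * 16  # 4096-byte test object

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse a simple "bytes=start-end" range (no open-ended forms).
        start, end = self.headers["Range"].removeprefix("bytes=").split("-")
        start, end = int(start), int(end)
        chunk = BODY[start:end + 1]
        self.send_response(206)
        self.send_header("Content-Range", f"bytes {start}-{end}/{len(BODY)}")
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/obj", headers={"Range": "bytes=0-1023"})
resp = conn.getresponse()
data = resp.read()
print(resp.status, len(data))  # -> 206 1024
server.shutdown()
```

If radosgw instead closes the connection after 512 bytes with no 206/Content-Range, that points at the gateway (or a frontend proxy) mishandling the Range header rather than the client.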