Hi,
Plus, reads will still come from your non-SSD disks unless you're using
something like flashcache in front, and as Greg said, having many more IOPS
available for your DB often makes a difference (depending on load, usage,
etc., of course).
We're using the Samsung 840 Pro 256GB, pretty much like Martin.
On 21/10/2013 22:45, Gregory Farnum wrote:
On Mon, Oct 21, 2013 at 8:05 AM, Pieter Steyn pie...@kaluma.com wrote:
Hi all,
I'm using Ceph as a filestore for my nginx web server, in order to have
shared storage and redundancy with automatic failover.
The cluster is not high spec, but given my
Hi Alfredo
Thanks for picking up on this
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Montag, 21. Oktober 2013 14:17
To: Fuchs, Andreas (SwissTXT)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Block Device install
On Mon, Oct 21,
Hi all!
I updated my Ceph version from 0.56.3 to 0.62. I installed all the dev
packages and built 0.62 successfully!
But when I use the 0.62 init-ceph, none of the OSDs will restart!
It returns a failure: /usr/local/bin/ceph-osd -i 0 --pid-file
/var/run/osd.0.pid -c /tmp/fetched.ceph.conf.12035
and
try with qemu-img:
qemu-img convert -p -f vpc hyper-v-image.vhd
rbd:rbdpool/ceph-rbd-image:mon_host=ceph-mon-name
where ceph-mon-name is the Ceph monitor host name or IP
2013/10/22 James Harper james.har...@bendigoit.com.au:
Can any suggest a straightforward way to import a VHD to a ceph RBD?
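If the local qemu-img happens to be built without RBD support, a two-step route works as well: convert to raw, then import with the rbd tool. A rough sketch (file, pool and image names are placeholders; the `run` wrapper prints each command as a dry run):

```shell
# Dry run: 'run' just prints each command; delete the wrapper to execute.
run() { echo "$@"; }

# Step 1: convert the VHD ('vpc' format) to a raw file.
run qemu-img convert -f vpc -O raw hyper-v-image.vhd image.raw
# Step 2: load the raw file into a new RBD image.
run rbd import image.raw rbdpool/ceph-rbd-image
```

The intermediate raw file needs as much local disk as the virtual disk's full size, which is why the direct qemu-img-to-rbd conversion is preferable when available.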
Hi,
I was wondering if anyone has had any experience in attempting to use an RBD
volume as a clustered drive in Windows Failover Clustering? I'm getting the
impression that it won't work since it needs to be either an iSCSI LUN or a
SCSI LUN.
Thanks,
Damien
On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas (SwissTXT)
andreas.fu...@swisstxt.ch wrote:
Hi Alfredo
Thanks for picking up on this
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Montag, 21. Oktober 2013 14:17
To: Fuchs, Andreas (SwissTXT)
Cc:
RBD can be re-published via iSCSI using a gateway host to sit in
between, for example using targetcli.
On 2013-10-22 13:15, Damien Churchill wrote:
Hi,
I was wondering if anyone has had any experience in attempting to use
an RBD volume as a clustered drive in Windows Failover Clustering?
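A rough sketch of such a gateway, assuming targetcli and the rbd kernel client on the gateway host (pool, image and IQN are placeholders, /dev/rbd0 is the device the kernel assigns on map, and the exact targetcli paths may differ by version; the `run` wrapper prints each command as a dry run):

```shell
# Dry run: 'run' just prints each command; delete the wrapper to execute.
run() { echo "$@"; }

# Map the RBD image into the gateway's kernel (appears as e.g. /dev/rbd0).
run rbd map rbdpool/cluster-disk
# Back an iSCSI LUN with that block device via LIO/targetcli.
run targetcli /backstores/block create name=cluster-disk dev=/dev/rbd0
run targetcli /iscsi create iqn.2013-10.com.example:cluster-disk
run targetcli /iscsi/iqn.2013-10.com.example:cluster-disk/tpg1/luns create /backstores/block/cluster-disk
```

The Windows nodes then see an ordinary iSCSI LUN, which satisfies Failover Clustering; the gateway host becomes a single point of failure unless it is itself made highly available.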
-Original Message-
From: Alfredo Deza [mailto:alfredo.d...@inktank.com]
Sent: Dienstag, 22. Oktober 2013 14:16
To: Fuchs, Andreas (SwissTXT)
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Block Device install
On Tue, Oct 22, 2013 at 3:39 AM, Fuchs, Andreas
Thanks Mark for the response. My comments inline...
From: Mark Nelson mark.nel...@inktank.com
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Rados bench result when increasing OSDs
Message-ID: 52653b49.8090...@inktank.com
Content-Type: text/plain; charset=ISO-8859-1; format=flowed
On
Hi All,
I have a Ceph cluster set up with 3 nodes, with a 1Gbps public network and a
10Gbps private cluster network that is not accessible from the public network.
I want to force the OSDs to use only the private network, and the public
network for the MONs and MDS. I am using ceph-deploy to set up the cluster and
Hi Kyle and Greg,
I will get back to you with more details tomorrow, thanks for the response.
Thanks,
Guang
On 2013-10-22, at 9:37 AM, Kyle Bader kyle.ba...@gmail.com wrote:
Besides what Mark and Greg said it could be due to additional hops through
network devices. What network devices are you using,
If you get this message:
RuntimeError: Failed to execute command: su -c 'rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc;'
point curl at your proxy via a .curlrc in root's home directory:
[root@cephtest01 ~]# cat .curlrc
proxy = http://proxy.de.signintra.com:80
2013/10/22 Michael Kirchner michael.kirch...@dbschenker.com:
If you get this message:
RuntimeError: Failed to execute command: su -c 'rpm --import
https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc;'
change the configuration of curl to
[root@cephtest01 ~]# cat .curlrc
proxy =
http://ceph.com/docs/master/rados/configuration/network-config-ref/
On 22 Oct 2013 at 18:22, Abhay Sachan abhay...@gmail.com
wrote:
Hi All,
I have a ceph cluster setup with 3 nodes which has 1Gbps public network
and 10Gbps private cluster network which is not accessible from public
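The split described above goes in the [global] section of ceph.conf before the daemons are deployed; OSDs then use the cluster network for replication and heartbeat traffic, while clients, MONs and MDS stay on the public network. A minimal sketch (both subnets are placeholders):

```ini
[global]
    # 1 Gbps network reachable by clients, MONs and MDS (placeholder subnet)
    public network = 192.168.0.0/24
    # 10 Gbps OSD-only network for replication traffic (placeholder subnet)
    cluster network = 192.168.1.0/24
```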
Hi Abhay
Try to set this in your ceph.conf:
cluster_network = 192.168.1.0/24
public_network = 192.168.0.0/24
Obviously, use your own IP ranges for the two variables: the cluster network
should be your 10Gbps private subnet and the public network your 1Gbps one.
Regards
___
ceph-users mailing list
ceph-users@lists.ceph.com
Hi,
I accidentally installed Saucy Salamander. Does the project have a
timeframe for supporting this Ubuntu release?
Thanks,
JL
For the time being, you can install the Raring debs on Saucy without issue.
echo deb http://ceph.com/debian-dumpling/ raring main | sudo tee
/etc/apt/sources.list.d/ceph.list
I'd also like to register a +1 request for official builds targeted at
Saucy.
Cheers,
Mike
On 10/22/2013 11:42
And a +1 from me as well. It would appear that Ubuntu has picked up the 0.67.4
source and included a build of it in their official repo, so you may be able to
get by with those until the next point release.
http://packages.ubuntu.com/search?keywords=ceph
On Oct 22, 2013, at 11:46 AM, Mike
Off topic perhaps, but I'm finding it pretty buggy just now; not sure
I'd want it underpinning Ceph at the moment.
On 2013-10-22 16:51, Mike Lowe wrote:
And a +1 from me as well. It would appear that ubuntu has picked up
the 0.67.4 source and included a build of it in their official repo,
so
Hello,
we're using a small Ceph cluster with 8 nodes, each with 4 OSDs. People are
using it through instances and volumes in an OpenStack platform.
We're facing a HEALTH_ERR with full or near-full OSDs:
cluster 5942e110-ea2f-4bac-80f7-243fe3e35732
health HEALTH_ERR 1 full osd(s); 13 near full
Thanks for the quick responses. Seems to be working OK for me, but...
[OT]
I keep hitting this issue where ceph-deploy will not mkdir /etc/ceph/
before it tries to write cluster configuration to
/etc/ceph/{cluster}.conf. Manually creating the dir on each mon node
allows me to issue a
I currently have two datacenters (active/passive) using NFS storage.
Backups are done with nightly rsyncs. I want to replace this with
RadosGW and RGW geo-replication. I plan to roll out production after
Emperor comes out.
I'm trying to figure out how to import my existing data. The data
/etc/ceph should be installed by the package named 'ceph'. Make sure
you're using ceph-deploy install to install the Ceph packages before
trying to use the machines for mon create.
On 10/22/2013 10:32 AM, LaSalle, Jurvis wrote:
Thanks for the quick responses. Seems to be working OK for me,
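The order described above can be sketched like this (host names are placeholders; the `run` wrapper prints each command as a dry run):

```shell
# Dry run: 'run' just prints each command; delete the wrapper to execute.
run() { echo "$@"; }

# 1. Write the initial {cluster}.conf on the admin host.
run ceph-deploy new mon1 mon2 mon3
# 2. Install the ceph packages; this also creates /etc/ceph on each node.
run ceph-deploy install mon1 mon2 mon3
# 3. Only then create the monitors, so the config has somewhere to land.
run ceph-deploy mon create mon1 mon2 mon3
```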
Hello,
What I have used to rebalance my cluster is:
ceph osd reweight-by-utilization
we're using a small Ceph cluster with 8 nodes, each with 4 OSDs. People are
using it
through instances and volumes in an OpenStack platform.
We're facing a HEALTH_ERR with full or near-full OSDs:
cluster
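A sketch of that rebalancing workflow (the 120 threshold, OSD id and override weight are placeholders; the `run` wrapper prints each command as a dry run):

```shell
# Dry run: 'run' just prints each command; delete the wrapper to execute.
run() { echo "$@"; }

# Reweight OSDs whose utilization exceeds 120% of the cluster average.
run ceph osd reweight-by-utilization 120
# Or nudge a single full OSD down by hand (override weight is 0.0-1.0).
run ceph osd reweight 12 0.8
# Then watch the full / near-full warnings clear as data moves.
run ceph health detail
```

Note this sets the temporary override weight, not the CRUSH weight, so it is easy to revert with `ceph osd reweight <id> 1.0` once capacity is added.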
hi all!
I have a 12-node Ceph cluster (1 mon, 2 MDS, 9 OSDs). Today my osd.0, osd.3
and osd.4 went down and I cannot restart them!
osd.0, osd.3 and osd.4 are all on the same host, whose name is osd0.
Firstly, here is the OSD log:
# tail -f /var/log/ceph/osd.0.log
ceph version
Hey all,
The OpenStack community has spawned a newish project, Manila, an
effort spearheaded by NetApp to provide a file-sharing service
analogous to Cinder, but for filesystems instead of block devices. The
elevator pitch:
Isn't it great how OpenStack lets you manage block devices for your
hosts?