On 01/22/2014 05:47 PM, alistair.whit...@barclays.com wrote:
All,
Having failed to successfully add new monitors using ceph-deploy, I
tried the documented manual approach.
The platform:
OS: RHEL 6.4
Ceph: Emperor
Ceph-deploy: 1.3.4-0
When following the procedure on an existing node in a
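For reference, a rough sketch of the documented manual monitor-add steps (the monitor id "c" and the IP address are placeholders):

  mkdir -p /var/lib/ceph/mon/ceph-c
  ceph auth get mon. -o /tmp/mon.keyring
  ceph mon getmap -o /tmp/monmap
  ceph-mon -i c --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  ceph mon add c 192.168.0.12:6789
  ceph-mon -i c --public-addr 192.168.0.12:6789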
On Wed, Jan 22, 2014 at 10:43 PM, Schlacta, Christ aarc...@aarcane.org wrote:
Can Ceph handle a configuration where a cluster node is not always on, but
rather gets booted periodically to sync to the cluster, and is also
sometimes up full time as demand requires? I ask because I want to put an
On Thu, Jan 23, 2014 at 3:35 AM, bf bf31...@gmail.com wrote:
Gregory Farnum greg@... writes:
Yes, Ceph does all the heavy lifting. Multiple PGs with the same OSDs
can happen (eg, if you only have two OSDs, all PGs will be on both),
but it behaves about as well as is possible within the
Hi,
I'm trying to deploy Ceph 0.72.2 on Fedora 20, but having some
issues.
I have tried compiling Ceph myself as well install rpms from
http://gitbuilder.ceph.com/ceph-rpm-fedora20-x86_64-basic/ref/emperor/
with the same result: my OSDs are dying 15 minutes
Hi,
Is there already an SELinux policy module for CephFS available?
My understanding is that such a policy should either come with the RPM that needs
it, in this case ceph (which is only partly true as you can mount CephFS
without having the ceph RPM) or, probably better, go into a separate RPM,
[ Returning list to thread. ]
On Wed, Jan 22, 2014 at 11:37 PM, Dmitry Lysenko t...@sovtest.ru wrote:
On 22.01.2014 13:01, Gregory Farnum wrote:
On Wed, Jan 22, 2014 at 3:23 AM, Dmitry Lysenko t...@sovtest.ru wrote:
Good day.
Some time ago I changed pg_num like this
There are some seldom-used files (namely install ISOs) that I want to throw
in ceph to keep them widely available, but throughput and response times
aren't critical for them, nor is redundancy. Is it possible to throw them
into OSDs on cheap, bulk offline storage, and more importantly, will idle
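A rough sketch of how that could look, assuming a CRUSH ruleset (id 3 here) that already maps to the cheap/bulk OSDs:

  ceph osd pool create isos 64 64
  ceph osd pool set isos crush_ruleset 3    # point the pool at the bulk OSDs
  ceph osd pool set isos size 1             # no redundancy, per the question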
On Jan 23, 2014, at 4:18 PM, Gregory Farnum g...@inktank.com
wrote:
On Wed, Jan 22, 2014 at 3:23 PM, Karol Kozubal karol.kozu...@elits.com
wrote:
Hi Everyone,
I have a few questions concerning mounting cephfs with ceph-fuse in fstab at
boot. I am currently successfully mounting cephfs
Hi,
because the libleveldb1 in Debian wheezy is also quite old
http://packages.debian.org/wheezy/libleveldb1
libleveldb1 (0+20120530.gitdd0d562-1)
Yes, that version is buggy and was causing the issue.
I took the source deb from debian sid and rebuilt it for precise in my case:
On Thu, Jan 23, 2014 at 8:07 AM, Arne Wiebalck arne.wieba...@cern.ch wrote:
On Jan 23, 2014, at 4:18 PM, Gregory Farnum g...@inktank.com
wrote:
On Wed, Jan 22, 2014 at 3:23 PM, Karol Kozubal karol.kozu...@elits.com
wrote:
Hi Everyone,
I have a few questions concerning mounting cephfs
Hi,
I'm using the latest Emperor Ceph release, and trying to bring up the S3 Object
Gateway.
I have a Ceph cluster deployed on an Ubuntu 13.10 based distribution.
When I attempt to create an S3 bucket using the boto Python module, I get the
following error:
boto.exception.S3ResponseError:
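For comparison, a minimal boto sketch for creating a bucket against radosgw; the gateway host and keys are placeholders, and radosgw generally wants the ordinary (path-style) calling format:

  import boto
  import boto.s3.connection

  # placeholders: the radosgw user's keys and the gateway host
  conn = boto.connect_s3(
      aws_access_key_id='MY_ACCESS_KEY',
      aws_secret_access_key='MY_SECRET_KEY',
      host='gateway.example.com',
      is_secure=False,
      calling_format=boto.s3.connection.OrdinaryCallingFormat(),
  )
  bucket = conn.create_bucket('my-test-bucket')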
Thanks.
Do I need to rebuild the whole set of ceph packages with libleveldb-dev?
Or can I simply backport libleveldb1 and use the ceph packages from the Inktank
repository?
----- Original Message -----
From: Sylvain Munaut s.mun...@whatever-company.com
To: Alexandre DERUMIER aderum...@odiso.com
Cc: Mark
On Thu, Jan 23, 2014 at 8:24 AM, David Francheski (dfranche)
dfran...@cisco.com wrote:
Hi,
I'm using the latest Emperor Ceph release, and trying to bring up the S3
Object Gateway.
I have a Ceph cluster deployed on an Ubuntu 13.10 based distribution.
When I attempt to create an S3 bucket
On Thu, Jan 23, 2014 at 6:27 PM, Alexandre DERUMIER aderum...@odiso.com wrote:
Thanks.
Do I need to rebuild the whole set of ceph packages with libleveldb-dev?
Or can I simply backport libleveldb1 and use the ceph packages from the Inktank
repository?
I had to rebuild ceph because the old one is a
But there is an official libleveldb version from ceph for wheezy:
http://gitbuilder.ceph.com/leveldb-deb-x86_64/
http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
and
http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb-dev_1.9.0-1~bpo70+1_amd64.deb
On
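A minimal sketch of installing that backport on wheezy (straight dpkg, no pinning):

  wget http://gitbuilder.ceph.com/leveldb-deb-x86_64/libleveldb1_1.9.0-1~bpo70+1_amd64.deb
  sudo dpkg -i libleveldb1_1.9.0-1~bpo70+1_amd64.deb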
Good day.
I have a running ceph cluster and would like to change the setting mon osd
down out interval = 3600. Is there a way to do this without having to
restart any ceph services?
Kind regards
Alessandro Brega
You can inject settings into a running cluster. To set the mon osd
down out interval for all the OSDs, it would be:
ceph tell osd.* injectargs '--mon-osd-down-out-interval 3600'
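To double-check that the change took effect, one option is to read the value back over a daemon's admin socket (the socket path below is the default location and may differ):

  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep mon_osd_down_out_interval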
On Thu, Jan 23, 2014 at 12:52 PM, Alessandro Brega
alessandro.bre...@gmail.com wrote:
Good day.
I have a running
Hi,
have a look at
http://ceph.com/docs/master/rados/configuration/ceph-conf/#runtime-changes
best regards,
Kurt
Alessandro Brega wrote:
Good day.
I have a running ceph cluster and would like to change the setting
mon osd down out interval = 3600. Is there a way to do this without
having
Hi all-
I'm creating some scripted performance testing for my Ceph cluster. The part
relevant to my questions works like this:
1. Create some pools
2. Create and map some RBDs
3. Write-in the RBDs using DD or FIO
4. Run FIO testing on the RBDs (small block random and
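A minimal sketch of steps 1-4 (pool name, image size and fio parameters are assumptions):

  ceph osd pool create perftest 128
  rbd create perftest/test0 --size 10240        # 10 GB image
  rbd map perftest/test0                        # appears under /dev/rbd/perftest/test0
  dd if=/dev/zero of=/dev/rbd/perftest/test0 bs=4M count=2560 oflag=direct
  fio --name=randrw --filename=/dev/rbd/perftest/test0 --rw=randrw --bs=4k \
      --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based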
What guarantees does Ceph place on data integrity? ZFS uses a Merkle tree
to guarantee the integrity of all data and metadata on disk and will
ultimately refuse to return duff data to an end user.
I know ceph provides some integrity mechanisms and has a scrub feature.
Does it provide
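Ceph's scrubbing can also be triggered by hand; the PG id below is a placeholder:

  ceph pg scrub 2.1f        # compares object sizes/metadata across replicas
  ceph pg deep-scrub 2.1f   # also reads the objects and verifies their contents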
So I just have a few more questions that are coming to mind. Firstly, I
have OSDs whose underlying filesystems can be... dun dun dun... resized!
If I choose to expand my allocation to Ceph, I can in theory do so by
expanding the quota on the OSDs. (I'm using ZFS.) Similarly, if the OSD is
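A sketch of the resize-and-rebalance idea, assuming a ZFS dataset tank/ceph/osd0 backing osd.0; the weight value is an assumption (roughly capacity in TB):

  zfs set quota=2T tank/ceph/osd0
  ceph osd crush reweight osd.0 2.0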
On Thu, Jan 23, 2014 at 8:36 PM, David Francheski (dfranche)
dfran...@cisco.com wrote:
Thanks Yehuda,
I've attached both the apache2 access/error logs, as well as the radosgw
log file.
It doesn't look like /var/www/s3gw.fcgi is even being called.
I put a touch /tmp/radosgw-started-flag
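For comparison, the s3gw.fcgi wrapper is normally just a small shell script along these lines (the client name is an assumption), so a touch added before the exec is a reasonable way to confirm Apache ever runs it:

  #!/bin/sh
  touch /tmp/radosgw-started-flag
  exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway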
On 01/23/2014 01:23 AM, Karol Kozubal wrote:
This works correctly when I mount at run time. However, I am running into
issues doing this at boot time through fstab with the following command.
As per the documentation on ceph.com, I am passing the cephfs root as part
of the first argument in fstab:
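For reference, a plain ceph-fuse entry in fstab generally has this shape; the id, conf path and mount point are placeholders:

  id=admin,conf=/etc/ceph/ceph.conf  /mnt/cephfs  fuse.ceph  defaults,_netdev  0 0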