What exactly is the clone field from 'rados df' meant for?
Steps tried (a rough command sketch follows below):
Created an rbd image and mapped it
Wrote about 1GB of data to /dev/rbd1 using fio
Unmapped the rbd image
Took a snapshot
Mapped the rbd image again and overwrote 1GB of data using fio
Unmapped the rbd image
Took a snapshot
Mapped the rbd image again and wrote
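For reference, a rough sketch of that sequence as commands, assuming a pool "testpool" and an image "testimg" (names, sizes and fio options here are placeholders, not the exact invocations used):
# rbd create testpool/testimg --size 10240
# rbd map testpool/testimg                # shows up as e.g. /dev/rbd1
# fio --name=fill --filename=/dev/rbd1 --rw=write --bs=4M --size=1G --direct=1
# rbd unmap /dev/rbd1
# rbd snap create testpool/testimg@snap1
# rbd map testpool/testimg
# fio --name=rewrite --filename=/dev/rbd1 --rw=write --bs=4M --size=1G --direct=1
# rbd unmap /dev/rbd1
# rbd snap create testpool/testimg@snap2
# rados df                                # the per-pool output includes the clone column in question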
Thanks, Loic!
I will join.
On Thu, Oct 30, 2014 at 1:54 AM, Loic Dachary l...@dachary.org wrote:
Hi Ceph,
TL;DR: Register for the Micro Ceph and OpenStack Design Summit November 3rd,
2014 11:40am
http://kilodesignsummit.sched.org/event/f2e49f4547a757cc3d51f5641b2000cb
November 3rd,
On Wed, 29 Oct 2014 15:32:57 + Michal Kozanecki wrote:
[snip]
With Ceph handling the
redundancy at the OSD level I saw no need for ZFS mirroring or
raidz; instead, if ZFS detects corruption, rather than self-healing it
sends a read failure for the pg file to ceph, and then ceph's scrub
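For context, when a scrub does flag such an object, the usual manual follow-up is roughly the following (the PG id is only an example taken from the health output); this is the generic scrub/repair flow, not necessarily what the ZFS-backed setup described above does:
# ceph health detail | grep inconsistent
# ceph pg repair 2.1f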
On 10/27/2014 06:37 PM, Patrick Darley wrote:
Hi there
Over the last week or so, I've been trying to connect a ceph monitor
node running on a Baserock system
to a simple 3-node Ubuntu ceph cluster.
The 3-node Ubuntu cluster was created by following the documented Quick
installation
On 10/30/2014 05:54 AM, Sage Weil wrote:
On Thu, 30 Oct 2014, Nigel Williams wrote:
On 30/10/2014 8:56 AM, Sage Weil wrote:
* *Degraded vs misplaced*: the Ceph health reports from 'ceph -s' and
related commands now make a distinction between data that is
degraded (there are fewer than
Hi Everyone,
I upgraded Ceph to Giant by installing the *.tar.gz package, but some
errors related to Object Trimming or Snap Trimming appeared:
I think there are some missing objects that are not being recovered.
ceph version 0.86-106-g6f8524e (6f8524ef7673ab4448de2e0ff76638deaf03cae8)
1:
Hi,
I've noticed the following messages always accumulate in the OSD log before it
exhausts all memory:
2014-10-30 08:48:42.994190 7f80a2019700 0 log [WRN] : slow request
38.901192 seconds old, received at 2014-10-30 08:48:04.092889:
osd_op(osd.29.3076:207644827
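Before the OSD exhausts its memory, it may be worth dumping what those slow requests are blocked on via the admin socket (osd.29 is used here only because it appears in the log line above, and the socket path assumes the default location):
# ceph --admin-daemon /var/run/ceph/ceph-osd.29.asok dump_ops_in_flight
# ceph --admin-daemon /var/run/ceph/ceph-osd.29.asok dump_historic_ops
# ceph --admin-daemon /var/run/ceph/ceph-osd.29.asok perf dump | less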
Hi Christopher,
Very interesting setup :-) Last week-end I discussed this in theory with Johan
Euphrosine and did not know you had something already. Deploying a mon in a
container is fairly straightforward and I wonder if the boot script
Hi,
Will http://ceph.com/rpm/ also be updated to have the giant packages?
Thanks
Kenneth
- Message from Patrick McGarry patr...@inktank.com -
Date: Wed, 29 Oct 2014 22:13:50 -0400
From: Patrick McGarry patr...@inktank.com
Subject: Re: [ceph-users] where to download 0.87
Will there be debs?
On 30/10/14 10:37, Irek Fasikhov wrote:
Hi.
Use http://ceph.com/rpm-giant/
2014-10-30 12:34 GMT+03:00 Kenneth Waegeman kenneth.waege...@ugent.be:
Hi,
Will http://ceph.com/rpm/ also be updated to have the giant packages?
http://ceph.com/debian-giant/ :)
2014-10-30 12:45 GMT+03:00 Jon Kåre Hellan jon.kare.hel...@uninett.no:
Will there be debs?
On 30/10/14 10:37, Irek Fasikhov wrote:
Hi.
Use http://ceph.com/rpm-giant/
2014-10-30 12:34 GMT+03:00 Kenneth Waegeman kenneth.waege...@ugent.be:
Hi,
Will
Hello Greg,
You are right, I missed a comment before [mds] in ceph.conf. :-)
The new log file can be downloaded below because it's too big to send:
http://expirebox.com/download/1bdbc2c1b71c784da2bcd0a28e3cdf97.html
Thanks,
Jasper
From:
Hello,
Update your ceph.list file:
$ cat /etc/apt/sources.list.d/ceph.list
deb [arch=amd64] http://eu.ceph.com/debian-giant/ wheezy main
Linked from the http://ceph.com/get page.
Thanks,
JF
On 30/10/14 10:45, Jon Kåre Hellan wrote:
Will there be debs?
On 30/10/14 10:37, Irek Fasikhov
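For completeness, a minimal sketch of switching an existing Debian/Ubuntu node to the giant packages, assuming the wheezy suite used in the example above (adjust the suite name to your distro):
$ echo "deb http://ceph.com/debian-giant/ wheezy main" | sudo tee /etc/apt/sources.list.d/ceph.list
$ sudo apt-get update
$ sudo apt-get install --only-upgrade ceph ceph-common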
Ticket created: http://tracker.ceph.com/issues/9941
Great to see this discussion starting. There is work being done in this
repo for Ceph in Docker - https://github.com/Ulexus/docker-ceph
Independently of this, we're using RBD as backing for the Docker containers
but still installing Ceph as part of the system and mounting outside of the
container
Hi Daniel,
I can't remember if deleting a pool invokes the snap trimmer to do the
actual work deleting objects. But if it does, then it is most definitely
broken in everything except latest releases (actual dumpling doesn't have
the fix yet in a release).
Given a release with those fixes (see
On 2014-10-30 10:14:44 +, Dan van der Ster said:
Hi Daniel,
I can't remember if deleting a pool invokes the snap trimmer to do the
actual work deleting objects. But if it does, then it is most
definitely broken in everything except latest releases (actual dumpling
doesn't have the fix yet
Dear all:
I ran into a strange situation. First, here is my ceph status:
cluster fb155b6a-5470-4796-97a4-185859ca6953
..
osdmap e25234: 20 osds: 20 up, 20 in
pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316 kobjects
8202 GB used, 66170 GB / 74373 GB
Hi,
It would also be great to have a Ceph docker storage driver.
https://github.com/docker/docker/issues/8854
Cheers
On 30/10/2014 11:06, Hunter Nield wrote:
Great to see this discussion starting. There is work being done in this repo
for Ceph in Docker -
Hi Sage,
sorry to be late to this thread; I just caught this one as I was
reviewing the Giant release notes. A few questions below:
On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote:
[...]
* ACLs: implemented, tested for kernel client. not implemented for
ceph-fuse.
[...]
October 30 2014 11:32 AM, Daniel Schneller
daniel.schnel...@centerdevice.com wrote:
On 2014-10-30 10:14:44 +, Dan van der Ster said:
Hi Daniel,
I can't remember if deleting a pool invokes the snap trimmer to do the
actual work deleting objects. But if it does, then it is most
Apart from the current there-is-a-bug part, is the idea of copying a
snapshot into a new pool a viable one for a full backup/restore?
Hi,
Yesterday I removed two OSDs, to replace them with new disks. Ceph was
not able to completely reach an all active+clean state; some degraded
objects remain. However, the number of degraded objects is negative
(-82), see below:
2014-10-30 13:31:32.862083 mon.0 [INF] pgmap v209175: 768 pgs:
Great idea Loic!
I'd forgotten about the storage-driver side, but it's a great fit with Ceph
On Thu, Oct 30, 2014 at 6:50 PM, Loic Dachary l...@dachary.org wrote:
Hi,
It would also be great to have a Ceph docker storage driver.
https://github.com/docker/docker/issues/8854
Cheers
On
Hi Loic,
Back on this issue...
Using the epel package, I still get prepared-only disks, e.g.:
/dev/sdc :
/dev/sdc1 ceph data, prepared, cluster ceph, journal /dev/sdc2
/dev/sdc2 ceph journal, for /dev/sdc1
Looking at udev output, I can see that there is no ACTION=add with
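As a possible manual workaround while the udev side is sorted out (not verified against the EPEL build specifically), a partition stuck in the 'prepared' state can typically be activated by hand:
# ceph-disk activate /dev/sdc1
# ceph-disk activate-all        # alternatively, activate every prepared data partition on the node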
Hi Frederic,
The following pull request is still in review
https://github.com/ceph/ceph/pull/2648 . I hope it will get merged soon and put
this behind us ;-)
Cheers
On 30/10/2014 14:30, SCHAER Frederic wrote:
Hi loic,
Back on this issue...
Using the epel package, I still get
Hello Lukas,
The 'slow request' logs are expected while the cluster is in such a
state: the OSD processes simply aren't able to respond quickly to client
IO requests.
I would recommend trying to recover without the most problematic disk
(seems to be OSD.10?). Simply shut it down and see if
Dear Ceph users,
I just received 2 fresh new servers and I'm starting to build my Ceph
cluster.
The first step is: create the admin node in order to control the whole
cluster remotely.
I have a big cluster of XEN servers and I'll set up a new VM there only
for this.
I need some info:
1) As
What's everyone's opinion on having redundant power supplies in your OSD
nodes?
One part of me says let Ceph do the redundancy and plan for the hardware to
fail; the other side says that they are probably worth having as they lessen
the chance of losing a whole node.
Considering they can
On Thu, Oct 30, 2014 at 10:55 AM, Florian Haas flor...@hastexo.com wrote:
* ganesha NFS integration: implemented, no test coverage.
I understood from a conversation I had with John in London that
flock() and fcntl() support had recently been added to ceph-fuse, can
this be expected to Just
On 10/30/2014 01:38 PM, Erik Logtenberg wrote:
Hi,
Yesterday I removed two OSDs, to replace them with new disks. Ceph was
not able to completely reach an all active+clean state; some degraded
objects remain. However, the number of degraded objects is negative
(-82), see below:
So why
On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote:
Dear all:
I ran into a strange situation. First, here is my ceph status:
cluster fb155b6a-5470-4796-97a4-185859ca6953
..
osdmap e25234: 20 osds: 20 up, 20 in
pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316
On 10/30/2014 03:36 PM, Nick Fisk wrote:
What’s everyone’s opinion on having redundant power supplies in your OSD
nodes?
One part of me says let Ceph do the redundancy and plan for the hardware to
fail, the other side says that they are probably worth having as they lessen
the chance
The simple (to me, anyway) answer is if your data is that important, spend the
money to insure it. A few hundred $$$, even over a couple hundred systems, is
still good policy so far as I'm concerned, when you weigh the possible costs of
not being able to access the data versus the cost of a
On Thu, 30 Oct 2014, Ta Ba Tuan wrote:
Hi Everyone,
I upgraded Ceph to Giant by installing the *.tar.gz package, but some
errors related to Object Trimming or Snap Trimming appeared:
I think there are some missing objects that are not being recovered.
Note that this isn't giant, which is 0.87, but something
Hello cephers,
I would like to know if it is possible to underprovision the SSD disks when
using them with ceph-deploy?
I would like to leave at least 10% in unpartitioned space on each SSD to make
sure it will keep stable write performance over time. In the past, I've
experienced performance
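One possible way to do this, sketched under the assumption that ceph-deploy accepts a pre-made partition (host and device names are placeholders), is to partition only ~90% of the device yourself and hand ceph-deploy the partition rather than the whole disk:
# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary 0% 90%        # leaves the last 10% unpartitioned
# ceph-deploy osd prepare myhost:/dev/sdb1
# ceph-deploy osd activate myhost:/dev/sdb1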
if you don't have 2 powerfeeds, don't spend the money.
if you have 2 feeds, well, start with 2 PSUs for your switches ;)
if you stick with one PSU for the OSDs, make sure you have your cabling
(power and network, don't forget your network switches should be on same
power feeds ;) and crushmap
Hello,
I have a one-node ceph installation and when trying to import an
image using qemu, it works fine for some time and after that the osd
process starts using ~100% of the CPU and the number of op/s increases and
the writes decrease dramatically. The osd process doesn't appear to be
cpu bound,
On Thu, 30 Oct 2014, Florian Haas wrote:
Hi Sage,
sorry to be late to this thread; I just caught this one as I was
reviewing the Giant release notes. A few questions below:
On Mon, Oct 13, 2014 at 8:16 PM, Sage Weil s...@newdream.net wrote:
[...]
* ACLs: implemented, tested for kernel
Yesterday I removed two OSDs, to replace them with new disks. Ceph was
not able to completely reach an all active+clean state; some degraded
objects remain. However, the number of degraded objects is negative
(-82), see below:
So why didn't it reach that state?
Well, I dunno, I was
Hi all,
1.
How do I set a region's endpoints? How do I know how many endpoints there are?
2.
I followed the steps of 'create a region', but after that, I can list the new
region; the default region is always there.
3.
There is one rgw for each zone. After rgw starts up, I can find the pools
related
Thanks Michael, still no luck.
Taking the problematic OSD.10 down has no effect. Within minutes, more
OSDs fail on the same issue after consuming ~50GB of memory. Also, I can see
two of those cache-tier OSDs on separate hosts which remain at almost
200% CPU utilization all the time
I've performed
Hello all,
If you are running a pre-giant MDS and you install firefly ceph-fuse
packages, you will find that your fuse clients are unable to connect
to the filesystem. Thanks to ron-slc on IRC for reporting the issue.
http://tracker.ceph.com/issues/9945
If you are using the FUSE client with
I've just noticed that MB used is increasing by 60MB even though ceph
says that it writes only a few kB:
63603 MB data, 39809 MB used, 2346 GB / 2389 GB avail; 974 kB/s wr, 1277 op/s
63649 MB data, 39863 MB used, 2346 GB / 2389 GB avail; 974 kB/s wr, 1369 op/s
On Thu, Oct 30, 2014 at 5:13 PM,
Hi John,
and what if it's the other way around: having some clients with giant
ceph-fuse and a cluster on firefly?
I was planning on installing the new ceph-fuse on some of my test clients.
On Thu, Oct 30, 2014 at 4:59 PM, John Spray john.sp...@redhat.com wrote:
Hello all,
If you are
On Thu, 30 Oct 2014, Luis Periquito wrote:
Hi John,
and what if it's the other way around: having some clients with giant
ceph-fuse and a cluster on firefly?
I was planning on installing the new ceph-fuse on some of my test clients.
This will break in the same way. Sorry!
sage
On
Hello Lukas,
Unfortunately, I'm all out of ideas at the moment. There are some memory
profiling techniques which can help identify what is causing the memory
utilization, but it's a bit beyond what I typically work on. Others on the
list may have experience with this (or otherwise have ideas)
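For reference, one of those techniques is the built-in tcmalloc heap profiler, driven roughly like this (osd.10 is only an example id, and the profiler has to be available in your build):
# ceph tell osd.10 heap start_profiler
# ceph tell osd.10 heap stats             # after letting it run while memory grows
# ceph tell osd.10 heap dump
# ceph tell osd.10 heap stop_profiler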
On 2014-10-30 08:23, Joao Eduardo Luis wrote:
On 10/27/2014 06:37 PM, Patrick Darley wrote:
Hi there
Over the last week or so, I've been trying to connect a ceph monitor
node running on a Baserock system
to a simple 3-node Ubuntu ceph cluster.
The 3-node Ubuntu cluster was created
Never mind, you helped me a lot by showing this OSD startup procedure,
Michael. Big thanks!
I seem to have made some progress now by setting the cache-mode to forward.
The OSD processes of SATA hosts stopped failing immediately. I'm now
waiting for the cache tier to flush. Then I'll try to enable
Hi,
I'm new to ceph and trying to install the cluster. I'm using a single server
for mon and osd. I've created one partition, /dev/vdb1, containing
100 GB with an ext4 fs and am trying to add it as an OSD via the ceph monitor. But
whenever I'm trying to activate the partition as an osd block device we are
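Assuming ceph-deploy is in use, the usual sequence for turning that partition into an OSD looks roughly like this (the hostname is a placeholder; note that the documentation of this era recommends xfs over ext4, and ext4 needs 'filestore xattr use omap = true' in ceph.conf):
$ ceph-deploy osd prepare node1:/dev/vdb1
$ ceph-deploy osd activate node1:/dev/vdb1
$ ceph osd tree        # the new OSD should appear and come up/in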
Fixed. My cluster is HEALTH_OK again now. It went fast in the right
direction after I set cache-mode to forward (from the original writeback) and
disabled the norecover and nobackfill flags.
Still, I'm waiting for 15 million objects to get flushed from the cache
tier.
It seems that the issue was
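For anyone hitting the same thing, a hedged sketch of the commands matching the steps described above (the cache pool name is a placeholder), including a forced flush rather than waiting for the tiering agent:
# ceph osd tier cache-mode cachepool forward
# ceph osd unset norecover
# ceph osd unset nobackfill
# rados -p cachepool cache-flush-evict-all        # pushes remaining objects out of the cache tier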
Hey cephers,
All videos (from both days) of Ceph Developer Summit: Hammer are now
posted to YouTube and linked from the master wiki page:
https://wiki.ceph.com/Planning/CDS/Hammer_(Oct_2014)
We had just over 60 non-Red Hat participants from almost 20 different
countries represented, so a big
Thanks for pointing that out. Unfortunately, those tickets contain only
a description of the problem, but no solution or workaround. One was
opened 8 months ago and the other more than a year ago. No love since.
Is there any way I can get my cluster back in a healthy state?
Thanks,
Erik.
On
Erik,
I reported a similar issue 22 months ago. I don't think any developer
has ever really prioritized these issues.
http://tracker.ceph.com/issues/3720
I was able to recover that cluster. The method I used is in the
comments. I have no idea if my cluster was broken for the same reason as
We are looking to forward all of our Ceph logs to a centralized syslog
server. In the manual[1] it talks about log settings, but I'm not sure
about a few things.
1. What is clog?
2. If syslog is the logging facility, are the logs from all daemons
merged into the same file? Is there a
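For what it's worth: as far as I understand, 'clog' is the cluster log that daemons send to the monitors, as opposed to each daemon's local log. A minimal sketch of the ceph.conf settings that appear to be involved (option names from the docs of this era; double-check the defaults):
[global]
    log to syslog = true
    err to syslog = true
    clog to syslog = true
    mon cluster log to syslog = true
How the messages are then split across files is up to the local rsyslog/syslog configuration rather than Ceph.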
Dear Ceph,
I used the keyvalue backend in 0.80.6 and 0.80.7; the average speed when
rsyncing millions of small files was 10 MB/second.
When I upgraded to 0.87 (giant), the speed slowed down to 5 MB/second. I don't
know why; is there any tuning option for this?
Will the superblock cause this performance
Also, I found another problem: the ceph osd directory has millions of small
files, which will cause a performance issue
1008 = # pwd
/var/lib/ceph/osd/ceph-8/current
1007 = # ls |wc -l
21451
From: ceph-users <ceph-users-boun...@lists.ceph.com>
Sent: 2014-10-31 08:23
To:
Yes, there was a persistence problem in 0.80.6 and we fixed it in Giant.
But in Giant, other performance optimizations have been applied. Could
you tell us more about your tests?
On Fri, Oct 31, 2014 at 8:27 AM, 廖建锋 de...@f-club.cn wrote:
Also found the other problem is: the ceph osd directory has
What I can tell is:
in 0.87, the OSDs are writing under 10MB/s, but IO utilization is about 95%
in 0.80.6, the OSDs are writing about 20MB/s, but IO utilization is about 30%
iostat -mx 2 with 0.87
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 43.00
Thanks. Recently I have mainly focused on rbd performance (random small writes).
I want to know your test situation. Is it sequential write?
On Fri, Oct 31, 2014 at 9:48 AM, 廖建锋 de...@f-club.cn wrote:
What I can tell is:
in 0.87, the OSDs are writing under 10MB/s, but IO utilization is about 95%
I am not sure if it is sequential or random; I just use rsync to copy millions of small picture
files from our PC server to the ceph cluster
From: Haomai Wang <haomaiw...@gmail.com>
Sent: 2014-10-31 09:59
To: 廖建锋 <de...@f-club.cn>
Cc: ceph-users <ceph-users-boun...@lists.ceph.com>;
ok. I will explore it.
On Fri, Oct 31, 2014 at 10:03 AM, 廖建锋 de...@f-club.cn wrote:
I am not sure if it is sequential or random; I just use rsync to copy millions of small
picture files from our PC server to the ceph cluster
From: Haomai Wang
Sent: 2014-10-31 09:59
To: 廖建锋
Cc: ceph-users; ceph-users
Subject: Re:
On 10/30/2014 11:40 AM, Cheng Wei-Chung wrote:
Dear all:
I ran into a strange situation. First, here is my ceph status:
cluster fb155b6a-5470-4796-97a4-185859ca6953
..
osdmap e25234: 20 osds: 20 up, 20 in
pgmap v2186527: 1056 pgs, 4 pools, 5193 GB data, 1316