Bob,
Those numbers would seem to indicate some other problem. One of the
biggest culprits of poor performance like that is the network.
In the last few months, several reported performance issues
have turned out to be network problems. Not all, but most.
On 7/20/15, 11:52 AM, ceph-users on behalf of Campbell, Bill
ceph-users-boun...@lists.ceph.com on behalf of
bcampb...@axcess-financial.com wrote:
We use VMware with Ceph; however, we don't use RBD directly (we have an
David - I'm new to Ceph myself, so I can't point out any smoking guns - but
your problem feels like a network issue. I suggest you check all of
your OSD/Mon/client network interfaces. Check for errors, and check that
they are negotiating the same link speed/type with your switches (if you
have LLDP
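To illustrate the kind of check being suggested, here is a minimal Python
sketch that reads the kernel's per-interface counters from /sys/class/net
to spot error/drop counts and mismatched negotiated link speeds; the
interface names are placeholders, not anything from David's setup:

    #!/usr/bin/env python
    # Minimal sketch: report negotiated link speed and error/drop counters
    # for the NICs carrying Ceph traffic. Interface names are placeholders.
    import os

    IFACES = ["eth0", "eth1"]   # replace with your public/cluster NICs

    def read(path):
        try:
            with open(path) as f:
                return f.read().strip()
        except IOError:
            return "n/a"

    for iface in IFACES:
        base = "/sys/class/net/%s" % iface
        speed = read(os.path.join(base, "speed"))     # Mb/s as negotiated
        duplex = read(os.path.join(base, "duplex"))
        stats = os.path.join(base, "statistics")
        print("%s: %s Mb/s %s, rx_errors=%s tx_errors=%s rx_dropped=%s" % (
            iface, speed, duplex,
            read(os.path.join(stats, "rx_errors")),
            read(os.path.join(stats, "tx_errors")),
            read(os.path.join(stats, "rx_dropped"))))

Run it on every OSD, mon, and client host and compare: a NIC stuck at
100 Mb/s or steadily climbing error counters is exactly the kind of thing
that produces numbers like these.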
On 7/16/15, 9:51 PM, ceph-users on behalf of Goncalo Borges
ceph-users-boun...@lists.ceph.com on behalf of
gonc...@physics.usyd.edu.au wrote:
Once I substituted the FQDN with just the hostname (without the domain),
it worked.
Goncalo,
I ran into the same problems too - and ended up bailing on
On 7/16/15, 6:55 AM, Gregory Farnum g...@gregs42.com wrote:
Yep! The Hadoop workload is a fairly simple one that is unlikely to
break anything in CephFS. We run a limited set of Hadoop tests on it
every week and provide bindings to set it up; I think the
documentation is a bit lacking here but
/master/radosgw/s3/
[3] https://wiki.apache.org/hadoop/AmazonS3
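Since the references above point at the S3/RGW route, one cheap sanity
check before wiring Hadoop's S3 support to RGW is to confirm the endpoint
works at all. A minimal boto3 sketch, where the endpoint URL, credentials,
and bucket name are placeholders, not values from this thread:

    # Sketch: verify an RGW S3 endpoint is reachable and usable before
    # pointing Hadoop at it. Endpoint, keys, and bucket are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://rgw.example.com:7480",   # your RGW endpoint
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="hadoop-test")
    s3.put_object(Bucket="hadoop-test", Key="probe.txt", Body=b"hello rgw")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])

If that works, the same endpoint and keys are what Hadoop's S3 filesystem
configuration would need.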
On 7/15/15, 9:50 AM, Somnath Roy
somnath@sandisk.com wrote:
Did you try to integrate Ceph + RGW + S3 with Hadoop?
On Jul 15, 2015, at 8:58 AM, Shane Gibson
shane_gib
We are in the (very) early stages of considering backing Hadoop with Ceph
instead of HDFS. I've seen a few very vague references to doing that, but
haven't found any concrete info (architecture, configuration
recommendations, gotchas, lessons learned, etc.). I did find the
Lionel - thanks for the feedback ... inline below ...
On 7/2/15, 9:58 AM, Lionel Bouton
lionel+c...@bouton.name wrote:
Ouch. Those spinning disks are probably a bottleneck: the regular advice
on this list is to use one DC SSD for every 4 OSDs. You would probably
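To make that rule of thumb concrete, here is the arithmetic as a tiny
sketch; the per-node disk count is a made-up example, not anyone's actual
layout:

    # Rough journal-SSD sizing using the 1 DC SSD : 4 spinning OSDs rule
    # of thumb mentioned above. The disk count is illustrative only.
    import math

    osds_per_node = 12            # spinning-disk OSDs per node (example)
    osds_per_ssd = 4              # one DC SSD fronting 4 OSD journals

    ssds_needed = int(math.ceil(osds_per_node / float(osds_per_ssd)))
    print("DC SSDs needed per node: %d" % ssds_needed)   # -> 3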
Best regards,
German
2015-07-01 21:16 GMT-03:00 Shane Gibson
shane_gib...@symantec.com:
It also depends a lot on the size of your cluster ... I have a test cluster I'm
standing up right now with 60 nodes - a total of 600 OSDs each at 4 TB ... If I
lose 4 TB - that's a very small fraction of the data. My replicas are going to
be spread out across a lot of spindles, and
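As a rough illustration of why that is a small event, using the numbers
from the message above:

    # Fraction of raw capacity lost when one 4 TB OSD dies in a
    # 600-OSD cluster with 4 TB per OSD (numbers from the message above).
    osds = 600
    osd_size_tb = 4.0
    raw_capacity_tb = osds * osd_size_tb                 # 2400 TB raw

    lost = osd_size_tb / raw_capacity_tb
    print("One failed OSD is %.3f%% of raw capacity" % (lost * 100))  # ~0.167%

And because recovery is spread across the remaining OSDs, each surviving
disk only has to backfill a few gigabytes of that.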
For a high-perf cluster - absolutely agree ... but I would suggest that
running the MONs as VMs has its own performance challenges that need
careful management as well. If you are on oversubscribed hypervisors, you
may end up with the exact same performance issues impacting the MONs. For a very small
For a small deployment this might be OK - but as mentioned, mon logging might
be an issue. Consider the following:
* disk resources for mon logging (maybe dedicate a disk to logging, to avoid
disk I/O contention with OSDs; a quick check is sketched below)
* CPU resources - some filesystem types used for OSDs can eat a lot of CPU
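One quick way to sanity-check the first bullet is to verify the mon log
directory doesn't share a block device with OSD data. A minimal sketch;
the paths are common defaults and should be adjusted to your layout:

    # Sketch: check whether the mon log directory shares a block device
    # with OSD data (paths are common defaults, adjust to your layout).
    import os

    mon_log_dir = "/var/log/ceph"
    osd_data_dir = "/var/lib/ceph/osd"

    shared = os.stat(mon_log_dir).st_dev == os.stat(osd_data_dir).st_dev
    print("mon logs and OSD data on the same device: %s" % shared)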
On 6/23/15, 5:09 AM, ceph-users on behalf of Gregory Farnum
ceph-users-boun...@lists.ceph.com on behalf of g...@gregs42.com wrote:
Monitors are bound to a particular IP address.
Greg - are you saying the MONs are only able to bind to a single IP
address? Despite the fact that most daemons in
Cristian,
I'm not sure offhand what's up - but can you increase the logging levels
and then rerun the test:
http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/
See the Runtime section for injecting the logging arguments after starting -
or change the {cluster}.conf (eg
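For reference, the runtime route on that page boils down to
"ceph tell <daemon> injectargs ...". A small wrapper sketch; the daemon ID
and debug levels are examples, not values specific to Cristian's cluster:

    # Sketch: bump debug levels on a running daemon with
    # "ceph tell ... injectargs", then dial them back after the test.
    # Daemon ID and levels below are examples only.
    import subprocess

    def set_debug(daemon, args):
        subprocess.check_call(["ceph", "tell", daemon, "injectargs", args])

    set_debug("osd.0", "--debug-osd 20 --debug-ms 1")  # verbose for the rerun
    # ... rerun the test, then collect /var/log/ceph/*.log ...
    set_debug("osd.0", "--debug-osd 1 --debug-ms 0")   # back to quiet levels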
All - I have been following this thread for a bit, and am happy to see how
involved, capable, and collaborative this ceph-users community is.
There appears to be a fairly strong amount of domain knowledge
around the hardware used by many Ceph deployments, with a lot of thumbs
up
All - I am building my first Ceph cluster, and doing it the hard way,
manually, without the aid of ceph-deploy. I have successfully built the
mon cluster and am now adding OSDs.
My main question:
How do I prepare the journal prior to the prepare/activate stages of
OSD creation?
More
OK - I know this post has the potential to spread to unsavory corners of
discussion about the best Linux distro ... blah blah blah ... please don't
let it go there ... !
I'm seeking some input from people who have been running larger Ceph clusters
... on the order of 100s of physical
Vida - installing Ceph on hosted VMs is a great way to get hands-on
experience with a Ceph cluster. It is NOT a good way to run Ceph for any real
workload. NOTE that it's critical you structure your virtual disks and
virtual network(s) to match how you'd like to run your Ceph workloads
Alternatively you could just use Git (or some other version control system)
... host your code/files/HTML/whatever in Git. Make changes to the Git tree -
then you can trigger a git pull on your webservers to the local filesystem.
This gives you the ability to use branches/versions to
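A minimal sketch of that pull-based update on a webserver; the webroot path
and branch name are placeholders:

    # Sketch: update a webserver's document root from a Git branch.
    # The webroot path and branch name are placeholders.
    import subprocess

    WEBROOT = "/var/www/html"     # working copy cloned from your Git remote
    BRANCH = "production"         # branch the webservers should track

    def deploy():
        subprocess.check_call(["git", "-C", WEBROOT, "fetch", "origin"])
        subprocess.check_call(["git", "-C", WEBROOT, "checkout", BRANCH])
        subprocess.check_call(["git", "-C", WEBROOT, "reset", "--hard",
                               "origin/" + BRANCH])

    if __name__ == "__main__":
        deploy()

Hook that into whatever trigger you like (cron, a webhook, or a post-receive
notification) and the webservers stay in sync with the branch you point
them at.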