On Tue, Nov 26, 2013 at 06:50:33AM -0800, Sage Weil wrote:
If syncfs(2) is not present, we have to use sync(2). That means you have
N daemons calling sync(2) to force a commit on a single fs, but every other
mounted fs gets synced as well... which means N times the necessary sync(2) work.
Fortunately
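A quick way to check whether a node even has syncfs(2) available: the
syscall needs roughly Linux 2.6.39+ and a glibc 2.14+ wrapper, so:

  $ uname -r                    # want 2.6.39 or newer
  $ getconf GNU_LIBC_VERSION    # want glibc 2.14 or newer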
2013/11/26 Derek Yarnell de...@umiacs.umd.edu
On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
Hello,
Are there any ideas? I don't know whether this is an S3 API limitation or a
missing feature?
Thank you,
Mihaly
Hi Mihaly,
If all you are looking for is the current size of the bucket this can be
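For what it's worth, a sketch of the radosgw-admin route (bucket name is an
example):

  $ radosgw-admin bucket stats --bucket=mybucket

The usage section of the output carries size_kb and num_objects for the bucket.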
Thanks a lot... after update with ceph-deploy 1.3.3, everything is working fine... Regards, Upendra Yadav, DFS
On Wed, 27 Nov 2013 02:22:00 +0530 Alfredo Deza alfredo.d...@inktank.com wrote:
ceph-deploy 1.3.3 just got released and you should not see this with the new version. On Tue, Nov 26, 2013
Hi,
No solution so far, but I also asked in IRC and linuxkidd told me they
were looking for a workaround.
Micha Krause
The largest group of threads is those from the network messenger: in
the current implementation it creates two threads per process the
daemon is communicating with. That's two threads for each OSD it
shares PGs with, and two threads for each client which is accessing
any data on that OSD. (For example, an OSD peering with 100 other OSDs
and serving 50 clients would carry roughly 2 x (100 + 50) = 300
messenger threads.)
Recently I wanted to test the performance benefit of rbd cache. I cannot get an
obvious performance benefit on my setup, so I tried to make sure rbd cache is
enabled, but I cannot get the rbd cache perf counters. In order to identify how
to enable the rbd cache perf counters, I set up a simple setup (one client
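A sketch of the approach that should expose the counters, assuming a typical
setup: give the librbd client an admin socket in the [client] section of
ceph.conf, then ask it for a perf dump while the client is running (socket
path is an example):

--- cut ---
[client]
    rbd cache = true
    admin socket = /var/run/ceph/rbd-client-$pid.asok
--- cut ---

  $ ceph --admin-daemon /var/run/ceph/rbd-client-<pid>.asok perf dump

If a cache section shows up under librbd in the dump, the cache is enabled.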
Thanks a lot, Jens. Do I have to have cephx authentication enabled? Did you
enable it? Which user from the node that contains cinder-api or glance-api
are you using to create volumes and images? The documentation at
http://ceph.com/docs/master/rbd/rbd-openstack/ mentions creating new
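For reference, the keyring creation in that doc is along these lines (user and
pool names here are examples; the doc is authoritative):

  $ ceph auth get-or-create client.volumes \
      mon 'allow r' \
      osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes'

The resulting key is then installed on the node running cinder-api/cinder-volume.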
Hi Karan
your cinder.conf looks sensible to me, I have posted mine here:
--- cut ---
[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = True
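For context, the rbd-specific options that typically accompany such a
cinder.conf look like this (pool, user and secret UUID are placeholders, not
taken from the file above):

--- cut ---
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = volumes
rbd_secret_uuid = <uuid-of-the-libvirt-secret>
--- cut ---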
Hi,
Google's LevelDB was too slow for Facebook, so they created RocksDB
(http://rocksdb.org/); it may be interesting for Ceph. It's already
production quality.
Greets,
Stefan
there are some guidelines on this. Thanks in advance!
Regards,
Johannes
I was going to add something to the bug tracker, but it looks to me
that contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Many thanks!
On 11/27/2013 09:25 AM, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:
The largest group of threads is those from the network messenger — in
the current implementation it creates two threads per process the
daemon is
On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/27/2013 09:25 AM, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:
The largest group of threads is those from the network messenger — in
the
On activating cluster ceph disks using the command ceph-deploy osd activate ceph-node3:/home/ceph/osd2 I am getting:
[ceph-node3][DEBUG ] connected to host: ceph-node3
[ceph-node3][DEBUG ] detect platform information from remote host
[ceph-node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro
On Wed, Nov 27, 2013 at 04:34:00PM +0100, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 7:28 AM, Mark Nelson mark.nel...@inktank.com wrote:
On 11/27/2013 09:25 AM, Gregory Farnum wrote:
On Wed, Nov 27, 2013 at 1:31 AM, Jens-Christian Fischer
jens-christian.fisc...@switch.ch wrote:
The
On 11/26/13, 3:31 PM, Shain Miley wrote:
Micha,
Did you ever figure out a work around for this issue?
I also had plans to use s3cmd to put, and recursively set ACLs on a
nightly basis... however we are getting the 403 errors as well during our
testing.
I was just wondering if you
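For reference, a sketch of the kind of nightly job being described (bucket and
path are examples):

  $ s3cmd put --recursive /data/artifacts s3://nightly/
  $ s3cmd setacl --acl-public --recursive s3://nightly/

These are the calls that have been coming back with 403 against radosgw in
this thread.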
I am working with a small test cluster, but the problems described
here will remain in production. I have an external Fibre Channel storage
array and have exported two 3 TB disks (just as JBODs). I can use
ceph-deploy to create an OSD for each of these disks on a node named
Vashti. So far
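A sketch of the ceph-deploy step being described, assuming the two FC LUNs
show up as sdb and sdc on vashti:

  $ ceph-deploy osd create vashti:sdb vashti:sdc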
For ~$67 you get a mini-ITX motherboard with a soldered-on 17W dual-core
1.8GHz Ivy Bridge-based Celeron (supports SSE4.2 CRC32 instructions!).
It has 2 standard DIMM slots so no compromising on memory, on-board gigabit
ethernet, 3 3Gb/s + 1 6Gb/s SATA ports, and a single PCIe slot for an additional
Thanks. I may have to go this route, but it seems awfully fragile. One
stray command could destroy the entire cluster, replicas and all. Since
all disks are visible to all nodes, any one of them could mount
everything, corrupting all OSDs at once.
Surely other people are using external FC
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, Kevin Horan kho...@cs.ucr.edu wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One
stray command could destroy the entire cluster, replicas and all. Since
all disks are visible to all nodes, any one of them
Ah, that sounds like what I want. I'll look into that, thanks.
Kevin
On 11/27/2013 11:37 AM, LaSalle, Jurvis wrote:
Is LUN masking an option in your SAN?
On 11/27/13, 2:34 PM, Kevin Horan kho...@cs.ucr.edu wrote:
Thanks. I may have to go this route, but it seems awfully fragile. One
stray
Dear Ceph Experts,
our Ceph cluster suddenly went into a state of OSDs constantly having
blocked or slow requests, rendering the cluster unusable. This happened
during normal use, there were no updates, etc.
All disks seem to be healthy (smartctl, iostat, etc.). A complete
hardware reboot
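The usual first probes for blocked/slow requests (OSD id is an example):

  $ ceph health detail
  $ ceph --admin-daemon /var/run/ceph/ceph-osd.12.asok dump_ops_in_flight

The first shows which OSDs the slow requests are sitting on; the second shows
what each in-flight op on that OSD is waiting for.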
On Wed, Nov 27, 2013 at 4:46 AM, Sebastian webmas...@mailz.de wrote:
Hi,
we have a setup of 4 servers running Ceph and radosgw. We use it as an
internal S3 service for our files. The servers run Debian Squeeze with Ceph
0.67.4.
The cluster has been running smoothly for quite a while, but
Hey,
What number do you have for the replication factor? For a factor of three,
1.5k IOPS may be a little high for 36 disks, and your OSD ids look a bit
suspicious: there should not be 60+ OSDs based on a calculation from the
numbers below.
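Back-of-the-envelope, assuming those 1.5k IOPS are writes, 3x replication,
and journals on the same spindles:

  1,500 client writes x 3 replicas  = 4,500 backend writes
  x 2 (journal + data write)        = 9,000 disk ops/s
  / 36 disks                        = ~250 IOPS per disk

which is well past what a 7200-rpm disk (roughly 100-150 IOPS) sustains.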
On 11/28/2013 12:45 AM, Oliver Schulz wrote:
Dear Ceph Experts,
Sounds like what I was having starting a couple of days ago: played
around with the conf, took suspect OSDs in and out, ran full SMART
tests on them that came back perfectly fine, ran network tests that
came back at 110MB/s on all channels, and ran OSD benches that reported all
OSDs managing 80+
How much can performance be improved by using SSDs to store journals?
You will see roughly twice the throughput unless you are using btrfs
(still improved but not as dramatic). You will also see lower latency
because the disk head doesn't have to seek back and forth between
journal and data
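A sketch of how the journal lands on the SSD at OSD-creation time (host, data
disk and SSD partition are examples):

  $ ceph-deploy osd create node1:sdb:/dev/sdg1

Here sdb holds the data and /dev/sdg1 is a partition on the SSD.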
Hi all,
I'd like to use Ceph to solve two problems at my company: to serve as an S3
mock for testing our application, and to share test artifacts between
developers in a peer-to-peer fashion.
We currently store immutable binary blobs ranging from a few kB to several
hundred MB in S3, which means
On 11/27/2013 07:21 AM, James Pearce wrote:
I was going to add something to the bug tracker, but it looks to me that
contributor email addresses all have public (unauthenticated)
visibility? Can this be set in user preferences?
Yes, it can be hidden here: http://tracker.ceph.com/my/account
On 11/26/2013 02:22 PM, Stephen Taylor wrote:
From ceph-users archive 08/27/2013:
On 08/27/2013 01:39 PM, Timofey Koolin wrote:
Is there a way to know the real size of an rbd image and of rbd snapshots?
rbd ls -l prints the declared size of the image, but I want to know the real size.
You can sum the sizes of the
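The usual trick is to sum the extents reported by rbd diff (pool/image names
are examples):

  $ rbd diff rbd/myimage | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'

The same command against rbd/myimage@somesnap sizes a snapshot.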
On 11/26/2013 01:14 AM, Ta Ba Tuan wrote:
Hi James,
The problem is: why does Ceph not recommend using the device UUID in
ceph.conf, when the above error can occur?
I think with the newer-style configuration, where your disks have
partition ids setup by ceph-disk instead of entries in ceph.conf, it
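For contrast, the older style pinned devices per OSD in ceph.conf, e.g.:

--- cut ---
[osd.0]
    host = node1
    devs = /dev/sdb1
--- cut ---

whereas ceph-disk stamps the partitions themselves with GPT type ids, so they
can be found and mounted without per-device lines in ceph.conf; ceph-disk list
shows what it detects.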
[re-adding the list]
It's not related to the version of qemu. When qemu starts up, it
creates the admin socket file, but it needs write access to do that.
Does the user running qemu (libvirt-qemu on ubuntu) have write access
to /var/run/ceph? It may be unix permissions blocking it, or apparmor
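Two quick checks along those lines (Ubuntu paths assumed):

  $ ls -ld /var/run/ceph            # writable by libvirt-qemu?
  $ dmesg | grep -i apparmor        # any DENIED lines mentioning qemu?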
2013/11/27 Yehuda Sadeh yeh...@inktank.com
On Wed, Nov 27, 2013 at 12:24 AM, Mihály Árva-Tóth
mihaly.arva-t...@virtual-call-center.eu wrote:
2013/11/26 Derek Yarnell de...@umiacs.umd.edu
On 11/26/13, 4:04 AM, Mihály Árva-Tóth wrote:
Hello,
Are there any ideas? I don't know whether this is