Hi,
On 18 Mar 2015, at 05:29, Christian Balzer ch...@gol.com wrote:
Hello,
On Wed, 18 Mar 2015 03:52:22 +0100 Josef Johansson wrote:
Hi,
I’m planning a Ceph SSD cluster. I know that we won’t get the full
performance from the SSDs in this case, but SATA won’t cut it as a backend
Check whether a tty is allocated for the remote execution. That could explain why
it worked when you were logged in: ansible is possibly running the mount command
without a tty. Check the sudo settings for that user and see whether it is
required to have a tty for remote calls such as mount.
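For reference, a minimal sketch of the sudoers setting that usually causes this
(edit with visudo; the user name "ansible" here is only an assumption, use
whatever user the playbook connects as):

    Defaults    requiretty          # forces a tty for every sudo invocation
    Defaults:ansible !requiretty    # exempt the deploy user so non-tty sudo works

With requiretty set globally and no exemption, sudo from a non-interactive ssh
session fails even though the same command works from a login shell.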
On Mar 18, 2015 5:42 AM, Florent B
I actually have been searching for this information a couple of times in the
ML now.
Was hoping that you would’ve been done with it before I ordered :)
Yes, me too ;) I'm waiting for my Mellanox switches.
I will most likely order this week so I will see it when the stuff is being
Hi,
Christian Balzer wrote :
Consider what you think the IO load (writes) generated by your client(s)
will be, multiply that by your replication factor, and divide by the number of
OSDs; that will give you the base load per OSD.
Then multiply by 2 (journal on the same OSD).
Finally based on my
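To put rough numbers on that (purely assumed figures, not taken from this thread):
4,000 client write IOPS x replication 3 = 12,000 writes/s across the cluster;
spread over 24 OSDs that is 500 IOPS per OSD, and with the journal co-located on
the same device (x2) roughly 1,000 IOPS actually hitting each disk.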
Hi again Greg :-)
No, it doesn't seem to progress past that point. I started the OSD again a
couple of nights ago:
2015-03-16 21:34:46.221307 7fe4a8aa7780 10 journal op_apply_finish 13288339
open_ops 1 -> 0, max_applied_seq 13288338 -> 13288339
2015-03-16 21:34:46.221445 7fe4a8aa7780 3 journal
What message is this and why is it saying that on stdout?
stdout: /mnt/cephfs is not a mountpoint
On Mar 17, 2015 7:41 PM, Florent B flor...@coppint.com wrote:
Hi Thomas,
The problem is : there is no error.
fstab is configured as expected.
and if I run the mount command by hand (not via
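For comparison, a typical kernel-client fstab line looks roughly like this
(the monitor address and secret file below are placeholders):

    192.168.0.1:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  2

Also note that "is not a mountpoint" is the wording printed by the mountpoint(1)
utility, so it looks like something in the task is checking the path with
mountpoint rather than mount itself failing with an error.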
Hi,
I don't know how rbd read-ahead works,
but with qemu virtio-scsi you can get read request merging (for sequential
reads), so it does bigger ops to the ceph cluster and improves throughput.
virtio-blk request merging will be supported in the upcoming qemu 2.3.
(I'm not sure of virtio-win
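For anyone who wants to try it, a rough libvirt sketch of exposing an rbd image
through virtio-scsi (pool/image name and monitor host are placeholders, and cephx
auth is omitted for brevity):

    <controller type='scsi' model='virtio-scsi'/>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source protocol='rbd' name='rbd/vm-disk-1'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='sda' bus='scsi'/>
    </disk>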
Hi Alexandre,
I actually have been searching for this information a couple of times in the ML
now.
Was hoping that you would’ve been done with it before I ordered :)
I will most likely order this week so I will see it when the stuff is being
assembled :o
Do you feel that there is something in
Actually, good question - is RBD caching at all possible with Windows
guests, if they are using the latest VirtIO drivers?
Linux caching (write caching, writeback) is working fine with the newer virtio
drivers...
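As far as I understand, the rbd cache lives in librbd on the hypervisor side, so
the guest OS should not matter much; a minimal sketch of the relevant ceph.conf
section on the hypervisor (the values shown are just the common defaults, not a
recommendation):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true

The qemu drive still needs cache=writeback set for writeback behaviour to
actually kick in.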
Thanks
On 18 March 2015 at 10:39, Alexandre DERUMIER aderum...@odiso.com wrote:
Hi,
I
Hi, you don't need to define an ip and host for each osd,
but you do need to define the monitor ips
[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring
[mon.0]
host = node1
mon addr = X.X.X.X:6789
[mon.1]
host = node2
mon addr = X.X.X.X:6789
[mon.2]
host
Hi to the ceph-users list !
We're setting up a new Ceph infrastructure:
- 1 MDS admin node
- 4 OSD storage nodes (60 OSDs)
each of them running a monitor
- 1 client
Each OSD node (32GB RAM / 16 cores) hosts 15 x 4TB SAS OSDs (XFS) and 1
SSD with 5GB journal partitions, all in JBOD
Hi again everyone, after some tests I have noticed that data deleted on the
filesystem doesn't really get freed on the cluster...
In the filesystem the used space is 256MB, while ceph status shows us 25459MB,
that's about 24GB, so I was wondering if there is any way to reclaim space from deleted
files or
On Wed, Mar 18, 2015 at 3:28 AM, Chris Murray chrismurra...@gmail.com wrote:
Hi again Greg :-)
No, it doesn't seem to progress past that point. I started the OSD again a
couple of nights ago:
2015-03-16 21:34:46.221307 7fe4a8aa7780 10 journal op_apply_finish 13288339
open_ops 1 -> 0,
Thank you so much Thomas, Alexander! I really appreciate it
Have a great time :)
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jesch...@cisco.com
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
On Mar 18, 2015,
Hi Greg,
Thanks for your input, and I completely agree that we cannot expect developers
to fully document what impact each setting has on a cluster, particularly in
a performance-related way.
That said, if you or others could spare some time for a few pointers it
would be much appreciated and I will
- Original Message -
From: Ben b@benjackson.email
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: Craig Lewis cle...@centraldesktop.com, ceph-users
ceph-us...@ceph.com
Sent: Tuesday, March 17, 2015 7:28:28 PM
Subject: Re: [ceph-users] Shadow files
None of this helps with trying
On 18/03/2015 14:58, Jesus Chavez (jeschave) wrote:
Hi again everyone, after some tests I have noticed that data
deleted on the filesystem doesn't really get freed on the cluster...
In the filesystem the used space is 256MB, while ceph status shows us
25459MB, that's about 24GB, so I was wondering if
Hi Chris,
Thank you for your reply.
We are also thinking about using the S3 API but we are concerned about how
compatible it is with the real S3. For instance, we would like to design the
system using pre-signed URLs for storing some objects. I read the ceph
documentation; it does not mention
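For what it's worth, pre-signed URLs use the standard S3 request signing, which
radosgw supports as far as I know, so the usual boto snippet should work
unchanged; a minimal sketch (endpoint and keys are placeholders):

    import boto
    import boto.s3.connection

    # hypothetical RGW endpoint and credentials
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    # pre-signed GET URL valid for one hour
    url = conn.generate_url(3600, 'GET', bucket='mybucket', key='myobject')
    print(url)

I would still test the exact expiry and header behaviour you need against your
RGW version before committing to the design.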
On Wed, Mar 18, 2015 at 8:04 AM, Nick Fisk n...@fisk.me.uk wrote:
Hi Greg,
Thanks for your input, and I completely agree that we cannot expect developers
to fully document what impact each setting has on a cluster, particularly in
a performance-related way.
That said, if you or others could
hello!
I will attend the whole event, so if you would like to talk about Ceph - please
let me know (or catch me at the B20 booth) :)
There was a strong Ceph representation (from Inktank and the community) in
previous years.
see you!
--
Pawel
On Tue, Mar 17, 2015 at 6:38 PM, Josef Johansson
Hi!
We’re three guys who are going to attend the whole event; it would be nice to meet
up with some cephers indeed.
Yeah, it would be nice to talk about Ceph, but meeting up at your booth sounds
like a good idea. We’re just plain visitors.
Have you seen anything about how much presence there will be
Hi Josef,
I'm going to benchmark a 3-node cluster with 6 SSDs per node (2x10 cores, 3.1GHz).
From my previous benchmarks, you need fast CPUs if you need a lot of IOPS, and
writes are a lot more expensive than reads.
Now, if you are only doing a small number of IOPS (big blocks / big throughput), you don't
need too
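For reference, a small-block run of the kind that stresses the CPUs could look
roughly like this with fio's rbd ioengine (pool/image names are placeholders and
the image has to exist beforehand):

    fio --name=rbd-4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=rbd --rbdname=fio-test --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=1 --runtime=60 --time_based

4k random writes are usually where the per-IOP CPU cost shows up first; big-block
sequential runs mostly stress the network instead.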
I think I might have found the issue.
Something is wrong with my crush map.
I was just attempting to modify it:
microserver-1:~ # ceph osd getcrushmap -o /tmp/cm
got crush map from osdmap epoch 3937
microserver-1:~ # crushtool -d /tmp/cm -o /tmp/cm.txt
microserver-1:~ # vim /tmp/cm.txt
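(For anyone following along, the standard round-trip to get an edited map back in
is to recompile and inject it - only do this once you are happy with the edited
text:)

    microserver-1:~ # crushtool -c /tmp/cm.txt -o /tmp/cm.new
    microserver-1:~ # ceph osd setcrushmap -i /tmp/cm.new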
- Original Message -
From: Abhishek L abhishek.lekshma...@gmail.com
To: Yehuda Sadeh-Weinraub yeh...@redhat.com
Cc: Ben b@benjackson.email, ceph-users ceph-us...@ceph.com
Sent: Wednesday, March 18, 2015 10:54:37 AM
Subject: Re: [ceph-users] Shadow files
Yehuda Sadeh-Weinraub
Yehuda Sadeh-Weinraub writes:
Is there a quick way to see which shadow files are safe to delete
easily?
There's no easy process. If you know that a lot of the removed data is on
buckets that shouldn't exist anymore then you could start by trying to
identify that. You could do that by:
On Wed, 18 Mar 2015, Greg Chavez wrote:
We have a cuttlefish (0.61.9) 192-OSD cluster that has lost network
availability several times since this past Thursday and whose nodes were all
rebooted twice (hastily and inadvisably each time). The final reboot, which
was supposed to be the last thing
We have a cuttlefish (0.61.9) 192-OSD cluster that has lost network
availability several times since this past Thursday and whose nodes were
all rebooted twice (hastily and inadvisably each time). The final reboot,
which was supposed to be the last thing before recovery according to our
data