Re: [ceph-users] can I attach a volume to 2 servers

2016-05-02 Thread Edward Huyer
to the same block device. Because no server is aware of what the other servers are doing, it’s essentially guaranteed that you’ll have one server partially overwriting things another server just wrote, resulting in lost data and/or a broken filesystem. - Edward Huyer School of Interactive Games
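The safe pattern the thread points toward can be sketched as follows. All pool and image names here are illustrative, not from the thread; `rbd create` and `rbd map` are standard rbd CLI commands. If two servers must see the same data, either export it from one client over NFS/CIFS, or put a cluster-aware filesystem on the shared device.

```shell
# Illustrative names: pool "rbd", image "shared-img".
# Mapping one image on two hosts is only safe with a cluster-aware
# filesystem on top; a plain ext4/XFS will be corrupted.

# Create the image (on an admin node):
rbd create rbd/shared-img --size 10240

# On each host that needs access:
rbd map rbd/shared-img

# Then format ONCE with a cluster filesystem such as GFS2 or OCFS2,
# never with a single-node filesystem like ext4.
```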

Re: [ceph-users] Mapping RBD On Ceph Cluster Node

2016-04-30 Thread Edward Huyer
On Apr 29, 2016 11:46 PM, Gregory Farnum <gfar...@redhat.com> wrote: > On Friday, April 29, 2016, Edward Huyer <erh...@rit.edu> wrote: This is more of a "why" than a "can I/

[ceph-users] Mapping RBD On Ceph Cluster Node

2016-04-29 Thread Edward Huyer
This is more of a "why" than a "can I/should I" question. The Ceph block device quickstart says (if I interpret it correctly) not to use a physical machine as both a Ceph RBD client and a node for hosting OSDs or other Ceph services. Is this interpretation correct? If so, what is the

[ceph-users] Weird/normal behavior when creating filesystem on RBD volume

2016-04-22 Thread Edward Huyer
what's going on here? I have a pretty strong notion, but I'm hoping someone can give a definite answer. This behavior appears to be normal, so I'm not actually worried about it. It just makes myself and some coworkers go "huh, I wonder what causes that". ----- Edward Huyer School of I

[ceph-users] ceph-deploy Intended Purpose

2013-07-12 Thread Edward Huyer
-to-play-with tool and manual configuration is preferred for real clusters? I've seen documentation suggesting it's not intended for use in real clusters, but a lot of other documentation seems to assume it's the default deploy tool. - Edward Huyer School of Interactive Games and Media

Re: [ceph-users] Resizing filesystem on RBD without unmount/mount cycle

2013-06-24 Thread Edward Huyer
-Original Message- From: John Nielsen [mailto:li...@jnielsen.net] Sent: Monday, June 24, 2013 1:24 PM To: Edward Huyer Cc: ceph-us...@ceph.com Subject: Re: [ceph-users] Resizing filesystem on RBD without unmount/mount cycle On Jun 24, 2013, at 9:13 AM, Edward Huyer erh...@rit.edu
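The online-grow sequence this thread discusses can be sketched as below; the device, pool, and mountpoint names are illustrative. Growing ext4 and XFS works while mounted (XFS in fact requires it); shrinking does not.

```shell
# Illustrative names: pool "rbd", image "vol1", mapped at /dev/rbd0,
# mounted at /mnt/vol1.

# 1. Grow the RBD image (rbd sizes are in megabytes):
rbd resize rbd/vol1 --size 20480

# 2. Grow the filesystem without an unmount/mount cycle:
resize2fs /dev/rbd0     # ext4: online grow is supported
# or, for XFS (filesystem must be mounted):
xfs_growfs /mnt/vol1
```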

Re: [ceph-users] placing SSDs and SATAs in same hosts

2013-06-20 Thread Edward Huyer
Hi, I am thinking how to make ceph with 2 pools - fast and slow. Plan is to use SSDs and SATAs (or SAS) in the same hosts and define pools that use fast and slow disks accordingly. Later it would be easy to grow either pool by need. I found an example for a CRUSH map that does a similar thing
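On modern Ceph releases (Luminous and later) this split is usually done with CRUSH device classes rather than a hand-edited CRUSH map, which is what the 2013-era example in the thread required. A sketch, with illustrative rule and pool names:

```shell
# Assumes OSDs report a device class of "ssd" or "hdd"
# ("ceph osd tree" shows the class column).

# One replicated rule per class, each spreading replicas across hosts:
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated slow-rule default host hdd

# Pools bound to each rule (PG counts are illustrative):
ceph osd pool create fast 128 128 replicated fast-rule
ceph osd pool create slow 128 128 replicated slow-rule
```

Either pool can then grow independently by adding disks of the matching class.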

[ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Edward Huyer
it will be good enough. - Edward Huyer School of Interactive Games and Media Golisano 70-2373 152 Lomb Memorial Drive Rochester, NY 14623 585-475-6651 erh...@rit.edu

Re: [ceph-users] New User Q: General config, massive temporary OSD loss

2013-06-18 Thread Edward Huyer
[ Please stay on the list. :) ] Doh. Was trying to get Outlook to quote properly, and forgot to hit Reply-all. :) The specifics of what data will migrate where will depend on how you've set up your CRUSH map, when you're updating the CRUSH locations, etc, but if you move an OSD then it
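Updating an OSD's CRUSH location explicitly, as discussed above, can be sketched as follows; the host and rack bucket names are illustrative. Either command triggers data migration to match the new placement.

```shell
# Move osd.3 under a specific host/rack bucket in the CRUSH hierarchy:
ceph osd crush move osd.3 host=node2 rack=rack1

# Or set weight and location in one step (e.g. when re-adding an OSD);
# 1.0 is an illustrative CRUSH weight:
ceph osd crush set osd.3 1.0 host=node2 rack=rack1
```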