A yum repo is not an intensive workload, so take a look at GFS2 or OCFS2 then.
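
A minimal sketch of what that could look like on top of RBD, assuming the 
cluster stack (corosync + dlm) is already set up, and using made-up pool/image 
names:

    # map the shared image on each of the three hosts
    rbd map rpms/repo                          # appears as e.g. /dev/rbd0
    # create the filesystem once, with one journal per host
    mkfs.gfs2 -p lock_dlm -t mycluster:repo -j 3 /dev/rbd0
    # then mount it on all three hosts at the same time
    mount -t gfs2 /dev/rbd0 /srv/repo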

Jan

> On 28 Jan 2016, at 13:23, Sándor Szombat <[email protected]> wrote:
> 
> Sorry, I forgot something: we use Ceph for other things as well (for example 
> as a Docker registry backend).
> And yes, CephFS may be overkill, but we are searching for the best solution 
> now. We also want to reduce the number of tools we use, so if we can solve 
> this with Ceph, that's good.
> 
> Thanks for your help; I will check these.
> 
> 2016-01-28 13:13 GMT+01:00 Jan Schermer <[email protected]>:
> A yum repo doesn't really sound like something "mission critical" (in my 
> experience). You need to wait for the repo to update anyway before you can 
> use a package, so it's not something realtime either.
> CephFS is overkill for this.
> 
> I would either simply rsync the repo between the three machines via cron (and 
> set DNS to point to all three of their IPs). If you need something "more 
> realtime" then you can use, for example, incron.
> Or you can push new packages to all three and refresh them (assuming you have 
> some tools that do that already).
> Or if you want to use Ceph, you can create one RBD image that only gets 
> mounted on one of the hosts, and if that host goes down you remount it elsewhere 
> (by hand, or via pacemaker, a cron script...). I don't think Ceph makes sense 
> if that's going to be the only use, though...
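> The by-hand failover would be roughly this (image/pool/mountpoint names are 
> placeholders):
> 
>     # on the host that had it mounted (if still reachable):
>     umount /srv/repo && rbd unmap /dev/rbd0
>     # on the surviving host:
>     rbd map rpms/repo
>     mount /dev/rbd0 /srv/repo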
> 
> Maybe you could also push the packages to RadosGW, but I'm not that familiar 
> with it, and I'm not sure how you'd build a repo that points there. This would 
> make sense, but I have no idea if it's simple to do.
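> If someone wants to experiment, something like this might work through the S3 
> API (an untested sketch; the bucket name and endpoint are made up):
> 
>     # build the repo metadata locally, then sync it to a RadosGW bucket
>     createrepo /srv/repo
>     s3cmd sync /srv/repo/ s3://yum-repo/
>     # clients would then point their baseurl at the RadosGW endpoint, e.g.
>     # baseurl=http://rgw.example.com/yum-repo/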
> 
> Jan
> 
> 
>> On 28 Jan 2016, at 13:03, Sándor Szombat <[email protected]> wrote:
>> 
>> Hello,
>> 
>> yes, I misunderstood some things, I think. Thanks for your help!
>> 
>> So the situation is this: we have a yum repo with RPM packages, and we want to 
>> store these RPMs in Ceph. We have three main nodes which are able to 
>> install the other nodes, so we have to share these RPM packages between the 
>> three hosts. I checked CephFS and it would be the best solution for us, but it 
>> is in beta, and beta products are not allowed for us. This is why I'm trying 
>> to find another, Ceph-based solution.
>> 
>> 2016-01-28 12:46 GMT+01:00 Jan Schermer <[email protected]>:
>> This is somewhat confusing.
>> 
>> CephFS is a shared filesystem - you mount it on N hosts and they can 
>> access the data simultaneously.
>> RBD is a block device; this block device can be accessed from more than one 
>> host, BUT you need to use a cluster-aware filesystem on it (such as GFS2 or 
>> OCFS2).
>> 
>> Both CephFS and RBD use RADOS as a backend, which is responsible for data 
>> placement, high-availability and so on.
>> 
>> If you explain your scenario in more detail, we can suggest some options - do 
>> you really need the data accessible on multiple servers, or is a (short) 
>> outage acceptable when one server goes down? What type of data do you need 
>> to share, and how will the data be accessed?
>> 
>> Jan
>> 
>> > On 28 Jan 2016, at 11:06, Sándor Szombat <[email protected]> wrote:
>> >
>> > Hello all!
>> >
>> > I checked CephFS, but unfortunately it is in beta now, so I have started 
>> > looking at RBD. Is it possible to create an image in a pool, map it as a 
>> > block device (for example /dev/rbd0), format it like an ordinary disk, and 
>> > mount it on two hosts? I tried this, and it works, but after mounting 
>> > /dev/rbd0 on the two hosts and putting files into the mounted 
>> > directories, the files don't show up automatically on the other host.
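>> > What I did was roughly this (pool/image names are illustrative):
>> >
>> >     # on the first host:
>> >     rbd create mypool/repo --size 10240   # 10 GB image
>> >     rbd map mypool/repo                   # shows up as /dev/rbd0
>> >     mkfs.ext4 /dev/rbd0                   # an ordinary, non-cluster filesystem
>> >     mount /dev/rbd0 /mnt/repo
>> >     # on the second host:
>> >     rbd map mypool/repo
>> >     mount /dev/rbd0 /mnt/repo             # mounts fine, but files written on
>> >                                           # the other host never appear
>> >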
>> > So the main question: would this be a workable solution?
>> > (The task: we have 3 main nodes which install the other nodes with 
>> > ansible, and we want to store our RPMs in Ceph if possible. This is 
>> > necessary for high availability.)
>> >
>> > Thanks for your help!
>> >

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
