Hello,
Some data is not in the git repository and also needs to be updated on all
servers at the same time (uploads...), which is why I'm looking for a
centralized solution.
I think I've found a "patch" to do it... All our servers are connected to a
manager, so I've created a task in that manager to stop
This removes Ceph completely, or any other networked storage, but git has
triggers (hooks). If your website is stored in git and you just need to make
sure that nginx always has access to the latest data, just configure a git
hook to auto-update the checkout whenever a commit lands in the
repository.
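A minimal sketch of that approach: a small sync function the servers could run
from a hook or a timer, which fetches and hard-resets the web root to whatever
is on the origin branch. The function name, paths, and branch are examples,
not anything from the thread; adjust them to your setup.

```shell
#!/bin/sh
# sync_webroot: bring a git checkout (the nginx document root) up to date
# with its origin. Example usage: sync_webroot /var/www/site main
sync_webroot() {
    web_root=$1   # path to the checkout nginx serves from
    branch=$2     # branch to track, e.g. main

    # Fetch the latest commits for the branch from origin.
    git -C "$web_root" fetch origin "$branch"

    # reset --hard (instead of pull/merge) so the web root always matches
    # origin exactly and can never end up in a conflicted state.
    git -C "$web_root" reset --hard "origin/$branch"
}
```

Running this from a post-receive hook (push-based) or a cron/systemd timer
(pull-based) gives each autoscaled machine the same deterministic state
without any shared filesystem.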
Hello,
Our problem is that the webpage runs in an autoscaling group, so a newly
created machine is not always up to date, and it needs the latest data at all
times.
I've tried several ways to do it:
- Local storage, synced: sometimes the sync fails and the data is not updated
- NFS: if the NFS server goes
Using CephFS for something like this is about the last thing I would do.
Does it need to be on a networked POSIX filesystem that can be mounted on
multiple machines at the same time? If so, then you're kinda stuck and we
can start looking at your MDS hardware and see if there are any MDS
settings
Hello,
I've tried changing a lot of configuration options and using ceph-fuse, but
nothing makes it work better... When I deploy the git repository it becomes
much slower until I remount the FS (just by executing systemctl stop nginx &&
umount /mnt/ceph && mount -a && systemctl start nginx). It
Hello,
I've created a Ceph cluster with 3 nodes and a filesystem to serve a webpage.
The webpage speed is good enough (close to NFS speed), and it has HA if one FS
node dies. My problem comes when I deploy a git repository onto that FS. The
server makes a lot of IOPS to check which files have to be updated, and then