If you want to have a swap, why not create a ramdisk and then format/use
it as swap?
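For reference, the ramdisk-as-swap suggestion can be sketched roughly as follows (a hedged sketch, not a recommendation: the brd module, device name, and size are assumptions, and the commands require root):

```shell
#!/bin/sh
# Sketch: create a RAM-backed block device and use it as swap.
# Assumes the brd (RAM disk) kernel module; sizes/names are illustrative.
setup_ram_swap() {
    # One /dev/ram0 device of 1 GiB (rd_size is in KiB)
    modprobe brd rd_nr=1 rd_size=1048576
    mkswap /dev/ram0          # format the ramdisk as swap
    swapon -p 10 /dev/ram0    # enable it with elevated priority
}
# Run setup_ram_swap as root; 'swapon --show' should then list /dev/ram0.
```

In practice zram usually serves this role better, since it compresses the swapped pages instead of storing them 1:1 in RAM; the brd variant above just mirrors the suggestion literally.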
-Original Message-
From: Brent Kennedy [mailto:bkenn...@cfl.rr.com]
Sent: 05 April 2020 20:13
To: 'Martin Verges'
Cc: 'ceph-users'
Subject: [ceph-users] Re: Questions on Ceph cluster without OS disks
> -Brent
>
> *From:* Martin Verges
> *Sent:* Sunday, April 5, 2020 3:04 AM
> *To:* Brent Kennedy
> *Cc:* huxia...@horebdata.cn; ceph-users
> *Subject:* Re: [ceph-users] Re: Questions on Ceph cluster without OS disks
>
> Hello Brent,
The default rsyslog in CentOS has been able to do remote logging for
many years.
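A minimal rsyslog forwarding rule looks roughly like this (a sketch; the drop-in path and the log host name are assumptions):

```
# /etc/rsyslog.d/forward.conf -- illustrative
*.* @@loghost.example.com:514    # "@@" forwards over TCP; a single "@" would use UDP
```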
-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: Questions on Ceph cluster without OS disks
Hello Martin,
I suspect you're using a central syslog server.
Can you share information which central syslog server you use?
Is this central server running on ceph cluster, too?
Regards
Thomas
On 23.03.2020 at 09:39, Martin Verges wrote:
> Hello Thomas,
>
> by default we allocate 1GB per Host
I suspect Ceph is configured in their case to send all logs off-node to a
central syslog server, ELK, etc.
With Jewel this seemed to result in daemons crashing, but probably it’s since
been fixed (I haven’t tried).
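Ceph can indeed be pointed at syslog instead of local files; a minimal ceph.conf fragment (option names from the upstream Ceph documentation, values assumed):

```
[global]
log_to_syslog = true                 # send daemon logs to syslog
err_to_syslog = true                 # send the error channel as well
mon_cluster_log_to_syslog = true     # cluster log from the monitors, too
```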
Hello Martin,
that is much less than the disk space I have seen allocated when
something went wrong with the cluster.
I have defined at least 10GB, and there were situations (in the past)
when this space was quickly filled by:
syslog
user.log
messages
daemon.log
Regards
Thomas
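Growth of exactly those files can be capped with a logrotate policy; a minimal sketch (paths, sizes, and rotation counts are illustrative):

```
/var/log/syslog /var/log/user.log /var/log/messages /var/log/daemon.log {
    size 500M
    rotate 4
    compress
    missingok
    notifempty
}
```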
Hello Thomas,
by default we allocate 1GB per Host on the Management Node, nothing on the
PXE booted server.
This value can be changed in the management container config file
(/config/config.yml):
> ...
> logFilesPerServerGB: 1
> ...
After changing the config, you need to restart the mgmt container.
Hello Martin,
how much disk space do you reserve for log in the PXE setup?
Regards
Thomas
On 22.03.2020 at 20:50, Martin Verges wrote:
> Hello Samuel,
>
> we from croit.io don't use NFS to boot up Servers. We copy the OS directly
> into the RAM (approximately 0.5-1GB). Think of it like a
Martin,
thanks a lot for the information. This is very interesting, and I will
contact you again if we decide to go this way.
best regards,
samuel
huxia...@horebdata.cn
From: Martin Verges
Date: 2020-03-22 20:50
To: huxia...@horebdata.cn
CC: ceph-users
Subject: Re: Questions on Ceph cluster
Hello Samuel,
we from croit.io don't use NFS to boot up Servers. We copy the OS directly
into the RAM (approximately 0.5-1GB). Think of it like a container, you
start it and throw it away when you no longer need it.
This way we can save the slots of OS harddisks to add more storage per node
and
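As a generic illustration of that boot-into-RAM pattern (not croit's actual implementation; labels, host names, and paths are assumptions), a PXE entry can load a squashfs image fully into memory via dracut's livenet support:

```
# pxelinux.cfg/default fragment (illustrative)
LABEL ceph-node
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=live:http://deploy.example.com/node.squashfs rd.live.ram=1
```

With `rd.live.ram=1` the image is copied to RAM at boot, so the node runs without any OS disk, matching the roughly 0.5-1GB footprint described above.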
I would say it is not a 'proven technology', otherwise you would see
widespread implementation and adaptation of this method. However, if you
really need the physical disk space, it is a solution. Although I also
would have questions on creating an extra redundant environment to
service