Colleagues, please tell me: who uses Puppet to deploy Ceph in
production?
Also, where can I get Puppet modules for Ceph?
Александр Пивушков
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi
Hello! My cluster uses two networks.
In ceph.conf there are two records: public_network = 10.53.8.0/24 and
cluster_network = 10.0.0.0/24.
Servers and clients are connected to one switch.
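Laid out as an actual ceph.conf fragment, the two records above would look roughly like this (a sketch; the [global] section placement is an assumption):

```ini
[global]
# client traffic (MON addresses, CephFS mounts) goes over this network
public_network  = 10.53.8.0/24
# OSD replication and recovery traffic goes over this network
cluster_network = 10.0.0.0/24
```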
To store data in Ceph, the clients use CephFS, mounted as
10.53.8.141:6789,10.53.8.143:6789,10.53.8.144:6789:/ on /mnt.
We re-export it over Samba and are trying to build HA Samba with CTDB.
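For reference, that CephFS kernel mount would correspond to an /etc/fstab line roughly like the following (a sketch; the client name, secretfile path, and options are assumptions, not from the original post):

```
10.53.8.141:6789,10.53.8.143:6789,10.53.8.144:6789:/  /mnt  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0
```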
I will read on :)
>Monday, 22 August 2016, 10:57 +03:00 from Christian Balzer :
>
>On Mon, 22 Aug 2016 10:18:51 +0300 Александр Пивушков wrote:
>
>> Hello,
>> Several answers below
>>
>> >Wednesday, 17 August 2016, 8:57 +03:00 from Christian Balzer < ch.
>> > > does a formula exist to calculate speed expectations, from the raw speed
>> > > and/or IOPS point of view?
>> > >
>> >
>> > Let's look at a simplified example:
>> > 10 nodes (with fast enough CPU cores to fully utilize those SSDs/
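As a rough answer to the formula question, a back-of-envelope like the one the quote starts can be sketched as follows (all numbers are illustrative, not from the thread; assumes a replicated pool, filestore with the journal co-located on the same SSD, which doubles every write, and a network that is not the bottleneck):

```python
# Back-of-envelope sketch of sustained cluster write bandwidth.
def sustained_write_bw(nodes: int, osds_per_node: int, per_osd_mb_s: float,
                       replication: int = 3, journal_penalty: int = 2) -> float:
    """Rough cluster-wide sustained write bandwidth in MB/s."""
    raw = nodes * osds_per_node * per_osd_mb_s
    # every client write is multiplied by the replication factor, and
    # a co-located filestore journal writes each object twice
    return raw / (replication * journal_penalty)

# e.g. 10 nodes with 4 SSD OSDs each at 400 MB/s per OSD, 3x replication:
print(round(sustained_write_bw(10, 4, 400)))  # ≈ 2667 MB/s
```

Real results will be lower once CPU, latency, and small-IOPS behavior are taken into account; this only bounds the streaming case.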
for? Object storage or block (RBD)? The former is fine, the latter
>is not ready yet. Think of k and m as being like data and parity disks in RAID6.
I have read what m and k are. I mean: what values of m and k do people actually use?
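To illustrate the k/m trade-off discussed above (a sketch; the profiles listed are common examples, not anyone's actual settings):

```python
# Sketch of erasure-code storage efficiency for a few common k/m
# profiles. k data chunks plus m coding chunks behave like data and
# parity disks in RAID6; the pool survives the loss of any m OSDs.
def ec_efficiency(k: int, m: int) -> float:
    """Fraction of raw capacity usable for data: k / (k + m)."""
    return k / (k + m)

for k, m in [(2, 1), (4, 2), (8, 3)]:
    print(f"k={k} m={m}: usable fraction {ec_efficiency(k, m):.2f}, "
          f"tolerates {m} OSD failures")
```

Higher k improves efficiency but widens each write across more OSDs, which costs latency; higher m buys durability at the price of raw capacity.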
--
Александр Пивушков
Hello, dear community.
A few questions have come up as we are learning Ceph:
- Do you think it is necessary to buy Red Hat Ceph Storage if we do
not plan to use the technical support? Is Red Hat Ceph Storage helpful for
beginners? Does it have any hidden optimization settings?
https://www.red
eph.com > on behalf of Nick Fisk
>< n...@fisk.me.uk >
>Reply-To: "n...@fisk.me.uk" < n...@fisk.me.uk >
>Date: Friday 12 August 2016 09:33
>To: 'Александр Пивушков' < p...@mail.ru >, 'ceph-users' <
>ceph-users@lists.ceph.com >
MDS, CephFS and ceph-dokan
>https://github.com/ketor/ceph-dokan
>
>Please share your experience: how is it possible to give Windows Ceph users
>access to the server with minimal (preferably zero :( ) overhead?
> I.e., how to make sure that the files generated by the pro
Hello,
I continue to design a high-performance, petabyte-scale Ceph cluster.
We plan to purchase a high-performance server running Windows 2016 for the clients.
The clients run in Docker containers.
https://docs.docker.com/engine/installation/windows/
Virtualization does not matter...
The clients run a program writte
And what about Calamari does not suit you?
--
Александр Пивушков.
Thursday, 4 August 2016, 16:56 +03:00 from Lenz Grimmer < l...@grimmer.com >:
>Hi all,
>
>FYI, a few days ago, we released openATTIC 2.0.13 beta. On the Ceph
>management side, we've made some progress with the clust
>
>2016-08-10 9:30 GMT+05:00 Александр Пивушков < p...@mail.ru > :
>>I want to use Ceph only as user data storage.
>>The user program writes data to a folder that is mounted from Ceph.
>>Virtual machine images are not stored on Ceph.
>>Fiber channel an
Hello
>Tuesday, 9 August 2016, 14:56 +03:00 from Christian Balzer :
>
>
>Hello,
>
>[re-added the list]
>
>Also, try to leave a line break or paragraph between quoted and new text;
>your mail looked like it was all written by me...
>
>On Tue, 09 Aug 2016 11:00:
oVirt.
Can I use oVirt in this scheme?
--
Александр Пивушков.
+7(961)5097964 Wednesday, 10 August 2016, 01:26 +03:00 from Christian Balzer <
ch...@gol.com> :
>
>Hello,
>
>On Tue, 9 Aug 2016 14:15:59 -0400 Jeff Bailey wrote:
>
>>
>>
>> On 8/9/2016 10:43 AM, Wi
Tuesday, 9 August 2016, 17:43 +03:00 from Wido den Hollander :
>
>
>> On 9 August 2016 at 16:36, Александр Пивушков < p...@mail.ru > wrote:
>>
>>
> >> Hello dear community!
>> >> I'm new to Ceph and only recently took up building
>> >> clusters,
>> >> so your opinion is very important to me.
>> >> We need to create a cluster with 1.2 PB of storage and very rapid
>> >> access to data. Earlier, disks of the "Intel® SS
Hello, dear community!
I'm new to Ceph and only recently took up building clusters,
so your opinion is very important to me.
We need to create a cluster with 1.2 PB of storage and very rapid access
to data. Earlier, disks of the "Intel® SSD DC P3608 Series 1.6TB NVMe PCIe 3.
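As a quick sanity check on the scale involved, the drive count for 1.2 PB usable on 1.6 TB NVMe devices can be sketched like this (the 3x replication factor and ~70% target fill ratio are assumptions for illustration, not from the original post):

```python
import math

# Rough drive-count sketch for a 1.2 PB usable target on 1.6 TB drives.
def drives_needed(usable_tb: float, drive_tb: float,
                  replication: int = 3, max_fill: float = 0.7) -> int:
    # raw capacity must cover replication plus headroom for recovery
    raw_tb = usable_tb * replication / max_fill
    return math.ceil(raw_tb / drive_tb)

print(drives_needed(1200, 1.6))  # 1.2 PB usable → 3215 drives
```

An erasure-coded pool would bring the multiplier down from 3x to roughly (k+m)/k, at the cost of latency, which matters for the "very rapid access" requirement.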