Hello,

> Yes, I gathered that.
> The question is, what servers between the Windows clients and the final
> Ceph storage are you planning to use.

Got it! :)
I think I will serve Samba directly from the OSD nodes,
if possible using this project: https://ctdb.samba.org/
On each OSD node I will install Samba and
Thank you very much for your answer!
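A clustered Samba gateway with CTDB over a shared CephFS mount is usually wired up along these lines. A minimal sketch, assuming CephFS is mounted at /mnt/cephfs on every gateway node; the share name, paths and IPs are illustrative, not taken from the thread:

```ini
; /etc/samba/smb.conf -- minimal clustered-Samba sketch (illustrative values)
[global]
    clustering = yes           ; let CTDB coordinate the TDB databases and locking
    netbios name = cephgw
    security = user

[data]
    path = /mnt/cephfs/data    ; a directory on the shared CephFS mount
    read only = no

; /etc/ctdb/nodes -- one private IP per gateway node (illustrative)
;   10.0.0.1
;   10.0.0.2
```

CTDB then manages a set of floating public IP addresses, so Windows clients keep working if one gateway node fails.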
> Yes, I gathered that.
> The question is, what servers between the Windows clients and the final
> Ceph storage are you planning to use.
>
That I do not yet understand,
though I believe the clients can be connected directly to Ceph :)
I will read
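For Linux clients a direct mount is indeed possible via the CephFS kernel client; Windows clients, on the other hand, had no native Ceph driver at the time, which is why a gateway such as Samba comes up at all. A sketch of an /etc/fstab entry, with placeholder monitor addresses and key file:

```
# /etc/fstab -- CephFS kernel mount (monitor IPs and secret file are placeholders)
192.168.0.1:6789,192.168.0.2:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime  0  2
```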
On Mon, 22 Aug 2016 10:18:51 +0300 Александр Пивушков wrote:
> Hello,
> Several answers below
>
Hello,
Several answers below
>Wednesday, 17 August 2016, 8:57 +03:00 from Christian Balzer:
Hello,
On Wed, 17 Aug 2016 09:27:30 +0500 Дробышевский, Владимир wrote:
> Christian,
>
> thanks a lot for your time. Please see below.
>
>
> 2016-08-17 5:41 GMT+05:00 Christian Balzer :
Hello,
On Wed, 17 Aug 2016 00:09:14 +0500 Дробышевский, Владимир wrote:
> Dear community,
>
> I've had a conversation with Alexander, and he asked me to explain the
> situation; he will be very grateful for any advice.
>
Your summary makes it somewhat clearer, but it still leaves some
Dear community,
I've had a conversation with Alexander, and he asked me to explain the
situation; he will be very grateful for any advice.
So the requirements look like this:
1. He has a number of clients which need to periodically write a data set
as big as 160GB to storage. The acceptable
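The snippet cuts off before the acceptable write window, but the required sustained throughput follows directly from it. A small illustrative calculation; the 10-minute window below is an assumption, not a figure from the thread:

```python
# Sustained throughput needed to land a data set within a time window.
# 160 GB is from the thread; the 600-second window is an assumption.
def required_throughput_mb_s(data_gb: float, window_s: float) -> float:
    return data_gb * 1000 / window_s  # decimal GB -> MB

print(round(required_throughput_mb_s(160, 600), 1))  # prints 266.7
```

Even a relaxed one-hour window still needs roughly 44 MB/s sustained per client, which is what makes the network and SSD choices in this thread matter.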
>
>2016-08-10 9:30 GMT+05:00 Александр Пивушков < p...@mail.ru > :
>>I want to use Ceph only as user data storage.
>>The user program writes data to a folder that is mounted on Ceph.
>>Virtual machine images are not stored on Ceph.
>>Fibre Channel and 40GbE are used only for the rapid
Hello,
> Tuesday, 9 August 2016, 14:56 +03:00 from Christian Balzer:
2016-08-10 9:30 GMT+05:00 Александр Пивушков :
> I want to use Ceph only as user data storage.
> The user program writes data to a folder that is mounted on Ceph.
> Virtual machine images are not stored on Ceph.
> Fibre Channel and 40GbE are used only for the rapid transmission
Hello Vladimir,
On Wed, 10 Aug 2016 09:12:39 +0500 Дробышевский, Владимир wrote:
> Christian,
>
> I have to say that OpenNebula 5 doesn't need any additional hacks (OK,
> just two lines of code to support rescheduling in case of original-node
> failure, and even this patch is scheduled to
I want to use Ceph only as user data storage.
The user program writes data to a folder that is mounted on Ceph.
Virtual machine images are not stored on Ceph.
Fibre Channel and 40GbE are used only for the rapid transmission of
information between the Ceph cluster and the virtual machine on
Christian,
I have to say that OpenNebula 5 doesn't need any additional hacks (OK,
just two lines of code to support rescheduling in case of original-node
failure, and even this patch is scheduled for 5.2, to be added after my
question a couple of weeks ago; but it isn't about 'live') or an
Hello,
On Tue, 9 Aug 2016 14:15:59 -0400 Jeff Bailey wrote:
>
> On 8/9/2016 10:43 AM, Wido den Hollander wrote:
On 8/9/2016 10:43 AM, Wido den Hollander wrote:
On 9 August 2016 at 16:36, Александр Пивушков wrote:
>> Hello dear community!
>> I'm new to Ceph and only recently took up the theme of building clusters.
>> Therefore your opinion is very important to me.
>> It is necessary
Tuesday, 9 August 2016, 17:43 +03:00 from Wido den Hollander:
>
>> On 9 August 2016 at 16:36, Александр Пивушков < p...@mail.ru > wrote:
>>
>>> Hello dear community!
>>> I'm new to Ceph and only recently took up the theme of building
>>> clusters.
> On 9 August 2016 at 16:36, Александр Пивушков wrote:
>
>> Hello dear community!
>> I'm new to Ceph and only recently took up the theme of building
>> clusters.
>> Therefore your opinion is very important to me.
>> It is necessary to create a
Hello,
[re-added the list]
Also try to leave a line-break, paragraph between quoted and new text,
your mail looked like it was all written by me...
On Tue, 09 Aug 2016 11:00:27 +0300 Александр Пивушков wrote:
> Thank you for your response!
>
>
> > Tuesday, 9 August 2016, 5:11 +03:00 from
Hello,
On Mon, 08 Aug 2016 17:39:07 +0300 Александр Пивушков wrote:
>
> Hello dear community!
> I'm new to Ceph and only recently took up the theme of building clusters.
> Therefore your opinion is very important to me.
> It is necessary to create a cluster with 1.2 PB of storage and very
Hello dear community!
I'm new to Ceph and only recently took up the theme of building clusters.
Therefore your opinion is very important to me.
It is necessary to create a cluster with 1.2 PB of storage and very rapid
access to data. Earlier, disks of the "Intel® SSD DC P3608 Series 1.6TB NVMe PCIe
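The 1.2 PB target together with 1.6 TB drives pins down the raw drive count once a replication factor is chosen. A quick estimate, assuming Ceph's default 3x replication and decimal units (real clusters also reserve headroom below the near-full ratio, so the practical count is higher):

```python
import math

# Drives needed to provide a usable capacity at a given replication factor.
# 1200 TB usable (1.2 PB) and 1.6 TB drives are from the thread; 3x is Ceph's default.
def drives_needed(usable_tb: float, drive_tb: float, replication: int = 3) -> int:
    return math.ceil(usable_tb * replication / drive_tb)

print(drives_needed(1200, 1.6))  # prints 2250
```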