On Mon, 10 Nov 2014 12:19:29 AM Michael Rasmussen wrote:
> I think -n size=8192 and inode64 are only useful if your storage size is
> greater than can be addressed by 32 bits. -n size=8192 will use more of
> the available storage for metadata, and inode64 consumes more RAM, so if
> this is not needed it
True. In my case, 3TB
--
Lindsay
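For context, the options being weighed here are the mkfs.xfs directory block size (-n size=8192) and the XFS inode64 mount option. A minimal Python sketch of that decision follows; it is not from the thread, and the roughly 1 TiB cutoff it uses for 32-bit inode placement is an assumption for illustration only.

# Rough sketch (not from the thread): decide whether the XFS options
# discussed above are likely to matter for a given brick size.
# Assumption: without inode64, XFS keeps inode numbers within 32 bits,
# which in practice confines inode allocation to roughly the first 1 TiB.

TIB = 1 << 40

def xfs_suggestions(fs_bytes):
    """Return the mkfs/mount hints mentioned in the thread, if warranted."""
    hints = []
    if fs_bytes > TIB:
        hints.append("mount with -o inode64")
        hints.append("consider mkfs.xfs -n size=8192 (larger directory blocks)")
    return hints

if __name__ == "__main__":
    three_tb = 3 * 10**12          # Lindsay's 3 TB drive
    for hint in xfs_suggestions(three_tb):
        print(hint)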
On Mon, 10 Nov 2014 08:32:34 +1000
Lindsay Mathieson wrote:
> I revisited gluster, this time formatted the filesystem per recommendations
> (difficult to find). This seemed to resolve the I/O problems; I haven't been
> able to recreate them no matter what the load.
>
What recommendations?
--
On Wed, 5 Nov 2014 05:34:04 PM Eneko Lacunza wrote:
> > Overall, I seemed to get similar I/O to what I was getting with
> > gluster, when I implemented an SSD cache for it (EXT4 with SSD
> > Journal). However, ceph seemed to cope better with high loads, with one
> > of my stress tests - starting 7 vm
Hi Lindsay,
On 05/11/14 01:52, Lindsay Mathieson wrote:
> Thanks for the informative reply Eneko, most helpful.
I'm glad that my response was helpful, thanks :)
On 05.11.2014 3:52, Lindsay Mathieson wrote:
> Can journal size be too large? If I gave 20GB+ to a journal for 3TB drives,
> would it be used or is that just a waste? Thanks,
Journal size is not a matter of drive size, but of its speed and the required
write size/performance. It is the amount of data that cli
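The truncated answer above is pointing at the usual filestore sizing logic: the journal only has to absorb what clients can write between journal flushes. A rough sketch of the rule of thumb from the Ceph filestore docs (osd journal size of at least 2 x expected throughput x filestore max sync interval) follows; the 170 MB/s figure is the drive speed quoted elsewhere in the thread and the 5 second sync interval is the filestore default, so the numbers are illustrative rather than anything measured here.

# Rough sketch of the Ceph filestore journal sizing rule of thumb:
#   osd journal size >= 2 * expected_throughput * filestore_max_sync_interval
# Figures below are assumptions for illustration, not from the thread's tests.

def journal_size_mb(disk_mb_s, net_mb_s, sync_interval_s=5):
    """Suggested journal size in MB for one filestore OSD."""
    expected = min(disk_mb_s, net_mb_s)   # throughput is capped by the slower path
    return 2 * expected * sync_interval_s

if __name__ == "__main__":
    size = journal_size_mb(disk_mb_s=170,   # ~170 MB/s WD Red write speed
                           net_mb_s=250)    # ~2 x 1GbE bonded, best case
    print(f"~{size / 1024:.1f} GB journal")  # well under the 20 GB Lindsay asked about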
On 3 November 2014 18:10, Eneko Lacunza wrote:
> Hi Lindsay,
Thanks for the informative reply Eneko, most helpful.
> 4 drives per server will be better, but using SSD for journals will help you
> a lot, and could even give you better performance than 4 osds per server. He had
> for some months a 2-o
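Eneko's point about SSD journals follows from how filestore writes: every client write hits the journal first and then the data partition, so a journal co-located on the same spinning disk roughly halves usable throughput. Below is a small back-of-the-envelope sketch of that effect; the 170 MB/s disk figure comes from Lindsay's setup, the rest is assumption for illustration.

# Back-of-the-envelope: why an SSD journal helps a filestore OSD.
# With the journal co-located on the spinning disk, every write is written
# twice (journal + data), roughly halving usable throughput.
# Figures are illustrative assumptions, not measurements from the thread.

DISK_MB_S = 170        # ~WD Red sequential write, from the original post

def osd_write_throughput(journal_on_ssd: bool) -> float:
    """Approximate sustained write MB/s for one filestore OSD."""
    if journal_on_ssd:
        return DISK_MB_S        # data disk only handles the data write
    return DISK_MB_S / 2        # same disk absorbs journal + data writes

if __name__ == "__main__":
    print("journal on same disk:", osd_write_throughput(False), "MB/s")
    print("journal on SSD:      ", osd_write_throughput(True), "MB/s")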
Hi Lindsay,
On 02/11/14 08:59, Lindsay Mathieson wrote:
One OSD per node.
You're breaking CEPH's philosophy. It's
designed to be used with at least tens of OSDs. You can use any old/cheap
drives. Just more-better.
Yah, bit of a learning curve here, having to adjust my preconceptions and
expectations.
On 02.11.2014 5:18, Lindsay Mathieson wrote:
> Have been doing a lot of testing with a three node/2 osd setup
> - 3TB WD red drives (about 170MB/s write)
> - 2 * 1GB Ethernet Bonded dedicated to the network filesystem
2 OSD each node or only 2 OSD? You're breaking CEPH's philosophy. It's designed
to be used with at least tens of OSDs. You can use any old/cheap drives. Just
more-better.
Have been doing a lot of testing with a three node/2 osd setup
- 3TB WD red drives (about 170MB/s write)
- 2 * 1GB Ethernet Bonded dedicated to the network filesystem
With glusterfs, individual VMs were getting up to 70 MB/s write performance.
Tests on the gluster mount gave 170 MB/s, the drive m
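The hardware in this setup puts fairly hard ceilings on what any network filesystem can deliver, which helps frame the 70 MB/s per-VM figure. The quick arithmetic below uses only theoretical link and disk limits from the description above; it ignores protocol overhead, replication traffic, and the fact that many bonding modes limit a single stream to one link.

# Back-of-the-envelope ceilings for the setup described above.
# Theoretical figures only; ignores protocol overhead and replication traffic.

GBE_MB_S = 125            # 1 Gbit/s ~= 125 MB/s raw
BOND_LINKS = 2            # 2 x 1GbE bonded, per the original post
DISK_MB_S = 170           # WD Red write speed, per the original post

network_ceiling = GBE_MB_S * BOND_LINKS   # ~250 MB/s aggregate, best case
single_stream = GBE_MB_S                  # many bond modes pin one stream to one link

print(f"aggregate network ceiling: ~{network_ceiling} MB/s")
print(f"single-stream ceiling:     ~{single_stream} MB/s")
print(f"disk write ceiling:        ~{DISK_MB_S} MB/s")
print(f"best case for one client:  ~{min(single_stream, DISK_MB_S)} MB/s")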