Re: [ceph-users] Can CephFS Kernel Client Not Read & Write at the Same Time?

2019-03-07 Thread Ketil Froyn
On Fri, Mar 8, 2019, 01:15 Gregory Farnum  wrote:

> In general, no, this is not an expected behavior.
>

For clarification:

I assume you are responding to Andrew's last question "Is this expected
behavior...?" (quoted below).

When I first read through, it looked like your mail was a response to the
question in the subject "Can CephFS Kernel Client Not Read & Write at the
Same Time?", and that was concerning.


>> Is this expected behavior for the CephFS kernel drivers? Can a CephFS
>> kernel client really not read and write to the file system simultaneously?
>>
>> Thanks,
>> Andrew Richards
>> Senior Systems Engineer
>> Keeper Technology, LLC
>>
>
Regards, Ketil
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Upgrade Luminous to mimic on Ubuntu 18.04

2019-02-18 Thread Ketil Froyn
I think there may be something wrong with the apt repository for
bionic, actually. Compare the packages available for Xenial:

https://download.ceph.com/debian-luminous/dists/xenial/main/binary-amd64/Packages

to the ones available for Bionic:

https://download.ceph.com/debian-luminous/dists/bionic/main/binary-amd64/Packages

The only package listed in the repository for Bionic is ceph-deploy,
while there are many for Xenial. A quick summary:

$ curl -s https://download.ceph.com/debian-luminous/dists/bionic/main/binary-amd64/Packages | grep ^Package | wc -l
1
$ curl -s https://download.ceph.com/debian-luminous/dists/xenial/main/binary-amd64/Packages | grep ^Package | wc -l
63
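
On a node that can't see newer packages, it may also be worth checking what
apt itself resolves and whether anything is held back. A rough sketch,
assuming the standard download.ceph.com sources entry is in place:

$ apt-cache policy ceph
$ apt-mark showhold
$ grep -r ceph /etc/apt/sources.list /etc/apt/sources.list.d/

That said, if the Bionic repo really only publishes ceph-deploy, no local apt
configuration will make 12.2.11 appear from download.ceph.com.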

Ketil

On Tue, 19 Feb 2019 at 02:10, David Turner  wrote:
>
> Everybody is just confused that you don't have a newer version of Ceph 
> available. Are you running `apt-get dist-upgrade` to upgrade ceph? Do you 
> have any packages being held back? There is no reason that Ubuntu 18.04 
> shouldn't be able to upgrade to 12.2.11.
>
> On Mon, Feb 18, 2019, 4:38 PM,  wrote:
>> Hello people,
>>
>> On 11 February 2019 12:47:36 CET, c...@elchaka.de wrote:
>> >Hello Ashley,
>> >
>> >On 9 February 2019 17:30:31 CET, Ashley Merrick wrote:
>> >>What does the output of apt-get update look like on one of the nodes?
>> >>
>> >>You can just list the lines that mention CEPH
>> >>
>> >
>> >... .. .
>> >Get:6 https://download.ceph.com/debian-luminous bionic InRelease [8393 B]
>> >... .. .
>> >
>> >The last version available there is 12.2.8.
>>
>> Any advice or recommendations on how to proceed so we can update to
>> Mimic (or Nautilus)?
>>
>> - Mehmet
>> >
>> >- Mehmet
>> >
>> >>Thanks
>> >>
>> >>On Sun, 10 Feb 2019 at 12:28 AM,  wrote:
>> >>
>> >>> Hello Ashley,
>> >>>
>> >>> Thank you for this fast response.
>> >>>
>> >>> I can't prove this yet, but I am already using Ceph's own repo for Ubuntu
>> >>> 18.04, and 12.2.7/8 is the latest version available there...
>> >>>
>> >>> - Mehmet
>> >>>
>> >>> On 9 February 2019 17:21:32 CET, Ashley Merrick <singap...@amerrick.co.uk> wrote:
>> >>> >Regarding available versions, are you using the Ubuntu repos or the
>> >>> >Ceph repo for 18.04?
>> >>> >
>> >>> >The updates will always be slower to reach you if you're waiting for them
>> >>> >to hit the Ubuntu repo vs adding Ceph's own.
>> >>> >
>> >>> >
>> >>> >On Sun, 10 Feb 2019 at 12:19 AM,  wrote:
>> >>> >
>> >>> >> Hello m8s,
>> >>> >>
>> >>> >> I'm curious how we should do an upgrade of our Ceph cluster on Ubuntu
>> >>> >> 16.04/18.04, as (at least on our 18.04 nodes) we only have 12.2.7 (or .8?).
>> >>> >>
>> >>> >> For an upgrade to Mimic we should first update to the latest Luminous
>> >>> >> point release, currently 12.2.11 (IIRC), which is not possible on 18.04.
>> >>> >>
>> >>> >> Is there an upgrade path from 12.2.7/8 to the current Mimic release, or
>> >>> >> better, to the upcoming Nautilus?
>> >>> >>
>> >>> >> Any advice?
>> >>> >>
>> >>> >> - Mehmet



-- 
-Ketil
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Commercial support

2019-01-23 Thread Ketil Froyn
Hi,

How is the commercial support for Ceph? More specifically, I was recently
pointed in the direction of the very interesting combination of CephFS,
Samba and ctdb. Is anyone familiar with companies that provide commercial
support for in-house solutions like this?

Regards, Ketil
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Ketil Froyn
Robert,

Thanks, this is really interesting. Do you also have any details on how a
solution like this performs? I've been reading a thread about samba/cephfs
performance, and the stats aren't great - especially when creating/deleting
many files - but being a rookie, I'm not 100% clear on the hardware
differences being benchmarked in the mentioned test.

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026841.html

Regards, Ketil

On Tue, Jan 15, 2019, 16:38 Robert Sander wrote:
> Hi Ketil,
>
> use Samba/CIFS with multiple gateway machines clustered with CTDB.
> CephFS can be mounted with POSIX ACL support.
>
> Slides from my last Ceph day talk are available here:
>
> https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-unlimited-fileserver-with-samba-ctdb-and-cephfs
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Managing Director: Peer Heinlein - Registered office: Berlin
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Ketil Froyn
Hi,

I'm pretty new to Ceph - pardon the newbie question. I've done a bit
of reading and searching, but I haven't seen an answer to this yet.

Is anyone using Ceph to power a filesystem shared among a network of
Linux, Windows and Mac clients? How have you set it up? Is there a
mature Windows driver for CephFS? If not, are you using Samba/CIFS on
top of CephFS, or Samba/CIFS on top of a large RBD volume? Or
something else entirely? I'm looking for something scalable that
supports AD integration and ACL access control.
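
To make the question a bit more concrete, what I imagine for the "Samba/CIFS
on top of CephFS" variant is roughly the smb.conf fragment below on each
gateway that mounts CephFS (just a sketch; the share name, paths, realm and
workgroup are placeholders, and the AD join/winbind setup is left out):

# /etc/samba/smb.conf (sketch only)
[global]
   security = ads
   realm = EXAMPLE.COM
   workgroup = EXAMPLE
   # winbind/idmap settings omitted

[projects]
   # path points at a directory on a CephFS mount on this gateway
   path = /mnt/cephfs/projects
   read only = no
   # keep Windows ACLs in extended attributes
   vfs objects = acl_xattr

Is that the sort of setup people are running, or is an RBD-backed share more
common in practice?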

Are there any recommendations for this? Any pointers would be appreciated.

Regards, Ketil
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] dd testing from within the VM

2016-05-20 Thread Ketil Froyn
I'm a lurker here and don't know much about ceph, but:

If fdatasync hardly makes a difference, then either it's not being honoured
(which would be a major problem), or there's something else that is a
bottleneck in your test (more likely).

It's not uncommon for a poor choice of block size (bs) to have a big effect
on the speed with dd. Try something much bigger. How big is the minimum
record size written to your Ceph cluster? Reading
http://docs.ceph.com/docs/master/man/8/rbd/ it appears the default is 4 MB.
On a normal RAID you'd often see a record size of something like 64 KB or
128 KB.

A bs that is too big usually isn't a problem as long as it's a multiple of
the record size. You could try again with bs=4M, or even bigger, so
something like:

dd if=/dev/zero of=test.file bs=4M count=1024
dd if=/dev/zero of=test.file bs=4M count=1024 conv=fdatasync

to see if this affects your performance. And you might want to write more
than 4GB to make sure you get it spread out, though this may not change the
result for a single sequential write. You could try running several in
parallel to see if your total speed is higher.
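
As a rough sketch of the parallel idea (this assumes you have room for four
more 4 GB files, and keeps fdatasync so the writes are actually flushed):

for i in 1 2 3 4; do
  dd if=/dev/zero of=test.file.$i bs=4M count=1024 conv=fdatasync &
done
wait

Add up the four reported rates to get an approximate aggregate figure.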

Proper benchmarking can be difficult with dd or other tools. Try a dedicated
benchmarking tool like bonnie++ instead, though I'm not sure it does
concurrent writes.
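
If you do try bonnie++, something along these lines should give a basic
sequential throughput picture (a sketch; /mnt/test is a placeholder, and -s
should be well above the VM's RAM so the page cache doesn't dominate):

$ bonnie++ -d /mnt/test -s 16g -n 0

The -n 0 skips the small-file creation tests, which are a separate question
from raw sequential throughput.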

Cheers, Ketil

On 19 May 2016 at 04:40, Ken Peng  wrote:

> Hi,
>
> Our VM has been using Ceph as block storage for both the system and data
> partitions.
>
> This is what dd shows,
>
> # dd if=/dev/zero of=test.file bs=4k count=1024k
> 1048576+0 records in
> 1048576+0 records out
> 4294967296 bytes (4.3 GB) copied, 16.7969 s, 256 MB/s
>
> When dd is run again with the fdatasync argument, the result is similar.
>
> # dd if=/dev/zero of=test.file bs=4k count=1024k conv=fdatasync
> 1048576+0 records in
> 1048576+0 records out
> 4294967296 bytes (4.3 GB) copied, 17.6878 s, 243 MB/s
>
>
> My questions include,
>
> 1. For a cluster with more than 200 disks as OSD storage (SATA only), where
> both the cluster and data networks are 10 Gbps, is the performance from
> within the VM, as shown above, reasonable?
>
> 2. Is "dd" suitable for testing block storage from within the VM?
>
> 3. Why does "fdatasync" make no difference in the testing?
>
> Thank you.



-- 
-Ketil 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com