Re: [ceph-users] Nautilus:14.2.2 Legacy BlueStore stats reporting detected

2019-07-29 Thread Robert Sander
On 24.07.19 09:18, nokia ceph wrote:

> Please let us know if disabling bluestore warn on legacy statfs is the only
> option for upgraded clusters.

You can repair the OSD with

systemctl stop ceph-osd@$OSDID
ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-$OSDID
systemctl start ceph-osd@$OSDID

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Robert Sander
Hi,

On 29.05.19 11:19, Martin Verges wrote:
> 
> We have identified the performance settings in the BIOS as a major
> factor
> 
> could you share your insights what options you changed to increase
> performance and could you provide numbers to it?

Most default performance settings nowadays seem to be geared towards
power saving. This lowers CPU frequencies and does not play well
with Ceph (and virtualization).

There was just one setting in the BIOS of these machines called "Host
Performance" that was set to "Balanced". We changed that to "Max
Performance" and immediately the throughput doubled.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: [ceph-users] performance in a small cluster

2019-05-29 Thread Robert Sander
On 24.05.19 14:43, Paul Emmerich wrote:
> * SSD model? Lots of cheap SSDs simply can't handle more than that

The customer currently has 12 Micron 5100 1.92 TB (Micron_5100_MTFDDAK1)
SSDs and will get a batch of Micron 5200 drives in the coming days.

We have identified the performance settings in the BIOS as a major
factor. Ramping that up we got a remarkable performance increase.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





Re: [ceph-users] performance in a small cluster

2019-05-24 Thread Robert Sander
On 24.05.19 14:43, Paul Emmerich wrote:
> 20 MB/s at 4K blocks is ~5000 iops, that's 1250 IOPS per SSD (assuming
> replica 3).
> 
> What we usually check in scenarios like these:
> 
> * SSD model? Lots of cheap SSDs simply can't handle more than that

The system has been newly created and is not busy at all.

We tested a single SSD without OSD on top with fio: it can do 50K IOPS
read and 16K IOPS write.
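
For reference, a raw-device baseline like that can be gathered with fio
roughly as follows (a sketch, not our exact invocation; /dev/sdX is a
placeholder and the write test destroys the data on that device):

fio --name=randread --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=randread --bs=4k --iodepth=32 --runtime=60 --time_based

fio --name=randwrite --filename=/dev/sdX --direct=1 --ioengine=libaio \
    --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based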

> * Get some proper statistics such as OSD latencies, disk IO utilization,
> etc. A benchmark without detailed performance data doesn't really help
> to debug such a problem

Yes, that is correct, we will try to set up a perfdata gathering system.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





[ceph-users] performance in a small cluster

2019-05-24 Thread Robert Sander

Hi,

we have a small cluster at a customer's site with three nodes and 4 
SSD-OSDs each.

Connected with 10G the system is supposed to perform well.

rados bench shows ~450MB/s write and ~950MB/s read speeds with 4MB 
objects but only 20MB/s write and 95MB/s read with 4KB objects.


This is a little bit disappointing as the 4K performance is also seen in 
KVM VMs using RBD.


Is there anything we can do to improve performance with small objects / 
block sizes?


Jumbo frames have already been enabled.
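
The numbers below come from rados bench; the invocations were roughly as
follows (a sketch, not the verbatim commands; the pool name is a
placeholder):

rados bench -p rbd 30 write --no-cleanup           # 4MB objects (default)
rados bench -p rbd 30 rand                         # 4MB random read
rados bench -p rbd 30 write -b 4096 --no-cleanup   # 4K objects
rados bench -p rbd 30 rand                         # 4K random read
rados -p rbd cleanup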

4MB objects write:

Total time run: 30.218930
Total writes made:  3391
Write size: 4194304
Object size:4194304
Bandwidth (MB/sec): 448.858
Stddev Bandwidth:   63.5044
Max bandwidth (MB/sec): 552
Min bandwidth (MB/sec): 320
Average IOPS:   112
Stddev IOPS:15
Max IOPS:   138
Min IOPS:   80
Average Latency(s): 0.142475
Stddev Latency(s):  0.0990132
Max latency(s): 0.814715
Min latency(s): 0.0308732

4MB objects rand read:

Total time run:   30.169312
Total reads made: 7223
Read size:4194304
Object size:  4194304
Bandwidth (MB/sec):   957.662
Average IOPS: 239
Stddev IOPS:  23
Max IOPS: 272
Min IOPS: 175
Average Latency(s):   0.0653696
Max latency(s):   0.517275
Min latency(s):   0.00201978

4K objects write:

Total time run: 30.002628
Total writes made:  165404
Write size: 4096
Object size:4096
Bandwidth (MB/sec): 21.5351
Stddev Bandwidth:   2.0575
Max bandwidth (MB/sec): 22.4727
Min bandwidth (MB/sec): 11.0508
Average IOPS:   5512
Stddev IOPS:526
Max IOPS:   5753
Min IOPS:   2829
Average Latency(s): 0.00290095
Stddev Latency(s):  0.0015036
Max latency(s): 0.0778454
Min latency(s): 0.00174262

4K objects read:

Total time run:   30.000538
Total reads made: 1064610
Read size:4096
Object size:  4096
Bandwidth (MB/sec):   138.619
Average IOPS: 35486
Stddev IOPS:  3776
Max IOPS: 42208
Min IOPS: 26264
Average Latency(s):   0.000443905
Max latency(s):   0.0123462
Min latency(s):   0.000123081


Regards
--
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin


Re: [ceph-users] ceph mimic and samba vfs_ceph

2019-05-09 Thread Robert Sander
On 08.05.19 23:23, Gregory Farnum wrote:

> Fixing the wiring wouldn't be that complicated if you can hack on the
> code at all, but there are some other issues with the Samba VFS
> implementation that have prevented anyone from prioritizing it so far.
> (Namely, smb forks for every incoming client connection, which means
> every smb client gets a completely independent cephfs client, which is
> very inefficient.)

Inefficient because the local cache effort is multiplied or because
too many clients stress the MDS?

I thought it would be more efficient to run multiple clients (in
userspace) that interact with the Ceph cluster in parallel, instead of
having only one mounted filesystem (kernel or FUSE) that all the data
passes through.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





[ceph-users] Tip for erasure code profile?

2019-05-03 Thread Robert Sander
Hi,

I would be glad if anybody could give me a tip for an erasure code
profile and an associated crush ruleset.

The cluster spans 2 rooms with each room containing 6 hosts and each
host has 12 to 16 OSDs.

The failure domain would be the room level, i.e. data should survive if
one of the rooms has a power loss.

Is that even possible with erasure coding?
I am only coming up with profiles where m=6, but that seems to be a
little overkill.
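
For context, a sketch of the kind of profile and rule I mean (names and
IDs are examples; assuming 6 chunks end up in each room): the profile

ceph osd erasure-code-profile set ec-rooms k=6 m=6 crush-failure-domain=host

combined with a rule in the decompiled crushmap that places 6 chunks into
each room:

rule ec-rooms-rule {
    id 2
    type erasure
    min_size 12
    max_size 12
    step set_chooseleaf_tries 5
    step take default
    step choose indep 2 type room
    step chooseleaf indep 6 type host
    step emit
}

With 6 of the 12 chunks per room, one room still holds k=6 chunks after
the other room goes dark, so the data stays readable, but at 2x raw
storage overhead, hence my feeling that m=6 is overkill.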

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] ceph-volume activate runs infinitely

2019-05-03 Thread Robert Sander
Hi,

On 02.05.19 15:20, Alfredo Deza wrote:

>   stderr: Job for ceph-osd@21.service canceled.
> 
> Do you have output on the osd12 logs at /var/log/ceph ?

Unfortunately the customer has set up central logging without local
fallback. Rsyslogd was not running yet and the Ceph OSDs were
configured to log to syslog and not to files…

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] ceph-volume activate runs infinitely

2019-05-02 Thread Robert Sander
 start ceph-osd@21
 stderr: Job for ceph-osd@21.service canceled.

There is nothing in the global journal because journald had not
been started at that time.

> The "After=" directive is just adding some wait time to start
> activating here, so I wonder how it is that your OSDs didn't
> eventually come up.

Yes, we added that After= directive because ceph-osd@.service contains
this line. At least it does no harm. ;)

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





[ceph-users] ceph-volume activate runs infinitely

2019-05-02 Thread Robert Sander
Hi,

The ceph-volume@.service units on an Ubuntu 18.04.2 system
run indefinitely and never finish.

Only after we create this override config the system boots again:

# /etc/systemd/system/ceph-volume@.service.d/override.conf
[Unit]
After=network-online.target local-fs.target time-sync.target ceph-mon.target

It looks like "After=local-fs.target" (the original value) does not
pull in all the needed dependencies.
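
After dropping in such an override file systemd has to re-read its units,
the standard step being:

systemctl daemon-reload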

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





[ceph-users] Newly created OSDs receive no objects / PGs

2019-03-22 Thread Robert Sander
Hi,

we created a bunch of new OSDs on three new nodes this morning. Now,
after roughly 6 hours, they are still empty and the cluster has not
rebalanced any objects or placement groups to them.

The crush rule is the simple one, selecting destination OSDs on
different hosts. All OSDs are up and in and have a weight
corresponding to their size.

What could be the issue here?

Excerpt from "ceph osd df tree":

ID  CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS NAME
…
30 hdd  3.64000  1.0 3.6 TiB  1.5 TiB 2.1 TiB 41.16 1.91 119 osd.30
31 hdd  3.64000  0.8 3.6 TiB  1.2 TiB 2.5 TiB 32.36 1.50  96 osd.31
60 hdd  3.64000  1.0 3.6 TiB  1.5 TiB 2.1 TiB 42.38 1.96 127 osd.60
61 hdd  3.64000  1.0 3.6 TiB  1.3 TiB 2.4 TiB 34.57 1.60 118 osd.61
62 hdd  3.62999  1.0 3.6 TiB  1.3 TiB 2.3 TiB 36.43 1.69 107 osd.62
63 hdd  3.64000  1.0 3.6 TiB  1.6 TiB 2.1 TiB 43.06 1.99 118 osd.63
71 hdd  3.62999  1.0 3.6 TiB  1.2 TiB 2.5 TiB 32.12 1.49  99 osd.71
-70   128.11981        - 128 TiB  3.0 TiB 125 TiB  2.31 0.11      host al103
142 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.142
143 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.143
144 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.144
145 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.145
146 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.146
147 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.147
148 hdd 7.45999  1.0 7.5 TiB  189 GiB 7.3 TiB  2.47 0.11   0 osd.148
…

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Error in Mimic repo for Ubuntu 18.04

2019-03-15 Thread Robert Sander
On 15.03.19 13:40, Konstantin Shalygin wrote:
>> This seems to be still a problem...
>>
>> Is anybody looking into it?
> Anybody of Ubuntu users is created ticket to devops [1] project? No...

> [1] http://tracker.ceph.com/projects/devops/activity

Last time I created a ticket I was told to first ask on the mailing list…

I will now open one.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





[ceph-users] Error in Mimic repo for Ubuntu 18.04

2019-03-14 Thread Robert Sander
Hi,

when running "apt update" I get the following error:

Err:6 http://download.ceph.com/debian-mimic bionic/main amd64 Packages
  File has unexpected size (13881 != 13883). Mirror sync in progress?
  [IP: 158.69.68.124 80]
  Hashes of expected file:
   - Filesize:13883 [weak]
   - SHA256:91a7e695d565b6459adf32476400fb64aaf1c5f93265394a1e17770176f92e0e
   - SHA1:2f845c3715f38f689eeb75601ace099f73651a45 [weak]
   - MD5Sum:a1ced382b449dddacaea4b1da995388a [weak]
  Release file created at: Fri, 04 Jan 2019 17:24:09 +

Is there a corrupt Packages or Release file in the repo?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] v12.2.11 Luminous released

2019-02-01 Thread Robert Sander
On 01.02.19 19:06, Neha Ojha wrote:

> If you would have hit the bug, you should have seen failures like
> https://tracker.ceph.com/issues/36686.
> Yes, pglog_hardlimit is off by default in 12.2.11. Since you are
> running 12.2.9(which has the patch that allows you to limit the length
> of the pg log), you could follow the steps and upgrade to 12.2.11 and
> set this flag.

The question is: If I am now on 12.2.9 and see no issues, do I have to
set this flag after upgrading to 12.2.11?
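
For reference, the flag in question is set cluster-wide, once all OSDs
run a version that supports it, with:

ceph osd set pglog_hardlimit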

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Question regarding client-network

2019-01-30 Thread Robert Sander
On 30.01.19 08:55, Buchberger, Carsten wrote:

> So as long as there is IP connectivity between the client and the
> client-network IP addresses of our Ceph cluster, everything is fine?

Yes, client traffic is routable.

Even inter-OSD traffic is routable, there are reports from people
running routing protocols inside their Ceph clusters.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Commercial support

2019-01-28 Thread Robert Sander
Hi,

On 23.01.19 23:28, Ketil Froyn wrote:

> How is the commercial support for Ceph?

At Heinlein Support we also offer independent Ceph consulting.
We concentrate on the German-speaking regions of Europe.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] How To Properly Failover a HA Setup

2019-01-21 Thread Robert Sander
On 21.01.19 09:22, Charles Tassell wrote:
> Hello Everyone,
> 
>    I've got a 3 node Jewel cluster setup, and I think I'm missing 
> something.  When I want to take one of my nodes down for maintenance 
> (kernel upgrades or the like) all of my clients (running the kernel 
> module for the cephfs filesystem) hang for a couple of minutes before 
> the redundant servers kick in.

Have you set the noout flag before doing cluster maintenance?

ceph osd set noout

and afterwards

ceph osd unset noout

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] quick questions about a 5-node homelab setup

2019-01-18 Thread Robert Sander
On 18.01.19 11:48, Eugen Leitl wrote:

> OSD on every node (Bluestore), journal on SSD (do I need a directory, or a 
> dedicated partition? How large, assuming 2 TB and 4 TB Bluestore HDDs?)

You need a partition on the SSD for the block.db (it's not a journal
anymore with BlueStore). You should look into osd_memory_target to
configure the OSD process with 1 or 2 GB of RAM in your setup.
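
A minimal ceph.conf sketch for that (the value is in bytes, 2 GiB here):

[osd]
osd_memory_target = 2147483648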

> Can I run ceph-mon instances on the two D510, or would that already overload 
> them? No sense to try running 2x monitors on D510 and one on the 330, right?

Yes, Mons need some resources. If you have set osd_memory_target they
may fit on your Atoms.

> I've just realized that I'll also need ceph-mgr daemons on the hosts running 
> ceph-mon. I don't see the added system resource requirements for these.

The mgr process is quite light in resource usage.

> Assuming BlueStore is too fat for my crappy nodes, do I need to go to 
> FileStore? If yes, then with xfs as the file system? Journal on the SSD as a 
> directory, then?

The journal for FileStore is also a block device.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Filestore OSD on CephFS?

2019-01-16 Thread Robert Sander
On 16.01.19 16:03, Kenneth Van Alstyne wrote:

> To be clear, I know the question comes across as ludicrous.  It *seems*
> like this is going to work okay for the light workload use case that I
> have in mind — I just didn’t want to risk impacting the underlying
> cluster too much or hit any other caveats that perhaps someone else has
> run into before. 

Why is setting up a distinct pool as the destination for your RBD mirrors
not an option? Does it have to be an extra cluster?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Recommendations for sharing a file system to a heterogeneous client network?

2019-01-15 Thread Robert Sander
Hi Ketil,

use Samba/CIFS with multiple gateway machines clustered with CTDB.
CephFS can be mounted with Posix ACL support.

Slides from my last Ceph day talk are available here:
https://www.slideshare.net/Inktank_Ceph/ceph-day-berlin-unlimited-fileserver-with-samba-ctdb-and-cephfs

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Performance Problems

2018-12-10 Thread Robert Sander
On 07.12.18 18:33, Scharfenberg, Buddy wrote:

> We have 3 nodes set up, 1 with several large drives, 1 with a handful of
> small ssds, and 1 with several nvme drives.

This is a very unusual setup. Do you really have all your HDDs in one
node, the SSDs in another and NVMe in the third?

How do you guarantee redundancy?

You should evenly distribute your storage devices across your nodes,
this may already be a performance boost as it distributes the requests.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Luminous v12.2.10 released

2018-11-27 Thread Robert Sander
On 27.11.18 15:50, Abhishek Lekshmanan wrote:

>   As mentioned above if you've successfully upgraded to v12.2.9 DO NOT
>   upgrade to v12.2.10 until the linked tracker issue has been fixed.

What about clusters currently running 12.2.9 (because this was the
version in the repos when they were installed / last upgraded) where new
nodes are scheduled to be set up?
Can the new nodes be installed with 12.2.10 and run with the other
12.2.9 nodes?
Should the new nodes be pinned to 12.2.9?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin





[ceph-users] Move the disk of an OSD to another node?

2018-11-21 Thread Robert Sander
Hi,

I was wondering whether it is a good idea to just move the disk of an OSD
to another node.

The prerequisite is that the FileStore journal or, for BlueStore, the
RocksDB and WAL are located on the same device as the data.

I have tested this move on a virtual ceph cluster and it seems to work.

We set noout, stopped the OSD process, unmounted everything and removed
the (virtual) disk from the original node. We then attached the disk to
the new node; as soon as the disk was recognized an OSD process started
and some rebalancing happened, after which the cluster was healthy.
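
Condensed into commands the test looked roughly like this (a sketch;
$OSDID is a placeholder):

ceph osd set noout
systemctl stop ceph-osd@$OSDID
umount /var/lib/ceph/osd/ceph-$OSDID
# physically move the disk; on the new node it is detected and the OSD starts
ceph osd unset noout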

But now, every time something happens, the numbers are odd.

E.g. when I add a new OSD:

2018-11-21 10:00:00.000301 mon.ceph04 mon.0 192.168.101.92:6789/0 10374 : cluster [INF] overall HEALTH_OK
2018-11-21 10:08:43.950612 mon.ceph04 mon.0 192.168.101.92:6789/0 10427 : cluster [INF] osd.8 192.168.101.156:6805/2361542 boot
2018-11-21 10:08:44.946176 mon.ceph04 mon.0 192.168.101.92:6789/0 10429 : cluster [WRN] Health check failed: 2/1716 objects misplaced (0.117%) (OBJECT_MISPLACED)
2018-11-21 10:08:44.946211 mon.ceph04 mon.0 192.168.101.92:6789/0 10430 : cluster [WRN] Health check failed: Reduced data availability: 11 pgs inactive, 37 pgs peering (PG_AVAILABILITY)
2018-11-21 10:08:44.946242 mon.ceph04 mon.0 192.168.101.92:6789/0 10431 : cluster [WRN] Health check failed: Degraded data redundancy: 230/1716 objects degraded (13.403%), 1 pg degraded (PG_DEGRADED)
2018-11-21 10:08:50.883625 mon.ceph04 mon.0 192.168.101.92:6789/0 10433 : cluster [WRN] Health check update: 40/1716 objects misplaced (2.331%) (OBJECT_MISPLACED)
2018-11-21 10:08:50.883684 mon.ceph04 mon.0 192.168.101.92:6789/0 10434 : cluster [WRN] Health check update: Degraded data redundancy: 7204/1716 objects degraded (419.814%), 83 pgs degraded (PG_DEGRADED)
2018-11-21 10:08:50.883719 mon.ceph04 mon.0 192.168.101.92:6789/0 10435 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 12 pgs inactive, 22 pgs peering)
2018-11-21 10:08:43.112896 osd.8 osd.8 192.168.101.156:6805/2361542 1 : cluster [WRN] failed to encode map e315 with expected crc
2018-11-21 10:08:57.390534 mon.ceph04 mon.0 192.168.101.92:6789/0 10436 : cluster [WRN] Health check update: Degraded data redundancy: 7001/1716 objects degraded (407.984%), 79 pgs degraded (PG_DEGRADED)
2018-11-21 10:09:00.891305 mon.ceph04 mon.0 192.168.101.92:6789/0 10437 : cluster [WRN] Health check update: 56/1716 objects misplaced (3.263%) (OBJECT_MISPLACED)
2018-11-21 10:09:02.391144 mon.ceph04 mon.0 192.168.101.92:6789/0 10438 : cluster [WRN] Health check update: Degraded data redundancy: 6413/1716 objects degraded (373.718%), 77 pgs degraded (PG_DEGRADED)
2018-11-21 10:09:06.897229 mon.ceph04 mon.0 192.168.101.92:6789/0 10441 : cluster [WRN] Health check update: 55/1716 objects misplaced (3.205%) (OBJECT_MISPLACED)
2018-11-21 10:09:07.391932 mon.ceph04 mon.0 192.168.101.92:6789/0 10442 : cluster [WRN] Health check update: Degraded data redundancy: 5533/1716 objects degraded (322.436%), 71 pgs degraded (PG_DEGRADED)
2018-11-21 10:09:12.392621 mon.ceph04 mon.0 192.168.101.92:6789/0 10443 : cluster [WRN] Health check update: Degraded data redundancy: 5499/1716 objects degraded (320.455%), 69 pgs degraded (PG_DEGRADED)

until finally

2018-11-21 10:11:07.407294 mon.ceph04 mon.0 192.168.101.92:6789/0 10495 : cluster [WRN] Health check update: 17/1716 objects misplaced (0.991%) (OBJECT_MISPLACED)
2018-11-21 10:11:07.507613 mon.ceph04 mon.0 192.168.101.92:6789/0 10496 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 1/1716 objects degraded (0.058%), 1 pg degraded)
2018-11-21 10:11:12.407743 mon.ceph04 mon.0 192.168.101.92:6789/0 10497 : cluster [WRN] Health check update: 13/1716 objects misplaced (0.758%) (OBJECT_MISPLACED)
2018-11-21 10:11:17.408178 mon.ceph04 mon.0 192.168.101.92:6789/0 10500 : cluster [WRN] Health check update: 10/1716 objects misplaced (0.583%) (OBJECT_MISPLACED)
2018-11-21 10:11:25.406556 mon.ceph04 mon.0 192.168.101.92:6789/0 10501 : cluster [WRN] Health check update: 4/1716 objects misplaced (0.233%) (OBJECT_MISPLACED)
2018-11-21 10:11:31.016869 mon.ceph04 mon.0 192.168.101.92:6789/0 10502 : cluster [INF] Health check cleared: OBJECT_MISPLACED (was: 1/1716 objects misplaced (0.058%))
2018-11-21 10:11:31.016936 mon.ceph04 mon.0 192.168.101.92:6789/0 10503 : cluster [INF] Cluster is now healthy

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Secure way to wipe a Ceph cluster

2018-07-27 Thread Robert Sander
Hi,

On 27.07.2018 09:00, Christopher Kunz wrote:
> 
> as part of deprovisioning customers, we regularly have the task of
> wiping their Ceph clusters. Is there a certifiable, GDPR compliant way
> to do so without physically shredding the disks?

In the past I have used DBAN from https://dban.org/, but they seem to
follow a more commercial business model now.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] ceph cluster monitoring tool

2018-07-24 Thread Robert Sander
On 24.07.2018 07:02, Satish Patel wrote:
> My 5 node ceph cluster is ready for production, now i am looking for
> good monitoring tool (Open source), what majority of folks using in
> their production?

Some people already use Prometheus and the exporter from the Ceph Mgr.
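
That exporter is a manager module and is switched on with:

ceph mgr module enable prometheus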

Others, like me, use more traditional monitoring systems. I have written
a Ceph plugin for the Check_MK monitoring system:

https://github.com/HeinleinSupport/check_mk/tree/master/ceph

Caution: It will not scale to hundreds of OSDs as it invokes the Ceph
CLI tools to gather monitoring data on every node. This takes some time.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] active+clean+inconsistent PGs after upgrade to 12.2.7

2018-07-19 Thread Robert Sander
On 19.07.2018 11:15, Ronny Aasen wrote:

> Did you upgrade from 12.2.5 or 12.2.6 ?

Yes.

> sounds like you hit the reason for the 12.2.7 release
> 
> read : https://ceph.com/releases/12-2-7-luminous-released/
> 
> there should come features in 12.2.8 that can deal with the "objects are 
> in sync but checksums are wrong" scenario.

I already read that before the upgrade but did not expect to be
affected by the bug.

The pools with the inconsistent PGs only have RBDs stored, no CephFS
or RGW data.

I have restarted the OSDs with "osd skip data digest = true" as a "ceph
tell" is not able to inject this argument into the running processes.

Let's see if this works out.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





[ceph-users] active+clean+inconsistent PGs after upgrade to 12.2.7

2018-07-19 Thread Robert Sander
Hi,

just a quick warning: We currently see active+clean+inconsistent PGs on
two clusters after upgrading to 12.2.7.

I created http://tracker.ceph.com/issues/24994

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin





Re: [ceph-users] Planning all flash cluster

2018-06-20 Thread Robert Sander
On 20.06.2018 13:58, Nick A wrote:

> We'll probably add another 2 OSD drives per month per node until full
> (24 SSD's per node), at which point, more nodes.

I would add more nodes earlier to achieve better overall performance.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Open-sourcing GRNET's Ceph-related tooling

2018-06-05 Thread Robert Sander
Hi,

I just saw this announcement and wanted to "advertise" our Check_MK
plugin for Ceph:

https://github.com/HeinleinSupport/check_mk/tree/master/ceph

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-06-05 Thread Robert Sander
Hi,

On 27.05.2018 01:48, c...@elchaka.de wrote:
> 
> Very interested to the Slides/vids.

Slides are now available:
https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Ceph MeetUp Berlin – May 28

2018-05-22 Thread Robert Sander
On 19.05.2018 00:16, Gregory Farnum wrote:
> Is there any chance of sharing those slides when the meetup has
> finished? It sounds interesting! :)

We usually put a link to the slides on the MeetUp page.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





[ceph-users] Ceph MeetUp Berlin – May 28

2018-05-18 Thread Robert Sander
Hi,

we are organizing a bi-monthly meetup in Berlin, Germany and invite any
interested party to join us for the next one on May 28:

https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxhblc/

The presented topic is "High available (active/active) NFS and CIFS
exports upon CephFS".

Kindest Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] split brain case

2018-03-29 Thread Robert Sander
On 29.03.2018 10:25, ST Wong (ITSC) wrote:

> While servers in each building have alternate uplinks.   What will
> happen in case the link between the buildings is broken (application
> servers in each server room will continue to write to OSDs in the same
> room) ?

The side with the smaller number of monitors will stop working.
Applications will no longer be able to read or write, as the monitors
still reachable in their network will have no quorum and refuse to hand
out the cluster map. With five monitors split 3/2, for example, the room
with only two of them loses quorum.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] cephfs performance issue

2018-03-29 Thread Robert Sander
On 29.03.2018 09:50, ouyangxu wrote:

> I'm using Ceph 12.2.4 with CentOS 7.4, and trying to use cephfs for
> MariaDB deployment,

Don't do this.
As the old saying goes: If it hurts, stop doing it.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Upgrading ceph and mapped rbds

2018-03-29 Thread Robert Sander
On 28.03.2018 11:36, Götz Reinicke wrote:

> My question is: How to proceed with the serves which map the rbds?

Do you intend to upgrade the kernels on these RBD clients acting as NFS
servers?

If so you have to plan a reboot anyway. If not, nothing changes.

Or are you using qemu+rbd in userspace for VMs? Then the VMs have to be
restarted to use a newer librbd.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





[ceph-users] Berlin Ceph MeetUp March 26 - openATTIC

2018-03-16 Thread Robert Sander
Hi,

I am happy to announce our next meetup on March 26, we will have a talk
about openATTIC presented by Jan from SuSE.

Please RSVP at https://www.meetup.com/Ceph-Berlin/events/qbpxrhyxfbjc/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-05 Thread Robert Sander
On 05.03.2018 00:26, Adrian Saul wrote:
>  
> 
> We are using Ceph+RBD+NFS under pacemaker for VMware.  We are doing
> iSCSI using SCST but have not used it against VMware, just Solaris and
> Hyper-V.
> 
> 
> It generally works and performs well enough – the biggest issues are the
> clustering for iSCSI ALUA support and NFS failover, most of which we
> have developed in house – we still have not quite got that right yet.

You should look at setting up a Samba CTDB cluster with CephFS as
backend. This can also be used with NFS including NFS failover.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Cephfs NFS failover

2017-12-21 Thread Robert Sander
On 20.12.2017 18:45, nigel davies wrote:
> Hay all
> 
> Can any one advise on how it can do this.

You can use ctdb for that and run an active/active NFS cluster:

https://wiki.samba.org/index.php/Setting_up_CTDB_for_Clustered_NFS

The cluster filesystem can be a CephFS. This also works with Samba, i.e.
you get an unlimited fileserver.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





[ceph-users] Will you accept my invitation and come to Ceph Berlin too?

2017-09-07 Thread Robert Sander

Ceph Berlin


Join Robert Sander and 406 other Cephalopods in Berlin. Stay up to date
on new events in your area.

This is a group for anyone interested in Ceph. All skill levels are
welcome. There is a growing user community around this Free Software
distributed storage. Participants are exp...

--

Accept invitation

--

---
This message was sent by Meetup on behalf of Robert Sander from Ceph Berlin.


Questions? Send us an e-mail at supp...@meetup.com

I no longer wish to receive this kind of e-mail.

Meetup Inc., POB 4668 #37895 New York NY USA 10163


[ceph-users] Ceph MeetUp Berlin on July 17

2017-07-10 Thread Robert Sander
Hi,

https://www.meetup.com/de-DE/Ceph-Berlin/events/240812906/

Come join us for an introduction into Ceph and DESY including a tour of
their data center and photo injector test facility.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





[ceph-users] List-Archive unavailable

2017-04-01 Thread Robert Sander
Hi,

the list archive at http://lists.ceph.com/pipermail/ceph-users-ceph.com/
is currently not available. Anybody knows what is going on there?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] How to hide internal ip on ceph mount

2017-03-01 Thread Robert Sander
On 01.03.2017 10:54, gjprabu wrote:
> Hi,
> 
> We tried to use a host name instead of an IP address, but the mounted
> partition shows only the address. How can we show the host name instead
> of the IP address?

What is the security gain you try to achieve by hiding the IPs?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] How to hide internal ip on ceph mount

2017-02-28 Thread Robert Sander
On 28.02.2017 07:19, gjprabu wrote:

>  How can we hide the internal IP addresses on CephFS mounts? For
> security reasons we need to hide the IP addresses. Also, we are running
> docker containers on the base machine, which will show the partition
> details there. Kindly let us know if there is any solution for this.
> 
> 192.168.xxx.xxx:6789,192.168.xxx.xxx:6789,192.168.xxx.xxx:6789:/
> ceph  6.4T  2.0T  4.5T  31% /home/

If this is needed as a "security measure" you should not mount CephFS on
this host in the first place.

Only mount CephFS on hosts you trust (especially the root user) as the
Filesystem uses the local accounts for access control.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] CephFS root squash?

2017-02-10 Thread Robert Sander
On 09.02.2017 20:11, Jim Kilborn wrote:

> I am trying to figure out how to allow my users to have sudo on their 
> workstation, but not have that root access to the ceph kernel mounted volume.

I do not think that CephFS is meant to be mounted on human users'
workstations.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-12-16 Thread Robert Sander
On 15.12.2016 16:49, Bjoern Laessig wrote:

> What does your Cluster do? Where is your data. What happens now?

You could configure the interfaces between the nodes as point-to-point
links and run OSPF on them. The cluster nodes would then have their node
IP on a dummy interface. OSPF would sort out the routing.

If a link between two nodes goes down the traffic is routed via the
third node.
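
A sketch of that layout with FRR/Quagga (interface names and addresses
are made-up examples):

ip link add dummy0 type dummy
ip addr add 10.10.10.1/32 dev dummy0   # the node IP

interface enp1s0
 ip ospf network point-to-point
interface enp2s0
 ip ospf network point-to-point
router ospf
 ospf router-id 10.10.10.1
 network 10.10.10.1/32 area 0
 network 192.168.1.0/30 area 0
 network 192.168.2.0/30 area 0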

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] hammer on xenial

2016-11-16 Thread Robert Sander
On 16.11.2016 09:05, Steffen Weißgerber wrote:
> Hello,
> 
> we started upgrading ubuntu on our ceph nodes to Xenial and had to see that 
> during
> the upgrade ceph automatically was upgraded from hammer to jewel also.
> 
> Because we don't want to upgrade ceph and the OS at the same time we
> deinstalled the ceph jewel components, reactivated
> /etc/apt/sources.list.d/ceph.list with
> 
> deb http://ceph.com/debian-hammer/ xenial main
> 
> and pinned the ceph release to install in /etc/apt/preferences/ceph.pref

After that process you may still have the Ubuntu trusty packages for
Ceph Hammer installed.

Do an "apt-get install --reinstall ceph.*" on your node after the
Upgrade. This should pull the Ubuntu xenial packages and install them.
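
The pin mentioned above could look roughly like this (a sketch; the
exact version pattern depends on the hammer packages in the repository):

# /etc/apt/preferences.d/ceph.pref
Package: ceph*
Pin: version 0.94*
Pin-Priority: 1001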

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Help with systemd

2016-08-23 Thread Robert Sander
On 22.08.2016 20:16, K.C. Wong wrote:
> Is there a way
> to force a 'remote-fs' reclassification?

Have you tried adding _netdev to the fstab options?
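
For a CephFS kernel mount such an fstab entry would look roughly like
this (a sketch; addresses and paths are examples):

192.168.0.1:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 2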

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] 2TB useable - small business - help appreciated

2016-08-22 Thread Robert Sander
On 01.08.2016 08:05, Christian Balzer wrote:

>> With all the info provided is DRBD Pacemaker HA Cluster or even
>> GlusterFS a better option?

Yes.

>>
> No GlusterFS support for VMware as well last time I checked, only
> interfaces via an additional NFS head again, so no advantage here.

Last time I checked GlusterFS had a builtin NFS server.

Kindest Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Backfilling caused RBD corruption on Hammer?

2016-05-08 Thread Robert Sander
On 29.04.2016 17:11, Robert Sander wrote:

> As the backfilling with the full weight of the new OSDs would have run
> for more than 28h and no VM was usable we re-weighted the new OSDs to
> 0.1. The backfilling finished after about 2h and we planned to increase
> the weight slowly when suddenly the RBD was corrupted.

After going through the logfiles I may have found the culprit.

Is it possible to lose data if "ceph osd crush add" is called on the
same OSD multiple times?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: 
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin





Re: [ceph-users] Backfilling caused RBD corruption on Hammer?

2016-04-29 Thread Robert Sander
Hi,

all OSDs are running 0.94.5 as the new ones were added to the existing servers.

No cache tiering is involved.

We observed many "slow request" warnings during the backfill.

As the backfilling with the full weight of the new OSDs would have run
for more than 28h and no VM was usable, we re-weighted the new OSDs to
0.1. The backfilling finished after about 2h and we planned to increase
the weight slowly when suddenly the RBD was corrupted.
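
The re-weighting itself was done per OSD with commands of this form (the
ID is an example):

ceph osd crush reweight osd.30 0.1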

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin


[ceph-users] Backfilling caused RBD corruption on Hammer?

2016-04-29 Thread Robert Sander
Hi,

yesterday we ran into a strange bug / mysterious issue with a Hammer
0.94.5 storage cluster.

We added OSDs and the cluster started the backfilling. Suddenly one of
the running VMs complained that it lost a partition in a 2TB RBD.

After resetting the VM it could not boot any more as the RBD has no
partition info at the start. :(

It looks like the data in the objects has been changed somehow.

How is that possible? Any ideas?

The VM was restored from a backup but we would still like to know how
this happened and maybe restore some data that was not backed up before
the crash.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph MeetUp Berlin on November 23

2015-11-09 Thread Robert Sander
Hi,

I would like to invite you to our next MeetUp in Berlin on November 23:

http://www.meetup.com/de/Ceph-Berlin/events/222906642/

Marcel Wallschläger will talk about Ceph in a research environment.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Consulting

2015-09-29 Thread Robert Sander
On 28.09.2015 20:47, Robert LeBlanc wrote:
> Ceph consulting was provided by Inktank[1], but the Inktank website is
> down. How do we go about getting consulting services now?

Have a look at the Red Hat site for Ceph:

https://www.redhat.com/en/technologies/storage/ceph

There are also several independent consulting companies which provide
Ceph support.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Storage Cluster on Amazon EC2 across different regions

2015-09-29 Thread Robert Sander
On 29.09.2015 09:54, Raluca Halalai wrote:
> What do you want to prove with such a setup?
> 
> 
> It's for research purposes. We are trying different storage systems in a
> WAN environment.

Then Ceph can be ticked off the list of candidates.
Its purpose is not to be a WAN storage system.

It would be different if you set up local Ceph clusters and have Rados
Gateways (S3 interfaces) interact with them (geo-replication with the
radosgw agent).
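
A rough sketch of the latter (the sync configuration file holds the zone
endpoints and access keys of both sides; see the federated gateway
documentation for the exact format):

radosgw-agent -c region-data-sync.conf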

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Storage Cluster on Amazon EC2 across different regions

2015-09-29 Thread Robert Sander
On 28.09.2015 19:55, Raluca Halalai wrote:

> I am trying to deploy a Ceph Storage Cluster on Amazon EC2, in different
> regions.

Don't do this.

What do you want to prove with such a setup?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] [Ceph-community] Ceph MeetUp Berlin Sept 28

2015-09-08 Thread Robert Sander
Hi,

the next meetup in Berlin takes place on September 28 at 18:00 CEST.

Please RSVP at http://www.meetup.com/de/Ceph-Berlin/events/222906639/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
Ceph-community mailing list
ceph-commun...@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-community-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph MeetUp Berlin on March 23

2015-02-25 Thread Robert Sander
Hi,

I would like to invite you to our next MeetUp in Berlin on March 23:
http://www.meetup.com/Ceph-Berlin/events/219958751/

Stephan Seitz will talk about HA-iSCSI with Ceph.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] two mount points, two diffrent data

2015-01-16 Thread Robert Sander
On 14.01.2015 14:20, Rafał Michalak wrote:
 
 #node1
 mount /dev/rbd/rbd/test /mnt
 
 #node2
 mount /dev/rbd/rbd/test /mnt

If you want to mount a filesystem on one block device onto multiple
clients, the filesystem has to be clustered, e.g. OCFS2.

A normal local filesystem like ext4 or XFS is not aware that other
clients may alter the underlying block device. This is a sure recipe
for data corruption and loss.

Maybe you should look at CephFS.
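
As a sketch (monitor address and credentials are placeholders), the same
tree could then be mounted on both nodes with the kernel CephFS client:

# on node1 and node2
mount -t ceph 192.168.0.1:6789:/ /mnt -o name=admin,secretfile=/etc/ceph/admin.secret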

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph MeetUp Berlin

2015-01-12 Thread Robert Sander
Hi,

the next MeetUp in Berlin takes place on January 26 at 18:00 CET.

Our host is Deutsche Telekom, they will hold a short presentation about
their OpenStack / CEPH based production system.

Please RSVP at http://www.meetup.com/Ceph-Berlin/events/218939774/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Block device and Trim/Discard

2014-12-12 Thread Robert Sander
On 12.12.2014 12:48, Max Power wrote:

 It would be great to shrink the used space. Is there a way to achieve this?
 Or have I done something wrong? In a professional environment you may be
 able to live with filesystems that only grow. But on my small home-cluster
 this really is a problem.

As Wido already mentioned, the kernel RBD does not support discard.

When using qemu+rbd you cannot use the virtio driver as this also does
not support discard. My best experience is with the virtual SATA driver
and the options cache=writeback and discard=on.
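
A minimal sketch of such a qemu invocation (pool and image names are
examples, the rest of the VM definition is omitted):

qemu-system-x86_64 ... \
  -drive file=rbd:rbd/vm-disk,format=raw,if=none,id=sata0,cache=writeback,discard=on \
  -device ich9-ahci,id=ahci \
  -device ide-hd,drive=sata0,bus=ahci.0

Running fstrim (or mounting with -o discard) inside the guest then
releases the freed blocks back to the cluster.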

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Monitoring with check_MK

2014-11-19 Thread Robert Sander
Hi,

On 14.11.2014 11:38, Nick Fisk wrote:

 I've just been testing your ceph check and I have made a small modification 
 to allow it to adjust itself to suit the autoscaling of the units Ceph 
 outputs.

Thanks for the feedback. I took your idea, added PB and KB, and pushed
it to github again:
https://github.com/HeinleinSupport/check_mk/tree/c5f7374f1c4c6461a265e16d69d0c6a477a5a73e/ceph/checks

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Monitoring with check_MK

2014-11-07 Thread Robert Sander
Hi,

I just created a simple check_MK agent plugin and accompanying checks to
monitor the overall health status and pool usage with the check_MK / OMD
monitoring system:

https://github.com/HeinleinSupport/check_mk/tree/master/ceph
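
The agent plugin itself is essentially a thin wrapper that prints check_MK
section headers followed by the output of the ceph commands, something like
this sketch (the section names here are illustrative, the real ones are in
the repository):

#!/bin/bash
echo '<<<ceph_status>>>'
ceph health
echo '<<<ceph_df>>>'
ceph df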

One question remains: What is the real unit of the ceph df output?

It shows used GB.
Are these GiB (gibibyte, 2^30 bytes) or SI GB (gigabyte, 10^9 bytes)?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Typical 10GbE latency

2014-11-06 Thread Robert Sander
Hi,

2 LACP bonded Intel Corporation Ethernet 10G 2P X520 Adapters, no jumbo
frames, here:

rtt min/avg/max/mdev = 0.141/0.207/0.313/0.040 ms
rtt min/avg/max/mdev = 0.124/0.223/0.289/0.044 ms
rtt min/avg/max/mdev = 0.302/0.378/0.460/0.038 ms
rtt min/avg/max/mdev = 0.282/0.389/0.473/0.035 ms

All hosts on the same stacked pair of Dell N4032F switches.
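
For reference, each line above is the summary of a plain ICMP round-trip
test against one of the other hosts, i.e. the output of something like
this (the address is an example):

ping -c 100 -q 10.0.0.2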

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph status 104 active+degraded+remapped 88 creating+incomplete

2014-10-31 Thread Robert Sander
On 29.10.2014 18:29, Thomas Alrin wrote:
 Hi all,
 I'm new to ceph. What is wrong with this ceph cluster? How can I make the
 status change to HEALTH_OK? Please help

With the current default pool size of 3 and the default crush rule you
need at least 3 OSDs on separate nodes for a new ceph cluster to start.

With 2 OSDs on one node you need to change the pool replica size and the
crush rule.
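
A minimal sketch of both changes (pool and file names are examples):

ceph osd pool set rbd size 2
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# in crushmap.txt change the rule's failure domain:
#   step chooseleaf firstn 0 type host   ->   ... type osd
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new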

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph MeetUp Berlin: Performance

2014-10-29 Thread Robert Sander
Hi,

the next Ceph MeetUp in Berlin is scheduled for November 24.

Lars Marowsky-Brée of SuSE will talk about Ceph performance.

Please RSVP at http://www.meetup.com/Ceph-Berlin/events/215147892/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Basic Ceph questions

2014-10-13 Thread Robert Sander
On 10.10.2014 02:19, Marcus White wrote:
 
 For VMs, I am trying to visualize how the RBD device would be exposed.
 Where does the driver live exactly? If its exposed via libvirt and
 QEMU, does the kernel driver run in the host OS, and communicate with
 a backend Ceph cluster? If yes, does libRBD provide a target (SCSI?)
 interface which the kernel driver connects to? Trying to visualize
 what the stack looks like, and the flow of IOs for block devices.

For VMs, the RBD is not exposed as a host block device at all. The
communication happens entirely in userland on the host: qemu uses
librados directly, no kernel driver is involved.

The VM guest kernel sees a virtual block device presented by qemu.
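
You can observe the same userland path with qemu-img, which talks to the
cluster in exactly the same way (pool and image names are examples):

qemu-img info rbd:rbd/vm-disk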

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Basic Ceph questions

2014-10-13 Thread Robert Sander
On 13.10.2014 16:47, Marcus White wrote:
 
 1. In what stack is the driver used in that case if QEMU communicates
 directly with librados?

The qemu process communicates directly with the Ceph cluster over the
network. As far as the host kernel is concerned, it is a normal userland
process.

 2. With QEMU-librados I would guess the new kernel targets/LIO would
 not work? They give better performance and lower CPU..

There is no SCSI involved here.

 3. Where is the kernel driver used in that case?..

Nowhere.

 4. In QEMU, is it a SCSI device?

It is any device type you configure for your guest to see. It can even
be an IDE disk.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Berlin Ceph MeetUp: September 22nd, 2014

2014-09-02 Thread Robert Sander
Hi,

the next Berlin Ceph meetup is scheduled for September 22.

http://www.meetup.com/Ceph-Berlin/events/198884162/

Our host Christian will present the Ceph cluster they use for education
at the Berlin College of Further Education for Information Technology
and Medical Equipment Technology http://www.oszimt.de/.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Berlin MeetUp 28.7.

2014-07-24 Thread Robert Sander
Hi,

the next Ceph MeetUp in Berlin, Germany, happens on July 28.

http://www.meetup.com/Ceph-Berlin/events/195107422/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Linux: Akademie - Support - Hosting
http://www.heinlein-support.de

Tel: 030-405051-43
Fax: 030-405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein  -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Does CEPH rely on any multicasting?

2014-05-16 Thread Robert Sander
On 16.05.2014 10:49, Dietmar Maurer wrote:
 Recall that Ceph already incorporates its own cluster-management framework,
 and the various Ceph daemons already operate in a clustered manner.
 
 Sure. But I guess it could reduce 'ceph' code size if you use an existing
 framework.

Ceph has nothing to do with an HA cluster based on Pacemaker.
It has a completely different logic built in.
The only similarity is that both use a quorum algorithm to detect split
brain situations.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] raid levels (Information needed)

2014-05-16 Thread Robert Sander
On 16.05.2014 11:42, yalla.gnan.ku...@accenture.com wrote:
 Hi Jerker,
 
 Thanks for the reply.
 
 The link you posted describes only object storage. I need information on
 RAID level implementation for block devices.
 

There is no RAID level for RBDs. These are virtual block devices and
are mapped to objects in the Ceph cluster.
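
Redundancy is instead configured per pool via the replication factor,
e.g. for the default rbd pool:

ceph osd pool get rbd size
ceph osd pool set rbd size 3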

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Berlin MeetUp

2014-05-16 Thread Robert Sander
Hi,

we are currently planning the next Ceph MeetUp in Berlin, Germany, for
May 26 at 6 pm.

If you want to participate please head over to
http://www.meetup.com/Ceph-Berlin/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Robert Sander
On 30.04.2014 08:18, Cao, Buddy wrote:
 Thanks for your reply, Haomai. There is no /etc/ceph/ceph.conf on any ceph
 nodes, which is why I raised the question at the beginning.

ceph-deploy creates the ceph.conf file in the local working directory.
You can distribute that with ceph-deploy admin.
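
A minimal sketch (hostnames are examples):

cd my-cluster      # the directory where "ceph-deploy new" was run
ceph-deploy admin node1 node2 node3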

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy - how to get in-memory ceph.conf

2014-04-30 Thread Robert Sander
On 30.04.2014 09:38, Cao, Buddy wrote:
 Thanks Robert. The auto-created ceph.conf file in the local working
 directory is too simple, almost nothing inside it. How do I know which
 osd.x were created by ceph-deploy, and how do I populate that information
 into ceph.conf?

This information is not necessary any more.

The important pieces of information are the monitors' addresses and the
network addresses of the public and cluster networks, plus the cluster
fsid.
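
A minimal ceph.conf therefore looks something like this (all values are
examples):

[global]
fsid = 0e6fa994-34e6-43b9-8dd9-1e6c7b38d002
mon initial members = node1, node2, node3
mon host = 192.168.0.1,192.168.0.2,192.168.0.3
public network = 192.168.0.0/24
cluster network = 10.0.1.0/24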

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Backup Restore?

2014-04-02 Thread Robert Sander
Hi,

what are the options to consistently back up and restore
data out of a Ceph cluster?

- RBDs can be snapshotted (see the sketch after this list).
- Data on RBDs used inside VMs can be backed up using tools from the guest.
- CephFS data can be backed up using rsync or similar tools.
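
As a sketch for the first point (pool, image, and snapshot names are
examples):

rbd snap create rbd/vm-disk@backup-20140402
rbd export rbd/vm-disk@backup-20140402 /backup/vm-disk-20140402.img
# incremental follow-ups are possible with rbd export-diff --from-snap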

What about object data in other pools?

There are two scenarios where a backup is needed:

- disaster recovery, i.e. the whole cluster goes nuts
- single item restore, because PEBKAC or application error

Is there any work in progress to cover these?

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: 
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Largest Production Ceph Cluster

2014-04-01 Thread Robert Sander
On 01.04.2014 13:38, Karol Kozubal wrote:

 I am curious to know what is the largest known ceph production deployment?

I would assume it is the CERN installation.

Have a look at the slides from Frankfurt Ceph Day:

http://www.slideshare.net/Inktank_Ceph/scaling-ceph-at-cern

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Second Ceph Berlin MeetUp

2014-03-20 Thread Robert Sander
Hi,

the second meetup takes place on March 24.

For more details please have a look at
http://www.meetup.com/Ceph-Berlin/events/163029162/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Advises for a new Ceph cluster

2014-02-19 Thread Robert Sander
On 18.02.2014 21:41, shacky wrote:
 Hi.
 
 I have to create a new Ceph cluster with 3 nodes with 4 hard drives in
 RAID5 (12Tb available per node).

Drop RAID5 and create one OSD per hard disk.
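
With ceph-deploy that is one OSD per device, e.g. (host and device names
are examples):

ceph-deploy osd create node1:sdb node1:sdc node1:sdd node1:sde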

If you need to store small files, consider how your application
communicates with the storage cluster. Does your application need a
POSIX filesystem? Then Ceph may not be your first choice.

Is your application able to talk to S3 or natively to RADOS? Then Ceph
is good for you.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] CephFS to provide distributed access read/write

2014-02-19 Thread Robert Sander
On 19.02.2014 14:55, Listas@Adminlinux wrote:

 Is CephFS already stable enough to provide simultaneous access to data in
 a production environment?

It may be stable, but I think the performance is not anywhere near what
you need for 50K accounts.

Have you looked into using dsync between your dovecot instances?
http://wiki2.dovecot.org/Replication

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Ceph Community Berlin founded

2014-01-30 Thread Robert Sander
Hi,

The inaugural meeting of the Ceph Berlin community took place on Monday,
January 27th. A total of 14 Cephalopods found their way to the
Heinlein Offices. That was a great response for a group that was formed
just under three weeks ago. And we even have around 30 members right now.

In the organizational part we agreed to meet every two months,
preferably on Mondays at 18:00 in rotating locations. We want
to have a short presentation (a Ceph story) and then head to a nearby
restaurant for dinner and drinks.

After that was settled, we heard a great talk from Christian "Theuni"
Theune of http://gocept.com and http://flyingcircus.io/ showing their
experiences with different storage systems and their current migration
from iSCSI to Ceph for their KVM-Cluster.

At a quarter past eight the pack was hungry and we headed to a nearby
Spanish restaurant for tapas and more talking until about 22:30. This
was a remarkable evening.

If you want to join us head over to http://www.meetup.com/Ceph-Berlin/

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] lvm for a quick ceph lab cluster test

2013-08-27 Thread Robert Sander
On 26.08.2013 23:07, Samuel Just wrote:
 Seems reasonable to me.  I'm not sure I've heard anything about using
 LVM under ceph.  Let us know how it goes!

We are currently using it on a test cluster distributed on our desktops.
Loïc Dachary visited us and wrote a small article:
http://dachary.org/?p=2269

One thing with LVM volumes is that you have to create the filesystem
(mkfs.xfs) manually, mount it somewhere, and then point ceph-deploy to
that directory. It then creates a symlink under /var/lib/ceph/osd.
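
A sketch of the whole sequence (volume group, size, and paths are
examples):

lvcreate -L 100G -n osd0 vg0
mkfs.xfs /dev/vg0/osd0
mkdir -p /srv/ceph/osd0
mount /dev/vg0/osd0 /srv/ceph/osd0
ceph-deploy osd prepare node1:/srv/ceph/osd0
ceph-deploy osd activate node1:/srv/ceph/osd0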

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin



signature.asc
Description: OpenPGP digital signature
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cluster name different from ceph

2013-07-04 Thread Robert Sander
On 04.07.2013 03:12, Gregory Farnum wrote:
 Hmm, yeah. What documentation are you looking at exactly? I don't think
 we test or have built a lot of the non-ceph handling required
 throughout, though with careful setups it should be possible.

I am referring to this section:

http://ceph.com/docs/master/rados/deployment/ceph-deploy-new/#naming-a-cluster

I tried to add --cluster othername to every ceph-deploy invocation and I even
set CEPH_ARGS=--cluster othername. It works most of the time but there are
some pieces missing.
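
For reference, the invocations look like this (cluster and host names are
examples):

export CEPH_ARGS="--cluster othername"
ceph-deploy --cluster othername new node1 node2 node3
ceph --cluster othername status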

In particular, the init.d script from the Debian package seems unable to
start any service with a cluster name different from ceph.

I already created the issues
http://tracker.ceph.com/issues/5483 and
http://tracker.ceph.com/issues/5499
but I wanted to inform the general public via this mailing list.

BTW: This is all with the stable version 0.61.4.

Kindest Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG: 
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cluster name different from ceph

2013-07-03 Thread Robert Sander
Hi,

The documentation states that cluster names that differ from ceph are
possible and should be used when running multiple clusters on the same
hardware.

But it seems that the name ceph is hardcoded in all the tools (especially
ceph-deploy and the init scripts).

I tried to set up a cluster with a different name and ran into this
problem at nearly every step.

It looks like quite some work has to be done before this really works.
IMHO the documentation should be changed to state that only ceph should
be used as the cluster name if everything is to work flawlessly.

Kindest Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Clustered FS for RBD

2013-06-05 Thread Robert Sander
On 04.06.2013 20:03, Gandalf Corvotempesta wrote:
 Any experiences with clustered FS on top of RBD devices?
 Which FS do you suggest for more or less 10,000 mailboxes accessed by 10
 dovecot nodes?

There is an ongoing effort to implement librados storage in Dovecot,
AFAIK. Maybe it's worth looking for this on the Dovecot devs mailing list.

Regards
-- 
Robert Sander
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin

http://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Zwangsangaben lt. §35a GmbHG:
HRB 93818 B / Amtsgericht Berlin-Charlottenburg,
Geschäftsführer: Peer Heinlein -- Sitz: Berlin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com