Re: [ceph-users] Ubuntu 18.04 - Mimic - Nautilus

2019-07-10 Thread Kai Wagner
On 10.07.19 20:46, Reed Dier wrote:
> It does not appear that that page has been updated in a while.

I've addressed that already - someone just needs to merge it:

https://github.com/ceph/ceph/pull/28643

-- 
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)






Re: [ceph-users] Ceph dovecot

2019-05-23 Thread Kai Wagner
Hi Marc,

let me add Danny so he's aware of your request.

Kai

On 23.05.19 12:13, Wido den Hollander wrote:
>
> On 5/23/19 12:02 PM, Marc Roos wrote:
>> Sorry for not waiting until it is published on the ceph website but, 
>> anyone attended this talk? Is it production ready? 
>>
> Danny from Deutsche Telekom can answer this better, but no, it's not
> production ready.
>
> It seems it's more challenging to get it working especially on the scale
> of Telekom. (Millions of mailboxes).
>
> Wido
>
>> https://cephalocon2019.sched.com/event/M7j8
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
-- 
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 21284 (AG Nürnberg)






Re: [ceph-users] 14.1.0, No dashboard module

2019-03-06 Thread Kai Wagner
Hi all,

I think making this change really late in the game just results in confusion.

I would be in favor of making the ceph-mgr-dashboard package a dependency
of ceph-mgr, so that people only need to enable the dashboard without
having to install another package separately. This way we could also keep
the current documentation and wouldn't need to update everything.

My thought is that we want to encourage people to use the dashboard, but
without the dependency this will achieve the opposite.

Note: shouldn't we also rename the ceph-mgr-dashboard package to just
ceph-dashboard, as that is now its official name?
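
For anyone who hits Ashley's ENOENT error in the meantime, the workaround is
simply to install the separate package on every mgr host and restart the mgr
before enabling the module. A rough sketch only - assuming Debian/Ubuntu
packages as in Ashley's setup, adjust the package manager for your distro:

# on every host that runs a ceph-mgr daemon
sudo apt install ceph-mgr-dashboard
sudo systemctl restart ceph-mgr.target   # so the mgr picks up the new module
ceph mgr module enable dashboard         # should now work without --force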

Thoughts?

Kai

On 3/5/19 3:37 PM, Laura Paduano wrote:
> Hi Ashley,
>
> thanks for pointing this out! I've created a tracker issue [1] and we
> will take care of updating the documentation accordingly.
>
> Thanks,
> Laura
>
>
> [1] https://tracker.ceph.com/issues/38584
>
> On 05.03.19 10:16, Ashley Merrick wrote:
>> As a follow up seems the dashboard is a separate package not
>> installed by default called "ceph-mgr-dashboard"
>>
>> Seems this is currently missing off the RC notes, and the master doc
>> for ceph dashboard.
>>
>> Cheers
>>
>> On Tue, Mar 5, 2019 at 10:54 AM Ashley Merrick
>> <singap...@amerrick.co.uk> wrote:
>>
>> I have just spun up a small test environment to give the first RC
>> a test run.
>>
>> Have managed to get a MON / MGR running fine on latest .dev
>> packages on Ubuntu 18.04, however when I go to try enable the
>> dashboard I get the following error.
>>
>> ceph mgr module enable dashboard 
>> Error ENOENT: all mgr daemons do not support module 'dashboard',
>> pass --force to force enablement
>>
>> Trying with --force does nothing; checking the mgr log during
>> boot, the dashboard plugin is not listed among the available
>> plugins.
>>
>> I had a look through the tracker and commits since the RC1,
>> however can't see this already mentioned, not sure if this is
>> expected for RC1 or a bug.
>>
>> Thanks,
>> Ashley
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
> -- 
> SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
> HRB 21284 (AG Nürnberg)
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)





Re: [ceph-users] Ceph Call For Papers coordination pad

2019-01-15 Thread Kai Wagner
Hi all,

just a friendly reminder to use this pad for CfP coordination.

Right now it seems like I'm the only one who submitted something to
Cephalocon and I can't believe that ;-)

https://pad.ceph.com/p/cfp-coordination

Thanks,

Kai

On 5/31/18 1:17 AM, Gregory Farnum wrote:
> Short version: https://pad.ceph.com/p/cfp-coordination is a space for
> you to share talks you've submitted to conferences, if you want to let
> other Ceph community members know what to look for and avoid
> duplicating topics.
>
> Longer version: I and a teammate almost duplicated a talk topic (for
> the upcoming https://mountpoint.io — check it out!) and realized there
> was no established way for us to coordinate this. Other people have
> pointed out similar problems in the past. So, by the power vested in
> me by the power of doing things and having Sage say "that's a good
> idea", I created https://pad.ceph.com/p/cfp-coordination. Use that
> space to coordinate. I've provided a template for conferences around
> talk ideas and actual submissions, but please feel free to jot down
> other notes around those, add new conferences you know about (even if
> you aren't submitting a talk yourself), and generally use that
> etherpad as a community resource.
>
> I'll try to keep it up-to-date as conferences age out, but obviously
> it's only helpful if people actually put stuff there. So go forth and
> write, dear community! :)
> -Greg
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph is now declared stable in Rook v0.9

2018-12-10 Thread Kai Wagner
Congrats to everyone.

Seems like we're getting closer to ponies, rainbows and ice cream for
everyone! ;-)

On 12/11/18 12:15 AM, Mike Perez wrote:
> Hey all,
>  
> Great news, the Rook team has declared Ceph to be stable in v0.9! Great work
> from both communities in collaborating to make this possible.
>
> https://blog.rook.io/rook-v0-9-new-storage-backends-in-town-ab952523ec53
>
> I am planning on demos at the Rook booth by deploying object/block/fs
> in Luminous and upgrading to Mimic while running a simple application.
> More details and schedule hopefully later.
>
> Great way to start KubeCon Seattle soon!
>
-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] mgr/dashboard: backporting Ceph Dashboard v2 to Luminous

2018-08-22 Thread Kai Wagner
On 22.08.2018 20:57, David Turner wrote:
> does it remove any functionality of the previous dashboard?
No, it doesn't. All dashboard_v1 features are integrated into
dashboard_v2 as well.

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Kai Wagner
On 28.06.2018 23:25, Eric Jackson wrote:
> Recently, I learned that this is not necessary when both are on the same 
> device.  The wal for the Bluestore OSD will use the db device when set to 0.
That's good to know. Thanks for the input on this, Eric.
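
For reference, that means a single fast partition per OSD is enough, i.e. the
DB-only variant from the thread below. A sketch only - device and host names
are just the examples from Pardhiv's mail:

ceph-deploy osd create --bluestore --data /dev/sdb --block-db /dev/nvme0n1p2 cephdatahost1
# no --block-wal needed: without a separate WAL device, BlueStore keeps the
# WAL on the block.db partition automatically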

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph Luminous RocksDB vs WalDB?

2018-06-28 Thread Kai Wagner
I'm also not 100% sure, but I think the first one is the right way to go:
the second command only specifies the DB partition but no dedicated WAL
partition.


On 28.06.2018 22:58, Igor Fedotov wrote:
>
> I think the second variant is what you need. But I'm not the guru in
> ceph-deploy so there might be some nuances there...
>
> Anyway the general idea is to have just a single NVME partition (for
> both WAL and DB) per OSD.
>
> Thanks,
>
> Igor
>
>
> On 6/27/2018 11:28 PM, Pardhiv Karri wrote:
>> Thank you Igor for the response.
>>
>> So do I need to use this,
>>
>> ceph-deploy osd create --debug --bluestore --data /dev/sdb
>> --block-wal /dev/nvme0n1p1 --block-db /dev/nvme0n1p2 cephdatahost1
>>
>> or 
>>
>> ceph-deploy osd create --debug --bluestore --data /dev/sdb --block-db
>> /dev/nvme0n1p2 cephdatahost1
>>
>> where /dev/sdb is ssd disk for osd
>> /dev/nvmen0n1p1 is 10G partition
>> /dev/nvme0n1p2 is 25G partition
>>
>>
>> Thanks,
>> Pardhiv K
>>
>> On Wed, Jun 27, 2018 at 9:08 AM Igor Fedotov wrote:
>>
>> Hi Pardhiv,
>>
>> there is no WalDB in Ceph.
>>
>> It's WAL (Write Ahead Log) that is a way to ensure write safety
>> in RocksDB. In other words - that's just a RocksDB subsystem
>> which can use separate volume though.
>>
>> In general For BlueStore/BlueFS one can either allocate separate
>> volumes for WAL and DB or have them on the same volume. The
>> latter is the common option.
>>
>> The separated layout makes sense when you have tiny but
>> super-fast device (for WAL) and less effective (but still fast)
>> larger drive for DB. Not to mention the third one for user data
>>
>> E.g. HDD (user data) + SDD (DB) + NVME  (WAL) is such a layout.
>>
>>
>> So for you case IMO it's optimal to have merged WAL+DB at NVME
>> and data at SSD. Hence no need for separate WAL volume.
>>
>>
>> Regards,
>>
>> Igor
>>
>>
>> On 6/26/2018 10:22 PM, Pardhiv Karri wrote:
>>> Hi,
>>>
>>> I am playing with Ceph Luminous and getting confused information
>>> around usage of WalDB vs RocksDB.
>>>
>>> I have 2TB NVMe drive which I want to use for Wal/Rocks DB and
>>> have 5 2TB SSD's for OSD. 
>>> I am planning to create 5 30GB partitions for RocksDB on NVMe
>>> drive, do I need to create partitions of WalDB also on NVMe
>>> drive or does RocksDB does same work as WalDB plus having
>>> metadata on it? 
>>>
>>> So my question is do I really need to use WalDB along with
>>> RocksDB or having RocksDB only is fine?
>>>
>>> Thanks,
>>> Pardhiv K
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com 
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>>
>> -- 
>> *Pardhiv Karri*
>> "Rise and Rise again untilLAMBSbecome LIONS" 
>>
>>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)





Re: [ceph-users] CentOS Dojo at CERN

2018-06-21 Thread Kai Wagner
On 20.06.2018 17:39, Dan van der Ster wrote:
> And BTW, if you can't make it to this event we're in the early days of
> planning a dedicated Ceph + OpenStack Days at CERN around May/June
> 2019.
> More news on that later...
Will that be during a CERN maintenance window?

*that would raise my interest dramatically :-)*

Kai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-17 Thread Kai Wagner
Great summary, David. Wouldn't this be worth a blog post?


On 17.05.2018 20:36, David Turner wrote:
> By sticking with PG numbers as a base 2 number (1024, 16384, etc) all
> of your PGs will be the same size and easier to balance and manage. 
> What happens when you have a non base 2 number is something like
> this.  Say you have 4 PGs that are all 2GB in size.  If you increase
> pg(p)_num to 6, then you will have 2 PGs that are 2GB and 4 PGs that
> are 1GB as you've split 2 of the PGs into 4 to get to the 6 total.  If
> you increase the pg(p)_num to 8, then all 8 PGs will be 1GB. 
> Depending on how you manage your cluster, that doesn't really matter,
> but for some methods of balancing your cluster, that will greatly
> imbalance things.
>
> This would be a good time to go to a base 2 number.  I think you're
> thinking about Gluster where if you have 4 bricks and you want to
> increase your capacity, going to anything other than a multiple of 4
> (8, 12, 16) kills performance (worse than increasing storage already
> does) and takes longer as it has to weirdly divide the data instead of
> splitting a single brick up to multiple bricks.
>
> As you increase your PGs, do this slowly and in a loop.  I like to
> increase my PGs by 256, wait for all PGs to create, activate, and
> peer, rinse/repeat until I get to my target.  [1] This is an example
> of a script that should accomplish this with no interference.  Notice
> the use of flags while increasing the PGs.  It will make things take
> much longer if you have an OSD OOM itself or die for any reason by
> adding to the peering needing to happen.  It will also be wasted IO to
> start backfilling while you're still making changes; it's best to wait
> until you finish increasing your PGs and everything peers before you
> let data start moving.
>
> Another thing to keep in mind is how long your cluster will be moving
> data around.  Increasing your PG count on a pool full of data is one
> of the most intensive operations you can tell a cluster to do.  The
> last time I had to do this, I increased pg(p)_num by 4k PGs from 16k
> to 32k, let it backfill, rinse/repeat until the desired PG count was
> achieved.  For me, that 4k PGs would take 3-5 days depending on other
> cluster load and how full the cluster was.  If you do decide to
> increase your PGs by 4k instead of the full increase, change the 16384
> to the number you decide to go to, backfill, continue. 
>
>
> [1]
> # Make sure to set the pool variable as well as the number ranges to the
> # appropriate values.
> flags="nodown nobackfill norecover"
> for flag in $flags; do
>   ceph osd set $flag
> done
> pool=rbd
> echo "$pool currently has $(ceph osd pool get $pool pg_num) PGs"
> # The first number is your current PG count for the pool, the second
> # number is the target PG count, and the third number is how many to
> # increase it by each time through the loop.
> for num in {7700..16384..256}; do
>   ceph osd pool set $pool pg_num $num
>   while sleep 10; do
>     ceph health | grep -q 'peering\|stale\|activating\|creating\|inactive' || break
>   done
>   ceph osd pool set $pool pgp_num $num
>   while sleep 10; do
>     ceph health | grep -q 'peering\|stale\|activating\|creating\|inactive' || break
>   done
> done
> for flag in $flags; do
>   ceph osd unset $flag
> done
>
> On Thu, May 17, 2018 at 9:27 AM Kai Wagner <kwag...@suse.com> wrote:
>
> Hi Oliver,
>
> a good value is 100-150 PGs per OSD. So in your case between 20k
> and 30k.
>
> You can increase your PGs, but keep in mind that this will keep the
> cluster quite busy for some while. That said I would rather
> increase in
> smaller steps than in one large move.
>
> Kai
>
>
> On 17.05.2018 01:29, Oliver Schulz wrote:
> > Dear all,
> >
> > we have a Ceph cluster that has slowly evolved over several
> > years and Ceph versions (started with 18 OSDs and 54 TB
> > in 2013, now about 200 OSDs and 1.5 PB, still the same
> > cluster, with data continuity). So there are some
> > "early sins" in the cluster configuration, left over from
> > the early days.
> >
> > One of these sins is the number of PGs in our CephFS "data"
> > pool, which is 7200 and therefore not (as recommended)
> > a power of two. Pretty much all of our data is in the
> > "data" pool, the only other pools are "rbd" and "metadata",
> > both contain little data (and they have way too many PGs
> > already, another early sin).
> >
> 

Re: [ceph-users] Increasing number of PGs by not a factor of two?

2018-05-17 Thread Kai Wagner
Hi Oliver,

a good value is 100-150 PGs per OSD. So in your case between 20k and 30k.

You can increase your PGs, but keep in mind that this will keep the
cluster quite busy for a while. That said, I would rather increase in
smaller steps than in one large move.
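
As a minimal sketch of what one such small step looks like (pool name and
numbers are only examples - let peering and backfill settle before you do
the next step):

ceph osd pool get data pg_num        # see where you are (7200 in your case)
ceph osd pool set data pg_num 7456   # bump pg_num by a small amount, e.g. 256
ceph osd pool set data pgp_num 7456  # then raise pgp_num so data actually moves
ceph -s                              # wait until the cluster is healthy again, then repeat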

Kai


On 17.05.2018 01:29, Oliver Schulz wrote:
> Dear all,
>
> we have a Ceph cluster that has slowly evolved over several
> years and Ceph versions (started with 18 OSDs and 54 TB
> in 2013, now about 200 OSDs and 1.5 PB, still the same
> cluster, with data continuity). So there are some
> "early sins" in the cluster configuration, left over from
> the early days.
>
> One of these sins is the number of PGs in our CephFS "data"
> pool, which is 7200 and therefore not (as recommended)
> a power of two. Pretty much all of our data is in the
> "data" pool, the only other pools are "rbd" and "metadata",
> both contain little data (and they have way too many PGs
> already, another early sin).
>
> Is it possible - and safe - to change the number of "data"
> pool PGs from 7200 to 8192 or 16384? As we recently added
> more OSDs, I guess it would be time to increase the number
> of PGs anyhow. Or would we have to go to 14400 instead of
> 16384?
>
>
> Thanks for any advice,
>
> Oliver
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Show and Tell: Grafana cluster dashboard

2018-05-07 Thread Kai Wagner
Looks very good. Is it possible to display the reason why a cluster is in
an error or warning state? I'm thinking about the output from ceph -s -
could this be shown in case there's a failure? I think this is not
provided by default, but I'm wondering if it's possible to add.

Kai

On 05/07/2018 04:53 PM, Reed Dier wrote:
> I think supporting both paths would be the best choice.
That's the way we should go. Supporting both or in general as much as
possible (try to be generic)

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] London Ceph day yesterday

2018-04-22 Thread Kai Wagner
Hi all,

indeed it was a lot of fun again, and what I liked most were the open
discussions afterwards.

Big thanks go to Wido for organizing this, and we should not forget to
thank all the sponsors who made this happen as well.

Kai


On 20.04.2018 10:32, Sean Purdy wrote:
> Just a quick note to say thanks for organising the London Ceph/OpenStack day. 
>  I got a lot out of it, and it was nice to see the community out in force.
>
> Sean Purdy
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph luminous - troubleshooting performance issues overall DSK 100%, busy 1%

2018-04-10 Thread Kai Wagner
Is this just from one server or from all servers? I'm just wondering why
VD 0 is using WriteThrough compared to the others. If that's the setup for
the OSDs, you already have a cache setup problem.
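
If VD 0 really backs one of the OSDs and the BBU is healthy, the usual fix is
to switch it to the same policy as the other VDs. A rough sketch with MegaCLI -
the option syntax differs a bit between MegaCli versions, so check your
controller's help output first:

megacli -LDSetProp WB -L0 -a0       # assumption: VD 0 on adapter 0 is the odd one out; WB = WriteBack
megacli -LDGetProp -cache -L0 -a0   # verify the new cache policy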


On 10.04.2018 13:44, Mohamad Gebai wrote:
> megacli -LDGetProp -cache -Lall -a0
>
> Adapter 0-VD 0(target id: 0): Cache Policy:WriteThrough,
> ReadAheadNone, Direct, Write Cache OK if bad BBU
> Adapter 0-VD 1(target id: 1): Cache Policy:WriteBack, ReadAdaptive,
> Cached, No Write Cache if bad BBU
> Adapter 0-VD 2(target id: 2): Cache Policy:WriteBack, ReadAdaptive,
> Cached, No Write Cache if bad BBU
> Adapter 0-VD 3(target id: 3): Cache Policy:WriteBack, ReadAdaptive,
> Cached, No Write Cache if bad BBU

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






[ceph-users] Ceph Dashboard IRC Channel

2018-04-05 Thread Kai Wagner
Hi all,

we've created a new #ceph-dashboard channel on OFTC to talk about all
dashboard-related functionality and development. This means that the
old "openattic" channel on Freenode is just for openATTIC, and everything
new regarding the mgr module will now be discussed in the new channel
on OFTC.

Thanks,

Kai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph talks/presentations at conferences/events

2018-03-26 Thread Kai Wagner
Hi Robert,

thanks, I will forward it to the community list as well.

Kai


On 03/26/2018 11:03 AM, Robert Sander wrote:
> Hi Kai,
>
> On 22.03.2018 18:04, Kai Wagner wrote:
>> don't know if this is the right place to discuss this but I was just
>> wondering if there's any specific mailing list + web site where upcoming
>> events (Ceph/Open Source/Storage) and conferences are discussed and
>> generally tracked?
> Maybe the community mailing list is the place for that?
>
> http://docs.ceph.com/docs/master/start/get-involved/
>
> Regards
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)





[ceph-users] Ceph talks/presentations at conferences/events

2018-03-22 Thread Kai Wagner
Hi all,

I don't know if this is the right place to discuss this, but I was just
wondering if there's any specific mailing list + web site where upcoming
events (Ceph/Open Source/Storage) and conferences are discussed and
generally tracked?

Also, I would like to sync up front on topics that could be interesting
for such events. It has already happened in the past that two people
submitted more or less the same topic without even being aware of it. If
there's nothing like that so far, I would like to get it started somehow :-).

Thanks,

Kai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Luminous and Calamari

2018-03-02 Thread Kai Wagner
Hi,

given that we don't have Ubuntu or CentOS packages, you could install
directly from our sources.

http://download.openattic.org/sources/3.x/openattic-3.6.2.tar.bz2

Our docs are hosted at: http://docs.openattic.org/en/latest/

Kai


On 03/02/2018 04:39 PM, Budai Laszlo wrote:
> Hi,
>
> I've seen the openATTIC. I would like to know if there are any
> instructions about how to get it running on Ubuntu 16.04 or Centos 7?
>
> Thank you.
> Laszlo
>
>
> On 02.03.2018 17:26, Sébastien VIGNERON wrote:
>> Hi,
>>
>> Did you look the OpenAttic project?
>>
>> Cordialement / Best regards,
>>
>> Sébastien VIGNERON
>> CRIANN,
>> Ingénieur / Engineer
>> Technopôle du Madrillet
>> 745, avenue de l'Université
>> 76800 Saint-Etienne du Rouvray - France
>> tél. +33 2 32 91 42 91
>> fax. +33 2 32 91 42 92
>> http://www.criann.fr
>> mailto:sebastien.vigne...@criann.fr
>> support: supp...@criann.fr
>>
>>> On 2 March 2018 at 15:06, Budai Laszlo wrote:
>>>
>>> Dear all,
>>>
>>> is it possible to use Calamari with Luminous (I know about the
>>> manager dashboard, but that is "read only", I need a tool for also
>>> managing ceph).
>>>
>>> Kind regards,
>>> Laszlo
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com 
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Kai Wagner
I totally understand and see your frustration here, but you have to keep
in mind that this is an open source project with a lot of volunteers.
If you have a really urgent need, you have the option to develop such a
feature on your own, or to pay someone to do the work for you.

It's been a long journey, but it seems like it's finally coming to an end.


On 03/01/2018 01:26 PM, Max Cuttins wrote:
> It's obvious that Citrix is not believable anymore.
> However, at least Ceph should have added iSCSI to its platform during
> all these years.
> Ceph is awesome, so why not just kill all the competitors and make it
> compatible even with washing machines?

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Luminous and calamari

2018-02-15 Thread Kai Wagner
Hey,

yes, there are plans to add management functionality to the dashboard as
well. As soon as we've covered all the existing functionality for the
initial PR, we'll start with the management features. The big benefit
here is that we can build on what we've already done within openATTIC.

If you've missed the ongoing Dashboard V2 discussions and work, here's a
blog post to follow up:

https://www.openattic.org/posts/ceph-manager-dashboard-v2/

Let us know about your thoughts on this.

Thanks

Kai


On 02/16/2018 06:20 AM, Laszlo Budai wrote:
> Hi,
>
> I've just started up the dashboard component of the ceph mgr. It looks
> OK, but from what can be seen, and what I was able to find in the
> docs, the dashboard is just for monitoring. Is there any plugin that
> allows management of the ceph resources (pool create/delete)?
>
> Thanks,
> Laszlo
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Shutting down half / full cluster

2018-02-14 Thread Kai Wagner
Hi,

maybe it's worth looking at this:

http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-April/017378.html
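
In short, the approach generally recommended for a planned full shutdown is to
freeze the cluster with OSD flags before powering anything down and to remove
them again once everything is back up - roughly (noout alone already prevents
the rebalancing, the other flags keep the cluster from doing any work while
nodes disappear):

ceph osd set noout        # don't mark stopped OSDs out, so nothing rebalances
ceph osd set nobackfill
ceph osd set norecover
ceph osd set norebalance
ceph osd set nodown
ceph osd set pause        # optionally stop all client I/O as well
# ... shut down clients, then OSD nodes, then MON nodes, do the power work ...
# ... power everything back on, wait for all OSDs to rejoin, then unset: ...
ceph osd unset pause
ceph osd unset nodown
ceph osd unset norebalance
ceph osd unset norecover
ceph osd unset nobackfill
ceph osd unset noout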

Kai


On 02/14/2018 11:06 AM, Götz Reinicke wrote:
> Hi,
>
> We have some work to do on our power lines for all buildings and we have to
> shut down all systems. So there is also no traffic on any ceph client.
>
> Pity, we have to shut down some ceph nodes too in an affected building.
>
> To avoid rebalancing - as I see there is no need for it, as there is no
> traffic on clients - how can I safely put the remaining cluster nodes into a
> „keep calm and wait“ state?
>
> Is that the noout option?
>
> Thanks for feedback and suggestions! Regards, Götz
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)





Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Kai Wagner
Hi Wido,

how do you know about that beforehand? There's no official upcoming
event on the ceph.com page?

Just because I'm curious :)

Thanks

Kai


On 12.02.2018 10:39, Wido den Hollander wrote:
> The next one is in London on April 19th 

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph Day Germany :)

2018-02-12 Thread Kai Wagner
Sometimes I'm just blind - clearly too little mailing list reading on my side :D

Thanks!


On 12.02.2018 10:51, Wido den Hollander wrote:
> Because I'm co-organizing it! :) I sent out a Call for Papers last
> week to this list.

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Ceph Day Germany :)

2018-02-11 Thread Kai Wagner

On 12.02.2018 00:33, c...@elchaka.de wrote:
> I absolutely agree, too. This was really great! It would be fantastic if
> the Ceph days happened again in Darmstadt - or Düsseldorf ;)
>
> Btw. will the slides and perhaps videos of the presentations be available
> online?

AFAIK Danny is working on that.

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






Re: [ceph-users] Newbie question: stretch ceph cluster

2018-02-09 Thread Kai Wagner
Hi and welcome,


On 09.02.2018 15:46, ST Wong (ITSC) wrote:
>
> Hi, I'm new to CEPH and got a task to setup CEPH with kind of DR
> feature.  We've 2 10Gb connected data centers in the same campus.    I
> wonder if it's possible to setup a CEPH cluster with following
> components in each data center:
>
>
> 3 x mon + mds + mgr
>
> 3 x OSD (replicated factor=2, between data center)
>
>
> So that any one of following failure won't affect the cluster's
> operation and data availability:
>
>   * any one component in either data center
>   * failure of either one of the data center 
>
>
> Is it possible?
>
In general this is possible, but I would say that replica=2 is not a good
idea. If one DC is powered off - for a failure scenario or just for
maintenance - and then a single disk fails in the other DC, this can
already lead to data loss. My advice here would be: if at all possible,
please don't use replica=2.
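
To answer the CRUSH question below: yes, if you add a "datacenter" level to
your CRUSH hierarchy you can write a rule that places two copies in each DC
and run the pool with size=4 / min_size=2 instead of replica=2. A sketch
only - bucket names and the rule id are made up, and the rule is added by
editing the decompiled CRUSH map:

rule stretch_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step chooseleaf firstn 2 type host
        step emit
}
# then, per pool:
#   ceph osd pool set <pool> crush_rule stretch_rule
#   ceph osd pool set <pool> size 4
#   ceph osd pool set <pool> min_size 2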
>
> In case one data center failure case, seems replication can't occur
> any more.   Any CRUSH rule can achieve this purpose?
>
>
> Sorry for the newbie question.
>
>
> Thanks a lot.
>
> Regards
>
> /st wong
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)





[ceph-users] RBD device as SBD device for pacemaker cluster

2018-02-06 Thread Kai Wagner
Hi all,

I had the idea to use an RBD device as the SBD device for a Pacemaker
cluster, so I don't have to fiddle with multipathing and all that stuff.
Has someone already tested this somewhere and can tell me how the cluster
reacts to it?

I think this shouldn't be a problem, but I'm just wondering if there's
anything I'm not aware of.
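
The plumbing itself would be straightforward - a rough, untested sketch of
what I have in mind (pool/image names are made up, and the RBD mapping has to
come up before the sbd service on every node):

rbd create sbd/sbd-disk --size 16     # a few MB is plenty for the SBD slot area
rbd map sbd/sbd-disk                  # e.g. /dev/rbd0, on every cluster node
sbd -d /dev/rbd0 create               # initialize the SBD header and node slots
sbd -d /dev/rbd0 dump                 # verify
# then point SBD_DEVICE in /etc/sysconfig/sbd at the mapped device and add a
# stonith:external/sbd resource in Pacemaker

The usual SBD caveats (hardware watchdog, msgwait/watchdog timeouts) would
apply just as with any other shared block device.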

Thanks

Kai

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)






[ceph-users] Fwd: Ceph team involvement in Rook (Deploying Ceph in Kubernetes)

2018-01-19 Thread Kai Wagner
Just for those of you who are not subscribed to ceph-users.



 Forwarded Message 
Subject:Ceph team involvement in Rook (Deploying Ceph in Kubernetes)
Date:   Fri, 19 Jan 2018 11:49:05 +0100
From:   Sebastien Han 
To: ceph-users , Squid Cybernetic
, Dan Mick , Chen, Huamin
, John Spray , Sage Weil
, bas...@tabbara.com



Everyone,

Kubernetes is getting bigger and bigger. It has become the platform of
choice to run microservices applications in containers, just like
OpenStack did for cloud applications in virtual machines.

When it comes to container storage there are three key aspects:

* Providing persistent storage to containers; Ceph already has drivers in
Kubernetes with kRBD and CephFS
* Containerizing the storage itself, so efficiently running Ceph
services in Containers. Currently, we have ceph-container
(https://github.com/ceph/ceph-container)
* Deploying the containerized storage in Kubernetes, we wrote
ceph-helm charts (https://github.com/ceph/ceph-helm)

The third piece, although it's working great, has a particular goal and
doesn't aim to run Ceph just like any other application in Kubernetes.
We were also looking for a better abstraction/ease of use for
end-users, multi-cluster support, operability, life-cycle management,
centralized operations, to learn more you can read
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-October/021918.html.
As a consequence, we decided to look at what the ecosystem had to
offer. As a result, Rook came out, as a pleasant surprise. For those
who are not familiar with Rook, please visit https://rook.io but in a
nutshell, Rook is an open source orchestrator for distributed storage
systems running in cloud-native environments. Under the hood, Rook is
deploying, operating and managing Ceph life cycle in Kubernetes. Rook
has a vibrant community and committed developers.

Even if Rook is not perfect (yet), it has firm foundations, and we are
planning on helping to make it better. We already opened issues for
that and started doing work with Rook's core developers. We are
looking at reconciling what is available today
(rook/ceph-container/helm), reduce the overlap/duplication and all
work together toward a single and common goal. With this
collaboration, through Rook, we hope to make Ceph the de facto Open
Source storage solution for Kubernetes.

These are exciting times, so if you're a user, a developer, or merely
curious, have a look at Rook and send us feedback!

Thanks!
-- 
Cheers

––
Sébastien Han
Principal Software Engineer, Storage Architect

"Always give 100%. Unless you're giving blood."

Mail: s...@redhat.com
Address: 11 bis, rue Roquépine - 75008 Paris
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html





Re: [ceph-users] Ceph Day Germany 2018

2018-01-16 Thread Kai Wagner
Ditto, cya in Darmstadt!


On 01/16/2018 08:47 AM, Wido den Hollander wrote:
> Yes! Looking forward :-) I'll be there :)
>
> Wido 

-- 
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)



