Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini



On 23/01/2018 16:49, c...@jack.fr.eu.org wrote:

On 01/23/2018 04:33 PM, Massimiliano Cuttini wrote:
With Ceph you have to install a third-party orchestrator in order to 
have a clear picture of what is going on.

Which can be OK, but not always feasible.

Just as with everything.
As Wikipedia says, for instance, "Proxmox VE supports local storage 
with LVM group, directory and ZFS, as well as network storage types 
with iSCSI, Fibre Channel, NFS, GlusterFS, CEPH and DRBD.[14]"


Maybe Fibre Channel shall provide a web interface. Maybe iSCSI shall 
too. Maybe DRBD & GlusterFS will provide another one.


Well, you are mixing different technologies:

1) iSCSI and Fibre Channel are *network communication protocols*.
They just allow the hypervisor to talk to a SAN/NAS; by themselves 
they don't provide any kind of storage.


2) ZFS, GlusterFS and NFS are "network-ready" filesystems, not 
software-defined SAN/NAS.


3) Ceph, ScaleIO, FreeNAS, HP VirtualStore... these are all *software 
defined* storage.
This means that they set up disks, filesystems and network connections in 
order to be ready to use from the client.

They can be thought of as a kind of "storage orchestrator" by themselves.

So only group 3 is comparable technology.
In this competition I think Ceph is the only one that can win in the 
long run.
It's open, it works, it's easy, it's free, and it's improving faster than 
the others.
However, right now, it is the only one that lacks a decent management 
dashboard.
This is, to me, incomprehensible. Ceph is by far a killer app in this 
market.

So why not just remove its last barriers and get mass adoption?




Or maybe this is not their job.

As you said, "Xen is just an hypervisor", thus you are using a 
bare-metal low-level tool, just like sane folks would use qemu. And 
yes, low-level tools are... low level.


XenServer is a hypervisor, but it has a truly great management dashboard, 
which is XenCenter.

I guess VMware has its own, and I guess it's good too.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph

On 01/23/2018 04:33 PM, Massimiliano Cuttini wrote:
With Ceph you have to install a third-party orchestrator in order to have 
a clear picture of what is going on.

Which can be OK, but not always feasible.


Just as with everything.
As Wikipedia says, for instance, "Proxmox VE supports local storage with 
LVM group, directory and ZFS, as well as network storage types with 
iSCSI, Fibre Channel, NFS, GlusterFS, CEPH and DRBD.[14]"


Maybe Fibre Channel shall provide a web interface. Maybe iSCSI shall 
too. Maybe DRBD & GlusterFS will provide another one.


Or maybe this is not their job.

As you said, "Xen is just an hypervisor", thus you are using a bare-metal 
low-level tool, just like sane folks would use qemu. And yes, low-level 
tools are... low level.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini

On 23/01/2018 14:32, c...@jack.fr.eu.org wrote:


I think I was not clear.

There are VM management systems; look at 
https://fr.wikipedia.org/wiki/Proxmox_VE, 
https://en.wikipedia.org/wiki/Ganeti, and probably 
https://en.wikipedia.org/wiki/OpenStack too.


These systems interact with Ceph.
When you create a VM, an RBD volume is created.
When you delete a VM, the associated volumes are deleted.
When you resize a disk, the volume is resized.

There is no need for manual interaction at the Ceph level in any way.

If I understood the end of your email correctly, you're stuck with a 
deficient VM management system, based on XenServer.

Your issues are not Ceph's issues, but Xen's.


Half and half.

Xen is just a hypervisor, while OpenStack is an orchestrator.
An orchestrator manages your nodes via API (both hypervisors and storage, 
if you want).


The fact is that Ceph doesn't have its own web interface, while many 
other storage services have their own (FreeNAS, or proprietary services 
like LeftHand/VirtualStorage).
With Ceph you have to install a third-party orchestrator in order to have 
a clear picture of what is going on.

Which can be OK, but not always feasible.

Coming back to my case: Xen is just a hypervisor, not an orchestrator.
So this means that many tasks must be accomplished manually.
A simple web interface that wraps a few basic shell commands can save hours 
(and could probably be built within a few months, starting from the current 
deployment).
I really think Ceph is the future... but it has to become a service ready 
to use in every kind of scenario (with or without an orchestrator).

Right now, it doesn't seem ready to me.

I'm taking a look at openATTIC right now.
Perhaps this is the missing piece.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Volker Theile
Hello Massimiliano,

>
>> You're more than welcome - we have a lot of work ahead of us...
>> Feel free to join our Freenode IRC channel #openattic to get in touch!
>
> A curiosity!
> as far as I understood, this software was created to manage only Ceph.
> Is that right?
> So... why such a "far away" name for a software dedicated to Ceph?

openATTIC comes from local storage management and was switched to
Ceph fairly recently.

> I read some months ago about openATTIC, but I was thinking it was
> something completely different before you wrote me.
>  :)
>

Volker

-- 
Volker Theile
Software Engineer | openATTIC
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)
Phone: +49 173 5876879
E-Mail: vthe...@suse.com




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph

I think I was not clear.

There are VM management systems; look at 
https://fr.wikipedia.org/wiki/Proxmox_VE, 
https://en.wikipedia.org/wiki/Ganeti, and probably 
https://en.wikipedia.org/wiki/OpenStack too.


These systems interact with Ceph.
When you create a VM, an RBD volume is created.
When you delete a VM, the associated volumes are deleted.
When you resize a disk, the volume is resized.

There is no need for manual interaction at the Ceph level in any way.
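
For illustration, this is roughly what such a system issues on your 
behalf; a minimal sketch assuming a pool named "rbd" and an illustrative 
image name:

# VM created -> an image is created (size suffixes like G work on recent releases)
rbd create rbd/vm-disk1 --size 20G
# disk resized -> the image is resized (shrinking also needs --allow-shrink)
rbd resize rbd/vm-disk1 --size 40G
# VM deleted -> the image is deleted
rbd rm rbd/vm-disk1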

If I understood the end of your email correctly, you're stuck with a 
deficient VM management system, based on XenServer.

Your issues are not Ceph's issues, but Xen's.


On 01/23/2018 01:58 PM, Massimiliano Cuttini wrote:


On 23/01/2018 13:20, c...@jack.fr.eu.org wrote:
- USER tasks: create new images, increase image size, shrink image 
size, check daily status and change broken disks whenever needed.

Who does that?
For instance, Ceph can be used for VMs. Your VM system creates images, 
resizes images, whatever; not the Ceph admin.


I would like to have a single big remote storage, but as a best practice 
you should not.

Hypervisors can create images, resize them and so on... you're right.
However, sometimes the hypervisor messes up your LVM partitions, and this 
means corruption of all the VDIs on the same disk.


So... the best practice is to set up a remote storage for each VM (you 
can group a few if you really don't want to have 200 connections).
This reduces the risk of VDI corruption (an accident corrupts one VDI, 
not all at once, and you can easily restore a snapshot).
XenServer as a hypervisor doesn't support the Ceph client and needs to go 
through iSCSI.

You need to map RBD to iSCSI, so you need to create an RBD image for each LUN.
So in the end... you need to:
- create the RBD image,
- map it to iSCSI,
- map the hypervisor to iSCSI,
- drink a coffee,
- create the hypervisor virtualization layer (because every HV wants to use 
its own snapshots),
- copy the template of the VM requested by the customer,
- drink a second coffee,
and finally run the VM.

This is just a nightmare... of course just one of the many that a 
sysadmin has.
If you have 1000 VMs you need a GUI in order to scroll through and see the 
panorama.

I don't think you read your email by command line.
Neither should you have to look at your VMs by command line.

Probably one day I'll quit XenServer and all its constraints; however, 
right now I can't, and it still seems to be the most stable and safest 
way to virtualize.







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini



You're more than welcome - we have a lot of work ahead of us...
Feel free to join our Freenode IRC channel #openattic to get in touch!


A curiosity!
As far as I understood, this software was created to manage only Ceph. Is 
that right?

So... why such a "far away" name for a software dedicated to Ceph?
I read some months ago about openATTIC, but I was thinking it was 
something completely different before you wrote me.

 :)

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini


On 23/01/2018 13:20, c...@jack.fr.eu.org wrote:
- USER tasks: create new images, increase image size, shrink image 
size, check daily status and change broken disks whenever needed.

Who does that?
For instance, Ceph can be used for VMs. Your VM system creates images, 
resizes images, whatever; not the Ceph admin.


I would like to have a single big remote storage, but as a best practice 
you should not.

Hypervisors can create images, resize them and so on... you're right.
However, sometimes the hypervisor messes up your LVM partitions, and this 
means corruption of all the VDIs on the same disk.


So... the best practice is to set up a remote storage for each VM (you 
can group a few if you really don't want to have 200 connections).
This reduces the risk of VDI corruption (an accident corrupts one VDI, 
not all at once, and you can easily restore a snapshot).

XenServer as a hypervisor doesn't support the Ceph client and needs to go 
through iSCSI.
You need to map RBD to iSCSI, so you need to create an RBD image for each LUN.
So in the end... you need to:
- create the RBD image,
- map it to iSCSI,
- map the hypervisor to iSCSI,
- drink a coffee,
- create the hypervisor virtualization layer (because every HV wants to use 
its own snapshots),
- copy the template of the VM requested by the customer,
- drink a second coffee,
and finally run the VM.
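
A rough sketch of those steps with the kernel RBD client and a stock LIO 
iSCSI target (targetcli); the pool, image and IQN names are illustrative, 
and portal/ACL setup for the hypervisor initiator is omitted:

# on the gateway host: create the image and map it as a block device
rbd create rbd/vm-disk1 --size 50G
rbd map rbd/vm-disk1     # appears as e.g. /dev/rbd0; kernel clients may
                         # need newer image features disabled first
# export the mapped device as an iSCSI LUN via LIO
targetcli /backstores/block create name=vm-disk1 dev=/dev/rbd0
targetcli /iscsi create iqn.2018-01.org.example:vm-disk1
targetcli /iscsi/iqn.2018-01.org.example:vm-disk1/tpg1/luns create /backstores/block/vm-disk1
targetcli saveconfig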

This is just a nightmare... of course just one of the many that a 
sysadmin has.

If you have 1000 VMs you need a GUI in order to scroll through and see the panorama.
I don't think you read your email by command line.
Neither should you have to look at your VMs by command line.

Probably one day I'll quit XenServer and all its constraints; however, 
right now I can't, and it still seems to be the most stable and safest 
way to virtualize.







___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Lenz Grimmer
Ciao Massimiliano,

On 01/23/2018 01:29 PM, Massimiliano Cuttini wrote:

>>   https://www.openattic.org/features.html
>
> Oh god THIS is the answer!

:)

> Lenz, if you need help I can join also development.

You're more than welcome - we have a lot of work ahead of us...
Feel free to join our Freenode IRC channel #openattic to get in touch!

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini



   https://www.openattic.org/features.html

Oh god, THIS is the answer!
Lenz, if you need help I can also join development.


Lenz



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini

Hey Lenz,

openATTIC seems to implement several good features and to be more or less 
what I was asking for.

I'll go through the whole website. :)


THANKS!


On 16/01/2018 09:04, Lenz Grimmer wrote:

Hi Massimiliano,

On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:


_*3) Management complexity*_
Ceph is amazing, but it is just too big to have everything under control
(too many services).
Now there is a management console, but as far as I read, this management
console just shows basic data about performance.
So it doesn't manage at all... it's just a monitor.

In the end you have to manage everything by command line.

[...]


The management complexity could be completely overcome with a great web
manager.
A web manager, in the end, is just a wrapper for shell commands from the
Ceph admin node to the others.
If you think about it, a wrapper is tons easier to develop
than what has already been developed.
I really do see that Ceph is the future of storage. But there is some
easily avoidable complexity that needs to be reduced.

If there are already some plans for these issues, I really would like to know.

FWIW, there is openATTIC, which provides additional functionality beyond
what the current dashboard provides. It's a web application that
utilizes various existing APIs (e.g. librados, the RGW Admin Ops API):

   https://www.openattic.org/features.html

Lenz



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread ceph



On 01/23/2018 11:04 AM, Massimiliano Cuttini wrote:

On 22/01/2018 21:55, Jack wrote:

On 01/22/2018 08:38 PM, Massimiliano Cuttini wrote:

The web interface is needed because: *command lines are prone to typos.*

And you never misclick, indeed;
Do you really mean: 1) misclick once on an option list, 2) misclick 
once on the form, 3) mistype the input and 4) misclick again on the 
confirmation dialog box?

Nope, just select an entry, and then click "delete", not "edit".

Well, if you misclick that much, better not to tell people around that 
you are a system engineer ;)

Please welcome the cli interfaces.

- USER tasks: create new images, increase image size, shrink image size, 
check daily status and change broken disks whenever needed.

Who does that?
For instance, Ceph can be used for VMs. Your VM system creates images, 
resizes images, whatever; not the Ceph admin.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-23 Thread Massimiliano Cuttini

On 22/01/2018 21:55, Jack wrote:

On 01/22/2018 08:38 PM, Massimiliano Cuttini wrote:

The web interface is needed because: *command lines are prone to typos.*

And you never misclick, indeed;
Do you really mean: 1) misclick once on an option list, 2) misclick 
once on the form, 3) mistype the input and 4) misclick again on the 
confirmation dialog box?

No... I can brag that I never misclick that much in a row! :)
Well, if you misclick that much, better not to tell people around that 
you are a system engineer ;)


However, I think that everybody can have a different opinion.
But rejecting the evidence is just flaming.


Yeah, well, whatever, most system engineers know how to handle Ceph.
Most non-system engineers do not.
A task is a job; I don't master others' jobs, hence it feels natural that
others do not master mine.

Sorry if this sounds so strange to you.

Oh, this doesn't sound strange to me.

You simply don't see the big picture.
Ceph was born to simplify redundancy.
But what is the reason to build an architecture for high availability?
I guess to live in peace while hardware breaks: change a broken disk 
within some days instead of within some hours (or minutes).
This is all meant to set us free and to increase our comfort by reducing 
stressful issues.

Focus on big issues and tuning instead of ordinary issues.

My proposal is EXACTLY in the same direction, and I'll explain it to you. 
There are 2 kinds of tasks:
- USER tasks: create new images, increase image size, shrink image size, 
check daily status and change broken disks whenever needed.
- SYSTEM tasks: install, update, repair, improve, increase pool size, 
tune performance (this should be done by command line).


If you think your job is just being a slave of the customer care & sales 
folks, well... be happy with that.
If you think your job is to be the /broken-disk replacer boy/ of the 
office, then... be that man.
But don't come to me saying you need to be a system engineer to do 
these menial jobs.
I prefer to focus on maintaining and tuning instead of being the puppet of 
customer care.


You should try to consider, instead of flaming around, that there are 
people who think differently, not because they are not good enough to 
do your job, but because they see things differently.
Creating a separation between /user tasks/ (moving them to a foolproof 
web interface) and /admin tasks/ is just good.
Of course all admin tasks will always be by command line, but user tasks 
should not.


I really want to know if you'll flame back again, or if you'll finally 
try to give me a real answer with a good reason not to have a web 
interface in order to get rid of menial jobs.

But I suppose I know the answer.




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-22 Thread Jack
On 01/22/2018 08:38 PM, Massimiliano Cuttini wrote:
> The web interface is needed because: *command lines are prone to typos.*
And you never misclick, indeed;

> SMART is widely used.
SMART has never been, and will never be, useful for failure prediction.

> My opinion is pretty simple: the more complex a software is, the more
> you'll be prone to errors.
As you said, "from great power comes great costs".
Ceph is not for dummies (albeit installing and maintaining a running
cluster is pretty straightforward).

> A web interface can just make the basic checks before submitting a new
> command to the pool.
And the command line does exactly the same. Try removing a pool, you
will see.
MMIs are the same: if errors can be prevented, they shall be.
To prevent all errors, you must remove all functionality.
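
For instance, pool removal (as of Luminous) demands a monitor-side switch 
plus the pool name typed twice; a sketch, with an illustrative pool name:

ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
ceph osd pool delete mypool mypool --yes-i-really-really-mean-it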

> To say "/ceph is not for rookies, //it's better having a threshold/" can
> be said only by a person who doesn't really love their own data (keeping
> management as error-free as possible), but instead just wants to be the
> only one allowed to manage it.
Yeah, well, whatever, most system engineers know how to handle Ceph.
Most non-system engineers do not.
A task is a job; I don't master others' jobs, hence it feels natural that
others do not master mine.

Sorry if this sounds so strange to you.

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-22 Thread Massimiliano Cuttini
[...]


A web interface can just make the basic checks before submitting a new 
command to the pool.
Review the input, check whether the elements in the argument list exist, 
and then ask you again whether you are sure you want to go on.

This is just a clever way to handle delicate data.

To say "/ceph is not for rookies, //it's better having a threshold/" can 
be said only by a person who doesn't really love their own data (keeping 
management as error-free as possible), but instead just wants to be the 
only one allowed to manage it.


Less complexity, fewer errors, faster deployment of new customers.
Sorry if this sounds so strange to you.






-Original Message-
From: Alex Gorbachev [mailto:a...@iss-integration.com]
Sent: Tuesday, 16 January 2018 6:18
To: Massimiliano Cuttini
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Future

Hi Massimiliano,


On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini
<m...@phoenixweb.it> wrote:

Hi everybody,

I'm always looking at Ceph for the future.
But I do see several issues that are left unresolved and block near-future
adoption.
I would like to know if there are some answers already:

1) Separation between client and server distribution.
At this time you always have to update client & server in order to
match the same distribution of Ceph.
This is OK in the early releases, but in the future I expect the
Ceph client to be ONE, not one for every major version.
The client should be able to determine by itself which version of the
protocol and which features can be enabled, and to connect to at least
3 or 5 older major versions of Ceph by itself.

2) Kernel is old -> feature mismatch
OK, the kernel is old, and so? Just do not use it and turn to NBD.
And please don't even let me know; just virtualize it under the hood.

3) Management complexity
Ceph is amazing, but it is just too big to have everything under control
(too many services).
Now there is a management console, but as far as I read, this
management console just shows basic data about performance.
So it doesn't manage at all... it's just a monitor.

In the end you have to manage everything by your command line.
In order to manage by web, it's mandatory to:

create, delete, enable, disable services. If I need to run an iSCSI
redundant gateway, do I really need to cut commands from your
online docs?
Of course not. You can just script it better than any admin can.
Just give a few arguments on the HTML forms and that's all.

create, delete, enable, disable users.
I have to create users and keys for 24 servers. Do you really think
it's possible to do that without some bad transcription or a bad
cut of the keys across all servers?
Everybody ends up just copying the admin keys across all servers, giving
very insecure full permissions to all clients.
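
For what it's worth, per-client keys can be generated and shipped without 
ever touching the admin key; a minimal sketch, with illustrative client 
names and a pool named "rbd":

for i in $(seq -w 1 24); do
  # one restricted key per server instead of a copied admin key
  ceph auth get-or-create client.server$i \
    mon 'allow r' osd 'allow rwx pool=rbd' \
    -o /etc/ceph/ceph.client.server$i.keyring
done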

create MAPS (server, datacenter, rack, node, osd).
This is mandatory to design how the data needs to be replicated.
It's not good to create this by script or shell; what's needed is a graph
editor which can give you the perspective of what will be copied where.
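
That hierarchy is at least scriptable today; a rough sketch with 
illustrative bucket names, which such a graph editor would merely 
front-end:

ceph osd crush add-bucket dc1 datacenter
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 datacenter=dc1
ceph osd crush move node01 rack=rack1    # host bucket, with its OSDs under it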

check hardware below the hood.
What's missing is checking the health of the hardware below.
But Ceph was born as storage software that ensures redundancy and
protects you from single failures.
So WHY just ignore checking the health of disks with SMART?
FreeNAS just does a better job on this, giving lots of tools to
understand which disk is which and whether it will fail in the near
future.

Of course Ceph could really forecast issues by itself, and it needs to
start integrating with basic hardware I/O.
For example, it should be possible to enable/disable the UID light on the
disks in order to know which one needs to be replaced.
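
Today that check lives outside Ceph; the usual manual route is 
smartmontools, for example:

smartctl -H /dev/sda    # overall health self-assessment
smartctl -A /dev/sda    # vendor attributes: reallocated sectors, etc.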

As a technical note, we ran into this need with Storcium, and it is
pretty easy to utilize UID indicators using both Areca and LSI/Avago
HBAs.  You will need the standard control tools available from their web
sites, as well as hardware that supports SGPIO (most enterprise JBODs
and drives do).  There are likely similar options for other HBAs.

Areca:

UID on:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<ctrl#> disk identify drv=<drive#>

UID off:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<ctrl#> disk identify drv=0

LSI/Avago:

UID on:

sas2ircu <controller#> locate <enclosure:bay> ON

UID off:

sas2ircu <controller#> locate <enclosure:bay> OFF

HTH,
Alex Gorbachev
Storcium


I guess this kind of feature is quite standard across all Linux
distributions.

The management complexity could be completely overcome with a great web
manager.
A web manager, in the end, is just a wrapper for shell commands from the
Ceph admin node to the others.
If you think about it, a wrapper is tons easier to develop
than what has already been developed.
I really do see that Ceph is the future of storage. But there is some
easily avoidable complexity that needs to be reduced.

If there are already some plans for these issues, I really would like to
know.

Thanks,
Max




___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] Ceph Future

2018-01-16 Thread Marc Roos


Hmmm, I have to disagree with

'too many services'
What do you mean? There is a process for each osd, mon, mgr and mds. 
There are fewer processes running than on a default Windows file server. 
What is the complaint here?
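
On a systemd box the whole zoo is one command away anyway (unit names as 
packaged on RHEL/CentOS-style installs):

systemctl list-units 'ceph*'    # one unit per mon/mgr/osd/mds daemon
systemctl status ceph-osd@0     # or inspect a single OSD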

'manage everything by your command-line'
What is so bad about this? Even Microsoft is seeing the advantages and 
introduced PowerShell etc. I would recommend hiring a ceph admin, then 
you don't even need to use the web interface. You will have voice 
control on ceph; how cool is that! ;)
(Actually, maybe we can do a feature request to integrate Apple Siri (not 
forgetting of course Google/Amazon talk?))

'iscsi'
Afaik this is not even a default install with ceph or a ceph package. I 
am also not complaining to ceph that my Nespresso machine does not have 
triple redundancy.

'check hardware below the hood'
Why waste development on this when there are already enough solutions 
out there? As if it is even possible to make a one-size-fits-all 
solution.

Afaiac I think the ceph team has done a great job. I was pleasantly 
surprised by how easy it is to install, just by installing the rpms 
(not using ceph-deploy). Next to this, I think it is good to have some 
sort of 'threshold' to keep the WordPress admins at a distance. Ceph 
solutions are holding TB/PB of other people's data, and we don't want 
rookies destroying that, nor blame ceph for that matter.




-Original Message-
From: Alex Gorbachev [mailto:a...@iss-integration.com] 
Sent: Tuesday, 16 January 2018 6:18
To: Massimiliano Cuttini
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Ceph Future

Hi Massimiliano,


On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini 
<m...@phoenixweb.it> wrote:
> Hi everybody,
>
> I'm always looking at Ceph for the future.
> But I do see several issues that are left unresolved and block near-future
> adoption.
> I would like to know if there are some answers already:
>
> 1) Separation between client and server distribution.
> At this time you always have to update client & server in order to
> match the same distribution of Ceph.
> This is OK in the early releases, but in the future I expect the
> Ceph client to be ONE, not one for every major version.
> The client should be able to determine by itself which version of the
> protocol and which features can be enabled, and to connect to at least
> 3 or 5 older major versions of Ceph by itself.
>
> 2) Kernel is old -> feature mismatch
> OK, the kernel is old, and so? Just do not use it and turn to NBD.
> And please don't even let me know; just virtualize it under the hood.
>
> 3) Management complexity
> Ceph is amazing, but it is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I read, this
> management console just shows basic data about performance.
> So it doesn't manage at all... it's just a monitor.
>
> In the end you have to manage everything by your command line.
> In order to manage by web, it's mandatory to:
>
> create, delete, enable, disable services. If I need to run an iSCSI
> redundant gateway, do I really need to cut commands from your
> online docs?
> Of course not. You can just script it better than any admin can.
> Just give a few arguments on the HTML forms and that's all.
>
> create, delete, enable, disable users.
> I have to create users and keys for 24 servers. Do you really think
> it's possible to do that without some bad transcription or a bad
> cut of the keys across all servers?
> Everybody ends up just copying the admin keys across all servers, giving
> very insecure full permissions to all clients.
>
> create MAPS (server, datacenter, rack, node, osd).
> This is mandatory to design how the data needs to be replicated.
> It's not good to create this by script or shell; what's needed is a graph
> editor which can give you the perspective of what will be copied where.
>
> check hardware below the hood.
> What's missing is checking the health of the hardware below.
> But Ceph was born as storage software that ensures redundancy and
> protects you from single failures.
> So WHY just ignore checking the health of disks with SMART?
> FreeNAS just does a better job on this, giving lots of tools to
> understand which disk is which and whether it will fail in the near
> future.
> Of course Ceph could really forecast issues by itself, and it needs to
> start integrating with basic hardware I/O.
> For example, it should be possible to enable/disable the UID light on the
> disks in order to know which one needs to be replaced.

As a technical note, we ran into this need with Storcium, and it is 
pretty easy to utilize UID indicators using both Areca and LSI/Avago 
HBAs.  You will need the standard control tools available from their web 
sites, as well as hardware that supports SGPIO (most enterprise JBODs 
and drives do).

Re: [ceph-users] Ceph Future

2018-01-16 Thread Lenz Grimmer
Hi Massimiliano,

On 01/11/2018 12:15 PM, Massimiliano Cuttini wrote:

> _*3) Management complexity*_
> Ceph is amazing, but it is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I read, this management
> console just shows basic data about performance.
> So it doesn't manage at all... it's just a monitor.
> 
> In the end you have to manage everything by command line.

[...]

> The management complexity could be completely overcome with a great web
> manager.
> A web manager, in the end, is just a wrapper for shell commands from the
> Ceph admin node to the others.
> If you think about it, a wrapper is tons easier to develop
> than what has already been developed.
> I really do see that Ceph is the future of storage. But there is some
> easily avoidable complexity that needs to be reduced.
> 
> If there are already some plans for these issues, I really would like to know.

FWIW, there is openATTIC, which provides additional functionality beyond
what the current dashboard provides. It's a web application that
utilizes various existing APIs (e.g. librados, the RGW Admin Ops API):

  https://www.openattic.org/features.html

Lenz

-- 
SUSE Linux GmbH - Maxfeldstr. 5 - 90409 Nuernberg (Germany)
GF:Felix Imendörffer,Jane Smithard,Graham Norton,HRB 21284 (AG Nürnberg)



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Ceph Future

2018-01-15 Thread Alex Gorbachev
Hi Massimiliano,


On Thu, Jan 11, 2018 at 6:15 AM, Massimiliano Cuttini  
wrote:
> Hi everybody,
>
> I'm always looking at Ceph for the future.
> But I do see several issues that are left unresolved and block near-future
> adoption.
> I would like to know if there are some answers already:
>
> 1) Separation between client and server distribution.
> At this time you always have to update client & server in order to match
> the same distribution of Ceph.
> This is OK in the early releases, but in the future I expect the
> Ceph client to be ONE, not one for every major version.
> The client should be able to determine by itself which version of the
> protocol and which features can be enabled, and to connect to at least
> 3 or 5 older major versions of Ceph by itself.
>
> 2) Kernel is old -> feature mismatch
> OK, the kernel is old, and so? Just do not use it and turn to NBD.
> And please don't even let me know; just virtualize it under the hood.
>
> 3) Management complexity
> Ceph is amazing, but it is just too big to have everything under control
> (too many services).
> Now there is a management console, but as far as I read, this management
> console just shows basic data about performance.
> So it doesn't manage at all... it's just a monitor.
>
> In the end you have to manage everything by your command line.
> In order to manage by web, it's mandatory to:
>
> create, delete, enable, disable services.
> If I need to run an iSCSI redundant gateway, do I really need to cut
> commands from your online docs?
> Of course not. You can just script it better than any admin can.
> Just give a few arguments on the HTML forms and that's all.
>
> create, delete, enable, disable users.
> I have to create users and keys for 24 servers. Do you really think it's
> possible to do that without some bad transcription or a bad cut of the
> keys across all servers?
> Everybody ends up just copying the admin keys across all servers, giving
> very insecure full permissions to all clients.
>
> create MAPS (server, datacenter, rack, node, osd).
> This is mandatory to design how the data needs to be replicated.
> It's not good to create this by script or shell; what's needed is a graph
> editor which can give you the perspective of what will be copied where.
>
> check hardware below the hood.
> What's missing is checking the health of the hardware below.
> But Ceph was born as storage software that ensures redundancy and protects
> you from single failures.
> So WHY just ignore checking the health of disks with SMART?
> FreeNAS just does a better job on this, giving lots of tools to understand
> which disk is which and whether it will fail in the near future.
> Of course Ceph could really forecast issues by itself, and it needs to
> start integrating with basic hardware I/O.
> For example, it should be possible to enable/disable the UID light on the
> disks in order to know which one needs to be replaced.

As a technical note, we ran into this need with Storcium, and it is
pretty easy to utilize UID indicators using both Areca and LSI/Avago
HBAs.  You will need the standard control tools available from their
web sites, as well as hardware that supports SGPIO (most enterprise
JBODs and drives do).  There are likely similar options for other HBAs.

Areca:

UID on:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<ctrl#> disk identify drv=<drive#>

UID off:

cli64 curctrl=1 set password=<password>
cli64 curctrl=<ctrl#> disk identify drv=0

LSI/Avago:

UID on:

sas2ircu <controller#> locate <enclosure:bay> ON

UID off:

sas2ircu <controller#> locate <enclosure:bay> OFF

HTH,
Alex Gorbachev
Storcium

> I guess this kind of feature is quite standard across all Linux
> distributions.
>
> The management complexity could be completely overcome with a great web
> manager.
> A web manager, in the end, is just a wrapper for shell commands from the
> Ceph admin node to the others.
> If you think about it, a wrapper is tons easier to develop than
> what has already been developed.
> I really do see that Ceph is the future of storage. But there is some
> easily avoidable complexity that needs to be reduced.
>
> If there are already some plans for these issues, I really would like to know.
>
> Thanks,
> Max
>
>
>
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com