Re: [ceph-users] test, please ignore

2019-07-25 Thread Federico Lucifredi
Anything for our favorite esquire! :-) -F2

-- "'Problem' is a bleak word for challenge" - Richard Fish
_
Federico Lucifredi
Product Management Director, Ceph Storage Platform
Red Hat
A273 4F57 58C0 7FE8 838D 4F87 AEEB EC18 4A73 88AC
redhat.com   TRIED. TESTED. TRUSTED.


On Thu, Jul 25, 2019 at 7:08 AM Tim Serong  wrote:

> Sorry for the noise, I was getting "Remote Server returned '550 Cannot
> process address'" errors earlier trying to send to
> ceph-users@lists.ceph.com, and wanted to re-test.
>
> --
> Tim Serong
> Senior Clustering Engineer
> SUSE
> tser...@suse.com


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-02 Thread Federico Lucifredi
On Fri, Mar 2, 2018 at 4:29 AM, Max Cuttins  wrote:

>
>
> Hi Federico,
>
>> Hi Max,
>>
>> On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:
>>>
>>> This is true, but having something that just works in order to have
>>> minimum compatibility and start to dismiss old disk is something you should
>>> think about.
>>> You'll have ages in order to improve and get better performance. But you
>>> should allow Users to cut-off old solutions as soon as possible while
>>> waiting for a better implementation.
>>>
>> I like your thinking, but I wonder why a locally-mounted kRBD volume
>> doesn't meet this need? It seems easier than iSCSI, and I would venture it
>> shows at least twice the performance in some cases.
>>
>
> Simply because it's not possible.
> XenServer is closed. You cannot add RPMs (and so cannot install Ceph)
> without hacking the distribution to remove the limitation on YUM.
> And this is what we do here: https://github.com/rposudnevskiy/RBDSR


Understood. Thanks Max, I did not realize you were also speaking about Xen;
I thought you meant finding an arbitrary non-virtual disk replacement
strategy ("start to dismiss old disk").

We do speak to the Xen team every once in a while, and while there is
interest in adding Ceph support on their side, I think we are somewhat down
their list of priorities.

Thanks -F


Re: [ceph-users] Ceph iSCSI is a prank?

2018-03-01 Thread Federico Lucifredi
Hi Max,

> On Feb 28, 2018, at 10:06 AM, Max Cuttins  wrote:
> 
> This is true, but having something that just works in order to have minimum 
> compatibility and start to dismiss old disk is something you should think 
> about.
> You'll have ages in order to improve and get better performance. But you 
> should allow Users to cut-off old solutions as soon as possible while waiting 
> for a better implementation.

I like your thinking, but I wonder why a locally-mounted kRBD volume doesn't
meet this need? It seems easier than iSCSI, and I would venture it shows at
least twice the performance in some cases.

iSCSI in ALUA mode may be as close as it gets to scale-out iSCSI in software.
It is not bad, but you pay for the extra hops in performance and complexity. So
it totally makes sense where kRBD and libRBD are not (yet) available, like
VMware and Windows, but not where native drivers are available.
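
For anyone wondering what the locally-mounted kRBD alternative looks like in
practice, here is a minimal sketch, assuming an invented pool, image, and
mountpoint (size and flag syntax vary a bit between Ceph releases): the image
is mapped through the kernel RBD driver and then used like any local disk,
with no gateway in the path.

#!/usr/bin/env python3
# Minimal sketch: expose an RBD image as a local block device via kRBD
# instead of going through an iSCSI gateway. Pool, image, and mountpoint
# names are invented; size/flag syntax varies between Ceph releases.
import subprocess

POOL = "rbd"                     # hypothetical pool name
IMAGE = "legacy-lun0"            # hypothetical image name
MOUNTPOINT = "/mnt/legacy-lun0"  # hypothetical mountpoint

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create the image once, then attach it through the kernel RBD driver.
run("rbd", "create", f"{POOL}/{IMAGE}", "--size", "100G")
mapped = subprocess.run(["rbd", "map", f"{POOL}/{IMAGE}"],
                        check=True, capture_output=True, text=True)
dev = mapped.stdout.strip()  # `rbd map` prints the device, e.g. /dev/rbd0

# From here on it is an ordinary local disk: format, mount, use.
run("mkfs.xfs", dev)
run("mkdir", "-p", MOUNTPOINT)
run("mount", dev, MOUNTPOINT)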

And about Xen... patches are accepted in this project — folks who really care 
should go out and code it.

Best-F


Re: [ceph-users] Scuttlemonkey signing off...

2017-05-22 Thread Federico Lucifredi
Patrick,
  It has been an absolute privilege to work with you, and I hope we get to
do it again sometime soon!

Best-F

FEDERICO LUCIFREDI

PRODUCT MANAGEMENT DIRECTOR, Ceph Storage

Red Hat <https://www.redhat.com/>

A273 4F57 58C0 7FE8 838D 4F87 AEEB EC18 4A73 88AC
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>

On May 22, 2017, at 1:01 PM, Mark Nelson <mnel...@redhat.com> wrote:

Say it ain't so scuttlemonkey!  I'm very sorry to see you go, but I hope
you enjoy your new opportunity immensely!  Best of luck!

Mark

On 05/22/2017 09:36 AM, Patrick McGarry wrote:

Hey cephers,

I'm writing to you today to share that my time in the Ceph community is
coming to an end this year. The last five years (!!) of working with the
Ceph community have yielded some of the most rewarding adventures of my
professional career, but a new opportunity has come along that I just
couldn't pass up.

I will continue to work through the end of July in order to transition my
responsibilities to a replacement. In the spirit of Ceph openness, I am
currently assisting Stormy Peters (Red Hat's senior community manager -
sto...@redhat.com) in seeking candidates, so if you know anyone who might
be interested in managing the Ceph community, please let me know.

While this is definitely bittersweet for me, the Ceph community has done a
good job of self-managing, self-healing, and replicating just like the
underlying technology, so I know you are all in good hands (each other's!).
If you would like to keep in touch, or have questions beyond the time I am
able to answer my @redhat.com email address, feel free to reach out to me
at pmcga...@gmail.com and I'll be happy to catch up.

If you have any questions or concerns in the meantime feel free to reach
out to me directly, but I'll do my best to ensure there is minimal
disruption during this transition. Thank you to all of you in the Ceph
community who have made this journey so rewarding. I look forward to
seeing even more amazing things in Ceph's future!





Re: [ceph-users] Ceph drives not detected

2017-04-07 Thread Federico Lucifredi
Hi Melzer,
 Somewhat pointing out the obvious, but just in case: Ceph is in rapid
development, and Giant is way behind the current state of the art. If
this is your first Ceph experience, it is definitely recommended that you
look at Jewel or even Kraken -- in Linux terms, it is almost as if you
were running a 2.4 kernel ;-)

 Good luck --F
_
-- "'Problem' is a bleak word for challenge" - Richard Fish
(Federico L. Lucifredi) - federico at redhat.com - GnuPG 0x4A73884C


On Fri, Apr 7, 2017 at 11:19 AM, Melzer Pinto  wrote:
> Hello,
>
> I am setting up a 9 node ceph cluster. For legacy reasons I'm using Ceph
> giant (0.87) on Fedora 21. Each OSD node has 4x4TB SATA drives with journals
> on a separate SSD. The server is an HP XL190 Gen 9 with latest firmware.
>
> The issue I'm seeing is that only 2 drives get detected and mounted on
> almost all the nodes. During the initial creation all drives were created
> and mounted but now only 2 show up.
>
> Usually a partprobe forces the drives to come online, but in this case it
> doesn't.
>
> On a reboot a different set of OSDs will get detected. For example, if osds 0
> and 1 are up, on a reboot osds 0 and 3 will be detected. On another reboot
> osds 1 and 2 may come up.
>
>
> $ ceph osd tree
>
> # id   weight  type name          up/down  reweight
> -1     131     root default
> -2     14.56       host xxx-a5-34
> 0      3.64            osd.0      up       1
> 1      3.64            osd.1      down     0
> 2      3.64            osd.2      up       1
> 3      3.64            osd.3      down     0
> -3     14.56       host xxx-a5-36
> 4      3.64            osd.4      down     0
> 5      3.64            osd.5      down     0
> 6      3.64            osd.6      up       1
> 7      3.64            osd.7      up       1
> -4     14.56       host xxx-a5-37
> 8      3.64            osd.8      down     0
> 9      3.64            osd.9      up       1
> 10     3.64            osd.10     up       1
> 11     3.64            osd.11     down     0
> -5     14.56       host xxx-b5-34
> 12     3.64            osd.12     up       1
> 13     3.64            osd.13     down     0
> 14     3.64            osd.14     up       1
> 15     3.64            osd.15     down     0
> -6     14.56       host xxx-b5-36
> 16     3.64            osd.16     up       1
> 17     3.64            osd.17     up       1
> 18     3.64            osd.18     down     0
> 19     3.64            osd.19     down     0
> -7     14.56       host xxx-b5-37
> 20     3.64            osd.20     up       1
> 21     3.64            osd.21     up       1
> 22     3.64            osd.22     down     0
> 23     3.64            osd.23     down     0
> -8     14.56       host xxx-c5-34
> 24     3.64            osd.24     up       1
> 25     3.64            osd.25     up       1
> 26     3.64            osd.26     up       1
> 27     3.64            osd.27     up       1
> -9     14.56       host xxx-c5-36
> 28     3.64            osd.28     down     0
> 29     3.64            osd.29     up       1
> 30     3.64            osd.30     down     0
> 31     3.64            osd.31     up       1
> -10    14.56       host xxx-c5-37
> 32     3.64            osd.32     up       1
> 33     3.64            osd.33     up       1
> 34     3.64            osd.34     up       1
> 35     3.64            osd.35     up       1
>
> Anyone seen this problem before and know what the issue could be?
>
> Thanks
>
>
>


[ceph-users] Linux Fest NW CFP

2017-03-21 Thread Federico Lucifredi
Hello Ceph team,
  LinuxFest Northwest's CFP is out. It is a bit too far for me to do
it as a day trip from Boston, but it would be nice if someone on the
Pacific coast feels like giving a technical overview / architecture
session.

https://www.linuxfestnorthwest.org/2017/news/2017-call-presentations-open

Best -F

_
-- "'Problem' is a bleak word for challenge" - Richard Fish
(Federico L. Lucifredi) - federico at redhat.com - GnuPG 0x4A73884C


Re: [ceph-users] Ceph QoS user stories

2016-12-02 Thread Federico Lucifredi
Hi Sage,

 The primary QoS issue we see with OpenStack users is wanting to
guarantee minimum IOPS to each Cinder-mounted RBD volume, as a way to
protect the health of well-mannered workloads against badly-behaving
ones.

 As an OpenStack Administrator, I want to guarantee a minimum number
of IOPS to each Cinder volume to prevent any tenant from interfering
with another.

  The number of IOPS may vary per volume, but in many cases a
"standard" and "high" number would probably suffice. The guarantee is
more important than the granularity.
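
Purely as an illustration of that two-tier idea, a tiny sketch of how the
policy might be expressed follows. The tier names and the min_iops/max_iops
keys are invented; today's Cinder QoS specs only express caps, so the minimum
side is exactly what needs back-end support.

# Hypothetical two-tier QoS policy table; tier names and keys are invented.
QOS_TIERS = {
    "standard": {"min_iops": 500,  "max_iops": 2000},
    "high":     {"min_iops": 2000, "max_iops": 10000},
}

def qos_for_volume_type(volume_type: str) -> dict:
    # Return the reservation/cap pair an administrator would attach to a
    # Cinder volume type, defaulting to the "standard" tier.
    return QOS_TIERS.get(volume_type, QOS_TIERS["standard"])

print(qos_for_volume_type("high"))  # {'min_iops': 2000, 'max_iops': 10000}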

  This is something impacting users at today's Ceph performance level.

  Looking at the future, once Bluestore becomes the default, there will
also be latency requirements from the crowd that wants to run databases
with RBD backends: both low latency and low jitter in that latency.
Rather than applying to all volumes, though, this will apply only to the
select ones backing RDBMSs. Well, at least in the case of a general-purpose
cluster.


 My hunch is that enterprise users who want hard QoS guarantees will
accept that a capacity-planning exercise is necessary: software can only
allocate existing capacity, not create more. Community users may instead
place more value on "fairness" in distributing existing resources. Just a
hunch at this point.

 Best -F

_
-- "You must try until your brain hurts —Elon Musk
(Federico L. Lucifredi) - federico at redhat.com - GnuPG 0x4A73884C

On Fri, Dec 2, 2016 at 2:01 PM, Sage Weil  wrote:
>
> Hi all,
>
> We're working on getting infrastructure into RADOS to allow for proper
> distributed quality-of-service guarantees.  The work is based on the
> mclock paper published in OSDI'10
>
> https://www.usenix.org/legacy/event/osdi10/tech/full_papers/Gulati.pdf
>
> There are a few ways this can be applied:
>
>  - We can use mclock simply as a better way to prioritize background
> activity (scrub, snap trimming, recovery, rebalancing) against client IO.
>  - We can use d-mclock to set QoS parameters (e.g., min IOPS or
> proportional priority/weight) on RADOS pools
>  - We can use d-mclock to set QoS parameters (e.g., min IOPS) for
> individual clients.
>
> Once the rados capabilities are in place, there will be a significant
> amount of effort needed to get all of the APIs in place to configure and
> set policy.  In order to make sure we build something that makes sense,
> I'd like to collect a set of user stories that we'd like to support so
> that we can make sure we capture everything (or at least the important
> things).
>
> Please add any use-cases that are important to you to this pad:
>
> http://pad.ceph.com/p/qos-user-stories
>
> or as a follow-up to this email.
>
> mClock works in terms of a minimum allocation (of IOPS or bandwidth; they
> are sort of reduced into a single unit of work), a maximum (i.e. simple
> cap), and a proportional weighting (to allocate any additional capacity
> after the minimum allocations are satisfied).  It's somewhat flexible in
> terms of how we apply it to specific clients, classes of clients, or types
> of work (e.g., recovery).  How we put it all together really depends on
> what kinds of things we need to accomplish (e.g., do we need to support a
> guaranteed level of service shared across a specific set of N different
> clients, or only individual clients?).
>
> Thanks!
> sage
>
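
To make the reservation / limit / weight triple in the quoted message
concrete, here is a toy sketch of the mClock tagging idea from the paper Sage
links. It illustrates the published algorithm only, not Ceph's actual
scheduler, and the client names and numbers are invented.

# Toy sketch of mClock-style tagging (reservation / weight / limit).
# Illustrative only; not Ceph's scheduler. Names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    reservation: float           # minimum IOPS guaranteed
    weight: float                # share of any spare capacity
    limit: float = float("inf")  # hard cap on IOPS
    r_tag: float = 0.0
    l_tag: float = 0.0
    p_tag: float = 0.0

    def tag_request(self, now: float) -> None:
        # Each request advances the client's tags by the inverse of the
        # corresponding rate, never falling behind wall-clock time.
        self.r_tag = max(now, self.r_tag + 1.0 / self.reservation)
        self.l_tag = max(now, self.l_tag + 1.0 / self.limit)
        self.p_tag = max(now, self.p_tag + 1.0 / self.weight)

def pick_next(clients, now):
    # Reservation phase first: serve whoever is behind on its minimum.
    behind_on_minimum = [c for c in clients if c.r_tag <= now]
    if behind_on_minimum:
        return min(behind_on_minimum, key=lambda c: c.r_tag)
    # Otherwise share spare capacity by weight, skipping capped clients.
    under_cap = [c for c in clients if c.l_tag <= now]
    if under_cap:
        return min(under_cap, key=lambda c: c.p_tag)
    return None  # everyone is at its cap; idle until the clock catches up

# Two Cinder-style volumes, echoing the "standard"/"high" tiers above.
clients = [Client("standard-vol", reservation=500,  weight=1.0),
           Client("high-vol",     reservation=2000, weight=4.0)]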


Re: [ceph-users] Introducing Learning Ceph : The First ever Book on Ceph

2015-02-17 Thread Federico Lucifredi

To be exact, the platform used throughout is CentOS 6.4... I am reading my copy 
right now :)

Best -F

- Original Message -
From: SUNDAY A. OLUTAYO olut...@sadeeb.com
To: Andrei Mikhailovsky and...@arhont.com
Cc: ceph-users@lists.ceph.com
Sent: Monday, February 16, 2015 3:28:45 AM
Subject: Re: [ceph-users] Introducing Learning Ceph : The First ever Book on 
Ceph

I bought a copy some days ago; great job, but it is Red Hat specific.

Thanks, 

Sunday Olutayo 