On Jul 22, 2013, at 9:34 AM, David Hadas <[email protected]> wrote:
> Hi,
>
> In Portland, we discussed a somewhat related issue of having multiple
> replication levels in one Swift cluster.
> It may be that a provider would not wish to expose the use of EC or the
> level of replication used. For example, a provider may offer a predefined
> set of service levels such as "Gold", "Silver", "Bronze", and "Aluminum"
> from which a user can choose. The provider may decide how each level is
> implemented (as a silly example: Gold is 4-way replication, Silver is 3-way
> replication, Bronze is EC, and Aluminum is a single replica without EC).
>
> Does it make sense to consider EC as an implementation of a certain service
> level (the same as, for example, the choice of the number of replicas)?
Yes, that's exactly what I'm thinking.
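
To make that concrete, here is a hypothetical swift.conf fragment showing how
named service levels could map onto implementations. None of these option
names exist today; this is purely illustrative:

    [storage-policy:0]
    name = gold
    type = replication
    replicas = 4

    [storage-policy:1]
    name = silver
    type = replication
    replicas = 3

    [storage-policy:2]
    name = bronze
    type = erasure_coding
    ec_data_fragments = 10
    ec_parity_fragments = 4

    [storage-policy:3]
    name = aluminum
    type = replication
    replicas = 1

The user only ever sees the names; what's behind each one stays the
provider's business.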
>
> Now we are back to the question of the right unit in which to define this
> 'level of service' ("Gold", "Silver"...).
> Should the level of service be defined when the account is created, or
> should we allow a user to state:
> "I would like to have a container with Gold to keep my work, Bronze to keep
> my family pictures and videos, and Aluminum to keep a copy of all my music
> files"?
>
> If we choose to enable container service levels, we need to enable billing
> per level of service, such that a single account can be billed for its
> actual usage at each level of service. Maybe we even need all gathered
> statistics to be grouped by service level.
> I am not sure we can escape that even with account service levels.
Either at the account or the container level, the billing number generator
will need to correlate particular bytes with a particular service level. That
would happen in ceilometer, slogging, or whatever else people are using.
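
As a rough sketch of what that correlation might look like, assuming
(hypothetically) that the account listing one day reports a policy name per
container -- it doesn't today:

    from collections import defaultdict

    import swiftclient  # conn below is a swiftclient.Connection

    def bytes_by_service_level(conn):
        # Sum one account's stored bytes, grouped by service level.
        # Assumes the container listing grows a per-container policy
        # field -- hypothetical; no such field exists today.
        usage = defaultdict(int)
        _, containers = conn.get_account(full_listing=True)
        for container in containers:
            level = container.get('storage_policy', 'default')  # hypothetical key
            usage[level] += container['bytes']
        return dict(usage)

A billing pass would then just price each bucket separately.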
>
> DH
>
> On Thu, Jul 18, 2013 at 9:37 PM, John Dickinson <[email protected]> wrote:
> Yes, and I'd imagine that the normal default would be for replicated data.
>
> Moving the granularity from per-container to per-account, as Chmouel and
> Chuck suggested, is interesting too.
>
> --John
>
> On Jul 18, 2013, at 11:32 AM, Christian Schwede <[email protected]> wrote:
>
> > A solution to this might be to set the default policy as a configuration
> > setting in the proxy. If you want a replicated Swift cluster, just allow
> > that policy in the proxy and make it the default. The same goes for an EC
> > cluster: just set the allowed policy to EC. If you want both (and want to
> > let your users decide which policy to use), simply configure a list of
> > allowed policies, with the first one in the list serving as the default
> > in case users don't set a policy during container creation.
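> >
> > Something like this, with completely made-up option names, just to
> > illustrate the idea:
> >
> >     [app:proxy-server]
> >     use = egg:swift#proxy
> >     # hypothetical options -- nothing like this exists yet
> >     allowed_policies = replication-3x, ec-10-4
> >     default_policy = replication-3x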
> >
> > On 18.07.13 20:15, Chuck Thier wrote:
> >> I think you are missing the point. What I'm talking about is who
> >> chooses which data is EC and which is not. The point that I am trying
> >> to make is that the operators of Swift clusters should decide what data
> >> is EC, not the clients/users. How the data is stored should be totally
> >> transparent to the user.
> >>
> >> Now, if down the road we want to offer user-defined classes of storage
> >> (like how S3 does reduced redundancy), I'm cool with that; it just
> >> should be orthogonal to the implementation of EC.
> >>
> >> --
> >> Chuck
> >>
> >>
> >> On Thu, Jul 18, 2013 at 12:57 PM, John Dickinson <[email protected]> wrote:
> >>
> >> Are you talking about the parameters for EC, or the fact that
> >> something is erasure-coded vs. replicated?
> >>
> >> For the first, that's exactly what we're thinking: a deployer sets
> >> up one (or more) policies, calls them Alice, Bob, or whatever, and
> >> then the API client can set one of them on a particular container.
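> >>
> >> Roughly, from the client side it could look like this (the header
> >> name is hypothetical -- the API details aren't settled yet):
> >>
> >>     import swiftclient
> >>
> >>     conn = swiftclient.Connection(
> >>         authurl='http://swift.example.com/auth/v1.0',
> >>         user='test:tester', key='testing')
> >>     # hypothetical header: pick the "bob" policy for this container
> >>     conn.put_container('thumbnails',
> >>                        headers={'X-Storage-Policy': 'bob'})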
> >>
> >> This allows users who know what they are doing (i.e., those who know
> >> the tradeoffs and their data characteristics) to make good choices.
> >> It also allows deployers who want an automatic policy to set one up
> >> that migrates data.
> >>
> >> For example, a deployer may choose to run a migrator process that
> >> moves certain data from replicated to EC containers over time (and
> >> drops a manifest file in the replicated tier to point to the EC data
> >> so that the URL still works).
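> >>
> >> A minimal sketch of one migration step, assuming the EC container
> >> already exists and (re)using today's large-object manifest mechanism
> >> for the pointer:
> >>
> >>     import swiftclient
> >>
> >>     def migrate_object(conn, src_container, name, ec_container):
> >>         # Server-side copy of the object into the EC container.
> >>         conn.put_object(
> >>             ec_container, name, contents='',
> >>             headers={'X-Copy-From': '/%s/%s' % (src_container, name)})
> >>         # Replace the original with a zero-byte manifest that points
> >>         # at the copied data, so GETs on the old URL keep working.
> >>         conn.put_object(
> >>             src_container, name, contents='',
> >>             headers={'X-Object-Manifest': '%s/%s' % (ec_container, name)})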
> >>
> >> Like existing features in Swift (e.g., large objects), this gives
> >> users the ability to store their data flexibly through a nice
> >> interface, yet still get at some of the pokey bits underneath.
> >>
> >> --John
> >>
> >>
> >>
> >> On Jul 18, 2013, at 10:31 AM, Chuck Thier <[email protected]> wrote:
> >>
> >>> I'm with Chmouel, though. It seems to me that EC policy should be
> >>> chosen by the provider and not the client. For public storage
> >>> clouds, I don't think you can assume that all users/clients will
> >>> understand the storage/latency tradeoffs and benefits.
> >>>
> >>>
> >>> On Thu, Jul 18, 2013 at 8:11 AM, John Dickinson <[email protected]> wrote:
> >>> Check out the slides I linked. The plan is to enable an EC policy
> >>> that is then set on a container. A cluster may have a replication
> >>> policy and one or more EC policies. Then the user will be able to
> >>> choose the policy for a particular container.
> >>>
> >>> --John
> >>>
> >>>
> >>>
> >>>
> >>> On Jul 18, 2013, at 2:50 AM, Chmouel Boudjnah <[email protected]> wrote:
> >>>
> >>>> On Thu, Jul 18, 2013 at 12:42 AM, John Dickinson <[email protected]> wrote:
> >>>>> * Erasure codes (vs. replicas) will be set on a per-container
> >>>>>   basis
> >>>>
> >>>> I was wondering if there is any reason why it couldn't be on a
> >>>> per-account basis, as this would allow an operator to have
> >>>> different types of accounts with different pricing (i.e., tiered
> >>>> storage).
> >>>>
> >>>> Chmouel.
> >>>
_______________________________________________
OpenStack-dev mailing list
[email protected]
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev