On 07/26/2017 07:58 PM, Michael Bayer wrote:
On Jul 26, 2017 7:45 PM, "Jay Pipes" <jaypi...@gmail.com> wrote:

    On 07/26/2017 07:06 PM, Octave J. Orgeron wrote:

        Hi Michael,

        On 7/26/2017 4:28 PM, Michael Bayer wrote:


            it at all.
            thinking out loud

            oslo_db.sqlalchemy.types.String(255, mysql_small_rowsize=64)
            oslo_db.sqlalchemy.types.String(255, mysql_small_rowsize=sa.TINYTEXT)
            oslo_db.sqlalchemy.types.String(255, mysql_small_rowsize=sa.TEXT)


            So if you don't pass mysql_small_rowsize, nothing happens.


        I think the name mysql_small_rowsize is a bit misleading, since in one
        case we are changing the size and in the others the type. Perhaps:

        mysql_alt_size=64
        mysql_alt_type=sa.TINYTEXT
        mysql_alt_type=sa.TEXT

        with "alt" standing for alternate. What do you think?


    -1

    I think it should be specific to NDB, since that's what the override
    is for. I'd support something like:

      oslo_db.sqlalchemy.types.String(255, mysql_ndb_size=64)


OK, I give up on that fight, fine: mysql_ndb_xyz, but at least build it into a nicely named type. I know I come off as crazy, changing my mind and temporarily forgetting key details, but this is often how I internally come up with things...

Isn't that exactly what I'm proposing below? :)
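For what it's worth, the behavior being proposed can be sketched in plain Python. The kwarg name mysql_ndb_size comes from the thread; the function and the ndb_enabled flag are invented for illustration, and this is not oslo.db code:

```python
# Illustrative sketch only -- not oslo.db code. The idea: a String type
# whose rendered VARCHAR shrinks only when the NDB backend is in use and
# the override is given; otherwise the override is a no-op.

def string_ddl(length, mysql_ndb_size=None, ndb_enabled=False):
    """Render the VARCHAR fragment for a String(length) column.

    `mysql_ndb_size` and `ndb_enabled` are hypothetical names; if the
    override is absent or NDB is not in use, nothing changes.
    """
    if ndb_enabled and mysql_ndb_size is not None:
        return "VARCHAR(%d)" % mysql_ndb_size
    return "VARCHAR(%d)" % length

print(string_ddl(255))                                       # VARCHAR(255)
print(string_ddl(255, mysql_ndb_size=64, ndb_enabled=True))  # VARCHAR(64)
```

The point of the shape is the same as in the thread: a deployment not using NDB sees exactly the DDL it sees today.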

    Octave, I understand the desire to reduce some column sizes for NDB,
    given its table row size limitations. What I'm not entirely clear on
    is the reason to change the column *type* specifically for NDB.
    There are definitely cases where different databases have column
    types -- say, PostgreSQL's INET column type -- that don't exist in
    other RDBMSes. For those cases, the standard approach in SQLAlchemy
    is to create a concrete ColumnType class that translates the CREATE
    TABLE statement (and type compilation/coercion) to emit the
    supported column type when the RDBMS supports it, and otherwise
    defaults to something coerceable.

    An example of this can be seen here for how this is done for IPv4
    data in the apiary project:

    https://github.com/gmr/apiary/blob/master/apiary/types.py#L49
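The dialect-conditional compilation described above boils down to a decision like the following. This is a pure-Python sketch of that decision with invented names; a real implementation would subclass a SQLAlchemy type, as the apiary example does:

```python
# Sketch of the "preferred type if the dialect supports it, else a
# coerceable fallback" decision behind a dialect-aware column type.
# The SUPPORT table and function names are invented for illustration.

SUPPORT = {
    "postgresql": {"INET"},  # PostgreSQL has a native INET type
    "mysql": set(),          # MySQL does not
}

def column_type_for(dialect, preferred, fallback):
    """Pick `preferred` when the dialect supports it, else `fallback`."""
    if preferred in SUPPORT.get(dialect, set()):
        return preferred
    return fallback

print(column_type_for("postgresql", "INET", "VARCHAR(39)"))  # INET
print(column_type_for("mysql", "INET", "VARCHAR(39)"))       # VARCHAR(39)
```

(VARCHAR(39) is used as the fallback here because the textual form of an IPv6 address fits in 39 characters; the fallback choice is the illustrative part.)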

    I'd certainly be open to doing things like this for NDB, but I'd
    first need to understand why you chose to convert the column types
    for the columns that you did. Any information you can provide about
    that would be great.

    Best,
    -jay


    __________________________________________________________________________
    OpenStack Development Mailing List (not for usage questions)
    Unsubscribe:
    openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




