Salvatore,

I changed HEAD back to the version before my new one, and then tried to
upgrade, and I see this:
neutron-db-manage --config-file /opt/stack/neutron/etc/neutron.conf --service vpnaas upgrade HEAD

Traceback (most recent call last):
  File "/usr/local/bin/neutron-db-manage", line 10, in <module>
    sys.exit(main())
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 238, in main
    CONF.command.func(config, CONF.command.name)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 105, in do_upgrade
    run_sanity_checks(config, revision)
  File "/opt/stack/neutron/neutron/db/migration/cli.py", line 229, in run_sanity_checks
    script_dir.run_env()
  File "/usr/local/lib/python2.7/dist-packages/alembic/script.py", line 390, in run_env
    util.load_python_file(self.dir, 'env.py')
  File "/usr/local/lib/python2.7/dist-packages/alembic/util.py", line 243, in load_python_file
    module = load_module_py(module_id, path)
  File "/usr/local/lib/python2.7/dist-packages/alembic/compat.py", line 79, in load_module_py
    mod = imp.load_source(module_id, path, fp)
  File "/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py", line 86, in <module>
    run_migrations_online()
  File "/opt/stack/neutron-vpnaas/neutron_vpnaas/db/migration/alembic_migrations/env.py", line 67, in run_migrations_online
    engine = session.create_engine(neutron_config.database.connection)
  File "/usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py", line 112, in create_engine
    url = sqlalchemy.engine.url.make_url(sql_connection)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 186, in make_url
    return _parse_rfc1738_args(name_or_url)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/url.py", line 235, in _parse_rfc1738_args
    "Could not parse rfc1738 URL from string '%s'" % name)
sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''

Any ideas what is wrong here?
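
It looks like neutron_config.database.connection is coming back as an empty
string by the time env.py runs (maybe the config file I'm passing doesn't have
the [database] connection option set?). A quick illustration of the failure
mode, in case it helps - the mysql URL below is just a made-up example:

    from sqlalchemy.engine.url import make_url

    # A well-formed connection URL parses fine...
    make_url('mysql+pymysql://root:secret@127.0.0.1/neutron')
    # ...but an empty string raises the same error as in the traceback:
    # sqlalchemy.exc.ArgumentError: Could not parse rfc1738 URL from string ''
    make_url('')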

On Tue, Jul 7, 2015 at 10:05 AM Paul Michali <p...@michali.net> wrote:

>
> Yes, I wasn't using the --service option, so I suspect that is why my
> down_revision was wrong. In talking with Akihiro, I added a check to the pep8
> job and made sure that it fails if HEAD is wrong. It is:
> https://review.openstack.org/#/c/199082/ (of course that failed py27 -
> I've got to see if there was some recent breakage in the vpn repo, again).
>
> Regarding the migration, one of the new columns may be None, but there
> must be at least one IP version entry (there is an existing test in VPN for
> using a router w/o an external IP set). Since the new code will rely on
> these new fields, I'd like to populate them as part of the migration. I
> think it would be more complicated to handle during operation.
>
> Does anyone have examples of how to do queries of objects, from the
> migration upgrade() code?
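
For context, here's the sort of thing I have in mind for upgrade() - a rough,
untested sketch; the table and column names (vpnservices, routers,
ipallocations, gw_port_id) are from memory, so treat them as assumptions to
be verified:

    import netaddr
    import sqlalchemy as sa
    from alembic import op


    def upgrade():
        bind = op.get_bind()
        # Step 1: add the new (nullable) columns.
        op.add_column('vpnservices',
                      sa.Column('local_v4_ip', sa.String(64), nullable=True))
        op.add_column('vpnservices',
                      sa.Column('local_v6_ip', sa.String(64), nullable=True))
        # Step 2: walk each service's router -> gw_port -> fixed_ips and keep
        # the first address seen for each IP version.
        services = bind.execute(sa.text(
            "SELECT vs.id AS id, r.gw_port_id AS gw_port_id "
            "FROM vpnservices vs JOIN routers r ON r.id = vs.router_id"
        )).fetchall()
        for svc in services:
            if not svc.gw_port_id:
                continue
            ips = bind.execute(sa.text(
                "SELECT ip_address FROM ipallocations WHERE port_id = :port"),
                port=svc.gw_port_id).fetchall()
            values = {}
            for ip in ips:
                version = netaddr.IPAddress(ip.ip_address).version
                column = 'local_v4_ip' if version == 4 else 'local_v6_ip'
                values.setdefault(column, ip.ip_address)
            # Step 3: store the addresses back on the vpnservice row.
            if values:
                assignments = ", ".join("%s = :%s" % (c, c) for c in values)
                bind.execute(sa.text(
                    "UPDATE vpnservices SET %s WHERE id = :id" % assignments),
                    id=svc.id, **values)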
>
>
> Regards,
>
> PCM
>
> On Tue, Jul 7, 2015 at 9:02 AM Akihiro Motoki <amot...@gmail.com> wrote:
>
>> 2015-07-07 21:39 GMT+09:00 Henry Gessau <ges...@cisco.com>:
>>
>>>  On Tue, Jul 07, 2015, Paul Michali <p...@michali.net> wrote:
>>>
>>> Thanks Salvatore for the responses. See @PCM in-line...
>>>
>>>
>>>
>>>  On Tue, Jul 7, 2015 at 6:14 AM Salvatore Orlando <sorla...@nicira.com>
>>> wrote:
>>>
>>>> Some comments inline.
>>>>
>>>>  Salvatore
>>>>
>>>>    On 6 July 2015 at 20:00, Paul Michali <p...@michali.net> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>>  I have some urgent requests about migration that I'm hoping to get
>>>>> some info on. I'm working on a bug where I need to add two (related) 
>>>>> fields
>>>>> to a table for VPNaaS. Here's the objectives related to migration...
>>>>>
>>>>>  1) create local_v4_ip and lcoal_v6_ip fields in the vpnservice table
>>>>> 2) for each entry in the vpnservice table:
>>>>>     2.1) Get the router.gw_port.fixed_ips list
>>>>>     2.2) Determine the version of each fixed IP and store the first of
>>>>> each version (if any) into the appropriate new field.
>>>>>
>>>>>  I have created a migration file, and I changed the down_revision to
>>>>> be the number of the revision that is the first in the migration chain in
>>>>> the VPN repo.
>>>>>
>>>>>  Here are the many questions I have...
>>>>>
>>>>>  When I look in the VPN repo, the HEAD file has the version 'kilo',
>>>>> which is not the current head.
>>>>>
>>>>
>>>>>  Shouldn't it be the version number of the first file in the migration
>>>>> chain?
>>>>>
>>>>
>>>>      It should indeed. How are you generating the revision script?
>>>> Using neutron-db-manage it should be updated automatically [1]
>>>>
>>>
>>>  @PCM I ran neutron-db-manage, when in the neutron repo, and it
>>> assigned some version, but it was not the latest in the neutron-vpnaas repo.
>>>
>>> neutron-db-manage does not handle alembic branches in separate repos
>>> very well at all yet. I am working on updating it with
>>> https://review.openstack.org/198524 but I have quite a lot left to do.
>>>
>>
>> Yes, at the moment we have an implicit order for running alembic migrations:
>> first run the neutron db migration, and then the advanced service migrations.
>>
>> I do not fully understand how the alembic branch mechanism works, but
>> I think we can have a common ancestor with multiple branches,
>> and each branch can evolve independently.
>>
>>
>>>
>>>  I checked the VPN repo and there was a chain of versions, which I
>>> used to determine what the head should be and have set the version
>>> accordingly. However, in the current repo, head is set to "kilo", which
>>> appears to be incorrect.  The versions are:
>>>
>>>  56893333aa52
>>> kilo   <<< HEAD
>>> 3ea02b2a773e
>>> start_neutron_vpnaas
>>> None
>>>
>>> Ouch. That is an error, because https://review.openstack.org/190569
>>> should have updated HEAD but didn't.
>>>
>>> The version sequence (you can see it in any devstack run) is:
>>>
>>> INFO  [alembic.migration] Running upgrade  -> start_neutron_vpnaas,
>>> start neutron-vpnaas chain
>>> INFO  [alembic.migration] Running upgrade start_neutron_vpnaas ->
>>> 3ea02b2a773e, add_index_tenant_id
>>> INFO  [alembic.migration] Running upgrade 3ea02b2a773e -> kilo, kilo
>>> INFO  [alembic.migration] Running upgrade kilo -> 56893333aa52, fix
>>> identifier map fk
>>>
>>
>> It seems we don't have an appropriate check for the HEAD revision in at
>> least the VPNaaS repo.
>> Paul and I just discussed it. We need to improve the check too.
>>
>>
>>> Should I do a separate commit that fixes the HEAD file, or just fix it
>>> as part of the bug fix I'm working on?
>>>
>>> Yes, you should immediately submit a patch to change HEAD to
>>> 56893333aa52.
>>>
>>>
>>>  BTW, at one point, after having correctly set the HEAD and versions in
>>> my new migration file, I think I ran neutron-db-manage check_migration, and
>>> I think it set the HEAD to my version, but it did that in the neutron repo,
>>> and not the VPN repo.  I might have been running from the wrong repo?
>>>
>>> I am working on updating the devref docs for this process. Things have
>>> changed quite a bit with the alembic branches in separate repos.
>>>
>>>
>>>
>>>
>>>>   For my commit, I'm assuming I change the HEAD file to use my
>>>>> migration file's version?
>>>>>
>>>>
>>>>      You can do that manually too, yes.
>>>>
>>>>
>>>>>
>>>>>  I set HEAD to my migration file, and my file has a down revision of
>>>>> the previous head's revision. If I run 'neutron-db-manage --config-file
>>>>> ../neutron/etc/neutron.conf --config-file
>>>>> ../neutron/etc/neutron/plugins/ml2/ml2_conf.ini check_migration' there is
>>>>> no output so I guess that is OK.
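
(For reference, the relevant pieces end up looking roughly like this - the new
revision id below is just a placeholder, and the paths are from memory:)

    # neutron_vpnaas/db/migration/alembic_migrations/versions/HEAD
    <new_revision_id>

    # top of the new migration script in the versions/ directory
    revision = '<new_revision_id>'     # whatever id alembic generated
    down_revision = '56893333aa52'     # the previous head of the vpnaas chain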
>>>>>
>>>>>  As I develop my new migration file, is there a way that I can test
>>>>> it (running neutron-db-manage, maybe)?
>>>>>
>>>>
>>>>      When I test migrations I usually dump the database, run the
>>>> migration with neutron-db-manage upgrade HEAD (I think it's not necessary
>>>> to specify HEAD), and restore the db from the dump if the migration fails.
>>>>
>>>>
>>>>>  Is there a way to run the migration file under the debugger, as well
>>>>> (importing pdb, for example)?
>>>>>
>>>>
>>>>      The migration process is just like any python application, so I
>>>> guess you can debug it with pdb.
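
(i.e. something like this dropped at the top of upgrade() in the migration
script should do it, since neutron-db-manage runs in the foreground:)

    import pdb; pdb.set_trace()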
>>>>
>>>
>>>  @PCM Ah, so use "neutron-db-manage upgrade HEAD". That was the piece
>>> that was missing. I take it there are no specific unit tests of the
>>> migration files?
>>>
>>>
>>>
>>>>
>>>>>
>>>>>  In the migration, I can add the columns needed. What's the best way
>>>>> to fill out those fields - using raw SQL queries, or creating a Session
>>>>> object and accessing the VpnService object's router object?
>>>>>
>>>>
>>>>      If the default value for the column is not enough, and you need
>>>> to specify a value which depends on other values in the same row, I would
>>>> prefer plain SQL statements, but if that becomes cumbersome I guess it's ok
>>>> to use sqlalchemy's session.
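
(If the Session route turns out to be easier, a minimal version bound to the
migration connection would presumably look something like this - again just a
sketch, with the table name assumed:)

    import sqlalchemy as sa
    from sqlalchemy import orm
    from alembic import op

    session = orm.Session(bind=op.get_bind())
    for svc_id, router_id in session.execute(
            sa.text("SELECT id, router_id FROM vpnservices")):
        pass  # look up the router's gw_port fixed_ips here and update the row
    session.commit()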
>>>>
>>>>
>>>>>  I see there is an op.get_bind() call and then engine.execute(), but
>>>>> could use some help on the best way to extract the needed queries (I need
>>>>> to access the vpnservice's router, and then access the (Port) gw_port
>>>>> relationship, and from that access the (IPAllocation) fixed_ips list).
>>>>>
>>>>
>>>>      Perhaps you can point us to the review pages on gerrit, and we
>>>> can provide detailed comments there.
>>>>
>>>
>>>  @PCM Yeah, I haven't pushed it up yet. I have a few more changes to
>>> make, and should be able to get it up in a few days. The LP bug is 1464387.
>>>
>>>  Essentially, in the vpnservices table, I'm adding IPv4 and/or IPv6
>>> addresses for the "local" end of VPN connections that will be established.
>>> This is to allow alternative VPN implementations (appliances, separate S/W,
>>> H/W, VM-based VPNs, etc.) to specify addresses different from what is
>>> available on the Neutron router.
>>>
>>>  However, for the reference implementation, we'll use the Neutron
>>> router's fixed_ips list (as is done today), and to handle the migration,
>>> I'm thinking the following is needed:
>>>
>>>  1) Create the new columns.
>>> 2) Identify the router for that service and obtain its GW fixed_ips
>>> list.
>>> 3) Pick the first IPv4 address (if any) and the first IPv6 address (if
>>> any), and store them in the new columns.
>>>
>>>  So I need to form a query and code to do this.
>>>
>>>
>>>
>>>>
>>>>>
>>>>>  Appreciate any advice here on how to debug the migration stuff...
>>>>>
>>>>>  Paul Michali (pc_m)
>>>>>
>>>>
>>>>      [1]
>>>> http://git.openstack.org/cgit/openstack/neutron/tree/neutron/db/migration/cli.py#n124
>>>>
>>>>
>>>>>
>>>>>
>>>>>
>>>
>>>
>>>
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
