On Sun, Feb 06, 2011 at 12:16:33PM +0100, H.Merijn Brand wrote:
> On Sat, 5 Feb 2011 12:29:09 +0000, Tim Bunce <tim.bu...@pobox.com>
> wrote:
> >
> > As you note above, the current trace settings are encoded into an int:
> >
> >     # 0xddDDDDrL  (driver, DBI, reserved, Level)
> >
> > Where L is the DBI trace level (0-16) and the rest are bit flags.
> > The 4 reserved bits could be used as a driver trace level (0-16).
>
>     # 0xddlDDDrL  (driver, driver-level, DBI, reserved, Level)
>
> would be perfect in my world.
That's fine.

> l then is to the DBD what "L" is to the DBI.
>
> How many bits in DDDD do you think will ever be used? If we now use 2
> or 3, wouldn't it be better to change to
>
>     # 0xddlDDDrL ?
>
> That would mean we stay 100% backward compatible (as the first D of
> DDDD was only reserved but never used).

Yeap.

> With this scheme, it is extremely easy for all drivers to update their
> docs to match the DBI. Those drivers that used a level will just have
> to use the 'l'; those that used flags will (still) have to use the dd.
>
> We could then document the use and support of DBD_VERBOSE to
> automatically translate to the ddl part. If a DBD already supported
> $DBD_VERBOSE instead of "ddl", it will work just as it did. If the DBD
> updates to the new scheme (requiring a DBI 1.xxx that supports it),
> then the "upgrade" is automatic and doesn't change a thing from the
> user's point of view.
>
> Am I being messy? Or does this all make sense?

I think it makes sense. Thanks.

Tim.
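[For readers following along: a rough sketch of how the proposed 0xddlDDDrL word would split into fields. This is illustrative Python, not the actual DBI/DBD implementation (which lives in C and Perl); the function names and the exact bit positions assumed here are mine, inferred from the hex-digit layout above.]

```python
# Sketch of the proposed 0xddlDDDrL trace-word layout (8 hex digits,
# most significant first):
#   dd  - driver flag bits    (bits 24-31)
#   l   - driver trace level  (bits 20-23)
#   DDD - DBI flag bits       (bits 8-19)
#   r   - reserved            (bits 4-7)
#   L   - DBI trace level     (bits 0-3)

def decode_trace(word):
    """Split a 32-bit trace word into the proposed fields."""
    return {
        "driver_flags": (word >> 24) & 0xFF,
        "driver_level": (word >> 20) & 0xF,
        "dbi_flags":    (word >> 8)  & 0xFFF,
        "reserved":     (word >> 4)  & 0xF,
        "dbi_level":    word & 0xF,
    }

def encode_trace(driver_flags=0, driver_level=0, dbi_flags=0, dbi_level=0):
    """Pack the fields back into one trace word (reserved nibble left 0)."""
    return ((driver_flags & 0xFF) << 24
            | (driver_level & 0xF) << 20
            | (dbi_flags & 0xFFF) << 8
            | (dbi_level & 0xF))

# Example: driver level 3, DBI level 2, no flags set.
w = encode_trace(driver_level=3, dbi_level=2)
print(hex(w))  # prints 0x300002
```

Because the old DDDD flag nibbles were never fully used, a word written under the old scheme with the first D zero decodes identically under the new one, which is the backward-compatibility point made above.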