On Sat, Oct 16, 2021 at 8:22 AM Jeremiah Paige <ucod...@gmail.com> wrote:
>
> Here is a pseudo-program showing where I would like to use this token in
> my own code if it existed. I think besides the cases where one is forced to
> always repeat the variable name as a string (namedtuple, NewType) this
> is an easy way to express clear intent to link the variable name to either
> its value or original source.
>
> >>> REGION = os.getenv(<<<)
> >>> db_url = config[REGION][<<<]
> >>>
> >>> name = arguments.get(<<<)
> >>>
> >>> con = connect(db_url)
> >>> knights = <<<
> >>> horse = <<<
> >>> con.execute(f"SELECT * FROM {knights} WHERE {horse}=?", (name,))
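Spelled out in today's Python (assuming <<< would simply evaluate to the
name of the assignment target as a string literal), that example reads:

REGION = os.getenv("REGION")
db_url = config[REGION]["db_url"]

name = arguments.get("name")

con = connect(db_url)
knights = "knights"
horse = "horse"
con.execute(f"SELECT * FROM {knights} WHERE {horse}=?", (name,))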
Toys like this often don't sell the idea very well, because there's a
solid criticism of every example:

1) The environment variable REGION shouldn't be assigned to a variable
named REGION, because it's not a constant. In real-world code, I'd be
more likely to write >> region = os.getenv('REGION') << which wouldn't
work with this magic token.

2) I'd be much more likely to put the entire config block into a
variable >> cfg = config[os.getenv("REGION")] << and then use
cfg.db_url for all config variables, so they also wouldn't be doubling
up the names.

3) Not sure what you're doing with arguments.get(), but if that's
command line args, it's way easier to wrap everything up and make them
into function parameters.

4) I've no idea why you'd be setting knights to the string literal
"knights" outside of a toy. If it's for the purpose of customizing the
table name in the query, wouldn't it be something like
>> table = "knights" << ?

I'm sure there are better examples than these, but these ones really
aren't a great advertisement.

> Using the new token like this will remove bugs where the variable name was
> spelled correctly, but the string doing the lookup has a typo. Admittedly this
> is a small set of bugs, but I have run into them before. Where I see this being
> a bigger advantage is purposefully linking variables names within python to
> names outside, making it easier to refactor and easier to trace usage across
> an entire service and across different environments.

Yes. I agree in principle, but what I usually end up with is either
inverting the mapping, or using a class. Here are two real-world
examples from a couple of tools of mine:

@cmdline
def confirm_user(id, hex_key):
    """Attempt to confirm a user's email address

    id: Numeric user ID (not user name or email)
    hex_key: Matching key to the one stored, else the confirmation fails
    """

The cmdline decorator is built on top of argparse and examines the
function's name and arguments to set up the argparse config. The main()
function then processes arguments and calls a function as appropriate.
If you don't want any name replication at all, you could use a non-type
annotation like this:

@cmdline
def confirm_user(
    id: "Numeric user ID (not user name or email)",
    hex_key: "Matching key to the one stored, else the confirmation fails",
):
    """Attempt to confirm a user's email address"""

This is what I mean by inverting the mapping. Instead of lines like
>> hex_key = arguments.get("hex_key") << there are generic handlers
that use func(**args) to map all necessary arguments directly.

The other example is a use (abuse?) of class syntax to provide names:

class Heavy_Encased_Frame(Manufacturer):
    Modular_Frame: 8
    Encased_Industrial_Beam: 10
    Steel_Pipe: 36
    Concrete: 22
    time: 64
    Heavy_Modular_Frame: 3

It effectively forms a DSL that takes advantage of the names that
classes have, and the way that they can "contain" a series of
directives, which will be retained with name and value.
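To make the first one concrete, here's a rough sketch of how such a
decorator can be built on inspect and argparse. This is NOT the actual
code from my tool, just a minimal stand-in for the idea, and the
_commands registry is an invented name:

import argparse
import inspect

_commands = {}  # filled in by the decorator

def cmdline(func):
    # The function's own name becomes the subcommand name, and its
    # parameter names become the argument names; nothing is repeated.
    _commands[func.__name__] = func
    return func

def main():
    parser = argparse.ArgumentParser()
    sub = parser.add_subparsers(dest="command", required=True)
    for name, func in _commands.items():
        p = sub.add_parser(name, help=func.__doc__)
        for param in inspect.signature(func).parameters:
            p.add_argument(param)
    args = vars(parser.parse_args())
    func = _commands[args.pop("command")]
    func(**args)  # generic dispatch; no hex_key = args["hex_key"] anywhere

With something like that, the def line of confirm_user is the only
place its argument names are ever written.

The class version works because annotation-only statements in a class
body are collected into __annotations__ as a {name: value} mapping (as
long as from __future__ import annotations isn't in effect), so the
base class can simply pick them up. Again, this is a guess at the shape
rather than the real Manufacturer, and "recipe" is just an illustrative
attribute name:

class Manufacturer:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Collect the subclass's directives, names and values intact
        # (assumes the subclass declares annotations of its own).
        cls.recipe = dict(cls.__annotations__)

With that in place, Heavy_Encased_Frame.recipe comes out as
{'Modular_Frame': 8, 'Encased_Industrial_Beam': 10, ...} without any of
those names ever being written as strings.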
Can you find some real-world examples where you're frequently doing the
sorts of assignment that this new syntax would help with?

> For the other use, in factory functions, I believe we have just come to accept
> that it is okay to have to repeat ourselves to dynamically generate certain
> objects in a dynamic language. The fact is that variable names are relevant
> in python and can be a useful piece of information at runtime as well as
> compile time or for static analysis. This is why some objects have a
> __name__: it is useful information despite the fact it may not always be
> accurate.
>
> >>> def foo(): pass
> >>>
> >>> bar = foo
> >>> del foo
> >>> bar.__name__
> 'foo'
>
> It may not be incredibly common but it is a power that the compiler has that
> is not really available to the programmer. And not every place where variable
> name access can be used would benefit from being implemented with the
> large class object and the complex implementation of a metaclass.

Oh, I don't think anyone will disagree with you on that :) It's
extremely helpful with functions and classes, and sometimes, the class
statement can serve other duties. Proper use of a decorator, metaclass,
or parent class can turn a class into something quite different.

ChrisA
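PS. For the namedtuple case in particular, the class form in typing
already avoids repeating the name, since the class statement supplies
it:

from typing import NamedTuple

class Point(NamedTuple):
    x: int
    y: int

print(Point.__name__)  # 'Point', without the name ever appearing as a string

That's one small example of the class statement being turned into
something quite different.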