I have some queries that utilize instr wrapped by substr, but the old SQLite version shipped with Python 2.7.5 doesn't have instr support.
Has anyone encountered this and utilized other existing functions
within the shipped 3.6.21 sqlite version to accomplish this?
Thanks,
jlc
--
Sorry guys, forgot about create_function...
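For anyone landing here later, a minimal sketch of that create_function approach, assuming the missing instr() is the only gap:

import sqlite3

def py_instr(haystack, needle):
    # Emulate SQLite's instr(): 1-based position of needle, 0 if not found.
    if haystack is None or needle is None:
        return None
    return haystack.find(needle) + 1

conn = sqlite3.connect(':memory:')
conn.create_function('instr', 2, py_instr)
row = conn.execute(
    "SELECT substr('hello world', instr('hello world', 'world'))").fetchone()
# row == ('world',)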
--
I posted this to the sqlite list but I suspect there are more C oriented users on that list than Python; hopefully someone here can shed some light on this one.
I have created a Python module that I import within several other modules that simply opens a connection to an sqlite file and defines
I have a couple handlers applied to a logger for a file and console destination.
Default levels have been set for each, INFO+ to console and anything to file.
How does one prevent logging.exception from going to a specific handler when
it falls within the desired levels?
Thanks,
jlc
--
Speaking to the OP: personally, I don't like the approach of putting data
access methods at the module level to begin with. I'd rather use a class.
Just because it makes sense to have a singleton connection now doesn't mean it
will always make sense as your application grows.
In fact, the
Oh hai - as I was reading the documentation, look what I found:
http://docs.python.org/2/library/logging.html#filter
Methinks that should do exactly what you want.
Hi Wayne,
I was too hasty when I looked at filters as I didn't think they could do
what I wanted. Turns out a logging object sent
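For the archive, a minimal sketch of the handler-level filter idea, assuming the goal is to keep logging.exception output off the console handler while the file handler still records it:

import logging

class NoTracebackFilter(logging.Filter):
    # Reject any record carrying exception info, i.e. logging.exception calls.
    def filter(self, record):
        return record.exc_info is None

log = logging.getLogger('app')
log.setLevel(logging.DEBUG)

console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.addFilter(NoTracebackFilter())
log.addHandler(console)

logfile = logging.FileHandler('app.log')
logfile.setLevel(logging.DEBUG)
log.addHandler(logfile)

try:
    1 / 0
except ZeroDivisionError:
    log.exception('boom')   # reaches the file handler, not the console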
You can probably do something similar using sub commands
(http://docs.python.org/2/library/argparse.html#sub-commands).
The problem here is that argparse does not pass the subparser into the
parsed args and shared args between subparsers need to be declared
each time. Come execution time, when
I think you are looking for exclusive groups:
http://docs.python.org/2.7/library/argparse.html#argparse.add_mutually_exclusive_group
No. That link's first doc line for that method shows the very point we are all discussing:
Create a mutually exclusive group. argparse will make sure that
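For readers following along, a minimal sketch of what add_mutually_exclusive_group enforces; the options here are illustrative:

import argparse

parser = argparse.ArgumentParser()
group = parser.add_mutually_exclusive_group()
group.add_argument('--fast', action='store_true')
group.add_argument('--slow', action='store_true')

parser.parse_args(['--fast'])             # Namespace(fast=True, slow=False)
# parser.parse_args(['--fast', '--slow']) # error: arguments not allowed together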
I'm using Python 2.7 under Windows and am trying to run a command line
program and process the programs output as it is running. A number of
web searches have indicated that the following code would work.
import subprocess
p = subprocess.Popen(D:\Python\Python27\Scripts\pip.exe list
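For what it's worth, a minimal sketch of streaming the child's output line by line instead of waiting on communicate(); the path and command are taken from the post, and buffering in the child can still delay lines:

import subprocess

cmd = [r'D:\Python\Python27\Scripts\pip.exe', 'list']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# Consume lines as the child produces them rather than after it exits.
for line in iter(p.stdout.readline, b''):
    print(line.rstrip())
p.stdout.close()
p.wait()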
I have been battling an issue hopefully someone here has insight with.
I have a database with a few tables I perform a query against with some
joins against columns collated with NOCASE that leverage = comparisons.
Running the query on the database opened in sqlitestudio returns the
results in
This pragma speeds up most processes 10-20 times (yes 10-20):
pragma synchronous=OFF
See the SQLITE documentation for an explanation.
I've found no problems with this setting.
Aside from database integrity and consistency? :) I have that one set
to OFF as my case mandates data processing and
I need to convert a proprietary MS Access based printing solution into something I can maintain. Seems there is plenty available for generating barcodes in Python, so for the persons who have been down this road I was hoping to get a pointer or two.
I need to create some type of output,
If that's a bit heavyweight (and confusing; it's not all free software,
since some of it is under non-free license terms), there are other
options.
pyBarcode URL:http://pythonhosted.org/pyBarcode/ says it's a
pure-Python library that takes a barcode type and the value, and
generates an SVG of the
The default converter attribute uses localtime, but often under Windows when I add an additional logger, create a new file handler and set a formatter, the time switches to UTC.
Obviously redefining a new converter class does nothing as the default is what I wanted to start with, localtime.
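In case it helps, a minimal sketch of pinning one Formatter instance to localtime, assuming something else in the process flipped the class-level converter to time.gmtime:

import logging
import time

handler = logging.FileHandler('app.log')
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
# Per-instance override: this formatter stays on localtime regardless of
# what Formatter.converter was set to globally.
formatter.converter = time.localtime
handler.setFormatter(formatter)
logging.getLogger('app').addHandler(handler)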
with open(self.full_path, 'r') as input, open(self.output_csv, 'ab') as output:
    fieldnames = (...)
    csv_writer = DictWriter(output, fieldnames)
    # Call csv_writer.writeheader() if file is new.
    csv_writer.writerows(my_dict)
I'm wondering what's the best
Like Victor says, that opens him up to race conditions.
Slim chance, it's no more possible than it happening in the time try/except
takes to recover an alternative procedure.
with open('in_file') as in_file, open('out_file', 'ab') as outfile_file:
    if os.path.getsize('out_file'):
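A sketch of deciding on the header from the handle already held rather than a separate os.path.getsize() call, assuming Python 2.7 and illustrative field names; this narrows the race window but does not remove it entirely:

import os
from csv import DictWriter

fieldnames = ('host', 'user', 'status')
row = {'host': 'srv01', 'user': 'jlc', 'status': 'ok'}

with open('report.csv', 'ab') as output:
    csv_writer = DictWriter(output, fieldnames)
    output.seek(0, os.SEEK_END)
    if output.tell() == 0:          # file is empty, so it needs a header
        csv_writer.writeheader()
    csv_writer.writerow(row)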
Any thoughts on what we're doing wrong?
Building them yourself:)
Try iuscommunity.org for prebuilt packages...
--
Trying to robustly parse a string that will have key/value pairs separated
by three pipes, where each additional key/value (if more than one exists)
will be delineated by four more pipes.
string = 'key_1|||value_1||||key_2|||value_2'
regex =
Regexes may be overkill here. A simple string split might be better:
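Something along these lines, assuming four pipes separate the pairs and three separate each key from its value:

s = 'key_1|||value_1||||key_2|||value_2'
pairs = dict(chunk.split('|||', 1) for chunk in s.split('||||'))
# pairs == {'key_1': 'value_1', 'key_2': 'value_2'}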
Yup, and much more robust, just what I was looking for.
Thanks everyone!
jlc
--
I get a list of dicts as output from a source that I then need to extract various dicts out of. I can easily extract the dict of choice based on it containing a key with a certain value using a list comp, but I was hoping to use a dict comp so the output is not contained within a list.
reduce(lambda
{k: v for d in my_list if d['key'] == value for (k, v) in d.items()}
Ugh, had part of that backwards:) Nice!
However, since you say that all dicts have a unique value for
z['key'], you should never need to actually merge two dicts, correct?
In that case, why not just use a plain for loop to
You could put the loop into a helper function, but if you are looping
through the same my_list more than once why not build a lookup table
my_dict = {d[key]: d for d in my_list}
and then find the required dict with
my_dict[value]
I suppose, what I failed to clarify was that for each list of
I am writing a class to provide a db backed configuration for an application. In my program's code, I import the class and pass the ODBC params to the class for its __init__ to instantiate a connection.
I would like to create a function to generically access a table and provide an iterator. How
Have the method yield instead of returning:
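A minimal sketch, using sqlite3 in place of the ODBC connection from the post and an illustrative table name:

import sqlite3

class Config(object):
    def __init__(self, db_path):
        self.conn = sqlite3.connect(db_path)

    def iter_table(self, table):
        # A generator method: callers just do `for row in cfg.iter_table('x')`.
        # Identifiers cannot be parameterised, so `table` must be trusted.
        cursor = self.conn.cursor()
        cursor.execute('SELECT * FROM %s' % table)
        for row in cursor:
            yield row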
Thanks, that was simple, I was hung up on implementing magic methods.
Thanks for the pointers guys!
jlc
--
It's probably best if you use separate cursors anyway. Say you have
two methods with a shared cursor:
def iter_table_a(self):
    self.cursor.execute("SELECT * FROM TABLE_A")
    yield from self.cursor

def iter_table_b(self):
    self.cursor.execute("SELECT * FROM
When you use optional named arguments in a function, how do you deal with the incorrect assignment when only some args are supplied?
If I do something like:
def my_func(self, **kwargs):
then handle the test cases with:
if not kwargs.get('some_key'):
    raise SyntaxError
or:
Don't use kwargs for this. List out the arguments in the function
spec and give the optional ones reasonable defaults.
I only use kwargs myself when the set of possible arguments is dynamic
or unknown.
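A minimal sketch of the explicit-defaults style, with illustrative names:

def connect(host, port=5432, timeout=30, use_ssl=False):
    # Explicit defaults document the optional arguments; callers override
    # only what differs, and a typo raises TypeError instead of vanishing
    # into kwargs.
    pass

connect('db01')                    # all defaults
connect('db01', use_ssl=True)      # override a single option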
Gotcha, but when the inputs to some keywords are similar, if the function is called
I have a need for a script to hold several tuples with three values, two text
strings and a lambda. I need to index the tuple based on either of the two
strings. Normally a database would be ideal but for a self-contained script
that's a bit much.
Before I re-invent the wheel, are there any
Not entirely sure I understand you, can you post an example?
If what you mean is that you need to locate the function (lambda) when
you know its corresponding strings, a dict will suit you just fine.
Either maintain two dicts for the two separate strings (eg if they're
name and location and
How about two dictionaries, each containing the same tuples for
values? If you create a tuple first, then add it to both dicts, you
won't have any space-wasting duplicates.
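A minimal sketch of that layout, with illustrative tuples of (name, location, function):

records = [
    ('alpha', 'siteA', lambda x: x + 1),
    ('beta',  'siteB', lambda x: x * 2),
]

by_name = {rec[0]: rec for rec in records}
by_location = {rec[1]: rec for rec in records}

assert by_name['alpha'] is by_location['siteA']   # same tuple, no copies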
Thanks guys.
--
I sent off a msg to the reportlab list but didn't find an answer, hoping someone
here might have come across this...
I am generating a table to hold text oriented by the specification of the label
it gets printed on. I need to compress the vertical size of the table a little
more but the larger
I have a script that accepts cmdline arguments and receives input via stdin.
I have a unit test for it that uses Popen to setup an environment, pass the args
and provide the stdin.
Problem is obviously this does nothing for providing coverage. Given the above
specifics, anyone know of a way to
So, back to my original question; what do you mean by providing
coverage?
Hi Roy,
I meant touch every line, such as what https://pypi.python.org/pypi/coverage
measures.
As the script is being invoked with Popen, I lose that luxury and only gain the assertion tests, but that of course doesn't show me untested branches.
Should have read the docs more thoroughly, works quite nice.
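For reference, the docs in question are presumably coverage.py's subprocess support; a hedged sketch of the moving parts (see the coverage docs for the exact setup for your layout):

# sitecustomize.py (or a .pth file) importable by the child interpreter:
import coverage
coverage.process_startup()

# in the unit test, point the child at a coverage config before Popen:
import os
child_env = dict(os.environ, COVERAGE_PROCESS_START='.coveragerc')
# ...then pass env=child_env to subprocess.Popen(...)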
jlc
--
I have a Python 3 argparse implementation that is invoked as a method from an imported class within a user's script __main__.
When argparse is set up in __main__ instead, all the help switches produce help then exit.
When a help switch is passed based on the above implementation, they are
I have a caching non-data descriptor that stores values in the implementing class instance's __dict__.
Something like:
class Descriptor:
    def __init__(self, func, name=None, doc=None):
        self.__name__ = name or func.__name__
        self.__module__ = func.__module__
        self.__doc__
You're going to have to subclass list if you want to intercept its
methods. As I see it, there are two ways you could do that: when it's
set, or when it's retrieved. I'd be inclined to do it in __set__, but
either could work. In theory, you could make it practically invisible
- just check to
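For reference, a minimal sketch of the caching pattern the question describes: a non-data descriptor (no __set__) that stores its computed value in the instance __dict__, so later lookups bypass the descriptor entirely; names are illustrative:

class CachedAttribute(object):
    def __init__(self, func, name=None, doc=None):
        self.func = func
        self.__name__ = name or func.__name__
        self.__doc__ = doc or func.__doc__

    def __get__(self, instance, owner):
        if instance is None:
            return self
        value = self.func(instance)
        # The instance attribute now shadows the descriptor on later access.
        instance.__dict__[self.__name__] = value
        return value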
But on Windows when I use the official Python 3.3 32-bit binary from
www.python.org this is not enabled.
For an unobtrusive way [1] to gain this, see apsw. For what it's worth, I prefer
this package over the built in module.
Python 3.3.3 (v3.3.3:c3896275c0f6, Nov 18 2013, 21:19:30) [MSC
I am documenting a few classes with Sphinx that utilize methods decorated
with custom descriptors. These properties return data when called and Sphinx
is content with a :returns: and :rtype: markup in the properties doc string.
They also accept input, but parameter (not really applicable) nor var
I have a module that has one operation that benefits greatly from being multiprocessed.
It's a console based module and as such I have a stream handler and filter associated with the console; obviously the mp based instances need special handling, so I have been experimenting with a socket server
Maybe check out logstash (http://logstash.net/).
That looks pretty slick, I am constrained to using something provided by the
packaged modules
in this scenario.
I think I have it pretty close except for the fact that the LogRecordStreamHandler from the cookbook raises an exception when the sending
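For reference, a minimal sketch of the worker-side configuration that pairs with the cookbook's TCP receiver; the host and levels are illustrative:

import logging
import logging.handlers

def configure_worker_logging():
    # Each multiprocessing worker ships records to the TCP log server
    # instead of owning console/file handlers itself.
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT))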
I am trying to patch a method of a class and it's proving to be less than trivial.
The module I am writing a test for, ModuleA, imports another ModuleB and instantiates a class from it. Problem is, ModuleA incorporates multiprocessing queues and I suspect I am missing the patch as the object in
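The usual rule with mock is to patch the name where it is looked up (ModuleA's reference), not where it is defined; a sketch with an illustrative class name:

from unittest import mock   # or `import mock` on Python 2

# ModuleA does `from ModuleB import Worker`, so patch ModuleA's copy of it:
with mock.patch('ModuleA.Worker') as fake_worker:
    fake_worker.return_value.run.return_value = 'stubbed'
    # exercise ModuleA here; any Worker() it creates is the mock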
How does one satisfy a lint/type checker with the return value of a class method decorated with a descriptor? It returns a dict, and I want the type hinting to suggest this versus the str|unknown it defaults to.
Thanks,
jlc
--
Surely the answer will depend on the linter you are using. Care to tell
us, or shall we guess?
Hey Steven,
I am using PyCharm, I have to admit I feel silly on this one. I had a buried assignment that overrode the inferred type. It wasn't until a fresh set of eyes confirmed something was awry
I have a portion of code I need to speed up: there are 3 api calls to an external system where the first enumerates a large collection of objects I then loop through, performing two additional api calls on each. The first call is instant; the second and third per object are very slow. Currently
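One common shape for this is a thread pool over the slow per-object calls; a sketch with hypothetical stand-ins for the three API calls (concurrent.futures needs the futures backport on Python 2):

from concurrent.futures import ThreadPoolExecutor

def enrich(obj):
    # api_call_two/api_call_three stand in for the two slow per-object calls.
    return obj, api_call_two(obj), api_call_three(obj)

objects = api_call_one()          # the fast, enumerating call
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(enrich, objects))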
Hi, I have 3 csv files with a list of 5 items in each:
rainfall in mm, duration time, time of day, wind speed, date.
I am trying to compare the files, cutting out items in each list, i.e.:
first file (rainfall2012.csv): rainfall, duration, time of day, wind speed, date.
first file
Seems the soap list is a little quiet and the moderator is MIA regardless.
Are there many soap users on this list familiar with Spyne, or does anyone know the best place to post such questions?
Thanks!
jlc
--
Is your question
regarding anything at all Python, or are you just looking for helpful
nerds? :)
Hi Chris,
Thanks for responding. I've been looking at Spyne to produce a service that
can accept a request formatted as follows:
<?xml version='1.0' encoding='UTF-8'?>
<SOAP-ENV:Envelope
Read first.
You can try :
http://spyne.io/docs/2.10/
https://pythonhosted.org/Soapbox/
Thanks Marcus,
I assure you I have been reading but missed soapbox, I'll keep hacking away,
thanks for the pointer.
jlc
--
Is there a way to execute PL/SQL script files through Python without a SQL client?
https://code.google.com/p/pypyodbc/ might work for you...
I have checked cx_Oracle and I guess it requires the Oracle client, so is there a way to execute without the Oracle client?
Right, as the name implies it uses
https://gist.github.com/plq/11384113
Unfortunately, you need the latest Spyne from https://github.com/arskom/spyne; this doesn't work with 2.10.
2.11 is due around the end of May, beginning of June.
Ping back if you have any other questions.
Burak,
Thanks a ton! I've just pulled this down
I have managed to read most of the important data in the xml onto lists.
Now, I have two lists, Source and Destination, and I'd like to create bi-directional links between them.
And moreover, I'd like to assign some kind of a bandwidth capacity to the links and similarly, storage and
I don't know how to do that stuff in Python. Basically, I'm trying to pull certain data from the xml file like the node-name, source, destination and the capacity. Since I am done with that part, I now want to have a link between source and destination and assign capacity to it.
I don't
I am working with a module with which I am seeing some odd behavior.
A module.foo builds a custom exception, module.foo.MyError; it's done right AFAICT.
Another module, module.bar, imports this and calls bar.__setattr__('a_new_name', MyError).
Now, not in all but in some cases when I catch a_new_name,
Best would be to print out what's in a_new_name to see if it really is what you think it is. If it is what you think it is, have a look at its __mro__ (method resolution order, it's an attribute of every class) to see what it's really inheriting. That should show you what's
I see that you've solved your immediate problem, but you shouldn't call
__setattr__ directly. That should actually be written
setattr(bar, 'a_new_name', MyError)
But really, since bar is (apparently) a module, and it is *bar itself*
setting the attribute, the better way is
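Presumably the elided suggestion is a plain assignment (or import-as) inside bar itself, e.g.:

# inside module bar:
from module.foo import MyError as a_new_name
# or, if MyError is already imported there:
a_new_name = MyError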
Mark,
Excuse the format of this post, stuck on the road only with an iPhone but in
the event it helps,
http://blog.vrplumber.com/b/2014/02/12/step-2-get-amd64-compatible-vs-2010/ may
be useful.
Jlc
--
Well I am not sure what advantage this has for the user, not my code as
I don't advocate the import to begin with it, its fine spelled as it was
from where it was...
The advantage for the user is:
/snip
Hey Steven,
Sorry for the late reply (travelling). My comment wasn't clear, I was
Anyone know where to get a compiled cx_freeze that already has this patch?
http://www.lfd.uci.edu/~gohlke/pythonlibs/#cx_freeze
--
Unfortunately, this is buggy too. Here is a test output from a compiled
console exe created with the above version of cx freeze:
Let Christoph know, he is very responsive and extremely helpful.
--
I am doing some scripting with pyVmomi under Python 2.6.8 so the code may run directly on a VMware ESXi server.
As the code is long running, it exceeds the authentication timeout. For anyone familiar with this code and/or this style of programming, does anyone have a recommendation for an elegant
You could:
- have a single point of entry that can check and, if necessary, revalidate
- create a helper that checks and, if necessary, revalidates, which is then called wherever needed
- create a decorator that does the above for each function that needs it (a sketch follows below)
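A sketch of the decorator variant; session_is_valid and reconnect are hypothetical helpers standing in for whatever pyVmomi connection plumbing the script already has:

import functools

def ensure_session(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # Re-authenticate first if the login has timed out.
        if not session_is_valid():
            reconnect()
        return func(*args, **kwargs)
    return wrapper

@ensure_session
def collect_vm_stats(vm):
    pass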
Hi Ethan,
I don't have to care about threading issues all the time and
can otherwise freely choose the right model of parallelism that suits my
current use case when the need arises (and threads are rarely the right
model). I'm sure that's not just me.
The sound bite of a loyal Python coder:)
If it
I am unfortunately unable to use lxml for a project and must resort to the base libraries only to create several nested elements located directly under a root element. The caveat is the incremental writing and flushing of the nested elements as they are created.
So assuming the structure is
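One stdlib option that suits incremental writing is xml.sax.saxutils.XMLGenerator; a minimal sketch with an illustrative structure:

from xml.sax.saxutils import XMLGenerator

out = open('output.xml', 'w')
gen = XMLGenerator(out, encoding='utf-8')
gen.startDocument()
gen.startElement('root', {})

for i in range(3):                     # illustrative nested elements
    gen.startElement('record', {'id': str(i)})
    gen.characters('payload %d' % i)
    gen.endElement('record')
    out.flush()                        # push each element out as it is built

gen.endElement('root')
gen.endDocument()
out.close()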
I checked my modules with pylint and saw the following warning:
W: 25,29: Used builtin function 'map' (bad-builtin)
Why is the use of map() discouraged?
It's such a useful thing.
The warning manifests from the opinion that a comprehension is
more suitable. You can disable the warning or you
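For illustration, the two spellings side by side, plus the inline disable for cases where map really is clearer:

values = ['1', '2', '3']

ints_via_map = map(int, values)            # what bad-builtin flags
ints_via_comp = [int(v) for v in values]   # the form pylint prefers

ints = map(int, values)  # pylint: disable=bad-builtin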
I have a dataset that consists of a dict with text descriptions and values that are integers. If required, I collect the values into a list and create a numpy array, running it through a simple routine: data[abs(data - mean(data)) < m * std(data)] where m is the number of std deviations to
Assuming your data and the dictionary are keyed by a common set of keys:
for key in list(descriptions):
    if abs(data[key] - mean(data)) >= m * std(data):
        del data[key]
        del descriptions[key]
Heh, yeah sometimes the obvious is too simple to see. I used a dict comp to rebuild the
In other words: this approach for detecting outliers is nothing more than
a very rough, and very bad, heuristic, and should be avoided.
Heh, very true but the results will only be used for conversational purposes.
I am making an assumption that the data is normally distributed and I do expect
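For completeness, a sketch of the dict-comprehension rebuild under that normality assumption, with illustrative data:

from numpy import mean, std

data = {'siteA': 10, 'siteB': 12, 'siteC': 11,
        'siteD': 9, 'siteE': 10, 'siteF': 300}
m = 2                                   # allowed number of std deviations

values = list(data.values())
centre, spread = mean(values), std(values)

kept = {k: v for k, v in data.items() if abs(v - centre) < m * spread}
# siteF's 300 falls outside 2 standard deviations and is dropped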
Hi,
Slightly different take on an old problem: I have a list of dicts, and I need to build one dict from this based on two values from each dict in the list. Each of the dicts in the list has similar key names, but the values of course differ.
[{'a': 'xx', 'b': 'yy', 'c': 'zz'}, {'a': 'dd', 'b':
data = [{'a': 'xx', 'b': 'yy', 'c': 'zz'}, {'a': 'dd', 'b': 'ee', 'c': 'ff'}]
{d['a']: d['c'] for d in data}
{'xx': 'zz', 'dd': 'ff'}
Priceless,
That is exactly what I needed, for which I certainly over complicated!
Thanks everyone!
jlc
--
Within __init__ I set up a log with self.log = logging.getLogger('foo'), then add a console and file handler, which requires the formatting to be specified. In a few methods I set up a local log object by calling getChild against the global log object.
This works fine until I need to adjust the
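For context, a minimal sketch of the getChild arrangement, where the children carry no handlers of their own and simply propagate to the parent:

import logging

log = logging.getLogger('foo')
log.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(message)s'))
log.addHandler(handler)

worker_log = log.getChild('worker')
worker_log.info('handled by the parent handler, logged as foo.worker')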
I have an issue with some code I have been passed:
for (x, y) in [(a_dict1, a_tuple[0]), (a_dict2, a_tuple[1])]:
I only noticed it as PyCharm failed to assign the str type to y, whereas it knew the tuple's 0 and 1 items were type str.
In the loop it flags the passing of y into a method that
I think you're saying that the lint-feature of PyCharm is trying to
guess the object types, and telling you there's a conflict here. I
don't think you're saying that it executes incorrectly.
Hah, yeah sorry Dave that's it.
Still there are ways to express it differently, and maybe one of
I have a switch statement composed using a dict:
switch = {
    'a': func_a,
    'b': func_b,
    'c': func_c
}
switch.get(var, default)()
As a result of multiple functions per choice, it migrated to:
switch = {
    'a': (func_a1, func_a2),
    'b': (func_b1, func_b2),
    'c': (func_c, )
}
switch = {
    'A': functools.partial(spam, a),
    'B': lambda b, c=c: ham(b, c),
    'C': eggs,
}
switch[letter](b)
That's cool, never even thought to use lambdas.
functools.partial isn't always applicable, but when it is, you should
prefer it over lambda since it will be very
Or could you do something like:
arguments_to_pass = [list of some sort]
switch.get(var, default)(*arguments_to_pass)
Steven's lambda suggestion was most appropriate. Within the switch, there are functions called with none, or some variation, of the arguments. It was not easy to pass them in after
I have a class which sets up some class vars, then several methods that are passed in data and do work referencing the class vars.
I want to decorate these methods; the decorator needs access to the class vars, so I thought about making the decorator its own class and allowing it to accept
So decorators will never take instance variables as arguments (nor should
they, since no instance
can possibly exist when they execute).
Right, I never thought of it that way, my only use of them has been trivial, in
non class scenarios so far.
Bear in mind, a decorator should take a
When you say class vars, do you mean variables which hold classes?
You guessed correctly, and thanks for pointing out the ambiguity in my
references.
The one doesn't follow from the other. Writing decorators as classes is
fairly unusual. Normally, they will be regular functions.
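For what it's worth, a minimal sketch of letting the wrapper reach instance state at call time rather than at decoration time; the backend attribute and prepare call are illustrative:

import functools

def uses_backend(func):
    # The decorator runs at class-definition time and sees no instance;
    # the wrapper receives `self` at call time and can reach anything
    # the class or instance holds.
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        prepared = self.backend.prepare(*args)
        return func(self, prepared, **kwargs)
    return wrapper

class Processor(object):
    def __init__(self, backend):
        self.backend = backend

    @uses_backend
    def handle(self, payload):
        return payload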
I see,
I was doing some work with the ldap module and required a ci dict that was case insensitive but case preserving. It turned out the cidict class they implemented was broken with respect to pop; it is inherited and not re-implemented to work. Before I set about re-inventing the wheel, anyone know
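If a rewrite does prove unavoidable, a minimal sketch of a case-insensitive, case-preserving dict; it is O(n) per lookup and not a drop-in replacement for python-ldap's cidict:

class CIDict(dict):
    def _find(self, key):
        # Return the originally-stored key that matches case-insensitively,
        # or the key unchanged if nothing matches.
        lowered = key.lower()
        for stored in self:
            if stored.lower() == lowered:
                return stored
        return key

    def __setitem__(self, key, value):
        dict.__setitem__(self, self._find(key), value)

    def __getitem__(self, key):
        return dict.__getitem__(self, self._find(key))

    def __contains__(self, key):
        return dict.__contains__(self, self._find(key))

    def pop(self, key, *default):
        return dict.pop(self, self._find(key), *default)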
I have some data I am working with that is not being interpreted as a string requiring base64 encoding when sent to the ldif module for output.
The base64 string parsed is ZGV0XDMzMTB3YmJccGc= and the raw string is det\3310wbb\pg.
I'll admit my understanding of the handling requirements of non
Can you give an example of the code you have?
I actually just overrode the regex used by the method in the LDIFWriter class to be far broader about what it interprets as a safe string. I really need to properly handle reading, manipulating and writing non-ASCII data to solve this...
Shame
I have been doing the same thing and I tried to use java for testing the
credentials and they are correct. It works perfectly with java.
I really don't know what we're doing wrong.
You are accessing a protected operation of the LDAP server
and it (the server) rejects it due to invalid
I'm not sure what exactly you're asking for.
Especially "is not being interpreted as a string requiring base64 encoding" is written without giving the right context.
So I'm just guessing that this might be the usual misunderstandings with use
of base64 in LDIF. Read more about when LDIF
Hi Michael,
Processing LDIF is one thing, doing LDAP operations another.
LDIF itself is meant to be ASCII-clean. But each attribute value can carry any
byte sequence (e.g. attribute 'jpegPhoto'). There's no further processing by
module LDIF - it simply returns byte sequences.
The access
Note that all modules in python-ldap up to 2.4.10 including module 'ldif'
expect raw byte strings to be passed as arguments. It seems to me you're
passing a Unicode object in the entry dictionary which will fail in case an
attribute value contains NON-ASCII chars.
Yup, I was.
python-ldap
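For reference, a minimal sketch of encoding values before handing them to python-ldap 2.4.x; the attribute names are illustrative:

entry = {
    'cn': [u'Joe Example'.encode('utf-8')],
    'description': [u'caf\xe9'.encode('utf-8')],   # non-ASCII must go in as bytes
}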
I have a use where writing an interim file is not convenient and I was hoping to iterate through maybe 100k lines of output by a process as it's generated, or roughly anyways.
Seems to be a common question on ST, and more easily solved in Linux. Anyone currently doing this with Python 2.7 in
You leave out an awful amount of detail. I have no idea what ST is, so
I'll have to guess your real problem.
Ugh, sorry guys, it's been one of those days, the post was rather useless...
I am using Popen to run the exe with communicate() and I have sent stdout to PIPE without luck. Just not
I am trying to invoke a binary that requires DLLs in two places, all of which are included in the PATH env variable in Windows. When running this binary with Popen it cannot find either; passing env=os.environ to Popen made no difference.
Anyone know what might cause this or how to work around
I have a set of methods which take args that I decorate twice,
def wrapped(func):
    def wrap(*args, **kwargs):
        try:
            val = func(*args, **kwargs)
            # some work
        except BaseException as error:
            log.exception(error)
            return []
    return
If you don't want to do that, you'd need to use introspection of a
remarkably hacky sort. If you want that, well, it'll take a mo.
After some effort I'm pretty confident that the hacky way is impossible.
Hah, I fired it in PyCharm's debugger and spent a wack time myself, thanks
for the
Well, technically it's
func.func_closure[0].cell_contents.__name__
but of course you cannot know that for the general case.
Hah, I admit I lacked perseverance in looking at this in PyCharm's debugger as I missed that.
Much appreciated!
jlc
--
I have a dict of lists. I need to create a list of 2-tuples, where each tuple is a key from the dict paired with one of that key's list items.
my_dict = {
    'key_a': ['val_a', 'val_b'],
    'key_b': ['val_c'],
    'key_c': []
}
[(k, x) for k, v in my_dict.items() for x in v]
This works, but I need to
Yeah, it's remarkably easy too! Try this:
[(k, x) for k, v in my_dict.items() for x in v or [None]]
An empty list counts as false, so the 'or' will then take the second option,
and iterate over the one-item list with None in it.
Right, I overlooked that!
Much appreciated,
jlc
--
Begrudgingly, I need to migrate away from SQLAlchemy onto a package that has fast imports and very fast model build times.
I have a less than ideal application that uses Python as a plugin interpreter, which is not performant in this use case where it's being invoked freshly several times per
First recommendation: fewer layers. Instead of SQLAlchemy, just import sqlite3 and use it directly. You should be able to switch out "import sqlite3 as db" for "import psycopg2 as db" or any other Python DB API module, and still have most/all of the benefit of the extra layer, without any extra
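A minimal sketch of the direct DB-API route (param style and connect arguments differ slightly between modules, so the swap is not entirely free):

import sqlite3 as db

conn = db.connect('app.db')
cur = conn.cursor()
cur.execute('CREATE TABLE IF NOT EXISTS plugins (name TEXT, enabled INTEGER)')
cur.execute('INSERT INTO plugins VALUES (?, ?)', ('spam', 1))
conn.commit()

cur.execute('SELECT name, enabled FROM plugins')
for name, enabled in cur.fetchall():
    print(name, enabled)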
Anything listed here http://www.pythoncentral.io/sqlalchemy-vs-orms/
you've not heard about? I found peewee easy to use although I've
clearly no idea if it suits your needs. There's only one way to find out :)
Hi Mark,
I found that article before posting and some of the guys here have
I have some tabular data, for example 3-tuples, that I need to build a container for where lookups on any one of the three fields are O(1). Does something in the base library exist, or if not, is there an efficient implementation of such a container that has been implemented before I give it a go?
So presumably your data's small enough to fit into memory, right? If
it isn't, going back to the database every time would be the best
option. But if it is, can you simply keep three dictionaries in sync?
Hi Chris,
Yeah the data can fit in memory and hence the desire to avoid a trip here.
Why not take a look at pandas and see if there's anything there you could
use? Excellent docs here http://pandas.pydata.org/pandas-docs/stable/
and the mailing list is available at gmane.comp.python.pydata amongst
other places.
Mark,
Actually it was the first thing that came to mind. I did