Re: [sqlite] Table constraints

2013-10-17 Thread Joseph L. Casale
 If I have decoded correctly what you were trying to say, use a trigger
 like this, and duplicate it for UPDATE:

Thanks Clemens, this got me sorted out.
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Table constraints

2013-10-16 Thread Joseph L. Casale
Hi,
I have a table as follows:

CREATE TABLE t ( 
id  INTEGER NOT NULL,
a   VARCHAR NOT NULL COLLATE 'nocase',
b   VARCHAR COLLATE 'nocase',
c   VARCHAR CHECK (c IN ('foo', 'bar', NULL)) COLLATE 'nocase',
PRIMARY KEY (id)
);

How does one elegantly construct an index or constraint such that, for any
given value of column a, that value may appear in two rows where column c holds
'foo' and 'bar' respectively, unless that value of column a appears in a row
where column c is NULL, in which case no other rows may exist for that value of
column a.

id  a   b   c
--  --- --- ---
1   ab   foo
2   ab   bar
(no more rows allowed with column a having a value of 'a')

id  a   b   c
--  --- --- ---
1   ab   NULL
2   ab   bar <- not allowed.
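
A minimal sketch of the trigger-based approach suggested in the reply above,
assuming this schema (the same body would be duplicated for UPDATE). Note in
passing that the original CHECK (c IN ('foo', 'bar', NULL)) never rejects
anything, because the NULL in the list makes the expression NULL for unmatched
values and a NULL CHECK passes; c IS NULL OR c IN ('foo', 'bar') is the usual
spelling.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (
    id  INTEGER NOT NULL,
    a   VARCHAR NOT NULL COLLATE NOCASE,
    b   VARCHAR COLLATE NOCASE,
    c   VARCHAR CHECK (c IS NULL OR c IN ('foo', 'bar')) COLLATE NOCASE,
    PRIMARY KEY (id)
);

-- Reject a new row when a row with the same a already exists and either
-- row has c NULL, or the new row duplicates an existing c value.
CREATE TRIGGER t_before_insert BEFORE INSERT ON t
BEGIN
    SELECT RAISE(ABORT, 'conflicting rows for this value of column a')
    WHERE EXISTS (
        SELECT 1 FROM t
        WHERE a = NEW.a
          AND (c IS NULL OR NEW.c IS NULL OR c = NEW.c)
    );
END;
""")

conn.execute("INSERT INTO t (a, c) VALUES ('a', 'foo')")
conn.execute("INSERT INTO t (a, c) VALUES ('a', 'bar')")    # allowed
# conn.execute("INSERT INTO t (a, c) VALUES ('a', NULL)")   # raises sqlite3.IntegrityError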

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: Barcode printing

2013-09-30 Thread Joseph L. Casale
If that's a bit heavyweight (and confusing; it's not all free software,
since some of it is under non-free license terms), there are other
options.

pyBarcode (http://pythonhosted.org/pyBarcode/) says it's a pure-Python library
that takes a barcode type and the value, and generates an SVG of the barcode.

Actually,
The canvas concept is exactly what I need and this is perfect!
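
A rough sketch of the layout side, assuming the canvas in question is ReportLab's
pdfgen canvas and its bundled Code128 barcode (the file name, values and sizes are
placeholders, not the actual labels):

from reportlab.lib.pagesizes import letter
from reportlab.lib.units import mm
from reportlab.pdfgen import canvas
from reportlab.graphics.barcode import code128

cols, rows = 2, 8
page_w, page_h = letter
cell_w, cell_h = page_w / cols, page_h / rows

c = canvas.Canvas("labels.pdf", pagesize=letter)
values = ["SKU%05d" % i for i in range(cols * rows)]    # placeholder data

for i, value in enumerate(values):
    col, row = i % cols, i // cols
    x = col * cell_w
    y = page_h - (row + 1) * cell_h
    bc = code128.Code128(value, barHeight=12 * mm)
    bc.drawOn(c, x + 5 * mm, y + 8 * mm)                # the barcode itself
    c.drawString(x + 5 * mm, y + 3 * mm, value)         # text component under it
c.showPage()
c.save()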

Thanks Gary and Ben,
jlc
-- 
https://mail.python.org/mailman/listinfo/python-list


Barcode printing

2013-09-29 Thread Joseph L. Casale
I need to convert a proprietary MS Access based printing solution into 
something I can
maintain. Seems there is plenty available for generating barcodes in Python, so 
for the
persons who have been down this road I was hoping to get a pointer or two.

I need to create some type of output, preferably PDF, which is an array of labels
2 across by 8 down, each one a custom barcode label with some other text components
oriented around the barcode.

Anyone know the best approach for this? Generating the barcode looks trivial, I 
am just not
sure which package might offer the remaining functionality easily.

Thanks for any pointers!
jlc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [pfSense] pfSense 2.1-RELEASE and Gold Subscription Now Available!

2013-09-15 Thread Joseph L. Casale
 I assume this is why snapshots.pfsense.org is offline (or at least not 
 answering) right now?

The release announcement contains links to the upgrade binaries; not all of the
mirrors are populated yet, so find one that is. The same announcement links to an
upgrade guide that explains how to perform the upgrade manually if you need to.

___
List mailing list
List@lists.pfsense.org
http://lists.pfsense.org/mailman/listinfo/list


Re: [sqlite] Insert statement

2013-09-12 Thread Joseph L. Casale
> Yes, that's what I suspected.  Because your table_a has no natural key, you 
> have
> no good way to select the auto-generated id value.  You can find out what the 
> last
> auto-generated value was, which lets you work a row at a time,  but you're 
> really
> suffering from a poor design choice.  
>
> If you make val unique -- and I see no reason not to -- then you can select 
> the id for
> every val you insert with "where val = 'value' ". 

Hi James,
Thanks for the follow-up. I am certainly open to critique, and although this is
working I would rather have it right. I realize I omitted the fact that val in
table_a is unique. Given the unanimous opinion within the thread I bit the bullet
and refactored, but I am still keen to use one large, self-contained SQL script.

The reason is that calling straight into the DB-API's C code from Python is fast,
but the module I am now using still mixes in plenty of Python, and it's not nearly
as fast as the plain programmatic approach of inserting, using code to deduce the
rowid, and following up with the related inserts mostly through the Python DB-API.

Sending one large statement in this case would bypass that overhead, but using val
as the reference would make the string very long; that text data might be several
thousand characters. As soon as I have a moment to revisit this, I will try
Simon's suggestion.

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: sqlite issue in 2.7.5

2013-09-09 Thread Joseph L. Casale
 This pragma speeds up most processes 10-20 times (yes 10-20):
 pragma synchronous=OFF

 See the SQLITE documentation for an explanation.
 I've found no problems with this setting.

Aside from database integrity and consistency? :) I have that one set to OFF,
as my case is all about data processing and the database is secondary: it is not
re-used, nor required, if the process halts.

The issue I actually had was that after several large bulk inserts, the statistics
for the indices were no longer valid, and the statements performing several joins
that should have leveraged them were not performing well. Not knowing this, and
accessing the data from different methods, made the symptoms manifest to varying
degrees.

Ultimately, after all the inserts I run an ANALYZE command that takes ~30
seconds and a query that normally takes ~2 hours was done in less than 2
seconds.
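
For reference, the fix amounts to something like this (a sketch; the file name is
made up):

import sqlite3

conn = sqlite3.connect("data.db")
conn.execute("PRAGMA synchronous=OFF")

# ... large bulk inserts here ...

# Refresh the planner's statistics so the later joins pick sensible plans.
conn.execute("ANALYZE")
conn.commit()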

jlc
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [sqlite] Insert statement

2013-09-08 Thread Joseph L. Casale
> Look up the last_insert_rowid() you want and store it in your programming
> language.  That's what programming languages are for.  But if you want to do
> it less efficiently ...

Hey Simon,
That is the procedure I utilize normally; the requirement for this specific case
is that the entire set of inserts into table_a be bundled with their associated
inserts into table_b in one statement, where I won't have the luxury of an
iterative approach.
So all of these lines of sql will be sent as one statement.

Normally I would just use variables, but we know this is not an option so I was 
hoping
to find a way to accomplish this otherwise.

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Insert statement

2013-09-08 Thread Joseph L. Casale
> If I understand the question, and there is no key other than the 
> auto-incrementing
> integer, there might not be a good way.  It sounds like the database's design 
> may
> have painted you into a corner.  

Hi James,
Well, after inserting one row into table A which looks like (without specifying 
the id
and letting it auto generate):

CREATE TABLE table_a (
    val     VARCHAR COLLATE "nocase" NOT NULL,
    id      INTEGER NOT NULL,
    PRIMARY KEY ( id )
);

(forgive that odd looking format, its SQLAlchemy output...)

I have for example 20 rows in table B to insert referencing the above:

CREATE TABLE table_b ( 
val VARCHAR COLLATE "nocase",
key VARCHAR COLLATE "nocase" NOT NULL,
id   INTEGER,
seqno   INTEGER NOT NULL,
PRIMARY KEY ( seqno ),
FOREIGN KEY ( id ) REFERENCES table_a ( id ) 
);

So selecting last_insert_rowid() always gives me the 'id' of the previous row 
from table_a
after an insert. So I would insert into table_a, get that rowid, and build the 
remaining 20
inserts. For the sake of keeping the entire sql statement manageable, I was 
hoping not to
build the next 20 statements based on SELECT id FROM table_a WHERE val='xxx' as 
that string
will be very long.

So this works for one insert:

INSERT INTO table_a (val) VALUES ('xx');
INSERT INTO table_b (id, key, val)
    SELECT last_insert_rowid(), 'yyy', 'zzz';

Just not sure how to perform 20 or 30 of those inserts into table_b after the 
one into table_a
yields the id value I need.
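
One way to keep a single self-contained script without repeating a long val string
is to stash the generated id in a one-row temp table and select from it for each
table_b insert (a sketch of the idea, assuming the two tables above; not the
thread's final solution):

import sqlite3

conn = sqlite3.connect("example.db")
conn.executescript("""
CREATE TEMP TABLE IF NOT EXISTS last_a (id INTEGER);
DELETE FROM last_a;

INSERT INTO table_a (val) VALUES ('xx');
INSERT INTO last_a VALUES (last_insert_rowid());

INSERT INTO table_b (id, key, val) SELECT id, 'yyy1', 'zzz1' FROM last_a;
INSERT INTO table_b (id, key, val) SELECT id, 'yyy2', 'zzz2' FROM last_a;
-- ... repeat for the remaining table_b rows ...
""")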

Thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Insert statement

2013-09-08 Thread Joseph L. Casale
Hi,
What is the most efficient way to insert several records into a table which
has an FK ref to the auto-incrementing PK of another insert I need to do in the
same statement?

I am migrating some code away from using the SQLAlchemy orm to using the
Core. The way the data is returned to me is a string (requiring an insert into 
table A)
accompanied by several more strings (requiring inserts into table B with a ref 
to a pk
in table A's row).

So instead of doing this the typical way, if I can prepare all the sql as one 
large
statement for several sets of related inserts (The initial insert into table A 
with all
the related inserts into table B) I will get the performance I am after.

Does this seem reasonable? SQLite doesn't support variable declarations, but I am
sure there is a more efficient means to this using something along the lines of
INSERT INTO ... SELECT; I'm just not sure how to craft this with "n" inserts based
on the one select from the PK-generating initial insert.
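
For comparison, the typical programmatic route with the sqlite3 DB-API looks
roughly like this (a sketch; the single-script alternative is discussed elsewhere
in the thread):

import sqlite3

conn = sqlite3.connect("example.db")
cur = conn.cursor()

cur.execute("INSERT INTO table_a (val) VALUES (?)", ("some long string",))
a_id = cur.lastrowid    # the auto-generated primary key of the table A row

rows = [(a_id, "key1", "val1"), (a_id, "key2", "val2")]
cur.executemany("INSERT INTO table_b (id, key, val) VALUES (?, ?, ?)", rows)
conn.commit()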

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-02 Thread Joseph L. Casale
> Plus, of course, index will only ever be used for operations where you have
> overridden the default collating sequence for the operation, for example by
> specifying collate nocase in the join expression, or adding the collate 
> nocase to
> the order by or group by.

I assume this explains why the change in the table definition made a difference
while specifying the collation only on the index did not: I did not override the
table's default collation in the query, so the index was not used.

I've encountered another issue: as I was running my tests in SQLiteStudio, I
realized the query against the tables with the collation specified returned all
rows in less than a minute, while running the same query against the db in the
sqlite shell is still bad. I know SQLiteStudio enables certain non-default
pragmas, but I wonder which ones could account for this speed difference.
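
A quick way to compare the two environments is to print the pragmas most likely
to matter from each side (a sketch; the list is a guess, not what SQLiteStudio
actually sets):

import sqlite3

conn = sqlite3.connect("data.db")
for pragma in ("journal_mode", "synchronous", "cache_size",
               "temp_store", "automatic_index"):
    print("%s = %r" % (pragma, conn.execute("PRAGMA %s" % pragma).fetchone()))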

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


sqlite issue in 2.7.5

2013-09-02 Thread Joseph L. Casale
I have been battling an issue hopefully someone here has insight with.

I have a database with a few tables I perform a query against with some
joins against columns collated with NOCASE that leverage = comparisons.

Running the query on the database opened in sqlitestudio returns the
results in under a minute. Running the query in Python with sqlite3
doesn't return results for several hours. I haven't figured out what
pragmas or other shortcuts sqlitestudio uses to provide the results
so fast.

Using apsw returns the dataset nearly instantaneously but the
connection/cursor/commit differences are too drastic and would force
far too large a rewrite for the module change.

Anyone by chance know the underlying changes required in the sqlite3
module to replicate what sqlitestudio is doing behind the scenes?

Thanks,
jlc

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
> Have you tried using '=' ?
> 
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to.  An example would 
> be
> 
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)

Simon,
That took this query from not finishing in 5 hours to producing results in under
a minute; many thanks for everyone's guidance!

jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
> > 0   0   1   SCAN TABLE d_table_b AS da (~10 rows)
> >
> 
> Is this the index you referenced in you reply to Simon?
> Maybe you are using wrong index/column?

I'll recheck; I am also reading up on indexes as they relate to optimizing
queries. Could be I made a mistake.

> I had the same problem (kind of) and got the answer here to create a
> different index...
> 
> Thank you.
> 
> Can you post you schema?

Sure, it's not mine technically so I have to sanitize portions.

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
> LIKE is used when comparing strings with wildcards.  For example, val LIKE
> 'abra%' (which will match 'abraCaDAbra' and 'abrakadee'.
> 
> If there are no wildcards you should be using =, not LIKE.  LIKE will/should
> always indicate that a table or index scan is required, perhaps of the whole
> table/index if the like expression is not a constant (there is no other 
> choice since
> the wildcarded expression could evaluate to '%d%' which would return every
> row with a 'd' anywhere in the value.  This means that the query planner must
> assume that this join will require a full table/index scan for each 
> inner-loop and
> may return all rows because no other plan assumption would be valid.  This 
> will
> result in really crappy performance.
> 
> Are the columns declared as COLLATE NOCASE, or just the index?  If just the
> index, why?

It was just the index, as I didn't know better, but it's corrected now.

> If there is some (really strange) reason why the table column is not declared
> with COLLATE NOCASE, then you can always override the collation of the
> column in the expression itself:
> 
> CollateBinaryColumn COLLATE NOCASE =
> SomeOtherColumnCollationDoesNotMatter

This insight is much appreciated, thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
> Have you tried using '=' ?
> 
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to.  An example would 
> be
> 
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
> 
> Simon.

I did, and it excluded the comparisons whose case differed; I had only defined
COLLATE NOCASE on the index, so I guess it wasn't being used.

I just changed the table defs to use this and am reloading the data.

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
> Hi,
> Can you do "DESCRIBE QUERY PLAN " and post results here?
> 
> Also, what do you mean by "unbearable at scale"? Did you measure it? What
> is the result?
> 
> Thank you.

It doesn't finish even after maybe 4 or 5 hours of run time.

Sorry, do you mean "explain query plan ..."?
0   0   1   SCAN TABLE d_table_b AS da (~10 rows)
0   1   3   SEARCH TABLE d_table_a AS d USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)
0   2   0   SEARCH TABLE s_table_b AS sa USING AUTOMATIC COVERING INDEX (key=?) (~7 rows)
0   3   2   SEARCH TABLE s_table_a AS s USING INTEGER PRIMARY KEY (rowid=?) (~1 rows)

Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Query problems

2013-09-01 Thread Joseph L. Casale
I have a query that is unbearable at scale, for example when
s_table_a and s_table_b have 70k and 1.25M rows.

SELECT s.id AS s_id
      ,s.lid AS s_lid
      ,sa.val AS s_sid
      ,d.id AS d_id
      ,d.lid AS d_lid
  FROM s_table_b sa
  JOIN d_table_b da ON
       (
         da.key=sa.key
         AND da.key='unique_string'
         AND da.val LIKE sa.val
       )
  JOIN s_table_a s ON
       s.id=sa.id
  JOIN d_table_a d ON
       (
         d.id=da.id
         AND NOT d.lid LIKE s.lid
       )

I am using LIKE as the columns are indexed NOCASE and I need the comparison to
be case-insensitive. I suspect this is where it breaks down, but I don't know
enough SQL to really appreciate the ways I could approach this better.

Both {s|d}_table_a have 2 columns, id, lid where id is PK.
Both {s|d}_table_b have 4 columns, seqno, id, key, val where seqno is PK,
id is a FK ref to {s|d}_table_a.id, and several key/val pairs are inserted to 
correspond
to the associated PK id from {s|d}_table_a.
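
For reference, the shape the replies in this thread converge on: declare the text
columns COLLATE NOCASE (or override the collation in the expression) and use plain
= instead of LIKE, roughly (a sketch, not the final query):

import sqlite3

conn = sqlite3.connect("data.db")
rows = conn.execute("""
    SELECT s.id AS s_id, s.lid AS s_lid, sa.val AS s_sid, d.id AS d_id, d.lid AS d_lid
      FROM s_table_b sa
      JOIN d_table_b da
        ON da.key = sa.key
       AND da.key = 'unique_string'
       AND da.val = sa.val COLLATE NOCASE
      JOIN s_table_a s ON s.id = sa.id
      JOIN d_table_a d
        ON d.id = da.id
       AND d.lid <> s.lid COLLATE NOCASE
""").fetchall()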

I'd be grateful for any suggestions or hints to improve this.
Thanks,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


RE: Running a command line program and reading the result as it runs

2013-08-23 Thread Joseph L. Casale
  I'm using Python 2.7 under Windows and am trying to run a command line
  program and process the programs output as it is running. A number of
  web searches have indicated that the following code would work.
 
  import subprocess
 
  p = subprocess.Popen("D:\Python\Python27\Scripts\pip.exe list -o",
   stdout=subprocess.PIPE,
   stderr=subprocess.STDOUT,
   bufsize=1,
   universal_newlines=True,
   shell=False)
  for line in p.stdout:
  print line
 
  When I use this code I can see that the Popen works, any code between
  the Popen and the for will run straight away, but as soon as it gets to
  the for and tries to read p.stdout the code blocks until the command
  line program completes, then all of the lines are returned.
 
  Does anyone know how to get the results of the program without it
  blocking?

Try this:

p = subprocess.Popen(args, stdout=subprocess.PIPE)
for line in p.stdout:
    print(line)
p.wait()
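
If iterating the file object still buffers (Python 2's file iterator reads ahead,
which is exactly the blocking the original post describes), the usual workaround
is to iterate readline explicitly, e.g.:

p = subprocess.Popen(args, stdout=subprocess.PIPE, bufsize=1,
                     universal_newlines=True)
for line in iter(p.stdout.readline, ''):
    print(line.rstrip())
p.wait()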

jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [argparse] mutually exclusive group with 2 sets of options

2013-08-05 Thread Joseph L. Casale
 You can probably do something similar using sub commands
 (http://docs.python.org/2/library/argparse.html#sub-commands).

The problem here is that argparse does not pass the subparser into the parsed
args, and shared args between subparsers need to be declared each time. Come
execution time, when you have shared args you end up testing for various
incantations of the invoked code; you're better off omitting subparsers and
performing conditional tests after parsing for incompatible combinations.

It's a waste of typing to make the user write out a mode followed by params to
achieve the OP's goal; I've hit the same limitation. Certainly a weakness of
argparse in my opinion. It's also a tough sell to force a user to install a
package just for the CLI if you go with docopt. I'd love to see argparse expand
or docopt get included...
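
A sketch of the flat-parser approach described above (the option names are
hypothetical):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--backup", action="store_true")
parser.add_argument("--restore", action="store_true")
parser.add_argument("--target")
args = parser.parse_args()

# Enforce the combinations a mutually exclusive group can't express.
if args.backup and args.restore:
    parser.error("--backup and --restore are mutually exclusive")
if args.restore and not args.target:
    parser.error("--restore requires --target")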

jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [argparse] mutually exclusive group with 2 sets of options

2013-08-05 Thread Joseph L. Casale
 I think you are looking for exclusive groups:
 
 http://docs.python.org/2.7/library/argparse.html#argparse.add_mutually_excl
 usive_group

No. That link's first doc line for that method shows the very point we are all
discussing:

Create a mutually exclusive group. argparse will make sure that only one
of the arguments in the mutually exclusive group was present on the
command line:

The OP requires more than one option per group; this method makes sure that only
one of the arguments is accepted.

jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Logging help

2013-08-04 Thread Joseph L. Casale
> Oh hai - as I was reading the documentation, look what I found:
>
> http://docs.python.org/2/library/logging.html#filter
>
> Methinks that should do exactly what you want.

Hi Wayne,
I was too hasty when I looked at filters, as I didn't think they could do what
I wanted. Turns out the log record passed to the filter method carries an
exc_info attribute in my case.
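
A sketch of that filter, attached only to the console handler (handler setup is
abbreviated and the names are placeholders):

import logging

class SkipExceptions(logging.Filter):
    # Drop records carrying exception info (e.g. those from logging.exception()).
    def filter(self, record):
        return record.exc_info is None

console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.addFilter(SkipExceptions())

log = logging.getLogger(__name__)
log.addHandler(console)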

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Best practice for connections and cursors

2013-08-02 Thread Joseph L. Casale
> Speaking to the OP: personally, I don't like the approach of putting data
> access methods at the module level to begin with.  I'd rather use a class.
> Just because it makes sense to have a singleton connection now doesn't mean it
> will always make sense as your application grows.
> In fact, the conflict you describe where one cursor is interfering with
> another cursor suggests that you may already be at the point of needing
> multiple connections.  The operation that is creating a temp table and messing
> things up should ideally be pulling an unused connection from a pool, so as
> to avoid potentially contaminating a connection that may already be in use
> elsewhere in the code.

Appreciate the opinion; it would clean things up to go this route, so I will. It
turns out the long delay was only a result of running the code through PyCharm's
debugger.
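
A minimal sketch of the class-based shape suggested above (assuming sqlite3; the
table and method names are placeholders):

import sqlite3

class Store(object):
    def __init__(self, path):
        self.conn = sqlite3.connect(path)

    def add_item(self, name):
        with self.conn:    # commits on success, rolls back on error
            self.conn.execute("INSERT INTO items (name) VALUES (?)", (name,))

    def items(self):
        cur = self.conn.cursor()    # a dedicated cursor per iteration
        return cur.execute("SELECT name FROM items")

    def close(self):
        self.conn.close()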

Thanks for the suggestion,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [NTSysADM] Max size of Notes field in AD?

2013-08-01 Thread Joseph L. Casale
 Does anyone know what the limit to the size of the Notes field in
 AD? I can't seem to search up a limit.

Every attr, every version of AD, and all its properties:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms675090(v=vs.85).aspx




RE: [NTSysADM] Folder Actions

2013-08-01 Thread Joseph L. Casale
 Anyone using this product?  I have cobbled together some 'hot folder' 
 solutions
 for a number of things at work, but this looks like it would be easier to use 
 if it is stable and works.

 http://www.folderactions.com/

Ugh,
There is an opensource project that uses the native filesystem watcher event.

http://sourceforge.net/apps/mediawiki/fwutilities/index.php?title=File_Watcher_Utilities

Free, very configurable and it works well. I have done some neat convenience 
things
for users with this trivially.

jlc


Best practice for connections and cursors

2013-08-01 Thread Joseph L. Casale
I posted this to the sqlite list, but I suspect there are more C-oriented users
on that list than Python; hopefully someone here can shed some light on this one.

I have created a python module that I import within several other modules that
simply opens a connection to an sqlite file and defines several methods which
each open a cursor before they either select or insert data.

As the module opens a connection, wherever I import it I call a commit against
the connection after invoking any methods that insert or change data.

Seems I've made a proper mess: one of the modules causes a 5 second delay at
import (big indicator there), and one of the modules calls a method that yields
data while calling other methods as it iterates. Each of these methods opens its
own cursor. One of them, during some processing, calls another method which opens
a cursor and creates a temp table, and this corrupts the top level cursor and
causes its yield to exit early.

If I open a debugger just as the top level method begins to yield, I can pull 
all
the expected records. It seems to be one of the nested methods that leverages
the singleton connection to the sqlite db, once it opens its own cursor and 
creates
a temp table, things go south. Comment out its cursor and code and things work.

A bit vague I know, but does anyone see the obvious mistake? I assumed the 
module
setting up a singleton connection was a perfectly viable way to accomplish this?

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Logging help

2013-08-01 Thread Joseph L. Casale
I have a couple handlers applied to a logger for a file and console destination.
Default levels have been set for each, INFO+ to console and anything to file.

How does one prevent logging.exception from going to a specific handler when
it falls within the desired levels?

Thanks,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


[sqlite] Best practice for connections and cursors

2013-07-31 Thread Joseph L. Casale
I have created a python module that I import within several files that simply
opens a connection to an sqlite file and defines several methods which each
open a cursor before they either select or insert data. As the module opens a
connection, wherever I import it I can call a commit against the connection.

Seems I've made a proper mess: one of the modules causes a 5 second delay at
import (big indicator there), and one of the modules calls a method that yields
data while calling other methods as it iterates. Each of these methods opens its
own cursor. One of them, during some processing, calls another method which opens
a cursor and creates a temp table, and this corrupts the top level cursor and
causes it to yield a shorter count.

If I open a debugger just as the top level method begins to yield, I can pull 
all
the expected records. It seems to be one of the nested methods that leverages
the singleton connection to the sqlite db, once it opens its own cursor and 
creates
a temp table, things go south.

A bit vague I know, but does anyone see the obvious mistake? I assumed the 
module
setting up a singleton connection was a perfectly viable way to accomplish this?

Thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[NTSysADM] RE: LDAP lookups

2013-07-31 Thread Joseph L. Casale
Cute; if paging confused your developers, wait till they encounter range
retrieval. I can only imagine the protest then. :)

heh,
jlc

From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On 
Behalf Of David Lum
Sent: Wednesday, July 31, 2013 2:41 PM
To: ntsysadm@lists.myitforum.com
Subject: [NTSysADM] RE: LDAP lookups

I got lucky - under protest I made a change only to troubleshoot then we flip 
it back. The change eliminated that error message but did NOT fix their 
underlying problem, so I was able to flip it back...

I saw an objection from Desmond on his blog about it, as well as a link:
http://jeftek.com/219/avoid-changing-the-maxpagesize-ldap-query-policy

So I was pretty set against it.

From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On Behalf Of Free, Bob
Sent: Wednesday, July 31, 2013 12:05 PM
To: ntsysadm@lists.myitforum.com
Subject: [NTSysADM] RE: LDAP lookups

NO NO NO

Just say NO

From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On Behalf Of David Lum
Sent: Wednesday, July 31, 2013 11:24 AM
To: ntsysadm@lists.myitforum.com
Subject: [NTSysADM] RE: LDAP lookups

Thanks everyone! That was my assumption after looking at _ldap records in DNS 
as well.

I've been asked to change the Sizelimit and PageSize attributes because our 
developers are getting this error
https://confluence.atlassian.com/display/FISHKB/LDAP%3A+error+code+4+-+Sizelimit+Exceeded

Dave

From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On Behalf Of Ken Cornetet
Sent: Wednesday, July 31, 2013 11:16 AM
To: 'ntsysadm@lists.myitforum.com'
Subject: [NTSysADM] RE: LDAP lookups

When the DNS server (assuming windows DNS) resolves mydomain.com, it will 
find 3 address (A) records. If the client is on the same subnet as one of the A 
records, the DNS server will do subnet sorting which means it will put that A 
record first in the list of 3 records that it returns to the client. Otherwise 
it will round-robin the order of the  3 records returned.

So, if the LDAP client is on the same subnet as one of the DCs, it will hit 
that DC (because that DC's IP address will be first in the list returned by the 
DNS server). Otherwise, it will be random.

From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On Behalf Of David Lum
Sent: Wednesday, July 31, 2013 1:43 PM
To: NTSysADM@lists.myITforum.com
Subject: [NTSysADM] LDAP lookups

In a domain with 3 DCs, which one handles LDAP requests? If the LDAP client is set
to query mydomain.com, what determines which DC processes the query?
David Lum
Sr. Systems Engineer // NWEA
Office 503.548.5229 // Cell (voice/text) 503.267.9764



PGE is committed to protecting our customers' privacy.
To learn more, please visit http://www.pge.com/about/company/privacy/customer/




sqlite3 version lacks instr

2013-07-28 Thread Joseph L. Casale
I have some queries that utilize instr wrapped by substr but the old
version shipped in 2.7.5 doesn't have instr support.

Has anyone encountered this and utilized other existing functions
within the shipped 3.6.21 sqlite version to accomplish this?

Thanks,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: sqlite3 version lacks instr

2013-07-28 Thread Joseph L. Casale
 Has anyone encountered this and utilized other existing functions
 within the shipped 3.6.21 sqlite version to accomplish this?

Sorry guys, forgot about create_function...
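
For anyone else hitting the older bundled SQLite, the workaround looks roughly
like this:

import sqlite3

def instr(haystack, needle):
    # Mimic SQLite's INSTR(): 1-based index, 0 when not found, NULL in -> NULL out.
    if haystack is None or needle is None:
        return None
    return haystack.find(needle) + 1

conn = sqlite3.connect("data.db")
conn.create_function("instr", 2, instr)
# SUBSTR(col, INSTR(col, 'string')) now works against the old 3.6.21 library.
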
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [sqlite] Query help

2013-07-27 Thread Joseph L. Casale
> Will the SQL 1969 "EXCEPT" compound operator not work for some reason?

Worked perfect, my sql is weak as I didn't even know of this one...
Thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Query help

2013-07-27 Thread Joseph L. Casale
Hey guys,
I am trying to left join the results of two selects that both look exactly like 
this:

  SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a

Both tables have the exact same data type and format; I need to reformat each
table's results, then join and return only what is in table_a and not in table_b.
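
The EXCEPT compound operator suggested in the reply handles this directly;
roughly:

import sqlite3

conn = sqlite3.connect("data.db")
only_in_a = conn.execute("""
    SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a
    EXCEPT
    SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_b
""").fetchall()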

Any guidance on how one might do this in sqlite?
Thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Guidance with Python and nested cursors

2013-07-18 Thread Joseph L. Casale
> It is perfectly allowed to open multiple cursors against a single connection. 
>  You can only execute one
> statement per cursor at a time, but you can have multiple cursors running 
> from the same connection:
> 
> cr1 = cn.cursor()
> cr2 = cn.cursor()
> 
> cr1.execute('select ...')
> while True:
> row = cr1.fetchone()
> if not row:
> break
> ...
> cr2.execute('INSERT ...')
> 
> for example.  If you are inserting into one of the tables used in the outer 
> select, simply make sure that
> select has an order by with a + in front of one of the column names to avoid 
> side effects (ie, changes
> made to the database by the insert are visible to all statements/cursors on 
> that connection even before
> those changes are committed).

Right,
I read this can be a problem, but I ran several tests validating results and it 
worked perfectly.

Thank you very much for the confirmation.
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Guidance with Python and nested cursors

2013-07-17 Thread Joseph L. Casale


From: sqlite-users-boun...@sqlite.org on behalf of Petite Abeille
Sent: Wednesday, July 17, 2013 1:25 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Guidance with Python and nested cursors

On Jul 17, 2013, at 9:07 PM, Joseph L. Casale <jcas...@activenetwerx.com> wrote:

>> I am using Python to query a table for all its rows, for each row, I query 
>> related rows from a
>> second table, then perform some processing and insert in to a third table.
>>
>> What is the technically correct approach for this?
>
> From the above outline, one SQL statement:

Hi,
Problem is I need to perform some Python processing of the data, then insert.

Thanks!
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Guidance with Python and nested cursors

2013-07-17 Thread Joseph L. Casale
I am using Python to query a table for all its rows; for each row, I query
related rows from a second table, then perform some processing and insert into a
third table.

What is the technically correct approach for this? I would rather not accumulate
all of the first table's data in order to make one-off selects from table two and
then insert into table three. I would prefer to iterate over table one, etc.

How does one set up the connection and cursors for this style of task?
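
A sketch of the shape the replies point at, three cursors on one connection (the
table and column names are made up):

import sqlite3

def process(payload, related):
    return len(related)    # placeholder for the real processing

conn = sqlite3.connect("data.db")
outer = conn.cursor()
inner = conn.cursor()
writer = conn.cursor()

for row_id, payload in outer.execute("SELECT id, payload FROM table_one"):
    related = inner.execute(
        "SELECT val FROM table_two WHERE one_id = ?", (row_id,)).fetchall()
    writer.execute("INSERT INTO table_three (one_id, result) VALUES (?, ?)",
                   (row_id, process(payload, related)))
conn.commit()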

Thanks for any guidance,
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


List comp help

2013-07-14 Thread Joseph L. Casale
I have a dict of lists. I need to create a list of 2-tuples, where each tuple is
a key from the dict paired with one of that key's list items.
the dict with one of the keys list items.

my_dict = {
'key_a': ['val_a', 'val_b'],
'key_b': ['val_c'],
'key_c': []
}
[(k, x) for k, v in my_dict.items() for x in v]

This works, but I need to test for an empty v like the last key, and create one 
tuple ('key_c', None).
Anyone know the trick to reorganize this to accept the test for an empty v and 
add the else?
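
One spelling with an explicit conditional (the reply below uses the shorter
v or [None] form):

pairs = [(k, x) for k, v in my_dict.items() for x in (v if v else [None])]
# e.g. [('key_a', 'val_a'), ('key_a', 'val_b'), ('key_b', 'val_c'), ('key_c', None)]
# (pair order depends on dict iteration order in Python 2.7)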

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: List comp help

2013-07-14 Thread Joseph L. Casale
 Yeah, it's remarkably easy too! Try this:

 [(k, x) for k, v in my_dict.items() for x in v or [None]]

 An empty list counts as false, so the 'or' will then take the second option, 
 and iterate over the one-item list with   None in it.

Right, I overlooked that!

Much appreciated,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Decorator help

2013-07-04 Thread Joseph L. Casale
Well, technically it's

func.func_closure[0].cell_contents.__name__

but of course you cannot know that for the general case.

Hah, I admit I lacked perseverance in looking at this in PyCharm's debugger, as
I missed that.

Much appreciated!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Decorator help

2013-07-03 Thread Joseph L. Casale
I have a set of methods which take args that I decorate twice,

def wrapped(func):
    def wrap(*args, **kwargs):
        try:
            val = func(*args, **kwargs)
            # some work
        except BaseException as error:
            log.exception(error)
            return []
    return wrap

def wrapped_again(length):
    def something(func):
        def wrapped_func(*args, **kwargs):
            values = func(*args, **kwargs)
            # do some work
            return values
        return wrapped_func
    return something

So the methods wrapped are as follows:

@wrapped_again(12)
@wrapped
def class_method(self, **kwargs):
    # ...

Is it possible to get the name of the original method (class_method) from 
within wrapped_func inside wrapped_again?
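
The usual fix is functools.wraps, which copies __name__ (and friends) through each
layer; a sketch along the lines of the functools suggestion elsewhere in the
thread:

import functools

def wrapped(func):
    @functools.wraps(func)
    def wrap(*args, **kwargs):
        return func(*args, **kwargs)
    return wrap

def wrapped_again(length):
    def decorator(func):
        @functools.wraps(func)
        def wrapped_func(*args, **kwargs):
            print(func.__name__)    # reports 'class_method' once both layers use wraps
            return func(*args, **kwargs)
        return wrapped_func
    return decorator
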
Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Decorator help

2013-07-03 Thread Joseph L. Casale
 If you don't want to do that, you'd need to use introspection of a
 remarkably hacky sort. If you want that, well, it'll take a mo.

 After some effort I'm pretty confident that the hacky way is impossible.

Hah, I fired it up in PyCharm's debugger and spent a whack of time myself;
thanks for the confirmation. I'll give functools a shot.

Thanks a lot,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [CentOS] Cannot get rtorrent to run

2013-06-22 Thread Joseph L. Casale
$ rtorrent foobar.torrent
rtorrent: symbol lookup error: rtorrent: undefined symbol:
_ZN7torrent10ThreadBase8m_globalE

Why is this happening?

The first hit on google leads you to 
https://github.com/rakshasa/rtorrent/issues/81
which points you to https://github.com/repoforge/rpms/issues/206 which finally
offers a repo that works: http://www.fateyev.com/RPMS/RHEL6/base/.

I have been using those packages for some time w/o issue.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] LVM + XFS + external log + snapshots

2013-06-21 Thread Joseph L. Casale
So I have an XFS file system within LVM  which has an external log.

A snapshot volume is created w/o issue;

But when i try to mount the file system;
mount: /dev/mapper/vg_spock_data-datasnapshot already mounted or /snapshot busy

So the filesystem was built requiring an external log device?
Let me guess, the original block device is still mounted and accessing its log 
device?
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos


Popen in Python3

2013-06-19 Thread Joseph L. Casale
I am trying to invoke a binary that requires DLLs in two places, all of which
are included in the PATH env variable in Windows. When running this binary with
Popen it cannot find either; passing env=os.environ to Popen made no difference.

Anyone know what might cause this or how to work around this?
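
One thing worth checking is the PATH the child process actually sees; a sketch
that prepends the DLL directories explicitly (all paths are placeholders):

import os
import subprocess

env = os.environ.copy()
env["PATH"] = os.pathsep.join([r"C:\vendor\bin", r"C:\vendor\deps",
                               env.get("PATH", "")])
p = subprocess.Popen([r"C:\vendor\bin\tool.exe", "--arg"],
                     env=env, stdout=subprocess.PIPE)
out, err = p.communicate()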

Thanks,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Netflix in f18 x64 xfce

2013-06-12 Thread Joseph L. Casale
 Same thing happens to me. I can run netflix-desktop from the Ubuntu PPA on 
 this
 same hardware in Debian Wheezy, but my results match yours with the Fedora 
 package.

What desktop are you using? I was tempted to try another, hoping that was the
only issue. Maybe you can save me the grief.

Thanks for the follow up,
jlc
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org


Popen and reading stdout in windows

2013-06-10 Thread Joseph L. Casale
I have a use case where writing an interim file is not convenient, and I was
hoping to iterate through maybe 100k lines of output from a process as it's
generated, or roughly so anyway.

Seems to be a common question on ST, and more easily solved on Linux.
Is anyone currently doing this with Python 2.7 on Windows who can share some
guidance?

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Popen and reading stdout in windows

2013-06-10 Thread Joseph L. Casale
 You leave out an awful amount of detail.  I have no idea what ST is, so
 I'll have to guess your real problem.

Ugh, sorry guys, it's been one of those days; the post was rather useless...

I am using Popen to run the exe with communicate() and I have sent stdout to
PIPE, without luck. I'm just not sure of the proper way to iterate over stdout
as it eventually makes its way out of the buffer.

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Netflix in f18 x64 xfce

2013-06-09 Thread Joseph L. Casale
Anyone figured out how to get netflix working on f18x64?

I have tried the netflix-desktop-0.2.2-1.fc18.noarch rpm and manually installed
silverlight with:

WINEARCH=win64 WINEPREFIX=/home/jcasale/.netflix-desktop wine Silverlight.exe /q

as well as the automated installer from 
http://sourceforge.net/projects/postinstaller/
and both kill my login session when attempting to play a selection.

Thanks!
jlc
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org


Platform variations with --exclude-from

2013-05-27 Thread Joseph L. Casale
On Linux, an rsync command and exclude_file contents of:

# cat exclude_file
/etc/alsa
# rsync -a --delete --delete-excluded --exclude-from=exclude_file /etc 
server::module

properly excludes /etc/alsa but not any file within /etc's directories that is 
named alsa.

On Windows I don't seem to be able to reliably emulate this:

C:\Scripts\Backup\rsyncd>type rsyncd_exclude
/cygdrive/d/$RECYCLE.BIN
/cygdrive/d/Exclude
/cygdrive/d/Inetpub
/cygdrive/d/System Volume Information
C:\Program Files (x86)\cwRsync\bin>rsync -a --delete --delete-excluded 
--exclude-from=exclude_file /cygdrive/d server::module

fails to omit these, the only way I get this to work is:

C:\Scripts\Backup\rsyncd>type rsyncd_exclude
*/$RECYCLE.BIN
*/Exclude
*/Inetpub
*/System Volume Information

Which obviously excludes nested objects of the same name as it's not anchored. 
Anyone know if this can be corrected?
What is the nuance for anchoring patterns to the root of the source in Windows 
with the cwrsync build?

Thank you,
jlc
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Platform variations with --exclude-from

2013-05-27 Thread Joseph L. Casale
 On Linux, an rsync command and exclude_file contents of:

 # cat exclude_file
 /etc/alsa
 # rsync -a --delete --delete-excluded --exclude-from=exclude_file /etc 
 server::module

 properly excludes /etc/alsa but not any file within /etc's directories that 
 is named alsa.

 Here the exclude file's contents matches the path you've given:
 root is /, path is etc so /etc/alsa matches.

Ah, I misunderstood this.

 On Windows I don't seem to be able to reliably emulate this:

 C:\Scripts\Backup\rsyncd>type rsyncd_exclude
 /cygdrive/d/$RECYCLE.BIN
 /cygdrive/d/Exclude
 /cygdrive/d/Inetpub
 /cygdrive/d/System Volume Information
 C:\Program Files (x86)\cwRsync\bin>rsync -a --delete --delete-excluded 
 --exclude-from=exclude_file /cygdrive/d server::module

 Here the exclude files are also anchored at the filesystem root,
 but the transfer root is /cygdrive/

 If you change the excludes to /d/$RECYCLE.BIN etc. (i.e. remove the
 leading /cygdrive) then it should work.

 That you're seeing this on windows is irrelevant; a similar setup on
 linux (e.g. transferring /etc/network and excluding
 /etc/network/interfaces) will also fail to do what you want.

I'd love to say I tried that but obviously not. Thanks for the insight.

Much appreciated Paul,
jlc
-- 
Please use reply-all for most replies to avoid omitting the mailing list.
To unsubscribe or change options: https://lists.samba.org/mailman/listinfo/rsync
Before posting, read: http://www.catb.org/~esr/faqs/smart-questions.html


RE: Ldap module and base64 encoding

2013-05-27 Thread Joseph L. Casale
 Note that all modules in python-ldap up to 2.4.10 including module 'ldif'
 expect raw byte strings to be passed as arguments. It seems to me you're
 passing a Unicode object in the entry dictionary which will fail in case an
 attribute value contains NON-ASCII chars.

Yup, I was.

 python-ldap expects raw strings since it's not schema-aware and therefore does
 not have any knowledge about the LDAP syntax used for a particular attribute
 type. So automagically convert Unicode strings will likely fail in many cases.
 = The calling application has to deal with it.

I see; that recommendation actually went a long way in cleaning up my code and making
the handling of decoding and encoding more consistent.

 Don't muck with overriding  _unparseAttrTypeandValue(). Simply pass the
 properly encoded data into ldif module.

I had some time today, so I attempted to open the ldif files in binary mode to simply
work with the raw byte strings, but the moment the first entry was parsed, parse()
stumbled on a character in the first entry's dict and passed a dn of None for
the last half?

If the option to avoid worrying about decoding and encoding could work, I would 
be
happy to process the whole lot in byte strings. Any idea what may cause this?

Thanks a lot Michael,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Ldap module and base64 encoding

2013-05-26 Thread Joseph L. Casale
 I'm not sure what exactly you're asking for.
 Especially "is not being interpreted as a string requiring base64 encoding" is
 written without giving the right context.
 
 So I'm just guessing that this might be the usual misunderstandings with use
 of base64 in LDIF. Read more about when LDIF requires base64-encoding here:
 
 http://tools.ietf.org/html/rfc2849
 
 To me everything looks right:
 
 Python 2.7.3 (default, Apr 14 2012, 08:58:41) [GCC] on linux2
 Type "help", "copyright", "credits" or "license" for more information.
 >>> 'ZGV0XDMzMTB3YmJccGc='.decode('base64').decode('utf-8')
 u'det\\3310wbb\\pg'
 
 
 What do you think is a problem?

Michael,
Thanks for the reply. The issues, I am sure, are in my code. I read the ldif
source file and end up with values such as 'det\3310wbb\pg' after the base64 encoded
entries are decoded.

The problem I am having is when I add this to an add/mod entry list and write
it back out. As it does not get re-encoded to base64, the ldif file ends up with a
text entry containing a ^] character; if I re-read it with the parser, the handle
method breaks midway through the entry dict, and so the last half re-appears disjoint
without a dn.

Like I said, I am pretty sure it's my poor understanding of decoding and encoding.
I am using the build from http://www.lfd.uci.edu/~gohlke/pythonlibs/ on a Windows
2008 R2 server.

I have re-implemented handle to create a cidict holding all the dn/entry pairs that
are parsed, as I then perform some processing such as manipulating attribute values in
the entry dict. I am pretty sure I am breaking things here. The data I am reading comes
from utf-16-le encoded files and has Unicode characters, as the source directory is
globally available, being written to in just about every country.

Is there a process for manipulating/adding data to the entry dict before I 
write it out that I
should adhere to? For example, if I am adding a new attribute to be composed of 
part of
another parsed attr for use in a modlist:

  {'customAttr': ['foo.{}.bar'.format(entry['uid'])]}

By looking at the value from above, 'det\3310wbb\pg', I gather the entry dict was parsed
into byte strings. I should have decoded this, whereas some of the data is Unicode and
as such I should have encoded it?

I really appreciate the time.

Grazie per tutto,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Ldap module and base64 encoding

2013-05-26 Thread Joseph L. Casale
Hi Michael,

 Processing LDIF is one thing, doing LDAP operations another.
 
 LDIF itself is meant to be ASCII-clean. But each attribute value can carry any
 byte sequence (e.g. attribute 'jpegPhoto'). There's no further processing by
 module LDIF - it simply returns byte sequences.
 
 The access protocol LDAPv3 mandates UTF-8 encoding for Unicode strings on the
 wire if attribute syntax is DirectoryString, IA5String (mainly ASCII) or 
 similar.
 
 So if you're LDIF input returns UTF-16 encoded attribute values for e.g.
 attribute 'cn' or 'o' or another attribute not being of OctetString or Binary
 syntax something's wrong with the producer of the LDIF data.

That could be; I am using MS's ldifde.exe to dump a Domino and AD directory for
comparative processing. The problem is I don't have much control over the data in
the directory, and I do know that DNs have non-ASCII characters unique to the

 I wonder what the string really is. At least the base64-encoding you provided
 before decodes as UTF-8 but I'm not sure whether it's the right sequence of
 Unicode code points you're expecting.
 
  >>> 'ZGV0XDMzMTB3YmJccGc='.decode('base64').decode('utf-8')
 u'det\\3310wbb\\pg'
 
 I still can't figure out what you're really doing though. I'd recommend to
 strip down your operations to a very simple test code snippet illustrating the
 issue and post that here.

So I have removed all my likely broken attempts at working with this data and will
soon have some simple code, but at this point I may have an indication of what is
awry with my data.

After parsing the data for a user, I am simply taking a value from the ldif file and
writing it back out to another, which fails; the value parsed is:

officestreetaddress:: T3R0by1NZcOfbWVyLVN0cmHDn2UgMQ==


  File "C:\Python27\lib\site-packages\ldif.py", line 202, in unparse
    self._unparseChangeRecord(record)
  File "C:\Python27\lib\site-packages\ldif.py", line 181, in _unparseChangeRecord
    self._unparseAttrTypeandValue(mod_type,mod_val)
  File "C:\Python27\lib\site-packages\ldif.py", line 142, in _unparseAttrTypeandValue
    self._unfoldLDIFLine(':: '.join([attr_type,base64.encodestring(attr_value).replace('\n','')]))
  File "C:\Python27\lib\base64.py", line 315, in encodestring
    pieces.append(binascii.b2a_base64(chunk))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xdf' in position 7: ordinal not in range(128)

> c:\python27\lib\base64.py(315)encodestring()
-> pieces.append(binascii.b2a_base64(chunk))
(Pdb) l
310     def encodestring(s):
311         """Encode a string into multiple lines of base-64 data."""
312         pieces = []
313         for i in range(0, len(s), MAXBINSIZE):
314             chunk = s[i : i + MAXBINSIZE]
315  ->         pieces.append(binascii.b2a_base64(chunk))
316         return "".join(pieces)
317
318
319     def decodestring(s):
320         """Decode a string."""
(Pdb) args
s = Otto-Meßmer-Straße 1

So moving up a frame or two and looking at the entry dict, I see a modlist 
entry of:
('streetAddress', [u'Otto-Me\xdfmer-Stra\xdfe 1']) which is correct:

In [2]: 'T3R0by1NZcOfbWVyLVN0cmHDn2UgMQ=='.decode('base64').decode('utf-8')
Out[2]: u'Otto-Me\xdfmer-Stra\xdfe 1'

Looking at the stack trace, I think I see the issue:
(Pdb) import base64
(Pdb) base64.encodestring(u'Otto-Me\xdfmer-Stra\xdfe 
1'.encode('utf-8')).replace('\n','')
'T3R0by1NZcOfbWVyLVN0cmHDn2UgMQ=='

I now have exactly the value I started with. Ensuring that wherever I handle the original
values I return UTF-8 decoded objects for use in a modlist to later write, and subclassing
LDIFWriter to override _unparseAttrTypeandValue to do the encoding, has
eliminated all the errors.
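Equivalently (a sketch; the helper name is made up), the data can be encoded to UTF-8 byte
strings just before each entry reaches the writer, instead of overriding the writer:

def to_utf8(entry):
    # Encode every unicode value to UTF-8 right before it is handed to
    # ldif.LDIFWriter; the writer's own base64 logic then works unchanged.
    encoded = {}
    for attr, values in entry.items():
        encoded[attr] = [v.encode('utf-8') if isinstance(v, unicode) else v
                         for v in values]
    return encoded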

What finally remains is ldifde.exe's output of what looks like U+00BF, an inverted
question mark, for some values; otherwise this issue looks solved.

Thanks for everything,
jlc



-- 
http://mail.python.org/mailman/listinfo/python-list


[NTSysADM] RE: Q for Nagios dudes...and ettes

2013-05-25 Thread Joseph L. Casale


 check_nrpe -H 172.16.1.61 -t 60 get_service -a 172.16.0.155 DNS

For what it's worth, you might want to check out check_wmi from 
http://www.edcint.co.nz/checkwmiplus/.
I always hated NRPE or installing agents when you really don't need them.
jlc
 
 



RE: Ldap module and base64 encoding

2013-05-25 Thread Joseph L. Casale
 Can you give an example of the code you have?

I actually just overrode the regex used by the method in the LDIFWriter class to be far
broader about what it interprets as a safe string. I really need to properly handle
reading, manipulating and writing non-ASCII data to solve this...

Shame there is no ldap module (with the ldifwriter) in Python 3.
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: authentication with python-ldap

2013-05-25 Thread Joseph L. Casale
 I have been doing the same thing and I tried to use java for testing the 
 credentials and they are correct. It works perfectly with java.
 I really don't know what we're doing wrong.


 You are accessing a protected operation of the LDAP server
 and it (the server) rejects it due to invalid credentials.
 You may have forgotten to pass on credentials (e.g. a password)
 or the credentials do not fit to the specified user
 (maybe the user does not exist at all).

Depending on the directory (which we don't know) and the code as well, the way you
authenticate might be the problem. Possibly Java met the directory's requirements with
the methods you used, whereas Python did not, given your code.

For example, certain operations in AD require specific transports to be used...
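For illustration, a minimal python-ldap bind sketch (the URI, bind DN and password are
placeholders); AD in particular usually wants referral chasing disabled, protocol version 3,
and for some operations an ldaps:// or StartTLS connection:

import ldap

conn = ldap.initialize('ldap://dc.example.com')
conn.protocol_version = 3
conn.set_option(ldap.OPT_REFERRALS, 0)   # avoids referral errors against AD
try:
    conn.simple_bind_s('CN=Some User,OU=Users,DC=example,DC=com', 'secret')
    print('bind ok')
except ldap.INVALID_CREDENTIALS:
    print('server rejected the DN/password pair')
finally:
    conn.unbind_s()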

jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Ldap module and base64 encoding

2013-05-24 Thread Joseph L. Casale
I have some data I am working with that is not being interpreted as a string requiring
base64 encoding when sent to the ldif module for output.

The base64 string parsed is ZGV0XDMzMTB3YmJccGc= and the raw string is det\3310wbb\pg.
I'll admit my understanding of the handling requirements of non-ASCII data in 2.7 is weak,
and as such I am failing at adjusting the regex that deduces whether the string contains
characters requiring base64 encoding when being output.
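For context, RFC 2849 only allows a value to be written in plain form if it matches its
SAFE-STRING grammar (roughly: no NUL, CR, LF or non-ASCII bytes anywhere, and no leading
space, ':' or '<'). A rough sketch of such a check, similar in spirit to the pattern the
ldif module uses:

import re

safe_string = re.compile(
    r'^[\x01-\x09\x0b\x0c\x0e-\x1f\x21-\x39\x3b\x3d-\x7f]'  # SAFE-INIT-CHAR
    r'[\x01-\x09\x0b\x0c\x0e-\x7f]*\Z')                     # SAFE-CHAR*

def needs_base64(value):
    # Empty values are fine in plain form; anything else must match the grammar.
    return value != '' and safe_string.match(value) is None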

Any insight, or nudges in the right direction would be appreciated!
Thanks,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Case insensitive dict

2013-05-21 Thread Joseph L. Casale
I was doing some work with the ldap module and required a ci dict that was case
insensitive but case preserving. It turned out the cidict class they 
implemented was
broken with respect to pop, it is inherited and not re implemented to work. 
Before
I set about re-inventing the wheel, anyone know of a working implementation?

I noticed twisted has one but it seems to omit pop.
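For reference, a minimal case-insensitive, case-preserving mapping with a working pop
might look like this (a sketch, not a drop-in replacement for python-ldap's cidict):

class CIDict(object):
    """Keys compare case-insensitively but keep their original spelling."""

    def __init__(self, data=None):
        self._data = {}          # lower-cased key -> (original key, value)
        for k, v in (data or {}).items():
            self[k] = v

    def __setitem__(self, key, value):
        self._data[key.lower()] = (key, value)

    def __getitem__(self, key):
        return self._data[key.lower()][1]

    def __delitem__(self, key):
        del self._data[key.lower()]

    def __contains__(self, key):
        return key.lower() in self._data

    def __iter__(self):
        return (orig for orig, _ in self._data.values())

    def keys(self):
        return [orig for orig, _ in self._data.values()]

    def items(self):
        return list(self._data.values())

    def pop(self, key, *default):
        try:
            return self._data.pop(key.lower())[1]
        except KeyError:
            if default:
                return default[0]
            raise

# e.g. d = CIDict({'GivenName': ['Joe']}); d.pop('givenname') -> ['Joe']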

Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [icinga-users] Notifications with full output of plugin

2013-05-06 Thread Joseph L. Casale
Hello Joseph,
there is:

$HOSTOUTPUT$
$HOSTLONGOUTPUT$
$HOSTPERFDATA$

$SERVICEOUTPUT$
$SERVICELONGOUTPUT$
$SERVICEPERFDATA$

Markus,
I had been using those (well the LONG* type written as per the manual) and 
after some
changes, notifications actually stopped.

Stopping Icinga and deleting objects.cache and retention.dat then restarting it 
brought
everything in order.

This is not the first time this has resolved some unexplained issues; is this known,
or is there anything, such as an incorrect config option, that can cause it?

Thanks,
jlc

--
Learn Graph Databases - Download FREE O'Reilly Book
Graph Databases is the definitive new guide to graph databases and 
their applications. This 200-page book is written by three acclaimed 
leaders in the field. The early access version is available now. 
Download your free book today! http://p.sf.net/sfu/neotech_d2d_may
___
icinga-users mailing list
icinga-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/icinga-users


[icinga-users] Notifications with full output of plugin

2013-05-04 Thread Joseph L. Casale
I have long plugin logging enabled and have added the perfdata macro to
a notification in the hope of receiving the full output. After the expected perfdata,
additional lines contain messages that are pipe-separated, and it seems Icinga
is dropping everything after the first pipe for the additional lines. Is this
configurable? It seems this doesn't happen in an older installation I also have.

Thanks,
jlc
--
Get 100% visibility into Java/.NET code with AppDynamics Lite
It's a free troubleshooting tool designed for production
Get down to code-level detail for bottlenecks, with 2% overhead.
Download for free and get started troubleshooting in minutes.
http://p.sf.net/sfu/appdyn_d2d_ap2
___
icinga-users mailing list
icinga-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/icinga-users


[icinga-users] Icinga 1.9 issues

2013-04-30 Thread Joseph L. Casale
I noticed the pgsql schema files needed a version bump, and the web package did not
provide a clear-cache script.

New interface looks great...

jlc
--
Introducing AppDynamics Lite, a free troubleshooting tool for Java/.NET
Get 100% visibility into your production application - at no cost.
Code-level diagnostics for performance bottlenecks with 2% overhead
Download for free and get started troubleshooting in minutes.
http://p.sf.net/sfu/appdyn_d2d_ap1
___
icinga-users mailing list
icinga-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/icinga-users


Re: [icinga-users] Icinga 1.9 issues

2013-04-30 Thread Joseph L. Casale
 can you please provide a little more details what you mean by that?

The core needed the dbversion changed from the shipped 1.8.0 to 1.9.0, as
it was otherwise complaining at startup of a version mismatch.

Thanks,
jlc
--
Introducing AppDynamics Lite, a free troubleshooting tool for Java/.NET
Get 100% visibility into your production application - at no cost.
Code-level diagnostics for performance bottlenecks with 2% overhead
Download for free and get started troubleshooting in minutes.
http://p.sf.net/sfu/appdyn_d2d_ap1
___
icinga-users mailing list
icinga-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/icinga-users


RE: Bad symbolic link in registry

2013-04-28 Thread Joseph L. Casale
Running regedt32 with elevated credentials? Ensured no running services are 
holding the key open?

Yeah, no luck...
~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin



RE: Bad symbolic link in registry

2013-04-28 Thread Joseph L. Casale
 How about a rename?

When I recreate the target so I can access it, if I rename the symlink, it accepts it
and renames the target, but reverts after a refresh, leaving the target renamed?

I am remote and it's a VM to which I don't have console access; what a PITA this is
turning out to be.

jlc
~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin



RE: Startup processes

2013-04-25 Thread Joseph L. Casale
If you can query for the process, can you not query the network?
Look up the gateway and ping it...

From: kz2...@googlemail.com
Sent: Thursday, April 25, 2013 6:11 AM
To: NT System Admin Issues
Subject: Startup processes

On a Windows system, is there a process that runs on startup that will only run 
if there is network connectivity present? I've got a strange requirement and I 
need to be able to tell when the network is available, if possible.

TIA,


JR


Sent from my Blackberry, which may be an antique but delivers email RELIABLY

~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin

~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin



RE: SCCM 2012 quick question

2013-04-17 Thread Joseph L. Casale
scclient.exe

From: James Rankin
Sent: Wednesday, April 17, 2013 6:30 AM
To: NT System Admin Issues
Subject: SCCM 2012 quick question

Anyone know what the executable name for Software Center in SCCM 2012 is? I've 
seen it suggested as scclient.exe and ccmsetup.exe; as I don't have a copy of 
this version, could anyone quickly confirm the right name for me?

Thanks in advance,



--
James Rankin
Technical Consultant (ACA, CCA, MCTS)
http://appsensebigot.blogspot.co.uk/

~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to 
listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin

~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin

RE: PowerShell - Dependent parameters

2013-04-12 Thread Joseph L. Casale
but I can't figure out how to tell it one parameter *depends* on another.

Create your parameter set, then set the few that depend on each other to be
mandatory?

There are some neater things you can do with compiled code, otherwise you
sometimes have to do more exotic validation after the param block...

jlc
~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin



Online backup providers

2013-04-08 Thread Joseph L. Casale
Anyone have any experience with jungledisk?
They offer a Linux client and have pretty cheap rates for large volumes of 
data. We are
retiring a private colocated backup and hoping to migrate to a commercial 
online solution.


Of the few that support Linux, this one looks pretty decent at first glance.


Thanks,
jlc
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org


RE: Online backup providers

2013-04-08 Thread Joseph L. Casale
 I haven't looked at jungledisk, but I use SpiderOak for home use. You get 
 1-2GB free which is all I need
 for critical stuff. The linux client works well and is provided in RPM format 
 w/ repo I believe. Their data
 rates may not be competitive though. 


I use them for personal use as well. Expensive (not as bad as rsync.net), and honestly
their CLI client leaves a bit to be desired in terms of granularity; however, I do like
them, we just can't afford them.


Thanks!
jlc
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org


RE: Decorator help

2013-03-30 Thread Joseph L. Casale
 When you say class vars, do you mean variables which hold classes?


You guessed correctly, and thanks for pointing out the ambiguity in my 
references.


 The one doesn't follow from the other. Writing decorators as classes is 

 fairly unusual. Normally, they will be regular functions.


I see, this I didn't know. I'll stick to this guideline now.


 A more complicated case is where you need to do some pre-processing, and 
 you *don't* want that calculation repeated every time the method is
 called. Decorators are fantastic for that case too, but here you cannot
 access instance attributes, since the instance doesn't exist yet. But you
 can access *class attributes*, as more-or-less ordinary local variables
 *inside* the class definition. Here's a working sketch of the sort of
 thing you can do. Copy and paste the following into a Python interactive
 session, and then see if you can follow what is being done when.

 Is your mind boggled yet? :-)
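An illustration in that spirit (made-up names, not the example from the thread): the
decorator factory runs at class-definition time, where class attributes are visible as
ordinary local names, so any pre-computation happens once rather than per call:

def scaled_by(factor):
    # Runs once, when the class body is executed.
    table = [factor * i for i in range(10)]
    def decorator(func):
        def wrapper(self, i):
            return func(self, table[i])
        return wrapper
    return decorator

class Widget(object):
    SCALE = 3                # class attribute, usable as a local name below

    @scaled_by(SCALE)        # SCALE is available here, before any instance exists
    def lookup(self, value):
        return value

# Widget().lookup(2) -> 6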


Steven,
That was some of the coolest stuff I have seen in a while. I had to wait until I had
enough time to actually run this through and utilize it in my own work. I haven't
enjoyed Python this much since I first started using it.


Can't thank you enough for the time and thorough example, that imparted loads
of insight.


jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Decorator help

2013-03-27 Thread Joseph L. Casale
I have a class which sets up some class vars, then several methods that are 
passed in data
and do work referencing the class vars.


I want to decorate these methods; the decorator needs access to the class vars, so I
thought about making the decorator its own class and allowing it to accept args.


I was hoping to do all the work on in_data from within the decorator, which requires
access to several MyClass vars. I'm not clear on the syntax/usage with this approach
here; any guidance would be greatly appreciated!


class MyDecorator(object):

    def __init__(self, arg1, arg2):
        self.arg1 = arg1
        self.arg2 = arg2
    ...


class MyClass(object):
    def __init__(self):
        self.var_a = 
    
    @MyDecorator(...)
    def meth_one(self, in_data):
        ...


Thanks!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Decorator help

2013-03-27 Thread Joseph L. Casale
 So  decorators will never take instance variables as arguments (nor should 
they, since no instance
 can possibly exist when they execute).


Right, I never thought of it that way; my only use of them has been trivial, in
non-class scenarios so far.


 Bear in mind, a decorator should take a callable as an argument (and any 
 number of 'static' parameters
 you want to assign  it), and return another callable.
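For example, a plain function decorator still gets at instance state, because self
arrives as the first argument when the wrapped method is finally called (a minimal
sketch; the names and the placeholder value are made up):

import functools

def logs_calls(func):
    @functools.wraps(func)
    def wrapper(self, in_data, *args, **kwargs):
        # Instance attributes are available here, at call time.
        print('calling %s with var_a=%r' % (func.__name__, self.var_a))
        return func(self, in_data, *args, **kwargs)
    return wrapper

class MyClass(object):
    def __init__(self):
        self.var_a = 'something'   # placeholder value

    @logs_calls
    def meth_one(self, in_data):
        return in_data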


Got it, and thanks for the detail as well!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


Sound volume in f18 and xfce

2013-03-24 Thread Joseph L. Casale
Each time I start my laptop, the sound is set to 153% or whatever the max is. A tap of
the slider in the panel, which is at 100%, resets it to 100% (and not past that) and it
sounds fine.

Any idea how to stop this from resetting to max each startup?

Thanks!
jlc
-- 
users mailing list
users@lists.fedoraproject.org
To unsubscribe or change subscription options:
https://admin.fedoraproject.org/mailman/listinfo/users
Guidelines: http://fedoraproject.org/wiki/Mailing_list_guidelines
Have a question? Ask away: http://ask.fedoraproject.org


RE: what spam or edge server are you using?

2013-03-11 Thread Joseph L. Casale
No kidding, I use it at a few places as well. One of the guys on this list actually used
to be a contributor at one point. Nice piece of ware; the only shame is that it's Perl.

If I never have to work with Perl again, it's too soon :)

jlc

From: Tim Evans
Sent: Monday, March 11, 2013 7:13 PM
To: MS-Exchange Admin Issues
Subject: RE: what spam or edge server are you using?

I use assp. It's a very powerful, but complex, open source program available at 
http://sourceforge.net/projects/assp.

...Tim

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe exchangelist

RE: what spam or edge server are you using?

2013-03-11 Thread Joseph L. Casale
Dude, you called him out? That was not a particularly pleasant ordeal...



From: Kurt Buff
Sent: Monday, March 11, 2013 9:45 PM
To: MS-Exchange Admin Issues
Subject: Re: what spam or edge server are you using?

Paging Mr. Espinola...

Mr. Espinola to the white courtesy phone, please...

Kurt

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe exchangelist



RE: what spam or edge server are you using?

2013-03-11 Thread Joseph L. Casale
Yeah,
Maia probably is a more enterprisable offering, if you could suggest such a thing for
these apps. Problem is, it's also Perl, blegh...


My current post allots me the privilege of hacking Python all day; I can't tell you how
that has spoiled me :)


jlc



From: Kurt Buff
Sent: Monday, March 11, 2013 10:13 PM
To: MS-Exchange Admin Issues
Subject: Re: what spam or edge server are you using?

Don't know anything about that - all I know is that he touted it for
quite a while, and I actually looked it over before settling on Maia
Mailguard.

Bringing up ASSP brought him back to mind, and I haven't seen him here
in a long while.

Hope he's doing well, really. He added a lot to the list.

Kurt

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe exchangelist



Switch statement

2013-03-10 Thread Joseph L. Casale
I have a switch statement composed using a dict:


switch = {
    'a': func_a,
    'b': func_b,
    'c': func_c
}
switch.get(var, default)()


As a result of multiple functions per choice, it migrated to:



switch = {
    'a': (func_a1, func_a2),
    'b': (func_b1, func_b2),
    'c': (func_c, )
}



for f in switch.get(var, (default, )):
    f()


As a result of only some of the functions now requiring unique arguments, I 
presume this
needs to be migrated to an if/else statement? Is there a way to maintain the 
switch style with
the ability in this scenario to cleanly pass args only to some functions?
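For illustration, one way to keep the table-driven style is to bind the per-case
arguments when the table is built, so every entry stays a zero-argument callable
(a sketch; the functions and argument values are placeholders):

import functools

def func_a1(x):    return 'a1:%s' % x
def func_a2():     return 'a2'
def func_b1(x, y): return 'b1:%s,%s' % (x, y)
def default():     return 'default'

switch = {
    'a': (functools.partial(func_a1, 1), func_a2),   # args bound via partial
    'b': (lambda: func_b1(2, 3),),                    # or bound via a lambda
}

var = 'a'
results = [f() for f in switch.get(var, (default,))]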


Thanks,
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Switch statement

2013-03-10 Thread Joseph L. Casale
 switch = { 

 'A': functools.partial(spam, a),
 'B': lambda b, c=c: ham(b, c),
 'C': eggs,
 }
 
 switch[letter](b)

That's cool, never even thought to use lambdas.

 functools.partial isn't always applicable, but when it is, you should
 prefer it over lambda since it will be very slightly more efficient.


Ok, haven't used this before but I will give it a read!


Much appreciated Steven!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Switch statement

2013-03-10 Thread Joseph L. Casale
 Or could you do something like:

 arguments_to_pass = [list of some sort]
 switch.get(var, default)(*arguments_to_pass)

Steven's lambda suggestion was most appropriate. Within the switch, there
are functions called with no arguments, or some variation of arguments. It was not
easy to pass them in after the fact, especially since the same function may
have different args depending on the case.

The lambda worked well.

Thanks guys!
jlc
-- 
http://mail.python.org/mailman/listinfo/python-list


RE: Running Powershell script as scheduled task fails with 0x1

2013-03-08 Thread Joseph L. Casale
Sorry to reply out of thread order (I don't have the original).

No need to sigh, ditch the bad posts on the net and run `powershell /?`

A ps1 file is not a command. You need to invoke the script.

-Original Message-
From: Michael Leone [mailto:oozerd...@gmail.com]
Sent: Thursday, March 7, 2013 1:59 PM
To: NT System Admin Issues
Subject: Re: Running Powershell script as scheduled task fails with 0x1

On Thu, Mar 7, 2013 at 2:37 PM, Webster 
webs...@carlwebster.com wrote:
 I thought it was -File c:\scripts\myscript.ps1.

See, this is what's infuriating. Most of the examples I have found say you 
don't need -Command or -File. Some say -Command. Some say the 2 are 
equivalent.

SIGH

So I changed it to -File, and made sure the folder holding the script itself 
had no spaces in its name. And then it all started working ...

I thought for sure I had tried it with -File as well, but maybe not.

Anyways, it all seems good now. Thanks.



 Carl Webster
 Consultant and Citrix Technology Professional
 http://www.CarlWebster.com


 -Original Message-
 From: Michael Leone [mailto:oozerd...@gmail.com]
 Sent: Thursday, March 07, 2013 2:30 PM
 To: NT System Admin Issues
 Subject: Running Powershell script as scheduled task fails with 0x1

 I can't understand why my script is failing. I can run it from a Powershell 
 prompt (I have to Run as administrator, because the script is deleting some 
 files in a backup directory). But it works perfectly when I do it that way. 
 But when I create a Scheduled Task to do it, it fails with 0x1.

 I create a Task, tell it to use an account with domain admin
 privileges. Tell it to run whether the user is logged on or not, and
 to run with highest privileges The action calls a program
 (C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe). In Add
 arguments, I have

 -Command C:\Scripts\myscript.ps1




~ Finally, powerful endpoint security that ISN'T a resource hog! ~
~ http://www.sunbeltsoftware.com/Business/VIPRE-Enterprise/  ~

---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe ntsysadmin

RE: Rawhide installer and partitioning

2013-03-06 Thread Joseph L. Casale
 We'd need more details on the failure, but in general terms, Rawhide's 

 installer is having a lot of work done on it right now, it's not
 surprising that it's busted, and individual bugs don't necessarily need
 reporting at this time, as the code (particularly partitioning stuff) is
 under constant revision. It might be best to wait till things calm down
 a little before filing specific bugs.


I have a 1/2 TB disk with a couple of partitions up front for Windows.
Booting the last bootable live composition and trying to choose an
automatic partitioning scheme suggests there is no room.


 If you have time to drop by
 #anaconda you might be able to run your issue by the team there
 informally though, and check if it's something they're aware of.


I'll certainly head over there this morning as soon as I can, thanks!

jlc
-- 
test mailing list
test@lists.fedoraproject.org
To unsubscribe:
https://admin.fedoraproject.org/mailman/listinfo/test

RE: 2013 db won't trunc logs after vss backup

2013-03-05 Thread Joseph L. Casale
 Anything using snapshot backups will have to be specifically updated to 
 support 2013. 

 
 DPM 2012 SP1 has been updated, my backup scripts have been updated :) 
 (although, honestly,
 they just depend on support from DiskShadow.exe), but those are the only 
 things that I know
 of that have been specifically updated for 2013.



Yeah, these are diskshadow scripts. What did you need to do differently? 
Probably my issue :)


Thanks a lot!
jlc
---
To manage subscriptions click here: 
http://lyris.sunbelt-software.com/read/my_forums/
or send an email to listmana...@lyris.sunbeltsoftware.com
with the body: unsubscribe exchangelist



<    2   3   4   5   6   7   8   9   10   11   >