If I have decoded correctly what you were trying to say, use a trigger
like this, and duplicate it for UPDATE:
Thanks Clemens, this got me sorted out.
jlc
___
sqlite-users mailing list
sqlite-users@sqlite.org
Hi,
I have a table as follows:
CREATE TABLE t (
id INTEGER NOT NULL,
a VARCHAR NOT NULL COLLATE 'nocase',
b VARCHAR COLLATE 'nocase',
c VARCHAR CHECK (c IN ('foo', 'bar', NULL)) COLLATE 'nocase',
PRIMARY KEY (id)
);
How does one
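A minimal runnable sketch of the table above, via Python's sqlite3, showing what the NOCASE collation buys: '=' comparisons on those columns ignore case (the inserted row is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE t (
        id INTEGER NOT NULL,
        a VARCHAR NOT NULL COLLATE NOCASE,
        b VARCHAR COLLATE NOCASE,
        c VARCHAR CHECK (c IN ('foo', 'bar', NULL)) COLLATE NOCASE,
        PRIMARY KEY (id)
    )
""")
conn.execute("INSERT INTO t (id, a) VALUES (1, 'Hello')")

# NOCASE makes '=' comparisons case-insensitive on these columns.
rows = conn.execute("SELECT id FROM t WHERE a = 'HELLO'").fetchall()
print(rows)  # [(1,)]
```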
If that's a bit heavyweight (and confusing; it's not all free software,
since some of it is under non-free license terms), there are other
options.
pyBarcode (http://pythonhosted.org/pyBarcode/) says it's a
pure-Python library that takes a barcode type and the value, and
generates an SVG of the
I need to convert a proprietary MS Access based printing solution into
something I can maintain. Seems there is plenty available for generating
barcodes in Python, so for those who have been down this road I was hoping to
get a pointer or two.
I need to create some type of output,
I assume this is why snapshots.pfsense.org is offline (or at least not
answering) right now?
The release announcement has links to the upgrade binaries; not all the
mirrors are populated yet, so find one that is. The same announcement links to
an upgrade guide that explains how to
perform the
> Yes, that's what I suspected. Because your table_a has no natural key, you
> have
> no good way to select the auto-generated id value. You can find out what the
> last
> auto-generated value was, which lets you work a row at a time, but you're
> really
> suffering from a poor design
Yes, that's what I suspected. Because your table_a has no natural key, you
have
no good way to select the auto-generated id value. You can find out what the
last
auto-generated value was, which lets you work a row at a time, but you're
really
suffering from a poor design choice.
This pragma speeds up most processes 10-20 times (yes 10-20):
pragma synchronous=OFF
See the SQLITE documentation for an explanation.
I've found no problems with this setting.
Aside from database integrity and consistency? :) I have that one set
to OFF as my case mandates data processing and
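For reference, a small sketch of setting the pragma from Python's sqlite3; note the trade-off raised above: with synchronous=OFF a crash or power loss can corrupt the database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Trade durability for speed: with synchronous=OFF, SQLite does not
# wait for data to reach disk before continuing.
conn.execute("PRAGMA synchronous=OFF")
mode = conn.execute("PRAGMA synchronous").fetchone()[0]
print(mode)  # 0 means OFF
```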
> Look up the last_insert_rowid() you want and store it in your programming
> language. That's what programming languages are for. But if you want to do
> it less efficiently ...
Hey Simon,
That is the procedure I utilize normally, the requirement for this specific
case is
that the entire set
> If I understand the question, and there is no key other than the
> auto-incrementing
> integer, there might not be a good way. It sounds like the database's design
> may
> have painted you into a corner.
Hi James,
Well, after inserting one row into table A which looks like (without
Hi,
What is the most efficient way to insert several records into a table which
has a fk ref to the auto incrementing pk of another insert I need to do in the
same statement.
I am migrating some code away from using the SQLAlchemy orm to using the
Core. The way the data is returned to me is a
If I understand the question, and there is no key other than the
auto-incrementing
integer, there might not be a good way. It sounds like the database's design
may
have painted you into a corner.
Hi James,
Well, after inserting one row into table A which looks like (without specifying
Look up the last_insert_rowid() you want and store it in your programming
language. That's what programming languages are for. But if you want to do
it less efficiently ...
Hey Simon,
That is the procedure I utilize normally, the requirement for this specific
case is
that the entire set of
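A hedged sketch of that row-at-a-time procedure with sqlite3's cursor.lastrowid (the table and column names here are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (id INTEGER PRIMARY KEY, note TEXT);
    CREATE TABLE table_b (id INTEGER PRIMARY KEY,
                          a_id INTEGER REFERENCES table_a(id),
                          val TEXT);
""")

cur = conn.cursor()
cur.execute("INSERT INTO table_a (note) VALUES ('parent')")
parent_id = cur.lastrowid  # the value last_insert_rowid() would return

# Reuse the captured id for the dependent rows.
cur.executemany("INSERT INTO table_b (a_id, val) VALUES (?, ?)",
                [(parent_id, 'x'), (parent_id, 'y')])
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM table_b WHERE a_id = ?",
                   (parent_id,)).fetchone()[0])  # 2
```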
> Plus, of course, index will only ever be used for operations where you have
> overridden the default collating sequence for the operation, for example by
> specifying collate nocase in the join expression, or adding the collate
> nocase to
> the order by or group by.
I assume this explains why
Plus, of course, index will only ever be used for operations where you have
overridden the default collating sequence for the operation, for example by
specifying collate nocase in the join expression, or adding the collate
nocase to
the order by or group by.
I assume this explains why the
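The point can be demonstrated with a small sketch (table and index names are made up): an index built with COLLATE NOCASE is only considered when the operation overrides the default collation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE names (name TEXT)")  # default BINARY collation
conn.execute("CREATE INDEX idx_nocase ON names (name COLLATE NOCASE)")

def plan(sql):
    # The detail column of EXPLAIN QUERY PLAN says SEARCH (index lookup)
    # or SCAN (full pass over the rows).
    return conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()[0][-1]

# Default collation: the NOCASE index cannot serve this comparison.
print(plan("SELECT name FROM names WHERE name = 'a'"))
# Overriding the collation in the expression lets SQLite use the index.
print(plan("SELECT name FROM names WHERE name = 'a' COLLATE NOCASE"))
```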
I have been battling an issue that hopefully someone here has insight into.
I have a database with a few tables I perform a query against with some
joins against columns collated with NOCASE that leverage = comparisons.
Running the query on the database opened in sqlitestudio returns the
results in
> Have you tried using '=' ?
>
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to. An example would
> be
>
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
Simon,
That took this query from not finishing
> > 0 0 1 SCAN TABLE d_table_b AS da (~10 rows)
> >
>
> Is this the index you referenced in your reply to Simon?
> Maybe you are using the wrong index/column?
I'll recheck, I am also reading up on indexes as they relate to optimizing
queries. Could be I made a mistake.
> I had
> LIKE is used when comparing strings with wildcards. For example, val LIKE
> 'abra%' (which will match 'abraCaDAbra' and 'abrakadee').
>
> If there are no wildcards you should be using =, not LIKE. LIKE will/should
> always indicate that a table or index scan is required, perhaps of the whole
>
> Have you tried using '=' ?
>
> Also if you declare the columns as COLLATE NOCASE in your table definition,
> then using '=' will definitely work the way you want it to. An example would
> be
>
> CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
>
> Simon.
I did and it excluded the
> Hi,
> Can you do "DESCRIBE QUERY PLAN " and post results here?
>
> Also, what do you mean by "unbearable at scale"? Did you measure it? What
> is the result?
>
> Thank you.
It doesn't finish with maybe 4 or 5 hours run time.
Sorry, do you mean "explain query plan ..."?
0 0 1
I have a query that is unbearable at scale, for example when
s_table_a and s_table_b have 70k and 1.25M rows.
SELECT s.id AS s_id
,s.lid AS s_lid
,sa.val AS s_sid
,d.id AS d_id
,d.lid AS d_lid
FROM s_table_b sa
JOIN d_table_b da ON
(
da.key=sa.key
Hi,
Can you do DESCRIBE QUERY PLAN your_query and post results here?
Also, what do you mean by unbearable at scale? Did you measure it? What
is the result?
Thank you.
It doesn't finish with maybe 4 or 5 hours run time.
Sorry, do you mean explain query plan ...?
0 0 1
Have you tried using '=' ?
Also if you declare the columns as COLLATE NOCASE in your table definition,
then using '=' will definitely work the way you want it to. An example would
be
CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
Simon.
I did and it excluded the comparisons whose
LIKE is used when comparing strings with wildcards. For example, val LIKE
'abra%' (which will match 'abraCaDAbra' and 'abrakadee').
If there are no wildcards you should be using =, not LIKE. LIKE will/should
always indicate that a table or index scan is required, perhaps of the whole
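A runnable illustration of the difference, assuming a toy table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (val TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [('abraCaDAbra',), ('abrakadee',), ('other',)])

# LIKE does wildcard matching (case-insensitive for ASCII by default).
like = conn.execute("SELECT val FROM t WHERE val LIKE 'abra%'").fetchall()
print(sorted(v for (v,) in like))  # ['abraCaDAbra', 'abrakadee']

# '=' is an exact, collation-aware comparison; no wildcard expansion.
eq = conn.execute("SELECT val FROM t WHERE val = 'abraCaDAbra'").fetchall()
print([v for (v,) in eq])  # ['abraCaDAbra']
```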
0 0 1 SCAN TABLE d_table_b AS da (~10 rows)
Is this the index you referenced in your reply to Simon?
Maybe you are using the wrong index/column?
I'll recheck, I am also reading up on indexes as they relate to optimizing
queries. Could be I made a mistake.
I had the same
Have you tried using '=' ?
Also if you declare the columns as COLLATE NOCASE in your table definition,
then using '=' will definitely work the way you want it to. An example would
be
CREATE TABLE myTable (myName TEXT COLLATE NOCASE)
Simon,
That took this query from not finishing in 5
I'm using Python 2.7 under Windows and am trying to run a command line
program and process the programs output as it is running. A number of
web searches have indicated that the following code would work.
import subprocess
p = subprocess.Popen(D:\Python\Python27\Scripts\pip.exe list
You can probably do something similar using sub commands
(http://docs.python.org/2/library/argparse.html#sub-commands).
The problem here is that argparse does not pass the subparser into the
parsed args and shared args between subparsers need to be declared
each time. Come execution time, when
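For context, a minimal sketch of the sub-command mechanism being discussed (the command and option names are made up); passing dest= to add_subparsers at least records which sub-command ran:

```python
import argparse

parser = argparse.ArgumentParser(prog='tool')
# dest='command' stores the chosen sub-command name in the parsed args.
subparsers = parser.add_subparsers(dest='command')

fetch = subparsers.add_parser('fetch')
fetch.add_argument('--url')

push = subparsers.add_parser('push')
push.add_argument('--force', action='store_true')

args = parser.parse_args(['fetch', '--url', 'http://example.com'])
print(args.command, args.url)  # fetch http://example.com
```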
I think you are looking for exclusive groups:
http://docs.python.org/2.7/library/argparse.html#argparse.add_mutually_exclusive_group
No. That link's first doc line in that method shows the very point we are all
discussing:
discussing:
Create a mutually exclusive group. argparse will make sure that
Oh hai - as I was reading the documentation, look what I found:
http://docs.python.org/2/library/logging.html#filter
Methinks that should do exactly what you want.
Hi Wayne,
I was too hasty when I looked at filters as I didn't think they could do
what I wanted. Turns out a logging object sent
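A sketch of the filter approach from that link: rejecting records that carry exc_info (which logging.exception always attaches) keeps tracebacks out of one handler while other records still pass; the handler setup is hypothetical:

```python
import logging
import sys

class NoExceptionFilter(logging.Filter):
    """Reject records carrying exception info (e.g. from logging.exception)."""
    def filter(self, record):
        return record.exc_info is None

log = logging.getLogger('demo')
log.setLevel(logging.DEBUG)

console = logging.StreamHandler(sys.stdout)
console.setLevel(logging.INFO)
console.addFilter(NoExceptionFilter())
log.addHandler(console)

log.info('plain message')       # reaches the console
try:
    1 / 0
except ZeroDivisionError:
    log.exception('boom')       # filtered out of the console handler
```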
Speaking to the OP: personally, I don't like the approach of putting data
access methods at the module level to begin with. I'd rather use a class.
Just because it makes sense to have a singleton connection now doesn't mean it
will always make sense as your application grows.
In fact, the
Does anyone know what the limit is on the size of the Notes field in
AD? I can't seem to search up a limit.
Every attr, every version of AD, and all its properties:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms675090(v=vs.85).aspx
Anyone using this product? I have cobbled together some 'hot folder'
solutions
for a number of things at work, but this looks like it would be easier to use
if it is stable and works.
http://www.folderactions.com/
Ugh,
There is an opensource project that uses the native filesystem
I posted this to the sqlite list but I suspect there are more C oriented users
on
that list than Python, hopefully someone here can shed some light on this one.
I have created a python module that I import within several other modules that
simply opens a connection to an sqlite file and defines
I have a couple handlers applied to a logger for a file and console destination.
Default levels have been set for each, INFO+ to console and anything to file.
How does one prevent logging.exception from going to a specific handler when
it falls within the desired levels?
Thanks,
jlc
--
I have created a python module that I import within several files that simply
opens a connection to an sqlite file and defines several methods which each
open a cursor before they either select or insert data. As the module opens a
connection, wherever I import it I can call a commit against the
Cute, if paging confused your developers, wait'll they encounter range
retrieval.
I can only imagine the protest then:)
heh,
jlc
From: listsad...@lists.myitforum.com [mailto:listsad...@lists.myitforum.com] On
Behalf Of David Lum
Sent: Wednesday, July 31, 2013 2:41 PM
To:
I have some queries that utilize instr wrapped by substr but the old
version shipped in 2.7.5 doesn't have instr support.
Has anyone encountered this and utilized other existing functions
within the shipped 3.6.21 sqlite version to accomplish this?
Thanks,
jlc
--
Has anyone encountered this and utilized other existing functions
within the shipped 3.6.21 sqlite version to accomplish this?
Sorry guys, forgot about create_function...
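For anyone landing here later, a sketch of the create_function approach: supply instr from Python when the bundled SQLite lacks it (the implementation assumes SQLite's 1-based instr semantics):

```python
import sqlite3

def instr(haystack, needle):
    # SQLite's instr() is 1-based and returns 0 when not found.
    if haystack is None or needle is None:
        return None
    return haystack.find(needle) + 1

conn = sqlite3.connect(":memory:")
conn.create_function("instr", 2, instr)

print(conn.execute("SELECT instr('abcdef', 'cd')").fetchone()[0])  # 3
print(conn.execute("SELECT substr('abcdef', instr('abcdef', 'cd'))"
                   ).fetchone()[0])  # cdef
```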
--
http://mail.python.org/mailman/listinfo/python-list
> Will the SQL 1969 "EXCEPT" compound operator not work for some reason?
Worked perfect, my sql is weak as I didn't even know of this one...
Thanks!
jlc
Hey guys,
I am trying to left join the results of two selects that both look exactly like
this:
SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a
Both tables have the exact data type and format, I need to reformat each
table's results, then join and return only what is in
Will the SQL 1969 EXCEPT compound operator not work for some reason?
Worked perfect, my sql is weak as I didn't even know of this one...
Thanks!
jlc
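A minimal sketch of the EXCEPT approach with the SUBSTR/INSTR reformatting from the question (the sample data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE table_a (col TEXT);
    CREATE TABLE table_b (col TEXT);
    INSERT INTO table_a VALUES ('x-string-one'), ('x-string-two');
    INSERT INTO table_b VALUES ('y-string-one');
""")

# EXCEPT keeps rows of the first SELECT that are absent from the second.
rows = conn.execute("""
    SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_a
    EXCEPT
    SELECT DISTINCT SUBSTR(col, INSTR(col, 'string')) AS name FROM table_b
""").fetchall()
print(rows)  # [('string-two',)]
```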
> It is perfectly allowed to open multiple cursors against a single connection.
> You can only execute one
> statement per cursor at a time, but you can have multiple cursors running
> from the same connection:
>
> cr1 = cn.cursor()
> cr2 = cn.cursor()
>
> cr1.execute('select ...')
> while
It is perfectly allowed to open multiple cursors against a single connection.
You can only execute one
statement per cursor at a time, but you can have multiple cursors running
from the same connection:
cr1 = cn.cursor()
cr2 = cn.cursor()
cr1.execute('select ...')
while True:
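A hedged, self-contained version of the two-cursor pattern described above (the table names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE children (parent_id INTEGER, item TEXT);
    CREATE TABLE results (parent TEXT, item TEXT);
    INSERT INTO parents VALUES (1, 'a'), (2, 'b');
    INSERT INTO children VALUES (1, 'x'), (1, 'y'), (2, 'z');
""")

cr1 = conn.cursor()   # iterates the outer result set
cr2 = conn.cursor()   # runs the per-row lookups and inserts

cr1.execute("SELECT id, name FROM parents")
for pid, name in cr1:
    cr2.execute("SELECT item FROM children WHERE parent_id = ?", (pid,))
    for (item,) in cr2.fetchall():
        # "some processing" stands in for the real work
        cr2.execute("INSERT INTO results VALUES (?, ?)", (name, item.upper()))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM results").fetchone()[0])  # 3
```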
From: sqlite-users-boun...@sqlite.org on behalf of Petite Abeille
Sent: Wednesday, July 17, 2013 1:25 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Guidance with Python and nested cursors
On Jul 17, 2013, at 9:07 PM, Joseph L. Casale
I am using Python to query a table for all its rows, for each row, I query
related rows from a
second table, then perform some processing and insert in to a third table.
What is the technically correct approach for this? I would rather not
accumulate all of the first
tables data to make one off
I have a dict of lists. I need to create a list of 2 tuples, where each tuple
is a key from
the dict with one of the keys list items.
my_dict = {
    'key_a': ['val_a', 'val_b'],
    'key_b': ['val_c'],
    'key_c': []
}
[(k, x) for k, v in my_dict.items() for x in v]
This works, but I need to
Yeah, it's remarkably easy too! Try this:
[(k, x) for k, v in my_dict.items() for x in v or [None]]
An empty list counts as false, so the 'or' will then take the second option,
and iterate over the one-item list with None in it.
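The whole exchange condenses to this runnable check:

```python
my_dict = {
    'key_a': ['val_a', 'val_b'],
    'key_b': ['val_c'],
    'key_c': []
}

# `v or [None]` substitutes a one-item [None] list when v is empty,
# so keys with no values still produce a pair.
pairs = [(k, x) for k, v in my_dict.items() for x in v or [None]]
print(sorted(pairs))
# [('key_a', 'val_a'), ('key_a', 'val_b'), ('key_b', 'val_c'), ('key_c', None)]
```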
Right, I overlooked that!
Much appreciated,
jlc
--
Well, technically it's
func.func_closure[0].cell_contents.__name__
but of course you cannot know that for the general case.
Hah, I admit I lacked perseverance in looking at this in PyCharm's debugger,
as I missed that.
Much appreciated!
jlc
--
I have a set of methods which take args that I decorate twice,
def wrapped(func):
    def wrap(*args, **kwargs):
        try:
            val = func(*args, **kwargs)
            # some work
        except BaseException as error:
            log.exception(error)
            return []
        return val
    return wrap
If you don't want to do that, you'd need to use introspection of a
remarkably hacky sort. If you want that, well, it'll take a mo.
After some effort I'm pretty confident that the hacky way is impossible.
Hah, I fired it in PyCharm's debugger and spent a whack of time myself, thanks
for the
$ rtorrent foobar.torrent
rtorrent: symbol lookup error: rtorrent: undefined symbol:
_ZN7torrent10ThreadBase8m_globalE
Why is this happening?
The first hit on google leads you to
https://github.com/rakshasa/rtorrent/issues/81
which points you to https://github.com/repoforge/rpms/issues/206
So I have an XFS file system within LVM which has an external log.
A snapshot volume is created w/o issue;
But when i try to mount the file system;
mount: /dev/mapper/vg_spock_data-datasnapshot already mounted or /snapshot busy
So the filesystem was built requiring an external log device?
Let
I am trying to invoke a binary that requires DLLs in two places, all of
which are included in the PATH env variable in Windows. When running
this binary with Popen it cannot find either; passing env=os.environ
to Popen made no difference.
Anyone know what might cause this or how to work around
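One hedged workaround sketch: build the child's environment explicitly, since env= replaces rather than merges the environment the child sees (the DLL paths are placeholders):

```python
import os
import subprocess

# Start from a copy of the parent's environment, then prepend the DLL
# directories explicitly so the child can resolve them.
env = os.environ.copy()
env['PATH'] = os.pathsep.join([r'C:\libs\one', r'C:\libs\two',
                               env.get('PATH', '')])

# child = subprocess.Popen(['tool.exe'], env=env)  # tool.exe is hypothetical
print(r'C:\libs\one' in env['PATH'])  # True
```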
Same thing happens to me. I can run netflix-desktop from the Ubuntu PPA on
this
same hardware in Debian Wheezy, but my results match yours with the Fedora
package.
What desktop are you using? I was tempted to try another, hoping that was the
only issue.
Maybe you can save me the grief.
Thanks
I have a use where writing an interim file is not convenient and I was hoping to
iterate through maybe 100k lines of output by a process as its generated or
roughly anyways.
Seems to be a common question on ST, and more easily solved in Linux.
Anyone currently doing this with Python 2.7 in
You leave out an awful amount of detail. I have no idea what ST is, so
I'll have to guess your real problem.
Ugh, sorry guys, it's been one of those days, the post was rather useless...
I am using Popen to run the exe with communicate() and I have sent stdout to
PIPE without luck. Just not
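A sketch of reading a child's stdout line by line as it is produced, avoiding communicate() (which blocks until the process exits); a Python one-liner stands in for the real exe:

```python
import subprocess
import sys

# -u forces unbuffered output in the child so lines arrive promptly.
child = subprocess.Popen(
    [sys.executable, '-u', '-c',
     "for i in range(3): print('line %d' % i)"],
    stdout=subprocess.PIPE,
)

# Iterating the pipe yields each line as soon as the child flushes it.
lines = []
for raw in iter(child.stdout.readline, b''):
    lines.append(raw.decode().rstrip())
    print('got:', lines[-1])
child.wait()
```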
Anyone figured out how to get netflix working on f18x64?
I have tried the netflix-desktop-0.2.2-1.fc18.noarch rpm and manually installed
silverlight with:
WINEARCH=win64 WINEPREFIX=/home/jcasale/.netflix-desktop wine Silverlight.exe /q
as well as the automated installer from
On Linux, an rsync command and exclude_file contents of:
# cat exclude_file
/etc/alsa
# rsync -a --delete --delete-excluded --exclude-from=exclude_file /etc
server::module
properly excludes /etc/alsa but not any file within /etc's directories that is
named alsa.
On Windows I don't seem to be
On Linux, an rsync command and exclude_file contents of:
# cat exclude_file
/etc/alsa
# rsync -a --delete --delete-excluded --exclude-from=exclude_file /etc
server::module
properly excludes /etc/alsa but not any file within /etc's directories that
is named alsa.
Here the exclude
Note that all modules in python-ldap up to 2.4.10 including module 'ldif'
expect raw byte strings to be passed as arguments. It seems to me you're
passing a Unicode object in the entry dictionary which will fail in case an
attribute value contains NON-ASCII chars.
Yup, I was.
python-ldap
I'm not sure what exactly you're asking for.
Especially "is not being interpreted as a string requiring base64 encoding" is
written without giving the right context.
So I'm just guessing that this might be the usual misunderstandings with use
of base64 in LDIF. Read more about when LDIF
Hi Michael,
Processing LDIF is one thing, doing LDAP operations another.
LDIF itself is meant to be ASCII-clean. But each attribute value can carry any
byte sequence (e.g. attribute 'jpegPhoto'). There's no further processing by
module LDIF - it simply returns byte sequences.
The access
check_nrpe -H 172.16.1.61 -t 60 get_service -a 172.16.0.155 DNS
For what its worth, you might want to check out check_wmi from
http://www.edcint.co.nz/checkwmiplus/
I always hated nrpe or installing agents when you really don't need them.
jlc
Can you give an example of the code you have?
I actually just overrode the regex used by the method in the LDIFWriter class
to be far more broad
about what it interprets as a safe string. I really need to properly handle
reading, manipulating and
writing non ascii data to solve this...
Shame
I have been doing the same thing and I tried to use java for testing the
credentials and they are correct. It works perfectly with java.
I really don't know what we're doing wrong.
You are accessing a protected operation of the LDAP server
and it (the server) rejects it due to invalid
I have some data I am working with that is not being interpreted as a string
requiring
base64 encoding when sent to the ldif module for output.
The base64 string parsed is ZGV0XDMzMTB3YmJccGc= and the raw string is
det\3310wbb\pg.
I'll admit my understanding of the handling requirements of non
I was doing some work with the ldap module and required a ci dict that was case
insensitive but case preserving. It turned out the cidict class they
implemented was
broken with respect to pop; it is inherited and not re-implemented to work.
Before I set about re-inventing the wheel, anyone know
Hello Joseph,
there is:
$HOSTOUTPUT$
$HOSTLONGOUTPUT$
$HOSTPERFDATA$
$SERVICEOUTPUT$
$SERVICELONGOUTPUT$
$SERVICEPERFDATA$
Markus,
I had been using those (well the LONG* type written as per the manual) and
after some
changes, notifications actually stopped.
Stopping Icinga and deleting
I have long plugin logging enabled and have added the perfdata macro to
a notification in the hopes to receive the full output. After the expected
perfdata,
additional lines contain messages that are pipe separated and it seems Icinga
is dropping everything after the first pipe for the additional
I noticed the pgsql schema files needed a version bump, and the web package did
not
provide a clear cache script.
New interface looks great...
jlc
--
can you please provide a little more details what you mean by that?
Core needed the dbversion changed from the shipped 1.8.0 to 1.9.0 as
it was complaining at start up otherwise of a version mismatch.
Thanks,
jlc
--
Running regedt32 with elevated credentials? Ensured no running services are
holding the key open?
Yeah, no luck...
How about a rename?
When I recreate the target so I can access it, if I rename the symlink, it
accepts it, renames the target, but reverts after a refresh, leaving the
target renamed?
I am remote and it's a VM for which I don't have console access; what a pita
this is turning out to be.
jlc
~
If you can query for the process, can you not query the network?
Lookup the gateway and ping it...
From: kz2...@googlemail.com
Sent: Thursday, April 25, 2013 6:11 AM
To: NT System Admin Issues
Subject: Startup processes
On a Windows system, is there a
scclient.exe
From: James Rankin
Sent: Wednesday, April 17, 2013 6:30 AM
To: NT System Admin Issues
Subject: SCCM 2012 quick question
Anyone know what the executable name for Software Center in SCCM 2012 is? I've
seen it suggested as scclient.exe and ccmsetup.exe,
but I can't figure out how to tell it one parameter *depends* on another.
Create your parameter set, then set the few that depend on each other to be
mandatory?
There are some neater things you can do with compiled code, otherwise you
sometimes have to do more exotic validation after the param
Anyone have any experience with jungledisk?
They offer a Linux client and have pretty cheap rates for large volumes of
data. We are
retiring a private colocated backup and hoping to migrate to a commercial
online solution.
Of the few that support Linux, this one looks pretty decent at first
I haven't looked at jungledisk, but I use SpiderOak for home use. You get
1-2GB free which is all I need
for critical stuff. The linux client works well and is provided in RPM format
w/ repo I believe. Their data
rates may not be competitive though.
I use them for personal as well.
When you say class vars, do you mean variables which hold classes?
You guessed correctly, and thanks for pointing out the ambiguity in my
references.
The one doesn't follow from the other. Writing decorators as classes is
fairly unusual. Normally, they will be regular functions.
I see,
I have a class which sets up some class vars, then several methods that are
passed in data
and do work referencing the class vars.
I want to decorate these methods, the decorator needs access to the class vars,
so I thought
about making the decorator its own class and allowing it to accept
So decorators will never take instance variables as arguments (nor should
they, since no instance
can possibly exist when they execute).
Right, I never thought of it that way, my only use of them has been trivial, in
non class scenarios so far.
Bear in mind, a decorator should take a
Each time I start my laptop, the sound is set to 153% or whatever max is. A tap
of the
slider in the panel which is at 100% resets it to 100% (and not past that) and
it sounds
fine.
Any idea how to stop this from resetting to max each startup?
Thanks!
jlc
--
users mailing list
No kidding, I use it at a few places as well. One of the guys on this list
actually used
to be a contributor at one point. Nice piece of ware, only shame is that it's
Perl.
If I never have to work with Perl again, it's too soon :)
jlc
From: Tim Evans
Sent: Monday,
Dude, you called him out? That was not a particularly pleasant ordeal...
From: Kurt Buff
Sent: Monday, March 11, 2013 9:45 PM
To: MS-Exchange Admin Issues
Subject: Re: what spam or edge server are you using?
Paging Mr. Espinola...
Mr. Espinola to the
Yeah,
Maia probably is a more enterprisable offering if you could suggest such a
thing for these apps.
Problem is, it's also Perl, blegh...
My current post allots me the privilege of hacking Python all day, I can't
tell you how that has
spoiled me:)
jlc
I have a switch statement composed using a dict:
switch = {
    'a': func_a,
    'b': func_b,
    'c': func_c
}
switch.get(var, default)()
As a result of multiple functions per choice, it migrated to:
switch = {
    'a': (func_a1, func_a2),
    'b': (func_b1, func_b2),
    'c': (func_c, )
}
switch = {
    'A': functools.partial(spam, a),
    'B': lambda b, c=c: ham(b, c),
    'C': eggs,
}
switch[letter](b)
That's cool, never even thought to use lambdas.
functools.partial isn't always applicable, but when it is, you should
prefer it over lambda since it will be very
Or could you do something like:
arguments_to_pass = [list of some sort]
switch.get(var, default)(*arguments_to_pass)
Stevens lambda suggestion was most appropriate. Within the switch, there
are functions called with none, or some variation of arguments. It was not
easy to pass them in after
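A runnable sketch of that dispatch table, with functools.partial pre-binding one argument and a lambda capturing another (the handler functions are made up):

```python
import functools

def spam(a, b):
    return 'spam:%s:%s' % (a, b)

def ham(b, c):
    return 'ham:%s:%s' % (b, c)

def eggs(b):
    return 'eggs:%s' % b

a, c = 'A-arg', 'C-arg'
switch = {
    'A': functools.partial(spam, a),   # pre-binds the first argument
    'B': lambda b, c=c: ham(b, c),     # captures c at definition time
    'C': eggs,
}

print(switch['A']('b-arg'))  # spam:A-arg:b-arg
print(switch['B']('b-arg'))  # ham:b-arg:C-arg
print(switch['C']('b-arg'))  # eggs:b-arg
```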
Sorry to reply out of thread order (don't have the original).
No need to sigh, ditch the bad posts on the net and run `powershell /?`
A ps1 file is not a command. You need to invoke the script.
-Original Message-
From: Michael Leone [mailto:oozerd...@gmail.commailto:oozerd...@gmail.com]
We'd need more details on the failure, but in general terms, Rawhide's
installer is having a lot of work done on it right now, it's not
surprising that it's busted, and individual bugs don't necessarily need
reporting at this time, as the code (particularly partitioning stuff) is
under
Anything using snapshot backups will have to be specifically updated to
support 2013.
DPM 2012 SP1 has been updated, my backup scripts have been updated :)
(although, honestly,
they just depend on support from DiskShadow.exe), but those are the only
things that I know
of that have