Re: [Ledger-smb-devel] RFC: Feedback on design of PGObject and what should be fixed in 2.0

2016-01-22 Thread Chris Bennett
On Thu, Jan 21, 2016 at 09:13:52AM +0100, Chris Travers wrote:
> Hi;
> 
> Have any other developers worked with the PGObject framework here?
> 
> Here are my thoughts regarding things which should probably be changed:
> 
> 1.  I would like to avoid requiring autocommit off for db handles and
> instead set it off for the series of calls.
> 
> 2.  I would like to provide better exception handling when a query goes
> wrong.  In some cases (we couldn't find a function) we should still
> probably die, but when the function errors, we could use better error
> handling rather than dying and expecting the calling application to handle it.
> 
> My thinking (for 1.6) is to correct the second by making exception
> handling configurable:
> 
> 1)  Allow exceptions from functions to pass back an exception object on
> failure.
> 2)  Allow the class or call to accept an error handler.
> 
> The default would probably still be the same.
> 
> During the 1.5-1.6 period I would like to get these packages working on
> both Perl5 (CPAN) and Perl6.
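[Editor's note: the configurable error handling proposed above might look
roughly like the following. This is purely a sketch -- none of these
package, function, or parameter names (My::DBException, call_procedure,
on_error) are PGObject's real API; they only illustrate returning an
exception object and delegating to a caller-supplied handler instead of
always dying.]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch only: none of this is PGObject's real API. It illustrates the
# proposal above -- return an exception object on failure and let the
# caller install an error handler instead of always dying.

package My::DBException;
sub new     { my ($class, %args) = @_; return bless {%args}, $class; }
sub message { return $_[0]->{message}; }

package main;

sub call_procedure {
    my (%args) = @_;
    my $result = eval { $args{run}->() };
    if ( my $err = $@ ) {
        my $exc = My::DBException->new( message => $err );
        # Delegate to a supplied handler; otherwise keep the current
        # default behaviour of dying.
        return $args{on_error}->($exc) if $args{on_error};
        die $exc->message;
    }
    return $result;
}

# A failing "database function", simulated with die:
my $out = call_procedure(
    run      => sub { die "function errored\n" },
    on_error => sub { "handled: " . $_[0]->message },
);
print $out;    # prints "handled: function errored"
```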

I am working on bringing the PGObject modules into OpenBSD.

While working on PGObject, I found this in the tests:

plan skip_all => 'Not set up for db tests' unless $ENV{DB_TESTING};
# Initial setup
my $dbh1 = DBI->connect('dbi:Pg:', 'postgres');

plan skip_all => 'Needs superuser connection for this test script'
unless $dbh1;

How should I handle this? Just set $ENV{DB_TESTING} to something?
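[Editor's note: from the skip_all line quoted above, the guard appears to
check only that $ENV{DB_TESTING} is truthy, so any non-empty, non-zero
value should enable the DB tests (e.g. `DB_TESTING=1 prove -l t/`). A
tiny illustration of the truthiness check:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# The skip_all guard quoted above only checks truthiness, so any
# non-empty, non-zero value enables the DB tests, e.g.:
#   DB_TESTING=1 prove -l t/
$ENV{DB_TESTING} = 1;
print $ENV{DB_TESTING} ? "db tests enabled\n" : "db tests skipped\n";
```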

Thanks,
Chris Bennett


--
Site24x7 APM Insight: Get Deep Visibility into Application Performance
APM + Mobile APM + RUM: Monitor 3 App instances at just $35/Month
Monitor end-to-end web transactions and take corrective actions now
Troubleshoot faster and improve end-user experience. Signup Now!
http://pubads.g.doubleclick.net/gampad/clk?id=267308311=/4140
___
Ledger-smb-devel mailing list
Ledger-smb-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/ledger-smb-devel


Re: [Ledger-smb-devel] Deciding on a default company setup for BDD tests

2016-01-22 Thread John Locke
Hi,

On 01/21/2016 05:54 PM, David G wrote:
> Hi All,
>
> I agree with Michael's comments, with a couple of extra thoughts inline
> below.
>
> On 19/01/16 09:13, Michael Richardson wrote:
>> Erik Huelsmann  wrote:
>>
>>
>>  > To start with the first and foremost question: do we want our tests
>>  > to run successfully on a copy of *any* company (as John stated he
>>  > would like, on IRC) or do we "design" the company setups we want to
>>  > run our tests on, from scratch, as I was aiming for? (Note that I
>>  > wasn't aiming for regenerating all setup data on each scenario or
>>  > feature; I'm just talking about making sure we *know* what's in the
>>  > database -- we'd still run on a copy of a database set up according
>>  > to this plan.)
>>
>> By *any* company, you mean, I could run it against (a copy of) my database?
>> I think that is not useful to focus on right now.
> I agree that it's probably not a good thing to focus on right now, but
> I think it's worth keeping in mind so the tests aren't written to
> exclude this as a possibility.
> In the long run, rather than being designed to run on a *live*
> database, I think the tests should, when run on a "non test" database,
> copy the DB to a new DB ${name}-bdd-test and run against the copy.
>
> I think this is a better long-term solution, as for many scenarios it
> may be impossible to properly remove entries from the database due to
> the audit features we have.
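[Editor's note: one way to get such a ${name}-bdd-test copy is
PostgreSQL's template-based cloning (`createdb -T`), which requires no
open connections to the source database. A sketch with an invented
database name -- the commands are printed rather than executed:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Sketch: clone the company database by using it as a template, run the
# BDD suite against the copy, then drop it. The database name is
# invented; the commands are only printed here, not executed.
my $name  = 'mycompany';
my $clone = "$name-bdd-test";
print "createdb -T $name \"$clone\"\n";    # needs no open connections to $name
print "dropdb \"$clone\"\n";               # throw the copy away afterwards
```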

Drupal has a tremendous amount of variation between sites, and lots of 
configuration that ends up in the database. This certainly colors my 
perspective -- and that's why I think it's important to be able to run 
BDD tests on a copy of any production database.

I'm not sure that's the same for LedgerSMB -- but it would certainly 
help track down issues if people customize their database in ways we 
don't expect.

What we're really talking about here is how to set up test data -- 
whether we ship a test database already containing data our tests rely 
upon, or have those dependencies created when running the tests.

I pretty strongly advocate the latter -- create the configurations/data 
we are testing for at the start of a test run, if they don't already 
exist. And make it safe to re-run a test on the same database.

I don't mind cleaning up test data if a test fails in development, but 
as long as tests are completing, they should be able to be run multiple 
times on the same db.
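[Editor's note: the "create it at the start of the run if it doesn't
already exist, and make re-runs safe" idea above can be sketched like
this. A hash stands in for the company database, and all names are
invented, purely to show the idempotency logic:]

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Idempotent setup sketch: a hash stands in for the company database so
# the "create only if missing" logic is visible. Names are invented.
my %customers;

sub ensure_test_customer {
    my ($name) = @_;
    # //= creates the record only on the first call; later calls are no-ops.
    return $customers{$name} //= { name => $name };
}

# Running the setup twice leaves exactly one record, so a second test
# run against the same db is safe.
ensure_test_customer('BDD Test Customer') for 1 .. 2;
print scalar( keys %customers ), " test customer(s)\n";
```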

>>  > Additionally, John and I were talking about supporting test
>>  > infrastructure and we agree that it would be tremendously helpful
>>  > to be able to see screenshots of failing scenarios and maybe to be
>>  > able to see screenshots of various points in non-failing tests too.
>>  > Since Travis storage isn't persistent, we were thinking that we'd
>>  > need to collect all screenshots as "build artifacts" and upload
>>  > them into an AWS S3 account for inspection.
>>
>> Email to ticket system?
>> Or S3...
> Michael makes a really good point here.
> Perhaps the easiest way of capturing the screenshots is not to use S3,
> but have a github project (eg: ledgersmb-bdd-results) that we can raise
> a ticket against for failing builds with associated screenshots attached.
> At the same time we could use "git annex" to store all screenshots for a
> test in a new git branch (or just simply a tag) in the
> ledgersmb-bdd-results project repository.
>
> Storing "good" results should probably only be done if a specific flag
> is passed in the PR commit message, while all screenshots (good and
> bad) should be stored if a single test fails.

However we store them, I suggest we at least store "good" results for 
each release, especially screenshots. This will allow comparing 
version-on-version, as well as give you a place to go back to see "what 
did this look like in version x?"

S3 storage seems to be built into many test runners like Travis; I'm 
guessing that's the fastest/easiest to get up and running.

The Matrix project uses Jenkins as a test runner, and the runs are 
public, so you can access artifacts just by visiting their Jenkins 
instance, with no login necessary. Can Travis do the same?

Cheers,
John Locke
http://www.freelock.com
