Re: [Ledger-smb-devel] Proposal to differentiate between program and processing (precondition) errors

2015-06-22 Thread Chris Travers
Just to add to this for sake of discussion

First, for history's sake, we inherited the undifferentiated error case
from sql-ledger.  We have actually started decoupling this in some ways but
that is very far from complete.  Discussion and thoughts on this matter are
very welcome.

On Sun, Jun 21, 2015 at 1:41 PM, Erik Huelsmann e...@efficito.com wrote:


 Currently, all errors generated by LedgerSMB are being raised through
 'die'. It's an effective way, but my feeling is that it bleeds through too
 many details of the internals. This is especially true when the error is
 really nothing more than a reported missing configuration.


Yeah, we could do a better job of handling this.


 E.g. we currently generate an error in case a field's value is
 missing. The generated error includes a stack dump and all!


It can include the stack trace.  I don't think we do by default, though with
Starman you can turn this on and off by using -MCarp::Always; if it is
happening without that, it is a bug.


 My proposal is that in this type of error case, we're checking the
 existence of the required fields and their values. If these don't exist, we
 should be reporting a nicely formatted error to the client -- most
 definitely without a stack trace.


Agreed.  We should also use the 'required' attribute in the HTML.


 Also, the HTTP status code for each error-with-stacktrace is currently
 500 -- Internal Server Error. When we *understand* the request, but can't
 process it, we should be generating a 422 (Unprocessable Entity) response
 or similar.


Agreed here too.



 Now for the question: what would be the best way to achieve this? (I think
 this applies equally to HTML page responses as it does to web service api
 calls, so this is a general question that needs a general solution which is
 applicable for a long time.)


 Ok, so currently we have a couple of mechanisms we could use for this.

1)  On the HTML side we really should set the required attribute where
appropriate.  As a bonus we can use CSS to highlight what is required.
2)  On the server/service side, we now have LedgerSMB/Request.pm which
allows us to specify required fields.  We could modify that and the global
error handling to allow HTTP statuses to be included.  Remember we can die
$hashref, and intercept/handle variables on that hashref (see the sketch
below).  I could probably have a patch in fairly quickly to route from the
required functions there to appropriate HTTP statuses.
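
A minimal sketch of that die-$hashref idea, assuming a global error handler
that inspects the hashref; the helper name and hash keys here are
hypothetical, not the actual LedgerSMB API:

    use strict;
    use warnings;

    # Hypothetical helper: die with a structured hashref when required
    # request fields are missing, so the global error handler can map it
    # to a 422 instead of a generic 500.
    sub require_fields {
        my ($request, @fields) = @_;
        my @missing = grep { !defined $request->{$_} || $request->{$_} eq '' }
                      @fields;
        die {
            status  => 422,                      # Unprocessable Entity
            message => 'Required fields missing',
            fields  => \@missing,
        } if @missing;
    }

    # In the global error handler (sketch):
    #   my $err = $@;
    #   return error_response($err) if ref $err eq 'HASH' and $err->{status};
    #   return internal_error($err);    # everything else stays a 500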

Best Wishes,
Chris Travers




 --
 Bye,

 Erik.

 http://efficito.com -- Hosted accounting and ERP.
 Robust and Flexible. No vendor lock-in.




-- 
Best Wishes,
Chris Travers

Efficito:  Hosted Accounting and ERP.  Robust and Flexible.  No vendor
lock-in.
http://www.efficito.com/learn_more


Re: [Ledger-smb-devel] Writing code for master: paradigms that work

2015-06-22 Thread Erik Huelsmann
Hi John,

On Mon, Jun 22, 2015 at 7:03 AM, John Locke m...@freelock.com wrote:

  Hi, Erik,

 Great stuff, love to hear you're tackling these issues head-on. I'm
 totally in favor of everything I've read from your 3 mails of today... and
 will add comments from my perspective...

 On 06/21/2015 01:04 PM, Erik Huelsmann wrote:

  What I learned while coding on current master, i.e. after the
 'frame-less' and 'PGObject' changes, is that, due to the 'frame-less'
 change:
  * The HTML HEAD tag is now completely useless, except for the pages
 returned by 'login.pl'
  * Any JS linked in the head tag as well as any JS included in SCRIPT tags
 is completely ignored
  * Any CSS linked in the head tag is completely ignored
  * Any JS or CSS directly included in the head tag is completely ignored

 It's great to organize CSS/JS to work per screen/page. However, it's not
 that useful to load it per page. For the most part, I think it's best to
 simply aggregate all the JS/CSS and send out as a single file to the
 browser, and allow that to get cached.

 With Dojo, we can define multiple layers in the build, so it might be good
 to split seldom-used scripts into their own layer, or exclude them from the
 build entirely -- as long as we're using AMD, Dojo should handle loading
 them if necessary if they're not in the build. But aggregating everything
 should work well for us.


 While I agree with you that aggregation would work well for us (and
there's very little to be aggregated at the moment, because we have very
little code in our own classes!), having aggregated JS files is somewhat of
a problem from where we stand today: we don't have a build system, so we
don't have anything to hook the Dojo aggregation build step into...
With the very little code that we have now, I'm actually expecting
that there's not much speed benefit in building our own aggregated Dojo
if we depend on the Dojo version people can load from the CDNs or as
available in the distributions. That eliminates most round trips already.

  What *does* work is:
  * Put all behaviours for all elements in Dojo widgets
  * Put nothing in the response HTML other than widget references
 (data-dojo-type) and widget configuration (data-dojo-props); everything
 else should be in the widget
  * In case widgets on a page need to communicate their state changes, etc,
 use 'dojo.topic' and the publish/subscribe paradigm -- this creates a
 loosely coupled UI and reduces chances for exceptions

 Pub/Sub I've found to be extremely powerful for a single-page app, and is
 exactly what I've used in the past. +1

   * Create widgets which - as much as possible - can be reused on other
 pages

 +1


 Ok. Thanks for confirming those items. I'll need to write that down
in a page about our coding standards on ledgersmb.org.

   * Make sure the HTML structure is valid; the Dojo AMD parser can be
 seriously confused by incorrectly nested elements
  * All CSS for any and all pages must be included in 'css/global.css'

 Break this out into the SASS discussion -- I'm partial to partials ...

Ok. I think we're on the same page: the end result must all be in CSS
that's loaded when the single-page-app's single page is loaded. Whether
this is done in partials which are then included into a larger single file
generated by a build step is open for discussion. (But it
suffers from the same problem as the one mentioned above: there's no build
at the moment, so there's nothing to hook this into...)

  * CSS can be made page-specific by wrapping the entire body of an HTML
 response's BODY tag in a DIV with a page-specific (and globally unique) ID
 attribute; that attribute can be used to attach the page's CSS to

 I don't see any benefit to not loading all CSS for the app one time, in a
 single file. Organize all the CSS into separate Sass files per page, but in
 the end they all end up in a single CSS file. And yes, a class attribute
 attached to the body tag can be a great way to provide per-page css, make
 it easy to target.


We're on the same page here. Your words say exactly the same thing I meant
to say.

[ snip ]

  Due to the PGObject change, it has become clear - if that wasn't already
 the case - that our current handling of the $request variable is far from
 ideal. Problems that we have:

   * $request is manipulated into a response by replacing values in the
 hash as processing proceeds (leading to a point that code can't be sure
 what values it's receiving!)
  * some parts of the code instantiate new hashes, expecting constructors
 to copy *everything* of $request which they are not providing themselves

  Even though we'll be dojo-ifying the entire code base (by which I mean that
 we will be transforming everything to dojo UI and service requests from
 there) -- and thus moving away from the template-based processing
 structure -- I think it's important to lay out how this should work, as the
 move to PGObject caused problems 

Re: [Ledger-smb-devel] Proposal to differentiate between program and processing (precondition) errors

2015-06-22 Thread Erik Huelsmann
On Mon, Jun 22, 2015 at 5:59 PM, Chris Travers chris.trav...@gmail.com
wrote:

 Just to add to this for sake of discussion

 First, for history's sake, we inherited the undifferentiated error case
 from sql-ledger.


Right. The fact that it's undifferentiated is the reason I'm bringing it up
now. [I'm not sure if it really matters at this point whether or not we got
it from SL -- it's something we should do something about - that I *want*
to do something about -- but the question is "What?"]


   We have actually started decoupling this in some ways but that is very
 far from complete.  Discussion and thoughts on this matter are very welcome.



 On Sun, Jun 21, 2015 at 1:41 PM, Erik Huelsmann e...@efficito.com wrote:


 Currently, all errors generated by LedgerSMB are being raised through
 'die'. It's an effective way, but my feeling is that it bleeds through too
 many details of the internals. This is especially true when the error is
 really nothing more than a reported missing configuration.


 Yeah, we could do a better job of handling this.


Taking this from the chat we had earlier: there are several ways to deal
with identifying incomplete configurations. One is what we do now: report
that a specific request doesn't make sense until some specific
configuration is available. The other option we talked about is to have
some kind of tool to verify the system setup: check for currencies, check
for latex, xetex, pdflatex, etc., and report all of that to the user,
including which system functionalities will be blocked by each missing
dependency.
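
As a rough sketch of what such a verification tool could check -- the
feature map and output format here are made up for illustration, and
File::Which is an assumed CPAN dependency:

    use strict;
    use warnings;
    use File::Which qw(which);   # CPAN; locates executables on PATH

    # Map user-visible functionality to the external binary it needs.
    my %binary_for = (
        'PDF output (pdflatex)' => 'pdflatex',
        'PDF output (xelatex)'  => 'xelatex',
        'LaTeX templates'       => 'latex',
    );

    for my $feature (sort keys %binary_for) {
        my $bin = $binary_for{$feature};
        printf "%-24s %s\n", $feature,
            which($bin) ? 'available' : "BLOCKED (missing '$bin')";
    }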


 E.g. we currently generate an error in case a field's value is
 missing. The generated error includes a stack dump and all!


 It can include the stack trace.  I don't think we do by default, though
 with Starman you can turn this on and off by using -MCarp::Always; if it
 is happening without that, it is a bug.


Ok. On my master-fixup branch, I got stack traces all the way. I should
search for the included Carp::Always module, apparently. However, even with
Carp::Always, we might want to write the errors to the log files with stack
traces, but return only a minimal error description to the
client, as John suggests. That way at least there's no leaking of system
configuration data.


 My proposal is that in this type of error case, we're checking the
 existence of the required fields and their values. If these don't exist, we
 should be reporting a nicely formatted error to the client -- most
 definitely without a stack trace.

 Agreed.  We should also use the 'required' attribute in the HTML.

True; however, as we're moving to a service-oriented model more and more,
reporting correct and parseable responses gets increasingly important.
Preventing the field from being forgotten is nicely handled by the
'required' HTML attribute. However, I think we can't really depend on *the*
client being our own client when we move to services.


 Also, the HTTP status code for each error-with-stacktrace is currently
 500 -- Internal Server Error. When we *understand* the request, but can't
 process it, we should be generating a 422 (Unprocessable Entity) response
 or similar.

 Agreed here too.

Ok, so the next step is to set up a structure to handle missing
parameters if and when they occur. The second step can then be to search the
code base for possible reports of missing parameters and replace them with
whatever construct we come up with.


 Now for the question: what would be the best way to achieve this? (I think
 this applies equally to HTML page responses as it does to web service api
 calls, so this is a general question that needs a general solution which is
 applicable for a long time.)


  Ok, so currently we have a couple mechanisms we could use for this.

 1)  On the HTML side we really should set the required attribute where
 appropriate.  As a bonus we can use css to highlight what is required.
 2)  On the server/service side, we now have LedgerSMB/Request.pm which
 allows us to specify required fields.  We could modify that and the global
 error handling, to allow http statuses to be included.  Remember we can die
 $hashref, and intercept/handle variables on that hashref.  I could probably
 have a patch in to route from the required functions there to have
 appropriate http statuses fairly quickly.


Ok. Good that you point me to LedgerSMB/Request.pm. I hadn't been aware of
its existence before. I have to think about how we can fit this into the
broader picture. If possible, I'd like to leverage the knowledge encoded in
the Moose class definitions. I'm not sure if and how that works out with
Request.pm.

Given that we probably want to handle missing fields from a lot of places --
both newer and older code -- I like the
idea of using die $hashref and processing the response from there.
Combining this with John's idea of having a separate Response object, it'd
probably be a nice construct to die $response to shortcut further 

Re: [Ledger-smb-devel] Proposal to use SASS for writing our CSS

2015-06-22 Thread Erik Huelsmann
Hi John,


  In my previous mail, I highlight how it's necessary to create per-page
  CSS by enclosing the page in a DIV with a globally unique 'id'
  attribute and scoping any CSS declarations within that.
 

 Use the Body tag. Don't need another div for this. (Ok, that is, unless
 this is an Ajax response...)


Yea. We can't use the body tag anymore, because we take the full-page
response, rip out everything *between* the <body> and </body> tags and use
that as the Ajax response. So, I guess you're right: it's an Ajax response.

[ snip more ]


 ... we organize all the mixins into their own files. Drupal's Omega4
 theme provides an organization of Sass into partials, each of which
 gets loaded in order, so later partials can rely on earlier-defined
 mixins, variables, etc. The main Sass file looks like this:

 @import "compass";
 @import "breakpoint";
 @import "singularitygs";
 @import "toolkit";

 @import "variables/**/*";
 @import "abstractions/**/*";
 @import "base/**/*";
 @import "components/**/*";

 ... So first it includes a lot of really useful mixins/Sass functions,
 and then it loads our partials, in a specific sequence.

 Variables contain all the stuff we might consider the "theme": colors,
 fonts, font sizes, etc.


[ snip ]

I like this setup. It's a nice clean separation of libraries/abstractions
out of the regular CSS (both global and page-specific).


  Unless there are objections, I'm going to create a new directory, next
  to the css directory, called 'scss' in  which we'll store SASS files.
  Next to that, I'll make sure to write documentation on how to compile
  sass files into css files. There are even tools to compile sass files
  to css files immediately when they are being saved ('compass watch'
  from http://compass-style.org/help/documentation/command-line/).
 

 Ah, that's where things get fun. We've had tons of issues with
 version/dependency hell with Sass libraries and Compass.


It sounds like you were using Compass for more than just its 'watch'
command for real-time compilation? I'm aware that Compass has much more to
offer than just real-time compilation. I value your feedback - which I
interpret as "go there when you know what you're doing" - but I wasn't
suggesting we should start using Compass as a CSS library; more as a CSS
compiler/developer tool. I'm not aware to what extent Compass and Dojo's
Claro theme conflict, so without experimenting, I don't know anything about
the validity of such a step.


 My best recommendation is to use RVM to manage ruby environments, and
 Bundler to install the necessary gems into the environment. Otherwise we
 get on a conveyor belt of a constantly moving Ruby Gem version target,
 and far too much upkeep...


Well, since we have more than enough to do as it is - and taking into
account that I'm happy with the Claro theme - I'm now reading "don't go
there" into your words here. So, for now, let's not go there.


-- 
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.


[Ledger-smb-devel] Request handling using routes, containers, (internal) services

2015-06-22 Thread Erik Huelsmann
Hi Chris, John,

In the other thread, John mentioned the structure that web apps in general
are growing toward, with these components that facilitate growth beyond a
certain size:

So... the patterns we're discussing I'm seeing put in widespread use. I'd
say those 3 things are crucial for us to define/decide how we're going to
implement (and perhaps find some Perl framework to assist if those we're
currently using are insufficient):

* Request routing
* Service container to make it easy to register and load services
* Response object (which needs to include the definition of how to return
exceptions).


So, I'm now thinking how we can apply these concepts to LedgerSMB with or
without the context of using Plack and/or Starman. I'm imagining that we
will have to handle a certain amount of it ourselves internally and that we
possibly could hand off some of it to Plack's middleware modules.

What I've been thinking about for some time now is that we might want to
virtualize our current module names 'aa.pl' etc. into routes. For aa.pl,
there really are physical files, but for other routes, we may not want to
handle the route processing the same way.

Is this something that we need to address now (as in: design it asap and
simply continue working on the code base, but use this as a paradigm for
all code that's being (re)written)?



-- 
Bye,

Erik.

http://efficito.com -- Hosted accounting and ERP.
Robust and Flexible. No vendor lock-in.


Re: [Ledger-smb-devel] Request handling using routes, containers, (internal) services

2015-06-22 Thread John Locke

On 06/22/2015 02:53 PM, Erik Huelsmann wrote:

Hi Chris, John,

In the other thread, John mentioned the structure that web apps are 
growing to in general to have these components to facilitate growth 
beyond a certain size:


So... the patterns we're discussing I'm seeing put in widespread
use. I'd say those 3 things are crucial for us to define/decide
how we're going to implement (and perhaps find some Perl framework
to assist if those we're currently using are insufficient):

* Request routing
* Service container to make it easy to register and load services
* Response object (which needs to include the definition of how to
return exceptions).


So, I'm now thinking how we can apply these concepts to LedgerSMB with 
or without the context of using Plack and/or Starman. I'm imagining 
that we will have to handle a certain amount of it ourselves 
internally and that we possibly could hand off some of it to Plack's 
middleware modules.


What I've been thinking about for some time now is that we might want
to virtualize our current module names 'aa.pl' etc. into routes. For
aa.pl, there really are physical files, but for other routes, we may
not want to handle the route processing the same way.


Is this something that we need to address now (as in: design it asap 
and simply continue working on the code base, but use this as a 
paradigm for all code that's being (re)written)?


I don't have an answer to that specific question...

But most webapps I've worked with have a single endpoint that receives 
all requests. The webserver reroutes any paths that don't exist on the 
disk to that endpoint.


Given that all of those little module name endpoints we currently have 
are copies of one of two different actual endpoints, it does seem like 
we should be able to pretty easily eliminate those... and just make the 
single remaining endpoint smart enough to know which way to handle the 
request...


I would assume Starman could handle something like that?

For Drupal, Apache first looks for a file on the disk... if it doesn't 
exist, it rewrites the path to /index.php?q= (and then adds the original 
path). This is done internally, not as a redirect, so the browser never 
knows...
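
In Plack terms, a single catch-all endpoint could look something like the
following minimal sketch; the route table and handler names are hypothetical,
purely to illustrate the dispatch idea:

    use strict;
    use warnings;

    # Stub handlers standing in for the two real entry points.
    sub old_code_entry { [200, ['Content-Type' => 'text/plain'], ['legacy handler']] }
    sub new_code_entry { [200, ['Content-Type' => 'text/plain'], ['new handler']] }

    # Route table: legacy physical-file names and virtual routes dispatch alike.
    my %handler_for = (
        '/aa.pl'     => \&old_code_entry,   # legacy script name kept as a route
        '/customers' => \&new_code_entry,   # virtual route, no file behind it
    );

    # The single PSGI endpoint all requests are rewritten to.
    my $app = sub {
        my $env     = shift;
        my $handler = $handler_for{ $env->{PATH_INFO} }
            or return [404, ['Content-Type' => 'text/plain'], ['Not found']];
        return $handler->($env);
    };

    $app;   # e.g. saved as app.psgi and run with: starman app.psgi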




Re: [Ledger-smb-devel] Proposal to use SASS for writing our CSS

2015-06-22 Thread John Locke

On 06/22/2015 02:10 PM, Erik Huelsmann wrote:



It sounds like you were using Compass for more than just its 'watch'
command for real-time compilation? I'm aware that Compass has much
more to offer than just real-time compilation. I value your feedback -
which I interpret as "go there when you know what you're doing" - but I
wasn't suggesting we should start using Compass as a CSS library; more
as a CSS compiler/developer tool. I'm not aware to what extent Compass
and Dojo's Claro theme conflict, so without experimenting, I don't
know anything about the validity of such a step.


I doubt there's much of a conflict here. We may end up with the
dojo-built CSS plus our compass-built CSS, but if we just define the
order in which we load them, there shouldn't be any conflicts to speak of...




My best recommendation is to use RVM to manage ruby environments, and
Bundler to install the necessary gems into the environment.
Otherwise we
get on a conveyor belt of a constantly moving Ruby Gem version target,
and far too much upkeep...


Well, since we have more than enough to do as it is - and taking into
account that I'm happy with the Claro theme - I'm now reading "don't
go there" into your words here. So, for now, let's not go there.




I do have this dialed in pretty well here. I think the main thing is, 
when you're dealing with Compass, you're getting into Ruby, at least at 
a config management level. For that reason, I suggest that we commit the 
generated CSS files, and then perhaps provide guidance on environment 
setup for those who wish to do CSS changes.


For those who don't want to delve into Sass, we can just provide a blank 
CSS file loaded after the sass-generated one, where people can make 
local changes that override everything we provide.


The Gem conflicts I've seen arise largely from gems that depend upon
particular versions of Ruby -- and if you don't install that Ruby, you need
an older version of the gem -- which then conflicts with a different Gem,
and so on... The toolkit gem in particular provides some handy things like
an @clearfix mixin, stuff like that to help you lay things out quickly.
SingularityGS is a grid system that makes it really easy to create
custom grids.


I don't know whether using these helps or hurts our overall work here -- 
I'm not a Sass or Ruby expert by any means, but I do all the environment 
setups here for other developers, and this area has been a particular 
pain. Once I got it all working, it's really, really nice to use, so I'm 
all on board with using Sass, it's a really large improvement. And I 
could certainly do the initial setup for the project, like I did with 
Dojo... but I will need to provide those environment guidelines to get 
you up and developing with it, if we do start including other gems...


Cheers,
John
http://www.freelock.com



Re: [Ledger-smb-devel] Proposal to differentiate between program and processing (precondition) errors

2015-06-22 Thread Chris Travers
On Mon, Jun 22, 2015 at 2:32 PM, Erik Huelsmann e...@efficito.com wrote:

 On Mon, Jun 22, 2015 at 5:59 PM, Chris Travers chris.trav...@gmail.com
 wrote:

 Just to add to this for sake of discussion

 First, for history's sake, we inherited the undifferentiated error case
 from sql-ledger.


 Right. The fact that it's undifferentiated is the reason I'm bringing it up
 now. [I'm not sure if it really matters at this point whether or not we got
 it from SL -- it's something we should do something about - that I *want* to
 do something about -- but the question is "What?"]

Agreed.  I have a fix I am testing which moves the error handling to a
dedicated error class.

It also makes the default error status 500, but anything coming
through LedgerSMB::Request::required() returns a 422.


   We have actually started decoupling this in some ways but that is very
 far from complete.  Discussion and thoughts on this matter are very welcome.



 On Sun, Jun 21, 2015 at 1:41 PM, Erik Huelsmann e...@efficito.com wrote:


 Currently, all errors generated by LedgerSMB are being raised through
 'die'. It's an effective way, but my feeling is that it bleeds through too
 many details of the internals. This is especially true when the error is
 really nothing more than a reported missing configuration.


 Yeah, we could do a better job of handling this.


 Taking this from the chat we had earlier: there are several ways to deal
 with identifying incomplete configurations. One is what we do now: report
 that a specific request doesn't make sense until some specific configuration
 is available. The other option we talked about is to have some kind of tool
 to verify the system setup: check for currencies, check for latex, xetex,
 pdflatex, etc., and report all of that to the user, including which system
 functionalities will be blocked by each missing dependency.


 E.g. we currently generate an error in case a field's value is
 missing. The generated error includes a stack dump and all!


 It can include the stack trace.  I don't think we do by default, though
 with Starman you can turn this on and off by using -MCarp::Always; if it
 is happening without that, it is a bug.


 Ok. On my master-fixup branch, I got stack traces all the way. I should
 search for the included Carp::Always module, apparently. However, even with
 Carp::Always, we might want to write the errors to the log files with stack
 traces, but return only a minimal error description to the
 client, as John suggests. That way at least there's no leaking of system
 configuration data.

Did they come up as stack traces with the red "Error" at the top?  Or as
something else?

I am wondering where the stack traces came from.  The problem with
Carp::Always is that it changes every die into a Carp confess, so
the stack traces get appended to the end of the text string.  This is
convenient for debugging, but not very convenient for the user.
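
To illustrate the behaviour being described (Carp::Always is a real CPAN
module; the sample error and function names are made up):

    use strict;
    use warnings;
    use Carp::Always;   # globally turns every die/warn into a stack-traced confess

    sub save    { die "required field 'amount' missing" }
    sub handler { save() }

    # Without Carp::Always this prints only the message and line number;
    # with it loaded, the full call chain (handler -> save) is appended
    # to the error string -- handy for debugging, noisy for end users.
    handler();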


 My proposal is that in this type of error case, we're checking the
 existence of the required fields and their values. If these don't exist, we
 should be reporting a nicely formatted error to the client -- most
 definitely without a stack trace.

 Agreed.  We should also use the 'required' attribute in the HTML.

 True; however, as we're moving to a service-oriented model more and more,
 reporting correct and parseable responses gets increasingly important.
 Preventing the field from being forgotten is nicely handled by the
 'required' HTML attribute. However, I think we can't really depend on *the*
 client being our own client when we move to services.

Of course.  Not disputing that.


 Also, the HTTP status code for each error-with-stacktrace is currently
 500 -- Internal Server Error. When we *understand* the request, but can't
 process it, we should be generating a 422 (Unprocessable Entity) response or
 similar.

 Agreed here too.

 Ok, so the next step is to set up a structure to handle missing
 parameters if and when they occur. The second step can then be to search the
 code base for possible reports of missing parameters and replace them with
 whatever construct we come up with.



 Now for the question: what would be the best way to achieve this? (I
 think this applies equally to HTML page responses as it does to web service
 api calls, so this is a general question that needs a general solution which
 is applicable for a long time.)


  Ok, so currently we have a couple mechanisms we could use for this.

 1)  On the HTML side we really should set the required attribute where
 appropriate.  As a bonus we can use css to highlight what is required.
 2)  On the server/service side, we now have LedgerSMB/Request.pm which
 allows us to specify required fields.  We could modify that and the global
 error handling, to allow http statuses to be included.  Remember we can die
 $hashref, and intercept/handle variables on that hashref.  I could probably
 have a patch in to route from the required functions 

Re: [Ledger-smb-devel] Proposal to differentiate between program and processing (precondition) errors

2015-06-22 Thread Chris Travers
Hi John;

Good thoughts!

On Sun, Jun 21, 2015 at 10:29 PM, John Locke m...@freelock.com wrote:
 One more time!


 On 06/21/2015 01:41 PM, Erik Huelsmann wrote:

 Currently, all errors generated by LedgerSMB are being raised through
 'die'. It's an effective way, but my feeling is that it bleeds through
 too many details of the internals. This is especially true when the
 error is really nothing more than a reported missing configuration.

 E.g. we currently generate an error in case a field's value is
 missing. The generated error includes a stack dump and all!

 This does seem absurd at this point -- why not just change those to
 throw exceptions? Then we can catch them at the appropriate point and
 bubble those up to the interface.

In Perl the difference between croak and die is saying "you gave me
bad data" versus "I messed up".

I think we would do well to at least allow a difference between a 4xx
and a 5xx error to provide something similar.  I have a patch to
commit tonight to do that.  Preprocessing errors should be 4xx errors;
processing and post-processing errors should be 5xx errors, I think.
This allows service consumers to determine how to handle the error based
on the status.

But this leads to some issues.  A required field that is missing from
a db entry should be a 5xx error, but a required field missing from the
request is a 4xx error.
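
A quick sketch of the croak/die distinction in this context (the function
and field names are hypothetical):

    use strict;
    use warnings;
    use Carp qw(croak);

    sub post_transaction {
        my ($dbh, %args) = @_;

        # The caller sent bad input: croak reports the error from the
        # caller's perspective -- "you gave me bad data" (maps to 4xx).
        croak "missing required field 'amount'"
            unless defined $args{amount};

        # An internal invariant is broken: die reports "I messed up"
        # from inside the failing code (maps to 5xx).
        die "lost database handle while posting"
            unless $dbh;

        # ... posting logic would go here ...
        return 1;
    }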


 My proposal is that in this type of error case, we're checking the
 existence of the required fields and their values. If these don't
 exist, we should be reporting a nicely formatted error to the client
 -- most definitely without a stack trace.

 Also, the HTTP status code for each error-with-stacktrace is currently
 500 -- Internal Server Error. When we *understand* the request, but
 can't process it, we should be generating a 422 (Unprocessable Entity)
 response or similar.


 Now for the question: what would be the best way to achieve this? (I
 think this applies equally to HTML page responses as it does to web
 service api calls, so this is a general question that needs a general
 solution which is applicable for a long time.)

 Exceptions. Without question.

Right now that's what we are using.  I don't see us moving away from
them.  They are convenient, if quick and dirty.  They are generally
accepted in the Perl community (we aren't programming in, say, Scala,
where referential transparency is a lot more important).  However, I
could see an argument for moving away from them for preprocessing
conditions.  See below in the discussion of response objects.

 We do need separate response objects that get instantiated right at the
 start of the request. These should have some sort of ending method --
 something like ->send() -- to actually output the response back to the
 requestor, in the appropriate format.

Ok, so in 1.5 we effectively only support Plack.  We pretend to do
this via CGI but the code is moving to a point where we could very
easily change this.  In other words, right now we "print", but that
actually goes to a wrapper, which returns the Plack response, which
then gets converted back to whatever gateway interface is actually
happening behind the scenes.  PSGI servers would just convert to HTTP
responses.  CGI environments would print out things in relevant ways,
etc.  So far so good.

The fundamental assumption of Plack is that a response is a function
of the request.  Eventually we should be getting to a point where
instead of printing a response, we simply return it.  Exceptions
should go to an exception handler that returns a response.

In this view, I think we'd do best to do something like:

my $validation_err = LedgerSMB::Request->requires('foo', 'bar', 'baz');
return $validation_err if $validation_err;

This would allow some additional things that exceptions wouldn't
allow.  For example, we could check *all* required fields and return a
full set of them rather than dying on the first error.  Now, I know this
could still be done with exceptions (i.e. delaying them and then dying
later) but I think in that case, this may be cleaner.  One could also
allow for multiple validation checks and errors to be merged and so
forth (which exceptions can't readily do since by definition they
break program flow unless we put them in eval {}).

If $validation_err implements a response interface, then we are in
good shape.  We could also handle CLI tools by making our exit codes
for non-2xx/3xx responses into the http status codes.
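
A minimal sketch of what "implements a response interface" could mean here;
the class and method names are invented for illustration, not existing
LedgerSMB code:

    use strict;
    use warnings;

    package LedgerSMB::ValidationError;   # hypothetical class

    sub new {
        my ($class, @missing) = @_;
        return bless { missing => \@missing }, $class;
    }

    # Render the accumulated validation failures as a PSGI response.
    sub to_psgi {
        my ($self) = @_;
        my $body = 'Missing required fields: '
                 . join(', ', @{ $self->{missing} });
        return [422, ['Content-Type' => 'text/plain'], [$body]];
    }

    package main;

    # Controller usage (sketch): collect *all* missing fields, then
    # return one response instead of dying on the first problem.
    # my $err = LedgerSMB::Request->requires(qw(foo bar baz));
    # return $err->to_psgi if $err;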


 Server side, we need a global exception handler to catch anything not
 caught during actual processing, which can attach the appropriate error
 data to the response header along with the appropriate error code. The
 response object should generally return an appropriate http response
 code, and can then pass the stack trace (if appropriate) and any other
 data necessary, on up to the browser.

Yep, we have that, or rather we have two: one for old code and one for new code.

 The response object should be smart enough to know how much 

Re: [Ledger-smb-devel] Request handling using routes, containers, (internal) services

2015-06-22 Thread Chris Travers
On Mon, Jun 22, 2015 at 2:53 PM, Erik Huelsmann ehu...@gmail.com wrote:
 Hi Chris, John,

 In the other thread, John mentioned the structure that web apps are growing
 to in general to have these components to facilitate growth beyond a certain
 size:

 So... the patterns we're discussing I'm seeing put in widespread use. I'd
 say those 3 things are crucial for us to define/decide how we're going to
 implement (and perhaps find some Perl framework to assist if those we're
 currently using are insufficient):

 * Request routing
 * Service container to make it easy to register and load services
 * Response object (which needs to include the definition of how to return
 exceptions).

Request routing is what Dancer would give us.  Same with a service container.

However, I don't think it really uses response objects as such.  In
fact I haven't seen Perl frameworks usually expose the response
objects to the developer as such.

With Catalyst and Mojolicious you usually get a controller object
which can be told to render a response.  Dancer makes these into
modules which get called where needed, so no controller object.
Speaking of which, I need to finish up the remaining issues in the
Dancer-based admin tool :-D

Here's what I have in progress there.  Feedback is welcome

https://github.com/ledgersmb/App-LedgerSMB-Admin-Web/blob/master/lib/App/LedgerSMB/Admin/Web/Database.pm

But that's the paradigm that we'd get.  For auth and http errors see

https://github.com/ledgersmb/App-LedgerSMB-Admin-Web/blob/master/lib/App/LedgerSMB/Admin/Web/Auth.pm
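
For readers unfamiliar with the paradigm, a Dancer2-style route looks
roughly like this minimal sketch; the path and handler body are invented,
and the admin tool linked above may structure things differently:

    use strict;
    use warnings;
    use Dancer2;

    # A route is declared as an HTTP verb + path pattern + handler sub.
    get '/database/:name' => sub {
        my $name = param('name');
        # ... look the database up and render a page or JSON here ...
        return "database: $name";
    };

    start;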



 So, I'm now thinking how we can apply these concepts to LedgerSMB with or
 without the context of using Plack and/or Starman. I'm imagining that we
 will have to handle a certain amount of it ourselves internally and that we
 possibly could hand off some of it to Plack's middleware modules.

In 1.5 we always use Plack.  CGI will even be wrapped in Plack.  But
that is good, since Dancer and Catalyst also wrap Plack.

 What I've been thinking about for some time now is that we might want to
 virtualize our current module names 'aa.pl' etc. into routes. For aa.pl,
 there really are physical files, but for other routes, we may not want to
 handle the route processing the same way.

 Is this something that we need to address now (as in: design it asap and
 simply continue working on the code base, but use this as a paradigm for all
 code that's being (re)written)?

The hard part is formalizing routes, i.e. design.  The easy part is the
programming.

As always there is the dependency question, but I don't think
that is too much of a problem, to be honest.

Best Wishes,
Chris Travers



 --
 Bye,

 Erik.

 http://efficito.com -- Hosted accounting and ERP.
 Robust and Flexible. No vendor lock-in.





Re: [Ledger-smb-devel] Proposal to differentiate between program and processing (precondition) errors

2015-06-22 Thread Chris Travers
I just realized LedgerSMB::Request has a require_from('Moose::Class')
API which may be helpful here.  I don't know how well this one is
tested, so I will probably spend some time over the next week or so
adding test cases.
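
Presumably such an API derives the required-field list from Moose attribute
metadata; something along these lines (a sketch of the idea with an invented
example class, not the actual implementation):

    use strict;
    use warnings;

    {
        package Invoice;           # hypothetical example class
        use Moose;
        has amount => (is => 'ro', required => 1);
        has memo   => (is => 'ro');
    }

    # List the attributes declared with "required => 1" -- exactly the
    # knowledge a require_from()-style API could reuse for validation.
    sub required_attributes {
        my ($class) = @_;
        return map  { $_->name }
               grep { $_->is_required }
                    $class->meta->get_all_attributes;
    }

    print join(', ', required_attributes('Invoice')), "\n";   # => amount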

On Mon, Jun 22, 2015 at 8:07 PM, Chris Travers ch...@metatrontech.com wrote:
 Hi John;

 Good thoughts!

 On Sun, Jun 21, 2015 at 10:29 PM, John Locke m...@freelock.com wrote:
 One more time!


 On 06/21/2015 01:41 PM, Erik Huelsmann wrote:

 Currently, all errors generated by LedgerSMB are being raised through
 'die'. It's an effective way, but my feeling is that it bleeds through
 too many details of the internals. This is especially true when the
 error is really nothing more than a reported missing configuration.

 E.g. we currently generate an error in case a field's value is
 missing. The generated error includes a stack dump and all!

 This does seem absurd at this point -- why not just change those to
 throw exceptions? Then we can catch them at the appropriate point and
 bubble those up to the interface.

 In Perl the difference between croak and die is saying "you gave me
 bad data" versus "I messed up".

 I think we would do well to at least allow a difference between a 4xx
 and a 5xx error to provide something similar.  I have a patch to
 commit tonight to do that.  Preprocessing errors should be 4xx errors,
 processing and post-processing errors should be 5xx errors, I think.
 This allows service consumers to determine how to handle the error based
 on the status.

 But this leads to some issues.  A required field that is missing from
 a db entry should be a 5xx error, but a required field missing from the
 request is a 4xx error.


 My proposal is that in this type of error case, we're checking the
 existence of the required fields and their values. If these don't
 exist, we should be reporting a nicely formatted error to the client
 -- most definitely without a stack trace.

 Also, the HTTP status code for each error-with-stacktrace is currently
 500 -- Internal Server Error. When we *understand* the request, but
 can't process it, we should be generating a 422 (Unprocessable Entity)
 response or similar.


 Now for the question: what would be the best way to achieve this? (I
 think this applies equally to HTML page responses as it does to web
 service api calls, so this is a general question that needs a general
 solution which is applicable for a long time.)

 Exceptions. Without question.

 Right now that's what we are using.  I don't see us moving away from
 them.  They are convenient, if quick and dirty.  They are generally
 accepted in the Perl community (we aren't programming in, say, Scala,
 where referential transparency is a lot more important).  However, I
 could see an argument for moving away from them for preprocessing
 conditions.  See below in the discussion of response objects.

 We do need separate response objects that get instantiated right at the
 start of the request. These should have some sort of ending method --
 something like ->send() -- to actually output the response back to the
 requestor, in the appropriate format.

 Ok, so in 1.5 we effectively only support Plack.  We pretend to do
 this via CGI but the code is moving to a point where we could very
 easily change this.  In other words, right now we "print", but that
 actually goes to a wrapper, which returns the Plack response, which
 then gets converted back to whatever gateway interface is actually
 happening behind the scenes.  PSGI servers would just convert to HTTP
 responses.  CGI environments would print out things in relevant ways,
 etc.  So far so good.

 The fundamental assumption of Plack is that a response is a function
 of the request.  Eventually we should be getting to a point where
 instead of printing a response, we simply return it.  Exceptions
 should go to an exception handler that returns a response.

 In this view, I think we'd do best to do something like:

 my $validation_err = LedgerSMB::Request->requires('foo', 'bar', 'baz');
 return $validation_err if $validation_err;

 This would allow some additional things that exceptions wouldn't
 allow.  For example, we could check *all* required fields and return a
 full set of them rather than dying on the first error.  Now, I know this
 could still be done with exceptions (i.e. delaying them and then dying
 later) but I think in that case, this may be cleaner.  One could also
 allow for multiple validation checks and errors to be merged and so
 forth (which exceptions can't readily do since by definition they
 break program flow unless we put them in eval {}).

 If $validation_err implements a response interface, then we are in
 good shape.  We could also handle CLI tools by making our exit codes
 for non-2xx/3xx responses into the http status codes.


 Server side, we need a global exception handler to catch anything not
 caught during actual processing, which can attach the appropriate error
 data to the response header along with the